Updates from: 01/19/2024 02:21:36
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Configure A Sample Node Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-a-sample-node-web-app.md
Title: Configure authentication in a sample Node.js web application by using Azure Active Directory B2C (Azure AD B2C)
-description: This article discusses how to use Azure Active Directory B2C to sign in and sign up users in a Node.js web application.
+description: This article discusses how to use Azure Active Directory B2C to sign in and sign up users in a Node.js web application.
Last updated: 01/11/2024
# Configure authentication in a sample Node.js web application by using Azure Active Directory B2C
-This sample article uses a sample Node.js application to show how to add Azure Active Directory B2C (Azure AD B2C) authentication to a Node.js web application. The sample application enables users to sign in, sign out, update profile and reset password using Azure AD B2C user flows. The sample web application uses [Microsoft Authentication Library (MSAL) for Node](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-node) to handle authentication and authorization.
+This sample article uses a sample Node.js application to show how to add Azure Active Directory B2C (Azure AD B2C) authentication to a Node.js web application. The sample application enables users to sign in, sign out, update profile and reset password using Azure AD B2C user flows. The sample web application uses [Microsoft Authentication Library (MSAL) for Node](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-node) to handle authentication and authorization.
In this article, you'll do the following tasks:

- Register a web application in the Azure portal.
- Create combined **Sign in and sign up**, **Profile editing**, and **Password reset** user flows for the app in the Azure portal.
- Update a sample Node application to use your own Azure AD B2C application and user flows.
- Test the sample application.
## Prerequisites
In this article, you'll do the following tasks:
## Step 1: Configure your user flows

## Step 2: Register a web application
-To enable your application to sign in with Azure AD B2C, register your app in the Azure AD B2C directory. The app registration establishes a trust relationship between the app and Azure AD B2C.
+To enable your application to sign in with Azure AD B2C, register your app in the Azure AD B2C directory. The app registration establishes a trust relationship between the app and Azure AD B2C.
-During app registration, you'll specify the *Redirect URI*. The redirect URI is the endpoint to which the user is redirected by Azure AD B2C after they authenticate with Azure AD B2C. The app registration process generates an *Application ID*, also known as the *client ID*, that uniquely identifies your app. After your app is registered, Azure AD B2C uses both the application ID and the redirect URI to create authentication requests.
+During app registration, you'll specify the *Redirect URI*. The redirect URI is the endpoint to which the user is redirected by Azure AD B2C after they authenticate with Azure AD B2C. The app registration process generates an *Application ID*, also known as the *client ID*, that uniquely identifies your app. After your app is registered, Azure AD B2C uses both the application ID and the redirect URI to create authentication requests.
-### Step 2.1: Register the app
+### Step 2.1: Register the app
To register the web app, follow these steps:
1. In the Azure portal, search for and select **Azure AD B2C**.
1. Select **App registrations**, and then select **New registration**.
1. Under **Name**, enter a name for the application (for example, *webapp1*).
-1. Under **Supported account types**, select **Accounts in any identity provider or organizational directory (for authenticating users with user flows)**.
+1. Under **Supported account types**, select **Accounts in any identity provider or organizational directory (for authenticating users with user flows)**.
1. Under **Redirect URI**, select **Web** and then, in the URL box, enter `http://localhost:3000/redirect`.
1. Under **Permissions**, select the **Grant admin consent to openid and offline_access permissions** checkbox.
1. Select **Register**.
The `views` folder contains Handlebars files for the application's user interfac
## Step 5: Configure the sample web app
-Open your web app in a code editor such as Visual Studio Code. Under the project root folder, open the *.env* file. This file contains information about your Azure AD B2C identity provider. Update the following app settings properties:
+Open your web app in a code editor such as Visual Studio Code. Under the project root folder, open the *.env* file. This file contains information about your Azure AD B2C identity provider. Update the following app settings properties:
|Key |Value |
|---|---|
Your final configuration file should look like the following sample:
You can now test the sample app. You need to start the Node server and access it through your browser on `http://localhost:3000`.

1. In your terminal, run the following code to start the Node.js web server:
-
+ ```bash
+ node index.js
+ ```
You can now test the sample app. You need to start the Node server and access it
### Test profile editing
-1. After you sign in, select **Edit profile**.
-1. Enter new changes as required, and then select **Continue**. You should see the page with sign-in status with the new changes, such as **Given Name**.
+1. After you sign in, select **Edit profile**.
+1. Enter new changes as required, and then select **Continue**. You should see the page with sign-in status with the new changes, such as **Given Name**.
### Test password reset
-1. After you sign in, select **Reset password**.
+1. After you sign in, select **Reset password**.
1. In the next dialog that appears, you can cancel the operation by selecting **Cancel**. Alternatively, enter your email address, and then select **Send verification code**. You'll receive a verification code to your email account. Copy the verification code from your email, enter it into the password reset dialog, and then select **Verify code**.
1. Select **Continue**.
1. Enter your new password, confirm it, and then select **Continue**. You should see the page that shows sign-in status.

### Test sign-out
-After you sign in, select **Sign out**. You should see the page that has a **Sign in** button.
+After you sign in, select **Sign out**. You should see the page that has a **Sign in** button.
## Next steps
active-directory-b2c Enable Authentication Python Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/enable-authentication-python-web-app.md
Title: Enable authentication in your own Python web application using Azure Active Directory B2C
-description: This article explains how to enable authentication in your own Python web application using Azure AD B2C
+description: This article explains how to enable authentication in your own Python web application using Azure AD B2C
Last updated: 01/11/2024
This article uses [Python 3.9+](https://www.python.org/) and [Flask 2.1](https:/
1. On your file system, create a project folder for this tutorial, such as `my-python-web-app`.
1. In your terminal, change directory into your Python app folder, such as `cd my-python-web-app`.
1. Run the following command to create and activate a virtual environment named `.venv` based on your current interpreter.
-
- # [Linux](#tab/linux)
-
+
+ # [Linux](#tab/linux)
+ ```bash
+ sudo apt-get install python3-venv # If needed
+ python3 -m venv .venv
This article uses [Python 3.9+](https://www.python.org/) and [Flask 2.1](https:/
```

# [macOS](#tab/macos)
-
+ ```zsh
+ python3 -m venv .venv
+ source .venv/bin/activate
+ ```
-
+ # [Windows](#tab/windows)
-
+ ```cmd
+ py -3 -m venv .venv
+ .venv\scripts\activate
This article uses [Python 3.9+](https://www.python.org/) and [Flask 2.1](https:/
```
python -m pip install --upgrade pip
- ```
+ ```
1. To enable the Flask debug features, switch the Flask environment to `development` mode. For more information about debugging Flask apps, check out the [Flask documentation](https://flask.palletsprojects.com/en/2.1.x/config/#environment-and-debug-features).
- # [Linux](#tab/linux)
-
+ # [Linux](#tab/linux)
+ ```bash
+ export FLASK_ENV=development
+ ```

# [macOS](#tab/macos)
-
+ ```zsh
+ export FLASK_ENV=development
+ ```
-
+ # [Windows](#tab/windows)
-
+ ```cmd
+ set FLASK_ENV=development
+ ```
msal>=1.7,<2
In your terminal, install the dependencies by running the following commands:
-# [Linux](#tab/linux)
+# [Linux](#tab/linux)
```bash
python -m pip install -r requirements.txt
py -m pip install -r requirements.txt
-## Step 3: Build app UI components
+## Step 3: Build app UI components
-Flask is a lightweight Python framework for web applications that provides the basics for URL routing and page rendering. It leverages Jinja2 as its template engine to render the content of your app. For more information, check out the [template designer documentation](https://jinja.palletsprojects.com/en/3.1.x/templates/). In this section, you add the required templates that provide the basic functionality of your web app.
+Flask is a lightweight Python framework for web applications that provides the basics for URL routing and page rendering. It leverages Jinja2 as its template engine to render the content of your app. For more information, check out the [template designer documentation](https://jinja.palletsprojects.com/en/3.1.x/templates/). In this section, you add the required templates that provide the basic functionality of your web app.
### Step 3.1 Create a base template
Add the following templates under the templates folder. These templates extend t
{% extends "base.html" %} {% block title %}Home{% endblock %} {% block content %}
-
+ <h1>Microsoft Identity Python Web App</h1>
-
+ {% if user %}
+ <h2>Claims:</h2>
+ <pre>{{ user |tojson(indent=4) }}</pre>
-
-
+ {% if config.get("ENDPOINT") %}
+ <li><a href='/graphcall'>Call Microsoft Graph API</a></li>
+ {% endif %}
-
+ {% if config.get("B2C_PROFILE_AUTHORITY") %} <li><a href='{{_build_auth_code_flow(authority=config["B2C_PROFILE_AUTHORITY"])["auth_uri"]}}'>Edit Profile</a></li> {% endif %}
-
+ <li><a href="/logout">Logout</a></li>
-
+ {% else %}
+ <li><a href='{{ auth_url }}'>Sign In</a></li>
+ {% endif %}
-
+ {% endblock %}
+ ```
Add the following templates under the templates folder. These templates extend t
```html
{% extends "base.html" %}
{% block title %}Error{% endblock %}
-
+ {% block metadata %}
+ {% if config.get("B2C_RESET_PASSWORD_AUTHORITY") and "AADB2C90118" in result.get("error_description") %}
+ <!-- See also https://learn.microsoft.com/azure/active-directory-b2c/active-directory-b2c-reference-policies#linking-user-flows -->
Add the following templates under the templates folder. These templates extend t
content='0;{{_build_auth_code_flow(authority=config["B2C_RESET_PASSWORD_AUTHORITY"])["auth_uri"]}}'>
{% endif %}
{% endblock %}
-
+ {% block content %}
+ <h2>Login Failure</h2>
+ <dl>
+   <dt>{{ result.get("error") }}</dt>
+   <dd>{{ result.get("error_description") }}</dd>
+ </dl>
-
+ <a href="{{ url_for('index') }}">Homepage</a> {% endblock %} ```
B2C_PROFILE_AUTHORITY = authority_template.format(
B2C_RESET_PASSWORD_AUTHORITY = authority_template.format(
    tenant=b2c_tenant, user_flow=resetpassword_user_flow)
-REDIRECT_PATH = "/getAToken"
+REDIRECT_PATH = "/getAToken"
# This is the API resource endpoint
ENDPOINT = '' # Application ID URI of app registration in Azure portal
if __name__ == "__main__":
In the Terminal, run the app by entering the following command, which runs the Flask development server. The development server looks for `app.py` by default. Then, open your browser and navigate to the web app URL: `http://localhost:5000`.
-# [Linux](#tab/linux)
+# [Linux](#tab/linux)
```bash
python -m flask run --host localhost --port 5000
active-directory-b2c Partner Eid Me https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-eid-me.md
To configure your tenant application as an eID-ME relying party in eID-Me, suppl
| Application privacy policy URL| Appears to the end user|

>[!NOTE]
->When the relying party is configurede, ID-Me provides a Client ID and a Client Secret. Note the Client ID and Client Secret to configure the identity provider (IdP) in Azure AD B2C.
+>When the relying party is configured, eID-Me provides a Client ID and a Client Secret. Note the Client ID and Client Secret to configure the identity provider (IdP) in Azure AD B2C.
## Add a new Identity provider in Azure AD B2C
ai-services Manage Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/manage-costs.md
Enabling capabilities such as sending data to Azure Monitor Logs and alerting in
You can pay for Azure OpenAI Service charges with your Azure Prepayment credit. However, you can't use Azure Prepayment credit to pay for charges for third party products and services including those products and services found in the Azure Marketplace.
+### HTTP Error response code and billing status in Azure OpenAI Service
+
+If the service performs processing, you may be charged even if the status code is not successful (not 200).
+For example, a 400 error due to a content filter or input limit, or a 408 error due to a timeout.
+
+If the service doesn't perform processing, you won't be charged.
+For example, a 401 error due to an authentication failure or a 429 error due to exceeding the rate limit.
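
Here's a minimal sketch of how a client might distinguish these two billing cases, using the Python `requests` library. The endpoint, key, deployment name, and API version shown are placeholder assumptions rather than values from this article; substitute your own resource values.

```python
import os
import requests

# Placeholder resource values (assumptions); replace with your own endpoint, key, and deployment name.
endpoint = os.environ["AZURE_OPENAI_ENDPOINT"]  # for example, https://my-resource.openai.azure.com
api_key = os.environ["AZURE_OPENAI_API_KEY"]
url = f"{endpoint}/openai/deployments/my-deployment/chat/completions?api-version=2023-05-15"

response = requests.post(
    url,
    headers={"api-key": api_key, "Content-Type": "application/json"},
    json={"messages": [{"role": "user", "content": "Hello"}]},
    timeout=60,
)

if response.status_code == 200:
    print("Success: prompt and completion tokens are billed.")
elif response.status_code in (400, 408):
    # The service may have performed processing (content filter, input limit, timeout),
    # so the request can still incur charges.
    print(f"Error {response.status_code}: the request may still be billed.")
elif response.status_code in (401, 429):
    # No processing was performed (authentication failure, rate limit), so no charge.
    print(f"Error {response.status_code}: the request isn't billed.")
```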
## Monitor costs

As you use Azure resources with Azure OpenAI, you incur costs. Azure resource usage unit costs vary by time intervals, such as seconds, minutes, hours, and days, or by unit usage, such as bytes and megabytes. As soon as Azure OpenAI use starts, costs can be incurred and you can see the costs in the [cost analysis](../../../cost-management/quick-acm-cost-analysis.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
ai-services Audio Processing Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/audio-processing-overview.md
Previously updated : 09/07/2022 Last updated : 1/18/2024
The Microsoft Audio Stack is a set of enhancements optimized for speech processi
* **Beamforming** - Localize the origin of sound and optimize the audio signal using multiple microphones.
* **Dereverberation** - Reduce the reflections of sound from surfaces in the environment.
* **Acoustic echo cancellation** - Suppress audio being played out of the device while microphone input is active.
-* **Automatic gain control** - Dynamically adjust the person's voice level to account for soft speakers, long distances, or non-calibrated microphones.
+* **Automatic gain control** - Dynamically adjust the person's voice level to account for soft speakers, long distances, or noncalibrated microphones.
[ ![Block diagram of Microsoft Audio Stack's enhancements.](media/audio-processing/mas-block-diagram.png) ](media/audio-processing/mas-block-diagram.png#lightbox)
-Different scenarios and use-cases can require different optimizations that influence the behavior of the audio processing stack. For example, in telecommunications scenarios such as telephone calls, it is acceptable to have minor distortions in the audio signal after processing has been applied. This is because humans can continue to understand the speech with high accuracy. However, it is unacceptable and disruptive for a person to hear their own voice in an echo. This contrasts with speech processing scenarios, where distorted audio can adversely impact a machine-learned speech recognition modelΓÇÖs accuracy, but it is acceptable to have minor levels of echo residual.
+Different scenarios and use-cases can require different optimizations that influence the behavior of the audio processing stack. For example, in telecommunications scenarios such as telephone calls, it's acceptable to have minor distortions in the audio signal after processing has been applied. This is because humans can continue to understand the speech with high accuracy. However, it's unacceptable and disruptive for a person to hear their own voice in an echo. This contrasts with speech processing scenarios, where distorted audio can adversely affect a machine-learned speech recognition model's accuracy, but it's acceptable to have minor levels of echo residual.
Processing is performed fully locally where the Speech SDK is being used. No audio data is streamed to Microsoft's cloud services for processing by the Microsoft Audio Stack. The only exception to this is for the Conversation Transcription Service, where raw audio is sent to Microsoft's cloud services for processing.
The Microsoft Audio Stack also powers a wide range of Microsoft products:
The Speech SDK integrates Microsoft Audio Stack (MAS), allowing any application or product to use its audio processing capabilities on input audio. Some of the key Microsoft Audio Stack features available via the Speech SDK include:

* **Real-time microphone input & file input** - Microsoft Audio Stack processing can be applied to real-time microphone input, streams, and file-based input.
-* **Selection of enhancements** - To allow for full control of your scenario, the SDK allows you to disable individual enhancements like dereverberation, noise suppression, automatic gain control, and acoustic echo cancellation. For example, if your scenario does not include rendering output audio that needs to be suppressed from the input audio, you have the option to disable acoustic echo cancellation.
+* **Selection of enhancements** - To allow for full control of your scenario, the SDK allows you to disable individual enhancements like dereverberation, noise suppression, automatic gain control, and acoustic echo cancellation. For example, if your scenario doesn't include rendering output audio that needs to be suppressed from the input audio, you have the option to disable acoustic echo cancellation.
* **Custom microphone geometries** - The SDK allows you to provide your own custom microphone geometry information, in addition to supporting preset geometries like linear two-mic, linear four-mic, and circular 7-mic arrays (see more information on supported preset geometries at [Microphone array recommendations](speech-sdk-microphone.md#microphone-geometry)).
* **Beamforming angles** - Specific beamforming angles can be provided to optimize audio input originating from a predetermined location, relative to the microphones.
ai-services Audio Processing Speech Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/audio-processing-speech-sdk.md
Previously updated : 09/16/2022 Last updated : 1/18/2024 ms.devlang: cpp
-# ms.devlang: cpp, csharp, java
SpeechRecognizer recognizer = new SpeechRecognizer(speechConfig, audioInput);
## Custom microphone geometry

This sample shows how to use MAS with a custom microphone geometry on a specified audio input device. In this example:
-* **Enhancement options** - The default enhancements will be applied on the input audio stream.
+* **Enhancement options** - The default enhancements are applied on the input audio stream.
* **Custom geometry** - A custom microphone geometry for a 7-microphone array is provided via the microphone coordinates. The units for coordinates are millimeters.
* **Audio input** - The audio input is from a file, where the audio within the file is expected from an audio input device corresponding to the custom geometry specified.
SpeechRecognizer recognizer = new SpeechRecognizer(speechConfig, audioInput);
This sample shows how to use MAS with a custom set of enhancements on the input audio. By default, all enhancements are enabled but there are options to disable dereverberation, noise suppression, automatic gain control, and echo cancellation individually by using `AudioProcessingOptions`. In this example:
-* **Enhancement options** - Echo cancellation and noise suppression will be disabled, while all other enhancements remain enabled.
+* **Enhancement options** - Echo cancellation and noise suppression are disabled, while all other enhancements remain enabled.
* **Audio input device** - The audio input device is the default microphone of the device.

### [C#](#tab/csharp)
SpeechRecognizer recognizer = new SpeechRecognizer(speechConfig, audioInput);
## Specify beamforming angles

This sample shows how to use MAS with a custom microphone geometry and beamforming angles on a specified audio input device. In this example:
-* **Enhancement options** - The default enhancements will be applied on the input audio stream.
+* **Enhancement options** - The default enhancements are applied on the input audio stream.
* **Custom geometry** - A custom microphone geometry for a 4-microphone array is provided by specifying the microphone coordinates. The units for coordinates are millimeters.
-* **Beamforming angles** - Beamforming angles are specified to optimize for audio originating in that range. The units for angles are degrees. In the sample code below, the start angle is set to 70 degrees and the end angle is set to 110 degrees.
+* **Beamforming angles** - Beamforming angles are specified to optimize for audio originating in that range. The units for angles are degrees.
* **Audio input** - The audio input is from a push stream, where the audio within the stream is expected from an audio input device corresponding to the custom geometry specified.
+In the following code example, the start angle is set to 70 degrees and the end angle is set to 110 degrees.
### [C#](#tab/csharp)

```csharp
ai-services Batch Synthesis Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/batch-synthesis-properties.md
Previously updated : 11/16/2022 Last updated : 1/18/2024
Batch synthesis properties are described in the following table.
|`description`|The description of the batch synthesis.<br/><br/>This property is optional.|
|`displayName`|The name of the batch synthesis. Choose a name that you can refer to later. The display name doesn't have to be unique.<br/><br/>This property is required.|
|`id`|The batch synthesis job ID.<br/><br/>This property is read-only.|
-|`inputs`|The plain text or SSML to be synthesized.<br/><br/>When the `textType` is set to `"PlainText"`, provide plain text as shown here: `"inputs": [{"text": "The rainbow has seven colors."}]`. When the `textType` is set to `"SSML"`, provide text in the [Speech Synthesis Markup Language (SSML)](speech-synthesis-markup.md) as shown here: `"inputs": [{"text": "<speak version='\''1.0'\'' xml:lang='\''en-US'\''><voice xml:lang='\''en-US'\'' xml:gender='\''Female'\'' name='\''en-US-JennyNeural'\''>The rainbow has seven colors.</voice></speak>"}]`.<br/><br/>Include up to 1,000 text objects if you want multiple audio output files. Here's example input text that should be synthesized to two audio output files: `"inputs": [{"text": "synthesize this to a file"},{"text": "synthesize this to another file"}]`. However, if the `properties.concatenateResult` property is set to `true`, then each synthesized result will be written to the same audio output file.<br/><br/>You don't need separate text inputs for new paragraphs. Within any of the (up to 1,000) text inputs, you can specify new paragraphs using the "\r\n" (newline) string. Here's example input text with two paragraphs that should be synthesized to the same audio output file: `"inputs": [{"text": "synthesize this to a file\r\nsynthesize this to another paragraph in the same file"}]`<br/><br/>There are no paragraph limits, but keep in mind that the maximum JSON payload size (including all text inputs and other properties) that will be accepted is 500 kilobytes.<br/><br/>This property is required when you create a new batch synthesis job. This property isn't included in the response when you get the synthesis job.|
+|`inputs`|The plain text or SSML to be synthesized.<br/><br/>When the `textType` is set to `"PlainText"`, provide plain text as shown here: `"inputs": [{"text": "The rainbow has seven colors."}]`. When the `textType` is set to `"SSML"`, provide text in the [Speech Synthesis Markup Language (SSML)](speech-synthesis-markup.md) as shown here: `"inputs": [{"text": "<speak version='\''1.0'\'' xml:lang='\''en-US'\''><voice xml:lang='\''en-US'\'' xml:gender='\''Female'\'' name='\''en-US-JennyNeural'\''>The rainbow has seven colors.</voice></speak>"}]`.<br/><br/>Include up to 1,000 text objects if you want multiple audio output files. Here's example input text that should be synthesized to two audio output files: `"inputs": [{"text": "synthesize this to a file"},{"text": "synthesize this to another file"}]`. However, if the `properties.concatenateResult` property is set to `true`, then each synthesized result is written to the same audio output file.<br/><br/>You don't need separate text inputs for new paragraphs. Within any of the (up to 1,000) text inputs, you can specify new paragraphs using the "\r\n" (newline) string. Here's example input text with two paragraphs that should be synthesized to the same audio output file: `"inputs": [{"text": "synthesize this to a file\r\nsynthesize this to another paragraph in the same file"}]`<br/><br/>There are no paragraph limits, but the maximum JSON payload size (including all text inputs and other properties) is 500 kilobytes.<br/><br/>This property is required when you create a new batch synthesis job. This property isn't included in the response when you get the synthesis job.|
|`lastActionDateTime`|The most recent date and time when the `status` property value changed.<br/><br/>This property is read-only.|
|`outputs.result`|The location of the batch synthesis result files with audio output and logs.<br/><br/>This property is read-only.|
|`properties`|A defined set of optional batch synthesis configuration settings.|
Batch synthesis properties are described in the following table.
|`properties.durationInTicks`|The audio output duration in ticks.<br/><br/>This property is read-only.|
|`properties.failedAudioCount`|The count of batch synthesis inputs to audio output failed.<br/><br/>This property is read-only.|
|`properties.outputFormat`|The audio output format.<br/><br/>For information about the accepted values, see [audio output formats](rest-text-to-speech.md#audio-outputs). The default output format is `riff-24khz-16bit-mono-pcm`.|
-|`properties.sentenceBoundaryEnabled`|Determines whether to generate sentence boundary data. This optional `bool` value ("true" or "false") is "false" by default.<br/><br/>If sentence boundary data is requested, then a corresponding `[nnnn].sentence.json` file will be included in the results data ZIP file.|
+|`properties.sentenceBoundaryEnabled`|Determines whether to generate sentence boundary data. This optional `bool` value ("true" or "false") is "false" by default.<br/><br/>If sentence boundary data is requested, then a corresponding `[nnnn].sentence.json` file is included in the results data ZIP file.|
|`properties.succeededAudioCount`|The count of batch synthesis inputs to audio output succeeded.<br/><br/>This property is read-only.|
|`properties.timeToLive`|A duration after the synthesis job is created, when the synthesis results will be automatically deleted. The value is an ISO 8601 encoded duration. For example, specify `PT12H` for 12 hours. This optional setting is `P31D` (31 days) by default. The maximum time to live is 31 days. The date and time of automatic deletion (for synthesis jobs with a status of "Succeeded" or "Failed") is equal to the `lastActionDateTime` + `timeToLive` properties.<br/><br/>Otherwise, you can call the [delete](./batch-synthesis.md#delete-batch-synthesis) synthesis method to remove the job sooner.|
-|`properties.wordBoundaryEnabled`|Determines whether to generate word boundary data. This optional `bool` value ("true" or "false") is "false" by default.<br/><br/>If word boundary data is requested, then a corresponding `[nnnn].word.json` file will be included in the results data ZIP file.|
+|`properties.wordBoundaryEnabled`|Determines whether to generate word boundary data. This optional `bool` value ("true" or "false") is "false" by default.<br/><br/>If word boundary data is requested, then a corresponding `[nnnn].word.json` file is included in the results data ZIP file.|
|`status`|The batch synthesis processing status.<br/><br/>The status should progress from "NotStarted" to "Running", and finally to either "Succeeded" or "Failed".<br/><br/>This property is read-only.|
|`synthesisConfig`|The configuration settings to use for batch synthesis of plain text.<br/><br/>This property is only applicable when `textType` is set to `"PlainText"`.|
|`synthesisConfig.pitch`|The pitch of the audio output.<br/><br/>For information about the accepted values, see the [adjust prosody](speech-synthesis-markup-voice.md#adjust-prosody) table in the Speech Synthesis Markup Language (SSML) documentation. Invalid values are ignored.<br/><br/>This optional property is only applicable when `textType` is set to `"PlainText"`.|
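
For orientation, here's a minimal, illustrative request body that uses the properties described in this table. The voice name, text, and property values are placeholders, and `synthesisConfig.voice` is assumed here even though it isn't shown in the excerpt above; check the full property reference before relying on the exact shape.

```json
{
    "displayName": "batch synthesis sample",
    "description": "An illustrative batch synthesis request",
    "textType": "PlainText",
    "inputs": [
        {"text": "The rainbow has seven colors."}
    ],
    "synthesisConfig": {
        "voice": "en-US-JennyNeural"
    },
    "properties": {
        "outputFormat": "riff-24khz-16bit-mono-pcm",
        "wordBoundaryEnabled": false,
        "sentenceBoundaryEnabled": false,
        "concatenateResult": false,
        "timeToLive": "P31D"
    }
}
```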
The latency for batch synthesis is as follows (approximately):
### Best practices
-When considering batch synthesis for your application, it's recommended to assess whether the latency meets your requirements. If the latency aligns with your desired performance, batch synthesis can be a suitable choice. However, if the latency does not meet your needs, you might consider using real-time API.
+When considering batch synthesis for your application, it's recommended to assess whether the latency meets your requirements. If the latency aligns with your desired performance, batch synthesis can be a suitable choice. However, if the latency doesn't meet your needs, you might consider using real-time API.
## HTTP status codes
Here are examples that can result in the 400 error:
- The number of requested text inputs exceeded the limit of 1,000.
- The `top` query parameter exceeded the limit of 100.
- You tried to use an invalid deployment ID or a custom voice that isn't successfully deployed. Make sure the Speech resource has access to the custom voice, and the custom voice is successfully deployed. You must also ensure that the mapping of `{"your-custom-voice-name": "your-deployment-ID"}` is correct in your batch synthesis request.
-- You tried to delete a batch synthesis job that hasn't started or hasn't completed running. You can only delete batch synthesis jobs that have a status of "Succeeded" or "Failed".
+- You tried to delete a batch synthesis job that isn't started or hasn't completed running. You can only delete batch synthesis jobs that have a status of "Succeeded" or "Failed".
- You tried to use an *F0* Speech resource, but the region only supports the *Standard* Speech resource pricing tier.
- You tried to create a new batch synthesis job that would exceed the limit of 200 active jobs. Each Speech resource can have up to 200 batch synthesis jobs that don't have a status of "Succeeded" or "Failed".
Here's an example request that results in an HTTP 400 error, because the `top` q
curl -v -X GET "https://YourSpeechRegion.customvoice.api.speech.microsoft.com/api/texttospeech/3.1-preview1/batchsynthesis?skip=0&top=200" -H "Ocp-Apim-Subscription-Key: YourSpeechKey" ```
-In this case, the response headers will include `HTTP/1.1 400 Bad Request`.
+In this case, the response headers include `HTTP/1.1 400 Bad Request`.
-The response body will resemble the following JSON example:
+The response body resembles the following JSON example:
```json
{
ai-services Batch Synthesis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/batch-synthesis.md
Previously updated : 11/16/2022 Last updated : 1/18/2024
The `values` property in the json response lists your synthesis requests. The li
## Delete batch synthesis
-Delete the batch synthesis job history after you retrieved the audio output results. The Speech service will keep each synthesis history for up to 31 days, or the duration of the request `timeToLive` property, whichever comes sooner. The date and time of automatic deletion (for synthesis jobs with a status of "Succeeded" or "Failed") is equal to the `lastActionDateTime` + `timeToLive` properties.
+Delete the batch synthesis job history after you retrieved the audio output results. The Speech service keeps batch synthesis history for up to 31 days, or the duration of the request `timeToLive` property, whichever comes sooner. The date and time of automatic deletion (for synthesis jobs with a status of "Succeeded" or "Failed") is equal to the `lastActionDateTime` + `timeToLive` properties.
To delete a batch synthesis job, make an HTTP DELETE request using the URI as shown in the following example. Replace `YourSynthesisId` with your batch synthesis ID, replace `YourSpeechKey` with your Speech resource key, and replace `YourSpeechRegion` with your Speech resource region.
The summary file contains the synthesis results for each text input. Here's an e
}
```
-If sentence boundary data was requested (`"sentenceBoundaryEnabled": true`), then a corresponding `[nnnn].sentence.json` file will be included in the results. Likewise, if word boundary data was requested (`"wordBoundaryEnabled": true`), then a corresponding `[nnnn].word.json` file will be included in the results.
+If sentence boundary data was requested (`"sentenceBoundaryEnabled": true`), then a corresponding `[nnnn].sentence.json` file is included in the results. Likewise, if word boundary data was requested (`"wordBoundaryEnabled": true`), then a corresponding `[nnnn].word.json` file is included in the results.
Here's an example word data file with both audio offset and duration in milliseconds:
The latency for batch synthesis is as follows (approximately):
### Best practices
-When considering batch synthesis for your application, it's recommended to assess whether the latency meets your requirements. If the latency aligns with your desired performance, batch synthesis can be a suitable choice. However, if the latency does not meet your needs, you might consider using real-time API.
+When considering batch synthesis for your application, it's recommended to assess whether the latency meets your requirements. If the latency aligns with your desired performance, batch synthesis can be a suitable choice. However, if the latency doesn't meet your needs, you might consider using real-time API.
## HTTP status codes
Here are examples that can result in the 400 error:
- The number of requested text inputs exceeded the limit of 1,000.
- The `top` query parameter exceeded the limit of 100.
- You tried to use an invalid deployment ID or a custom voice that isn't successfully deployed. Make sure the Speech resource has access to the custom voice, and the custom voice is successfully deployed. You must also ensure that the mapping of `{"your-custom-voice-name": "your-deployment-ID"}` is correct in your batch synthesis request.
-- You tried to delete a batch synthesis job that hasn't started or hasn't completed running. You can only delete batch synthesis jobs that have a status of "Succeeded" or "Failed".
+- You tried to delete a batch synthesis job that isn't started or hasn't completed running. You can only delete batch synthesis jobs that have a status of "Succeeded" or "Failed".
- You tried to use an *F0* Speech resource, but the region only supports the *Standard* Speech resource pricing tier.
- You tried to create a new batch synthesis job that would exceed the limit of 200 active jobs. Each Speech resource can have up to 200 batch synthesis jobs that don't have a status of "Succeeded" or "Failed".
Here's an example request that results in an HTTP 400 error, because the `top` q
curl -v -X GET "https://YourSpeechRegion.customvoice.api.speech.microsoft.com/api/texttospeech/3.1-preview1/batchsynthesis?skip=0&top=200" -H "Ocp-Apim-Subscription-Key: YourSpeechKey" ```
-In this case, the response headers will include `HTTP/1.1 400 Bad Request`.
+In this case, the response headers include `HTTP/1.1 400 Bad Request`.
-The response body will resemble the following JSON example:
+The response body resembles the following JSON example:
```json
{
ai-services Batch Transcription Audio Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/batch-transcription-audio-data.md
Previously updated : 10/21/2022 Last updated : 1/18/2024 ms.devlang: csharp
You can specify one or multiple audio files when creating a transcription. We re
## Supported audio formats and codecs
-The batch transcription API supports a number of different formats and codecs, such as:
+The batch transcription API supports many different formats and codecs, such as:
- WAV
- MP3
Follow these steps to create a storage account and upload wav files from your lo
Follow these steps to create a storage account and upload wav files from your local directory to a new container.
-1. Set the `RESOURCE_GROUP` environment variable to the name of an existing resource group where the new storage account will be created. Use the same subscription and resource group as your Speech resource.
+1. Set the `RESOURCE_GROUP` environment variable to the name of an existing resource group where the new storage account is created. Use the same subscription and resource group as your Speech resource.
```azurecli-interactive
set RESOURCE_GROUP=<your existing resource group name>
This section explains how to set up and limit access to your batch transcription
> [!NOTE]
> With the trusted Azure services security mechanism, you need to use [Azure Blob storage](../../storage/blobs/storage-blobs-overview.md) to store audio files. Usage of [Azure Files](../../storage/files/storage-files-introduction.md) is not supported.
-If you perform all actions in this section, your Storage account will be in the following configuration:
+If you perform all actions in this section, your Storage account is configured as follows:
- Access to all external network traffic is prohibited.
- Access to Storage account using Storage account key is prohibited.
- Access to Storage account blob storage using [shared access signatures (SAS)](../../storage/common/storage-sas-overview.md) is prohibited.
If you perform all actions in this section, your Storage account will be in the
So in effect your Storage account becomes completely "locked" and can't be used in any scenario apart from transcribing audio files that were already present by the time the new configuration was applied. You should consider this configuration as a model as far as the security of your audio data is concerned and customize it according to your needs.
-For example, you may allow traffic from selected public IP addresses and Azure Virtual networks. You may also set up access to your Storage account using [private endpoints](../../storage/common/storage-private-endpoints.md) (see as well [this tutorial](../../private-link/tutorial-private-endpoint-storage-portal.md)), re-enable access using Storage account key, allow access to other Azure trusted services, etc.
+For example, you can allow traffic from selected public IP addresses and Azure Virtual networks. You can also set up access to your Storage account using [private endpoints](../../storage/common/storage-private-endpoints.md) (see as well [this tutorial](../../private-link/tutorial-private-endpoint-storage-portal.md)), re-enable access using Storage account key, allow access to other Azure trusted services, etc.
> [!NOTE]
> Using [private endpoints for Speech](speech-services-private-link.md) isn't required to secure the storage account. You can use a private endpoint for batch transcription API requests, while separately accessing the source audio files from a secure storage account, or the other way around.
-By following the steps below, you'll severely restrict access to the storage account. Then you'll assign the minimum required permissions for Speech resource managed identity to access the Storage account.
+By following the steps below, you severely restrict access to the storage account. Then you assign the minimum required permissions for Speech resource managed identity to access the Storage account.
### Enable system assigned managed identity for the Speech resource
-Follow these steps to enable system assigned managed identity for the Speech resource that you will use for batch transcription.
+Follow these steps to enable system assigned managed identity for the Speech resource that you use for batch transcription.
1. Go to the [Azure portal](https://portal.azure.com/) and sign in to your Azure account.
1. Select the Speech resource.
Follow these steps to assign the **Storage Blob Data Reader** role to the manage
Now the Speech resource managed identity has access to the Storage account and can access the audio files for batch transcription.
-With system assigned managed identity, you'll use a plain Storage Account URL (no SAS or other additions) when you [create a batch transcription](batch-transcription-create.md) request. For example:
+With system assigned managed identity, you use a plain Storage Account URL (no SAS or other additions) when you [create a batch transcription](batch-transcription-create.md) request. For example:
```json
{
The previous command returns a SAS token. Append the SAS token to your container
-You will use the SAS URL when you [create a batch transcription](batch-transcription-create.md) request. For example:
+You use the SAS URL when you [create a batch transcription](batch-transcription-create.md) request. For example:
```json
{
ai-services Batch Transcription Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/batch-transcription-create.md
Previously updated : 11/7/2023 Last updated : 1/18/2024 zone_pivot_groups: speech-cli-rest
Here are some property options that you can use to configure a transcription whe
|`contentContainerUrl`| You can submit individual audio files, or a whole storage container.<br/><br/>You must specify the audio data location via either the `contentContainerUrl` or `contentUrls` property. For more information about Azure blob storage for batch transcription, see [Locate audio files for batch transcription](batch-transcription-audio-data.md).<br/><br/>This property won't be returned in the response.|
|`contentUrls`| You can submit individual audio files, or a whole storage container.<br/><br/>You must specify the audio data location via either the `contentContainerUrl` or `contentUrls` property. For more information, see [Locate audio files for batch transcription](batch-transcription-audio-data.md).<br/><br/>This property won't be returned in the response.|
|`destinationContainerUrl`|The result can be stored in an Azure container. If you don't specify a container, the Speech service stores the results in a container managed by Microsoft. When the transcription job is deleted, the transcription result data is also deleted. For more information such as the supported security scenarios, see [Destination container URL](#destination-container-url).|
-|`diarization`|Indicates that diarization analysis should be carried out on the input, which is expected to be a mono channel that contains multiple voices. Specify the minimum and maximum number of people who might be speaking. You must also set the `diarizationEnabled` property to `true`. The [transcription file](batch-transcription-get.md#transcription-result-file) will contain a `speaker` entry for each transcribed phrase.<br/><br/>You need to use this property when you expect three or more speakers. For two speakers setting `diarizationEnabled` property to `true` is enough. See an example of the property usage in [Transcriptions_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Create) operation description.<br/><br/>Diarization is the process of separating speakers in audio data. The batch pipeline can recognize and separate multiple speakers on mono channel recordings. The maximum number of speakers for diarization must be less than 36 and more or equal to the `minSpeakers` property (see [example](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Create)). The feature isn't available with stereo recordings.<br/><br/>When this property is selected, source audio length can't exceed 240 minutes per file.<br/><br/>**Note**: This property is only available with Speech to text REST API version 3.1 and later. If you set this property with any previous version (such as version 3.0) then it will be ignored and only 2 speakers will be identified.|
+|`diarization`|Indicates that diarization analysis should be carried out on the input, which is expected to be a mono channel that contains multiple voices. Specify the minimum and maximum number of people who might be speaking. You must also set the `diarizationEnabled` property to `true`. The [transcription file](batch-transcription-get.md#transcription-result-file) contains a `speaker` entry for each transcribed phrase.<br/><br/>You need to use this property when you expect three or more speakers. For two speakers setting `diarizationEnabled` property to `true` is enough. See an example of the property usage in [Transcriptions_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Create) operation description.<br/><br/>Diarization is the process of separating speakers in audio data. The batch pipeline can recognize and separate multiple speakers on mono channel recordings. The maximum number of speakers for diarization must be less than 36 and more or equal to the `minSpeakers` property (see [example](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Create)). The feature isn't available with stereo recordings.<br/><br/>When this property is selected, source audio length can't exceed 240 minutes per file.<br/><br/>**Note**: This property is only available with Speech to text REST API version 3.1 and later. If you set this property with any previous version (such as version 3.0), then it's ignored and only 2 speakers are identified.|
|`diarizationEnabled`|Specifies that diarization analysis should be carried out on the input, which is expected to be a mono channel that contains two voices. The default value is `false`.<br/><br/>For three or more voices you also need to use property `diarization` (only with Speech to text REST API version 3.1 and later).<br/><br/>When this property is selected, source audio length can't exceed 240 minutes per file.|
|`displayName`|The name of the batch transcription. Choose a name that you can refer to later. The display name doesn't have to be unique.<br/><br/>This property is required.|
|`displayFormWordLevelTimestampsEnabled`|Specifies whether to include word-level timestamps on the display form of the transcription results. The results are returned in the displayWords property of the transcription file. The default value is `false`.<br/><br/>**Note**: This property is only available with Speech to text REST API version 3.1 and later.|
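
As a rough sketch, a create-transcription request body built from the properties above might look like the following. The URLs and display name are placeholders, the `locale` property isn't shown in the excerpt above and is included here as an assumption, and the nesting of the diarization settings under `properties` is an assumption based on the Transcriptions_Create reference linked in the table; verify against that reference before use.

```json
{
    "displayName": "batch transcription sample",
    "locale": "en-US",
    "contentUrls": [
        "https://example.blob.core.windows.net/audio/audio1.wav",
        "https://example.blob.core.windows.net/audio/audio2.wav"
    ],
    "properties": {
        "diarizationEnabled": true,
        "diarization": {
            "speakers": {
                "minCount": 1,
                "maxCount": 5
            }
        },
        "displayFormWordLevelTimestampsEnabled": false
    }
}
```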
To use a Custom Speech model for batch transcription, you need the model's URI.
> [!TIP]
> A [hosted deployment endpoint](how-to-custom-speech-deploy-model.md) isn't required to use custom speech with the batch transcription service. You can conserve resources if the [custom speech model](how-to-custom-speech-train-model.md) is only used for batch transcription.
-Batch transcription requests for expired models will fail with a 4xx error. You'll want to set the `model` property to a base model or custom model that hasn't yet expired. Otherwise don't include the `model` property to always use the latest base model. For more information, see [Choose a model](how-to-custom-speech-create-project.md#choose-your-model) and [Custom Speech model lifecycle](how-to-custom-speech-model-and-endpoint-lifecycle.md).
+Batch transcription requests for expired models fail with a 4xx error. Set the `model` property to a base model or custom model that hasn't yet expired. Otherwise don't include the `model` property to always use the latest base model. For more information, see [Choose a model](how-to-custom-speech-create-project.md#choose-your-model) and [Custom Speech model lifecycle](how-to-custom-speech-model-and-endpoint-lifecycle.md).
## Using Whisper models
spx csr list --base --api-version v3.2-preview.1
```

::: zone-end
-The `displayName` property of a Whisper model will contain "Whisper Preview" as shown in this example. Whisper is a display-only model, so the lexical field isn't populated in the transcription.
+The `displayName` property of a Whisper model contains "Whisper Preview" as shown in this example. Whisper is a display-only model, so the lexical field isn't populated in the transcription.
```json
{
ai-services Batch Transcription Get https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/batch-transcription-get.md
Previously updated : 11/29/2022 Last updated : 1/18/2024 zone_pivot_groups: speech-cli-rest
You should receive a response body in the following format:
}
```
-The `status` property indicates the current status of the transcriptions. The transcriptions and transcription report will be available when the transcription status is `Succeeded`.
+The `status` property indicates the current status of the transcriptions. The transcriptions and transcription report are available when the transcription status is `Succeeded`.
::: zone-end
You should receive a response body in the following format:
}
```
-The `status` property indicates the current status of the transcriptions. The transcriptions and transcription report will be available when the transcription status is `Succeeded`.
+The `status` property indicates the current status of the transcriptions. The transcriptions and transcription report are available when the transcription status is `Succeeded`.
For Speech CLI help with transcriptions, run the following command:
Depending in part on the request parameters set when you created the transcripti
|`combinedRecognizedPhrases`|The concatenated results of all phrases for the channel.|
|`confidence`|The confidence value for the recognition.|
|`display`|The display form of the recognized text. Added punctuation and capitalization are included.|
-|`displayWords`|The timestamps for each word of the transcription. The `displayFormWordLevelTimestampsEnabled` request property must be set to `true`, otherwise this property is not present.<br/><br/>**Note**: This property is only available with Speech to text REST API version 3.1.|
+|`displayWords`|The timestamps for each word of the transcription. The `displayFormWordLevelTimestampsEnabled` request property must be set to `true`, otherwise this property isn't present.<br/><br/>**Note**: This property is only available with Speech to text REST API version 3.1.|
|`duration`|The audio duration. The value is an ISO 8601 encoded duration.|
-|`durationInTicks`|The audio duration in ticks (1 tick is 100 nanoseconds).|
+|`durationInTicks`|The audio duration in ticks (one tick is 100 nanoseconds).|
|`itn`|The inverse text normalized (ITN) form of the recognized text. Abbreviations such as "Doctor Smith" to "Dr Smith", phone numbers, and other transformations are applied.|
|`lexical`|The actual words recognized.|
-|`locale`|The locale identified from the input the audio. The `languageIdentification` request property must be set, otherwise this property is not present.<br/><br/>**Note**: This property is only available with Speech to text REST API version 3.1.|
+|`locale`|The locale identified from the input the audio. The `languageIdentification` request property must be set, otherwise this property isn't present.<br/><br/>**Note**: This property is only available with Speech to text REST API version 3.1.|
|`maskedITN`|The ITN form with profanity masking applied.|
|`nBest`|A list of possible transcriptions for the current phrase with confidences.|
|`offset`|The offset in audio of this phrase. The value is an ISO 8601 encoded duration.|
-|`offsetInTicks`|The offset in audio of this phrase in ticks (1 tick is 100 nanoseconds).|
+|`offsetInTicks`|The offset in audio of this phrase in ticks (one tick is 100 nanoseconds).|
|`recognitionStatus`|The recognition state. For example: "Success" or "Failure".|
|`recognizedPhrases`|The list of results for each phrase.|
|`source`|The URL that was provided as the input audio source. The source corresponds to the `contentUrls` or `contentContainerUrl` request property. The `source` property is the only way to confirm the audio input for a transcription.|
-|`speaker`|The identified speaker. The `diarization` and `diarizationEnabled` request properties must be set, otherwise this property is not present.|
+|`speaker`|The identified speaker. The `diarization` and `diarizationEnabled` request properties must be set, otherwise this property isn't present.|
|`timestamp`|The creation date and time of the transcription. The value is an ISO 8601 encoded timestamp.|
-|`words`|A list of results with lexical text for each word of the phrase. The `wordLevelTimestampsEnabled` request property must be set to `true`, otherwise this property is not present.|
+|`words`|A list of results with lexical text for each word of the phrase. The `wordLevelTimestampsEnabled` request property must be set to `true`, otherwise this property isn't present.|
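
To show how these properties fit together, here's an abridged, invented example of a transcription result file. Values are illustrative, several properties are omitted, and `speaker` appears only when the diarization properties are set.

```json
{
    "source": "https://example.blob.core.windows.net/audio/audio1.wav",
    "timestamp": "2024-01-18T10:00:00Z",
    "duration": "PT5.2S",
    "durationInTicks": 52000000,
    "combinedRecognizedPhrases": [
        {
            "lexical": "hello world",
            "itn": "hello world",
            "maskedITN": "hello world",
            "display": "Hello world."
        }
    ],
    "recognizedPhrases": [
        {
            "recognitionStatus": "Success",
            "speaker": 1,
            "offset": "PT0.5S",
            "offsetInTicks": 5000000,
            "duration": "PT1.2S",
            "durationInTicks": 12000000,
            "nBest": [
                {
                    "confidence": 0.95,
                    "lexical": "hello world",
                    "itn": "hello world",
                    "maskedITN": "hello world",
                    "display": "Hello world."
                }
            ]
        }
    ]
}
```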
## Next steps
ai-services Batch Transcription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/batch-transcription.md
Previously updated : 09/15/2023 Last updated : 1/18/2024 ms.devlang: csharp
ai-services Bring Your Own Storage Speech Resource Speech To Text https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/bring-your-own-storage-speech-resource-speech-to-text.md
Previously updated : 03/28/2023 Last updated : 1/18/2024
-# Use the Bring your own storage (BYOS) Speech resource for Speech to text
+# Use the Bring your own storage (BYOS) Speech resource for speech to text
-Bring your own storage (BYOS) can be used in the following Speech to text scenarios:
+Bring your own storage (BYOS) can be used in the following speech to text scenarios:
- Batch transcription
-- Real-time transcription with audio and transcription result logging enabled
-- Custom Speech
+- Real-time transcription with audio and transcription results logging enabled
+- Custom speech
-One Speech resource to Storage account pairing can be used for all scenarios simultaneously.
+One Speech resource to storage account pairing can be used for all scenarios simultaneously.
-This article explains in depth how to use a BYOS-enabled Speech resource in all Speech to text scenarios. The article assumes that you have [a fully configured BYOS-enabled Speech resource and associated Storage account](bring-your-own-storage-speech-resource.md).
+This article explains in depth how to use a BYOS-enabled Speech resource in all speech to text scenarios. The article assumes that you have [a fully configured BYOS-enabled Speech resource and associated Storage account](bring-your-own-storage-speech-resource.md).
## Data storage
-When using BYOS, the Speech service doesn't keep any customer artifacts after the data processing (transcription, model training, model testing) is complete. However, some metadata that isn't derived from the user content is stored within Speech service premises. For example, in Custom Speech scenario, the Service keeps certain information about the custom endpoints, like which models they use.
+When using BYOS, the Speech service doesn't keep any customer artifacts after the data processing (transcription, model training, model testing) is complete. However, some metadata that isn't derived from the user content is stored within Speech service premises. For example, in the Custom speech scenario, the Service keeps certain information about the custom endpoints, like which models they use.
BYOS-associated Storage account stores the following data:
**Real-time transcription with audio and transcription result logging enabled**
- Audio and transcription result logs
-**Custom Speech**
+**Custom speech**
- Source files of datasets for model training and testing (optional) - All data and metadata related to Custom models hosted by the BYOS-enabled Speech resource (including copies of datasets for model training and testing)
URL of this format ensures that only Microsoft Entra identities (users, service
> [!WARNING] > If the `sasValidityInSeconds` parameter is omitted in a [Get Base Model Logs](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Endpoints_ListBaseModelLogs) request or similar ones, then a [User delegation SAS](../../storage/common/storage-sas-overview.md) with a validity of 30 days will be generated for each data file URL returned. This SAS is signed by the system assigned managed identity of your BYOS-enabled Speech resource. Because of this, the SAS allows access to the data, even if storage account key access is disabled. See details [here](../../storage/common/shared-key-authorization-prevent.md#understand-how-disallowing-shared-key-affects-sas-tokens).
-## Custom Speech
+## Custom speech
-With Custom Speech, you can evaluate and improve the accuracy of speech recognition for your applications and products. A custom speech model can be used for real-time speech to text, speech translation, and batch transcription. For more information, see the [Custom Speech overview](custom-speech-overview.md).
+With Custom speech, you can evaluate and improve the accuracy of speech recognition for your applications and products. A custom speech model can be used for real-time speech to text, speech translation, and batch transcription. For more information, see the [Custom speech overview](custom-speech-overview.md).
-There's nothing specific about how you use Custom Speech with BYOS-enabled Speech resource. The only difference is where all custom model related data, which Speech service collects and produces for you, is stored. The data is stored in the following Blob containers of BYOS-associated Storage account:
+There's nothing specific about how you use Custom speech with a BYOS-enabled Speech resource. The only difference is where all custom model-related data, which the Speech service collects and produces for you, is stored. The data is stored in the following Blob containers of the BYOS-associated Storage account:
-- `customspeech-models` - Location of Custom Speech models-- `customspeech-artifacts` - Location of all other Custom Speech related data
+- `customspeech-models` - Location of Custom speech models
+- `customspeech-artifacts` - Location of all other Custom speech related data
-Note that the Blob container structure is provided for your information only and subject to change without a notice.
+The Blob container structure is provided for your information only and is subject to change without notice.
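If you want to see what the service has placed in these containers, you can list their contents (read-only) with the Azure CLI. This is a minimal sketch; the storage account name is a placeholder for your BYOS-associated account.

```azurecli
# Read-only inspection of a Custom speech container in the BYOS-associated
# Storage account. "mybyosstorageaccount" is a hypothetical name; replace it with yours.
az storage blob list \
    --account-name mybyosstorageaccount \
    --container-name customspeech-models \
    --auth-mode login \
    --output table
```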
> [!CAUTION]
-> Speech service relies on pre-defined Blob container paths and file names for Custom Speech module to correctly function. Don't move, rename or in any way alter the contents of `customspeech-models` container and Custom Speech related folders of `customspeech-artifacts` container.
+> Speech service relies on pre-defined Blob container paths and file names for the Custom speech module to function correctly. Don't move, rename, or otherwise alter the contents of the `customspeech-models` container or the Custom speech related folders of the `customspeech-artifacts` container.
> > Failure to do so will very likely result in hard-to-debug errors and may require retraining of your custom model. >
-> Use standard tools, like REST API and Speech Studio to interact with the Custom Speech related data. See details in [Custom Speech section](custom-speech-overview.md).
+> Use standard tools, like the REST API and Speech Studio, to interact with the Custom speech related data. For details, see the [Custom speech overview](custom-speech-overview.md).
-### Use of REST API with Custom Speech
+### Use of REST API with Custom speech
[Speech to text REST API](rest-speech-to-text.md) fully supports BYOS-enabled Speech resources. However, because the data is now stored within the BYOS-enabled Storage account, requests like [Get Dataset Files](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_ListFiles) interact with the BYOS-associated Storage account Blob storage, instead of Speech service internal resources. It allows using the same REST API based code for both "regular" and BYOS-enabled Speech resources.
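For example, a request like the following sketch (assuming the v3.1 endpoint, a hypothetical region, key, and dataset ID) returns file entries whose URLs point to the BYOS-associated Storage account rather than to Speech service internal storage:

```bash
# Hypothetical region, key, and dataset ID; for a BYOS-enabled resource the
# returned file URLs point to blobs in the BYOS-associated Storage account.
curl -s -X GET \
  "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/datasets/<dataset-id>/files" \
  -H "Ocp-Apim-Subscription-Key: <your-speech-resource-key>"
```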
URL of this format ensures that only Microsoft Entra identities (users, service
- [Set up the Bring your own storage (BYOS) Speech resource](bring-your-own-storage-speech-resource.md) - [Batch transcription overview](batch-transcription.md) - [How to log audio and transcriptions for speech recognition](logging-audio-transcription.md)-- [Custom Speech overview](custom-speech-overview.md)
+- [Custom speech overview](custom-speech-overview.md)
ai-services Bring Your Own Storage Speech Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/bring-your-own-storage-speech-resource.md
Previously updated : 03/28/2023 Last updated : 1/18/2024
Azure portal option has tighter requirements:
If any of these extra requirements don't fit your scenario, use Cognitive Services API option (PowerShell, Azure CLI, REST request).
-To use any of the methods above you need an Azure account that is assigned a role allowing to create resources in your subscription, like *Subscription Contributor*.
+To use any of the methods above, you need an Azure account that is assigned a role that allows creating resources in your subscription, like *Subscription Contributor*.
# [Azure portal](#tab/portal)
If you used Azure portal for creating a BYOS-enabled Speech resource, it's fully
### (Optional) Verify Speech resource BYOS configuration
-You may always check, whether any given Speech resource is BYOS enabled, and what is the associated Storage account. You can do it either via Azure portal, or via Cognitive Services API.
+You can always check whether any given Speech resource is BYOS enabled and what the associated Storage account is. You can do it either via the Azure portal or via the Cognitive Services API.
# [Azure portal](#tab/portal)
Use the [Accounts - Get](/rest/api/cognitiveservices/accountmanagement/accounts/
## Configure BYOS-associated Storage account
-To achieve high security and privacy of your data you need to properly configure the settings of the BYOS-associated Storage account. In case you didn't use Azure portal to create your BYOS-enabled Speech resource, you also need to perform a mandatory step of role assignment.
+To achieve high security and privacy of your data, you need to properly configure the settings of the BYOS-associated Storage account. If you didn't use the Azure portal to create your BYOS-enabled Speech resource, you also need to perform a mandatory role assignment step.
### Assign resource access role
This step is **mandatory** if you didn't use Azure portal to create your BYOS-en
BYOS uses the Blob storage of a Storage account. Because of this, the BYOS-enabled Speech resource managed identity needs the *Storage Blob Data Contributor* role assignment within the scope of the BYOS-associated Storage account.
-If you used Azure portal to create your BYOS-enabled Speech resource, you may skip the rest of this subsection. Your role assignment is already done. Otherwise, follow these steps.
+If you used Azure portal to create your BYOS-enabled Speech resource, you can skip the rest of this subsection. Your role assignment is already done. Otherwise, follow these steps.
> [!IMPORTANT] > You need to be assigned the *Owner* role of the Storage account or higher scope (like Subscription) to perform the operation in the next steps. This is because only the *Owner* role can assign roles to others. See details [here](../../role-based-access-control/built-in-roles.md).
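If you prefer scripting the assignment over the portal, the following Azure CLI sketch shows the same role assignment. All resource names are placeholders, and the commands assume the Speech resource already has a system assigned managed identity.

```azurecli
# Placeholders: replace the resource names with your own.
# Object ID of the Speech resource's system assigned managed identity.
principalId=$(az cognitiveservices account show \
    --name my-byos-speech-resource \
    --resource-group my-resource-group \
    --query identity.principalId --output tsv)

# Resource ID of the BYOS-associated Storage account.
storageId=$(az storage account show \
    --name mybyosstorageaccount \
    --resource-group my-resource-group \
    --query id --output tsv)

# Grant Storage Blob Data Contributor scoped to the Storage account.
az role assignment create \
    --role "Storage Blob Data Contributor" \
    --assignee-object-id $principalId \
    --assignee-principal-type ServicePrincipal \
    --scope $storageId
```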
If you used Azure portal to create your BYOS-enabled Speech resource, you may sk
This section describes how to set up Storage account security settings if you intend to use the BYOS-associated Storage account only for Speech to text scenarios. If you use the BYOS-associated Storage account for Text to speech or a combination of both Speech to text and Text to speech, use [this section](#configure-storage-account-security-settings-for-text-to-speech).
-For Speech to text BYOS is using the [trusted Azure services security mechanism](../../storage/common/storage-network-security.md#trusted-access-based-on-a-managed-identity) to communicate with Storage account. The mechanism allows setting very restricted Storage account data access rules.
+For Speech to text, BYOS uses the [trusted Azure services security mechanism](../../storage/common/storage-network-security.md#trusted-access-based-on-a-managed-identity) to communicate with the Storage account. The mechanism allows setting restricted Storage account data access rules.
-If you perform all actions in the section, your Storage account will be in the following configuration:
+If you perform all actions in the section, your Storage account is in the following configuration:
- Access to all external network traffic is prohibited. - Access to Storage account using Storage account key is prohibited. - Access to Storage account blob storage using [shared access signatures (SAS)](../../storage/common/storage-sas-overview.md) is prohibited. (Except for [User delegation SAS](../../storage/common/shared-key-authorization-prevent.md#understand-how-disallowing-shared-key-affects-sas-tokens))
So in effect your Storage account becomes completely "locked" and can only be ac
Consider this configuration a model for securing your data, and customize it according to your needs.
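As a minimal sketch with a hypothetical storage account name, the restricted baseline described above roughly corresponds to the following Azure CLI settings; the exact steps in the article may differ.

```azurecli
# Deny public network access except Azure trusted services, and disable
# Storage account key access (which also disables account-key-signed SAS).
az storage account update \
    --name mybyosstorageaccount \
    --resource-group my-resource-group \
    --default-action Deny \
    --bypass AzureServices \
    --allow-shared-key-access false
```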
-For example, you may allow traffic from selected public IP addresses and Azure Virtual networks. You may also set up access to your Storage account using [private endpoints](../../storage/common/storage-private-endpoints.md) (see as well [this tutorial](../../private-link/tutorial-private-endpoint-storage-portal.md)), re-enable access using Storage account key, allow access to other Azure trusted services, etc.
+For example, you can allow traffic from selected public IP addresses and Azure Virtual networks. You can also set up access to your Storage account using [private endpoints](../../storage/common/storage-private-endpoints.md) (see also [this tutorial](../../private-link/tutorial-private-endpoint-storage-portal.md)), re-enable access using the Storage account key, allow access to other Azure trusted services, and so on.
> [!NOTE] > Using [private endpoints for Speech](speech-services-private-link.md) isn't required to secure the Storage account. Private endpoints for Speech secure the channels for Speech API requests, and can be used as an extra component in your solution.
Having restricted access to the Storage account, you need to grant networking ac
This section describes how to set up Storage account security settings if you intend to use the BYOS-associated Storage account for Text to speech or a combination of both Speech to text and Text to speech. If you use the BYOS-associated Storage account for Speech to text only, use [this section](#configure-storage-account-security-settings-for-speech-to-text). > [!NOTE]
-> Text to speech requires more relaxed settings of Storage account firewall, compared to Speech to text. If you use both Speech to text and Text to speech, and need maximally restricted Storage account security settings to protect your data, you may consider using different Storage accounts and the corresponding Speech resources for Speech to Text and Text to speech tasks.
+> Text to speech requires more relaxed Storage account firewall settings than Speech to text. If you use both Speech to text and Text to speech, and need maximally restricted Storage account security settings to protect your data, consider using separate Storage accounts and corresponding Speech resources for Speech to text and Text to speech tasks.
-If you perform all actions in the section, your Storage account will be in the following configuration:
+If you perform all actions in the section, your Storage account is in the following configuration:
- External network traffic is allowed. - Access to Storage account using Storage account key is prohibited. - Access to Storage account blob storage using [shared access signatures (SAS)](../../storage/common/storage-sas-overview.md) is prohibited. (Except for [User delegation SAS](../../storage/common/shared-key-authorization-prevent.md#understand-how-disallowing-shared-key-affects-sas-tokens)) - Access to the BYOS-enabled Speech resource is allowed using the resource [system assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md) and [User delegation SAS](../../storage/common/storage-sas-overview.md#user-delegation-sas).
-These are the most restricted security settings possible for Text to speech scenario. You may further customize them according to your needs.
+These are the most restricted security settings possible for the text to speech scenario. You can further customize them according to your needs.
**Restrict access to the Storage account**
Custom neural voice uses [User delegation SAS](../../storage/common/storage-sas-
## Configure BYOS-associated Storage account for use with Speech Studio
-Many [Speech Studio](https://speech.microsoft.com/) operations like dataset upload, or custom model training and testing don't require any special configuration in the case of BYOS-enabled Speech resource.
+Many [Speech Studio](https://speech.microsoft.com/) operations, like dataset upload or custom model training and testing, don't require any special configuration of a BYOS-enabled Speech resource.
-However, if you need to read data stored withing BYOS-associated Storage account through Speech Studio Web interface, you need to configure additional settings of your BYOS-associated Storage account. For example, it's required to view the contents of a dataset.
+However, if you need to read data stored within the BYOS-associated Storage account through the Speech Studio web interface, you need to configure more settings of your BYOS-associated Storage account. For example, this is required to view the contents of a dataset.
### Configure Cross-Origin Resource Sharing (CORS)
Speech Studio needs permission to make requests to the Blob storage of the BYOS-
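A hedged Azure CLI sketch of such a CORS rule follows; the article's portal steps may use different values, and the origin assumes the Speech Studio URL linked above.

```azurecli
# Allow Speech Studio (https://speech.microsoft.com) to call the Blob service.
# The account name, methods, and max age are illustrative values.
az storage cors add \
    --account-name mybyosstorageaccount \
    --services b \
    --methods GET PUT OPTIONS \
    --origins "https://speech.microsoft.com" \
    --allowed-headers "*" \
    --exposed-headers "*" \
    --max-age 3600
```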
### Configure Azure Storage firewall
-You need to allow access for the machine, where you run the browser using Speech Studio. If your Storage account firewall settings allow public access from all networks, you may skip this subsection. Otherwise, follow these steps.
+You need to allow access for the machine where you run the browser that uses Speech Studio. If your Storage account firewall settings allow public access from all networks, you can skip this subsection. Otherwise, follow these steps.
1. Go to the [Azure portal](https://portal.azure.com/) and sign in to your Azure account. 1. Select the Storage account.
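If you'd rather script this than use the portal, an equivalent Azure CLI sketch (with a placeholder account name and your client's public IP) is:

```azurecli
# Add the public IP of the machine that runs the Speech Studio browser session.
az storage account network-rule add \
    --account-name mybyosstorageaccount \
    --resource-group my-resource-group \
    --ip-address <your-public-ip>
```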
ai-services Call Center Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/call-center-overview.md
Previously updated : 09/18/2022 Last updated : 1/18/2024 # Call Center Overview
-Azure AI services for Language and Speech can help you realize partial or full automation of telephony-based customer interactions, and provide accessibility across multiple channels. With the Language and Speech services, you can further analyze call center transcriptions, extract and redact conversation personally identifiable information (PII), summarize the transcription, and detect the sentiment.
+Azure AI Language and Azure AI Speech can help you realize partial or full automation of telephony-based customer interactions, and provide accessibility across multiple channels. With the Language and Speech services, you can further analyze call center transcriptions, extract and redact conversation personally identifiable information (PII), summarize the transcription, and detect the sentiment.
Some example scenarios for the implementation of Azure AI services in call and contact centers are:-- Virtual agents: Conversational AI-based telephony-integrated voicebots and voice-enabled chatbots
+- Virtual agents: Conversational AI-based telephony-integrated voice bots and voice-enabled chatbots
- Agent-assist: Real-time transcription and analysis of a call to improve the customer experience by providing insights and suggested actions to agents - Post-call analytics: Post-call analysis to create insights into customer conversations to improve understanding and support continuous improvement of call handling, optimization of quality assurance and compliance control, as well as other insight-driven optimizations.
Some example scenarios for the implementation of Azure AI services in call and c
A holistic call center implementation typically incorporates technologies from the Language and Speech services.
-Audio data typically used in call centers generated through landlines, mobile phones, and radios is often narrowband, in the range of 8 KHz, which can create challenges when you're converting speech to text. The Speech service recognition models are trained to ensure that you can get high-quality transcriptions, however you choose to capture the audio.
+Audio data typically used in call centers, generated through landlines, mobile phones, and radios, is often narrowband, in the range of 8 kHz, which can create challenges when you're converting speech to text. The Speech service recognition models are trained to ensure that you can get high-quality transcriptions, however you choose to capture the audio.
-Once you've transcribed your audio with the Speech service, you can use the Language service to perform analytics on your call center data such as: sentiment analysis, summarizing the reason for customer calls, how they were resolved, extracting and redacting conversation PII, and more.
+Once you transcribe your audio with the Speech service, you can use the Language service to perform analytics on your call center data such as: sentiment analysis, summarizing the reason for customer calls, how they were resolved, extracting and redacting conversation PII, and more.
### Speech service
The Speech service offers the following features that can be used for call cente
- [Real-time speech to text](./how-to-recognize-speech.md): Recognize and transcribe audio in real-time from multiple inputs. For example, with virtual agents or agent-assist, you can continuously recognize audio input and control how to process results based on multiple events. - [Batch speech to text](./batch-transcription.md): Transcribe large amounts of audio files asynchronously including speaker diarization and is typically used in post-call analytics scenarios. Diarization is the process of recognizing and separating speakers in mono channel audio data.-- [Text to speech](./text-to-speech.md): Text to speech enables your applications, tools, or devices to convert text into humanlike synthesized speech.
+- [Text to speech](./text-to-speech.md): Text to speech enables your applications, tools, or devices to convert text into humanlike synthesized speech.
- [Speaker identification](./speaker-recognition-overview.md): Helps you determine an unknown speaker's identity within a group of enrolled speakers and is typically used for call center customer verification scenarios or fraud detection. - [Language Identification](./language-identification.md): Identify languages spoken in audio and can be used in real-time and post-call analysis for insights or to control the environment (such as output language of a virtual agent).
The Speech service works well with prebuilt models. However, you might want to f
| Speech customization | Description | | -- | -- |
-| [Custom Speech](./custom-speech-overview.md) | A speech to text feature used evaluate and improve the speech recognition accuracy of use-case specific entities (such as alpha-numeric customer, case, and contract IDs, license plates, and names). You can also train a custom model with your own product names and industry terminology. |
+| [Custom Speech](./custom-speech-overview.md) | A speech to text feature used to evaluate and improve the speech recognition accuracy of use-case specific entities (such as alpha-numeric customer, case, and contract IDs, license plates, and names). You can also train a custom model with your own product names and industry terminology. |
| [Custom neural voice](./custom-neural-voice.md) | A text to speech feature that lets you create a one-of-a-kind, customized, synthetic voice for your applications. | ### Language service
ai-services Call Center Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/call-center-quickstart.md
Previously updated : 09/20/2022 Last updated : 1/18/2024 ms.devlang: csharp
ai-services Call Center Telephony Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/call-center-telephony-integration.md
Previously updated : 08/10/2022 Last updated : 1/18/2024
ai-services Captioning Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/captioning-concepts.md
Previously updated : 06/02/2022 Last updated : 1/18/2024 zone_pivot_groups: programming-languages-speech-sdk-cli
The following are aspects to consider when using captioning:
> > Try the [Azure AI Video Indexer](/azure/azure-video-indexer/video-indexer-overview) as a demonstration of how you can get captions for videos that you upload.
-Captioning can accompany real-time or pre-recorded speech. Whether you're showing captions in real-time or with a recording, you can use the [Speech SDK](speech-sdk.md) or [Speech CLI](spx-overview.md) to recognize speech and get transcriptions. You can also use the [Batch transcription API](batch-transcription.md) for pre-recorded video.
+Captioning can accompany real-time or prerecorded speech. Whether you're showing captions in real-time or with a recording, you can use the [Speech SDK](speech-sdk.md) or [Speech CLI](spx-overview.md) to recognize speech and get transcriptions. You can also use the [Batch transcription API](batch-transcription.md) for prerecorded video.
## Caption output format
For captioning of a prerecording, send file input to the Speech service. For mor
## Caption and speech synchronization
-You'll want to synchronize captions with the audio track, whether it's done in real-time or with a prerecording.
+You want to synchronize captions with the audio track, whether it's in real-time or with a prerecording.
The Speech service returns the offset and duration of the recognized speech.
For captioning of prerecorded speech or wherever latency isn't a concern, you co
Real-time captioning presents tradeoffs with respect to latency versus accuracy. You could show the text from each `Recognizing` event as soon as possible. However, if you can accept some latency, you can improve the accuracy of the caption by displaying the text from the `Recognized` event. There's also some middle ground, which is referred to as "stable partial results".
-You can request that the Speech service return fewer `Recognizing` events that are more accurate. This is done by setting the `SpeechServiceResponse_StablePartialResultThreshold` property to a value between `0` and `2147483647`. The value that you set is the number of times a word has to be recognized before the Speech service returns a `Recognizing` event. For example, if you set the `SpeechServiceResponse_StablePartialResultThreshold` property value to `5`, the Speech service will affirm recognition of a word at least five times before returning the partial results to you with a `Recognizing` event.
+You can request that the Speech service return fewer `Recognizing` events that are more accurate. This is done by setting the `SpeechServiceResponse_StablePartialResultThreshold` property to a value between `0` and `2147483647`. The value that you set is the number of times a word has to be recognized before the Speech service returns a `Recognizing` event. For example, if you set the `SpeechServiceResponse_StablePartialResultThreshold` property value to `5`, the Speech service affirms recognition of a word at least five times before returning the partial results to you with a `Recognizing` event.
::: zone pivot="programming-language-csharp" ```csharp
spx recognize --file caption.this.mp4 --format any --property SpeechServiceRespo
``` ::: zone-end
-Requesting more stable partial results will reduce the "flickering" or changing text, but it can increase latency as you wait for higher confidence results.
+Requesting more stable partial results reduces the "flickering" or changing text, but it can increase latency as you wait for higher confidence results.
### Stable partial threshold example In the following recognition sequence without setting a stable partial threshold, "math" is recognized as a word, but the final text is "mathematics". At another point, "course 2" is recognized, but the final text is "course 201".
RECOGNIZED: Text=Welcome to applied Mathematics course 201.
## Language identification
-If the language in the audio could change, use continuous [language identification](language-identification.md). Language identification is used to identify languages spoken in audio when compared against a list of [supported languages](language-support.md?tabs=language-identification). You provide up to 10 candidate languages, at least one of which is expected be in the audio. The Speech service returns the most likely language in the audio.
+If the language in the audio could change, use continuous [language identification](language-identification.md). Language identification is used to identify languages spoken in audio when compared against a list of [supported languages](language-support.md?tabs=language-identification). You provide up to 10 candidate languages, at least one of which is expected in the audio. The Speech service returns the most likely language in the audio.
## Customizations to improve accuracy
Examples of phrases include:
* Homonyms * Words or acronyms unique to your industry or organization
-There are some situations where [training a custom model](custom-speech-overview.md) is likely the best option to improve accuracy. For example, if you're captioning orthodontics lectures, you might want to train a custom model with the corresponding domain data.
+There are some situations where [training a custom model](custom-speech-overview.md) is likely the best option to improve accuracy. For example, if you're captioning orthodontic lectures, you might want to train a custom model with the corresponding domain data.
## Next steps
ai-services Custom Commands Encryption Of Data At Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/custom-commands-encryption-of-data-at-rest.md
Previously updated : 07/05/2020 Last updated : 1/18/2024
[!INCLUDE [deprecation notice](./includes/custom-commands-retire.md)]
-Custom Commands automatically encrypts your data when it is persisted to the cloud. The Custom Commands service encryption protects your data and to help you to meet your organizational security and compliance commitments.
+Custom Commands automatically encrypts your data when it's persisted to the cloud. The Custom Commands service encryption protects your data and helps you meet your organizational security and compliance commitments.
> [!NOTE] > Custom Commands service doesn't automatically enable encryption for the LUIS resources associated with your application. If needed, you must enable encryption for your LUIS resource from [here](../luis/encrypt-data-at-rest.md).
Data is encrypted and decrypted using [FIPS 140-2](https://en.wikipedia.org/wiki
## About encryption key management
-When you use Custom Commands, speech service will store following data in the cloud:
+When you use Custom Commands, the Speech service stores the following data in the cloud:
* Configuration JSON behind the Custom Commands application * LUIS authoring and prediction key
By default, your subscription uses Microsoft-managed encryption keys. However, y
> [!IMPORTANT] > Customer-managed keys are only available for resources created after 27 June, 2020. To use CMK with the Speech service, you will need to create a new Speech resource. Once the resource is created, you can use Azure Key Vault to set up your managed identity.
-To request the ability to use customer-managed keys, fill out and submit Customer-Managed Key Request Form. It will take approximately 3-5 business days to hear back on the status of your request. Depending on demand, you may be placed in a queue and approved as space becomes available. Once approved for using CMK with the Speech service, you'll need to create a new Speech resource from the Azure portal.
+To request the ability to use customer-managed keys, fill out and submit the Customer-Managed Key Request Form. It takes approximately 3-5 business days to hear back on the status of your request. Depending on demand, you might be placed in a queue and approved as space becomes available. Once approved for using CMK with the Speech service, you need to create a new Speech resource from the Azure portal.
> [!NOTE] > **Customer-managed keys (CMK) are supported only for custom commands.** >
To request the ability to use customer-managed keys, fill out and submit Custome
You must use Azure Key Vault to store customer-managed keys. You can either create your own keys and store them in a key vault, or you can use the Azure Key Vault APIs to generate keys. The Speech resource and the key vault must be in the same region and in the same Microsoft Entra tenant, but they can be in different subscriptions. For more information about Azure Key Vault, see [What is Azure Key Vault?](../../key-vault/general/overview.md).
-When a new Speech resource is created and used to provision Custom Commands application - data is always encrypted using Microsoft-managed keys. It's not possible to enable customer-managed keys at the time that the resource is created. Customer-managed keys are stored in Azure Key Vault, and the key vault must be provisioned with access policies that grant key permissions to the managed identity that is associated with the Azure AI services resource. The managed identity is available only after the resource is created using the Pricing Tier required for CMK.
+When a new Speech resource is created and used to provision Custom Commands applications, data is always encrypted using Microsoft-managed keys. It's not possible to enable customer-managed keys at the time that the resource is created. Customer-managed keys are stored in Azure Key Vault, and the key vault must be provisioned with access policies that grant key permissions to the managed identity that is associated with the Azure AI services resource. The managed identity is available only after the resource is created using the Pricing Tier required for CMK.
-Enabling customer managed keys will also enable a system assigned [managed identity](../../active-directory/managed-identities-azure-resources/overview.md), a feature of Microsoft Entra ID. Once the system assigned managed identity is enabled, this resource will be registered with Microsoft Entra ID. After being registered, the managed identity will be given access to the Key Vault selected during customer managed key setup.
+Enabling customer managed keys also enables a system assigned [managed identity](../../active-directory/managed-identities-azure-resources/overview.md), a feature of Microsoft Entra ID. Once the system assigned managed identity is enabled, this resource is registered with Microsoft Entra ID. After being registered, the managed identity is given access to the Key Vault selected during customer managed key setup.
> [!IMPORTANT] > If you disable system assigned managed identities, access to the key vault will be removed and any data encrypted with the customer keys will no longer be accessible. Any features depended on this data will stop working.
Enabling customer managed keys will also enable a system assigned [managed ident
## Configure Azure Key Vault
-Using customer-managed keys requires that two properties be set in the key vault, **Soft Delete** and **Do Not Purge**. These properties are not enabled by default, but can be enabled using either PowerShell or Azure CLI on a new or existing key vault.
+Using customer-managed keys requires that two properties be set in the key vault, **Soft Delete** and **Do Not Purge**. These properties aren't enabled by default, but can be enabled using either PowerShell or Azure CLI on a new or existing key vault.
> [!IMPORTANT] > If you do not have the **Soft Delete** and **Do Not Purge** properties enabled and you delete your key, you won't be able to recover the data in your Azure AI services resource.
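As a hedged Azure CLI sketch with a placeholder vault name: soft delete is enabled by default on newly created key vaults, so typically only purge protection (**Do Not Purge**) needs to be turned on explicitly.

```azurecli
# Enable purge protection on an existing key vault (soft delete is on by default
# for new vaults). "my-key-vault" is a hypothetical name.
az keyvault update \
    --name my-key-vault \
    --resource-group my-resource-group \
    --enable-purge-protection true
```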
Only RSA keys of size 2048 are supported with Azure Storage encryption. For more
To enable customer-managed keys in the Azure portal, follow these steps: 1. Navigate to your Speech resource.
-1. On the **Settings** blade for your Speech resource, select **Encryption**. Select the **Customer Managed Keys** option, as shown in the following figure.
+1. On the **Settings** page for your Speech resource, select **Encryption**. Select the **Customer Managed Keys** option, as shown in the following figure.
![Screenshot showing how to select Customer Managed Keys](media/custom-commands/select-cmk.png)
After you enable customer-managed keys, you'll have the opportunity to specify a
To specify a key as a URI, follow these steps:
-1. To locate the key URI in the Azure portal, navigate to your key vault, and select the **Keys** setting. Select the desired key, then click the key to view its versions. Select a key version to view the settings for that version.
+1. To locate the key URI in the Azure portal, navigate to your key vault, and select the **Keys** setting. Select the desired key, then select the key to view its versions. Select a key version to view the settings for that version.
1. Copy the value of the **Key Identifier** field, which provides the URI. ![Screenshot showing key vault key URI](../media/cognitive-services-encryption/key-uri-portal.png)
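If you prefer the CLI, a sketch like the following (with placeholder names) returns the same Key Identifier value:

```azurecli
# Returns the key URI (Key Identifier) for the latest version of the key.
az keyvault key show \
    --vault-name my-key-vault \
    --name my-key \
    --query key.kid --output tsv
```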
To change the key used for encryption, follow these steps:
You can rotate a customer-managed key in Azure Key Vault according to your compliance policies. When the key is rotated, you must update the Speech resource to use the new key URI. To learn how to update the resource to use a new version of the key in the Azure portal, see [Update the key version](#update-the-key-version).
-Rotating the key does not trigger re-encryption of data in the resource. There is no further action required from the user.
+Rotating the key doesn't trigger re-encryption of data in the resource. There's no further action required from the user.
## Revoke access to customer-managed keys
ai-services Custom Commands References https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/custom-commands-references.md
Previously updated : 06/18/2020 Last updated : 1/18/2024
Parameters are information required by the commands to complete a task. In compl
Completion rules are a series of rules to be executed after the command is ready to be fulfilled, for example, when all the conditions of the rules are satisfied. ### Interaction rules
-Interaction rules are additional rules to handle more specific or complex situations. You can add additional validations or configure advanced features such as confirmations or a one-step correction. You can also build your own custom interaction rules.
+Interaction rules are extra rules to handle more specific or complex situations. You can add more validations or configure advanced features such as confirmations or a one-step correction. You can also build your own custom interaction rules.
## Parameters configuration
A parameter is identified by the name property. You should always give a descrip
### Required This check box indicates whether a value for this parameter is required for command fulfillment or completion. You must configure responses to prompt the user to provide a value if a parameter is marked as required.
-Note that, if you configured a **required parameter** to have a **Default value**, the system will still explicitly prompt for the parameter's value.
+If you configured a **required parameter** to have a **Default value**, the system still prompts for the parameter's value.
### Type Custom Commands supports the following parameter types:
A rule in Custom Commands is defined by a set of *conditions* that, when met, ex
Custom Commands supports the following rule categories: * **Completion rules**: These rules must be executed upon command fulfillment. All the rules configured in this section for which the conditions are true will be executed.
-* **Interaction rules**: These rules can be used to configure additional custom validations, confirmations, and a one-step correction, or to accomplish any other custom dialog logic. Interaction rules are evaluated at each turn in the processing and can be used to trigger completion rules.
+* **Interaction rules**: These rules can be used to configure extra custom validations, confirmations, and a one-step correction, or to accomplish any other custom dialog logic. Interaction rules are evaluated at each turn in the processing and can be used to trigger completion rules.
The different actions configured as part of a rule are executed in the order in which they appear in the authoring portal.
Conditions are the requirements that must be met for a rule to execute. Rules co
* **All required parameters**: All the parameters that were marked as required have a value. * **Updated parameters**: One or more parameter values were updated as a result of processing the current input (utterance or activity). * **Confirmation was successful**: The input utterance or activity was a successful confirmation (yes).
-* **Confirmation was denied**: The input utterance or activity was not a successful confirmation (no).
+* **Confirmation was denied**: The input utterance or activity wasn't a successful confirmation (no).
* **Previous command needs to be updated**: This condition is used in instances when you want to catch a negated confirmation along with an update. Behind the scenes, this condition is configured for when the dialog engine detects a negative confirmation where the intent is the same as the previous turn, and the user has responded with an update. ### Actions
Expectations are used to configure hints for the processing of the next user inp
The post-execution state is the dialog state after processing the current input (utterance or activity). It's of the following types: * **Keep current state**: Keep current state only.
-* **Complete the command**: Complete the command and no additional rules of the command will be processed.
+* **Complete the command**: Complete the command and no more rules of the command are processed.
* **Execute completion rules**: Execute all the valid completion rules. * **Wait for user's input**: Wait for the next user input.
ai-services Custom Commands https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/custom-commands.md
Previously updated : 03/11/2020 Last updated : 1/18/2024
Applications such as [Voice assistants](voice-assistants.md) listen to users and take an action in response, often speaking back. They use [speech to text](speech-to-text.md) to transcribe the user's speech, then take action on the natural language understanding of the text. This action frequently includes spoken output from the assistant generated with [text to speech](text-to-speech.md). Devices connect to assistants with the Speech SDK's `DialogServiceConnector` object.
-Custom Commands makes it easy to build rich voice commanding apps optimized for voice-first interaction experiences. It provides a unified authoring experience, an automatic hosting model, and relatively lower complexity, helping you focus on building the best solution for your voice commanding scenarios.
+Custom Commands makes it easy to build rich voice commanding apps optimized for voice-first interaction experiences. It provides a unified authoring experience, an automatic hosting model, and relatively lower complexity. Custom Commands helps you focus on building the best solution for your voice commanding scenarios.
Custom Commands is best suited for task completion or command-and-control scenarios such as "Turn on the overhead light" or "Make it 5 degrees warmer". Custom Commands is well suited for Internet of Things (IoT) devices, ambient and headless devices. Examples include solutions for Hospitality, Retail and Automotive industries, where you want voice-controlled experiences for your guests, in-store inventory management or in-car functionality.
ai-services How To Custom Speech Test And Train https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-custom-speech-test-and-train.md
Use <a href="http://sox.sourceforge.net" target="_blank" rel="noopener">SoX</a>
### Custom display text formatting data for training
-Learn more about [display text formatting with speech to text](./display-text-format.md).
+Learn more about [preparing display text formatting data](./how-to-custom-speech-display-text-format.md) and [display text formatting with speech to text](./display-text-format.md).
Automatic Speech Recognition output display format is critical to downstream tasks, and one size doesn't fit all. Adding Custom Display Format rules allows users to define their own lexical-to-display format rules to improve the speech recognition service quality on top of Microsoft Azure Custom Speech Service.
aks App Routing Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/app-routing-migration.md
description: Learn how to migrate from the HTTP application routing feature to t
-+ Last updated 11/03/2023
In this article, you learn how to migrate your Azure Kubernetes Service (AKS) cl
- path: / pathType: Prefix backend:
- service:
+ service:
name: aks-helloworld
- port:
+ port:
number: 80 ```
In this article, you learn how to migrate your Azure Kubernetes Service (AKS) cl
- path: / pathType: Prefix backend:
- service:
+ service:
name: aks-helloworld
- port:
+ port:
number: 80 ```
aks Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/availability-zones.md
Title: Use availability zones in Azure Kubernetes Service (AKS) description: Learn how to create a cluster that distributes nodes across availability zones in Azure Kubernetes Service (AKS)-+ Last updated 12/06/2023
aks Azure Blob Csi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-blob-csi.md
Title: Use Container Storage Interface (CSI) driver for Azure Blob storage on Azure Kubernetes Service (AKS) description: Learn how to use the Container Storage Interface (CSI) driver for Azure Blob storage in an Azure Kubernetes Service (AKS) cluster. -+ Last updated 11/24/2023
aks Azure Csi Blob Storage Provision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-csi-blob-storage-provision.md
Title: Create a persistent volume with Azure Blob storage in Azure Kubernetes Se
description: Learn how to create a static or dynamic persistent volume with Azure Blob storage for use with multiple concurrent pods in Azure Kubernetes Service (AKS) -+ Last updated 11/28/2023
aks Azure Csi Disk Storage Provision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-csi-disk-storage-provision.md
Title: Create a persistent volume with Azure Disks in Azure Kubernetes Service (
description: Learn how to create a static or dynamic persistent volume with Azure Disks for use with multiple concurrent pods in Azure Kubernetes Service (AKS) -+ Last updated 11/28/2023
When you create an Azure disk for use with AKS, you can create the disk resource
```azurecli-interactive az aks show --resource-group myResourceGroup --name myAKSCluster --query nodeResourceGroup -o tsv
-
+ # Output MC_myResourceGroup_myAKSCluster_eastus ```
aks Azure Csi Files Storage Provision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-csi-files-storage-provision.md
Title: Create a persistent volume with Azure Files in Azure Kubernetes Service (
description: Learn how to create a static or dynamic persistent volume with Azure Files for use with multiple concurrent pods in Azure Kubernetes Service (AKS) -+ Last updated 11/28/2023
Kubernetes needs credentials to access the file share created in the previous st
```bash kubectl delete pod mypod
-
+ kubectl apply -f azure-files-pod.yaml ```
spec:
readOnly: false volumes: - name: azure
- csi:
+ csi:
driver: file.csi.azure.com volumeAttributes: secretName: azure-secret # required
aks Azure Disk Csi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-disk-csi.md
Title: Use Container Storage Interface (CSI) driver for Azure Disk on Azure Kubernetes Service (AKS) description: Learn how to use the Container Storage Interface (CSI) driver for Azure Disk in an Azure Kubernetes Service (AKS) cluster. -+ Last updated 04/19/2023
metadata:
name: azuredisk-csi-waitforfirstconsumer provisioner: disk.csi.azure.com parameters:
- skuname: StandardSSD_LRS
+ skuname: StandardSSD_LRS
allowVolumeExpansion: true reclaimPolicy: Delete volumeBindingMode: WaitForFirstConsumer
aks Azure Disk Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-disk-customer-managed-keys.md
Title: Use a customer-managed key to encrypt Azure disks in Azure Kubernetes Service (AKS) description: Bring your own keys (BYOK) to encrypt AKS OS and Data disks. -+ Last updated 11/24/2023
keyVaultId=$(az keyvault show --name myKeyVaultName --query "[id]" -o tsv)
keyVaultKeyUrl=$(az keyvault key show --vault-name myKeyVaultName --name myKeyName --query "[key.kid]" -o tsv) # Create a DiskEncryptionSet
-az disk-encryption-set create -n myDiskEncryptionSetName -l myAzureRegionName -g myResourceGroup --source-vault $keyVaultId --key-url $keyVaultKeyUrl
+az disk-encryption-set create -n myDiskEncryptionSetName -l myAzureRegionName -g myResourceGroup --source-vault $keyVaultId --key-url $keyVaultKeyUrl
``` > [!IMPORTANT]
az keyvault set-policy -n myKeyVaultName -g myResourceGroup --object-id $desIden
## Create a new AKS cluster and encrypt the OS disk
-Either create a new resource group, or select an existing resource group hosting other AKS clusters, then use your key to encrypt the either using network-attached OS disks or ephemeral OS disk. By default, a cluster uses ephemeral OS disk when possible in conjunction with VM size and OS disk size.
+Either create a new resource group, or select an existing resource group hosting other AKS clusters, then use your key to encrypt either network-attached OS disks or ephemeral OS disks. By default, a cluster uses an ephemeral OS disk when possible, in conjunction with the VM size and OS disk size.
Run the following command to retrieve the DiskEncryptionSet value and set a variable:
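The command itself isn't shown in this excerpt; a sketch of the kind of command this step refers to (names are placeholders and may differ from the article) is:

```azurecli
# Capture the resource ID of the DiskEncryptionSet created earlier.
diskEncryptionSetId=$(az disk-encryption-set show \
    --name myDiskEncryptionSetName \
    --resource-group myResourceGroup \
    --query id --output tsv)
```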
aksIdentity=$(az aks show -g $RG_NAME -n $CLUSTER_NAME --query "identity.princip
az role assignment create --role "Contributor" --assignee $aksIdentity --scope $diskEncryptionSetId ```
-Create a file called **byok-azure-disk.yaml** that contains the following information. Replace *myAzureSubscriptionId*, *myResourceGroup*, and *myDiskEncrptionSetName* with your values, and apply the yaml. Make sure to use the resource group where your DiskEncryptionSet is deployed.
+Create a file called **byok-azure-disk.yaml** that contains the following information. Replace *myAzureSubscriptionId*, *myResourceGroup*, and *myDiskEncrptionSetName* with your values, and apply the yaml. Make sure to use the resource group where your DiskEncryptionSet is deployed.
```yaml kind: StorageClass
-apiVersion: storage.k8s.io/v1
+apiVersion: storage.k8s.io/v1
metadata: name: byok provisioner: disk.csi.azure.com # replace with "kubernetes.io/azure-disk" if aks version is less than 1.21
aks Azure Files Csi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-files-csi.md
Title: Use Container Storage Interface (CSI) driver for Azure Files on Azure Kubernetes Service (AKS) description: Learn how to use the Container Storage Interface (CSI) driver for Azure Files in an Azure Kubernetes Service (AKS) cluster. -+ Last updated 01/11/2024
allowVolumeExpansion: true
parameters: resourceGroup: <resourceGroup> storageAccount: <storageAccountName>
- server: <storageAccountName>.file.core.windows.net
+ server: <storageAccountName>.file.core.windows.net
reclaimPolicy: Delete volumeBindingMode: Immediate mountOptions:
The output of the command resembles the following example:
```output storageclass.storage.k8s.io/private-azurefile-csi created ```
-
+ Create a file named `private-pvc.yaml`, and then paste the following example manifest in the file:
-
+ ```yaml apiVersion: v1 kind: PersistentVolumeClaim
spec:
requests: storage: 100Gi ```
-
+ Create the PVC by using the [kubectl apply][kubectl-apply] command:
-
+ ```bash kubectl apply -f private-pvc.yaml ```
aks Azure Netapp Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-netapp-files.md
Title: Configure Azure NetApp Files for Azure Kubernetes Service description: Learn how to configure Azure NetApp Files for an Azure Kubernetes Service cluster. -+ Last updated 05/08/2023
The following considerations apply when you use Azure NetApp Files:
* The Azure CLI version 2.0.59 or higher installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli]. * After the initial deployment of an AKS cluster, you can choose to provision Azure NetApp Files volumes statically or dynamically. * To use dynamic provisioning with Azure NetApp Files with Network File System (NFS), install and configure [Astra Trident][astra-trident] version 19.07 or higher. To use dynamic provisioning with Azure NetApp Files with Secure Message Block (SMB), install and configure Astra Trident version 22.10 or higher. Dynamic provisioning for SMB shares is only supported on windows worker nodes.
-* Before you deploy Azure NetApp Files SMB volumes, you must identify the AD DS integration requirements for Azure NetApp Files to ensure that Azure NetApp Files is well connected to AD DS. For more information, see [Understand guidelines for Active Directory Domain Services site design and planning](../azure-netapp-files/understand-guidelines-active-directory-domain-service-site.md). Both the AKS cluster and Azure NetApp Files must have connectivity to the same AD.
+* Before you deploy Azure NetApp Files SMB volumes, you must identify the AD DS integration requirements for Azure NetApp Files to ensure that Azure NetApp Files is well connected to AD DS. For more information, see [Understand guidelines for Active Directory Domain Services site design and planning](../azure-netapp-files/understand-guidelines-active-directory-domain-service-site.md). Both the AKS cluster and Azure NetApp Files must have connectivity to the same AD.
## Configure Azure NetApp Files for AKS workloads
-This section describes how to set up Azure NetApp Files for AKS workloads. It's applicable for all scenarios within this article.
+This section describes how to set up Azure NetApp Files for AKS workloads. It's applicable for all scenarios within this article.
1. Define variables for later usage. Replace *myresourcegroup*, *mylocation*, *myaccountname*, *mypool1*, *poolsize*, *premium*, *myvnet*, *myANFSubnet*, and *myprefix* with appropriate values for your environment.
This section describes how to set up Azure NetApp Files for AKS workloads. It's
SUBNET_NAME="myANFSubnet" ADDRESS_PREFIX="myprefix" ```
-
+ 2. Register the *Microsoft.NetApp* resource provider by running the following command: ```azurecli-interactive
This section describes how to set up Azure NetApp Files for AKS workloads. It's
--service-level $SERVICE_LEVEL ```
-5. Create a subnet to [delegate to Azure NetApp Files][anf-delegate-subnet] using the command [`az network vnet subnet create`][az-network-vnet-subnet-create]. Specify the resource group hosting the existing virtual network for your AKS cluster. Replace the variables shown in the command with your Azure NetApp Files information.
+5. Create a subnet to [delegate to Azure NetApp Files][anf-delegate-subnet] using the command [`az network vnet subnet create`][az-network-vnet-subnet-create]. Specify the resource group hosting the existing virtual network for your AKS cluster. Replace the variables shown in the command with your Azure NetApp Files information.
> [!NOTE] > This subnet must be in the same virtual network as your AKS cluster.
This section describes how to set up Azure NetApp Files for AKS workloads. It's
## Statically or dynamically provision Azure NetApp Files volumes for NFS or SMB After you [configure Azure NetApp Files for AKS workloads](#configure-azure-netapp-files-for-aks-workloads), you can statically or dynamically provision Azure NetApp Files using NFS, SMB, or dual-protocol volumes within the capacity pool. Follow instructions in:
-* [Provision Azure NetApp Files NFS volumes for Azure Kubernetes Service](azure-netapp-files-nfs.md)
+* [Provision Azure NetApp Files NFS volumes for Azure Kubernetes Service](azure-netapp-files-nfs.md)
* [Provision Azure NetApp Files SMB volumes for Azure Kubernetes Service](azure-netapp-files-smb.md) * [Provision Azure NetApp Files dual-protocol volumes for Azure Kubernetes Service](azure-netapp-files-dual-protocol.md)
aks Best Practices Performance Scale Large https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/best-practices-performance-scale-large.md
Title: Performance and scaling best practices for large workloads in Azure Kuber
description: Learn the best practices for performance and scaling for large workloads in Azure Kubernetes Service (AKS). Previously updated : 11/03/2023 Last updated : 01/18/2024 # Best practices for performance and scaling for large workloads in Azure Kubernetes Service (AKS)
Kubernetes has a multi-dimensional scale envelope with each resource type repres
The control plane manages all the resource scaling in the cluster, so the more you scale the cluster within a given dimension, the less you can scale within other dimensions. For example, running hundreds of thousands of pods in an AKS cluster impacts how much pod churn rate (pod mutations per second) the control plane can support.
-The size of the envelope is proportional to the size of the Kubernetes control plane. AKS supports two control plane tiers as part of the Base SKU: the Free tier and the Standard tier. For more information, see [Free and Standard pricing tiers for AKS cluster management][free-standard-tier].
+The size of the envelope is proportional to the size of the Kubernetes control plane. AKS supports three control plane tiers as part of the Base SKU: the Free, Standard, and Premium tiers. For more information, see [Free, Standard, and Premium pricing tiers for AKS cluster management][pricing-tiers].
> [!IMPORTANT]
-> We highly recommend using the Standard tier for production or at-scale workloads. AKS automatically scales up the Kubernetes control plane to support the following scale limits:
+> We highly recommend using the Standard or Premium tier for production or at-scale workloads. AKS automatically scales up the Kubernetes control plane to support the following scale limits:
> > * Up to 5,000 nodes per AKS cluster > * 200,000 pods per AKS cluster (with Azure CNI Overlay)
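A hedged sketch of selecting the tier with the Azure CLI `--tier` parameter (cluster and resource group names are placeholders):

```azurecli
# Create a cluster on the Standard tier, or move an existing cluster to it.
az aks create --resource-group myResourceGroup --name myAKSCluster --tier standard
az aks update --resource-group myResourceGroup --name myAKSCluster --tier standard
```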
As you scale your AKS clusters to larger scale points, keep the following node p
[managed-nat-gateway]: ./nat-gateway.md [azure-cni-dynamic-ip]: ./configure-azure-cni-dynamic-ip-allocation.md [azure-cni-overlay]: ./azure-cni-overlay.md
-[free-standard-tier]: ./free-standard-pricing-tiers.md
+[pricing-tiers]: ./free-standard-pricing-tiers.md
[cluster-autoscaler]: cluster-autoscaler.md [azure-npm]: ../virtual-network/kubernetes-network-policies.md
aks Configure Kube Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/configure-kube-proxy.md
Title: Configure kube-proxy (iptables/IPVS) (Preview)
description: Learn how to configure kube-proxy to utilize different load balancing configurations with Azure Kubernetes Service (AKS). -+ Last updated 09/25/2023
You can view the full `kube-proxy` configuration structure in the [AKS Cluster S
```azurecli-interactive # Create a new cluster az aks create -g <resourceGroup> -n <clusterName> --kube-proxy-config kube-proxy.json
-
+ # Update an existing cluster az aks update -g <resourceGroup> -n <clusterName> --kube-proxy-config kube-proxy.json ```
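A minimal, illustrative `kube-proxy.json` is sketched below; the field names are assumptions modeled on the kube-proxy configuration structure referenced above and may differ from the exact schema documented for your AKS version:

```bash
# Illustrative kube-proxy.json enabling IPVS mode; field names and values are assumptions
cat > kube-proxy.json <<'EOF'
{
  "enabled": true,
  "mode": "IPVS",
  "ipvsConfig": {
    "scheduler": "LeastConnection",
    "TCPTimeoutSeconds": 900,
    "TCPFINTimeoutSeconds": 120,
    "UDPTimeoutSeconds": 300
  }
}
EOF
```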
aks Configure Kubenet Dual Stack https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/configure-kubenet-dual-stack.md
Last updated 12/07/2023-+ # Use dual-stack kubenet networking in Azure Kubernetes Service (AKS)
AKS configures the required supporting services for dual-stack networking. This
* Load balancer setup for IPv4 and IPv6 services. > [!NOTE]
-> When using Dualstack with an [outbound type][outbound-type] of user-defined routing, you can choose to have a default route for IPv6 depending on if you need your IPv6 traffic to reach the internet or not. If you don't have a default route for IPv6, a warning will surface when creating a cluster but will not prevent cluster creation.
+> When using Dualstack with an [outbound type][outbound-type] of user-defined routing, you can choose to have a default route for IPv6 depending on if you need your IPv6 traffic to reach the internet or not. If you don't have a default route for IPv6, a warning will surface when creating a cluster but will not prevent cluster creation.
## Deploying a dual-stack cluster
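As a hedged sketch (cluster and resource group names are placeholders), a dual-stack kubenet cluster can be requested by passing both IP families at creation time:

```azurecli-interactive
# Create a dual-stack (IPv4/IPv6) kubenet cluster; resource names are illustrative
az aks create \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --network-plugin kubenet \
    --ip-families ipv4,ipv6 \
    --generate-ssh-keys
```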
aks Csi Secrets Store Configuration Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/csi-secrets-store-configuration-options.md
Title: Azure Key Vault provider for Secrets Store CSI Driver for Azure Kubernetes Service (AKS) configuration and troubleshooting options description: Learn configuration and troubleshooting options for the Azure Key Vault provider for Secrets Store CSI Driver in Azure Kubernetes Service (AKS).-+ -+ Last updated 10/19/2023-+ # Azure Key Vault provider for Secrets Store CSI Driver for Azure Kubernetes Service (AKS) configuration and troubleshooting options
You might want to create a Kubernetes secret to mirror your mounted secrets cont
metadata: name: azure-sync spec:
- provider: azure
+ provider: azure
secretObjects: # [OPTIONAL] SecretObjects defines the desired state of synced Kubernetes secret objects - data: - key: username # data field to populate
You might want to create a Kubernetes secret to mirror your mounted secrets cont
spec: containers: - name: busybox
- image: registry.k8s.io/e2e-test-images/busybox:1.29-1
+ image: registry.k8s.io/e2e-test-images/busybox:1.29-1
command: - "/bin/sleep" - "10000"
aks Csi Secrets Store Driver https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/csi-secrets-store-driver.md
Title: Use the Azure Key Vault provider for Secrets Store CSI Driver for Azure Kubernetes Service (AKS) secrets description: Learn how to use the Azure Key Vault provider for Secrets Store CSI Driver to integrate secrets stores with Azure Kubernetes Service (AKS).-+ -+ Last updated 12/06/2023-+ # Use the Azure Key Vault provider for Secrets Store CSI Driver in an Azure Kubernetes Service (AKS) cluster
aks Csi Secrets Store Identity Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/csi-secrets-store-identity-access.md
Title: Access Azure Key Vault with the CSI Driver Identity Provider description: Learn how to integrate the Azure Key Vault Provider for Secrets Store CSI Driver with your Azure credentials and user identities.-+ Last updated 12/19/2023-+ # Connect your Azure identity provider to the Azure Key Vault Secrets Store CSI Driver in Azure Kubernetes Service (AKS)
You can use one of the following access methods:
A [Microsoft Entra Workload ID][workload-identity] is an identity that an application running on a pod uses to authenticate itself against other Azure services, such as workloads in software. The Secret Store CSI Driver integrates with native Kubernetes capabilities to federate with external identity providers.
-In this security model, the AKS cluster acts as token issuer. Microsoft Entra ID then uses OIDC to discover public signing keys and verify the authenticity of the service account token before exchanging it for a Microsoft Entra token. For your workload to exchange a service account token projected to its volume for a Microsoft Entra token, you need the Azure Identity client library in the Azure SDK or the Microsoft Authentication Library (MSAL)
+In this security model, the AKS cluster acts as token issuer. Microsoft Entra ID then uses OIDC to discover public signing keys and verify the authenticity of the service account token before exchanging it for a Microsoft Entra token. For your workload to exchange a service account token projected to its volume for a Microsoft Entra token, you need the Azure Identity client library in the Azure SDK or the Microsoft Authentication Library (MSAL)
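This exchange assumes the cluster's OIDC issuer and workload identity are enabled; a minimal sketch of enabling both on an existing cluster (names are placeholders) follows:

```azurecli-interactive
# Enable the OIDC issuer and workload identity on an existing cluster (illustrative names)
az aks update --resource-group myResourceGroup --name myAKSCluster \
    --enable-oidc-issuer --enable-workload-identity
```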
> [!NOTE] >
In this security model, the AKS cluster acts as token issuer. Microsoft Entra ID
export UAMI=<name for user assigned identity> export KEYVAULT_NAME=<existing keyvault name> export CLUSTER_NAME=<aks cluster name>
-
+ az account set --subscription $SUBSCRIPTION_ID ```
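As a hedged sketch, the identity's client ID and the cluster's OIDC issuer URL referenced in later steps can be captured as follows; `$RESOURCE_GROUP` is an assumed variable that isn't defined in the excerpt above:

```bash
# Capture the client ID of the user-assigned identity and the cluster's OIDC issuer URL
# (variable names follow the ones used above; RESOURCE_GROUP is an assumption)
export USER_ASSIGNED_CLIENT_ID="$(az identity show --resource-group "${RESOURCE_GROUP}" --name "${UAMI}" --query clientId --output tsv)"
export AKS_OIDC_ISSUER="$(az aks show --resource-group "${RESOURCE_GROUP}" --name "${CLUSTER_NAME}" --query oidcIssuerProfile.issuerUrl --output tsv)"
```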
In this security model, the AKS cluster acts as token issuer. Microsoft Entra ID
```bash export SERVICE_ACCOUNT_NAME="workload-identity-sa" # sample name; can be changed export SERVICE_ACCOUNT_NAMESPACE="default" # can be changed to namespace of your workload
-
+ cat <<EOF | kubectl apply -f - apiVersion: v1 kind: ServiceAccount
In this security model, the AKS cluster acts as token issuer. Microsoft Entra ID
spec: provider: azure parameters:
- usePodIdentity: "false"
+ usePodIdentity: "false"
clientID: "${USER_ASSIGNED_CLIENT_ID}" # Setting this to use workload identity keyvaultName: ${KEYVAULT_NAME} # Set to the name of your key vault cloudName: "" # [OPTIONAL for Azure] if not provided, the Azure environment defaults to AzurePublicCloud
In this security model, the AKS cluster acts as token issuer. Microsoft Entra ID
apiVersion: v1 metadata: name: busybox-secrets-store-inline-wi
- labels:
+ labels:
azure.workload.identity/use: "true" spec: serviceAccountName: "workload-identity-sa"
In this security model, the AKS cluster acts as token issuer. Microsoft Entra ID
readOnly: true volumeAttributes: secretProviderClass: "azure-kvname-wi"
- EOF
+ EOF
``` <a name='access-with-a-user-assigned-managed-identity'></a>
-## Access with managed identity
+## Access with managed identity
-A [Microsoft Entra Managed ID][managed-identity] is an identity that an administrator uses to authenticate themselves against other Azure services. The managed identity uses RBAC to federate with external identity providers.
+A [Microsoft Entra Managed ID][managed-identity] is an identity that an administrator uses to authenticate themselves against other Azure services. The managed identity uses RBAC to federate with external identity providers.
In this security model, you can grant access to your cluster's resources to team members or tenants sharing a managed role. The role is checked for scope to access the keyvault and other credentials. When you [enabled the Azure Key Vault provider for Secrets Store CSI Driver on your AKS Cluster](./csi-secrets-store-driver.md#create-an-aks-cluster-with-azure-key-vault-provider-for-secrets-store-csi-driver-support), it created a user identity.
In this security model, you can grant access to your cluster's resources to team
Alternatively, you can create a new managed identity and assign it to your virtual machine (VM) scale set or to each VM instance in your availability set using the following commands. ```azurecli-interactive
- az identity create -g <resource-group> -n <identity-name>
+ az identity create -g <resource-group> -n <identity-name>
az vmss identity assign -g <resource-group> -n <agent-pool-vmss> --identities <identity-resource-id> az vm identity assign -g <resource-group> -n <agent-pool-vm> --identities <identity-resource-id> ```
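The identity also needs permission to read secrets from the key vault. A hedged sketch using Azure RBAC follows; the role name and the assumption that the vault uses RBAC authorization are illustrative, not taken from this article:

```azurecli-interactive
# Grant the identity read access to secrets (assumes the key vault uses Azure RBAC authorization)
IDENTITY_CLIENT_ID=$(az identity show -g <resource-group> -n <identity-name> --query clientId -o tsv)
KEYVAULT_SCOPE=$(az keyvault show -g <resource-group> -n <keyvault-name> --query id -o tsv)
az role assignment create --role "Key Vault Secrets User" --assignee "$IDENTITY_CLIENT_ID" --scope "$KEYVAULT_SCOPE"
```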
aks Deploy Confidential Containers Default Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/deploy-confidential-containers-default-policy.md
Title: Deploy an AKS cluster with Confidential Containers (preview)
description: Learn how to create an Azure Kubernetes Service (AKS) cluster with Confidential Containers (preview) and a default security policy by using the Azure CLI. Last updated 01/10/2024-+ # Deploy an AKS cluster with Confidential Containers and a default policy
To configure the workload identity, perform the following steps described in the
The following steps configure end-to-end encryption for Kafka messages using encryption keys managed by [Azure Managed Hardware Security Modules][azure-managed-hsm] (mHSM). The key is only released when the Kafka consumer runs within a Confidential Container with an Azure attestation secret provisioning container injected in to the pod.
-This configuration is basedon the following four components:
+This configuration is based on the following four components:
* Kafka Cluster: A simple Kafka cluster deployed in the Kafka namespace on the cluster. * Kafka Producer: A Kafka producer running as a vanilla Kubernetes pod that sends encrypted user-configured messages using a public key to a Kafka topic.
For this preview release, we recommend for test and evaluation purposes to eithe
>The managed identity is the value you assigned to the `USER_ASSIGNED_IDENTITY_NAME` variable. >[!NOTE]
- >To add role assignments, you must have `Microsoft.Authorization/roleAssignments/write` and `Microsoft.Authorization/roleAssignments/delete` permissions, such as [Key Vault Data Access Administrator][key-vault-data-access-admin-rbac], [User Access Administrator][user-access-admin-rbac],or [Owner][owner-rbac].
+ >To add role assignments, you must have `Microsoft.Authorization/roleAssignments/write` and `Microsoft.Authorization/roleAssignments/delete` permissions, such as [Key Vault Data Access Administrator][key-vault-data-access-admin-rbac], [User Access Administrator][user-access-admin-rbac], or [Owner][owner-rbac].
Run the following command to set the scope: ```azurecli-interactive
- AKV_SCOPE=$(az keyvault show --name <AZURE_AKV_RESOURCE_NAME> --query id --output tsv)
+ AKV_SCOPE=$(az keyvault show --name <AZURE_AKV_RESOURCE_NAME> --query id --output tsv)
``` Run the following command to assign the **Key Vault Crypto Officer** role.
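The exact role-assignment command isn't shown in this excerpt; as an illustrative sketch only (the identity lookup is an assumption), it might resemble:

```azurecli-interactive
# Illustrative only: assign the Key Vault Crypto Officer role to the managed identity
# at the key vault scope captured above ($USER_ASSIGNED_IDENTITY_NAME is an assumption)
az role assignment create \
    --role "Key Vault Crypto Officer" \
    --assignee "$(az identity show -g <resource-group> -n $USER_ASSIGNED_IDENTITY_NAME --query principalId -o tsv)" \
    --scope $AKV_SCOPE
```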
For this preview release, we recommend for test and evaluation purposes to eithe
targetPort: kafka-consumer ```
-1. Create a Kafka namespace by running the following command:
+1. Create a kafka namespace by running the following command:
```bash kubectl create namespace kafka ```
-1. Install the Kafka cluster in the Kafka namespace by running the following command::
+1. Install the Kafka cluster in the kafka namespace by running the following command:
```bash kubectl create -f 'https://strimzi.io/install/latest?namespace=kafka' -n kafka ```
-1. Run the following command to apply the `Kafka` cluster CR file.
+1. Run the following command to apply the `kafka` cluster CR file.
```bash kubectl apply -f https://strimzi.io/examples/latest/kafka/kafka-persistent-single.yaml -n kafka
For this preview release, we recommend for test and evaluation purposes to eithe
```
-1. Prepare the RSA Encryption/Decryption key by [https://github.com/microsoft/confidential-container-demos/blob/main/kafka/setup-key.sh] the Bash script for the workload from GitHub. Save the file as `setup-key.sh`.
+1. Prepare the RSA Encryption/Decryption key using the [bash script](https://github.com/microsoft/confidential-container-demos/raw/main/kafka/setup-key.sh) for the workload from GitHub. Save the file as `setup-key.sh`.
1. Set the `MAA_ENDPOINT` environmental variable to match the value for the `SkrClientMAAEndpoint` from the `consumer.yaml` manifest file by running the following command.
For this preview release, we recommend for test and evaluation purposes to eithe
1. Get the IP address of the web service using the following command: ```bash
- kubectl get svc consumer -n kafka
+ kubectl get svc consumer -n kafka
```
-Copy and paste the external IP address of the consumer service into your browser and observe the decrypted message.
+1. Copy and paste the external IP address of the consumer service into your browser and observe the decrypted message.
-The following resemblers the output of the command:
+ The following resembles the output of the command:
-```output
-Welcome to Confidential Containers on AKS!
-Encrypted Kafka Message:
-Msg 1: Azure Confidential Computing
-```
+ ```output
+ Welcome to Confidential Containers on AKS!
+ Encrypted Kafka Message:
+ Msg 1: Azure Confidential Computing
+ ```
-You should also attempt to run the consumer as a regular Kubernetes pod by removing the `skr container` and `kata-cc runtime class` spec. Since you aren't running the consumer with kata-cc runtime class, you no longer need the policy.
+1. You should also attempt to run the consumer as a regular Kubernetes pod by removing the `skr container` and `kata-cc runtime class` spec. Since you aren't running the consumer with kata-cc runtime class, you no longer need the policy.
-Remove the entire policy and observe the messages again in the browser after redeploying the workload. Messages appear as base64-encoded ciphertext because the private encryption key can't be retrieved. The key can't be retrieved because the consumer is no longer running in a confidential environment, and the `skr container` is missing, preventing decryption of messages.
+1. Remove the entire policy and observe the messages again in the browser after redeploying the workload. Messages appear as base64-encoded ciphertext because the private encryption key can't be retrieved. The key can't be retrieved because the consumer is no longer running in a confidential environment, and the `skr container` is missing, preventing decryption of messages.
## Cleanup When you're finished evaluating this feature, to avoid Azure charges, clean up your unnecessary resources. If you deployed a new cluster as part of your evaluation or testing, you can delete the cluster using the [az aks delete][az-aks-delete] command. ```azurecli-interactive
-az aks delete --resource-group myResourceGroup --name myAKSCluster
+az aks delete --resource-group myResourceGroup --name myAKSCluster
``` If you enabled Confidential Containers (preview) on an existing cluster, you can remove the pod(s) using the [kubectl delete pod][kubectl-delete-pod] command.
aks Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/faq.md
Title: Frequently asked questions for Azure Kubernetes Service (AKS)
description: Find answers to some of the common questions about Azure Kubernetes Service (AKS). Last updated 11/06/2023-+ # Frequently asked questions about Azure Kubernetes Service (AKS)
Any patch, including a security patch, is automatically applied to the AKS clust
## What is the purpose of the AKS Linux Extension I see installed on my Linux Virtual Machine Scale Sets instances?
-The AKS Linux Extension is an Azure VM extension that installs and configures monitoring tools on Kubernetes worker nodes. The extension is installed on all new and existing Linux nodes. It configures the following monitoring tools:
+The AKS Linux Extension is an Azure VM extension that installs and configures monitoring tools on Kubernetes worker nodes. The extension is installed on all new and existing Linux nodes. It configures the following monitoring tools:
- [Node-exporter](https://github.com/prometheus/node_exporter): Collects hardware telemetry from the virtual machine and makes it available using a metrics endpoint. Then, a monitoring tool, such as Prometheus, is able to scrape these metrics. - [Node-problem-detector](https://github.com/kubernetes/node-problem-detector): Aims to make various node problems visible to upstream layers in the cluster management stack. It's a systemd unit that runs on each node, detects node problems, and reports them to the cluster's API server using Events and NodeConditions.
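As a minimal sketch, the conditions and events surfaced by node-problem-detector can be inspected with `kubectl` (the node name is a placeholder):

```bash
# Inspect node conditions (including those surfaced by node-problem-detector)
kubectl describe node <node-name>

# Or list only the condition types for all nodes
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[*].type}{"\n"}{end}'
```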
aks Howto Deploy Java Liberty App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/howto-deploy-java-liberty-app.md
description: Deploy a Java application with Open Liberty/WebSphere Liberty on an
Last updated 12/21/2022 keywords: java, jakartaee, javaee, microprofile, open-liberty, websphere-liberty, aks, kubernetes-+ # Deploy a Java application with Open Liberty or WebSphere Liberty on an Azure Kubernetes Service (AKS) cluster
The following steps guide you to create a Liberty runtime on AKS. After completi
1. Visit the [Azure portal](https://portal.azure.com/). In the search box at the top of the page, type *IBM WebSphere Liberty and Open Liberty on Azure Kubernetes Service*. When the suggestions start appearing, select the one and only match that appears in the **Marketplace** section. If you prefer, you can go directly to the offer with this shortcut link: [https://aka.ms/liberty-aks](https://aka.ms/liberty-aks). 1. Select **Create**.
-1. In the **Basics** pane, create a new resource group. Because resource groups must be unique within a subscription, pick a unique name. An easy way to have unique names is to use a combination of your initials, today's date, and some identifier. For example, `ejb0913-java-liberty-project-rg`. Select *East US* as **Region**. Select **Next** to **AKS** pane.
+1. In the **Basics** pane, create a new resource group. Because resource groups must be unique within a subscription, pick a unique name. An easy way to have unique names is to use a combination of your initials, today's date, and some identifier. For example, `ejb0913-java-liberty-project-rg`. Select *East US* as **Region**. Select **Next** to **AKS** pane.
1. This pane allows you to select an existing AKS cluster and Azure Container Registry (ACR), instead of causing the deployment to create a new one, if desired. This capability enables you to use the sidecar pattern, as shown in the [Azure architecture center](/azure/architecture/patterns/sidecar). You can also adjust the settings for the size and number of the virtual machines in the AKS node pool. Leave all other values at the defaults and select **Next** to **Load balancing** pane. 1. Next to **Connect to Azure Application Gateway?** select **Yes**. This section lets you customize the following deployment options. 1. You can customize the virtual network and subnet into which the deployment will place the resources. Leave these values at their defaults.
You can now run and test the project locally before deploying to Azure. For conv
### Build image for AKS deployment
-You can now run the `docker build` command to build the image.
+You can now run the `docker build` command to build the image.
```bash cd <path-to-your-repo>/java-app/target
The following steps deploy and test the application.
``` Copy the value of **ADDRESS** from the output, this is the frontend public IP address of the deployed Azure Application Gateway.
-
+ 1. Go to `https://<ADDRESS>` to test the application. For your convenience, this shell command will create an environment variable whose value you can paste straight into the browser.
-
+ ```bash export APP_URL=https://$(kubectl get ingress | grep javaee-cafe-cluster-agic-ingress | cut -d " " -f14)/ echo $APP_URL
aks Http Application Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/http-application-routing.md
Title: HTTP application routing add-on for Azure Kubernetes Service (AKS) (retired) description: Use the HTTP application routing add-on to access applications deployed on Azure Kubernetes Service (AKS) (retired). -+ Last updated 04/05/2023
The add-on deploys two components: a [Kubernetes ingress controller][ingress] an
apiVersion: apps/v1 kind: Deployment metadata:
- name: aks-helloworld
+ name: aks-helloworld
spec: replicas: 1 selector:
The add-on deploys two components: a [Kubernetes ingress controller][ingress] an
apiVersion: v1 kind: Service metadata:
- name: aks-helloworld
+ name: aks-helloworld
spec: type: ClusterIP ports:
The add-on deploys two components: a [Kubernetes ingress controller][ingress] an
- path: / pathType: Prefix backend:
- service:
+ service:
name: aks-helloworld
- port:
+ port:
number: 80 ```
aks Ingress Tls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/ingress-tls.md
Title: Use TLS with an ingress controller on Azure Kubernetes Service (AKS)
description: Learn how to install and configure an ingress controller that uses TLS in an Azure Kubernetes Service (AKS) cluster. -+
In the following example, traffic is routed as such:
backend: service: name: aks-helloworld-one
- port:
+ port:
number: 80 ```
Alternatively, you can delete the resource individually.
$ helm list --namespace ingress-basic NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
- cert-manager ingress-basic 1 2020-01-15 10:23:36.515514 -0600 CST deployed cert-manager-v0.13.0 v0.13.0
- nginx ingress-basic 1 2020-01-15 10:09:45.982693 -0600 CST deployed nginx-ingress-1.29.1 0.27.0
+ cert-manager ingress-basic 1 2020-01-15 10:23:36.515514 -0600 CST deployed cert-manager-v0.13.0 v0.13.0
+ nginx ingress-basic 1 2020-01-15 10:09:45.982693 -0600 CST deployed nginx-ingress-1.29.1 0.27.0
``` 3. Uninstall the releases using the `helm uninstall` command. The following example uninstalls the NGINX ingress and cert-manager deployments.
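Based on the release names and namespace shown in the listing above, a hedged sketch of the uninstall step:

```bash
# Uninstall the cert-manager and NGINX ingress releases from the ingress-basic namespace
helm uninstall cert-manager nginx --namespace ingress-basic
```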
aks Istio Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/istio-upgrade.md
This article addresses upgrade experiences for Istio-based service mesh add-on f
## How Istio components are upgraded
-**Minor version:** Currently the Istio add-on only has minor version 1.17 available. Minor version upgrade experiences are planned for when newer versions of Istio (1.18) are introduced.
+### Minor version upgrade
-**Patch version:**
+The Istio add-on allows upgrading the minor version using the [canary upgrade process][istio-canary-upstream]. When an upgrade is initiated, the control plane of the new (canary) revision is deployed alongside the old (stable) revision's control plane. You can then manually roll over data plane workloads while using monitoring tools to track the health of workloads during this process. If you don't observe any issues with the health of your workloads, you can complete the upgrade so that only the new revision remains on the cluster. Otherwise, you can roll back to the previous revision of Istio.
+
+If the cluster is currently using a supported minor version of Istio, upgrades are only allowed one minor version at a time. If the cluster is using an unsupported version of Istio, you must upgrade to the lowest supported minor version of Istio for that Kubernetes version. After that, upgrades can again be done one minor version at a time.
+
+The following example illustrates how to upgrade from revision `asm-1-17` to `asm-1-18`. The steps are the same for all minor upgrades.
+
+1. Use the [az aks mesh get-upgrades](/cli/azure/aks/mesh#az-aks-mesh-get-upgrades) command to check which revisions are available for the cluster as upgrade targets:
+
+ ```bash
+ az aks mesh get-upgrades --resource-group $RESOURCE_GROUP --name $CLUSTER
+ ```
+
+ If you expect to see a newer revision not returned by this command, you may need to upgrade your AKS cluster first so that it's compatible with the newest revision.
+
+1. Initiate a canary upgrade from revision `asm-1-17` to `asm-1-18` using [az aks mesh upgrade start](/cli/azure/aks/mesh#az-aks-mesh-upgrade-start):
+
+ ```bash
+ az aks mesh upgrade start --resource-group $RESOURCE_GROUP --name $CLUSTER --revision asm-1-18
+ ```
+
+ A canary upgrade means the 1.18 control plane is deployed alongside the 1.17 control plane. They continue to coexist until you either complete or roll back the upgrade.
+
+1. Verify control plane pods corresponding to both `asm-1-17` and `asm-1-18` exist:
+
+ * Verify `istiod` pods:
+
+ ```bash
+ kubectl get pods -n aks-istio-system
+ ```
+
+ Example output:
+
+ ```
+ NAME READY STATUS RESTARTS AGE
+ istiod-asm-1-17-55fccf84c8-dbzlt 1/1 Running 0 58m
+ istiod-asm-1-17-55fccf84c8-fg8zh 1/1 Running 0 58m
+ istiod-asm-1-18-f85f46bf5-7rwg4 1/1 Running 0 51m
+ istiod-asm-1-18-f85f46bf5-8p9qx 1/1 Running 0 51m
+ ```
+
+ * If ingress is enabled, verify ingress pods:
+
+ ```bash
+ kubectl get pods -n aks-istio-ingress
+ ```
+
+ Example output:
+
+ ```
+ NAME READY STATUS RESTARTS AGE
+ aks-istio-ingressgateway-external-asm-1-17-58f889f99d-qkvq2 1/1 Running 0 59m
+ aks-istio-ingressgateway-external-asm-1-17-58f889f99d-vhtd5 1/1 Running 0 58m
+ aks-istio-ingressgateway-external-asm-1-18-7466f77bb9-ft9c8 1/1 Running 0 51m
+ aks-istio-ingressgateway-external-asm-1-18-7466f77bb9-wcb6s 1/1 Running 0 51m
+ aks-istio-ingressgateway-internal-asm-1-17-579c5d8d4b-4cc2l 1/1 Running 0 58m
+ aks-istio-ingressgateway-internal-asm-1-17-579c5d8d4b-jjc7m 1/1 Running 0 59m
+ aks-istio-ingressgateway-internal-asm-1-18-757d9b5545-g89s4 1/1 Running 0 51m
+ aks-istio-ingressgateway-internal-asm-1-18-757d9b5545-krq9w 1/1 Running 0 51m
+ ```
+
+ Observe that ingress gateway pods of both revisions are deployed side-by-side. However, the service and its IP remain immutable.
+
+1. Relabel the namespace so that any new pods get the Istio sidecar associated with the new revision and its control plane:
+
+ ```bash
+ kubectl label namespace default istio.io/rev=asm-1-18 --overwrite
+ ```
+
+ Relabeling doesn't affect your workloads until they're restarted.
+
+1. Individually roll over each of your application workloads by restarting them. For example:
+
+ ```bash
+ kubectl rollout restart deployment <deployment name> -n <deployment namespace>
+ ```
+
+1. Check your monitoring tools and dashboards to determine whether your workloads are all running in a healthy state after the restart. Based on the outcome, you have two options:
+
+ * **Complete the canary upgrade**: If you're satisfied that the workloads are all running in a healthy state as expected, you can complete the canary upgrade. This will remove the previous revision's control plane and leave behind the new revision's control plane on the cluster. Run the following command to complete the canary upgrade:
+
+ ```bash
+ az aks mesh upgrade complete --resource-group $RESOURCE_GROUP --name $CLUSTER
+ ```
+
+ * **Rollback the canary upgrade**: In case you observe any issues with the health of your workloads, you can roll back to the previous revision of Istio:
+
+ * Relabel the namespace to the previous revision
+
+ ```bash
+ kubectl label namespace default istio.io/rev=asm-1-17 --overwrite
+ ```
+
+ * Roll back the workloads to use the sidecar corresponding to the previous Istio revision by restarting these workloads again:
+
+ ```bash
+ kubectl rollout restart deployment <deployment name> -n <deployment namespace>
+ ```
+
+ * Roll back the control plane to the previous revision:
+
+ ```
+ az aks mesh upgrade rollback --resource-group $RESOURCE_GROUP --name $CLUSTER
+ ```
+
+> [!NOTE]
+> Manually relabeling namespaces when moving them to a new revision can be tedious and error-prone. [Revision tags](https://istio.io/latest/docs/setup/upgrade/canary/#stable-revision-labels) solve this problem. Revision tags are stable identifiers that point to revisions and can be used to avoid relabeling namespaces. Rather than relabeling the namespace, a mesh operator can simply change the tag to point to a new revision. All namespaces labeled with that tag will be updated at the same time. However, note that you still need to restart the workloads to make sure the correct version of `istio-proxy` sidecars are injected.
+
+### Patch version upgrade
* Istio add-on patch version availability information is published in [AKS weekly release notes][aks-release-notes].
-* Patches are rolled out automatically for istiod and ingress pods as part of these AKS weekly releases.
+* Patches are rolled out automatically for istiod and ingress pods as part of these AKS weekly releases, which respect the `default` [planned maintenance window](./planned-maintenance.md) set up for the cluster.
* User needs to initiate patches to Istio proxy in their workloads by restarting the pods for reinjection: * Check the version of the Istio proxy intended for new or restarted pods. This version is the same as the version of the istiod and Istio ingress pods after they were patched:
This article addresses upgrade experiences for Istio-based service mesh add-on f
productpage-v1-979d4d9fc-p4764: docker.io/istio/examples-bookinfo-productpage-v1:1.17.0, mcr.microsoft.com/oss/istio/proxyv2:1.17.2-distroless ```
-[aks-release-notes]: https://github.com/Azure/AKS/releases
+[aks-release-notes]: https://github.com/Azure/AKS/releases
+[istio-canary-upstream]: https://istio.io/latest/docs/setup/upgrade/canary/
aks Quick Kubernetes Deploy Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-bicep.md
Title: 'Quickstart: Deploy an Azure Kubernetes Service (AKS) cluster using Bicep
description: Learn how to quickly deploy a Kubernetes cluster using a Bicep file and deploy an application in Azure Kubernetes Service (AKS). Last updated 12/27/2023-+ #Customer intent: As a developer or cluster operator, I want to quickly deploy an AKS cluster and deploy an application so that I can see how to run applications using the managed Kubernetes service in Azure.
To deploy the application, you use a manifest file to create all the objects req
[rabbitmq_management,rabbitmq_prometheus,rabbitmq_amqp1_0]. kind: ConfigMap metadata:
- name: rabbitmq-enabled-plugins
+ name: rabbitmq-enabled-plugins
apiVersion: v1 kind: Service
To deploy the application, you use a manifest file to create all the objects req
memory: 50Mi limits: cpu: 75m
- memory: 128Mi
+ memory: 128Mi
apiVersion: v1 kind: Service
To deploy the application, you use a manifest file to create all the objects req
ports: - containerPort: 8080 name: store-front
- env:
+ env:
- name: VUE_APP_ORDER_SERVICE_URL value: "http://order-service:3000/" - name: VUE_APP_PRODUCT_SERVICE_URL
aks Quick Kubernetes Deploy Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-cli.md
Title: 'Quickstart: Deploy an Azure Kubernetes Service (AKS) cluster using Azure
description: Learn how to quickly deploy a Kubernetes cluster and deploy an application in Azure Kubernetes Service (AKS) using Azure CLI. Last updated 01/10/2024-+ #Customer intent: As a developer or cluster operator, I want to deploy an AKS cluster and deploy an application so I can see how to run applications using the managed Kubernetes service in Azure.
To deploy the application, you use a manifest file to create all the objects req
[rabbitmq_management,rabbitmq_prometheus,rabbitmq_amqp1_0]. kind: ConfigMap metadata:
- name: rabbitmq-enabled-plugins
+ name: rabbitmq-enabled-plugins
apiVersion: v1 kind: Service
To deploy the application, you use a manifest file to create all the objects req
memory: 50Mi limits: cpu: 75m
- memory: 128Mi
+ memory: 128Mi
apiVersion: v1 kind: Service
To deploy the application, you use a manifest file to create all the objects req
ports: - containerPort: 8080 name: store-front
- env:
+ env:
- name: VUE_APP_ORDER_SERVICE_URL value: "http://order-service:3000/" - name: VUE_APP_PRODUCT_SERVICE_URL
aks Quick Kubernetes Deploy Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-portal.md
description: Learn how to quickly deploy a Kubernetes cluster and deploy an application in Azure Kubernetes Service (AKS) using the Azure portal. Last updated 01/11/2024-+ #Customer intent: As a developer or cluster operator, I want to quickly deploy an AKS cluster and deploy an application so that I can see how to run and monitor applications using the managed Kubernetes service in Azure.
To deploy the application, you use a manifest file to create all the objects req
[rabbitmq_management,rabbitmq_prometheus,rabbitmq_amqp1_0]. kind: ConfigMap metadata:
- name: rabbitmq-enabled-plugins
+ name: rabbitmq-enabled-plugins
apiVersion: v1 kind: Service
To deploy the application, you use a manifest file to create all the objects req
memory: 50Mi limits: cpu: 75m
- memory: 128Mi
+ memory: 128Mi
apiVersion: v1 kind: Service
To deploy the application, you use a manifest file to create all the objects req
ports: - containerPort: 8080 name: store-front
- env:
+ env:
- name: VUE_APP_ORDER_SERVICE_URL value: "http://order-service:3000/" - name: VUE_APP_PRODUCT_SERVICE_URL
aks Quick Kubernetes Deploy Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-powershell.md
Title: 'Quickstart: Deploy an Azure Kubernetes Service (AKS) cluster using Azure
description: Learn how to quickly deploy a Kubernetes cluster and deploy an application in Azure Kubernetes Service (AKS) using PowerShell. Last updated 01/11/2024-+ #Customer intent: As a developer or cluster operator, I want to quickly deploy an AKS cluster and deploy an application so that I can see how to run applications using the managed Kubernetes service in Azure.
To deploy the application, you use a manifest file to create all the objects req
[rabbitmq_management,rabbitmq_prometheus,rabbitmq_amqp1_0]. kind: ConfigMap metadata:
- name: rabbitmq-enabled-plugins
+ name: rabbitmq-enabled-plugins
apiVersion: v1 kind: Service
To deploy the application, you use a manifest file to create all the objects req
memory: 50Mi limits: cpu: 75m
- memory: 128Mi
+ memory: 128Mi
apiVersion: v1 kind: Service
To deploy the application, you use a manifest file to create all the objects req
ports: - containerPort: 8080 name: store-front
- env:
+ env:
- name: VUE_APP_ORDER_SERVICE_URL value: "http://order-service:3000/" - name: VUE_APP_PRODUCT_SERVICE_URL
aks Tutorial Kubernetes Workload Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/tutorial-kubernetes-workload-identity.md
Title: Tutorial - Use a workload identity with an application on Azure Kubernetes Service (AKS) description: In this Azure Kubernetes Service (AKS) tutorial, you deploy an Azure Kubernetes Service cluster and configure an application to use a workload identity. -+ Last updated 05/24/2023
To help simplify steps to configure the identities required, the steps below def
2. Add a secret to the vault using the [az keyvault secret set][az-keyvault-secret-set] command. The password is the value you specified for the environment variable `KEYVAULT_SECRET_NAME` and stores the value of **Hello!** in it. ```azurecli-interactive
- az keyvault secret set --vault-name "${KEYVAULT_NAME}" --name "${KEYVAULT_SECRET_NAME}" --value 'Hello!'
+ az keyvault secret set --vault-name "${KEYVAULT_NAME}" --name "${KEYVAULT_SECRET_NAME}" --value 'Hello!'
``` 3. Add the Key Vault URL to the environment variable `KEYVAULT_URL` using the [az keyvault show][az-keyvault-show] command.
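A hedged sketch of that step, assuming a `$RESOURCE_GROUP` variable that isn't defined in this excerpt:

```azurecli-interactive
# Capture the vault URI for later use (RESOURCE_GROUP is an assumed variable)
export KEYVAULT_URL="$(az keyvault show --resource-group "${RESOURCE_GROUP}" --name "${KEYVAULT_NAME}" --query properties.vaultUri --output tsv)"
```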
aks Limit Egress Traffic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/limit-egress-traffic.md
Title: Limit Network Traffic with Azure Firewall in Azure Kubernetes Service (AKS) description: Learn how to control egress traffic with Azure Firewall to set restrictions for outbound network connections in AKS clusters. -+ Last updated 12/05/2023
#Customer intent: As a cluster operator, I want to restrict egress traffic for nodes to only access defined ports and addresses and improve cluster security.
-# Limit network traffic with Azure Firewall in Azure Kubernetes Service (AKS)
+# Limit network traffic with Azure Firewall in Azure Kubernetes Service (AKS)
Learn how to use the [Outbound network and FQDN rules for AKS clusters][outbound-fqdn-rules] to control egress traffic using the Azure Firewall in AKS. To simplify this configuration, Azure Firewall provides an Azure Kubernetes Service (`AzureKubernetesService`) Fully Qualified Domain Name (FQDN) tag that restricts outbound traffic from the AKS cluster. This article shows how you can configure your AKS cluster traffic rules through Azure Firewall.
If you don't have user-assigned identities, follow the steps in this section. If
The output should resemble the following example output: ```output
- {
+ {
"clientId": "<client-id>", "clientSecretUrl": "<clientSecretUrl>",
- "id": "/subscriptions/<subscriptionid>/resourcegroups/aks-egress-rg/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myIdentity",
+ "id": "/subscriptions/<subscriptionid>/resourcegroups/aks-egress-rg/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myIdentity",
"location": "eastus", "name": "myIdentity", "principalId": "<principal-id>",
- "resourceGroup": "aks-egress-rg",
+ "resourceGroup": "aks-egress-rg",
"tags": {}, "tenantId": "<tenant-id>", "type": "Microsoft.ManagedIdentity/userAssignedIdentities"
If you don't have user-assigned identities, follow the steps in this section. If
{ "clientId": "<client-id>", "clientSecretUrl": "<clientSecretUrl>",
- "id": "/subscriptions/<subscriptionid>/resourcegroups/aks-egress-rg/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myKubeletIdentity",
+ "id": "/subscriptions/<subscriptionid>/resourcegroups/aks-egress-rg/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myKubeletIdentity",
"location": "westus2", "name": "myKubeletIdentity", "principalId": "<principal-id>",
- "resourceGroup": "aks-egress-rg",
+ "resourceGroup": "aks-egress-rg",
"tags": {}, "tenantId": "<tenant-id>", "type": "Microsoft.ManagedIdentity/userAssignedIdentities"
aks Manage Ssh Node Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/manage-ssh-node-access.md
Use the [az aks create][az-aks-create] command to deploy an AKS cluster with an
|SSH parameter |Description |Default value | |--|--|--| |--generate-ssh-key |If you don't have your own SSH key, specify `--generate-ssh-key`. The Azure CLI first looks for the key in the `~/.ssh/` directory. If the key exists, it's used. If the key doesn't exist, the Azure CLI automatically generates a set of SSH keys and saves them in the specified or default directory.||
-|--ssh-key-vaule |Public key path or key contents to install on node VMs for SSH access. For example, `ssh-rsa AAAAB...snip...UcyupgH azureuser@linuxvm`.|`~.ssh\id_rsa.pub` |
+|--ssh-key-value |Public key path or key contents to install on node VMs for SSH access. For example, `ssh-rsa AAAAB...snip...UcyupgH azureuser@linuxvm`.|`~/.ssh/id_rsa.pub` |
|--no-ssh-key | If you don't require an SSH key, specify this argument. However, AKS automatically generates a set of SSH keys because the Azure Virtual Machine resource dependency doesn't support an empty SSH key file. As a result, the keys aren't returned and can't be used to SSH into the node VMs. || >[!NOTE]
aks Node Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/node-access.md
description: Learn how to connect to Azure Kubernetes Service (AKS) cluster node
Last updated 01/08/2024 -+ #Customer intent: As a cluster operator, I want to learn how to connect to virtual machines in an AKS cluster to perform maintenance or troubleshoot a problem. # Connect to Azure Kubernetes Service (AKS) cluster nodes for maintenance or troubleshooting
-Throughout the lifecycle of your Azure Kubernetes Service (AKS) cluster, you eventually need to directly access an AKS node. This access could be for maintenance, log collection, or troubleshooting operations.
+Throughout the lifecycle of your Azure Kubernetes Service (AKS) cluster, you eventually need to directly access an AKS node. This access could be for maintenance, log collection, or troubleshooting operations.
-You access a node through authentication, which methods vary depending on your Node OS and method of connection. You securely authenticate against AKS Linux and Windows nodes using SSH. Alternatively, for Windows Servers you can also connect to Windows Server nodes using the [remote desktop protocol (RDP)][aks-windows-rdp].
+You access a node through authentication, which methods vary depending on your Node OS and method of connection. You securely authenticate against AKS Linux and Windows nodes using SSH. Alternatively, for Windows Servers you can also connect to Windows Server nodes using the [remote desktop protocol (RDP)][aks-windows-rdp].
For security reasons, AKS nodes aren't exposed to the internet. Instead, to connect directly to any AKS nodes, you need to use either `kubectl debug` or the host's private IP address.
This guide shows you how to create a connection to an AKS node and update the SS
To follow along the steps, you need to use Azure CLI that supports version 2.0.64 or later. Run `az --version` to check the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
-Complete these steps if you don't have an SSH key. Create an SSH key depending on your Node OS Image, for [macOS and Linux][ssh-nix], or [Windows][ssh-windows]. Make sure you save the key pair in the OpenSSH format, avoid unsupported formats such as `.ppk`. Next, refer to [Manage SSH configuration][manage-ssh-node-access] to add the key to your cluster.
+Complete these steps if you don't have an SSH key. Create an SSH key depending on your Node OS Image, for [macOS and Linux][ssh-nix], or [Windows][ssh-windows]. Make sure you save the key pair in the OpenSSH format, avoid unsupported formats such as `.ppk`. Next, refer to [Manage SSH configuration][manage-ssh-node-access] to add the key to your cluster.
## Linux and macOS
To create an interactive shell connection, use the `kubectl debug` command to ru
Sample output: ```output
- NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE
- aks-nodepool1-37663765-vmss000000 Ready agent 166m v1.25.6 10.224.0.33 <none> Ubuntu 22.04.2 LTS
- aks-nodepool1-37663765-vmss000001 Ready agent 166m v1.25.6 10.224.0.4 <none> Ubuntu 22.04.2 LTS
- aksnpwin000000 Ready agent 160m v1.25.6 10.224.0.62 <none> Windows Server 2022 Datacenter
+ NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE
+ aks-nodepool1-37663765-vmss000000 Ready agent 166m v1.25.6 10.224.0.33 <none> Ubuntu 22.04.2 LTS
+ aks-nodepool1-37663765-vmss000001 Ready agent 166m v1.25.6 10.224.0.4 <none> Ubuntu 22.04.2 LTS
+ aksnpwin000000 Ready agent 160m v1.25.6 10.224.0.62 <none> Windows Server 2022 Datacenter
``` 2. Use the `kubectl debug` command to start a privileged container on your node and connect to it.
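As a hedged sketch using a node name from the sample output above (the debug container image is an assumption, not taken from this excerpt):

```bash
# Start a privileged debugging container on the node and attach to it
kubectl debug node/aks-nodepool1-37663765-vmss000000 -it --image=mcr.microsoft.com/cbl-mariner/busybox:2.0
```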
kubectl delete pod node-debugger-aks-nodepool1-37663765-vmss000000-bkmmx
## Private IP Method
-If you don't have access to the Kubernetes API, you can get access to properties such as ```Node IP``` and ```Node Name``` through the [AKS Agent Pool Preview API][agent-pool-rest-api] (preview version 07-02-2023 or above) to troubleshoot node-specific issues in your AKS node pools.
+If you don't have access to the Kubernetes API, you can get access to properties such as ```Node IP``` and ```Node Name``` through the [AKS Agent Pool Preview API][agent-pool-rest-api] (preview version 07-02-2023 or above) to troubleshoot node-specific issues in your AKS node pools.
### Create an interactive shell connection to a node using the IP address
For convenience, the nodepools are exposed when the node has a public IP assigne
Sample output: ```output
- Name Ip
+ Name Ip
-- aks-nodepool1-33555069-vmss000000 10.224.0.5,family:IPv4; aks-nodepool1-33555069-vmss000001 10.224.0.6,family:IPv4;
- aks-nodepool1-33555069-vmss000002 10.224.0.4,family:IPv4;
+ aks-nodepool1-33555069-vmss000002 10.224.0.4,family:IPv4;
``` To target a specific node inside the nodepool, add a `--machine-name` flag:
For convenience, the nodepools are exposed when the node has a public IP assigne
Sample output: ```output
- Name Ip
+ Name Ip
-- aks-nodepool1-33555069-vmss000000 10.224.0.5,family:IPv4; ```
To connect to another node in the cluster, use the `kubectl debug` command. For
> [!IMPORTANT] >
-> The following steps for creating the SSH connection to the Windows Server node from another node can only be used if you created your AKS cluster using the Azure CLI and the `--generate-ssh-keys` parameter. The AKS Update command can also be used to manage, create SSH keys on an existing AKS cluster. For more information, see [manage SSH node access][manage-ssh-node-access].
+> The following steps for creating the SSH connection to the Windows Server node from another node can only be used if you created your AKS cluster using the Azure CLI and the `--generate-ssh-keys` parameter. The AKS Update command can also be used to manage, create SSH keys on an existing AKS cluster. For more information, see [manage SSH node access][manage-ssh-node-access].
Finish the prior steps to use kubectl debug, then return to this section, as you need to run the `kubectl debug` in your proxy.
Finish the prior steps to use kubectl debug, then return to this section, as you
Sample output: ```output
- NAME INTERNAL_IP
- aks-nodepool1-19409214-vmss000003 10.224.0.8
+ NAME INTERNAL_IP
+ aks-nodepool1-19409214-vmss000003 10.224.0.8
``` In the previous example, *10.224.0.62* is the internal IP address of the Windows Server node.
To learn about managing your SSH keys, see [Manage SSH configuration][manage-ssh
[view-control-plane-logs]: monitor-aks-reference.md#resource-logs [install-azure-cli]: /cli/azure/install-azure-cli [aks-windows-rdp]: rdp.md
-[azure-bastion]: ../bastion/bastion-overview.md
+[azure-bastion]: ../bastion/bastion-overview.md
[ssh-nix]: ../virtual-machines/linux/mac-create-ssh-keys.md [ssh-windows]: ../virtual-machines/linux/ssh-from-windows.md [agent-pool-rest-api]: /rest/api/aks/agent-pools/get#agentpool
aks Node Image Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/node-image-upgrade.md
Title: Upgrade Azure Kubernetes Service (AKS) node images description: Learn how to upgrade the images on AKS cluster nodes and node pools. -+ Last updated 03/28/2023
This article shows you how to upgrade AKS cluster node images and how to update
> [!NOTE] > The AKS cluster must use virtual machine scale sets for the nodes.
->
+>
> It's not possible to downgrade a node image version (for example *AKSUbuntu-2204 to AKSUbuntu-1804*, or *AKSUbuntu-2204-202308.01.0 to AKSUbuntu-2204-202307.27.0*). ## Check for available node image upgrades
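As a minimal sketch (resource names are placeholders), the latest available node image version for a node pool can be checked with:

```azurecli-interactive
# Check the latest available node image version for a node pool (illustrative names)
az aks nodepool get-upgrades \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --nodepool-name mynodepool
```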
aks Node Updates Kured https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/node-updates-kured.md
Title: Handle Linux node reboots with kured
description: Learn how to update Linux nodes and automatically reboot them with kured in Azure Kubernetes Service (AKS) -+ Last updated 04/19/2023 #Customer intent: As a cluster administrator, I want to know how to automatically apply Linux updates and reboot nodes in AKS for security and/or compliance
aks Outbound Rules Control Egress https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/outbound-rules-control-egress.md
There are two options to provide access to Azure Monitor for containers:
| **`*.ods.opinsights.azure.com`** | **`HTTPS:443`** | This endpoint is used by Azure Monitor for ingesting log analytics data. | | **`*.oms.opinsights.azure.com`** | **`HTTPS:443`** | This endpoint is used by omsagent, which is used to authenticate the log analytics service. | | **`*.monitoring.azure.com`** | **`HTTPS:443`** | This endpoint is used to send metrics data to Azure Monitor. |
+| **`<cluster-region-name>.ingest.monitor.azure.com`** | **`HTTPS:443`** | This endpoint is used by Azure Monitor managed service for Prometheus metrics ingestion.|
+| **`<cluster-region-name>.handler.control.monitor.azure.com`** | **`HTTPS:443`** | This endpoint is used to fetch data collection rules for a specific cluster. |
+
+#### Microsoft Azure operated by 21Vianet required FQDN / application rules
+
+| FQDN | Port | Use |
+|--|--|-|
+| **`dc.services.visualstudio.com`** | **`HTTPS:443`** | This endpoint is used for metrics and monitoring telemetry using Azure Monitor. |
+| **`*.ods.opinsights.azure.cn`** | **`HTTPS:443`** | This endpoint is used by Azure Monitor for ingesting log analytics data. |
+| **`*.oms.opinsights.azure.cn`** | **`HTTPS:443`** | This endpoint is used by omsagent, which is used to authenticate the log analytics service. |
+| **`global.handler.control.monitor.azure.cn`** | **`HTTPS:443`** | This endpoint is used by Azure Monitor for accessing the control service. |
+| **`<cluster-region-name>.handler.control.monitor.azure.cn`** | **`HTTPS:443`** | This endpoint is used to fetch data collection rules for a specific cluster. |
+
+#### Azure US Government required FQDN / application rules
+
+| FQDN | Port | Use |
+|--|--|-|
+| **`dc.services.visualstudio.com`** | **`HTTPS:443`** | This endpoint is used for metrics and monitoring telemetry using Azure Monitor. |
+| **`*.ods.opinsights.azure.us`** | **`HTTPS:443`** | This endpoint is used by Azure Monitor for ingesting log analytics data. |
+| **`*.oms.opinsights.azure.us`** | **`HTTPS:443`** | This endpoint is used by omsagent, which is used to authenticate the log analytics service. |
+| **`global.handler.control.monitor.azure.us`** | **`HTTPS:443`** | This endpoint is used by Azure Monitor for accessing the control service. |
+| **`<cluster-region-name>.handler.control.monitor.azure.us`** | **`HTTPS:443`** | This endpoint is used to fetch data collection rules for a specific cluster. |
### Azure Policy
aks Quickstart Dapr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/quickstart-dapr.md
Last updated 12/27/2023-+ # Quickstart: Deploy an application using the Dapr cluster extension for Azure Kubernetes Service (AKS) or Arc-enabled Kubernetes
aks Rdp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/rdp.md
Title: RDP to AKS Windows Server nodes
description: Learn how to create an RDP connection with Azure Kubernetes Service (AKS) cluster Windows Server nodes for troubleshooting and maintenance tasks. -+ Last updated 04/26/2023 #Customer intent: As a cluster operator, I want to learn how to use RDP to connect to nodes in an AKS cluster to perform maintenance or troubleshoot a problem.
Last updated 04/26/2023
Throughout the lifecycle of your Azure Kubernetes Service (AKS) cluster, you may need to access an AKS Windows Server node. This access could be for maintenance, log collection, or other troubleshooting operations. You can access the AKS Windows Server nodes using RDP. For security purposes, the AKS nodes aren't exposed to the internet.
-Alternatively, if you want to SSH to your AKS Windows Server nodes, you need access to the same key-pair that was used during cluster creation. Follow the steps in [SSH into Azure Kubernetes Service (AKS) cluster nodes][ssh-steps].
+Alternatively, if you want to SSH to your AKS Windows Server nodes, you need access to the same key-pair that was used during cluster creation. Follow the steps in [SSH into Azure Kubernetes Service (AKS) cluster nodes][ssh-steps].
This article shows you how to create an RDP connection with an AKS node using their private IP addresses.
You'll need to get the subnet ID used by your Windows Server node pool and query
* The subnet ID ```azurepowershell-interactive
-$CLUSTER_RG = (Get-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster).nodeResourceGroup
+$CLUSTER_RG = (Get-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster).nodeResourceGroup
$VNET_NAME = (Get-AzVirtualNetwork -ResourceGroupName $CLUSTER_RG).Name $ADDRESS_PREFIX = (Get-AzVirtualNetwork -ResourceGroupName $CLUSTER_RG).AddressSpace | Select-Object -ExpandProperty AddressPrefixes $SUBNET_NAME = (Get-AzVirtualNetwork -ResourceGroupName $CLUSTER_RG).Subnets[0].Name
First, get the resource group and name of the NSG to add the rule to:
```azurepowershell-interactive $CLUSTER_RG = (Get-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster).nodeResourceGroup
-$NSG_NAME = (Get-AzNetworkSecurityGroup -ResourceGroupName $CLUSTER_RG).Name
+$NSG_NAME = (Get-AzNetworkSecurityGroup -ResourceGroupName $CLUSTER_RG).Name
``` Then, create the NSG rule:
Get-AzNetworkSecurityGroup -Name $NSG_NAME -ResourceGroupName $CLUSTER_RG | Add-
### [Azure CLI](#tab/azure-cli) To manage a Kubernetes cluster, you use [kubectl][kubectl], the Kubernetes command-line client. If you use Azure Cloud Shell, `kubectl` is already installed. To install `kubectl` locally, use the [az aks install-cli][az-aks-install-cli] command:
-
+ ```azurecli az aks install-cli ```
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
### [Azure PowerShell](#tab/azure-powershell) To manage a Kubernetes cluster, you use [kubectl][kubectl], the Kubernetes command-line client. If you use Azure Cloud Shell, `kubectl` is already installed. To install `kubectl` locally, use the [Install-AzAksKubectl][install-azakskubectl] cmdlet:
-
+ ```azurepowershell Install-AzAksKubectl ```
Alternatively, you can use [Azure Bastion][azure-bastion] to connect to your Win
### Deploy Azure Bastion
-To deploy Azure Bastion, you'll need to find the virtual network your AKS cluster is connected to.
+To deploy Azure Bastion, you'll need to find the virtual network your AKS cluster is connected to.
1. In the Azure portal, go to **Virtual networks**. Select the virtual network your AKS cluster is connected to. 1. Under **Settings**, select **Bastion**, then select **Deploy Bastion**. Wait until the process is finished before going to the next step.
az aks show -n myAKSCluster -g myResourceGroup --query 'nodeResourceGroup' -o ts
#### [Azure PowerShell](#tab/azure-powershell) ```azurepowershell-interactive
-(Get-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster).nodeResourceGroup
+(Get-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster).nodeResourceGroup
```
When you're finished, exit the Bastion session and remove the Bastion resource.
1. In the Azure portal, go to **Bastion** and select the Bastion resource you created. 1. At the top of the page, select **Delete**. Wait until the process is complete before proceeding to the next step.
-1. In the Azure portal, go to **Virtual networks**. Select the virtual network that your AKS cluster is connected to.
+1. In the Azure portal, go to **Virtual networks**. Select the virtual network that your AKS cluster is connected to.
1. Under **Settings**, select **Subnet**, and delete the **AzureBastionSubnet** subnet that was created for the Bastion resource. ## Next steps
If you need more troubleshooting data, you can [view the Kubernetes primary node
[install-azure-powershell]: /powershell/azure/install-az-ps [ssh-steps]: ssh.md [view-primary-logs]: monitor-aks.md#aks-control-planeresource-logs
-[azure-bastion]: ../bastion/bastion-overview.md
+[azure-bastion]: ../bastion/bastion-overview.md
aks Resize Node Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/resize-node-pool.md
Title: Resize node pools in Azure Kubernetes Service (AKS) description: Learn how to resize node pools for a cluster in Azure Kubernetes Service (AKS) by cordoning and draining. -+ Last updated 02/08/2023 #Customer intent: As a cluster operator, I want to resize my node pools so that I can run more or larger workloads.
kube-system metrics-server-774f99dbf4-h52hn 1/1 Running 1
Use the [az aks nodepool add][az-aks-nodepool-add] command to create a new node pool called `mynodepool` with three nodes using the `Standard_DS3_v2` VM SKU: ```azurecli-interactive
-az aks nodepool add \
- --resource-group myResourceGroup \
- --cluster-name myAKSCluster \
- --name mynodepool \
- --node-count 3 \
- --node-vm-size Standard_DS3_v2 \
- --mode System \
- --no-wait
+az aks nodepool add \
+ --resource-group myResourceGroup \
+ --cluster-name myAKSCluster \
+ --name mynodepool \
+ --node-count 3 \
+ --node-vm-size Standard_DS3_v2 \
+ --mode System \
+ --no-wait
``` > [!NOTE]
aks Upgrade Windows 2019 2022 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/upgrade-windows-2019-2022.md
Title: Upgrade Azure Kubernetes Service (AKS) workloads from Windows Server 2019 to 2022 description: Learn how to upgrade the OS version for Windows workloads on Azure Kubernetes Service (AKS). -+ Last updated 09/12/2023
aks Use Pod Sandboxing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-pod-sandboxing.md
Title: Pod Sandboxing (preview) with Azure Kubernetes Service (AKS) description: Learn about and deploy Pod Sandboxing (preview), also referred to as Kernel Isolation, on an Azure Kubernetes Service (AKS) cluster. -+ Last updated 06/07/2023
Learn more about [Azure Dedicated hosts][azure-dedicated-hosts] for nodes with y
[kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply [kata-network-limitations]: https://github.com/kata-containers/kata-containers/blob/main/docs/Limitations.md#host-network [cloud-hypervisor]: https://www.cloudhypervisor.org
-[kata-container]: https://katacontainers.io
+[kata-container]: https://katacontainers.io
<!-- INTERNAL LINKS --> [install-azure-cli]: /cli/azure
aks Use Wasi Node Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-wasi-node-pools.md
Title: Create WebAssembly System Interface (WASI) node pools in Azure Kubernetes Service (AKS) to run your WebAssembly (WASM) workload (preview) description: Learn how to create a WebAssembly System Interface (WASI) node pool in Azure Kubernetes Service (AKS) to run your WebAssembly (WASM) workload on Kubernetes. -+ Last updated 05/17/2023
az provider register --namespace Microsoft.ContainerService
## Limitations
-* Currently, there are only containerd shims available for [spin][spin] and [slight][slight] applications, which use the [wasmtime][wasmtime] runtime. In addition to wasmtime runtime applications, you can also run containers on WASM/WASI node pools.
+* Currently, there are only containerd shims available for [spin][spin] and [slight][slight] applications, which use the [wasmtime][wasmtime] runtime. In addition to wasmtime runtime applications, you can also run containers on WASM/WASI node pools.
* You can run containers and wasm modules on the same node, but you can't run containers and wasm modules on the same pod.
* The WASM/WASI node pools can't be used for system node pool.
* The *os-type* for WASM/WASI node pools must be Linux.
az aks nodepool add \
--cluster-name myAKSCluster \ --name mywasipool \ --node-count 1 \
- --workload-runtime WasmWasi
+ --workload-runtime WasmWasi
``` > [!NOTE]
az aks nodepool show -g myResourceGroup --cluster-name myAKSCluster -n mywasipoo
"WasmWasi" ```
-Configure `kubectl` to connect to your Kubernetes cluster using the [az aks get-credentials][az-aks-get-credentials] command. The following command:
+Configure `kubectl` to connect to your Kubernetes cluster using the [az aks get-credentials][az-aks-get-credentials] command. The following command:
```azurecli-interactive az aks get-credentials -n myakscluster -g myresourcegroup
aks Workload Identity Deploy Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/workload-identity-deploy-cluster.md
Title: Deploy and configure an Azure Kubernetes Service (AKS) cluster with workload identity description: In this Azure Kubernetes Service (AKS) article, you deploy an Azure Kubernetes Service cluster and configure it with a Microsoft Entra Workload ID. -+ Last updated 09/27/2023
This article assumes you have a basic understanding of Kubernetes concepts. For
To help simplify steps to configure the identities required, the steps below define environmental variables for reference on the cluster.
-Run the following commands to create these variables. Replace the default values for `RESOURCE_GROUP`, `LOCATION`, `SERVICE_ACCOUNT_NAME`, `SUBSCRIPTION`, `USER_ASSIGNED_IDENTITY_NAME`, and `FEDERATED_IDENTITY_CREDENTIAL_NAME`.
+Run the following commands to create these variables. Replace the default values for `RESOURCE_GROUP`, `LOCATION`, `SERVICE_ACCOUNT_NAME`, `SUBSCRIPTION`, `USER_ASSIGNED_IDENTITY_NAME`, and `FEDERATED_IDENTITY_CREDENTIAL_NAME`.
```bash export RESOURCE_GROUP="myResourceGroup"
export SERVICE_ACCOUNT_NAMESPACE="default"
export SERVICE_ACCOUNT_NAME="workload-identity-sa" export SUBSCRIPTION="$(az account show --query id --output tsv)" export USER_ASSIGNED_IDENTITY_NAME="myIdentity"
-export FEDERATED_IDENTITY_CREDENTIAL_NAME="myFedIdentity"
+export FEDERATED_IDENTITY_CREDENTIAL_NAME="myFedIdentity"
``` ## Create AKS cluster
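With the variables in place, the cluster is created with the OIDC issuer and workload identity enabled. A minimal sketch, assuming the illustrative cluster name `myAKSCluster` and a single node:

```azurecli-interactive
# Create an AKS cluster with the OIDC issuer and workload identity enabled.
az aks create \
    --resource-group "${RESOURCE_GROUP}" \
    --name myAKSCluster \
    --location "${LOCATION}" \
    --node-count 1 \
    --enable-oidc-issuer \
    --enable-workload-identity \
    --generate-ssh-keys
```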
To check whether all properties are injected properly by the webhook, use the [k
kubectl describe pod quick-start | grep "SECRET_NAME:" ```
-If successful, the output should be similar to the following:
+If successful, the output should be similar to the following:
```bash SECRET_NAME: ${KEYVAULT_SECRET_NAME} ```
To verify that pod is able to get a token and access the resource, use the kubec
kubectl logs quick-start ```
-If successful, the output should be similar to the following:
+If successful, the output should be similar to the following:
```bash I0114 10:35:09.795900 1 main.go:63] "successfully got secret" secret="Hello\\!" ```
aks Workload Identity Migrate From Pod Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/workload-identity-migrate-from-pod-identity.md
Title: Migrate your Azure Kubernetes Service (AKS) pod to use workload identity description: In this Azure Kubernetes Service (AKS) article, you learn how to configure your Azure Kubernetes Service pod to authenticate with workload identity. -+ Last updated 07/31/2023
kubectl logs podName
The following log output resembles successful communication through the proxy sidecar. Verify that the logs show a token is successfully acquired and the GET operation is successful. ```output
-I0926 00:29:29.968723 1 proxy.go:97] proxy "msg"="starting the proxy server" "port"=8080 "userAgent"="azure-workload-identity/proxy/v0.13.0-12-gc8527f3 (linux/amd64) c8527f3/2022-09-26-00:19"
-I0926 00:29:29.972496 1 proxy.go:173] proxy "msg"="received readyz request" "method"="GET" "uri"="/readyz"
-I0926 00:29:30.936769 1 proxy.go:107] proxy "msg"="received token request" "method"="GET" "uri"="/metadata/identity/oauth2/token?resource=https://management.core.windows.net/api-version=2018-02-01&client_id=<client_id>"
+I0926 00:29:29.968723 1 proxy.go:97] proxy "msg"="starting the proxy server" "port"=8080 "userAgent"="azure-workload-identity/proxy/v0.13.0-12-gc8527f3 (linux/amd64) c8527f3/2022-09-26-00:19"
+I0926 00:29:29.972496 1 proxy.go:173] proxy "msg"="received readyz request" "method"="GET" "uri"="/readyz"
+I0926 00:29:30.936769 1 proxy.go:107] proxy "msg"="received token request" "method"="GET" "uri"="/metadata/identity/oauth2/token?resource=https://management.core.windows.net/api-version=2018-02-01&client_id=<client_id>"
I0926 00:29:31.101998 1 proxy.go:129] proxy "msg"="successfully acquired token" "method"="GET" "uri"="/metadata/identity/oauth2/token?resource=https://management.core.windows.net/api-version=2018-02-01&client_id=<client_id>" ```
api-management Api Management Gateways Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-gateways-overview.md
The following table compares features available in the managed gateway versus th
| [Synthetic GraphQL](graphql-apis-overview.md)| ✔️ | ✔️<sup>1</sup> | ✔️<sup>1</sup> |
| [Pass-through WebSocket](websocket-api.md) | ✔️ | ❌ | ✔️ |
| [Pass-through gRPC](grpc-api.md) | ❌ | ❌ | ✔️ |
-| [Circuit Breaker](backends.md#circuit-breaker-preview) | ✔️ | ✔️ | ✔️ |
+| [Circuit breaker in backend](backends.md#circuit-breaker-preview) | ✔️ | ❌ | ✔️ |
+| [Load-balanced backend pool](backends.md#load-balanced-pool-preview) | ✔️ | ✔️ | ✔️ |
<sup>1</sup> Synthetic GraphQL subscriptions (preview) aren't supported.
api-management Authentication Authorization Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/authentication-authorization-overview.md
Examples:
* [Configure credential manager - Microsoft Graph API](credentials-how-to-azure-ad.md) * [Configure credential manager - GitHub API](credentials-how-to-github.md)
-* [Configure credential manager - user delegated access to backend APIs](credentials-how-to-github.md)
+* [Configure credential manager - user delegated access to backend APIs](credentials-how-to-user-delegated.md)
## Other options to secure APIs
api-management Backends https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/backends.md
Starting in API version 2023-03-01 preview, API Management exposes a [circuit br
The backend circuit breaker is an implementation of the [circuit breaker pattern](/azure/architecture/patterns/circuit-breaker) to allow the backend to recover from overload situations. It augments general [rate-limiting](rate-limit-policy.md) and [concurrency-limiting](limit-concurrency-policy.md) policies that you can implement to protect the API Management gateway and your backend services.
+> [!NOTE]
+> * Currently, the backend circuit breaker isn't supported in the **Consumption** tier of API Management.
+> * Because of the distributed nature of the API Management architecture, circuit breaker tripping rules are approximate. Different gateway instances don't synchronize; each instance applies circuit breaker rules based only on the information available to that instance.
+
### Example

Use the API Management [REST API](/rest/api/apimanagement/backend) or a Bicep or ARM template to configure a circuit breaker in a backend. In the following example, the circuit breaker in *myBackend* in the API Management instance *myAPIM* trips when there are three or more `5xx` status codes indicating server errors in a day. The circuit breaker resets after one hour.
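A rough sketch of that configuration using `az rest` against the preview API is shown below. The JSON property names are an approximation of the circuit breaker schema, so confirm them against the [Backends REST API reference](/rest/api/apimanagement/backend) before use.

```azurecli-interactive
# Sketch only: configure a circuit breaker on backend "myBackend" in instance "myAPIM".
# The rule trips after 3 or more 5xx responses within one day (P1D) and resets after one hour (PT1H).
az rest --method put \
  --uri "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.ApiManagement/service/myAPIM/backends/myBackend?api-version=2023-03-01-preview" \
  --body '{
    "properties": {
      "url": "https://mybackend.contoso.com",
      "protocol": "http",
      "circuitBreaker": {
        "rules": [
          {
            "name": "myBreakerRule",
            "failureCondition": {
              "count": 3,
              "interval": "P1D",
              "statusCodeRanges": [ { "min": 500, "max": 599 } ]
            },
            "tripDuration": "PT1H"
          }
        ]
      }
    }
  }'
```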
Use a backend pool for scenarios such as the following:
To create a backend pool, set the `type` property of the backend to `pool` and specify a list of backends that make up the pool.

> [!NOTE]
-> Currently, you can only include single backends in a backend pool. You can't add a backend of type `pool` to another backend pool.
+> * Currently, you can only include single backends in a backend pool. You can't add a backend of type `pool` to another backend pool.
+> * Because of the distributed nature of the API Management architecture, backend load balancing is approximate. Different gateway instances don't synchronize; each instance load balances based only on the information available to that instance.
+ ### Example
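A similar sketch for a load-balanced pool follows; the `type` and `pool.services` property shapes are likewise assumptions to verify against the Backends REST API reference before use.

```azurecli-interactive
# Sketch only: create a backend of type "Pool" that load balances across
# two existing single backends (backend-1 and backend-2).
az rest --method put \
  --uri "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.ApiManagement/service/myAPIM/backends/myBackendPool?api-version=2023-05-01-preview" \
  --body '{
    "properties": {
      "description": "Load-balanced pool of backends",
      "type": "Pool",
      "pool": {
        "services": [
          { "id": "/backends/backend-1" },
          { "id": "/backends/backend-2" }
        ]
      }
    }
  }'
```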
api-management Migrate Stv1 To Stv2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/migrate-stv1-to-stv2.md
Previously updated : 10/18/2023 Last updated : 01/11/2024
API Management platform migration from `stv1` to `stv2` involves updating the un
For an API Management instance that's not deployed in a VNet, migrate your instance using the **Platform migration** blade in the Azure portal, or invoke the Migrate to `stv2` REST API.
-You can choose whether the virtual IP address of API Management will change, or whether the original VIP address is preserved.
+During the migration, the VIP address of your API Management instance will be preserved.
-* **New virtual IP address (recommended)** - If you choose this mode, API requests remain responsive during migration. Infrastructure configuration (such as custom domains, locations, and CA certificates) will be locked for 30 minutes. After migration, you'll need to update any network dependencies including DNS, firewall rules, and VNets to use the new VIP address.
-
-* **Preserve IP address** - If you preserve the VIP address, API requests will be unresponsive for approximately 15 minutes while the IP address is migrated to the new infrastructure. Infrastructure configuration (such as custom domains, locations, and CA certificates) will be locked for 45 minutes. No further configuration is required after migration.
+* API requests will be unresponsive for approximately 15 minutes while the IP address is migrated to the new infrastructure.
+* Infrastructure configuration (such as custom domains, locations, and CA certificates) will be locked for 45 minutes.
+* No further configuration is required after migration.
#### [Portal](#tab/portal) 1. In the [Azure portal](https://portal.azure.com), navigate to your API Management instance. 1. In the left menu, under **Settings**, select **Platform migration**.
-1. On the **Platform migration** page, select one of the two migration options:
-
- * **New virtual IP address (recommended)**. The VIP address of your API Management instance will change automatically. Your service will have no downtime, but after migration you'll need to update any network dependencies including DNS, firewall rules, and VNets to use the new VIP address.
-
- * **Preserve IP address** - The VIP address of your API Management instance won't change. Your instance will have downtime for up to 15 minutes.
-
- :::image type="content" source="media/migrate-stv1-to-stv2/platform-migration-portal.png" alt-text="Screenshot of API Management platform migration in the portal.":::
-
-1. Review guidance for the migration process, and prepare your environment.
-
+1. On the **Platform migration** page, review guidance for the migration process, and prepare your environment.
1. After you've completed preparation steps, select **I have read and understand the impact of the migration process.** Select **Migrate**. #### [Azure CLI](#tab/cli)
RG_NAME={name of your resource group}
# Get resource ID of API Management instance APIM_RESOURCE_ID=$(az apim show --name $APIM_NAME --resource-group $RG_NAME --query id --output tsv)
-# Call REST API to migrate to stv2 and change VIP address
-az rest --method post --uri "$APIM_RESOURCE_ID/migrateToStv2?api-version=2023-03-01-preview" --body '{"mode": "NewIp"}'
-
-# Alternate call to migrate to stv2 and preserve VIP address
-# az rest --method post --uri "$APIM_RESOURCE_ID/migrateToStv2?api-version=2023-03-01-preview" --body '{"mode": "PreserveIp"}'
+# Call REST API to migrate to stv2 and preserve VIP address
+az rest --method post --uri "$APIM_RESOURCE_ID/migrateToStv2?api-version=2023-03-01-preview" --body '{"mode": "PreserveIp"}'
```
az rest --method post --uri "$APIM_RESOURCE_ID/migrateToStv2?api-version=2023-03
To verify that the migration was successful, when the status changes to `Online`, check the [platform version](compute-infrastructure.md#how-do-i-know-which-platform-hosts-my-api-management-instance) of your API Management instance. After successful migration, the value is `stv2`.
-### Update network dependencies
-
-On successful migration, update any network dependencies including DNS, firewall rules, and VNets to use the new VIP address.
- ## Scenario 2: Migrate a network-injected API Management instance Trigger migration of a network-injected API Management instance to the `stv2` platform by updating the existing network configuration to use new network settings (see the following section). After that update completes, as an optional step, you can migrate back to the original VNet and subnet you used.
app-service Tutorial Multi Container App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-multi-container-app.md
Last updated 11/18/2022 -+ # Tutorial: Create a multi-container (preview) app in Web App for Containers
redis: image: mcr.microsoft.com/oss/bitnami/redis:6.0.8
- environment:
+ environment:
- ALLOW_EMPTY_PASSWORD=yes restart: always ```
application-gateway Ingress Controller Autoscale Pods https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-autoscale-pods.md
description: This article provides instructions on how to scale your AKS backend
-+ Last updated 10/26/2023
In the following tutorial, we explain how you can use Application Gateway's `Avg
Use following two components:
-* [`Azure Kubernetes Metric Adapter`](https://github.com/Azure/azure-k8s-metrics-adapter) - We use the metric adapter to expose Application Gateway metrics through the metric server. The Azure Kubernetes Metric Adapter is an open source project under Azure, similar to the Application Gateway Ingress Controller.
+* [`Azure Kubernetes Metric Adapter`](https://github.com/Azure/azure-k8s-metrics-adapter) - We use the metric adapter to expose Application Gateway metrics through the metric server. The Azure Kubernetes Metric Adapter is an open source project under Azure, similar to the Application Gateway Ingress Controller.
* [`Horizontal Pod Autoscaler`](../aks/concepts-scale.md#horizontal-pod-autoscaler) - We use HPA to use Application Gateway metrics and target a deployment for scaling. > [!NOTE]
Use following two components:
## Setting up Azure Kubernetes Metric Adapter
-1. First, create a Microsoft Entra service principal and assign it `Monitoring Reader` access over Application Gateway's resource group.
+1. First, create a Microsoft Entra service principal and assign it `Monitoring Reader` access over Application Gateway's resource group.
```azurecli applicationGatewayGroupName="<application-gateway-group-id>"
application-gateway Ingress Controller Expose Service Over Http Https https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-expose-service-over-http-https.md
Title: Expose an AKS service over HTTP or HTTPS using Application Gateway
-description: This article provides information on how to expose an AKS service over HTTP or HTTPS using Application Gateway.
+description: This article provides information on how to expose an AKS service over HTTP or HTTPS using Application Gateway.
-+ Last updated 07/23/2023
-# Expose an AKS service over HTTP or HTTPS using Application Gateway
+# Expose an AKS service over HTTP or HTTPS using Application Gateway
These tutorials help illustrate the usage of [Kubernetes Ingress Resources](https://kubernetes.io/docs/concepts/services-networking/ingress/) to expose an example Kubernetes service through the [Azure Application Gateway](https://azure.microsoft.com/services/application-gateway/) over HTTP or HTTPS.
Without specifying hostname, the guestbook service is available on all the host-
servicePort: 80 ```
- > [!NOTE]
+ > [!NOTE]
> Replace `<guestbook-secret-name>` in the above Ingress Resource with the name of your secret. Store the above Ingress Resource in a file named ing-guestbook-tls.yaml.

1. Deploy ing-guestbook-tls.yaml by running the command shown below.
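   A minimal sketch, assuming `kubectl` is already configured against the AKS cluster:

   ```bash
   # Apply the TLS-enabled Ingress resource saved above.
   kubectl apply -f ing-guestbook-tls.yaml
   ```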
application-gateway Ingress Controller Install Existing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-install-existing.md
Title: Create an ingress controller with an existing Application Gateway
-description: This article provides information on how to deploy an Application Gateway Ingress Controller with an existing Application Gateway.
+ Title: Create an ingress controller with an existing Application Gateway
+description: This article provides information on how to deploy an Application Gateway Ingress Controller with an existing Application Gateway.
-+ Last updated 07/28/2023
In the first few steps, we install Helm's Tiller on your Kubernetes cluster. Use
# Verbosity level of the App Gateway Ingress Controller verbosityLevel: 3
-
+ ################################################################################ # Specify which application gateway the ingress controller must manage #
In the first few steps, we install Helm's Tiller on your Kubernetes cluster. Use
subscriptionId: <subscriptionId> resourceGroup: <resourceGroupName> name: <applicationGatewayName>
-
+ # Setting appgw.shared to "true" creates an AzureIngressProhibitedTarget CRD. # This prohibits AGIC from applying config for any host/path. # Use "kubectl get AzureIngressProhibitedTargets" to view and change this. shared: false
-
+ ################################################################################ # Specify which kubernetes namespace the ingress controller must watch # Default value is "default"
In the first few steps, we install Helm's Tiller on your Kubernetes cluster. Use
# # kubernetes: # watchNamespace: <namespace>
-
+ ################################################################################ # Specify the authentication with Azure Resource Manager # # Two authentication methods are available:
- # - Option 1: Azure-AD-workload-identity
+ # - Option 1: Azure-AD-workload-identity
armAuth: type: workloadIdentity identityClientID: <identityClientId>
-
+ ## Alternatively you can use Service Principal credentials # armAuth: # type: servicePrincipal # secretJSON: <<Generate this value with: "az ad sp create-for-rbac --role Contributor --sdk-auth | base64 -w0" >>
-
+ ################################################################################ # Specify if the cluster is Kubernetes RBAC enabled or not rbac: enabled: false # true/false
-
+ # Specify aks cluster related information. THIS IS BEING DEPRECATED. aksClusterConfiguration: apiServerAddress: <aks-api-server-address> ``` 1. Edit helm-config.yaml and fill in the values for `appgw` and `armAuth`.
-
+ > [!NOTE] > The `<identity-client-id>` is a property of the Microsoft Entra Workload ID you setup in the previous section. You can retrieve this information by running the following command: `az identity show -g <resourcegroup> -n <identity-name>`, where `<resourcegroup>` is the resource group hosting the infrastructure resources related to the AKS cluster, Application Gateway and managed identity.
application-gateway Ingress Controller Install New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-install-new.md
Title: Creating an ingress controller with a new Application Gateway
-description: This article provides information on how to deploy an Application Gateway Ingress Controller with a new Application Gateway.
+ Title: Creating an ingress controller with a new Application Gateway
+description: This article provides information on how to deploy an Application Gateway Ingress Controller with a new Application Gateway.
-+ Last updated 07/28/2023
To install Microsoft Entra Pod Identity to your cluster:
```bash wget https://raw.githubusercontent.com/Azure/application-gateway-kubernetes-ingress/master/docs/examples/sample-helm-config.yaml -O helm-config.yaml ```
- Or copy the YAML file below:
-
+ Or copy the YAML file below:
+ ```yaml # This file contains the essential configs for the ingress controller helm chart # Verbosity level of the App Gateway Ingress Controller verbosityLevel: 3
-
+ ################################################################################ # Specify which application gateway the ingress controller will manage #
To install Microsoft Entra Pod Identity to your cluster:
subscriptionId: <subscriptionId> resourceGroup: <resourceGroupName> name: <applicationGatewayName>
-
+ # Setting appgw.shared to "true" will create an AzureIngressProhibitedTarget CRD. # This prohibits AGIC from applying config for any host/path. # Use "kubectl get AzureIngressProhibitedTargets" to view and change this. shared: false
-
+ ################################################################################ # Specify which kubernetes namespace the ingress controller will watch # Default value is "default"
To install Microsoft Entra Pod Identity to your cluster:
# # kubernetes: # watchNamespace: <namespace>
-
+ ################################################################################ # Specify the authentication with Azure Resource Manager #
To install Microsoft Entra Pod Identity to your cluster:
type: aadPodIdentity identityResourceID: <identityResourceId> identityClientID: <identityClientId>
-
+ ## Alternatively you can use Service Principal credentials # armAuth: # type: servicePrincipal # secretJSON: <<Generate this value with: "az ad sp create-for-rbac --subscription <subscription-uuid> --role Contributor --sdk-auth | base64 -w0" >>
-
+ ################################################################################ # Specify if the cluster is Kubernetes RBAC enabled or not rbac: enabled: false # true/false
-
+ # Specify aks cluster related information. THIS IS BEING DEPRECATED. aksClusterConfiguration: apiServerAddress: <aks-api-server-address>
To install Microsoft Entra Pod Identity to your cluster:
sed -i "s|<identityResourceId>|${identityResourceId}|g" helm-config.yaml sed -i "s|<identityClientId>|${identityClientId}|g" helm-config.yaml ```
-
+ > [!NOTE] > **For deploying to Sovereign Clouds (e.g., Azure Government)**, the `appgw.environment` configuration parameter must be added and set to the appropriate value as documented below.
To install Microsoft Entra Pod Identity to your cluster:
- `kubernetes.watchNamespace`: Specify the namespace that AGIC should watch. The namespace value can be a single string value, or a comma-separated list of namespaces. - `armAuth.type`: could be `aadPodIdentity` or `servicePrincipal` - `armAuth.identityResourceID`: Resource ID of the Azure Managed Identity
- - `armAuth.identityClientID`: The Client ID of the Identity. More information about **identityClientID** is provided below.
- - `armAuth.secretJSON`: Only needed when Service Principal Secret type is chosen (when `armAuth.type` has been set to `servicePrincipal`)
+ - `armAuth.identityClientID`: The Client ID of the Identity. More information about **identityClientID** is provided below.
+ - `armAuth.secretJSON`: Only needed when Service Principal Secret type is chosen (when `armAuth.type` has been set to `servicePrincipal`)
> [!NOTE]
application-gateway Ingress Controller Letsencrypt Certificate Application Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-letsencrypt-certificate-application-gateway.md
Title: Use LetsEncrypt.org certificates with Application Gateway
-description: This article provides information on how to obtain a certificate from LetsEncrypt.org and use it on your Application Gateway for AKS clusters.
+description: This article provides information on how to obtain a certificate from LetsEncrypt.org and use it on your Application Gateway for AKS clusters.
-+ Last updated 08/01/2023
Use the following steps to install [cert-manager](https://docs.cert-manager.io)
--namespace cert-manager \ --version v1.10.1 \ # --set installCRDs=true
-
- # To automatically install and manage the CRDs as part of your Helm release,
+
+ # To automatically install and manage the CRDs as part of your Helm release,
# you must add the --set installCRDs=true flag to your Helm installation command. ```
Use the following steps to install [cert-manager](https://docs.cert-manager.io)
The default challenge type in the following YAML is `http01`. Other challenges are documented on [letsencrypt.org - Challenge Types](https://letsencrypt.org/docs/challenge-types/)
- > [!IMPORTANT]
+ > [!IMPORTANT]
> Update `<YOUR.EMAIL@ADDRESS>` in the following YAML. ```bash
Use the following steps to install [cert-manager](https://docs.cert-manager.io)
Ensure your Application Gateway has a public Frontend IP configuration with a DNS name (either using the default `azure.com` domain, or provision an `Azure DNS Zone` service and assign your own custom domain). Add the annotation `certmanager.k8s.io/cluster-issuer: letsencrypt-staging`, which tells cert-manager to process the tagged Ingress resource.
- > [!IMPORTANT]
+ > [!IMPORTANT]
> Update `<PLACEHOLDERS.COM>` in the following YAML with your own domain (or the Application Gateway one, for example 'kh-aks-ingress.westeurope.cloudapp.azure.com') ```bash
application-gateway Ingress Controller Private Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-private-ip.md
Title: Use private IP address for internal routing for an ingress endpoint
-description: This article provides information on how to use private IPs for internal routing and thus exposing the Ingress endpoint within a cluster to the rest of the VNet.
+ Title: Use private IP address for internal routing for an ingress endpoint
+description: This article provides information on how to use private IPs for internal routing and thus exposing the Ingress endpoint within a cluster to the rest of the VNet.
-+ Last updated 07/23/2023
-# Use private IP for internal routing for an Ingress endpoint
+# Use private IP for internal routing for an Ingress endpoint
This feature exposes the ingress endpoint within the `Virtual Network` using a private IP. > [!TIP] > Also see [What is Application Gateway for Containers?](for-containers/overview.md) currently in public preview.
-## Prerequisites
+## Prerequisites
Application Gateway with a [Private IP configuration](./configure-application-gateway-with-private-frontend-ip.md) There are two ways to configure the controller to use Private IP for ingress,
For Application Gateways without a Private IP, Ingresses annotated with `appgw.i
Events: Type Reason Age From Message - - - -
- Warning NoPrivateIP 2m (x17 over 2m) azure/application-gateway, prod-ingress-azure-5c9b6fcd4-bctcb Ingress default/hello-world-ingress requires Application Gateway
+ Warning NoPrivateIP 2m (x17 over 2m) azure/application-gateway, prod-ingress-azure-5c9b6fcd4-bctcb Ingress default/hello-world-ingress requires Application Gateway
applicationgateway3026 has a private IP address ```
application-gateway Ingress Controller Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-troubleshoot.md
Title: Application Gateway Ingress Controller troubleshooting
-description: This article provides documentation on how to troubleshoot common questions and issues with the Application Gateway Ingress Controller.
+description: This article provides documentation on how to troubleshoot common questions and issues with the Application Gateway Ingress Controller.
-+ Last updated 08/01/2023
The following conditions must be in place for AGIC to function as expected:
delyan@Azure:~$ kubectl get services -o wide --show-labels NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR LABELS
- aspnetapp ClusterIP 10.2.63.254 <none> 80/TCP 17h app=aspnetapp <none>
+ aspnetapp ClusterIP 10.2.63.254 <none> 80/TCP 17h app=aspnetapp <none>
```
- 3. **Ingress**, annotated with `kubernetes.io/ingress.class: azure/application-gateway`, referencing the previous service.
+ 3. **Ingress**, annotated with `kubernetes.io/ingress.class: azure/application-gateway`, referencing the previous service.
Verify this configuration from [Cloud Shell](https://shell.azure.com/) with `kubectl get ingress -o wide --show-labels` ```output delyan@Azure:~$ kubectl get ingress -o wide --show-labels
The following conditions must be in place for AGIC to function as expected:
``` The ingress resource must be annotated with `kubernetes.io/ingress.class: azure/application-gateway`.
-
+ ### Verify Observed Namespace
The following conditions must be in place for AGIC to function as expected:
```bash # What namespaces exist on your cluster kubectl get namespaces
-
+ # What pods are currently running kubectl get pods --all-namespaces -o wide ```
The following conditions must be in place for AGIC to function as expected:
* Do you have a Kubernetes [Service](https://kubernetes.io/docs/concepts/services-networking/service/) and [Ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/) resources?
-
+ ```bash # Get all services across all namespaces kubectl get service --all-namespaces -o wide
-
+ # Get all ingress resources across all namespaces kubectl get ingress --all-namespaces -o wide ``` * Is your [Ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/) annotated with: `kubernetes.io/ingress.class: azure/application-gateway`? AGIC only watches for Kubernetes Ingress resources that have this annotation.
-
+ ```bash # Get the YAML definition of a particular ingress resource kubectl get ingress --namespace <which-namespace?> <which-ingress?> -o yaml
application-gateway Ingress Controller Update Ingress Controller https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-update-ingress-controller.md
Title: Upgrade ingress controller with Helm
-description: This article provides information on how to upgrade an Application Gateway Ingress using Helm.
+description: This article provides information on how to upgrade an Application Gateway Ingress using Helm.
-+ Last updated 07/23/2023
-# How to upgrade Application Gateway Ingress Controller using Helm
+# How to upgrade Application Gateway Ingress Controller using Helm
The Azure Application Gateway Ingress Controller for Kubernetes (AGIC) can be upgraded using a Helm repository hosted on Azure Storage.
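As a rough sketch of that flow (the repository URL and release name below are common defaults and may differ in your environment):

```bash
# Add (or refresh) the AGIC Helm repository hosted on Azure Storage,
# then upgrade the existing release while keeping its current values.
helm repo add application-gateway-kubernetes-ingress https://appgwingress.blob.core.windows.net/ingress-azure-helm-package/
helm repo update
helm upgrade ingress-azure application-gateway-kubernetes-ingress/ingress-azure --reuse-values
```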
application-gateway Quick Create Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/quick-create-cli.md
Last updated 11/06/2023 -+ # Quickstart: Direct web traffic with Azure Application Gateway - Azure CLI
-In this quickstart, you use Azure CLI to create an application gateway. Then you test it to make sure it works correctly.
+In this quickstart, you use Azure CLI to create an application gateway. Then you test it to make sure it works correctly.
The application gateway directs application web traffic to specific resources in a backend pool. You assign listeners to ports, create rules, and add resources to a backend pool. For the sake of simplicity, this article uses a simple setup with a public frontend IP address, a basic listener to host a single site on the application gateway, a basic request routing rule, and two virtual machines in the backend pool.
You can also complete this quickstart using [Azure PowerShell](quick-create-powe
## Create resource group
-In Azure, you allocate related resources to a resource group. Create a resource group by using `az group create`.
+In Azure, you allocate related resources to a resource group. Create a resource group by using `az group create`.
The following example creates a resource group named *myResourceGroupAG* in the *eastus* location.
-```azurecli-interactive
+```azurecli-interactive
az group create --name myResourceGroupAG --location eastus ```
-## Create network resources
+## Create network resources
For Azure to communicate between the resources that you create, it needs a virtual network. The application gateway subnet can contain only application gateways. No other resources are allowed. You can either create a new subnet for Application Gateway or use an existing one. In this example, you create two subnets: one for the application gateway, and another for the backend servers. You can configure the Frontend IP of the Application Gateway to be Public or Private as per your use case. In this example, you'll choose a Public Frontend IP address.
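A minimal sketch of those commands follows; the address prefixes are illustrative values, so adjust them to your own network plan.

```azurecli-interactive
# Create the virtual network with the application gateway subnet,
# then add a separate subnet for the backend servers and a public IP.
az network vnet create \
  --name myVNet \
  --resource-group myResourceGroupAG \
  --location eastus \
  --address-prefixes 10.21.0.0/16 \
  --subnet-name myAGSubnet \
  --subnet-prefixes 10.21.0.0/24

az network vnet subnet create \
  --name myBackendSubnet \
  --resource-group myResourceGroupAG \
  --vnet-name myVNet \
  --address-prefixes 10.21.1.0/24

az network public-ip create \
  --resource-group myResourceGroupAG \
  --name myAGPublicIPAddress \
  --allocation-method Static \
  --sku Standard
```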
It can take up to 30 minutes for Azure to create the application gateway. After
## Test the application gateway
-Although Azure doesn't require an NGINX web server to create the application gateway, you installed it in this quickstart to verify whether Azure successfully created the application gateway. To get the public IP address of the new application gateway, use `az network public-ip show`.
+Although Azure doesn't require an NGINX web server to create the application gateway, you installed it in this quickstart to verify whether Azure successfully created the application gateway. To get the public IP address of the new application gateway, use `az network public-ip show`.
```azurecli-interactive az network public-ip show \
az network public-ip show \
``` Copy and paste the public IP address into the address bar of your browser.
![Test application gateway](./media/quick-create-cli/application-gateway-nginxtest.png) When you refresh the browser, you should see the name of the second VM. This indicates the application gateway was successfully created and can connect with the backend.
When you refresh the browser, you should see the name of the second VM. This ind
When you no longer need the resources that you created with the application gateway, use the `az group delete` command to delete the resource group. When you delete the resource group, you also delete the application gateway and all its related resources.
-```azurecli-interactive
+```azurecli-interactive
az group delete --name myResourceGroupAG ```
application-gateway Redirect Http To Https Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/redirect-http-to-https-cli.md
description: Learn how to create an HTTP to HTTPS redirection and add a certific
-+ Last updated 04/27/2023
A resource group is a logical container into which Azure resources are deployed
The following example creates a resource group named *myResourceGroupAG* in the *eastus* location.
-```azurecli-interactive
+```azurecli-interactive
az group create --name myResourceGroupAG --location eastus ```
az network public-ip create \
## Create the application gateway
-You can use [az network application-gateway create](/cli/azure/network/application-gateway#az-network-application-gateway-create) to create the application gateway named *myAppGateway*. When you create an application gateway using the Azure CLI, you specify configuration information, such as capacity, sku, and HTTP settings.
+You can use [az network application-gateway create](/cli/azure/network/application-gateway#az-network-application-gateway-create) to create the application gateway named *myAppGateway*. When you create an application gateway using the Azure CLI, you specify configuration information, such as capacity, sku, and HTTP settings.
-The application gateway is assigned to *myAGSubnet* and *myAGPublicIPAddress* that you previously created. In this example, you associate the certificate that you created and its password when you create the application gateway.
+The application gateway is assigned to *myAGSubnet* and *myAGPublicIPAddress* that you previously created. In this example, you associate the certificate that you created and its password when you create the application gateway.
```azurecli-interactive az network application-gateway create \
application-gateway Redirect Internal Site Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/redirect-internal-site-cli.md
description: Learn how to create an application gateway that redirects internal
-+ Last updated 04/27/2023
A resource group is a logical container into which Azure resources are deployed
The following example creates a resource group named *myResourceGroupAG* in the *eastus* location.
-```azurecli-interactive
+```azurecli-interactive
az group create --name myResourceGroupAG --location eastus ```
-## Create network resources
+## Create network resources
Create the virtual network named *myVNet* and the subnet named *myAGSubnet* using [az network vnet create](/cli/azure/network/vnet). You can then add the subnet named *myBackendSubnet* that's needed by the backend pool of servers using [az network vnet subnet create](/cli/azure/network/vnet/subnet). Create the public IP address named *myAGPublicIPAddress* using [az network public-ip create](/cli/azure/network/public-ip#az-network-public-ip-create).
az network public-ip create \
## Create an application gateway
-You can use [az network application-gateway create](/cli/azure/network/application-gateway) to create the application gateway named *myAppGateway*. When you create an application gateway using the Azure CLI, you specify configuration information, such as capacity, sku, and HTTP settings. The application gateway is assigned to *myAGSubnet* and *myAGPublicIPAddress* that you previously created.
+You can use [az network application-gateway create](/cli/azure/network/application-gateway) to create the application gateway named *myAppGateway*. When you create an application gateway using the Azure CLI, you specify configuration information, such as capacity, sku, and HTTP settings. The application gateway is assigned to *myAGSubnet* and *myAGPublicIPAddress* that you previously created.
```azurecli-interactive az network application-gateway create \
It may take several minutes for the application gateway to be created. After the
- *rule1* - The default routing rule that is associated with *appGatewayHttpListener*.
-## Add listeners and rules
+## Add listeners and rules
A listener is required to enable the application gateway to route traffic appropriately to the backend pool. In this tutorial, you create two listeners for your two domains. In this example, listeners are created for the domains of *www\.contoso.com* and *www\.contoso.org*.
az network application-gateway http-listener create \
--frontend-port appGatewayFrontendPort \ --resource-group myResourceGroupAG \ --gateway-name myAppGateway \
- --host-name www.contoso.org
+ --host-name www.contoso.org
``` ### Add the redirection configuration
az network application-gateway redirect-config create \
### Add routing rules
-Rules are processed in the order in which they are created, and traffic is directed using the first rule that matches the URL sent to the application gateway. For example, if you have a rule using a basic listener and a rule using a multi-site listener both on the same port, the rule with the multi-site listener must be listed before the rule with the basic listener in order for the multi-site rule to function as expected.
+Rules are processed in the order in which they are created, and traffic is directed using the first rule that matches the URL sent to the application gateway. For example, if you have a rule using a basic listener and a rule using a multi-site listener both on the same port, the rule with the multi-site listener must be listed before the rule with the basic listener in order for the multi-site rule to function as expected.
In this example, you create two new rules and delete the default rule that was created. You can add the rule using [az network application-gateway rule create](/cli/azure/network/application-gateway/rule#az-network-application-gateway-rule-create).
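A hedged sketch of one such rule follows; the listener, pool, and HTTP settings names are placeholders for resources created earlier, and on the v2 SKU an explicit `--priority` controls evaluation order (lower values are evaluated first).

```azurecli-interactive
# Create a routing rule that ties the www.contoso.com listener to its backend pool.
# Give multi-site rules a lower priority number than any basic-listener rule on the same port.
az network application-gateway rule create \
  --gateway-name myAppGateway \
  --resource-group myResourceGroupAG \
  --name contosoComRule \
  --http-listener contosoComListener \
  --rule-type Basic \
  --address-pool contosoComBackendPool \
  --http-settings appGatewayBackendHttpSettings \
  --priority 100
```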
application-gateway Self Signed Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/self-signed-certificates.md
Last updated 01/17/2024--++ # Generate an Azure Application Gateway self-signed certificate with a custom root CA The Application Gateway v2 SKU introduces the use of Trusted Root Certificates to allow TLS connections with the backend servers. This provision removes the use of authentication certificates (individual Leaf certificates) that were required in the v1 SKU. The *root certificate* is a Base-64 encoded X.509(.CER) format root certificate from the backend certificate server. It identifies the root certificate authority (CA) that issued the server certificate and the server certificate is then used for the TLS/SSL communication.
-Application Gateway trusts your website's certificate by default if it's signed by a well-known CA (for example, GoDaddy or DigiCert). You don't need to explicitly upload the root certificate in that case. For more information, see [Overview of TLS termination and end to end TLS with Application Gateway](ssl-overview.md). However, if you have a dev/test environment and don't want to purchase a verified CA signed certificate, you can create your own custom Root CA and a leaf certificate signed by that Root CA.
+Application Gateway trusts your website's certificate by default if it's signed by a well-known CA (for example, GoDaddy or DigiCert). You don't need to explicitly upload the root certificate in that case. For more information, see [Overview of TLS termination and end to end TLS with Application Gateway](ssl-overview.md). However, if you have a dev/test environment and don't want to purchase a verified CA signed certificate, you can create your own custom Root CA and a leaf certificate signed by that Root CA.
> [!NOTE] > Self-generated certificates are not trusted by default, and can be difficult to maintain. Also, they may use outdated hash and cipher suites that may not be strong. For better security, purchase a certificate signed by a well-known certificate authority.
In this article, you will learn how to:
## Prerequisites -- **[OpenSSL](https://www.openssl.org/) on a computer running Windows or Linux**
+- **[OpenSSL](https://www.openssl.org/) on a computer running Windows or Linux**
While there could be other tools available for certificate management, this tutorial uses OpenSSL. You can find OpenSSL bundled with many Linux distributions, such as Ubuntu. - **A web server**
In this article, you will learn how to:
For example, Apache, IIS, or NGINX to test the certificates. - **An Application Gateway v2 SKU**
-
+ If you don't have an existing application gateway, see [Quickstart: Direct web traffic with Azure Application Gateway - Azure portal](quick-create-portal.md). ## Create a root CA certificate
Create your root CA certificate using OpenSSL.
``` openssl ecparam -out contoso.key -name prime256v1 -genkey ```
-
+ ### Create a Root Certificate and self-sign it 1. Use the following command to generate the Certificate Signing Request (CSR).
openssl s_client -connect localhost:443 -servername www.fabrikam.com -showcerts
## Upload the root certificate to Application Gateway's HTTP Settings
-To upload the certificate in Application Gateway, you must export the .crt certificate into a .cer format Base-64 encoded. Since .crt already contains the public key in the base-64 encoded format, just rename the file extension from .crt to .cer.
+To upload the certificate in Application Gateway, you must export the .crt certificate into a .cer format Base-64 encoded. Since .crt already contains the public key in the base-64 encoded format, just rename the file extension from .crt to .cer.
### Azure portal
Add-AzApplicationGatewayRequestRoutingRule `
-HttpListener $listener ` -BackendAddressPool $bepool
-Set-AzApplicationGateway -ApplicationGateway $gw
+Set-AzApplicationGateway -ApplicationGateway $gw
``` ### Verify the application gateway backend health
application-gateway Tutorial Manage Web Traffic Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-manage-web-traffic-cli.md
Last updated 04/27/2023--++ # Manage web traffic with an application gateway using the Azure CLI
A resource group is a logical container into which Azure resources are deployed
The following example creates a resource group named *myResourceGroupAG* in the *eastus* location.
- ```azurecli-interactive
+ ```azurecli-interactive
az group create --name myResourceGroupAG --location eastus ```
-## Create network resources
+## Create network resources
Create the virtual network named *myVNet* and the subnet named *myAGSubnet* using [az network vnet create](/cli/azure/network/vnet). You can then add the subnet named *myBackendSubnet* needed by the backend servers using [az network vnet subnet create](/cli/azure/network/vnet/subnet). Create the public IP address named *myAGPublicIPAddress* using [az network public-ip create](/cli/azure/network/public-ip).
Create the virtual network named *myVNet* and the subnet named *myAGSubnet* usin
## Create an application gateway
-Use [az network application-gateway create](/cli/azure/network/application-gateway) to create the application gateway named *myAppGateway*. When you create an application gateway using the Azure CLI, you specify configuration information, such as capacity, sku, and HTTP settings. The application gateway is assigned to *myAGSubnet* and *myPublicIPAddress* that you previously created.
+Use [az network application-gateway create](/cli/azure/network/application-gateway) to create the application gateway named *myAppGateway*. When you create an application gateway using the Azure CLI, you specify configuration information, such as capacity, sku, and HTTP settings. The application gateway is assigned to *myAGSubnet* and *myPublicIPAddress* that you previously created.
```azurecli-interactive az network application-gateway create \
application-gateway Tutorial Multiple Sites Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-multiple-sites-cli.md
Last updated 04/27/2023 -+ #Customer intent: As an IT administrator, I want to use Azure CLI to configure Application Gateway to host multiple web sites , so I can ensure my customers can access the web information they need.
az network public-ip create \
## Create the application gateway
-You can use [az network application-gateway create](/cli/azure/network/application-gateway#az-network-application-gateway-create) to create the application gateway. When you create an application gateway using the Azure CLI, you specify configuration information, such as capacity, sku, and HTTP settings. The application gateway is assigned to *myAGSubnet* and *myAGPublicIPAddress* that you previously created.
+You can use [az network application-gateway create](/cli/azure/network/application-gateway#az-network-application-gateway-create) to create the application gateway. When you create an application gateway using the Azure CLI, you specify configuration information, such as capacity, sku, and HTTP settings. The application gateway is assigned to *myAGSubnet* and *myAGPublicIPAddress* that you previously created.
```azurecli-interactive az network application-gateway create \
az network application-gateway http-listener create \
--frontend-port appGatewayFrontendPort \ --resource-group myResourceGroupAG \ --gateway-name myAppGateway \
- --host-name www.fabrikam.com
+ --host-name www.fabrikam.com
``` ### Add routing rules
done
## Create a CNAME record in your domain
-After the application gateway is created with its public IP address, you can get the DNS address and use it to create a CNAME record in your domain. You can use [az network public-ip show](/cli/azure/network/public-ip#az-network-public-ip-show) to get the DNS address of the application gateway. Copy the *fqdn* value of the DNSSettings and use it as the value of the CNAME record that you create.
+After the application gateway is created with its public IP address, you can get the DNS address and use it to create a CNAME record in your domain. You can use [az network public-ip show](/cli/azure/network/public-ip#az-network-public-ip-show) to get the DNS address of the application gateway. Copy the *fqdn* value of the DNSSettings and use it as the value of the CNAME record that you create.
```azurecli-interactive az network public-ip show \
application-gateway Tutorial Ssl Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-ssl-cli.md
Last updated 04/27/2023 -+ # Create an application gateway with TLS termination using the Azure CLI
A resource group is a logical container into which Azure resources are deployed
The following example creates a resource group named *myResourceGroupAG* in the *eastus* location.
-```azurecli-interactive
+```azurecli-interactive
az group create --name myResourceGroupAG --location eastus ```
az network public-ip create \
## Create the application gateway
-You can use [az network application-gateway create](/cli/azure/network/application-gateway) to create the application gateway. When you create an application gateway using the Azure CLI, you specify configuration information, such as capacity, sku, and HTTP settings.
+You can use [az network application-gateway create](/cli/azure/network/application-gateway) to create the application gateway. When you create an application gateway using the Azure CLI, you specify configuration information, such as capacity, sku, and HTTP settings.
-The application gateway is assigned to *myAGSubnet* and *myAGPublicIPAddress* that you previously created. In this example, you associate the certificate that you created and its password when you create the application gateway.
+The application gateway is assigned to *myAGSubnet* and *myAGPublicIPAddress* that you previously created. In this example, you associate the certificate that you created and its password when you create the application gateway.
```azurecli-interactive az network application-gateway create \
application-gateway Tutorial Url Redirect Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-url-redirect-cli.md
Last updated 04/27/2023 -+ #Customer intent: As an IT administrator, I want to use Azure CLI to set up URL path redirection of web traffic to specific pools of servers so I can ensure my customers have access to the information they need.
A resource group is a logical container into which Azure resources are deployed
The following example creates a resource group named *myResourceGroupAG* in the *eastus* location.
-```azurecli-interactive
+```azurecli-interactive
az group create --name myResourceGroupAG --location eastus ```
-## Create network resources
+## Create network resources
Create the virtual network named *myVNet* and the subnet named *myAGSubnet* using [az network vnet create](/cli/azure/network/vnet). You can then add the subnet named *myBackendSubnet* that's needed by the backend servers using [az network vnet subnet create](/cli/azure/network/vnet/subnet). Create the public IP address named *myAGPublicIPAddress* using [az network public-ip create](/cli/azure/network/public-ip).
az network application-gateway create \
### Add backend pools and ports
-You can add backend address pools named *imagesBackendPool* and *videoBackendPool* to your application gateway by using [az network application-gateway address-pool create](/cli/azure/network/application-gateway/address-pool). You add the frontend ports for the pools using [az network application-gateway frontend-port create](/cli/azure/network/application-gateway/frontend-port).
+You can add backend address pools named *imagesBackendPool* and *videoBackendPool* to your application gateway by using [az network application-gateway address-pool create](/cli/azure/network/application-gateway/address-pool). You add the frontend ports for the pools using [az network application-gateway frontend-port create](/cli/azure/network/application-gateway/frontend-port).
```azurecli-interactive az network application-gateway address-pool create \
Replace \<azure-user> and \<password> with a user name and password of your choi
for i in `seq 1 3`; do if [ $i -eq 1 ] then
- poolName="appGatewayBackendPool"
+ poolName="appGatewayBackendPool"
fi if [ $i -eq 2 ] then
application-gateway Tutorial Url Route Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-url-route-cli.md
Last updated 04/27/2023 -+ #Customer intent: As an IT administrator, I want to use Azure CLI to set up routing of web traffic to specific pools of servers based on the URL that the customer uses, so I can ensure my customers have the most efficient route to the information they need.
for i in `seq 1 3`; do
if [ $i -eq 1 ] then
- poolName="appGatewayBackendPool"
+ poolName="appGatewayBackendPool"
fi if [ $i -eq 2 ]
automation Automation Hrw Run Runbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-hrw-run-runbooks.md
description: This article describes how to run runbooks on machines in your loca
Last updated 11/21/2023--++ # Run Automation runbooks on a Hybrid Runbook Worker
Jobs for Hybrid Runbook Workers run under the local **System** account.
> [!NOTE] > To create environment variable in Windows systems, follow these steps:
-> 1. Go to **Control Panel** > **System** > **Advanced System Settings**.
-> 1. In **System Properties** select **Environment variables**.
+> 1. Go to **Control Panel** > **System** > **Advanced System Settings**.
+> 1. In **System Properties** select **Environment variables**.
> 1. In **System variables**, select **New**.
-> 1. Provide **Variable name** and **Variable value**, and then select **OK**.
+> 1. Provide **Variable name** and **Variable value**, and then select **OK**.
> 1. Restart the VM or logout from the current user and login to implement the environment variable changes. **PowerShell 7.2** To run PowerShell 7.2 runbooks on a Windows Hybrid Worker, install *PowerShell* on the Hybrid Worker. See [Installing PowerShell on Windows](/powershell/scripting/install/installing-powershell-on-windows).
-After PowerShell 7.2 installation is complete, create an environment variable with Variable name as powershell_7_2_path and Variable value as location of the executable *PowerShell*. Restart the Hybrid Runbook Worker after environment variable is created successfully.
+After the PowerShell 7.2 installation is complete, create an environment variable with the **Variable name** `powershell_7_2_path` and the **Variable value** set to the location of the PowerShell executable. Restart the Hybrid Runbook Worker after the environment variable is created.
**PowerShell 7.1**
If the *Python* executable file is at the default location *C:\Python27\python.e
> [!NOTE] > To create environment variable in Windows systems, follow these steps:
-> 1. Go to **Control Panel** > **System** > **Advanced System Settings**.
-> 1. In **System Properties** select **Environment variables**.
+> 1. Go to **Control Panel** > **System** > **Advanced System Settings**.
+> 1. In **System Properties** select **Environment variables**.
> 1. In **System variables**, select **New**.
-> 1. Provide **Variable name** and **Variable value**, and then select **OK**.
+> 1. Provide **Variable name** and **Variable value**, and then select **OK**.
> 1. Restart the VM or logout from the current user and login to implement the environment variable changes. **PowerShell 7.1**
If the *Python* executable file is at the default location *C:\Python27\python.e
#### [Extension-based Hybrid Workers](#tab/Lin-extn-hrw) > [!NOTE]
-> To create environment variable in Linux systems, follow these steps:
-> 1. Open /etc/environment.
+> To create an environment variable on Linux systems, follow these steps:
+> 1. Open /etc/environment.
> 1. Create a new Environment variable by adding VARIABLE_NAME="variable_value" in a new line in /etc/environment (VARIABLE_NAME is the name of the new Environment variable and variable_value represents the value it is to be assigned). > 1. Restart the VM or logout from current user and login after saving the changes to /etc/environment to implement environment variable changes.
After Python 3.10 installation is complete, create an environment variable with
**Python 3.8**
-To run Python 3.8 runbooks on a Linux Hybrid Worker, install *Python* on the Hybrid Worker.
+To run Python 3.8 runbooks on a Linux Hybrid Worker, install *Python* on the Hybrid Worker.
Make sure to add the *Python* executable file to the PATH environment variable and restart the Hybrid Runbook Worker after the installation. **Python 2.7**
Ensure to add the executable *Python* file to the PATH environment variable and
#### [Agent-based Hybrid Workers](#tab/Lin-agt-hrw)
-Create Service accounts **nxautomation** and **omsagent** for agent-based Hybrid Workers. The creation and permission assignment script can be viewed at [linux data](https://github.com/microsoft/OMS-Agent-for-Linux/blob/master/installer/datafiles/linux.data). The accounts, with the corresponding sudo permissions, must be present during [installation of a Linux Hybrid Runbook worker](automation-linux-hrw-install.md).
+Create Service accounts **nxautomation** and **omsagent** for agent-based Hybrid Workers. The creation and permission assignment script can be viewed at [linux data](https://github.com/microsoft/OMS-Agent-for-Linux/blob/master/installer/datafiles/linux.data). The accounts, with the corresponding sudo permissions, must be present during [installation of a Linux Hybrid Runbook worker](automation-linux-hrw-install.md).
If you try to install the worker, and the account is not present or doesn't have the appropriate permissions, the installation fails. Do not change the permissions of the `sudoers.d` folder or its ownership. Sudo permission is required for the accounts and the permissions shouldn't be removed. Restricting this to certain folders or commands may result in a breaking change. The **nxautomation** user enabled as part of Update Management executes only signed runbooks.
There are two ways to use the Managed Identities in Hybrid Runbook Worker script
```powershell # Ensures you do not inherit an AzContext in your runbook Disable-AzContextAutosave -Scope Process
-
+ # Connect to Azure with system-assigned managed identity $AzureContext = (Connect-AzAccount -Identity).context
-
+ # set and store context
- $AzureContext = Set-AzContext -SubscriptionName $AzureContext.Subscription -DefaultProfile
+ $AzureContext = Set-AzContext -SubscriptionName $AzureContext.Subscription -DefaultProfile
$AzureContext # Get all VM names from the subscription
There are two ways to use the Managed Identities in Hybrid Runbook Worker script
> This will **NOT** work in an Automation Account which has been configured with an Automation account Managed Identity. As soon as the Automation account Managed Identity is enabled, it is no longer possible to use the VM Managed Identity and then, it is only possible to use the Automation Account System-Assigned Managed Identity as mentioned in option 1 above. Use any **one** of the following managed identities:
-
+ # [VM's system-assigned managed identity](#tab/sa-mi)
-
+ 1. [Configure](../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vmss.md) a System Managed Identity for the VM. 1. Grant this identity the [required permissions](../active-directory/managed-identities-azure-resources/tutorial-windows-vm-access-arm.md#grant-your-vm-access-to-a-resource-group-in-resource-manager) within the subscription to perform its tasks. 1. Update the runbook to use the [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) cmdlet with the `Identity` parameter to authenticate to Azure resources. This configuration reduces the need to use a Run As Account and perform the associated account management.
There are two ways to use the Managed Identities in Hybrid Runbook Worker script
```powershell # Ensures you do not inherit an AzContext in your runbook Disable-AzContextAutosave -Scope Process
-
+ # Connect to Azure with system-assigned managed identity $AzureContext = (Connect-AzAccount -Identity).context
-
+ # set and store context
- $AzureContext = Set-AzContext -SubscriptionName $AzureContext.Subscription -DefaultProfile
+ $AzureContext = Set-AzContext -SubscriptionName $AzureContext.Subscription -DefaultProfile
$AzureContext # Get all VM names from the subscription
- Get-AzVM -DefaultProfile $AzureContext | Select Name
+ Get-AzVM -DefaultProfile $AzureContext | Select Name
```
-
+ # [VM's user-assigned managed identity](#tab/ua-mi) 1. [Configure](../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vmss.md#user-assigned-managed-identity) a User Managed Identity for the VM.
There are two ways to use the Managed Identities in Hybrid Runbook Worker script
```powershell # Ensures you do not inherit an AzContext in your runbook Disable-AzContextAutosave -Scope Process
-
+ # Connect to Azure with user-managed-assigned managed identity. Replace <ClientId> below with the Client Id of the User Managed Identity $AzureContext = (Connect-AzAccount -Identity -AccountId <ClientId>).context
-
+ # set and store context
- $AzureContext = Set-AzContext -SubscriptionName $AzureContext.Subscription -DefaultProfile
+ $AzureContext = Set-AzContext -SubscriptionName $AzureContext.Subscription -DefaultProfile
$AzureContext # Get all VM names from the subscription
- Get-AzVM -DefaultProfile $AzureContext | Select Name
+ Get-AzVM -DefaultProfile $AzureContext | Select Name
```
-
+ > [!NOTE] > You can find the client Id of the user-assigned managed identity in the Azure portal.
- > :::image type="content" source="./media/automation-hrw-run-runbooks/managed-identities-client-id-inline.png" alt-text="Screenshot of client id in Managed Identites." lightbox="./media/automation-hrw-run-runbooks/managed-identities-client-id-expanded.png":::
+ > :::image type="content" source="./media/automation-hrw-run-runbooks/managed-identities-client-id-inline.png" alt-text="Screenshot of client ID in Managed Identities." lightbox="./media/automation-hrw-run-runbooks/managed-identities-client-id-expanded.png":::
There are two ways to use the Managed Identities in Hybrid Runbook Worker script
1. You can grant this Managed Identity access to resources in your subscription in the Access control (IAM) blade for the resource by adding the appropriate role assignment. :::image type="content" source="./media/automation-hrw-run-runbooks/access-control-add-role-assignment.png" alt-text="Screenshot of how to select managed identities.":::
-
+ 2. Add the Azure Arc Managed Identity to your chosen role as required. :::image type="content" source="./media/automation-hrw-run-runbooks/select-managed-identities-inline.png" alt-text="Screenshot of how to add role assignment in the Access control blade." lightbox="./media/automation-hrw-run-runbooks/select-managed-identities-expanded.png":::
-
+ > [!NOTE] > This will **NOT** work in an Automation Account which has been configured with an Automation account Managed Identity. As soon as the Automation account Managed Identity is enabled, it is no longer possible to use the Arc Managed Identity and then, it is **only** possible to use the Automation Account System-Assigned Managed Identity as mentioned in option 1 above. >[!NOTE]
->By default, the Azure contexts are saved for use between PowerShell sessions. It is possible that when a previous runbook on the Hybrid Runbook Worker has been authenticated with Azure, that context persists to the disk in the System PowerShell profile, as per [Azure contexts and sign-in credentials | Microsoft Docs](/powershell/azure/context-persistence).
+>By default, the Azure contexts are saved for use between PowerShell sessions. It is possible that when a previous runbook on the Hybrid Runbook Worker has been authenticated with Azure, that context persists to the disk in the System PowerShell profile, as per [Azure contexts and sign-in credentials | Microsoft Docs](/powershell/azure/context-persistence).
For instance, a runbook with `Get-AzVM` can return all the VMs in the subscription with no call to `Connect-AzAccount`, and the user would be able to access Azure resources without having to authenticate within that runbook. You can disable context autosave in Azure PowerShell, as detailed [here](/powershell/azure/context-persistence#save-azure-contexts-across-powershell-sessions).
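For example, a minimal sketch of turning off context autosave for the account that runs the worker:

```powershell
# Stop Azure PowerShell from persisting contexts between sessions on this worker.
Disable-AzContextAutosave -Scope CurrentUser
```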
-
+ ### Use runbook authentication with Hybrid Worker Credentials Instead of having your runbook provide its own authentication to local resources, you can specify Hybrid Worker Credentials for a Hybrid Runbook Worker group. To specify a Hybrid Worker Credentials, you must define a [credential asset](./shared-resources/credentials.md) that has access to local resources. These resources include certificate stores and all runbooks run under these credentials on a Hybrid Runbook Worker in the group.
By default, the Hybrid jobs run under the context of System account. However, to
1. Select **Settings**. 1. Change the value of **Hybrid Worker credentials** from **Default** to **Custom**. 1. Select the credential and click **Save**.
-1. If the following permissions are not assigned for Custom users, jobs might get suspended.
+1. If the following permissions are not assigned for Custom users, jobs might get suspended.
| **Resource type** | **Folder permissions** |
| --- | --- |
|Azure VM | C:\Packages\Plugins\Microsoft.Azure.Automation.HybridWorker.HybridWorkerForWindows (read and execute) |
|Arc-enabled Server | C:\ProgramData\AzureConnectedMachineAgent\Tokens (read)</br> C:\Packages\Plugins\Microsoft.Azure.Automation.HybridWorker.HybridWorkerForWindows (read and execute) |
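As an illustrative sketch only (the account name is a placeholder for the custom Hybrid Worker credential you configured), the Azure VM folder permission could be granted with icacls from an elevated PowerShell session:

```powershell
# Hypothetical example: grant read and execute on the Hybrid Worker plugin folder
# to the custom credential account (replace CONTOSO\hybridworker with your account).
icacls "C:\Packages\Plugins\Microsoft.Azure.Automation.HybridWorker.HybridWorkerForWindows" `
    /grant "CONTOSO\hybridworker:(OI)(CI)RX" /T
```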
-
+ >[!NOTE] >Linux Hybrid Worker doesn't support Hybrid Worker credentials.
-
+ ## Start a runbook on a Hybrid Runbook Worker [Start a runbook in Azure Automation](start-runbooks.md) describes different methods for starting a runbook. Starting a runbook on a Hybrid Runbook Worker uses a **Run on** option that allows you to specify the name of a Hybrid Runbook Worker group. When a group is specified, one of the workers in that group retrieves and runs the runbook. If your runbook does not specify this option, Azure Automation runs the runbook as usual.
You can configure a Windows Hybrid Runbook Worker to run only signed runbooks.
> Once you've configured a Hybrid Runbook Worker to run only signed runbooks, unsigned runbooks fail to execute on the worker. > [!NOTE]
-> PowerShell 7.x does not support signed runbooks for Windows and Linux Hybrid Runbook Worker.
+> PowerShell 7.x does not support signed runbooks for Windows and Linux Hybrid Runbook Workers.
### Create signing certificate
automation Automation Linux Hrw Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-linux-hrw-install.md
Title: Deploy an agent-based Linux Hybrid Runbook Worker in Automation
description: This article tells how to install an agent-based Hybrid Runbook Worker to run runbooks on Linux-based machines in your local datacenter or cloud environment. -+ Last updated 09/17/2023-+ # Deploy an agent-based Linux Hybrid Runbook Worker in Automation
The Hybrid Runbook Worker feature supports the following distributions. All oper
* Oracle Linux 6, 7, and 8 * Red Hat Enterprise Linux Server 5, 6, 7, and 8 * Debian GNU/Linux 6, 7, and 8
-* SUSE Linux Enterprise Server 12, 15, and 15.1 (SUSE didn't release versions numbered 13 or 14)
+* SUSE Linux Enterprise Server 12, 15, and 15.1 (SUSE didn't release versions numbered 13 or 14)
* Ubuntu

  **Linux OS** | **Name** |
  | | |
- 20.04 LTS | Focal Fossa
- 18.04 LTS | Bionic Beaver
- 16.04 LTS | Xenial Xerus
- 14.04 LTS | Trusty Tahr
+ 20.04 LTS | Focal Fossa
+ 18.04 LTS | Bionic Beaver
+ 16.04 LTS | Xenial Xerus
+ 14.04 LTS | Trusty Tahr
> [!IMPORTANT] > Before enabling the Update Management feature, which depends on the system Hybrid Runbook Worker role, confirm the distributions it supports [here](update-management/operating-system-requirements.md).
Run the following commands as root on the agent-based Linux Hybrid Worker:
> [!NOTE]
- > - This script doesn't remove the Log Analytics agent for Linux from the machine. It only removes the functionality and configuration of the Hybrid Runbook Worker role.
+ > - This script doesn't remove the Log Analytics agent for Linux from the machine. It only removes the functionality and configuration of the Hybrid Runbook Worker role.
> - After you disable the Private Link in your Automation account, it might take up to 60 minutes to remove the Hybrid Runbook worker. > - After you remove the Hybrid Worker, the Hybrid Worker authentication certificate on the machine is valid for 45 minutes.
To check the version of agent-based Linux Hybrid Runbook Worker, go to the follo
```bash sudo cat /opt/microsoft/omsconfig/modules/nxOMSAutomationWorker/VERSION ```
-The file *VERSION* has the version number of Hybrid Runbook Worker.
+The file *VERSION* has the version number of Hybrid Runbook Worker.
## Next steps
automation Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/change-tracking/overview.md
description: This article describes the Change Tracking and Inventory feature, w
Last updated 12/13/2023-+
Change Tracking and Inventory doesn't support or has the following limitations:
- Different installation methods - ***.exe** files stored on Windows - The **Max File Size** column and values are unused in the current implementation.-- If you are tracking file changes, it is limited to a file size of 5 MB or less.
+- If you are tracking file changes, it is limited to a file size of 5 MB or less.
- If the file size appears >1.25MB, then FileContentChecksum is incorrect due to memory constraints in the checksum calculation. - If you try to collect more than 2500 files in a 30-minute collection cycle, Change Tracking and Inventory performance might be degraded. - If network traffic is high, change records can take up to six hours to display.
Change Tracking and Inventory now support Python 2 and Python 3. If your machine
> [!NOTE] > To use the OMS agent compatible with Python 3, ensure that you first uninstall Python 2; otherwise, the OMS agent will continue to run with python 2 by default.
-#### [Python 2](#tab/python-2)
-- Red Hat, CentOS, Oracle:
+#### [Python 2](#tab/python-2)
+- Red Hat, CentOS, Oracle:
```bash sudo yum install -y python2 ``` - Ubuntu, Debian:
-
+ ```bash sudo apt-get update sudo apt-get install -y python2 ``` - SUSE:
-
+ ```bash sudo zypper install -y python2 ```
Change Tracking and Inventory now support Python 2 and Python 3. If your machine
```bash sudo yum install -y python3 ```-- Ubuntu, Debian:
+- Ubuntu, Debian:
```bash sudo apt-get update sudo apt-get install -y python3 ```-- SUSE:
-
+- SUSE:
+ ```bash sudo zypper install -y python3 ```
-
+ ## Network requirements
A key capability of Change Tracking and Inventory is alerting on changes to the
|ConfigurationChange <br>&#124; where RegistryKey contains @"HKEY_LOCAL_MACHINE\\SYSTEM\\CurrentControlSet\\Services\\SharedAccess\\Parameters\\FirewallPolicy"| Useful for tracking changes to firewall settings.|
-## Update Log Analytics agent to latest version
+## Update Log Analytics agent to latest version
-For Change Tracking & Inventory, machines use the [Log Analytics agent](../../azure-monitor/agents/log-analytics-agent.md) to collect data about changes to installed software, Windows services, Windows registry and files, and Linux daemons on monitored servers. Soon, Azure will no longer accept connections from older versions of the Windows Log Analytics (LA) agent, also known as the Windows Microsoft Monitoring Agent (MMA), that uses an older method for certificate handling. We recommend to upgrade your agent to the latest version as soon as possible.
+For Change Tracking & Inventory, machines use the [Log Analytics agent](../../azure-monitor/agents/log-analytics-agent.md) to collect data about changes to installed software, Windows services, Windows registry and files, and Linux daemons on monitored servers. Soon, Azure will no longer accept connections from older versions of the Windows Log Analytics (LA) agent, also known as the Windows Microsoft Monitoring Agent (MMA), that uses an older method for certificate handling. We recommend upgrading your agent to the latest version as soon as possible.
-[Agents that are on version - 10.20.18053 (bundle) and 1.0.18053.0 (extension)](../../virtual-machines/extensions/oms-windows.md#agent-and-vm-extension-version) or newer aren't affected in response to this change. If youΓÇÖre on an agent prior to that, your agent will be unable to connect, and the Change Tracking & Inventory pipeline & downstream activities can stop. You can check the current LA agent version in HeartBeat table within your LA Workspace.
+[Agents that are on version - 10.20.18053 (bundle) and 1.0.18053.0 (extension)](../../virtual-machines/extensions/oms-windows.md#agent-and-vm-extension-version) or newer aren't affected by this change. If you're on an earlier agent version, your agent will be unable to connect, and the Change Tracking & Inventory pipeline and downstream activities can stop. You can check the current LA agent version in the Heartbeat table within your Log Analytics workspace.
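For example, a hedged sketch of checking reported agent versions from the Heartbeat table with Az PowerShell (the workspace ID is a placeholder):

```powershell
# Return the most recently reported Log Analytics agent version for each computer.
Invoke-AzOperationalInsightsQuery -WorkspaceId "<workspace-id>" `
    -Query "Heartbeat | summarize arg_max(TimeGenerated, Version) by Computer | project Computer, Version, TimeGenerated"
```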
-Ensure to upgrade to the latest version of the Windows Log Analytics agent (MMA) following these [guidelines](../../azure-monitor/agents/agent-manage.md).
+Make sure to upgrade to the latest version of the Windows Log Analytics agent (MMA) by following these [guidelines](../../azure-monitor/agents/agent-manage.md).
## Next steps
automation Migrate Existing Agent Based Hybrid Worker To Extension Based Workers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/migrate-existing-agent-based-hybrid-worker-to-extension-based-workers.md
description: This article provides information on how to migrate an existing age
Last updated 12/10/2023-+ #Customer intent: As a developer, I want to learn about extension so that I can efficiently migrate agent based hybrid workers to extension based workers.
> [!IMPORTANT] > Azure Automation Agent-based User Hybrid Runbook Worker (Windows and Linux) will retire on **31 August 2024** and wouldn't be supported after that date. You must complete migrating existing Agent-based User Hybrid Runbook Workers to Extension-based Workers before 31 August 2024. Moreover, starting **1 November 2023**, creating new Agent-based Hybrid Workers wouldn't be possible. [Learn more](migrate-existing-agent-based-hybrid-worker-to-extension-based-workers.md).
-This article describes the benefits of Extension-based User Hybrid Runbook Worker and how to migrate existing Agent-based User Hybrid Runbook Workers to Extension-based Hybrid Workers.
+This article describes the benefits of Extension-based User Hybrid Runbook Worker and how to migrate existing Agent-based User Hybrid Runbook Workers to Extension-based Hybrid Workers.
There are two Hybrid Runbook Worker installation platforms supported by Azure Automation: - **Agent based hybrid runbook worker** (V1) - The Agent-based hybrid runbook worker depends on the [Log Analytics Agent](../azure-monitor/agents/log-analytics-agent.md).
The process of executing runbooks on Hybrid Runbook Workers remains the same for
The purpose of the Extension-based approach is to simplify the installation and management of the Hybrid Worker and remove the complexity working with the Agent-based version. Here are some key benefits: -- **Seamless onboarding** – The Agent-based approach for onboarding Hybrid Runbook worker is dependent on the Log Analytics Agent, which is a multi-step, time-consuming, and error-prone process. The Extension-based approach offers more security and is no longer dependent on the Log Analytics Agent.
+- **Seamless onboarding** – The Agent-based approach for onboarding Hybrid Runbook worker is dependent on the Log Analytics Agent, which is a multi-step, time-consuming, and error-prone process. The Extension-based approach offers more security and is no longer dependent on the Log Analytics Agent.
-- **Ease of Manageability** – It offers native integration with Azure Resource Manager (ARM) identity for Hybrid Runbook Worker and provides the flexibility for governance at scale through policies and templates.
+- **Ease of Manageability** – It offers native integration with Azure Resource Manager (ARM) identity for Hybrid Runbook Worker and provides the flexibility for governance at scale through policies and templates.
-- **Microsoft Entra ID based authentication** – It uses a VM system-assigned managed identities provided by Microsoft Entra ID. This centralizes control and management of identities and resource credentials.
+- **Microsoft Entra ID based authentication** – It uses a VM system-assigned managed identities provided by Microsoft Entra ID. This centralizes control and management of identities and resource credentials.
-- **Unified experience** – It offers an identical experience for managing Azure and off-Azure Arc-enabled machines.
+- **Unified experience** – It offers an identical experience for managing Azure and off-Azure Arc-enabled machines.
-- **Multiple onboarding channels** – You can choose to onboard and manage Extension-based workers through the Azure portal, PowerShell cmdlets, Bicep, ARM templates, REST API and Azure CLI.
+- **Multiple onboarding channels** – You can choose to onboard and manage Extension-based workers through the Azure portal, PowerShell cmdlets, Bicep, ARM templates, REST API and Azure CLI.
- **Default Automatic upgrade** – It offers Automatic upgrade of minor versions by default, significantly reducing the manageability of staying updated on the latest version. We recommend enabling Automatic upgrades to take advantage of any security or feature updates without the manual overhead. You can also opt out of automatic upgrades at any time. Any major version upgrades are currently not supported and should be managed manually.
The purpose of the Extension-based approach is to simplify the installation and
- 4 GB of RAM - **Non-Azure machines** must have the [Azure Connected Machine agent](../azure-arc/servers/agent-overview.md) installed. To install the `AzureConnectedMachineAgent`, see [Connect hybrid machines to Azure from the Azure portal](../azure-arc/servers/onboard-portal.md) for Arc-enabled servers or see [Manage VMware virtual machines Azure Arc](../azure-arc/vmware-vsphere/manage-vmware-vms-in-azure.md#enable-guest-management) to enable guest management for Arc-enabled VMware vSphere VMs. - The system-assigned managed identity must be enabled on the Azure virtual machine, Arc-enabled server or Arc-enabled VMware vSphere VM. If the system-assigned managed identity isn't enabled, it will be enabled as part of the installation process through the Azure portal.
-
+ ### Supported operating systems | Windows (x64) | Linux (x64) |
To install Hybrid worker extension on an existing agent based hybrid worker, fol
1. Select **Add** to append the machine to the group.
- The **Platform** column shows the same Hybrid worker as both **Agent based (V1)** and **Extension based (V2)**. After you're confident of the extension based Hybrid Worker experience and use, you can [remove](#remove-agent-based-hybrid-worker) the agent based Worker.
+ The **Platform** column shows the same Hybrid worker as both **Agent based (V1)** and **Extension based (V2)**. After you're confident of the extension based Hybrid Worker experience and use, you can [remove](#remove-agent-based-hybrid-worker) the agent based Worker.
:::image type="content" source="./media/migrate-existing-agent-based-hybrid-worker-extension-based-hybrid-worker/hybrid-workers-group-platform-inline.png" alt-text="Screenshot of platform field showing agent or extension based hybrid worker." lightbox="./media/migrate-existing-agent-based-hybrid-worker-extension-based-hybrid-worker/hybrid-workers-group-platform-expanded.png":::
Follow the steps mentioned below as an example:
1. Create a Hybrid Worker Group. 1. Create either an Azure VM or Arc-enabled server. Alternatively, you can also use an existing Azure VM or Arc-enabled server.
-1. Connect the Azure VM or Arc-enabled server to the above created Hybrid Worker Group.
+1. Connect the Azure VM or Arc-enabled server to the above created Hybrid Worker Group.
1. Generate a new GUID and pass it as the name of the Hybrid Worker. 1. Enable System-assigned managed identity on the VM. 1. Install Hybrid Worker Extension on the VM.
Follow the steps mentioned below as an example:
1. Create a Hybrid Worker Group. 1. Create either an Azure VM or Arc-enabled server. Alternatively, you can also use an existing Azure VM or Arc-enabled server.
-1. Connect the Azure VM or Arc-enabled server to the above created Hybrid Worker Group.
+1. Connect the Azure VM or Arc-enabled server to the above created Hybrid Worker Group.
1. Generate a new GUID and pass it as the name of the Hybrid Worker. 1. Enable System-assigned managed identity on the VM. 1. Install Hybrid Worker Extension on the VM.
Review the parameters used in this template.
| osVersion | The OS for the new Windows VM. The default value is `2019-Datacenter`. | | dnsNameForPublicIP | The DNS name for the public IP. |
-
+ #### [REST API](#tab/rest-api) **Prerequisites**
To install and use Hybrid Worker extension using REST API, follow these steps. T
GET https://westcentralus.management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Automation/automationAccounts/{automationAccountName}/hybridRunbookWorkerGroups/{hybridRunbookWorkerGroupName}/hybridRunbookWorkers/{hybridRunbookWorkerId}?api-version=2021-06-22 ```
-
+ 1. Follow the steps [here](../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md#enable-system-assigned-managed-identity-on-an-existing-vm) to enable the System-assigned managed identity on the VM. 1. Get the automation account details using this API call.
To install and use Hybrid Worker extension using REST API, follow these steps. T
The API call will provide the value with the key: `AutomationHybridServiceUrl`. Use the URL in the next step to enable extension on the VM.
-1. Install the Hybrid Worker Extension on Azure VM by using the following API call.
-
+1. Install the Hybrid Worker Extension on Azure VM by using the following API call.
+ ```http PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachines/{vmName}/extensions/HybridWorkerExtension?api-version=2021-11-01 ```
-
+ The request body should contain the following information: ```json
To install and use Hybrid Worker extension using REST API, follow these steps. T
} ```
-
+ For ARC VMs, use the below API call for enabling the extension: ```http PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.HybridCompute/machines/{machineName}/extensions/{extensionName}?api-version=2021-05-20 ```
-
+ The request body should contain the following information: ```json
Follow the steps mentioned below as an example:
1. Install Hybrid Worker Extension on the VM ```azurecli-interactive
- az vm extension set --name HybridWorkerExtension --publisher Microsoft.Azure.Automation.HybridWorker --version 1.1 --vm-name <vmname> -g <resourceGroupName> \
- --settings '{"AutomationAccountURL" = "<registration-url>";}' --enable-auto-upgrade true
+ az vm extension set --name HybridWorkerExtension --publisher Microsoft.Azure.Automation.HybridWorker --version 1.1 --vm-name <vmname> -g <resourceGroupName> \
+ --settings '{"AutomationAccountURL": "<registration-url>"}' --enable-auto-upgrade true
``` 1. To confirm if the extension has been successfully installed on the VM, in **Azure portal**, go to the VM > **Extensions** tab and check the status of the Hybrid Worker extension installed on the VM.
Follow the steps mentioned below as an example:
1. Create a Hybrid Worker Group. ```powershell-interactive
- New-AzAutomationHybridRunbookWorkerGroup -AutomationAccountName "Contoso17" -Name "RunbookWorkerGroupName" -ResourceGroupName "ResourceGroup01"
+ New-AzAutomationHybridRunbookWorkerGroup -AutomationAccountName "Contoso17" -Name "RunbookWorkerGroupName" -ResourceGroupName "ResourceGroup01"
``` 1. Create an Azure VM or Arc-enabled server and add it to the above created Hybrid Worker Group. Use the below command to add an existing Azure VM or Arc-enabled Server to the Hybrid Worker Group. Generate a new GUID and pass it as `hybridRunbookWorkerGroupName`. To fetch `vmResourceId`, go to the **Properties** tab of the VM on Azure portal. ```azurepowershell
- New-AzAutomationHybridRunbookWorker -AutomationAccountName "Contoso17" -Name "RunbookWorkerName" -HybridRunbookWorkerGroupName "RunbookWorkerGroupName" -VmResourceId "VmResourceId" -ResourceGroupName "ResourceGroup01"
+ New-AzAutomationHybridRunbookWorker -AutomationAccountName "Contoso17" -Name "RunbookWorkerName" -HybridRunbookWorkerGroupName "RunbookWorkerGroupName" -VmResourceId "VmResourceId" -ResourceGroupName "ResourceGroup01"
``` 1. Follow the steps [here](../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md#enable-system-assigned-managed-identity-on-an-existing-vm) to enable the System-assigned managed identity on the VM. 1. Install Hybrid Worker Extension on the VM.
-
+ **Hybrid Worker extension settings** ```powershell-interactive
Follow the steps mentioned below as an example:
"AutomationAccountURL" = "<registrationurl>"; }; ```
-
+ **Azure VMs** ```powershell
automation Remove Node And Configuration Package https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/state-configuration/remove-node-and-configuration-package.md
description: This article explains how to remove an Azure Automation State Confi
-+ Last updated 04/16/2021
To find the package names and other relevant details, see the [PowerShell Desire
```bash rpm -e <package name>
-```
+```
### dpkg-based systems
automation Hybrid Runbook Worker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/troubleshoot/hybrid-runbook-worker.md
Title: Troubleshoot agent-based Hybrid Runbook Worker issues in Azure Automation
description: This article tells how to troubleshoot and resolve issues that arise with Azure Automation agent-based Hybrid Runbook Workers. Last updated 09/17/2023--++ # Troubleshoot agent-based Hybrid Runbook Worker issues in Automation
This error can occur due to the following reasons:
- The Hybrid Runbook Worker extension has been uninstalled from the machine. #### Resolution-- Ensure that the machine exists, and Hybrid Runbook Worker extension is installed on it. The Hybrid Worker should be healthy and should give a heartbeat. Troubleshoot any network issues by checking the Microsoft-SMA event logs on the Workers in the Hybrid Runbook Worker Group that tried to run this job. -- You can also monitor [HybridWorkerPing](../../azure-monitor/essentials/metrics-supported.md#microsoftautomationautomationaccounts) metric that provides the number of pings from a Hybrid Worker and can help to check ping-related issues.
+- Ensure that the machine exists, and Hybrid Runbook Worker extension is installed on it. The Hybrid Worker should be healthy and should give a heartbeat. Troubleshoot any network issues by checking the Microsoft-SMA event logs on the Workers in the Hybrid Runbook Worker Group that tried to run this job.
+- You can also monitor [HybridWorkerPing](../../azure-monitor/essentials/metrics-supported.md#microsoftautomationautomationaccounts) metric that provides the number of pings from a Hybrid Worker and can help to check ping-related issues.
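For instance, a minimal sketch of pulling that metric with Az PowerShell (the resource ID is a placeholder for your Automation account):

```powershell
# List HybridWorkerPing values for the last hour at one-minute granularity.
Get-AzMetric -ResourceId "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Automation/automationAccounts/<account-name>" `
    -MetricName "HybridWorkerPing" -StartTime (Get-Date).AddHours(-1) -EndTime (Get-Date) -TimeGrain 00:01:00
```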
### Scenario: Job was suspended as it exceeded the job limit for a Hybrid Worker
Job gets suspended with the following error message:
#### Cause Jobs might get suspended due to any of the following reasons:-- Each active Hybrid Worker in the group will poll for jobs every 30 seconds to see if any jobs are available. The Worker picks jobs on a first-come, first-serve basis. Depending on when a job was pushed, whichever Hybrid Worker within the Hybrid Worker Group pings the Automation service first picks up the job. A single hybrid worker can generally pick up four jobs per ping (that is, every 30 seconds). If your rate of pushing jobs is higher than four per 30 seconds and no other Worker picks up the job, the job might get suspended. -- Hybrid Worker might not be polling as expected every 30 seconds. This could happen if the Worker is not healthy or there are network issues.
+- Each active Hybrid Worker in the group will poll for jobs every 30 seconds to see if any jobs are available. The Worker picks jobs on a first-come, first-serve basis. Depending on when a job was pushed, whichever Hybrid Worker within the Hybrid Worker Group pings the Automation service first picks up the job. A single hybrid worker can generally pick up four jobs per ping (that is, every 30 seconds). If your rate of pushing jobs is higher than four per 30 seconds and no other Worker picks up the job, the job might get suspended.
+- Hybrid Worker might not be polling as expected every 30 seconds. This could happen if the Worker is not healthy or there are network issues.
#### Resolution-- If the job limit for a Hybrid Worker exceeds four jobs per 30 seconds, you can add more Hybrid Workers to the Hybrid Worker group for high availability and load balancing. You can also schedule jobs so they do not exceed the limit of four jobs per 30 seconds. The processing time of the jobs queue depends on the Hybrid worker hardware profile and load. Ensure that the Hybrid Worker is healthy and gives a heartbeat. -- Troubleshoot any network issues by checking the Microsoft-SMA event logs on the Workers in the Hybrid Runbook Worker Group that tried to run this job. -- You can also monitor the [HybridWorkerPing](../../azure-monitor/essentials/metrics-supported.md#microsoftautomationautomationaccounts) metric that provides the number of pings from a Hybrid Worker and can help to check ping-related issues.
+- If the job limit for a Hybrid Worker exceeds four jobs per 30 seconds, you can add more Hybrid Workers to the Hybrid Worker group for high availability and load balancing. You can also schedule jobs so they do not exceed the limit of four jobs per 30 seconds. The processing time of the jobs queue depends on the Hybrid worker hardware profile and load. Ensure that the Hybrid Worker is healthy and gives a heartbeat.
+- Troubleshoot any network issues by checking the Microsoft-SMA event logs on the Workers in the Hybrid Runbook Worker Group that tried to run this job.
+- You can also monitor the [HybridWorkerPing](../../azure-monitor/essentials/metrics-supported.md#microsoftautomationautomationaccounts) metric that provides the number of pings from a Hybrid Worker and can help to check ping-related issues.
You can't see the Hybrid Runbook Worker or VMs when the worker machine has been
#### Cause
-The Hybrid Runbook Worker machine hasn't pinged Azure Automation for more than 30 days. As a result, Automation has purged the Hybrid Runbook Worker group or the System Worker group.
+The Hybrid Runbook Worker machine hasn't pinged Azure Automation for more than 30 days. As a result, Automation has purged the Hybrid Runbook Worker group or the System Worker group.
#### Resolution
Start the worker machine, and then re-register it with Azure Automation. For ins
A runbook running on a Hybrid Runbook Worker fails with the following error message:
-`Connect-AzAccount : No certificate was found in the certificate store with thumbprint 0000000000000000000000000000000000000000`
-`At line:3 char:1`
-`+ Connect-AzAccount -ServicePrincipal -Tenant $Conn.TenantID -Appl ...`
-`+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~`
-` + CategoryInfo : CloseError: (:) [Connect-AzAccount],ArgumentException`
+`Connect-AzAccount : No certificate was found in the certificate store with thumbprint 0000000000000000000000000000000000000000`
+`At line:3 char:1`
+`+ Connect-AzAccount -ServicePrincipal -Tenant $Conn.TenantID -Appl ...`
+`+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~`
+` + CategoryInfo : CloseError: (:) [Connect-AzAccount],ArgumentException`
` + FullyQualifiedErrorId : Microsoft.Azure.Commands.Profile.ConnectAzAccountCommand` #### Cause
The worker's initial registration phase fails, and you receive the following err
The following issues are possible causes:
-* There's a mistyped workspace ID or workspace key (primary) in the agent's settings.
+* There's a mistyped workspace ID or workspace key (primary) in the agent's settings.
* The Hybrid Runbook Worker can't download the configuration, which causes an account linking error. When Azure enables features on machines, it supports only certain regions for linking a Log Analytics workspace and an Automation account. It's also possible that an incorrect date or time is set on the computer. If the time is +/- 15 minutes from the current time, feature deployment fails. * Log Analytics Gateway is not configured to support Hybrid Runbook Worker.
You might also need to update the date or time zone of your computer. If you sel
Follow the steps mentioned [here](../../azure-monitor/agents/gateway.md#configure-for-automation-hybrid-runbook-workers) to add Hybrid Runbook Worker endpoints to the Log Analytics Gateway.
-### <a name="set-azstorageblobcontent-execution-fails"></a>Scenario: Set-AzStorageBlobContent fails on a Hybrid Runbook Worker
+### <a name="set-azstorageblobcontent-execution-fails"></a>Scenario: Set-AzStorageBlobContent fails on a Hybrid Runbook Worker
#### Issue
Hybrid workers send [Runbook output and messages](../automation-runbook-output-a
#### Issue
-A script running on a Windows Hybrid Runbook Worker can't connect as expected to Microsoft 365 on an Orchestrator sandbox. The script is using [Connect-MsolService](/powershell/module/msonline/connect-msolservice) for connection.
+A script running on a Windows Hybrid Runbook Worker can't connect as expected to Microsoft 365 on an Orchestrator sandbox. The script is using [Connect-MsolService](/powershell/module/msonline/connect-msolservice) for connection.
If you adjust **Orchestrator.Sandbox.exe.config** to set the proxy and the bypass list, the sandbox still doesn't connect properly. A **Powershell_ise.exe.config** file with the same proxy and bypass list settings seems to work as you expect. Service Management Automation (SMA) logs and PowerShell logs don't provide any information about the proxy. #### Cause
-The connection to Active Directory Federation Services (AD FS) on the server can't bypass the proxy. Remember that a PowerShell sandbox runs as the logged user. However, an Orchestrator sandbox is heavily customized and might ignore the **Orchestrator.Sandbox.exe.config** file settings. It has special code for handling machine or Log Analytics agent proxy settings, but not for handling other custom proxy settings.
+The connection to Active Directory Federation Services (AD FS) on the server can't bypass the proxy. Remember that a PowerShell sandbox runs as the logged-in user. However, an Orchestrator sandbox is heavily customized and might ignore the **Orchestrator.Sandbox.exe.config** file settings. It has special code for handling machine or Log Analytics agent proxy settings, but not for handling other custom proxy settings.
#### Resolution You can resolve the issue for the Orchestrator sandbox by migrating your script to use the Microsoft Entra modules instead of the MSOnline module for PowerShell cmdlets. For more information, see [Migrating from Orchestrator to Azure Automation (Beta)](../automation-orchestrator-migration.md).
-ΓÇïIf you want to continue to use the MSOnline module cmdlets, change your script to use [Invoke-Command](/powershell/module/microsoft.powershell.core/invoke-command). Specify values for the `ComputerName` and `Credential` parameters.
+If you want to continue to use the MSOnline module cmdlets, change your script to use [Invoke-Command](/powershell/module/microsoft.powershell.core/invoke-command). Specify values for the `ComputerName` and `Credential` parameters.
```powershell $Credential = Get-AutomationPSCredential -Name MyProxyAccessibleCredential
-Invoke-Command -ComputerName $env:COMPUTERNAME -Credential $Credential
+Invoke-Command -ComputerName $env:COMPUTERNAME -Credential $Credential
{ Connect-MsolService … }​ ```
automation Update Agent Issues Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/troubleshoot/update-agent-issues-linux.md
Last updated 11/01/2021 -+ # Troubleshoot Linux update agent issues
To verify if a VM is an Azure VM, check for Asset tag value using the below comm
sudo dmidecode ```
-If the asset tag is different than 7783-7084-3265-9085-8269-3286-77, then reboot VM to initiate re-registration.
+If the asset tag is different from 7783-7084-3265-9085-8269-3286-77, reboot the VM to initiate re-registration.
## Monitoring agent service health checks
If the asset tag is different than 7783-7084-3265-9085-8269-3286-77, then reboot
To fix this, install Azure Log Analytics Linux agent and ensure it communicates the required endpoints. For more information, see [Install Log Analytics agent on Linux computers](../../azure-monitor/agents/agent-linux.md).
-This task checks if the folder is present -
+This task checks whether the following configuration file is present:
*/etc/opt/microsoft/omsagent/conf/omsadmin.conf* ### Monitoring Agent status
-
-To fix this issue, you must start the OMS Agent service by using the following command:
+
+To fix this issue, you must start the OMS Agent service by using the following command:
```bash sudo /opt/microsoft/omsagent/bin/service_control restart ```
-To validate you can perform process check using the below command:
+To validate, you can check for the process by using the following command:
```bash
-process_name="omsagent"
-ps aux | grep %s | grep -v grep" % (process_name)"
+process_name="omsagent"
+ps aux | grep "$process_name" | grep -v grep
``` For more information, see [Troubleshoot issues with the Log Analytics agent for Linux](../../azure-monitor/agents/agent-linux-troubleshoot.md)
To fix this issue, purge the OMS Agent completely and reinstall it with the [wor
Validate that multihoming is no longer configured by checking the directories under this path:
- */var/opt/microsoft/omsagent*.
+ */var/opt/microsoft/omsagent*.
Each directory corresponds to a workspace, so the number of directories equals the number of workspaces onboarded to the OMS Agent. ### Hybrid Runbook Worker
-To fix the issue, run the following command:
+To fix the issue, run the following command:
```bash sudo su omsagent -c 'python /opt/microsoft/omsconfig/Scripts/PerformRequiredConfigurationChecks.py' ```
-This command forces the omsconfig agent to talk to Azure Monitor and retrieve the latest configuration.
+This command forces the omsconfig agent to talk to Azure Monitor and retrieve the latest configuration.
Validate to check if the following two paths exists:
To fix this issue, run the following command:
sudo su omsagent -c 'python /opt/microsoft/omsconfig/Scripts/PerformRequiredConfigurationChecks.py' ```
-This command forces the omsconfig agent to talk to Azure Monitor and retrieve the latest configuration.
+This command forces the omsconfig agent to talk to Azure Monitor and retrieve the latest configuration.
If the issue still persists, run the [omsagent Log Collector tool](https://github.com/Microsoft/OMS-Agent-for-Linux/blob/master/tools/LogCollector/OMS_Linux_Agent_Log_Collector.md)
HTTP_PROXY
To fix this issue, allow access to IP **169.254.169.254**. For more information, see [Access Azure Instance Metadata Service](../../virtual-machines/windows/instance-metadata-service.md#azure-instance-metadata-service-windows)
-After the network changes, you can either rerun the Troubleshooter or run the below commands to validate:
+After the network changes, you can either rerun the Troubleshooter or run the below commands to validate:
```bash curl -H \"Metadata: true\" http://169.254.169.254/metadata/instance?api-version=2018-02-01
After the network changes, you can either rerun the Troubleshooter or run the be
### General internet connectivity
-This check makes sure that the machine has access to the internet and can be ignored if you have blocked internet and allowed only specific URLs.
+This check makes sure that the machine has access to the internet. It can be ignored if you have blocked internet access and allowed only specific URLs.
CURL on any http url.
Fix this issue by allowing the prerequisite Repo URL. For RHEL, see [here](../..
Post making Network changes you can either rerun the Troubleshooter or
-Curl on software repositories configured in package manager.
+Curl on software repositories configured in package manager.
-Refreshing repos would help to confirm the communication.
+Refreshing repos would help to confirm the communication.
```bash sudo apt-get check
azure-arc Private Link Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/private-link-security.md
Title: Use Azure Private Link to securely connect servers to Azure Arc description: Learn how to use Azure Private Link to securely connect networks to Azure Arc. -+ Last updated 06/20/2023
See the visual diagram under the section [How it works](#how-it-works) for the n
1. Enter a **Name** for the endpoint. 1. Choose **Yes** for **Integrate with private DNS zone**, and let it automatically create a new Private DNS Zone.
-
+ > [!NOTE] > If you choose **No** and prefer to manage DNS records manually, first complete setting up your Private Link - including this Private Endpoint and the Private Scope configuration. Then, configure your DNS according to the instructions in [Azure Private Endpoint DNS configuration](../../private-link/private-endpoint-dns.md). Make sure not to create empty records as preparation for your Private Link setup. The DNS records you create can override existing settings and impact your connectivity with Azure Arc-enabled servers.
Once your Azure Arc Private Link Scope is created, you need to connect it with o
1. On the **Configuration** page,
- a. Choose the **virtual network** and **subnet** that you want to connect to your Azure Arc-enabled server.
+ a. Choose the **virtual network** and **subnet** that you want to connect to your Azure Arc-enabled server.
b. Choose **Yes** for **Integrate with private DNS zone**, and let it automatically create a new Private DNS Zone. The actual DNS zones might be different from what is shown in the screenshot below.
If you're only planning to use Private Links to support a few machines or server
### Configure a new Azure Arc-enabled server to use Private link
-When connecting a machine or server with Azure Arc-enabled servers for the first time, you can optionally connect it to a Private Link Scope. The following steps are
+When connecting a machine or server with Azure Arc-enabled servers for the first time, you can optionally connect it to a Private Link Scope. The following steps are
1. From your browser, go to the [Azure portal](https://portal.azure.com).
When connecting a machine or server with Azure Arc-enabled servers for the first
1. On the **Download and run script** page, review the summary information, and then select **Download**. If you still need to make changes, select **Previous**.
-After downloading the script, you have to run it on your machine or server using a privileged (administrator or root) account. Depending on your network configuration, you might need to download the agent from a computer with internet access and transfer it to your machine or server, and then modify the script with the path to the agent.
+After downloading the script, you have to run it on your machine or server using a privileged (administrator or root) account. Depending on your network configuration, you might need to download the agent from a computer with internet access and transfer it to your machine or server, and then modify the script with the path to the agent.
The Windows agent can be downloaded from [https://aka.ms/AzureConnectedMachineAgent](https://aka.ms/AzureConnectedMachineAgent) and the Linux agent can be downloaded from [https://packages.microsoft.com](https://packages.microsoft.com). Look for the latest version of the **azcmagent** under your OS distribution directory and install it with your local package manager.
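For example, a minimal sketch of grabbing the Windows agent from an internet-connected machine before copying it to the target server:

```powershell
# Download the Windows Connected Machine agent installer for offline transfer.
Invoke-WebRequest -Uri "https://aka.ms/AzureConnectedMachineAgent" -OutFile ".\AzureConnectedMachineAgent.msi"
```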
azure-cache-for-redis Cache High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-high-availability.md
Zone-redundant Enterprise and Enterprise Flash tier caches are available in the
| Canada Central* | North Europe | | | Australia East |
| Central US* | UK South | | | Central India |
| East US | West Europe | | | Southeast Asia |
-| East US 2 | | | | |
-| South Central US | | | | |
+| East US 2 | | | | Japan East* |
+| South Central US | | | | East Asia* |
| West US 2 | | | | |
+| West US 3 | | | | |
+| Brazil South | | | | |
\* Enterprise Flash tier not available in this region.
azure-functions Configure Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/configure-monitoring.md
The following chart describes the main categories of logs that the runtime creat
| Category | Table | Description |
| -- | -- | -- |
+| **`Function`** | **traces**| Includes function started and completed logs for all function runs. For successful runs, these logs are at the `Information` level. Exceptions are logged at the `Error` level. The runtime also creates `Warning` level logs, such as when queue messages are sent to the [poison queue](functions-bindings-storage-queue-trigger.md#poison-messages).|
| **`Function.<YOUR_FUNCTION_NAME>`** | **dependencies**| Dependency data is automatically collected for some services. For successful runs, these logs are at the `Information` level. For more information, see [Dependencies](functions-monitoring.md#dependencies). Exceptions are logged at the `Error` level. The runtime also creates `Warning` level logs, such as when queue messages are sent to the [poison queue](functions-bindings-storage-queue-trigger.md#poison-messages). | | **`Function.<YOUR_FUNCTION_NAME>`** | **customMetrics**<br/>**customEvents** | C# and JavaScript SDKs lets you collect custom metrics and log custom events. For more information, see [Custom telemetry data](functions-monitoring.md#custom-telemetry-data).| | **`Function.<YOUR_FUNCTION_NAME>`** | **traces**| Includes function started and completed logs for specific function runs. For successful runs, these logs are at the `Information` level. Exceptions are logged at the `Error` level. The runtime also creates `Warning` level logs, such as when queue messages are sent to the [poison queue](functions-bindings-storage-queue-trigger.md#poison-messages). |
With scale controller logging enabled, you're now able to [query your scale cont
## Enable Application Insights integration
-For a function app to send data to Application Insights, it needs to know the instrumentation key of an Application Insights resource. The key must be in an app setting named **APPINSIGHTS_INSTRUMENTATIONKEY**.
+For a function app to send data to Application Insights, it needs to connect to the Application Insights resource using **only one** of these application settings:
+
+| Setting name | Description |
+| - | - |
+| **[APPLICATIONINSIGHTS_CONNECTION_STRING](functions-app-settings.md#applicationinsights_connection_string)** | This is the recommended setting, which is required when your Application Insights instance runs in a sovereign cloud. The connection string supports other [new capabilities](../azure-monitor/app/migrate-from-instrumentation-keys-to-connection-strings.md#new-capabilities). |
+| **[APPINSIGHTS_INSTRUMENTATIONKEY](functions-app-settings.md#appinsights_instrumentationkey)** | Legacy setting, which is deprecated by Application Insights in favor of the connection string setting. |
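For example, a minimal sketch of adding the connection string setting with Az PowerShell (the app name, resource group, and connection string are placeholders):

```powershell
# Attach an existing function app to Application Insights by setting the connection string.
Update-AzFunctionAppSetting -Name "<function-app-name>" -ResourceGroupName "<resource-group>" `
    -AppSetting @{ "APPLICATIONINSIGHTS_CONNECTION_STRING" = "<connection-string>" }
```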
When you create your function app in the [Azure portal](./functions-get-started.md), from the command line by using [Azure Functions Core Tools](./create-first-function-cli-csharp.md), or by using [Visual Studio Code](./create-first-function-vs-code-csharp.md), Application Insights integration is enabled by default. The Application Insights resource has the same name as your function app, and it's created either in the same region or in the nearest region.
To review the Application Insights resource being created, select it to expand t
:::image type="content" source="media/functions-monitoring/enable-ai-new-function-app.png" alt-text="Screenshot of enabling Application Insights while creating a function app.":::
-When you select **Create**, an Application Insights resource is created with your function app, which has the `APPINSIGHTS_INSTRUMENTATIONKEY` set in application settings. Everything is ready to go.
+When you select **Create**, an Application Insights resource is created with your function app, which has the `APPLICATIONINSIGHTS_CONNECTION_STRING` set in application settings. Everything is ready to go.
<a id="manually-connect-an-app-insights-resource"></a> ### Add to an existing function app
-If an Application Insights resource wasn't created with your function app, use the following steps to create the resource. You can then add the instrumentation key from that resource as an [application setting](functions-how-to-use-azure-function-app-settings.md#settings) in your function app.
+If an Application Insights resource wasn't created with your function app, use the following steps to create the resource. You can then add the connection string from that resource as an [application setting](functions-how-to-use-azure-function-app-settings.md#settings) in your function app.
1. In the [Azure portal](https://portal.azure.com), search for and select **function app**, and then select your function app.
If an Application Insights resource wasn't created with your function app, use t
The Application Insights resource is created in the same resource group and subscription as your function app. After the resource is created, close the **Application Insights** window.
-1. In your function app, select **Configuration** under **Settings**, and then select **Application settings**. If you see a setting named `APPINSIGHTS_INSTRUMENTATIONKEY`, Application Insights integration is enabled for your function app running in Azure. If for some reason this setting doesn't exist, add it using your Application Insights instrumentation key as the value.
+1. In your function app, select **Configuration** under **Settings**, and then select **Application settings**. If you see a setting named `APPLICATIONINSIGHTS_CONNECTION_STRING`, Application Insights integration is enabled for your function app running in Azure. If for some reason this setting doesn't exist, add it using your Application Insights connection string as the value.
> [!NOTE]
-> Early versions of Functions used built-in monitoring, which is no longer recommended. When you're enabling Application Insights integration for such a function app, you must also [disable built-in logging](#disable-built-in-logging).
+> Older function apps might be using `APPINSIGHTS_INSTRUMENTATIONKEY` instead of `APPLICATIONINSIGHTS_CONNECTION_STRING`. When possible, you should update your app to use the connection string instead of the instrumentation key.
## Disable built-in logging
-When you enable Application Insights, disable the built-in logging that uses Azure Storage. The built-in logging is useful for testing with light workloads, but isn't intended for high-load production use. For production monitoring, we recommend Application Insights. If built-in logging is used in production, the logging record might be incomplete because of throttling on Azure Storage.
+Early versions of Functions used built-in monitoring, which is no longer recommended. When you enable Application Insights, disable the built-in logging that uses Azure Storage. The built-in logging is useful for testing with light workloads, but isn't intended for high-load production use. For production monitoring, we recommend Application Insights. If built-in logging is used in production, the logging record might be incomplete because of throttling on Azure Storage.
To disable built-in logging, delete the `AzureWebJobsDashboard` app setting. For more information about how to delete app settings in the Azure portal, see the **Application settings** section of [How to manage a function app](functions-how-to-use-azure-function-app-settings.md#settings). Before you delete the app setting, ensure that no existing functions in the same function app use the setting for Azure Storage triggers or bindings.
To configure these values at App settings level (and avoid redeployment on just
| Host.json path | App setting | |-|-| | logging.logLevel.default | AzureFunctionsJobHost__logging__logLevel__default |
-| logging.logLevel.Host.Aggregator | AzureFunctionsJobHost__logging__logLevel__Host.Aggregator |
+| logging.logLevel.Host.Aggregator | AzureFunctionsJobHost__logging__logLevel__Host__Aggregator |
| logging.logLevel.Function | AzureFunctionsJobHost__logging__logLevel__Function |
-| logging.logLevel.Function.Function1 | AzureFunctionsJobHost__logging__logLevel__Function.Function1 |
-| logging.logLevel.Function.Function1.User | AzureFunctionsJobHost__logging__logLevel__Function.Function1.User |
+| logging.logLevel.Function.Function1 | AzureFunctionsJobHost__logging__logLevel__Function__Function1 |
+| logging.logLevel.Function.Function1.User | AzureFunctionsJobHost__logging__logLevel__Function__Function1__User |
You can override the settings directly at the Azure portal Function App Configuration blade or by using an Azure CLI or PowerShell script.
azure-functions Functions Node Upgrade V4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-node-upgrade-v4.md
ms.devlang: javascript
# ms.devlang: javascript, typescript
+zone_pivot_groups: programming-languages-set-functions-nodejs
# Migrate to version 4 of the Node.js programming model for Azure Functions
Version 4 is designed to provide Node.js developers with the following benefits:
Version 4 of the Node.js programming model requires the following minimum versions:
+- [`@azure/functions`](https://www.npmjs.com/package/@azure/functions) npm package v4.0.0
+- [Node.js](https://nodejs.org/en/download/releases/) v18+
+- [Azure Functions Runtime](./functions-versions.md) v4.25+
+- [Azure Functions Core Tools](./functions-run-local.md) v4.0.5382+ (if running locally)
- [`@azure/functions`](https://www.npmjs.com/package/@azure/functions) npm package v4.0.0 - [Node.js](https://nodejs.org/en/download/releases/) v18+ - [TypeScript](https://www.typescriptlang.org/) v4+ - [Azure Functions Runtime](./functions-versions.md) v4.25+ - [Azure Functions Core Tools](./functions-run-local.md) v4.0.5382+ (if running locally) ## Include the npm package
In v4, the [`@azure/functions`](https://www.npmjs.com/package/@azure/functions)
In v4 of the programming model, you can structure your code however you want. The only files that you need at the root of your app are *host.json* and *package.json*.
-Otherwise, you define the file structure by setting the `main` field in your *package.json* file. You can set the `main` field to a single file or multiple files by using a [glob pattern](https://wikipedia.org/wiki/Glob_(programming)). Common values for the `main` field might be:
+Otherwise, you define the file structure by setting the `main` field in your *package.json* file. You can set the `main` field to a single file or multiple files by using a [glob pattern](https://wikipedia.org/wiki/Glob_(programming)). The following table shows example values for the `main` field:
++
+| Example | Description |
+| | |
+| **`src/index.js`** | Register functions from a single root file. |
+| **`src/functions/*.js`** | Register each function from its own file. |
+| **`src/{index.js,functions/*.js}`** | A combination where you register each function from its own file, but you still have a root file for general app-level code. |
+ -- TypeScript:
- - `dist/src/index.js`
- - `dist/src/functions/*.js`
-- JavaScript:
- - `src/index.js`
- - `src/functions/*.js`
+
+| Example | Description |
+| | |
+| **`dist/src/index.js`** | Register functions from a single root file. |
+| **`dist/src/functions/*.js`** | Register each function from its own file. |
+| **`dist/src/{index.js,functions/*.js}`** | A combination where you register each function from its own file, but you still have a root file for general app-level code. |
+ > [!TIP] > Make sure you define a `main` field in your *package.json* file.
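To make the `main` field concrete, the following sketch shows a hypothetical root file that a `main` value such as `src/index.js` (or `dist/src/index.js` after TypeScript compilation) could point to. The file name and function name are illustrative assumptions, not part of the official sample.

```javascript
// src/index.js - hypothetical root entry file referenced by "main" in package.json
const { app } = require('@azure/functions');

// App-level code can live here, while individual functions can also be
// registered from their own files matched by a glob such as src/functions/*.js.
app.http('rootRegisteredFunction', {
    methods: ['GET'],
    authLevel: 'anonymous',
    handler: async (request, context) => {
        context.log(`Handling ${request.method} request for ${request.url}`);
        return { body: 'Registered from the root entry file' };
    }
});
```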
The trigger input, instead of the invocation context, is now the first argument
You no longer have to create and maintain those separate *function.json* configuration files. You can now fully define your functions directly in your TypeScript or JavaScript files. In addition, many properties now have defaults so that you don't have to specify them every time. + # [v4](#tab/v4) +
+# [v3](#tab/v3)
+ ```javascript
-const { app } = require("@azure/functions");
+module.exports = async function (context, req) {
+ context.log(`Http function processed request for url "${req.url}"`);
-app.http('helloWorld1', {
- methods: ['GET', 'POST'],
- handler: async (request, context) => {
- context.log('Http function processed request');
+ const name = req.query.name || req.body || 'world';
- const name = request.query.get('name')
- || await request.text()
- || 'world';
+ context.res = {
+ body: `Hello, ${name}!`
+ };
+};
+```
- return { body: `Hello, ${name}!` };
- }
-});
+```json
+{
+ "bindings": [
+ {
+ "authLevel": "anonymous",
+ "type": "httpTrigger",
+ "direction": "in",
+ "name": "req",
+ "methods": [
+ "get",
+ "post"
+ ]
+ },
+ {
+ "type": "http",
+ "direction": "out",
+ "name": "res"
+ }
+ ]
+}
``` ++++
+# [v4](#tab/v4)
++ # [v3](#tab/v3)
-```javascript
-module.exports = async function (context, req) {
- context.log('HTTP function processed a request');
+```typescript
+import { AzureFunction, Context, HttpRequest } from "@azure/functions"
- const name = req.query.name
- || req.body
- || 'world';
+const httpTrigger: AzureFunction = async function (context: Context, req: HttpRequest): Promise<void> {
+ context.log(`Http function processed request for url "${req.url}"`);
- context.res = {
- body: `Hello, ${name}!`
- };
+ const name = req.query.name || req.body || 'world';
+
+ context.res = {
+ body: `Hello, ${name}!`
+ };
};+
+export default httpTrigger;
``` ```json
module.exports = async function (context, req) {
"direction": "out", "name": "res" }
- ]
+ ],
+ "scriptFile": "../dist/HttpTrigger1/index.js"
} ``` + > [!TIP] > Move the configuration from your *function.json* file to your code. The type of the trigger corresponds to a method on the `app` object in the new model. For example, if you use an `httpTrigger` type in *function.json*, call `app.http()` in your code to register the function. If you use `timerTrigger`, call `app.timer()`.
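As a quick illustration of the tip above, this sketch shows how a `timerTrigger` that previously lived in *function.json* might be registered with `app.timer()` in v4. The function name and schedule are placeholders, not values from the original sample.

```javascript
const { app } = require('@azure/functions');

// Hypothetical timer function: the schedule below (every 5 minutes) replaces
// the "schedule" property that previously lived in function.json.
app.timer('cleanupTimer', {
    schedule: '0 */5 * * * *',
    handler: async (myTimer, context) => {
        context.log('Timer function executed at', new Date().toISOString());
    }
});
```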
The primary input is also called the *trigger* and is the only required input or
Version 4 supports only one way of getting the trigger input, as the first argument: + ```javascript
-async function helloWorld1(request, context) {
+async function httpTrigger1(request, context) {
const onlyOption = request; ``` ++
+```typescript
+async function httpTrigger1(request: HttpRequest, context: InvocationContext): Promise<HttpResponseInit> {
+ const onlyOption = request;
+```
++ # [v3](#tab/v3) Version 3 supports several ways of getting the trigger input: + ```javascript
-async function helloWorld1(context, request) {
+async function httpTrigger1(context, request) {
+ const option1 = request;
+ const option2 = context.req;
+ const option3 = context.bindings.req;
+```
+++
+```typescript
+async function httpTrigger1(context: Context, request: HttpRequest): Promise<void> {
const option1 = request; const option2 = context.req; const option3 = context.bindings.req; ``` + > [!TIP]
async function helloWorld1(context, request) {
Version 4 supports only one way of setting the primary output, through the return value: + ```javascript return { body: `Hello, ${name}!` }; ``` +
+```typescript
+async function httpTrigger1(request: HttpRequest, context: InvocationContext): Promise<HttpResponseInit> {
+ // ...
+ return {
+ body: `Hello, ${name}!`
+ };
+}
+```
++ # [v3](#tab/v3) Version 3 supports several ways of setting the primary output:
return {
> [!TIP] > Make sure you always return the output in your function handler, instead of setting it with the `context` object.
+### Context logging
+
+In v4, logging methods were moved to the root `context` object as shown in the following example. For more information about logging, see the [Node.js developer guide](./functions-reference-node.md#logging).
+
+# [v4](#tab/v4)
+
+```javascript
+context.log('This is an info log');
+context.error('This is an error');
+context.warn('This is a warning');
+```
+
+# [v3](#tab/v3)
+
+```javascript
+context.log('This is an info log');
+context.log.error('This is an error');
+context.log.warn('This is a warning');
+```
+++ ### Create a test context Version 3 doesn't support creating an invocation context outside the Azure Functions runtime, so authoring unit tests can be difficult. Version 4 allows you to create an instance of the invocation context, although the information during tests isn't detailed unless you add it yourself.
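For example, a unit test might construct a context similar to the following sketch. This assumes the `InvocationContext` class exported by `@azure/functions` v4; the options shown (`functionName`, `invocationId`, `logHandler`) are optional and only need to be as detailed as your test requires.

```javascript
const { InvocationContext } = require('@azure/functions');

// Minimal test double for the invocation context (details are only as rich
// as what you provide here).
const context = new InvocationContext({
    functionName: 'helloWorldTest',
    invocationId: 'test-invocation-id',
    logHandler: (level, ...args) => console.log(`[${level}]`, ...args)
});

// The handler under test can now be called directly, for example:
// const response = await myHandler(testRequest, context);
```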
The types use the [`undici`](https://undici.nodejs.org/) package in Node.js. Thi
- *Body*. You can access the body by using a method specific to the type that you want to receive:
- ```javascript
+ ```javascript
const body = await request.text(); const body = await request.json(); const body = await request.formData();
The types use the [`undici`](https://undici.nodejs.org/) package in Node.js. Thi
- *Body*:
+ Use the `body` property to return most types like a `string` or `Buffer`:
+ ```javascript return { body: "Hello, world!" }; ```
+ Use the `jsonBody` property for the easiest way to return a JSON response:
+
+ ```javascript
+ return { jsonBody: { hello: "world" } };
+ ```
+ - *Header*. You can set the header in two ways, depending on whether you're using the `HttpResponse` class or the `HttpResponseInit` interface: ```javascript
The types use the [`undici`](https://undici.nodejs.org/) package in Node.js. Thi
return { statusCode: 200 }; ``` -- *Body*. You can set a body in several ways:
+- *Body*. You can set a body in several ways and it's the same regardless of the body type (`string`, `Buffer`, JSON object, etc.):
```javascript context.res.send("Hello, world!"); context.res.end("Hello, world!");
- context.res = { body: "Hello, world!" }
+ context.res = { body: "Hello, world!" };
return { body: "Hello, world!" }; ```
The types use the [`undici`](https://undici.nodejs.org/) package in Node.js. Thi
+
+> [!TIP]
+> Update any logic that uses the HTTP request or response types to match the new methods.
++ > [!TIP]
-> Update any logic by using the HTTP request or response types to match the new methods. If you're using TypeScript, you'll get build errors if you use old methods.
+> Update any logic that uses the HTTP request or response types to match the new methods. TypeScript build errors can help you identify any places where old methods are still in use.
+ ## Troubleshoot
azure-functions Functions Reference Node https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-node.md
export default httpTrigger;
::: zone pivot="nodejs-model-v4"
-The programming model loads your functions based on the `main` field in your `package.json`. This field can be set to a single file like `src/index.js` or a [glob pattern](https://wikipedia.org/wiki/Glob_(programming)) specifying multiple files like `src/functions/*.js`.
+The programming model loads your functions based on the `main` field in your `package.json`. You can set the `main` field to a single file or multiple files by using a [glob pattern](https://wikipedia.org/wiki/Glob_(programming)). The following table shows example values for the `main` field:
+
+# [JavaScript](#tab/javascript)
+
+| Example | Description |
+| | |
+| **`src/index.js`** | Register functions from a single root file. |
+| **`src/functions/*.js`** | Register each function from its own file. |
+| **`src/{index.js,functions/*.js}`** | A combination where you register each function from its own file, but you still have a root file for general app-level code. |
+
+# [TypeScript](#tab/typescript)
+
+| Example | Description |
+| | |
+| **`dist/src/index.js`** | Register functions from a single root file. |
+| **`dist/src/functions/*.js`** | Register each function from its own file. |
+| **`dist/src/{index.js,functions/*.js}`** | A combination where you register each function from its own file, but you still have a root file for general app-level code. |
++ In order to register a function, you must import the `app` object from the `@azure/functions` npm module and call the method specific to your trigger type. The first argument when registering a function is the function name. The second argument is an `options` object specifying configuration for your trigger, your handler, and any other inputs or outputs. In some cases where trigger configuration isn't necessary, you can pass the handler directly as the second argument instead of an `options` object.
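The following sketch illustrates both registration styles described above, using hypothetical function names. Method-specific helpers such as `app.get()` accept a bare handler when no extra trigger configuration is needed, while `app.http()` takes an `options` object.

```javascript
const { app } = require('@azure/functions');

// Shorthand: when no extra trigger configuration is needed, pass the handler
// directly as the second argument (hypothetical function name).
app.get('ping', async (request, context) => {
    return { body: 'pong' };
});

// Full form: pass an options object with trigger configuration and the handler.
app.http('submitOrder', {
    methods: ['POST'],
    authLevel: 'function',
    handler: async (request, context) => {
        const order = await request.json();
        context.log('Received order', order);
        return { jsonBody: { accepted: true } };
    }
});
```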
azure-government Azure Secure Isolation Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/azure-secure-isolation-guidance.md
From the networking isolation standpoint, key benefits of Private Link include:
> > *Extra resources:* > - **[How to manage private endpoint connections on Azure PaaS resources](../private-link/manage-private-endpoint.md#manage-private-endpoint-connections-on-azure-paas-resources)**
-> - **[How to manage private endpoint connections on customer/partner owned Private Link service](../private-link/manage-private-endpoint.md#manage-private-endpoint-connections-on-a-customerpartner-owned-private-link-service)**
+> - **[How to manage private endpoint connections on a customer- or partner-owned Private Link service](../private-link/manage-private-endpoint.md#manage-private-endpoint-connections-on-a-customer--or-partner-owned-private-link-service)**
### Data encryption in transit Azure provides many options for [encrypting data in transit](../security/fundamentals/encryption-overview.md#encryption-of-data-in-transit). **Data encryption in transit isolates your network traffic from other traffic and helps protect data from interception**. Data in transit applies to scenarios involving data traveling between:
azure-government Compare Azure Government Global Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/compare-azure-government-global-azure.md
Last updated 06/08/2023
Microsoft Azure Government uses the same underlying technologies as global Azure, which include the core components of [Infrastructure-as-a-Service (IaaS)](https://azure.microsoft.com/overview/what-is-iaas/), [Platform-as-a-Service (PaaS)](https://azure.microsoft.com/overview/what-is-paas/), and [Software-as-a-Service (SaaS)](https://azure.microsoft.com/overview/what-is-saas/). Both Azure and Azure Government have the same comprehensive security controls in place and the same Microsoft commitment to safeguarding customer data. Whereas both cloud environments are assessed and authorized at the FedRAMP High impact level, Azure Government provides an extra layer of protection to customers through contractual commitments regarding storage of customer data in the United States and limiting potential access to systems processing customer data to [screened US persons](./documentation-government-plan-security.md#screening). These commitments may be of interest to customers using the cloud to store or process data subject to US export control regulations.
+> [!NOTE]
+> These lists and tables do not include feature or bundle availability in the Azure Government Secret or Azure Government Top Secret clouds.
+> For more information about specific availability for air-gapped clouds, please contact your account team.
++ ## Export control implications You're responsible for designing and deploying your applications to meet [US export control requirements](./documentation-government-overview-itar.md) such as the requirements prescribed in the EAR, ITAR, and DoE 10 CFR Part 810. In doing so, you shouldn't include sensitive or restricted information in Azure resource names, as explained in [Considerations for naming Azure resources](./documentation-government-concept-naming-resources.md).
azure-monitor Agent Linux Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agent-linux-troubleshoot.md
Title: Troubleshoot Azure Log Analytics Linux Agent | Microsoft Docs description: Describe the symptoms, causes, and resolution for the most common issues with the Log Analytics agent for Linux in Azure Monitor. -+ Last updated 04/25/2023
For more information, see the [Troubleshooting Tool documentation on GitHub](htt
A clean reinstall of the agent fixes most issues. This task might be the first suggestion from our support team to get the agent into an uncorrupted state. Running the Troubleshooting Tool and Log Collector tool and attempting a clean reinstall helps to solve issues more quickly. 1. Download the purge script:
-
+ `$ wget https://raw.githubusercontent.com/microsoft/OMS-Agent-for-Linux/master/tools/purge_omsagent.sh` 1. Run the purge script (with sudo permissions):
-
+ `$ sudo sh purge_omsagent.sh` ## Important log locations and the Log Collector tool
This error indicates that the Linux diagnostic extension (LAD) is installed side
1. If you're using a proxy, check the preceding proxy troubleshooting steps. 1. In some Azure distribution systems, the omid OMI server daemon doesn't start after the virtual machine is rebooted. If this is the case, you won't see Audit, ChangeTracking, or UpdateManagement solution-related data. The workaround is to manually start the OMI server by running `sudo /opt/omi/bin/service_control restart`. 1. After the OMI package is manually upgraded to a newer version, it must be manually restarted for the Log Analytics agent to continue functioning. This step is required for some distros where the OMI server doesn't automatically start after it's upgraded. Run `sudo /opt/omi/bin/service_control restart` to restart the OMI.
-
+ In some situations, the OMI can become frozen. The OMS agent might enter a blocked state waiting for the OMI, which blocks all data collection. The OMS agent process will be running but there will be no activity, which is evidenced by no new log lines (such as sent heartbeats) present in `omsagent.log`. Restart the OMI with `sudo /opt/omi/bin/service_control restart` to recover the agent. 1. If you see a DSC resource *class not found* error in omsconfig.log, run `sudo /opt/omi/bin/service_control restart`. 1. In some cases, when the Log Analytics agent for Linux can't talk to Azure Monitor, data on the agent is backed up to the full buffer size of 50 MB. The agent should be restarted by running the following command: `/opt/microsoft/omsagent/bin/service_control restart`.
This error indicates that the Linux diagnostic extension (LAD) is installed side
mkdir -p /etc/cron.d/ echo "*/15 * * * * omsagent /opt/omi/bin/OMSConsistencyInvoker > 2>&1" | sudo tee /etc/cron.d/OMSConsistencyInvoker ```
-
+ * Also, make sure the cron service is running. You can use `service cron status` with Debian, Ubuntu, and SUSE or `service crond status` with RHEL, CentOS, and Oracle Linux to check the status of this service. If the service doesn't exist, you can install the binaries and start the service by using the following instructions: **Ubuntu/Debian**
-
+ ``` # To Install the service binaries sudo apt-get install -y cron # To start the service sudo service cron start ```
-
+ **SUSE**
-
+ ``` # To Install the service binaries sudo zypper in cron -y
This error indicates that the Linux diagnostic extension (LAD) is installed side
sudo systemctl enable cron sudo systemctl start cron ```
-
+ **RHEL/CentOS**
-
+ ``` # To Install the service binaries sudo yum install -y crond # To start the service sudo service crond start ```
-
+ **Oracle Linux**
-
+ ``` # To Install the service binaries sudo yum install -y cronie
azure-monitor Agent Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agent-linux.md
Title: Install Log Analytics agent on Linux computers description: This article describes how to connect Linux computers hosted in other clouds or on-premises to Azure Monitor with the Log Analytics agent for Linux. -+ Last updated 06/01/2023
OpenSSL 1.1.0 is only supported on x86_x64 platforms (64-bit). OpenSSL earlier t
>[!NOTE] >The Log Analytics Linux agent doesn't run in containers. To monitor containers, use the [Container Monitoring solution](/previous-versions/azure/azure-monitor/containers/containers) for Docker hosts or [Container insights](../containers/container-insights-overview.md) for Kubernetes.
-Starting with versions released after August 2018, we're making the following changes to our support model:
+Starting with versions released after August 2018, we're making the following changes to our support model:
-* Only the server versions are supported, not the client versions.
+* Only the server versions are supported, not the client versions.
* Focus support on any of the [Azure Linux Endorsed distros](../../virtual-machines/linux/endorsed-distros.md). There might be some delay between a new distro/version being Azure Linux Endorsed and it being supported for the Log Analytics Linux agent. * All minor releases are supported for each major version listed. * Versions that have passed their manufacturer's end-of-support date aren't supported. * Only support VM images. Containers aren't supported, even those derived from official distro publishers' images.
-* New versions of AMI aren't supported.
+* New versions of AMI aren't supported.
* Only versions that run OpenSSL 1.x by default are supported. >[!NOTE]
Starting from agent version 1.13.27, the Linux agent will support both Python 2
If you're using an older version of the agent, you must have the virtual machine use Python 2 by default. If your virtual machine is using a distro that doesn't include Python 2 by default, then you must install it. The following sample commands will install Python 2 on different distros: -- **Red Hat, CentOS, Oracle**:
-
+- **Red Hat, CentOS, Oracle**:
+ ```bash sudo yum install -y python2 ```
-
+ - **Ubuntu, Debian**:
+ ```bash sudo apt-get update sudo apt-get install -y python2 ```
+ - **SUSE**:
```bash sudo zypper install -y python2
If you're using an older version of the agent, you must have the virtual machine
Again, only if you're using an older version of the agent, the python2 executable must be aliased to *python*. Use the following method to set this alias: 1. Run the following command to remove any existing aliases:
-
+ ```bash sudo update-alternatives --remove-all python ```
The following table highlights the packages required for [supported Linux distro
|Required package |Description |Minimum version | |--||-|
-|Glibc | GNU C library | 2.5-12
+|Glibc | GNU C library | 2.5-12
|Openssl | OpenSSL libraries | 1.0.x or 1.1.x | |Curl | cURL web client | 7.15.5 | |Python | | 2.7 or 3.6+
-|Python-ctypes | |
-|PAM | Pluggable authentication modules | |
+|Python-ctypes | |
+|PAM | Pluggable authentication modules | |
>[!NOTE] >Either rsyslog or syslog-ng is required to collect syslog messages. The default syslog daemon on version 5 of Red Hat Enterprise Linux, CentOS, and Oracle Linux version (sysklog) isn't supported for syslog event collection. To collect syslog data from this version of these distributions, the rsyslog daemon should be installed and configured to replace sysklog.
On a monitored Linux computer, the agent is listed as `omsagent`. `omsconfig` is
### [Wrapper script](#tab/wrapper-script)
-The following steps configure setup of the agent for Log Analytics in Azure and Azure Government cloud. A wrapper script is used for Linux computers that can communicate directly or through a proxy server to download the agent hosted on GitHub and install the agent.
+The following steps configure setup of the agent for Log Analytics in Azure and Azure Government cloud. A wrapper script is used for Linux computers that can communicate directly or through a proxy server to download the agent hosted on GitHub and install the agent.
If your Linux computer needs to communicate through a proxy server to Log Analytics, this configuration can be specified on the command line by including `-p [protocol://][user:password@]proxyhost[:port]`. The `protocol` property accepts `http` or `https`. The `proxyhost` property accepts a fully qualified domain name or IP address of the proxy server.
For example: `https://proxy01.contoso.com:30443`
If authentication is required in either case, specify the username and password. For example: `https://user01:password@proxy01.contoso.com:30443` 1. To configure the Linux computer to connect to a Log Analytics workspace, run the following command that provides the workspace ID and primary key. The following command downloads the agent, validates its checksum, and installs it.
-
+ ``` wget https://raw.githubusercontent.com/Microsoft/OMS-Agent-for-Linux/master/installer/scripts/onboard_agent.sh && sh onboard_agent.sh -w <YOUR WORKSPACE ID> -s <YOUR WORKSPACE PRIMARY KEY> ```
If authentication is required in either case, specify the username and password.
``` wget https://raw.githubusercontent.com/Microsoft/OMS-Agent-for-Linux/master/installer/scripts/onboard_agent.sh && sh onboard_agent.sh -w <YOUR WORKSPACE ID> -s <YOUR WORKSPACE PRIMARY KEY> -d opinsights.azure.us
- ```
+ ```
The following command includes the `-p` proxy parameter and example syntax when authentication is required by your proxy server:
If authentication is required in either case, specify the username and password.
``` sudo /opt/microsoft/omsagent/bin/service_control restart [<workspace id>]
- ```
+ ```
### [Shell](#tab/shell)
The Log Analytics agent for Linux is provided in a self-extracting and installab
>[!NOTE] > Use the `--upgrade` argument if any dependent packages, such as omi, scx, omsconfig, or their older versions, are installed. This would be the case if the System Center Operations Manager agent for Linux is already installed.
-
+ ``` sudo sh ./omsagent-*.universal.x64.sh --install -w <workspace id> -s <shared key> --skip-docker-provider-install ```
The Log Analytics agent for Linux is provided in a self-extracting and installab
> [!NOTE] > The preceding command uses the optional `--skip-docker-provider-install` flag to disable the Container Monitoring data collection because the [Container Monitoring solution](/previous-versions/azure/azure-monitor/containers/containers) is being retired.
-1. To configure the Linux agent to install and connect to a Log Analytics workspace through a Log Analytics gateway, run the following command. It provides the proxy, workspace ID, and workspace key parameters. This configuration can be specified on the command line by including `-p [protocol://][user:password@]proxyhost[:port]`. The `proxyhost` property accepts a fully qualified domain name or IP address of the Log Analytics gateway server.
+1. To configure the Linux agent to install and connect to a Log Analytics workspace through a Log Analytics gateway, run the following command. It provides the proxy, workspace ID, and workspace key parameters. This configuration can be specified on the command line by including `-p [protocol://][user:password@]proxyhost[:port]`. The `proxyhost` property accepts a fully qualified domain name or IP address of the Log Analytics gateway server.
``` sudo sh ./omsagent-*.universal.x64.sh --upgrade -p https://<proxy address>:<proxy port> -w <workspace id> -s <shared key> ``` If authentication is required, specify the username and password. For example:
-
+ ``` sudo sh ./omsagent-*.universal.x64.sh --upgrade -p https://<proxy user>:<proxy password>@<proxy address>:<proxy port> -w <workspace id> -s <shared key> ```
azure-monitor Azure Monitor Agent Troubleshoot Linux Vm Rsyslog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-troubleshoot-linux-vm-rsyslog.md
Title: Syslog troubleshooting on Azure Monitor Agent for Linux
+ Title: Syslog troubleshooting on Azure Monitor Agent for Linux
description: Guidance for troubleshooting rsyslog issues on Linux virtual machines, scale sets with Azure Monitor Agent, and data collection rules. Last updated 5/31/2023-+ # Syslog troubleshooting guide for Azure Monitor Agent for Linux
In some cases, `du` might not report any large files or directories. It might be
```bash sudo lsof +L1
-```
+```
```output COMMAND PID USER FD TYPE DEVICE SIZE/OFF NLINK NODE NAME
azure-monitor Tutorial Log Alert https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/tutorial-log-alert.md
Last updated 11/07/2023 + # Tutorial: Create a log query alert for an Azure resource Azure Monitor alerts proactively notify you when important conditions are found in your monitoring data. Log query alert rules create an alert when a log query returns a particular result. For example, receive an alert when a particular event is created on a virtual machine, or send a warning when excessive anonymous requests are made to a storage account.
Once you verify your query, you can create the alert rule. Select **New alert ru
:::image type="content" source="media/tutorial-log-alert/create-alert-rule.png" lightbox="media/tutorial-log-alert/create-alert-rule.png"alt-text="Create alert rule"::: ## Configure condition
-On the **Condition** tab, the **Log query** will already be filled in. The **Measurement** section defines how the records from the log query will be measured. If the query doesn't perform a summary, then the only option will be to **Count** the number of **Table rows**. If the query includes one or more summarized columns, then you'll have the option to use number of **Table rows** or a calculation based on any of the summarized columns. **Aggregation granularity** defines the time interval over which the collected values are aggregated.
+On the **Condition** tab, the **Log query** will already be filled in. The **Measurement** section defines how the records from the log query will be measured. If the query doesn't perform a summary, then the only option will be to **Count** the number of **Table rows**. If the query includes one or more summarized columns, then you'll have the option to use the number of **Table rows** or a calculation based on any of the summarized columns. **Aggregation granularity** defines the time interval over which the collected values are aggregated. For example, if the aggregation granularity is set to 5 minutes, the alert rule evaluates the data aggregated over the last 5 minutes; if it's set to 15 minutes, it evaluates the data aggregated over the last 15 minutes. Choosing the right aggregation granularity is important because it affects the accuracy of the alert.
:::image type="content" source="media/tutorial-log-alert/alert-rule-condition.png" lightbox="media/tutorial-log-alert/alert-rule-condition.png"alt-text="Alert rule condition":::
azure-monitor Convert Classic Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/convert-classic-resource.md
Use the following script to identify your Application Insights resources by inge
#### Example
-```azurecli
+```powershell
Get-AzApplicationInsights -SubscriptionId 'Your Subscription ID' | Format-Table -Property Name, IngestionMode, Id, @{label='Type';expression={ if ([string]::IsNullOrEmpty($_.IngestionMode)) { 'Unknown'
azure-monitor Container Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-overview.md
Title: Overview of Container insights in Azure Monitor
-description: This article describes Container insights, which monitors the AKS Container insights solution, and the value it delivers by monitoring the health of your AKS clusters and Container Instances in Azure.
+ Title: Azure Monitor features for Kubernetes monitoring
+description: Describes Container insights and Managed Prometheus in Azure Monitor, which work together to monitor your Kubernetes clusters.
Last updated 12/20/2023
-# Overview of Container insights in Azure Monitor
+# Azure Monitor features for Kubernetes monitoring
-Container insights is a feature of Azure Monitor that collects and analyzes container logs from [Azure Kubernetes clusters](../../aks/intro-kubernetes.md) or [Azure Arc-enabled Kubernetes](../../azure-arc/kubernetes/overview.md) clusters and their components. You can analyze the collected data for the different components in your cluster with a collection of [views](container-insights-analyze.md) and prebuilt [workbooks](container-insights-reports.md).
-
-Container insights works with [Azure Monitor managed service for Prometheus](../essentials/prometheus-metrics-overview.md) for complete monitoring of your Kubernetes environment. It identifies all clusters across your subscriptions and allows you to quickly enable monitoring by both services.
+[Azure Monitor managed service for Prometheus](../essentials/prometheus-metrics-overview.md) and Container insights work together for complete monitoring of your Kubernetes environment. This article describes both features and the data they collect.
+- [Azure Monitor managed service for Prometheus](../essentials/prometheus-metrics-overview.md) is a fully managed service based on the [Prometheus](https://aka.ms/azureprometheus-promio) project from the Cloud Native Computing Foundation. It allows you to collect and analyze metrics from your Kubernetes cluster at scale and analyze them using prebuilt dashboards in [Grafana](../../managed-grafan).
+- Container insights is a feature of Azure Monitor that collects and analyzes container logs from [Azure Kubernetes clusters](../../aks/intro-kubernetes.md) or [Azure Arc-enabled Kubernetes](../../azure-arc/kubernetes/overview.md) clusters and their components. You can analyze the collected data for the different components in your cluster with a collection of [views](container-insights-analyze.md) and prebuilt [workbooks](container-insights-reports.md).
> [!IMPORTANT] > Container insights collects metric data from your cluster in addition to logs. This functionality has been replaced by [Azure Monitor managed service for Prometheus](../essentials/prometheus-metrics-overview.md). You can analyze that data using built-in dashboards in [Managed Grafana](../../managed-grafan). > > You can continue to have Container insights collect metric data so you can use the Container insights monitoring experience. Or you can save cost by disabling this collection and using Grafana for metric analysis. See [Configure data collection in Container insights using data collection rule](container-insights-data-collection-dcr.md) for configuration options.--
-## Access Container insights
-
-Access Container insights in the Azure portal from **Containers** in the **Monitor** menu or directly from the selected AKS cluster by selecting **Insights**. The Azure Monitor menu gives you the global perspective of all the containers that are deployed and monitored. This information allows you to search and filter across your subscriptions and resource groups. You can then drill into Container insights from the selected container. Access Container insights for a particular cluster from its page in the Azure portal.
--
+>
## Data collected
-Container insights sends data to a [Log Analytics workspace](../logs/data-platform-logs.md) where you can analyze it using different features of Azure Monitor. This workspace is different than the [Azure Monitor workspace](../essentials/azure-monitor-workspace-overview.md) used by Managed Prometheus. For more information on these other services, see [Monitoring data](../../aks/monitor-aks.md#monitoring-data).
+Container insights sends data to a [Log Analytics workspace](../logs/data-platform-logs.md) where you can analyze it using different features of Azure Monitor. Managed Prometheus sends data to an [Azure Monitor workspace](../essentials/azure-monitor-workspace-overview.md) where it can be accessed by Managed Grafana. See [Monitoring data](../../aks/monitor-aks.md#monitoring-data) for further details on this data.
+
-## Supported configurations
+### Supported configurations
Container insights supports the following environments: - [Azure Kubernetes Service (AKS)](../../aks/index.yml)
Container insights supports the following environments:
> [!NOTE] > Container insights supports ARM64 nodes on AKS. See [Cluster requirements](../../azure-arc/kubernetes/system-requirements.md#cluster-requirements) for the details of Azure Arc-enabled clusters that support ARM64 nodes.-
->[!NOTE]
+>
> Container insights support for Windows Server 2022 operating system is in public preview.
+## Access Container insights
+
+Access Container insights in the Azure portal from **Containers** in the **Monitor** menu or directly from the selected AKS cluster by selecting **Insights**. The Azure Monitor menu gives you the global perspective of all the containers that are deployed and monitored. This information allows you to search and filter across your subscriptions and resource groups. You can then drill into Container insights from the selected container. Access Container insights for a particular cluster from its page in the Azure portal.
++ ## Agent
Yes, Container Insights supports pod sandboxing through support for Kata Contain
## Next steps
-To begin monitoring your Kubernetes cluster, review [Enable Container insights](container-insights-onboard.md) to understand the requirements and available methods to enable monitoring.
+- See [Enable monitoring for Kubernetes clusters](kubernetes-monitoring-enable.md) to enable Managed Prometheus and Container insights on your cluster.
<!-- LINKS - external --> [aks-release-notes]: https://github.com/Azure/AKS/releases
azure-monitor Collect Custom Metrics Linux Telegraf https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/collect-custom-metrics-linux-telegraf.md
Title: Collect custom metrics for Linux VM with the InfluxData Telegraf agent
-description: Instructions on how to deploy the InfluxData Telegraf agent on a Linux VM in Azure and configure the agent to publish metrics to Azure Monitor.
+description: Instructions on how to deploy the InfluxData Telegraf agent on a Linux VM in Azure and configure the agent to publish metrics to Azure Monitor.
-+ Last updated 08/01/2023 # Collect custom metrics for a Linux VM with the InfluxData Telegraf agent
-This article explains how to deploy and configure the [InfluxData](https://www.influxdata.com/) Telegraf agent on a Linux virtual machine to send metrics to Azure Monitor.
+This article explains how to deploy and configure the [InfluxData](https://www.influxdata.com/) Telegraf agent on a Linux virtual machine to send metrics to Azure Monitor.
> [!NOTE] > InfluxData Telegraf is an open source agent and not officially supported by Azure Monitor. For issues with the Telegraf connector, please refer to the Telegraf GitHub page here: [InfluxData](https://github.com/influxdata/telegraf)
-## InfluxData Telegraf agent
+## InfluxData Telegraf agent
-[Telegraf](https://docs.influxdata.com/telegraf/) is a plug-in-driven agent that enables the collection of metrics from over 150 different sources. Depending on what workloads run on your VM, you can configure the agent to use specialized input plug-ins to collect metrics. Examples are MySQL, NGINX, and Apache. By using output plug-ins, the agent can then write to destinations that you choose. The Telegraf agent has integrated directly with the Azure Monitor custom metrics REST API. It supports an Azure Monitor output plug-in. Using this plug-in, the agent can collect workload-specific metrics on your Linux VM and submit them as custom metrics to Azure Monitor.
+[Telegraf](https://docs.influxdata.com/telegraf/) is a plug-in-driven agent that enables the collection of metrics from over 150 different sources. Depending on what workloads run on your VM, you can configure the agent to use specialized input plug-ins to collect metrics. Examples are MySQL, NGINX, and Apache. By using output plug-ins, the agent can then write to destinations that you choose. The Telegraf agent has integrated directly with the Azure Monitor custom metrics REST API. It supports an Azure Monitor output plug-in. Using this plug-in, the agent can collect workload-specific metrics on your Linux VM and submit them as custom metrics to Azure Monitor.
:::image type="content" source="./media/collect-custom-metrics-linux-telegraf/telegraf-agent-overview.png" alt-text="A diagram showing the Telegraph agent overview." lightbox="./media/collect-custom-metrics-linux-telegraf/telegraf-agent-overview.png"::: ## Connect to the VM
-Create an SSH connection to the VM where you want to install Telegraf. Select the **Connect** button on the overview page for your virtual machine.
+Create an SSH connection to the VM where you want to install Telegraf. Select the **Connect** button on the overview page for your virtual machine.
:::image source="./media/collect-custom-metrics-linux-telegraf/connect-to-virtual-machine.png" alt-text="A screenshot of the a Virtual machine overview page with the connect button highlighted." lightbox="./media/collect-custom-metrics-linux-telegraf/connect-to-virtual-machine.png":::
-In the **Connect to virtual machine** page, keep the default options to connect by DNS name over port 22. In **Login using VM local account**, a connection command is shown. Select the button to copy the command. The following example shows what the SSH connection command looks like:
+In the **Connect to virtual machine** page, keep the default options to connect by DNS name over port 22. In **Login using VM local account**, a connection command is shown. Select the button to copy the command. The following example shows what the SSH connection command looks like:
```cmd
-ssh azureuser@XXXX.XX.XXX
+ssh azureuser@XXXX.XX.XXX
```
-Paste the SSH connection command into a shell, such as Azure Cloud Shell or Bash on Ubuntu on Windows, or use an SSH client of your choice to create the connection.
+Paste the SSH connection command into a shell, such as Azure Cloud Shell or Bash on Ubuntu on Windows, or use an SSH client of your choice to create the connection.
-## Install and configure Telegraf
+## Install and configure Telegraf
-To install the Telegraf Debian package onto the VM, run the following commands from your SSH session:
+To install the Telegraf Debian package onto the VM, run the following commands from your SSH session:
# [Ubuntu, Debian](#tab/ubuntu) Add the repository: ```bash
-# download the package to the VM
+# download the package to the VM
curl -s https://repos.influxdata.com/influxdb.key | sudo apt-key add - source /etc/lsb-release sudo echo "deb https://repos.influxdata.com/${DISTRIB_ID,,} ${DISTRIB_CODENAME} stable" | sudo tee /etc/apt/sources.list.d/influxdb.list
Install the package:
sudo apt-get update sudo apt-get install telegraf ```
-# [RHEL, CentOS, Oracle Linux](#tab/redhat)
+# [RHEL, CentOS, Oracle Linux](#tab/redhat)
Add the repository:
Install the package:
sudo yum -y install telegraf ```
-Telegraf's configuration file defines Telegraf's operations. By default, an example configuration file is installed at the path **/etc/telegraf/telegraf.conf**. The example configuration file lists all possible input and output plug-ins. However, we'll create a custom configuration file and have the agent use it by running the following commands:
+Telegraf's configuration file defines Telegraf's operations. By default, an example configuration file is installed at the path **/etc/telegraf/telegraf.conf**. The example configuration file lists all possible input and output plug-ins. However, we'll create a custom configuration file and have the agent use it by running the following commands:
```bash
-# generate the new Telegraf config file in the current directory
-telegraf --input-filter cpu:mem --output-filter azure_monitor config > azm-telegraf.conf
+# generate the new Telegraf config file in the current directory
+telegraf --input-filter cpu:mem --output-filter azure_monitor config > azm-telegraf.conf
-# replace the example config with the new generated config
-sudo cp azm-telegraf.conf /etc/telegraf/telegraf.conf
+# replace the example config with the new generated config
+sudo cp azm-telegraf.conf /etc/telegraf/telegraf.conf
```
-> [!NOTE]
-> The preceding code enables only two input plug-ins: **cpu** and **mem**. You can add more input plug-ins, depending on the workload that runs on your machine. Examples are Docker, MySQL, and NGINX. For a full list of input plug-ins, see the **Additional configuration** section.
+> [!NOTE]
+> The preceding code enables only two input plug-ins: **cpu** and **mem**. You can add more input plug-ins, depending on the workload that runs on your machine. Examples are Docker, MySQL, and NGINX. For a full list of input plug-ins, see the **Additional configuration** section.
-Finally, to have the agent start using the new configuration, we force the agent to stop and start by running the following commands:
+Finally, to have the agent start using the new configuration, we force the agent to stop and start by running the following commands:
```bash
-# stop the telegraf agent on the VM
-sudo systemctl stop telegraf
-# start and enable the telegraf agent on the VM to ensure it picks up the latest configuration
-sudo systemctl enable --now telegraf
+# stop the telegraf agent on the VM
+sudo systemctl stop telegraf
+# start and enable the telegraf agent on the VM to ensure it picks up the latest configuration
+sudo systemctl enable --now telegraf
```
-Now the agent collects metrics from each of the input plug-ins specified and emits them to Azure Monitor.
+Now the agent collects metrics from each of the input plug-ins specified and emits them to Azure Monitor.
-## Plot your Telegraf metrics in the Azure portal
+## Plot your Telegraf metrics in the Azure portal
-1. Open the [Azure portal](https://portal.azure.com).
+1. Open the [Azure portal](https://portal.azure.com).
-1. Navigate to the new **Monitor** tab. Then select **Metrics**.
+1. Navigate to the new **Monitor** tab. Then select **Metrics**.
1. Select your VM in the resource selector.
-1. Select the **Telegraf/CPU** namespace, and select the **usage_system** metric. You can choose to filter by the dimensions on this metric or split on them.
+1. Select the **Telegraf/CPU** namespace, and select the **usage_system** metric. You can choose to filter by the dimensions on this metric or split on them.
:::image type="content" source="./media/collect-custom-metrics-linux-telegraf/metric-chart.png" alt-text="A screenshot showing a metric chart with telegraph metrics selected." lightbox="./media/collect-custom-metrics-linux-telegraf/metric-chart.png":::
-## Additional configuration
+## Additional configuration
-The preceding walkthrough provides information on how to configure the Telegraf agent to collect metrics from a few basic input plug-ins. The Telegraf agent has support for over 150 input plug-ins, with some supporting additional configuration options. InfluxData has published a [list of supported plugins](https://docs.influxdata.com/telegraf/v1.15/plugins/inputs/) and instructions on [how to configure them](https://docs.influxdata.com/telegraf/v1.15/administration/configuration/).
+The preceding walkthrough provides information on how to configure the Telegraf agent to collect metrics from a few basic input plug-ins. The Telegraf agent has support for over 150 input plug-ins, with some supporting additional configuration options. InfluxData has published a [list of supported plugins](https://docs.influxdata.com/telegraf/v1.15/plugins/inputs/) and instructions on [how to configure them](https://docs.influxdata.com/telegraf/v1.15/administration/configuration/).
-Additionally, in this walkthrough, you used the Telegraf agent to emit metrics about the VM the agent is deployed on. The Telegraf agent can also be used as a collector and forwarder of metrics for other resources. To learn how to configure the agent to emit metrics for other Azure resources, see [Azure Monitor Custom Metric Output for Telegraf](https://github.com/influxdat).
+Additionally, in this walkthrough, you used the Telegraf agent to emit metrics about the VM the agent is deployed on. The Telegraf agent can also be used as a collector and forwarder of metrics for other resources. To learn how to configure the agent to emit metrics for other Azure resources, see [Azure Monitor Custom Metric Output for Telegraf](https://github.com/influxdat).
-## Clean up resources
+## Clean up resources
-When they're no longer needed, you can delete the resource group, virtual machine, and all related resources. To do so, select the resource group for the virtual machine and select **Delete**. Then confirm the name of the resource group to delete.
+When they're no longer needed, you can delete the resource group, virtual machine, and all related resources. To do so, select the resource group for the virtual machine and select **Delete**. Then confirm the name of the resource group to delete.
## Next steps - Learn more about [custom metrics](./metrics-custom-overview.md).
azure-netapp-files Azure Government https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-government.md
Title: Azure NetApp Files for Azure Government | Microsoft Docs
-description: Describes how to connect to Azure Government to use Azure NetApp Files and the Azure NetApp Files feature availability in Azure Government.
+description: Learn how to connect to Azure Government to use Azure NetApp Files and the Azure NetApp Files feature availability in Azure Government.
documentationcenter: ''
Last updated 11/02/2023
-# Azure NetApp Files for Azure Government
+# Azure NetApp Files for Azure Government
-[Microsoft Azure Government](../azure-government/documentation-government-welcome.md) delivers a dedicated cloud enabling government agencies and their partners to transform mission-critical workloads to the cloud.
+[Microsoft Azure Government](../azure-government/documentation-government-welcome.md) delivers a dedicated cloud that enables government agencies and their partners to transform mission-critical workloads to the cloud.
-This article describes Azure NetApp Files feature availability in Azure Government. It also shows you how to access the Azure NetApp Files service within Azure Government.
+This article describes Azure NetApp Files feature availability in Azure Government. It also shows you how to access Azure NetApp Files within Azure Government.
## Feature availability
-For Azure Government regions supported by Azure NetApp Files, see the *[Products Available by Region page](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=netapp&regions=usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia&rar=true)*.
+For Azure Government regions supported by Azure NetApp Files, see [Products available by region](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=netapp&regions=usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia&rar=true).
-All [Azure NetApp Files features](whats-new.md) available on Azure public cloud are also available on supported Azure Government regions ***except for the features listed in the following table***:
+All [Azure NetApp Files features](whats-new.md) available on Azure public cloud are also available on supported Azure Government regions, *except for the features listed in the following table*:
| Azure NetApp Files features | Azure public cloud availability | Azure Government availability | |: |: |: |
All [Azure NetApp Files features](whats-new.md) available on Azure public cloud
## Portal access
-Azure Government users can access Azure NetApp Files by pointing their browsers to **portal.azure.us**.  The portal site name is **Microsoft Azure Government**. See [Connect to Azure Government using portal](../azure-government/documentation-government-get-started-connect-with-portal.md) for details.
+Azure Government users can access Azure NetApp Files by pointing their browsers to **portal.azure.us**. The portal site name is **Microsoft Azure Government**. For more information, see [Connect to Azure Government using portal](../azure-government/documentation-government-get-started-connect-with-portal.md).
-![Screenshot of the Azure Government portal highlighting portal.azure.us as the URL](../media/azure-netapp-files/azure-government.jpg)
+![Screenshot that shows the Azure Government portal highlighting portal.azure.us as the URL.](../media/azure-netapp-files/azure-government.jpg)
-From the Microsoft Azure Government portal, you can access Azure NetApp Files the same way you would in the Azure portal. For example, you can enter **Azure NetApp Files** in the portal’s Search Resources box, and then select **Azure NetApp Files** from the list that appears.
+From the Azure Government portal, you can access Azure NetApp Files the same way you would in the Azure portal. For example, you can enter **Azure NetApp Files** in the portal's **Search resources** box, and then select **Azure NetApp Files** from the list that appears.
You can follow [Azure NetApp Files](./index.yml) documentation for details about using the service. ## Azure CLI access
-You can connect to Azure Government by setting the cloud name to `AzureUSGovernment` and then proceeding to sign in as you normally would with the `az login` command. After you run the sign-in command, a browser will launch where you enter the appropriate Azure Government credentials.
+You can connect to Azure Government by setting the cloud name to `AzureUSGovernment` and then proceeding to sign in as you normally would with the `az login` command. After you run the sign-in command, a browser launches, where you enter the appropriate Azure Government credentials.
```azurecli
az cloud set --name AzureUSGovernment
```
-To confirm the cloud has been set to `AzureUSGovernment`, run:
+To confirm the cloud was set to `AzureUSGovernment`, run:
```azurecli
az cloud list --output table
```
-This command produces a table with Azure cloud locations. The `isActive` column entry for `AzureUSGovernment` should read `true`.
+This command produces a table with Azure cloud locations. The `isActive` column entry for `AzureUSGovernment` should read `true`.
-See [Connect to Azure Government with Azure CLI](../azure-government/documentation-government-get-started-connect-with-cli.md) for details.
+For more information, see [Connect to Azure Government with Azure CLI](../azure-government/documentation-government-get-started-connect-with-cli.md).
## REST API access
-Endpoints for Azure Government are different from commercial Azure endpoints. For a list of different endpoints, see Azure Government's [Guidance for Developers](../azure-government/compare-azure-government-global-azure.md#guidance-for-developers).
+Endpoints for Azure Government are different from commercial Azure endpoints. For a list of different endpoints, see Azure Government's [Guidance for developers](../azure-government/compare-azure-government-global-azure.md#guidance-for-developers).
## PowerShell access
-When connecting to Azure Government through PowerShell, you must specify an environmental parameter to ensure you connect to the correct endpoints. From there, you can proceed to use Azure NetApp Files as you normally would with PowerShell.
+When you connect to Azure Government through PowerShell, you must specify an environmental parameter to ensure that you connect to the correct endpoints. From there, you can proceed to use Azure NetApp Files as you normally would with PowerShell.
| Connection type | Command | | | |
When connecting to Azure Government through PowerShell, you must specify an envi
| [Azure (Classic deployment model)](/powershell/module/servicemanagement/azure/add-azureaccount) commands |`Add-AzureAccount -Environment AzureUSGovernment` | | [Microsoft Entra ID (Classic deployment model)](/previous-versions/azure/jj151815(v=azure.100)) commands |`Connect-MsolService -AzureEnvironment UsGovernment` |
-See [Connect to Azure Government with PowerShell](../azure-government/documentation-government-get-started-connect-with-ps.md) for details.
+For more information, see [Connect to Azure Government with PowerShell](../azure-government/documentation-government-get-started-connect-with-ps.md).
## Next steps

* [What is Azure Government?](../azure-government/documentation-government-welcome.md)
* [What's new in Azure NetApp Files](whats-new.md)
* [Compare Azure Government and global Azure](../azure-government/compare-azure-government-global-azure.md)
azure-netapp-files Azure Netapp Files Create Netapp Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-create-netapp-account.md
Title: Create a NetApp account for Access Azure NetApp Files | Microsoft Docs
-description: Describes how to access Azure NetApp Files and create a NetApp account so that you can set up a capacity pool and create a volume.
+ Title: Create a NetApp account to access Azure NetApp Files | Microsoft Docs
+description: Learn how to access Azure NetApp Files and create a NetApp account so that you can set up a capacity pool and create a volume.
documentationcenter: ''
Last updated 10/04/2021 + # Create a NetApp account
-Creating a NetApp account enables you to set up a capacity pool and subsequently create a volume. You use the Azure NetApp Files blade to create a new NetApp account.
-## Before you begin
+Creating a NetApp account enables you to set up a capacity pool so that you can create a volume. You use the Azure NetApp Files pane to create a new NetApp account.
-You must have registered your subscription for using the NetApp Resource Provider. See [Register the NetApp Resource Provider](azure-netapp-files-register.md).
+## Before you begin
-## Steps
+You must register your subscription to use the NetApp Resource Provider. For more information, see [Register the NetApp Resource Provider](azure-netapp-files-register.md).
-1. Sign in to the Azure portal.
-2. Access the Azure NetApp Files blade by using one of the following methods:
- * Search for **Azure NetApp Files** in the Azure portal search box.
- * Click **All services** in the navigation, and then filter to Azure NetApp Files.
+## Steps
- You can "favorite" the Azure NetApp Files blade by clicking the star icon next to it.
+1. Sign in to the Azure portal.
+1. Access the Azure NetApp Files pane by using one of the following methods:
+ * Search for **Azure NetApp Files** in the Azure portal search box.
+ * Select **All services** in the navigation, and then filter to Azure NetApp Files.
-3. Click **+ Add** to create a new NetApp account.
- The New NetApp account window appears.
+ To make the Azure NetApp Files pane a favorite, select the star icon next to it.
-4. Provide the following information for your NetApp account:
- * **Account name**
- Specify a unique name for the subscription.
- * **Subscription**
- Select a subscription from your existing subscriptions.
- * **Resource group**
- Use an existing Resource Group or create a new one.
- * **Location**
- Select the region where you want the account and its child resources to be located.
+1. Select **+ Add** to create a new NetApp account.
+ The **New NetApp account** window appears.
- ![New NetApp account](../media/azure-netapp-files/azure-netapp-files-new-netapp-account.png)
+1. Provide the following information for your NetApp account:
+ * **Account name**: Specify a unique name for the NetApp account.
+ * **Subscription**: Select a subscription from your existing subscriptions.
+ * **Resource group**: Use an existing resource group or create a new one.
+ * **Location**: Select the region where you want the account and its child resources to be located.
+ ![Screenshot that shows New NetApp account.](../media/azure-netapp-files/azure-netapp-files-new-netapp-account.png)
-5. Click **Create**.
- The NetApp account you created now appears in the Azure NetApp Files blade.
+1. Select **Create**.
+ The NetApp account you created now appears in the Azure NetApp Files pane.
-> [!NOTE]
-> If you haven't registered your subscription for using the NetApp Resource Provider, you will receive the following error when you try to create the first NetApp account:
+> [!NOTE]
+> If you didn't register your subscription for using the NetApp Resource Provider, you receive the following error when you try to create the first NetApp account:
> > `{"code":"DeploymentFailed","message":"At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/arm-debug for usage details.","details":[{"code":"NotFound","message":"{\r\n \"error\": {\r\n \"code\": \"InvalidResourceType\",\r\n \"message\": \"The resource type could not be found in the namespace 'Microsoft.NetApp' for api version '20xx-xx-xx'.\"\r\n }\r\n}"}]}`
-## Next steps
+## Next steps
[Create a capacity pool](azure-netapp-files-set-up-capacity-pool.md)-
azure-netapp-files Azure Netapp Files Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-introduction.md
# What is Azure NetApp Files?
-Azure NetApp Files is an Azure native, first-party, enterprise-class, high-performance file storage service. It provides _Volumes as a service_ for which you can create NetApp accounts, capacity pools, volumes, select service and performance levels, and manage data protection. It allows you to create and manage high-performance, highly available, and scalable file shares, using the same protocols and tools that you're familiar with and enterprise applications rely on on-premises.
+Azure NetApp Files is an Azure native, first-party, enterprise-class, high-performance file storage service. It provides _Volumes as a service_ for which you can create NetApp accounts, capacity pools, and volumes. You can also select service and performance levels and manage data protection. You can create and manage high-performance, highly available, and scalable file shares by using the same protocols and tools that you're familiar with and that your enterprise applications rely on on-premises.
-Azure NetApp FilesΓÇÖ key attributes are:
+Key attributes of Azure NetApp Files are:
-- Performance, cost optimization and scale-- Simplicity and availability-- Data management and security
+- Performance, cost optimization, and scale.
+- Simplicity and availability.
+- Data management and security.
-Azure NetApp Files supports SMB, NFS and dual protocols volumes and can be used for various use cases such as:
-- file sharing-- home directories -- databases-- high-performance computing and more
+Azure NetApp Files supports SMB, NFS, and dual-protocol volumes and can be used for use cases such as:
-For more information about workload solutions leveraging Azure NetApp Files, see [Solution architectures using Azure NetApp Files](azure-netapp-files-solution-architectures.md).
+- File sharing.
+- Home directories.
+- Databases.
+- High-performance computing.
-## Performance, cost optimization, and scale
+For more information about workload solutions using Azure NetApp Files, see [Solution architectures using Azure NetApp Files](azure-netapp-files-solution-architectures.md).
-Azure NetApp Files is designed to provide high-performance file storage for enterprise workloads and provide functionality to provide cost optimization and scale. Key features that contribute to these include:
+## Performance, cost optimization, and scale
-| Functionality | Description | Benefit |
+Azure NetApp Files is designed to provide high-performance file storage for enterprise workloads, along with functionality for cost optimization and scale. Key features that contribute to these capabilities include:
+
+| Functionality | Description | Benefit |
| - | - | - |
-| In-Azure bare-metal flash performance | Fast and reliable all-flash performance with submillisecond latency | Run performance-intensive workloads in the cloud with on-premises infrastructure-level performance
-| Multi-protocol support | Supports multiple protocols including NFSv3, NFSv4.1, SMB 3.0, SMB 3.1.1 and simultaneous dual-protocol | Seamlessly integrate with existing infrastructure and workflows without compatibility issues or complex configurations. |
-| Three flexible performance tiers (standard, premium, ultra) | Three performance tiers with dynamic service level change capability based on workload needs, including cool access for cold data | Choose the right performance level for workloads and dynamically adjust performance without overspending on resources.
-| Small-to-large volumes | Easily resize file volumes from 100 GiB up to 100 TiB without downtime | Scale storage as business needs grow without over-provisioning, avoiding upfront cost.
-| 1-TiB minimum capacity pool size | 1-TiB capacity pool is a reduced size storage pool compared to the initial 4 TiB minimum | Save money by starting with a smaller storage footprint and lower entry point, without sacrificing performance or availability. Scale storage based on growth without high upfront costs.
-| 1000-TiB maximum capacity pool | 1000-TiB capacity pool is an increased storage pool compared to the initial 500 TiB maximum | Reduce waste by creating larger, pooled capacity and performance budget and share/distribute across volumes.
-| 100-500 TiB large volumes | Store large volumes of data up to 500 TiB in a single volume | Manage large data sets and high-performance workloads with ease.
-| User and group quotas | Set quotas on storage usage for individual users and groups | Control storage usage and optimize resource allocation.
-| Virtual machine (VM) networked storage performance | Higher VM network throughput compared to disk IO limits enable more-demanding workloads on smaller Azure VMs | Improve application performance at a smaller virtual machine footprint, improving overall efficiency and lowering application license cost.
-| Deep workload readiness | Seamless deployment and migration of any-size workload with well-documented deployment guides | Easily migrate any workload of any size to the platform. Enjoy a seamless, cost-effective deployment and migration experience.
-| Datastores for Azure VMware Solution | Use Azure NetApp Files as a storage solution for VMware workloads in Azure, reducing the need for superfluous compute nodes normally included with Azure VMware Solution expansions | Save money by eliminating the need for unnecessary compute nodes when expanding storage, resulting in significant cost savings.
-| Standard storage with cool access | Use the cool access option of Azure NetApp Files Standard service level to move inactive data transparently from Azure NetApp Files Standard service-level storage (the hot tier) to an Azure storage account (the cool tier) | Save money by transitioning data that resides within Azure NetApp Files volumes (the hot tier) by moving blocks to the lower cost storage (the cool tier). |
-
-These features work together to provide a high-performance file storage solution for the demands of enterprise workloads. They help to ensure that your workloads experience optimal (low) storage latency, cost and scale.
+| In-Azure bare-metal flash performance | Fast and reliable all-flash performance with submillisecond latency. | Run performance-intensive workloads in the cloud with on-premises infrastructure-level performance.
+| Multi-protocol support | Supports multiple protocols, including NFSv3, NFSv4.1, SMB 3.0, SMB 3.1.1, and simultaneous dual-protocol. | Seamlessly integrate with existing infrastructure and workflows without compatibility issues or complex configurations. |
+| Three flexible performance tiers (Standard, Premium, Ultra) | Three performance tiers with dynamic service-level change capability based on workload needs, including cool access for cold data. | Choose the right performance level for workloads and dynamically adjust performance without overspending on resources.
+| Small-to-large volumes | Easily resize file volumes from 100 GiB up to 100 TiB without downtime. | Scale storage as business needs grow without over-provisioning, avoiding upfront cost.
+| 1-TiB minimum capacity pool size | A 1-TiB capacity pool is a reduced-size storage pool compared to the initial 4-TiB minimum. | Save money by starting with a smaller storage footprint and lower entry point, without sacrificing performance or availability. Scale storage based on growth without high upfront costs.
+| 1,000-TiB maximum capacity pool | A 1,000-TiB capacity pool is an increased storage pool compared to the initial 500-TiB maximum. | Reduce waste by creating larger, pooled capacity and performance budget, and share and distribute across volumes.
+| 100-500 TiB large volumes | Store large volumes of data up to 500 TiB in a single volume. | Manage large datasets and high-performance workloads with ease.
+| User and group quotas | Set quotas on storage usage for individual users and groups. | Control storage usage and optimize resource allocation.
+| Virtual machine (VM) networked storage performance | Higher VM network throughput compared to disk IO limits enables more demanding workloads on smaller Azure VMs. | Improve application performance at a smaller VM footprint, improving overall efficiency and lowering application license cost.
+| Deep workload readiness | Seamless deployment and migration of any-size workload with well-documented deployment guides. | Easily migrate any workload of any size to the platform. Enjoy a seamless, cost-effective deployment and migration experience.
+| Datastores for Azure VMware Solution | Use Azure NetApp Files as a storage solution for VMware workloads in Azure, reducing the need for superfluous compute nodes normally included with Azure VMware Solution expansions. | Save money by eliminating the need for unnecessary compute nodes when you expand storage, resulting in significant cost savings.
+| Standard storage with cool access | Use the cool access option of Azure NetApp Files Standard service level to move inactive data transparently from Azure NetApp Files Standard service-level storage (the hot tier) to an Azure Storage account (the cool tier). | Save money by transitioning data that resides within Azure NetApp Files volumes (the hot tier) by moving blocks to the lower-cost storage (the cool tier). |
+
+These features work together to provide a high-performance file storage solution for the demands of enterprise workloads. They help to ensure that your workloads experience optimal (low) storage latency, cost, and scale.
## Simplicity and availability
Azure NetApp Files is designed to provide simplicity and high availability for y
| Functionality | Description | Benefit |
| - | - | - |
-| Volumes as a Service | Provision and manage volumes in minutes with a few clicks like any other Azure service | Enables businesses to quickly and easily provision and manage volumes without the need for dedicated hardware or complex configurations.
-| Native Azure Integration | Integration with the Azure portal, REST, CLI, billing, monitoring, and security | Simplifies management and ensures consistency with other Azure services, while providing a familiar interface and integration with existing tools and workflows.
-| High availability | Azure NetApp Files provides a [high availability SLA](https://azure.microsoft.com/support/legal/sla/netapp/) with automatic failover | Ensures that data is always available and accessible, avoiding downtime and disruption to business operations.
-| Application migration | Migrate applications to Azure without refactoring | Enables businesses to move their workloads to Azure quickly and easily without the need for costly and time-consuming application refactoring or redesign.
-| Cross-region and cross-zone replication | Replicate data between regions or zones | Provide disaster recovery capabilities and ensure data availability and redundancy across different Azure regions or availability zones.
-| Application volume groups | Application volume groups enable you to deploy all application volumes according to best practices in a single one-step and optimized workflow | Simplified multi-volume deployment for applications, ensuring volumes and mount points are optimized and adhere to best practices in a single step, saving time and effort.
-| Programmatic deployment | Automate deployment and management with APIs and SDKs | Enables businesses to integrate Azure NetApp Files with their existing automation and management tools, reducing the need for manual intervention and improving efficiency.
-| Fault-tolerant bare metal | Built on a fault-tolerant bare metal fleet powered by ONTAP | Ensures high performance and reliability by leveraging a robust, fault-tolerant storage platform and powerful data management capabilities provided by ONTAP.
-| Azure native billing | Integrates natively with Azure billing, providing a seamless and easy-to-use billing experience, based on hourly usage | Easily and accurately manage and track the cost of using the service, allowing for seamless budgeting and cost control. Easily track usage and expenses directly from the Azure portal, providing a unified experience for billing and management. |
+| Volumes as a service | Provision and manage volumes in minutes with a few clicks like any other Azure service. | Enables businesses to quickly and easily provision and manage volumes without the need for dedicated hardware or complex configurations.
+| Native Azure integration | Integration with the Azure portal, REST, CLI, billing, monitoring, and security. | Simplifies management and ensures consistency with other Azure services while providing a familiar interface and integration with existing tools and workflows.
+| High availability | Azure NetApp Files provides a [high-availability SLA](https://azure.microsoft.com/support/legal/sla/netapp/) with automatic failover. | Ensures that data is always available and accessible, avoiding downtime and disruption to business operations.
+| Application migration | Migrate applications to Azure without refactoring. | Enables businesses to move their workloads to Azure quickly and easily without the need for costly and time-consuming application refactoring or redesign.
+| Cross-region and cross-zone replication | Replicate data between regions or zones. | Provides disaster recovery capabilities and ensures data availability and redundancy across different Azure regions or availability zones.
+| Application volume groups | Application volume groups enable you to deploy all application volumes according to best practices in a single one-step and optimized workflow. | Simplified multi-volume deployment for applications ensures volumes and mount points are optimized and adhere to best practices in a single step, saving time and effort.
+| Programmatic deployment | Automate deployment and management with APIs and SDKs. | Enables businesses to integrate Azure NetApp Files with their existing automation and management tools, reducing the need for manual intervention and improving efficiency.
+| Fault-tolerant bare metal | Built on a fault-tolerant bare-metal fleet powered by ONTAP. | Ensures high performance and reliability by using a robust, fault-tolerant storage platform and powerful data management capabilities provided by ONTAP.
+| Azure native billing | Integrates natively with Azure billing, providing a seamless and easy-to-use billing experience, based on hourly usage. | Easily and accurately manage and track the cost of using the service for seamless budgeting and cost control. Easily track usage and expenses directly from the Azure portal for a unified experience for billing and management. |
-These features work together to provide a simple-to-use and highly available file storage solution to ensure that your data is easy to manage and always available, recoverable, and accessible to your applications even in an outage.
+These features work together to provide a simple-to-use and highly available file storage solution. This solution ensures that your data is easy to manage and always available, recoverable, and accessible to your applications, even in an outage.
-## Data management and security
+## Data management and security
Azure NetApp Files provides built-in data management and security capabilities to help ensure the secure storage, availability, and manageability of your data. Key features include:

| Functionality | Description | Benefit |
| - | - | - |
-| Efficient snapshots and backup | Advanced data protection and faster recovery of data by leveraging block-efficient, incremental snapshots and vaulting | Quickly and easily backup data and restore to a previous point in time, minimizing downtime and reducing the risk of data loss.
-| Snapshot restore to a new volume | Instantly restore data from a previously taken snapshot quickly and accurately | Reduce downtime and save time and resources that would otherwise be spent on restoring data from backups.
-| Snapshot revert | Revert volume to the state it was in when a previous snapshot was taken | Easily and quickly recover data (in-place) to a known good state, ensuring business continuity and maintaining productivity.
-| Application-aware snapshots and backup | Ensure application-consistent snapshots with guaranteed recoverability | Automate snapshot creation and deletion processes, reducing manual efforts and potential errors while increasing productivity by allowing teams to focus on other critical tasks.
-| Efficient cloning | Create and access clones in seconds | Save time and reduce costs for test, development, system refresh and analytics.
-| Data-in-transit encryption | Secure data transfers with protocol encryption | Ensure the confidentiality and integrity of data being transmitted, with peace of mind that information is safe and secure.
-| Data-at-rest encryption | Data-at-rest encryption with platform- or customer-managed keys | Prevent unrestrained access to stored data, meet compliance requirements and enhance data security.
-| Azure platform integration and compliance certifications | Compliance with regulatory requirements and Azure platform integration | Adhere to Azure standards and regulatory compliance, ensure audit and governance completion.
-| Azure Identity and Access Management (IAM) | Azure role-based access control (RBAC) service allows you to manage permissions for resources at any level | Simplify access management and improve compliance with Azure-native RBAC, empowering you to easily control user access to configuration management.
-| AD/LDAP authentication, export policies & access control lists (ACLs) | Authenticate and authorize access to data using existing AD/LDAP credentials and allow for the creation of export policies and ACLs to govern data access and usage | Prevent data breaches and ensure compliance with data security regulations, with enhanced granular control over access to data volumes, directories and files. |
+| Efficient snapshots and backup | Advanced data protection and faster recovery of data by using block-efficient, incremental snapshots and vaulting. | Quickly and easily back up data and restore to a previous point in time, minimizing downtime and reducing the risk of data loss.
+| Snapshot restore to a new volume | Instantly restore data from a previously taken snapshot quickly and accurately. | Reduces downtime and saves time and resources that would otherwise be spent on restoring data from backups.
+| Snapshot revert | Revert volume to the state it was in when a previous snapshot was taken. | Easily and quickly recover data (in-place) to a known good state, ensuring business continuity and maintaining productivity.
+| Application-aware snapshots and backup | Ensure application-consistent snapshots with guaranteed recoverability. | Automates snapshot creation and deletion processes, reducing manual efforts and potential errors while increasing productivity by allowing teams to focus on other critical tasks.
+| Efficient cloning | Create and access clones in seconds. | Saves time and reduces costs for test, development, system refresh, and analytics.
+| Data-in-transit encryption | Secure data transfers with protocol encryption. | Ensures the confidentiality and integrity of data being transmitted for peace of mind that information is safe and secure.
+| Data-at-rest encryption | Data-at-rest encryption with platform- or customer-managed keys. | Prevents unrestrained access to stored data, meets compliance requirements, and enhances data security.
+| Azure platform integration and compliance certifications | Compliance with regulatory requirements and Azure platform integration. | Adheres to Azure standards and regulatory compliance and ensures audit and governance completion.
+| Azure Identity & Access Management (IAM) | Azure role-based access control (RBAC) allows you to manage permissions for resources at any level. | Simplifies access management and improves compliance with Azure-native RBAC, empowering you to easily control user access to configuration management.
+| AD/LDAP authentication, export policies, and access control lists (ACLs) | Authenticate and authorize access to data by using existing AD/LDAP credentials and allow for the creation of export policies and ACLs to govern data access and usage. | Prevents data breaches and ensures compliance with data security regulations, with enhanced granular control over access to data volumes, directories, and files. |
These features work together to provide a comprehensive data management solution that helps to ensure that your data is always available, recoverable, and secure.
These features work together to provide a comprehensive data management solution
* [Understand the storage hierarchy of Azure NetApp Files](azure-netapp-files-understand-storage-hierarchy.md)
* [Quickstart: Set up Azure NetApp Files and create an NFS volume](azure-netapp-files-quickstart-set-up-account-create-volumes.md)
-* [Understand NAS concepts in Azure NetApp Files](network-attached-storage-concept.md)
-* [Register for NetApp Resource Provider](azure-netapp-files-register.md)
-* [Solution architectures using Azure NetApp Files](azure-netapp-files-solution-architectures.md)
+* [Understand NAS concepts in Azure NetApp Files](network-attached-storage-concept.md)
+* [Register for NetApp Resource Provider](azure-netapp-files-register.md)
+* [Solution architectures using Azure NetApp Files](azure-netapp-files-solution-architectures.md)
* [Azure NetApp Files videos](azure-netapp-files-videos.md)
azure-netapp-files Double Encryption At Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/double-encryption-at-rest.md
Azure NetApp Files double encryption at rest is supported for the following regi
* Australia Southeast * Brazil South * Canada Central
-* Canada East
* Central US * East Asia * East US
azure-netapp-files Performance Linux Nfs Read Ahead https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/performance-linux-nfs-read-ahead.md
Title: Linux NFS read-ahead best practices for Azure NetApp Files - Session slots and slot table entries | Microsoft Docs
-description: Describes filesystem cache and Linux NFS read-ahead best practices for Azure NetApp Files.
+description: Describes filesystem cache and Linux NFS read-ahead best practices for Azure NetApp Files.
documentationcenter: ''
ms.assetid:
na-+ Last updated 09/29/2022 # Linux NFS read-ahead best practices for Azure NetApp Files
-This article helps you understand filesystem cache best practices for Azure NetApp Files.
+This article helps you understand filesystem cache best practices for Azure NetApp Files.
-NFS read-ahead predictively requests blocks from a file in advance of I/O requests by the application. It is designed to improve client sequential read throughput. Until recently, all modern Linux distributions set the read-ahead value to be equivalent of 15 times the mounted filesystems `rsize`.
+NFS read-ahead predictively requests blocks from a file in advance of I/O requests by the application. It is designed to improve client sequential read throughput. Until recently, all modern Linux distributions set the read-ahead value to be the equivalent of 15 times the mounted filesystem's `rsize`.
The following table shows the default read-ahead values for each given `rsize` mount option.
The following table shows the default read-ahead values for each currently avail
| Debian | Up to at least 10 | 15 x `rsize` |
-## How to work with per-NFS filesystem read-ahead
+## How to work with per-NFS filesystem read-ahead
NFS read-ahead is defined at the mount point for an NFS filesystem. The default setting can be viewed and set both dynamically and persistently. For convenience, the following bash script written by Red Hat has been provided for viewing or dynamically setting read-ahead for a mounted NFS filesystem.
-Read-ahead can be defined either dynamically per NFS mount using the following script or persistently using `udev` rules as shown in this section. To display or set read-ahead for a mounted NFS filesystem, you can save the following script as a bash file, modify the fileΓÇÖs permissions to make it an executable (`chmod 544 readahead.sh`), and run as shown.
+Read-ahead can be defined either dynamically per NFS mount using the following script or persistently using `udev` rules as shown in this section. To display or set read-ahead for a mounted NFS filesystem, you can save the following script as a bash file, modify the file's permissions to make it an executable (`chmod 544 readahead.sh`), and run as shown.
-## How to show or set read-ahead values
+## How to show or set read-ahead values
-To show the current read-ahead value (the returned value is in KiB), run the following command:
+To show the current read-ahead value (the returned value is in KiB), run the following command:
```bash
./readahead.sh show <mount-point>
```
-To set a new value for read-ahead, run the following command:
+To set a new value for read-ahead, run the following command:
```bash
./readahead.sh set <mount-point> [read-ahead-kb]
```
-
-### Example
+
+### Example
```bash
#!/bin/bash
fi
```
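Only a fragment of the Red Hat script appears above. As a minimal sketch of a script with the same `show`/`set` interface, assuming the mount point's device ID (from `mountpoint -d`) names its entry under `/sys/class/bdi`, you could use something like the following; treat it as illustrative rather than the original script:

```bash
#!/bin/bash
# Minimal sketch: show or set read_ahead_kb for a mounted NFS filesystem.
# Usage: ./readahead.sh show <mount-point>
#        ./readahead.sh set  <mount-point> <read-ahead-kb>
set -euo pipefail

action="${1:-}"
mount_point="${2:-}"

if [[ -z "$action" || -z "$mount_point" ]]; then
    echo "Usage: $0 show|set <mount-point> [read-ahead-kb]" >&2
    exit 1
fi

# Resolve the mount point to its device ID (major:minor), which names
# the backing device info (BDI) entry under /sys/class/bdi.
bdi_id="$(mountpoint -d "$mount_point")"
sysfs_file="/sys/class/bdi/${bdi_id}/read_ahead_kb"

case "$action" in
    show)
        echo "$(cat "$sysfs_file") KiB"
        ;;
    set)
        new_value="${3:?read-ahead-kb value required}"
        echo "$new_value" > "$sysfs_file"   # writing to sysfs requires root
        ;;
    *)
        echo "Unknown action: $action" >&2
        exit 1
        ;;
esac
```

For example, `sudo ./readahead.sh set /mnt/anf 15380` (where `/mnt/anf` is a placeholder mount point) would set read-ahead to 15,380 KiB for that volume.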
## How to persistently set read-ahead for NFS mounts
-To persistently set read-ahead for NFS mounts, `udev` rules can be written as follows:
+To persistently set read-ahead for NFS mounts, `udev` rules can be written as follows:
1. Create and test `/etc/udev/rules.d/99-nfs.rules`:
To persistently set read-ahead for NFS mounts, `udev` rules can be written as fo
SUBSYSTEM=="bdi", ACTION=="add", PROGRAM="<absolute_path>/awk -v bdi=$kernel 'BEGIN{ret=1} {if ($4 == bdi) {ret=0}} END{exit ret}' /proc/fs/nfsfs/volumes", ATTR{read_ahead_kb}="15380" ```
-2. Apply the `udev` rule:
+2. Apply the `udev` rule:
```bash
sudo udevadm control --reload
```
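To confirm the rule takes effect, you can remount an NFS volume and read the value back from sysfs. A hedged example follows; `/mnt/anf` is a placeholder mount point:

```bash
# Remount the volume so its BDI entry is re-created and the udev rule is applied.
sudo umount /mnt/anf && sudo mount /mnt/anf

# Read back the current read-ahead value (in KiB) for the volume's backing device.
cat /sys/class/bdi/$(mountpoint -d /mnt/anf)/read_ahead_kb
```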
-## Next steps
+## Next steps
* [Linux direct I/O best practices for Azure NetApp Files](performance-linux-direct-io.md) * [Linux filesystem cache best practices for Azure NetApp Files](performance-linux-filesystem-cache.md) * [Linux NFS mount options best practices for Azure NetApp Files](performance-linux-mount-options.md) * [Linux concurrency best practices](performance-linux-concurrency-session-slots.md)
-* [Azure virtual machine SKUs best practices](performance-virtual-machine-sku.md)
-* [Performance benchmarks for Linux](performance-benchmarks-linux.md)
+* [Azure virtual machine SKUs best practices](performance-virtual-machine-sku.md)
+* [Performance benchmarks for Linux](performance-benchmarks-linux.md)
azure-netapp-files Troubleshoot Volumes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/troubleshoot-volumes.md
ms.assetid:
na-+ Last updated 02/21/2023 # Troubleshoot volume errors for Azure NetApp Files
-This article describes error messages and resolutions that can help you troubleshoot Azure NetApp Files volumes.
+This article describes error messages and resolutions that can help you troubleshoot Azure NetApp Files volumes.
## Errors for SMB and dual-protocol volumes

| Error conditions | Resolutions |
|--|-|
-| The SMB or dual-protocol volume creation fails with the following error: <br> `{"code":"DeploymentFailed","message":"At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/DeployOperations for usage details.","details":[{"code":"InternalServerError", "message":"Error when creating - Could not query DNS server. Verify that the network configuration is correct and that DNS servers are available."}]}` | This error indicates that the DNS is not reachable. <br> Consider the following solutions: <ul><li>Check if AD DS and the volume are being deployed in same region.</li> <li>Check if AD DS and the volume are using the same VNet. If they're using different VNETs, make sure that the VNets are peered with each other. See [Guidelines for Azure NetApp Files network planning](azure-netapp-files-network-topologies.md). </li> <li>The DNS server might have network security groups (NSGs) applied. As such, it does not allow the traffic to flow. In this case, open the NSGs to the DNS or AD to connect to various ports. For port requirements, see [Requirements for Active Directory connections](create-active-directory-connections.md#requirements-for-active-directory-connections). </li></ul> <br>The same solutions apply for Microsoft Entra Domain Services. Microsoft Entra Domain Services should be deployed in the same region. The VNet should be in the same region or peered with the VNet used by the volume. |
-| The SMB or dual-protocol volume creation fails with the following error: <br> `{"code":"DeploymentFailed","message":"At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/DeployOperations for usage details.","details":[{"code":"InternalServerError", "message":"Error when creating - Failed to create the Active Directory machine account \"SMBTESTAD-C1C8\". Reason: Kerberos Error: Invalid credentials were given Details: Error: Machine account creation procedure failed\n [ 563] Loaded the preliminary configuration.\n**[ 670] FAILURE: Could not authenticate as 'test@contoso.com':\n** Unknown user (KRB5KDC_ERR_C_PRINCIPAL_UNKNOWN)\n. "}]}` | <ul><li>Make sure that the username entered is correct. </li> <li>Make sure that the user is part of the Administrator group that has the privilege to create machine (computer) accounts. </li> <li> If you use Microsoft Entra Domain Services, make sure that the user is part of the Microsoft Entra group `Azure AD DC Administrators`. </li></ul> |
+| The SMB or dual-protocol volume creation fails with the following error: <br> `{"code":"DeploymentFailed","message":"At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/DeployOperations for usage details.","details":[{"code":"InternalServerError", "message":"Error when creating - Could not query DNS server. Verify that the network configuration is correct and that DNS servers are available."}]}` | This error indicates that the DNS is not reachable. <br> Consider the following solutions: <ul><li>Check if AD DS and the volume are being deployed in same region.</li> <li>Check if AD DS and the volume are using the same VNet. If they're using different VNETs, make sure that the VNets are peered with each other. See [Guidelines for Azure NetApp Files network planning](azure-netapp-files-network-topologies.md). </li> <li>The DNS server might have network security groups (NSGs) applied. As such, it does not allow the traffic to flow. In this case, open the NSGs to the DNS or AD to connect to various ports. For port requirements, see [Requirements for Active Directory connections](create-active-directory-connections.md#requirements-for-active-directory-connections). </li></ul> <br>The same solutions apply for Microsoft Entra Domain Services. Microsoft Entra Domain Services should be deployed in the same region. The VNet should be in the same region or peered with the VNet used by the volume. |
+| The SMB or dual-protocol volume creation fails with the following error: <br> `{"code":"DeploymentFailed","message":"At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/DeployOperations for usage details.","details":[{"code":"InternalServerError", "message":"Error when creating - Failed to create the Active Directory machine account \"SMBTESTAD-C1C8\". Reason: Kerberos Error: Invalid credentials were given Details: Error: Machine account creation procedure failed\n [ 563] Loaded the preliminary configuration.\n**[ 670] FAILURE: Could not authenticate as 'test@contoso.com':\n** Unknown user (KRB5KDC_ERR_C_PRINCIPAL_UNKNOWN)\n. "}]}` | <ul><li>Make sure that the username entered is correct. </li> <li>Make sure that the user is part of the Administrator group that has the privilege to create machine (computer) accounts. </li> <li> If you use Microsoft Entra Domain Services, make sure that the user is part of the Microsoft Entra group `Azure AD DC Administrators`. </li></ul> |
| The SMB or dual-protocol volume creation fails with the following error: <br> `{"code":"DeploymentFailed","message":"At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/DeployOperations for usage details.","details":[{"code":"InternalServerError", "message":"Error when creating - Failed to create the Active Directory machine account \"SMBTESTAD-A452\". Reason: Kerberos Error: Pre-authentication information was invalid Details: Error: Machine account creation procedure failed\n [ 567] Loaded the preliminary configuration.\n [ 671] Successfully connected to ip 10.x.x.x, port 88 using TCP\n**[ 1099] FAILURE: Could not authenticate as\n** 'user@contoso.com': CIFS server account password does\n** not match password stored in Active Directory\n** (KRB5KDC_ERR_PREAUTH_FAILED)\n. "}]}` | Make sure that the password entered for joining the AD connection is correct. | | The SMB or dual-protocol volume creation fails with the following error: <br> `{"code":"DeploymentFailed","message":"At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/DeployOperations for usage details.","details":[{"code":"InternalServerError","message":"Error when creating - Failed to create the Active Directory machine account \"SMBTESTAD-D9A2\". Reason: SecD Error: ou not found Details: Error: Machine account creation procedure failed\n [ 561] Loaded the preliminary configuration.\n [ 665] Successfully connected to ip 10.x.x.x, port 88 using TCP\n [ 1039] Successfully connected to ip 10.x.x.x, port 389 using TCP\n**[ 1147] FAILURE: Specifed OU 'OU=AADDC Com' does not exist in\n** contoso.com\n. "}]}` | Make sure that the OU path specified for joining the AD connection is correct. If you use Microsoft Entra Domain Services, make sure that the organizational unit path is `OU=AADDC Computers`. |
-| The SMB or dual-protocol volume creation fails with the following error: <br> `Failed to create the Active Directory machine account \"SMB-ANF-VOL. Reason: LDAP Error: Local error occurred Details: Error: Machine account creation procedure failed. [nnn] Loaded the preliminary configuration. [nnn] Successfully connected to ip 10.x.x.x, port 88 using TCP [nnn] Successfully connected to ip 10.x.x.x, port 389 using [nnn] Entry for host-address: 10.x.x.x not found in the current source: FILES. Ignoring and trying next available source [nnn] Source: DNS unavailable. Entry for host-address:10.x.x.x found in any of the available sources\n*[nnn] FAILURE: Unable to SASL bind to LDAP server using GSSAPI: local error [nnn] Additional info: SASL(-1): generic failure: GSSAPI Error: Unspecified GSS failure. Minor code may provide more information (Cannot determine realm for numeric host address) [nnn] Unable to connect to LDAP (Active Directory) service on contoso.com (Error: Local error) [nnn] Unable to make a connection (LDAP (Active Directory):contosa.com, result: 7643. ` | The pointer (PTR) record of the AD host machine might be missing on the DNS server. You need to create a reverse lookup zone on the DNS server, and then add a PTR record of the AD host machine in that reverse lookup zone. |
+| The SMB or dual-protocol volume creation fails with the following error: <br> `Failed to create the Active Directory machine account \"SMB-ANF-VOL. Reason: LDAP Error: Local error occurred Details: Error: Machine account creation procedure failed. [nnn] Loaded the preliminary configuration. [nnn] Successfully connected to ip 10.x.x.x, port 88 using TCP [nnn] Successfully connected to ip 10.x.x.x, port 389 using [nnn] Entry for host-address: 10.x.x.x not found in the current source: FILES. Ignoring and trying next available source [nnn] Source: DNS unavailable. Entry for host-address:10.x.x.x found in any of the available sources\n*[nnn] FAILURE: Unable to SASL bind to LDAP server using GSSAPI: local error [nnn] Additional info: SASL(-1): generic failure: GSSAPI Error: Unspecified GSS failure. Minor code may provide more information (Cannot determine realm for numeric host address) [nnn] Unable to connect to LDAP (Active Directory) service on contoso.com (Error: Local error) [nnn] Unable to make a connection (LDAP (Active Directory):contosa.com, result: 7643. ` | The pointer (PTR) record of the AD host machine might be missing on the DNS server. You need to create a reverse lookup zone on the DNS server, and then add a PTR record of the AD host machine in that reverse lookup zone. |
| The SMB or dual-protocol volume creation fails with the following error: <br> `Failed to create the Active Directory machine account \"SMB-ANF-VOL\". Reason: Kerberos Error: KDC has no support for encryption type Details: Error: Machine account creation procedure failed [nnn]Loaded the preliminary configuration. [nnn]Successfully connected to ip 10.x.x.x, port 88 using TCP [nnn]FAILURE: Could not authenticate as 'contosa.com': KDC has no support for encryption type (KRB5KDC_ERR_ETYPE_NOSUPP) ` | Make sure that [AES Encryption](./create-active-directory-connections.md#create-an-active-directory-connection) is enabled both in the Active Directory connection and for the service account. | | The SMB or dual-protocol volume creation fails with the following error: <br> `Failed to create the Active Directory machine account \"SMB-NTAP-VOL\". Reason: LDAP Error: Strong authentication is required Details: Error: Machine account creation procedure failed\n [ 338] Loaded the preliminary configuration.\n [ nnn] Successfully connected to ip 10.x.x.x, port 88 using TCP\n [ nnn ] Successfully connected to ip 10.x.x.x, port 389 using TCP\n [ 765] Unable to connect to LDAP (Active Directory) service on\n dc51.area51.com (Error: Strong(er) authentication\n required)\n*[ nnn] FAILURE: Unable to make a connection (LDAP (Active\n* Directory):contoso.com), result: 7609\n. "` | The LDAP Signing option is not selected, but the AD client has LDAP signing. [Enable LDAP Signing](create-active-directory-connections.md#create-an-active-directory-connection) and retry. | | SMB volume creation fails with the following error: <br> `Failed to create the Active Directory machine account. Reason: LDAP Error: Intialization of LDAP library failed Details: Error: Machine account creation procedure failed` | This error occurs because the service or user account used in the Azure NetApp Files Active Directory connections does not have sufficient privilege to create computer objects or make modifications to the newly created computer object. <br> To solve the issue, you should grant the account being used greater privilege. You can apply a default role with sufficient privilege. You can also delegate additional privilege to the user or service account or to a group it's part of. |
This article describes error messages and resolutions that can help you troubles
|`Error allocating volume - Export policy rules does not match kerberosEnabled flag` | Azure NetApp Files does not support Kerberos for NFSv3 volumes. Kerberos is supported only for the NFSv4.1 protocol. | |`This NetApp account has no configured Active Directory connections` | Configure Active Directory for the NetApp account with fields **KDC IP** and **AD Server Name**. See [Configure the Azure portal](configure-kerberos-encryption.md#configure-the-azure-portal) for instructions. | |`Mismatch between KerberosEnabled flag value and ExportPolicyRule's access type parameter values.` | Azure NetApp Files does not support converting a plain NFSv4.1 volume to Kerberos NFSv4.1 volume, and vice-versa. |
-|`mount.nfs: access denied by server when mounting volume <SMB_SERVER_NAME-XXX.DOMAIN_NAME>/<VOLUME_NAME>` <br> Example: `smb-test-64d9.contoso.com:/nfs41-vol101` | <ol><li> Ensure that the A/PTR records are properly set up and exist in the Active Directory for the server name `smb-test-64d9.contoso.com`. <br> In the NFS client, if `nslookup` of `smb-test-64d9.contoso.com` resolves to IP address IP1 (that is, `10.1.1.68`), then `nslookup` of IP1 must resolve to only one record (that is, `smb-test-64d9.contoso.com`). `nslookup` of IP1 *must* not resolve to multiple names. </li> <li>Set AES-256 for the NFS machine account of type `NFS-<Smb NETBIOS NAME>-<few random characters>` on AD using either PowerShell or the UI. <br> Example commands: <ul><li>`Set-ADComputer <NFS_MACHINE_ACCOUNT_NAME> -KerberosEncryptionType AES256` </li><li>`Set-ADComputer NFS-SMB-TEST-64 -KerberosEncryptionType AES256` </li></ul> </li> <li>Ensure that the time of the NFS client, AD, and Azure NetApp Files storage software is synchronized with each other and is within a five-minute skew range. </li> <li>Get the Kerberos ticket on the NFS client using the command `kinit <administrator>`.</li> <li>Reduce the NFS client hostname to fewer than 15 characters and perform the realm join again. </li><li>Restart the NFS client and the `rpc-gssd` service as follows. The exact service names may vary on some Linux distributions.<br>Most current distributions use the same service names. Perform the following as root or with `sudo`<br> `systemctl enable nfs-client.target && systemctl start nfs-client.target`<br>(Restart the `rpc-gssd` service.) <br> `systemctl restart rpc-gssd.service` </ul>|
+|`mount.nfs: access denied by server when mounting volume <SMB_SERVER_NAME-XXX.DOMAIN_NAME>/<VOLUME_NAME>` <br> Example: `smb-test-64d9.contoso.com:/nfs41-vol101` | <ol><li> Ensure that the A/PTR records are properly set up and exist in the Active Directory for the server name `smb-test-64d9.contoso.com`. <br> In the NFS client, if `nslookup` of `smb-test-64d9.contoso.com` resolves to IP address IP1 (that is, `10.1.1.68`), then `nslookup` of IP1 must resolve to only one record (that is, `smb-test-64d9.contoso.com`). `nslookup` of IP1 *must* not resolve to multiple names. </li> <li>Set AES-256 for the NFS machine account of type `NFS-<Smb NETBIOS NAME>-<few random characters>` on AD using either PowerShell or the UI. <br> Example commands: <ul><li>`Set-ADComputer <NFS_MACHINE_ACCOUNT_NAME> -KerberosEncryptionType AES256` </li><li>`Set-ADComputer NFS-SMB-TEST-64 -KerberosEncryptionType AES256` </li></ul> </li> <li>Ensure that the time of the NFS client, AD, and Azure NetApp Files storage software is synchronized with each other and is within a five-minute skew range. </li> <li>Get the Kerberos ticket on the NFS client using the command `kinit <administrator>`.</li> <li>Reduce the NFS client hostname to fewer than 15 characters and perform the realm join again. </li><li>Restart the NFS client and the `rpc-gssd` service as follows. The exact service names may vary on some Linux distributions.<br>Most current distributions use the same service names. Perform the following as root or with `sudo`<br> `systemctl enable nfs-client.target && systemctl start nfs-client.target`<br>(Restart the `rpc-gssd` service.) <br> `systemctl restart rpc-gssd.service` </ul>|
|`mount.nfs: an incorrect mount option was specified` | The issue might be related to the NFS client issue. Reboot the NFS client. | |`Hostname lookup failed` | You need to create a reverse lookup zone on the DNS server, and then add a PTR record of the AD host machine in that reverse lookup zone. <br> For example, assume that the IP address of the AD machine is `10.1.1.4`, the hostname of the AD machine (as found by using the hostname command) is `AD1`, and the domain name is `contoso.com`. The PTR record added to the reverse lookup zone should be `10.1.1.4 -> AD1.contoso.com`. | |`Volume creation fails due to unreachable DNS server` | Two possible solutions are available: <br> <ul><li> This error indicates that DNS is not reachable. The reason might be an incorrect DNS IP or a networking issue. Check the DNS IP entered in AD connection and make sure that the IP is correct. </li> <li> Make sure that the AD and the volume are in same region and in same VNet. If they are in different VNets, ensure that VNet peering is established between the two VNets. </li></ul> |
This article describes error messages and resolutions that can help you troubles
| When only primary group IDs are seen and user belongs to auxiliary groups too. | This is caused by a query timeout: <br> -Use [LDAP search scope option](configure-ldap-extended-groups.md). <br> -Use [preferred Active Directory servers for LDAP client](create-active-directory-connections.md#preferred-server-ldap). | | `Error describing volume - Entry doesn't exist for username: <username>, please try with a valid username` | -Check if the user is present on LDAP server. <br> -Check if the LDAP server is healthy. |
-## Errors for volume allocation
+## Errors for volume allocation
When you create a new volume or resize an existing volume in Azure NetApp Files, Microsoft Azure allocates storage and networking resources to your subscription. You might occasionally experience resource allocation failures because of unprecedented growth in demand for Azure services in specific regions.
This section explains the causes of some of the common allocation failures and s
|Out of storage or networking capacity in a region for regular volumes. <br> Error message: `There are currently insufficient resources available to create [or extend] a volume in this region. Please retry the operation. If the problem persists, contact Support.` | The error indicates that there are insufficient resources available in the region to create or resize volumes. <br> Try one of the following workarounds: <ul><li>Create the volume under a new VNet. Doing so will avoid hitting networking-related resource limits.</li> <li>Retry after some time. Resources may have been freed in the cluster, region, or zone in the interim.</li></ul> | |Out of storage capacity when creating a volume with network features set to `Standard`. <br> Error message: `No storage available with Standard network features, for the provided VNet.` | The error indicates that there are insufficient resources available in the region to create volumes with `Standard` networking features. <br> Try one of the following workarounds: <ul><li>If `Standard` network features are not required, create the volume with `Basic` network features.</li> <li>Try creating the volume under a new VNet. Doing so will avoid hitting networking-related resource limits</li><li>Retry after some time. Resources may have been freed in the cluster, region, or zone in the interim.</li></ul> |
-## Activity log warnings for volumes
+## Activity log warnings for volumes
| Warnings | Resolutions |
|-|-|
-| The `Microsoft.NetApp/netAppAccounts/capacityPools/volumes/ScaleUp` operation displays a warning: <br> `Percentage Volume Consumed Size reached 90%` | The used size of an Azure NetApp Files volume has reached 90% of the volume quota. You should [resize the volume](azure-netapp-files-resize-capacity-pools-or-volumes.md) soon. |
+| The `Microsoft.NetApp/netAppAccounts/capacityPools/volumes/ScaleUp` operation displays a warning: <br> `Percentage Volume Consumed Size reached 90%` | The used size of an Azure NetApp Files volume has reached 90% of the volume quota. You should [resize the volume](azure-netapp-files-resize-capacity-pools-or-volumes.md) soon. |
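When this warning appears, you can increase the volume quota in the portal or with the Azure CLI. The following is a hedged sketch; all resource names are placeholders, and `--usage-threshold` is the volume quota in GiB:

```azurecli
# Grow the volume quota to 6 TiB (6144 GiB) before it fills up.
az netappfiles volume update \
    --resource-group myRG \
    --account-name myNetAppAccount \
    --pool-name myPool \
    --name myVolume \
    --usage-threshold 6144
```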
-## Next steps
+## Next steps
* [Create an NFS volume for Azure NetApp Files](azure-netapp-files-create-volumes.md)
-* [Create an SMB volume for Azure NetApp Files](azure-netapp-files-create-volumes-smb.md)
-* [Create a dual-protocol volume for Azure NetApp Files](create-volumes-dual-protocol.md)
+* [Create an SMB volume for Azure NetApp Files](azure-netapp-files-create-volumes-smb.md)
+* [Create a dual-protocol volume for Azure NetApp Files](create-volumes-dual-protocol.md)
* [Configure network features for a volume](configure-network-features.md)
azure-netapp-files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/whats-new.md
ms.assetid:
na-+ Last updated 11/27/2023
Azure NetApp Files is updated regularly. This article provides a summary about t
* [Customer-managed keys](configure-customer-managed-keys.md) is now generally available (GA).
- You still must register the feature before using it for the first time.
-
+ You still must register the feature before using it for the first time.
+ ## November 2023 * [Capacity pool enhancement:](azure-netapp-files-set-up-capacity-pool.md) New lower limits
Azure NetApp Files is updated regularly. This article provides a summary about t
* [Metrics enhancement: Throughput limits](azure-netapp-files-metrics.md#volumes)
- Azure NetApp Files now supports a "throughput limit reached" metric for volumes. The metric is a Boolean value that denotes the volume is hitting its QoS limit. With this metric, you know whether or not to adjust volumes so they meet the specific needs of your workloads.
+ Azure NetApp Files now supports a "throughput limit reached" metric for volumes. The metric is a Boolean value that denotes whether the volume is hitting its QoS limit. With this metric, you know whether to adjust volumes so that they meet the specific needs of your workloads.
* [Standard network features in US Gov regions](azure-netapp-files-network-topologies.md#supported-regions) is now generally available (GA)
-
- Azure NetApp Files now supports Standard network features for new volumes in US Gov Arizona, US Gov Texas, and US Gov Virginia. Standard network features provide an enhanced virtual networking experience through various features for a seamless and consistent experience with security posture of all their workloads including Azure NetApp Files.
+
+ Azure NetApp Files now supports Standard network features for new volumes in US Gov Arizona, US Gov Texas, and US Gov Virginia. Standard network features provide an enhanced virtual networking experience and a consistent security posture for all your workloads, including Azure NetApp Files.
* [Volume user and group quotas](default-individual-user-group-quotas-introduction.md) is now generally available (GA).
- User and group quotas enable you to stay in control and define how much storage capacity can be used by individual users or groups can use within a specific Azure NetApp Files volume. You can set default (same for all users) or individual user quotas on all NFS, SMB, and dual protocol-enabled volumes. On all NFS-enabled volumes, you can define a default (that is, same for all users) or individual group quotas.
+ User and group quotas enable you to stay in control and define how much storage capacity individual users or groups can use within a specific Azure NetApp Files volume. You can set default (that is, the same for all users) or individual user quotas on all NFS, SMB, and dual protocol-enabled volumes. On all NFS-enabled volumes, you can define default or individual group quotas.
This feature is Generally Available in Azure commercial regions and US Gov regions where Azure NetApp Files is available.
Azure NetApp Files is updated regularly. This article provides a summary about t
In addition to Citrix App Layering, FSLogix user profiles (including FSLogix ODFC containers), and Microsoft SQL Server, Azure NetApp Files now supports [MSIX app attach](../virtual-desktop/create-netapp-files.md) with SMB Continuous Availability shares to enhance resiliency during storage service maintenance operations. Continuous Availability enables SMB transparent failover to eliminate disruptions caused by service maintenance events and improves reliability and user experience.

* [Azure NetApp Files datastores for Azure VMware Solution](../azure-vmware/attach-azure-netapp-files-to-azure-vmware-solution-hosts.md#supported-regions) in US Gov regions

Azure NetApp Files now supports [Azure NetApp Files datastores for Azure VMware Solution](../azure-vmware/attach-azure-netapp-files-to-azure-vmware-solution-hosts.md?tabs=azure-portal) in the US Gov Arizona and US Gov Virginia regions. Azure NetApp Files datastores for Azure VMware Solution provide the ability to scale storage independently of compute and can go beyond the limits of the local instance storage provided by vSAN, reducing total cost of ownership.

## October 2023
Most unstructured data is typically infrequently accessed. It can account for more than 50% of the total storage capacity in many storage environments. Infrequently accessed data associated with productivity software, completed projects, and old datasets is an inefficient use of high-performance storage. You can now use the cool access option in a capacity pool of the Azure NetApp Files Standard service level to have inactive data transparently moved from Azure NetApp Files Standard service-level storage (the *hot tier*) to an Azure storage account (the *cool tier*). This option lets you free up storage that resides within Azure NetApp Files volumes by moving data blocks to the lower-cost cool tier, resulting in overall cost savings. You can configure this option on a volume by specifying the number of days (the *coolness period*, ranging from 7 to 183 days) for inactive data to be considered "cool". Viewing and accessing the data remain transparent, except for a higher access time to data blocks that were moved to the cool tier.
* [Troubleshoot Azure NetApp Files using diagnose and solve problems tool](troubleshoot-diagnose-solve-problems.md)

The **diagnose and solve problems** tool simplifies troubleshooting, making it easier to identify and resolve issues affecting your Azure NetApp Files deployment. With the tool's proactive troubleshooting, user-friendly guidance, and integration with Azure Support, you can more easily manage and maintain a reliable, high-performance Azure NetApp Files storage environment.

* [Snapshot manageability enhancement: Identify parent snapshot](snapshots-restore-new-volume.md)

You can now see the name of the snapshot used to create a new volume. On the Volume overview page, the **Originated from** field identifies the source snapshot used in volume creation. If the field is empty, no snapshot was used.
## September 2023
* [Troubleshooting enhancement: validate user connectivity, group membership and access to LDAP-enabled volumes](troubleshoot-user-access-ldap.md)
Azure NetApp Files now provides you with the ability to validate user connectivity and access to LDAP-enabled volumes based on group membership. When you provide a user ID, Azure NetApp Files reports a list of primary and auxiliary group IDs that the user belongs to from the LDAP server. Validating user access is helpful for scenarios such as ensuring POSIX attributes set on the LDAP server are accurate or when you encounter permission errors.

## August 2023

* [Cross-region replication enhancement: re-establish deleted volume replication](reestablish-deleted-volume-relationships.md) (Preview)

Azure NetApp Files now allows you to re-establish a replication relationship between two volumes if you previously deleted it. If the destination volume remained operational and no snapshots were deleted, the replication re-establish operation uses the last common snapshot and incrementally synchronizes the destination volume based on the last known good snapshot. In that case, no baseline replication is required.
* [Backup vault](backup-vault-manage.md) (Preview)
* [SMB Continuous Availability (CA) shares](azure-netapp-files-create-volumes-smb.md#add-an-smb-volume) are now generally available (GA).

To enhance resiliency during storage service maintenance operations, SMB volumes used by Citrix App Layering, FSLogix user profile containers, and Microsoft SQL Server on Microsoft Windows Server can be enabled with Continuous Availability. Continuous Availability enables SMB Transparent Failover to eliminate disruptions caused by service maintenance events and improves reliability and user experience.

To learn more about Continuous Availability, see the [application resiliency FAQ](faq-application-resilience.md#do-i-need-to-take-special-precautions-for-smb-based-applications) and follow the instructions to enable it on new and existing SMB volumes.

* [Configure NFSv4.1 ID domain for non-LDAP volumes](azure-netapp-files-configure-nfsv41-domain.md) (Preview)
For details on registering the feature and setting NFSv4.1 ID Domain in Azure NetApp Files, see [Configure NFSv4.1 ID Domain](azure-netapp-files-configure-nfsv41-domain.md).
* [Moving volumes from *manual* QoS capacity pool to *auto* QoS capacity pool](dynamic-change-volume-service-level.md)

You can now move volumes from a manual QoS capacity pool to an auto QoS capacity pool. When you move a volume to an auto QoS capacity pool, the throughput is changed according to the allocated volume size (quota) of the target pool's service level: `<throughput> = <volume quota> x <Service Level Throughput / TiB>`
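
For example, here's a minimal worked sketch of that formula in Python. It assumes a 4 TiB volume quota and the Premium service level's published throughput of 64 MiB/s per TiB; confirm the current per-TiB figures for each service level in the Azure NetApp Files documentation.

```python
# Sketch: throughput assigned to a volume after it moves into an auto QoS pool.
# Assumption: the Premium service level provides 64 MiB/s of throughput per TiB of quota.
volume_quota_tib = 4
premium_mibps_per_tib = 64

throughput_mibps = volume_quota_tib * premium_mibps_per_tib
print(f"Auto QoS throughput: {throughput_mibps} MiB/s")  # 256 MiB/s
```
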
## June 2023
* [Azure NetApp Files double encryption at rest](double-encryption-at-rest.md) (Preview)
We're excited to announce the addition of double encryption at rest for Azure NetApp Files volumes. This new feature provides an extra layer of protection for your critical data, ensuring maximum confidentiality and mitigating potential liabilities. Double encryption at rest is ideal for industries such as finance, military, healthcare, and government, where breaches of confidentiality can have catastrophic consequences. By combining hardware-based encryption with encrypted SSD drives and software-based encryption at the volume level, your data remains secure throughout its lifecycle. You can select **double** as the encryption type during capacity pool creation to easily enable this advanced security layer.

* Availability zone volume placement enhancement - [Populate existing volumes](manage-availability-zone-volume-placement.md#populate-an-existing-volume-with-availability-zone-information) (Preview)

The Azure NetApp Files [availability zone volume placement](manage-availability-zone-volume-placement.md) feature lets you deploy *new volumes* in the availability zone of your choice, in alignment with Azure compute and other services in the same zone. With this "Populate existing volume" enhancement, you can now obtain and, if desired, populate *previously deployed, existing volumes* with the logical availability zone information. This capability automatically detects the physical zone a volume was deployed in and maps it to the logical zone for your subscription. This feature doesn't move any volumes between zones.

* [Customer-managed keys](configure-customer-managed-keys.md) for Azure NetApp Files now supports the option to **Disable public access** on the key vault that contains your encryption key. Selecting this option enhances network security by denying public configurations and allowing only connections through private endpoints.
## May 2023

* Azure NetApp Files now supports [customer-managed keys](configure-customer-managed-keys.md) on both source and data replication volumes with [cross-region replication](cross-region-replication-requirements-considerations.md) or [cross-zone replication](cross-zone-replication-requirements-considerations.md) relationships.
* [Standard network features - Edit volumes](configure-network-features.md#edit-network-features-option-for-existing-volumes) (Preview)
Azure NetApp Files volumes have been supported with Standard network features since [October 2021](#october-2021), but only for newly created volumes. This new *edit volumes* capability lets you change *existing* volumes that were configured with Basic network features to use Standard network features. This capability provides an enhanced, more standard, Microsoft Azure Virtual Network experience through various security and connectivity features that are available on Virtual Networks to Azure services. When you edit existing volumes to use Standard network features, you can start taking advantage of networking capabilities, such as (but not limited to):

* Increased number of client IPs in a virtual network (including immediately peered Virtual Networks) accessing Azure NetApp Files volumes - the [same as Azure VMs](azure-netapp-files-resource-limits.md#resource-limits)
* Enhanced network security with support for [network security groups](../virtual-network/network-security-groups-overview.md) on Azure NetApp Files delegated subnets
* Enhanced network control with support for [user-defined routes](../virtual-network/virtual-networks-udr-overview.md#user-defined) to and from Azure NetApp Files delegated subnets
* Connectivity over Active/Active VPN gateway setup
* [ExpressRoute FastPath](../expressroute/about-fastpath.md) connectivity to Azure NetApp Files

This feature is now in public preview, currently available in [16 Azure regions](azure-netapp-files-network-topologies.md#regions-edit-network-features). It will roll out to other regions. Stay tuned for further information as more regions become available.

* [Azure Application Consistent Snapshot tool (AzAcSnap) 8 (GA)](azacsnap-introduction.md)
Version 8 of the AzAcSnap tool is now generally available. [Azure Application Consistent Snapshot Tool](azacsnap-introduction.md) (AzAcSnap) is a command-line tool that enables you to simplify data protection for third-party databases in Linux environments. AzAcSnap 8 introduces the following new capabilities and improvements:

* Restore change - ability to revert volume for Azure NetApp Files
* New global settings file (`.azacsnaprc`) to control behavior of `azacsnap`
* Logging enhancements for failure cases and new "mainlog" for summarized monitoring
* Backup (`-c backup`) and Details (`-c details`) fixes

Download the latest release of the installer [here](https://aka.ms/azacsnapinstaller).

* [Single-file snapshot restore](snapshots-restore-file-single.md) is now generally available (GA)

* [Troubleshooting enhancement: break file locks](troubleshoot-file-locks.md)

In some cases you may encounter (stale) file locks on NFS, SMB, or dual-protocol volumes that need to be cleared. With this new Azure NetApp Files feature, you can now break these locks. You can break file locks for all files in a volume or break all file locks initiated by a specified client.
## April 2023
* [Disable `showmount`](disable-showmount.md) (Preview)
By default, Azure NetApp Files enables [`showmount` functionality](/windows-server/administration/windows-commands/showmount) to show NFS exported paths. The setting allows NFS clients to use the `showmount -e` command to see a list of exports available on the Azure NetApp Files NFS-enabled storage endpoint. This functionality might cause security scanners to flag the Azure NetApp Files NFS service as having a vulnerability because these scanners often use `showmount` to see what is being returned. In those scenarios, you might want to disable `showmount` on Azure NetApp Files. This setting allows you to enable/disable `showmount` for your NFS-enabled storage endpoints.
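
For example, you could verify the effect of this setting from an NFS client by running the same `showmount -e` query mentioned above. The following minimal Python sketch simply shells out to that command; the endpoint IP is a placeholder, and the exact failure behavior when `showmount` is disabled should be confirmed in your environment.

```python
# Sketch: check whether exports are still listed on an Azure NetApp Files
# NFS endpoint after toggling the showmount setting.
# Assumption: <mount-ip> is the IP address shown in your volume's mount instructions.
import subprocess

result = subprocess.run(
    ["showmount", "-e", "<mount-ip>"],
    capture_output=True,
    text=True,
)

if result.returncode == 0 and result.stdout.strip():
    print(result.stdout)   # export list is visible: showmount is enabled
else:
    print(result.stderr)   # query fails or returns nothing: showmount is likely disabled
```
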
* [Active Directory support improvement](create-active-directory-connections.md#preferred-server-ldap) (Preview)
The Preferred server for LDAP client option allows you to submit the IP addresses of up to two Active Directory (AD) servers as a comma-separated list. Rather than sequentially contacting all of the discovered AD services for a domain, the LDAP client will contact the specified servers first.
## February 2023
* [Access-based enumeration](azure-netapp-files-create-volumes-smb.md#access-based-enumeration) (Preview)
Access-based enumeration (ABE) displays only the files and folders that a user has permissions to access. If a user doesn't have Read (or equivalent) permissions for a folder, the Windows client hides the folder from the user's view. This new capability provides an additional layer of security by only displaying files and folders a user has access to, and as a result hiding file and folder information a user has no access to. You can now enable ABE on Azure NetApp Files [SMB](azure-netapp-files-create-volumes-smb.md#access-based-enumeration) and [dual-protocol](create-volumes-dual-protocol.md#access-based-enumeration) (with NTFS security style) volumes.
* [Non-browsable shares](azure-netapp-files-create-volumes-smb.md#non-browsable-share) (Preview)
You can now configure Azure NetApp Files [SMB](azure-netapp-files-create-volumes-smb.md#non-browsable-share) or [dual-protocol](create-volumes-dual-protocol.md#non-browsable-share) volumes as non-browsable. This new feature prevents the Windows client from browsing the share, and the share doesn't show up in Windows File Explorer. This new capability provides an additional layer of security by not displaying shares that are configured as non-browsable. Users who have access to the share will maintain access.

* Option to **delete base snapshot** when you [restore a snapshot to a new volume using Azure NetApp Files](snapshots-restore-new-volume.md)

By default, the new volume includes a reference to the snapshot that was used for the restore operation, referred to as the *base snapshot*. If you don't want the new volume to contain this base snapshot, you can select the **Delete base snapshot** option during volume creation.
You no longer need to register the features before using them.
* The `Vaults` API is deprecated starting with Azure NetApp Files REST API version 2022-09-01.

Enabling backup of volumes doesn't require the `Vaults` API. REST API users can use `PUT` and `PATCH` [Volumes API](/rest/api/netapp/volumes) to enable backup for a volume.
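
As a rough illustration of the PATCH approach (not a definitive payload), the sketch below sends a volume PATCH with Python's `requests` library. The property names under `properties.dataProtection.backup` are assumptions for illustration only; confirm them against the [Volumes API](/rest/api/netapp/volumes) reference for the API version you target.

```python
# Sketch only: enable backup on a volume via a PATCH to the Volumes API.
# Assumptions: the bearer token is obtained elsewhere, and the property path
# "properties.dataProtection.backup" with these field names matches the API
# version you target -- verify against the Volumes API reference before use.
import requests

volume_url = (
    "https://management.azure.com/subscriptions/<subscription-id>"
    "/resourceGroups/<rg>/providers/Microsoft.NetApp/netAppAccounts/<account>"
    "/capacityPools/<pool>/volumes/<volume>"
)

body = {
    "properties": {
        "dataProtection": {
            "backup": {
                "backupEnabled": True,                          # assumed field name
                "backupPolicyId": "<backup-policy-resource-id>",  # assumed field name
                "policyEnforced": True,                         # assumed field name
            }
        }
    }
}

response = requests.patch(
    volume_url,
    params={"api-version": "2022-09-01"},
    headers={"Authorization": "Bearer <token>"},
    json=body,
)
response.raise_for_status()
```
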
* [Volume user and group quotas](default-individual-user-group-quotas-introduction.md) (Preview)

Azure NetApp Files volumes provide flexible, large, and scalable storage shares for applications and users. Storage capacity and consumption by users are only limited by the size of the volume. In some scenarios, you may want to limit this storage consumption of users and groups within the volume. With Azure NetApp Files volume user and group quotas, you can now do so. User and/or group quotas enable you to restrict the storage space that a user or group can use within a specific Azure NetApp Files volume. You can choose to set default (same for all users) or individual user quotas on all NFS, SMB, and dual protocol-enabled volumes. On all NFS-enabled volumes, you can set default (same for all users) or individual group quotas.
* [Large volumes](large-volumes-requirements-considerations.md) (Preview)

Regular Azure NetApp Files volumes are limited to 100 TiB in size. Azure NetApp Files [large volumes](azure-netapp-files-understand-storage-hierarchy.md#large-volumes) break this barrier by enabling volumes of 100 TiB to 500 TiB in size. The large volumes capability enables various use cases and workloads that require large volumes with a single directory namespace.

* [Customer-managed keys](configure-customer-managed-keys.md) (Preview)

Azure NetApp Files volumes now support encryption with customer-managed keys and Azure Key Vault to enable an extra layer of security for data at rest.

Data encryption with customer-managed keys for Azure NetApp Files allows you to bring your own key for data encryption at rest. You can use this feature to implement separation of duties for managing keys and data. Additionally, you can centrally manage and organize keys using Azure Key Vault. With customer-managed encryption, you are in full control of, and responsible for, a key's lifecycle, key usage permissions, and auditing operations on keys.

* [Capacity pool enhancement](azure-netapp-files-set-up-capacity-pool.md) (Preview)

Azure NetApp Files now supports a lower limit of 2 TiB for capacity pool sizing with Standard network features.
## December 2022
* [Azure Application Consistent Snapshot tool (AzAcSnap) 7](azacsnap-introduction.md)

Azure Application Consistent Snapshot Tool (AzAcSnap) is a command-line tool that enables customers to simplify data protection for third-party databases in Linux environments.

The AzAcSnap 7 release includes the following fixes and improvements:

* Shortening of snapshot names
* Restore (`-c restore`) improvements
* Test (`-c test`) improvements
* Validation improvements
* Timeout improvements
* Azure Backup integration improvements
* Features moved to GA (generally available): None
* The following features are now in preview:
  * Preliminary support for [Azure NetApp Files backup](backup-introduction.md)
  * [IBM Db2 database](https://www.ibm.com/products/db2) support adding options to configure, test, and snapshot backup IBM Db2 in an application consistent manner

Download the latest release of the installer [here](https://aka.ms/azacsnapinstaller).

* [Cross-zone replication](create-cross-zone-replication.md) (Preview)

With Azure's push towards the use of availability zones (AZs), the need for storage-based data replication is equally increasing. Azure NetApp Files now supports [cross-zone replication](cross-zone-replication-introduction.md). With this new in-region replication capability - by combining it with the new availability zone volume placement feature - you can replicate your Azure NetApp Files volumes asynchronously from one Azure availability zone to another in a fast and cost-effective way.

Cross-zone replication helps you protect your data from unforeseeable zone failures without the need for host-based data replication. Cross-zone replication minimizes the amount of data required to replicate across the zones, limiting the data transfers required, and also shortens the replication time, so you can achieve a smaller Restore Point Objective (RPO). Cross-zone replication doesn't involve any network transfer costs, hence it's highly cost-effective.
The public preview of the feature is currently available in the following regions: Australia East, Brazil South, Canada Central, Central US, East Asia, East US, East US 2, France Central, Germany West Central, Japan East, North Europe, Norway East, Southeast Asia, South Central US, UK South, West Europe, West US 2, and West US 3.
In the future, cross-zone replication is planned for all [AZ-enabled regions](../availability-zones/az-overview.md#azure-regions-with-availability-zones) with [Azure NetApp Files presence](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=netapp&regions=all&rar=true).

* [Azure Virtual WAN](configure-virtual-wan.md) (Preview)

[Azure Virtual WAN](../virtual-wan/virtual-wan-about.md) is now supported on Azure NetApp Files with Standard network features. Azure Virtual WAN is a hub-and-spoke architecture, enabling cloud-hosted network hub connectivity between endpoints and combining networking, security, and routing functionalities in one interface. Use cases for Azure Virtual WAN include remote user VPN connectivity (point-to-site), private connectivity (ExpressRoute), intra-cloud connectivity, and VPN ExpressRoute inter-connectivity.

## November 2022

* [Azure NetApp Files datastores for Azure VMware Solution](../azure-vmware/attach-azure-netapp-files-to-azure-vmware-solution-hosts.md) is now generally available (GA) with expanded regional coverage.
* [Encrypted SMB connections to Domain Controller](create-active-directory-connections.md#encrypted-smb-dc) (Preview)
## October 2022
* [Availability zone volume placement](manage-availability-zone-volume-placement.md) (Preview)

Azure availability zones are highly available, fault tolerant, and more scalable than traditional single or multiple data center infrastructures. Using Azure availability zones lets you design and operate applications and databases that automatically transition between zones without interruption. Azure NetApp Files lets you deploy new volumes in the logical availability zone of your choice to support enterprise, mission-critical HA deployments across multiple AZs. Azure's push towards the use of [availability zones (AZs)](../availability-zones/az-overview.md#availability-zones) has increased, and the use of high availability (HA) deployments with availability zones is now a default and best practice recommendation in Azure's [Well-Architected Framework](/azure/architecture/framework/resiliency/design-best-practices#use-zone-aware-services).
* [Application volume group for SAP HANA](application-volume-group-introduction.md) now generally available (GA)
The application volume group for SAP HANA feature is now generally available. You no longer need to register the feature to use it.

## August 2022

* [Standard network features](configure-network-features.md) are now generally available [in supported regions](azure-netapp-files-network-topologies.md#supported-regions).

Standard network features now include Global virtual network peering.

Regular billing for Standard network features on Azure NetApp Files began November 1, 2022.

## July 2022

* [Azure Application Consistent Snapshot Tool (AzAcSnap) 6](azacsnap-release-notes.md)
[Azure Application Consistent Snapshot Tool](azacsnap-introduction.md) (AzAcSnap) is a command-line tool that enables customers to simplify data protection for third-party databases (SAP HANA) in Linux environments. With AzAcSnap 6, there's a new [release model](azacsnap-release-notes.md). AzAcSnap 6 also introduces the following new capabilities:

Now generally available:
* Backint integration to work with Azure Backup
* [RunBefore and RunAfter](azacsnap-cmd-ref-runbefore-runafter.md) CLI options to execute custom shell scripts and commands before or after taking storage snapshots

In preview:

* Azure Key Vault to store Service Principal content
* Azure Managed Disk as an alternate storage back end
[Azure NetApp Files datastores for Azure VMware Solution](https://azure.microsoft.com/blog/power-your-file-storageintensive-workloads-with-azure-vmware-solution) is now in public preview. This new integration between Azure VMware Solution and Azure NetApp Files enables you to [create datastores via the Azure VMware Solution resource provider with Azure NetApp Files NFS volumes](../azure-vmware/attach-azure-netapp-files-to-azure-vmware-solution-hosts.md) and mount the datastores on your private cloud clusters of choice. Along with the integration of Azure disk pools for Azure VMware Solution, this capability provides more choice to scale storage needs independently of compute resources. For your storage-intensive workloads running on Azure VMware Solution, the integration with Azure NetApp Files helps to easily scale storage capacity beyond the limits of the local instance storage for Azure VMware Solution provided by vSAN and lower your overall total cost of ownership for storage-intensive workloads.
* [Azure Policy built-in definitions for Azure NetApp Files](azure-policy-definitions.md#built-in-policy-definitions)

Azure Policy helps to enforce organizational standards and assess compliance at scale. Through its compliance dashboard, it provides an aggregated view to evaluate the overall state of the environment, with the ability to drill down to per-resource, per-policy granularity. It also helps to bring your resources to compliance through bulk remediation for existing resources and automatic remediation for new resources. Azure NetApp Files already supports Azure Policy via custom policy definitions. Azure NetApp Files now also provides built-in policy definitions to enable organization admins to restrict creation of unsecure NFS volumes or audit existing volumes more easily.
## May 2022
* [LDAP signing](create-active-directory-connections.md#ldap-signing) now generally available (GA)
The LDAP signing feature is now generally available. You no longer need to register the feature before using it.
## April 2022
* Features that are now generally available (GA)

The following features are now GA. You no longer need to register the features before using them.

* [Dynamic change of service level](dynamic-change-volume-service-level.md)
* [Administrators privilege users](create-active-directory-connections.md#administrators-privilege-users)
## March 2022
* Features that are now generally available (GA)

The following features are now GA. You no longer need to register the features before using them.

* [Backup policy users](create-active-directory-connections.md#backup-policy-users)
* [AES encryption for AD authentication](create-active-directory-connections.md#aes-encryption)

## January 2022

* [Azure Application Consistent Snapshot Tool (AzAcSnap) v5.1 Public Preview](azacsnap-release-notes.md)
[Azure Application Consistent Snapshot Tool](azacsnap-introduction.md) (AzAcSnap) is a command-line tool that enables customers to simplify data protection for third-party databases (SAP HANA) in Linux environments (for example, `SUSE` and `RHEL`).

The public preview of v5.1 brings the following new capabilities to AzAcSnap:

* Oracle Database support
* Backint Co-existence
* Azure Managed Disk
* RunBefore and RunAfter capability
* [LDAP search scope](configure-ldap-extended-groups.md#ldap-search-scope)
You might be using the Unix security style with a dual-protocol volume or Lightweight Directory Access Protocol (LDAP) with extended groups features in combination with large LDAP topologies. In this case, you might encounter "access denied" errors on Linux clients when interacting with such Azure NetApp Files volumes. You can now use the **LDAP Search Scope** option to specify the LDAP search scope to avoid "access denied" errors.
* [Active Directory Domain Services (AD DS) LDAP user-mapping with NFS extended groups](configure-ldap-extended-groups.md) now generally available (GA)
## December 2021
* [NFS protocol version conversion](convert-nfsv3-nfsv41.md) (Preview)

In some cases, you might need to transition from one NFS protocol version to another. For example, when you want an existing NFSv3 volume to take advantage of NFSv4.1 features, you might want to convert the protocol version from NFSv3 to NFSv4.1. Likewise, you might want to convert an existing NFSv4.1 volume to NFSv3 for performance or simplicity reasons. Azure NetApp Files now provides an option that enables you to convert an NFS volume between NFSv3 and NFSv4.1. This option doesn't require creating new volumes or performing data copies. The conversion operations preserve the data and update the volume export policies as part of the operation.

* [Single-file snapshot restore](snapshots-restore-file-single.md) (Preview)

Azure NetApp Files provides ways to quickly restore data from snapshots (mainly at the volume level). See [How Azure NetApp Files snapshots work](snapshots-introduction.md). Options for user file self-restore are available via client-side data copy from the `~snapshot` (Windows) or `.snapshot` (Linux) folders. These operations require data (files and directories) to traverse the network twice (upon read and write). As such, the operations aren't time and resource efficient, especially with large data sets. If you don't want to restore the entire snapshot to a new volume, revert a volume, or copy large files across the network, you can use the single-file snapshot restore feature to restore individual files directly on the service from a volume snapshot without requiring data copy via an external client. This approach drastically reduces RTO and network resource usage when restoring large files.
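
To make the client-side self-restore path concrete, here's a minimal sketch of the copy-from-snapshot-folder approach that single-file snapshot restore supersedes. The mount path, snapshot name, and file paths are placeholders.

```python
# Sketch: client-side self-restore of one file from the Linux ".snapshot"
# directory of an NFS-mounted Azure NetApp Files volume.
# Assumptions: the volume is mounted at /mnt/anf-vol and a snapshot named
# "daily-2024-01-18" exists; both are placeholders.
import shutil

snapshot_copy = "/mnt/anf-vol/.snapshot/daily-2024-01-18/reports/q4.xlsx"
live_path = "/mnt/anf-vol/reports/q4.xlsx"

# Data is read from the snapshot over the network and written back over the
# network, which is why this approach is slow for large files compared with
# single-file snapshot restore performed directly on the service.
shutil.copy2(snapshot_copy, live_path)
```
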
* Features that are now generally available (GA)
The following features are now GA. You no longer need to register the features before using them.
* [Application volume group for SAP HANA](application-volume-group-introduction.md) (Preview)
Application volume group (AVG) for SAP HANA enables you to deploy all volumes required to install and operate an SAP HANA database according to best practices, including the use of proximity placement group (PPG) with VMs to achieve automated, low-latency deployments. AVG for SAP HANA has implemented many technical improvements that simplify and standardize the entire process to help you streamline volume deployments for SAP HANA.

## October 2021

* [Azure NetApp Files cross-region replication](cross-region-replication-introduction.md) now generally available (GA)
* [Standard network features](configure-network-features.md) (Preview)

Azure NetApp Files now supports **Standard** network features for volumes, a capability that customers have been asking for since the inception of the service. This capability is a result of innovative hardware and software integration. Standard network features provide an enhanced virtual networking experience through various features, for a seamless and consistent experience and an improved security posture for all your workloads, including Azure NetApp Files.

You can now choose *Standard* or *Basic* network features when creating a new Azure NetApp Files volume. Upon choosing Standard network features, you can take advantage of the following supported features for Azure NetApp Files volumes and delegated subnets:

* Increased IP limits for the virtual networks with Azure NetApp Files volumes, on par with VMs
* Enhanced network security with support for [network security groups](../virtual-network/network-security-groups-overview.md) on the Azure NetApp Files delegated subnet
* Enhanced network control with support for [user-defined routes](../virtual-network/virtual-networks-udr-overview.md#custom-routes) to and from Azure NetApp Files delegated subnets
* Connectivity over Active/Active VPN gateway setup
* [ExpressRoute FastPath](../expressroute/about-fastpath.md) connectivity to Azure NetApp Files

This public preview is currently available starting with **North Central US** and will roll out to other regions. Stay tuned for further information through [Azure Update](https://azure.microsoft.com/updates/) as more regions and features become available.

To learn more, see [Configure network features for an Azure NetApp Files volume](configure-network-features.md).

## September 2021

* [Azure NetApp Files backup](backup-introduction.md) (Preview)
Azure NetApp Files online snapshots now support backup of snapshots. With this new backup capability, you can vault your Azure NetApp Files snapshots to cost-efficient and ZRS-enabled Azure storage in a fast and cost-effective way. This approach further protects your data from accidental deletion.

Azure NetApp Files backup extends ONTAP's built-in snapshot technology. When snapshots are vaulted to Azure storage, only changed blocks relative to previously vaulted snapshots are copied and stored, in an efficient format. Vaulted snapshots are still represented in full. You can restore them to a new volume individually and directly, eliminating the need for an iterative, full-incremental recovery process. This advanced technology minimizes the amount of data required to store to and retrieve from Azure storage, therefore saving data transfer and storage costs. It also shortens the backup vaulting time, so you can achieve a smaller Restore Point Objective (RPO). You can keep a minimum number of snapshots online on the Azure NetApp Files service for the most immediate, near-instantaneous data-recovery needs. In doing so, you can build up a longer history of snapshots at a lower cost for long-term retention in the Azure NetApp Files backup vault.
For more information, see [How Azure NetApp Files snapshots work](snapshots-introduction.md).
You can already enable the SMB Continuous Availability (CA) feature when you [create a new SMB volume](azure-netapp-files-create-volumes-smb.md#continuous-availability). You can now also enable SMB CA on an existing SMB volume. See [Enable Continuous Availability on existing SMB volumes](enable-continuous-availability-existing-SMB.md).
* [Snapshot policy](snapshots-manage-policy.md) now generally available (GA)
The snapshot policy feature is now generally available. You no longer need to register the feature before using it.
* [NFS `Chown Mode` export policy and UNIX export permissions](configure-unix-permissions-change-ownership-mode.md) (Preview)

You can now set the Unix permissions and the change ownership mode (`Chown Mode`) options on Azure NetApp Files NFS volumes or dual-protocol volumes with the Unix security style. You can specify these settings during volume creation or after volume creation.

The change ownership mode (`Chown Mode`) functionality enables you to set the ownership management capabilities of files and directories. You can specify or modify the setting under a volume's export policy. Two options for `Chown Mode` are available (see the sketch after this list):

* *Restricted* (default), where only the root user can change the ownership of files and directories
* *Unrestricted*, where non-root users can change the ownership for files and directories that they own
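
The following minimal sketch illustrates the difference from a client's point of view. Under the default *Restricted* mode, a non-root process attempting `chown` on a file it owns receives a permission error; *Unrestricted* allows it. The path and the target UID/GID are placeholders.

```python
# Sketch: what "Chown Mode" means for a non-root user on an NFS-mounted
# Azure NetApp Files volume. Path and target UID/GID are placeholders.
import os

target_uid, target_gid = 1002, 1002   # another user's UID/GID

try:
    # Run as a non-root user that owns the file.
    os.chown("/mnt/anf-vol/data/report.csv", target_uid, target_gid)
    print("Ownership changed: the export policy uses Chown Mode 'Unrestricted'.")
except PermissionError:
    print("Operation not permitted: the export policy uses Chown Mode 'Restricted' (default).")
```
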
The Azure NetApp Files Unix Permissions functionality enables you to specify change permissions for the mount path.

These new features put access control of certain files and directories in the hands of the data user instead of the service operator.
* [Dual-protocol (NFSv4.1 and SMB) volume](create-volumes-dual-protocol.md) (Preview)
Azure NetApp Files already supports dual-protocol access to NFSv3 and SMB volumes as of [July 2020](#july-2020). You can now create an Azure NetApp Files volume that allows simultaneous dual-protocol (NFSv4.1 and SMB) access with support for LDAP user mapping. This feature enables use cases where you might have a Linux-based workload using NFSv4.1 for its access, and the workload generates and stores data in an Azure NetApp Files volume. At the same time, your staff might need to use Windows-based clients and software to analyze the newly generated data from the same Azure NetApp Files volume. The simultaneous dual-protocol access removes the need to copy the workload-generated data to a separate volume with a different protocol for post-analysis, saving storage cost and operational time. This feature is free of charge (normal Azure NetApp Files storage cost still applies) and is generally available. Learn more from the [simultaneous dual-protocol NFSv4.1/SMB access](create-volumes-dual-protocol.md) documentation.
## June 2021

* [Azure NetApp Files storage service add-ons](storage-service-add-ons.md)

The new Azure NetApp Files **Storage service add-ons** menu option provides an Azure portal "launching pad" for available third-party, ecosystem add-ons to the Azure NetApp Files storage service. With this new portal menu option, you can enter a landing page by selecting an add-on tile to quickly access the add-on.

**NetApp add-ons** is the first category of add-ons introduced under **Storage service add-ons**. It provides access to NetApp Cloud Data Sense. Selecting the **Cloud Data Sense** tile opens a new browser window and directs you to the add-on installation page.

* [Manual QoS capacity pool](azure-netapp-files-understand-storage-hierarchy.md#manual-qos-type) now generally available (GA)

The Manual QoS capacity pool feature is now generally available. You no longer need to register the feature before using it.
* [Shared AD support for multiple accounts to one Active Directory per region per subscription](create-active-directory-connections.md#shared_ad) (Preview)
To date, Azure NetApp Files supports only a single Active Directory (AD) per region, where only a single NetApp account could be configured to access the AD. The new **Shared AD** feature enables all NetApp accounts to share an AD connection created by one of the NetApp accounts that belong to the same subscription and the same region. For example, all NetApp accounts in the same subscription and region can use the common AD configuration to create an SMB volume, a NFSv4.1 Kerberos volume, or a dual-protocol volume. When you use this feature, the AD connection is visible in all NetApp accounts that are under the same subscription and same region.
-## May 2021
+## May 2021
-* Azure NetApp Files Application Consistent Snapshot tool [(AzAcSnap)](azacsnap-introduction.md) is now generally available.
+* Azure NetApp Files Application Consistent Snapshot tool [(AzAcSnap)](azacsnap-introduction.md) is now generally available.
- AzAcSnap is a command-line tool that enables you to simplify data protection for third-party databases (SAP HANA) in Linux environments (for example, `SUSE` and `RHEL`). See [Release Notes for AzAcSnap](azacsnap-release-notes.md) for the latest changes about the tool.
+ AzAcSnap is a command-line tool that enables you to simplify data protection for third-party databases (SAP HANA) in Linux environments (for example, `SUSE` and `RHEL`). See [Release Notes for AzAcSnap](azacsnap-release-notes.md) for the latest changes about the tool.
-* [Support for capacity pool billing tags](manage-billing-tags.md)
+* [Support for capacity pool billing tags](manage-billing-tags.md)
Azure NetApp Files now supports billing tags to help you cross-reference cost with business units or other internal consumers. Billing tags are assigned at the capacity pool level and not volume level, and they appear on the customer invoice.
-* [Enable Active Directory Domain Services (AD DS) LDAP authentication for NFS volumes](configure-ldap-over-tls.md) (Preview)
+* [Enable Active Directory Domain Services (AD DS) LDAP authentication for NFS volumes](configure-ldap-over-tls.md) (Preview)
- By default, LDAP communications between client and server applications aren't encrypted. This setting means that it's possible to use a network-monitoring device or software to view the communications between an LDAP client and server computers. This scenario might be problematic in non-isolated or shared virtual networks when an LDAP simple bind is used, because the credentials (username and password) used to bind the LDAP client to the LDAP server are passed over the network unencrypted. LDAP over TLS (also known as LDAPS) is a protocol that uses TLS to secure communication between LDAP clients and LDAP servers. Azure NetApp Files now supports secure communication with Active Directory Domain Services (AD DS) servers by using LDAP over TLS. Azure NetApp Files can now use LDAP over TLS to set up authenticated sessions with the Active Directory-integrated LDAP servers. You can enable the LDAP over TLS feature for NFS, SMB, and dual-protocol volumes. By default, LDAP over TLS is disabled on Azure NetApp Files.
+ By default, LDAP communications between client and server applications aren't encrypted. This setting means that it's possible to use a network-monitoring device or software to view the communications between an LDAP client and server computers. This scenario might be problematic in non-isolated or shared virtual networks when an LDAP simple bind is used, because the credentials (username and password) used to bind the LDAP client to the LDAP server are passed over the network unencrypted. LDAP over TLS (also known as LDAPS) is a protocol that uses TLS to secure communication between LDAP clients and LDAP servers. Azure NetApp Files now supports secure communication with Active Directory Domain Services (AD DS) servers by using LDAP over TLS. Azure NetApp Files can now use LDAP over TLS to set up authenticated sessions with the Active Directory-integrated LDAP servers. You can enable the LDAP over TLS feature for NFS, SMB, and dual-protocol volumes. By default, LDAP over TLS is disabled on Azure NetApp Files.
-* Support for throughput [metrics](azure-netapp-files-metrics.md)
+* Support for throughput [metrics](azure-netapp-files-metrics.md)
- Azure NetApp Files adds support for the following metrics:
+ Azure NetApp Files adds support for the following metrics:
* Capacity pool throughput metrics * *Pool Allocated to Volume Throughput* * *Pool Consumed Throughput*
Azure NetApp Files is updated regularly. This article provides a summary about t
* *Volume Consumed Throughput* * *Percentage Volume Consumed Throughput*
-* Support for [dynamic change of service level](dynamic-change-volume-service-level.md) of replication volumes
+* Support for [dynamic change of service level](dynamic-change-volume-service-level.md) of replication volumes
Azure NetApp Files now supports dynamically changing the service level of replication source and destination volumes. ## April 2021
-* [Manual volume and capacity pool management](volume-quota-introduction.md) (hard quota)
+* [Manual volume and capacity pool management](volume-quota-introduction.md) (hard quota)
 The behavior of Azure NetApp Files volume and capacity pool provisioning has changed to a manual and controllable mechanism. The storage capacity of a volume is limited to the set size (quota) of the volume. When volume consumption maxes out, neither the volume nor the underlying capacity pool grows automatically. Instead, the volume will receive an "out of space" condition. However, you can [resize the capacity pool or a volume](azure-netapp-files-resize-capacity-pools-or-volumes.md) as needed. You should actively [monitor the capacity of a volume](monitor-volume-capacity.md) and the underlying capacity pool. This behavior change is a result of the following key requests indicated by many users:
- * Previously, VM clients would see the thinly provisioned (100 TiB) capacity of any given volume when using OS space or capacity monitoring tools. This situation could result in inaccurate capacity visibility on the client or application side. This behavior has been corrected.
- * The previous auto-grow behavior of capacity pools gave application owners no control over the provisioned capacity pool space (and the associated cost). This behavior was especially cumbersome in environments where "run-away processes" could rapidly fill up and grow the provisioned capacity. This behavior has been corrected.
+ * Previously, VM clients would see the thinly provisioned (100 TiB) capacity of any given volume when using OS space or capacity monitoring tools. This situation could result in inaccurate capacity visibility on the client or application side. This behavior has been corrected.
+ * The previous auto-grow behavior of capacity pools gave application owners no control over the provisioned capacity pool space (and the associated cost). This behavior was especially cumbersome in environments where "run-away processes" could rapidly fill up and grow the provisioned capacity. This behavior has been corrected.
* Users want to see and maintain a direct correlation between volume size (quota) and performance. The previous behavior allowed for (implicit) over-subscription of a volume (capacity) and capacity pool auto-grow. As such, users couldn't make a direct correlation until the volume quota had been actively set or reset. This behavior has now been corrected. Users have requested direct control over provisioned capacity. Users want to control and balance storage capacity and utilization. They also want to control cost along with the application-side and client-side visibility of available, used, and provisioned capacity and the performance of their application volumes. With this new behavior, all this capability has now been enabled.
-* [SMB Continuous Availability (CA) shares support for FSLogix user profile containers](azure-netapp-files-create-volumes-smb.md#continuous-availability) (Preview)
+* [SMB Continuous Availability (CA) shares support for FSLogix user profile containers](azure-netapp-files-create-volumes-smb.md#continuous-availability) (Preview)
- [FSLogix](/fslogix/overview) is a set of solutions that enhance, enable, and simplify non-persistent Windows computing environments. FSLogix solutions are appropriate for virtual environments in both public and private clouds. You can also use FSLogix solutions to create more portable computing sessions when you use physical devices. FSLogix can provide dynamic access to persistent user profile containers stored on SMB shared networked storage, including Azure NetApp Files. To enhance FSLogix resiliency to events of storage service maintenance, Azure NetApp Files has extended support for SMB Transparent Failover via [SMB Continuous Availability (CA) shares on Azure NetApp Files](azure-netapp-files-create-volumes-smb.md#continuous-availability) for user profile containers. For more information, see Azure NetApp Files [Azure Virtual Desktop solutions](azure-netapp-files-solution-architectures.md#windows-virtual-desktop).
+ [FSLogix](/fslogix/overview) is a set of solutions that enhance, enable, and simplify non-persistent Windows computing environments. FSLogix solutions are appropriate for virtual environments in both public and private clouds. You can also use FSLogix solutions to create more portable computing sessions when you use physical devices. FSLogix can provide dynamic access to persistent user profile containers stored on SMB shared networked storage, including Azure NetApp Files. To enhance FSLogix resiliency to events of storage service maintenance, Azure NetApp Files has extended support for SMB Transparent Failover via [SMB Continuous Availability (CA) shares on Azure NetApp Files](azure-netapp-files-create-volumes-smb.md#continuous-availability) for user profile containers. For more information, see Azure NetApp Files [Azure Virtual Desktop solutions](azure-netapp-files-solution-architectures.md#windows-virtual-desktop).
-* [SMB3 Protocol Encryption](azure-netapp-files-create-volumes-smb.md#smb3-encryption) (Preview)
+* [SMB3 Protocol Encryption](azure-netapp-files-create-volumes-smb.md#smb3-encryption) (Preview)
You can now enable SMB3 Protocol Encryption on Azure NetApp Files SMB and dual-protocol volumes. This feature enables encryption for in-flight SMB3 data, using the [AES-CCM algorithm on SMB 3.0, and the AES-GCM algorithm on SMB 3.1.1](/windows-server/storage/file-server/file-server-smb-overview#features-added-in-smb-311-with-windows-server-2016-and-windows-10-version-1607) connections. SMB clients not using SMB3 encryption can't access this volume. Data at rest is encrypted regardless of this setting. SMB encryption further enhances security. However, it might affect the client (CPU overhead for encrypting and decrypting messages). It might also affect storage resource utilization (reductions in throughput). You should test the encryption performance impact against your applications before deploying workloads into production.
-* [Active Directory Domain Services (AD DS) LDAP user-mapping with NFS extended groups](configure-ldap-extended-groups.md) (Preview)
+* [Active Directory Domain Services (AD DS) LDAP user-mapping with NFS extended groups](configure-ldap-extended-groups.md) (Preview)
- By default, Azure NetApp Files supports up to 16 group IDs when handling NFS user credentials, as defined in [RFC 5531](https://tools.ietf.org/html/rfc5531). With this new capability, you can now increase the maximum up to 1,024 if you have users who are members of more than the default number of groups. To support this capability, NFS volumes can now also be added to AD DS LDAP, which enables Active Directory LDAP users with extended groups entries (with up to 1024 groups) to access the volume.
+ By default, Azure NetApp Files supports up to 16 group IDs when handling NFS user credentials, as defined in [RFC 5531](https://tools.ietf.org/html/rfc5531). With this new capability, you can now increase the maximum up to 1,024 if you have users who are members of more than the default number of groups. To support this capability, NFS volumes can now also be added to AD DS LDAP, which enables Active Directory LDAP users with extended groups entries (with up to 1024 groups) to access the volume.
## March 2021
-
-* [SMB Continuous Availability (CA) shares](azure-netapp-files-create-volumes-smb.md#add-an-smb-volume) (Preview)
+
+* [SMB Continuous Availability (CA) shares](azure-netapp-files-create-volumes-smb.md#add-an-smb-volume) (Preview)
SMB Transparent Failover enables maintenance operations on the Azure NetApp Files service without interrupting connectivity to server applications storing and accessing data on SMB volumes. To support SMB Transparent Failover, Azure NetApp Files now supports the SMB Continuous Availability shares option for use with SQL Server applications over SMB running on Azure VMs. This feature is currently supported on Windows SQL Server. Azure NetApp Files doesn't currently support Linux SQL Server. This feature provides significant performance improvements for SQL Server. It also provides scale and cost benefits for [Single Instance, Always-On Failover Cluster Instance and Always-On Availability Group deployments](azure-netapp-files-solution-architectures.md#sql-server). See [Benefits of using Azure NetApp Files for SQL Server deployment](solutions-benefits-azure-netapp-files-sql-server.md).
Azure NetApp Files is updated regularly. This article provides a summary about t
## December 2020
-* [Azure Application Consistent Snapshot Tool](azacsnap-introduction.md) (Preview)
+* [Azure Application Consistent Snapshot Tool](azacsnap-introduction.md) (Preview)
- Azure Application Consistent Snapshot Tool (AzAcSnap) is a command-line tool that enables you to simplify data protection for third-party databases (SAP HANA) in Linux environments (for example, `SUSE` and `RHEL`).
+ Azure Application Consistent Snapshot Tool (AzAcSnap) is a command-line tool that enables you to simplify data protection for third-party databases (SAP HANA) in Linux environments (for example, `SUSE` and `RHEL`).
AzAcSnap uses the volume snapshot and replication functionalities in Azure NetApp Files and Azure Large Instance. It provides the following benefits:
- * Application-consistent data protection
- * Database catalog management
- * *Ad hoc* volume protection
- * Cloning of storage volumes
- * Support for disaster recovery
+ * Application-consistent data protection
+ * Database catalog management
+ * *Ad hoc* volume protection
+ * Cloning of storage volumes
+ * Support for disaster recovery
## November 2020
Azure NetApp Files is updated regularly. This article provides a summary about t
Azure NetApp Files now supports cross-region replication. With this new disaster recovery capability, you can replicate your Azure NetApp Files volumes from one Azure region to another in a fast and cost-effective way. It helps you protect your data from unforeseeable regional failures. Azure NetApp Files cross-region replication uses NetApp SnapMirror® technology; only changed blocks are sent over the network in a compressed, efficient format. This proprietary technology minimizes the amount of data required to replicate across the regions, therefore saving data transfer costs. It also shortens the replication time, so you can achieve a smaller Restore Point Objective (RPO).
-* [Manual QoS Capacity Pool](azure-netapp-files-understand-storage-hierarchy.md#manual-qos-type) (Preview)
+* [Manual QoS Capacity Pool](azure-netapp-files-understand-storage-hierarchy.md#manual-qos-type) (Preview)
 In a manual QoS capacity pool, you can assign the capacity and throughput for a volume independently. The total throughput of all volumes created with a manual QoS capacity pool is limited by the total throughput of the pool. It's determined by the combination of the pool size and the service-level throughput. Alternatively, a capacity pool's [QoS type](azure-netapp-files-understand-storage-hierarchy.md#qos_types) can be auto (automatic), which is the default. In an auto QoS capacity pool, throughput is assigned automatically to the volumes in the pool, proportional to the size quota assigned to the volumes.
-* [LDAP signing](create-active-directory-connections.md#create-an-active-directory-connection) (Preview)
+* [LDAP signing](create-active-directory-connections.md#create-an-active-directory-connection) (Preview)
Azure NetApp Files now supports LDAP signing for secure LDAP lookups between the Azure NetApp Files service and the user-specified Active Directory Domain Services domain controllers. This feature is currently in preview. * [AES encryption for AD authentication](create-active-directory-connections.md#create-an-active-directory-connection) (Preview)
- Azure NetApp Files now supports AES encryption on LDAP connection to DC to enable AES encryption for an SMB volume. This feature is currently in preview.
+ Azure NetApp Files now supports AES encryption on LDAP connection to DC to enable AES encryption for an SMB volume. This feature is currently in preview.
-* New [metrics](azure-netapp-files-metrics.md):
+* New [metrics](azure-netapp-files-metrics.md):
- * New volume metrics:
+ * New volume metrics:
* *Volume allocated size*: The provisioned size of a volume
- * New pool metrics:
- * *Pool Allocated size*: The provisioned size of the pool
+ * New pool metrics:
+ * *Pool Allocated size*: The provisioned size of the pool
* *Total snapshot size for the pool*: The sum of snapshot size from all volumes in the pool ## July 2020
Azure NetApp Files is updated regularly. This article provides a summary about t
Azure NetApp Files now supports NFS client encryption in Kerberos modes (krb5, krb5i, and krb5p) with AES-256 encryption, providing you with more data security. This feature is free of charge (normal [Azure NetApp Files storage cost](https://azure.microsoft.com/pricing/details/netapp/) still applies) and is generally available. Learn more from the [NFS v4.1 Kerberos encryption documentation](configure-kerberos-encryption.MD).
-* [Dynamic volume service level change](dynamic-change-volume-service-level.MD) (Preview)
+* [Dynamic volume service level change](dynamic-change-volume-service-level.MD) (Preview)
Cloud promises flexibility in IT spending. You can now change the service level of an existing Azure NetApp Files volume by moving the volume to another capacity pool that uses the service level you want for the volume. This in-place service-level change for the volume doesn't require that you migrate data. It also doesn't affect the data plane access to the volume. You can change an existing volume to use a higher service level for better performance, or to use a lower service level for cost optimization. This feature is free of charge (normal [Azure NetApp Files storage cost](https://azure.microsoft.com/pricing/details/netapp/) still applies). It's currently in preview. You can register for the feature preview by following the [dynamic volume service level change documentation](dynamic-change-volume-service-level.md).
-* [Volume snapshot policy](snapshots-manage-policy.md) (Preview)
+* [Volume snapshot policy](snapshots-manage-policy.md) (Preview)
Azure NetApp Files allows you to create point-in-time snapshots of your volumes. You can now create a snapshot policy to have Azure NetApp Files automatically create volume snapshots at a frequency of your choice. You can schedule the snapshots to be taken in hourly, daily, weekly, or monthly cycles. You can also specify the maximum number of snapshots to keep as part of the snapshot policy. This feature is free of charge (normal [Azure NetApp Files storage cost](https://azure.microsoft.com/pricing/details/netapp/) still applies) and is currently in preview. You can register for the feature preview by following the [volume snapshot policy documentation](snapshots-manage-policy.md). * [NFS root access export policy](azure-netapp-files-configure-export-policy.md)
- Azure NetApp Files now allows you to specify whether the root account can access the volume.
+ Azure NetApp Files now allows you to specify whether the root account can access the volume.
* [Hide snapshot path](snapshots-edit-hide-path.md)
Azure NetApp Files is updated regularly. This article provides a summary about t
## Next steps * [What is Azure NetApp Files](azure-netapp-files-introduction.md)
-* [Understand the storage hierarchy of Azure NetApp Files](azure-netapp-files-understand-storage-hierarchy.md)
+* [Understand the storage hierarchy of Azure NetApp Files](azure-netapp-files-understand-storage-hierarchy.md)
azure-resource-manager Bicep Config Modules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-config-modules.md
Title: Module setting for Bicep config
description: Describes how to customize configuration values for modules in Bicep deployments. Previously updated : 01/18/2023 Last updated : 01/17/2024 # Add module settings in the Bicep config file
The available profiles are:
You can customize these profiles, or add new profiles for your on-premises environments.
-The available credential types are:
+Bicep uses the [Azure.Identity SDK](/dotnet/api/azure.identity) to do authentication. The available credential types are:
-- AzureCLI-- AzurePowerShell-- Environment-- ManagedIdentity-- VisualStudio-- VisualStudioCode
+- [AzureCLI](/dotnet/api/azure.identity.azureclicredential)
+- [AzurePowerShell](/dotnet/api/azure.identity.azurepowershellcredential)
+- [Environment](/dotnet/api/azure.identity.environmentcredential)
+- [ManagedIdentity](/dotnet/api/azure.identity.managedidentitycredential)
+- [VisualStudio](/dotnet/api/azure.identity.visualstudiocredential)
+- [VisualStudioCode](/dotnet/api/azure.identity.visualstudiocodecredential)
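For illustration, a minimal sketch of how credential precedence might be set in a `bicepconfig.json` file follows; the `cloud.credentialPrecedence` property and the two credential types chosen here are assumptions for this example, so adjust them to your environment.

```bash
# Hypothetical example: write a bicepconfig.json that tries Azure CLI
# credentials first and Azure PowerShell credentials second when Bicep
# restores modules from a registry.
cat > bicepconfig.json <<'EOF'
{
  "cloud": {
    "credentialPrecedence": [
      "AzureCLI",
      "AzurePowerShell"
    ]
  }
}
EOF
```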
[!INCLUDE [vscode authentication](../../../includes/resource-manager-vscode-authentication.md)]
azure-resource-manager Control Plane Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/control-plane-metrics.md
Title: Control plane metrics in Azure Monitor description: Azure Resource Manager metrics in Azure Monitor | Traffic and latency observability for subscription-level control plane requests -+ Last updated 04/26/2023
azure-resource-manager Move Support Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/move-support-resources.md
Before starting your move operation, review the [checklist](./move-resource-grou
> [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
-> | automationaccounts | **Yes** | **Yes** | **Yes** [PowerShell script](../../automation/automation-disaster-recovery.md) |
+> | automationaccounts | **Yes** | **Yes** | [PowerShell script](../../automation/automation-disaster-recovery.md) |
> | automationaccounts / configurations | **Yes** | **Yes** | No | > | automationaccounts / runbooks | **Yes** | **Yes** | No |
Before starting your move operation, review the [checklist](./move-resource-grou
> | resources | No | No | No | > | subscriptions | No | No | No | > | tags | No | No | No |
-> | templatespecs | No | No | **Yes**<br/><br/>[Move Microsoft.Resources resources to new region](microsoft-resources-move-regions.md) |
+> | templatespecs | No | No | [Move Microsoft.Resources resources to new region](microsoft-resources-move-regions.md) |
> | templatespecs / versions | No | No | No | > | tenants | No | No | No |
azure-sql-edge Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/configure.md
Last updated 09/14/2023 -+ # Configure Azure SQL Edge
-> [!IMPORTANT]
+> [!IMPORTANT]
> Azure SQL Edge no longer supports the ARM64 platform. Azure SQL Edge supports configuration through one of the following two options:
Azure SQL Edge supports configuration through one of the following two options:
- Environment variables - An mssql.conf file placed in the /var/opt/mssql folder
-> [!NOTE]
+> [!NOTE]
> Setting environment variables overrides the settings specified in the mssql.conf file. ## Configure by using environment variables
The following SQL Server on Linux environment variable isn't supported for Azure
| | | | **MSSQL_ENABLE_HADR** | Enable availability group. For example, `1` is enabled, and `0` is disabled. |
-> [!IMPORTANT]
+> [!IMPORTANT]
> The **MSSQL_PID** environment variable for SQL Edge only accepts **Premium** and **Developer** as the valid values. Azure SQL Edge doesn't support initialization using a product key. ### Specify the environment variables
Add values in **Container Create Options**.
:::image type="content" source="media/configure/set-environment-variables-using-create-options.png" alt-text="Screenshot of set by using container create options.":::
-> [!NOTE]
+> [!NOTE]
> In the disconnected deployment mode, environment variables can be specified using the `-e` or `--env` or the `--env-file` option of the `docker run` command. ## Configure by using an `mssql.conf` file
Earlier CTPs of Azure SQL Edge were configured to run as the root users. The fol
Your Azure SQL Edge configuration changes and database files are persisted in the container even if you restart the container with `docker stop` and `docker start`. However, if you remove the container with `docker rm`, everything in the container is deleted, including Azure SQL Edge and your databases. The following section explains how to use **data volumes** to persist your database files even if the associated containers are deleted.
-> [!IMPORTANT]
+> [!IMPORTANT]
> For Azure SQL Edge, it's critical that you understand data persistence in Docker. In addition to the discussion in this section, see Docker's documentation on [how to manage data in Docker containers](https://docs.docker.com/engine/tutorials/dockervolumes/). ### Mount a host directory as data volume
docker run -e "ACCEPT_EULA=Y" -e "MSSQL_SA_PASSWORD=<YourStrong!Passw0rd>" -p 14
This technique also enables you to share and view the files on the host outside of Docker.
-> [!IMPORTANT]
+> [!IMPORTANT]
> Host volume mapping for **Docker on Windows** doesn't currently support mapping the complete `/var/opt/mssql` directory. However, you can map a subdirectory, such as `/var/opt/mssql/data` to your host machine.
-> [!IMPORTANT]
+> [!IMPORTANT]
> Host volume mapping for **Docker on macOS** with the Azure SQL Edge image isn't supported at this time. Use data volume containers instead. This restriction is specific to the `/var/opt/mssql` directory. Reading from a mounted directory works fine. For example, you can mount a host directory using `-v` on macOS and restore a backup from a `.bak` file that resides on the host. ### Use data volume containers
docker run -e 'ACCEPT_EULA=Y' -e 'MSSQL_SA_PASSWORD=<YourStrong!Passw0rd>' -p 14
docker run -e "ACCEPT_EULA=Y" -e "MSSQL_SA_PASSWORD=<YourStrong!Passw0rd>" -p 1433:1433 -v sqlvolume:/var/opt/mssql -d mcr.microsoft.com/azure-sql-edge ```
-> [!NOTE]
+> [!NOTE]
> This technique for implicitly creating a data volume in the run command doesn't work with older versions of Docker. In that case, use the explicit steps outlined in the Docker documentation, [Creating and mounting a data volume container](https://docs.docker.com/engine/tutorials/dockervolumes/#creating-and-mounting-a-data-volume-container). Even if you stop and remove this container, the data volume persists. You can view it with the `docker volume ls` command.
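For reference, listing and inspecting the named volume might look like the following; the `sqlvolume` name is taken from the preceding example.

```bash
# List named volumes; sqlvolume should appear once the container has been created
docker volume ls
# Show details for the volume, including where its data lives on the host
docker volume inspect sqlvolume
```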
If you then create another container with the same volume name, the new containe
To remove a data volume container, use the `docker volume rm` command.
-> [!WARNING]
+> [!WARNING]
> If you delete the data volume container, any Azure SQL Edge data in the container is *permanently* deleted. ## Next steps
azure-vmware Attach Azure Netapp Files To Azure Vmware Solution Hosts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/attach-azure-netapp-files-to-azure-vmware-solution-hosts.md
Before you begin the prerequisites, review the [Performance best practices](#per
1. [Deploy Azure VMware Solution](./deploy-azure-vmware-solution.md) private cloud and a dedicated virtual network connected via ExpressRoute gateway. The virtual network gateway should be configured with the Ultra performance or ErGw3Az SKU and have FastPath enabled. For more information, see [Configure networking for your VMware private cloud](tutorial-configure-networking.md) and [Network planning checklist](tutorial-network-checklist.md). 1. Create an [NFSv3 volume for Azure NetApp Files](../azure-netapp-files/azure-netapp-files-create-volumes.md) in the same virtual network created in the previous step. 1. Verify connectivity from the private cloud to Azure NetApp Files volume by pinging the attached target IP.
- 2. Verify the subscription is registered to the `ANFAvsDataStore` feature in the `Microsoft.NetApp` namespace. If the subscription isn't registered, register it now.
-
- `az feature register --name "ANFAvsDataStore" --namespace "Microsoft.NetApp"`
-
- `az feature show --name "ANFAvsDataStore" --namespace "Microsoft.NetApp" --query properties.state`
1. Based on your performance requirements, select the correct service level needed for the Azure NetApp Files capacity pool. Select option **Azure VMware Solution Datastore** listed under the **Protocol** section. 1. Create a volume with **Standard** [network features](../azure-netapp-files/configure-network-features.md) if available for ExpressRoute FastPath connectivity. 1. Under the **Protocol** section, select **Azure VMware Solution Datastore** to indicate the volume is created to use as a datastore for Azure VMware Solution private cloud. 1. If you're using [export policies](../azure-netapp-files/azure-netapp-files-configure-export-policy.md) to control access to Azure NetApp Files volumes, enable the Azure VMware private cloud IP range, not individual host IPs. Faulty hosts in a private cloud could get replaced. If the IP isn't enabled, connectivity to datastore is impacted.
->[!NOTE]
->Azure NetApp Files datastores for Azure VMware Solution are generally available. To use it, you must register Azure NetApp Files datastores for Azure VMware Solution.
- ## Supported regions Azure NetApp Files datastores for Azure VMware Solution are currently supported in the following regions:
For performance benchmarks that Azure NetApp Files datastores deliver for VMs on
To attach an Azure NetApp Files volume to your private cloud using Portal, follow these steps: 1. Sign in to the Azure portal.
-1. Select **Subscriptions** to see a list of subscriptions.
-1. From the list, select the subscription you want to use.
-1. Under Settings, select **Resource providers**.
-1. Search for **Microsoft.AVS** and select it.
-1. Select **Register**.
-1. Under **Settings**, select **Preview features**.
- 1. Verify you're registered for both the `CloudSanExperience` and `AnfDatstoreExperience` features.
1. Navigate to your Azure VMware Solution. Under **Manage**, select **Storage**. 1. Select **Connect Azure NetApp Files volume**.
Under **Manage**, select **Storage**.
To attach an Azure NetApp Files volume to your private cloud using Azure CLI, follow these steps:
-1. Verify the subscription is registered to `CloudSanExperience` feature in the **Microsoft.AVS** namespace. If it's not, register it.
-
- `az feature show --name "CloudSanExperience" --namespace "Microsoft.AVS"`
-
- `az feature register --name "CloudSanExperience" --namespace "Microsoft.AVS"`
-1. The registration should take approximately 15 minutes to complete. You can also check the status.
-
- `az feature show --name "CloudSanExperience" --namespace "Microsoft.AVS" --query properties.state`
-1. If the registration is stuck in an intermediate state for longer than 15 minutes, unregister, then re-register the flag.
-
- `az feature unregister --name "CloudSanExperience" --namespace "Microsoft.AVS"`
-
- `az feature register --name "CloudSanExperience" --namespace "Microsoft.AVS"`
-1. Verify the subscription is registered to `AnfDatastoreExperience` feature in the **Microsoft.AVS** namespace. If it's not, register it.
-
- `az feature register --name " AnfDatastoreExperience" --namespace "Microsoft.AVS"`
-
- `az feature show --name "AnfDatastoreExperience" --namespace "Microsoft.AVS" --query properties.state`
- 1. Verify the VMware extension is installed. If the extension is already installed, verify you're using the latest version of the Azure CLI extension. If an older version is installed, update the extension. `az extension show --name vmware`
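If the extension is missing or out of date, the following commands add or update it. This is a minimal sketch and assumes the Azure CLI is already signed in.

```bash
# Install the vmware extension if it isn't present yet
az extension add --name vmware
# Update an already installed vmware extension to the latest version
az extension update --name vmware
```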
To attach an Azure NetApp Files volume to your private cloud using Azure CLI, fo
1. Create a datastore using an existing ANF volume in an Azure VMware Solution private cloud cluster. `az vmware datastore netapp-volume create --name MyDatastore1 --resource-group MyResourceGroup --cluster Cluster-1 --private-cloud MyPrivateCloud --volume-id /subscriptions/<Subscription Id>/resourceGroups/<Resourcegroup name>/providers/Microsoft.NetApp/netAppAccounts/<Account name>/capacityPools/<pool name>/volumes/<Volume name>`
-1. If needed, you can display the help on the datastores.
+1. If needed, display the help on the datastores.
`az vmware datastore -h` 1. Show the details of an ANF-based datastore in a private cloud cluster.
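A sketch of the corresponding `show` command follows; the datastore, resource group, cluster, and private cloud names are assumed to match the values used in the create step above.

```bash
# Display details of the ANF-backed datastore created earlier
az vmware datastore show \
  --name MyDatastore1 \
  --resource-group MyResourceGroup \
  --cluster Cluster-1 \
  --private-cloud MyPrivateCloud
```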
backup Backup Azure Restore Files From Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-restore-files-from-vm.md
Title: Recover files and folders from Azure VM backup
description: In this article, learn how to recover files and folders from an Azure virtual machine recovery point. Last updated 06/30/2023-+
To restore files or folders from the recovery point, go to the virtual machine a
## Step 2: Ensure the machine meets the requirements before executing the script
-After the script is successfully downloaded, make sure you have the right machine to execute this script. The VM where you plan to execute the script should not have any of the following unsupported configurations. **If it does, then choose an alternate machine that meets the requirements**.
+After the script is successfully downloaded, make sure you have the right machine to execute this script. The VM where you plan to execute the script should not have any of the following unsupported configurations. **If it does, then choose an alternate machine that meets the requirements**.
### Dynamic disks
You can't run the downloaded executable on the same backed-up VM if the backed-u
### Virtual machine backups having large disks
-If the backed-up machine has a large number of disks (>16) or large disks (>4 TB each), it's not recommended to execute the script on the same machine for restore, because it will have a significant impact on the VM. Instead, it's recommended to have a separate VM only for file recovery (for example, Azure D2v3 VMs) and then shut it down when not required.
+If the backed-up machine has a large number of disks (>16) or large disks (>4 TB each), it's not recommended to execute the script on the same machine for restore, because it will have a significant impact on the VM. Instead, it's recommended to have a separate VM only for file recovery (for example, Azure D2v3 VMs) and then shut it down when not required.
See requirements to restore files from backed-up VMs with large disk:<br> [Windows OS](#for-backed-up-vms-with-large-disks-windows)<br> [Linux OS](#for-backed-up-vms-with-large-disks-linux)
-After you choose the correct machine to run the ILR script, ensure that it meets the [OS requirements](#step-3-os-requirements-to-successfully-run-the-script) and [access requirements](#step-4-access-requirements-to-successfully-run-the-script).
+After you choose the correct machine to run the ILR script, ensure that it meets the [OS requirements](#step-3-os-requirements-to-successfully-run-the-script) and [access requirements](#step-4-access-requirements-to-successfully-run-the-script).
## Step 3: OS requirements to successfully run the script
Also, ensure that you have the [right machine to execute the ILR script](#step-2
> [!NOTE] > > The script is generated in English only and isn't localized. As a result, the system locale might need to be English for the script to execute properly
->
+>
### For Windows
After you meet all the requirements listed in [Step 2](#step-2-ensure-the-machin
:::image type="content" source="./media/backup-azure-restore-files-from-vm/executable-output.png" alt-text="Screenshot shows the executable output for file restore from VM." lightbox="./media/backup-azure-restore-files-from-vm/executable-output.png":::
-When you run the executable, the operating system mounts the new volumes and assigns drive letters. You can use Windows Explorer or File Explorer to browse those drives. The drive letters assigned to the volumes may not be the same letters as on the original virtual machine. However, the volume name is preserved. For example, if the volume on the original virtual machine was "Data Disk (E:`\`)", that volume can be attached on the local computer as "Data Disk ('Any letter':`\`)". Browse through all volumes mentioned in the script output until you find your files or folders.
+When you run the executable, the operating system mounts the new volumes and assigns drive letters. You can use Windows Explorer or File Explorer to browse those drives. The drive letters assigned to the volumes may not be the same letters as on the original virtual machine. However, the volume name is preserved. For example, if the volume on the original virtual machine was "Data Disk (E:`\`)", that volume can be attached on the local computer as "Data Disk ('Any letter':`\`)". Browse through all volumes mentioned in the script output until you find your files or folders.
![Recovery volumes attached](./media/backup-azure-restore-files-from-vm/volumes-attached.png) #### For backed-up VMs with large disks (Windows) If the file recovery process hangs after you run the file-restore script (for example, if the disks are never mounted, or they're mounted but the volumes don't appear), perform the following steps:
-
+ 1. Ensure that the OS is Windows Server 2012 or higher. 2. Ensure the registry keys on the restore server are set as suggested below, and then reboot the server. The number beside the GUID can range from 0001 to 0005. In the following example, it's 0004. Navigate through the registry key path until you reach the parameters section.
Make sure that the Volume groups corresponding to script's volumes are active. T
```bash sudo vgdisplay -a
-```
+```
Otherwise, activate the volume group by using the following command.
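A typical activation uses the LVM `vgchange` command; the volume group name below is a placeholder for the name reported by `vgdisplay`.

```bash
# Activate the inactive volume group so its logical volumes can be mounted
sudo vgchange -a y <volume-group-name>
```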
backup Backup Azure Troubleshoot Vm Backup Fails Snapshot Timeout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-troubleshoot-vm-backup-fails-snapshot-timeout.md
description: Symptoms, causes, and resolutions of Azure Backup failures related
Last updated 05/05/2022 -+
Azure Backup uses the VM Snapshot Extension to take an application consistent ba
- **Ensure VMSnapshot extension isn't in a failed state**: Follow the steps listed in this [section](backup-azure-troubleshoot-vm-backup-fails-snapshot-timeout.md#usererrorvmprovisioningstatefailedthe-vm-is-in-failed-provisioning-state) to verify and ensure the Azure Backup extension is healthy. - **Check if antivirus is blocking the extension**: Certain antivirus software can prevent extensions from executing.
-
+ At the time of the backup failure, verify whether there are log entries in ***Event Viewer Application logs*** with ***faulting application name: IaaSBcdrExtension.exe***. If you see entries, it could be that the antivirus software configured in the VM is restricting the execution of the backup extension. Test by excluding the following directories in the antivirus configuration and retry the backup operation. - `C:\Packages\Plugins\Microsoft.Azure.RecoveryServices.VMSnapshot` - `C:\WindowsAzure\Logs\Plugins\Microsoft.Azure.RecoveryServices.VMSnapshot`
The Azure VM agent might be stopped, outdated, in an inconsistent state, or not
**Error code**: GuestAgentSnapshotTaskStatusError<br> **Error message**: Could not communicate with the VM agent for snapshot status <br>
-After you register and schedule a VM for the Azure Backup service, Backup starts the job by communicating with the VM backup extension to take a point-in-time snapshot. Any of the following conditions might prevent the snapshot from being triggered. If the snapshot isn't triggered, a backup failure might occur. Complete the following troubleshooting steps in the order listed, and then retry your operation:
+After you register and schedule a VM for the Azure Backup service, Backup starts the job by communicating with the VM backup extension to take a point-in-time snapshot. Any of the following conditions might prevent the snapshot from being triggered. If the snapshot isn't triggered, a backup failure might occur. Complete the following troubleshooting steps in the order listed, and then retry your operation:
-**Cause 1: [The agent is installed in the VM, but it's unresponsive (for Windows VMs)](#the-agent-installed-in-the-vm-but-unresponsive-for-windows-vms)**
+**Cause 1: [The agent is installed in the VM, but it's unresponsive (for Windows VMs)](#the-agent-installed-in-the-vm-but-unresponsive-for-windows-vms)**
**Cause 2: [The agent installed in the VM is out of date (for Linux VMs)](#the-agent-installed-in-the-vm-is-out-of-date-for-linux-vms)**
For a backup operation to succeed on encrypted VMs, it must have permissions to
After you register and schedule a VM for the Azure Backup service, Backup starts the job by communicating with the VM backup extension to take a point-in-time snapshot. Any of the following conditions might prevent the snapshot from being triggered. If the snapshot isn't triggered, a backup failure might occur. Complete the following troubleshooting step, and then retry your operation:
-**[The snapshot status can't be retrieved, or a snapshot can't be taken](#the-snapshot-status-cannot-be-retrieved-or-a-snapshot-cannot-be-taken)**
+**[The snapshot status can't be retrieved, or a snapshot can't be taken](#the-snapshot-status-cannot-be-retrieved-or-a-snapshot-cannot-be-taken)**
## <a name="ExtensionOperationFailed-vmsnapshot-extension-operation-failed"></a>ExtensionOperationFailedForManagedDisks - VMSnapshot extension operation failed **Error code**: ExtensionOperationFailedForManagedDisks <br> **Error message**: VMSnapshot extension operation failed<br>
-After you register and schedule a VM for the Azure Backup service, Backup starts the job by communicating with the VM backup extension to take a point-in-time snapshot. Any of the following conditions might prevent the snapshot from being triggered. If the snapshot isn't triggered, a backup failure might occur. Complete the following troubleshooting steps in the order listed, and then retry your operation:
-**Cause 1: [The snapshot status can't be retrieved, or a snapshot can't be taken](#the-snapshot-status-cannot-be-retrieved-or-a-snapshot-cannot-be-taken)**
-**Cause 2: [The agent is installed in the VM, but it's unresponsive (for Windows VMs)](#the-agent-installed-in-the-vm-but-unresponsive-for-windows-vms)**
+After you register and schedule a VM for the Azure Backup service, Backup starts the job by communicating with the VM backup extension to take a point-in-time snapshot. Any of the following conditions might prevent the snapshot from being triggered. If the snapshot isn't triggered, a backup failure might occur. Complete the following troubleshooting steps in the order listed, and then retry your operation:
+**Cause 1: [The snapshot status can't be retrieved, or a snapshot can't be taken](#the-snapshot-status-cannot-be-retrieved-or-a-snapshot-cannot-be-taken)**
+**Cause 2: [The agent is installed in the VM, but it's unresponsive (for Windows VMs)](#the-agent-installed-in-the-vm-but-unresponsive-for-windows-vms)**
**Cause 3: [The agent installed in the VM is out of date (for Linux VMs)](#the-agent-installed-in-the-vm-is-out-of-date-for-linux-vms)** ## BackUpOperationFailed / BackUpOperationFailedV2 - Backup fails, with an internal error
After you register and schedule a VM for the Azure Backup service, Backup starts
**Error code**: BackUpOperationFailed / BackUpOperationFailedV2 <br> **Error message**: Backup failed with an internal error - Please retry the operation in a few minutes <br>
-After you register and schedule a VM for the Azure Backup service, Backup initiates the job by communicating with the VM backup extension to take a point-in-time snapshot. Any of the following conditions might prevent the snapshot from being triggered. If the snapshot isn't triggered, a backup failure might occur. Complete the following troubleshooting steps in the order listed, and then retry your operation:
+After you register and schedule a VM for the Azure Backup service, Backup initiates the job by communicating with the VM backup extension to take a point-in-time snapshot. Any of the following conditions might prevent the snapshot from being triggered. If the snapshot isn't triggered, a backup failure might occur. Complete the following troubleshooting steps in the order listed, and then retry your operation:
-- **Cause 1: [The agent installed in the VM, but it's unresponsive (for Windows VMs)](#the-agent-installed-in-the-vm-but-unresponsive-for-windows-vms)** -- **Cause 2: [The agent installed in the VM is out of date (for Linux VMs)](#the-agent-installed-in-the-vm-is-out-of-date-for-linux-vms)** -- **Cause 3: [The snapshot status can't be retrieved, or a snapshot can't be taken](#the-snapshot-status-cannot-be-retrieved-or-a-snapshot-cannot-be-taken)**
+- **Cause 1: [The agent is installed in the VM, but it's unresponsive (for Windows VMs)](#the-agent-installed-in-the-vm-but-unresponsive-for-windows-vms)**
+- **Cause 2: [The agent installed in the VM is out of date (for Linux VMs)](#the-agent-installed-in-the-vm-is-out-of-date-for-linux-vms)**
+- **Cause 3: [The snapshot status can't be retrieved, or a snapshot can't be taken](#the-snapshot-status-cannot-be-retrieved-or-a-snapshot-cannot-be-taken)**
- **Cause 4: [Backup service doesn't have permission to delete the old restore points because of a resource group lock](#remove_lock_from_the_recovery_point_resource_group)** - **Cause 5**: There's an extension version/bits mismatch with the Windows version you're running or the following module is corrupt: **C:\Packages\Plugins\Microsoft.Azure.RecoveryServices.VMSnapshot\\<extension version\>\iaasvmprovider.dll** <br> To resolve this issue, check if the module is compatible with x86 (32-bit)/x64 (64-bit) version of _regsvr32.exe_, and then follow these steps:
Most agent-related or extension-related failures for Linux VMs are caused by iss
If the process isn't running, restart it by using the following commands:
- - For Ubuntu/Debian:
+ - For Ubuntu/Debian:
```bash sudo systemctl restart walinuxagent ```
-
- - For other distributions:
+
+ - For other distributions:
```bash sudo systemctl restart waagent ```
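To confirm the agent came back up after the restart, you can check its service status. This is a minimal sketch; the service name differs between distributions, as noted above.

```bash
# Verify the agent service is active again (use waagent on non-Ubuntu/Debian distributions)
sudo systemctl status walinuxagent
```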
backup Backup Azure Vm File Recovery Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-vm-file-recovery-troubleshoot.md
Title: Troubleshoot Azure VM file recovery description: Troubleshoot issues when recovering files and folders from an Azure VM backup. -+ Last updated 07/12/2020
This section provides steps to troubleshoot common issues you might experience w
```bash ping download.microsoft.com ```
-
+ ### The script downloads successfully, but fails to run When you run the Python script for Item Level Recovery (ILR) on SUSE Linux Enterprise Server 12 SP4, it fails with the error "iscsi_tcp module can't be loaded" or "iscsi_tcp_module not found".
If the protected Linux VM uses LVM or RAID Arrays, follow the steps in [Recover
### You can't copy the files from mounted volumes
-The copy might fail with the error "0x80070780: The file cannot be accessed by the system."
+The copy might fail with the error "0x80070780: The file cannot be accessed by the system."
Check if the source server has disk deduplication enabled. If it does, ensure the restore server also has deduplication enabled on the drives. You can leave deduplication unconfigured so that you don't deduplicate the drives on the restore server.
bastion Bastion Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/bastion-faq.md
description: Learn about frequently asked questions for Azure Bastion.
Previously updated : 10/13/2023 Last updated : 01/18/2024 # Azure Bastion FAQ
Azure Bastion isn't supported with Azure Private DNS Zones in national clouds.
No, Azure Bastion doesn't currently support private link.
+### Why do I get a "Failed to add subnet" error when using "Deploy Bastion" in the portal?
+
+At this time, for most address spaces, you must add a subnet named **AzureBastionSubnet** to your virtual network before you select **Deploy Bastion**.
+ ### <a name="subnet"></a>Can I have an Azure Bastion subnet of size /27 or smaller (/28, /29, etc.)? For Azure Bastion resources deployed on or after November 2, 2021, the minimum AzureBastionSubnet size is /26 or larger (/25, /24, etc.). All Azure Bastion resources deployed in subnets of size /27 prior to this date are unaffected by this change and will continue to work. However, we highly recommend increasing the size of any existing AzureBastionSubnet to /26 in case you choose to take advantage of [host scaling](./configure-host-scaling.md) in the future.
Make sure the user has **read** access to both the VM, and the peered VNet. Addi
|Microsoft.Network/virtualNetworks/subnets/virtualMachines/read|Gets references to all the virtual machines in a virtual network subnet|Action| |Microsoft.Network/virtualNetworks/virtualMachines/read|Gets references to all the virtual machines in a virtual network|Action|
+### I am connecting to a VM using a JIT policy, do I need additional permissions?
+
+If a user connects to a VM by using a JIT policy, no additional permissions are needed. For more information on connecting to a VM by using a JIT policy, see [Enable just-in-time access on VMs](../defender-for-cloud/just-in-time-access-usage.md).
+ ### My privatelink.azure.com can't resolve to management.privatelink.azure.com This may be due to the Private DNS zone for privatelink.azure.com linked to the Bastion virtual network causing management.azure.com CNAMEs to resolve to management.privatelink.azure.com behind the scenes. Create a CNAME record in their privatelink.azure.com zone for management.privatelink.azure.com to arm-frontdoor-prod.trafficmanager.net to enable successful DNS resolution.
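As an illustration, creating that CNAME record with the Azure CLI might look like the following; the resource group name is a placeholder, and the zone and record names come from the answer above.

```bash
# Add a CNAME for management.privatelink.azure.com pointing to the Front Door endpoint
az network private-dns record-set cname set-record \
  --resource-group MyResourceGroup \
  --zone-name privatelink.azure.com \
  --record-set-name management \
  --cname arm-frontdoor-prod.trafficmanager.net
```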
bastion Quickstart Host Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/quickstart-host-portal.md
description: Learn how to deploy Azure Bastion with default settings from the Az
Previously updated : 10/12/2023 Last updated : 01/18/2024
When you deploy from VM settings, Bastion is automatically configured with the f
| **Name** | Based on the virtual network name | | **Public IP address name** | Based on the virtual network name |
+## Configure the AzureBastionSubnet
+
+When you deploy Azure Bastion, resources are created in a specific subnet, which must be named **AzureBastionSubnet**. The name of the subnet lets the system know where to deploy resources. Use the following steps to add the AzureBastionSubnet to your virtual network:
++
+After adding the AzureBastionSubnet, you can continue to the next section and deploy Bastion.
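For reference, an equivalent Azure CLI command might look like the following; the resource group, virtual network name, and address prefix are placeholders, and the subnet must be named AzureBastionSubnet and be /26 or larger.

```bash
# Create the dedicated Bastion subnet in an existing virtual network
az network vnet subnet create \
  --resource-group MyResourceGroup \
  --vnet-name MyVNet \
  --name AzureBastionSubnet \
  --address-prefixes 10.0.1.0/26
```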
+ ## <a name="createvmset"></a>Deploy Bastion When you create an Azure Bastion instance in the portal by using **Deploy Bastion**, you deploy Bastion automatically by using default settings and the Basic SKU. You can't modify, or specify additional values for, a default deployment.
batch Automatic Certificate Rotation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/automatic-certificate-rotation.md
Title: Enable automatic certificate rotation in a Batch pool description: You can create a Batch pool with a managed identity and a certificate that will automatically be renewed. -+ Last updated 12/05/2023 # Enable automatic certificate rotation in a Batch pool
batch Batch Account Create Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-account-create-portal.md
Title: Create a Batch account in the Azure portal
description: Learn how to use the Azure portal to create and manage an Azure Batch account for running large-scale parallel workloads in the cloud. Last updated 07/18/2023-+ # Create a Batch account in the Azure portal
To create a Batch account in the default Batch service mode:
- **Subscription**: Select the subscription to use if not already selected. - **Resource group**: Select the resource group for the Batch account, or create a new one. - **Account name**: Enter a name for the Batch account. The name must be unique within the Azure region, can contain only lowercase characters or numbers, and must be 3-24 characters long.
-
+ > [!NOTE] > The Batch account name is part of its ID and can't be changed after creation. - **Location**: Select the Azure region for the Batch account if not already selected.
- - **Storage account**: Optionally, select **Select a storage account** to associate an [Azure Storage account](accounts.md#azure-storage-accounts) with the Batch account.
+ - **Storage account**: Optionally, select **Select a storage account** to associate an [Azure Storage account](accounts.md#azure-storage-accounts) with the Batch account.
:::image type="content" source="media/batch-account-create-portal/batch-account-portal.png" alt-text="Screenshot of the New Batch account screen.":::
When you create the first user subscription mode Batch account in an Azure subsc
:::image type="content" source="media/batch-account-create-portal/register_provider.png" alt-text="Screenshot of the Resource providers page."::: 1. Return to the **Subscription** page and select **Access control (IAM)** from the left navigation.
-1. At the top of the **Access control (IAM)** page, select **Add** > **Add role assignment**.
+1. At the top of the **Access control (IAM)** page, select **Add** > **Add role assignment**.
1. On the **Add role assignment** screen, under **Assignment type**, select **Privileged administrator role**, and then select **Next**. 1. On the **Role** tab, select either the **Contributor** or **Owner** role for the Batch account, and then select **Next**. 1. On the **Members** tab, select **Select members**. On the **Select members** screen, search for and select **Microsoft Azure Batch**, and then select **Select**.
batch Batch Automatic Scaling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-automatic-scaling.md
Title: Autoscale compute nodes in an Azure Batch pool
description: Enable automatic scaling on an Azure Batch cloud pool to dynamically adjust the number of compute nodes in the pool. Last updated 08/23/2023-+ # Create a formula to automatically scale compute nodes in a Batch pool
batch Batch Ci Cd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-ci-cd.md
Title: Use Azure Pipelines to build and deploy an HPC solution
description: Use Azure Pipelines CI/CD build and release pipelines to deploy Azure Resource Manager templates for an Azure Batch high performance computing (HPC) solution. Last updated 04/12/2023 -+ # Use Azure Pipelines to build and deploy an HPC solution
Save the following code as a file named *deployment.json*. This final template a
"accountName": {"value": "[parameters('applicationStorageAccountName')]"} } }
- },
+ },
{ "apiVersion": "2017-05-10", "name": "batchAccountDeployment",
batch Batch Cli Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-cli-templates.md
Title: Run jobs end-to-end using templates
description: With only CLI commands, you can create a pool, upload input data, create jobs and associated tasks, and download the resulting output data. Last updated 09/19/2023-+ # Use Azure Batch CLI templates and file transfer
batch Batch Docker Container Workloads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-docker-container-workloads.md
Last updated 01/10/2024 ms.devlang: csharp # ms.devlang: csharp, python-+ # Use Azure Batch to run container workloads
batch Batch Js Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-js-get-started.md
description: Learn the basic concepts of Azure Batch and build a simple solution
Last updated 05/16/2023 ms.devlang: javascript-+ # Get started with Batch SDK for JavaScript
Following code snippet first imports the azure-batch JavaScript module and then
import { BatchServiceClient, BatchSharedKeyCredentials } from "@azure/batch";
-// Replace values below with Batch Account details
+// Replace values below with Batch Account details
const batchAccountName = '<batch-account-name>'; const batchAccountKey = '<batch-account-key>'; const batchEndpoint = '<batch-account-url>';
batch Batch Linux Nodes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-linux-nodes.md
Last updated 05/18/2023 ms.devlang: csharp # ms.devlang: csharp, python-+ zone_pivot_groups: programming-languages-batch-linux-nodes # Provision Linux compute nodes in Batch pools
-You can use Azure Batch to run parallel compute workloads on both Linux and Windows virtual machines. This article details how to create pools of Linux compute nodes in the Batch service by using both the [Batch Python](https://pypi.python.org/pypi/azure-batch) and [Batch .NET](/dotnet/api/microsoft.azure.batch) client libraries.
+You can use Azure Batch to run parallel compute workloads on both Linux and Windows virtual machines. This article details how to create pools of Linux compute nodes in the Batch service by using both the [Batch Python](https://pypi.python.org/pypi/azure-batch) and [Batch .NET](/dotnet/api/microsoft.azure.batch) client libraries.
## Virtual Machine Configuration
batch Batch Parallel Node Tasks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-parallel-node-tasks.md
Title: Run tasks concurrently to maximize usage of Batch compute nodes description: Learn how to increase efficiency and lower costs by using fewer compute nodes and parallelism in an Azure Batch pool. -+ Last updated 05/24/2023 ms.devlang: csharp
batch Batch Pool Compute Intensive Sizes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-pool-compute-intensive-sizes.md
Title: Use compute-intensive Azure VMs with Batch description: How to take advantage of HPC and GPU virtual machine sizes in Azure Batch pools. Learn about OS dependencies and see several scenario examples. -+ Last updated 05/01/2023 # Use RDMA or GPU instances in Batch pools
batch Batch Pool No Public Ip Address https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-pool-no-public-ip-address.md
Title: Create an Azure Batch pool without public IP addresses (preview)
description: Learn how to create an Azure Batch pool without public IP addresses. Last updated 05/30/2023-+ # Create a Batch pool without public IP addresses (preview)
batch Batch Powershell Cmdlets Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-powershell-cmdlets-get-started.md
Title: Get started with PowerShell
description: A quick introduction to the Azure PowerShell cmdlets you can use to manage Batch resources. Last updated 05/24/2023-+ # Manage Batch resources with PowerShell cmdlets
We recommend that you update your Azure PowerShell modules frequently to take ad
``` - **Register with the Batch provider namespace**. You only need to perform this operation **once per subscription**.
-
+ ```powershell Register-AzResourceProvider -ProviderNamespace Microsoft.Batch ```
batch Batch Sig Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-sig-images.md
Last updated 11/09/2023 ms.devlang: csharp # ms.devlang: csharp, python-+ # Use the Azure Compute Gallery to create a custom image pool
Using a Shared Image configured for your scenario can provide several advantages
> [!NOTE] > Currently, Azure Batch does not support the 'TrustedLaunch' feature. You must use the standard security type to create a custom image instead.
->
-> You need to authenticate using Microsoft Entra ID. If you use shared-key-auth, you will get an authentication error.
+>
+> You need to authenticate using Microsoft Entra ID. If you use shared key authentication, you'll get an authentication error.
- **An Azure Batch account.** To create a Batch account, see the Batch quickstarts using the [Azure portal](quick-create-portal.md) or [Azure CLI](quick-create-cli.md).
The following steps show how to prepare a VM, take a snapshot, and create an ima
### Prepare a VM
-If you are creating a new VM for the image, use a first party Azure Marketplace image supported by Batch as the base image for your managed image. Only first party images can be used as a base image.
+If you're creating a new VM for the image, use a first-party Azure Marketplace image supported by Batch as the base image for your managed image. Only first-party images can be used as a base image.
To get a full list of current Azure Marketplace image references supported by Azure Batch, use one of the following APIs to return a list of Windows and Linux VM images including the node agent SKU IDs for each image: - PowerShell: [Azure Batch supported images](/powershell/module/az.batch/get-azbatchsupportedimage) - Azure CLI: [Azure Batch pool supported images](/cli/azure/batch/pool/supported-images) - Batch service APIs: [Batch service APIs](batch-apis-tools.md#batch-service-apis) and [Azure Batch service supported images](/rest/api/batchservice/account/listsupportedimages)-- List node agent SKUs: [Node agent SKUs](/java/api/com.microsoft.azure.batch.protocol.accounts.listnodeagentskus)
+- List node agent SKUs: [Node agent SKUs](/java/api/com.microsoft.azure.batch.protocol.accounts.listnodeagentskus)
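The options above are PowerShell, the Azure CLI, the Batch service REST API, and the Java SDK. As an additional, hedged illustration that isn't taken from the article, the same Batch service operation is also exposed through the azure-batch Python SDK; the account name, key, and URL below are placeholders, and the sketch assumes a recent azure-batch package.

```python
# Hedged sketch: list the Marketplace images Batch supports, with their node agent SKU IDs.
from azure.batch import BatchServiceClient
from azure.batch.batch_auth import SharedKeyCredentials

credentials = SharedKeyCredentials("<batch-account-name>", "<batch-account-key>")
batch_client = BatchServiceClient(
    credentials, batch_url="https://<batch-account-name>.<region>.batch.azure.com")

# Each ImageInformation item pairs an image reference with the node agent SKU that runs on it.
for image in batch_client.account.list_supported_images():
    ref = image.image_reference
    print(f"{ref.publisher}/{ref.offer}/{ref.sku} -> {image.node_agent_sku_id} ({image.os_type})")
```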
> [!NOTE] > You can't use a third-party image that has additional license and purchase terms as your base image. For information about these Marketplace images, see the guidance for [Linux](../virtual-machines/linux/cli-ps-findimage.md#check-the-purchase-plan-information) or [Windows](../virtual-machines/windows/cli-ps-findimage.md#view-purchase-plan-properties) VMs.
Once you have successfully created your managed image, you need to create an Azu
To create a pool from your Shared Image using the Azure CLI, use the `az batch pool create` command. Specify the Shared Image ID in the `--image` field. Make sure the OS type and SKU match the versions specified by `--node-agent-sku-id`. > [!NOTE]
-> You need to authenticate using Microsoft Entra ID. If you use shared-key-auth, you will get an authentication error.
+> You need to authenticate using Microsoft Entra ID. If you use shared key authentication, you'll get an authentication error.
> [!IMPORTANT] > The node agent SKU ID must align with the publisher/offer/SKU for the node to start.
private static void CreateBatchPool(BatchClient batchClient, VirtualMachineConfi
## Create a pool from a Shared Image using Python
-You also can create a pool from a Shared Image by using the Python SDK:
+You also can create a pool from a Shared Image by using the Python SDK:
```python # Import the required modules from the
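# A hedged sketch, not the sample's exact code: it assumes an authenticated
# BatchServiceClient named batch_client, and uses placeholder values for the gallery
# image resource ID, node agent SKU ID, pool ID, and VM size.
import azure.batch.models as batchmodels

image_ref = batchmodels.ImageReference(
    virtual_machine_image_id=(
        "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
        "/providers/Microsoft.Compute/galleries/<gallery>/images/<image-definition>/versions/<version>"))

new_pool = batchmodels.PoolAddParameter(
    id="shared-image-pool",
    vm_size="Standard_D2s_v3",
    virtual_machine_configuration=batchmodels.VirtualMachineConfiguration(
        image_reference=image_ref,
        node_agent_sku_id="batch.node.ubuntu 20.04"),  # must match the image's OS
    target_dedicated_nodes=2)

batch_client.pool.add(new_pool)
```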
Use the following steps to create a pool from a Shared Image in the Azure portal
1. Once the node is allocated, use **Connect** to generate the user and RDP file for Windows, or use SSH for Linux, to log in to the allocated node and verify. ![Create a pool from a Shared Image with the portal.](media/batch-sig-images/create-custom-pool.png)
-
+ ## Considerations for large pools
batch Batch Spot Vms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-spot-vms.md
Title: Run Batch workloads on cost-effective Spot VMs
description: Learn how to provision Spot VMs to reduce the cost of Azure Batch workloads. Last updated 04/11/2023-+ # Use Spot VMs with Batch workloads
batch Batch User Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-user-accounts.md
Title: Run tasks under user accounts
description: Learn the types of user accounts and how to configure them. Last updated 05/16/2023-+ ms.devlang: csharp # ms.devlang: csharp, java, python
A named user account exists on all nodes in the pool and is available to all tas
A named user account is useful when you want to run all tasks in a job under the same user account, but isolate them from tasks running in other jobs at the same time. For example, you can create a named user for each job, and run each job's tasks under that named user account. Each job can then share a secret with its own tasks, but not with tasks running in other jobs.
-You can also use a named user account to run a task that sets permissions on external resources such as file shares. With a named user account, you control the user identity and can use that user identity to set permissions.
+You can also use a named user account to run a task that sets permissions on external resources such as file shares. With a named user account, you control the user identity and can use that user identity to set permissions.
Named user accounts enable password-less SSH between Linux nodes. You can use a named user account with Linux nodes that need to run multi-instance tasks. Each node in the pool can run tasks under a user account defined on the whole pool. For more information about multi-instance tasks, see [Use multi\-instance tasks to run MPI applications](batch-mpi.md).
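As a hedged illustration (the pool ID, task ID, password placeholder, and image values below aren't from the article), a pool-wide named user account can be declared with the Batch Python SDK and then referenced by a task through its user identity:

```python
# Hedged sketch: define a named admin user on the pool, then run a task under that account.
# Assumes an authenticated BatchServiceClient named batch_client and an existing job ID.
import azure.batch.models as batchmodels

pool = batchmodels.PoolAddParameter(
    id="named-user-pool",
    vm_size="Standard_D2s_v3",
    virtual_machine_configuration=batchmodels.VirtualMachineConfiguration(
        image_reference=batchmodels.ImageReference(
            publisher="canonical", offer="0001-com-ubuntu-server-focal", sku="20_04-lts"),
        node_agent_sku_id="batch.node.ubuntu 20.04"),
    target_dedicated_nodes=1,
    user_accounts=[batchmodels.UserAccount(
        name="jobadmin",
        password="<secure-password>",
        elevation_level=batchmodels.ElevationLevel.admin)])
batch_client.pool.add(pool)

# A task that sets permissions on an external file share, running as the named user.
task = batchmodels.TaskAddParameter(
    id="set-permissions-task",
    command_line="/bin/bash -c 'chmod -R g+rw /mnt/share'",
    user_identity=batchmodels.UserIdentity(user_name="jobadmin"))
batch_client.task.add(job_id="<job-id>", task=task)
```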
Func<ImageReference, bool> isUbuntu2004 = imageRef =>
imageRef.Sku.Contains("20.04-LTS"); // Obtain the first node agent SKU in the collection that matches
-// Ubuntu Server 20.04.
+// Ubuntu Server 20.04.
NodeAgentSku ubuntuAgentSku = nodeAgentSkus.First(sku => sku.VerifiedImageReferences.Any(isUbuntu2004));
batch Create Pool Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/create-pool-availability-zones.md
description: Learn how to create a Batch pool with zonal policy to help protect
Last updated 05/25/2023 ms.devlang: csharp-+ # Create an Azure Batch pool across Availability Zones
batch Create Pool Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/create-pool-extensions.md
Title: Use extensions with Batch pools description: Extensions are small applications that facilitate post-provisioning configuration and setup on Batch compute nodes. -+ Last updated 12/05/2023
batch Create Pool Public Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/create-pool-public-ip.md
Title: Create a Batch pool with specified public IP addresses description: Learn how to create an Azure Batch pool that uses your own static public IP addresses. -+ Last updated 05/26/2023
batch Managed Identity Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/managed-identity-pools.md
description: Learn how to enable user-assigned managed identities on Batch pools
Last updated 04/03/2023 ms.devlang: csharp-+ # Configure managed identities in Batch pools
batch Quick Create Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/quick-create-cli.md
Title: 'Quickstart: Use the Azure CLI to create a Batch account and run a job'
description: Follow this quickstart to use the Azure CLI to create a Batch account, a pool of compute nodes, and a job that runs basic tasks on the pool. Last updated 04/12/2023-+ # Quickstart: Use the Azure CLI to create a Batch account and run a job
az batch job create \
## Create job tasks
-Batch provides several ways to deploy apps and scripts to compute nodes. Use the [az batch task create](/cli/azure/batch/task#az-batch-task-create) command to create tasks to run in the job. Each task has a command line that specifies an app or script.
+Batch provides several ways to deploy apps and scripts to compute nodes. Use the [az batch task create](/cli/azure/batch/task#az-batch-task-create) command to create tasks to run in the job. Each task has a command line that specifies an app or script.
The following Bash script creates four identical, parallel tasks called `myTask1` through `myTask4`. The task command line displays the Batch environment variables on the compute node, and then waits 90 seconds.
az batch task file download \
--destination ./stdout.txt ```
-You can view the contents of the standard output file in a text editor. The following example shows a typical *stdout.txt* file. The standard output from this task shows the Azure Batch environment variables that are set on the node. You can refer to these environment variables in your Batch job task command lines, and in the apps and scripts the command lines run.
+You can view the contents of the standard output file in a text editor. The following example shows a typical *stdout.txt* file. The standard output from this task shows the Azure Batch environment variables that are set on the node. You can refer to these environment variables in your Batch job task command lines, and in the apps and scripts the command lines run.
```text AZ_BATCH_TASK_DIR=/mnt/batch/tasks/workitems/myJob/job-1/myTask1
batch Simplified Node Communication Pool No Public Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/simplified-node-communication-pool-no-public-ip.md
Title: Create a simplified node communication pool without public IP addresses
description: Learn how to create an Azure Batch simplified node communication pool without public IP addresses. Last updated 8/14/2023-+ # Create a simplified node communication pool without public IP addresses
batch Tutorial Batch Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/tutorial-batch-functions.md
description: Learn how to apply OCR to scanned documents as they're added to a s
ms.devlang: csharp Last updated 04/21/2023-+ # Tutorial: Trigger a Batch job using Azure Functions
In this section, you use the Azure portal to create the Batch pool and Batch job
### Create a pool 1. Sign in to the Azure portal using your Azure credentials.
-1. Create a pool by selecting **Pools** on the left side navigation, and then the select the **Add** button above the search form.
+1. Create a pool by selecting **Pools** on the left side navigation, and then select the **Add** button above the search form.
:::image type="content" source="./media/tutorial-batch-functions/add-pool.png" alt-text="Screenshot of the Pools page in a Batch account that highlights the Add button.":::
-
+ 1. Enter a **Pool ID**. This example names the pool `ocr-pool`. 1. Select **canonical** as the **Publisher**. 1. Select **0001-com-ubuntu-server-jammy** as the **Offer**.
In this section, you use the Azure portal to create the Batch pool and Batch job
1. Set the **Mode** in the **Scale** section to **Fixed**, and enter 3 for the **Target dedicated nodes**. 1. Set **Start task** to **Enabled**, and enter the command `/bin/bash -c "sudo update-locale LC_ALL=C.UTF-8 LANG=C.UTF-8; sudo apt-get update; sudo apt-get -y install ocrmypdf"` in **Command line**. Be sure to set the **Elevation level** as **Pool autouser, Admin**, which allows start tasks to include commands with `sudo`. 1. Select **OK**.
-
+ ### Create a job 1. Create a job on the pool by selecting **Jobs** in the left side navigation, and then choose the **Add** button above the search form.
In this section, you create the Azure Function that triggers the OCR Batch job w
## Trigger the function and retrieve results
-Upload any or all of the scanned files from the [`input_files`](https://github.com/Azure-Samples/batch-functions-tutorial/tree/master/input_files) directory on GitHub to your input container.
+Upload any or all of the scanned files from the [`input_files`](https://github.com/Azure-Samples/batch-functions-tutorial/tree/master/input_files) directory on GitHub to your input container.
You can test your function from Azure portal on the **Code + Test** page of your function.
- 1. Select **Test/run** on the **Code + Test** page.
+ 1. Select **Test/run** on the **Code + Test** page.
1. Enter the path for your input container in **Body** on the **Input** tab. 1. Select **Run**.
-
+ After a few seconds, the file with OCR applied is added to the output container. Log information outputs to the bottom window. The file is then visible and retrievable on Storage Explorer. Alternatively, you can find the log information on the **Monitor** page:
batch Tutorial Parallel Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/tutorial-parallel-python.md
description: Learn how to process media files in parallel using ffmpeg in Azure
ms.devlang: python Last updated 05/25/2023-+ # Tutorial: Run a parallel workload with Azure Batch using the Python API
Use Azure Batch to run large-scale parallel and high-performance computing (HPC)
> * Monitor task execution. > * Retrieve output files.
-In this tutorial, you convert MP4 media files to MP3 format, in parallel, by using the [ffmpeg](https://ffmpeg.org/) open-source tool.
+In this tutorial, you convert MP4 media files to MP3 format, in parallel, by using the [ffmpeg](https://ffmpeg.org/) open-source tool.
[!INCLUDE [quickstarts-free-trial-note.md](../../includes/quickstarts-free-trial-note.md)]
In this tutorial, you convert MP4 media files to MP3 format, in parallel, by usi
Sign in to the [Azure portal](https://portal.azure.com). ## Download and run the sample app
To run the script:
python batch_python_tutorial_ffmpeg.py ```
-When you run the sample application, the console output is similar to the following. During execution, you experience a pause at `Monitoring all tasks for 'Completed' state, timeout in 00:30:00...` while the pool's compute nodes are started.
-
+When you run the sample application, the console output is similar to the following. During execution, you experience a pause at `Monitoring all tasks for 'Completed' state, timeout in 00:30:00...` while the pool's compute nodes are started.
+ ``` Sample start: 11/28/2018 3:20:21 PM
When tasks are running, the heat map is similar to the following:
:::image type="content" source="./media/tutorial-parallel-python/pool.png" alt-text="Screenshot of Pool heat map.":::
-Typical execution time is approximately *5 minutes* when you run the application in its default configuration. Pool creation takes the most time.
+Typical execution time is approximately *5 minutes* when you run the application in its default configuration. Pool creation takes the most time.
[!INCLUDE [batch-common-tutorial-download](../../includes/batch-common-tutorial-download.md)]
input_files = [
Next, the sample creates a pool of compute nodes in the Batch account with a call to `create_pool`. This defined function uses the Batch [PoolAddParameter](/python/api/azure-batch/azure.batch.models.pooladdparameter) class to set the number of nodes, VM size, and a pool configuration. Here, a [VirtualMachineConfiguration](/python/api/azure-batch/azure.batch.models.virtualmachineconfiguration) object specifies an [ImageReference](/python/api/azure-batch/azure.batch.models.imagereference) to an Ubuntu Server 20.04 LTS image published in the Azure Marketplace. Batch supports a wide range of VM images in the Azure Marketplace, as well as custom VM images.
-The number of nodes and VM size are set using defined constants. Batch supports dedicated nodes and [Spot nodes](batch-spot-vms.md), and you can use either or both in your pools. Dedicated nodes are reserved for your pool. Spot nodes are offered at a reduced price from surplus VM capacity in Azure. Spot nodes become unavailable if Azure doesn't have enough capacity. The sample by default creates a pool containing only five Spot nodes in size *Standard_A1_v2*.
+The number of nodes and VM size are set using defined constants. Batch supports dedicated nodes and [Spot nodes](batch-spot-vms.md), and you can use either or both in your pools. Dedicated nodes are reserved for your pool. Spot nodes are offered at a reduced price from surplus VM capacity in Azure. Spot nodes become unavailable if Azure doesn't have enough capacity. The sample by default creates a pool containing only five Spot nodes in size *Standard_A1_v2*.
In addition to physical node properties, this pool configuration includes a [StartTask](/python/api/azure-batch/azure.batch.models.starttask) object. The StartTask executes on each node as that node joins the pool, and each time a node is restarted. In this example, the StartTask runs Bash shell commands to install the ffmpeg package and dependencies on the nodes.
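To make that configuration concrete, here's a hedged sketch of what a `create_pool`-style helper can look like with the Batch Python SDK; the image reference, node agent SKU, and install command follow the description above but aren't copied from the sample, and `batch_client` is assumed to be an authenticated `BatchServiceClient`.

```python
# Hedged sketch: five Spot (low-priority) Standard_A1_v2 nodes running Ubuntu Server 20.04 LTS,
# with a StartTask that installs ffmpeg on every node as it joins the pool.
import azure.batch.models as batchmodels

def create_pool(batch_client, pool_id):
    new_pool = batchmodels.PoolAddParameter(
        id=pool_id,
        vm_size="Standard_A1_v2",
        virtual_machine_configuration=batchmodels.VirtualMachineConfiguration(
            image_reference=batchmodels.ImageReference(
                publisher="canonical",
                offer="0001-com-ubuntu-server-focal",
                sku="20_04-lts",
                version="latest"),
            node_agent_sku_id="batch.node.ubuntu 20.04"),
        target_dedicated_nodes=0,
        target_low_priority_nodes=5,  # Spot nodes
        start_task=batchmodels.StartTask(
            command_line='/bin/bash -c "apt-get update && apt-get install -y ffmpeg"',
            wait_for_success=True,
            user_identity=batchmodels.UserIdentity(
                auto_user=batchmodels.AutoUserSpecification(
                    scope=batchmodels.AutoUserScope.pool,
                    elevation_level=batchmodels.ElevationLevel.admin))))
    batch_client.pool.add(new_pool)
```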
The app creates tasks in the job with a call to `add_tasks`. This defined functi
The sample creates an [OutputFile](/python/api/azure-batch/azure.batch.models.outputfile) object for the MP3 file after running the command line. Each task's output files (one, in this case) are uploaded to a container in the linked storage account, using the task's `output_files` property.
-Then, the app adds tasks to the job with the [task.add_collection](/python/api/azure-batch/azure.batch.operations.taskoperations) method, which queues them to run on the compute nodes.
+Then, the app adds tasks to the job with the [task.add_collection](/python/api/azure-batch/azure.batch.operations.taskoperations) method, which queues them to run on the compute nodes.
```python tasks = list()
batch_service_client.task.add_collection(job_id, tasks)
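# A hedged, self-contained sketch of an add_tasks-style helper (not the sample's exact code):
# one ffmpeg task per input file, with the resulting MP3 uploaded to an output container
# through the task's output_files property, then queued with task.add_collection.
# input_files are ResourceFile objects and output_container_sas_url is a container SAS URL.
import azure.batch.models as batchmodels

def add_tasks(batch_service_client, job_id, input_files, output_container_sas_url):
    tasks = []
    for idx, input_file in enumerate(input_files):
        input_name = input_file.file_path
        output_name = input_name.rsplit(".", 1)[0] + ".mp3"
        tasks.append(batchmodels.TaskAddParameter(
            id=f"Task{idx}",
            command_line=f'/bin/bash -c "ffmpeg -i {input_name} {output_name}"',
            resource_files=[input_file],
            output_files=[batchmodels.OutputFile(
                file_pattern=output_name,
                destination=batchmodels.OutputFileDestination(
                    container=batchmodels.OutputFileBlobContainerDestination(
                        container_url=output_container_sas_url)),
                upload_options=batchmodels.OutputFileUploadOptions(
                    upload_condition=batchmodels.OutputFileUploadCondition.task_success))]))
    batch_service_client.task.add_collection(job_id, tasks)
```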
### Monitor tasks
-When tasks are added to a job, Batch automatically queues and schedules them for execution on compute nodes in the associated pool. Based on the settings you specify, Batch handles all task queuing, scheduling, retrying, and other task administration duties.
+When tasks are added to a job, Batch automatically queues and schedules them for execution on compute nodes in the associated pool. Based on the settings you specify, Batch handles all task queuing, scheduling, retrying, and other task administration duties.
There are many approaches to monitoring task execution. The `wait_for_tasks_to_complete` function in this example uses the [TaskState](/python/api/azure-batch/azure.batch.models.taskstate) object to monitor tasks for a certain state, in this case the completed state, within a time limit.
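For illustration, a hedged, simplified sketch of such a polling loop (not the sample's exact `wait_for_tasks_to_complete` implementation) could look like this:

```python
# Hedged sketch: poll the job's tasks until all reach the completed state or a timeout expires.
import datetime
import time
import azure.batch.models as batchmodels

def wait_for_tasks_to_complete(batch_service_client, job_id,
                               timeout=datetime.timedelta(minutes=30)):
    deadline = datetime.datetime.now() + timeout
    while datetime.datetime.now() < deadline:
        tasks = list(batch_service_client.task.list(job_id))
        if all(task.state == batchmodels.TaskState.completed for task in tasks):
            return True
        time.sleep(10)  # poll every 10 seconds
    raise RuntimeError(f"Tasks did not reach the Completed state within {timeout}.")
```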
batch Virtual File Mount https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/virtual-file-mount.md
Title: Mount a virtual file system on a pool
description: Learn how to mount different kinds of virtual file systems on Batch pool nodes, and how to troubleshoot mounting issues. ms.devlang: csharp-+ Last updated 08/22/2023
cdn Cdn Custom Ssl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-custom-ssl.md
Last updated 06/06/2022 -+ #Customer intent: As a website owner, I want to enable HTTPS on the custom domain of my CDN endpoint so that my users can use my custom domain to access my content securely.
The following table shows the operation progress that occurs when you disable HT
7. *How do cert renewals work with Bring Your Own Certificate?* To ensure a newer certificate is deployed to PoP infrastructure, upload your new certificate to Azure Key Vault. In your TLS settings on Azure CDN, choose the newest certificate version and select **Save**. Azure CDN then propagates your updated certificate.
-
+ For **Azure CDN from Edgio** profiles, if you use the same Azure Key Vault certificate on several custom domains (e.g. a wildcard certificate), ensure you update all of your custom domains that use that same certificate to the newer certificate version. ## Next steps
chaos-studio Chaos Studio Configure Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-configure-customer-managed-keys.md
Title: Configure customer-managed keys (preview) for experiment encryption
+ Title: Configure customer-managed keys [preview] for experiment encryption
description: Learn how to configure customer-managed keys (preview) for your Azure Chaos Studio experiment resource using Azure Blob Storage
Last updated 10/06/2023
-# Configure customer-managed keys (preview) for Azure Chaos Studio using Azure Blob Storage
+# Configure customer-managed keys [preview] for Azure Chaos Studio using Azure Blob Storage
Azure Chaos Studio automatically encrypts all data stored in your experiment resource with keys that Microsoft provides (service-managed keys). As an optional feature, you can add a second layer of security by also providing your own (customer-managed) encryption key(s). Customer-managed keys offer greater flexibility for controlling access and key-rotation policies.
chaos-studio Chaos Studio Private Link Agent Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-private-link-agent-service.md
-# How-to: Configure Private Link for Agent-Based experiments
-This guide explains the steps needed to configure Private Link for a Chaos Studio **Agent-based** Experiment. The current user experience is based on the private endpoints support enabled as part of public preview of the private endpoints feature. Expect this experience to evolve with time as the feature is enhanced to GA quality.
+# How-to: Configure Private Link for Agent-Based experiments [Preview]
+This guide explains the steps needed to configure Private Link for a Chaos Studio **Agent-based** Experiment [Preview]. The current user experience is based on the private endpoint support enabled as part of the feature's public preview. Expect this experience to evolve over time as the feature, currently in **preview**, is enhanced to GA quality.
## Prerequisites
Example of updated agentInstanceConfig.json:
**IF** you blocked outbound access to Microsoft Certificate Revocation List (CRL) verification endpoints, then you need to update agentSettings.JSON to disable CRL verification check in the agent.
+By default, this field is set to **true**, so you can either remove the field or set its value to `false`. For more details, see the [agent-based CLI tutorial](chaos-studio-tutorial-agent-based-cli.md).
+ ``` "communicationApi": { "checkCertRevocation": false
chaos-studio Chaos Studio Private Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-private-networking.md
Currently, you can only enable certain resource types for Chaos Studio virtual n
To use Chaos Studio with virtual network injection, you must meet the following requirements. 1. The `Microsoft.ContainerInstance` and `Microsoft.Relay` resource providers must be registered with your subscription. 1. The virtual network where Chaos Studio resources will be injected must have two subnets: a container subnet and a relay subnet. A container subnet is used for the Chaos Studio containers that will be injected into your private network. A relay subnet is used to forward communication from Chaos Studio to the containers inside the private network.
- 1. Both subnets need at least `/28` in the address space. An example is an address prefix of `10.0.0.0/28` or `10.0.0.0/24`.
+ 1. Both subnets need at least `/27` in the address space. An example is an address prefix of `10.0.0.0/28` or `10.0.0.0/24`.
1. The container subnet must be delegated to `Microsoft.ContainerInstance/containerGroups`. 1. The subnets can be arbitrarily named, but we recommend `ChaosStudioContainerSubnet` and `ChaosStudioRelaySubnet`. 1. When you enable the desired resource as a target so that you can use it in Chaos Studio experiments, the following properties must be set:
chaos-studio Chaos Studio Tutorial Agent Based Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-tutorial-agent-based-cli.md
The chaos agent is an application that runs in your VM or virtual machine scale
1. Install the Chaos Studio VM extension. Replace `$VM_RESOURCE_ID` with the resource ID of your VM or replace `$SUBSCRIPTION_ID`, `$RESOURCE_GROUP`, and `$VMSS_NAME` with those properties for your virtual machine scale set. Replace `$AGENT_PROFILE_ID` with the agent Profile ID. Replace `$USER_IDENTITY_CLIENT_ID` with the client ID of your managed identity. Replace `$APP_INSIGHTS_KEY` with your Application Insights instrumentation key. If you aren't using Application Insights, remove that key/value pair.
+ #### Full list of default Agent virtual machine extension configuration
+
+ Here's the **minimum agent VM extension configuration** that you must provide:
+
+ ```azcli-interactive
+ {
+ "profile": "$AGENT_PROFILE_ID",
+ "auth.msi.clientid": "$USER_IDENTITY_CLIENT_ID"
+ }
+ ```
+
+ Here are **all values for the agent VM extension configuration**:
+
+ ```azcli-interactive
+ {
+ "profile": "$AGENT_PROFILE_ID",
+ "auth.msi.clientid": "$USER_IDENTITY_CLIENT_ID",
+ "appinsightskey": "$APP_INSIGHTS_KEY",
+ "overrides": {
+ "region": string, default to be null
+ "logLevel": {
+ "default" : string , default to be Information
+ },
+ "checkCertRevocation": boolean, default to be false.
+ }
+ }
+ ```
++ #### Install the agent on a virtual machine Windows ```azurecli-interactive
- az vm extension set --ids $VM_RESOURCE_ID --name ChaosWindowsAgent --publisher Microsoft.Azure.Chaos --version 1.0 --settings '{"profile": "$AGENT_PROFILE_ID", "auth.msi.clientid":"$USER_IDENTITY_CLIENT_ID", "appinsightskey":"$APP_INSIGHTS_KEY"}'
+ az vm extension set --ids $VM_RESOURCE_ID --name ChaosWindowsAgent --publisher Microsoft.Azure.Chaos --version 1.0 --settings '{"profile": "$AGENT_PROFILE_ID", "auth.msi.clientid":"$USER_IDENTITY_CLIENT_ID", "appinsightskey":"$APP_INSIGHTS_KEY", "overrides": {"checkCertRevocation": true}}'
``` Linux ```azurecli-interactive
- az vm extension set --ids $VM_RESOURCE_ID --name ChaosLinuxAgent --publisher Microsoft.Azure.Chaos --version 1.0 --settings '{"profile": "$AGENT_PROFILE_ID", "auth.msi.clientid":"$USER_IDENTITY_CLIENT_ID", "appinsightskey":"$APP_INSIGHTS_KEY"}'
+ az vm extension set --ids $VM_RESOURCE_ID --name ChaosLinuxAgent --publisher Microsoft.Azure.Chaos --version 1.0 --settings '{"profile": "$AGENT_PROFILE_ID", "auth.msi.clientid":"$USER_IDENTITY_CLIENT_ID", "appinsightskey":"$APP_INSIGHTS_KEY", "overrides": {"checkCertRevocation": true}}'
``` #### Install the agent on a virtual machine scale set
The chaos agent is an application that runs in your VM or virtual machine scale
Windows ```azurecli-interactive
- az vmss extension set --subscription $SUBSCRIPTION_ID --resource-group $RESOURCE_GROUP --vmss-name $VMSS_NAME --name ChaosWindowsAgent --publisher Microsoft.Azure.Chaos --version 1.0 --settings '{"profile": "$AGENT_PROFILE_ID", "auth.msi.clientid":"$USER_IDENTITY_CLIENT_ID", "appinsightskey":"$APP_INSIGHTS_KEY"}'
+ az vmss extension set --subscription $SUBSCRIPTION_ID --resource-group $RESOURCE_GROUP --vmss-name $VMSS_NAME --name ChaosWindowsAgent --publisher Microsoft.Azure.Chaos --version 1.0 --settings '{"profile": "$AGENT_PROFILE_ID", "auth.msi.clientid":"$USER_IDENTITY_CLIENT_ID", "appinsightskey":"$APP_INSIGHTS_KEY", "overrides": {"checkCertRevocation": true}}'
``` Linux ```azurecli-interactive
- az vmss extension set --subscription $SUBSCRIPTION_ID --resource-group $RESOURCE_GROUP --vmss-name $VMSS_NAME --name ChaosLinuxAgent --publisher Microsoft.Azure.Chaos --version 1.0 --settings '{"profile": "$AGENT_PROFILE_ID", "auth.msi.clientid":"$USER_IDENTITY_CLIENT_ID", "appinsightskey":"$APP_INSIGHTS_KEY"}'
+ az vmss extension set --subscription $SUBSCRIPTION_ID --resource-group $RESOURCE_GROUP --vmss-name $VMSS_NAME --name ChaosLinuxAgent --publisher Microsoft.Azure.Chaos --version 1.0 --settings '{"profile": "$AGENT_PROFILE_ID", "auth.msi.clientid":"$USER_IDENTITY_CLIENT_ID", "appinsightskey":"$APP_INSIGHTS_KEY", "overrides": {"checkCertRevocation": true}}'
``` 1. If you're setting up a virtual machine scale set, verify that the instances were upgraded to the latest model. If needed, upgrade all instances in the model.
cloud-shell Persisting Shell Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/persisting-shell-storage.md
ms.contributor: jahelmic
Last updated 10/03/2023 tags: azure-resource-manager-+ Title: Persist files in Azure Cloud Shell
confidential-computing Confidential Enclave Nodes Aks Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/confidential-enclave-nodes-aks-get-started.md
Title: 'Quickstart: Deploy an AKS cluster with Enclave Confidential Container Intel SGX nodes by using the Azure CLI' description: Learn how to create an Azure Kubernetes Service (AKS) cluster with enclave confidential containers a Hello World app by using the Azure CLI. -+ Last updated 11/06/2023 -+ # Quickstart: Deploy an AKS cluster with confidential computing Intel SGX agent nodes by using the Azure CLI
This section assumes you're already running an AKS cluster that meets the prereq
Run the following command to enable the confidential computing add-on: ```azurecli-interactive
-az aks enable-addons --addons confcom --name MyManagedCluster --resource-group MyResourceGroup
+az aks enable-addons --addons confcom --name MyManagedCluster --resource-group MyResourceGroup
``` ### Add a DCsv3 user node pool to the cluster
kubectl get pods --all-namespaces
kube-system sgx-device-plugin-xxxx 1/1 Running ```
-If the output matches the preceding code, your AKS cluster is now ready to run confidential applications.
+If the output matches the preceding code, your AKS cluster is now ready to run confidential applications.
## Deploy Hello World from an isolated enclave application <a id="hello-world"></a>
-You're now ready to deploy a test application.
+You're now ready to deploy a test application.
Create a file named *hello-world-enclave.yaml* and paste in the following YAML manifest. You can find this sample application code in the [Open Enclave project](https://github.com/openenclave/openenclave/tree/master/samples/helloworld). This deployment assumes that you've deployed the *confcom* add-on.
Enclave called into host to print: Hello World!
## Clean up resources
-To remove the confidential computing node pool that you created in this quickstart, use the following command:
+To remove the confidential computing node pool that you created in this quickstart, use the following command:
```azurecli-interactive az aks nodepool delete --cluster-name myAKSCluster --name confcompool1 --resource-group myResourceGroup ```
-To delete the AKS cluster, use the following command:
+To delete the AKS cluster, use the following command:
```azurecli-interactive az aks delete --resource-group myResourceGroup --cluster-name myAKSCluster
confidential-computing Guest Attestation Example https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/guest-attestation-example.md
Last updated 04/11/2023-+
-
+ # Use sample application for guest attestation The [*guest attestation*](guest-attestation-confidential-vms.md) feature helps you to confirm that a confidential VM runs on a hardware-based trusted execution environment (TEE) with security features enabled for isolation and integrity. Sample applications for use with the guest attestation APIs are [available on GitHub](https://github.com/Azure/confidential-computing-cvm-guest-attestation).
-Depending on your [type of scenario](guest-attestation-confidential-vms.md#scenarios), you can reuse the sample code in your client program or workload code.
+Depending on your [type of scenario](guest-attestation-confidential-vms.md#scenarios), you can reuse the sample code in your client program or workload code.
## Prerequisites
To use a sample application in C++ for use with the guest attestation APIs, foll
1. Install the `build-essential` package. This package installs everything required for compiling the sample application. ```bash
- sudo apt-get install build-essential
+ sudo apt-get install build-essential
``` 1. Install the `libcurl4-openssl-dev` and `libjsoncpp-dev` packages. ```bash
- sudo apt-get install libcurl4-openssl-dev
+ sudo apt-get install libcurl4-openssl-dev
``` ```bash
- sudo apt-get install libjsoncpp-dev
+ sudo apt-get install libjsoncpp-dev
``` 1. Download the attestation package from <https://packages.microsoft.com/repos/azurecore/pool/main/a/azguestattestation1/>.
To use a sample application in C++ for use with the guest attestation APIs, foll
## Next steps -- [Learn how to use Microsoft Defender for Cloud integration with confidential VMs with guest attestation installed](guest-attestation-defender-for-cloud.md)
+- [Learn how to use Microsoft Defender for Cloud integration with confidential VMs with guest attestation installed](guest-attestation-defender-for-cloud.md)
- [Learn more about the guest attestation feature](guest-attestation-confidential-vms.md) - [Learn about Azure confidential VMs](confidential-vm-overview.md)
confidential-computing Quick Create Confidential Vm Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/quick-create-confidential-vm-arm.md
Last updated 12/01/2023 -+ ms.devlang: azurecli
ms.devlang: azurecli
You can use an Azure Resource Manager template (ARM template) to create an Azure [confidential VM](confidential-vm-overview.md) quickly. Confidential VMs run on both AMD processors backed by AMD SEV-SNP and Intel processors backed by Intel TDX to achieve VM memory encryption and isolation. For more information, see [Confidential VM Overview](confidential-vm-overview.md).
-This tutorial covers deployment of a confidential VM with a custom configuration.
+This tutorial covers deployment of a confidential VM with a custom configuration.
## Prerequisites -- An Azure subscription. Free trial accounts don't have access to the VMs used in this tutorial. One option is to use a [pay as you go subscription](https://azure.microsoft.com/pricing/purchase-options/pay-as-you-go/).
+- An Azure subscription. Free trial accounts don't have access to the VMs used in this tutorial. One option is to use a [pay as you go subscription](https://azure.microsoft.com/pricing/purchase-options/pay-as-you-go/).
- If you want to deploy from the Azure CLI, [install PowerShell](/powershell/azure/install-azure-powershell) and [install the Azure CLI](/cli/azure/install-azure-cli). ## Deploy confidential VM template with Azure CLI
To create and deploy your confidential VM using an ARM template through the Azur
az group create -n $resourceGroup -l $region ```
-1. Deploy your VM to Azure using an ARM template with a custom parameter file. For TDX deployments here is an example template: https://aka.ms/TDXtemplate.
+1. Deploy your VM to Azure using an ARM template with a custom parameter file. For TDX deployments, here's an example template: https://aka.ms/TDXtemplate.
```azurecli-interactive az deployment group create `
When you create a confidential VM through the Azure Command-Line Interface (Azur
1. Depending on the OS image you're using, copy either the [example Windows parameter file](#example-windows-parameter-file) or the [example Linux parameter file](#example-linux-parameter-file) into your parameter file.
-1. Edit the JSON code in the parameter file as needed. For example, update the OS image name (`osImageName`) or the administrator username (`adminUsername`).
+1. Edit the JSON code in the parameter file as needed. For example, update the OS image name (`osImageName`) or the administrator username (`adminUsername`).
1. Configure your security type setting (`securityType`). Choose `VMGuestStateOnly` for no OS disk confidential encryption. Or, choose `DiskWithVMGuestState` for OS disk confidential encryption with a platform-managed key. For Intel TDX SKUs and Linux-based images only, customers may choose the `NonPersistedTPM` security type to deploy with an ephemeral vTPM. For the `NonPersistedTPM` security type use the minimum "apiVersion": "2023-09-01" under `Microsoft.Compute/virtualMachines` in the template file.
Use this example to create a custom parameter file for a Linux-based confidentia
``` 1. Grant confidential VM Service Principal `Confidential VM Orchestrator` to tenant
-
+ For this step you need to be a Global Admin or you need to have the User Access Administrator RBAC role. ```azurecli-interactive Connect-AzureAD -Tenant "your tenant ID"
- New-AzureADServicePrincipal -AppId bf7b6499-ff71-4aa2-97a4-f372087be7f0 -DisplayName "Confidential VM Orchestrator"
+ New-AzureADServicePrincipal -AppId bf7b6499-ff71-4aa2-97a4-f372087be7f0 -DisplayName "Confidential VM Orchestrator"
``` 1. Set up your Azure key vault. For how to use an Azure Key Vault Managed HSM instead, see the next step.
Use this example to create a custom parameter file for a Linux-based confidentia
```azurecli-interactive $KeyVault = <name of key vault>
- az keyvault create --name $KeyVault --resource-group $resourceGroup --location $region --sku Premium --enable-purge-protection
+ az keyvault create --name $KeyVault --resource-group $resourceGroup --location $region --sku Premium --enable-purge-protection
``` 1. Make sure that you have an **owner** role in this key vault.
Use this example to create a custom parameter file for a Linux-based confidentia
1. (Optional) If you don't want to use an Azure key vault, you can create an Azure Key Vault Managed HSM instead.
- 1. Follow the [quickstart to create an Azure Key Vault Managed HSM](../key-vault/managed-hsm/quick-create-cli.md) to provision and activate Azure Key Vault Managed HSM.
+ 1. Follow the [quickstart to create an Azure Key Vault Managed HSM](../key-vault/managed-hsm/quick-create-cli.md) to provision and activate Azure Key Vault Managed HSM.
1. Enable purge protection on the Azure Managed HSM. This step is required to enable key release.
-
+ ```azurecli-interactive az keyvault update-hsm --subscription $subscriptionId -g $resourceGroup --hsm-name $hsm --enable-purge-protection true ```
Use this example to create a custom parameter file for a Linux-based confidentia
```azurecli-interactive $cvmAgent = az ad sp show --id "bf7b6499-ff71-4aa2-97a4-f372087be7f0" | Out-String | ConvertFrom-Json
- az keyvault role assignment create --hsm-name $hsm --assignee $cvmAgent.Id --role "Managed HSM Crypto Service Release User" --scope /keys/$KeyName
+ az keyvault role assignment create --hsm-name $hsm --assignee $cvmAgent.Id --role "Managed HSM Crypto Service Release User" --scope /keys/$KeyName
``` 1. Create a new key using Azure Key Vault. For how to use an Azure Managed HSM instead, see the next step.
Use this example to create a custom parameter file for a Linux-based confidentia
```azurecli-interactive $KeyName = <name of key> $KeySize = 3072
- az keyvault key create --vault-name $KeyVault --name $KeyName --ops wrapKey unwrapkey --kty RSA-HSM --size $KeySize --exportable true --policy "@.\skr-policy.json"
+ az keyvault key create --vault-name $KeyVault --name $KeyName --ops wrapKey unwrapkey --kty RSA-HSM --size $KeySize --exportable true --policy "@.\skr-policy.json"
``` 1. Get information about the key that you created. ```azurecli-interactive $encryptionKeyVaultId = ((az keyvault show -n $KeyVault -g $resourceGroup) | ConvertFrom-Json).id
- $encryptionKeyURL= ((az keyvault key show --vault-name $KeyVault --name $KeyName) | ConvertFrom-Json).key.kid
+ $encryptionKeyURL= ((az keyvault key show --vault-name $KeyVault --name $KeyName) | ConvertFrom-Json).key.kid
``` 1. Deploy a Disk Encryption Set (DES) using a [DES ARM template](https://cvmprivatepreviewsa.blob.core.windows.net/cvmpublicpreviewcontainer/deploymentTemplate/deployDES.json) (`deployDES.json`).
Use this example to create a custom parameter file for a Linux-based confidentia
-p desName=$desName ` -p encryptionKeyURL=$encryptionKeyURL ` -p encryptionKeyVaultId=$encryptionKeyVaultId `
- -p region=$region
+ -p region=$region
``` 1. Assign key access to the DES file. ```azurecli-interactive
- $desIdentity= (az disk-encryption-set show -n $desName -g
+ $desIdentity= (az disk-encryption-set show -n $desName -g
$resourceGroup --query [identity.principalId] -o tsv) az keyvault set-policy -n $KeyVault ` -g $resourceGroup ` --object-id $desIdentity `
- --key-permissions wrapkey unwrapkey get
+ --key-permissions wrapkey unwrapkey get
``` 1. (Optional) Create a new key from an Azure Managed HSM.
Use this example to create a custom parameter file for a Linux-based confidentia
```azurecli-interactive $KeyName = <name of key> $KeySize = 3072
- az keyvault key create --hsm-name $hsm --name $KeyName --ops wrapKey unwrapkey --kty RSA-HSM --size $KeySize --exportable true --policy "@.\skr-policy.json"
+ az keyvault key create --hsm-name $hsm --name $KeyName --ops wrapKey unwrapkey --kty RSA-HSM --size $KeySize --exportable true --policy "@.\skr-policy.json"
``` 1. Get information about the key that you created. ```azurecli-interactive
- $encryptionKeyURL = ((az keyvault key show --hsm-name $hsm --name $KeyName) | ConvertFrom-Json).key.kid
+ $encryptionKeyURL = ((az keyvault key show --hsm-name $hsm --name $KeyName) | ConvertFrom-Json).key.kid
``` 1. Deploy a DES.
Use this example to create a custom parameter file for a Linux-based confidentia
1. Deploy your confidential VM with the customer-managed key. 1. Get the resource ID for the DES.
-
+ ```azurecli-interactive $desID = (az disk-encryption-set show -n $desName -g $resourceGroup --query [id] -o tsv) ```
confidential-computing Quick Create Confidential Vm Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/quick-create-confidential-vm-azure-cli.md
Last updated 12/01/2023 -+ # Quickstart: Create a confidential VM with the Azure CLI
To create a confidential [disk encryption set](../virtual-machines/linux/disks-e
For this step you need to be a Global Admin or you need to have the User Access Administrator RBAC role. [Install Microsoft Graph SDK](/powershell/microsoftgraph/installation) to execute the commands below. ```Powershell Connect-Graph -Tenant "your tenant ID" Application.ReadWrite.All
- New-MgServicePrincipal -AppId bf7b6499-ff71-4aa2-97a4-f372087be7f0 -DisplayName "Confidential VM Orchestrator"
+ New-MgServicePrincipal -AppId bf7b6499-ff71-4aa2-97a4-f372087be7f0 -DisplayName "Confidential VM Orchestrator"
``` 2. Create an Azure Key Vault using the [az keyvault create](/cli/azure/keyvault) command. For the pricing tier, select Premium (includes support for HSM backed keys). Make sure that you have an owner role in this key vault. ```azurecli-interactive
It takes a few minutes to create the VM and supporting resources. The following
} ``` Make a note of the `publicIpAddress` to use later.
-
+ ## Connect and attest the AMD-based CVM through Microsoft Azure Attestation Sample App To use a sample application in C++ for use with the guest attestation APIs, use the following steps. This example uses a Linux confidential virtual machine. For Windows, see [build instructions for Windows](https://github.com/Azure/confidential-computing-cvm-guest-attestation/tree/main/cvm-attestation-sample-app).
To use a sample application in C++ for use with the guest attestation APIs, use
3. Install the `build-essential` package. This package installs everything required for compiling the sample application. ```bash
-sudo apt-get install build-essential
+sudo apt-get install build-essential
``` 4. Install the packages below. ```bash
-sudo apt-get install libcurl4-openssl-dev
+sudo apt-get install libcurl4-openssl-dev
sudo apt-get install libjsoncpp-dev sudo apt-get install libboost-all-dev sudo apt install nlohmann-json3-dev
confidential-computing Quick Create Confidential Vm Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/quick-create-confidential-vm-portal.md
Last updated 12/01/2023
- mode-ui
- - devx-track-linux
+ - linux-related-content
- has-azure-ad-ps-ref - ignite-2023
You can use the Azure portal to create a [confidential VM](confidential-vm-overv
- An Azure subscription. Free trial accounts don't have access to the VMs used in this tutorial. One option is to use a [pay as you go subscription](https://azure.microsoft.com/pricing/purchase-options/pay-as-you-go/). - If you're using a Linux-based confidential VM, use a BASH shell for SSH or install an SSH client, such as [PuTTY](https://www.chiark.greenend.org.uk/~sgtatham/putty/download.html).-- If Confidential disk encryption with a customer-managed key is required, please run below command to opt in service principal `Confidential VM Orchestrator` to your tenant.
+- If Confidential disk encryption with a customer-managed key is required, run the following command to opt in the service principal `Confidential VM Orchestrator` to your tenant.
```azurecli Connect-AzureAD -Tenant "your tenant ID"
- New-AzureADServicePrincipal -AppId bf7b6499-ff71-4aa2-97a4-f372087be7f0 -DisplayName "Confidential VM Orchestrator"
+ New-AzureADServicePrincipal -AppId bf7b6499-ff71-4aa2-97a4-f372087be7f0 -DisplayName "Confidential VM Orchestrator"
``` ## Create confidential VM
To create a confidential VM in the Azure portal using an Azure Marketplace image
1. On the tab **Basics**, configure the following settings: a. Under **Project details**, for **Subscription**, select an Azure subscription that meets the [prerequisites](#prerequisites).
-
+ b. For **Resource Group**, select **Create new** to create a new resource group. Enter a name, and select **OK**. c. Under **Instance details**, for **Virtual machine name**, enter a name for your new VM.
- d. For **Region**, select the Azure region in which to deploy your VM.
+ d. For **Region**, select the Azure region in which to deploy your VM.
> [!NOTE] > Confidential VMs are not available in all locations. For currently supported locations, see which [VM products are available by Azure region](https://azure.microsoft.com/global-infrastructure/services/?products=virtual-machines).
-
+ e. For **Availability options**, select **No infrastructure redundancy required** for singular VMs or [**Virtual machine scale set**](/azure/virtual-machine-scale-sets/overview) for multiple VMs. f. For **Security Type**, select **Confidential virtual machines**.
To create a confidential VM in the Azure portal using an Azure Marketplace image
1. Under **Disk options**, enable **Confidential OS disk encryption** if you want to encrypt your VM's OS disk during creation.
- 1. For **Key Management**, select the type of key to use.
-
+ 1. For **Key Management**, select the type of key to use.
+ 1. If **Confidential disk encryption with a customer-managed key** is selected, create a **Confidential disk encryption set** before creating your confidential VM. 1. If you want to encrypt your VM's temp disk, please refer to the [following documentation](https://aka.ms/CVM-tdisk-encrypt). 1. (Optional) If necessary, you need to create a **Confidential disk encryption set** as follows. 1. [Create an Azure Key Vault](../key-vault/general/quick-create-portal.md) selecting the **Premium** pricing tier that includes support for HSM-backed keys and enable purge protection. Alternatively, you can create an [Azure Key Vault managed Hardware Security Module (HSM)](../key-vault/managed-hsm/quick-create-cli.md).
-
- 1. In the Azure portal, search for and select **Disk Encryption Sets**.
- 1. Select **Create**.
+ 1. In the Azure portal, search for and select **Disk Encryption Sets**.
+
+ 1. Select **Create**.
- 1. For **Subscription**, select which Azure subscription to use.
+ 1. For **Subscription**, select which Azure subscription to use.
1. For **Resource group**, select or create a new resource group to use.
-
+ 1. For **Disk encryption set name**, enter a name for the set.
- 1. For **Region**, select an available Azure region.
+ 1. For **Region**, select an available Azure region.
1. For **Encryption type**, select **Confidential disk encryption with a customer-managed key**.
- 1. For **Key Vault**, select the key vault you already created.
+ 1. For **Key Vault**, select the key vault you already created.
1. Under **Key Vault**, select **Create new** to create a new key.
To create a confidential VM in the Azure portal using an Azure Marketplace image
1. For the key type, select **RSA-HSM** 1. Select your key size
-
+ n. Under Confidential Key Options select **Exportable** and set the Confidential operation policy as **CVM confidential operation policy**. o. Select **Create** to finish creating the key. p. Select **Review + create** to create new disk encryption set. Wait for the resource creation to complete successfully.
-
+ q. Go to the disk encryption set resource in the Azure portal. r. Select the pink banner to grant permissions to Azure Key Vault.
-
+ > [!IMPORTANT] > You must perform this step to successfully create the confidential VM.
confidential-ledger Create Client Certificate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-ledger/create-client-certificate.md
description: Creating a Client Certificate with Microsoft Azure confidential led
-+ Last updated 04/11/2023
confidential-ledger Verify Node Quotes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-ledger/verify-node-quotes.md
Last updated 08/18/2023 -+
connectors Connectors Create Api Azureblobstorage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-azureblobstorage.md
ms.suite: integration Previously updated : 01/10/2024 Last updated : 01/18/2024 tags: connectors
The Azure Blob Storage connector has different versions, based on [logic app typ
1. Follow the trigger with the Azure Blob Storage managed connector action named [**Get blob content**](/connectors/azureblobconnector/#get-blob-content), which reads the complete file and implicitly uses chunking.
+- Azure Blob Storage trigger limits
+
+ - The *managed* connector trigger is limited to 30,000 blobs in the polling virtual folder.
+ - The *built-in* connector trigger is limited to 10,000 blobs in the entire polling container.
+
+ If a limit is exceeded, a new blob might not trigger the workflow, and the trigger is skipped.
+ ## Prerequisites - An Azure account and subscription. If you don't have an Azure subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
container-apps Azure Arc Enable Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/azure-arc-enable-cluster.md
description: 'Tutorial: learn how to set up Azure Container Apps in your Azure A
-+ Last updated 3/24/2023
This tutorial will show you how to enable Azure Container Apps on your Arc-enabl
> [!NOTE] > During the preview, Azure Container Apps on Arc are not supported in production configurations. This article provides an example configuration for evaluation purposes only. >
-> This tutorial uses [Azure Kubernetes Service (AKS)](../aks/index.yml) to provide concrete instructions for setting up an environment from scratch. However, for a production workload, you may not want to enable Azure Arc on an AKS cluster as it is already managed in Azure.
+> This tutorial uses [Azure Kubernetes Service (AKS)](../aks/index.yml) to provide concrete instructions for setting up an environment from scratch. However, for a production workload, you may not want to enable Azure Arc on an AKS cluster as it is already managed in Azure.
Set environment variables based on your Kubernetes cluster deployment.
```bash GROUP_NAME="my-arc-cluster-group" AKS_CLUSTER_GROUP_NAME="my-aks-cluster-group"
-AKS_NAME="my-aks-cluster"
-LOCATION="eastus"
+AKS_NAME="my-aks-cluster"
+LOCATION="eastus"
``` # [PowerShell](#tab/azure-powershell)
LOCATION="eastus"
```azurepowershell-interactive $GROUP_NAME="my-arc-cluster-group" $AKS_CLUSTER_GROUP_NAME="my-aks-cluster-group"
-$AKS_NAME="my-aks-cluster"
-$LOCATION="eastus"
+$AKS_NAME="my-aks-cluster"
+$LOCATION="eastus"
```
The following steps help you get started understanding the service, but for prod
``` # [PowerShell](#tab/azure-powershell)
-
+ ```azurepowershell-interactive az group create --name $AKS_CLUSTER_GROUP_NAME --location $LOCATION az aks create `
The following steps help you get started understanding the service, but for prod
```azurecli-interactive az aks get-credentials --resource-group $AKS_CLUSTER_GROUP_NAME --name $AKS_NAME --admin
-
+ kubectl get ns ```
-1. Create a resource group to contain your Azure Arc resources.
+1. Create a resource group to contain your Azure Arc resources.
# [Azure CLI](#tab/azure-cli)
The following steps help you get started understanding the service, but for prod
```azurecli-interactive CLUSTER_NAME="${GROUP_NAME}-cluster" # Name of the connected cluster resource
-
+ az connectedk8s connect --resource-group $GROUP_NAME --name $CLUSTER_NAME ```
The following steps help you get started understanding the service, but for prod
```azurepowershell-interactive $CLUSTER_NAME="${GROUP_NAME}-cluster" # Name of the connected cluster resource
-
+ az connectedk8s connect --resource-group $GROUP_NAME --name $CLUSTER_NAME ```
A [Log Analytics workspace](../azure-monitor/logs/quick-create-workspace.md) pro
```azurecli-interactive WORKSPACE_NAME="$GROUP_NAME-workspace" # Name of the Log Analytics workspace
-
+ az monitor log-analytics workspace create \ --resource-group $GROUP_NAME \ --workspace-name $WORKSPACE_NAME
A [Log Analytics workspace](../azure-monitor/logs/quick-create-workspace.md) pro
# [Azure CLI](#tab/azure-cli) ```bash
- EXTENSION_NAME="appenv-ext"
+ EXTENSION_NAME="appenv-ext"
NAMESPACE="appplat-ns" CONNECTED_ENVIRONMENT_NAME="<connected-environment-name>" ```
A [Log Analytics workspace](../azure-monitor/logs/quick-create-workspace.md) pro
```azurepowershell-interactive $EXTENSION_NAME="appenv-ext"
- $NAMESPACE="appplat-ns"
- $CONNECTED_ENVIRONMENT_NAME="<connected-environment-name>"
+ $NAMESPACE="appplat-ns"
+ $CONNECTED_ENVIRONMENT_NAME="<connected-environment-name>"
```
container-apps Background Processing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/background-processing.md
Last updated 01/10/2024 -+ # Tutorial: Deploy a background processing application with Azure Container Apps
az deployment group create --resource-group "$RESOURCE_GROUP" \
$Params = @{ environment_name = $ContainerAppsEnvironment location = $Location
- queueconnection = $QueueConnectionString
+ queueconnection = $QueueConnectionString
} $DeploymentArgs = @{
container-apps Dapr Functions Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/dapr-functions-extension.md
Title: Deploy the Dapr extension for Azure Functions in Azure Container Apps (preview)
-description: Learn how to use and deploy the Azure Functions with Dapr extension in your Dapr-enabled container apps.
+description: Learn how to use and deploy the Azure Functions with Dapr extension in your Dapr-enabled container apps.
-+ Last updated 10/30/2023-+ # Customer Intent: I'm a developer who wants to use the Dapr extension for Azure Functions in my Dapr-enabled container app
The [Dapr extension for Azure Functions](../azure-functions/functions-bindings-d
- Create an Azure Redis Cache for use as a Dapr statestore - Deploy an Azure Container Apps environment to host container apps - Deploy a Dapr-enabled function on Azure Container Apps:
- - One function that invokes the other service
+ - One function that invokes the other service
- One function that creates an Order and saves it to storage via Dapr statestore-- Verify the interaction between the two apps
+- Verify the interaction between the two apps
## Prerequisites
The [Dapr extension for Azure Functions](../azure-functions/functions-bindings-d
## Set up the environment
-1. In the terminal, log into your Azure subscription.
+1. In the terminal, log into your Azure subscription.
```azurecli az login
The [Dapr extension for Azure Functions](../azure-functions/functions-bindings-d
## Create resource group

> [!NOTE]
-> Azure Container Apps support for Functions is currently in preview and available in the following regions.
+> Azure Container Apps support for Functions is currently in preview and available in the following regions.
> - Australia East
> - Central US
> - East US
Specifying one of the available regions, create a resource group for your contai
1. When prompted by the CLI, enter a resource name prefix. The name you choose must be a combination of numbers and lowercase letters, between 3 and 24 characters in length. ```
- Please provide string value for 'resourceNamePrefix' (? for help): {your-resource-name-prefix}
+ Please provide string value for 'resourceNamePrefix' (? for help): {your-resource-name-prefix}
``` The template deploys the following resources and might take a while:
Specifying one of the available regions, create a resource group for your contai
- Application Insights - Log Analytics WorkSpace - Dapr Component (Azure Redis Cache) for State Management
- - The following .NET Dapr-enabled Functions:
+ - The following .NET Dapr-enabled Functions:
- `OrderService` - `CreateNewOrder` - `RetrieveOrder`
container-apps Deploy Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/deploy-visual-studio-code.md
Last updated 10/29/2023-+ # Quickstart: Deploy to Azure Container Apps using Visual Studio Code
In this tutorial, you'll deploy a containerized application to Azure Container A
- The following Visual Studio Code extensions installed: - The [Azure Account extension](https://marketplace.visualstudio.com/items?itemName=ms-vscode.azure-account) - The [Azure Container Apps extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurecontainerapps)
- - The [Docker extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-docker)
+ - The [Docker extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-docker)
## Clone the project
The Azure Container Apps extension for Visual Studio Code enables you to choose
In the browser's location bar, append the `/albums` path at the end of the app URL to view data from a sample API request.
-Congratulations! You successfully created and deployed your first container app using Visual Studio code.
+Congratulations! You successfully created and deployed your first container app using Visual Studio Code.
+ ## Clean up resources
Follow these steps in the Azure portal to remove the resources you created:
1. Select the **my-container-app** resource group from the *Overview* section. 1. Select the **Delete resource group** button at the top of the resource group *Overview*. 1. Enter the resource group name **my-container-app** in the *Are you sure you want to delete "my-container-apps"* confirmation dialog.
-1. Select **Delete**.
+1. Select **Delete**.
The process to delete the resource group might take a few minutes to complete. > [!TIP]
container-apps Get Started Existing Container Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/get-started-existing-container-image.md
description: Deploy an existing container image to Azure Container Apps with the
-+ Last updated 08/31/2022
If you have enabled ingress on your container app, you can add `--query properti
# [Azure PowerShell](#tab/azure-powershell) ```azurepowershell-interactive
-$TemplateObj = New-AzContainerAppTemplateObject -Name my-container-app -Image "<REGISTRY_CONTAINER_NAME>"
+$TemplateObj = New-AzContainerAppTemplateObject -Name my-container-app -Image "<REGISTRY_CONTAINER_NAME>"
``` (Replace the \<REGISTRY_CONTAINER_NAME\> with your value.)
container-apps Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/get-started.md
The following message is displayed when the container app is deployed:
:::image type="content" source="media/get-started/azure-container-apps-quickstart.png" alt-text="Screenshot of container app web page."::: + ## Clean up resources If you're not going to continue to use this application, run the following command to delete the resource group along with all the resources created in this quickstart.
container-apps Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/github-actions.md
- devx-track-azurecli
- - devx-track-linux
+ - linux-related-content
- ignite-2023 Last updated 11/09/2022
steps:
uses: azure/login@v1 with: creds: ${{ secrets.AZURE_CREDENTIALS }}
-
+ - name: Build and deploy Container App uses: azure/container-apps-deploy-action@v1 with:
You take the following steps to configure a GitHub Actions workflow to deploy to
### Create a GitHub repository and clone source code
-Before creating a workflow, the source code for your app must be in a GitHub repository.
+Before creating a workflow, the source code for your app must be in a GitHub repository.
-1. Log in to Azure with the Azure CLI.
+1. Log in to Azure with the Azure CLI.
```azurecli-interactive az login
Before creating a workflow, the source code for your app must be in a GitHub rep
Create your container app using the `az containerapp up` command in the following steps. This command will create Azure resources, build the container image, store the image in a registry, and deploy to a container app.
-After you create your app, you can add a managed identity to the app and assign the identity the `AcrPull` role to allow the identity to pull images from the registry.
+After you create your app, you can add a managed identity to the app and assign the identity the `AcrPull` role to allow the identity to pull images from the registry.
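As a rough sketch of that identity setup with the Azure CLI — the app, resource group, and registry names below are placeholders, not values from the article:

```azurecli-interactive
# Enable a system-assigned managed identity on the container app.
az containerapp identity assign \
  --name my-container-app \
  --resource-group my-container-apps \
  --system-assigned

# Capture the identity's principal ID and the registry's resource ID.
PRINCIPAL_ID=$(az containerapp identity show \
  --name my-container-app \
  --resource-group my-container-apps \
  --query principalId --output tsv)
ACR_ID=$(az acr show --name myregistry --query id --output tsv)

# Grant the identity permission to pull images from the registry.
az role assignment create \
  --assignee "$PRINCIPAL_ID" \
  --role AcrPull \
  --scope "$ACR_ID"
```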
[!INCLUDE [container-apps-github-devops-setup.md](../../includes/container-apps-github-devops-setup.md)]
The GitHub workflow requires a secret named `AZURE_CREDENTIALS` to authenticate
push: branches: - main
-
+ jobs: build: runs-on: ubuntu-latest
container-apps Jobs Get Started Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/jobs-get-started-cli.md
Job executions output logs to the logging provider that you configured for the C
] ``` + ## Clean up resources If you're not going to continue to use this application, run the following command to delete the resource group along with all the resources created in this quickstart.
container-apps Jobs Get Started Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/jobs-get-started-portal.md
Next, create an environment for your container app.
1. In *Job details*, select **Scheduled** for the *Trigger type*. In the *Cron expression* field, enter `*/1 * * * *`.
-
+ This expression starts the job every minute. 1. Select the **Next: Container** button at the bottom of the page.
Next, create an environment for your container app.
1. Select **Go to resource** to view your new Container Apps job.
-2. Select the **Execution history** tab.
+1. Select the **Execution history** tab.
The *Execution history* tab displays the status of each job execution. Select the **Refresh** button to update the list. Wait up to a minute for the scheduled job execution to start. Its status changes from *Pending* to *Running* to *Succeeded*.
Next, create an environment for your container app.
The logs show the output of the job execution. It may take a few minutes for the logs to appear. + ## Clean up resources If you're not going to continue to use this application, you can delete the Azure Container Apps instance and all the associated services by removing the resource group.
container-apps Manage Secrets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/manage-secrets.md
Last updated 03/23/2023 -+ # Manage secrets in Azure Container Apps
container-apps Microservices Dapr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/microservices-dapr.md
Last updated 09/29/2022 -+ ms.devlang: azurecli
You learn how to:
With Azure Container Apps, you get a [fully managed version of the Dapr APIs](./dapr-overview.md) when building microservices. When you use Dapr in Azure Container Apps, you can enable sidecars to run next to your microservices that provide a rich set of capabilities. Available Dapr APIs include [Service to Service calls](https://docs.dapr.io/developing-applications/building-blocks/service-invocation/), [Pub/Sub](https://docs.dapr.io/developing-applications/building-blocks/pubsub/), [Event Bindings](https://docs.dapr.io/developing-applications/building-blocks/bindings/), [State Stores](https://docs.dapr.io/developing-applications/building-blocks/state-management/), and [Actors](https://docs.dapr.io/developing-applications/building-blocks/actors/).
-In this tutorial, you deploy the same applications from the Dapr [Hello World](https://github.com/dapr/quickstarts/tree/master/tutorials/hello-kubernetes) quickstart.
+In this tutorial, you deploy the same applications from the Dapr [Hello World](https://github.com/dapr/quickstarts/tree/master/tutorials/hello-kubernetes) quickstart.
The application consists of:
container-apps Quickstart Code To Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/quickstart-code-to-cloud.md
Copy the FQDN to a web browser. From your web browser, go to the `/albums` endp
:::image type="content" source="media/quickstart-code-to-cloud/azure-container-apps-album-api.png" alt-text="Screenshot of response from albums API endpoint."::: + ## Clean up resources If you're not going to continue on to the [Deploy a frontend](communicate-between-microservices.md) tutorial, you can remove the Azure resources created during this quickstart with the following command.
container-apps Quickstart Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/quickstart-portal.md
Select the link next to *Application URL* to view your application. The followin
:::image type="content" source="media/get-started/azure-container-apps-quickstart.png" alt-text="Your first Azure Container Apps deployment."::: + ## Clean up resources If you're not going to continue to use this application, you can delete the Azure Container Apps instance and all the associated services by removing the resource group.
container-apps Scale App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/scale-app.md
description: Learn how applications scale in and out in Azure Container Apps.
-+ Last updated 12/08/2022
Scaling is defined by the combination of limits, rules, and behavior.
- **Behavior** is how the rules and limits are combined together to determine scale decisions over time. [Scale behavior](#scale-behavior) explains how scale decisions are calculated.
-
+ As you define your scaling rules, keep in mind the following items: - You aren't billed usage charges if your container app scales to zero.
If you define more than one scale rule, the container app begins to scale once t
## HTTP
-With an HTTP scaling rule, you have control over the threshold of concurrent HTTP requests that determines how your container app revision scales. [Container Apps jobs](jobs.md) don't support HTTP scaling rules.
+With an HTTP scaling rule, you have control over the threshold of concurrent HTTP requests that determines how your container app revision scales. [Container Apps jobs](jobs.md) don't support HTTP scaling rules.
In the following example, the revision scales out up to five replicas and can scale in to zero. The scaling property is set to 100 concurrent requests per second.
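The article's own example is elided here; as a hedged Azure CLI sketch of such a rule (every resource name below is a placeholder, and the article may express the same rule in ARM or YAML instead):

```azurecli-interactive
az containerapp create \
  --name my-container-app \
  --resource-group my-container-apps \
  --environment my-environment \
  --image mcr.microsoft.com/k8se/quickstart:latest \
  --min-replicas 0 \
  --max-replicas 5 \
  --scale-rule-name http-scaling \
  --scale-rule-type http \
  --scale-rule-http-concurrency 100
```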
A KEDA scaler may support using secrets in a [TriggerAuthentication](https://ked
1. In your container app, create the [secrets](./manage-secrets.md) that match the `secretTargetRef` properties.
-1. In the CLI command, set parameters for each `secretTargetRef` entry.
+1. In the CLI command, set parameters for each `secretTargetRef` entry.
1. Create a secret entry with the `--secrets` parameter. If there are multiple secrets, separate them with a space.
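Putting those steps together, a hedged sketch of a queue-based scale rule that authenticates through a connection-string secret might look like the following; every name is illustrative, and `$CONNECTION_STRING` is assumed to hold the storage queue connection string:

```azurecli-interactive
az containerapp create \
  --name my-container-app \
  --resource-group my-container-apps \
  --environment my-environment \
  --image mcr.microsoft.com/k8se/quickstart:latest \
  --secrets "queue-connection-string=$CONNECTION_STRING" \
  --scale-rule-name queue-scaling \
  --scale-rule-type azure-queue \
  --scale-rule-metadata "queueName=my-queue" "queueLength=5" \
  --scale-rule-auth "connection=queue-connection-string"
```

Here the `--scale-rule-auth` entry maps the scaler's `connection` trigger parameter to the `queue-connection-string` secret created with `--secrets`.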
If the app was scaled to the maximum replica count of 20, scaling goes through t
- No usage charges are incurred when an application scales to zero. For more pricing information, see [Billing in Azure Container Apps](billing.md).
+- You need to enable data protection for all .NET apps on Azure Container Apps. See [Deploying and scaling an ASP.NET Core app on Azure Container Apps](/aspnet/core/host-and-deploy/scaling-aspnet-apps/scaling-aspnet-apps) for details.
+ ### Known limitations - Vertical scaling isn't supported.
container-apps Tutorial Java Quarkus Connect Managed Identity Postgresql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/tutorial-java-quarkus-connect-managed-identity-postgresql-database.md
Last updated 09/26/2022-+ # Tutorial: Connect to PostgreSQL Database from a Java Quarkus Container App without secrets using a managed identity
container-apps Vnet Custom Internal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/vnet-custom-internal.md
description: Learn how to integrate a VNET to an internal Azure Container Apps e
-+ Last updated 08/29/2023
$VnetArgs = @{
Location = $Location ResourceGroupName = $ResourceGroupName AddressPrefix = '10.0.0.0/16'
- Subnet = $subnet
+ Subnet = $subnet
} $vnet = New-AzVirtualNetwork @VnetArgs ```
$DnsRecordArgs = @{
ZoneName = $EnvironmentDefaultDomain Name = '*' RecordType = 'A'
- Ttl = 3600
+ Ttl = 3600
PrivateDnsRecords = $DnsRecords } New-AzPrivateDnsRecordSet @DnsRecordArgs
container-apps Vnet Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/vnet-custom.md
description: Learn how to integrate a VNET with an external Azure Container Apps
-+ Last updated 08/31/2022
$VnetArgs = @{
Location = $Location ResourceGroupName = $ResourceGroupName AddressPrefix = '10.0.0.0/16'
- Subnet = $subnet
+ Subnet = $subnet
} $vnet = New-AzVirtualNetwork @VnetArgs ```
$DnsRecordArgs = @{
ZoneName = $EnvironmentDefaultDomain Name = '*' RecordType = 'A'
- Ttl = 3600
+ Ttl = 3600
PrivateDnsRecords = $DnsRecords } New-AzPrivateDnsRecordSet @DnsRecordArgs
container-instances Container Instances Custom Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-custom-dns.md
description: Configure a public or private DNS configuration for a container gro
-+ Last updated 05/25/2022
Last updated 05/25/2022
# Deploy a container group with custom DNS settings
-In [Azure Virtual Network](../virtual-network/virtual-networks-overview.md), you can deploy container groups using the `az container create` command in the Azure CLI. You can also provide advanced configuration settings to the `az container create` command using a YAML configuration file.
+In [Azure Virtual Network](../virtual-network/virtual-networks-overview.md), you can deploy container groups using the `az container create` command in the Azure CLI. You can also provide advanced configuration settings to the `az container create` command using a YAML configuration file.
-This article demonstrates how to deploy a container group with custom DNS settings using a YAML configuration file.
+This article demonstrates how to deploy a container group with custom DNS settings using a YAML configuration file.
For more information on deploying container groups to a virtual network, see the [Deploy in a virtual network article](container-instances-vnet.md).
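As a minimal sketch of that deployment pattern — the YAML file name is a placeholder for the configuration file the article builds later, and the resource group is illustrative:

```azurecli-interactive
az container create \
  --resource-group ACIResourceGroup \
  --file custom-dns-deploy-aci.yaml
```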
If you have an existing virtual network that meets these criteria, you can skip
1. Link the DNS zone to your virtual network using the [az network private-dns link vnet create][az-network-private-dns-link-vnet-create] command. The DNS server is only required to test name resolution. The `-e` flag enables automatic hostname registration, which is unneeded, so we set it to `false`.

    ```azurecli-interactive
- az network private-dns link vnet create \
+ az network private-dns link vnet create \
-g ACIResourceGroup \
- -n aciDNSLink \
+ -n aciDNSLink \
-z private.contoso.com \
-v aci-vnet \
-e false
container-instances Container Instances Github Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-github-action.md
Last updated 12/09/2022-+ # Configure a GitHub Action to create a container instance
This article shows how to set up a workflow in a GitHub repo that performs the f
This article shows two ways to set up the workflow:
-* [Configure GitHub workflow](#configure-github-workflow) - Create a workflow in a GitHub repo using the Deploy to Azure Container Instances action and other actions.
+* [Configure GitHub workflow](#configure-github-workflow) - Create a workflow in a GitHub repo using the Deploy to Azure Container Instances action and other actions.
* [Use CLI extension](#use-deploy-to-azure-extension) - Use the `az container app up` command in the [Deploy to Azure](https://github.com/Azure/deploy-to-azure-cli-extension) extension in the Azure CLI. This command streamlines creation of the GitHub workflow and deployment steps. > [!IMPORTANT]
Save the JSON output because it is used in a later step. Also, take note of the
# [OpenID Connect](#tab/openid)
-OpenID Connect is an authentication method that uses short-lived tokens. Setting up [OpenID Connect with GitHub Actions](https://docs.github.com/en/actions/deployment/security-hardening-your-deployments/about-security-hardening-with-openid-connect) is more complex process that offers hardened security.
+OpenID Connect is an authentication method that uses short-lived tokens. Setting up [OpenID Connect with GitHub Actions](https://docs.github.com/en/actions/deployment/security-hardening-your-deployments/about-security-hardening-with-openid-connect) is a more complex process that offers hardened security.
-1. If you do not have an existing application, register a [new Active Directory application and service principal that can access resources](../active-directory/develop/howto-create-service-principal-portal.md). Create the Active Directory application.
+1. If you do not have an existing application, register a [new Active Directory application and service principal that can access resources](../active-directory/develop/howto-create-service-principal-portal.md). Create the Active Directory application.
```azurecli-interactive az ad app create --display-name myApp ```
- This command will output JSON with an `appId` that is your `client-id`. Save the value to use as the `AZURE_CLIENT_ID` GitHub secret later.
+ This command will output JSON with an `appId` that is your `client-id`. Save the value to use as the `AZURE_CLIENT_ID` GitHub secret later.
You'll use the `objectId` value when creating federated credentials with Graph API and reference it as the `APPLICATION-OBJECT-ID`.
-1. Create a service principal. Replace the `$appID` with the appId from your JSON output.
+1. Create a service principal. Replace the `$appID` with the appId from your JSON output.
- This command generates JSON output with a different `objectId` and will be used in the next step. The new `objectId` is the `assignee-object-id`.
-
- Copy the `appOwnerTenantId` to use as a GitHub secret for `AZURE_TENANT_ID` later.
+ This command generates JSON output with a different `objectId`, which is used in the next step. The new `objectId` is the `assignee-object-id`.
+
+ Copy the `appOwnerTenantId` to use as a GitHub secret for `AZURE_TENANT_ID` later.
```azurecli-interactive az ad sp create --id $appId ```
-1. Create a new role assignment by subscription and object. By default, the role assignment will be tied to your default subscription. Replace `$subscriptionId` with your subscription ID, `$resourceGroupName` with your resource group name, and `$assigneeObjectId` with the generated `assignee-object-id`. Learn [how to manage Azure subscriptions with the Azure CLI](/cli/azure/manage-azure-subscriptions-azure-cli).
+1. Create a new role assignment by subscription and object. By default, the role assignment will be tied to your default subscription. Replace `$subscriptionId` with your subscription ID, `$resourceGroupName` with your resource group name, and `$assigneeObjectId` with the generated `assignee-object-id`. Learn [how to manage Azure subscriptions with the Azure CLI](/cli/azure/manage-azure-subscriptions-azure-cli).
```azurecli-interactive az role assignment create --role contributor --subscription $subscriptionId --assignee-object-id $assigneeObjectId --scope /subscriptions/$subscriptionId/resourceGroups/$resourceGroupName/providers/Microsoft.Web/sites/ --assignee-principal-type ServicePrincipal
OpenID Connect is an authentication method that uses short-lived tokens. Setting
* Jobs in your GitHub Actions environment: `repo:< Organization/Repository >:environment:< Name >` * For Jobs not tied to an environment, include the ref path for branch/tag based on the ref path used for triggering the workflow: `repo:< Organization/Repository >:ref:< ref path>`. For example, `repo:n-username/ node_express:ref:refs/heads/my-branch` or `repo:n-username/ node_express:ref:refs/tags/my-tag`. * For workflows triggered by a pull request event: `repo:< Organization/Repository >:pull_request`.
-
+ ```azurecli-interactive az ad app federated-credential create --id <APPLICATION-OBJECT-ID> --parameters credential.json ("credential.json" contains the following content)
OpenID Connect is an authentication method that uses short-lived tokens. Setting
"audiences": [ "api://AzureADTokenExchange" ]
- }
+ }
```
-
+ To learn how to create an Active Directory application, service principal, and federated credentials in the Azure portal, see [Connect GitHub and Azure](/azure/developer/github/connect-from-azure#use-the-azure-login-action-with-openid-connect).
To learn how to create a Create an active directory application, service princip
# [Service principal](#tab/userlevel)
-Update the Azure service principal credentials to allow push and pull access to your container registry. This step enables the GitHub workflow to use the service principal to [authenticate with your container registry](../container-registry/container-registry-auth-service-principal.md) and to push and pull a Docker image.
+Update the Azure service principal credentials to allow push and pull access to your container registry. This step enables the GitHub workflow to use the service principal to [authenticate with your container registry](../container-registry/container-registry-auth-service-principal.md) and to push and pull a Docker image.
Get the resource ID of your container registry. Substitute the name of your registry in the following [az acr show][az-acr-show] command:
az role assignment create \
# [OpenID Connect](#tab/openid)
-You need to give your application permission to access the Azure Container Registry and to create an Azure Container Instance.
+You need to give your application permission to access the Azure Container Registry and to create an Azure Container Instance.
-1. In Azure portal, go to [App registrations](https://portal.azure.com/#view/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/~/RegisteredApps).
-1. Search for your OpenID Connect app registration and copy the **Application (client) ID**.
-1. Grant permissions for your app to your resource group. You'll need to set permissions at the resource group level so that you can create Azure Container instances.
+1. In Azure portal, go to [App registrations](https://portal.azure.com/#view/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/~/RegisteredApps).
+1. Search for your OpenID Connect app registration and copy the **Application (client) ID**.
+1. Grant permissions for your app to your resource group. You'll need to set permissions at the resource group level so that you can create Azure Container instances.
```azurecli-interactive az role assignment create \
jobs:
# checkout the repo - name: 'Checkout GitHub Action' uses: actions/checkout@main
-
+ - name: 'Login via Azure CLI' uses: azure/login@v1 with: creds: ${{ secrets.AZURE_CREDENTIALS }}
-
+ - name: 'Build and push image' uses: azure/docker-login@v1 with:
jobs:
steps: - name: 'Checkout GitHub Action' uses: actions/checkout@main
-
+ - name: 'Login via Azure CLI' uses: azure/login@v1 with:
jobs:
### Validate workflow
-After you commit the workflow file, the workflow is triggered. To review workflow progress, navigate to **Actions** > **Workflows**.
+After you commit the workflow file, the workflow is triggered. To review workflow progress, navigate to **Actions** > **Workflows**.
![View workflow progress](./media/container-instances-github-action/github-action-progress.png) See [Viewing workflow run history](https://docs.github.com/en/actions/managing-workflow-runs/viewing-workflow-run-history) for information about viewing the status and results of each step in your workflow. If the workflow doesn't complete, see [Viewing logs to diagnose failures](https://docs.github.com/en/actions/managing-workflow-runs/using-workflow-run-logs#viewing-logs-to-diagnose-failures).
-When the workflow completes successfully, get information about the container instance named *aci-sampleapp* by running the [az container show][az-container-show] command. Substitute the name of your resource group:
+When the workflow completes successfully, get information about the container instance named *aci-sampleapp* by running the [az container show][az-container-show] command. Substitute the name of your resource group:
```azurecli-interactive az container show \
After the instance is provisioned, navigate to the container's FQDN in your brow
## Use Deploy to Azure extension
-Alternatively, use the [Deploy to Azure extension](https://github.com/Azure/deploy-to-azure-cli-extension) in the Azure CLI to configure the workflow. The `az container app up` command in the extension takes input parameters from you to set up a workflow to deploy to Azure Container Instances.
+Alternatively, use the [Deploy to Azure extension](https://github.com/Azure/deploy-to-azure-cli-extension) in the Azure CLI to configure the workflow. The `az container app up` command in the extension takes input parameters from you to set up a workflow to deploy to Azure Container Instances.
The workflow created by the Azure CLI is similar to the workflow you can [create manually using GitHub](#configure-github-workflow).
az container app up \
* Service principal credentials for the Azure CLI * Credentials to access the Azure container registry
-* After the command commits the workflow file to your repo, the workflow is triggered.
+* After the command commits the workflow file to your repo, the workflow is triggered.
Output is similar to:
To view the workflow status and results of each step in the GitHub UI, see [View
### Validate workflow
-The workflow deploys an Azure container instance with the base name of your GitHub repo, in this case, *acr-build-helloworld-node*. When the workflow completes successfully, get information about the container instance named *acr-build-helloworld-node* by running the [az container show][az-container-show] command. Substitute the name of your resource group:
+The workflow deploys an Azure container instance with the base name of your GitHub repo, in this case, *acr-build-helloworld-node*. When the workflow completes successfully, get information about the container instance named *acr-build-helloworld-node* by running the [az container show][az-container-show] command. Substitute the name of your resource group:
```azurecli-interactive az container show \
container-instances Container Instances Gpu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-gpu.md
Title: Deploy GPU-enabled container instance
+ Title: Deploy GPU-enabled container instance
description: Learn how to deploy Azure container instances to run compute-intensive container applications using GPU resources. -+ Last updated 06/17/2022
This article shows how to add GPU resources when you deploy a container group by
> [!NOTE]
> Due to some current limitations, not all limit increase requests are guaranteed to be approved.
-* If you would like to use this sku for your production container deployments, create an [Azure Support request](https://azure.microsoft.com/support) to increase the limit.
+* If you would like to use this SKU for your production container deployments, create an [Azure Support request](https://azure.microsoft.com/support) to increase the limit.
## Preview limitations
-In preview, the following limitations apply when using GPU resources in container groups.
+In preview, the following limitations apply when using GPU resources in container groups.
[!INCLUDE [container-instances-gpu-regions](../../includes/container-instances-gpu-regions.md)]
To use GPUs in a container instance, specify a *GPU resource* with the following
[!INCLUDE [container-instances-gpu-limits](../../includes/container-instances-gpu-limits.md)]
-When deploying GPU resources, set CPU and memory resources appropriate for the workload, up to the maximum values shown in the preceding table. These values are currently larger than the CPU and memory resources available in container groups without GPU resources.
+When deploying GPU resources, set CPU and memory resources appropriate for the workload, up to the maximum values shown in the preceding table. These values are currently larger than the CPU and memory resources available in container groups without GPU resources.
> [!IMPORTANT]
> Default [subscription limits](container-instances-quotas.md) (quotas) for GPU resources differ by SKU. The default CPU limits for V100 SKUs are initially set to 0. To request an increase in an available region, please submit an [Azure support request][azure-support].

### Things to know
-* **Deployment time** - Creation of a container group containing GPU resources takes up to **8-10 minutes**. This is due to the additional time to provision and configure a GPU VM in Azure.
+* **Deployment time** - Creation of a container group containing GPU resources takes up to **8-10 minutes**. This is due to the additional time to provision and configure a GPU VM in Azure.
* **Pricing** - Similar to container groups without GPU resources, Azure bills for resources consumed over the *duration* of a container group with GPU resources. The duration is calculated from the time to pull your first container's image until the container group terminates. It does not include the time to deploy the container group.
When deploying GPU resources, set CPU and memory resources appropriate for the w
> [!NOTE] > To improve reliability when using a public container image from Docker Hub, import and manage the image in a private Azure container registry, and update your Dockerfile to use your privately managed base image. [Learn more about working with public images](../container-registry/buffer-gate-public-content.md).
-
+ ## YAML example One way to add GPU resources is to deploy a container group by using a [YAML file](container-instances-multi-container-yaml.md). Copy the following YAML into a new file named *gpu-deploy-aci.yaml*, then save the file. This YAML creates a container group named *gpucontainergroup* specifying a container instance with a V100 GPU. The instance runs a sample CUDA vector addition application. The resource requests are sufficient to run the workload.
properties:
restartPolicy: OnFailure ```
-Deploy the container group with the [az container create][az-container-create] command, specifying the YAML file name for the `--file` parameter. You need to supply the name of a resource group and a location for the container group such as *eastus* that supports GPU resources.
+Deploy the container group with the [az container create][az-container-create] command, specifying the YAML file name for the `--file` parameter. You need to supply the name of a resource group and a location for the container group such as *eastus* that supports GPU resources.
```azurecli-interactive az container create --resource-group myResourceGroup --file gpu-deploy-aci.yaml --location eastus
container-instances Container Instances Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-managed-identity.md
-+ Last updated 06/17/2022
Use a managed identity in a running container to authenticate to any [service th
### Enable a managed identity
- When you create a container group, enable one or more managed identities by setting a [ContainerGroupIdentity](/rest/api/container-instances/2022-09-01/container-groups/create-or-update#containergroupidentity) property. You can also enable or update managed identities after a container group is running - either action causes the container group to restart. To set the identities on a new or existing container group, use the Azure CLI, a Resource Manager template, a YAML file, or another Azure tool.
+ When you create a container group, enable one or more managed identities by setting a [ContainerGroupIdentity](/rest/api/container-instances/2022-09-01/container-groups/create-or-update#containergroupidentity) property. You can also enable or update managed identities after a container group is running - either action causes the container group to restart. To set the identities on a new or existing container group, use the Azure CLI, a Resource Manager template, a YAML file, or another Azure tool.
Azure Container Instances supports both types of managed Azure identities: user-assigned and system-assigned. On a container group, you can enable a system-assigned identity, one or more user-assigned identities, or both types of identities. If you're unfamiliar with managed identities for Azure resources, see the [overview](../active-directory/managed-identities-azure-resources/overview.md).
To use a managed identity, the identity must be granted access to one or more Az
## Create an Azure key vault
-The examples in this article use a managed identity in Azure Container Instances to access an Azure key vault secret.
+The examples in this article use a managed identity in Azure Container Instances to access an Azure key vault secret.
First, create a resource group named *myResourceGroup* in the *eastus* location with the following [az group create](/cli/azure/group#az-group-create) command:
First, create a resource group named *myResourceGroup* in the *eastus* location
az group create --name myResourceGroup --location eastus ```
-Use the [az keyvault create](/cli/azure/keyvault#az-keyvault-create) command to create a key vault. Be sure to specify a unique key vault name.
+Use the [az keyvault create](/cli/azure/keyvault#az-keyvault-create) command to create a key vault. Be sure to specify a unique key vault name.
```azurecli-interactive az keyvault create \ --name mykeyvault \
- --resource-group myResourceGroup \
+ --resource-group myResourceGroup \
--location eastus ```
Run the following [az keyvault set-policy](/cli/azure/keyvault) command to set a
### Enable user-assigned identity on a container group
-Run the following [az container create](/cli/azure/container#az-container-create) command to create a container instance based on Microsoft's `azure-cli` image. This example provides a single-container group that you can use interactively to run the Azure CLI to access other Azure services. In this section, only the base operating system is used. For an example to use the Azure CLI in the container, see [Enable system-assigned identity on a container group](#enable-system-assigned-identity-on-a-container-group).
+Run the following [az container create](/cli/azure/container#az-container-create) command to create a container instance based on Microsoft's `azure-cli` image. This example provides a single-container group that you can use interactively to run the Azure CLI to access other Azure services. In this section, only the base operating system is used. For an example to use the Azure CLI in the container, see [Enable system-assigned identity on a container group](#enable-system-assigned-identity-on-a-container-group).
The `--assign-identity` parameter passes your user-assigned managed identity to the group. The long-running command keeps the container running. This example uses the same resource group used to create the key vault, but you could specify a different one.
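A hedged sketch of what that command can look like; the image and names are illustrative, and `$resourceID` is assumed to hold the user-assigned identity's resource ID from the earlier identity-creation step:

```azurecli-interactive
az container create \
  --resource-group myResourceGroup \
  --name mycontainer \
  --image mcr.microsoft.com/azure-cli \
  --assign-identity $resourceID \
  --command-line "tail -f /dev/null"
```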
The response looks similar to the following, showing the secret. In your code, y
### Enable system-assigned identity on a container group
-Run the following [az container create](/cli/azure/container#az-container-create) command to create a container instance based on Microsoft's `azure-cli` image. This example provides a single-container group that you can use interactively to run the Azure CLI to access other Azure services.
+Run the following [az container create](/cli/azure/container#az-container-create) command to create a container instance based on Microsoft's `azure-cli` image. This example provides a single-container group that you can use interactively to run the Azure CLI to access other Azure services.
The `--assign-identity` parameter with no additional value enables a system-assigned managed identity on the group. The identity is scoped to the resource group of the container group. The long-running command keeps the container running. This example uses the same resource group used to create the key vault, which is in the scope of the identity.
A user-assigned identity is a resource ID of the form:
``` "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ManagedIdentity/userAssignedIdentities/{identityName}"
-```
+```
You can enable one or more user-assigned identities.
Specify a minimum `apiVersion` of `2018-10-01`.
### User-assigned identity
-A user-assigned identity is a resource ID of the form
+A user-assigned identity is a resource ID of the form
``` '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ManagedIdentity/userAssignedIdentities/{identityName}'
container-instances Container Instances Readiness Probe https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-readiness-probe.md
-+ Last updated 06/17/2022
type: Microsoft.ContainerInstance/containerGroups
### Start command
-The deployment includes a `command` property defining a starting command that runs when the container first starts running. This property accepts an array of strings. This command simulates a time when the web app runs but the container isn't ready.
+The deployment includes a `command` property defining a starting command that runs when the container first starts running. This property accepts an array of strings. This command simulates a time when the web app runs but the container isn't ready.
First, it starts a shell session and runs a `node` command to start the web app. It also starts a command to sleep for 240 seconds, after which it creates a file called `ready` within the `/tmp` directory:
node /usr/src/app/index.js & (sleep 240; touch /tmp/ready); wait
This YAML file defines a `readinessProbe` which supports an `exec` readiness command that acts as the readiness check. This example readiness command tests for the existence of the `ready` file in the `/tmp` directory.
-When the `ready` file doesn't exist, the readiness command exits with a non-zero value; the container continues running but can't be accessed. When the command exits successfully with exit code 0, the container is ready to be accessed.
+When the `ready` file doesn't exist, the readiness command exits with a non-zero value; the container continues running but can't be accessed. When the command exits successfully with exit code 0, the container is ready to be accessed.
The `periodSeconds` property designates the readiness command should execute every 5 seconds. The readiness probe runs for the lifetime of the container group.
az container create --resource-group myResourceGroup --file readiness-probe.yaml
In this example, during the first 240 seconds, the readiness command fails when it checks for the `ready` file's existence. The status code returned signals that the container isn't ready.
-These events can be viewed from the Azure portal or Azure CLI. For example, the portal shows events of type `Unhealthy` are triggered upon the readiness command failing.
+These events can be viewed from the Azure portal or the Azure CLI. For example, the portal shows that events of type `Unhealthy` are triggered when the readiness command fails.
![Portal unhealthy event][portal-unhealthy]
wget 192.0.2.1
```output --2019-10-15 16:46:02-- http://192.0.2.1/ Connecting to 192.0.2.1... connected.
-HTTP request sent, awaiting response...
+HTTP request sent, awaiting response...
``` After 240 seconds, the readiness command succeeds, signaling the container is ready. Now, when you run the `wget` command, it succeeds:
HTTP request sent, awaiting response...200 OK
Length: 1663 (1.6K) [text/html] Saving to: 'https://docsupdatetracker.net/index.html.1'
-https://docsupdatetracker.net/index.html.1 100%[===============================================================>] 1.62K --.-KB/s in 0s
+https://docsupdatetracker.net/index.html.1 100%[===============================================================>] 1.62K --.-KB/s in 0s
-2019-10-15 16:49:38 (113 MB/s) - 'https://docsupdatetracker.net/index.html.1' saved [1663/1663]
+2019-10-15 16:49:38 (113 MB/s) - 'https://docsupdatetracker.net/index.html.1' saved [1663/1663]
```

When the container is ready, you can also access the web app by browsing to the IP address using a web browser.

> [!NOTE]
-> The readiness probe continues to run for the lifetime of the container group. If the readiness command fails at a later time, the container again becomes inaccessible.
->
+> The readiness probe continues to run for the lifetime of the container group. If the readiness command fails at a later time, the container again becomes inaccessible.
+>
## Next steps
container-instances Container Instances Restart Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-restart-policy.md
Title: Restart policy for run-once tasks
+ Title: Restart policy for run-once tasks
description: Learn how to use Azure Container Instances to execute tasks that run to completion, such as in build, test, or image rendering jobs. -+ Last updated 06/17/2022
container-instances Container Instances Start Command https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-start-command.md
-+ Last updated 06/17/2022
Like setting [environment variables](container-instances-environment-variables.m
* Depending on the container configuration, you might need to set a full path to the command line executable or arguments.
-* Set an appropriate [restart policy](container-instances-restart-policy.md) for the container instance, depending on whether the command-line specifies a long-running task or a run-once task. For example, a restart policy of `Never` or `OnFailure` is recommended for a run-once task.
+* Set an appropriate [restart policy](container-instances-restart-policy.md) for the container instance, depending on whether the command-line specifies a long-running task or a run-once task. For example, a restart policy of `Never` or `OnFailure` is recommended for a run-once task.
* If you need information about the default entrypoint set in a container image, use the [docker image inspect](https://docs.docker.com/engine/reference/commandline/image_inspect/) command.
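If it helps to see that inspection in practice, here's a small sketch; the image name is illustrative:

```bash
# Print the image's default entrypoint and command as JSON.
docker image inspect mcr.microsoft.com/azuredocs/aci-helloworld \
  --format '{{json .Config.Entrypoint}} {{json .Config.Cmd}}'
```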
The command line syntax varies depending on the Azure API or tool used to create
* [New-AzureRmContainerGroup][new-azurermcontainergroup] Azure PowerShell cmdlet: Pass a string with the `-Command` parameter. Example: `-Command "echo hello"`.
-* Azure portal: In the **Command override** property of the container configuration, provide a comma-separated list of strings, without quotes. Example: `python, myscript.py, arg1, arg2`).
+* Azure portal: In the **Command override** property of the container configuration, provide a comma-separated list of strings, without quotes. Example: `python, myscript.py, arg1, arg2`.
-* Resource Manager template or YAML file, or one of the Azure SDKs: Specify the command line property as an array of strings. Example: the JSON array `["python", "myscript.py", "arg1", "arg2"]` in a Resource Manager template.
+* Resource Manager template or YAML file, or one of the Azure SDKs: Specify the command line property as an array of strings. Example: the JSON array `["python", "myscript.py", "arg1", "arg2"]` in a Resource Manager template.
If you're familiar with [Dockerfile](https://docs.docker.com/engine/reference/builder/) syntax, this format is similar to the *exec* form of the CMD instruction. ### Examples
-| | Azure CLI | Portal | Template |
+| | Azure CLI | Portal | Template |
| - | - | - | - |
| **Single command** | `--command-line "python myscript.py arg1 arg2"` | **Command override**: `python, myscript.py, arg1, arg2` | `"command": ["python", "myscript.py", "arg1", "arg2"]` |
| **Multiple commands** | `--command-line "/bin/bash -c 'mkdir test; touch test/myfile; tail -f '"` | **Command override**: `/bin/bash, -c, mkdir test; touch test/myfile; tail -f ` | `"command": ["/bin/bash", "-c", "mkdir test; touch test/myfile; tail -f "]` |
container-instances Container Instances Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-troubleshooting.md
Last updated 06/17/2022-+ # Troubleshoot common issues in Azure Container Instances
When running container groups without long-running processes you may see repeate
az container create -g MyResourceGroup --name myapp --image ubuntu --command-line "tail -f " ```
-```azurecli-interactive
+```azurecli-interactive
## Deploying a Windows container az container create -g myResourceGroup --name mywindowsapp --os-type Windows --image mcr.microsoft.com/windows/servercore:ltsc2019 --command-line "ping -t localhost"
If you want to confirm that Azure Container Instances can listen on the port you
--ip-address Public --ports 9000 \ --environment-variables 'PORT'='9000' ```
-1. Find the IP address of the container group in the command output of `az container create`. Look for the value of **ip**.
-1. After the container is provisioned successfully, browse to the IP address and port of the container application in your browser, for example: `192.0.2.0:9000`.
+1. Find the IP address of the container group in the command output of `az container create`. Look for the value of **ip**.
+1. After the container is provisioned successfully, browse to the IP address and port of the container application in your browser, for example: `192.0.2.0:9000`.
You should see the "Welcome to Azure Container Instances!" message displayed by the web app. 1. When you're done with the container, remove it using the `az container delete` command:
If you want to confirm that Azure Container Instances can listen on the port you
az container delete --resource-group myResourceGroup --name mycontainer ```
-## Issues during confidential container group deployments
+## Issues during confidential container group deployments
-### Policy errors while using custom CCE policy
+### Policy errors while using custom CCE policy
-Custom CCE policies must be generated the [Azure CLI confcom extension](https://github.com/Azure/azure-cli-extensions/blob/main/src/confcom/azext_confcom/README.md). Before generating the policy, ensure that all properties specified in your ARM template are valid and match what you expect to be represented in a confidential computing policy. Some properties to validate include the container image, environment variables, volume mounts, and container commands.
+Custom CCE policies must be generated with the [Azure CLI confcom extension](https://github.com/Azure/azure-cli-extensions/blob/main/src/confcom/azext_confcom/README.md). Before generating the policy, ensure that all properties specified in your ARM template are valid and match what you expect to be represented in a confidential computing policy. Some properties to validate include the container image, environment variables, volume mounts, and container commands.
-### Missing hash from policy
+### Missing hash from policy
-The Azure CLI confcom extension will use cached images on your local machine which may not match those that are available remotely which can result in layer mismatch when the policy is validated. Please ensure that you remove any old images and pull the latest container images to your local environment. Once you are sure that you have the latest SHA, you should regenerate the CCE policy.
+The Azure CLI confcom extension uses cached images on your local machine, which may not match those available remotely and can result in a layer mismatch when the policy is validated. Remove any old images and pull the latest container images to your local environment. Once you're sure that you have the latest SHA, regenerate the CCE policy.
### Process/container terminated with exit code: 139
-This exit code occurs due to limitations with the Ubuntu Version 22.04 base image. The recommendation is to use a different base image to resolve this issue.
+This exit code occurs due to limitations with the Ubuntu Version 22.04 base image. The recommendation is to use a different base image to resolve this issue.
## Next steps
container-instances Container Instances Tutorial Azure Function Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-tutorial-azure-function-trigger.md
Last updated 06/17/2022-+ # Tutorial: Use an HTTP-triggered Azure function to create a container group [Azure Functions](../azure-functions/functions-overview.md) is a serverless compute service that can run scripts or code in response to a variety of events, such as an HTTP request, a timer, or a message in an Azure Storage queue.
-In this tutorial, you create an Azure function that takes an HTTP request and triggers deployment of a [container group](container-instances-container-groups.md). This example shows the basics of using Azure Functions to automatically create resources in Azure Container Instances. Modify or extend the example for more complex scenarios or other event triggers.
+In this tutorial, you create an Azure function that takes an HTTP request and triggers deployment of a [container group](container-instances-container-groups.md). This example shows the basics of using Azure Functions to automatically create resources in Azure Container Instances. Modify or extend the example for more complex scenarios or other event triggers.
You learn how to:
This article assumes you publish the project using the name *myfunctionapp*, in
## Enable an Azure-managed identity in the function app
-The following commands enable a system-assigned [managed identity](../app-service/overview-managed-identity.md?toc=/azure/azure-functions/toc.json#add-a-system-assigned-identity) in your function app. The PowerShell host running the app can automatically authenticate to Azure using this identity, enabling functions to take actions on Azure services to which the identity is granted access. In this tutorial, you grant the managed identity permissions to create resources in the function app's resource group.
+The following commands enable a system-assigned [managed identity](../app-service/overview-managed-identity.md?toc=/azure/azure-functions/toc.json#add-a-system-assigned-identity) in your function app. The PowerShell host running the app can automatically authenticate to Azure using this identity, enabling functions to take actions on Azure services to which the identity is granted access. In this tutorial, you grant the managed identity permissions to create resources in the function app's resource group.
[Add an identity](../app-service/overview-managed-identity.md?tabs=ps%2Cdotnet) to the function app:
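The article's own commands are elided in this excerpt. As a rough Azure CLI equivalent (the function app and resource group names follow the tutorial's *myfunctionapp* placeholder), enabling the system-assigned identity might look like:

```azurecli-interactive
az functionapp identity assign \
  --name myfunctionapp \
  --resource-group myfunctionapp
```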
if ($name) {
``` This example creates a container group consisting of a single container instance running the `alpine` image. The container runs a single `echo` command and then terminates. In a real-world example, you might trigger creation of one or more container groups for running a batch job.
-
+ ## Test function app locally Ensure that the function runs locally before republishing the function app project to Azure. When run locally, the function doesn't create Azure resources. However, you can test the function flow with and without passing a name value in a query string. To debug the function, see [Debug PowerShell Azure Functions locally](../azure-functions/functions-debug-powershell-local.md).
https://myfunctionapp.azurewebsites.net/api/HttpTrigger
### Run function without passing a name
-As a first test, run the `curl` command and pass the function URL without appending a `name` query string.
+As a first test, run the `curl` command and pass the function URL without appending a `name` query string.
```bash curl --verbose "https://myfunctionapp.azurewebsites.net/api/HttpTrigger"
The function returns status code 200 and the text `This HTTP triggered function
> Host: myfunctionapp.azurewebsites.net > User-Agent: curl/7.64.1 > Accept: */*
->
+>
* Connection state changed (MAX_CONCURRENT_STREAMS updated)! < HTTP/1.1 200 OK < Content-Length: 135 < Content-Type: text/plain; charset=utf-8 < Request-Context: appId=cid-v1:d0bd0123-f713-4579-8990-bb368a229c38 < Date: Wed, 10 Jun 2020 17:50:27 GMT
-<
+<
* Connection #0 to host myfunctionapp.azurewebsites.net left intact This HTTP triggered function executed successfully. Pass a name in the query string or in the request body for a personalized response.* Closing connection 0 ```
The function returns status code 200 and triggers the creation of the container
> Host: myfunctionapp.azurewebsites.net > User-Agent: curl/7.64.1 > Accept: */*
->
+>
< HTTP/1.1 200 OK < Content-Length: 92 < Content-Type: text/plain; charset=utf-8 < Request-Context: appId=cid-v1:d0bd0123-f713-4579-8990-bb368a229c38 < Date: Wed, 10 Jun 2020 17:54:31 GMT
-<
+<
* Connection #0 to host myfunctionapp.azurewebsites.net left intact This HTTP triggered function executed successfully. Started container group mycontainergroup* Closing connection 0 ```
Verify that the container ran with the [Get-AzContainerInstanceLog][get-azcontai
```azurecli-interactive Get-AzContainerInstanceLog -ResourceGroupName myfunctionapp `
- -ContainerGroupName mycontainergroup
+ -ContainerGroupName mycontainergroup
``` Sample output:
In this tutorial, you created an Azure function that takes an HTTP request and t
For a detailed example to launch and monitor a containerized job, see the blog post [Event-Driven Serverless Containers with PowerShell Azure Functions and Azure Container Instances](https://dev.to/azure/event-driven-serverless-containers-with-powershell-azure-functions-and-azure-container-instances-e9b) and accompanying [code sample](https://github.com/anthonychu/functions-powershell-run-aci).
-See the [Azure Functions documentation](../azure-functions/index.yml) for detailed guidance on creating Azure functions and publishing a functions project.
+See the [Azure Functions documentation](../azure-functions/index.yml) for detailed guidance on creating Azure functions and publishing a functions project.
<!-- IMAGES -->
container-instances Container Instances Tutorial Deploy Confidential Containers Cce Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-tutorial-deploy-confidential-containers-cce-arm.md
Last updated 05/23/2023-+ # Tutorial: Create an ARM template for a confidential container deployment with custom confidential computing enforcement policy
-Confidential containers on ACI is a SKU on the serverless platform that enables customers to run container applications in a hardware-based and attested trusted execution environment (TEE), which can protect data in use and provides in-memory encryption via Secure Nested Paging.
+Confidential containers on ACI is a SKU on the serverless platform that enables customers to run container applications in a hardware-based and attested trusted execution environment (TEE), which can protect data in use and provides in-memory encryption via Secure Nested Paging.
In this article, you'll: > [!div class="checklist"] > * Create an ARM template for a confidential container group > * Generate a confidential computing enforcement (CCE) policy
-> * Deploy the confidential container group to Azure
+> * Deploy the confidential container group to Azure
## Before you begin
In this article, you'll:
In this tutorial, you deploy a hello world application that generates a hardware attestation report. You start by creating an ARM template with a container group resource to define the properties of this application. You'll use this ARM template with the Azure CLI confcom tooling to generate a confidential computing enforcement (CCE) policy for attestation. In this tutorial, we use this [ARM template](https://raw.githubusercontent.com/Azure-Samples/aci-confidential-hello-world/main/template.json?token=GHSAT0AAAAAAB5B6SJ7VUYU3G6MMQUL7KKKY7QBZBA). To view the source code for this application, visit [ACI Confidential Hello World](https://aka.ms/ccacihelloworld).
-> [!NOTE]
-> The ccePolicy parameter of the template is blank and needs to be updated based on the next step of this tutorial.
+> [!NOTE]
+> The ccePolicy parameter of the template is blank and needs to be updated based on the next step of this tutorial.
-There are two properties added to the Azure Container Instance resource definition to make the container group confidential:
+There are two properties added to the Azure Container Instance resource definition to make the container group confidential:
-1. **sku**: The SKU property enables you to select between confidential and standard container group deployments. If this property isn't added, the container group will be deployed as standard SKU.
+1. **sku**: The SKU property enables you to select between confidential and standard container group deployments. If this property isn't added, the container group will be deployed as standard SKU.
2. **confidentialComputeProperties**: The confidentialComputeProperties object enables you to pass in a custom confidential computing enforcement policy for attestation of your container group. If this object isn't added to the resource, there will be no validation of the software components running within the container group. Use your preferred text editor to save this ARM template on your local machine as **template.json**.
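As a rough sketch (an excerpt, not the full template used in this tutorial), the two properties sit under the container group's `properties` object, with the policy value left blank until a later step:

```json
{
  "type": "Microsoft.ContainerInstance/containerGroups",
  "properties": {
    "sku": "Confidential",
    "confidentialComputeProperties": {
      "ccePolicy": ""
    }
  }
}
```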
You can see under **confidentialComputeProperties**, we have left a blank **cceP
} ```
-## Create a custom CCE Policy
+## Create a custom CCE Policy
With the ARM template that you've crafted and the Azure CLI confcom extension, you're able to generate a custom CCE policy. The CCE policy is used for attestation. The tool takes the ARM template as an input to generate the policy. The policy enforces the specific container images, environment variables, mounts, and commands, which can then be validated when the container group starts up. For more information on the Azure CLI confcom extension, see [Azure CLI confcom extension](https://github.com/Azure/azure-cli-extensions/blob/main/src/confcom/azext_confcom/README.md).
-1. To generate the CCE policy, you'll run the following command using the ARM template as input:
+1. To generate the CCE policy, you'll run the following command using the ARM template as input:
```azurecli-interactive az confcom acipolicygen -a .\template.json --print-policy
- ```
+ ```
- When this command completes, you should see a Base 64 string generated as output in the format seen below. This string is the CCE policy that you will copy and paste into your ARM template under the ccePolicy property.
+ When this command completes, you should see a Base 64 string generated as output in the format seen below. This string is the CCE policy that you will copy and paste into your ARM template under the ccePolicy property.
```output cGFja2FnZSBwb2xpY3kKCmFwaV9zdm4gOj0gIjAuOS4wIgoKaW1wb3J0IGZ1dHVyZS5rZXl3b3Jkcy5ldmVyeQppbXBvcnQgZnV0dXJlLmtleXdvcmRzLmluCgpmcmFnbWVudHMgOj0gWwpdCgpjb250YWluZXJzIDo9IFsKICAgIHsKICAgICAgICAiY29tbWFuZCI6IFsiL3BhdXNlIl0sCiAgICAgICAgImVudl9ydWxlcyI6IFt7InBhdHRlcm4iOiAiUEFUSD0vdXNyL2xvY2FsL3NiaW46L3Vzci9sb2NhbC9iaW46L3Vzci9zYmluOi91c3IvYmluOi9zYmluOi9iaW4iLCAic3RyYXRlZ3kiOiAic3RyaW5nIiwgInJlcXVpcmVkIjogdHJ1ZX0seyJwYXR0ZXJuIjogIlRFUk09eHRlcm0iLCAic3RyYXRlZ3kiOiAic3RyaW5nIiwgInJlcXVpcmVkIjogZmFsc2V9XSwKICAgICAgICAibGF5ZXJzIjogWyIxNmI1MTQwNTdhMDZhZDY2NWY5MmMwMjg2M2FjYTA3NGZkNTk3NmM3NTVkMjZiZmYxNjM2NTI5OTE2OWU4NDE1Il0sCiAgICAgICAgIm1vdW50cyI6IFtdLAogICAgICAgICJleGVjX3Byb2Nlc3NlcyI6IFtdLAogICAgICAgICJzaWduYWxzIjogW10sCiAgICAgICAgImFsbG93X2VsZXZhdGVkIjogZmFsc2UsCiAgICAgICAgIndvcmtpbmdfZGlyIjogIi8iCiAgICB9LApdCmFsbG93X3Byb3BlcnRpZXNfYWNjZXNzIDo9IHRydWUKYWxsb3dfZHVtcF9zdGFja3MgOj0gdHJ1ZQphbGxvd19ydW50aW1lX2xvZ2dpbmcgOj0gdHJ1ZQphbGxvd19lbnZpcm9ubWVudF92YXJpYWJsZV9kcm9wcGluZyA6PSB0cnVlCmFsbG93X3VuZW5jcnlwdGVkX3NjcmF0Y2ggOj0gdHJ1ZQoKCm1vdW50X2RldmljZSA6PSB7ICJhbGxvd2VkIiA6IHRydWUgfQp1bm1vdW50X2RldmljZSA6PSB7ICJhbGxvd2VkIiA6IHRydWUgfQptb3VudF9vdmVybGF5IDo9IHsgImFsbG93ZWQiIDogdHJ1ZSB9CnVubW91bnRfb3ZlcmxheSA6PSB7ICJhbGxvd2VkIiA6IHRydWUgfQpjcmVhdGVfY29udGFpbmVyIDo9IHsgImFsbG93ZWQiIDogdHJ1ZSB9CmV4ZWNfaW5fY29udGFpbmVyIDo9IHsgImFsbG93ZWQiIDogdHJ1ZSB9CmV4ZWNfZXh0ZXJuYWwgOj0geyAiYWxsb3dlZCIgOiB0cnVlIH0Kc2h1dGRvd25fY29udGFpbmVyIDo9IHsgImFsbG93ZWQiIDogdHJ1ZSB9CnNpZ25hbF9jb250YWluZXJfcHJvY2VzcyA6PSB7ICJhbGxvd2VkIiA6IHRydWUgfQpwbGFuOV9tb3VudCA6PSB7ICJhbGxvd2VkIiA6IHRydWUgfQpwbGFuOV91bm1vdW50IDo9IHsgImFsbG93ZWQiIDogdHJ1ZSB9CmdldF9wcm9wZXJ0aWVzIDo9IHsgImFsbG93ZWQiIDogdHJ1ZSB9CmR1bXBfc3RhY2tzIDo9IHsgImFsbG93ZWQiIDogdHJ1ZSB9CnJ1bnRpbWVfbG9nZ2luZyA6PSB7ICJhbGxvd2VkIiA6IHRydWUgfQpsb2FkX2ZyYWdtZW50IDo9IHsgImFsbG93ZWQiIDogdHJ1ZSB9CnNjcmF0Y2hfbW91bnQgOj0geyAiYWxsb3dlZCIgOiB0cnVlIH0Kc2NyYXRjaF91bm1vdW50IDo9IHsgImFsbG93ZWQiIDogdHJ1ZSB9CnJlYXNvbiA6PSB7ImVycm9ycyI6IGRhdGEuZnJhbWV3b3JrLmVycm9yc30K
With the ARM template that you've crafted and the Azure CLI confcom extension, y
![Screenshot of Build your own template in the editor button on deployment screen, PNG.](./media/container-instances-confidential-containers-tutorials/confidential-containers-cce-build-template.png)
-1. Select **Load file** and upload **template.json**, which you've modified by adding the CCE policy you generated in the previous steps.
+1. Select **Load file** and upload **template.json**, which you've modified by adding the CCE policy you generated in the previous steps.
![Screenshot of Load file button on template screen, PNG.](./media/container-instances-confidential-containers-tutorials/confidential-containers-cce-load-file.png)
-1. Click **Save**.
+1. Click **Save**.
1. Select or enter the following values.
Use the Azure portal or a tool such as the [Azure CLI](container-instances-quick
![Screenshot of overview page for container group instance, PNG.](media/container-instances-confidential-containers-tutorials/confidential-containers-cce-portal.png)
-3. Once its status is *Running*, navigate to the IP address in your browser.
+3. Once its status is *Running*, navigate to the IP address in your browser.
![Screenshot of browser view of app deployed using Azure Container Instances, PNG.](media/container-instances-confidential-containers-tutorials/confidential-containers-aci-hello-world.png) The presence of the attestation report below the Azure Container Instances logo confirms that the container is running on hardware that supports a TEE. If you deploy to hardware that does not support a TEE, for example by choosing a region where the ACI Confidential SKU is not available, no attestation report will be shown.
-## Next Steps
+## Next Steps
-Now that you have deployed a confidential container group on ACI, you can learn more about how policies are enforced.
+Now that you have deployed a confidential container group on ACI, you can learn more about how policies are enforced.
* [Confidential computing enforcement policies overview](./container-instances-confidential-overview.md) * [Azure CLI confcom extension examples](https://github.com/Azure/azure-cli-extensions/blob/main/src/confcom/azext_confcom/README.md)
container-instances Container Instances Tutorial Prepare Acr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-tutorial-prepare-acr.md
Last updated 06/17/2022-+ # Tutorial: Create an Azure container registry and push a container image
container-instances Container Instances Tutorial Prepare App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-tutorial-prepare-app.md
Last updated 06/17/2022-+ # Tutorial: Create a container image for deployment to Azure Container Instances
container-instances Container Instances Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-vnet.md
Last updated 06/17/2022-+ # Deploy container instances into an Azure virtual network [Azure Virtual Network](../virtual-network/virtual-networks-overview.md) provides secure, private networking for your Azure and on-premises resources. By deploying container groups into an Azure virtual network, your containers can communicate securely with other resources in the virtual network.
-This article shows how to use the [az container create][az-container-create] command in the Azure CLI to deploy container groups to either a new virtual network or an existing virtual network.
+This article shows how to use the [az container create][az-container-create] command in the Azure CLI to deploy container groups to either a new virtual network or an existing virtual network.
> [!IMPORTANT] > Before deploying container groups in virtual networks, we suggest checking the limitations first. For networking scenarios and limitations, see [Virtual network scenarios and resources for Azure Container Instances](container-instances-virtual-network-concepts.md). > [!IMPORTANT]
-> Container group deployment to a virtual network is generally available for Linux and Windows containers, in most regions where Azure Container Instances is available. For details, see [available-regions][available-regions].
+> Container group deployment to a virtual network is generally available for Linux and Windows containers, in most regions where Azure Container Instances is available. For details, see [available-regions][available-regions].
[!INCLUDE [network profile callout](./includes/network-profile/network-profile-callout.md)]
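As a rough sketch (the resource names and address prefixes below are placeholders, not values from this article), deploying a container group into a new virtual network might look like this:

```azurecli-interactive
# Create a container group and let ACI create the virtual network and delegated subnet
az container create \
  --resource-group myResourceGroup \
  --name appcontainer \
  --image mcr.microsoft.com/azuredocs/aci-helloworld \
  --vnet aci-vnet \
  --vnet-address-prefix 10.0.0.0/16 \
  --subnet aci-subnet \
  --subnet-address-prefix 10.0.0.0/24
```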
The log output should show that `wget` was able to connect and download the inde
### Example - YAML
-You can also deploy a container group to an existing virtual network by using a YAML file, a [Resource Manager template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.containerinstance/aci-vnet), or another programmatic method such as with the Python SDK.
+You can also deploy a container group to an existing virtual network by using a YAML file, a [Resource Manager template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.containerinstance/aci-vnet), or another programmatic method such as with the Python SDK.
For example, when using a YAML file, you can deploy to a virtual network with a subnet delegated to Azure Container Instances. Specify the following properties:
container-registry Container Registry Artifact Streaming https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-artifact-streaming.md
+
+ Title: "Artifact streaming in Azure Container Registry (Preview)"
+description: "Artifact streaming is a feature in Azure Container Registry to enhance and supercharge managing, scaling, and deploying artifacts through containerized platforms."
++++ Last updated : 12/14/2023+
+#customer intent: As a developer, I want artifact streaming capabilities so that I can efficiently deliver and serve containerized applications to end-users in real-time.
++
+# Artifact streaming in Azure Container Registry (Preview)
+
+Artifact streaming is a feature in Azure Container Registry that lets you store container images within a single registry and manage and stream them to Azure Kubernetes Service (AKS) clusters in multiple regions. This feature is designed to accelerate containerized workloads for Azure customers using AKS. With artifact streaming, you can easily scale workloads without having to wait for slow pull times for your nodes.
+
+## Use cases
+
+Here are a few scenarios where artifact streaming helps:
+
+**Deploying containerized applications to multiple regions**: With artifact streaming, you can store container images within a single registry and manage and stream them to AKS clusters in multiple regions. Artifact streaming deploys container applications to multiple regions without creating multiple registries or enabling geo-replication.
+
+**Reducing image pull latency**: Artifact streaming can reduce time to pod readiness by over 15%, depending on the size of the image, and it works best for images smaller than 30 GB. This feature reduces image pull latency and enables fast container startup, which is beneficial for software developers and system architects.
+
+**Effective scaling of containerized applications**: Artifact streaming provides the opportunity to design, build, and deploy containerized applications at a high scale.
+
+## Artifact streaming aspects
+
+Here are some key aspects of artifact streaming:
+
+* Customers with new and existing registries can start artifact streaming for specific repositories or tags.
+
+* Once artifact streaming is started, the original and the streaming artifact will be stored in the customer's ACR.
+
+* If the user decides to turn off artifact streaming for repositories or artifacts, the streaming and the original artifact will still be present.
+
+* If a customer deletes a repository or artifact with artifact streaming and Soft Delete enabled, then both the original and artifact streaming versions will be deleted. However, only the original version will be available on the soft delete blade.
+
+## Availability and pricing information
+
+Artifact streaming is only available in the **Premium** SKU [service tiers](container-registry-skus.md). Artifact streaming can increase overall registry storage consumption, and you might incur additional storage charges, as outlined in our [pricing](https://azure.microsoft.com/pricing/details/container-registry/), if consumption exceeds the 500 GiB included with the Premium SKU.
+
+## Preview limitations
+
+Artifact streaming is currently in preview. The following limitations apply:
+
+* Only images with Linux AMD64 architecture are supported in the preview release.
+* The preview release doesn't support Windows-based container images, and ARM64 images.
+* The preview release partially supports multi-architecture images; only the AMD64 architecture is supported.
+* For creating Ubuntu-based node pools in AKS, choose Ubuntu version 20.04 or higher.
+* For Kubernetes, use Kubernetes version 1.26 or higher.
+* Only Premium SKU registries support generating streaming artifacts in the preview release; non-Premium SKU registries don't offer this functionality during the preview.
+* Registries encrypted with customer-managed keys (CMK) aren't supported in the preview release.
+* Kubernetes regcred is currently not supported.
+
+## Prerequisites
+
+* You can use the [Azure Cloud Shell][Azure Cloud Shell] or a local installation of the Azure CLI to run the command examples in this article. If you'd like to use it locally, version 2.54.0 or later is required. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][Install Azure CLI].
+
+* Sign in to the [Azure portal](https://ms.portal.azure.com/).
+
+## Start artifact streaming
+
+Start artifact streaming with a series of Azure CLI commands or Azure portal steps for pushing, importing, and generating streaming artifacts for container images in an Azure Container Registry (ACR). These instructions outline the process for creating a *Premium* [SKU](container-registry-skus.md) ACR, importing an image, generating a streaming artifact, and managing the artifact streaming operation. Make sure to replace the placeholders with your actual values where necessary.
+
+### Push/Import the image and generate the streaming artifact - Azure CLI
+
+Artifact streaming is available in the **Premium** container registry service tier. To start Artifact streaming, update a registry using the Azure CLI (version 2.54.0 or above). To install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
+
+Start artifact streaming by following these general steps:
+
+>[!NOTE]
+> If you already have a premium container registry, you can skip this step. If your registry is on the Basic or Standard SKU, the following commands will fail.
+> The commands are written in Azure CLI and can be run in interactive mode.
+> Replace the placeholders with actual values before running the commands.
+
+1. Create a new Azure Container Registry (ACR) using the premium SKU.
+
+ For example, run the [az group create][az-group-create] command to create an Azure Resource Group with name `my-streaming-test` in the West US region and then run the [az acr create][az-acr-create] command to create a premium Azure Container Registry with name `mystreamingtest` in that resource group.
+
+ ```azurecli-interactive
+ az group create -n my-streaming-test -l westus
+ az acr create -n mystreamingtest -g my-streaming-test -l westus --sku premium
+ ```
+
+2. Push or import an image to the registry.
+
+ For example, run the [az configure] command to configure the default ACR and [az acr import][az-acr-import] command to import a Jupyter Notebook image from Docker Hub into the `mystreamingtest` ACR.
+
+ ```azurecli-interactive
+ az configure --defaults acr="mystreamingtest"
+ az acr import --source docker.io/jupyter/all-spark-notebook:latest -t jupyter/all-spark-notebook:latest
+ ```
+
+3. Create a streaming artifact from the image
+
+ Initiates the creation of a streaming artifact from the specified image.
+
+ For example, run the [az acr artifact-streaming create][az-acr-artifact-streaming-create] command to create a streaming artifact from the `jupyter/all-spark-notebook:latest` image in the `mystreamingtest` ACR.
+
+ ```azurecli-interactive
+ az acr artifact-streaming create --image jupyter/all-spark-notebook:latest
+ ```
+
+>[!NOTE]
+> An operation ID is generated during the process for future reference to verify the status of the operation.
+
+4. Verify the generated streaming artifact in the Azure CLI.
+
+ For example, run the [az acr manifest list-referrers][az-acr-manifest-list-referrers] command to list the streaming artifacts for the `jupyter/all-spark-notebook:latest` image in the `mystreamingtest` ACR.
+
+ ```azurecli-interactive
+ az acr manifest list-referrers -n jupyter/all-spark-notebook:latest
+ ```
+
+5. Cancel the artifact streaming creation (if needed)
+
+ Cancel the streaming artifact creation if the conversion is not finished yet. It will stop the operation.
+
+ For example, run the [az acr artifact-streaming operation cancel][az-acr-artifact-streaming-operation-cancel] command to cancel the conversion operation for the `jupyter/all-spark-notebook:latest` image in the `mystreamingtest` ACR.
+
+ ```azurecli-interactive
+ az acr artifact-streaming operation cancel --repository jupyter/all-spark-notebook --id c015067a-7463-4a5a-9168-3b17dbe42ca3
+ ```
+
+6. Start auto-conversion on the repository
+
+ Start auto-conversion in the repository for newly pushed or imported images. When started, new images pushed into that repository will trigger the generation of streaming artifacts.
+
+ >[!NOTE]
+ > Auto-conversion does not apply to existing images. Existing images can be manually converted.
+
+ For example, run the [az acr artifact-streaming update][az-acr-artifact-streaming-update] command to start auto-conversion for the `jupyter/all-spark-notebook` repository in the `mystreamingtest` ACR.
+
+ ```azurecli-interactive
+ az acr artifact-streaming update --repository jupyter/all-spark-notebook --enable-streaming true
+ ```
+
+7. Verify the streaming conversion progress after pushing a new image `jupyter/all-spark-notebook:newtag` to the repository above.
+
+ For example, run the [az acr artifact-streaming operation show][az-acr-artifact-streaming-operation-show] command to check the status of the conversion operation for the `jupyter/all-spark-notebook:newtag` image in the `mystreamingtest` ACR.
+
+ ```azurecli-interactive
+ az acr artifact-streaming operation show --image jupyter/all-spark-notebook:newtag
+ ```
+
+>[!NOTE]
+> Artifact streaming can work across regions, regardless of whether geo-replication is enabled.
+> Artifact streaming can also work through a private endpoint attached to the registry.
+
+### Push/Import the image and generate the streaming artifact - Azure portal
+
+Artifact streaming is available in the *premium* [SKU](container-registry-skus.md) of Azure Container Registry. To start artifact streaming, update a registry using the Azure portal.
+
+Follow the steps to create artifact streaming in the [Azure portal](https://portal.azure.com).
+
+1. Navigate to your Azure Container Registry.
+
+2. In the side menu, under **Services**, select **Repositories**.
+
+3. Select the latest imported image.
+
+4. Convert the image and create artifact streaming in Azure portal.
+
+ > [!div class="mx-imgBorder"]
+ > [![A screenshot of Azure portal with the create streaming artifact button highlighted.](./media/container-registry-artifact-streaming/01-create-artifact-streaming-inline.png)](./media/container-registry-artifact-streaming/01-create-artifact-streaming-expanded.png#lightbox)
++
+5. Check the streaming artifact generated from the image in the **Referrers** tab.
+
+ > [!div class="mx-imgBorder"]
+ > [![A screenshot of Azure portal with the streaming artifact highlighted.](./media/container-registry-artifact-streaming/02-artifact-streaming-generated-inline.png)](./media/container-registry-artifact-streaming/02-artifact-streaming-generated-expanded.png#lightbox)
+
+6. You can also delete the streaming artifact from the repository blade.
+
+ > [!div class="mx-imgBorder"]
+ > [![A screenshot of Azure portal with the delete artifact streaming button highlighted.](./media/container-registry-artifact-streaming/04-delete-artifact-streaming-inline.png)](./media/container-registry-artifact-streaming/04-delete-artifact-streaming-expanded.png#lightbox)
+
+7. You can also enable auto-conversion on the repository blade. Active means auto-conversion is enabled on the repository. Inactive means auto-conversion is disabled on the repository.
+
+ > [!div class="mx-imgBorder"]
+ > [![A screenshot of Azure portal with the start artifact streaming button highlighted.](./media/container-registry-artifact-streaming/03-start-artifact-streaming-inline.png)](./media/container-registry-artifact-streaming/03-start-artifact-streaming-expanded.png#lightbox)
+
+> [!NOTE]
+> The artifact streaming state of a repository (inactive or active) determines whether newly pushed compatible images are automatically converted. By default, all repositories are inactive for artifact streaming, so pushing new compatible images doesn't trigger conversion. To convert newly pushed images automatically, set the repository's artifact streaming state to active; any compatible container image pushed afterward then triggers artifact streaming and is converted automatically.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Troubleshoot artifact streaming](troubleshoot-artifact-streaming.md)
+
+<!-- LINKS - External -->
+[Install Azure CLI]: /cli/azure/install-azure-cli
+[Azure Cloud Shell]: /azure/cloud-shell/quickstart
+[az-group-create]: /cli/azure/group#az-group-create
+[az-acr-import]: /cli/azure/acr#az-acr-import
+[az-acr-artifact-streaming-create]: /cli/azure/acr/artifact-streaming#az-acr-artifact-streaming-create
+[az-acr-manifest-list-referrers]: /cli/azure/acr/manifest#az-acr-manifest-list-referrers
+[az-acr-create]: /cli/azure/acr#az-acr-create
+[az-acr-artifact-streaming-operation-cancel]: /cli/azure/acr/artifact-streaming/operation#az-acr-artifact-streaming-operation-cancel
+[az-acr-artifact-streaming-operation-show]: /cli/azure/acr/artifact-streaming/operation#az-acr-artifact-streaming-operation-show
+[az-acr-artifact-streaming-update]: /cli/azure/acr/artifact-streaming#az-acr-artifact-streaming-update
+
container-registry Container Registry Authentication Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-authentication-managed-identity.md
Title: Authenticate with managed identity description: Provide access to images in your private container registry by using a user-assigned or system-assigned managed Azure identity. -+ Last updated 10/31/2023
-# Use an Azure managed identity to authenticate to an Azure container registry
+# Use an Azure managed identity to authenticate to an Azure container registry
Use a [managed identity for Azure resources](../active-directory/managed-identities-azure-resources/overview.md) to authenticate to an Azure container registry from another Azure resource, without needing to provide or manage registry credentials. For example, set up a user-assigned or system-assigned managed identity on a Linux VM to access container images from your container registry, as easily as you use a public registry. Or, set up an Azure Kubernetes Service cluster to use its [managed identity](../aks/cluster-container-registry-integration.md) to pull container images from Azure Container Registry for pod deployments.
For this article, you learn more about managed identities and how to:
> [!div class="checklist"] > * Enable a user-assigned or system-assigned identity on an Azure VM > * Grant the identity access to an Azure container registry
-> * Use the managed identity to access the registry and pull a container image
+> * Use the managed identity to access the registry and pull a container image
### [Azure CLI](#tab/azure-cli)
If you're not familiar with the managed identities for Azure resources feature,
After you set up selected Azure resources with a managed identity, give the identity the access you want to another resource, just like any security principal. For example, assign a managed identity a role with pull, push and pull, or other permissions to a private registry in Azure. (For a complete list of registry roles, see [Azure Container Registry roles and permissions](container-registry-roles.md).) You can give an identity access to one or more resources.
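For instance, a rough Azure CLI counterpart of the role assignment shown later with `New-AzRoleAssignment` might look like the following sketch, where `$spID` (the identity's principal ID) and `$resourceID` (the registry's resource ID) are assumed to have been set beforehand:

```azurecli-interactive
# Grant the managed identity pull access to the container registry
az role assignment create \
  --assignee $spID \
  --scope $resourceID \
  --role acrpull
```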
-Then, use the identity to authenticate to any [service that supports Microsoft Entra authentication](../active-directory/managed-identities-azure-resources/services-support-managed-identities.md#azure-services-that-support-azure-ad-authentication), without any credentials in your code. Choose how to authenticate using the managed identity, depending on your scenario. To use the identity to access an Azure container registry from a virtual machine, you authenticate with Azure Resource Manager.
+Then, use the identity to authenticate to any [service that supports Microsoft Entra authentication](../active-directory/managed-identities-azure-resources/services-support-managed-identities.md#azure-services-that-support-azure-ad-authentication), without any credentials in your code. Choose how to authenticate using the managed identity, depending on your scenario. To use the identity to access an Azure container registry from a virtual machine, you authenticate with Azure Resource Manager.
## Create a container registry
$vmParams = @{
ResourceGroupName = 'MyResourceGroup' Name = 'myDockerVM' Image = 'UbuntuLTS'
- PublicIpAddressName = 'myPublicIP'
+ PublicIpAddressName = 'myPublicIP'
GenerateSshKey = $true SshKeyName = 'mySSHKey' }
New-AzRoleAssignment -ObjectId $spID -Scope $resourceID -RoleDefinitionName AcrP
SSH into the Docker virtual machine that's configured with the identity. Run the following Azure CLI commands, using the Azure CLI installed on the VM.
-First, authenticate to the Azure CLI with [az login][az-login], using the identity you configured on the VM. For `<userID>`, substitute the ID of the identity you retrieved in a previous step.
+First, authenticate to the Azure CLI with [az login][az-login], using the identity you configured on the VM. For `<userID>`, substitute the ID of the identity you retrieved in a previous step.
```azurecli-interactive az login --identity --username <userID>
docker pull mycontainerregistry.azurecr.io/aci-helloworld:v1
SSH into the Docker virtual machine that's configured with the identity. Run the following Azure PowerShell commands, using the Azure PowerShell installed on the VM.
-First, authenticate to the Azure PowerShell with [Connect-AzAccount][connect-azaccount], using the identity you configured on the VM. For `-AccountId` specify a client ID of the identity.
+First, authenticate to the Azure PowerShell with [Connect-AzAccount][connect-azaccount], using the identity you configured on the VM. For `-AccountId` specify a client ID of the identity.
```azurepowershell-interactive $clientId = (Get-AzUserAssignedIdentity -ResourceGroupName myResourceGroup -Name myACRId).ClientId
docker pull mycontainerregistry.azurecr.io/aci-helloworld:v1
The following [az vm identity assign][az-vm-identity-assign] command configures your Docker VM with a system-assigned identity: ```azurecli-interactive
-az vm identity assign --resource-group myResourceGroup --name myDockerVM
+az vm identity assign --resource-group myResourceGroup --name myDockerVM
``` Use the [az vm show][az-vm-show] command to set a variable to the value of `principalId` (the service principal ID) of the VM's identity, to use in later steps.
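A sketch of that step (the resource group and VM names follow the examples in this article):

```azurecli-interactive
# Capture the service principal ID of the VM's system-assigned identity for later role assignment
spID=$(az vm show \
  --resource-group myResourceGroup \
  --name myDockerVM \
  --query identity.principalId --out tsv)
```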
The following [Update-AzVM][update-azvm] command configures your Docker VM with
```azurepowershell-interactive $vm = Get-AzVM -ResourceGroupName myResourceGroup -Name myDockerVM
-Update-AzVM -ResourceGroupName myResourceGroup -VM $vm -IdentityType SystemAssigned
+Update-AzVM -ResourceGroupName myResourceGroup -VM $vm -IdentityType SystemAssigned
``` Use the [Get-AzVM][get-azvm] command to set a variable to the value of `principalId` (the service principal ID) of the VM's identity, to use in later steps.
container-registry Troubleshoot Artifact Streaming https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/troubleshoot-artifact-streaming.md
+
+ Title: "Troubleshoot artifact streaming"
+description: "Troubleshoot artifact streaming in Azure Container Registry to diagnose and resolve with managing, scaling, and deploying artifacts through containerized platforms."
+++ Last updated : 10/31/2023+++
+# Troubleshoot artifact streaming
+
+The troubleshooting steps in this article can help you resolve common issues that you might encounter when using artifact streaming in Azure Container Registry (ACR). These steps and recommendations can help diagnose and resolve issues related to artifact streaming as well as provide insights into the underlying processes and logs for debugging purposes.
+
+## Symptoms
+
+* Conversion operation failed due to an unknown error.
+* Troubleshooting Failed AKS Pod Deployments.
+* Pod conditions indicate "UpgradeIfStreamableDisabled."
+* Using Digest Instead of Tag for Streaming Artifact
+
+## Causes
+
+* Issues with authentication, network latency, image retrieval, streaming operations, or other issues.
+* Issues with image pull or streaming, streaming artifacts configurations, image sources, and resource constraints.
+* Issues with ACR configurations or permissions.
+
+## Conversion operation failed
+
+| Error Code | Error Message | Troubleshooting Info |
+| | - | |
+| UNKNOWN_ERROR | Conversion operation failed due to an unknown error. | Caused by an internal error. Retrying can help; if the retry is unsuccessful, contact support. |
+| RESOURCE_NOT_FOUND | Conversion operation failed because the target resource isn't found. | The target image isn't found in the registry. Check for typos in the image digest, and verify that the image hasn't been deleted or isn't missing in the target region (for example, replication propagation isn't immediate). |
+| UNSUPPORTED_PLATFORM | Conversion is not currently supported for image platform. | Only linux/amd64 images are initially supported. |
+| NO_SUPPORTED_PLATFORM_FOUND | Conversion is not currently supported for any of the image platforms in the index. | Only linux/amd64 images are initially supported. No image with this platform is found in the target index. |
+| UNSUPPORTED_MEDIATYPE | Conversion is not supported for the image MediaType. | Conversion can only target images with media type: application/vnd.oci.image.manifest.v1+json, application/vnd.oci.image.index.v1+json, application/vnd.docker.distribution.manifest.v2+json or application/vnd.docker.distribution.manifest.list.v2+json |
+| UNSUPPORTED_ARTIFACT_TYPE | Conversion isn't supported for the image ArtifactType. | Streaming Artifacts (Artifact type: application/vnd.azure.artifact.streaming.v1) can't be converted again. |
+| IMAGE_NOT_RUNNABLE | Conversion isn't supported for nonrunnable images. | Only linux/amd64 runnable images are initially supported. |
+
+## Troubleshooting Failed AKS Pod Deployments
+
+If AKS pod deployment fails with an error related to image pulling, like the following example:
+
+```bash
+Failed to pull image "mystreamingtest.azurecr.io/jupyter/all-spark-notebook:latest":
+rpc error: code = Unknown desc = failed to pull and unpack image
+"mystreamingtest.azurecr.io/latestobd/jupyter/all-spark-notebook:latest":
+failed to resolve reference "mystreamingtest.azurecr.io/jupyter/all-spark-notebook:latest":
+unexpected status from HEAD request to http://localhost:8578/v2/jupyter/all-spark-notebook/manifests/latest?ns=mystreamingtest.azurecr.io:503 Service Unavailable
+```
+
+To troubleshoot this issue, you should check the following:
+
+1. Verify that the AKS cluster has permissions to access the container registry `mystreamingtest.azurecr.io`.
+1. Ensure that the container registry `mystreamingtest.azurecr.io` is accessible and properly attached to AKS (a sketch of these checks follows this list).
+
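+A rough sketch of both checks, assuming an AKS cluster named `myAKSCluster` in the resource group `myResourceGroup` (both placeholder names, not values from this article):
+
+```azurecli-interactive
+# Validate that the cluster can authenticate to and pull from the registry
+az aks check-acr \
+  --resource-group myResourceGroup \
+  --name myAKSCluster \
+  --acr mystreamingtest.azurecr.io
+
+# If the check fails, attach the registry to the cluster
+az aks update \
+  --resource-group myResourceGroup \
+  --name myAKSCluster \
+  --attach-acr mystreamingtest
+```
+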
+## Checking for "UpgradeIfStreamableDisabled" Pod Condition
+
+If the AKS pod condition shows "UpgradeIfStreamableDisabled," check if the image is from an Azure Container Registry.
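+
+To inspect the pod conditions and events that report this state, a generic check such as the following can help (pod name and namespace are placeholders):
+
+```bash
+# Review the pod's Conditions and Events sections for streaming-related messages
+kubectl describe pod <pod-name> -n <namespace>
+```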
+
+## Using Digest Instead of Tag for Streaming Artifact
+
+If you deploy the streaming artifact using a digest instead of a tag (for example, mystreamingtest.azurecr.io/jupyter/all-spark-notebook@sha256:4ef83ea6b0f7763c230e696709d8d8c398e21f65542db36e82961908bcf58d18), the AKS pod events and condition messages won't include streaming-related information. However, you still see fast container startup, because the underlying container engine streams the image to AKS when it detects that the image content is available for streaming.
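+
+As an illustration only (reusing the digest above), a deployment that references the streaming artifact by digest might look like:
+
+```bash
+# Run a pod that pulls the image by digest rather than by tag
+kubectl run spark-notebook \
+  --image=mystreamingtest.azurecr.io/jupyter/all-spark-notebook@sha256:4ef83ea6b0f7763c230e696709d8d8c398e21f65542db36e82961908bcf58d18
+```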
+
+## Related content
+
+> [!div class="nextstepaction"]
+> [Artifact streaming](./container-registry-artifact-streaming.md)
container-registry Tutorial Artifact Streaming Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/tutorial-artifact-streaming-cli.md
- Title: "Enable Artifact Streaming- Azure CLI"
-description: "Enable Artifact Streaming in Azure Container Registry using Azure CLI commands to enhance and supercharge managing, scaling, and deploying artifacts through containerized platforms."
---- Previously updated : 10/31/2023-
-# Artifact Streaming - Azure CLI
-
-Start Artifact Streaming with a series of Azure CLI commands for pushing, importing, and generating streaming artifacts for container images in an Azure Container Registry (ACR). These commands outline the process for creating a *Premium* [SKU](container-registry-skus.md) ACR, importing an image, generating a streaming artifact, and managing the artifact streaming operation. Make sure to replace the placeholders with your actual values where necessary.
-
-This article is part two in a four-part tutorial series. In this tutorial, you learn how to:
-
-> [!div class="checklist"]
-> * Push/Import the image and generate the streaming artifact - Azure CLI.
-
-## Prerequisites
-
-* You can use the [Azure Cloud Shell][Azure Cloud Shell] or a local installation of the Azure CLI to run the command examples in this article. If you'd like to use it locally, version 2.54.0 or later is required. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][Install Azure CLI].
-
-## Push/Import the image and generate the streaming artifact - Azure CLI
-
-Artifact Streaming is available in the **Premium** container registry service tier. To enable Artifact Streaming, update a registry using the Azure CLI (version 2.54.0 or above). To install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
-
-Enable Artifact Streaming by following these general steps:
-
->[!NOTE]
-> If you already have a premium container registry, you can skip this step. If the user is on Basic of Standard SKUs, the following commands will fail.
-> The code is written in Azure CLI and can be executed in an interactive mode.
-> Please note that the placeholders should be replaced with actual values before executing the command.
-
-Use the following command to create an Azure Resource Group with name `my-streaming-test` in the West US region and a premium Azure Container Registry with name `mystreamingtest` in that resource group.
-
-```azurecli-interactive
-az group create -n my-streaming-test -l westus
-az acr create -n mystreamingtest -g my-streaming-test -l westus --sku premium
-```
-
-To push or import an image to the registry, run the `az configure` command to configure the default ACR and `az acr import` command to import a Jupyter Notebook image from Docker Hub into the `mystreamingtest` ACR.
-
-```azurecli-interactive
-az configure --defaults acr="mystreamingtest"
-az acr import -source docker.io/jupyter/all-spark-notebook:latest -t jupyter/all-spark-notebook:latest
-```
-
-Use the following command to create a streaming artifact from the specified image. This example creates a streaming artifact from the `jupyter/all-spark-notebook:latest` image in the `mystreamingtest` ACR.
-
-```azurecli-interactive
-az acr artifact-streaming create --image jupyter/all-spark-notebook:latest
-```
-
-To verify the generated Artifact Streaming in the Azure CLI, run the `az acr manifest list-referrers` command. This command lists the streaming artifacts for the `jupyter/all-spark-notebook:latest` image in the `mystreamingtest` ACR.
-
-```azurecli-interactive
-az acr manifest list-referrers -n jupyter/all-spark-notebook:latest
-```
-
-If you need to cancel the streaming artifact creation, run the `az acr artifact-streaming operation cancel` command. This command stops the operation. For example, this command cancels the conversion operation for the `jupyter/all-spark-notebook:latest` image in the `mystreamingtest` ACR.
-
-```azurecli-interactive
-az acr artifact-streaming operation cancel --repository jupyter/all-spark-notebook --id c015067a-7463-4a5a-9168-3b17dbe42ca3
-```
-
-Enable auto-conversion in the repository for newly pushed or imported images. When enabled, new images pushed into that repository trigger the generation of streaming artifacts.
-
->[!NOTE]
-Auto-conversion does not apply to existing images. Existing images can be manually converted.
-
-For example, run the `az acr artifact-streaming update` command to enable auto-conversion for the `jupyter/all-spark-notebook` repository in the `mystreamingtest` ACR.
-
-```azurecli-interactive
-az acr artifact-streaming update --repository jupyter/all-spark-notebook --enable-streaming true
-```
-
-Use the `az acr artifact-streaming operation show` command to verify the streaming conversion progress. For example, this command checks the status of the conversion operation for the `jupyter/all-spark-notebook:newtag` image in the `mystreamingtest` ACR.
-
-```azurecli-interactive
-az acr artifact-streaming operation show --image jupyter/all-spark-notebook:newtag
-```
-
->[!NOTE]
-> Artifact Streaming can work across regions, regardless of whether geo-replication is enabled or not.
-> Artifact Streaming can work through a private endpoint and attach to it.
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Enable Artifact Streaming- Portal](tutorial-artifact-streaming-portal.md)
-
-<!-- LINKS - External -->
-[Install Azure CLI]: /cli/azure/install-azure-cli
-[Azure Cloud Shell]: /azure/cloud-shell/quickstart
container-registry Tutorial Artifact Streaming Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/tutorial-artifact-streaming-portal.md
- Title: "Enable Artifact Streaming- Portal"
-description: "Enable Artifact Streaming is a feature in Azure Container Registry in Azure portal to enhance and supercharge managing, scaling, and deploying artifacts through containerized platforms."
--- Previously updated : 10/31/2023--
-# Enable Artifact Streaming - Azure portal
-
-Start artifact streaming with a series of Azure portal steps for pushing, importing, and generating streaming artifacts for container images in an Azure Container Registry (ACR). These steps outline the process for creating a *premium* [SKU](container-registry-skus.md) ACR, importing an image, generating a streaming artifact, and managing the artifact streaming operation. Make sure to replace the placeholders with your actual values where necessary.
-
-This article is part three in a four-part tutorial series. In this tutorial, you learn how to:
-
-> [!div class="checklist"]
-> * Push/Import the image and generate the streaming artifact - Azure portal.
-
-## Prerequisites
-
-* Sign in to the [Azure portal](https://ms.portal.azure.com/).
-
-## Push/Import the image and generate the streaming artifact - Azure portal
-
-Complete the following steps to create artifact streaming in the [Azure portal](https://portal.azure.com).
-
-1. Navigate to your Azure Container Registry.
-
-1. In the side **Menu**, under the **Services**, select **Repositories**.
-
-1. Select the latest imported image.
-
-1. Convert the image and create artifact streaming in Azure portal.
-
- [ ![A screenshot of Azure portal with the create streaming artifact button highlighted](./media/container-registry-artifact-streaming/01-create-artifact-streaming-inline.png)](./media/container-registry-artifact-streaming/01-create-artifact-streaming-expanded.png#lightbox)
-
-1. Check the streaming artifact generated from the image in Referrers tab.
-
- [ ![A screenshot of Azure portal with the streaming artifact highlighted.](./media/container-registry-artifact-streaming/02-artifact-streaming-generated-inline.png) ](./media/container-registry-artifact-streaming/02-artifact-streaming-generated-expanded.png#lightbox)
-
-1. You can also delete the Artifact streaming from the repository blade.
-
- [ ![A screenshot of Azure portal with the delete artifact streaming button higlighted](./media/container-registry-artifact-streaming/04-delete-artifact-streaming-inline.png) ](./media/container-registry-artifact-streaming/04-delete-artifact-streaming-expanded.png#lightbox)
-
-1. You can also enable auto-conversion on the repository blade. Active means auto-conversion is enabled on the repository. Inactive means auto-conversion is disabled on the repository.
-
- [ ![A screenshot of Azure portal with the start artifact streaming button highlighted](./media/container-registry-artifact-streaming/03-start-artifact-streaming-inline.png) ](./media/container-registry-artifact-streaming/03-start-artifact-streaming-expanded.png#lightbox)
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Troubleshoot Artifact Streaming](tutorial-artifact-streaming-troubleshoot.md)
container-registry Tutorial Artifact Streaming https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/tutorial-artifact-streaming.md
- Title: "Tutorial: Artifact Streaming in Azure Container Registry (Preview)"
-description: "Artifact Streaming is a feature in Azure Container Registry to enhance and supercharge managing, scaling, and deploying artifacts through containerized platforms."
--- Previously updated : 10/31/2023-
-#customer intent: As a developer, I want artifact streaming capabilities so that I can efficiently deliver and serve containerized applications to end-users in real-time.
--
-# Tutorial: Artifact Streaming in Azure Container Registry (Preview)
-
-Azure Container Registry (ACR) artifact streaming is designed to accelerate containerized workloads for Azure customers using Azure Kubernetes Service (AKS). Artifact streaming empowers customers to easily scale workloads without having to wait for slow pull times for their node.
-
-For example, consider the scenario where you have a containerized application that you want to deploy to multiple regions. Traditionally, you have to create multiple container registries and enable geo-replication to ensure that your container images are available in all regions. This can be time-consuming and can degrade performance of the application.
-
-Leverage artifact streaming to store container images within a single registry and manage and stream container images to Azure Kubernetes Service (AKS) clusters in multiple regions. Artifact streaming deploys container applications to multiple regions without having to create multiple registries or enable geo-replication.
-
-Artifact streaming is only available in the **Premium** SKU [service tiers](container-registry-skus.md)
-
-This article is part one in a four-part tutorial series. In this tutorial, you learn how to:
-
-* [Artifact Streaming (Preview)](tutorial-artifact-streaming.md)
-* [Artifact Streaming - Azure CLI](tutorial-artifact-streaming-cli.md)
-* [Artifact Streaming - Azure portal](tutorial-artifact-streaming-portal.md)
-* [Troubleshoot Artifact Streaming](tutorial-artifact-streaming-troubleshoot.md)
-
-## Preview limitations
-
-Artifact streaming is currently in preview. The following limitations apply:
-
-* Only images with Linux AMD64 architecture are supported in the preview release.
-* The preview release doesn't support Windows-based container images and ARM64 images.
-* The preview release partially supports multi-architecture images (only AMD64 architecture is enabled).
-* For creating Ubuntu based node pools in AKS, choose Ubuntu version 20.04 or higher.
-* For Kubernetes, use Kubernetes version 1.26 or higher or k8s version > 1.25.
-* Only premium SKU registries support generating streaming artifacts in the preview release. Non-premium SKU registries do not offer this functionality during the preview release.
-* Customer-Managed Keys (CMK) registries are not supported in the preview release.
-* Kubernetes regcred is currently not supported.
-
-## Benefits of using artifact streaming
-
-Benefits of enabling and using artifact streaming at a registry level include:
-
-* Reduce image pull latency and fast container startup.
-* Seamless and agile experience for software developers and system architects.
-* Time and performance effective scaling mechanism to design, build, and deploy container applications and cloud solutions at high scale.
-* Simplify the process of deploying containerized applications to multiple regions using a single container registry and streaming container images to multiple regions.
-* Supercharge the process of deploying containerized platforms by simplifying the process of deploying and managing container images.
-
-## Considerations before using artifact streaming
-
-Here is a brief overview on how to use artifact streaming with Azure Container Registry (ACR).
-
-* Customers with new and existing registries can enable artifact streaming for specific repositories or tags.
-* Once you enable artifact streaming, two versions of the artifact are stored in the container registry: the original artifact and the artifact streaming artifact.
-* If you disable or turn off artifact streaming for repositories or artifacts, the artifact streaming copy and original artifact still exist.
-* If you delete a repository or artifact with artifact streaming and soft delete enabled, then both the original and artifact streaming versions are deleted. However, only the original version is available on the soft delete blade.
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Enable Artifact Streaming- Azure CLI](tutorial-artifact-streaming-cli.md)
copilot Build Infrastructure Deploy Workloads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/copilot/build-infrastructure-deploy-workloads.md
Title: Build infrastructure and deploy workloads using Microsoft Copilot for Azure (preview) description: Learn how Microsoft Copilot for Azure (preview) can help you build custom infrastructure for your workloads and provide templates and scripts to help you deploy. Previously updated : 11/15/2023 Last updated : 01/18/2024
Microsoft Copilot for Azure (preview) can help you quickly build custom infrastr
Throughout a conversation, Microsoft Copilot for Azure (preview) asks you questions to better understand your requirements and applications. Based on the provided information, it then provides several architecture options suitable for deploying that infrastructure. After you select an option, Microsoft Copilot for Azure (preview) provides detailed descriptions of the infrastructure, including how it can be configured. Finally, Microsoft Copilot for Azure provides templates and scripts using the language of your choice to deploy your infrastructure.
-To get help building infrastructure and deploying workloads, start on the **Virtual machines** page in the Azure portal. Select the arrow next to **Create**, then select **More VMs and related solutions**.
-
+To get help building infrastructure and deploying workloads, start on the [More virtual machines and related solutions](https://portal.azure.com/?feature.customportal=false#view/Microsoft_Azure_SolutionCenter/SolutionGroup.ReactView/groupid/defaultLandingVmBrowse) page in the Azure portal.
Once you're there, start the conversation by letting Microsoft Copilot for Azure (preview) know what you want to build and deploy.
data-factory Tutorial Managed Virtual Network On Premise Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-managed-virtual-network-on-premise-sql-server.md
the page.
|Region |Select **East US**.| |Availability Options |Select **Availability zones**.| |Availability zone |Select **1**.|
- |Image |Select **Ubuntu Server 18.04LTS - Gen1**.|
+ |Image |Select **Ubuntu Server 22.04 LTS**.|
|Azure Spot instance |Select **No**.| |Size |Choose VM size or take default setting.| |**Administrator account**||
data-share Concepts Roles Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-share/concepts-roles-permissions.md
The following shows an example of how the required actions will be listed in JSO
"Microsoft.Storage/storageAccounts/blobServices/containers/read",
-"Microsoft.Storage/storageAccounts/blobServices/generateUserDelegationKey/action",
+"Microsoft.Storage/storageAccounts/blobServices/generateUserDelegationKey/action",
+
+"Microsoft.Storage/storageAccounts/listkeys/action",
"Microsoft.DataShare/accounts/read",
defender-for-cloud Concept Cloud Security Posture Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-cloud-security-posture-management.md
The following table summarizes each plan and their cloud availability.
> [!NOTE]
-> Starting March 1, 2024, Defender CSPM must be enabled to have premium DevOps security capabilities that include code-to-cloud contextualization powering security explorer and attack paths and pull request annotations for Infrastructure-as-Code security findings. See DevOps security [support and prerequisites](devops-support.md) to learn more.
+> Starting March 7, 2024, Defender CSPM must be enabled to have premium DevOps security capabilities that include code-to-cloud contextualization powering security explorer and attack paths and pull request annotations for Infrastructure-as-Code security findings. See DevOps security [support and prerequisites](devops-support.md) to learn more.
## Integrations (preview)
You can choose which ticketing system to integrate. For preview, only ServiceNow
- Defender CSPM for GCP is free until January 31, 2024. -- From March 1, 2023, advanced DevOps security posture capabilities will only be available through the paid Defender CSPM plan. Free foundational security posture management in Defender for Cloud will continue providing a number of Azure DevOps recommendations. Learn more about [DevOps security features](devops-support.md#azure-devops).
+- From March 7, 2024, advanced DevOps security posture capabilities will only be available through the paid Defender CSPM plan. Free foundational security posture management in Defender for Cloud will continue providing a number of Azure DevOps recommendations. Learn more about [DevOps security features](devops-support.md#azure-devops).
- For subscriptions that use both Defender CSPM and Defender for Containers plans, free vulnerability assessment is calculated based on free image scans provided via the Defender for Containers plan, as summarized [in the Microsoft Defender for Cloud pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
defender-for-cloud Defender For Storage Malware Scan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-storage-malware-scan.md
Follow [these steps](tutorial-enable-storage-plan.md#set-up-and-configure-micros
If you have a file that you suspect might be malware or is being incorrectly detected, you can submit it to us for analysis through the [sample submission portal](/microsoft-365/security/intelligence/submission-guide). Select "Microsoft Defender for Storage" as the source.
-Malware Scanning doesn't block access or change permissions to the uploaded blob, even if it's malicious.
+Defender for Cloud allows you to [suppress false positive alerts](alerts-suppression-rules.md). Make sure to limit the suppression rule by using the malware name or file hash.
+
+Malware Scanning doesn't automatically block access or change permissions to the uploaded blob, even if it's malicious.
## Limitations
defender-for-cloud Devops Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/devops-support.md
DevOps security requires the following permissions:
The following tables summarize the availability and prerequisites for each feature within the supported DevOps platforms: > [!NOTE]
-> Starting March 1, 2024, [Defender CSPM](concept-cloud-security-posture-management.md) must be enabled to have premium DevOps security capabilities which include code-to-cloud contextualization powering security explorer and attack paths and pull request annotations for Infrastructure-as-Code security findings. See details below to learn more.
+> Starting March 7, 2024, [Defender CSPM](concept-cloud-security-posture-management.md) must be enabled to have premium DevOps security capabilities which include code-to-cloud contextualization powering security explorer and attack paths and pull request annotations for Infrastructure-as-Code security findings. See details below to learn more.
### Azure DevOps
defender-for-cloud Episode Forty Three https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-forty-three.md
+
+ Title: Unified insights from Microsoft Entra permissions management | Defender for Cloud in the field
+description: Learn about unified insights from Microsoft Entra permissions management
+ Last updated : 01/18/2024++
+# Unified insights from Microsoft Entra permissions management
+
+**Episode description**: In this episode of Defender for Cloud in the Field, Sean Lee joins Yuri Diogenes to talk about the new unified insights from Microsoft Entra permissions management (CIEM) into Microsoft Defender for Cloud to enable comprehensive risk mitigation. Sean explains how this integration enables teams to drive least privilege access controls for cloud resources, and receive actionable recommendations for resolving permission risks across Azure, AWS, and GCP. Sean also presents the recommendations included with this integration and demonstrates how to remediate them.
+
+> [!VIDEO https://aka.ms/docs/player?id=28414ce1-1acb-486a-a327-802a654edc38]
+
+- [01:48](/shows/mdc-in-the-field/unified-insights#time=01m48s) - Overview of Entra permission management
+- [02:55](/shows/mdc-in-the-field/unified-insights#time=02m55s) - Details about the integration with Defender for Cloud
+- [06:50](/shows/mdc-in-the-field/unified-insights#time=06m50s) - Demonstration
+
+## Recommended resources
+
+- Learn more about [enabling permissions management in Defender for Cloud](enable-permissions-management.md)
+- Learn more about [Microsoft Security](https://msft.it/6002T9HQY).
+- Subscribe to [Microsoft Security on YouTube](https://www.youtube.com/playlist?list=PL3ZTgFEc7LysiX4PfHhdJPR7S8mGO14YS).
+
+- Follow us on social media:
+
+ - [LinkedIn](https://www.linkedin.com/showcase/microsoft-security/)
+ - [Twitter](https://twitter.com/msftsecurity)
+
+- Join our [Tech Community](https://aka.ms/SecurityTechCommunity).
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [New AWS Connector in Microsoft Defender for Cloud](episode-one.md)
defender-for-cloud Episode Forty Two https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-forty-two.md
Title: Agentless secrets scanning for virtual machines | Defender for Cloud in the field description: Learn about agentless secrets scanning for virtual machines Previously updated : 01/08/2024 Last updated : 01/18/2024 # Agentless secrets scanning for virtual machines
Last updated 01/08/2024
- Learn more about [Microsoft Security](https://msft.it/6002T9HQY). - Subscribe to [Microsoft Security on YouTube](https://www.youtube.com/playlist?list=PL3ZTgFEc7LysiX4PfHhdJPR7S8mGO14YS). -- - Follow us on social media: - [LinkedIn](https://www.linkedin.com/showcase/microsoft-security/)
Last updated 01/08/2024
## Next steps > [!div class="nextstepaction"]
-> [New AWS Connector in Microsoft Defender for Cloud](episode-one.md)
+> [Unified insights from Microsoft Entra permissions management](episode-forty-three.md)
defender-for-cloud Quickstart Onboard Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-aws.md
Title: Connect your AWS account description: Defend your AWS resources by using Microsoft Defender for Cloud. -+ Last updated 01/03/2024
AWS Systems Manager (SSM) manages autoprovisioning by using the SSM Agent. Some
Ensure that your SSM Agent has the managed policy [AmazonSSMManagedInstanceCore](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AmazonSSMManagedInstanceCore.html). It enables core functionality for the AWS Systems Manager service. Enable these other extensions on the Azure Arc-connected machines:
-
+ - Microsoft Defender for Endpoint - A vulnerability assessment solution (TVM or Qualys) - The Log Analytics agent on Azure Arc-connected machines or the Azure Monitor agent
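As a reference for the SSM prerequisite above, here's a minimal AWS CLI sketch that attaches the AmazonSSMManagedInstanceCore managed policy to the IAM role used by your EC2 instance profile; the role name is illustrative:

```bash
# Attach the AmazonSSMManagedInstanceCore managed policy to the instance's IAM role.
# Replace the role name with the role that your EC2 instance profile uses.
aws iam attach-role-policy \
  --role-name MyEC2InstanceRole \
  --policy-arn arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore

# Verify that the SSM Agent is reporting in for your managed instances.
aws ssm describe-instance-information
```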
Ensure that your SSM Agent has the managed policy [AmazonSSMManagedInstanceCore]
If you want to manually install Azure Arc on your existing and future EC2 instances, use the [EC2 instances should be connected to Azure Arc](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/231dee23-84db-44d2-bd9d-c32fbcfb42a3) recommendation to identify instances that don't have Azure Arc installed. Enable these other extensions on the Azure Arc-connected machines:
-
+ - Microsoft Defender for Endpoint - A vulnerability assessment solution (TVM or Qualys) - The Log Analytics agent on Azure Arc-connected machines or the Azure Monitor agent
Deploy the CloudFormation template by using Stack (or StackSet if you have a man
- **Upload a template file**: AWS automatically creates an S3 bucket that the CloudFormation template is saved to. The automation for the S3 bucket has a security misconfiguration that causes the `S3 buckets should require requests to use Secure Socket Layer` recommendation to appear. You can remediate this recommendation by applying the following policy: ```bash
- {
-   "Id": "ExamplePolicy",
-   "Version": "2012-10-17",
-   "Statement": [
-     {
-       "Sid": "AllowSSLRequestsOnly",
-       "Action": "s3:*",
-       "Effect": "Deny",
-       "Resource": [
-         "<S3_Bucket ARN>",
-         "<S3_Bucket ARN>/*"
-       ],
-       "Condition": {
-         "Bool": {
-           "aws:SecureTransport": "false"
-         }
-       },
-       "Principal": "*"
-     }
-   ]
- }
+ {
+   "Id": "ExamplePolicy",
+   "Version": "2012-10-17",
+   "Statement": [
+     {
+       "Sid": "AllowSSLRequestsOnly",
+       "Action": "s3:*",
+       "Effect": "Deny",
+       "Resource": [
+         "<S3_Bucket ARN>",
+         "<S3_Bucket ARN>/*"
+       ],
+       "Condition": {
+         "Bool": {
+           "aws:SecureTransport": "false"
+         }
+       },
+       "Principal": "*"
+     }
+   ]
+ }
``` > [!NOTE]
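If you apply the policy outside the portal, one option is the AWS CLI; here's a minimal sketch, assuming the policy above is saved locally as `allow-ssl-only.json` and using an illustrative bucket name:

```bash
# Apply the SSL-only bucket policy to the S3 bucket that holds the CloudFormation template.
# The bucket name and file path are illustrative; replace them with your own values.
aws s3api put-bucket-policy \
  --bucket my-cfn-template-bucket \
  --policy file://allow-ssl-only.json

# Confirm that the policy is attached.
aws s3api get-bucket-policy --bucket my-cfn-template-bucket
```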
defender-for-cloud Recommendations Reference Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/recommendations-reference-aws.md
impact on your secure score.
### Data plane recommendations
-All the data plane recommendations listed [here](kubernetes-workload-protections.md#view-and-configure-the-bundle-of-recommendations) are supported under AWS after [enabling Azure Policy for Kubernetes](kubernetes-workload-protections.md#enable-kubernetes-data-plane-hardening).
+All the data plane recommendations listed [here](kubernetes-workload-protections.md#view-and-configure-the-bundle-of-recommendations) are supported under AWS after [enabling Azure Policy for Kubernetes](kubernetes-workload-protections.md#enable-kubernetes-data-plane-hardening).
## <a name='recs-aws-data'></a> AWS Data recommendations
defender-for-cloud Sql Information Protection Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/sql-information-protection-policy.md
SQL information protection's [data discovery and classification mechanism](/azur
The classification mechanism is based on the following two elements: -- **Labels** – The main classification attributes, used to define the *sensitivity level of the data* stored in the column.
+- **Labels** – The main classification attributes, used to define the *sensitivity level of the data* stored in the column.
- **Information Types** – Provides additional granularity into the *type of data* stored in the column.
-The information protection policy options within Defender for Cloud provide a predefined set of labels and information types which serve as the defaults for the classification engine. You can customize the policy, according to your organization's needs, as described below.
+The information protection policy options within Defender for Cloud provide a predefined set of labels and information types that serve as the defaults for the classification engine. You can customize the policy, according to your organization's needs, as described below.
:::image type="content" source="./media/sql-information-protection-policy/sql-information-protection-policy-page.png" alt-text="The page showing your SQL information protection policy.":::
-
-- ## How do I access the SQL information protection policy? There are three ways to access the information protection policy: - **(Recommended)** From the **Environment settings** page of Defender for Cloud-- From the security recommendation "Sensitive data in your SQL databases should be classified"
+- From the security recommendation *Sensitive data in your SQL databases should be classified*
- From the Azure SQL DB data discovery page Each of these is shown in the relevant tab below. -- ### [**From Defender for Cloud's settings**](#tab/sqlip-tenant) <a name="sqlip-tenant"></a>
From Defender for Cloud's **Environment settings** page, select **SQL informatio
:::image type="content" source="./media/sql-information-protection-policy/environment-settings-link-to-information-protection.png" alt-text="Accessing the SQL Information Protection policy from the environment settings page of Microsoft Defender for Cloud."::: -- ### [**From Defender for Cloud's recommendation**](#tab/sqlip-db) <a name="sqlip-db"></a> ### Access the policy from the Defender for Cloud recommendation
-Use Defender for Cloud's recommendation, "Sensitive data in your SQL databases should be classified", to view the data discovery and classification page for your database. There, you'll also see the columns discovered to contain information that we recommend you classify.
+Use Defender for Cloud's recommendation, *Sensitive data in your SQL databases should be classified*, to view the data discovery and classification page for your database. There, you'll also see the columns discovered to contain information that we recommend you classify.
1. From Defender for Cloud's **Recommendations** page, search for the recommendation **Sensitive data in your SQL databases should be classified**.
Use Defender for Cloud's recommendation, "Sensitive data in your SQL databases s
:::image type="content" source="./media/sql-information-protection-policy/access-policy-from-security-center-recommendation.png" alt-text="Opening the SQL information protection policy from the relevant recommendation in Microsoft Defender for Cloud's"::: -- ### [**From Azure SQL**](#tab/sqlip-azuresql) <a name="sqlip-azuresql"></a>
Use Defender for Cloud's recommendation, "Sensitive data in your SQL databases s
:::image type="content" source="./media/sql-information-protection-policy/access-policy-from-azure-sql.png" alt-text="Opening the SQL information protection policy from Azure SQL.":::
-
+ ## Customize your information types
To manage and customize information types:
:::image type="content" source="./media/sql-information-protection-policy/manage-types.png" alt-text="Manage information types for your information protection policy."::: 1. To add a new type, select **Create information type**. You can configure a name, description, and search pattern strings for the information type. Search pattern strings can optionally use keywords with wildcard characters (using the character '%'), which the automated discovery engine uses to identify sensitive data in your databases, based on the columns' metadata.
-
+ :::image type="content" source="./media/sql-information-protection-policy/configure-new-type.png" alt-text="Configure a new information type for your information protection policy.":::
-1. You can also modify the built-in types by adding additional search pattern strings, disabling some of the existing strings, or by changing the description.
+1. You can also modify the built-in types by adding additional search pattern strings, disabling some of the existing strings, or by changing the description.
> [!TIP]
- > You can't delete built-in types or change their names.
+ > You can't delete built-in types or change their names.
-1. **Information types** are listed in order of ascending discovery ranking, meaning that the types higher in the list will attempt to match first. To change the ranking between information types, drag the types to the right spot in the table, or use the **Move up** and **Move down** buttons to change the order.
+1. **Information types** are listed in order of ascending discovery ranking, meaning that the types higher in the list attempt to match first. To change the ranking between information types, drag the types to the right spot in the table, or use the **Move up** and **Move down** buttons to change the order.
-1. Select **OK** when you are done.
+1. Select **OK** when you're done.
-1. After you completed managing your information types, be sure to associate the relevant types with the relevant labels, by clicking **Configure** for a particular label, and adding or deleting information types as appropriate.
+1. After you completed managing your information types, be sure to associate the relevant types with the relevant labels, by selecting **Configure** for a particular label, and adding or deleting information types as appropriate.
1. To apply your changes, select **Save** in the main **Labels** page.
-
## Exporting and importing a policy
-You can download a JSON file with your defined labels and information types, edit the file in the editor of your choice, and then import the updated file.
+You can download a JSON file with your defined labels and information types, edit the file in the editor of your choice, and then import the updated file.
:::image type="content" source="./media/sql-information-protection-policy/export-import.png" alt-text="Exporting and importing your information protection policy."::: > [!NOTE]
-> You'll need tenant level permissions to import a policy file.
-
+> You'll need tenant level permissions to import a policy file.
## Permissions
-To customize the information protection policy for your Azure tenant, you'll need the following actions on the tenant's root management group:
- - Microsoft.Security/informationProtectionPolicies/read
- - Microsoft.Security/informationProtectionPolicies/write
+To customize the information protection policy for your Azure tenant, you need the following actions on the tenant's root management group:
+
+- Microsoft.Security/informationProtectionPolicies/read
+- Microsoft.Security/informationProtectionPolicies/write
Learn more in [Grant and request tenant-wide visibility](tenant-wide-permissions-management.md).
Learn more in [Grant and request tenant-wide visibility](tenant-wide-permissions
- [Get-AzSqlInformationProtectionPolicy](/powershell/module/az.security/get-azsqlinformationprotectionpolicy): Retrieves the effective tenant SQL information protection policy. - [Set-AzSqlInformationProtectionPolicy](/powershell/module/az.security/set-azsqlinformationprotectionpolicy): Sets the effective tenant SQL information protection policy.
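For example, a minimal Azure PowerShell sketch of the export-edit-import flow described earlier; the local file path is illustrative, and it assumes `Set-AzSqlInformationProtectionPolicy` takes the exported JSON policy file through its `-FilePath` parameter:

```azurepowershell
# Import an edited policy file (for example, one downloaded from the portal and modified locally).
# The file path is illustrative; point it at your exported JSON policy file.
Set-AzSqlInformationProtectionPolicy -FilePath "C:\policies\sql-information-protection-policy.json"

# Verify the effective tenant policy after the import.
Get-AzSqlInformationProtectionPolicy
```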
-
## Next steps
-
+ In this article, you learned about defining an information protection policy in Microsoft Defender for Cloud. To learn more about using SQL Information Protection to classify and protect sensitive data in your SQL databases, see [Azure SQL Database Data Discovery and Classification](/azure/azure-sql/database/data-discovery-and-classification-overview). For more information on security policies and data security in Defender for Cloud, see the following articles:
-
+ - [Setting security policies in Microsoft Defender for Cloud](tutorial-security-policy.md): Learn how to configure security policies for your Azure subscriptions and resource groups - [Microsoft Defender for Cloud data security](data-security.md): Learn how Defender for Cloud manages and safeguards data
defender-for-iot Update Ot Software https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/update-ot-software.md
This procedure describes how to send a software version update to one or more OT
:::image type="content" source="media/update-ot-software/remote-update-step-1.png" alt-text="Screenshot of the Send package option." lightbox="media/update-ot-software/remote-update-step-1.png":::
-1. In the **Send package** pane that appears, check to make sure that you're sending the software to the sensor you want to update. To jump to the release notes for the new version, select **Learn more** at the top of the pane.
+1. In the **Send package** pane that appears, under **Available versions**, select the software version from the list. If the version you need doesn't appear, select **Show more** to list all available versions.
+
+ To jump to the release notes for the new version, select **Learn more** at the top of the pane.
+
+ :::image type="content" source="media/update-ot-software/send-package-multiple-versions-400.png" alt-text="Screenshot of sensor update pane with option to choose sensor update version." lightbox="media/update-ot-software/send-package-multiple-versions.png" border="false":::
1. When you're ready, select **Send package** to start the software transfer to your sensor machine. You can see the transfer progress in the **Sensor version** column, where the percentage complete automatically updates in the progress bar so you can track the transfer until it's complete. For example: :::image type="content" source="media/update-ot-software/sensor-version-update-bar.png" alt-text="Screenshot of the update bar in the Sensor version column." lightbox="media/update-ot-software/sensor-version-update-bar.png":::
- When the transfer is complete, the **Sensor version** column changes to :::image type="icon" source="media/update-ot-software/ready-to-update.png" border="false" ::: **Ready to update**.
+ When the transfer is complete, the **Sensor version** column changes to :::image type="icon" source="media/update-ot-software/ready-to-update.png" border="true" ::: **Ready to update**.
Hover over the **Sensor version** value to see the source and target version for your update.
This procedure describes how to send a software version update to one or more OT
Run the sensor update only when you see the :::image type="icon" source="media/update-ot-software/ready-to-update.png" border="false"::: **Ready to update** icon in the **Sensor version** column.
-1. Select one or more sensors to update, and then select **Sensor update** > **Remote update** > **Step 2: Update sensor** from the toolbar.
+1. Select one or more sensors to update, and then select **Sensor update** > **Remote update** > **Step 2: Update sensor** from the toolbar. The **Update sensor** pane opens on the right side of the screen.
For an individual sensor, the **Step 2: Update sensor** option is also available from the **...** options menu. For example: :::image type="content" source="media/update-ot-software/remote-update-step-2.png" alt-text="Screenshot of the Update sensor option." lightbox="media/update-ot-software/remote-update-step-2.png":::
-1. In the **Update sensor** pane that appears, verify your update details.
+1. In the **Update sensor** pane that appears, verify your update details.
When you're ready, select **Update now** > **Confirm update**. In the grid, the **Sensor version** value changes to :::image type="icon" source="media/update-ot-software/installing.png" border="false"::: **Installing**, and an update progress bar appears showing you the percentage complete. The bar automatically updates, so that you can track the progress until the installation is complete.
defender-for-iot Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/whats-new.md
Features released earlier than nine months ago are described in the [What's new
> Noted features listed below are in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include other legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. >
+## January 2024
+
+|Service area |Updates |
+|||
+| **OT networks** | - [Sensor update in Azure portal now supports selecting a specific version](#sensor-update-in-azure-portal-now-supports-selecting-a-specific-version) <br> |
+
+### Sensor update in Azure portal now supports selecting a specific version
+
+When you update the sensor in the Azure portal, you can now choose to update to any supported previous version (a version other than the latest version). Previously, sensors onboarded to Microsoft Defender for IoT in the Azure portal were automatically updated to the latest version.
+
+You might want to update your sensor to a specific version for various reasons, such as for testing purposes, or to align all sensors to the same version.
++
+For more information, see [Update Defender for IoT OT monitoring software](update-ot-software.md#send-the-software-update-to-your-ot-sensor).
+ ## December 2023 |Service area |Updates |
digital-twins Concepts Ontologies Adopt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-ontologies-adopt.md
description: Learn about existing industry ontologies that can be adopted for Azure Digital Twins Previously updated : 03/29/2023 Last updated : 01/18/2024
Microsoft has partnered with domain experts to create DTDL model sets based on i
| Smart buildings | [Digital Twins Definition Language-based RealEstateCore ontology for smart buildings](https://github.com/Azure/opendigitaltwins-building) | Microsoft has partnered with [RealEstateCore](https://www.realestatecore.io/) to deliver this open-source DTDL ontology for the real estate industry. [RealEstateCore](https://www.realestatecore.io/) is a consortium of real estate owners, software vendors, and research institutions.<br><br>This smart buildings ontology provides common ground for modeling smart buildings, using industry standards (like [BRICK Schema](https://brickschema.org/ontology/) or [W3C Building Topology Ontology](https://w3c-lbd-cg.github.io/bot/https://docsupdatetracker.net/index.html)) to avoid reinvention. The ontology also comes with best practices for how to consume and properly extend it. | You can read more about the partnership with RealEstateCore and goals for this initiative in the following blog post and embedded video: [RealEstateCore, a smart building ontology for digital twins, is now available](https://techcommunity.microsoft.com/t5/internet-of-things/realestatecore-a-smart-building-ontology-for-digital-twins-is/ba-p/1914794). | | Smart cities | [Digital Twins Definition Language (DTDL) ontology for Smart Cities](https://github.com/Azure/opendigitaltwins-smartcities) | Microsoft has collaborated with [Open Agile Smart Cities (OASC)](https://oascities.org/) and [Sirus](https://sirus.be/) to provide a DTDL-based ontology for smart cities, starting with [ETSI CIM NGSI-LD](https://www.etsi.org/committee/cim). | You can also read more about the partnerships and approach for smart cities in the following blog post and embedded video: [Smart Cities Ontology for Digital Twins](https://techcommunity.microsoft.com/t5/internet-of-things/smart-cities-ontology-for-digital-twins/ba-p/2166585). | | Energy grids | [Digital Twins Definition Language (DTDL) ontology for Energy Grid](https://github.com/Azure/opendigitaltwins-energygrid/) | This ontology was created to help solution providers accelerate development of digital twin solutions for energy use cases like monitoring grid assets, outage and impact analysis, simulation, and predictive maintenance. Additionally, the ontology can be used to enable the digital transformation and modernization of the energy grid. It's adapted from the [Common Information Model (CIM)](https://cimug.ucaiug.org/), a global standard for energy grid assets management, power system operations modeling, and physical energy commodity market. | You can also read more about the partnerships and approach for energy grids in the following blog post: [Energy Grid Ontology for Digital Twins](https://techcommunity.microsoft.com/t5/internet-of-things/energy-grid-ontology-for-digital-twins-is-now-available/ba-p/2325134). |
-| Manufacturing | [Manufacturing Ontologies](https://github.com/digitaltwinconsortium/ManufacturingOntologies) | These ontologies were created to help solution providers accelerate development of digital twin solutions for manufacturing use cases like asset condition monitoring, simulation, OEE calculation, and predictive maintenance. Additionally, the ontologies can be used to enable the digital transformation and modernization of factories and plants. They are adapted from [OPC UA](https://opcfoundation.org), [ISA95](https://en.wikipedia.org/wiki/ANSI/ISA-95) and the [Asset Administration Shell](https://www.plattform-i40.de/IP/Redaktion/EN/Standardartikel/specification-administrationshell.html), three global standards widely used in the manufacturing space. | Visit the repository to read more about this ontology and explore a sample solution for ingesting OPC UA data into Azure Digital Twins. |
+| Manufacturing | [Manufacturing Ontologies](https://github.com/digitaltwinconsortium/ManufacturingOntologies) | These ontologies were created to help solution providers accelerate development of digital twin solutions for manufacturing use cases like asset condition monitoring, simulation, OEE calculation, and predictive maintenance. Additionally, the ontologies can be used to enable the digital transformation and modernization of factories and plants. They are adapted from [OPC UA](https://opcfoundation.org), [ISA95](https://www.isa.org/standards-and-publications/isa-standards/isa-standards-committees/isa95) and the [Asset Administration Shell](https://reference.opcfoundation.org/I4AAS/v100/docs/4.1), three global standards widely used in the manufacturing space. | Visit the repository to read more about this ontology and explore a sample solution for ingesting OPC UA data into Azure Digital Twins. |
Each ontology is focused on an initial set of models. You can contribute to the ontologies by suggesting extensions or other improvements through the GitHub contribution process in each ontology repository.
dms Tutorial Sql Server Azure Sql Database Offline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-azure-sql-database-offline.md
In this tutorial, you learn how to:
> - Run an assessment of your source SQL Server databases > - Collect performance data from your source SQL Server instance > - Get a recommendation of the Azure SQL Database SKU that will work best for your workload
-> - Deploy your on-premises database schema to Azure SQL Database
> - Create an instance of Azure Database Migration Service > - Start your migration and monitor progress to completion
Before you begin the tutorial:
- Make sure that the SQL Server login that connects to the source SQL Server instance is a member of the db_datareader role and that the login for the target SQL Server instance is a member of the db_owner role. -- Migrate the database schema from source to target by using the [SQL Server dacpac extension](/azure-data-studio/extensions/sql-server-dacpac-extension) or the [SQL Database Projects extension](/azure-data-studio/extensions/sql-database-project-extension) in Azure Data Studio.-
+- To migrate the database schema from the source to the target Azure SQL database by using Database Migration Service, you need [SHIR version](https://www.microsoft.com/download/details.aspx?id=39717) 5.37 or later.
+
- If you're using Database Migration Service for the first time, make sure that the Microsoft.DataMigration [resource provider is registered in your subscription](quickstart-create-data-migration-service-portal.md#register-the-resource-provider). > [!NOTE]
-> Make sure to migrate the database schema from source to target by using the [SQL Server dacpac extension](/azure-data-studio/extensions/sql-server-dacpac-extension) or the [SQL Database Projects extension](/azure-data-studio/extensions/sql-database-project-extension) in Azure Data Studio before selecting the list of tables to migrate.
+> You can now migrate both the database schema and data by using Database Migration Service. Alternatively, you can use tools like the [SQL Server dacpac extension](/azure-data-studio/extensions/sql-server-dacpac-extension) or the [SQL Database Projects extension](/azure-data-studio/extensions/sql-database-project-extension) in Azure Data Studio to migrate the schema before selecting the list of tables to migrate.
>
-> If no tables exist on the Azure SQL Database target, or no tables are selected before starting the migration, the **Next** button isn't available to select to initiate the migration task.
+> If no tables exist on the Azure SQL Database target, or no tables are selected before starting the migration, the **Next** button isn't available to select to initiate the migration task. If no tables exist on the target, you must select the schema migration option to move forward.
### Open the Migrate to Azure SQL wizard in Azure Data Studio
To open the Migrate to Azure SQL wizard:
> [!NOTE] > If no tables are selected or if a username and password aren't entered, the **Next** button isn't available to select. >
-> Make sure to migrate the database schema from source to target by using the [SQL Server dacpac extension](/azure-data-studio/extensions/sql-server-dacpac-extension) or the [SQL Database Projects extension](/azure-data-studio/extensions/sql-database-project-extension) in Azure Data Studio before selecting the list of tables to migrate.
+> You can now migrate both the database schema and data by using Database Migration Service. Alternatively, you can use tools like the [SQL Server dacpac extension](/azure-data-studio/extensions/sql-server-dacpac-extension) or the [SQL Database Projects extension](/azure-data-studio/extensions/sql-database-project-extension) in Azure Data Studio to migrate the schema before selecting the list of tables to migrate.
### Create a Database Migration Service instance
Before you begin the tutorial:
- Make sure that the SQL Server login that connects to the source SQL Server instance is a member of the **db_datareader** role, and that the login for the target SQL Server instance is a member of the **db_owner** role. -- Migrate the database schema from source to target by using the [SQL Server dacpac extension](/azure-data-studio/extensions/sql-server-dacpac-extension) or the [SQL Database Projects extension](/azure-data-studio/extensions/sql-database-project-extension) in Azure Data Studio.
+- To migrate the database schema from the source to the target Azure SQL database by using Database Migration Service, you need [SHIR version](https://www.microsoft.com/download/details.aspx?id=39717) 5.37 or later.
- If you're using Database Migration Service for the first time, make sure that the `Microsoft.DataMigration` [resource provider is registered in your subscription](quickstart-create-data-migration-service-portal.md#register-the-resource-provider). > [!NOTE]
-> Make sure to migrate the database schema from source to target by using the [SQL Server dacpac extension](/azure-data-studio/extensions/sql-server-dacpac-extension) or the [SQL Database Projects extension](/azure-data-studio/extensions/sql-database-project-extension) in Azure Data Studio before selecting the list of tables to migrate.
+> You can now migrate both the database schema and data by using Database Migration Service. Alternatively, you can use tools like the [SQL Server dacpac extension](/azure-data-studio/extensions/sql-server-dacpac-extension) or the [SQL Database Projects extension](/azure-data-studio/extensions/sql-database-project-extension) in Azure Data Studio to migrate the schema before selecting the list of tables to migrate.
>
-> If no tables exists on the Azure SQL Database target, or no tables are selected before starting the migration. The **Next** button isn't available to select to initiate the migration task.
+> If no tables exist on the Azure SQL Database target, or no tables are selected before starting the migration, the **Next** button isn't available to select to initiate the migration task. If no tables exist on the target, you must select the schema migration option to move forward.
[!INCLUDE [create-database-migration-service-instance](includes/create-database-migration-service-instance.md)]
Before you begin the tutorial:
> [!NOTE] > In an offline migration, application downtime starts when the migration starts. >
- > Make sure to migrate the database schema from source to target by using the [SQL Server dacpac extension](/azure-data-studio/extensions/sql-server-dacpac-extension) or the [SQL Database Projects extension](/azure-data-studio/extensions/sql-database-project-extension) in Azure Data Studio before selecting the list of tables to migrate.
+ > You can now migrate both the database schema and data by using Database Migration Service. Alternatively, you can use tools like the [SQL Server dacpac extension](/azure-data-studio/extensions/sql-server-dacpac-extension) or the [SQL Database Projects extension](/azure-data-studio/extensions/sql-database-project-extension) in Azure Data Studio to migrate the schema before selecting the list of tables to migrate.
### Monitor the database migration
dns Dns Protect Private Zones Recordsets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-protect-private-zones-recordsets.md
Azure PowerShell
$lvl = "<lock level>" $lnm = "<lock name>" $rnm = "<zone name>/<record set name>"
-$rty = "Microsoft.Network/privateDnsZones"
+$rty = "Microsoft.Network/privateDnsZones/<record type>"
$rsg = "<resource group name>" New-AzResourceLock -LockLevel $lvl -LockName $lnm -ResourceName $rnm -ResourceType $rty -ResourceGroupName $rsg
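As a filled-in sketch of the same command, the following uses illustrative values; the zone, record set, record type, and resource group names are assumptions to replace with your own:

```azurepowershell
# Lock an A record set named "www" in the private zone "private.contoso.com" so it can't be deleted.
# All resource names here are illustrative.
New-AzResourceLock -LockLevel CanNotDelete `
  -LockName "LockPrivateDnsRecordSet" `
  -ResourceName "private.contoso.com/www" `
  -ResourceType "Microsoft.Network/privateDnsZones/A" `
  -ResourceGroupName "myResourceGroup"
```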
dns Dns Reverse Dns Hosting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-reverse-dns-hosting.md
Last updated 04/27/2023--++ ms.devlang: azurecli
The following example explains the process of creating a PTR record for a revers
:::image type="content" source="./media/dns-reverse-dns-hosting/create-record-set-ipv4.png" alt-text="Screenshot of create IPv4 pointer record set.":::
-1. The name of the record set for a PTR record is the rest of the IPv4 address in reverse order.
+1. The name of the record set for a PTR record is the rest of the IPv4 address in reverse order.
- In this example, the first three octets are already populated as part of the zone name `.2.0.192`. That's why only the last octet is needed in the **Name** box. For example, give your record set the name of **15** for a resource whose IP address is `192.0.2.15`.
+ In this example, the first three octets are already populated as part of the zone name `.2.0.192`. That's why only the last octet is needed in the **Name** box. For example, give your record set the name of **15** for a resource whose IP address is `192.0.2.15`.
:::image type="content" source="./media/dns-reverse-dns-hosting/create-ipv4-ptr.png" alt-text="Screenshot of create IPv4 pointer record.":::
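For reference, here's a minimal Azure CLI sketch that creates the same PTR record outside the portal; the resource group, zone, and target host names reuse the illustrative values from the examples in this article:

```azurecli
# Create a PTR record named "15" in the 2.0.192.in-addr.arpa zone,
# pointing 192.0.2.15 back to its host name.
az network dns record-set ptr add-record \
  --resource-group mydnsresourcegroup \
  --zone-name 2.0.192.in-addr.arpa \
  --record-set-name 15 \
  --ptrdname dc1.contoso.com
```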
New-AzDnsRecordSet -Name 15 -RecordType PTR -ZoneName 2.0.192.in-addr.arpa -Reso
#### Azure classic CLI ```azurecli
-azure network dns record-set add-record mydnsresourcegroup 2.0.192.in-addr.arpa 15 PTR --ptrdname dc1.contoso.com
+azure network dns record-set add-record mydnsresourcegroup 2.0.192.in-addr.arpa 15 PTR --ptrdname dc1.contoso.com
``` #### Azure CLI
The following example explains the process of creating new PTR record for IPv6.
:::image type="content" source="./media/dns-reverse-dns-hosting/create-record-set-ipv6.png" alt-text="Screenshot of create IPv6 pointer record set.":::
-1. The name of the record set for a PTR record is the rest of the IPv6 address in reverse order. It must not include any zero compression.
+1. The name of the record set for a PTR record is the rest of the IPv6 address in reverse order. It must not include any zero compression.
In this example, the first 64 bits of the IPv6 gets populated as part of the zone name (0.0.0.0.c.d.b.a.8.b.d.0.1.0.0.2.ip6.arpa). That's why only the last 64 bits are supplied in the **Name** box. The last 64 bits of the IP address gets entered in reverse order, with a period as the delimiter between each hexadecimal number. Name your record set **e.5.0.4.9.f.a.1.c.b.0.1.4.2.5.f** if you have a resource whose IP address is 2001:0db8:abdc:0000:f524:10bc:1af9:405e. :::image type="content" source="./media/dns-reverse-dns-hosting/create-ipv6-ptr.png" alt-text="Screenshot of create IPv6 pointer record.":::
-1. For *Type*, select **PTR**.
+1. For *Type*, select **PTR**.
1. For *DOMAIN NAME*, enter the FQDN of the resource that uses the IP.
New-AzDnsRecordSet -Name "e.5.0.4.9.f.a.1.c.b.0.1.4.2.5.f" -RecordType PTR -Zone
#### Azure classic CLI ```azurecli
-azure network dns record-set add-record mydnsresourcegroup 0.0.0.0.c.d.b.a.8.b.d.0.1.0.0.2.ip6.arpa e.5.0.4.9.f.a.1.c.b.0.1.4.2.5.f PTR --ptrdname dc2.contoso.com
+azure network dns record-set add-record mydnsresourcegroup 0.0.0.0.c.d.b.a.8.b.d.0.1.0.0.2.ip6.arpa e.5.0.4.9.f.a.1.c.b.0.1.4.2.5.f PTR --ptrdname dc2.contoso.com
```
-
+ #### Azure CLI ```azurecli-interactive
az network dns record-set ptr add-record -g mydnsresourcegroup -z 0.0.0.0.c.d.b.
## View records
-To view the records that you created, browse to your DNS zone in the Azure portal. In the lower part of the **DNS zone** pane, you can see the records for the DNS zone. You should see the default NS and SOA records, plus any new records that you've created. The NS and SOA records are created in every zone.
+To view the records that you created, browse to your DNS zone in the Azure portal. In the lower part of the **DNS zone** pane, you can see the records for the DNS zone. You should see the default NS and SOA records, plus any new records that you've created. The NS and SOA records are created in every zone.
### IPv4
dns Dns Reverse Dns Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-reverse-dns-overview.md
na-+ Last updated 04/27/2023
energy-data-services How To Set Up Private Links https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-set-up-private-links.md
Use the following steps to create a private endpoint for an existing Azure Data
* Configure network and private IP settings. [Learn more](../private-link/create-private-endpoint-portal.md#create-a-private-endpoint).
- * Configure a private endpoint with an application security group. [Learn more](../private-link/configure-asg-private-endpoint.md#create-private-endpoint-with-an-asg).
+ * Configure a private endpoint with an application security group. [Learn more](../private-link/configure-asg-private-endpoint.md#create-a-private-endpoint-with-an-asg).
[![Screenshot of virtual network information for a private endpoint.](media/how-to-manage-private-links/private-links-4-virtual-network.png)](media/how-to-manage-private-links/private-links-4-virtual-network.png#lightbox)
event-grid Cloudevents Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/cloudevents-schema.md
Title: Use Azure Event Grid with events in CloudEvents schema description: Describes how to use the CloudEvents schema for events in Azure Event Grid. The service supports events in the JSON implementation of CloudEvents. Previously updated : 12/02/2022 Last updated : 01/18/2024 ms.devlang: csharp # ms.devlang: csharp, javascript
You set the input schema for a custom topic when you create the custom topic.
For the Azure CLI, use: ```azurecli-interactive
-az eventgrid topic create \
- --name <topic_name> \
- -l westcentralus \
- -g gridResourceGroup \
- --input-schema cloudeventschemav1_0
+az eventgrid topic create --name demotopic -l westcentralus -g gridResourceGroup --input-schema cloudeventschemav1_0
``` For PowerShell, use: ```azurepowershell-interactive
-New-AzEventGridTopic `
- -ResourceGroupName gridResourceGroup `
- -Location westcentralus `
- -Name <topic_name> `
- -InputSchema CloudEventSchemaV1_0
+New-AzEventGridTopic -ResourceGroupName gridResourceGroup -Location westcentralus -Name demotopic -InputSchema CloudEventSchemaV1_0
``` ### Output schema
You set the output schema when you create the event subscription.
For the Azure CLI, use: ```azurecli-interactive
-topicID=$(az eventgrid topic show --name <topic-name> -g gridResourceGroup --query id --output tsv)
+topicID=$(az eventgrid topic show --name demotopic -g gridResourceGroup --query id --output tsv)
-az eventgrid event-subscription create \
- --name <event_subscription_name> \
- --source-resource-id $topicID \
- --endpoint <endpoint_URL> \
- --event-delivery-schema cloudeventschemav1_0
+az eventgrid event-subscription create --name demotopicsub --source-resource-id $topicID --endpoint <endpoint_URL> --event-delivery-schema cloudeventschemav1_0
``` For PowerShell, use: ```azurepowershell-interactive $topicid = (Get-AzEventGridTopic -ResourceGroupName gridResourceGroup -Name <topic-name>).Id
-New-AzEventGridSubscription `
- -ResourceId $topicid `
- -EventSubscriptionName <event_subscription_name> `
- -Endpoint <endpoint_URL> `
- -DeliverySchema CloudEventSchemaV1_0
+New-AzEventGridSubscription -ResourceId $topicid -EventSubscriptionName <event_subscription_name> -Endpoint <endpoint_URL> -DeliverySchema CloudEventSchemaV1_0
``` ## Endpoint validation with CloudEvents v1.0
If you're already familiar with Event Grid, you might be aware of the endpoint v
### Visual Studio or Visual Studio Code
-If you're using Visual Studio or Visual Studio Code, and C# programming language to develop functions, make sure that you're using the latest [Microsoft.Azure.WebJobs.Extensions.EventGrid](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.EventGrid/) NuGet package (version **3.2.1** or above).
+If you're using Visual Studio or Visual Studio Code, and C# programming language to develop functions, make sure that you're using the latest [Microsoft.Azure.WebJobs.Extensions.EventGrid](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.EventGrid/) NuGet package (version **3.3.1** or above).
In Visual Studio, use the **Tools** -> **NuGet Package Manager** -> **Package Manager Console**, and run the `Install-Package` command (`Install-Package Microsoft.Azure.WebJobs.Extensions.EventGrid -Version 3.2.1`). Alternatively, right-click the project in the Solution Explorer window, and select **Manage NuGet Packages** menu to browse for the NuGet package, and install or update it to the latest version.
namespace Company.Function
public static class CloudEventTriggerFunction { [FunctionName("CloudEventTriggerFunction")]
- public static void Run(
- ILogger logger,
- [EventGridTrigger] CloudEvent e)
+ public static void Run(ILogger logger, [EventGridTrigger] CloudEvent e)
{ logger.LogInformation("Event received {type} {subject}", e.Type, e.Subject); }
event-grid Handler Service Bus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/handler-service-bus.md
Title: Service Bus queues and topics as event handlers for Azure Event Grid events description: Describes how you can use Service Bus queues and topics as event handlers for Azure Event Grid events. Previously updated : 11/17/2022 Last updated : 01/17/2024 # Service Bus queues and topics as event handlers for Azure Event Grid events
You can also use the [`New-AzEventGridSystemTopicEventSubscription`](/powershell
When you send an event to a Service Bus queue or topic as a brokered message, the `messageid` of the brokered message is an internal system ID.
-The internal system ID for the message will be maintained across redelivery of the event so that you can avoid duplicate deliveries by turning on **duplicate detection** on the service bus entity. We recommend that you enable duration of the duplicate detection on the Service Bus entity to be either the time-to-live (TTL) of the event or max retry duration, whichever is longer.
+The internal system ID for the message is maintained across redelivery of the event, so you can avoid duplicate deliveries by turning on **duplicate detection** on the Service Bus entity. We recommend that you set the duplicate detection window on the Service Bus entity to either the time-to-live (TTL) of the event or the maximum retry duration, whichever is longer.
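For example, a minimal Azure CLI sketch that enables duplicate detection on a Service Bus queue; the namespace, queue, and resource group names are illustrative, and the one-day detection window is only an example value to adapt to your event TTL or retry duration:

```azurecli
# Create a Service Bus queue with duplicate detection enabled and a 1-day detection window.
az servicebus queue create \
  --resource-group myResourceGroup \
  --namespace-name mynamespace \
  --name myqueue \
  --enable-duplicate-detection true \
  --duplicate-detection-history-time-window P1D
```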
## Delivery properties
-Event subscriptions allow you to set up HTTP headers that are included in delivered events. This capability allows you to set custom headers that are required by a destination. You can set custom headers on the events that are delivered to Azure Service Bus queues and topics.
+Event subscriptions allow you to set up HTTP headers that are included in delivered events. This capability allows you to set custom headers that the destination requires. You can set custom headers on the events that are delivered to Azure Service Bus queues and topics.
Azure Service Bus supports the use of following message properties when sending single messages.
event-grid Manage Event Delivery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/manage-event-delivery.md
Title: Dead letter and retry policies - Azure Event Grid description: Describes how to customize event delivery options for Event Grid. Set a dead-letter destination, and specify how long to retry delivery. Previously updated : 11/07/2022 Last updated : 01/17/2024 ms.devlang: azurecli
event-grid Post To Custom Topic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/post-to-custom-topic.md
Title: Post event to custom Azure Event Grid topic description: This article describes how to post an event to a custom topic. It shows the format of the post and event data. Previously updated : 11/17/2022 Last updated : 01/18/2024 # Publish events to Azure Event Grid custom topics using access keys
event-grid Subscribe To Partner Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/subscribe-to-partner-events.md
Title: Azure Event Grid - Subscribe to partner events description: This article explains how to subscribe to events from a partner using Azure Event Grid. Previously updated : 10/31/2022 Last updated : 01/18/2024 # Subscribe to events published by a partner with Azure Event Grid
-This article describes steps to subscribe to events that originate in a system owned or managed by a partner (SaaS, ERP, etc.).
+This article describes steps to subscribe to events that originate in a system owned or managed by a partner (SaaS, Enterprise Resource Planning (ERP), etc.).
> [!IMPORTANT] >If you aren't familiar with the **Partner Events** feature, see [Partner Events overview](partner-events-overview.md) to understand the rationale of the steps in this article.
Here's the list of partners and a link to submit a request to enable events flow
## Next steps
-See the following articles for more details about the Partner Events feature:
+For more information, see the following articles about the Partner Events feature:
- [Partner Events overview for customers](partner-events-overview.md) - [Partner Events overview for partners](partner-events-overview-for-partners.md)
firewall Premium Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/premium-features.md
IDPS signature rules have the following properties:
|Signature ID |Internal ID for each signature. This ID is also presented in Azure Firewall Network Rules logs.| |Mode |Indicates if the signature is active or not, and whether firewall drops or alerts upon matched traffic. The below signature mode can override IDPS mode<br>- **Disabled**: The signature isn't enabled on your firewall.<br>- **Alert**: You receive alerts when suspicious traffic is detected.<br>- **Alert and Deny**: You receive alerts and suspicious traffic is blocked. Few signature categories are defined as "Alert Only", therefore by default, traffic matching their signatures isn't blocked even though IDPS mode is set to "Alert and Deny". Customers may override this by customizing these specific signatures to "Alert and Deny" mode. <br><br>IDPS Signature mode is determined by one of the following reasons:<br><br> 1. Defined by Policy Mode – Signature mode is derived from IDPS mode of the existing policy.<br>2. Defined by Parent Policy – Signature mode is derived from IDPS mode of the parent policy.<br>3. Overridden – You can override and customize the Signature mode.<br>4. Defined by System - Signature mode is set to *Alert Only* by the system due to its [category](idps-signature-categories.md). You may override this signature mode.<br><br>Note: IDPS alerts are available in the portal via network rule log query.| |Severity |Each signature has an associated severity level and assigned priority that indicates the probability that the signature is an actual attack.<br>- **Low (priority 3)**: An abnormal event is one that doesn't normally occur on a network or Informational events are logged. Probability of attack is low.<br>- **Medium (priority 2)**: The signature indicates an attack of a suspicious nature. The administrator should investigate further.<br>- **High (priority 1)**: The attack signatures indicate that an attack of a severe nature is being launched. There's little probability that the packets have a legitimate purpose.|
-|Direction |The traffic direction for which the signature is applied.<br><br>- **Inbound**: Signature is applied only on traffic arriving from the Internet and destined to your [configured private IP address range](#idps-private-ip-ranges).<br>- **Outbound**: Signature is applied only on traffic sent from your [configured private IP address range](#idps-private-ip-ranges) to the Internet.<br>- **Internal**: Signature is applied only on traffic sent from and destined to your [configured private IP address range](#idps-private-ip-ranges).<br>- **Any**: Signature is always applied on any traffic direction.|
+|Direction |The traffic direction for which the signature is applied.<br><br>- **Inbound**: Signature is applied only on traffic arriving from the Internet and destined to your [configured private IP address range](#idps-private-ip-ranges).<br>- **Outbound**: Signature is applied only on traffic sent from your [configured private IP address range](#idps-private-ip-ranges) to the Internet.<br>- **Internal**: Signature is applied only on traffic sent from and destined to your [configured private IP address range](#idps-private-ip-ranges).<br>- **Internal/Inbound**: Signature is applied on traffic arriving from your [configured private IP address range](#idps-private-ip-ranges) or from the Internet and destined to your [configured private IP address range](#idps-private-ip-ranges).<br>- **Internal/Outbound**: Signature is applied on traffic sent from your [configured private IP address range](#idps-private-ip-ranges) and destined to your [configured private IP address range](#idps-private-ip-ranges) or to the Internet.<br>- **Any**: Signature is always applied on any traffic direction.|
|Group |The group name that the signature belongs to.| |Description |Structured from the following three parts:<br>- **Category name**: The category name that the signature belongs to as described in [Azure Firewall IDPS signature rule categories](idps-signature-categories.md).<br>- High level description of the signature<br>- **CVE-ID** (optional) in the case where the signature is associated with a specific CVE.| |Protocol |The protocol associated with this signature.|
firewall Protect Azure Kubernetes Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/protect-azure-kubernetes-service.md
Title: Use Azure Firewall to protect Azure Kubernetes Service (AKS) clusters
description: Learn how to use Azure Firewall to protect Azure Kubernetes Service (AKS) clusters -+ Last updated 10/19/2023
Azure Kubernetes Service (AKS) offers a managed Kubernetes cluster on Azure. For
Despite AKS being a fully managed solution, it doesn't offer a built-in solution to secure ingress and egress traffic between the cluster and external networks. Azure Firewall offers a solution to this. AKS clusters are deployed on a virtual network. This network can be managed (created by AKS) or custom (preconfigured by the user beforehand). In either case, the cluster has outbound dependencies on services outside of that virtual network (the service has no inbound dependencies). For management and operational purposes, nodes in an AKS cluster need to access [certain ports and fully qualified domain names (FQDNs)](../aks/outbound-rules-control-egress.md) describing these outbound dependencies. This is required for various functions including, but not limited to, the nodes that communicate with the Kubernetes API server. They download and install core Kubernetes cluster components and node security updates, or pull base system container images from Microsoft Container Registry (MCR), and so on. These outbound dependencies are almost entirely defined with FQDNs, which don't have static addresses behind them. The lack of static addresses means that Network Security Groups can't be used to lock down outbound traffic from an AKS cluster. For this reason, by default, AKS clusters have unrestricted outbound (egress) Internet access. This level of network access allows nodes and services you run to access external resources as needed.
-
+ However, in a production environment, communications with a Kubernetes cluster should be protected to prevent against data exfiltration along with other vulnerabilities. All incoming and outgoing network traffic must be monitored and controlled based on a set of security rules. If you want to do this, you have to restrict egress traffic, but a limited number of ports and addresses must remain accessible to maintain healthy cluster maintenance tasks and satisfy those outbound dependencies previously mentioned.
-
+ The simplest solution uses a firewall device that can control outbound traffic based on domain names. A firewall typically establishes a barrier between a trusted network and an untrusted network, such as the Internet. Azure Firewall, for example, can restrict outbound HTTP and HTTPS traffic based on the FQDN of the destination, giving you fine-grained egress traffic control, but at the same time allows you to provide access to the FQDNs encompassing an AKS cluster's outbound dependencies (something that NSGs can't do). Likewise, you can control ingress traffic and improve security by enabling threat intelligence-based filtering on an Azure Firewall deployed to a shared perimeter network. This filtering can provide alerts, and deny traffic to and from known malicious IP addresses and domains. See the following video by Abhinav Sriram for a quick overview on how this works in practice on a sample environment: > [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RE529Qc]
-You can download a zip file from the [Microsoft Download Center](https://download.microsoft.com/download/0/1/3/0131e87a-c862-45f8-8ee6-31fa103a03ff/aks-azfw-protection-setup.zip) that contains a bash script file and a yaml file to automatically configure the sample environment used in the video. It configures Azure Firewall to protect both ingress and egress traffic. The following guides walk through each step of the script in more detail so you can set up a custom configuration.
+You can download a zip file from the [Microsoft Download Center](https://download.microsoft.com/download/0/1/3/0131e87a-c862-45f8-8ee6-31fa103a03ff/aks-azfw-protection-setup.zip) that contains a bash script file and a yaml file to automatically configure the sample environment used in the video. It configures Azure Firewall to protect both ingress and egress traffic. The following guides walk through each step of the script in more detail so you can set up a custom configuration.
The following diagram shows the sample environment from the video that the script and guide configure:
See [virtual network route table documentation](../virtual-network/virtual-netwo
> For applications outside of the kube-system or gatekeeper-system namespaces that needs to talk to the API server, an additional network rule to allow TCP communication to port 443 for the API server IP in addition to adding application rule for fqdn-tag AzureKubernetesService is required.
- You can use the following three network rules to configure your firewall. You might need to adapt these rules based on your deployment. The first rule allows access to port 9000 via TCP. The second rule allows access to port 1194 and 123 via UDP. Both these rules only allow traffic destined to the Azure Region CIDR that we're using, in this case East US.
+ You can use the following three network rules to configure your firewall. You might need to adapt these rules based on your deployment. The first rule allows access to port 9000 via TCP. The second rule allows access to port 1194 and 123 via UDP. Both these rules only allow traffic destined to the Azure Region CIDR that we're using, in this case East US.
Finally, we add a third network rule opening port 123 to an Internet time server FQDN (for example:`ntp.ubuntu.com`) via UDP. Adding an FQDN as a network rule is one of the specific features of Azure Firewall, and you need to adapt it when using your own options.
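As an illustration of that third rule, here's a minimal Azure CLI sketch; the firewall name, resource group, collection name, priority, and source address space are illustrative values to adapt to your deployment:

```azurecli
# Add a network rule that allows UDP 123 from the AKS subnet to an FQDN-based time server.
az network firewall network-rule create \
  --resource-group myResourceGroup \
  --firewall-name myAzureFirewall \
  --collection-name aksfwnr \
  --name time \
  --priority 200 \
  --action Allow \
  --protocols UDP \
  --source-addresses 10.42.1.0/24 \
  --destination-fqdns ntp.ubuntu.com \
  --destination-ports 123
```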
apiVersion: v1
kind: Service metadata: name: voting-storage
- labels:
+ labels:
app: voting-storage spec: ports:
apiVersion: v1
kind: Service metadata: name: voting-app
- labels:
+ labels:
app: voting-app spec: type: LoadBalancer
apiVersion: v1
kind: Service metadata: name: voting-analytics
- labels:
+ labels:
app: voting-analytics spec: ports:
frontdoor Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/scripts/custom-domain.md
Title: "Azure CLI example: Deploy custom domain in Azure Front Door"
-description: Use this Azure CLI example script to deploy a Custom Domain name and TLS certificate on an Azure Front Door front-end.
+ Title: "Azure CLI example: Deploy custom domain in Azure Front Door"
+description: Use this Azure CLI example script to deploy a Custom Domain name and TLS certificate on an Azure Front Door front-end.
-+ ms.devlang: azurecli Previously updated : 04/27/2022 Last updated : 04/27/2022 # Azure Front Door: Deploy custom domain
healthcare-apis Configure Azure Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/configure-azure-rbac.md
Last updated 06/06/2022 -
+
# Configure Azure RBAC role for Azure Health Data Services
healthcare-apis Change Feed Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/change-feed-overview.md
+
+ Title: Change feed overview for the DICOM service in Azure Health Data Services
+description: Learn how to use the change feed in the DICOM service to access the logs of all the changes that occur in your organization's medical imaging data. The change feed allows you to query, process, and act upon the change events in a scalable and efficient way.
++++ Last updated : 1/18/2024+++
+# Change feed overview
+
+The change feed provides ordered, guaranteed, immutable, and read-only logs of all the changes that occur in the DICOM&reg; service. It lets you go through the history of the DICOM service and act on the creates, updates, and deletes in the service.
+
+Client applications can read these logs at any time in batches of any size. The change feed enables you to build efficient and scalable solutions that process change events that occur in your DICOM service.
+
+You can process these change events asynchronously, incrementally, or in full. Any number of client applications can independently read the change feed, in parallel, and at their own pace.
+
+As of v2 of the API, the change feed can be queried for a particular time window.
+
+Make sure to specify the version as part of the URL when making requests. For more information, see the [API Versioning for DICOM service Documentation](api-versioning-dicom-service.md).
+
+## API Design
+
+The API exposes two `GET` endpoints for interacting with the change feed. A typical flow for consuming the change feed is provided in the [Usage](#usage) section.
+
+Verb | Route | Returns | Description
+: | :-- | :- | :
+GET | /changefeed | JSON Array | [Read the change feed](#change-feed)
+GET | /changefeed/latest | JSON Object | [Read the latest entry in the change feed](#latest-change-feed)
+
+### Object model
+
+Field | Type | Description
+: | :-- | :
+Sequence | long | The unique ID per change event
+StudyInstanceUid | string | The study instance UID
+SeriesInstanceUid | string | The series instance UID
+SopInstanceUid | string | The SOP instance UID
+Action | string | The action that was performed - either `create`, `update`, or `delete`
+Timestamp | datetime | The date and time the action was performed in UTC
+State | string | [The current state of the metadata](#states)
+Metadata | object | Optionally, the current DICOM metadata if the instance exists
+
+#### States
+
+State | Description
+:- | :
+current | This instance is the current version.
+replaced | This instance is replaced with a new version.
+deleted | This instance is deleted and is no longer available in the service.
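+
+To make the object model concrete, here's a minimal C# sketch of a type you could deserialize change feed entries into with `System.Text.Json`. The class name, the use of `JsonElement` for the metadata, and the string-typed `Action` and `State` fields are illustrative assumptions, not part of the service contract.
+
+```csharp
+using System;
+using System.Text.Json;
+
+// Hypothetical model that mirrors the change feed object model in the preceding tables.
+public class ChangeFeedEntry
+{
+    public long Sequence { get; set; }            // Unique ID per change event
+    public string StudyInstanceUid { get; set; }
+    public string SeriesInstanceUid { get; set; }
+    public string SopInstanceUid { get; set; }
+    public string Action { get; set; }            // "create", "update", or "delete"
+    public DateTime Timestamp { get; set; }       // UTC date and time the action was performed
+    public string State { get; set; }             // "current", "replaced", or "deleted"
+    public JsonElement? Metadata { get; set; }    // DICOM JSON, present when includeMetadata=true
+}
+```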
+
+## Change feed
+
+The change feed resource is a collection of events that occurred within the DICOM server.
+
+### Version 2
+
+#### Request
+```http
+GET /changefeed?startTime={datetime}&endTime={datetime}&offset={int}&limit={int}&includemetadata={bool} HTTP/1.1
+Accept: application/json
+Content-Type: application/json
+```
+
+#### Response
+```json
+[
+ {
+ "Sequence": 1,
+ "StudyInstanceUid": "{uid}",
+ "SeriesInstanceUid": "{uid}",
+ "SopInstanceUid": "{uid}",
+ "Action": "create|delete",
+ "Timestamp": "2020-03-04T01:03:08.4834Z",
+ "State": "current|replaced|deleted",
+ "Metadata": {
+ // DICOM JSON
+ }
+ },
+ {
+ "Sequence": 2,
+ "StudyInstanceUid": "{uid}",
+ "SeriesInstanceUid": "{uid}",
+ "SopInstanceUid": "{uid}",
+ "Action": "create|delete",
+ "Timestamp": "2020-03-05T07:13:16.4834Z",
+ "State": "current|replaced|deleted",
+ "Metadata": {
+ // DICOM JSON
+ }
+ },
+ //...
+]
+```
+#### Parameters
+
+Name | Type | Description | Default | Min | Max |
+:-- | :- | :- | : | :-- | :-- |
+offset | long | The number of events to skip from the beginning of the result set | `0` | `0` | |
+limit | int | The maximum number of events to return | `100` | `1` | `200` |
+startTime | DateTime | The inclusive start time for change events | `"0001-01-01T00:00:00Z"` | `"0001-01-01T00:00:00Z"` | `"9999-12-31T23:59:59.9999998Z"`|
+endTime | DateTime | The exclusive end time for change events | `"9999-12-31T23:59:59.9999999Z"` | `"0001-01-01T00:00:00.0000001"` | `"9999-12-31T23:59:59.9999999Z"` |
+includeMetadata | bool | Indicates whether or not to include the DICOM metadata | `true` | | |
+
+### Version 1
+
+#### Request
+```http
+GET /changefeed?offset={int}&limit={int}&includemetadata={bool} HTTP/1.1
+Accept: application/json
+Content-Type: application/json
+```
+
+#### Response
+```json
+[
+ {
+ "Sequence": 1,
+ "StudyInstanceUid": "{uid}",
+ "SeriesInstanceUid": "{uid}",
+ "SopInstanceUid": "{uid}",
+ "Action": "create|delete",
+ "Timestamp": "2020-03-04T01:03:08.4834Z",
+ "State": "current|replaced|deleted",
+ "Metadata": {
+ // DICOM JSON
+ }
+ },
+ {
+ "Sequence": 2,
+ "StudyInstanceUid": "{uid}",
+ "SeriesInstanceUid": "{uid}",
+ "SopInstanceUid": "{uid}",
+ "Action": "create|delete",
+ "Timestamp": "2020-03-05T07:13:16.4834Z",
+ "State": "current|replaced|deleted",
+ "Metadata": {
+ // DICOM JSON
+ }
+ },
+ // ...
+]
+```
+
+#### Parameters
+Name | Type | Description | Default | Min | Max |
+:-- | :- | :- | : | :-- | :-- |
+offset | long | The exclusive starting sequence number for events | `0` | `0` | |
+limit | int | The maximum value of the sequence number relative to the offset. For example, if the offset is 10 and the limit is 5, then the maximum sequence number returned is 15. | `10` | `1` | `100` |
+includeMetadata | bool | Indicates whether or not to include the DICOM metadata | `true` | | |
+
+## Latest change feed
+The latest change feed resource represents the latest event that occurred within the DICOM server.
+
+### Request
+```http
+GET /changefeed/latest?includemetadata={bool} HTTP/1.1
+Accept: application/json
+Content-Type: application/json
+```
+
+### Response
+```json
+{
+ "Sequence": 2,
+ "StudyInstanceUid": "{uid}",
+ "SeriesInstanceUid": "{uid}",
+ "SopInstanceUid": "{uid}",
+ "Action": "create|update|delete",
+ "Timestamp": "2020-03-05T07:13:16.4834Z",
+ "State": "current|replaced|deleted",
+ "Metadata": {
+ //DICOM JSON
+ }
+}
+```
+
+### Parameters
+
+Name | Type | Description | Default |
+:-- | : | :- | : |
+includeMetadata | bool | Indicates whether or not to include the metadata | `true` |
+
+## Usage
+
+### User application
+
+#### Version 2
+
+1. An application regularly queries the change feed on some time interval
+ * For example, if querying every hour, a query for the change feed might look like `/changefeed?startTime=2023-05-10T16:00:00Z&endTime=2023-05-10T17:00:00Z`
+ * If starting from the beginning, the change feed query might omit the `startTime` to read all of the changes up to, but excluding, the `endTime`
+ * For example: `/changefeed?endTime=2023-05-10T17:00:00Z`
+2. Based on the `limit` (if provided), an application continues to query for more pages of change events if the number of returned events is equal to the `limit` (or default) by updating the offset on each subsequent query
+ * For example, if the `limit` is `100`, and 100 events are returned, then the subsequent query would include `offset=100` to fetch the next "page" of results. The queries demonstrate the pattern:
+ * `/changefeed?offset=0&limit=100&startTime=2023-05-10T16:00:00Z&endTime=2023-05-10T17:00:00Z`
+ * `/changefeed?offset=100&limit=100&startTime=2023-05-10T16:00:00Z&endTime=2023-05-10T17:00:00Z`
+ * `/changefeed?offset=200&limit=100&startTime=2023-05-10T16:00:00Z&endTime=2023-05-10T17:00:00Z`
+ * If fewer events than the `limit` are returned, then the application can assume that there are no more results within the time range (a C# sketch of this paging pattern follows this list)
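+
+Here's a minimal C# sketch of that paging pattern, using `HttpClient` and the hypothetical `ChangeFeedEntry` model sketched in the object model section. The base address, authentication, and window boundaries are assumptions that your application supplies.
+
+```csharp
+using System;
+using System.Collections.Generic;
+using System.Net.Http;
+using System.Net.Http.Json;
+using System.Threading.Tasks;
+
+// Sketch: read every change event in a time window, page by page, assuming 'client' is an
+// authenticated HttpClient whose BaseAddress points at the DICOM service and includes /v2/.
+static async Task<List<ChangeFeedEntry>> ReadWindowAsync(
+    HttpClient client, DateTimeOffset startTime, DateTimeOffset endTime)
+{
+    const int limit = 100;
+    var events = new List<ChangeFeedEntry>();
+
+    for (int offset = 0; ; offset += limit)
+    {
+        string url = $"changefeed?offset={offset}&limit={limit}" +
+                     $"&startTime={Uri.EscapeDataString(startTime.UtcDateTime.ToString("O"))}" +
+                     $"&endTime={Uri.EscapeDataString(endTime.UtcDateTime.ToString("O"))}";
+
+        List<ChangeFeedEntry> page = await client.GetFromJsonAsync<List<ChangeFeedEntry>>(url);
+        events.AddRange(page);
+
+        // Fewer results than the limit means there are no more events in this window.
+        if (page.Count < limit)
+        {
+            return events;
+        }
+    }
+}
+```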
+
+#### Version 1
+
+1. An application determines from which sequence number it wishes to start reading change events:
+ * To start from the first event, the application should use `offset=0`
+ * To start from the latest event, the application should specify the `offset` parameter with the value of `Sequence` from the latest change event using the `/changefeed/latest` resource
+2. On some regular polling interval, the application performs the following actions (a C# sketch of one polling iteration follows this list):
+ * Fetches the latest sequence number from the `/changefeed/latest` endpoint
+ * Fetches the next set of changes for processing by querying the change feed with the current offset
+ * For example, if the application processed up to sequence number 15 and it only wants to process at most five events at once, then it should use the URL `/changefeed?offset=15&limit=5`
+ * Processes any entries returned by the `/changefeed` resource
+ * Updates its current sequence number to either:
+ 1. The maximum sequence number returned by the `/changefeed` resource
+ 2. The `offset` + `limit` if no change events were returned from the `/changefeed` resource, but the latest sequence number returned by `/changefeed/latest` is greater than the current sequence number used for `offset`
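+
+The following C# sketch shows one polling iteration of this v1 pattern. It reuses the usings, the authenticated `HttpClient`, and the hypothetical `ChangeFeedEntry` model from the earlier sketches, and also needs `System.Linq`.
+
+```csharp
+// Sketch: process up to 'limit' v1 change events after 'currentSequence' and
+// return the sequence number to use as the offset on the next poll.
+static async Task<long> PollOnceAsync(HttpClient client, long currentSequence, int limit = 5)
+{
+    ChangeFeedEntry latest = await client.GetFromJsonAsync<ChangeFeedEntry>(
+        "changefeed/latest?includemetadata=false");
+
+    List<ChangeFeedEntry> changes = await client.GetFromJsonAsync<List<ChangeFeedEntry>>(
+        $"changefeed?offset={currentSequence}&limit={limit}&includemetadata=true");
+
+    foreach (ChangeFeedEntry change in changes)
+    {
+        // Process each event here (for example, index it or trigger downstream work).
+    }
+
+    if (changes.Count > 0)
+    {
+        return changes.Max(c => c.Sequence);   // Advance to the highest sequence number processed.
+    }
+
+    // No events returned, but newer events exist beyond the current offset: skip ahead by the limit.
+    return latest.Sequence > currentSequence ? currentSequence + limit : currentSequence;
+}
+```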
+
+### Other potential usage patterns
+
+Change feed support is well-suited for scenarios that process data based on objects that are changed. For example, it can be used to:
+
+* Build connected application pipelines, such as machine learning pipelines, that react to change events or schedule executions based on created or deleted instances.
+* Extract business analytics insights and metrics, based on changes that occur to your objects.
+* Poll the change feed to create an event source for push notifications.
+
+## Next steps
+
+[Pull changes from the change feed](pull-dicom-changes-from-change-feed.md)
+
healthcare-apis Dicom Services Conformance Statement V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicom-services-conformance-statement-v2.md
Title: DICOM Conformance Statement version 2 for Azure Health Data Services
-description: This document provides details about the DICOM Conformance Statement v2 for Azure Health Data Services.
+description: Read about the features and specifications of the DICOM service v2 API, which supports a subset of the DICOMweb Standard for medical imaging data. A DICOM Conformance Statement is a technical document that describes how a device or software implements the DICOM standard.
Previously updated : 10/13/2023 Last updated : 1/18/2024
The Medical Imaging Server for DICOM&reg; supports a subset of the DICOMweb Standard. Support includes: * [Studies Service](#studies-service)
- * [Store (STOW-RS)](#store-stow-rs)
- * [Retrieve (WADO-RS)](#retrieve-wado-rs)
- * [Search (QIDO-RS)](#search-qido-rs)
- * [Delete](#delete)
+ * [Store (STOW-RS)](#store-stow-rs)
+ * [Retrieve (WADO-RS)](#retrieve-wado-rs)
+ * [Search (QIDO-RS)](#search-qido-rs)
+ * [Delete](#delete)
* [Worklist Service (UPS Push and Pull SOPs)](#worklist-service-ups-rs)
- * [Create Workitem](#create-workitem)
- * [Retrieve Workitem](#retrieve-workitem)
- * [Update Workitem](#update-workitem)
- * [Change Workitem State](#change-workitem-state)
- * [Request Cancellation](#request-cancellation)
- * [Search Workitems](#search-workitems)
+ * [Create Workitem](#create-workitem)
+ * [Retrieve Workitem](#retrieve-workitem)
+ * [Update Workitem](#update-workitem)
+ * [Change Workitem State](#change-workitem-state)
+ * [Request Cancellation](#request-cancellation)
+ * [Search Workitems](#search-workitems)
-Additionally, the following nonstandard API(s) are supported:
+Additionally, these nonstandard API(s) are supported:
-* [Change Feed](dicom-change-feed-overview.md)
+* [Change Feed](change-feed-overview.md)
* [Extended Query Tags](dicom-extended-query-tags-overview.md)
+* [Bulk Update](update-files.md)
+* [Bulk Import](import-files.md)
+* [Export](export-dicom-files.md)
The service uses REST API versioning. The version of the REST API must be explicitly specified as part of the base URL, as in the following example:
The service ignores the 128-byte File Preamble, and replaces its contents with n
## Studies Service
-The [Studies Service](https://dicom.nema.org/medical/dicom/current/output/html/part18.html#chapter_10) allows users to store, retrieve, and search for DICOM Studies, Series, and Instances. We've added the nonstandard Delete transaction to enable a full resource lifecycle.
+The [Studies Service](https://dicom.nema.org/medical/dicom/current/output/html/part18.html#chapter_10) allows users to store, retrieve, and search for DICOM Studies, Series, and Instances. We added the nonstandard Delete transaction to enable a full resource lifecycle.
### Store (STOW-RS)
This transaction uses the POST method to store representations of studies, serie
| POST | ../studies | Store instances. | | POST | ../studies/{study} | Store instances for a specific study. |
-Parameter `study` corresponds to the DICOM attribute StudyInstanceUID. If it's specified, any instance that doesn't belong to the provided study is rejected with a `43265` warning code.
+Parameter `study` corresponds to the DICOM attribute StudyInstanceUID. If specified, any instance that doesn't belong to the provided study is rejected with a `43265` warning code.
The following `Accept` header(s) for the response are supported:
If an attribute is padded with nulls, the attribute is indexed when searchable a
| Code | Description | | : |:|
-| `200 (OK)` | All the SOP instances in the request have been stored. |
-| `202 (Accepted)` | The origin server stored some of the Instances and others have failed or returned warnings. Additional information regarding this error might be found in the response message body. |
+| `200 (OK)` | All the SOP instances in the request were stored. |
+| `202 (Accepted)` | The origin server stored some of the Instances and others failed or returned warnings. Additional information regarding this error might be found in the response message body. |
| `204 (No Content)` | No content was provided in the store transaction request. | | `400 (Bad Request)` | The request was badly formatted. For example, the provided study instance identifier didn't conform the expected UID format. | | `401 (Unauthorized)` | The client isn't authenticated. | | `406 (Not Acceptable)` | The specified `Accept` header isn't supported. |
-| `409 (Conflict)` | None of the instances in the store transaction request have been stored. |
+| `409 (Conflict)` | None of the instances in the store transaction request were stored. |
| `415 (Unsupported Media Type)` | The provided `Content-Type` isn't supported. | | `503 (Service Unavailable)` | The service is unavailable or busy. Try again later. |
An example response with `Accept` header `application/dicom+json` with a FailedA
| `272` | The store transaction didn't store the instance because of a general failure in processing the operation. | | `43264` | The DICOM instance failed the validation. | | `43265` | The provided instance `StudyInstanceUID` didn't match the specified `StudyInstanceUID` in the store request. |
-| `45070` | A DICOM instance with the same `StudyInstanceUID`, `SeriesInstanceUID`, and `SopInstanceUID` has already been stored. If you wish to update the contents, delete this instance first. |
-| `45071` | A DICOM instance is being created by another process, or the previous attempt to create has failed and the cleanup process hasn't had chance to clean up yet. Delete the instance first before attempting to create again. |
+| `45070` | A DICOM instance with the same `StudyInstanceUID`, `SeriesInstanceUID`, and `SopInstanceUID` was already stored. If you want to update the contents, delete this instance first. |
+| `45071` | A DICOM instance is being created by another process, or the previous attempt to create failed and the cleanup process isn't complete. Delete the instance first before attempting to create again. |
#### Store warning reason codes
The following `Accept` header(s) are supported for retrieving instances within a
* `multipart/related; type="application/dicom";` (when transfer-syntax isn't specified, 1.2.840.10008.1.2.1 is used as default) * `multipart/related; type="application/dicom"; transfer-syntax=1.2.840.10008.1.2.1` * `multipart/related; type="application/dicom"; transfer-syntax=1.2.840.10008.1.2.4.90`-- `*/*` (when transfer-syntax is not specified, `1.2.840.10008.1.2.1` is used as default and mediaType defaults to `application/dicom`)
+- `*/*` (when transfer-syntax isn't specified, `1.2.840.10008.1.2.1` is used as default and mediaType defaults to `application/dicom`)
#### Retrieve an Instance
The following `Accept` header(s) are supported for retrieving a specific instanc
* `multipart/related; type="application/dicom"; transfer-syntax=1.2.840.10008.1.2.1` * `application/dicom; transfer-syntax=1.2.840.10008.1.2.4.90` * `multipart/related; type="application/dicom"; transfer-syntax=1.2.840.10008.1.2.4.90`-- `*/*` (when transfer-syntax is not specified, `1.2.840.10008.1.2.1` is used as default and mediaType defaults to `application/dicom`)
+- `*/*` (when transfer-syntax isn't specified, `1.2.840.10008.1.2.1` is used as default and mediaType defaults to `application/dicom`)
#### Retrieve Frames
The following `Accept` headers are supported for retrieving frames:
* `multipart/related; type="image/jp2";` (when transfer-syntax isn't specified, `1.2.840.10008.1.2.4.90` is used as default) * `multipart/related; type="image/jp2";transfer-syntax=1.2.840.10008.1.2.4.90` * `application/octet-stream; transfer-syntax=*` for single frame retrieval-- `*/*` (when transfer-syntax is not specified, `1.2.840.10008.1.2.1` is used as default and mediaType defaults to `application/octet-stream`)
+- `*/*` (when transfer-syntax isn't specified, `1.2.840.10008.1.2.1` is used as default and mediaType defaults to `application/octet-stream`)
#### Retrieve transfer syntax
Retrieved metadata includes the null character when the attribute was padded wit
Cache validation is supported using the `ETag` mechanism. In the response to a metadata request, ETag is returned as one of the headers. This ETag can be cached and added as `If-None-Match` header in the later requests for the same metadata. Two types of responses are possible if the data exists:
-* Data hasn't changed since the last request: `HTTP 304 (Not Modified)` response is sent with no response body.
-* Data has changed since the last request: `HTTP 200 (OK)` response is sent with updated ETag. Required data is returned as part of the body.
+* Data is unchanged since the last request: `HTTP 304 (Not Modified)` response is sent with no response body.
+* Data changed since the last request: `HTTP 200 (OK)` response is sent with updated ETag. Required data is returned as part of the body.
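+
+As an illustrative sketch (not part of the specification), a client might perform this cache validation with `HttpClient` as follows; the `client`, `studyUid`, and `cachedETag` variables are assumptions.
+
+```csharp
+// Sketch: conditional metadata request that reuses a previously cached ETag.
+var request = new HttpRequestMessage(HttpMethod.Get, $"studies/{studyUid}/metadata");
+request.Headers.TryAddWithoutValidation("Accept", "application/dicom+json");
+
+if (!string.IsNullOrEmpty(cachedETag))
+{
+    request.Headers.TryAddWithoutValidation("If-None-Match", cachedETag);
+}
+
+HttpResponseMessage response = await client.SendAsync(request);
+
+if (response.StatusCode == System.Net.HttpStatusCode.NotModified)
+{
+    // 304: the metadata is unchanged; reuse the cached copy.
+}
+else if (response.IsSuccessStatusCode)
+{
+    cachedETag = response.Headers.ETag?.Tag;    // Cache the new ETag for later requests.
+    string metadataJson = await response.Content.ReadAsStringAsync();
+}
+```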
### Retrieve rendered image (for instance or frame)+ The following `Accept` header(s) are supported for retrieving a rendered image of an instance or a frame: - `image/jpeg`
When specifying a particular frame to return, frame indexing starts at 1.
The `quality` query parameter is also supported. An integer value between `1` and `100` inclusive (1 being worst quality, and 100 being best quality) might be passed as the value for the query parameter. This parameter is used for images rendered as `jpeg`, and is ignored for `png` render requests. If not specified the parameter defaults to `100`.
+### Retrieve original version
+After you use the [bulk update](update-files.md) operation, you can retrieve either the original or the latest version of a study, series, or instance. The latest version is always returned by default. The original version might be returned by setting the `msdicom-request-original` header to `true`. An example request is shown here:
+
+```http
+GET ../studies/{study}/series/{series}/instances/{instance}
+Accept: multipart/related; type="application/dicom"; transfer-syntax=*
+msdicom-request-original: true
+Content-Type: application/dicom
+ ```
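+
+For illustration, the same request in C# might look like the following sketch, assuming an authenticated `HttpClient` (`client`) and study, series, and instance UID variables:
+
+```csharp
+// Sketch: retrieve the original (pre-update) version of an instance.
+var request = new HttpRequestMessage(
+    HttpMethod.Get, $"studies/{studyUid}/series/{seriesUid}/instances/{instanceUid}");
+request.Headers.TryAddWithoutValidation(
+    "Accept", "multipart/related; type=\"application/dicom\"; transfer-syntax=*");
+request.Headers.TryAddWithoutValidation("msdicom-request-original", "true");
+
+HttpResponseMessage response = await client.SendAsync(request);
+```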
+ ### Retrieve response status codes | Code | Description | | : | :- |
-| `200 (OK)` | All requested data has been retrieved. |
-| `304 (Not Modified)` | The requested data hasn't been modified since the last request. Content isn't added to the response body in such case. For more information, see the above section **Retrieve Metadata Cache Validation (for Study, Series, or Instance)**. |
+| `200 (OK)` | All requested data was retrieved. |
+| `304 (Not Modified)` | The requested data is unchanged since the last request. Content isn't added to the response body in such case. For more information, see the above section **Retrieve Metadata Cache Validation (for Study, Series, or Instance)**. |
| `400 (Bad Request)` | The request was badly formatted. For example, the provided study instance identifier didn't conform to the expected UID format, or the requested transfer-syntax encoding isn't supported. | | `401 (Unauthorized)` | The client isn't authenticated. | | `403 (Forbidden)` | The user isn't authorized. |
The following `Accept` header(s) are supported for searching:
* `application/dicom+json` ### Search changes from v1
-In the v1 API and continued for v2, if an [extended query tag](dicom-extended-query-tags-overview.md) has any errors, because one or more of the existing instances had a tag value that couldn't be indexed, then subsequent search queries containing the extended query tag returns `erroneous-dicom-attributes` as detailed in the [documentation](dicom-extended-query-tags-overview.md#tag-query-status). However, tags (also known as attributes) with validation warnings from STOW-RS are **not** included in this header. If a store request results in validation warnings for [searchable attributes](#searchable-attributes) at the time the [instance was stored](#store-changes-from-v1), those attributes may not be used to search for the stored instance. However, any [searchable attributes](#searchable-attributes) that failed validation will be able to return results if the values are overwritten by instances in the same study/series that are stored after the failed one, or if the values are already stored correctly by a previous instance. If the attribute values are not overwritten, then they will not produce any search results.
+In the v1 API and continued for v2, if an [extended query tag](dicom-extended-query-tags-overview.md) has any errors, because one or more of the existing instances had a tag value that couldn't be indexed, then subsequent search queries containing the extended query tag returns `erroneous-dicom-attributes` as detailed in the [documentation](dicom-extended-query-tags-overview.md#tag-query-status). However, tags (also known as attributes) with validation warnings from STOW-RS are **not** included in this header. If a store request results in validation warnings for [searchable attributes](#searchable-attributes) at the time the [instance was stored](#store-changes-from-v1), those attributes may not be used to search for the stored instance. However, any [searchable attributes](#searchable-attributes) that failed validation can return results if the values are overwritten by instances in the same study or series that are stored after the failed one, or if the values are already stored correctly by a previous instance. If the attribute values aren't overwritten, then they don't produce any search results.
An attribute can be corrected in the following ways: - Delete the stored instance and upload a new instance with the corrected data+ - Upload a new instance in the same study/series with corrected data ### Supported search parameters
We support the following matching types.
#### Attribute ID
-Tags can be encoded in several ways for the query parameter. We have partially implemented the standard as defined in [PS3.18 6.7.1.1.1](http://dicom.nema.org/medical/dicom/2019a/output/chtml/part18/sect_6.7.html#sect_6.7.1.1.1). The following encodings for a tag are supported:
+Tags can be encoded in several ways for the query parameter. We partially implemented the standard as defined in [PS3.18 6.7.1.1.1](http://dicom.nema.org/medical/dicom/2019a/output/chtml/part18/sect_6.7.html#sect_6.7.1.1.1). These encodings for a tag are supported:
| Value | Example | | : | : |
The response is an array of DICOM datasets. Depending on the resource, by *defau
| :-- | :- | | (0008, 0018) | `SOPInstanceUID` |
-If `includefield=all`, the following attributes are included along with default attributes. Along with the default attributes, this is the full list of attributes supported at each resource level.
+If `includefield=all`, these attributes are included along with the default attributes. Together with the default attributes, this is the full list of attributes supported at each resource level.
#### Other Study tags
There are no restrictions on the request's `Accept` header, `Content-Type` heade
| Code | Description | | : | :- |
-| `204 (No Content)` | When all the SOP instances have been deleted. |
+| `204 (No Content)` | When all the SOP instances are deleted. |
| `400 (Bad Request)` | The request was badly formatted. | | `401 (Unauthorized)` | The client isn't authenticated. | | `403 (Forbidden)` | The user isn't authorized. |
The request payload might include Action Information as [defined in the DICOM St
| Code | Description | | : | :- |
-| `202 (Accepted)` | The request was accepted by the server, but the Target Workitem state hasn't necessarily changed yet. |
+| `202 (Accepted)` | The request was accepted by the server, but the Target Workitem state isn't necessarily changed yet. |
| `400 (Bad Request)` | There was a problem with the syntax of the request. | | `401 (Unauthorized)` | The client isn't authenticated. | | `403 (Forbidden)` | The user isn't authorized. |
To update a Workitem currently in the `SCHEDULED` state, the `Transaction UID` a
The `Content-Type` header is required, and must have the value `application/dicom+json`.
-The request payload contains a dataset with the changes to be applied to the target Workitem. When a sequence is modified, the request must include all Items in the sequence, not just the Items to be modified.
-When multiple Attributes need updated as a group, do this as multiple Attributes in a single request, not as multiple requests.
+The request payload contains a dataset with the changes to be applied to the target Workitem. When a sequence is modified, the request must include all Items in the sequence, not just the Items to be modified. When you need to update multiple Attributes as a group, update them as multiple Attributes in a single request, not as multiple requests.
There are many requirements related to DICOM data attributes in the context of a specific transaction. Attributes might be required to be present, required to not be present, required to be empty, or required to not be empty. These requirements can be
The request payload shall contain the Change UPS State Data Elements. These data
| Code | Description | | :- | :- | | `200 (OK)` | Workitem Instance was successfully retrieved. |
-| `400 (Bad Request)` | The request can't be performed for one of the following reasons: (1) the request is invalid given the current state of the Target Workitem. (2) the Transaction UID is missing. (3) the Transaction UID is incorrect |
+| `400 (Bad Request)` | The request can't be performed for one of the following reasons: (1) the request isn't valid given the current state of the Target Workitem. (2) the Transaction UID is missing. (3) the Transaction UID is incorrect |
| `401 (Unauthorized)` | The client isn't authenticated. | | `403 (Forbidden)` | The user isn't authorized. | | `404 (Not Found)` | The Target Workitem wasn't found. |
The following parameters for each query are supported:
| Key | Support Value(s) | Allowed Count | Description | | : | :- | : | :- | | `{attributeID}=` | `{value}` | 0...N | Search for attribute/ value matching in query. |
-| `includefield=` | `{attributeID}`<br/>`all` | 0...N | The other attributes to return in the response. Only top-level attributes can be specified to be included - not attributes that are part of sequences. Both public and private tags are supported. When `all` is provided, see [Search Response](#search-response) for more information about which attributes will be returned for each query type. If a mixture of `{attributeID}` and `all` is provided, the server defaults to using 'all'. |
+| `includefield=` | `{attributeID}`<br/>`all` | 0...N | The other attributes to return in the response. Only top-level attributes can be included - not attributes that are part of sequences. Both public and private tags are supported. When `all` is provided, see [Search Response](#search-response) for more information about which attributes are returned for each query type. If a mixture of `{attributeID}` and `all` is provided, the server defaults to using 'all'. |
| `limit=` | `{value}` | 0...1 | Integer value to limit the number of values returned in the response. Value can be between the range `1 >= x <= 200`. Defaulted to `100`. | | `offset=` | `{value}` | 0...1 | Skip {value} results. If an offset is provided larger than the number of search query results, a `204 (no content)` response is returned. |
-| `fuzzymatching=` | `true` \| `false` | 0...1 | If true fuzzy matching is applied to any attributes with the Person Name (PN) Value Representation (VR). It does a prefix word match of any name part inside these attributes. For example, if `PatientName` is `John^Doe`, then `joh`, `do`, `jo do`, `Doe` and `John Doe` all match. However `ohn` will **not** match. |
+| `fuzzymatching=` | `true` \| `false` | 0...1 | If true fuzzy matching is applied to any attributes with the Person Name (PN) Value Representation (VR). It does a prefix word match of any name part inside these attributes. For example, if `PatientName` is `John^Doe`, then `joh`, `do`, `jo do`, `Doe` and `John Doe` all match. However `ohn` doesn't match. |
##### Searchable Attributes
We support these matching types:
| Search Type | Supported Attribute | Example | | :- | : | : |
-| Range Query | `ScheduledΓÇïProcedureΓÇïStepΓÇïStartΓÇïDateΓÇïTime` | `{attributeID}={value1}-{value2}`. For date/time values, we support an inclusive range on the tag. This range will be mapped to `attributeID >= {value1} AND attributeID <= {value2}`. If `{value1}` isn't specified, all occurrences of dates/times prior to and including `{value2}` will be matched. Likewise, if `{value2}` isn't specified, all occurrences of `{value1}` and subsequent dates/times will be matched. However, one of these values has to be present. `{attributeID}={value1}-` and `{attributeID}=-{value2}` are valid, however, `{attributeID}=-` is invalid. |
+| Range Query | `ScheduledProcedureStepStartDateTime` | `{attributeID}={value1}-{value2}`. For date/time values, we support an inclusive range on the tag. This range is mapped to `attributeID >= {value1} AND attributeID <= {value2}`. If `{value1}` isn't specified, all occurrences of dates/times prior to and including `{value2}` are matched. Likewise, if `{value2}` isn't specified, all occurrences of `{value1}` and subsequent dates/times are matched. However, one of these values must be present. `{attributeID}={value1}-` and `{attributeID}=-{value2}` are valid, however, `{attributeID}=-` isn't valid. |
| Exact Match | All supported attributes | `{attributeID}={value1}` | | Fuzzy Match | `PatientName` | Matches any component of the name that starts with the value. | > [!NOTE]
-> While we don't support full sequence matching, we do support exact match on the attributes listed above that are contained in a sequence.
+> Although we don't support full sequence matching, we do support exact match on the listed attributes that are contained in a sequence.
##### Attribute ID
-Tags can be encoded in many ways for the query parameter. We have partially implemented the standard as defined in [PS3.18 6.7.1.1.1](http://dicom.nema.org/medical/dicom/2019a/output/chtml/part18/sect_6.7.html#sect_6.7.1.1.1). The following encodings for a tag are supported:
+Tags can be encoded in many ways for the query parameter. We partially implemented the standard as defined in [PS3.18 6.7.1.1.1](http://dicom.nema.org/medical/dicom/2019a/output/chtml/part18/sect_6.7.html#sect_6.7.1.1.1). The following encodings for a tag are supported:
| Value | Example | | :-- | : |
The query API returns one of the following status codes in the response:
#### Additional notes
-The query API won't return `413 (request entity too large)`. If the requested query response limit is outside of the acceptable range, a bad request is returned. Anything requested within the acceptable range, will be resolved.
+The query API doesn't return `413 (request entity too large)`. If the requested query response limit is outside of the acceptable range, a bad request is returned. Anything requested within the acceptable range is resolved.
-* Paged results are optimized to return matched newest instance first, this might result in duplicate records in subsequent pages if newer data matching the query was added.
+* Paged results are optimized to return matched newest instance first, which might result in duplicate records in subsequent pages if newer data matching the query was added.
* Matching is case insensitive and accent insensitive for PN VR types. * Matching is case insensitive and accent sensitive for other string VR types.
-* If there's a scenario where canceling a Workitem and querying the same happens at the same time, then the query will most likely exclude the Workitem that's getting updated and the response code will be `206 (Partial Content)`.
+* If there's a scenario where canceling a Workitem and querying the same happens at the same time, then the query most likely excludes the Workitem that's getting updated and the response code is `206 (Partial Content)`.
[!INCLUDE [DICOM trademark statement](../includes/healthcare-apis-dicom-trademark.md)]
healthcare-apis Dicom Services Conformance Statement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicom-services-conformance-statement.md
Title: DICOM Conformance Statement version 1 for Azure Health Data Services
-description: This document provides details about the DICOM Conformance Statement v1 for Azure Health Data Services.
+description: Read about the features and specifications of the DICOM service v1 API, which supports a subset of the DICOMweb Standard for medical imaging data. A DICOM Conformance Statement is a technical document that describes how a device or software implements the DICOM standard.
The Medical Imaging Server for DICOM&reg; supports a subset of the DICOMweb Stan
Additionally, the following nonstandard API(s) are supported:
-* [Change Feed](dicom-change-feed-overview.md)
+* [Change Feed](change-feed-overview.md)
* [Extended Query Tags](dicom-extended-query-tags-overview.md)
+* [Bulk update](update-files.md)
+* [Bulk import](import-files.md)
+* [Export](export-dicom-files.md)
The service uses REST API versioning. The version of the REST API must be explicitly specified as part of the base URL, as in the following example:
The service ignores the 128-byte File Preamble, and replaces its contents with n
## Studies Service
-The [Studies Service](https://dicom.nema.org/medical/dicom/current/output/html/part18.html#chapter_10) allows users to store, retrieve, and search for DICOM Studies, Series, and Instances. We've added the nonstandard Delete transaction to enable a full resource lifecycle.
+The [Studies Service](https://dicom.nema.org/medical/dicom/current/output/html/part18.html#chapter_10) allows users to store, retrieve, and search for DICOM Studies, Series, and Instances. We added the nonstandard Delete transaction to enable a full resource lifecycle.
### Store (STOW-RS)
This transaction uses the POST method to store representations of studies, serie
| POST | ../studies | Store instances. | | POST | ../studies/{study} | Store instances for a specific study. |
-Parameter `study` corresponds to the DICOM attribute StudyInstanceUID. If it's specified, any instance that doesn't belong to the provided study is rejected with a `43265` warning code.
+Parameter `study` corresponds to the DICOM attribute StudyInstanceUID. If specified, any instance that doesn't belong to the provided study is rejected with a `43265` warning code.
The following `Accept` header(s) for the response are supported:
Only transfer syntaxes with explicit Value Representations are accepted.
| Code | Description | | :-- | :- |
-| `200 (OK)` | All the SOP instances in the request have been stored. |
-| `202 (Accepted)` | Some instances in the request have been stored but others have failed. |
+| `200 (OK)` | All the SOP instances in the request are stored. |
+| `202 (Accepted)` | Some instances in the request are stored but others failed. |
| `204 (No Content)` | No content was provided in the store transaction request. | | `400 (Bad Request)` | The request was badly formatted. For example, the provided study instance identifier didn't conform to the expected UID format. | | `401 (Unauthorized)` | The client isn't authenticated. | | `403 (Forbidden)` | The user isn't authorized. | | `406 (Not Acceptable)` | The specified `Accept` header isn't supported. |
-| `409 (Conflict)` | None of the instances in the store transaction request have been stored. |
+| `409 (Conflict)` | None of the instances in the store transaction request were stored. |
| `415 (Unsupported Media Type)` | The provided `Content-Type` isn't supported. | | `503 (Service Unavailable)` | The service is unavailable or busy. Try again later. |
An example response with `Accept` header `application/dicom+json`:
| `272` | The store transaction didn't store the instance because of a general failure in processing the operation. | | `43264` | The DICOM instance failed the validation. | | `43265` | The provided instance `StudyInstanceUID` didn't match the specified `StudyInstanceUID` in the store request. |
-| `45070` | A DICOM instance with the same `StudyInstanceUID`, `SeriesInstanceUID`, and `SopInstanceUID` has already been stored. If you wish to update the contents, delete this instance first. |
-| `45071` | A DICOM instance is being created by another process, or the previous attempt to create has failed and the cleanup process hasn't had chance to clean up yet. Delete the instance first before attempting to create again. |
+| `45070` | A DICOM instance with the same `StudyInstanceUID`, `SeriesInstanceUID`, and `SopInstanceUID` is already stored. If you want to update the contents, delete this instance first. |
+| `45071` | A DICOM instance is being created by another process, or the previous attempt to create failed and the cleanup process isn't complete. Delete the instance first before attempting to create again. |
#### Store warning reason codes | Code | Description |
The following `Accept` header(s) are supported for retrieving instances within a
* `multipart/related; type="application/dicom";` (when transfer-syntax isn't specified, 1.2.840.10008.1.2.1 is used as default) * `multipart/related; type="application/dicom"; transfer-syntax=1.2.840.10008.1.2.1` * `multipart/related; type="application/dicom"; transfer-syntax=1.2.840.10008.1.2.4.90`-- `*/*` (when transfer-syntax is not specified, `1.2.840.10008.1.2.1` is used as default and mediaType defaults to `application/dicom`)
+- `*/*` (when transfer-syntax isn't specified, `1.2.840.10008.1.2.1` is used as default and mediaType defaults to `application/dicom`)
#### Retrieve an Instance
The following `Accept` header(s) are supported for retrieving a specific instanc
* `multipart/related; type="application/dicom"; transfer-syntax=1.2.840.10008.1.2.1` * `application/dicom; transfer-syntax=1.2.840.10008.1.2.4.90` * `multipart/related; type="application/dicom"; transfer-syntax=1.2.840.10008.1.2.4.90`-- `*/*` (when transfer-syntax is not specified, `1.2.840.10008.1.2.1` is used as default and mediaType defaults to `application/dicom`)
+- `*/*` (when transfer-syntax isn't specified, `1.2.840.10008.1.2.1` is used as default and mediaType defaults to `application/dicom`)
#### Retrieve Frames
The following `Accept` headers are supported for retrieving frames:
* `multipart/related; type="application/octet-stream"; transfer-syntax=1.2.840.10008.1.2.1` * `multipart/related; type="image/jp2";` (when transfer-syntax isn't specified, `1.2.840.10008.1.2.4.90` is used as default) * `multipart/related; type="image/jp2";transfer-syntax=1.2.840.10008.1.2.4.90`-- `*/*` (when transfer-syntax is not specified, `1.2.840.10008.1.2.1` is used as default and mediaType defaults to `application/octet-stream`)
+- `*/*` (when transfer-syntax isn't specified, `1.2.840.10008.1.2.1` is used as default and mediaType defaults to `application/octet-stream`)
#### Retrieve transfer syntax
Retrieving metadata doesn't return attributes with the following value represent
Cache validation is supported using the `ETag` mechanism. In the response to a metadata request, ETag is returned as one of the headers. This ETag can be cached and added as `If-None-Match` header in the later requests for the same metadata. Two types of responses are possible if the data exists:
-* Data hasn't changed since the last request: `HTTP 304 (Not Modified)` response is sent with no response body.
-* Data has changed since the last request: `HTTP 200 (OK)` response is sent with updated ETag. Required data is also returned as part of the body.
+* Data is unchanged since the last request: `HTTP 304 (Not Modified)` response is sent with no response body.
+* Data changed since the last request: `HTTP 200 (OK)` response is sent with updated ETag. Required data is also returned as part of the body.
### Retrieve rendered image (for instance or frame) The following `Accept` header(s) are supported for retrieving a rendered image an instance or a frame:
The `quality` query parameter is also supported. An integer value between `1` an
| Code | Description | | : | :- |
-| `200 (OK)` | All requested data has been retrieved. |
+| `200 (OK)` | All requested data was retrieved. |
| `304 (Not Modified)` | The requested data hasn't been modified since the last request. Content isn't added to the response body in such case. For more information, see the above section **Retrieve Metadata Cache Validation (for Study, Series, or Instance)**. | | `400 (Bad Request)` | The request was badly formatted. For example, the provided study instance identifier didn't conform to the expected UID format, or the requested transfer-syntax encoding isn't supported. | | `401 (Unauthorized)` | The client isn't authenticated. | | `403 (Forbidden)` | The user isn't authorized. | | `404 (Not Found)` | The specified DICOM resource couldn't be found, or for rendered request the instance didn't contain pixel data. |
-| `406 (Not Acceptable)` | The specified `Accept` header isn't supported, or for rendered and transcode requests the file requested was too large. |
+| `406 (Not Acceptable)` | The specified `Accept` header isn't supported, or for rendered and transcode requests the requested file was too large. |
| `503 (Service Unavailable)` | The service is unavailable or busy. Try again later. | ### Search (QIDO-RS)
There are no restrictions on the request's `Accept` header, `Content-Type` heade
| Code | Description | | : | :- |
-| `204 (No Content)` | When all the SOP instances have been deleted. |
+| `204 (No Content)` | When all the SOP instances are deleted. |
| `400 (Bad Request)` | The request was badly formatted. | | `401 (Unauthorized)` | The client isn't authenticated. | | `403 (Forbidden)` | The user isn't authorized. |
The request payload might include Action Information as [defined in the DICOM St
| Code | Description | | : | :- |
-| `202 (Accepted)` | The request was accepted by the server, but the Target Workitem state hasn't necessarily changed yet. |
+| `202 (Accepted)` | The request was accepted by the server, but the Target Workitem state isn't necessarily changed yet. |
| `400 (Bad Request)` | There was a problem with the syntax of the request. | | `401 (Unauthorized)` | The client isn't authenticated. | | `403 (Forbidden)` | The user isn't authorized. |
To update a Workitem currently in the `SCHEDULED` state, the `Transaction UID` a
The `Content-Type` header is required, and must have the value `application/dicom+json`. The request payload contains a dataset with the changes to be applied to the target Workitem. When a sequence is modified, the request must include all Items in the sequence, not just the Items to be modified.
-When multiple Attributes need to be updated as a group, do this as multiple Attributes in a single request, not as multiple requests.
+When you need to update multiple Attributes as a group, update them as multiple Attributes in a single request, not as multiple requests.
There are many requirements related to DICOM data attributes in the context of a specific transaction. Attributes might be required to be present, required to not be present, required to be empty, or required to not be empty. These requirements can be
The request payload shall contain the Change UPS State Data Elements. These data
| Code | Description | | :- | :- | | `200 (OK)` | Workitem Instance was successfully retrieved. |
-| `400 (Bad Request)` | The request can't be performed for one of the following reasons: (1) the request is invalid given the current state of the Target Workitem. (2) the Transaction UID is missing. (3) the Transaction UID is incorrect |
+| `400 (Bad Request)` | The request can't be performed for one of the following reasons: (1) the request isn't valid given the current state of the Target Workitem. (2) the Transaction UID is missing. (3) the Transaction UID is incorrect |
| `401 (Unauthorized)` | The client isn't authenticated. | | `403 (Forbidden)` | The user isn't authorized. | | `404 (Not Found)` | The Target Workitem wasn't found. |
The following parameters for each query are supported:
| Key | Support Value(s) | Allowed Count | Description | | : | :- | : | :- | | `{attributeID}=` | `{value}` | 0...N | Search for attribute/ value matching in query. |
-| `includefield=` | `{attributeID}`<br/>`all` | 0...N | The additional attributes to return in the response. Only top-level attributes can be specified to be included - not attributes that are part of sequences. Both public and private tags are supported. When `all` is provided, see [Search Response](#search-response) for more information about which attributes are returned for each query type. If a mixture of `{attributeID}` and `all` is provided, the server defaults to using 'all'. |
+| `includefield=` | `{attributeID}`<br/>`all` | 0...N | The other attributes to return in the response. Only top-level attributes can be included - not attributes that are part of sequences. Both public and private tags are supported. When `all` is provided, see [Search Response](#search-response) for more information about which attributes are returned for each query type. If a mixture of `{attributeID}` and `all` is provided, the server defaults to using 'all'. |
| `limit=` | `{value}` | 0...1 | Integer value to limit the number of values returned in the response. Value can be between the range `1 >= x <= 200`. Defaulted to `100`. | | `offset=` | `{value}` | 0...1 | Skip {value} results. If an offset is provided larger than the number of search query results, a `204 (no content)` response is returned. | | `fuzzymatching=` | `true` \| `false` | 0...1 | If true fuzzy matching is applied to any attributes with the Person Name (PN) Value Representation (VR). It does a prefix word match of any name part inside these attributes. For example, if `PatientName` is `John^Doe`, then `joh`, `do`, `jo do`, `Doe` and `John Doe` all match. However `ohn` does **not** match. |
We support these matching types:
| Search Type | Supported Attribute | Example | | :- | : | : |
-| Range Query | `ScheduledΓÇïProcedureΓÇïStepΓÇïStartΓÇïDateΓÇïTime` | `{attributeID}={value1}-{value2}`. For date/time values, we support an inclusive range on the tag. This is mapped to `attributeID >= {value1} AND attributeID <= {value2}`. If `{value1}` isn't specified, all occurrences of dates/times prior to and including `{value2}` will be matched. Likewise, if `{value2}` isn't specified, all occurrences of `{value1}` and subsequent dates/times will be matched. However, one of these values has to be present. `{attributeID}={value1}-` and `{attributeID}=-{value2}` are valid, however, `{attributeID}=-` is invalid. |
+| Range Query | `ScheduledProcedureStepStartDateTime` | `{attributeID}={value1}-{value2}`. For date/time values, we support an inclusive range on the tag. This is mapped to `attributeID >= {value1} AND attributeID <= {value2}`. If `{value1}` isn't specified, all occurrences of dates/times prior to and including `{value2}` are matched. Likewise, if `{value2}` isn't specified, all occurrences of `{value1}` and subsequent dates/times are matched. However, one of these values must be present. `{attributeID}={value1}-` and `{attributeID}=-{value2}` are valid, however, `{attributeID}=-` is invalid. |
| Exact Match | All supported attributes | `{attributeID}={value1}` | | Fuzzy Match | `PatientName` | Matches any component of the name that starts with the value. |
We support these matching types:
##### Attribute ID
-Tags can be encoded in many ways for the query parameter. We have partially implemented the standard as defined in [PS3.18 6.7.1.1.1](http://dicom.nema.org/medical/dicom/2019a/output/chtml/part18/sect_6.7.html#sect_6.7.1.1.1). The following encodings for a tag are supported:
+Tags can be encoded in many ways for the query parameter. We partially implemented the standard as defined in [PS3.18 6.7.1.1.1](http://dicom.nema.org/medical/dicom/2019a/output/chtml/part18/sect_6.7.html#sect_6.7.1.1.1). The following encodings for a tag are supported:
| Value | Example | | :-- | : |
The query API returns one of the following status codes in the response:
The query API doesn't return `413 (request entity too large)`. If the requested query response limit is outside of the acceptable range, a bad request is returned. Anything requested within the acceptable range is resolved.
-* Paged results are optimized to return matched newest instance first, this might result in duplicate records in subsequent pages if newer data matching the query was added.
+* Paged results are optimized to return matched newest instance first, which might result in duplicate records in subsequent pages if newer data matching the query was added.
* Matching is case insensitive and accent insensitive for PN VR types. * Matching is case insensitive and accent sensitive for other string VR types.
-* If there's a scenario where canceling a Workitem and querying the same happens at the same time, then the query will likely exclude the Workitem that's getting updated and the response code is `206 (Partial Content)`.
+* If there's a scenario where canceling a Workitem and querying the same happens at the same time, then the query likely excludes the Workitem that's getting updated and the response code is `206 (Partial Content)`.
[!INCLUDE [DICOM trademark statement](../includes/healthcare-apis-dicom-trademark.md)]
healthcare-apis Dicomweb Standard Apis With Dicom Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicomweb-standard-apis-with-dicom-services.md
Title: Use DICOMweb Standard APIs with the DICOM service in Azure Health Data Services
-description: This tutorial describes how to use DICOMweb Standard APIs with the DICOM service.
+ Title: Access DICOMweb APIs with the DICOM service in Azure Health Data Services
+description: Learn how to use the DICOMweb APIs to store, review, search, and delete DICOM objects with the DICOM service. The DICOM service also offers custom APIs for tracking changes and defining custom tags for DICOM data.
# Access DICOMweb APIs with the DICOM service
-The DICOM&reg; service allows you to store, review, search, and delete DICOM objects using a subset of DICOMweb APIs, which are web-based services that follow the DICOM standard. By using these APIs, you can access and manage your organization's DICOM data in the cloud without requiring complex protocols or formats.
+The DICOM&reg; service allows you to store, review, search, and delete DICOM objects by using a subset of DICOMweb APIs, which are web-based services that follow the DICOM standard. By using these APIs, you can access and manage your organization's DICOM data in the cloud without requiring complex protocols or formats.
The supported services are:
The supported services are:
In addition to the subset of DICOMweb APIs, the DICOM service supports these custom APIs that are unique to Microsoft:
-* [Change feed](dicom-change-feed-overview.md): Track changes to DICOM data over time.
+* [Change feed](change-feed-overview.md): Track changes to DICOM data over time.
* [Extended query tags](dicom-extended-query-tags-overview.md): Define custom tags for querying DICOM data.
+* [Bulk update](update-files.md)
+* [Bulk import](import-files.md)
+* [Export](export-dicom-files.md)
## Prerequisites
healthcare-apis Pull Dicom Changes From Change Feed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/pull-dicom-changes-from-change-feed.md
Title: Pull DICOM changes using the Change Feed
-description: This how-to guide explains how to pull DICOM changes using DICOM Change Feed for Azure Health Data Services.
+ Title: Access DICOM Change Feed logs by using C# and the DICOM client package in Azure Health Data Services
+description: Learn how to use C# code to consume Change Feed, a feature of the DICOM service that provides logs of all the changes in your organization's medical imaging data. The code example uses the DICOM client package to access and process the Change Feed.
Previously updated : 10/13/2023 Last updated : 1/18/2024
-# Pull DICOM changes using the change feed
+# Access DICOM Change Feed logs by using C# and the DICOM client package
-DICOM&reg; The change feed offers customers the ability to go through the history of the DICOM service and act on the create and delete events in the service. This how-to guide describes how to consume Change Feed.
+The Change Feed capability enables you to go through the history of the DICOM&reg; service and then act on the create and delete events.
-The Change Feed is accessed using REST APIs. These APIs along with sample usage of Change Feed are documented in the [Overview of DICOM Change Feed](dicom-change-feed-overview.md). The version of the REST API should be explicitly specified in the request URL as called out in the [API Versioning for DICOM service Documentation](api-versioning-dicom-service.md).
+You access the Change Feed by using REST APIs. These APIs, along with sample usage of Change Feed, are documented in the [DICOM Change Feed overview](change-feed-overview.md). The version of the REST API should be explicitly specified in the request URL as described in the [API Versioning for DICOM service Documentation](api-versioning-dicom-service.md).
## Consume Change Feed
-The following C# code example shows how to consume Change Feed using the DICOM client package.
+The following C# code example shows how to consume the Change Feed by using the DICOM client package.
```csharp
const int limit = 10;
healthcare-apis Update Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/update-files.md
+
+ Title: Update files in the DICOM service in Azure Health Data Services
+description: Learn how to use the bulk update API in Azure Health Data Services to modify DICOM attributes for multiple files in the DICOM service. This article explains the benefits, requirements, and steps of the bulk update operation.
++++ Last updated : 1/18/2024+++
+# Update DICOM files
+
+The bulk update operation lets you make changes to imaging metadata for multiple files stored in the DICOM&reg; service. For example, bulk update enables you to modify DICOM attributes for one or more studies in a single, asynchronous operation. You can use this API to update patient demographics and avoid the cost of repeating time-consuming uploads.
+
+Beyond the efficiency gains, the bulk update capability preserves a record of the changes in the [change feed](change-feed-overview.md) and persists the original, unmodified instances for future retrieval.
+
+## Limitations
+There are a few limitations when you use the bulk update operation:
+
+- A maximum of 50 studies can be updated in a single operation.
+- Only one bulk update operation can be performed at a time.
+- You can't delete only the latest version of a study or revert to the original version.
+- You can't update any field from non-null to a null value.
+
+## Use the bulk update operation
+Bulk update is an asynchronous, long-running operation available at the studies endpoint. The request payload includes one or more studies to update, the set of attributes to update, and the new values for those attributes.
+
+### Update instances in multiple studies
+The bulk update endpoint starts a long-running operation that updates all instances in each study with the specified attributes.
+
+```http
+POST {dicom-service-url}/{version}/studies/$bulkUpdate
+```
+
+```http
+POST {dicom-service-url}/{version}/partitions/{PartitionName}/studies/$bulkUpdate
+```
+
+#### Request header
+
+| Name | Required | Type | Description |
+| | | | - |
+| Content-Type | False | string | `application/json` is supported |
+
+#### Request body
+
+The request body contains the specification for studies to update. Both the `studyInstanceUids` and `changeDataset` are required.
+
+```json
+{
+ "studyInstanceUids": ["1.113654.3.13.1026"],
+ "changeDataset": {
+ "00100010": {
+ "vr": "PN",
+ "Value":
+ [
+ {
+ "Alphabetic": "New Patient Name 1"
+ }
+ ]
+ }
+ }
+}
+```
+
+#### Responses
+When a bulk update operation starts successfully, the API returns a `202` status code. The body of the response contains a reference to the operation.
+
+```http
+HTTP/1.1 202 Accepted
+Content-Type: application/json
+{
+ "id": "1323c079a1b64efcb8943ef7707b5438",
+ "href": "../v1/operations/1323c079a1b64efcb8943ef7707b5438"
+}
+```
+
+If the operation fails, the response includes information about the failure in the `errors` list, including the UIDs of the instances that failed to update.
+
+```json
+{
+ "operationId": "1323c079a1b64efcb8943ef7707b5438",
+ "type": "update",
+ "createdTime": "2023-05-08T05:01:30.1441374Z",
+ "lastUpdatedTime": "2023-05-08T05:01:42.9067335Z",
+ "status": "failed",
+ "percentComplete": 100,
+ "results": {
+ "studyUpdated": 0,
+ "studyFailed": 1,
+ "instanceUpdated": 0,
+ "errors": [
+ "Failed to update instances for study 1.113654.3.13.1026"
+ ]
+ }
+}
+```
+
+| Name | Type | Description |
+| -- | - | |
+| 202 (Accepted) | Operation Reference | A long-running operation was started to update DICOM attributes |
+| 400 (Bad Request) | | Request body has invalid data |
+
+### Operation Status
+The `href` URL can be polled for the current status of the update operation until completion. A return code of `200` indicates the operation completed successfully.
+
+```http
+GET {dicom-service-url}/{version}/operations/{operationId}
+```
+
+#### URI Parameters
+
+| Name | In | Required | Type | Description |
+| -- | - | -- | | - |
+| operationId | path | True | string | The operation ID |
+
+#### Responses
+
+```json
+{
+ "operationId": "1323c079a1b64efcb8943ef7707b5438",
+ "type": "update",
+ "createdTime": "2023-05-08T05:01:30.1441374Z",
+ "lastUpdatedTime": "2023-05-08T05:01:42.9067335Z",
+ "status": "completed",
+ "percentComplete": 100,
+ "results": {
+ "studyUpdated": 1,
+ "instanceUpdated": 16,
+ // Errors will go here
+ }
+}
+```
+
+| Name | Type | Description |
+| | | -- |
+| 200 (OK) | Operation | The operation with the specified ID is complete |
+| 202 (Accepted) | Operation | The operation with the specified ID is running |
+| 404 (Not Found) | | Operation not found |
+
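+To tie the pieces together, here's a rough C# sketch that starts a bulk update for one study and then polls the operation until it finishes. The request body, the `id` property, and the operations URL mirror the examples above; the service URL, access token, and polling interval are placeholders, and error handling is omitted.
+
+```csharp
+using System;
+using System.Net;
+using System.Net.Http;
+using System.Net.Http.Headers;
+using System.Text;
+using System.Text.Json;
+using System.Threading.Tasks;
+
+public static class BulkUpdateClient
+{
+    public static async Task UpdatePatientNameAsync(string dicomServiceUrl, string accessToken)
+    {
+        using var client = new HttpClient { BaseAddress = new Uri(dicomServiceUrl) };
+        client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", accessToken);
+
+        // Request body matches the $bulkUpdate example: one study and a new Patient's Name (0010,0010).
+        const string requestBody = @"{
+          ""studyInstanceUids"": [""1.113654.3.13.1026""],
+          ""changeDataset"": {
+            ""00100010"": { ""vr"": ""PN"", ""Value"": [ { ""Alphabetic"": ""New Patient Name 1"" } ] }
+          }
+        }";
+
+        using var content = new StringContent(requestBody, Encoding.UTF8, "application/json");
+        using HttpResponseMessage start = await client.PostAsync("/v1/studies/$bulkUpdate", content);
+        start.EnsureSuccessStatusCode(); // Expect 202 Accepted with an operation reference.
+
+        using JsonDocument reference = JsonDocument.Parse(await start.Content.ReadAsStringAsync());
+        string operationId = reference.RootElement.GetProperty("id").GetString();
+
+        // Poll the operation: 200 means complete, 202 means still running.
+        while (true)
+        {
+            using HttpResponseMessage status = await client.GetAsync($"/v1/operations/{operationId}");
+            status.EnsureSuccessStatusCode();
+
+            if (status.StatusCode == HttpStatusCode.OK)
+            {
+                Console.WriteLine(await status.Content.ReadAsStringAsync());
+                break;
+            }
+
+            await Task.Delay(TimeSpan.FromSeconds(5)); // Placeholder polling interval.
+        }
+    }
+}
+```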
+## Retrieving study versions
+The [Retrieve (WADO-RS)](dicom-services-conformance-statement-v2.md#retrieve-wado-rs) transaction allows you to retrieve both the original and the latest version of a study, series, or instance. The latest version is returned by default. To retrieve the original version, set the `msdicom-request-original` header to `true`. Here's an example request:
+
+```http
+GET {dicom-service-url}/{version}/studies/{study}/series/{series}/instances/{instance}
+Accept: multipart/related; type="application/dicom"; transfer-syntax=*
+msdicom-request-original: true
+Content-Type: application/dicom
+ ```
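+
+A minimal C# sketch of the same request with `HttpClient` might look like the following; the service URL, access token, and UIDs are placeholders.
+
+```csharp
+using System;
+using System.Net.Http;
+using System.Net.Http.Headers;
+using System.Threading.Tasks;
+
+public static class OriginalVersionRetriever
+{
+    public static async Task<byte[]> RetrieveOriginalAsync(
+        string dicomServiceUrl, string accessToken, string study, string series, string instance)
+    {
+        using var client = new HttpClient { BaseAddress = new Uri(dicomServiceUrl) };
+        client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", accessToken);
+
+        using var request = new HttpRequestMessage(
+            HttpMethod.Get, $"/v1/studies/{study}/series/{series}/instances/{instance}");
+        request.Headers.TryAddWithoutValidation(
+            "Accept", "multipart/related; type=\"application/dicom\"; transfer-syntax=*");
+        // Ask for the original, unmodified instance instead of the latest version.
+        request.Headers.TryAddWithoutValidation("msdicom-request-original", "true");
+
+        using HttpResponseMessage response = await client.SendAsync(request);
+        response.EnsureSuccessStatusCode();
+        return await response.Content.ReadAsByteArrayAsync(); // Multipart payload containing the DICOM file.
+    }
+}
+```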
+
+## Delete
+The [delete](dicom-services-conformance-statement-v2.md#delete) method deletes both the original and latest version of a study, series, or instance.
+
+## Change feed
+The [change feed](change-feed-overview.md) records update actions in the same manner as create and delete actions.
+
+## Supported DICOM modules
+Any attributes in the [Patient Identification Module](https://dicom.nema.org/dicom/2013/output/chtml/part03/sect_C.2.html#table_C.2-2) and [Patient Demographic Module](https://dicom.nema.org/dicom/2013/output/chtml/part03/sect_C.2.html#table_C.2-3) that aren't sequences can be updated using the bulk update operation. Supported attributes are called out in the tables.
+
+#### Patient identification module attributes
+| Attribute Name | Tag | Description |
+| - | --| |
+| Patient's Name | (0010,0010) | Patient's full name |
+| Patient ID | (0010,0020) | Primary hospital identification number or code for the patient. |
| Other Patient IDs | (0010,1000) | Other identification numbers or codes used to identify the patient. |
| Type of Patient ID | (0010,0022) | The type of identifier in this item. Enumerated values: TEXT, RFID, BARCODE. The identifier is coded as a string regardless of the type, not as a binary value. |
| Other Patient Names | (0010,1001) | Other names used to identify the patient. |
| Patient's Birth Name | (0010,1005) | Patient's birth name. |
| Patient's Mother's Birth Name | (0010,1060) | Birth name of patient's mother. |
| Medical Record Locator | (0010,1090) | An identifier used to find the patient's existing medical record (for example, film jacket). |
+
+#### Patient demographic module attributes
+| Attribute Name | Tag | Description |
+| - | --| |
+| Patient's Age | (0010,1010) | Age of the Patient. |
+| Occupation | (0010,2180) | Occupation of the Patient. |
+| Confidentiality Constraint on Patient Data Description | (0040,3001) | Special indication to the modality operator about confidentiality of patient information (for example, that they shouldn't use the patients name where other patients are present). |
+| Patient's Birth Date | (0010,0030) | Date of birth of the named patient |
+| Patient's Birth Time | (0010,0032) | Time of birth of the named patient |
+| Patient's Sex | (0010,0040) | Sex of the named patient. |
+| Quality Control Subject |(0010,0200) | Indicates whether or not the subject is a quality control phantom. |
+| Patient's Size | (0010,1020) | Patient's height or length in meters |
+| Patient's Weight | (0010,1030) | Weight of the patient in kilograms |
+| Patient's Address | (0010,1040) | Legal address of the named patient |
+| Military Rank | (0010,1080) | Military rank of patient |
+| Branch of Service | (0010,1081) | Branch of the military. The country or regional allegiance might also be included (for example, U.S. Army). |
+| Country of Residence | (0010,2150) | Country where a patient currently resides |
+| Region of Residence | (0010,2152) | Region within patient's country of residence |
+| Patient's Telephone Numbers | (0010,2154) | Telephone numbers at which the patient can be reached |
+| Ethnic Group | (0010,2160) | Ethnic group or race of patient |
+| Patient's Religious Preference | (0010,21F0) | The religious preference of the patient |
+| Patient Comments | (0010,4000) | User-defined comments about the patient |
+| Responsible Person | (0010,2297) | Name of person with medical decision making authority for the patient. |
+| Responsible Person Role | (0010,2298) | Relationship of Responsible Person to the patient. |
+| Responsible Organization | (0010,2299) | Name of organization with medical decision making authority for the patient. |
+| Patient Species Description | (0010,2201) | The species of the patient. |
+| Patient Breed Description | (0010,2292) | The breed of the patient. See Section C.7.1.1.1.1. |
+| Breed Registration Number | (0010,2295) | Identification number of a veterinary patient within the registry. |
| Issuer of Patient ID | (0010,0021) | Identifier of the Assigning Authority (system, organization, agency, or department) that issued the Patient ID. |
+
+#### General study module
+| Attribute Name | Tag | Description |
+| - | --| |
+| Referring Physician's Name | (0008,0090) | Name of the patient's referring physician. |
+| Accession Number | (0008,0050) | A RIS generated number that identifies the order for the Study. |
+| Study Description | (0008,1030) | Institution-generated description or classification of the Study (component) performed. |
+
healthcare-apis Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/release-notes.md
This article provides details about the features and enhancements made to Azure
## January 2024
-### FHIR Service
+### FHIR service
**Storage size support in FHIR service beyond 4TB**
-By default each FHIR instance is limited to storage capacity of 4TB. To provision a FHIR instance with storage capacity beyond 4TB, create support request with Issue type 'Service and Subscription limit (quotas)'.
+By default, each FHIR instance is limited to a storage capacity of 4 TB. To provision a FHIR instance with storage capacity beyond 4 TB, [create a support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview) with issue type 'Service and Subscription limit (quotas)'.
> [!NOTE] > Due to an issue in billing metrics for storage, customers opting for more than 4 TB of storage capacity won't be billed for storage until the issue is addressed.
iot-dps Concepts X509 Attestation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/concepts-x509-attestation.md
An intermediate certificate is an X.509 certificate, which has been signed by th
Intermediate certificates are used in a variety of ways. For example, intermediate certificates can be used to group devices by product lines, customers purchasing devices, company divisions, or factories.
-Imagine that Contoso is a large corporation with its own Public Key Infrastructure (PKI) using the root certificate named *ContosoRootCert*. Each subsidiary of Contoso has their own intermediate certificate that is signed by *ContosoRootCert*. Each subsidiary will then use their intermediate certificate to sign their leaf certificates for each device. In this scenario, Contoso can use a single DPS instance where *ContosoRootCert* has been verified with [proof-of-possession](./how-to-verify-certificates.md). They can have an enrollment group for each subsidiary. This way each individual subsidiary will not have to worry about verifying certificates.
+Imagine that Contoso is a large corporation with its own Public Key Infrastructure (PKI) using the root certificate named *ContosoRootCert*. Each subsidiary of Contoso has their own intermediate certificate that is signed by *ContosoRootCert*. Each subsidiary will then use their intermediate certificate to sign their leaf certificates for each device. In this scenario, Contoso can use a single DPS instance where *ContosoRootCert* is a [verified certificate](./how-to-verify-certificates.md). They can have an enrollment group for each subsidiary. This way each individual subsidiary will not have to worry about verifying certificates.
### End-entity "leaf" certificate
When DPS enrollments are configured for X.509 attestation, mutual TLS (mTLS) is
### DPS device chain requirements
-When a device is attempting registration through DPS using an enrollment group, the device must send the certificate chain from the leaf certificate to a certificate verified with [proof-of-possession](how-to-verify-certificates.md). Otherwise, authentication will fail.
+When a device is attempting registration through DPS using an enrollment group, the device must send the certificate chain from the leaf certificate to a [verified certificate](how-to-verify-certificates.md). Otherwise, authentication will fail.
For example, if only the root certificate is verified and an intermediate certificate is uploaded to the enrollment group, the device should present the certificate chain from leaf certificate all the way to the verified root certificate. This certificate chain would include any intermediate certificates in-between. Authentication will fail if DPS cannot traverse the certificate chain to a verified certificate.
For example, consider a corporation using the following device chain for a devic
![Example device certificate chain](./media/concepts-x509-attestation/example-device-cert-chain.png)
-Only the root certificate is verified, and *intermediate2* certificate is uploaded on the enrollment group.
+In this example, only the root certificate is verified, and *intermediate2* certificate is uploaded on the enrollment group.
![Example root verified](./media/concepts-x509-attestation/example-root-verified.png)
If the device sends the full device chain as follows during provisioning, then D
![Example device certificate chain](./media/concepts-x509-attestation/example-device-cert-chain.png)
-> [!NOTE]
-> Intermediate certificates can also be verified with [proof-of-possession](how-to-verify-certificates.md)..
-
### DPS order of operations with certificates

When a device connects to the provisioning service, the service walks its certificate chain beginning with the device (leaf) certificate and looks for a corresponding enrollment entry. It uses the first entry that it finds in the chain to determine whether to provision the device. That is, if an individual enrollment for the device (leaf) certificate exists, the provisioning service applies that entry. If there isn't an individual enrollment for the device, the service looks for an enrollment group that corresponds to the first intermediate certificate. If it finds one, it applies that entry; otherwise, it looks for an enrollment group for the next intermediate certificate, and so on down the chain to the root.
iot-dps How To Verify Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/how-to-verify-certificates.md
description: How to do proof-of-possession for X.509 CA certificates with Azure
Previously updated : 06/29/2021 Last updated : 01/16/2024
-# How to do proof-of-possession for X.509 CA certificates with your Device Provisioning Service
+# How to verify X.509 CA certificates with your Device Provisioning Service
-A verified X.509 Certificate Authority (CA) certificate is a CA certificate that has been uploaded and registered to your provisioning service and has gone through proof-of-possession with the service.
+A verified X.509 certificate authority (CA) certificate is a CA certificate that has been uploaded and registered to your provisioning service and then verified, either automatically or through proof-of-possession with the service.
Verified certificates play an important role when using enrollment groups. Verifying certificate ownership provides an additional security layer by ensuring that the uploader of the certificate is in possession of the certificate's private key. Verification prevents a malicious actor sniffing your traffic from extracting an intermediate certificate and using that certificate to create an enrollment group in their own provisioning service, effectively hijacking your devices. By proving ownership of the root or an intermediate certificate in a certificate chain, you're proving that you have permission to generate leaf certificates for the devices that will be registering as a part of that enrollment group. For this reason, the root or intermediate certificate configured in an enrollment group must either be a verified certificate or must roll up to a verified certificate in the certificate chain a device presents when it authenticates with the service. To learn more about X.509 certificate attestation, see [X.509 certificates](concepts-x509-attestation.md) and [Controlling device access to the provisioning service with X.509 certificates](concepts-x509-attestation.md#controlling-device-access-to-the-provisioning-service-with-x509-certificates).
+## Prerequisites
+
+Before you begin the steps in this article, have the following prerequisites prepared:
+
+* A DPS instance created in your Azure subscription.
+* A .cer or .pem certificate file.
+ ## Automatic verification of intermediate or root CA through self-attestation
-If you are using an intermediate or root CA that you trust and know you have full ownernship of the certificate, you can self-attest that you have verified the certificate.
+
+If you are using an intermediate or root CA that you trust and know you have full ownership of the certificate, you can self-attest that you have verified the certificate.
To add an auto-verified certificate, follow these steps:
-1. In the Azure portal, navigate to your provisioning service and open **Certificates** from the left-hand menu.
-2. Click **Add** to add a new certificate.
-3. Enter a friendly display name for your certificate. Browse to the .cer or .pem file that represents the public part of your X.509 certificate. Click **Upload**.
-4. Check the box next to **Set certificate status to verified on upload**.
+1. In the [Azure portal](https://portal.azure.com), navigate to your provisioning service and select **Certificates** from the left-hand menu.
+1. Select **Add** to add a new certificate.
+1. Enter a friendly display name for your certificate.
+1. Browse to the .cer or .pem file that represents the public part of your X.509 certificate. Select **Upload**.
+1. Check the box next to **Set certificate status to verified on upload**.
- ![Upload certificate_with_verified](./media/how-to-verify-certificates/add-certificate-with-verified.png)
+ :::image type="content" source="./media/how-to-verify-certificates/add-certificate-with-verified.png" alt-text="Screenshot that shows uploading a certificate and setting status to verified.":::
-1. Click **Save**.
+1. Select **Save**.
1. Your certificate is shown in the certificate tab with the status *Verified*.
-
- ![Certificate_Status](./media/how-to-verify-certificates/certificate-status.png)
+
+ :::image type="content" source="./media/how-to-verify-certificates/certificate-status.png" alt-text="Screenshot that shows the verified certificate after upload.":::
## Manual verification of intermediate or root CA
+Automatic verification is recommended when you upload new intermediate or root CA certificates to DPS. However, you can still perform proof-of-possession if it makes sense for your IoT scenario.
+ Proof-of-possession involves the following steps:
+
1. Get a unique verification code generated by the provisioning service for your X.509 CA certificate. You can do this from the Azure portal.
2. Create an X.509 verification certificate with the verification code as its subject and sign the certificate with the private key associated with your X.509 CA certificate.
3. Upload the signed verification certificate to the service. The service validates the verification certificate using the public portion of the CA certificate to be verified, thus proving that you are in possession of the CA certificate's private key.

### Register the public part of an X.509 certificate and get a verification code
-To register a CA certificate with your provisioning service and get a verification code that you can use during proof-of-possession, follow these steps.
+To register a CA certificate with your provisioning service and get a verification code that you can use during proof-of-possession, follow these steps.
-1. In the Azure portal, navigate to your provisioning service and open **Certificates** from the left-hand menu.
-2. Click **Add** to add a new certificate.
-3. Enter a friendly display name for your certificate. Browse to the .cer or .pem file that represents the public part of your X.509 certificate. Click **Upload**.
-4. Once you get a notification that your certificate is successfully uploaded, click **Save**.
+1. In the Azure portal, navigate to your provisioning service and open **Certificates** from the left-hand menu.
+1. Select **Add** to add a new certificate.
+1. Enter a friendly display name for your certificate in the **Certificate name** field.
+1. Select the folder icon, then browse to the .cer or .pem file that represents the public part of your X.509 certificate. Select **Open**.
+1. Once you get a notification that your certificate is successfully uploaded, select **Save**.
- ![Upload certificate](./media/how-to-verify-certificates/add-new-cert.png)
+ :::image type="content" source="./media/how-to-verify-certificates/add-new-cert.png" alt-text="Screenshot that shows uploading a certificate without automatic verification.":::
- Your certificate will show in the **Certificate Explorer** list. Note that the **STATUS** of this certificate is *Unverified*.
+ Your certificate will show in the **Certificate Explorer** list. Note that the status of this certificate is *Unverified*.
-5. Click on the certificate that you added in the previous step.
+1. Select the certificate that you added in the previous step to open its details.
-6. In **Certificate Details**, click **Generate Verification Code**.
+1. In the certificate details, notice that there's an empty **Verification code** field. Select the **Generate verification code** button.
-7. The provisioning service creates a **Verification Code** that you can use to validate the certificate ownership. Copy the code to your clipboard.
+ :::image type="content" source="./media/how-to-verify-certificates/verify-cert.png" alt-text="Screenshot that shows generating a verification code for proof-of-possession.":::
- ![Verify certificate](./media/how-to-verify-certificates/verify-cert.png)
+1. The provisioning service creates a **Verification code** that you can use to validate the certificate ownership. Copy the code to your clipboard.
### Digitally sign the verification code to create a verification certificate
-Now, you need to sign the *Verification Code* with the private key associated with your X.509 CA certificate, which generates a signature. This is known as [Proof of possession](https://tools.ietf.org/html/rfc5280#section-3.1) and results in a signed verification certificate.
-
-Microsoft provides tools and samples that can help you create a signed verification certificate:
+Now, you need to sign the verification code from DPS with the private key associated with your X.509 CA certificate, which generates a signature. This step is known as [Proof of possession](https://tools.ietf.org/html/rfc5280#section-3.1) and results in a signed verification certificate.
-- The **Azure IoT Hub C SDK** provides PowerShell (Windows) and Bash (Linux) scripts to help you create CA and leaf certificates for development and to perform proof-of-possession using a verification code. You can download the [files](https://github.com/Azure/azure-iot-sdk-c/tree/master/tools/CACertificates) relevant to your system to a working folder and follow the instructions in the [Managing CA certificates readme](https://github.com/Azure/azure-iot-sdk-c/blob/master/tools/CACertificates/CACertificateOverview.md) to perform proof-of-possession on a CA certificate. -- The **Azure IoT Hub C# SDK** contains the [Group Certificate Verification Sample](https://github.com/Azure/azure-iot-sdk-csharp/tree/main/provisioning/service/samples/how%20to%20guides/GroupCertificateVerificationSample), which you can use to do proof-of-possession.
-
-> [!IMPORTANT]
-> In addition to performing proof-of-possession, the PowerShell and Bash scripts cited previously also allow you to create root certificates, intermediate certificates, and leaf certificates that can be used to authenticate and provision devices. These certificates should be used for development only. They should never be used in a production environment.
+Microsoft provides tools and samples that can help you create a signed verification certificate:
-The PowerShell and Bash scripts provided in the documentation and SDKs rely on [OpenSSL](https://www.openssl.org/). You may also use OpenSSL or other third-party tools to help you do proof-of-possession. For an example using tooling provided with the SDKs, see [Create an X.509 certificate chain](tutorial-custom-hsm-enrollment-group-x509.md#create-an-x509-certificate-chain).
+* The **Azure IoT Hub C SDK** provides PowerShell (Windows) and Bash (Linux) scripts to help you create CA and leaf certificates for development and to perform proof-of-possession using a verification code. You can download the [files](https://github.com/Azure/azure-iot-sdk-c/tree/master/tools/CACertificates) relevant to your system to a working folder and follow the instructions in the [Managing CA certificates readme](https://github.com/Azure/azure-iot-sdk-c/blob/master/tools/CACertificates/CACertificateOverview.md) to perform proof-of-possession on a CA certificate.
+* The **Azure IoT Hub C# SDK** contains the [Group certificate verification sample](https://github.com/Azure/azure-iot-sdk-csharp/tree/main/provisioning/service/samples/how%20to%20guides/GroupCertificateVerificationSample), which you can use to do proof-of-possession.
+The PowerShell and Bash scripts provided in the documentation and SDKs rely on [OpenSSL](https://www.openssl.org/). You may also use OpenSSL or other third-party tools to help you do proof-of-possession. For an example using tooling provided with the SDKs, see [Create an X.509 certificate chain](tutorial-custom-hsm-enrollment-group-x509.md#create-an-x509-certificate-chain).
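
If you prefer to stay in .NET rather than use the scripts, the following is a minimal sketch of the same idea using `System.Security.Cryptography.X509Certificates`. It assumes your CA certificate and its private key are available together in a PFX file and that the CA certificate is marked as a CA; the file names, password, and validity period are placeholders.

```csharp
using System;
using System.IO;
using System.Security.Cryptography;
using System.Security.Cryptography.X509Certificates;

public static class VerificationCertificateFactory
{
    // Creates a certificate whose subject CN is the DPS verification code, signed by your CA
    // certificate's private key. File names, password, and validity period are placeholders.
    public static void Create(string verificationCode)
    {
        using var caCertificate = new X509Certificate2("ca-certificate.pfx", "<pfx-password>");

        using RSA verificationKey = RSA.Create(2048);
        var request = new CertificateRequest(
            $"CN={verificationCode}", verificationKey, HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1);

        byte[] serialNumber = new byte[8];
        RandomNumberGenerator.Fill(serialNumber);

        // Sign the verification certificate with the CA certificate's private key.
        using X509Certificate2 verificationCertificate = request.Create(
            caCertificate, DateTimeOffset.UtcNow, DateTimeOffset.UtcNow.AddDays(2), serialNumber);

        // Upload only the public part of the signed verification certificate to DPS.
        File.WriteAllText(
            "verification-code.cert.pem",
            "-----BEGIN CERTIFICATE-----\n" +
            Convert.ToBase64String(
                verificationCertificate.Export(X509ContentType.Cert),
                Base64FormattingOptions.InsertLineBreaks) +
            "\n-----END CERTIFICATE-----");
    }
}
```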
### Upload the signed verification certificate
-1. Upload the resulting signature as a verification certificate to your provisioning service in the portal. In **Certificate Details** on the Azure portal, use the _File Explorer_ icon next to the **Verification Certificate .pem or .cer file** field to upload the signed verification certificate from your system.
+Upload the resulting signature as a verification certificate to your provisioning service in the Azure portal.
-2. Once the certificate is successfully uploaded, click **Verify**. The **STATUS** of your certificate changes to **_Verified_** in the **Certificate Explorer** list. Click **Refresh** if it does not update automatically.
+1. In the certificate details on the Azure portal, where you copied the verification code from, select the folder icon next to the **Verification certificate .pem or .cer file** field. Browse to the signed verification certificate from your system and select **Open**.
- ![Upload certificate verification](./media/how-to-verify-certificates/upload-cert-verification.png)
+2. Once the certificate is successfully uploaded, select **Verify**. The status of your certificate changes to **_Verified_** in the **Certificates** list. Select **Refresh** if it does not update automatically.
## Next steps
iot-dps Quick Enroll Device X509 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/quick-enroll-device-x509.md
This article shows you how to programmatically create an [enrollment group](conc
:::zone pivot="programming-language-java"
-* [Java SE Development Kit 8](/azure/developer/java/fundamentals/java-support-on-azure). This article installs the [Java Service SDK](https://azure.github.io/azure-iot-sdk-java/master/service/) below. It works on both Windows and Linux. This article uses Windows.
+* [Java SE Development Kit 8](/azure/developer/java/fundamentals/java-support-on-azure). This article uses the [Azure IoT SDK for Java](https://azure.github.io/azure-iot-sdk-java/master/service/), which works on both Windows and Linux. This article uses Windows.
* [Maven 3](https://maven.apache.org/download.cgi).
This article shows you how to programmatically create an [enrollment group](conc
## Create test certificates
-Enrollment groups that use X.509 certificate attestation can be configured to use a root CA certificate or an intermediate certificate. The more usual case is to configure the enrollment group with an intermediate certificate. This provides more flexibility as multiple intermediate certificates can be generated or revoked by the same root CA certificate.
+Enrollment groups that use X.509 certificate attestation can be configured to use a root CA certificate or an intermediate certificate. The more usual case is to configure the enrollment group with an intermediate certificate. Using an intermediate certificate provides more flexibility as multiple intermediate certificates can be generated or revoked by the same root CA certificate.
-For this article, you'll need either a root CA certificate file, an intermediate CA certificate file, or both in *.pem* or *.cer* format. One file contains the public portion of the root CA X.509 certificate and the other contains the public portion of the intermediate CA X.509 certificate.
+For this article, you need either a root CA certificate file, an intermediate CA certificate file, or both in *.pem* or *.cer* format. One file contains the public portion of the root CA X.509 certificate and the other contains the public portion of the intermediate CA X.509 certificate.
If you already have a root CA file and/or an intermediate CA file, you can continue to [Add and verify your root or intermediate CA certificate](#add-and-verify-your-root-or-intermediate-ca-certificate).
-If you don't have a root CA file and/or an intermediate CA file, follow the steps in [Create an X.509 certificate chain](tutorial-custom-hsm-enrollment-group-x509.md?tabs=windows#create-an-x509-certificate-chain) to create them. You can stop after you complete the steps in [Create the intermediate CA certificate](tutorial-custom-hsm-enrollment-group-x509.md?tabs=windows#create-the-intermediate-ca-certificate) as you won't need device certificates to complete the steps in this article. When you're finished, you'll have two X.509 certificate files: *./certs/azure-iot-test-only.root.ca.cert.pem* and *./certs/azure-iot-test-only.intermediate.cert.pem*.
+If you don't have a root CA file and/or an intermediate CA file, follow the steps in [Create an X.509 certificate chain](tutorial-custom-hsm-enrollment-group-x509.md?tabs=windows#create-an-x509-certificate-chain) to create them. You can stop after you complete the steps in [Create the intermediate CA certificate](tutorial-custom-hsm-enrollment-group-x509.md?tabs=windows#create-the-intermediate-ca-certificate) as you don't need device certificates to complete the steps in this article. When you're finished, you have two X.509 certificate files: *./certs/azure-iot-test-only.root.ca.cert.pem* and *./certs/azure-iot-test-only.intermediate.cert.pem*.
## Add and verify your root or intermediate CA certificate
Devices that provision through an enrollment group using X.509 certificates, pre
For this article, assuming you have both a root CA certificate and an intermediate CA certificate signed by the root CA:
-* If you plan on creating the enrollment group with the root CA certificate, you'll need to upload and verify the root CA certificate.
+* If you plan on creating the enrollment group with the root CA certificate, you need to upload and verify the root CA certificate.
* If you plan on creating the enrollment group with the intermediate CA certificate, you can upload and verify either the root CA certificate or the intermediate CA certificate. (If you have multiple intermediate CA certificates in the certificate chain, you could, alternatively, upload and verify any intermediate certificate that sits between the root CA certificate and the intermediate certificate that you create the enrollment group with.)
To add and verify your root or intermediate CA certificate to the Device Provisi
## Get the connection string for your provisioning service
-For the sample in this article, you'll need to copy the connection string for your provisioning service.
+For the sample in this article, you need the connection string for your provisioning service. Use the following steps to retrieve it.
1. Sign in to the [Azure portal](https://portal.azure.com).
This section shows you how to create a .NET Core console application that adds a
1. Open *Program.cs* file in an editor.
-1. Replace the namespace statement at the top of the file with the following:
+1. Replace the namespace statement at the top of the file with the following line:
```csharp namespace CreateEnrollmentGroup;
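For orientation, here's a condensed, hedged sketch of creating an enrollment group that uses X.509 attestation with the provisioning service client; it isn't the full sample, and the member names shown (`ProvisioningServiceClient`, `X509Attestation.CreateFromRootCertificates`, `CreateOrUpdateEnrollmentGroupAsync`) are assumptions based on the Microsoft.Azure.Devices.Provisioning.Service package. The connection string, group ID, and certificate file are placeholders.

```csharp
using System;
using System.Security.Cryptography.X509Certificates;
using System.Threading.Tasks;
using Microsoft.Azure.Devices.Provisioning.Service;

namespace CreateEnrollmentGroup;

internal class Program
{
    // Placeholders: your DPS service connection string, group ID, and public CA certificate file.
    private const string ProvisioningConnectionString = "<your provisioning service connection string>";
    private const string EnrollmentGroupId = "x509-enrollment-group";
    private const string CertificatePath = "azure-iot-test-only.intermediate.cert.pem";

    private static async Task Main()
    {
        using ProvisioningServiceClient serviceClient =
            ProvisioningServiceClient.CreateFromConnectionString(ProvisioningConnectionString);

        // Use the public part of the root or intermediate CA certificate for X.509 attestation.
        using var certificate = new X509Certificate2(CertificatePath);
        Attestation attestation = X509Attestation.CreateFromRootCertificates(certificate);

        var enrollmentGroup = new EnrollmentGroup(EnrollmentGroupId, attestation)
        {
            ProvisioningStatus = ProvisioningStatus.Enabled,
        };

        EnrollmentGroup result = await serviceClient.CreateOrUpdateEnrollmentGroupAsync(enrollmentGroup);
        Console.WriteLine($"Created or updated enrollment group: {result.EnrollmentGroupId}");
    }
}
```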
This section shows you how to create a Node.js script that adds an enrollment gr
"--END CERTIFICATE--"; ```
- Updating this string value manually can be prone to error. To generate the proper syntax, you can copy and paste the following command into a **Git Bash** prompt, replace `your-cert.pem` with the location of your certificate file, and press **ENTER**. This command will generate the syntax for the `PUBLIC_KEY_CERTIFICATE_STRING` string constant value and write it to the output.
+ Updating this string value manually can be prone to error. To generate the proper syntax, you can copy and paste the following command into a **Git Bash** prompt, replace `your-cert.pem` with the location of your certificate file, and press **ENTER**. This command generates the syntax for the `PUBLIC_KEY_CERTIFICATE_STRING` string constant value and writes it to the output.
```bash sed 's/^/"/;$ !s/$/\\n" +/;$ s/$/"/' your-cert.pem
This section shows you how to create a Node.js script that adds an enrollment gr
> * Hard-coding the connection string for the provisioning service administrator is against security best practices. Instead, the connection string should be held in a secure manner, such as in a secure configuration file or in the registry. > * Be sure to upload only the public part of the signing certificate. Never upload .pfx (PKCS12) or .pem files containing private keys to the provisioning service.
-1. The sample allows you to set an IoT hub in the enrollment group to provision the device to. This must be an IoT hub that has been previously linked to the provisioning service. For this article, we'll let DPS choose from the linked hubs according to the default allocation policy, evenly-weighted distribution. Comment out the following statement in the file:
+1. The sample allows you to set an IoT hub in the enrollment group to provision the device to. This must be an IoT hub that has been previously linked to the provisioning service. For this article, we let DPS choose from the linked hubs according to the default allocation policy, evenly weighted distribution. Comment out the following statement in the file:
```Java enrollmentGroup.setIotHubHostName(IOTHUB_HOST_NAME); // Optional parameter.
This section shows you how to create a Node.js script that adds an enrollment gr
This command downloads the [Azure IoT DPS service client Maven package](https://mvnrepository.com/artifact/com.microsoft.azure.sdk.iot.provisioning/provisioning-service-client) to your machine and builds the sample. This package includes the binaries for the Java service SDK.
-1. Switch to the *target* folder and run the sample. Be aware that the build in the previous step outputs .jar file in the *target* folder with the following file format: `provisioning-x509-sample-{version}-with-deps.jar`; for example: `provisioning-x509-sample-1.8.1-with-deps.jar`. You may need to replace the version in the command below.
+1. Switch to the *target* folder and run the sample. The build in the previous step outputs a .jar file in the *target* folder with the following file format: `provisioning-x509-sample-{version}-with-deps.jar`; for example: `provisioning-x509-sample-1.8.1-with-deps.jar`. You may need to replace the version in the command below.
```cmd\sh cd target
If you plan to explore the Azure IoT Hub Device Provisioning Service tutorials,
:::zone pivot="programming-language-csharp"
-The [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c) has scripts that can help you create root CA, intermediate CA, and device certificates, and do proof-of-possession with the service to verify root and intermediate CA certificates. To learn more, see [Managing test CA certificates for samples and tutorials](https://github.com/Azure/azure-iot-sdk-c/blob/master/tools/CACertificates/CACertificateOverview.md).
-
-The [Group certificate verification sample](https://github.com/Azure/azure-iot-sdk-csharp/tree/main/provisioning/service/samples/how%20to%20guides/GroupCertificateVerificationSample) in the [Azure IoT SDK for C# (.NET)](https://github.com/Azure/azure-iot-sdk-csharp) shows how to do proof-of-possession in C# with an existing X.509 intermediate or root CA certificate.
+The [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c) has scripts that can help you create and manage certificates. To learn more, see [Managing test CA certificates for samples and tutorials](https://github.com/Azure/azure-iot-sdk-c/blob/master/tools/CACertificates/CACertificateOverview.md).
:::zone-end :::zone pivot="programming-language-nodejs"
-The [Azure IoT Node.js SDK](https://github.com/Azure/azure-iot-sdk-node) has scripts that can help you create root CA, intermediate CA, and device certificates, and do proof-of-possession with the service to verify root and intermediate CA certificates. To learn more, see [Tools for the Azure IoT Device Provisioning Device SDK for Node.js](https://github.com/Azure/azure-iot-sdk-node/tree/main/provisioning/tools).
+The [Azure IoT Node.js SDK](https://github.com/Azure/azure-iot-sdk-node) has scripts that can help you create and manage certificates. To learn more, see [Tools for the Azure IoT Device Provisioning Device SDK for Node.js](https://github.com/Azure/azure-iot-sdk-node/tree/main/provisioning/tools).
You can also use tools available in the [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c). To learn more, see [Managing test CA certificates for samples and tutorials](https://github.com/Azure/azure-iot-sdk-c/blob/master/tools/CACertificates/CACertificateOverview.md).
You can also use tools available in the [Azure IoT C SDK](https://github.com/Az
:::zone pivot="programming-language-java"
-The [Azure IoT Java SDK](https://github.com/Azure/azure-iot-sdk-java) contains test tooling that can help you create an X.509 certificate chain, upload a root or intermediate certificate from that chain, and do proof-of-possession with the service to verify root and intermediate CA certificates. To learn more, see [X509 certificate generator using DICE emulator](https://github.com/Azure/azure-iot-sdk-java/tree/main/provisioning/provisioning-tools/provisioning-x509-cert-generator).
+The [Azure IoT Java SDK](https://github.com/Azure/azure-iot-sdk-java) contains test tooling that can help you create and manage certificates. To learn more, see [X509 certificate generator using DICE emulator](https://github.com/Azure/azure-iot-sdk-java/tree/main/provisioning/provisioning-tools/provisioning-x509-cert-generator).
:::zone-end
iot-dps Tutorial Custom Hsm Enrollment Group X509 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/tutorial-custom-hsm-enrollment-group-x509.md
zone_pivot_groups: iot-dps-set1
# Tutorial: Provision multiple X.509 devices using enrollment groups
-In this tutorial, you'll learn how to provision groups of IoT devices that use X.509 certificates for authentication. Sample device code from the Azure IoT SDK will be executed on your development machine to simulate provisioning of X.509 devices. On real devices, device code would be deployed and run from the IoT device.
+In this tutorial, you learn how to provision groups of IoT devices that use X.509 certificates for authentication. Sample device code from the Azure IoT SDK will be executed on your development machine to simulate provisioning of X.509 devices. On real devices, device code would be deployed and run from the IoT device.
The Azure IoT Hub Device Provisioning Service supports two types of enrollments for provisioning devices:
The Azure IoT Hub Device Provisioning Service supports three forms of authentica
* [Symmetric keys](./concepts-symmetric-key-attestation.md) ::: zone pivot="programming-language-ansi-c"
-This tutorial uses the [custom HSM sample](https://github.com/Azure/azure-iot-sdk-c/tree/master/provisioning_client/samples/custom_hsm_example), which provides a stub implementation for interfacing with hardware-based secure storage. A [Hardware Security Module (HSM)](./concepts-service.md#hardware-security-module) is used for secure, hardware-based storage of device secrets. An HSM can be used with symmetric key, X.509 certificate, or TPM attestation to provide secure storage for secrets. Hardware-based storage of device secrets isn't required, but it's strongly recommended to help protect sensitive information like your device certificate's private key.
+This tutorial uses the [custom HSM sample](https://github.com/Azure/azure-iot-sdk-c/tree/master/provisioning_client/samples/custom_hsm_example), which provides a stub implementation for interfacing with hardware-based secure storage. A [Hardware Security Module (HSM)](./concepts-service.md#hardware-security-module) is used for secure, hardware-based storage of device secrets. An HSM can be used with symmetric key, X.509 certificate, or TPM attestation to provide secure storage for secrets. Hardware-based storage of device secrets isn't required, but it's recommended to help protect sensitive information like your device certificate's private key.
::: zone-end ::: zone pivot="programming-language-csharp,programming-language-nodejs,programming-language-python,programming-language-java"
-A [Hardware Security Module (HSM)](./concepts-service.md#hardware-security-module) is used for secure, hardware-based storage of device secrets. An HSM can be used with symmetric key, X.509 certificate, or TPM attestation to provide secure storage for secrets. Hardware-based storage of device secrets isn't required, but it's strongly recommended to help protect sensitive information like your device certificate's private key.
+A [Hardware Security Module (HSM)](./concepts-service.md#hardware-security-module) is used for secure, hardware-based storage of device secrets. An HSM can be used with symmetric key, X.509 certificate, or TPM attestation to provide secure storage for secrets. Hardware-based storage of device secrets isn't required, but it's recommended to help protect sensitive information like your device certificate's private key.
::: zone-end
-In this tutorial, you'll complete the following objectives:
+In this tutorial, you complete the following objectives:
> [!div class="checklist"] > > * Create a certificate chain of trust to organize a set of devices using X.509 certificates.
-> * Complete proof of possession with a signing certificate used with the certificate chain.
> * Create a new group enrollment that uses the certificate chain. > * Set up the development environment. > * Provision a device using the certificate chain using sample code in the Azure IoT device SDK.
The following prerequisites are for a Windows development environment. For Linux
* Open both a Windows command prompt and a Git Bash prompt.
- The steps in this tutorial assume that you're using a Windows machine and the OpenSSL installation that is installed as part of Git. You'll use the Git Bash prompt to issue OpenSSL commands and the Windows command prompt for everything else. If you're using Linux, you can issue all commands from a Bash shell.
+ The steps in this tutorial assume that you're using a Windows machine and the OpenSSL installation that is installed as part of Git. You use the Git Bash prompt to issue OpenSSL commands and the Windows command prompt for everything else. If you're using Linux, you can issue all commands from a Bash shell.
## Prepare your development environment ::: zone pivot="programming-language-ansi-c"
-In this section, you'll prepare a development environment used to build the [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c). The SDK includes sample code and tools used by devices provisioning with DPS.
+In this section, you prepare a development environment used to build the [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c). The SDK includes sample code and tools used by devices provisioning with DPS.
1. Open a web browser, and go to the [Release page of the Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c/releases/latest).
In this section, you'll prepare a development environment used to build the [Azu
6. The code sample uses an X.509 certificate to provide attestation via X.509 authentication. Run the following command to build a version of the SDK specific to your development platform that includes the device provisioning client. A Visual Studio solution for the simulated device is generated in the `cmake` directory.
- When specifying the path used with `-Dhsm_custom_lib` in the command below, make sure to use the absolute path to the library in the `cmake` directory you previously created. The path shown below assumes that you cloned the C SDK in the root directory of the C drive. If you used another directory, adjust the path accordingly.
+ When specifying the path used with `-Dhsm_custom_lib` in the following command, make sure to use the absolute path to the library in the `cmake` directory you previously created. The path shown below assumes that you cloned the C SDK in the root directory of the C drive. If you used another directory, adjust the path accordingly.
```cmd cmake -Duse_prov_client:BOOL=ON -Dhsm_custom_lib=c:/azure-iot-sdk-c/cmake/provisioning_client/samples/custom_hsm_example/Debug/custom_hsm_example.lib ..
git clone -b v2 https://github.com/Azure/azure-iot-sdk-python.git --recursive
## Create an X.509 certificate chain
-In this section, you'll generate an X.509 certificate chain of three certificates for testing each device with this tutorial. The certificates have the following hierarchy.
+In this section, you generate an X.509 certificate chain of three certificates for testing each device with this tutorial. The certificates have the following hierarchy.
:::image type="content" source="./media/tutorial-custom-hsm-enrollment-group-x509/example-device-cert-chain.png" alt-text="Diagram that shows relationship of root C A, intermediate C A, and device certificates." border="false":::
-[Root certificate](concepts-x509-attestation.md#root-certificate): You'll complete [proof of possession](how-to-verify-certificates.md) to verify the root certificate. This verification enables DPS to trust that certificate and verify certificates signed by it.
+[Root certificate](concepts-x509-attestation.md#root-certificate): You upload and verify the root certificate with DPS. This verification enables DPS to trust that certificate and verify certificates signed by it.
[Intermediate certificate](concepts-x509-attestation.md#intermediate-certificate): It's common to use intermediate certificates to group devices logically by product lines, company divisions, or other criteria. This tutorial uses a certificate chain with one intermediate certificate, but in a production scenario you may have several. The intermediate certificate in this chain is signed by the root certificate. This certificate is provided to the enrollment group created in DPS to logically group a set of devices. This configuration allows managing a whole group of devices that have device certificates signed by the same intermediate certificate.
In this section, you'll generate an X.509 certificate chain of three certificate
### Set up the X.509 OpenSSL environment
-In this section, you'll create the Openssl configuration files, directory structure, and other files used by the Openssl commands.
+In this section, you create the OpenSSL configuration files, directory structure, and other files used by the OpenSSL commands.
1. In your Git Bash command prompt, navigate to a folder where you want to generate the X.509 certificates and keys for this tutorial.
In this section, you'll create the Openssl configuration files, directory struct
### Create the root CA certificate
-Run the following commands to create the root CA private key and the root CA certificate. You'll use this certificate and key to sign your intermediate certificate.
+Run the following commands to create the root CA private key and the root CA certificate. You use this certificate and key to sign your intermediate certificate.
1. Create the root CA private key:
Run the following commands to create the root CA private key and the root CA cer
### Create the intermediate CA certificate
-Run the following commands to create the intermediate CA private key and the intermediate CA certificate. You'll use this certificate and key to sign your device certificate(s).
+Run the following commands to create the intermediate CA private key and the intermediate CA certificate. You use this certificate and key to sign your device certificate(s).
1. Create the intermediate CA private key:
In this section, you create two device certificates and their full chain certifi
1. Create the device certificate CSR.
- The subject common name (CN) of the device certificate must be set to the [registration ID](./concepts-service.md#registration-id) that your device will use to register with DPS. The registration ID is a case-insensitive string of alphanumeric characters plus the special characters: `'-'`, `'.'`, `'_'`, `':'`. The last character must be alphanumeric or dash (`'-'`). The common name must adhere to this format. DPS supports registration IDs up to 128 characters long; however, the maximum length of the subject common name in an X.509 certificate is 64 characters. The registration ID, therefore, is limited to 64 characters when using X.509 certificates. For group enrollments, the registration ID is also used as the device ID in IoT Hub.
+ The subject common name (CN) of the device certificate must be set to the [registration ID](./concepts-service.md#registration-id) that your device uses to register with DPS. The registration ID is a case-insensitive string of alphanumeric characters plus the special characters: `'-'`, `'.'`, `'_'`, `':'`. The last character must be alphanumeric or dash (`'-'`). The common name must adhere to this format. DPS supports registration IDs up to 128 characters long; however, the maximum length of the subject common name in an X.509 certificate is 64 characters. The registration ID, therefore, is limited to 64 characters when using X.509 certificates. For group enrollments, the registration ID is also used as the device ID in IoT Hub.
The subject common name is set using the `-subj` parameter. In the following command, the common name is set to **device-01**.
In this section, you create two device certificates and their full chain certifi
cat ./certs/device-01.cert.pem ./certs/azure-iot-test-only.intermediate.cert.pem ./certs/azure-iot-test-only.root.ca.cert.pem > ./certs/device-01-full-chain.cert.pem ```
-1. Open the certificate chain file, *./certs/device-01-full-chain.cert.pem*, in a text editor to examine it. The certificate chain text contains the full chain of all three certificates. You'll use this certificate chain later in this tutorial to provision `device-01`.
+1. Open the certificate chain file, *./certs/device-01-full-chain.cert.pem*, in a text editor to examine it. The certificate chain text contains the full chain of all three certificates. You use this certificate chain later in this tutorial to provision `device-01`.
The full chain text has the following format:
In this section, you create two device certificates and their full chain certifi
> > However, the device must also have access to the private key for the device certificate. This is necessary because the device must perform verification using that key at runtime when it attempts to provision. The sensitivity of this key is one of the main reasons it is recommended to use hardware-based storage in a real HSM to help secure private keys.
-You'll use the following files in the rest of this tutorial:
+You use the following files in the rest of this tutorial:
| Certificate | File | Description | | - | | - |
-| root CA certificate. | *certs/azure-iot-test-only.root.ca.cert.pem* | Will be uploaded to DPS and verified. |
-| intermediate CA certificate | *certs/azure-iot-test-only.intermediate.cert.pem* | Will be used to create an enrollment group in DPS. |
+| root CA certificate | *certs/azure-iot-test-only.root.ca.cert.pem* | Uploaded to DPS and verified. |
+| intermediate CA certificate | *certs/azure-iot-test-only.intermediate.cert.pem* | Used to create an enrollment group in DPS. |
| device-01 private key | *private/device-01.key.pem* | Used by the device to verify ownership of the device certificate during authentication with DPS. | | device-01 full chain certificate | *certs/device-01-full-chain.cert.pem* | Presented by the device to authenticate and register with DPS. | | device-02 private key | *private/device-02.key.pem* | Used by the device to verify ownership of the device certificate during authentication with DPS. |
You'll use the following files in the rest of this tutorial:
## Verify ownership of the root certificate
-For DPS to be able to validate the device's certificate chain during authentication, you must upload and verify ownership of the root CA certificate. Because you created the root CA certificate in the last section, you'll auto-verify that it's valid when you upload it. Alternatively, you can do manual verification of the certificate if you're using a CA certificate from a 3rd-party. To learn more about verifying CA certificates, see [How to do proof-of-possession for X.509 CA certificates](how-to-verify-certificates.md).
+For DPS to be able to validate the device's certificate chain during authentication, you must upload and verify ownership of the root CA certificate. Because you created the root CA certificate in the last section, you'll automatically verify that it's valid when you upload it.
To add the root CA certificate to your DPS instance, follow these steps:
To add the root CA certificate to your DPS instance, follow these steps:
1. Select the box next to **Set certificate status to verified on upload**.
- :::image type="content" source="./media/tutorial-custom-hsm-enrollment-group-x509/add-root-certificate.png" alt-text="Screenshot that shows adding the root C A certificate and the set certificate status to verified on upload box selected.":::
+ :::image type="content" source="./media/tutorial-custom-hsm-enrollment-group-x509/add-root-certificate.png" alt-text="Screenshot that shows adding the root CA certificate and the set certificate status to verified on upload box selected.":::
1. Select **Save**.
To add the root CA certificate to your DPS instance, follow these steps:
:::image type="content" source="./media/tutorial-custom-hsm-enrollment-group-x509/verify-root-certificate.png" alt-text="Screenshot that shows the verified root C A certificate in the list of certificates.":::
-## (Optional) Manual verification of root certificate
-If you didn't choose to automatically verify the certificate during upload, you manually prove possession:
-
-1. Select the new CA certificate.
-
-1. Select Generate Verification Code in the Certificate Details dialog.
-
-1. Create a certificate that contains the verification code. For example, if you're using the Bash script supplied by Microsoft, run `./certGen.sh create_verification_certificate "<verification code>"` to create a certificate named `verification-code.cert.pem`, replacing `<verification code>` with the previously generated verification code. For more information, you can download the [files](https://github.com/Azure/azure-iot-sdk-c/tree/main/tools/CACertificates) relevant to your system to a working folder and follow the instructions in the [Managing CA certificates readme](https://github.com/Azure/azure-iot-sdk-c/blob/main/tools/CACertificates/CACertificateOverview.md) to perform proof-of-possession on a CA certificate.
-
-1. Upload `verification-code.cert.pem` to your provisioning service in the Certificate Details dialog.
-
-1. Select Verify.
-
## Update the certificate store on Windows-based devices
On non-Windows devices, you can pass the certificate chain from the code as the certificate store.
Your signing certificates are now trusted on the Windows-based device and the fu
## Prepare and run the device provisioning code
-In this section, you update the sample code with your Device Provisioning Service instance information. If a device is authenticated, it will be assigned to an IoT hub linked to the Device Provisioning Service instance configured in this section.
+In this section, you update the sample code with your Device Provisioning Service instance information. If a device is authenticated, it's assigned to an IoT hub linked to the Device Provisioning Service instance configured in this section.
::: zone pivot="programming-language-ansi-c"
-In this section, you'll use your Git Bash prompt and the Visual Studio IDE.
+In this section, you use your Git Bash prompt and the Visual Studio IDE.
### Configure the provisioning device code
The C# sample code is set up to use X.509 certificates that are stored in a pass
openssl pkcs12 -inkey ./private/device-02.key.pem -in ./certs/device-02-full-chain.cert.pem -export -passin pass:1234 -passout pass:1234 -out ./certs/device-02-full-chain.cert.pfx ```
-In the rest of this section, you'll use your Windows command prompt.
+In the rest of this section, you use your Windows command prompt.
1. In the Azure portal, select the **Overview** tab for your Device Provisioning Service.
In the rest of this section, you'll use your Windows command prompt.
:::image type="content" source="./media/quick-create-simulated-device-x509/copy-id-scope.png" alt-text="Screenshot of the ID scope on Azure portal.":::
-3. In your Windows command prompt, change to the X509Sample directory. This directory is located in the *.\azure-iot-sdk-csharp\provisioning\device\samples\getting started\X509Sample* directory off the directory where you cloned the samples on your computer.
+3. In your Windows command prompt, change to the *X509Sample* directory. This directory is located in the *.\azure-iot-sdk-csharp\provisioning\device\samples\getting started\X509Sample* directory off the directory where you cloned the samples on your computer.
4. Enter the following command to build and run the X.509 device provisioning sample. Replace `<id-scope>` with the ID Scope that you copied in step 2, and replace `<your-certificate-folder>` with the path to the folder where you ran your OpenSSL commands.
In the following steps, use your Windows command prompt.
::: zone pivot="programming-language-java"
-In the following steps, you'll use both your Windows command prompt and your Git Bash prompt.
+In the following steps, you use both your Windows command prompt and your Git Bash prompt.
1. In the Azure portal, select the **Overview** tab for your Device Provisioning Service.
In the following steps, you'll use both your Windows command prompt and your Git
"--END CERTIFICATE--"; ```
- Updating this string value manually can be prone to error. To generate the proper syntax, you can copy and paste the following command into your **Git Bash prompt**, and press **ENTER**. This command will generate the syntax for the `leafPublicPem` string constant value and write it to the output.
+ Updating this string value manually can be prone to error. To generate the proper syntax, you can copy and paste the following command into your **Git Bash prompt**, and press **ENTER**. This command generates the syntax for the `leafPublicPem` string constant value and writes it to the output.
```Bash sed 's/^/"/;$ !s/$/\\n" +/;$ s/$/"/' ./certs/device-01.cert.pem
In the following steps, you'll use both your Windows command prompt and your Git
"--END PRIVATE KEY--"; ```
- To generate the proper syntax, you can copy and paste the following command into your **Git Bash prompt**, and press **ENTER**. This command will generate the syntax for the `leafPrivateKey` string constant value and write it to the output.
+ To generate the proper syntax, you can copy and paste the following command into your **Git Bash prompt**, and press **ENTER**. This command generates the syntax for the `leafPrivateKey` string constant value and writes it to the output.
```Bash sed 's/^/"/;$ !s/$/\\n" +/;$ s/$/"/' ./private/device-01.key.pem
In the following steps, you'll use both your Windows command prompt and your Git
"--END CERTIFICATE--"; ```
- To generate the proper syntax, you can copy and paste the following command into your **Git Bash prompt**, and press **ENTER**. This command will generate the syntax for the `rootPublicPem` string constant value and write it to the output.
+ To generate the proper syntax, you can copy and paste the following command into your **Git Bash prompt**, and press **ENTER**. This command generates the syntax for the `rootPublicPem` string constant value and writes it to the output.
```Bash sed 's/^/"/;$ !s/$/\\n" +/;$ s/$/"/' ./certs/azure-iot-test-only.root.ca.cert.pem
In the following steps, you'll use both your Windows command prompt and your Git
"--END CERTIFICATE--"; ```
- To generate the proper syntax, you can copy and paste the following command into your **Git Bash prompt**, and press **ENTER**. This command will generate the syntax for the `intermediatePublicPem` string constant value and write it to the output.
+ To generate the proper syntax, you can copy and paste the following command into your **Git Bash prompt**, and press **ENTER**. This command generates the syntax for the `intermediatePublicPem` string constant value and writes it to the output.
```Bash sed 's/^/"/;$ !s/$/\\n" +/;$ s/$/"/' ./certs/azure-iot-test-only.intermediate.cert.pem
In the following steps, you'll use both your Windows command prompt and your Git
java -jar ./provisioning-x509-sample-1.8.1-with-deps.jar ```
- The sample will connect to DPS, which will provision the device to an IoT hub. After the device is provisioned, the sample will send some test messages to the IoT hub.
+ The sample connects to DPS, which provisions the device to an IoT hub. After the device is provisioned, the sample sends some test messages to the IoT hub.
```output Starting...
iot-edge How To Provision Single Device Linux Symmetric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-provision-single-device-linux-symmetric.md
Title: Create and provision an IoT Edge device on Linux using symmetric keys - A
description: Create and provision a single IoT Edge device in IoT Hub for manual provisioning with symmetric keys -+ Last updated 04/25/2023
This article provides end-to-end instructions for registering and provisioning a Linux IoT Edge device, which includes installing IoT Edge.
-Each device that connects to an [IoT hub](../iot-hub/index.yml) has a device ID that's used to track [cloud-to-device](../iot-hub/iot-hub-devguide-c2d-guidance.md) or [device-to-cloud](../iot-hub/iot-hub-devguide-d2c-guidance.md) communications. You configure a device with its connection information, which includes:
+Each device that connects to an [IoT hub](../iot-hub/index.yml) has a device ID that's used to track [cloud-to-device](../iot-hub/iot-hub-devguide-c2d-guidance.md) or [device-to-cloud](../iot-hub/iot-hub-devguide-d2c-guidance.md) communications. You configure a device with its connection information, which includes:
* IoT hub hostname
* Device ID
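As a hedged sketch of how that connection information is typically produced with the Azure CLI (requires the `azure-iot` extension; the hub and device names are placeholders):

```azurecli
# Register an IoT Edge-enabled device identity with a symmetric key, then fetch its connection string.
az iot hub device-identity create --hub-name <your-iot-hub> --device-id <your-edge-device> --edge-enabled
az iot hub device-identity connection-string show --hub-name <your-iot-hub> --device-id <your-edge-device> --output tsv
```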
If you are using Visual Studio Code, there are helpful Azure IoT extensions that
Install both the Azure IoT Edge and Azure IoT Hub extensions:
-* [Azure IoT Edge](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-edge)
+* [Azure IoT Edge](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-edge)
* [Azure IoT Hub](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit)
To deploy your IoT Edge modules, go to your IoT hub in the Azure portal, then:
1. Since we want to deploy the IoT Edge default modules (edgeAgent and edgeHub), we don't need to add any modules to this pane, so select **Review + create** at the bottom.
1. You see the JSON confirmation of your modules. Select **Create** to deploy the modules.<br>
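If you script deployments instead of using the portal, the equivalent Azure CLI call is roughly the following (requires the `azure-iot` extension; the manifest file name is a placeholder):

```azurecli
# Apply a deployment manifest (modules and routes) to a single IoT Edge device.
az iot edge set-modules --hub-name <your-iot-hub> --device-id <your-edge-device> --content ./deployment.json
```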
-
+ For more information, see [Deploy a module](quickstart-linux.md#deploy-a-module).
## Verify successful configuration
Verify that the runtime was successfully installed and configured on your IoT Ed
Check that your device and modules are deployed and running, by viewing your device page in the Azure portal.
- :::image type="content" source="media/how-to-provision-single-device-linux-symmetric/modules-deployed.png" alt-text="Screenshot of IoT Edge modules deployed and running confirmation in the Azure portal." lightbox="media/how-to-provision-single-device-linux-symmetric/modules-deployed.png":::
+ :::image type="content" source="media/how-to-provision-single-device-linux-symmetric/modules-deployed.png" alt-text="Screenshot of IoT Edge modules deployed and running confirmation in the Azure portal." lightbox="media/how-to-provision-single-device-linux-symmetric/modules-deployed.png":::
Once your modules are deployed and running, list them in your device or virtual machine with the following command:
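On current IoT Edge releases, listing the running modules from the device shell typically looks like this:

```bash
# List the IoT Edge modules running on the device (edgeAgent and edgeHub should appear).
sudo iotedge list
```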
iot-hub Iot Hub Raspberry Pi Kit C Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-raspberry-pi-kit-c-get-started.md
Turn on Pi by using the micro USB cable and the power supply. Use the Ethernet c
1. Use one of the following SSH clients from your host computer to connect to your Raspberry Pi. **Windows Users**
- 1. Download and install [PuTTY](https://www.putty.org/) for Windows.
+ 1. Download and install [PuTTY](https://www.chiark.greenend.org.uk/~sgtatham/putty/) for Windows.
1. Copy the IP address of your Pi into the Host name (or IP address) section and select SSH as the connection type. ![PuTTy](./media/iot-hub-raspberry-pi-kit-node-get-started/7-putty-windows.png)
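If you'd rather not install PuTTY, any OpenSSH client works as well; a minimal sketch (replace the user name and IP address with your own, since `pi` is only the default account on many Raspberry Pi OS images):

```bash
# Connect to the Raspberry Pi over SSH.
ssh pi@<device-ip-address>
```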
iot-hub Iot Hub Raspberry Pi Kit Node Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-raspberry-pi-kit-node-get-started.md
Turn on Pi by using the micro USB cable and the power supply. Use the Ethernet c
**Windows Users**
- a. Download and install [PuTTY](https://www.putty.org/) for Windows.
+ a. Download and install [PuTTY](https://www.chiark.greenend.org.uk/~sgtatham/putty/) for Windows.
b. Copy the IP address of your Pi into the Host name (or IP address) section and select SSH as the connection type.
key-vault Rbac Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/rbac-guide.md
> Key Vault resource provider supports two resource types: **vaults** and **managed HSMs**. Access control described in this article only applies to **vaults**. To learn more about access control for managed HSM, see [Managed HSM access control](../managed-hsm/access-control.md). > [!NOTE]
-> Azure App Service certificate configuration through Azure Portal does not support Key Vault RBAC permission model, but you can use Azure PowerShell, Azure CLI, ARM template deployments. App Service certificate management requires **Key Vault Secrets User** and **Key Vault Reader** role assignments for App Service global identity, for example Microsoft Azure App Service' in public cloud.
+> Azure App Service certificate configuration through the Azure portal does not support the Key Vault RBAC permission model. You can use Azure PowerShell, the Azure CLI, or ARM template deployments with a **Key Vault Certificate User** role assignment for the App Service global identity, for example 'Microsoft Azure App Service' in the public cloud.
Azure role-based access control (Azure RBAC) is an authorization system built on [Azure Resource Manager](../../azure-resource-manager/management/overview.md) that provides fine-grained access management of Azure resources.
More about Azure Key Vault management guidelines, see:
| Built-in role | Description | ID |
| --- | --- | --- |
| Key Vault Administrator | Perform all data plane operations on a key vault and all objects in it, including certificates, keys, and secrets. Cannot manage key vault resources or manage role assignments. Only works for key vaults that use the 'Azure role-based access control' permission model. | 00482a5a-887f-4fb3-b363-3b7fe8e74483 |
+| Key Vault Reader | Read metadata of key vaults and its certificates, keys, and secrets. Cannot read sensitive values such as secret contents or key material. Only works for key vaults that use the 'Azure role-based access control' permission model. | 21090545-7ca7-4776-b22c-e363652d74d2 |
| Key Vault Certificates Officer | Perform any action on the certificates of a key vault, except manage permissions. Only works for key vaults that use the 'Azure role-based access control' permission model. | a4417e6f-fecd-4de8-b567-7b0420556985 |
+| Key Vault Certificates User | Read entire certificate contents including secret and key portion. Only works for key vaults that use the 'Azure role-based access control' permission model. | a4417e6f-fecd-4de8-b567-7b0420556985 |
| Key Vault Crypto Officer | Perform any action on the keys of a key vault, except manage permissions. Only works for key vaults that use the 'Azure role-based access control' permission model. | 14b46e9e-c2b7-41b4-b07b-48a6ebf60603 |
| Key Vault Crypto Service Encryption User | Read metadata of keys and perform wrap/unwrap operations. Only works for key vaults that use the 'Azure role-based access control' permission model. | e147488a-f6f5-4113-8e2d-b22465e65bf6 |
| Key Vault Crypto User | Perform cryptographic operations using keys. Only works for key vaults that use the 'Azure role-based access control' permission model. | 12338af0-0e69-4776-bea7-57ae8d297424 |
| Key Vault Crypto Service Release User | Release keys for [Azure Confidential Computing](../../confidential-computing/concept-skr-attestation.md) and equivalent environments. Only works for key vaults that use the 'Azure role-based access control' permission model.
-| Key Vault Reader | Read metadata of key vaults and its certificates, keys, and secrets. Cannot read sensitive values such as secret contents or key material. Only works for key vaults that use the 'Azure role-based access control' permission model. | 21090545-7ca7-4776-b22c-e363652d74d2 |
| Key Vault Secrets Officer | Perform any action on the secrets of a key vault, except manage permissions. Only works for key vaults that use the 'Azure role-based access control' permission model. | b86a8fe4-44ce-4948-aee5-eccb2c155cd7 |
| Key Vault Secrets User | Read secret contents including secret portion of a certificate with private key. Only works for key vaults that use the 'Azure role-based access control' permission model. | 4633458b-17de-408a-b874-0445c86b69e6 |
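For example, granting one of these data plane roles with the Azure CLI might look like the following sketch (placeholder IDs; the scope can also be a resource group or subscription for broader assignments):

```azurecli
# Assign the Key Vault Secrets User role to an application or user at the scope of a single vault.
az role assignment create \
  --role "Key Vault Secrets User" \
  --assignee <object-id-or-app-id> \
  --scope /subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.KeyVault/vaults/<vault-name>
```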
-> [!NOTE]
-> There is no `Key Vault Certificate User` because applications require secrets portion of certificate with private key. The `Key Vault Secrets User` role should be used for applications to retrieve certificate.
- For more information about Azure built-in roles definitions, see [Azure built-in roles](../../role-based-access-control/built-in-roles.md). ### Managing built-in Key Vault data plane role assignments
key-vault Rbac Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/rbac-migration.md
Key Vault built-in roles for keys, certificates, and secrets access management:
- Key Vault Administrator - Key Vault Reader - Key Vault Certificates Officer
+- Key Vault Certificate User
- Key Vault Crypto Officer - Key Vault Crypto User - Key Vault Crypto Service Encryption User
Access policy predefined permission templates:
| Azure Information BYOK | Keys: get, decrypt, sign | N/A<br>Custom role required| > [!NOTE]
-> Azure App Service certificate configuration through Azure Portal does not support Key Vault RBAC permission model. You can use Azure PowerShell, Azure CLI, ARM template deployments with **Key Vault Secrets User** and **Key Vault Reader** role assignments for 'Microsoft Azure App Service' global indentity.
+> Azure App Service certificate configuration through the Azure portal does not support the Key Vault RBAC permission model. You can use Azure PowerShell, the Azure CLI, or ARM template deployments with a **Key Vault Certificate User** role assignment for the App Service global identity, for example 'Microsoft Azure App Service' in the public cloud.
## Assignment scopes mapping
load-balancer Load Balancer Ipv6 For Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-ipv6-for-linux.md
Last updated 04/21/2023 -+ # Configure DHCPv6 for Linux VMs
This document describes how to enable DHCPv6 so that your Linux virtual machine
> [!WARNING] > By improperly editing network configuration files, you can lose network access to your VM. We recommend that you test your configuration changes on non-production systems. The instructions in this article have been tested on the latest versions of the Linux images in the Azure Marketplace. For more detailed instructions, consult the documentation for your own version of Linux.
-# [RHEL/CentOS/Oracle](#tab/redhat)
+# [RHEL/CentOS/Oracle](#tab/redhat)
For RHEL, CentOS, and Oracle Linux versions 7.4 or higher, follow these steps:
For RHEL, CentOS, and Oracle Linux versions 7.4 or higher, follow these steps:
```bash sudo ifdown eth0 && sudo ifup eth0 ```
-
-# [openSUSE/SLES](#tab/suse)
+
+# [openSUSE/SLES](#tab/suse)
Recent SUSE Linux Enterprise Server (SLES) and openSUSE images in Azure have been preconfigured with DHCPv6. No other changes are required when you use these images. If you have a VM that's based on an older or custom SUSE image, use one of the following procedures to configure DHCPv6.
Recent SUSE Linux Enterprise Server (SLES) and openSUSE images in Azure have bee
```config DHCLIENT6_MODE='managed'
-
+ 3. Renew the IPv6 address: ```bash sudo ifdown eth0 && sudo ifup eth0
- ```
+ ```
## OpenSUSE Leap and SLES 12
For openSUSE Leap and SLES 12, follow these steps:
For openSUSE Leap and SLES 12, follow these steps:
```bash sudo ifdown eth0 && sudo ifup eth0
- ```
+ ```
-# [Ubuntu](#tab/ubuntu)
+# [Ubuntu](#tab/ubuntu)
For Ubuntu versions 17.10 or higher, follow these steps:
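As a rough sketch only (assuming a netplan-based Ubuntu image such as the standard Azure Marketplace images; the file name and interface name on your VM may differ), enabling DHCPv6 generally means adding `dhcp6: true` to the NIC configuration and reapplying it:

```bash
# Example netplan fragment, for illustration; edit the file that already exists under /etc/netplan/:
#
#   network:
#     version: 2
#     ethernets:
#       eth0:
#         dhcp4: true
#         dhcp6: true
#
# Apply the updated configuration.
sudo netplan apply
```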
logic-apps Edit App Settings Host Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/edit-app-settings-host-settings.md
ms.suite: integration Previously updated : 01/09/2024 Last updated : 01/18/2024
In *single-tenant* Azure Logic Apps, deployment becomes easier because you can s
App settings integrate with Azure Key Vault. You can [directly reference secure strings](../app-service/app-service-key-vault-references.md), such as connection strings and keys. Similar to Azure Resource Manager templates (ARM templates), where you can define environment variables at deployment time, you can define app settings within your [logic app workflow definition](/azure/templates/microsoft.logic/workflows). You can then capture dynamically generated infrastructure values, such as connection endpoints, storage strings, and more. However, app settings have size limitations and can't be referenced from certain areas in Azure Logic Apps.
+> [!NOTE]
+>
+> If you use Key Vault, make sure that you store only secrets, such as passwords, credentials, and certificates.
+> In a logic app workflow, don't use Key Vault to store non-secret values, such as URL paths, that the workflow designer needs in order to make calls.
+> The designer can't dereference an app setting that references a Key Vault resource type, which results in an
+> error and a failed call. For non-secret values, store them directly in app settings.
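To illustrate the reference format, here's a hedged sketch of setting an app setting whose value is a Key Vault reference (the setting, app, vault, and secret names are placeholders; Standard logic apps run on the App Service platform, so the `az functionapp` settings commands generally apply):

```azurecli
# Point an app setting at a secret stored in Key Vault by using a Key Vault reference.
az functionapp config appsettings set \
  --name <your-logic-app-name> \
  --resource-group <your-resource-group> \
  --settings "ServiceBusConnection=@Microsoft.KeyVault(SecretUri=https://<your-vault-name>.vault.azure.net/secrets/<your-secret-name>/)"
```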
+ For more information about setting up your logic apps for deployment, see the following documentation: - [Create parameters for values that change in workflows between environments for single-tenant Azure Logic Apps](parameterize-workflow-app.md)
machine-learning Concept Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-workspace.md
Workspaces are places to collaborate with colleagues to create machine learning
Ready to get started? [Create a workspace](#create-a-workspace).
-
## Tasks performed within a workspace
For machine learning teams, the workspace is a place to organize their work. Below are some of the tasks you can start from a workspace:
To automate workspace creation using your preferred security settings:
:::moniker range="azureml-api-1" * Use the [Azure Machine Learning CLI](./v1/reference-azure-machine-learning-cli.md) or [Azure Machine Learning SDK for Python](how-to-manage-workspace.md?tabs=python#create-a-workspace) for prototyping and as part of your [MLOps workflows](concept-model-management-and-deployment.md). :::moniker-end
-* Use [REST APIs](how-to-manage-rest.md) directly in scripting environment, for platform integration or in MLOps workfows.
+* Use [REST APIs](how-to-manage-rest.md) directly in a scripting environment, for platform integration, or in MLOps workflows.
## Tools for workspace interaction and management
machine-learning Dsvm Ubuntu Intro https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/dsvm-ubuntu-intro.md
Last updated 04/18/2023-+ #Customer intent: As a data scientist, I want to learn how to provision the Linux DSVM so that I can move my existing workflow to the cloud.
Here are the steps to create an instance of the Ubuntu 20.04 Data Science Virtua
1. On the next window, select **Create**. 1. You should be redirected to the "Create a virtual machine" blade.
-
+ 1. Enter the following information to configure each step of the wizard: 1. **Basics**:
-
+
   * **Subscription**: If you have more than one subscription, select the one on which the machine will be created and billed. You must have resource creation privileges for this subscription.
   * **Resource group**: Create a new group or use an existing one.
   * **Virtual machine name**: Enter the name of the virtual machine. This name will be used in your Azure portal.
   * **Region**: Select the datacenter that's most appropriate. For fastest network access, it's the datacenter that has most of your data or is closest to your physical location. Learn more about [Azure Regions](https://azure.microsoft.com/global-infrastructure/regions/).
   * **Image**: Leave the default value.
   * **Size**: This option should autopopulate with a size that is appropriate for general workloads. Read more about [Linux VM sizes in Azure](../../virtual-machines/sizes.md).
- * **Authentication type**: For quicker setup, select "Password."
-
+ * **Authentication type**: For quicker setup, select "Password."
+ > [!NOTE] > If you intend to use JupyterHub, make sure to select "Password," as JupyterHub is *not* configured to use SSH public keys. * **Username**: Enter the administrator username. You'll use this username to log into your virtual machine. This username need not be the same as your Azure username. Do *not* use capitalized letters.
-
+ > [!IMPORTANT] > If you use capitalized letters in your username, JupyterHub will not work, and you'll encounter a 500 internal server error.
- * **Password**: Enter the password you'll use to log into your virtual machine.
-
+ * **Password**: Enter the password you'll use to log into your virtual machine.
+ 1. Select **Review + create**. 1. **Review+create**
- * Verify that all the information you entered is correct.
+ * Verify that all the information you entered is correct.
* Select **Create**.
-
+ The provisioning should take about 5 minutes. The status is displayed in the Azure portal.
## How to access the Ubuntu Data Science Virtual Machine
The Linux VM is already provisioned with X2Go Server and ready to accept client
* **SSH Port**: Leave it at 22, the default value.
* **Session Type**: Change the value to **XFCE**. Currently, the Linux VM supports only the XFCE desktop.
* **Media tab**: You can turn off sound support and client printing if you don't need to use them.
- * **Shared folders**: Use this tab to add client machine directory that you would like to mount on the VM.
+ * **Shared folders**: Use this tab to add client machine directory that you would like to mount on the VM.
![X2go configuration](./media/dsvm-ubuntu-intro/x2go-ubuntu.png) 1. Select **OK**.
The Linux VM is already provisioned with X2Go Server and ready to accept client
1. Enter the password for your VM.
1. Select **OK**.
1. You may have to give X2Go permission to bypass your firewall to finish connecting.
-1. You should now see the graphical interface for your Ubuntu DSVM.
+1. You should now see the graphical interface for your Ubuntu DSVM.
### JupyterHub and JupyterLab
The Ubuntu DSVM runs [JupyterHub](https://github.com/jupyterhub/jupyterhub), a m
>[!NOTE] > If you see the `ERR_EMPTY_RESPONSE` error message in your browser, make sure you access the machine by explicitly using the *HTTPS* protocol, and not by using *HTTP* or just the web address. If you type the web address without `https://` in the address line, most browsers will default to `http`, and you will see this error.
- 1. Enter the username and password that you used to create the VM, and sign in.
+ 1. Enter the username and password that you used to create the VM, and sign in.
![Enter Jupyter login](./media/dsvm-ubuntu-intro/jupyter-login.png) >[!NOTE]
- > If you receive a 500 Error at this stage, it is likely that you used capitalized letters in your username. This is a known interaction between Jupyter Hub and the PAMAuthenticator it uses.
+ > If you receive a 500 Error at this stage, it is likely that you used capitalized letters in your username. This is a known interaction between Jupyter Hub and the PAMAuthenticator it uses.
> If you receive a "Can't reach this page" error, it is likely that your Network Security Group permissions need to be adjusted. In the Azure portal, find the Network Security Group resource within your Resource Group. To access JupyterHub from the public Internet, you must have port 8000 open. (The image shows that this VM is configured for just-in-time access, which is highly recommended. See [Secure your management ports with just-in time access](../../security-center/security-center-just-in-time.md).) > ![Configuration of Network Security Group](./media/dsvm-ubuntu-intro/nsg-permissions.png)
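If you need to open port 8000 yourself, a quick way to do it with the Azure CLI is shown below (placeholder names; review the network security group rule it creates before relying on it in production):

```azurecli
# Open inbound TCP port 8000 on the VM's network security group so JupyterHub is reachable.
az vm open-port --resource-group <your-resource-group> --name <your-vm-name> --port 8000 --priority 1010
```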
c.Spawner.default_url = '/lab'
Here's how you can continue your learning and exploration:
-* The [Data science on the Data Science Virtual Machine for Linux](linux-dsvm-walkthrough.md) walkthrough shows you how to do several common data science tasks with the Linux DSVM provisioned here.
-* Explore the various data science tools on the DSVM by trying out the tools described in this article. You can also run `dsvm-more-info` on the shell within the virtual machine for a basic introduction and pointers to more information about the tools installed on the VM.
+* The [Data science on the Data Science Virtual Machine for Linux](linux-dsvm-walkthrough.md) walkthrough shows you how to do several common data science tasks with the Linux DSVM provisioned here.
+* Explore the various data science tools on the DSVM by trying out the tools described in this article. You can also run `dsvm-more-info` on the shell within the virtual machine for a basic introduction and pointers to more information about the tools installed on the VM.
* Learn how to systematically build analytical solutions using the [Team Data Science Process](/azure/architecture/data-science-process/overview). * Visit the [Azure AI Gallery](https://gallery.azure.ai/) for machine learning and data analytics samples that use the Azure AI services. * Consult the appropriate [reference documentation](./reference-ubuntu-vm.md) for this virtual machine.
machine-learning Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/overview.md
Title: What is the Azure Data Science Virtual Machine-+ description: Overview of Azure Data Science Virtual Machine - An easy to use virtual machine on the Azure cloud platform with preinstalled and configured tools and libraries for doing data science. keywords: data science tools, data science virtual machine, tools for data science, linux data science -+
The DSVM is a customized VM image for Data Science but [Azure Machine Learning](
Key differences between these:
-|Feature |Data Science<br>VM |Azure Machine Learning<br>Compute Instance |
+|Feature |Data Science<br>VM |Azure Machine Learning<br>Compute Instance |
| --- | --- | --- |
| Fully Managed | No | Yes |
| Language Support | Python, R, Julia, SQL, C#,<br> Java, Node.js, F# | Python and R |
You can use the DSVM to evaluate or learn new data science [tools](./tools-inclu
In the DSVM, your training models can use deep learning algorithms on hardware that's based on graphics processing units (GPUs). By taking advantage of the VM scaling capabilities of the Azure platform, the DSVM helps you use GPU-based hardware in the cloud according to your needs. You can switch to a GPU-based VM when you're training large models, or when you need high-speed computations while keeping the same OS disk. You can choose any of the N series GPUs enabled virtual machine SKUs with DSVM. Note GPU enabled virtual machine SKUs aren't supported on Azure free accounts.
-The Windows editions of the DSVM come preinstalled with GPU drivers, frameworks, and GPU versions of deep learning frameworks. On the Linux editions, deep learning on GPUs is enabled on the Ubuntu DSVMs.
+The Windows editions of the DSVM come preinstalled with GPU drivers, frameworks, and GPU versions of deep learning frameworks. On the Linux editions, deep learning on GPUs is enabled on the Ubuntu DSVMs.
You can also deploy the Ubuntu or Windows editions of the DSVM to an Azure virtual machine that isn't based on GPUs. In this case, all the deep learning frameworks falls back to the CPU mode.
machine-learning Reference Ubuntu Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/reference-ubuntu-vm.md
Title: 'Reference: Ubuntu Data Science Virtual Machine'-+ description: Details on tools included in the Ubuntu Data Science Virtual Machine -+ Last updated 04/18/2023
# Reference: Ubuntu (Linux) Data Science Virtual Machine
-See below for a list of available tools on your Ubuntu Data Science Virtual Machine.
+See below for a list of available tools on your Ubuntu Data Science Virtual Machine.
## Deep learning libraries
available in the `py38_pytorch` environment.
### H2O
-H2O is a fast, in-memory, distributed machine learning and predictive analytics platform. A Python package is installed in both the root and py35 Anaconda environments. An R package is also installed.
+H2O is a fast, in-memory, distributed machine learning and predictive analytics platform. A Python package is installed in both the root and py35 Anaconda environments. An R package is also installed.
To open H2O from the command line, run `java -jar /dsvm/tools/h2o/current/h2o.jar`. There are various [command-line options](http://docs.h2o.ai/h2o/latest-stable/h2o-docs/starting-h2o.html#from-the-command-line) that you might want to configure. You can access the Flow web UI by browsing to `http://localhost:54321` to get started. Sample notebooks are also available in JupyterHub.
username and password.
## Apache Spark standalone
A standalone instance of Apache Spark is preinstalled on the Linux DSVM to help you develop Spark applications locally
-before you test and deploy them on large clusters.
+before you test and deploy them on large clusters.
You can run PySpark programs through the Jupyter kernel. When you open Jupyter, select the **New** button and you should see a list of available kernels. **Spark - Python** is the PySpark kernel that lets you build Spark applications by
-using the Python language. You can also use a Python IDE like VS.Code or PyCharm to build your Spark program.
+using the Python language. You can also use a Python IDE like VS.Code or PyCharm to build your Spark program.
In this standalone instance, the Spark stack runs within the calling client program. This feature makes it faster and easier to troubleshoot issues, compared to developing on a Spark cluster.
easier to troubleshoot issues, compared to developing on a Spark cluster.
## IDEs and editors
-You have a choice of several code editors, including VS.Code, PyCharm, IntelliJ, vi/Vim, Emacs.
+You have a choice of several code editors, including VS.Code, PyCharm, IntelliJ, vi/Vim, Emacs.
VS.Code, PyCharm, and IntelliJ are graphical editors. To use them, you need to be signed in to a graphical desktop. You open them by using desktop and application menu shortcuts.
For more information, see [Connecting with bcp](/sql/connect/odbc/linux-mac/conn
Libraries are available in R and Python for database access: * In R, you can use the RODBC package or dplyr package to query or run SQL statements on the database server.
-* In Python, the pyodbc library provides database access with ODBC as the underlying layer.
+* In Python, the pyodbc library provides database access with ODBC as the underlying layer.
## Azure tools
The following Azure tools are installed on the VM:
* **Azure CLI**: You can use the command-line interface in Azure to create and manage Azure resources through shell commands. To open the Azure tools, enter **azure help**. For more information, see the [Azure CLI documentation page](/cli/azure/get-started-with-az-cli2). * **Azure Storage Explorer**: Azure Storage Explorer is a graphical tool that you can use to browse through the objects that you have stored in your Azure storage account, and to upload and download data to and from Azure blobs. You can access Storage Explorer from the desktop shortcut icon. You can also open it from a shell prompt by entering **StorageExplorer**. You must be signed in from an X2Go client, or have X11 forwarding set up. * **Azure libraries**: The following are some of the pre-installed libraries.
-
+ * **Python**: The Azure-related libraries in Python are *azure*, *azureml*, *pydocumentdb*, and *pyodbc*. With the first three libraries, you can access Azure storage services, Azure Machine Learning, and Azure Cosmos DB (a NoSQL database on Azure). The fourth library, pyodbc (along with the Microsoft ODBC driver for SQL Server), enables access to SQL Server, Azure SQL Database, and Azure Synapse Analytics from Python by using an ODBC interface. Enter **pip list** to see all the listed libraries. Be sure to run this command in both the Python 2.7 and 3.5 environments. * **R**: The Azure-related libraries in R are Azure Machine Learning and RODBC.
- * **Java**: The list of Azure Java libraries can be found in the directory /dsvm/sdk/AzureSDKJava on the VM. The key libraries are Azure storage and management APIs, Azure Cosmos DB, and JDBC drivers for SQL Server.
+ * **Java**: The list of Azure Java libraries can be found in the directory /dsvm/sdk/AzureSDKJava on the VM. The key libraries are Azure storage and management APIs, Azure Cosmos DB, and JDBC drivers for SQL Server.
## Azure Machine Learning
Azure Machine Learning is a fully managed cloud service that enables you to build, deploy, and share predictive analytics solutions. You can build your experiments and models in Azure Machine Learning studio. You can access it from a web browser on the Data Science Virtual Machine by visiting [Microsoft Azure Machine Learning](https://ml.azure.com).
-After you sign in to Azure Machine Learning studio, you can use an experimentation canvas to build a logical flow for the machine learning algorithms. You also have access to a Jupyter notebook that is hosted on Azure Machine Learning and can work seamlessly with the experiments in Azure Machine Learning studio.
+After you sign in to Azure Machine Learning studio, you can use an experimentation canvas to build a logical flow for the machine learning algorithms. You also have access to a Jupyter notebook that is hosted on Azure Machine Learning and can work seamlessly with the experiments in Azure Machine Learning studio.
Operationalize the machine learning models that you have built by wrapping them in a web service interface. Operationalizing machine learning models enables clients written in any language to invoke predictions from those models. For more information, see the [Machine Learning documentation](../index.yml).
machine-learning Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/release-notes.md
description: Release notes for the Azure Data Science Virtual Machine -+ Last updated 04/18/2023
machine-learning Ubuntu Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/ubuntu-upgrade.md
Title: How to upgrade your Data Science Virtual Machine to Ubuntu 20.04-+ description: Learn how to upgrade from CentOS and Ubuntu 18.04 to the latest Ubuntu 20.04 Data Science Virtual Machine. keywords: deep learning, AI, data science tools, data science virtual machine, team data science process -+
Last updated 04/19/2023
# Upgrade your Data Science Virtual Machine to Ubuntu 20.04
-If you have a Data Science Virtual Machine running an older release such as Ubuntu 18.04 or CentOS, you should migrate your DSVM to Ubuntu 20.04. Migrating will ensure that you get the latest operating system patches, drivers, preinstalled software, and library versions. This document tells you how to migrate from either older versions of Ubuntu or from CentOS.
+If you have a Data Science Virtual Machine running an older release such as Ubuntu 18.04 or CentOS, you should migrate your DSVM to Ubuntu 20.04. Migrating will ensure that you get the latest operating system patches, drivers, preinstalled software, and library versions. This document tells you how to migrate from either older versions of Ubuntu or from CentOS.
## Prerequisites
There are two possible ways to migrate:
## Snapshot your VM in case you need to roll back
-In the Azure portal, use the search bar to find the **Snapshots** functionality.
+In the Azure portal, use the search bar to find the **Snapshots** functionality.
:::image type="content" source="media/ubuntu_upgrade/azure-portal-search-bar.png" alt-text="Screenshot showing Azure portal and search bar, with **Snapshots** highlighted":::
In the Azure portal, use the search bar to find the **Snapshots** functionality.
## In-place migration
-If you're migrating an older Ubuntu release, you may choose to do an in-place migration. This migration doesn't create a new virtual machine and has fewer steps than a side-by-side migration. If you wish to do a side-by-side migration because you want more control or because you're migrating from a different distribution, such as CentOS, skip to the [Side-by-side migration](#side-by-side-migration) section.
+If you're migrating an older Ubuntu release, you may choose to do an in-place migration. This migration doesn't create a new virtual machine and has fewer steps than a side-by-side migration. If you wish to do a side-by-side migration because you want more control or because you're migrating from a different distribution, such as CentOS, skip to the [Side-by-side migration](#side-by-side-migration) section.
-1. From the Azure portal, start your DSVM and sign in using SSH. To do so, select **Connect** and **SSH** and follow the connection instructions.
+1. From the Azure portal, start your DSVM and sign in using SSH. To do so, select **Connect** and **SSH** and follow the connection instructions.
1. Once connected to a terminal session on your DSVM, run the following upgrade command:
The upgrade process will take a while to complete. When it's over, the program w
### If necessary, regenerate SSH keys
-> [!IMPORTANT]
+> [!IMPORTANT]
> After upgrading and rebooting, you may need to regenerate your SSH keys. After your VM has upgraded and rebooted, attempt to access it again via SSH. The IP address may have changed during the reboot, so confirm it before attempting to connect.
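If your SSH client then complains that the host identification has changed (the upgrade can regenerate the VM's host keys), you can clear the stale entry on your client; a sketch, assuming the default known_hosts location:

```bash
# Remove the old host key entry for the VM, then reconnect and accept the new key.
ssh-keygen -R <vm-ip-or-dns-name>
ssh <admin-user>@<vm-ip-or-dns-name>
```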
You may choose to upgrade the operating system parts of the filesystem and leave
### Create a disk from your VM snapshot
-If you haven't already created a VM snapshot as described previously, do so.
+If you haven't already created a VM snapshot as described previously, do so.
1. In the Azure portal, search for **Disks** and select **Add**, which will open the **Disk** page.
If you haven't already created a VM snapshot as described previously, do so.
2. Set the **Subscription**, **Resource group**, and **Region** to the values of your VM snapshot. Choose a **Name** for the disk to be created.
-3. Select **Source type** as **Snapshot** and select the VM snapshot as the **Source snapshot**. Review and create the disk.
+3. Select **Source type** as **Snapshot** and select the VM snapshot as the **Source snapshot**. Review and create the disk.
:::image type="content" source="media/ubuntu_upgrade/disk-create-options.png" alt-text="Screenshot of disk creation dialog showing options"::: ### Create a new Ubuntu Data Science Virtual Machine
-Create a new Ubuntu Data Science Virtual Machine using the [Azure portal](https://portal.azure.com) or an [ARM template](./dsvm-tutorial-resource-manager.md).
+Create a new Ubuntu Data Science Virtual Machine using the [Azure portal](https://portal.azure.com) or an [ARM template](./dsvm-tutorial-resource-manager.md).
### Recreate user account(s) on your new Data Science Virtual Machine
For more information, see [Quickstart: Set up the Data Science Virtual Machine f
> [!Important] > Your VM should be running at the time you attach the data disk. If the VM isn't running, the disks may be added in an incorrect order, leading to a confusing and potentially non-bootable system. If you add the data disk with the VM off, choose the **X** beside the data disk, start the VM, and re-attach it.
-### Manually copy the wanted data
+### Manually copy the wanted data
1. Sign on to your running virtual machine using SSH.
For more information, see [Quickstart: Set up the Data Science Virtual Machine f
```bash lsblk -o NAME,HCTL,SIZE,MOUNTPOINT | grep -i 'sd' ```
-
+ The results should look something like the following image. In the image, disk `sda1` is mounted at the root and `sdb2` is the `/mnt` scratch disk. The data disk created from the snapshot of your old VM is identified as `sdc1` but isn't yet available, as evidenced by the lack of a mount location. Your results might have different identifiers, but you should see a similar pattern.
-
+ :::image type="content" source="media/ubuntu_upgrade/lsblk-results.png" alt-text="Screenshot of lsblk output, showing unmounted data drive":::
-
+ 3. To access the data drive, create a location for it and mount it. Replace `/dev/sdc1` with the appropriate value returned by `lsblk`: ```bash sudo mkdir /datadrive && sudo mount /dev/sdc1 /datadrive ```
-
+ 4. Now, `/datadrive` contains the directories and files of your old Data Science Virtual Machine. Move or copy the directories or files you want from the data drive to the new VM as you wish. For more information, see [Use the portal to attach a data disk to a Linux VM](../../virtual-machines/linux/attach-disk-portal.md#connect-to-the-linux-vm-to-mount-the-new-disk).
## Connect and confirm version upgrade
-Whether you did an in-place or side-by-side migration, confirm that you've successfully upgraded. From a terminal session, run:
+Whether you did an in-place or side-by-side migration, confirm that you've successfully upgraded. From a terminal session, run:
```bash cat /etc/os-release
machine-learning How To Access Data Batch Endpoints Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-data-batch-endpoints-jobs.md
Azure Machine Learning data assets (formerly known as datasets) are supported as
```azurecli az ml batch-endpoint invoke --name $ENDPOINT_NAME \
- --set inputs.heart_dataset.type uri_folder inputs.heart_dataset.path $DATASET_ID
+ --set inputs.heart_dataset.type="uri_folder" inputs.heart_dataset.path=$DATASET_ID
``` For an endpoint that serves a model deployment, you can use the `--input` argument to specify the data input, since a model deployment always requires only one data input.
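A sketch of how `$DATASET_ID` might be populated beforehand with the CLI v2 (the data asset name and version are placeholders):

```azurecli
# Resolve the ARM ID of a registered data asset so it can be passed as the batch job input.
DATASET_ID=$(az ml data show --name heart-dataset --version 1 \
  --resource-group <your-resource-group> --workspace-name <your-workspace> \
  --query id --output tsv)
echo $DATASET_ID
```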
Data from Azure Machine Learning registered data stores can be directly referenc
```azurecli az ml batch-endpoint invoke --name $ENDPOINT_NAME \
- --set inputs.heart_dataset.type uri_folder inputs.heart_dataset.path $INPUT_PATH
+ --set inputs.heart_dataset.type="uri_folder" inputs.heart_dataset.path=$INPUT_PATH
``` For an endpoint that serves a model deployment, you can use the `--input` argument to specify the data input, since a model deployment always requires only one data input.
Azure Machine Learning batch endpoints can read data from cloud locations in Azu
```azurecli az ml batch-endpoint invoke --name $ENDPOINT_NAME \
- --set inputs.heart_dataset.type uri_folder inputs.heart_dataset.path $INPUT_DATA
+ --set inputs.heart_dataset.type="uri_folder" inputs.heart_dataset.path=$INPUT_DATA
``` For an endpoint that serves a model deployment, you can use the `--input` argument to specify the data input, since a model deployment always requires only one data input.
You can also use the argument `--set` to specify the value. However, it tends to
```azurecli az ml batch-endpoint invoke --name $ENDPOINT_NAME \
- --set inputs.score_mode.type string inputs.score_mode.default append
+ --set inputs.score_mode.type="string" inputs.score_mode.default="append"
``` # [Python](#tab/sdk)
The following example shows how to change the location where an output named `sc
}, "OutputData": { "score": {
- "JobOutputType" : "UriFolder",
+ "JobOutputType" : "UriFile",
"Uri": "azureml:/subscriptions/<subscription>/resourceGroups/<resource-group/providers/Microsoft.MachineLearningServices/workspaces/<workspace>/datastores/<data-store>/paths/<data-path>" } }
The following example shows how to change the location where an output named `sc
```azurecli az ml batch-endpoint invoke --name $ENDPOINT_NAME \
- --set inputs.heart_dataset.path $INPUT_PATH \
- --set outputs.score.path $OUTPUT_PATH
+ --set inputs.heart_dataset.path=$INPUT_PATH \
+ --set outputs.score.path=$OUTPUT_PATH
``` # [Python](#tab/sdk)
machine-learning How To Deploy Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-online-endpoints.md
Last updated 11/15/2023 reviewer: msakande -+ # Deploy and score a machine learning model by using an online endpoint
machine-learning Troubleshoot Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/tools-reference/troubleshoot-guidance.md
In prompt flow, on flow page with successful run and run detail page, you can fi
:::image type="content" source="../media/faq/trace-large-language-model-tool.png" alt-text="Screenshot that shows raw request send to LLM model and response from LLM model." lightbox = "../media/faq/trace-large-language-model-tool.png":::
-## How to fix 409 error in from Azure OpenAI?
+## How to fix 409 error from Azure OpenAI?
You may encounter a 409 error from Azure OpenAI, which means you have reached the Azure OpenAI rate limit. You can check the error message in the output section of the LLM node. Learn more about the [Azure OpenAI rate limit](../../../ai-services/openai/quotas-limits.md).
machine-learning How To Cicd Data Ingestion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-cicd-data-ingestion.md
This article demonstrates how to automate the CI and CD processes with [Azure Pi
## Source control management
Source control management is needed to track changes and enable collaboration between team members.
-For example, the code would be stored in an Azure DevOps, GitHub, or GitLab repository. The collaboration workflow is based on a branching model. For example, [GitFlow](https://datasift.github.io/gitflow/IntroducingGitFlow.html).
+For example, the code would be stored in an Azure DevOps, GitHub, or GitLab repository. The collaboration workflow is based on a branching model.
### Python Notebook Source Code
mariadb Howto Configure Ssl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-configure-ssl.md
Last updated 04/19/2023 ms.devlang: csharp # ms.devlang: csharp, golang, java, php, python, ruby-+ # Configure SSL connectivity in your application to securely connect to Azure Database for MariaDB
To establish a secure connection to Azure Database for MariaDB over SSL from you
```php $conn = mysqli_init();
-mysqli_ssl_set($conn,NULL,NULL, "/var/www/html/BaltimoreCyberTrustRoot.crt.pem", NULL, NULL) ;
+mysqli_ssl_set($conn,NULL,NULL, "/var/www/html/BaltimoreCyberTrustRoot.crt.pem", NULL, NULL) ;
mysqli_real_connect($conn, 'mydemoserver.mariadb.database.azure.com', 'myadmin@mydemoserver', 'yourpassword', 'quickstartdb', 3306, MYSQLI_CLIENT_SSL, MYSQLI_CLIENT_SSL_DONT_VERIFY_SERVER_CERT); if (mysqli_connect_errno($conn)) { die('Failed to connect to MySQL: '.mysqli_connect_error());
conn = pymysql.connect(user='myadmin@mydemoserver',
```ruby client = Mysql2::Client.new(
- :host => 'mydemoserver.mariadb.database.azure.com',
- :username => 'myadmin@mydemoserver',
- :password => 'yourpassword',
+ :host => 'mydemoserver.mariadb.database.azure.com',
+ :username => 'myadmin@mydemoserver',
+ :password => 'yourpassword',
:database => 'quickstartdb', :sslca => '/var/www/html/BaltimoreCyberTrustRoot.crt.pem' :ssl_mode => 'required'
if ok := rootCertPool.AppendCertsFromPEM(pem); !ok {
} mysql.RegisterTLSConfig("custom", &tls.Config{RootCAs: rootCertPool}) var connectionString string
-connectionString = fmt.Sprintf("%s:%s@tcp(%s:3306)/%s?allowNativePasswords=true&tls=custom",'myadmin@mydemoserver' , 'yourpassword', 'mydemoserver.mariadb.database.azure.com', 'quickstartdb')
+connectionString = fmt.Sprintf("%s:%s@tcp(%s:3306)/%s?allowNativePasswords=true&tls=custom",'myadmin@mydemoserver' , 'yourpassword', 'mydemoserver.mariadb.database.azure.com', 'quickstartdb')
db, _ := sql.Open("mysql", connectionString) ```
String importCert = " -import "+
" -alias mysqlServerCACert "+ " -file " + ssl_ca + " -keystore truststore "+
- " -trustcacerts " +
+ " -trustcacerts " +
" -storepass password -noprompt "; String genKey = " -genkey -keyalg rsa " + " -alias mysqlClientCertificate -keystore keystore " +
- " -storepass password123 -keypass password " +
+ " -storepass password123 -keypass password " +
" -dname CN=MS "; sun.security.tools.keytool.Main.main(importCert.trim().split("\\s+")); sun.security.tools.keytool.Main.main(genKey.trim().split("\\s+"));
-# use the generated keystore and truststore
+# use the generated keystore and truststore
System.setProperty("javax.net.ssl.keyStore","path_to_keystore_file"); System.setProperty("javax.net.ssl.keyStorePassword","password");
String importCert = " -import "+
" -alias mysqlServerCACert "+ " -file " + ssl_ca + " -keystore truststore "+
- " -trustcacerts " +
+ " -trustcacerts " +
" -storepass password -noprompt "; String genKey = " -genkey -keyalg rsa " + " -alias mysqlClientCertificate -keystore keystore " +
- " -storepass password123 -keypass password " +
+ " -storepass password123 -keypass password " +
" -dname CN=MS "; sun.security.tools.keytool.Main.main(importCert.trim().split("\\s+")); sun.security.tools.keytool.Main.main(genKey.trim().split("\\s+"));
-# use the generated keystore and truststore
+# use the generated keystore and truststore
System.setProperty("javax.net.ssl.keyStore","path_to_keystore_file"); System.setProperty("javax.net.ssl.keyStorePassword","password");
mariadb Howto Data In Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-data-in-replication.md
Title: Configure data-in Replication - Azure Database for MariaDB description: This article describes how to set up Data-in Replication in Azure Database for MariaDB. -+
mariadb Howto Migrate Dump Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-migrate-dump-restore.md
-+ Last updated 04/19/2023
You can use MySQL utilities such as mysqldump and mysqlpump to dump and load dat
To optimize performance when you're dumping large databases, keep in mind the following considerations: -- Use the `exclude-triggers` option in mysqldump. Exclude triggers from dump files to avoid having the trigger commands fire during the data restore. -- Use the `single-transaction` option to set the transaction isolation mode to REPEATABLE READ and send a START TRANSACTION SQL statement to the server before dumping data. Dumping many tables within a single transaction causes some extra storage to be consumed during the restore. The `single-transaction` option and the `lock-tables` option are mutually exclusive. This is because LOCK TABLES causes any pending transactions to be committed implicitly. To dump large tables, combine the `single-transaction` option with the `quick` option.
+- Use the `exclude-triggers` option in mysqldump. Exclude triggers from dump files to avoid having the trigger commands fire during the data restore.
+- Use the `single-transaction` option to set the transaction isolation mode to REPEATABLE READ and send a START TRANSACTION SQL statement to the server before dumping data. Dumping many tables within a single transaction causes some extra storage to be consumed during the restore. The `single-transaction` option and the `lock-tables` option are mutually exclusive. This is because LOCK TABLES causes any pending transactions to be committed implicitly. To dump large tables, combine the `single-transaction` option with the `quick` option.
- Use the `extended-insert` multiple-row syntax that includes several VALUE lists. This approach results in a smaller dump file and speeds up inserts when the file is reloaded. - Use the `order-by-primary` option in mysqldump when you're dumping databases, so that the data is scripted in primary key order. - Use the `disable-keys` option in mysqldump when you're dumping data, to disable foreign key constraints before the load. Disabling foreign key checks helps improve performance. Enable the constraints and verify the data after the load to ensure referential integrity. - Use partitioned tables when appropriate.-- Load data in parallel. Avoid too much parallelism, which could cause you to hit a resource limit, and monitor resources by using the metrics available in the Azure portal.
+- Load data in parallel. Avoid too much parallelism, which could cause you to hit a resource limit, and monitor resources by using the metrics available in the Azure portal.
- Use the `defer-table-indexes` option in mysqlpump when you're dumping databases, so that index creation happens after table data is loaded. - Copy the backup files to an Azure blob store and perform the restore from there. This approach should be a lot faster than performing the restore across the internet.
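Putting several of those options together, a dump command might look like the following sketch (note that in mysqldump the flag that omits triggers is `--skip-triggers`; `--exclude-triggers` is the mysqlpump spelling):

```bash
# Dump a database in a single consistent snapshot, row-streamed, ordered by primary key, without triggers.
mysqldump --single-transaction --quick --extended-insert --order-by-primary --skip-triggers \
  -u <uname> -p <dbname> > <backupfile.sql>
```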
mysqldump --opt -u <uname> -p<pass> <dbname> > <backupfile.sql>
The parameters to provide are: -- *\<uname>*: Your database user name -- *\<pass>*: The password for your database (note that there is no space between -p and the password) -- *\<dbname>*: The name of your database -- *\<backupfile.sql>*: The file name for your database backup
+- *\<uname>*: Your database user name
+- *\<pass>*: The password for your database (note that there is no space between -p and the password)
+- *\<dbname>*: The name of your database
+- *\<backupfile.sql>*: The file name for your database backup
- *\<--opt>*: The mysqldump option For example, to back up a database named *testdb* on your MariaDB server with the user name *testuser* and with no password to a file testdb_backup.sql, use the following command. The command backs up the `testdb` database into a file called `testdb_backup.sql`, which contains all the SQL statements needed to re-create the database.
mysqldump -u root -p testdb table1 table2 > testdb_tables_backup.sql
To back up more than one database at once, use the --database switch and list the database names, separated by spaces. ```bash
-mysqldump -u root -p --databases testdb1 testdb3 testdb5 > testdb135_backup.sql
+mysqldump -u root -p --databases testdb1 testdb3 testdb5 > testdb135_backup.sql
``` ## Create a database on the target server
mariadb Howto Redirection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-redirection.md
Title: Connect with redirection - Azure Database for MariaDB description: This article describes how you can configure your application to connect to Azure Database for MariaDB with redirection. -+
The subsequent sections of the document outline how to install the `mysqlnd_azur
**Prerequisites** - PHP versions 7.2.15+ and 7.3.2+-- PHP PEAR
+- PHP PEAR
- php-mysql - Azure Database for MariaDB server
migrate How To Set Up Appliance Vmware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-set-up-appliance-vmware.md
Title: Set up an Azure Migrate appliance for server assessment in a VMware environment
+ Title: Set up an Azure Migrate appliance for server assessment in a VMware environment
description: Learn how to set up an Azure Migrate appliance to assess and migrate servers in VMware environment.-+ ms. Last updated 10/11/2023-+ # Set up an appliance for servers in a VMware environment
To set up the appliance by using an OVA template, you'll complete these steps, w
1. Provide an appliance name and generate a project key in the portal. 1. Download an OVA template file, and import it to vCenter Server. Verify that the OVA is secure. 1. Create the appliance from the OVA file. Verify that the appliance can connect to Azure Migrate.
-1. Configure the appliance for the first time.
+1. Configure the appliance for the first time.
1. Register the appliance with the project by using the project key. #### Generate the project key
Before you deploy the OVA file, verify that the file is secure:
1. On the server on which you downloaded the file, open a Command Prompt window by using the **Run as administrator** option. 1. Run the following command to generate the hash for the OVA file:
-
+    ``` C:\>CertUtil -HashFile <file_location> <hashing_algorithm> ```
-
- For example:
+
+ For example:
``` C:\>CertUtil -HashFile C:\Users\Administrator\Desktop\MicrosoftAzureMigration.ova SHA256 ```
-
+ 1. Verify the latest appliance versions and hash values: - For the Azure public cloud:
-
+   **Algorithm** | **Download** | **SHA256**
    --- | --- | ---
    VMware (11.9 GB) | [Latest version](https://go.microsoft.com/fwlink/?linkid=2191954) | 06256F9C6FB3F011152D861DA43FFA1C5C8FF966931D5CE00F1F252D3A2F4723

    - For Azure Government:
-
+   **Algorithm** | **Download** | **SHA256**
    --- | --- | ---
    VMware (85.8 MB) | [Latest version](https://go.microsoft.com/fwlink/?linkid=2191847) | 7EF01AE30F7BB8F4486EDC1688481DB656FB8ECA7B9EF6363B4DAB1CFCFDA141
In the configuration manager, select **Set up prerequisites**, and complete thes
1. For the appliance to run auto-update, paste the project key that you copied from the portal. If you don't have the key, go to **Azure Migrate: Discovery and assessment** > **Overview** > **Manage existing appliances**. Select the appliance name you provided when you generated the project key, and copy the key that's shown. 2. The appliance will verify the key and start the auto-update service, which updates all the services on the appliance to their latest versions. When the auto-update has run, you can select **View appliance services** to see the status and versions of the services running on the appliance server. 3. To register the appliance, you need to select **Login**. In **Continue with Azure Login**, select **Copy code & Login** to copy the device code (you must have a device code to authenticate with Azure) and open an Azure sign-in prompt in a new browser tab. Make sure you've disabled the pop-up blocker in the browser to see the prompt.
-
+ :::image type="content" source="./media/tutorial-discover-vmware/device-code.png" alt-text="Screenshot that shows where to copy the device code and sign in."::: 4. In a new tab in your browser, paste the device code and sign in by using your Azure username and password. Signing in with a PIN isn't supported. > [!NOTE]
To add server credentials:
1. Select **Add Credentials**. 1. In the dropdown menu, select **Credentials type**.
-
+    You can provide domain, Windows (non-domain), Linux (non-domain), and SQL Server authentication credentials. Learn how to [provide credentials](add-server-credentials.md) and how we handle them.

1. For each type of credentials, enter:
   * A friendly name.
migrate Prepare For Agentless Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/prepare-for-agentless-migration.md
Last updated 09/01/2023-+ # Prepare for VMware agentless migration
Azure Migrate automatically handles these configuration changes for the followin
- SUSE Linux Enterprise Server 15 SP4, 15 SP3, 15 SP2, 15 SP1, 15 SP0, 12, 11 SP4, 11 SP3
- Ubuntu 22.04, 21.04, 20.04, 19.04, 19.10, 18.04LTS, 16.04LTS, 14.04LTS
- Kali Linux (2016, 2017, 2018, 2019, 2020, 2021, 2022)
-- Debian 11, 10, 9, 8, 7
+- Debian 11, 10, 9, 8, 7
- Oracle Linux 9, 8, 7.7-CI, 7.7, 6

You can also use this article to manually prepare the VMs for migration to Azure for operating system versions not listed above. At a high level, these changes include:
The preparation script executes the following changes based on the OS type of th
After the source OS volume files are detected, the preparation script will load the SYSTEM registry hive into the registry editor of the temporary Azure VM and perform the following changes to ensure VM boot up and connectivity. You need to configure these settings manually if the OS version isn't supported for hydration. 1. **Validate the presence of the required drivers**
-
+    Ensure that the required drivers are installed and are set to load at **boot start**. These Windows drivers allow the server to communicate with the hardware and other connected devices.

   - IntelIde.sys
The preparation script executes the following changes based on the OS type of th
   Set the "VMware Tools" service start-type to Disabled if it exists, as it isn't required for the VM in Azure.
- >[!NOTE]
+ >[!NOTE]
>To connect to Windows Server 2003 VMs, Hyper-V Integration Services must be installed on the Azure VM. Windows Server 2003 machines don't have this installed by default. See this [article](./prepare-windows-server-2003-migration.md) to install and prepare for migration. 1. **Install the Windows Azure Guest Agent**
The preparation script executes the following changes based on the OS type of th
### Changes performed on Linux servers
-1. **Discover and mount Linux OS partitions**
+1. **Discover and mount Linux OS partitions**
Before performing relevant configuration changes, the preparation script will validate if the correct OS disk was selected for migration. The script will collect information on all partitions, their UUIDs, and mount points. The script will look through all these visible partitions to locate the /boot and /root partitions.
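For context, the commands below are a rough, illustrative equivalent of what such a validation step gathers on a Linux system; they are not the script Azure Migrate actually runs:

```bash
# Illustrative only: list block devices with filesystem type, UUID, and mount point,
# list filesystem UUIDs, and confirm which partitions back / and /boot.
lsblk -o NAME,FSTYPE,UUID,MOUNTPOINT
sudo blkid
findmnt /
findmnt /boot
```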
The preparation script executes the following changes based on the OS type of th
- Ubuntu: etc/lsb-release - Debian: etc/debian_version
-1. **Install Hyper-V Linux Integration Services and regenerate kernel image**
+1. **Install Hyper-V Linux Integration Services and regenerate kernel image**
   The next step is to inspect the kernel image and rebuild the Linux init image so that it contains the necessary Hyper-V drivers (**hv_vmbus, hv_storvsc, hv_netvsc**) on the initial ramdisk. Rebuilding the init image ensures that the VM will boot in Azure.
The preparation script executes the following changes based on the OS type of th
An illustrative example for rebuilding initrd - Back up the existing initrd image
-
+ ```bash cd /boot sudo cp initrd-`uname -r`.img initrd-`uname -r`.img.bak
The preparation script executes the following changes based on the OS type of th
1. Remove Network Manager if necessary. Network Manager can interfere with the Azure Linux agent for a few OS versions. It's recommended to make these changes for servers running RedHat and Ubuntu distributions. 1. Uninstall this package by running the following command:
-
+ An illustrative example for RedHat servers ```bash
The preparation script executes the following changes based on the OS type of th
### Clean up the temporary VM
-After the necessary changes are performed, Azure Migrate will spin down the temporary VM and free the attached OS disks (and data disks). This marks the end of the *hydration process*.
+After the necessary changes are performed, Azure Migrate will spin down the temporary VM and free the attached OS disks (and data disks). This marks the end of the *hydration process*.
After this, the modified OS disk and the data disks that contain the replicated data are cloned. A new virtual machine is created in the target region, virtual network, and subnet, and the cloned disks are attached to the virtual machine. This marks the completion of the migration process.
mysql Concepts Migrate Dump Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-migrate-dump-restore.md
Title: Migrate using dump and restore
description: This article explains two common ways to back up and restore databases in Azure Database for MySQL - Flexible Server, using tools such as mysqldump, MySQL Workbench, and PHPMyAdmin. -+
Most common use-cases are:
- **Moving from other managed service provider** - Most managed service providers may not provide access to the physical storage file for security reasons, so logical backup and restore is the only option to migrate.
- **Migrating from on-premises environment or Virtual machine** - Azure Database for MySQL flexible server doesn't support restore of physical backups, which makes logical backup and restore the ONLY approach.
-- **Moving your backup storage from locally redundant to geo-redundant storage** - Azure Database for MySQL flexible server allows configuring locally redundant or geo-redundant storage for backup is only allowed during server create. Once the server is provisioned, you can't change the backup storage redundancy option. In order to move your backup storage from locally redundant storage to geo-redundant storage, dump and restore is the ONLY option.
+- **Moving your backup storage from locally redundant to geo-redundant storage** - In Azure Database for MySQL flexible server, configuring locally redundant or geo-redundant backup storage is only allowed during server creation. Once the server is provisioned, you can't change the backup storage redundancy option. To move your backup storage from locally redundant storage to geo-redundant storage, dump and restore is the ONLY option.
- **Migrating from alternative storage engines to InnoDB** - Azure Database for MySQL flexible server supports only InnoDB Storage engine, and therefore doesn't support alternative storage engines. If your tables are configured with other storage engines, convert them into the InnoDB engine format before migration to Azure Database for MySQL flexible server. For example, if you have a WordPress or WebApp using the MyISAM tables, first convert those tables by migrating into InnoDB format before restoring to Azure Database for MySQL flexible server. Use the clause `ENGINE=InnoDB` to set the engine used when creating a new table, then transfer the data into the compatible table before the restore.
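As a hedged illustration of that conversion step, the following SQL lists tables that still use another engine and converts one of them; the database and table names (`mydb`, `wp_posts`) are placeholders:

```sql
-- Illustrative only: replace mydb and wp_posts with your own names.
-- Find tables that are not yet using the InnoDB engine.
SELECT table_schema, table_name, engine
FROM information_schema.tables
WHERE table_schema = 'mydb' AND engine <> 'InnoDB';

-- Convert a MyISAM table to InnoDB before taking the dump.
ALTER TABLE mydb.wp_posts ENGINE = InnoDB;
```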
Most common use-cases are:
``` > [!Important]
-> - To avoid any compatibility issues, ensure the same version of MySQL is used on the source and destination systems when dumping databases. For example, if your existing MySQL server is version 5.7, then you should migrate to an Azure Database for MySQL flexible server instance configured to run version 5.7. The `mysql_upgrade` command does not function in an Azure Database for MySQL flexible server instance, and is not supported.
+> - To avoid any compatibility issues, ensure the same version of MySQL is used on the source and destination systems when dumping databases. For example, if your existing MySQL server is version 5.7, then you should migrate to an Azure Database for MySQL flexible server instance configured to run version 5.7. The `mysql_upgrade` command does not function in an Azure Database for MySQL flexible server instance, and is not supported.
> - If you need to upgrade across MySQL versions, first dump or export your lower version database into a higher version of MySQL in your own environment. Then run `mysql_upgrade` before attempting migration into an Azure Database for MySQL flexible server instance. ## Performance considerations
mysql Concepts Networking Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-networking-vnet.md
-# Private Network Access for Azure Database for MySQL - Flexible Server
+# Private Network Access using VNet Integration for Azure Database for MySQL - Flexible Server
[!INCLUDE[applies-to-mysql-flexible-server](../includes/applies-to-mysql-flexible-server.md)]
Azure Database for MySQL flexible server supports client connectivity from:
Subnets enable you to segment the virtual network into one or more subnetworks and allocate a portion of the virtual network's address space to which you can then deploy Azure resources. Azure Database for MySQL flexible server requires a [delegated subnet](../../virtual-network/subnet-delegation-overview.md). A delegated subnet is an explicit identifier that a subnet can host only Azure Database for MySQL flexible server instances. By delegating the subnet, the service gets direct permissions to create service-specific resources to manage your Azure Database for MySQL flexible server instance seamlessly.
-> [!NOTE]
-> The smallest range you can specify for a subnet is /29, which provides eight IP addresses, of which five are utilized by Azure internally. In contrast, for Azure Database for MySQL flexible server, you would require one IP address per node to be allocated from the delegated subnet when private access is enabled. HA-enabled servers would need two, and Non-HA server would need one IP address. The recommendation is to reserve at least 2 IP addresses per Azure Database for MySQL flexible server instance, keeping in mind that we can enable high availability options later.
-
+> [!NOTE]
+> The smallest CIDR range you can specify for the subnet to host Azure Database for MySQL flexible server is /29, which provides eight IP addresses. However, the first and last address in any network or subnet can't be assigned to any individual host. Azure reserves five IPs to be utilized internally by Azure networking, which include two IPs that can't be assigned to a host. This leaves you three available IP addresses in a /29 CIDR range. For Azure Database for MySQL flexible server, you would require one IP address per node to be allocated from the delegated subnet when private access is enabled. HA-enabled servers need two IP addresses, and a non-HA server needs one. The recommendation is to reserve at least two IP addresses per Azure Database for MySQL flexible server instance, keeping in mind that you can enable high availability options later.
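As a sketch of what creating such a delegated subnet can look like with the Azure CLI (the resource group, VNet, subnet names, and address prefix are placeholders), assuming the standard `Microsoft.DBforMySQL/flexibleServers` delegation:

```bash
# Illustrative only: create a subnet delegated to Azure Database for MySQL flexible server.
az network vnet subnet create \
  --resource-group myResourceGroup \
  --vnet-name myVNet \
  --name mysql-subnet \
  --address-prefixes 10.0.1.0/28 \
  --delegations Microsoft.DBforMySQL/flexibleServers
```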
Azure Database for MySQL flexible server integrates with Azure [Private DNS zones](../../dns/private-dns-privatednszone.md) to provide a reliable, secure DNS service to manage and resolve domain names in a virtual network without the need to add a custom DNS solution. A private DNS zone can be linked to one or more virtual networks by creating [virtual network links](../../dns/private-dns-virtual-network-links.md) :::image type="content" source="./media/concepts-networking/vnet-diagram.png" alt-text="Flexible server MySQL VNET":::
If you're using the custom DNS server, then you must **use a DNS forwarder to re
> [!IMPORTANT]
> For successful provisioning of the Azure Database for MySQL flexible server instance, even if you are using a custom DNS server, **you must not block DNS traffic to [AzurePlatformDNS](../../virtual-network/service-tags-overview.md) using [NSG](../../virtual-network/network-security-groups-overview.md)**.
+
## Private DNS zone and VNET peering

Private DNS zone settings and VNET peering are independent of each other. For more information on creating and using Private DNS zones, see the [Use Private DNS Zone](#use-private-dns-zone) section.
mysql Concepts Service Tiers Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-service-tiers-storage.md
To get more details about the compute series available, refer to Azure VM docume
>[!NOTE]
>For [Burstable (B-series) compute tier](../../virtual-machines/sizes-b-series-burstable.md) if the VM is started/stopped or restarted, the credits may be lost. For more information, see [Burstable (B-Series) FAQ](../../virtual-machines/sizes-b-series-burstable.md).
+## Performance limitations of burstable series instances
+
+The Burstable compute tier is designed to provide a cost-effective solution for workloads that don't require the full CPU continuously. This tier is ideal for nonproduction workloads, such as development, staging, or testing environments.
+The unique feature of the Burstable compute tier is its ability to "burst", that is, to utilize more than its baseline CPU performance, using up to 100% of the vCPU when the workload requires it. This is made possible by a CPU credit model, [which allows B-series instances to accumulate "CPU credits"](../../virtual-machines/b-series-cpu-credit-model/b-series-cpu-credit-model.md#b-series-cpu-credit-model) during periods of low CPU usage. These credits can then be spent during periods of high CPU usage, allowing the instance to burst above its base CPU performance.
+
+However, it's important to note that once a burstable instance exhausts its CPU credits, it operates at its base CPU performance. For example, the base CPU performance of a Standard_B1s is 20%, that is, 0.2 vCore. If a Burstable tier server is running a workload that requires more CPU performance than the base level and has exhausted its CPU credits, the server may experience performance limitations that could eventually affect various system operations on your server.
+
+Therefore, while the Burstable compute tier offers significant cost and flexibility advantages for certain types of workloads, **it is not recommended for production workloads** that require consistent CPU performance. The Burstable tier also doesn't support the [Read Replicas](./concepts-read-replicas.md) and [High availability](./concepts-high-availability.md) features. For such workloads and features, other compute tiers, such as General Purpose or Business Critical, are more appropriate.
+
+For more information on Azure's B-series CPU credit model, refer to [B-series burstable instances](../../virtual-machines/sizes-b-series-burstable.md) and the [B-series CPU credit model](../../virtual-machines/b-series-cpu-credit-model/b-series-cpu-credit-model.md#b-series-cpu-credit-model).
+
+### Monitoring CPU credits in burstable tier
+
+Monitoring your CPU credit balance is crucial for maintaining optimal performance in the Burstable compute tier. Azure Database for MySQL Flexible Server provides two key metrics related to CPU credits. The ideal threshold for triggering an alert depends on your specific workload and performance requirements.
++
+[CPU Credit Consumed](./concepts-monitoring.md): This metric indicates the number of CPU credits consumed by your instance. Monitoring this metric can help you understand your instance's CPU usage patterns and manage its performance effectively.
+
+[CPU Credit Remaining](./concepts-monitoring.md): This metric shows the number of CPU credits remaining for your instance. Keeping an eye on this metric can help you prevent your instance from degrading in performance due to exhausting its CPU credits.
+
+For more information, see [how to set up alerts on metrics](./how-to-alert-on-metric.md).
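As a hedged sketch, an alert on the remaining-credit metric might be created with the Azure CLI roughly as follows; the resource ID, threshold, and the exact metric name `cpu_credits_remaining` are assumptions to verify against your server's available metrics:

```bash
# Illustrative only: alert when remaining CPU credits drop below a chosen threshold.
az monitor metrics alert create \
  --name "mysql-cpu-credits-low" \
  --resource-group myResourceGroup \
  --scopes "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.DBforMySQL/flexibleServers/mydemoserver" \
  --condition "avg cpu_credits_remaining < 100" \
  --window-size 5m \
  --evaluation-frequency 1m \
  --description "Burstable tier server is close to exhausting its CPU credits"
```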
+ ## Storage The storage you provision is the amount of storage capacity available to your flexible server. Storage is used for the database files, temporary files, transaction logs, and the MySQL server logs. In all service tiers, the minimum storage supported is 20 GiB and maximum is 16 TiB. Storage is scaled in 1 GiB increments and can be scaled up after the server is created.
For example, if you have provisioned 110 GiB of storage, and the actual utilizat
While the service attempts to make the server read-only, all new write transaction requests are blocked and existing active transactions will continue to execute. When the server is set to read-only, all subsequent write operations and transaction commits fail. Read queries will continue to work uninterrupted.
-To get the server out of read-only mode, you should increase the provisioned storage on the server. This can be done using the Azure portal or Azure CLI. Once increased, the server will be ready to accept write transactions again.
+To get the server out of read-only mode, you should increase the provisioned storage on the server. This can be done using the Azure portal or Azure CLI. Once increased, the server is ready to accept write transactions again.
-We recommend that you <!--turn on storage auto-grow or to--> set up an alert to notify you when your server storage is approaching the threshold so you can avoid getting into the read-only state. For more information, see the documentation on alert documentation [how to set up an alert](how-to-alert-on-metric.md).
+We recommend that you <!--turn on storage auto-grow or to--> set up an alert to notify you when your server storage is approaching the threshold so you can avoid getting into the read-only state. For more information, see [how to set up an alert](how-to-alert-on-metric.md).
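If the server has already reached the limit, a hedged sketch of increasing the provisioned storage with the Azure CLI (the server name, resource group, and target size are placeholders) looks like this:

```bash
# Illustrative only: grow provisioned storage (in GiB) to take the server out of read-only mode.
az mysql flexible-server update \
  --resource-group myResourceGroup \
  --name mydemoserver \
  --storage-size 256
```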
-### Storage auto-grow
+### Storage auto grow
-Storage auto-grow prevents your server from running out of storage and becoming read-only. If storage auto-grow is enabled, the storage automatically grows without impacting the workload. Storage auto-grow is enabled by default for all new server creates. For servers with less than equal to 100 GB provisioned storage, the provisioned storage size is increased by 5 GB when the free storage is below 10% of the provisioned storage. For servers with more than 100 GB of provisioned storage, the provisioned storage size is increased by 5% when the free storage space is below 10 GB of the provisioned storage size. Maximum storage limits as specified above apply. Refresh the server instance to see the updated storage provisioned under **Settings** on the **Compute + Storage** page.
+Storage autogrow prevents your server from running out of storage and becoming read-only. If storage autogrow is enabled, the storage automatically grows without impacting the workload. Storage autogrow is enabled by default for all new server creates. For servers with less than or equal to 100 GB of provisioned storage, the provisioned storage size is increased by 5 GB when the free storage is below 10% of the provisioned storage. For servers with more than 100 GB of provisioned storage, the provisioned storage size is increased by 5% when the free storage space is below 10 GB of the provisioned storage size. Maximum storage limits as specified above apply. Refresh the server instance to see the updated storage provisioned under **Settings** on the **Compute + Storage** page.
For example, if you have provisioned 1000 GB of storage, and the actual utilization goes over 990 GB, the server storage size is increased to 1050 GB. Alternatively, if you have provisioned 20 GB of storage, the storage size is increased to 25 GB when less than 2 GB of storage is free.
The minimum IOPS are 360 across all compute sizes and the maximum IOPS is determ
> **Minimum IOPS are 360 across all compute sizes**<br>
> **Maximum IOPS are determined by the selected compute size.**
-You can monitor your I/O consumption in the Azure portal (with Azure Monitor) using [IO percent](./concepts-monitoring.md) metric. If you need more IOPS than the max IOPS based on compute then you need to scale your server's compute.
+You can monitor your I/O consumption in the Azure portal (with Azure Monitor) using [IO percent](./concepts-monitoring.md) metric. If you need more IOPS than the max IOPS based on compute, then you need to scale your server's compute.
## Autoscale IOPS

The cornerstone of Azure Database for MySQL flexible server is its ability to achieve the best performance for tier 1 workloads, which can be improved by enabling the server to automatically scale the IO performance of its database servers seamlessly depending on workload needs. This is an opt-in feature that enables users to scale IOPS on demand without having to pre-provision a certain amount of IO per second. With the Autoscale IOPS feature enabled, you can enjoy worry-free IO management in Azure Database for MySQL flexible server because the server scales IOPS up or down automatically depending on workload needs.
mysql Connect Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/connect-azure-cli.md
description: This quickstart provides several ways to connect with and query Azu
---+++ Last updated 05/03/2023
This quickstart demonstrates how to connect to Azure Database for MySQL flexible
## Prerequisites -- An Azure account with an active subscription.
+- An Azure account with an active subscription.
[!INCLUDE [flexible-server-free-trial-note](../includes/flexible-server-free-trial-note.md)] - Install [Azure CLI](/cli/azure/install-azure-cli) latest version (2.20.0 or higher)
mysql Connect Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/connect-csharp.md
description: "This quickstart provides a C# (.NET) code sample you can use to co
--++ ms.devlang: csharp-+ Last updated 05/03/2023
mysql Connect Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/connect-java.md
-+ ms.devlang: java Last updated 05/03/2023
mysql Connect Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/connect-nodejs.md
Last updated 06/19/2023
-+ ms.devlang: javascript
This quickstart uses the resources created in either of these guides as a starti
- [Create an Azure Database for MySQL flexible server instance using Azure portal](./quickstart-create-server-portal.md) - [Create an Azure Database for MySQL flexible server instance using Azure CLI](./quickstart-create-server-cli.md)
-> [!IMPORTANT]
+> [!IMPORTANT]
> Ensure the IP address you're connecting from has been added to the server's firewall rules using the [Azure portal](./how-to-manage-firewall-portal.md) or [Azure CLI](./how-to-manage-firewall-cli.md)

## Install Node.js and the MySQL connector
-Depending on your platform, follow the instructions in the appropriate section to install [Node.js](https://nodejs.org). Use npm to install the [mysql2](https://www.npmjs.com/package/mysql2) package and its dependencies into your project folder.
+Depending on your platform, follow the instructions in the appropriate section to install [Node.js](https://nodejs.org). Use npm to install the [mysql2](https://www.npmjs.com/package/mysql2) package and its dependencies into your project folder.
### [Windows](#tab/windows)
az group delete \
- [Encrypted connectivity using Transport Layer Security (TLS 1.2) in Azure Database for MySQL flexible server](./how-to-connect-tls-ssl.md). - Learn more about [Networking in Azure Database for MySQL flexible server](./concepts-networking.md). - [Create and manage Azure Database for MySQL flexible server firewall rules using the Azure portal](./how-to-manage-firewall-portal.md).-- [Create and manage an Azure Database for MySQL flexible server virtual network using Azure portal](./how-to-manage-virtual-network-portal.md).
+- [Create and manage an Azure Database for MySQL flexible server virtual network using Azure portal](./how-to-manage-virtual-network-portal.md).
mysql Connect Php https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/connect-php.md
description: This quickstart provides several PHP code samples you can use to co
---+++ Last updated 05/03/2023
mysql How To Data In Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-data-in-replication.md
Last updated 12/30/2022 -+
This article describes how to set up [Data-in replication](concepts-data-in-replication.md) in Azure Database for MySQL flexible server by configuring the source and replica servers. This article assumes that you have some prior experience with MySQL servers and databases.
-> [!NOTE]
+> [!NOTE]
> This article contains references to the term *slave*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article. To create a replica in the Azure Database for MySQL flexible server instance, [Data-in replication](concepts-data-in-replication.md) synchronizes data from a source MySQL server on-premises, in virtual machines (VMs), or in cloud database services. Data-in replication can be configured using either binary log (binlog) file position-based replication OR GTID based replication. To learn more about binlog replication, see the [MySQL Replication](https://dev.mysql.com/doc/refman/5.7/en/replication-configuration.html).
Review the [limitations and requirements](concepts-data-in-replication.md#limita
## Configure the source MySQL server
-The following steps prepare and configure the MySQL server hosted on-premises, in a virtual machine, or database service hosted by other cloud providers for Data-in replication. This server is the "source" for Data-in replication.
+The following steps prepare and configure the MySQL server hosted on-premises, in a virtual machine, or database service hosted by other cloud providers for Data-in replication. This server is the "source" for Data-in replication.
1. Review the [source server requirements](concepts-data-in-replication.md#requirements) before proceeding.
The following steps prepare and configure the MySQL server hosted on-premises, i
- Ensure that the source server allows both inbound and outbound traffic on port 3306, and that it has a **public IP address**, the DNS is publicly accessible, or that it has a fully qualified domain name (FQDN). - If private access is in use, make sure that you have connectivity between Source server and the Vnet in which the replica server is hosted.
-
+   - Make sure you provide site-to-site connectivity to your on-premises source servers by using either [ExpressRoute](../../expressroute/expressroute-introduction.md) or [VPN](../../vpn-gateway/vpn-gateway-about-vpngateways.md). For more information about creating a virtual network, see the [Virtual Network Documentation](../../virtual-network/index.yml), and especially the quickstart articles with step-by-step details.
-
+   - If private access is used on the replica server and your source is an Azure VM, make sure that VNet-to-VNet connectivity is established. VNet-to-VNet peering is supported. You can also use other connectivity methods to communicate between VNets across different regions, such as a VNet-to-VNet connection. For more information, see [VNet-to-VNet VPN gateway](../../vpn-gateway/vpn-gateway-howto-vnet-vnet-resource-manager-portal.md).
-
+   - Ensure that your virtual network's Network Security Group rules don't block outbound port 3306 (and also inbound, if MySQL is running on an Azure VM). For more detail on virtual network NSG traffic filtering, see [Filter network traffic with network security groups](../../virtual-network/virtual-network-vnet-plan-design-arm.md).
-
+ - Configure your source server's firewall rules to allow the replica server IP address. 1. Follow appropriate steps based on if you want to use bin-log position or GTID based data-in replication.
The following steps prepare and configure the MySQL server hosted on-premises, i
```sql SHOW VARIABLES LIKE 'log_bin'; ```
-
+ If the variable [`log_bin`](https://dev.mysql.com/doc/refman/8.0/en/replication-options-binary-log.html#sysvar_log_bin) is returned with the value "ON", binary logging is enabled on your server.
-
+   If `log_bin` is returned with the value "OFF" and your source server is running on-premises or on virtual machines where you can access the configuration file (my.cnf), follow these steps:
-
+ 1. Locate your MySQL configuration file (my.cnf) in the source server. For example: /etc/my.cnf
-
+ 1. Open the configuration file to edit it and locate **mysqld** section in the file.
-
+ 1. In the mysqld section, add following line:
-
+ ```bash log-bin=mysql-bin.log ```
-
+   1. Restart the MySQL service on the source server for the changes to take effect.
-
+ 1. After the server is restarted, verify that binary logging is enabled by running the same query as before:
-
+ ```sql SHOW VARIABLES LIKE 'log_bin'; ```
The following steps prepare and configure the MySQL server hosted on-premises, i
#### [GTID based replication](#tab/shell)

The master server needs to be started with GTID mode enabled by setting the `gtid_mode` variable to ON. It's also essential that the `enforce_gtid_consistency` variable is enabled to make sure that only statements that are safe for MySQL GTID-based replication are logged.
-
+ SET @@GLOBAL.ENFORCE_GTID_CONSISTENCY = ON;
-
+ SET @@GLOBAL.GTID_MODE = ON;
-
+ If the master server is another Azure Database for MySQL flexible server instance, then these server parameters can also be updated from the portal by navigating to server parameter page.
-
-
++ 1. Configure the source server settings.
-
+ Data-in replication requires the parameter `lower_case_table_names` to be consistent between the source and replica servers. This parameter is 1 by default in Azure Database for MySQL flexible server.
-
+ ```sql SET GLOBAL lower_case_table_names = 1; ```
-
+ 5. Create a new replication role and set up permission.
-
+ Create a user account on the source server that is configured with replication privileges. This can be done through SQL commands or a tool such as MySQL Workbench. Consider whether you plan on replicating with SSL, as this will need to be specified when creating the user. Refer to the MySQL documentation to understand how to [add user accounts](https://dev.mysql.com/doc/refman/5.7/en/user-names.html) on your source server.
-
+ In the following commands, the new replication role created can access the source from any machine, not just the machine that hosts the source itself. This is done by specifying "syncuser@'%'" in the create user command. See the MySQL documentation to learn more about [specifying account names](https://dev.mysql.com/doc/refman/5.7/en/account-names.html). #### [SQL Command](#tab/command-line)
SET GLOBAL read_only = ON;
1. Get binary log file name and offset. Run the [`show master status`](https://dev.mysql.com/doc/refman/5.7/en/show-master-status.html) command to determine the current binary log file name and offset.
-
+ ```sql show master status; ```
The results should appear similar to the following. Make sure to note the binary
SET GLOBAL read_only = OFF; UNLOCK TABLES; ```
-[!NOTE]
+> [!NOTE]
> Before the server is set back to read/write mode, you can retrieve the GTID information using the global variable GTID_EXECUTED. The same will be used at a later stage to set the GTID on the replica server.

3. Restore dump file to new server. Restore the dump file to the server created in Azure Database for MySQL flexible server. Refer to [Dump & Restore](../concepts-migrate-dump-restore.md) for how to restore a dump file to a MySQL server. If the dump file is large, upload it to a virtual machine in Azure within the same region as your replica server. Restore it to the Azure Database for MySQL flexible server instance from the virtual machine.
-> [!NOTE]
+> [!NOTE]
> If you want to avoid setting the database to read only when you dump and restore, you can use [mydumper/myloader](../concepts-migrate-mydumper-myloader.md). ## Set GTID in Replica Server
The results should appear similar to the following. Make sure to note the binary
For more details, refer to [GTID Reset](/cli/azure/mysql/flexible-server/gtid).
-> [!NOTE]
+> [!NOTE]
>GTID reset can't be performed on a server with geo-redundant backup enabled. Disable geo-redundancy to perform a GTID reset on the server. You can enable the geo-redundancy option again after the GTID reset. The GTID reset action invalidates all the available backups; therefore, once geo-redundancy is enabled again, it may take a day before geo-restore can be performed on the server.
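As a rough sketch, resetting the GTID on the replica with the Azure CLI might look like the following; the GTID set, server name, and resource group are placeholders, and the exact command shape should be checked against the [GTID Reset](/cli/azure/mysql/flexible-server/gtid) reference linked above:

```bash
# Illustrative only: set the replica's GTID to the value captured from
# GTID_EXECUTED on the source before it was returned to read/write mode.
az mysql flexible-server gtid reset \
  --resource-group myResourceGroup \
  --server-name mydemoserver \
  --gtid-set "3E11FA47-71CA-11E1-9E33-C80AA9429562:1-5"
```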
To link two servers and start replication, login to the target replica server in
It's recommended to pass this parameter in as a variable. For more information, visit the following examples.
- > [!NOTE]
+ > [!NOTE]
> - If the source server is hosted in an Azure VM, set "Allow access to Azure services" to "ON" to allow the source and replica servers to communicate with each other. This setting can be changed from the **Connection security** options. For more information, see [Manage firewall rules using the portal](how-to-manage-firewall-portal.md). > - If you used mydumper/myloader to dump the database then you can get the master_log_file and master_log_pos from the */backup/metadata* file.
To skip a replication error and allow replication to continue, use the following
## Next steps - Learn more about [Data-in replication](concepts-data-in-replication.md) for Azure Database for MySQL flexible server.
-
+
mysql Concepts Migrate Mydumper Myloader https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/migrate/concepts-migrate-mydumper-myloader.md
-+ Last updated 05/03/2023
Before you begin migrating your MySQL database, you need to:
> [!Note] > Prior to installing the tools, consider the following points: >
- > * If your source is on-premises and has a high bandwidth connection to Azure (using ExpressRoute), consider installing the tool on an Azure VM.<br>
+ > * If your source is on-premises and has a high bandwidth connection to Azure (using ExpressRoute), consider installing the tool on an Azure VM.<br>
> * If bandwidth between the source and target is a challenge, consider installing mydumper near the source and myloader near the target server. You can use **[AzCopy](../../storage/common/storage-use-azcopy-v10.md)** to move the data from on-premises or other cloud solutions to Azure.

3. Install the mysql client by doing the following steps:
-4.
+4.
- * Update the package index on the Azure VM running Linux by running the following command:
- ```bash
- sudo apt update
- ```
- * Install the mysql client package by running the following command:
- ```bash
- sudo apt install mysql-client
- ```
+* Update the package index on the Azure VM running Linux by running the following command:
+```bash
+sudo apt update
+```
+* Install the mysql client package by running the following command:
+
+```bash
+sudo apt install mysql-client
+```
## Install mydumper/myloader
To install mydumper/myloader, do the following steps.
This command uses the following variables:

* **--host:** The host to connect to
-* **--user:** Username with the necessary privileges
+* **--user:** Username with the necessary privileges
* **--password:** User password
* **--rows:** Try to split tables into chunks of this many rows
* **--outputdir:** Directory to dump output files to
This command uses the following variables:
* **--trx-consistency-only:** Transactional consistency only
* **--threads:** Number of threads to use, default 4. We recommend using a value equal to 2x the vCores of the computer.
- >[!Note]
+ >[!Note]
>For more information on other options you can use with mydumper, run the following command: **mydumper --help**. For more details, see the [mydumper\myloader documentation](https://centminmod.com/mydumper.html)<br>
>To dump multiple databases in parallel, you can modify the regex variable as shown in the example: **regex '^(DbName1\.|DbName2\.)'**
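Putting those variables together, a hedged sketch of a mydumper invocation (the host, credentials, and output directory are placeholders) could look like:

```bash
# Illustrative only: dump with 16 threads into ./backup, keeping transactional consistency.
mydumper --host=mydemoserver.mysql.database.azure.com \
  --user=myadmin --password='<password>' \
  --rows=100000 --outputdir=./backup \
  --trx-consistency-only --threads=16
```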
This command uses the following variables:
This command uses the following variables:

* **--host:** The host to connect to
-* **--user:** Username with the necessary privileges
+* **--user:** Username with the necessary privileges
* **--password:** User password
-* **--directory:** Location where the backup is stored.
+* **--directory:** Location where the backup is stored.
* **--queries-per-transaction:** We recommend setting this to a value of not more than 500
* **--threads:** Number of threads to use, default 4. We recommend using a value equal to 2x the vCores of the computer
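A corresponding hedged myloader sketch restoring that dump into the target flexible server (again, the host, credentials, and directory are placeholders):

```bash
# Illustrative only: restore the ./backup dump with 16 threads.
myloader --host=mytargetserver.mysql.database.azure.com \
  --user=myadmin --password='<password>' \
  --directory=./backup \
  --queries-per-transaction=500 --threads=16
```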
mysql How To Migrate Single Flexible Minimum Downtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/migrate/how-to-migrate-single-flexible-minimum-downtime.md
Last updated 05/03/2023 -+
To configure Data in replication, perform the following steps:
6. Read the metadata file to determine the binary log file name and offset by running the following command: ```bash
- cat ./backup/metadata
+ cat ./backup/metadata
``` In this command, **./backup** refers to the output directory used in the command in the previous step.
To configure Data in replication, perform the following steps:
```sql SET @cert = '--BEGIN CERTIFICATE--
- PLACE YOUR PUBLIC KEY CERTIFICATE'S CONTEXT HERE
+    PLACE YOUR PUBLIC KEY CERTIFICATE'S CONTENT HERE
--END CERTIFICATE--' ```
To configure Data in replication, perform the following steps:
10. To check the replication status, on the replica server, run the following command: ```sql
- show slave status \G;
+ show slave status \G;
``` > [!Note]
To confirm that Data-in replication is working properly, you can verify that the
3. In the Customers table on the primary server, insert rows by running the following command: ```sql
- insert into `customers`(`customerNumber`,`customerName`,`contactLastName`,`contactFirstName`,`phone`,`addressLine1`,`addressLine2`,`city`,`state`,`postalCode`,`country`,`salesRepEmployeeNumber`,`creditLimit`) values
+ insert into `customers`(`customerNumber`,`customerName`,`contactLastName`,`contactFirstName`,`phone`,`addressLine1`,`addressLine2`,`city`,`state`,`postalCode`,`country`,`salesRepEmployeeNumber`,`creditLimit`) values
(<ID>,'name1','name2','name3 ','11.22.5555','54, Add',NULL,'Add1',NULL,'44000','country',1370,'21000.00'); ```
mysql Connect Go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/connect-go.md
-+ ms.devlang: golang Last updated 05/03/2023
This quickstart uses the resources created in either of these guides as a starti
- [Create an Azure Database for MySQL server using Azure portal](./quickstart-create-mysql-server-database-using-azure-portal.md) - [Create an Azure Database for MySQL server using Azure CLI](./quickstart-create-mysql-server-database-using-azure-cli.md)
-> [!IMPORTANT]
+> [!IMPORTANT]
> Ensure the IP address you're connecting from has been added the server's firewall rules using the [Azure portal](./how-to-manage-firewall-using-portal.md) or [Azure CLI](./how-to-manage-firewall-using-cli.md) ## Install Go and MySQL connector
Install [Go](https://go.dev/doc/install) and the [go-sql-driver for MySQL](https
### [Apple macOS](#tab/macos)
-1. Download and install Go according to the [installation instructions](https://go.dev/doc/install) matching your platform.
+1. Download and install Go according to the [installation instructions](https://go.dev/doc/install) matching your platform.
2. Launch the Bash shell. 3. Make a folder for your project in your home directory, such as `mkdir -p ~/go/src/mysqlgo/`. 4. Change directory into the folder, such as `cd ~/go/src/mysqlgo/`.
Get the connection information needed to connect to the Azure Database for MySQL
1. To write Golang code, you can use a simple text editor, such as Notepad in Microsoft Windows, [vi](https://manpages.ubuntu.com/manpages/xenial/man1/nvi.1.html#contenttoc5) or [Nano](https://www.nano-editor.org/) in Ubuntu, or TextEdit in macOS. If you prefer a richer Interactive Development Environment (IDE), try [Gogland](https://www.jetbrains.com/go/) by Jetbrains, [Visual Studio Code](https://code.visualstudio.com/) by Microsoft, or [Atom](https://atom.io/). 2. Paste the Go code from the sections below into text files, and then save them into your project folder with file extension \*.go (such as Windows path `%USERPROFILE%\go\src\mysqlgo\createtable.go` or Linux path `~/go/src/mysqlgo/createtable.go`).
-3. Locate the `HOST`, `DATABASE`, `USER`, and `PASSWORD` constants in the code, and then replace the example values with your own values.
+3. Locate the `HOST`, `DATABASE`, `USER`, and `PASSWORD` constants in the code, and then replace the example values with your own values.
4. Launch the command prompt or Bash shell. Change directory into your project folder. For example, on Windows `cd %USERPROFILE%\go\src\mysqlgo\`. On Linux `cd ~/go/src/mysqlgo/`. Some of the IDE editors mentioned offer debug and runtime capabilities without requiring shell commands.
-5. Run the code by typing the command `go run createtable.go` to compile the application and run it.
+5. Run the code by typing the command `go run createtable.go` to compile the application and run it.
6. Alternatively, to build the code into a native application, `go build createtable.go`, then launch `createtable.exe` to run the application. ## Connect, create table, and insert data
-Use the following code to connect to the server, create a table, and load the data by using an **INSERT** SQL statement.
+Use the following code to connect to the server, create a table, and load the data by using an **INSERT** SQL statement.
The code imports three packages: the [sql package](https://go.dev/pkg/database/sql/), the [go sql driver for mysql](https://github.com/go-sql-driver/mysql#installation) as a driver to communicate with the Azure Database for MySQL, and the [fmt package](https://go.dev/pkg/fmt/) for printed input and output on the command line. The code calls method [sql.Open()](http://go-database-sql.org/accessing.html) to connect to Azure Database for MySQL, and it checks the connection by using method [db.Ping()](https://go.dev/pkg/database/sql/#DB.Ping). A [database handle](https://go.dev/pkg/database/sql/#DB) is used throughout, holding the connection pool for the database server. The code calls the [Exec()](https://go.dev/pkg/database/sql/#DB.Exec) method several times to run several DDL commands. The code also uses [Prepare()](http://go-database-sql.org/prepared.html) and Exec() to run prepared statements with different parameters to insert three rows. Each time, a custom checkError() method is used to check if an error occurred and panic to exit.
-Replace the `host`, `database`, `user`, and `password` constants with your own values.
+Replace the `host`, `database`, `user`, and `password` constants with your own values.
```Go package main
func main() {
## Read data
-Use the following code to connect and read the data by using a **SELECT** SQL statement.
+Use the following code to connect and read the data by using a **SELECT** SQL statement.
The code imports three packages: the [sql package](https://go.dev/pkg/database/sql/), the [go sql driver for mysql](https://github.com/go-sql-driver/mysql#installation) as a driver to communicate with the Azure Database for MySQL, and the [fmt package](https://go.dev/pkg/fmt/) for printed input and output on the command line. The code calls method [sql.Open()](http://go-database-sql.org/accessing.html) to connect to Azure Database for MySQL, and checks the connection using method [db.Ping()](https://go.dev/pkg/database/sql/#DB.Ping). A [database handle](https://go.dev/pkg/database/sql/#DB) is used throughout, holding the connection pool for the database server. The code calls the [Query()](https://go.dev/pkg/database/sql/#DB.Query) method to run the select command. Then it runs [Next()](https://go.dev/pkg/database/sql/#Rows.Next) to iterate through the result set and [Scan()](https://go.dev/pkg/database/sql/#Rows.Scan) to parse the column values, saving the value into variables. Each time a custom checkError() method is used to check if an error occurred and panic to exit.
-Replace the `host`, `database`, `user`, and `password` constants with your own values.
+Replace the `host`, `database`, `user`, and `password` constants with your own values.
```Go package main
func main() {
## Update data
-Use the following code to connect and update the data using a **UPDATE** SQL statement.
+Use the following code to connect and update the data using a **UPDATE** SQL statement.
The code imports three packages: the [sql package](https://go.dev/pkg/database/sql/), the [go sql driver for mysql](https://github.com/go-sql-driver/mysql#installation) as a driver to communicate with the Azure Database for MySQL, and the [fmt package](https://go.dev/pkg/fmt/) for printed input and output on the command line. The code calls method [sql.Open()](http://go-database-sql.org/accessing.html) to connect to Azure Database for MySQL, and checks the connection using method [db.Ping()](https://go.dev/pkg/database/sql/#DB.Ping). A [database handle](https://go.dev/pkg/database/sql/#DB) is used throughout, holding the connection pool for the database server. The code calls the [Exec()](https://go.dev/pkg/database/sql/#DB.Exec) method to run the update command. Each time a custom checkError() method is used to check if an error occurred and panic to exit.
-Replace the `host`, `database`, `user`, and `password` constants with your own values.
+Replace the `host`, `database`, `user`, and `password` constants with your own values.
```Go package main
func main() {
## Delete data
-Use the following code to connect and remove data using a **DELETE** SQL statement.
+Use the following code to connect and remove data using a **DELETE** SQL statement.
The code imports three packages: the [sql package](https://go.dev/pkg/database/sql/), the [go sql driver for mysql](https://github.com/go-sql-driver/mysql#installation) as a driver to communicate with the Azure Database for MySQL, and the [fmt package](https://go.dev/pkg/fmt/) for printed input and output on the command line. The code calls method [sql.Open()](http://go-database-sql.org/accessing.html) to connect to Azure Database for MySQL, and checks the connection using method [db.Ping()](https://go.dev/pkg/database/sql/#DB.Ping). A [database handle](https://go.dev/pkg/database/sql/#DB) is used throughout, holding the connection pool for the database server. The code calls the [Exec()](https://go.dev/pkg/database/sql/#DB.Exec) method to run the delete command. Each time a custom checkError() method is used to check if an error occurred and panic to exit.
-Replace the `host`, `database`, `user`, and `password` constants with your own values.
+Replace the `host`, `database`, `user`, and `password` constants with your own values.
```Go package main
mysql Connect Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/connect-java.md
ms.devlang: java -+ Last updated 05/03/2023
mysql Connect Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/connect-nodejs.md
ms.devlang: javascript -+ Last updated 05/03/2023
Last updated 05/03/2023
[!INCLUDE[azure-database-for-mysql-single-server-deprecation](../includes/azure-database-for-mysql-single-server-deprecation.md)]
-In this quickstart, you connect to an Azure Database for MySQL by using Node.js. You then use SQL statements to query, insert, update, and delete data in the database from Mac, Linux, and Windows platforms.
+In this quickstart, you connect to an Azure Database for MySQL by using Node.js. You then use SQL statements to query, insert, update, and delete data in the database from Mac, Linux, and Windows platforms.
This article assumes that you're familiar with developing using Node.js, but you're new to working with Azure Database for MySQL.
This article assumes that you're familiar with developing using Node.js, but you
- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). - An Azure Database for MySQL server. [Create an Azure Database for MySQL server using Azure portal](quickstart-create-mysql-server-database-using-azure-portal.md) or [Create an Azure Database for MySQL server using Azure CLI](quickstart-create-mysql-server-database-using-azure-cli.md).
-> [!IMPORTANT]
+> [!IMPORTANT]
> Ensure the IP address you're connecting from has been added the server's firewall rules using the [Azure portal](./how-to-manage-firewall-using-portal.md) or [Azure CLI](./how-to-manage-firewall-using-cli.md) ## Install Node.js and the MySQL connector
Depending on your platform, follow the instructions in the appropriate section t
### [Windows](#tab/windows) 1. Visit the [Node.js downloads page](https://nodejs.org/en/download/), and then select your desired Windows installer option.
-2. Make a local project folder such as `nodejsmysql`.
+2. Make a local project folder such as `nodejsmysql`.
3. Open the command prompt, and then change directory into the project folder, such as `cd c:\nodejsmysql\` 4. Run the NPM tool to install the mysql2 library into the project folder.
Get the connection information needed to connect to the Azure Database for MySQL
Use the following code to connect and load the data by using **CREATE TABLE** and **INSERT INTO** SQL statements.
-The [mysql.createConnection()](https://github.com/sidorares/node-mysql2#first-query) method is used to interface with the MySQL server. The [connect()](https://github.com/sidorares/node-mysql2#first-query) function is used to establish the connection to the server. The [query()](https://github.com/sidorares/node-mysql2#first-query) function is used to execute the SQL query against MySQL database.
+The [mysql.createConnection()](https://github.com/sidorares/node-mysql2#first-query) method is used to interface with the MySQL server. The [connect()](https://github.com/sidorares/node-mysql2#first-query) function is used to establish the connection to the server. The [query()](https://github.com/sidorares/node-mysql2#first-query) function is used to execute the SQL query against MySQL database.
```javascript const mysql = require('mysql2');
var config =
const conn = new mysql.createConnection(config); conn.connect(
- function (err) {
- if (err) {
+ function (err) {
+ if (err) {
console.log("!!! Cannot connect !!! Error:"); throw err; }
conn.connect(
}); function queryDatabase(){
- conn.query('DROP TABLE IF EXISTS inventory;', function (err, results, fields) {
- if (err) throw err;
+ conn.query('DROP TABLE IF EXISTS inventory;', function (err, results, fields) {
+ if (err) throw err;
console.log('Dropped inventory table if existed.'); })
- conn.query('CREATE TABLE inventory (id serial PRIMARY KEY, name VARCHAR(50), quantity INTEGER);',
+ conn.query('CREATE TABLE inventory (id serial PRIMARY KEY, name VARCHAR(50), quantity INTEGER);',
function (err, results, fields) { if (err) throw err; console.log('Created inventory table.'); })
- conn.query('INSERT INTO inventory (name, quantity) VALUES (?, ?);', ['banana', 150],
+ conn.query('INSERT INTO inventory (name, quantity) VALUES (?, ?);', ['banana', 150],
function (err, results, fields) { if (err) throw err; else console.log('Inserted ' + results.affectedRows + ' row(s).'); })
- conn.query('INSERT INTO inventory (name, quantity) VALUES (?, ?);', ['orange', 154],
+ conn.query('INSERT INTO inventory (name, quantity) VALUES (?, ?);', ['orange', 154],
function (err, results, fields) { if (err) throw err; console.log('Inserted ' + results.affectedRows + ' row(s).'); })
- conn.query('INSERT INTO inventory (name, quantity) VALUES (?, ?);', ['apple', 100],
+ conn.query('INSERT INTO inventory (name, quantity) VALUES (?, ?);', ['apple', 100],
function (err, results, fields) { if (err) throw err; console.log('Inserted ' + results.affectedRows + ' row(s).'); })
- conn.end(function (err) {
+ conn.end(function (err) {
if (err) throw err;
- else console.log('Done.')
+ else console.log('Done.')
}); }; ``` ## Read data
-Use the following code to connect and read the data by using a **SELECT** SQL statement.
+Use the following code to connect and read the data by using a **SELECT** SQL statement.
The [mysql.createConnection()](https://github.com/sidorares/node-mysql2#first-query) method is used to interface with the MySQL server. The [connect()](https://github.com/sidorares/node-mysql2#first-query) method is used to establish the connection to the server. The [query()](https://github.com/sidorares/node-mysql2#first-query) method is used to execute the SQL query against MySQL database. The results array is used to hold the results of the query.
```javascript
const mysql = require('mysql2');

// Connection settings (host, user, password, database, port, ssl) are omitted here;
// populate config with your server's values.
var config = {};

const conn = new mysql.createConnection(config);

conn.connect(
    function (err) {
        if (err) {
            console.log("!!! Cannot connect !!! Error:");
            throw err;
        }
        readData(); // run the SELECT once the connection is established
});

function readData(){
    conn.query('SELECT * FROM inventory',
        function (err, results, fields) {
            if (err) throw err;
            else console.log('Selected ' + results.length + ' row(s).');
            console.log('Done.');
        })
    conn.end(
        function (err) {
            if (err) throw err;
            else console.log('Closing connection.')
    });
};
```
## Update data
-Use the following code to connect and update data by using an **UPDATE** SQL statement.
+Use the following code to connect and update data by using an **UPDATE** SQL statement.
-The [mysql.createConnection()](https://github.com/sidorares/node-mysql2#first-query) method is used to interface with the MySQL server. The [connect()](https://github.com/sidorares/node-mysql2#first-query) method is used to establish the connection to the server. The [query()](https://github.com/sidorares/node-mysql2#first-query) method is used to execute the SQL query against MySQL database.
+The [mysql.createConnection()](https://github.com/sidorares/node-mysql2#first-query) method is used to interface with the MySQL server. The [connect()](https://github.com/sidorares/node-mysql2#first-query) method is used to establish the connection to the server. The [query()](https://github.com/sidorares/node-mysql2#first-query) method is used to execute the SQL query against the MySQL database.
```javascript
const mysql = require('mysql2');

// Connection settings (host, user, password, database, port, ssl) are omitted here;
// populate config with your server's values.
var config = {};

const conn = new mysql.createConnection(config);

conn.connect(
    function (err) {
        if (err) {
            console.log("!!! Cannot connect !!! Error:");
            throw err;
        }
        updateData(); // run the UPDATE once the connection is established
});

function updateData(){
    conn.query('UPDATE inventory SET quantity = ? WHERE name = ?', [200, 'banana'],
        function (err, results, fields) {
            if (err) throw err;
            else console.log('Updated ' + results.affectedRows + ' row(s).');
        })
    conn.end(
        function (err) {
            if (err) throw err;
            else console.log('Done.')
    });
};
```
## Delete data
-Use the following code to connect and delete data by using a **DELETE** SQL statement.
+Use the following code to connect and delete data by using a **DELETE** SQL statement.
-The [mysql.createConnection()](https://github.com/sidorares/node-mysql2#first-query) method is used to interface with the MySQL server. The [connect()](https://github.com/sidorares/node-mysql2#first-query) method is used to establish the connection to the server. The [query()](https://github.com/sidorares/node-mysql2#first-query) method is used to execute the SQL query against MySQL database.
+The [mysql.createConnection()](https://github.com/sidorares/node-mysql2#first-query) method is used to interface with the MySQL server. The [connect()](https://github.com/sidorares/node-mysql2#first-query) method is used to establish the connection to the server. The [query()](https://github.com/sidorares/node-mysql2#first-query) method is used to execute the SQL query against the MySQL database.
```javascript
const mysql = require('mysql2');

// Connection settings (host, user, password, database, port, ssl) are omitted here;
// populate config with your server's values.
var config = {};

const conn = new mysql.createConnection(config);

conn.connect(
    function (err) {
        if (err) {
            console.log("!!! Cannot connect !!! Error:");
            throw err;
        }
        deleteData(); // run the DELETE once the connection is established
});

function deleteData(){
    conn.query('DELETE FROM inventory WHERE name = ?', ['orange'],
        function (err, results, fields) {
            if (err) throw err;
            else console.log('Deleted ' + results.affectedRows + ' row(s).');
        })
    conn.end(
        function (err) {
            if (err) throw err;
            else console.log('Done.')
    });
};
```
mysql Connect Ruby https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/connect-ruby.md
ms.devlang: ruby -+ Last updated 05/03/2023
mysql How To Connect With Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-connect-with-managed-identity.md
-+ Last updated 05/03/2023
Last updated 05/03/2023
[!INCLUDE[azure-database-for-mysql-single-server-deprecation](../includes/azure-database-for-mysql-single-server-deprecation.md)]
-This article shows you how to use a user-assigned identity for an Azure Virtual Machine (VM) to access an Azure Database for MySQL server. Managed Service Identities are automatically managed by Azure and enable you to authenticate to services that support Microsoft Entra authentication, without needing to insert credentials into your code.
+This article shows you how to use a user-assigned identity for an Azure Virtual Machine (VM) to access an Azure Database for MySQL server. Managed Service Identities are automatically managed by Azure and enable you to authenticate to services that support Microsoft Entra authentication, without needing to insert credentials into your code.
You learn how to:
mysql How To Data In Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-data-in-replication.md
Title: Configure Data-in Replication - Azure Database for MySQL
description: This article describes how to set up Data-in Replication for Azure Database for MySQL. -+
The following steps prepare and configure the MySQL server hosted on-premises, i
> [!NOTE] > This IP address may change due to maintenance / deployment operations. This method of connectivity is only for customers who can't allow all IP addresses on port 3306.
-
+ 3. Turn on binary logging. Check to see if binary logging has been enabled on the source by running the following command:
mysql How To Redirection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-redirection.md
Last updated 05/03/2023 -+
Sign in to the [Azure portal](https://portal.azure.com). Create an Azure Databas
For details, refer to how to create an Azure Database for MySQL server using the [Azure portal](quickstart-create-mysql-server-database-using-azure-portal.md) or [Azure CLI](quickstart-create-mysql-server-database-using-azure-cli.md).
-> [!IMPORTANT]
+> [!IMPORTANT]
> Redirection is currently not supported with [Private Link for Azure Database for MySQL](concepts-data-access-security-private-link.md). ## Enable redirection
Support for redirection in PHP applications is available through the [mysqlnd_az
The mysqlnd_azure extension is available to add to PHP applications through PECL, and it's highly recommended to install and configure the extension through the officially published [PECL package](https://pecl.php.net/package/mysqlnd_azure).
-> [!IMPORTANT]
+> [!IMPORTANT]
> Support for redirection in the PHP [mysqlnd_azure](https://github.com/microsoft/mysqlnd_azure) extension is currently in preview. ### Redirection logic
-> [!IMPORTANT]
+> [!IMPORTANT]
> Redirection logic/behavior beginning version 1.1.0 was updated and **it is recommended to use version 1.1.0+**. The redirection behavior is determined by the value of `mysqlnd_azure.enableRedirect`. The table below outlines the behavior of redirection based on the value of this parameter beginning in **version 1.1.0+**.
nat-gateway Quickstart Create Nat Gateway Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/nat-gateway/quickstart-create-nat-gateway-bicep.md
Last updated 07/21/2023 -+ # Customer intent: I want to create a NAT gateway using Bicep so that I can provide outbound connectivity for my virtual machines.
nat-gateway Quickstart Create Nat Gateway Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/nat-gateway/quickstart-create-nat-gateway-template.md
Last updated 07/21/2023 -+ # Customer intent: I want to create a NAT gateway by using an Azure Resource Manager template so that I can provide outbound connectivity for my virtual machines.
nat-gateway Troubleshoot Nat Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/nat-gateway/troubleshoot-nat-connectivity.md
Title: Troubleshoot Azure NAT Gateway connectivity
description: Troubleshoot connectivity issues with a NAT gateway. -+ Last updated 04/24/2023
-# Troubleshoot Azure NAT Gateway connectivity
+# Troubleshoot Azure NAT Gateway connectivity
-This article provides guidance on how to troubleshoot and resolve common outbound connectivity issues with your NAT gateway. This article also provides best practices on how to design applications to use outbound connections efficiently.
+This article provides guidance on how to troubleshoot and resolve common outbound connectivity issues with your NAT gateway. This article also provides best practices on how to design applications to use outbound connections efficiently.
## Datapath availability drop on NAT gateway with connection failures **Scenario**
-You observe a drop in the datapath availability of NAT gateway, which coincides with connection failures.
+You observe a drop in the datapath availability of NAT gateway, which coincides with connection failures.
**Possible causes** * SNAT port exhaustion
You observe a drop in the datapath availability of NAT gateway, which coincides
* Check the SNAT Connection Count metric and [split the connection state](/azure/nat-gateway/nat-metrics#snat-connection-count) by attempted and failed connections. More than zero failed connections may indicate SNAT port exhaustion or reaching the SNAT connection count limit of NAT gateway. * Verify the [Total SNAT Connection Count metric](/azure/nat-gateway/nat-metrics#total-snat-connection-count) to ensure it is within limits. NAT gateway supports 50,000 simultaneous connections per IP address to a unique destination and up to 2 million connections in total. For more information, see [NAT Gateway Performance](/azure/nat-gateway/nat-gateway-resource#performance). * Check the [dropped packets metric](/azure/nat-gateway/nat-metrics#dropped-packets) for any packet drops that align with connection failures or high connection volume.
-* Adjust the [TCP idle timeout timer](./nat-gateway-resource.md#tcp-idle-timeout) settings as needed. An idle timeout timer set higher than the default (4 minutes) holds on to flows longer, and can create [extra pressure on SNAT port inventory](./nat-gateway-resource.md#timers).
+* Adjust the [TCP idle timeout timer](./nat-gateway-resource.md#tcp-idle-timeout) settings as needed. An idle timeout timer set higher than the default (4 minutes) holds on to flows longer, and can create [extra pressure on SNAT port inventory](./nat-gateway-resource.md#timers).
### Possible solutions for SNAT port exhaustion or hitting simultaneous connection limits
-* Add public IP addresses to your NAT gateway up to a total of 16 to scale your outbound connectivity. Each public IP provides 64,512 SNAT ports and supports up to 50,000 simultaneous connections per unique destination endpoint for NAT gateway.
+* Add public IP addresses to your NAT gateway up to a total of 16 to scale your outbound connectivity. Each public IP provides 64,512 SNAT ports and supports up to 50,000 simultaneous connections per unique destination endpoint for NAT gateway.
* Distribute your application environment across multiple subnets and provide a NAT gateway resource for each subnet. * Reduce the [TCP idle timeout timer](./nat-gateway-resource.md#idle-timeout-timers) to a lower value to free up SNAT port inventory earlier. The TCP idle timeout timer can't be set lower than 4 minutes. * Consider **[asynchronous polling patterns](/azure/architecture/patterns/async-request-reply)** to free up connection resources for other operations. * Make connections to Azure PaaS services over the Azure backbone using [Private Link](/azure/private-link/private-link-overview). Private link frees up SNAT ports for outbound connections to the internet. * If your investigation is inconclusive, open a support case to [further troubleshoot](#more-troubleshooting-guidance).
->[!NOTE]
+>[!NOTE]
>It is important to understand why SNAT port exhaustion occurs. Make sure you use the right patterns for scalable and reliable scenarios. Adding more SNAT ports to a scenario without understanding the cause of the demand should be a last resort. If you do not understand why your scenario is applying pressure on SNAT port inventory, adding more SNAT ports by adding more IP addresses will only delay the same exhaustion failure as your application scales. You may be masking other inefficiencies and anti-patterns. For more information, see [best practices for efficient use of outbound connections](#outbound-connectivity-best-practices).
-### Possible solutions for TCP connection timeouts
+### Possible solutions for TCP connection timeouts
Use TCP keepalives or application layer keepalives to refresh idle flows and reset the idle timeout timer. For examples, see [.NET examples](/dotnet/api/system.net.servicepoint.settcpkeepalive).
-TCP keepalives only need to be enabled from one side of a connection in order to keep a connection alive from both sides. When a TCP keepalive is sent from one side of a connection, the other side automatically sends an ACK packet. The idle timeout timer is then reset on both sides of the connection.
+TCP keepalives only need to be enabled from one side of a connection in order to keep a connection alive from both sides. When a TCP keepalive is sent from one side of a connection, the other side automatically sends an ACK packet. The idle timeout timer is then reset on both sides of the connection.
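For instance, here's a minimal Node.js sketch (not taken from the linked .NET examples; the host, port, and interval are placeholder assumptions) that enables TCP keepalives on an outbound socket:

```javascript
const net = require('net');

// Hypothetical outbound connection; the host and port are placeholders.
const socket = net.connect({ host: 'example.com', port: 443 }, () => {
  // Send TCP keepalive probes after 30 seconds of idle time so idle flows
  // are refreshed before the NAT gateway idle timeout timer expires.
  socket.setKeepAlive(true, 30 * 1000);
});

socket.on('error', (err) => console.error('Connection error:', err.message));
```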
->[!Note]
->Increasing the TCP idle timeout is a last resort and may not resolve the root cause of the issue. A long timeout can introduce delay and cause unnecessary low-rate failures when timeout expires.
+>[!Note]
+>Increasing the TCP idle timeout is a last resort and may not resolve the root cause of the issue. A long timeout can introduce delay and cause unnecessary low-rate failures when timeout expires.
### Possible solutions for UDP connection timeouts
UDP idle timeout timers are set to 4 minutes and aren't configurable. Enable UDP
Application layer keepalives can also be used to refresh idle flows and reset the idle timeout. Check the server side for what options exist for application specific keepalives.
-## Datapath availability drop on NAT gateway but no connection failures
+## Datapath availability drop on NAT gateway but no connection failures
**Scenario**
-The datapath availability of NAT gateway drops but no failed connections are observed.
+The datapath availability of NAT gateway drops but no failed connections are observed.
**Possible cause**
-Transient drop in datapath availability caused by noise in the datapath.
+Transient drop in datapath availability caused by noise in the datapath.
**How to troubleshoot**
-If you observe no impact on your outbound connectivity but see a drop in datapath availability, NAT gateway may be picking up noise in the datapath that shows as a transient drop.
+If you observe no impact on your outbound connectivity but see a drop in datapath availability, NAT gateway may be picking up noise in the datapath that shows as a transient drop.
### Recommended alert setup
-Set up an [alert for datapath availability drops](/azure/nat-gateway/nat-metrics#alerts-for-datapath-availability-degradation) or use [Azure Resource Health](/azure/nat-gateway/resource-health#resource-health-alerts) to alert on NAT Gateway health events.
+Set up an [alert for datapath availability drops](/azure/nat-gateway/nat-metrics#alerts-for-datapath-availability-degradation) or use [Azure Resource Health](/azure/nat-gateway/resource-health#resource-health-alerts) to alert on NAT Gateway health events.
## No outbound connectivity to the internet
You observe no outbound connectivity on your NAT gateway.
* Carefully consider your traffic routing requirements before making any changes to traffic routes for your virtual network. UDRs that send 0.0.0.0/0 traffic to a virtual appliance or virtual network gateway override NAT gateway. See [custom routes](/azure/virtual-network/virtual-networks-udr-overview#custom-routes) to learn more about how custom routes impact the routing of network traffic. To explore options for updating your traffic routes on your subnet routing table, see: * [Add a custom route](/azure/virtual-network/manage-route-table#create-a-route) * [Change a route](/azure/virtual-network/manage-route-table#change-a-route)
- * [Delete a route](/azure/virtual-network/manage-route-table#delete-a-route)
-* Update NSG security rules that block internet access for any of your VMs. For more information, see [manage network security groups](/azure/virtual-network/manage-network-security-group?tabs=network-security-group-portal).
-* DNS not resolving correctly can happen for many reasons. Refer to the [DNS troubleshooting guide](/azure/dns/dns-troubleshoot) to help investigate why DNS resolution may be failing.
+ * [Delete a route](/azure/virtual-network/manage-route-table#delete-a-route)
+* Update NSG security rules that block internet access for any of your VMs. For more information, see [manage network security groups](/azure/virtual-network/manage-network-security-group?tabs=network-security-group-portal).
+* DNS not resolving correctly can happen for many reasons. Refer to the [DNS troubleshooting guide](/azure/dns/dns-troubleshoot) to help investigate why DNS resolution may be failing.
## NAT gateway public IP isn't used to connect outbound **Scenario**
-NAT gateway is deployed in your Azure virtual network but unexpected IP addresses are used for outbound connections.
+NAT gateway is deployed in your Azure virtual network but unexpected IP addresses are used for outbound connections.
**Possible causes** * NAT gateway misconfiguration
-* Active connection with another Azure outbound connectivity method such as Azure Load balancer or instance-level public IPs on virtual machines. Active connection flows continue to use the old public IP address that was assigned when the connection was established. When NAT gateway is deployed, new connections start using NAT gateway right away.
+* Active connection with another Azure outbound connectivity method such as Azure Load balancer or instance-level public IPs on virtual machines. Active connection flows continue to use the old public IP address that was assigned when the connection was established. When NAT gateway is deployed, new connections start using NAT gateway right away.
* Private IPs are used to connect to Azure services by service endpoints or Private Link * Connections to storage accounts come from the same region as the VM you're making a connection from. * Internet traffic is being redirected away from NAT gateway and force-tunneled to an NVA or firewall.
NAT gateway is deployed in your Azure virtual network but unexpected IP addresse
* Check if connections being made to other Azure services are coming from a private IP address in your Azure virtual network. * Check if you have [Private Link](/azure/private-link/manage-private-endpoint?tabs=manage-private-link-powershell#manage-private-endpoint-connections-on-azure-paas-resources) or [service endpoints](../virtual-network/virtual-network-service-endpoints-overview.md#logging-and-troubleshooting) enabled for connecting to other Azure services. * If connecting to Azure storage, check if your VM is in the same region as Azure storage.
-* Check if the public IP address being used to make connections is coming from another Azure service in your Azure virtual network, such as an NVA.
+* Check if the public IP address being used to make connections is coming from another Azure service in your Azure virtual network, such as an NVA.
### Possible solutions for NAT gateway public IP not used to connect outbound * Attach a public IP address or prefix to NAT gateway. Also make sure that NAT gateway is attached to subnets from the same virtual network. [Validate that NAT gateway can connect outbound](/azure/nat-gateway/troubleshoot-nat#how-to-validate-connectivity). * Test and resolve issues with VMs holding on to old SNAT IP addresses from another outbound connectivity method by: * Ensure you establish a new connection and that existing connections aren't being reused in the OS or that the browser is caching the connections. For example, when using curl in PowerShell, make sure to specify the -DisableKeepalive parameter to force a new connection. If you're using a browser, connections may also be pooled. * It isn't necessary to reboot a virtual machine in a subnet configured to NAT gateway. However, if a virtual machine is rebooted, the connection state is flushed. When the connection state is flushed, all connections begin using the NAT gateway resource's IP address(es). This behavior is a side effect of the virtual machine reboot and not an indicator that a reboot is required.
- * If you're still having trouble, [open a support case](#more-troubleshooting-guidance) for further troubleshooting.
-* Custom routes directing 0.0.0.0/0 traffic to an NVA will take precedence over NAT gateway for routing traffic to the internet. To have NAT gateway route traffic to the internet instead of the NVA, [remove the custom route](/azure/virtual-network/manage-route-table#delete-a-route) for 0.0.0.0/0 traffic going to the virtual appliance. The 0.0.0.0/0 traffic resumes using the default route to the internet and NAT gateway is used instead.
+ * If you're still having trouble, [open a support case](#more-troubleshooting-guidance) for further troubleshooting.
+* Custom routes directing 0.0.0.0/0 traffic to an NVA will take precedence over NAT gateway for routing traffic to the internet. To have NAT gateway route traffic to the internet instead of the NVA, [remove the custom route](/azure/virtual-network/manage-route-table#delete-a-route) for 0.0.0.0/0 traffic going to the virtual appliance. The 0.0.0.0/0 traffic resumes using the default route to the internet and NAT gateway is used instead.
> [!IMPORTANT] > Consider the routing requirements of your cloud architecture before making any changes to how traffic is routed. * Services deployed in the same region as an Azure storage account use private Azure IP addresses for communication. You can't restrict access to specific Azure services based on their public outbound IP address range. For more information, see [restrictions for IP network rules](/azure/storage/common/storage-network-security?tabs=azure-portal#restrictions-for-ip-network-rules).
-* Private Link and service endpoints use the private IP addresses of virtual machine instances in your virtual network to connect to Azure platform services instead of the public IP of NAT gateway. It's recommended to use Private Link to connect to other Azure services over the Azure backbone instead of over the internet with NAT gateway.
->[!NOTE]
->Private Link is the recommended option over Service endpoints for private access to Azure hosted services.
+* Private Link and service endpoints use the private IP addresses of virtual machine instances in your virtual network to connect to Azure platform services instead of the public IP of NAT gateway. It's recommended to use Private Link to connect to other Azure services over the Azure backbone instead of over the internet with NAT gateway.
+>[!NOTE]
+>Private Link is the recommended option over Service endpoints for private access to Azure hosted services.
-## Connection failures at the public internet destination
+## Connection failures at the public internet destination
**Scenario**
-NAT gateway connections to internet destinations fail or time out.
+NAT gateway connections to internet destinations fail or time out.
**Possible causes**
-* Firewall or other traffic management components at the destination.
-* API rate limiting imposed by the destination side.
+* Firewall or other traffic management components at the destination.
+* API rate limiting imposed by the destination side.
* Volumetric DDoS mitigations or transport layer traffic shaping. * The destination endpoint responds with fragmented packets
NAT gateway connections to internet destinations fail or time out.
* Check that the NAT gateway public IP is allow listed at partner destinations with Firewalls or other traffic management components ### Possible solutions for connection failures at internet destination
-* Verify if NAT gateway public IP is allow listed at destination.
-* If you're creating high volume or transaction rate testing, explore if reducing the rate reduces the occurrence of failures.
-* If changing rate impacts the rate of failures, check if API rate limits, or other constraints on the destination side might have been reached.
+* Verify that the NAT gateway public IP is allow listed at the destination.
+* If you're running high-volume or high transaction rate tests, check whether reducing the rate reduces the occurrence of failures.
+* If changing the rate affects the failure rate, check whether API rate limits or other constraints on the destination side might have been reached.
## Connection failures at FTP server for active or passive mode **Scenario**
-You see connection failures at an FTP server when using NAT gateway with active or passive FTP mode.
+You see connection failures at an FTP server when using NAT gateway with active or passive FTP mode.
**Possible causes**
-* Active FTP mode is enabled.
+* Active FTP mode is enabled.
* Passive FTP mode is enabled and NAT gateway is using more than one public IP address.
-
+ ### Possible solution for Active FTP mode
-FTP uses two separate channels between a client and server, the command and data channels. Each channel communicates on separate TCP connections, one for sending the commands and the other for transferring data.
+FTP uses two separate channels between a client and server, the command and data channels. Each channel communicates on separate TCP connections, one for sending the commands and the other for transferring data.
-In active FTP mode, the client establishes the command channel and the server establishes the data channel.
+In active FTP mode, the client establishes the command channel and the server establishes the data channel.
-**NAT gateway isn't compatible with active FTP mode**. Active FTP uses a PORT command from the FTP client that tells the FTP server what IP address and port for the server to use on the data channel to connect back to the client. The PORT command uses the private address of the client, which can't be changed. Client side traffic is SNATed by NAT gateway for internet-based communication so the PORT command is seen as invalid by the FTP server.
+**NAT gateway isn't compatible with active FTP mode**. Active FTP uses a PORT command from the FTP client that tells the FTP server what IP address and port the server should use on the data channel to connect back to the client. The PORT command uses the private address of the client, which can't be changed. Client-side traffic is SNATed by NAT gateway for internet-based communication, so the PORT command is seen as invalid by the FTP server.
-An alternative solution to active FTP mode is to use passive FTP mode instead. However, in order to use NAT gateway in passive FTP mode, [some considerations](#possible-solution-for-passive-ftp-mode) must be made.
+An alternative solution to active FTP mode is to use passive FTP mode instead. However, in order to use NAT gateway in passive FTP mode, [some considerations](#possible-solution-for-passive-ftp-mode) must be made.
### Possible solution for Passive FTP mode
-In passive FTP mode, the client establishes connections on both the command and data channels. The client requests that the server listen on a port rather than try to establish a connection back to the client.
+In passive FTP mode, the client establishes connections on both the command and data channels. The client requests that the server listen on a port rather than try to establish a connection back to the client.
Outbound Passive FTP may not work for NAT gateway with multiple public IP addresses, depending on your FTP server configuration. When a NAT gateway with multiple public IP addresses sends traffic outbound, it randomly selects one of its public IP addresses for the source IP address. FTP may fail when data and control channels use different source IP addresses, depending on your FTP server configuration. To prevent possible passive FTP connection failures, do the following steps:
-1. Check that your NAT gateway is attached to a single public IP address rather than multiple IP addresses or a prefix.
+1. Check that your NAT gateway is attached to a single public IP address rather than multiple IP addresses or a prefix.
2. Make sure that the passive port range from your NAT gateway is allowed to pass any firewalls that may be at the destination endpoint. >[!NOTE]
To prevent possible passive FTP connection failures, do the following steps:
**Scenario**
-Unable to connect outbound with NAT gateway on port 25 for SMTP traffic.
+Unable to connect outbound with NAT gateway on port 25 for SMTP traffic.
**Cause**
-The Azure platform blocks outbound SMTP connections on TCP port 25 for deployed VMs. This block is to ensure better security for Microsoft partners and customers, protect Microsoft's Azure platform, and conform to industry standards.
+The Azure platform blocks outbound SMTP connections on TCP port 25 for deployed VMs. This block is to ensure better security for Microsoft partners and customers, protect Microsoft's Azure platform, and conform to industry standards.
### Recommended guidance for sending email
-It's recommended you use authenticated SMTP relay services to send email from Azure VMs or from Azure App Service. For more information, see [troubleshoot outbound SMTP connectivity problems](/azure/virtual-network/troubleshoot-outbound-smtp-connectivity).
+It's recommended you use authenticated SMTP relay services to send email from Azure VMs or from Azure App Service. For more information, see [troubleshoot outbound SMTP connectivity problems](/azure/virtual-network/troubleshoot-outbound-smtp-connectivity).
## More troubleshooting guidance
-### Extra network captures
+### Extra network captures
-If your investigation is inconclusive, open a support case for further troubleshooting and collect the following information for a quicker resolution. Choose a single virtual machine in your NAT gateway configured subnet to perform the following tests:
+If your investigation is inconclusive, open a support case for further troubleshooting and collect the following information for a quicker resolution. Choose a single virtual machine in your NAT gateway configured subnet to perform the following tests:
-* Use **`ps ping`** from one of the backend VMs within the virtual network to test the probe port response (example: **`ps ping 10.0.0.4:3389`**) and record results.
+* Use **`psping`** from one of the backend VMs within the virtual network to test the probe port response (example: **`psping 10.0.0.4:3389`**) and record results.
-* If no response is received in these ping tests, run a simultaneous Netsh trace on the backend VM, and the virtual network test VM while you run PsPing then stop the Netsh trace.
+* If no response is received in these ping tests, run a simultaneous Netsh trace on the backend VM and the virtual network test VM while you run PsPing, then stop the Netsh trace.
## Outbound connectivity best practices
-Azure monitors and operates its infrastructure with great care. However, transient failures can still occur from deployed applications, there's no guarantee that transmissions are lossless. NAT gateway is the preferred option to connect outbound from Azure deployments in order to ensure highly reliable and resilient outbound connectivity. In addition to using NAT gateway to connect outbound, use the guidance later in the article for how to ensure that applications are using connections efficiently.
+Azure monitors and operates its infrastructure with great care. However, transient failures can still occur from deployed applications, and there's no guarantee that transmissions are lossless. NAT gateway is the preferred option to connect outbound from Azure deployments in order to ensure highly reliable and resilient outbound connectivity. In addition to using NAT gateway to connect outbound, use the guidance later in this article to ensure that applications use connections efficiently.
-### Modify the application to use connection pooling
+### Modify the application to use connection pooling
-When you pool your connections, you avoid opening new network connections for calls to the same address and port. You can implement a connection pooling scheme in your application where requests are internally distributed across a fixed set of connections and reused when possible. This setup constrains the number of SNAT ports in use and creates a predictable environment. Connection pooling helps reduce latency and resource utilization and ultimately improve the performance of your applications.
+When you pool your connections, you avoid opening new network connections for calls to the same address and port. You can implement a connection pooling scheme in your application where requests are internally distributed across a fixed set of connections and reused when possible. This setup constrains the number of SNAT ports in use and creates a predictable environment. Connection pooling helps reduce latency and resource utilization and ultimately improve the performance of your applications.
To learn more on pooling HTTP connections, see [Pool HTTP connections](/aspnet/core/performance/performance-best-practices#pool-http-connections-with-httpclientfactory) with HttpClientFactory.
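For non-HTTP connections the same idea applies. Here's a minimal Node.js sketch with the mysql2 driver (the connection settings and pool size are placeholder assumptions); a fixed-size pool bounds the number of outbound connections, and therefore SNAT ports, in use:

```javascript
const mysql = require('mysql2');

// A fixed-size pool reuses a bounded set of connections (and therefore SNAT ports)
// instead of opening a new outbound connection for every query.
const pool = mysql.createPool({
  host: 'your-server.mysql.database.azure.com', // placeholder
  user: 'your-user',                            // placeholder
  password: 'your-password',                    // placeholder
  database: 'quickstartdb',                     // placeholder
  connectionLimit: 10                           // cap on concurrent connections
});

pool.query('SELECT 1;', (err, results) => {
  if (err) throw err;
  console.log(results);
});
```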
-### Modify the application to reuse connections
+### Modify the application to reuse connections
-Rather than generating individual, atomic TCP connections for each request, configure your application to reuse connections. Connection reuse results in more performant TCP transactions and is especially relevant for protocols like HTTP/1.1, where connection reuse is the default. This reuse applies to other protocols that use HTTP as their transport such as REST.
+Rather than generating individual, atomic TCP connections for each request, configure your application to reuse connections. Connection reuse results in more performant TCP transactions and is especially relevant for protocols like HTTP/1.1, where connection reuse is the default. This reuse applies to other protocols that use HTTP as their transport such as REST.
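As a hedged Node.js illustration (the URL and socket limit are placeholders), a shared keep-alive agent lets repeated HTTP/1.1 requests to the same host reuse an existing TCP connection instead of opening a new one each time:

```javascript
const https = require('https');

// A shared keep-alive agent returns idle sockets to a pool and reuses them,
// so repeated requests to the same host don't each consume a new SNAT port.
const agent = new https.Agent({ keepAlive: true, maxSockets: 10 });

https.get('https://example.com/api/health', { agent }, (res) => {
  res.resume(); // drain the response so the socket can be reused
  console.log('status:', res.statusCode);
});
```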
-### Modify the application to use less aggressive retry logic
+### Modify the application to use less aggressive retry logic
-When SNAT ports are exhausted or application failures occur, aggressive or brute force retries without delay and back-off logic cause exhaustion to occur or persist. You can reduce demand for SNAT ports by using a less aggressive retry logic.
+When SNAT ports are exhausted or application failures occur, aggressive or brute force retries without delay and back-off logic cause exhaustion to occur or persist. You can reduce demand for SNAT ports by using a less aggressive retry logic.
-Depending on the configured idle timeout, if retries are too aggressive, connections may not have enough time to close and release SNAT ports for reuse.
+Depending on the configured idle timeout, if retries are too aggressive, connections may not have enough time to close and release SNAT ports for reuse.
-For extra guidance and examples, see [Retry pattern](../app-service/troubleshoot-intermittent-outbound-connection-errors.md).
+For extra guidance and examples, see [Retry pattern](../app-service/troubleshoot-intermittent-outbound-connection-errors.md).
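A minimal sketch of less aggressive retry behavior with exponential backoff (the helper name, attempt count, and delays are illustrative assumptions, not from the linked guidance):

```javascript
// Retry an asynchronous operation with exponential backoff instead of
// retrying immediately, which gives failed connections time to close and
// release their SNAT ports before the next attempt.
async function withRetry(operation, maxAttempts = 5, baseDelayMs = 500) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await operation();
    } catch (err) {
      if (attempt === maxAttempts) throw err;
      const delayMs = baseDelayMs * 2 ** (attempt - 1);
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}
```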
-### Use keepalives to reset the outbound idle timeout
+### Use keepalives to reset the outbound idle timeout
For more information about keepalives, see [TCP idle timeout](/azure/nat-gateway/nat-gateway-resource#tcp-idle-timeout).
-### Use Private link to reduce SNAT port usage for connecting to other Azure services
+### Use Private link to reduce SNAT port usage for connecting to other Azure services
-When possible, Private Link should be used to connect directly from your virtual networks to Azure platform services in order to [reduce the demand](./troubleshoot-nat.md) on SNAT ports. Reducing the demand on SNAT ports can help reduce the risk of SNAT port exhaustion.
+When possible, Private Link should be used to connect directly from your virtual networks to Azure platform services in order to [reduce the demand](./troubleshoot-nat.md) on SNAT ports. Reducing the demand on SNAT ports can help reduce the risk of SNAT port exhaustion.
-To create a Private Link, see the following Quickstart guides to get started:
+To create a Private Link, see the following Quickstart guides to get started:
-* [Create a Private Endpoint](../private-link/create-private-endpoint-portal.md?tabs=dynamic-ip)
+* [Create a Private Endpoint](../private-link/create-private-endpoint-portal.md?tabs=dynamic-ip)
* [Create a Private Link](../private-link/create-private-link-service-portal.md)
-## Next steps
+## Next steps
We always strive to enhance our customers' experience. If you encounter NAT gateway issues that aren't addressed or resolved by this article, provide feedback through GitHub at the bottom of this page.
-To learn more about NAT gateway, see:
+To learn more about NAT gateway, see:
-* [Azure NAT Gateway](./nat-overview.md)
+* [Azure NAT Gateway](./nat-overview.md)
-* [NAT gateway resource](./nat-gateway-resource.md)
+* [NAT gateway resource](./nat-gateway-resource.md)
* [Metrics and alerts for NAT gateway resources](./nat-metrics.md)
network-watcher Network Watcher Analyze Nsg Flow Logs Graylog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-analyze-nsg-flow-logs-graylog.md
Last updated 05/03/2023 -+ # Manage and analyze network security group flow logs in Azure using Network Watcher and Graylog
The following instructions are used to install Logstash in Ubuntu. For instructi
interval => 5 } }
-
+ filter { split { field => "[records]" } split { field => "[records][properties][flows]"} split { field => "[records][properties][flows][flows]"} split { field => "[records][properties][flows][flows][flowTuples]" }
-
+ mutate { split => { "[records][resourceId]" => "/"} add_field =>{
Now that you have established a connection to the flow logs using Logstash and s
### Search through Graylog messages
-After allowing some time for your Graylog server to collect messages, you are able to search through the messages. To check the messages being sent to your Graylog server, from the **Inputs** configuration page click the "**Show received messages**" button of the GELF UDP input you created. You are directed to a screen that looks similar to the following picture:
+After allowing some time for your Graylog server to collect messages, you are able to search through the messages. To check the messages being sent to your Graylog server, from the **Inputs** configuration page click the "**Show received messages**" button of the GELF UDP input you created. You are directed to a screen that looks similar to the following picture:
![Screenshot shows the Graylog server that displays Search result, Histogram, and Messages.](./media/network-watcher-analyze-nsg-flow-logs-graylog/histogram.png)
network-watcher Network Watcher Nsg Grafana https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-nsg-grafana.md
Last updated 05/03/2023 -+ # Manage and analyze network security group flow logs using Network Watcher and Grafana [Network Security Group (NSG) flow logs](network-watcher-nsg-flow-logging-overview.md) provide information that can be used to understand ingress and egress IP traffic on network interfaces. These flow logs show outbound and inbound flows on a per NSG rule basis, the NIC the flow applies to, 5-tuple information about the flow (Source/Destination IP, Source/Destination Port, Protocol), and if the traffic was allowed or denied.
-You can have many NSGs in your network with flow logging enabled. This amount of logging data makes it cumbersome to parse and gain insights from your logs. This article provides a solution to centrally manage these NSG flow logs using Grafana, an open source graphing tool, ElasticSearch, a distributed search and analytics engine, and Logstash, which is an open source server-side data processing pipeline.
+You can have many NSGs in your network with flow logging enabled. This amount of logging data makes it cumbersome to parse and gain insights from your logs. This article provides a solution to centrally manage these NSG flow logs using Grafana, an open source graphing tool, ElasticSearch, a distributed search and analytics engine, and Logstash, which is an open source server-side data processing pipeline.
## Scenario
The following instructions are used to install Logstash in Ubuntu. For instructi
split => { "[records][resourceId]" => "/"} add_field => { "Subscription" => "%{[records][resourceId][2]}" "ResourceGroup" => "%{[records][resourceId][4]}"
- "NetworkSecurityGroup" => "%{[records][resourceId][8]}"
+ "NetworkSecurityGroup" => "%{[records][resourceId][8]}"
} convert => {"Subscription" => "string"} convert => {"ResourceGroup" => "string"}
The following instructions are used to install Logstash in Ubuntu. For instructi
convert => {"unixtimestamp" => "integer"} convert => {"srcPort" => "integer"} convert => {"destPort" => "integer"}
- add_field => { "message" => "%{Message}" }
+ add_field => { "message" => "%{Message}" }
}
-
+ date { match => ["unixtimestamp" , "UNIX"] }
The following instructions are used to install Logstash in Ubuntu. For instructi
``` The Logstash config file provided is composed of three parts: the input, filter, and output.
-The input section designates the input source of the logs that Logstash will process – in this case we are going to use an "azureblob" input plugin (installed in the next steps) that will allow us to access the NSG flow log JSON files stored in blob storage.
+The input section designates the input source of the logs that Logstash will process – in this case we are going to use an "azureblob" input plugin (installed in the next steps) that will allow us to access the NSG flow log JSON files stored in blob storage.
The filter section then flattens each flow log file so that each individual flow tuple and its associated properties becomes a separate Logstash event.
network-watcher Network Watcher Visualize Nsg Flow Logs Open Source Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-visualize-nsg-flow-logs-open-source-tools.md
Last updated 05/03/2023 -+ # Visualize Azure Network Watcher NSG flow logs using open source tools
The following instructions are used to install Logstash in Ubuntu. For instructi
} convert => {"unixtimestamp" => "integer"} convert => {"srcPort" => "integer"}
- convert => {"destPort" => "integer"}
+ convert => {"destPort" => "integer"}
} date{
The following instructions are used to install Logstash in Ubuntu. For instructi
hosts => "localhost" index => "nsg-flow-logs" }
- }
+ }
``` For further instructions on installing Logstash, see the [official documentation](https://www.elastic.co/guide/en/beats/libbeat/5.2/logstash-installation.html).
operator-nexus Howto Cluster Runtime Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-cluster-runtime-upgrade.md
az networkcloud cluster show --cluster-name "clusterName" --resource-group "reso
``` The output should be the target cluster's information and the cluster's detailed status and detail status message should be present.
+For more detailed insight into the upgrade progress, you can check the status of the individual BMMs in each rack. An example is provided in the reference section under [BareMetal Machine roles](./reference-near-edge-baremetal-machine-roles.md).
## Configure compute threshold parameters for runtime upgrade using cluster updateStrategy The following Azure CLI command is used to configure the compute threshold parameters for a runtime upgrade:
orbital Overview Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/overview-analytics.md
- Title: What is Azure Orbital Analytics?
-description: Azure Orbital Analytics are Azure capabilities that allow you to discover and distribute the most valuable insights from your spaceborne data.
---- Previously updated : 08/31/2022---
-# What is Azure Orbital Analytics?
-
-Azure Orbital Analytics are Azure capabilities using spaceborne data and AI that allow you to discover and distribute the most valuable insights from your spaceborne data to take action in less time.
-
-## What it provides
-
-Azure Orbital Analytics informs customers of the ability to downlink spaceborne data from Azure Orbital Ground Station (AOGS), first- or third-party archives, or customer-acquired data directly into Azure. Data is efficiently stored using Azure Data Platform components. From there, raw spaceborne sensor data can be converted into analysis-ready data using Azure Orbital Analytics processing pipeline reference architectures.
-
-## Integrations
-
-Derive insights on data by applying AI models, integrating applications, and more. Partner AI models and Microsoft tools extract the highest precision results. Finally, deliver data to destinations such as Microsoft Teams, Power Platform, or process it using open-source tools. Azure Orbital Analytics enables scenarios including land classification, asset monitoring, object detection, and more.
-
-## Partnerships
-
-Azure Orbital Analytics is the pathway between satellite operators and Microsoft customers. Partnerships with [Airbus](https://www.airbus.com/en), [Blackshark.ai](https://blackshark.ai/technology/), and [Orbital Insight](https://orbitalinsight.com/) enable information extraction and publishing to EsriΓÇÖs ArcGIS workflows.
-
-Orbital Analytics for Azure Synapse applies artificial intelligence over satellite imagery at scale using Azure resources.
-
-## Next steps
--- [Geospatial reference architecture](./geospatial-reference-architecture.md)-- [Spaceborne data analysis with Azure Synapse Analytics](/azure/architecture/industries/aerospace/geospatial-processing-analytics)
postgresql Azure Pipelines Deploy Database Task https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/azure-pipelines-deploy-database-task.md
Title: Azure Pipelines task Azure Database for PostgreSQL Flexible Server
-description: Enable Azure Database for PostgreSQL Flexible Server CLI task for using with Azure Pipelines
+ Title: Azure Pipelines task
+description: Enable Azure Database for PostgreSQL - Flexible Server CLI task for using with Azure Pipelines.
Last updated 11/30/2021
-# Azure Pipelines task for Azure Database for PostgreSQL Flexible Server
+# Azure Pipelines task - Azure Database for PostgreSQL - Flexible Server
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-You can automatically deploy your database updates to Azure Database for PostgreSQL Flexible Server after every successful build with **Azure Pipelines**. You can use Azure CLI task to update the database either with a SQL file or an inline SQL script against the database. This task can be run on cross-platform agents running on Linux, macOS, or Windows operating systems.
+You can automatically deploy your database updates to Azure Database for PostgreSQL flexible server after every successful build with **Azure Pipelines**. You can use Azure CLI task to update the database either with a SQL file or an inline SQL script against the database. This task can be run on cross-platform agents running on Linux, macOS, or Windows operating systems.
## Prerequisites - An Azure account. If you don't have one, [get a free trial](https://azure.microsoft.com/free/). - [Azure Resource Manager service connection](/azure/devops/pipelines/library/connect-to-azure) to your Azure account-- Microsoft hosted agents have Azure CLI pre-installed. However if you are using private agents, [install Azure CLI](/cli/azure/install-azure-cli) on the computer(s) that run the build and release agent. If an agent is already running on the machine on which the Azure CLI is installed, restart the agent to ensure all the relevant stage variables are updated.-- Create an Azure Database for PostgreSQL Flexible Server using [Azure portal](./quickstart-create-server-portal.md) or [Azure CLI](./quickstart-create-server-cli.md)
+- Microsoft hosted agents have Azure CLI preinstalled. However if you are using private agents, [install Azure CLI](/cli/azure/install-azure-cli) on the computer(s) that run the build and release agent. If an agent is already running on the machine on which the Azure CLI is installed, restart the agent to ensure all the relevant stage variables are updated.
+- Create an Azure Database for PostgreSQL flexible server instance using the [Azure portal](./quickstart-create-server-portal.md) or [Azure CLI](./quickstart-create-server-cli.md)
## Use SQL file
-The following example illustrates how to pass database arguments and run ```execute``` command
+The following example illustrates how to pass database arguments and run the `execute` command.
```yaml - task: AzureCLI@2
The following example illustrates how to pass database arguments and run ```exec
## Use inline SQL script
-The following example illustrates how to run an inline SQL script using ```execute``` command.
+The following example illustrates how to run an inline SQL script using the `execute` command.
```yaml - task: AzureCLI@2
You can see the full list of all the task inputs when using Azure CLI task with
| Parameter | Description | | :- | :-| | azureSubscription| (Required) Provide the Azure Resource Manager subscription for the deployment. This parameter is shown only when the selected task version is 0.* as Azure CLI task v1.0 supports only Azure Resource Manager subscriptions. |
-|scriptType| (Required) Provide the type of script. Supported scripts are PowerShell, PowerShell Core, Bat, Shell, and script. When running on a **Linux agent**, select one of the following: ```bash``` or ```pscore``` . When running **Windows agent**, select one of the following: ```batch```,```ps``` and ```pscore```. |
-|scriptLocation| (Required) Provide the path to script, for example real file path or use ```Inline script``` when providing the scripts inline. The default value is ```scriptPath```. |
+|scriptType| (Required) Provide the type of script. Supported scripts are PowerShell, PowerShell Core, Bat, Shell, and script. When running on a **Linux agent**, select one of the following: `bash` or `pscore` . When running **Windows agent**, select one of the following: `batch`,`ps` and `pscore`. |
+|scriptLocation| (Required) Provide the path to script, for example real file path or use `Inline script` when providing the scripts inline. The default value is `scriptPath`. |
|scriptPath| (Required) Fully qualified path of the script(.ps1 or .bat or .cmd when using Windows-based agent else <code>.ps1 </code> or <code>.sh </code> when using linux-based agent) or a path relative to the default working directory. |
-|inlineScript|(Required) You can write your scripts inline here. When using Windows agent, use PowerShell or PowerShell Core or batch scripting whereas use PowerShell Core or shell scripting when using Linux-based agents. For batch files use the prefix \"call\" before every Azure command. You can also pass predefined and custom variables to this script using arguments. <br/>Example for PowerShell/PowerShellCore/shell:``` az --version az account show``` <br/>Example for batch: ``` call az --version call az account show```. |
-| arguments| (Optional) Provide all the arguments passed to the script. For examples ```-SERVERNAME mydemoserver```. |
+|inlineScript|(Required) You can write your scripts inline here. When using Windows agent, use PowerShell or PowerShell Core or batch scripting whereas use PowerShell Core or shell scripting when using Linux-based agents. For batch files use the prefix \"call\" before every Azure command. You can also pass predefined and custom variables to this script using arguments. <br/>Example for PowerShell/PowerShellCore/shell:` az --version az account show` <br/>Example for batch: ` call az --version call az account show`. |
+| arguments| (Optional) Provide all the arguments passed to the script. For example, `-SERVERNAME mydemoserver`. |
|powerShellErrorActionPreference| (Optional) Prepends the line <b>$ErrorActionPreference = 'VALUE'</b> at the top of your PowerShell/PowerShell Core script. The default value is stop. Supported values are stop, continue, and silentlyContinue. | |addSpnToEnvironment|(Optional) Adds service principal ID and key of the Azure endpoint you chose to the script's execution environment. You can use these variables: <b>$env:servicePrincipalId, $env:servicePrincipalKey and $env:tenantId</b> in your script. This is honored only when the Azure endpoint has Service Principal authentication scheme. The default value is false.| |useGlobalConfig|(Optional) If this is false, this task will use its own separate <a href= "/cli/azure/azure-cli-configuration#cli-configuration-file">Azure CLI configuration directory</a>. This can be used to run Azure CLI tasks in <b>parallel</b> releases" <br/>Default value: false</td>
You can see the full list of all the task inputs when using Azure CLI task with
If you're having issues with the CLI task, see [how to troubleshoot Build and Release](/azure/devops/pipelines/troubleshooting/troubleshooting). ## Next steps
-Here are some related tasks that can be used to deploy with Azure Piplelines.
+Here are some related tasks that can be used to deploy with Azure Pipelines.
- [Azure Resource Group Deployment](/azure/devops/pipelines/tasks/deploy/azure-resource-group-deployment) - [Azure Web App Deployment](/azure/devops/pipelines/tasks/deploy/azure-rm-web-app-deployment)
postgresql Concepts Audit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-audit.md
Title: Audit logging - Azure Database for PostgreSQL - Flexible Server
+ Title: Audit logging
description: Concepts for pgAudit audit logging in Azure Database for PostgreSQL - Flexible Server.
Last updated 11/30/2021
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-Audit logging of database activities in Azure Database for PostgreSQL - Flexible Server is available through the PostgreSQL Audit extension: [pgAudit](https://www.pgaudit.org/). pgAudit provides detailed session and/or object audit logging.
+Audit logging of database activities in Azure Database for PostgreSQL flexible server is available through the PostgreSQL Audit extension: [pgAudit](https://www.pgaudit.org/). pgAudit provides detailed session and/or object audit logging.
If you want Azure resource-level logs for operations like compute and storage scaling, see the [Azure Activity Log](../../azure-monitor/essentials/platform-logs-overview.md). ## Usage considerations
-By default, pgAudit log statements are emitted along with your regular log statements by using Postgres's standard logging facility. In Azure Database for PostgreSQL - Flexible Server, you can configure all logs to be sent to Azure Monitor Log store for later analytics in Log Analytics. If you enable Azure Monitor resource logging, your logs will be automatically sent (in JSON format) to Azure Storage, Event Hubs, and/or Azure Monitor logs, depending on your choice.
+By default, pgAudit log statements are emitted along with your regular log statements by using Postgres's standard logging facility. In Azure Database for PostgreSQL flexible server, you can configure all logs to be sent to Azure Monitor Log store for later analytics in Log Analytics. If you enable Azure Monitor resource logging, your logs will be automatically sent (in JSON format) to Azure Storage, Event Hubs, and/or Azure Monitor logs, depending on your choice.
To learn how to set up logging to Azure Storage, Event Hubs, or Azure Monitor logs, visit the resource logs section of the [server logs article](concepts-logging.md). ## Installing pgAudit
-Before you can install pgAudit extension in Azure Database for PostgreSQL - Flexible Server, you will need to allow-list pgAudit extension for use.
+Before you can install the pgAudit extension in Azure Database for PostgreSQL flexible server, you need to allow-list the pgAudit extension for use.
Using the [Azure portal](https://portal.azure.com):
- 1. Select your Azure Database for PostgreSQL - Flexible Server.
+ 1. Select your Azure Database for PostgreSQL flexible server instance.
2. On the sidebar, select **Server Parameters**. 3. Search for the `azure.extensions` parameter. 4. Select pgAudit as the extension you wish to allow-list.
- :::image type="content" source="./media/concepts-extensions/allow-list.png" alt-text=" Screenshot showing Azure Database for PostgreSQL - allow-listing extensions for installation ":::
+ :::image type="content" source="./media/concepts-extensions/allow-list.png" alt-text="Screenshot showing Azure Database for PostgreSQL - allow-listing extensions for installation.":::
Using [Azure CLI](/cli/azure/):
- You can allow-list extensions via CLI parameter set [command](/cli/azure/postgres/flexible-server/parameter?view=azure-cli-latest&preserve-view=true).
+ You can allow-list extensions via CLI parameter set [command](/cli/azure/postgres/flexible-server/parameter).
```bash
az postgres flexible-server parameter set --resource-group <your resource group> --server-name <your server name> --subscription <your subscription id> --name azure.extensions --value pgAudit
```
-To install pgAudit, you need to include it in the server's shared preload libraries. A change to Postgres's `shared_preload_libraries` parameter requires a server restart to take effect. You can change parameters using the [Azure portal](howto-configure-server-parameters-using-portal.md), [Azure CLI](howto-configure-server-parameters-using-cli.md), or [REST API](/rest/api/postgresql/singleserver/configurations/createorupdate).
+To install pgAudit, you need to include it in the server's shared preload libraries. A change to Postgres's `shared_preload_libraries` parameter requires a server restart to take effect. You can change parameters using the [Azure portal](how-to-configure-server-parameters-using-portal.md), [Azure CLI](how-to-configure-server-parameters-using-cli.md), or [REST API](/rest/api/postgresql/singleserver/configurations/createorupdate).
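If you take the CLI path mentioned above, a minimal sketch looks like the following (resource group and server names are placeholders; the restart is what makes the preload change take effect):

```bash
# Add pgaudit to shared_preload_libraries (placeholders in angle brackets)
az postgres flexible-server parameter set \
  --resource-group <your resource group> \
  --server-name <your server name> \
  --name shared_preload_libraries \
  --value pgaudit

# The change to shared_preload_libraries only takes effect after a restart
az postgres flexible-server restart \
  --resource-group <your resource group> \
  --name <your server name>
```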
Using the [Azure portal](https://portal.azure.com):
- 1. Select your Azure Database for PostgreSQL - Flexible Server.
+ 1. Select your Azure Database for PostgreSQL flexible server instance.
2. On the sidebar, select **Server Parameters**. 3. Search for the `shared_preload_libraries` parameter. 4. Select **pgaudit**.
- :::image type="content" source="./media/concepts-audit/shared-preload-libraries.png" alt-text=" Screenshot showing Azure Database for PostgreSQL - enabling shared_preload_libraries for pgaudit ":::
+ :::image type="content" source="./media/concepts-audit/shared-preload-libraries.png" alt-text="Screenshot showing Azure Database for PostgreSQL flexible server enabling shared_preload_libraries for pgaudit.":::
 5. You can check that **pgaudit** is loaded in shared_preload_libraries by executing the following query in psql:

```SQL
show shared_preload_libraries;
```
- You should see **pgaudit** in the query result that will return shared_preload_libraries
+ You should see **pgaudit** in the query result that will return shared_preload_libraries.
- 6. Connect to your server using a client (like psql) and enable the pgAudit extension
+ 6. Connect to your server using a client (like psql) and enable the pgAudit extension.
```SQL
CREATE EXTENSION pgaudit;
```
The [pgAudit documentation](https://github.com/pgaudit/pgaudit/blob/master/READM
> `pgaudit.log_level` is only enabled when `pgaudit.log_client` is on. > [!NOTE]
-> In Azure Database for PostgreSQL - Flexible Server, `pgaudit.log` cannot be set using a `-` (minus) sign shortcut as described in the pgAudit documentation. All required statement classes (READ, WRITE, etc.) should be individually specified.
+> In Azure Database for PostgreSQL flexible server `pgaudit.log` can't be set using a `-` (minus) sign shortcut as described in the pgAudit documentation. All required statement classes (READ, WRITE, etc.) should be individually specified.
> [!NOTE] >If you set the log_statement parameter to DDL or ALL and run a `CREATE ROLE/USER ... WITH PASSWORD ... ;` or `ALTER ROLE/USER ... WITH PASSWORD ... ;` command, PostgreSQL creates an entry in the PostgreSQL logs where the password is logged in clear text, which may cause a potential security risk. This is expected behavior per PostgreSQL engine design. You can, however, use the pgAudit extension and set the `pgaudit.log='DDL'` parameter on the server parameters page, which doesn't record any `CREATE/ALTER ROLE` statement in the Postgres log, unlike the Postgres `log_statement='DDL'` setting. If you do need to log these statements, you can add `pgaudit.log='ROLE'` in addition, which, while logging `CREATE/ALTER ROLE`, redacts the password from the logs.
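As a companion to the note above, `pgaudit.log` is an ordinary server parameter once the extension is enabled, so it can also be set from the Azure CLI; a minimal sketch (resource group and server names are placeholders, and the DDL,ROLE combination is just one example):

```bash
# Audit DDL statements through pgAudit and log role changes with passwords redacted
az postgres flexible-server parameter set \
  --resource-group <your resource group> \
  --server-name <your server name> \
  --name pgaudit.log \
  --value "DDL,ROLE"
```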
AzureDiagnostics
## Next steps-- [Learn about logging in Azure Database for PostgreSQL - Flexible Server](concepts-logging.md)-- [Learn how to setup logging in Azure Database for PostgreSQL - Flexible Server and how to access logs](howto-configure-and-access-logs.md)
+- [Learn about logging in Azure Database for PostgreSQL flexible server](concepts-logging.md)
+- [Learn how to setup logging in Azure Database for PostgreSQL flexible server and how to access logs](how-to-configure-and-access-logs.md)
postgresql Concepts Azure Ad Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-azure-ad-authentication.md
Title: Active Directory authentication - Azure Database for PostgreSQL - Flexible Server
-description: Learn about the concepts of Microsoft Entra ID for authentication with Azure Database for PostgreSQL - Flexible Server
+ Title: Active Directory authentication
+description: Learn about the concepts of Microsoft Entra ID for authentication with Azure Database for PostgreSQL - Flexible Server.
-# Microsoft Entra authentication with PostgreSQL Flexible Server
+# Microsoft Entra authentication with Azure Database for PostgreSQL - Flexible Server
[!INCLUDE [applies-to-postgresql-Flexible-server](../includes/applies-to-postgresql-Flexible-server.md)]
-Microsoft Entra authentication is a mechanism of connecting to Azure Database for PostgreSQL using identities defined in Microsoft Entra ID.
+Microsoft Entra authentication is a mechanism of connecting to Azure Database for PostgreSQL flexible server using identities defined in Microsoft Entra ID.
With Microsoft Entra authentication, you can manage database user identities and other Microsoft services in a central location, which simplifies permission management. Benefits of using Microsoft Entra ID include:
Benefits of using Microsoft Entra ID include:
- Authentication of users across Azure Services in a uniform way - Management of password policies and password rotation in a single place - Multiple forms of authentication supported by Microsoft Entra ID, which can eliminate the need to store passwords-- Customers can manage database permissions using external (Microsoft Entra ID) groups.
+- Customers can manage database permissions using external (Microsoft Entra ID) groups
- Microsoft Entra authentication uses PostgreSQL database roles to authenticate identities at the database level-- Support of token-based authentication for applications connecting to Azure Database for PostgreSQL
+- Support of token-based authentication for applications connecting to Azure Database for PostgreSQL flexible server
<a name='azure-active-directory-authentication-single-server-vs-flexible-server'></a>
-## Microsoft Entra authentication (Single Server VS Flexible Server)
+## Microsoft Entra authentication (Azure Database for PostgreSQL single Server vs Azure Database for PostgreSQL flexible server)
-Microsoft Entra authentication for Flexible Server is built using our experience and feedback we've collected from Azure Database for PostgreSQL Single Server, and supports the following features and improvements over single server:
+Microsoft Entra authentication for Azure Database for PostgreSQL flexible server is built using our experience and feedback collected from Azure Database for PostgreSQL single server, and supports the following features and improvements over Azure Database for PostgreSQL single server:
-The following table provides a list of high-level Microsoft Entra features and capabilities comparisons between Single Server and Flexible Server
+The following table provides a list of high-level Microsoft Entra features and capabilities comparisons between Azure Database for PostgreSQL single server and Azure Database for PostgreSQL flexible server.
-| **Feature / Capability** | **Single Server** | **Flexible Server** |
+| **Feature / Capability** | **Azure Database for PostgreSQL single server** | **Azure Database for PostgreSQL flexible server** |
| | | | | Multiple Microsoft Entra Admins | No | Yes | | Managed Identities (System & User assigned) | Partial | Full |
The following table provides a list of high-level Microsoft Entra features and c
<a name='how-azure-ad-works-in-flexible-server'></a>
-## How Microsoft Entra ID Works In Flexible Server
+## How Microsoft Entra ID Works in Azure Database for PostgreSQL flexible server
-The following high-level diagram summarizes how authentication works using Microsoft Entra authentication with Azure Database for PostgreSQL. The arrows indicate communication pathways.
+The following high-level diagram summarizes how authentication works using Microsoft Entra authentication with Azure Database for PostgreSQL flexible server. The arrows indicate communication pathways.
![authentication flow][1]
- Use these steps to configure Microsoft Entra ID with Azure Database for PostgreSQL Flexible Server [Configure and sign in with Microsoft Entra ID for Azure Database for PostgreSQL Flexible Server](how-to-configure-sign-in-azure-ad-authentication.md).
+ Use these steps to configure Microsoft Entra ID with Azure Database for PostgreSQL flexible server [Configure and sign in with Microsoft Entra ID for Azure Database for PostgreSQL - Flexible Server](how-to-configure-sign-in-azure-ad-authentication.md).
## Manage PostgreSQL Access For AD Principals
-When Microsoft Entra authentication is enabled and Microsoft Entra principal is added as a Microsoft Entra administrator the account gets the same privileges as the original PostgreSQL administrator. Only Microsoft Entra administrator can manage other Microsoft Entra ID enabled roles on the server using Azure portal or Database API. The Microsoft Entra administrator sign-in can be a Microsoft Entra user, Microsoft Entra group, Service Principal or Managed Identity. Using a group account as an administrator enhances manageability by allowing you to centrally add and remove group members in Microsoft Entra ID without changing the users or permissions in the PostgreSQL server. Multiple Microsoft Entra administrators can be configured at any time and you can optionally disable password authentication to an Azure Database for PostgreSQL Flexible Server for better auditing and compliance needs.
+When Microsoft Entra authentication is enabled and Microsoft Entra principal is added as a Microsoft Entra administrator the account gets the same privileges as the original PostgreSQL administrator. Only Microsoft Entra administrator can manage other Microsoft Entra ID enabled roles on the server using Azure portal or Database API. The Microsoft Entra administrator sign-in can be a Microsoft Entra user, Microsoft Entra group, Service Principal or Managed Identity. Using a group account as an administrator enhances manageability by allowing you to centrally add and remove group members in Microsoft Entra ID without changing the users or permissions in the Azure Database for PostgreSQL flexible server instance. Multiple Microsoft Entra administrators can be configured at any time and you can optionally disable password authentication to an Azure Database for PostgreSQL flexible server instance for better auditing and compliance needs.
![admin structure][2] > [!NOTE]
- > Service Principal or Managed Identity can now act as fully functional Microsoft Entra Administrator in Flexible Server and this was a limitation in our Single Server.
+ > Service Principal or Managed Identity can now act as fully functional Microsoft Entra Administrator in Azure Database for PostgreSQL flexible server and this was a limitation in Azure Database for PostgreSQL single server.
Microsoft Entra administrators created via the portal, API, or SQL have the same permissions as the regular admin user created during server provisioning. Additionally, database permissions for non-admin Microsoft Entra ID enabled roles are managed similarly to regular roles.
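If you manage administrators from the command line, recent Azure CLI versions expose an `ad-admin` command group for flexible server; a hedged sketch (the display name and object ID are placeholders, and the exact flags may vary by CLI version):

```bash
# Add a Microsoft Entra user, group, service principal, or managed identity as administrator
az postgres flexible-server ad-admin create \
  --resource-group <your resource group> \
  --server-name <your server name> \
  --display-name <user, group, or service principal name> \
  --object-id <Microsoft Entra object ID>
```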
Microsoft Entra authentication supports the following methods of connecting to a
Once you've authenticated against the Active Directory, you then retrieve a token. This token is your password for logging in. > [!NOTE]
-> Use these steps to configure Microsoft Entra ID with Azure Database for PostgreSQL Flexible Server [Configure and sign in with Microsoft Entra ID for Azure Database for PostgreSQL Flexible Server](how-to-configure-sign-in-azure-ad-authentication.md).
+> Use these steps to configure Microsoft Entra ID with Azure Database for PostgreSQL flexible server [Configure and sign in with Microsoft Entra ID for Azure Database for PostgreSQL - Flexible Server](how-to-configure-sign-in-azure-ad-authentication.md).
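The token retrieval described above can be sketched with the Azure CLI and psql; a minimal example (server, database, and user names are placeholders, and `--resource-type oss-rdbms` assumes a reasonably recent CLI version):

```bash
# Acquire an access token for Azure Database for PostgreSQL and use it as the password
export PGPASSWORD=$(az account get-access-token --resource-type oss-rdbms --query accessToken --output tsv)

# Connect with a Microsoft Entra identity; the token above is presented as the password
psql "host=<your server name>.postgres.database.azure.com port=5432 dbname=<your database> user=<your Microsoft Entra user or group> sslmode=require"
```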
## Other considerations -- Multiple Microsoft Entra principals (a user, group, service principal or managed identity) can be configured as Microsoft Entra Administrator for an Azure Database for PostgreSQL server at any time.-- Only a Microsoft Entra administrator for PostgreSQL can initially connect to the Azure Database for PostgreSQL using a Microsoft Entra account. The Active Directory administrator can configure subsequent Microsoft Entra database users.
+- Multiple Microsoft Entra principals (a user, group, service principal or managed identity) can be configured as Microsoft Entra Administrator for an Azure Database for PostgreSQL flexible server instance at any time.
+- Only a Microsoft Entra administrator for PostgreSQL can initially connect to the Azure Database for PostgreSQL flexible server instance using a Microsoft Entra account. The Active Directory administrator can configure subsequent Microsoft Entra database users.
- If a Microsoft Entra principal is deleted from Microsoft Entra ID, it still remains as PostgreSQL role, but it will no longer be able to acquire new access token. In this case, although the matching role still exists in the database it won't be able to authenticate to the server. Database administrators need to transfer ownership and drop roles manually. > [!NOTE]
-> Login with the deleted Microsoft Entra user can still be done till the token expires (up to 60 minutes from token issuing). If you also remove the user from Azure Database for PostgreSQL this access will be revoked immediately.
+> Login with the deleted Microsoft Entra user can still be done till the token expires (up to 60 minutes from token issuing). If you also remove the user from Azure Database for PostgreSQL flexible server this access is revoked immediately.
-- Azure Database for PostgreSQL Flexible Server matches access tokens to the database role using the user's unique Microsoft Entra user ID, as opposed to using the username. If a Microsoft Entra user is deleted and a new user is created with the same name, Azure Database for PostgreSQL Flexible Server considers that a different user. Therefore, if a user is deleted from Microsoft Entra ID and a new user is added with the same name the new user won't be able to connect with the existing role.
+- Azure Database for PostgreSQL flexible server matches access tokens to the database role using the user's unique Microsoft Entra user ID, as opposed to using the username. If a Microsoft Entra user is deleted and a new user is created with the same name, Azure Database for PostgreSQL flexible server considers that a different user. Therefore, if a user is deleted from Microsoft Entra ID and a new user is added with the same name the new user won't be able to connect with the existing role.
## Next steps -- To learn how to create and populate Microsoft Entra ID, and then configure Microsoft Entra ID with Azure Database for PostgreSQL, see [Configure and sign in with Microsoft Entra ID for Azure Database for PostgreSQL](how-to-configure-sign-in-azure-ad-authentication.md).
+- To learn how to create and populate Microsoft Entra ID, and then configure Microsoft Entra ID with Azure Database for PostgreSQL flexible server, see [Configure and sign in with Microsoft Entra ID for Azure Database for PostgreSQL - Flexible Server](how-to-configure-sign-in-azure-ad-authentication.md).
- To learn how to manage Microsoft Entra users for Flexible Server, see [Manage Microsoft Entra users - Azure Database for PostgreSQL - Flexible Server](how-to-manage-azure-ad-users.md). <!--Image references-->
postgresql Concepts Azure Advisor Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-azure-advisor-recommendations.md
Title: Azure Advisor for PostgreSQL - Flexible Server
-description: Learn about Azure Advisor recommendations for PostgreSQL - Flexible Server.
+ Title: Azure Advisor
+description: Learn about Azure Advisor recommendations for Azure Database for PostgreSQL - Flexible Server.
Last updated 11/16/2021
-# Azure Advisor for PostgreSQL - Flexible Server
+# Azure Advisor for Azure Database for PostgreSQL - Flexible Server
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-Learn about how Azure Advisor is applied to Azure Database for PostgreSQL and get answers to common questions.
+Learn about how Azure Advisor is applied to Azure Database for PostgreSQL flexible server and get answers to common questions.
## What is Azure Advisor for PostgreSQL?
-The Azure Advisor system uses telemetry to issue performance and reliability recommendations for your PostgreSQL database.
-Advisor recommendations are split among our PostgreSQL database offerings:
-* Azure Database for PostgreSQL - Single Server
-* Azure Database for PostgreSQL - Flexible Server
+The Azure Advisor system uses telemetry to issue performance and reliability recommendations for your Azure Database for PostgreSQL flexible server database.
+Advisor recommendations are split among our Azure Database for PostgreSQL flexible server database offerings:
+* Azure Database for PostgreSQL single server
+* Azure Database for PostgreSQL flexible server
Some recommendations are common to multiple product offerings, while other recommendations are based on product-specific optimizations. ## Where can I view my recommendations?
Recommendations are available from the **Overview** navigation sidebar in the Az
:::image type="content" source="../media/concepts-azure-advisor-recommendations/advisor-example.png" alt-text="Screenshot of the Azure portal showing an Azure Advisor recommendation."::: ## Recommendation types
-Azure Database for PostgreSQL prioritize the following types of recommendations:
-* **Performance**: To improve the speed of your PostgreSQL server. This includes CPU usage, memory pressure, connection pooling, disk utilization, and product-specific server parameters. For more information, see [Advisor Performance recommendations](../../advisor/advisor-performance-recommendations.md).
+Azure Database for PostgreSQL flexible server prioritizes the following types of recommendations:
+* **Performance**: To improve the speed of your Azure Database for PostgreSQL flexible server instance. This includes CPU usage, memory pressure, connection pooling, disk utilization, and product-specific server parameters. For more information, see [Advisor Performance recommendations](../../advisor/advisor-performance-recommendations.md).
* **Reliability**: To ensure and improve the continuity of your business-critical databases. This includes storage limits, and connection limits. For more information, see [Advisor Reliability recommendations](../../advisor/advisor-high-availability-recommendations.md). * **Cost**: To optimize and reduce your overall Azure spending. This includes server right-sizing recommendations. For more information, see [Advisor Cost recommendations](../../advisor/advisor-cost-recommendations.md). ## Understanding your recommendations
-* **Daily schedule**: For Azure PostgreSQL databases, we check server telemetry and issue recommendations on a twice a day schedule. If you make a change to your server configuration, existing recommendations will remain visible until we re-examine telemetry at either 7PM or 7AM according to PST.
-* **Performance history**: Some of our recommendations are based on performance history. These recommendations will only appear after a server has been operating with the same configuration for 7 days. This allows us to detect patterns of heavy usage (e.g. high CPU activity or high connection volume) over a sustained time period. If you provision a new server or change to a new vCore configuration, these recommendations will be paused temporarily. This prevents legacy telemetry from triggering recommendations on a newly reconfigured server. However, this also means that performance history-based recommendations may not be identified immediately.
+* **Daily schedule**: For Azure Database for PostgreSQL flexible server databases, we check server telemetry and issue recommendations on a twice a day schedule. If you make a change to your server configuration, existing recommendations will remain visible until we re-examine telemetry at either 7PM or 7AM according to PST.
+* **Performance history**: Some of our recommendations are based on performance history. These recommendations will only appear after a server has been operating with the same configuration for 7 days. This allows us to detect patterns of heavy usage (e.g. high CPU activity or high connection volume) over a sustained time period. If you provision a new server or change to a new vCore configuration, these recommendations are paused temporarily. This prevents legacy telemetry from triggering recommendations on a newly reconfigured server. However, this also means that performance history-based recommendations may not be identified immediately.
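Besides the portal, recommendations can be listed from the command line with the `az advisor` commands; a minimal sketch (the category filter and table output are just one way to scope the results):

```bash
# List Advisor performance recommendations for the current subscription
az advisor recommendation list --category Performance --output table
```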
## Next steps For more information, see [Azure Advisor Overview](../../advisor/advisor-overview.md).
postgresql Concepts Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-backup-restore.md
Title: Backup and restore in Azure Database for PostgreSQL - Flexible Server
+ Title: Backup and restore
description: Learn about the concepts of backup and restore with Azure Database for PostgreSQL - Flexible Server.
Last updated 12/23/2023
Backups form an essential part of any business continuity strategy. They help protect data from accidental corruption or deletion.
-Azure Database for PostgreSQL - Flexible Server automatically performs regular backups of your server. You can then do a point-in-time recovery (PITR) within a retention period that you specify. The overall time to restore and recovery typically depends on the size of data and the amount of recovery to be performed.
+Azure Database for PostgreSQL flexible server automatically performs regular backups of your server. You can then do a point-in-time recovery (PITR) within a retention period that you specify. The overall time to restore and recovery typically depends on the size of data and the amount of recovery to be performed.
## Backup overview
-Flexible Server takes snapshot backups of data files and stores them securely in zone-redundant storage or locally redundant storage, depending on the [region](overview.md#azure-regions). The server also backs up transaction logs when the write-ahead log (WAL) file is ready to be archived. You can use these backups to restore a server to any point in time within your configured backup retention period.
+Azure Database for PostgreSQL flexible server takes snapshot backups of data files and stores them securely in zone-redundant storage or locally redundant storage, depending on the [region](overview.md#azure-regions). The server also backs up transaction logs when the write-ahead log (WAL) file is ready to be archived. You can use these backups to restore a server to any point in time within your configured backup retention period.
The default backup retention period is 7 days, but you can extend the period to a maximum of 35 days. All backups are encrypted through AES 256-bit encryption for data stored at rest.
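The retention period can be adjusted without recreating the server; a minimal Azure CLI sketch (resource group and server names are placeholders, and the `--backup-retention` flag assumes a recent CLI version):

```bash
# Extend the backup retention period from the default 7 days to 14 days
az postgres flexible-server update \
  --resource-group <your resource group> \
  --name <your server name> \
  --backup-retention 14
```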
-These backup files can't be exported or used to create servers outside Azure Database for PostgreSQL - Flexible Server. For that purpose, you can use the PostgreSQL tools pg_dump and pg_restore/psql.
+These backup files can't be exported or used to create servers outside Azure Database for PostgreSQL flexible server. For that purpose, you can use the PostgreSQL tools pg_dump and pg_restore/psql.
## Backup frequency
-Backups on flexible servers are snapshot based. The first snapshot backup is scheduled immediately after a server is created. Snapshot backups are currently taken once daily. If none of the databases in the server receive any furhter modifications after the last snapshot backup is taken, snapshots backups are suspended until new modifications are made in any of the databases, point at which a new snapshot is immediately taken. **The first snapshot is a full backup and consecutive snapshots are differential backups.**
+Backups on Azure Database for PostgreSQL flexible server instances are snapshot based. The first snapshot backup is scheduled immediately after a server is created. Snapshot backups are currently taken once daily. If none of the databases in the server receive any further modifications after the last snapshot backup is taken, snapshot backups are suspended until new modifications are made in any of the databases, point at which a new snapshot is immediately taken. **The first snapshot is a full backup and consecutive snapshots are differential backups.**
Transaction log backups happen at varied frequencies, depending on the workload and when the WAL file is filled and ready to be archived. In general, the delay (recovery point objective, or RPO) can be up to 15 minutes. ## Backup redundancy options
-Flexible Server stores multiple copies of your backups to help protect your data from planned and unplanned events. These events can include transient hardware failures, network or power outages, and natural disasters. Backup redundancy helps ensure that your database meets its availability and durability targets, even if failures happen.
+Azure Database for PostgreSQL flexible server stores multiple copies of your backups to help protect your data from planned and unplanned events. These events can include transient hardware failures, network or power outages, and natural disasters. Backup redundancy helps ensure that your database meets its availability and durability targets, even if failures happen.
-Flexible Server offers three options:
+Azure Database for PostgreSQL flexible server offers three options:
- **Zone-redundant backup storage**: This option is automatically chosen for regions that support availability zones. When the backups are stored in zone-redundant backup storage, multiple copies are not only stored within the same availability zone, but also replicated to another availability zone within the same region.
All backups required to perform a PITR within the backup retention period are re
### Backup storage cost
-Flexible Server provides up to 100 percent of your provisioned server storage as backup storage at no extra cost. Any additional backup storage that you use is charged in gigabytes per month.
+Azure Database for PostgreSQL flexible server provides up to 100 percent of your provisioned server storage as backup storage at no extra cost. Any additional backup storage that you use is charged in gigabytes per month.
For example, if you have provisioned a server with 250 gibibytes (GiB) of storage, then you have 250 GiB of backup storage capacity at no additional charge. If the daily backup usage is 25 GiB, then you can have up to 10 days of free backup storage. Backup storage consumption that exceeds 250 GiB is charged as defined in the [pricing model](https://azure.microsoft.com/pricing/details/postgresql/flexible-server/).
You can use the [Backup Storage Used](../concepts-monitoring.md) metric in the
## Point-in-time recovery
-In Flexible Server, performing a PITR creates a new server in the same region as your source server, but you can choose the availability zone. It's created with the source server's configuration for the pricing tier, compute generation, number of virtual cores, storage size, backup retention period, and backup redundancy option. Also, tags and settings such as virtual networks and firewall settings are inherited from the source server.
+In Azure Database for PostgreSQL flexible server, performing a PITR creates a new server in the same region as your source server, but you can choose the availability zone. It's created with the source server's configuration for the pricing tier, compute generation, number of virtual cores, storage size, backup retention period, and backup redundancy option. Also, tags and settings such as virtual networks and firewall settings are inherited from the source server.
The physical database files are first restored from the snapshot backups to the server's data location. The appropriate backup that was taken earlier than the desired point in time is automatically chosen and restored. A recovery process then starts by using WAL files to bring the database to a consistent state.
For example, assume that the backups are performed at 11:00 PM every night. If t
To restore your database server, see [these steps](./how-to-restore-server-portal.md). > [!IMPORTANT]
-> A restore operation in Flexible Server always creates a new database server with the name that you provide. It doesn't overwrite the existing database server.
+> A restore operation in Azure Database for PostgreSQL flexible server always creates a new database server with the name that you provide. It doesn't overwrite the existing database server.
PITR is useful in scenarios like these:
PITR is useful in scenarios like these:
- An application accidentally overwrites good data with bad data because of an application defect. - You want to clone your server for test, development, or for data verification.
-With continuous backup of transaction logs, you'll be able to restore to the last transaction. You can choose between the following restore options:
+With continuous backup of transaction logs, you can restore to the last transaction. You can choose between the following restore options:
-- **Latest restore point (now)**: This is the default option. It allows you to restore the server to the latest point in time.
+- **Latest restore point (now)**: This is the default option, which allows you to restore the server to the latest point in time.
-- **Custom restore point**: This option allows you to choose any point in time within the retention period defined for this flexible server. By default, the latest time in UTC is automatically selected. Automatic selection is useful if you want to restore to the last committed transaction for test purposes. You can optionally choose other days and times.
+- **Custom restore point**: This option allows you to choose any point in time within the retention period defined for this Azure Database for PostgreSQL flexible server instance. By default, the latest time in UTC is automatically selected. Automatic selection is useful if you want to restore to the last committed transaction for test purposes. You can optionally choose other days and times.
-- **Fast restore point**: This option allows users to restore the server in the fastest time possible within the retention period defined for their flexible server. Fastest restore is possible by directly choosing the timestamp from the list of backups. This restore operation provisions a server and simply restores the full snapshot backup and doesn't require any recovery of logs, which makes it fast. We recommend you select a backup timestamp, which is greater than the earliest restore point in time for a successful restore operation.
+- **Fast restore point**: This option allows users to restore the server in the fastest time possible within the retention period defined for their Azure Database for PostgreSQL flexible server instance. Fastest restore is possible by directly choosing the timestamp from the list of backups. This restore operation provisions a server and simply restores the full snapshot backup and doesn't require any recovery of logs, which makes it fast. We recommend you select a backup timestamp, which is greater than the earliest restore point in time for a successful restore operation.
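As an illustration of the custom restore point option, a point-in-time restore can be started from the Azure CLI; a minimal sketch (server names, resource group, and timestamp are placeholders, and the operation always creates a new server):

```bash
# Restore the source server to a new server at a specific point in time (UTC)
az postgres flexible-server restore \
  --resource-group <your resource group> \
  --name <new server name> \
  --source-server <source server name> \
  --restore-time "2024-01-10T13:10:00+00:00"
```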
The time required to recover using the latest and custom restore point options varies based on factors such as the volume of transaction logs to process since the last backup and the total number of databases being recovered simultaneously in the same region. The overall recovery time usually ranges from a few minutes up to a few hours. If you configure your server within a virtual network, you can restore to the same virtual network or to a different virtual network. However, you can't restore to public access. Similarly, if you configured your server with public access, you can't restore to private virtual network access. > [!IMPORTANT]
-> Deleted servers can be restored. If you delete the server, you can follow our guidance [Restore a dropped Azure Database for PostgreSQL Flexible server](how-to-restore-dropped-server.md) to recover. Use Azure resource lock to help prevent accidental deletion of your server.
+> Deleted servers can be restored. If you delete the server, you can follow our guidance [Restore a dropped Azure Database for PostgreSQL - Flexible Server](how-to-restore-dropped-server.md) to recover. Use Azure resource lock to help prevent accidental deletion of your server.
## Geo-redundant backup and restore
After you restore the database, you can perform the following tasks to get your
## Long-term retention (preview)
-Azure Backup and Azure PostgreSQL Services have built an enterprise-class long-term backup solution for Azure Database for PostgreSQL Flexible servers that retain backups for up to 10 years. You can use long-term retention independently or in addition to the automated backup solution offered by Azure PostgreSQL, which offer retention of up to 35 days. Automated backups are physical backups suited for operational recoveries, especially when you want to restore from the latest backups. Long-term backups help you with your compliance needs, are more granular, and are taken as logical backups using native pg_dump. In addition to long-term retention, the solution offers the following capabilities:
-
+Azure Backup and Azure Database for PostgreSQL flexible server services have built an enterprise-class long-term backup solution for Azure Database for PostgreSQL flexible server instances that retains backups for up to 10 years. You can use long-term retention independently or in addition to the automated backup solution offered by Azure Database for PostgreSQL flexible server, which offers retention of up to 35 days. Automated backups are physical backups suited for operational recoveries, especially when you want to restore from the latest backups. Long-term backups help you with your compliance needs, are more granular, and are taken as logical backups using native pg_dump. In addition to long-term retention, the solution offers the following capabilities:
- Customer-controlled scheduled and on-demand backups at the individual database level. - Central monitoring of all operations and jobs.
Azure Backup and Azure PostgreSQL Services have built an enterprise-class long-t
- Using pg_dump allows greater flexibility in restoring data across different database versions. - Azure backup vaults support immutability and soft delete (preview) features, protecting your data.
-#### Limitations and Considerations
+#### Limitations and considerations
- In preview, LTR restore is currently available as RestoreasFiles to storage accounts. RestoreasServer capability will be added in the future. - In preview, you can perform LTR backups for all databases, single db backup support will be added in the future.
Azure Backup and Azure PostgreSQL Services have built an enterprise-class long-t
* **How does Azure handle backup of my server?**
- By default, Azure Database for PostgreSQL enables automated backups of your entire server (encompassing all databases created) with a default retention period of 7 days. The automated backups include a daily incremental snapshot of the database. The log (WAL) files are archived to Azure Blob Storage continuously.
+ By default, Azure Database for PostgreSQL flexible server enables automated backups of your entire server (encompassing all databases created) with a default retention period of 7 days. The automated backups include a daily incremental snapshot of the database. The log (WAL) files are archived to Azure Blob Storage continuously.
* **Can I configure automated backups to retain data for the long term?**
- No. Currently, Flexible Server supports a maximum of 35 days of retention. You can use manual backups for a long-term retention requirement.
+ No. Currently, Azure Database for PostgreSQL flexible server supports a maximum of 35 days of retention. You can use manual backups for a long-term retention requirement.
-* **How do I manually back up my PostgreSQL servers?**
+* **How do I manually back up my Azure Database for PostgreSQL flexible server instances?**
- You can manually take a backup by using the PostgreSQL tool [pg_dump](https://www.postgresql.org/docs/current/app-pgdump.html). For examples, see [Migrate your PostgreSQL database by using dump and restore](../howto-migrate-using-dump-and-restore.md).
+ You can manually take a backup by using the PostgreSQL tool [pg_dump](https://www.postgresql.org/docs/current/app-pgdump.html). For examples, see [Migrate your Azure Database for PostgreSQL flexible server database by using dump and restore](../howto-migrate-using-dump-and-restore.md).
- If you want to back up Azure Database for PostgreSQL to Blob Storage, see [Back up Azure Database for PostgreSQL to Blob Storage](https://techcommunity.microsoft.com/t5/azure-database-for-postgresql/backup-azure-database-for-postgresql-to-a-blob-storage/ba-p/803343) on our tech community blog.
+ If you want to back up Azure Database for PostgreSQL flexible server to Blob Storage, see [Back up Azure Database for PostgreSQL to Blob Storage](https://techcommunity.microsoft.com/t5/azure-database-for-postgresql/backup-azure-database-for-postgresql-to-a-blob-storage/ba-p/803343) on our tech community blog.
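    For context on what a manual backup looks like, here's a rough pg_dump/pg_restore sketch (host names, users, and database names are placeholders; `-Fc` writes the custom format that pg_restore expects):

```bash
# Dump a single database in custom format
pg_dump -Fc -v \
  --host=<your server name>.postgres.database.azure.com \
  --username=<your admin user> \
  --dbname=<your database> \
  --file=<your database>.dump

# Restore the dump into a database on another server
pg_restore -v --no-owner \
  --host=<target server name>.postgres.database.azure.com \
  --username=<target admin user> \
  --dbname=<target database> \
  <your database>.dump
```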
* **What are the backup windows for my server? Can I customize them?**
Azure Backup and Azure PostgreSQL Services have built an enterprise-class long-t
* **Are my backups encrypted?**
- Yes. All Azure Database for PostgreSQL data, backups, and temporary files that are created during query execution are encrypted through AES 256-bit encryption. Storage encryption is always on and can't be disabled.
+ Yes. All Azure Database for PostgreSQL flexible server data, backups, and temporary files that are created during query execution are encrypted through AES 256-bit encryption. Storage encryption is always on and can't be disabled.
* **Can I restore a single database or a few databases in a server?**
Azure Backup and Azure PostgreSQL Services have built an enterprise-class long-t
* **Where are my automated backups stored, and how do I manage their retention?**
- Azure Database for PostgreSQL automatically creates server backups and stores them in:
+ Azure Database for PostgreSQL flexible server automatically creates server backups and stores them in:
- Zone-redundant storage, in regions where multiple zones are supported. - Locally redundant storage, in regions that don't support multiple zones yet.
Azure Backup and Azure PostgreSQL Services have built an enterprise-class long-t
* **How are backups performed in a HA-enabled servers?**
- Data volumes in Flexible Server are backed up through managed disk incremental snapshots from the primary server. The WAL backup is performed from either the primary server or the standby server.
+ Data volumes in Azure Database for PostgreSQL flexible server are backed up through managed disk incremental snapshots from the primary server. The WAL backup is performed from either the primary server or the standby server.
* **How can I validate that backups are performed on my server?**
Azure Backup and Azure PostgreSQL Services have built an enterprise-class long-t
* **How will I be charged and billed for my backups?**
- Flexible Server provides up to 100 percent of your provisioned server storage as backup storage at no extra cost. Any additional backup storage that you use is charged in gigabytes per month, as defined in the pricing model.
+ Azure Database for PostgreSQL flexible server provides up to 100 percent of your provisioned server storage as backup storage at no extra cost. Any more backup storage that you use is charged in gigabytes per month, as defined in the pricing model.
The backup retention period and backup redundancy option that you select, along with transactional activity on the server, directly affect the total backup storage and billing.
Azure Backup and Azure PostgreSQL Services have built an enterprise-class long-t
* **I configured my server with zone-redundant high availability. Do you take two backups, and will I be charged twice?**
- No. Irrespective of HA or non-HA servers, only one set of backup copies is maintained. You'll be charged only once.
+ No. Irrespective of HA or non-HA servers, only one set of backup copies is maintained. You're charged only once.
### Restore-related questions
Azure Backup and Azure PostgreSQL Services have built an enterprise-class long-t
Azure supports PITR for all servers. Users can restore to the latest restore point or a custom restore point by using the Azure portal, the Azure CLI, and the API.
- To restore your server from manual backups by using tools like pg_dump, you can first create a flexible server and then restore your databases to the server by using [pg_restore](https://www.postgresql.org/docs/current/app-pgrestore.html).
+ To restore your server from manual backups by using tools like pg_dump, you can first create an Azure Database for PostgreSQL flexible server instance and then restore your databases to the server by using [pg_restore](https://www.postgresql.org/docs/current/app-pgrestore.html).
* **Can I restore to another availability zone within the same region?**
Azure Backup and Azure PostgreSQL Services have built an enterprise-class long-t
* **If I restore my HA-enabled server, is the restore server automatically configured with high availability?**
- No. The server is restored as a single-instance flexible server. After the restore is complete, you can optionally configure the server with high availability.
+ No. The server is restored as a single-instance Azure Database for PostgreSQL flexible server instance. After the restore is complete, you can optionally configure the server with high availability.
* **I configured my server within a virtual network. Can I restore to another virtual network?**
Azure Backup and Azure PostgreSQL Services have built an enterprise-class long-t
* **Can I restore my public access server to a virtual network or vice versa?**
- No. Flexible Server currently doesn't support restoring servers across public and private access.
+ No. Azure Database for PostgreSQL flexible server currently doesn't support restoring servers across public and private access.
* **How do I track my restore operation?**
postgresql Concepts Business Continuity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-business-continuity.md
Title: Overview of business continuity with Azure Database for PostgreSQL - Flexible Server
-description: Learn about the concepts of business continuity with Azure Database for PostgreSQL - Flexible Server
+ Title: Overview of business continuity
+description: Learn about the concepts of business continuity with Azure Database for PostgreSQL - Flexible Server.
Last updated 1/4/2024
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-**Business continuity** in Azure Database for PostgreSQL - Flexible Server refers to the mechanisms, policies, and procedures that enable your business to continue operating in the face of disruption, particularly to its computing infrastructure. In most of the cases, flexible server handles disruptive events that might happen in the cloud environment and keep your applications and business processes running. However, there are some events that can't be handled automatically such as:
+**Business continuity** in Azure Database for PostgreSQL flexible server refers to the mechanisms, policies, and procedures that enable your business to continue operating in the face of disruption, particularly to its computing infrastructure. In most of the cases, Azure Database for PostgreSQL flexible server handles disruptive events that might happen in the cloud environment and keep your applications and business processes running. However, there are some events that can't be handled automatically such as:
- User accidentally deletes or updates a row in a table. - Earthquake causes a power outage and temporarily disables an availability zone or a region. - Database patching required to fix a bug or security issue.
-The flexible server provides features that protect data and mitigates downtime for your mission-critical databases during planned and unplanned downtime events. Built on top of the Azure infrastructure that offers robust resiliency and availability, the flexible server has business continuity features that provide another fault-protection, address recovery time requirements, and reduce data loss exposure. As you architect your applications, you should consider the downtime tolerance - the recovery time objective (RTO), and data loss exposure - the recovery point objective (RPO). For example, your business-critical database requires stricter uptime than a test database.
+Azure Database for PostgreSQL flexible server provides features that protect data and mitigates downtime for your mission-critical databases during planned and unplanned downtime events. Built on top of the Azure infrastructure that offers robust resiliency and availability, Azure Database for PostgreSQL flexible server has business continuity features that provide another fault protection, address recovery time requirements, and reduce data loss exposure. As you architect your applications, you should consider the downtime tolerance - the recovery time objective (RTO), and data loss exposure - the recovery point objective (RPO). For example, your business-critical database requires stricter uptime than a test database.
-The table below illustrates the features that Flexible server offers.
+The table below illustrates the features that Azure Database for PostgreSQL flexible server offers.
| **Feature** | **Description** | **Considerations** |
-| -- | | |
-| **Automatic backups** | Flexible server automatically performs daily backups of your database files and continuously backs up transaction logs. Backups can be retained from 7 days up to 35 days. You're able to restore your database server to any point in time within your backup retention period. RTO is dependent on the size of the data to restore + the time to perform log recovery. It can be from few minutes up to 12 hours. For more details, see [Concepts - Backup and Restore](./concepts-backup-restore.md). |Backup data remains within the region. |
-| **Zone redundant high availability** | Flexible server can be deployed with zone redundant high availability (HA) configuration where primary and standby servers are deployed in two different availability zones within a region. This HA configuration protects your databases from zone-level failures and also helps with reducing application downtime during planned and unplanned downtime events. Data from the primary server is replicated to the standby replica in synchronous mode. In the event of any disruption to the primary server, the server is automatically failed over to the standby replica. RTO in most cases is expected to be less than 120s. RPO is expected to be zero (no data loss). For more information, see [Concepts - High availability](./concepts-high-availability.md). | Supported in general purpose and memory optimized compute tiers. Available only in regions where multiple zones are available. |
-| **Same zone high availability** | Flexible server can be deployed with same zone high availability (HA) configuration where primary and standby servers are deployed in the same availability zone in a region. This HA configuration protects your databases from node-level failures and also helps with reducing application downtime during planned and unplanned downtime events. Data from the primary server is replicated to the standby replica in synchronous mode. In the event of any disruption to the primary server, the server is automatically failed over to the standby replica. RTO in most cases is expected to be less than 120s. RPO is expected to be zero (no data loss). For more information, see [Concepts - High availability](./concepts-high-availability.md). | Supported in general purpose and memory optimized compute tiers. |
+| - | -- | |
+| **Automatic backups** | Azure Database for PostgreSQL flexible server automatically performs daily backups of your database files and continuously backs up transaction logs. Backups can be retained from 7 days up to 35 days. You're able to restore your database server to any point in time within your backup retention period. RTO is dependent on the size of the data to restore + the time to perform log recovery. It can be from few minutes up to 12 hours. For more details, see [Concepts - Backup and Restore](./concepts-backup-restore.md). |Backup data remains within the region. |
+| **Zone redundant high availability** | Azure Database for PostgreSQL flexible server can be deployed with zone redundant high availability (HA) configuration where primary and standby servers are deployed in two different availability zones within a region. This HA configuration protects your databases from zone-level failures and also helps with reducing application downtime during planned and unplanned downtime events. Data from the primary server is replicated to the standby replica in synchronous mode. In the event of any disruption to the primary server, the server is automatically failed over to the standby replica. RTO in most cases is expected to be less than 120s. RPO is expected to be zero (no data loss). For more information, see [Concepts - High availability](./concepts-high-availability.md). | Supported in general purpose and memory optimized compute tiers. Available only in regions where multiple zones are available. |
+| **Same zone high availability** | Azure Database for PostgreSQL flexible server can be deployed with same zone high availability (HA) configuration where primary and standby servers are deployed in the same availability zone in a region. This HA configuration protects your databases from node-level failures and also helps with reducing application downtime during planned and unplanned downtime events. Data from the primary server is replicated to the standby replica in synchronous mode. In the event of any disruption to the primary server, the server is automatically failed over to the standby replica. RTO in most cases is expected to be less than 120s. RPO is expected to be zero (no data loss). For more information, see [Concepts - High availability](./concepts-high-availability.md). | Supported in general purpose and memory optimized compute tiers. |
| **Premium-managed disks** | Database files are stored in a highly durable and reliable premium-managed storage. This provides data redundancy with three copies of replica stored within an availability zone with automatic data recovery capabilities. For more information, see [Managed disks documentation](../../virtual-machines/managed-disks-overview.md). | Data stored within an availability zone. |
-| **Zone redundant backup** | Flexible server backups are automatically and securely stored in a zone redundant storage within a region, if the region supports availability zones. During a zone-level failure where your server is provisioned, and if your server isn't configured with zone redundancy, you can still restore your database using the latest restore point in a different zone. For more information, see [Concepts - Backup and Restore](./concepts-backup-restore.md).| Only applicable in regions where multiple zones are available.|
-| **Geo redundant backup** | Flexible server backups are copied to a remote region. that helps with disaster recovery situation in the event of the primary server region being down. | This feature is currently enabled in selected regions. It takes a longer RTO and a higher RPO depending on the size of the data to restore and amount of recovery to perform. |
+| **Zone redundant backup** | Azure Database for PostgreSQL flexible server backups are automatically and securely stored in a zone redundant storage within a region, if the region supports availability zones. During a zone-level failure where your server is provisioned, and if your server isn't configured with zone redundancy, you can still restore your database using the latest restore point in a different zone. For more information, see [Concepts - Backup and Restore](./concepts-backup-restore.md).| Only applicable in regions where multiple zones are available.|
+| **Geo redundant backup** | Azure Database for PostgreSQL flexible server backups are copied to a remote region. that helps with disaster recovery situation in the event the primary server region is down. | This feature is currently enabled in selected regions. It takes a longer RTO and a higher RPO depending on the size of the data to restore and amount of recovery to perform. |
| **Read Replica** | Cross Region read replicas can be deployed to protect your databases from region-level failures. Read replicas are updated asynchronously using PostgreSQL's physical replication technology, and may lag the primary. For more information, see [Concepts - Read Replicas](./concepts-read-replicas.md).| Supported in general purpose and memory optimized compute tiers. |
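For example, a read replica (including a cross-region one) can typically be created from the Azure CLI; a hedged sketch (all names are placeholders, and the `--location` flag for cross-region replicas may depend on your CLI version):

```bash
# Create a read replica of an existing flexible server in another region
az postgres flexible-server replica create \
  --replica-name <replica server name> \
  --resource-group <your resource group> \
  --source-server <source server name> \
  --location <target region>
```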
Below are some planned maintenance scenarios. These events typically incur up to
| **Scenario** | **Process**| | - | -- |
-| <b>Compute scaling (User-initiated)| During compute scaling operation, active checkpoints are allowed to complete, client connections are drained, any uncommitted transactions are canceled, storage is detached, and then it's shut down. A new flexible server with the same database server name is provisioned with the scaled compute configuration. The storage is then attached to the new server and the database is started which performs recovery, if necessary, before accepting client connections. |
+| <b>Compute scaling (User-initiated)| During compute scaling operation, active checkpoints are allowed to complete, client connections are drained, any uncommitted transactions are canceled, storage is detached, and then it's shut down. A new Azure Database for PostgreSQL flexible server instance with the same database server name is provisioned with the scaled compute configuration. The storage is then attached to the new server and the database is started which performs recovery, if necessary, before accepting client connections. |
| <b>Scaling up storage (User-initiated) | When a scaling up storage operation is initiated, active checkpoints are allowed to complete, client connections are drained, and any uncommitted transactions are canceled. After that, the server is shut down. The storage is scaled to the desired size and then attached to the new server. A recovery is performed if needed before accepting client connections. Note that scaling down of the storage size isn't supported. | | <b>New software deployment (Azure-initiated) | New feature rollouts or bug fixes automatically happen as part of the service's planned maintenance, and you can schedule when those activities happen. For more information, check your [portal](https://aka.ms/servicehealthpm). |
-| <b>Minor version upgrades (Azure-initiated) | Azure Database for PostgreSQL automatically patches database servers to the minor version determined by Azure. It happens as part of service's planned maintenance. The database server is automatically restarted with the new minor version. For more information, see [documentation](../concepts-monitoring.md#planned-maintenance-notification). You can also check your [portal](https://aka.ms/servicehealthpm).|
+| <b>Minor version upgrades (Azure-initiated) | Azure Database for PostgreSQL automatically patches database servers to the minor version determined by Azure. It happens as part of the service's planned maintenance. The database server is automatically restarted with the new minor version. For more information, see [documentation](../concepts-monitoring.md#planned-maintenance-notification). You can also check your [portal](https://aka.ms/servicehealthpm).|
-When the flexible server is configured with **high availability**, the flexible server performs the scaling and the maintenance operations on the standby server first. For more information, see [Concepts - High availability](./concepts-high-availability.md).
+When the Azure Database for PostgreSQL flexible server instance is configured with **high availability**, the service performs the scaling and the maintenance operations on the standby server first. For more information, see [Concepts - High availability](./concepts-high-availability.md).
## Unplanned downtime mitigation
-Unplanned downtimes can occur as a result of unforeseen disruptions such as underlying hardware fault, networking issues, and software bugs. If the database server configured with high availability goes down unexpectedly, then the standby replica is activated and the clients can resume their operations. If not configured with high availability (HA), then if the restart attempt fails, a new database server is automatically provisioned. While an unplanned downtime can't be avoided, flexible server helps mitigating the downtime by automatically performing recovery operations without requiring human intervention.
+Unplanned downtimes can occur as a result of unforeseen disruptions such as underlying hardware faults, networking issues, and software bugs. If a database server configured with high availability goes down unexpectedly, the standby replica is activated and clients can resume their operations. If the server isn't configured with high availability (HA) and the restart attempt fails, a new database server is automatically provisioned. While an unplanned downtime can't be avoided, Azure Database for PostgreSQL flexible server helps mitigate the downtime by automatically performing recovery operations without requiring human intervention.
-Though we continuously strive to provide high availability, there are times when Azure Database for PostgreSQL - Flexible Server service does incur outage causing unavailability of the databases and thus impacting your application. When our service monitoring detects issues that cause widespread connectivity errors, failures or performance issues, the service automatically declares an outage to keep you informed.
+Though we continuously strive to provide high availability, there are times when Azure Database for PostgreSQL flexible server does incur an outage, causing unavailability of the databases and thus impacting your application. When our service monitoring detects issues that cause widespread connectivity errors, failures, or performance issues, the service automatically declares an outage to keep you informed.
### Service Outage
-In the event of the Azure Database for PostgreSQL - Flexible Server service outage, you'll be able to see additional details related to the outage in the following places:
+In the event of an Azure Database for PostgreSQL flexible server outage, you can see more details related to the outage in the following places:
* **Azure portal banner**: If your subscription is identified as impacted, there's an outage alert for a Service Issue in your Azure portal **Notifications**.
Below are some unplanned failure scenarios and the recovery process.
| **Scenario** | **Recovery process** <br> [Servers configured without zone-redundant HA] | **Recovery process** <br> [Servers configured with zone-redundant HA] |
| - | - | - |
-| <B>Database server failure</B> | If the database server is down, Azure will attempt to restart the database server. If that fails, the database server will be restarted on another physical node. <br /> <br /> The recovery time (RTO) is dependent on various factors including the activity at the time of fault, such as large transaction, and the volume of recovery to be performed during the database server startup process. <br /> <br /> Applications using the PostgreSQL databases need to be built in a way that they detect and retry dropped connections and failed transactions. | If the database server failure is detected, the server is failed over to the standby server, thus reducing downtime. For more information, see [HA concepts page](./concepts-high-availability.md). RTO is expected to be 60-120s, with zero data loss. |
-| <B>Storage failure</B> | Applications don't see any impact for any storage-related issues such as a disk failure or a physical block corruption. As the data is stored in three copies, the copy of the data is served by the surviving storage. The corrupted data block is automatically repaired and a new copy of the data is automatically created. | For any rare and non-recoverable errors such as the entire storage is inaccessible, the flexible server is failed over to the standby replica to reduce the downtime. For more information, see [HA concepts page](./concepts-high-availability.md). |
-| <b> Logical/user errors</B> | To recover from user errors, such as accidentally dropped tables or incorrectly updated data, you have to perform a [point-in-time recovery](../concepts-backup.md) (PITR). While performing the restore operation, you specify the custom restore point, which is the time right before the error occurred.<br> <br> If you want to restore only a subset of databases or specific tables rather than all databases in the database server, you can restore the database server in a new instance, export the table(s) via [pg_dump](https://www.postgresql.org/docs/current/app-pgdump.html), and then use [pg_restore](https://www.postgresql.org/docs/current/app-pgrestore.html) to restore those tables into your database. | These user errors aren't protected with high availability as all changes are replicated to the standby replica synchronously. You have to perform point-in-time restore to recover from such errors. |
-| <b> Availability zone failure</B> | To recover from a zone-level failure, you can perform point-in-time restore using the backup and choosing a custom restore point with the latest time to restore the latest data. A new flexible server will be deployed in another non-impacted zone. The time taken to restore depends on the previous backup and the volume of transaction logs to recover. | Flexible server is automatically failed over to the standby server within 60-120s with zero data loss. For more information, see [HA concepts page](./concepts-high-availability.md). |
-| <b> Region failure | If your server is configured with geo-redundant backup, you can perform geo-restore in the paired region. A new server will be provisioned and recovered to the last available data that was copied to this region. <br /> <br /> You can also use cross region read replicas. In the event of region failure you can perform disaster recovery operation by promoting your read replica to be a standalone read-writeable server. RPO is expected to be up to 5 minutes (data loss possible) except in the case of severe regional failure when the RPO can be close to the replication lag at the time of failure. | Same process. |
+| **Database server failure** | If the database server is down, Azure will attempt to restart the database server. If that fails, the database server will be restarted on another physical node. <br /> <br /> The recovery time (RTO) depends on various factors, including the activity at the time of the fault, such as a large transaction, and the volume of recovery to be performed during the database server startup process. <br /> <br /> Applications using the PostgreSQL databases need to be built in a way that they detect and retry dropped connections and failed transactions. | If a database server failure is detected, the server is failed over to the standby server, thus reducing downtime. For more information, see [HA concepts page](./concepts-high-availability.md). RTO is expected to be 60-120s, with zero data loss. |
+| **Storage failure** | Applications don't see any impact from storage-related issues such as a disk failure or a physical block corruption. As the data is stored in three copies, the copy of the data is served by the surviving storage. The corrupted data block is automatically repaired and a new copy of the data is automatically created. | For any rare and non-recoverable errors, such as the entire storage being inaccessible, the Azure Database for PostgreSQL flexible server instance is failed over to the standby replica to reduce the downtime. For more information, see [HA concepts page](./concepts-high-availability.md). |
+| **Logical/user errors** | To recover from user errors, such as accidentally dropped tables or incorrectly updated data, you have to perform a [point-in-time recovery](../concepts-backup.md) (PITR). While performing the restore operation, you specify the custom restore point, which is the time right before the error occurred.<br> <br> If you want to restore only a subset of databases or specific tables rather than all databases in the database server, you can restore the database server in a new instance, export the table(s) via [pg_dump](https://www.postgresql.org/docs/current/app-pgdump.html), and then use [pg_restore](https://www.postgresql.org/docs/current/app-pgrestore.html) to restore those tables into your database (a sketch follows this table). | These user errors aren't protected with high availability as all changes are replicated to the standby replica synchronously. You have to perform point-in-time restore to recover from such errors. |
+| **Availability zone failure** | To recover from a zone-level failure, you can perform point-in-time restore using the backup and choosing a custom restore point with the latest time to restore the latest data. A new Azure Database for PostgreSQL flexible server instance is deployed in another non-impacted zone. The time taken to restore depends on the previous backup and the volume of transaction logs to recover. | Azure Database for PostgreSQL flexible server is automatically failed over to the standby server within 60-120s with zero data loss. For more information, see [HA concepts page](./concepts-high-availability.md). |
+| **Region failure** | If your server is configured with geo-redundant backup, you can perform a geo-restore in the paired region. A new server will be provisioned and recovered to the last available data that was copied to this region. <br /> <br /> You can also use cross-region read replicas. In the event of a region failure, you can perform a disaster recovery operation by promoting your read replica to a standalone read-writeable server. RPO is expected to be up to 5 minutes (data loss possible) except in the case of severe regional failure, when the RPO can be close to the replication lag at the time of failure. | Same process. |
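For the logical/user error scenario above, the following minimal sketch (hypothetical server, database, and table names) shows how the selective restore could be scripted. It assumes the `pg_dump` and `pg_restore` client tools are installed locally and that you've already performed a point-in-time restore to a separate server.

```python
# Sketch only: export two affected tables from the PITR-restored server and
# load them back into the original database. Names below are hypothetical.
import subprocess

restored_host = "myserver-restored.postgres.database.azure.com"  # PITR target (hypothetical)
original_host = "myserver.postgres.database.azure.com"           # original server (hypothetical)

# Passwords can be supplied via the PGPASSWORD environment variable or a .pgpass file.
subprocess.run([
    "pg_dump",
    "--host", restored_host,
    "--username", "dbadmin",
    "--dbname", "appdb",
    "--table", "public.orders",
    "--table", "public.order_items",
    "--format", "custom",
    "--file", "recovered_tables.dump",
], check=True)

subprocess.run([
    "pg_restore",
    "--host", original_host,
    "--username", "dbadmin",
    "--dbname", "appdb",
    "--clean", "--if-exists",
    "recovered_tables.dump",
], check=True)
```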
### Configure your database after recovery from regional failure

* If you're using geo-restore or geo-replica to recover from an outage, you must make sure that connectivity to the new server is properly configured so that normal application function can resume. You can follow the [Post-restore tasks](concepts-backup-restore.md#geo-redundant-backup-and-restore).
-* If you've previously set up a diagnostic setting on the original server, make sure to do the same on the target server if necessary as explained in [Configure and Access Logs in Azure Database for PostgreSQL - Flexible Server](howto-configure-and-access-logs.md).
-* Setup telemetry alerts, you need to make sure your existing alert rule settings are updated to map to the new server. For more information about alert rules, see [Use the Azure portal to set up alerts on metrics for Azure Database for PostgreSQL - Flexible Server](howto-alert-on-metrics.md).
-
+* If you've previously set up a diagnostic setting on the original server, make sure to do the same on the target server, if necessary, as explained in [Configure and Access Logs in Azure Database for PostgreSQL - Flexible Server](how-to-configure-and-access-logs.md).
+* To set up telemetry alerts, make sure your existing alert rule settings are updated to map to the new server. For more information about alert rules, see [Use the Azure portal to set up alerts on metrics for Azure Database for PostgreSQL - Flexible Server](how-to-alert-on-metrics.md).
> [!IMPORTANT]
-> Deleted servers can be restored. If you delete the server, you can follow our guidance [Restore a dropped Azure Database for PostgreSQL Flexible server](how-to-restore-dropped-server.md) to recover. Use Azure resource lock to help prevent accidental deletion of your server.
+> Deleted servers can be restored. If you delete the server, you can follow our guidance [Restore a dropped Azure database - Azure Database for PostgreSQL - Flexible Server](how-to-restore-dropped-server.md) to recover. Use an Azure resource lock to help prevent accidental deletion of your server.
## Next steps
postgresql Concepts Compare Single Server Flexible Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-compare-single-server-flexible-server.md
Title: Compare Azure Database for PostgreSQL - Single Server and Flexible Server
-description: Detailed comparison of features and capabilities between Azure Database for PostgreSQL Single Server and Flexible Server
+ Title: Compare deployment options
+description: Detailed comparison of features and capabilities between Azure Database for PostgreSQL - Single Server and Azure Database for PostgreSQL - Flexible Server.
Last updated 12/11/2023
-# Comparison chart - Azure Database for PostgreSQL Single Server and Flexible Server
+# Comparison chart - Azure Database for PostgreSQL - Single Server and Azure Database for PostgreSQL - Flexible Server
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]

## Overview
-Azure Database for PostgreSQL Flexible Server is the next generation managed PostgreSQL service in Azure. It provides maximum flexibility over your database, built-in cost-optimizations, and offers several improvements over Single Server.
+Azure Database for PostgreSQL flexible server is the next generation managed PostgreSQL service in Azure. It provides maximum flexibility over your database, built-in cost optimizations, and several improvements over Azure Database for PostgreSQL single server.
>[!NOTE]
-> For all your new PostgreSQL deployments, we recommend using Flexible Server. However, you should consider your own requirements against the comparison table below.
+> For all your new deployments, we recommend using Azure Database for PostgreSQL flexible server. However, you should consider your own requirements against the comparison table below.
## Comparison table
-The following table provides a list of high-level features and capabilities comparisons between Single Server and Flexible Server.
+The following table provides a high-level comparison of features and capabilities between Azure Database for PostgreSQL single server and Azure Database for PostgreSQL flexible server.
-| **Feature / Capability** | **Single Server** | **Flexible Server** |
+| **Feature / Capability** | **Azure Database for PostgreSQL single server** | **Azure Database for PostgreSQL flexible server** |
| - | - | - |
| **General** | | |
| General availability | GA since 2018 | GA since 2021|
The following table provides a list of high-level features and capabilities comp
## Next steps

-- Understand [what's available for compute and storage options - Flexible server](concepts-compute-storage.md)
-- Learn about [supported PostgreSQL Database Versions in Flexible Server](concepts-supported-versions.md)
-- Learn about [current limitations in Flexible Server](concepts-limits.md)
+- Understand [what's available for compute and storage options - Azure Database for PostgreSQL - Flexible Server](concepts-compute-storage.md)
+- Learn about [supported PostgreSQL database versions - Azure Database for PostgreSQL - Flexible Server](concepts-supported-versions.md)
+- Learn about [current limitations in Azure Database for PostgreSQL flexible server](concepts-limits.md)
postgresql Concepts Compliance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-compliance.md
Title: 'Security and Compliance Certifications in Azure Database for PostgreSQL - Flexible Server'
-description: Learn about compliance in the Flexible Server deployment option for Azure Database for PostgreSQL.
+ Title: Security and compliance certifications
+description: Learn about security and compliance certifications in Azure Database for PostgreSQL - Flexible Server.
Last updated 10/20/2022
-# Security and Compliance Certifications in Azure Database for PostgreSQL - Flexible Server
+# Security and compliance certifications in Azure Database for PostgreSQL - Flexible Server
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]

## Overview of Compliance Certifications on Microsoft Azure
-Customers experience an increasing demand for highly secure and compliant solutions as they face data breaches along with requests from governments to access online customer information. Important regulatory requirements such as the [General Data Protection Regulation (GDPR)](/compliance/regulatory/gdpr) or [Sarbanes-Oxley (SOX)](/compliance/regulatory/offering-sox) make selecting cloud services that help customers achieve trust, transparency, security, and compliance essential. To help customers achieve compliance with national/regional and industry specific regulations and requirements Azure Database for PostgreSQL - Flexible Server build upon Microsoft AzureΓÇÖs compliance offerings to provide the most rigorous compliance certifications to customers at service general availability.
+Customers experience an increasing demand for highly secure and compliant solutions as they face data breaches along with requests from governments to access online customer information. Important regulatory requirements such as the [General Data Protection Regulation (GDPR)](/compliance/regulatory/gdpr) or [Sarbanes-Oxley (SOX)](/compliance/regulatory/offering-sox) make selecting cloud services that help customers achieve trust, transparency, security, and compliance essential. To help customers achieve compliance with national/regional and industry-specific regulations and requirements, Azure Database for PostgreSQL flexible server builds upon Microsoft Azure's compliance offerings to provide the most rigorous compliance certifications to customers at service general availability.
To help customers meet their own compliance obligations across regulated industries and markets worldwide, Azure maintains the largest compliance portfolio in the industry, both in terms of breadth (total number of offerings) and depth (number of customer-facing services in assessment scope). Azure compliance offerings are grouped into four segments: globally applicable, US government, industry specific, and region/country specific. Compliance offerings are based on various types of assurances, including formal certifications, attestations, validations, authorizations, and assessments produced by independent third-party auditing firms, as well as contractual amendments, self-assessments, and customer guidance documents produced by Microsoft. More detailed information about Azure compliance offerings is available from the [Trust Center](https://www.microsoft.com/trust-center/compliance/compliance-overview).
-## Azure Database for PostgreSQL - Flexible Server Compliance Certifications
+## Azure Database for PostgreSQL flexible server compliance certifications
- Azure Database for PostgreSQL - Flexible Server has achieved a comprehensive set of national/regional and industry-specific compliance certifications in our Azure public cloud to help you comply with requirements governing the collection and use of your data.
+Azure Database for PostgreSQL flexible server has achieved a comprehensive set of national/regional and industry-specific compliance certifications in our Azure public cloud to help you comply with requirements governing the collection and use of your data.
> [!div class="mx-tableFixed"]
> | **Certification**| **Applicable To** |
postgresql Concepts Compute Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-compute-storage.md
Title: Compute and storage options in Azure Database for PostgreSQL - Flexible Server
+ Title: Compute and storage options
description: This article describes the compute and storage options in Azure Database for PostgreSQL - Flexible Server.
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-You can create an Azure Database for PostgreSQL server in one of three pricing tiers: Burstable, General Purpose, and Memory Optimized. The pricing tier is calculated based on the compute, memory, and storage you provision. A server can have one or many databases.
+You can create an Azure Database for PostgreSQL flexible server instance in one of three pricing tiers: Burstable, General Purpose, and Memory Optimized. The pricing tier is calculated based on the compute, memory, and storage you provision. A server can have one or many databases.
| Resource/Tier | Burstable | General Purpose | Memory Optimized |
| : | : | : | : |
Storage is available in the following fixed sizes:
| 32 TiB | 20,000 |
-Your VM type also have IOPS limits. Even though you can select any storage size independently from the server type, you might not be able to use all IOPS that the storage provides, especially when you choose a server with a few vCores.
+Your VM type also has IOPS limits. Even though you can select any storage size independently from the server type, you might not be able to use all IOPS that the storage provides, especially when you choose a server with a few vCores.
You can add storage capacity during and after the creation of the server.

> [!NOTE]
You can monitor your I/O consumption in the Azure portal or by using Azure CLI c
| E32s_v3 / E32ds_v4 / E32ds_v5 / E32ads_v5 | 20,000 IOPS | 120 | 240 | 500 | 1,100 | 2,300 | 5,000 | 7,500 | 7,500 | 16,000 | 18,000 | 20,000 |
| E48s_v3 / E48ds_v4 / E48ds_v5 / E48ads_v5 | 20,000 IOPS | 120 | 240 | 500 | 1,100 | 2,300 | 5,000 | 7,500 | 7,500 | 16,000 | 18,000 | 20,000 |
| E64s_v3 / E64ds_v4 / E64ds_v5 / E64ads_v5 | 20,000 IOPS | 120 | 240 | 500 | 1,100 | 2,300 | 5,000 | 7,500 | 7,500 | 16,000 | 18,000 | 20,000 |
-| E96ds_v5 / E96ads_v5 | 20,000 IOPS | 120 | 240 | 500 | 1,100 | 2,300 | 5,000 | 7,500 | 7,500 | 16,000 | 18,000 | 20,000 |
+| E96ds_v5 / E96ads_v5 | 20,000 IOPS | 120 | 240 | 500 | 1,100 | 2,300 | 5,000 | 7,500 | 7,500 | 16,000 | 18,000 | 20,000 |
IOPS marked with an asterisk (\*) are limited by the VM type that you selected. Otherwise, the selected storage size limits the IOPS.
When you reach the storage limit, the server starts returning errors and prevent
To avoid this situation, the server is automatically switched to read-only mode when the storage usage reaches 95 percent or when the available capacity is less than 5 GiB.
-We recommend that you actively monitor the disk space that's in use and increase the disk size before you run out of storage. You can set up an alert to notify you when your server storage is approaching an out-of-disk state. For more information, see [Use the Azure portal to set up alerts on metrics for Azure Database for PostgreSQL - Flexible Server](howto-alert-on-metrics.md).
+We recommend that you actively monitor the disk space that's in use and increase the disk size before you run out of storage. You can set up an alert to notify you when your server storage is approaching an out-of-disk state. For more information, see [Use the Azure portal to set up alerts on metrics for Azure Database for PostgreSQL - Flexible Server](how-to-alert-on-metrics.md).
### Storage autogrow
For servers with more than 1 TiB of provisioned storage, the storage autogrow me
As an illustration, take a server with a storage capacity of 2 TiB (greater than 1 TiB). In this case, the autogrow limit is set at 64 GiB. This choice is made because 64 GiB is the smaller value when compared to 10% of 2 TiB, which is roughly 204.8 GiB. In contrast, for a server with a storage size of 128 GiB (less than 1 TiB), the autogrow feature activates when there's only 25.8 GiB of space left. This activation is based on the 20% threshold of the total allocated storage (128 GiB), which is smaller than 64 GiB.
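The trigger logic described above can be summarized in a short sketch. This is only an illustration of the thresholds as stated in this article, not an official formula, and the helper function name is made up.

```python
# Sketch of the autogrow trigger margin described above: for servers larger
# than 1 TiB, the smaller of 10% of provisioned storage or 64 GiB; for smaller
# servers, the smaller of 20% of provisioned storage or 64 GiB.
def autogrow_threshold_gib(provisioned_gib: float) -> float:
    if provisioned_gib > 1024:                  # more than 1 TiB
        return min(provisioned_gib * 0.10, 64)
    return min(provisioned_gib * 0.20, 64)

print(autogrow_threshold_gib(2048))  # 2 TiB server   -> 64 GiB margin
print(autogrow_threshold_gib(128))   # 128 GiB server -> 25.6 GiB margin (roughly the figure above)
```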
-Azure Database for PostgreSQL - Flexible Server uses [Azure managed disks](/azure/virtual-machines/disks-types). The default behavior is to increase the disk size to the next premium tier. This increase is always double in both size and cost, regardless of whether you start the storage scaling operation manually or through storage autogrow. Enabling storage autogrow is valuable when you're managing unpredictable workloads, because it automatically detects low-storage conditions and scales up the storage accordingly.
+Azure Database for PostgreSQL flexible server uses [Azure managed disks](/azure/virtual-machines/disks-types). The default behavior is to increase the disk size to the next premium tier. This increase is always double in both size and cost, regardless of whether you start the storage scaling operation manually or through storage autogrow. Enabling storage autogrow is valuable when you're managing unpredictable workloads, because it automatically detects low-storage conditions and scales up the storage accordingly.
The process of scaling storage is performed online without causing any downtime, except when the disk is provisioned at 4,096 GiB. This exception is a limitation of Azure Managed disks. If a disk is already 4,096 GiB, the storage scaling activity will not be triggered, even if storage auto-grow is turned on. In such cases, you need to manually scale your storage. Manual scaling is an offline operation that you should plan according to your business requirements.
Remember that storage can only be scaled up, not down.
## Premium SSD v2 (preview)
-Premium SSD v2 offers higher performance than Premium SSDs, while also generally being less costly. You can individually tweak the performance (capacity, throughput, and IOPS) of Premium SSD v2 disks at any time, allowing workloads to be cost efficient while meeting shifting performance needs. For example, a transaction-intensive database might need a large amount of IOPS at a small size, or a gaming application might need a large amount of IOPS but only during peak hours. Because of this, for most general purpose workloads, Premium SSD v2 can provide the best price performance. You can now deploy Azure Database for PostgreSQL Flexible servers with Premium SSD v2 disk in limited regions.
+Premium SSD v2 offers higher performance than Premium SSDs while also generally being less costly. You can individually tweak the performance (capacity, throughput, and IOPS) of Premium SSD v2 disks at any time, allowing workloads to be cost efficient while meeting shifting performance needs. For example, a transaction-intensive database might need a large amount of IOPS at a small size, or a gaming application might need a large amount of IOPS but only during peak hours. Because of this, for most general purpose workloads, Premium SSD v2 can provide the best price performance. You can now deploy Azure Database for PostgreSQL flexible server instances with Premium SSD v2 disk in limited regions.
### Differences between Premium SSD and Premium SSD v2
All Premium SSD v2 disks have a baseline of 3000 IOPS that is free of charge. Af
All Premium SSD v2 disks have a baseline throughput of 125 MB/s that is free of charge. After 6 GiB, the maximum throughput that can be set increases by 0.25 MB/s per set IOPS. If a disk has 3,000 IOPS, the maximum throughput it can set is 750 MB/s. To raise the throughput for this disk beyond 750 MB/s, its IOPS must be increased. For example, if you increased the IOPS to 4,000, then the maximum throughput that can be set is 1,000 MB/s. 1,200 MB/s is the maximum throughput supported for disks that have 5,000 IOPS or more. Increasing your throughput beyond 125 MB/s increases the price of your disk. A small worked example follows the note.

> [!NOTE]
-> Premium SSD v2 is currently in preview for Azure Database for PostgreSQL Flexible Server.
+> Premium SSD v2 is currently in preview for Azure Database for PostgreSQL flexible server.
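The following sketch restates the throughput rule above as a quick calculation; it's illustrative only, and the function name is made up.

```python
# Maximum settable throughput (MB/s) for a Premium SSD v2 disk, per the rule
# above: 0.25 MB/s per provisioned IOPS, capped at 1,200 MB/s; the first
# 125 MB/s is included free of charge.
def max_throughput_mbps(provisioned_iops: int) -> float:
    return min(provisioned_iops * 0.25, 1200)

print(max_throughput_mbps(3000))  # 750.0 MB/s
print(max_throughput_mbps(4000))  # 1000.0 MB/s
print(max_throughput_mbps(5000))  # 1200.0 MB/s (cap)
```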
#### Premium SSD v2 early preview limitations

-- Azure Database for PostgreSQL Flexible Server with Premium SSD V2 disk can be deployed only in West Europe, East US, Switzerland North regions during early preview, and provided there is still capacity in the selected region. Support for more regions is coming soon.
+- Azure Database for PostgreSQL flexible server with Premium SSD V2 disk can be deployed only in the West Europe, East US, and Switzerland North regions during early preview, provided there is still capacity in the selected region. Support for more regions is coming soon.
- During early preview, the SSD V2 disk won't support the High Availability, Read Replicas, Geo Redundant Backups, Customer Managed Keys, or Storage Auto-grow features. These features will be supported soon on Premium SSD V2.
All Premium SSD v2 disks have a baseline throughput of 125 MB/s that is free of
## IOPS (preview)
-Azure Database for PostgreSQL ΓÇô Flexible Server supports the provisioning of additional IOPS. This feature enables you to provision additional IOPS above the complimentary IOPS limit. Using this feature, you can increase or decrease the number of IOPS provisioned based on your workload requirements at any time.
+Azure Database for PostgreSQL flexible server supports the provisioning of additional IOPS. This feature enables you to provision additional IOPS above the complimentary IOPS limit. Using this feature, you can increase or decrease the number of IOPS provisioned based on your workload requirements at any time.
The minimum and maximum IOPS are determined by the selected compute size. To learn more about the minimum and maximum IOPS per compute size refer to the [table](#maximum-iops-for-your-configuration).
Learn how to [scale up or down IOPS](./how-to-scale-compute-storage-portal.md).
## Price
-For the most up-to-date pricing information, see the [Azure Database for PostgreSQL pricing](https://azure.microsoft.com/pricing/details/postgresql/flexible-server/) page. The [Azure Portal](https://portal.azure.com/#create/Microsoft.PostgreSQLServer) shows the monthly cost on the **Pricing tier** tab, based on the options that you select.
+For the most up-to-date pricing information, see the [Azure Database for PostgreSQL flexible server pricing](https://azure.microsoft.com/pricing/details/postgresql/flexible-server/) page. The [Azure portal](https://portal.azure.com/#create/Microsoft.PostgreSQLServer) shows the monthly cost on the **Pricing tier** tab, based on the options that you select.
If you don't have an Azure subscription, you can use the Azure pricing calculator to get an estimated price. On the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/) website, select **Add items**, expand the **Databases** category, and then select **Azure Database for PostgreSQL** to customize the options.

## Related content

-- [create a PostgreSQL server in the portal](how-to-manage-server-portal.md)
-- [service limits](concepts-limits.md)
+- [Create an Azure Database for PostgreSQL - Flexible Server in the portal](how-to-manage-server-portal.md)
+- [Service limits](concepts-limits.md)
postgresql Concepts Connection Libraries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-connection-libraries.md
Title: Connection libraries - Azure Database for PostgreSQL - Flexible Server
+ Title: Connection libraries
description: This article describes several libraries and drivers that you can use when coding applications to connect and query Azure Database for PostgreSQL - Flexible Server.
Last updated 03/24/2023
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-This article lists libraries and drivers that developers can use to develop applications to connect to and query Azure Database for PostgreSQL.
+This article lists libraries and drivers that developers can use to develop applications to connect to and query Azure Database for PostgreSQL flexible server.
## Client interfaces
-Most language client libraries used to connect to PostgreSQL server are external projects and are distributed independently. The libraries listed are supported on the Windows, Linux, and Mac platforms, for connecting to Azure Database for PostgreSQL. Several quickstart examples are listed in the Next steps section.
+Most language client libraries used to connect to Azure Database for PostgreSQL flexible server are external projects and are distributed independently. The libraries listed are supported on the Windows, Linux, and macOS platforms for connecting to Azure Database for PostgreSQL flexible server. Several quickstart examples are listed in the Next steps section.
| **Language** | **Client interface** | **Additional information** | **Download** |
|--|-|-|--|
Most language client libraries used to connect to PostgreSQL server are external
## Next steps
-Read these quickstarts on how to connect to and query Azure Database for PostgreSQL by using your language of choice:
+Read these quickstarts on how to connect to and query Azure Database for PostgreSQL flexible server by using your language of choice:
[Python](./connect-python.md) | [Java](./connect-java.md) | [Azure CLI](./connect-azure-cli.md) | [.NET (C#)](./connect-csharp.md)
postgresql Concepts Connection Pooling Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-connection-pooling-best-practices.md
Title: Connection pooling best practices - Azure Database for PostgreSQL - Flexible Server
+ Title: Connection pooling best practices
description: This article describes the best practices for connection pooling in Azure Database for PostgreSQL - Flexible Server.
Last updated 08/30/2023
-# Connection pooling strategy for PostgreSQL Using PgBouncer
+# Connection pooling strategy for Azure Database for PostgreSQL - Flexible Server using PgBouncer
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-Strategic guidance for selecting connection pooling mechanism for PostgreSQL.
+This article provides strategic guidance for selecting a connection pooling mechanism for Azure Database for PostgreSQL flexible server.
## Introduction
-When using PostgreSQL, establishing a connection to the database involves creating a communication channel between the client application and the server. This channel is responsible for managing data, executing queries, and initiating transactions. Once the connection is established, the client application can send commands to the server and receive responses. However, creating a new connection for each operation can cause performance issues for mission-critical applications. Every time a new connection is created, PostgreSQL spawns a new process using the postmaster process, which consumes more resources.
+When using Azure Database for PostgreSQL flexible server, establishing a connection to the database involves creating a communication channel between the client application and the server. This channel is responsible for managing data, executing queries, and initiating transactions. Once the connection is established, the client application can send commands to the server and receive responses. However, creating a new connection for each operation can cause performance issues for mission-critical applications. Every time a new connection is created, Azure Database for PostgreSQL flexible server spawns a new process using the postmaster process, which consumes more resources.
-To mitigate this issue, connection pooling is used to create a cache of connections that can be reused in PostgreSQL. When an application or client requests a connection, it's created from the connection pool. After the session or transaction is completed, the connection is returned to the pool for reuse. By reusing connections, resources usage is reduced, and performance is improved.
+To mitigate this issue, connection pooling is used to create a cache of connections that can be reused in Azure Database for PostgreSQL flexible server. When an application or client requests a connection, it's served from the connection pool. After the session or transaction is completed, the connection is returned to the pool for reuse. By reusing connections, resource usage is reduced and performance is improved (a minimal client-side sketch follows the diagram).
:::image type="content" source="./media/concepts-connection-pooling-best-practices/connection-patterns.png" alt-text="Diagram for Connection Pooling Patterns.":::
Although there are different tools for connection pooling, in this section, we d
**PgBouncer** is an efficient connection pooler designed for PostgreSQL, offering the advantage of reducing processing time and optimizing resource usage in managing multiple client connections to one or more databases. **PgBouncer** incorporates three distinct pooling modes for connection rotation:

- **Session pooling:** This method assigns a server connection to the client application for the entire duration of the client's connection. Upon disconnection of the client application, **PgBouncer** promptly returns the server connection back to the pool. Session pooling is the default mode in open-source PgBouncer. See [PgBouncer configuration](https://www.pgbouncer.org/config.html).
-- **Transaction pooling:** With transaction pooling, a server connection is dedicated to the client application during a transaction. Once the transaction is successfully completed, **PgBouncer** intelligently releases the server connection, making it available again within the pool. Transaction pooling is the default mode in Azure PostgreSQL Flexible Server's in-built PgBouncer, and it does not support prepared transactions.
+- **Transaction pooling:** With transaction pooling, a server connection is dedicated to the client application during a transaction. Once the transaction is successfully completed, **PgBouncer** intelligently releases the server connection, making it available again within the pool. Transaction pooling is the default mode in Azure Database for PostgreSQL flexible server's in-built PgBouncer, and it does not support prepared transactions.
- **Statement pooling:** In statement pooling, a server connection is allocated to the client application for each individual statement. Upon the statement's completion, the server connection is promptly returned to the connection pool. It's important to note that multi-statement transactions are not supported in this mode.

The effective utilization of PgBouncer can be categorized into three distinct usage patterns.
When utilizing this approach, PgBouncer is deployed on the same server where you
### I. PgBouncer deployed in Application VM
-If your application runs on an Azure VM, you can set up PgBouncer on the same VM. To install and configure PgBouncer as a connection pooling proxy with Azure Database for PostgreSQL, follow the instructions provided in the following [link](https://techcommunity.microsoft.com/t5/azure-database-for-postgresql/steps-to-install-and-setup-pgbouncer-connection-pooling-proxy/ba-p/730555).
+If your application runs on an Azure VM, you can set up PgBouncer on the same VM. To install and configure PgBouncer as a connection pooling proxy with Azure Database for PostgreSQL flexible server, follow the instructions provided in the following [link](https://techcommunity.microsoft.com/t5/azure-database-for-postgresql/steps-to-install-and-setup-pgbouncer-connection-pooling-proxy/ba-p/730555).
:::image type="content" source="./media/concepts-connection-pooling-best-practices/co-location.png" alt-text="Diagram for App co-location on VM.":::
-Deploying PgBouncer in an application server can provide several advantages, especially when working with PostgreSQL databases. Some of the key benefits & limitations of this deployment method are:
+Deploying PgBouncer in an application server can provide several advantages, especially when working with Azure Database for PostgreSQL flexible server databases. Some of the key benefits & limitations of this deployment method are:
**Benefits:**

-- **Reduced Latency:** By deploying **PgBouncer** on the same Application VM, communication between the primary application and the connection pooler is efficient due to their proximity. deploying PgBouncer in Application VM minimizes latency and ensures smooth and swift interactions.
+- **Reduced Latency:** By deploying **PgBouncer** on the same Application VM, communication between the primary application and the connection pooler is efficient due to their proximity. Deploying PgBouncer in the application VM minimizes latency and ensures smooth and swift interactions.
- **Improved security:** **PgBouncer** can act as a secure intermediary between the application and the database, providing an extra layer of security. It can enforce authentication and encryption, ensuring that only authorized clients can access the database.
-Overall, deploying PgBouncer in an application server provides a more efficient, secure, and scalable approach to managing connections to PostgreSQL databases, enhancing the performance and reliability of the application.
+Overall, deploying PgBouncer in an application server provides a more efficient, secure, and scalable approach to managing connections to Azure Database for PostgreSQL flexible server databases, enhancing the performance and reliability of the application.
**Limitations:**
It's important to weigh these limitations against the benefits and evaluate whet
It's possible to utilize **PgBouncer** as a sidecar container if your application is containerized and running on [Azure Kubernetes Service (AKS)](https://azure.microsoft.com/services/kubernetes-service/), [Azure Container Instance (ACI)](https://azure.microsoft.com/products/container-instances), [Azure Container Apps (ACA)](https://azure.microsoft.com/products/container-apps/), or [Azure Red Hat OpenShift (ARO)](https://azure.microsoft.com/products/openshift/). The sidecar pattern draws its inspiration from the concept of a sidecar attached to a motorcycle, where an auxiliary container, known as the sidecar container, is attached to a parent application. This pattern enriches the parent application by extending its functionalities and delivering supplementary support.
-The sidecar pattern is typically used with containers being coscheduled as an atomic container group. deploying PgBouncer in an AKS sidecar tightly couples the application and sidecar lifecycles and shares resources such as hostname and networking to make efficient use of resources. The PgBouncer sidecar operates alongside the application container within the same pod in Azure Kubernetes Service (AKS) with 1:1 mapping, serving as a connection pooling proxy for Azure Database for PostgreSQL.
+The sidecar pattern is typically used with containers being coscheduled as an atomic container group. Deploying PgBouncer in an AKS sidecar tightly couples the application and sidecar lifecycles and shares resources such as hostname and networking to make efficient use of resources. The PgBouncer sidecar operates alongside the application container within the same pod in Azure Kubernetes Service (AKS) with 1:1 mapping, serving as a connection pooling proxy for Azure Database for PostgreSQL flexible server.
-This sidecar pattern is typically used with containers being coscheduled as an atomic container group. sidecar pattern strongly binds the application and sidecar lifecycles and has shared resources such hostname and networking. By using this setup, PgBouncer optimizes connection management and facilitates efficient communication between the application and the Azure Database for PostgreSQL.
+This sidecar pattern is typically used with containers being coscheduled as an atomic container group. The sidecar pattern strongly binds the application and sidecar lifecycles and shares resources such as hostname and networking. By using this setup, PgBouncer optimizes connection management and facilitates efficient communication between the application and the Azure Database for PostgreSQL flexible server instance.
-Microsoft has published a [**PgBouncer** sidecar proxy image](https://hub.docker.com/_/microsoft-azure-oss-db-tools-pgbouncer-sidecar) in Microsoft container registry.
+Microsoft has published a **PgBouncer** sidecar proxy image in Microsoft container registry.
Refer to [this article](https://techcommunity.microsoft.com/t5/azure-database-for-postgresql/steps-to-install-and-setup-pgbouncer-connection-pooling-on-azure/ba-p/3633043) for more details.
When utilizing this approach, PgBouncer is deployed as a centralized service, in
### I. PgBouncer deployed in Ubuntu VM behind Azure Load Balancer
-**PgBouncer** connection proxy is set up between the application and database layer behind a Azure Load Balancer as shown in the image. In this pattern multiple PgBouncer instances are deployed behind a load balancer as a service to mitigate single point of failure.This pattern is also suitable in scenarios where the application is running on a managed service like Azure App Services or Azure Functions and connecting to **PgBouncer** service for easy integration with your existing infrastructure.
+**PgBouncer** connection proxy is set up between the application and database layer behind an Azure Load Balancer as shown in the image. In this pattern, multiple PgBouncer instances are deployed behind a load balancer as a service to mitigate a single point of failure. This pattern is also suitable in scenarios where the application is running on a managed service like Azure App Services or Azure Functions and connecting to the **PgBouncer** service for easy integration with your existing infrastructure.
-Refer [link](https://techcommunity.microsoft.com/t5/azure-database-for-postgresql/steps-to-install-and-setup-pgbouncer-connection-pooling-proxy/ba-p/730555) to install and set up PgBouncer connection pooling proxy with Azure Database for PostgreSQL.
+Refer to this [link](https://techcommunity.microsoft.com/t5/azure-database-for-postgresql/steps-to-install-and-setup-pgbouncer-connection-pooling-proxy/ba-p/730555) to install and set up the PgBouncer connection pooling proxy with Azure Database for PostgreSQL flexible server.
:::image type="content" source="./media/concepts-connection-pooling-best-practices/deploying-vm.png" alt-text="Diagram for App co-location on Vm with Load Balancer.":::
Some of the key benefits & limitations of this deployment method are:
- **Removing Single Point of Failure:** Application connectivity may not be affected by the failure of a single PgBouncer VM, as there are several PgBouncer instances behind Azure Load Balancer.
- **Seamless Integration with Managed
- **Simplified Setup on Azure VM:** If you're already running your application on an Azure VM, setting up PgBouncer on the same VM is straightforward. Deploying PgBouncer in the VM ensures that PgBouncer is deployed in close proximity to your application, minimizing network latency and maximizing performance.
-- **Non-Intrusive Configuration:** By deploying PgBouncer on a VM, you can avoid modifying server parameters on Azure PostgreSQL. This is useful when you want to configure PgBouncer on a flexible server. For example, changing the SSLMODE parameter to "required" on Azure PostgreSQL might cause certain applications that rely on SSLMODE=FALSE to fail. Deploying PgBouncer on a separate VM allows you to maintain the default server configuration while still using PgBouncer's benefits.
+- **Non-Intrusive Configuration:** By deploying PgBouncer on a VM, you can avoid modifying server parameters on Azure Database for PostgreSQL flexible server. This is useful when you want to configure PgBouncer on an Azure Database for PostgreSQL flexible server instance. For example, changing the SSLMODE parameter to "required" on Azure Database for PostgreSQL flexible server might cause certain applications that rely on SSLMODE=FALSE to fail. Deploying PgBouncer on a separate VM allows you to maintain the default server configuration while still using PgBouncer's benefits.
By considering these benefits, deploying PgBouncer on a VM offers a convenient and efficient solution for enhancing the performance and compatibility of your application running on Azure infrastructure.
By considering these benefits, deploying PgBouncer on a VM offers a convenient a
**Limitations:**

- **Management overhead:** As **PgBouncer** is installed in the VM, there might be management overhead to manage multiple configuration files. This makes it difficult to keep up with version upgrades, new releases, and product updates.
-- **Feature parity:** If you're migrating from traditional PostgreSQL to Azure PostgreSQL and using **PgBouncer**, there might be some features gaps. For example, lack of md5 support in Azure PostgreSQL.
+- **Feature parity:** If you're migrating from traditional PostgreSQL to Azure Database for PostgreSQL flexible server and using **PgBouncer**, there might be some feature gaps, for example, lack of MD5 support in Azure Database for PostgreSQL flexible server.
### II. Centralized PgBouncer deployed as a service within AKS
If you're working with highly scalable and large containerized deployments on Az
By utilizing **PgBouncer** as a separate service, you can efficiently manage and handle connection pooling for your applications on a broader scale. This approach allows for centralizing the connection pooling functionality, enabling multiple applications to connect to the same database resource while maintaining optimal performance and resource utilization.
-[**PgBouncer** sidecar proxy image](https://hub.docker.com/_/microsoft-azure-oss-db-tools-pgbouncer-sidecar) published in Microsoft container registry can be used to create and deploy a service.
+The **PgBouncer** sidecar proxy image published in the Microsoft container registry can be used to create and deploy a service.
:::image type="content" source="./media/concepts-connection-pooling-best-practices/centralized-aks.png" alt-text="Diagram for PgBouncer as a service within AKS.":::
By considering **PgBouncer** as a standalone service within AKS, you can use the
While **PgBouncer** running as a standalone service offers benefits such as centralized management and resource optimization, it's important to assess the impact of potential latency on your application's performance to ensure it aligns with your specific requirements.
-## 3. Inbuilt PgBouncer in Azure Database for PostgreSQL Flexible Server
+## 3. Built-in PgBouncer in Azure Database for PostgreSQL flexible server
-Azure Database for PostgreSQL ΓÇô Flexible Server offers [PgBouncer](https://github.com/pgbouncer/pgbouncer) as a built-in connection pooling solution. This is offered as an optional service that can be enabled on a per-database server basis. PgBouncer runs in the same virtual machine as the Postgres database server. As the number of connections increases beyond a few hundreds or thousand, Postgres may encounter resource limitations. In such cases, built-in PgBouncer can provide a significant advantage by improving the management of idle and short-lived connections at the database server.
+Azure Database for PostgreSQL flexible server offers [PgBouncer](https://github.com/pgbouncer/pgbouncer) as a built-in connection pooling solution. This is offered as an optional service that can be enabled on a per-database server basis. PgBouncer runs in the same virtual machine as the Azure Database for PostgreSQL flexible server instance. As the number of connections increases beyond a few hundred or thousand, Azure Database for PostgreSQL flexible server may encounter resource limitations. In such cases, built-in PgBouncer can provide a significant advantage by improving the management of idle and short-lived connections at the database server.
-Refer link to enable and set up PgBouncer connection pooling in Azure DB for PostgreSQL Flexible server.
+Refer to the documentation on how to enable and set up built-in PgBouncer connection pooling in Azure Database for PostgreSQL flexible server.
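Once the built-in PgBouncer is enabled, pointing an application at it is typically just a port change. The following is a minimal sketch; the server name, database, and credentials are hypothetical, and the built-in pooler listens on port 6432 by default while the database server itself remains on 5432.

```python
# Sketch only: connect through the built-in PgBouncer endpoint (port 6432)
# instead of directly to the database server (port 5432).
import psycopg2

conn = psycopg2.connect(
    host="myserver.postgres.database.azure.com",  # hypothetical server name
    port=6432,                                    # built-in PgBouncer port
    dbname="appdb",
    user="dbadmin",
    password="<password>",
    sslmode="require",
)
with conn, conn.cursor() as cur:
    cur.execute("SELECT 1;")
    print(cur.fetchone())
conn.close()
```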
Some of the key benefits & limitations of this deployment method are:

**Benefits:**

-- **Seamless Configuration:** With the inbuilt **PgBouncer** in Flexible Server, there is no need for a separate installation or complex setup. It can be easily configured directly from the server parameters, ensuring a hassle-free experience.
+- **Seamless Configuration:** With the built-in **PgBouncer** in Azure Database for PostgreSQL flexible server, there's no need for a separate installation or complex setup. It can be easily configured directly from the server parameters, ensuring a hassle-free experience.
- **Managed Service Convenience:** As a managed service, users can enjoy the advantages of other Azure managed services. This includes automatic updates, eliminating the need for manual maintenance and ensuring that **PgBouncer** stays up to date with the latest features and security patches.
-- **Public and Private Connection Support:** The inbuilt **PgBouncer** in Flexible Server provides support for both public and private connections. This allows users to establish secure connections over private networks or connect externally, depending on their specific requirements.
+- **Public and Private Connection Support:** The built-in **PgBouncer** in Azure Database for PostgreSQL flexible server provides support for both public and private connections. This allows users to establish secure connections over private networks or connect externally, depending on their specific requirements.
- **High Availability (HA):** In the event of a failover, where a standby server is promoted to the primary role, **PgBouncer** seamlessly restarts on the newly promoted standby without any changes required to the application connection string. This ensures continuous availability and minimizes disruption to the application.
-- **Cost Efficient:** It's cost efficient as the users don't need to pay for extra compute like VM or the containers. Though it does have some CPU impact as it's another process running on the same machine.
+- **Cost Efficient:** It's cost efficient as users don't need to pay for extra compute such as a VM or containers, though it does have some CPU impact as it's another process running on the same machine.
-With inbuilt PgBouncer in Flexible Server, users can enjoy the convenience of simplified configuration, the reliability of a managed service, support for various pooling modes, and seamless high availability during failover scenarios.
+With built-in PgBouncer in Azure Database for PostgreSQL flexible server, users can enjoy the convenience of simplified configuration, the reliability of a managed service, support for various pooling modes, and seamless high availability during failover scenarios.
**Limitations:**

- **Not supported with Burstable:** **PgBouncer** is currently not supported with the Burstable server compute tier. If you change the compute tier from General Purpose or Memory Optimized to the Burstable tier, you lose the **PgBouncer** capability.
- **Re-establish connections after restarts:** Whenever the server is restarted during scale operations, HA failover, or a restart, **PgBouncer** is also restarted along with the server virtual machine. Hence, existing connections must be re-established.
-_We have discussed different ways of implementing PgBouncer and the table summarizes which deployment method to opt for:_
+We have discussed different ways of implementing PgBouncer, and the following table summarizes which deployment method to opt for:
-|**Selection Criteria**|**PgBouncer on App VM**|**PgBouncer on VM using ALB***|**PgBouncer on AKS Sidecar**|**PgBouncer as a Service**|**Flexible Server Inbuilt PgBouncer**|
+|**Selection Criteria**|**PgBouncer on App VM**|**PgBouncer on VM using ALB***|**PgBouncer on AKS Sidecar**|**PgBouncer as a Service**|**Azure Database for PostgreSQL flexible server built-in PgBouncer**|
||:-:|:-:|:-:|:-:|:-:| |Simplified Management|:::image type="icon" source="./media/concepts-connection-pooling-best-practices/yellow.png":::|:::image type="icon" source="./media/concepts-connection-pooling-best-practices/yellow.png":::|:::image type="icon" source="./media/concepts-connection-pooling-best-practices/red.png":::|:::image type="icon" source="./media/concepts-connection-pooling-best-practices/red.png":::|:::image type="icon" source="./media/concepts-connection-pooling-best-practices/green.png":::| |HA|:::image type="icon" source="./media/concepts-connection-pooling-best-practices/yellow.png":::|:::image type="icon" source="./media/concepts-connection-pooling-best-practices/yellow.png":::|:::image type="icon" source="./media/concepts-connection-pooling-best-practices/green.png":::|:::image type="icon" source="./media/concepts-connection-pooling-best-practices/green.png":::|:::image type="icon" source="./media/concepts-connection-pooling-best-practices/green.png":::|
postgresql Concepts Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-connectivity.md
Title: Handle transient connectivity errors - Azure Database for PostgreSQL - Flexible Server
+ Title: Handle transient connectivity errors
description: Learn how to handle transient connectivity errors for Azure Database for PostgreSQL - Flexible Server.
Last updated 03/22/2023
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-This article describes how to handle transient errors connecting to Azure Database for PostgreSQL.
+This article describes how to handle transient errors connecting to Azure Database for PostgreSQL flexible server.
## Transient errors
Transient errors should be handled using retry logic. Situations that must be co
* An idle connection is dropped on the server side. When you try to issue a command, it can't be executed * An active connection that currently is executing a command is dropped.
-The first and second cases are fairly straight forward to handle. Try to open the connection again. When you succeed, the transient error has been mitigated by the system. You can use your Azure Database for PostgreSQL again. We recommend having waits before retrying the connection. Back off if the initial retries fail. This way the system can use all resources available to overcome the error situation. A good pattern to follow is:
+The first and second cases are fairly straightforward to handle. Try to open the connection again. When you succeed, the transient error has been mitigated by the system. You can use your Azure Database for PostgreSQL flexible server instance again. We recommend waiting before retrying the connection. Back off if the initial retries fail. This way the system can use all resources available to overcome the error situation. A good pattern to follow is:
* Wait for 5 seconds before your first retry. * For each following retry, increase the wait exponentially, up to 60 seconds.
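A minimal sketch of this backoff pattern, assuming the `psycopg2` driver and a placeholder connection string, might look like this:

```python
import time
import psycopg2

def connect_with_retry(dsn, max_attempts=6):
    """Open a connection, backing off exponentially between attempts."""
    wait = 5  # seconds before the first retry, per the guidance above
    for attempt in range(1, max_attempts + 1):
        try:
            return psycopg2.connect(dsn)
        except psycopg2.OperationalError:
            if attempt == max_attempts:
                raise  # give up after the final attempt
            time.sleep(wait)
            wait = min(wait * 2, 60)  # grow the wait, capped at 60 seconds

# Placeholder DSN; replace with your own server, user, and password.
conn = connect_with_retry(
    "host=<server>.postgres.database.azure.com dbname=postgres "
    "user=<admin-user> password=<password> sslmode=require"
)
```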
When a connection with an active transaction fails, it is more difficult to hand
One way of doing this is to generate a unique ID on the client that is used for all the retries. You pass this unique ID as part of the transaction to the server and store it in a column with a unique constraint. This way you can safely retry the transaction. It succeeds if the previous transaction was rolled back and the client-generated unique ID doesn't yet exist in the system. It fails with a duplicate key violation if the unique ID was previously stored because the previous transaction completed successfully.
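The following is a hedged sketch of that idea, assuming `psycopg2` and a hypothetical `orders` table whose `client_request_id` column carries a unique constraint; none of these names come from the article:

```python
import uuid
import psycopg2
from psycopg2 import errors

# Hypothetical table:
#   CREATE TABLE orders (
#       id bigserial PRIMARY KEY,
#       client_request_id uuid UNIQUE NOT NULL,
#       amount numeric NOT NULL);

request_id = str(uuid.uuid4())  # generated once and reused for every retry

def record_order(conn, amount):
    """Return True when the order is durably stored, False to signal a retry."""
    try:
        with conn, conn.cursor() as cur:  # commits on success, rolls back on error
            cur.execute(
                "INSERT INTO orders (client_request_id, amount) VALUES (%s, %s)",
                (request_id, amount),
            )
        return True    # this attempt committed
    except errors.UniqueViolation:
        return True    # an earlier attempt already committed
    except psycopg2.OperationalError:
        return False   # transient failure; reconnect and retry with the same ID
```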
-When your program communicates with Azure Database for PostgreSQL through third-party middleware, ask the vendor whether the middleware contains retry logic for transient errors.
+When your program communicates with Azure Database for PostgreSQL flexible server through third-party middleware, ask the vendor whether the middleware contains retry logic for transient errors.
-Make sure to test your retry logic. For example, try to execute your code while scaling up or down the compute resources of your Azure Database for PostgreSQL server. Your application should handle the brief downtime that is encountered during this operation without any problems.
+Make sure to test your retry logic. For example, try to execute your code while scaling up or down the compute resources of your Azure Database for PostgreSQL flexible server instance. Your application should handle the brief downtime that is encountered during this operation without any problems.
postgresql Concepts Data Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-data-encryption.md
Title: Data encryption with customer-managed key - Azure Database for PostgreSQL - Flexible Server
-description: Azure Database for PostgreSQL Flexible Server data encryption with a customer-managed key enables you to Bring Your Own Key (BYOK) for data protection at rest. It also allows organizations to implement separation of duties in the management of keys and data.
+ Title: Data encryption with customer-managed key
+description: Azure Database for PostgreSQL - Flexible Server data encryption with a customer-managed key enables you to Bring Your Own Key (BYOK) for data protection at rest. It also allows organizations to implement separation of duties in the management of keys and data.
-# Azure Database for PostgreSQL - Flexible Server Data Encryption with a Customer-managed Key
+# Azure Database for PostgreSQL - Flexible Server data encryption with a customer-managed key
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-Azure PostgreSQL uses [Azure Storage encryption](../../storage/common/storage-service-encryption.md) to encrypt data at-rest by default using Microsoft-managed keys. For Azure PostgreSQL users, it's similar to Transparent Data Encryption (TDE) in other databases such as SQL Server. Many organizations require full control of access to the data using a customer-managed key. Data encryption with customer-managed keys for Azure Database for PostgreSQL Flexible Server enables you to bring your key (BYOK) for data protection at rest. It also allows organizations to implement separation of duties in the management of keys and data. With customer-managed encryption, you're responsible for, and in full control of, a key's lifecycle, key usage permissions, and auditing of operations on keys.
+Azure Database for PostgreSQL flexible server uses [Azure Storage encryption](../../storage/common/storage-service-encryption.md) to encrypt data at-rest by default using Microsoft-managed keys. For Azure Database for PostgreSQL flexible server users, it's similar to Transparent Data Encryption (TDE) in other databases such as SQL Server. Many organizations require full control of access to the data using a customer-managed key. Data encryption with customer-managed keys for Azure Database for PostgreSQL flexible server enables you to bring your key (BYOK) for data protection at rest. It also allows organizations to implement separation of duties in the management of keys and data. With customer-managed encryption, you're responsible for, and in full control of, a key's lifecycle, key usage permissions, and auditing of operations on keys.
-Data encryption with customer-managed keys for Azure Database for PostgreSQL Flexible Server is set at the server level. For a given server, a customer-managed key, called the key encryption key (KEK), is used to encrypt the service's data encryption key (DEK). The KEK is an asymmetric key stored in a customer-owned and customer-managed [Azure Key Vault](https://azure.microsoft.com/services/key-vault/)) instance. The Key Encryption Key (KEK) and Data Encryption Key (DEK) are described in more detail later in this article.
+Data encryption with customer-managed keys for Azure Database for PostgreSQL flexible server is set at the server level. For a given server, a customer-managed key, called the key encryption key (KEK), is used to encrypt the service's data encryption key (DEK). The KEK is an asymmetric key stored in a customer-owned and customer-managed [Azure Key Vault](https://azure.microsoft.com/services/key-vault/) instance. The Key Encryption Key (KEK) and Data Encryption Key (DEK) are described in more detail later in this article.
Key Vault is a cloud-based, external key management system. It's highly available and provides scalable, secure storage for RSA cryptographic keys, optionally backed by FIPS 140-2 Level 2 validated hardware security modules (HSMs). It doesn't allow direct access to a stored key but provides encryption and decryption services to authorized entities. Key Vault can generate the key, import it, or have it transferred from an on-premises HSM device. ## Benefits
-Data encryption with customer-managed keys for Azure Database for PostgreSQL - Flexible Server provides the following benefits:
+Data encryption with customer-managed keys for Azure Database for PostgreSQL flexible server provides the following benefits:
- You fully control data-access by the ability to remove the key and make the database inaccessible.
The key vault administrator can also [enable logging of Key Vault audit events](
When the server is configured to use the customer-managed key stored in the Key Vault, the server sends the DEK to the Key Vault for encryption. Key Vault returns the encrypted DEK, which is stored in the user database. Similarly, when needed, the server sends the protected DEK to the Key Vault for decryption. Auditors can use Azure Monitor to review Key Vault audit event logs, if logging is enabled.
-## Requirements for configuring data encryption for Azure Database for PostgreSQL Flexible Server
+## Requirements for configuring data encryption for Azure Database for PostgreSQL flexible server
The following are requirements for configuring Key Vault: -- Key Vault and Azure Database for PostgreSQL Flexible Server must belong to the same Microsoft Entra tenant. Cross-tenant Key Vault and server interactions aren't supported. Moving the Key Vault resource afterward requires you to reconfigure the data encryption.
+- Key Vault and Azure Database for PostgreSQL flexible server must belong to the same Microsoft Entra tenant. Cross-tenant Key Vault and server interactions aren't supported. Moving the Key Vault resource afterward requires you to reconfigure the data encryption.
- The Key Vault must be set with 90 days for 'Days to retain deleted vaults'. If the existing Key Vault has been configured with a lower number, you need to create a new key vault because this setting can't be modified after creation.
The following are requirements for configuring Key Vault:
- Enable Purge protection to enforce a mandatory retention period for deleted vaults and vault objects -- Grant the Azure Database for PostgreSQL Flexible Server access to the key Vault with the get, list, wrapKey, and unwrapKey permissions using its unique managed identity.
+- Grant the Azure Database for PostgreSQL flexible server instance access to the key Vault with the get, list, wrapKey, and unwrapKey permissions using its unique managed identity.
-The following are requirements for configuring the customer-managed key in Flexible Server:
+The following are requirements for configuring the customer-managed key in Azure Database for PostgreSQL flexible server:
- The customer-managed key to be used for encrypting the DEK can be only asymmetric, RSA or RSA-HSM. Key sizes of 2048, 3072, and 4096 are supported.
When you're using data encryption by using a customer-managed key, here are reco
- Enable auditing and reporting on all encryption keys. Key Vault provides logs that are easy to inject into other security information and event management tools. Azure Monitor Log Analytics is one example of a service that's already integrated. -- Ensure that Key Vault and Azure Database for PostgreSQL = Flexible Server reside in the same region to ensure a faster access for DEK wrap, and unwrap operations.
+- Ensure that Key Vault and Azure Database for PostgreSQL flexible server reside in the same region to ensure a faster access for DEK wrap, and unwrap operations.
- Lock down the Azure Key Vault by **disabling public access** and allowing only *trusted Microsoft services* to secure the resources.
To monitor the database state, and to enable alerting for the loss of transparen
## Restore and replicate with a customer's managed key in Key Vault
-After Azure Database for PostgreSQL - Flexible Server is encrypted with a customer's managed key stored in Key Vault, any newly created server copy is also encrypted. You can make this new copy through a [PITR restore](concepts-backup-restore.md) operation or read replicas.
+After Azure Database for PostgreSQL flexible server is encrypted with a customer's managed key stored in Key Vault, any newly created server copy is also encrypted. You can make this new copy through a [PITR restore](concepts-backup-restore.md) operation or read replicas.
Avoid issues while setting up customer-managed data encryption during restore or read replica creation by following these steps on the primary and restored/replica servers: -- Initiate the restore or read replica creation process from the primary Azure Database for PostgreSQL - Flexible Server.
+- Initiate the restore or read replica creation process from the primary Azure Database for PostgreSQL flexible server instance.
- On the restored/replica server, you can change the customer-managed key and/or Microsoft Entra identity used to access Azure Key Vault in the data encryption settings. Ensure that the newly created server is given list, wrap, and unwrap permissions to the key stored in Key Vault.
Avoid issues while setting up customer-managed data encryption during restore or
**Hardware security modules (HSMs)** are hardened, tamper-resistant hardware devices that secure cryptographic processes by generating, protecting, and managing keys used for encrypting and decrypting data and creating digital signatures and certificates. HSMs are tested, validated and certified to the highest security standards including FIPS 140-2 and Common Criteria. Azure Key Vault Managed HSM (Hardware Security Module) is a fully managed, highly available, single-tenant, standards-compliant cloud service that enables you to safeguard cryptographic keys for your cloud applications, using FIPS 140-2 Level 3 validated HSMs.
-You can pick **Azure Key Vault Managed HSM** as key store when creating new PostgreSQL Flexible Server in Azure Portal with Customer Managed Key (CMK) feature, as alternative to **Azure Key Vault**. The prerequisites in terms of user defined identity and permissions are same as with Azure Key Vault, as already listed [above](#requirements-for-configuring-data-encryption-for-azure-database-for-postgresql-flexible-server). More information on how to create Azure Key Vault Managed HSM, its advantages and differences with shared Azure Key Vault based certificate store, as well as how to import keys into AKV Managed HSM is available [here](../../key-vault/managed-hsm/overview.md).
+You can pick **Azure Key Vault Managed HSM** as the key store when creating new Azure Database for PostgreSQL flexible server instances in the Azure portal with the Customer Managed Key (CMK) feature, as an alternative to **Azure Key Vault**. The prerequisites in terms of user-defined identity and permissions are the same as with Azure Key Vault, as already listed [above](#requirements-for-configuring-data-encryption-for-azure-database-for-postgresql-flexible-server). More information on how to create Azure Key Vault Managed HSM, its advantages and differences from the shared Azure Key Vault based certificate store, as well as how to import keys into AKV Managed HSM, is available [here](../../key-vault/managed-hsm/overview.md).
## Inaccessible customer-managed key condition When you configure data encryption with a customer-managed key in Key Vault, continuous access to this key is required for the server to stay online. If the server loses access to the customer-managed key in Key Vault, the server begins denying all connections within 10 minutes. The server issues a corresponding error message, and changes the server state to *Inaccessible*. Some of the reasons why server state can become *Inaccessible* are: -- If you delete the KeyVault, the Azure Database for PostgreSQL - Flexible Server will be unable to access the key and will move to *Inaccessible* state. [Recover the Key Vault](../../key-vault/general/key-vault-recovery.md) and revalidate the data encryption to make the server *Available*.-- If you delete the key from the KeyVault, the Azure Database for PostgreSQL- Flexible Server will be unable to access the key and will move to *Inaccessible* state. [Recover the Key](../../key-vault/general/key-vault-recovery.md) and revalidate the data encryption to make the server *Available*.-- If you delete [managed identity](../../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md) from Microsoft Entra ID that is used to retrieve a key from KeyVault, the Azure Database for PostgreSQL- Flexible Server will be unable to access the key and will move to *Inaccessible* state.[Recover the identity](../../active-directory/fundamentals/recover-from-deletions.md) and revalidate data encryption to make server *Available*. -- If you revoke the Key Vault's list, get, wrapKey, and unwrapKey access policies from the [managed identity](../../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md) that is used to retrieve a key from KeyVault, the Azure Database for PostgreSQL- Flexible Server will be unable to access the key and will move to *Inaccessible* state. [Add required access policies](../../key-vault/general/assign-access-policy.md) to the identity in KeyVault. -- If you set up overly restrictive Azure KeyVault firewall rules that cause Azure Database for PostgreSQL- Flexible Server inability to communicate with Azure KeyVault to retrieve keys. If you enable [KeyVault firewall](../../key-vault/general/overview-vnet-service-endpoints.md#trusted-services), make sure you check an option to *'Allow Trusted Microsoft Services to bypass this firewall.'*
+- If you delete the KeyVault, the Azure Database for PostgreSQL flexible server instance will be unable to access the key and will move to *Inaccessible* state. [Recover the Key Vault](../../key-vault/general/key-vault-recovery.md) and revalidate the data encryption to make the server *Available*.
+- If you delete the key from the KeyVault, the Azure Database for PostgreSQL flexible server instance will be unable to access the key and will move to *Inaccessible* state. [Recover the Key](../../key-vault/general/key-vault-recovery.md) and revalidate the data encryption to make the server *Available*.
+- If you delete the [managed identity](../../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md) from Microsoft Entra ID that is used to retrieve a key from KeyVault, the Azure Database for PostgreSQL flexible server instance will be unable to access the key and will move to *Inaccessible* state. [Recover the identity](../../active-directory/fundamentals/recover-from-deletions.md) and revalidate data encryption to make the server *Available*.
+- If you revoke the Key Vault's list, get, wrapKey, and unwrapKey access policies from the [managed identity](../../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md) that is used to retrieve a key from KeyVault, the Azure Database for PostgreSQL flexible server instance will be unable to access the key and will move to *Inaccessible* state. [Add required access policies](../../key-vault/general/assign-access-policy.md) to the identity in KeyVault.
+- If you set up overly restrictive Azure KeyVault firewall rules that prevent the Azure Database for PostgreSQL flexible server instance from communicating with Azure KeyVault to retrieve keys. If you enable the [KeyVault firewall](../../key-vault/general/overview-vnet-service-endpoints.md#trusted-services), make sure you select the option *'Allow Trusted Microsoft Services to bypass this firewall.'*
> [!NOTE] > When a key is either disabled, deleted, expired, or not reachable, the server with data encrypted using that key becomes **inaccessible**, as stated above. The server won't become available until the key is enabled again or you assign a new key.
Some of the reasons why server state can become *Inaccessible* are:
## Using Data Encryption with Customer Managed Key (CMK) and Geo-redundant Business Continuity features, such as Replicas and Geo-redundant backup
-Azure Database for PostgreSQL - Flexible Server supports advanced [Data Recovery (DR)](../flexible-server/concepts-business-continuity.md) features, such as [Replicas](../../postgresql/flexible-server/concepts-read-replicas.md) and [geo-redundant backup](../flexible-server/concepts-backup-restore.md). Following are requirements for setting up data encryption with CMK and these features, additional to [basic requirements for data encryption with CMK](#requirements-for-configuring-data-encryption-for-azure-database-for-postgresql-flexible-server):
+Azure Database for PostgreSQL flexible server supports advanced [Data Recovery (DR)](../flexible-server/concepts-business-continuity.md) features, such as [Replicas](../../postgresql/flexible-server/concepts-read-replicas.md) and [geo-redundant backup](../flexible-server/concepts-backup-restore.md). Following are requirements for setting up data encryption with CMK and these features, additional to [basic requirements for data encryption with CMK](#requirements-for-configuring-data-encryption-for-azure-database-for-postgresql-flexible-server):
* The Geo-redundant backup encryption key needs to be created in an Azure Key Vault (AKV) in the region where the Geo-redundant backup is stored * The [Azure Resource Manager (ARM) REST API](../../azure-resource-manager/management/overview.md) version for supporting Geo-redundant backup enabled CMK servers is '2022-11-01-preview'. Therefore, when using [ARM templates](../../azure-resource-manager/templates/overview.md) to automate the creation of servers that use both encryption with CMK and geo-redundant backup, use this ARM API version.
Azure Database for PostgreSQL - Flexible Server supports advanced [Data Recovery
## Limitations
-The following are current limitations for configuring the customer-managed key in Flexible Server:
+The following are current limitations for configuring the customer-managed key in Azure Database for PostgreSQL flexible server:
-- CMK encryption can only be configured during creation of a new server, not as an update to the existing Flexible Server. You can [restore PITR backup to new server with CMK encryption](./concepts-backup-restore.md#point-in-time-recovery) instead.
+- CMK encryption can only be configured during creation of a new server, not as an update to the existing Azure Database for PostgreSQL flexible server instance. You can [restore PITR backup to new server with CMK encryption](./concepts-backup-restore.md#point-in-time-recovery) instead.
- Once enabled, CMK encryption can't be removed. If customer desires to remove this feature, it can only be done via [restore of the server to non-CMK server](./concepts-backup-restore.md#point-in-time-recovery).
postgresql Concepts Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-extensions.md
Title: Extensions - Azure Database for PostgreSQL - Flexible Server
-description: Learn about the available PostgreSQL extensions in Azure Database for PostgreSQL - Flexible Server
+ Title: Extensions
+description: Learn about the available PostgreSQL extensions in Azure Database for PostgreSQL - Flexible Server.
Last updated 1/8/2024
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-PostgreSQL provides the ability to extend the functionality of your database using extensions. Extensions bundle multiple related SQL objects in a single package that can be loaded or removed from your database with a command. After being loaded into the database, extensions function like built-in features.
+Azure Database for PostgreSQL flexible server provides the ability to extend the functionality of your database using extensions. Extensions bundle multiple related SQL objects in a single package that can be loaded or removed from your database with a command. After being loaded into the database, extensions function like built-in features.
## How to use PostgreSQL extensions
-Before installing extensions in Azure Database for PostgreSQL - Flexible Server, you'll need to allowlist these extensions for use.
+Before installing extensions in Azure Database for PostgreSQL flexible server, you need to allowlist these extensions for use.
Using the [Azure portal](https://portal.azure.com):
- 1. Select your Azure Database for PostgreSQL - Flexible Server.
+ 1. Select your Azure Database for PostgreSQL flexible server instance.
1. On the sidebar, select **Server Parameters**. 1. Search for the `azure.extensions` parameter. 1. Select extensions you wish to allowlist.
- :::image type="content" source="./media/concepts-extensions/allow-list.png" alt-text="Screenshot showing Azure Database for PostgreSQL - allow-listing extensions for installation." lightbox="./media/concepts-extensions/allow-list.png":::
+ :::image type="content" source="./media/concepts-extensions/allow-list.png" alt-text="Screenshot showing Azure Database for PostgreSQL flexible server - allow-listing extensions for installation." lightbox="./media/concepts-extensions/allow-list.png":::
Using [Azure CLI](/cli/azure/):
az postgres flexible-server parameter set --resource-group <your resource group>
} ```
-`shared_preload_libraries` is a server configuration parameter determining which libraries are to be loaded when PostgreSQL starts. Any libraries, which use shared memory must be loaded via this parameter. If your extension needs to be added to shared preload libraries this action can be done:
+`shared_preload_libraries` is a server configuration parameter determining which libraries are to be loaded when Azure Database for PostgreSQL flexible server starts. Any libraries that use shared memory must be loaded via this parameter. If your extension needs to be added to shared preload libraries, you can do so as follows:
Using the [Azure portal](https://portal.azure.com):
- 1. Select your Azure Database for PostgreSQL - Flexible Server.
+ 1. Select your Azure Database for PostgreSQL flexible server instance.
1. On the sidebar, select **Server Parameters**. 1. Search for the `shared_preload_libraries` parameter. 1. Select extensions you wish to add.
az postgres flexible-server parameter set --resource-group <your resource group>
After extensions are allow-listed and loaded, these must be installed in your database before you can use them. To install a particular extension, you should run the [CREATE EXTENSION](https://www.postgresql.org/docs/current/sql-createextension.html) command. This command loads the packaged objects into your database. > [!NOTE]
-> Third party extensions offered in Azure Database for PostgreSQL - Flexible Server are open source licensed code. Currently, we don't offer any third party extensions or extension versions with premium or proprietary licensing models.
+> Third party extensions offered in Azure Database for PostgreSQL flexible server are open source licensed code. Currently, we don't offer any third party extensions or extension versions with premium or proprietary licensing models.
-Azure Database for PostgreSQL supports a subset of key PostgreSQL extensions as listed below. This information is also available by running `SHOW azure.extensions;`. Extensions not listed in this document aren't supported on Azure Database for PostgreSQL - Flexible Server. You can't create or load your own extension in Azure Database for PostgreSQL.
+Azure Database for PostgreSQL flexible server supports a subset of key PostgreSQL extensions as listed below. This information is also available by running `SHOW azure.extensions;`. Extensions not listed in this document aren't supported on Azure Database for PostgreSQL flexible server. You can't create or load your own extension in Azure Database for PostgreSQL flexible server.
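For example, a short sketch (assuming `psycopg2`, a placeholder connection string, and `pg_trgm` as the extension being installed) that inspects the allowlist and then installs an extension from it:

```python
import psycopg2

# Placeholder connection details; replace with your own server values.
conn = psycopg2.connect(
    "host=<server>.postgres.database.azure.com dbname=postgres "
    "user=<admin-user> password=<password> sslmode=require"
)
with conn, conn.cursor() as cur:
    # Show the extensions currently allowlisted on the server.
    cur.execute("SHOW azure.extensions;")
    print("Allowlisted:", cur.fetchone()[0])

    # Install an allowlisted extension into the current database.
    # This fails if pg_trgm has not been added to azure.extensions first.
    cur.execute("CREATE EXTENSION IF NOT EXISTS pg_trgm;")
conn.close()
```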
## Extension versions
-The following extensions are available in Azure Database for PostgreSQL - Flexible Servers
+The following extensions are available in Azure Database for PostgreSQL flexible server:
|**Extension Name** |**Description** |**Postgres 16**|**Postgres 15**|**Postgres 14**|**Postgres 13**|**Postgres 12**|**Postgres 11**| |--||--|--|--|--|--||
The following extensions are available in Azure Database for PostgreSQL - Flexib
## dblink and postgres_fdw
-[dblink](https://www.postgresql.org/docs/current/contrib-dblink-function.html) and [postgres_fdw](https://www.postgresql.org/docs/current/postgres-fdw.html) allow you to connect from one PostgreSQL server to another, or to another database in the same server. Flexible server supports both incoming and outgoing connections to any PostgreSQL server. The sending server needs to allow outbound connections to the receiving server. Similarly, the receiving server needs to allow connections from the sending server.
+[dblink](https://www.postgresql.org/docs/current/contrib-dblink-function.html) and [postgres_fdw](https://www.postgresql.org/docs/current/postgres-fdw.html) allow you to connect from one Azure Database for PostgreSQL flexible server instance to another, or to another database in the same server. Azure Database for PostgreSQL flexible server supports both incoming and outgoing connections to any PostgreSQL server. The sending server needs to allow outbound connections to the receiving server. Similarly, the receiving server needs to allow connections from the sending server.
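As an illustrative sketch only (placeholder server, credential, and table names; dblink is assumed to be allowlisted and installed on the sending server), a cross-server query through dblink can look like this:

```python
import psycopg2

# Connect to the sending server (placeholder values).
conn = psycopg2.connect(
    "host=<sending-server>.postgres.database.azure.com dbname=postgres "
    "user=<admin-user> password=<password> sslmode=require"
)
# Connection string dblink uses to reach the receiving server.
remote_conninfo = (
    "host=<receiving-server>.postgres.database.azure.com dbname=postgres "
    "user=<remote-user> password=<remote-password> sslmode=require"
)
with conn, conn.cursor() as cur:
    # Run a query against a hypothetical table on the receiving server.
    cur.execute(
        "SELECT * FROM dblink(%s, 'SELECT id, name FROM inventory') "
        "AS t(id integer, name text);",
        (remote_conninfo,),
    )
    for row in cur.fetchall():
        print(row)
conn.close()
```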
We recommend deploying your servers with [virtual network integration](concepts-networking.md) if you plan to use these two extensions. By default virtual network integration allows connections between servers in the virtual network. You can also choose to use [virtual network network security groups](../../virtual-network/manage-network-security-group.md) to customize access. ## pg_prewarm
-The pg_prewarm extension loads relational data into cache. Prewarming your caches means that your queries have better response times on their first run after a restart. The auto-prewarm functionality isn't currently available in Azure Database for PostgreSQL - Flexible Server.
+The pg_prewarm extension loads relational data into cache. Prewarming your caches means that your queries have better response times on their first run after a restart. The auto-prewarm functionality isn't currently available in Azure Database for PostgreSQL flexible server.
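A brief sketch of what prewarming looks like in practice, assuming `psycopg2`, a hypothetical `orders` table, and that pg_prewarm has already been allowlisted:

```python
import psycopg2

# Placeholder connection details; replace with your own server values.
conn = psycopg2.connect(
    "host=<server>.postgres.database.azure.com dbname=postgres "
    "user=<admin-user> password=<password> sslmode=require"
)
with conn, conn.cursor() as cur:
    cur.execute("CREATE EXTENSION IF NOT EXISTS pg_prewarm;")
    # Load a hypothetical table into shared buffers; the call returns the
    # number of blocks that were read into the cache.
    cur.execute("SELECT pg_prewarm('orders');")
    print("Blocks prewarmed:", cur.fetchone()[0])
conn.close()
```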
## pg_cron
SELECT cron.schedule_in_database('VACUUM','0 10 * * * ','VACUUM','testcron',null
``` > [!NOTE]
-> pg_cron extension is preloaded in shared_preload_libraries for every Azure Database for PostgreSQL -Flexible Server inside postgres database to provide you with ability to schedule jobs to run in other databases within your PostgreSQL DB instance without compromising security. However, for security reasons, you still have to [allow list](#how-to-use-postgresql-extensions) pg_cron extension and install it using [CREATE EXTENSION](https://www.postgresql.org/docs/current/sql-createextension.html) command.
+> The pg_cron extension is preloaded in shared_preload_libraries for every Azure Database for PostgreSQL flexible server instance inside the postgres database to provide you with the ability to schedule jobs to run in other databases within your Azure Database for PostgreSQL flexible server instance without compromising security. However, for security reasons, you still have to [allow list](#how-to-use-postgresql-extensions) the pg_cron extension and install it using the [CREATE EXTENSION](https://www.postgresql.org/docs/current/sql-createextension.html) command.
Starting with pg_cron version 1.4, you can use the cron.schedule_in_database and cron.alter_job functions to schedule your job in a specific database and update an existing schedule respectively.
To delete old data on Saturday at 3:30am (GMT) on database DBName
SELECT cron.schedule_in_database('JobName', '30 3 * * 6', $$DELETE FROM events WHERE event_time < now() - interval '1 week'$$,'DBName'); ``` > [!NOTE]
-> cron_schedule_in_database function allows for user name as optional parameter. Setting the username to a non-null value requires PostgreSQL superuser privilege and is not supported in Azure Database for PostgreSQL - Flexible Server. Above examples show running this function with optional user name parameter ommitted or set to null, which runs the job in context of user scheduling the job, which should have azure_pg_admin role priviledges.
+> The cron.schedule_in_database function allows the user name as an optional parameter. Setting the user name to a non-null value requires PostgreSQL superuser privilege and isn't supported in Azure Database for PostgreSQL flexible server. The preceding examples show running this function with the optional user name parameter omitted or set to null, which runs the job in the context of the user scheduling the job, who should have azure_pg_admin role privileges.
To update or change the database name for the existing schedule
select cron.alter_job(job_id:=MyJobID,database:='NewDBName');
## pg_failover_slots (preview)
-The PG Failover Slots extension enhances Azure Database for PostgreSQL when operating with both logical replication and high availability enabled servers. It effectively addresses the challenge within the standard PostgreSQL engine that doesn't preserve logical replication slots after a failover. Maintaining these slots is critical to prevent replication pauses or data mismatches during primary server role changes, ensuring operational continuity and data integrity.
+The PG Failover Slots extension enhances Azure Database for PostgreSQL flexible server when operating with both logical replication and high availability enabled servers. It effectively addresses the challenge within the standard PostgreSQL engine that doesn't preserve logical replication slots after a failover. Maintaining these slots is critical to prevent replication pauses or data mismatches during primary server role changes, ensuring operational continuity and data integrity.
The extension streamlines the failover process by managing the necessary transfer, cleanup, and synchronization of replication slots, thus providing a seamless transition during server role changes. The extension is supported for PostgreSQL versions 11 to 15.
You can find more information and how to use the PG Failover Slots extension on
### Enable pg_failover_slots
-To enable the PG Failover Slots extension for your Azure Database for PostgreSQL server, you'll need to modify the server's configuration by including the extension in the server's shared preload libraries and adjusting a specific server parameter. Here's the process:
+To enable the PG Failover Slots extension for your Azure Database for PostgreSQL flexible server instance, you need to modify the server's configuration by including the extension in the server's shared preload libraries and adjusting a specific server parameter. Here's the process:
1. Add `pg_failover_slots` to the server's shared preload libraries by updating the `shared_preload_libraries` parameter. 1. Change the server parameter `hot_standby_feedback` to `on`.
Any changes to the `shared_preload_libraries` parameter require a server restart
Follow these steps in the Azure portal:
-1. Sign in to the [Azure portal](https://portal.azure.com/) and go to your Azure Database for PostgreSQL server's page.
+1. Sign in to the [Azure portal](https://portal.azure.com/) and go to your Azure Database for PostgreSQL flexible server instance's page.
1. In the menu on the left, select **Server parameters**. 1. Find the `shared_preload_libraries` parameter in the list and edit its value to include `pg_failover_slots`. 1. Search for the `hot_standby_feedback` parameter and set its value to `on`. 1. Select **Save** to preserve your changes. Now, you'll have the option to **Save and restart**. Choose this to ensure that the changes take effect since modifying `shared_preload_libraries` requires a server restart.
-By selecting **Save and restart**, your server will automatically reboot, applying the changes you've made. Once the server is back online, the PG Failover Slots extension is enabled and operational on your primary PostgreSQL server, ready to handle logical replication slots during failovers.
+By selecting **Save and restart**, your server will automatically reboot, applying the changes you've made. Once the server is back online, the PG Failover Slots extension is enabled and operational on your primary Azure Database for PostgreSQL flexible server instance, ready to handle logical replication slots during failovers.
## pg_stat_statements The [pg_stat_statements extension](https://www.postgresql.org/docs/current/pgstatstatements.html) gives you a view of all the queries that have run on your database. That is useful to get an understanding of what your query workload performance looks like on a production system.
-The [pg_stat_statements extension](https://www.postgresql.org/docs/current/pgstatstatements.html) is preloaded in shared_preload_libraries on every Azure Database for PostgreSQL flexible server to provide you a means of tracking execution statistics of SQL statements.
+The [pg_stat_statements extension](https://www.postgresql.org/docs/current/pgstatstatements.html) is preloaded in shared_preload_libraries on every Azure Database for PostgreSQL flexible server instance to provide you a means of tracking execution statistics of SQL statements.
However, for security reasons, you still have to [allowlist](#how-to-use-postgresql-extensions) [pg_stat_statements extension](https://www.postgresql.org/docs/current/pgstatstatements.html) and install it using [CREATE EXTENSION](https://www.postgresql.org/docs/current/sql-createextension.html) command. The setting `pg_stat_statements.track`, which controls what statements are counted by the extension, defaults to `top`, meaning all statements issued directly by clients are tracked. The two other tracking levels are `none` and `all`. This setting is configurable as a server parameter.
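As a usage sketch (placeholder connection values; the `total_exec_time` column applies to PostgreSQL 13 and later, where earlier versions expose `total_time` instead), you can pull the most expensive statements like this, assuming the extension has been allowlisted:

```python
import psycopg2

# Placeholder connection details; replace with your own server values.
conn = psycopg2.connect(
    "host=<server>.postgres.database.azure.com dbname=postgres "
    "user=<admin-user> password=<password> sslmode=require"
)
with conn, conn.cursor() as cur:
    cur.execute("CREATE EXTENSION IF NOT EXISTS pg_stat_statements;")
    # Top five statements by cumulative execution time.
    cur.execute(
        """
        SELECT query, calls, total_exec_time
        FROM pg_stat_statements
        ORDER BY total_exec_time DESC
        LIMIT 5;
        """
    )
    for query, calls, total_ms in cur.fetchall():
        print(f"{calls:>8} calls  {total_ms:>12.1f} ms  {query[:60]}")
conn.close()
```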
There's a tradeoff between the query execution information pg_stat_statements pr
## TimescaleDB TimescaleDB is a time-series database that is packaged as an extension for PostgreSQL. TimescaleDB provides time-oriented analytical functions, optimizations, and scales Postgres for time-series workloads.
-[Learn more about TimescaleDB](https://docs.timescale.com/timescaledb/latest/), a registered trademark of Timescale, Inc.. Azure Database for PostgreSQL provides the TimescaleDB [Apache-2 edition](https://www.timescale.com/legal/licenses).
+[Learn more about TimescaleDB](https://docs.timescale.com/timescaledb/latest/), a registered trademark of Timescale, Inc. Azure Database for PostgreSQL flexible server provides the TimescaleDB [Apache-2 edition](https://www.timescale.com/legal/licenses).
### Install TimescaleDB
-To install TimescaleDB, in addition, to allow listing it, as shown [above](#how-to-use-postgresql-extensions), you need to include it in the server's shared preload libraries. A change to Postgres's `shared_preload_libraries` parameter requires a **server restart** to take effect. You can change parameters using the [Azure portal](howto-configure-server-parameters-using-portal.md) or the [Azure CLI](howto-configure-server-parameters-using-cli.md).
-
+To install TimescaleDB, in addition to allowlisting it as shown [above](#how-to-use-postgresql-extensions), you need to include it in the server's shared preload libraries. A change to Postgres's `shared_preload_libraries` parameter requires a **server restart** to take effect. You can change parameters using the [Azure portal](how-to-configure-server-parameters-using-portal.md) or the [Azure CLI](how-to-configure-server-parameters-using-cli.md).
Using the [Azure portal](https://portal.azure.com/):
-1. Select your Azure Database for PostgreSQL server.
+1. Select your Azure Database for PostgreSQL flexible server instance.
1. On the sidebar, select **Server Parameters**.
Using the [Azure portal](https://portal.azure.com/):
1. After the notification, **restart** the server to apply these changes.
-You can now enable TimescaleDB in your Postgres database. Connect to the database and issue the following command:
+You can now enable TimescaleDB in your Azure Database for PostgreSQL flexible server database. Connect to the database and issue the following command:
```sql CREATE EXTENSION IF NOT EXISTS timescaledb CASCADE;
For more details on restore method with Timescale enabled database, see [Timesca
While running the `SELECT timescaledb_post_restore()` procedure listed above, you might get a permission denied error updating the timescaledb.restoring flag. This is due to limited ALTER DATABASE permission in Cloud PaaS database services. In this case you can use an alternative method with the `timescaledb-backup` tool to back up and restore the Timescale database. Timescaledb-backup is a program for making dumping and restoring a TimescaleDB database simpler, less error-prone, and more performant. To do so, do the following: 1. Install tools as detailed [here](https://github.com/timescale/timescaledb-backup#installing-timescaledb-backup)
- 1. Create target Azure Database for PostgreSQL server and database
+ 1. Create a target Azure Database for PostgreSQL flexible server instance and database
1. Enable Timescale extension as shown above 1. Grant azure_pg_admin [role](https://www.postgresql.org/docs/11/database-roles.html) to user that will be used by [ts-restore](https://github.com/timescale/timescaledb-backup#using-ts-restore) 1. Run [ts-restore](https://github.com/timescale/timescaledb-backup#using-ts-restore) to restore database More details on these utilities can be found [here](https://github.com/timescale/timescaledb-backup). > [!NOTE]
-> When using `timescale-backup` utilities to restore to Azure is that since database user names for non-flexible Azure Database for PostgresQL must use the `<user@db-name>` format, you need to replace `@` with `%40` character encoding.
+> When using `timescale-backup` utilities to restore to Azure, since database user names for Azure Database for PostgreSQL single server must use the `<user@db-name>` format, you need to replace `@` with `%40` character encoding.
## pg_hint_plan
More details on these utilities can be found [here](https://github.com/timescale
```sql /*+ SeqScan(a) */ ```
-`pg_hint_plan` reads hinting phrases in a comment of special form given with the target SQL statement. The special form is beginning by the character sequence "/\*+" and ends with "\*/". Hint phrases consists of hint name and following parameters enclosed by parentheses and delimited by spaces. New lines for readability can delimit each hinting phrase.
+`pg_hint_plan` reads hinting phrases in a comment of special form given with the target SQL statement. The special form begins with the character sequence "/\*+" and ends with "\*/". Hint phrases consist of a hint name followed by parameters enclosed in parentheses and delimited by spaces. Each hinting phrase can be delimited by new lines for readability.
Example:
Example:
``` The above example causes the planner to use the results of a `seq scan` on the table a to be combined with table b as a `hash join`. -
-To install pg_hint_plan, in addition, to allow listing it, as shown [above](#how-to-use-postgresql-extensions), you need to include it in the server's shared preload libraries. A change to Postgres's `shared_preload_libraries` parameter requires a **server restart** to take effect. You can change parameters using the [Azure portal](howto-configure-server-parameters-using-portal.md) or the [Azure CLI](howto-configure-server-parameters-using-cli.md).
-
+To install pg_hint_plan, in addition to allowlisting it as shown [above](#how-to-use-postgresql-extensions), you need to include it in the server's shared preload libraries. A change to Postgres's `shared_preload_libraries` parameter requires a **server restart** to take effect. You can change parameters using the [Azure portal](how-to-configure-server-parameters-using-portal.md) or the [Azure CLI](how-to-configure-server-parameters-using-cli.md).
Using the [Azure portal](https://portal.azure.com/):
-1. Select your Azure Database for the PostgreSQL server.
+1. Select your Azure Database for PostgreSQL flexible server instance.
1. On the sidebar, select **Server Parameters**.
Using the [Azure portal](https://portal.azure.com/):
1. After the notification, **restart** the server to apply these changes.
-You can now enable pg_hint_plan your Postgres database. Connect to the database and issue the following command:
+You can now enable pg_hint_plan in your Azure Database for PostgreSQL flexible server database. Connect to the database and issue the following command:
```sql CREATE EXTENSION pg_hint_plan ;
CREATE EXTENSION pg_buffercache;
## Extensions and Major Version Upgrade
-Azure Database for PostgreSQL Flexible Server Postgres has introduced [in-place major version upgrade](./concepts-major-version-upgrade.md#overview) feature that performs an in-place upgrade of the Postgres server with just a click. In-place major version upgrade simplifies the Postgres upgrade process, minimizing the disruption to users and applications accessing the server. In-place major version upgrade doesn't support specific extensions, and there are some limitations to upgrading certain extensions. The extensions **Timescaledb**, **pgaudit**, **dblink**, **orafce**, and **postgres_fdw** are unsupported for all PostgreSQL versions when using [in-place major version update feature](./concepts-major-version-upgrade.md#overview).
+Azure Database for PostgreSQL flexible server has introduced an [in-place major version upgrade](./concepts-major-version-upgrade.md#overview) feature that performs an in-place upgrade of the Azure Database for PostgreSQL flexible server instance with just a click. In-place major version upgrade simplifies the Azure Database for PostgreSQL flexible server upgrade process, minimizing the disruption to users and applications accessing the server. In-place major version upgrade doesn't support specific extensions, and there are some limitations to upgrading certain extensions. The extensions **Timescaledb**, **pgaudit**, **dblink**, **orafce**, and **postgres_fdw** are unsupported for all PostgreSQL versions when using the [in-place major version upgrade feature](./concepts-major-version-upgrade.md#overview).
## Related content
postgresql Concepts Firewall Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-firewall-rules.md
Title: Firewall rules in Azure Database for PostgreSQL - Flexible Server
+ Title: Firewall rules
description: This article describes how to use firewall rules to connect to Azure Database for PostgreSQL - Flexible Server with the public networking deployment option.
Last updated 11/30/2021
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-When you're running Azure Database for PostgreSQL - Flexible Server, you have two main networking options. The options are private access (virtual network integration) and public access (allowed IP addresses).
+When you're running Azure Database for PostgreSQL flexible server, you have two main networking options. The options are private access (virtual network integration) and public access (allowed IP addresses).
-With public access, the Azure Database for PostgreSQL server is accessed through a public endpoint. By default, the firewall blocks all access to the server. To specify which IP hosts can access the server, you create server-level *firewall rules*. Firewall rules specify allowed public IP address ranges. The firewall grants access to the server based on the originating IP address of each request. With [private access](concepts-networking.md#private-access-vnet-integration) no public endpoint is available and only hosts located on the same network can access Azure Database for PostgreSQL - Flexible Server.
+With public access, the Azure Database for PostgreSQL flexible server instance is accessed through a public endpoint. By default, the firewall blocks all access to the server. To specify which IP hosts can access the server, you create server-level *firewall rules*. Firewall rules specify allowed public IP address ranges. The firewall grants access to the server based on the originating IP address of each request. With [private access](concepts-networking.md#private-access-vnet-integration) no public endpoint is available and only hosts located on the same network can access Azure Database for PostgreSQL flexible server.
You can create firewall rules by using the Azure portal or by using Azure CLI commands. You must be the subscription owner or a subscription contributor.
-Server-level firewall rules apply to all databases on the same Azure Database for PostgreSQL server. The rules don't affect access to the Azure portal website.
+Server-level firewall rules apply to all databases on the same Azure Database for PostgreSQL flexible server instance. The rules don't affect access to the Azure portal website.
-The following diagram shows how connection attempts from the internet and Azure must pass through the firewall before they can reach PostgreSQL databases:
+The following diagram shows how connection attempts from the internet and Azure must pass through the firewall before they can reach Azure Database for PostgreSQL flexible server databases:
:::image type="content" source="../media/concepts-firewall-rules/1-firewall-concept.png" alt-text="Diagram that shows an overview of how the firewall works."::: ## Connect from the internet If the source IP address of the request is within one of the ranges specified in the server-level firewall rules, the connection is granted. Otherwise, it's rejected.
-For example, if your application connects with a Java Database Connectivity (JDBC) driver for PostgreSQL, you might encounter this error because the firewall is blocking the connection:
+For example, if your application connects with a Java Database Connectivity (JDBC) driver for Azure Database for PostgreSQL flexible server, you might encounter this error because the firewall is blocking the connection:
> java.util.concurrent.ExecutionException: java.lang.RuntimeException: > org.postgresql.util.PSQLException: FATAL: no pg\_hba.conf entry for host "123.45.67.890", user "adminuser", database "postgresql", SSL > [!NOTE]
-> To access Azure Database for PostgreSQL from your local computer, ensure that the firewall on your network and local computer allow outgoing communication on TCP port 5432.
+> To access Azure Database for PostgreSQL flexible server from your local computer, ensure that the firewall on your network and local computer allow outgoing communication on TCP port 5432.
## Connect from Azure We recommend that you find the outgoing IP address of any application or service and explicitly allow access to those individual IP addresses or ranges. For example, you can find the outgoing IP address of an Azure App Service app, or use a public IP address that's tied to a virtual machine.
If a fixed outgoing IP address isn't available for your Azure service, consider
## Programmatically manage firewall rules In addition to using the Azure portal, you can manage firewall rules programmatically by using the Azure CLI.
-From the Azure CLI, a firewall rule setting with a starting and ending address equal to 0.0.0.0 does the equivalent of the **Allow public access from any Azure service within Azure to this server** option in the portal. If firewall rules reject the connection attempt, the app won't reach the Azure Database for PostgreSQL server.
+From the Azure CLI, a firewall rule setting with a starting and ending address equal to 0.0.0.0 does the equivalent of the **Allow public access from any Azure service within Azure to this server** option in the portal. If firewall rules reject the connection attempt, the app won't reach the Azure Database for PostgreSQL flexible server instance.
## Troubleshoot firewall problems
-Consider the following possibilities when access to an Azure Database for PostgreSQL server doesn't behave as you expect:
+Consider the following possibilities when access to an Azure Database for PostgreSQL flexible server instance doesn't behave as you expect:
-* **Changes to the allowlist haven't taken effect yet**: Changes to the firewall configuration of an Azure Database for PostgreSQL server might take up to five minutes.
+* **Changes to the allowlist haven't taken effect yet**: Changes to the firewall configuration of an Azure Database for PostgreSQL flexible server instance might take up to five minutes.
-* **The sign-in isn't authorized, or an incorrect password was used**: If a sign-in doesn't have permissions on the Azure Database for PostgreSQL server or the password is incorrect, the connection to the server is denied. Creating a firewall setting only provides clients with an opportunity to try connecting to your server. Each client must still provide the necessary security credentials.
+* **The sign-in isn't authorized, or an incorrect password was used**: If a sign-in doesn't have permissions on the Azure Database for PostgreSQL flexible server instance or the password is incorrect, the connection to the server is denied. Creating a firewall setting only provides clients with an opportunity to try connecting to your server. Each client must still provide the necessary security credentials.
For example, the following error might appear if authentication fails for a JDBC client:
Consider the following possibilities when access to an Azure Database for Postgr
* **The firewall isn't allowing dynamic IP addresses**: If you have an internet connection with dynamic IP addressing and you're having trouble getting through the firewall, try one of the following solutions:
- * Ask your internet service provider (ISP) for the IP address range assigned to your client computers that access the Azure Database for PostgreSQL server. Then add the IP address range as a firewall rule.
+ * Ask your internet service provider (ISP) for the IP address range assigned to your client computers that access the Azure Database for PostgreSQL flexible server instance. Then add the IP address range as a firewall rule.
* Get static IP addresses instead for your client computers, and then add the static IP addresses as a firewall rule.
Consider the following possibilities when access to an Azure Database for Postgr
## Next steps
-* [Create and manage Azure Database for PostgreSQL firewall rules by using the Azure portal](how-to-manage-firewall-portal.md)
-* [Create and manage Azure Database for PostgreSQL firewall rules by using the Azure CLI](how-to-manage-firewall-cli.md)
+* [Create and manage Azure Database for PostgreSQL flexible server firewall rules by using the Azure portal](how-to-manage-firewall-portal.md)
+* [Create and manage Azure Database for PostgreSQL flexible server firewall rules by using the Azure CLI](how-to-manage-firewall-cli.md)
postgresql Concepts Geo Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-geo-disaster-recovery.md
Title: Geo-disaster recovery - Azure Database for PostgreSQL - Flexible Server
-description: Learn about the concepts of Geo-disaster recovery with Azure Database for PostgreSQL - Flexible Server
+ Title: Geo-disaster recovery
+description: Learn about the concepts of Geo-disaster recovery with Azure Database for PostgreSQL - Flexible Server.
Last updated 10/23/2023
If there's a region-wide disaster, Azure can provide protection from regional or large geography disasters with disaster recovery by making use of another region. For more information on Azure disaster recovery architecture, see [Azure to Azure disaster recovery architecture](../../site-recovery/azure-to-azure-architecture.md).
-Flexible server provides features that protect data and mitigates downtime for your mission-critical databases during planned and unplanned downtime events. Built on top of the Azure infrastructure that offers robust resiliency and availability, flexible server offers business continuity features that provide fault-protection, address recovery time requirements, and reduce data loss exposure. As you architect your applications, you should consider the downtime tolerance - the recovery time objective (RTO), and data loss exposure - the recovery point objective (RPO). For example, your business-critical database requires stricter uptime than a test database.
+Azure Database for PostgreSQL flexible server provides features that protect data and mitigate downtime for your mission-critical databases during planned and unplanned downtime events. Built on top of the Azure infrastructure that offers robust resiliency and availability, Azure Database for PostgreSQL flexible server offers business continuity features that provide fault-protection, address recovery time requirements, and reduce data loss exposure. As you architect your applications, you should consider the downtime tolerance - the recovery time objective (RTO), and data loss exposure - the recovery point objective (RPO). For example, your business-critical database requires stricter uptime than a test database.
## Compare geo-replication with geo-redundant backup storage Both geo-replication with read replicas and geo-backup are solutions for geo-disaster recovery. However, they differ in the details of their offerings. To choose the right solution for your system, it's important to understand and compare their features.
For more information on geo-redundant backup and restore, see [geo-redundant bac
## Read replicas
-Cross region read replicas can be deployed to protect your databases from region-level failures. Read replicas are updated asynchronously using PostgreSQL's physical replication technology, and can lag the primary. Read replicas are supported in general purpose and memory optimized compute tiers.
+Cross region read replicas can be deployed to protect your databases from region-level failures. Read replicas are updated asynchronously using PostgreSQL's physical replication technology, and can lag the primary. Read replicas are supported in general purpose and memory optimized compute tiers.
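A quick way to gauge how far a read replica lags behind the primary is to query the replica directly. This is an illustrative sketch, not part of the article's original sample; the column alias is arbitrary and the result is only meaningful while connected to the replica:

```sql
-- Run on the read replica: approximate replication lag since the last replayed transaction
SELECT now() - pg_last_xact_replay_timestamp() AS replication_lag;
```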
For more information on read replica features and considerations, see [Read replicas](/azure/postgresql/flexible-server/concepts-read-replicas).
For more information on unplanned downtime mitigation and recovery after regiona
## Next steps > [!div class="nextstepaction"]
-> [Azure Database for PostgreSQL documentation](/azure/postgresql/)
+> [Azure Database for PostgreSQL flexible server documentation](/azure/postgresql/)
> [!div class="nextstepaction"] > [Reliability in Azure](../../reliability/availability-zones-overview.md)
postgresql Concepts High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-high-availability.md
Title: Overview of high availability with Azure Database for PostgreSQL - Flexible Server
-description: Learn about the concepts of high availability with Azure Database for PostgreSQL - Flexible Server
+ Title: Overview of high availability
+description: Learn about the concepts of high availability with Azure Database for PostgreSQL - Flexible Server.
Last updated 7/19/2023
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-Azure Database for PostgreSQL - Flexible Server offers high availability configurations with automatic failover capabilities. The high availability solution is designed to ensure that committed data is never lost because of failures and that the database won't be a single point of failure in your architecture. When high availability is configured, flexible server automatically provisions and manages a standby. Write-ahead-logs (WAL) is streamed to the replica in synchronous mode using PostgreSQL streaming replication. There are two high availability architectural models:
+Azure Database for PostgreSQL flexible server offers high availability configurations with automatic failover capabilities. The high availability solution is designed to ensure that committed data is never lost because of failures and that the database won't be a single point of failure in your architecture. When high availability is configured, Azure Database for PostgreSQL flexible server automatically provisions and manages a standby. Write-ahead logs (WAL) are streamed to the replica in synchronous mode using PostgreSQL streaming replication. There are two high availability architectural models:
* **Zone-redundant HA**: This option provides a complete isolation and redundancy of infrastructure across multiple availability zones within a region. It provides the highest level of availability, but it requires you to configure application redundancy across availability zones. Zone-redundant HA is preferred when you want protection from availability zone failures. However, one should account for added latency for cross-AZ synchronous writes. This latency is more pronounced for applications with short duration transactions. Zone-redundant HA is available in a [subset of Azure regions](./overview.md#azure-regions) where the region supports multiple [availability zones](../../availability-zones/az-overview.md). Uptime [SLA of 99.99%](https://azure.microsoft.com/support/legal/sla/postgresql) is offered in this configuration.
-* **Same-zone HA**: This option provide for infrastructure redundancy with lower network latency because the primary and standby servers will be in the same availability zone. It provides high availability without the need to configure application redundancy across zones. Same-zone HA is preferred when you want to achieve the highest level of availability within a single availability zone. This option lowers the latency impact but makes your application vulnerable to zone failures. Same-zone HA is available in all [Azure regions](./overview.md#azure-regions) where you can deploy Flexible Server. Uptime [SLA of 99.95%](https://azure.microsoft.com/support/legal/sla/postgresql) offered in this configuration.
+* **Same-zone HA**: This option provides infrastructure redundancy with lower network latency because the primary and standby servers will be in the same availability zone. It provides high availability without the need to configure application redundancy across zones. Same-zone HA is preferred when you want to achieve the highest level of availability within a single availability zone. This option lowers the latency impact but makes your application vulnerable to zone failures. Same-zone HA is available in all [Azure regions](./overview.md#azure-regions) where you can deploy Azure Database for PostgreSQL flexible server. Uptime [SLA of 99.95%](https://azure.microsoft.com/support/legal/sla/postgresql) is offered in this configuration.
High availability configuration enables automatic failover capability with zero data loss (that is, RPO=0) during both planned and unplanned events. For example, a user-initiated scale compute operation is a planned failover event, while an unplanned event refers to failures such as underlying hardware and software faults, network failures, and availability zone failures.
High availability configuration enables automatic failover capability with zero
## High availability architecture
-As mentioned earlier, Azure Database for PostgreSQL Flexible server supports two high availability deployment models: zone-redundant HA and same-zone HA. In both deployment models, when the application commits a transaction, the transaction logs (write-ahead logs a.k.a WAL) are written to the data/log disk and also replicated in *synchronous* mode to the standby server. Once the logs are persisted on the standby, the transaction is considered committed and an acknowledgement is sent to the application. The standby server is always in recovery mode applying the transaction logs. However, the primary server doesn't wait for standby to apply these log records. It is possible that under heavy transaction workload, the replica server may fall behind but typically catches up to the primary with workload throughput fluctuations.
+As mentioned earlier, Azure Database for PostgreSQL flexible server supports two high availability deployment models: zone-redundant HA and same-zone HA. In both deployment models, when the application commits a transaction, the transaction logs (write-ahead logs a.k.a WAL) are written to the data/log disk and also replicated in *synchronous* mode to the standby server. Once the logs are persisted on the standby, the transaction is considered committed and an acknowledgment is sent to the application. The standby server is always in recovery mode applying the transaction logs. However, the primary server doesn't wait for standby to apply these log records. It is possible that under heavy transaction workload, the replica server may fall behind but typically catches up to the primary with workload throughput fluctuations.
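On the primary, the built-in `pg_stat_replication` view gives a sense of how the synchronous standby is keeping up. This is a minimal, illustrative query; depending on the role's permissions some columns may be masked, and the lag columns are only populated while replication is active:

```sql
-- Run on the primary: state and lag of the streaming replication connection to the standby
SELECT application_name, state, sync_state,
       write_lag, flush_lag, replay_lag
  FROM pg_stat_replication;
```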
### Zone-redundant high availability
-This high availability deployment enables Flexible server to be highly available across availability zones. You can choose the region, availability zones for the primary and standby servers. The standby replica server is provisioned in the chosen availability zone in the same region with similar compute, storage, and network configuration as the primary server. Data files and transaction log files (write-ahead logs a.k.a WAL) are stored on locally redundant storage (LRS) within each availability zone, which automatically stores **three** data copies. This provides physical isolation of the entire stack between primary and standby servers.
+This high availability deployment enables Azure Database for PostgreSQL flexible server to be highly available across availability zones. You can choose the region, availability zones for the primary and standby servers. The standby replica server is provisioned in the chosen availability zone in the same region with similar compute, storage, and network configuration as the primary server. Data files and transaction log files (write-ahead logs a.k.a WAL) are stored on locally redundant storage (LRS) within each availability zone, which automatically stores **three** data copies. This provides physical isolation of the entire stack between primary and standby servers.
>[!NOTE] > Not all regions support availability zones for deploying zone-redundant high availability. See this [Azure regions](./overview.md#azure-regions) list.
Automatic backups are performed periodically from the primary database server, w
### Same-zone high availability
-This model of high availability deployment enables Flexible server to be highly available within the same availability zone. This is supported in all regions, including regions that don't support availability zones. You can choose the region and the availability zone to deploy your primary database server. A standby server is **automatically** provisioned and managed in the **same** availability zone in the same region with similar compute, storage, and network configuration as the primary server. Data files and transaction log files (write-ahead logs a.k.a WAL) are stored on locally redundant storage, which automatically stores as **three** synchronous data copies each for primary and standby. This provides physical isolation of the entire stack between primary and standby servers within the same availability zone.
+This model of high availability deployment enables Azure Database for PostgreSQL flexible server to be highly available within the same availability zone. This is supported in all regions, including regions that don't support availability zones. You can choose the region and the availability zone to deploy your primary database server. A standby server is **automatically** provisioned and managed in the **same** availability zone in the same region with similar compute, storage, and network configuration as the primary server. Data files and transaction log files (write-ahead logs a.k.a WAL) are stored on locally redundant storage, which automatically stores **three** synchronous data copies each for primary and standby. This provides physical isolation of the entire stack between primary and standby servers within the same availability zone.
Automatic backups are performed periodically from the primary database server, while the transaction logs are continuously archived to the backup storage from the standby replica. If the region supports availability zones, then backup data is stored on zone-redundant storage (ZRS). In regions that don't support availability zones, backup data is stored on locally redundant storage (LRS). :::image type="content" source="./media/business-continuity/concepts-same-zone-high-availability-architecture.png" alt-text="Same-zone high availability":::
Automatic backups are performed periodically from the primary database server, w
### Transaction completion
-Application transaction triggered writes and commits are first logged to the WAL on the primary server. It is then streamed to the standby server using Postgres streaming protocol. Once the logs are persisted on the standby server storage, the primary server is acknowledged of write completion. Only then and the application is confirmed of the writes. This additional round-trip adds more latency to your application. The percentage of impact depends on the application. This acknowledgement process does not wait for the logs to be applied at the standby server. The standby server is permanently in recovery mode until it is promoted.
+Application transaction triggered writes and commits are first logged to the WAL on the primary server. It is then streamed to the standby server using the Postgres streaming protocol. Once the logs are persisted on the standby server storage, the primary server is acknowledged of write completion, and only then is the write confirmed to the application. This additional round-trip adds more latency to your application. The percentage of impact depends on the application. This acknowledgment process does not wait for the logs to be applied at the standby server. The standby server is permanently in recovery mode until it is promoted.
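The recovery-mode state mentioned above is what PostgreSQL's `pg_is_in_recovery()` function reports. You can't connect to the managed standby directly, so this is only an illustrative way to confirm whether any PostgreSQL server you are connected to is currently replaying WAL:

```sql
-- Returns true on a server that is replaying WAL (a standby), false on a primary
SELECT pg_is_in_recovery();
```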
### Health check
-Flexible server has a health monitoring in place that checks for the primary and standby health periodically. If that detects primary server is not reachable after multiple pings, it makes the decision to initiate an automatic failover or not. The algorithm is based on multiple data points to avoid any false positive situation.
+Azure Database for PostgreSQL flexible server has health monitoring in place that periodically checks the health of the primary and standby servers. If the monitoring detects that the primary server isn't reachable after multiple pings, it decides whether to initiate an automatic failover. The algorithm is based on multiple data points to avoid any false positive situation.
### Failover modes
PostgreSQL client applications are connected to the primary server using the DB
:::image type="content" source="./media/business-continuity/concepts-high-availability-steady-state.png" alt-text="high availability - steady state":::
-1. Clients connect to the flexible server and perform write operations.
+1. Clients connect to the Azure Database for PostgreSQL flexible server instance and perform write operations.
2. Changes are replicated to the standby site.
3. Primary receives acknowledgment.
4. Writes/commits are acknowledged.
For other user initiated operations such as scale-compute or scale-storage, the
### Reducing planned downtime with managed maintenance window
-With flexible server, you can optionally schedule Azure initiated maintenance activities by choosing a 60-minute window in a day of your preference where the activities on the databases are expected to be low. Azure maintenance tasks such as patching or minor version upgrades would happen during that maintenance window. If you do not choose a custom window, a system allocated 1-hr window between 11pm-7am local time is chosen for your server.
+With Azure Database for PostgreSQL flexible server, you can optionally schedule Azure-initiated maintenance activities by choosing a 60-minute window on a day of your preference when activity on the databases is expected to be low. Azure maintenance tasks such as patching or minor version upgrades happen during that maintenance window. If you don't choose a custom window, a system-allocated one-hour window between 11pm and 7am local time is chosen for your server.
-For flexible servers configured with high availability, these maintenance activities are performed on the standby replica first and the service is failed over to the standby to which applications can reconnect.
+For Azure Database for PostgreSQL flexible server instances configured with high availability, these maintenance activities are performed on the standby replica first and the service is failed over to the standby to which applications can reconnect.
## Failover process - unplanned downtimes - Unplanned outages include software bugs or infrastructure component failures that impact the availability of the database. If the primary server becomes unavailable, it is detected by the monitoring system and a failover process is initiated. The process includes a few seconds of wait time to make sure it is not a false positive. The replication to the standby replica is severed and the standby replica is activated to be the primary database server. That includes the standby recovering any residual WAL files. Once it is fully recovered, DNS for the same endpoint is updated with the standby server's IP address. Clients can then retry connecting to the database server using the same connection string and resume their operations. > [!NOTE]
-> Flexible servers configured with zone-redundant high availability provide a recovery point objective (RPO) of **Zero** (no data loss). The recovery time objective (RTO) is expected to be **less than 120s** in typical cases. However, depending on the activity in the primary database server at the time of the failover, the failover may take longer.
+> Azure Database for PostgreSQL flexible server instances configured with zone-redundant high availability provide a recovery point objective (RPO) of **Zero** (no data loss). The recovery time objective (RTO) is expected to be **less than 120s** in typical cases. However, depending on the activity in the primary database server at the time of the failover, the failover may take longer.
After the failover, while a new standby server is being provisioned (which usually takes 5-10 minutes), applications can still connect to the primary server and proceed with their read/write operations. Once the standby server is established, it will start recovering the logs that were generated after the failover.
After the failover, while a new standby server is being provisioned (which usual
## On-demand failover
-Flexible server provides two methods for you to perform on-demand failover to the standby server. These are useful if you want to test the failover time and downtime impact for your applications and if you want to fail over to the preferred availability zone.
+Azure Database for PostgreSQL flexible server provides two methods for you to perform on-demand failover to the standby server. These are useful if you want to test the failover time and downtime impact for your applications and if you want to fail over to the preferred availability zone.
### Forced failover
See [this guide](how-to-manage-high-availability-portal.md) for managing high av
## Point-in-time restore of HA servers
-Flexible servers that are configured with high availability, log data is replicated in real time to the standby server. Any user errors on the primary server - such as an accidental drop of a table or incorrect data updates are replicated to the standby replica as well. So, you cannot use standby to recover from such logical errors. To recover from such errors, you have to perform point-in-time restore from the backup. Using flexible server's point-in-time restore capability, you can restore to the time before the error occurred. For databases configured with high availability, a new database server will be restored as a single zone flexible server with a new user-provided server name. You can use the restored server for few use cases:
+For Azure Database for PostgreSQL flexible server instances that are configured with high availability, log data is replicated in real time to the standby server. Any user errors on the primary server, such as an accidental drop of a table or incorrect data updates, are replicated to the standby replica as well. So, you cannot use the standby to recover from such logical errors. To recover from such errors, you have to perform a point-in-time restore from the backup. Using Azure Database for PostgreSQL flexible server's point-in-time restore capability, you can restore to the time before the error occurred. For databases configured with high availability, a new database server will be restored as a single zone Azure Database for PostgreSQL flexible server with a new user-provided server name. You can use the restored server for a few use cases:
1. You can use the restored server for production usage and can optionally enable zone-redundant high availability.
2. If you just want to restore an object, you can then export the object from the restored database server and import it to your production database server.
Flexible servers that are configured with high availability, log data is replica
* Planned events such as scale compute and scale storage happen in the standby first and then on the primary server. Currently the server doesn't fail over for these planned operations.
-* If logical decoding or logical replication is configured with a HA configured flexible server, in the event of a failover to the standby server, the logical replication slots are not copied over to the standby server.
+* If logical decoding or logical replication is configured with an HA-configured Azure Database for PostgreSQL flexible server instance, in the event of a failover to the standby server, the logical replication slots are not copied over to the standby server.
## Availability for non-HA servers
-For Flexible servers configured **without** high availability, the service still provides built-in availability, storage redundancy and resiliency to help to recover from any planned or unplanned downtime events. Uptime [SLA of 99.9%](https://azure.microsoft.com/support/legal/sla/postgresql) is offered in this non-HA configuration.
+For Azure Database for PostgreSQL flexible server instances configured **without** high availability, the service still provides built-in availability, storage redundancy and resiliency to help to recover from any planned or unplanned downtime events. Uptime [SLA of 99.9%](https://azure.microsoft.com/support/legal/sla/postgresql) is offered in this non-HA configuration.
During planned or unplanned failover events, if the server goes down, the service maintains high availability of the servers using the following automated procedure:
Here are some planned maintenance scenarios:
### Unplanned downtime
-Unplanned downtime can occur as a result of unforeseen failures, including underlying hardware fault, networking issues, and software bugs. If the database server goes down unexpectedly, a new database server is automatically provisioned in seconds. The remote storage is automatically attached to the new database server. PostgreSQL engine performs the recovery operation using WAL and database files, and opens up the database server to allow clients to connect. Uncommitted transactions are lost, and they have to be retried by the application. While an unplanned downtime cannot be avoided, Flexible server mitigates the downtime by automatically performing recovery operations at both database server and storage layers without requiring human intervention.
+Unplanned downtime can occur as a result of unforeseen failures, including underlying hardware fault, networking issues, and software bugs. If the database server goes down unexpectedly, a new database server is automatically provisioned in seconds. The remote storage is automatically attached to the new database server. PostgreSQL engine performs the recovery operation using WAL and database files, and opens up the database server to allow clients to connect. Uncommitted transactions are lost, and they have to be retried by the application. While an unplanned downtime can't be avoided, Azure Database for PostgreSQL flexible server mitigates the downtime by automatically performing recovery operations at both database server and storage layers without requiring human intervention.
-Here are some failure scenarios and how Flexible server automatically recovers:
+Here are some failure scenarios and how Azure Database for PostgreSQL flexible server automatically recovers:
| **Scenario** | **Automatic recovery** |
| - | - |
Here are some failure scenarios that require user action to recover:
### HA configuration questions
-* **Where can I see the SLAs offered with Flexible server?** <br>
- [Azure Database for PostgreSQL SLAs](https://azure.microsoft.com/support/legal/sla/postgresql).
+* **Where can I see the SLAs offered with Azure Database for PostgreSQL flexible server?** <br>
+ [Azure Database for PostgreSQL flexible server SLAs](https://azure.microsoft.com/support/legal/sla/postgresql).
* **Do I need to have HA to protect my server from unplanned outages?** <br>
- No. Flexible server offers local redundant storage with 3 copies of data, zone-redundant backup (in regions where it is supported), and also built-in server resiliency to automatically restart a crashed server and even relocate server to another physical node. Zone redundant HA will provide higher uptime by performing automatic failover to another running (standby) server in another zone and thus provides zone-resilient high availability with zero data loss.
+ No. Azure Database for PostgreSQL flexible server offers locally redundant storage with 3 copies of data, zone-redundant backup (in regions where it is supported), and also built-in server resiliency to automatically restart a crashed server and even relocate the server to another physical node. Zone-redundant HA provides higher uptime by performing automatic failover to another running (standby) server in another zone and thus provides zone-resilient high availability with zero data loss.
* **Can I choose the availability zones for my primary and standby servers?** <br> If you choose same-zone HA, you can only choose the availability zone for the primary server. If you choose zone-redundant HA, you can choose the availability zones for both the primary and standby servers.
Here are some failure scenarios that require user action to recover:
### Replication and failover related questions
-* **How does flexible server provide high availability in the event of a fault - like AZ fault?** <br>
+* **How does Azure Database for PostgreSQL flexible server provide high availability in the event of a fault - like AZ fault?** <br>
When you enable your server with zone-redundant HA, a physical standby replica with the same compute and storage configuration as the primary is deployed automatically in a different availability zone than the primary. PostgreSQL streaming replication is established between the primary and standby servers. * **What is the typical failover process during an outage?** <br>
postgresql Concepts Intelligent Tuning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-intelligent-tuning.md
Title: Intelligent tuning - Azure Database for PostgreSQL - Flexible Server
+ Title: Intelligent tuning
description: This article describes the intelligent tuning feature in Azure Database for PostgreSQL - Flexible Server.
Last updated 06/02/2023
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-Azure Database for PostgreSQL - Flexible Server has an intelligent tuning feature that's designed to enhance
-performance automatically and help prevent problems. Intelligent tuning continuously monitors the PostgreSQL database's
+Azure Database for PostgreSQL flexible server has an intelligent tuning feature that's designed to enhance
+performance automatically and help prevent problems. Intelligent tuning continuously monitors the Azure Database for PostgreSQL flexible server database's
status and dynamically adapts the database to your workload. This feature comprises two
You can enable intelligent tuning by using the [Azure portal](how-to-enable-inte
## Why intelligent tuning?
-The autovacuum process is a critical part of maintaining the health and performance of a PostgreSQL database. It helps
+The autovacuum process is a critical part of maintaining the health and performance of an Azure Database for PostgreSQL flexible server database. It helps
reclaim storage occupied by "dead" rows, freeing up space and keeping the database running smoothly. Equally important is the tuning of write operations within the database. This task typically falls to database
The autovacuum tuning function in intelligent tuning monitors the bloat ratio an
The writes tuning function observes the quantity and transactional patterns of write operations. It intelligently adjusts parameters such as `bgwriter_delay`, `checkpoint_completion_target`, `max_wal_size`, and `min_wal_size`. By doing so, it enhances system performance and reliability, even under high write loads.
-When you use intelligent tuning, you can save valuable time and resources by relying on Azure Database for
-PostgreSQL - Flexible Server to maintain the optimal performance of your databases.
+When you use intelligent tuning, you can save valuable time and resources by relying on Azure Database for PostgreSQL flexible server to maintain the optimal performance of your databases.
## How does intelligent tuning work?
updated or dead tuples needed to start a `VACUUM` process.
> Intelligent tuning modifies autovacuum-related parameters at the server level, not at individual table levels. Also, if autovacuum is turned off, intelligent tuning can't operate correctly. For intelligent tuning to optimize the process, the autovacuum feature must be enabled. Although the autovacuum daemon triggers two operations (`VACUUM` and `ANALYZE`), intelligent tuning fine-tunes only the `VACUUM`
-process. This feature currently doesn't adjust the `ANALYZE` process, which gathers statistics on table contents to help the PostgreSQL query planner choose the
+process. This feature currently doesn't adjust the `ANALYZE` process, which gathers statistics on table contents to help the Azure Database for PostgreSQL flexible server query planner choose the
most suitable query execution plan. Intelligent tuning includes safeguards to measure resource utilization like CPU and IOPS.
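To see where dead rows are accumulating, and therefore where the `VACUUM` side of intelligent tuning has the most effect, you can query the cumulative statistics views. This is an illustrative query, not part of the feature itself:

```sql
-- Tables with the most dead tuples and the time of their last autovacuum run
SELECT relname, n_live_tup, n_dead_tup, last_autovacuum
  FROM pg_stat_user_tables
 ORDER BY n_dead_tup DESC
 LIMIT 10;
```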
scale factor, and naptime. This balance minimizes bloat and helps ensure that th
Intelligent tuning adjusts four parameters related to writes tuning: `bgwriter_delay`, `checkpoint_completion_target`, `max_wal_size`, and `min_wal_size`.
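If you want to observe what intelligent tuning has done with these settings, one option is to read their current values from `pg_settings`. A minimal sketch:

```sql
-- Current values of the writes-tuning parameters that intelligent tuning can adjust
SELECT name, setting, unit, source
  FROM pg_settings
 WHERE name IN ('bgwriter_delay', 'checkpoint_completion_target',
                'max_wal_size', 'min_wal_size');
```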
-The `bgwriter_delay` parameter determines the frequency at which the background writer process is awakened to clean "dirty" buffers (buffers that are new or modified). The background writer process is one of three processes in PostgreSQL
+The `bgwriter_delay` parameter determines the frequency at which the background writer process is awakened to clean "dirty" buffers (buffers that are new or modified). The background writer process is one of three processes in Azure Database for PostgreSQL flexible server
that handle write operations. The others are the checkpointer process and back-end writes (standard client processes, such as application connections). The background writer process's primary role is to alleviate the load from the main checkpointer process and decrease the strain of back-end writes. The `bgwriter_delay` parameter governs the frequency of background writer rounds. By adjusting this parameter, you can also optimize the performance of Data Manipulation Language (DML) queries.
-The `checkpoint_completion_target` parameter is part of the second write mechanism that PostgreSQL supports, specifically
+The `checkpoint_completion_target` parameter is part of the second write mechanism that Azure Database for PostgreSQL flexible server supports, specifically
the checkpointer process. Checkpoints occur at constant intervals that `checkpoint_timeout` defines (unless forced by exceeding the configured space). To avoid overloading the I/O system with a surge of page writes, writing dirty buffers during a checkpoint is spread out over a period of time. The `checkpoint_completion_target` parameter controls this duration by using `checkpoint_timeout` to specify the duration as a fraction of the checkpoint interval.
postgresql Concepts Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-limits.md
Title: Limits - Azure Database for PostgreSQL - Flexible Server
+ Title: Limits
description: This article describes limits in Azure Database for PostgreSQL - Flexible Server, such as number of connection and storage engine options.
Last updated 12/16/2023
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-The following sections describe capacity and functional limits in the database service. If you'd like to learn about resource (compute, memory, storage) tiers, see the [compute and storage](concepts-compute-storage.md) article.
+The following sections describe capacity and functional limits in Azure Database for PostgreSQL flexible server. If you'd like to learn about resource (compute, memory, storage) tiers, see the [compute and storage](concepts-compute-storage.md) article.
## Maximum connections
-Below, you'll find the _default_ maximum number of connections for each pricing tier and vCore configuration. Please note, Azure Postgres reserves 15 connections for physical replication and monitoring of the Flexible Server. Consequently, the `max user connections` value listed in the table is reduced by 15 from the total `max connections`.
+Below, you'll find the _default_ maximum number of connections for each pricing tier and vCore configuration. Please note, Azure Database for PostgreSQL flexible server reserves 15 connections for physical replication and monitoring of the Azure Database for PostgreSQL flexible server instance. Consequently, the `max user connections` value listed in the table is reduced by 15 from the total `max connections`.
|SKU Name |vCores|Memory Size|Max Connections|Max User Connections|
|--|--|--|--|--|
When connections exceed the limit, you may receive the following error:
`FATAL: sorry, too many clients already.`
-When using PostgreSQL for a busy database with a large number of concurrent connections, there may be a significant strain on resources. This strain can result in high CPU utilization, particularly when many connections are established simultaneously and when connections have short durations (less than 60 seconds). These factors can negatively impact overall database performance by increasing the time spent on processing connections and disconnections. It's important to note that each connection in Postgres, regardless of whether it is idle or active, consumes a significant amount of resources from your database. This consumption can lead to performance issues beyond high CPU utilization, such as disk and lock contention. The topic is discussed in more detail in the PostgreSQL Wiki article on the [Number of Database Connections](https://wiki.postgresql.org/wiki/Number_Of_Database_Connections). To learn more, visit [Identify and solve connection performance in Azure Postgres](https://techcommunity.microsoft.com/t5/azure-database-for-postgresql/identify-and-solve-connection-performance-in-azure-postgres/ba-p/3698375).
+When using Azure Database for PostgreSQL flexible server for a busy database with a large number of concurrent connections, there may be a significant strain on resources. This strain can result in high CPU utilization, particularly when many connections are established simultaneously and when connections have short durations (less than 60 seconds). These factors can negatively impact overall database performance by increasing the time spent on processing connections and disconnections. It's important to note that each connection in Azure Database for PostgreSQL flexible server, regardless of whether it is idle or active, consumes a significant amount of resources from your database. This consumption can lead to performance issues beyond high CPU utilization, such as disk and lock contention. The topic is discussed in more detail in the PostgreSQL Wiki article on the [Number of Database Connections](https://wiki.postgresql.org/wiki/Number_Of_Database_Connections). To learn more, visit [Identify and solve connection performance in Azure Database for PostgreSQL flexible server](https://techcommunity.microsoft.com/t5/azure-database-for-postgresql/identify-and-solve-connection-performance-in-azure-postgres/ba-p/3698375).
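To check how close a server is to its limit, you can compare the configured maximum against the current entries in `pg_stat_activity`. This is an illustrative query; the aliases are arbitrary:

```sql
-- Configured limit and current connection usage
SHOW max_connections;

SELECT count(*) AS total_connections,
       count(*) FILTER (WHERE state = 'idle') AS idle_connections
  FROM pg_stat_activity;
```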
## Functional limitations
When using PostgreSQL for a busy database with a large number of concurrent conn
- At this time, scaling up the server storage requires a server restart. - Server storage can only be scaled in 2x increments, see [Compute and Storage](concepts-compute-storage.md) for details.-- Decreasing server storage size is currently not supported. Only way to do is [dump and restore](../howto-migrate-using-dump-and-restore.md) it to a new Flexible Server.
+- Decreasing server storage size is currently not supported. The only way to do so is to [dump and restore](../howto-migrate-using-dump-and-restore.md) it to a new Azure Database for PostgreSQL flexible server instance.
### Server version upgrades
When using PostgreSQL for a busy database with a large number of concurrent conn
- Currently, storage auto-grow feature isn't available. You can monitor the usage and increase the storage to a higher size. - When the storage usage reaches 95% or if the available capacity is less than 5 GiB whichever is more, the server is automatically switched to **read-only mode** to avoid errors associated with disk-full situations. In rare cases, if the rate of data growth outpaces the time it takes switch to read-only mode, your Server may still run out of storage. - We recommend setting alert rules for `storage used` or `storage percent` when they exceed certain thresholds so that you can proactively take action such as increasing the storage size. For example, you can set an alert if the storage percent exceeds 80% usage.-- If you're using logical replication, then you must drop the logical replication slot in the primary server if the corresponding subscriber no longer exists. Otherwise the WAL files start to get accumulated in the primary filling up the storage. If the storage threshold exceeds certain threshold and if the logical replication slot isn't in use (due to non-available subscriber), Flexible server automatically drops that unused logical replication slot. That action releases accumulated WAL files and avoids your server becoming unavailable due to storage getting filled situation. -- We don't support the creation of tablespaces, so if you're creating a database, donΓÇÖt provide a tablespace name. PostgreSQL will use the default one that is inherited from the template database. It's unsafe to provide a tablespace like the temporary one because we can't ensure that such objects will remain persistent after server restarts, HA failovers, etc.
+- If you're using logical replication, then you must drop the logical replication slot in the primary server if the corresponding subscriber no longer exists. Otherwise, the WAL files accumulate in the primary and fill up the storage. If storage usage exceeds a certain threshold and the logical replication slot isn't in use (because the subscriber is unavailable), Azure Database for PostgreSQL flexible server automatically drops that unused logical replication slot. That action releases accumulated WAL files and prevents your server from becoming unavailable because the storage fills up.
+- We don't support the creation of tablespaces, so if you're creating a database, don't provide a tablespace name. Azure Database for PostgreSQL flexible server uses the default one that is inherited from the template database. It's unsafe to provide a tablespace like the temporary one because we can't ensure that such objects will remain persistent after server restarts, HA failovers, etc.
### Networking
When using PostgreSQL for a busy database with a large number of concurrent conn
### Postgres engine, extensions, and PgBouncer -- Postgres 10 and older aren't supported as those are already retired by the open-source community. If you must use one of these versions, you need to use the [Single Server](../overview-single-server.md) option, which supports the older major versions 9.5, 9.6 and 10.-- Flexible Server supports all `contrib` extensions and more. Please refer to [PostgreSQL extensions](/azure/postgresql/flexible-server/concepts-extensions).
+- Postgres 10 and older aren't supported as those are already retired by the open-source community. If you must use one of these versions, you need to use the [Azure Database for PostgreSQL single server](../overview-single-server.md) option, which supports the older major versions 9.5, 9.6 and 10.
+- Azure Database for PostgreSQL flexible server supports all `contrib` extensions and more. Please refer to [PostgreSQL extensions](/azure/postgresql/flexible-server/concepts-extensions).
- Built-in PgBouncer connection pooler is currently not available for Burstable servers. ### Stop/start operation -- Once you stop the Flexible Server, it automatically starts after 7 days.
-
+- Once you stop the Azure Database for PostgreSQL flexible server instance, it automatically starts after 7 days.
+ ### Scheduled maintenance - You can change custom maintenance window to any day/time of the week. However, any changes made after receiving the maintenance notification will have no impact on the next maintenance. Changes only take effect with the following monthly scheduled maintenance.
When using PostgreSQL for a busy database with a large number of concurrent conn
## Next steps - Understand [whatΓÇÖs available for compute and storage options](concepts-compute-storage.md)-- Learn about [Supported PostgreSQL Database Versions](concepts-supported-versions.md)-- Review [how to back up and restore a server in Azure Database for PostgreSQL using the Azure portal](how-to-restore-server-portal.md)
+- Learn about [Supported PostgreSQL database versions](concepts-supported-versions.md)
+- Review [how to back up and restore a server in Azure Database for PostgreSQL flexible server using the Azure portal](how-to-restore-server-portal.md)
postgresql Concepts Logging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-logging.md
Title: Logs - Azure Database for PostgreSQL - Flexible Server
-description: Describes logging configuration, storage and analysis in Azure Database for PostgreSQL - Flexible Server
+ Title: Logs
+description: Describes logging configuration, storage and analysis in Azure Database for PostgreSQL - Flexible Server.
Last updated 12/26/2021
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-Azure Database for PostgreSQL allows you to configure and access Postgres' standard logs. The logs can be used to identify, troubleshoot, and repair configuration errors and suboptimal performance. Logging information you can configure and access includes errors, query information, autovacuum records, connections, and checkpoints. (Access to transaction logs is not available).
+Azure Database for PostgreSQL flexible server allows you to configure and access Postgres' standard logs. The logs can be used to identify, troubleshoot, and repair configuration errors and suboptimal performance. Logging information you can configure and access includes errors, query information, autovacuum records, connections, and checkpoints. (Access to transaction logs is not available).
Audit logging is made available through a Postgres extension, `pgaudit`. To learn more, visit the [auditing concepts](concepts-audit.md) article. ## Configure logging
-You can configure Postgres standard logging on your server using the logging server parameters. To learn more about Postgres log parameters, visit the [When To Log](https://www.postgresql.org/docs/current/runtime-config-logging.html#RUNTIME-CONFIG-LOGGING-WHEN) and [What To Log](https://www.postgresql.org/docs/current/runtime-config-logging.html#RUNTIME-CONFIG-LOGGING-WHAT) sections of the Postgres documentation. Most, but not all, Postgres logging parameters are available to configure in Azure Database for PostgreSQL.
+You can configure Postgres standard logging on your server using the logging server parameters. To learn more about Postgres log parameters, visit the [When To Log](https://www.postgresql.org/docs/current/runtime-config-logging.html#RUNTIME-CONFIG-LOGGING-WHEN) and [What To Log](https://www.postgresql.org/docs/current/runtime-config-logging.html#RUNTIME-CONFIG-LOGGING-WHAT) sections of the Postgres documentation. Most, but not all, Postgres logging parameters are available to configure in Azure Database for PostgreSQL flexible server.
-To learn how to configure parameters in Azure Database for PostgreSQL, see the [portal documentation](howto-configure-server-parameters-using-portal.md) or the [CLI documentation](howto-configure-server-parameters-using-cli.md).
+To learn how to configure parameters in Azure Database for PostgreSQL flexible server, see the [portal documentation](howto-configure-server-parameters-using-portal.md) or the [CLI documentation](howto-configure-server-parameters-using-cli.md).
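Before changing anything, it can help to review the logging parameters that are currently in effect on the server. A minimal, illustrative query:

```sql
-- Current values of the logging-related server parameters
SELECT name, setting
  FROM pg_settings
 WHERE name LIKE 'log%'
 ORDER BY name;
```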
> [!NOTE] > Configuring a high volume of logs, for example statement logging, can add significant performance overhead. ## Accessing logs
-Azure Database for PostgreSQL is integrated with Azure Monitor diagnostic settings. Diagnostic settings allows you to send your Postgres logs in JSON format to Azure Monitor Logs for analytics and alerting, Event Hubs for streaming, and Azure Storage for archiving.
+Azure Database for PostgreSQL flexible server is integrated with Azure Monitor diagnostic settings. Diagnostic settings allow you to send your Azure Database for PostgreSQL flexible server logs in JSON format to Azure Monitor Logs for analytics and alerting, Event Hubs for streaming, and Azure Storage for archiving.
### Log format
The following table describes the fields for the **PostgreSQLLogs** type. Depend
| Category | `PostgreSQLLogs` |
| OperationName | `LogEvent` |
| errorLevel_s | Logging level, example: LOG, ERROR, NOTICE |
-| processId_d | Process id of the PostgreSQL backend |
+| processId_d | Process ID of the PostgreSQL backend |
| sqlerrcode_s | PostgreSQL Error code that follows the SQL standard's conventions for SQLSTATE codes |
| Message | Primary log message |
| Detail | Secondary log message (if applicable) |
The following table describes the fields for the **PostgreSQLLogs** type. Depend
## Next steps -- Learn more about how to [Configure and Access Logs](howto-configure-and-access-logs.md).
+- Learn more about how to [Configure and Access Logs](how-to-configure-and-access-logs.md).
- Learn more about [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/). - Learn more about [audit logs](concepts-audit.md)
postgresql Concepts Logical https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-logical.md
Title: Logical replication and logical decoding - Azure Database for PostgreSQL - Flexible Server
-description: Learn about using logical replication and logical decoding in Azure Database for PostgreSQL - Flexible Server
+ Title: Logical replication and logical decoding
+description: Learn about using logical replication and logical decoding in Azure Database for PostgreSQL - Flexible Server.
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-Azure Database for PostgreSQL - Flexible Server supports the following logical data extraction and replication methodologies:
+Azure Database for PostgreSQL flexible server supports the following logical data extraction and replication methodologies:
1. **Logical replication** 1. Using PostgreSQL [native logical replication](https://www.postgresql.org/docs/current/logical-replication.html) to replicate data objects. Logical replication allows fine-grained control over the data replication, including table-level data replication.
Logical decoding:
1. Save the changes and restart the server to apply the changes.
-1. Confirm that your PostgreSQL instance allows network traffic from your connecting resource.
+1. Confirm that your Azure Database for PostgreSQL flexible server instance allows network traffic from your connecting resource.
1. Grant the admin user replication permissions.
Logical decoding:
## Use logical replication and logical decoding
-Using native logical replication is the simplest way to replicate data out of Postgres. You can use the SQL interface or the streaming protocol to consume the changes. You can also use the SQL interface to consume changes using logical decoding.
+Using native logical replication is the simplest way to replicate data out of Azure Database for PostgreSQL flexible server. You can use the SQL interface or the streaming protocol to consume the changes. You can also use the SQL interface to consume changes using logical decoding.
### Native logical replication Logical replication uses the terms 'publisher' and 'subscriber'.-- The publisher is the PostgreSQL database you're sending data **from**.-- The subscriber is the PostgreSQL database you're sending data **to**.
+- The publisher is the Azure Database for PostgreSQL flexible server database you're sending data **from**.
+- The subscriber is the Azure Database for PostgreSQL flexible server database you're sending data **to**.
Here's some sample code you can use to try out logical replication.
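The article's full sample isn't reproduced in this change summary, but the flow looks roughly like the following sketch. The table and publication names match the ones used later in this article; the subscription name and the connection string values are placeholders:

```sql
-- On the publisher database
CREATE TABLE basic (id INTEGER NOT NULL PRIMARY KEY, name TEXT);
CREATE PUBLICATION pub FOR TABLE basic;

-- On the subscriber database
CREATE TABLE basic (id INTEGER NOT NULL PRIMARY KEY, name TEXT);
CREATE SUBSCRIPTION sub
    CONNECTION 'host=<server-name>.postgres.database.azure.com port=5432 dbname=<database> user=<admin-user> password=<password>'
    PUBLICATION pub;
```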
Visit the PostgreSQL documentation to understand more about [logical replication
### Use logical replication between databases on the same server
-When you're aiming to set up logical replication between different databases residing on the same PostgreSQL server, it's essential to follow specific guidelines to avoid implementation restrictions that are currently present. As of now, creating a subscription that connects to the same database cluster will only succeed if the replication slot isn't created within the same command; otherwise, the `CREATE SUBSCRIPTION` call hangs, on a `LibPQWalReceiverReceive` wait event. This happens due to an existing restriction within Postgres engine, which might be removed in future releases.
+When you're aiming to set up logical replication between different databases residing on the same Azure Database for PostgreSQL flexible server instance, it's essential to follow specific guidelines to avoid implementation restrictions that are currently present. As of now, creating a subscription that connects to the same database cluster will only succeed if the replication slot isn't created within the same command; otherwise, the `CREATE SUBSCRIPTION` call hangs on a `LibPQWalReceiverReceive` wait event. This happens due to an existing restriction within the Postgres engine, which might be removed in future releases.
To effectively set up logical replication between your "source" and "target" databases on the same server while circumventing this restriction, follow the steps outlined below:
CREATE PUBLICATION pub FOR TABLE basic;
SELECT pg_create_logical_replication_slot('myslot', 'pgoutput'); ```
-Thereafter, in your target database, create a subscription to the previously created publication, ensuring that `create_slot` is set to `false` to prevent PostgreSQL from creating a new slot, and correctly specifying the slot name that was created in the previous step. Before running the command, replace the placeholders in the connection string with your actual database credentials:
+Thereafter, in your target database, create a subscription to the previously created publication, ensuring that `create_slot` is set to `false` to prevent Azure Database for PostgreSQL flexible server from creating a new slot, and correctly specifying the slot name that was created in the previous step. Before running the command, replace the placeholders in the connection string with your actual database credentials:
```sql -- Run this on the target database
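-- Illustrative sketch (the subscription name and the connection string values are placeholders);
-- the key options are create_slot = false and the slot name created in the previous step.
CREATE SUBSCRIPTION sub
    CONNECTION 'host=<server-name>.postgres.database.azure.com port=5432 dbname=<source-database> user=<admin-user> password=<password>'
    PUBLICATION pub
    WITH (create_slot = false, slot_name = 'myslot');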
The 'active' column in the `pg_replication_slots` view indicates whether there's
```sql
SELECT * FROM pg_replication_slots;
```
-[Set alerts](howto-alert-on-metrics.md) on the **Maximum Used Transaction IDs** and **Storage Used** flexible server metrics to notify you when the values increase past normal thresholds.
+[Set alerts](howto-alert-on-metrics.md) on the **Maximum Used Transaction IDs** and **Storage Used** Azure Database for PostgreSQL flexible server metrics to notify you when the values increase past normal thresholds.
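In addition to metric alerts, you can check transaction ID consumption directly from the database. This is an illustrative query; the alias is arbitrary:

```sql
-- Databases ordered by transaction ID age (related to the Maximum Used Transaction IDs metric)
SELECT datname, age(datfrozenxid) AS xid_age
  FROM pg_database
 ORDER BY age(datfrozenxid) DESC;
```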
## Limitations - **Logical replication** limitations apply as documented [here](https://www.postgresql.org/docs/current/logical-replication-restrictions.html). -- **Slots and HA failover** - When using [high-availability (HA)](concepts-high-availability.md) enabled servers with Azure Database for PostgreSQL - Flexible Server, be aware that logical replication slots aren't preserved during failover events. To maintain logical replication slots and ensure data consistency after a failover, it's recommended to use the PG Failover Slots extension. For more information on enabling this extension, please refer to the [documentation](concepts-extensions.md#pg_failover_slots-preview).
+- **Slots and HA failover** - When using [high-availability (HA)](concepts-high-availability.md) enabled servers with Azure Database for PostgreSQL flexible server, be aware that logical replication slots aren't preserved during failover events. To maintain logical replication slots and ensure data consistency after a failover, it's recommended to use the PG Failover Slots extension. For more information on enabling this extension, please refer to the [documentation](concepts-extensions.md#pg_failover_slots-preview).
> [!IMPORTANT]
-> You must drop the logical replication slot in the primary server if the corresponding subscriber no longer exists. Otherwise, the WAL files accumulate in the primary, filling up the storage. Suppose the storage threshold exceeds a certain threshold, and the logical replication slot is not in use (due to a non-available subscriber). In that case, the Flexible server automatically drops that unused logical replication slot. That action releases accumulated WAL files and avoids your server becoming unavailable due to storage getting filled situation.
+> You must drop the logical replication slot in the primary server if the corresponding subscriber no longer exists. Otherwise, the WAL files accumulate in the primary, filling up the storage. If storage usage exceeds a certain threshold and the logical replication slot is not in use (because the subscriber is unavailable), the Azure Database for PostgreSQL flexible server instance automatically drops that unused logical replication slot. That action releases accumulated WAL files and prevents your server from becoming unavailable because the storage fills up.
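As an illustration of the cleanup described above, you can list slots with no connected consumer and drop the ones that are no longer needed (the slot name below is a placeholder):

```sql
-- Find replication slots that have no active consumer
SELECT slot_name, slot_type, active
  FROM pg_replication_slots
 WHERE active = false;

-- Drop an unused slot ('myslot' is a placeholder)
SELECT pg_drop_replication_slot('myslot');
```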
## Related content
postgresql Concepts Maintenance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-maintenance.md
Title: Scheduled maintenance - Azure Database for PostgreSQL - Flexible Server
+ Title: Scheduled maintenance
description: This article describes the scheduled maintenance feature in Azure Database for PostgreSQL - Flexible Server.
Last updated 1/4/2024
-# Scheduled maintenance in Azure Database for PostgreSQL ΓÇô Flexible Server
+# Scheduled maintenance in Azure Database for PostgreSQL - Flexible Server
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-Azure Database for PostgreSQL - Flexible Server performs periodic maintenance to keep your managed database secure, stable, and up-to-date. During maintenance, the server gets new features, updates, and patches.
- > [!IMPORTANT]
+Azure Database for PostgreSQL flexible server performs periodic maintenance to keep your managed database secure, stable, and up-to-date. During maintenance, the server gets new features, updates, and patches.
+
+> [!IMPORTANT]
> Please avoid all server operations (modifications, configuration changes, starting/stopping server) during Azure Database for PostgreSQL flexible server maintenance. Engaging in these activities can lead to unpredictable outcomes, possibly affecting server performance and stability. Wait until maintenance concludes before conducting server operations.

## Select a maintenance window
-
-You can schedule maintenance during a specific day of the week and a time window within that day. Or you can let the system pick a day and a time window time for you automatically. **Maintenance Notifications are sent 5 days in advance**. This ensures ample time to prepare for the scheduled maintenance.. The system will also let you know when maintenance is started, and when it's successfully completed.
+
+You can schedule maintenance during a specific day of the week and a time window within that day. Or you can let the system pick a day and a time window for you automatically. **Maintenance notifications are sent 5 days in advance**. This ensures ample time to prepare for the scheduled maintenance. The system also lets you know when maintenance starts and when it's successfully completed.
Notifications about upcoming scheduled maintenance can be:
* Pushed as a notification to an Azure app
* Delivered as a voice message
-When specifying preferences for the maintenance schedule, you can pick a day of the week and a time window. If you don't specify, the system will pick times between 11pm and 7am in your server's region time. You can define different schedules for each flexible server in your Azure subscription.
+When specifying preferences for the maintenance schedule, you can pick a day of the week and a time window. If you don't specify one, the system picks a time between 11pm and 7am in your server region's local time. You can define different schedules for each Azure Database for PostgreSQL flexible server instance in your Azure subscription.
> [!IMPORTANT]
> Normally there are at least 30 days between successful scheduled maintenance events for a server.
>
> However, in case of a critical emergency update such as a severe vulnerability, the notification window could be shorter than five days or be omitted. The critical update may be applied to your server even if a successful scheduled maintenance was performed in the last 30 days.
-You can update scheduling settings at any time. If there's maintenance scheduled for your flexible server and you update scheduling preferences, the current rollout proceeds as scheduled and the scheduling settings change will become effective upon its successful completion for the next scheduled maintenance.
+You can update scheduling settings at any time. If maintenance is already scheduled for your Azure Database for PostgreSQL flexible server instance and you update your scheduling preferences, the current rollout proceeds as scheduled. The change to your scheduling settings takes effect after that rollout completes successfully, starting with the next scheduled maintenance.
-## System Vs Custom managed maintenance schedules
+## System vs custom managed maintenance schedules
+
+You can define a system-managed schedule or a custom schedule for each Azure Database for PostgreSQL flexible server instance in your Azure subscription.
-You can define system-managed schedule or custom schedule for each flexible server in your Azure subscription:
-
* With a custom schedule, you can specify your maintenance window for the server by choosing the day of the week and a one-hour time window.
* With a system-managed schedule, the system picks any one-hour window between 11pm and 7am in your server region's local time.
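As a rough sketch, not taken from this article, a custom window can typically be set with the Azure CLI. The `--maintenance-window` parameter and its `Day:Hour:Minute` format are assumptions to verify with `az postgres flexible-server update --help`.

```bash
# Assumed parameter and format: set a custom window of Wednesday 02:00 (server region time);
# passing "Disabled" is commonly used to fall back to the system-managed schedule.
az postgres flexible-server update \
  --resource-group "<resource-group>" \
  --name "<server-name>" \
  --maintenance-window "Wed:2:0"
```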
-> [!IMPORTANT]
-> A 7-day deployment gap between system-managed and custom-managed schedules was maintained.
- Updates are first applied to servers with system-managed schedules, followed by those with custom schedules after at least 7 days within a region. To receive early updates for development and test servers, use a system-managed schedule. This allows early testing and issue resolution before updates reach production servers with custom schedules. Updates for custom-schedule servers begin 7 days later during a defined maintenance window. Once notified, updates can't be deferred. Custom schedules are advised for production environments only. In rare cases, a maintenance event can be canceled by the system or may fail to complete successfully. If the update fails, the update is reverted, and the previous version of the binaries is restored. In such failed update scenarios, you may still experience a restart of the server during the maintenance window. If the update is canceled or fails, the system creates a notification informing you that the maintenance event was canceled or failed. The next attempt to perform maintenance is scheduled according to your current scheduling settings, and you'll receive a notification about it 5 days in advance.
postgresql Concepts Major Version Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-major-version-upgrade.md
Title: Major Version Upgrade - Azure Database for PostgreSQL - Flexible Server
-description: Learn about the concepts of in-place major version upgrade with Azure Database for PostgreSQL - Flexible Server
+ Title: Major version upgrade
+description: Learn about the concepts of in-place major version upgrade with Azure Database for PostgreSQL - Flexible Server.
-# Major Version Upgrade for PostgreSQL Flexible Server
+# Major version upgrade for Azure Database for PostgreSQL - Flexible Server
[!INCLUDE [applies-to-postgresql-Flexible-server](../includes/applies-to-postgresql-Flexible-server.md)]
-Azure Database for PostgreSQL Flexible Server supports PostgreSQL versions 11, 12, 13, 14, 15 and 16. Postgres community releases a new major version containing new features about once a year. Additionally, major version receives periodic bug fixes in the form of minor releases. Minor version upgrades include changes that are backward-compatible with existing applications. Azure Database for PostgreSQL Flexible Server periodically updates the minor versions during customerΓÇÖs maintenance window. Major version upgrades are more complicated than minor version upgrades, as they can include internal changes and new features that may not be backward-compatible with existing applications.
+Azure Database for PostgreSQL flexible server supports PostgreSQL versions 11, 12, 13, 14, 15, and 16. The Postgres community releases a new major version containing new features about once a year. Additionally, each major version receives periodic bug fixes in the form of minor releases. Minor version upgrades include changes that are backward-compatible with existing applications. Azure Database for PostgreSQL flexible server periodically updates the minor versions during the customer's maintenance window. Major version upgrades are more complicated than minor version upgrades because they can include internal changes and new features that may not be backward-compatible with existing applications.
## Overview
-Azure Database for PostgreSQL Flexible Server Postgres has now introduced in-place major version upgrade feature that performs an in-place upgrade of the server with just a click. In-place major version upgrade simplifies the upgrade process minimizing the disruption to users and applications accessing the server. In-place upgrades are a simpler way to upgrade the major version of the instance, as they retain the server name and other settings of the current server after the upgrade, and don't require data migration or changes to the application connection strings. In-place upgrades are faster and involve shorter downtime than data migration.
+Azure Database for PostgreSQL flexible server has now introduced an in-place major version upgrade feature that performs an in-place upgrade of the server with just a click. In-place major version upgrade simplifies the upgrade process, minimizing disruption to users and applications accessing the server. In-place upgrades are a simpler way to upgrade the major version of the instance, as they retain the server name and other settings of the current server after the upgrade, and don't require data migration or changes to the application connection strings. In-place upgrades are faster and involve shorter downtime than data migration.
## Process

Here are some of the important considerations with in-place major version upgrade.

-- During in-place major version upgrade process, Flexible Server runs a pre-check procedure to identify any potential issues that might cause the upgrade to fail. If the pre-check finds any incompatibilities, it creates a log event showing that the upgrade pre-check failed, along with an error message.
+- During the in-place major version upgrade process, Azure Database for PostgreSQL flexible server runs a pre-check procedure to identify any potential issues that might cause the upgrade to fail. If the pre-check finds any incompatibilities, it creates a log event showing that the upgrade pre-check failed, along with an error message.
-- If the pre-check is successful, then Flexible Server stops the service and takes an implicit backup just before starting the upgrade. This backup can be used to restore the database instance to its previous version if there's an upgrade error.
+- If the pre-check is successful, then Azure Database for PostgreSQL flexible server stops the service and takes an implicit backup just before starting the upgrade. This backup can be used to restore the database instance to its previous version if there's an upgrade error.
-- Flexible Server uses [**pg_upgrade**](https://www.postgresql.org/docs/current/pgupgrade.html) utility to perform in-place major version upgrades and provides the flexibility to skip versions and upgrade directly to higher versions.
+- Azure Database for PostgreSQL flexible server uses the [pg_upgrade](https://www.postgresql.org/docs/current/pgupgrade.html) utility to perform in-place major version upgrades and provides the flexibility to skip versions and upgrade directly to higher versions.
- During an in-place major version upgrade of a High Availability (HA) enabled server, the service disables HA, performs the upgrade on the primary server, and then re-enables HA after the upgrade is complete.
- Most extensions are automatically upgraded to higher versions during an in-place major version upgrade, with some exceptions. Refer to the **limitations** section for more details.
-- In-place major version upgrade process for Flexible Server automatically deploys the latest supported minor version.
+- In-place major version upgrade process for Azure Database for PostgreSQL flexible server automatically deploys the latest supported minor version.
- The process of performing an in-place major version upgrade is an offline operation that results in a brief period of downtime. Typically, the downtime is under 15 minutes, although the duration may vary depending on the number of system tables involved.
It's recommended to perform a dry run of the in-place major version upgrade in a
## Post upgrade
-Run the **ANALYZE** operation to refresh the `pg_statistic` table. You should do this for every database on your Flexible Server. Optimizer statistics aren't transferred during a major version upgrade, so you need to regenerate all statistics to avoid performance issues. Run the command without any parameters to generate statistics for all regular tables in the current database, as follows:
-
+Run the **ANALYZE** operation to refresh the `pg_statistic` table. You should do this for every database on all your Azure Database for PostgreSQL flexible server instances. Optimizer statistics aren't transferred during a major version upgrade, so you need to regenerate all statistics to avoid performance issues. Run the command without any parameters to generate statistics for all regular tables in the current database, as follows:
```sql
VACUUM VERBOSE ANALYZE;
```
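If you prefer to refresh statistics for every database from a client machine instead of connecting to each database individually, a `vacuumdb` sketch along these lines achieves the same result, assuming the PostgreSQL client tools are installed; the server name and admin user are placeholders.

```bash
# Analyze all databases on the server in stages after the upgrade; identifiers are placeholders.
vacuumdb --all --analyze-in-stages \
  --host "<server-name>.postgres.database.azure.com" \
  --username "<admin-user>"
```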
postgresql Concepts Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-monitoring.md
Title: Monitoring and metrics - Azure Database for PostgreSQL - Flexible Server
+ Title: Monitoring and metrics
description: Review the monitoring and metrics features in Azure Database for PostgreSQL - Flexible Server.
Last updated 1/17/2024
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-Monitoring data about your servers helps you troubleshoot and optimize for your workload. Azure Database for PostgreSQL provides various monitoring options to provide insight into how your server is performing.
+Monitoring data about your servers helps you troubleshoot and optimize for your workload. Azure Database for PostgreSQL flexible server provides various monitoring options to provide insight into how your server is performing.
## Metrics
-Azure Database for PostgreSQL provides various metrics that give insight into the behavior of the resources that support the Azure Database for PostgreSQL server. Each metric is emitted at a 1-minute interval and has up to [93 days of history](../../azure-monitor/essentials/data-platform-metrics.md#retention-of-metrics). You can configure alerts on the metrics. Other options include setting up automated actions, performing advanced analytics, and archiving the history. For more information, see the [Azure Metrics overview](../../azure-monitor/essentials/data-platform-metrics.md).
+Azure Database for PostgreSQL flexible server provides various metrics that give insight into the behavior of the resources that support the Azure Database for PostgreSQL flexible server instance. Each metric is emitted at a 1-minute interval and has up to [93 days of history](../../azure-monitor/essentials/data-platform-metrics.md#retention-of-metrics). You can configure alerts on the metrics. Other options include setting up automated actions, performing advanced analytics, and archiving the history. For more information, see the [Azure Metrics overview](../../azure-monitor/essentials/data-platform-metrics.md).
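For example, a hedged Azure CLI sketch for pulling recent values of a platform metric; the resource ID is a placeholder, and `cpu_percent` is used here as a representative metric ID to confirm against the table of default metrics.

```bash
# Retrieve the last hour of CPU utilization at 1-minute granularity; identifiers are placeholders.
az monitor metrics list \
  --resource "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.DBforPostgreSQL/flexibleServers/<server-name>" \
  --metric cpu_percent \
  --interval PT1M \
  --offset 1h
```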
> [!NOTE]
> While metrics are stored for 93 days, you can only query (in the Metrics tile) for a maximum of 30 days' worth of data on any single chart. If you see a blank chart or your chart displays only part of metric data, verify that the difference between start and end dates in the time picker doesn't exceed the 30-day interval. After you've selected a 30-day interval, you can pan the chart to view the full retention window.

### Default Metrics
-The following metrics are available for a flexible server instance of Azure Database for PostgreSQL:
+The following metrics are available for an Azure Database for PostgreSQL flexible server instance:
|Display name |Metric ID |Unit |Description |Default enabled|
|--|--|--|--|--|
### Enhanced metrics
-You can use enhanced metrics for Azure Database for PostgreSQL - Flexible Server to get fine-grained monitoring and alerting on databases. You can configure alerts on the metrics. Some enhanced metrics include a `Dimension` parameter that you can use to split and filter metrics data by using a dimension like database name or state.
+You can use enhanced metrics for Azure Database for PostgreSQL flexible server to get fine-grained monitoring and alerting on databases. You can configure alerts on the metrics. Some enhanced metrics include a `Dimension` parameter that you can use to split and filter metrics data by using a dimension like database name or state.
#### Enabling enhanced metrics

-- Most of these new metrics are *disabled* by default. There are a few exceptions though, which are enabled by default. Rightmost column in the following tables indicate whether each metric is enabled by default or not.
+- Most of these new metrics are *disabled* by default. There are a few exceptions though, which are enabled by default. Rightmost column in the following tables indicates whether each metric is enabled by default or not.
- To enable those metrics which are not enabled by default, set the server parameter `metrics.collector_database_activity` to `ON`. This parameter is dynamic and doesn't require an instance restart.

##### List of enhanced metrics
You can choose from the following categories of enhanced metrics:
### Autovacuum metrics
-Autovacuum metrics can be used to monitor and tune autovacuum performance for Azure Database for PostgreSQL - Flexible Server. Each metric is emitted at a *30-minute* interval and has up to *93 days* of retention. You can create alerts for specific metrics, and you can split and filter metrics data by using the DatabaseName dimension.
+Autovacuum metrics can be used to monitor and tune autovacuum performance for Azure Database for PostgreSQL flexible server. Each metric is emitted at a *30-minute* interval and has up to *93 days* of retention. You can create alerts for specific metrics, and you can split and filter metrics data by using the DatabaseName dimension.
#### How to enable autovacuum metrics
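The enabling step isn't included in this excerpt. As an assumption based on the pattern used for the other enhanced metrics, the sketch below sets a collector server parameter with the Azure CLI; the parameter name `metrics.autovacuum_diagnostics` and all identifiers are placeholders to verify against your server's parameter list.

```bash
# Assumed parameter name; confirm it exists in the server parameters before running.
az postgres flexible-server parameter set \
  --resource-group "<resource-group>" \
  --server-name "<server-name>" \
  --name metrics.autovacuum_diagnostics \
  --value ON
```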
You can use PgBouncer metrics to monitor the performance of the PgBouncer process.
|Display name|Metric ID|Unit|Description|Dimension|Default enabled|
|--|--|--|--|--|--|
-|**Active client connections** |`client_connections_active` |Count|Connections from clients that are associated with an Azure Database for PostgreSQL connection. |DatabaseName|No |
-|**Waiting client connections** |`client_connections_waiting`|Count|Connections from clients that are waiting for an Azure Database for PostgreSQL connection to service them.|DatabaseName|No |
-|**Active server connections** |`server_connections_active` |Count|Connections to Azure Database for PostgreSQL that are in use by a client connection. |DatabaseName|No |
-|**Idle server connections** |`server_connections_idle` |Count|Connections to Azure Database for PostgreSQL that are idle and ready to service a new client connection. |DatabaseName|No |
+|**Active client connections** |`client_connections_active` |Count|Connections from clients that are associated with an Azure Database for PostgreSQL flexible server connection. |DatabaseName|No |
+|**Waiting client connections** |`client_connections_waiting`|Count|Connections from clients that are waiting for an Azure Database for PostgreSQL flexible server connection to service them.|DatabaseName|No |
+|**Active server connections** |`server_connections_active` |Count|Connections to Azure Database for PostgreSQL flexible server that are in use by a client connection. |DatabaseName|No |
+|**Idle server connections** |`server_connections_idle` |Count|Connections to Azure Database for PostgreSQL flexible server that are idle and ready to service a new client connection. |DatabaseName|No |
|**Total pooled connections** |`total_pooled_connections`|Count|Current number of pooled connections. |DatabaseName|No |
|**Number of connection pools** |`num_pools` |Count|Total number of connection pools. |DatabaseName|No |
### Database availability metric
-Is-db-alive is an database server availability metric for Azure Postgres Flexible Server, that returns `[1 for available]` and `[0 for not-available]`. Each metric is emitted at a *1 minute* frequency, and has up to *93 days* of retention. Customers can configure alerts on the metric.
+Is-db-alive is a database server availability metric for Azure Database for PostgreSQL flexible server that returns `[1 for available]` and `[0 for not-available]`. Each metric is emitted at a *1 minute* frequency, and has up to *93 days* of retention. Customers can configure alerts on the metric.
|Display Name |Metric ID |Unit |Description |Dimension |Default enabled|
|--|--|--|--|--|--|
There are several options to visualize Azure Monitor metrics.
|---|---|---|
|Overview page|Most Azure services have an **Overview** page in the Azure portal that includes a **Monitor** section with charts that show recent critical metrics. This information is intended for owners of individual services to quickly assess the performance of the resource. |This page is based on platform metrics that are collected automatically. No configuration is required. |
|[Metrics Explorer](../../azure-monitor/essentials/metrics-getting-started.md)|You can use Metrics Explorer to interactively work with metric data and create metric alerts. You need minimal training to use Metrics Explorer, but you must be familiar with the metrics you want to analyze. |- Once data collection is configured, no other configuration is required.<br>- Platform metrics for Azure resources are automatically available.<br>- Guest metrics for virtual machines are available after an Azure Monitor agent is deployed to the virtual machine.<br>- Application metrics are available after Application Insights is configured. |
-| [Grafana](https://grafana.com/grafan) to visualize your Azure Monitor metrics and logs. | To become familiar with Grafana dashboards, some training is required. However, you can simplify the process by downloading a prebuilt [Azure PostgreSQL grafana monitoring dashboard](https://grafana.com/grafana/dashboards/19556-azure-azure-postgresql-flexible-server-monitoring/), which allows for easy monitoring of all Azure PostgreSQL servers within your organization. |
+| [Grafana](https://grafana.com/grafana) to visualize your Azure Monitor metrics and logs. | To become familiar with Grafana dashboards, some training is required. However, you can simplify the process by downloading a prebuilt [Azure Database for PostgreSQL flexible server Grafana monitoring dashboard](https://grafana.com/grafana/dashboards/19556-azure-azure-postgresql-flexible-server-monitoring/), which allows for easy monitoring of all Azure Database for PostgreSQL flexible server instances within your organization. |
## Logs
-In addition to the metrics, you can use Azure Database for PostgreSQL to configure and access Azure Database for PostgreSQL standard logs. For more information, see [Logging concepts](concepts-logging.md).
+In addition to the metrics, you can use Azure Database for PostgreSQL flexible server to configure and access Azure Database for PostgreSQL standard logs. For more information, see [Logging concepts](concepts-logging.md).
### Logs visualization
## Next steps

-- Learn more about how to [configure and access logs](howto-configure-and-access-logs.md).
+- Learn more about how to [configure and access logs](how-to-configure-and-access-logs.md).
- Learn more about [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/).
- Learn more about [audit logs](concepts-audit.md).
postgresql Concepts Networking Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-networking-private-link.md
Title: Networking overview - Azure Database for PostgreSQL - Flexible Server with Private Link connectivity
-description: Learn about connectivity and networking options in the Flexible Server deployment option for Azure Database for PostgreSQL with Private Link
+ Title: Networking overview with Private Link connectivity
+description: Learn about connectivity and networking options for Azure Database for PostgreSQL - Flexible Server with Private Link.
-# Azure Database for PostgreSQL Flexible Server Networking with Private Link - Preview
+# Azure Database for PostgreSQL - Flexible Server networking with Private Link - Preview
-**Azure Private Link** allows you to create private endpoints for Azure Database for PostgreSQL - Flexible server to bring it inside your Virtual Network (VNET). That functionality is introduced **in addition** to already [existing networking capabilities provided by VNET Integration](./concepts-networking-private.md), which is currently in general availability with Azure Database for PostgreSQL - Flexible Server. With **Private Link**, traffic between your virtual network and the service travels the Microsoft backbone network. Exposing your service to the public internet is no longer necessary. You can create your own private link service in your virtual network and deliver it to your customers. Setup and consumption using Azure Private Link is consistent across Azure PaaS, customer-owned, and shared partner services.
+**Azure Private Link** allows you to create private endpoints for Azure Database for PostgreSQL flexible server to bring it inside your Virtual Network (VNET). That functionality is introduced **in addition** to already [existing networking capabilities provided by VNET Integration](./concepts-networking-private.md), which is currently in general availability with Azure Database for PostgreSQL flexible server. With **Private Link**, traffic between your virtual network and the service travels the Microsoft backbone network. Exposing your service to the public internet is no longer necessary. You can create your own private link service in your virtual network and deliver it to your customers. Setup and consumption using Azure Private Link is consistent across Azure PaaS, customer-owned, and shared partner services.
> [!NOTE]
-> Azure Database for PostgreSQL - Flexible Server supports Private Link based networking in Preview.
+> Azure Database for PostgreSQL flexible server supports Private Link based networking in Preview.
Private Link is exposed to users through two Azure resource types:
The same public service instance can be referenced by multiple private endpoints
- **Global reach: Connect privately to services running in other regions.** The consumer's virtual network could be in region A and it can connect to services behind Private Link in region B.
-## Use Cases for Private Link with Azure Database for PostgreSQL - Flexible Server in Preview
+## Use Cases for Private Link with Azure Database for PostgreSQL flexible server in Preview
Clients can connect to the private endpoint from the same VNet, a peered VNet in the same region or across regions, or via a [VNet-to-VNet connection](../../vpn-gateway/vpn-gateway-howto-vnet-vnet-resource-manager-portal.md) across regions. Additionally, clients can connect from on-premises using ExpressRoute, private peering, or VPN tunneling. Below is a simplified diagram showing the common use cases.

:::image type="content" source="./media/concepts-networking/show-private-link-overview.png" alt-text="Diagram that shows how Azure Private Link works with Private Endpoints." lightbox="./media/concepts-networking/show-private-link-overview.png":::
-### Limitations and Supported Features for Private Link Preview with Azure Database for PostgreSQL - Flexible Server
+### Limitations and Supported Features for Private Link Preview with Azure Database for PostgreSQL flexible server
-In Preview of Private Endpoint for PostgreSQL flexible server, there are certain limitations as explain in cross feature availability matrix below.
+In Preview of Private Endpoint for Azure Database for PostgreSQL flexible server, there are certain limitations, as explained in the cross feature availability matrix below.
-Cross Feature Availability Matrix for preview of Private Endpoint in Azure Database for PostgreSQL - Flexible Server.
+Cross Feature Availability Matrix for preview of Private Endpoint in Azure Database for PostgreSQL flexible server.
| **Feature** | **Availability** | **Notes** |
| --- | --- | --- |
| Private Endpoint DNS | Yes | Works as designed and [documented](../../private-link/private-endpoint-dns.md) |

> [!NOTE]
-> Azure Database for PostgreSQL - Flexible Server support for Private Endpoints in Preview requires enablement of [**PostgreSQL Private Endpoint capability** preview feature in your subscription](../../azure-resource-manager/management/preview-features.md).
+> Azure Database for PostgreSQL flexible server support for Private Endpoints in Preview requires enablement of [**Azure Database for PostgreSQL flexible server Private Endpoint capability** preview feature in your subscription](../../azure-resource-manager/management/preview-features.md).
> Only **after the preview feature is enabled** can you create servers that are Private Endpoint (PE) capable, that is, servers that can be networked using Private Link.
### Connect from an Azure VM in Peered Virtual Network
-Configure [VNet peering](../../virtual-network/tutorial-connect-virtual-networks-powershell.md) to establish connectivity to the Azure Database for PostgreSQL - Flexible server from an Azure VM in a peered VNet.
+Configure [VNet peering](../../virtual-network/tutorial-connect-virtual-networks-powershell.md) to establish connectivity to Azure Database for PostgreSQL flexible server from an Azure VM in a peered VNet.
### Connect from an Azure VM in VNet-to-VNet environment
-Configure [VNet-to-VNet VPN gateway](../../vpn-gateway/vpn-gateway-howto-vnet-vnet-resource-manager-portal.md) connection to establish connectivity to an Azure Database for PostgreSQL - Flexible server from an Azure VM in a different region or subscription.
+Configure [VNet-to-VNet VPN gateway](../../vpn-gateway/vpn-gateway-howto-vnet-vnet-resource-manager-portal.md) connection to establish connectivity to an Azure Database for PostgreSQL flexible server instance from an Azure VM in a different region or subscription.
### Connect from an on-premises environment over VPN
-To establish connectivity from an on-premises environment to the Azure Database for PostgreSQL - Flexible server, choose and implement one of the options:
+To establish connectivity from an on-premises environment to the Azure Database for PostgreSQL flexible server instance, choose and implement one of the options:
- [Point-to-Site Connection](../../vpn-gateway/vpn-gateway-howto-point-to-site-rm-ps.md)
- [Site-to-Site VPN Connection](../../vpn-gateway/vpn-gateway-create-site-to-site-rm-powershell.md)
- [ExpressRoute Circuit](../../expressroute/expressroute-howto-linkvnet-portal-resource-manager.md)
Network policies can be enabled either for Network Security Groups only, for User-Defined Routes only, or for both.
Limitations to Network Security Groups (NSG) and Private Endpoints are listed [here](../../private-link/private-endpoint-overview.md).

> [!IMPORTANT]
- > High availability and other Features of Azure Database for PostgreSQL - Flexible Server require ability to send\receive traffic to **destination port 5432** within Azure virtual network subnet where Azure Database for PostgreSQL - Flexible Server is deployed , as well as to **Azure storage** for log archival. If you create **[Network Security Groups (NSG)](../../virtual-network/network-security-groups-overview.md)** to deny traffic flow to or from your Azure Database for PostgreSQL - Flexible Server within the subnet where it's deployed, please **make sure to allow traffic to destination port 5432** within the subnet, and also to Azure storage by using **[service tag](../../virtual-network/service-tags-overview.md) Azure Storage** as a destination. Also, if you elect to use [Microsoft Entra authentication](concepts-azure-ad-authentication.md) to authenticate logins to your Azure Database for PostgreSQL - Flexible Server please allow outbound traffic to Microsoft Entra ID using Microsoft Entra [service tag](../../virtual-network/service-tags-overview.md).
- > When setting up [Read Replicas across Azure regions](./concepts-read-replicas.md) , Azure Database for PostgreSQL - Flexible Server requires ability to send\receive traffic to **destination port 5432** for both primary and replica, as well as to **[Azure storage](../../virtual-network/service-tags-overview.md#available-service-tags)** in primary and replica regions from both primary and replica servers.
+ > High availability and other features of Azure Database for PostgreSQL flexible server require the ability to send/receive traffic to **destination port 5432** within the Azure virtual network subnet where Azure Database for PostgreSQL flexible server is deployed, as well as to **Azure storage** for log archival. If you create **[Network Security Groups (NSG)](../../virtual-network/network-security-groups-overview.md)** to deny traffic flow to or from your Azure Database for PostgreSQL flexible server instance within the subnet where it's deployed, **make sure to allow traffic to destination port 5432** within the subnet, and also to Azure storage by using **[service tag](../../virtual-network/service-tags-overview.md) Azure Storage** as a destination. Also, if you elect to use [Microsoft Entra authentication](concepts-azure-ad-authentication.md) to authenticate logins to your Azure Database for PostgreSQL flexible server instance, allow outbound traffic to Microsoft Entra ID using Microsoft Entra [service tag](../../virtual-network/service-tags-overview.md).
+ > When setting up [Read Replicas across Azure regions](./concepts-read-replicas.md), Azure Database for PostgreSQL flexible server requires the ability to send/receive traffic to **destination port 5432** for both primary and replica, as well as to **[Azure storage](../../virtual-network/service-tags-overview.md#available-service-tags)** in primary and replica regions from both primary and replica servers.
## Private Link combined with firewall rules

The following situations and outcomes are possible when you use Private Link in combination with firewall rules:

-- If you don't configure any firewall rules, then by default, no traffic is able to access the Azure Database for PostgreSQL Flexible server.
+- If you don't configure any firewall rules, then by default, no traffic is able to access the Azure Database for PostgreSQL flexible server instance.
- If you configure public traffic or a service endpoint and you create private endpoints, then different types of incoming traffic are authorized by the corresponding type of firewall rule.
-- If you don't configure any public traffic or service endpoint and you create private endpoints, then the Azure Database for PostgreSQL Flexible server is accessible only through the private endpoints. If you don't configure public traffic or a service endpoint, after all approved private endpoints are rejected or deleted, no traffic will be able to access the Azure Database for PostgreSQL Flexible server.
+- If you don't configure any public traffic or service endpoint and you create private endpoints, then the Azure Database for PostgreSQL flexible server instance is accessible only through the private endpoints. If you don't configure public traffic or a service endpoint, after all approved private endpoints are rejected or deleted, no traffic will be able to access the Azure Database for PostgreSQL flexible server instance.
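To illustrate the second case, a hedged Azure CLI sketch that adds a public firewall rule alongside any private endpoints; the resource group, server name, rule name, and IP addresses are placeholders.

```bash
# Allow a single client IP over the public endpoint; all values below are placeholders.
az postgres flexible-server firewall-rule create \
  --resource-group "<resource-group>" \
  --name "<server-name>" \
  --rule-name "AllowClientIP" \
  --start-ip-address 203.0.113.10 \
  --end-ip-address 203.0.113.10
```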
## Next steps

-- Learn how to create a flexible server by using the **Private access (VNet integration)** option in [the Azure portal](how-to-manage-virtual-network-portal.md) or [the Azure CLI](how-to-manage-virtual-network-cli.md).
+- Learn how to create an Azure Database for PostgreSQL flexible server instance by using the **Private access (VNet integration)** option in [the Azure portal](how-to-manage-virtual-network-portal.md) or [the Azure CLI](how-to-manage-virtual-network-cli.md).
postgresql Concepts Networking Private https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-networking-private.md
Title: Networking overview - Azure Database for PostgreSQL - Flexible Server with private access (VNET)
-description: Learn about connectivity and networking options in the Flexible Server deployment option for Azure Database for PostgreSQL with private access (VNET)
+ Title: Networking overview with private access (VNET)
+description: Learn about connectivity and networking options for Azure Database for PostgreSQL - Flexible Server with private access (VNET).
Last updated 11/30/2021
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-This article describes connectivity and networking concepts for Azure Database for PostgreSQL - Flexible Server.
+This article describes connectivity and networking concepts for Azure Database for PostgreSQL flexible server.
-When you create an Azure Database for PostgreSQL - Flexible Server instance (a *flexible server*), you must choose one of the following networking options: **Private access (VNet integration)** or **Public access (allowed IP addresses) and Private Endpoint**. This document will describe **Private access (VNet integration)** networking option.
+When you create an Azure Database for PostgreSQL flexible server instance, you must choose one of the following networking options: **Private access (VNet integration)** or **Public access (allowed IP addresses) and Private Endpoint**. This document describes the **Private access (VNet integration)** networking option.
## Private access (VNet integration)
-You can deploy a flexible server into your [Azure virtual network (VNet)](../../virtual-network/virtual-networks-overview.md) using **[VNET injection](../../virtual-network/virtual-network-for-azure-services.md)**. Azure virtual networks provide private and secure network communication. Resources in a virtual network can communicate through **private IP addresses** that were assigned on this network.
+You can deploy an Azure Database for PostgreSQL flexible server instance into your [Azure virtual network (VNet)](../../virtual-network/virtual-networks-overview.md) using **[VNET injection](../../virtual-network/virtual-network-for-azure-services.md)**. Azure virtual networks provide private and secure network communication. Resources in a virtual network can communicate through **private IP addresses** that were assigned on this network.
Choose this networking option if you want the following capabilities:
-* Connect from Azure resources in the same virtual network to your flexible server by using private IP addresses.
-* Use VPN or Azure ExpressRoute to connect from non-Azure resources to your flexible server.
-* Ensure that the flexible server has no public endpoint that's accessible through the internet.
+* Connect from Azure resources in the same virtual network to your Azure Database for PostgreSQL flexible server instance by using private IP addresses.
+* Use VPN or Azure ExpressRoute to connect from non-Azure resources to your Azure Database for PostgreSQL flexible server instance.
+* Ensure that the Azure Database for PostgreSQL flexible server instance has no public endpoint that's accessible through the internet.
In the preceding diagram:
-- Flexible servers are injected into subnet 10.0.1.0/24 of the VNet-1 virtual network.
-- Applications that are deployed on different subnets within the same virtual network can access flexible servers directly.
-- Applications that are deployed on a different virtual network (VNet-2) don't have direct access to flexible servers. You have to perform [virtual network peering for a private DNS zone](#private-dns-zone-and-virtual-network-peering) before they can access the flexible server.
+- Azure Database for PostgreSQL flexible server instances are injected into subnet 10.0.1.0/24 of the VNet-1 virtual network.
+- Applications that are deployed on different subnets within the same virtual network can access Azure Database for PostgreSQL flexible server instances directly.
+- Applications that are deployed on a different virtual network (VNet-2) don't have direct access to Azure Database for PostgreSQL flexible server instances. You have to perform [virtual network peering for a private DNS zone](#private-dns-zone-and-virtual-network-peering) before they can access the flexible server.
### Virtual network concepts
-An Azure virtual network contains a private IP address space that's configured for your use. Your virtual network must be in the same Azure region as your flexible server. To learn more about virtual networks, see the [Azure Virtual Network overview](../../virtual-network/virtual-networks-overview.md).
+An Azure virtual network contains a private IP address space that's configured for your use. Your virtual network must be in the same Azure region as your Azure Database for PostgreSQL flexible server instance. To learn more about virtual networks, see the [Azure Virtual Network overview](../../virtual-network/virtual-networks-overview.md).
-Here are some concepts to be familiar with when you're using virtual networks where resources are [integrated into VNET](../../virtual-network/virtual-network-for-azure-services.md) with PostgreSQL flexible servers:
+Here are some concepts to be familiar with when you're using virtual networks where resources are [integrated into VNET](../../virtual-network/virtual-network-for-azure-services.md) with Azure Database for PostgreSQL flexible server instances:
* **Delegated subnet**. A virtual network contains subnets (sub-networks). Subnets enable you to segment your virtual network into smaller address spaces. Azure resources are deployed into specific subnets within a virtual network.
- Your VNET integrated flexible server must be in a subnet that's *delegated*. That is, only Azure Database for PostgreSQL - Flexible Server instances can use that subnet. No other Azure resource types can be in the delegated subnet. You delegate a subnet by assigning its delegation property as `Microsoft.DBforPostgreSQL/flexibleServers`.
- The smallest CIDR range you can specify for the subnet is /28, which provides sixteen IP addresses, however the first and last address in any network or subnet can't be assigned to any individual host. Azure reserves five IPs to be utilized internally by Azure networking, which include two IPs that cannot be assigned to host, mentioned above. This leaves you eleven available IP addresses for /28 CIDR range, whereas a single Flexible Server with High Availability features utilizes 4 addresses.
- For Replication and Microsoft Entra connections please make sure Route Tables do not affect traffic.A common pattern is route all outbound traffic via an Azure Firewall or a custom / on premise network filtering appliance.
+ Your VNET integrated Azure Database for PostgreSQL flexible server instance must be in a subnet that's *delegated*. That is, only Azure Database for PostgreSQL flexible server instances can use that subnet. No other Azure resource types can be in the delegated subnet. You delegate a subnet by assigning its delegation property as `Microsoft.DBforPostgreSQL/flexibleServers`.
+ The smallest CIDR range you can specify for the subnet is /28, which provides sixteen IP addresses. However, the first and last address in any network or subnet can't be assigned to any individual host. Azure reserves five IPs for internal use by Azure networking, which include the two IPs that can't be assigned to a host, mentioned above. This leaves you eleven available IP addresses for a /28 CIDR range, whereas a single Azure Database for PostgreSQL flexible server instance with High Availability features utilizes four addresses.
+ For Replication and Microsoft Entra connections, please make sure Route Tables don't affect traffic. A common pattern is to route all outbound traffic via an Azure Firewall or a custom on-premises network filtering appliance.
If the subnet has a Route Table associated with the rule to route all traffic to a virtual appliance:
 * Add a rule with Destination Service Tag "AzureActiveDirectory" and next hop "Internet"
- * Add a rule with Destination IP range same as PostgreSQL subnet range and next hop "Virtual Network"
+ * Add a rule with Destination IP range same as the Azure Database for PostgreSQL flexible server subnet range and next hop "Virtual Network"
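As a hedged sketch of the two rules described above, the Azure CLI commands below create the routes; the resource group, route table name, and subnet prefix are placeholders, and using a service tag as a route address prefix is assumed to be supported in your environment.

```bash
# Route Microsoft Entra ID traffic to the internet instead of the virtual appliance.
az network route-table route create \
  --resource-group "<resource-group>" \
  --route-table-name "<route-table-name>" \
  --name allow-entra-id \
  --address-prefix AzureActiveDirectory \
  --next-hop-type Internet

# Keep traffic destined for the server's delegated subnet inside the virtual network.
az network route-table route create \
  --resource-group "<resource-group>" \
  --route-table-name "<route-table-name>" \
  --name keep-postgres-in-vnet \
  --address-prefix 10.0.1.0/24 \
  --next-hop-type VnetLocal
```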
> [!IMPORTANT]
> The names `AzureFirewallSubnet`, `AzureFirewallManagementSubnet`, `AzureBastionSubnet`, and `GatewaySubnet` are reserved within Azure. Don't use any of these as your subnet name.
- > For Azure Storage connection please make sure PostgreSQL delegated subnet has Service Endpoints for Azure Storage in the region of the VNet. The endpoints are created by default, but please take care not to remove these manually.
+ > For Azure Storage connection please make sure the Azure Database for PostgreSQL flexible server delegated subnet has Service Endpoints for Azure Storage in the region of the VNet. The endpoints are created by default, but please take care not to remove these manually.
* **Network security group (NSG)**. Security rules in NSGs enable you to filter the type of network traffic that can flow in and out of virtual network subnets and network interfaces. For more information, see the [NSG overview](../../virtual-network/network-security-groups-overview.md).
For more information, see the [ASG overview](../../virtual-network/application-security-groups.md).
- At this time, we don't support NSGs where an ASG is part of the rule with Azure Database for PostgreSQL - Flexible Server. We currently advise using [IP-based source or destination filtering](../../virtual-network/network-security-groups-overview.md#security-rules) in an NSG.
+ At this time, we don't support NSGs where an ASG is part of the rule with Azure Database for PostgreSQL flexible server. We currently advise using [IP-based source or destination filtering](../../virtual-network/network-security-groups-overview.md#security-rules) in an NSG.
> [!IMPORTANT]
- > High availability and other Features of Azure Database for PostgreSQL - Flexible Server require ability to send\receive traffic to **destination port 5432** within Azure virtual network subnet where Azure Database for PostgreSQL - Flexible Server is deployed , as well as to **Azure storage** for log archival. If you create **[Network Security Groups (NSG)](../../virtual-network/network-security-groups-overview.md)** to deny traffic flow to or from your Azure Database for PostgreSQL - Flexible Server within the subnet where its deployed, please **make sure to allow traffic to destination port 5432** within the subnet, and also to Azure storage by using **[service tag](../../virtual-network/service-tags-overview.md) Azure Storage** as a destination. Also, if you elect to use [Microsoft Entra authentication](concepts-azure-ad-authentication.md) to authenticate logins to your Azure Database for PostgreSQL - Flexible Server please allow outbound traffic to Microsoft Entra ID using Microsoft Entra [service tag](../../virtual-network/service-tags-overview.md).
- > When setting up [Read Replicas across Azure regions](./concepts-read-replicas.md) , Azure Database for PostgreSQL - Flexible Server requires ability to send\receive traffic to **destination port 5432** for both primary and replica, as well as to **[Azure storage](../../virtual-network/service-tags-overview.md#available-service-tags)** in primary and replica regions from both primary and replica servers.
+ > High availability and other features of Azure Database for PostgreSQL flexible server require the ability to send/receive traffic to **destination port 5432** within the Azure virtual network subnet where Azure Database for PostgreSQL flexible server is deployed, as well as to **Azure storage** for log archival. If you create **[Network Security Groups (NSG)](../../virtual-network/network-security-groups-overview.md)** to deny traffic flow to or from your Azure Database for PostgreSQL flexible server instance within the subnet where it's deployed, **make sure to allow traffic to destination port 5432** within the subnet, and also to Azure storage by using **[service tag](../../virtual-network/service-tags-overview.md) Azure Storage** as a destination. Also, if you elect to use [Microsoft Entra authentication](concepts-azure-ad-authentication.md) to authenticate logins to your Azure Database for PostgreSQL flexible server instance, allow outbound traffic to Microsoft Entra ID using the Microsoft Entra [service tag](../../virtual-network/service-tags-overview.md).
+ > When setting up [Read Replicas across Azure regions](./concepts-read-replicas.md), Azure Database for PostgreSQL flexible server requires the ability to send/receive traffic to **destination port 5432** for both primary and replica, as well as to **[Azure storage](../../virtual-network/service-tags-overview.md#available-service-tags)** in primary and replica regions from both primary and replica servers.
* **Private DNS zone integration**. Azure private DNS zone integration allows you to resolve the private DNS within the current virtual network or any in-region peered virtual network where the private DNS zone is linked.

### Using a private DNS zone

[Azure Private DNS](../../dns/private-dns-overview.md) provides a reliable and secure DNS service for your virtual network. Azure Private DNS manages and resolves domain names in the virtual network without the need to configure a custom DNS solution.
-When using private network access with Azure virtual network, providing the private DNS zone information is **mandatory** in order to be able to do DNS resolution. For new Azure Database for PostgreSQL Flexible Server creation using private network access, private DNS zones will need to be used while configuring flexible servers with private access.
-For new Azure Database for PostgreSQL Flexible Server creation using private network access with API, ARM, or Terraform, create private DNS zones and use them while configuring flexible servers with private access. See more information on [REST API specifications for Microsoft Azure](https://github.com/Azure/azure-rest-api-specs/blob/master/specification/postgresql/resource-manager/Microsoft.DBforPostgreSQL/stable/2021-06-01/postgresql.json). If you use the [Azure portal](./how-to-manage-virtual-network-portal.md) or [Azure CLI](./how-to-manage-virtual-network-cli.md) for creating flexible servers, you can either provide a private DNS zone name that you had previously created in the same or a different subscription or a default private DNS zone is automatically created in your subscription.
+When using private network access with Azure virtual network, providing the private DNS zone information is **mandatory** in order to be able to do DNS resolution. For new Azure Database for PostgreSQL flexible server instance creation using private network access, private DNS zones will need to be used while configuring Azure Database for PostgreSQL flexible server instances with private access.
+For new Azure Database for PostgreSQL flexible server instance creation using private network access with API, ARM, or Terraform, create private DNS zones and use them while configuring Azure Database for PostgreSQL flexible server instances with private access. See more information on [REST API specifications for Microsoft Azure](https://github.com/Azure/azure-rest-api-specs/blob/master/specification/postgresql/resource-manager/Microsoft.DBforPostgreSQL/stable/2021-06-01/postgresql.json). If you use the [Azure portal](./how-to-manage-virtual-network-portal.md) or [Azure CLI](./how-to-manage-virtual-network-cli.md) for creating Azure Database for PostgreSQL flexible server instances, you can either provide a private DNS zone name that you had previously created in the same or a different subscription or a default private DNS zone is automatically created in your subscription.
-If you use an Azure API, an Azure Resource Manager template (ARM template), or Terraform, **create private DNS zones that end with `.postgres.database.azure.com`**. Use those zones while configuring flexible servers with private access. For example, use the form `[name1].[name2].postgres.database.azure.com` or `[name].postgres.database.azure.com`. If you choose to use the form `[name].postgres.database.azure.com`, the name **can't** be the name you use for one of your flexible servers or an error message will be shown during provisioning. For more information, see the [private DNS zones overview](../../dns/private-dns-overview.md).
+If you use an Azure API, an Azure Resource Manager template (ARM template), or Terraform, **create private DNS zones that end with `.postgres.database.azure.com`**. Use those zones while configuring Azure Database for PostgreSQL flexible server instances with private access. For example, use the form `[name1].[name2].postgres.database.azure.com` or `[name].postgres.database.azure.com`. If you choose to use the form `[name].postgres.database.azure.com`, the name **can't** be the name you use for one of your Azure Database for PostgreSQL flexible server instances or an error message will be shown during provisioning. For more information, see the [private DNS zones overview](../../dns/private-dns-overview.md).
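For illustration, a hedged Azure CLI sketch that creates a zone with the required suffix and links it to a virtual network; the resource group, zone, link, and virtual network names are placeholders.

```bash
# Create a private DNS zone ending in .postgres.database.azure.com and link it to the VNet;
# all names below are placeholders.
az network private-dns zone create \
  --resource-group "<resource-group>" \
  --name "mydbzone.postgres.database.azure.com"

az network private-dns link vnet create \
  --resource-group "<resource-group>" \
  --zone-name "mydbzone.postgres.database.azure.com" \
  --name "<link-name>" \
  --virtual-network "<vnet-name>" \
  --registration-enabled false
```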
-Using Azure Portal, API, CLI or ARM, you can also change private DNS Zone from the one you provided when creating your Azure Database for PostgreSQL - Flexible Server to another private DNS zone that exists the same or different subscription.
+Using the Azure portal, API, CLI, or ARM, you can also change the private DNS zone from the one you provided when creating your Azure Database for PostgreSQL flexible server instance to another private DNS zone that exists in the same or a different subscription.
> [!IMPORTANT]
- > Ability to change private DNS Zone from the one you provided when creating your Azure Database for PostgreSQL - Flexible Server to another private DNS zone is currently disabled for servers with High Availability feature enabled.
+ > The ability to change the private DNS zone from the one you provided when creating your Azure Database for PostgreSQL flexible server instance to another private DNS zone is currently disabled for servers with the High Availability feature enabled.
-After you create a private DNS zone in Azure, you'll need to [link](../../dns/private-dns-virtual-network-links.md) a virtual network to it. Once linked, resources hosted in that virtual network can access the private DNS zone.
+After you create a private DNS zone in Azure, you need to [link](../../dns/private-dns-virtual-network-links.md) a virtual network to it. Once linked, resources hosted in that virtual network can access the private DNS zone.
> [!IMPORTANT]
- > We no longer validate virtual network link presence on server creation for Azure Database for PostgreSQL - Flexible Server with private networking. When creating server through the Portal we provide customer choice to create link on server creation via checkbox *"Link Private DNS Zone your virtual network"* in the Azure Portal.
+ > We no longer validate virtual network link presence on server creation for Azure Database for PostgreSQL flexible server with private networking. When creating a server through the portal, we give you the choice to create the link at server creation via the *"Link Private DNS Zone your virtual network"* checkbox in the Azure portal.
[DNS private zones are resilient](../../dns/private-dns-overview.md) to regional outages because zone data is globally available. Resource records in a private zone are automatically replicated across regions. Azure Private DNS is an availability zone foundational, zone-redundant service. For more information, see [Azure services with availability zone support](../../reliability/availability-zones-service-support.md#azure-services-with-availability-zone-support).

### Integration with a custom DNS server
-If you're using a custom DNS server, you must use a DNS forwarder to resolve the FQDN of Azure Database for PostgreSQL - Flexible Server. The forwarder IP address should be [168.63.129.16](../../virtual-network/what-is-ip-address-168-63-129-16.md).
+If you're using a custom DNS server, you must use a DNS forwarder to resolve the FQDN of Azure Database for PostgreSQL flexible server. The forwarder IP address should be [168.63.129.16](../../virtual-network/what-is-ip-address-168-63-129-16.md).
The custom DNS server should be inside the virtual network or reachable via the virtual network's DNS server setting. To learn more, see [Name resolution that uses your own DNS server](../../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md#name-resolution-that-uses-your-own-dns-server).

### Private DNS zone and virtual network peering
-Private DNS zone settings and virtual network peering are independent of each other. If you want to connect to the flexible server from a client that's provisioned in another virtual network from the same region or a different region, you have to **link** the private DNS zone with the virtual network. For more information, see [Link the virtual network](../../dns/private-dns-getstarted-portal.md#link-the-virtual-network).
+Private DNS zone settings and virtual network peering are independent of each other. If you want to connect to the Azure Database for PostgreSQL flexible server instance from a client that's provisioned in another virtual network in the same region or a different region, you have to **link** the private DNS zone with the virtual network. For more information, see [Link the virtual network](../../dns/private-dns-getstarted-portal.md#link-the-virtual-network).
> [!NOTE]
-> Only private DNS zone names that end with **'postgres.database.azure.com'** can be linked. Your DNS zone name cannot be the same as your flexible server(s) otherwise name resolution will fail.
+> Only private DNS zone names that end with **'postgres.database.azure.com'** can be linked. Your DNS zone name can't be the same as the name of your Azure Database for PostgreSQL flexible server instance(s); otherwise, name resolution will fail.
To map a server name to the DNS record, you can run the *nslookup* command in [Azure Cloud Shell](../../cloud-shell/overview.md) using Azure PowerShell or Bash, substituting the name of your server for the <server_name> parameter in the example below:
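A minimal sketch of such a lookup from a Bash session in Cloud Shell (keep `<server_name>` as a placeholder for your own server name):

```bash
# Resolve the server FQDN. From a virtual network linked to the private DNS zone,
# this should return the server's private IP address.
nslookup <server_name>.postgres.database.azure.com
```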
Use [Azure Virtual Network Manager (AVNM)](../../virtual-network-manager/overvie
### Replication across Azure regions and virtual networks with private networking
-Database replication is the process of copying data from a central or primary server to multiple servers known as replicas. The primary server accepts read and write operations whereas the replicas serve read-only transactions. The primary server and replicas collectively form a database cluster.The goal of database replication is to ensure redundancy, consistency, high availability, and accessibility of data, especially in high-traffic, mission-critical applications.
+Database replication is the process of copying data from a central or primary server to multiple servers known as replicas. The primary server accepts read and write operations whereas the replicas serve read-only transactions. The primary server and replicas collectively form a database cluster. The goal of database replication is to ensure redundancy, consistency, high availability, and accessibility of data, especially in high-traffic, mission-critical applications.
-Azure Database for PostgreSQL - Flexible Server offers two methods for replications: physical (i.e. streaming) via [built -in Read Replica feature](./concepts-read-replicas.md) and [logical replication](./concepts-logical.md). Both are ideal for different use cases, and a user may choose one over the other depending on the end goal.
+Azure Database for PostgreSQL flexible server offers two methods for replication: physical (that is, streaming) replication via the [built-in read replica feature](./concepts-read-replicas.md) and [logical replication](./concepts-logical.md). Each is suited to different use cases, and a user may choose one over the other depending on the end goal.
Replication across Azure regions, with separate [virtual networks (VNETs)](../../virtual-network/virtual-networks-overview.md) in each region, **requires connectivity across regional virtual network boundaries** that can be provided via **[virtual network peering](../../virtual-network/virtual-network-peering-overview.md)** or in **[Hub and Spoke architectures](#using-hub-and-spoke-private-networking-design) via network appliance**.
-By default **DNS name resolution** is **scoped to a virtual network**. This means that any client in one virtual network (VNET1) is unable to resolve the Flexible Server FQDN in another virtual network (VNET2)
+By default, **DNS name resolution** is **scoped to a virtual network**. This means that any client in one virtual network (VNET1) is unable to resolve the Azure Database for PostgreSQL flexible server FQDN in another virtual network (VNET2).
-In order to resolve this issue, you must make sure clients in VNET1 can access the Flexible Server Private DNS Zone. This can be achieved by adding a **[virtual network link](../../dns/private-dns-virtual-network-links.md)** to the Private DNS Zone of your Flexible Server instance.
+To resolve this issue, you must make sure clients in VNET1 can access the private DNS zone of the Azure Database for PostgreSQL flexible server instance. This can be achieved by adding a **[virtual network link](../../dns/private-dns-virtual-network-links.md)** to the private DNS zone of your Azure Database for PostgreSQL flexible server instance.
### Unsupported virtual network scenarios
In order to resolve this issue, you must make sure clients in VNET1 can access t
Here are some limitations for working with virtual networks created via VNET integration:
-* After a flexible server is deployed to a virtual network and subnet, you can't move it to another virtual network or subnet. You can't move the virtual network into another resource group or subscription.
+* After an Azure Database for PostgreSQL flexible server instance is deployed to a virtual network and subnet, you can't move it to another virtual network or subnet. You can't move the virtual network into another resource group or subscription.
* Subnet size (address spaces) can't be increased after resources exist in the subnet.
-* VNET injected resources cannot interact with Private Link by default. If you with to use **[Private Link](../../private-link/private-link-overview.md) for private networking see [Azure Database for PostgreSQL Flexible Server Networking with Private Link - Preview](./concepts-networking-private-link.md)**
+* VNET injected resources can't interact with Private Link by default. If you wish to use **[Private Link](../../private-link/private-link-overview.md) for private networking, see [Azure Database for PostgreSQL flexible server networking with Private Link - Preview](./concepts-networking-private-link.md)**
> [!IMPORTANT]
-> Azure Resource Manager supports ability to **lock** resources, as a security control. Resource locks are applied to the resource, and are effective across all users and roles. There are two types of resource lock: **CanNotDelete** and **ReadOnly**. These lock types can be applied either to a Private DNS zone, or to an individual record set. **Applying a lock of either type against Private DNS Zone or individual record set may interfere with ability of Azure Database for PostgreSQL - Flexible Server service to update DNS records** and cause issues during important operations on DNS, such as High Availability failover from primary to secondary. For these reasons, please make sure you are **not** utilizing DNS private zone or record locks when utilizing High Availability features with Azure Database for PostgreSQL - Flexible Server.
+> Azure Resource Manager supports the ability to **lock** resources as a security control. Resource locks are applied to the resource and are effective across all users and roles. There are two types of resource lock: **CanNotDelete** and **ReadOnly**. These lock types can be applied either to a private DNS zone or to an individual record set. **Applying a lock of either type against a private DNS zone or an individual record set may interfere with the ability of Azure Database for PostgreSQL flexible server to update DNS records** and cause issues during important operations on DNS, such as high availability failover from primary to secondary. For these reasons, make sure you're **not** using DNS private zone or record locks when using high availability features with Azure Database for PostgreSQL flexible server.
## Host name
-Regardless of the networking option that you choose, we recommend that you always use an **FQDN** as host name when connecting to your flexible server. The server's IP address is not guaranteed to remain static. Using the FQDN will help you avoid making changes to your connection string.
+Regardless of the networking option that you choose, we recommend that you always use an **FQDN** as host name when connecting to your Azure Database for PostgreSQL flexible server instance. The server's IP address is not guaranteed to remain static. Using the FQDN will help you avoid making changes to your connection string.
An example that uses an FQDN as a host name is `hostname = servername.postgres.database.azure.com`. Where possible, avoid using `hostname = 10.0.0.4` (a private address) or `hostname = 40.2.45.67` (a public address). ## Next steps
-* Learn how to create a flexible server by using the **Private access (VNet integration)** option in [the Azure portal](how-to-manage-virtual-network-portal.md) or [the Azure CLI](how-to-manage-virtual-network-cli.md).
+* Learn how to create an Azure Database for PostgreSQL flexible server instance by using the **Private access (VNet integration)** option in [the Azure portal](how-to-manage-virtual-network-portal.md) or [the Azure CLI](how-to-manage-virtual-network-cli.md).
postgresql Concepts Networking Public https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-networking-public.md
Title: Networking overview - Azure Database for PostgreSQL - Flexible Server with public access (allowed IP addresses)
-description: Learn about connectivity and networking with public access in the Flexible Server deployment option for Azure Database for PostgreSQL.
+ Title: Networking overview with public access (allowed IP addresses)
+description: Learn about connectivity and networking with public access for Azure Database for PostgreSQL - Flexible Server.
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-This article describes connectivity and networking concepts for Azure Database for PostgreSQL - Flexible Server.
+This article describes connectivity and networking concepts for Azure Database for PostgreSQL flexible server.
-When you create an Azure Database for PostgreSQL - Flexible Server instance (a *flexible server*), you must choose one of the following networking options: **Private access (VNet integration)** or **Public access (allowed IP addresses) and Private Endpoint**.
+When you create an Azure Database for PostgreSQL flexible server instance, you must choose one of the following networking options: **Private access (VNet integration)** or **Public access (allowed IP addresses) and Private Endpoint**.
The following characteristics apply whether you choose to use the private access or the public access option: -- Connections from allowed IP addresses need to authenticate to the PostgreSQL server with valid credentials.
+- Connections from allowed IP addresses need to authenticate to the Azure Database for PostgreSQL flexible server instance with valid credentials.
- Connection encryption is enforced for your network traffic. - The server has a fully qualified domain name (FQDN). For the `hostname` property in connection strings, we recommend using the FQDN instead of an IP address. - Both options control access at the server level, not at the database or table level. You would use PostgreSQL's roles properties to control database, table, and other object access. > [!NOTE]
-> Because Azure Database for PostgreSQL is a managed database service, users are not provided host or OS access to view or modify configuration files such as `pg_hba.conf`. The content of the files is automatically updated based on the network settings.
+> Because Azure Database for PostgreSQL flexible server is a managed database service, users are not provided host or OS access to view or modify configuration files such as `pg_hba.conf`. The content of the files is automatically updated based on the network settings.
-## Use Public Access Networking with Flexible Server
+## Use Public Access Networking with Azure Database for PostgreSQL flexible server
-When you choose the **Public Access** method, your PostgreSQL Flexible server is accessed through a public endpoint over the internet. The public endpoint is a publicly resolvable DNS address. The phrase **allowed IP addresses** refers to a range of IP addresses that you choose to give permission to access your server. These permissions are called *firewall rules*.
+When you choose the **Public Access** method, your Azure Database for PostgreSQL flexible server instance is accessed through a public endpoint over the internet. The public endpoint is a publicly resolvable DNS address. The phrase **allowed IP addresses** refers to a range of IP addresses that you choose to give permission to access your server. These permissions are called *firewall rules*.
Choose this networking option if you want the following capabilities: - Connect from Azure resources that don't support virtual networks. - Connect from resources outside Azure that are not connected by VPN or ExpressRoute.-- Ensure that the flexible server has a public endpoint that's accessible through the internet.
+- Ensure that the Azure Database for PostgreSQL flexible server instance has a public endpoint that's accessible through the internet.
Characteristics of the public access method include: -- Only the IP addresses that you allow have permission to access your PostgreSQL flexible server. By default, no IP addresses are allowed. You can add IP addresses during server creation or afterward.-- Your PostgreSQL server has a publicly resolvable DNS name.-- Your flexible server is not in one of your Azure virtual networks.
+- Only the IP addresses that you allow have permission to access your Azure Database for PostgreSQL flexible server instance. By default, no IP addresses are allowed. You can add IP addresses during server creation or afterward.
+- Your Azure Database for PostgreSQL flexible server instance has a publicly resolvable DNS name.
+- Your Azure Database for PostgreSQL flexible server instance is not in one of your Azure virtual networks.
- Network traffic to and from your server does not go over a private network. The traffic uses the general internet pathways. ### Firewall rules
-Server-level firewall rules apply to all databases on the same Azure Database for PostgreSQL server. If the source IP address of the request is within one of the ranges specified in the server-level firewall rules, the connection is granted otherwise it is rejected. For example, if your application connects with JDBC driver for PostgreSQL, you may encounter this error attempting to connect when the firewall is blocking the connection.
+Server-level firewall rules apply to all databases on the same Azure Database for PostgreSQL flexible server instance. If the source IP address of the request is within one of the ranges specified in the server-level firewall rules, the connection is granted; otherwise, it's rejected. For example, if your application connects with the JDBC driver for PostgreSQL, you might encounter the following error when attempting to connect while the firewall is blocking the connection:
```java java.util.concurrent.ExecutionException: java.lang.RuntimeException: org.postgresql.util.PSQLException: FATAL: no pg_hba.conf entry for host "123.45.67.890", user "adminuser", database "postgresql", SSL ``` > [!NOTE]
-> To access Azure Database for PostgreSQL- Flexible Server from your local computer, ensure that the firewall on your network and local computer allow outgoing communication on TCP port 5432.
+> To access Azure Database for PostgreSQL flexible server from your local computer, ensure that the firewalls on your network and local computer allow outgoing communication on TCP port 5432.
### Programmatically managed firewall rules In addition to the Azure portal, firewall rules can be managed programmatically by using the Azure CLI. See [Create and manage Azure Database for PostgreSQL - Flexible Server firewall rules using the Azure CLI](./how-to-manage-firewall-cli.md)
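For example, a rule that allows a single client IP address might look like the following sketch; the resource group, server name, and IP address are placeholders:

```bash
# Create a server-level firewall rule that allows one client IP address.
az postgres flexible-server firewall-rule create \
  --resource-group myResourceGroup \
  --name mydemoserver \
  --rule-name AllowMyClientIP \
  --start-ip-address 203.0.113.5 \
  --end-ip-address 203.0.113.5
```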
This setting can be enabled from the Azure portal by checking the **Allow public
### Troubleshoot public access issues
-Consider the following points when access to the Azure Database for PostgreSQL service doesn't behave as you expect:
+Consider the following points when access to Azure Database for PostgreSQL flexible server doesn't behave as you expect:
-- **Changes to the allowlist have not taken effect yet**. There might be as much as a five-minute delay for changes to the firewall configuration of the Azure Database for PostgreSQL server to take effect.
+- **Changes to the allowlist haven't taken effect yet**. There might be as much as a five-minute delay for changes to the firewall configuration of the Azure Database for PostgreSQL flexible server instance to take effect.
-- **Authentication failed**. If a user doesn't have permissions on the Azure Database for PostgreSQL server or the password is incorrect, the connection to the Azure Database for PostgreSQL server is denied. Creating a firewall setting only provides clients with an opportunity to try connecting to your server. Each client must still provide the necessary security credentials.
+- **Authentication failed**. If a user doesn't have permissions on the Azure Database for PostgreSQL flexible server instance or the password is incorrect, the connection to the Azure Database for PostgreSQL flexible server instance is denied. Creating a firewall setting only provides clients with an opportunity to try connecting to your server. Each client must still provide the necessary security credentials.
- **Dynamic client IP address is preventing access**. If you have an internet connection with dynamic IP addressing and you're having trouble getting through the firewall, try one of the following solutions:
- * Ask your internet service provider (ISP) for the IP address range assigned to your client computers that access the Azure Database for PostgreSQL server. Then add the IP address range as a firewall rule.
+ * Ask your internet service provider (ISP) for the IP address range assigned to your client computers that access the Azure Database for PostgreSQL flexible server instance. Then add the IP address range as a firewall rule.
* Get static IP addressing instead for your client computers, and then add the static IP address as a firewall rule. - **Firewall rule is not available for IPv6 format**. The firewall rules must be in IPv4 format. If you specify firewall rules in IPv6 format, you'll get a validation error. ## Host name
-Regardless of the networking option that you choose, we recommend that you always use an FQDN as host name when connecting to your flexible server. The server's IP address is not guaranteed to remain static. Using the FQDN will help you avoid making changes to your connection string.
+Regardless of the networking option that you choose, we recommend that you always use an FQDN as host name when connecting to your Azure Database for PostgreSQL flexible server instance. The server's IP address is not guaranteed to remain static. Using the FQDN will help you avoid making changes to your connection string.
An example that uses an FQDN as a host name is `hostname = servername.postgres.database.azure.com`. Where possible, avoid using `hostname = 10.0.0.4` (a private address) or `hostname = 40.2.45.67` (a public address). ## Next steps -- Learn how to create a flexible server by using the **Public access (allowed IP addresses)** option in [the Azure portal](how-to-manage-firewall-portal.md) or [the Azure CLI](how-to-manage-firewall-cli.md).
+- Learn how to create an Azure Database for PostgreSQL flexible server instance by using the **Public access (allowed IP addresses)** option in [the Azure portal](how-to-manage-firewall-portal.md) or [the Azure CLI](how-to-manage-firewall-cli.md).
postgresql Concepts Networking Ssl Tls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-networking-ssl-tls.md
Title: Networking overview - Azure Database for PostgreSQL - Flexible Server using SSL and TLS
-description: Learn about secure connectivity with Flexible Server using SSL and TLS
+ Title: Networking overview using SSL and TLS
+description: Learn about secure connectivity with Flexible Server using SSL and TLS.
-# Secure connectivity with TLS and SSL
+# Secure connectivity with TLS and SSL in Azure Database for PostgreSQL - Flexible Server
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-Azure Database for PostgreSQL - Flexible Server enforces connecting your client applications to the PostgreSQL service by using Transport Layer Security (TLS). TLS is an industry-standard protocol that ensures encrypted network connections between your database server and client applications. TLS is an updated protocol of Secure Sockets Layer (SSL).
+Azure Database for PostgreSQL flexible server enforces connecting your client applications to Azure Database for PostgreSQL flexible server by using Transport Layer Security (TLS). TLS is an industry-standard protocol that ensures encrypted network connections between your database server and client applications. TLS is an updated protocol of Secure Sockets Layer (SSL).
## What is TLS?
Azure Database for PostgreSQL supports TLS version 1.2 and later. In [RFC 8996](
All incoming connections that use earlier versions of the TLS protocol, such as TLS 1.0 and TLS 1.1, are denied by default. > [!NOTE]
-> SSL and TLS certificates certify that your connection is secured with state-of-the-art encryption protocols. By encrypting your connection on the wire, you prevent unauthorized access to your data while in transit. This is why we strongly recommend using latest versions of TLS to encrypt your connections to Azure Database for PostgreSQL - Flexible Server.
-> Although it's not recommended, if needed, you have an option to disable TLS\SSL for connections to Azure Database for PostgreSQL - Flexible Server by updating the **require_secure_transport** server parameter to OFF. You can also set TLS version by setting **ssl_min_protocol_version** and **ssl_max_protocol_version** server parameters.
+> SSL and TLS certificates certify that your connection is secured with state-of-the-art encryption protocols. By encrypting your connection on the wire, you prevent unauthorized access to your data while in transit. This is why we strongly recommend using the latest versions of TLS to encrypt your connections to Azure Database for PostgreSQL flexible server.
+> Although it's not recommended, if needed, you have an option to disable TLS/SSL for connections to Azure Database for PostgreSQL flexible server by updating the **require_secure_transport** server parameter to OFF. You can also set the TLS version by setting the **ssl_min_protocol_version** and **ssl_max_protocol_version** server parameters.
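As a hedged sketch, these server parameters can be changed with the Azure CLI along these lines; the resource group and server names are placeholders, and you should confirm the allowed values for your server version:

```bash
# Not recommended: turn off enforced TLS/SSL for client connections.
az postgres flexible-server parameter set \
  --resource-group myResourceGroup \
  --server-name mydemoserver \
  --name require_secure_transport \
  --value off

# Alternatively, keep TLS on and set the minimum accepted protocol version.
az postgres flexible-server parameter set \
  --resource-group myResourceGroup \
  --server-name mydemoserver \
  --name ssl_min_protocol_version \
  --value TLSv1.2
```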
[Certificate authentication](https://www.postgresql.org/docs/current/auth-cert.html) is performed using **SSL client certificates** for authentication. In this scenario, PostgreSQL server compares the CN (common name) attribute of the client certificate presented, against the requested database user.
-**Azure Database for PostgreSQL - Flexible Server does not support SSL certificate based authentication at this time.**
+**Azure Database for PostgreSQL flexible server doesn't support SSL certificate-based authentication at this time.**
-To determine your current TLS\SSL connection status, you can load the [sslinfo extension](concepts-extensions.md) and then call the `ssl_is_used()` function to determine if SSL is being used. The function returns t if the connection is using SSL, otherwise it returns f. You can also collect all the information about your Azure Database for PostgreSQL - Flexible Server instance's SSL usage by process, client, and application by using the following query:
+To determine your current TLS/SSL connection status, you can load the [sslinfo extension](concepts-extensions.md) and then call the `ssl_is_used()` function to determine if SSL is being used. The function returns `t` if the connection is using SSL; otherwise it returns `f`. You can also collect all the information about your Azure Database for PostgreSQL flexible server instance's SSL usage by process, client, and application by using the following query:
```sql SELECT datname as "Database name", usename as "User name", ssl, client_addr, application_name, backend_type
openssl s_client -connect localhost:5432 -starttls postgres
This prints out a lot of low-level protocol information, including the TLS version, cipher, and so on. Note that you must use the option `-starttls postgres`; otherwise, this command reports that no SSL is in use. This requires at least OpenSSL 1.1.1. > [!NOTE]
-> To enforce **latest, most secure TLS version** for connectivity protection from client to Azure Database for PostgreSQL - Flexible Server set **ssl_min_protocol_version** to **1.3**. That would **require** clients connecting to your Azure Postgres server to use **this version of the protocol only** to securely communicate. However, older clients, since they don't support this version, may not be able to communicate with the server.
+> To enforce the **latest, most secure TLS version** for connectivity protection from the client to Azure Database for PostgreSQL flexible server, set **ssl_min_protocol_version** to **1.3**. That would **require** clients connecting to your Azure Database for PostgreSQL flexible server instance to use **only this version of the protocol** to securely communicate. However, older clients that don't support this version may not be able to communicate with the server.
## Cipher Suites
A cipher suite is displayed as a long string of seemingly random information:
- Message authentication code algorithm (MAC) Different versions of SSL/TLS support different cipher suites. TLS 1.2 cipher suites can't be negotiated with TLS 1.3 connections and vice versa.
-As of this time Azure Database for PostgreSQL - Flexible Server supports number of cipher suites with TLS 1.2 protocol version that fall into [HIGH:!aNULL](https://www.postgresql.org/docs/16/runtime-config-connection.html#GUC-SSL-CIPHERS) category.
+At this time, Azure Database for PostgreSQL flexible server supports a number of cipher suites with the TLS 1.2 protocol version that fall into the [HIGH:!aNULL](https://www.postgresql.org/docs/16/runtime-config-connection.html#GUC-SSL-CIPHERS) category.
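To see which suites the HIGH:!aNULL specification expands to, you can ask a local OpenSSL installation; the output varies by OpenSSL version and doesn't necessarily match the server's exact list:

```bash
# List the cipher suites covered by the HIGH:!aNULL specification.
openssl ciphers -v 'HIGH:!aNULL'
```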
## Troubleshooting SSL\TLS connectivity errors
As of this time Azure Database for PostgreSQL - Flexible Server supports number
## Related content -- Learn how to create a flexible server by using the **Private access (VNet integration)** option in [the Azure portal](how-to-manage-virtual-network-portal.md) or [the Azure CLI](how-to-manage-virtual-network-cli.md).-- Learn how to create a flexible server by using the **Public access (allowed IP addresses)** option in [the Azure portal](how-to-manage-firewall-portal.md) or [the Azure CLI](how-to-manage-firewall-cli.md).
+- Learn how to create an Azure Database for PostgreSQL flexible server instance by using the **Private access (VNet integration)** option in [the Azure portal](how-to-manage-virtual-network-portal.md) or [the Azure CLI](how-to-manage-virtual-network-cli.md).
+- Learn how to create an Azure Database for PostgreSQL flexible server instance by using the **Public access (allowed IP addresses)** option in [the Azure portal](how-to-manage-firewall-portal.md) or [the Azure CLI](how-to-manage-firewall-cli.md).
postgresql Concepts Pgbouncer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-pgbouncer.md
Title: PgBouncer - Azure Database for PostgreSQL - Flexible Server
+ Title: PgBouncer
description: This article provides an overview of the built-in PgBouncer extension.
Last updated 7/25/2023
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-Azure Database for PostgreSQL ΓÇô Flexible Server offers [PgBouncer](https://github.com/pgbouncer/pgbouncer) as a built-in connection pooling solution. This is an optional service that can be enabled on a per-database server basis and is supported with both public and private access. PgBouncer runs in the same virtual machine as the Postgres database server. Postgres uses a process-based model for connections, which makes it expensive to maintain many idle connections. So, Postgres itself runs into resource constraints once the server runs more than a few thousand connections. The primary benefit of PgBouncer is to improve idle connections and short-lived connections at the database server.
+Azure Database for PostgreSQL flexible server offers [PgBouncer](https://github.com/pgbouncer/pgbouncer) as a built-in connection pooling solution. This is an optional service that can be enabled on a per-database server basis and is supported with both public and private access. PgBouncer runs in the same virtual machine as the Azure Database for PostgreSQL flexible server database server. Postgres uses a process-based model for connections, which makes it expensive to maintain many idle connections. So, Postgres itself runs into resource constraints once the server runs more than a few thousand connections. The primary benefit of PgBouncer is improved handling of idle and short-lived connections at the database server.
PgBouncer uses a more lightweight model that utilizes asynchronous I/O, and only uses actual Postgres connections when needed, that is, when inside an open transaction, or when a query is active. This model can support thousands of connections more easily and allows scaling to up to 10,000 connections with low overhead.
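As an illustrative sketch, assuming the built-in pooler listens on port 6432, a client connects through PgBouncer rather than directly to the database port; the server and user names are placeholders:

```bash
# Connect through the built-in PgBouncer pooler (port 6432) instead of port 5432.
psql "host=mydemoserver.postgres.database.azure.com port=6432 dbname=postgres user=myadmin sslmode=require"
```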
For more information about PgBouncer configurations, see [pgbouncer.ini](https:/
## Benefits and limitations of the built-in PgBouncer feature By using the benefits of built-in PgBouncer with Flexible Server, users can enjoy the convenience of simplified configuration, the reliability of a managed service, support for various connection types, and seamless high availability during failover scenarios. Using the built-in PgBouncer feature provides the following benefits:
- * As it's seamlessly integrated with Azure Database for PostgreSQL - Flexible Server service, there's no need for a separate installation or complex setup. It can be easily configured directly from the server parameters, ensuring a hassle-free experience.
+ * As it's seamlessly integrated with Azure Database for PostgreSQL flexible server, there's no need for a separate installation or complex setup. It can be easily configured directly from the server parameters, ensuring a hassle-free experience.
* As a managed service, users can enjoy the advantages of other Azure managed services. This includes automatic updates, eliminating the need for manual maintenance and ensuring that PgBouncer stays up-to-date with the latest features and security patches. * The built-in PgBouncer in Flexible Server provides support for both public and private connections. This functionality allows users to establish secure connections over private networks or connect externally, depending on their specific requirements. * In the event of a failover, where a standby server is promoted to the primary role, PgBouncer seamlessly restarts on the newly promoted standby without any changes required to the application connection string. This ability ensures continuous availability and minimizes disruption to the application's connection pool.
By using the benefits of built-in PgBouncer with Flexible Server, users can enjo
### PgBouncer Metrics
-Azure Database for PostgreSQL - Flexible Server now provides six new metrics for monitoring PgBouncer connection pooling.
+Azure Database for PostgreSQL flexible server now provides six new metrics for monitoring PgBouncer connection pooling.
|Display Name |Metrics ID |Unit |Description |Dimension |Default enabled| |-|--|--|-|||
-|**Active client connections** (Preview) |client_connections_active |Count|Connections from clients that are associated with a PostgreSQL connection |DatabaseName|No |
-|**Waiting client connections** (Preview)|client_connections_waiting|Count|Connections from clients that are waiting for a PostgreSQL connection to service them|DatabaseName|No |
-|**Active server connections** (Preview) |server_connections_active |Count|Connections to PostgreSQL that are in use by a client connection |DatabaseName|No |
-|**Idle server connections** (Preview) |server_connections_idle |Count|Connections to PostgreSQL that are idle, ready to service a new client connection |DatabaseName|No |
+|**Active client connections** (Preview) |client_connections_active |Count|Connections from clients that are associated with an Azure Database for PostgreSQL flexible server connection |DatabaseName|No |
+|**Waiting client connections** (Preview)|client_connections_waiting|Count|Connections from clients that are waiting for an Azure Database for PostgreSQL flexible server connection to service them|DatabaseName|No |
+|**Active server connections** (Preview) |server_connections_active |Count|Connections to Azure Database for PostgreSQL flexible server that are in use by a client connection |DatabaseName|No |
+|**Idle server connections** (Preview) |server_connections_idle |Count|Connections to Azure Database for PostgreSQL flexible server that are idle, ready to service a new client connection |DatabaseName|No |
|**Total pooled connections** (Preview) |total_pooled_connections |Count|Current number of pooled connections |DatabaseName|No | |**Number of connection pools** (Preview)|num_pools |Count|Total number of connection pools |DatabaseName|No |
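As a hedged example, one of these metrics can be read with the Azure CLI roughly as follows; the subscription ID, resource group, and server name are placeholders:

```bash
# Read the PgBouncer "active client connections" metric at 5-minute granularity.
az monitor metrics list \
  --resource "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.DBforPostgreSQL/flexibleServers/mydemoserver" \
  --metric client_connections_active \
  --interval PT5M
```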
Utilizing an application side pool together with PgBouncer on the database serve
* Transaction and statement pool modes can't be used along with prepared statements. Refer to the [PgBouncer documentation](https://www.pgbouncer.org/features.html) to check other limitations of chosen pool mode. > [!IMPORTANT]
-> Parameter pgbouncer.client_tls_sslmode for built-in PgBouncer feature has been deprecated in Azure Database for PostgreSQL - Flexible Server with built-in PgBouncer feature enabled. When TLS\SSL for connections to Azure Database for PostgreSQL - Flexible Server is enforced via setting the **require_secure_transport** server parameter to ON, TLS\SSL is automatically enforced for connections to built-in PgBouncer. This setting to enforce SSL\TLS is on by default on creation of new PostgreSQL Flexible Server and enabling built-in PgBouncer feature. For more on SSL\TLS in Flexible Server see this [doc.](./concepts-networking.md#tls-and-ssl)
+> The parameter pgbouncer.client_tls_sslmode for the built-in PgBouncer feature has been deprecated in Azure Database for PostgreSQL flexible server with the built-in PgBouncer feature enabled. When TLS/SSL for connections to Azure Database for PostgreSQL flexible server is enforced via setting the **require_secure_transport** server parameter to ON, TLS/SSL is automatically enforced for connections to built-in PgBouncer. This setting to enforce SSL/TLS is on by default on creation of a new Azure Database for PostgreSQL flexible server instance and enabling the built-in PgBouncer feature. For more on SSL/TLS in Azure Database for PostgreSQL flexible server, see [TLS and SSL](./concepts-networking.md#tls-and-ssl).
For customers looking for simplified management, built-in high availability, easy connectivity with containerized applications, and use of the most popular PgBouncer configuration parameters, the built-in PgBouncer feature is a good choice. For customers looking for full control of all parameters and a full debugging experience, setting up PgBouncer on an Azure VM could be an alternative.
postgresql Concepts Query Performance Insight https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-query-performance-insight.md
Title: Query Performance Insight - Azure Database for PostgreSQL - Flexible Server
+ Title: Query Performance Insight
description: This article describes the Query Performance Insight feature in Azure Database for PostgreSQL - Flexible Server.
Last updated 4/1/2023
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-Query Performance Insight provides intelligent query analysis for Azure Postgres Flexible Server databases. It helps identify the top resource consuming and long-running queries in your workload. This helps you find the queries to optimize to improve overall workload performance and efficiently use the resource that you are paying for. Query Performance Insight helps you spend less time troubleshooting database performance by providing:
+Query Performance Insight provides intelligent query analysis for Azure Database for PostgreSQL flexible server databases. It helps identify the top resource-consuming and long-running queries in your workload. This helps you find the queries to optimize to improve overall workload performance and efficiently use the resources that you're paying for. Query Performance Insight helps you spend less time troubleshooting database performance by providing:
>[!div class="checklist"] > * Identify your long-running queries, and how they change over time.
Query Performance Insight provides intelligent query analysis for Azure Postgres
> [!NOTE] > **Query Store Wait Sampling** is currently **disabled**. Query Performance Insight depends on Query Store wait sampling data. You need to enable it by setting the dynamic server parameter `pgms_wait_sampling.query_capture_mode` to **ALL**.
-3. **[Log analytics workspace](howto-configure-and-access-logs.md)** is configured for storing 3 log categories including - PostgreSQL Sessions logs, PostgreSQL Query Store and Runtime and PostgreSQL Query Store Wait Statistics. To configure log analytics, refer [Log analytics workspace](howto-configure-and-access-logs.md#configure-diagnostic-settings).
+3. **[Log analytics workspace](howto-configure-and-access-logs.md)** is configured for storing three log categories: Azure Database for PostgreSQL flexible server Sessions logs, Azure Database for PostgreSQL flexible server Query Store and Runtime, and Azure Database for PostgreSQL flexible server Query Store Wait Statistics. To configure Log Analytics, see [Log analytics workspace](howto-configure-and-access-logs.md#configure-diagnostic-settings).
> [!NOTE]
-> The **Query Store data is not being transmitted to the log analytics workspace**. The PostgreSQL logs (Sessions data / Query Store Runtime / Query Store Wait Statistics) is not being sent to the log analytics workspace, which is necessary to use Query Performance Insight. To configure the logging settings for category PostgreSQL sessions and send the data to a log analytics workspace.
+> The **Query Store data is not being transmitted to the log analytics workspace**. The Azure Database for PostgreSQL flexible server logs (Sessions data / Query Store Runtime / Query Store Wait Statistics) aren't being sent to the Log Analytics workspace, which is necessary to use Query Performance Insight. To fix this, configure the logging settings for the Azure Database for PostgreSQL flexible server sessions categories and send the data to a Log Analytics workspace.
## Using Query Performance Insight The Query Performance Insight view in the Azure portal will surface visualizations on key information from Query Store. Query Performance Insight is easy to use:
-1. Open the Azure portal and find a postgres instance that you want to examine.
+1. Open the Azure portal and find an Azure Database for PostgreSQL flexible server instance that you want to examine.
2. From the left-side menu, open **Intelligent Performance** > **Query Performance Insight**. 3. Select a **time range** for investigating queries. 4. On the first tab, review the list of **Long Running Queries**.
The Query Performance Insight view in the Azure portal will surface visualizatio
6. Optionally, you can select **custom** to specify a time range.
-> For Azure PostgreSQL Flexible Server to render the information in Query Performance Insight, **Query Store needs to capture a couple hours of data**. If the database has no activity or if Query Store was not active during a certain period, the charts will be empty when Query Performance Insight displays that time range. You can enable Query Store at any time if it's not running. For more information, see [Best practices with Query Store](concepts-query-store-best-practices.md).
+> For Azure Database for PostgreSQL flexible server to render the information in Query Performance Insight, **Query Store needs to capture a couple hours of data**. If the database has no activity or if Query Store was not active during a certain period, the charts will be empty when Query Performance Insight displays that time range. You can enable Query Store at any time if it's not running. For more information, see [Best practices with Query Store](concepts-query-store-best-practices.md).
7. To **view details** of a specific query, click the `QueryId Snapshot` dropdown. :::image type="content" source="./media/concepts-query-performance-insight/2-individual-query-details.png" alt-text="Screenshot of viewing details of a specific query.":::
The Query Performance Insight view in the Azure portal will surface visualizatio
* Query Performance Insight is not available for [read replicas](concepts-read-replicas.md). * For Query Performance Insight to function, data must exist in the Query Store. Query Store is an opt-in feature, so it isn't enabled by default on a server. Query store is enabled or disabled globally for all databases on a given server and cannot be turned on or off per database.
-* Enabling Query Store on the Burstable pricing tier may negatively impact performance; therefore, it is not recommended.
+* Enabling Query Store on the Burstable pricing tier may negatively impact performance; therefore, it's not recommended.
## Next steps -- Learn more about [monitoring and tuning](concepts-monitoring.md) in Azure Database for PostgreSQL.
+- Learn more about [monitoring and tuning](concepts-monitoring.md) in Azure Database for PostgreSQL flexible server.
postgresql Concepts Query Store Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-query-store-best-practices.md
Title: Query Store best practices in Azure Database for PostgreSQL - Flex Server
-description: This article describes best practices for Query Store in Azure Database for PostgreSQL - Flex Server.
+ Title: Query Store best practices
+description: This article describes best practices for Query Store in Azure Database for PostgreSQL - Flexible Server.
Last updated 12/31/2023
-# Best practices for Query Store - Flexible Server
+# Best practices for Query Store - Azure Database for PostgreSQL - Flexible Server
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-This article outlines best practices for using Query Store in Azure Database for PostgreSQL.
+This article outlines best practices for using Query Store in Azure Database for PostgreSQL flexible server.
## Set the optimal query capture mode
The **pg_qs.retention_period_in_days** parameter specifies in days the data rete
## Next steps -- Learn how to get or set parameters using the [Azure portal](howto-configure-server-parameters-using-portal.md) or the [Azure CLI](howto-configure-server-parameters-using-cli.md).
+- Learn how to get or set parameters using the [Azure portal](how-to-configure-server-parameters-using-portal.md) or the [Azure CLI](how-to-configure-server-parameters-using-cli.md).
postgresql Concepts Query Store Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-query-store-scenarios.md
Title: Query Store scenarios - Azure Database for PostgreSQL - Flex Server
-description: This article describes some scenarios for Query Store in Azure Database for PostgreSQL - Flex Server.
+ Title: Query Store scenarios
+description: This article describes some scenarios for Query Store in Azure Database for PostgreSQL - Flexible Server.
Last updated 12/31/2023
-# Usage scenarios for Query Store - Flexible Server
+# Usage scenarios for Query Store - Azure Database for PostgreSQL - Flexible Server
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
Use Query Store to compare workload performance before and after an application
- Modifying the amount of resources granted to the server. - Changing any of the server parameters that affect the behavior of the server. - Creating missing indexes on tables referenced by expensive queries.-- Migrating from Single Server to Flexible Server.
+- Migrating from Azure Database for PostgreSQL single server to Azure Database for PostgreSQL flexible server.
In any of these scenarios, apply the following workflow: 1. Run your workload with Query Store before the planned change, to generate a performance baseline.
If you are in control of the application code, you might consider rewriting the
## Next step > [!div class="nextstepaction"]
-> [best practices for using Query Store](concepts-query-store-best-practices.md)
+> [Best practices for using Query Store](concepts-query-store-best-practices.md)
postgresql Concepts Query Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-query-store.md
Title: Query Store - Azure Database for PostgreSQL - Flexible Server
+ Title: Query Store
description: This article describes the Query Store feature in Azure Database for PostgreSQL - Flexible Server.
-# Monitor Performance with Query Store
+# Monitor performance with Query Store
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-The Query Store feature in Azure Database for PostgreSQL provides a way to track query performance over time. Query Store simplifies performance-troubleshooting by helping you quickly find the longest running and most resource-intensive queries. Query Store automatically captures a history of queries and runtime statistics, and it retains them for your review. It slices the data by time so that you can see temporal usage patterns. Data for all users, databases and queries is stored in a database named **azure_sys** in the Azure Database for PostgreSQL instance.
+The Query Store feature in Azure Database for PostgreSQL flexible server provides a way to track query performance over time. Query Store simplifies performance-troubleshooting by helping you quickly find the longest running and most resource-intensive queries. Query Store automatically captures a history of queries and runtime statistics, and it retains them for your review. It slices the data by time so that you can see temporal usage patterns. Data for all users, databases and queries is stored in a database named **azure_sys** in the Azure Database for PostgreSQL flexible server instance.
> [!IMPORTANT] > Do not modify the **azure_sys** database or its schema. Doing so will prevent Query Store and related performance features from functioning correctly.
Query Store is available in all regions with no additional charges. It is an opt
### Enable Query Store in Azure portal
-1. Sign in to the Azure portal and select your Azure Database for PostgreSQL server.
+1. Sign in to the Azure portal and select your Azure Database for PostgreSQL flexible server instance.
1. Select **Server parameters** in the **Settings** section of the menu. 1. Search for the `pg_qs.query_capture_mode` parameter. 1. Set the value to `TOP` or `ALL`, depending on whether you want to track top-level queries or also nested queries (those executed inside a function or procedure), and click on **Save**.
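The same parameter can also be set from the Azure CLI; a rough sketch, with placeholder resource group and server names:

```bash
# Enable Query Store capture for top-level queries.
az postgres flexible-server parameter set \
  --resource-group myResourceGroup \
  --server-name mydemoserver \
  --name pg_qs.query_capture_mode \
  --value TOP
```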
To minimize space usage, the runtime execution statistics in the runtime stats s
## Access Query Store information
-Query Store data is stored in the azure_sys database on your Postgres server.
+Query Store data is stored in the azure_sys database on your Azure Database for PostgreSQL flexible server instance.
The following query returns information about queries in Query Store: ```sql
The following options apply specifically to wait statistics:
> [!NOTE] > **pg_qs.query_capture_mode** supersedes **pgms_wait_sampling.query_capture_mode**. If pg_qs.query_capture_mode is NONE, the pgms_wait_sampling.query_capture_mode setting has no effect.
-Use the [Azure portal](howto-configure-server-parameters-using-portal.md) to get or set a different value for a parameter.
+Use the [Azure portal](how-to-configure-server-parameters-using-portal.md) to get or set a different value for a parameter.
## Views and functions
View and manage Query Store using the following views and functions. Anyone in t
Queries are normalized by looking at their structure and ignoring anything not semantically significant, like literals, constants, aliases, or differences in casing.
-If two queries are semantically identical, even if they use different aliases for the same referenced columns and tables, they are identified with the same query_id. If two queries only differ in the literal values used in them, they are also identified with the same query_id. For all queries identified with the same query_id, their sql_query_text will be that of the query that executed first since Query Store started recording activity, or since the last time the persisted data was discarded because the function [query_store.qs_reset](#query_storeqs_reset) was executed.
+If two queries are semantically identical, even if they use different aliases for the same referenced columns and tables, they're identified with the same query_id. If two queries only differ in the literal values used in them, they're also identified with the same query_id. For all queries identified with the same query_id, their sql_query_text will be that of the query that executed first since Query Store started recording activity, or since the last time the persisted data was discarded because the function [query_store.qs_reset](#query_storeqs_reset) was executed.
### How query normalization works
This function discards all statistics gathered in-memory by Query Store (that is
## Limitations and known issues -- If a PostgreSQL server has the parameter `default_transaction_read_only` set to `on`, Query Store won't capture any data.
+- If an Azure Database for PostgreSQL flexible server instance has the parameter `default_transaction_read_only` set to `on`, Query Store doesn't capture any data.
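A quick way to check that setting from a client session, as a sketch with placeholder connection details:

```bash
# If this returns "on", Query Store won't capture any data on the instance.
psql "host=mydemoserver.postgres.database.azure.com port=5432 dbname=postgres user=myadmin sslmode=require" \
  -c "SHOW default_transaction_read_only;"
```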
## Related content
postgresql Concepts Read Replicas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-read-replicas.md
Title: Read replicas - Azure Database for PostgreSQL - Flexible Server
+ Title: Read replicas
description: This article describes the read replica feature in Azure Database for PostgreSQL - Flexible Server.
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-The read replica feature allows you to replicate data from an Azure Database for a PostgreSQL server to a read-only replica. Replicas are updated **asynchronously** with the PostgreSQL engine's native physical replication technology. Streaming replication by using replication slots is the default operation mode. When necessary, file-based log shipping is used to catch up. You can replicate from the primary server to up to five replicas.
+The read replica feature allows you to replicate data from an Azure Database for PostgreSQL flexible server instance to a read-only replica. Replicas are updated **asynchronously** with the PostgreSQL engine's native physical replication technology. Streaming replication by using replication slots is the default operation mode. When necessary, file-based log shipping is used to catch up. You can replicate from the primary server to up to five replicas.
-Replicas are new servers you manage similar to regular Azure Database for PostgreSQL servers. For each read replica, you're billed for the provisioned compute in vCores and storage in GB/ month.
+Replicas are new servers you manage similar to regular Azure Database for PostgreSQL flexible server instances. For each read replica, you're billed for the provisioned compute in vCores and storage in GB/month.
Learn how to [create and manage replicas](how-to-read-replicas-portal.md). > [!NOTE]
-> Azure Database for PostgreSQL - Flexible Server is currently supporting the following features in Preview:
+> Azure Database for PostgreSQL flexible server is currently supporting the following features in Preview:
> > - Promote to primary server (to maintain backward compatibility, please use promote to independent server and remove from replication, which keeps the former behavior) > - Virtual endpoints
Read replicas are primarily designed for scenarios where offloading queries is b
A read replica can be created in the same region as the primary server and in a different one. Cross-region replication can be helpful for scenarios like disaster recovery planning or bringing data closer to your users.
-You can have a primary server in any [Azure Database for PostgreSQL region](https://azure.microsoft.com/global-infrastructure/services/?products=postgresql). A primary server can also have replicas in any global region of Azure that supports Azure Database for PostgreSQL. Additionally, we support special regions [Azure Government](../../azure-government/documentation-government-welcome.md) and [Microsoft Azure operated by 21Vianet](/azure/china/overview-operations). The special regions now supported are:
+You can have a primary server in any [Azure Database for PostgreSQL flexible server region](https://azure.microsoft.com/global-infrastructure/services/?products=postgresql). A primary server can also have replicas in any global region of Azure that supports Azure Database for PostgreSQL flexible server. Additionally, we support special regions [Azure Government](../../azure-government/documentation-government-welcome.md) and [Microsoft Azure operated by 21Vianet](/azure/china/overview-operations). The special regions now supported are:
- **Azure Government regions**: - US Gov Arizona
For a deeper understanding of the advantages of paired regions, refer to [Azure'
## Create a replica
-A primary server for Azure Database for PostgreSQL - Flexible Server can be deployed in [any region that supports the service](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=postgresql&regions=all). You can create replicas of the primary server within the same region or across different global Azure regions where Azure Database for PostgreSQL - Flexible Server is available. The capability to create replicas now extends to some special Azure regions. See the [Geo-replication section](#geo-replication) for a list of special regions where you can create replicas.
+A primary server for Azure Database for PostgreSQL flexible server can be deployed in [any region that supports the service](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=postgresql&regions=all). You can create replicas of the primary server within the same region or across different global Azure regions where Azure Database for PostgreSQL flexible server is available. The capability to create replicas now extends to some special Azure regions. See the [Geo-replication section](#geo-replication) for a list of special regions where you can create replicas.
+When you start the create replica workflow, a blank Azure Database for PostgreSQL flexible server instance is created. The new server is filled with the data on the primary server. For the creation of replicas in the same region, a snapshot approach is used. Therefore, the time of creation is independent of the size of the data. Geo-replicas are created using the base backup of the primary instance, which is then transmitted over the network; therefore, the creation time might range from minutes to several hours, depending on the primary size.
-When you start the create replica workflow, a blank Azure Database for the PostgreSQL server is created. The new server is filled with the data on the primary server. For the creation of replicas in the same region, a snapshot approach is used. Therefore, the time of creation is independent of the size of the data. Geo-replicas are created using the base backup of the primary instance, which is then transmitted over the network; therefore, the creation time might range from minutes to several hours, depending on the primary size.
+In Azure Database for PostgreSQL flexible server, the creation operation of replicas is considered successful only when the entire backup of the primary instance has been copied to the replica destination and the transaction logs have synchronized to within a maximum lag of 1 GB.
-In Azure Database for PostgreSQL - Flexible Server, the creation operation of replicas is considered successful only when the entire backup of the primary instance copies to the replica destination and the transaction logs synchronize up to the threshold of a maximum 1GB lag.
-
-To achieve a successful create operation, avoid making replicas during times of high transactional load. For example, it's best to avoid creating replicas during migrations from other sources to Azure Database for PostgreSQL - Flexible Server or during excessive bulk load operations. If you're migrating data or loading large amounts of data right now, it's best to finish this task first. After completing it, you can then start setting up the replicas. Once the migration or bulk load operation has finished, check whether the transaction log size has returned to its normal size. Typically, the transaction log size should be close to the value defined in the max_wal_size server parameter for your instance. You can track the transaction log storage footprint using the [Transaction Log Storage Used](concepts-monitoring.md#default-metrics) metric, which provides insights into the amount of storage used by the transaction log. By monitoring this metric, you can ensure that the transaction log size is within the expected range and that the replica creation process might be started.
+To achieve a successful create operation, avoid making replicas during times of high transactional load. For example, avoid creating replicas during migrations from other sources to Azure Database for PostgreSQL flexible server or during excessive bulk load operations. If you're currently migrating data or loading large amounts of data, finish that task first and then start setting up the replicas. Once the migration or bulk load operation has finished, check whether the transaction log size has returned to its normal size. Typically, the transaction log size should be close to the value defined in the max_wal_size server parameter for your instance. You can track the transaction log storage footprint using the [Transaction Log Storage Used](concepts-monitoring.md#default-metrics) metric, which provides insights into the amount of storage used by the transaction log. By monitoring this metric, you can ensure that the transaction log size is within the expected range before you start the replica creation process.
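As a quick check before creating a replica, you can compare the configured `max_wal_size` ceiling with the current on-disk WAL footprint. The following is a minimal sketch using standard PostgreSQL functions; the metric mentioned above remains the authoritative view of log storage usage, and `pg_ls_waldir()` may require membership in the `pg_monitor` role.

```sql
-- Configured ceiling for the transaction log size.
SHOW max_wal_size;

-- Approximate on-disk WAL footprint (may require the pg_monitor role).
SELECT count(*) AS wal_files,
       pg_size_pretty(sum(size)) AS wal_size
FROM pg_ls_waldir();
```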
> [!IMPORTANT]
> Read Replicas are currently supported for the General Purpose and Memory Optimized server compute tiers. The Burstable server compute tier is not supported.
Learn how to [create a read replica in the Azure portal](how-to-read-replicas-po
### Configuration management
-When setting up read replicas for Azure Database for PostgreSQL - Flexible Server, it's essential to understand the server configurations that can be adjusted, the ones inherited from the primary, and any related limitations.
+When setting up read replicas for Azure Database for PostgreSQL flexible server, it's essential to understand the server configurations that can be adjusted, the ones inherited from the primary, and any related limitations.
**Inherited configurations**
Certain functionalities are restricted to primary servers and can't be set up on
- Backups, including geo-backups.
- High availability (HA)
-If your source PostgreSQL server is encrypted with customer-managed keys, please see the [documentation](concepts-data-encryption.md) for additional considerations.
+If your source Azure Database for PostgreSQL flexible server instance is encrypted with customer-managed keys, please see the [documentation](concepts-data-encryption.md) for other considerations.
## Connect to a replica
The replica inherits the admin account from the primary server. All user account
There are two methods to connect to the replica:
-* **Direct to the Replica Instance**: You can connect to the replica using its hostname and a valid user account, as you would on a regular Azure Database for PostgreSQL server. For a server named **myreplica** with the admin username **myadmin**, you can connect to the replica by using `psql`:
+* **Direct to the Replica Instance**: You can connect to the replica using its hostname and a valid user account, as you would on a regular Azure Database for PostgreSQL flexible server instance. For a server named **myreplica** with the admin username **myadmin**, you can connect to the replica by using `psql`:
```bash
psql -h myreplica.postgres.database.azure.com -U myadmin postgres
```
The promote operation won't carry over specific configurations and parameters. H
## Virtual Endpoints (preview)
-Virtual Endpoints are read-write and read-only listener endpoints, that remain consistent irrespective of the current role of the PostgreSQL instance. This means you don't have to update your application's connection string after performing the **promote to primary server** action, as the endpoints will automatically point to the correct instance following a role change.
+Virtual Endpoints are read-write and read-only listener endpoints that remain consistent irrespective of the current role of the Azure Database for PostgreSQL flexible server instance. This means you don't have to update your application's connection string after performing the **promote to primary server** action, as the endpoints will automatically point to the correct instance following a role change.
All operations involving virtual endpoints, whether adding, editing, or removing, are performed in the context of the primary server. In the Azure portal, you manage these endpoints under the primary server page. Similarly, when using tools like the CLI, REST API, or other utilities, commands and actions target the primary server for endpoint management.
Learn how to [create virtual endpoints](how-to-read-replicas-portal.md#create-vi
## Monitor replication
-Read replica feature in Azure Database for PostgreSQL - Flexible Server relies on replication slots mechanism. The main advantage of replication slots is the ability to adjust the number of transaction logs automatically (WAL segments) needed by all replica servers and, therefore, avoid situations when one or more replicas go out of sync because WAL segments that weren't yet sent to the replicas are being removed on the primary. The disadvantage of the approach is the risk of going out of space on the primary in case the replication slot remains inactive for an extended time. In such situations, primary accumulates WAL files causing incremental growth of the storage usage. When the storage usage reaches 95% or if the available capacity is less than 5 GiB, the server is automatically switched to read-only mode to avoid errors associated with disk-full situations.
+The read replica feature in Azure Database for PostgreSQL flexible server relies on the replication slots mechanism. The main advantage of replication slots is that they automatically adjust the number of transaction logs (WAL segments) retained for all replica servers and, therefore, avoid situations in which one or more replicas go out of sync because WAL segments that weren't yet sent to the replicas were removed on the primary. The disadvantage of the approach is the risk of running out of space on the primary if a replication slot remains inactive for an extended time. In such situations, the primary accumulates WAL files, causing incremental growth of the storage usage. When the storage usage reaches 95% or the available capacity is less than 5 GiB, the server is automatically switched to read-only mode to avoid errors associated with disk-full situations.
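To see whether a replication slot is inactive or is forcing the primary to retain a large amount of WAL, you can query the standard `pg_replication_slots` catalog view directly on the primary. This is a minimal sketch using standard PostgreSQL columns and functions.

```sql
-- Inspect replication slot activity and the WAL each slot forces the primary to retain.
SELECT slot_name,
       active,
       pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn)) AS retained_wal
FROM pg_replication_slots;
```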
Therefore, monitoring the replication lag and replication slot status is crucial for read replicas. We recommend setting alert rules for storage used or storage percentage, and for replication lag, when they exceed certain thresholds so that you can act proactively, increase the storage size, and delete lagging read replicas. For example, you can set an alert if the storage percentage exceeds 80% usage, and another if the replica lag is higher than one hour. The [Transaction Log Storage Used](concepts-monitoring.md#default-metrics) metric shows you whether WAL file accumulation is the main reason for the excessive storage usage.
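As an illustration, a storage alert can be created with the Azure CLI. The following sketch assumes a resource group named `myresourcegroup`, a server named `mydemoserver`, and the `storage_percent` metric; adjust names, thresholds, and action groups to your environment.

```bash
# Hypothetical example: alert when storage usage on the server exceeds 80%.
SERVER_ID=$(az postgres flexible-server show \
  --resource-group myresourcegroup --name mydemoserver --query id --output tsv)

az monitor metrics alert create \
  --name "pg-storage-over-80" \
  --resource-group myresourcegroup \
  --scopes "$SERVER_ID" \
  --condition "avg storage_percent > 80" \
  --window-size 15m \
  --evaluation-frequency 5m \
  --description "Storage usage above 80%"
```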
-Azure Database for PostgreSQL: Flexible Server provides [two metrics](concepts-monitoring.md#replication) for monitoring replication. The two metrics are **Max Physical Replication Lag** and **Read Replica Lag**. To learn how to view these metrics, see the **Monitor a replica** section of the [read replica how-to article](how-to-read-replicas-portal.md#monitor-a-replica).
+Azure Database for PostgreSQL flexible server provides [two metrics](concepts-monitoring.md#replication) for monitoring replication. The two metrics are **Max Physical Replication Lag** and **Read Replica Lag**. To learn how to view these metrics, see the **Monitor a replica** section of the [read replica how-to article](how-to-read-replicas-portal.md#monitor-a-replica).
The **Max Physical Replication Lag** metric shows the lag in bytes between the primary and the most-lagging replica. This metric is applicable and available on the primary server only, and is available only if at least one of the read replicas is connected to the primary. The lag information is also present when the replica is in the process of catching up with the primary, during replica creation, or when replication becomes inactive.
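If you prefer to inspect the physical lag directly on the primary rather than through the metric, a query along the following lines can be used; the columns are standard `pg_stat_replication` columns, and each connected replica appears as one row.

```sql
-- Per-replica replication lag, in bytes, as seen from the primary.
SELECT application_name,
       state,
       pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn)) AS replay_lag
FROM pg_stat_replication;
```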
Azure facilities across various regions are designed to be highly reliable. Howe
### Prepare for Regional Disasters
-Being prepared for potential regional disasters is critical to ensure the uninterrupted operation of your applications and services. If you're considering a robust contingency plan for your Azure Database for PostgreSQL - Flexible Server, here are the key steps and considerations:
+Being prepared for potential regional disasters is critical to ensure the uninterrupted operation of your applications and services. If you're considering a robust contingency plan for your Azure Database for PostgreSQL flexible server instance, here are the key steps and considerations:
1. **Establish a geo-replicated read replica**: It's essential to have a read replica set up in a separate region from your primary. This ensures continuity in case the primary region faces an outage. More details can be found in the [geo-replication](#geo-replication) section.
2. **Ensure server symmetry**: The "promote to primary server" action is the most recommended for handling regional outages, but it comes with a [server symmetry](#configuration-management) requirement. This means both the primary and replica servers must have identical configurations of specific settings. The advantages of using this action include:
Being proactive and preparing in advance for regional disasters ensure the resil
### When outages impact your SLA
-In the event of a prolonged outage with Azure Database for PostgreSQL - Flexible Server in a specific region that threatens your application's service-level agreement (SLA), be aware that both the actions discussed below aren't service-driven. User intervention is required for both. It's a best practice to automate the entire process as much as possible and to have robust monitoring in place. For more information about what information is provided during an outage, see the [Service outage](concepts-business-continuity.md#service-outage) page. Only a forced promote is possible in a region down scenario, meaning the amount of data loss is roughly equal to the current lag between the replica and primary. Hence, it's crucial to [monitor the lag](#monitor-replication). Consider the following steps:
+In the event of a prolonged outage with Azure Database for PostgreSQL flexible server in a specific region that threatens your application's service-level agreement (SLA), be aware that neither of the actions discussed below is service-driven. User intervention is required for both. It's a best practice to automate the entire process as much as possible and to have robust monitoring in place. For more information about what information is provided during an outage, see the [Service outage](concepts-business-continuity.md#service-outage) page. Only a forced promote is possible in a region-down scenario, meaning the amount of data loss is roughly equal to the current lag between the replica and primary. Hence, it's crucial to [monitor the lag](#monitor-replication). Consider the following steps:
**Promote to primary server (preview)**
This section summarizes considerations about the read replica feature. The follo
- **Power operations**: [Power operations](how-to-stop-start-server-portal.md), including start and stop actions, can be applied to both the primary and replica servers. However, to preserve system integrity, a specific sequence should be followed. Before stopping the read replicas, ensure the primary server is stopped first. When commencing operations, initiate the start action on the replica servers before starting the primary server.
- If the server has read replicas, the read replicas should be deleted first, before the primary server is deleted.
-- [In-place major version upgrade](concepts-major-version-upgrade.md) in Azure Database for PostgreSQL requires removing any read replicas currently enabled on the server. Once the replicas have been deleted, the primary server can be upgraded to the desired major version. After the upgrade is complete, you can recreate the replicas to resume the replication process.
-- **Storage auto-grow**: When configuring read replicas for an Azure Database for PostgreSQL - Flexible Server, it's essential to ensure that the storage autogrow setting on the replicas matches that of the primary server. The storage autogrow feature allows the database storage to increase automatically to prevent running out of space, which could lead to database outages. To maintain consistency and avoid potential replication issues, if the primary server has storage autogrow disabled, the read replicas must also have storage autogrow disabled. Conversely, if storage autogrow is enabled on the primary server, then any read replica that is created must have storage autogrow enabled from the outset. This synchronization of storage autogrow settings ensures the replication process isn't disrupted by differing storage behaviors between the primary server and its replicas.
+- [In-place major version upgrade](concepts-major-version-upgrade.md) in Azure Database for PostgreSQL flexible server requires removing any read replicas currently enabled on the server. Once the replicas have been deleted, the primary server can be upgraded to the desired major version. After the upgrade is complete, you can recreate the replicas to resume the replication process.
+- **Storage auto-grow**: When configuring read replicas for an Azure Database for PostgreSQL flexible server instance, it's essential to ensure that the storage autogrow setting on the replicas matches that of the primary server. The storage autogrow feature allows the database storage to increase automatically to prevent running out of space, which could lead to database outages. To maintain consistency and avoid potential replication issues, if the primary server has storage autogrow disabled, the read replicas must also have storage autogrow disabled. Conversely, if storage autogrow is enabled on the primary server, then any read replica that is created must have storage autogrow enabled from the outset. This synchronization of storage autogrow settings ensures the replication process isn't disrupted by differing storage behaviors between the primary server and its replicas.
- **Premium SSD v2**: As of the current release, if the primary server uses Premium SSD v2 for storage, the creation of read replicas isn't supported.

### New replicas
-A read replica is created as a new Azure Database for PostgreSQL server. An existing server can't be made into a replica. You can't create a replica of another read replica, that is, cascading replication isn't supported.
+A read replica is created as a new Azure Database for PostgreSQL flexible server instance. An existing server can't be made into a replica. You can't create a replica of another read replica, that is, cascading replication isn't supported.
### Resource move
When dealing with multiple replicas and if the primary region lacks a [paired re
### Back up and Restore
-When managing backups and restores for your Azure Database for PostgreSQL - Flexible Server, it's essential to keep in mind the current and previous role of the server in different [promotion scenarios](#promote-replicas). Here are the key points to remember:
+When managing backups and restores for your Azure Database for PostgreSQL flexible server instance, it's essential to keep in mind the current and previous role of the server in different [promotion scenarios](#promote-replicas). Here are the key points to remember:
**Promote to primary server**
While the server is a read replica, no backups are taken. However, once it's pro
Read replicas support both private access via virtual network integration and public access through allowed IP addresses. However, [private endpoint](concepts-networking-private-link.md) isn't currently supported.

> [!IMPORTANT]
-> Bi-directional communication between the primary server and read replicas is crucial for the Azure Database for PostgreSQL - Flexible Server setup. There must be a provision to send and receive traffic on destination port 5432 within the Azure virtual network subnet.
+> Bi-directional communication between the primary server and read replicas is crucial for the Azure Database for PostgreSQL flexible server setup. There must be a provision to send and receive traffic on destination port 5432 within the Azure virtual network subnet.
The above requirement not only facilitates the synchronization process but also ensures proper functioning of the promote mechanism, where replicas might need to communicate in reverse order, from replica to primary, especially during promote to primary operations. Moreover, connections to the Azure storage account that stores Write-Ahead Logging (WAL) archives must be permitted to uphold data durability and enable efficient recovery processes.
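As a sketch of what this networking requirement can look like with a network security group, the following Azure CLI command allows intra-virtual-network traffic on destination port 5432. The resource group, NSG name, rule name, and priority are hypothetical placeholders; adapt them to your own subnet and security model.

```bash
# Hypothetical example: allow intra-VNet traffic on port 5432 for replication and promote operations.
az network nsg rule create \
  --resource-group myresourcegroup \
  --nsg-name my-postgres-nsg \
  --name AllowPostgresReplication5432 \
  --priority 200 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --source-address-prefixes VirtualNetwork \
  --destination-address-prefixes VirtualNetwork \
  --destination-port-ranges 5432
```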
You're free to scale up and down compute (vCores), changing the service tier fro
For compute scaling:

-- PostgreSQL requires several parameters on replicas to be [greater than or equal to the setting on the primary](https://www.postgresql.org/docs/current/hot-standby.html#HOT-STANDBY-ADMIN) to ensure that the replica doesn't run out of shared memory during recovery. The parameters affected are: `max_connections`, `max_prepared_transactions`, `max_locks_per_transaction`, `max_wal_senders`, `max_worker_processes`.
+- Azure Database for PostgreSQL flexible server requires several parameters on replicas to be [greater than or equal to the setting on the primary](https://www.postgresql.org/docs/current/hot-standby.html#HOT-STANDBY-ADMIN) to ensure that the replica doesn't run out of shared memory during recovery. The parameters affected are: `max_connections`, `max_prepared_transactions`, `max_locks_per_transaction`, `max_wal_senders`, `max_worker_processes`.
- **Scaling up**: First scale up a replica's compute, then scale up the primary.
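To verify that the replica's settings meet or exceed the primary's before scaling, you can compare the parameters listed above on both servers with a query like the following; run it against each server and compare the results.

```sql
-- Run on both the primary and the replica, then compare the values.
SELECT name, setting
FROM pg_settings
WHERE name IN ('max_connections',
               'max_prepared_transactions',
               'max_locks_per_transaction',
               'max_wal_senders',
               'max_worker_processes')
ORDER BY name;
```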
postgresql Concepts Reserved Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-reserved-pricing.md
+
+ Title: Reserved compute pricing
+description: Prepay for Azure Database for PostgreSQL - Flexible Server compute resources with reserved capacity.
+++++++ Last updated : 12/12/2023++
+# Prepay for Azure Database for PostgreSQL - Flexible Server compute resources with reserved capacity
+++
+Azure Database for PostgreSQL flexible server now helps you save money by prepaying for compute resources compared to pay-as-you-go prices. With Azure Database for PostgreSQL flexible server reserved capacity, you make an upfront commitment on Azure Database for PostgreSQL flexible server for a one or three year period to get a significant discount on the compute costs. To purchase Azure Database for PostgreSQL flexible server reserved capacity, you need to specify the Azure region, deployment type, performance tier, and term.
+
+## How does the instance reservation work?
+
+You don't need to assign the reservation to specific Azure Database for PostgreSQL flexible server instances. Already running Azure Database for PostgreSQL flexible server instances (or ones that are newly deployed) automatically get the benefit of reserved pricing. By purchasing a reservation, you're prepaying for the compute costs for one or three years. As soon as you buy a reservation, the Azure Database for PostgreSQL flexible server compute charges that match the reservation attributes are no longer charged at the pay-as-you-go rates. A reservation doesn't cover software, networking, or storage charges associated with the Azure Database for PostgreSQL flexible server instances. At the end of the reservation term, the billing benefit expires, and the vCores used by Azure Database for PostgreSQL flexible server instances are billed at the pay-as-you-go price. Reservations don't auto-renew. For pricing information, see the [Azure Database for PostgreSQL reserved capacity offering](https://azure.microsoft.com/pricing/details/postgresql/).
+
+> [!IMPORTANT]
+> Reserved capacity pricing is available for [Azure Database for PostgreSQL single server](../single-server/overview-single-server.md) and [Azure Database for PostgreSQL flexible server](overview.md) deployment options.
+
+You can buy Azure Database for PostgreSQL flexible server reserved capacity in the [Azure portal](https://portal.azure.com/). Pay for the reservation [up front or with monthly payments](../../cost-management-billing/reservations/prepare-buy-reservation.md). To buy the reserved capacity:
+
+* You must be in the owner role for at least one Enterprise or individual subscription with pay-as-you-go rates.
+* For Enterprise subscriptions, **Add Reserved Instances** must be enabled in the [EA portal](https://ea.azure.com/). Or, if that setting is disabled, you must be an EA Admin on the subscription.
+* For Cloud Solution Provider (CSP) program, only the admin agents or sales agents can purchase Azure Database for PostgreSQL flexible server reserved capacity. </br>
+
+For details on how enterprise customers and pay-as-you-go customers are charged for reservation purchases, see [understand Azure reservation usage for your Enterprise enrollment](../../cost-management-billing/reservations/understand-reserved-instance-usage-ea.md) and [understand Azure reservation usage for your Pay-As-You-Go subscription](../../cost-management-billing/reservations/understand-reserved-instance-usage.md).
+
+## Reservation exchanges and refunds
+
+You can exchange a reservation for another reservation of the same type. You can also exchange a reservation from Azure Database for PostgreSQL single server for one from Azure Database for PostgreSQL flexible server. It's also possible to refund a reservation if you no longer need it. The Azure portal can be used to exchange or refund a reservation. For more information, see [Self-service exchanges and refunds for Azure Reservations](../../cost-management-billing/reservations/exchange-and-refund-azure-reservations.md).
+
+## Reservation discount
+
+You can save up to 65% on compute costs with reserved instances. To find the discount for your case, visit the [Reservation blade on the Azure portal](https://aka.ms/reservations) and check the savings per pricing tier and per region. Reserved instances help you manage your workloads, budget, and forecast better with an upfront payment for a one-year or three-year term. You can also exchange or cancel reservations as business needs change.
+
+## Determine the right server size before purchase
+
+The size of the reservation should be based on the total amount of compute used by the existing or soon-to-be-deployed servers within a specific region that use the same performance tier and hardware generation.
+
+For example, let's suppose that you're running one general purpose Gen5 32 vCore PostgreSQL database, and two memory-optimized Gen5 16 vCore PostgreSQL databases. Further, let's suppose that you plan to deploy another general purpose Gen5 8 vCore database server, and one memory-optimized Gen5 32 vCore database server, within the next month. Let's suppose that you know that you need these resources for at least one year. In this case, you should purchase a 40 (32 + 8) vCore, one-year reservation for single database general purpose - Gen5 and a 64 (2x16 + 32) vCore, one-year reservation for single database memory optimized - Gen5.
+
+## Buy Azure Database for PostgreSQL flexible server reserved capacity
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+2. Select **All services** > **Reservations**.
+3. Select **Add** and then in the Purchase reservations pane, select **Azure Database for PostgreSQL** to purchase a new reservation for your Azure Database for PostgreSQL flexible server databases.
+4. Fill in the required fields. Existing or new databases that match the attributes you select qualify to get the reserved capacity discount. The actual number of your Azure Database for PostgreSQL flexible server instances that get the discount depends on the scope and quantity selected.
++
+The following table describes required fields.
+
+| Field | Description |
+| : | :- |
+| Subscription | The subscription used to pay for the Azure Database for PostgreSQL reserved capacity reservation. The payment method on the subscription is charged the upfront costs for the Azure Database for PostgreSQL flexible server reserved capacity reservation. The subscription type must be an enterprise agreement (offer numbers: MS-AZR-0017P or MS-AZR-0148P) or an individual agreement with pay-as-you-go pricing (offer numbers: MS-AZR-0003P or MS-AZR-0023P). For an enterprise subscription, the charges are deducted from the enrollment's Azure Prepayment (previously called monetary commitment) balance or charged as overage. For an individual subscription with pay-as-you-go pricing, the charges are billed to the credit card or invoice payment method on the subscription.
+| Scope | The vCore reservation's scope can cover one subscription or multiple subscriptions (shared scope). If you select: </br></br> **Shared**, the vCore reservation discount is applied to Azure Database for PostgreSQL flexible server instances running in any subscriptions within your billing context. For enterprise customers, the shared scope is the enrollment and includes all subscriptions within the enrollment. For Pay-As-You-Go customers, the shared scope is all Pay-As-You-Go subscriptions created by the account administrator.</br></br>**Management group**, the reservation discount is applied to Azure Database for PostgreSQL flexible server instances running in any subscriptions that are a part of both the management group and billing scope.</br></br> **Single subscription**, the vCore reservation discount is applied to Azure Database for PostgreSQL flexible server instances in this subscription. </br></br> **Single resource group**, the reservation discount is applied to Azure Database for PostgreSQL flexible server instances in the selected subscription and the selected resource group within that subscription.
+| Region | The Azure region that's covered by the Azure Database for PostgreSQL flexible server reserved capacity reservation.
+| Deployment Type | The Azure Database for PostgreSQL flexible server resource type that you want to buy the reservation for.
+| Performance Tier | The service tier for the Azure Database for PostgreSQL flexible server instances.
+| Term | One year
+| Quantity | The amount of compute resources being purchased within the Azure Database for PostgreSQL flexible server reserved capacity reservation. The quantity is a number of vCores in the selected Azure region and Performance tier that are being reserved and get the billing discount. For example, if you're running or planning to run Azure Database for PostgreSQL flexible server instances with the total compute capacity of Gen5 16 vCores in the East US region, then you would specify quantity as 16 to maximize the benefit for all servers.
+
+## Reserved instances API support
+
+Use Azure APIs to programmatically get information for your organization about Azure service or software reservations. For example, use the APIs to:
+
+- Find reservations to buy
+- Buy a reservation
+- View purchased reservations
+- View and manage reservation access
+- Split or merge reservations
+- Change the scope of reservations
+
+For more information, see [APIs for Azure reservation automation](../../cost-management-billing/reservations/reservation-apis.md).
+
+## vCore size flexibility
+
+vCore size flexibility helps you scale up or down within a performance tier and region, without losing the reserved capacity benefit. If you scale to higher vCores than your reserved capacity, you're billed for the excess vCores using pay-as-you-go pricing.
+
+## How to view reserved instance purchase details
+
+You can view your reserved instance purchase details via the [Reservations menu on the left side of the Azure portal](https://aka.ms/reservations).
+
+## Reserved instance expiration
+
+You receive email notifications, the first one 30 days prior to reservation expiry and another one at expiration. Once the reservation expires, deployed Azure Database for PostgreSQL flexible server instances continue to run and are billed at the pay-as-you-go rate.
+
+## Need help? Contact us
+
+If you have questions or need help, [create a support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).
+
+## Next steps
+
+The vCore reservation discount is applied automatically to the number of Azure Database for PostgreSQL flexible server instances that match the Azure Database for PostgreSQL flexible server reserved capacity reservation scope and attributes. You can update the scope of the Azure Database for PostgreSQL flexible server reserved capacity reservation through Azure portal, PowerShell, CLI or through the API.
+
+To learn more about Azure Reservations, see the following articles:
+
+* [What are Azure Reservations](../../cost-management-billing/reservations/save-compute-costs-reservations.md)?
+* [Manage Azure Reservations](../../cost-management-billing/reservations/manage-reserved-vm-instance.md)
+* [Understand Azure Reservations discount](../../cost-management-billing/reservations/understand-reservation-charges.md)
+* [Understand reservation usage for your Enterprise enrollment](../../cost-management-billing/reservations/understand-reserved-instance-usage-ea.md)
+* [Azure Reservations in Partner Center Cloud Solution Provider (CSP) program](/partner-center/azure-reservations)
postgresql Concepts Scaling Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-scaling-resources.md
Title: Scaling Resources in Azure Database for PostgreSQL - Flexible Server
-description: This article describes the resource scaling in Azure Database for PostgreSQL - Flexible Server.
+ Title: Scaling resources
+description: This article describes the resource scaling in Azure Database for PostgreSQL - Flexible Server.
Last updated 1/4/2024
-# Scaling Resources in Azure Database for PostgreSQL - Flexible Server
+# Scaling resources in Azure Database for PostgreSQL - Flexible Server
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-Azure Database for PostgreSQL Flexible Server supports both **vertical** and **horizontal** scaling options.
+Azure Database for PostgreSQL flexible server supports both **vertical** and **horizontal** scaling options.
-You can scale **vertically** by adding more resources to the Flexible server instance, such as increasing the instance-assigned number of CPUs and memory. Network throughput of your instance depends on the values you choose for CPU and memory. Once a Flexible server instance is created, you can independently change the CPU (vCores), the amount of storage, and the backup retention period. The number of vCores can be scaled up or down. However, the storage size can only be increased. In addition, you can scale the backup retention period, up or down, from 7 to 35 days. The resources can be scaled using multiple tools, for instance, [Azure portal](./quickstart-create-server-portal.md) or the [Azure CLI](./quickstart-create-server-cli.md).
+You can scale **vertically** by adding more resources to the Azure Database for PostgreSQL flexible server instance, such as increasing the instance-assigned number of CPUs and memory. Network throughput of your instance depends on the values you choose for CPU and memory. Once an Azure Database for PostgreSQL flexible server instance is created, you can independently change the CPU (vCores), the amount of storage, and the backup retention period. The number of vCores can be scaled up or down. However, the storage size can only be increased. In addition, you can scale the backup retention period, up or down, from 7 to 35 days. The resources can be scaled using multiple tools, for instance, the [Azure portal](./quickstart-create-server-portal.md) or the [Azure CLI](./quickstart-create-server-cli.md).
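As an illustration of vertical scaling from the command line, the following Azure CLI sketch changes the compute tier and size and adjusts the backup retention period. The resource group, server name, and SKU are hypothetical placeholders; pick a SKU that's valid for your region and tier.

```bash
# Hypothetical example: scale compute and backup retention on an existing server.
az postgres flexible-server update \
  --resource-group myresourcegroup \
  --name mydemoserver \
  --tier GeneralPurpose \
  --sku-name Standard_D4ds_v4 \
  --backup-retention 14
```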
> [!NOTE]
> After you increase the storage size, you can't go back to a smaller storage size.
-You can scale **horizontally** by creating [read replicas](./concepts-read-replicas.md). Read replicas let you scale your read workloads onto separate flexible server instances, without affecting the performance and availability of the primary instance.
+You can scale **horizontally** by creating [read replicas](./concepts-read-replicas.md). Read replicas let you scale your read workloads onto separate Azure Database for PostgreSQL flexible server instances, without affecting the performance and availability of the primary instance.
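For reference, a read replica can also be created from the command line. The following Azure CLI sketch uses hypothetical resource group and server names.

```bash
# Hypothetical example: create a read replica of an existing server.
az postgres flexible-server replica create \
  --resource-group myresourcegroup \
  --replica-name mydemoserver-replica \
  --source-server mydemoserver
```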
When you change the number of vCores or the compute tier, the instance is restarted for the new server type to take effect. During the time the system is switching over to the new server type, no new connections can be established, and all uncommitted transactions are rolled back. The overall time it takes to restart your server depends on the crash recovery process and database activity at the time of the restart. Restart typically takes a minute or less, but it can take several minutes, depending on transactional activity at the time the restart was initiated.
Near-zero downtime scaling is a feature designed to minimize downtime when modif
### How it works
-When updating your Flexible server in scaling scenarios, we create a new copy of your server (VM) with the updated configuration, synchronize it with your current one, briefly switch to the new copy with a 30-second interruption, and retire the old server, all at no extra cost to you. This process allows for seamless updates while minimizing downtime and ensuring cost-efficiency. This scaling process is triggered when changes are made to the storage and compute tiers, and the experience remains consistent for both, HA and non-HA servers. This feature is enabled in all Azure regions^ and there's **no customer action required** to use this capability.
+When updating your Azure Database for PostgreSQL flexible server instance in scaling scenarios, we create a new copy of your server (VM) with the updated configuration, synchronize it with your current one, briefly switch to the new copy with a 30-second interruption, and retire the old server, all at no extra cost to you. This process allows for seamless updates while minimizing downtime and ensuring cost-efficiency. This scaling process is triggered when changes are made to the storage and compute tiers, and the experience remains consistent for both HA and non-HA servers. This feature is enabled in all Azure regions and there's **no customer action required** to use this capability.
> [!NOTE]
> The near-zero downtime scaling process is the _default_ operation. However, in cases where the following limitations are encountered, the system switches to regular scaling, which involves more downtime compared to the near-zero downtime scaling.
When updating your Flexible server in scaling scenarios, we create a new copy of
## Related content

-- [Create a PostgreSQL server in the portal](how-to-manage-server-portal.md).
+- [Create an Azure Database for PostgreSQL flexible server instance in the portal](how-to-manage-server-portal.md).
postgresql Concepts Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-security.md
Title: 'Security in Azure Database for PostgreSQL - Flexible Server'
-description: Learn about security in the Flexible Server deployment option for Azure Database for PostgreSQL.
+ Title: Security
+description: Learn about security in the Flexible Server deployment option for Azure Database for PostgreSQL - Flexible Server.
Last updated 2/10/2023
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-Multiple layers of security are available to help protect the data on your Azure Database for PostgreSQL server. This article outlines those security options.
+Multiple layers of security are available to help protect the data on your Azure Database for PostgreSQL flexible server instance. This article outlines those security options.
## Information protection and encryption
-Azure Database for PostgreSQL encrypts data in two ways:
+Azure Database for PostgreSQL flexible server encrypts data in two ways:
-- **Data in transit**: Azure Database for PostgreSQL encrypts in-transit data with Secure Sockets Layer and Transport Layer Security (SSL/TLS). Encryption is enforced by default. See this [guide](how-to-connect-tls-ssl.md) for more details. For better security, you may choose to enable [SCRAM authentication](how-to-connect-scram.md).
+- **Data in transit**: Azure Database for PostgreSQL flexible server encrypts in-transit data with Secure Sockets Layer and Transport Layer Security (SSL/TLS). Encryption is enforced by default. See this [guide](how-to-connect-tls-ssl.md) for more details. For better security, you may choose to enable [SCRAM authentication](how-to-connect-scram.md).
- Although it's not recommended, if needed, you have an option to disable TLS\SSL for connections to Azure Database for PostgreSQL - Flexible Server by updating the `require_secure_transport` server parameter to OFF. You can also set TLS version by setting `ssl_max_protocol_version` server parameters.
+ Although it's not recommended, if needed, you have an option to disable TLS/SSL for connections to Azure Database for PostgreSQL flexible server by updating the `require_secure_transport` server parameter to OFF. You can also set the TLS version by setting the `ssl_max_protocol_version` server parameter.
-- **Data at rest**: For storage encryption, Azure Database for PostgreSQL uses the FIPS 140-2 validated cryptographic module. Data is encrypted on disk, including backups and the temporary files created while queries are running.
+- **Data at rest**: For storage encryption, Azure Database for PostgreSQL flexible server uses the FIPS 140-2 validated cryptographic module. Data is encrypted on disk, including backups and the temporary files created while queries are running.
The service uses the AES 256-bit cipher included in Azure storage encryption, and the keys are system managed. This is similar to other at-rest encryption technologies, like transparent data encryption in SQL Server or Oracle databases. Storage encryption is always on and can't be disabled.

## Network security
-When you're running Azure Database for PostgreSQL - Flexible Server, you have two main networking options:
+When you're running Azure Database for PostgreSQL flexible server, you have two main networking options:
- **Private access**: You can deploy your server into an Azure virtual network. Azure virtual networks help provide private and secure network communication. Resources in a virtual network can communicate through private IP addresses. For more information, see the [networking overview for Azure Database for PostgreSQL - Flexible Server](concepts-networking.md).
To get alerts from the Microsoft Defender plan you'll first need to **enable it*
## Access management
-Best way to manage PostgreSQL database access permissions at scale is using the concept of [roles](https://www.postgresql.org/docs/current/user-manag.html). A role can be either a database user or a group of database users. Roles can own the database objects and assign privileges on those objects to other roles to control who has access to which objects. It is also possible to grant membership in a role to another role, thus allowing the member role to use privileges assigned to another role.
-PostgreSQL lets you grant permissions directly to the database users. **As a good security practice, it can be recommended that you create roles with specific sets of permissions based on minimum application and access requirements. You can then assign the appropriate roles to each user. Roles are used to enforce a *least privilege model* for accessing database objects.**
+The best way to manage Azure Database for PostgreSQL flexible server database access permissions at scale is using the concept of [roles](https://www.postgresql.org/docs/current/user-manag.html). A role can be either a database user or a group of database users. Roles can own the database objects and assign privileges on those objects to other roles to control who has access to which objects. It is also possible to grant membership in a role to another role, thus allowing the member role to use privileges assigned to another role.
+Azure Database for PostgreSQL flexible server lets you grant permissions directly to the database users. **As a good security practice, we recommend that you create roles with specific sets of permissions based on minimum application and access requirements. You can then assign the appropriate roles to each user. Roles are used to enforce a *least privilege model* for accessing database objects.**
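As a minimal sketch of this least-privilege pattern, the following SQL creates a read-only role and grants it to an application user. The database, schema, role, and user names are hypothetical; adapt them to your own objects and requirements.

```sql
-- Hypothetical example: a read-only role for reporting workloads.
CREATE ROLE readonly_role NOLOGIN;
GRANT CONNECT ON DATABASE testdb TO readonly_role;
GRANT USAGE ON SCHEMA public TO readonly_role;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO readonly_role;
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO readonly_role;

-- Assign the role to an application user instead of granting privileges directly.
CREATE ROLE report_user WITH LOGIN PASSWORD 'ChangeMe123!';
GRANT readonly_role TO report_user;
```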
-The Azure Database for PostgreSQL server is created with the 3 default roles defined. You can see these roles by running the command:
+The Azure Database for PostgreSQL flexible server instance is created with three default roles defined. You can see these roles by running the command:
```sql
SELECT rolname FROM pg_roles;
```
SELECT rolname FROM pg_roles;
* azuresu.
* administrator role.
-While you're creating the Azure Database for PostgreSQL server, you provide credentials for an **administrator role**. This administrator role can be used to create more [PostgreSQL roles](https://www.postgresql.org/docs/current/user-manag.html).
+While you're creating the Azure Database for PostgreSQL flexible server instance, you provide credentials for an **administrator role**. This administrator role can be used to create more [PostgreSQL roles](https://www.postgresql.org/docs/current/user-manag.html).
For example, we can create an example role called *demouser*:

```sql
postgres=> create role demouser with password 'password123';
```

The **administrator role** should never be used by the application.
-In cloud-based PaaS environments access to a PostgreSQL superuser account is restricted to control plane operations only by cloud operators. Therefore, the **azure_pg_admin** account exists as a pseudo-superuser account. Your administrator role is a member of the **azure_pg_admin** role.
+In cloud-based PaaS environments, access to an Azure Database for PostgreSQL flexible server superuser account is restricted to control plane operations performed by cloud operators. Therefore, the **azure_pg_admin** account exists as a pseudo-superuser account. Your administrator role is a member of the **azure_pg_admin** role.
However, the server admin account is not part of the **azuresu** role, which has superuser privileges and is used to perform control plane operations. Since this service is a managed PaaS service, only Microsoft is part of the superuser role.

> [!NOTE]
-> Number of superuser only permissions , such as creation of certain [implicit casts](https://www.postgresql.org/docs/current/sql-createcast.html), are not available with Azure Database for PostgreSQL - Flexible Server, since azure_pg_admin role doesn't align to permissions of postgresql superuser role.
+> A number of superuser-only permissions, such as creation of certain [implicit casts](https://www.postgresql.org/docs/current/sql-createcast.html), are not available with Azure Database for PostgreSQL flexible server, because the azure_pg_admin role doesn't align with the permissions of the PostgreSQL superuser role.
You can periodically audit the list of roles in your server. For example, you can connect using the `psql` client and query the `pg_roles` table, which lists all the roles along with privileges such as the ability to create additional roles, create databases, use replication, and so on.
oid | 24827
```
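To narrow a periodic audit to the attributes that matter most, a query like the following can be used; the columns are standard `pg_roles` columns.

```sql
-- List every role with its key attributes for a periodic access review.
SELECT rolname, rolsuper, rolcreaterole, rolcreatedb, rolcanlogin, rolreplication
FROM pg_roles
ORDER BY rolname;
```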
-[Audit logging](concepts-audit.md) is also available with Flexible Server to track activity in your databases.
+[Audit logging](concepts-audit.md) is also available with Azure Database for PostgreSQL flexible server to track activity in your databases.
> [!NOTE]
-> Azure Database for PostgreSQL - Flexible Server currently doesn't support [Microsoft Defender for Cloud protection](../../security-center/azure-defender.md).
+> Azure Database for PostgreSQL flexible server currently doesn't support [Microsoft Defender for Cloud protection](../../security-center/azure-defender.md).
### Controlling schema access
-Newly created databases in PostgreSQL will have a default set of privileges in the database's public schema that allow all database users and roles to create objects. To better limit application user access to the databases that you create on your Flexible Server, we recommend that you consider revoking these default public privileges. After doing so, you can then grant specific privileges for database users on a more granular basis. For example:
+Newly created databases in Azure Database for PostgreSQL flexible server have a default set of privileges in the database's public schema that allow all database users and roles to create objects. To better limit application user access to the databases that you create on your Azure Database for PostgreSQL flexible server instance, we recommend that you consider revoking these default public privileges. After doing so, you can then grant specific privileges for database users on a more granular basis. For example:
* To prevent application database users from creating objects in the public schema, revoke create privileges on the *public* schema:

```sql
REVOKE CREATE ON SCHEMA public FROM PUBLIC;
```
In this example, user *user1* can connect and has all privileges in our test dat
## Row level security
-[Row level security (RLS)](https://www.postgresql.org/docs/current/ddl-rowsecurity.html) is a PostgreSQL security feature that allows database administrators to define policies to control how specific rows of data display and operate for one or more roles. Row level security is an additional filter you can apply to a PostgreSQL database table. When a user tries to perform an action on a table, this filter is applied before the query criteria or other filtering, and the data is narrowed or rejected according to your security policy. You can create row level security policies for specific commands like *SELECT*, *INSERT*, *UPDATE*, and *DELETE*, specify it for ALL commands. Use cases for row level security include PCI compliant implementations, classified environments, as well as shared hosting / multitenant applications.
+[Row level security (RLS)](https://www.postgresql.org/docs/current/ddl-rowsecurity.html) is an Azure Database for PostgreSQL flexible server security feature that allows database administrators to define policies to control how specific rows of data display and operate for one or more roles. Row level security is an additional filter you can apply to an Azure Database for PostgreSQL flexible server database table. When a user tries to perform an action on a table, this filter is applied before the query criteria or other filtering, and the data is narrowed or rejected according to your security policy. You can create row level security policies for specific commands like *SELECT*, *INSERT*, *UPDATE*, and *DELETE*, or specify it for ALL commands. Use cases for row level security include PCI compliant implementations, classified environments, as well as shared hosting / multitenant applications.
+ Only users with `SET ROW SECURITY` rights may apply row security rights to a table. The table owner may set row security on a table. Like `OVERRIDE ROW SECURITY` this is currently an implicit right. Row-level security does not override existing *GRANT* permissions, it adds a finer grained level of control. For example, setting `ROW SECURITY FOR SELECT` to allow a given user to give rows would only give that user access if the user also has *SELECT* privileges on the column or table in question.
-Here is an example showing how to create a policy that ensures only members of the custom created *ΓÇ£managerΓÇ¥* [role](#access-management) can access only the rows for a specific account. The code in below example was shared in the [PostgreSQL documentation](https://www.postgresql.org/docs/current/ddl-rowsecurity.html).
+Here is an example showing how to create a policy that ensures that only members of the custom-created *"manager"* [role](#access-management) can access the rows for a specific account. The code in the following example was shared in the [PostgreSQL documentation](https://www.postgresql.org/docs/current/ddl-rowsecurity.html).
```sql
CREATE TABLE accounts (manager text, company text, contact_email text);
CREATE POLICY account_managers ON accounts TO managers
    USING (manager = current_user);
```

The USING clause implicitly adds a `WITH CHECK` clause, ensuring that members of the manager role cannot perform SELECT, DELETE, or UPDATE operations on rows that belong to other managers, and cannot INSERT new rows belonging to another manager.

> [!NOTE]
-> In [PostgreSQL it is possible for a user to be assigned the *BYPASSRLS* attribute by another superuser](https://www.postgresql.org/docs/current/ddl-rowsecurity.html). With this permission, a user can bypass RLS for all tables in Postgres, as is superuser. That permission cannot be assigned in Azure Database for PostgreSQL - Flexible Server, since administrator role has no superuser privileges, as common in cloud based PaaS PostgreSQL service.
+> In [PostgreSQL it is possible for a user to be assigned the *BYPASSRLS* attribute by another superuser](https://www.postgresql.org/docs/current/ddl-rowsecurity.html). With this permission, a user can bypass RLS for all tables in Postgres, as a superuser can. That permission cannot be assigned in Azure Database for PostgreSQL flexible server, because the administrator role has no superuser privileges, as is common in cloud-based PaaS PostgreSQL services.
## Updating passwords
For better security, it is a good practice to periodically rotate your admin pas
## Using SCRAM

The [Salted Challenge Response Authentication Mechanism (SCRAM)](https://datatracker.ietf.org/doc/html/rfc5802) greatly improves the security of password-based user authentication by adding several key security features that prevent rainbow-table attacks, man-in-the-middle attacks, and stored password attacks, while also adding support for multiple hashing algorithms and passwords that contain non-ASCII characters.
-If your [client driver supports SCRAM](https://wiki.postgresql.org/wiki/List_of_drivers) , you can **[setup access to Azure Database for PostgreSQL - Flexible Server using SCRAM](./how-to-connect-scram.md)** as `scram-sha-256` vs. default `md5`.
+If your [client driver supports SCRAM](https://wiki.postgresql.org/wiki/List_of_drivers), you can **[set up access to Azure Database for PostgreSQL flexible server using SCRAM](./how-to-connect-scram.md)** as `scram-sha-256` instead of the default `md5`.
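As a sketch of what enabling SCRAM can look like from the Azure CLI, the following command sets the `password_encryption` server parameter. The resource group and server name are hypothetical placeholders, and the linked how-to article remains the authoritative procedure.

```bash
# Hypothetical example: switch password hashing to SCRAM-SHA-256 on the server.
az postgres flexible-server parameter set \
  --resource-group myresourcegroup \
  --server-name mydemoserver \
  --name password_encryption \
  --value scram-sha-256
```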
### Reset administrator password
postgresql Concepts Server Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-server-parameters.md
Title: Server parameters - Azure Database for PostgreSQL - Flexible Server
-description: Describes the server parameters in Azure Database for PostgreSQL - Flexible Server
+ Title: Server parameters
+description: Describes the server parameters in Azure Database for PostgreSQL - Flexible Server.
Last updated 11/30/2021
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-Azure Database for PostgreSQL provides a subset of configurable parameters for each server. For more information on Postgres parameters, see the [PostgreSQL documentation](https://www.postgresql.org/docs/current/config-setting.html).
+Azure Database for PostgreSQL flexible server provides a subset of configurable parameters for each server. For more information on Postgres parameters, see the [PostgreSQL documentation](https://www.postgresql.org/docs/current/config-setting.html).
## An overview of PostgreSQL parameters
Here's a list of some of the parameters:
| Parameter Name | Description |
|-|--|
-| **max_connections** | You can tune max_connections on Postgres Flexible Server, where it can be set to 5000 connections. See the [limits documentation](concepts-limits.md) for more details. Although it is not the best practice to set this value higher than several hundreds. See [Postgres Wiki](https://wiki.postgresql.org/wiki/Number_Of_Database_Connections) for more details. If you are considering higher values, consider using [connection pooling](concepts-pgbouncer.md) instead. |
+| **max_connections** | You can tune max_connections on Azure Database for PostgreSQL flexible server, where it can be set to 5000 connections. See the [limits documentation](concepts-limits.md) for more details. However, it isn't best practice to set this value higher than a few hundred; see the [Postgres Wiki](https://wiki.postgresql.org/wiki/Number_Of_Database_Connections) for more details. If you're considering higher values, consider using [connection pooling](concepts-pgbouncer.md) instead. |
| **shared_buffers** | The 'shared_buffers' setting changes depending on the selected SKU (SKU determines the memory available). General Purpose servers have 2 GB shared_buffers for 2 vCores; Memory Optimized servers have 4 GB shared_buffers for 2 vCores. The shared_buffers setting scales linearly (approximately) as vCores increase in a tier. | | **shared_preload_libraries** | This parameter is available for configuration with a predefined set of supported extensions. We always load the `azure` extension (used for maintenance tasks), and the `pg_stat_statements` extension (you can use the pg_stat_statements.track parameter to control whether the extension is active). | | **connection_throttling** | You can enable or disable temporary connection throttling per IP for too many invalid password login failures. |
- | **work_mem** | This parameter specifies the amount of memory to be used by internal sort operations and hash tables before writing to temporary disk files. Increasing this parameter value can help Postgres perform larger in-memory scans instead of spilling to disk, which is faster. This configuration is beneficial if your workload contains few queries but with many complex sorting tasks, and you have ample available memory. Be careful however, as one complex query may have number of sort, hash operations running concurrently. Each one of those operations uses as much memory as it value allows before it starts writing to disk based temporary files. Therefore on a relatively busy system total memory usage is many times of individual work_mem parameter. If you do decide to tune this value globally, you can use formula Total RAM * 0.25 / max_connections as initial value. Azure Database for PostgreSQL - Flexible Server supports range of 4096-2097152 kilobytes for this parameter. |
+ | **work_mem** | This parameter specifies the amount of memory to be used by internal sort operations and hash tables before writing to temporary disk files. Increasing this parameter value can help Azure Database for PostgreSQL flexible server perform larger in-memory scans instead of spilling to disk, which is faster. This configuration is beneficial if your workload contains few queries but with many complex sorting tasks, and you have ample available memory. Be careful, however, as one complex query may have a number of sort and hash operations running concurrently. Each one of those operations uses as much memory as its value allows before it starts writing to disk-based temporary files. Therefore, on a relatively busy system the total memory usage is many times the value of the individual work_mem parameter. If you do decide to tune this value globally, you can use the formula Total RAM * 0.25 / max_connections as an initial value. Azure Database for PostgreSQL flexible server supports a range of 4096-2097152 kilobytes for this parameter. |
| **effective_cache_size** | The effective_cache_size parameter estimates how much memory is available for disk caching by the operating system and within the database shared_buffers itself. This parameter is just a planner "hint" and does not allocate or reserve any memory. Index scans are most likely to be used against higher values; otherwise, sequential scans are used if the value is low. Recommendations are to set effective_cache_size at 50%-75% of the machine's total RAM. | | **maintenance_work_mem** | The maintenance_work_mem parameter basically provides the maximum amount of memory to be used by maintenance operations like vacuum, create index, and alter table add foreign key operations. Default value for that parameter is 64 KB. It's recommended to set this value higher than work_mem. |
-| **effective_io_concurrency** | Sets the number of concurrent disk I/O operations that PostgreSQL expects can be executed simultaneously. Raising this value increases the number of I/O operations that any individual PostgreSQL session attempts to initiate in parallel. The allowed range is 1 to 1000, or zero to disable issuance of asynchronous I/O requests. Currently, this setting only affects bitmap heap scans. |
+| **effective_io_concurrency** | Sets the number of concurrent disk I/O operations that Azure Database for PostgreSQL flexible server expects can be executed simultaneously. Raising this value increases the number of I/O operations that any individual Azure Database for PostgreSQL flexible server session attempts to initiate in parallel. The allowed range is 1 to 1000, or zero to disable issuance of asynchronous I/O requests. Currently, this setting only affects bitmap heap scans. |
|**require_secure_transport** | If your application doesn't support SSL connectivity to the server, you can optionally disable secured transport from your client by setting this parameter value to `OFF`. |
- |**log_connections** | This parameter may be read-only, as on Azure Database for PostgreSQL - Flexible Server all connections are logged and intercepted to make sure connections are coming in from right sources for security reasons. |
-|**log_disconnections** | This parameter may be read-only, as on Azure Database for PostgreSQL - Flexible Server all disconnections are logged. |
+ |**log_connections** | This parameter may be read-only, as on Azure Database for PostgreSQL flexible server all connections are logged and intercepted to make sure connections are coming in from the right sources, for security reasons. |
+|**log_disconnections** | This parameter may be read-only, as on Azure Database for PostgreSQL flexible server all disconnections are logged. |
>[!NOTE]
-> As you scale Azure Database for PostgreSQL - Flexible Server SKUs up or down, affecting available memory to the server, you may wish to tune your memory global parameters, such as `work_mem` or `effective_cache_size` accordingly based on information shared in the article.
+> As you scale Azure Database for PostgreSQL flexible server SKUs up or down, which affects the memory available to the server, you may wish to tune your global memory parameters, such as `work_mem` or `effective_cache_size`, accordingly, based on the information shared in this article.
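As an illustration only, here is a minimal sketch of applying the initial-value formula above on a hypothetical 4 vCore, 16 GiB server with `max_connections` set to 100; the numbers, names, and chosen values are assumptions for the example, not recommendations for your workload:

```sql
-- Hypothetical sizing: 16 GiB * 0.25 / 100 connections is roughly 40 MB per sort or hash operation.
-- Check the current values first.
SHOW work_mem;
SHOW effective_cache_size;

-- Try the candidate value at session level before changing it globally
-- (global changes are made through the server parameters experience in the Azure portal or the Azure CLI).
SET work_mem = '40MB';
```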
## Next steps
postgresql Concepts Servers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-servers.md
Title: Servers in Azure Database for PostgreSQL - Flexible Server
+ Title: Servers
description: This article provides considerations and guidelines for configuring and managing Azure Database for PostgreSQL - Flexible Server.
Last updated 12/12/2023
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-This article provides considerations and guidelines for working with Azure Database for PostgreSQL - Flexible Server.
+This article provides considerations and guidelines for working with Azure Database for PostgreSQL flexible server.
## What is an Azure Database for PostgreSQL server?
-A server in the Azure Database for PostgreSQL - Flexible Server deployment option is a central administrative point for multiple databases. It is the same PostgreSQL server construct that you may be familiar with in the on-premises world. Specifically, the PostgreSQL service is managed, provides performance guarantees, exposes access and features at the server-level.
+A server in the Azure Database for PostgreSQL flexible server deployment option is a central administrative point for multiple databases. It is the same PostgreSQL server construct that you may be familiar with in the on-premises world. Specifically, Azure Database for PostgreSQL flexible server is managed, provides performance guarantees, and exposes access and features at the server level.
-An Azure Database for PostgreSQL server:
+An Azure Database for PostgreSQL flexible server instance:
- Is created within an Azure subscription. - Is the parent resource for databases.
An Azure Database for PostgreSQL server:
- Is available in multiple versions. For more information, see [supported PostgreSQL database versions](concepts-supported-versions.md). - Is extensible by users. For more information, see [PostgreSQL extensions](concepts-extensions.md).
-Within an Azure Database for PostgreSQL server, you can create one or multiple databases. You can opt to create a single database per server to utilize all the resources, or create multiple databases to share the resources. The pricing is structured per-server, based on the configuration of pricing tier, vCores, and storage (GB). For more information, see [Compute and Storage options](concepts-compute-storage.md).
+Within an Azure Database for PostgreSQL flexible server instance, you can create one or multiple databases. You can opt to create a single database per server to utilize all the resources, or create multiple databases to share the resources. The pricing is structured per-server, based on the configuration of pricing tier, vCores, and storage (GB). For more information, see [Compute and Storage options](concepts-compute-storage.md).
## How do I connect and authenticate to the database server?
The following elements help ensure safe access to your database:
| Security concept | Description | | :-- | :-- |
-| **Authentication and authorization** | Azure Database for PostgreSQL server supports native PostgreSQL authentication. You can connect and authenticate to server with the server's admin login. |
+| **Authentication and authorization** | Azure Database for PostgreSQL flexible server supports native PostgreSQL authentication. You can connect and authenticate to a server with the server's admin login. |
| **Protocol** | The service supports a message-based protocol used by PostgreSQL. | | **TCP/IP** | The protocol is supported over TCP/IP, and over Unix-domain sockets. |
-| **Firewall** | To help protect your data, a firewall rule prevents all access to your server and to its databases, until you specify which computers have permission. See [Azure Database for PostgreSQL Server firewall rules](how-to-manage-firewall-portal.md). |
+| **Firewall** | To help protect your data, a firewall rule prevents all access to your server and to its databases, until you specify which computers have permission. See [Azure Database for PostgreSQL flexible server firewall rules](how-to-manage-firewall-portal.md). |
## Managing your server
-You can manage Azure Database for PostgreSQL servers by using the [Azure portal](https://portal.azure.com) or the [Azure CLI](/cli/azure/postgres).
+You can manage Azure Database for PostgreSQL flexible server instances by using the [Azure portal](https://portal.azure.com) or the [Azure CLI](/cli/azure/postgres).
While creating a server, you set up the credentials for your admin user. The admin user is the highest privilege user you have on the server. It belongs to the role azure_pg_admin. This role does not have full superuser permissions. The PostgreSQL superuser attribute is assigned to the azure_superuser, which belongs to the managed service. You do not have access to this role.
-An Azure Database for PostgreSQL server has default databases:
+An Azure Database for PostgreSQL flexible server instance has default databases:
- **postgres** - A default database you can connect to once your server is created. - **azure_maintenance** - This database is used to separate the processes that provide the managed service from user actions. You do not have access to this database. ## Server parameters
-The PostgreSQL server parameters determine the configuration of the server. In Azure Database for PostgreSQL, the list of parameters can be viewed and edited using the Azure portal or the Azure CLI.
+The Azure Database for PostgreSQL flexible server parameters determine the configuration of the server. In Azure Database for PostgreSQL flexible server, the list of parameters can be viewed and edited using the Azure portal or the Azure CLI.
-As a managed service for Postgres, the configurable parameters in Azure Database for PostgreSQL are a subset of the parameters in a local Postgres instance (For more information on Postgres parameters, see the [PostgreSQL documentation](https://www.postgresql.org/docs/current/static/runtime-config.html)). Your Azure Database for PostgreSQL server is enabled with default values for each parameter on creation. Some parameters that would require a server restart or superuser access for changes to take effect cannot be configured by the user.
+As a managed service for Postgres, the configurable parameters in Azure Database for PostgreSQL flexible server are a subset of the parameters in a local Postgres instance (for more information on Postgres parameters, see the [PostgreSQL documentation](https://www.postgresql.org/docs/current/static/runtime-config.html)). Your Azure Database for PostgreSQL flexible server instance is enabled with default values for each parameter on creation. Some parameters that would require a server restart or superuser access for changes to take effect can't be configured by the user.
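As a quick, hedged illustration, you can inspect the effective value of a parameter from any client session with standard PostgreSQL commands, even though global edits are made through the portal or CLI; the parameter names below are just examples:

```sql
-- View a single parameter's current value.
SHOW shared_buffers;

-- List a few parameters along with their current settings, units, and defaults.
SELECT name, setting, unit, boot_val, context
FROM pg_settings
WHERE name IN ('work_mem', 'shared_buffers', 'effective_cache_size');
```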
## Next steps -- For an overview of the service, see [Azure Database for PostgreSQL Overview](overview.md).
+- For an overview of the service, see [Azure Database for PostgreSQL flexible server overview](overview.md).
- For information about specific resource quotas and limitations based on your **configuration**, see [Compute and Storage options](concepts-compute-storage.md).-- View and edit server parameters through [Azure portal](howto-configure-server-parameters-using-portal.md) or [Azure CLI](howto-configure-server-parameters-using-cli.md).
+- View and edit server parameters through [Azure portal](how-to-configure-server-parameters-using-portal.md) or [Azure CLI](how-to-configure-server-parameters-using-cli.md).
postgresql Concepts Storage Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-storage-extension.md
Title: Azure Storage Extension in Azure Database for PostgreSQL - Flexible Server -Preview
-description: Azure Storage Extension in Azure Database for PostgreSQL - Flexible Server -Preview
+ Title: Azure Storage Extension Preview
+description: Azure Storage Extension in Azure Database for PostgreSQL - Flexible Server - Preview.
-# Azure Database for PostgreSQL Flexible Server Azure Storage Extension - Preview
+# Azure Database for PostgreSQL - Flexible Server Azure Storage Extension - Preview
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-A common use case for our customers today is need to be able to import\export between Azure Blob Storage and Microsoft Database for PostgreSQL ΓÇô Flexible Server DB instance. To simplify this use case, we introduced new **Azure Storage Extension** (azure_storage) in Azure Database for PostgreSQL - Flexible Server, currently available in **Preview**.
+A common use case for our customers today is the need to import and export data between Azure Blob Storage and an Azure Database for PostgreSQL flexible server instance. To simplify this use case, we introduced the new **Azure Storage Extension** (azure_storage) in Azure Database for PostgreSQL flexible server, currently available in **Preview**.
> [!NOTE]
-> Azure Database for PostgreSQL - Flexible Server supports Azure Storage Extension in Preview
+> Azure Database for PostgreSQL flexible server supports the Azure Storage Extension in Preview.
## Azure Blob Storage
Azure Blob Storage is Microsoft's object storage solution for the cloud. Blob St
Blob Storage offers a hierarchy of three types of resources. These types include: - The [**storage account**](../../storage/blobs/storage-blobs-introduction.md#storage-accounts). The storage account is like an administrative container, and within that container, we can have several services like *blobs*, *files*, *queues*, *tables*, *disks*, etc. When we create a storage account in Azure, we get a unique namespace for our storage resources. That unique namespace forms part of the URL. The storage account name should be unique across all existing storage account names in Azure.-- A [**container**](../../storage/blobs/storage-blobs-introduction.md#containers) inside storage account. The container is more like a folder where different blobs are stored. At the container level, we can define security policies and assign policies to the container, which is cascaded to all the blobs under the same container.A storage account can contain an unlimited number of containers, and each container can contain an unlimited number of blobs up to the maximum limit of storage account size of 500 TB.
+- A [**container**](../../storage/blobs/storage-blobs-introduction.md#containers) inside a storage account. The container is more like a folder where different blobs are stored. At the container level, we can define security policies and assign them to the container, and they cascade to all the blobs under the same container. A storage account can contain an unlimited number of containers, and each container can contain an unlimited number of blobs, up to the maximum storage account size of 500 TB.
To refer to a blob, once it's placed into a container inside a storage account, a URL can be used, in a format like *protocol://<storage_account_name>.blob.core.windows.net/<container_name>/<blob_name>* - A [**blob**](../../storage/blobs/storage-blobs-introduction.md#blobs) in the container. The following diagram shows the relationship between these resources.
Azure Blob Storage can provide the following benefits:
- Azure Blob Storage interfaces with other Azure services and third-party applications, making it a versatile solution for a wide range of use cases such as backup and disaster recovery, archiving, and data analysis. - Azure Blob Storage allows you to pay only for the storage you need, making it a cost-effective solution for managing and storing massive amounts of data. Whether you're a small business or a large enterprise, Azure Blob Storage offers a versatile and scalable solution for your cloud storage needs.
-## Import data from Azure Blob Storage to Azure Database for PostgreSQL - Flexible Server
+## Import data from Azure Blob Storage to Azure Database for PostgreSQL flexible server
To load data from Azure Blob Storage, you need to [allowlist](../../postgresql/flexible-server/concepts-extensions.md#how-to-use-postgresql-extensions) the **azure_storage** extension and install the **azure_storage** PostgreSQL extension in this database using the CREATE EXTENSION command:
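A minimal sketch of that setup, assuming the hypothetical storage account used throughout this article (the access key is a placeholder):

```sql
CREATE EXTENSION azure_storage;

-- Register the storage account and its access key so the blob functions can reach it
-- (the same account_add call is shown again later in this article).
SELECT azure_storage.account_add('mystorageaccount', 'SECRET_ACCESS_KEY');
```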
FROM azure_storage.blob_list('mystorageaccount','mytestblob');
Output of this statement can be further filtered either by using a regular *SQL WHERE* clause, or by using the prefix parameter of the blob_list method; both approaches are sketched below. Listing container contents requires an account and access key or a container with anonymous access enabled.
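For illustration, here's a hedged sketch of both filtering approaches with the same hypothetical account and container names; it assumes the prefix is passed as an extra argument to blob_list:

```sql
-- Filter with a regular WHERE clause.
SELECT path, size, last_modified, etag
FROM azure_storage.blob_list('mystorageaccount', 'mytestblob')
WHERE path LIKE 'employee%';

-- Or pass a prefix so the function only returns matching blobs.
SELECT path, size, last_modified, etag
FROM azure_storage.blob_list('mystorageaccount', 'mytestblob', 'employee');
```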
-Finally you can use either **COPY** statement or **blob_get** function to import data from Azure Storage into existing PostgreSQL table.
+Finally, you can use either the **COPY** statement or the **blob_get** function to import data from Azure Storage into an existing Azure Database for PostgreSQL flexible server table.
### Import data using COPY statement The example below shows the import of data from the employee.csv file residing in the blob container mytestblob in the same mystorageaccount Azure storage account via the **COPY** command: 1. First, create a target table matching the source file schema:
SELECT * FROM azure_storage.blob_get('mystorageaccount','mytestblob','employee.c
FirstName varchar(50)) ```
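For orientation, here's a minimal sketch of the two import paths described above; it reuses the hypothetical account, container, file, and table names, the column list is assumed, and the exact COPY options may differ in your environment:

```sql
-- Import with COPY, referencing the blob by URL.
COPY employee
FROM 'https://mystorageaccount.blob.core.windows.net/mytestblob/employee.csv'
WITH (FORMAT 'csv', HEADER true);

-- Equivalent import with blob_get, defining the result shape inline.
INSERT INTO employee
SELECT *
FROM azure_storage.blob_get('mystorageaccount', 'mytestblob', 'employee.csv')
AS res (EmployeeId int, LastName varchar(50), FirstName varchar(50));
```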
-The **COPY** command and **blob_get** function support following file extensions for import:
+The **COPY** command and **blob_get** function support the following file extensions for import:
| **File Format** | **Description** | | | |
The **COPY** command and **blob_get** function support following file extension
| binary | Binary PostgreSQL COPY format | | text | A file containing a single text value (for example, large JSON or XML) |
-## Export data from Azure Database for PostgreSQL - Flexible Server to Azure Blob Storage
+## Export data from Azure Database for PostgreSQL flexible server to Azure Blob Storage
-To export data from PostgreSQL Flexible Server to Azure Blob Storage, you need to [allowlist](../../postgresql/flexible-server/concepts-extensions.md#how-to-use-postgresql-extensions) **azure_storage** extension and install the **azure_storage** PostgreSQL extension in database using create extension command:
+To export data from Azure Database for PostgreSQL flexible server to Azure Blob Storage, you need to [allowlist](../../postgresql/flexible-server/concepts-extensions.md#how-to-use-postgresql-extensions) the **azure_storage** extension and install the **azure_storage** PostgreSQL extension in the database using the CREATE EXTENSION command:
```sql CREATE EXTENSION azure_storage;
When you create a storage account, Azure generates two 512-bit storage **account
SELECT azure_storage.account_add('mystorageaccount', 'SECRET_ACCESS_KEY'); ```
-You can use either **COPY** statement or **blob_put** function to export data from Azure Database for PostgreSQL table to Azure storage.
+You can use either the **COPY** statement or the **blob_put** function to export data from an Azure Database for PostgreSQL flexible server table to Azure storage.
The example shows the export of data from the employee table to a new file named employee2.csv residing in the blob container mytestblob in the same mystorageaccount Azure storage account via the **COPY** command: ```sql
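-- A hedged sketch of the COPY-based export described above; it reuses the hypothetical
-- account, container, and table names, and the exact options may differ in your environment.
COPY employee
TO 'https://mystorageaccount.blob.core.windows.net/mytestblob/employee2.csv'
WITH (FORMAT 'csv');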
SELECT path, size, last_modified, etag FROM azure_storage.blob_list('mystorageac
## Assign permissions to a nonadministrative account to access data from Azure Storage
-By default, only [azure_pg_admin](./concepts-security.md#access-management) administrative role can add an account key and access the storage account in Postgres Flexible Server.
-Granting the permissions to access data in Azure Storage to nonadministrative PostgreSQL Flexible server user can be done in two ways depending on permission granularity:
+By default, only the [azure_pg_admin](./concepts-security.md#access-management) administrative role can add an account key and access the storage account in Azure Database for PostgreSQL flexible server.
+Granting permissions to access data in Azure Storage to nonadministrative Azure Database for PostgreSQL flexible server users can be done in two ways, depending on permission granularity:
- Assign **azure_storage_admin** to the nonadministrative user. This role is added with the installation of the Azure Storage Extension. The example below grants this role to a nonadministrative user called *support*: ```sql -- Allow adding/list/removing storage accounts GRANT azure_storage_admin TO support; ```-- Or by calling **account_user_add** function. Example is adding permissions to role *support* in Flex server. It's a more finite permission as it gives user access to Azure storage account named *mystorageaccount* only.
+- Or by calling the **account_user_add** function. The example adds permissions to the role *support* in Azure Database for PostgreSQL flexible server. It's a more fine-grained permission, as it gives the user access to the Azure storage account named *mystorageaccount* only.
```sql SELECT * FROM azure_storage.account_user_add('mystorageaccount', 'support'); ```
-Postgres administrative users can see the list of storage accounts and permissions in the output of **account_list** function, which shows all accounts with access keys defined:
+Azure Database for PostgreSQL flexible server administrative users can see the list of storage accounts and permissions in the output of the **account_list** function, which shows all accounts with access keys defined:
```sql SELECT * FROM azure_storage.account_list(); ```
-When Postgres administrator decides that the user should no longer have access, method\function **account_user_remove** can be used to remove this access. Following example removes role *support* from access to storage account *mystorageaccount*.
+When the Azure Database for PostgreSQL flexible server administrator decides that the user should no longer have access, the **account_user_remove** method/function can be used to remove this access. The following example removes the role *support*'s access to the storage account *mystorageaccount*.
```sql
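-- Hedged sketch: mirrors the account_user_add call shown earlier; the exact signature is assumed.
SELECT * FROM azure_storage.account_user_remove('mystorageaccount', 'support');
```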
postgresql Concepts Supported Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-supported-versions.md
Title: Supported versions - Azure Database for PostgreSQL - Flexible Server
+ Title: Supported versions
description: Describes the supported PostgreSQL major and minor versions in Azure Database for PostgreSQL - Flexible Server.
Last updated 12/12/2023
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-Azure Database for PostgreSQL - Flexible Server currently supports the following major versions:
+Azure Database for PostgreSQL flexible server currently supports the following major versions:
## PostgreSQL version 16
-PostgreSQL version 16 is now generally available in all Azure regions. The current minor release is **16.0**. Refer to the [PostgreSQL documentation](https://www.postgresql.org/docs/16/release-16.html) to learn more about improvements and fixes in this release. New servers will be created with this minor version.
+PostgreSQL version 16 is now generally available in all Azure regions. The current minor release is **16.0**. Refer to the [PostgreSQL documentation](https://www.postgresql.org/docs/16/release-16.html) to learn more about improvements and fixes in this release. New servers are created with this minor version.
## PostgreSQL version 15
-The current minor release is **15.4**. Refer to the [PostgreSQL documentation](https://www.postgresql.org/docs/release/15.4/) to learn more about improvements and fixes in this release. New servers will be created with this minor version.
-
+The current minor release is **15.4**. Refer to the [PostgreSQL documentation](https://www.postgresql.org/docs/release/15.4/) to learn more about improvements and fixes in this release. New servers are created with this minor version.
## PostgreSQL version 14
-The current minor release is **14.9**. Refer to the [PostgreSQL documentation](https://www.postgresql.org/docs/release/14.9/) to learn more about improvements and fixes in this release. New servers will be created with this minor version.
+The current minor release is **14.9**. Refer to the [PostgreSQL documentation](https://www.postgresql.org/docs/release/14.9/) to learn more about improvements and fixes in this release. New servers are created with this minor version.
## PostgreSQL version 13
-The current minor release is **13.12**. Refer to the [PostgreSQL documentation](https://www.postgresql.org/docs/release/13.12/) to learn more about improvements and fixes in this release. New servers will be created with this minor version.
+The current minor release is **13.12**. Refer to the [PostgreSQL documentation](https://www.postgresql.org/docs/release/13.12/) to learn more about improvements and fixes in this release. New servers are created with this minor version.
## PostgreSQL version 12
-The current minor release is **12.16**. Refer to the [PostgreSQL documentation](https://www.postgresql.org/docs/release/12.16/) to learn more about improvements and fixes in this release. New servers will be created with this minor version. Your existing servers will be automatically upgraded to the latest supported minor version in your future scheduled maintenance window.
+The current minor release is **12.16**. Refer to the [PostgreSQL documentation](https://www.postgresql.org/docs/release/12.16/) to learn more about improvements and fixes in this release. New servers are created with this minor version. Your existing servers are automatically upgraded to the latest supported minor version in your future scheduled maintenance window.
## PostgreSQL version 11
-The current minor release is **11.21**. Refer to the [PostgreSQL documentation](https://www.postgresql.org/docs/release/11.21/) to learn more about improvements and fixes in this release. New servers will be created with this minor version. Your existing servers will be automatically upgraded to the latest supported minor version in your future scheduled maintenance window.
+The current minor release is **11.21**. Refer to the [PostgreSQL documentation](https://www.postgresql.org/docs/release/11.21/) to learn more about improvements and fixes in this release. New servers are created with this minor version. Your existing servers are automatically upgraded to the latest supported minor version in your future scheduled maintenance window.
## PostgreSQL version 10 and older
-We don't support PostgreSQL version 10 and older for Azure Database for PostgreSQL - Flexible Server. Please use the [Single Server](../concepts-supported-versions.md) deployment option if you require older versions.
+We don't support PostgreSQL version 10 and older for Azure Database for PostgreSQL flexible server. Use the [Azure Database for PostgreSQL single server](../concepts-supported-versions.md) deployment option if you require older versions.
## Managing upgrades
-The PostgreSQL project regularly issues minor releases to fix reported bugs. Azure Database for PostgreSQL automatically patches servers with minor releases during the service's monthly deployments.
+The PostgreSQL project regularly issues minor releases to fix reported bugs. Azure Database for PostgreSQL flexible server automatically patches servers with minor releases during the service's monthly deployments.
It is also possible to do in-place major version upgrades by means of the [Major Version Upgrade](./concepts-major-version-upgrade.md) feature. This feature greatly simplifies the upgrade process of an instance from a given major version (PostgreSQL 11, for example) to any higher supported version (like PostgreSQL 16). ## Supportability and retirement policy of the underlying operating system
-Azure Database for PostgreSQL - Flexible Server is a fully managed open-source database. The underlying operating system is an integral part of the service. Microsoft continually works to ensure ongoing security updates and maintenance for security compliance and vulnerability mitigation, regardless of whether it is provided by a third-party or an internal vendor. Automatic upgrades during scheduled maintenance keep your managed database secure, stable, and up-to-date.
+Azure Database for PostgreSQL flexible server is a fully managed open-source database. The underlying operating system is an integral part of the service. Microsoft continually works to ensure ongoing security updates and maintenance for security compliance and vulnerability mitigation, regardless of whether they're provided by a third party or an internal vendor. Automatic upgrades during scheduled maintenance keep your managed database secure, stable, and up-to-date.
## Managing PostgreSQL engine defects Microsoft has a team of committers and contributors who work full time on the open-source Postgres project and are long-term members of the community. Our contributions include, but aren't limited to, features, performance enhancements, bug fixes, and security patches. Our open-source team also incorporates feedback from our Azure fleet (and customers) when prioritizing work; however, keep in mind that the Postgres project has its own independent contribution guidelines, review process, and release schedule.
-When a defect with PostgreSQL engine is identified, Microsoft will take immediate action to mitigate the issue. If it requires code change, Microsoft will fix the defect to address the production issue, if possible, and work with the community to incorporate the fix as quickly as possible.
+When a defect with the PostgreSQL engine is identified, Microsoft takes immediate action to mitigate the issue. If it requires a code change, Microsoft fixes the defect to address the production issue, if possible, and works with the community to incorporate the fix as quickly as possible.
<!--
postgresql Concepts Troubleshooting Guides https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-troubleshooting-guides.md
Title: Troubleshooting Guides for Azure Database for PostgreSQL - Flexible Server
+ Title: Troubleshooting guides
description: Troubleshooting guides for Azure Database for PostgreSQL - Flexible Server.
Last updated 03/21/2023
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-The Troubleshooting Guides for Azure Database for PostgreSQL - Flexible Server are designed to help you quickly identify and resolve common challenges you may encounter while using Azure Database for PostgreSQL. Integrated directly into the Azure portal, the Troubleshooting Guides provide actionable insights, recommendations, and data visualizations to assist you in diagnosing and addressing issues related to common performance problems. With these guides at your disposal, you'll be better equipped to optimize your PostgreSQL experience on Azure and ensure a smoother, more efficient database operation.
+The troubleshooting guides for Azure Database for PostgreSQL flexible server are designed to help you quickly identify and resolve common challenges you may encounter while using Azure Database for PostgreSQL flexible server. Integrated directly into the Azure portal, the troubleshooting guides provide actionable insights, recommendations, and data visualizations to assist you in diagnosing and addressing issues related to common performance problems. With these guides at your disposal, you'll be better equipped to optimize your Azure Database for PostgreSQL flexible server experience and ensure a smoother, more efficient database operation.
## Overview
-The troubleshooting guides available in Azure Database for PostgreSQL - Flexible Server provide you with the necessary tools to analyze and troubleshoot prevalent performance issues,
+The troubleshooting guides available in Azure Database for PostgreSQL flexible server provide you with the necessary tools to analyze and troubleshoot prevalent performance issues,
including: * High CPU Usage, * High Memory Usage,
including:
:::image type="content" source="./media/concepts-troubleshooting-guides/overview-troubleshooting-guides.jpg" alt-text="Screenshot of multiple Troubleshooting Guides combined." lightbox="./media/concepts-troubleshooting-guides/overview-troubleshooting-guides.jpg"::: Each guide is packed with multiple charts, guidelines, recommendations tailored to the specific problem you may encounter, which can help expedite the troubleshooting process.
-The troubleshooting guides are directly integrated into the Azure portal and your Azure Database for PostgreSQL - Flexible Server, making them convenient and easy to use.
+The troubleshooting guides are directly integrated into the Azure portal and your Azure Database for PostgreSQL flexible server instance, making them convenient and easy to use.
The troubleshooting guides consist of the following components:
postgresql Concepts Version Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-version-policy.md
+
+ Title: Versioning policy
+description: Describes the policy around Postgres major and minor versions in Azure Database for PostgreSQL - Single Server and Azure Database for PostgreSQL - Flexible Server.
+++++ Last updated : 09/14/2022+++
+# Azure Database for PostgreSQL - Flexible Server versioning policy
+++
+This page describes the Azure Database for PostgreSQL flexible server versioning policy, and is applicable to these deployment modes:
+
+* Azure Database for PostgreSQL single server
+* Azure Database for PostgreSQL flexible server
+
+## Supported PostgreSQL versions
+
+Azure Database for PostgreSQL flexible server supports the following database versions.
+
+| Version | Azure Database for PostgreSQL single server | Azure Database for PostgreSQL flexible server |
+| -- | :: | :-: |
+| PostgreSQL 15 | | X |
+| PostgreSQL 14 | | X |
+| PostgreSQL 13 | | X |
+| PostgreSQL 12 | | X |
+| PostgreSQL 11 | X | X |
+| *PostgreSQL 10 (retired)* | See [policy](#retired-postgresql-engine-versions-not-supported-in-azure-database-for-postgresql-flexible-server) | |
+| *PostgreSQL 9.6 (retired)* | See [policy](#retired-postgresql-engine-versions-not-supported-in-azure-database-for-postgresql-flexible-server) | |
+| *PostgreSQL 9.5 (retired)* | See [policy](#retired-postgresql-engine-versions-not-supported-in-azure-database-for-postgresql-flexible-server) | |
+
+## Major version support
+
+Each major version of PostgreSQL will be supported by Azure Database for PostgreSQL flexible server from the date on which Azure begins supporting the version until the version is retired by the PostgreSQL community. Refer to [PostgreSQL community versioning policy](https://www.postgresql.org/support/versioning/).
+
+## Minor version support
+
+Azure Database for PostgreSQL flexible server automatically performs minor version upgrades to the Azure preferred PostgreSQL version as part of periodic maintenance.
+
+## Major version retirement policy
+
+The table below provides the retirement details for PostgreSQL major versions. The dates follow the [PostgreSQL community versioning policy](https://www.postgresql.org/support/versioning/).
+
+| Version | What's New | Azure support start date | Retirement date (Azure)|
+| - | - | | - |
+| [PostgreSQL 9.5 (retired)](https://www.postgresql.org/about/news/postgresql-132-126-1111-1016-9621-and-9525-released-2165/)| [Features](https://www.postgresql.org/docs/9.5/release-9-5.html) | April 18, 2018 | February 11, 2021
+| [PostgreSQL 9.6 (retired)](https://www.postgresql.org/about/news/postgresql-96-released-1703/) | [Features](https://wiki.postgresql.org/wiki/NewIn96) | April 18, 2018 | November 11, 2021
+| [PostgreSQL 10 (retired)](https://www.postgresql.org/about/news/postgresql-10-released-1786/) | [Features](https://wiki.postgresql.org/wiki/New_in_postgres_10) | June 4, 2018 | November 10, 2022
+| [PostgreSQL 11](https://www.postgresql.org/about/news/postgresql-11-released-1894/) | [Features](https://www.postgresql.org/docs/11/release-11.html) | July 24, 2019 | November 9, 2024 |
+| [PostgreSQL 12](https://www.postgresql.org/about/news/postgresql-12-released-1976/) | [Features](https://www.postgresql.org/docs/12/release-12.html) | Sept 22, 2020 | November 14, 2024
+| [PostgreSQL 13](https://www.postgresql.org/about/news/postgresql-13-released-2077/) | [Features](https://www.postgresql.org/docs/13/release-13.html) | May 25, 2021 | November 13, 2025
+| [PostgreSQL 14](https://www.postgresql.org/about/news/postgresql-14-released-2318/) | [Features](https://www.postgresql.org/docs/14/release-14.html) | June 29, 2022 | November 12, 2026
+| [PostgreSQL 15](https://www.postgresql.org/about/news/postgresql-15-released-2526/) | [Features](https://www.postgresql.org/docs/15/release-15.html) | May 15, 2023 | November 11, 2027
+
+## PostgreSQL 11 support in Azure Database for PostgreSQL single server and Azure Database for PostgreSQL flexible server
+
+Azure is extending support for PostgreSQL 11 in Azure Database for PostgreSQL single server and Azure Database for PostgreSQL flexible server by one more year until **November 9, 2024**.
+
+- You will be able to create and use your PostgreSQL 11 servers until November 9, 2024 without any restrictions. This extended support is provided to give you more time to plan and [migrate to Azure Database for PostgreSQL flexible server](../migrate/concepts-single-to-flexible.md) for higher PostgreSQL versions.
+- Until November 9, 2023, Azure will continue to update your PostgreSQL 11 server with PostgreSQL community provided minor versions.
+- Between November 9, 2023 and November 9, 2024, you can continue to use your PostgreSQL 11 servers and create new Azure Database for PostgreSQL flexible server instances without any restrictions. However, other retired PostgreSQL engine [restrictions](#retired-postgresql-engine-versions-not-supported-in-azure-database-for-postgresql-flexible-server) apply.
+- Beyond November 9, 2024, all retired PostgreSQL engine [restrictions](#retired-postgresql-engine-versions-not-supported-in-azure-database-for-postgresql-flexible-server) apply.
+
+## Retired PostgreSQL engine versions not supported in Azure Database for PostgreSQL flexible server
+
+You might continue to run the retired version in Azure Database for PostgreSQL flexible server. However, note the following restrictions after the retirement date for each PostgreSQL database version:
+- As the community won't be releasing any further bug fixes or security fixes, Azure Database for PostgreSQL flexible server won't patch the retired database engine for any bugs or security issues, or otherwise take security measures with regard to the retired database engine. You might experience security vulnerabilities or other issues as a result. However, Azure will continue to perform periodic maintenance and patching for the host, OS, containers, and any other service-related components.
+- If any support issue you might experience relates to the PostgreSQL engine itself, as the community no longer provides the patches, we might not be able to provide you with support. In such cases, you have to upgrade your database to one of the supported versions.
+- You won't be able to create new database servers for the retired version. However, you'll be able to perform point-in-time recoveries and create read replicas for your existing servers.
+- New service capabilities developed by Azure Database for PostgreSQL flexible server might only be available to supported database server versions.
+- Uptime SLAs will apply solely to Azure Database for PostgreSQL flexible server service-related issues and not to any downtime caused by database engine-related bugs.
+- In the extreme event of a serious threat to the service caused by the PostgreSQL database engine vulnerability identified in the retired database version, Azure might choose to stop your database server to secure the service. In such case, you'll be notified to upgrade the server before bringing the server online.
+
+
+## PostgreSQL version syntax
+
+Before PostgreSQL version 10, the [PostgreSQL versioning policy](https://www.postgresql.org/support/versioning/) considered a _major version_ upgrade to be an increase in the first _or_ second number. For example, 9.5 to 9.6 was considered a _major_ version upgrade. As of version 10, only a change in the first number is considered a major version upgrade. For example, 10.0 to 10.1 is a _minor_ release upgrade. Version 10 to 11 is a _major_ version upgrade.
+
+## Next steps
+
+- See Azure Database for PostgreSQL single server [supported versions](../single-server/concepts-supported-versions.md).
+- See Azure Database for PostgreSQL flexible server [supported versions](concepts-supported-versions.md).
postgresql Concepts Workbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-workbooks.md
Title: Monitor Azure Database for PostgreSQL Flexible Server by using Azure Monitor workbooks
-description: This article describes how you can monitor Azure Database for PostgreSQL Flexible Server by using Azure Monitor workbooks.
+ Title: Monitor by using Azure Monitor workbooks
+description: This article describes how you can monitor Azure Database for PostgreSQL - Flexible Server by using Azure Monitor workbooks.
-# Monitor Azure Database for PostgreSQL Flexible Server by using Azure Monitor workbooks
+# Monitor Azure Database for PostgreSQL - Flexible Server by using Azure Monitor workbooks
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-Azure Database for PostgreSQL Flexible Server is now integrated with Azure Monitor workbooks. Workbooks give you a flexible canvas for analyzing data and creating rich visual reports within the Azure portal. Workbooks allow you to tap into multiple data sources across Azure and combine them into unified interactive experiences. Workbook templates serve as curated reports designed for flexible reuse by various users and teams.
+Azure Database for PostgreSQL flexible server is now integrated with Azure Monitor workbooks. Workbooks give you a flexible canvas for analyzing data and creating rich visual reports within the Azure portal. Workbooks allow you to tap into multiple data sources across Azure and combine them into unified interactive experiences. Workbook templates serve as curated reports designed for flexible reuse by various users and teams.
When you open a template, you create a transient workbook that's populated with the contents of the template. With this integration, the server links to workbooks and a few sample templates, which can help you monitor the service at scale. You can edit these templates, customize them to your requirements, and pin them to the dashboard to create a focused and organized view of Azure resources.
-In this article, you learn about the various workbook templates available for your flexible server.
+In this article, you learn about the various workbook templates available for your Azure Database for PostgreSQL flexible server instance.
-Azure Database for PostgreSQL Flexible Server has two available templates:
+Azure Database for PostgreSQL flexible server has two available templates:
- **Overview**: Displays an instance summary and top-level metrics to help you visualize and understand the resource utilization on your server. This template displays the following views:
Azure Database for PostgreSQL Flexible Server has two available templates:
* Performance Metrics * Storage Metrics -- **Enhanced Metrics**: Displays a summary of Enhanced Metrics for Azure Database for PostgreSQL Flexible Server with more fine-grained database monitoring. To enable these metrics, please enable the server parameters `metrics.collector_database_activity` and `metrics.autovacuum_diagnostics`. These parameters are dynamic and don't require a server restart. For more information, see [Enhanced Metrics](./concepts-monitoring.md#enhanced-metrics). This template displays the following views:
+- **Enhanced Metrics**: Displays a summary of Enhanced Metrics for Azure Database for PostgreSQL flexible server with more fine-grained database monitoring. To enable these metrics, enable the server parameters `metrics.collector_database_activity` and `metrics.autovacuum_diagnostics`. These parameters are dynamic and don't require a server restart. For more information, see [Enhanced Metrics](./concepts-monitoring.md#enhanced-metrics). This template displays the following views:
* Activity Metrics * Database Metrics
You can also edit and customize these templates according to your requirements.
## Access the workbook templates
-To view the templates in the Azure portal, go to the **Monitoring** pane for Azure Database for PostgreSQL Flexible Server, and then select **Workbooks**.
+To view the templates in the Azure portal, go to the **Monitoring** pane for Azure Database for PostgreSQL flexible server, and then select **Workbooks**.
:::image type="content" source="./media/concepts-workbooks/monitor-workbooks-all.png" alt-text="Screenshot showing the Overview, Enhanced Metrics templates on the Workbooks pane." lightbox="media/concepts-workbooks/monitor-workbooks-all.png":::
To view the templates in the Azure portal, go to the **Monitoring** pane for Azu
- [Azure workbooks access control](../../azure-monitor/visualize/workbooks-overview.md#access-control) - [Azure workbooks visualization options](../../azure-monitor/visualize/workbooks-visualizations.md)-- [Enhanced Metrics](concepts-monitoring.md#enhanced-metrics)-- [Autovacuum Metrics](concepts-monitoring.md#autovacuum-metrics)
+- [Enhanced metrics](concepts-monitoring.md#enhanced-metrics)
+- [Autovacuum metrics](concepts-monitoring.md#autovacuum-metrics)
postgresql Connect Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/connect-azure-cli.md
Title: 'Quickstart: Connect using Azure CLI - Azure Database for PostgreSQL - Flexible Server'
+ Title: 'Quickstart: Connect using Azure CLI'
description: This quickstart provides several ways to connect with Azure CLI with Azure Database for PostgreSQL - Flexible Server.
Last updated 11/30/2021
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-This quickstart demonstrates how to connect to an Azure Database for PostgreSQL Flexible Server using Azure CLI with ```az postgres flexible-server connect``` and execute single query or sql file with ```az postgres flexible-server execute``` command. This command allows you test connectivity to your database server and run queries. You can also run multiple queries using the interactive mode.
+This quickstart demonstrates how to connect to an Azure Database for PostgreSQL flexible server instance using Azure CLI with `az postgres flexible-server connect` and execute a single query or a SQL file with the `az postgres flexible-server execute` command. These commands allow you to test connectivity to your database server and run queries. You can also run multiple queries using the interactive mode.
## Prerequisites - An Azure account. If you don't have one, [get a free trial](https://azure.microsoft.com/free/). - Install [Azure CLI](/cli/azure/install-azure-cli) latest version (2.20.0 or above)-- Log in using Azure CLI with ```az login``` command -- Turn on parameter persistence with ```az config param-persist on```. Parameter persistence will help you use local context without having to repeat numerous arguments like resource group or location.
+- Log in using Azure CLI with `az login` command
+- Turn on parameter persistence with `az config param-persist on`. Parameter persistence will help you use local context without having to repeat numerous arguments like resource group or location.
-## Create a PostgreSQL Flexible Server
+## Create an Azure Database for PostgreSQL flexible server instance
-The first thing we'll create is a managed PostgreSQL server. In [Azure Cloud Shell](https://shell.azure.com/), run the following script and make a note of the **server name**, **username** and **password** generated from this command.
+The first thing to create is a managed Azure Database for PostgreSQL flexible server instance. In [Azure Cloud Shell](https://shell.azure.com/), run the following script and make a note of the **server name**, **username** and **password** generated from this command.
```azurecli az postgres flexible-server create --public-access <your-ip-address>
az postgres flexible-server create --public-access <your-ip-address>
You can provide more arguments for this command to customize it. See all arguments for [az postgres flexible-server create](/cli/azure/postgres/flexible-server#az-postgres-flexible-server-create). ## View all the arguments
-You can view all the arguments for this command with ```--help``` argument.
+You can view all the arguments for this command with `--help` argument.
```azurecli az postgres flexible-server connect --help
test 200
``` ## Run SQL File
-You can execute a sql file with the command using ```--file-path``` argument, ```-f```.
+You can execute a sql file with the command using `--file-path` argument, `-f`.
```azurecli az postgres flexible-server execute -n <server-name> -u <username> -p "<password>" -d <database-name> --file-path "<file-path>"
postgresql Connect Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/connect-csharp.md
Title: 'Quickstart: Connect with C# - Azure Database for PostgreSQL - Flexible Server'
+ Title: 'Quickstart: Connect with C#'
description: "This quickstart provides a C# (.NET) code sample you can use to connect and query data from Azure Database for PostgreSQL - Flexible Server."
Last updated 11/30/2021
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-This quickstart demonstrates how to connect to an Azure Database for PostgreSQL using a C# application. It shows how to use SQL statements to query, insert, update, and delete data in the database. The steps in this article assume that you are familiar with developing using C#, and that you are new to working with Azure Database for PostgreSQL.
+This quickstart demonstrates how to connect to an Azure Database for PostgreSQL flexible server instance using a C# application. It shows how to use SQL statements to query, insert, update, and delete data in the database. The steps in this article assume that you are familiar with developing using C#, and that you are new to working with Azure Database for PostgreSQL flexible server.
## Prerequisites For this quickstart you need: - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).-- Create an Azure Database for PostgreSQL Flexible server using [Azure portal](./quickstart-create-server-portal.md) <br/> or [Azure CLI](./quickstart-create-server-cli.md) if you do not have one.
+- Create an Azure Database for PostgreSQL flexible server instance using [Azure portal](./quickstart-create-server-portal.md) <br/> or [Azure CLI](./quickstart-create-server-cli.md) if you do not have one.
- Use the empty *postgres* database available on the server or create a [new database](./quickstart-create-server-portal.md#connect-to-the-postgresql-database-using-psql). - Install the [.NET SDK for your platform](https://dotnet.microsoft.com/download) (Windows, Ubuntu Linux, or macOS). - Install [Visual Studio](https://www.visualstudio.com/downloads/) to build your project.
For this quickstart you need:
## Get connection information
-Get the connection information needed to connect to the Azure Database for PostgreSQL. You need the fully qualified server name and login credentials.
+Get the connection information needed to connect to the Azure Database for PostgreSQL flexible server instance. You need the fully qualified server name and login credentials.
1. Log in to the [Azure portal](https://portal.azure.com/). 2. From the left-hand menu in Azure portal, click **All resources**, and then search for the server you have created (such as **mydemoserver**). 3. Click the server name. 4. From the server's **Overview** panel, make a note of the **Server name** and **Server admin login name**. If you forget your password, you can also reset the password from this panel.
- :::image type="content" source="./media/connect-csharp/1-connection-string.png" alt-text="Azure Database for PostgreSQL server name":::
+ :::image type="content" source="./media/connect-csharp/1-connection-string.png" alt-text="Azure Database for PostgreSQL flexible server instance name.":::
## Step 1: Connect and insert data Use the following code to connect and load the data using **CREATE TABLE** and **INSERT INTO** SQL statements. The code uses NpgsqlCommand class with method:-- [Open()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlConnection.html#Npgsql_NpgsqlConnection_Open) to establish a connection to the PostgreSQL database.
+- [Open()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlConnection.html#Npgsql_NpgsqlConnection_Open) to establish a connection to the Azure Database for PostgreSQL flexible server database.
- [CreateCommand()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlConnection.html#Npgsql_NpgsqlConnection_CreateCommand) sets the CommandText property. - [ExecuteNonQuery()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlCommand.html#Npgsql_NpgsqlCommand_ExecuteNonQuery) method to run the database commands.
namespace Driver
## Step 3: Update data Use the following code to connect and update the data using an **UPDATE** SQL statement. The code uses NpgsqlCommand class with method:-- [Open()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlConnection.html#Npgsql_NpgsqlConnection_Open) to establish a connection to PostgreSQL.
+- [Open()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlConnection.html#Npgsql_NpgsqlConnection_Open) to establish a connection to Azure Database for PostgreSQL flexible server.
- [CreateCommand()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlConnection.html#Npgsql_NpgsqlConnection_CreateCommand), sets the CommandText property. - [ExecuteNonQuery()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlCommand.html#Npgsql_NpgsqlCommand_ExecuteNonQuery) method to run the database commands.
namespace Driver
Use the following code to connect and delete data using a **DELETE** SQL statement.
-The code uses NpgsqlCommand class with method [Open()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlConnection.html#Npgsql_NpgsqlConnection_Open) to establish a connection to the PostgreSQL database. Then, the code uses the [CreateCommand()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlConnection.html#Npgsql_NpgsqlConnection_CreateCommand) method, sets the CommandText property, and calls the method [ExecuteNonQuery()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlCommand.html#Npgsql_NpgsqlCommand_ExecuteNonQuery) to run the database commands.
+The code uses the NpgsqlCommand class with the method [Open()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlConnection.html#Npgsql_NpgsqlConnection_Open) to establish a connection to the Azure Database for PostgreSQL flexible server database. Then, the code uses the [CreateCommand()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlConnection.html#Npgsql_NpgsqlConnection_CreateCommand) method, sets the CommandText property, and calls the method [ExecuteNonQuery()](https://www.npgsql.org/doc/api/Npgsql.NpgsqlCommand.html#Npgsql_NpgsqlCommand_ExecuteNonQuery) to run the database commands.
> [!IMPORTANT] > Replace the Host, DBName, User, and Password parameters with the values that you specified when you created the server and database.
postgresql Connect Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/connect-java.md
Title: 'Quickstart: Use Java and JDBC with Azure Database for PostgreSQL Flexible Server'
-description: In this quickstart, you learn how to use Java and JDBC with an Azure Database for PostgreSQL Flexible Server.
+ Title: 'Quickstart: Use Java and JDBC'
+description: In this quickstart, you learn how to use Java and JDBC with an Azure Database for PostgreSQL - Flexible Server instance.
ms.devlang: java
Last updated 11/07/2022
-# Quickstart: Use Java and JDBC with Azure Database for PostgreSQL Flexible Server
+# Quickstart: Use Java and JDBC with Azure Database for PostgreSQL - Flexible Server
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-This article demonstrates creating a sample application that uses Java and [JDBC](https://en.wikipedia.org/wiki/Java_Database_Connectivity) to store and retrieve information in [Azure Database for PostgreSQL Flexible Server](./index.yml).
+This article demonstrates creating a sample application that uses Java and [JDBC](https://en.wikipedia.org/wiki/Java_Database_Connectivity) to store and retrieve information in [Azure Database for PostgreSQL flexible server](./index.yml).
JDBC is the standard Java API to connect to traditional relational databases.
export CURRENT_USERNAME=$(az ad signed-in-user show --query userPrincipalName -o
Replace the placeholders with the following values, which are used throughout this article: -- `<YOUR_DATABASE_SERVER_NAME>`: The name of your PostgreSQL server, which should be unique across Azure.-- `<YOUR_DATABASE_NAME>`: The database name of the PostgreSQL server, which should be unique within Azure.-- `<YOUR_AZURE_REGION>`: The Azure region you'll use. You can use `eastus` by default, but we recommend that you configure a region closer to where you live. You can see the full list of available regions by entering `az account list-locations`.-- `<YOUR_POSTGRESQL_AD_NON_ADMIN_USERNAME>`: The username of your PostgreSQL database server. Make ensure the username is a valid user in your Microsoft Entra tenant.
+- `<YOUR_DATABASE_SERVER_NAME>`: The name of your Azure Database for PostgreSQL flexible server instance, which should be unique across Azure.
+- `<YOUR_DATABASE_NAME>`: The database name of the Azure Database for PostgreSQL flexible server instance, which should be unique within Azure.
+- `<YOUR_AZURE_REGION>`: The Azure region to use. You can use `eastus` by default, but we recommend that you configure a region closer to where you live. You can see the full list of available regions by entering `az account list-locations`.
+- `<YOUR_POSTGRESQL_AD_NON_ADMIN_USERNAME>`: The username of your Azure Database for PostgreSQL flexible server instance. Make sure the username is a valid user in your Microsoft Entra tenant.
- `<YOUR_LOCAL_IP_ADDRESS>`: The IP address of your local computer, from which you'll run your Java application. One convenient way to find it is to open [whatismyip.akamai.com](http://whatismyip.akamai.com/). > [!IMPORTANT]
export AZ_LOCAL_IP_ADDRESS=<YOUR_LOCAL_IP_ADDRESS>
Replace the placeholders with the following values, which are used throughout this article: -- `<YOUR_DATABASE_SERVER_NAME>`: The name of your PostgreSQL server, which should be unique across Azure.-- `<YOUR_DATABASE_NAME>`: The database name of the PostgreSQL server, which should be unique within Azure.-- `<YOUR_AZURE_REGION>`: The Azure region you'll use. You can use `eastus` by default, but we recommend that you configure a region closer to where you live. You can see the full list of available regions by entering `az account list-locations`.-- `<YOUR_POSTGRESQL_ADMIN_PASSWORD>` and `<YOUR_POSTGRESQL_NON_ADMIN_PASSWORD>`: The password of your PostgreSQL database server. That password should have a minimum of eight characters. The characters should be from three of the following categories: English uppercase letters, English lowercase letters, numbers (0-9), and non-alphanumeric characters (!, $, #, %, and so on).
+- `<YOUR_DATABASE_SERVER_NAME>`: The name of your Azure Database for PostgreSQL flexible server instance, which should be unique across Azure.
+- `<YOUR_DATABASE_NAME>`: The database name of the Azure Database for PostgreSQL flexible server instance, which should be unique within Azure.
+- `<YOUR_AZURE_REGION>`: The Azure region to use. You can use `eastus` by default, but we recommend that you configure a region closer to where you live. You can see the full list of available regions by entering `az account list-locations`.
+- `<YOUR_POSTGRESQL_ADMIN_PASSWORD>` and `<YOUR_POSTGRESQL_NON_ADMIN_PASSWORD>`: The password of your Azure Database for PostgreSQL flexible server instance. That password should have a minimum of eight characters. The characters should be from three of the following categories: English uppercase letters, English lowercase letters, numbers (0-9), and non-alphanumeric characters (!, $, #, %, and so on).
- `<YOUR_LOCAL_IP_ADDRESS>`: The IP address of your local computer, from which you'll run your Java application. One convenient way to find it is to open [whatismyip.akamai.com](http://whatismyip.akamai.com/).
az group create \
--output tsv ```
-## Create an Azure Database for PostgreSQL instance
+## Create an Azure Database for PostgreSQL flexible server instance
The following sections describe how to create and configure your database instance.
-### Create a PostgreSQL server and set up admin user
+### Create an Azure Database for PostgreSQL flexible server instance and set up admin user
-The first thing we'll create is a managed PostgreSQL server.
+The first thing you create is a managed Azure Database for PostgreSQL flexible server instance.
> [!NOTE]
-> You can read more detailed information about creating PostgreSQL servers in [Create an Azure Database for PostgreSQL server by using the Azure portal](./quickstart-create-server-portal.md).
+> You can read more detailed information about creating Azure Database for PostgreSQL flexible server instances in [Create an Azure Database for PostgreSQL flexible server instance by using the Azure portal](./quickstart-create-server-portal.md).
#### [Passwordless (Recommended)](#tab/passwordless)
az postgres flexible-server create \
To set up a Microsoft Entra administrator after creating the server, follow the steps in [Manage Microsoft Entra roles in Azure Database for PostgreSQL - Flexible Server](how-to-manage-azure-ad-users.md). > [!IMPORTANT]
-> When setting up an administrator, a new user with full administrator privileges is added to the PostgreSQL Flexible Server's Azure database. You can create multiple Microsoft Entra administrators per PostgreSQL Flexible Server.
+> When setting up an administrator, a new user with full administrator privileges is added to the Azure Database for PostgreSQL flexible server instance's Azure database. You can create multiple Microsoft Entra administrators per Azure Database for PostgreSQL flexible server instance.
#### [Password](#tab/password)
az postgres flexible-server create \
--output tsv ```
-This command creates a small PostgreSQL server.
+This command creates a small Azure Database for PostgreSQL flexible server instance.
[Having any issues? Let us know.](https://github.com/MicrosoftDocs/azure-docs/issues)
-### Configure a firewall rule for your PostgreSQL server
+### Configure a firewall rule for your Azure Database for PostgreSQL flexible server instance
-Azure Database for PostgreSQL instances are secured by default. They have a firewall that doesn't allow any incoming connection. To be able to use your database, you need to add a firewall rule that will allow the local IP address to access the database server.
+Azure Database for PostgreSQL flexible server instances are secured by default. They have a firewall that doesn't allow any incoming connection. To be able to use your database, you need to add a firewall rule that will allow the local IP address to access the database server.
Because you configured your local IP address at the beginning of this article, you can open the server's firewall by running the following command:
az postgres flexible-server firewall-rule create \
--output tsv ```
-If you're connecting to your PostgreSQL server from Windows Subsystem for Linux (WSL) on a Windows computer, you'll need to add the WSL host ID to your firewall.
+If you're connecting to your Azure Database for PostgreSQL flexible server instance from Windows Subsystem for Linux (WSL) on a Windows computer, you'll need to add the WSL host ID to your firewall.
Obtain the IP address of your host machine by running the following command in WSL:
az postgres flexible-server firewall-rule create \
--output tsv ```
-### Configure a PostgreSQL database
+### Configure an Azure Database for PostgreSQL flexible server database
Create a new database using the following command:
az postgres flexible-server db create \
--output tsv ```
-### Create a PostgreSQL non-admin user and grant permission
+### Create an Azure Database for PostgreSQL flexible server non-admin user and grant permission
Next, create a non-admin user and grant all permissions to the database. > [!NOTE]
-> You can read more detailed information about managing PostgreSQL users in [Manage Microsoft Entra users - Azure Database for PostgreSQL - Flexible Server](how-to-manage-azure-ad-users.md).
+> You can read more detailed information about managing Azure Database for PostgreSQL flexible server users in [Manage Microsoft Entra users - Azure Database for PostgreSQL - Flexible Server](how-to-manage-azure-ad-users.md).
#### [Passwordless (Recommended)](#tab/passwordless)
This file is an [Apache Maven](https://maven.apache.org/) that configures our pr
- Java 8 - A recent PostgreSQL driver for Java
-### Prepare a configuration file to connect to Azure Database for PostgreSQL
+### Prepare a configuration file to connect to Azure Database for PostgreSQL flexible server
Create a *src/main/resources/application.properties* file, then add the following contents:
EOF
> [!NOTE]
-> The configuration property `url` has `?serverTimezone=UTC` appended tell the JDBC driver to use TLS ([Transport Layer Security](https://en.wikipedia.org/wiki/Transport_Layer_Security)) when connecting to the database. It is mandatory to use TLS with Azure Database for PostgreSQL, and it is a good security practice.
+> The configuration property `url` has `?serverTimezone=UTC` appended to tell the JDBC driver to use TLS ([Transport Layer Security](https://en.wikipedia.org/wiki/Transport_Layer_Security)) when connecting to the database. It's mandatory to use TLS with Azure Database for PostgreSQL flexible server, and it's a good security practice.
### Create an SQL file to generate the database schema
CREATE TABLE todo (id SERIAL PRIMARY KEY, description VARCHAR(255), details VARC
### Connect to the database
-Next, add the Java code that will use JDBC to store and retrieve data from your PostgreSQL server.
+Next, add the Java code that will use JDBC to store and retrieve data from your Azure Database for PostgreSQL flexible server instance.
Create a *src/main/java/DemoApplication.java* file and add the following contents:
public class DemoApplication {
[Having any issues? Let us know.](https://github.com/MicrosoftDocs/azure-docs/issues)
-This Java code will use the *application.properties* and the *schema.sql* files that we created earlier, in order to connect to the PostgreSQL server and create a schema that will store our data.
+This Java code will use the *application.properties* and the *schema.sql* files that we created earlier, in order to connect to the Azure Database for PostgreSQL flexible server instance and create a schema that will store our data.
In this file, you can see that the methods to insert, read, update, and delete data are commented out: we'll code those methods in the rest of this article, and you'll be able to uncomment them one after the other.
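To give a sense of the overall structure, here's a minimal sketch of what such a class might look like. The package name, the property keys (`url`, `user`, `password`), and the log messages are illustrative assumptions, not necessarily the article's exact listing:

```java
// Sketch only: names and log messages are illustrative, not the article's exact listing.
package com.example.demo;

import java.sql.*;
import java.util.*;
import java.util.logging.Logger;

public class DemoApplication {

    private static final Logger log = Logger.getLogger(DemoApplication.class.getName());

    public static void main(String[] args) throws Exception {
        log.info("Loading application properties");
        Properties properties = new Properties();
        properties.load(
            DemoApplication.class.getClassLoader().getResourceAsStream("application.properties"));

        log.info("Connecting to the database");
        // Assumes the properties file defines url, user, and password entries.
        Connection connection = DriverManager.getConnection(properties.getProperty("url"), properties);
        log.info("Database connection test: " + connection.getCatalog());

        log.info("Create database schema");
        Scanner scanner = new Scanner(
            DemoApplication.class.getClassLoader().getResourceAsStream("schema.sql"));
        Statement statement = connection.createStatement();
        while (scanner.hasNextLine()) {
            String line = scanner.nextLine();
            if (!line.isBlank()) {
                statement.execute(line);
            }
        }

        // The insert, read, update, and delete methods coded later in this article
        // are called from here, one after the other, as you uncomment them.

        log.info("Closing database connection");
        connection.close();
    }
}
```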
You can now execute this main class with your favorite tool:
- Using your IDE, you should be able to right-click on the *DemoApplication* class and execute it. - Using Maven, you can run the application by executing: `mvn exec:java -Dexec.mainClass="com.example.demo.DemoApplication"`.
-The application should connect to the Azure Database for PostgreSQL, create a database schema, and then close the connection, as you should see in the console logs:
+The application should connect to the Azure Database for PostgreSQL flexible server instance, create a database schema, and then close the connection, as you should see in the console logs:
```output [INFO ] Loading application properties
public class Todo {
This class is a domain model mapped to the `todo` table that you created when executing the *schema.sql* script.
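As a rough illustration, a minimal version of that domain class might look like the following. The field list (`id`, `description`, `details`) is an assumption based on the truncated `CREATE TABLE` statement shown earlier; the full schema might include more columns:

```java
// Sketch only: fields assumed from the truncated CREATE TABLE statement.
package com.example.demo;

public class Todo {

    private Long id;
    private String description;
    private String details;

    public Todo() {
    }

    public Todo(Long id, String description, String details) {
        this.id = id;
        this.description = description;
        this.details = details;
    }

    public Long getId() { return id; }
    public void setId(Long id) { this.id = id; }

    public String getDescription() { return description; }
    public void setDescription(String description) { this.description = description; }

    public String getDetails() { return details; }
    public void setDetails(String details) { this.details = details; }

    @Override
    public String toString() {
        return "Todo{id=" + id + ", description='" + description + "', details='" + details + "'}";
    }
}
```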
-### Insert data into Azure Database for PostgreSQL
+### Insert data into Azure Database for PostgreSQL flexible server
In the *src/main/java/DemoApplication.java* file, after the main method, add the following method to insert data into the database:
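A minimal sketch of such an insert method follows, assuming the `Todo` fields and the `java.sql.*` import from the earlier sketches; the method name and column list are illustrative, not necessarily the article's exact code:

```java
    // Sketch: inserts one Todo row using a parameterized statement.
    private static void insertData(Todo todo, Connection connection) throws SQLException {
        log.info("Insert data");
        PreparedStatement insertStatement = connection
                .prepareStatement("INSERT INTO todo (id, description, details) VALUES (?, ?, ?);");

        insertStatement.setLong(1, todo.getId());
        insertStatement.setString(2, todo.getDescription());
        insertStatement.setString(3, todo.getDetails());
        insertStatement.executeUpdate();
    }
```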
Executing the main class should now produce the following output:
[INFO ] Closing database connection ```
-### Reading data from Azure Database for PostgreSQL
+### Reading data from Azure Database for PostgreSQL flexible server
Let's read the data previously inserted, to validate that our code works correctly.
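A minimal sketch of a read method, under the same assumptions as the earlier sketches, might look like this:

```java
    // Sketch: reads the first Todo row, if any, and logs it.
    private static Todo readData(Connection connection) throws SQLException {
        log.info("Read data");
        PreparedStatement readStatement = connection.prepareStatement("SELECT * FROM todo;");
        ResultSet resultSet = readStatement.executeQuery();

        if (!resultSet.next()) {
            log.info("There is no data in the database!");
            return null;
        }

        Todo todo = new Todo();
        todo.setId(resultSet.getLong("id"));
        todo.setDescription(resultSet.getString("description"));
        todo.setDetails(resultSet.getString("details"));
        log.info("Data read from the database: " + todo);
        return todo;
    }
```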
Executing the main class should now produce the following output:
[INFO ] Closing database connection ```
-### Updating data in Azure Database for PostgreSQL
+### Updating data in Azure Database for PostgreSQL flexible server
Let's update the data we previously inserted.
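A minimal sketch of an update method, under the same assumptions, might look like this:

```java
    // Sketch: updates the description and details of the row matching the Todo's id.
    private static void updateData(Todo todo, Connection connection) throws SQLException {
        log.info("Update data");
        PreparedStatement updateStatement = connection
                .prepareStatement("UPDATE todo SET description = ?, details = ? WHERE id = ?;");

        updateStatement.setString(1, todo.getDescription());
        updateStatement.setString(2, todo.getDetails());
        updateStatement.setLong(3, todo.getId());
        updateStatement.executeUpdate();
    }
```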
Executing the main class should now produce the following output:
[INFO ] Closing database connection ```
-### Deleting data in Azure Database for PostgreSQL
+### Deleting data in Azure Database for PostgreSQL flexible server
Finally, let's delete the data we previously inserted.
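A minimal sketch of a delete method, under the same assumptions, might look like this:

```java
    // Sketch: deletes the row with the given id.
    private static void deleteData(Long id, Connection connection) throws SQLException {
        log.info("Delete data");
        PreparedStatement deleteStatement = connection.prepareStatement("DELETE FROM todo WHERE id = ?;");
        deleteStatement.setLong(1, id);
        deleteStatement.executeUpdate();
    }
```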
Executing the main class should now produce the following output:
## Clean up resources
-Congratulations! You've created a Java application that uses JDBC to store and retrieve data from Azure Database for PostgreSQL.
+Congratulations! You've created a Java application that uses JDBC to store and retrieve data from Azure Database for PostgreSQL flexible server.
To clean up all resources used during this quickstart, delete the resource group using the following command:
postgresql Connect Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/connect-python.md
Title: 'Quickstart: Connect using Python - Azure Database for PostgreSQL - Flexible Server'
+ Title: 'Quickstart: Connect using Python'
description: This quickstart provides several Python code samples you can use to connect and query data from Azure Database for PostgreSQL - Flexible Server.
Last updated 11/30/2021
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-In this quickstart, you connect to an Azure Database for PostgreSQL - Flexible Server by using Python. You then use SQL statements to query, insert, update, and delete data in the database from Mac, Ubuntu Linux, and Windows platforms.
+In this quickstart, you connect to an Azure Database for PostgreSQL flexible server instance by using Python. You then use SQL statements to query, insert, update, and delete data in the database from Mac, Ubuntu Linux, and Windows platforms.
-This article assumes that you're familiar with developing using Python, but you're new to working with Azure Database for PostgreSQL - Flexible Server.
+This article assumes that you're familiar with developing using Python, but you're new to working with Azure Database for PostgreSQL flexible server.
## Prerequisites * An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).
-* An Azure Database for PostgreSQL - Flexible Server. To create flexible server, refer to [Create an Azure Database for PostgreSQL - Flexible Server using Azure portal](./quickstart-create-server-portal.md).
+* An Azure Database for PostgreSQL flexible server instance. To create an Azure Database for PostgreSQL flexible server instance, refer to [Create an Azure Database for PostgreSQL - Flexible Server instance using Azure portal](./quickstart-create-server-portal.md).
* [Python](https://www.python.org/downloads/) 2.7 or 3.6+. * Latest [pip](https://pip.pypa.io/en/stable/installing/) package installer. ## Preparing your client workstation-- If you created your flexible server with *Private access (VNet Integration)*, you will need to connect to your server from a resource within the same VNet as your server. You can create a virtual machine and add it to the VNet created with your flexible server. Refer to [Create and manage Azure Database for PostgreSQL - Flexible Server virtual network using Azure CLI](./how-to-manage-virtual-network-cli.md).-- If you created your flexible server with *Public access (allowed IP addresses)*, you can add your local IP address to the list of firewall rules on your server. Refer to [Create and manage Azure Database for PostgreSQL - Flexible Server firewall rules using the Azure CLI](./how-to-manage-firewall-cli.md).
+- If you created your Azure Database for PostgreSQL flexible server instance with *Private access (VNet Integration)*, you will need to connect to your server from a resource within the same VNet as your server. You can create a virtual machine and add it to the VNet created with your Azure Database for PostgreSQL flexible server instance. Refer to [Create and manage Azure Database for PostgreSQL - Flexible Server virtual network using Azure CLI](./how-to-manage-virtual-network-cli.md).
+- If you created your Azure Database for PostgreSQL flexible server instance with *Public access (allowed IP addresses)*, you can add your local IP address to the list of firewall rules on your server. Refer to [Create and manage Azure Database for PostgreSQL - Flexible Server firewall rules using the Azure CLI](./how-to-manage-firewall-cli.md).
## Install the Python libraries for PostgreSQL The [psycopg2](https://pypi.python.org/pypi/psycopg2/) module enables connecting to and querying a PostgreSQL database, and is available as a Linux, macOS, or Windows [wheel](https://pythonwheels.com/) package. Install the binary version of the module, including all the dependencies.
The [psycopg2](https://pypi.python.org/pypi/psycopg2/) module enables connecting
To install `psycopg2`, open a terminal or command prompt and run the command `pip install psycopg2`. ## Get database connection information
-Connecting to an Azure Database for PostgreSQL - Flexible Server requires the fully qualified server name and login credentials. You can get this information from the Azure portal.
+Connecting to an Azure Database for PostgreSQL flexible server instance requires the fully qualified server name and login credentials. You can get this information from the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), search for and select your flexible server name.
+1. In the [Azure portal](https://portal.azure.com/), search for and select your Azure Database for PostgreSQL flexible server name.
2. On the server's **Overview** page, copy the fully qualified **Server name** and the **Admin username**. The fully qualified **Server name** is always of the form *\<my-server-name>.postgres.database.azure.com*. You also need your admin password. If you forget it, you can reset it from overview page.
For each code example in this article:
1. Add the code example to the file. In the code, replace: - `<server-name>` and `<admin-username>` with the values you copied from the Azure portal. - `<admin-password>` with your server password.
- - `<database-name>` with the name of your Azure Database for PostgreSQL - Flexible Server database. A default database named *postgres* was automatically created when you created your server. You can rename that database or create a new database by using SQL commands.
+ - `<database-name>` with the name of your Azure Database for PostgreSQL flexible server database. A default database named *postgres* was automatically created when you created your server. You can rename that database or create a new database by using SQL commands.
1. Save the file in your project folder with a *.py* extension, such as *postgres-insert.py*. For Windows, make sure UTF-8 encoding is selected when you save the file. 1. To run the file, change to your project folder in a command-line interface, and type `python` followed by the filename, for example `python postgres-insert.py`. ## Create a table and insert data
-The following code example connects to your Azure Database for PostgreSQL - Flexible Server database using the psycopg2.connect function, and loads data with a SQL **INSERT** statement. The cursor.execute function executes the SQL query against the database.
+The following code example connects to your Azure Database for PostgreSQL flexible server database using the psycopg2.connect function, and loads data with a SQL **INSERT** statement. The cursor.execute function executes the SQL query against the database.
```Python import psycopg2
When the code runs successfully, it produces the following output:
![Command-line output](media/connect-python/2-example-python-output.png) ## Read data
-The following code example connects to your Azure Database for PostgreSQL - Flexible Server database and uses cursor.execute with the SQL **SELECT** statement to read data. This function accepts a query and returns a result set to iterate over by using cursor.fetchall()
+The following code example connects to your Azure Database for PostgreSQL flexible server database and uses cursor.execute with the SQL **SELECT** statement to read data. This function accepts a query and returns a result set to iterate over by using cursor.fetchall().
```Python import psycopg2
conn.close()
``` ## Update data
-The following code example connects to your Azure Database for PostgreSQL - Flexible Server database and uses cursor.execute with the SQL **UPDATE** statement to update data.
+The following code example connects to your Azure Database for PostgreSQL flexible server database and uses cursor.execute with the SQL **UPDATE** statement to update data.
```Python import psycopg2
conn.close()
``` ## Delete data
-The following code example connects to your Azure Database for PostgreSQL - Flexible Server database and uses cursor.execute with the SQL **DELETE** statement to delete an inventory item that you previously inserted.
+The following code example connects to your Azure Database for PostgreSQL flexible server database and uses cursor.execute with the SQL **DELETE** statement to delete an inventory item that you previously inserted.
```Python import psycopg2
postgresql Connect With Power Bi Desktop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/connect-with-power-bi-desktop.md
Title: Connect Azure Database for PostgreSQL - Flexible Server with Power BI
-description: This article shows how to build Power BI reports from data on your Azure Database for PostgreSQL - Flexible Server.
+ Title: Connect with Power BI
+description: This article shows how to build Power BI reports from data on your Azure Database for PostgreSQL - Flexible Server instance.
Last updated 04/26/2023
# Import data from Azure Database for PostgreSQL - Flexible Server in Power BI + > [!NOTE] > This article applies to Power BI Desktop only. Currently Power Query online or Power BI Service is **not supported**.
-With Power BI Desktop, you can visually explore your data through a free-form drag-and-drop canvas, a broad range of modern data visualizations, and an easy-to-use report authoring experiences. You can import directly from the tables or import from a SELECT query. In this quickstart, you'll learn how to connect with Azure Database for PostgreSQL - Flexible Server with Power BI Desktop.
+With Power BI Desktop, you can visually explore your data through a free-form drag-and-drop canvas, a broad range of modern data visualizations, and an easy-to-use report authoring experience. You can import directly from the tables or import from a SELECT query. In this quickstart, you'll learn how to connect to Azure Database for PostgreSQL flexible server with Power BI Desktop.
## Prerequisites
With Power BI Desktop, you can visually explore your data through a free-form dr
## Connect with Power BI desktop from Azure portal
-Get the connection information needed to connect to the Azure Database for PostgreSQL flexible server. You need the fully qualified server name and sign in credentials.
+Get the connection information needed to connect to the Azure Database for PostgreSQL flexible server instance. You need the fully qualified server name and sign in credentials.
1. Sign in to the [Azure portal](https://portal.azure.com/). 2. From the left-hand menu in Azure portal, select **All resources**, and then search for the server you've created (such as **mydemoserverpbi**).
Get the connection information needed to connect to the Azure Database for Postg
:::image type="content" source="./media/connect-with-power-bi-desktop/connector-power-bi-ap-2.png" alt-text="Screenshot of downloading Power BI file for the database."::: 7. Open the file in Power BI desktop.
-8. Switch to **Database** tab to provide the username and password for your database server. **Note Windows authentication is not supported for Azure database for PostgreSQL Flexible Server.**
+8. Switch to the **Database** tab to provide the username and password for your database server. **Note that Windows authentication is not supported for Azure Database for PostgreSQL flexible server.**
- :::image type="content" source="./media/connect-with-power-bi-desktop/connector-power-bi-ap-3.png" alt-text="Screenshot of entering credentials to connect with PostgreSQL database.":::
+ :::image type="content" source="./media/connect-with-power-bi-desktop/connector-power-bi-ap-3.png" alt-text="Screenshot of entering credentials to connect with Azure Database for PostgreSQL flexible server database.":::
9. In **Navigator**, select the data you require, then either load or transform the data.
- :::image type="content" source="./media/connect-with-power-bi-desktop/connector-power-bi-ap-4.png" alt-text="Screenshot of navigator to view PostgreSQL tables.":::
+ :::image type="content" source="./media/connect-with-power-bi-desktop/connector-power-bi-ap-4.png" alt-text="Screenshot of navigator to view Azure Database for PostgreSQL flexible server tables.":::
-## Connect to PostgreSQL database from Power BI Desktop
+## Connect to Azure Database for PostgreSQL flexible server database from Power BI Desktop
-You can connect to Azure database for PostgreSQL Flexible server with Power BI desktop directly without the use of Azure portal.
+You can connect to Azure Database for PostgreSQL flexible server with Power BI Desktop directly, without using the Azure portal.
-### Get the PostgreSQL connection information
+### Get the Azure Database for PostgreSQL flexible server connection information
1. Sign in to the [Azure portal](https://portal.azure.com/). 2. From the left-hand menu in Azure portal, select **All resources**, and then search for the server you've created (such as **mydemoserverpbi**).
You can connect to Azure database for PostgreSQL Flexible server with Power BI d
4. From the server's **Overview** panel, make a note of the **Server name** and **Server admin login name**. If you forget your password, you can also reset the password from this panel. 5. Go to **Databases** page to find the database you want to connect to. Power BI desktop supports adding a connection to a single database and hence providing a database name is required for importing data.
-### Add PostgreSQL connection in Power BI desktop
+### Add an Azure Database for PostgreSQL flexible server connection in Power BI Desktop
1. Select the **PostgreSQL database** option in the connector selection.
You can connect to Azure database for PostgreSQL Flexible server with Power BI d
:::image type="content" source="./media/connect-with-power-bi-desktop/connector-power-bi-ap-6.png" alt-text="Screeshot of Signing in to Power BI.":::
-3. Select the **Database** authentication type and input your PostgreSQL credentials in the **User name** and **Password** boxes. Make sure to select the level to apply your credentials to.
+3. Select the **Database** authentication type and input your Azure Database for PostgreSQL flexible server credentials in the **User name** and **Password** boxes. Make sure to select the level to apply your credentials to.
- :::image type="content" source="./media/connect-with-power-bi-desktop/connector-power-bi-ap-3.png" alt-text="Screenshot of entering credentials to connect with PostgreSQL database.":::
+ :::image type="content" source="./media/connect-with-power-bi-desktop/connector-power-bi-ap-3.png" alt-text="Screenshot of entering credentials to connect with Azure Database for PostgreSQL flexible server database.":::
4. Once you're done, select **OK**. 5. In **Navigator**, select the data you require, then either load or transform the data.
- :::image type="content" source="./media/connect-with-power-bi-desktop/connector-power-bi-ap-4.png" alt-text="Screenshot of navigator to view PostgreSQL tables.":::
+ :::image type="content" source="./media/connect-with-power-bi-desktop/connector-power-bi-ap-4.png" alt-text="Screenshot of navigator to view Azure Database for PostgreSQL flexible server tables.":::
-## Connect to PostgreSQL database from Power Query Online
+## Connect to Azure Database for PostgreSQL flexible server database from Power Query Online
To make the connection, take the following steps:
To make the connection, take the following steps:
:::image type="content" source="./media/connect-with-power-bi-desktop/connector-power-bi-ap-7.png" alt-text="Screenshot of PostgreSQL connection with power query online."::: > [!NOTE]
- >Note that data gateway is not needed for Azure database for PostgreSQL Flexible Server.
+ >Note that a data gateway is not needed for Azure Database for PostgreSQL flexible server.
-3. Select the **Basic** authentication kind and input your PostgreSQL credentials in the **Username** and **Password** boxes.
+3. Select the **Basic** authentication kind and input your Azure Database for PostgreSQL flexible server credentials in the **Username** and **Password** boxes.
4. If your connection isn't encrypted, clear **Use Encrypted Connection**.
postgresql Create Automation Tasks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/create-automation-tasks.md
-# Required metadata
- # For more information, see https://review.learn.microsoft.com/en-us/help/platform/learn-editor-add-metadata?branch=main
- # For valid values of ms.service, ms.prod, and ms.topic, see https://review.learn.microsoft.com/en-us/help/platform/metadata-taxonomies?branch=main
- Title: Stop/Start - Automation tasks - Azure Database for PostgreSQL Flexible Server
-description: This article describes how to Stop/Start Azure Database for PostgreSQL Flexible Server instance using the Automation tasks.
+ Title: Stop/start automation tasks
+description: This article describes how to stop/start an Azure Database for PostgreSQL - Flexible Server instance by using automation tasks.
Last updated 07/13/2023
> This capability is in preview and is subject to the > [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-To help you manage [Azure Database for PostgreSQL Flexible Server](./overview.md) resources more efficiently, you can create automation tasks for your Flexible Server. One example of such tasks can be starting or stopping the PostgreSQL Flexible Server on a predefined schedule. You can set this task to automatically start or stop the server a specific number of times every day, week, or month by setting the Interval and Frequency values on the task's Configure tab. The automation task continues to work until you delete or disable the task.
+To help you manage [Azure Database for PostgreSQL flexible server](./overview.md) resources more efficiently, you can create automation tasks for your Azure Database for PostgreSQL flexible server instance. One example of such tasks can be starting or stopping the Azure Database for PostgreSQL flexible server instance on a predefined schedule. You can set this task to automatically start or stop the server a specific number of times every day, week, or month by setting the Interval and Frequency values on the task's Configure tab. The automation task continues to work until you delete or disable the task.
-In addition, you can also setup automation tasks for other routine tasks such as 'Send monthly cost for resource' and 'Scale PostgreSQL Flexible Server'.
+In addition, you can also set up automation tasks for other routine tasks such as 'Send monthly cost for resource' and 'Scale Azure Database for PostgreSQL flexible server'.
## How do automation tasks differ from Azure Automation?
-Automation tasks are more basic and lightweight than [Azure Automation](../../automation/overview.md). Currently, you can create an automation task only at the Azure resource level. An automation task is actually a logic app resource that runs a workflow, powered by the [*multi-tenant* Azure Logic Apps service](../../logic-apps/logic-apps-overview.md). You can view and edit the underlying workflow by opening the task in the workflow designer after it has completed at least one run.
+Automation tasks are more basic and lightweight than [Azure Automation](../../automation/overview.md). Currently, you can create an automation task only at the Azure resource level. An automation task is actually a logic app resource that runs a workflow, powered by the [multi-tenant Azure Logic Apps service](../../logic-apps/logic-apps-overview.md). You can view and edit the underlying workflow by opening the task in the workflow designer after it has completed at least one run.
In contrast, Azure Automation is a comprehensive cloud-based automation and configuration service that provides consistent management across your Azure and non-Azure environments.
Creating an automation task doesn't immediately incur charges. Underneath, an au
## Prerequisites * An Azure account and subscription.
-* Azure Database for PostgreSQL Flexible Server that you want to manage.
+* An Azure Database for PostgreSQL flexible server instance that you want to manage.
## Create an automation task to stop server
-1. In the [Azure portal](https://portal.azure.com), find the PostgreSQL Flexible Server resource that you want to manage.
+1. In the [Azure portal](https://portal.azure.com), find the Azure Database for PostgreSQL flexible server resource that you want to manage.
1. On the resource navigation menu, in the **Automation** section, select **Tasks (preview)**.
-![Screenshot showing Azure portal and Azure PostgreSQL resource menu with "Tasks (preview)" selected.](media/create-automation-tasks/azure-postgres-menu-automation-section.png)
+![Screenshot showing Azure portal and Azure Database for PostgreSQL flexible server resource menu with "Tasks (preview)" selected.](media/create-automation-tasks/azure-postgres-menu-automation-section.png)
1. On the **Tasks** pane, select **Add a task** to select a task template. ![Screenshot that shows the "Tasks (preview)" pane with "Add a task" selected.](media/create-automation-tasks/add-automation-task.png)
-1. Under **Select a template**, select the task for **Starting** or **Stopping** your Azure PostgreSQL Flexible Server.
+1. Under **Select a template**, select the task for starting or stopping your Azure Database for PostgreSQL flexible server instance.
![Screenshot that shows the "Add a task" pane with "Stop PostgreSQL Flexible Server" template selected.](media/create-automation-tasks/select-task-template.png) 1. Under **Authenticate**, in the **Connections** section, select **Create** for every connection that appears in the task so that you can provide authentication credentials for all the connections. The types of connections in each task vary based on the task.
The task you've created, which is automatically live and running, will appear on
## Create an automation task to start server
-You can apply the same steps outlined above to create a seperate automation tasks for starting of the PostgreSQL Flexible Server at a specific time. Here's how:
+You can apply the same steps outlined above to create a separate automation task for starting the Azure Database for PostgreSQL flexible server instance at a specific time. Here's how:
1. Follow the same steps as outlined in the "Create an automation task" section until you reach the "Select a template" stage.
-1. Here, instead of selecting the task for "Stop PostgreSQL Flexible Server," you will select the template for "Start PostgreSQL Flexible Server."
+1. Here, instead of selecting the task for "Stop PostgreSQL Flexible Server," select the template for "Start PostgreSQL Flexible Server."
1. Proceed to fill in the rest of the required details as described in the subsequent steps, defining the specific schedule at which you want the server to start in the 'Configure' section. ## Review task history To view a task's history of runs along with their status:
-1. In the [Azure portal](https://portal.azure.com), find the PostgreSQL Flexible Server resource that you want to manage.
+1. In the [Azure portal](https://portal.azure.com), find the Azure Database for PostgreSQL flexible server resource that you want to manage.
2. On the resource navigation menu, in the **Automation** section, select **Tasks (preview)**. 3. In the tasks list, find the task that you want to review. In that task's **Runs** column, select **View**.
To change a task, you have these options:
### Edit the task inline
-1. In the [Azure portal](https://portal.azure.com), find the PostgreSQL Flexible Server resource that you want to manage.
+1. In the [Azure portal](https://portal.azure.com), find the Azure Database for PostgreSQL flexible server resource that you want to manage.
1. On the resource navigation menu, in the **Automation** section, select **Tasks (preview)**. 1. In the tasks list, find the task that you want to update. Open the task's ellipses (**...**) menu, and select **Edit in-line**. 1. By default, the **Authenticate** tab appears and shows the existing connections.
postgresql Generative Ai Azure Cognitive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/generative-ai-azure-cognitive.md
Title: Integrate Azure Database for PostgreSQL Flexible Server with Azure Cognitive Services -Preview
-description: Integrate Azure Database for PostgreSQL Flexible Server with Azure Cognitive Services -Preview
+ Title: Integrate with Azure Cognitive Services Preview
+description: Integrate Azure Database for PostgreSQL - Flexible Server with Azure Cognitive Services - Preview.
Last updated 11/02/2023
-# Integrate Azure Database for PostgreSQL Flexible Server with Azure Cognitive Services (Preview)
+# Integrate Azure Database for PostgreSQL - Flexible Server with Azure Cognitive Services (Preview)
The Azure AI extension gives you the ability to invoke [language services](../../ai-services/language-service/overview.md#which-language-service-feature-should-i-use) such as sentiment analysis right from within the database.
postgresql Generative Ai Azure Openai https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/generative-ai-azure-openai.md
Title: Generate vector embeddings with Azure OpenAI on Azure Database for PostgreSQL Flexible Server
-description: Generate vector embeddings with Azure OpenAI on Azure Database for PostgreSQL Flexible Server
+ Title: Generate vector embeddings with Azure OpenAI
+description: Generate vector embeddings with Azure OpenAI on Azure Database for PostgreSQL - Flexible Server.
Last updated 11/02/2023
-# Generate vector embeddings with Azure OpenAI on Azure Database for PostgreSQL Flexible Server (Preview)
+# Generate vector embeddings with Azure OpenAI on Azure Database for PostgreSQL - Flexible Server (Preview)
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
CREATE TABLE conference_session_embeddings(
INSERT INTO conference_sessions (title,session_abstract,duration_minutes,publish_date) VALUES
- ('Gen AI with Azure Database for PostgreSQL'
+ ('Gen AI with Azure Database for PostgreSQL flexible server'
,'Learn about building intelligent applications with azure_ai extension and pg_vector' , 60, current_timestamp) ,('Deep Dive: PostgreSQL database storage engine internals'
postgresql Generative Ai Azure Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/generative-ai-azure-overview.md
Title: Azure AI Extension in Azure Database for PostgreSQL - Flexible Server
-description: Azure AI Extension in Azure Database for PostgreSQL - Flexible Server
+ Title: Azure AI Extension
+description: Azure AI Extension in Azure Database for PostgreSQL - Flexible Server.
Last updated 11/01/2023
-# Azure Database for PostgreSQL Flexible Server Azure AI Extension (Preview)
+# Azure Database for PostgreSQL - Flexible Server Azure AI Extension (Preview)
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-Azure Database for PostgreSQL extension for Azure AI enables you to use large language models (LLMS) and build rich generative AI applications within the database.  The Azure AI extension enables the database to call into various Azure AI services including [Azure OpenAI](../../ai-services/openai/overview.md) and [Azure Cognitive Services](https://azure.microsoft.com/products/ai-services/cognitive-search/) simplifying the development process allowing seamless integration into those services.
+The Azure Database for PostgreSQL flexible server extension for Azure AI enables you to use large language models (LLMs) and build rich generative AI applications within the database. The Azure AI extension enables the database to call into various Azure AI services including [Azure OpenAI](../../ai-services/openai/overview.md) and [Azure Cognitive Services](https://azure.microsoft.com/products/ai-services/cognitive-search/), simplifying the development process and allowing seamless integration into those services.
## Enable the `azure_ai` extension
-Before you can enable `azure_ai` on your Flexible Server, you need to add it to your allowlist as described in [how to use PostgreSQL extensions](./concepts-extensions.md#how-to-use-postgresql-extensions) and check if correctly added by running `SHOW azure.extensions;`.
+Before you can enable `azure_ai` on your Azure Database for PostgreSQL flexible server instance, you need to add it to your allowlist as described in [how to use PostgreSQL extensions](./concepts-extensions.md#how-to-use-postgresql-extensions) and check that it was added correctly by running `SHOW azure.extensions;`.
> [!TIP] > You might also want to enable the [`pgvector` extension](./how-to-use-pgvector.md) as it is commonly used with `azure_ai`.
select azure_ai.version();
## Permissions
-The `azure_ai` extension defines a role called `azure_ai_settings_manager`, which enables reading and writing of settings related to the extension. Only superusers and members of the `azure_ai_settings_manager` role can invoke the `azure_ai.get_settings` and `azure_ai.set_settings` functions. In PostgreSQL Flexible Server, all admin users have the `azure_ai_settings_manager` role assigned.
+The `azure_ai` extension defines a role called `azure_ai_settings_manager`, which enables reading and writing of settings related to the extension. Only superusers and members of the `azure_ai_settings_manager` role can invoke the `azure_ai.get_settings` and `azure_ai.set_settings` functions. In Azure Database for PostgreSQL flexible server, all admin users have the `azure_ai_settings_manager` role assigned.
## Next steps
postgresql Generative Ai Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/generative-ai-overview.md
Title: Generative AI with Azure Database for PostgreSQL Flexible Server
-description: Generative AI with Azure Database for PostgreSQL Flexible Server
+ Title: Generative AI
+description: Generative AI with Azure Database for PostgreSQL - Flexible Server.
Last updated 12/15/2023
-# Generative AI with Azure Database for PostgreSQL Flexible Server
+# Generative AI with Azure Database for PostgreSQL - Flexible Server
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
Generative AI has a wide range of applications across various domains and indust
## Next steps
-Visit the following articles to learn how to perform semantic search with Azure Database for PostgreSQL Flexible Server and Azure OpenAI, and how to leverage the power of Azure Cognitive Services to analyze sentiment, detect language, extract key phrases, and more advanced operations you can apply on text.
+Visit the following articles to learn how to perform semantic search with Azure Database for PostgreSQL flexible server and Azure OpenAI, and how to leverage the power of Azure Cognitive Services to analyze sentiment, detect language, extract key phrases, and perform more advanced operations on text.
> [!div class="nextstepaction"] > [Generate vector embeddings with Azure OpenAI](./generative-ai-azure-openai.md) > [!div class="nextstepaction"]
-> [Integrate Azure Database for PostgreSQL Flexible Server with Azure Cognitive Services](./generative-ai-azure-cognitive.md)
+> [Integrate Azure Database for PostgreSQL - Flexible Server with Azure Cognitive Services](./generative-ai-azure-cognitive.md)
> [!div class="nextstepaction"]
-> [Implement Semantic Search with Azure Database for PostgreSQL Flexible Server and Azure OpenAI](./generative-ai-semantic-search.md)
+> [Implement Semantic Search with Azure Database for PostgreSQL - Flexible Server and Azure OpenAI](./generative-ai-semantic-search.md)
> [!div class="nextstepaction"] > [Learn more about vector similarity search using pgvector](./how-to-use-pgvector.md)
postgresql Generative Ai Recommendation System https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/generative-ai-recommendation-system.md
Title: Recommendation system with Azure Database for PostgreSQL Flexible Server and Azure OpenAI
-description: Recommendation System with Azure Database for PostgreSQL Flexible Server and Azure OpenAI
+ Title: Recommendation system with Azure OpenAI
+description: Recommendation System with Azure Database for PostgreSQL - Flexible Server and Azure OpenAI.
Last updated 12/16/2023
-# Recommendation System with Azure Database for PostgreSQL Flexible Server and Azure OpenAI
+# Recommendation System with Azure Database for PostgreSQL - Flexible Server and Azure OpenAI
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-This hands-on tutorial shows you how to build a recommender application using Azure Database for PostgreSQL Flexible Server and Azure OpenAI service. Recommendations have applications in different domains ΓÇô service providers frequently tend to provide recommendations for products and services they offer based on prior history and contextual information collected from the customer and environment.
+This hands-on tutorial shows you how to build a recommender application using Azure Database for PostgreSQL flexible server and Azure OpenAI service. Recommendations have applications in different domains: service providers frequently tend to provide recommendations for products and services they offer based on prior history and contextual information collected from the customer and environment.
There are different ways to model recommendation systems. This article explores the simplest form: a recommendation based on one product corresponding to, say, a prior purchase. This tutorial uses the recipe dataset used in the [Semantic Search](./generative-ai-semantic-search.md) article, and the recommendation is for recipes based on a recipe a customer liked or searched for before.
There are different ways to model recommendation systems. This article explores
## Enable the `azure_ai` and `pgvector` extensions
-Before you can enable `azure_ai` and `pgvector` on your Flexible Server, you need to add them to your allowlist as described in [how to use PostgreSQL extensions](./concepts-extensions.md#how-to-use-postgresql-extensions) and check if correctly added by running `SHOW azure.extensions;`.
+Before you can enable `azure_ai` and `pgvector` on your Azure Database for PostgreSQL flexible server instance, you need to add them to your allowlist as described in [how to use PostgreSQL extensions](./concepts-extensions.md#how-to-use-postgresql-extensions) and check that they were added correctly by running `SHOW azure.extensions;`.
Then you can install the extension, by connecting to your target database and running the [CREATE EXTENSION](https://www.postgresql.org/docs/current/static/sql-createextension.html) command. You need to repeat the command separately for every database you want the extension to be available in.
And explore the results:
## Next steps
-You learned how to perform semantic search with Azure Database for PostgreSQL Flexible Server and Azure OpenAI.
+You learned how to perform semantic search with Azure Database for PostgreSQL flexible server and Azure OpenAI.
> [!div class="nextstepaction"] > [Generate vector embeddings with Azure OpenAI](./generative-ai-azure-openai.md) > [!div class="nextstepaction"]
-> [Integrate Azure Database for PostgreSQL Flexible Server with Azure Cognitive Services](./generative-ai-azure-cognitive.md)
+> [Integrate Azure Database for PostgreSQL - Flexible Server with Azure Cognitive Services](./generative-ai-azure-cognitive.md)
> [!div class="nextstepaction"] > [Learn more about vector similarity search using `pgvector`](./how-to-use-pgvector.md)
postgresql Generative Ai Semantic Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/generative-ai-semantic-search.md
Title: Semantic search with Azure Database for PostgreSQL Flexible Server and Azure OpenAI
-description: Semantic Search with Azure Database for PostgreSQL Flexible Server and Azure OpenAI
+ Title: Semantic search with Azure OpenAI
+description: Semantic Search with Azure Database for PostgreSQL - Flexible Server and Azure OpenAI.
Last updated 12/15/2023
-# Semantic Search with Azure Database for PostgreSQL Flexible Server and Azure OpenAI
+# Semantic Search with Azure Database for PostgreSQL - Flexible Server and Azure OpenAI
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-This hands-on tutorial shows you how to build a semantic search application using Azure Database for PostgreSQL Flexible Server and Azure OpenAI service. Semantic search does searches based on semantics; standard lexical search does searches based on keywords provided in a query. For example, your recipe dataset might not contain labels like gluten-free, vegan, dairy-free, fruit-free or dessert but these characteristics can be deduced from the ingredients. The idea is to issue such semantic queries and get relevant search results.
+This hands-on tutorial shows you how to build a semantic search application using Azure Database for PostgreSQL flexible server and Azure OpenAI service. Semantic search performs searches based on meaning, while standard lexical search performs searches based on keywords provided in a query. For example, your recipe dataset might not contain labels like gluten-free, vegan, dairy-free, fruit-free, or dessert, but these characteristics can be deduced from the ingredients. The idea is to issue such semantic queries and get relevant search results.
Building semantic search capability on your data using GenAI and Flexible Server involves the following steps: >[!div class="checklist"]
Building semantic search capability on your data using GenAI and Flexible Server
## Enable the `azure_ai` and `pgvector` extensions
-Before you can enable `azure_ai` and `pgvector` on your Flexible Server, you need to add them to your allowlist as described in [how to use PostgreSQL extensions](./concepts-extensions.md#how-to-use-postgresql-extensions) and check if correctly added by running `SHOW azure.extensions;`.
+Before you can enable `azure_ai` and `pgvector` on your Azure Database for PostgreSQL flexible server instance, you need to add them to your allowlist as described in [how to use PostgreSQL extensions](./concepts-extensions.md#how-to-use-postgresql-extensions) and check that they were added correctly by running `SHOW azure.extensions;`.
Then you can install the extension, by connecting to your target database and running the [CREATE EXTENSION](https://www.postgresql.org/docs/current/static/sql-createextension.html) command. You need to repeat the command separately for every database you want the extension to be available in.
And explore the results:
## Next steps
-You learned how to perform semantic search with Azure Database for PostgreSQL Flexible Server and Azure OpenAI.
+You learned how to perform semantic search with Azure Database for PostgreSQL flexible server and Azure OpenAI.
> [!div class="nextstepaction"] > [Generate vector embeddings with Azure OpenAI](./generative-ai-azure-openai.md) > [!div class="nextstepaction"]
-> [Integrate Azure Database for PostgreSQL Flexible Server with Azure Cognitive Services](./generative-ai-azure-cognitive.md)
+> [Integrate Azure Database for PostgreSQL - Flexible Server with Azure Cognitive Services](./generative-ai-azure-cognitive.md)
> [!div class="nextstepaction"] > [Learn more about vector similarity search using `pgvector`](./how-to-use-pgvector.md) > [!div class="nextstepaction"]
-> [Build a Recommendation System with Azure Database for PostgreSQL Flexible Server and Azure OpenAI](./generative-ai-recommendation-system.md)
+> [Build a Recommendation System with Azure Database for PostgreSQL - Flexible Server and Azure OpenAI](./generative-ai-recommendation-system.md)
postgresql How To Alert On Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-alert-on-metrics.md
+
+ Title: Configure alerts - Azure portal
+description: This article describes how to configure and access metric alerts for Azure Database for PostgreSQL - Flexible Server from the Azure portal.
+++++ Last updated : 7/12/2023++
+# Use the Azure portal to set up alerts on metrics for Azure Database for PostgreSQL - Flexible Server
++
+This article shows you how to set up Azure Database for PostgreSQL flexible server alerts using the Azure portal. You can receive an alert based on monitoring metrics for your Azure services.
+
+The alert triggers when the value of a specified metric crosses a threshold that you assign. It triggers both when the condition is first met and afterwards when that condition is no longer being met. Metric alerts are stateful; that is, they only send out notifications when the state changes.
+
+You can configure an alert to do the following actions when it triggers:
+
+* Send email notifications to the service administrator and co-administrators.
+* Send email to additional emails that you specify.
+* Call a webhook.
+
+You can configure and get information about alert rules using:
+
+* [Azure portal](../../azure-monitor/alerts/alerts-metric.md#create-with-azure-portal)
+* [Azure CLI](../../azure-monitor/alerts/alerts-metric.md#with-azure-cli)
+* [Azure Monitor REST API](/rest/api/monitor/metricalerts)
+
+## Create an alert rule on a metric from the Azure portal
+
+1. In the [Azure portal](https://portal.azure.com/), select the Azure Database for PostgreSQL flexible server instance you want to monitor.
+
+2. Under the **Monitoring** section of the sidebar, select **Alerts**.
+
+3. Select **+ New alert rule**.
+
+4. The **Create rule** page opens as shown below. Fill in the required information:
+
+5. The current Azure Database for PostgreSQL flexible server instance is automatically added to the alert **Scope**.
+
+6. Within the **Condition** section, select **Add condition**.
+
+7. You'll see a list of supported signals. Select the metric you want to create an alert on. For example, select `storage percent`.
+
+8. You'll see a chart for the metric for the last six hours. Use the **Chart period** dropdown to see a longer history for the metric.
+
+9. Select the **Threshold type** (ex. "Static" or "Dynamic"), **Operator** (ex. "Greater than"), and **Aggregation type** (ex. average). This selection determines the logic that the metric alert rule will evaluate.
+ - If you're using a Static threshold, continue to define a Threshold value (ex. 85 percent). The metric chart can help determine what might be a reasonable threshold.
+ - If you're using a Dynamic threshold, continue to define the Threshold sensitivity. The metric chart will display the calculated thresholds based on recent data. [Learn more about Dynamic Thresholds condition type and sensitivity options](../../azure-monitor/alerts/alerts-dynamic-thresholds.md).
+
+10. Refine the condition by adjusting the **Aggregation granularity (Period)**, which is the interval over which data points are grouped using the aggregation type function (ex. "Lookback period 30 minutes"), and the **Frequency** (ex. "Check every 15 minutes").
+
+11. Select **Done** when complete.
+12. Add an action group. An action group is a collection of notification preferences defined by the owner of an Azure subscription. Within the **Action Groups** section, choose **Select action group** to select an already existing action group to attach to the alert rule.
+ - You can also create a new action group to receive notifications on the alert. For more information, see [create and manage action group](../../azure-monitor/alerts/action-groups.md).
+ - To create a new action group, choose **+ Create action group**. Fill out the **create action group** form with a **subscription**, **resource group**, **action group name** and **display name**.
+ - Configure **Notifications** for action group.
+
+ In **Notification type**, choose **Email Azure Resource Manager Role** to select subscription Owners, Contributors, and Readers to receive notifications. Choose the **Azure Resource Manager Role** for sending the email. You can also choose **Email/SMS message/Push/Voice** to send notifications to specific recipients. Provide **Name** to the notification type and select **Review + Create** when completed.
+
+13. Fill in **Alert rule details** like **severity**, **alert rule name** and **description**.
+14. Select **Create alert rule** to create the alert.
+15. Within a few minutes, the alert is active and triggers as previously described.
+
+## Monitor multiple resources
+
+Azure Database for PostgreSQL flexible server also supports multi-resource metric alert rules. You can monitor at scale by applying the same metric alert rule to multiple Azure Database for PostgreSQL flexible server instances in the same Azure region. Individual notifications are sent for each monitored resource.
+
+To [set up a new metric alert rule](../../azure-monitor/alerts/alerts-create-new-alert-rule.md), in the alert rule creation experience, in the **Scope** definition (step 5 in the previous section), use the checkboxes to select all the Azure Database for PostgreSQL flexible server instances you want the rule applied to.
+
+> [!IMPORTANT]
+> The resources you select must be within the same resource type, location, and subscription. Resources that do not fit these criteria are not selectable.
+
+You can also use [Azure Resource Manager templates](../../azure-monitor/alerts/alerts-create-new-alert-rule.md#create-a-new-alert-rule-using-an-arm-template) to deploy multi-resource metric alerts. To learn more about multi-resource alerts, refer to our blog [Scale Monitoring with Azure Database for PostgreSQL - Flexible Server Multi-Resource Alert](https://techcommunity.microsoft.com/t5/azure-database-for-postgresql/scale-monitoring-with-azure-postgresql-multi-resource-alerts/ba-p/3866526).
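+
+As a minimal sketch, the same `az monitor metrics alert create` command accepts multiple resource IDs in `--scopes`, which applies one rule to several servers in the same region. The IDs are placeholders, and the `storage_percent` metric name is an assumption to verify:
+
+```azurecli-interactive
+az monitor metrics alert create \
+  --name storage-alert-multi \
+  --resource-group myresourcegroup \
+  --scopes "/subscriptions/<subscription_id>/resourceGroups/myresourcegroup/providers/Microsoft.DBforPostgreSQL/flexibleServers/server1" \
+           "/subscriptions/<subscription_id>/resourceGroups/myresourcegroup/providers/Microsoft.DBforPostgreSQL/flexibleServers/server2" \
+  --condition "avg storage_percent > 85" \
+  --description "Storage is above 85 percent on any selected server"
+```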
+
+## Manage your alerts
+
+Once you have created an alert, you can select it and do the following actions:
+
+* View a graph showing the metric threshold and the actual values from the previous day relevant to this alert.
+* **Edit** or **Delete** the alert rule.
+* **Disable** or **Enable** the alert, if you want to temporarily stop or resume receiving notifications.
+
+## Next steps
+
+* Learn more about [setting alert on metrics](../../azure-monitor/alerts/alerts-create-new-alert-rule.md).
+* Learn more about [monitoring metrics available in Azure Database for PostgreSQL - Flexible Server](./concepts-monitoring.md).
+* [Understand how metric alerts work in Azure Monitor](../../azure-monitor/alerts/alerts-types.md).
+* [Scale Monitoring with Azure Database for PostgreSQL - Flexible Server Multi-Resource Alerts](https://techcommunity.microsoft.com/t5/azure-database-for-postgresql/scale-monitoring-with-azure-postgresql-multi-resource-alerts/ba-p/3866526).
postgresql How To Auto Grow Storage Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-auto-grow-storage-portal.md
Title: Storage Auto-grow - Azure portal - Azure Database for PostgreSQL - Flexible Server
-description: This article describes how you can configure storage autogrow using the Azure portal in Azure Database for PostgreSQL - Flexible Server
+ Title: Storage auto-grow - Azure portal
+description: This article describes how you can configure storage autogrow using the Azure portal in Azure Database for PostgreSQL - Flexible Server.
Last updated 06/24/2022
-# Storage Autogrow using Azure portal in Azure Database for PostgreSQL - Flexible Server
+# Storage autogrow using Azure portal in Azure Database for PostgreSQL - Flexible Server
[!INCLUDE [applies-to-postgresql-Flexible-server](../includes/applies-to-postgresql-Flexible-server.md)]
-This article describes how you can configure an Azure Database for PostgreSQL server storage to grow without impacting the workload.
+This article describes how you can configure Azure Database for PostgreSQL server storage to grow without impacting the workload.
For servers with more than 1 TiB of provisioned storage, the storage autogrow mechanism activates when the available space falls to less than 10% of the total capacity or 64 GiB of free space, whichever of the two values is smaller. Conversely, for servers with storage under 1 TiB, this threshold is adjusted to 20% of the available free space or 64 GiB, depending on which of these values is smaller.
As an illustration, take a server with a storage capacity of 2 TiB ( greater tha
## Enable storage auto-grow for existing servers
-Follow these steps to enable Storage Autogrow on your Azure Database for PostgreSQL Flexible server.
+Follow these steps to enable Storage Autogrow on your Azure Database for PostgreSQL flexible server instance.
-1. In the [Azure portal](https://portal.azure.com/), select your existing Azure Database for PostgreSQL Flexible Server.
+1. In the [Azure portal](https://portal.azure.com/), select your existing Azure Database for PostgreSQL flexible server instance.
-2. On the Flexible Server page, select **Compute + storage**
+2. On the Azure Database for PostgreSQL flexible server page, select **Compute + storage**.
3. In the **Storage Auto-growth** section, select the checkbox to enable storage autogrow.
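
If you prefer the command line, a hedged Azure CLI sketch is shown below; it assumes your CLI version supports the `--storage-auto-grow` parameter on `az postgres flexible-server update`, and the server and resource group names are placeholders:

```azurecli-interactive
az postgres flexible-server update \
  --resource-group myresourcegroup \
  --name mydemoserver \
  --storage-auto-grow Enabled
```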
postgresql How To Autovacuum Tuning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-autovacuum-tuning.md
Title: Autovacuum Tuning
-description: Troubleshooting guide for autovacuum in Azure Database for PostgreSQL - Flexible Server
+ Title: Autovacuum tuning
+description: Troubleshooting guide for autovacuum in Azure Database for PostgreSQL - Flexible Server.
-# Autovacuum Tuning in Azure Database for PostgreSQL - Flexible Server
+# Autovacuum tuning in Azure Database for PostgreSQL - Flexible Server
-This article provides an overview of the autovacuum feature for [Azure Database for PostgreSQL - Flexible Server](overview.md) and the feature troubleshooting guides that are available to monitor the database bloat, autovacuum blockers and also information around how far the database is from emergency or wraparound situation.
+
+This article provides an overview of the autovacuum feature for [Azure Database for PostgreSQL flexible server](overview.md) and the troubleshooting guides that are available to monitor database bloat and autovacuum blockers, and to see how far the database is from an emergency or wraparound situation.
## What is autovacuum
Continuously running autovacuum might affect CPU and IO utilization on the serve
Autovacuum daemon uses `autovacuum_work_mem` that is by default set to `-1` meaning `autovacuum_work_mem` would have the same value as the parameter `maintenance_work_mem`. This document assumes `autovacuum_work_mem` is set to `-1` and `maintenance_work_mem` is used by the autovacuum daemon.
-If `maintenance_work_mem` is low, it might be increased to up to 2 GB on Flexible Server. A general rule of thumb is to allocate 50 MB to `maintenance_work_mem` for every 1 GB of RAM.
+If `maintenance_work_mem` is low, it might be increased to up to 2 GB on Azure Database for PostgreSQL flexible server. A general rule of thumb is to allocate 50 MB to `maintenance_work_mem` for every 1 GB of RAM.
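+
+To check the current values from any client, a query such as the following can be used (`-1` for `autovacuum_work_mem` means the `maintenance_work_mem` value is used):
+
+```sql
+SELECT name, setting, unit
+FROM pg_settings
+WHERE name IN ('autovacuum_work_mem', 'maintenance_work_mem');
+```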
#### Large number of databases
Autovacuum will run on tables with an insert-only workload. Two new server p
## Troubleshooting guides
-Using the feature troubleshooting guides which is available on the Azure Database for PostgreSQL - Flexible Server portal it is possible to monitor bloat at database or individual schema level along with identifying potential blockers to autovacuum process. Two troubleshooting guides are available first one is autovacuum monitoring that can be used to monitor bloat at database or individual schema level. The second troubleshooting guide is autovacuum blockers and wraparound which helps to identify potential autovacuum blockers along with information on how far the databases on the server are from wraparound or emergency situation. The troubleshooting guides also share recommendations to mitigate potential issues. How to set up the troubleshooting guides to use them please follow [setup troubleshooting guides](how-to-troubleshooting-guides.md).
+Using the feature troubleshooting guides, which are available in the Azure Database for PostgreSQL flexible server portal, you can monitor bloat at the database or individual schema level and identify potential blockers to the autovacuum process. Two troubleshooting guides are available. The first, autovacuum monitoring, can be used to monitor bloat at the database or individual schema level. The second, autovacuum blockers and wraparound, helps identify potential autovacuum blockers along with information on how far the databases on the server are from a wraparound or emergency situation. The troubleshooting guides also share recommendations to mitigate potential issues. To set up the troubleshooting guides, follow [setup troubleshooting guides](how-to-troubleshooting-guides.md).
## Related content - [High CPU Utilization](how-to-high-cpu-utilization.md) - [High Memory Utilization](how-to-high-memory-utilization.md)-- [Server Parameters](howto-configure-server-parameters-using-portal.md)
+- [Server Parameters](how-to-configure-server-parameters-using-portal.md)
postgresql How To Bulk Load Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-bulk-load-data.md
Title: Upload data in bulk in Azure Database for PostgreSQL - Flexible Server
-description: This article discusses best practices for uploading data in bulk in Azure Database for PostgreSQL - Flexible Server
+ Title: Upload data in bulk
+description: This article discusses best practices for uploading data in bulk in Azure Database for PostgreSQL - Flexible Server.
+ Last updated 08/16/2022
# Best practices for uploading data in bulk in Azure Database for PostgreSQL - Flexible Server
-This article discusses various methods for loading data in bulk in Azure Database for PostgreSQL - Flexible Server, along with best practices for both initial data loads in empty databases and incremental data loads.
+
+This article discusses various methods for loading data in bulk in Azure Database for PostgreSQL flexible server, along with best practices for both initial data loads in empty databases and incremental data loads.
## Loading methods
To create an unlogged table or change an existing table to an unlogged table, us
> [!NOTE] > Follow the recommendations here only if there's enough memory and disk space.
-* `maintenance_work_mem`: Can be set to a maximum of 2 gigabytes (GB) on a flexible server. `maintenance_work_mem` helps in speeding up autovacuum, index, and foreign key creation.
+* `maintenance_work_mem`: Can be set to a maximum of 2 gigabytes (GB) on an Azure Database for PostgreSQL flexible server instance. `maintenance_work_mem` helps in speeding up autovacuum, index, and foreign key creation.
-* `checkpoint_timeout`: On a flexible server, the `checkpoint_timeout` value can be increased to a maximum of 24 hours from the default setting of 5 minutes. We recommend that you increase the value to 1 hour before you load data initially on the flexible server.
+* `checkpoint_timeout`: On an Azure Database for PostgreSQL flexible server instance, the `checkpoint_timeout` value can be increased to a maximum of 24 hours from the default setting of 5 minutes. We recommend that you increase the value to 1 hour before you load data initially on the Azure Database for PostgreSQL flexible server instance.
* `checkpoint_completion_target`: We recommend a value of 0.9.
-* `max_wal_size`: Can be set to the maximum allowed value on a flexible server, which is 64 GB while you're doing the initial data load.
+* `max_wal_size`: Can be set to the maximum allowed value on an Azure Database for PostgreSQL flexible server instance, which is 64 GB while you're doing the initial data load.
* `wal_compression`: Can be turned on. Enabling this parameter can incur some extra CPU cost spent on the compression during write-ahead log (WAL) logging and on the decompression during WAL replay.
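+
+As a hedged sketch, these parameters can also be applied with the Azure CLI before the load starts. The values below are illustrative only; confirm the units (for example, `maintenance_work_mem` is in kB and `max_wal_size` is in MB) and the allowed ranges for your compute tier:
+
+```azurecli-interactive
+az postgres flexible-server parameter set --resource-group myresourcegroup --server-name mydemoserver --name maintenance_work_mem --value 2097151
+az postgres flexible-server parameter set --resource-group myresourcegroup --server-name mydemoserver --name checkpoint_timeout --value 3600
+az postgres flexible-server parameter set --resource-group myresourcegroup --server-name mydemoserver --name checkpoint_completion_target --value 0.9
+az postgres flexible-server parameter set --resource-group myresourcegroup --server-name mydemoserver --name max_wal_size --value 65536
+az postgres flexible-server parameter set --resource-group myresourcegroup --server-name mydemoserver --name wal_compression --value on
+```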
-### Flexible server recommendations
+### Azure Database for PostgreSQL flexible server recommendations
-Before you begin an initial data load on the flexible server, we recommend that you:
+Before you begin an initial data load on the Azure Database for PostgreSQL flexible server instance, we recommend that you:
- Disable high availability on the server. You can enable it after the initial load is completed on the primary. - Create read replicas after the initial data load is completed.
The `number_of_scans`, `tuples_read`, and `tuples_fetched` columns would indicat
> [!NOTE] > Follow the recommendations in the following parameters only if there's enough memory and disk space.
-* `maintenance_work_mem`: This parameter can be set to a maximum of 2 GB on the flexible server. `maintenance_work_mem` helps speed up index creation and foreign key additions.
+* `maintenance_work_mem`: This parameter can be set to a maximum of 2 GB on the Azure Database for PostgreSQL flexible server instance. `maintenance_work_mem` helps speed up index creation and foreign key additions.
-* `checkpoint_timeout`: On the flexible server, the `checkpoint_timeout` value can be increased to 10 or 15 minutes from the default setting of 5 minutes. Increasing `checkpoint_timeout` to a larger value, such as 15 minutes, can reduce the I/O load, but the downside is that it takes longer to recover if there's a crash. We recommend careful consideration before you make the change.
+* `checkpoint_timeout`: On the Azure Database for PostgreSQL flexible server instance, the `checkpoint_timeout` value can be increased to 10 or 15 minutes from the default setting of 5 minutes. Increasing `checkpoint_timeout` to a larger value, such as 15 minutes, can reduce the I/O load, but the downside is that it takes longer to recover if there's a crash. We recommend careful consideration before you make the change.
* `checkpoint_completion_target`: We recommend a value of 0.9.
The `number_of_scans`, `tuples_read`, and `tuples_fetched` columns would indicat
## Next steps - [Troubleshoot high CPU utilization](./how-to-high-CPU-utilization.md) - [Troubleshoot high memory utilization](./how-to-high-memory-utilization.md)-- [Configure server parameters](./howto-configure-server-parameters-using-portal.md)
+- [Configure server parameters](./how-to-configure-server-parameters-using-portal.md)
- [Troubleshoot and tune Autovacuum](./how-to-autovacuum-tuning.md) - [Troubleshoot high CPU utilization](./how-to-high-io-utilization.md)
postgresql How To Configure And Access Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-configure-and-access-logs.md
+
+ Title: Configure and access logs
+description: How to access database logs.
+++++ Last updated : 4/3/2023++
+# Configure and access logs in Azure Database for PostgreSQL - Flexible Server
++
+Azure Database for PostgreSQL flexible server logs are available on every node of a flexible server. You can ship logs to a storage server, or to an analytics service. The logs can be used to identify, troubleshoot, and repair configuration errors and suboptimal performance.
+
+## Configure diagnostic settings
+
+You can enable diagnostic settings for your Azure Database for PostgreSQL flexible server instance using the Azure portal, CLI, REST API, and PowerShell. The log category to select is **PostgreSQLLogs**.
+
+To enable resource logs using the Azure portal:
+
+1. In the portal, go to *Diagnostic Settings* in the navigation menu of your Azure Database for PostgreSQL flexible server instance.
+
+2. Select *Add Diagnostic Setting*.
+ :::image type="content" source="media/howto-logging/diagnostic-settings.png" alt-text="Add diagnostic settings button":::
+
+3. Name this setting.
+
+4. Select your preferred endpoint (Log Analytics workspace, Storage account, Event hub).
+
+5. Select the log type from the list of categories (Server Logs, Sessions data, Query Store Runtime / Wait Statistics etc.)
+ :::image type="content" source="media/howto-logging/diagnostic-setting-log-category.png" alt-text="Screenshot of choosing log categories.":::
+
+6. Save your setting.
+
+To enable resource logs using PowerShell, CLI, or REST API, visit the [diagnostic settings](../../azure-monitor/essentials/diagnostic-settings.md) article.
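+
+For example, a hedged Azure CLI sketch that routes the **PostgreSQLLogs** category to a Log Analytics workspace; the resource IDs and names are placeholders:
+
+```azurecli-interactive
+az monitor diagnostic-settings create \
+  --name send-postgres-logs \
+  --resource "/subscriptions/<subscription_id>/resourceGroups/myresourcegroup/providers/Microsoft.DBforPostgreSQL/flexibleServers/mydemoserver" \
+  --workspace "/subscriptions/<subscription_id>/resourceGroups/myresourcegroup/providers/Microsoft.OperationalInsights/workspaces/myworkspace" \
+  --logs '[{"category":"PostgreSQLLogs","enabled":true}]'
+```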
+
+### Access resource logs
+
+The way you access the logs depends on which endpoint you choose. For Azure Storage, see the [logs storage account](../../azure-monitor/essentials/resource-logs.md#send-to-azure-storage) article. For Event Hubs, see the [stream Azure logs](../../azure-monitor/essentials/resource-logs.md#send-to-azure-event-hubs) article.
+
+For Azure Monitor Logs, logs are sent to the workspace you selected. The Azure Database for PostgreSQL flexible server logs use the **AzureDiagnostics** collection mode, so they can be queried from the AzureDiagnostics table. The fields in the table are described below. Learn more about querying and alerting in the [Azure Monitor Logs query](../../azure-monitor/logs/log-query-overview.md) overview.
+
+The following are queries you can try to get started. You can configure alerts based on queries.
+
+Search for all Azure Database for PostgreSQL flexible server logs for a particular server in the last day.
+
+```kusto
+AzureDiagnostics
+| where Resource == "myservername"
+| where Category == "PostgreSQLLogs"
+| where TimeGenerated > ago(1d)
+```
+Search for all non-localhost connection attempts. The following query shows results over the last 6 hours for any Azure Database for PostgreSQL flexible server logging in this workspace.
+
+```kusto
+AzureDiagnostics
+| where Message contains "connection received" and Message !contains "host=127.0.0.1"
+| where Category == "PostgreSQLLogs" and TimeGenerated > ago(6h)
+```
+
+Search for Azure Database for PostgreSQL flexible server Sessions collected from `pg_stat_activity` system view for a particular server in the last day.
+
+```kusto
+AzureDiagnostics
+| where Resource == "myservername"
+| where Category =='PostgreSQLFlexSessions'
+| where TimeGenerated > ago(1d)
+```
+
+Search for Azure Database for PostgreSQL flexible server Query Store Runtime statistics collected from `query_store.qs_view` for a particular server in the last day. It requires Query Store to be enabled.
+
+```kusto
+AzureDiagnostics
+| where Resource == "myservername"
+| where Category =='PostgreSQLFlexQueryStoreRuntime'
+| where TimeGenerated > ago(1d)
+```
+
+Search for Azure Database for PostgreSQL flexible server Query Store Wait Statistics collected from `query_store.pgms_wait_sampling_view` for a particular server in the last day. It requires Query Store Wait Sampling to be enabled.
+
+```kusto
+AzureDiagnostics
+| where Resource == "myservername"
+| where Category =='PostgreSQLFlexQueryStoreWaitStats'
+| where TimeGenerated > ago(1d)
+```
+
+Search for Azure Database for PostgreSQL flexible server Autovacuum and Schema statistics for each database in a particular server within the last day.
+
+```kusto
+AzureDiagnostics
+| where Resource == "myservername"
+| where Category =='PostgreSQLFlexTableStats'
+| where TimeGenerated > ago(1d)
+```
+
+Search for Azure Database for PostgreSQL flexible server remaining transactions and multixacts until emergency autovacuum or wraparound protection for each database in a particular server within the last day.
+
+```kusto
+AzureDiagnostics
+| where Resource == "myservername"
+| where Category =='PostgreSQLFlexDatabaseXacts'
+| where TimeGenerated > ago(1d)
+```
+
+## Next steps
+
+- [Get started with log analytics queries](../../azure-monitor/logs/log-analytics-tutorial.md)
+- Learn about [Azure event hubs](../../event-hubs/event-hubs-about.md)
postgresql How To Configure High Availability Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-configure-high-availability-cli.md
Title: Manage high availability - Azure CLI - Azure Database for PostgreSQL Flexible Server
-description: This article describes how to configure high availability in Azure Database for PostgreSQL flexible Server with the Azure CLI.
+ Title: Manage high availability - Azure CLI
+description: This article describes how to configure high availability in Azure Database for PostgreSQL - Flexible Server with the Azure CLI.
Last updated 6/16/2022
-# Manage high availability in Azure Database for PostgreSQL Flexible Server with Azure CLI
+# Manage high availability in Azure Database for PostgreSQL - Flexible Server with Azure CLI
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-The article describes how you can enable or disable high availability configuration at the time of server creation in your flexible server. You can disable high availability after server creation too.
+This article describes how you can enable or disable the high availability configuration at the time of server creation in Azure Database for PostgreSQL flexible server. You can also disable high availability after server creation.
The high availability feature provisions a physically separate primary and standby replica in different zones. For more information, see [high availability concepts documentation](./concepts/../concepts-high-availability.md). Enabling or disabling high availability doesn't change your other settings, including VNET configuration, firewall settings, and backup retention. Disabling high availability doesn't impact your application connectivity and operations.
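
As a minimal sketch of the commands involved, the `--high-availability` parameter can be set at creation time and changed later with `az postgres flexible-server update`; the accepted values (such as `ZoneRedundant`, `SameZone`, and `Disabled`) depend on your CLI version, and the names below are placeholders:

```azurecli-interactive
# Enable zone-redundant high availability at server creation (other required parameters omitted).
az postgres flexible-server create --resource-group myresourcegroup --name mydemoserver --high-availability ZoneRedundant

# Disable high availability on an existing server.
az postgres flexible-server update --resource-group myresourcegroup --name mydemoserver --high-availability Disabled
```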
postgresql How To Configure Server Parameters Using Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-configure-server-parameters-using-cli.md
+
+ Title: Configure parameters
+description: This article describes how to configure Postgres parameters in Azure Database for PostgreSQL - Flexible Server using the Azure CLI.
++++
+ms.devlang: azurecli
+ Last updated : 8/14/2023+++
+# Customize server parameters for Azure Database for PostgreSQL - Flexible Server using Azure CLI
++
+You can list, show, and update configuration parameters for an Azure Database for PostgreSQL flexible server instance by using the Azure CLI. A subset of engine parameters is exposed at the server level and can be modified.
+
+## Prerequisites
+
+To step through this how-to guide, you need to:
+- Create an Azure Database for PostgreSQL flexible server instance and database by following [Create an Azure Database for PostgreSQL flexible server instance](quickstart-create-server-cli.md)
+- Install [Azure CLI](/cli/azure/install-azure-cli) command-line interface on your machine or use the [Azure Cloud Shell](../../cloud-shell/overview.md) in the Azure portal using your browser.
+
+## List server parameters for an Azure Database for PostgreSQL flexible server instance
+
+To list all modifiable parameters in a server and their values, run the [az postgres flexible-server parameter list](/cli/azure/postgres/flexible-server/parameter) command.
+
+You can list the server parameters for the server **mydemoserver.postgres.database.azure.com** under resource group **myresourcegroup**.
+
+```azurecli-interactive
+az postgres flexible-server parameter list --resource-group myresourcegroup --server-name mydemoserver
+```
+
+## Show server parameter details
+
+To show details about a particular parameter for a server, run the [az postgres flexible-server parameter show](/cli/azure/postgres/flexible-server/parameter) command.
+
+This example shows details of the **log\_min\_messages** server parameter for server **mydemoserver.postgres.database.azure.com** under resource group **myresourcegroup.**
+
+```azurecli-interactive
+az postgres flexible-server parameter show --name log_min_messages --resource-group myresourcegroup --server-name mydemoserver
+```
+
+## Modify server parameter value
+
+You can also modify the value of a certain server parameter, which updates the underlying configuration value for the Azure Database for PostgreSQL flexible server engine. To update the parameter, use the [az postgres flexible-server parameter set](/cli/azure/postgres/flexible-server/parameter) command.
+
+To update the **log\_min\_messages** server parameter of server **mydemoserver.postgres.database.azure.com** under resource group **myresourcegroup.**
+
+```azurecli-interactive
+az postgres flexible-server parameter set --name log_min_messages --value INFO --resource-group myresourcegroup --server-name mydemoserver
+```
+
+If you want to reset the value of a parameter, simply leave out the optional `--value` parameter, and the service applies the default value. In the above example, it would look like:
+
+```azurecli-interactive
+az postgres flexible-server parameter set --name log_min_messages --resource-group myresourcegroup --server-name mydemoserver
+```
+
+This command resets the **log\_min\_messages** parameter to the default value **WARNING**. For more information on server parameters and permissible values, see PostgreSQL documentation on [Setting Parameters](https://www.postgresql.org/docs/current/config-setting.html).
+
+## Next steps
+
+- To configure and access server logs, see [Server Logs in Azure Database for PostgreSQL - Flexible Server](concepts-logging.md)
postgresql How To Configure Server Parameters Using Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-configure-server-parameters-using-portal.md
+
+ Title: Configure server parameters - Azure portal
+description: This article describes how to configure the Postgres parameters in Azure Database for PostgreSQL - Flexible Server through the Azure portal.
+++++ Last updated : 8/14/2023++
+# Configure server parameters in Azure Database for PostgreSQL - Flexible Server via the Azure portal
++
+You can list, show, and update configuration parameters for an Azure Database for PostgreSQL flexible server instance through the Azure portal. In addition, you can select the **Server Parameter** tabs to easily view the parameter groups **Modified**, **Static**, **Dynamic**, and **Read-Only**.
+
+## Prerequisites
+To step through this how-to guide, you need:
+- [An Azure Database for PostgreSQL flexible server instance](quickstart-create-server-portal.md)
+
+## Viewing and editing parameters
+1. Open the [Azure portal](https://portal.azure.com).
+
+2. Select your Azure Database for PostgreSQL flexible server instance.
+
+3. Under the **SETTINGS** section, select **Server parameters**. The page shows a list of parameters, their values, and descriptions.
+
+4. Select the **drop down** button to see the possible values for enumerated-type parameters like client_min_messages.
+
+5. Select or hover over the **i** (information) button to see the range of possible values for numeric parameters like cpu_index_tuple_cost.
+
+6. If needed, use the **search box** to narrow down to a specific parameter. The search is on the name and description of the parameters.
+
+7. Change the parameter values you would like to adjust. All changes you make in a session are highlighted in purple. Once you have changed the values, you can select **Save**. Or you can **Discard** your changes.
+
+8. List all the parameters that are modified from their _default_ value.
+
+9. If you have saved new values for the parameters, you can always revert everything back to the default values by selecting **Reset all to default**.
+
+## Working with time zone parameters
+If you plan to work with date and time data in PostgreSQL, you'll want to ensure that you've set the correct time zone for your location. All timezone-aware dates and times are stored internally in PostgreSQL in UTC. They are converted to local time in the zone specified by the **TimeZone** server parameter before being displayed to the client. This parameter can be edited on the **Server parameters** page as explained above.
+PostgreSQL allows you to specify time zones in three different forms:
+1. A full time zone name, for example America/New_York. The recognized time zone names are listed in the [**pg_timezone_names**](https://www.postgresql.org/docs/9.2/view-pg-timezone-names.html) view.
+ An example query of this view in psql to get a list of time zone names:
+ <pre>select name FROM pg_timezone_names LIMIT 20;</pre>
+
+ You should see a result set like:
+
+ <pre>
+ name
+ --
+ GMT0
+ Iceland
+ Factory
+ NZ-CHAT
+ America/Panama
+ America/Fort_Nelson
+ America/Pangnirtung
+ America/Belem
+ America/Coral_Harbour
+ America/Guayaquil
+ America/Marigot
+ America/Barbados
+ America/Porto_Velho
+ America/Bogota
+ America/Menominee
+ America/Martinique
+ America/Asuncion
+ America/Toronto
+ America/Tortola
+ America/Managua
+ (20 rows)
+ </pre>
+
+2. A time zone abbreviation, for example PST. Such a specification merely defines a particular offset from UTC, in contrast to full time zone names which can imply a set of daylight savings transition-date rules as well. The recognized abbreviations are listed in the [**pg_timezone_abbrevs view**](https://www.postgresql.org/docs/9.4/view-pg-timezone-abbrevs.html)
+ An example query of this view in psql to get a list of time zone abbreviations:
+
+ <pre> select abbrev from pg_timezone_abbrevs limit 20;</pre>
+
+ You should see a result set like:
+
+ <pre>
+ abbrev|
+ +
+ ACDT |
+ ACSST |
+ ACST |
+ ACT |
+ ACWST |
+ ADT |
+ AEDT |
+ AESST |
+ AEST |
+ AFT |
+ AKDT |
+ AKST |
+ ALMST |
+ ALMT |
+ AMST |
+ AMT |
+ ANAST |
+ ANAT |
+ ARST |
+ ART |
+ </pre>
+
+3. In addition to the timezone names and abbreviations, PostgreSQL will accept POSIX-style time zone specifications of the form STDoffset or STDoffsetDST, where STD is a zone abbreviation, offset is a numeric offset in hours west from UTC, and DST is an optional daylight-savings zone abbreviation, assumed to stand for one hour ahead of the given offset.
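+
+For example, you can verify the current setting and try a value at the session level with standard SQL before changing the **TimeZone** server parameter:
+
+```sql
+SHOW TimeZone;
+SET TIME ZONE 'America/New_York';
+SELECT now();
+```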
+
+
+## Next steps
+Learn about:
+- [Overview of server parameters in Azure Database for PostgreSQL - Flexible Server](concepts-server-parameters.md)
+- [Configure Azure Database for PostgreSQL - Flexible Server parameters via CLI](how-to-configure-server-parameters-using-cli.md)
+
postgresql How To Configure Sign In Azure Ad Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-configure-sign-in-azure-ad-authentication.md
Title: Use Microsoft Entra ID for authentication with Azure Database for PostgreSQL - Flexible Server
+ Title: Use Microsoft Entra ID for authentication
description: Learn how to set up Microsoft Entra ID for authentication with Azure Database for PostgreSQL - Flexible Server.
[!INCLUDE [applies-to-postgresql-Flexible-server](../includes/applies-to-postgresql-Flexible-server.md)]
-In this article, you'll configure Microsoft Entra ID access for authentication with Azure Database for PostgreSQL - Flexible Server. You'll also learn how to use a Microsoft Entra token with Azure Database for PostgreSQL - Flexible Server.
+In this article, you configure Microsoft Entra ID access for authentication with Azure Database for PostgreSQL flexible server. You also learn how to use a Microsoft Entra token with Azure Database for PostgreSQL flexible server.
-You can configure Microsoft Entra authentication for Azure Database for PostgreSQL - Flexible Server either during server provisioning or later. Only Microsoft Entra administrator users can create or enable users for Microsoft Entra ID-based authentication. We recommend not using the Microsoft Entra administrator for regular database operations because that role has elevated user permissions (for example, CREATEDB).
+You can configure Microsoft Entra authentication for Azure Database for PostgreSQL flexible server either during server provisioning or later. Only Microsoft Entra administrator users can create or enable users for Microsoft Entra ID-based authentication. We recommend not using the Microsoft Entra administrator for regular database operations because that role has elevated user permissions (for example, CREATEDB).
-You can have multiple Microsoft Entra admin users with Azure Database for PostgreSQL - Flexible Server. Microsoft Entra admin users can be a user, a group, or service principal.
+You can have multiple Microsoft Entra admin users with Azure Database for PostgreSQL flexible server. Microsoft Entra admin users can be a user, a group, or service principal.
## Prerequisites
To set the Microsoft Entra admin during server provisioning, follow these steps:
To set the Microsoft Entra administrator after server creation, follow these steps:
-1. In the Azure portal, select the instance of Azure Database for PostgreSQL - Flexible Server that you want to enable for Microsoft Entra ID.
+1. In the Azure portal, select the instance of Azure Database for PostgreSQL flexible server that you want to enable for Microsoft Entra ID.
1. Under **Security**, select **Authentication**. Then choose either **PostgreSQL and Microsoft Entra authentication** or **Microsoft Entra authentication only** as the authentication method, based on your requirements. 1. Select **Add Microsoft Entra Admins**. Then select a valid Microsoft Entra user, group, service principal, or managed identity in the customer tenant to be a Microsoft Entra administrator. 1. Select **Save**.
To set the Microsoft Entra administrator after server creation, follow these ste
:::image type="content" source="media/how-to-configure-sign-in-Azure-ad-authentication/set-Azure-ad-admin.png" alt-text="Screenshot that shows selections for setting a Microsoft Entra admin after server creation."::: > [!IMPORTANT]
-> When setting the administrator, a new user is added to Azure Database for PostgreSQL - Flexible Server with full administrator permissions.
+> When setting the administrator, a new user is added to Azure Database for PostgreSQL flexible server with full administrator permissions.
<a name='connect-to-azure-database-for-postgresql-by-using-azure-ad'></a>
We've tested the following clients:
## Authenticate with Microsoft Entra ID
-Use the following procedures to authenticate with Microsoft Entra ID as an Azure Database for PostgreSQL - Flexible Server user. You can follow along in Azure Cloud Shell, on an Azure virtual machine, or on your local machine.
+Use the following procedures to authenticate with Microsoft Entra ID as an Azure Database for PostgreSQL flexible server user. You can follow along in Azure Cloud Shell, on an Azure virtual machine, or on your local machine.
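+
+As a condensed sketch of the flow described in the following sections, you can fetch a Microsoft Entra access token with the Azure CLI and pass it as the password to `psql`. The server name and user are placeholders, and the `--resource-type oss-rdbms` option is assumed to be available in your CLI version:
+
+```bash
+az login
+export PGPASSWORD=$(az account get-access-token --resource-type oss-rdbms --query accessToken --output tsv)
+psql "host=mydemoserver.postgres.database.azure.com port=5432 dbname=postgres user=user@contoso.com sslmode=require"
+```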
### Sign in to the user's Azure subscription
To connect by using a Microsoft Entra token with PgAdmin, follow these steps:
1. Open Pgadmin and click **Register** from left hand menu and select **Server** 2. In **General** Tab provide a connection name and clear the **Connect now** option.
-3. Click the **Connection** tab and provide your Flexible Server details for **Hostname/address** and **username** and save.
-4. From the browser menu, select your Azure Database for PostgreSQL - Flexible Server connection and click **Connect Server**
-5. Enter the your Active Directory token password when you're prompted.
+3. Click the **Connection** tab and provide your Azure Database for PostgreSQL flexible server instance details for **Hostname/address** and **username** and save.
+4. From the browser menu, select your Azure Database for PostgreSQL flexible server connection and click **Connect Server**
+5. Enter your Active Directory token password when prompted.
:::image type="content" source="media/how-to-configure-sign-in-Azure-ad-authentication/login-using-pgadmin.png" alt-text="Screenshot that shows login process using PG admin.":::
You're now authenticated to your Azure Database for PostgreSQL server through Mi
<a name='create-azure-ad-groups-in-azure-database-for-postgresqlflexible-server'></a>
-### Create Microsoft Entra groups in Azure Database for PostgreSQL - Flexible Server
+### Create Microsoft Entra groups in Azure Database for PostgreSQL flexible server
To enable a Microsoft Entra group to access your database, use the same mechanism you used for users, but specify the group name instead. For example:
select * from pgaadauth_create_principal('Prod DB Readonly', false, false).
When group members sign in, they use their access tokens but specify the group name as the username. > [!NOTE]
-> Azure Database for PostgreSQL - Flexible Server supports managed identities as group members.
+> Azure Database for PostgreSQL flexible server supports managed identities as group members.
### Sign in to the user's Azure subscription
postgresql How To Connect Query Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-connect-query-guide.md
Title: Connect and query - Flexible Server PostgreSQL
-description: Links to quickstarts showing how to connect to your Azure Database for PostgreSQL Flexible Server and run queries.
+ Title: Connect and query
+description: Links to quickstarts showing how to connect to your Azure Database for PostgreSQL - Flexible Server and run queries.
Last updated 11/30/2021
-# Connect and query overview for Azure database for PostgreSQL- Flexible Server
+# Connect and query overview for Azure Database for PostgreSQL - Flexible Server
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-The following document includes links to examples showing how to connect and query with Azure Database for PostgreSQL Single Server. This guide also includes TLS recommendations and extension that you can use to connect to the server in supported languages below.
+The following document includes links to examples showing how to connect and query with Azure Database for PostgreSQL flexible server. This guide also includes TLS recommendations and extensions that you can use to connect to the server in the supported languages below.
## Quickstarts
The following document includes links to examples showing how to connect and que
## TLS considerations for database connectivity
-Transport Layer Security (TLS) is used by all drivers that Microsoft supplies or supports for connecting to databases in Azure Database for PostgreSQL. No special configuration is necessary but do enforce TLS 1.2 for newly created servers. We recommend if you are using TLS 1.0 and 1.1, then you update the TLS version for your servers. See [How to configure TLS](how-to-connect-tls-ssl.md)
+Transport Layer Security (TLS) is used by all drivers that Microsoft supplies or supports for connecting to databases in Azure Database for PostgreSQL flexible server. No special configuration is necessary, but TLS 1.2 is enforced for newly created servers. If you're using TLS 1.0 or 1.1, we recommend that you update the TLS version for your servers. See [How to configure TLS](how-to-connect-tls-ssl.md).
## PostgreSQL extensions
-PostgreSQL provides the ability to extend the functionality of your database using extensions. Extensions bundle multiple related SQL objects together in a single package that can be loaded or removed from your database with a single command. After being loaded in the database, extensions function like built-in features.
+Azure Database for PostgreSQL flexible server provides the ability to extend the functionality of your database using extensions. Extensions bundle multiple related SQL objects together in a single package that can be loaded or removed from your database with a single command. After being loaded in the database, extensions function like built-in features.
- [Postgres extensions](./concepts-extensions.md#extension-versions) - [dblink and postgres_fdw](./concepts-extensions.md#dblink-and-postgres_fdw) - [pg_prewarm](./concepts-extensions.md#pg_prewarm) - [pg_stat_statements](./concepts-extensions.md#pg_stat_statements)
-Fore more details, see [How to use PostgreSQL extensions on Flexible server](concepts-extensions.md).
+For more details, see [How to use PostgreSQL extensions on Azure Database for PostgreSQL - Flexible Server](concepts-extensions.md).
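+
+As a minimal sketch, an extension is loaded and removed with a single SQL statement each, assuming it has been allowlisted through the `azure.extensions` server parameter (and any extra requirements of the specific extension are met):
+
+```sql
+CREATE EXTENSION IF NOT EXISTS pg_stat_statements;
+DROP EXTENSION pg_stat_statements;
+```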
## Next steps
postgresql How To Connect Scram https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-connect-scram.md
Title: Connectivity using SCRAM in Azure Database for PostgreSQL - Flexible Server
+ Title: Connectivity using SCRAM
description: Instructions and information on how to configure and connect using SCRAM in Azure Database for PostgreSQL - Flexible Server.
Last updated 11/30/2021
Salted Challenge Response Authentication Mechanism (SCRAM) is a password-based mutual authentication protocol. It is a challenge-response scheme that adds several levels of security and prevents password sniffing on untrusted connections. SCRAM supports storing passwords on the server in a cryptographically hashed form which provides advanced security.
-To access the PostgreSQL database server using SCRAM method of authentication, your client libraries need to support SCRAM. Refer to the [list of drivers](https://wiki.postgresql.org/wiki/List_of_drivers) that support SCRAM.
+To access an Azure Database for PostgreSQL flexible server instance using SCRAM method of authentication, your client libraries need to support SCRAM. Refer to the [list of drivers](https://wiki.postgresql.org/wiki/List_of_drivers) that support SCRAM.
## Configuring SCRAM authentication
-1. Change password_encryption to SCRAM-SHA-256. Currently PostgreSQL only supports SCRAM using SHA-256.
+1. Change password_encryption to SCRAM-SHA-256. Currently Azure Database for PostgreSQL flexible server only supports SCRAM using SHA-256.
:::image type="content" source="./media/how-to-configure-scram/1-password-encryption.png" alt-text="Enable SCRAM password encryption"::: 2. Allow SCRAM-SHA-256 as the authentication method. :::image type="content" source="./media/how-to-configure-scram/2-auth-method.png" alt-text="Choose the authentication method"::: >[!Important] > You may choose to enforce SCRAM-only authentication by selecting only the SCRAM-SHA-256 method. By doing so, users with MD5 authentication can no longer connect to the server. Hence, before enforcing SCRAM, it is recommended to have both MD5 and SCRAM-SHA-256 as authentication methods until you update all user passwords to SCRAM-SHA-256. You can verify the authentication type for users using the query mentioned in step #7. 3. Save the changes. These are dynamic properties and do not require server restart.
-4. From your Postgres client, connect to the Postgres server. For example,
+4. From your Azure Database for PostgreSQL flexible server client, connect to the Azure Database for PostgreSQL flexible server instance. For example,
```bash psql "host=myPGServer.postgres.database.azure.com port=5432 dbname=postgres user=myDemoUser password=MyPassword sslmode=require"
postgresql How To Connect Tls Ssl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-connect-tls-ssl.md
Title: Encrypted connectivity using TLS/SSL in Azure Database for PostgreSQL - Flexible Server
+ Title: Encrypted connectivity using TLS/SSL
description: Instructions and information on how to connect using TLS/SSL in Azure Database for PostgreSQL - Flexible Server.
Last updated 11/30/2021
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-Azure Database for PostgreSQL - Flexible Server supports connecting your client applications to the PostgreSQL service using Transport Layer Security (TLS), previously known as Secure Sockets Layer (SSL). TLS is an industry standard protocol that ensures encrypted network connections between your database server and client applications, allowing you to adhere to compliance requirements.
+Azure Database for PostgreSQL flexible server supports connecting your client applications to Azure Database for PostgreSQL flexible server using Transport Layer Security (TLS), previously known as Secure Sockets Layer (SSL). TLS is an industry standard protocol that ensures encrypted network connections between your database server and client applications, allowing you to adhere to compliance requirements.
-Azure Database for PostgreSQL - Flexible Server supports encrypted connections using Transport Layer Security (TLS 1.2+) and all incoming connections with TLS 1.0 and TLS 1.1 will be denied. For all flexible servers enforcement of TLS connections is enabled.
+Azure Database for PostgreSQL flexible server supports encrypted connections using Transport Layer Security (TLS 1.2+) and all incoming connections with TLS 1.0 and TLS 1.1 will be denied. For all Azure Database for PostgreSQL flexible server instances enforcement of TLS connections is enabled.
>[!Note]
-> By default, secured connectivity between the client and the server is enforced. If you want to disable TLS/SSL for connecting to flexible server, you can change the server parameter *require_secure_transport* to *OFF*. You can also set TLS version by setting *ssl_max_protocol_version* server parameters.
+> By default, secured connectivity between the client and the server is enforced. If you want to disable TLS/SSL for connecting to Azure Database for PostgreSQL flexible server, you can change the server parameter *require_secure_transport* to *OFF*. You can also set TLS version by setting *ssl_max_protocol_version* server parameters.
## Applications that require certificate verification for TLS/SSL connectivity
-In some cases, applications require a local certificate file generated from a trusted Certificate Authority (CA) certificate file to connect securely. Azure Database for PostgreSQL - Flexible Server uses *DigiCert Global Root CA*. Download this certificate needed to communicate over SSL from [DigiCert Global Root CA](https://dl.cacerts.digicert.com/DigiCertGlobalRootCA.crt.pem) and save the certificate file to your preferred location. For example, this tutorial uses `c:\ssl`.
+In some cases, applications require a local certificate file generated from a trusted Certificate Authority (CA) certificate file to connect securely. Azure Database for PostgreSQL flexible server uses *DigiCert Global Root CA*. Download this certificate needed to communicate over SSL from [DigiCert Global Root CA](https://dl.cacerts.digicert.com/DigiCertGlobalRootCA.crt.pem) and save the certificate file to your preferred location. For example, this tutorial uses `c:\ssl`.
### Connect using psql
-If you created your flexible server with *Private access (VNet Integration)*, you will need to connect to your server from a resource within the same VNet as your server. You can create a virtual machine and add it to the VNet created with your flexible server.
+If you created your Azure Database for PostgreSQL flexible server instance with *Private access (VNet Integration)*, you will need to connect to your server from a resource within the same VNet as your server. You can create a virtual machine and add it to the VNet created with your Azure Database for PostgreSQL flexible server instance.
-If you created your flexible server with *Public access (allowed IP addresses)*, you can add your local IP address to the list of firewall rules on your server.
+If you created your Azure Database for PostgreSQL flexible server instance with *Public access (allowed IP addresses)*, you can add your local IP address to the list of firewall rules on your server.
The following example shows how to connect to your server using the psql command-line interface. Use the `sslmode=verify-full` connection string setting to enforce TLS/SSL certificate verification. Pass the local certificate file path to the `sslrootcert` parameter.
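+
+A hedged sketch of such a connection, with a placeholder server name and the certificate saved to `c:\ssl` as in this tutorial:
+
+```bash
+psql "host=mydemoserver.postgres.database.azure.com port=5432 dbname=postgres user=myadmin sslmode=verify-full sslrootcert=c:\ssl\DigiCertGlobalRootCA.crt.pem"
+```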
The following example shows how to connect to your server using the psql command
## Ensure your application or framework supports TLS connections
-Some application frameworks that use PostgreSQL for their database services do not enable TLS by default during installation. Your PostgreSQL server enforces TLS connections but if the application is not configured for TLS, the application may fail to connect to your database server. Consult your application's documentation to learn how to enable TLS connections.
+Some application frameworks that use PostgreSQL for their database services do not enable TLS by default during installation. Your Azure Database for PostgreSQL flexible server instance enforces TLS connections but if the application is not configured for TLS, the application may fail to connect to your database server. Consult your application's documentation to learn how to enable TLS connections.
## Next steps - [Create and manage Azure Database for PostgreSQL - Flexible Server virtual network using Azure CLI](./how-to-manage-virtual-network-cli.md).
postgresql How To Connect To Data Factory Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-connect-to-data-factory-private-endpoint.md
+
+ Title: Connect to Azure Data Factory privately networked pipeline using Azure Private Link
+description: This article describes how to connect Azure Database for PostgreSQL - Flexible Server to an Azure-hosted Data Factory pipeline via Private Link.
+++ Last updated : 12/22/2023+++++
+# Connect to an Azure Data Factory privately networked pipeline with Azure Database for PostgreSQL - Flexible Server by using Azure Private Link
++
+In this article, you connect Azure Database for PostgreSQL flexible server to an Azure Data Factory pipeline via Azure Private Link.
+
+[Azure Data Factory](../../data-factory/introduction.md) is a fully managed, serverless solution to ingest and transform data. An Azure [integration runtime](../../data-factory/concepts-integration-runtime.md#azure-integration-runtime) supports connecting to data stores and compute services with public accessible endpoints. When you enable a managed virtual network, an integration runtime supports connecting to data stores by using the Azure Private Link service in a private network environment.
+
+Data Factory offers three types of integration runtimes:
+
+- Azure
+- Self-hosted
+- Azure-SQL Server Integration Services (Azure-SSIS)
+
+Choose the type that best serves your data integration capabilities and network environment requirements.
+
+## Prerequisites
+
+- An Azure Database for PostgreSQL flexible server instance that's [privately networked via Azure Private Link](../flexible-server/concepts-networking-private-link.md)
+- An Azure integration runtime within a [Data Factory managed virtual network](../../data-factory/data-factory-private-link.md)
+
+## Create a private endpoint in Data Factory
+
+An Azure Database for PostgreSQL connector currently supports *public connectivity only*. When you use an Azure Database for PostgreSQL connector in Azure Data Factory, you might get an error when you try to connect to a privately networked instance of Azure Database for PostgreSQL flexible server.
+
+To work around this limitation, you can use the Azure CLI to create a private endpoint first. Then you can use the Data Factory user interface with the Azure Database for PostgreSQL connector to create a connection between privately networked Azure Database for PostgreSQL flexible server and Azure Data Factory in a managed virtual network.
+
+The following example creates a private endpoint in Azure Data Factory. Substitute the placeholders *subscription_id*, *resource_group_name*, *azure_data_factory_name*, *endpoint_name*, and *flexible_server_name* with your own values.
+
+```azurecli
+az resource create --id /subscriptions/<subscription_id>/resourceGroups/<resource_group_name>/providers/Microsoft.DataFactory/factories/<azure_data_factory_name>/managedVirtualNetworks/default/managedPrivateEndpoints/<endpoint_name> --properties '
+{
+ "privateLinkResourceId": "/subscriptions/<subscription_id>/resourceGroups/<resource_group_name>/providers/Microsoft.DBforPostgreSQL/flexibleServers/<flexible_server_name>",
+ "groupId": "postgresqlServer"
+}'
+```
+> [!NOTE]
+> An alternative command to create a private endpoint in Data Factory by using the Azure CLI is [az datafactory managed-private-endpoint create](/cli/azure/datafactory/managed-private-endpoint).
+
+After you successfully run the preceding command, you can view the private endpoint in the Azure portal by going to **Data Factory** > **Managed private endpoints**. The following screenshot shows an example.
++
+## Approve a private endpoint
+
+After you provision a private endpoint, you can approve it by following the **Manage approvals in Azure portal** link in the endpoint details. It takes several minutes for Data Factory to discover that the private endpoint is approved.
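+
+If you prefer to approve from the command line, one possible route is the generic `az network private-endpoint-connection approve` command with the connection's resource ID; whether this path is supported for your resource type is an assumption to verify, and the ID is a placeholder:
+
+```azurecli-interactive
+az network private-endpoint-connection approve \
+  --id "/subscriptions/<subscription_id>/resourceGroups/<resource_group_name>/providers/Microsoft.DBforPostgreSQL/flexibleServers/<flexible_server_name>/privateEndpointConnections/<connection_name>" \
+  --description "Approved from the CLI"
+```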
+
+## Add a networked server data source in Data Factory
+
+When provisioning succeeds and the endpoint is approved, you can create a connection to the Azure Database for PostgreSQL flexible server instance by using the Azure Database for PostgreSQL connector in Data Factory.
+
+In the preceding steps, when you selected the server for which you created the private endpoint, the private endpoint was also selected automatically.
+
+1. Select a database, enter a username and password, and select **SSL** as the encryption method. The following screenshot shows an example.
+
+ :::image type="content" source="./media/how-to-connect-to-data-factory-private-endpoint/data-factory-data-source-connection.png" alt-text="Example screenshot of connection properties." lightbox="./media/how-to-connect-to-data-factory-private-endpoint/data-factory-data-source-connection.png":::
+
+1. Select **Test connection**. A **Connection successful** message should appear next to the **Test connection** button.
+
+## Next step
+
+> [!div class="nextstepaction"]
+> [Networking with Private Link in Azure Database for PostgreSQL - Flexible Server](concepts-networking-private-link.md)
postgresql How To Connect With Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-connect-with-managed-identity.md
Title: Connect with Managed Identity - Azure Database for PostgreSQL - Flexible Server
-description: Learn about how to connect and authenticate using Managed Identity for authentication with Azure Database for PostgreSQL Flexible Server
+ Title: Connect with managed identity
+description: Learn about how to connect and authenticate using managed identity for authentication with Azure Database for PostgreSQL - Flexible Server.
-# Connect with Managed Identity to Azure Database for PostgreSQL Flexible Server
+# Connect with managed identity to Azure Database for PostgreSQL - Flexible Server
[!INCLUDE [applies-to-postgresql-Flexible-server](../includes/applies-to-postgresql-Flexible-server.md)]
-You can use both system-assigned and user-assigned managed identities to authenticate to Azure Database for PostgreSQL. This article shows you how to use a system-assigned managed identity for an Azure Virtual Machine (VM) to access an Azure Database for PostgreSQL server. Managed Identities are automatically managed by Azure and enable you to authenticate to services that support Microsoft Entra authentication without needing to insert credentials into your code.
+You can use both system-assigned and user-assigned managed identities to authenticate to Azure Database for PostgreSQL flexible server. This article shows you how to use a system-assigned managed identity for an Azure Virtual Machine (VM) to access an Azure Database for PostgreSQL flexible server instance. Managed Identities are automatically managed by Azure and enable you to authenticate to services that support Microsoft Entra authentication without needing to insert credentials into your code.
You learn how to:-- Grant your VM access to an Azure Database for PostgreSQL Flexible server-- Create a user in the database that represents the VM's system-assigned identity-- Get an access token using the VM identity and use it to query an Azure Database for PostgreSQL Flexible server-- Implement the token retrieval in a C# example application
+- Grant your VM access to an Azure Database for PostgreSQL flexible server instance.
+- Create a user in the database that represents the VM's system-assigned identity.
+- Get an access token using the VM identity and use it to query an Azure Database for PostgreSQL flexible server instance.
+- Implement the token retrieval in a C# example application.
## Prerequisites - If you're not familiar with the managed identities for Azure resources feature, see this [overview](../../../articles/active-directory/managed-identities-azure-resources/overview.md). If you don't have an Azure account, [sign up for a free account](https://azure.microsoft.com/free/) before you continue. - To do the required resource creation and role management, your account needs "Owner" permissions at the appropriate scope (your subscription or resource group). If you need assistance with a role assignment, see [Assign Azure roles to manage access to your Azure subscription resources](../../../articles/role-based-access-control/role-assignments-portal.md). - You need an Azure VM (for example, running Ubuntu Linux) that you'd like to use to access your database using Managed Identity-- You need an Azure Database for PostgreSQL database server that has [Microsoft Entra authentication](how-to-configure-sign-in-azure-ad-authentication.md) configured
+- You need an Azure Database for PostgreSQL flexible server instance that has [Microsoft Entra authentication](how-to-configure-sign-in-azure-ad-authentication.md) configured
- To follow the C# example, first complete the guide on how to [Connect with C#](connect-csharp.md)
## Create a system-assigned managed identity for your VM
Retrieve the application ID for the system-assigned managed identity, which you'
az ad sp list --display-name vm-name --query [*].appId --out tsv
```
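If the VM doesn't have a system-assigned managed identity enabled yet, a minimal, hedged sketch of turning it on with the Azure CLI follows; the resource group and VM names are placeholders.

```bash
# Enable the system-assigned managed identity on an existing VM (placeholder names).
az vm identity assign --resource-group <resource_group> --name <vm_name>
```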
-## Create a PostgreSQL user for your Managed Identity
+## Create an Azure Database for PostgreSQL flexible server user for your Managed Identity
-Now, connect as the Microsoft Entra administrator user to your PostgreSQL database, and run the following SQL statements, replacing `CLIENT_ID` with the client ID you retrieved for your system-assigned managed identity:
+Now, connect as the Microsoft Entra administrator user to your Azure Database for PostgreSQL flexible server database, and run the following SQL statements, replacing `<identity_name>` with the name of the resources for which you created a system-assigned managed identity:
```sql
select * from pgaadauth_create_principal('<identity_name>', false, false);
```
-For more information on managing Microsoft Entra ID enabled database roles, see [how to manage Microsoft Entra ID enabled PostgreSQL roles](./how-to-manage-azure-ad-users.md)
+Success looks like:
+```sql
+ pgaadauth_create_principal
+--
+ Created role for "<identity_name>"
+(1 row)
+```
+
+For more information on managing Microsoft Entra ID enabled database roles, see [how to manage Microsoft Entra ID enabled Azure Database for PostgreSQL - Flexible Server roles](./how-to-manage-azure-ad-users.md)
The managed identity now has access when authenticating with the identity name as a role name and the Microsoft Entra token as a password.
+> [!Note]
+> If the managed identity is not valid, an error is returned: `ERROR: Could not validate AAD user <ObjectId> because its name is not found in the tenant. [...]`.
+
## Retrieve the access token from the Azure Instance Metadata service
Your application can now retrieve an access token from the Azure Instance Metadata service and use it for authenticating with the database.
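As a rough, hedged sketch of this step, the token can be requested from the Instance Metadata Service endpoint and passed to `psql` as the password; this assumes `curl` and `jq` are installed on the VM, and the server, database, and identity names are placeholders.

```bash
# Request an access token for Azure Database for PostgreSQL from the Instance Metadata Service.
export PGPASSWORD=$(curl -s 'http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https%3A%2F%2Fossrdbms-aad.database.windows.net' \
  -H 'Metadata: true' | jq -r '.access_token')

# Connect with the identity name as the role name and the token as the password (placeholder names).
psql "host=<server_name>.postgres.database.azure.com port=5432 dbname=<database_name> user=<identity_name> sslmode=require"
```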
You're now connected to the database you configured earlier.
## Connect using Managed Identity in C#
-This section shows how to get an access token using the VM's user-assigned managed identity and use it to call Azure Database for PostgreSQL. Azure Database for PostgreSQL natively supports Microsoft Entra authentication, so it can directly accept access tokens obtained using managed identities for Azure resources. When creating a connection to PostgreSQL, you pass the access token in the password field.
+This section shows how to get an access token using the VM's user-assigned managed identity and use it to call Azure Database for PostgreSQL flexible server. Azure Database for PostgreSQL flexible server natively supports Microsoft Entra authentication, so it can directly accept access tokens obtained using managed identities for Azure resources. When creating a connection to Azure Database for PostgreSQL flexible server, you pass the access token in the password field.
-Here's a .NET code example of opening a connection to PostgreSQL using an access token. This code must run on the VM to use the system-assigned managed identity to obtain an access token from Microsoft Entra ID. Replace the values of HOST, USER, DATABASE, and CLIENT_ID.
+Here's a .NET code example of opening a connection to Azure Database for PostgreSQL flexible server using an access token. This code must run on the VM to use the system-assigned managed identity to obtain an access token from Microsoft Entra ID. Replace the values of HOST, USER (with `<identity_name>`), and DATABASE.
```csharp
using System;
Postgres version: PostgreSQL 11.11, compiled by Visual C++ build 1800, 64-bit
## Next steps
-- Review the overall concepts for [Microsoft Entra authentication with Azure Database for PostgreSQL](concepts-azure-ad-authentication.md)
+- Review the overall concepts for [Microsoft Entra authentication with Azure Database for PostgreSQL - Flexible Server](concepts-azure-ad-authentication.md)
postgresql How To Cost Optimization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-cost-optimization.md
Title: How to optimize costs in Azure Database for Postgres Flexible Server
-description: This article provides a list of cost optimization recommendations
+ Title: How to optimize costs
+description: This article provides a list of cost optimization recommendations.
Last updated 4/13/2023
-# How to optimize costs in Azure Database for Postgres Flexible Server
+# How to optimize costs in Azure Database for PostgreSQL - Flexible Server
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-Azure Database for PostgreSQL is a relational database service in the Microsoft cloud based on the [PostgreSQL Community Edition.](https://www.postgresql.org/). It's a fully managed database as a service offering that can handle mission-critical workloads with predictable performance and dynamic scalability.
+Azure Database for PostgreSQL flexible server is a relational database service in the Microsoft cloud based on the [PostgreSQL Community Edition](https://www.postgresql.org/). It's a fully managed database as a service offering that can handle mission-critical workloads with predictable performance and dynamic scalability.
-This article provides a list of recommendations for optimizing Azure Postgres Flexible Server cost. The list includes design considerations, a configuration checklist, and recommended database settings to help you optimize your workload.
+This article provides a list of recommendations for optimizing Azure Database for PostgreSQL flexible server cost. The list includes design considerations, a configuration checklist, and recommended database settings to help you optimize your workload.
>[!div class="checklist"] > * Leverage reserved capacity pricing.
This article provides a list of recommendations for optimizing Azure Postgres Fl
## 1. Use reserved capacity pricing
-Azure Postgres reserved capacity pricing allows committing to a specific capacity for **1-3** **years**, saving costs for customers using Azure Database for PostgreSQL service. The cost savings compared to pay-as-you-go pricing can be significant, depending on the amount of capacity reserved and the length of the term. Customers can purchase reserved capacity in increments of vCores and storage. Reserved capacity can cover costs for Azure Database for PostgreSQL servers in the same region, applied to the customer's Azure subscription. Reserved Pricing for Azure Postgres Flexible Server offers cost savings up to 40% for 1 year and up to 60% for 3-year commitments, for customers who reserve capacity. For more details, please refer Pricing Calculator | Microsoft Azure
-To learn more, refer [What are Azure Reservations?](../../cost-management-billing/reservations/save-compute-costs-reservations.md)
+Azure Database for PostgreSQL flexible server reserved capacity pricing allows committing to a specific capacity for **1-3 years**, saving costs for customers using Azure Database for PostgreSQL flexible server. The cost savings compared to pay-as-you-go pricing can be significant, depending on the amount of capacity reserved and the length of the term. Customers can purchase reserved capacity in increments of vCores and storage. Reserved capacity can cover costs for Azure Database for PostgreSQL flexible server instances in the same region, applied to the customer's Azure subscription. Reserved pricing for Azure Database for PostgreSQL flexible server offers cost savings of up to 40% for 1-year and up to 60% for 3-year commitments for customers who reserve capacity. For more details, see Pricing Calculator | Microsoft Azure. To learn more, see [What are Azure Reservations?](../../cost-management-billing/reservations/save-compute-costs-reservations.md)
## 2. Scale compute up/down
-Scaling up or down the resources of an Azure Database for PostgreSQL server can help you optimize costs. Adjust the vCores and storage as needed to only pay for necessary resources. Scaling can be done through the Azure portal, Azure CLI, or Azure REST API. Scaling compute resources up or down can be done at any time and requires server restart. It's good practice to monitor your database usage patterns and adjust the resources accordingly to optimize costs and ensure performance. For more details, please refer Compute and Storage options in Azure Database for PostgreSQL - Flexible Server.
+Scaling up or down the resources of an Azure Database for PostgreSQL flexible server instance can help you optimize costs. Adjust the vCores and storage as needed to only pay for necessary resources. Scaling can be done through the Azure portal, Azure CLI, or Azure REST API. Scaling compute resources up or down can be done at any time and requires server restart. It's good practice to monitor your database usage patterns and adjust the resources accordingly to optimize costs and ensure performance. For more details, see Compute and Storage options in Azure Database for PostgreSQL flexible server.
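As an illustrative, hedged example (the tier and SKU values are placeholders, not sizing recommendations), a scale operation with the Azure CLI might look like this:

```bash
# Scale an existing flexible server to a different tier and compute size; the server restarts.
az postgres flexible-server update \
  --resource-group <resource_group> \
  --name <server_name> \
  --tier GeneralPurpose \
  --sku-name Standard_D4s_v3
```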
Configure non-production environments conservatively: configure idle dev/test/stage environments with cost-efficient SKUs. Burstable SKUs are ideal for workloads that don't need continuous full capacity.
-To learn more, refer [Scale operations in Flexible Server](how-to-scale-compute-storage-portal.md)
+To learn more, see [Scale operations in Azure Database for PostgreSQL flexible server](how-to-scale-compute-storage-portal.md)
## 3. Using Azure advisor recommendations
Azure Advisor is a free service that provides recommendations to help optimize y
For Azure Database for PostgreSQL, Azure Advisor can provide recommendations on how to improve the performance, availability, and cost-effectiveness of your database. For example, it can suggest scaling the database up or down, using read-replicas to offload read-intensive workloads, or switching to reserved capacity pricing to reduce costs. Azure Advisor can also recommend security best practices, such as enabling encryption at rest, or enabling network security rules to limit incoming traffic to the database.
-You can access the recommendations provided by Azure Advisor through the Azure portal, where you can view and implement the recommendations with just a few clicks. Implementing Azure Advisor recommendations can help you optimize your Azure resources and reduce costs. For more details, refer Azure Advisor for PostgreSQL - Flexible Server.
-
-To learn more, refer [Azure Advisor for PostgreSQL](concepts-azure-advisor-recommendations.md)
+You can access the recommendations provided by Azure Advisor through the Azure portal, where you can view and implement the recommendations with just a few clicks. Implementing Azure Advisor recommendations can help you optimize your Azure resources and reduce costs. To learn more, see [Azure Advisor for Azure Database for PostgreSQL - Flexible Server](concepts-azure-advisor-recommendations.md)
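As a quick, hedged sketch, cost recommendations can also be listed from the Azure CLI for the currently selected subscription:

```bash
# List Azure Advisor cost recommendations for the selected subscription.
az advisor recommendation list --category Cost --output table
```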
## 4. Evaluate HA (high availability) and DR (disaster recovery) requirements
-Azure database for PostgreSQL ΓÇô Flexible Server has **built-in** node and storage resiliency at no extra cost to you. Node resiliency allows your Flexible Server to automatically failover to a healthy VM with no data loss (that is, RPO zero) and with no connection string changes except that your application must reconnect. Similarly, the data and transaction logs are stored in three synchronous copies, and it automatically detects storage corruption and takes the corrective action. For most Dev/Test workloads, and for many production workloads, this configuration should suffice.
+Azure Database for PostgreSQL flexible server has **built-in** node and storage resiliency at no extra cost to you. Node resiliency allows your Azure Database for PostgreSQL flexible server instance to automatically fail over to a healthy VM with no data loss (that is, RPO zero) and with no connection string changes except that your application must reconnect. Similarly, the data and transaction logs are stored in three synchronous copies, and it automatically detects storage corruption and takes the corrective action. For most Dev/Test workloads, and for many production workloads, this configuration should suffice.
If your workload requires AZ resiliency and lower RTO, you can enable High Availability (HA) with in-zone or cross-AZ standby. This doubles your deployment costs, but it also provides a higher SLA. To achieve geo-resiliency for your application, you can set up GeoBackup for a lower cost but with a higher RTO. Alternatively, you can set up GeoReadReplica for double the cost, which offers an RTO in minutes in the event of a geo-disaster.
-Key take away is to evaluate the requirement of your full application stack and then choose the right configuration for the Flexible Server. For example, if your application isn't AZ resilient, there's nothing to be gained by configuring Flexible Server in AZ resilient configuration.
+The key takeaway is to evaluate the requirements of your full application stack and then choose the right configuration for the Azure Database for PostgreSQL flexible server instance. For example, if your application isn't AZ resilient, there's nothing to be gained by configuring Azure Database for PostgreSQL flexible server in an AZ-resilient configuration.
-To learn more, refer [High availability architecture in Flexible Server](concepts-high-availability.md)
+To learn more, see [High availability architecture in Flexible Server](concepts-high-availability.md)
## 5. Consolidate databases and servers
-Consolidating databases can be a cost-saving strategy for Azure Database for PostgreSQL Flexible Server. Consolidating multiple databases into a single Flexible Server instance can reduce the number of instances and overall cost of running Azure Database for PostgreSQL. Follow these steps to consolidate your databases and save costs:
+Consolidating databases can be a cost-saving strategy for Azure Database for PostgreSQL flexible server. Consolidating multiple databases into a single Azure Database for PostgreSQL flexible server instance can reduce the number of instances and overall cost of running Azure Database for PostgreSQL flexible server. Follow these steps to consolidate your databases and save costs:
1. Assess your server: Identify the server that can be consolidated, considering the database's size, geo-region, configuration (CPU, memory, IOPS), performance requirements, workload type, and data consistency needs.
-1. Create a new Flexible Server instance: Create a new Flexible Server instance with enough vCPUs, memory, and storage to support the consolidated databases.
-1. Reuse an existing Flexible Server instance: In case you already have an existing server, make sure it has enough vCPUs, memory, and storage to support the consolidated databases.
-1. Migrate the databases: Migrate the databases to the new Flexible Server instance. You can use tools such as pg_dump and pg_restore to export and import databases.
-1. Monitor performance: Monitor the performance of the consolidated Flexible Server instance and adjust the resources as needed to ensure optimal performance.
+1. Create a new Azure Database for PostgreSQL flexible server instance: Create a new Azure Database for PostgreSQL flexible server instance with enough vCPUs, memory, and storage to support the consolidated databases.
+1. Reuse an existing Azure Database for PostgreSQL flexible server instance: In case you already have an existing server, make sure it has enough vCPUs, memory, and storage to support the consolidated databases.
+1. Migrate the databases: Migrate the databases to the new Azure Database for PostgreSQL flexible server instance. You can use tools such as pg_dump and pg_restore to export and import databases.
+1. Monitor performance: Monitor the performance of the consolidated Azure Database for PostgreSQL flexible server instance and adjust the resources as needed to ensure optimal performance.
-Consolidating databases can help you save costs by reducing the number of Flexible Server instances you need to run and by enabling you to use larger instances that are more cost-effective than smaller instances. It is important to evaluate the impact of consolidation on your databases' performance and ensure that the consolidated Flexible Server instance is appropriately sized to meet all database needs.
+Consolidating databases can help you save costs by reducing the number of Azure Database for PostgreSQL flexible server instances you need to run and by enabling you to use larger instances that are more cost-effective than smaller instances. It is important to evaluate the impact of consolidation on your databases' performance and ensure that the consolidated Azure Database for PostgreSQL flexible server instance is appropriately sized to meet all database needs.
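A minimal, hedged sketch of moving one database to the consolidated server with the standard PostgreSQL client tools follows; server, database, and user names are placeholders.

```bash
# Export the source database in custom format.
pg_dump -Fc -h <source_server>.postgres.database.azure.com -U <admin_user> -d <source_db> -f source_db.dump

# Create the target database on the consolidated server, then restore the dump into it.
createdb -h <target_server>.postgres.database.azure.com -U <admin_user> <target_db>
pg_restore -h <target_server>.postgres.database.azure.com -U <admin_user> -d <target_db> --no-owner source_db.dump
```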
-To learn more, refer [Improve the performance of Azure applications by using Azure Advisor](../../advisor/advisor-reference-performance-recommendations.md#databases)
+To learn more, see [Improve the performance of Azure applications by using Azure Advisor](../../advisor/advisor-reference-performance-recommendations.md#databases)
## 6. Place test servers in cost-efficient geo-regions
-Creating a test server in a cost-efficient Azure region can be a cost-saving strategy for Azure Database for PostgreSQL Flexible Server. By creating a test server in a region with lower cost of computing resources, you can reduce the cost of running your test server and minimize the cost of running Azure Database for PostgreSQL. Here are a few steps to help you create a test server in a cost-efficient Azure region:
+Creating a test server in a cost-efficient Azure region can be a cost-saving strategy for Azure Database for PostgreSQL flexible server. By creating a test server in a region with lower cost of computing resources, you can reduce the cost of running your test server and minimize the cost of running Azure Database for PostgreSQL flexible server. Here are a few steps to help you create a test server in a cost-efficient Azure region:
1. Identify a cost-efficient region: Identify an Azure region with lower cost of computing resources.
-1. Create a new Flexible Server instance: Create a new Flexible Server instance in the cost-efficient region with the right configuration for your test environment.
-1. Migrate test data: Migrate the test data to the new Flexible Server instance. You can use tools such as pg_dump and pg_restore to export and import databases.
+1. Create a new Azure Database for PostgreSQL flexible server instance: Create a new Azure Database for PostgreSQL flexible server instance in the cost-efficient region with the right configuration for your test environment.
+1. Migrate test data: Migrate the test data to the new Azure Database for PostgreSQL flexible server instance. You can use tools such as pg_dump and pg_restore to export and import databases.
1. Monitor performance: Monitor the performance of the test server and adjust the resources as needed to ensure optimal performance.
-By creating a test server in a cost-efficient Azure region, you can reduce the cost of running your test server and minimize the cost of running Azure Database for PostgreSQL. It is important to evaluate the impact of the region on your test server's performance and your organization's specific regional requirements. This ensures that network latency and data transfer costs are acceptable for your use case.
+By creating a test server in a cost-efficient Azure region, you can reduce the cost of running your test server and minimize the cost of running Azure Database for PostgreSQL flexible server. It's important to evaluate the impact of the region on your test server's performance and your organization's specific regional requirements. This ensures that network latency and data transfer costs are acceptable for your use case.
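A hedged example of creating such a test server with a low-cost Burstable SKU in a chosen region (all values are placeholders, not sizing guidance):

```bash
# Create a small Burstable test server in a cost-efficient region.
az postgres flexible-server create \
  --resource-group <resource_group> \
  --name <test_server_name> \
  --location <cost_efficient_region> \
  --tier Burstable \
  --sku-name Standard_B1ms \
  --storage-size 32
```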
-To learn more, refer [Azure regions](/azure/architecture/framework/cost/design-regions)
+To learn more, see [Azure regions](/azure/architecture/framework/cost/design-regions)
## 7. Starting and stopping servers
-Starting and stopping servers can be a cost-saving strategy for Azure Database for PostgreSQL Flexible Server. By only running the server when you need it, you can reduce the cost of running Azure Database for PostgreSQL. Here are a few steps to help you start and stop servers and save costs:
+Starting and stopping servers can be a cost-saving strategy for Azure Database for PostgreSQL flexible server. By only running the server when you need it, you can reduce the cost of running Azure Database for PostgreSQL flexible server. Here are a few steps to help you start and stop servers and save costs:
-1. Identify the server: Identify the Flexible Server instance that you want to start and stop.
-1. Start the server: Start the Flexible Server instance when you need it. You can start the server using the Azure portal, Azure CLI, or Azure REST API.
-1. Stop the server: Stop the Flexible Server instance when you don't need it. You can stop the server using the Azure portal, Azure CLI, or Azure REST API.
+1. Identify the server: Identify the Azure Database for PostgreSQL flexible server instance that you want to start and stop.
+1. Start the server: Start the Azure Database for PostgreSQL flexible server instance when you need it. You can start the server using the Azure portal, Azure CLI, or Azure REST API.
+1. Stop the server: Stop the Azure Database for PostgreSQL flexible server instance when you don't need it. You can stop the server using the Azure portal, Azure CLI, or Azure REST API.
1. Also, if a server has been in a stopped (or idle) state for several consecutive weeks, you can consider dropping the server after the required due diligence.
-By starting and stopping the server as needed, you can reduce the cost of running Azure Database for PostgreSQL. To ensure smooth database performance, it is crucial to evaluate the impact of starting and stopping the server and have a reliable process in place for these actions as required. To learn more, refer Stop/Start an Azure Database for PostgreSQL - Flexible Server.
-
-To learn more, refer [Stop/Start Flexible Server Instance](how-to-stop-start-server-portal.md)
+By starting and stopping the server as needed, you can reduce the cost of running Azure Database for PostgreSQL flexible server. To ensure smooth database performance, it is crucial to evaluate the impact of starting and stopping the server and have a reliable process in place for these actions as required. To learn more, see [Stop/start an Azure Database for PostgreSQL - Flexible Server instance](how-to-stop-start-server-portal.md).
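A brief, hedged sketch of the stop and start operations with the Azure CLI (resource names are placeholders):

```bash
# Stop the server when it isn't needed, and start it again before use.
az postgres flexible-server stop --resource-group <resource_group> --name <server_name>
az postgres flexible-server start --resource-group <resource_group> --name <server_name>
```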
## 8. Archive old data for cold storage
-Archiving infrequently accessed data to Azure archive store (while still keeping access) can help reduce costs. Export data from PostgreSQL to Azure Archived Storage and store it in a lower-cost storage tier.
+Archiving infrequently accessed data to Azure archive storage (while still keeping access) can help reduce costs. Export data from Azure Database for PostgreSQL flexible server to Azure Blob Storage and store it in a lower-cost storage tier.
-1. Setup Azure Blob Storage account and create a container for your database backups.
+1. Set up Azure Blob Storage account and create a container for your database backups.
1. Use `pg_dump` to export the old data to a file.
1. Use the Azure CLI or PowerShell to upload the exported file to your Blob Storage container.
1. Set up a retention policy on the Blob Storage container to automatically delete old backups.
Archiving infrequently accessed data to Azure archive store (while still keeping
You can also use Azure Data Factory to automate this process.
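As a rough, hedged sketch of the export and upload steps above (table, server, and storage names are placeholders):

```bash
# Export the old data to a dump file.
pg_dump -Fc -h <server_name>.postgres.database.azure.com -U <admin_user> -d <database_name> -t <old_table> -f old_data.dump

# Upload the dump to a Blob Storage container using the Azure CLI.
az storage blob upload \
  --account-name <storage_account> \
  --container-name <container_name> \
  --name old_data.dump \
  --file old_data.dump \
  --auth-mode login
```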
-To learn more, refer [Migrate your PostgreSQL database by using dump and restore](../migrate/how-to-migrate-using-dump-and-restore.md)
+To learn more, see [Migrate your Azure Database for PostgreSQL flexible server database by using dump and restore](../migrate/how-to-migrate-using-dump-and-restore.md)
## Tradeoffs for cost
-As you design your application database on Azure Database for PostgerSQL Flexible Server, consider tradeoffs between cost optimization and other aspects of the design, such as security, scalability, resilience, and operability.
+As you design your application database on Azure Database for PostgreSQL flexible server, consider tradeoffs between cost optimization and other aspects of the design, such as security, scalability, resilience, and operability.
**Cost vs reliability**
> Cost has a direct correlation with reliability.
postgresql How To Create Server Customer Managed Key Azure Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-create-server-customer-managed-key-azure-api.md
Title: Create and manage Azure Database for PostgreSQL - Flexible Server with data encrypted by Customer Managed Keys using Azure REST API
-description: Create and manage Azure Database for PostgreSQL - Flexible Server with data encrypted by Customer Managed Keys using Azure REST API
+ Title: Create and manage with data encrypted by customer managed keys using Azure REST API
+description: Create and manage Azure Database for PostgreSQL - Flexible Server with data encrypted by Customer Managed Keys using Azure REST API.
Last updated 04/13/2023
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-In this article, you learn how to create Azure Database for PostgreSQL with data encrypted by Customer Managed Keys (CMK) by using the Azure REST API. For more information on encryption with Customer Managed Keys (CMK), see [overview](../flexible-server/concepts-data-encryption.md).
+In this article, you learn how to create an Azure Database for PostgreSQL flexible server instance with data encrypted by customer managed keys (CMK) by using the Azure REST API. For more information on encryption with Customer Managed Keys (CMK), see [overview](../flexible-server/concepts-data-encryption.md).
-## Setup Customer Managed Key during Server Creation
+## Set up customer managed key during server creation
Prerequisites:
- You must have an Azure subscription and be an administrator on that subscription.
-- Azure managed identity in region where Postgres Flex Server will be created.
-- Key Vault with key in region where Postgres Flex Server will be created. Follow this [tutorial](../../key-vault/general/quick-create-portal.md) to create Key Vault and generate key.
+- Azure managed identity in region where the Azure Database for PostgreSQL flexible server instance will be created.
+- Key Vault with key in region where the Azure Database for PostgreSQL flexible server instance will be created. Follow this [tutorial](../../key-vault/general/quick-create-portal.md) to create Key Vault and generate key.
> [!NOTE]
> API examples below are based on 2022-12-01 API version
-You can create a PostgreSQL Flexible Server encrypted with Customer Managed Key by using the [create API](/rest/api/postgresql/flexibleserver/servers/create?tabs=HTTP):
+You can create an Azure Database for PostgreSQL flexible server instance encrypted with customer managed key by using the [create API](/rest/api/postgresql/flexibleserver/servers/create?tabs=HTTP):
```rest
PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DBForPostgreSql/flexibleServers/{serverName}?api-version=2022-12-01
You can also programmatically fetch Key Vault Uri using [Azure REST API](/rest/a
## Next steps
-- [Flexible Server encryption with Customer Managed Key (CMK)](../flexible-server/concepts-data-encryption.md)
+- [Azure Database for PostgreSQL - Flexible Server encryption with customer managed key (CMK)](../flexible-server/concepts-data-encryption.md)
- [Microsoft Entra ID](../../active-directory-domain-services/overview.md)
postgresql How To Create Server Customer Managed Key Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-create-server-customer-managed-key-cli.md
Title: Create and manage Azure Database for PostgreSQL - Flexible Server with data encrypted by Customer Managed Keys using the Azure CLI
-description: Create and manage Azure Database for PostgreSQL - Flexible Server with data encrypted by Customer Managed Keys using the Azure CLI
+ Title: Create and manage with data encrypted by customer managed keys using the Azure CLI
+description: Create and manage Azure Database for PostgreSQL - Flexible Server with data encrypted by Customer Managed Keys using the Azure CLI.
Last updated 12/10/2022
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
> [!NOTE]
-> CLI examples below are based on 2.45.0 version of Azure Database for PostgreSQL - Flexible Server CLI libraries
+> CLI examples below are based on 2.45.0 version of Azure Database for PostgreSQL flexible server CLI libraries
-In this article, you learn how to create and manage Azure Database for PostgreSQL - Flexible Server with data encrypted by Customer Managed Keys using the Azure CLI. To learn more about Customer Managed Keys (CMK) feature with Azure Database for PostgreSQL - Flexible Server, see the [overview](concepts-data-encryption.md).
+In this article, you learn how to create and manage Azure Database for PostgreSQL flexible server with data encrypted by customer managed keys using the Azure CLI. To learn more about the customer managed keys (CMK) feature with Azure Database for PostgreSQL flexible server, see the [overview](concepts-data-encryption.md).
-## Setup Customer Managed Key during Server Creation
+## Set up customer managed key during server creation
Prerequisites:
- You must have an Azure subscription and be an administrator on that subscription.
-Follow the steps below to enable CMK while creating Postgres Flexible Server using Azure CLI.
+Follow the steps below to enable CMK while creating an Azure Database for PostgreSQL flexible server instance using Azure CLI.
1. Create a key vault and a key to use for a customer-managed key. Also enable purge protection and soft delete on the key vault.
Follow the steps below to enable CMK while creating Postgres Flexible Server usi
az keyvault create -g <resource_group> -n <vault_name> --location <azure_region> --enable-purge-protection true
```
-2. In the created Azure Key Vault, create the key that will be used for the data encryption of the Azure Database for PostgreSQL - Flexible server.
+2. In the created Azure Key Vault, create the key that will be used for the data encryption of the Azure Database for PostgreSQL flexible server instance.
```azurecli-interactive
keyIdentifier=$(az keyvault key create --name <key_name> -p software --vault-name <vault_name> --query key.kid -o tsv)
```
-3. Create Managed Identity which will be used to retrieve key from Azure Key Vault
+3. Create Managed Identity which will be used to retrieve key from Azure Key Vault.
```azurecli-interactive
identityPrincipalId=$(az identity create -g <resource_group> --name <identity_name> --location <azure_region> --query principalId -o tsv)
```
-4. Add access policy with key permissions of *wrapKey*,*unwrapKey*, *get*, *list* in Azure KeyVault to the managed identity we created above
+4. Add an access policy with key permissions of *wrapKey*, *unwrapKey*, *get*, and *list* in Azure Key Vault to the managed identity you created above.
```azurecli-interactive
az keyvault set-policy -g <resource_group> -n <vault_name> --object-id $identityPrincipalId --key-permissions wrapKey unwrapKey get list
```
-5. Finally, lets create Azure Database for PostgreSQL - Flexible Server with CMK based encryption enabled
+5. Finally, create an Azure Database for PostgreSQL flexible server instance with CMK based encryption enabled.
```azurecli-interactive
az postgres flexible-server create -g <resource_group> -n <postgres_server_name> --location <azure_region> --key $keyIdentifier --identity <identity_name>
```
-## Update Customer Managed Key on the CMK enabled Flexible Server
+## Update customer managed key on the CMK enabled Azure Database for PostgreSQL flexible server instance
Prerequisites:
- You must have an Azure subscription and be an administrator on that subscription.
-- Key Vault with key in region where Postgres Flex Server will be created. Follow this [tutorial](../../key-vault/general/quick-create-portal.md) to create Key Vault and generate key.
+- Key Vault with key in region where the Azure Database for PostgreSQL flexible server instance will be created. Follow this [tutorial](../../key-vault/general/quick-create-portal.md) to create Key Vault and generate key.
Follow the steps below to change/rotate the key or identity after creation of a server with data encryption.
-1. Change key/identity for data encryption for existing server, first lets get new key identifier
+1. Change the key/identity for data encryption for an existing server. First, get the new key identifier:
```azurecli-interactive
newKeyIdentifier=$(az keyvault key show --vault-name <vault_name> --name <key_name> --query key.kid -o tsv)
```
-2. Update server with new key and\or identity
+2. Update the server with the new key and/or identity.
```azurecli-interactive
az postgres flexible-server update --resource-group <resource_group> --name <server_name> --key $newKeyIdentifier --identity <identity_name>
```
postgresql How To Create Server Customer Managed Key Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-create-server-customer-managed-key-portal.md
Title: Create and manage Azure Database for PostgreSQL - Flexible Server with data encrypted by Customer Managed Keys using Azure Portal
-description: Create and manage Azure Database for PostgreSQL - Flexible Server with data encrypted by Customer Managed Keys using Azure Portal
+ Title: Create and manage with data encrypted by customer managed keys using Azure portal
+description: Create and manage Azure Database for PostgreSQL - Flexible Server with data encrypted by Customer Managed Keys using the Azure portal.
Last updated 12/12/2022
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-In this article, you learn how to create and manage Azure Database for PostgreSQL - Flexible Server with data encrypted by Customer Managed Keys using Azure portal. To learn more about Customer Managed Keys (CMK) feature with Azure Database for PostgreSQL - Flexible Server, see the [overview](concepts-data-encryption.md).
+In this article, you learn how to create and manage an Azure Database for PostgreSQL flexible server instance with data encrypted by customer managed keys using Azure portal. To learn more about the customer managed keys (CMK) feature with Azure Database for PostgreSQL flexible server, see the [overview](concepts-data-encryption.md).
-## Setup Customer Managed Key during Server Creation
+## Set up customer managed key during server creation
Prerequisites:
-- Microsoft Entra user managed identity in region where Postgres Flex Server will be created. Follow this [tutorial](../../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md) to create identity.
+- Microsoft Entra user managed identity in the region where the Azure Database for PostgreSQL flexible server instance will be created. Follow this [tutorial](../../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md) to create identity.
-- Key Vault with key in region where Postgres Flex Server will be created. Follow this [tutorial](../../key-vault/general/quick-create-portal.md) to create Key Vault and generate key. Follow [requirements section in concepts doc](concepts-data-encryption.md) for required Azure Key Vault settings
+- Key Vault with key in region where the Azure Database for PostgreSQL flexible server instance will be created. Follow this [tutorial](../../key-vault/general/quick-create-portal.md) to create Key Vault and generate key. Follow [requirements section in concepts doc](concepts-data-encryption.md) for required Azure Key Vault settings.
-Follow the steps below to enable CMK while creating Postgres Flexible Server using Azure portal.
+Follow the steps below to enable CMK while creating the Azure Database for PostgreSQL flexible server instance using Azure portal.
-1. Navigate to Azure Database for PostgreSQL - Flexible Server create pane via Azure portal
+1. Navigate to the Azure Database for PostgreSQL flexible server create pane via Azure portal.
-2. Provide required information on Basics and Networking tabs
+2. Provide required information on Basics and Networking tabs.
-3. Navigate to Security tab. On the screen, provide Microsoft Entra ID identity that has access to the Key Vault and Key in Key Vault in the same region where you're creating this server
+3. Navigate to Security tab. On the screen, provide Microsoft Entra ID identity that has access to the Key Vault and Key in Key Vault in the same region where you're creating this server.
-4. On Review Summary tab, make sure that you provided correct information in Security section and press Create button
+4. On Review Summary tab, make sure that you provided correct information in Security section and press Create button.
-5. Once it's finished, you should be able to navigate to Data Encryption screen for the server and update identity or key if necessary
+5. Once it's finished, you should be able to navigate to Data Encryption screen for the server and update identity or key if necessary.
-## Update Customer Managed Key on the CMK enabled Flexible Server
+## Update customer managed key on the CMK enabled Azure Database for PostgreSQL flexible server instance
Prerequisites:
-- Microsoft Entra user-managed identity in region where Postgres Flex Server will be created. Follow this [tutorial](../../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md) to create identity.
+- Microsoft Entra user-managed identity in region where the Azure Database for PostgreSQL flexible server instance will be created. Follow this [tutorial](../../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md) to create identity.
-- Key Vault with key in region where Postgres Flex Server will be created. Follow this [tutorial](../../key-vault/general/quick-create-portal.md) to create Key Vault and generate key.
+- Key Vault with key in region where the Azure Database for PostgreSQL flexible server instance will be created. Follow this [tutorial](../../key-vault/general/quick-create-portal.md) to create Key Vault and generate key.
-Follow the steps below to update CMK on CMK enabled Flexible Server using Azure portal:
+Follow the steps below to update CMK on CMK enabled Azure Database for PostgreSQL flexible server instance using Azure portal:
-1. Navigate to Azure Database for PostgreSQL - Flexible Server create a page via the Azure portal.
+1. Navigate to your existing Azure Database for PostgreSQL flexible server instance in the Azure portal.
-2. Navigate to Data Encryption screen under Security tab
+2. Navigate to Data Encryption screen under Security tab.
-3. Select different identity to connect to Azure Key Vault, remembering that this identity needs to have proper access rights to the Key Vault
+3. Select different identity to connect to Azure Key Vault, remembering that this identity needs to have proper access rights to the Key Vault.
4. Select a different key by choosing the subscription, Key Vault, and key from the dropdowns provided.
## Next steps
-- [Manage an Azure Database for PostgreSQL - Flexible Server by using Azure portal](how-to-manage-server-portal.md)
+- [Manage an Azure Database for PostgreSQL - Flexible Server instance by using Azure portal](how-to-manage-server-portal.md)
postgresql How To Create Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-create-users.md
Title: Create users - Azure Database for PostgreSQL - Flexible Server
-description: This article describes how you can create new user accounts to interact with an Azure Database for PostgreSQL - Flexible Server.
+ Title: Create users
+description: This article describes how you can create new user accounts to interact with an Azure Database for PostgreSQL - Flexible Server instance.
[!INCLUDE [applies-to-postgresql-Flexible-server](../includes/applies-to-postgresql-Flexible-server.md)]
-This article describes how you can create users within an Azure Database for PostgreSQL server.
+This article describes how you can create users within an Azure Database for PostgreSQL flexible server instance.
> [!NOTE]
-> Microsoft Entra authentication for PostgreSQL Flexible Server is currently in preview.
+> Microsoft Entra authentication for Azure Database for PostgreSQL flexible server is currently in preview.
If you want to learn how to create and manage Azure subscription users and their privileges, see the [Azure role-based access control (Azure RBAC) article](../../role-based-access-control/built-in-roles.md) or review [how to customize roles](../../role-based-access-control/custom-roles.md).
## The server admin account
-When you first created your Azure Database for PostgreSQL, you provided a server admin username and password. For more information, you can follow the [Quickstart](quickstart-create-server-portal.md) to see the step-by-step approach. Since the server admin user name is a custom name, you can locate the chosen server admin user name from the Azure portal.
+When you first created your Azure Database for PostgreSQL flexible server instance, you provided a server admin username and password. For more information, you can follow the [Quickstart](quickstart-create-server-portal.md) to see the step-by-step approach. Since the server admin user name is a custom name, you can locate the chosen server admin user name from the Azure portal.
-The Azure Database for PostgreSQL server is created with the three default roles defined. You can see these roles by running the command: `SELECT rolname FROM pg_roles;`
+The Azure Database for PostgreSQL flexible server instance is created with the three default roles defined. You can see these roles by running the command: `SELECT rolname FROM pg_roles;`
- azure_pg_admin
- azure_superuser
The Azure Database for PostgreSQL server is created with the three default roles
Your server admin user is a member of the azure_pg_admin role. However, the server admin account isn't part of the azure_superuser role. Since this service is a managed PaaS service, only Microsoft is part of the super user role.
-The PostgreSQL engine uses privileges to control access to database objects, as discussed in the [PostgreSQL product documentation](https://www.postgresql.org/docs/current/static/sql-createrole.html). In Azure Database for PostgreSQL, the server admin user is granted these privileges:
+The PostgreSQL engine uses privileges to control access to database objects, as discussed in the [PostgreSQL product documentation](https://www.postgresql.org/docs/current/static/sql-createrole.html). In Azure Database for PostgreSQL flexible server, the server admin user is granted these privileges:
- Sign in, NOSUPERUSER, INHERIT, CREATEDB, CREATEROLE, REPLICATION
The server admin user account can be used to create more users and add those users to the azure_pg_admin role. Also, the server admin account can be used to create less privileged users and roles that have access to individual databases and schemas.
-## How to create more admin users in Azure Database for PostgreSQL
+## How to create more admin users in Azure Database for PostgreSQL flexible server
1. Get the connection information and admin user name.
- You need the full server name and admin sign-in credentials to connect to your database server. You can easily find the server name and sign-in information from the server **Overview** page or the **Properties** page in the Azure portal.
+ You need the full server name and admin sign-in credentials to connect to your Azure Database for PostgreSQL flexible server instance. You can easily find the server name and sign-in information from the server **Overview** page or the **Properties** page in the Azure portal.
-1. Use the admin account and password to connect to your database server. Use your preferred client tool, such as pgAdmin or psql.
+1. Use the admin account and password to connect to your Azure Database for PostgreSQL flexible server instance. Use your preferred client tool, such as pgAdmin or psql.
If you're unsure of how to connect, see [the quickstart](./quickstart-create-server-portal.md).
1. Edit and run the following SQL code. Replace the placeholder value `<new_user>` with your new user name, and replace the placeholder password with your own strong password.
The server admin user account can be used to create more users and grant those u
GRANT azure_pg_admin TO <new_user>;
```
-## How to create database users in Azure Database for PostgreSQL
+## How to create database users in Azure Database for PostgreSQL flexible server
1. Get the connection information and admin user name.
- You need the full server name and admin sign-in credentials to connect to your database server. You can easily find the server name and sign-in information from the server **Overview** page or the **Properties** page in the Azure portal.
+ You need the full server name and admin sign-in credentials to connect to your Azure Database for PostgreSQL flexible server instance. You can easily find the server name and sign-in information from the server **Overview** page or the **Properties** page in the Azure portal.
-1. Use the admin account and password to connect to your database server. Use your preferred client tool, such as pgAdmin or psql.
+1. Use the admin account and password to connect to your Azure Database for PostgreSQL flexible server instance. Use your preferred client tool, such as pgAdmin or psql.
1. Edit and run the following SQL code. Replace the placeholder value `<db_user>` with your intended new user name and placeholder value `<newdb>` with your own database name. Replace the placeholder password with your own strong password.
- This SQL code below creates a new database, then it creates a new user in the PostgreSQL instance and grants connect privilege to the new database for that user.
+ This SQL code below creates a new database, then it creates a new user in the Azure Database for PostgreSQL flexible server instance and grants connect privilege to the new database for that user.
```sql
CREATE DATABASE <newdb>;
The server admin user account can be used to create more users and grant those u
Open the firewall for the IP addresses of the new users' machines to enable them to connect:
-- [Create and manage Azure Database for PostgreSQL firewall rules by using the Azure portal](how-to-manage-firewall-portal.md) or [Azure CLI](how-to-manage-firewall-cli.md).
+- [Create and manage Azure Database for PostgreSQL - Flexible Server firewall rules by using the Azure portal](how-to-manage-firewall-portal.md) or [Azure CLI](how-to-manage-firewall-cli.md).
- For more information regarding user account management, see PostgreSQL product documentation for [Database Roles and Privileges](https://www.postgresql.org/docs/current/static/user-manag.html), [GRANT Syntax](https://www.postgresql.org/docs/current/static/sql-grant.html), and [Privileges](https://www.postgresql.org/docs/current/static/ddl-priv.html).
postgresql How To Deploy Github Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-deploy-github-action.md
+
+ Title: "Quickstart: Connect with GitHub Actions"
+description: Use Azure Database for PostgreSQL - Flexible Server from a GitHub Actions workflow.
+++ Last updated : 04/28/2023++++++
+# Quickstart: Use GitHub Actions to connect to Azure Database for PostgreSQL - Flexible Server
+++
+Get started with [GitHub Actions](https://docs.github.com/en/actions) by using a workflow to deploy database updates to [Azure Database for PostgreSQL flexible server](https://azure.microsoft.com/services/postgresql/).
+
+## Prerequisites
+
+You need:
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- A GitHub repository with sample data (`data.sql`). If you don't have a GitHub account, [sign up for free](https://github.com/join).
+- An Azure Database for PostgreSQL flexible server instance.
+ - [Quickstart: Create an Azure Database for PostgreSQL - Flexible Server instance in the Azure portal](../single-server/quickstart-create-server-database-portal.md)
+
+## Workflow file overview
+
+A GitHub Actions workflow is defined by a YAML (.yml) file in the `/.github/workflows/` path in your repository. This definition contains the various steps and parameters that make up the workflow.
+
+The file has two sections:
+
+| Section | Tasks |
+| | |
+| **Authentication** | 1. Generate deployment credentials. |
+| **Deploy** | 1. Deploy the database. |
+
+## Generate deployment credentials
++
+## Copy the Azure Database for PostgreSQL flexible server connection string
+
+In the Azure portal, go to your Azure Database for PostgreSQL flexible server instance and open **Settings** > **Connection strings**. Copy the **ADO.NET** connection string. Replace the placeholder values for `your_database` and `your_password`. The connection string looks similar to this.
+
+> [!IMPORTANT]
+> - For Azure Database for PostgreSQL single server, use ```user=adminusername@servername```. Note the ```@servername``` is required.
+> - For Azure Database for PostgreSQL flexible server, use ```user=adminusername``` without the ```@servername```.
+
+```output
+psql host={servername.postgres.database.azure.com} port=5432 dbname={your_database} user={adminusername} password={your_database_password} sslmode=require
+```
+
+You use the connection string as a GitHub secret.
+
+## Configure the GitHub secrets
++
+## Add your workflow
+
+1. Go to **Actions** for your GitHub repository.
+
+1. Select **Set up your workflow yourself**.
+
+1. Delete everything after the `on:` section of your workflow file. For example, your remaining workflow may look like this.
+
+ ```yaml
+ name: CI
+
+ on:
+ push:
+ branches: [ main ]
+ pull_request:
+ branches: [ main ]
+ ```
+
+1. Rename your workflow to `PostgreSQL for GitHub Actions` and add the checkout and sign-in actions. These actions check out your site code and authenticate with Azure using the GitHub secret(s) you created earlier.
+
+ # [Service principal](#tab/userlevel)
+
+ ```yaml
+ name: PostgreSQL for GitHub Actions
+
+ on:
+ push:
+ branches: [ main ]
+ pull_request:
+ branches: [ main ]
+
+ jobs:
+ build:
+ runs-on: ubuntu-latest
+ steps:
+ - uses: actions/checkout@v1
+ - uses: azure/login@v1
+ with:
+ creds: ${{ secrets.AZURE_CREDENTIALS }}
+ ```
+
+ # [OpenID Connect](#tab/openid)
+
+ ```yaml
+ name: PostgreSQL for GitHub Actions
+
+ on:
+ push:
+ branches: [ main ]
+ pull_request:
+ branches: [ main ]
+
+ jobs:
+ build:
+ runs-on: ubuntu-latest
+ steps:
+ - uses: actions/checkout@v1
+ - uses: azure/login@v1
+ with:
+ client-id: ${{ secrets.AZURE_CLIENT_ID }}
+ tenant-id: ${{ secrets.AZURE_TENANT_ID }}
+ subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
+ ```
+
+
+
+1. Use the Azure PostgreSQL Deploy action to connect to your Azure Database for PostgreSQL flexible server instance. Replace `POSTGRESQL_SERVER_NAME` with the name of your server. You should have an Azure Database for PostgreSQL flexible server data file named `data.sql` at the root level of your repository.
+
+ ```yaml
+ - uses: azure/postgresql@v1
+ with:
+ connection-string: ${{ secrets.AZURE_POSTGRESQL_CONNECTION_STRING }}
+ server-name: POSTGRESQL_SERVER_NAME
+ plsql-file: './data.sql'
+ ```
+
+1. Complete your workflow by adding an action to sign out of Azure. Here's the completed workflow. The file appears in the `.github/workflows` folder of your repository.
+
+ # [Service principal](#tab/userlevel)
+
+ ```yaml
+ name: PostgreSQL for GitHub Actions
+
+ on:
+ push:
+ branches: [ main ]
+ pull_request:
+ branches: [ main ]
++
+ jobs:
+ build:
+ runs-on: ubuntu-latest
+ steps:
+ - uses: actions/checkout@v1
+ - uses: azure/login@v1
+ with:
+ creds: ${{ secrets.AZURE_CREDENTIALS }}
+
+ - uses: azure/postgresql@v1
+ with:
+ server-name: POSTGRESQL_SERVER_NAME
+ connection-string: ${{ secrets.AZURE_POSTGRESQL_CONNECTION_STRING }}
+ plsql-file: './data.sql'
+
+ # Azure logout
+ - name: logout
+ run: |
+ az logout
+ ```
+
+ # [OpenID Connect](#tab/openid)
+
+ ```yaml
+ name: PostgreSQL for GitHub Actions
+
+ on:
+ push:
+ branches: [ main ]
+ pull_request:
+ branches: [ main ]
++
+ jobs:
+ build:
+ runs-on: ubuntu-latest
+ steps:
+ - uses: actions/checkout@v1
+ - uses: azure/login@v1
+ with:
+ client-id: ${{ secrets.AZURE_CLIENT_ID }}
+ tenant-id: ${{ secrets.AZURE_TENANT_ID }}
+ subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
+
+ - uses: azure/postgresql@v1
+ with:
+ server-name: POSTGRESQL_SERVER_NAME
+ connection-string: ${{ secrets.AZURE_POSTGRESQL_CONNECTION_STRING }}
+ plsql-file: './data.sql'
+
+ # Azure logout
+ - name: logout
+ run: |
+ az logout
+ ```
+
+
+
+## Review your deployment
+
+1. Go to **Actions** for your GitHub repository.
+
+1. Open the first result to see detailed logs of your workflow's run.
+
+ :::image type="content" source="media/how-to-deploy-github-action/gitbub-action-postgres-success.png" alt-text="Log of GitHub Actions run." lightbox="media/how-to-deploy-github-action/gitbub-action-postgres-success.png":::
+
+## Clean up resources
+
+When your Azure Database for PostgreSQL flexible server database and repository are no longer needed, clean up the resources you deployed by deleting the resource group and your GitHub repository.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Learn about Azure and GitHub integration](/azure/developer/github/)
+<br/>
+> [!div class="nextstepaction"]
+> [Learn how to connect to the server](../single-server/how-to-connect-query-guide.md)
postgresql How To Deploy On Azure Free Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-deploy-on-azure-free-account.md
Title: Use an Azure free account to try Azure Database for PostgreSQL - Flexible Server for free
-description: Guidance on how to deploy an Azure Database for PostgreSQL - Flexible Server for free using an Azure Free Account.
+ Title: Use an Azure free account to try for free
+description: Guidance on how to deploy an Azure Database for PostgreSQL - Flexible Server instance for free using an Azure Free Account.
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-Azure Database for PostgreSQL - Flexible Server is a managed service that you use to run, manage, and scale highly available PostgreSQL databases in the cloud. With an Azure free account, you can use Flexible Server for **free for 12 months** with **monthly limits** of up to:
+Azure Database for PostgreSQL flexible server is a managed service that you use to run, manage, and scale highly available PostgreSQL databases in the cloud. With an Azure free account, you can use Azure Database for PostgreSQL flexible server for **free for 12 months** with **monthly limits** of up to:
- **750 hours** of **Burstable B1MS** instance, enough hours to run a database instance continuously each month.
- **32 GB storage and 32 GB backup storage**.
-This article shows you how to create and use a flexible server for free using an [Azure free account](https://azure.microsoft.com/free/).
+This article shows you how to create and use an Azure Database for PostgreSQL flexible server instance for free using an [Azure free account](https://azure.microsoft.com/free/).
## Prerequisites
To complete this tutorial, you need:
- An Azure free account. If you donΓÇÖt have one, [create a free account](https://azure.microsoft.com/free/) before you begin.
-## Create an Azure Database for PostgreSQL - Flexible Server
+## Create an Azure Database for PostgreSQL flexible server instance
1. Sign in to the [Azure portal](https://portal.azure.com/) with your Azure free account. The default view is your service dashboard.
-1. To create a PostgreSQL Flexible Server database, search for and select **Azure Database for PostgreSQL servers**:
+1. To create an Azure Database for PostgreSQL flexible server instance, search for and select **Azure Database for PostgreSQL servers**:
- :::image type="content" source="media/how-to-deploy-on-azure-free-account/select-postgresql.png" alt-text="Screenshot that shows how to search and select Azure Database for PostgreSQL.":::
+ :::image type="content" source="media/how-to-deploy-on-azure-free-account/select-postgresql.png" alt-text="Screenshot that shows how to search and select Azure Database for PostgreSQL flexible server.":::
Alternatively, you can search for and navigate to **Free Services**, and then select the **Azure Database for PostgreSQL** tile from the list:
To complete this tutorial, you need:
1. Select **Create**.
-1. Enter the basic settings for a new **Flexible Server**.
+1. Enter the basic settings for a new **Azure Database for PostgreSQL flexible server** instance.
- :::image type="content" source="media/how-to-deploy-on-azure-free-account/basic-settings-postgresql.png" alt-text="Screenshot that shows the Basic Settings for creating Flexible Server.":::
+ :::image type="content" source="media/how-to-deploy-on-azure-free-account/basic-settings-postgresql.png" alt-text="Screenshot that shows the Basic Settings for creating an Azure Database for PostgreSQL flexible server instance.":::
| Setting | Suggested Value | Description |
|--|-|-|
To complete this tutorial, you need:
1. Select **Networking** tab to configure how to reach your server.
- Azure Database for PostgreSQL - Flexible Server provides two ways to connect:
+ Azure Database for PostgreSQL flexible server provides two ways to connect:
- Public access (allowed IP addresses) and Private endpoint
- Private access (VNet Integration)
- With public access, access to your server is limited to allowed IP addresses that you include in a firewall rule or to applications which can reach the instance of PostgreSQL via private endpoints. This method prevents external applications and tools from connecting to the server and any databases on the server, unless you create a rule to open the firewall for a specific IP address or range, or create a private endpoint.
+ With public access, access to your server is limited to allowed IP addresses that you include in a firewall rule or to applications which can reach the instance of Azure Database for PostgreSQL flexible server via private endpoints. This method prevents external applications and tools from connecting to the server and any databases on the server, unless you create a rule to open the firewall for a specific IP address or range, or create a private endpoint.
With private access, access to your server is limited to your virtual network. For more information about connectivity methods, [**see Networking overview**](./concepts-networking.md). For the purposes of this tutorial, enable public access to connect to the server.

>[!NOTE]
- >Azure Database for PostgreSQL - Flexible Server support for Private Endpoints in Preview requires enablement of **Enable Private Endpoints for PostgreSQL flexible servers** [preview feature in your subscription](../../azure-resource-manager/management/preview-features.md).
+ >Azure Database for PostgreSQL flexible server support for Private Endpoints in Preview requires enablement of **Enable Private Endpoints for PostgreSQL flexible servers** [preview feature in your subscription](../../azure-resource-manager/management/preview-features.md).
>Only **after the preview feature is enabled** can you create servers that are PE capable, that is, servers that can be networked using Private Link.

1. On the **Networking** tab, for **Connectivity method** select **Public access (allowed IP addresses) and Private endpoint**.
To complete this tutorial, you need:
:::image type="content" source="media/how-to-deploy-on-azure-free-account/networking-postgresql.png" alt-text="Screenshot that shows the networking options to be chosen, and highlights the add current client IP address button.":::
-1. To review your flexible server configuration, select **Review + create**.
+1. To review your Azure Database for PostgreSQL flexible server configuration, select **Review + create**.
:::image type="content" source="media/how-to-deploy-on-azure-free-account/review-create-postgresql.png" alt-text="Screenshot that shows the Review + create blade.":::

>[!IMPORTANT]
- >While creating the Flexible server instance from your Azure free account, you will still see an **Estimated cost per month** in the **Compute + Storage : Cost Summary** blade and **Review + Create** tab. But, as long as you are using your Azure free account, and your free service usage is within monthly limits (to view usage information, refer [**Monitor and track free services usage**](#monitor-and-track-free-services-usage) section below), you won't be charged for the service. We're currently working to improve the **Cost Summary** experience for free services.
+ >While creating the Azure Database for PostgreSQL flexible server instance from your Azure free account, you still see an **Estimated cost per month** in the **Compute + Storage : Cost Summary** blade and **Review + Create** tab. But, as long as you are using your Azure free account, and your free service usage is within monthly limits (to view usage information, refer [**Monitor and track free services usage**](#monitor-and-track-free-services-usage) section below), you won't be charged for the service. We're currently working to improve the **Cost Summary** experience for free services.
1. Select **Create** to provision the server.
- Provisioning can take a few minutes
+ Provisioning can take a few minutes.
1. On the toolbar, select **Notifications** (the bell icon) to monitor the deployment process.
- After the deployment is complete, select **Pin to dashboard**, to create a tile for the flexible server on your Azure portal dashboard. This tile is a shortcut to the server's **Overview** page. When you select **Go to resource**, the server's **Overview** page opens.
+ After the deployment is complete, select **Pin to dashboard**, to create a tile for the Azure Database for PostgreSQL flexible server instance on your Azure portal dashboard. This tile is a shortcut to the server's **Overview** page. When you select **Go to resource**, the server's **Overview** page opens.
- By default, a **postgres** database is created under your server. The postgres database is a default database that's meant for use by users, utilities, and third-party applications. (The other default database is **azure_maintenance**. Its function is to separate the managed service processes from user actions. You cannot access this database.)
+ By default, a **postgres** database is created under your server. The postgres database is a default database that's meant for use by users, utilities, and third-party applications. (The other default database is **azure_maintenance**. Its function is to separate the managed service processes from user actions. You can't access this database.)
## Connect and query
-Now that youΓÇÖve created an Azure Database for PostgreSQL flexible server in a resource group, you can connect to server and query databases by using the following Connect and query quickstarts:
+Now that you've created an Azure Database for PostgreSQL flexible server instance in a resource group, you can connect to the server and query databases by using the following Connect and query quickstarts:
- [psql](quickstart-create-server-portal.md#connect-to-the-postgresql-database-using-psql)
- [Azure CLI](connect-azure-cli.md)
- [Python](connect-python.md)
Now that youΓÇÖve created an Azure Database for PostgreSQL flexible server in a
## Monitor and track free services usage
-You're not charged for Azure Database for PostgreSQL - Flexible Server services that are included for free with your Azure free account unless you exceed the free service limits. To remain within the limits, use the Azure portal to track and monitor your free services usage.
+You're not charged for Azure Database for PostgreSQL flexible server services that are included for free with your Azure free account unless you exceed the free service limits. To remain within the limits, use the Azure portal to track and monitor your free services usage.
1. In the Azure portal, search for **Subscriptions** and select the Azure free account - **Free Trial** subscription.
1. On the **Overview** page, scroll down to show the tile **Top free services by usage**, and then select **View all free services**.
You're not charged for Azure Database for PostgreSQL - Flexible Server services
## Clean up resources
-If you are using the flexible server for development, testing, or predictable, time-bound production workloads, optimize usage by starting and stopping the server on-demand. After you stop the server, it remains in that state for seven days unless restarted sooner. For more information, see Start/Stop Server to lower TCO. When your Flexible Server is stopped, there is no Compute usage, but Storage usage is still considered.
+If you're using the Azure Database for PostgreSQL flexible server instance for development, testing, or predictable, time-bound production workloads, optimize usage by starting and stopping the server on-demand. After you stop the server, it remains in that state for seven days unless restarted sooner. For more information, see Start/Stop Server to lower TCO. When your Azure Database for PostgreSQL flexible server instance is stopped, there's no Compute usage, but Storage usage is still considered.
-Alternatively, if you don't expect to need these resources in the future, you can delete them by deleting the resource group, or you can just delete the PostgreSQL server.
+Alternatively, if you don't expect to need these resources in the future, you can delete them by deleting the resource group, or you can just delete the Azure Database for PostgreSQL flexible server instance.
- To delete the resource group, complete the following steps:

  1. In the Azure portal, search for and select **Resource groups**.
Alternatively, if you don't expect to need these resources in the future, you ca
1. On the **Overview** page for your resource group, select **Delete resource group.**
1. In the confirmation dialog box, type the name of your resource group, and then select **Delete**.
-- To delete the PostgreSQL server, on **Overview** page for the server, select **Delete**.
+- To delete the Azure Database for PostgreSQL flexible server instance, on the **Overview** page for the server, select **Delete**.
postgresql How To Enable Intelligent Performance Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-enable-intelligent-performance-cli.md
Title: Configure intelligent tuning - Azure Database for PostgreSQL - Flexible Server - CLI
+ Title: Configure intelligent tuning - Azure CLI
description: This article describes how to configure intelligent tuning in Azure Database for PostgreSQL - Flexible Server by using the Azure CLI.
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-You can verify and update the intelligent tuning configuration for an Azure Database for PostgreSQL server by using the Azure CLI.
+You can verify and update the intelligent tuning configuration for an Azure Database for PostgreSQL flexible server instance by using the Azure CLI.
To learn more about intelligent tuning, see the [overview](concepts-intelligent-tuning.md).
To learn more about intelligent tuning, see the [overview](concepts-intelligent-
az account set --subscription <subscription id>
```

-- If you haven't already created a PostgreSQL flexible server, create one by using the ```az postgres flexible-server create``` command:
+- If you haven't already created an Azure Database for PostgreSQL flexible server instance, create one by using the ```az postgres flexible-server create``` command:
```azurecli-interactive
az postgres flexible-server create --resource-group myresourcegroup --name myservername
```
az postgres flexible-server parameter show --resource-group myresourcegroup --se
To enable or disable intelligent tuning, use the [az postgres flexible-server parameter set](/cli/azure/postgres/flexible-server/parameter#az-postgres-flexible-server-parameter-set) command. You can choose among the following tuning targets: `none`, `Storage-checkpoint_completion_target`, `Storage-min_wal_size`, `Storage-max_wal_size`, `Storage-bgwriter_delay`, `tuning-autovacuum`, and `all`.

> [!IMPORTANT]
-> Autovacuum tuning is currently supported for the General Purpose and Memory Optimized server compute tiers that have four or more vCores. The Burstable server compute tier is not supported.
+> Autovacuum tuning is currently supported for the General Purpose and Memory Optimized server compute tiers that have four or more vCores. The Burstable server compute tier isn't supported.
1. Activate the intelligent tuning feature by using the following command:
To enable or disable intelligent tuning, use the [az postgres flexible-server pa
When you're choosing values from the `intelligent_tuning.metric_targets` server parameter, take the following considerations into account:
-* The `NONE` value takes precedence over all other values. If you choose `NONE` alongside any combination of other values, the parameter will be perceived as set to `NONE`. This is equivalent to `intelligent_tuning = OFF`, so no tuning will occur.
+* The `NONE` value takes precedence over all other values. If you choose `NONE` alongside any combination of other values, the parameter is perceived as set to `NONE`. This is equivalent to `intelligent_tuning = OFF`, so no tuning occurs.
-* The `ALL` value takes precedence over all other values, with the exception of `NONE`. If you choose `ALL` with any combination, barring `NONE`, all the listed parameters will undergo tuning.
+* The `ALL` value takes precedence over all other values, with the exception of `NONE`. If you choose `ALL` with any combination, barring `NONE`, all the listed parameters undergo tuning.
-* The `ALL` value encompasses all existing metric targets. This value will also automatically apply to any new metric targets that you might add in the future. This allows for comprehensive and future-proof tuning of your PostgreSQL server.
+* The `ALL` value encompasses all existing metric targets. This value also automatically applies to any new metric targets that you might add in the future. This allows for comprehensive and future-proof tuning of your Azure Database for PostgreSQL flexible server instance.
-* If you want to include an additional tuning target, you need to specify both the existing and new tuning targets. For example, if `bgwriter_delay` is already enabled and you want to add autovacuum tuning, your command should look like this:
+* If you want to include another tuning target, you need to specify both the existing and new tuning targets. For example, if `bgwriter_delay` is already enabled and you want to add autovacuum tuning, your command should look like this:
```azurecli-interactive
az postgres flexible-server parameter set --resource-group myresourcegroup --server-name mydemoserver --name intelligent_tuning.metric_targets --value tuning-autovacuum,Storage-bgwriter_delay
```
postgresql How To Enable Intelligent Performance Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-enable-intelligent-performance-portal.md
Title: Configure intelligent tuning - Azure Database for PostgreSQL - Flexible Server - portal
-description: This article describes how to configure intelligent tuning in Azure Database for PostgreSQL Flexible Server through the Azure portal.
+ Title: Configure intelligent tuning - portal
+description: This article describes how to configure intelligent tuning in Azure Database for PostgreSQL - Flexible Server through the Azure portal.
Last updated 06/05/2023
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-This article provides a step-by-step procedure to configure intelligent tuning in Azure Database for PostgreSQL - Flexible Server by using the Azure portal.
+This article provides a step-by-step procedure to configure intelligent tuning in Azure Database for PostgreSQL flexible server by using the Azure portal.
To learn more about intelligent tuning, see the [overview](concepts-intelligent-tuning.md).

> [!IMPORTANT]
-> Autovacuum tuning is currently supported for the General Purpose and Memory Optimized server compute tiers that have four or more vCores. The Burstable server compute tier is not supported.
+> Autovacuum tuning is currently supported for the General Purpose and Memory Optimized server compute tiers that have four or more vCores. The Burstable server compute tier isn't supported.
## Steps to enable intelligent tuning on your flexible server
-1. Visit the [Azure portal](https://portal.azure.com/) and select the flexible server on which you want to enable intelligent tuning.
+1. Visit the [Azure portal](https://portal.azure.com/) and select the Azure Database for PostgreSQL flexible server instance on which you want to enable intelligent tuning.
2. On the left pane, select **Server parameters** and then search for **intelligent tuning**.
To learn more about intelligent tuning, see the [overview](concepts-intelligent-
When you're choosing values from the `intelligent_tuning.metric_targets` server parameter, take the following considerations into account:
-* The `NONE` value takes precedence over all other values. If you choose `NONE` alongside any combination of other values, the parameter will be perceived as set to `NONE`. This is equivalent to `intelligent_tuning = OFF`, so no tuning will occur.
+* The `NONE` value takes precedence over all other values. If you choose `NONE` alongside any combination of other values, the parameter is perceived as set to `NONE`. This is equivalent to `intelligent_tuning = OFF`, so no tuning occurs.
-* The `ALL` value takes precedence over all other values, with the exception of `NONE`. If you choose `ALL` with any combination, barring `NONE`, all the listed parameters will undergo tuning.
+* The `ALL` value takes precedence over all other values, with the exception of `NONE`. If you choose `ALL` with any combination, barring `NONE`, all the listed parameters undergo tuning.
-* The `ALL` value encompasses all existing metric targets. This value will also automatically apply to any new metric targets that you might add in the future. This allows for comprehensive and future-proof tuning of your PostgreSQL server.
+* The `ALL` value encompasses all existing metric targets. This value also automatically applies to any new metric targets that you might add in the future. This allows for comprehensive and future-proof tuning of your Azure Database for PostgreSQL flexible server instance.
## Next steps
postgresql How To High Cpu Utilization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-high-cpu-utilization.md
Title: High CPU Utilization
-description: Troubleshooting guide for high cpu utilization in Azure Database for PostgreSQL - Flexible Server
+ Title: High CPU utilization
+description: Troubleshooting guide for high CPU utilization.
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-This article shows you how to quickly identify the root cause of high CPU utilization, and possible remedial actions to control CPU utilization when using [Azure Database for PostgreSQL - Flexible Server](overview.md).
+This article shows you how to quickly identify the root cause of high CPU utilization, and possible remedial actions to control CPU utilization when using [Azure Database for PostgreSQL flexible server](overview.md).
In this article, you'll learn:
In this article, you'll learn:
## Troubleshooting guides
-Using the feature troubleshooting guides which is available on the Azure Database for PostgreSQL - Flexible Server portal the probable root cause and recommendations to the mitigate high CPU scenario can be found. How to setup the troubleshooting guides to use them please follow [setup troubleshooting guides](how-to-troubleshooting-guides.md).
+You can find the probable root cause of a high CPU scenario, along with recommendations to mitigate it, by using the troubleshooting guides feature available in the Azure Database for PostgreSQL flexible server portal. To set up the troubleshooting guides so that you can use them, follow [setup troubleshooting guides](how-to-troubleshooting-guides.md).
## Tools to identify high CPU utilization
Consider these tools to identify high CPU utilization.
### Azure Metrics
-Azure Metrics is a good starting point to check the CPU utilization for the definite date and period. Metrics give information about the time duration during which the CPU utilization is high. Compare the graphs of Write IOPs, Read IOPs, Read Throughput, and Write Throughput with CPU utilization to find out times when the workload caused high CPU. For proactive monitoring, you can configure alerts on the metrics. For step-by-step guidance, see [Azure Metrics](./howto-alert-on-metrics.md).
+Azure Metrics is a good starting point to check the CPU utilization for a specific date and period. Metrics give information about the time duration during which the CPU utilization is high. Compare the graphs of Write IOPS, Read IOPS, Read Throughput, and Write Throughput with CPU utilization to find out times when the workload caused high CPU. For proactive monitoring, you can configure alerts on the metrics. For step-by-step guidance, see [Azure Metrics](./how-to-alert-on-metrics.md).
### Query Store
-Query Store automatically captures the history of queries and runtime statistics, and it retains them for your review. It slices the data by time so that you can see temporal usage patterns. Data for all users, databases and queries is stored in a database named azure_sys in the Azure Database for PostgreSQL instance. For step-by-step guidance, see [Query Store](./concepts-query-store.md).
+Query Store automatically captures the history of queries and runtime statistics, and it retains them for your review. It slices the data by time so that you can see temporal usage patterns. Data for all users, databases and queries is stored in a database named azure_sys in the Azure Database for PostgreSQL flexible server instance. For step-by-step guidance, see [Query Store](./concepts-query-store.md).
### pg_stat_statements
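As a hedged illustration of how the pg_stat_statements extension is typically queried (this is not the article's exact query), the following sketch assumes pg_stat_statements is enabled on the server and uses the PostgreSQL 13 and later column names (`total_exec_time`, `mean_exec_time`); older versions use `total_time` and `mean_time` instead.

```sql
-- Top five statements by total execution time, a common proxy for CPU consumption.
SELECT s.userid::regrole                      AS role_name,
       d.datname                              AS database_name,
       round(s.total_exec_time::numeric, 2)   AS total_exec_ms,
       s.calls,
       round(s.mean_exec_time::numeric, 2)    AS mean_exec_ms,
       left(s.query, 80)                      AS query_sample
FROM pg_stat_statements AS s
JOIN pg_database AS d ON d.oid = s.dbid
ORDER BY s.total_exec_time DESC
LIMIT 5;
```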
For more details about PgBouncer, review:
[Best Practices](https://techcommunity.microsoft.com/t5/azure-database-for-postgresql/connection-handling-best-practice-with-postgresql/ba-p/790883)
-Azure Database for Flexible Server offers PgBouncer as a built-in connection pooling solution. For more information, see [PgBouncer](./concepts-pgbouncer.md)
+Azure Database for PostgreSQL flexible server offers PgBouncer as a built-in connection pooling solution. For more information, see [PgBouncer](./concepts-pgbouncer.md).
### Terminate long running transactions
postgresql How To High Io Utilization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-high-io-utilization.md
Title: High IOPS utilization for Azure Database for PostgreSQL - Flexible Server
-description: This article is a troubleshooting guide for high IOPS utilization in Azure Database for PostgreSQL - Flexible Server
+ Title: High IOPS utilization
+description: This article is a troubleshooting guide for high IOPS utilization in Azure Database for PostgreSQL - Flexible Server.
Last updated 10/26/2023
+ - template-how-to
# Troubleshoot high IOPS utilization for Azure Database for PostgreSQL - Flexible Server
-This article shows you how to quickly identify the root cause of high IOPS (input/output operations per second) utilization and provides remedial actions to control IOPS utilization when you're using [Azure Database for PostgreSQL - Flexible Server](overview.md).
+
+This article shows you how to quickly identify the root cause of high IOPS (input/output operations per second) utilization and provides remedial actions to control IOPS utilization when you're using [Azure Database for PostgreSQL flexible server](overview.md).
In this article, you learn how to:
In this article, you learn how to:
## Troubleshooting guides
-Using the feature troubleshooting guides which is available on the Azure Database for PostgreSQL - Flexible Server portal the probable root cause and recommendations to the mitigate high IOPS utilization scenario can be found. How to setup the troubleshooting guides to use them please follow [setup troubleshooting guides](how-to-troubleshooting-guides.md).
+You can find the probable root cause of a high IOPS utilization scenario, along with recommendations to mitigate it, by using the troubleshooting guides feature available in the Azure Database for PostgreSQL flexible server portal. To set up the troubleshooting guides so that you can use them, follow [setup troubleshooting guides](how-to-troubleshooting-guides.md).
## Tools to identify high I/O utilization
Consider the following tools to identify high I/O utilization.
### Azure Metrics
-Azure Metrics is a good starting point to check I/O utilization for a defined date and period. Metrics give information about the time during which I/O utilization is high. Compare the graphs of Write IOPs, Read IOPs, Read Throughput, and Write Throughput to find out times when the workload is causing high I/O utilization. For proactive monitoring, you can configure alerts on the metrics. For step-by-step guidance, see [Azure Metrics](./howto-alert-on-metrics.md).
+Azure Metrics is a good starting point to check I/O utilization for a defined date and period. Metrics give information about the time during which I/O utilization is high. Compare the graphs of Write IOPS, Read IOPS, Read Throughput, and Write Throughput to find out times when the workload is causing high I/O utilization. For proactive monitoring, you can configure alerts on the metrics. For step-by-step guidance, see [Azure Metrics](./how-to-alert-on-metrics.md).
### Query Store
-The Query Store feature automatically captures the history of queries and runtime statistics, and retains them for your review. It slices the data by time to see temporal usage patterns. Data for all users, databases, and queries is stored in a database named *azure_sys* in the Azure Database for PostgreSQL instance. For step-by-step guidance, see [Monitor performance with Query Store](./concepts-query-store.md).
+The Query Store feature automatically captures the history of queries and runtime statistics, and retains them for your review. It slices the data by time to see temporal usage patterns. Data for all users, databases, and queries is stored in a database named *azure_sys* in the Azure Database for PostgreSQL flexible server instance. For step-by-step guidance, see [Monitor performance with Query Store](./concepts-query-store.md).
Use the following statement to view the top five SQL statements that consume I/O:
ORDER BY duration DESC;
### Checkpoint timings
-High I/O can also be seen in scenarios where a checkpoint is happening too frequently. One way to identify this is by checking the PostgreSQL log file for the following log text: "LOG: checkpoints are occurring too frequently."
+High I/O can also be seen in scenarios where a checkpoint is happening too frequently. One way to identify this is by checking the Azure Database for PostgreSQL flexible server log file for the following log text: "LOG: checkpoints are occurring too frequently."
You could also investigate by using an approach where periodic snapshots of `pg_stat_bgwriter` with a time stamp are saved. By using the saved snapshots, you can calculate the average checkpoint interval, number of checkpoints requested, and number of checkpoints timed.
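A minimal sketch of that snapshot approach might look like the following; the table name is illustrative, the `INSERT` would normally be scheduled at a fixed interval (for example, every 15 minutes), and the column names assume the pre-PostgreSQL 17 `pg_stat_bgwriter` view (newer versions move the checkpoint counters to `pg_stat_checkpointer`).

```sql
-- Create an empty table with the same columns as pg_stat_bgwriter plus a timestamp.
CREATE TABLE IF NOT EXISTS bgwriter_snapshots AS
SELECT now() AS snapshot_time, * FROM pg_stat_bgwriter WHERE false;

-- Capture one snapshot; run this on a schedule to build a history.
INSERT INTO bgwriter_snapshots
SELECT now(), * FROM pg_stat_bgwriter;

-- Compare the oldest and newest snapshots to estimate checkpoint frequency.
SELECT max(snapshot_time) - min(snapshot_time)           AS elapsed,
       max(checkpoints_timed) - min(checkpoints_timed)   AS checkpoints_timed,
       max(checkpoints_req)   - min(checkpoints_req)     AS checkpoints_requested
FROM bgwriter_snapshots;
```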
To resolve high I/O utilization, you can use any of the following three methods.
### The `EXPLAIN ANALYZE` command
-After you've identified the query that's consuming high I/O, use `EXPLAIN ANALYZE` to further investigate the query and tune it. For more information about the `EXPLAIN ANALYZE` command, review the [EXPLAIN plan](https://www.postgresql.org/docs/current/sql-explain.html).
+After you identify the query that's consuming high I/O, use `EXPLAIN ANALYZE` to further investigate the query and tune it. For more information about the `EXPLAIN ANALYZE` command, review the [EXPLAIN plan](https://www.postgresql.org/docs/current/sql-explain.html).
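As a quick, hedged illustration (the table and column names below are placeholders, not taken from the article), adding the `BUFFERS` option shows per-node block I/O alongside timings, which is useful when chasing I/O-heavy queries:

```sql
-- ANALYZE executes the statement; BUFFERS adds shared/local/temp block counts per plan node.
EXPLAIN (ANALYZE, BUFFERS)
SELECT c_w_id, sum(c_balance) AS total_balance
FROM customer
GROUP BY c_w_id
ORDER BY total_balance DESC
LIMIT 10;
```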
### Terminate long-running transactions
SELECT pg_terminate_backend(pid);
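For instance, here is a hedged sketch of finding candidate sessions before terminating them; the five-minute threshold and the example pid are illustrative only.

```sql
-- Transactions that have been open for more than five minutes, oldest first.
SELECT pid,
       usename,
       state,
       now() - xact_start AS transaction_age,
       left(query, 80)    AS query_sample
FROM pg_stat_activity
WHERE xact_start IS NOT NULL
  AND now() - xact_start > interval '5 minutes'
ORDER BY transaction_age DESC;

-- Terminate one backend after confirming it is safe to do so.
SELECT pg_terminate_backend(12345);  -- replace 12345 with a pid from the query above
```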
### Tune server parameters
-If you've observed that the checkpoint is happening too frequently, increase the `max_wal_size` server parameter until most checkpoints are time driven, instead of requested. Eventually, 90 percent or more should be time based, and the interval between two checkpoints should be close to the `checkpoint_timeout` value that's set on the server.
+If you observe that the checkpoint is happening too frequently, increase the `max_wal_size` server parameter until most checkpoints are time driven, instead of requested. Eventually, 90 percent or more should be time based, and the interval between two checkpoints should be close to the `checkpoint_timeout` value that's set on the server.
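One way to check how close you are to that 90 percent target is to compare the cumulative checkpoint counters. This is a hedged sketch that assumes the pre-PostgreSQL 17 `pg_stat_bgwriter` columns:

```sql
-- Percentage of checkpoints that were time driven rather than requested (triggered by WAL volume).
SELECT checkpoints_timed,
       checkpoints_req,
       round(100.0 * checkpoints_timed
             / NULLIF(checkpoints_timed + checkpoints_req, 0), 2) AS pct_timed
FROM pg_stat_bgwriter;
```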
- `max_wal_size`: Peak business hours are a good time to arrive at a `max_wal_size` value. To arrive at a value, do the following:
postgresql How To High Memory Utilization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-high-memory-utilization.md
Title: High Memory Utilization
-description: Troubleshooting guide for high memory utilization in Azure Database for PostgreSQL - Flexible Server
+ Title: High memory utilization
+description: Troubleshooting guide for high memory utilization.
# High memory utilization in Azure Database for PostgreSQL - Flexible Server
-This article introduces common scenarios and root causes that might lead to high memory utilization in [Azure Database for PostgreSQL - Flexible Server](overview.md).
+
+This article introduces common scenarios and root causes that might lead to high memory utilization in [Azure Database for PostgreSQL flexible server](overview.md).
In this article, you learn:
In this article, you learn:
## Troubleshooting guides
-Using the feature troubleshooting guides which is available on the Azure Database for PostgreSQL - Flexible Server portal the probable root cause and recommendations to the mitigate high memory scenario can be found. How to setup the troubleshooting guides to use them please follow [setup troubleshooting guides](how-to-troubleshooting-guides.md).
+You can find the probable root cause of a high memory scenario, along with recommendations to mitigate it, by using the troubleshooting guides feature available in the Azure Database for PostgreSQL flexible server portal. To set up the troubleshooting guides so that you can use them, follow [setup troubleshooting guides](how-to-troubleshooting-guides.md).
## Tools to identify high memory utilization
Consider the following tools to identify high memory utilization.
### Azure Metrics

Use Azure Metrics to monitor the percentage of memory in use for a specific date and time frame.
-For proactive monitoring, configure alerts on the metrics. For step-by-step guidance, see [Azure Metrics](./howto-alert-on-metrics.md).
+For proactive monitoring, configure alerts on the metrics. For step-by-step guidance, see [Azure Metrics](./how-to-alert-on-metrics.md).
### Query Store
For example, consider a scenario where there are three autovacuum workers runnin
If `maintenance_work_mem` is set to 1 GB, then all sessions combined will use 3 GB of memory.
-A high `maintenance_work_mem` value along with multiple running sessions for vacuuming/index creation/adding foreign keys can cause high memory utilization. The maximum allowed value for the `maintenance_work_mem` server parameter in Azure Database for Flexible Server is 2 GB.
+A high `maintenance_work_mem` value along with multiple running sessions for vacuuming/index creation/adding foreign keys can cause high memory utilization. The maximum allowed value for the `maintenance_work_mem` server parameter in Azure Database for PostgreSQL flexible server is 2 GB.
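To see how these settings combine on your own server, you can inspect them directly; this is a minimal sketch, not a step from the article. Note that when `autovacuum_work_mem` is `-1`, autovacuum workers fall back to `maintenance_work_mem`.

```sql
-- Settings that determine how much memory vacuum and maintenance operations can use.
SELECT name, setting, unit
FROM pg_settings
WHERE name IN ('maintenance_work_mem',
               'autovacuum_work_mem',
               'autovacuum_max_workers',
               'work_mem');
```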
#### Shared buffers
A reasonable setting for shared buffers is 25% of RAM. Setting a value of greate
### Max connections
-All new and idle connections on a Postgres database consume up to 2 MB of memory. One way to monitor connections is by using the following query:
+All new and idle connections on an Azure Database for PostgreSQL flexible server database consume up to 2 MB of memory. One way to monitor connections is by using the following query:
```postgresql
select count(*) from pg_stat_activity;
```
For more details on PgBouncer, review:
[Best Practices](https://techcommunity.microsoft.com/t5/azure-database-for-postgresql/connection-handling-best-practice-with-postgresql/ba-p/790883).
-Azure Database for Flexible Server offers PgBouncer as a built-in connection pooling solution. For more information, see [PgBouncer](./concepts-pgbouncer.md).
+Azure Database for PostgreSQL flexible server offers PgBouncer as a built-in connection pooling solution. For more information, see [PgBouncer](./concepts-pgbouncer.md).
### Explain Analyze
For more information on the **EXPLAIN** command, review [Explain Plan](https://w
- [Autovacuum Tuning](how-to-autovacuum-tuning.md)
- [High CPU Utilization](how-to-high-cpu-utilization.md)
-- [Server Parameters](howto-configure-server-parameters-using-portal.md)
+- [Server Parameters](how-to-configure-server-parameters-using-portal.md)
postgresql How To Identify Slow Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-identify-slow-queries.md
Title: Identify Slow Running Query for Azure Database for PostgreSQL - Flexible Server
-description: Troubleshooting guide for identifying slow running queries in Azure Database for PostgreSQL - Flexible Server
+ Title: Identify slow running query
+description: Troubleshooting guide for identifying slow running queries in Azure Database for PostgreSQL - Flexible Server.
# Troubleshoot and identify slow-running queries in Azure Database for PostgreSQL - Flexible Server
-This article shows you how to troubleshoot and identify slow-running queries using [Azure Database for PostgreSQL - Flexible Server](overview.md).
+
+This article shows you how to troubleshoot and identify slow-running queries using [Azure Database for PostgreSQL flexible server](overview.md).
In a high CPU utilization scenario, in this article, you learn how to:
In a high CPU utilization scenario, in this article, you learn how to:
### Prerequisites
-One must enable troubleshooting guides and auto_explain extension on the Azure Database for PostgreSQL ΓÇô Flexible Server. To enable troubleshooting guides, follow the steps mentioned [here](how-to-troubleshooting-guides.md).
+You must enable the troubleshooting guides and the auto_explain extension on the Azure Database for PostgreSQL flexible server instance. To enable troubleshooting guides, follow the steps mentioned [here](how-to-troubleshooting-guides.md).
To enable auto_explain extension, follow the steps below:
-1. Add auto_explain extension to the shared preload libraries as shown below from the server parameters page on the Flexible Server portal
+1. Add auto_explain extension to the shared preload libraries as shown below from the server parameters page on the Azure Database for PostgreSQL flexible server portal.
:::image type="content" source="./media/how-to-identify-slow-queries/shared-preload-library.png" alt-text="Screenshot of server parameters page with shared preload libraries parameter." lightbox="./media/how-to-identify-slow-queries/shared-preload-library.png":::
To enable auto_explain extension, follow the steps below:
> [!NOTE]
> Making this change will require a server restart.
-2. After the auto_explain extension is added to shared preload libraries and the server has restarted, change the below highlighted auto_explain server parameters to `ON` from the server parameters page on the Flexible Server portal and leave the remaining ones
+2. After the auto_explain extension is added to shared preload libraries and the server has restarted, change the below highlighted auto_explain server parameters to `ON` from the server parameters page on the Azure Database for PostgreSQL flexible server portal and leave the remaining ones
with default values as shown below.

   :::image type="content" source="./media/how-to-identify-slow-queries/auto-explain-parameters.png" alt-text="Screenshot of server parameters page with auto_explain parameters." lightbox="./media/how-to-identify-slow-queries/auto-explain-parameters.png":::
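After the restart, you can optionally confirm the configuration from any client. This verification step is a sketch and not part of the article's procedure:

```sql
-- Confirm that auto_explain was loaded at server start.
SHOW shared_preload_libraries;

-- List the auto_explain parameters that are currently in effect.
SELECT name, setting
FROM pg_settings
WHERE name LIKE 'auto_explain.%'
ORDER BY name;
```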
With troubleshooting guides and auto_explain extension in place, we explain the
We have a scenario where CPU utilization has spiked to 90% and would like to know the root cause of the spike. To debug the scenario, follow the steps mentioned below.
-1. As soon as you're alerted by a CPU scenario, go to the troubleshooting guides available under the Help tab on the Flexible server portal overview page.
+1. As soon as you're alerted by a CPU scenario, go to the troubleshooting guides available under the Help tab on the Azure Database for PostgreSQL flexible server portal overview page.
:::image type="content" source="./media/how-to-identify-slow-queries/troubleshooting-guides-blade.png" alt-text="Screenshot of troubleshooting guides menu." lightbox="./media/how-to-identify-slow-queries/troubleshooting-guides-blade.png":::
We have a scenario where CPU utilization has spiked to 90% and would like to kno
:::image type="content" source="./media/how-to-identify-slow-queries/high-cpu-query.png" alt-text="Screenshot of troubleshooting guides menu - Top CPU consuming queries tab." lightbox="./media/how-to-identify-slow-queries/high-cpu-query.png":::
-5. Connect to azure_sys database and execute the query to retrieve actual query text using the script below
+5. Connect to the azure_sys database and execute the following script to retrieve the actual query text.
```sql
psql -h ServerName.postgres.database.azure.com -U AdminUsername -d azure_sys
GROUP BY c_w_id,c_id
order by c_w_id;
```
-7. To understand what exact explain plan was generated, use Postgres logs. Auto explain extension would have logged an entry in the logs every time the query execution was completed during the interval. Select Logs section from the `Monitoring` tab from the Flexible server portal overview page.
+7. To understand exactly which explain plan was generated, use the Azure Database for PostgreSQL flexible server logs. The auto_explain extension would have logged an entry in the logs every time a query execution completed during the interval. Select the Logs section from the `Monitoring` tab on the Azure Database for PostgreSQL flexible server portal overview page.
:::image type="content" source="./media/how-to-identify-slow-queries/log-analytics-tab.png" alt-text="Screenshot of troubleshooting guides menu - Logs." lightbox="./media/how-to-identify-slow-queries/log-analytics-tab.png":::
order by c_w_id;
:::image type="content" source="./media/how-to-identify-slow-queries/log-analytics-timerange.png" alt-text="Screenshot of troubleshooting guides menu - Logs Timerange." lightbox="./media/how-to-identify-slow-queries/log-analytics-timerange.png":::
-9. Execute the below query to retrieve the explain analyze output of the query identified.
+9. Execute the following query to retrieve the explain analyze output of the query identified.
```sql AzureDiagnostics
In the second scenario, a stored procedure execution time is found to be slow, a
### Prerequisites
-One must enable troubleshooting guides and auto_explain extension on the Azure Database for PostgreSQL ΓÇô Flexible Server as a prerequisite. To enable troubleshooting guides, follow the steps mentioned [here](how-to-troubleshooting-guides.md).
+You must enable the troubleshooting guides and the auto_explain extension on the Azure Database for PostgreSQL flexible server instance as a prerequisite. To enable troubleshooting guides, follow the steps mentioned [here](how-to-troubleshooting-guides.md).
To enable auto_explain extension, follow the steps below:
-1. Add auto_explain extension to the shared preload libraries as shown below from the server parameters page on the Flexible Server portal
+1. Add auto_explain extension to the shared preload libraries as shown below from the server parameters page on the Azure Database for PostgreSQL flexible server portal.
:::image type="content" source="./media/how-to-identify-slow-queries/shared-preload-library.png" alt-text="Screenshot of server parameters page with shared preload libraries parameter - Procedure." lightbox="./media/how-to-identify-slow-queries/shared-preload-library.png":::

   > [!NOTE]
   > Making this change will require a server restart.
-2. After the auto_explain extension is added to shared preload libraries and the server has restarted, change the below highlighted auto_explain server parameters to `ON` from the server parameters page on the Flexible Server portal and leave the remaining ones
+2. After the auto_explain extension is added to shared preload libraries and the server has restarted, change the below highlighted auto_explain server parameters to `ON` from the server parameters page on the Azure Database for PostgreSQL flexible server portal and leave the remaining ones
with default values as shown below.

   :::image type="content" source="./media/how-to-identify-slow-queries/auto-explain-procedure-parameters.png" alt-text="Screenshot of server parameters blade with auto_explain parameters - Procedure." lightbox="./media/how-to-identify-slow-queries/auto-explain-procedure-parameters.png":::
With troubleshooting guides and auto_explain extension in place, we explain the
We have a scenario where CPU utilization has spiked to 90% and would like to know the root cause of the spike. To debug the scenario, follow the steps mentioned below.
-1. As soon as you're alerted by a CPU scenario, go to the troubleshooting guides available under the Help tab on the Flexible server portal overview page.
+1. As soon as you're alerted by a CPU scenario, go to the troubleshooting guides available under the Help tab on the Azure Database for PostgreSQL flexible server portal overview page.
:::image type="content" source="./media/how-to-identify-slow-queries/troubleshooting-guides-blade.png" alt-text="Screenshot of troubleshooting guides menu." lightbox="./media/how-to-identify-slow-queries/troubleshooting-guides-blade.png":::
We have a scenario where CPU utilization has spiked to 90% and would like to kno
call autoexplain_test ();
```
-7. To understand what exact explanations are generated for the queries that are part of the stored procedure, use Postgres logs. Auto explain extension would have logged an entry in the logs every time the query execution was completed during the interval. Select the Logs section from the `Monitoring` tab from the Flexible server portal overview page.
+7. To understand exactly which explain plans are generated for the queries that are part of the stored procedure, use the Azure Database for PostgreSQL flexible server logs. The auto_explain extension would have logged an entry in the logs every time a query execution completed during the interval. Select the Logs section from the `Monitoring` tab on the Azure Database for PostgreSQL flexible server portal overview page.
:::image type="content" source="./media/how-to-identify-slow-queries/log-analytics-tab.png" alt-text="Screenshot of troubleshooting guides menu - Logs." lightbox="./media/how-to-identify-slow-queries/log-analytics-tab.png":::
Finalize Aggregate (cost=180185.84..180185.85 rows=1 width=4) (actual time=10387
```

> [!NOTE]
-> please note for demonstration purpose explain analyze output of only few queries used in the procedure are shared. The idea is one can gather explain analyze output of all queries from the logs, and then identify the slowest of those and try to tune them.
+> For demonstration purposes, only the explain analyze output of a few queries used in the procedure is shared. The idea is that you can gather the explain analyze output of all queries from the logs, identify the slowest ones, and then try to tune them.
## Related content
postgresql How To Integrate Azure Ai https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-integrate-azure-ai.md
Title: Integrate Azure AI capabilities into Azure Database for PostgreSQL Flexible Server -Preview
-description: Integrate Azure AI capabilities into Azure Database for PostgreSQL Flexible Server -Preview
+ Title: Integrate Azure AI capabilities Preview
+description: Integrate Azure AI capabilities into Azure Database for PostgreSQL - Flexible Server - Preview.
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-The `azure_ai` extension adds the ability to use [large language models](/training/modules/fundamentals-generative-ai/3-language%20models) (LLMs) and build [generative AI](/training/paths/introduction-generative-ai/) applications within an Azure Database for PostgreSQL Flexible Server database by integrating the power of [Azure AI services](/azure/ai-services/what-are-ai-services). Generative AI is a form of artificial intelligence in which LLMs are trained to generate original content based on natural language input. Using the `azure_ai` extension allows you to use generative AI's natural language query processing capabilities directly from the database.
+The `azure_ai` extension adds the ability to use [large language models](/training/modules/fundamentals-generative-ai/3-language%20models) (LLMs) and build [generative AI](/training/paths/introduction-generative-ai/) applications within an Azure Database for PostgreSQL flexible server database by integrating the power of [Azure AI services](/azure/ai-services/what-are-ai-services). Generative AI is a form of artificial intelligence in which LLMs are trained to generate original content based on natural language input. Using the `azure_ai` extension allows you to use generative AI's natural language query processing capabilities directly from the database.
-This tutorial showcases adding rich AI capabilities to an Azure Database for PostgreSQL Flexible Server using the `azure_ai` extension. It covers integrating both [Azure OpenAI](/azure/ai-services/openai/overview) and the [Azure AI Language service](/azure/ai-services/language-service/) into your database using the extension.
+This tutorial showcases adding rich AI capabilities to an Azure Database for PostgreSQL flexible server instance using the `azure_ai` extension. It covers integrating both [Azure OpenAI](/azure/ai-services/openai/overview) and the [Azure AI Language service](/azure/ai-services/language-service/) into your database using the extension.
## Prerequisites
This tutorial showcases adding rich AI capabilities to an Azure Database for Pos
- An [Azure AI Language](/azure/ai-services/language-service/overview) service. If you don't have a resource, you can [create a Language resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextAnalytics) in the Azure portal by following the instructions provided in the [quickstart for summarization](/azure/ai-services/language-service/summarization/custom/quickstart#create-a-new-resource-from-the-azure-portal) document. You can use the free pricing tier (`Free F0`) to try the service and upgrade later to a paid tier for production.
- - An Azure Database for PostgreSQL Flexible Server instance in your Azure subscription. If you don't have a resource, use either the [Azure portal](/azure/postgresql/flexible-server/quickstart-create-server-portal) or the [Azure CLI](/azure/postgresql/flexible-server/quickstart-create-server-cli) guide for creating one.
+ - An Azure Database for PostgreSQL flexible server instance in your Azure subscription. If you don't have a resource, use either the [Azure portal](/azure/postgresql/flexible-server/quickstart-create-server-portal) or the [Azure CLI](/azure/postgresql/flexible-server/quickstart-create-server-cli) guide for creating one.
## Connect to the database using `psql` in the Azure Cloud Shell
-Open the [Azure Cloud Shell](https://shell.azure.com/) in a web browser. Select **Bash** as the environment and, if prompted, select the subscription you used for your Azure Database for PostgreSQL Flexible Server database, then select **Create storage**.
+Open the [Azure Cloud Shell](https://shell.azure.com/) in a web browser. Select **Bash** as the environment and, if prompted, select the subscription you used for your Azure Database for PostgreSQL flexible server database, then select **Create storage**.
To retrieve the database connection details:
-1. Navigate to your Azure Database for PostgreSQL Flexible Server resource in the [Azure portal](https://portal.azure.com/).
+1. Navigate to your Azure Database for PostgreSQL flexible server resource in the [Azure portal](https://portal.azure.com/).
1. From the left-hand navigation menu, select **Connect** under **Settings** and copy the **Connection details** block.
To retrieve the database connection details:
The `azure_ai` extension allows you to integrate Azure OpenAI and Azure Cognitive Services into your database. To enable the extension in your database, follow the steps below:
-1. Add the extension to your allowlist as described in [how to use PostgreSQL extensions](/azure/postgresql/flexible-server/concepts-extensions#how-to-use-postgresql-extensions).
+1. Add the extension to your allowlist as described in [Extensions - Azure Database for PostgreSQL - Flexible Server](concepts-extensions.md#how-to-use-postgresql-extensions).
1. Verify that the extension was successfully added to the allowlist by running the following from the `psql` command prompt:
The `azure_ai.set_setting()` function lets you set the endpoint and critical val
> [!IMPORTANT]
>
-> Because the connection information for Azure AI services, including API keys, is stored in a configuration table in the database, the `azure_ai` extension defines a role called `azure_ai_settings_manager` to ensure this information is protected and accessible only to users assigned that role. This role enables reading and writing of settings related to the extension. Only superusers and members of the `azure_ai_settings_manager` role can invoke the `azure_ai.get_setting()` and `azure_ai.set_setting()` functions. In the Azure Database for PostgreSQL Flexible Server, all admin users are assigned the `azure_ai_settings_manager` role.
+> Because the connection information for Azure AI services, including API keys, is stored in a configuration table in the database, the `azure_ai` extension defines a role called `azure_ai_settings_manager` to ensure this information is protected and accessible only to users assigned that role. This role enables reading and writing of settings related to the extension. Only superusers and members of the `azure_ai_settings_manager` role can invoke the `azure_ai.get_setting()` and `azure_ai.set_setting()` functions. In Azure Database for PostgreSQL flexible server, all admin users are assigned the `azure_ai_settings_manager` role.
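As a hedged sketch of how those functions are typically used: the setting keys shown below follow the naming pattern the extension documents for Azure OpenAI, but verify them against your installed extension version, and never commit real keys to source control.

```sql
-- Store the Azure OpenAI endpoint and key that the extension should use.
-- The setting keys and placeholder values are illustrative assumptions.
SELECT azure_ai.set_setting('azure_openai.endpoint', 'https://<your-openai-resource>.openai.azure.com/');
SELECT azure_ai.set_setting('azure_openai.subscription_key', '<your-api-key>');

-- Read a setting back; requires membership in azure_ai_settings_manager.
SELECT azure_ai.get_setting('azure_openai.endpoint');
```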
## Generate vector embeddings with Azure OpenAI
The query uses the `<=>` [vector operator](https://github.com/pgvector/pgvector#
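To see how `<=>` (cosine distance) behaves in isolation, here is a minimal, self-contained sketch that is separate from the tutorial's `bill_summaries` table and only assumes the `vector` (pgvector) extension is enabled on the database:

```sql
-- Three-dimensional toy vectors; real embeddings such as text-embedding-ada-002 have 1536 dimensions.
CREATE TEMP TABLE demo_embeddings (id int, embedding vector(3));

INSERT INTO demo_embeddings VALUES
  (1, '[1, 0, 0]'),
  (2, '[0.9, 0.1, 0]'),
  (3, '[0, 1, 0]');

-- Smaller cosine distance means the row is more similar to the query vector.
SELECT id, embedding <=> '[1, 0, 0]' AS cosine_distance
FROM demo_embeddings
ORDER BY embedding <=> '[1, 0, 0]'
LIMIT 2;
```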
The Azure AI services integrations included in the `azure_cognitive` schema of the `azure_ai` extension provide a rich set of AI Language features accessible directly from the database. The functionalities include sentiment analysis, language detection, key phrase extraction, entity recognition, and text summarization. Access to these capabilities is enabled through the [Azure AI Language service](/azure/ai-services/language-service/overview).
-To review the complete Azure AI capabilities accessible through the extension, view the [Integrate Azure Database for PostgreSQL Flexible Server with Azure Cognitive Services](generative-ai-azure-cognitive.md).
+To review the complete Azure AI capabilities accessible through the extension, view the [Integrate Azure Database for PostgreSQL - Flexible Server with Azure Cognitive Services](generative-ai-azure-cognitive.md).
### Set the Azure AI Language service endpoint and key
In the output, you might notice a warning about an invalid document for which an
```sql
SELECT bill_id, one_sentence_summary FROM bill_summaries WHERE one_sentence_summary is NULL;
+```
You can then query the `bill_summaries` table to view the new, one-sentence summaries generated by the `azure_ai` extension for the other records in the table.
Congratulations, you just learned how to use the `azure_ai` extension to integra
## Related content

-- [How to use PostgreSQL extensions in Azure Database for PostgreSQL Flexible Server](/azure/postgresql/flexible-server/concepts-extensions)
+- [How to use PostgreSQL extensions in Azure Database for PostgreSQL - Flexible Server](/azure/postgresql/flexible-server/concepts-extensions)
- [Learn how to generate embeddings with Azure OpenAI](/azure/ai-services/openai/how-to/embeddings)
- [Azure OpenAI Service embeddings models](/azure/ai-services/openai/concepts/models#embeddings-models-1)
- [Understand embeddings in Azure OpenAI Service](/azure/ai-services/openai/concepts/understand-embeddings)
postgresql How To Maintenance Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-maintenance-portal.md
Title: Azure Database for PostgreSQL - Flexible Server - Scheduled maintenance - Azure portal
+ Title: Scheduled maintenance - Azure portal
description: Learn how to configure scheduled maintenance settings for an Azure Database for PostgreSQL - Flexible Server from the Azure portal.
Last updated 11/30/2021
-# Manage scheduled maintenance settings for Azure Database for PostgreSQL ΓÇô Flexible Server
+# Manage scheduled maintenance settings for Azure Database for PostgreSQL - Flexible Server
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-You can specify maintenance options for each flexible server in your Azure subscription. Options include the maintenance schedule and notification settings for upcoming and finished maintenance events.
+You can specify maintenance options for each Azure Database for PostgreSQL flexible server instance in your Azure subscription. Options include the maintenance schedule and notification settings for upcoming and finished maintenance events.
## Prerequisites

To complete this how-to guide, you need:
-- An [Azure Database for PostgreSQL - Flexible Server](quickstart-create-server-portal.md)
+- An [Azure Database for PostgreSQL flexible server](quickstart-create-server-portal.md) instance
## Specify maintenance schedule options
-1. On the PostgreSQL server page, under the **Settings** heading, choose **Maintenance** to open scheduled maintenance options.
+1. On the Azure Database for PostgreSQL flexible server instance page, under the **Settings** heading, choose **Maintenance** to open scheduled maintenance options.
2. The default (system-managed) schedule is a random day of the week, and a 60-minute window for maintenance start between 11pm and 7am local server time. If you want to customize this schedule, choose **Custom schedule**. You can then select a preferred day of the week, and a 60-minute window for maintenance start time.

## Notifications about scheduled maintenance events
-You can use Azure Service Health to [view notifications](../../service-health/service-notifications.md) about upcoming and performed scheduled maintenance on your flexible server. You can also [set up](../../service-health/resource-health-alert-monitor-guide.md) alerts in Azure Service Health to get notifications about maintenance events.
+You can use Azure Service Health to [view notifications](../../service-health/service-notifications.md) about upcoming and performed scheduled maintenance on your Azure Database for PostgreSQL flexible server instance. You can also [set up](../../service-health/resource-health-alert-monitor-guide.md) alerts in Azure Service Health to get notifications about maintenance events.
## Next steps
postgresql How To Manage Azure Ad Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-manage-azure-ad-users.md
Title: Manage Microsoft Entra users - Azure Database for PostgreSQL - Flexible Server
-description: This article describes how you can manage Microsoft Entra ID enabled roles to interact with an Azure Database for PostgreSQL - Flexible Server.
+ Title: Manage Microsoft Entra users
+description: This article describes how you can manage Microsoft Entra ID enabled roles to interact with Azure Database for PostgreSQL - Flexible Server.
[!INCLUDE [applies-to-postgresql-Flexible-server](../includes/applies-to-postgresql-Flexible-server.md)]
-This article describes how you can create a Microsoft Entra ID enabled database roles within an Azure Database for PostgreSQL server.
+This article describes how you can create Microsoft Entra ID enabled database roles within an Azure Database for PostgreSQL flexible server instance.
> [!NOTE]
-> This guide assumes you already enabled Microsoft Entra authentication on your PostgreSQL Flexible server.
+> This guide assumes you already enabled Microsoft Entra authentication on your Azure Database for PostgreSQL flexible server instance.
> See [How to Configure Microsoft Entra authentication](./how-to-configure-sign-in-azure-ad-authentication.md) If you'd like to learn how to create and manage Azure subscription users and their privileges, you can visit the [Azure role-based access control (Azure RBAC) article](../../role-based-access-control/built-in-roles.md) or review [how to customize roles](../../role-based-access-control/custom-roles.md). <a name='create-or-delete-azure-ad-administrators-using-azure-portal-or-azure-resource-manager-arm-api'></a>
-## Create or Delete Microsoft Entra administrators using Azure portal or Azure Resource Manager (ARM) API
+## Create or delete Microsoft Entra administrators using Azure portal or Azure Resource Manager (ARM) API
-1. Open **Authentication** page for your Azure Database for PostgreSQL Flexible Server in Azure portal
+1. Open the **Authentication** page for your Azure Database for PostgreSQL flexible server instance in the Azure portal.
1. To add an administrator, select **Add Microsoft Entra Admin** and select a user, group, application, or a managed identity from the current Microsoft Entra tenant. 1. To remove an administrator, select the **Delete** icon for the one to remove. 1. Select **Save** and wait for the provisioning operation to complete. A CLI-based alternative is sketched after this list.
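If you'd rather script this step, recent Azure CLI versions include an `az postgres flexible-server ad-admin` command group. The following sketch uses placeholder resource names and a placeholder object ID; verify the command group and its parameters with `az postgres flexible-server ad-admin --help` for your CLI version.

```azurecli-interactive
# Add a Microsoft Entra administrator to the server (placeholder names and object ID)
az postgres flexible-server ad-admin create \
  --resource-group myresourcegroup \
  --server-name mydemoserver \
  --display-name user@contoso.com \
  --object-id 00000000-0000-0000-0000-000000000000

# Remove the same administrator
az postgres flexible-server ad-admin delete \
  --resource-group myresourcegroup \
  --server-name mydemoserver \
  --object-id 00000000-0000-0000-0000-000000000000
```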
If you like to learn about how to create and manage Azure subscription users and
## Manage Microsoft Entra roles using SQL
-Once first Microsoft Entra administrator is created from the Azure portal or API, you can use the administrator role to manage Microsoft Entra roles in your Azure Database for PostgreSQL Flexible Server.
+Once the first Microsoft Entra administrator is created from the Azure portal or API, you can use the administrator role to manage Microsoft Entra roles in your Azure Database for PostgreSQL flexible server instance.
-We recommend getting familiar with [Microsoft identity platform](../../active-directory/develop/v2-overview.md). for best use of Microsoft Entra integration with Azure Database for PostgreSQL Flexible Servers.
+We recommend getting familiar with [Microsoft identity platform](../../active-directory/develop/v2-overview.md) for best use of Microsoft Entra integration with Azure Database for PostgreSQL flexible server.
-### Principal Types
+### Principal types
-Azure Database for PostgreSQL Flexible servers internally stores mapping between PostgreSQL database roles and unique identifiers of AzureAD objects.
+Azure Database for PostgreSQL flexible server internally stores the mapping between PostgreSQL database roles and the unique identifiers of Microsoft Entra objects.
Each PostgreSQL database role can be mapped to one of the following Microsoft Entra object types: 1. **User** - Including Tenant local and guest users. 1. **Service Principal**. Including [Applications and Managed identities](../../active-directory/develop/app-objects-and-service-principals.md)
-1. **Group** When a PostgreSQL Role is linked to a Microsoft Entra group, any user or service principal member of this group can connect to the Azure Database for PostgreSQL Flexible Server instance with the group role.
+1. **Group**. When a PostgreSQL role is linked to a Microsoft Entra group, any user or service principal member of this group can connect to the Azure Database for PostgreSQL flexible server instance with the group role.
<a name='list-azure-ad-roles-using-sql'></a>
For example: select * from pgaadauth_create_principal_with_oid('accounting_appli
## Enable Microsoft Entra authentication for an existing PostgreSQL role using SQL
-Azure Database for PostgreSQL Flexible Servers uses Security Labels associated with database roles to store Microsoft Entra ID mapping.
+Azure Database for PostgreSQL flexible server uses security labels associated with database roles to store Microsoft Entra ID mapping.
You can use the following SQL to assign security label:
postgresql How To Manage Firewall Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-manage-firewall-cli.md
Title: Manage firewall rules - Azure CLI - Azure Database for PostgreSQL - Flexible Server
+ Title: Manage firewall rules - Azure CLI
description: Create and manage firewall rules for Azure Database for PostgreSQL - Flexible Server using Azure CLI command line.
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-Azure Database for PostgreSQL - Flexible Server supports two types of mutually exclusive network connectivity methods to connect to your flexible server. The two options are:
+Azure Database for PostgreSQL flexible server supports two types of mutually exclusive network connectivity methods to connect to your Azure Database for PostgreSQL flexible server instance. The two options are:
-* Public access (allowed IP addresses). That method can be further secured by using [Private Link](./concepts-networking-private-link.md) based networking with Azure Database for PostgreSQL - Flexible Server in Preview.
+* Public access (allowed IP addresses). This method can be further secured by using [Private Link](./concepts-networking-private-link.md)-based networking with Azure Database for PostgreSQL flexible server, currently in preview.
* Private access (VNet Integration)
-In this article, we will focus on creation of PostgreSQL server with **Public access (allowed IP addresses)** using Azure CLI and will provide an overview on Azure CLI commands you can use to create, update, delete, list, and show firewall rules after creation of server. With *Public access (allowed IP addresses)*, the connections to the PostgreSQL server are restricted to allowed IP addresses only. The client IP addresses need to be allowed in firewall rules. To learn more about it, refer to [Public access (allowed IP addresses)](./concepts-networking.md#public-access-allowed-ip-addresses). The firewall rules can be defined at the time of server creation (recommended) but can be added later as well.
+This article focuses on the creation of an Azure Database for PostgreSQL flexible server instance with **Public access (allowed IP addresses)** using the Azure CLI and provides an overview of the Azure CLI commands you can use to create, update, delete, list, and show firewall rules after server creation. With *Public access (allowed IP addresses)*, connections to the Azure Database for PostgreSQL flexible server instance are restricted to allowed IP addresses only. The client IP addresses need to be allowed in firewall rules. To learn more, see [Public access (allowed IP addresses)](./concepts-networking.md#public-access-allowed-ip-addresses). Firewall rules can be defined at the time of server creation (recommended), but can also be added later.
## Launch Azure Cloud Shell
If you prefer to install and use the CLI locally, this quickstart requires Azure
## Prerequisites
-You'll need to sign in to your account using the [az login](/cli/azure/reference-index#az-login) command. Note the **ID** property, which refers to **Subscription ID** for your Azure account.
+You need to sign in to your account using the [az login](/cli/azure/reference-index#az-login) command. Note the **ID** property, which refers to **Subscription ID** for your Azure account.
```azurecli-interactive az login
Select the specific subscription under your account using [az account set](/cli/
az account set --subscription <subscription id> ```
-## Create firewall rule during flexible server create using Azure CLI
+## Create firewall rule during Azure Database for PostgreSQL flexible server instance creation using Azure CLI
-You can use the `az postgres flexible-server --public access` command to create the flexible server with *Public access (allowed IP addresses)* and configure the firewall rules during creation of flexible server. You can use the **--public-access** switch to provide the allowed IP addresses that will be able to connect to the server. You can provide single or range of IP addresses to be included in the allowed list of IPs. IP address range must be dash separated and does not contain any spaces. There are various options to create a flexible server using CLI as shown in the example below.
+You can use the `az postgres flexible-server create` command to create the Azure Database for PostgreSQL flexible server instance with *Public access (allowed IP addresses)* and configure the firewall rules during creation of the instance. Use the **--public-access** parameter to provide the allowed IP addresses that can connect to the server. You can provide a single IP address or a range of IP addresses to include in the allowed list. An IP address range must be dash-separated and must not contain any spaces. There are various options to create an Azure Database for PostgreSQL flexible server instance using the CLI, as shown in the following examples.
Refer to the Azure CLI reference documentation <!--FIXME --> for the complete list of configurable CLI parameters. For example, in the below commands you can optionally specify the resource group. -- Create a flexible server with public access and add client IP address to have access to the server
+- Create an Azure Database for PostgreSQL flexible server instance with public access and add client IP address to have access to the server:
```azurecli-interactive az postgres flexible-server create --public-access <my_client_ip> ```-- Create a flexible server with public access and add the range of IP address to have access to this server
+- Create an Azure Database for PostgreSQL flexible server instance with public access and add the range of IP address to have access to this server:
```azurecli-interactive az postgres flexible-server create --public-access <start_ip_address-end_ip_address> ```-- Create a flexible server with public access and allow applications from Azure IP addresses to connect to your flexible server
+- Create an Azure Database for PostgreSQL flexible server instance with public access and allow applications from Azure IP addresses to connect to your Azure Database for PostgreSQL flexible server instance:
```azurecli-interactive az postgres flexible-server create --public-access 0.0.0.0 ``` > [!IMPORTANT] > This option configures the firewall to allow public access from Azure services and resources within Azure to this server including connections from the subscriptions of other customers. When selecting this option, make sure your login and user permissions limit access to only authorized users. >-+ ```azurecli-interactive az postgres flexible-server create --public-access all ``` >[!Note]
- > The above command will create a firewall rule with start IP address=0.0.0.0, end IP address=255.255.255.255 and no IP addresses will be blocked. Any host on the Internet can access this server. It is strongly recommended to use this rule only temporarily and only on test servers that do not contain sensitive data.
-- Create a flexible server with public access and with no IP address
+ > The preceding command creates a firewall rule with start IP address=0.0.0.0, end IP address=255.255.255.255 and no IP addresses are blocked. Any host on the internet can access this server. It is strongly recommended to use this rule only temporarily and only on test servers that don't contain sensitive data.
+- Create an Azure Database for PostgreSQL flexible server instance with public access and with no IP address:
```azurecli-interactive az postgres flexible-server create --public-access none ``` >[!Note]
- > we do not recommend to create a server without any firewall rules. If you do not add any firewall rules then no client will be able to connect to the server.
+ > We don't recommend creating a server without any firewall rules. If you don't add any firewall rules then no client will be able to connect to the server.
## Create and manage firewall rule after server create The **az postgres flexible-server firewall-rule** command is used from the Azure CLI to create, delete, list, show, and update firewall rules. Commands:-- **create**: Create an flexible server firewall rule.-- **list**: List the flexible server firewall rules.-- **update**: Update an flexible server firewall rule.-- **show**: Show the details of an flexible server firewall rule.-- **delete**: Delete an flexible server firewall rule.
+- **create**: Create an Azure Database for PostgreSQL flexible server firewall rule.
+- **list**: List the Azure Database for PostgreSQL flexible server firewall rules.
+- **update**: Update an Azure Database for PostgreSQL flexible server firewall rule.
+- **show**: Show the details of an Azure Database for PostgreSQL flexible server firewall rule.
+- **delete**: Delete an Azure Database for PostgreSQL flexible server firewall rule.
-Refer to the Azure CLI reference documentation <!--FIXME --> for the complete list of configurable CLI parameters. For example, in the below commands you can optionally specify the resource group.
+Refer to the Azure CLI reference documentation <!--FIXME --> for the complete list of configurable CLI parameters. For example, in the following commands you can optionally specify the resource group.
### Create a firewall rule Use the `az postgres flexible-server firewall-rule create` command to create a new firewall rule on the server.
To allow access for a single IP address, just provide single IP address, as in t
az postgres flexible-server firewall-rule create --name mydemoserver --resource-group testGroup --start-ip-address 1.1.1.1 ```
-To allow applications from Azure IP addresses to connect to your flexible server, provide the IP address 0.0.0.0 as the Start IP, as in this example.
+To allow applications from Azure IP addresses to connect to your Azure Database for PostgreSQL flexible server instance, provide the IP address 0.0.0.0 as the Start IP, as in this example.
```azurecli-interactive az postgres flexible-server firewall-rule create --name mydemoserver --resource-group testGroup --start-ip-address 0.0.0.0 ```
az postgres flexible-server firewall-rule create --name mydemoserver --resource-
> [!IMPORTANT] > This option configures the firewall to allow public access from Azure services and resources within Azure to this server including connections from the subscriptions of other customers. When selecting this option, make sure your login and user permissions limit access to only authorized users. >
-Upon success, each create command output lists the details of the firewall rule you have created, in JSON format (by default). If there is a failure, the output shows error message text instead.
+Upon success, each create command output lists the details of the firewall rule you created, in JSON format (by default). If there is a failure, the output shows error message text instead.
### List firewall rules Use the `az postgres flexible-server firewall-rule list` command to list the existing server firewall rules on the server. Notice that the server name attribute is specified in the **--name** switch.
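For example, the following minimal sketch reuses the placeholder server and resource group names from the earlier examples to list all firewall rules on the server:

```azurecli-interactive
# List the firewall rules configured on the server
az postgres flexible-server firewall-rule list --name mydemoserver --resource-group testGroup
```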
Use the `az postgres flexible-server firewall-rule update` command to update an
```azurecli-interactive az postgres flexible-server firewall-rule update --name mydemoserver --rule-name FirewallRule1 --resource-group testGroup --start-ip-address 13.83.152.0 --end-ip-address 13.83.152.1 ```
-Upon success, the command output lists the details of the firewall rule you have updated, in JSON format (by default). If there is a failure, the output shows error message text instead.
+Upon success, the command output lists the details of the firewall rule you updated, in JSON format (by default). If there is a failure, the output shows error message text instead.
> [!NOTE]
-> If the firewall rule does not exist, the rule is created by the update command.
+> If the firewall rule doesn't exist, the rule is created by the update command.
### Show firewall rule details Use the `az postgres flexible-server firewall-rule show` command to show the existing firewall rule details from the server. Provide the name of the existing firewall rule as input. ```azurecli-interactive
postgresql How To Manage Firewall Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-manage-firewall-portal.md
Title: Manage firewall rules - Azure portal - Azure Database for PostgreSQL - Flexible Server
+ Title: Manage firewall rules - Azure portal
description: Create and manage firewall rules for Azure Database for PostgreSQL - Flexible Server using the Azure portal
Last updated 11/30/2021
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-Azure Database for PostgreSQL - Flexible Server supports two types of mutually exclusive network connectivity methods to connect to your flexible server. The two options are:
+Azure Database for PostgreSQL flexible server supports two types of mutually exclusive network connectivity methods to connect to your flexible server. The two options are:
-* Public access (allowed IP addresses). That method can be further secured by using [Private Link](./concepts-networking-private-link.md) based networking with Azure Database for PostgreSQL - Flexible Server in Preview.
+* Public access (allowed IP addresses). This method can be further secured by using [Private Link](./concepts-networking-private-link.md)-based networking with Azure Database for PostgreSQL flexible server, currently in preview.
* Private access (VNet Integration)
-In this article, we'll focus on creation of PostgreSQL server with **Public access (allowed IP addresses)** using Azure portal and will provide an overview of managing firewall rules after creation of Flexible Server. With *Public access (allowed IP addresses)*, the connections to the PostgreSQL server are restricted to allowed IP addresses only. The client IP addresses need to be allowed in firewall rules. To learn more about it, refer to [Public access (allowed IP addresses)](./concepts-networking.md#public-access-allowed-ip-addresses). The firewall rules can be defined at the time of server creation (recommended) but can be added later as well. In this article, we'll provide an overview on how to create and manage firewall rules using public access (allowed IP addresses).
+This article focuses on the creation of an Azure Database for PostgreSQL flexible server instance with **Public access (allowed IP addresses)** using the Azure portal, and provides an overview of how to create and manage firewall rules after the instance is created. With *Public access (allowed IP addresses)*, connections to the Azure Database for PostgreSQL flexible server instance are restricted to allowed IP addresses only. The client IP addresses need to be allowed in firewall rules. To learn more, see [Public access (allowed IP addresses)](./concepts-networking.md#public-access-allowed-ip-addresses). Firewall rules can be defined at the time of server creation (recommended), but can also be added later.
## Create a firewall rule when creating a server 1. Select **Create a resource** (+) in the upper-left corner of the portal. 2. Select **Databases** > **Azure Database for PostgreSQL**. You can also enter **PostgreSQL** in the search box to find the service.
-3. Select **Flexible Server** as the deployment option.
+<!--Note this no longer appears in portal creation. 3. Select **Flexible Server** as the deployment option.-->
4. Fill out the **Basics** form. 5. Go to the **Networking** tab to configure how you want to connect to your server.
-6. In the **Connectivity method**, select *Public access (allowed IP addresses)*. To create the **Firewall rules**, specify the Firewall rule name and single IP address, or a range of addresses. If you want to limit the rule to a single IP address, type the same address in the field for Start IP address and End IP address. Opening the firewall enables administrators, users, and applications to access any database on the PostgreSQL server to which they have valid credentials.
+6. In the **Connectivity method**, select *Public access (allowed IP addresses)*. To create the **Firewall rules**, specify the Firewall rule name and single IP address, or a range of addresses. If you want to limit the rule to a single IP address, type the same address in the field for Start IP address and End IP address. Opening the firewall enables administrators, users, and applications to access any database on the Azure Database for PostgreSQL flexible server instance to which they have valid credentials.
> [!Note]
- > Azure Database for PostgreSQL - Flexible Server creates a firewall at the server level. It prevents external applications and tools from connecting to the server and any databases on the server, unless you create a rule to open the firewall for specific IP addresses.
-7. Select **Review + create** to review your flexible server configuration.
+ > Azure Database for PostgreSQL flexible server creates a firewall at the server level. It prevents external applications and tools from connecting to the server and any databases on the server, unless you create a rule to open the firewall for specific IP addresses.
+7. Select **Review + create** to review your Azure Database for PostgreSQL flexible server configuration.
8. Select **Create** to provision the server. Provisioning can take a few minutes. ## Create a firewall rule after server is created
-1. In the [Azure portal](https://portal.azure.com/), select the Azure Database for PostgreSQL - Flexible Server on which you want to add firewall rules.
-2. On the Flexible Server page, under **Settings** heading, click **Networking** to open the Networking page for Flexible Server.
+1. In the [Azure portal](https://portal.azure.com/), select the Azure Database for PostgreSQL flexible server instance on which you want to add firewall rules.
+2. On the Azure Database for PostgreSQL flexible server instance page, under the **Settings** heading, select **Networking** to open the Networking page.
<!--![Azure portal - click Connection Security](./media/howto-manage-firewall-portal/1-connection-security.png)-->
In this article, we'll focus on creation of PostgreSQL server with **Public acce
<!--![Bing search for What is my IP](./media/howto-manage-firewall-portal/3-what-is-my-ip.png)-->
-5. Add additional address ranges. In the firewall rules for the Azure Database for PostgreSQL - Flexible Server, you can specify a single IP address, or a range of addresses. If you want to limit the rule to a single IP address, type the same address in the field for Start IP address and End IP address. Opening the firewall enables administrators, users, and applications to access any database on the PostgreSQL server to which they have valid credentials.
+5. Add more address ranges. In the firewall rules for the Azure Database for PostgreSQL flexible server instance, you can specify a single IP address, or a range of addresses. If you want to limit the rule to a single IP address, type the same address in the field for Start IP address and End IP address. Opening the firewall enables administrators, users, and applications to access any database on the Azure Database for PostgreSQL flexible server instance to which they have valid credentials.
<!--![Azure portal - firewall rules](./media/howto-manage-firewall-portal/4-specify-addresses.png)-->
In this article, we'll focus on creation of PostgreSQL server with **Public acce
## Connecting from Azure
-You may want to enable resources or applications deployed in Azure to connect to your flexible server. This includes web applications hosted in Azure App Service, running on an Azure VM, an Azure Data Factory data management gateway and many more.
+You may want to enable resources or applications deployed in Azure to connect to your Azure Database for PostgreSQL flexible server instance. This includes web applications hosted in Azure App Service, applications running on an Azure VM, an Azure Data Factory data management gateway, and many more.
When an application within Azure attempts to connect to your server, the firewall verifies that Azure connections are allowed. You can enable this setting by selecting the **Allow public access from Azure services and resources within Azure to this server** option in the portal from the **Networking** tab and selecting **Save**.
-The resources don't need to be in the same virtual network (VNet) or resource group for the firewall rule to enable those connections. If the connection attempt isn't allowed, the request doesn't reach the Azure Database for PostgreSQL - Flexible Server.
+The resources don't need to be in the same virtual network (VNet) or resource group for the firewall rule to enable those connections. If the connection attempt isn't allowed, the request doesn't reach the Azure Database for PostgreSQL flexible server instance.
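If you prefer to script this setting rather than configure it in the portal, the same effect can be achieved with a firewall rule that uses the special 0.0.0.0 address, as described in the Azure CLI how-to article. A minimal sketch with placeholder names (the rule name is illustrative):

```azurecli-interactive
# Allow connections from Azure services and resources within Azure (special 0.0.0.0 rule)
az postgres flexible-server firewall-rule create --name mydemoserver --resource-group testGroup --rule-name AllowAllAzureIps --start-ip-address 0.0.0.0 --end-ip-address 0.0.0.0
```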
> [!IMPORTANT] > This option configures the firewall to allow all connections from Azure including connections from the subscriptions of other customers. When selecting this option, make sure your login and user permissions limit access to only authorized users. >
-> We recommend choosing the **Private access (VNet Integration)** to securely access Flexible Server.
+> We recommend choosing **Private access (VNet Integration)** to securely access Azure Database for PostgreSQL flexible server.
> ## Manage existing firewall rules through the Azure portal
Repeat the following steps to manage the firewall rules.
## Next steps - Learn more about [Networking in Azure Database for PostgreSQL - Flexible Server](./concepts-networking.md) - Understand more about [Azure Database for PostgreSQL - Flexible Server firewall rules](./concepts-networking.md#public-access-allowed-ip-addresses)-- [Create and manage Azure Database for PostgreSQL firewall rules using Azure CLI](./how-to-manage-firewall-cli.md).
+- [Create and manage Azure Database for PostgreSQL - Flexible Server firewall rules using Azure CLI](./how-to-manage-firewall-cli.md).
postgresql How To Manage High Availability Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-manage-high-availability-portal.md
Title: Manage high availability - Azure portal - Azure Database for PostgreSQL - Flexible Server
+ Title: Manage high availability - Azure portal
description: This article describes how to enable or disable high availability in Azure Database for PostgreSQL - Flexible Server through the Azure portal.
Last updated 06/23/2022
-# Manage high availability in Flexible Server
+# Manage high availability in Azure Database for PostgreSQL - Flexible Server
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-This article describes how you can enable or disable high availability configuration in your flexible server in both zone-redundant and same-zone deployment models.
+This article describes how you can enable or disable high availability configuration in your Azure Database for PostgreSQL flexible server instance in both zone-redundant and same-zone deployment models.
-High availability feature provisions physically separate primary and standby replica with the same zone or across zones depending on the deployment model. For more details, see [high availability concepts documentation](./concepts-high-availability.md). You may choose to enable high availability at the time of flexible server creation or after the creation.
+The high availability feature provisions a physically separate primary and standby replica within the same zone or across zones, depending on the deployment model. For more information, see the [high availability concepts documentation](./concepts-high-availability.md). You can choose to enable high availability when you create the Azure Database for PostgreSQL flexible server instance or after creation.
-This page provides guidelines how you can enable or disable high availability. This operation does not change your other settings including VNET configuration, firewall settings, and backup retention. Similarly, enabling and disabling of high availability is an online operation and does not impact your application connectivity and operations.
+This page provides guidelines on how you can enable or disable high availability. This operation doesn't change your other settings, including VNET configuration, firewall settings, and backup retention. Similarly, enabling and disabling high availability is an online operation and doesn't impact your application connectivity and operations.
-## Pre-requisites
+## Prerequisites
> [!IMPORTANT] > For the list of regions that support Zone redundant high availability, please review the supported regions [here](./overview.md#azure-regions). ## Enable high availability during server creation
-This section provides details specifically for HA-related fields. You can follow these steps to deploy high availability while creating your flexible server.
+This section provides details specifically for HA-related fields. You can follow these steps to deploy high availability while creating your Azure Database for PostgreSQL flexible server instance.
-1. In the [Azure portal](https://portal.azure.com/), choose Flexible Server and click create. For details on how to fill details such as **Subscription**, **Resource group**, **server name**, **region**, and other fields, see how-to documentation for the server creation.
+1. In the [Azure portal](https://portal.azure.com/), choose Azure Database for PostgreSQL flexible server and select **Create**. For information about how to fill in fields such as **Subscription**, **Resource group**, **server name**, **region**, and others, see the how-to documentation for server creation.
:::image type="content" source="./media/how-to-manage-high-availability-portal/subscription-region.png" alt-text="Screenshot of subscription and region selection.":::
-2. Choose your **availability zone**. This is useful if you want to collocate your application in the same availability zone as the database to reduce latency. Choose **No Preference** if you want the flexible server to deploy the primary server on any availability zone. Note that only if you choose the availability zone for the primary in a zone-redundant HA deployment, you will be allowed to choose the standby availability zone.
+2. Choose your **availability zone**. This is useful if you want to colocate your application in the same availability zone as the database to reduce latency. Choose **No Preference** if you want the Azure Database for PostgreSQL flexible server instance to deploy the primary server on any availability zone. Note that you can choose the standby availability zone only if you choose an availability zone for the primary in a zone-redundant HA deployment.
:::image type="content" source="./media/how-to-manage-high-availability-portal/zone-selection.png" alt-text="Screenshot of availability zone selection.":::
-3. Click the checkbox for **Enable high availability**. That will open up an option to choose high availability mode. If the region does not support AZs, then only same-zone mode is enabled.
+3. Select the checkbox for **Enable high availability**. That opens up an option to choose high availability mode. If the region doesn't support AZs, then only same-zone mode is enabled.
:::image type="content" source="./media/how-to-manage-high-availability-portal/choose-high-availability-deployment-model.png" alt-text="High availability checkbox and mode selection.":::
This section provides details specifically for HA-related fields. You can follow
:::image type="content" source="./media/how-to-manage-high-availability-portal/choose-standby-availability-zone.png" alt-text="Screenshot of Standby AZ selection.":::
-5. If you want to change the default compute and storage, click **Configure server**.
+5. If you want to change the default compute and storage, select **Configure server**.
:::image type="content" source="./media/how-to-manage-high-availability-portal/configure-server.png" alt-text="Screenshot of configure compute and storage screen.":::
-6. If high availability option is checked, the burstable tier will not be available to choose. You can choose either
+6. If the high availability option is selected, the Burstable tier isn't available. You can choose either
**General purpose** or **Memory Optimized** compute tiers. Then you can select **compute size** for your choice from the dropdown. :::image type="content" source="./media/how-to-manage-high-availability-portal/select-compute.png" alt-text="Compute tier selection screen.":::
This section provides details specifically for HA-related fields. You can follow
:::image type="content" source="./media/how-to-manage-high-availability-portal/storage-backup.png" alt-text="Screenshot of Storage Backup.":::
-8. Click **Save**.
+8. Select **Save**.
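The portal steps above can also be scripted. Recent Azure CLI versions accept a `--high-availability` parameter on `az postgres flexible-server create`; the following is a hedged sketch with placeholder names and an illustrative compute SKU, so confirm the accepted values with `az postgres flexible-server create --help`.

```azurecli-interactive
# Create a server with zone-redundant high availability (placeholder names; tier and SKU are illustrative)
az postgres flexible-server create \
  --resource-group myresourcegroup \
  --name mydemoserver \
  --tier GeneralPurpose \
  --sku-name Standard_D2s_v3 \
  --high-availability ZoneRedundant
```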
## Enable high availability post server creation
-Follow these steps to enable high availability for your existing flexible server.
+Follow these steps to enable high availability for your existing Azure Database for PostgreSQL flexible server instance.
-1. In the [Azure portal](https://portal.azure.com/), select your existing PostgreSQL flexible server.
+1. In the [Azure portal](https://portal.azure.com/), select your existing Azure Database for PostgreSQL flexible server instance.
-2. On the flexible server page, click **High Availability** from the left panel to open high availability page.
+2. On the Azure Database for PostgreSQL flexible server instance page, select **High Availability** from the left panel to open high availability page.
:::image type="content" source="./media/how-to-manage-high-availability-portal/high-availability-left-panel.png" alt-text="Left panel selection screen.":::
-3. Click on the **Enable high availability** checkbox to **enable** the option. It shows same zone HA and zone-redundant HA option. If you choose zone-redundant HA, you can choose the standby AZ.
+3. Select the **Enable high availability** checkbox to **enable** the option. It shows same zone HA and zone-redundant HA option. If you choose zone-redundant HA, you can choose the standby AZ.
:::image type="content" source="./media/how-to-manage-high-availability-portal/enable-same-zone-high-availability-blade.png" alt-text="Screenshot to enable same zone high availability."::: :::image type="content" source="./media/how-to-manage-high-availability-portal/enable-zone-redundant-high-availability-blade.png" alt-text="Screenshot to enable zone redundant high availability.":::
-4. A confirmation dialog will show that states that by enabling high availability, your cost will increase due to additional server and storage deployment.
+4. A confirmation dialog appears, stating that enabling high availability increases your cost because an additional server and storage are deployed.
-5. Click **Enable HA** button to enable the high availability.
+5. Select the **Enable HA** button to enable high availability.
-6. A notification will show up stating the high availability deployment is in progress.
+6. A notification appears stating the high availability deployment is in progress.
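The same change can be scripted with `az postgres flexible-server update`, assuming your Azure CLI version supports the `--high-availability` parameter (placeholder names):

```azurecli-interactive
# Enable zone-redundant high availability on an existing server
az postgres flexible-server update --resource-group myresourcegroup --name mydemoserver --high-availability ZoneRedundant
```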
## Disable high availability
-Follow these steps to disable high availability for your flexible server that is already configured with high availability.
+Follow these steps to disable high availability for your Azure Database for PostgreSQL flexible server instance that is already configured with high availability.
-1. In the [Azure portal](https://portal.azure.com/), select your existing Azure Database for PostgreSQL - Flexible Server.
+1. In the [Azure portal](https://portal.azure.com/), select your existing Azure Database for PostgreSQL flexible server instance.
-2. On the flexible server page, click **High Availability** from the front panel to open high availability page.
+2. On the Azure Database for PostgreSQL flexible server instance page, select **High Availability** from the front panel to open high availability page.
:::image type="content" source="./media/how-to-manage-high-availability-portal/high-availability-left-panel.png" alt-text="Left panel selection screenshot.":::
-3. Click on the **High availability** checkbox to **disable** the option. Then click **Save** to save the change.
+3. Select the **High availability** checkbox to **disable** the option. Then select **Save** to save the change.
:::image type="content" source="./media/how-to-manage-high-availability-portal/disable-high-availability.png" alt-text="Screenshot showing disable high availability.":::
-4. A confirmation dialog will be shown where you can confirm disabling high availability.
+4. A confirmation dialog is shown where you can confirm disabling high availability.
-5. Click **Disable HA** button to disable the high availability.
+5. Select the **Disable HA** button to disable high availability.
-6. A notification will show up decommissioning of the high availability deployment is in progress.
+6. A notification appears stating that decommissioning of the high availability deployment is in progress.
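To script the same operation, the `Disabled` value can be passed to the same `--high-availability` parameter (a sketch under the same assumptions as the earlier CLI examples):

```azurecli-interactive
# Disable high availability on an existing server
az postgres flexible-server update --resource-group myresourcegroup --name mydemoserver --high-availability Disabled
```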
## Forced failover
-Follow these steps to force failover your primary to the standby flexible server. This will immediately bring the primary down and triggers a failover to the standby server. This is useful for cases like testing the unplanned outage failover time for your workload.
+Follow these steps to force a failover of your primary to the standby Azure Database for PostgreSQL flexible server instance. This immediately brings the primary down and triggers a failover to the standby server. This is useful for cases like testing the unplanned-outage failover time for your workload.
-1. In the [Azure portal](https://portal.azure.com/), select your existing flexible server that has high availability feature already enabled.
-2. On the flexible server page, click High Availability from the front panel to open high availability page.
+1. In the [Azure portal](https://portal.azure.com/), select your existing Azure Database for PostgreSQL flexible server instance that has high availability feature already enabled.
+2. On the Azure Database for PostgreSQL flexible server instance page, select High Availability from the front panel to open high availability page.
3. Check the Primary availability zone and the Standby availability zone
-4. Click on Forced Failover to initiate the manual failover procedure. A pop up will inform you on the potential downtime until the failover is complete. Read the message and click Ok.
-5. A notification will show up mentioning that failover is in progress.
-6. Once failover to the standby server is complete, a notification will pop up.
+4. Select Forced Failover to initiate the manual failover procedure. A pop-up informs you of the potential downtime until the failover is complete. Read the message and select OK.
+5. A notification appears mentioning that failover is in progress.
+6. Once failover to the standby server is complete, a notification pops up.
7. Check the new Primary availability zone and the Standby availability zone. :::image type="content" source="./media/how-to-manage-high-availability-portal/ha-forced-failover.png" alt-text="On-demand forced failover option screenshot.":::
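If you want to trigger the same forced failover from a script, recent Azure CLI versions expose a `--failover` parameter on `az postgres flexible-server restart`. A minimal sketch with placeholder names; confirm the parameter with `az postgres flexible-server restart --help`.

```azurecli-interactive
# Trigger a forced failover to the standby server (simulates an unplanned outage)
az postgres flexible-server restart --resource-group myresourcegroup --name mydemoserver --failover Forced
```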
Follow these steps to force failover your primary to the standby flexible server
## Planned failover
-Follow these steps to perform a planned failover from your primary to the standby flexible server. This will first prepare the standby server and performs the failover. This provides the least downtime as this performs a graceful failover to the standby server for situations like after a failover event, you want to bring the primary back to the preferred availability zone.
-1. In the [Azure portal](https://portal.azure.com/), select your existing flexible server that has high availability feature already enabled.
-2. On the flexible server page, click High Availability from the front panel to open high availability page.
+Follow these steps to perform a planned failover from your primary to the standby Azure Database for PostgreSQL flexible server instance. This operation first prepares the standby server and then performs the failover. It provides the least downtime because it performs a graceful failover to the standby server. It's useful in situations such as bringing the primary back to the preferred availability zone after a failover event.
+1. In the [Azure portal](https://portal.azure.com/), select your existing Azure Database for PostgreSQL flexible server instance that has high availability feature already enabled.
+2. On the Azure Database for PostgreSQL flexible server instance page, select High Availability from the front panel to open high availability page.
3. Check the Primary availability zone and the Standby availability zone
-4. Click on Planned Failover to initiate the manual failover procedure. A pop up will inform you the process. Read the message and click Ok.
-5. A notification will show up mentioning that failover is in progress.
-6. Once failover to the standby server is complete, a notification will pop up.
+4. Select Planned Failover to initiate the manual failover procedure. A pop-up informs you about the process. Read the message and select OK.
+5. A notification appears mentioning that failover is in progress.
+6. Once failover to the standby server is complete, a notification pops up.
7. Check the new Primary availability zone and the Standby availability zone. :::image type="content" source="./media/how-to-manage-high-availability-portal/ha-planned-failover.png" alt-text="Screenshot of On-demand planned failover.":::
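A planned failover can be triggered from a script in the same way, using the `Planned` value of the `--failover` parameter (same assumptions and placeholder names as the previous sketch):

```azurecli-interactive
# Trigger a planned (graceful) failover to the standby server
az postgres flexible-server restart --resource-group myresourcegroup --name mydemoserver --failover Planned
```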
Follow these steps to perform a planned failover from your primary to the standb
## Enabling Zone redundant HA after the region supports AZ
-There are Azure regions that do not support availability zones. If you have already deployed non-HA servers, you cannot directly enable zone redundant HA on the server, but you can perform restore and enable HA in that server. Following steps shows how to enable Zone redundant HA for that server.
+There are Azure regions that don't support availability zones. If you already deployed non-HA servers, you can't directly enable zone-redundant HA on the server, but you can perform a point-in-time restore and enable HA on the restored server. The following steps show how to enable zone-redundant HA for that server.
-1. From the overview page of the server, click **Restore** to [perform a PITR](how-to-restore-server-portal.md#restore-to-the-latest-restore-point). Choose **Latest restore point**.
+1. From the overview page of the server, select **Restore** to [perform a PITR](how-to-restore-server-portal.md#restore-to-the-latest-restore-point). Choose **Latest restore point**.
2. Choose a server name and an availability zone.
-3. Click **Review+Create**".
-4. A new Flexible server will be created from the backup.
+3. Select **Review+Create**.
+4. A new Azure Database for PostgreSQL flexible server instance is created from the backup.
5. Once the new server is created, from the overview page of the server, follow the [guide](#enable-high-availability-post-server-creation) to enable HA. 6. After data verification, you can optionally [delete](how-to-manage-server-portal.md#delete-a-server) the old server. 7. Make sure your client connection strings are modified to point to your new HA-enabled server.
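The restore step in this procedure can also be performed with the Azure CLI by using `az postgres flexible-server restore`. The following is a minimal sketch with placeholder names; when `--restore-time` is omitted, the command restores to the most recent point available, but verify this behavior with `az postgres flexible-server restore --help`.

```azurecli-interactive
# Restore the existing server to a new server (placeholder names); then enable HA on the new server
az postgres flexible-server restore \
  --resource-group myresourcegroup \
  --name mynewserver \
  --source-server mydemoserver
```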
postgresql How To Manage Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-manage-server-cli.md
Title: Manage server - Azure CLI - Azure Database for PostgreSQL - Flexible Server
-description: Learn how to manage an Azure Database for PostgreSQL - Flexible Server from the Azure CLI.
+ Title: Manage server - Azure CLI
+description: Learn how to manage an Azure Database for PostgreSQL - Flexible Server instance from the Azure CLI.
Last updated 11/30/2021
-# Manage an Azure Database for PostgreSQL - Flexible Server by using the Azure CLI
+# Manage Azure Database for PostgreSQL - Flexible Server by using the Azure CLI
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-This article shows you how to manage your flexible server deployed in Azure. Management tasks include compute and storage scaling, admin password reset, and viewing server details.
+This article shows you how to manage your Azure Database for PostgreSQL flexible server instance deployed in Azure. Management tasks include compute and storage scaling, admin password reset, and viewing server details.
## Prerequisites
az account set --subscription <subscription id>
``` > [!Important]
-> If you haven't created a flexible server yet, you'll need to do so to follow this how-to guide.
+> If you haven't created an Azure Database for PostgreSQL flexible server instance yet, you need to do so to follow this how-to guide.
## Scale compute and storage
storage-size | 6144 | Enter the storage capacity of the server in megabytes. The
> [!IMPORTANT] > You cannot scale down storage.
-## Manage PostgreSQL databases on a server
+## Manage Azure Database for PostgreSQL flexible server databases on a server
-There are a number of applications you can use to connect to your Azure Database for PostgreSQL server. If your client computer has PostgreSQL installed, you can use a local instance of [psql](https://www.postgresql.org/docs/current/static/app-psql.html). Let's now use the psql command-line tool to connect to the Azure Database for PostgreSQL server.
+There are a number of applications you can use to connect to your Azure Database for PostgreSQL flexible server instance. If your client computer has PostgreSQL installed, you can use a local instance of [psql](https://www.postgresql.org/docs/current/static/app-psql.html). Let's now use the psql command-line tool to connect to the Azure Database for PostgreSQL flexible server instance.
1. Run the following **psql** command:
There are a number of applications you can use to connect to your Azure Database
psql --host=<servername> --port=<port> --username=<user> --dbname=<dbname> ```
- For example, the following command connects to the default database called **postgres** on your PostgreSQL server **mydemoserver.postgres.database.azure.com** through your access credentials. When you're prompted, enter the `<server_admin_password>` that you chose.
+ For example, the following command connects to the default database called **postgres** on your Azure Database for PostgreSQL flexible server instance **mydemoserver.postgres.database.azure.com** through your access credentials. When you're prompted, enter the `<server_admin_password>` that you chose.
```bash psql --host=mydemoserver.postgres.database.azure.com --port=5432 --username=myadmin --dbname=postgres ```
- After you connect, the psql tool displays a **postgres** prompt where you can enter SQL commands. A warning will appear in the initial connection output if the version of psql you're using is different from the version on the Azure Database for PostgreSQL server.
+ After you connect, the psql tool displays a **postgres** prompt where you can enter SQL commands. A warning will appear in the initial connection output if the version of psql you're using is different from the version on the Azure Database for PostgreSQL flexible server instance.
Example psql output:
There are a number of applications you can use to connect to your Azure Database
4. Type `\q` and select Enter to quit psql.
-In this section, you connected to the Azure Database for PostgreSQL server via psql and created a blank user database.
+In this section, you connected to the Azure Database for PostgreSQL flexible server instance via psql and created a blank user database.
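You can also create a database directly with the Azure CLI instead of through psql. The following sketch uses `az postgres flexible-server db create` with placeholder names:

```azurecli-interactive
# Create a new database on the server without connecting through psql
az postgres flexible-server db create --resource-group myresourcegroup --server-name mydemoserver --database-name mypgsqldb
```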
## Reset the admin password
az postgres flexible-server update --resource-group myresourcegroup --name mydem
## Delete a server
-To delete the Azure Database for PostgreSQL flexible server, run the [az postgres flexible-server delete](/cli/azure/postgres/flexible-server#az-postgresql-flexible-server-delete) command.
+To delete the Azure Database for PostgreSQL flexible server instance, run the [az postgres flexible-server delete](/cli/azure/postgres/flexible-server#az-postgresql-flexible-server-delete) command.
```azurecli-interactive az postgres flexible-server delete --resource-group myresourcegroup --name mydemoserver
postgresql How To Manage Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-manage-server-portal.md
Title: 'Manage server - Azure portal - Azure Database for PostgreSQL - Flexible Server'
-description: Learn how to manage an Azure Database for PostgreSQL - Flexible Server from the Azure portal.
+ Title: Manage server - Azure portal
+description: Learn how to manage an Azure Database for PostgreSQL - Flexible Server instance from the Azure portal.
Last updated 11/30/2021
-# Manage an Azure Database for PostgreSQL - Flexible Server using the Azure portal
+# Manage Azure Database for PostgreSQL - Flexible Server using the Azure portal
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-This article shows you how to manage your Azure Database for PostgreSQL - Flexible Server. Management tasks include compute and storage scaling, admin password reset, and viewing server details.
+This article shows you how to manage your Azure Database for PostgreSQL flexible server instance. Management tasks include compute and storage scaling, admin password reset, and viewing server details.
## Sign in
-Sign in to the [Azure portal](https://portal.azure.com). Go to your flexible server resource in the Azure portal.
+Sign in to the [Azure portal](https://portal.azure.com). Go to your Azure Database for PostgreSQL flexible server resource in the Azure portal.
## Scale compute and storage
After server creation you can scale between the various [pricing tiers](https://
2. You can change the **Compute Tier**, **vCore**, and **Storage** settings to scale up the server to a higher compute tier, or scale up within the same tier by increasing storage or vCores to your desired value. > [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/howto-manage-server-portal/scale-server.png" alt-text="scaling storage flexible server":::
+> :::image type="content" source="./media/how-to-manage-server-portal/scale-server.png" alt-text="Scaling storage for Azure Database for PostgreSQL flexible server.":::
> [!Important]
-> - Storage cannot be scaled down.
+> - Storage can't be scaled down.
> - Scaling vCores causes a server restart. 3. Select **OK** to save changes.
You can change the administrator role's password using the Azure portal.
2. Enter a new password and confirm the password. The textbox will prompt you about password complexity requirements. > [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/howto-manage-server-portal/reset-password.png" alt-text="reset your password for flexible server":::
+> :::image type="content" source="./media/how-to-manage-server-portal/reset-password.png" alt-text="Reset your password for Azure Database for PostgreSQL flexible server.":::
3. Select **Save** to save the new password.
You can delete your server if you no longer need it.
1. Select your server in the Azure portal. In the **Overview** window select **Delete**. 2. Type the name of the server into the input box to confirm that you want to delete the server.
- :::image type="content" source="./media/howto-manage-server-portal/delete-server.png" alt-text="delete the flexible server":::
+ :::image type="content" source="./media/how-to-manage-server-portal/delete-server.png" alt-text="Delete the Azure Database for PostgreSQL flexible server instance.":::
> [!IMPORTANT] > Deleting a server is irreversible. > [!div class="mx-imgBorder"]
- > ![delete the flexible server](./media/howto-manage-server-portal/delete-server.png)
+ > ![Delete the Azure Database for PostgreSQL flexible server instance](./media/how-to-manage-server-portal/delete-server.png)
3. Select **Delete**.
postgresql How To Manage Virtual Network Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-manage-virtual-network-cli.md
Title: Manage virtual networks - Azure CLI - Azure Database for PostgreSQL - Flexible Server
-description: Create and manage virtual networks for Azure Database for PostgreSQL - Flexible Server using the Azure CLI
+ Title: Manage virtual networks - Azure CLI
+description: Create and manage virtual networks for Azure Database for PostgreSQL - Flexible Server using the Azure CLI.
Last updated 11/30/2021
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-Azure Database for PostgreSQL - Flexible Server supports two types of mutually exclusive network connectivity methods to connect to your flexible server. The two options are:
-* Public access (allowed IP addresses). That method can be further secured by using [Private Link](./concepts-networking-private-link.md) based networking with Azure Database for PostgreSQL - Flexible Server in Preview.
+Azure Database for PostgreSQL flexible server supports two types of mutually exclusive network connectivity methods to connect to your flexible server. The two options are:
+* Public access (allowed IP addresses). This method can be further secured by using [Private Link](./concepts-networking-private-link.md)-based networking with Azure Database for PostgreSQL flexible server, currently in preview.
* Private access (VNET Integration)
-In this article, we'll focus on creation of PostgreSQL server with **Private access (VNet Integration)** using Azure CLI. With *Private access (VNET Integration)*, you can deploy your flexible server into your own [Azure Virtual Network](../../virtual-network/virtual-networks-overview.md). Azure Virtual Networks provide private and secure network communication. In Private access, the connections to the PostgreSQL server are restricted to only within your virtual network. To learn more about it, refer to [Private access (VNet Integration)](./concepts-networking.md#private-access-vnet-integration).
+In this article, we focus on creation of an Azure Database for PostgreSQL flexible server instance with **Private access (VNet Integration)** using Azure CLI. With *Private access (VNET Integration)*, you can deploy your Azure Database for PostgreSQL flexible server instance into your own [Azure Virtual Network](../../virtual-network/virtual-networks-overview.md). Azure Virtual Networks provide private and secure network communication. In Private access, the connections to the Azure Database for PostgreSQL flexible server instance are restricted to only within your virtual network. To learn more about it, refer to [Private access (VNet Integration)](./concepts-networking.md#private-access-vnet-integration).
-In Azure Database for PostgreSQL - Flexible Server, you can only deploy the server to a virtual network and subnet during creation of the server. After the flexible server is deployed to a virtual network and subnet, you can't move it to another virtual network, subnet or to *Public access (allowed IP addresses)*.
+In Azure Database for PostgreSQL flexible server, you can only deploy the server to a virtual network and subnet during creation of the server. After the Azure Database for PostgreSQL flexible server instance is deployed to a virtual network and subnet, you can't move it to another virtual network, subnet or to *Public access (allowed IP addresses)*.
## Launch Azure Cloud Shell
Select the specific subscription under your account using [az account set](/cli/
az account set --subscription <subscription id> ```
-## Create Azure Database for PostgreSQL - Flexible Server using CLI
-You can use the `az postgres flexible-server` command to create the flexible server with *Private access (VNet Integration)*. This command uses Private access (VNet Integration) as the default connectivity method. A virtual network and subnet will be created for you if none is provided. You can also provide the already existing virtual network and subnet using the subnet ID. <!-- You can provide the **vnet**,**subnet**,**vnet-address-prefix** or**subnet-address-prefix** to customize the virtual network and subnet.--> There are various options to create a flexible server using CLI as shown in the examples below.
+## Create an Azure Database for PostgreSQL flexible server instance using Azure CLI
+You can use the `az postgres flexible-server` command to create the Azure Database for PostgreSQL flexible server instance with *Private access (VNet Integration)*. This command uses Private access (VNet Integration) as the default connectivity method. A virtual network and subnet are created for you if you don't provide them. You can also provide an existing virtual network and subnet by using the subnet ID. <!-- You can provide the **vnet**,**subnet**,**vnet-address-prefix** or**subnet-address-prefix** to customize the virtual network and subnet.--> There are various options to create an Azure Database for PostgreSQL flexible server instance by using the CLI, as shown in the following examples.
>[!Important]
-> Using this command will delegate the subnet to **Microsoft.DBforPostgreSQL/flexibleServers**. This delegation means that only Azure Database for PostgreSQL Flexible Servers can use that subnet. No other Azure resource types can be in the delegated subnet.
+> Using this command will delegate the subnet to **Microsoft.DBforPostgreSQL/flexibleServers**. This delegation means that only Azure Database for PostgreSQL flexible server instances can use that subnet. No other Azure resource types can be in the delegated subnet.
> Refer to the Azure CLI reference documentation <!--FIXME --> for the complete list of configurable CLI parameters. For example, in the below commands you can optionally specify the resource group. -- Create a flexible server using default virtual network, subnet with default address prefix
+- Create an Azure Database for PostgreSQL flexible server instance using default virtual network, subnet with default address prefix
```azurecli-interactive az postgres flexible-server create ```-- Create a flexible server using already existing virtual network and subnet. If provided virtual network and subnet do not exist, then virtual network and subnet with default address prefix will be created.
+- Create an Azure Database for PostgreSQL flexible server instance using already existing virtual network and subnet. If provided virtual network and subnet do not exist, then virtual network and subnet with default address prefix will be created.
```azurecli-interactive az postgres flexible-server create --vnet myVnet --subnet mySubnet ```-- Create a flexible server using already existing virtual network, subnet, and using the subnet ID. The provided subnet shouldn't have any other resource deployed in it and this subnet will be delegated to **Microsoft.DBforPostgreSQL/flexibleServers**, if not already delegated.
+- Create an Azure Database for PostgreSQL flexible server instance using already existing virtual network, subnet, and using the subnet ID. The provided subnet shouldn't have any other resource deployed in it and this subnet will be delegated to **Microsoft.DBforPostgreSQL/flexibleServers**, if not already delegated.
```azurecli-interactive az postgres flexible-server create --subnet /subscriptions/{SubID}/resourceGroups/{ResourceGroup}/providers/Microsoft.Network/virtualNetworks/{VNetName}/subnets/{SubnetName} ``` > [!Note]
- > - The virtual network and subnet should be in the same region and subscription as your flexible server.
+ > - The virtual network and subnet should be in the same region and subscription as your Azure Database for PostgreSQL flexible server instance.
> - The virtual network should not have any resource lock set at the VNET or subnet level. Make sure to remove any lock (**Delete** or **Read only**) from your VNET and all subnets before creating the server in a virtual network, and you can set it back after server creation. > [!IMPORTANT] > The names including `AzureFirewallSubnet`, `AzureFirewallManagementSubnet`, `AzureBastionSubnet` and `GatewaySubnet` are reserved names within Azure. Please do not use these as your subnet name. -- Create a flexible server using new virtual network, subnet with nondefault address prefix
+- Create an Azure Database for PostgreSQL flexible server instance using new virtual network, subnet with nondefault address prefix.
```azurecli-interactive az postgres flexible-server create --vnet myVnet --address-prefixes 10.0.0.0/24 --subnet mySubnet --subnet-prefixes 10.0.0.0/24 ```
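After the server is provisioned, you can confirm which virtual network and subnet it was deployed into. The following is a hedged sketch using `az postgres flexible-server show`; the exact output property names can vary between CLI versions, so adjust the `--query` expression as needed.
```azurecli-interactive
# Inspect the network configuration (delegated subnet and private DNS zone) of the server.
az postgres flexible-server show --resource-group myresourcegroup --name myservername --query "network"
```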
postgresql How To Manage Virtual Network Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-manage-virtual-network-portal.md
Title: Manage virtual networks - Azure portal - Azure Database for PostgreSQL - Flexible Server
-description: Create and manage virtual networks for Azure Database for PostgreSQL - Flexible Server using the Azure portal
+ Title: Manage virtual networks - Azure portal
+description: Create and manage virtual networks for Azure Database for PostgreSQL - Flexible Server using the Azure portal.
Last updated 11/30/2021
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-Azure Database for PostgreSQL - Flexible Server supports two types of mutually exclusive network connectivity methods to connect to your flexible server. The two options are:
+Azure Database for PostgreSQL flexible server supports two types of mutually exclusive network connectivity methods to connect to your Azure Database for PostgreSQL flexible server instance. The two options are:
-* Public access (allowed IP addresses). That method can be further secured by using [Private Link](./concepts-networking-private-link.md) based networking with Azure Database for PostgreSQL - Flexible Server in Preview.
+* Public access (allowed IP addresses). That method can be further secured by using [Private Link](./concepts-networking-private-link.md) based networking with Azure Database for PostgreSQL flexible server in Preview.
* Private access (VNet Integration)
-In this article, we focus on creation of PostgreSQL server with **Private access (VNet integration)** using Azure portal. With Private access (VNet Integration), you can deploy your flexible server integrated into your own [Azure Virtual Network](../../virtual-network/virtual-networks-overview.md). Azure Virtual Networks provide private and secure network communication. With private access, connections to the PostgreSQL server are restricted to your virtual network. To learn more about it, refer to [Private access (VNet Integration)](./concepts-networking.md#private-access-vnet-integration).
+In this article, we focus on creation of an Azure Database for PostgreSQL flexible server instance with **Private access (VNet integration)** using Azure portal. With Private access (VNet Integration), you can deploy your Azure Database for PostgreSQL flexible server instance integrated into your own [Azure Virtual Network](../../virtual-network/virtual-networks-overview.md). Azure Virtual Networks provide private and secure network communication. With private access, connections to the Azure Database for PostgreSQL flexible server instance are restricted to your virtual network. To learn more about it, refer to [Private access (VNet Integration)](./concepts-networking.md#private-access-vnet-integration).
-You can deploy your flexible server into a virtual network and subnet during server creation. After the flexible server is deployed, you cannot move it into another virtual network, subnet or to *Public access (allowed IP addresses)*.
+You can deploy your Azure Database for PostgreSQL flexible server instance into a virtual network and subnet during server creation. After the Azure Database for PostgreSQL flexible server instance is deployed, you can't move it into another virtual network, subnet or to *Public access (allowed IP addresses)*.
## Prerequisites
-To create a flexible server in a virtual network, you need:
+To create an Azure Database for PostgreSQL flexible server instance in a virtual network, you need:
- A [Virtual Network](../../virtual-network/quick-create-portal.md#create-a-virtual-network) > [!Note]
- > - The virtual network and subnet should be in the same region and subscription as your flexible server.
- > - The virtual network should not have any resource lock set at the VNET or subnet level, as locks may interfere with operations on the network and DNS. Make sure to remove any lock (**Delete** or **Read only**) from your VNET and all subnets before creating the server in a virtual network, and you can set it back after server creation.
+ > - The virtual network and subnet should be in the same region and subscription as your Azure Database for PostgreSQL flexible server instance.
+ > - The virtual network shouldn't have any resource lock set at the VNET or subnet level, as locks may interfere with operations on the network and DNS. Make sure to remove any lock (**Delete** or **Read only**) from your VNET and all subnets before creating the server in a virtual network, and you can set it back after server creation.
-- To [delegate a subnet](../../virtual-network/manage-subnet-delegation.md#delegate-a-subnet-to-an-azure-service) to **Microsoft.DBforPostgreSQL/flexibleServers**. This delegation means that only Azure Database for PostgreSQL Flexible Servers can use that subnet. No other Azure resource types can be in the delegated subnet.-- Add `Microsoft.Storage` to the service end point for the subnet delegated to Flexible servers. This is done by performing following steps:
+- To [delegate a subnet](../../virtual-network/manage-subnet-delegation.md#delegate-a-subnet-to-an-azure-service) to **Microsoft.DBforPostgreSQL/flexibleServers**. This delegation means that only Azure Database for PostgreSQL flexible server instances can use that subnet. No other Azure resource types can be in the delegated subnet.
+- Add `Microsoft.Storage` to the service endpoint for the subnet delegated to Azure Database for PostgreSQL flexible server. You can do this by performing the following steps (an equivalent Azure CLI sketch follows this list):
1. Go to your virtual network page.
- 2. Select the VNET in which you're planning to deploy your flexible server.
- 3. Choose the subnet that is delegated for flexible server.
+ 2. Select the VNET in which you're planning to deploy your Azure Database for PostgreSQL flexible server instance.
+ 3. Choose the subnet that's delegated for Azure Database for PostgreSQL flexible server.
4. On the pull-out screen, under **Service endpoint**, choose `Microsoft.storage` from the drop-down. 5. Save the changes. -- If you want to set up your own private DNS zone to use with the flexible server, see [private DNS overview](../../dns/private-dns-overview.md) documentation for more details.
+- If you want to set up your own private DNS zone to use with the Azure Database for PostgreSQL flexible server instance, see [private DNS overview](../../dns/private-dns-overview.md) documentation for more details.
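If you prefer to script these prerequisites, the following is a hedged sketch of creating a virtual network and a delegated subnet with the `Microsoft.Storage` service endpoint by using the Azure CLI. The resource group, network names, and address prefixes are placeholders.
```azurecli-interactive
# Create a virtual network.
az network vnet create --resource-group myresourcegroup --name myVnet --address-prefixes 10.0.0.0/16

# Create a subnet delegated to flexible server, with the Microsoft.Storage service endpoint enabled.
az network vnet subnet create --resource-group myresourcegroup --vnet-name myVnet --name mySubnet \
  --address-prefixes 10.0.0.0/24 \
  --delegations Microsoft.DBforPostgreSQL/flexibleServers \
  --service-endpoints Microsoft.Storage
```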
-## Create Azure Database for PostgreSQL - Flexible Server in an already existing virtual network
+## Create an Azure Database for PostgreSQL flexible server instance in an already existing virtual network
1. Select **Create a resource** (+) in the upper-left corner of the portal. 2. Select **Databases** > **Azure Database for PostgreSQL**. You can also enter **PostgreSQL** in the search box to find the service.
-3. Select **Flexible server** as the deployment option.
+<!-- no longer happens 3. Select **Flexible server** as the deployment option.-->
4. Fill out the **Basics** form. 5. Go to the **Networking** tab to configure how you want to connect to your server. 6. In the **Connectivity method**, select **Private access (VNet Integration)**. Go to **Virtual Network** and select the already existing *virtual network* and *Subnet* created as part of prerequisites. 7. Under **Private DNS Integration**, by default, a new private DNS zone will be created using the server name. Optionally, you can choose the *subscription* and the *Private DNS zone* from the drop-down list.
-8. Select **Review + create** to review your flexible server configuration.
+8. Select **Review + create** to review your Azure Database for PostgreSQL flexible server configuration.
9. Select **Create** to provision the server. Provisioning can take a few minutes. >[!Note]
-> After the flexible server is deployed to a virtual network and subnet, you can't move it to Public access (allowed IP addresses).
+> After the Azure Database for PostgreSQL flexible server instance is deployed to a virtual network and subnet, you can't move it to Public access (allowed IP addresses).
>[!Note]
-> If you want to connect to the flexible server from a client that is provisioned in another VNET, you have to link the private DNS zone with the VNET. See this [linking the virtual network](../../dns/private-dns-getstarted-portal.md#link-the-virtual-network) documentation on how to do it.
+> If you want to connect to the Azure Database for PostgreSQL flexible server instance from a client that's provisioned in another VNET, you have to link the private DNS zone with the VNET. See this [linking the virtual network](../../dns/private-dns-getstarted-portal.md#link-the-virtual-network) documentation on how to do it.
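The link can be created from the portal or scripted. The following is a hedged sketch using the Azure CLI; the private DNS zone name and virtual network name are placeholders that you should replace with your own values.
```azurecli-interactive
# Link the server's private DNS zone to the client's virtual network so that
# clients in that VNET can resolve the server's name.
az network private-dns link vnet create \
  --resource-group myresourcegroup \
  --zone-name myservername.private.postgres.database.azure.com \
  --name link-to-client-vnet \
  --virtual-network myClientVnet \
  --registration-enabled false
```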
## Next steps - [Create and manage Azure Database for PostgreSQL - Flexible Server virtual network using Azure CLI](./how-to-manage-virtual-network-cli.md).
postgresql How To Manage Virtual Network Private Endpoint Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-manage-virtual-network-private-endpoint-portal.md
Title: Manage virtual networks - Azure portal with Private Link - Azure Database for PostgreSQL - Flexible Server
-description: Learn how to create a PostgreSQL server with public access by using the Azure portal, and how to add private networking to the server based on Azure Private Link.
+ Title: Manage virtual networks with Private Link - Azure portal
+description: Create an Azure Database for PostgreSQL - Flexible Server instance with public access by using the Azure portal, and add private networking to the server based on Azure Private Link.
Last updated 10/23/2023
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-Azure Database for PostgreSQL - Flexible Server supports two types of mutually exclusive network connectivity methods to connect to your flexible server:
+Azure Database for PostgreSQL flexible server supports two types of mutually exclusive network connectivity methods to connect to your Azure Database for PostgreSQL flexible server instance. The two options are:
-* Public access through allowed IP addresses. You can further secure that method by using [Azure Private Link](./concepts-networking-private-link.md)-based networking with Azure Database for PostgreSQL - Flexible Server. The feature is in preview.
+* Public access through allowed IP addresses. You can further secure that method by using [Azure Private Link](./concepts-networking-private-link.md)-based networking with Azure Database for PostgreSQL flexible server. The feature is in preview.
* Private access through virtual network integration.
-This article focuses on creation of a PostgreSQL server with public access (allowed IP addresses) by the using Azure portal. You can then help secure the server by adding private networking based on Private Link technology.
+This article focuses on creating an Azure Database for PostgreSQL flexible server instance with public access (allowed IP addresses) by using the Azure portal. You can then help secure the server by adding private networking based on Private Link technology.
You can use [Private Link](../../private-link/private-link-overview.md) to access the following services over a private endpoint in your virtual network:
-* Azure platform as a service (PaaS) services, such as Azure Database for PostgreSQL - Flexible Server
+* Azure platform as a service (PaaS) services, such as Azure Database for PostgreSQL flexible server
* Customer-owned or partner services that are hosted in Azure Traffic between your virtual network and a service traverses the Microsoft backbone network, which eliminates exposure to the public internet. ## Prerequisites
-To add a flexible server to a virtual network by using Private Link, you need:
+To add an Azure Database for PostgreSQL flexible server instance to a virtual network by using Private Link, you need:
-* A [virtual network](../../virtual-network/quick-create-portal.md#create-a-virtual-network). The virtual network and subnet should be in the same region and subscription as your flexible server.
+* A [virtual network](../../virtual-network/quick-create-portal.md#create-a-virtual-network). The virtual network and subnet should be in the same region and subscription as your Azure Database for PostgreSQL flexible server instance.
Be sure to remove any locks (**Delete** or **Read only**) from your virtual network and all subnets before you add a server to the virtual network, because locks might interfere with operations on the network and DNS. You can reset the locks after server creation. * Registration of the [PostgreSQL private endpoint preview feature in your subscription](../../azure-resource-manager/management/preview-features.md).
-## Create an Azure Database for PostgreSQL - Flexible Server instance with a private endpoint
+## Create an Azure Database for PostgreSQL flexible server instance with a private endpoint
-To create an Azure Database for PostgreSQL server, take the following steps:
+To create an Azure Database for PostgreSQL flexible server instance, take the following steps:
1. In the upper-left corner of the Azure portal, select **Create a resource** (the plus sign). 2. Select **Databases** > **Azure Database for PostgreSQL**.
-3. Select the **Flexible server** deployment option.
-
-4. Fill out the **Basics** form with the following information:
+3. Fill out the **Basics** form with the following information:
|Setting |Value| |||
To create an Azure Database for PostgreSQL server, take the following steps:
|**Server name**| Enter a unique server name.| |**Admin username** |Enter an administrator name of your choosing.| |**Password**|Enter a password of your choosing. The password must have at least eight characters and meet the defined requirements.|
- |**Location**|Select an Azure region where you want to want your PostgreSQL server to reside.|
- |**Version**|Select the required database version of the PostgreSQL server.|
+ |**Location**|Select an Azure region where you want your Azure Database for PostgreSQL flexible server instance to reside.|
+ |**Version**|Select the required database version of the Azure Database for PostgreSQL flexible server instance.|
|**Compute + Storage**|Select the pricing tier that you need for the server, based on the workload.| 5. Select **Next: Networking**.
To create an Azure Database for PostgreSQL server, take the following steps:
11. On the **Review + create** tab, Azure validates your configuration. The **Networking** section lists information about your private endpoint.
- When you see the message that your configuration passed validation, select **Create**.
+ When you see the message that your configuration passed validation, select **Create**.
### Approval process for a private endpoint
A separation of duties is common in many enterprises today:
* A network administrator creates the cloud networking infrastructure, such as Azure Private Link services. * A database administrator (DBA) creates and manages database servers.
-After a network administrator creates a private endpoint, the PostgreSQL DBA can manage the private endpoint connection to Azure Database for PostgreSQL. The DBA uses the following approval process for a private endpoint connection:
+After a network administrator creates a private endpoint, the PostgreSQL DBA can manage the private endpoint connection to Azure Database for PostgreSQL flexible server. The DBA uses the following approval process for a private endpoint connection:
-1. In the Azure portal, go to the Azure Database for PostgreSQL - Flexible Server resource.
+1. In the Azure portal, go to the Azure Database for PostgreSQL flexible server resource.
1. On the left pane, select **Networking**.
After a network administrator creates a private endpoint, the PostgreSQL DBA can
## Next steps
-* Learn more about [networking in Azure Database for PostgreSQL - Flexible Server with Private Link](./concepts-networking-private-link.md).
-* Understand more about [virtual network integration in Azure Database for PostgreSQL - Flexible Server](./concepts-networking-private.md).
+* Learn more about [networking in Azure Database for PostgreSQL flexible server with Private Link](./concepts-networking-private-link.md).
+* Understand more about [virtual network integration in Azure Database for PostgreSQL flexible server](./concepts-networking-private.md).
postgresql How To Optimize Performance Pgvector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-optimize-performance-pgvector.md
+
+ Title: How to optimize performance when using pgvector
+description: How to optimize performance when using pgvector on Azure Database for PostgreSQL - Flexible Server.
+++++
+ - build-2023
+ - ignite-2023
+ Last updated : 11/03/2023++
+# How to optimize performance when using `pgvector` on Azure Database for PostgreSQL - Flexible Server
++
+The `pgvector` extension adds an open-source vector similarity search to Azure Database for PostgreSQL flexible server.
+
+This article explores the limitations and tradeoffs of [`pgvector`](https://github.com/pgvector/pgvector) and shows how to use partitioning, indexing and search settings to improve performance.
+
+For more on the extension itself, see [basics of `pgvector`](how-to-use-pgvector.md). You might also want to refer to the official [README](https://github.com/pgvector/pgvector/blob/master/README.md) of the project.
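As a concrete illustration of the kind of tuning the article covers, the following is a hedged sketch. The `items` table, its small `embedding` column, and the parameter values are hypothetical, and you should verify that the `pgvector` version on your server supports the `hnsw` index type before using it.
```psql
psql "host=mydemoserver.postgres.database.azure.com port=5432 dbname=mydb user=myadmin sslmode=require" <<'SQL'
-- Hypothetical table with a small vector column.
CREATE TABLE IF NOT EXISTS items (id bigserial PRIMARY KEY, embedding vector(3));

-- Approximate nearest-neighbor index; higher m and ef_construction improve recall
-- at the cost of build time and memory.
CREATE INDEX IF NOT EXISTS items_embedding_idx
  ON items USING hnsw (embedding vector_cosine_ops)
  WITH (m = 16, ef_construction = 64);

-- Per-session search setting: higher values improve recall, lower values improve speed.
SET hnsw.ef_search = 40;
SELECT id FROM items ORDER BY embedding <=> '[0.1, 0.2, 0.3]' LIMIT 10;
SQL
```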
++
+## Next steps
+
+Congratulations, you just learned the tradeoffs, limitations and best practices to achieve the best performance with `pgvector`.
+
+> [!div class="nextstepaction"]
+> [Generate vector embeddings with Azure OpenAI on Azure Database for PostgreSQL - Flexible Server](./generative-ai-azure-openai.md)
postgresql How To Optimize Query Stats Collection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-optimize-query-stats-collection.md
Title: Optimize query stats collection - Azure Database for PostgreSQL - flexible Server
-description: This article describes how you can optimize query stats collection on an Azure Database for PostgreSQL - flexible Server
+ Title: Optimize query stats collection
+description: This article describes how you can optimize query stats collection on Azure Database for PostgreSQL - Flexible Server.
Last updated 03/23/2023
-# Optimize query statistics collection on an Azure Database for PostgreSQL - flexible Server
+# Optimize query statistics collection on Azure Database for PostgreSQL - Flexible Server
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-This article describes how to optimize query statistics collection on an Azure Database for PostgreSQL server.
+This article describes how to optimize query statistics collection on an Azure Database for PostgreSQL flexible server instance.
## Use pg_stat_statements
-**Pg_stat_statements** is a PostgreSQL extension that can be enabled in Azure Database for PostgreSQL. The extension provides a means to track execution statistics for all SQL statements executed by a server. This module hooks into every query execution and comes with a non-trivial performance cost. Enabling **pg_stat_statements** forces query text writes to files on disk.
+**Pg_stat_statements** is a PostgreSQL extension that can be enabled in Azure Database for PostgreSQL flexible server. The extension provides a means to track execution statistics for all SQL statements executed by a server. This module hooks into every query execution and comes with a non-trivial performance cost. Enabling **pg_stat_statements** forces query text writes to files on disk.
If you have unique queries with long query text or you don't actively monitor **pg_stat_statements**, disable **pg_stat_statements** for best performance. To do so, change the setting to `pg_stat_statements.track = NONE`. To set `pg_stat_statements.track = NONE`: -- In the Azure portal, go to the [PostgreSQL resource management page and select the server parameters blade](concepts-server-parameters.md).
+- In the Azure portal, go to the [Azure Database for PostgreSQL flexible server resource management page and select the server parameters blade](concepts-server-parameters.md).
- Use the [Azure CLI](connect-azure-cli.md) az postgres server configuration set to `--name pg_stat_statements.track --resource-group myresourcegroup --server mydemoserver --value NONE`. ## Use the Query Store
-Using the [Query Store](concepts-query-store.md) feature in Azure Database for PostgreSQL - Flexible Server offers a different way to monitor query execution statistics. To prevent performance overhead, it is recommended to utilize only one mechanism, either the pg_stat_statements extension or the Query Store.
+Using the [Query Store](concepts-query-store.md) feature in Azure Database for PostgreSQL flexible server offers a different way to monitor query execution statistics. To prevent performance overhead, we recommend using only one mechanism: either the pg_stat_statements extension or Query Store.
## Next steps
postgresql How To Perform Fullvacuum Pg Repack https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-perform-fullvacuum-pg-repack.md
Title: Optimize Azure Database for PostgreSQL Flexible Server by using pg_repack
-description: Perform full vacuum using pg_Repack extension in Azure Database for PostgreSQL - Flexible Server
+ Title: Optimize by using pg_repack
+description: Perform full vacuum using pg_Repack extension.
-# Optimize Azure Database for PostgreSQL Flexible Server by using pg_repack
+# Optimize Azure Database for PostgreSQL - Flexible Server by using pg_repack
-In this article, you learn how to use pg_repack to remove bloat and improve your Azure Database performance for PostgreSQL Flexible Server. Bloat is the unnecessary data accumulating in tables and indexes due to frequent updates and deletes. Bloat can cause the database size to grow larger than expected and affect query performance. Using pg_repack, you can reclaim the wasted space and reorganize the data more efficiently.
+
+In this article, you learn how to use pg_repack to remove bloat and improve your Azure Database for PostgreSQL flexible server performance. Bloat is the unnecessary data accumulating in tables and indexes due to frequent updates and deletes. Bloat can cause the database size to grow larger than expected and affect query performance. Using pg_repack, you can reclaim the wasted space and reorganize the data more efficiently.
## What is pg_repack?
-pg_repack is a PostgreSQL extension that removes bloat from tables and indexes and reorganizes them more efficiently. pg_repack works by creating a new copy of the target table or index, applying any changes that occurred during the process, and then swapping the old and new versions atomically. pg_repack doesn't require any downtime or exclusive locks on the target table or index except for a brief period at the beginning and end of the operation. You can use pg_repack to optimize any table or index in your PostgreSQL database, except for the default PostgreSQL database.
+pg_repack is a PostgreSQL extension that removes bloat from tables and indexes and reorganizes them more efficiently. pg_repack works by creating a new copy of the target table or index, applying any changes that occurred during the process, and then swapping the old and new versions atomically. pg_repack doesn't require any downtime or exclusive locks on the target table or index except for a brief period at the beginning and end of the operation. You can use pg_repack to optimize any table or index in your Azure Database for PostgreSQL flexible server database, except for the default database (`postgres`).
### How to use pg_repack?
-To use pg_repack, you need to install the extension in your PostgreSQL database and then run the pg_repack command, specifying the table name or index you want to optimize. The extension acquires locks on the table or index to prevent other operations from being performed while the optimization is in progress. It will then remove the bloat and reorganize the data more efficiently.
+To use pg_repack, you need to install the extension in your Azure Database for PostgreSQL flexible server database and then run the pg_repack command, specifying the table name or index you want to optimize. The extension acquires locks on the table or index to prevent other operations from being performed while the optimization is in progress. It will then remove the bloat and reorganize the data more efficiently.
### How full table repack works
During these steps, pg_repack will only hold an ACCESS EXCLUSIVE lock for a shor
pg_repack has some limitations that you should be aware of before using it: -- The pg_repack extension can't be used to repack the default database named `postgres`. This is due to pg_repack not having the necessary permissions to operate against extensions installed by default on this database. The extension can be created in PostgreSQL, but it can't run.
+- The pg_repack extension can't be used to repack the default database named `postgres`. This is due to pg_repack not having the necessary permissions to operate against extensions installed by default on this database. The extension can be created in Azure Database for PostgreSQL flexible server, but it can't run.
- The target table must have either a PRIMARY KEY or a UNIQUE index on a NOT NULL column for the operation to be successful. - While pg_repack is running, you won't be able to perform any DDL commands on the target table(s) except for VACUUM or ANALYZE. To ensure these restrictions are enforced, pg_repack will hold an ACCESS SHARE lock on the target table during a full table repack.
pg_repack has some limitations that you should be aware of before using it:
To enable the pg_repack extension, follow the steps below:
-1. Add pg_repack extension under Azure extensions as shown below from the server parameters blade on Flexible server portal
+1. Add pg_repack extension under Azure extensions as shown below from the server parameters blade on the Azure Database for PostgreSQL flexible server portal.
:::image type="content" source="./media/how-to-perform-fullvacuum-pg-repack/portal.png" alt-text="Screenshot of server parameters blade with Azure extensions parameter." lightbox="./media/how-to-perform-fullvacuum-pg-repack/portal.png"::: > [!NOTE]
-> Making this change will not require a server restart.
+> Making this change doesn't require a server restart.
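As an alternative to the portal step above, you can update the allowlist with the Azure CLI. This is a hedged sketch: it assumes the server parameter is named `azure.extensions`, and if other extensions are already allowlisted you should pass the full comma-separated list rather than overwrite it with a single value.
```azurecli-interactive
# Add pg_repack to the list of allowlisted extensions on the server.
az postgres flexible-server parameter set \
  --resource-group myresourcegroup \
  --server-name myservername \
  --name azure.extensions \
  --value pg_repack
```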
### Install the packages for Ubuntu virtual machine
pg_repack -V
## Use pg_repack
-Example of how to run pg_repack on a table named info in a public schema within the Flexible Server with endpoint pgserver.postgres.database.azure.com, username azureuser, and database foo using the following command.
+The following example shows how to run pg_repack on a table named info in the public schema of an Azure Database for PostgreSQL flexible server instance with endpoint pgserver.postgres.database.azure.com, username azureuser, and database foo.
-1. Connect to the Flexible Server instance. This article uses psql for simplicity.
+1. Connect to the Azure Database for PostgreSQL flexible server instance. This article uses psql for simplicity.
```psql psql "host=xxxxxxxxx.postgres.database.azure.com port=5432 dbname=foo user=xxxxxxxxxxxxx password=[my_password] sslmode=require"
Example of how to run pg_repack on a table named info in a public schema within
Useful pg_repack options for production workloads: - -k, --no-superuser-check
- Skip the superuser checks in the client. This setting is helpful for using pg_repack on platforms that support running it as non-superusers, like Azure Database for PostgreSQL Flexible Servers.
+ Skip the superuser checks in the client. This setting is helpful for using pg_repack on platforms that support running it as non-superusers, like Azure Database for PostgreSQL flexible server instances.
- -j, --jobs
- Create the specified number of extra connections to PostgreSQL and use these extra connections to parallelize the rebuild of indexes on each table. Parallel index builds are only supported for full-table repacks.
+ Create the specified number of extra connections to Azure Database for PostgreSQL flexible server and use these extra connections to parallelize the rebuild of indexes on each table. Parallel index builds are only supported for full-table repacks.
- --index or --only indexes options
- If your PostgreSQL server has extra cores and disk I/O available, this can be a useful way to speed up pg_repack.
+ If your Azure Database for PostgreSQL flexible server instance has extra cores and disk I/O available, this can be a useful way to speed up pg_repack.
- -D, --no-kill-backend Skip to repack table if the lock can't be taken for duration specified --wait-timeout default 60 sec, instead of canceling conflicting queries. The default is false.
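Putting these options together, the following is a hedged sketch of a full-table repack of the `info` table from the example above. The host, user, and database names are the same placeholders used earlier; pg_repack prompts for the password unless it's supplied through the environment.
```bash
pg_repack --host=pgserver.postgres.database.azure.com --username=azureuser --dbname=foo \
  --table=info --no-superuser-check --jobs=2 --no-kill-backend --wait-timeout=120
```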
postgresql How To Perform Major Version Upgrade Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-perform-major-version-upgrade-cli.md
Title: Major Version Upgrade of a flexible server - Azure CLI
-description: This article describes how to perform major version upgrade in Azure Database for PostgreSQL through Azure CLI.
+ Title: Major version upgrade - Azure CLI
+description: This article describes how to perform a major version upgrade in Azure Database for PostgreSQL - Flexible Server through the Azure CLI.
Last updated 02/13/2023
-# Major Version Upgrade of a flexible server - Flexible Server with Azure CLI
+# Major version upgrade of Azure Database for PostgreSQL - Flexible Server with Azure CLI
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-This article provides step-by-step procedure to perform Major Version Upgrade in flexible server using Azure CLI.
+This article provides a step-by-step procedure to perform a major version upgrade in Azure Database for PostgreSQL flexible server using the Azure CLI.
## Prerequisites - If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account before you begin.
This article provides step-by-step procedure to perform Major Version Upgrade in
az account set --subscription <subscription id> ``` -- Create a PostgreQL Flexible Server if you haven't already created one using the ```az postgres flexible-server create``` command.
+- Create an Azure Database for PostgreSQL flexible server instance if you haven't already created one using the `az postgres flexible-server create` command.
```azurecli az postgres flexible-server create --resource-group myresourcegroup --name myservername
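# Once the server exists, the upgrade itself is performed with the upgrade command.
# This is a hedged sketch: verify which target --version values are supported for your
# server, and test the upgrade on a point-in-time-restored copy first.
az postgres flexible-server upgrade --resource-group myresourcegroup --name myservername --version 15
```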
postgresql How To Perform Major Version Upgrade Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-perform-major-version-upgrade-portal.md
Title: Major Version Upgrade of a flexible server - Azure portal
-description: This article describes how to perform major version upgrade in Azure Database for PostgreSQL Flexible Server through the Azure portal.
+ Title: Major version upgrade - Azure portal
+description: This article describes how to perform a major version upgrade in Azure Database for PostgreSQL - Flexible Server through the Azure portal.
Last updated 02/13/2023
-# Major Version Upgrade of a Flexible Server
+# Major version upgrade of Azure Database for PostgreSQL - Flexible Server
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-This article provides a step-by-step procedure to perform Major Version Upgrade in a flexible server using Azure portal
+This article provides a step-by-step procedure to perform a major version upgrade in an Azure Database for PostgreSQL flexible server instance using the Azure portal.
> [!NOTE]
-> Major Version Upgrade action is irreversible. Please perform a Point-In-Time Recovery (PITR) of your production server and test the upgrade in the non-production environment.
+> The major version upgrade action is irreversible. Please perform a Point-In-Time Recovery (PITR) of your production server and test the upgrade in the non-production environment.
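One way to create such a test copy is a point-in-time restore to a new server. The following is a hedged sketch using the Azure CLI; adjust the server names, resource group, and timestamp for your environment.
```azurecli-interactive
# Restore the production server to a new test server at a given point in time (UTC).
az postgres flexible-server restore \
  --resource-group myresourcegroup \
  --name mytestserver \
  --source-server myproductionserver \
  --restore-time "2024-01-10T00:00:00Z"
```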
-## Follow these steps to upgrade your flexible server to the major version of your choice:
+## Follow these steps to upgrade your Azure Database for PostgreSQL flexible server instance to the major version of your choice:
-1. In the [Azure portal](https://portal.azure.com/), choose the flexible server that you want to upgrade.
+1. In the [Azure portal](https://portal.azure.com/), choose the Azure Database for PostgreSQL flexible server instance that you want to upgrade.
2. Select **Overview** from the left pane, and then select **Upgrade**.
This article provides a step-by-step procedure to perform Major Version Upgrade
6. You can select the **Go to resource** tab to validate your upgrade. You'll notice that the server name remained unchanged and that the PostgreSQL version was upgraded to the desired higher version, with the latest minor version.
## Next steps
postgresql How To Pgdump Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-pgdump-restore.md
Title: Best practices for pg_dump and pg_restore in Azure Database for PostgreSQL - Flexible Server
-description: This article discusses best practices for pg_dump and pg_restore in Azure Database for PostgreSQL - Flexible Server
+ Title: Best practices for pg_dump and pg_restore
+description: This article discusses best practices for pg_dump and pg_restore in Azure Database for PostgreSQL - Flexible Server.
+ Last updated 09/16/2022
# Best practices for pg_dump and pg_restore for Azure Database for PostgreSQL - Flexible Server + This article reviews options and best practices for speeding up pg_dump and pg_restore. It also explains the best server configurations for carrying out pg_restore. ## Best practices for pg_dump
-You can use the pg_dump utility to extract a PostgreSQL database into a script file or archive file. A few of the command line options that you can use to reduce the overall dump time by using pg_dump are listed in the following sections.
+You can use the pg_dump utility to extract an Azure Database for PostgreSQL flexible server database into a script file or archive file. A few of the command line options that you can use to reduce the overall dump time by using pg_dump are listed in the following sections.
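For orientation only (the sections below describe each option in detail, and this isn't the article's canonical syntax), the following is a hedged sketch that combines a directory-format, parallel dump with a parallel restore against a flexible server. Server names, user, database, and job counts are placeholders.
```bash
# Directory-format (-Fd) dump with 4 parallel jobs (-j), written to a local directory.
pg_dump -Fd -j 4 -h mysourceserver.postgres.database.azure.com -U myadmin -d mydb -f mydb_dump

# Parallel restore of the same dump into an existing target database.
pg_restore -j 4 -h mytargetserver.postgres.database.azure.com -U myadmin -d mydb mydb_dump
```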
### Directory format(-Fd)
Use the following syntax for pg_dump:
## Best practices for pg_restore
-You can use the pg_restore utility to restore a PostgreSQL database from an archive that's created by pg_dump. A few command line options for reducing the overall restore time are listed in the following sections.
+You can use the pg_restore utility to restore an Azure Database for PostgreSQL flexible server database from an archive that's created by pg_dump. A few command line options for reducing the overall restore time are listed in the following sections.
### Parallel restore
postgresql How To Read Replicas Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-read-replicas-portal.md
Title: Manage read replicas - Azure portal, CLI, REST API - Azure Database for PostgreSQL - Flexible Server
-description: Learn how to manage read replicas Azure Database for PostgreSQL - Flexible Server from the Azure portal, CLI and REST API.
+ Title: Manage read replicas - Azure portal, REST API
+description: Learn how to manage read replicas for Azure Database for PostgreSQL - Flexible Server from the Azure portal, CLI, and REST API.
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-In this article, you learn how to create and manage read replicas in Azure Database for PostgreSQL from the Azure portal, CLI and REST API. To learn more about read replicas, see the [overview](concepts-read-replicas.md).
-
+In this article, you learn how to create and manage read replicas in Azure Database for PostgreSQL flexible server from the Azure portal, CLI, and REST API. To learn more about read replicas, see the [overview](concepts-read-replicas.md).
> [!NOTE]
-> Azure Database for PostgreSQL - Flexible Server is currently supporting the following features in Preview:
+> Azure Database for PostgreSQL flexible server currently supports the following features in preview:
> > - Promote to primary server (to maintain backward compatibility, please use promote to independent server and remove from replication, which keeps the former behavior) > - Virtual endpoints
In this article, you learn how to create and manage read replicas in Azure Datab
## Prerequisites
-An [Azure Database for PostgreSQL server](./quickstart-create-server-portal.md) to be the primary server.
+An [Azure Database for PostgreSQL flexible server instance](./quickstart-create-server-portal.md) to be the primary server.
> [!NOTE] > When deploying read replicas for persistent heavy write-intensive primary workloads, the replication lag could continue to grow and might never catch up with the primary. This might also increase storage usage at the primary as the WAL files are only deleted once received at the replica. ## Review primary settings
-Before setting up a read replica for Azure Database for PostgreSQL, ensure the primary server is configured to meet the necessary prerequisites. Specific settings on the primary server can affect the ability to create replicas.
+Before setting up a read replica for Azure Database for PostgreSQL flexible server, ensure the primary server is configured to meet the necessary prerequisites. Specific settings on the primary server can affect the ability to create replicas.
**Storage auto-grow**: The storage auto-grow setting must be consistent between the primary server and its read replicas. If the primary server has this feature enabled, the read replicas must also have it enabled to prevent inconsistencies in storage behavior that could interrupt replication. If it's disabled on the primary server, it should also be turned off on the replicas.
Before setting up a read replica for Azure Database for PostgreSQL, ensure the p
#### [Portal](#tab/portal)
-1. In the [Azure portal](https://portal.azure.com/), choose the Azure Database for PostgreSQL - Flexible Server you want for the replica.
+1. In the [Azure portal](https://portal.azure.com/), choose the Azure Database for PostgreSQL flexible server instance you want for the replica.
2. On the **Overview** dialog, note the PostgreSQL version (ex `15.4`). Also, note the region your primary is deployed to (ex., `East US`).
Review and note the following settings:
#### [REST API](#tab/restapi)
-To obtain information about the configuration of a server in Azure Database for PostgreSQL - Flexible Server, especially to view settings for recently introduced features like storage auto-grow or private link, you should use the latest API version `2023-06-01-preview`. The `GET` request for this would be formatted as follows:
+To obtain information about the configuration of a server in Azure Database for PostgreSQL flexible server, especially to view settings for recently introduced features like storage auto-grow or private link, you should use the latest API version `2023-06-01-preview`. The `GET` request for this would be formatted as follows:
```http request https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DBforPostgreSQL/flexibleServers/{serverName}?api-version=2023-06-01-preview
To create a read replica, follow these steps:
#### [Portal](#tab/portal)
-1. Select an existing Azure Database for the PostgreSQL server to use as the primary server.
+1. Select an existing Azure Database for PostgreSQL flexible server instance to use as the primary server.
2. On the server sidebar, under **Settings**, select **Replication**.
Here, `{replicaserverName}` should be replaced with the name of the replica serv
## List virtual endpoints (preview)
-To list virtual endpoints in the preview version of Azure Database for PostgreSQL - Flexible Server, use the following steps:
+To list virtual endpoints in the preview version of Azure Database for PostgreSQL flexible server, use the following steps:
#### [Portal](#tab/portal)
Here, `{sourceserverName}` should be the name of the primary server from which y
### Modify application(s) to point to virtual endpoint
-Modify any applications that are using your Azure Database for PostgreSQL to use the new virtual endpoints (ex: `corp-pg-001.writer.postgres.database.azure.com` and `corp-pg-001.reader.postgres.database.azure.com`).
+Modify any applications that are using your Azure Database for PostgreSQL flexible server instance to use the new virtual endpoints (ex: `corp-pg-001.writer.postgres.database.azure.com` and `corp-pg-001.reader.postgres.database.azure.com`).
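For example, a read-only reporting job can connect through the reader endpoint while the application's write path keeps using the writer endpoint. This is a hedged sketch using the example endpoint names above; the database and user names are placeholders.
```psql
# Write traffic goes to the writer endpoint (always points to the primary).
psql "host=corp-pg-001.writer.postgres.database.azure.com port=5432 dbname=mydb user=myadmin sslmode=require"

# Read-only traffic (reports, analytics) goes to the reader endpoint.
psql "host=corp-pg-001.reader.postgres.database.azure.com port=5432 dbname=mydb user=myadmin sslmode=require"
```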
## Promote replicas
With all the necessary components in place, you're ready to perform a promote re
#### [Portal](#tab/portal) To promote replica from the Azure portal, follow these steps:
-1. In the [Azure portal](https://portal.azure.com/), select your primary Azure Database for PostgreSQL - Flexible server.
+1. In the [Azure portal](https://portal.azure.com/), select your primary Azure Database for PostgreSQL flexible server instance.
2. On the server menu, under **Settings**, select **Replication**.
Create a secondary read replica in a separate region to modify the reader virtua
#### [Portal](#tab/portal)
-1. In the [Azure portal](https://portal.azure.com/), choose the primary Azure Database for PostgreSQL - Flexible Server.
+1. In the [Azure portal](https://portal.azure.com/), choose the primary Azure Database for PostgreSQL flexible server instance.
2. On the server sidebar, under **Settings**, select **Replication**.
The location is set to `westus3`, but you can adjust this based on your geograph
#### [Portal](#tab/portal)
-1. In the [Azure portal](https://portal.azure.com/), choose the primary Azure Database for PostgreSQL - Flexible Server.
+1. In the [Azure portal](https://portal.azure.com/), choose the primary Azure Database for PostgreSQL flexible server instance.
2. On the server sidebar, under **Settings**, select **Replication**.
Rather than switchover to a replica, it's also possible to break the replication
#### [Portal](#tab/portal)
-1. In the [Azure portal](https://portal.azure.com/), choose the Azure Database for PostgreSQL - Flexible Server primary server.
+1. In the [Azure portal](https://portal.azure.com/), choose the Azure Database for PostgreSQL flexible server primary server.
2. On the server sidebar, on the server menu, under **Settings**, select **Replication**.
DELETE https://management.azure.com/subscriptions/{subscriptionId}/resourceGroup
#### [Portal](#tab/portal)
-You can delete a read replica similar to how you delete a standalone Azure Database for PostgreSQL - Flexible Server.
+You can delete a read replica similar to how you delete a standalone Azure Database for PostgreSQL flexible server instance.
1. In the Azure portal, open the **Overview** page for the read replica. Select **Delete**.
You can delete a read replica similar to how you delete a standalone Azure Datab
You can also delete the read replica from the **Replication** window by following these steps:
-2. In the Azure portal, select your primary Azure Database for the PostgreSQL server.
+2. In the Azure portal, select your primary Azure Database for PostgreSQL flexible server instance.
3. On the server menu, under **Settings**, select **Replication**.
You can only delete the primary server once all read replicas have been deleted.
To delete a server from the Azure portal, follow these steps:
-1. In the Azure portal, select your primary Azure Database for the PostgreSQL server.
+1. In the Azure portal, select your primary Azure Database for PostgreSQL flexible server instance.
2. Open the **Overview** page for the server and select **Delete**.
The **Read Replica Lag** metric shows the time since the last replayed transacti
## Related content -- [Read replicas in Azure Database for PostgreSQL](concepts-read-replicas.md)
+- [Read replicas in Azure Database for PostgreSQL - Flexible Server](concepts-read-replicas.md)
postgresql How To Request Quota Increase https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-request-quota-increase.md
Title: How to request quota increase for Azure Database PostgreSQL Flexible Server resources
-description: Learn how to request a quota increase for Azure Database for PostgreSQL Flexible Server. You will also learn how to enable a subscription to access a region..
+ Title: How to request a quota increase
+description: Learn how to request a quota increase for Azure Database for PostgreSQL - Flexible Server. You also learn how to enable a subscription to access a region.
Last updated 03/31/2023
-# Request quota increases for Azure Database PostgreSQL Flexible Server
+# Request quota increases for Azure Database for PostgreSQL - Flexible Server
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-The resources in Azure Database for PostgreSQL Flexible Server have default quotas/limits. However, there may be a case where your workload needs more quota than the default value. In such case, you must reach out to the Azure PostgreSQL DB team to request a quota increase. This article explains how to request a quota increase for Azure Database for PostgreSQL FLexible Server resources.
+The resources in Azure Database for PostgreSQL flexible server have default quotas/limits. However, there may be cases where your workload needs more quota than the default value. In that case, you must reach out to the Azure Database for PostgreSQL flexible server team to request a quota increase. This article explains how to request a quota increase for Azure Database for PostgreSQL flexible server resources.
## Create a new support request
-To request a quota increase, you must create a new support request with your workload details. The Azure Database for PostgreSQL Flexible Server team will then process your request and approve or deny it. Use the following steps to create a new support request from the Azure portal:
+To request a quota increase, you must create a new support request with your workload details. The Azure Database for PostgreSQL flexible server team then processes your request and approves or denies it. Use the following steps to create a new support request from the Azure portal:
1. Sign into the Azure portal.
To request a quota increase, you must create a new support request with your wor
* For **Subscription**, select the subscription for which you want to increase the quota. * For **Quota type**, select **Azure Database for PostgreSQL Flexible Server**
- :::image type="content" source="./media/how-to-create-support-request-quota-increase/create-quota-increase-request.png" alt-text="Create a new Azure Flexible Server request for quota increase":::
+ :::image type="content" source="./media/how-to-create-support-request-quota-increase/create-quota-increase-request.png" alt-text="Create a new Azure Database for PostgreSQL flexible server request for quota increase.":::
4. On the **Additional Details** tab, enter the details corresponding to your quota request. The information provided on this tab is used to further assess your issue and help the support engineer troubleshoot the problem.
To request a quota increase, you must create a new support request with your wor
7. Select **Next: Review+Create**. Validate the information provided and select **Create** to create a support request.
-The Azure Database for PostgreSQL Flexible Server DB support team process all quota requests in 24-48 hours.
+The Azure Database for PostgreSQL flexible server support team processes all quota requests in 24-48 hours.
## Next steps -- Learn how to [create a PostgreSQL server in the portal](how-to-manage-server-portal.md).
+- Learn how to [create an Azure Database for PostgreSQL flexible server instance in the portal](how-to-manage-server-portal.md).
- Learn about [service limits](concepts-limits.md).
postgresql How To Restart Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-restart-server-cli.md
Title: Restart - Azure portal - Azure Database for PostgreSQL Flexible Server
-description: This article describes how to restart operations in Azure Database for PostgreSQL through the Azure CLI.
+ Title: Restart - Azure CLI
+description: This article describes how to restart operations in Azure Database for PostgreSQL - Flexible Server through the Azure CLI.
Last updated 11/30/2021
-# Restart an Azure Database for PostgreSQL - Flexible Server
+# Restart an Azure Database for PostgreSQL - Flexible Server instance
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-This article shows you how to perform restart, start and stop flexible server using Azure CLI.
+This article shows you how to restart, start, and stop an Azure Database for PostgreSQL flexible server instance using the Azure CLI.
## Prerequisites
This article shows you how to perform restart, start and stop flexible server us
az login ```` -- If you have multiple subscriptions, choose the appropriate subscription in which you want to create the server using the ```az account set``` command.
+- If you have multiple subscriptions, choose the appropriate subscription in which you want to create the server using the `az account set` command.
` ```azurecli az account set --subscription <subscription id> ``` -- Create a PostgreSQL Flexible Server if you have not already created one using the ```az postgres flexible-server create``` command.
+- Create an Azure Database for PostgreSQL flexible server instance if you haven't already created one using the `az postgres flexible-server create` command.
```azurecli az postgres flexible-server create --resource-group myresourcegroup --name myservername ``` ## Restart a server
-To restart a server, run ```az postgres flexible-server restart``` command. If you are using [local context](/cli/azure/config/param-persist), you don't need to provide any arguments.
+To restart a server, run the `az postgres flexible-server restart` command. If you're using [local context](/cli/azure/config/param-persist), you don't need to provide any arguments.
**Usage:** ```azurecli
az postgres flexible-server restart
``` > [!IMPORTANT]
-> Once the server has restarted successfully, all management operations are now available for the flexible server.
+> Once the server has restarted successfully, all management operations are now available for the Azure Database for PostgreSQL flexible server instance.
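If the server is configured with high availability, the restart command can also trigger a failover to the standby. The following is a hedged sketch; confirm the accepted `--failover` values for your CLI version before relying on it.
```azurecli-interactive
# Restart with a planned failover to the standby replica (high availability servers only).
az postgres flexible-server restart --resource-group myresourcegroup --name myservername --failover Planned
```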
## Next steps-- Learn more about [stopping and starting Azure Database for PostgreSQL Flexible Server](./how-to-stop-start-server-cli.md)
+- Learn more about [stopping and starting Azure Database for PostgreSQL - Flexible Server](./how-to-stop-start-server-cli.md)
postgresql How To Restart Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-restart-server-portal.md
Title: Restart - Azure portal - Azure Database for PostgreSQL - Flexible Server
-description: This article describes how to perform restart operations in Azure Database for PostgreSQL through the Azure portal.
+ Title: Restart - Azure portal
+description: This article describes how to perform restart operations in Azure Database for PostgreSQL - Flexible Server through the Azure portal.
Last updated 11/30/2021
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-This article provides step-by-step procedure to perform restart of the flexible server. This operation is useful to apply any static parameter changes that requires database server restart. The procedure is same for servers configured with zone redundant high availability.
+This article provides a step-by-step procedure to restart an Azure Database for PostgreSQL flexible server instance. This operation is useful for applying any static parameter changes that require a database server restart. The procedure is the same for servers configured with zone-redundant high availability.
> [!IMPORTANT] > When configured with high availability, both the primary and the standby servers are restarted at the same time.
This article provides step-by-step procedure to perform restart of the flexible
To complete this how-to guide, you need: -- You must have a flexible server.
+- An Azure Database for PostgreSQL flexible server instance.
## Restart your flexible server
-Follow these steps to restart your flexible server.
+Follow these steps to restart your Azure Database for PostgreSQL flexible server instance.
-1. In the [Azure portal](https://portal.azure.com/), choose your flexible server that you want to restart.
+1. In the [Azure portal](https://portal.azure.com/), choose your Azure Database for PostgreSQL flexible server instance that you want to restart.
2. Click **Overview** from the left panel and click **Restart**.
- :::image type="content" source="./media/how-to-restart-server-portal/restart-base-page.png" alt-text="Restart selection":::
+ :::image type="content" source="./media/how-to-restart-server-portal/restart-base-page.png" alt-text="Restart selection.":::
-3. A pop-up confirmation message will appear.
+3. A pop-up confirmation message appears.
4. Click **Yes** if you want to continue.
- :::image type="content" source="./media/how-to-restart-server-portal/restart-pop-up.png" alt-text="Restart confirm":::
+ :::image type="content" source="./media/how-to-restart-server-portal/restart-pop-up.png" alt-text="Restart confirm.":::
-6. A notification will be shown that the restart operation has been
+6. A notification is shown that the restart operation has been
initiated. > [!NOTE]
postgresql How To Restore Different Subscription Resource Group Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-restore-different-subscription-resource-group-api.md
+
+ Title: Cross subscription and cross resource group restore - Azure REST API
+description: This article describes how to restore a server to a different subscription or resource group in Azure Database for PostgreSQL - Flexible Server using Azure REST API.
++++++ Last updated : 10/04/2023++
+# Cross subscription and cross resource group restore in Azure Database for PostgreSQL - Flexible Server using Azure REST API
+
+In this article, you learn how to restore an Azure Database for PostgreSQL flexible server instance to a different subscription or resource group by using the [Azure REST API](/rest/api/azure/). To learn more about backup and restore, see the [overview](concepts-backup-restore.md).
+
+## Prerequisites
+An [Azure Database for PostgreSQL flexible server instance](quickstart-create-server-portal.md) to be the primary server.
+
+### Restore to a different Subscription or Resource group
+
+ 1. Browse to the [Azure Database for PostgreSQL flexible server Create Server REST API Page](/rest/api/postgresql/flexibleserver/servers/create) and select the **Try It** tab highlighted in green. Sign in with your Azure account.
+
+2. Provide the **resourceGroupName** (target resource group name), **serverName** (target server name), and **subscriptionId** (target subscription) properties. Use the latest api-version that is available. For this example, we're using 2023-06-01-preview.
+
+ ![Screenshot showing the REST API Try It page.](./media/how-to-restore-server-portal/geo-restore-different-subscription-or-resource-group-api.png)
+++
+3. Go to the **Request Body** section and paste the following, replacing the values for "location" (for example, CentralUS or EastUS), "pointInTimeUTC", and "SourceServerResourceID". For "pointInTimeUTC", specify a timestamp value to which you want to restore. Finally, use createMode **PointInTimeRestore** to perform a regular restore, or **GeoRestore** to restore geo-redundant backups.
+
+ **GeoRestore**
+
+```json
+ {
+ "location": "NorthEurope",
+ "properties":
+ {
+ "pointInTimeUTC": "2023-10-03T16:05:02Z",
+   "SourceServerResourceID": "/subscriptions/ffffffff-ffff-ffff-ffff-ffffffffffff/resourceGroups/source-resourcegroupname-rg/providers/Microsoft.DBforPostgreSQL/flexibleServers/SourceServer-Name",
+ "createMode": "GeoRestore"
+ }
+}
+```
+**Point In Time Restore**
+
+```json
+ {
+ "location": "EastUS",
+ "properties":
+ {
+ "pointInTimeUTC": "2023-06-15T16:05:02Z",
+ "createMode": "PointInTimeRestore",
+ "sourceServerResourceId": "/subscriptions/ffffffff-ffff-ffff-ffff-ffffffffffff/resourceGroups/SourceResourceGroup-Name/providers/Microsoft.DBforPostgreSQL/flexibleServers/SourceServer-Name"
+ }
+ }
+```
++
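If you prefer to script the call instead of using the **Try It** page, the following is a minimal sketch that submits one of the request bodies above with `az rest`. The target subscription, resource group, server name, and the `restore.json` file name are placeholders, not values from this article.

```azurecli
# Minimal sketch (placeholders throughout): save one of the request bodies above
# to restore.json, then submit the PUT request that creates the restored server.
az rest --method put \
  --url "https://management.azure.com/subscriptions/<target-subscription-id>/resourceGroups/<target-resource-group>/providers/Microsoft.DBforPostgreSQL/flexibleServers/<target-server-name>?api-version=2023-06-01-preview" \
  --body @restore.json
```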
+4. If you see Response Code 201 or 202, the restore request is successfully submitted.
+
+ The server creation can take time depending on the database size and compute resources provisioned on the original server. The restore status can be monitored from Activity log by filtering for
+ - **Subscription** = Your Subscription
+ - **Resource Type** = Azure Database for PostgreSQL flexible servers (Microsoft.DBforPostgreSQL/flexibleServers)
+ - **Operation** = Update PostgreSQL Server Create
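As an alternative to the portal, the following is a hedged sketch of checking the same activity log from the Azure CLI; the resource group name is a placeholder, and the JMESPath filter is only an assumption that narrows the output to flexible server operations.

```azurecli
# Hedged sketch: list recent activity-log entries for the target resource group
# and keep only operations on Microsoft.DBforPostgreSQL/flexibleServers resources.
az monitor activity-log list \
  --resource-group <target-resource-group> \
  --offset 1h \
  --query "[?contains(operationName.value, 'Microsoft.DBforPostgreSQL/flexibleServers')].{operation:operationName.value, status:status.value, timestamp:eventTimestamp}" \
  --output table
```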
++
+## Common Errors
+
+ - If you use an incorrect API version, you might experience restore failures or timeouts. Use the 2023-06-01-preview API version to avoid such issues.
+ - To avoid potential DNS errors, use a different server name when you initiate the restore process; some restore operations might fail if you use the same name.
+
+## Next steps
+
+- Learn about [business continuity](./concepts-business-continuity.md).
+- Learn about [zone-redundant high availability](./concepts-high-availability.md).
+- Learn about [backup and recovery](./concepts-backup-restore.md).
postgresql How To Restore Dropped Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-restore-dropped-server.md
Title: Restore a dropped Azure Database for PostgreSQL - Flexible Server
+ Title: Restore a dropped server
description: This article describes how to restore a dropped server in Azure Database for PostgreSQL - Flexible Server using the Azure portal.
Last updated 06/15/2023
-# Restore a dropped Azure Database for PostgreSQL Flexible server
+# Restore a dropped Azure Database for PostgreSQL - Flexible Server instance
[!INCLUDE [applies-to-postgresql-Flexible-server](../includes/applies-to-postgresql-Flexible-server.md)]
-When a server is dropped, the database server backup is retained for five days in the service. The database backup can be accessed and restored only from the Azure subscription where the server originally resided. The following recommended steps can be followed to recover a dropped PostgreSQL server resource within five days from the time of server deletion. The recommended steps work only if the backup for the server is still available and not deleted from the system. While restoring a deleted server often succeeds, it is not always guaranteed, as restoring a deleted server depends on several other factors.
+When a server is dropped, the Azure Database for PostgreSQL flexible server backup is retained for five days in the service. The database backup can be accessed and restored only from the Azure subscription where the server originally resided. Follow the recommended steps below to recover a dropped Azure Database for PostgreSQL flexible server resource within five days from the time of server deletion. The recommended steps work only if the backup for the server is still available and not deleted from the system. While restoring a deleted server often succeeds, it isn't always guaranteed, because success depends on several other factors.
## Prerequisites
-To restore a dropped Azure Database for PostgreSQL Flexible server, you need
+To restore a dropped Azure Database for PostgreSQL flexible server instance, you need
- Azure subscription name hosting the original server
- Location where the server was created
- Use the 2023-03-01-preview **api-version**
To restore a dropped Azure Database for PostgreSQL Flexible server, you need
![Screenshot showing activity log filtered for delete PostgreSQL server operation.](./media/how-to-restore-server-portal/activity-log-azure.png)
-3. Select the **Delete PostgreSQL Server** event, then select the **JSON tab**. Copy the `resourceId` and `submissionTimestamp` attributes in JSON output. The resourceId is in the following format: `/subscriptions/ffffffff-ffff-ffff-ffff-ffffffffffff/resourceGroups/TargetResourceGroup/providers/Microsoft.DBforPostgreSQL/servers/deletedserver`.
+3. Select the **Delete PostgreSQL Server** event, then select the **JSON tab**. Copy the `resourceId` and `submissionTimestamp` attributes in JSON output. The resourceId is in the following format: `/subscriptions/ffffffff-ffff-ffff-ffff-ffffffffffff/resourceGroups/ResourceGroup-name/providers/Microsoft.DBforPostgreSQL/flexibleServers/deletedserver`.
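If you prefer the CLI over the portal for this step, the following is a hedged sketch that lists recent delete events for the resource group that hosted the server; `source-resourcegroup` is a placeholder, and the filter on the operation name is an assumption based on standard resource provider naming.

```azurecli
# Hedged sketch: surface the resourceId and submissionTimestamp values that
# step 3 asks you to copy. 'source-resourcegroup' is a placeholder.
az monitor activity-log list \
  --resource-group source-resourcegroup \
  --offset 5d \
  --query "[?contains(operationName.value, 'flexibleServers/delete')].{resourceId:resourceId, submissionTimestamp:submissionTimestamp}" \
  --output table
```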
-4. Browse to the PostgreSQL [Create Server REST API Page](/rest/api/postgresql/flexibleserver/servers/create) and select the **Try It** tab highlighted in green. Sign in with your Azure account.
+4. Browse to the Azure Database for PostgreSQL flexible server [Create Server REST API Page](/rest/api/postgresql/flexibleserver/servers/create) and select the **Try It** tab highlighted in green. Sign in with your Azure account.
> [!Important] > Use api-version **_2023-03-01-preview_** rather than the default before you run the request, so that the API functions as expected, as detailed in the following step.
postgresql How To Restore Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-restore-server-cli.md
Title: Restore Azure Database for PostgreSQL - Flexible Server with Azure CLI
-description: This article describes how to perform restore operations in Azure Database for PostgreSQL through the Azure CLI.
+ Title: Restore with Azure CLI
+description: This article describes how to perform restore operations in Azure Database for PostgreSQL - Flexible Server through the Azure CLI.
Last updated 11/30/2021
-# Point-in-time restore of an Azure Database for PostgreSQL - Flexible Server with Azure CLI
+# Point-in-time restore of an Azure Database for PostgreSQL - Flexible Server instance with Azure CLI
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-This article provides step-by-step procedure to perform point-in-time recoveries in flexible server using backups.
+This article provides a step-by-step procedure to perform point-in-time recoveries in Azure Database for PostgreSQL flexible server by using backups.
## Prerequisites - If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account before you begin.
This article provides step-by-step procedure to perform point-in-time recoveries
az login ```` -- If you have multiple subscriptions, choose the appropriate subscription in which you want to create the server using the ```az account set``` command.
+- If you have multiple subscriptions, choose the appropriate subscription in which you want to create the server using the `az account set` command.
` ```azurecli az account set --subscription <subscription id> ``` -- Create a PostgreQL Flexible Server if you haven't already created one using the ```az postgres flexible-server create``` command.
+- Create an Azure Database for PostgreSQL flexible server instance if you haven't already created one using the `az postgres flexible-server create` command.
```azurecli az postgres flexible-server create --resource-group myresourcegroup --name myservername
az postgres flexible-server geo-restore --source-server
[--subscription] ```
-**Example:** To perform a geo-restore of a source server 'mydemoserver' which is located in region East US to a new server 'mydemoserver-restored' in itΓÇÖs geo-paired location West US with the same network setting you can run following command.
+**Example:** To perform a geo-restore of a source server 'mydemoserver', located in the East US region, to a new server 'mydemoserver-restored' in its geo-paired location West US with the same network settings, run the following command.
```azurecli az postgres flexible-server geo-restore \
postgresql How To Restore Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-restore-server-portal.md
Title: Point-in-time restore of a flexible server - Azure portal
-description: This article describes how to perform restore operations in Azure Database for PostgreSQL Flexible Server through the Azure portal.
+ Title: Point-in-time restore - Azure portal
+description: This article describes how to perform restore operations in Azure Database for PostgreSQL - Flexible Server through the Azure portal.
Last updated 11/30/2021
-# Point-in-time restore of a flexible server
+# Point-in-time restore of an Azure Database for PostgreSQL - Flexible Server instance
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-This article provides a step-by-step procedure for using the Azure portal to perform point-in-time recoveries in a flexible server through backups. You can perform this procedure to the latest restore point or to a custom restore point within your retention period.
+This article provides a step-by-step procedure for using the Azure portal to perform point-in-time recoveries in Azure Database for PostgreSQL flexible server through backups. You can perform this procedure to the latest restore point or to a custom restore point within your retention period.
## Prerequisites
-To complete this how-to guide, you need Azure Database for PostgreSQL - Flexible Server. The procedure is also applicable for a flexible server that's configured with zone redundancy.
+To complete this how-to guide, you need an Azure Database for PostgreSQL flexible server instance. The procedure is also applicable for an Azure Database for PostgreSQL flexible server instance that's configured with zone redundancy.
## Restore to the latest restore point
-Follow these steps to restore your flexible server to the latest restore point by using an existing backup:
+Follow these steps to restore your Azure Database for PostgreSQL flexible server instance to the latest restore point by using an existing backup:
-1. In the [Azure portal](https://portal.azure.com/), choose the flexible server that you want to restore the backup from.
+1. In the [Azure portal](https://portal.azure.com/), choose the Azure Database for PostgreSQL flexible server instance that you want to restore the backup from.
2. Select **Overview** from the left pane, and then select **Restore**.
Follow these steps to restore your flexible server to the latest restore point b
## Restore to a custom restore point
-Follow these steps to restore your flexible server to a custom restore point by using an existing backup:
+Follow these steps to restore your Azure Database for PostgreSQL flexible server instance to a custom restore point by using an existing backup:
-1. In the [Azure portal](https://portal.azure.com/), choose the flexible server that you want to restore the backup from.
+1. In the [Azure portal](https://portal.azure.com/), choose the Azure Database for PostgreSQL flexible server instance that you want to restore the backup from.
2. Select **Overview** from the left pane, and then select **Restore**.
If your source server is configured with geo-redundant backup, you can restore t
> [!NOTE] > For the first time that you perform a geo-restore, wait at least one hour after you create the source server.
-1. In the [Azure portal](https://portal.azure.com/), choose the flexible server that you want to geo-restore the backup from.
+1. In the [Azure portal](https://portal.azure.com/), choose the Azure Database for PostgreSQL flexible server instance that you want to geo-restore the backup from.
2. Select **Overview** from the left pane, and then select **Restore**.
postgresql How To Restore To Different Subscription Or Resource Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-restore-to-different-subscription-or-resource-group.md
Title: Cross Subscription and Cross Resource Group Restore in Azure Database for PostgreSQL - Flexible Server
-description: This article describes how to restore to a different Subscription or resource group server in Azure Database for PostgreSQL - Flexible Server using the Azure portal.
+ Title: Cross subscription and cross resource group restore
+description: This article describes how to restore to a different subscription or resource group server in Azure Database for PostgreSQL - Flexible Server using the Azure portal.
Last updated 09/30/2023
-# Cross Subscription and Cross Resource Group Restore in Azure Database for PostgreSQL Flexible Server
+# Cross subscription and cross resource group restore in Azure Database for PostgreSQL - Flexible Server
[!INCLUDE [applies-to-postgresql-Flexible-server](../includes/applies-to-postgresql-Flexible-server.md)]
-This article provides a step-by-step procedure for using the Azure portal to perform a restore to a different subscription or resource group in a flexible server through automated backups. You can perform this procedure to the latest restore point or to a custom restore point within your retention period.
+This article provides a step-by-step procedure for using the Azure portal to perform a restore to a different subscription or resource group in Azure Database for PostgreSQL flexible server through automated backups. You can perform this procedure to the latest restore point or to a custom restore point within your retention period.
## Prerequisites
-To complete this how-to guide, you need Azure Database for PostgreSQL - Flexible Server. The procedure is also applicable for a flexible server that's configured with zone redundancy.
+To complete this how-to guide, you need an Azure Database for PostgreSQL flexible server instance. The procedure is also applicable for an Azure Database for PostgreSQL flexible server instance that's configured with zone redundancy.
## Restore to a different Subscription or Resource group
-1. In the [Azure portal](https://portal.azure.com/), choose the flexible server that you want to restore.
+1. In the [Azure portal](https://portal.azure.com/), choose the Azure Database for PostgreSQL flexible server instance that you want to restore.
2. Select **Overview** from the left pane, and then select **Restore**.
If your source server is configured with geo-redundant backup, you can restore t
> [!NOTE] > For the first time that you perform a geo-restore, wait at least one hour after you create the source server
-1. In the [Azure portal](https://portal.azure.com/), choose the flexible server that you want to restore the backup from.
+1. In the [Azure portal](https://portal.azure.com/), choose the Azure Database for PostgreSQL flexible server instance that you want to restore the backup from.
2. Select **Overview** from the left pane, and then select **Restore**.
postgresql How To Scale Compute Storage Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-scale-compute-storage-portal.md
Title: Scale operations - Azure portal - Azure Database for PostgreSQL - Flexible Server
-description: This article describes how to perform scale operations in Azure Database for PostgreSQL through the Azure portal.
+ Title: Scale operations - Azure portal
+description: This article describes how to perform scale operations in Azure Database for PostgreSQL - Flexible Server through the Azure portal.
Last updated 11/30/2021
-# Scale operations in Flexible Server
+# Scale operations in Azure Database for PostgreSQL - Flexible Server
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)] This article provides steps to perform scaling operations for compute and storage. You're able to change your compute tiers between burstable, general purpose, and memory optimized SKUs, including choosing the number of vCores that is suitable to run your application. You can also scale up your storage. Expected IOPS are shown based on the compute tier, vCores and the storage capacity. The cost estimate is also shown based on your selection. > [!IMPORTANT]
-> You cannot scale down the storage.
+> You can't scale down the storage.
## Prerequisites To complete this how-to guide, you need: -- You must have an Azure Database for PostgreSQL - Flexible Server. The same procedure is also applicable for flexible server configured with zone redundancy.
+- You must have an Azure Database for PostgreSQL flexible server instance. The same procedure is also applicable for an Azure Database for PostgreSQL flexible server instance configured with zone redundancy.
## Scaling the compute tier and size Follow these steps to choose the compute tier.
-1. In the [Azure portal](https://portal.azure.com/), choose the flexible server that you want to restore the backup from.
+1. In the [Azure portal](https://portal.azure.com/), choose the Azure Database for PostgreSQL flexible server instance that you want to scale.
2. Select **Compute+storage**.
Follow these steps to choose the compute tier.
6. If you want to change the number of vCores, you can select the drop-down of **Compute size** and select the desired number of vCores/Memory from the list. - Burstable compute tier:
- :::image type="content" source="./media/how-to-scale-compute-storage-portal/compute-burstable-dropdown.png" alt-text="burstable compute":::
+ :::image type="content" source="./media/how-to-scale-compute-storage-portal/compute-burstable-dropdown.png" alt-text="Screenshot that shows burstable compute.":::
- General purpose compute tier: :::image type="content" source="./media/how-to-scale-compute-storage-portal/compute-general-purpose-dropdown.png" alt-text="Screenshot that shows general-purpose compute.":::
Follow these steps to choose the compute tier.
Follow these steps to increase your storage size.
-1. In the [Azure portal](https://portal.azure.com/), choose the flexible server for which you want to increase the storage size.
+1. In the [Azure portal](https://portal.azure.com/), choose the Azure Database for PostgreSQL flexible server instance for which you want to increase the storage size.
2. Select **Compute+storage**.
Follow these steps to increase your storage size.
## Storage autogrow
-Use below steps to enable storage autogrow for your flexible server and automatically scale your storage in most cases.
+Use the following steps to enable storage autogrow for your Azure Database for PostgreSQL flexible server instance and automatically scale your storage in most cases.
-1. In the [Azure portal](https://portal.azure.com/), choose the flexible server for which you want to increase the storage size.
+1. In the [Azure portal](https://portal.azure.com/), choose the Azure Database for PostgreSQL flexible server instance for which you want to increase the storage size.
2. Select **Compute+storage**.
Use below steps to enable storage autogrow for your flexible server and automati
### Scaling up
-Use the below steps to scale up the performance tier on your flexible server.
+Use the following steps to scale up the performance tier on your Azure Database for PostgreSQL flexible server instance.
-1. In the [Azure portal](https://portal.azure.com/), choose the flexible server that you want to scale up.
+1. In the [Azure portal](https://portal.azure.com/), choose the Azure Database for PostgreSQL flexible server instance that you want to scale up.
2. Select **Compute + storage**.
Use the below steps to scale up the performance tier on your flexible server.
### Scaling down
-Use the below steps to scale down the performance tier on your flexible server.
+Use the following steps to scale down the performance tier on your Azure Database for PostgreSQL flexible server instance.
-1. In the [Azure portal](https://portal.azure.com/), choose the flexible server that you want to scale down.
+1. In the [Azure portal](https://portal.azure.com/), choose the Azure Database for PostgreSQL flexible server instance that you want to scale down.
2. Select **Compute + storage**.
postgresql How To Server Logs Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-server-logs-portal.md
Title: 'Download server logs for Azure Database for PostgreSQL - Flexible Server'
+ Title: Download server logs
description: This article describes how to download server logs using Azure portal.
Last updated 1/16/2024
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-You can use server logs to help monitor and troubleshoot an instance of Azure Database for PostgreSQL - Flexible Server, and to gain detailed insights into the activities that have run on your servers.
+You can use server logs to help monitor and troubleshoot an instance of Azure Database for PostgreSQL flexible server, and to gain detailed insights into the activities that have run on your servers.
-By default, the server logs feature in Azure Database for PostgreSQL - Flexible Server is disabled. However, after you enable the feature, a flexible server starts capturing events of the selected log type and writes them to a file. You can then use the Azure portal or the Azure CLI to download the files to assist with your troubleshooting efforts. This article explains how to enable the server logs feature in Azure Database for PostgreSQL - Flexible Server and download server log files. It also provides information about how to disable the feature.
+By default, the server logs feature in Azure Database for PostgreSQL flexible server is disabled. However, after you enable the feature, Azure Database for PostgreSQL flexible server starts capturing events of the selected log type and writes them to a file. You can then use the Azure portal or the Azure CLI to download the files to assist with your troubleshooting efforts. This article explains how to enable the server logs feature in Azure Database for PostgreSQL flexible server and download server log files. It also provides information about how to disable the feature.
In this tutorial, you'll learn how to: >[!div class="checklist"]
In this tutorial, youΓÇÖll learn how to:
## Prerequisites
-To complete this tutorial, you need an existing Azure Database for PostgreSQL - Flexible Server. If you need to create a new server, see [Create an Azure Database for PostgreSQL - Flexible Server](./quickstart-create-server-portal.md).
+To complete this tutorial, you need an Azure Database for PostgreSQL flexible server instance. If you need to create a new server, see [Create an Azure Database for PostgreSQL - Flexible Server](./quickstart-create-server-portal.md).
## Enable Server logs To enable the server logs feature, perform the following steps.
-1. In the [Azure portal](https://portal.azure.com), select your PostgreSQL flexible server.
+1. In the [Azure portal](https://portal.azure.com), select your Azure Database for PostgreSQL flexible server instance.
2. On the left pane, under **Monitoring**, select **Server logs**.
- :::image type="content" source="./media/how-to-server-logs-portal/1-how-to-server-log.png" alt-text="Screenshot showing Azure Database for PostgreSQL - Server Logs.":::
+ :::image type="content" source="./media/how-to-server-logs-portal/1-how-to-server-log.png" alt-text="Screenshot showing Azure Database for PostgreSQL flexible server logs.":::
3. To enable server logs, under **Server logs**, select **Enable**.
postgresql How To Stop Start Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-stop-start-server-cli.md
Title: Stop/start - Azure CLI - Azure Database for PostgreSQL Flexible Server
-description: This article describes how to stop/start operations in Azure Database for PostgreSQL through the Azure CLI.
+ Title: Stop/start - Azure CLI
+description: This article describes how to stop/start operations in Azure Database for PostgreSQL - Flexible Server through the Azure CLI.
Last updated 11/30/2021
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-This article shows you how to perform restart, start and stop flexible server using Azure CLI.
+This article shows you how to restart, start, and stop an Azure Database for PostgreSQL flexible server instance by using the Azure CLI.
## Prerequisites
This article shows you how to perform restart, start and stop flexible server us
az login ```` -- If you have multiple subscriptions, choose the appropriate subscription in which you want to create the server using the ```az account set``` command.
+- If you have multiple subscriptions, choose the appropriate subscription in which you want to create the server using the `az account set` command.
` ```azurecli az account set --subscription <subscription id> ``` -- Create a PostgreSQL Flexible Server if you haven't already created one using the ```az postgres flexible-server create``` command.
+- Create an Azure Database for PostgreSQL flexible server instance if you haven't already created one using the `az postgres flexible-server create` command.
```azurecli az postgres flexible-server create --resource-group myresourcegroup --name myservername ``` ## Stop a running server
-To stop a server, run ```az postgres flexible-server stop``` command. If you're using [local context](/cli/azure/config/param-persist), you don't need to provide any arguments.
+To stop a server, run `az postgres flexible-server stop` command. If you're using [local context](/cli/azure/config/param-persist), you don't need to provide any arguments.
**Usage:** ```azurecli
az postgres flexible-server stop
``` ## Start a stopped server
-To start a server, run ```az postgres flexible-server start``` command. If you're using [local context](/cli/azure/config/param-persist), you don't need to provide any arguments.
+To start a server, run `az postgres flexible-server start` command. If you're using [local context](/cli/azure/config/param-persist), you don't need to provide any arguments.
**Usage:** ```azurecli
az postgres flexible-server start
``` > [!IMPORTANT]
-> Once the server has restarted successfully, all management operations are now available for the flexible server.
+> Once the server has restarted successfully, all management operations are now available for the Azure Database for PostgreSQL flexible server instance.
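If you aren't using local context, you can name the server explicitly in both operations. The following is a minimal sketch; the resource group and server name are placeholder values.

```azurecli
# Minimal sketch: stop a server and start it again later, naming it explicitly.
az postgres flexible-server stop --resource-group myresourcegroup --name myservername
az postgres flexible-server start --resource-group myresourcegroup --name myservername
```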
## Next steps-- Learn more about [restarting Azure Database for PostgreSQL Flexible Server](./how-to-restart-server-cli.md)
+- Learn more about [restarting Azure Database for PostgreSQL flexible server](./how-to-restart-server-cli.md).
postgresql How To Stop Start Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-stop-start-server-portal.md
Title: Stop/start - Azure portal - Azure Database for PostgreSQL Flexible Server
-description: This article describes how to stop/start operations in Azure Database for PostgreSQL through the Azure portal.
+ Title: Stop/start - Azure portal
+description: This article describes how to stop/start operations in Azure Database for PostgreSQL - Flexible Server through the Azure portal.
Last updated 11/30/2021
-# Stop/Start an Azure Database for PostgreSQL - Flexible Server using Azure portal
+# Stop/Start an Azure Database for PostgreSQL - Flexible Server instance using Azure portal
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-This article provides step-by-step instructions to stop and start a flexible server.
+This article provides step-by-step instructions to stop and start an Azure Database for PostgreSQL flexible server instance.
## Pre-requisites To complete this how-to guide, you need: -- You must have an Azure Database for PostgreSQL Flexible Server.
+- You must have an Azure Database for PostgreSQL flexible server instance.
## Stop a running server
-1. In the [Azure portal](https://portal.azure.com/), choose the flexible server that you want to stop.
+1. In the [Azure portal](https://portal.azure.com/), choose the Azure Database for PostgreSQL flexible server instance that you want to stop.
2. From the **Overview** page, click the **Stop** button in the toolbar. > [!NOTE]
-> Once the server is stopped, other management operations are not available for the flexible server.
-> While the database instance is in stopped state, it could be briefly restarted for our scheduled monthly maintenance, and then returned to its stopped state. This ensures that even instances in a stopped state stay up to date with all necessary patches and updates.
+> Once the server is stopped, other management operations are not available for the Azure Database for PostgreSQL flexible server instance.
+> While the Azure Database for PostgreSQL flexible server instance is in a stopped state, it could be briefly restarted for scheduled monthly maintenance, and then returned to its stopped state. This ensures that even instances in a stopped state stay up to date with all necessary patches and updates.
## Start a stopped server
-1. In the [Azure portal](https://portal.azure.com/), choose the flexible server that you want to start.
+1. In the [Azure portal](https://portal.azure.com/), choose the Azure Database for PostgreSQL flexible server instance that you want to start.
2. From the **Overview** page, click the **Start** button in the toolbar. > [!NOTE]
-> Once the server is started, all management operations are now available for the flexible server.
+> Once the server is started, all management operations are now available for the Azure Database for PostgreSQL flexible server instance.
## Next steps -- Learn more about [compute and storage options in Azure Database for PostgreSQL Flexible Server](./concepts-compute-storage.md).
+- Learn more about [compute and storage options in Azure Database for PostgreSQL flexible server](./concepts-compute-storage.md).
postgresql How To Troubleshoot Cli Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-troubleshoot-cli-errors.md
Title: Troubleshoot Azure Database for PostgreSQL Flexible Server CLI errors
-description: This topic gives guidance on troubleshooting common issues with Azure CLI when using PostgreSQL Flexible Server.
+ Title: Troubleshoot CLI errors
+description: This topic gives guidance on troubleshooting common issues with Azure CLI when using Azure Database for PostgreSQL - Flexible Server.
Last updated 11/30/2021
-# Troubleshoot Azure Database for PostgreSQL Flexible Server CLI errors
+# Troubleshoot Azure Database for PostgreSQL - Flexible Server CLI errors
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-This doc will help you troubleshoot common issues with Azure CLI when using PostgreSQL Flexible Server.
+This article helps you troubleshoot common issues with Azure CLI when using Azure Database for PostgreSQL flexible server.
## Command not found
- If you receive and error that a command **is misspelled or not recognized by the system**. This could mean that CLI version on your client machine may not be up to date. Run ```az upgrade``` to upgrade to latest version. Doing an upgrade of your CLI version can help resolve issues with incompatibilities of a command due to any API changes.
+ If you receive an error that a command **is misspelled or not recognized by the system**, this could mean that the CLI version on your client machine isn't up to date. Run `az upgrade` to upgrade to the latest version. Upgrading your CLI version can help resolve incompatibilities of a command due to any API changes.
## Debug deployment failures
-Currently, Azure CLI doesn't support turning on debug logging, but you can retrieve debug logging following the steps below.
+Currently, Azure CLI doesn't support turning on debug logging, but you can retrieve debug logging by following these steps.
>[!NOTE]
-> - Replace ```examplegroup``` and ```exampledeployment``` with the correct resource group and deployment name for your database server.
+> - Replace `examplegroup` and `exampledeployment` with the correct resource group and deployment name for your database server.
> - You can see the Deployment name in the deployments page in your resource group. See [how to find the deployment name](../../azure-resource-manager/templates/deployment-history.md?tabs=azure-portal)
-1. List the deployments in resource group to identify the PostgreSQL Server deployment.
+1. List the deployments in resource group to identify the Azure Database for PostgreSQL flexible server deployment.
```azurecli az deployment operation group list \
Currently, Azure CLI doesn't support turning on debug logging, but you can retri
--name exampledeployment ```
-2. Get the request content of the PostgreSQL Server deployment
+2. Get the request content of the Azure Database for PostgreSQL flexible server deployment.
```azurecli az deployment operation group list \ --name exampledeployment \
Currently, Azure CLI doesn't support turning on debug logging, but you can retri
--query [].properties.request ```
-3. Examine the response content
+3. Examine the response content.
```azurecli az deployment operation group list \
Currently, Azure CLI doesn't support turning on debug logging, but you can retri
| Error code | Mitigation | | - | - |
-|MissingSubscriptionRegistration|Register your subscription with the resource provider. Run the command ```az provider register --namespace Microsoft.DBPostgreSQL``` to resolve the issue.|
-|InternalServerError| Try to view the activity logs for your server to see if there is more information. Run the command ```az monitor activity-log list --correlation-id <enter correlation-id>```. You can try the same CLI command after a few minutes. If the issues persists, please [report it](https://github.com/Azure/azure-cli/issues) or reach out to Microsoft support.|
-|ResourceNotFound| Resource being reference cannot be found. You can check resource properties, or check if resource is deleted or check if the resource is another subscription. |
-|LocationNotAvailableForResourceType| - Check availability of Azure Database for PostgreSQL Flexible Server in [Azure regions](https://azure.microsoft.com/global-infrastructure/services/?products=postgresql). <br>- Check if Azure Database for PostgreSQL Resource types is registered with your subscription. |
+|MissingSubscriptionRegistration|Register your subscription with the resource provider. Run the command `az provider register --namespace Microsoft.DBforPostgreSQL` to resolve the issue.|
+|InternalServerError| Try to view the activity logs for your server to see if there's more information. Run the command `az monitor activity-log list --correlation-id <enter correlation-id>`. You can try the same CLI command after a few minutes. If the issue persists, [report it](https://github.com/Azure/azure-cli/issues) or reach out to Microsoft support.|
+|ResourceNotFound| The resource being referenced can't be found. Check the resource properties, check whether the resource has been deleted, or check whether the resource is in another subscription. |
+|LocationNotAvailableForResourceType| - Check availability of Azure Database for PostgreSQL flexible server in [Azure regions](https://azure.microsoft.com/global-infrastructure/services/?products=postgresql). <br>- Check if the Azure Database for PostgreSQL flexible server resource type is registered with your subscription. |
|ResourceGroupBeingDeleted| Resource group is being deleted. Wait for deletion to complete.| |PasswordTooLong| The provided password is too long. It must contain between 8 and 128 characters. Your password must contain characters from three of the following categories: English uppercase letters, English lowercase letters, numbers (0 through 9), and non-alphanumeric characters (!, $, #, %, etc.).|
-|PasswordNotComplex| The provided password is not complex enough. It must contain between 8 and 128 characters. Your password must contain characters from three of the following categories: English uppercase letters, English lowercase letters, numbers (0 through 9), and non-alphanumeric characters (!, $, #, %, etc.).|
-|PasswordTooShort| It must contain between 8 and 128 characters. Your password must contain characters from three of the following categories: English uppercase letters, English lowercase letters, numbers (0 through 9), and non-alphanumeric characters (!, $, #, %, etc.).|
-|SubscriptionNotFound| The requested subscription was not found. Run ```az account list all``` to see all your current subscriptions.|
-|InvalidParameterValue| An invalid value was given to a parameter.Check the [CLI reference docs](/cli/azure/postgres/flexible-server) to see what is the correct values supported for the arguments.|
-|InvalidLocation| An invalid location was specified. Check availability of Azure Database for PostgreSQL Flexible Server in [Azure regions](https://azure.microsoft.com/global-infrastructure/services/?products=postgresql) |
-|InvalidServerName|Identified an invalid server name. Check the sever name. Run the command [az mysql flexible-server list](/cli/azure/mysql/flexible-server#az-mysql-flexible-server-list) to see all the list of Flexible servers available.|
+|PasswordNotComplex| The provided password isn't complex enough. It must contain between 8 and 128 characters. Your password must contain characters from three of the following categories: English uppercase letters, English lowercase letters, numbers (0 through 9), and non-alphanumeric characters (!, $, #, %, etc.).|
+|PasswordTooShort| Your password must contain between 8 and 128 characters. It must contain characters from three of the following categories: English uppercase letters, English lowercase letters, numbers (0 through 9), and non-alphanumeric characters (!, $, #, %, etc.).|
+|SubscriptionNotFound| The requested subscription wasn't found. Run `az account list --all` to see all your current subscriptions.|
+|InvalidParameterValue| An invalid value was given to a parameter. Check the [CLI reference docs](/cli/azure/postgres/flexible-server) to see the correct values supported for the arguments.|
+|InvalidLocation| An invalid location was specified. Check availability of Azure Database for PostgreSQL flexible server in [Azure regions](https://azure.microsoft.com/global-infrastructure/services/?products=postgresql). |
+|InvalidServerName|Identified an invalid server name. Check the server name. Run the command [az postgres flexible-server list](/cli/azure/postgres/flexible-server#az-postgres-flexible-server-list) to see the list of Azure Database for PostgreSQL flexible server instances available.|
|InvalidResourceIdSegment| A syntax error was identified in your Azure Resource Manager template. Use a JSON formatter tool to validate the JSON to identify the syntax error.| |InvalidUserName| Enter a valid username. The admin user name can't be azure_superuser, azure_pg_admin, admin, administrator, root, guest, or public. It can't start with pg_.| |BlockedUserName| The admin user name can't be azure_superuser, azure_pg_admin, admin, administrator, root, guest, or public. It can't start with pg_. Avoid using these patterns in the admin name.| ## Next steps -- If you are still experiencing issues, please [report the issue](https://github.com/Azure/azure-cli/issues).
+- If you're still experiencing issues, please [report the issue](https://github.com/Azure/azure-cli/issues).
- If you have questions, visit our Stack Overflow page: https://aka.ms/azcli/questions. -- Let us know how we are doing with this survey https://aka.ms/azureclihats.
+- Let us know how we're doing with this survey https://aka.ms/azureclihats.
postgresql How To Troubleshoot Common Connection Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-troubleshoot-common-connection-issues.md
Title: Troubleshoot connections - Azure Database for PostgreSQL - flexible Server
-description: Learn how to troubleshoot connection issues to Azure Database for PostgreSQL - flexible Server.
+ Title: Troubleshoot connections
+description: Learn how to troubleshoot connection issues to Azure Database for PostgreSQL - Flexible Server.
Last updated 03/23/2023
-# Troubleshoot connection issues to Azure Database for PostgreSQL - flexible Server
+# Troubleshoot connection issues to Azure Database for PostgreSQL - Flexible Server
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
Connection problems may be caused by various things, including:
* Firewall settings * Connection time-out * Incorrect login information
-* Maximum limit reached on some Azure Database for PostgreSQL resources
+* Maximum limit reached on some Azure Database for PostgreSQL flexible server resources
* Issues with the infrastructure of the service * Maintenance being performed in the service * The compute allocation of the server is changed by scaling the number of vCores or moving to a different service tier
-Generally, connection issues to Azure Database for PostgreSQL can be classified as follows:
+Generally, connection issues to Azure Database for PostgreSQL flexible server can be classified as follows:
* Transient errors (short-lived or intermittent) * Persistent or non-transient errors (errors that regularly recur) ## Troubleshoot transient errors
-Transient errors occur when maintenance is performed, the system encounters an error with the hardware or software, or you change the vCores or service tier of your server. The Azure Database for PostgreSQL service has built-in high availability and is designed to mitigate these types of problems automatically. However, your application loses its connection to the server for a short period of time of typically less than 60 seconds at most. Some events can occasionally take longer to mitigate, such as when a large transaction causes a long-running recovery.
+Transient errors occur when maintenance is performed, the system encounters an error with the hardware or software, or you change the vCores or service tier of your server. Azure Database for PostgreSQL flexible server has built-in high availability and is designed to mitigate these types of problems automatically. However, your application loses its connection to the server for a short period of time, typically less than 60 seconds. Some events can occasionally take longer to mitigate, such as when a large transaction causes a long-running recovery.
### Steps to resolve transient connectivity issues 1. Check the [Microsoft Azure Service Dashboard](https://azure.microsoft.com/status) for any known outages that occurred during the time in which the errors were reported by the application.
-2. Applications that connect to a cloud service such as Azure Database for PostgreSQL should expect transient errors and implement retry logic to handle these errors instead of surfacing these as application errors to users. Review [Handling of transient connectivity errors for Azure Database for PostgreSQL](concepts-connectivity.md) for best practices and design guidelines for handling transient errors.
-3. As a server approaches its resource limits, errors can seem to be transient connectivity issue. See [Limitations in Azure Database for PostgreSQL](concepts-limits.md).
+2. Applications that connect to a cloud service such as Azure Database for PostgreSQL flexible server should expect transient errors and implement retry logic to handle these errors instead of surfacing these as application errors to users. Review [Handling of transient connectivity errors - Azure Database for PostgreSQL - Flexible Server](concepts-connectivity.md) for best practices and design guidelines for handling transient errors.
+3. As a server approaches its resource limits, errors can seem to be a transient connectivity issue. See [Limitations - Azure Database for PostgreSQL - Flexible Server](concepts-limits.md).
4. If connectivity problems continue, or if the duration for which your application encounters the error exceeds 60 seconds or if you see multiple occurrences of the error in a given day, file an Azure support request by selecting **Get Support** on the [Azure Support](https://azure.microsoft.com/support/options) site. ## Troubleshoot persistent errors
-If the application persistently fails to connect to Azure Database for PostgreSQL, it usually indicates an issue with one of the following:
+If the application persistently fails to connect to Azure Database for PostgreSQL flexible server, it usually indicates an issue with one of the following:
-* Server firewall configuration: Make sure that the Azure Database for PostgreSQL server firewall is configured to allow connections from your client, including proxy servers and gateways.
-* Client firewall configuration: The firewall on your client must allow connections to your database server. IP addresses and ports of the server that you can't connect to must be allowed and the application names such as PostgreSQL in some firewalls.
-* If you see the error _Server isn't configured to allow ipv6 connections_, note that the Basic tier doesn't support VNet service endpoints. You have to remove the Microsoft.Sql endpoint from the subnet that is attempting to connect to the Basic server.
-* If you see the connection error _sslmode value "***" invalid when SSL support is not compiled in_ error, it means your PostgreSQL client doesn't support SSL. Most probably, the client-side libpq hasn't been compiled with the "--with-openssl" flag. Try connecting with a PostgreSQL client that has SSL support.
+- Server firewall configuration: Make sure that the Azure Database for PostgreSQL flexible server firewall is configured to allow connections from your client, including proxy servers and gateways.
+- Client firewall configuration: The firewall on your client must allow connections to your database server. IP addresses and ports of the server that you can't connect to must be allowed and the application names such as PostgreSQL in some firewalls.
+- If you see the error _Server isn't configured to allow ipv6 connections_, note that the Basic tier doesn't support VNet service endpoints. You have to remove the Microsoft.Sql endpoint from the subnet that is attempting to connect to the Basic server.
+- If you see the connection error _sslmode value "\*\*" invalid when SSL support isn't compiled in_, it means your PostgreSQL client doesn't support SSL. Most probably, the client-side libpq hasn't been compiled with the "--with-openssl" flag. Try connecting with a PostgreSQL client that has SSL support.
### Steps to resolve persistent connectivity issues
1. Set up [firewall rules](concepts-firewall-rules.md) to allow the client IP address (see the CLI sketch after this list). For temporary testing purposes only, set up a firewall rule using 0.0.0.0 as the starting IP address and 255.255.255.255 as the ending IP address. This opens the server to all IP addresses. If this resolves your connectivity issue, remove this rule and create a firewall rule for an appropriately limited IP address or address range.
2. On all firewalls between the client and the internet, make sure that port 5432 is open for outbound connections.
3. Verify your connection string and other connection settings.
-4. Check the service health in the dashboard. If you think thereΓÇÖs a regional outage, see [Overview of business continuity with Azure Database for PostgreSQL](concepts-business-continuity.md) for steps to recover to a new region.
+4. Check the service health in the dashboard. If you think there's a regional outage, see [Overview of business continuity - Azure Database for PostgreSQL - Flexible Server](concepts-business-continuity.md) for steps to recover to a new region.
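For step 1 in the list above, the following is a hedged sketch of creating a narrowly scoped firewall rule from the Azure CLI; the resource group, server name, rule name, and IP address are placeholders rather than values from this article.

```azurecli
# Hedged sketch: allow a single client IP address (all values are placeholders).
az postgres flexible-server firewall-rule create \
  --resource-group myresourcegroup \
  --name myservername \
  --rule-name AllowMyClientIp \
  --start-ip-address 203.0.113.5 \
  --end-ip-address 203.0.113.5
```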
## Next steps
-* [Handling of transient connectivity errors for Azure Database for PostgreSQL](concepts-connectivity.md)
+* [Handling of transient connectivity errors - Azure Database for PostgreSQL - Flexible Server](concepts-connectivity.md)
postgresql How To Troubleshooting Guides https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-troubleshooting-guides.md
Title: Troubleshooting guides - Azure portal - Azure Database for PostgreSQL - Flexible Server
-description: Learn how to use Troubleshooting guides for Azure Database for PostgreSQL - Flexible Server from the Azure portal.
+ Title: Troubleshooting guides - Azure portal
+description: Learn how to use troubleshooting guides for Azure Database for PostgreSQL - Flexible Server from the Azure portal.
Last updated 03/21/2023
-# Use the Troubleshooting guides for Azure Database for PostgreSQL - Flexible Server
+# Use the troubleshooting guides for Azure Database for PostgreSQL - Flexible Server
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-In this article, you'll learn how to use Troubleshooting guides for Azure Database for PostgreSQL from the Azure portal. To learn more about Troubleshooting guides, see the [overview](concepts-troubleshooting-guides.md).
+In this article, you learn how to use troubleshooting guides for Azure Database for PostgreSQL flexible server from the Azure portal. To learn more about troubleshooting guides, see the [overview](concepts-troubleshooting-guides.md).
## Prerequisites To effectively troubleshoot a specific issue, you need to make sure you have all the necessary data in place.
-Each troubleshooting guide requires a specific set of data, which is sourced from three separate features: [Diagnostic settings](howto-configure-and-access-logs.md), [Query Store](concepts-query-store.md), and [Enhanced Metrics](concepts-monitoring.md#enabling-enhanced-metrics).
+Each troubleshooting guide requires a specific set of data, which is sourced from three separate features: [Diagnostic settings](how-to-configure-and-access-logs.md), [Query Store](concepts-query-store.md), and [Enhanced Metrics](concepts-monitoring.md#enabling-enhanced-metrics).
All troubleshooting guides require logs to be sent to the Log Analytics workspace, but the specific category of logs to be captured may vary depending on the particular guide.
-Please follow the steps described in the [Configure and Access Logs in Azure Database for PostgreSQL - Flexible Server](howto-configure-and-access-logs.md) article to configure diagnostic settings and send the logs to the Log Analytics workspace.
-Query Store, and Enhanced Metrics are configured via the Server Parameters. Please follow the steps described in the "Configure server parameters in Azure Database for PostgreSQL - Flexible Server" articles for [Azure portal](howto-configure-server-parameters-using-portal.md) or [Azure CLI](howto-configure-server-parameters-using-cli.md).
+Please follow the steps described in [Configure and Access Logs - Azure Database for PostgreSQL - Flexible Server](howto-configure-and-access-logs.md) to configure diagnostic settings and send the logs to the Log Analytics workspace.
+Query Store and Enhanced Metrics are configured via server parameters. Follow the steps described in the configure server parameters in Azure Database for PostgreSQL flexible server articles for the [Azure portal](howto-configure-server-parameters-using-portal.md) or the [Azure CLI](howto-configure-server-parameters-using-cli.md).
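As a hedged example of the CLI route, the following sketch enables one of the prerequisite settings from the table below; the resource group and server name are placeholders, and you would repeat the command for each parameter your chosen guide requires.

```azurecli
# Hedged sketch: set one prerequisite server parameter (placeholder names).
az postgres flexible-server parameter set \
  --resource-group myresourcegroup \
  --server-name myservername \
  --name pg_qs.query_capture_mode \
  --value TOP
```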
The table below provides information on the required log categories for each troubleshooting guide, as well as the necessary Query Store, Enhanced Metrics and Server Parameters prerequisites. | Troubleshooting guide | Diagnostic settings log categories | Query Store | Enhanced Metrics | Server Parameters | |:-|:--|-|-|-|
-| Autovacuum Blockers | PostgreSQL Sessions, PostgreSQL Database Remaining Transactions | N/A | N/A | N/A |
-| Autovacuum Monitoring | PostgreSQL Server Logs, PostgreSQL Tables Statistics, PostgreSQL Database Remaining Transactions | N/A | N/A | log_autovacuum_min_duration |
-| High CPU Usage | PostgreSQL Server Logs, PostgreSQL Sessions, AllMetrics | pg_qs.query_capture_mode to TOP or ALL | metrics.collector_database_activity | N/A |
-| High IOPS Usage | PostgreSQL Query Store Runtime, PostgreSQL Server Logs, PostgreSQL Sessions, PostgreSQL Query Store Wait Statistics | pgms_wait_sampling.query_capture_mode to ALL | metrics.collector_database_activity | track_io_timing to ON |
-| High Memory Usage | PostgreSQL Server Logs, PostgreSQL Sessions, PostgreSQL Query Store Runtime | pg_qs.query_capture_mode to TOP or ALL | metrics.collector_database_activity | N/A |
-| High Temporary Files | PostgreSQL Sessions, PostgreSQL Query Store Runtime, PostgreSQL Query Store Wait Statistics | pg_qs.query_capture_mode to TOP or ALL | metrics.collector_database_activity | N/A |
+| Autovacuum Blockers | Azure Database for PostgreSQL flexible server Sessions, Azure Database for PostgreSQL flexible server Database Remaining Transactions | N/A | N/A | N/A |
+| Autovacuum Monitoring | Azure Database for PostgreSQL flexible server Logs, PostgreSQL Tables Statistics, Azure Database for PostgreSQL flexible server Database Remaining Transactions | N/A | N/A | log_autovacuum_min_duration |
+| High CPU Usage | Azure Database for PostgreSQL flexible server Logs, Azure Database for PostgreSQL flexible server Sessions, AllMetrics | pg_qs.query_capture_mode to TOP or ALL | metrics.collector_database_activity | N/A |
+| High IOPS Usage | Azure Database for PostgreSQL flexible server Query Store Runtime, Azure Database for PostgreSQL flexible server Logs, Azure Database for PostgreSQL flexible server Sessions, Azure Database for PostgreSQL flexible server Query Store Wait Statistics | pgms_wait_sampling.query_capture_mode to ALL | metrics.collector_database_activity | track_io_timing to ON |
+| High Memory Usage | Azure Database for PostgreSQL flexible server Logs, Azure Database for PostgreSQL flexible server Sessions, Azure Database for PostgreSQL flexible server Query Store Runtime | pg_qs.query_capture_mode to TOP or ALL | metrics.collector_database_activity | N/A |
+| High Temporary Files | Azure Database for PostgreSQL flexible server Sessions, Azure Database for PostgreSQL flexible server Query Store Runtime, Azure Database for PostgreSQL flexible server Query Store Wait Statistics | pg_qs.query_capture_mode to TOP or ALL | metrics.collector_database_activity | N/A |
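As a convenience, the following is a minimal sketch (not part of the original article) for verifying the server parameter prerequisites from any SQL client connected to the server; the parameter names come from the table above, and only the ones required by the guide you plan to use need to match.

```sql
-- Query Store and wait-sampling capture modes (required by several guides).
SHOW pg_qs.query_capture_mode;
SHOW pgms_wait_sampling.query_capture_mode;

-- Enhanced metrics and I/O timing settings.
SHOW metrics.collector_database_activity;
SHOW track_io_timing;

-- Autovacuum logging threshold used by the Autovacuum Monitoring guide.
SHOW log_autovacuum_min_duration;
```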
> [!NOTE]
The table below provides information on the required log categories for each tro
To use troubleshooting guides, follow these steps:
-1. Open the Azure portal and find a Postgres instance that you want to examine.
+1. Open the Azure portal and find an Azure Database for PostgreSQL flexible server instance that you want to examine.
2. From the left-side menu, open Help > Troubleshooting guides.
To use troubleshooting guides, follow these steps:
### Retrieving the Query Text
Due to privacy considerations, certain information such as query text and usernames may not be displayed within the Azure portal.
-To retrieve the query text, you will need to log in to your Azure Database for PostgreSQL - Flexible Server instance.
+To retrieve the query text, you need to log in to your Azure Database for PostgreSQL flexible server instance.
Access the `azure_sys` database, where Query Store data is stored, using the PostgreSQL client of your choice. Once connected, query the `query_store.query_texts_view` view to retrieve the desired query text.
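For illustration only, a minimal sketch of that lookup; because the view's column layout can vary by version, this simply lists a sample of the stored texts rather than assuming specific column names.

```sql
-- Connect to the azure_sys database first, then list a sample of captured query texts.
SELECT *
FROM query_store.query_texts_view
LIMIT 10;
```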
postgresql How To Use Pgvector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-use-pgvector.md
Title: How to enable and use pgvector - Azure Database for PostgreSQL - Flexible Server
-description: How to enable and use pgvector on Azure Database for PostgreSQL - Flexible Server
+ Title: How to enable and use pgvector
+description: How to enable and use pgvector on Azure Database for PostgreSQL - Flexible Server.
Last updated 11/03/2023
## Enable extension
-Before you can enable `pgvector` on your Flexible Server, you need to add it to your allowlist as described in [how to use PostgreSQL extensions](./concepts-extensions.md#how-to-use-postgresql-extensions) and check if correctly added by running `SHOW azure.extensions;`.
+Before you can enable `pgvector` on your Azure Database for PostgreSQL flexible server instance, you need to add it to your allowlist as described in [how to use PostgreSQL extensions](./concepts-extensions.md#how-to-use-postgresql-extensions) and check that it was added correctly by running `SHOW azure.extensions;`.
Then you can install the extension by connecting to your target database and running the [CREATE EXTENSION](https://www.postgresql.org/docs/current/static/sql-createextension.html) command. You need to repeat the command separately for every database you want the extension to be available in.
CREATE EXTENSION vector;
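As a quick smoke test after enabling the extension, the following minimal sketch stores and searches a few embeddings; the table name, dimensionality, and sample vectors are hypothetical and aren't part of the original article.

```sql
-- Hypothetical table with a 3-dimensional embedding column.
CREATE TABLE items (id bigserial PRIMARY KEY, embedding vector(3));

-- Insert a couple of sample embeddings.
INSERT INTO items (embedding) VALUES ('[1,2,3]'), ('[4,5,6]');

-- Return the rows closest to a query vector by L2 distance (the <-> operator).
SELECT id, embedding
FROM items
ORDER BY embedding <-> '[2,3,4]'
LIMIT 5;
```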
Learn more about performance, indexing, and limitations when using `pgvector`.

> [!div class="nextstepaction"]
-> [Optimize performance using pgvector](howto-optimize-performance-pgvector.md)
+> [Optimize performance using pgvector](how-to-optimize-performance-pgvector.md)
> [!div class="nextstepaction"]
-> [Generate vector embeddings with Azure OpenAI on Azure Database for PostgreSQL Flexible Server](./generative-ai-azure-openai.md)
+> [Generate vector embeddings with Azure OpenAI - Azure Database for PostgreSQL - Flexible Server](./generative-ai-azure-openai.md)
postgresql Overview Postgres Choose Server Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/overview-postgres-choose-server-options.md
Title: Choose the right PostgreSQL server option in Azure
-description: Provides guidelines for choosing the right PostgreSQL server option for your deployments.
+ Title: Choose hosting type
+description: Provides guidelines for choosing the right Azure Database for PostgreSQL - Flexible Server hosting option for your deployments.
Last updated 03/27/2023
-# Choose the right PostgreSQL server option in Azure
+# Choose the right Azure Database for PostgreSQL - Flexible Server hosting option in Azure
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
[!INCLUDE [azure-database-for-postgresql-single-server-deprecation](../includes/azure-database-for-postgresql-single-server-deprecation.md)]
-With Azure, your PostgreSQL Server workloads can run in a hosted virtual machine infrastructure as a service (IaaS) or as a hosted platform as a service (PaaS). PaaS has multiple deployment options, each with multiple service tiers. When you choose between IaaS and PaaS, you must decide if you want to manage your database, apply patches, and make backups, or if you want to delegate these operations to Azure.
+With Azure, your PostgreSQL workloads can run in a hosted virtual machine infrastructure as a service (IaaS) or as a hosted platform as a service (PaaS). PaaS has multiple deployment options, each with multiple service tiers. When you choose between IaaS and PaaS, you must decide if you want to manage your database, apply patches, and make backups, or if you want to delegate these operations to Azure.
When making your decision, consider the following option in PaaS or alternatively running on Azure VMs (IaaS):
-- [Azure Database for PostgreSQL Flexible Server](../flexible-server/overview.md)
+- [Azure Database for PostgreSQL - Flexible Server](../flexible-server/overview.md)
-**PostgreSQL on Azure VMs** option falls into the industry category of IaaS. With this service, you can run PostgreSQL Server inside a fully managed virtual machine on the Azure cloud platform. All recent versions and editions of PostgreSQL can be installed on an IaaS virtual machine. In the most significant difference from Azure Database for PostgreSQL, PostgreSQL on Azure VMs offers control over the database engine. However, this control comes at the cost of responsibility to manage the VMs and many database administration (DBA) tasks. These tasks include maintaining and patching database servers, database recovery, and high-availability design.
+The **PostgreSQL on Azure VMs** option falls into the industry category of IaaS. With this service, you can run a PostgreSQL server inside a fully managed virtual machine on the Azure cloud platform. All recent versions and editions of PostgreSQL can be installed on an IaaS virtual machine. In the most significant difference from Azure Database for PostgreSQL flexible server, PostgreSQL on Azure VMs offers control over the database engine. However, this control comes at the cost of responsibility to manage the VMs and many database administration (DBA) tasks. These tasks include maintaining and patching database servers, database recovery, and high-availability design.
The main differences between these options are listed in the following table:
-| **Attribute** | **Postgres on Azure VMs** | **PostgreSQL as PaaS** |
+| **Attribute** | **Postgres on Azure VMs** | **Azure Database for PostgreSQL flexible server as PaaS** |
||--|--|
-| **Availability SLA** | - [Virtual Machine SLA](https://azure.microsoft.com/support/legal/sla/virtual-machines) | - [Flexible Server](https://azure.microsoft.com/support/legal/sla/postgresql) |
-| **OS and PostgreSQL patching** | - Customer managed | - Flexible Server ΓÇô Automatic with optional customer managed window |
-| **High availability** | - Customers architect, implement, test, and maintain high availability. Capabilities might include clustering, replication etc. | - Flexible Server: built-in |
-| **Zone Redundancy** | - Azure VMs can be set up to run in different availability zones. For an on-premises solution, customers must create, manage, and maintain their own secondary data center. | - Flexible Server: Yes |
-| **Hybrid Scenario** | - Customer managed | - Flexible Server: supported |
-| **Backup and Restore** | - Customer Managed | - Flexible Server: built-in with user configuration on zone-redundant storage |
-| **Monitoring Database Operations** | - Customer Managed | - Flexible Server: All offer customers the ability to set alerts on the database operation and act upon reaching thresholds |
-| **Advanced Threat Protection** | - Customers must build this protection for themselves. | - Flexible Server: Not available during Preview |
-| **Disaster Recovery** | - Customer Managed | - Flexible Server: supported |
-| **Intelligent Performance** | - Customer Managed | - Flexible Server: supported |
+| **Availability SLA** | - [Virtual Machine SLA](https://azure.microsoft.com/support/legal/sla/virtual-machines) | - [Azure Database for PostgreSQL flexible server](https://azure.microsoft.com/support/legal/sla/postgresql) |
+| **OS and PostgreSQL patching** | - Customer managed | Automatic with optional customer managed window |
+| **High availability** | - Customers architect, implement, test, and maintain high availability. Capabilities might include clustering, replication etc. | Built-in |
+| **Zone Redundancy** | - Azure VMs can be set up to run in different availability zones. For an on-premises solution, customers must create, manage, and maintain their own secondary data center. | Yes |
+| **Hybrid Scenario** | - Customer managed | Supported |
+| **Backup and Restore** | - Customer Managed | Built-in with user configuration on zone-redundant storage |
+| **Monitoring Database Operations** | - Customer Managed | All offer customers the ability to set alerts on the database operation and act upon reaching thresholds |
+| **Advanced Threat Protection** | - Customers must build this protection for themselves. | Not available during Preview |
+| **Disaster Recovery** | - Customer Managed | Supported |
+| **Intelligent Performance** | - Customer Managed | Supported |
## Total cost of ownership (TCO)
-TCO is often the primary consideration that determines the best solution for hosting your databases. This is true whether you're a startup with little cash or a team in an established company that operates under tight budget constraints. This section describes billing and licensing basics in Azure as they apply to Azure Database for PostgreSQL and PostgreSQL on Azure VMs.
+TCO is often the primary consideration that determines the best solution for hosting your databases. This is true whether you're a startup with little cash or a team in an established company that operates under tight budget constraints. This section describes billing and licensing basics in Azure as they apply to Azure Database for PostgreSQL flexible server and PostgreSQL on Azure VMs.
## Billing
-Azure Database for PostgreSQL is currently available as a service in several tiers with different prices for resources. All resources are billed hourly at a fixed rate. For the latest information on the currently supported service tiers, compute sizes, and storage amounts, see [pricing page](https://azure.microsoft.com/pricing/details/postgresql/server/) You can dynamically adjust service tiers and compute sizes to match your application's varied throughput needs. You're billed for outgoing Internet traffic at regular [data transfer rates](https://azure.microsoft.com/pricing/details/data-transfers/).
+Azure Database for PostgreSQL flexible server is currently available as a service in several tiers with different prices for resources. All resources are billed hourly at a fixed rate. For the latest information on the currently supported service tiers, compute sizes, and storage amounts, see the [pricing page](https://azure.microsoft.com/pricing/details/postgresql/server/). You can dynamically adjust service tiers and compute sizes to match your application's varied throughput needs. You're billed for outgoing Internet traffic at regular [data transfer rates](https://azure.microsoft.com/pricing/details/data-transfers/).
-With Azure Database for PostgreSQL, Microsoft automatically configures, patches, and upgrades the database software. These automated actions reduce your administration costs. Also, Azure Database for PostgreSQL has [automated backup-link]() capabilities. These capabilities help you achieve significant cost savings, especially when you have a large number of databases. In contrast, with PostgreSQL on Azure VMs you can choose and run any PostgreSQL version. However, you need to pay for the provisioned VM, storage cost associated with the data, backup, monitoring data and log storage and the costs for the specific PostgreSQL license type used (if any).
+With Azure Database for PostgreSQL flexible server, Microsoft automatically configures, patches, and upgrades the database software. These automated actions reduce your administration costs. Also, Azure Database for PostgreSQL flexible server has [automated backup-link]() capabilities. These capabilities help you achieve significant cost savings, especially when you have a large number of databases. In contrast, with PostgreSQL on Azure VMs you can choose and run any PostgreSQL version. However, you need to pay for the provisioned VM, storage cost associated with the data, backup, monitoring data and log storage and the costs for the specific PostgreSQL license type used (if any).
-Azure Database for PostgreSQL Flexible Server provides built-in high availability at the zonal-level (within an AZ) for any kind of node-level interruption while still maintaining the [SLA guarantee](https://azure.microsoft.com/support/legal/sla/postgresql/v1_2/) for the service. Flexible Server provides [uptime SLAs](https://azure.microsoft.com/support/legal/sla/postgresql/v1_2/) with and without zone-redundant configuration. However, for database high availability within VMs, you use the high availability options like [Streaming Replication](https://www.postgresql.org/docs/12/warm-standby.html#STREAMING-REPLICATION) that are available on a PostgreSQL database. Using a supported high availability option doesn't provide an additional SLA. But it does let you achieve greater than 99.99% database availability at additional cost and administrative overhead.
+Azure Database for PostgreSQL flexible server provides built-in high availability at the zonal-level (within an AZ) for any kind of node-level interruption while still maintaining the [SLA guarantee](https://azure.microsoft.com/support/legal/sla/postgresql/v1_2/) for the service. Azure Database for PostgreSQL flexible server provides [uptime SLAs](https://azure.microsoft.com/support/legal/sla/postgresql/v1_2/) with and without zone-redundant configuration. However, for database high availability within VMs, you use the high availability options like [Streaming Replication](https://www.postgresql.org/docs/12/warm-standby.html#STREAMING-REPLICATION) that are available on a PostgreSQL database. Using a supported high availability option doesn't provide another SLA. But it does let you achieve greater than 99.99% database availability at more cost and administrative overhead.
For more information on pricing, see the following articles:
-- [Azure Database for PostgreSQL pricing](https://azure.microsoft.com/pricing/details/postgresql/server/)
+- [Azure Database for PostgreSQL flexible server pricing](https://azure.microsoft.com/pricing/details/postgresql/server/)
- [Virtual machine pricing](https://azure.microsoft.com/pricing/details/virtual-machines/)
- [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/)
With PaaS, Microsoft:
- Encrypts the data at rest and in motion by default.
- Monitors your server and provides features for query performance insights and performance recommendations.
-With Azure Database for PostgreSQL, you can continue to administer your database. But you no longer need to manage the database engine, the operating system, or the hardware. Examples of items you can continue to administer include:
+With Azure Database for PostgreSQL flexible server, you can continue to administer your database. But you no longer need to manage the database engine, the operating system, or the hardware. Examples of items you can continue to administer include:
- Databases
- Sign-in
With Azure Database for PostgreSQL, you can continue to administer your database
Additionally, configuring high availability to another data center requires minimal to no configuration or administration.
-- With PostgreSQL on Azure VMs, you have full control over the operating system and the PostgreSQL server instance configuration. With a VM, you decide when to update or upgrade the operating system and database software and what patches to apply. You also decide when to install any additional software such as an antivirus application. Some automated features are provided to greatly simplify patching, backup, and high availability. You can control the size of the VM, the number of disks, and their storage configurations. For more information, see [Virtual machine and cloud service sizes for Azure](../../virtual-machines/sizes.md).
+- With PostgreSQL on Azure VMs, you have full control over the operating system and the PostgreSQL server instance configuration. With a VM, you decide when to update or upgrade the operating system and database software and what patches to apply. You also decide when to install any other software such as an antivirus application. Some automated features are provided to greatly simplify patching, backup, and high availability. You can control the size of the VM, the number of disks, and their storage configurations. For more information, see [Virtual machine and cloud service sizes for Azure](../../virtual-machines/sizes.md).
-## Time to move to Azure PostgreSQL Service (PaaS)
+## Time to move to Azure Database for PostgreSQL flexible server (PaaS)
-- Azure Database for PostgreSQL is the right solution for cloud-designed applications when developer productivity and fast time to market for new solutions are critical. With programmatic functionality that is like DBA, the service is suitable for cloud architects and developers because it lowers the need for managing the underlying operating system and database.
+- Azure Database for PostgreSQL flexible server is the right solution for cloud-designed applications when developer productivity and fast time to market for new solutions are critical. With DBA-like programmatic functionality, the service is suitable for cloud architects and developers because it lowers the need for managing the underlying operating system and database.
- When you want to avoid the time and expense of acquiring new on-premises hardware, PostgreSQL on Azure VMs is the right solution for applications that require granular control and customization of the PostgreSQL engine not supported by the service, or that require access to the underlying OS.

## Next steps

-- See Azure Database for [PostgreSQL pricing](https://azure.microsoft.com/pricing/details/postgresql/server/).
+- See [Azure Database for PostgreSQL flexible server pricing](https://azure.microsoft.com/pricing/details/postgresql/server/).
- Get started by creating your first server.
postgresql Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/overview.md
Title: Azure Database for PostgreSQL - Flexible Server
+ Title: Overview
description: Provides an overview of Azure Database for PostgreSQL - Flexible Server.
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-[Azure Database for PostgreSQL](../overview.md) powered by the PostgreSQL community edition is available in two deployment modes:
+[Azure Database for PostgreSQL](../single-server/overview.md) powered by the PostgreSQL community edition is available in two deployment modes:
-- [Flexible Server](./overview.md)
-- [Single Server](../overview-single-server.md)
+- [Azure Database for PostgreSQL Flexible Server](overview.md)
+- [Azure Database for PostgreSQL Single Server](../single-server/overview-single-server.md)
-This article provides an overview and introduction to the core concepts of flexible server deployment model.
-Whether you're just starting out or looking to refresh your knowledge, this introductory video offers a comprehensive overview of Azure Database for PostgreSQL - Flexible Server, helping you get acquainted with its key features and capabilities.
+This article provides an overview and introduction to the core concepts of the Azure Database for PostgreSQL flexible server deployment model.
+Whether you're just starting out or looking to refresh your knowledge, this introductory video offers a comprehensive overview of Azure Database for PostgreSQL flexible server, helping you get acquainted with its key features and capabilities.
>[!Video https://www.youtube.com/embed/NSEmJfUgNzE?si=8Ku9Z53PP455dICZ&amp;start=121]

## Overview
-Azure Database for PostgreSQL - Flexible Server is a fully managed database service designed to provide more granular control and flexibility over database management functions and configuration settings. The service generally provides more flexibility and server configuration customizations based on user requirements. The flexible server architecture allows users to collocate the database engine with the client tier for lower latency and choose high availability within a single availability zone and across multiple availability zones. Flexible servers also provide better cost optimization controls with the ability to stop/start your server and a burstable compute tier ideal for workloads that don't need full compute capacity continuously. The service supports the community version of [PostgreSQL 11, 12, 13, 14, 15 and 16](./concepts-supported-versions.md). The service is available in various [Azure regions](https://azure.microsoft.com/global-infrastructure/services/).
+Azure Database for PostgreSQL flexible server is a fully managed database service designed to provide more granular control and flexibility over database management functions and configuration settings. The service generally provides more flexibility and server configuration customizations based on user requirements. The flexible server architecture allows users to collocate the database engine with the client tier for lower latency and choose high availability within a single availability zone and across multiple availability zones. Azure Database for PostgreSQL flexible server instances also provide better cost optimization controls with the ability to stop/start your server and a burstable compute tier ideal for workloads that don't need full compute capacity continuously. The service supports the community version of [PostgreSQL 11, 12, 13, 14, 15 and 16](./concepts-supported-versions.md). The service is available in various [Azure regions](https://azure.microsoft.com/global-infrastructure/services/).
-Flexible servers are best suited for:
+Azure Database for PostgreSQL flexible server instances are best suited for:
- Application development requiring better control and customizations.
- Zone redundant high availability.
Flexible servers are best suited for:
## Architecture and high availability
-The flexible server deployment model is designed to support high availability within a single availability zone and across multiple availability zones. The architecture separates compute and storage. The database engine runs on a container inside a Linux virtual machine, while data files reside on Azure storage. The storage maintains three locally redundant synchronous copies of the database files ensuring data durability.
+The Azure Database for PostgreSQL flexible server deployment model is designed to support high availability within a single availability zone and across multiple availability zones. The architecture separates compute and storage. The database engine runs on a container inside a Linux virtual machine, while data files reside on Azure storage. The storage maintains three locally redundant synchronous copies of the database files ensuring data durability.
If zone redundant high availability is configured, the service provisions and maintains a warm standby server across the availability zone within the same Azure region. The data changes on the source server are synchronously replicated to the standby server to ensure zero data loss. With zone redundant high availability, once the planned or unplanned failover event is triggered, the standby server comes online immediately and is available to process incoming transactions. This gives the service resiliency against an availability zone failure within an Azure region that supports multiple availability zones, as shown in the picture below.

:::image type="content" source="./media/business-continuity/concepts-zone-redundant-high-availability-architecture.png" alt-text="Diagram of Zone redundant high availability." lightbox="./media/business-continuity/concepts-zone-redundant-high-availability-architecture.png":::
-See [High availability document](./concepts-high-availability.md) for more details.
+See [High availability](./concepts-high-availability.md) for more details.
## Automated patching with a managed maintenance window
The service performs automated patching of the underlying hardware, OS, and data
## Automatic backups
-The flexible server service automatically creates server backups and stores them on the region's zone redundant storage (ZRS). Backups can restore your server to any point within the backup retention period. The default backup retention period is seven days. The retention can be optionally configured for up to 35 days. All backups are encrypted using AES 256-bit encryption. See [Backups](./concepts-backup-restore.md) for more details.
+Azure Database for PostgreSQL flexible server automatically creates server backups and stores them on the region's zone redundant storage (ZRS). Backups can restore your server to any point within the backup retention period. The default backup retention period is seven days. The retention can be optionally configured for up to 35 days. All backups are encrypted using AES 256-bit encryption. See [Backups](./concepts-backup-restore.md) for more details.
## Adjust performance and scale within seconds
-The flexible server service is available in three compute tiers: Burstable, General Purpose, and Memory Optimized. The Burstable tier is best suited for low-cost development and low concurrency workloads without continuous compute capacity. The General Purpose and Memory Optimized are better suited for production workloads requiring high concurrency, scale, and predictable performance. You can build your first application on a small database for a few dollars a month and then seamlessly adjust the scale to meet the needs of your solution.
+Azure Database for PostgreSQL flexible server is available in three compute tiers: Burstable, General Purpose, and Memory Optimized. The Burstable tier is best suited for low-cost development and low concurrency workloads without continuous compute capacity. The General Purpose and Memory Optimized tiers are better suited for production workloads requiring high concurrency, scale, and predictable performance. You can build your first application on a small database for a few dollars a month and then seamlessly adjust the scale to meet the needs of your solution.
## Stop/Start server to lower TCO
-The flexible server service allows you to stop and start the server on-demand to lower your TCO. The compute tier billing is stopped immediately when the server is stopped. This can allow significant cost savings during development, testing, and time-bound predictable production workloads. The server remains stopped for seven days unless restarted sooner.
+Azure Database for PostgreSQL flexible server allows you to stop and start the server on-demand to lower your TCO. The compute tier billing is stopped immediately when the server is stopped. This can allow significant cost savings during development, testing, and time-bound predictable production workloads. The server remains stopped for seven days unless restarted sooner.
## Enterprise-grade security
-The flexible server service uses the FIPS 140-2 validated cryptographic module for storage encryption of data at rest. Data are encrypted, including backups and temporary files created while running queries. The service uses the AES 256-bit cipher included in Azure storage encryption, and the keys can be system-managed (default). The service encrypts data in motion with transport layer security (SSL/TLS) enforced by default. The service enforces and supports TLS version 1.2 only.
+Azure Database for PostgreSQL flexible server uses the FIPS 140-2 validated cryptographic module for storage encryption of data at rest. Data are encrypted, including backups and temporary files created while running queries. The service uses the AES 256-bit cipher included in Azure storage encryption, and the keys can be system-managed (default). Azure Database for PostgreSQL flexible server encrypts data in motion with transport layer security (SSL/TLS) enforced by default. The service enforces and supports TLS version 1.2 only.
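As an aside (not part of the original article), one way to confirm from a client session that your connection is indeed encrypted with TLS 1.2 is to query the standard `pg_stat_ssl` view for your own backend:

```sql
-- Shows whether the current connection uses SSL/TLS, plus the negotiated protocol version and cipher.
SELECT ssl, version, cipher
FROM pg_stat_ssl
WHERE pid = pg_backend_pid();
```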
-Flexible servers allow full private access to the servers using Azure virtual network (VNet integration). Servers in the Azure virtual network can only be reached and connected through private IP addresses. With VNet integration, public access is denied, and servers can't be reached using public endpoints.
+Azure Database for PostgreSQL flexible server instances allow full private access to the servers using Azure virtual network (VNet integration). Servers in the Azure virtual network can only be reached and connected through private IP addresses. With VNet integration, public access is denied, and servers can't be reached using public endpoints.
## Monitoring and alerting
-The flexible server service is equipped with built-in performance monitoring and alerting features. All Azure metrics have a one-minute frequency, each providing 30 days of history. You can configure alerts on the metrics. The service exposes host server metrics to monitor resource utilization and allows configuring slow query logs. Using these tools, you can quickly optimize your workloads and configure your server for the best performance.
+Azure Database for PostgreSQL flexible server is equipped with built-in performance monitoring and alerting features. All Azure metrics have a one-minute frequency, each providing 30 days of history. You can configure alerts on the metrics. The service exposes host server metrics to monitor resource utilization and allows configuring slow query logs. Using these tools, you can quickly optimize your workloads and configure your server for the best performance.
## Built-in PgBouncer
-The flexible server has a [built-in PgBouncer](concepts-pgbouncer.md), a connection pooler. You can enable it and connect your applications to your database server via PgBouncer using the same hostname and port 6432.
+An Azure Database for PostgreSQL flexible server instance has a [built-in PgBouncer](concepts-pgbouncer.md), a connection pooler. You can enable it and connect your applications to your Azure Database for PostgreSQL flexible server instance via PgBouncer using the same hostname and port 6432.
## Azure regions
-One advantage of running your workload in Azure is global reach. The flexible server is currently available in the following Azure regions:
+One advantage of running your workload in Azure is global reach. Azure Database for PostgreSQL flexible server is currently available in the following Azure regions:
-| Region | V3/V4/V5 compute availability | Zone-Redundant HA | Same-Zone HA | Geo-Redundant backup |
+| Region | Intel V3/V4/V5/AMD Compute | Zone-Redundant HA | Same-Zone HA | Geo-Redundant backup |
| | | | | |
+| Australia Central | :heavy_check_mark: (v3/v4 only) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Australia East | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| Australia Southeast | :heavy_check_mark: | :x: | :heavy_check_mark: | :heavy_check_mark: |
+| Australia Southeast | (v3/v4/v5 only) | :x: | :heavy_check_mark: | :heavy_check_mark: |
| Brazil South | :heavy_check_mark: (v3 only) | :x: $ | :heavy_check_mark: | :x: |
| Canada Central | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Canada East | :heavy_check_mark: | :x: | :heavy_check_mark: | :heavy_check_mark: |
| Central India | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Central US | :heavy_check_mark: (v3/v4 only) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| China East 3 | :heavy_check_mark: (v3/v4 only) | :x: | :heavy_check_mark: | :heavy_check_mark: |
-| China North 3 | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| East Asia | :heavy_check_mark: | :heavy_check_mark: ** | :heavy_check_mark: | :heavy_check_mark: |
-| East US | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
+| China North 3 | :heavy_check_mark: (v3/v4/v5 only) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
+| East Asia | :heavy_check_mark: (v3/v4/v5 only) | :heavy_check_mark: ** | :heavy_check_mark: | :heavy_check_mark: |
+| East US | :heavy_check_mark: (v3/v4/v5 only) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| East US 2 | :heavy_check_mark: (v3/v4 only) | :x: $ | :heavy_check_mark: | :heavy_check_mark: |
-| France Central | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
+| France Central | :heavy_check_mark: (v3/v4/v5 only) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| France South | :heavy_check_mark: (v3/v4 only) | :x: | :heavy_check_mark: | :heavy_check_mark: |
-| Germany West Central | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
+| Germany West Central | :heavy_check_mark: (v3/v4/v5 only) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Israel Central | :heavy_check_mark: (v3/v4 only) | :heavy_check_mark: | :heavy_check_mark: | :x: |
| Italy North | :heavy_check_mark: (v3/v4 only) | :heavy_check_mark: | :heavy_check_mark: | :x: |
| Japan East | :heavy_check_mark: (v3/v4 only) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Japan West | :heavy_check_mark: (v3/v4 only) | :x: | :heavy_check_mark: | :heavy_check_mark: |
| Jio India West | :heavy_check_mark: (v3 only) | :x: | :heavy_check_mark: | :x: |
-| Korea Central | :heavy_check_mark: | :heavy_check_mark: ** | :heavy_check_mark: | :heavy_check_mark: |
-| Korea South | :heavy_check_mark: | :x: | :heavy_check_mark: | :heavy_check_mark: |
+| Korea Central | :heavy_check_mark: (v3/v4/v5 only) | :heavy_check_mark: ** | :heavy_check_mark: | :heavy_check_mark: |
+| Korea South | :heavy_check_mark: (v3/v4/v5 only) | :x: | :heavy_check_mark: | :heavy_check_mark: |
| Poland Central | :heavy_check_mark: (v3/v4 only) | :heavy_check_mark: | :heavy_check_mark: | :x: |
| North Central US | :heavy_check_mark: | :x: | :heavy_check_mark: | :heavy_check_mark: |
| North Europe | :heavy_check_mark: (v3/v4 only) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Norway East | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :x: |
| Norway West | :heavy_check_mark: (v3/v4 only) | :x: | :heavy_check_mark: | :x: |
| Qatar Central | :heavy_check_mark: (v3/v4 only) | :heavy_check_mark: | :heavy_check_mark: | :x: |
-| South Africa North | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
+| South Africa North | :heavy_check_mark: (v3/v4/v5 only) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| South Central US | :heavy_check_mark: (v3/v4 only) | :x: $ | :heavy_check_mark: | :heavy_check_mark: |
-| South India | :heavy_check_mark: | :x: | :heavy_check_mark: | :heavy_check_mark: |
+| South India | :heavy_check_mark: (v3/v4/v5 only) | :x: | :heavy_check_mark: | :heavy_check_mark: |
| Southeast Asia | :heavy_check_mark: (v3/v4 only) | :x: $ | :heavy_check_mark: | :heavy_check_mark: |
-| Sweden Central | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| Switzerland North | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
+| Sweden Central | :heavy_check_mark: (v3/v4/v5 only) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
+| Switzerland North | :heavy_check_mark: (v3/v4/v5 only) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Switzerland West | :heavy_check_mark: (v3/v4 only) | :x: | :heavy_check_mark: | :heavy_check_mark: |
| UAE Central* | :heavy_check_mark: (v3/v4 only) | :x: | :heavy_check_mark: | :x: |
-| UAE North | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :x: |
+| UAE North | :heavy_check_mark: (v3/v4/v5 only) | :heavy_check_mark: | :heavy_check_mark: | :x: |
| US Gov Arizona | :heavy_check_mark: (v3/v4 only) | :x: | :heavy_check_mark: | :x: |
| US Gov Texas | :heavy_check_mark: (v3/v4 only) | :x: | :heavy_check_mark: | :x: |
| US Gov Virginia | :heavy_check_mark: (v3/v4 only) | :heavy_check_mark: | :heavy_check_mark: | :x: |
$$ New server deployments are temporarily blocked in these regions. Already prov
## Migration
-The service runs the community version of PostgreSQL. This allows full application compatibility and requires a minimal refactoring cost to migrate an existing application developed on the PostgreSQL engine to Flexible Server.
+Azure Database for PostgreSQL flexible server runs the community version of PostgreSQL. This allows full application compatibility and requires a minimal refactoring cost to migrate an existing application developed on the PostgreSQL engine to Azure Database for PostgreSQL flexible server.
-- **Single Server to Flexible Server Migration tool (Preview)** - [This tool](../migrate/concepts-single-to-flexible.md) provides an easier migration capability from Single server to Flexible Server.
+- **Azure Database for PostgreSQL single server to Azure Database for PostgreSQL flexible server Migration tool (Preview)** - [This tool](../migrate/concepts-single-to-flexible.md) provides an easier migration capability from Azure Database for PostgreSQL single server to Azure Database for PostgreSQL flexible server.
- **Dump and Restore** – For offline migrations, where users can afford some downtime, dump and restore using community tools like pg_dump and pg_restore can provide the fastest way to migrate. See [Migrate using dump and restore](../howto-migrate-using-dump-and-restore.md) for details.
-- **Azure Database Migration Service** – For seamless and simplified migrations to flexible servers with minimal downtime, Azure Database Migration Service can be used. See [DMS via portal](../../dms/tutorial-postgresql-azure-postgresql-online-portal.md) and [DMS via CLI](../../dms/tutorial-postgresql-azure-postgresql-online.md). You can migrate from your Azure Database for PostgreSQL - Single Server to Flexible Server. See this [DMS article](../../dms/tutorial-azure-postgresql-to-azure-postgresql-online-portal.md) for details.
+- **Azure Database Migration Service** – For seamless and simplified migrations to Azure Database for PostgreSQL flexible server with minimal downtime, Azure Database Migration Service can be used. See [DMS via portal](../../dms/tutorial-postgresql-azure-postgresql-online-portal.md) and [DMS via CLI](../../dms/tutorial-postgresql-azure-postgresql-online.md). You can migrate from your Azure Database for PostgreSQL single server instance to Azure Database for PostgreSQL flexible server. See this [DMS article](../../dms/tutorial-azure-postgresql-to-azure-postgresql-online-portal.md) for details.
## Frequently asked questions
-### Will Flexible Server replace Single Server?
+### Will Azure Database for PostgreSQL flexible server replace Azure Database for PostgreSQL single server?
-We continue to support Single Server and encourage you to adopt Flexible Server with richer capabilities such as zone resilient HA, predictable performance, maximum control, custom maintenance window, cost optimization controls, and simplified developer experience suitable for your enterprise workloads. If we decide to retire any service, feature, API or SKU, you receive advance notice, including a migration or transition path. Learn more about Microsoft Lifecycle policies [here](/lifecycle/faq/general-lifecycle).
+We continue to support Azure Database for PostgreSQL single server and encourage you to adopt Azure Database for PostgreSQL flexible server with richer capabilities such as zone resilient HA, predictable performance, maximum control, custom maintenance window, cost optimization controls, and simplified developer experience suitable for your enterprise workloads. If we decide to retire any service, feature, API or SKU, you receive advance notice, including a migration or transition path. Learn more about Microsoft Lifecycle policies [here](/lifecycle/faq/general-lifecycle).
### What is Microsoft's policy to address PostgreSQL engine defects?
Refer to Microsoft's current policy [here](../../postgresql/flexible-server/con
## Contacts
-For any questions or suggestions, you might have on Azure Database for PostgreSQL flexible server, send an email to the Azure Database for PostgreSQL Team ([@Ask Azure DB for PostgreSQL](mailto:AskAzureDBforPostgreSQL@service.microsoft.com)).
+For any questions or suggestions you might have on Azure Database for PostgreSQL flexible server, send an email to the Azure Database for PostgreSQL flexible server team ([@Ask Azure DB for PostgreSQL](mailto:AskAzureDBforPostgreSQL@service.microsoft.com)).
> [!NOTE]
> This email address isn't a technical support alias.
postgresql Quickstart Create Connect Server Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/quickstart-create-connect-server-vnet.md
Title: 'Connect to Azure Database for PostgreSQL flexible server with private access in the Azure portal'
-description: This article shows how to create and connect to Azure Database for PostgreSQL flexible server with private access or virtual network using Azure portal.
+ Title: Connect with private access in the Azure portal
+description: This article shows how to create and connect to Azure Database for PostgreSQL - Flexible Server with private access or virtual network using the Azure portal.
Last updated 11/30/2021
-# Connect Azure Database for PostgreSQL Flexible Server with the private access connectivity method
+# Connect Azure Database for PostgreSQL - Flexible Server with the private access connectivity method
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-Azure Database for PostgreSQL Flexible Server is a managed service that you can use to run, manage, and scale highly available PostgreSQL servers in the cloud. This quickstart shows you how to create a flexible server in a virtual network by using the Azure portal.
+Azure Database for PostgreSQL flexible server is a managed service that you can use to run, manage, and scale highly available PostgreSQL servers in the cloud. This quickstart shows you how to create an Azure Database for PostgreSQL flexible server instance in a virtual network by using the Azure portal.
Sign in to the [Azure portal](https://portal.azure.com). Enter your credentials
## Create an Azure Database for PostgreSQL flexible server
-You create a flexible server with a defined set of [compute and storage resources](./concepts-compute-storage.md). You create the server within an [Azure resource group](../../azure-resource-manager/management/overview.md).
+You create an Azure Database for PostgreSQL flexible server instance with a defined set of [compute and storage resources](./concepts-compute-storage.md). You create the server within an [Azure resource group](../../azure-resource-manager/management/overview.md).
-Complete these steps to create a flexible server:
+Complete these steps to create an Azure Database for PostgreSQL flexible server instance:
1. Search for and select **Azure Database for PostgreSQL servers** in the portal:
Complete these steps to create a flexible server:
2. Select **Add**.
-3. On the **Select Azure Database for PostgreSQL deployment option** page, select **Flexible server** as the deployment option:
+<!-- This no longer happens. 3. On the **Select Azure Database for PostgreSQL deployment option** page, select **Flexible server** as the deployment option:
:::image type="content" source="./media/quickstart-create-connect-server-vnet/deployment-option.png" alt-text="Screenshot that shows the Flexible server option." lightbox="./media/quickstart-create-connect-server-vnet/deployment-option.png":::
+-->
+4. On the **Basics** tab, enter the **subscription**, **resource group**, **region**, and **server name**. With the default values, this will provision an Azure Database for PostgreSQL flexible server instance of version 12 with General purpose pricing tier using 2 vCores, 8 GiB RAM, and 28 GiB storage. The backup retention is **seven** days. You can use **Development** workload to default to a lower-cost pricing tier.
-4. On the **Basics** tab, enter the **subscription**, **resource group**, **region**, and **server name**. With the default values, this will provision a PostgreSQL server of version 12 with General purpose pricing tier using 2 vCores, 8 GiB RAM, and 28 GiB storage. The backup retention is **seven** days. You can use **Development** workload to default to a lower-cost pricing tier.
-
- :::image type="content" source="./media/quickstart-create-connect-server-vnet/postgres-create-basics.png" alt-text="Screenshot that shows the Basics tab of the postgres flexible server page." lightbox="./media/quickstart-create-connect-server-vnet/postgres-create-basics.png":::
+ :::image type="content" source="./media/quickstart-create-connect-server-vnet/postgres-create-basics.png" alt-text="Screenshot that shows the Basics tab of the Azure Database for PostgreSQL flexible server page." lightbox="./media/quickstart-create-connect-server-vnet/postgres-create-basics.png":::
5. In the **Basics** tab, enter a unique **admin username** and **admin password**.
Complete these steps to create a flexible server:
:::image type="content" source="./media/quickstart-create-connect-server-vnet/create-new-vnet-for-postgres-server.png" alt-text="Screenshot that shows the Networking tab with new VNET." lightbox="./media/quickstart-create-connect-server-vnet/create-new-vnet-for-postgres-server.png":::
-7. Select **Review + create** to review your flexible server configuration.
+7. Select **Review + create** to review your Azure Database for PostgreSQL flexible server configuration.
8. Select **Create** to provision the server. Provisioning can take a few minutes.
9. Wait until the deployment is complete and successful.
- :::image type="content" source="./media/quickstart-create-connect-server-vnet/deployment-success.png" alt-text="Screenshot that shows the Networking settings with new VNET." lightbox="./media/quickstart-create-connect-server-vnet/deployment-success.png":::
+ :::image type="content" source="./media/quickstart-create-connect-server-vnet/deployment-success.png" alt-text="Screenshot that shows deployment success." lightbox="./media/quickstart-create-connect-server-vnet/deployment-success.png":::
-9. Select **Go to resource** to view the server's **Overview** page opens.
+9. Select **Go to resource** to view the server's **Overview** page.
## Create an Azure Linux virtual machine
-Since the server is in a virtual network, you can only connect to the server from other Azure services in the same virtual network as the server. To connect and manage the server, let's create a Linux virtual machine. The virtual machine must be created in the **same region** and **same subscription**. The Linux virtual machine can be used as an SSH tunnel to manage your database server.
+Since the server is in a virtual network, you can only connect to the server from other Azure services in the same virtual network as the server. To connect and manage the server, let's create a Linux virtual machine. The virtual machine must be created in the **same region** and **same subscription**. The Linux virtual machine can be used as an SSH tunnel to manage your Azure Database for PostgreSQL flexible server instance.
1. Go to your resource group in which the server was created. Select **Add**.
2. Select **Ubuntu Server 18.04 LTS**.
Since the server is in a virtual network, you can only connect to the server fro
:::image type="content" source="../../virtual-machines/linux/media/quick-create-portal/project-details.png" alt-text="Screenshot of the Project details section showing where you select the Azure subscription and the resource group for the virtual machine." lightbox="../../virtual-machines/linux/media/quick-create-portal/project-details.png":::
-2. Under **Instance details**, type *myVM* for the **Virtual machine name**, and choose the same **Region** as your database server.
+2. Under **Instance details**, type *myVM* for the **Virtual machine name**, and choose the same **Region** as your Azure Database for PostgreSQL flexible server instance.
:::image type="content" source="../../virtual-machines/linux/media/quick-create-portal/instance-details.png" alt-text="Screenshot of the Instance details section where you provide a name for the virtual machine and select its region, image and size." lightbox="../../virtual-machines/linux/media/quick-create-portal/instance-details.png":::
psql --host=mydemoserver-pg.postgres.database.azure.com --port=5432 --username=m
```

## Clean up resources
-You have now created an Azure Database for PostgreSQL flexible server in a resource group. If you don't expect to need these resources in the future, you can delete them by deleting the resource group, or you can just delete the PostgreSQL server. To delete the resource group, complete the following steps:
+You have now created an Azure Database for PostgreSQL flexible server instance in a resource group. If you don't expect to need these resources in the future, you can delete them by deleting the resource group, or you can just delete the Azure Database for PostgreSQL flexible server instance. To delete the resource group, complete the following steps:
1. In the Azure portal, search for and select **Resource groups**.
1. In the list of resource groups, select the name of your resource group.
postgresql Quickstart Create Server Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/quickstart-create-server-arm-template.md
Title: 'Quickstart: Create an Azure Database for PostgreSQL Flexible Server - ARM template'
-description: In this Quickstart, learn how to create an Azure Database for PostgreSQL Flexible server using ARM template.
+ Title: 'Quickstart: Create with ARM template'
+description: In this Quickstart, learn how to create an Azure Database for PostgreSQL - Flexible Server instance by using an ARM template.
Last updated 12/12/2023
-# Quickstart: Use an ARM template to create an Azure Database for PostgreSQL - Flexible Server
+# Quickstart: Use an ARM template to create an Azure Database for PostgreSQL - Flexible Server instance
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-Flexible server is a managed service that you use to run, manage, and scale highly available PostgreSQL databases in the cloud. You can use an Azure Resource Manager template (ARM template) to provision a PostgreSQL Flexible Server to deploy multiple servers or multiple databases on a server.
+Azure Database for PostgreSQL flexible server is a managed service that you use to run, manage, and scale highly available PostgreSQL databases in the cloud. You can use an Azure Resource Manager template (ARM template) to provision an Azure Database for PostgreSQL flexible server instance to deploy multiple servers or multiple databases on a server.
[!INCLUDE [About Azure Resource Manager](../../../includes/resource-manager-quickstart-introduction.md)]
An Azure account with an active subscription. [Create one for free](https://azur
## Review the template
-An Azure Database for PostgreSQL Server is the parent resource for one or more databases within a region. It provides the scope for management policies that apply to its databases: login, firewall, users, roles, and configurations.
+An Azure Database for PostgreSQL flexible server instance is the parent resource for one or more databases within a region. It provides the scope for management policies that apply to its databases: login, firewall, users, roles, and configurations.
Create a _postgres-flexible-server-template.json_ file and copy the following JSON script into it.
These resources are defined in the template:
Select **Try it** from the following PowerShell code block to open Azure Cloud Shell.

```azurepowershell-interactive
-$serverName = Read-Host -Prompt "Enter a name for the new Azure Database for PostgreSQL server"
+$serverName = Read-Host -Prompt "Enter a name for the new Azure Database for PostgreSQL flexible server instance"
$resourceGroupName = Read-Host -Prompt "Enter a name for the new resource group where the server will exist"
$location = Read-Host -Prompt "Enter an Azure region (for example, centralus) for the resource group"
-$adminUser = Read-Host -Prompt "Enter the Azure Database for PostgreSQL server's administrator account name"
+$adminUser = Read-Host -Prompt "Enter the Azure Database for PostgreSQL flexible server instance's administrator account name"
$adminPassword = Read-Host -Prompt "Enter the administrator password" -AsSecureString

New-AzResourceGroup -Name $resourceGroupName -Location $location # Use this command when you need to create a new resource group for your deployment
Follow these steps to verify if your server was created in Azure.
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-You'll have to enter the name of the new server to view the details of your Azure Database for PostgreSQL Flexible server.
+You have to enter the name of the new server to view the details of your Azure Database for PostgreSQL flexible server instance.
```azurepowershell-interactive
$serverName = Read-Host -Prompt "Enter the name of your Azure Database for PostgreSQL server"
Write-Host "Press [ENTER] to continue..."
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-You'll have to enter the name and the resource group of the new server to view details about your Azure Database for PostgreSQL Flexible Server.
+You have to enter the name and the resource group of the new server to view details about your Azure Database for PostgreSQL flexible server instance.
```azurecli-interactive
-echo "Enter your Azure Database for PostgreSQL Flexible Server name:" &&
+echo "Enter your Azure Database for PostgreSQL flexible server instance name:" &&
read serverName &&
-echo "Enter the resource group where the Azure Database for PostgreSQL Flexible Server exists:" &&
+echo "Enter the resource group where the Azure Database for PostgreSQL flexible server instance exists:" &&
read resourcegroupName &&
az resource show --resource-group $resourcegroupName --name $serverName --resource-type "Microsoft.DBforPostgreSQL/flexibleServers"
```
To delete the resource group:
1. In the [portal](https://portal.azure.com), select the resource group you want to delete.
1. Select **Delete resource group**.
-1. To confirm the deletion, type the name of the resource group
+1. To confirm the deletion, type the name of the resource group.
# [PowerShell](#tab/azure-powershell)
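For this tab, removing everything is a single command; the resource group name below is a placeholder, not a value from the quickstart:

```azurepowershell-interactive
Remove-AzResourceGroup -Name exampleRG
```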
postgresql Quickstart Create Server Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/quickstart-create-server-bicep.md
Title: 'Quickstart: Create an Azure Database for PostgreSQL Flexible Server - Bicep'
-description: In this Quickstart, learn how to create an Azure Database for PostgreSQL Flexible server using Bicep.
+ Title: 'Quickstart: Create with Bicep'
+description: In this Quickstart, learn how to create an Azure Database for PostgreSQL - Flexible Server instance by using Bicep.
Last updated 09/21/2022
-# Quickstart: Use a Bicep file to create an Azure Database for PostgreSQL - Flexible Server
+# Quickstart: Use a Bicep file to create an Azure Database for PostgreSQL - Flexible Server instance
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-In this quickstart, you'll learn how to use a Bicep file to create an Azure Database for PostgreSQL - Flexible Server.
+In this quickstart, you learn how to use a Bicep file to create an Azure Database for PostgreSQL flexible server instance.
-Flexible server is a managed service that you use to run, manage, and scale highly available PostgreSQL databases in the cloud. You can use Bicep to provision a PostgreSQL Flexible Server to deploy multiple servers or multiple databases on a server.
+Azure Database for PostgreSQL flexible server is a managed service that you use to run, manage, and scale highly available PostgreSQL databases in the cloud. You can use Bicep to provision an Azure Database for PostgreSQL flexible server instance to deploy multiple servers or multiple databases on a server.
[!INCLUDE [About Bicep](../../../includes/resource-manager-quickstart-bicep-introduction.md)]
An Azure account with an active subscription. [Create one for free](https://azur
## Review the Bicep
-An Azure Database for PostgreSQL Server is the parent resource for one or more databases within a region. It provides the scope for management policies that apply to its databases: login, firewall, users, roles, and configurations.
+An Azure Database for PostgreSQL flexible server instance is the parent resource for one or more databases within a region. It provides the scope for management policies that apply to its databases: login, firewall, users, roles, and configurations.
Create a _main.bicep_ file and copy the following Bicep into it.
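A minimal sketch of such a Bicep file, using the parameter names described below and illustrative compute, storage, and version defaults, might look like this:

```bicep
@description('Unique name for the server. postgres.database.azure.com is appended to it.')
param serverName string

@description('Administrator login for the server.')
param administratorLogin string

@description('Administrator password for the server.')
@secure()
param administratorLoginPassword string

param location string = resourceGroup().location

resource server 'Microsoft.DBforPostgreSQL/flexibleServers@2022-12-01' = {
  name: serverName
  location: location
  sku: {
    name: 'Standard_D2s_v3'
    tier: 'GeneralPurpose'
  }
  properties: {
    administratorLogin: administratorLogin
    administratorLoginPassword: administratorLoginPassword
    version: '13'
    storage: {
      storageSizeGB: 128
    }
    backup: {
      backupRetentionDays: 7
      geoRedundantBackup: 'Disabled'
    }
  }
}
```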
New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile "./mai
-You'll be prompted to enter these values:
+You're prompted to enter these values:
-- **serverName**: enter a unique name that identifies your Azure Database for PostgreSQL server. For example, `mydemoserver-pg`. The domain name `postgres.database.azure.com` is appended to the server name you provide. The server can contain only lowercase letters, numbers, and the hyphen (-) character. It must contain at least 3 through 63 characters.
+- **serverName**: enter a unique name that identifies your Azure Database for PostgreSQL flexible server instance. For example, `mydemoserver-pg`. The domain name `postgres.database.azure.com` is appended to the server name you provide. The server can contain only lowercase letters, numbers, and the hyphen (-) character. It must contain between 3 and 63 characters.
- **administratorLogin**: enter your own login account to use when you connect to the server. For example, `myadmin`. The admin login name can't be `azure_superuser`, `azure_pg_admin`, `admin`, `administrator`, `root`, `guest`, or `public`. It can't start with `pg_`.
- **administratorLoginPassword**: enter a new password for the server admin account. It must contain between 8 and 128 characters. Your password must contain characters from three of the following categories: English uppercase letters, English lowercase letters, numbers (0 through 9), and non-alphanumeric characters (!, $, #, %, etc.).
postgresql Quickstart Create Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/quickstart-create-server-cli.md
Title: 'Quickstart: Create a server - Azure CLI - Azure Database for PostgreSQL - Flexible Server'
-description: This quickstart describes how to use the Azure CLI to create an Azure Database for PostgreSQL Flexible Server in an Azure resource group.
+ Title: 'Quickstart: Create with Azure CLI'
+description: This quickstart describes how to use the Azure CLI to create an Azure Database for PostgreSQL - Flexible Server instance in an Azure resource group.
Last updated 12/12/2023
-# Quickstart: Create an Azure Database for PostgreSQL Flexible Server using Azure CLI
+# Quickstart: Create an Azure Database for PostgreSQL - Flexible Server instance using Azure CLI
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-This quickstart shows how to use the [Azure CLI](/cli/azure/get-started-with-azure-cli) commands in [Azure Cloud Shell](https://shell.azure.com) to create an Azure Database for PostgreSQL Flexible Server in five minutes. If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account before you begin.
+This quickstart shows how to use the [Azure CLI](/cli/azure/get-started-with-azure-cli) commands in [Azure Cloud Shell](https://shell.azure.com) to create an Azure Database for PostgreSQL flexible server instance in five minutes. If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account before you begin.
If you prefer to install and use the CLI locally, this quickstart requires Azure
## Prerequisites
-You'll need to log in to your account using the [az login](/cli/azure/reference-index#az-login) command. Note the **id** property in the output, which refers to the **Subscription ID** for your Azure account.
+You need to log in to your account using the [az login](/cli/azure/reference-index#az-login) command. Note the **id** property in the output, which refers to the **Subscription ID** for your Azure account.
```azurecli-interactive
az login
az account set --subscription <subscription id>
## Create a flexible server
-Create an [Azure resource group](../../azure-resource-manager/management/overview.md) using the `az group create` command and then create your PostgreSQL flexible server inside this resource group. You should provide a unique name. The following example creates a resource group named `myresourcegroup` in the `eastus` location.
+Create an [Azure resource group](../../azure-resource-manager/management/overview.md) using the `az group create` command and then create your Azure Database for PostgreSQL flexible server instance inside this resource group. You should provide a unique name. The following example creates a resource group named `myresourcegroup` in the `eastus` location.
```azurecli-interactive
az group create --name myresourcegroup --location eastus
```
-Create a flexible server with the `az postgres flexible-server create` command. A server can contain multiple databases. The following command creates a server in the resource group you just created:
+Create an Azure Database for PostgreSQL flexible server instance with the `az postgres flexible-server create` command. A server can contain multiple databases. The following command creates a server in the resource group you just created:
```azurecli
az postgres flexible-server create --name mydemoserver --resource-group myresourcegroup
The server created has the following attributes:
- Service defaults for remaining server configurations: compute tier (General Purpose), compute size/SKU (`Standard_D2s_v3` - 2 vCore, 8 GB RAM), backup retention period (7 days), and PostgreSQL version (13)

> [!NOTE]
-> The connectivity method cannot be changed after creating the server. For example, if you selected *Private access (VNet Integration)* during creation, then you cannot change it to *Public access (allowed IP addresses)* after creation. We highly recommend creating a server with Private access to securely access your server using VNet Integration. Learn more about Private access in the [concepts article](./concepts-networking.md).
+> The connectivity method can't be changed after creating the server. For example, if you selected *Private access (VNet Integration)* during creation, then you can't change it to *Public access (allowed IP addresses)* after creation. We highly recommend creating a server with Private access to securely access your server using VNet Integration. Learn more about Private access in the [concepts article](./concepts-networking.md).
If you'd like to change any defaults, refer to the Azure CLI reference for [az postgres flexible-server create](/cli/azure/postgres/flexible-server#az-postgres-flexible-server-create); a sketch with the main options set explicitly follows the note below.

> [!NOTE]
-> Connections to Azure Database for PostgreSQL communicate over port 5432. If you try to connect from within a corporate network, outbound traffic over port 5432 might not be allowed. If this is the case, you can't connect to your server unless your IT department opens port 5432. Notice that if you enable [PgBouncer](./concepts-pgbouncer.md) on your instance of Flexible Server and want to connect through it, because it runs on port 6432, it is that port that your IT department must open for outbound traffic.
+> Connections to Azure Database for PostgreSQL flexible server communicate over port 5432. If you try to connect from within a corporate network, outbound traffic over port 5432 might not be allowed. If this is the case, you can't connect to your server unless your IT department opens port 5432. Notice that if you enable [PgBouncer](./concepts-pgbouncer.md) on your instance of Azure Database for PostgreSQL flexible server and want to connect through it, because it runs on port 6432, it is that port that your IT department must open for outbound traffic.
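If you prefer to set the main options explicitly instead of accepting the service defaults, a create command might look like the following sketch. The server, resource group, administrator name, password, and client IP address are placeholders; see the [az postgres flexible-server create](/cli/azure/postgres/flexible-server#az-postgres-flexible-server-create) reference for the full parameter list.

```azurecli
az postgres flexible-server create \
  --name mydemoserver \
  --resource-group myresourcegroup \
  --location eastus \
  --tier GeneralPurpose \
  --sku-name Standard_D2s_v3 \
  --storage-size 128 \
  --version 13 \
  --admin-user myadmin \
  --admin-password '<server_admin_password>' \
  --public-access <your-client-ip-address>
```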
## Get the connection information
The result is in JSON format. Make a note of the **fullyQualifiedDomainName** an
First, install the **[psql](https://www.postgresql.org/download/)** command-line tool.
-With psql, connect to the "flexibleserverdb" database using the below command. Replace values with the auto-generated domain name and username.
+With psql, connect to the "flexibleserverdb" database using the following command. Replace values with the auto-generated domain name and username.
```bash
psql -h mydemoserver.postgres.database.azure.com -U myadmin flexibleserverdb
postgresql Quickstart Create Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/quickstart-create-server-portal.md
Title: 'Quickstart: Create server - Azure portal - Azure Database for PostgreSQL - Flexible Server'
-description: Quickstart guide to creating and managing an Azure Database for PostgreSQL - Flexible Server by using the Azure portal user interface.
+ Title: 'Quickstart: Create with Azure portal'
+description: Quickstart guide to creating and managing an Azure Database for PostgreSQL - Flexible Server instance by using the Azure portal user interface.
Last updated 12/12/2023
-# Quickstart: Create an Azure Database for PostgreSQL - Flexible Server in the Azure portal
+# Quickstart: Create an Azure Database for PostgreSQL - Flexible Server instance in the Azure portal
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-Azure Database for PostgreSQL is a managed service that you use to run, manage, and scale highly available PostgreSQL databases in the cloud. This Quickstart shows you how to create an Azure Database for PostgreSQL - Flexible Server in about five minutes using the Azure portal.
+Azure Database for PostgreSQL flexible server is a managed service that you use to run, manage, and scale highly available PostgreSQL databases in the cloud. This Quickstart shows you how to create an Azure Database for PostgreSQL flexible server instance in about five minutes using the Azure portal.
If you don't have an Azure subscription, create a [free Azure account](https://azure.microsoft.com/free/) before you begin.
Open your web browser and go to the [portal](https://portal.azure.com/). Enter y
## Create an Azure Database for PostgreSQL server
-An Azure Database for PostgreSQL server is created with a configured set of [compute and storage resources](./concepts-compute-storage.md). The server is created within an [Azure resource group](../../azure-resource-manager/management/overview.md).
+An Azure Database for PostgreSQL flexible server instance is created with a configured set of [compute and storage resources](./concepts-compute-storage.md). The server is created within an [Azure resource group](../../azure-resource-manager/management/overview.md).
-To create an Azure Database for PostgreSQL server, take the following steps:
+To create an Azure Database for PostgreSQL flexible server instance, take the following steps:
1. Select **Create a resource** (+) in the upper-left corner of the portal.
1. Select **Databases** > **Azure Database for PostgreSQL**.
- :::image type="content" source="./media/quickstart-create-database-portal/1-create-database.png" alt-text="The Azure Database for PostgreSQL in menu":::
+ :::image type="content" source="./media/quickstart-create-database-portal/1-create-database.png" alt-text="The Azure Database for PostgreSQL in menu.":::
-1. Select the **Flexible server** deployment option.
+<!--Doesn't happen anymore 1. Select the **Flexible server** deployment option.
:::image type="content" source="./media/quickstart-create-database-portal/2-select-deployment-option.png" alt-text="Select Azure Database for PostgreSQL - Flexible server deployment option":::-
+-->
1. Fill out the **Basics** form with the following information:

   :::image type="content" source="./media/quickstart-create-database-portal/3-create-basics.png" alt-text="Create a server.":::
To create an Azure Database for PostgreSQL server, take the following steps:
Workload type|Default SKU selection|You can choose from Development (Burstable SKU), Production small/medium (General Purpose SKU), or Production large (Memory Optimized SKU). You can further customize the SKU and storage by selecting the *Configure server* link.
Availability zone|Your preferred AZ|You can choose in which availability zone you want your server to be deployed. This is useful to co-locate with your application. If you choose *No preference*, a default AZ is selected for you.
High availability|Enable it for same zone or zone-redundant deployment|By selecting this option, a standby server with the same configuration as your primary is automatically provisioned in the same availability zone or a different availability zone in the same region, depending on the option selected for **High availability mode**. Note: You can enable or disable high availability after server creation as well.
- Server name|Your server name|A unique name that identifies your Azure Database for PostgreSQL Flexible Server. The domain name *postgres.database.azure.com* is appended to the server name you provide. The server name can contain only lowercase letters, numbers, and the hyphen (-) character. It must contain between 3 and 63 characters.
+ Server name|Your server name|A unique name that identifies your Azure Database for PostgreSQL flexible server instance. The domain name *postgres.database.azure.com* is appended to the server name you provide. The server name can contain only lowercase letters, numbers, and the hyphen (-) character. It must contain between 3 and 63 characters.
Admin username|Your admin user name|Your own login account to use when you connect to the server. The admin username must contain between 1 and 63 characters, must only contain numbers and letters, can't start with **pg_** and can't be **azure_superuser**, **azure_pg_admin**, **admin**, **administrator**, **root**, **guest**, or **public**.
- Password|Your password|Specify a password for the server admin account. The password must contain between 8 and 128 characters. It must also contain characters from three of the following four categories: English uppercase letters, English lowercase letters, numbers (0 through 9), and non-alphanumeric characters (!, $, #, %, and so on). Your password cannot contain all or part of the login name. Part of a login name is defined as three or more consecutive alphanumeric characters.
- Location|The region closest to your users|The location that is closest to your users.
+ Password|Your password|Specify a password for the server admin account. The password must contain between 8 and 128 characters. It must also contain characters from three of the following four categories: English uppercase letters, English lowercase letters, numbers (0 through 9), and non-alphanumeric characters (!, $, #, %, and so on). Your password can't contain all or part of the login name. Part of a login name is defined as three or more consecutive alphanumeric characters.
+ Location|The region closest to your users|The location that's closest to your users.
Version|The latest major version| The latest PostgreSQL major version, unless you have specific requirements otherwise.
Compute + storage|**General Purpose**, **4 vCores**, **512 GB**, **7 days**|The compute, storage, and backup configurations for your new server. Select **Configure server**. *General Purpose*, *4 vCores*, *512 GB*, and *7 days* are the default values for **Compute tier**, **vCore**, **Storage**, and **Backup retention period (in days)**. You can leave those sliders as they are or you can adjust them. <br> <br> To configure your server with geo-redundant backups to protect from region-level failures, you can enable the **Recover from regional outage or disaster** checkbox. Note that the geo-redundant backup can be configured only at the time of server creation.

To save this pricing tier selection, select **Save**. The next screenshot captures these selections.
To create an Azure Database for PostgreSQL server, take the following steps:
1. Configure Networking options
-1. On the **Networking** tab, you can choose how your server is reachable. Azure Database for PostgreSQL Flexible Server provides two ways to connect to your server:
+1. On the **Networking** tab, you can choose how your server is reachable. Azure Database for PostgreSQL flexible server provides two ways to connect to your server:
- Public access (allowed IP addresses)
- Private access (VNet Integration)

When you use public access, access to your server is limited to allowed IP addresses that you add to a firewall rule. This method prevents external applications and tools from connecting to the server and any databases on the server, unless you create a rule to open the firewall for a specific IP address or range. When you use private access (VNet Integration), access to your server is limited to your virtual network. [Learn more about connectivity methods in the concepts article.](./concepts-networking.md)
- In this quickstart, you'll learn how to enable public access to connect to the server. On the **Networking** tab, for **Connectivity method** select **Public access (alllowed IP addresses)**. To configure **Firewall rules**, select **Add current client IP address**.
+ In this quickstart, you learn how to enable public access to connect to the server. On the **Networking** tab, for **Connectivity method** select **Public access (allowed IP addresses)**. To configure **Firewall rules**, select **Add current client IP address**.
> [!NOTE]
> You can't change the connectivity method after you create the server. For example, if you select **Public access (allowed IP addresses)** when you create the server, you can't change to **Private access (VNet Integration)** after the server is created. We highly recommend that you create your server with private access to help secure access to your server via VNet Integration. [Learn more about private access in the concepts article.](./concepts-networking.md)
To create an Azure Database for PostgreSQL server, take the following steps:
:::image type="content" source="./media/quickstart-create-database-portal/7-notifications.png" alt-text="The Notifications pane.":::
- By default, a **postgres** database is created under your server. The [postgres](https://www.postgresql.org/docs/current/static/app-initdb.html) database is a default database that's meant for use by users, utilities, and third-party applications. (The other default database is **azure_maintenance**. Its function is to separate the managed service processes from user actions. You cannot access this database.)
+ By default, a **postgres** database is created under your server. The [postgres](https://www.postgresql.org/docs/current/static/app-initdb.html) database is a default database that's meant for use by users, utilities, and third-party applications. (The other default database is **azure_maintenance**. Its function is to separate the managed service processes from user actions. You can't access this database.)
> [!NOTE]
- > Connections to your Azure Database for PostgreSQL server communicate over port 5432. When you try to connect from within a corporate network, outbound traffic over port 5432 might not be allowed by your network's firewall. If so, you can't connect to your server unless your IT department opens port 5432.
+ > Connections to your Azure Database for PostgreSQL flexible server instance communicate over port 5432. When you try to connect from within a corporate network, outbound traffic over port 5432 might not be allowed by your network's firewall. If so, you can't connect to your server unless your IT department opens port 5432.
>

## Get the connection information
-When you create your Azure Database for PostgreSQL server, a default database named **postgres** is created. To connect to your database server, you need your full server name and admin login credentials. You might have noted those values earlier in the Quickstart article. If you didn't, you can easily find the server name and login information on the server **Overview** page in the portal.
+When you create your Azure Database for PostgreSQL flexible server instance, a default database named **postgres** is created. To connect to your database server, you need your full server name and admin login credentials. You might have noted those values earlier in the Quickstart article. If you didn't, you can easily find the server name and login information on the server **Overview** page in the portal.
Open your server's **Overview** page. Make a note of the **Server name** and the **Server admin login name**. Hover your cursor over each field, and the copy symbol appears to the right of the text. Select the copy symbol as needed to copy the values.

:::image type="content" source="./media/quickstart-create-database-portal/8-server-name.png" alt-text="The server Overview page.":::
-## Connect to the PostgreSQL database using psql
+<a name="connect-to-the-postgresql-database-using-psql"></a>
+## Connect to the Azure Database for PostgreSQL flexible server database using psql
-There are a number of applications you can use to connect to your Azure Database for PostgreSQL server. If your client computer has PostgreSQL installed, you can use a local instance of [psql](https://www.postgresql.org/docs/current/static/app-psql.html) to connect to an Azure PostgreSQL server. Let's now use the psql command-line utility to connect to the Azure PostgreSQL server.
+There are a number of applications you can use to connect to your Azure Database for PostgreSQL flexible server instance. If your client computer has PostgreSQL installed, you can use a local instance of [psql](https://www.postgresql.org/docs/current/static/app-psql.html) to connect to an Azure Database for PostgreSQL flexible server instance. Let's now use the psql command-line utility to connect to the Azure Database for PostgreSQL flexible server instance.
-1. Run the following psql command to connect to an Azure Database for PostgreSQL server
+1. Run the following psql command to connect to an Azure Database for PostgreSQL flexible server instance.
```bash
psql --host=<servername> --port=<port> --username=<user> --dbname=<dbname>
```
- For example, the following command connects to the default database called **postgres** on your PostgreSQL server **mydemoserver.postgres.database.azure.com** using access credentials. Enter the `<server_admin_password>` you chose when prompted for password.
+ For example, the following command connects to the default database called **postgres** on your Azure Database for PostgreSQL flexible server instance **mydemoserver.postgres.database.azure.com** using access credentials. Enter the `<server_admin_password>` you chose when prompted for password.
```bash
psql --host=mydemoserver-pg.postgres.database.azure.com --port=5432 --username=myadmin --dbname=postgres
```
- After you connect, the psql utility displays a postgres prompt where you type sql commands. In the initial connection output, a warning may appear because the psql you're using might be a different version than the Azure Database for PostgreSQL server version.
+ After you connect, the psql utility displays a postgres prompt where you type sql commands. In the initial connection output, a warning may appear because the psql you're using might be a different version than the Azure Database for PostgreSQL flexible server version.
Example psql output:
There are a number of applications you can use to connect to your Azure Database
```

> [!TIP]
- > If the firewall is not configured to allow the IP address of your client, the following error occurs:
+ > If the firewall isn't configured to allow the IP address of your client, the following error occurs:
> > "psql: FATAL: no pg_hba.conf entry for host `<IP address>`, user "myadmin", database "postgres", SSL on FATAL: SSL connection is required. Specify SSL options and retry. >
There are a number of applications you can use to connect to your Azure Database
1. Type `\q`, and then select the Enter key to quit psql.
-You connected to the Azure Database for PostgreSQL server via psql, and you created a blank user database.
+You connected to the Azure Database for PostgreSQL flexible server instance via psql, and you created a blank user database.
## Clean up resources
You can clean up the resources that you created in the Quickstart in one of two
To delete the entire resource group, including the newly created server:
-1. Locate your resource group in the portal. On the menu on the left, select **Resource groups**. Then select the name of your resource group in which you created your Azure Database for PostgreSQL Flexible Service resource.
+1. Locate your resource group in the portal. On the menu on the left, select **Resource groups**. Then select the name of your resource group in which you created your Azure Database for PostgreSQL flexible server resource.
1. On your resource group page, select **Delete**. Enter the name of your resource group in the text box to confirm deletion. Select **Delete**.
To delete only the newly created server:
1. On the **Overview** page, select **Delete**.
- :::image type="content" source="./media/quickstart-create-database-portal/9-delete.png" alt-text="The Delete button":::
+ :::image type="content" source="./media/quickstart-create-database-portal/9-delete.png" alt-text="The Delete button.":::
1. Confirm the name of the server you want to delete, and view the databases under it that are affected. Enter your server name in the text box, and select **Delete**.
postgresql Quickstart Create Server Python Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/quickstart-create-server-python-sdk.md
Title: 'Quickstart: Create an Azure Database for PostgreSQL Flexible Server - Azure libraries (SDK) for Python'
-description: In this Quickstart, learn how to create an Azure Database for PostgreSQL Flexible server using Azure libraries (SDK) for Python.
+ Title: 'Quickstart: Create with Azure libraries (SDK) for Python'
+description: In this Quickstart, learn how to create an Azure Database for PostgreSQL - Flexible Server instance using Azure libraries (SDK) for Python.
Last updated 04/24/2023
-# Quickstart: Use an Azure libraries (SDK) for Python to create an Azure Database for PostgreSQL - Flexible Server
+# Quickstart: Use the Azure libraries (SDK) for Python to create an Azure Database for PostgreSQL - Flexible Server instance
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-In this quickstart, you'll learn how to use the [Azure libraries (SDK) for Python](/azure/developer/python/sdk/azure-sdk-overview?view=azure-python&preserve-view=true)
-to create an Azure Database for PostgreSQL - Flexible Server.
+In this quickstart, you learn how to use the [Azure libraries (SDK) for Python](/azure/developer/python/sdk/azure-sdk-overview?view=azure-python&preserve-view=true)
+to create an Azure Database for PostgreSQL flexible server instance.
-Flexible Server is a managed service that you use to run, manage, and scale highly available PostgreSQL databases in the cloud. You can use Python SDK to provision a PostgreSQL Flexible Server, multiple servers or multiple databases on a server.
+Azure Database for PostgreSQL flexible server is a managed service that you use to run, manage, and scale highly available PostgreSQL databases in the cloud. You can use the Python SDK to provision an Azure Database for PostgreSQL flexible server instance, multiple servers, or multiple databases on a server.
## Prerequisites
Replace the following parameters with your data:
- **subscription_id**: Your own [subscription ID](../../azure-portal/get-subscription-tenant-id.md#find-your-azure-subscription).
- **resource_group**: The name of the resource group you want to use. The script will create a new resource group if it doesn't exist.
-- **server_name**: A unique name that identifies your Azure Database for PostgreSQL - Flexible Server. The domain name `postgres.database.azure.com` is appended to the server name you provide. The server name must be at least 3 characters and at most 63 characters, and can only contain lowercase letters, numbers, and hyphens.
-- **location**: The Azure region where you want to create your Azure Database for PostgreSQL - Flexible Server. It defines the geographical location where your server and its data reside. Choose a region close to your users for reduced latency. The location should be specified in the format of Azure region short names, like `westus2`, `eastus`, or `northeurope`.
+- **server_name**: A unique name that identifies your Azure Database for PostgreSQL flexible server instance. The domain name `postgres.database.azure.com` is appended to the server name you provide. The server name must be at least 3 characters and at most 63 characters, and can only contain lowercase letters, numbers, and hyphens.
+- **location**: The Azure region where you want to create your Azure Database for PostgreSQL flexible server instance. It defines the geographical location where your server and its data reside. Choose a region close to your users for reduced latency. The location should be specified in the format of Azure region short names, like `westus2`, `eastus`, or `northeurope`.
- **administrator_login**: The primary administrator username for the server. You can create additional users after the server has been created.
- **administrator_login_password**: A password for the primary administrator for the server. It must contain between 8 and 128 characters. Your password must contain characters from three of the following categories: English uppercase letters, English lowercase letters, numbers (0 through 9), and non-alphanumeric characters (!, $, #, %, etc.).
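A minimal creation sketch built around these parameters might look like the following. It assumes the `azure-identity` and `azure-mgmt-rdbms` packages are installed and that the resource group already exists; the SKU and storage values are illustrative, not required.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.rdbms.postgresql_flexibleservers import PostgreSQLManagementClient
from azure.mgmt.rdbms.postgresql_flexibleservers.models import Server, Sku, Storage

subscription_id = "<subscription_id>"
resource_group = "<resource_group>"
server_name = "<server_name>"

# Authenticate as the signed-in user, a service principal, or a managed identity.
client = PostgreSQLManagementClient(DefaultAzureCredential(), subscription_id)

# begin_create returns a long-running-operation poller; result() blocks until the server exists.
server = client.servers.begin_create(
    resource_group_name=resource_group,
    server_name=server_name,
    parameters=Server(
        location="<location>",
        administrator_login="<administrator_login>",
        administrator_login_password="<administrator_login_password>",
        version="14",
        sku=Sku(name="Standard_D2s_v3", tier="GeneralPurpose"),
        storage=Storage(storage_size_gb=128),
        create_mode="Create",
    ),
).result()

print(f"Provisioned {server.name} ({server.fully_qualified_domain_name})")
```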
You can use the Python SDK, Azure portal, Azure CLI, Azure PowerShell, and vario
# [Python SDK](#tab/PythonSDK)
-Add the `check_server_created` function to your existing script to use the servers attribute of the [`PostgreSQLManagementClient`](/python/api/azure-mgmt-rdbms/azure.mgmt.rdbms.postgresql_flexibleservers.postgresqlmanagementclient?view=azure-python&preserve-view=true) instance to check if the PostgreSQL Flexible Server was created:
+Add the `check_server_created` function to your existing script to use the servers attribute of the [PostgreSQLManagementClient](/python/api/azure-mgmt-rdbms/azure.mgmt.rdbms.postgresql_flexibleservers.postgresqlmanagementclient?view=azure-python&preserve-view=true) instance to check if the Azure Database for PostgreSQL flexible server instance was created:
```python
def check_server_created(subscription_id, resource_group, server_name):
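    # Assumed body (sketch), reusing the DefaultAzureCredential / PostgreSQLManagementClient
    # setup from the creation script. servers.get() raises an error when the server doesn't
    # exist, which doubles as the "not created yet" path.
    client = PostgreSQLManagementClient(DefaultAzureCredential(), subscription_id)
    try:
        server = client.servers.get(resource_group, server_name)
        print(f"{server.name} exists: state={server.state}, FQDN={server.fully_qualified_domain_name}")
    except Exception as ex:
        print(f"{server_name} was not found in resource group {resource_group}: {ex}")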
Get-AzResource -ResourceGroupName <resource_group>
## Clean up resources
-If you no longer need the PostgreSQL Flexible Server, you can delete it and the associated resource group using the following methods.
+If you no longer need the Azure Database for PostgreSQL flexible server instance, you can delete it and the associated resource group using the following methods.
# [Python SDK](#tab/PythonSDK)
-Add the `delete_resources` function to your existing script to delete your Postgres server and the associated resource group that was created in this quickstart.
+Add the `delete_resources` function to your existing script to delete your Azure Database for PostgreSQL flexible server instance and the associated resource group that was created in this quickstart.
```python
def delete_resources(subscription_id, resource_group, server_name):
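    # Assumed body (sketch): delete the server first, then the resource group that holds it.
    # ResourceManagementClient comes from the azure-mgmt-resource package.
    from azure.mgmt.resource import ResourceManagementClient
    credential = DefaultAzureCredential()
    PostgreSQLManagementClient(credential, subscription_id).servers.begin_delete(
        resource_group, server_name
    ).wait()
    ResourceManagementClient(credential, subscription_id).resource_groups.begin_delete(
        resource_group
    ).wait()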
postgresql Reference Pg Azure Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/reference-pg-azure-storage.md
Title: Azure Storage Extension in Azure Database for PostgreSQL - Flexible Server -Preview reference
+ Title: Azure Storage Extension Preview reference
description: Azure Storage Extension in Azure Database for PostgreSQL - Flexible Server -Preview reference
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-The [pg_azure_storage extension](./concepts-storage-extension.md) allows you to import or export data in multiple file formats directly between Azure blob storage and your Azure Database for PostgreSQL - Flexible Server Containers with access level "Private" or "Blob" requires adding private access key.
+The [pg_azure_storage extension](./concepts-storage-extension.md) allows you to import or export data in multiple file formats directly between Azure Blob Storage and your Azure Database for PostgreSQL flexible server instance. Containers with the access level "Private" or "Blob" require adding a private access key.
You can create the extension by running:

```sql
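-- Sketch (assumed): azure_storage must be allow-listed in the azure.extensions server
-- parameter before it can be created in a database.
CREATE EXTENSION azure_storage;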
CREATE TABLE IF NOT EXISTS public.events
### Add access key of storage account (mandatory for access level = private)
-The example illustrates adding of access key for the storage account to get access for querying from a session on the Azure Cosmos DB for Postgres cluster.
+The following example illustrates how to add the access key for the storage account so that its containers can be queried from a session on the Azure Database for PostgreSQL flexible server instance.
```sql
SELECT azure_storage.account_add('pgquickstart', 'SECRET_ACCESS_KEY');
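-- Follow-up sketch (container name assumed): once the key is added, the blobs in a
-- container can be listed directly from SQL.
SELECT * FROM azure_storage.blob_list('pgquickstart', 'github');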
postgresql Release Notes Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/release-notes-api.md
Title: Azure Database for PostgreSQL - Flexible Server API Release notes
-description: API Release notes of Azure Database for PostgreSQL - Flexible Server.
+ Title: API release notes
+description: API release notes for Azure Database for PostgreSQL - Flexible Server.
Last updated 06/06/2023
-# API Release notes - Azure Database for PostgreSQL - Flexible Server
+# API release notes - Azure Database for PostgreSQL - Flexible Server
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
This page provides latest news and updates regarding the recommended API version
## API Releases

> [!NOTE]
-> Every Stable and Preview API version is cummulative. This means that it includes the previous features in addition to the features included under the Comments column.
+> Every Stable and Preview API version is cumulative. This means that it includes the previous features in addition to the features included under the Comments column.
| API Version | Stable/Preview | Comments |
| --- | --- | --- |
This page provides latest news and updates regarding the recommended API version
## Contacts
-For any questions or suggestions you might have on Azure Database for PostgreSQL flexible server, send an email to the Azure Database for PostgreSQL Team ([@Ask Azure DB for PostgreSQL](mailto:AskAzureDBforPostgreSQL@service.microsoft.com)). Please note that this email address isn't a technical support alias.
+For any questions or suggestions you might have on Azure Database for PostgreSQL flexible server, send an email to the Azure Database for PostgreSQL flexible server Team ([@Ask Azure Database for PostgreSQL flexible server](mailto:AskAzureDBforPostgreSQL@service.microsoft.com)). Please note that this email address isn't a technical support alias.
In addition, consider the following points of contact as appropriate:
postgresql Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/release-notes.md
Title: Azure Database for PostgreSQL - Flexible Server Release notes
-description: Release notes of Azure Database for PostgreSQL - Flexible Server.
+ Title: Release notes
+description: Release notes for Azure Database for PostgreSQL - Flexible Server.
Last updated 12/11/2023
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-This page provides latest news and updates regarding feature additions, engine versions support, extensions, and any other announcements relevant to Flexible Server - PostgreSQL.
+This page provides latest news and updates regarding feature additions, engine versions support, extensions, and any other announcements relevant to Azure Database for PostgreSQL flexible server.
## Release: December 2023
* Public preview of [Server logs](./how-to-server-logs-portal.md).
This page provides latest news and updates regarding feature additions, engine v
* General availability of [Microsoft Defender support](./concepts-security.md)

## Release: November 2023
-* General availability of PostgreSQL 16 for Azure Database for PostgreSQL ΓÇô Flexible Server.
+* General availability of PostgreSQL 16 for Azure Database for PostgreSQL flexible server.
* General availability of [near-zero downtime scaling](./concepts-scaling-resources.md).
* General availability of [Pgvector 0.5.1](concepts-extensions.md) extension.
* Public preview of Italy North region.
This page provides latest news and updates regarding feature additions, engine v
## Release: October 2023
* Support for [minor versions](./concepts-supported-versions.md) 15.4, 14.9, 13.12, 12.16, 11.21 <sup>$</sup>
-* General availability of [Grafana Monitoring Dashboard](https://grafana.com/grafana/dashboards/19556-azure-azure-postgresql-flexible-server-monitoring/) for Azure Database for PostgreSQL ΓÇô Flexible Server.
+* General availability of [Grafana Monitoring Dashboard](https://grafana.com/grafana/dashboards/19556-azure-azure-postgresql-flexible-server-monitoring/) for Azure Database for PostgreSQL flexible server.
## Release: September 2023
-* General availability of [Storage auto-grow](./concepts-compute-storage.md) for Azure Database for PostgreSQL ΓÇô Flexible Server.
-* General availability of [Cross Subscription and Cross Resource Group Restore](how-to-restore-to-different-subscription-or-resource-group.md) for Azure Database for PostgreSQL ΓÇô Flexible Server.
+* General availability of [Storage auto-grow](./concepts-compute-storage.md) for Azure Database for PostgreSQL flexible server.
+* General availability of [Cross Subscription and Cross Resource Group Restore](how-to-restore-to-different-subscription-or-resource-group.md) for Azure Database for PostgreSQL flexible server.
## Release: August 2023
* Support for [minor versions](./concepts-supported-versions.md) 15.3, 14.8, 13.11, 12.15, 11.20 <sup>$</sup>
-* General availability of [Enhanced Metrics](./concepts-monitoring.md#enhanced-metrics), [Autovacuum Metrics](./concepts-monitoring.md#autovacuum-metrics), [PgBouncer Metrics](./concepts-monitoring.md#pgbouncer-metrics) and [Database availability metric](./concepts-monitoring.md#database-availability-metric) for Azure Database for PostgreSQL ΓÇô Flexible Server.
+* General availability of [Enhanced Metrics](./concepts-monitoring.md#enhanced-metrics), [Autovacuum Metrics](./concepts-monitoring.md#autovacuum-metrics), [PgBouncer Metrics](./concepts-monitoring.md#pgbouncer-metrics) and [Database availability metric](./concepts-monitoring.md#database-availability-metric) for Azure Database for PostgreSQL flexible server.
## Release: July 2023
-* General Availability of PostgreSQL 15 for Azure Database for PostgreSQL ΓÇô Flexible Server.
-* Public preview of [Automation Tasks](./create-automation-tasks.md) for Azure Database for PostgreSQL ΓÇô Flexible Server.
+* General Availability of PostgreSQL 15 for Azure Database for PostgreSQL flexible server.
+* Public preview of [Automation Tasks](./create-automation-tasks.md) for Azure Database for PostgreSQL flexible server.
## Release: June 2023
* Support for [minor versions](./concepts-supported-versions.md) 15.2 (preview), 14.7, 13.10, 12.14, 11.19 <sup>$</sup>
-* General availability of [Query Performance Insight](./concepts-query-performance-insight.md) for Azure Database for PostgreSQL ΓÇô Flexible Server.
-* General availability of [Major Version Upgrade](concepts-major-version-upgrade.md) for Azure Database for PostgreSQL ΓÇô Flexible Server.
-* General availability of [Restore a dropped server](how-to-restore-dropped-server.md) for Azure Database for PostgreSQL ΓÇô Flexible Server.
-* Public preview of [Storage auto-grow](./concepts-compute-storage.md) for Azure Database for PostgreSQL ΓÇô Flexible Server.
+* General availability of [Query Performance Insight](./concepts-query-performance-insight.md) for Azure Database for PostgreSQL flexible server.
+* General availability of [Major Version Upgrade](concepts-major-version-upgrade.md) for Azure Database for PostgreSQL flexible server.
+* General availability of [Restore a dropped server](how-to-restore-dropped-server.md) for Azure Database for PostgreSQL flexible server.
+* Public preview of [Storage auto-grow](./concepts-compute-storage.md) for Azure Database for PostgreSQL flexible server.
## Release: May 2023
-* Public preview of [Database availability metric](./concepts-monitoring.md#database-availability-metric) for Azure Database for PostgreSQL ΓÇô Flexible Server.
-* PostgreSQL 15 is now available in public preview for Azure Database for PostgreSQL ΓÇô Flexible Server in limited regions (West Europe, East US, West US2, South East Asia, UK South, North Europe, Japan east).
+* Public preview of [Database availability metric](./concepts-monitoring.md#database-availability-metric) for Azure Database for PostgreSQL flexible server.
+* PostgreSQL 15 is now available in public preview for Azure Database for PostgreSQL flexible server in limited regions (West Europe, East US, West US2, South East Asia, UK South, North Europe, Japan east).
* General availability: [Pgvector extension](how-to-use-pgvector.md) for Azure Database for PostgreSQL - Flexible Server.
-* General availability :[Azure Key Vault Managed HSM](./concepts-data-encryption.md#using-azure-key-vault-managed-hsm) with Azure Database for PostgreSQL- Flexible server
-* General availability [32 TB Storage](./concepts-compute-storage.md) with Azure Database for PostgreSQL- Flexible server
-* Support for [Ddsv5 and Edsv5 SKUs](./concepts-compute-storage.md) with Azure Database for PostgreSQL- Flexible server.
+* General availability of [Azure Key Vault Managed HSM](./concepts-data-encryption.md#using-azure-key-vault-managed-hsm) with Azure Database for PostgreSQL flexible server.
+* General availability of [32 TB Storage](./concepts-compute-storage.md) with Azure Database for PostgreSQL flexible server.
+* Support for [Ddsv5 and Edsv5 SKUs](./concepts-compute-storage.md) with Azure Database for PostgreSQL flexible server.
## Release: April 2023
-* Public preview of [Query Performance Insight](./concepts-query-performance-insight.md) for Azure Database for PostgreSQL ΓÇô Flexible Server.
-* Public preview of: [Power BI integration](./connect-with-power-bi-desktop.md) for Azure Database for PostgreSQL ΓÇô Flexible Server.
-* Public preview of [Troubleshooting guides](./concepts-troubleshooting-guides.md) for Azure Database for PostgreSQL ΓÇô Flexible Server.
+* Public preview of [Query Performance Insight](./concepts-query-performance-insight.md) for Azure Database for PostgreSQL flexible server.
+* Public preview of [Power BI integration](./connect-with-power-bi-desktop.md) for Azure Database for PostgreSQL flexible server.
+* Public preview of [Troubleshooting guides](./concepts-troubleshooting-guides.md) for Azure Database for PostgreSQL flexible server.
## Release: March 2023
-* General availability of [Read Replica](concepts-read-replicas.md) for Azure Database for PostgreSQL ΓÇô Flexible Server.
-* Public preview of [PgBouncer Metrics](./concepts-monitoring.md#pgbouncer-metrics) for Azure Database for PostgreSQL ΓÇô Flexible Server.
-* General availability of [Azure Monitor workbooks](./concepts-workbooks.md) for Azure Database for PostgreSQL ΓÇô Flexible Server
+* General availability of [Read Replica](concepts-read-replicas.md) for Azure Database for PostgreSQL flexible server.
+* Public preview of [PgBouncer Metrics](./concepts-monitoring.md#pgbouncer-metrics) for Azure Database for PostgreSQL flexible server.
+* General availability of [Azure Monitor workbooks](./concepts-workbooks.md) for Azure Database for PostgreSQL flexible server
## Release: February 2023
-* Public preview of [Autovacuum Metrics](./concepts-monitoring.md#autovacuum-metrics) for Azure Database for PostgreSQL ΓÇô Flexible Server.
+* Public preview of [Autovacuum Metrics](./concepts-monitoring.md#autovacuum-metrics) for Azure Database for PostgreSQL flexible server.
* Support for [extension](concepts-extensions.md) semver with new servers<sup>$</sup>
-* Public Preview of [Major Version Upgrade](concepts-major-version-upgrade.md) for Azure Database for PostgreSQL ΓÇô Flexible Server.
+* Public Preview of [Major Version Upgrade](concepts-major-version-upgrade.md) for Azure Database for PostgreSQL flexible server.
* Support for [Geo-redundant backup feature](./concepts-backup-restore.md#geo-redundant-backup-and-restore) when using [Disk Encryption with Customer Managed Key (CMK)](./concepts-data-encryption.md#how-data-encryption-with-a-customer-managed-key-work) feature.
* Support for [minor versions](./concepts-supported-versions.md) 14.6, 13.9, 12.13, 11.18 <sup>$</sup>

## Release: January 2023
-* General availability of [Azure Active Directory Support](./concepts-azure-ad-authentication.md) for Azure Database for PostgreSQL - Flexible Server in all Azure Public Regions
-* General availability of [Customer Managed Key feature](./concepts-data-encryption.md) with Azure Database for PostgreSQL - Flexible Server in all Azure public regions
+* General availability of [Azure Active Directory Support](./concepts-azure-ad-authentication.md) for Azure Database for PostgreSQL flexible server in all Azure Public Regions
+* General availability of [Customer Managed Key feature](./concepts-data-encryption.md) with Azure Database for PostgreSQL flexible server in all Azure public regions
## Release: December 2022
* Support for [extensions](concepts-extensions.md) pg_hint_plan with new servers<sup>$</sup>
-* General availability of [Customer Managed Key feature](./concepts-data-encryption.md) with Azure Database for PostgreSQL - Flexible Server in Canada East, Canada Central, Southeast Asia, Switzerland North, Switzerland West, Brazil South and East Asia Azure regions
+* General availability of [Customer Managed Key feature](./concepts-data-encryption.md) with Azure Database for PostgreSQL flexible server in Canada East, Canada Central, Southeast Asia, Switzerland North, Switzerland West, Brazil South and East Asia Azure regions
## Release: November 2022
-* Public preview of [Enhanced Metrics](./concepts-monitoring.md#enhanced-metrics) for Azure Database for PostgreSQL ΓÇô Flexible Server
+* Public preview of [Enhanced Metrics](./concepts-monitoring.md#enhanced-metrics) for Azure Database for PostgreSQL flexible server
* Support for [minor versions](./concepts-supported-versions.md) 14.5, 13.8, 12.12, 11.17. <sup>$</sup>
-* General availability of Azure Database for PostgreSQL - Flexible Server in China North 3 & China East 3 Regions.
+* General availability of Azure Database for PostgreSQL flexible server in China North 3 & China East 3 Regions.
## Release: October 2022
This page provides latest news and updates regarding feature additions, engine v
* Support for [Read Replica](./concepts-read-replicas.md) feature in public preview.
* Support for [Azure Active Directory](concepts-azure-ad-authentication.md) authentication in public preview.
* Support for [Customer managed keys](concepts-data-encryption.md) in public preview.
-* Published [Security and compliance certifications](./concepts-compliance.md) for Flexible Server.
+* Published [Security and compliance certifications](./concepts-compliance.md) for Azure Database for PostgreSQL flexible server.
* Postgres 14 is now the default PostgreSQL version.

## Release: September 2022
This page provides latest news and updates regarding feature additions, engine v
* Support for choosing [standby availability zone](./how-to-manage-high-availability-portal.md) when deploying zone-redundant high availability.
* Support for [extensions](concepts-extensions.md) PLV8, pgrouting with new servers<sup>$</sup>
* Version updates for [extension](concepts-extensions.md) PostGIS.
-* General availability of Azure Database for PostgreSQL - Flexible Server in Canada East and Jio India West regions.
+* General availability of Azure Database for PostgreSQL flexible server in Canada East and Jio India West regions.
<sup>**$**</sup> New servers get these features automatically. In your existing servers, these features are enabled during your server's future maintenance window.
This page provides latest news and updates regarding feature additions, engine v
## Release: April 2022
* Support for [latest PostgreSQL minors](./concepts-supported-versions.md) 13.6, 12.10 and 11.15 with new server creates<sup>$</sup>.
-* Support for updating Private DNS Zone for [Azure Database for PostgreSQL - Flexible Server private networking](./concepts-networking.md) for existing servers<sup>$</sup>.
+* Support for updating Private DNS Zone for [Azure Database for PostgreSQL flexible server private networking](./concepts-networking.md) for existing servers<sup>$</sup>.
<sup>**$**</sup> New servers get these features automatically. In your existing servers, these features are enabled during your server's future maintenance window.
This page provides latest news and updates regarding feature additions, engine v
## Release: November 2021
-* Azure Database for PostgreSQL is [**Generally Available**](https://techcommunity.microsoft.com/t5/azure-database-for-postgresql/azure-database-for-postgresql-flexible-server-is-now-ga/ba-p/2987030).
+* Azure Database for PostgreSQL flexible server is [**Generally Available**](https://techcommunity.microsoft.com/t5/azure-database-for-postgresql/azure-database-for-postgresql-flexible-server-is-now-ga/ba-p/2987030).
* Support for [latest PostgreSQL minors](./concepts-supported-versions.md) 13.4, 12.8 and 11.13 with new server creates<sup>$</sup>.
* Support for [Geo-redundant backup and restore](concepts-backup-restore.md) feature in preview in selected paired regions - East US 2, Central US, North Europe, West Europe, Japan East, and Japan West.
* Support for [new regions](overview.md#azure-regions) North Central US, Sweden Central, and West US 3.
This page provides latest news and updates regarding feature additions, engine v
* Support for [new regions](overview.md#azure-regions) Central India and Japan West.
* Support for non-SSL mode of connectivity using a new `require_secure_transport` server parameter.
* Support for `log_line_prefix` server parameter, which adds the string at the beginning of each log line.
-* Support for [Azure Resource Health](../../service-health/resource-health-overview.md) for Flexible server health diagnosis and to get support.
+* Support for [Azure Resource Health](../../service-health/resource-health-overview.md) for Azure Database for PostgreSQL flexible server health diagnosis and to get support.
* Several bug fixes, stability, and performance improvements.

## Release: July 2021
This page provides latest news and updates regarding feature additions, engine v
## Release: October 2020 - March 2021
-* Improved experience to [connect](connect-azure-cli.md) to the Flexible server using Azure CLI with the `az postgres flexible- server connect` command.
+* Improved experience to [connect](connect-azure-cli.md) to the Azure Database for PostgreSQL flexible server instance using Azure CLI with the `az postgres flexible-server connect` command.
* Support for [new regions](overview.md#azure-regions).
* Several portal improvements, including display of minor versions, summary of metrics on the overview blade.
* Several bug fixes, stability, and performance improvements.

## Contacts
-For any questions or suggestions you might have on Azure Database for PostgreSQL flexible server, send an email to the Azure Database for PostgreSQL Team ([@Ask Azure DB for PostgreSQL](mailto:AskAzureDBforPostgreSQL@service.microsoft.com)). Please note that this email address isn't a technical support alias.
+For any questions or suggestions you might have on Azure Database for PostgreSQL flexible server, send an email to the Azure Database for PostgreSQL flexible server Team ([@Ask Azure Database for PostgreSQL flexible server](mailto:AskAzureDBforPostgreSQL@service.microsoft.com)). Please note that this email address isn't a technical support alias.
In addition, consider the following points of contact as appropriate:
## Frequently Asked Questions
- Will Flexible Server, replace Single Server or Will Single Server be retired soon?
+Will Azure Database for PostgreSQL flexible server replace Azure Database for PostgreSQL single server, or will Azure Database for PostgreSQL single server be retired soon?
-We continue to support Single Server and encourage you to adopt Flexible Server, which has richer capabilities such as zone resilient HA, predictable performance, maximum control, custom maintenance window, cost optimization controls and simplified developer experience suitable for your enterprise workloads. If we decide to retire any service, feature, API or SKU, you'll receive advance notice including a migration or transition path. Learn more about Microsoft Lifecycle policies [here](/lifecycle/faq/general-lifecycle).
+We continue to support Azure Database for PostgreSQL single server and encourage you to adopt Azure Database for PostgreSQL flexible server, which has richer capabilities such as zone resilient HA, predictable performance, maximum control, custom maintenance window, cost optimization controls and simplified developer experience suitable for your enterprise workloads. If we decide to retire any service, feature, API or SKU, you'll receive advance notice including a migration or transition path. Learn more about Microsoft Lifecycle policies [here](/lifecycle/faq/general-lifecycle).
## Next steps
postgresql Service Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/service-overview.md
+
+ Title: Service overview
+description: Provides an overview of the Azure Database for PostgreSQL - Flexible Server relational database service.
++++++ Last updated : 06/24/2022
+adobe-target: true
++
+# What is Azure Database for PostgreSQL - Flexible Server?
+++
+> [!IMPORTANT]
+> Azure Database for PostgreSQL - Hyperscale (Citus) is now [Azure Cosmos DB for PostgreSQL](../../cosmos-db/postgresql/introduction.md). To learn more about this change, see [Where is Hyperscale (Citus)?](../hyperscale/moved.md).
+
+Azure Database for PostgreSQL flexible server is a relational database service in the Microsoft cloud based on the [PostgreSQL open source relational database](https://www.postgresql.org/). Azure Database for PostgreSQL flexible server delivers:
+
+- Built-in high availability.
+- Data protection using automatic backups and point-in-time-restore for up to 35 days.
+- Automated maintenance for underlying hardware, operating system and database engine to keep the service secure and up to date.
+- Predictable performance, using inclusive pay-as-you-go pricing.
+- Elastic scaling within seconds.
+- Enterprise grade security and industry-leading compliance to protect sensitive data at-rest and in-motion.
+- Monitoring and automation to simplify management and monitoring for large-scale deployments.
+- Industry-leading support experience.
++
+These capabilities require almost no administration, and all are provided at no extra cost. They allow you to focus on rapid application development and accelerating your time to market rather than allocating precious time and resources to managing virtual machines and infrastructure. In addition, you can continue to develop your application with the open-source tools and platform of your choice to deliver with the speed and efficiency your business demands, all without having to learn new skills.
+
+## Deployment modes
+
+Azure Database for PostgreSQL, powered by the PostgreSQL community edition, is available in two deployment modes:
+
+- Azure Database for PostgreSQL flexible server
+- Azure Database for PostgreSQL single server
+
+### Azure Database for PostgreSQL flexible server
+
+Azure Database for PostgreSQL flexible server is a fully managed database service designed to provide more granular control and flexibility over database management functions and configuration settings. In general, the service provides more flexibility and customization based on user requirements. The flexible server architecture allows users to opt for high availability within a single availability zone and across multiple availability zones. Azure Database for PostgreSQL flexible server provides better cost optimization controls with the ability to stop/start the server, and a burstable compute tier that's ideal for workloads that don't need full compute capacity continuously. Azure Database for PostgreSQL flexible server currently supports the community versions of PostgreSQL 11, 12, 13, and 14, with plans to add newer versions soon. Azure Database for PostgreSQL flexible server is generally available today in a wide variety of [Azure regions](overview.md#azure-regions).
+
+Azure Database for PostgreSQL flexible server instances are best suited for:
+
+- Application development that requires better control and customization
+- Cost optimization controls with the ability to stop/start the server
+- Zone-redundant high availability
+- Managed maintenance windows
+
+For a detailed overview of Azure Database for PostgreSQL flexible server deployment mode, see [Azure Database for PostgreSQL - Flexible Server](overview.md).
+
+### Azure Database for PostgreSQL single server
+
+Azure Database for PostgreSQL single server is a fully managed database service with minimal requirements for database customization. The single server platform is designed to handle most database management functions, such as patching, backups, high availability, and security, with minimal user configuration and control. The architecture is optimized for built-in high availability with 99.99% availability in a single availability zone. It supports the community versions of PostgreSQL 9.5, 9.6, 10, and 11.
+
+The Azure Database for PostgreSQL single server deployment option has three pricing tiers: Basic, General Purpose, and Memory Optimized. Each tier offers different resource capabilities to support your database workloads. You can build your first app on a small database for a few dollars a month, and then adjust the scale to meet the needs of your solution. Dynamic scalability enables your database to transparently respond to rapidly changing resource requirements. You only pay for the resources you need, and only when you need them. See [Pricing tiers](../single-server/concepts-pricing-tiers.md) for details.
+
+Azure Database for PostgreSQL single server instances are best suited for cloud native applications designed to handle automated patching without the need for granular control on the patching schedule and custom PostgreSQL configuration settings.
+
+For a detailed overview of the Azure Database for PostgreSQL single server deployment mode, see [Azure Database for PostgreSQL - Single Server](../single-server/overview-single-server.md).
+
postgresql Troubleshoot Canceling Statement Due To Conflict With Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/troubleshoot-canceling-statement-due-to-conflict-with-recovery.md
Title: Canceling statement due to conflict with recovery - Azure Database for PostgreSQL - Flexible Server
+ Title: Canceling statement due to conflict with recovery
description: Provides resolutions for a read replica error - Canceling statement due to conflict with recovery.
Last updated 10/5/2023
# Canceling statement due to conflict with recovery++ This article helps you solve a problem that occurs when you execute queries against a read replica.
This article helps you solve a problem that occurs during executing queries agai
2. Error messages such as "Canceling statement due to conflict with recovery" appear in the logs or in the query output. 3. There might be a noticeable delay or lag in replication from the primary to the read replica.
-In the provided screenshot, on the left is the primary Azure Database for PostgreSQL - Flexible Server instance, and on the right is the read replica.
+In the provided screenshot, on the left is the primary Azure Database for PostgreSQL flexible server instance, and on the right is the read replica.
* **Read replica console (right side of the screenshot above)** * We can observe a lengthy `SELECT` statement in progress. A vital aspect to note about SQL is its consistent view of the data. When an SQL statement is executed, it essentially "freezes" its view of the data. Throughout its execution, the SQL statement always sees a consistent snapshot of the data, even if changes are occurring concurrently elsewhere. * **Primary console (left side of the screenshot above)** * An `UPDATE` operation has been executed. While an `UPDATE` by itself doesn't necessarily disrupt the behavior of the read replica, the subsequent operation does. After the update, a `VACUUM` operation (in this case, manually triggered for demonstration purposes, but it's noteworthy that an autovacuum process could also be initiated automatically) is executed. * The `VACUUM`'s role is to reclaim space by removing old versions of rows. Given that the read replica is running a lengthy `SELECT` statement, it's currently accessing some of these rows that `VACUUM` targets for removal.
- * These changes initiated by the `VACUUM` operation, which include the removal of rows, get logged into the Write-Ahead Log (`WAL`). As Azure Database for PostgreSQL Flexible Server read replicas utilize native PostgreSQL physical replication, these changes are later sent to the read replica.
+ * These changes initiated by the `VACUUM` operation, which include the removal of rows, get logged into the Write-Ahead Log (`WAL`). As Azure Database for PostgreSQL flexible server read replicas utilize native PostgreSQL physical replication, these changes are later sent to the read replica.
* Here lies the crux of the issue: the `VACUUM` operation, unaware of the ongoing `SELECT` statement on the read replica, removes rows that the read replica still needs. This scenario results in what's known as a replication conflict. The aftermath of this scenario is that the read replica experiences a replication conflict due to the rows removed by the `VACUUM` operation. By default, the read replica attempts to resolve this conflict for a duration of 30 seconds, since the default value of `max_standby_streaming_delay` is set to 30 seconds. After this duration, if the conflict remains unresolved, the query on the read replica is canceled. ## Cause
-The root cause of this issue is that the read replica in PostgreSQL is a continuously recovering system. This situation means that while the replica is catching up with the primary, it's essentially in a state of constant recovery.
-If a query on a read replica tries to read a row that is simultaneously being updated by the recovery process (because the primary has made a change), PostgreSQL might cancel the query to allow the recovery to proceed without interruption.
+The root cause of this issue is that the read replica in Azure Database for PostgreSQL flexible server is a continuously recovering system. This situation means that while the replica is catching up with the primary, it's essentially in a state of constant recovery.
+If a query on a read replica tries to read a row that is simultaneously being updated by the recovery process (because the primary has made a change), Azure Database for PostgreSQL flexible server might cancel the query to allow the recovery to proceed without interruption.
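To confirm that recovery conflicts are the cause, you can query the standard PostgreSQL statistics view `pg_stat_database_conflicts` on the read replica. The following is a minimal sketch that uses `psql` with placeholder connection values; the `confl_snapshot` column counts queries canceled because `VACUUM` removed row versions that a query on the replica still needed.

```bash
# Run against the read replica (placeholder host and user values).
psql "host=<replica-name>.postgres.database.azure.com user=<admin-username> dbname=postgres sslmode=require" \
  -c "SELECT datname, confl_snapshot, confl_lock, confl_bufferpin, confl_deadlock
      FROM pg_stat_database_conflicts;"
```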
## Resolution 1. **Adjust `max_standby_streaming_delay`**: Increase the `max_standby_streaming_delay` parameter on the read replica. Increasing the value of the setting allows the replica more time to resolve conflicts before it decides to cancel a query. However, this might also increase replication lag, so it's a trade-off. This parameter is dynamic, meaning changes take effect without requiring a server restart.
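For example, assuming placeholder resource names and that the parameter value is expressed in milliseconds, the change might look like this sketch:

```azurecli
# Give the replica up to 60 seconds (60000 ms) to resolve conflicts before canceling a query.
az postgres flexible-server parameter set \
  --resource-group <resource-group> \
  --server-name <replica-server-name> \
  --name max_standby_streaming_delay \
  --value 60000
```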
If a query on a read replica tries to read a row that is simultaneously being up
> [!CAUTION] > Enabling `hot_standby_feedback` can lead to the following potential issues: >* This setting can prevent some necessary cleanup operations on the primary, potentially leading to table bloat (increased disk space usage due to unvacuumed old row versions).
->* Regular monitoring of the primary's disk space and table sizes is essential. Learn more about monitoring for Azure Database for PostgreSQL - Flexible Server [here](concepts-monitoring.md).
->* Be prepared to manage potential table bloat manually if it becomes problematic. Consider enabling [autovacuum tuning](how-to-enable-intelligent-performance-portal.md) in Azure Database for PostgreSQL - Flexible Server to help mitigate this issue.
+>* Regular monitoring of the primary's disk space and table sizes is essential. Learn more about monitoring for Azure Database for PostgreSQL flexible server [here](concepts-monitoring.md).
+>* Be prepared to manage potential table bloat manually if it becomes problematic. Consider enabling [autovacuum tuning](how-to-enable-intelligent-performance-portal.md) in Azure Database for PostgreSQL flexible server to help mitigate this issue.
-5. **Adjust `max_standby_archive_delay`**: The `max_standby_archive_delay` server parameter specifies the maximum delay that the server will allow when reading archived `WAL` data. If the replica of Azure Database for PostgreSQL - Flexible Server ever switches from streaming mode to file-based log shipping (though rare), tweaking this value can help resolve the query cancellation issue.
+5. **Adjust `max_standby_archive_delay`**: The `max_standby_archive_delay` server parameter specifies the maximum delay that the server will allow when reading archived `WAL` data. If the replica of the Azure Database for PostgreSQL flexible server instance ever switches from streaming mode to file-based log shipping (though rare), tweaking this value can help resolve the query cancellation issue.
postgresql Tutorial Django Aks Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/tutorial-django-aks-database.md
Title: 'Tutorial: Deploy Django on AKS cluster with PostgreSQL Flexible Server by using Azure CLI'
+ Title: 'Tutorial: Deploy Django on AKS cluster by using Azure CLI'
description: Learn how to quickly build and deploy Django on AKS with Azure Database for PostgreSQL - Flexible Server.
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-In this quickstart, you deploy a Django application on Azure Kubernetes Service (AKS) cluster with Azure Database for PostgreSQL - Flexible Server using the Azure CLI.
+In this quickstart, you deploy a Django application on an Azure Kubernetes Service (AKS) cluster with Azure Database for PostgreSQL flexible server using the Azure CLI.
-**[AKS](../../aks/intro-kubernetes.md)** is a managed Kubernetes service that lets you quickly deploy and manage clusters. **[Azure Database for PostgreSQL - Flexible Server ](overview.md)** is a fully managed database service designed to provide more granular control and flexibility over database management functions and configuration settings.
+[AKS](../../aks/intro-kubernetes.md) is a managed Kubernetes service that lets you quickly deploy and manage clusters. [Azure Database for PostgreSQL flexible server](overview.md) is a fully managed database service designed to provide more granular control and flexibility over database management functions and configuration settings.
> [!NOTE] > This quickstart assumes a basic understanding of Kubernetes concepts, Django and PostgreSQL.
After a few minutes, the command completes and returns JSON-formatted informatio
To manage a Kubernetes cluster, you use [kubectl](https://kubernetes.io/docs/reference/kubectl/overview/), the Kubernetes command-line client. If you use Azure Cloud Shell, `kubectl` is already installed. > [!NOTE]
-> If running Azure CLI locally , please run the [az aks install-cli](/cli/azure/aks#az-aks-install-cli) command to install `kubectl`.
+> If you're running the Azure CLI locally, run the [az aks install-cli](/cli/azure/aks#az-aks-install-cli) command to install `kubectl`.
To configure `kubectl` to connect to your Kubernetes cluster, use the [az aks get-credentials](/cli/azure/aks#az-aks-get-credentials) command. This command downloads credentials and configures the Kubernetes CLI to use them.
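A minimal sketch of those commands, assuming the cluster name `djangoappcluster` and resource group `django-project` used elsewhere in this tutorial:

```azurecli
# Install kubectl if you're running the Azure CLI locally.
az aks install-cli

# Download credentials and configure kubectl to use them.
az aks get-credentials --resource-group django-project --name djangoappcluster

# Verify the connection by listing the cluster nodes.
kubectl get nodes
```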
NAME STATUS ROLES AGE VERSION
aks-nodepool1-31718369-0 Ready agent 6m44s v1.12.8 ```
-## Create an Azure Database for PostgreSQL - Flexible Server
+## Create an Azure Database for PostgreSQL flexible server instance
-Create a flexible server with the [az postgreSQL flexible-server create](/cli/azure/postgres/flexible-server#az-postgres-flexible-server-create) command. The following command creates a server using service defaults and values from your Azure CLI's local context:
+Create an Azure Database for PostgreSQL flexible server instance with the [az postgres flexible-server create](/cli/azure/postgres/flexible-server#az-postgres-flexible-server-create) command. The following command creates a server using service defaults and values from your Azure CLI's local context:
```azurecli-interactive az postgres flexible-server create --public-access all ``` The server created has the below attributes:-- A new empty database, ```postgres``` is created when the server is first provisioned. In this quickstart we will use this database.-- Autogenerated server name, admin username, admin password, resource group name (if not already specified in local context), and in the same location as your resource group-- Using public-access argument allow you to create a server with public access to any client with correct username and password.-- Since the command is using local context it will create the server in the resource group ```django-project``` and in the region ```eastus```.
+- A new empty database, `postgres` is created when the server is first provisioned. In this quickstart we use this database.
+- Autogenerated server name, admin username, admin password, resource group name (if not already specified in local context), and in the same location as your resource group.
+- Using the public-access argument allows you to create a server with public access for any client with the correct username and password.
+- Since the command uses the local context, it creates the server in the resource group `django-project` and in the region `eastus`.
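As an optional check, you can verify connectivity to the new server. The following sketch uses placeholder values; the `az postgres flexible-server connect` command may prompt you to install the `rdbms-connect` CLI extension.

```azurecli
az postgres flexible-server connect \
  --name <server-name> \
  --admin-user <admin-username> \
  --database-name postgres
```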
## Build your Django docker image
Create a new [Django application](https://docs.djangoproject.com/en/3.1/intro/)
└─── manage.py ```
-Update ```ALLOWED_HOSTS``` in ```settings.py``` to make sure the Django application uses the external IP that gets assigned to kubernetes app.
+Update `ALLOWED_HOSTS` in `settings.py` to make sure the Django application uses the external IP that gets assigned to the Kubernetes app.
```python ALLOWED_HOSTS = ['*'] ```
-Update ```DATABASES={ }``` section in the ```settings.py``` file. The code snippet below is reading the database host, username and password from the Kubernetes manifest file.
+Update the `DATABASES={ }` section in the `settings.py` file. The code snippet below reads the database host, username, and password from the Kubernetes manifest file.
```python DATABASES={
DATABASES={
### Generate a requirements.txt file
-Create a ```requirements.txt``` file to list out the dependencies for the Django Application. Here is an example ```requirements.txt``` file. You can use [``` pip freeze > requirements.txt```](https://pip.pypa.io/en/stable/reference/pip_freeze/) to generate a requirements.txt file for your existing application.
+Create a `requirements.txt` file to list out the dependencies for the Django Application. Here's an example `requirements.txt` file. You can use [pip freeze > requirements.txt](https://pip.pypa.io/en/stable/reference/pip_freeze/) to generate a requirements.txt file for your existing application.
``` text Django==2.2.17
pytz==2020.4
### Create a Dockerfile
-Create a new file named ```Dockerfile``` and copy the code snippet below. This Dockerfile in setting up Python 3.8 and installing all the requirements listed in requirements.txt file.
+Create a new file named `Dockerfile` and copy the code snippet below. This Dockerfile sets up Python 3.8 and installs all the requirements listed in the requirements.txt file.
```docker # Use the official Python image from the Docker Hub
CMD python manage.py runserver 0.0.0.0:8000
### Build your image
-Make sure you're in the directory ```my-django-app``` in a terminal using the ```cd``` command. Run the following command to build your bulletin board image:
+Make sure you're in the directory `my-django-app` in a terminal using the `cd` command. Run the following command to build your bulletin board image:
```bash docker build --tag myblog:latest .
docker build --tag myblog:latest .
Deploy your image to [Docker hub](https://docs.docker.com/get-started/part3/#create-a-docker-hub-repository-and-push-your-image) or [Azure Container registry](../../container-registry/container-registry-get-started-azure-cli.md). > [!IMPORTANT]
-> If you are using Azure container registry (ACR), then run the ```az aks update``` command to attach ACR account with the AKS cluster.
+> If you are using Azure Container Registry (ACR), run the `az aks update` command to attach the ACR account to the AKS cluster.
> > ```azurecli-interactive > az aks update -n djangoappcluster -g django-project --attach-acr <your-acr-name>
Deploy your image to [Docker hub](https://docs.docker.com/get-started/part3/#cre
## Create Kubernetes manifest file
-A Kubernetes manifest file defines a desired state for the cluster, such as what container images to run. Let's create a manifest file named ```djangoapp.yaml``` and copy in the following YAML definition.
+A Kubernetes manifest file defines a desired state for the cluster, such as what container images to run. Let's create a manifest file named `djangoapp.yaml` and copy in the following YAML definition.
> [!IMPORTANT]
-> Update ```env``` section below with your ```SERVERNAME```, ```YOUR-DATABASE-USERNAME```, ```YOUR-DATABASE-PASSWORD``` of your postgres flexible server.
+> Update the `env` section below with the `SERVERNAME`, `YOUR-DATABASE-USERNAME`, and `YOUR-DATABASE-PASSWORD` values of your Azure Database for PostgreSQL flexible server instance.
```yaml apiVersion: apps/v1
deployment "django-app" created
service "python-svc" created ```
-A deployment ```django-app``` allows you to describes details on of your deployment such as which images to use for the app, the number of pods and pod configuration. A service ```python-svc``` is created to expose the application through an external IP.
+A deployment `django-app` describes the details of your deployment, such as which images to use for the app, the number of pods, and the pod configuration. A service `python-svc` is created to expose the application through an external IP.
## Test the application
When the *EXTERNAL-IP* address changes from *pending* to an actual public IP add
django-app LoadBalancer 10.0.37.27 52.179.23.131 80:30572/TCP 2m ```
-Now open a web browser to the external IP address of your service (http://\<service-external-ip-address\>) and view the Django application.
+Now open a web browser to the external IP address of your service (`http://<service-external-ip-address>`) and view the Django application.
> [!NOTE]
-> - Currently the Django site is not using HTTPS. It is recommended to [ENABLE TLS with your own certificates](../../aks/ingress-own-tls.md).
-> - You can enable [HTTP routing](../../aks/http-application-routing.md) for your cluster. When http routing is enabled, it configures an Ingress controller in your AKS cluster. As > > applications are deployed, the solution also creates publicly accessible DNS names for application endpoints.
+> - Currently the Django site isn't using HTTPS. It's recommended to [ENABLE TLS with your own certificates](../../aks/ingress-own-tls.md).
+> - You can enable [HTTP routing](../../aks/http-application-routing.md) for your cluster. When http routing is enabled, it configures an Ingress controller in your AKS cluster. As applications are deployed, the solution also creates publicly accessible DNS names for application endpoints.
## Run database migrations
For any django application, you would need to run database migration or collect
$ kubectl get pods ```
-You will see an output like this:
+You see an output like this:
```output NAME READY STATUS RESTARTS AGE django-app-5d9cd6cd8-l6x4b 1/1 Running 0 2m ```
-Once the pod name has been found you can run django database migrations with the command ```$ kubectl exec <pod-name> -- [COMMAND]```. Note ```/code/``` is the working directory for the project define in ```Dockerfile``` above.
+Once you've found the pod name, you can run Django database migrations with the command `$ kubectl exec <pod-name> -- [COMMAND]`. Note that `/code/` is the working directory for the project defined in the `Dockerfile` above.
```bash $ kubectl exec django-app-5d9cd6cd8-l6x4b -- python /code/manage.py migrate
Running migrations:
. . . . . . ```
-If you run into issues, please run ```kubectl logs <pod-name>``` to see what exception is thrown by your application. If the application is working successfully you would see an output like this when running ```kubectl logs```.
+If you run into issues, run `kubectl logs <pod-name>` to see what exception is thrown by your application. If the application is working successfully, you'll see output like this when running `kubectl logs`.
```output Watching for file changes with StatReloader
az group delete --name django-project --yes --no-wait
``` > [!NOTE]
-> When you delete the cluster, the Microsoft Entra service principal used by the AKS cluster is not removed. For steps on how to remove the service principal, see [AKS service principal considerations and deletion](../../aks/kubernetes-service-principal.md#other-considerations). If you used a managed identity, the identity is managed by the platform and does not require removal.
+> When you delete the cluster, the Microsoft Entra service principal used by the AKS cluster is not removed. For steps on how to remove the service principal, see [AKS service principal considerations and deletion](../../aks/kubernetes-service-principal.md#other-considerations). If you used a managed identity, the identity is managed by the platform and doesn't require removal.
## Next steps - Learn how to [access the Kubernetes web dashboard](../../aks/kubernetes-dashboard.md) for your AKS cluster - Learn how to [enable continuous deployment](../../aks/deployment-center-launcher.md) - Learn how to [scale your cluster](../../aks/tutorial-kubernetes-scale.md)-- Learn how to manage your [postgres flexible server](./quickstart-create-server-cli.md)
+- Learn how to manage your [Azure Database for PostgreSQL flexible server instance](./quickstart-create-server-cli.md)
- Learn how to [configure server parameters](./howto-configure-server-parameters-using-cli.md) for your database server.
postgresql Tutorial Django App Service Postgres https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/tutorial-django-app-service-postgres.md
Title: Tutorial on how to Deploy Django app with App Service and Azure Database for PostgreSQL - Flexible Server in virtual network
-description: Deploy Django app with App Serice and Azure Database for PostgreSQL - Flexible Server in virtual network
+ Title: 'Tutorial: Deploy Django app with App Service in virtual network'
+description: Tutorial on how to deploy Django app with App Service and Azure Database for PostgreSQL - Flexible Server in a virtual network.
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-In this tutorial you'll learn how to deploy a Django application in Azure using App Services and Azure Database for PostgreSQL - Flexible Server in a virtual network.
+In this tutorial, you learn how to deploy a Django application in Azure by using App Service and Azure Database for PostgreSQL flexible server in a virtual network.
## Prerequisites
If you don't have an Azure subscription, create a [free](https://azure.microsoft
This article requires that you're running the Azure CLI version 2.0 or later locally. To see the version installed, run the `az --version` command. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
-You'll need to log in to your account using the [az login](/cli/azure/authenticate-azure-cli) command. Note the **id** property from the command output for the corresponding subscription name.
+You need to log in to your account using the [az login](/cli/azure/authenticate-azure-cli) command. Note the **id** property from the command output for the corresponding subscription name.
```azurecli az login
The djangoapp sample contains the data-driven Django polls app you get by follow
The sample is also modified to run in a production environment like App Service: - Production settings are in the *azuresite/production.py* file. Development details are in *azuresite/settings.py*.-- The app uses production settings when the `DJANGO_ENV` environment variable is set to "production". You create this environment variable later in the tutorial along with others used for the PostgreSQL database configuration.
+- The app uses production settings when the `DJANGO_ENV` environment variable is set to "production". You create this environment variable later in the tutorial along with others used for the Azure Database for PostgreSQL flexible server database configuration.
These changes are specific to configuring Django to run in any production environment and aren't particular to App Service. For more information, see the [Django deployment checklist](https://docs.djangoproject.com/en/2.1/howto/deployment/checklist/). ## Create a PostgreSQL Flexible Server in a new virtual network
-Create a private flexible server and a database inside a virtual network (VNET) using the following command:
+Create a private Azure Database for PostgreSQL flexible server instance and a database inside a virtual network (VNET) using the following command:
```azurecli
-# Create Flexible server in a private virtual network (VNET)
+# Create Azure Database for PostgreSQL flexible server instance in a private virtual network (VNET)
az postgres flexible-server create --resource-group myresourcegroup --vnet myvnet --location westus2 ```
This command performs the following actions, which may take a few minutes:
- Create the resource group if it doesn't already exist. - Generates a server name if it isn't provided.-- Create a new virtual network for your new postgreSQL server, if you choose to do so after prompted. **Make a note of virtual network name and subnet name** created for your server since you need to add the web app to the same virtual network.
+- Create a new virtual network for your new Azure Database for PostgreSQL flexible server instance, if you choose to do so when prompted. **Make a note of the virtual network name and subnet name** created for your server, since you need to add the web app to the same virtual network.
- Creates admin username, password for your server if not provided. **Make a note of the username and password** to use in the next step.-- Create a database ```postgres``` that can be used for development. You can run [**psql** to connect to the database](quickstart-create-server-portal.md#connect-to-the-postgresql-database-using-psql) to create a different database.
+- Create a database `postgres` that can be used for development. You can [run psql to connect to the database](quickstart-create-server-portal.md#connect-to-the-postgresql-database-using-psql) to create a different database.
> [!NOTE]
-> Make a note of your password that will be generate for you if not provided. If you forget the password you would have to reset the password using `az postgres flexible-server update` command
+> Make a note of the password that's generated for you if you don't provide one. If you forget the password, you have to reset it by using the `az postgres flexible-server update` command, as sketched below.
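A sketch of that reset with placeholder values:

```azurecli
az postgres flexible-server update \
  --resource-group myresourcegroup \
  --name <server-name> \
  --admin-password <new-password>
```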
## Deploy the code to Azure App Service
-In this section, you create app host in App Service app, connect this app to the Postgres database, then deploy your code to that host.
+In this section, you create the app host in an App Service app, connect the app to the Azure Database for PostgreSQL flexible server database, and then deploy your code to that host.
### Create the App Service web app in a virtual network In the terminal, make sure you're in the repository root (`djangoapp`) that contains the app code.
-Create an App Service app (the host process) with the [`az webapp up`](/cli/azure/webapp#az-webapp-up) command:
+Create an App Service app (the host process) with the [az webapp up](/cli/azure/webapp#az-webapp-up) command:
```azurecli # Create a web app
az webapp up --resource-group myresourcegroup --location westus2 --plan DjangoPo
# Enable VNET integration for web app.
-# Replace <vnet-name> and <subnet-name> with the virtual network and subnet name that the flexible server is using.
+# Replace <vnet-name> and <subnet-name> with the virtual network and subnet name that the Azure Database for PostgreSQL flexible server instance is using.
az webapp vnet-integration add -g myresourcegroup -n mywebapp --vnet <vnet-name> --subnet <subnet-name> # Configure database information as environment variables
-# Use the postgres server name , database name , username , password for the database created in the previous steps
+# Use the Azure Database for PostgreSQL flexible server instance name, database name, username, and password for the database created in the previous steps
az webapp config appsettings set --settings DJANGO_ENV="production" DBHOST="<postgres-server-name>.postgres.database.azure.com" DBNAME="postgres" DBUSER="<username>" DBPASS="<password>" ``` - For the `--location` argument, use the same location as you did for the database in the previous section.-- Replace *\<app-name>* with a unique name across all Azure (the server endpoint is `https://\<app-name>.azurewebsites.net`). Allowed characters for *\<app-name>* are `A`-`Z`, `0`-`9`, and `-`. A good pattern is to use a combination of your company name and an app identifier.
+- Replace *\<app-name>* with a unique name across all Azure (the server endpoint is `https://<app-name>.azurewebsites.net`). Allowed characters for *\<app-name>* are `A`-`Z`, `0`-`9`, and `-`. A good pattern is to use a combination of your company name and an app identifier.
- Create the [App Service plan](../../app-service/overview-hosting-plans.md) *DjangoPostgres-tutorial-plan* in the Basic pricing tier (B1), if it doesn't exist. `--plan` and `--sku` are optional. - Create the App Service app if it doesn't exist. - Enable default logging for the app, if not already enabled. - Upload the repository using ZIP deployment with build automation enabled.-- **az webapp vnet-integration** command adds the web app in the same virtual network as the postgres server.
+- **az webapp vnet-integration** command adds the web app in the same virtual network as the Azure Database for PostgreSQL flexible server instance.
- The app code expects to find database information in many environment variables. To set environment variables in App Service, you create "app settings" with the [az webapp config appsettings set](/cli/azure/webapp/config/appsettings#az-webapp-config-appsettings-set) command. > [!TIP]
az webapp config appsettings set --settings DJANGO_ENV="production" DBHOST="<pos
### Run Django database migrations
-Django database migrations ensure that the schema in the PostgreSQL on Azure database match those described in your code.
+Django database migrations ensure that the schema in the Azure Database for PostgreSQL flexible server database matches the schema described in your code.
-1. Open an SSH session in the browser by navigating to *https://\<app-name>.scm.azurewebsites.net/webssh/host* and sign in with your Azure account credentials (not the database server credentials).
+1. Open an SSH session in the browser by navigating to `https://<app-name>.scm.azurewebsites.net/webssh/host` and sign in with your Azure account credentials (not the database server credentials).
2. In the SSH session, run the following commands (you can paste commands using **Ctrl**+**Shift**+**V**):
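A sketch of the typical commands, assuming the default project layout (the same `migrate` step is repeated later in this tutorial):

```bash
cd site/wwwroot

# Apply the schema to the Azure Database for PostgreSQL flexible server database.
python manage.py migrate

# Create the superuser used to sign in to the admin pages in the next section.
python manage.py createsuperuser
```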
Django database migrations ensure that the schema in the PostgreSQL on Azure dat
### Create a poll question in the app
-1. In a browser, open the URL *http:\//\<app-name>.azurewebsites.net*. The app should display the message "No polls are available" because there are no specific polls yet in the database.
+1. In a browser, open the URL `http://<app-name>.azurewebsites.net`. The app should display the message "No polls are available" because there are no specific polls yet in the database.
-2. Browse to *http:\//\<app-name>.azurewebsites.net/admin*. Sign in using superuser credentials from the previous section (`root` and `postgres1`). Under **Polls**, select **Add** next to **Questions** and create a poll question with some choices.
+2. Browse to `http://<app-name>.azurewebsites.net/admin`. Sign in using superuser credentials from the previous section (`root` and `postgres1`). Under **Polls**, select **Add** next to **Questions** and create a poll question with some choices.
-3. Browse again to *http:\//\<app-name>.azurewebsites.net/* to confirm that the questions are now presented to the user. Answer questions however you like to generate some data in the database.
+3. Browse again to `http://<app-name>.azurewebsites.net/` to confirm that the questions are now presented to the user. Answer questions however you like to generate some data in the database.
-**Congratulations!** You're running a Python Django web app in Azure App Service for Linux, with an active Postgres database.
+**Congratulations!** You're running a Python Django web app in Azure App Service for Linux, with an active Azure Database for PostgreSQL flexible server database.
> [!NOTE] > App Service detects a Django project by looking for a *wsgi.py* file in each subfolder, which `manage.py startproject` creates by default. When App Service finds that file, it loads the Django web app. For more information, see [Configure built-in Python image](../../app-service/configure-language-python.md).
python manage.py createsuperuser
python manage.py runserver ```
-Once the web app is fully loaded, the Django development server provides the local app URL in the message, "Starting development server at http://127.0.0.1:8000/. Quit the server with CTRL-BREAK".
+Once the web app is fully loaded, the Django development server provides the local app URL in the message, "Starting development server at `http://127.0.0.1:8000/`. Quit the server with CTRL-BREAK".
Test the app locally with the following steps:
-1. Go to *http:\//localhost:8000* in a browser, which should display the message "No polls are available".
+1. Go to `http://localhost:8000` in a browser, which should display the message "No polls are available".
-2. Go to *http:\//localhost:8000/admin* and sign in using the admin user you created previously. Under **Polls**, again select **Add** next to **Questions** and create a poll question with some choices.
+2. Go to `http://localhost:8000/admin` and sign in using the admin user you created previously. Under **Polls**, again select **Add** next to **Questions** and create a poll question with some choices.
-3. Go to *http:\//localhost:8000* again and answer the question to test the app.
+3. Go to `http://localhost:8000` again and answer the question to test the app.
4. Stop the Django server by pressing **Ctrl**+**C**.
python manage.py makemigrations
python manage.py migrate ```
-Run the development server again with `python manage.py runserver` and test the app at to *http:\//localhost:8000/admin*:
+Run the development server again with `python manage.py runserver` and test the app at `http://localhost:8000/admin`:
Stop the Django web server again with **Ctrl**+**C**.
This command uses the parameters cached in the *.azure/config* file. Because App
Because you made changes to the data model, you need to rerun database migrations in App Service.
-Open an SSH session again in the browser by navigating to *https://\<app-name>.scm.azurewebsites.net/webssh/host*. Then run the following commands:
+Open an SSH session again in the browser by navigating to `https://<app-name>.scm.azurewebsites.net/webssh/host`. Then run the following commands:
``` cd site/wwwroot
python manage.py migrate
### Review app in production
-Browse to *http:\//\<app-name>.azurewebsites.net* and test the app again in production. (Because you only changed the length of a database field, the change is only noticeable if you try to enter a longer response when creating a question.)
+Browse to `http://<app-name>.azurewebsites.net` and test the app again in production. (Because you only changed the length of a database field, the change is only noticeable if you try to enter a longer response when creating a question.)
> [!TIP] > You can use [django-storages](https://django-storages.readthedocs.io/en/latest/backends/azure.html) to store static & media assets in Azure storage. You can use Azure CDN for gzipping for static files.
Browse to *http:\//\<app-name>.azurewebsites.net* and test the app again in prod
In the [Azure portal](https://portal.azure.com), search for the app name and select the app in the results. By default, the portal shows your app's **Overview** page, which provides a general performance view. Here, you can also perform basic management tasks like browse, stop, restart, and delete. The tabs on the left side of the page show the different configuration pages you can open. ## Clean up resources
postgresql Tutorial Webapp Server Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/tutorial-webapp-server-vnet.md
Title: 'Tutorial: Create Azure Database for PostgreSQL - Flexible Server and Azure App Service Web App in same virtual network'
-description: Quickstart guide to create Azure Database for PostgreSQL - Flexible Server with Web App in a virtual network
+ Title: 'Tutorial: Create Azure App Service Web App in same virtual network'
+description: Quickstart guide to create an Azure Database for PostgreSQL - Flexible Server instance with a web app in the same virtual network.
Last updated 11/30/2021
-# Tutorial: Create an Azure Database for PostgreSQL - Flexible Server with App Services Web App in Virtual network
+# Tutorial: Create an Azure Database for PostgreSQL - Flexible Server instance with App Services Web App in virtual network
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-This tutorial shows you how create a Azure App Service Web app with Azure Database for PostgreSQL - Flexible Server inside a [Virtual network](../../virtual-network/virtual-networks-overview.md).
+This tutorial shows you how to create an Azure App Service web app with Azure Database for PostgreSQL flexible server inside a [virtual network](../../virtual-network/virtual-networks-overview.md).
In this tutorial you will learn how to: >[!div class="checklist"]
-> * Create a PostgreSQL flexible server in a virtual network
+> * Create an Azure Database for PostgreSQL flexible server instance in a virtual network
> * Create a web app > * Add the web app to the virtual network
-> * Connect to Postgres from the web app
+> * Connect to Azure Database for PostgreSQL flexible server from the web app
## Prerequisites
In this tutorial you will learn how to:
az account set --subscription <subscription ID> ```
-## Create a PostgreSQL Flexible Server in a new virtual network
+## Create an Azure Database for PostgreSQL flexible server instance in a new virtual network
-Create a private flexible server inside a virtual network (VNET) using the following command:
+Create a private Azure Database for PostgreSQL flexible server instance inside a virtual network (VNET) using the following command:
```azurecli az postgres flexible-server create --resource-group demoresourcegroup --name demoserverpostgres --vnet demoappvnet --location westus2
az postgres flexible-server create --resource-group demoresourcegroup --name dem
This command performs the following actions, which may take a few minutes: - Create the resource group if it doesn't already exist.-- Generates a server name if it is not provided.-- Create a new virtual network for your new postgreSQL server and subnet within this virtual network for the database server.
+- Generates a server name if it's not provided.
+- Create a new virtual network for your new Azure Database for PostgreSQL flexible server instance, and a subnet within this virtual network for the server instance.
- Creates an admin username and password for your server if not provided. - Creates an empty database called **postgres**
Checking the existence of the resource group ''...
Creating Resource group 'demoresourcegroup ' ... Creating new vnet "demoappvnet" in resource group "demoresourcegroup" ... Creating new subnet "Subnet095447391" in resource group "demoresourcegroup " and delegating it to "Microsoft.DBforPostgreSQL/flexibleServers"...
-Creating PostgreSQL Server 'demoserverpostgres' in group 'demoresourcegroup'...
+Creating Azure Database for PostgreSQL flexible server instance 'demoserverpostgres' in group 'demoresourcegroup'...
Your server 'demoserverpostgres' is using sku 'Standard_D2s_v3' (Paid Tier). Please refer to https://aka.ms/postgres-pricing for pricing details
-Make a note of your password. If you forget, you would have to resetyour password with 'az postgres flexible-server update -n demoserverpostgres --resource-group demoresourcegroup -p <new-password>'.
+Make a note of your password. If you forget, you have to reset your password with 'az postgres flexible-server update -n demoserverpostgres --resource-group demoresourcegroup -p <new-password>'.
{ "connectionString": "postgresql://generated-username:generated-password@demoserverpostgres.postgres.database.azure.com/postgres?sslmode=require", "host": "demoserverpostgres.postgres.database.azure.com",
Make a note of your password. If you forget, you would have to resetyour passwor
``` ## Create a Web App
-In this section, you create app host in App Service app, connect this app to the Postgres database, then deploy your code to that host. Make sure you're in the repository root of your application code in the terminal. Note Basic Plan does not support VNET integration. Please use Standard or Premium.
+In this section, you create the app host in an App Service app, connect the app to the Azure Database for PostgreSQL flexible server database, and then deploy your code to that host. Make sure you're in the repository root of your application code in the terminal. Note that the Basic plan doesn't support VNET integration; use Standard or Premium.
Create an App Service app (the host process) with the az webapp up command
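A sketch of that command with placeholder names; a Standard or Premium SKU is used because the Basic plan doesn't support VNET integration:

```azurecli
az webapp up --resource-group demoresourcegroup --location westus2 --plan myappserviceplan --sku P1V2 --name mywebapp
```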
Before enabling VNET integration, you need to have subnet that is delegated to A
az network vnet show --resource-group demoresourcegroup -n demoappvnet ```
-Run the following command to create a new subnet in the same virtual network as the database server was created. **Update the address-prefix to avoid conflict with the database subnet.**
+Run the following command to create a new subnet in the same virtual network where the Azure Database for PostgreSQL flexible server instance was created. **Update the address-prefix to avoid conflict with the Azure Database for PostgreSQL flexible server subnet.**
```azurecli az network vnet subnet create --resource-group demoresourcegroup --vnet-name demoappvnet --name webappsubnet --address-prefixes 10.0.1.0/24 --delegations Microsoft.Web/serverFarms
az webapp vnet-integration add --resource-group demoresourcegroup -n mywebapp -
``` ## Configure environment variables to connect the database
-With the code now deployed to App Service, the next step is to connect the app to the flexible server in Azure. The app code expects to find database information in a number of environment variables. To set environment variables in App Service, use [az webapp config appsettings set](/cli/azure/webapp/config/appsettings#az-webapp-config-appsettings-set) command.
+With the code now deployed to App Service, the next step is to connect the app to the Azure Database for PostgreSQL flexible server instance in Azure. The app code expects to find database information in a number of environment variables. To set environment variables in App Service, use [az webapp config appsettings set](/cli/azure/webapp/config/appsettings#az-webapp-config-appsettings-set) command.
```azurecli az webapp config appsettings set --name mywebapp --settings DBHOST="<postgres-server-name>.postgres.database.azure.com" DBNAME="postgres" DBUSER="<username>" DBPASS="<password>" ```-- Replace **postgres-server-name**,**username**,**password** for the newly created flexible server command.
+- Replace **postgres-server-name**, **username**, and **password** with the values for the newly created Azure Database for PostgreSQL flexible server instance.
- Replace **\<username\>** and **\<password\>** with the credentials that the command also generated for you. - The resource group and app name are drawn from the cached values in the .azure/config file. - The command creates settings named **DBHOST**, **DBNAME**, **DBUSER**, and **DBPASS**. If your application code uses different names for the database information, use those names for the app settings as mentioned in the code.
postgresql Concepts Single To Flexible https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/concepts-single-to-flexible.md
In this article, we provide compelling reasons for single server customers to mi
- **[Cost Savings](../flexible-server/how-to-deploy-on-azure-free-account.md)** - Flexible server allows you to stop and start the server on demand to lower your TCO. Your compute tier billing is stopped immediately, which allows you to have significant cost savings during development, testing and for time-bound predictable production workloads.
-- **[Support for new PG versions](../flexible-server/concepts-supported-versions.md)** - Flexible server currently supports PG version 11 and onwards till version 15. Newer community versions of PostgreSQL are supported only in flexible server.
+- **[Support for new PG versions](../flexible-server/concepts-supported-versions.md)** - Flexible server currently supports PostgreSQL versions 11 through 16. Newer community versions of PostgreSQL are supported only in flexible server.
- **Minimized Latency** - You can collocate your flexible server in the same availability zone as the application server, which results in minimal latency. This option isn't available in Single Server.
Along with data migration, the tool automatically provides the following built-i
- Migration of permissions of database objects on your source server such as GRANTS/REVOKES to the target server. > [!NOTE]
-> This functionality is enabled only for flexible servers in **Central US**, **Canada Central**, **France Central**, **Japan East** and **Australia East** regions. It will be enabled for flexible servers in other Azure regions soon. In the meantime, you can follow the steps mentioned in this [doc](../single-server/how-to-upgrade-using-dump-and-restore.md#migrate-the-roles) to perform user/roles migration
+> This functionality is enabled by default for flexible servers in all Azure public regions. It will be enabled for flexible servers in government clouds and China regions soon.
## Limitations
postgresql Sample Change Server Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/scripts/sample-change-server-configuration.md
Title: Azure CLI script - Change server configurations (PostgreSQL)
+ Title: Azure CLI script - Change server configurations
description: This sample CLI script lists all available server configuration options and updates the value of one of the options.
Last updated 01/26/2022
-# List and update configurations of an Azure Database for PostgreSQL server using Azure CLI
+# List and update configurations of an Azure Database for PostgreSQL - Flexible Server instance using Azure CLI
[!INCLUDE[applies-to-postgres-single-flexible-server](../includes/applies-to-postgresql-single-flexible-server.md)]
-This sample CLI script lists all available configuration parameters as well as their allowable values for Azure Database for PostgreSQL server, and sets the *log_retention_days* to a value that is other than the default one.
+This sample CLI script lists all available configuration parameters, as well as their allowable values, for Azure Database for PostgreSQL flexible server, and sets the *log_retention_days* parameter to a value other than the default.
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
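A sketch of the commands the script relies on, using placeholder server and resource group names:

```azurecli
# List all configurable server parameters and their allowed values.
az postgres server configuration list --resource-group myresourcegroup --server-name mydemoserver

# Set log_retention_days to a non-default value, then verify the change.
az postgres server configuration set --resource-group myresourcegroup --server-name mydemoserver --name log_retention_days --value 7
az postgres server configuration show --resource-group myresourcegroup --server-name mydemoserver --name log_retention_days
```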
This script uses the commands outlined in the following table:
| **Command** | **Notes** | ||| | [az group create](/cli/azure/group) | Creates a resource group in which all resources are stored. |
-| [az postgres server create](/cli/azure/postgres/server) | Creates a PostgreSQL server that hosts the databases. |
-| [az postgres server configuration list](/cli/azure/postgres/server/configuration) | List the configurations of an Azure Database for PostgreSQL server. |
-| [az postgres server configuration set](/cli/azure/postgres/server/configuration) | Update the configuration of an Azure Database for PostgreSQL server. |
-| [az postgres server configuration show](/cli/azure/postgres/server/configuration) | Show the configuration of an Azure Database for PostgreSQL server. |
+| [az postgres server create](/cli/azure/postgres/server) | Creates an Azure Database for PostgreSQL flexible server instance that hosts the databases. |
+| [az postgres server configuration list](/cli/azure/postgres/server/configuration) | List the configurations of an Azure Database for PostgreSQL flexible server instance. |
+| [az postgres server configuration set](/cli/azure/postgres/server/configuration) | Update the configuration of an Azure Database for PostgreSQL flexible server instance. |
+| [az postgres server configuration show](/cli/azure/postgres/server/configuration) | Show the configuration of an Azure Database for PostgreSQL flexible server instance. |
| [az group delete](/cli/azure/group) | Deletes a resource group including all nested resources. | ## Next steps - Read more information on the Azure CLI: [Azure CLI documentation](/cli/azure).-- Try additional scripts: [Azure CLI samples for Azure Database for PostgreSQL](../sample-scripts-azure-cli.md)-- For more information on server parameters, see [How To Configure server parameters in Azure portal](../howto-configure-server-parameters-using-portal.md).
+- Try additional scripts: [Azure CLI samples for Azure Database for PostgreSQL - Flexible Server](../single-server/sample-scripts-azure-cli.md)
+- For more information on server parameters, see [How to configure server parameters in Azure portal](../flexible-server/how-to-configure-server-parameters-using-portal.md).
postgresql Sample Create Server And Firewall Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/scripts/sample-create-server-and-firewall-rule.md
Title: Azure CLI Script - Create an Azure Database for PostgreSQL
-description: Azure CLI Script Sample - Creates an Azure Database for PostgreSQL server and configures a server-level firewall rule.
+ Title: Azure CLI Script - Create
+description: Azure CLI Script Sample - Creates an Azure Database for PostgreSQL - Flexible Server instance and configures a server-level firewall rule.
Last updated 01/26/2022
-# Create an Azure Database for PostgreSQL server and configure a firewall rule using the Azure CLI
+# Create an Azure Database for PostgreSQL - Flexible Server instance and configure a firewall rule using the Azure CLI
[!INCLUDE[applies-to-postgres-single-flexible-server](../includes/applies-to-postgresql-single-flexible-server.md)]
-This sample CLI script creates an Azure Database for PostgreSQL server and configures a server-level firewall rule. Once the script has been successfully run, the PostgreSQL server can be accessed from all Azure services and the configured IP address.
+This sample CLI script creates an Azure Database for PostgreSQL flexible server instance and configures a server-level firewall rule. Once the script has been successfully run, the Azure Database for PostgreSQL flexible server instance can be accessed from all Azure services and the configured IP address.
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
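A sketch of the key commands, using placeholder names and credentials:

```azurecli
# Create the server.
az postgres server create --resource-group myresourcegroup --name mydemoserver --location westus \
  --admin-user myadmin --admin-password <server-admin-password> --sku-name GP_Gen5_2

# Allow a client IP address range to reach the server.
az postgres server firewall-rule create --resource-group myresourcegroup --server-name mydemoserver \
  --name AllowMyIP --start-ip-address 192.168.0.1 --end-ip-address 192.168.0.1
```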
This script uses the commands outlined in the following table:
| **Command** | **Notes** | ||| | [az group create](/cli/azure/group) | Creates a resource group in which all resources are stored. |
-| [az postgres server create](/cli/azure/postgres/server) | Creates a PostgreSQL server that hosts the databases. |
+| [az postgres server create](/cli/azure/postgres/server) | Creates an Azure Database for PostgreSQL flexible server instance that hosts the databases. |
| [az postgres server firewall create](/cli/azure/postgres/server/firewall-rule) | Creates a firewall rule to allow access to the server and databases under it from the entered IP address range. | | [az group delete](/cli/azure/group) | Deletes a resource group including all nested resources. | ## Next steps - Read more information on the Azure CLI: [Azure CLI documentation](/cli/azure)-- Try additional scripts: [Azure CLI samples for Azure Database for PostgreSQL](../sample-scripts-azure-cli.md)
+- Try additional scripts: [Azure CLI samples for Azure Database for PostgreSQL - Flexible Server](../sample-scripts-azure-cli.md)
postgresql Sample Create Server With Vnet Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/scripts/sample-create-server-with-vnet-rule.md
Title: CLI script - Create server with vNet rule - Azure Database for PostgreSQL
-description: This sample CLI script creates an Azure Database for PostgreSQL server with a service endpoint on a virtual network and configures a vNet rule.
+ Title: CLI script - Create with vNet rule
+description: This sample CLI script creates an Azure Database for PostgreSQL - Flexible Server instance with a service endpoint on a virtual network and configures a vNet rule.
Last updated 01/26/2022
-# Create a PostgreSQL server and configure a vNet rule using the Azure CLI
+# Create an Azure Database for PostgreSQL - Flexible Server instance and configure a vNet rule using the Azure CLI
[!INCLUDE[applies-to-postgres-single-flexible-server](../includes/applies-to-postgresql-single-flexible-server.md)]
-This sample CLI script creates an Azure Database for PostgreSQL server and configures a vNet rule.
+This sample CLI script creates an Azure Database for PostgreSQL flexible server instance and configures a vNet rule.
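As a rough sketch of the steps the script performs (placeholder names throughout; the subnet uses the Microsoft.Sql service endpoint, which is the endpoint Azure Database for PostgreSQL single server relies on for vNet rules):

```azurecli
# Create a virtual network and a subnet with the service endpoint enabled
az network vnet create \
    --resource-group myResourceGroup \
    --name myVNet \
    --address-prefixes 10.0.0.0/16

az network vnet subnet create \
    --resource-group myResourceGroup \
    --vnet-name myVNet \
    --name mySubnet \
    --address-prefixes 10.0.1.0/24 \
    --service-endpoints Microsoft.Sql

# Create a vNet rule on an existing server (single-server command group)
az postgres server vnet-rule create \
    --resource-group myResourceGroup \
    --server-name mydemoserver \
    --name myVNetRule \
    --vnet-name myVNet \
    --subnet mySubnet
```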
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
This script uses the commands outlined in the following table:
| **Command** | **Notes** | ||| | [az group create](/cli/azure/group#az-group-create) | Creates a resource group in which all resources are stored. |
-| [az postgresql server create](/cli/azure/postgres/server/vnet-rule#az-postgres-server-vnet-rule-create) | Creates a PostgreSQL server that hosts the databases. |
-| [az network vnet list-endpoint-services](/cli/azure/network/vnet#az-network-vnet-list-endpoint-services#az-network-vnet-list-endpoint-services) | List which services support VNET service tunneling in a given region. |
+| [az postgres server create](/cli/azure/postgres/server#az-postgres-server-create) | Creates an Azure Database for PostgreSQL flexible server instance that hosts the databases. |
+| [az network vnet list-endpoint-services](/cli/azure/network/vnet#az-network-vnet-list-endpoint-services) | Lists which services support VNET service tunneling in a given region. |
| [az network vnet create](/cli/azure/network/vnet#az-network-vnet-create) | Creates a virtual network. |
-| [az network vnet subnet create](/cli/azure/network/vnet#az-network-vnet-subnet-create) | Create a subnet and associate an existing NSG and route table. |
+| [az network vnet subnet create](/cli/azure/network/vnet#az-network-vnet-subnet-create) | Creates a subnet and associates an existing NSG and route table. |
| [az network vnet subnet show](/cli/azure/network/vnet#az-network-vnet-subnet-show) |Shows details of a subnet. |
-| [az postgresql server vnet-rule create](/cli/azure/postgres/server/vnet-rule#az-postgres-server-vnet-rule-create) | Create a virtual network rule to allows access to a PostgreSQL server. |
+| [az postgres server vnet-rule create](/cli/azure/postgres/server/vnet-rule#az-postgres-server-vnet-rule-create) | Creates a virtual network rule to allow access to an Azure Database for PostgreSQL flexible server instance. |
| [az group delete](/cli/azure/group#az-group-delete) | Deletes a resource group including all nested resources. | ## Next steps - Read more information on the Azure CLI: [Azure CLI documentation](/cli/azure).-- Try additional scripts: [Azure CLI samples for Azure Database for PostgreSQL](../sample-scripts-azure-cli.md)
+- Try more scripts: [Azure CLI samples for Azure Database for PostgreSQL - Flexible Server](../sample-scripts-azure-cli.md)
postgresql Sample Point In Time Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/scripts/sample-point-in-time-restore.md
Title: Azure CLI script - Restore an Azure Database for PostgreSQL server
-description: This sample Azure CLI script shows how to restore an Azure Database for PostgreSQL server and its databases to a previous point in time.
+ Title: Azure CLI script - Restore
+description: This sample Azure CLI script shows how to restore an Azure Database for PostgreSQL - Flexible Server instance and its databases to a previous point in time.
Last updated 02/11/2022
-# Restore an Azure Database for PostgreSQL server using Azure CLI
+# Restore an Azure Database for PostgreSQL - Flexible Server instance using Azure CLI
[!INCLUDE[applies-to-postgres-single-flexible-server](../includes/applies-to-postgresql-single-flexible-server.md)]
-This sample CLI script restores a single Azure Database for PostgreSQL server to a previous point in time.
+This sample CLI script restores a single Azure Database for PostgreSQL flexible server instance to a previous point in time.
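A minimal sketch of the restore step follows; the source server, new server name, and timestamp are placeholders, not values from the sample script.

```azurecli
# Restore the source server to a new server at a specific point in time (UTC)
az postgres server restore \
    --resource-group myResourceGroup \
    --name mydemoserver-restored \
    --source-server mydemoserver \
    --restore-point-in-time "2024-01-10T13:10:00Z"
```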
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
This script uses the commands outlined in the following table:
| **Command** | **Notes** | ||| | [az group create](/cli/azure/group) | Creates a resource group in which all resources are stored. |
-| [az postgresql server create](/cli/azure/postgres/server#az-postgres-server-create) | Creates a PostgreSQL server that hosts the databases. |
+| [az postgres server create](/cli/azure/postgres/server#az-postgres-server-create) | Creates an Azure Database for PostgreSQL flexible server instance that hosts the databases. |
| [az postgres server restore](/cli/azure/postgres/server#az-postgres-server-restore) | Restores a server from backup. | | [az group delete](/cli/azure/group) | Deletes a resource group including all nested resources. | ## Next steps - Read more information on the Azure CLI: [Azure CLI documentation](/cli/azure).-- Try additional scripts: [Azure CLI samples for Azure Database for PostgreSQL](../sample-scripts-azure-cli.md)-- [How to backup and restore a server in Azure Database for PostgreSQL using the Azure portal](../howto-restore-server-portal.md)
+- Try additional scripts: [Azure CLI samples for Azure Database for PostgreSQL - Flexible Server](../sample-scripts-azure-cli.md)
+- [How to back up and restore a server in Azure Database for PostgreSQL - Flexible Server using the Azure portal](../howto-restore-server-portal.md)
postgresql Sample Scale Server Up Or Down https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/scripts/sample-scale-server-up-or-down.md
Title: Azure CLI script - Scale and monitor Azure Database for PostgreSQL
-description: Azure CLI Script Sample - Scale Azure Database for PostgreSQL server to a different performance level after querying the metrics.
+ Title: Azure CLI script - Scale and monitor
+description: Azure CLI Script Sample - Scale an Azure Database for PostgreSQL - Flexible Server instance to a different performance level after querying the metrics.
Last updated 01/26/2022
-# Monitor and scale a single PostgreSQL server using Azure CLI
+# Monitor and scale a single Azure Database for PostgreSQL - Flexible Server instance using Azure CLI
[!INCLUDE[applies-to-postgres-single-flexible-server](../includes/applies-to-postgresql-single-flexible-server.md)]
-This sample CLI script scales compute and storage for a single Azure Database for PostgreSQL server after querying the metrics. Compute can scale up or down. Storage can only scale up.
+This sample CLI script scales compute and storage for a single Azure Database for PostgreSQL flexible server instance after querying the metrics. Compute can scale up or down. Storage can only scale up.
> [!IMPORTANT] > Storage can only be scaled up, not down.
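A minimal sketch of the monitor-then-scale pattern follows, with placeholder names; the SKU and storage values are examples only, not values from the sample script.

```azurecli
# Check recent CPU usage for the server
az monitor metrics list \
    --resource $(az postgres server show --resource-group myResourceGroup --name mydemoserver --query id --output tsv) \
    --metric cpu_percent \
    --interval PT5M

# Scale compute to 4 vCores (up or down) and grow storage to 100 GiB (value in megabytes; storage can only increase)
az postgres server update \
    --resource-group myResourceGroup \
    --name mydemoserver \
    --sku-name GP_Gen5_4 \
    --storage-size 102400
```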
This script uses the commands outlined in the following table:
| **Command** | **Notes** | ||| | [az group create](/cli/azure/group) | Creates a resource group in which all resources are stored. |
-| [az postgres server create](/cli/azure/postgres/server#az-postgres-server-create) | Creates a PostgreSQL server that hosts the databases. |
-| [az postgres server update](/cli/azure/postgres/server#az-postgres-server-update) | Updates properties of the PostgreSQL server. |
-| [az monitor metrics list](/cli/azure/monitor/metrics) | List the metric value for the resources. |
+| [az postgres server create](/cli/azure/postgres/server#az-postgres-server-create) | Creates an Azure Database for PostgreSQL flexible server instance that hosts the databases. |
+| [az postgres server update](/cli/azure/postgres/server#az-postgres-server-update) | Updates properties of the Azure Database for PostgreSQL flexible server instance. |
+| [az monitor metrics list](/cli/azure/monitor/metrics) | Lists the metric value for the resources. |
| [az group delete](/cli/azure/group) | Deletes a resource group including all nested resources. | ## Next steps -- Learn more about [Azure Database for PostgreSQL compute and storage](../concepts-pricing-tiers.md)-- Try additional scripts: [Azure CLI samples for Azure Database for PostgreSQL](../sample-scripts-azure-cli.md)
+- Learn more about [Azure Database for PostgreSQL - Flexible Server compute and storage](../concepts-pricing-tiers.md)
+- Try additional scripts: [Azure CLI samples for Azure Database for PostgreSQL - Flexible Server](../sample-scripts-azure-cli.md)
- Learn more about the [Azure CLI](/cli/azure)
postgresql Sample Server Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/scripts/sample-server-logs.md
Title: Azure CLI script - Download server logs in Azure Database for PostgreSQL
-description: This sample Azure CLI script shows how to enable and download the server logs of an Azure Database for PostgreSQL server.
+ Title: Azure CLI script - Download server logs
+description: This sample Azure CLI script shows how to enable and download the server logs of an Azure Database for PostgreSQL - Flexible Server instance.
Last updated 01/26/2022
-# Enable and download server slow query logs of an Azure Database for PostgreSQL server using Azure CLI
+# Enable and download server slow query logs of an Azure Database for PostgreSQL - Flexible Server instance using Azure CLI
[!INCLUDE[applies-to-postgres-single-flexible-server](../includes/applies-to-postgresql-single-flexible-server.md)]
-This sample CLI script enables and downloads the slow query logs of a single Azure Database for PostgreSQL server.
+This sample CLI script enables and downloads the slow query logs of a single Azure Database for PostgreSQL flexible server instance.
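A rough sketch of enabling slow query logging and pulling the logs follows, with placeholder names. The server parameter and threshold shown are assumptions about a typical configuration, and the log file name is illustrative only.

```azurecli
# Log statements slower than 10 seconds (threshold in milliseconds; assumed parameter choice)
az postgres server configuration set \
    --resource-group myResourceGroup \
    --server-name mydemoserver \
    --name log_min_duration_statement \
    --value 10000

# List the available log files, then download one by name
az postgres server-logs list \
    --resource-group myResourceGroup \
    --server-name mydemoserver

az postgres server-logs download \
    --resource-group myResourceGroup \
    --server-name mydemoserver \
    --name postgresql-2024-01-10_000000.log
```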
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
This script uses the commands outlined in the following table:
| **Command** | **Notes** | ||| | [az group create](/cli/azure/group) | Creates a resource group in which all resources are stored. |
-| [az postgres server create](/cli/azure/postgres/server) | Creates a PostgreSQL server that hosts the databases. |
-| [az postgres server configuration list](/cli/azure/postgres/server/configuration) | List the configuration values for a server. |
-| [az postgres server configuration set](/cli/azure/postgres/server/configuration) | Update the configuration of a server. |
-| [az postgres server-logs list](/cli/azure/postgres/server-logs) | List log files for a server. |
-| [az postgres server-logs download](/cli/azure/postgres/server-logs) | Download log files. |
+| [az postgres server create](/cli/azure/postgres/server) | Creates an Azure Database for PostgreSQL flexible server instance that hosts the databases. |
+| [az postgres server configuration list](/cli/azure/postgres/server/configuration) | Lists the configuration values for a server. |
+| [az postgres server configuration set](/cli/azure/postgres/server/configuration) | Updates the configuration of a server. |
+| [az postgres server-logs list](/cli/azure/postgres/server-logs) | Lists log files for a server. |
+| [az postgres server-logs download](/cli/azure/postgres/server-logs) | Downloads log files. |
| [az group delete](/cli/azure/group) | Deletes a resource group including all nested resources. | ## Next steps - Read more information on the Azure CLI: [Azure CLI documentation](/cli/azure).-- Try additional scripts: [Azure CLI samples for Azure Database for PostgreSQL](../sample-scripts-azure-cli.md)
+- Try additional scripts: [Azure CLI samples for Azure Database for PostgreSQL - Flexible Server](../sample-scripts-azure-cli.md)
- [Configure and access server logs in the Azure portal](../howto-configure-server-logs-in-portal.md)
postgresql Concept Reserved Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concept-reserved-pricing.md
- Title: Reserved compute pricing - Azure Database for PostgreSQL
-description: Prepay for Azure Database for PostgreSQL compute resources with reserved capacity
------- Previously updated : 12/12/2023--
-# Prepay for Azure Database for PostgreSQL compute resources with reserved capacity
---
-Azure Database for PostgreSQL now helps you save money by prepaying for compute resources compared to pay-as-you-go prices. With Azure Database for PostgreSQL reserved capacity, you make an upfront commitment on PostgreSQL server for a one or three year period to get a significant discount on the compute costs. To purchase Azure Database for PostgreSQL reserved capacity, you need to specify the Azure region, deployment type, performance tier, and term. </br>
-
-## How does the instance reservation work?
-
-You don't need to assign the reservation to specific Azure Database for PostgreSQL servers. An already running Azure Database for PostgreSQL (or ones that are newly deployed) automatically get the benefit of reserved pricing. By purchasing a reservation, you're prepaying for the compute costs for one or three years. As soon as you buy a reservation, the Azure database for PostgreSQL compute charges that match the reservation attributes are no longer charged at the pay-as-you go rates. A reservation doesn't cover software, networking, or storage charges associated with the PostgreSQL Database servers. At the end of the reservation term, the billing benefit expires, and the vCores used by Azure Database for PostgreSQL are billed at the pay-as-you go price. Reservations don't autorenew. For pricing information, see the [Azure Database for PostgreSQL reserved capacity offering](https://azure.microsoft.com/pricing/details/postgresql/). </br>
-
-> [!IMPORTANT]
-> Reserved capacity pricing is available for the Azure Database for PostgreSQL in [Single server](./overview.md#azure-database-for-postgresqlsingle-server) and [Flexible Server](../flexible-server/overview.md) deployment options.
-
-You can buy Azure Database for PostgreSQL reserved capacity in the [Azure portal](https://portal.azure.com/). Pay for the reservation [up front or with monthly payments](../../cost-management-billing/reservations/prepare-buy-reservation.md). To buy the reserved capacity:
-
-* You must be in the owner role for at least one Enterprise or individual subscription with pay-as-you-go rates.
-* For Enterprise subscriptions, **Add Reserved Instances** must be enabled in the [EA portal](https://ea.azure.com/). Or, if that setting is disabled, you must be an EA Admin on the subscription.
-* For Cloud Solution Provider (CSP) program, only the admin agents or sales agents can purchase Azure Database for PostgreSQL reserved capacity. </br>
-
-For details on how enterprise customers and Pay-As-You-Go customers are charged for reservation purchases, see [understand Azure reservation usage for your Enterprise enrollment](../../cost-management-billing/reservations/understand-reserved-instance-usage-ea.md) and [understand Azure reservation usage for your Pay-As-You-Go subscription](../../cost-management-billing/reservations/understand-reserved-instance-usage.md).
-
-## Reservation exchanges and refunds
-
-You can exchange a reservation for another reservation of the same type, you can also exchange a reservation from Azure Database for PostgreSQL - Single Server with Flexible Server. It's also possible to refund a reservation, if you no longer need it. The Azure portal can be used to exchange or refund a reservation. For more information, see [Self-service exchanges and refunds for Azure Reservations](../../cost-management-billing/reservations/exchange-and-refund-azure-reservations.md).
-
-## Reservation discount
-
-You may save up to 65% on compute costs with reserved instances. In order to find the discount for your case, visit the [Reservation blade on the Azure portal](https://aka.ms/reservations) and check the savings per pricing tier and per region. Reserved instances help you manage your workloads, budget, and forecast better with an upfront payment for a one-year or three-year term. You can also exchange or cancel reservations as business needs change.
-
-## Determine the right server size before purchase
-
-The size of reservation should be based on total amount of compute used by the existing, or soon-to-be-deployed, servers within a specific region, and using the same performance tier and hardware generation.</br>
-
-For example, let's suppose that you're running one general purpose Gen5 – 32 vCore PostgreSQL database, and two memory-optimized Gen5 – 16 vCore PostgreSQL databases. Further, let's suppose that you plan to deploy an additional general purpose Gen5 – 8 vCore database server, and one memory-optimized Gen5 – 32 vCore database server within the next month. Let's suppose that you know that you need these resources for at least one year. In this case, you should purchase a 40 (32 + 8) vCores, one-year reservation for single database general purpose - Gen5 and a 64 (2x16 + 32) vCore one year reservation for single database memory optimized - Gen5.
-
-## Buy Azure Database for PostgreSQL reserved capacity
-
-1. Sign in to the [Azure portal](https://portal.azure.com/).
-2. Select **All services** > **Reservations**.
-3. Select **Add** and then, in the Purchase reservations pane, select **Azure Database for PostgreSQL** to purchase a new reservation for your PostgreSQL databases.
-4. Fill in the required fields. Existing or new databases that match the attributes you select qualify to get the reserved capacity discount. The actual number of your Azure Database for PostgreSQL servers that get the discount depend on the scope and quantity selected.
--
-The following table describes required fields.
-
-| Field | Description |
-| : | :- |
-| Subscription | The subscription used to pay for the Azure Database for PostgreSQL reserved capacity reservation. The payment method on the subscription is charged the upfront costs for the Azure Database for PostgreSQL reserved capacity reservation. The subscription type must be an enterprise agreement (offer numbers: MS-AZR-0017P or MS-AZR-0148P) or an individual agreement with pay-as-you-go pricing (offer numbers: MS-AZR-0003P or MS-AZR-0023P). For an enterprise subscription, the charges are deducted from the enrollment's Azure Prepayment (previously called monetary commitment) balance or charged as overage. For an individual subscription with pay-as-you-go pricing, the charges are billed to the credit card or invoice payment method on the subscription.
-| Scope | The vCore reservationΓÇÖs scope can cover one subscription or multiple subscriptions (shared scope). If you select: </br></br> **Shared**, the vCore reservation discount is applied to Azure Database for PostgreSQL servers running in any subscriptions within your billing context. For enterprise customers, the shared scope is the enrollment and includes all subscriptions within the enrollment. For Pay-As-You-Go customers, the shared scope is all Pay-As-You-Go subscriptions created by the account administrator.</br></br>**Management group**, the reservation discount is applied to Azure Database for PostgreSQL running in any subscriptions that are a part of both the management group and billing scope.</br></br> **Single subscription**, the vCore reservation discount is applied to Azure Database for PostgreSQL servers in this subscription. </br></br> **Single resource group**, the reservation discount is applied to Azure Database for PostgreSQL servers in the selected subscription and the selected resource group within that subscription.
-| Region | The Azure region that's covered by the Azure Database for PostgreSQL reserved capacity reservation.
-| Deployment Type | The Azure Database for PostgreSQL resource type that you want to buy the reservation for.
-| Performance Tier | The service tier for the Azure Database for PostgreSQL servers.
-| Term | This term can be either One year or Three years.
-| Quantity | The amount of compute resources being purchased within the Azure Database for PostgreSQL reserved capacity reservation. Corresponds to the number of vCores in the selected Azure region and Performance tier that are being reserved and get the billing discount. For example, if you're running or planning to run an Azure Database for PostgreSQL servers with the total compute capacity of Gen5 16 vCores in the East US region, then you would specify quantity as 16 to maximize the benefit for all servers.
-
-## Reserved instances API support
-
-Use Azure APIs to programmatically get information for your organization about Azure service or software reservations. For example, use the APIs to:
--- Find reservations to buy-- Buy a reservation-- View purchased reservations-- View and manage reservation access-- Split or merge reservations-- Change the scope of reservations-
-For more information, see [APIs for Azure reservation automation](../../cost-management-billing/reservations/reservation-apis.md).
-
-## vCore size flexibility
-
-vCore size flexibility helps you scale up or down within a performance tier and region, without losing the reserved capacity benefit. If you scale to higher vCores than your reserved capacity, you're billed for the excess vCores using pay-as-you-go pricing.
-
-## How to view reserved instance purchase details
-
-You can view your reserved instance purchase details via the [Reservations](https://aka.ms/reservations) blade in the Azure portal.
-
-## Reserved instance expiration
-
-You receive email notifications, first one 30 days prior to reservation expiry and another one at expiration. Once the reservation expires, deployed VMs continue to run and are billed at a pay-as-you-go rate.
-
-## Need help? Contact us
-
-If you have questions or need help, [create a support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).
-
-## Next steps
-
-The vCore reservation discount is applied automatically to the number of Azure Database for PostgreSQL servers that match the Azure Database for PostgreSQL reserved capacity reservation scope and attributes. You can update the scope of the Azure database for PostgreSQL reserved capacity reservation through Azure portal, PowerShell, CLI or through the API.
-
-To learn more about Azure Reservations, see the following articles:
-
-* [What are Azure Reservations](../../cost-management-billing/reservations/save-compute-costs-reservations.md)?
-* [Manage Azure Reservations](../../cost-management-billing/reservations/manage-reserved-vm-instance.md)
-* [Understand Azure Reservations discount](../../cost-management-billing/reservations/understand-reservation-charges.md)
-* [Understand reservation usage for your Enterprise enrollment](../../cost-management-billing/reservations/understand-reserved-instance-usage-ea.md)
-* [Azure Reservations in Partner Center Cloud Solution Provider (CSP) program](/partner-center/azure-reservations)
postgresql Concepts Supported Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-supported-versions.md
adobe-target: true
[!INCLUDE [azure-database-for-postgresql-single-server-deprecation](../includes/azure-database-for-postgresql-single-server-deprecation.md)]
-Please see [Azure Database for PostgreSQL versioning policy](concepts-version-policy.md) for support policy details.
+See [Azure Database for PostgreSQL versioning policy](../flexible-server/concepts-version-policy.md) for support policy details.
Azure Database for PostgreSQL currently supports the following major versions:
The current minor release is 10.22. Refer to the [PostgreSQL documentation](http
## PostgreSQL version 9.6 (retired)
-Aligning with Postgres community's [versioning policy](https://www.postgresql.org/support/versioning/), Azure Database for PostgreSQL has retired PostgreSQL version 9.6 as of November 11, 2021. See [Azure Database for PostgreSQL versioning policy](concepts-version-policy.md) for more details and restrictions. If you're running this major version, upgrade to a higher version, preferably to PostgreSQL 11 at your earliest convenience.
+To align with the Postgres community's [versioning policy](https://www.postgresql.org/support/versioning/), Azure Database for PostgreSQL retired PostgreSQL version 9.6 as of November 11, 2021. See [Azure Database for PostgreSQL versioning policy](../flexible-server/concepts-version-policy.md) for more details and restrictions. If you're running this major version, upgrade to a higher version, preferably to PostgreSQL 11 at your earliest convenience.
## PostgreSQL version 9.5 (retired)
-Aligning with Postgres community's [versioning policy](https://www.postgresql.org/support/versioning/), Azure Database for PostgreSQL has retired PostgreSQL version 9.5 as of February 11, 2021. See [Azure Database for PostgreSQL versioning policy](concepts-version-policy.md) for more details and restrictions. If you're running this major version, upgrade to a higher version, preferably to PostgreSQL 11 at your earliest convenience.
+To align with the Postgres community's [versioning policy](https://www.postgresql.org/support/versioning/), Azure Database for PostgreSQL retired PostgreSQL version 9.5 as of February 11, 2021. See [Azure Database for PostgreSQL versioning policy](../flexible-server/concepts-version-policy.md) for more details and restrictions. If you're running this major version, upgrade to a higher version, preferably to PostgreSQL 11 at your earliest convenience.
## Managing upgrades The PostgreSQL project regularly issues minor releases to fix reported bugs. Azure Database for PostgreSQL automatically patches servers with minor releases during the service's monthly deployments.
-Automatic in-place upgrades for major versions are not supported. To upgrade to a higher major version, you can
+Automatic in-place upgrades for major versions aren't supported. To upgrade to a higher major version, you can
* Use one of the methods documented in [major version upgrades using dump and restore](./how-to-upgrade-using-dump-and-restore.md). * Use [pg_dump and pg_restore](./how-to-migrate-using-dump-and-restore.md) to move a database to a server created with the new engine version. * Use [Azure Database Migration service](../../dms/tutorial-azure-postgresql-to-azure-postgresql-online-portal.md) for doing online upgrades.
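For the pg_dump and pg_restore option listed above, a minimal sketch might look like the following; the server and database names are placeholders, and single-server logins are assumed to use the user@servername format.

```bash
# Dump the database from the existing server in custom format
pg_dump -Fc -v \
    --host=mydemoserver.postgres.database.azure.com \
    --username=myadmin@mydemoserver \
    --dbname=mydatabase \
    --file=mydatabase.dump

# Restore it into a server created with the newer major version
pg_restore -v --no-owner \
    --host=mynewserver.postgres.database.azure.com \
    --port=5432 \
    --username=myadmin@mynewserver \
    --dbname=mydatabase \
    mydatabase.dump
```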
postgresql Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/overview.md
- Title: What is Azure Database for PostgreSQL
-description: Provides an overview of Azure Database for PostgreSQL relational database service in the context of single server.
------ Previously updated : 06/24/2022
-adobe-target: true
--
-# What is Azure Database for PostgreSQL?
---
-> [!IMPORTANT]
-> Azure Database for PostgreSQL - Hyperscale (Citus) is now [Azure Cosmos DB for PostgreSQL](../../cosmos-db/postgresql/introduction.md). To learn more about this change, see [Where is Hyperscale (Citus)?](../hyperscale/moved.md).
-
-Azure Database for PostgreSQL is a relational database service in the Microsoft cloud based on the [PostgreSQL open source relational database](https://www.postgresql.org/). Azure Database for PostgreSQL delivers:
--- Built-in high availability.-- Data protection using automatic backups and point-in-time-restore for up to 35 days.-- Automated maintenance for underlying hardware, operating system and database engine to keep the service secure and up to date.-- Predictable performance, using inclusive pay-as-you-go pricing.-- Elastic scaling within seconds.-- Enterprise grade security and industry-leading compliance to protect sensitive data at-rest and in-motion.-- Monitoring and automation to simplify management and monitoring for large-scale deployments.-- Industry-leading support experience.--
-These capabilities require almost no administration, and all are provided at no additional cost. They allow you to focus on rapid application development and accelerating your time to market rather than allocating precious time and resources to managing virtual machines and infrastructure. In addition, you can continue to develop your application with the open-source tools and platform of your choice to deliver with the speed and efficiency your business demands, all without having to learn new skills.
-
-## Deployment models
-
-Azure Database for PostgreSQL powered by the PostgreSQL community edition is available in two deployment modes:
--- Single Server-- Flexible Server-
-### Azure Database for PostgreSQL - Single Server
-
-Azure Database for PostgreSQL Single Server is a fully managed database service with minimal requirements for customizations of database. The single server platform is designed to handle most of the database management functions such as patching, backups, high availability, security with minimal user configuration and control. The architecture is optimized for built-in high availability with 99.99% availability on single availability zone. It supports community version of PostgreSQL 9.5, 9,6, 10, and 11. The service is generally available today in wide variety of [Azure regions](https://azure.microsoft.com/global-infrastructure/services/).
-
-The Single Server deployment option offers three pricing tiers: Basic, General Purpose, and Memory Optimized. Each tier offers different resource capabilities to support your database workloads. You can build your first app on a small database for a few dollars a month, and then adjust the scale to meet the needs of your solution. Dynamic scalability enables your database to transparently respond to rapidly changing resource requirements. You only pay for the resources you need, and only when you need them. See [Pricing tiers](./concepts-pricing-tiers.md) for details.
-
-Single servers are best suited for cloud native applications designed to handle automated patching without the need for granular control on the patching schedule and custom PostgreSQL configuration settings.
-
-For detailed overview of single server deployment mode, refer [single server overview](./overview-single-server.md).
-
-### Azure Database for PostgreSQL - Flexible Server
-
-Azure Database for PostgreSQL Flexible Server is a fully managed database service designed to provide more granular control and flexibility over database management functions and configuration settings. In general, the service provides more flexibility and customizations based on the user requirements. The flexible server architecture allows users to opt for high availability within single availability zone and across multiple availability zones. Flexible Server provides better cost optimization controls with the ability to stop/start server and burstable compute tier, ideal for workloads that donΓÇÖt need full-compute capacity continuously. The service currently supports community version of PostgreSQL 11, 12, 13 and 14, with plans to add newer versions soon. The service is generally available today in wide variety of Azure regions.
-
-Flexible servers are best suited for
--- Application developments requiring better control and customizations-- Cost optimization controls with ability to stop/start server-- Zone redundant high availability-- Managed maintenance windows-
-For a detailed overview of flexible server deployment mode, see [flexible server overview](../flexible-server/overview.md).
-
-## Next steps
-
-Learn more about the three deployment modes for Azure Database for PostgreSQL and choose the right options based on your needs.
--- [Single Server](./overview-single-server.md)-- [Flexible Server](../flexible-server/overview.md)
postgresql Whats Happening To Postgresql Single Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/whats-happening-to-postgresql-single-server.md
Learn how to migrate from Azure Database for PostgreSQL - Single Server to Azure
**Q. Can I still create a new version 11 Azure Database for PostgreSQL - Single Server after the community EOL date in November 2023?**
-**A.** Beginning November 30 2023, you'll no longer be able to create new single server instances for PostgreSQL version 11 through the Azure portal. However, you can still [make them via CLI until November 2024](https://azure.microsoft.com/updates/singlepg11-retirement/). We will continue to support single servers through our [versioning support policy.](/azure/postgresql/single-server/concepts-version-policy) It would be best to start migrating to Azure Database for PostgreSQL - Flexible Server immediately.
+**A.** Beginning November 30, 2023, you'll no longer be able to create new single server instances for PostgreSQL version 11 through the Azure portal. However, you can still [create them via the CLI until November 2024](https://azure.microsoft.com/updates/singlepg11-retirement/). We will continue to support single servers through our [versioning support policy](../flexible-server/concepts-version-policy.md). It would be best to start migrating to Azure Database for PostgreSQL - Flexible Server immediately.
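As a rough sketch of the CLI route (placeholder names and credentials; availability is subject to the retirement timeline above):

```azurecli
# Create a version 11 single server instance (placeholder values)
az postgres server create \
    --resource-group myResourceGroup \
    --name mypg11server \
    --location eastus \
    --admin-user myadmin \
    --admin-password "<secure-password>" \
    --sku-name GP_Gen5_2 \
    --version 11
```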
**Q. Can I continue running my Azure Database for PostgreSQL - Single Server beyond the sunset date of March 28, 2025?**
private-link Configure Asg Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/configure-asg-private-endpoint.md
Title: Configure an application security group with a private endpoint
-description: Learn how to create a private endpoint with an Application Security Group or apply an ASG to an existing private endpoint.
+description: Learn how to create a private endpoint with an application security group (ASG) or apply an ASG to an existing private endpoint.
Last updated 06/14/2022
-# Configure an application security group (ASG) with a private endpoint
+# Configure an application security group with a private endpoint
-Azure Private endpoints support application security groups for network security. Private endpoints can be associated with an existing ASG in your current infrastructure alongside virtual machines and other network resources.
+Azure Private Link private endpoints support application security groups (ASGs) for network security. You can associate private endpoints with an existing ASG in your current infrastructure alongside virtual machines and other network resources.
## Prerequisites - An Azure account with an active subscription. If you don't already have an Azure account, [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- An Azure web app with a Premium V2 tier or higher app service plan deployed in your Azure subscription.
-- An Azure web app with a **PremiumV2-tier** or higher app service plan, deployed in your Azure subscription.
+ - For more information and an example, see [Quickstart: Create an ASP.NET Core web app in Azure](../app-service/quickstart-dotnetcore.md).
+ - The example web app in this article is named **myWebApp1979**. Replace the example with your web app name.
- - For more information and an example, see [Quickstart: Create an ASP.NET Core web app in Azure](../app-service/quickstart-dotnetcore.md).
-
- - The example webapp in this article is named **myWebApp1979**. Replace the example with your webapp name.
--- An existing Application Security Group in your subscription. For more information about ASGs, see [Application security groups](../virtual-network/application-security-groups.md).
-
+- An existing ASG in your subscription. For more information about ASGs, see [Application security groups](../virtual-network/application-security-groups.md).
- The example ASG used in this article is named **myASG**. Replace the example with your application security group. -- An existing Azure Virtual Network and subnet in your subscription. For more information about creating a virtual network, see [Quickstart: Create a virtual network using the Azure portal](../virtual-network/quick-create-portal.md).
+- An existing Azure virtual network and subnet in your subscription. For more information about creating a virtual network, see [Quickstart: Create a virtual network using the Azure portal](../virtual-network/quick-create-portal.md).
- The example virtual network used in this article is named **myVNet**. Replace the example with your virtual network. - The latest version of the Azure CLI, installed.
- Check your version of the Azure CLI in a terminal or command window by running `az --version`. For the latest version, see the most recent [release notes](/cli/azure/release-notes-azure-cli?tabs=azure-cli).
-
- If you don't have the latest version of the Azure CLI, update it by following the [installation guide for your operating system or platform](/cli/azure/install-azure-cli).
+ - Check your version of the Azure CLI in a terminal or command window by running `az --version`. For the latest version, see the most recent [release notes](/cli/azure/release-notes-azure-cli?tabs=azure-cli).
+ - If you don't have the latest version of the Azure CLI, update it by following the [installation guide for your operating system or platform](/cli/azure/install-azure-cli).
-If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. To find the installed version, run `Get-Module -ListAvailable Az`. If you need to upgrade, see [Install the Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
+If you choose to install and use PowerShell locally, this article requires Azure PowerShell module version 5.4.1 or later. To find the installed version, run `Get-Module -ListAvailable Az`. If you need to upgrade, see [Install the Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
-## Create private endpoint with an ASG
+## Create a private endpoint with an ASG
-An ASG can be associated with a private endpoint when it's created. The following procedures demonstrate how to associate an ASG with a private endpoint when it's created.
+You can associate an ASG with a private endpoint when it's created. The following procedures demonstrate how to associate an ASG with a private endpoint when it's created.
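For orientation, a condensed CLI sketch of the same association follows; the tabs that follow cover the full procedures. The web app, ASG, and network names are the examples from the prerequisites, and the `--asg` parameter is assumed to accept the ASG resource ID in `id=` form.

```azurecli
# Look up the web app and ASG resource IDs (example names from the prerequisites)
id=$(az webapp show --name myWebApp1979 --resource-group myResourceGroup --query id --output tsv)
asgid=$(az network asg show --name myASG --resource-group myResourceGroup --query id --output tsv)

# Create the private endpoint with the ASG attached
az network private-endpoint create \
    --resource-group myResourceGroup \
    --name myPrivateEndpoint \
    --vnet-name myVNet \
    --subnet myBackendSubnet \
    --private-connection-resource-id $id \
    --group-id sites \
    --connection-name myConnection \
    --asg id=$asgid
```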
# [**Portal**](#tab/portal)
-1. Sign-in to the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Azure portal](https://portal.azure.com).
-2. In the search box at the top of the portal, enter **Private endpoint**. Select **Private endpoints** in the search results.
+1. In the search box at the top of the portal, enter **Private endpoint**. Select **Private endpoints** in the search results.
-3. Select **+ Create** in **Private endpoints**.
+1. Select **+ Create** in **Private endpoints**.
-4. In the **Basics** tab of **Create a private endpoint**, enter or select the following information.
+1. On the **Basics** tab of **Create a private endpoint**, enter or select the following information:
| Value | Setting | | -- | - |
An ASG can be associated with a private endpoint when it's created. The followin
| Name | Enter **myPrivateEndpoint**. | | Region | Select **East US**. |
-5. Select **Next: Resource** at the bottom of the page.
+1. Select **Next: Resource** at the bottom of the page.
-6. In the **Resource** tab, enter or select the following information.
+1. On the **Resource** tab, enter or select the following information:
| Value | Setting | | -- | - | | Connection method | Select **Connect to an Azure resource in my directory.** |
- | Subscription | Select your subscription |
+ | Subscription | Select your subscription. |
| Resource type | Select **Microsoft.Web/sites**. | | Resource | Select **mywebapp1979**. | | Target subresource | Select **sites**. |
-7. Select **Next: Virtual Network** at the bottom of the page.
+1. Select **Next: Virtual Network** at the bottom of the page.
-8. In the **Virtual Network** tab, enter or select the following information.
+1. On the **Virtual Network** tab, enter or select the following information:
| Value | Setting | | -- | - | | **Networking** | | | Virtual network | Select **myVNet**. | | Subnet | Select your subnet. </br> In this example, it's **myVNet/myBackendSubnet(10.0.0.0/24)**. |
- | Enable network policies for all private endpoints in this subnet. | Leave the default of checked. |
+ | Enable network policies for all private endpoints in this subnet. | Leave the default selected. |
| **Application security group** | | | Application security group | Select **myASG**. |
- :::image type="content" source="./media/configure-asg-private-endpoint/asg-new-endpoint.png" alt-text="Screenshot of ASG selection when creating a new private endpoint.":::
+ :::image type="content" source="./media/configure-asg-private-endpoint/asg-new-endpoint.png" alt-text="Screenshot that shows ASG selection when creating a new private endpoint.":::
-9. Select **Next: DNS** at the bottom of the page.
+1. Select **Next: DNS** at the bottom of the page.
-10. Select **Next: Tags** at the bottom of the page.
+1. Select **Next: Tags** at the bottom of the page.
-11. Select **Next: Review + create**.
+1. Select **Next: Review + create**.
-12. Select **Create**.
+1. Select **Create**.
# [**PowerShell**](#tab/powershell)
az network private-endpoint create \
## Associate an ASG with an existing private endpoint
-An ASG can be associated with an existing private endpoint. The following procedures demonstrate how to associate an ASG with an existing private endpoint.
+You can associate an ASG with an existing private endpoint. The following procedures demonstrate how to associate an ASG with an existing private endpoint.
> [!IMPORTANT] > You must have a previously deployed private endpoint to proceed with the steps in this section. The example endpoint used in this section is named **myPrivateEndpoint**. Replace the example with your private endpoint. # [**Portal**](#tab/portal)
-1. Sign-in to the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Azure portal](https://portal.azure.com).
-2. In the search box at the top of the portal, enter **Private endpoint**. Select **Private endpoints** in the search results.
+1. In the search box at the top of the portal, enter **Private endpoint**. Select **Private endpoints** in the search results.
-3. In **Private endpoints**, select **myPrivateEndpoint**.
+1. In **Private endpoints**, select **myPrivateEndpoint**.
-4. In **myPrivateEndpoint**, in **Settings**, select **Application security groups**.
+1. In **myPrivateEndpoint**, in **Settings**, select **Application security groups**.
-5. In **Application security groups**, select **myASG** in the pull-down box.
+1. In **Application security groups**, select **myASG** in the dropdown box.
- :::image type="content" source="./media/configure-asg-private-endpoint/asg-existing-endpoint.png" alt-text="Screenshot of ASG selection when associating with an existing private endpoint.":::
+ :::image type="content" source="./media/configure-asg-private-endpoint/asg-existing-endpoint.png" alt-text="Screenshot that shows ASG selection when associating with an existing private endpoint.":::
-6. Select **Save**.
+1. Select **Save**.
# [**PowerShell**](#tab/powershell)
private-link Disable Private Endpoint Network Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/disable-private-endpoint-network-policy.md
# Manage network policies for private endpoints
-By default, network policies are disabled for a subnet in a virtual network. To use network policies like User-Defined Routes (UDRs) and Network Security Groups support, network policy support must be enabled for the subnet. This setting is only applicable to private endpoints in the subnet, and affects all private endpoints in the subnet. For other resources in the subnet, access is controlled based on security rules in the network security group.
+By default, network policies are disabled for a subnet in a virtual network. To use network policies like user-defined routes and network security group support, network policy support must be enabled for the subnet. This setting only applies to private endpoints in the subnet and affects all private endpoints in the subnet. For other resources in the subnet, access is controlled based on security rules in the network security group.
-Network policies can be enabled either for Network Security Groups only, for User-Defined Routes only, or for both.
+You can enable network policies either for network security groups only, for user-defined routes only, or for both.
-If you enable network security policies for User-Defined Routes, you can use a custom address prefix equal to or larger than the VNet address space to invalidate the /32 default route propagated by the private endpoint. This can be useful if you want to ensure private endpoint connection requests go through a firewall or Virtual Appliance. Otherwise, the /32 default route would send traffic directly to the private endpoint in accordance with the [longest prefix match algorithm](../virtual-network/virtual-networks-udr-overview.md#how-azure-selects-a-route).
+If you enable network security policies for user-defined routes, you can use a custom address prefix equal to or larger than the virtual network address space to invalidate the /32 default route propagated by the private endpoint. This capability can be useful if you want to ensure that private endpoint connection requests go through a firewall or virtual appliance. Otherwise, the /32 default route sends traffic directly to the private endpoint in accordance with the [longest prefix match algorithm](../virtual-network/virtual-networks-udr-overview.md#how-azure-selects-a-route).
> [!IMPORTANT]
-> To invalidate a Private Endpoint route, UDRs must have a prefix equal to or larger than the VNet address space where the Private Endpoint is provisioned. For example, a UDR default route (0.0.0.0/0) doesn't invalidate Private Endpoint routes. Network policies should be enabled in the subnet that hosts the private endpoint.
+> To invalidate a private endpoint route, user-defined routes must have a prefix equal to or larger than the virtual network address space where the private endpoint is provisioned. For example, a user-defined routes default route (0.0.0.0/0) doesn't invalidate private endpoint routes. Network policies should be enabled in the subnet that hosts the private endpoint.
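As an illustration of the idea, the following sketch adds a route whose prefix covers the whole virtual network address space and points at a firewall or network virtual appliance, then attaches the route table to the subnet. The 10.1.0.0/16 prefix and the 10.2.0.4 next-hop address are assumptions for the example network, not values from this article.

```azurecli
az network route-table create \
    --resource-group myResourceGroup \
    --name myRouteTable

# The prefix must be equal to or larger than the virtual network address space
az network route-table route create \
    --resource-group myResourceGroup \
    --route-table-name myRouteTable \
    --name toFirewall \
    --address-prefix 10.1.0.0/16 \
    --next-hop-type VirtualAppliance \
    --next-hop-ip-address 10.2.0.4

# Associate the route table with the subnet that hosts the private endpoint
az network vnet subnet update \
    --resource-group myResourceGroup \
    --vnet-name myVNet \
    --name default \
    --route-table myRouteTable
```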
-Use the following step to enable or disable network policy for private endpoints:
+Use the following steps to enable or disable network policy for private endpoints:
* Azure portal * Azure PowerShell * Azure CLI
-* Azure Resource Manager templates
-
-The following examples describe how to enable and disable `PrivateEndpointNetworkPolicies` for a virtual network named **myVNet** with a **default** subnet of **10.1.0.0/24** hosted in a resource group named **myResourceGroup**.
+* Azure Resource Manager templates (ARM templates)
+
+The following examples describe how to enable and disable `PrivateEndpointNetworkPolicies` for a virtual network named `myVNet` with a `default` subnet of `10.1.0.0/24` hosted in a resource group named `myResourceGroup`.
## Enable network policy # [**Portal**](#tab/network-policy-portal)
-1. Sign-in to the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Azure portal](https://portal.azure.com).
-2. In the search box at the top of the portal, enter **Virtual network**. Select **Virtual networks**.
+1. In the search box at the top of the portal, enter **Virtual network**. Select **Virtual networks**.
-3. Select **myVNet**.
+1. Select **myVNet**.
-4. In settings of **myVNet**, select **Subnets**.
+1. In settings of **myVNet**, select **Subnets**.
-5. Select the **default** subnet.
+1. Select the **default** subnet.
-6. In the properties for the **default** subnet, enable the checkboxes for "Network Security Groups", "Route tables" or both in **NETWORK POLICY FOR PRIVATE ENDPOINTS**.
+1. In the properties for the **default** subnet, select the checkboxes for **Network Security Groups**, **Route tables**, or both in **NETWORK POLICY FOR PRIVATE ENDPOINTS**.
-7. Select **Save**.
+1. Select **Save**.
# [**PowerShell**](#tab/network-policy-powershell)
$vnet | Set-AzVirtualNetwork
# [**CLI**](#tab/network-policy-cli)
-Use [az network vnet subnet update](/cli/azure/network/vnet/subnet#az-network-vnet-subnet-update) to enable the policy. The Azure CLI only supports the values `true` or `false`, it doesn't allow yet to enable the policies selectively only for User-Defined Routes or Network Security Groups:
+Use [az network vnet subnet update](/cli/azure/network/vnet/subnet#az-network-vnet-subnet-update) to enable the policy. The Azure CLI only supports the values `true` or `false`. It doesn't allow you to enable the policies selectively only for user-defined routes or network security groups:
```azurecli az network vnet subnet update \
az network vnet subnet update \
# [**JSON**](#tab/network-policy-json)
-This section describes how to enable subnet private endpoint policies using an Azure Resource Manager template. The possible values for the `privateEndpointNetworkPolicies` are `Disabled`, `NetworkSecurityGroupEnabled`, `RouteTableEnabled`, and `Enabled`.
+This section describes how to enable subnet private endpoint policies by using an ARM template. The possible values for `privateEndpointNetworkPolicies` are `Disabled`, `NetworkSecurityGroupEnabled`, `RouteTableEnabled`, and `Enabled`.
```json {
This section describes how to enable subnet private endpoint policies using an A
# [**Portal**](#tab/network-policy-portal)
-1. Sign-in to the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Azure portal](https://portal.azure.com).
-2. In the search box at the top of the portal, enter **Virtual network**. Select **Virtual networks**.
+1. In the search box at the top of the portal, enter **Virtual network**. Select **Virtual networks**.
-3. Select **myVNet**.
+1. Select **myVNet**.
-4. In settings of **myVNet**, select **Subnets**.
+1. In settings of **myVNet**, select **Subnets**.
-5. Select the **default** subnet.
+1. Select the **default** subnet.
-6. In the properties for the **default** subnet, select **Disabled** in **NETWORK POLICY FOR PRIVATE ENDPOINTS**.
+1. In the properties for the **default** subnet, select **Disabled** in **NETWORK POLICY FOR PRIVATE ENDPOINTS**.
-7. Select **Save**.
+1. Select **Save**.
# [**PowerShell**](#tab/network-policy-powershell)
az network vnet subnet update \
# [**JSON**](#tab/network-policy-json)
-This section describes how to disable subnet private endpoint policies using an Azure Resource Manager template.
+This section describes how to disable subnet private endpoint policies by using an ARM template.
```json {
This section describes how to disable subnet private endpoint policies using an
> [!IMPORTANT]
-> There are limitations to private endpoints in relation to the network policy feature and Network Security Groups and User Defined Routes. For more information, see [Limitations](private-endpoint-overview.md#limitations).
+> There are limitations to private endpoints in relation to the network policy feature and network security groups and user-defined routes. For more information, see [Limitations](private-endpoint-overview.md#limitations).
## Next steps-- Learn more about [Azure private endpoint](private-endpoint-overview.md)
-
+
+- To learn more, see [What is a private endpoint?](private-endpoint-overview.md).
private-link Disable Private Link Service Network Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/disable-private-link-service-network-policy.md
Title: 'Disable network policies for Azure Private Link service source IP address'
-description: Learn how to disable network policies for Azure private Link
+description: Learn how to disable network policies for Azure Private Link.
ms.devlang: azurecli
# Disable network policies for Private Link service source IP
-In order to choose a source IP address for your Private Link service, an explicit disable setting `privateLinkServiceNetworkPolicies` is required on the subnet. This setting is only applicable for the specific private IP address you chose as the source IP of the Private Link service. For other resources in the subnet, access is controlled based on Network Security Groups (NSG) security rules definition.
-
-When using the portal to create a Private Link service, this setting is automatically disabled as part of the create process. Deployments using any Azure client (PowerShell, CLI or templates), require an extra step to change this property.
-
-You can use the following to enable or disable the setting:
+To choose a source IP address for your Azure Private Link service, the explicit disable setting `privateLinkServiceNetworkPolicies` is required on the subnet. This setting only applies for the specific private IP address you chose as the source IP of the Private Link service. For other resources in the subnet, access is controlled based on the network security group security rules definition.
-* Azure PowerShell
+When you use the portal to create an instance of the Private Link service, this setting is automatically disabled as part of the creation process. Deployments using any Azure client (PowerShell, Azure CLI, or templates) require an extra step to change this property.
-* Azure CLI
+To enable or disable the setting, use one of the following options:
+* Azure PowerShell
+* Azure CLI
* Azure Resource Manager templates
-
-The following examples describe how to enable and disable `privateLinkServiceNetworkPolicies` for a virtual network named **myVNet** with a **default** subnet of **10.1.0.0/24** hosted in a resource group named **myResourceGroup**.
+
+The following examples describe how to enable and disable `privateLinkServiceNetworkPolicies` for a virtual network named `myVNet` with a `default` subnet of `10.1.0.0/24` hosted in a resource group named `myResourceGroup`.
# [**PowerShell**](#tab/private-link-network-policy-powershell)
-This section describes how to disable subnet private endpoint policies using Azure PowerShell. In the following code, replace "default" with the name of your virtual subnet.
+This section describes how to disable subnet private endpoint policies by using Azure PowerShell. In the following code, replace `default` with the name of your virtual subnet.
```azurepowershell $subnet = 'default'
$vnet | Set-AzVirtualNetwork
# [**CLI**](#tab/private-link-network-policy-cli)
-This section describes how to disable subnet private endpoint policies using Azure CLI.
+This section describes how to disable subnet private endpoint policies by using the Azure CLI.
```azurecli az network vnet subnet update \
az network vnet subnet update \
# [**JSON**](#tab/private-link-network-policy-json)
-This section describes how to disable subnet private endpoint policies using Azure Resource Manager Template.
+This section describes how to disable subnet private endpoint policies by using Azure Resource Manager templates.
+ ```json { "name": "myVNet",
This section describes how to disable subnet private endpoint policies using Azu
## Next steps -- Learn more about [Azure Private Endpoint](private-endpoint-overview.md)
-
+- Learn more about [Azure private endpoints](private-endpoint-overview.md).
private-link Inspect Traffic With Azure Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/inspect-traffic-with-azure-firewall.md
Last updated 08/14/2023 -+ # Azure Firewall scenarios to inspect traffic destined to a private endpoint
Azure Firewall filters traffic using either:
* [FQDN in network rules](../firewall/fqdn-filtering-network-rules.md) for TCP and UDP protocols
-* [FQDN in application rules](../firewall/features.md#application-fqdn-filtering-rules) for HTTP, HTTPS, and MSSQL.
+* [FQDN in application rules](../firewall/features.md#application-fqdn-filtering-rules) for HTTP, HTTPS, and MSSQL.
-> [!IMPORTANT]
+> [!IMPORTANT]
> The use of application rules over network rules is recommended when inspecting traffic destined to private endpoints in order to maintain flow symmetry. Application rules are preferred over network rules to inspect traffic destined to private endpoints because Azure Firewall always SNATs traffic with application rules. If network rules are used, or an NVA is used instead of Azure Firewall, SNAT must be configured for traffic destined to private endpoints in order to maintain flow symmetry. > [!NOTE]
This scenario is implemented when:
* When only a few services are exposed in the virtual network using private endpoints
-The virtual machines have /32 system routes pointing to each private endpoint. One route per private endpoint is configured to route traffic through Azure Firewall.
+The virtual machines have /32 system routes pointing to each private endpoint. One route per private endpoint is configured to route traffic through Azure Firewall.
The administrative overhead of maintaining the route table increases as services are exposed in the virtual network. The possibility of hitting the route limit also increases.
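To make the routing pattern concrete, the following is a minimal, hypothetical sketch of a /32 user-defined route that steers traffic for a single private endpoint through Azure Firewall. The private endpoint IP (`10.1.1.4`), the firewall private IP (`10.0.0.4`), and the resource names are placeholders, not values from this article.

```azurepowershell
# One /32 route per private endpoint; the next hop is the Azure Firewall private IP.
$route = New-AzRouteConfig -Name 'to-private-endpoint' `
    -AddressPrefix '10.1.1.4/32' `
    -NextHopType VirtualAppliance `
    -NextHopIpAddress '10.0.0.4'

# Create a route table that holds the route; associate it with the client subnet afterwards.
New-AzRouteTable -Name 'rt-clients' -ResourceGroupName 'myResourceGroup' `
    -Location 'eastus' -Route $route
```

Each additional private endpoint needs its own /32 route, which is the administrative overhead described above.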
Use this pattern when a migration to a hub and spoke architecture isn't possible
:::image type="content" source="./media/inspect-traffic-using-azure-firewall/on-premises.png" alt-text="On-premises traffic to private endpoints" border="true":::
-This architecture can be implemented if you have configured connectivity with your on-premises network using either:
+This architecture can be implemented if you have configured connectivity with your on-premises network using either:
* [ExpressRoute](..\expressroute\expressroute-introduction.md)
-* [Site to Site VPN](../vpn-gateway/tutorial-site-to-site-portal.md)
+* [Site to Site VPN](../vpn-gateway/tutorial-site-to-site-portal.md)
If your security requirements require client traffic to services exposed via private endpoints to be routed through a security appliance, deploy this scenario.
private-link Manage Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/manage-private-endpoint.md
Title: Manage Azure Private Endpoints
+ Title: Manage Azure private endpoints
-description: Learn how to manage private endpoints in Azure
+description: Learn how to manage private endpoints in Azure.
-# Manage Azure Private Endpoints
+# Manage Azure private endpoints
-Azure Private Endpoints have several options when managing the configuration and their deployment.
+Azure private endpoints have several options for managing their configuration and deployment.
-**GroupId** and **MemberName** can be determined by querying the Private Link resource. The **GroupID** and **MemberName** values are needed to configure a static IP address for a private endpoint during creation.
+You can determine `GroupId` and `MemberName` values by querying the Azure Private Link resource. You need the `GroupId` and `MemberName` values to configure a static IP address for a private endpoint during creation.
-A private endpoint has two custom properties, static IP address and the network interface name. These properties must be set when the private endpoint is created.
+A private endpoint has two custom properties: static IP address and network interface name. These properties must be set when the private endpoint is created.
-With a service provider and consumer deployment of a Private Link Service, an approval process is in place to make the connection.
+With a service provider and consumer deployment of Private Link, an approval process is in place to make the connection.
## Determine GroupID and MemberName
-During the creation of a private endpoint with Azure PowerShell and Azure CLI, the **GroupId** and **MemberName** of the private endpoint resource might be needed.
+During the creation of a private endpoint with Azure PowerShell and the Azure CLI, the `GroupId` and `MemberName` values of the private endpoint resource might be needed.
-* **GroupId** is the subresource of the private endpoint.
+* `GroupId` is the subresource of the private endpoint.
+* `MemberName` is the unique stamp for the private IP address of the endpoint.
-* **MemberName** is the unique stamp for the private IP address of the endpoint.
+For more information about private endpoint subresources and their values, see [Private Link resource](private-endpoint-overview.md#private-link-resource).
-For more information about Private Endpoint subresources and their values, see [Private-link resource](private-endpoint-overview.md#private-link-resource).
-
-To determine the values of **GroupID** and **MemberName** for your private endpoint resource, use the following commands. **MemberName** is contained within the **RequiredMembers** property.
+To determine the values of `GroupId` and `MemberName` for your private endpoint resource, use the following commands. `MemberName` is contained within the `RequiredMembers` property.
# [**PowerShell**](#tab/manage-private-link-powershell)
-An Azure WebApp is used as the example private endpoint resource. Use **[Get-AzPrivateLinkResource](/powershell/module/az.network/get-azprivatelinkresource)** to determine **GroupId** and **MemberName**.
+An Azure web app is used as the example private endpoint resource. Use [Get-AzPrivateLinkResource](/powershell/module/az.network/get-azprivatelinkresource) to determine the values for `GroupId` and `MemberName`.
```azurepowershell ## Place the previously created webapp into a variable. ##
$resource =
Get-AzPrivateLinkResource -PrivateLinkResourceId $webapp.ID ```
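As a point of reference, a fuller version of this query might look like the following sketch. The web app name and resource group are placeholders; the `GroupId` and `RequiredMembers` properties on the returned object carry the values discussed above.

```azurepowershell
# Place the previously created web app into a variable (placeholder names).
$webapp = Get-AzWebApp -Name 'myWebApp' -ResourceGroupName 'myResourceGroup'

# Query the Private Link resource for the web app.
$resource = Get-AzPrivateLinkResource -PrivateLinkResourceId $webapp.Id

# GroupId is the subresource; MemberName is contained in RequiredMembers.
$resource.GroupId
$resource.RequiredMembers
```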
-You should receive an output similar to the below example.
+You should receive an output similar to the following example.
# [**Azure CLI**](#tab/manage-private-link-cli)
-An Azure WebApp is used as the example private endpoint resource. Use **[az network private-link-resource list](/cli/azure/network/private-link-resource#az-network-private-link-resource-list)** to determine **GroupId** and **MemberName**. The parameter `--type` requires the namespace for the private link resource. For the webapp used in this example, the namespace is **Microsoft.Web/sites**. To determine the namespace for your private link resource, see **[Azure services DNS zone configuration](private-endpoint-dns.md#azure-services-dns-zone-configuration)**.
+An Azure web app is used as the example private endpoint resource. Use [az network private-link-resource list](/cli/azure/network/private-link-resource#az-network-private-link-resource-list) to determine `GroupId` and `MemberName`. The parameter `--type` requires the namespace for the Private Link resource. For the web app used in this example, the namespace is `Microsoft.Web/sites`. To determine the namespace for your Private Link resource, see [Azure services DNS zone configuration](private-endpoint-dns.md#azure-services-dns-zone-configuration).
```azurecli az network private-link-resource list \
az network private-link-resource list \
--type Microsoft.Web/sites ```
-You should receive an output similar to the below example.
+You should receive an output similar to the following example.
## Custom properties
-Network interface rename and static IP address assignment are custom properties that can be set on a private endpoint during creation.
+Network interface rename and static IP address assignment are custom properties that you can set on a private endpoint during creation.
### Network interface rename
By default, when a private endpoint is created, the network interface associated with it is given a random name. To use a custom name, the network interface must be named when the private endpoint is created. Renaming the network interface of an existing private endpoint is unsupported.
-Use the following commands when creating a private endpoint to rename the network interface.
+Use the following commands when you create a private endpoint to rename the network interface.
# [**PowerShell**](#tab/manage-private-link-powershell)
-To rename the network interface when the private endpoint is created, use the `-CustomNetworkInterfaceName` parameter. The following example uses an Azure PowerShell command to create a private endpoint to an Azure WebApp. For more information, see **[New-AzPrivateEndpoint](/powershell/module/az.network/new-azprivateendpoint)**.
+To rename the network interface when the private endpoint is created, use the `-CustomNetworkInterfaceName` parameter. The following example uses an Azure PowerShell command to create a private endpoint to an Azure web app. For more information, see [New-AzPrivateEndpoint](/powershell/module/az.network/new-azprivateendpoint).
```azurepowershell ## Place the previously created webapp into a variable. ##
New-AzPrivateEndpoint @pe
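A fuller sketch of this command, with hypothetical resource names and the `-CustomNetworkInterfaceName` parameter in place, might look like the following; it isn't the article's exact example.

```azurepowershell
# Placeholder names; replace with your own resources.
$webapp = Get-AzWebApp -Name 'myWebApp' -ResourceGroupName 'myResourceGroup'
$vnet   = Get-AzVirtualNetwork -Name 'myVNet' -ResourceGroupName 'myResourceGroup'

# Connection to the web app's 'sites' subresource (GroupId).
$pls = New-AzPrivateLinkServiceConnection -Name 'myConnection' `
    -PrivateLinkServiceId $webapp.Id -GroupId 'sites'

$pe = @{
    Name                         = 'myPrivateEndpoint'
    ResourceGroupName            = 'myResourceGroup'
    Location                     = 'eastus'
    Subnet                       = $vnet.Subnets[0]
    PrivateLinkServiceConnection = $pls
    CustomNetworkInterfaceName   = 'myPrivateEndpoint-nic'   # custom NIC name
}
New-AzPrivateEndpoint @pe
```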
# [**Azure CLI**](#tab/manage-private-link-cli)
-To rename the network interface when the private endpoint is created, use the `--nic-name` parameter. The following example uses an Azure PowerShell command to create a private endpoint to an Azure WebApp. For more information, see **[az network private-endpoint create](/cli/azure/network/private-endpoint#az-network-private-endpoint-create)**.
+To rename the network interface when the private endpoint is created, use the `--nic-name` parameter. The following example uses an Azure CLI command to create a private endpoint to an Azure web app. For more information, see [az network private-endpoint create](/cli/azure/network/private-endpoint#az-network-private-endpoint-create).
```azurecli id=$(az webapp list \
az network private-endpoint create \
### Static IP address
-By default, when a private endpoint is created the IP address for the endpoint is automatically assigned. The IP is assigned from the IP range of the virtual network configured for the private endpoint. A situation can arise when a static IP address for the private endpoint is required. The static IP address must be assigned when the private endpoint is created. The configuration of a static IP address for an existing private endpoint is currently unsupported.
+By default, when a private endpoint is created, the IP address for the endpoint is automatically assigned. The IP is assigned from the IP range of the virtual network configured for the private endpoint. A situation can arise when a static IP address for the private endpoint is required. The static IP address must be assigned when the private endpoint is created. The configuration of a static IP address for an existing private endpoint is currently unsupported.
-For procedures to configure a static IP address when creating a private endpoint, see [Create a private endpoint using Azure PowerShell](create-private-endpoint-powershell.md) and [Create a private endpoint using the Azure CLI](create-private-endpoint-cli.md).
+For procedures to configure a static IP address when you create a private endpoint, see [Create a private endpoint using Azure PowerShell](create-private-endpoint-powershell.md) and [Create a private endpoint using the Azure CLI](create-private-endpoint-cli.md).
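For orientation only, the following hedged sketch shows how the static IP address and the `GroupId`/`MemberName` values can fit together at creation time. The names, IP address, and subresource values are placeholders; the linked quickstarts remain the authoritative procedures.

```azurepowershell
# Placeholder resources for the example.
$webapp = Get-AzWebApp -Name 'myWebApp' -ResourceGroupName 'myResourceGroup'
$vnet   = Get-AzVirtualNetwork -Name 'myVNet' -ResourceGroupName 'myResourceGroup'
$pls    = New-AzPrivateLinkServiceConnection -Name 'myConnection' `
    -PrivateLinkServiceId $webapp.Id -GroupId 'sites'

# Static IP configuration; GroupId and MemberName come from querying the Private Link resource.
$ipConfig = New-AzPrivateEndpointIpConfiguration -Name 'ipconfig-1' `
    -GroupId 'sites' -MemberName 'sites' -PrivateIPAddress '10.1.0.10'

$pe = @{
    Name                         = 'myStaticIpPrivateEndpoint'
    ResourceGroupName            = 'myResourceGroup'
    Location                     = 'eastus'
    Subnet                       = $vnet.Subnets[0]
    PrivateLinkServiceConnection = $pls
    IpConfiguration              = $ipConfig
}
New-AzPrivateEndpoint @pe
```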
## Private endpoint connections
-Azure Private Link works on an approval model where the Private Link service consumer can request a connection to the service provider for consuming the service.
+Private Link works on an approval model where the Private Link consumer can request a connection to the service provider for consuming the service.
-The service provider can then decide whether to allow the consumer to connect or not. Azure Private Link enables service providers to manage the private endpoint connection on their resources.
+The service provider can then decide whether to allow the consumer to connect or not. Private Link enables service providers to manage the private endpoint connection on their resources.
-There are two connection approval methods that a Private Link service consumer can choose from:
+There are two connection approval methods that a Private Link consumer can choose from:
-- **Automatic**: If the service consumer has Azure Role Based Access Control permissions on the service provider resource, the consumer can choose the automatic approval method. When the request reaches the service provider resource, no action is required from the service provider and the connection is automatically approved.
+- **Automatic**: If the service consumer has Azure role-based access control (RBAC) permissions on the service provider resource, the consumer can choose the automatic approval method. When the request reaches the service provider resource, no action is required from the service provider and the connection is automatically approved.
+- **Manual**: If the service consumer doesn't have RBAC permissions on the service provider resource, the consumer can choose the manual approval method. The connection request appears on the service resources as **Pending**. The service provider has to manually approve the request before connections can be established.
-- **Manual**: If the service consumer doesn't have Azure Role Based Access Control permissions on the service provider resource, the consumer can choose the manual approval method. The connection request appears on the service resources as **Pending**. The service provider has to manually approve the request before connections can be established.
-In manual cases, service consumer can also specify a message with the request to provide more context to the service provider. The service provider has following options to choose from for all private endpoint connections: **Approve**, **Reject**, **Remove**.
+ In manual cases, the service consumer can also specify a message with the request to provide more context to the service provider. The service provider has the following options to choose from for all private endpoint connections: **Approve**, **Reject**, and **Remove**.
> [!IMPORTANT]
-> To approve connections with a private endpoint that is in a separate subscription or tenant, ensure that the provider subscription or tenant has registered **Microsoft.Network**. The consumer subscription or tenant should also have the resource provider of the destination resource registered.
+> To approve connections with a private endpoint that's in a separate subscription or tenant, ensure that the provider subscription or tenant has registered `Microsoft.Network`. The consumer subscription or tenant should also have the resource provider of the destination resource registered.
-The below table shows the various service provider actions and the resulting connection states for private endpoints. The service provider can change the connection state at a later time without consumer intervention. The action updates the state of the endpoint on the consumer side.
+The following table shows the various service provider actions and the resulting connection states for private endpoints. The service provider can change the connection state at a later time without consumer intervention. The action updates the state of the endpoint on the consumer side.
| Service provider action | Service consumer private endpoint state | Description |
| --- | --- | --- |
| None | Pending | Connection is created manually and is pending for approval by the Private Link resource owner. |
| Approve | Approved | Connection is automatically or manually approved and is ready to be used. |
-| Reject | Rejected | The private link resource owner rejects the connection. |
-| Remove | Disconnected | The private link resource owner removes the connection, causing the private endpoint to become disconnected and it should be deleted for clean-up. |
+| Reject | Rejected | The Private Link resource owner rejects the connection. |
+| Remove | Disconnected | The Private Link resource owner removes the connection, causing the private endpoint to become disconnected and it should be deleted for cleanup. |
## Manage private endpoint connections on Azure PaaS resources
Use the following steps to manage a private endpoint connection in the Azure por
1. Sign in to the [Azure portal](https://portal.azure.com).
-2. In the search box at the top of the portal, enter **Private link**. In the search results, select **Private link**.
+1. In the search box at the top of the portal, enter **Private Link**. In the search results, select **Private link**.
+
+1. In the **Private Link Center**, select **Private endpoints** or **Private link services**.
-3. In the **Private link center**, select **Private endpoints** or **Private link services**.
+1. For each of your endpoints, you can view the number of private endpoint connections associated with it. You can filter the resources as needed.
-4. For each of your endpoints, you can view the number of private endpoint connections associated with it. You can filter the resources as needed.
+1. Select the private endpoint. Under the connections listed, select the connection that you want to manage.
-5. Select the private endpoint. Under the connections listed, select the connection that you want to manage.
+1. You can change the state of the connection by selecting from the options at the top.
-6. You can change the state of the connection by selecting from the options at the top.
+## Manage private endpoint connections on a customer- or partner-owned Private Link service
-## Manage Private Endpoint connections on a customer/partner owned Private Link service
+Use the following PowerShell and Azure CLI commands to manage private endpoint connections on Microsoft partner services or customer-owned services.
-Use the following PowerShell and Azure CLI commands to manage private endpoint connections on Microsoft Partner Services or customer owned services.
-
# [**PowerShell**](#tab/manage-private-link-powershell)
-Use the following PowerShell commands to manage private endpoint connections.
+Use the following PowerShell commands to manage private endpoint connections.
## Get Private Link connection states
-Use **[Get-AzPrivateEndpointConnection](/powershell/module/az.network/get-azprivateendpointconnection)** to get the Private Endpoint connections and their states.
+Use [Get-AzPrivateEndpointConnection](/powershell/module/az.network/get-azprivateendpointconnection) to get the private endpoint connections and their states.
```azurepowershell $get = @{
$get = @{
Get-AzPrivateEndpointConnection @get ```
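As a rough, illustrative alternative to the splatted form above, you can pass the Private Link resource ID directly and project the connection state. The web app and resource group names are placeholders.

```azurepowershell
$webapp = Get-AzWebApp -Name 'myWebApp' -ResourceGroupName 'myResourceGroup'

# List the private endpoint connections on the web app and show their states.
Get-AzPrivateEndpointConnection -PrivateLinkResourceId $webapp.Id |
    Select-Object Name, @{ n = 'Status'; e = { $_.PrivateLinkServiceConnectionState.Status } }
```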
-## Approve a Private Endpoint connection
+## Approve a private endpoint connection
-Use **[Approve-AzPrivateEndpointConnection](/powershell/module/az.network/approve-azprivateendpointconnection)** cmdlet to approve a Private Endpoint connection.
+Use [Approve-AzPrivateEndpointConnection](/powershell/module/az.network/approve-azprivateendpointconnection) to approve a private endpoint connection.
```azurepowershell $approve = @{
$approve = @{
Approve-AzPrivateEndpointConnection @approve ```
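A hedged sketch of approving a pending connection by its resource ID might look like the following; the web app name, resource group, and description are placeholders.

```azurepowershell
$webapp = Get-AzWebApp -Name 'myWebApp' -ResourceGroupName 'myResourceGroup'

# Pick a pending connection and approve it with an optional description.
$connection = Get-AzPrivateEndpointConnection -PrivateLinkResourceId $webapp.Id |
    Where-Object { $_.PrivateLinkServiceConnectionState.Status -eq 'Pending' }

Approve-AzPrivateEndpointConnection -ResourceId $connection.Id -Description 'Approved by service provider'
```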
-## Deny Private Endpoint connection
+## Deny a private endpoint connection
-Use **[Deny-AzPrivateEndpointConnection](/powershell/module/az.network/deny-azprivateendpointconnection)** cmdlet to reject a Private Endpoint connection.
+Use [Deny-AzPrivateEndpointConnection](/powershell/module/az.network/deny-azprivateendpointconnection) to reject a private endpoint connection.
```azurepowershell $deny = @{
$deny = @{
Deny-AzPrivateEndpointConnection @deny ```
-## Remove Private Endpoint connection
+## Remove a private endpoint connection
-Use **[Remove-AzPrivateEndpointConnection](/powershell/module/az.network/remove-azprivateendpointconnection)** cmdlet to remove a Private Endpoint connection.
+Use [Remove-AzPrivateEndpointConnection](/powershell/module/az.network/remove-azprivateendpointconnection) to remove a private endpoint connection.
```azurepowershell $remove = @{
Remove-AzPrivateEndpointConnection @remove
# [**Azure CLI**](#tab/manage-private-link-cli)
-Use the following Azure CLI commands to manage private endpoint connections.
+Use the following Azure CLI commands to manage private endpoint connections.
-## Get Private Link connection states
+## Get Private Link connection states
-Use **[az network private-endpoint-connection show](/cli/azure/network/private-endpoint-connection#az-network-private-endpoint-connection-show)** to get the Private Endpoint connections and their states.
+Use [az network private-endpoint-connection show](/cli/azure/network/private-endpoint-connection#az-network-private-endpoint-connection-show) to get the private endpoint connections and their states.
```azurecli az network private-endpoint-connection show \
Use **[az network private-endpoint-connection show](/cli/azure/network/private-e
--resource-group myResourceGroup ```
-## Approve a Private Endpoint connection
-
-Use **[az network private-endpoint-connection approve](/cli/azure/network/private-endpoint-connection#az-network-private-endpoint-connection-approve)** cmdlet to approve a Private Endpoint connection.
-
+## Approve a private endpoint connection
+
+Use [az network private-endpoint-connection approve](/cli/azure/network/private-endpoint-connection#az-network-private-endpoint-connection-approve) to approve a private endpoint connection.
+ ```azurecli
+ az network private-endpoint-connection approve \
+     --name myPrivateEndpointConnection \
+     --resource-group myResourceGroup
+ ```
-
-## Deny Private Endpoint connection
-
-Use **[az network private-endpoint-connection reject](/cli/azure/network/private-endpoint-connection#az-network-private-endpoint-connection-reject)** cmdlet to reject a Private Endpoint connection.
+
+## Deny a private endpoint connection
+
+Use [az network private-endpoint-connection reject](/cli/azure/network/private-endpoint-connection#az-network-private-endpoint-connection-reject) to reject a private endpoint connection.
```azurecli az network private-endpoint-connection reject \
Use **[az network private-endpoint-connection reject](/cli/azure/network/private
--resource-group myResourceGroup ```
-## Remove Private Endpoint connection
-
-Use **[az network private-endpoint-connection delete](/cli/azure/network/private-endpoint-connection#az-network-private-endpoint-connection-delete)** cmdlet to remove a Private Endpoint connection.
+## Remove a private endpoint connection
+
+Use [az network private-endpoint-connection delete](/cli/azure/network/private-endpoint-connection#az-network-private-endpoint-connection-delete) to remove a private endpoint connection.
```azurecli az network private-endpoint-connection delete \
Use **[az network private-endpoint-connection delete](/cli/azure/network/private
> [!NOTE]
-> Connections that have been previously denied can't be approved. You must remove the connection and create a new one.
-
+> Connections previously denied can't be approved. You must remove the connection and create a new one.
## Next steps-- [Learn about Private Endpoints](private-endpoint-overview.md)
-
+
+- [Learn about private endpoints](private-endpoint-overview.md)
private-link Private Endpoint Export Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/private-endpoint-export-dns.md
Title: Export DNS records for a private endpoint - Azure portal
-description: In this tutorial, learn how to export the DNS records for a private endpoint in the Azure portal.
+description: In this tutorial, learn how to export DNS records for a private endpoint in the Azure portal.
Last updated 07/25/2021
-# Export DNS records for a private endpoint using the Azure portal
+# Export DNS records for a private endpoint by using the Azure portal
-A private endpoint in Azure requires DNS records for name resolution of the endpoint. The DNS record resolves the private IP address of the endpoint for the configured resource. To export the DNS records of the endpoint, use the Private Link center in the portal.
+A private endpoint in Azure requires DNS records for name resolution of the endpoint. The DNS record resolves the private IP address of the endpoint for the configured resource. To export the DNS records of the endpoint, use the Azure Private Link Center in the portal.
## Prerequisites -- An Azure account with an active subscription. [Create an account for free ](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- A private endpoint configured in your subscription. For the example in this article, a private endpoint to an Azure Web App is used. For more information on creating a private endpoint for a web app, see [Tutorial: Connect to a web app using an Azure Private endpoint](tutorial-private-endpoint-webapp-portal.md).
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- A private endpoint configured in your subscription. For the example in this article, a private endpoint to an Azure web app is used. For more information on how to create a private endpoint for a web app, see [Tutorial: Connect to a web app using an Azure private endpoint](tutorial-private-endpoint-webapp-portal.md).
## Export endpoint DNS records
-In this section, you'll sign in to the Azure portal and search for the private link center.
+In this section, you sign in to the Azure portal and search for the Private Link Center.
1. Sign in to the [Azure portal](https://portal.azure.com).
-2. In the search box at the top of the portal, enter **Private Link**.
+1. In the search box at the top of the portal, enter **Private Link**.
-3. Select **Private link**.
+1. Select **Private link**.
-4. In the Private Link center, select **Private endpoints**.
+1. In the Private Link Center, select **Private endpoints**.
- :::image type="content" source="./media/private-endpoint-export-dns/private-link-center.png" alt-text="Select private endpoints in Private Link center":::
+ :::image type="content" source="./media/private-endpoint-export-dns/private-link-center.png" alt-text="Screenshot that shows selecting private endpoints in the Private Link Center.":::
-5. In **Private endpoints**, select the endpoint you want to export the DNS records for. Select **Download host file** to download the endpoint DNS records in a host file format.
+1. In **Private endpoints**, select the endpoint for which you want to export the DNS records. Select **Download host file** to download the endpoint DNS records in a host file format.
- :::image type="content" source="./media/private-endpoint-export-dns/download-host-file.png" alt-text="Download endpoint DNS records":::
+ :::image type="content" source="./media/private-endpoint-export-dns/download-host-file.png" alt-text="Screenshot that shows downloading endpoint DNS records.":::
-6. The downloaded host file records will look similar to below:
+1. The downloaded host file records look similar to this example:
```text # Exported from the Azure portal "2021-07-26 11:26:03Z"
In this section, you'll sign in to the Azure portal and search for the private l
## Next steps
-To learn more about Azure Private link and DNS, see [Azure Private Endpoint DNS configuration](private-endpoint-dns.md).
+To learn more about Azure Private Link and DNS, see [Azure private endpoint DNS configuration](private-endpoint-dns.md).
-For more information on Azure Private link, see:
+For more information on Azure Private Link, see:
* [What is Azure Private Link?](private-link-overview.md) * [What is Azure Private Link service?](private-link-service-overview.md)
role-based-access-control Built In Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/built-in-roles.md
Previously updated : 12/21/2023 Last updated : 01/18/2024
View Virtual Machines in the portal and login as administrator
> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/networkInterfaces/read | Gets a network interface definition. | > | [Microsoft.Compute](resource-provider-operations.md#microsoftcompute)/virtualMachines/*/read | | > | [Microsoft.HybridCompute](resource-provider-operations.md#microsofthybridcompute)/machines/*/read | |
-> | [Microsoft.HybridConnectivity](resource-provider-operations.md#microsofthybridconnectivity)/endpoints/listCredentials/action | List the endpoint access credentials to the resource. |
+> | [Microsoft.HybridConnectivity](resource-provider-operations.md#microsofthybridconnectivity)/endpoints/listCredentials/action | Gets the endpoint access credentials to the resource. |
> | **NotActions** | | > | *none* | | > | **DataActions** | |
View Virtual Machines in the portal and login as a regular user.
> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/networkInterfaces/read | Gets a network interface definition. | > | [Microsoft.Compute](resource-provider-operations.md#microsoftcompute)/virtualMachines/*/read | | > | [Microsoft.HybridCompute](resource-provider-operations.md#microsofthybridcompute)/machines/*/read | |
-> | [Microsoft.HybridConnectivity](resource-provider-operations.md#microsofthybridconnectivity)/endpoints/listCredentials/action | List the endpoint access credentials to the resource. |
+> | [Microsoft.HybridConnectivity](resource-provider-operations.md#microsofthybridconnectivity)/endpoints/listCredentials/action | Gets the endpoint access credentials to the resource. |
> | **NotActions** | | > | *none* | | > | **DataActions** | | > | [Microsoft.Compute](resource-provider-operations.md#microsoftcompute)/virtualMachines/login/action | Log in to a virtual machine as a regular user |
-> | [Microsoft.HybridCompute](resource-provider-operations.md#microsofthybridcompute)/machines/login/action | Log in to a Azure Arc machine as a regular user |
+> | [Microsoft.HybridCompute](resource-provider-operations.md#microsofthybridcompute)/machines/login/action | Log in to an Azure Arc machine as a regular user |
> | **NotDataActions** | | > | *none* | |
Let's you manage the OS of your resource via Windows Admin Center as an administ
> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/networkWatchers/securityGroupView/action | View the configured and effective network security group rules applied on a VM. | > | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/networkSecurityGroups/securityRules/read | Gets a security rule definition | > | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/networkSecurityGroups/securityRules/write | Creates a security rule or updates an existing security rule |
-> | [Microsoft.HybridConnectivity](resource-provider-operations.md#microsofthybridconnectivity)/endpoints/write | Create or update the endpoint to the target resource. |
-> | [Microsoft.HybridConnectivity](resource-provider-operations.md#microsofthybridconnectivity)/endpoints/read | Get or list of endpoints to the target resource. |
-> | [Microsoft.HybridConnectivity](resource-provider-operations.md#microsofthybridconnectivity)/endpoints/listManagedProxyDetails/action | Get managed proxy details for the resource. |
+> | [Microsoft.HybridConnectivity](resource-provider-operations.md#microsofthybridconnectivity)/endpoints/write | Update the endpoint to the target resource. |
+> | [Microsoft.HybridConnectivity](resource-provider-operations.md#microsofthybridconnectivity)/endpoints/read | Gets the endpoint to the resource. |
+> | [Microsoft.HybridConnectivity](resource-provider-operations.md#microsofthybridconnectivity)/endpoints/listManagedProxyDetails/action | Fetches the managed proxy details |
> | [Microsoft.Compute](resource-provider-operations.md#microsoftcompute)/virtualMachines/read | Get the properties of a virtual machine | > | [Microsoft.Compute](resource-provider-operations.md#microsoftcompute)/virtualMachines/patchAssessmentResults/latest/read | Retrieves the summary of the latest patch assessment operation | > | [Microsoft.Compute](resource-provider-operations.md#microsoftcompute)/virtualMachines/patchAssessmentResults/latest/softwarePatches/read | Retrieves list of patches assessed during the last patch assessment operation |
Lets you manage Azure Cosmos DB accounts, but not access data in them. Prevents
> | [Microsoft.Support](resource-provider-operations.md#microsoftsupport)/* | Create and update a support ticket | > | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/virtualNetworks/subnets/joinViaServiceEndpoint/action | Joins resource such as storage account or SQL database to a subnet. Not alertable. | > | **NotActions** | |
+> | [Microsoft.DocumentDB](resource-provider-operations.md#microsoftdocumentdb)/databaseAccounts/dataTransferJobs/* | |
> | [Microsoft.DocumentDB](resource-provider-operations.md#microsoftdocumentdb)/databaseAccounts/readonlyKeys/* | | > | [Microsoft.DocumentDB](resource-provider-operations.md#microsoftdocumentdb)/databaseAccounts/regenerateKey/* | | > | [Microsoft.DocumentDB](resource-provider-operations.md#microsoftdocumentdb)/databaseAccounts/listKeys/* | |
Lets you manage Azure Cosmos DB accounts, but not access data in them. Prevents
"Microsoft.Network/virtualNetworks/subnets/joinViaServiceEndpoint/action" ], "notActions": [
+ "Microsoft.DocumentDB/databaseAccounts/dataTransferJobs/*",
"Microsoft.DocumentDB/databaseAccounts/readonlyKeys/*", "Microsoft.DocumentDB/databaseAccounts/regenerateKey/*", "Microsoft.DocumentDB/databaseAccounts/listKeys/*",
Can read all monitoring data and edit monitoring settings. See also [Get started
> | [Microsoft.OperationalInsights](resource-provider-operations.md#microsoftoperationalinsights)/workspaces/sharedKeys/action | Retrieves the shared keys for the workspace. These keys are used to connect Microsoft Operational Insights agents to the workspace. | > | [Microsoft.OperationalInsights](resource-provider-operations.md#microsoftoperationalinsights)/workspaces/storageinsightconfigs/* | Read/write/delete log analytics storage insight configurations. | > | [Microsoft.Support](resource-provider-operations.md#microsoftsupport)/* | Create and update a support ticket |
-> | Microsoft.WorkloadMonitor/monitors/* | Get information about guest VM health monitors. |
> | [Microsoft.AlertsManagement](resource-provider-operations.md#microsoftalertsmanagement)/smartDetectorAlertRules/* | | > | [Microsoft.AlertsManagement](resource-provider-operations.md#microsoftalertsmanagement)/actionRules/* | | > | [Microsoft.AlertsManagement](resource-provider-operations.md#microsoftalertsmanagement)/smartGroups/* | |
Can read all monitoring data and edit monitoring settings. See also [Get started
"Microsoft.OperationalInsights/workspaces/sharedKeys/action", "Microsoft.OperationalInsights/workspaces/storageinsightconfigs/*", "Microsoft.Support/*",
- "Microsoft.WorkloadMonitor/monitors/*",
"Microsoft.AlertsManagement/smartDetectorAlertRules/*", "Microsoft.AlertsManagement/actionRules/*", "Microsoft.AlertsManagement/smartGroups/*",
Read-only role for Digital Twins data-plane properties
> | [Microsoft.DigitalTwins](resource-provider-operations.md#microsoftdigitaltwins)/digitaltwins/relationships/read | Read any Digital Twin Relationship | > | [Microsoft.DigitalTwins](resource-provider-operations.md#microsoftdigitaltwins)/eventroutes/read | Read any Event Route | > | [Microsoft.DigitalTwins](resource-provider-operations.md#microsoftdigitaltwins)/jobs/import/read | Read any Bulk Import Job |
+> | [Microsoft.DigitalTwins](resource-provider-operations.md#microsoftdigitaltwins)/jobs/imports/read | Read any Bulk Import Job |
+> | [Microsoft.DigitalTwins](resource-provider-operations.md#microsoftdigitaltwins)/jobs/deletions/read | Read any Bulk Delete Job |
> | [Microsoft.DigitalTwins](resource-provider-operations.md#microsoftdigitaltwins)/models/read | Read any Model | > | [Microsoft.DigitalTwins](resource-provider-operations.md#microsoftdigitaltwins)/query/action | Query any Digital Twins Graph | > | **NotDataActions** | |
Read-only role for Digital Twins data-plane properties
"Microsoft.DigitalTwins/digitaltwins/relationships/read", "Microsoft.DigitalTwins/eventroutes/read", "Microsoft.DigitalTwins/jobs/import/read",
+ "Microsoft.DigitalTwins/jobs/imports/read",
+ "Microsoft.DigitalTwins/jobs/deletions/read",
"Microsoft.DigitalTwins/models/read", "Microsoft.DigitalTwins/query/action" ],
sap Register Existing System https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/center-sap-solutions/register-existing-system.md
When you register a system with Azure Center for SAP solutions, the following re
- A Storage account within the managed resource group which contains blobs that have scripts and logs necessary for the service to provide the various capabilities including discovering and registering all components of SAP system. > [!NOTE]
-> You can customize the names of the Managed resource group and the Storage account which get deployed as part of the registration process by using [Azure PowerShell](quickstart-register-system-powershell.md) or [Azure CLI](quickstart-register-system-cli.md) interfaces for registering your systems.
+> You can customize the names of the Managed resource group and the Storage account which get deployed as part of the registration process by using the Azure portal, [Azure PowerShell](quickstart-register-system-powershell.md), or [Azure CLI](quickstart-register-system-cli.md) interfaces when you register your systems.
+
+> [!NOTE]
+> You can now enable secure access to the ACSS managed storage account from specific virtual networks using the [new option in the registration experience](#managed-storage-account-network-access-settings).
## Prerequisites
When you register a system with Azure Center for SAP solutions, the following re
- Use a [**Service tags**](../../virtual-network/service-tags-overview.md) to allow connectivity - Use a [Service tags with regional scope](../../virtual-network/service-tags-overview.md) to allow connectivity to resources in the same region as the VMs. - Allowlist the region-specific IP addresses for Azure Storage, ARM and Microsoft Entra ID.
+- ACSS deploys a **managed storage account** into your subscription for each SAP system being registered. You have the option to choose the [**network access**](#managed-storage-account-network-access-settings) setting for the storage account.
+ - If you choose the network access from specific virtual networks option, make sure the **Microsoft.Storage** service endpoint is enabled on all subnets in which the SAP system virtual machines exist. This service endpoint enables access from the SAP virtual machines to the managed storage account, so that the VM extension can access the scripts that ACSS runs.
+ - If you choose the public network access option, you need to grant access to Azure Storage accounts from the virtual network where the SAP system exists.
- Register the **Microsoft.Workloads** Resource Provider in the subscription where you have the SAP system. - Check that your Azure account has **Azure Center for SAP solutions administrator** and **Managed Identity Operator** or equivalent role access on the subscription or resource groups where you have the SAP system resources. - A **User-assigned managed identity** which has **Azure Center for SAP solutions service role** access on the Compute resource group and **Reader** role access on the Virtual Network resource group of the SAP system. Azure Center for SAP solutions service uses this identity to discover your SAP system resources and register the system as a VIS resource.
To provide permissions to the SAP system resources to a user-assigned managed id
1. [Assign **Azure Center for SAP solutions service role**](../../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md#manage-access-to-user-assigned-managed-identities) role access to the user-assigned managed identity on the resource group(s) which have the Virtual Machines, Disks and Load Balancers of the SAP system and **Reader** role on the resource group(s) which have the Virtual Network components of the SAP system. 1. Once the permissions are assigned, this managed identity can be used in Azure Center for SAP solutions to register and manage SAP systems.
+## Managed storage account network access settings
+ACSS deploys a **managed storage account** into your subscription for each SAP system being registered. When you register your SAP system by using the Azure portal, PowerShell, or the REST API, you have the option to choose the **network access** setting for the storage account.
+
+To secure the managed storage account and limit access to only the virtual network that has your SAP virtual machines, you can choose the network access setting as **Enable access from specific Virtual Networks**. You can learn more about storage account network security in [this documentation](../../storage/common/storage-network-security.md).
+
+> [!IMPORTANT]
+> When you limit storage account network access to specific virtual networks, you have to configure Microsoft.Storage [service endpoint](../../virtual-network/virtual-network-service-endpoints-overview.md) on all subnets related to the SAP system that you are registering. Without the service endpoint enabled, you will not be able to successfully register the system. Private endpoint on managed storage account is not currently supported in this scenario.
+
+When you choose to limit network access to specific virtual networks, Azure Center for SAP solutions service accesses this storage account using [**trusted access**](../../storage/common/storage-network-security.md?tabs=azure-portal#grant-access-to-trusted-azure-services) based on the managed identity associated with the VIS resource.
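If you prefer to prepare the subnets with Azure PowerShell, a minimal sketch of enabling the Microsoft.Storage service endpoint might look like the following. The virtual network name, subnet name, resource group, and address prefix are placeholders for your SAP system's values.

```azurepowershell
# Load the virtual network that contains the SAP subnet (placeholder names).
$vnet = Get-AzVirtualNetwork -Name 'sap-vnet' -ResourceGroupName 'sap-rg'

# Enable the Microsoft.Storage service endpoint on the SAP subnet.
Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name 'sap-subnet' `
    -AddressPrefix '10.2.0.0/24' -ServiceEndpoint 'Microsoft.Storage'

# Persist the change. Repeat for every subnet that hosts SAP system virtual machines.
$vnet | Set-AzVirtualNetwork
```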
+ ## Register SAP system To register an existing SAP system in Azure Center for SAP solutions:
sap Enable Sap Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/monitor/enable-sap-insights.md
In this how-to-guide, learn to enable Insights in Azure Monitor for SAP solution
To enable Insights for Azure Monitor for SAP solutions, you need to:
-1. [Run a PowerShell script for access](#run-a-powershell-script-for-access)
1. [Prerequisite - Unprotect methods](#unprotect-the-getenvironment-method)
+1. [Provide required access](#provide-required-access)
-### Run a PowerShell script for access
+### Unprotect the GetEnvironment method
+Follow steps to unprotect methods from the [NetWeaver provider configuration page](provider-netweaver.md#prerequisite-unprotect-methods-for-metrics).
+<br/>If you completed these steps during NetWeaver provider setup, you can skip this section. Ensure that you have unprotected the GetEnvironment method in particular for this capability to work.
+
+### Provide required access
+In order to provide issue correlations with infrastructure, the Azure Monitor for SAP solutions (AMS) service requires Reader access over the resource groups or subscriptions that hold your SAP system infrastructure - virtual machines and virtual networks. You can create these role assignments by using either of the two methods described below.
-> [!Note]
-> The intent of this step is to give the Azure Monitor for SAP solutions(AMS) instance access to the virtual machines that host the SAP systems you want to monitor. This will help your AMS instance correlate issues you face with Azure infrastructure telemetry, giving you an end-to-end troubleshooting experience.
+#### Provide access using AMS portal experience
+1. Open the AMS instance of your choice and visit the insights tab under Monitoring on the left navigation pane and choose to Configure Insights.
+1. Choose the 'Add role assignment' button to open the role assignment experience.
+1. Choose the scope at which you want to assign the Reader role. You can assign the Reader role to multiple resource groups at a time under a subscription scope. Make sure that the chosen scopes encompass the SAP system's infrastructure on Azure. Save the role assignments.
+
+#### Provide access using a PowerShell script
This script gives your AMS instance Reader role permission over the subscriptions that hold the SAP systems. Feel free to modify the script to scope it down to a resource group or a set of virtual machines. 1. Download the onboarding script [from GitHub](https://github.com/Azure/Azure-Monitor-for-SAP-solutions-preview/blob/main/Scripts/AMS_AIOPS_SETUP.ps1)
$subscriptions = "<Subscription ID 1>","<Subscription ID 2>"
```PowerShell .\AMS_AIOPS_SETUP.ps1 -ArmId $armId ```-
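Conceptually, the role assignment that the script creates boils down to granting Reader to the AMS instance's managed identity at the chosen scope. The following is a hedged, minimal sketch of that single step; the object ID and subscription ID are placeholders, and the downloadable script remains the supported path.

```azurepowershell
# Grant Reader at subscription scope to the AMS managed identity (placeholder IDs).
New-AzRoleAssignment -ObjectId '<ams-managed-identity-object-id>' `
    -RoleDefinitionName 'Reader' `
    -Scope '/subscriptions/<subscription-id>'
```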
-### Unprotect the GetEnvironment method
-
-Follow steps to unprotect methods from the [NetWeaver provider configuration page](provider-netweaver.md#prerequisite-unprotect-methods-for-metrics).
-<br/>If you have already followed these steps during Netweaver provider setup, you can skip this section. Ensure that you have unprotected the GetEnvironment method in particular for this capability to work.
- > [!Important]
-> You might have to wait for up to 2 hours for your AMS to start receiving metadata of the infrastructure that it needs to monitor.
+> You might have to wait for up to 30 minutes for your AMS to start receiving metadata of the infrastructure that it needs to monitor.
## Using Insights on Azure Monitor for SAP Solutions (AMS) We have two categories of issues that we help you get insights for.
This capability helps you get an overview regarding availability of your SAP sys
#### Steps to use availability insights 1. Open the AMS instance of your choice and visit the insights tab under Monitoring on the left navigation pane. :::image type="content" source="./media/enable-sap-insights/visit-insights-tab.png" alt-text="Screenshot that shows the landing page of Insights on AMS.":::
-1. If you have followed all [the steps mentioned](#steps-to-enable-insights-in-azure-monitor-for-sap-solutions), you should see the above screen asking for context to be set up. You can set the Time range, SID and the provider (optional, All selected by default).
+1. If you completed all [the steps mentioned](#steps-to-enable-insights-in-azure-monitor-for-sap-solutions), you should see the above screen asking for context to be set up. You can set the Time range, SID and the provider (optional, All selected by default).
1. On the top, you're able to see all the fired alerts related to SAP system and instance availability on this screen. :::image type="content" source="./media/enable-sap-insights/availability-overview.png" alt-text="Screenshot of the overview page of availability insights.":::
-1. If you're able to see SAP system availability trend, categorized by VM - SAP process list. If you have selected a fired alert in the previous step, you're able to see these trends in context with the fired alert. If not, these trends respect the time range you set on the main Time range filter.
+1. Next, you're able to see the SAP system availability trend, categorized by VM - SAP process list. If you selected a fired alert in the previous step, you're able to see these trends in context with the fired alert. If not, these trends respect the time range you set on the main Time range filter.
:::image type="content" source="./media/enable-sap-insights/availability-trends.png" alt-text="Screenshot of the availability trends of availability insights.":::
-1. You can see the Azure virtual machine on which the process is hosted and the corresponding availability trends for the combination. To view detailed insights, select the Investigate link.
+1. You can see the Azure virtual machine on which the process is hosted and the corresponding availability trends for the combination. To view detailed insights, select the 'Investigate' link.
1. It opens a context pane that shows you availability insights on the corresponding virtual machine and the SAP application. It has two categories of insights: * Azure platform: VM health events filtered by the time range set, either by the workbook filter or the selected alert. This pane also consists of VM availability metric trend for the chosen VM.
This capability helps you get an overview regarding performance of your SAP syst
1. Open the AMS instance of your choice and visit the insights tab under Monitoring on the left navigation pane. 1. On the top, you're able to see all the fired alerts related to SAP application performance degradations. :::image type="content" source="./media/enable-sap-insights/performance-overview.png" alt-text="Screenshot of the overview page of performance insights.":::
-1. Next you're able to see key metrics related to performance issues and its trend during the timerange you have chosen.
+1. Next, you're able to see key metrics related to performance issues and their trends during the time range you chose.
1. To view detailed insights issues, you can either choose to investigate a fired alert or view insights for a key metric. 1. On investigating, you see a context pane, which shows you four categories of metrics in context of the issue/key metric chosen. * Issue/Key metric details - Detailed visualizations of the key metric that defines the problem.
sap Provider Netweaver https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/monitor/provider-netweaver.md
You can collect the below metric using SAP NetWeaver Provider
To configure the NetWeaver provider for the current Azure Monitor for SAP solutions version, you'll need to: 1. [Prerequisite - Unprotect methods for metrics](#prerequisite-unprotect-methods-for-metrics)
-1. [Prerequisite to enable RFC metrics ](#prerequisite-to-enable-rfc-metrics)
+1. [Prerequisite to enable RFC metrics](#prerequisite-to-enable-rfc-metrics)
1. [Add the NetWeaver provider](#adding-netweaver-provider) Refer to troubleshooting section to resolve any issue faced while adding the SAP NetWeaver Provider.
sap High Availability Guide Suse Multi Sid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-suse-multi-sid.md
Previously updated : 06/21/2023 Last updated : 01/17/2024
In the example configurations, installation commands etc. three SAP NetWeaver 7.
* **NW3**: ASCS instance number **20** and virtual hostname **msnw3ascs**; ERS instance number **22** and virtual host name **msnw3ers**. The article doesn't cover the database layer and the deployment of the SAP NFS shares.
-In the examples in this article, we are using virtual names nw2-nfs for the NW2 NFS shares and nw3-nfs for the NW3 NFS shares, assuming that NFS cluster was deployed.
+In the examples in this article, we're using virtual names nw2-nfs for the NW2 NFS shares and nw3-nfs for the NW3 NFS shares, assuming that NFS cluster was deployed.
Before you begin, refer to the following SAP Notes and papers first:
Before you begin, refer to the following SAP Notes and papers first:
* Important capacity information for Azure VM sizes * Supported SAP software, and operating system (OS) and database combinations * Required SAP kernel version for Windows and Linux on Microsoft Azure- * SAP Note [2015553][2015553] lists prerequisites for SAP-supported SAP software deployments in Azure. * SAP Note [2205917][2205917] has recommended OS settings for SUSE Linux Enterprise Server for SAP Applications * SAP Note [1944799][1944799] has SAP HANA Guidelines for SUSE Linux Enterprise Server for SAP Applications
Before you begin, refer to the following SAP Notes and papers first:
* [Azure Virtual Machines planning and implementation for SAP on Linux][planning-guide] * [Azure Virtual Machines deployment for SAP on Linux][deployment-guide] * [Azure Virtual Machines DBMS deployment for SAP on Linux][dbms-guide]
-* [SUSE SAP HA Best Practice Guides][suse-ha-guide]
- The guides contain all required information to set up Netweaver HA and SAP HANA System Replication on-premises. Use these guides as a general baseline. They provide much more detailed information.
+* [SUSE SAP HA Best Practice Guides][suse-ha-guide] - The guides contain all required information to set up Netweaver HA and SAP HANA System Replication on-premises. Use these guides as a general baseline. They provide much more detailed information.
* [SUSE High Availability Extension 12 SP3 Release Notes][suse-ha-12sp3-relnotes] * [SUSE multi-SID cluster guide for SLES 12 and SLES 15](https://documentation.suse.com/sbp/all/html/SBP-SAP-MULTI-SID/https://docsupdatetracker.net/index.html) * [NetApp SAP Applications on Microsoft Azure using Azure NetApp Files][anf-sap-applications-azure]
The presented configuration for this multi-SID cluster example with three SAP sy
> When VMs without public IP addresses are placed in the backend pool of internal (no public IP address) Standard Azure load balancer, there will be no outbound internet connectivity, unless additional configuration is performed to allow routing to public end points. For details on how to achieve outbound connectivity see [Public endpoint connectivity for Virtual Machines using Azure Standard Load Balancer in SAP high-availability scenarios](./high-availability-guide-standard-load-balancer-outbound-connections.md). > [!IMPORTANT]
-> Do not enable TCP timestamps on Azure VMs placed behind Azure Load Balancer. Enabling TCP timestamps will cause the health probes to fail. Set parameter **net.ipv4.tcp_timestamps** to **0**. For details see [Load Balancer health probes](../../load-balancer/load-balancer-custom-probe-overview.md).
+>
+> * Don't enable TCP time stamps on Azure VMs placed behind Azure Load Balancer. Enabling TCP timestamps will cause the health probes to fail. Set the `net.ipv4.tcp_timestamps` parameter to `0`. For details, see [Load Balancer health probes](../../load-balancer/load-balancer-custom-probe-overview.md).
+> * To prevent saptune from changing the manually set `net.ipv4.tcp_timestamps` value from `0` back to `1`, you should update saptune version to 3.1.1 or higher. For more information, see [saptune 3.1.1 – Do I Need to Update?](https://www.suse.com/c/saptune-3-1-1-do-i-need-to-update/).
## SAP NFS shares
-SAP NetWeaver requires shared storage for the transport, profile directory, and so on. For highly available SAP system, it is important to have highly available NFS shares. You will need to decide on the architecture for your SAP NFS shares. One option is to build [Highly available NFS cluster on Azure VMs on SUSE Linux Enterprise Server][nfs-ha], which can be shared between multiple SAP systems.
+SAP NetWeaver requires shared storage for the transport, profile directory, and so on. For a highly available SAP system, it's important to have highly available NFS shares. You need to decide on the architecture for your SAP NFS shares. One option is to build [Highly available NFS cluster on Azure VMs on SUSE Linux Enterprise Server][nfs-ha], which can be shared between multiple SAP systems.
-Another option is to deploy the shares on [Azure NetApp Files NFS volumes](../../azure-netapp-files/azure-netapp-files-create-volumes.md). With Azure NetApp Files, you will get built-in high availability for the SAP NFS shares.
+Another option is to deploy the shares on [Azure NetApp Files NFS volumes](../../azure-netapp-files/azure-netapp-files-create-volumes.md). With Azure NetApp Files, you get built-in high availability for the SAP NFS shares.
## Deploy the first SAP system in the cluster
-Now that you have decided on the architecture for the SAP NFS shares, deploy the first SAP system in the cluster, following the corresponding documentation.
+Based on the architecture for the SAP NFS shares, deploy the first SAP system in the cluster, following the corresponding documentation.
* If using highly available NFS server, follow [High availability for SAP NetWeaver on Azure VMs on SUSE Linux Enterprise Server for SAP applications](./high-availability-guide-suse.md). * If using Azure NetApp Files NFS volumes, follow [High availability for SAP NetWeaver on Azure VMs on SUSE Linux Enterprise Server with Azure NetApp Files for SAP applications](./high-availability-guide-suse-netapp-files.md)
-The documents listed above will guide you through the steps to prepare the necessary infrastructures, build the cluster, prepare the OS for running the SAP application.
+The documents listed above guide you through the steps to prepare the necessary infrastructure, build the cluster, and prepare the OS for running the SAP application.
> [!TIP] > Always test the fail over functionality of the cluster, after the first system is deployed, before adding the additional SAP SIDs to the cluster. That way you will know that the cluster functionality works, before adding the complexity of additional SAP systems to the cluster.
This documentation assumes that:
* The Pacemaker cluster is already configured and running. * At least one SAP system (ASCS / ERS instance) is already deployed and is running in the cluster.
-* The cluster fail over functionality has been tested.
+* The cluster fail over functionality is tested.
* The NFS shares for all SAP systems are deployed. ### Prepare for SAP NetWeaver Installation
-1. Add configuration for the newly deployed system (that is, **NW2**, **NW3**) to the existing Azure Load Balancer, following the instructions [Deploy Azure Load Balancer manually via Azure portal](./high-availability-guide-suse-netapp-files.md#deploy-azure-load-balancer-manually-via-azure-portal). Adjust the IP addresses, health probe ports, load-balancing rules for your configuration.
+1. Add configuration for the newly deployed system (that is, **NW2**, **NW3**) to the existing Azure Load Balancer, following the instructions [configure Azure Load Balancer manually via Azure portal](./high-availability-guide-suse-netapp-files.md#configure-azure-load-balancer). Adjust the IP addresses, health probe ports, load-balancing rules for your configuration.
2. **[A]** Set up name resolution for the additional SAP systems. You can either use DNS server or modify `/etc/hosts` on all nodes. This example shows how to use the `/etc/hosts` file. Adapt the IP addresses and the host names to your environment.
This documentation assumes that:
10.3.1.32 nw3-nfs ```
-3. **[A]** Create the shared directories for the additional **NW2** and **NW3** SAP systems that you are deploying to the cluster.
+3. **[A]** Create the shared directories for the additional **NW2** and **NW3** SAP systems that you're deploying to the cluster.
```bash sudo mkdir -p /sapmnt/NW2
This documentation assumes that:
sudo chattr +i /usr/sap/NW3/ERS22 ```
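   A sketch of the full set of commands, assuming the instance numbers used in this example (NW2 ASCS10/ERS12, NW3 ASCS20/ERS22); adjust the SIDs and instance numbers to your systems:

   ```bash
   sudo mkdir -p /sapmnt/NW2 /usr/sap/NW2/SYS /usr/sap/NW2/ASCS10 /usr/sap/NW2/ERS12
   sudo mkdir -p /sapmnt/NW3 /usr/sap/NW3/SYS /usr/sap/NW3/ASCS20 /usr/sap/NW3/ERS22

   # Protect the mount points so that nothing writes to them while the shares are unmounted
   sudo chattr +i /sapmnt/NW2 /usr/sap/NW2/SYS /usr/sap/NW2/ASCS10 /usr/sap/NW2/ERS12
   sudo chattr +i /sapmnt/NW3 /usr/sap/NW3/SYS /usr/sap/NW3/ASCS20 /usr/sap/NW3/ERS22
   ```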
-4. **[A]** Configure `autofs` to mount the /sapmnt/SID and /usr/sap/SID/SYS file systems for the additional SAP systems that you are deploying to the cluster. In this example **NW2** and **NW3**.
+4. **[A]** Configure `autofs` to mount the /sapmnt/SID and /usr/sap/SID/SYS file systems for the additional SAP systems that you're deploying to the cluster. In this example **NW2** and **NW3**.
- Update file `/etc/auto.direct` with the file systems for the additional SAP systems that you are deploying to the cluster.
+ Update file `/etc/auto.direct` with the file systems for the additional SAP systems that you're deploying to the cluster.
* If using NFS file server, follow the instructions on the [Azure VMs high availability for SAP NetWeaver on SLES](./high-availability-guide-suse.md#prepare-for-sap-netweaver-installation) page.
* If using Azure NetApp Files, follow the instructions on the [Azure VMs high availability for SAP NW on SLES with Azure NetApp Files](./high-availability-guide-suse-netapp-files.md#prepare-for-sap-netweaver-installation) page.
- You will need to restart the `autofs` service to mount the newly added shares.
+ You need to restart the `autofs` service to mount the newly added shares.
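   A sketch of what the additional `/etc/auto.direct` entries and the service restart could look like when using the highly available NFS server; the export names (`sapmntsid`, `sidsys`) are assumptions, so adjust them to your NFS layout:

   ```bash
   # Hypothetical direct-map entries for NW2 and NW3 in /etc/auto.direct
   # /sapmnt/NW2        -nfsvers=4,nosymlink,sync  nw2-nfs:/NW2/sapmntsid
   # /usr/sap/NW2/SYS   -nfsvers=4,nosymlink,sync  nw2-nfs:/NW2/sidsys
   # /sapmnt/NW3        -nfsvers=4,nosymlink,sync  nw3-nfs:/NW3/sapmntsid
   # /usr/sap/NW3/SYS   -nfsvers=4,nosymlink,sync  nw3-nfs:/NW3/sidsys

   # Restart autofs so that the newly added shares are mounted
   sudo systemctl restart autofs
   ```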
### Install ASCS / ERS
-1. Create the virtual IP and health probe cluster resources for the ASCS instance of the additional SAP system you are deploying to the cluster. The example shown here is for **NW2** and **NW3** ASCS, using highly available NFS server.
+1. Create the virtual IP and health probe cluster resources for the ASCS instance of the additional SAP system you're deploying to the cluster. The example shown here is for **NW2** and **NW3** ASCS, using highly available NFS server.
> [!IMPORTANT]
> Recent testing revealed situations where netcat stops responding to requests due to backlog and its limitation of handling only one connection. The netcat resource stops listening to the Azure Load Balancer requests and the floating IP becomes unavailable.
This documentation assumes that:
meta resource-stickiness=3000 ```
- As you creating the resources they may be assigned to different cluster resources. When you group them, they will migrate to one of the cluster nodes. Make sure the cluster status is ok and that all resources are started. It is not important on which node the resources are running.
+ As you create the resources, they might be assigned to different cluster nodes. When you group them, they'll migrate to one of the cluster nodes. Make sure the cluster status is OK and that all resources are started. It isn't important on which node the resources are running.
2. **[1]** Install SAP NetWeaver ASCS
This documentation assumes that:
If the installation fails to create a subfolder in /usr/sap/**SID**/ASCS**Instance#**, try setting the owner of ASCS**Instance#** to **sid**adm and the group to sapsys, and retry.
-3. **[1]** Create a virtual IP and health-probe cluster resources for the ERS instance of the additional SAP system you are deploying to the cluster. The example shown here is for **NW2** and **NW3** ERS, using highly available NFS server.
+3. **[1]** Create a virtual IP and health-probe cluster resources for the ERS instance of the additional SAP system you're deploying to the cluster. The example shown here is for **NW2** and **NW3** ERS, using highly available NFS server.
```bash sudo crm configure primitive fs_NW2_ERS Filesystem device='nw2-nfs:/NW2/ASCSERS' directory='/usr/sap/NW2/ERS12' fstype='nfs4' \
This documentation assumes that:
sudo crm configure group g-NW3_ERS fs_NW3_ERS nc_NW3_ERS vip_NW3_ERS ```
- As you creating the resources they may be assigned to different cluster nodes. When you group them, they will migrate to one of the cluster nodes. Make sure the cluster status is ok and that all resources are started.
+ As you create the resources, they might be assigned to different cluster nodes. When you group them, they'll migrate to one of the cluster nodes. Make sure the cluster status is OK and that all resources are started.
Next, make sure that the resources of the newly created ERS group are running on the cluster node opposite to the cluster node where the ASCS instance for the same SAP system was installed. For example, if NW2 ASCS was installed on `slesmsscl1`, then make sure the NW2 ERS group is running on `slesmsscl2`. You can migrate the NW2 ERS group to `slesmsscl2` by running the following command:
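For example (a sketch, using the ERS group name created above; adjust the group and node names to your system):

```bash
crm resource migrate g-NW2_ERS slesmsscl2
```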
This documentation assumes that:
4. **[2]** Install SAP NetWeaver ERS
- Install SAP NetWeaver ERS as root on the other node, using a virtual hostname that maps to the IP address of the load balancer frontend configuration for the ERS. For example for system **NW2**, the virtual host name will be **msnw2ers**, **10.3.1.17** and the instance number that you used for the probe of the load balancer, for example **12**. For system **NW3**, the virtual host name **msnw3ers**, **10.3.1.19** and the instance number that you used for the probe of the load balancer, for example **22**.
+ Install SAP NetWeaver ERS as root on the other node, using a virtual hostname that maps to the IP address of the load balancer frontend configuration for the ERS. For example, for system **NW2**, the virtual host name is **msnw2ers**, the IP address is **10.3.1.17**, and the instance number that you used for the probe of the load balancer is, for example, **12**. For system **NW3**, the virtual host name is **msnw3ers**, the IP address is **10.3.1.19**, and the instance number is, for example, **22**.
You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a non-root user to connect to sapinst. You can use the parameter SAPINST_USE_HOSTNAME to install SAP using the virtual host name.
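A minimal sketch of such a sapinst call; the installation media location and the remote access user name are assumptions:

```bash
# Run from the extracted SWPM directory; msnw2ers is the ERS virtual host name from this example
sudo ./sapinst SAPINST_USE_HOSTNAME=msnw2ers SAPINST_REMOTE_ACCESS_USER=sapadmin
```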
This documentation assumes that:
crm resource unmigrate g-NW3_ERS ```
-5. **[1]** Adapt the ASCS/SCS and ERS instance profiles for the newly installed SAP system(s). The example shown below is for NW2. You will need to adapt the ASCS/SCS and ERS profiles for all SAP instances added to the cluster.
+5. **[1]** Adapt the ASCS/SCS and ERS instance profiles for the newly installed SAP system(s). The example shown below is for NW2. You'll need to adapt the ASCS/SCS and ERS profiles for all SAP instances added to the cluster.
* ASCS/SCS profile
This documentation assumes that:
8. **[1]** Create the SAP cluster resources for the newly installed SAP system.
- If using enqueue server 1 architecture (ENSA1), define the resources for SAP systems **NW2** and **NW3** as follows:
+ Depending on whether you're running an ENSA1 or ENSA2 system, select the respective tab to define the resources for the **NW2** and **NW3** systems. SAP introduced support for [ENSA2](https://help.sap.com/docs/ABAP_PLATFORM_NEW/cff8531bc1d9416d91bb6781e628d4e0/6d655c383abf4c129b0e5c8683e7ecd8.html), including replication, in SAP NetWeaver 7.52. Starting with ABAP Platform 1809, ENSA2 is installed by default. For ENSA2 support, see SAP Note [2630416](https://launchpad.support.sap.com/#/notes/2630416).
+
+ #### [ENSA1](#tab/ensa1)
```bash sudo crm configure property maintenance-mode="true"
This documentation assumes that:
sudo crm configure property maintenance-mode="false" ```
- SAP introduced support for enqueue server 2, including replication, as of SAP NW 7.52. Starting with ABAP Platform 1809, enqueue server 2 is installed by default. See SAP note [2630416](https://launchpad.support.sap.com/#/notes/2630416) for enqueue server 2 support.
- If using enqueue server 2 architecture ([ENSA2](https://help.sap.com/viewer/cff8531bc1d9416d91bb6781e628d4e0/1709%20001/en-US/6d655c383abf4c129b0e5c8683e7ecd8.html)), define the resources for SAP systems **NW2** and **NW3** as follows:
+ #### [ENSA2](#tab/ensa2)
```bash sudo crm configure property maintenance-mode="true"
This documentation assumes that:
sudo crm configure property maintenance-mode="false" ```
- If you are upgrading from an older version and switching to enqueue server 2, see SAP note [2641019](https://launchpad.support.sap.com/#/notes/2641019).
+
- Make sure that the cluster status is ok and that all resources are started. It is not important on which node the resources are running.
- The following example shows the cluster resources status, after SAP systems **NW2** and **NW3** were added to the cluster.
+If you're upgrading from an older version and switching to enqueue server 2, see SAP note [2641019](https://launchpad.support.sap.com/#/notes/2641019).
- ```bash
- sudo crm_mon -r
-
- # Online: [ slesmsscl1 slesmsscl2 ]
-
- #Full list of resources:
-
- #stonith-sbd (stonith:external/sbd): Started slesmsscl1
- # Resource Group: g-NW1_ASCS
- # fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started slesmsscl2
- # nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started slesmsscl2
- # vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started slesmsscl2
- # rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started slesmsscl2
- # Resource Group: g-NW1_ERS
- # fs_NW1_ERS (ocf::heartbeat:Filesystem): Started slesmsscl1
- # nc_NW1_ERS (ocf::heartbeat:azure-lb): Started slesmsscl1
- # vip_NW1_ERS (ocf::heartbeat:IPaddr2): Started slesmsscl1
- # rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started slesmsscl1
- # Resource Group: g-NW2_ASCS
- # fs_NW2_ASCS (ocf::heartbeat:Filesystem): Started slesmsscl1
- # nc_NW2_ASCS (ocf::heartbeat:azure-lb): Started slesmsscl1
- # vip_NW2_ASCS (ocf::heartbeat:IPaddr2): Started slesmsscl1
- # rsc_sap_NW2_ASCS10 (ocf::heartbeat:SAPInstance): Started slesmsscl1
- # Resource Group: g-NW2_ERS
- # fs_NW2_ERS (ocf::heartbeat:Filesystem): Started slesmsscl2
- # nc_NW2_ERS (ocf::heartbeat:azure-lb): Started slesmsscl2
- # vip_NW2_ERS (ocf::heartbeat:IPaddr2): Started slesmsscl2
- # rsc_sap_NW2_ERS12 (ocf::heartbeat:SAPInstance): Started slesmsscl2
- # Resource Group: g-NW3_ASCS
- # fs_NW3_ASCS (ocf::heartbeat:Filesystem): Started slesmsscl1
- # nc_NW3_ASCS (ocf::heartbeat:azure-lb): Started slesmsscl1
- # vip_NW3_ASCS (ocf::heartbeat:IPaddr2): Started slesmsscl1
- # rsc_sap_NW3_ASCS20 (ocf::heartbeat:SAPInstance): Started slesmsscl1
- # Resource Group: g-NW3_ERS
- # fs_NW3_ERS (ocf::heartbeat:Filesystem): Started slesmsscl2
- # nc_NW3_ERS (ocf::heartbeat:azure-lb): Started slesmsscl2
- # vip_NW3_ERS (ocf::heartbeat:IPaddr2): Started slesmsscl2
- # rsc_sap_NW3_ERS22 (ocf::heartbeat:SAPInstance): Started slesmsscl2
- ```
+Make sure that the cluster status is ok and that all resources are started. It isn't important on which node the resources are running.
- The following picture shows how the resources would look like in the HA Web Konsole(Hawk), with the resources for SAP system **NW2** expanded.
+The following example shows the cluster resource status after SAP systems **NW2** and **NW3** were added to the cluster.
- [![SAP NetWeaver High Availability overview](./media/high-availability-guide-suse/ha-suse-multi-sid-hawk.png)](./media/high-availability-guide-suse/ha-suse-multi-sid-hawk-detail.png#lightbox)
+```bash
+sudo crm_mon -r
+
+# Online: [ slesmsscl1 slesmsscl2 ]
+
+#Full list of resources:
+
+#stonith-sbd (stonith:external/sbd): Started slesmsscl1
+# Resource Group: g-NW1_ASCS
+# fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started slesmsscl2
+# nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started slesmsscl2
+# vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started slesmsscl2
+# rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started slesmsscl2
+# Resource Group: g-NW1_ERS
+# fs_NW1_ERS (ocf::heartbeat:Filesystem): Started slesmsscl1
+# nc_NW1_ERS (ocf::heartbeat:azure-lb): Started slesmsscl1
+# vip_NW1_ERS (ocf::heartbeat:IPaddr2): Started slesmsscl1
+# rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started slesmsscl1
+# Resource Group: g-NW2_ASCS
+# fs_NW2_ASCS (ocf::heartbeat:Filesystem): Started slesmsscl1
+# nc_NW2_ASCS (ocf::heartbeat:azure-lb): Started slesmsscl1
+# vip_NW2_ASCS (ocf::heartbeat:IPaddr2): Started slesmsscl1
+# rsc_sap_NW2_ASCS10 (ocf::heartbeat:SAPInstance): Started slesmsscl1
+# Resource Group: g-NW2_ERS
+# fs_NW2_ERS (ocf::heartbeat:Filesystem): Started slesmsscl2
+# nc_NW2_ERS (ocf::heartbeat:azure-lb): Started slesmsscl2
+# vip_NW2_ERS (ocf::heartbeat:IPaddr2): Started slesmsscl2
+# rsc_sap_NW2_ERS12 (ocf::heartbeat:SAPInstance): Started slesmsscl2
+# Resource Group: g-NW3_ASCS
+# fs_NW3_ASCS (ocf::heartbeat:Filesystem): Started slesmsscl1
+# nc_NW3_ASCS (ocf::heartbeat:azure-lb): Started slesmsscl1
+# vip_NW3_ASCS (ocf::heartbeat:IPaddr2): Started slesmsscl1
+# rsc_sap_NW3_ASCS20 (ocf::heartbeat:SAPInstance): Started slesmsscl1
+# Resource Group: g-NW3_ERS
+# fs_NW3_ERS (ocf::heartbeat:Filesystem): Started slesmsscl2
+# nc_NW3_ERS (ocf::heartbeat:azure-lb): Started slesmsscl2
+# vip_NW3_ERS (ocf::heartbeat:IPaddr2): Started slesmsscl2
+# rsc_sap_NW3_ERS22 (ocf::heartbeat:SAPInstance): Started slesmsscl2
+```
+
+The following picture shows how the resources look in the HA Web Konsole (Hawk), with the resources for SAP system **NW2** expanded.
+
+[![SAP NetWeaver High Availability overview](./media/high-availability-guide-suse/ha-suse-multi-sid-hawk.png)](./media/high-availability-guide-suse/ha-suse-multi-sid-hawk-detail.png#lightbox)
### Proceed with the SAP installation
Complete your SAP installation by:
## Test the multi-SID cluster setup
-The following tests are a subset of the test cases in the best practices guides of SUSE. They are included for your convenience. For the full list of cluster tests, reference the following documentation:
+The following tests are a subset of the test cases in the best practices guides of SUSE. They're included for your convenience. For the full list of cluster tests, reference the following documentation:
* If using highly available NFS server, follow [High availability for SAP NetWeaver on Azure VMs on SUSE Linux Enterprise Server for SAP applications](./high-availability-guide-suse.md).
* If using Azure NetApp Files NFS volumes, follow [High availability for SAP NetWeaver on Azure VMs on SUSE Linux Enterprise Server with Azure NetApp Files for SAP applications](./high-availability-guide-suse-netapp-files.md).

Always read the SUSE best practices guides and perform all additional tests that might have been added.
-The tests that are presented are in a two node, multi-SID cluster with three SAP systems installed.
+The tests that are presented are in a two-node, multi-SID cluster with three SAP systems installed.
1. Test HAGetFailoverConfig and HACheckFailoverConfig
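   A sketch of running these checks for one of the systems, assuming the NW2 ASCS instance number **10** used in this example; run the commands as the corresponding `<sid>adm` user:

   ```bash
   sapcontrol -nr 10 -function HAGetFailoverConfig
   sapcontrol -nr 10 -function HACheckFailoverConfig
   ```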
The tests that are presented are in a two node, multi-SID cluster with three SAP
slesmsscl2:~ # echo b > /proc/sysrq-trigger ```
- If you use SBD, Pacemaker should not automatically start on the killed node. The status after the node is started again should look like this.
+ If you use SBD, Pacemaker shouldn't automatically start on the killed node. The status after the node is started again should look like this.
```text Online: [ slesmsscl1 ]
sap High Availability Guide Suse Netapp Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-suse-netapp-files.md
Previously updated : 09/15/2023 Last updated : 01/17/2024
When considering Azure NetApp Files for the SAP Netweaver on SUSE High Availabil
* The Azure NetApp Files feature isn't zone aware yet. Currently, the feature isn't deployed in all availability zones in an Azure region. Be aware of the potential latency implications in some Azure regions.
* Azure NetApp Files volumes can be deployed as NFSv3 or NFSv4.1 volumes. Both protocols are supported for the SAP application layer (ASCS/ERS, SAP application servers).
-## Deploy Linux VMs manually via Azure portal
+## Prepare infrastructure
+
+The resource agent for SAP Instance is included in SUSE Linux Enterprise Server for SAP Applications. An image for SUSE Linux Enterprise Server for SAP Applications 12 or 15 is available in Azure Marketplace. You can use the image to deploy new VMs.
+
+### Deploy Linux VMs manually via Azure portal
This document assumes that you've already deployed a resource group, [Azure Virtual Network](../../virtual-network/virtual-networks-overview.md), and subnet.
-Deploy virtual machines for SAP ASCS, ERS, and application server instances. Choose a suitable SLES image that is supported with your SAP system. You can deploy VM in any one of the availability options - scale set, availability zone or availability set.
+Deploy virtual machines with an SLES for SAP Applications image. Choose a suitable version of the SLES image that is supported for your SAP system. You can deploy the VMs in any of the availability options: virtual machine scale set, availability zone, or availability set.
+
+### Configure Azure load balancer
+
+During VM configuration, you have the option to create or select an existing load balancer in the networking section. Follow the steps below to configure a standard load balancer for the high-availability setup of SAP ASCS and SAP ERS.
+
+#### [Azure portal](#tab/lb-portal)
++
+#### [Azure CLI](#tab/lb-azurecli)
++
+#### [PowerShell](#tab/lb-powershell)
++++
+> [!IMPORTANT]
+> Floating IP is not supported on a NIC secondary IP configuration in load-balancing scenarios. For details, see [Azure Load Balancer limitations](../../load-balancer/load-balancer-multivip-overview.md#limitations). If you need an additional IP address for the VM, deploy a second NIC.
+
+> [!NOTE]
+> When VMs without public IP addresses are placed in the backend pool of internal (no public IP address) Standard Azure load balancer, there will be no outbound internet connectivity, unless additional configuration is performed to allow routing to public end points. For details on how to achieve outbound connectivity see [Public endpoint connectivity for Virtual Machines using Azure Standard Load Balancer in SAP high-availability scenarios](./high-availability-guide-standard-load-balancer-outbound-connections.md).
+
+> [!IMPORTANT]
+>
+> * Don't enable TCP time stamps on Azure VMs placed behind Azure Load Balancer. Enabling TCP timestamps will cause the health probes to fail. Set the `net.ipv4.tcp_timestamps` parameter to `0`. For details, see [Load Balancer health probes](../../load-balancer/load-balancer-custom-probe-overview.md).
+> * To prevent saptune from changing the manually set `net.ipv4.tcp_timestamps` value from `0` back to `1`, you should update saptune version to 3.1.1 or higher. For more details, see [saptune 3.1.1 ΓÇô Do I Need to Update?](https://www.suse.com/c/saptune-3-1-1-do-i-need-to-update/).
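A sketch of setting the parameter persistently; the drop-in file name is an assumption:

```bash
# Disable TCP timestamps and reload the sysctl configuration
echo "net.ipv4.tcp_timestamps = 0" | sudo tee /etc/sysctl.d/99-sap-lb-probes.conf
sudo sysctl --system
```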
## Disable ID mapping (if using NFSv4.1)

The instructions in this section are only applicable if you're using Azure NetApp Files volumes with the NFSv4.1 protocol. Perform the configuration on all VMs where Azure NetApp Files NFSv4.1 volumes will be mounted.
-1. Verify the NFS domain setting. Make sure that the domain is configured as the default Azure NetApp Files domain, that is, **`defaultv4iddomain.com`** and the mapping is set to **nobody**.
+1. Verify the NFS domain setting. Make sure that the domain is configured as the default Azure NetApp Files domain, that is, **`defaultv4iddomain.com`**, and the mapping is set to **nobody**.
> [!IMPORTANT]
> Make sure to set the NFS domain in `/etc/idmapd.conf` on the VM to match the default domain configuration on Azure NetApp Files: **`defaultv4iddomain.com`**. If there's a mismatch between the domain configuration on the NFS client (that is, the VM) and the NFS server (that is, the Azure NetApp Files configuration), then the permissions for files on Azure NetApp Files volumes that are mounted on the VMs will be displayed as `nobody`.
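For example, a quick check of the current settings (a sketch; the expected values are shown as comments):

```bash
sudo cat /etc/idmapd.conf
# [General]
# Domain = defaultv4iddomain.com
# [Mapping]
# Nobody-User = nobody
# Nobody-Group = nobody
```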
The instructions in this section are only applicable, if using Azure NetApp File
## Setting up (A)SCS
-In this example, the resources were deployed manually via the [Azure portal](https://portal.azure.com/#home) .
-
-### Deploy Azure Load Balancer manually via Azure portal
-
-After you deploy the VMs for your SAP system, create a load balancer. Use VMs created for SAP ASCS/ERS instances in the backend pool.
-
-1. Create load balancer (internal, standard):
- 1. Create the frontend IP addresses
- 1. IP address 10.1.1.20 for the ASCS
- 1. Open the load balancer, select frontend IP pool, and click Add
- 2. Enter the name of the new frontend IP pool (for example **frontend.QAS.ASCS**)
- 3. Set the Assignment to Static and enter the IP address (for example **10.1.1.20**)
- 4. Click OK
- 2. IP address 10.1.1.21 for the ASCS ERS
- * Repeat the steps above under "a" to create an IP address for the ERS (for example **10.1.1.21** and **frontend.QAS.ERS**)
- 2. Create a single back-end pool:
- 1. Open the load balancer, select **Backend pools**, and then select **Add**.
- 2. Enter the name of the new back-end pool (for example, **backend.QAS**).
- 3. Select **NIC** for Backend Pool Configuration.
- 4. Select **Add a virtual machine**.
- 5. Select the virtual machines of the ASCS cluster.
- 6. Select **Add**.
- 7. Select **Save**.
- 3. Create the health probes
- 1. Port 620**00** for ASCS
- 1. Open the load balancer, select health probes, and click Add
- 2. Enter the name of the new health probe (for example **health.QAS.ASCS**)
- 3. Select TCP as protocol, port 620**00**, keep Interval 5
- 4. Click OK
- 2. Port 621**01** for ASCS ERS
- * Repeat the steps above under "c" to create a health probe for the ERS (for example 621**01** and **health.QAS.ERS**)
- 4. Load-balancing rules
- 1. Create a backend pool for the ASCS
- 1. Open the load balancer, select Load-balancing rules and click Add
- 2. Enter the name of the new load balancer rule (for example **lb.QAS.ASCS**)
- 3. Select the frontend IP address for ASCS, backend pool, and health probe you created earlier (for example **frontend.QAS.ASCS**, **backend.QAS** and **health.QAS.ASCS**)
- 4. Select **HA ports**
- 5. Increase idle timeout to 30 minutes
- 6. **Make sure to enable Floating IP**
- 7. Click OK
- * Repeat the steps above to create load balancing rules for ERS (for example **lb.QAS.ERS**)
-
-> [!IMPORTANT]
-> Floating IP is not supported on a NIC secondary IP configuration in load-balancing scenarios. For details see [Azure Load balancer Limitations](../../load-balancer/load-balancer-multivip-overview.md#limitations). If you need additional IP address for the VM, deploy a second NIC.
-
-> [!NOTE]
-> When VMs without public IP addresses are placed in the backend pool of internal (no public IP address) Standard Azure load balancer, there will be no outbound internet connectivity, unless additional configuration is performed to allow routing to public end points. For details on how to achieve outbound connectivity see [Public endpoint connectivity for Virtual Machines using Azure Standard Load Balancer in SAP high-availability scenarios](./high-availability-guide-standard-load-balancer-outbound-connections.md).
-
-> [!IMPORTANT]
-> Do not enable TCP timestamps on Azure VMs placed behind Azure Load Balancer. Enabling TCP timestamps will cause the health probes to fail. Set parameter **net.ipv4.tcp_timestamps** to **0**. For details see [Load Balancer health probes](../../load-balancer/load-balancer-custom-probe-overview.md).
+Next, you'll prepare and install the SAP ASCS and ERS instances.
### Create Pacemaker cluster
The following items are prefixed with either **[A]** - applicable to all nodes,
2. **[A]** Update SAP resource agents
- A patch for the resource-agents package is required to use the new configuration, that is described in this article. You can check, if the patch is already installed with the following command
+ A patch for the resource-agents package is required to use the new configuration that is described in this article. You can check whether the patch is already installed with the following command:
```bash sudo grep 'parameter name="IS_ERS"' /usr/lib/ocf/resource.d/heartbeat/SAPInstance
The following items are prefixed with either **[A]** - applicable to all nodes,
9. **[1]** Create the SAP cluster resources.
- If using enqueue server 1 architecture (ENSA1), define the resources as follows:
+ Depending on whether you're running an ENSA1 or ENSA2 system, select the respective tab to define the resources. SAP introduced support for [ENSA2](https://help.sap.com/docs/ABAP_PLATFORM_NEW/cff8531bc1d9416d91bb6781e628d4e0/6d655c383abf4c129b0e5c8683e7ecd8.html), including replication, in SAP NetWeaver 7.52. Starting with ABAP Platform 1809, ENSA2 is installed by default. For ENSA2 support, see SAP Note [2630416](https://launchpad.support.sap.com/#/notes/2630416).
+
+ #### [ENSA1](#tab/ensa1)
```bash sudo crm configure property maintenance-mode="true"
The following items are prefixed with either **[A]** - applicable to all nodes,
sudo crm configure property maintenance-mode="false" ```
- SAP introduced support for enqueue server 2, including replication, as of SAP NW 7.52. Starting with ABAP Platform 1809, enqueue server 2 is installed by default. See SAP note [2630416](https://launchpad.support.sap.com/#/notes/2630416) for enqueue server 2 support.
-
- If using enqueue server 2 architecture ([ENSA2](https://help.sap.com/viewer/cff8531bc1d9416d91bb6781e628d4e0/1709%20001/en-US/6d655c383abf4c129b0e5c8683e7ecd8.html)), define the resources as follows:
+ #### [ENSA2](#tab/ensa2)
> [!NOTE]
> If you have a two-node cluster running ENSA2, you have the option to configure the priority-fencing-delay cluster property. This property introduces additional delay in fencing a node that has higher total resource priority when a split-brain scenario occurs. For more information, see [SUSE Linux Enterprise Server high availability extension administration guide](https://documentation.suse.com/sle-ha/15-SP3/single-html/SLE-HA-administration/#pro-ha-storage-protect-fencing).
The following items are prefixed with either **[A]** - applicable to all nodes,
sudo crm configure property maintenance-mode="false" ```
- If you're upgrading from an older version and switching to enqueue server 2, see SAP note [2641019](https://launchpad.support.sap.com/#/notes/2641019).
+
+
+If you're upgrading from an older version and switching to enqueue server 2, see SAP note [2641019](https://launchpad.support.sap.com/#/notes/2641019).
> [!NOTE] > The higher timeouts, suggested when using NFSv4.1 are necessary due to protocol-specific pause, related to NFSv4.1 lease renewals. For more information, see [NFS in NetApp Best practice](https://www.netapp.com/media/10720-tr-4067.pdf).
sap High Availability Guide Suse Nfs Azure Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-suse-nfs-azure-files.md
Previously updated : 09/15/2023 Last updated : 01/17/2024
The example configurations and installation commands use the following instance
## Prepare infrastructure
-This document assumes that you've already deployed an [Azure Virtual Network](../../virtual-network/virtual-networks-overview.md), subnet and resource group.
+The resource agent for SAP Instance is included in SUSE Linux Enterprise Server for SAP Applications. An image for SUSE Linux Enterprise Server for SAP Applications 12 or 15 is available in Azure Marketplace. You can use the image to deploy new VMs.
-1. Deploy your VMs. You can deploy VMs in virtual machine scale sets, availability zones, or in availability set, if the Azure region supports these options. If you need additional IP addresses for your VMs, deploy and attach a second NIC. DonΓÇÖt add secondary IP addresses to the primary NIC. [Azure Load Balancer Floating IP doesn't support this scenario](../../load-balancer/load-balancer-multivip-overview.md#limitations).
-2. For your virtual IPs, deploy and configure an Azure [load balancer](../../load-balancer/load-balancer-overview.md). It's recommended to use a [Standard load balancer](../../load-balancer/quickstart-load-balancer-standard-public-portal.md).
- 1. Configure two frontend IPs: one for ASCS (`10.90.90.10`) and one for ERS (`10.90.90.9`).
- 2. Create a backend pool and add both VMs, which will be part of the cluster.
- 3. Create the health probe for ASCS. The probe port is `62000`. Create the probe port for ERS. The ERS probe port is `62101`. When you configure the Pacemaker resources later on, you must use matching probe ports.
- 4. Configure the load balancing rules for ASCS and ERS. Select the corresponding front IPs, health probes, and the backend pool. Select HA ports, increase the idle timeout to 30 minutes, and enable floating IP.
+### Deploy Linux VMs manually via Azure portal
+
+This document assumes that you've already deployed a resource group, [Azure Virtual Network](../../virtual-network/virtual-networks-overview.md), and subnet.
+
+Deploy virtual machines with an SLES for SAP Applications image. Choose a suitable version of the SLES image that is supported for your SAP system. You can deploy the VMs in any of the availability options: virtual machine scale set, availability zone, or availability set.
+
+### Configure Azure load balancer
+
+During VM configuration, you have the option to create or select an existing load balancer in the networking section. Follow the steps below to configure a standard load balancer for the high-availability setup of SAP ASCS and SAP ERS.
+
+#### [Azure portal](#tab/lb-portal)
++
+#### [Azure CLI](#tab/lb-azurecli)
++
+#### [PowerShell](#tab/lb-powershell)
++++
+> [!IMPORTANT]
+> Floating IP is not supported on a NIC secondary IP configuration in load-balancing scenarios. For details, see [Azure Load Balancer limitations](../../load-balancer/load-balancer-multivip-overview.md#limitations). If you need an additional IP address for the VM, deploy a second NIC.
+
+> [!NOTE]
+> When VMs without public IP addresses are placed in the backend pool of internal (no public IP address) Standard Azure load balancer, there will be no outbound internet connectivity, unless additional configuration is performed to allow routing to public end points. For details on how to achieve outbound connectivity see [Public endpoint connectivity for Virtual Machines using Azure Standard Load Balancer in SAP high-availability scenarios](./high-availability-guide-standard-load-balancer-outbound-connections.md).
+
+> [!IMPORTANT]
+>
+> * Don't enable TCP time stamps on Azure VMs placed behind Azure Load Balancer. Enabling TCP timestamps will cause the health probes to fail. Set the `net.ipv4.tcp_timestamps` parameter to `0`. For details, see [Load Balancer health probes](../../load-balancer/load-balancer-custom-probe-overview.md).
+> * To prevent saptune from changing the manually set `net.ipv4.tcp_timestamps` value from `0` back to `1`, you should update saptune version to 3.1.1 or higher. For more details, see [saptune 3.1.1 ΓÇô Do I Need to Update?](https://www.suse.com/c/saptune-3-1-1-do-i-need-to-update/).
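A sketch of checking and updating the installed saptune version with zypper; package availability depends on the update channels registered on your SLES system:

```bash
# Check the currently installed saptune version
sudo zypper info saptune
# Update saptune to the latest available version (3.1.1 or later)
sudo zypper update saptune
```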
### Deploy Azure Files storage account and NFS shares
When you plan your deployment with NFS on Azure Files, consider the following im
## Setting up (A)SCS
-In this example, you deploy the resources manually through the [Azure portal](https://portal.azure.com/#home).
-
-### Deploy Azure Load Balancer via Azure portal
-
-After you deploy the VMs for your SAP system, create a load balancer. Then, use the VMs in the backend pool.
-
-1. Create an internal, standard load balancer.
- 1. Create the frontend IP addresses
- 1. IP address 10.90.90.10 for the ASCS
- 1. Open the load balancer, select frontend IP pool, and click Add
- 2. Enter the name of the new frontend IP pool (for example **frontend.NW1.ASCS**)
- 3. Set the Assignment to Static and enter the IP address (for example **10.90.90.10**)
- 4. Click OK
- 2. IP address 10.90.90.9 for the ASCS ERS
- * Repeat the steps above under "a" to create an IP address for the ERS (for example **10.90.90.9** and **frontend.NW1.ERS**)
- 2. Create a single back-end pool:
- 1. Open the load balancer, select **Backend pools**, and then select **Add**
- 2. Enter the name of the new back-end pool (for example, **backend.NW1**)
- 3. Select **NIC** for Backend Pool Configuration
- 4. Select **Add a virtual machine**
- 5. Select the virtual machines of the ASCS cluster
- 6. Select **Add**
- 7. Select **Save**
- 3. Create the health probes
- 1. Port 620**00** for ASCS
- 1. Open the load balancer, select health probes, and click Add
- 2. Enter the name of the new health probe (for example **health.NW1.ASCS**)
- 3. Select TCP as protocol, port 620**00**, keep Interval **5**
- 4. Click **Add**
- 5. Click **Save**.
- 2. Port 621**01** for ASCS ERS
- * Repeat the steps above under "c" to create a health probe for the ERS (for example 621**01** and **health.NW1.ERS**)
- 4. Load-balancing rules
- 1. Create a backend pool for the ASCS
- 1. Open the load balancer, select Load-balancing rules and click Add
- 2. Enter the name of the new load balancer rule (for example **lb.NW1.ASCS**)
- 3. Select the frontend IP address for ASCS, backend pool, and health probe you created earlier (for example **frontend.NW1.ASCS**, **backend.NW1**, and **health.NW1.ASCS**)
- 4. Select **HA ports**
- 5. Increase idle timeout to 30 minutes
- 6. **Make sure to enable Floating IP**
- 7. Click OK
- * Repeat the steps above to create load balancing rules for ERS (for example **lb.NW1.ERS**)
-
-> [!IMPORTANT]
-> Floating IP is not supported on a NIC secondary IP configuration in load-balancing scenarios. For details see [Azure Load balancer Limitations](../../load-balancer/load-balancer-multivip-overview.md#limitations). If you need additional IP address for the VM, deploy a second NIC.
-
-> [!NOTE]
-> When VMs without public IP addresses are placed in the backend pool of internal (no public IP address) Standard Azure load balancer, there will be no outbound internet connectivity, unless additional configuration is performed to allow routing to public end points. For details on how to achieve outbound connectivity see [Public endpoint connectivity for Virtual Machines using Azure Standard Load Balancer in SAP high-availability scenarios](./high-availability-guide-standard-load-balancer-outbound-connections.md).
-
-> [!IMPORTANT]
-> Do not enable TCP timestamps on Azure VMs placed behind Azure Load Balancer. Enabling TCP timestamps will cause the health probes to fail. Set parameter **net.ipv4.tcp_timestamps** to **0**. For details see [Load Balancer health probes](../../load-balancer/load-balancer-custom-probe-overview.md).
+Next, you'll prepare and install the SAP ASCS and ERS instances.
### Create Pacemaker cluster
The following items are prefixed with either **[A]** - applicable to all nodes,
2. **[A]** Update SAP resource agents
- A patch for the resource-agents package is required to use the new configuration, that is described in this article. You can check, if the patch is already installed with the following command
+ A patch for the resource-agents package is required to use the new configuration that is described in this article. You can check whether the patch is already installed with the following command:
```bash sudo grep 'parameter name="IS_ERS"' /usr/lib/ocf/resource.d/heartbeat/SAPInstance
The following items are prefixed with either **[A]** - applicable to all nodes,
If the grep command does not find the IS_ERS parameter, you need to install the patch listed on [the SUSE download page](https://download.suse.com/patch/finder/#bu=suse&familyId=&productId=&dateRange=&startDate=&endDate=&priority=&architecture=&keywords=resource-agents)
-3. **[A]** Setup host name resolution
+3. **[A]** Set up host name resolution
You can either use a DNS server or modify the /etc/hosts on all nodes. This example shows how to use the /etc/hosts file. Replace the IP address and the hostname in the following commands
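A sketch of such entries, using the example frontend IP addresses and virtual host names from this article; replace them with your own values:

```bash
sudo tee -a /etc/hosts <<'EOF'
# Virtual host names for the ASCS and ERS frontend IPs (example values)
10.90.90.10   sapascs
10.90.90.9    sapers
EOF
```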
The following items are prefixed with either **[A]** - applicable to all nodes,
sudo chattr +i /usr/sap/NW1/ERS01 ```
-2. **[A]** Mount the file systems, that will not be controlled by the Pacemaker cluster.
+2. **[A]** Mount the file systems that will not be controlled by the Pacemaker cluster.
```bash vi /etc/fstab
The following items are prefixed with either **[A]** - applicable to all nodes,
9. **[1]** Create the SAP cluster resources
- If using enqueue server 1 architecture (ENSA1), define the resources as follows:
+ Depending on whether you're running an ENSA1 or ENSA2 system, select the respective tab to define the resources. SAP introduced support for [ENSA2](https://help.sap.com/docs/ABAP_PLATFORM_NEW/cff8531bc1d9416d91bb6781e628d4e0/6d655c383abf4c129b0e5c8683e7ecd8.html), including replication, in SAP NetWeaver 7.52. Starting with ABAP Platform 1809, ENSA2 is installed by default. For ENSA2 support, see SAP Note [2630416](https://launchpad.support.sap.com/#/notes/2630416).
- ```bash
- sudo crm configure property maintenance-mode="true"
+ #### [ENSA1](#tab/ensa1)
+
+ ```bash
+ sudo crm configure property maintenance-mode="true"
- sudo crm configure primitive rsc_sap_NW1_ASCS00 SAPInstance \
+ sudo crm configure primitive rsc_sap_NW1_ASCS00 SAPInstance \
operations \$id=rsc_sap_NW1_ASCS00-operations \ op monitor interval=11 timeout=60 on-fail=restart \ params InstanceName=NW1_ASCS00_sapascs START_PROFILE="/sapmnt/NW1/profile/NW1_ASCS00_sapascs" \ AUTOMATIC_RECOVER=false \ meta resource-stickiness=5000 failure-timeout=60 migration-threshold=1 priority=10
- sudo crm configure primitive rsc_sap_NW1_ERS01 SAPInstance \
+ sudo crm configure primitive rsc_sap_NW1_ERS01 SAPInstance \
operations \$id=rsc_sap_NW1_ERS01-operations \ op monitor interval=11 timeout=60 on-fail=restart \ params InstanceName=NW1_ERS01_sapers START_PROFILE="/sapmnt/NW1/profile/NW1_ERS01_sapers" AUTOMATIC_RECOVER=false IS_ERS=true \ meta priority=1000
- sudo crm configure modgroup g-NW1_ASCS add rsc_sap_NW1_ASCS00
- sudo crm configure modgroup g-NW1_ERS add rsc_sap_NW1_ERS01
+ sudo crm configure modgroup g-NW1_ASCS add rsc_sap_NW1_ASCS00
+ sudo crm configure modgroup g-NW1_ERS add rsc_sap_NW1_ERS01
- sudo crm configure colocation col_sap_NW1_no_both -5000: g-NW1_ERS g-NW1_ASCS
- sudo crm configure location loc_sap_NW1_failover_to_ers rsc_sap_NW1_ASCS00 rule 2000: runs_ers_NW1 eq 1
- sudo crm configure order ord_sap_NW1_first_start_ascs Optional: rsc_sap_NW1_ASCS00:start rsc_sap_NW1_ERS01:stop symmetrical=false
+ sudo crm configure colocation col_sap_NW1_no_both -5000: g-NW1_ERS g-NW1_ASCS
+ sudo crm configure location loc_sap_NW1_failover_to_ers rsc_sap_NW1_ASCS00 rule 2000: runs_ers_NW1 eq 1
+ sudo crm configure order ord_sap_NW1_first_start_ascs Optional: rsc_sap_NW1_ASCS00:start rsc_sap_NW1_ERS01:stop symmetrical=false
- sudo crm_attribute --delete --name priority-fencing-delay
+ sudo crm_attribute --delete --name priority-fencing-delay
- sudo crm node online sap-cl1
- sudo crm configure property maintenance-mode="false"
- ```
-
- SAP introduced support for enqueue server 2, including replication, as of SAP NW 7.52. Starting with ABAP Platform 1809, enqueue server 2 is installed by default. See SAP note [2630416](https://launchpad.support.sap.com/#/notes/2630416) for enqueue server 2 support.
+ sudo crm node online sap-cl1
+ sudo crm configure property maintenance-mode="false"
+ ```
- If using enqueue server 2 architecture ([ENSA2](https://help.sap.com/viewer/cff8531bc1d9416d91bb6781e628d4e0/1709%20001/en-US/6d655c383abf4c129b0e5c8683e7ecd8.html)), define the resources as follows:
+ #### [ENSA2](#tab/ensa2)
> [!NOTE]
> If you have a two-node cluster running ENSA2, you have the option to configure the priority-fencing-delay cluster property. This property introduces additional delay in fencing a node that has higher total resource priority when a split-brain scenario occurs. For more information, see [SUSE Linux Enterprise Server high availability extension administration guide](https://documentation.suse.com/sle-ha/15-SP3/single-html/SLE-HA-administration/#pro-ha-storage-protect-fencing).
>
> The property priority-fencing-delay is only applicable for ENSA2 running on a two-node cluster.
- ```bash
- sudo crm configure property maintenance-mode="true"
+ ```bash
+ sudo crm configure property maintenance-mode="true"
- sudo crm configure property priority-fencing-delay=30
+ sudo crm configure property priority-fencing-delay=30
- sudo crm configure primitive rsc_sap_NW1_ASCS00 SAPInstance \
+ sudo crm configure primitive rsc_sap_NW1_ASCS00 SAPInstance \
operations \$id=rsc_sap_NW1_ASCS00-operations \ op monitor interval=11 timeout=60 on-fail=restart \ params InstanceName=NW1_ASCS00_sapascs START_PROFILE="/sapmnt/NW1/profile/NW1_ASCS00_sapascs" \ AUTOMATIC_RECOVER=false \ meta resource-stickiness=5000 priority=100
- sudo crm configure primitive rsc_sap_NW1_ERS01 SAPInstance \
+ sudo crm configure primitive rsc_sap_NW1_ERS01 SAPInstance \
operations \$id=rsc_sap_NW1_ERS01-operations \ op monitor interval=11 timeout=60 on-fail=restart \ params InstanceName=NW1_ERS01_sapers START_PROFILE="/sapmnt/NW1/profile/NW1_ERS01_sapers" AUTOMATIC_RECOVER=false IS_ERS=true
- sudo crm configure modgroup g-NW1_ASCS add rsc_sap_NW1_ASCS00
- sudo crm configure modgroup g-NW1_ERS add rsc_sap_NW1_ERS01
+ sudo crm configure modgroup g-NW1_ASCS add rsc_sap_NW1_ASCS00
+ sudo crm configure modgroup g-NW1_ERS add rsc_sap_NW1_ERS01
- sudo crm configure colocation col_sap_NW1_no_both -5000: g-NW1_ERS g-NW1_ASCS
- sudo crm configure order ord_sap_NW1_first_start_ascs Optional: rsc_sap_NW1_ASCS00:start rsc_sap_NW1_ERS01:stop symmetrical=false
+ sudo crm configure colocation col_sap_NW1_no_both -5000: g-NW1_ERS g-NW1_ASCS
+ sudo crm configure order ord_sap_NW1_first_start_ascs Optional: rsc_sap_NW1_ASCS00:start rsc_sap_NW1_ERS01:stop symmetrical=false
- sudo crm node online sap-cl1
- sudo crm configure property maintenance-mode="false"
- ```
+ sudo crm node online sap-cl1
+ sudo crm configure property maintenance-mode="false"
+ ```
- If you are upgrading from an older version and switching to enqueue server 2, see SAP note [2641019](https://launchpad.support.sap.com/#/notes/2641019).
+
- Make sure that the cluster status is ok and that all resources are started. It is not important on which node the resources are running.
+If you are upgrading from an older version and switching to enqueue server 2, see SAP note [2641019](https://launchpad.support.sap.com/#/notes/2641019).
- ```bash
- sudo crm_mon -r
- # Full list of resources:
- #
- # stonith-sbd (stonith:external/sbd): Started sap-cl2
- # Resource Group: g-NW1_ASCS
- # fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started sap-cl1
- # nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started sap-cl1
- # vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started sap-cl1
- # rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started sap-cl1
- # Resource Group: g-NW1_ERS
- # fs_NW1_ERS (ocf::heartbeat:Filesystem): Started sap-cl2
- # nc_NW1_ERS (ocf::heartbeat:azure-lb): Started sap-cl2
- # vip_NW1_ERS (ocf::heartbeat:IPaddr2): Started sap-cl2
- # rsc_sap_NW1_ERS01 (ocf::heartbeat:SAPInstance): Started sap-cl1
- ```
+Make sure that the cluster status is ok and that all resources are started. It is not important on which node the resources are running.
+
+```bash
+sudo crm_mon -r
+# Full list of resources:
+#
+# stonith-sbd (stonith:external/sbd): Started sap-cl2
+# Resource Group: g-NW1_ASCS
+# fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started sap-cl1
+# nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started sap-cl1
+# vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started sap-cl1
+# rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started sap-cl1
+# Resource Group: g-NW1_ERS
+# fs_NW1_ERS (ocf::heartbeat:Filesystem): Started sap-cl2
+# nc_NW1_ERS (ocf::heartbeat:azure-lb): Started sap-cl2
+# vip_NW1_ERS (ocf::heartbeat:IPaddr2): Started sap-cl2
+# rsc_sap_NW1_ERS01 (ocf::heartbeat:SAPInstance): Started sap-cl1
+```
## SAP NetWeaver application server preparation
The following items are prefixed with either **[A]** - applicable to both PAS an
vm.dirty_background_bytes = 314572800 ```
-1. **[A]** Setup host name resolution
+1. **[A]** Set up host name resolution
You can either use a DNS server or modify the /etc/hosts on all nodes. This example shows how to use the /etc/hosts file. Replace the IP address and the hostname in the following commands
sap High Availability Guide Suse Nfs Simple Mount https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-suse-nfs-simple-mount.md
Previously updated : 09/15/2023 Last updated : 01/17/2024
The example configurations and installation commands use the following instance
## Prepare the infrastructure
-This article assumes that you've already deployed an [Azure virtual network](../../virtual-network/virtual-networks-overview.md), subnet, and resource group. To prepare the rest of your infrastructure:
+The resource agent for SAP Instance is included in SUSE Linux Enterprise Server for SAP Applications. An image for SUSE Linux Enterprise Server for SAP Applications 12 or 15 is available in Azure Marketplace. You can use the image to deploy new VMs.
-1. Deploy your VMs. You can deploy VMs in availability sets or in availability zones, if the Azure region supports these options.
- > [!IMPORTANT]
- > If you need additional IP addresses for your VMs, deploy and attach a second network interface controller (NIC). Don't add secondary IP addresses to the primary NIC. [Azure Load Balancer Floating IP doesn't support this scenario](../../load-balancer/load-balancer-multivip-overview.md#limitations).
-2. For your virtual IPs, deploy and configure an [Azure load balancer](../../load-balancer/load-balancer-overview.md). We recommend that you use a [Standard load balancer](../../load-balancer/quickstart-load-balancer-standard-public-portal.md).
- 1. Create front-end IP address 10.27.0.9 for the ASCS instance:
- 1. Open the load balancer, select **Frontend IP pool**, and then select **Add**.
- 2. Enter the name of the new front-end IP pool (for example, **frontend.NW1.ASCS**).
- 3. Set **Assignment** to **Static** and enter the IP address (for example, **10.27.0.9**).
- 4. Select **OK**.
- 2. Create front-end IP address 10.27.0.10 for the ERS instance:
- * Repeat the preceding steps to create an IP address for ERS (for example, **10.27.0.10** and **frontend.NW1.ERS**).
- 3. Create a single back-end pool:
- 1. Open the load balancer, select **Backend pools**, and then select **Add**.
- 2. Enter the name of the new back-end pool (for example, **backend.NW1**).
- 3. Select **NIC** for Backend Pool Configuration.
- 4. Select **Add a virtual machine**.
- 5. Select the virtual machines of the ASCS cluster.
- 6. Select **Add**.
- 7. Select **Save**.
- 4. Create a health probe for port 62000 for ASCS:
- 1. Open the load balancer, select **Health probes**, and then select **Add**.
- 2. Enter the name of the new health probe (for example, **health.NW1.ASCS**).
- 3. Select **TCP** as the protocol and **62000** as the port. Keep the interval of **5**.
- 4. Select **Add**.
- 5. Create a health probe for port 62101 for the ERS instance:
- * Repeat the preceding steps to create a health probe for ERS (for example, **62101** and **health.NW1.ERS**).
- 6. Create load-balancing rules for ASCS:
- 1. Open the load balancer, select **Load-balancing rules**, and then select **Add**.
- 2. Enter the name of the new load-balancing rule (for example, **lb.NW1.ASCS**).
- 3. Select the front-end IP address for ASCS, back-end pool, and health probe that you created earlier (for example, **frontend.NW1.ASCS**, **backend.NW1**, and **health.NW1.ASCS**).
- 4. Increase idle timeout to 30 minutes
- 5. Select **HA ports**.
- 6. Enable Floating IP.
- 7. Select **OK**.
- 7. Create load-balancing rules for ERS:
- * Repeat the preceding steps to create load-balancing rules for ERS (for example, **lb.NW1.ERS**).
+### Deploy Linux VMs manually via Azure portal
+
+This document assumes that you've already deployed a resource group, [Azure Virtual Network](../../virtual-network/virtual-networks-overview.md), and subnet.
+
+Deploy virtual machines with an SLES for SAP Applications image. Choose a suitable version of the SLES image that is supported for your SAP system. You can deploy the VMs in any of the availability options: virtual machine scale set, availability zone, or availability set.
+
+### Configure Azure load balancer
+
+During VM configuration, you have the option to create or select an existing load balancer in the networking section. Follow the steps below to configure a standard load balancer for the high-availability setup of SAP ASCS and SAP ERS.
+
+#### [Azure portal](#tab/lb-portal)
++
+#### [Azure CLI](#tab/lb-azurecli)
++
+#### [PowerShell](#tab/lb-powershell)
++++
+> [!IMPORTANT]
+> A floating IP address isn't supported on a network interface card (NIC) secondary IP configuration in load-balancing scenarios. For details, see [Azure Load Balancer limitations](../../load-balancer/load-balancer-multivip-overview.md#limitations). If you need another IP address for the VM, deploy a second NIC.
> [!NOTE]
> When VMs without public IP addresses are placed in the back-end pool of an internal (no public IP address) Standard Azure load balancer, there will be no outbound internet connectivity unless you perform additional configuration to allow routing to public endpoints. For details on how to achieve outbound connectivity, see [Public endpoint connectivity for virtual machines using Azure Standard Load Balancer in SAP high-availability scenarios](./high-availability-guide-standard-load-balancer-outbound-connections.md).

> [!IMPORTANT]
-> Don't enable TCP time stamps on Azure VMs placed behind Azure Load Balancer. Enabling TCP timestamps will cause the health probes to fail. Set the `net.ipv4.tcp_timestamps` parameter to `0`. For details, see [Load Balancer health probes](../../load-balancer/load-balancer-custom-probe-overview.md).
+>
+> * Don't enable TCP time stamps on Azure VMs placed behind Azure Load Balancer. Enabling TCP timestamps will cause the health probes to fail. Set the `net.ipv4.tcp_timestamps` parameter to `0`. For details, see [Load Balancer health probes](../../load-balancer/load-balancer-custom-probe-overview.md).
+> * To prevent saptune from changing the manually set `net.ipv4.tcp_timestamps` value from `0` back to `1`, you should update saptune version to 3.1.1 or higher. For more details, see [saptune 3.1.1 ΓÇô Do I Need to Update?](https://www.suse.com/c/saptune-3-1-1-do-i-need-to-update/).
## Deploy NFS
The instructions in this section are applicable only if you're using Azure NetAp
10. **[1]** Create the SAP cluster resources.
- If you're using an ENSA1 architecture, define the resources as follows.
+ Depending on whether you're running an ENSA1 or ENSA2 system, select the respective tab to define the resources. SAP introduced support for [ENSA2](https://help.sap.com/docs/ABAP_PLATFORM_NEW/cff8531bc1d9416d91bb6781e628d4e0/6d655c383abf4c129b0e5c8683e7ecd8.html), including replication, in SAP NetWeaver 7.52. Starting with ABAP Platform 1809, ENSA2 is installed by default. For ENSA2 support, see SAP Note [2630416](https://launchpad.support.sap.com/#/notes/2630416).
+
+ #### [ENSA1](#tab/ensa1)
```bash sudo crm configure property maintenance-mode="true"
The instructions in this section are applicable only if you're using Azure NetAp
sudo crm configure property maintenance-mode="false" ```
- SAP introduced support for [ENSA2](https://help.sap.com/viewer/cff8531bc1d9416d91bb6781e628d4e0/1709%20001/en-US/6d655c383abf4c129b0e5c8683e7ecd8.html), including replication, in SAP NetWeaver 7.52. Starting with ABAP Platform 1809, ENSA2 is installed by default. For ENSA2 support, see SAP Note [2630416](https://launchpad.support.sap.com/#/notes/2630416).
+ #### [ENSA2](#tab/ensa2)
> [!NOTE]
> If you have a two-node cluster running ENSA2, you have the option to configure the priority-fencing-delay cluster property. This property introduces additional delay in fencing a node that has higher total resource priority when a split-brain scenario occurs. For more information, see [SUSE Linux Enterprise Server high availability extension administration guide](https://documentation.suse.com/sle-ha/15-SP3/single-html/SLE-HA-administration/#pro-ha-storage-protect-fencing).
>
> The property priority-fencing-delay is only applicable for ENSA2 running on a two-node cluster. For more information, see [Enqueue Replication 2 High Availability cluster with simple mount](https://documentation.suse.com/sbp/sap-15/html/SAP-S4HA10-setupguide-sle15/index.html#multicluster).
- If you're using an ENSA2 architecture, define the resources as follows.
- ```bash sudo crm configure property maintenance-mode="true"
The instructions in this section are applicable only if you're using Azure NetAp
sudo crm configure property maintenance-mode="false" ```
- If you're upgrading from an older version and switching to ENSA2, see SAP Note [2641019](https://launchpad.support.sap.com/#/notes/2641019).
+
- Make sure that the cluster status is OK and that all resources are started. It isn't important which node the resources are running on.
+If you're upgrading from an older version and switching to ENSA2, see SAP Note [2641019](https://launchpad.support.sap.com/#/notes/2641019).
- ```bash
- sudo crm_mon -r
- # Full list of resources:
- #
- # stonith-sbd (stonith:external/sbd): Started sap-cl2
- # Resource Group: g-NW1_ASCS
- # nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started sap-cl1
- # vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started sap-cl1
- # rsc_sapstartsrv_NW1_ASCS00 (ocf::suse:SAPStartSrv): Started sap-cl1
- # rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started sap-cl1
- # Resource Group: g-NW1_ERS
- # nc_NW1_ERS (ocf::heartbeat:azure-lb): Started sap-cl2
- # vip_NW1_ERS (ocf::heartbeat:IPaddr2): Started sap-cl2
- # rsc_sapstartsrv_NW1_ERS01 (ocf::suse:SAPStartSrv): Started sap-cl2
- # rsc_sap_NW1_ERS01 (ocf::heartbeat:SAPInstance): Started sap-cl1
- ```
+Make sure that the cluster status is OK and that all resources are started. It isn't important which node the resources are running on.
+
+```bash
+sudo crm_mon -r
+# Full list of resources:
+#
+# stonith-sbd (stonith:external/sbd): Started sap-cl2
+# Resource Group: g-NW1_ASCS
+# nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started sap-cl1
+# vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started sap-cl1
+# rsc_sapstartsrv_NW1_ASCS00 (ocf::suse:SAPStartSrv): Started sap-cl1
+# rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started sap-cl1
+# Resource Group: g-NW1_ERS
+# nc_NW1_ERS (ocf::heartbeat:azure-lb): Started sap-cl2
+# vip_NW1_ERS (ocf::heartbeat:IPaddr2): Started sap-cl2
+# rsc_sapstartsrv_NW1_ERS01 (ocf::suse:SAPStartSrv): Started sap-cl2
+# rsc_sap_NW1_ERS01 (ocf::heartbeat:SAPInstance): Started sap-cl1
+```
## Prepare the SAP application server
sap High Availability Guide Suse Nfs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-suse-nfs.md
Previously updated : 06/20/2023 Last updated : 01/17/2024
This document assumes that you've already deployed a resource group, [Azure Virt
Deploy two virtual machines for NFS servers. Choose a suitable SLES image that is supported with your SAP system. You can deploy the VMs in any of the availability options: scale set, availability zone, or availability set.
-### Deploy Azure Load Balancer manually via Azure portal
-
-After you deploy the VMs for your SAP system, create a load balancer. Use VMs created for NFS servers in the backend pool.
-
-1. Create a Load Balancer (internal). We recommend [standard load balancer](../../load-balancer/load-balancer-overview.md).
- 1. Follow these instructions to create standard Load balancer:
- 1. Create the frontend IP addresses
- 1. IP address 10.0.0.4 for NW1
- 1. Open the load balancer, select frontend IP pool, and click Add
- 2. Enter the name of the new frontend IP pool (for example **nw1-frontend**)
- 3. Set the Assignment to Static and enter the IP address (for example **10.0.0.4**)
- 4. Click OK
- 2. IP address 10.0.0.5 for NW2
- * Repeat the steps above for NW2
- 2. Create a single back-end pool:
- 1. Open the load balancer, select **Backend pools**, and then select **Add**.
- 2. Enter the name of the new back-end pool (for example, **nw-backend**).
- 3. Select **NIC** for Backend Pool Configuration.
- 4. Select **Add a virtual machine**.
- 5. Select the virtual machines of the cluster.
- 6. Select **Add**.
- 7. Select **Save**.
- 3. Create the health probes
- 1. Port 61000 for NW1
- 1. Open the load balancer, select health probes, and click Add
- 2. Enter the name of the new health probe (for example **nw1-hp**)
- 3. Select TCP as protocol, port 610**00**, keep Interval 5
- 4. Click OK
- 2. Port 61001 for NW2
- * Repeat the steps above to create a health probe for NW2
- 4. Load balancing rules
- 1. Open the load balancer, select load-balancing rules and click Add
- 2. Enter the name of the new load balancer rule (for example **nw1-lb**)
- 3. Select the frontend IP address, backend pool, and health probe you created earlier (for example **nw1-frontend**. **nw-backend** and **nw1-hp**)
- 4. Increase idle timeout to 30 minutes
- 5. Select **HA Ports**.
- 6. **Make sure to enable Floating IP**
- 7. Click OK
- * Repeat the steps above to create load balancing rule for NW2
+### Configure Azure load balancer
+
+Follow the [create load balancer](../../load-balancer/quickstart-load-balancer-standard-internal-portal.md#create-load-balancer) guide to configure a standard load balancer for NFS server high availability. During the configuration of the load balancer, consider the following points.
+
+1. **Frontend IP Configuration:** Create two frontend IP addresses. Select the same virtual network and subnet as your NFS servers.
+2. **Backend Pool:** Create a backend pool and add the NFS server VMs.
+3. **Inbound rules:** Create two load balancing rules, one for NW1 and another for NW2. Follow the same steps for both load balancing rules.
+ * Frontend IP address: Select frontend IP
+ * Backend pool: Select backend pool
+ * Check "High availability ports"
+ * Protocol: TCP
+ * Health Probe: Create health probe with below details (applies for both NW1 and NW2)
+ * Protocol: TCP
+ * Port: [for example: 61000 for NW1, 61001 for NW2]
+ * Interval: 5
+ * Probe Threshold: 2
+ * Idle timeout (minutes): 30
+ * Check "Enable Floating IP"
+
+> [!NOTE]
+> The health probe configuration property `numberOfProbes`, otherwise known as "Unhealthy threshold" in the portal, isn't respected. To control the number of successful or failed consecutive probes, set the property `probeThreshold` to 2. It's currently not possible to set this property by using the Azure portal, so use either the [Azure CLI](/cli/azure/network/lb/probe) or [PowerShell](/powershell/module/az.network/new-azloadbalancerprobeconfig) command.
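As an illustration, the health probe and its load balancing rule could be created with the Azure CLI roughly as follows. This is a sketch, not the exact procedure from this guide: the resource group and load balancer names are assumptions, and the `--probe-threshold` parameter requires a recent Azure CLI version.

```bash
# Assumed names: resource group MyResourceGroup, load balancer nw-lb.
# Health probe for NW1 with probeThreshold set to 2 (repeat with port 61001 for NW2).
az network lb probe create --resource-group MyResourceGroup --lb-name nw-lb \
  --name nw1-hp --protocol tcp --port 61000 --interval 5 --probe-threshold 2

# HA-ports load balancing rule for NW1 with floating IP and a 30-minute idle timeout.
az network lb rule create --resource-group MyResourceGroup --lb-name nw-lb \
  --name nw1-lb --protocol All --frontend-port 0 --backend-port 0 \
  --frontend-ip-name nw1-frontend --backend-pool-name nw-backend \
  --probe-name nw1-hp --idle-timeout 30 --floating-ip true
```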
> [!IMPORTANT] > Floating IP is not supported on a NIC secondary IP configuration in load-balancing scenarios. For details see [Azure Load balancer Limitations](../../load-balancer/load-balancer-multivip-overview.md#limitations). If you need additional IP address for the VM, deploy a second NIC.
After you deploy the VMs for your SAP system, create a load balancer. Use VMs cr
> When VMs without public IP addresses are placed in the backend pool of internal (no public IP address) Standard Azure load balancer, there will be no outbound internet connectivity, unless additional configuration is performed to allow routing to public end points. For details on how to achieve outbound connectivity see [Public endpoint connectivity for Virtual Machines using Azure Standard Load Balancer in SAP high-availability scenarios](./high-availability-guide-standard-load-balancer-outbound-connections.md). > [!IMPORTANT]
-> Do not enable TCP timestamps on Azure VMs placed behind Azure Load Balancer. Enabling TCP timestamps will cause the health probes to fail. Set parameter **net.ipv4.tcp_timestamps** to **0**. For details see [Load Balancer health probes](../../load-balancer/load-balancer-custom-probe-overview.md).
+>
+> * Don't enable TCP timestamps on Azure VMs placed behind Azure Load Balancer. Enabling TCP timestamps causes the health probes to fail. Set the `net.ipv4.tcp_timestamps` parameter to `0`. For details, see [Load Balancer health probes](../../load-balancer/load-balancer-custom-probe-overview.md).
+> * To prevent saptune from changing the manually set `net.ipv4.tcp_timestamps` value from `0` back to `1`, update saptune to version 3.1.1 or higher. For more details, see [saptune 3.1.1 – Do I Need to Update?](https://www.suse.com/c/saptune-3-1-1-do-i-need-to-update/).
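As a minimal sketch, the setting can be applied persistently as follows; the sysctl drop-in file name is an assumption.

```bash
# Disable TCP timestamps persistently (the drop-in file name is an arbitrary example).
echo "net.ipv4.tcp_timestamps = 0" | sudo tee /etc/sysctl.d/91-sap-loadbalancer.conf
sudo sysctl --system

# Verify the running value.
sudo sysctl net.ipv4.tcp_timestamps
```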
### Create Pacemaker cluster
sap High Availability Guide Suse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-suse.md
Previously updated : 09/15/2023 Last updated : 01/17/2024
Read the following SAP Notes and papers first
* [Azure Virtual Machines deployment for SAP on Linux][deployment-guide] * [Azure Virtual Machines DBMS deployment for SAP on Linux][dbms-guide] * [SUSE SAP HA Best Practice Guides][suse-ha-guide]
- The guides contain all required information to setup Netweaver HA and SAP HANA System Replication on-premises. Use these guides as a general baseline. They provide much more detailed information.
+ The guides contain all required information to set up Netweaver HA and SAP HANA System Replication on-premises. Use these guides as a general baseline. They provide much more detailed information.
* [SUSE High Availability Extension 12 SP3 Release Notes][suse-ha-12sp3-relnotes] ## Overview
The NFS server, SAP NetWeaver ASCS, SAP NetWeaver SCS, SAP NetWeaver ERS, and th
## Setting up a highly available NFS server > [!NOTE]
-> We recommend deploying one of the Azure first-party NFS
+> We recommend deploying one of the Azure first-party NFS
> The SAP configuration guides for SAP NW highly available SAP system with native NFS services are: > > * [High availability SAP NW on Azure VMswith simple mount and NFS on SLES for SAP Applications](./high-availability-guide-suse-nfs-simple-mount.md)
The NFS server, SAP NetWeaver ASCS, SAP NetWeaver SCS, SAP NetWeaver ERS, and th
SAP NetWeaver requires shared storage for the transport and profile directory. Read [High availability for NFS on Azure VMs on SUSE Linux Enterprise Server][nfs-ha] on how to set up an NFS server for SAP NetWeaver.
-## Setting up (A)SCS
+## Prepare infrastructure
+
+The resource agent for SAP Instance is included in SUSE Linux Enterprise Server for SAP Applications. An image for SUSE Linux Enterprise Server for SAP Applications 12 or 15 is available in Azure Marketplace. You can use the image to deploy new VMs.
-### Deploy Linux manually via Azure portal
+### Deploy Linux VMs manually via Azure portal
This document assumes that you've already deployed a resource group, [Azure Virtual Network](../../virtual-network/virtual-networks-overview.md), and subnet.
-Deploy virtual machines for SAP ASCS, ERS, and application server instances. Choose a suitable SLES image that is supported with your SAP system. You can deploy VM in any one of the availability options - scale set, availability zone or availability set.
-
-### Deploy Azure Load Balancer manually via Azure portal
-
-After you deploy the VMs for your SAP system, create a load balancer. Use VMs created for SAP ASCS/ERS instances in the backend pool.
-
-1. Create load balancer (internal, standard):
- 1. Create the frontend IP addresses
- 1. IP address 10.0.0.7 for the ASCS
- 1. Open the load balancer, select frontend IP pool, and click Add
- 2. Enter the name of the new frontend IP pool (for example **nw1-ascs-frontend**)
- 3. Set the Assignment to Static and enter the IP address (for example **10.0.0.7**)
- 4. Click OK
- 2. IP address 10.0.0.8 for the ASCS ERS
- * Repeat the steps above to create an IP address for the ERS (for example **10.0.0.8** and **nw1-aers-backend**)
- 2. Create a single back-end pool:
- 1. Open the load balancer, select **Backend pools**, and then select **Add**.
- 2. Enter the name of the new back-end pool (for example, **nw1-backend**).
- 3. Select **NIC** for Backend Pool Configuration.
- 4. Select **Add a virtual machine**.
- 5. Select the virtual machines of the ASCS cluster.
- 6. Select **Add**.
- 7. Select **Save**.
- 3. Create the health probes
- 1. Port 620**00** for ASCS
- 1. Open the load balancer, select health probes, and click Add
- 2. Enter the name of the new health probe (for example **nw1-ascs-hp**)
- 3. Select TCP as protocol, port 620**00**, keep Interval 5
- 4. Click OK
- 2. Port 621**02** for ASCS ERS
- * Repeat the steps above to create a health probe for the ERS (for example 621**02** and **nw1-aers-hp**)
- 4. Load-balancing rules
- 1. Load-balancing rules for ASCS
- 1. Open the load balancer, select load-balancing rules and click Add
- 2. Enter the name of the new load balancer rule (for example **nw1-lb-ascs**)
- 3. Select the frontend IP address, backend pool, and health probe you created earlier (for example **nw1-ascs-frontend**, **nw1-backend** and **nw1-ascs-hp**)
- 4. Select **HA ports**
- 5. Increase idle timeout to 30 minutes
- 6. **Make sure to enable Floating IP**
- 7. Click OK
- * Repeat the steps above to create load balancing rules for ERS (for example **nw1-lb-ers**)
+Deploy virtual machines with an SLES for SAP Applications image. Choose a suitable version of the SLES image that is supported for your SAP system. You can deploy the VMs in any of the availability options: virtual machine scale set, availability zone, or availability set.
+
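As a rough illustration, one such VM could be deployed with the Azure CLI as shown in the following sketch; the resource names, image URN, and VM size are assumptions that you should adapt to your environment.

```bash
# All names and the image URN are assumptions - list available images first, for example:
#   az vm image list --publisher SUSE --offer sles-sap-15-sp4 --all --output table
az vm create \
  --resource-group MyResourceGroup \
  --name sap-ascs-vm-1 \
  --image SUSE:sles-sap-15-sp4:gen2:latest \
  --size Standard_E4s_v3 \
  --vnet-name MyVNet --subnet MySubnet \
  --availability-set MyAvailabilitySet \
  --public-ip-address "" \
  --admin-username azureuser \
  --generate-ssh-keys
```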
+### Configure Azure load balancer
+
+During VM configuration, you can create or select an existing load balancer in the networking section. Follow the steps below to configure a standard load balancer for the high-availability setup of SAP ASCS and SAP ERS.
+
+#### [Azure portal](#tab/lb-portal)
++
+#### [Azure CLI](#tab/lb-azurecli)
++
+#### [PowerShell](#tab/lb-powershell)
+++ > [!IMPORTANT] > Floating IP is not supported on a NIC secondary IP configuration in load-balancing scenarios. For details see [Azure Load balancer Limitations](../../load-balancer/load-balancer-multivip-overview.md#limitations). If you need additional IP address for the VM, deploy a second NIC.
After you deploy the VMs for your SAP system, create a load balancer. Use VMs cr
> When VMs without public IP addresses are placed in the backend pool of internal (no public IP address) Standard Azure load balancer, there will be no outbound internet connectivity, unless additional configuration is performed to allow routing to public end points. For details on how to achieve outbound connectivity see [Public endpoint connectivity for Virtual Machines using Azure Standard Load Balancer in SAP high-availability scenarios](./high-availability-guide-standard-load-balancer-outbound-connections.md). > [!IMPORTANT]
-> Do not enable TCP timestamps on Azure VMs placed behind Azure Load Balancer. Enabling TCP timestamps will cause the health probes to fail. Set parameter **net.ipv4.tcp_timestamps** to **0**. For details see [Load Balancer health probes](../../load-balancer/load-balancer-custom-probe-overview.md).
+>
+> * Don't enable TCP timestamps on Azure VMs placed behind Azure Load Balancer. Enabling TCP timestamps causes the health probes to fail. Set the `net.ipv4.tcp_timestamps` parameter to `0`. For details, see [Load Balancer health probes](../../load-balancer/load-balancer-custom-probe-overview.md).
+> * To prevent saptune from changing the manually set `net.ipv4.tcp_timestamps` value from `0` back to `1`, update saptune to version 3.1.1 or higher. For more details, see [saptune 3.1.1 – Do I Need to Update?](https://www.suse.com/c/saptune-3-1-1-do-i-need-to-update/).
+
+## Setting up (A)SCS
+
+Next, you'll prepare and install the SAP ASCS and ERS instances.
### Create Pacemaker cluster
The following items are prefixed with either **[A]** - applicable to all nodes,
1. **[A]** Update SAP resource agents
- A patch for the resource-agents package is required to use the new configuration, that is described in this article. You can check, if the patch is already installed with the following command
+   A patch for the resource-agents package is required to use the new configuration that is described in this article. You can check whether the patch is already installed with the following command:
```bash sudo grep 'parameter name="IS_ERS"' /usr/lib/ocf/resource.d/heartbeat/SAPInstance
The following items are prefixed with either **[A]** - applicable to all nodes,
1. **[A]** Configure SWAP file Create a swap file as defined in [Create a SWAP file for an Azure Linux VM](/troubleshoot/azure/virtual-machines/create-swap-file-linux-vm)+ ```bash #!/bin/sh
The following items are prefixed with either **[A]** - applicable to all nodes,
```bash chmod +x /var/lib/cloud/scripts/per-boot/swap.sh ```+ Stop and start the VM. Stopping and starting the VM is only necessary the first time after you create the SWAP file. ### Installing SAP NetWeaver ASCS/ERS
The following items are prefixed with either **[A]** - applicable to all nodes,
1. **[1]** Create the SAP cluster resources
- If using enqueue server 1 architecture (ENSA1), define the resources as follows:
+   Depending on whether you're running an ENSA1 or ENSA2 system, select the respective tab to define the resources. SAP introduced support for [ENSA2](https://help.sap.com/docs/ABAP_PLATFORM_NEW/cff8531bc1d9416d91bb6781e628d4e0/6d655c383abf4c129b0e5c8683e7ecd8.html), including replication, in SAP NetWeaver 7.52. Starting with ABAP Platform 1809, ENSA2 is installed by default. For ENSA2 support, see SAP Note [2630416](https://launchpad.support.sap.com/#/notes/2630416).
+
+ #### [ENSA1](#tab/ensa1)
```bash sudo crm configure property maintenance-mode="true"
The following items are prefixed with either **[A]** - applicable to all nodes,
sudo crm configure property maintenance-mode="false" ```
- SAP introduced support for enqueue server 2, including replication, as of SAP NW 7.52. Starting with ABAP Platform 1809, enqueue server 2 is installed by default. See SAP note [2630416](https://launchpad.support.sap.com/#/notes/2630416) for enqueue server 2 support.
-
- If using enqueue server 2 architecture ([ENSA2](https://help.sap.com/viewer/cff8531bc1d9416d91bb6781e628d4e0/1709%20001/en-US/6d655c383abf4c129b0e5c8683e7ecd8.html)), define the resources as follows:
+ #### [ENSA2](#tab/ensa2)
> [!NOTE] > If you have a two-node cluster running ENSA2, you have the option to configure the priority-fencing-delay cluster property. This property introduces an additional delay in fencing the node that has a higher total resource priority when a split-brain scenario occurs. For more information, see the [SUSE Linux Enterprise Server high availability extension administration guide](https://documentation.suse.com/sle-ha/15-SP3/single-html/SLE-HA-administration/#pro-ha-storage-protect-fencing).
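For illustration, the property and a matching resource priority could be set with crmsh roughly as follows; the 30-second delay and the priority value are example assumptions, not prescribed values.

```bash
# Example values only: delay fencing of the node that hosts the higher-priority
# resource (typically the ASCS instance) by 30 seconds in a split-brain scenario.
sudo crm configure property priority-fencing-delay=30

# Give the ASCS SAPInstance resource a higher priority than the other resources.
sudo crm resource meta rsc_sap_NW1_ASCS00 set priority 10
```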
The following items are prefixed with either **[A]** - applicable to all nodes,
sudo crm configure property maintenance-mode="false" ```
- If you're upgrading from an older version and switching to enqueue server 2, see SAP note [2641019](https://launchpad.support.sap.com/#/notes/2641019).
-
- Make sure that the cluster status is ok and that all resources are started. It isn't important on which node the resources are running.
-
- ```bash
- sudo crm_mon -r
-
- # Online: [ nw1-cl-0 nw1-cl-1 ]
- #
- # Full list of resources:
- #
- # stonith-sbd (stonith:external/sbd): Started nw1-cl-1
- # Resource Group: g-NW1_ASCS
- # fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-1
- # nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-1
- # vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
- # rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-1
- # Resource Group: g-NW1_ERS
- # fs_NW1_ERS (ocf::heartbeat:Filesystem): Started nw1-cl-0
- # nc_NW1_ERS (ocf::heartbeat:azure-lb): Started nw1-cl-0
- # vip_NW1_ERS (ocf::heartbeat:IPaddr2): Started nw1-cl-0
- # rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-0
- ```
+
+
+If you're upgrading from an older version and switching to enqueue server 2, see SAP note [2641019](https://launchpad.support.sap.com/#/notes/2641019).
+
+Make sure that the cluster status is OK and that all resources are started. It isn't important which node the resources are running on.
+
+```bash
+sudo crm_mon -r
+
+# Online: [ nw1-cl-0 nw1-cl-1 ]
+#
+# Full list of resources:
+#
+# stonith-sbd (stonith:external/sbd): Started nw1-cl-1
+# Resource Group: g-NW1_ASCS
+# fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-1
+# nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-1
+# vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-1
+# rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-1
+# Resource Group: g-NW1_ERS
+# fs_NW1_ERS (ocf::heartbeat:Filesystem): Started nw1-cl-0
+# nc_NW1_ERS (ocf::heartbeat:azure-lb): Started nw1-cl-0
+# vip_NW1_ERS (ocf::heartbeat:IPaddr2): Started nw1-cl-0
+# rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-0
+```
## SAP NetWeaver application server preparation
The steps below assume that you install the application server on a server diff
vm.dirty_background_bytes = 314572800 ```
-1. Setup host name resolution
+1. Set up host name resolution
You can either use a DNS server or modify the /etc/hosts on all nodes. This example shows how to use the /etc/hosts file. Replace the IP address and the hostname in the following commands
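   A generic sketch follows; the IP addresses and host names here are placeholders, not values from this guide.

   ```bash
   # Append placeholder entries to /etc/hosts on every node; replace with your own values.
   echo "10.0.0.10   nw1-app-1" | sudo tee -a /etc/hosts
   echo "10.0.0.11   nw1-app-2" | sudo tee -a /etc/hosts
   ```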
The following tests are a copy of the test cases in the best practices guides of
rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-1 ```
-2. Kill enqueue server process
+1. Kill enqueue server process
Resource state before starting the test:
The following tests are a copy of the test cases in the best practices guides of
rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-0 ```
-3. Kill enqueue replication server process
+1. Kill enqueue replication server process
Resource state before starting the test:
The following tests are a copy of the test cases in the best practices guides of
rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-0 ```
-4. Kill enqueue sapstartsrv process
+1. Kill enqueue sapstartsrv process
Resource state before starting the test:
sap Sap Hana Availability One Region https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-hana-availability-one-region.md
description: Describes SAP HANA operations on Azure native VMs in one Azure regi
tags: azure-resource-manager-+
sap Sap Hana High Availability Netapp Files Suse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-hana-high-availability-netapp-files-suse.md
documentationcenter: saponazure
tags: azure-resource-manager-+
SAP high availability HANA System Replication configuration uses a dedicated vir
## Set up the Azure NetApp File infrastructure Before you continue with the setup for Azure NetApp Files infrastructure, familiarize yourself with the Azure [NetApp Files documentation](../../azure-netapp-files/index.yml).
-Azure NetApp Files is available in several [Azure regions](https://azure.microsoft.com/global-infrastructure/services/?products=netapp). Check to see whether your selected Azure region offers Azure NetApp Files.
+Azure NetApp Files is available in several [Azure regions](https://azure.microsoft.com/global-infrastructure/services/?products=netapp). Check to see whether your selected Azure region offers Azure NetApp Files.
For information about the availability of Azure NetApp Files by Azure region, see [Azure NetApp Files Availability by Azure Region](https://azure.microsoft.com/global-infrastructure/services/?products=netapp&regions=all). ### Important considerations
-As you create your Azure NetApp Files for SAP HANA Scale-up systems, be aware of the important considerations documented in [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](./hana-vm-operations-netapp.md#important-considerations).
+As you create your Azure NetApp Files for SAP HANA Scale-up systems, be aware of the important considerations documented in [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](./hana-vm-operations-netapp.md#important-considerations).
### Sizing of HANA database on Azure NetApp Files
The following instructions assume that you've already deployed your [Azure virtu
The HANA architecture presented in this article uses a single Azure NetApp Files capacity pool at the *Ultra* Service level. For HANA workloads on Azure, we recommend using Azure NetApp Files *Ultra* or *Premium* [service Level](../../azure-netapp-files/azure-netapp-files-service-levels.md). 3. Delegate a subnet to Azure NetApp Files, as described in the instructions in [Delegate a subnet to Azure NetApp Files](../../azure-netapp-files/azure-netapp-files-delegate-subnet.md). 4. Deploy Azure NetApp Files volumes by following the instructions in [Create an NFS volume for Azure NetApp Files](../../azure-netapp-files/azure-netapp-files-create-volumes.md).
-
+ As you deploy the volumes, be sure to select the NFSv4.1 version. Deploy the volumes in the designated Azure NetApp Files subnet. The IP addresses of the Azure NetApp volumes are assigned automatically.
-
+ Keep in mind that the Azure NetApp Files resources and the Azure VMs must be in the same Azure virtual network or in peered Azure virtual networks. For example, hanadb1-data-mnt00001, hanadb1-log-mnt00001, and so on, are the volume names and nfs://10.3.1.4/hanadb1-data-mnt00001, nfs://10.3.1.4/hanadb1-log-mnt00001, and so on, are the file paths for the Azure NetApp Files volumes. On **hanadb1**
For more information about the required ports for SAP HANA, read the chapter [Co
```example 10.3.1.4:/hanadb1-data-mnt00001 /hana/data/HN1/mnt00001 nfs rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 0 0 10.3.1.4:/hanadb1-log-mnt00001 /hana/log/HN1/mnt00001 nfs rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 0 0
- 10.3.1.4:/hanadb1-shared-mnt00001 /hana/shared/HN1 nfs rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 0 0
+ 10.3.1.4:/hanadb1-shared-mnt00001 /hana/shared/HN1 nfs rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 0 0
``` Example for hanadb2
For more information about the required ports for SAP HANA, read the chapter [Co
```bash #Check nfs4_disable_idmapping sudo cat /sys/module/nfs/parameters/nfs4_disable_idmapping
-
+ #If you need to set nfs4_disable_idmapping to Y sudo echo "Y" > /sys/module/nfs/parameters/nfs4_disable_idmapping
-
+ #Make the configuration permanent sudo echo "options nfs nfs4_disable_idmapping=Y" >> /etc/modprobe.d/nfs.conf ```
For more information about the required ports for SAP HANA, read the chapter [Co
net.core.wmem_max = 16777216 net.ipv4.tcp_rmem = 4096 131072 16777216 net.ipv4.tcp_wmem = 4096 16384 16777216
- net.core.netdev_max_backlog = 300000
- net.ipv4.tcp_slow_start_after_idle=0
+ net.core.netdev_max_backlog = 300000
+ net.ipv4.tcp_slow_start_after_idle=0
net.ipv4.tcp_no_metrics_save = 1 net.ipv4.tcp_moderate_rcvbuf = 1
- net.ipv4.tcp_window_scaling = 1
+ net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_sack = 1 ```
For more information about the required ports for SAP HANA, read the chapter [Co
``` > [!TIP]
- > Avoid setting net.ipv4.ip_local_port_range and net.ipv4.ip_local_reserved_ports explicitly in the sysctl configuration files to allow SAP Host Agent to manage the port ranges. For more information, see SAP note [2382421](https://launchpad.support.sap.com/#/notes/2382421).
+ > Avoid setting net.ipv4.ip_local_port_range and net.ipv4.ip_local_reserved_ports explicitly in the sysctl configuration files to allow SAP Host Agent to manage the port ranges. For more information, see SAP note [2382421](https://launchpad.support.sap.com/#/notes/2382421).
4. **[A]** Adjust the sunrpc settings, as recommended in SAP note [3024346 - Linux Kernel Settings for NetApp NFS](https://launchpad.support.sap.com/#/notes/3024346).
For more information about the required ports for SAP HANA, read the chapter [Co
6. **[A]** Install the SAP HANA Starting with HANA 2.0 SPS 01, MDC is the default option. When you install the HANA system, SYSTEMDB and a tenant with the same SID are created together. In some cases, you don't want the default tenant. If you don't want to create the initial tenant along with the installation, you can follow SAP Note [2629711](https://launchpad.support.sap.com/#/notes/2629711).
-
+ 1. Start the hdblcm program from the HANA installation software directory. ```bash
For more information about the required ports for SAP HANA, read the chapter [Co
- For Enter Database User (SYSTEM) Password: Enter the database user password - For Confirm Database User (SYSTEM) Password: Enter the database user password again to confirm - For Restart system after machine reboot? [n]: press Enter to accept the default
- - For Do you want to continue? (y/n): Validate the summary. Enter **y** to continue
+ - For Do you want to continue? (y/n): Validate the summary. Enter **y** to continue
7. **[A]** Upgrade SAP Host Agent
This is an important step to optimize the integration with the cluster and impro
## Configure SAP HANA cluster resources
-This section describes the necessary steps required to configure the SAP HANA Cluster resources.
+This section describes the steps required to configure the SAP HANA cluster resources.
### Create SAP HANA cluster resources
Create a dummy file system cluster resource, which monitors and reports failures
```bash sudo crm status
-
+ # Cluster Summary: # Stack: corosync # Current DC: hanadb1 (version 2.0.5+20201202.ba59be712-4.9.1-2.0.5+20201202.ba59be712) - partition with quorum
Create a dummy file system cluster resource, which monitors and reports failures
# Last change: Tue Nov 2 17:57:38 2021 by root via crm_attribute on hanadb1 # 2 nodes configured # 11 resource instances configured
-
+ # Node List: # Online: [ hanadb1 hanadb2 ]
-
+ # Full List of Resources: # Clone Set: cln_azure-events [rsc_azure-events]: # Started: [ hanadb1 hanadb2 ]
Create a dummy file system cluster resource, which monitors and reports failures
`OCF_CHECK_LEVEL=20` attribute is added to the monitor operation, so that monitor operations perform a read/write test on the file system. Without this attribute, the monitor operation only verifies that the file system is mounted. This can be a problem because when connectivity is lost, the file system may remain mounted, despite being inaccessible.
- `on-fail=fence` attribute is also added to the monitor operation. With this option, if the monitor operation fails on a node, that node is immediately fenced.
+ `on-fail=fence` attribute is also added to the monitor operation. With this option, if the monitor operation fails on a node, that node is immediately fenced.
> [!IMPORTANT] > Timeouts in the above configuration may need to be adapted to the specific HANA setup to avoid unnecessary fence actions. Don't set the timeout values too low. Be aware that the filesystem monitor is not related to the HANA system replication. For details see [SUSE documentation](https://www.suse.com/support/kb/doc/?id=000019904).
This section describes how you can test your set up.
```bash SAPHanaSR-showAttr
-
+ # You should see something like below # hanadb1:~ SAPHanaSR-showAttr # Global cib-time maintenance
This section describes how you can test your set up.
```bash sudo crm status
-
+ #Cluster Summary: # Stack: corosync # Current DC: hanadb2 (version 2.0.5+20201202.ba59be712-4.9.1-2.0.5+20201202.ba59be712) - partition with quorum
This section describes how you can test your set up.
# Last change: Mon Nov 8 23:25:19 2021 by root via crm_attribute on hanadb2 # 2 nodes configured # 11 resource instances configured
-
+ # Node List: # Online: [ hanadb1 hanadb2 ] # Full List of Resources:
This section describes how you can test your set up.
```bash sudo crm status
-
+ #Cluster Summary: # Stack: corosync # Current DC: hanadb2 (version 2.0.5+20201202.ba59be712-4.9.1-2.0.5+20201202.ba59be712) - partition with quorum
This section describes how you can test your set up.
# Last change: Mon Nov 8 23:00:46 2021 by root via crm_attribute on hanadb1 # 2 nodes configured # 11 resource instances configured
-
+ #Node List: # Online: [ hanadb1 hanadb2 ]
-
+ #Full List of Resources: # Clone Set: cln_azure-events [rsc_azure-events]: # Started: [ hanadb1 hanadb2 ]
This section describes how you can test your set up.
```bash sudo crm status
-
+ #Cluster Summary: # Stack: corosync # Current DC: hanadb2 (version 2.0.5+20201202.ba59be712-4.9.1-2.0.5+20201202.ba59be712) - partition with quorum
This section describes how you can test your set up.
# Last change: Wed Nov 10 21:59:47 2021 by root via crm_attribute on hanadb2 # 2 nodes configured # 11 resource instances configured
-
+ #Node List: # Online: [ hanadb1 hanadb2 ]
-
+ #Full List of Resources: # Clone Set: cln_azure-events [rsc_azure-events]: # Started: [ hanadb1 hanadb2 ]
search Cognitive Search Concept Annotations Syntax https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-concept-annotations-syntax.md
Title: Reference inputs and outputs in skillsets
+ Title: Reference enriched nodes during skillset execution
description: Explains the annotation syntax and how to reference inputs and outputs of a skillset in an AI enrichment pipeline in Azure AI Search.
- ignite-2023- Previously updated : 09/16/2022+ Last updated : 01/18/2024
-# Reference an annotation in an Azure AI Search skillset
-In this article, you'll learn how to reference *annotations* (or an enrichment node) in skill definitions, using examples to illustrate various scenarios.
+# Reference a path to enriched nodes using context and source properties in an Azure AI Search skillset
-Skills read inputs and write outputs to nodes in an [enriched document](cognitive-search-working-with-skillsets.md#enrichment-tree) tree, building the tree as the enrichments progress. Any node can be referenced in an input for further downstream enrichment, or mapped to an output field in an index. This article introduces the syntax and provides examples for specifying a path to a node. For the full syntax, see [Skill context and input annotation language language](cognitive-search-skill-annotation-language.md).
+During skillset execution, the engine builds an in-memory [enrichment tree](cognitive-search-working-with-skillsets.md#enrichment-tree) that captures each enrichment, such as recognized entities or translated text. In this article, learn how to reference an enrichment node in the enrichment tree so that you can pass output to downstream skills or specify an output field mapping for a search index field.
-Paths to an annotation are specified in the "context" and "source" properties of a skillset, and in [output field mappings](cognitive-search-output-field-mapping.md) in an indexer. Here's an example of what paths might look like in a skillset:
--
-The example in the screenshot illustrates the path for an item in an Azure Cosmos DB collection.
-
-+ "context" path is `/document/HotelId` because the collection is partitioned into documents by the `/HotelId` field.
-
-+ "source" path is `/document/Description` because the skill is a translation skill, and the field that you'll want the skill to translate is the `Description` field in each document.
+This article uses examples to illustrate various scenarios. For the full syntax, see [Skill context and input annotation language](cognitive-search-skill-annotation-language.md).
## Background concepts
Before reviewing the syntax, let's revisit a few important concepts to better un
| Term | Description | ||-|
-| "enriched document" | An enriched document is an internal structure that collects skill output as it's created and it holds all annotations related to a document. Think of an enriched document as a tree of annotations. Generally, an annotation created from a previous annotation becomes its child. </p>Enriched documents only exist for the duration of skillset execution. Once content is mapped to the search index, the enriched document is no longer needed. Although you don't interact with enriched documents directly, it's useful to have a mental model of the documents when creating a skillset. |
-| "annotation" | Within an enriched document, a node that is created and populated by a skill, such as "text" and "layoutText" in the OCR skill, is called an annotation. An enriched document is populated with both annotations and unchanged field values or metadata copied from the source. |
+| "enriched document" | An enriched document is an in-memory structure that collects skill output as it's created and it holds all enrichments related to a document. Think of an enriched document as a tree. Generally, the tree starts at the root document level, and each new enrichment is created from a previous as its child. |
+| "node" | Within an enriched document, a node (sometimes referred to as an "annotation") is created and populated by a skill, such as "text" and "layoutText" in the OCR skill. An enriched document is populated with both enrichments and original source field values or metadata copied from the source. |
| "context" | The scope of enrichment, which is either the entire document, a portion of a document, or if you're working with images, the extracted images from a document. By default, the enrichment context is at the `"/document"` level, scoped to individual documents contained in the data source. When a skill runs, the outputs of that skill become [properties of the defined context](#example-2). | ## Paths for different scenarios Paths are specified in the "context" and "source" properties of a skillset, and in the [output field mappings](cognitive-search-output-field-mapping.md) in an indexer. +
+The example in the screenshot illustrates the path for an item in an Azure Cosmos DB collection.
+
++ `context` path is `/document/HotelId` because the collection is partitioned into documents by the `/HotelId` field.
+
++ `source` path is `/document/Description` because the skill is a translation skill, and the field that you'll want the skill to translate is the `Description` field in each document.
+
All paths start with `/document`. An enriched document is created in the "document cracking" stage of indexer execution, when the indexer opens a document or reads in a row from the data source. Initially, the only node in an enriched document is the [root node (`/document`)](cognitive-search-skill-annotation-language.md#document-root), and it's the node from which all other enrichments occur. The following list includes several common examples:
Examples in the remainder of this article are based on the "content" field gener
## Example 1: Simple annotation reference
-In Azure Blob Storage, suppose you have a variety of files containing references to people's names that you want to extract using entity recognition. In the skill definition below, `"/document/content"` is the textual representation of the entire document, and "people" is an extraction of full names for entities identified as persons.
+In Azure Blob Storage, suppose you have a variety of files containing references to people's names that you want to extract using entity recognition. In the following skill definition, `"/document/content"` is the textual representation of the entire document, and "people" is an extraction of full names for entities identified as persons.
Because the default context is `"/document"`, the list of people can now be referenced as `"/document/people"`. In this specific case `"/document/people"` is an annotation, which could now be mapped to a field in an index, or used in another skill in the same skillset.
search Cognitive Search How To Debug Skillset https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-how-to-debug-skillset.md
Enriched documents are internal, but a debug session gives you access to the con
If the field mappings are correct, check individual skills for configuration and content. If a skill fails to produce output, it might be missing a property or parameter, which can be determined through error and validation messages.
-Other issues, such as an invalid context or input expression, can be harder to resolve because the error will tell you what is wrong, but not how to fix it. For help with context and input syntax, see [Reference annotations in an Azure AI Search skillset](cognitive-search-concept-annotations-syntax.md#background-concepts). For help with individual messages, see [Troubleshooting common indexer errors and warnings](cognitive-search-common-errors-warnings.md).
+Other issues, such as an invalid context or input expression, can be harder to resolve because the error will tell you what is wrong, but not how to fix it. For help with context and input syntax, see [Reference enrichments in an Azure AI Search skillset](cognitive-search-concept-annotations-syntax.md#background-concepts). For help with individual messages, see [Troubleshooting common indexer errors and warnings](cognitive-search-common-errors-warnings.md).
The following steps show you how to get information about a skill.
You can edit the skill definition in the portal.
### Test your code
-At this point, new requests from your debug session should now be sent to your local Azure Function. You can use breakpoints in your Visual Studio code to debug your code or run step by step.
+At this point, new requests from your debug session should now be sent to your local Azure Function. You can use breakpoints in your Visual Studio Code to debug your code or run step by step.
## Next steps
search Cognitive Search Output Field Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-output-field-mapping.md
Output field mappings are added to the `outputFieldMappings` array in an indexer
| Property | Description | |-|-|
-| sourceFieldName | Required. Specifies a path to enriched content. An example might be `/document/content`. See [Reference annotations in an Azure AI Search skillset](cognitive-search-concept-annotations-syntax.md) for path syntax and examples. |
+| sourceFieldName | Required. Specifies a path to enriched content. An example might be `/document/content`. See [Reference enrichments in an Azure AI Search skillset](cognitive-search-concept-annotations-syntax.md) for path syntax and examples. |
| targetFieldName | Optional. Specifies the search field that receives the enriched content. Target fields must be top-level simple fields or collections. It can't be a path to a subfield in a complex type. If you want to retrieve specific nodes in a complex structure, you can [flatten individual nodes](#flattening-information-from-complex-types) in memory, and then send the output to a string collection in your index. | | mappingFunction | Optional. Adds extra processing provided by [mapping functions](search-indexer-field-mappings.md#mappingFunctions) supported by indexers. In the case of enrichment nodes, encoding and decoding are the most commonly used functions. |
search Cognitive Search Skill Annotation Language https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-skill-annotation-language.md
Parentheses can be used to change or disambiguate evaluation order.
## See also + [Create a skillset in Azure AI Search](cognitive-search-defining-skillset.md)
-+ [Reference annotations in an Azure AI Search skillset](cognitive-search-concept-annotations-syntax.md)
++ [Reference enrichments in an Azure AI Search skillset](cognitive-search-concept-annotations-syntax.md)
search Cognitive Search Working With Skillsets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-working-with-skillsets.md
Skills can execute independently and in parallel, or sequentially if you feed th
+ Skill #2 is a [Sentiment Detection skill](cognitive-search-skill-sentiment.md) accepts "pages" as input, and produces a new field called "Sentiment" as output that contains the results of sentiment analysis.
-Notice how the output of the first skill ("pages") is used in sentiment analysis, where "/document/reviews_text/pages/*" is both the context and input. For more information about path formulation, see [How to reference annotations](cognitive-search-concept-annotations-syntax.md).
+Notice how the output of the first skill ("pages") is used in sentiment analysis, where "/document/reviews_text/pages/*" is both the context and input. For more information about path formulation, see [How to reference enrichments](cognitive-search-concept-annotations-syntax.md).
```json {
The enrichment tree now has a new node placed under the context of the skill. Th
![enrichment tree after skill #1](media/cognitive-search-working-with-skillsets/enrichment-tree-skill1.png "Enrichment tree after skill #1 executes")
-To access any of the enrichments added to a node by a skill, the full path for the enrichment is needed. For example, if you want to use the text from the ```pages``` node as an input to another skill, you'll need to specify it as ```"/document/reviews_text/pages/*"```. For more information about paths, see [Reference annotations](cognitive-search-concept-annotations-syntax.md).
+To access any of the enrichments added to a node by a skill, the full path for the enrichment is needed. For example, if you want to use the text from the ```pages``` node as an input to another skill, you'll need to specify it as ```"/document/reviews_text/pages/*"```. For more information about paths, see [Reference enrichments](cognitive-search-concept-annotations-syntax.md).
### Skill #2 Language detection
search Index Add Suggesters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/index-add-suggesters.md
Title: Configure a suggester
-description: Enable type-ahead query actions in Azure AI Search by creating suggesters and formulating requests that invoke autocomplete or autosuggested query terms.
+description: Enable typeahead query actions in Azure AI Search by creating suggesters and formulating requests that invoke autocomplete or autosuggested query terms.
Previously updated : 12/02/2022 Last updated : 01/18/2024 - devx-track-csharp - devx-track-dotnet - ignite-2023
-# Configure a suggester to enable autocomplete and suggested results in a query
+# Configure a suggester for autocomplete and suggested matches in a query
-In Azure AI Search, typeahead or "search-as-you-type" is enabled through a *suggester*. A suggester is defined in an index and provides a list of fields that undergo extra tokenization, generating prefix sequences to support matches on partial terms. For example, a suggester that includes a City field with a value for "Seattle" will have prefix combinations of "sea", "seat", "seatt", and "seattl" to support typeahead.
+In Azure AI Search, typeahead (autocomplete) or "search-as-you-type" is enabled through a *suggester*. A suggester is a configuration in an index specifying which fields should be used to populate autocomplete and suggestions. These fields undergo extra tokenization, generating prefix sequences to support matches on partial terms. For example, a suggester that includes a City field with a value for "Seattle" will have prefix combinations of "sea", "seat", "seatt", and "seattl" to support typeahead.
Matches on partial terms can be either an autocompleted query or a suggested match. The same suggester supports both experiences.
Matches on partial terms can be either an autocompleted query or a suggested mat
Typeahead can be *autocomplete*, which completes a partial input for a whole term query, or *suggestions* that invite click through to a particular match. Autocomplete produces a query. Suggestions produce a matching document.
-The following screenshot from [Create your first app in C#](tutorial-csharp-type-ahead-and-suggestions.md) illustrates both. Autocomplete anticipates a potential term, finishing "tw" with "in". Suggestions are mini search results, where a field like hotel name represents a matching hotel search document from the index. For suggestions, you can surface any field that provides descriptive information.
+The following screenshot illustrates both. Autocomplete anticipates a potential term, finishing "tw" with "in". Suggestions are mini search results, where a field like hotel name represents a matching hotel search document from the index. For suggestions, you can surface any field that provides descriptive information.
![Visual comparison of autocomplete and suggested queries](./media/index-add-suggesters/hotel-app-suggestions-autocomplete.png "Visual comparison of autocomplete and suggested queries")
You can use these features separately or together. To implement these behaviors
+ Add a suggester to a search index definition. The remainder of this article is focused on creating a suggester.
-+ Call a suggester-enabled query, in the form of a Suggestion request or Autocomplete request, using one of the [APIs listed below](#how-to-use-a-suggester).
++ Call a suggester-enabled query, in the form of a Suggestion request or Autocomplete request, using one of the [APIs listed in a later section](#how-to-use-a-suggester).
-Search-as-you-type support is enabled on a per-field basis for string fields. You can implement both typeahead behaviors within the same search solution if you want an experience similar to the one indicated in the screenshot. Both requests target the *documents* collection of specific index and responses are returned after a user has provided at least a three character input string.
+Search-as-you-type is enabled on a per-field basis for string fields. You can implement both typeahead behaviors within the same search solution if you want an experience similar to the one indicated in the screenshot. Both requests target the *documents* collection of specific index and responses are returned after a user provides at least a three character input string.
## How to create a suggester
-To create a suggester, add one to an [index definition](/rest/api/searchservice/create-index). A suggester takes a name and a collection of fields over which the typeahead experience is enabled. The best time to create a suggester is when you're also defining the field that will use it.
+To create a suggester, add one to an [index definition](/rest/api/searchservice/create-index). A suggester takes a name and a collection of fields over which the typeahead experience is enabled. The best time to create a suggester is when you're also defining the field that uses it.
+ Use string fields only.
To create a suggester, add one to an [index definition](/rest/api/searchservice/
+ Use the default standard Lucene analyzer (`"analyzer": null`) or a [language analyzer](index-add-language-analyzers.md) (for example, `"analyzer": "en.Microsoft"`) on the field.
-If you try to create a suggester using pre-existing fields, the API will disallow it. Prefixes are generated during indexing, when partial terms in two or more character combinations are tokenized alongside whole terms. Given that existing fields are already tokenized, you'll have to rebuild the index if you want to add them to a suggester. For more information, see [How to rebuild an Azure AI Search index](search-howto-reindex.md).
+If you try to create a suggester using pre-existing fields, the API disallows it. Prefixes are generated during indexing, when partial terms in two or more character combinations are tokenized alongside whole terms. Given that existing fields are already tokenized, you have to rebuild the index if you want to add them to a suggester. For more information, see [How to rebuild an Azure AI Search index](search-howto-reindex.md).
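For illustration, a minimal index definition that includes a suggester might look like the following REST call; the service name, admin key, index name, and field names are assumptions.

```bash
# Hypothetical index with a suggester; replace <service> and <admin-key> with your own values.
curl -X PUT "https://<service>.search.windows.net/indexes/hotels-sample?api-version=2023-11-01" \
  -H "Content-Type: application/json" \
  -H "api-key: <admin-key>" \
  -d '{
    "name": "hotels-sample",
    "fields": [
      { "name": "HotelId", "type": "Edm.String", "key": true },
      { "name": "HotelName", "type": "Edm.String", "searchable": true, "analyzer": "en.microsoft" }
    ],
    "suggesters": [
      { "name": "sg", "searchMode": "analyzingInfixMatching", "sourceFields": [ "HotelName" ] }
    ]
  }'
```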
### Choose fields Although a suggester has several properties, it's primarily a collection of string fields for which you're enabling a search-as-you-type experience. There's one suggester for each index, so the suggester list must include all fields that contribute content for both suggestions and autocomplete.
-Autocomplete benefits from a larger pool of fields to draw from because the additional content has more term completion potential.
+Autocomplete benefits from a larger pool of fields to draw from because the extra content has more term completion potential.
-Suggestions, on the other hand, produce better results when your field choice is selective. Remember that the suggestion is a proxy for a search document so you'll want fields that best represent a single result. Names, titles, or other unique fields that distinguish among multiple matches work best. If fields consist of repetitive values, the suggestions consist of identical results and a user won't know which one to click.
+Suggestions, on the other hand, produce better results when your field choice is selective. Remember that the suggestion is a proxy for a search document so pick fields that best represent a single result. Names, titles, or other unique fields that distinguish among multiple matches work best. If fields consist of repetitive values, the suggestions consist of identical results and a user won't know which one to choose.
-To satisfy both search-as-you-type experiences, add all of the fields that you need for autocomplete, but then use "$select", "$top", "$filter", and "searchFields" to control results for suggestions.
+To satisfy both search-as-you-type experiences, add all of the fields that you need for autocomplete, but then use `select`, `top`, `filter`, and `searchFields` to control results for suggestions.
### Choose analyzers
-Your choice of an analyzer determines how fields are tokenized and subsequently prefixed. For example, for a hyphenated string like "context-sensitive", using a language analyzer will result in these token combinations: "context", "sensitive", "context-sensitive". Had you used the standard Lucene analyzer, the hyphenated string wouldn't exist.
+Your choice of an analyzer determines how fields are tokenized and prefixed. For example, for a hyphenated string like "context-sensitive", using a language analyzer results in these token combinations: "context", "sensitive", "context-sensitive". Had you used the standard Lucene analyzer, the hyphenated string wouldn't exist.
When evaluating analyzers, consider using the [Analyze Text API](/rest/api/searchservice/test-analyzer) for insight into how terms are processed. Once you build an index, you can try various analyzers on a string to view token output.
-Fields that use [custom analyzers](index-add-custom-analyzers.md) or [built-in analyzers](index-add-custom-analyzers.md#built-in-analyzers) (with the exception of standard Lucene) are explicitly disallowed to prevent poor outcomes.
+Fields that use [custom analyzers](index-add-custom-analyzers.md) or [built-in analyzers](index-add-custom-analyzers.md#built-in-analyzers) (except for standard Lucene) are explicitly disallowed to prevent poor outcomes.
> [!NOTE] > If you need to work around the analyzer constraint, for example if you need a keyword or ngram analyzer for certain query scenarios, you should use two separate fields for the same content. This will allow one of the fields to have a suggester, while the other can be set up with a custom analyzer configuration.
A suggester is used in a query. After a suggester is created, call one of the fo
In a search application, client code should use a library like [jQuery UI Autocomplete](https://jqueryui.com/autocomplete/) to collect the partial query and provide the match. For more information about this task, see [Add autocomplete or suggested results to client code](search-add-autocomplete-suggestions.md).
-API usage is illustrated in the following call to the Autocomplete REST API. There are two takeaways from this example. First, as with all queries, the operation is against the documents collection of an index and the query includes a "search" parameter, which in this case provides the partial query. Second, you must add "suggesterName" to the request. If a suggester isn't defined in the index, a call to autocomplete or suggestions will fail.
+API usage is illustrated in the following call to the Autocomplete REST API. There are two takeaways from this example. First, as with all queries, the operation is against the documents collection of an index and the query includes a `search` parameter, which in this case provides the partial query. Second, you must add `suggesterName` to the request. If a suggester isn't defined in the index, calls to autocomplete or suggestions fail.
```http
-POST /indexes/myxboxgames/docs/autocomplete?search&api-version=2020-06-30
+POST /indexes/myxboxgames/docs/autocomplete?search&api-version=2023-11-01
{ "search": "minecraf", "suggesterName": "sg"
search Knowledge Store Projection Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/knowledge-store-projection-overview.md
Title: Projection concepts
-description: Introduces projection concepts and best practices. If you are creating a knowledge store in Azure AI Search, projections will determine the type, quantity, and composition of objects in Azure Storage.
+description: Introduces projection concepts and best practices. If you are creating a knowledge store in Azure AI Search, projections determine the type, quantity, and composition of objects in Azure Storage.
- ignite-2023 Previously updated : 10/25/2022 Last updated : 01/18/2024 # Knowledge store "projections" in Azure AI Search
-Projections are the physical tables, objects, and files in a [**knowledge store**](knowledge-store-concept-intro.md) that accept content from an Azure AI Search enrichment pipeline. If you're creating a knowledge store, defining and shaping projections is most of the work.
+Projections define the physical tables, objects, and files in a [**knowledge store**](knowledge-store-concept-intro.md) that accept content from an Azure AI Search enrichment pipeline. If you're creating a knowledge store, defining and shaping projections is most of the work.
This article introduces projection concepts and workflow so that you have some background before you start coding.
A knowledge store is a logical construction that's physically expressed as a loo
## Projection definition
-Projections are specified under the "knowledgeStore" property of a [skillset](/rest/api/searchservice/create-skillset). Projection definitions are used during indexer invocation to create and load objects in Azure Storage with enriched content. If you are unfamiliar with these concepts, start with [AI enrichment](cognitive-search-concept-intro.md) for an introduction.
+Projections are specified under the "knowledgeStore" property of a [skillset](/rest/api/searchservice/create-skillset). Projection definitions are used during indexer invocation to create and load objects in Azure Storage with enriched content. If you're unfamiliar with these concepts, start with [AI enrichment](cognitive-search-concept-intro.md) for an introduction.
The following example illustrates the placement of projections under knowledgeStore, and the basic construction. The name, type, and content source make up a projection definition.
Projection groups have the following key characteristics of mutual exclusivity a
| Principle | Description | |--|-|
-| Mutual exclusivity | Each group is fully isolated from other groups to support different data shaping scenarios. For example, if you are testing different table structures and combinations, you would put each set in a different projection group for AB testing. Each group obtains data from the same source (enrichment tree) but is fully isolated from the table-object-file combination of any peer projection groups.|
-| Relatedness | Within a projection group, content in tables, objects, and files are related. Knowledge store uses generated keys as reference points to a common parent node. For example, consider a scenario where you have a document containing images and text. You could project the text to tables and the images to binary files, and both tables and objects will have a column/property containing the file URL.|
+| Mutual exclusivity | Each group is fully isolated from other groups to support different data shaping scenarios. For example, if you're testing different table structures and combinations, you would put each set in a different projection group for AB testing. Each group obtains data from the same source (enrichment tree) but is fully isolated from the table-object-file combination of any peer projection groups.|
+| Relatedness | Within a projection group, content in tables, objects, and files are related. Knowledge store uses generated keys as reference points to a common parent node. For example, consider a scenario where you have a document containing images and text. You could project the text to tables and the images to binary files, and both tables and objects have a column/property containing the file URL.|
## Projection "source" The source parameter is the third component of a projection definition. Because projections store data from an AI enrichment pipeline, the source of a projection is always the output of a skill. As such, output might be a single field (for example, a field of translated text), but often it's a reference to a data shape.
-Data shapes come from your skillset. Among all of the built-in skills provided in Azure AI Search, there is a utility skill called the [**Shaper skill**](cognitive-search-skill-shaper.md) that's used to create data shapes. You can include Shaper skills (as many as you need) to support the projections in the knowledge store.
+Data shapes come from your skillset. Among all of the built-in skills provided in Azure AI Search, there's a utility skill called the [**Shaper skill**](cognitive-search-skill-shaper.md) that's used to create data shapes. You can include Shaper skills (as many as you need) to support the projections in the knowledge store.
Shapes are frequently used with table projections, where the shape not only specifies which rows go into the table, but also which columns are created (you can also pass a shape to an object projection).
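As a hedged sketch of that pattern, a Shaper skill along these lines (the input fields are illustrative only) assembles a `documentShape` node whose properties become table columns when the shape is referenced as a projection `source`:

```json
{
  "@odata.type": "#Microsoft.Skills.Util.ShaperSkill",
  "name": "shape-for-tables",
  "context": "/document",
  "inputs": [
    { "name": "FileName", "source": "/document/metadata_storage_name" },
    { "name": "Text", "source": "/document/content" }
  ],
  "outputs": [
    { "name": "output", "targetName": "documentShape" }
  ]
}
```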
Shapes can be complex and it's out of scope to discuss them in depth here, but t
## Projection lifecycle
-Projections have a lifecycle that is tied to the source data in your data source. As source data is updated and reindexed, projections are updated with the results of the enrichments, ensuring your projections are eventually consistent with the data in your data source. However, projections are also independently stored in Azure Storage. They will not be deleted when the indexer or the search service itself is deleted.
+Projections have a lifecycle that is tied to the source data in your data source. As source data is updated and reindexed, projections are updated with the results of the enrichments, ensuring your projections are eventually consistent with the data in your data source. However, projections are also independently stored in Azure Storage. They won't be deleted when the indexer or the search service itself is deleted.
## Consume in apps
After the indexer is run, connect to projections and consume the data in other a
## Checklist for getting started
-Recall that projections are exclusive to knowledge stores, and are not used to structure a search index.
+Recall that projections are exclusive to knowledge stores, and aren't used to structure a search index.
1. In Azure Storage, get a connection string from **Access Keys** and verify the account is StorageV2 (general purpose V2).
-1. While in Azure Storage, familiarize yourself with existing content in containers and tables so that you choose non-conflicting names for the projections. A knowledge store is a loose collection of tables and containers. Consider adopting a naming convention to keep track of related objects.
+1. While in Azure Storage, familiarize yourself with existing content in containers and tables so that you choose nonconflicting names for the projections. A knowledge store is a loose collection of tables and containers. Consider adopting a naming convention to keep track of related objects.
-1. In Azure AI Search, [enable enrichment caching (preview)](search-howto-incremental-index.md) in the indexer and then [run the indexer](search-howto-run-reset-indexers.md) to execute the skillset and populate the cache. This is a preview feature, so be sure to use the preview REST API (api-version=2020-06-30-preview or later) on the indexer request. Once the cache is populated, you can modify projection definitions in a knowledge store free of charge (as long as the skills themselves are not modified).
+1. In Azure AI Search, [enable enrichment caching (preview)](search-howto-incremental-index.md) in the indexer and then [run the indexer](search-howto-run-reset-indexers.md) to execute the skillset and populate the cache. This is a preview feature, so be sure to use the preview REST API (api-version=2020-06-30-preview or later) on the indexer request. Once the cache is populated, you can modify projection definitions in a knowledge store free of charge (as long as the skills themselves aren't modified).
-1. In your code, all projections are defined solely in a skillset. There are no indexer properties (such as field mappings or output field mappings) that apply to projections. Within a skillset definition, you will focus on two areas: knowledgeStore property and skills array.
+1. In your code, all projections are defined solely in a skillset. There are no indexer properties (such as field mappings or output field mappings) that apply to projections. Within a skillset definition, you'll focus on two areas: knowledgeStore property and skills array.
1. Under knowledgeStore, specify table, object, file projections in the `projections` section. Object type, object name, and quantity (per the number of projections you define) are determined in this section.
- 1. From the skills array, determine which skill outputs will be referenced in the `source` of each projection. All projections have a source. The source can be the output of an upstream skill, but is often the output of a Shaper skill. The composition of your projection is determined through shapes.
+ 1. From the skills array, determine which skill outputs should be referenced in the `source` of each projection. All projections have a source. The source can be the output of an upstream skill, but is often the output of a Shaper skill. The composition of your projection is determined through shapes.
1. If you're adding projections to an existing skillset, [update the skillset](/rest/api/searchservice/update-skillset) and [run the indexer](/rest/api/searchservice/run-indexer) (a minimal sketch of the run request follows this checklist).
1. Check your results in Azure Storage. On subsequent runs, avoid naming collisions by deleting objects in Azure Storage or changing projection names in the skillset.
-1. If you are using [Table projections](knowledge-store-projections-examples.md#define-a-table-projection) check [Understanding the Table Service data model](/rest/api/storageservices/Understanding-the-Table-Service-Data-Model) and [Scalability and performance targets for Table storage](../storage/tables/scalability-targets.md) to make sure your data requirements are within Table storage documented limits.
+1. If you're using [Table projections](knowledge-store-projections-examples.md#define-a-table-projection), check [Understanding the Table Service data model](/rest/api/storageservices/Understanding-the-Table-Service-Data-Model) and [Scalability and performance targets for Table storage](../storage/tables/scalability-targets.md) to make sure your data requirements are within the documented limits of Table storage.
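As a small illustration of the update-and-run step, after updating the skillset, a run request like the following (the indexer name is a placeholder) executes the skillset and refreshes the projections:

```http
POST https://[service name].search.windows.net/indexers/my-indexer/run?api-version=2023-11-01
api-key: [admin key]
```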
## Next steps
search Monitor Azure Cognitive Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/monitor-azure-cognitive-search.md
Previously updated : 12/20/2022 Last updated : 01/18/2024 - subject-monitoring - ignite-2023
# Monitoring Azure AI Search
-[Azure Monitor](../azure-monitor/overview.md) is enabled with every subscription to provide monitoring capabilities over all Azure resources, including Azure AI Search. When you sign up for search, Azure Monitor collects [**activity logs**](../azure-monitor/data-sources.md#azure-activity-log) and [**platform metrics**](../azure-monitor/essentials/data-platform-metrics.md) as soon as you start using the service.
+[Azure Monitor](../azure-monitor/overview.md) is enabled with every subscription to provide uniform monitoring capabilities over all Azure resources, including Azure AI Search. When you create a search service, Azure Monitor collects [**activity logs**](../azure-monitor/data-sources.md#azure-activity-log) and [**platform metrics**](../azure-monitor/essentials/data-platform-metrics.md) as soon as you start using the service.
Optionally, you can enable diagnostic settings to collect [**resource logs**](../azure-monitor/essentials/resource-logs.md). Resource logs contain detailed information about search service operations that's useful for deeper analysis and investigation.
search Query Lucene Syntax https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/query-lucene-syntax.md
- ignite-2023 Previously updated : 06/29/2023 Last updated : 01/17/2024 # Lucene query syntax in Azure AI Search When creating queries in Azure AI Search, you can opt for the full [Lucene Query Parser](https://lucene.apache.org/core/6_6_1/queryparser/org/apache/lucene/queryparser/classic/package-summary.html) syntax for specialized query forms: wildcard, fuzzy search, proximity search, regular expressions. Much of the Lucene Query Parser syntax is [implemented intact in Azure AI Search](search-lucene-query-architecture.md), except for *range searches, which are constructed through **`$filter`** expressions.
-To use full Lucene syntax, set the queryType to "full" and pass in a query expression patterned for wildcard, fuzzy search, or one of the other query forms supported by the full syntax. In REST, query expressions are provided in the **`search`** parameter of a [Search Documents (REST API)](/rest/api/searchservice/search-documents) request.
+To use full Lucene syntax, set the queryType to `full` and pass in a query expression patterned for wildcard, fuzzy search, or one of the other query forms supported by the full syntax. In REST, query expressions are provided in the **`search`** parameter of a [Search Documents (REST API)](/rest/api/searchservice/search-documents) request.
## Example (full syntax)

The following example is a search request constructed using the full syntax. This particular example shows in-field search and term boosting. It looks for hotels where the category field contains the term `budget`. Any documents containing the phrase `"recently renovated"` are ranked higher as a result of the term boost value (3).

```http
-POST /indexes/hotels-sample-index/docs/search?api-version=2020-06-30
+POST /indexes/hotels-sample-index/docs/search?api-version=2023-11-01
{ "queryType": "full", "search": "category:budget AND \"recently renovated\"^3",
You can embed Boolean operators in a query string to improve the precision of a
* In full syntax, queries with a single negation are not allowed. For example, the query `-luxury` is not allowed.
* In full syntax, negations behave as if they're always ANDed onto the query regardless of the search mode.
* For example, the full syntax query `wifi -luxury` only fetches documents that contain the term `wifi`, and then applies the negation `-luxury` to those documents.
-* If you want to use negations to search over all documents in the index, simple syntax with the any search mode is recommended.
+* If you want to use negations to search over all documents in the index, simple syntax with the `any` search mode is recommended.
* If you want to use negations to search over a subset of documents in the index, full syntax or the simple syntax with the `all` search mode are recommended.

| Query Type | Search Mode | Example Query | Behavior |
search Query Simple Syntax https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/query-simple-syntax.md
- ignite-2023 Previously updated : 10/27/2022 Last updated : 01/17/2024 # Simple query syntax in Azure AI Search
-Azure AI Search implements two Lucene-based query languages: [Simple Query Parser](https://lucene.apache.org/core/6_6_1/queryparser/org/apache/lucene/queryparser/simple/SimpleQueryParser.html) and the [Lucene Query Parser](https://lucene.apache.org/core/6_6_1/queryparser/org/apache/lucene/queryparser/classic/package-summary.html). The simple parser is more flexible and will attempt to interpret a request even if it's not perfectly composed. Because it's flexible, it's the default for queries in Azure AI Search.
+For full text search scenarios, Azure AI Search implements two Lucene-based query languages, each one aligned to a query parser. The [Simple Query Parser](https://lucene.apache.org/core/6_6_1/queryparser/org/apache/lucene/queryparser/simple/SimpleQueryParser.html) is the default. It covers common use cases and attempts to interpret a request even if it's not perfectly composed. The other parser is [Lucene Query Parser](https://lucene.apache.org/core/6_6_1/queryparser/org/apache/lucene/queryparser/classic/package-summary.html) and it supports more advanced query constructions.
-Query syntax for either parser applies to query expressions passed in the "search" parameter of a [Search Documents (REST API)](/rest/api/searchservice/search-documents) request, not to be confused with the [OData syntax](query-odata-filter-orderby-syntax.md) used for the ["$filter"](search-filters.md) and ["$orderby"](search-query-odata-orderby.md) expressions in the same request. OData parameters have different syntax and rules for constructing queries, escaping strings, and so on.
+This article is the query syntax reference for the simple query parser.
+
+Query syntax for both parsers applies to query expressions passed in the `search` parameter of a [query request](search-query-create.md), not to be confused with the [OData syntax](query-odata-filter-orderby-syntax.md), with its own syntax and rules for [`filter`](search-filters.md) and [`orderby`](search-query-odata-orderby.md) expressions in the same request.
Although the simple parser is based on the [Apache Lucene Simple Query Parser](https://lucene.apache.org/core/6_6_1/queryparser/org/apache/lucene/queryparser/simple/SimpleQueryParser.html) class, its implementation in Azure AI Search excludes fuzzy search. If you need [fuzzy search](search-query-fuzzy.md), consider the alternative [full Lucene query syntax](query-lucene-syntax.md) instead.
POST https://{{service-name}}.search.windows.net/indexes/hotel-rooms-sample/docs
} ```
-The "searchMode" parameter is relevant in this example. Whenever boolean operators are on the query, you should generally set `"searchMode=all"` to ensure that *all* of the criteria is matched. Otherwise, you can use the default `"searchMode=any"` that favors recall over precision.
+The `searchMode` parameter is relevant in this example. Whenever boolean operators are on the query, you should generally set `searchMode=all` to ensure that *all* of the criteria are matched. Otherwise, you can use the default `searchMode=any` that favors recall over precision.
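For instance, a request along these lines (the hotels sample index is assumed) sets `searchMode=all` so that every criterion must match:

```http
POST https://[service name].search.windows.net/indexes/hotels-sample-index/docs/search?api-version=2023-11-01
Content-Type: application/json
api-key: [query key]

{
  "search": "pool + ocean",
  "queryType": "simple",
  "searchMode": "all"
}
```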
For more examples, see [Simple query syntax examples](search-query-simple-examples.md). For details about the query request and parameters, see [Search Documents (REST API)](/rest/api/searchservice/Search-Documents).

## Keyword search on terms and phrases
-Strings passed to the "search" parameter can include terms or phrases in any supported language, boolean operators, precedence operators, wildcard or prefix characters for "starts with" queries, escape characters, and URL encoding characters. The "search" parameter is optional. Unspecified, search (`search=*` or `search=" "`) returns the top 50 documents in arbitrary (unranked) order.
+Strings passed to the `search` parameter can include terms or phrases in any supported language, boolean operators, precedence operators, wildcard or prefix characters for "starts with" queries, escape characters, and URL encoding characters. The `search` parameter is optional. Unspecified, search (`search=*` or `search=" "`) returns the top 50 documents in arbitrary (unranked) order.
+ A *term search* is a query of one or more terms, where any of the terms are considered a match.
Strings passed to the "search" parameter can include terms or phrases in any sup
Depending on your search client, you might need to escape the quotation marks in a phrase search. For example, in Postman in a POST request, a phrase search on `"Roach Motel"` in the request body would be specified as `"\"Roach Motel\""`.
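In a raw REST request body, the same escaping applies. A hedged example (hotels sample index assumed):

```http
POST https://[service name].search.windows.net/indexes/hotels-sample-index/docs/search?api-version=2023-11-01
Content-Type: application/json
api-key: [query key]

{
  "search": "\"Roach Motel\"",
  "queryType": "simple"
}
```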
-By default, all strings passed in the "search" parameter undergo lexical analysis. Make sure you understand the tokenization behavior of the analyzer you're using. Often, when query results are unexpected, the reason can be traced to how terms are tokenized at query time. You can [test tokenization on specific strings](/rest/api/searchservice/test-analyzer) to confirm the output.
+By default, all strings passed in the `search` parameter undergo lexical analysis. Make sure you understand the tokenization behavior of the analyzer you're using. Often, when query results are unexpected, the reason can be traced to how terms are tokenized at query time. You can [test tokenization on specific strings](/rest/api/searchservice/test-analyzer) to confirm the output.
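For instance, a request like the following (index name and analyzer are illustrative) returns the tokens an analyzer produces for a given string:

```http
POST https://[service name].search.windows.net/indexes/hotels-sample-index/analyze?api-version=2023-11-01
Content-Type: application/json
api-key: [admin key]

{
  "text": "Ocean-view rooms at the Roach Motel",
  "analyzer": "standard.lucene"
}
```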
Any text input with one or more terms is considered a valid starting point for query execution. Azure AI Search will match documents containing any or all of the terms, including any variations found during analysis of the text.
-As straightforward as this sounds, there's one aspect of query execution in Azure AI Search that *might* produce unexpected results, increasing rather than decreasing search results as more terms and operators are added to the input string. Whether this expansion actually occurs depends on the inclusion of a NOT operator, combined with a "searchMode" parameter setting that determines how NOT is interpreted in terms of AND or OR behaviors. For more information, see the NOT operator under [Boolean operators](#boolean-operators).
+As straightforward as this sounds, there's one aspect of query execution in Azure AI Search that *might* produce unexpected results, increasing rather than decreasing search results as more terms and operators are added to the input string. Whether this expansion actually occurs depends on the inclusion of a NOT operator, combined with a `searchMode` parameter setting that determines how NOT is interpreted in terms of `AND` or `OR` behaviors. For more information, see the `NOT` operator under [Boolean operators](#boolean-operators).
## Boolean operators
You can embed Boolean operators in a query string to improve the precision of a
| Character | Example | Usage |
|--|--|-|
-| `+` | `pool + ocean` | An AND operation. For example, `pool + ocean` stipulates that a document must contain both terms.|
-| `|` | `pool | ocean` | An OR operation finds a match when either term is found. In the example, the query engine will return match on documents containing either `pool` or `ocean` or both. Because OR is the default conjunction operator, you could also leave it out, such that `pool ocean` is the equivalent of `pool | ocean`.|
-| `-` | `pool ΓÇô ocean` | A NOT operation returns matches on documents that exclude the term. </p></p>The `searchMode` parameter on a query request controls whether a term with the NOT operator is ANDed or ORed with other terms in the query (assuming there's no boolean operators on the other terms). Valid values include `any` or `all`. </p>`searchMode=any` increases the recall of queries by including more results, and by default `-` will be interpreted as "OR NOT". For example, `pool - ocean` will match documents that either contain the term `pool` or those that don't contain the term `ocean`. </p>`searchMode=all` increases the precision of queries by including fewer results, and by default `-` will be interpreted as "AND NOT". For example, with `searchMode=any`, the query `pool - ocean` will match documents that contain the term "pool" and all documents that don't contain the term "ocean". This is arguably a more intuitive behavior for the `-` operator. Therefore, you should consider using `searchMode=all` instead of `searchMode=any` if you want to optimize searches for precision instead of recall, *and* Your users frequently use the `-` operator in searches.</p> When deciding on a `searchMode` setting, consider the user interaction patterns for queries in various applications. Users who are searching for information are more likely to include an operator in a query, as opposed to e-commerce sites that have more built-in navigation structures. |
+| `+` | `pool + ocean` | An `AND` operation. For example, `pool + ocean` stipulates that a document must contain both terms.|
+| `|` | `pool | ocean` | An `OR` operation finds a match when either term is found. In the example, the query engine will return a match on documents containing either `pool` or `ocean` or both. Because `OR` is the default conjunction operator, you could also leave it out, such that `pool ocean` is the equivalent of `pool | ocean`.|
+| `-` | `pool - ocean` | A `NOT` operation returns matches on documents that exclude the term. </p></p>The `searchMode` parameter on a query request controls whether a term with the `NOT` operator is `AND`ed or `OR`ed with other terms in the query (assuming there are no Boolean operators on the other terms). Valid values include `any` or `all`. </p>`searchMode=any` increases the recall of queries by including more results, and by default `-` is interpreted as "OR NOT". For example, `pool - ocean` matches documents that either contain the term `pool` or don't contain the term `ocean`. </p>`searchMode=all` increases the precision of queries by including fewer results, and by default `-` is interpreted as "AND NOT". For example, with `searchMode=all`, the query `pool - ocean` matches documents that contain the term `pool` and don't contain the term `ocean`. This is arguably a more intuitive behavior for the `-` operator. Therefore, consider using `searchMode=all` instead of `searchMode=any` if you want to optimize searches for precision instead of recall, *and* your users frequently use the `-` operator in searches.</p> When deciding on a `searchMode` setting, consider the user interaction patterns for queries in various applications. Users who are searching for information are more likely to include an operator in a query, as opposed to e-commerce sites that have more built-in navigation structures. |
<a name="prefix-search"></a>
Special characters can range from currency symbols like '$' or '€' to emojis.
If you need special character representation, you can assign an analyzer that preserves them:
-+ The "whitespace" analyzer considers any character sequence separated by white spaces as tokens (so the '❤' emoji would be considered a token).
++ The whitespace analyzer considers any character sequence separated by white spaces as tokens (so the '❤' emoji would be considered a token).
-+ A language analyzer, such as the Microsoft English analyzer ("en.microsoft"), would take the '$' or 'Γé¼' string as a token.
++ A [language analyzer](search-language-support.md), such as the Microsoft English analyzer (`en.microsoft`), would take the '$' or '€' string as a token.
-For confirmation, you can [test an analyzer](/rest/api/searchservice/test-analyzer) to see what tokens are generated for a given string. As you might expect, you might not get full tokenization from a single analyzer. A workaround is to create multiple fields that contain the same content, but with different analyzer assignments (for example,"description_en", "description_fr", and so forth for language analyzers).
+For confirmation, you can [test an analyzer](/rest/api/searchservice/test-analyzer) to see what tokens are generated for a given string. As you might expect, you might not get full tokenization from a single analyzer. A workaround is to create multiple fields that contain the same content, but with different analyzer assignments (for example, `description_en`, `description_fr`, and so forth for language analyzers).
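A minimal sketch of that workaround, as a partial index fields definition with hypothetical field names:

```json
"fields": [
  { "name": "description_en", "type": "Edm.String", "searchable": true, "analyzer": "en.microsoft" },
  { "name": "description_fr", "type": "Edm.String", "searchable": true, "analyzer": "fr.microsoft" },
  { "name": "description_symbols", "type": "Edm.String", "searchable": true, "analyzer": "whitespace" }
]
```

Each field holds the same source content but is tokenized by a different analyzer, so queries can target whichever field preserves the characters you care about.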
When using Unicode characters, make sure symbols are properly escaped in the query URL (for instance, '❤' would use the escape sequence `%E2%9D%A4+`). Postman does this translation automatically.
search Search Create App Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-create-app-portal.md
Title: "Quickstart: Create a demo app in Azure portal"
-description: Run the Create demo app \wizard to generate HTML pages and script for an operational web app. The page includes a search bar, results area, sidebar, and typeahead support.
+description: Run the Create demo app wizard to generate HTML pages and script for an operational web app. The page includes a search bar, results area, sidebar, and typeahead support.
Previously updated : 10/13/2022 Last updated : 01/17/2024 - mode-ui - ignite-2023
The wizard supports suggestions, and the fields that can provide suggested resul
"postCode", "tags" ]
+ }
+ ]
``` 1. In the wizard, select the **Suggestions** tab at the top of the page. You'll see a list of all fields that are designated in the index schema as suggestion providers.
The wizard supports suggestions, and the fields that can provide suggested resul
1. When prompted, select **Download your app** to download the file.
-1. Open the file and click the Search button. This action executes a query, which can be an empty query (`*`) that returns an arbitrary result set. The page should look similar to the following screenshot. Enter a term and use filters to narrow results.
+1. Open the file and select the **Search** button. This action executes a query, which can be an empty query (`*`) that returns an arbitrary result set. The page should look similar to the following screenshot. Enter a term and use filters to narrow results.
The underlying index is composed of fictitious, generated data that has been duplicated across documents, and descriptions sometimes don't match the image. You can expect a more cohesive experience when you create an app based on your own indexes.
When you're working in your own subscription, it's a good idea at the end of a p
You can find and manage resources in the portal, using the **All resources** or **Resource groups** link in the left-navigation pane.
-If you're using a free service, remember that it's limited to three indexes, indexers, and data sources. You can delete individual items in the portal to stay under the limit.
+Remember that a free service is limited to three indexes, indexers, and data sources. You can delete individual items in the portal to stay under the limit.
## Next steps
search Search Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-explorer.md
Previously updated : 11/16/2023 Last updated : 01/18/2024 - mode-ui - ignite-2023
Before you begin, have the following prerequisites in place:
:::image type="content" source="media/search-explorer/search-explorer-tab.png" alt-text="Screenshot of the Search explorer tab." border="true":::
-1. To specify syntax and choose an API version, select **JSON view**. The examples in this article assume JSON view throughout.
+1. To specify query parameters and an API version, switch to **JSON view**. The examples in this article assume JSON view throughout. You can paste JSON examples from this article into the text area.
:::image type="content" source="media/search-explorer/search-explorer-json-view.png" alt-text="Screenshot of the JSON view selector." border="true":::
search Search How To Load Search Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-how-to-load-search-index.md
- ignite-2023 Previously updated : 10/21/2022 Last updated : 01/17/2024 # Load data into a search index in Azure AI Search
-This article explains how to import, refresh, and manage content in a predefined search index. In Azure AI Search, a [search index is created first](search-how-to-create-search-index.md), with data import following as a second step. The exception is Import Data wizard, which creates and loads an index in one workflow.
+This article explains how to import, refresh, and manage content in a predefined search index. In Azure AI Search, a [search index is created first](search-how-to-create-search-index.md), with [data import](search-what-is-data-import.md) following as a second step. The exceptions are the Import Data wizard and indexer pipelines, which create and load an index in one workflow.
-A search service imports and indexes text in JSON, used in full text search or knowledge mining scenarios. Text content is obtainable from alphanumeric fields in the external data source, metadata that's useful in search scenarios, or enriched content created by a [skillset](cognitive-search-working-with-skillsets.md) (skills can extract or infer textual descriptions from images and unstructured content).
+A search service imports and indexes text and vectors in JSON, used in full text search, vector search, hybrid search, and knowledge mining scenarios. Text content is obtainable from alphanumeric fields in the external data source, metadata that's useful in search scenarios, or enriched content created by a [skillset](cognitive-search-working-with-skillsets.md) (skills can extract or infer textual descriptions from images and unstructured content). Vector content is vectorized using an [external embedding model](vector-search-how-to-generate-embeddings.md) or [integrated vectorization (preview)](vector-search-integrated-vectorization.md).
Once data is indexed, the physical data structures of the index are locked in. For guidance on what can and can't be changed, see [Drop and rebuild an index](search-howto-reindex.md).
You can prepare these documents yourself, but if content resides in a [supported
### [**Azure portal**](#tab/portal)
-Using Azure portal, the sole means for loading an index is an indexer or running the [Import Data wizard](search-import-data-portal.md). The wizard creates objects. If you want to load an existing index, you'll need to use an alternative approach.
+In the Azure portal, use the Import Data wizards to create and load indexes in a seamless workflow. If you want to load an existing index, choose an alternative approach.
1. Sign in to the [Azure portal](https://portal.azure.com/) with your Azure account.
-1. [Find your search service](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Storage%2storageAccounts/) and on the Overview page, select **Import data** on the command bar to create and populate a search index. You can follow this link to review the workflow: [Quickstart: Create an Azure AI Search index in the Azure portal](search-get-started-portal.md).
+1. [Find your search service](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Search%2FsearchServices) and on the Overview page, select **Import data** or **Import and vectorize data** on the command bar to create and populate a search index. You can follow these links to review the workflow: [Quickstart: Create an Azure AI Search index](search-get-started-portal.md) and [Quickstart: Integrated vectorization (preview)](search-get-started-portal-import-vectors.md).
:::image type="content" source="medi.png" alt-text="Screenshot of the Import data command" border="true":::
-1. Alternatively, you can [reset and run an indexer](search-howto-run-reset-indexers.md), which is useful if you're adding fields incrementally. Reset forces the indexer to start over, picking up all fields from all source documents.
+If indexers are already defined, you can [reset and run an indexer](search-howto-run-reset-indexers.md) from the Azure portal, which is useful if you're adding fields incrementally. Reset forces the indexer to start over, picking up all fields from all source documents.
### [**REST**](#tab/import-rest)
-[Add, Update or Delete Documents (REST)](/rest/api/searchservice/addupdate-or-delete-documents) is the means by which you can import data into a search index. The @search.action parameter determines whether documents are added in full, or partially in terms of new or replacement values for specific fields.
+[Documents - Index (REST)](/rest/api/searchservice/documents) is the means by which you can import data into a search index. The @search.action parameter determines whether documents are added in full, or partially in terms of new or replacement values for specific fields.
[**REST Quickstart: Create, load, and query an index**](search-get-started-rest.md) explains the steps. The following example is a modified version of the example. It's been trimmed for brevity and the first HotelId value has been altered to avoid overwriting an existing document. 1. Formulate a POST call specifying the index name, the "docs/index" endpoint, and a request body that includes the @search.action parameter. ```http
- POST https://[service name].search.windows.net/indexes/hotels-sample-index/docs/index?api-version=2020-06-30
+ POST https://[service name].search.windows.net/indexes/hotels-sample-index/docs/index?api-version=2023-11-01
Content-Type: application/json api-key: [admin key] {
Using Azure portal, the sole means for loading an index is an indexer or running
1. [Look up the documents](/rest/api/searchservice/lookup-document) you just added as a validation step: ```http
- GET https://[service name].search.windows.net/indexes/hotel-sample-index/docs/1111?api-version=2020-06-30
+ GET https://[service name].search.windows.net/indexes/hotel-sample-index/docs/1111?api-version=2023-11-01
```

When the document key or ID is new, **null** becomes the value for any field that is unspecified in the document. For actions on an existing document, updated values replace the previous values. Any fields that weren't specified in a "merge" or "mergeOrUpload" are left intact in the search index.
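A hedged sketch of a partial update that relies on this behavior, reusing the `merge` action and the hotel `1111` document from the earlier example (the new description text is illustrative):

```http
POST https://[service name].search.windows.net/indexes/hotels-sample-index/docs/index?api-version=2023-11-01
Content-Type: application/json
api-key: [admin key]

{
  "value": [
    {
      "@search.action": "merge",
      "HotelId": "1111",
      "Description": "Newly renovated rooms with partial ocean views."
    }
  ]
}
```

Only `Description` is replaced; every other field on document `1111` keeps its existing value.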
When the document key or ID is new, **null** becomes the value for any field tha
Azure AI Search supports the following APIs for simple and bulk document uploads into an index:
-+ [IndexDocumentsAction](/dotnet/api/azure.search.documents.models.indexdocumentsaction)
-+ [IndexDocumentsBatch](/dotnet/api/azure.search.documents.models.indexdocumentsbatch)
++ [IndexDocumentsAsync (Azure SDK for .NET)](/dotnet/api/azure.search.documents.searchclient.indexdocumentsasync)
++ [SearchIndexingBufferedSender](/dotnet/api/azure.search.documents.searchindexingbufferedsender-1)

There are several samples that illustrate indexing in the context of simple and large-scale indexing:
Azure AI Search supports document-level operations so that you can look up, upda
1. [Look up the document](/rest/api/searchservice/lookup-document) to verify the value of the document ID and to review its content before deleting it. Specify the key or document ID in the request. The following examples illustrate a simple string for the [Hotels sample index](search-get-started-portal.md) and a base-64 encoded string for the metadata_storage_path key of the [cog-search-demo index](cognitive-search-tutorial-blob.md). ```http
- GET https://[service name].search.windows.net/indexes/hotel-sample-index/docs/1111?api-version=2020-06-30
+ GET https://[service name].search.windows.net/indexes/hotel-sample-index/docs/1111?api-version=2023-11-01
``` ```http
- GET https://[service name].search.windows.net/indexes/cog-search-demo/docs/aHR0cHM6Ly9oZWlkaWJsb2JzdG9yYWdlMi5ibG9iLmNvcmUud2luZG93cy5uZXQvY29nLXNlYXJjaC1kZW1vL2d1dGhyaWUuanBn0?api-version=2020-06-30
+ GET https://[service name].search.windows.net/indexes/cog-search-demo/docs/aHR0cHM6Ly9oZWlkaWJsb2JzdG9yYWdlMi5ibG9iLmNvcmUud2luZG93cy5uZXQvY29nLXNlYXJjaC1kZW1vL2d1dGhyaWUuanBn0?api-version=2023-11-01
``` 1. [Delete the document](/rest/api/searchservice/addupdate-or-delete-documents) to remove it from the search index. ```http
- POST https://[service name].search.windows.net/indexes/hotels-sample-index/docs/index?api-version=2020-06-30
+ POST https://[service name].search.windows.net/indexes/hotels-sample-index/docs/index?api-version=2023-11-01
Content-Type: application/json api-key: [admin key] {
search Search Howto Index Csv Blobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-index-csv-blobs.md
- ignite-2023 Previously updated : 10/03/2022 Last updated : 01/17/2024 # Index CSV blobs and files using delimitedText parsing mode
Whenever you're creating multiple search documents from a single blob, be sure t
## Setting up CSV indexing
-To index CSV blobs, create or update an indexer definition with the `delimitedText` parsing mode on a [Create Indexer](/rest/api/searchservice/create-indexer) request:
+To index CSV blobs, create or update an indexer definition with the `delimitedText` parsing mode on a [Create Indexer](/rest/api/searchservice/indexers/create) request.
+
+Only UTF-8 encoding is supported.
```http {
To index CSV blobs, create or update an indexer definition with the `delimitedTe
} ```
-`firstLineContainsHeaders` indicates that the first (non-blank) line of each blob contains headers.
+`firstLineContainsHeaders` indicates that the first (nonblank) line of each blob contains headers.
If blobs don't contain an initial header line, the headers should be specified in the indexer configuration: ```http
You can customize the delimiter character using the `delimitedTextDelimiter` con
``` > [!NOTE]
-> Currently, only the UTF-8 encoding is supported. If you need support for other encodings, vote for it on [UserVoice](https://feedback.azure.com/d365community/forum/9325d19e-0225-ec11-b6e6-000d3a4f07b8).
-
-> [!IMPORTANT]
-> When you use the delimited text parsing mode, Azure AI Search assumes that all blobs in your data source will be CSV. If you need to support a mix of CSV and non-CSV blobs in the same data source, please vote for it on [UserVoice](https://feedback.azure.com/d365community/forum/9325d19e-0225-ec11-b6e6-000d3a4f07b8). Otherwise, considering using [file extension filters](search-blob-storage-integration.md#controlling-which-blobs-are-indexed) to control which files are imported on each indexer run.
->
+> In delimited text parsing mode, Azure AI Search assumes that all blobs are CSV. If you have a mix of CSV and non-CSV blobs in the same data source, consider using [file extension filters](search-blob-storage-integration.md#controlling-which-blobs-are-indexed) to control which files are imported on each indexer run.
## Request examples
Putting it all together, here are the complete payload examples.
Datasource: ```http
-POST https://[service name].search.windows.net/datasources?api-version=2020-06-30
+POST https://[service name].search.windows.net/datasources?api-version=2023-11-01
Content-Type: application/json api-key: [admin key]- { "name" : "my-blob-datasource", "type" : "azureblob",
api-key: [admin key]
Indexer: ```http
-POST https://[service name].search.windows.net/indexers?api-version=2020-06-30
+POST https://[service name].search.windows.net/indexers?api-version=2023-11-01
Content-Type: application/json api-key: [admin key]- { "name" : "my-csv-indexer", "dataSourceName" : "my-blob-datasource",
search Search Howto Index Plaintext Blobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-index-plaintext-blobs.md
- ignite-2023- Previously updated : 09/13/2022+ Last updated : 01/18/2024
-# How to index plain text blobs and files in Azure AI Search
+# Index plain text blobs and files in Azure AI Search
**Applies to**: [Blob indexers](search-howto-indexing-azure-blob-storage.md), [File indexers](search-file-storage-integration.md)
-When using an indexer to extract searchable blob text or file content for full text search, you can assign a parsing mode to get better indexing outcomes. By default, the indexer parses the content as a single chunk of text. However, if all blobs and files contain plain text in the same encoding, you can significantly improve indexing performance by using the `text` parsing mode.
+When using an indexer to extract searchable blob text or file content for full text search, you can assign a parsing mode to get better indexing outcomes. By default, the indexer parses a blob's `content` property as a single chunk of text. However, if all blobs and files contain plain text in the same encoding, you can significantly improve indexing performance by using the `text` parsing mode.
-Recommendations for use `text` parsing include:
+The `text` parsing mode is recommended for files with either of the following characteristics:
-+ File type is .txt
-+ Files are of any type, but the content itself is text (for example, program source code, HTML, XML, and so forth). For files in a mark up language, any syntax characters will come through as static text.
++ File type is `.txt`
++ Files are of any type, but the content itself is text (for example, program source code, HTML, XML, and so forth). For files in a markup language, the syntax characters come through as static text.
-Recall that all indexers serialize to JSON. By default, the contents of the entire text file will be indexed within one large field as `"content": "<file-contents>"`. Any new line and return instructions are embedded in the content field and expressed as `\r\n\`.
+Recall that all indexers serialize to JSON. By default, the content of the entire text file is indexed within one large field as `"content": "<file-contents>"`. New line and return instructions are embedded in the content field and expressed as `\r\n`.
-If you want a more granular outcome, and if the file type is compatible, consider the following solutions:
+If you want a more refined or granular outcome, and if the file type is compatible, consider the following solutions:
+ [`delimitedText`](search-howto-index-csv-blobs.md) parsing mode, if the source is CSV + [`jsonArray` or `jsonLines`](search-howto-index-json-blobs.md), if the source is JSON
-A third option for breaking content into multiple parts requires advanced features in the form of [AI enrichment](cognitive-search-concept-intro.md). It adds analysis that identifies and assigns chunks of the file to different search fields. You might find a full or partial solution through [built-in skills](cognitive-search-predefined-skills.md), but a more likely solution would be learning model that understands your content, articulated in custom learning model, wrapped in a [custom skill](cognitive-search-custom-skill-interface.md).
+An alternative third option for breaking content into multiple parts requires advanced features in the form of [AI enrichment](cognitive-search-concept-intro.md). It adds analysis that identifies and assigns chunks of the file to different search fields. You might find a full or partial solution through [built-in skills](cognitive-search-predefined-skills.md) such as entity recognition or keyword extraction, but a more likely solution might be a custom learning model that understands your content, wrapped in a [custom skill](cognitive-search-custom-skill-interface.md).
## Set up plain text indexing
-To index plain text blobs, create or update an indexer definition with the `parsingMode` configuration property to `text` on a [Create Indexer](/rest/api/searchservice/create-indexer) request:
+To index plain text blobs, create or update an indexer definition with the `parsingMode` configuration property set to `text` on a [Create Indexer](/rest/api/searchservice/create-indexer) request:
```http
-PUT https://[service name].search.windows.net/indexers/[indexer name]?api-version=2020-06-30
+PUT https://[service name].search.windows.net/indexers/[indexer name]?api-version=2023-11-01
Content-Type: application/json api-key: [admin key]
By default, the `UTF-8` encoding is assumed. To specify a different encoding, us
Parsing modes are specified in the indexer definition. ```http
-POST https://[service name].search.windows.net/indexers?api-version=2020-06-30
+POST https://[service name].search.windows.net/indexers?api-version=2023-11-01
Content-Type: application/json api-key: [admin key]
search Search Howto Monitor Indexers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-monitor-indexers.md
Title: Monitor indexer status and results
-description: Monitor the status, progress, and results of Azure AI Search indexers in the Azure portal, using the REST API, or the .NET SDK.
+description: Monitor the status, progress, and results of Azure AI Search indexers in the Azure portal, using the REST API, or the Azure SDKs.
- devx-track-dotnet - ignite-2023 Previously updated : 09/15/2022 Last updated : 01/18/2024 # Monitor indexer status and results in Azure AI Search
Metric views can be filtered or split up by a set of predefined dimensions.
| Document processed count | Shows the number of documents processed by indexers. | Data source name, failed, index name, indexer name, skillset name | <br> - Can be referenced as a rough measure of throughput (number of documents processed by indexer over time) <br> - Set up to alert on failed documents |
| Skill execution invocation count | Shows the number of skill invocations. | Data source name, failed, index name, indexer name, skill name, skill type, skillset name | <br> - Reference to ensure skills are invoked as expected by comparing relative invocation numbers between skills and number of skill invocations to the number of documents. <br> - Set up to alert on failed skill invocations |
-The screenshot below shows the number of documents processed by indexers within a service over an hour, split up by indexer name.
+The following screenshot shows the number of documents processed by indexers within a service over an hour, split up by indexer name.
![Indexer documents processed metric](media/search-monitor-indexers/indexers-documents-processed-metric.png "Indexer documents processed metric")
For more information about status codes and indexer monitoring data, see [Get In
## Monitor using .NET
-Using the Azure AI Search .NET SDK, the following C# example writes information about an indexer's status and the results of its most recent (or ongoing) run to the console.
+The following C# example writes information about an indexer's status and the results of its most recent (or ongoing) run to the console.
```csharp static void CheckIndexerStatus(SearchIndexerClient indexerClient, SearchIndexer indexer)
search Search Howto Schedule Indexers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-schedule-indexers.md
- ignite-2023 Previously updated : 12/06/2022 Last updated : 01/17/2024 # Schedule an indexer in Azure AI Search
-Indexers can be configured to run on a schedule when you set the "schedule" property. Some situations where indexer scheduling is useful include:
+Indexers can be configured to run on a schedule when you set the `schedule` property. Some situations where indexer scheduling is useful include:
+ Source data is changing over time, and you want the indexer to automatically process the difference. + Source data is very large, and you need a recurring schedule to index all of the content.
Indexers can be configured to run on a schedule when you set the "schedule" prop
When indexing can't complete within the [typical 2-hour processing window](search-howto-run-reset-indexers.md#indexer-execution), you can schedule the indexer to run on a 2-hour cadence to work through a large volume of data. As long as your data source supports [change detection logic](search-howto-create-indexers.md#change-detection-and-internal-state), indexers can automatically pick up where they left off on each run.
-Once an indexer is on a schedule, it remains on the schedule until you clear the interval or start time, or set "disabled" to true. Leaving the indexer on a schedule when there's nothing to process won't impact system performance. Checking for changed content is a relatively fast operation.
+Once an indexer is on a schedule, it remains on the schedule until you clear the interval or start time, or set `disabled` to true. Leaving the indexer on a schedule when there's nothing to process won't impact system performance. Checking for changed content is a relatively fast operation.
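As a hedged sketch of taking an indexer off its schedule (the indexer name is a placeholder, and in practice you send the complete indexer definition with `disabled` set to true):

```http
PUT https://[service name].search.windows.net/indexers/my-indexer?api-version=2023-11-01
Content-Type: application/json
api-key: [admin key]

{
  "name": "my-indexer",
  "dataSourceName": "hotels-ds",
  "targetIndexName": "hotels-idx",
  "disabled": true
}
```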
## Prerequisites
Once an indexer is on a schedule, it remains on the schedule until you clear the
## Schedule definition
-A schedule is part of the indexer definition. If the "schedule" property is omitted, the indexer will only run on demand. The property has two parts.
+A schedule is part of the indexer definition. If the `schedule` property is omitted, the indexer will only run on demand. The property has two parts.
| Property | Description |
|-|-|
The following example is a schedule that starts on January 1 at midnight and run
{ "dataSourceName" : "hotels-ds", "targetIndexName" : "hotels-idx",
- "schedule" : { "interval" : "PT2H", "startTime" : "2022-01-01T00:00:00Z" }
+ "schedule" : { "interval" : "PT2H", "startTime" : "2024-01-01T00:00:00Z" }
} ```
Schedules are specified in an indexer definition. To set up a schedule, you can
### [**Azure portal**](#tab/portal) 1. Sign in to the [Azure portal](https://portal.azure.com) and open the search service page.
-1. On the **Overview** page, select the **Indexers** tab.
-1. Select an indexer.
+1. On the left navigation pane, select **Indexers**.
+1. Open an indexer.
1. Select **Settings**. 1. Scroll down to **Schedule**, and then choose Hourly, Daily, or Custom to set a specific date, time, or custom interval.
+Switch to the **Indexer Definition (JSON)** tab at the top of the indexer page to view the schedule definition. The `interval` value uses the XSD dayTimeDuration format (for example, `PT2H` for every two hours).
+ ### [**REST**](#tab/rest)
-1. Call [Create Indexer](/rest/api/searchservice/create-indexer) or [Update Indexer](/rest/api/searchservice/update-indexer).
+1. Call [Create Indexer](/rest/api/searchservice/indexers/create) or [Create or Update Indexer](/rest/api/searchservice/indexers/create-or-update).
1. Set the schedule property in the body of the request: ```http
- PUT /indexers/<indexer-name>?api-version=2020-06-30
+ PUT /indexers/<indexer-name>?api-version=2023-11-01
{ "dataSourceName" : "myazuresqldatasource", "targetIndexName" : "my-target-index-name",
- "schedule" : { "interval" : "PT10M", "startTime" : "2021-01-01T00:00:00Z" }
+ "schedule" : { "interval" : "PT10M", "startTime" : "2024-01-01T00:00:00Z" }
} ```
await indexerClient.CreateOrUpdateIndexerAsync(indexer);
## Scheduling behavior
-For text-based indexing, the scheduler can kick off as many indexer jobs as the search service supports, which is determined by the number of search units. For example, if the service has three replicas and four partitions, you can generally have 12 indexer jobs in active execution, whether initiated on demand or on a schedule.
+For text-based indexing, the scheduler can kick off as many indexer jobs as the search service supports, which is determined by the number of search units. For example, if the service has three replicas and four partitions, you can have 12 indexer jobs in active execution, whether initiated on demand or on a schedule.
Skills-based indexers run in a different [execution environment](search-howto-run-reset-indexers.md#indexer-execution). For this reason, the number of search units has no bearing on the number of skills-based indexer jobs you can run. Multiple skills-based indexers can run in parallel, but doing so depends on node availability within the execution environment. Although multiple indexers can run simultaneously, a given indexer is single-instance. You can't run two copies of the same indexer concurrently. If an indexer happens to still be running when its next scheduled execution is set to start, the pending execution is postponed until the next scheduled occurrence, allowing the current job to finish.
-LetΓÇÖs consider an example to make this more concrete. Suppose we configure an indexer schedule with an interval of hourly and a start time of June 1, 2022 at 8:00:00 AM UTC. Here's what could happen when an indexer run takes longer than an hour:
+Let's consider an example to make this more concrete. Suppose we configure an indexer schedule with an interval of hourly and a start time of January 1, 2024 at 8:00:00 AM UTC. Here's what could happen when an indexer run takes longer than an hour:
-+ The first indexer execution starts at or around June 1, 2022 at 8:00 AM UTC. Assume this execution takes 20 minutes (or any amount of time that's less than 1 hour).
++ The first indexer execution starts at or around January 1, 2024 at 8:00 AM UTC. Assume this execution takes 20 minutes (or any amount of time that's less than 1 hour).
-+ The second execution starts at or around June 1, 2022 9:00 AM UTC. Suppose that this execution takes 70 minutes - more than an hour ΓÇô and it will not complete until 10:10 AM UTC.
++ The second execution starts at or around January 1, 2024 9:00 AM UTC. Suppose that this execution takes 70 minutes, more than an hour, and it won't complete until 10:10 AM UTC.
+ The third execution is scheduled to start at 10:00 AM UTC, but at that time the previous execution is still running. This scheduled execution is then skipped. The next execution of the indexer won't start until 11:00 AM UTC.
search Search Indexer Field Mappings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-indexer-field-mappings.md
- ignite-2023 Previously updated : 09/14/2022 Last updated : 01/17/2024 # Field mappings and transformations using Azure AI Search indexers ![Indexer Stages](./media/search-indexer-field-mappings/indexer-stages-field-mappings.png "indexer stages")
-When an [Azure AI Search indexer](search-indexer-overview.md) loads a search index, it determines the data path through source-to-destination field mappings. Implicit field mappings are internal and occur when field names and data types are compatible between the source and destination.
+When an [Azure AI Search indexer](search-indexer-overview.md) loads a search index, it determines the data path using source-to-destination field mappings. Implicit field mappings are internal and occur when field names and data types are compatible between the source and destination. If inputs and outputs don't match, you can define explicit *field mappings* to set up the data path, as described in this article.
-If inputs and outputs don't match, you can define explicit *field mappings* to set up the data path, as described in this article. Field mappings can also be used to introduce light-weight data conversion, such as encoding or decoding, through [mapping functions](#mappingFunctions). If more processing is required, consider [Azure Data Factory](../data-factory/index.yml) to bridge the gap.
+Field mappings can also be used for light-weight data conversions, such as encoding or decoding, through [mapping functions](#mappingFunctions). If more processing is required, consider [Azure Data Factory](../data-factory/index.yml) to bridge the gap.
Field mappings apply to:
-+ Physical data structures on both sides of the data stream (between a [supported data source](search-indexer-overview.md#supported-data-sources) and a [search index](search-what-is-an-index.md)). If you're importing skill-enriched content that resides in memory, use [outputFieldMappings](cognitive-search-output-field-mapping.md) instead.
++ Physical data structures on both sides of the data stream (between fields in a [supported data source](search-indexer-overview.md#supported-data-sources) and fields in a [search index](search-what-is-an-index.md)). If you're importing skill-enriched content that resides in memory, use [outputFieldMappings](cognitive-search-output-field-mapping.md) to map in-memory nodes to output fields in a search index.
+ Search indexes only. If you're populating a [knowledge store](knowledge-store-concept-intro.md), use [projections](knowledge-store-projections-examples.md) for data path configuration.
-+ Top-level search fields only, where the "targetFieldName" is either a simple field or a collection. A target field can't be a complex type.
++ Top-level search fields only, where the `targetFieldName` is either a simple field or a collection. A target field can't be a complex type.

> [!NOTE]
> If you're working with complex data (nested or hierarchical structures), and you'd like to mirror that data structure in your search index, your search index must match the source structure exactly (same field names, levels, and types) so that the default mappings will work. Optionally, you might want just a few nodes in the complex structure. To get individual nodes, you can flatten incoming data into a string collection (see [outputFieldMappings](cognitive-search-output-field-mapping.md#flatten-complex-structures-into-a-string-collection) for this workaround).
Field mappings apply to:
## Define a field mapping
-Field mappings are added to the "fieldMappings" array of an indexer definition. A field mapping consists of three parts.
+Field mappings are added to the `fieldMappings` array of an indexer definition. A field mapping consists of three parts.
```json "fieldMappings": [
Field mappings are added to the "fieldMappings" array of an indexer definition.
"targetFieldName": "city", "mappingFunction": null }
-],
+]
```

| Property | Description |
|-|-|
| sourceFieldName | Required. Represents a field in your data source. |
-| targetFieldName | Optional. Represents a field in your search index. If omitted, the value of "sourceFieldName" is assumed for the target. Target fields must be top-level simple fields or collections. It can't be a complex type or collection. If you're handling a data type issue, a field's data type is specified in the index definition. The field mapping just needs to have the field's name.|
+| targetFieldName | Optional. Represents a field in your search index. If omitted, the value of `sourceFieldName` is assumed for the target. Target fields must be top-level simple fields or collections. It can't be a complex type or collection. If you're handling a data type issue, a field's data type is specified in the index definition. The field mapping just needs to have the field's name.|
| mappingFunction | Optional. Consists of [predefined functions](#mappingFunctions) that transform data. |

Azure AI Search uses case-insensitive comparison to resolve the field and function names in field mappings. This is convenient (you don't have to get all the casing right), but it means that your data source or index can't have fields that differ only by case.
This example maps a single source field to multiple target fields ("one-to-many"
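A one-to-many mapping repeats the source field across multiple entries in the `fieldMappings` array. The following is a minimal sketch of that pattern; the field names are illustrative rather than taken from the sample data.

```json
"fieldMappings": [
  { "sourceFieldName": "metadata_storage_name", "targetFieldName": "title" },
  { "sourceFieldName": "metadata_storage_name", "targetFieldName": "displayName" }
]
```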
### [**.NET SDK (C#)**](#tab/csharp)
-In the Azure SDK for .NET, use the [FieldMapping](/dotnet/api/azure.search.documents.indexes.models.fieldmapping) class that provides "SourceFieldName" and "TargetFieldName" properties and an optional "MappingFunction" reference.
+In the Azure SDK for .NET, use the [FieldMapping](/dotnet/api/azure.search.documents.indexes.models.fieldmapping) class that provides `SourceFieldName` and `TargetFieldName` properties and an optional `MappingFunction` reference.
Specify field mappings when constructing the indexer, or later by directly setting [SearchIndexer.FieldMappings](/dotnet/api/azure.search.documents.indexes.models.searchindexer.fieldmappings). The following C# example sets the field mappings when constructing an indexer.
Performs *URL-safe* Base64 encoding of the input string. Assumes that the input
Only URL-safe characters can appear in an Azure AI Search document key (so that you can address the document using the [Lookup API](/rest/api/searchservice/lookup-document)). If the source field for your key contains URL-unsafe characters, such as `-` and `\`, use the `base64Encode` function to convert it at indexing time.
-The following example specifies the base64Encode function on "metadata_storage_name" to handle unsupported characters.
+The following example specifies the base64Encode function on `metadata_storage_name` to handle unsupported characters.
```http
PUT /indexers?api-version=2020-06-30
A document key (both before and after conversion) can't be longer than 1,024 cha
#### Example: Make a base-encoded field "searchable"
-There are times when you need to use an encoded version of a field like "metadata_storage_path" as the key, but also need an unencoded version for full text search. To support both scenarios, you can map "metadata_storage_path" to two fields: one for the key (encoded), and a second for a path field that we can assume is attributed as "searchable" in the index schema.
+There are times when you need to use an encoded version of a field like `metadata_storage_path` as the key, but also need an unencoded version for full text search. To support both scenarios, you can map `metadata_storage_path` to two fields: one for the key (encoded), and a second for a path field that we can assume is attributed as `searchable` in the index schema.
```http
PUT /indexers/blob-indexer?api-version=2020-06-30
Your source data might contain Base64-encoded strings, such as blob metadata str
"name" : "base64Decode", "parameters" : { "useHttpServerUtilityUrlTokenDecode" : false } }
- }]
+ }
+]
```

If you don't include a parameters property, it defaults to the value `{"useHttpServerUtilityUrlTokenEncode" : true}`.
When you retrieve the encoded key at search time, you can then use the `urlDecod
"mappingFunction" : { "name" : "urlEncode" }
- }]
+ }
+]
```

<a name="urlDecodeFunction"></a>
Some Azure storage clients automatically URL-encode blob metadata if it contains
"mappingFunction" : { "name" : "urlDecode" }
- }]
+ }
+]
```
-
+
<a name="fixedLengthEncodeFunction"></a>

### fixedLengthEncode function
When errors occur that are related to document key length exceeding 1024 charact
"mappingFunction" : { "name" : "fixedLengthEncode" }
- }]
+ }
+]
```

## See also
search Search Indexer Howto Access Trusted Service Exception https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-indexer-howto-access-trusted-service-exception.md
Title: Connect as trusted service
-description: Enable data access by an indexer in Azure AI Search to data stored securely in Azure Storage.
+description: Enable secure data access to Azure Storage from an indexer in Azure AI Search.
- ignite-2023 Previously updated : 12/08/2022 Last updated : 01/18/2024

# Make indexer connections to Azure Storage as a trusted service
-In Azure AI Search, indexers that access Azure blobs can use the [trusted service exception](../storage/common/storage-network-security.md#exceptions) to securely access data. This mechanism offers customers who are unable to grant [indexer access using IP firewall rules](search-indexer-howto-access-ip-restricted.md) a simple, secure, and free alternative for accessing data in storage accounts.
+In Azure AI Search, indexers that access Azure blobs can use the [trusted service exception](../storage/common/storage-network-security.md#exceptions) to securely access blobs. This mechanism offers customers who are unable to grant [indexer access using IP firewall rules](search-indexer-howto-access-ip-restricted.md) a simple, secure, and free alternative for accessing data in storage accounts.
> [!NOTE]
> If Azure Storage is behind a firewall and in the same region as Azure AI Search, you won't be able to create an inbound rule that admits requests from your search service. The solution for this scenario is for search to connect as a trusted service, as described in this article.

## Prerequisites
-+ A search service with a system-assigned managed identity ([see below](#check-service-identity)).
++ A search service with a system-assigned managed identity (see [Check service identity](#check-service-identity)).
-+ A storage account with the **Allow trusted Microsoft services to access this storage account** network option ([see below](#check-network-settings)).
++ A storage account with the **Allow trusted Microsoft services to access this storage account** network option (see [Check network settings](#check-network-settings)).
-+ An Azure role assignment in Azure Storage that grants permissions to the search service system-assigned managed identity ([see below](#check-permissions)).
++ An Azure role assignment in Azure Storage that grants permissions to the search service system-assigned managed identity (see [Check permissions](#check-permissions)).

> [!NOTE]
> In Azure AI Search, a trusted service connection is limited to blobs and ADLS Gen2 on Azure Storage. It's unsupported for indexer connections to Azure Table Storage and Azure File Storage.
In Azure AI Search, indexers that access Azure blobs can use the [trusted servic
1. Make sure the checkbox is selected for **Allow Azure services on the trusted services list to access this storage account**.
- This option will only permit the specific search service instance with appropriate role-based access to the storage account (strong authentication) to access data in the storage account, even if it's secured by IP firewall rules.
+ Assuming your search service has role-based access to the storage account, it can access data even when connections to Azure Storage are secured by IP firewall rules.
## Check permissions
-A system managed identity is a Microsoft Entra login. The assignment needs **Storage Blob Data Reader** at a minimum.
+A system managed identity is a Microsoft Entra service principal. The assignment needs **Storage Blob Data Reader** at a minimum.
1. In the left navigation pane under **Access Control**, view all role assignments and make sure that **Storage Blob Data Reader** is assigned to the search service system identity.
search Search Indexer Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-indexer-tutorial.md
Previously updated : 10/04/2022 Last updated : 01/18/2024 - devx-track-csharp - devx-track-dotnet
Configure an [indexer](search-indexer-overview.md) to extract searchable data from Azure SQL Database, sending it to a search index in Azure AI Search.
-This tutorial uses C# and the [.NET SDK](/dotnet/api/overview/azure/search) to perform the following tasks:
+This tutorial uses C# and the [Azure SDK for .NET](/dotnet/api/overview/azure/search) to perform the following tasks:
> [!div class="checklist"] > * Create a data source that connects to Azure SQL Database
If you don't have an Azure subscription, create a [free account](https://azure.m
## Prerequisites
-* [Azure SQL Database](https://azure.microsoft.com/services/sql-database/)
+* [Azure SQL Database](https://azure.microsoft.com/services/sql-database/) using SQL Server authentication
* [Visual Studio](https://visualstudio.microsoft.com/downloads/)
* [Create](search-create-service-portal.md) or [find an existing search service](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Search%2FsearchServices)
This tutorial uses Azure AI Search for indexing and queries, and Azure SQL Datab
### Start with Azure SQL Database
-In this step, create an external data source on Azure SQL Database that an indexer can crawl. You can use the Azure portal and the *hotels.sql* file from the sample download to create the dataset in Azure SQL Database. Azure AI Search consumes flattened rowsets, such as one generated from a view or query. The SQL file in the sample solution creates and populates a single table.
+This tutorial provides a *hotels.sql* file in the sample download to populate the database. Azure AI Search consumes flattened rowsets, such as one generated from a view or query. The SQL file in the sample solution creates and populates a single table.
-If you have an existing Azure SQL Database resource, you can add the hotels table to it, starting at step 4.
+If you have an existing Azure SQL Database resource, you can add the hotels table to it, starting at the **Open query** step.
-1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Create an Azure SQL database, using the instructions in [Quickstart: Create a single database](/azure/azure-sql/database/single-database-create-quickstart).
-1. Find or create a **SQL Database**. You can use defaults and the lowest level pricing tier. One advantage to creating a server is that you can specify an administrator user name and password, necessary for creating and loading tables in a later step.
+ Server configuration for the database is important.
- :::image type="content" source="media/search-indexer-tutorial/indexer-new-sqldb.png" alt-text="Screenshot of the Create SQL Database page in Azure portal." border="true":::
+ * Choose the SQL Server authentication option that prompts you to specify a username and password. You need this for the ADO.NET connection string used by the indexer.
-1. Select **Review + create** to deploy the new server and database. Wait for the server and database to deploy. Go to the resource.
+ * Choose a public connection. It makes this tutorial easier to complete. Public isn't recommended for production and we recommend [deleting this resource](#clean-up-resources) at the end of the tutorial.
-1. On the navigate pane, select **Getting started** and then select **Configure** to allow access.
+ :::image type="content" source="media/search-indexer-tutorial/sql-server-config.png" alt-text="Screenshot of server configuration.":::
-1. Under Public access, click **Selected networks**.
+1. In the Azure portal, go to the new resource.
-1. Under Firewall rules, add your client IPv4 address. This is the portal client.
+1. Add a firewall rule to allow access from your client, using the instructions in [Quickstart: Create a server-level firewall rule in Azure portal](/azure/azure-sql/database/firewall-create-server-level-portal-quickstart). You can run `ipconfig` from a command prompt to get your IP address.
-1. Under Exception, select **Allow Azure services and resources to access this server**.
+1. Use the Query editor to load the sample data. On the navigation pane, select **Query editor (preview)** and enter the user name and password of server admin.
-1. Save your changes and then close the Networking page.
-
-1. On the navigation pane, select **Query editor (preview)** and enter the user name and password of server admin.
-
- You'll probably get an access denied error. Copy the client IP address from the error message. Return to the firewall rules page to add a rule that allows access from your client.
+ If you get an access denied error, copy the client IP address from the error message, open the network security page for the server, and add an inbound rule that allows access from your client.
1. In Query editor, select **Open query** and navigate to the location of *hotels.sql* file on your local computer.
try
{
    await indexerClient.RunIndexerAsync(indexer.Name);
}
-catch (CloudException e) when (e.Response.StatusCode == (HttpStatusCode)429)
+catch (RequestFailedException ex) when (ex.Status == 429)
{
- Console.WriteLine("Failed to run indexer: {0}", e.Response.Content);
+ Console.WriteLine("Failed to run indexer: {0}", ex.Message);
}
```
Your code runs locally in Visual Studio, connecting to your search service on Az
Use Azure portal to verify object creation, and then use **Search explorer** to query the index.
-1. Sign in to the [Azure portal](https://portal.azure.com), and in your search service **Overview** page, open each list in turn to verify the object is created. **Indexes**, **Indexers**, and **Data Sources** will have "hotels", "azure-sql-indexer", and "azure-sql", respectively.
-
- :::image type="content" source="media/search-indexer-tutorial/tiles-portal.png" alt-text="Screenshot of the indexer and data source tiles in the Azure portal search service page." border="true":::
+1. Sign in to the [Azure portal](https://portal.azure.com), and in your search service left navigation pane, open each page in turn to verify the object is created. **Indexes**, **Indexers**, and **Data Sources** will have "hotels-sql-idx", "hotels-sql-indexer", and "hotels-sql-ds", respectively.
-1. On the Indexes tab, select the hotels index. On the hotels page, **Search explorer** is the first tab.
+1. On the Indexes tab, select the hotels-sql-idx index. On the hotels page, **Search explorer** is the first tab.
1. Select **Search** to issue an empty query.
Use Azure portal to verify object creation, and then use **Search explorer** to
:::image type="content" source="media/search-indexer-tutorial/portal-search.png" alt-text="Screenshot of a Search Explorer query for the target index." border="true":::
-1. Next, enter a search string: `search=river&$count=true`.
+1. Next, [switch to **JSON View**](search-explorer.md#start-search-explorer) so that you can enter query parameters:
+
+ ```json
+ {
+ "search": "river",
+ "count": true
+ }
+ ```
This query invokes full text search on the term `river`, and the result includes a count of the matching documents. Returning the count of matching documents is helpful in testing scenarios when you have a large index with thousands or millions of documents. In this case, only one document matches the query.
-1. Lastly, enter a search string that limits the JSON output to fields of interest: `search=river&$count=true&$select=hotelId, baseRate, description`.
+1. Lastly, enter parameters that limit search results to fields of interest:
+
+ ```json
+ {
+ "search": "river",
+ "select": "hotelId, hotelName, baseRate, description",
+ "count": true
+ }
+ ```
The query response is reduced to selected fields, resulting in more concise output.
search Search Modeling Multitenant Saas Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-modeling-multitenant-saas-applications.md
- ignite-2023 Previously updated : 09/15/2022 Last updated : 01/18/2024

# Design patterns for multitenant SaaS applications and Azure AI Search
Multitenant applications must effectively distribute resources among the tenants
+ *Ease of Operations:* When developing a multitenant architecture, the impact on the application's operations and complexity is an important consideration. Azure AI Search has a [99.9% SLA](https://azure.microsoft.com/support/legal/sla/search/v1_0/).
-+ *Global footprint:* Multitenant applications may need to effectively serve tenants, which are distributed across the globe.
++ *Global footprint:* Multitenant applications often need to serve tenants who are distributed across the globe.
+ *Scalability:* Application developers need to consider how to balance keeping application complexity sufficiently low against designing the application to scale with the number of tenants and the size of tenants' data and workload.
A key attribute of the index-per-tenant model is the ability for the application
The index-per-tenant model provides the basis for a variable cost model, where an entire Azure AI Search service is bought up-front and then subsequently filled with tenants. This allows for unused capacity to be designated for trials and free accounts.
-For applications with a global footprint, the index-per-tenant model may not be the most efficient. If an application's tenants are distributed across the globe, a separate service may be necessary for each region, which may duplicate costs across each of them.
+For applications with a global footprint, the index-per-tenant model might not be the most efficient. If an application's tenants are distributed across the globe, a separate service can be necessary for each region, duplicating costs across each of them.
Azure AI Search allows for the scale of both the individual indexes and the total number of indexes to grow. If an appropriate pricing tier is chosen, partitions and replicas can be added to the entire search service when an individual index within the service grows too large in terms of storage or traffic.
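Adding capacity of this kind is done by increasing the service's replica or partition counts, for example through the Search Management REST API or an ARM template. The following is a hedged sketch of such a request body; the values are illustrative and the surrounding request shape depends on the management API version you use.

```json
{
  "location": "westus2",
  "sku": { "name": "standard" },
  "properties": {
    "replicaCount": 3,
    "partitionCount": 2
  }
}
```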
search Search Query Lucene Examples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-query-lucene-examples.md
Title: Use full Lucene query syntax
+ Title: Examples of full Lucene query syntax
description: Query examples demonstrating the Lucene query syntax for fuzzy search, proximity search, term boosting, regular expression search, and wildcard searches in an Azure AI Search index.
- ignite-2023 Previously updated : 08/15/2022 Last updated : 01/17/2024
-# Use the "full" Lucene search syntax (advanced queries in Azure AI Search)
+# Examples of "full" Lucene search syntax (advanced queries in Azure AI Search)
When constructing queries for Azure AI Search, you can replace the default [simple query parser](query-simple-syntax.md) with the more powerful [Lucene query parser](query-lucene-syntax.md) to formulate specialized and advanced query expressions.
The Lucene parser supports complex query formats, such as field-scoped queries,
The following queries are based on the hotels-sample-index, which you can create by following the instructions in this [quickstart](search-get-started-portal.md).
-Example queries are articulated using the REST API and POST requests. You can paste and run them in [Postman](search-get-started-rest.md) or another web client.
+Example queries are articulated using the REST API and POST requests. You can paste and run them in [Postman](search-get-started-rest.md) or another web client. Or, use the JSON view of [Search Explorer](search-explorer.md) in the Azure portal. In JSON view, you can paste in the query examples shown in this article.
Request headers must have the following values:
Request headers must have the following values:
URI parameters must include your search service endpoint with the index name, docs collections, search command, and API version, similar to the following example:

```http
-https://{{service-name}}.search.windows.net/indexes/hotels-sample-index/docs/search?api-version=2020-06-30
+https://{{service-name}}.search.windows.net/indexes/hotels-sample-index/docs/search?api-version=2023-11-01
```

Request body should be formed as valid JSON:
Request body should be formed as valid JSON:
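For example, a minimal request body for this index might look like the following sketch; the parameter values are illustrative.

```json
{
  "search": "*",
  "queryType": "full",
  "select": "HotelId, HotelName",
  "count": true
}
```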
Fielded search scopes individual, embedded search expressions to a specific field. This example searches for hotel names with the term "hotel" in them, but not "motel". You can specify multiple fields using AND.
-When you use this query syntax, you can omit the "searchFields" parameter when the fields you want to query are in the search expression itself. If you include "searchFields" with fielded search, the `fieldName:searchExpression` always takes precedence over "searchFields".
+When you use this query syntax, you can omit the `searchFields` parameter when the fields you want to query are in the search expression itself. If you include `searchFields` with fielded search, the `fieldName:searchExpression` always takes precedence over `searchFields`.
```http
-POST /indexes/hotels-sample-index/docs/search?api-version=2020-06-30
+POST /indexes/hotels-sample-index/docs/search?api-version=2023-11-01
{ "search": "HotelName:(hotel NOT motel) AND Category:'Resort and Spa'", "queryType": "full",
The field specified in `fieldName:searchExpression` must be a searchable field.
Fuzzy search matches on terms that are similar, including misspelled words. To do a fuzzy search, append the tilde `~` symbol at the end of a single word with an optional parameter, a value between 0 and 2, that specifies the edit distance. For example, `blue~` or `blue~1` would return blue, blues, and glue. ```http
-POST /indexes/hotels-sample-index/docs/search?api-version=2020-06-30
+POST /indexes/hotels-sample-index/docs/search?api-version=2023-11-01
{ "search": "Tags:conserge~", "queryType": "full",
Proximity search finds terms that are near each other in a document. Insert a ti
This query searches for the terms "hotel" and "airport" within 5 words of each other in a document. The quotation marks are escaped (`\"`) to preserve the phrase: ```http
-POST /indexes/hotels-sample-index/docs/search?api-version=2020-06-30
+POST /indexes/hotels-sample-index/docs/search?api-version=2023-11-01
{ "search": "Description: \"hotel airport\"~5", "queryType": "full",
Term boosting refers to ranking a document higher if it contains the boosted ter
In this "before" query, search for "beach access" and notice that there are seven documents that match on one or both terms. ```http
-POST /indexes/hotels-sample-index/docs/search?api-version=2020-06-30
+POST /indexes/hotels-sample-index/docs/search?api-version=2023-11-01
{ "search": "beach access", "queryType": "full",
After boosting the term "beach", the match on Old Carrabelle Hotel moves down to
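In full Lucene syntax, boosting uses the caret (`^`) operator on the term to boost. A sketch of the boosted query might look like the following; the boost factor and selected fields are illustrative.

```json
{
  "search": "beach^2 access",
  "queryType": "full",
  "select": "HotelId, HotelName",
  "count": true
}
```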
A regular expression search finds a match based on the contents between forward slashes "/", as documented in the [RegExp class](https://lucene.apache.org/core/6_6_1/core/org/apache/lucene/util/automaton/RegExp.html). ```http
-POST /indexes/hotels-sample-index/docs/search?api-version=2020-06-30
+POST /indexes/hotels-sample-index/docs/search?api-version=2023-11-01
{ "search": "HotelName:/(Mo|Ho)tel/", "queryType": "full",
You can use generally recognized syntax for multiple (`*`) or single (`?`) chara
In this query, search for hotel names that contain the prefix 'sc'. You can't use a `*` or `?` symbol as the first character of a search. ```http
-POST /indexes/hotels-sample-index/docs/search?api-version=2020-06-30
+POST /indexes/hotels-sample-index/docs/search?api-version=2023-11-01
{ "search": "HotelName:sc*", "queryType": "full",
search Search Query Simple Examples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-query-simple-examples.md
Title: Use simple Lucene query syntax
+ Title: Examples of simple syntax
description: Query examples demonstrating the simple syntax for full text search, filter search, and geo search against an Azure AI Search index.
- ignite-2023 Previously updated : 08/15/2022 Last updated : 01/17/2024
-# Use the "simple" search syntax in Azure AI Search
+# Examples of "simple" search queries in Azure AI Search
-In Azure AI Search, the [simple query syntax](query-simple-syntax.md) invokes the default query parser for full text search. The parser is fast and handles common scenarios, including full text search, filtered and faceted search, and prefix search. This article uses examples to illustrate simple syntax usage in a [Search Documents (REST API)](/rest/api/searchservice/search-documents) request.
+In Azure AI Search, the [simple query syntax](query-simple-syntax.md) invokes the default query parser for full text search. The parser is fast and handles common scenarios, including full text search, filtered and faceted search, and prefix search. This article uses examples to illustrate simple syntax usage in a [Search Documents (REST API)](/rest/api/searchservice/documents/search-post) request.
> [!NOTE] > An alternative query syntax is [Full Lucene](query-lucene-syntax.md), supporting more complex query structures, such as fuzzy and wildcard search. For more information and examples, see [Use the full Lucene syntax](search-query-lucene-examples.md).
In Azure AI Search, the [simple query syntax](query-simple-syntax.md) invokes th
The following queries are based on the hotels-sample-index, which you can create by following the instructions in this [quickstart](search-get-started-portal.md).
-Example queries are articulated using the REST API and POST requests. You can paste and run them in [Postman](search-get-started-rest.md) or another web client.
+Example queries are articulated using the REST API and POST requests. You can paste and run them in [Postman](search-get-started-rest.md) or another web client. Or, use the JSON view of [Search Explorer](search-explorer.md) in the Azure portal. In JSON view, you can paste in the query examples shown in this article.
Request headers must have the following values:
Request headers must have the following values:
URI parameters must include your search service endpoint with the index name, docs collections, search command, and API version, similar to the following example:

```http
-https://{{service-name}}.search.windows.net/indexes/hotels-sample-index/docs/search?api-version=2020-06-30
+https://{{service-name}}.search.windows.net/indexes/hotels-sample-index/docs/search?api-version=2023-11-01
```

Request body should be formed as valid JSON:
Request body should be formed as valid JSON:
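As a sketch, a request body for the hotels-sample-index can be as simple as the following; the parameter values are illustrative.

```json
{
  "search": "*",
  "queryType": "simple",
  "select": "HotelId, HotelName",
  "count": true
}
```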
Full text search can be any number of standalone terms or quote-enclosed phrases, with or without boolean operators. ```http
-POST /indexes/hotels-sample-index/docs/search?api-version=2020-06-30
+POST /indexes/hotels-sample-index/docs/search?api-version=2023-11-01
{ "search": "pool spa +airport", "searchMode": "any",
Response for the "pool spa +airport" query should look similar to the following
"24-hour front desk service" ] }
+]
```

Notice the search score in the response. This is the relevance score of the match. By default, a search service returns the top 50 matches based on this score.
Uniform scores of "1.0" occur when there's no rank, either because the search wa
## Example 2: Look up by ID
-When you return search results in a query, a logical next step is to provide a details page that includes more fields from the document. This example shows you how to return a single document using [Lookup Document](/rest/api/searchservice/lookup-document) by passing in the document ID.
+When returning search results in a query, a logical next step is to provide a details page that includes more fields from the document. This example shows you how to return a single document using [Lookup Document](/rest/api/searchservice/documents/get) by passing in the document ID.
```http
-GET /indexes/hotels-sample-index/docs/41?api-version=2020-06-30
+GET /indexes/hotels-sample-index/docs/41?api-version=2023-11-01
```
-All documents have a unique identifier. If you're using the portal, select the index from the **Indexes** tab and then look at the field definitions to determine which field is the key. Using REST, the [Get Index](/rest/api/searchservice/get-index) call returns the index definition in the response body.
+All documents have a unique identifier. If you're using the portal, select the index from the **Indexes** tab and then look at the field definitions to determine which field is the key. Using REST, the [Get Index](/rest/api/searchservice/indexes/get) call returns the index definition in the response body.
Response for the above query consists of the document whose key is 41. Any field that is marked as "retrievable" in the index definition can be returned in search results and rendered in your app.
Response for the above query consists of the document whose key is 41. Any field
"StateProvince": "HI", "PostalCode": "96814", "Country": "USA"
- },
+ }
+}
```

## Example 3: Filter on text
-[Filter syntax](search-query-odata-filter.md) is an OData expression that you can use by itself or with "search". Used together, "filter" is applied first to the entire index, and then the search is performed on the results of the filter. Filters can therefore be a useful technique to improve query performance since they reduce the set of documents that the search query needs to process.
+[Filter syntax](search-query-odata-filter.md) is an OData expression that you can use by itself or with `search`. Used together, `filter` is applied first to the entire index, and then the search is performed on the results of the filter. Filters can therefore be a useful technique to improve query performance since they reduce the set of documents that the search query needs to process.
-Filters can be defined on any field marked as "filterable" in the index definition. For hotels-sample-index, filterable fields include Category, Tags, ParkingIncluded, Rating, and most Address fields.
+Filters can be defined on any field marked as `filterable` in the index definition. For hotels-sample-index, filterable fields include Category, Tags, ParkingIncluded, Rating, and most Address fields.
```http
-POST /indexes/hotels-sample-index/docs/search?api-version=2020-06-30
+POST /indexes/hotels-sample-index/docs/search?api-version=2023-11-01
{ "search": "art tours", "queryType": "simple", "filter": "Category eq 'Resort and Spa'",
+ "searchFields": "HotelName,Description,Category",
"select": "HotelId,HotelName,Description,Category", "count": true }
Response for the above query is scoped to only those hotels categorized as "Repo
Filter expressions can include ["search.ismatch" and "search.ismatchscoring" functions](search-query-odata-full-text-search-functions.md), allowing you to build a search query within the filter. This filter expression uses a wildcard on *free* to select amenities including free wifi, free parking, and so forth. ```http
-POST /indexes/hotels-sample-index/docs/search?api-version=2020-06-30
+POST /indexes/hotels-sample-index/docs/search?api-version=2023-11-01
{ "search": "", "filter": "search.ismatch('free*', 'Tags', 'full', 'any')",
POST /indexes/hotels-sample-index/docs/search?api-version=2020-06-30
}
```
-Response for the above query matches on 19 hotels that offer free amenities. Notice that the search score is a uniform "1.0" throughout the results. This is because the search expression is null or empty, resulting in verbatim filter matches, but no full text search. Relevance scores are only returned on full text search. If you're using filters without "search", make sure you have sufficient sortable fields so that you can control search rank.
+Response for the above query matches on 19 hotels that offer free amenities. Notice that the search score is a uniform "1.0" throughout the results. This is because the search expression is null or empty, resulting in verbatim filter matches, but no full text search. Relevance scores are only returned on full text search. If you're using filters without `search`, make sure you have sufficient sortable fields so that you can control search rank.
```json "@odata.count": 19,
Range filtering is supported through filters expressions for any data type. The
The following query is a numeric range. In hotels-sample-index, the only filterable numeric field is Rating. ```http
-POST /indexes/hotels-sample-index/docs/search?api-version=2020-06-30
+POST /indexes/hotels-sample-index/docs/search?api-version=2023-11-01
{ "search": "*", "filter": "Rating ge 2 and Rating lt 4",
Response for this query should look similar to the following example, trimmed fo
"HotelName": "Twin Dome Motel", "Rating": 3.6 }
+...
```

The next query is a range filter over a string field (Address/StateProvince):

```http
-POST /indexes/hotels-sample-index/docs/search?api-version=2020-06-30
+POST /indexes/hotels-sample-index/docs/search?api-version=2023-11-01
{ "search": "*", "filter": "Address/StateProvince ge 'A*' and Address/StateProvince lt 'D*'",
Response for this query should look similar to the example below, trimmed for br
"StateProvince": "CA " } },
+...
```

## Example 6: Geospatial search
Response for this query should look similar to the example below, trimmed for br
The hotels-sample index includes a Location field with latitude and longitude coordinates. This example uses the [geo.distance function](search-query-odata-geo-spatial-functions.md#examples) that filters on documents within the circumference of a starting point, out to an arbitrary distance (in kilometers) that you provide. You can adjust the last value in the query (10) to reduce or enlarge the surface area of the query. ```http
-POST /indexes/hotels-sample-index/docs/search?api-version=2020-06-30
+POST /indexes/hotels-sample-index/docs/search?api-version=2023-11-01
{ "search": "*", "filter": "geo.distance(Location, geography'POINT(-122.335114 47.612839)') le 10",
Response for this query returns all hotels within a 10 kilometer distance of the
Simple syntax supports boolean operators in the form of characters (`+, -, |`) to support AND, OR, and NOT query logic. Boolean search behaves as you might expect, with a few noteworthy exceptions.
-In previous examples, the "searchMode" parameter was introduced as a mechanism for influencing precision and recall, with "searchMode=any" favoring recall (a document that satisfies any of the criteria is considered a match), and "searchMode=all" favoring precision (all criteria must be matched in a document).
+In previous examples, the `searchMode` parameter was introduced as a mechanism for influencing precision and recall, with `"searchMode": "any"` favoring recall (a document that satisfies any of the criteria is considered a match), and `"searchMode": "all"` favoring precision (all criteria must be matched in a document).
-In the context of a Boolean search, the default "searchMode=any" can be confusing if you're stacking a query with multiple operators and getting broader instead of narrower results. This is particularly true with NOT, where results include all documents "not containing" a specific term or phrase.
+In the context of a Boolean search, the default `"searchMode": "any"` can be confusing if you're stacking a query with multiple operators and getting broader instead of narrower results. This is particularly true with NOT, where results include all documents "not containing" a specific term or phrase.
The following example provides an illustration. Running the following query with searchMode (any), 42 documents are returned: those containing the term "restaurant", plus all documents that don't have the phrase "air conditioning". Notice that there's no space between the boolean operator (`-`) and the phrase "air conditioning". ```http
-POST /indexes/hotels-sample-index/docs/search?api-version=2020-06-30
+POST /indexes/hotels-sample-index/docs/search?api-version=2023-11-01
{ "search": "restaurant -\"air conditioning\"", "searchMode": "any",
POST /indexes/hotels-sample-index/docs/search?api-version=2020-06-30
}
```
-Changing to "searchMode=all" enforces a cumulative effect on criteria and returns a smaller result set (7 matches) consisting of documents containing the term "restaurant", minus those containing the phrase "air conditioning".
+Changing to `"searchMode": "all"` enforces a cumulative effect on criteria and returns a smaller result set (7 matches) consisting of documents containing the term "restaurant", minus those containing the phrase "air conditioning".
Response for this query would now look similar to the following example, trimmed for brevity.
Response for this query would now look similar to the following example, trimmed
"restaurant" ] },
+...
```

## Example 8: Paging results
-In previous examples, you learned about parameters that affect search results composition, including "select" that determines which fields are in a result, sort orders, and how to include a count of all matches. This example is a continuation of search result composition in the form of paging parameters that allow you to batch the number of results that appear in any given page.
+In previous examples, you learned about parameters that affect search results composition, including `select` that determines which fields are in a result, sort orders, and how to include a count of all matches. This example is a continuation of search result composition in the form of paging parameters that allow you to batch the number of results that appear in any given page.
-By default, a search service returns the top 50 matches. To control the number of matches in each page, use "top" to define the size of the batch, and then use "skip" to pick up subsequent batches.
+By default, a search service returns the top 50 matches. To control the number of matches in each page, use `top` to define the size of the batch, and then use `skip` to pick up subsequent batches.
-The following example uses a filter and sort order on the Rating field (Rating is both filterable and sortable) because it's easier to see the effects of paging on sorted results. In a regular full search query, the top matches are ranked and paged by "@search.score".
+The following example uses a filter and sort order on the Rating field (Rating is both filterable and sortable) because it's easier to see the effects of paging on sorted results. In a regular full search query, the top matches are ranked and paged by `@search.score`.
```http
-POST /indexes/hotels-sample-index/docs/search?api-version=2020-06-30
+POST /indexes/hotels-sample-index/docs/search?api-version=2023-11-01
{ "search": "*", "filter": "Rating gt 4",
POST /indexes/hotels-sample-index/docs/search?api-version=2020-06-30
}
```
-The query finds 21 matching documents, but because you specified "top", the response returns just the top five matches, with ratings starting at 4.9, and ending at 4.7 with "Lady of the Lake B & B".
+The query finds 21 matching documents, but because you specified `top`, the response returns just the top five matches, with ratings starting at 4.9, and ending at 4.7 with "Lady of the Lake B & B".
To get the next 5, skip the first batch:

```http
-POST /indexes/hotels-sample-index/docs/search?api-version=2020-06-30
+POST /indexes/hotels-sample-index/docs/search?api-version=2023-11-01
{ "search": "*", "filter": "Rating gt 4",
POST /indexes/hotels-sample-index/docs/search?api-version=2020-06-30
}
```
-The response for the second batch skips the first five matches, returning the next five, starting with "Pull'r Inn Motel". To continue with more batches, you would keep "top" at 5, and then increment "skip" by 5 on each new request (skip=5, skip=10, skip=15, and so forth).
+The response for the second batch skips the first five matches, returning the next five, starting with "Pull'r Inn Motel". To continue with more batches, you would keep `top` at 5, and then increment `skip` by 5 on each new request (skip=5, skip=10, skip=15, and so forth).
```json "value": [
search Search What Is Data Import https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-what-is-data-import.md
- ignite-2023 Previously updated : 12/15/2022 Last updated : 01/17/2024

# Data import in Azure AI Search

In Azure AI Search, queries execute over user-owned content that's loaded into a [search index](search-what-is-an-index.md). This article describes the two basic workflows for populating an index: *push* your data into the index programmatically, or *pull* in the data using a [search indexer](search-indexer-overview.md).
-With either approach, the objective is to load data from an external data source. Although you can create an empty index, it's not queryable until you add the content.
+Both approaches load documents from an external data source. Although you can create an empty index, it's not queryable until you add the content.
> [!NOTE]
> If [AI enrichment](cognitive-search-concept-intro.md) is a solution requirement, you must use the pull model (indexers) to load an index. Skillsets are attached to an indexer and don't run independently.

## Pushing data to an index
-The push model, used to programmatically send your data to Azure AI Search, is the most flexible approach for the following reasons:
+Push model is an approach that uses APIs to upload documents into an existing search index. You can upload documents individually or in batches up to 1000 per batch, or 16 MB per batch, whichever limit comes first.
-+ First, there are no restrictions on data source type. The dataset must be composed of JSON documents that map to your index schema, but the data can come from anywhere.
+Key benefits include:
-+ Second, there are no restrictions on frequency of execution. You can push changes to an index as often as you like. For applications having low latency requirements (for example, if you need search operations to be in sync with dynamic inventory databases), the push model is your only option.
++ No restrictions on data source type. The payload must be composed of JSON documents that map to your index schema, but the data can be sourced from anywhere.
-+ Third, you can upload documents individually or in batches up to 1000 per batch, or 16 MB per batch, whichever limit comes first.
++ No restrictions on frequency of execution. You can push changes to an index as often as you like. For applications having low latency requirements (for example, when the index needs to be in sync with product inventory fluctuations), the push model is your only option.
-+ Fourth, connectivity and the secure retrieval of documents are fully under your control. In contrast, indexer connections are authenticated using the security features provided in Azure AI Search.
++ Connectivity and the secure retrieval of documents are fully under your control. In contrast, indexer connections are authenticated using the security features provided in Azure AI Search.

### How to push data to an Azure AI Search index
-You can use the following APIs to load single or multiple documents into an index:
+Use the following APIs to load single or multiple documents into an index:
+ [Add, Update, or Delete Documents (REST API)](/rest/api/searchservice/AddUpdate-or-Delete-Documents)
-+ [IndexDocumentsAction class (Azure SDK for .NET)](/dotnet/api/azure.search.documents.models.indexdocumentsaction) or [IndexDocumentsBatch class](/dotnet/api/azure.search.documents.models.indexdocumentsbatch)
++ [IndexDocumentsAsync (Azure SDK for .NET)](/dotnet/api/azure.search.documents.searchclient.indexdocumentsasync) or [SearchIndexingBufferedSender](/dotnet/api/azure.search.documents.searchindexingbufferedsender-1)
++ [IndexDocumentsBatch (Azure SDK for Python)](/python/api/azure-search-documents/azure.search.documents.indexdocumentsbatch) or [SearchIndexingBufferedSender](/python/api/azure-search-documents/azure.search.documents.searchindexingbufferedsender)
++ [IndexDocumentsBatch (Azure SDK for Java)](/java/api/com.azure.search.documents.indexes.models.indexdocumentsbatch) or [SearchIndexingBufferedSender](/java/api/com.azure.search.documents.searchindexingbufferedasyncsender)
++ [IndexDocumentsBatch (Azure SDK for JavaScript)](/javascript/api/@azure/search-documents/indexdocumentsbatch) or [SearchIndexingBufferedSender](/javascript/api/@azure/search-documents/searchindexingbufferedsender)
-There is currently no tool support for pushing data via the portal.
+There's no support for pushing data via the Azure portal.
For an introduction to the push APIs, see:
For an introduction to the push APIs, see:
You can control the type of indexing action on a per-document basis, specifying whether the document should be uploaded in full, merged with existing document content, or deleted.
-Whether you use the REST API or an SDK, the following document operations are supported for data import:
+Whether you use the REST API or an Azure SDK, the following document operations are supported for data import:
-+ **Upload**, similar to an "upsert" where the document is inserted if it is new, and updated or replaced if it exists. If the document is missing values that the index requires, the document field's value is set to null.
++ **Upload**, similar to an "upsert" where the document is inserted if it's new, and updated or replaced if it exists. If the document is missing values that the index requires, the document field's value is set to null.
-+ **merge** updates a document that already exists, and fails a document that cannot be found. Merge replaces existing values. For this reason, be sure to check for collection fields that contain multiple values, such as fields of type `Collection(Edm.String)`. For example, if a `tags` field starts with a value of `["budget"]` and you execute a merge with `["economy", "pool"]`, the final value of the `tags` field is `["economy", "pool"]`. It won't be `["budget", "economy", "pool"]`.
++ **merge** updates a document that already exists, and fails a document that can't be found. Merge replaces existing values. For this reason, be sure to check for collection fields that contain multiple values, such as fields of type `Collection(Edm.String)`. For example, if a `tags` field starts with a value of `["budget"]` and you execute a merge with `["economy", "pool"]`, the final value of the `tags` field is `["economy", "pool"]`. It won't be `["budget", "economy", "pool"]`.
+ **mergeOrUpload** behaves like **merge** if the document exists, and **upload** if the document is new.
Whether you use the REST API or an SDK, the following document operations are su
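With the REST API, the action is specified per document in the body of a request sent to the index's `docs/index` endpoint. The following is a minimal sketch; the index fields shown are illustrative.

```json
{
  "value": [
    { "@search.action": "upload", "HotelId": "1", "HotelName": "Example Hotel" },
    { "@search.action": "mergeOrUpload", "HotelId": "2", "Category": "Boutique" },
    { "@search.action": "delete", "HotelId": "3" }
  ]
}
```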
## Pulling data into an index
-The pull model crawls a supported data source and automatically uploads the data into your index. In Azure AI Search, this capability is implemented through *indexers*, currently available for these platforms:
+The pull model uses *indexers* connecting to a supported data source, automatically uploading the data into your index. Indexers from Microsoft are available for these platforms:
+ [Azure Blob storage](search-howto-indexing-azure-blob-storage.md) + [Azure Table storage](search-howto-indexing-azure-tables.md)
The pull model crawls a supported data source and automatically uploads the data
+ [Azure SQL Database, SQL Managed Instance, and SQL Server on Azure VMs](search-howto-connecting-azure-sql-database-to-azure-search-using-indexers.md) + [SharePoint in Microsoft 365 (preview)](search-howto-index-sharepoint-online.md)
+You can use third-party connectors, developed and maintained by Microsoft partners. For more information and links, see [Data source gallery](search-data-sources-gallery.md).
+ Indexers connect an index to a data source (usually a table, view, or equivalent structure), and map source fields to equivalent fields in the index. During execution, the rowset is automatically transformed to JSON and loaded into the specified index. All indexers support schedules so that you can specify how frequently the data is to be refreshed. Most indexers provide change tracking if the data source supports it. By tracking changes and deletes to existing documents in addition to recognizing new documents, indexers remove the need to actively manage the data in your index.

### How to pull data into an Azure AI Search index
-Indexer functionality is exposed in the [Azure portal](search-import-data-portal.md), the [REST API](/rest/api/searchservice/create-indexer), and the [.NET SDK](/dotnet/api/azure.search.documents.indexes.searchindexerclient).
+Use the following tools and APIs for indexer-based indexing:
+
++ [Import data wizard in the Azure portal](search-import-data-portal.md)
++ REST APIs: [Create Indexer (REST)](/rest/api/searchservice/indexers/create), [Create Data Source (REST)](/rest/api/searchservice/data-sources/create), [Create Index (REST)](/rest/api/searchservice/indexes/create)
++ Azure SDK for .NET: [SearchIndexer](/dotnet/api/azure.search.documents.indexes.models.searchindexer), [SearchIndexerDataSourceConnection](/dotnet/api/azure.search.documents.indexes.models.searchindexerdatasourceconnection), [SearchIndex](/dotnet/api/azure.search.documents.indexes.models.searchindex)
++ Azure SDK for Python: [SearchIndexer](/python/api/azure-search-documents/azure.search.documents.indexes.models.searchindexer), [SearchIndexerDataSourceConnection](/python/api/azure-search-documents/azure.search.documents.indexes.models.searchindexerdatasourceconnection), [SearchIndex](/python/api/azure-search-documents/azure.search.documents.indexes.models.searchindex)
++ Azure SDK for Java: [SearchIndexer](/java/api/com.azure.search.documents.indexes.models.searchindexer), [SearchIndexerDataSourceConnection](/java/api/com.azure.search.documents.indexes.models.searchindexerdatasourceconnection), [SearchIndex](/java/api/com.azure.search.documents.indexes.models.searchindex)
++ Azure SDK for JavaScript: [SearchIndexer](/javascript/api/@azure/search-documents/searchindexer), [SearchIndexerDataSourceConnection](/javascript/api/@azure/search-documents/searchindexerdatasourceconnection), [SearchIndex](/javascript/api/@azure/search-documents/searchindex)
+
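For the REST path, a minimal indexer definition ties a data source to a target index and can include a refresh schedule. The following sketch uses illustrative names and an illustrative interval.

```json
{
  "name": "hotels-sql-indexer",
  "dataSourceName": "hotels-sql-ds",
  "targetIndexName": "hotels-sql-idx",
  "schedule": { "interval": "PT2H" }
}
```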
+Indexer functionality is exposed in the [Azure portal], the [REST API](/rest/api/searchservice/create-indexer), and the [.NET SDK](/dotnet/api/azure.search.documents.indexes.searchindexerclient).
-An advantage to using the portal is that Azure AI Search can usually generate a default index schema by reading the metadata of the source dataset. You can modify the generated index until the index is processed, after which the only schema edits allowed are those that do not require reindexing. If the changes affect the schema itself, you would need to rebuild the index.
+An advantage to using the portal is that Azure AI Search can usually generate a default index schema by reading the metadata of the source dataset.
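A generated schema is just an index definition with fields inferred from the source. A minimal sketch, with illustrative field names and attributes, might look like the following.

```json
{
  "name": "hotels-sample-index",
  "fields": [
    { "name": "HotelId", "type": "Edm.String", "key": true, "filterable": true },
    { "name": "HotelName", "type": "Edm.String", "searchable": true },
    { "name": "Rating", "type": "Edm.Double", "filterable": true, "sortable": true }
  ]
}
```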
## Verify data import with Search explorer
A quick way to perform a preliminary check on the document upload is to use [**S
The explorer lets you query an index without having to write any code. The search experience is based on default settings, such as the [simple syntax](/rest/api/searchservice/simple-query-syntax-in-azure-search) and default [searchMode query parameter](/rest/api/searchservice/search-documents). Results are returned in JSON so that you can inspect the entire document.
-Here is an example query that you can run in Search Explorer. The "HotelId" is the document key of the hotels-sample-index. The filter provides the document ID of a specific document:
+Here's an example query that you can run in Search Explorer in JSON view. The "HotelId" is the document key of the hotels-sample-index. The filter provides the document ID of a specific document:
-```http
-$filter=HotelId eq '50'
+```JSON
+{
+ "search": "*",
+ "filter": "HotelId eq '50'"
+}
``` If you're using REST, this [Look up query](search-query-simple-examples.md#example-2-look-up-by-id) achieves the same purpose.
search Tutorial Multiple Data Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/tutorial-multiple-data-sources.md
Previously updated : 08/29/2022 Last updated : 01/17/2024 - devx-track-csharp - devx-track-dotnet
Azure AI Search can import, analyze, and index data from multiple data sources into a single consolidated search index.
-This tutorial uses C# and the [Azure.Search.Documents](/dotnet/api/overview/azure/search) client library in the Azure SDK for .NET to index sample hotel data from an Azure Cosmos DB instance, and merge that with hotel room details drawn from Azure Blob Storage documents. The result will be a combined hotel search index containing hotel documents, with rooms as a complex data types.
+This C# tutorial uses the [Azure.Search.Documents](/dotnet/api/overview/azure/search) client library in the Azure SDK for .NET to index sample hotel data from an Azure Cosmos DB instance, and merges that with hotel room details drawn from Azure Blob Storage documents. The result is a combined hotel search index containing hotel documents, with rooms as complex data types.
In this tutorial, you'll perform the following tasks:
A finished version of the code in this tutorial can be found in the following pr
* [multiple-data-sources/v11 (GitHub)](https://github.com/Azure-Samples/azure-search-dotnet-scale/tree/main/multiple-data-sources/v11)
-For an earlier version of the .NET SDK, see [Microsoft.Azure.Search (version 10) code sample](https://github.com/Azure-Samples/azure-search-dotnet-scale/tree/main/multiple-data-sources/v10) on GitHub.
- ## Prerequisites
-+ [Azure Cosmos DB](../cosmos-db/create-cosmosdb-resources-portal.md)
++ [Azure Cosmos DB for NoSQL](../cosmos-db/create-cosmosdb-resources-portal.md)
+ [Azure Storage](../storage/common/storage-account-create.md)
+ [Visual Studio](https://visualstudio.microsoft.com/)
+ [Azure AI Search (version 11.x) NuGet package](https://www.nuget.org/packages/Azure.Search.Documents/)
+ [Azure AI Search](search-create-service-portal.md)

> [!NOTE]
-> You can use the free service for this tutorial. A free search service limits you to three indexes, three indexers, and three data sources. This tutorial creates one of each. Before starting, make sure you have room on your service to accept the new resources.
+> You can use the free service for this tutorial. A free search service limits you to three indexes, three indexers, and three data sources. This tutorial creates one index, two indexers, and two data sources. Before starting, make sure you have room on your service to accept the new resources.
## 1 - Create services
search Tutorial Optimize Indexing Push Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/tutorial-optimize-indexing-push-api.md
Previously updated : 1/05/2023 Last updated : 1/18/2024 - devx-track-csharp - ignite-2023

# Tutorial: Optimize indexing with the push API
# Tutorial: Optimize indexing with the push API
-Azure AI Search supports [two basic approaches](search-what-is-data-import.md) for importing data into a search index: *pushing* your data into the index programmatically, or pointing an [Azure AI Search indexer](search-indexer-overview.md) at a supported data source to *pull* in the data.
+Azure AI Search supports [two basic approaches](search-what-is-data-import.md) for importing data into a search index: *push* your data into the index programmatically, or *pull* in the data by pointing an [Azure AI Search indexer](search-indexer-overview.md) at a supported data source.
This tutorial describes how to efficiently index data using the [push model](search-what-is-data-import.md#pushing-data-to-an-index) by batching requests and using an exponential backoff retry strategy. You can [download and run the sample application](https://github.com/Azure-Samples/azure-search-dotnet-scale/tree/main/optimize-data-indexing). This article explains the key aspects of the application and factors to consider when indexing data.
-This tutorial uses C# and the [.NET SDK](/dotnet/api/overview/azure/search) to perform the following tasks:
+This tutorial uses C# and the [Azure.Search.Documents library from the Azure SDK for .NET](/dotnet/api/overview/azure/search) to perform the following tasks:
> [!div class="checklist"] > * Create an index
Source code for this tutorial is in the [optimize-data-indexing/v11](https://git
## Key considerations
-When pushing data into an index, there's several key considerations that impact indexing speeds. You can learn more about these factors in the [index large data sets article](search-howto-large-index.md).
-
-Six key factors to consider are:
+Factors affecting indexing speeds are listed next. You can learn more in [Index large data sets](search-howto-large-index.md).
+ **Service tier and number of partitions/replicas** - Adding partitions and increasing your tier will both increase indexing speeds.
+ **Index Schema** - Adding fields and adding additional properties to fields (such as *searchable*, *facetable*, or *filterable*) both reduce indexing speeds.
Six key factors to consider are:
+ **Retry strategy** - An exponential backoff retry strategy should be used to optimize indexing.
+ **Network data transfer speeds** - Data transfer speeds can be a limiting factor. Index data from within your Azure environment to increase data transfer speeds.

## 1 - Create Azure AI Search service

To complete this tutorial, you'll need an Azure AI Search service, which you can [create in the portal](search-create-service-portal.md). We recommend using the same tier you plan to use in production so that you can accurately test and optimize indexing speeds.
List<Hotel> hotels = dg.GetHotels(numDocuments, "large");
There are two sizes of hotels available for testing in this sample: **small** and **large**.
-The schema of your index can have a significant impact on indexing speeds. Because of this impact, it makes sense to convert this class to generate data matching your intended index schema after you run through this tutorial.
+The schema of your index has an effect on indexing speeds. For this reason, it makes sense to convert this class to generate data that best matches your intended index schema after you run through this tutorial.
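To make that concrete, here's a minimal, hypothetical model class showing how field attributes map to an index schema; it isn't the sample's exact `Hotel` class. Every capability you enable (searchable, filterable, facetable, sortable) adds per-document indexing work.

```csharp
using System.Text.Json.Serialization;
using Azure.Search.Documents.Indexes;

// Illustrative only: field names and attributes are assumptions, not the sample's schema.
public class Hotel
{
    [SimpleField(IsKey = true)]
    [JsonPropertyName("HotelId")]
    public string HotelId { get; set; }

    [SearchableField(IsSortable = true)]
    [JsonPropertyName("HotelName")]
    public string HotelName { get; set; }

    // A plain retrievable field is cheaper to index than a searchable one.
    [SimpleField]
    [JsonPropertyName("Description")]
    public string Description { get; set; }

    [SimpleField(IsFilterable = true, IsFacetable = true)]
    [JsonPropertyName("Category")]
    public string Category { get; set; }
}
```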
## 4 - Test batch sizes
public static double EstimateObjectSize(object data)
} ```
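The sample's implementation of `EstimateObjectSize` isn't reproduced here. As a sketch of the general idea, one way to approximate the payload size of a batch is to serialize it to JSON and count the bytes:

```csharp
using System.Text;
using System.Text.Json;

// Sketch only: approximates the size of the JSON payload, not the in-memory size.
public static double EstimateObjectSize(object data)
{
    string json = JsonSerializer.Serialize(data);
    return Encoding.UTF8.GetByteCount(json);
}
```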
-The function requires an `SearchClient` as well as the number of tries you'd like to test for each batch size. As there may be some variability in indexing times for each batch, we try each batch three times by default to make the results more statistically significant.
+The function requires a `SearchClient` plus the number of tries you'd like to test for each batch size. Because there might be variability in indexing times for each batch, we try each batch three times by default to make the results more statistically significant.
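The sample's full implementation isn't shown here. A simplified sketch of such a loop, assuming the `EstimateObjectSize` idea above and the illustrative `Hotel` class from earlier, might look like this:

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;
using System.Threading.Tasks;
using Azure.Search.Documents;
using Azure.Search.Documents.Models;

// Simplified sketch: time uploads at several batch sizes and repeat each test
// a few times so a single slow request doesn't skew the comparison.
public static async Task TestBatchSizesAsync(SearchClient searchClient, int numTries)
{
    int[] batchSizes = { 100, 200, 400, 800, 1000 };
    foreach (int batchSize in batchSizes)
    {
        for (int attempt = 1; attempt <= numTries; attempt++)
        {
            // Generate placeholder documents; the sample uses its own data generator.
            List<Hotel> hotels = Enumerable.Range(0, batchSize)
                .Select(i => new Hotel { HotelId = i.ToString(), HotelName = $"Hotel {i}" })
                .ToList();

            double sizeInMb = EstimateObjectSize(hotels) / 1024 / 1024;

            var stopwatch = Stopwatch.StartNew();
            await searchClient.IndexDocumentsAsync(IndexDocumentsBatch.Upload(hotels));
            stopwatch.Stop();

            double mbPerSecond = sizeInMb / stopwatch.Elapsed.TotalSeconds;
            Console.WriteLine($"Batch size {batchSize}, try {attempt}: {mbPerSecond:F2} MB/s");
        }
    }
}
```

In the sample, the test is invoked like this: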
```csharp
await TestBatchSizesAsync(searchClient, numTries: 3);
```
When you run the function, you should see an output like below in your console:
![Output of test batch size function](media/tutorial-optimize-data-indexing/test-batch-sizes.png "Output of test batch size function")
-Identify which batch size is most efficient and then use that batch size in the next step of the tutorial. You may see a plateau in MB/s across different batch sizes.
+Identify which batch size is most efficient and then use that batch size in the next step of the tutorial. You might see a plateau in MB/s across different batch sizes.
## 5 - Index data
Now that we've identified the batch size we intend to use, the next step is to b
To take full advantage of Azure AI Search's indexing speeds, you'll likely need to use multiple threads to send batch indexing requests concurrently to the service.
-Several of the key considerations mentioned above impact the optimal number of threads. You can modify this sample and test with different thread counts to determine the optimal thread count for your scenario. However, as long as you have several threads running concurrently, you should be able to take advantage of most of the efficiency gains.
+Several of the key considerations previously mentioned can affect the optimal number of threads. You can modify this sample and test with different thread counts to determine the optimal thread count for your scenario. However, as long as you have several threads running concurrently, you should be able to take advantage of most of the efficiency gains.
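A minimal sketch of that idea, assuming the same `SearchClient` and `Hotel` model used above: batches are submitted as tasks, and at most a fixed number of requests are kept in flight at once.

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Azure.Search.Documents;
using Azure.Search.Documents.Models;

// Sketch only: numThreads is an assumed tuning knob; test different values for your workload.
public static async Task IndexConcurrentlyAsync(
    SearchClient searchClient, List<Hotel> hotels, int batchSize, int numThreads)
{
    var pending = new List<Task>();
    for (int i = 0; i < hotels.Count; i += batchSize)
    {
        List<Hotel> batch = hotels.GetRange(i, Math.Min(batchSize, hotels.Count - i));
        pending.Add(searchClient.IndexDocumentsAsync(IndexDocumentsBatch.Upload(batch)));

        // Keep at most numThreads requests in flight at once.
        if (pending.Count >= numThreads)
        {
            Task finished = await Task.WhenAny(pending);
            pending.Remove(finished);
        }
    }
    await Task.WhenAll(pending);
}
```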
-As you ramp up the requests hitting the search service, you may encounter [HTTP status codes](/rest/api/searchservice/http-status-codes) indicating the request didn't fully succeed. During indexing, two common HTTP status codes are:
+As you ramp up the requests hitting the search service, you might encounter [HTTP status codes](/rest/api/searchservice/http-status-codes) indicating the request didn't fully succeed. During indexing, two common HTTP status codes are:
+ **503 Service Unavailable** - This error means that the system is under heavy load and your request can't be processed at this time.
+ **207 Multi-Status** - This error means that some documents succeeded, but at least one failed.
TimeSpan delay = TimeSpan.FromSeconds(2);
int maxRetryAttempts = 5;
```
-The results of the indexing operation are stored in the variable `IndexDocumentResult result`. This variable is important because it allows you to check if any documents in the batch failed as shown below. If there is a partial failure, a new batch is created based on the failed documents' ID.
+The results of the indexing operation are stored in the `IndexDocumentsResult result` variable. This variable is important because it allows you to check whether any documents in the batch failed, as shown below. If there's a partial failure, a new batch is created based on the failed documents' IDs.
`RequestFailedException` exceptions should also be caught as they indicate the request failed completely and should also be retried.
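Putting those pieces together, a simplified retry sketch (not the sample's exact implementation) could look like the following. It resubmits only the failed documents after a partial failure and retries the whole batch after a `RequestFailedException`, doubling the delay each time.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Azure;
using Azure.Search.Documents;
using Azure.Search.Documents.Models;

public static async Task IndexWithRetryAsync(SearchClient searchClient, List<Hotel> hotels)
{
    TimeSpan delay = TimeSpan.FromSeconds(2);
    int maxRetryAttempts = 5;
    List<Hotel> remaining = hotels;

    for (int attempt = 1; attempt <= maxRetryAttempts && remaining.Count > 0; attempt++)
    {
        try
        {
            Response<IndexDocumentsResult> response =
                await searchClient.IndexDocumentsAsync(IndexDocumentsBatch.Upload(remaining));

            // Keep only the documents that failed (a 207-style partial failure) for the next try.
            var failedKeys = response.Value.Results
                .Where(r => !r.Succeeded)
                .Select(r => r.Key)
                .ToHashSet();
            remaining = remaining.Where(h => failedKeys.Contains(h.HotelId)).ToList();
        }
        catch (RequestFailedException)
        {
            // The entire request failed (for example, 503 Service Unavailable); retry everything.
        }

        if (remaining.Count > 0)
        {
            await Task.Delay(delay);
            delay = TimeSpan.FromSeconds(delay.TotalSeconds * 2); // exponential backoff
        }
    }
}
```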
You can explore the populated search index after the program has run programatic
### Programmatically
-There are two main options for checking the number of documents in an index: the [Count Documents API](/rest/api/searchservice/count-documents) and the [Get Index Statistics API](/rest/api/searchservice/get-index-statistics). Both paths may require some additional time to update so don't be alarmed if the number of documents returned is lower than you expected initially.
+There are two main options for checking the number of documents in an index: the [Count Documents API](/rest/api/searchservice/count-documents) and the [Get Index Statistics API](/rest/api/searchservice/get-index-statistics). Both paths require time to process so don't be alarmed if the number of documents returned is initially lower than you expect.
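As a sketch of both approaches with the Azure SDK for .NET, assuming the placeholder endpoint, key, and index name used earlier:

```csharp
using System;
using Azure;
using Azure.Search.Documents;
using Azure.Search.Documents.Indexes;
using Azure.Search.Documents.Indexes.Models;

// Placeholder values - substitute your own endpoint, admin API key, and index name.
var endpoint = new Uri("https://<service-name>.search.windows.net");
var credential = new AzureKeyCredential("<admin-api-key>");

// Count Documents: the number of documents currently searchable in the index.
var searchClient = new SearchClient(endpoint, "optimize-indexing", credential);
long documentCount = await searchClient.GetDocumentCountAsync();
Console.WriteLine($"Document count: {documentCount}");

// Get Index Statistics: document count plus storage size, in bytes.
var indexClient = new SearchIndexClient(endpoint, credential);
SearchIndexStatistics statistics = await indexClient.GetIndexStatisticsAsync("optimize-indexing");
Console.WriteLine($"Documents: {statistics.DocumentCount}, storage used: {statistics.StorageSize} bytes");
```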
#### Count Documents
In Azure portal, open the search service **Overview** page, and find the **optim
![List of Azure AI Search indexes](media/tutorial-optimize-data-indexing/portal-output.png "List of Azure AI Search indexes")
-The *Document Count* and *Storage Size* are based on [Get Index Statistics API](/rest/api/searchservice/get-index-statistics) and may take several minutes to update.
+The *Document Count* and *Storage Size* are based on [Get Index Statistics API](/rest/api/searchservice/get-index-statistics) and can take several minutes to update.
## Reset and rerun
search Vector Search Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/vector-search-overview.md
For example, documents that talk about different species of dogs would be cluste
### Nearest neighbors search
-In vector search, the search engine searches through the vectors within the embedding space to identify those that are near to the query vector. This technique is called *nearest neighbor search*. Nearest neighbors help quantify the similarity between items. A high degree of vector similarity indicates that the original data was similar too. To facilitate fast nearest neighbor search, the search engine will perform optimizations or employ data structures or data partitioning to reduce the search space. Each vector search algorithm will have different approaches to this problem, trading off different characteristics such as latency, throughput, recall, and memory. To compute similarity, similarity metrics provide the mechanism for computing this distance.
+In vector search, the search engine searches through the vectors within the embedding space to identify those that are near to the query vector. This technique is called [*nearest neighbor search*](https://en.wikipedia.org/wiki/Nearest_neighbor_search). Nearest neighbors help quantify the similarity between items. A high degree of vector similarity indicates that the original data was similar too. To facilitate fast nearest neighbor search, the search engine performs optimizations or employs data structures and data partitioning to reduce the search space. Each vector search algorithm takes a different approach to this problem, trading off characteristics such as latency, throughput, recall, and memory. Similarity metrics provide the mechanism for computing the distance between a query vector and the vectors in the index.
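For example, cosine similarity is one commonly used metric for embeddings. A small illustrative implementation (a sketch, not code from the service) looks like this:

```csharp
using System;

// Illustrative only: cosine similarity between two embedding vectors.
// A value close to 1 means the vectors point in nearly the same direction,
// which indicates the underlying content is semantically similar.
static double CosineSimilarity(float[] a, float[] b)
{
    double dot = 0, magnitudeA = 0, magnitudeB = 0;
    for (int i = 0; i < a.Length; i++)
    {
        dot += a[i] * b[i];
        magnitudeA += a[i] * a[i];
        magnitudeB += b[i] * b[i];
    }
    return dot / (Math.Sqrt(magnitudeA) * Math.Sqrt(magnitudeB));
}
```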
Azure AI Search currently supports the following algorithms:
ANN algorithms sacrifice some accuracy, but offer scalable and faster retrieval
Azure AI Search uses HNSW for its ANN algorithm. <!-- > [!NOTE]
-> Finding the true set of [_k_ nearest neighbors](https://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm) requires comparing the input vector exhaustively against all vectors in the dataset. While each vector similarity calculation is relatively fast, performing these exhaustive comparisons across large datasets is computationally expensive and slow due to the sheer number of comparisons. For example, if a dataset contains 10 million 1,000-dimensional vectors, computing the distance between the query vector and all vectors in the dataset would require scanning 37 GB of data (assuming single-precision floating point vectors) and a high number of similarity calculations.
+> Finding the true set of [nearest neighbors](https://en.wikipedia.org/wiki/Nearest_neighbor_search) requires comparing the input vector exhaustively against all vectors in the dataset. While each vector similarity calculation is relatively fast, performing these exhaustive comparisons across large datasets is computationally expensive and slow due to the sheer number of comparisons. For example, if a dataset contains 10 million 1,000-dimensional vectors, computing the distance between the query vector and all vectors in the dataset would require scanning 37 GB of data (assuming single-precision floating point vectors) and a high number of similarity calculations.
> > To address this challenge, approximate nearest neighbor (ANN) search methods are used to trade off recall for speed. These methods can efficiently find a small set of candidate vectors that are similar to the query vector and have high likelihood to be in the globally most similar neighbors. Each algorithm has a different approach to reducing the total number of vectors comparisons, but they all share the ability to balance accuracy and efficiency by tweaking the algorithm configuration parameters. -->
security Feature Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/feature-availability.md
Azure Government is a physically isolated cloud environment dedicated to US fede
For more information about Azure Government, see [What is Azure Government?](../../azure-government/documentation-government-welcome.md)
+> [!NOTE]
+> These lists and tables do not include feature or bundle availability in the Azure Government Secret or Azure Government Top Secret clouds.
+> For more information about specific availability for air-gapped clouds, please contact your account team.
+ ## Microsoft 365 integration Integrations between products rely on interoperability between Azure and Office platforms. Offerings hosted in the Azure environment are accessible from the Microsoft 365 Enterprise and Microsoft 365 Government platforms. Office 365 and Office 365 GCC are paired with Microsoft Entra ID in Azure. Office 365 GCC High and Office 365 DoD are paired with Microsoft Entra ID in Azure Government.
sentinel Feature Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/feature-availability.md
Last updated 07/25/2023
This article describes the features available in Microsoft Sentinel across different Azure environments. Features are listed as GA (generally available), public preview, or shown as not available.
+> [!NOTE]
+> These lists and tables do not include feature or bundle availability in the Azure Government Secret or Azure Government Top Secret clouds.
+> For more information about specific availability for air-gapped clouds, please contact your account team.
+ ## Analytics |Feature |Feature stage |Azure commercial |Azure Government |Azure China 21Vianet |
sentinel Forward Syslog Monitor Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/forward-syslog-monitor-agent.md
Title: 'Tutorial: Forward Syslog data to Microsoft Sentinel and Azure Monitor by using Azure Monitor Agent'
-description: In this tutorial, you learn how to monitor Linux-based devices by forwarding Syslog data to a Log Analytics workspace.
+description: In this tutorial, you learn how to monitor Linux-based devices by forwarding Syslog data to a Log Analytics workspace.
-+ Last updated 01/05/2023-+ #Customer intent: As a security engineer, I want to get Syslog data into Microsoft Sentinel so that I can do attack detection, threat visibility, proactive hunting, and threat response. As an IT administrator, I want to get Syslog data into my Log Analytics workspace to monitor my Linux-based devices.
If you're forwarding Syslog data to an Azure VM, follow these steps to allow rec
Connect to your Linux VM and run the following command to configure the Linux Syslog daemon: ```bash
-sudo wget -O Forwarder_AMA_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/Syslog/Forwarder_AMA_installer.py&&sudo python3 Forwarder_AMA_installer.py
+sudo wget -O Forwarder_AMA_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/Syslog/Forwarder_AMA_installer.py&&sudo python3 Forwarder_AMA_installer.py
``` This script can make changes for both rsyslog.d and syslog-ng.
service-bus-messaging Service Bus Python How To Use Queues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-python-how-to-use-queues.md
description: This tutorial shows you how to send messages to and receive message
documentationcenter: python Previously updated : 01/12/2023 Last updated : 01/18/2024 ms.devlang: python
See the following documentation and samples:
- [Azure Service Bus client library for Python](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/servicebus/azure-servicebus) - [Samples](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/servicebus/azure-servicebus/samples).
- - The **sync_samples** folder has samples that show you how to interact with Service Bus in a synchronous manner. In this quick start, you used this method.
- - The **async_samples** folder has samples that show you how to interact with Service Bus in an asynchronous manner.
+ - The **sync_samples** folder has samples that show you how to interact with Service Bus in a synchronous manner.
+ - The **async_samples** folder has samples that show you how to interact with Service Bus in an asynchronous manner. In this quick start, you used this method.
- [azure-servicebus reference documentation](/python/api/azure-servicebus/azure.servicebus?preserve-view=true)
site-recovery Azure To Azure Replicate After Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-replicate-after-migration.md
Title: Set up disaster recovery after migration to Azure with Azure Site Recovery
+ Title: Set up disaster recovery after migration to Azure with Azure Site Recovery
description: This article describes how to prepare machines to set up disaster recovery between Azure regions after migration to Azure using Azure Site Recovery. -+ Last updated 05/02/2023
-# Set up disaster recovery for Azure VMs after migration to Azure
+# Set up disaster recovery for Azure VMs after migration to Azure
Follow this article if you've [migrated on-premises machines to Azure VMs](./migrate-tutorial-on-premises-azure.md) using the [Site Recovery](site-recovery-overview.md) service, and you now want to get the VMs set up for disaster recovery to a secondary Azure region. The article describes how to ensure that the Azure VM agent is installed on migrated VMs, and how to remove the Site Recovery Mobility service that's no longer needed after migration.
Follow this article if you've [migrated on-premises machines to Azure VMs](./mig
## Verify migration
-Before you set up disaster recovery, make sure that migration has completed as expected. To complete a migration successfully, after the failover, you should select the **Complete Migration** option, for each machine you want to migrate.
+Before you set up disaster recovery, make sure that migration has completed as expected. To complete a migration successfully, after the failover, you should select the **Complete Migration** option, for each machine you want to migrate.
## Verify the Azure VM agent
Each Azure VM must have the [Azure VM agent](../virtual-machines/extensions/agen
### Install the agent on Windows VMs
-If you're running a version of the Site Recovery mobility service earlier than 9.7.0.0, or you have some other need to install the agent manually, do the following:
+If you're running a version of the Site Recovery mobility service earlier than 9.7.0.0, or you have some other need to install the agent manually, do the following:
1. Ensure you have admin permissions on the VM. 2. Download the [VM Agent installer](https://go.microsoft.com/fwlink/?LinkID=394789&clcid=0x409).
Install the [Azure Linux VM](../virtual-machines/extensions/agent-linux.md) agen
2. We strongly recommend that you install the Linux VM agent using an RPM or a DEB package from your distribution's package repository. All the [endorsed distribution providers](../virtual-machines/linux/endorsed-distros.md) integrate the Azure Linux agent package into their images and repositories. - We strongly recommend that you update the agent only through a distribution repository. - We don't recommend installing the Linux VM agent directly from GitHub and updating it.
- - If the latest agent for your distribution is not available, contact distribution support for instructions on how to install it.
+ - If the latest agent for your distribution is not available, contact distribution support for instructions on how to install it.
-#### Validate the installation
+#### Validate the installation
1. Run this command: **ps -e** to ensure that the Azure agent is running on the Linux VM. 2. If the process isn't running, restart it by using the following commands:
Install the [Azure Linux VM](../virtual-machines/extensions/agent-linux.md) agen
```bash sudo systemctl enable --now walinuxagent.service ```
- - For other distributions:
-
+ - For other distributions:
+ ```bash sudo systemctl enable --now waagent.service ```
Install the [Azure Linux VM](../virtual-machines/extensions/agent-linux.md) agen
## Uninstall the Mobility service
-1. Manually uninstall the Mobility service from the Azure VM, using one of the following methods.
+1. Manually uninstall the Mobility service from the Azure VM, using one of the following methods.
- For Windows, in the Control Panel > **Add/Remove Programs**, uninstall **Microsoft Azure Site Recovery Mobility Service/Master Target server**. At an elevated command prompt, run: ``` MsiExec.exe /qn /x {275197FC-14FD-4560-A5EB-38217F80CBD1} /L+*V "C:\ProgramData\ASRSetupLogs\UnifiedAgentMSIUninstall.log"
site-recovery Hyper V Azure Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/hyper-v-azure-tutorial.md
Title: Set up Hyper-V disaster recovery by using Azure Site Recovery
+ Title: Set up Hyper-V disaster recovery by using Azure Site Recovery
description: Learn how to set up disaster recovery of on-premises Hyper-V VMs (without SCVMM) to Azure by using Site Recovery and MARS. Last updated 05/04/2023-+
It's important to prepare the infrastructure before you set up disaster recovery
> [!TIP] > For this tutorial, you don't need to use the Deployment Planner. If you're planning a large deployment, download the Deployment Planner for Hyper-V from the link on the pane. [Learn more](hyper-v-deployment-planner-overview.md) about Hyper-V deployment planning.
-
+ :::image type="content" source="./media/hyper-v-azure-tutorial/deployment-planning.png" alt-text="Screenshot that shows the Deployment planning pane."::: 1. Select **Next**.
On **Prepare infrastructure**, on the **Replication policy** tab, complete these
1. For **Copy frequency**, select **5 Minutes**. 1. For **Recovery point retention in hours**, select **2**. 1. For **App-consistent snapshot frequency**, select **1**.
- 1. For **Initial replication start time**, select **Immediately**.
+ 1. For **Initial replication start time**, select **Immediately**.
1. Select **OK** to create the policy. When you create a new policy, it's automatically associated with the specified Hyper-V site. :::image type="content" source="./media/hyper-v-azure-tutorial/create-policy.png" alt-text="Screenshot that shows Create and associate policy pane and options.":::
You can track progress in your Azure portal notifications. When the job finishes
1. On the vault command bar, select **Enable Site Recovery**. 1. On **Site Recovery**, under the **Hyper-V machines to Azure** tile, select **Enable replication**. 1. On **Enable replication**, on the **Source environment** tab, select a source location, and then select **Next**.
-
+ :::image type="content" source="./media/hyper-v-azure-tutorial/enable-replication-source.png" alt-text="Screenshot that shows the source environment pane."::: 1. On the **Target environment** tab, complete these steps: 1. For **Subscription**, enter or select the subscription.
site-recovery Site Recovery Deployment Planner History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-deployment-planner-history.md
-+ Last updated 12/15/2023
This article provides history of all versions of Azure Site Recovery Deployment
- Added support for Windows Server 2019 and Red Hat Enterprise Linux (`RHEL`) workstation. > [!Note]
->- It is not recommended to run the deployment planner on the ESXi version 6.7.0 Update 2 Build 13006603, as it does not work as expected.
+>- It is not recommended to run the deployment planner on the ESXi version 6.7.0 Update 2 Build 13006603, as it does not work as expected.
## Version 2.3
This article provides history of all versions of Azure Site Recovery Deployment
- Fixed an issue that prevented the Deployment Planner from generating a report with the provided target location and subscription.
-## Version 2.2
+## Version 2.2
**Release Date: April 25, 2018**
This article provides history of all versions of Azure Site Recovery Deployment
- Fixed bugs in the GetThroughput operation. - Added option to limit the number of VMs to profile or generate the report. The default limit is 1,000 VMs. - VMware to Azure disaster recovery:
- - Fixed an issue of Windows Server 2016 VM going into the incompatible table.
+ - Fixed an issue of Windows Server 2016 VM going into the incompatible table.
- Updated compatibility messages for Extensible Firmware Interface (EFI) Windows VMs.-- Updated the VMware to Azure and Hyper-V to Azure, VM data churn limit per VM.
+- Updated the VMware to Azure and Hyper-V to Azure, VM data churn limit per VM.
- Improved reliability of VM list file parsing. ## Version 2.0.1
This article provides history of all versions of Azure Site Recovery Deployment
## Version 1.3.1
-**Release Date: July 19, 2017**
+**Release Date: July 19, 2017**
**Fixes:**
site-recovery Site Recovery Extension Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-extension-troubleshoot.md
-+ Last updated 05/03/2023
This article provides troubleshooting steps that can help you resolve Azure Site
This issue occurs when the system has low available memory, and is not able to allocate memory for mobility service installation. Ensure that enough memory has been freed up for the installation to proceed and complete successfully.
-## Azure Site Recovery extension time-out
+## Azure Site Recovery extension time-out
Error message: "Task execution has timed out while tracking for extension operation to be started"<br> Error code: "151076"
Error code: "151095"
This error occurs when the agent version on the Linux machine is out of date. Complete the following troubleshooting step: -- [The agent installed in the VM is out of date (for Linux VMs)](#the-agent-installed-in-the-vm-is-out-of-date-for-linux-vms)
+- [The agent installed in the VM is out of date (for Linux VMs)](#the-agent-installed-in-the-vm-is-out-of-date-for-linux-vms)
## Causes and solutions
This error occurs when the agent version on the Linux machine is out of date. Co
#### Solution The VM agent might have been corrupted, or the service might have been stopped. Reinstalling the VM agent helps get the latest version. It also helps restart communication with the service.
-1. Determine whether the Windows Azure Guest Agent service is running in the VM services (services.msc). Restart the Windows Azure Guest Agent service.
+1. Determine whether the Windows Azure Guest Agent service is running in the VM services (services.msc). Restart the Windows Azure Guest Agent service.
1. If the Windows Azure Guest Agent service isn't visible in services, open the Control Panel. Go to **Programs and Features** to see whether the Windows Guest Agent service is installed. 1. If the Windows Azure Guest Agent appears in **Programs and Features**, uninstall the Windows Azure Guest Agent. 1. Download and install the [latest version of the agent MSI](https://go.microsoft.com/fwlink/?LinkID=394789&clcid=0x409). You need administrator rights to complete the installation.
Most agent-related or extension-related failures for Linux VMs are caused by iss
```bash sudo systemctl enable --now walinuxagent.service ```
- - For other distributions:
+ - For other distributions:
```bash sudo systemctl enable --now waagent.service
site-recovery Site Recovery Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-whats-new.md
Last updated 12/27/2023 - engagement-fy23
- - devx-track-linux
+ - linux-related-content
- ignite-2023
For Site Recovery components, we support N-4 versions, where N is the latest rel
### Update Rollup 69 > [!Note]
-> - The 9.56 version only has updates for Azure-to-Azure and Modernized VMware-to-Azure protection scenarios.
+> - The 9.56 version only has updates for Azure-to-Azure and Modernized VMware-to-Azure protection scenarios.
[Update rollup 69](https://support.microsoft.com/topic/update-rollup-69-for-azure-site-recovery-kb5033791-a41c2400-0079-4f93-b4a4-366660d0a30d) provides the following updates:
For Site Recovery components, we support N-4 versions, where N is the latest rel
| **Providers and agents** | Updates to Site Recovery agents and providers as detailed in the rollup KB article. **Issue fixes/improvements** | Many fixes and improvement as detailed in the rollup KB article.
-**Azure VM disaster recovery** | Added support for Rocky Linux 8.7, Rocky Linux 9.0, Rocky Linux 9.1 and SUSE Linux Enterprise Server 15 SP5 Linux distros. <br><br/> Added support for Windows 11 servers.
-**VMware VM/physical disaster recovery to Azure** | Added support for Rocky Linux 8.7, Rocky Linux 9.0, Rocky Linux 9.1 and SUSE Linux Enterprise Server 15 SP5 Linux distros.
+**Azure VM disaster recovery** | Added support for Rocky Linux 8.7, Rocky Linux 9.0, Rocky Linux 9.1 and SUSE Linux Enterprise Server 15 SP5 Linux distros. <br><br/> Added support for Windows 11 servers.
+**VMware VM/physical disaster recovery to Azure** | Added support for Rocky Linux 8.7, Rocky Linux 9.0, Rocky Linux 9.1 and SUSE Linux Enterprise Server 15 SP5 Linux distros.
## Updates (November 2023)
You can now also manage Azure Site Recovery protections using Azure Business Con
**Providers and agents** | Updates to Site Recovery agents and providers as detailed in the rollup KB article. **Issue fixes/improvements** | Many fixes and improvement as detailed in the rollup KB article. **Azure VM disaster recovery** | Added support for RHEL 8.8 Linux distros.
-**VMware VM/physical disaster recovery to Azure** | Added support for RHEL 8.8 Linux distros.
+**VMware VM/physical disaster recovery to Azure** | Added support for RHEL 8.8 Linux distros.
## Updates (May 2023)
You can now also manage Azure Site Recovery protections using Azure Business Con
[Update rollup 62](https://support.microsoft.com/topic/update-rollup-62-for-azure-site-recovery-e7aff36f-b6ad-4705-901c-f662c00c402b) provides the following updates: > [!Note]
-> - The 9.49 version has not been released for VMware replications to Azure preview experience.
+> - The 9.49 version has not been released for VMware replications to Azure preview experience.
**Update** | **Details** |
You can now also manage Azure Site Recovery protections using Azure Business Con
**Providers and agents** | Updates to Site Recovery agents and providers as detailed in the rollup KB article. **Issue fixes/improvements** | Many fixes and improvement as detailed in the rollup KB article. **Azure VM disaster recovery** | Added support for more kernels for Debian 10 and Ubuntu 20.04 Linux distros. <br/><br/> Added public preview support for on-Demand Capacity Reservation integration.
-**VMware VM/physical disaster recovery to Azure** | Added support for thin provisioned LVM volumes.<br/><br/>
+**VMware VM/physical disaster recovery to Azure** | Added support for thin provisioned LVM volumes.<br/><br/>
## Updates (January 2022)
You can now also manage Azure Site Recovery protections using Azure Business Con
| **Providers and agents** | Updates to Site Recovery agents and providers as detailed in the rollup KB article. **Issue fixes/improvements** | A number of fixes and improvement as detailed in the rollup KB article.
-**Azure VM disaster recovery** | Support added for retention points to be available for up to 15 days.<br/><br/>Added support for replication to be enabled on Azure virtual machines via Azure Policy. <br/><br/> Added support for ZRS managed disks when replicating Azure virtual machines. <br/><br/> Support added for SUSE Linux Enterprise Server 15 SP3, Red Hat Enterprise Linux 8.4 and Red Hat Enterprise Linux 8.5 <br/><br/>
-**VMware VM/physical disaster recovery to Azure** | Support added for retention points to be available for up to 15 days.<br/><br/>Support added for SUSE Linux Enterprise Server 15 SP3, Red Hat Enterprise Linux 8.4 and Red Hat Enterprise Linux 8.5 <br/><br/>
+**Azure VM disaster recovery** | Support added for retention points to be available for up to 15 days.<br/><br/>Added support for replication to be enabled on Azure virtual machines via Azure Policy. <br/><br/> Added support for ZRS managed disks when replicating Azure virtual machines. <br/><br/> Support added for SUSE Linux Enterprise Server 15 SP3, Red Hat Enterprise Linux 8.4 and Red Hat Enterprise Linux 8.5 <br/><br/>
+**VMware VM/physical disaster recovery to Azure** | Support added for retention points to be available for up to 15 days.<br/><br/>Support added for SUSE Linux Enterprise Server 15 SP3, Red Hat Enterprise Linux 8.4 and Red Hat Enterprise Linux 8.5 <br/><br/>
## Next steps
site-recovery Vmware Azure Install Linux Master Target https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-install-linux-master-target.md
-+ Last updated 08/01/2023
Last updated 08/01/2023
# Install a Linux master target server for failback
-After you fail over your virtual machines to Azure, you can fail back the virtual machines to the on-premises site. To fail back, you need to reprotect the virtual machine from Azure to the on-premises site. For this process, you need an on-premises master target server to receive the traffic.
+After you fail over your virtual machines to Azure, you can fail back the virtual machines to the on-premises site. To fail back, you need to reprotect the virtual machine from Azure to the on-premises site. For this process, you need an on-premises master target server to receive the traffic.
If your protected virtual machine is a Windows virtual machine, then you need a Windows master target. For a Linux virtual machine, you need a Linux master target. Read the following steps to learn how to create and install a Linux master target.
Post comments or questions at the end of this article or on the [Microsoft Q&A q
## Prerequisites
-* To choose the host on which to deploy the master target, determine if the failback is going to be to an existing on-premises virtual machine or to a new virtual machine.
+* To choose the host on which to deploy the master target, determine if the failback is going to be to an existing on-premises virtual machine or to a new virtual machine.
* For an existing virtual machine, the host of the master target should have access to the data stores of the virtual machine. * If the on-premises virtual machine does not exist (in case of Alternate Location Recovery), the failback virtual machine is created on the same host as the master target. You can choose any ESXi host to install the master target. * The master target should be on a network that can communicate with the process server and the configuration server.
Keep an Ubuntu 16.04.2 minimal 64-bit ISO in the DVD drive and start the system.
> From version [9.42](https://support.microsoft.com/en-us/topic/update-rollup-55-for-azure-site-recovery-kb5003408-b19c8190-5f88-43ea-85b1-d9e0cc5ca7e8), the Ubuntu 20.04 operating system is supported for the Linux master target server. If you want to use the latest OS, set up the machine with the Ubuntu 20.04 ISO image.

1. Select **English** as your preferred language, and then select **Enter**.
-
+ ![Select a language](./media/vmware-azure-install-linux-master-target/image1.png) 1. Select **Install Ubuntu Server**, and then select **Enter**.
Keep an Ubuntu 16.04.2 minimal 64-bit ISO in the DVD drive and start the system.
![Select the default option](./media/vmware-azure-install-linux-master-target/image16-ubuntu.png) 1. In the configure proxy selection, select the default option, select **Continue**, and then select **Enter**.
-
+ ![Screenshot that shows where to select Continue and then select Enter.](./media/vmware-azure-install-linux-master-target/image17-ubuntu.png) 1. Select **No automatic updates** option in the selection for managing upgrades on your system, and then select **Enter**.
Keep an Ubuntu 16.04.2 minimal 64-bit ISO in the DVD drive and start the system.
![Select software](./media/vmware-azure-install-linux-master-target/image19-ubuntu.png) 1. In the selection for installing the GRUB boot loader, Select **Yes**, and then select **Enter**.
-
+ ![GRUB boot installer](./media/vmware-azure-install-linux-master-target/image20.png) 1. Select the appropriate device for the boot loader installation (preferably **/dev/sda**), and then select **Enter**.
-
+ ![Select appropriate device](./media/vmware-azure-install-linux-master-target/image21.png) 1. Select **Continue**, and then select **Enter** to finish the installation.
To get the ID for each SCSI hard disk in a Linux virtual machine, the **disk.Ena
4. In the left pane, select **Advanced** > **General**, and then select the **Configuration Parameters** button on the lower-right part of the screen.
- ![Open configuration parameter](./media/vmware-azure-install-linux-master-target/image24-ubuntu.png)
+ ![Open configuration parameter](./media/vmware-azure-install-linux-master-target/image24-ubuntu.png)
The **Configuration Parameters** option is not available when the machine is running. To make this tab active, shut down the virtual machine.
Azure Site Recovery master target server requires a specific version of the Ubun
```bash sudo apt-get install -y multipath-tools lsscsi python-pyasn1 lvm2 kpartx ```
-
+ >[!NOTE] > From version [9.42](https://support.microsoft.com/en-us/topic/update-rollup-55-for-azure-site-recovery-kb5003408-b19c8190-5f88-43ea-85b1-d9e0cc5ca7e8), the Ubuntu 20.04 operating system is supported for the Linux master target server. > If you want to use the latest OS, upgrade the operating system to Ubuntu 20.04 before proceeding. To upgrade the operating system later, you can follow the instructions listed [here](#upgrade-os-of-master-target-server-from-ubuntu-1604-to-ubuntu-2004).
Use the following steps to create a retention disk:
``` 5. Create the **fstab** entry to mount the retention drive every time the system starts.
-
+ ```bash sudo vi /etc/fstab ```
Use the following steps to create a retention disk:
sudo echo <passphrase> >passphrase.txt ```
- Example:
+ Example:
```bash sudo echo itUx70I47uxDuUVY >passphrase.txt`
Use the following steps to create a retention disk:
sudo /usr/local/ASR/Vx/bin/UnifiedAgentConfigurator.sh -i <ConfigurationServer IP Address> -P passphrase.txt ```
- Example:
-
+ Example:
+ ```bash sudo /usr/local/ASR/Vx/bin/UnifiedAgentConfigurator.sh -i 104.40.75.37 -P passphrase.txt ```
site-recovery Vmware Physical Manage Mobility Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-physical-manage-mobility-service.md
Title: Manage the Mobility agent for VMware/physical servers with Azure Site Recovery
+ Title: Manage the Mobility agent for VMware/physical servers with Azure Site Recovery
description: Manage Mobility Service agent for disaster recovery of VMware VMs and physical servers to Azure using the Azure Site Recovery service. -+ Last updated 05/02/2023
-# Manage the Mobility agent
+# Manage the Mobility agent
You set up mobility agent on your server when you use Azure Site Recovery for disaster recovery of VMware VMs and physical servers to Azure. Mobility agent coordinates communications between your protected machine, configuration server/scale-out process server and manages data replication. This article summarizes common tasks for managing mobility agent after it's deployed.
When you deployed Site Recovery, to enable push installation of the Mobility ser
Uninstall from the UI or from a command prompt. - **From the UI**: In the Control Panel of the machine, select **Programs**. Select **Microsoft Azure Site Recovery Mobility Service/Master Target server** > **Uninstall**.-- **From a command prompt**: Open a command prompt window as an administrator on the machine. Run the following command:
+- **From a command prompt**: Open a command prompt window as an administrator on the machine. Run the following command:
``` MsiExec.exe /qn /x {275197FC-14FD-4560-A5EB-38217F80CBD1} /L+*V "C:\ProgramData\ASRSetupLogs\UnifiedAgentMSIUninstall.log" ```
Uninstall from the UI or from a command prompt.
```bash ./uninstall.sh -Y ```
-
+ ## Install Site Recovery VSS provider on source machine Azure Site Recovery VSS provider is required on the source machine to generate application consistency points. If the installation of the provider didn't succeed through push installation, follow the below given guidelines to install it manually.
site-recovery Vmware Physical Mobility Service Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-physical-mobility-service-overview.md
Last updated 08/01/2023-+ # About the Mobility service for VMware VMs and physical servers
During a push installation of the Mobility service, the following steps are perf
1. The agent is pushed to the source machine. Copying the agent to the source machine can fail due to multiple environmental errors. Refer to [our guidance](vmware-azure-troubleshoot-push-install.md) to troubleshoot push installation failures. 1. After the agent is successfully copied to the server, a prerequisite check is performed on the server. - If all prerequisites are met, the installation begins.
- - If one or more [prerequisites](vmware-physical-azure-support-matrix.md) aren't met, the installation fails.
+ - If one or more [prerequisites](vmware-physical-azure-support-matrix.md) aren't met, the installation fails.
1. As part of the agent installation, the Volume Shadow Copy Service (VSS) provider for Azure Site Recovery is installed. The VSS provider is used to generate application-consistent recovery points. If installation of the VSS provider fails, this step is skipped and the agent installation continues. 1. If the agent installation succeeds but the VSS provider installation fails, then the job status is marked as **Warning**. This doesn't impact crash-consistent recovery point generation.
During a push installation of the Mobility service, the following steps are perf
### Mobility service agent version 9.55 and higher
-1. The modernized architecture of mobility agent is set as default for the version 9.55 and above. Follow the instructions [here](#install-the-mobility-service-using-ui-modernized) to install the agent.
-2. To install the modernized architecture of mobility agent on versions 9.54 and above, follow the instructions [here](#install-the-mobility-service-using-command-prompt-modernized).
+1. The modernized architecture of mobility agent is set as default for the version 9.55 and above. Follow the instructions [here](#install-the-mobility-service-using-ui-modernized) to install the agent.
+2. To install the modernized architecture of mobility agent on versions 9.54 and above, follow the instructions [here](#install-the-mobility-service-using-command-prompt-modernized).
## Install the Mobility service using UI (Modernized)
During a push installation of the Mobility service, the following steps are perf
### Prerequisites
-Locate the installer files for the serverΓÇÖs operating system using the following steps:
+Locate the installer files for the serverΓÇÖs operating system using the following steps:
- On the appliance, go to the folder *E:\Software\Agents*. - Copy the installer corresponding to the source machineΓÇÖs operating system and place it on your source machine in a local folder, such as *C:\Program Files (x86)\Microsoft Azure Site Recovery*.
Locate the installer files for the serverΓÇÖs operating system using the followi
**Use the following steps to install the mobility service:** >[!NOTE]
-> If installing the agent version 9.54 and below, then ensure that the section [here](#install-the-mobility-service-using-command-prompt-modernized) is followed. For agent version 9.55 and above, the continue to follow the steps below.
+> If you're installing agent version 9.54 or below, follow the section [here](#install-the-mobility-service-using-command-prompt-modernized). For agent version 9.55 and above, continue to follow the steps below.
-1. Copy the installation file to the location *C:\Program Files (x86)\Microsoft Azure Site Recovery*, and run it. This will launch the installer UI:
+1. Copy the installation file to the location *C:\Program Files (x86)\Microsoft Azure Site Recovery*, and run it. This will launch the installer UI:
- ![Image showing Install UI option for Mobility Service](./media/vmware-physical-mobility-service-overview-modernized/mobility-service-install.png)
+ ![Image showing Install UI option for Mobility Service](./media/vmware-physical-mobility-service-overview-modernized/mobility-service-install.png)
-2. Provide the install location in the UI. This should be *C:\Program Files (x86)\Microsoft Azure Site Recovery*.
+2. Provide the install location in the UI. This should be *C:\Program Files (x86)\Microsoft Azure Site Recovery*.
-4. Click **Install**. This will start the installation of Mobility Service. Wait till the installation has been completed.
+4. Click **Install**. This will start the installation of Mobility Service. Wait till the installation has been completed.
![Image showing Installation progress for Mobility Service](./media/vmware-physical-mobility-service-overview-modernized/installation-progress.png)
Setting | Details
Syntax | `"<InstallLocation>\UnifiedAgentConfigurator.exe" /SourceConfigFilePath "config.json" /CSType CSPrime /CredentialLessDiscovery true` `/SourceConfigFilePath` | Mandatory. Full file path of the Mobility Service configuration file. Use any valid folder. `/CSType` | Optional. Used to define modernized or legacy architecture. By default for all agents on or above the version 9.55, modernized architecture would be launched. (CSPrime or CSLegacy).
-`/CredentialLessDiscovery` | Optional. Specifies whether credential-less discovery will be performed or not.
+`/CredentialLessDiscovery` | Optional. Specifies whether credential-less discovery will be performed or not.
### Linux machine
Syntax | `"<InstallLocation>\UnifiedAgentConfigurator.exe" /SourceConfigFilePath
Setting | Details |
- Syntax | `<InstallLocation>/Vx/bin/UnifiedAgentConfigurator.sh -S config.json -q -D true -c CSPrime`
+ Syntax | `<InstallLocation>/Vx/bin/UnifiedAgentConfigurator.sh -S config.json -q -D true -c CSPrime`
`-S` | Mandatory. Full file path of the Mobility Service configuration file. Use any valid folder. `-c` | Optional. Used to define modernized and legacy architecture. By default for all agents on or above the version 9.55, modernized architecture would be launched. (CSPrime or CSLegacy). `-q` | Optional. Specifies whether to run the installer in silent mode.
- `-D` | Optional. Specifies whether credential-less discovery will be performed or not.
+ `-D` | Optional. Specifies whether credential-less discovery will be performed or not.
## Credential-less discovery in modernized architecture
-When providing both the machine credentials and the vCenter server or vSphere ESXi host credentials is not possible, then you should opt for credential-less discovery. When performing credential-less discovery, mobility service is installed manually on the source machine and during the installation, the check box for credential-less discovery should be set to true, so that when replication is enabled, no credentials will be required.
+If you can't provide both the machine credentials and the vCenter server or vSphere ESXi host credentials, opt for credential-less discovery. When you perform credential-less discovery, the Mobility service is installed manually on the source machine. During installation, set the credential-less discovery check box to true so that no credentials are required when replication is enabled.
![Screenshot showing credential-less-discovery-check-box.](./media/vmware-physical-mobility-service-overview-modernized/credential-less-discovery.png)
See information about [upgrading the mobility services](upgrade-mobility-service
Microsoft-ASR_UA*Windows*release.exe /q /x:C:\Program Files (x86)\Microsoft Azure Site Recovery ``` 1. Run the below command to launch the installation wizard for the agent .
- ```cmd
+ ```cmd
UnifiedAgentInstaller.exe /CSType CSLegacy ``` 1. In **Installation Option**, select **Install mobility service**.
Setting | Details
| Syntax | `UnifiedAgent.exe /Role \<MS/MT> /InstallLocation \<Install Location> /Platform "VmWare" /Silent /CSType CSLegacy` Setup logs | `%ProgramData%\ASRSetupLogs\ASRUnifiedAgentInstaller.log`
-`/Role` | Mandatory installation parameter. Specifies whether the mobility service (MS) or master target (MT) should be installed.
+`/Role` | Mandatory installation parameter. Specifies whether the mobility service (MS) or master target (MT) should be installed.
`/InstallLocation`| Optional parameter. Specifies the Mobility service installation location (any folder). `/Platform` | Mandatory. Specifies the platform on which the Mobility service is installed: <br/> **VMware** for VMware VMs/physical servers. <br/> **Azure** for Azure VMs.<br/><br/> If you're treating Azure VMs as physical machines, specify **VMware**. `/Silent`| Optional. Specifies whether to run the installer in silent mode.
Installer file | Operating system (64-bit only)
## Download latest mobility agent installer for SUSE 11 SP3, SUSE 11 SP4, RHEL 5, Cent OS 5, Debian 7, Debian 8, Debian 9, Oracle Linux 6 and Ubuntu 14.04 server
-### SUSE 11 SP3 or SUSE 11 SP4 server
+### SUSE 11 SP3 or SUSE 11 SP4 server
As a **prerequisite to update or protect SUSE Linux Enterprise Server 11 SP3 or SUSE 11 SP4 machines** from 9.36 version onwards:
spring-apps How To Enterprise Service Registry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-enterprise-service-registry.md
This article uses the following environment variables. Set these variables to th
## Create Service A with Spring Boot
-Navigate to [Spring Initializr](https://start.spring.io/#!type=maven-project&language=java&platformVersion=2.6.4&packaging=jar&jvmVersion=11&groupId=com.example&artifactId=Sample%20Service%20A&name=Sample%20Service%20A&description=Demo%20project%20for%20Spring%20Boot&packageName=com.example.Sample%20Service%20A&dependencies=web,cloud-eureka) to create sample Service A. This link uses the following URL to initialize the settings.
+Navigate to [Spring Initializr](https://start.spring.io/#!type=maven-project&language=java&packaging=jar&groupId=com.example&artifactId=Sample%20Service%20A&name=Sample%20Service%20A&description=Demo%20project%20for%20Spring%20Boot&packageName=com.example.Sample%20Service%20A&dependencies=web,cloud-eureka) to create sample Service A. This link uses the following URL to initialize the settings.
```URL
-https://start.spring.io/#!type=maven-project&language=java&platformVersion=2.6.4&packaging=jar&jvmVersion=11&groupId=com.example&artifactId=Sample%20Service%20A&name=Sample%20Service%20A&description=Demo%20project%20for%20Spring%20Boot&packageName=com.example.Sample%20Service%20A&dependencies=web,cloud-eureka
+https://start.spring.io/#!type=maven-project&language=java&packaging=jar&groupId=com.example&artifactId=Sample%20Service%20A&name=Sample%20Service%20A&description=Demo%20project%20for%20Spring%20Boot&packageName=com.example.Sample%20Service%20A&dependencies=web,cloud-eureka
``` The following screenshot shows Spring Initializr with the required settings.
You can now register the service to the Service Registry (Eureka Server) in Azur
### Implement Service B with Spring Boot
-Navigate to [Spring Initializr](https://start.spring.io/#!type=maven-project&language=java&platformVersion=2.6.4&packaging=jar&jvmVersion=11&groupId=com.example&artifactId=Sample%20Service%20B&name=Sample%20Service%20B&description=Demo%20project%20for%20Spring%20Boot&packageName=com.example.Sample%20Service%20B&dependencies=web,cloud-eureka) to create a new project for Service B. This link uses the following URL to initialize the settings:
+Navigate to [Spring Initializr](https://start.spring.io/#!type=maven-project&language=java&packaging=jar&groupId=com.example&artifactId=Sample%20Service%20B&name=Sample%20Service%20B&description=Demo%20project%20for%20Spring%20Boot&packageName=com.example.Sample%20Service%20B&dependencies=web,cloud-eureka) to create a new project for Service B. This link uses the following URL to initialize the settings:
```URL
-https://start.spring.io/#!type=maven-project&language=java&platformVersion=2.6.4&packaging=jar&jvmVersion=11&groupId=com.example&artifactId=Sample%20Service%20B&name=Sample%20Service%20B&description=Demo%20project%20for%20Spring%20Boot&packageName=com.example.Sample%20Service%20B&dependencies=web,cloud-eureka
+https://start.spring.io/#!type=maven-project&language=java&packaging=jar&groupId=com.example&artifactId=Sample%20Service%20B&name=Sample%20Service%20B&description=Demo%20project%20for%20Spring%20Boot&packageName=com.example.Sample%20Service%20B&dependencies=web,cloud-eureka
``` Then, select **GENERATE** to get the new project.
spring-apps Quickstart Deploy Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-deploy-apps.md
To deploy to Azure, you must sign in with your Azure account with Azure Toolkit
1. In the **Name** field, append *:api-gateway* to the existing **Name**. 1. In the **Artifact** textbox, select *spring-petclinic-api-gateway-3.0.1*. 1. In the **Subscription** textbox, verify your subscription.
-1. In the **Spring Cloud** textbox, select the instance of Azure Spring Apps that you created in [Provision Azure Spring Apps instance](./quickstart-provision-service-instance.md).
-1. Set **Public Endpoint** to *Enable*.
+1. In the **Spring Apps** textbox, select the instance of Azure Spring Apps that you created in [Provision Azure Spring Apps instance](./quickstart-provision-service-instance.md).
1. In the **App:** textbox, select **Create app...**. 1. Enter *api-gateway*, then select **OK**.
+1. Set **Public Endpoint** to *Enable*.
1. Specify the memory to 2 GB and JVM options: `-Xms2048m -Xmx2048m`. :::image type="content" source="media/quickstart-deploy-apps/memory-jvm-options.png" alt-text="Screenshot of memory and JVM options." lightbox="media/quickstart-deploy-apps/memory-jvm-options.png":::
storage Blobfuse2 Commands Mount All https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands-mount-all.md
description: Learn how to use the 'blobfuse2 mount all' all command to mount all blob containers in a storage account as a Linux file system. -+ Last updated 12/02/2022
storage Blobfuse2 Commands Mount List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands-mount-list.md
description: Learn how to use the 'blobfuse2 mount list' command to display all BlobFuse2 mount points. -+ Last updated 12/02/2022
storage Blobfuse2 Commands Mount https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands-mount.md
description: Learn how to use the 'blobfuse2 mount' command to mount a Blob Storage container as a file system in Linux, or to display and manage existing mount points. -+ Last updated 12/02/2022
storage Blobfuse2 Commands Mountv1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands-mountv1.md
description: How to generate a configuration file for BlobFuse2 from a BlobFuse v1 configuration file. -+ Last updated 12/02/2022
storage Blobfuse2 Commands Unmount All https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands-unmount-all.md
description: Learn how to use the 'blobfuse2 unmount all' command to unmount all blob containers in a storage account as a Linux file system. -+ Last updated 12/02/2022
storage Blobfuse2 Commands Unmount https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands-unmount.md
description: How to use the 'blobfuse2 unmount' command to unmount an existing mount point. -+ Last updated 12/02/2022
storage Blobfuse2 How To Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-how-to-deploy.md
Last updated 01/26/2023-+ # How to mount an Azure Blob Storage container on Linux with BlobFuse2
To install BlobFuse2 from the repositories:
Configure the [Linux Package Repository for Microsoft Products](/windows-server/administration/Linux-Package-Repository-for-Microsoft-Software).
-# [RHEL](#tab/RHEL)
+# [RHEL](#tab/RHEL)
As an example, on a Redhat Enterprise Linux 8 distribution:
sudo rpm -Uvh https://packages.microsoft.com/config/rhel/8/packages-microsoft-pr
Similarly, change the URL to `.../rhel/7/...` to point to a Redhat Enterprise Linux 7 distribution. # [CentOS](#tab/CentOS)
-
+ As an example, on a CentOS 8 distribution: ```bash
Another example on an Ubuntu 20.04 distribution:
sudo wget https://packages.microsoft.com/config/ubuntu/20.04/packages-microsoft-prod.deb sudo dpkg -i packages-microsoft-prod.deb sudo apt-get update
-sudo apt-get install libfuse3-dev fuse3
+sudo apt-get install libfuse3-dev fuse3
``` Similarly, change the URL to `.../ubuntu/16.04/...` or `.../ubuntu/18.04/...` to reference another Ubuntu version.
-# [SLES](#tab/SLES)
+# [SLES](#tab/SLES)
```bash sudo rpm -Uvh https://packages.microsoft.com/config/sles/15/packages-microsoft-prod.rpm ```
-
+ #### Install BlobFuse2
-# [RHEL](#tab/RHEL)
+# [RHEL](#tab/RHEL)
```bash sudo yum install blobfuse2
sudo yum install blobfuse2
```bash sudo apt-get install blobfuse2 ```
-# [SLES](#tab/SLES)
+# [SLES](#tab/SLES)
```bash sudo zypper install blobfuse2
storage Simulate Primary Region Failure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/simulate-primary-region-failure.md
Last updated 09/06/2022
ms.devlang: javascript-+ # Tutorial: Simulate a failure in reading data from the primary region
storage Storage How To Mount Container Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-how-to-mount-container-linux.md
Last updated 12/02/2022 -+ # How to mount Azure Blob Storage as a file system with BlobFuse v1
cat /etc/*-release
Configure the [Linux Package Repository for Microsoft Products](/windows-server/administration/Linux-Package-Repository-for-Microsoft-Software).
-# [RHEL](#tab/RHEL)
+# [RHEL](#tab/RHEL)
As an example, on a Redhat Enterprise Linux 8 distribution:
sudo rpm -Uvh https://packages.microsoft.com/config/rhel/8/packages-microsoft-pr
Similarly, change the URL to `.../rhel/7/...` to point to a Red Hat Enterprise Linux 7 distribution. # [CentOS](#tab/CentOS)
-
+ As an example, on a CentOS 8 distribution: ```bash
sudo apt-get update
Similarly, change the URL to `.../ubuntu/16.04/...` or `.../ubuntu/18.04/...` to reference another Ubuntu version.
-# [SLES](#tab/SLES)
+# [SLES](#tab/SLES)
```bash sudo rpm -Uvh https://packages.microsoft.com/config/sles/15/packages-microsoft-prod.rpm ```
-
+ ### Install BlobFuse v1
-# [RHEL](#tab/RHEL)
+# [RHEL](#tab/RHEL)
```bash sudo yum install blobfuse
sudo yum install blobfuse
```bash sudo apt-get install blobfuse ```
-# [SLES](#tab/SLES)
+# [SLES](#tab/SLES)
```bash sudo zypper install blobfuse
containerName mycontainer
authType Key ```
-The `accountName` is the name of your storage account, and not the full URL. You need to update `myaccount`, `storageaccesskey`, and `mycontainer` with your storage information.
+The `accountName` is the name of your storage account, and not the full URL. You need to update `myaccount`, `storageaccesskey`, and `mycontainer` with your storage information.
Create this file using:
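A hedged sketch (with placeholder paths and options, not the original article's exact steps) of creating and protecting the connection file, and then mounting the container with BlobFuse v1:

```bash
# Create the connection file and restrict its permissions, because it contains the account key.
touch ~/fuse_connection.cfg
chmod 600 ~/fuse_connection.cfg

# Create a local cache directory and a mount point, then mount the container.
# The paths and -o options below are illustrative assumptions, not the article's exact values.
mkdir -p ~/blobfusetmp ~/mycontainer
blobfuse ~/mycontainer \
    --tmp-path=$HOME/blobfusetmp \
    --config-file=$HOME/fuse_connection.cfg \
    -o attr_timeout=240 -o entry_timeout=240 -o negative_timeout=120
```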
storage Upgrade To Data Lake Storage Gen2 How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/upgrade-to-data-lake-storage-gen2-how-to.md
Previously updated : 07/20/2023 Last updated : 01/18/2024
In some cases, you will have to allow time for clean-up operations after a featu
> [!IMPORTANT] > You cannot upgrade a storage account to Data Lake Storage Gen2 that has **ever** had the change feed feature enabled.
-> Simply disabling change feed will not allow you to perform an upgrade. To convert such an account to Data Lake Storage Gen2, you must perform a manual migration. For migration options, see [Migrate a storage account](../common/storage-account-overview.md#migrate-a-storage-account).
+> Simply disabling change feed will not allow you to perform an upgrade. Instead, you must create an account with the hierarchical namespace feature enabled on it, and then transfer your data into that account.
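For illustration only, a hedged Azure CLI sketch of creating a new account with the hierarchical namespace enabled might look like the following; the account name, resource group, location, SKU, and kind are placeholder assumptions, not values from the article.

```bash
# Create a StorageV2 account with the hierarchical namespace (Data Lake Storage Gen2) enabled.
# All names and values below are placeholders to adapt to your environment.
az storage account create \
    --name "<new-storage-account-name>" \
    --resource-group "<resource-group>" \
    --location "<region>" \
    --sku Standard_LRS \
    --kind StorageV2 \
    --enable-hierarchical-namespace true
```

You can then copy data into the new account with a tool such as AzCopy.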
### Ensure the segments of each blob path are named
After the upgrade has completed, break the leases you created to resume allowing
1. First, open the [Azure Cloud Shell](../../cloud-shell/overview.md), or if you've [installed](/cli/azure/install-azure-cli) the Azure CLI locally, open a command console application such as Windows PowerShell.
-2. Verify that the version of Azure CLI that have installed is `2.29.0` or higher by using the following command.
+2. Verify that the version of Azure CLI that you have installed is `2.29.0` or higher by using the following command.
```azurecli az --version
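# Not part of the original article: a hedged sketch of breaking a container lease after the
# upgrade completes, so that writes to the container can resume. The container and account
# names are placeholders, and --auth-mode login assumes you hold the required data-plane role.
az storage container lease break \
    --container-name "<container-name>" \
    --account-name "<storage-account-name>" \
    --auth-mode login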
storage Elastic San Connect Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-connect-linux.md
Last updated 09/12/2023 -+ # Connect to Elastic SAN Preview volumes - Linux
To achieve higher IOPS and throughput to a volume and reach its maximum limits,
Install the Multipath I/O package for your Linux distribution. The installation will vary based on your distribution, so consult your distribution's documentation. For example, on Ubuntu the command is `sudo apt install multipath-tools`, on SLES it's `sudo zypper install multipath-tools`, and on RHEL it's `sudo yum install device-mapper-multipath`.
-Once you've installed the package, check if **/etc/multipath.conf** exists. If **/etc/multipath.conf** doesn't exist, create an empty file and use the settings in the following example for a general configuration. As an example, `mpathconf --enable` will create **/etc/multipath.conf** on RHEL.
+Once you've installed the package, check if **/etc/multipath.conf** exists. If **/etc/multipath.conf** doesn't exist, create an empty file and use the settings in the following example for a general configuration. As an example, `mpathconf --enable` will create **/etc/multipath.conf** on RHEL.
You'll need to make some modifications to **/etc/multipath.conf**: add the devices section shown in the following example, and note that the defaults section in the example sets some defaults that are generally applicable. If you need to make any other specific configurations, such as excluding volumes from the multipath topology, see the manual page for multipath.conf.
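As a hedged illustration (not the article's exact configuration), writing a general-purpose **/etc/multipath.conf** from the shell might look like the following; verify the defaults and the vendor/product strings against the Elastic SAN documentation for your distribution before using them.

```bash
# Write a general-purpose multipath configuration. The settings below are illustrative
# assumptions; confirm them against the Elastic SAN documentation for your distribution.
sudo tee /etc/multipath.conf > /dev/null <<'EOF'
defaults {
    user_friendly_names yes    # Use mpathN names for multipath devices
    failback immediate         # Fail back as soon as a higher-priority path group recovers
}
devices {
    device {
        vendor  "MSFT"
        product "Virtual HD"
    }
}
EOF

# Restart the multipath daemon so the new configuration takes effect.
sudo systemctl restart multipathd
```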
You can use the following script to create your connections. To execute it, you
- n <vol1, vol2, ...>: Names of volumes 1 and 2 and other volume names that you might require, comma separated
- s: Number of sessions to each volume (set to 32 by default)
-Copy the script from [here](https://github.com/Azure-Samples/azure-elastic-san/blob/main/CLI%20(Linux)%20Multi-Session%20Connect%20Scripts/connect_for_documentation.py) and save it as a .py file, for example, connect.py. Then execute it with the required parameters. The following is an example of how you'd run the script:
+Copy the script from [here](https://github.com/Azure-Samples/azure-elastic-san/blob/main/CLI%20(Linux)%20Multi-Session%20Connect%20Scripts/connect_for_documentation.py) and save it as a .py file, for example, connect.py. Then execute it with the required parameters. The following is an example of how you'd run the script:
```bash ./connect.py --subscription <subid> -g <rgname> -e <esanname> -v <vgname> -n <vol1, vol2> -s 32
storage Elastic San Scale Targets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-scale-targets.md
ZRS is only available in France Central, North Europe, West Europe and West US 2
|Maximum total IOPS |500,000 |500,000 |500,000 |500,000 |
|Maximum total throughput (MB/s) |8,000 |8,000 |8,000 |8,000 |
-#### Quota Increases
+#### Quota and Capacity Increases
To increase quota, raise a support ticket with the subscription ID and region information to request an increase in quota for the "Maximum number of Elastic SAN that can be deployed per subscription per region".
+For capacity increase requests, raise a support ticket with the subscription ID and region information, and the request will be evaluated.
+ ## Volume group An Elastic SAN can have a maximum of 200 volume groups, and a volume group can contain up to 1,000 volumes.
storage File Sync Networking Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-networking-endpoints.md
Last updated 04/26/2023 -+ # Configuring Azure File Sync network endpoints
-Azure Files and Azure File Sync provide two main types of endpoints for accessing Azure file shares:
+Azure Files and Azure File Sync provide two main types of endpoints for accessing Azure file shares:
- Public endpoints, which have a public IP address and can be accessed from anywhere in the world. - Private endpoints, which exist within a virtual network and have a private IP address from within the address space of that virtual network.
This article focuses on how to configure the networking endpoints for both Azure
We recommend reading [Azure File Sync networking considerations](file-sync-networking-overview.md) prior to reading this how to guide.
-## Prerequisites
+## Prerequisites
This article assumes that: - You have an Azure subscription. If you don't already have a subscription, then create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. - You have already created an Azure file share in a storage account which you would like to connect to from on-premises. To learn how to create an Azure file share, see [Create an Azure file share](../files/storage-how-to-create-file-share.md?toc=/azure/storage/filesync/toc.json).
When you create a private endpoint for an Azure resource, the following resource
- **A network interface (NIC)**: The network interface that maintains a private IP address within the specified virtual network/subnet. This is the exact same resource that gets deployed when you deploy a virtual machine; however, instead of being assigned to a VM, it's owned by the private endpoint.
- **A private DNS zone**: If you've never deployed a private endpoint for this virtual network before, a new private DNS zone will be deployed for your virtual network. A DNS A record will also be created for the Azure resource in this DNS zone. If you've already deployed a private endpoint in this virtual network, a new A record for the Azure resource will be added to the existing DNS zone. Deploying a DNS zone is optional, but highly recommended to simplify the DNS management required.
-> [!Note]
+> [!Note]
> This article uses the DNS suffixes for the Azure Public regions, `core.windows.net` for storage accounts and `afs.azure.net` for Storage Sync Services. This commentary also applies to Azure Sovereign clouds such as the Azure US Government cloud - just substitute the appropriate suffixes for your environment. ### Create the storage account private endpoint
In the **Basics** blade, select the desired resource group, name, and region for
![A screenshot of the Basics section of the create private endpoint section](media/storage-sync-files-networking-endpoints/create-storage-sync-private-endpoint-1.png)
-In the **Resource** blade, select the radio button for **Connect to an Azure resource in my directory**. Under the **Resource type**, select **Microsoft.StorageSync/storageSyncServices** for the resource type.
+In the **Resource** blade, select the radio button for **Connect to an Azure resource in my directory**. Under the **Resource type**, select **Microsoft.StorageSync/storageSyncServices** for the resource type.
The **Configuration** blade allows you to select the specific virtual network and subnet you would like to add your private endpoint to. Select the same virtual network as the one you used for the storage account above. The Configuration blade also contains the information for creating/updating the private DNS zone. Click **Review + create** to create the private endpoint.
-You can test that your private endpoint has been set up correctly by running the following commands from PowerShell.
+You can test that your private endpoint has been set up correctly by running the following commands from PowerShell.
```powershell $privateEndpointResourceGroupName = "<your-private-endpoint-resource-group>"
IP4Address : 192.168.1.7
``` # [PowerShell](#tab/azure-powershell)
-To create a private endpoint for your Storage Sync Service, first you will need to get a reference to your Storage Sync Service. Remember to replace `<storage-sync-service-resource-group>` and `<storage-sync-service>` with the correct values for your environment. The following PowerShell commands assume that you are using have already populated the virtual network information from above.
+To create a private endpoint for your Storage Sync Service, first you will need to get a reference to your Storage Sync Service. Remember to replace `<storage-sync-service-resource-group>` and `<storage-sync-service>` with the correct values for your environment. The following PowerShell commands assume that you have already populated the virtual network information from above.
```powershell $storageSyncServiceResourceGroupName = "<storage-sync-service-resource-group>"
if ($null -eq $storageSyncService) {
To create a private endpoint, you must create a private link service connection to the Storage Sync Service. The private link connection is an input to the creation of the private endpoint.
-```powershell
+```powershell
# Disable private endpoint network policies $subnet.PrivateEndpointNetworkPolicies = "Disabled" $virtualNetwork = $virtualNetwork | `
$privateEndpoint = New-AzPrivateEndpoint `
-ErrorAction Stop ```
-Creating an Azure private DNS zone enables the host names for the Storage Sync Service, such as `mysssmanagement.westus2.afs.azure.net`, to resolve to the correct private IPs for the Storage Sync Service inside of the virtual network. Although optional from the perspective of creating a private endpoint, it is explicitly required for the Azure File Sync agent to access the Storage Sync Service.
+Creating an Azure private DNS zone enables the host names for the Storage Sync Service, such as `mysssmanagement.westus2.afs.azure.net`, to resolve to the correct private IPs for the Storage Sync Service inside of the virtual network. Although optional from the perspective of creating a private endpoint, it is explicitly required for the Azure File Sync agent to access the Storage Sync Service.
```powershell # Get the desired Storage Sync Service suffix (afs.azure.net for public cloud).
switch($azureEnvironment) {
"AzureUSGovernment" { $storageSyncSuffix = "afs.azure.us"
- }
+ }
"AzureChinaCloud" { $storageSyncSuffix = "afs.azure.cn" }
-
+ default {
- Write-Error
+ Write-Error
-Message "The Azure environment $_ is not currently supported by Azure File Sync." ` -ErrorAction Stop }
$dnsZone = Get-AzPrivateDnsZone | `
-ResourceGroupName $_.ResourceGroupName ` -ZoneName $_.Name ` -ErrorAction SilentlyContinue
-
+ $privateDnsLink.VirtualNetworkId -eq $virtualNetwork.Id }
if ($null -eq $dnsZone) {
``` Now that you have a reference to the private DNS zone, you must create an A record for your Storage Sync Service.
-```powershell
+```powershell
$privateEndpointIpFqdnMappings = $privateEndpoint | ` Select-Object -ExpandProperty NetworkInterfaces | ` Select-Object -ExpandProperty Id | ` ForEach-Object { Get-AzNetworkInterface -ResourceId $_ } | ` Select-Object -ExpandProperty IpConfigurations | `
- ForEach-Object {
- $privateIpAddress = $_.PrivateIpAddress;
+ ForEach-Object {
+ $privateIpAddress = $_.PrivateIpAddress;
$_ | ` Select-Object -ExpandProperty PrivateLinkConnectionProperties | ` Select-Object -ExpandProperty Fqdns | ` Select-Object `
- @{
- Name = "PrivateIpAddress";
- Expression = { $privateIpAddress }
+ @{
+ Name = "PrivateIpAddress";
+ Expression = { $privateIpAddress }
}, `
- @{
- Name = "FQDN";
- Expression = { $_ }
- }
+ @{
+ Name = "FQDN";
+ Expression = { $_ }
+ }
} foreach($ipFqdn in $privateEndpointIpFqdnMappings) { $privateDnsRecordConfig = New-AzPrivateDnsRecordConfig ` -IPv4Address $ipFqdn.PrivateIpAddress
-
- $dnsEntry = $ipFqdn.FQDN.Substring(0,
+
+ $dnsEntry = $ipFqdn.FQDN.Substring(0,
$ipFqdn.FQDN.IndexOf(".", $ipFqdn.FQDN.IndexOf(".") + 1)) New-AzPrivateDnsRecordSet `
foreach($ipFqdn in $privateEndpointIpFqdnMappings) {
``` # [Azure CLI](#tab/azure-cli)
-To create a private endpoint for your Storage Sync Service, first you will need to get a reference to your Storage Sync Service. Remember to replace `<storage-sync-service-resource-group>` and `<storage-sync-service>` with the correct values for your environment. The following CLI commands assume that you are using have already populated the virtual network information from above.
+To create a private endpoint for your Storage Sync Service, first you will need to get a reference to your Storage Sync Service. Remember to replace `<storage-sync-service-resource-group>` and `<storage-sync-service>` with the correct values for your environment. The following CLI commands assume that you have already populated the virtual network information from above.
```azurecli storageSyncServiceResourceGroupName="<storage-sync-service-resource-group>"
privateEndpoint=$(az network private-endpoint create \
tr -d '"') ```
-Creating an Azure private DNS zone enables the host names for the Storage Sync Service, such as `mysssmanagement.westus2.afs.azure.net`, to resolve to the correct private IPs for the Storage Sync Service inside of the virtual network. Although optional from the perspective of creating a private endpoint, it is explicitly required for the Azure File Sync agent to access the Storage Sync Service.
+Creating an Azure private DNS zone enables the host names for the Storage Sync Service, such as `mysssmanagement.westus2.afs.azure.net`, to resolve to the correct private IPs for the Storage Sync Service inside of the virtual network. Although optional from the perspective of creating a private endpoint, it is explicitly required for the Azure File Sync agent to access the Storage Sync Service.
```azurecli # Get the desired storage account suffix (afs.azure.net for public cloud).
do
--ids $possibleDnsZone \ --query "resourceGroup" | \ tr -d '"')
-
+ link=$(az network private-dns link vnet list \ --resource-group $possibleResourceGroupName \ --zone-name $dnsZoneName \ --query "[?virtualNetwork.id == '$virtualNetwork'].id" \ --output tsv)
-
+ if [ -z $link ] then echo "1" >
- else
+ else
dnsZoneResourceGroup=$possibleResourceGroupName dnsZone=$possibleDnsZone break
- fi
+ fi
done if [ -z $dnsZone ]
then
--name $dnsZoneName \ --query "id" | \ tr -d '"')
-
+ az network private-dns link vnet create \ --resource-group $virtualNetworkResourceGroupName \ --zone-name $dnsZoneName \
then
--virtual-network $virtualNetwork \ --registration-enabled false \ --output none
-
+ dnsZoneResourceGroup=$virtualNetworkResourceGroupName fi ```
privateEndpointNIC=$(az network private-endpoint show \
privateIpAddresses=$(az network nic show \ --ids $privateEndpointNIC \ --query "ipConfigurations[].privateIpAddress" \
- --output tsv)
+ --output tsv)
hostNames=$(az network nic show \ --ids $privateEndpointNIC \
do
--zone-name $dnsZoneName \ --name "$endpointName.$storageSyncServiceRegion" \ --output none
-
+ az network private-dns record-set a add-record \ --resource-group $dnsZoneResourceGroup \ --zone-name $dnsZoneName \
done
## Restrict access to the public endpoints
-You can restrict access to the public endpoints of both the storage account and the Storage Sync Services. Restrict access to the public endpoint provides additional security by ensuring that network packets are only accepted from approved locations.
+You can restrict access to the public endpoints of both the storage account and the Storage Sync Service. Restricting access to the public endpoints provides additional security by ensuring that network packets are only accepted from approved locations.
### Restrict access to the storage account public endpoint Access restriction to the public endpoint is done using the storage account firewall settings. In general, most firewall policies for a storage account will restrict networking access to one or more virtual networks. There are two approaches to restricting access to a storage account to a virtual network:
Access restriction to the public endpoint is done using the storage account fire
- [Create one or more private endpoints for the storage account](#create-the-storage-account-private-endpoint) and disable access to the public endpoint. This ensures that only traffic originating from within the desired virtual networks can access the Azure file shares within the storage account. - Restrict the public endpoint to one or more virtual networks. This works by using a capability of the virtual network called *service endpoints*. When you restrict the traffic to a storage account via a service endpoint, you are still accessing the storage account via the public IP address.
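For the second approach, a hedged Azure CLI sketch might look like the following; the resource names are placeholders, and this isn't the article's own command sequence.

```bash
# Enable the Microsoft.Storage service endpoint on the subnet, then allow that subnet
# through the storage account's firewall. All names below are placeholders.
az network vnet subnet update \
    --resource-group "<vnet-resource-group>" \
    --vnet-name "<vnet-name>" \
    --name "<subnet-name>" \
    --service-endpoints Microsoft.Storage

az storage account network-rule add \
    --resource-group "<storage-account-resource-group>" \
    --account-name "<storage-account-name>" \
    --vnet-name "<vnet-name>" \
    --subnet "<subnet-name>"
```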
-> [!Note]
+> [!Note]
> The **Allow Azure services on the trusted services list to access this storage account** exception must be selected on your storage account to allow trusted first party Microsoft services such as Azure File Sync to access the storage account. To learn more, see [Grant access to trusted Azure services](../common/storage-network-security.md#grant-access-to-trusted-azure-services). #### Grant access to trusted Azure services and disable access to the storage account public endpoint
-When access to the public endpoint is disabled, the storage account can still be accessed through its private endpoints. Otherwise valid requests to the storage account's public endpoint will be rejected.
+When access to the public endpoint is disabled, the storage account can still be accessed through its private endpoints. Otherwise-valid requests to the storage account's public endpoint will be rejected.
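For reference only, a hedged Azure CLI sketch of this configuration (the account and resource group names are placeholders):

```bash
# Keep the trusted-services exception so first-party services such as Azure File Sync can
# still reach the account, then deny all other traffic to the public endpoint.
az storage account update \
    --resource-group "<storage-account-resource-group>" \
    --name "<storage-account-name>" \
    --bypass AzureServices \
    --default-action Deny
```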
# [Portal](#tab/azure-portal) [!INCLUDE [storage-files-networking-endpoints-public-disable-portal](../../../includes/storage-files-networking-endpoints-public-disable-portal.md)]
When access to the public endpoint is disabled, the storage account can still be
-#### Grant access to trusted Azure services and restrict access to the storage account public endpoint to specific virtual networks
+#### Grant access to trusted Azure services and restrict access to the storage account public endpoint to specific virtual networks
When you restrict the storage account to specific virtual networks, you are allowing requests to the public endpoint from within the specified virtual networks. This works by using a capability of the virtual network called *service endpoints*. This can be used with or without private endpoints. # [Portal](#tab/azure-portal)
When you restrict the storage account to specific virtual networks, you are allo
### Disable access to the Storage Sync Service public endpoint Azure File Sync enables you to restrict access to specific virtual networks through private endpoints only; Azure File Sync doesn't support service endpoints for restricting access to the public endpoint to specific virtual networks. This means that the two states for the Storage Sync Service's public endpoint are **enabled** and **disabled**.
-> [!IMPORTANT]
+> [!IMPORTANT]
> You must create a private endpoint before disabling access to the public endpoint. If the public endpoint is disabled and there's no private endpoint configured, sync can't work. # [Portal](#tab/azure-portal)
The following pre-defined policies are available for Azure Files and Azure File
### Set up a private endpoint deployment policy To set up a private endpoint deployment policy, go to the [Azure portal](https://portal.azure.com/), and search for **Policy**. The Azure Policy center should be a top result. Navigate to **Authoring** > **Definitions** in the Policy center's table of contents. The resulting **Definitions** pane contains the pre-defined policies across all Azure services. To find the specific policy, select the **Storage** category in the category filter, or search for **Configure Azure File Sync with private endpoints**. Select **...** and **Assign** to create a new policy from the definition.
-The **Basics** blade of the **Assign policy** wizard enables you to set a scope, resource or resource group exclusion list, and to give your policy a friendly name to help you distinguish it. You don't need to modify these for the policy to work, but you can if you want to make modifications. Select **Next** to advance to the **Parameters** page.
+The **Basics** blade of the **Assign policy** wizard enables you to set a scope, resource or resource group exclusion list, and to give your policy a friendly name to help you distinguish it. You don't need to modify these for the policy to work, but you can if you want to make modifications. Select **Next** to advance to the **Parameters** page.
On the **Parameters** blade, select the **...** next to the **privateEndpointSubnetId** drop down list to select the virtual network and subnet where the private endpoints for your Storage Sync Service resources should be deployed. The resulting wizard may take several seconds to load the available virtual networks in your subscription. Select the appropriate virtual network/subnet for your environment and click **Select**. Select **Next** to advance to the **Remediation** blade.
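If you prefer to script the assignment instead of using the portal wizard, a hedged Azure CLI sketch might look like the following; the assignment name, scope, and parameter format are assumptions, and flag names can vary by CLI version.

```bash
# Look up the built-in definition by its display name, then assign it with a
# system-assigned managed identity so the DeployIfNotExists remediation can run.
# The identity may also need role assignments on the scope for remediation to succeed.
policyId=$(az policy definition list \
    --query "[?displayName=='Configure Azure File Sync with private endpoints'].id" \
    --output tsv)

az policy assignment create \
    --name "afs-private-endpoints" \
    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>" \
    --policy "$policyId" \
    --mi-system-assigned \
    --location "<region>" \
    --params '{ "privateEndpointSubnetId": { "value": "<subnet-resource-id>" } }'
```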
storage File Sync Planning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-planning.md
description: Plan for a deployment with Azure File Sync, a service that allows y
Previously updated : 10/02/2023 Last updated : 01/18/2024
:::column::: Azure File Sync is a service that allows you to cache several Azure file shares on an on-premises Windows Server or cloud VM.
- This article introduces you to Azure File Sync concepts and features. Once you are familiar with Azure File Sync, consider following the [Azure File Sync deployment guide](file-sync-deployment-guide.md) to try out this service.
+ This article introduces you to Azure File Sync concepts and features. Once you're familiar with Azure File Sync, consider following the [Azure File Sync deployment guide](file-sync-deployment-guide.md) to try out this service.
:::column-end::: :::row-end::: The files will be stored in the cloud in [Azure file shares](../files/storage-files-introduction.md). Azure file shares can be used in two ways: by directly mounting these serverless Azure file shares (SMB) or by caching Azure file shares on-premises using Azure File Sync. Which deployment option you choose changes the aspects you need to consider as you plan for your deployment. -- **Direct mount of an Azure file share**: Since Azure Files provides SMB access, you can mount Azure file shares on-premises or in the cloud using the standard SMB client available in Windows, macOS, and Linux. Because Azure file shares are serverless, deploying for production scenarios does not require managing a file server or NAS device. This means you don't have to apply software patches or swap out physical disks.
+- **Direct mount of an Azure file share**: Because Azure Files provides SMB access, you can mount Azure file shares on-premises or in the cloud using the standard SMB client available in Windows, macOS, and Linux. Because Azure file shares are serverless, deploying for production scenarios doesn't require managing a file server or NAS device. This means you don't have to apply software patches or swap out physical disks.
- **Cache Azure file share on-premises with Azure File Sync**: Azure File Sync enables you to centralize your organization's file shares in Azure Files, while keeping the flexibility, performance, and compatibility of an on-premises file server. Azure File Sync transforms an on-premises (or cloud) Windows Server into a quick cache of your Azure file share. ## Management concepts+ An Azure File Sync deployment has three fundamental management objects: - **Azure file share**: An Azure file share is a serverless cloud file share, which provides the *cloud endpoint* of an Azure File Sync sync relationship. Files in an Azure file share can be accessed directly with SMB or the FileREST protocol, although we encourage you to primarily access the files through the Windows Server cache when the Azure file share is being used with Azure File Sync. This is because Azure Files today lacks an efficient change detection mechanism like Windows Server has, so changes to the Azure file share directly will take time to propagate back to the server endpoints.-- **Server endpoint**: The path on the Windows Server that is being synced to an Azure file share. This can be a specific folder on a volume or the root of the volume. Multiple server endpoints can exist on the same volume if their namespaces do not overlap.
+- **Server endpoint**: The path on the Windows Server that is being synced to an Azure file share. This can be a specific folder on a volume or the root of the volume. Multiple server endpoints can exist on the same volume if their namespaces don't overlap.
- **Sync group**: The object that defines the sync relationship between a **cloud endpoint**, or Azure file share, and a server endpoint. Endpoints within a sync group are kept in sync with each other. If for example, you have two distinct sets of files that you want to manage with Azure File Sync, you would create two sync groups and add different endpoints to each sync group. ### Azure file share management concepts+ [!INCLUDE [storage-files-file-share-management-concepts](../../../includes/storage-files-file-share-management-concepts.md)] ### Azure File Sync management concepts+ Sync groups are deployed into **Storage Sync Services**, which are top-level objects that register servers for use with Azure File Sync and contain the sync group relationships. The Storage Sync Service resource is a peer of the storage account resource, and can similarly be deployed to Azure resource groups. A Storage Sync Service can create sync groups that contain Azure file shares across multiple storage accounts and multiple registered Windows Servers. Before you can create a sync group in a Storage Sync Service, you must first register a Windows Server with the Storage Sync Service. This creates a **registered server** object, which represents a trust relationship between your server or cluster and the Storage Sync Service. To register a Storage Sync Service, you must first install the Azure File Sync agent on the server. An individual server or cluster can be registered with only one Storage Sync Service at a time.
A sync group contains one cloud endpoint, or Azure file share, and at least one
> You can make changes to the namespace of any cloud endpoint or server endpoint in the sync group and have your files synced to the other endpoints in the sync group. If you make a change to the cloud endpoint (Azure file share) directly, changes first need to be discovered by an Azure File Sync change detection job. A change detection job is initiated for a cloud endpoint only once every 24 hours. For more information, see [Azure Files frequently asked questions](../files/storage-files-faq.md?toc=/azure/storage/filesync/toc.json#afs-change-detection). ### Consider the count of Storage Sync Services needed
-A previous section discusses the core resource to configure for Azure File Sync: a *Storage Sync Service*. A Windows Server can only be registered to one Storage Sync Service. So it is often best to only deploy a single Storage Sync Service and register all servers on it.
+
+A previous section discusses the core resource to configure for Azure File Sync: a *Storage Sync Service*. A Windows Server can only be registered to one Storage Sync Service. So it's often best to only deploy a single Storage Sync Service and register all servers on it.
Create multiple Storage Sync Services only if you have: * distinct sets of servers that must never exchange data with one another. In this case, you want to design the system to exclude certain sets of servers to sync with an Azure file share that is already in use as a cloud endpoint in a sync group in a different Storage Sync Service. Another way to look at this is that Windows Servers registered to different Storage Sync Services can't sync with the same Azure file share. * a need to have more registered servers or sync groups than a single Storage Sync Service can support. Review the [Azure File Sync scale targets](../files/storage-files-scale-targets.md?toc=/azure/storage/filesync/toc.json#azure-file-sync-scale-targets) for more details. ## Plan for balanced sync topologies
-Before you deploy any resources, it is important to plan out what you will sync on a local server, with which Azure file share. Making a plan will help you determine how many storage accounts, Azure file shares, and sync resources you will need. These considerations are still relevant, even if your data doesn't currently reside on a Windows Server or the server you want to use long term. The [migration section](#migration) can help determine appropriate migration paths for your situation.
+
+Before you deploy any resources, it's important to plan out what you will sync on a local server, with which Azure file share. Making a plan will help you determine how many storage accounts, Azure file shares, and sync resources you'll need. These considerations are still relevant, even if your data doesn't currently reside on a Windows Server or the server you want to use long term. The [migration section](#migration) can help determine appropriate migration paths for your situation.
[!INCLUDE [storage-files-migration-namespace-mapping](../../../includes/storage-files-migration-namespace-mapping.md)] ## Windows file server considerations
-To enable the sync capability on Windows Server, you must install the Azure File Sync downloadable agent. The Azure File Sync agent provides two main components: `FileSyncSvc.exe`, the background Windows service that is responsible for monitoring changes on the server endpoints and initiating sync sessions, and `StorageSync.sys`, a file system filter that enables cloud tiering and fast disaster recovery.
+
+To enable the sync capability on Windows Server, you must install the Azure File Sync downloadable agent. The Azure File Sync agent provides two main components: `FileSyncSvc.exe`, the background Windows service that's responsible for monitoring changes on the server endpoints and initiating sync sessions, and `StorageSync.sys`, a file system filter that enables cloud tiering and fast disaster recovery.
### Operating system requirements+ Azure File Sync is supported with the following versions of Windows Server: | Version | Supported SKUs | Supported deployment options |
Azure File Sync is supported with the following versions of Windows Server:
Future versions of Windows Server will be added as they are released.
-> [!Important]
+> [!IMPORTANT]
> We recommend keeping all servers that you use with Azure File Sync up to date with the latest updates from Windows Update. ### Minimum system resources+ Azure File Sync requires a server, either physical or virtual, with at least one CPU, minimum of 2 GiB of memory and a locally attached volume formatted with the NTFS file system.
-> [!Important]
+> [!IMPORTANT]
> If the server is running in a virtual machine with dynamic memory enabled, the VM should be configured with a minimum of 2048 MiB of memory.
-For most production workloads, we do not recommend configuring an Azure File Sync sync server with only the minimum requirements. See [Recommended system resources](#recommended-system-resources) for more information.
+For most production workloads, we don't recommend configuring an Azure File Sync sync server with only the minimum requirements. See [Recommended system resources](#recommended-system-resources) for more information.
### Recommended system resources+ Just like any server feature or application, the system resource requirements for Azure File Sync are determined by the scale of the deployment; larger deployments on a server require greater system resources. For Azure File Sync, scale is determined by the number of objects across the server endpoints and the churn on the dataset. A single server can have server endpoints in multiple sync groups and the number of objects listed in the following table accounts for the full namespace that a server is attached to. For example, server endpoint A with 10 million objects + server endpoint B with 10 million objects = 20 million objects. For that example deployment, we would recommend 8 CPUs, 16 GiB of memory for steady state, and (if possible) 48 GiB of memory for the initial migration. Namespace data is stored in memory for performance reasons. Because of that, bigger namespaces require more memory to maintain good performance, and more churn requires more CPU to process.
-In the following table, we have provided both the size of the namespace as well as a conversion to capacity for typical general purpose file shares, where the average file size is 512 KiB. If your file sizes are smaller, consider adding additional memory for the same amount of capacity. Base your memory configuration on the size of the namespace.
+In the following table, we've provided both the size of the namespace as well as a conversion to capacity for typical general purpose file shares, where the average file size is 512 KiB. If your file sizes are smaller, consider adding additional memory for the same amount of capacity. Base your memory configuration on the size of the namespace.
| Namespace size - files & directories (millions) | Typical capacity (TiB) | CPU Cores | Recommended memory (GiB) | |||||
In the following table, we have provided both the size of the namespace as well
| 50 | 23.3 | 16 | 64 (initial sync)/ 32 (typical churn) |
| 100* | 46.6 | 32 | 128 (initial sync)/ 32 (typical churn) |
-\*Syncing more than 100 million files & directories is not recommended at this time. This is a soft limit based on our tested thresholds. For more information, see [Azure File Sync scale targets](../files/storage-files-scale-targets.md?toc=/azure/storage/filesync/toc.json#azure-file-sync-scale-targets).
+\*Syncing more than 100 million files & directories isn't recommended. This is a soft limit based on our tested thresholds. For more information, see [Azure File Sync scale targets](../files/storage-files-scale-targets.md?toc=/azure/storage/filesync/toc.json#azure-file-sync-scale-targets).
> [!TIP]
-> Initial synchronization of a namespace is an intensive operation and we recommend allocating more memory until initial synchronization is complete. This isn't required but, may speed up initial sync.
+> Initial synchronization of a namespace is an intensive operation, and we recommend allocating more memory until initial synchronization is complete. This isn't required but might speed up initial sync.
> > Typical churn is 0.5% of the namespace changing per day. For higher levels of churn, consider adding more CPU. ### Evaluation cmdlet
-Before deploying Azure File Sync, you should evaluate whether it is compatible with your system using the Azure File Sync evaluation cmdlet. This cmdlet checks for potential issues with your file system and dataset, such as unsupported characters or an unsupported operating system version. Its checks cover most but not all of the features mentioned below; we recommend you read through the rest of this section carefully to ensure your deployment goes smoothly.
+
+Before deploying Azure File Sync, you should evaluate whether it's compatible with your system using the Azure File Sync evaluation cmdlet. This cmdlet checks for potential issues with your file system and dataset, such as unsupported characters or an unsupported operating system version. These checks cover most but not all of the features mentioned below; we recommend you read through the rest of this section carefully to ensure your deployment goes smoothly.
The evaluation cmdlet can be installed by installing the Az PowerShell module, which can be installed by following the instructions here: [Install and configure Azure PowerShell](/powershell/azure/install-azure-powershell).
-#### Usage
+#### Usage
+ You can invoke the evaluation tool in a few different ways: you can perform the system checks, the dataset checks, or both. To perform both the system and dataset checks: ```powershell
Invoke-AzStorageSyncCompatibilityCheck -Path <path>
``` To test only your dataset:+ ```powershell Invoke-AzStorageSyncCompatibilityCheck -Path <path> -SkipSystemChecks ``` To test system requirements only:+ ```powershell Invoke-AzStorageSyncCompatibilityCheck -ComputerName <computer name> -SkipNamespaceChecks ``` To display the results in CSV:+ ```powershell $validation = Invoke-AzStorageSyncCompatibilityCheck C:\DATA $validation.Results | Select-Object -Property Type, Path, Level, Description, Result | Export-Csv -Path C:\results.csv -Encoding utf8 ``` ### File system compatibility
-Azure File Sync is only supported on directly attached, NTFS volumes. Direct attached storage, or DAS, on Windows Server means that the Windows Server operating system owns the file system. DAS can be provided through physically attaching disks to the file server, attaching virtual disks to a file server VM (such as a VM hosted by Hyper-V), or even through ISCSI.
-Only NTFS volumes are supported; ReFS, FAT, FAT32, and other file systems are not supported.
+Azure File Sync is only supported on directly attached, NTFS volumes. Direct attached storage, or DAS, on Windows Server means that the Windows Server operating system owns the file system. DAS can be provided through physically attaching disks to the file server, attaching virtual disks to a file server VM (such as a VM hosted by Hyper-V), or even through iSCSI.
+
+Only NTFS volumes are supported; ReFS, FAT, FAT32, and other file systems aren't supported.
The following table shows the interop state of NTFS file system features:
The following table shows the interop state of NTFS file system features:
| Mount points | Partially supported | Mount points might be the root of a server endpoint, but they are skipped if they are contained in a server endpoint's namespace. | | Junctions | Skipped | For example, Distributed File System DfrsrPrivate and DFSRoots folders. | | Reparse points | Skipped | |
-| NTFS compression | Partially supported | Azure File Sync does not support server endpoints located on a volume that has the system volume information (SVI) directory compressed. |
+| NTFS compression | Partially supported | Azure File Sync doesn't support server endpoints located on a volume that has the system volume information (SVI) directory compressed. |
| Sparse files | Fully supported | Sparse files sync (are not blocked), but they sync to the cloud as a full file. If the file contents change in the cloud (or on another server), the file is no longer sparse when the change is downloaded. |
-| Alternate Data Streams (ADS) | Preserved, but not synced | For example, classification tags created by the File Classification Infrastructure are not synced. Existing classification tags on files on each of the server endpoints are left untouched. |
+| Alternate Data Streams (ADS) | Preserved, but not synced | For example, classification tags created by the File Classification Infrastructure aren't synced. Existing classification tags on files on each of the server endpoints are left untouched. |
<a id="files-skipped"></a>Azure File Sync will also skip certain temporary files and system folders:
The following table shows the interop state of NTFS file system features:
| \\SyncShareState | Folder for sync | | .SystemShareInformation | Folder for sync in Azure file share |
-> [!Note]
-> While Azure File Sync supports syncing database files, databases are not a good workload for sync solutions (including Azure File Sync) since the log files and databases need to be synced together and they can get out of sync for various reasons which could lead to database corruption.
+> [!NOTE]
+> While Azure File Sync supports syncing database files, databases aren't a good workload for sync solutions (including Azure File Sync) because the log files and databases need to be synced together, and they can get out of sync for various reasons which could lead to database corruption.
### Consider how much free space you need on your local disk
-When planning on using Azure File Sync, consider how much free space you need on the local disk you plan to have a server endpoint on.
+
+When planning to use Azure File Sync, consider how much free space you need on the local disk you plan to have a server endpoint on.
With Azure File Sync, you will need to account for the following taking up space on your local disk: - With cloud tiering enabled:
We'll use an example to illustrate how to estimate the amount of free space woul
1. NTFS allocates a cluster size for each of the tiered files. 1 million files * 4 KiB cluster size = 4,000,000 KiB (4 GiB) > [!Note] > To fully benefit from cloud tiering, it is recommended to use smaller NTFS cluster sizes (less than 64KiB) since each tiered file occupies a cluster. Also, the space occupied by tiered files is allocated by NTFS. Therefore, it will not show up in any UI.
-1. Sync metadata occupies a cluster size per item. (1 million files + 100,000 directories) * 4 KB cluster size = 4,400,000 KiB (4.4 GiB)
+1. Sync metadata occupies a cluster size per item. (1 million files + 100,000 directories) * 4 KiB cluster size = 4,400,000 KiB (4.4 GiB)
1. Azure File Sync heatstore occupies 1.1 KiB per file. 1 million files * 1.1 KiB = 1,100,000 KiB (1.1 GiB) 1. Volume free space policy is 20%. 1000 GiB * 0.2 = 200 GiB In this case, Azure File Sync would need about 209,500,000 KiB (209.5 GiB) of space for this namespace. Add this amount to any additional free space that is desired in order to figure out how much free space is required for this disk. ### Failover Clustering+ 1. Windows Server Failover Clustering is supported by Azure File Sync for the "File Server for general use" deployment option. For more information on how to configure the "File Server for general use" role on a Failover Cluster, see [Deploying a two-node clustered file server](/windows-server/failover-clustering/deploy-two-node-clustered-file-server). 2. The only scenario supported by Azure File Sync is Windows Server Failover Cluster with Clustered Disks
-3. Failover Clustering is not supported on "Scale-Out File Server for application data" (SOFS) or on Clustered Shared Volumes (CSVs) or local disks.
+3. Failover Clustering isn't supported on "Scale-Out File Server for application data" (SOFS) or on Clustered Shared Volumes (CSVs) or local disks.
-> [!Note]
+> [!NOTE]
> The Azure File Sync agent must be installed on every node in a Failover Cluster for sync to work correctly. ### Data Deduplication+ **Windows Server 2022, Windows Server 2019, and Windows Server 2016** Data Deduplication is supported irrespective of whether cloud tiering is enabled or disabled on one or more server endpoints on the volume for Windows Server 2016, Windows Server 2019, and Windows Server 2022. Enabling Data Deduplication on a volume with cloud tiering enabled lets you cache more files on-premises without provisioning more storage. When Data Deduplication is enabled on a volume with cloud tiering enabled, Dedup optimized files within the server endpoint location will be tiered similar to a normal file based on the cloud tiering policy settings. Once the Dedup optimized files have been tiered, the Data Deduplication garbage collection job will run automatically to reclaim disk space by removing unnecessary chunks that are no longer referenced by other files on the volume.
-Note the volume savings only apply to the server; your data in the Azure file share will not be deduped.
+Note the volume savings only apply to the server; your data in the Azure file share won't be deduped.
-> [!Note]
+> [!NOTE]
> To support Data Deduplication on volumes with cloud tiering enabled on Windows Server 2019, Windows update [KB4520062 - October 2019](https://support.microsoft.com/help/4520062) or a later monthly rollup update must be installed. **Windows Server 2012 R2**
-Azure File Sync does not support Data Deduplication and cloud tiering on the same volume on Windows Server 2012 R2. If Data Deduplication is enabled on a volume, cloud tiering must be disabled.
+Azure File Sync doesn't support Data Deduplication and cloud tiering on the same volume on Windows Server 2012 R2. If Data Deduplication is enabled on a volume, cloud tiering must be disabled.
**Notes** - If Data Deduplication is installed prior to installing the Azure File Sync agent, a restart is required to support Data Deduplication and cloud tiering on the same volume. - If Data Deduplication is enabled on a volume after cloud tiering is enabled, the initial Deduplication optimization job will optimize files on the volume that are not already tiered and will have the following impact on cloud tiering: - Free space policy will continue to tier files as per the free space on the volume by using the heatmap. - Date policy will skip tiering of files that may have been otherwise eligible for tiering due to the Deduplication optimization job accessing the files.-- For ongoing Deduplication optimization jobs, cloud tiering with date policy will get delayed by the Data Deduplication [MinimumFileAgeDays](/powershell/module/deduplication/set-dedupvolume) setting, if the file is not already tiered.
+- For ongoing Deduplication optimization jobs, cloud tiering with date policy will get delayed by the Data Deduplication [MinimumFileAgeDays](/powershell/module/deduplication/set-dedupvolume) setting, if the file isn't already tiered.
- Example: If the MinimumFileAgeDays setting is seven days and cloud tiering date policy is 30 days, the date policy will tier files after 37 days. - Note: Once a file is tiered by Azure File Sync, the Deduplication optimization job will skip the file. - If a server running Windows Server 2012 R2 with the Azure File Sync agent installed is upgraded to Windows Server 2016, Windows Server 2019 or Windows Server 2022, the following steps must be performed to support Data Deduplication and cloud tiering on the same volume:
Azure File Sync does not support Data Deduplication and cloud tiering on the sam
Note: The Azure File Sync configuration settings on the server are retained when the agent is uninstalled and reinstalled. ### Distributed File System (DFS)+ Azure File Sync supports interop with DFS Namespaces (DFS-N) and DFS Replication (DFS-R). **DFS Namespaces (DFS-N)**: Azure File Sync is fully supported with DFS-N implementation. You can install the Azure File Sync agent on one or more file servers to sync data between the server endpoints and the cloud endpoint, and then use DFS-N to provide namespace service. For more information, see [DFS Namespaces overview](/windows-server/storage/dfs-namespaces/dfs-overview) and [DFS Namespaces with Azure Files](../files/files-manage-namespaces.md). **DFS Replication (DFS-R)**: Since DFS-R and Azure File Sync are both replication solutions, in most cases, we recommend replacing DFS-R with Azure File Sync. There are however several scenarios where you would want to use DFS-R and Azure File Sync together: -- You are migrating from a DFS-R deployment to an Azure File Sync deployment. For more information, see [Migrate a DFS Replication (DFS-R) deployment to Azure File Sync](file-sync-deployment-guide.md#migrate-a-dfs-replication-dfs-r-deployment-to-azure-file-sync).
+- You're migrating from a DFS-R deployment to an Azure File Sync deployment. For more information, see [Migrate a DFS Replication (DFS-R) deployment to Azure File Sync](file-sync-deployment-guide.md#migrate-a-dfs-replication-dfs-r-deployment-to-azure-file-sync).
- Not every on-premises server that needs a copy of your file data can be connected directly to the internet. - Branch servers consolidate data onto a single hub server, for which you would like to use Azure File Sync. For Azure File Sync and DFS-R to work side by side: 1. Azure File Sync cloud tiering must be disabled on volumes with DFS-R replicated folders.
-2. Server endpoints should not be configured on DFS-R read-only replication folders.
-3. Only a single server endpoint can overlap with a DFS-R location. Multiple server endpoints overlapping with other active DFS-R locations may lead to conflicts.
+2. Server endpoints shouldn't be configured on DFS-R read-only replication folders.
+3. Only a single server endpoint can overlap with a DFS-R location. Multiple server endpoints overlapping with other active DFS-R locations might lead to conflicts.
For more information, see [DFS Replication overview](/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/jj127250(v=ws.11)). ### Sysprep
-Using sysprep on a server that has the Azure File Sync agent installed is not supported and can lead to unexpected results. Agent installation and server registration should occur after deploying the server image and completing sysprep mini-setup.
+
+Using sysprep on a server that has the Azure File Sync agent installed isn't supported and can lead to unexpected results. Agent installation and server registration should occur after deploying the server image and completing sysprep mini-setup.
### Windows Search
-If cloud tiering is enabled on a server endpoint, files that are tiered are skipped and not indexed by Windows Search. Non-tiered files are indexed properly.
-> [!Note]
+If cloud tiering is enabled on a server endpoint, files that are tiered are skipped and aren't indexed by Windows Search. Non-tiered files are indexed properly.
+
+> [!NOTE]
> Windows clients will cause recalls when searching the file share if the **Always search file names and contents** setting is enabled on the client machine. This setting is disabled by default. ### Other Hierarchical Storage Management (HSM) solutions+ No other HSM solutions should be used with Azure File Sync. ## Performance and Scalability Since the Azure File Sync agent runs on a Windows Server machine that connects to the Azure file shares, the effective sync performance depends upon a number of factors in your infrastructure: Windows Server and the underlying disk configuration, network bandwidth between the server and the Azure storage, file size, total dataset size, and the activity on the dataset. Since Azure File Sync works on the file level, the performance characteristics of an Azure File Sync-based solution is better measured in the number of objects (files and directories) processed per second.
-Changes made to the Azure file share by using the Azure portal or SMB are not immediately detected and replicated like changes to the server endpoint. Azure Files does not yet have change notifications or journaling, so there's no way to automatically initiate a sync session when files are changed. On Windows Server, Azure File Sync uses [Windows USN journaling](/windows/win32/fileio/change-journals) to automatically initiate a sync session when files change
+Changes made to the Azure file share by using the Azure portal or SMB aren't immediately detected and replicated like changes to the server endpoint. Azure Files doesn't have change notifications or journaling, so there's no way to automatically initiate a sync session when files are changed. On Windows Server, Azure File Sync uses [Windows USN journaling](/windows/win32/fileio/change-journals) to automatically initiate a sync session when files change.
To detect changes to the Azure file share, Azure File Sync has a scheduled job called a change detection job. A change detection job enumerates every file in the file share, and then compares it to the sync version for that file. When the change detection job determines that files have changed, Azure File Sync initiates a sync session. The change detection job is initiated every 24 hours. Because the change detection job works by enumerating every file in the Azure file share, change detection takes longer in larger namespaces than in smaller namespaces. For large namespaces, it might take longer than once every 24 hours to determine which files have changed. For more information, see [Azure File Sync performance metrics](../files/storage-files-scale-targets.md?toc=/azure/storage/filesync/toc.json#azure-file-sync-performance-metrics) and [Azure File Sync scale targets](../files/storage-files-scale-targets.md?toc=/azure/storage/filesync/toc.json#azure-file-sync-scale-targets) ## Identity
-Azure File Sync works with your standard AD-based identity without any special setup beyond setting up sync. When you are using Azure File Sync, the general expectation is that most accesses go through the Azure File Sync caching servers, rather than through the Azure file share. Since the server endpoints are located on Windows Server, and Windows Server has supported AD and Windows-style ACLs for a long time, nothing is needed beyond ensuring the Windows file servers registered with the Storage Sync Service are domain joined. Azure File Sync will store ACLs on the files in the Azure file share, and will replicate them to all server endpoints.
-Even though changes made directly to the Azure file share will take longer to sync to the server endpoints in the sync group, you may also want to ensure that you can enforce your AD permissions on your file share directly in the cloud as well. To do this, you must domain join your storage account to your on-premises AD, just like how your Windows file servers are domain joined. To learn more about domain joining your storage account to a customer-owned Active Directory, see [Azure Files Active Directory overview](../files/storage-files-active-directory-overview.md?toc=/azure/storage/filesync/toc.json).
+Azure File Sync works with your standard AD-based identity without any special setup beyond setting up sync. When you're using Azure File Sync, the general expectation is that most accesses go through the Azure File Sync caching servers, rather than through the Azure file share. Since the server endpoints are located on Windows Server, and Windows Server has supported AD and Windows-style ACLs for a long time, nothing is needed beyond ensuring the Windows file servers registered with the Storage Sync Service are domain joined. Azure File Sync will store ACLs on the files in the Azure file share, and will replicate them to all server endpoints.
+
+Even though changes made directly to the Azure file share will take longer to sync to the server endpoints in the sync group, you might also want to ensure that you can enforce your AD permissions on your file share directly in the cloud as well. To do this, you must domain join your storage account to your on-premises AD, just like how your Windows file servers are domain joined. To learn more about domain joining your storage account to a customer-owned Active Directory, see [Azure Files Active Directory overview](../files/storage-files-active-directory-overview.md?toc=/azure/storage/filesync/toc.json).
-> [!Important]
-> Domain joining your storage account to Active Directory is not required to successfully deploy Azure File Sync. This is a strictly optional step that allows the Azure file share to enforce on-premises ACLs when users mount the Azure file share directly.
+> [!IMPORTANT]
+> Domain joining your storage account to Active Directory isn't required to successfully deploy Azure File Sync. This is a strictly optional step that allows the Azure file share to enforce on-premises ACLs when users mount the Azure file share directly.
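If you choose to take this optional step, the domain join can be performed with the AzFilesHybrid PowerShell module. The following is a rough sketch, assuming the module is available on a domain-joined machine with rights to create the computer account; the resource names and OU are placeholders.

```powershell
# Sketch only: domain join a storage account to an on-premises AD DS forest.
# Assumes the AzFilesHybrid module is installed; names and the OU are placeholders.
Import-Module AzFilesHybrid
Connect-AzAccount

Join-AzStorageAccount `
    -ResourceGroupName "<resource-group>" `
    -StorageAccountName "<storage-account>" `
    -DomainAccountType "ComputerAccount" `
    -OrganizationalUnitDistinguishedName "OU=FileServers,DC=contoso,DC=com"
```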
## Networking
+
The Azure File Sync agent communicates with your Storage Sync Service and Azure file share using the Azure File Sync REST protocol and the FileREST protocol, both of which always use HTTPS over port 443. SMB is never used to upload or download data between your Windows Server and the Azure file share. Because most organizations allow HTTPS traffic over port 443, as a requirement for visiting most websites, special networking configuration is usually not required to deploy Azure File Sync.
-> [!Important]
-> Azure File Sync does not support internet routing. The default network routing option, Microsoft routing, is supported by Azure File Sync.
+> [!IMPORTANT]
+> Azure File Sync doesn't support internet routing. The default network routing option, Microsoft routing, is supported by Azure File Sync.
-Based on your organization's policy or unique regulatory requirements, you may require more restrictive communication with Azure, and therefore Azure File Sync provides several mechanisms for you configure networking. Based on your requirements, you can:
+Based on your organization's policy or unique regulatory requirements, you might require more restrictive communication with Azure, and therefore Azure File Sync provides several mechanisms for you to configure networking. Based on your requirements, you can:
- Tunnel sync and file upload/download traffic over your ExpressRoute or Azure VPN.
- Make use of Azure Files and Azure Networking features such as service endpoints and private endpoints.
- Configure Azure File Sync to support your proxy in your environment.
- Throttle network activity from Azure File Sync (see the sketch after this list).
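For the last option, the server-local StorageSync cmdlets can define a bandwidth limit schedule on a registered server. A minimal sketch follows, assuming the agent's default install path; the hours and limit are illustrative values only.

```powershell
# Sketch only: throttle Azure File Sync traffic on a registered server during business hours.
# Assumes the agent's default install path; adjust the path and values for your environment.
Import-Module "C:\Program Files\Azure\StorageSyncAgent\StorageSync.Management.ServerCmdlets.dll"

New-StorageSyncNetworkLimit -Day Monday, Tuesday, Wednesday, Thursday, Friday `
    -StartHour 9 -EndHour 17 -LimitKbps 10000

# Review the configured limits.
Get-StorageSyncNetworkLimit
```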
-> [!Tip]
-> If you want to communicate with your Azure file share over SMB but port 445 is blocked, consider using SMB over QUIC, which offers zero-config "SMB VPN" for SMB access to your Azure file shares using the QUIC transport protocol over port 443. Although Azure Files does not directly support SMB over QUIC, you can create a lightweight cache of your Azure file shares on a Windows Server 2022 Azure Edition VM using Azure File Sync. To learn more about this option, see [SMB over QUIC with Azure File Sync](../files/storage-files-networking-overview.md#smb-over-quic).
+> [!TIP]
+> If you want to communicate with your Azure file share over SMB but port 445 is blocked, consider using SMB over QUIC, which offers zero-config "SMB VPN" for SMB access to your Azure file shares using the QUIC transport protocol over port 443. Although Azure Files doesn't directly support SMB over QUIC, you can create a lightweight cache of your Azure file shares on a Windows Server 2022 Azure Edition VM using Azure File Sync. To learn more about this option, see [SMB over QUIC with Azure File Sync](../files/storage-files-networking-overview.md#smb-over-quic).
To learn more about Azure File Sync and networking, see [Azure File Sync networking considerations](file-sync-networking-overview.md).

## Encryption
+
When using Azure File Sync, there are three different layers of encryption to consider: encryption at rest on Windows Server, encryption in transit between the Azure File Sync agent and Azure, and encryption at rest of your data in the Azure file share.
-### Windows Server encryption at rest
-There are two strategies for encrypting data on Windows Server that work generally with Azure File Sync: encryption beneath the file system such that the file system and all of the data written to it is encrypted, and encryption within the file format itself. These methods are not mutually exclusive; they can be used together if desired since the purpose of encryption is different.
+### Windows Server encryption at rest
-To provide encryption beneath the file system, Windows Server provides BitLocker inbox. BitLocker is fully transparent to Azure File Sync. The primary reason to use an encryption mechanism like BitLocker is to prevent physical exfiltration of data from your on-premises datacenter by someone stealing the disks and to prevent sideloading an unauthorized OS to perform unauthorized reads/writes to your data. To learn more about BitLocker, see [BitLocker overview](/windows/security/information-protection/bitlocker/bitlocker-overview).
+There are two strategies for encrypting data on Windows Server that work generally with Azure File Sync: encryption beneath the file system such that the file system and all of the data written to it is encrypted, and encryption within the file format itself. These methods aren't mutually exclusive; they can be used together if desired because each serves a different purpose.
+
+To provide encryption beneath the file system, Windows Server provides BitLocker in-box. BitLocker is fully transparent to Azure File Sync. The primary reason to use an encryption mechanism like BitLocker is to prevent physical exfiltration of data from your on-premises datacenter by someone stealing the disks, and to prevent sideloading an unauthorized OS to perform unauthorized reads/writes to your data. To learn more about BitLocker, see [BitLocker overview](/windows/security/information-protection/bitlocker/bitlocker-overview).
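As an illustration of this approach, BitLocker can be turned on for the volume that hosts a server endpoint with the built-in BitLocker cmdlets. This is a sketch with placeholder values, not a recommendation of specific protectors; choose protectors that match your key management policy.

```powershell
# Sketch only: encrypt the data volume that hosts a server endpoint with BitLocker.
# The drive letter is a placeholder; the recovery password protector is illustrative.
Enable-BitLocker -MountPoint "D:" -EncryptionMethod XtsAes256 -UsedSpaceOnly -RecoveryPasswordProtector

# Check encryption progress.
Get-BitLockerVolume -MountPoint "D:"
```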
Third-party products that work similarly to BitLocker, in that they sit beneath the NTFS volume, should similarly work fully transparently with Azure File Sync.
-The other main method for encrypting data is to encrypt the file's data stream when the application saves the file. Some applications may do this natively, however this is usually not the case. An example of a method for encrypting the file's data stream is Azure Information Protection (AIP)/Azure Rights Management Services (Azure RMS)/Active Directory RMS. The primary reason to use an encryption mechanism like AIP/RMS is to prevent data exfiltration of data from your file share by people copying it to alternate locations, like to a flash drive, or emailing it to an unauthorized person. When a file's data stream is encrypted as part of the file format, this file will continue to be encrypted on the Azure file share.
+The other main method for encrypting data is to encrypt the file's data stream when the application saves the file. Some applications might do this natively; however, this usually isn't the case. An example of a method for encrypting the file's data stream is Azure Information Protection (AIP)/Azure Rights Management Services (Azure RMS)/Active Directory RMS. The primary reason to use an encryption mechanism like AIP/RMS is to prevent exfiltration of data from your file share by people copying it to alternate locations, like a flash drive, or emailing it to an unauthorized person. When a file's data stream is encrypted as part of the file format, this file will continue to be encrypted on the Azure file share.
-Azure File Sync does not interoperate with NTFS Encrypted File System (NTFS EFS) or third-party encryption solutions that sit above the file system but below the file's data stream.
+Azure File Sync doesn't interoperate with NTFS Encrypted File System (NTFS EFS) or third-party encryption solutions that sit above the file system but below the file's data stream.
### Encryption in transit

> [!NOTE]
-> Azure File Sync service will remove support for TLS1.0 and 1.1 on August 1st, 2020. All supported Azure File Sync agent versions already use TLS1.2 by default. Using an earlier version of TLS could occur if TLS1.2 was disabled on your server or a proxy is used. If you are using a proxy, we recommend you check the proxy configuration. Azure File Sync service regions added after 5/1/2020 will only support TLS1.2 and support for TLS1.0 and 1.1 will be removed from existing regions on August 1st, 2020. For more information, see the [troubleshooting guide](/troubleshoot/azure/azure-storage/file-sync-troubleshoot-cloud-tiering?toc=/azure/storage/file-sync/toc.json#tls-12-required-for-azure-file-sync).
+> Azure File Sync service removed support for TLS1.0 and 1.1 on August 1st, 2020. All supported Azure File Sync agent versions already use TLS1.2 by default. Using an earlier version of TLS could occur if TLS1.2 was disabled on your server or a proxy is used. If you are using a proxy, we recommend you check the proxy configuration. Azure File Sync service regions added after 5/1/2020 only support TLS1.2. For more information, see the [troubleshooting guide](/troubleshoot/azure/azure-storage/file-sync-troubleshoot-cloud-tiering?toc=/azure/storage/file-sync/toc.json#tls-12-required-for-azure-file-sync).
-Azure File Sync agent communicates with your Storage Sync Service and Azure file share using the Azure File Sync REST protocol and the FileREST protocol, both of which always use HTTPS over port 443. Azure File Sync does not send unencrypted requests over HTTP.
+The Azure File Sync agent communicates with your Storage Sync Service and Azure file share using the Azure File Sync REST protocol and the FileREST protocol, both of which always use HTTPS over port 443. Azure File Sync doesn't send unencrypted requests over HTTP.
Azure storage accounts contain a switch for requiring encryption in transit, which is enabled by default. Even if the switch at the storage account level is disabled, meaning that unencrypted connections to your Azure file shares are possible, Azure File Sync will still only use encrypted channels to access your file share.
We strongly recommend ensuring encryption of data in-transit is enabled.
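To verify the setting, or to re-enable it if it was turned off, you can use Azure PowerShell. A short sketch follows; the resource group and account names are placeholders.

```powershell
# Sketch only: confirm that the storage account requires secure transfer and enable it if needed.
# Names are placeholders.
$account = Get-AzStorageAccount -ResourceGroupName "<resource-group>" -Name "<storage-account>"
$account.EnableHttpsTrafficOnly

Set-AzStorageAccount -ResourceGroupName "<resource-group>" -Name "<storage-account>" -EnableHttpsTrafficOnly $true
```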
For more information about encryption in transit, see [requiring secure transfer in Azure storage](../common/storage-require-secure-transfer.md?toc=/azure/storage/files/toc.json).

### Azure file share encryption at rest
+
[!INCLUDE [storage-files-encryption-at-rest](../../../includes/storage-files-encryption-at-rest.md)]

## Storage tiers
+
[!INCLUDE [storage-files-tiers-overview](../../../includes/storage-files-tiers-overview.md)]

#### Regional availability
+
[!INCLUDE [storage-files-tiers-large-file-share-availability](../../../includes/storage-files-tiers-large-file-share-availability.md)]

## Azure file sync region availability
The following regions require you to request access to Azure Storage before you
To request access for these regions, follow the process in [this document](/troubleshoot/azure/general/region-access-request-process).

## Redundancy
+
[!INCLUDE [storage-files-redundancy-overview](../../../includes/storage-files-redundancy-overview.md)]
-> [!Important]
-> Geo-redundant and Geo-zone redundant storage have the capability to manually failover storage to the secondary region. We recommend that you do not do this outside of a disaster when you are using Azure File Sync because of the increased likelihood of data loss. In the event of a disaster where you would like to initiate a manual failover of storage, you will need to open up a support case with Microsoft to get Azure File Sync to resume sync with the secondary endpoint.
+> [!IMPORTANT]
+> Geo-redundant and geo-zone-redundant storage have the capability to manually fail over storage to the secondary region. We recommend that you don't do this outside of a disaster when you're using Azure File Sync because of the increased likelihood of data loss. In the event of a disaster where you would like to initiate a manual failover of storage, you'll need to open a support case with Microsoft to get Azure File Sync to resume sync with the secondary endpoint.
## Migration
-If you have an existing Windows file server 2012R2 or newer, Azure File Sync can be directly installed in place, without the need to move data over to a new server. If you are planning to migrate to a new Windows file server as a part of adopting Azure File Sync, or if your data is currently located on Network Attached Storage (NAS) there are several possible migration approaches to use Azure File Sync with this data. Which migration approach you should choose, depends on where your data currently resides.
-Check out the [Azure File Sync and Azure file share migration overview](../files/storage-files-migration-overview.md?toc=/azure/storage/filesync/toc.json) article where you can find detailed guidance for your scenario.
+If you have an existing Windows Server 2012 R2 or newer file server, Azure File Sync can be directly installed in place, without the need to move data over to a new server. If you're planning to migrate to a new Windows file server as a part of adopting Azure File Sync, or if your data is currently located on Network Attached Storage (NAS), there are several possible migration approaches to use Azure File Sync with this data. Which migration approach you should choose depends on where your data currently resides.
+
+See the [Azure File Sync and Azure file share migration overview](../files/storage-files-migration-overview.md?toc=/azure/storage/filesync/toc.json) article for detailed guidance.
## Antivirus
-Because antivirus works by scanning files for known malicious code, an antivirus product might cause the recall of tiered files, resulting in high egress charges. Tiered files have the secure Windows attribute FILE_ATTRIBUTE_RECALL_ON_DATA_ACCESS set and we recommend consulting with your software vendor to learn how to configure their solution to skip reading files with this attribute set (many do it automatically).
-Microsoft's in-house antivirus solutions, Windows Defender and System Center Endpoint Protection (SCEP), both automatically skip reading files that have this attribute set. We have tested them and identified one minor issue: when you add a server to an existing sync group, files smaller than 800 bytes are recalled (downloaded) on the new server. These files will remain on the new server and will not be tiered since they do not meet the tiering size requirement (>64kb).
+Because antivirus works by scanning files for known malicious code, an antivirus product might cause the recall of tiered files, resulting in high egress charges. Tiered files have the secure Windows attribute `FILE_ATTRIBUTE_RECALL_ON_DATA_ACCESS` set, and we recommend consulting with your software vendor to learn how to configure their solution to skip reading files with this attribute set (many do it automatically).
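To see which files a scanner would recall, the attribute can be checked from PowerShell. This is an illustrative sketch only, using the attribute's documented value; the path is a placeholder.

```powershell
# Illustrative sketch: list files under a path whose attributes include
# FILE_ATTRIBUTE_RECALL_ON_DATA_ACCESS (0x00400000), which Azure File Sync sets on tiered files.
# The path is a placeholder.
$recallOnDataAccess = 0x00400000
Get-ChildItem -Path "D:\ShareRoot" -Recurse -File |
    Where-Object { ([int]$_.Attributes -band $recallOnDataAccess) -ne 0 } |
    Select-Object FullName, Length
```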
-> [!Note]
+Microsoft's in-house antivirus solutions, Windows Defender and System Center Endpoint Protection (SCEP), both automatically skip reading files that have this attribute set. We have tested them and identified one minor issue: when you add a server to an existing sync group, files smaller than 800 bytes are recalled (downloaded) on the new server. These files will remain on the new server and won't be tiered because they don't meet the tiering size requirement (>64kb).
+
+> [!NOTE]
> Antivirus vendors can check compatibility between their product and Azure File Sync using the [Azure File Sync Antivirus Compatibility Test Suite](https://www.microsoft.com/download/details.aspx?id=58322), which is available for download on the Microsoft Download Center.
-## Backup
-If cloud tiering is enabled, solutions that directly back up the server endpoint or a VM on which the server endpoint is located should not be used. Cloud tiering causes only a subset of your data to be stored on the server endpoint, with the full dataset residing in your Azure file share. Depending on the backup solution used, tiered files will either be skipped and not backed up (because they have the FILE_ATTRIBUTE_RECALL_ON_DATA_ACCESS attribute set), or they will be recalled to disk, resulting in high egress charges. We recommend using a cloud backup solution to back up the Azure file share directly. For more information, see [About Azure file share backup](../../backup/azure-file-share-backup-overview.md?toc=/azure/storage/files/toc.json) or contact your backup provider to see if they support backing up Azure file shares.
+## Backup
-If you prefer to use an on-premises backup solution, backups should be performed on a server in the sync group that has cloud tiering disabled. When performing a restore, use the volume-level or file-level restore options. Files restored using the file-level restore option will be synced to all endpoints in the sync group and existing files will be replaced with the version restored from backup. Volume-level restores will not replace newer file versions in the Azure file share or other server endpoints.
+If cloud tiering is enabled, solutions that directly back up the server endpoint or a VM on which the server endpoint is located shouldn't be used. Cloud tiering causes only a subset of your data to be stored on the server endpoint, with the full dataset residing in your Azure file share. Depending on the backup solution used, tiered files will either be skipped and not backed up (because they have the `FILE_ATTRIBUTE_RECALL_ON_DATA_ACCESS` attribute set), or they will be recalled to disk, resulting in high egress charges. We recommend using a cloud backup solution to back up the Azure file share directly. For more information, see [About Azure file share backup](../../backup/azure-file-share-backup-overview.md?toc=/azure/storage/files/toc.json) or contact your backup provider to see if they support backing up Azure file shares.
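If you use Azure Backup for this, protection can be enabled on the file share with Azure PowerShell. The following is a sketch under the assumption that a Recovery Services vault and an Azure Files backup policy already exist; all names are placeholders.

```powershell
# Sketch only: protect an Azure file share with Azure Backup.
# Assumes an existing vault and an AzureFiles backup policy; all names are placeholders.
$vault  = Get-AzRecoveryServicesVault -ResourceGroupName "<resource-group>" -Name "<vault-name>"
$policy = Get-AzRecoveryServicesBackupProtectionPolicy -VaultId $vault.ID -Name "<azure-files-policy>"

Enable-AzRecoveryServicesBackupProtection -VaultId $vault.ID -Policy $policy `
    -Name "<file-share-name>" -StorageAccountName "<storage-account>"
```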
-> [!Note]
-> Bare-metal (BMR) restore can cause unexpected results and is not currently supported.
+If you prefer to use an on-premises backup solution, backups should be performed on a server in the sync group that has cloud tiering disabled. When performing a restore, use the volume-level or file-level restore options. Files restored using the file-level restore option will be synced to all endpoints in the sync group, and existing files will be replaced with the version restored from backup. Volume-level restores won't replace newer file versions in the Azure file share or other server endpoints.
-> [!Note]
-> VSS snapshots (including Previous Versions tab) are supported on volumes which have cloud tiering enabled. However, you must enable previous version compatibility through PowerShell. [Learn how](file-sync-deployment-guide.md#optional-self-service-restore-through-previous-versions-and-vss-volume-shadow-copy-service).
+> [!NOTE]
+> Bare-metal (BMR) restore can cause unexpected results and isn't currently supported. VSS snapshots (including Previous Versions tab) are supported on volumes which have cloud tiering enabled. However, you must enable previous version compatibility through PowerShell. [Learn how](file-sync-deployment-guide.md#optional-self-service-restore-through-previous-versions-and-vss-volume-shadow-copy-service).
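A brief sketch of enabling previous version compatibility on a tiered volume follows, assuming the Azure File Sync agent's default install path; the drive letter is a placeholder.

```powershell
# Sketch only: enable self-service restore (Previous Versions) on a volume with cloud tiering.
# Assumes the agent's default install path; the drive letter is a placeholder.
Import-Module "C:\Program Files\Azure\StorageSyncAgent\StorageSync.Management.ServerCmdlets.dll"
Enable-StorageSyncSelfServiceRestore "D:"
```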
## Data Classification
-If you have data classification software installed, enabling cloud tiering may result in increased cost for two reasons:
-1. With cloud tiering enabled, your hottest files are cached locally and coolest files are tiered to the Azure file share in the cloud. If your data classification regularly scans all files in the file share, the files tiered to the cloud must be recalled whenever scanned.
+If you have data classification software installed, enabling cloud tiering might result in increased cost for two reasons:
+
+1. With cloud tiering enabled, your hottest files are cached locally, and your coolest files are tiered to the Azure file share in the cloud. If your data classification regularly scans all files in the file share, the files tiered to the cloud must be recalled whenever scanned.
2. If the data classification software uses the metadata in the data stream of a file, the file must be fully recalled in order for the software to see the classification.

These increases in both the number of recalls and the amount of data being recalled can increase costs.

## Azure File Sync agent update policy
+
[!INCLUDE [storage-sync-files-agent-update-policy](../../../includes/storage-sync-files-agent-update-policy.md)]

## Next steps
+
* [Consider firewall and proxy settings](file-sync-firewall-and-proxy.md)
* [Deploy Azure Files](../files/storage-how-to-create-file-share.md?toc=/azure/storage/filesync/toc.json)
* [Deploy Azure File Sync](file-sync-deployment-guide.md)
storage Files Nfs Protocol https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-nfs-protocol.md
Last updated 10/16/2023 -+ # NFS file shares in Azure Files
NFS file shares are often used in the following scenarios:
- Backing storage for Linux/UNIX-based applications, such as line-of-business applications written using Linux or POSIX file system APIs (even if they don't require POSIX-compliance).
- Workloads that require POSIX-compliant file shares, case sensitivity, or Unix style permissions (UID/GID).
-- New application and service development, particularly if that application or service has a requirement for random I/O and hierarchical storage.
+- New application and service development, particularly if that application or service has a requirement for random I/O and hierarchical storage.
## Features

- Fully POSIX-compliant file system.
- Hard link support.
-- Symbolic link support.
+- Symbolic link support.
- NFS file shares currently support most features from the [4.1 protocol specification](https://tools.ietf.org/html/rfc5661). Some features such as delegations and callback of all kinds, Kerberos authentication, ACLs, and encryption-in-transit aren't supported.

> [!NOTE]
NFS file shares are often used in the following scenarios:
## Security and networking

All data stored in Azure Files is encrypted at rest using Azure storage service encryption (SSE). Storage service encryption works similarly to BitLocker on Windows: data is encrypted beneath the file system level. Because data is encrypted beneath the Azure file share's file system, as it's encoded to disk, you don't have to have access to the underlying key on the client to read or write to the Azure file share. Encryption at rest applies to both the SMB and NFS protocols.
-For encryption in transit, Azure provides a layer of encryption for all data in transit between Azure datacenters using [MACSec](https://en.wikipedia.org/wiki/IEEE_802.1AE). Through this, encryption exists when data is transferred between Azure datacenters.
+For encryption in transit, Azure provides a layer of encryption for all data in transit between Azure datacenters using [MACSec](https://en.wikipedia.org/wiki/IEEE_802.1AE). Through this, encryption exists when data is transferred between Azure datacenters.
-Unlike Azure Files using the SMB protocol, file shares using the NFS protocol don't offer user-based authentication. Authentication for NFS shares is based on the configured network security rules. Due to this, to ensure only secure connections are established to your NFS share, you must set up either a private endpoint or a service endpoint for your storage account.
+Unlike Azure Files using the SMB protocol, file shares using the NFS protocol don't offer user-based authentication. Authentication for NFS shares is based on the configured network security rules. Due to this, to ensure only secure connections are established to your NFS share, you must set up either a private endpoint or a service endpoint for your storage account.
A private endpoint (also called a private link) gives your storage account a private, static IP address within your virtual network, preventing connectivity interruptions from dynamic IP address changes. Traffic to your storage account stays within peered virtual networks, including those in other regions and on premises. Standard [data processing rates](https://azure.microsoft.com/pricing/details/private-link/) apply.
-If you don't require a static IP address, you can enable a [service endpoint](../../virtual-network/virtual-network-service-endpoints-overview.md) for Azure Files within the virtual network. A service endpoint configures storage accounts to allow access only from specific subnets. The allowed subnets can belong to a virtual network in the same subscription or a different subscription, including those that belong to a different Microsoft Entra tenant. There's no extra charge for using service endpoints.
+If you don't require a static IP address, you can enable a [service endpoint](../../virtual-network/virtual-network-service-endpoints-overview.md) for Azure Files within the virtual network. A service endpoint configures storage accounts to allow access only from specific subnets. The allowed subnets can belong to a virtual network in the same subscription or a different subscription, including those that belong to a different Microsoft Entra tenant. There's no extra charge for using service endpoints.
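As an example of the service endpoint route, a subnet can be granted access with Azure PowerShell. This is a sketch only; the virtual network, subnet, and storage account names are placeholders.

```powershell
# Sketch only: add a Microsoft.Storage service endpoint to a subnet, then restrict
# the storage account to that subnet. All names are placeholders.
$vnet   = Get-AzVirtualNetwork -ResourceGroupName "<resource-group>" -Name "<vnet-name>"
$subnet = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "<subnet-name>"

Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "<subnet-name>" `
    -AddressPrefix $subnet.AddressPrefix -ServiceEndpoint "Microsoft.Storage" | Set-AzVirtualNetwork

Add-AzStorageAccountNetworkRule -ResourceGroupName "<resource-group>" `
    -Name "<storage-account>" -VirtualNetworkResourceId $subnet.Id
```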
If you want to access shares from on-premises, then you must set up a VPN or ExpressRoute in addition to a private endpoint. Requests that don't originate from the following sources will be rejected:
For more details on the available networking options, see [Azure Files networkin
## Support for Azure Storage features
-The following table shows the current level of support for Azure Storage features in accounts that have the NFS 4.1 feature enabled.
+The following table shows the current level of support for Azure Storage features in accounts that have the NFS 4.1 feature enabled.
The status of items that appear in this table might change over time as support continues to expand.
storage Storage Files Quick Create Use Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-quick-create-use-linux.md
Title: Tutorial - Create an NFS Azure file share and mount it on a Linux VM usin
description: This tutorial covers how to use the Azure portal to deploy a Linux virtual machine, create an Azure file share using the NFS protocol, and mount the file share so that it's ready to store files. -+ Last updated 10/10/2023
Next, create an Azure VM running Linux to represent the on-premises server. When
:::image type="content" source="media/storage-files-quick-create-use-linux/create-vm-inbound-port-rules.png" alt-text="Screenshot showing how to configure the inbound port rules for a new V M." lightbox="media/storage-files-quick-create-use-linux/create-vm-inbound-port-rules.png" border="true"::: > [!IMPORTANT]
- > Setting SSH port(s) open to the internet is only recommended for testing. If you want to change this setting later, go back to the **Basics** tab.
+ > Setting SSH port(s) open to the internet is only recommended for testing. If you want to change this setting later, go back to the **Basics** tab.
1. Select the **Review + create** button at the bottom of the page.
If you encounter a warning that the authenticity of the host can't be establishe
Now that you've created an NFS share, to use it you have to mount it on your Linux client.
-1. Select **Home** and then **Storage accounts**.
+1. Select **Home** and then **Storage accounts**.
1. Select the storage account you created.
storage Storage How To Use Files Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-how-to-use-files-linux.md
Title: Mount SMB Azure file share on Linux
description: Learn how to mount an Azure file share over SMB on Linux and review SMB security considerations on Linux clients. -+ Last updated 01/10/2023
If your Linux distribution isn't listed in the above table, you can check the Li
uname -r ```
-> [!Note]
+> [!Note]
> SMB 2.1 support was added to Linux kernel version 3.7. If you're using a version of the Linux kernel after 3.7, it should support SMB 2.1.

## Applies to
uname -r
## Prerequisites <a id="smb-client-reqs"></a>
-* <a id="install-cifs-utils"></a>**Ensure the cifs-utils package is installed.**
- The cifs-utils package can be installed using the package manager on the Linux distribution of your choice.
+* <a id="install-cifs-utils"></a>**Ensure the cifs-utils package is installed.**
+ The cifs-utils package can be installed using the package manager on the Linux distribution of your choice.
# [Ubuntu](#tab/Ubuntu)
sudo dnf install cifs-utils
On older versions of Red Hat Enterprise Linux use the `yum` package ```bash
-sudo yum install cifs-utils
+sudo yum install cifs-utils
``` # [SLES](#tab/SLES)
On other distributions, use the appropriate package manager or [compile from sou
```bash RESOURCE_GROUP_NAME="<your-resource-group>" STORAGE_ACCOUNT_NAME="<your-storage-account>"
-
+ # This command assumes you have logged in with az login HTTP_ENDPOINT=$(az storage account show \ --resource-group $RESOURCE_GROUP_NAME \
MNT_PATH="$MNT_ROOT/$STORAGE_ACCOUNT_NAME/$FILE_SHARE_NAME"
sudo mkdir -p $MNT_PATH ```
-Next, mount the file share using the `mount` command. In the following example, the `$SMB_PATH` command is populated using the fully qualified domain name for the storage account's file endpoint and `$STORAGE_ACCOUNT_KEY` is populated with the storage account key.
+Next, mount the file share using the `mount` command. In the following example, the `$SMB_PATH` command is populated using the fully qualified domain name for the storage account's file endpoint and `$STORAGE_ACCOUNT_KEY` is populated with the storage account key.
# [SMB 3.1.1](#tab/smb311)
-> [!Note]
-> Starting in Linux kernel version 5.0, SMB 3.1.1 is the default negotiated protocol. If you're using a version of the Linux kernel older than 5.0, specify `vers=3.1.1` in the mount options list.
+> [!Note]
+> Starting in Linux kernel version 5.0, SMB 3.1.1 is the default negotiated protocol. If you're using a version of the Linux kernel older than 5.0, specify `vers=3.1.1` in the mount options list.
```azurecli # This command assumes you have logged in with az login
MNT_ROOT="/media"
sudo mkdir -p $MNT_ROOT ```
-To mount an Azure file share on Linux, use the storage account name as the username of the file share, and the storage account key as the password. Because the storage account credentials may change over time, you should store the credentials for the storage account separately from the mount configuration.
+To mount an Azure file share on Linux, use the storage account name as the username of the file share, and the storage account key as the password. Because the storage account credentials may change over time, you should store the credentials for the storage account separately from the mount configuration.
The following example shows how to create a file to store the credentials. Remember to replace `<resource-group-name>` and `<storage-account-name>` with the appropriate information for your environment.
CREDENTIAL_ROOT="/etc/smbcredentials"
sudo mkdir -p "/etc/smbcredentials" # Get the storage account key for the indicated storage account.
-# You must be logged in with az login and your user identity must have
+# You must be logged in with az login and your user identity must have
# permissions to list the storage account keys for this command to work. STORAGE_ACCOUNT_KEY=$(az storage account keys list \ --resource-group $RESOURCE_GROUP_NAME \
SMB_CREDENTIAL_FILE="$CREDENTIAL_ROOT/$STORAGE_ACCOUNT_NAME.cred"
if [ ! -f $SMB_CREDENTIAL_FILE ]; then echo "username=$STORAGE_ACCOUNT_NAME" | sudo tee $SMB_CREDENTIAL_FILE > echo "password=$STORAGE_ACCOUNT_KEY" | sudo tee -a $SMB_CREDENTIAL_FILE >
-else
+else
echo "The credential file $SMB_CREDENTIAL_FILE already exists, and was not modified." fi
fi
sudo chmod 600 $SMB_CREDENTIAL_FILE ```
-To automatically mount a file share, you have a choice between using a static mount via the `/etc/fstab` utility or using a dynamic mount via the `autofs` utility.
+To automatically mount a file share, you have a choice between using a static mount via the `/etc/fstab` utility or using a dynamic mount via the `autofs` utility.
### Static mount with /etc/fstab

Using the earlier environment, create a folder for your storage account/file share under your mount folder. Replace `<file-share-name>` with the appropriate name of your Azure file share.
fi
sudo mount -a ```
-> [!Note]
+> [!Note]
> Starting in Linux kernel version 5.0, SMB 3.1.1 is the default negotiated protocol. You can specify alternate protocol versions using the `vers` mount option (protocol versions are `3.1.1`, `3.0`, and `2.1`).

### Dynamically mount with autofs
-To dynamically mount a file share with the `autofs` utility, install it using the package manager on the Linux distribution of your choice.
+To dynamically mount a file share with the `autofs` utility, install it using the package manager on the Linux distribution of your choice.
-# [Ubuntu](#tab/Ubuntu)
+# [Ubuntu](#tab/Ubuntu)
On Ubuntu and Debian distributions, use the `apt` package
On Ubuntu and Debian distributions, use the `apt` package
sudo apt update sudo apt install autofs ```
-# [RHEL](#tab/RHEL)
+# [RHEL](#tab/RHEL)
Same applies for CentOS or Oracle Linux
sudo dnf install autofs
On older versions of Red Hat Enterprise Linux, use the `yum` package ```bash
-sudo yum install autofs
+sudo yum install autofs
``` # [SLES](#tab/SLES)
-
+ On SUSE Linux Enterprise Server, use the `zypper` package ```bash sudo zypper install autofs ```
-Next, update the `autofs` configuration files.
+Next, update the `autofs` configuration files.
```bash FILE_SHARE_NAME="<file-share-name>"
After you've created the file share snapshot, follow these instructions to mount
1. In the Azure portal, navigate to the storage account that contains the file share that you want to mount a snapshot of.
2. Select **Data storage > File shares** and select the file share.
3. Select **Operations > Snapshots** and take note of the name of the snapshot you want to mount. The snapshot name will be a GMT timestamp, such as in the screenshot below.
-
+ :::image type="content" source="media/storage-how-to-use-files-linux/mount-snapshot.png" alt-text="Screenshot showing how to locate a file share snapshot name and timestamp in the Azure portal." border="true" :::
-
+ 4. Convert the timestamp to the format expected by the `mount` command, which is **@GMT-year.month.day-hour.minutes.seconds**. In this example, you'd convert **2023-01-05T00:08:20.0000000Z** to **@GMT-2023.01.05-00.08.20**. 5. Run the `mount` command using the GMT time to specify the `snapshot` value. Be sure to replace `<storage-account-name>`, `<file-share-name>`, and the GMT timestamp with your values. The .cred file contains the credentials to be used to mount the share (see [Automatically mount file shares](#automatically-mount-file-shares)).
-
+ ```bash sudo mount -t cifs //<storage-account-name>.file.core.windows.net/<file-share-name> /media/<file-share-name>/snapshot1 -o credentials=/etc/smbcredentials/snapshottestlinux.cred,snapshot=@GMT-2023.01.05-00.08.20 ```
-
+ 6. If you're able to browse the snapshot under the path `/media/<file-share-name>/snapshot1`, then the mount succeeded. If the mount fails, see [Troubleshoot Azure Files connectivity and access issues (SMB)](/troubleshoot/azure/azure-storage/files-troubleshoot-smb-connectivity?toc=/azure/storage/files/toc.json).
update-manager Prerequsite For Schedule Patching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/prerequsite-for-schedule-patching.md
Title: Configure schedule patching on Azure VMs for business continuity description: The article describes the new prerequisites to configure scheduled patching to ensure business continuity in Azure Update Manager. Previously updated : 09/18/2023 Last updated : 01/17/2024
PATCH on `/subscriptions/subscription_id/resourceGroups/myResourceGroup/provider
} } ```
+# [PowerShell](#tab/new-prereq-powershell)
+
+## Prerequisites
+
+- Patch mode = `AutomaticByPlatform`
+- `BypassPlatformSafetyChecksOnUserSchedule` = TRUE
+
+### Enable on Windows VMs
+
+```powershell-interactive
+$VirtualMachine = Get-AzVM -ResourceGroupName "<resourceGroup>" -Name "<vmName>"
+Set-AzVMOperatingSystem -VM $VirtualMachine -Windows -PatchMode "AutomaticByPlatform"
+$AutomaticByPlatformSettings = $VirtualMachine.OSProfile.WindowsConfiguration.PatchSettings.AutomaticByPlatformSettings
+
+if ($null -eq $AutomaticByPlatformSettings) {
+ $VirtualMachine.OSProfile.WindowsConfiguration.PatchSettings.AutomaticByPlatformSettings = New-Object -TypeName Microsoft.Azure.Management.Compute.Models.WindowsVMGuestPatchAutomaticByPlatformSettings -Property @{BypassPlatformSafetyChecksOnUserSchedule = $true}
+} else {
+ $AutomaticByPlatformSettings.BypassPlatformSafetyChecksOnUserSchedule = $true
+}
+
+Update-AzVM -VM $VirtualMachine -ResourceGroupName "<resourceGroup>"
+```
+### Enable on Linux VMs
+
+```powershell-interactive
+$VirtualMachine = Get-AzVM -ResourceGroupName "<resourceGroup>" -Name "<vmName>"
+Set-AzVMOperatingSystem -VM $VirtualMachine -Linux -PatchMode "AutomaticByPlatform"
+$AutomaticByPlatformSettings = $VirtualMachine.OSProfile.LinuxConfiguration.PatchSettings.AutomaticByPlatformSettings
+
+if ($null -eq $AutomaticByPlatformSettings) {
+ $VirtualMachine.OSProfile.LinuxConfiguration.PatchSettings.AutomaticByPlatformSettings = New-Object -TypeName Microsoft.Azure.Management.Compute.Models.LinuxVMGuestPatchAutomaticByPlatformSettings -Property @{BypassPlatformSafetyChecksOnUserSchedule = $true}
+} else {
+ $AutomaticByPlatformSettings.BypassPlatformSafetyChecksOnUserSchedule = $true
+}
+
+Update-AzVM -VM $VirtualMachine -ResourceGroupName "<resourceGroup>"
+```
> [!NOTE]
-> Currently, you can only enable the new prerequisite for schedule patching via the Azure portal and the REST API. It can't be enabled via the Azure CLI or PowerShell.
+> Currently, you can only enable the new prerequisite for schedule patching via the Azure portal, REST API, and PowerShell. It can't be enabled via the Azure CLI.
## Enable automatic guest VM patching on Azure VMs
virtual-machine-scale-sets Disk Encryption Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/disk-encryption-cli.md
Last updated 11/22/2022 -+ # Encrypt OS and attached data disks in a Virtual Machine Scale Set with the Azure CLI
Now create a Virtual Machine Scale Set with [az vmss create](/cli/azure/vmss). T
az vmss create \ --resource-group myResourceGroup \ --name myScaleSet \
+ --orchestration-mode Flexible \
--image <SKU Linux Image> \ --admin-username azureuser \ --generate-ssh-keys \
az keyvault update --name $keyvault_name --enabled-for-disk-encryption
## Enable encryption > [!NOTE]
-> If using Virtual Machine Scale Sets in Flexible Orchestration Mode, only new instances will be encrypted. Existing instances in the scale set will need to be encrypted individually or removed and replaced.
+> If using Virtual Machine Scale Sets in Flexible Orchestration Mode, only new instances will be encrypted. Existing instances in the scale set will need to be encrypted individually or removed and replaced.
To encrypt VM instances in a scale set, first get some information on the Key Vault resource ID with [az keyvault show](/cli/azure/keyvault#az-keyvault-show). These variables are used to then start the encryption process with [az vmss encryption enable](/cli/azure/vmss/encryption#az-vmss-encryption-enable):
az vmss encryption disable --resource-group myResourceGroup --name myScaleSet
## Next steps

- In this article, you used the Azure CLI to encrypt a Virtual Machine Scale Set. You can also use [Azure PowerShell](disk-encryption-powershell.md) or [Azure Resource Manager templates](disk-encryption-azure-resource-manager.md).
-- If you wish to have Azure Disk Encryption applied after another extension is provisioned, you can use [extension sequencing](virtual-machine-scale-sets-extension-sequencing.md).
+- If you wish to have Azure Disk Encryption applied after another extension is provisioned, you can use [extension sequencing](virtual-machine-scale-sets-extension-sequencing.md).
- An end-to-end batch file example for Linux scale set data disk encryption can be found [here](https://gist.githubusercontent.com/ejarvi/7766dad1475d5f7078544ffbb449f29b/raw/03e5d990b798f62cf188706221ba6c0c7c2efb3f/enable-linux-vmss.bat). This example creates a resource group, Linux scale set, mounts a 5-GB data disk, and encrypts the Virtual Machine Scale Set.
virtual-machine-scale-sets Disk Encryption Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/disk-encryption-powershell.md
$vmssName="myScaleSet"
New-AzVmss ` -ResourceGroupName $rgName ` -VMScaleSetName $vmssName `
+ -OrchestrationMode "flexible" `
-Location $location ` -VirtualNetworkName "myVnet" ` -SubnetName "mySubnet" `
virtual-machine-scale-sets Flexible Virtual Machine Scale Sets Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/flexible-virtual-machine-scale-sets-cli.md
Last updated 11/22/2022 -+ # Create virtual machines in a scale set using Azure CLI
-This article steps through using the Azure CLI to create a Virtual Machine Scale Set.
+This article steps through using the Azure CLI to create a Virtual Machine Scale Set.
Make sure that you've installed the latest [Azure CLI](/cli/azure/install-az-cli2) and are logged in to an Azure account with [az login](/cli/azure/reference-index). ## Launch Azure Cloud Shell
-The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common Azure tools preinstalled and configured to use with your account.
+The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common Azure tools preinstalled and configured to use with your account.
To open the Cloud Shell, just select **Try it** from the upper right corner of a code block. You can also launch Cloud Shell in a separate browser tab by going to [https://shell.azure.com/cli](https://shell.azure.com/cli). Select **Copy** to copy the blocks of code, paste it into the Cloud Shell, and press enter to run it.
Now create a Virtual Machine Scale Set with [az vmss create](/cli/azure/vmss). T
az vmss create \ --resource-group myResourceGroup \ --name myScaleSet \
+ --orchestration-mode Flexible \
--image <SKU Linux Image> \
- --upgrade-policy-mode automatic \
--instance-count 2 \ --admin-username azureuser \ --generate-ssh-keys
virtual-machine-scale-sets Flexible Virtual Machine Scale Sets Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/flexible-virtual-machine-scale-sets-portal.md
Last updated 11/22/2022 -+ # Create virtual machines in a scale set using Azure portal
-This article steps through using Azure portal to create a Virtual Machine Scale Set.
+This article steps through using Azure portal to create a Virtual Machine Scale Set.
## Log in to Azure Sign in to the [Azure portal](https://portal.azure.com).
You can deploy a scale set with a Windows Server image or Linux image such as RH
1. In the Azure portal search bar, search for and select **Virtual Machine Scale Sets**. 1. Select **Create** on the **Virtual Machine Scale Sets** page.
-1. In the **Basics** tab, under **Project details**, make sure the correct subscription is selected and create a new resource group called *myVMSSResourceGroup*.
+1. In the **Basics** tab, under **Project details**, make sure the correct subscription is selected and create a new resource group called *myVMSSResourceGroup*.
1. Under **Scale set details**, set *myScaleSet* for your scale set name and select a **Region** that is close to your area. 1. Under **Orchestration**, select *Flexible*. 1. Under **Instance details**, select a marketplace image for **Image**. Select any of the Supported Distros.
-1. Under **Administrator account** configure the admin username and set up an associated password or SSH public key.
+1. Under **Administrator account** configure the admin username and set up an associated password or SSH public key.
- A **Password** must be at least 12 characters long and meet three out of the four following complexity requirements: one lower case character, one upper case character, one number, and one special character. For more information, see [username and password requirements](../virtual-machines/windows/faq.yml#what-are-the-password-requirements-when-creating-a-vm-). - If you select a Linux OS disk image, you can instead choose **SSH public key**. You can use an existing key or create a new one. In this example, we will have Azure generate a new key pair for us. For more information on generating key pairs, see [create and use SSH keys](../virtual-machines/linux/mac-create-ssh-keys.md). :::image type="content" source="media/quickstart-guides/quick-start-portal-1.png" alt-text="A screenshot of the Basics tab in the Azure portal during the Virtual Machine Scale Set creation process.":::
-1. Select **Next: Disks** to move the disk configuration options. For this quickstart, leave the default disk configurations.
+1. Select **Next: Disks** to move the disk configuration options. For this quickstart, leave the default disk configurations.
-1. Select **Next: Networking** to move the networking configuration options.
+1. Select **Next: Networking** to move the networking configuration options.
-1. On the **Networking** page, under **Load balancing**, select the **Use a load balancer** checkbox to put the scale set instances behind a load balancer.
+1. On the **Networking** page, under **Load balancing**, select the **Use a load balancer** checkbox to put the scale set instances behind a load balancer.
1. In **Load balancing options**, select **Azure load balancer**. 1. In **Select a load balancer**, select a load balancer or create a new one. 1. For **Select a backend pool**, select **Create new**, type *myBackendPool*, then select **Create**.
You can deploy a scale set with a Windows Server image or Linux image such as RH
1. Select **Next: Scaling** to move to the scaling configurations.
-1. On the **Scaling** page, set the **initial instance count** field to *5*. You can set this number up to 1000.
-1. For the **Scaling policy**, keep it *Manual*.
+1. On the **Scaling** page, set the **initial instance count** field to *5*. You can set this number up to 1000.
+1. For the **Scaling policy**, keep it *Manual*.
:::image type="content" source="media/quickstart-guides/quick-start-portal-3.png" alt-text="A screenshot of the Scaling tab in the Azure portal during the Virtual Machine Scale Set creation process.":::
-1. When you're done, select **Review + create**.
+1. When you're done, select **Review + create**.
1. After it passes validation, select **Create** to deploy the scale set.
virtual-machine-scale-sets Flexible Virtual Machine Scale Sets Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/flexible-virtual-machine-scale-sets-powershell.md
Now create a Virtual Machine Scale Set with [New-AzVmss](/powershell/module/az.c
```azurepowershell-interactive New-AzVmss ` -ResourceGroup "myVMSSResourceGroup" `
- -Name "myScaleSet" `
+ -Name "myScaleSet" `
+ -OrchestrationMode "Flexible" `
-Location "East US" ` -InstanceCount "2" ` -ImageName "Win2019Datacenter"
virtual-machine-scale-sets Instance Generalized Image Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/instance-generalized-image-version.md
az vmss create \
--resource-group myResourceGroup \ --name myScaleSet \ --image $imgDef \
+ --orchestration-mode Flexible \
--admin-username azureuser \ --generate-ssh-keys ```
virtual-machine-scale-sets Quick Create Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/quick-create-cli.md
Last updated 11/22/2022 -+ # Quickstart: Create a Virtual Machine Scale Set with the Azure CLI
A Virtual Machine Scale Set allows you to deploy and manage a set of auto-scalin
[!INCLUDE [azure-cli-prepare-your-environment.md](~/articles/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)] -- This article requires version 2.0.29 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
+- This article requires version 2.0.29 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
## Create a scale set
virtual-machine-scale-sets Quick Create Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/quick-create-portal.md
Last updated 04/18/2023 -+ # Quickstart: Create a Virtual Machine Scale Set in the Azure portal
Sign in to the [Azure portal](https://portal.azure.com).
## Create a load balancer
-Azure [load balancer](../load-balancer/load-balancer-overview.md) distributes incoming traffic among healthy virtual machine instances.
+Azure [load balancer](../load-balancer/load-balancer-overview.md) distributes incoming traffic among healthy virtual machine instances.
First, create a public Standard Load Balancer by using the portal. The name and public IP address you create are automatically configured as the load balancer's front end.
First, create a public Standard Load Balancer by using the portal. The name and
| Setting | Value | | | |
- | Subscription | Select your subscription. |
+ | Subscription | Select your subscription. |
| Resource group | Select **Create new** and type *myVMSSResourceGroup* in the text box.| | Name | *myLoadBalancer* | | Region | Select **East US**. |
First, create a public Standard Load Balancer by using the portal. The name and
| Assignment| Static | | Availability zone | Select **Zone-redundant**. |
-1. When you're done, select **Review + create**
-1. After it passes validation, select **Create**.
+1. When you're done, select **Review + create**
+1. After it passes validation, select **Create**.
![Create a load balancer](./media/virtual-machine-scale-sets-create-portal/load-balancer.png) ## Create Virtual Machine Scale Set You can deploy a scale set with a Windows Server image or Linux image such as RHEL, CentOS, Ubuntu, or SLES.
-1. Type **Scale set** in the search box. In the results, under **Marketplace**, select **Virtual Machine Scale Sets**. Select **Create** on the **Virtual Machine Scale Sets** page, which opens the **Create a Virtual Machine Scale Set** page.
-1. In the **Basics** tab, under **Project details**, make sure the correct subscription is selected and select *myVMSSResourceGroup* from resource group list.
+1. Type **Scale set** in the search box. In the results, under **Marketplace**, select **Virtual Machine Scale Sets**. Select **Create** on the **Virtual Machine Scale Sets** page, which opens the **Create a Virtual Machine Scale Set** page.
+1. In the **Basics** tab, under **Project details**, make sure the correct subscription is selected and select *myVMSSResourceGroup* from resource group list.
1. Type *myScaleSet* as the name for your scale set. 1. In **Region**, select a region that is close to your area.
-1. Under **Orchestration**, ensure the *Uniform* option is selected for **Orchestration mode**.
+1. Under **Orchestration**, ensure the *Uniform* option is selected for **Orchestration mode**.
1. Select a marketplace image for **Image**. In this example, we have chosen *Ubuntu Server 18.04 LTS*. 1. Enter your desired username, and select which authentication type you prefer. - A **Password** must be at least 12 characters long and meet three out of the four following complexity requirements: one lower case character, one upper case character, one number, and one special character. For more information, see [username and password requirements](../virtual-machines/windows/faq.yml#what-are-the-password-requirements-when-creating-a-vm-). - If you select a Linux OS disk image, you can instead choose **SSH public key**. Only provide your public key, such as *~/.ssh/id_rsa.pub*. You can use the Azure Cloud Shell from the portal to [create and use SSH keys](../virtual-machines/linux/mac-create-ssh-keys.md).
-
+ :::image type="content" source="./media/virtual-machine-scale-sets-create-portal/quick-create-scale-set.png" alt-text="Image shows create options for scale sets in the Azure portal.":::
-1. Select **Next** to move the other pages.
+1. Select **Next** to move the other pages.
1. Leave the defaults for the **Disks** page.
-1. On the **Networking** page, under **Load balancing**, select the **Use a load balancer** option to put the scale set instances behind a load balancer.
+1. On the **Networking** page, under **Load balancing**, select the **Use a load balancer** option to put the scale set instances behind a load balancer.
1. In **Load balancing options**, select **Azure load balancer**. 1. In **Select a load balancer**, select *myLoadBalancer* that you created earlier. 1. For **Select a backend pool**, select **Create new**, type *myBackendPool*, then select **Create**.
-1. When you're done, select **Review + create**.
+1. When you're done, select **Review + create**.
1. After it passes validation, select **Create** to deploy the scale set.
virtual-machine-scale-sets Tutorial Autoscale Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/tutorial-autoscale-cli.md
Last updated 12/16/2022 -+

# Tutorial: Automatically scale a Virtual Machine Scale Set with the Azure CLI

When you create a scale set, you define the number of VM instances that you wish to run. As your application demand changes, you can automatically increase or decrease the number of VM instances. The ability to autoscale lets you keep up with customer demand or respond to application performance changes throughout the lifecycle of your app. In this tutorial you learn how to:
To connect to an individual instance, see [Tutorial: Connect to Virtual Machine
Once logged in, install the **stress** or **stress-ng** utility. Start *10* **stress** workers that generate CPU load. These workers run for *420* seconds, which is enough to cause the autoscale rules to implement the desired action.
-# [Ubuntu, Debian](#tab/Ubuntu)
+# [Ubuntu, Debian](#tab/Ubuntu)
```bash sudo apt-get update sudo apt-get -y install stress sudo stress --cpu 10 --timeout 420 & ```
-# [RHEL, CentOS](#tab/redhat)
+# [RHEL, CentOS](#tab/redhat)
```bash sudo dnf install stress-ng
ssh azureuser@13.92.224.66 -p 50003
Install and run **stress** or **stress-ng**, then start ten workers on this second VM instance.
-# [Ubuntu, Debian](#tab/Ubuntu)
+# [Ubuntu, Debian](#tab/Ubuntu)
```bash sudo apt-get -y install stress sudo stress --cpu 10 --timeout 420 & ```
-# [RHEL, CentOS](#tab/redhat)
+# [RHEL, CentOS](#tab/redhat)
```bash sudo dnf install stress-ng
virtual-machine-scale-sets Tutorial Create And Manage Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/tutorial-create-and-manage-cli.md
Last updated 12/16/2022 -+

# Tutorial: Create and manage a Virtual Machine Scale Set with the Azure CLI

A Virtual Machine Scale Set allows you to deploy and manage a set of virtual machines. Throughout the lifecycle of a Virtual Machine Scale Set, you may need to run one or more management tasks. In this tutorial you learn how to:
A Virtual Machine Scale Set allows you to deploy and manage a set of virtual mac
[!INCLUDE [azure-cli-prepare-your-environment.md](~/articles/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
-This article requires version 2.0.29 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
+This article requires version 2.0.29 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
## Create a resource group
-An Azure resource group is a logical container into which Azure resources are deployed and managed. A resource group must be created before a Virtual Machine Scale Set. Create a resource group with the [az group create](/cli/azure/group) command. In this example, a resource group named *myResourceGroup* is created in the *eastus* region.
+An Azure resource group is a logical container into which Azure resources are deployed and managed. A resource group must be created before a Virtual Machine Scale Set. Create a resource group with the [az group create](/cli/azure/group) command. In this example, a resource group named *myResourceGroup* is created in the *eastus* region.
```azurecli-interactive az group create --name myResourceGroup --location eastus
You create a Virtual Machine Scale Set with the [az vmss create](/cli/azure/vmss
az vmss create \ --resource-group myResourceGroup \ --name myScaleSet \
+ --orchestration-mode flexible \
--image <SKU image> \ --admin-username azureuser \ --generate-ssh-keys
When you created a scale set at the start of the tutorial, a default VM SKU of *
az vmss create \ --resource-group myResourceGroup \ --name myScaleSet \
+ --orchestration-mode flexible \
--image <SKU image> \ --vm-sku Standard_F1 \ --admin-username azureuser \
myScaleSet_instance3 myResourceGroup eastus
``` ## Stop and deallocate VM instances in a scale set
-To stop all the VM instances in a scale set, use [az vmss stop](/cli/azure/vmss).
+To stop all the VM instances in a scale set, use [az vmss stop](/cli/azure/vmss).
```azurecli-interactive az vmss stop \ --resource-group myResourceGroup \
- --name myScaleSet
+ --name myScaleSet
```
-To stop individual VM instances in a scale set, use [az vm stop](/cli/azure/vm) and specify the instance name.
+To stop individual VM instances in a scale set, use [az vm stop](/cli/azure/vm) and specify the instance name.
```azurecli-interactive
-az vm stop \
+az vm stop \
--resource-group myResourceGroup \ --name myScaleSet_instance1 ```
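The instance names passed to `az vm stop` follow the scale set naming convention. If you're unsure of the exact names, a quick way to list them is shown below; this sketch assumes the instances live in *myResourceGroup*, as in the examples above.

```azurecli-interactive
# List the VM instances (and any standalone VMs) in the resource group,
# including scale set instance names such as myScaleSet_instance1.
az vm list \
    --resource-group myResourceGroup \
    --output table
```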
-Stopped VM instances remain allocated and continue to incur compute charges. If you instead wish the VM instances to be deallocated and only incur storage charges, use [az vm deallocate](/cli/azure/vm) and specify the instance names you want deallocated.
+Stopped VM instances remain allocated and continue to incur compute charges. If you instead wish the VM instances to be deallocated and only incur storage charges, use [az vm deallocate](/cli/azure/vm) and specify the instance names you want deallocated.
```azurecli-interactive az vm deallocate \
az vm deallocate \
``` ## Start VM instances in a scale set
-To start all the VM instances in a scale set, use [az vmss start](/cli/azure/vmss).
+To start all the VM instances in a scale set, use [az vmss start](/cli/azure/vmss).
```azurecli-interactive az vmss start \ --resource-group myResourceGroup \
- --name myScaleSet
+ --name myScaleSet
```
-To start individual VM instances in a scale set, use [az vm start](/cli/azure/vm) and specify the instance name.
+To start individual VM instances in a scale set, use [az vm start](/cli/azure/vm) and specify the instance name.
```azurecli-interactive az vm start \
az vm start \
``` ## Restart VM instances in a scale set
-To restart all the VM instances in a scale set, use [az vmss restart](/cli/azure/vmss).
+To restart all the VM instances in a scale set, use [az vmss restart](/cli/azure/vmss).
```azurecli-interactive az vmss restart \ --resource-group myResourceGroup \
- --name myScaleSet
+ --name myScaleSet
```
-To restart individual VM instances in a scale set, use [az vm restart](/cli/azure/vm) and specify the instance name.
+To restart individual VM instances in a scale set, use [az vm restart](/cli/azure/vm) and specify the instance name.
```azurecli-interactive az vm restart \
virtual-machine-scale-sets Tutorial Create And Manage Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/tutorial-create-and-manage-powershell.md
Now create a Virtual Machine Scale Set with [New-AzVmss](/powershell/module/az.c
New-AzVmss ` -ResourceGroupName "myResourceGroup" ` -VMScaleSetName "myScaleSet" `
+ -OrchestrationMode "Flexible" `
-Location "EastUS" ` -Credential $cred ```
virtual-machine-scale-sets Tutorial Install Apps Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/tutorial-install-apps-cli.md
az vmss create \
--resource-group myResourceGroup \ --name myScaleSet \ --image Ubuntu2204 \
+ --orchestration-mode Flexible \
--admin-username azureuser \ --generate-ssh-keys ```
virtual-machine-scale-sets Tutorial Modify Scale Sets Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/tutorial-modify-scale-sets-cli.md
Last updated 11/22/2022 -+ # Tutorial: Modify a Virtual Machine Scale Set using Azure CLI Throughout the lifecycle of your applications, you may need to modify or update your Virtual Machine Scale Set. These updates may include how to update the configuration of the scale set, or change the application configuration. This article describes how to modify an existing scale set using the Azure CLI.
Additionally, if you previously deployed the scale set with the `az vmss create`
az vmss create \ --resource-group myResourceGroup \ --name myScaleSet \
+ --orchestration-mode flexible \
--image RHELRaw8LVMGen2 \ --admin-username azureuser \ --generate-ssh-keys \
The exact presentation of the output depends on the options you provide to the c
} ```
-These properties describe the configuration of a VM instance within a scale set, not the configuration of the scale set as a whole.
+These properties describe the configuration of a VM instance within a scale set, not the configuration of the scale set as a whole.
You can perform updates to individual VM instances in a scale set just like you would a standalone VM. For example, attaching a new data disk to instance 1:
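The attach command itself isn't shown in this excerpt; the following is a minimal sketch using [az vm disk attach](/cli/azure/vm/disk), assuming the target instance is named *myScaleSet_instance1* and a hypothetical new 64-GiB data disk named *myDataDisk*.

```azurecli-interactive
# Create and attach a new 64 GiB managed data disk to a single scale set instance.
az vm disk attach \
    --resource-group myResourceGroup \
    --vm-name myScaleSet_instance1 \
    --name myDataDisk \
    --new \
    --size-gb 64
```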
Running [az vm show](/cli/azure/vm#az-vm-show) again, we now will see that the V
"toBeDetached": false, } ],
-
+ ``` ## Add an Instance to your scale set
-There are times where you might want to add a new VM to your scale set but want different configuration options than then listed in the scale set model. VMs can be added to a scale set during creation by using the [az vm create](/cli/azure/vmss#az-vmss-create) command and specifying the scale set name you want the instance added to.
+There are times when you might want to add a new VM to your scale set but want different configuration options than those listed in the scale set model. VMs can be added to a scale set during creation by using the [az vm create](/cli/azure/vm#az-vm-create) command and specifying the scale set name you want the instance added to.
```azurecli-interactive az vm create --name myNewInstance --resource-group myResourceGroup --vmss myScaleSet --image RHELRaw8LVMGen2
az vm list --resource-group myResourceGroup --output table
``` ```output
-Name ResourceGroup Location
+Name ResourceGroup Location
- - myNewInstance myResourceGroup eastus myScaleSet_Instance1 myResourceGroup eastus myScaleSet_Instance1 myResourceGroup eastus
-```
+```
## Bring VMs up-to-date with the latest scale set model > [!NOTE]
-> Upgrade modes are not currently supported on Virtual Machine Scale Sets using Flexible orchestration mode.
+> Upgrade modes are not currently supported on Virtual Machine Scale Sets using Flexible orchestration mode.
Scale sets have an "upgrade policy" that determine how VMs are brought up-to-date with the latest scale set model. The three modes for the upgrade policy are: -- **Automatic** - In this mode, the scale set makes no guarantees about the order of VMs being brought down. The scale set may take down all VMs at the same time.
+- **Automatic** - In this mode, the scale set makes no guarantees about the order of VMs being brought down. The scale set may take down all VMs at the same time.
- **Rolling** - In this mode, the scale set rolls out the update in batches with an optional pause time between batches. - **Manual** - In this mode, when you update the scale set model, nothing happens to existing VMs until a manual update is triggered.
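To set or switch the upgrade policy mode described above on an existing scale set, the generic `--set` argument of `az vmss update` can be used. This is a sketch only, assuming the scale set supports the chosen mode:

```azurecli-interactive
# Switch the scale set's upgrade policy to Rolling.
az vmss update \
    --resource-group myResourceGroup \
    --name myScaleSet \
    --set upgradePolicy.mode=Rolling
```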
-
+ If your scale set is set to manual upgrades, you can trigger a manual upgrade using [az vmss update](/cli/azure/vmss#az-vmss-update). ```azurecli
-az vmss update --resource-group myResourceGroup --name myScaleSet
+az vmss update --resource-group myResourceGroup --name myScaleSet
``` >[!NOTE]
Virtual Machine Scale Sets will generate a unique name for each VM in the scale
- Flexible orchestration Mode: `{scale-set-name}_{8-char-guid}` - Uniform orchestration mode: `{scale-set-name}_{instance-id}`
-
+ In the cases where you need to reimage a specific instance, use [az vmss reimage](/cli/azure/vmss#az-vmss-reimage) and specify the instance names. ```azurecli
Let's say you have a scale set with an Azure Load Balancer, and you want to repl
```azurecli-interactive # Remove the load balancer backend pool from the scale set model az vmss update --resource-group myResourceGroup --name myScaleSet --remove virtualMachineProfile.networkProfile.networkInterfaceConfigurations[0].ipConfigurations[0].loadBalancerBackendAddressPools 0
-
+ # Remove the load balancer backend pool from the scale set model; only necessary if you have NAT pools configured on the scale set az vmss update --resource-group myResourceGroup --name myScaleSet --remove virtualMachineProfile.networkProfile.networkInterfaceConfigurations[0].ipConfigurations[0].loadBalancerInboundNatPools 0
-
+ # Add the application gateway backend pool to the scale set model az vmss update --resource-group myResourceGroup --name myScaleSet --add virtualMachineProfile.networkProfile.networkInterfaceConfigurations[0].ipConfigurations[0].ApplicationGatewayBackendAddressPools '{"id": "/subscriptions/{subscriptionId}/resourceGroups/myResourceGroup/providers/Microsoft.Network/applicationGateways/{applicationGatewayName}/backendAddressPools/{applicationGatewayBackendPoolName}"}' ```
virtual-machine-scale-sets Tutorial Use Custom Image Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/tutorial-use-custom-image-cli.md
Last updated 12/16/2022 -+ # Tutorial: Create and use a custom image for Virtual Machine Scale Sets with the Azure CLI
When you create a scale set, you specify an image to be used when the VM instanc
- This article requires version 2.4.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed. ## Overview
-A [Azure Compute Gallery](../virtual-machines/shared-image-galleries.md) simplifies custom image sharing across your organization. Custom images are like marketplace images, but you create them yourself. Custom images can be used to bootstrap configurations such as preloading applications, application configurations, and other OS configurations.
+An [Azure Compute Gallery](../virtual-machines/shared-image-galleries.md) simplifies custom image sharing across your organization. Custom images are like marketplace images, but you create them yourself. Custom images can be used to bootstrap configurations such as preloading applications, application configurations, and other OS configurations.
-The Azure Compute Gallery lets you share your custom VM images with others. Choose which images you want to share, which regions you want to make them available in, and who you want to share them with.
+The Azure Compute Gallery lets you share your custom VM images with others. Choose which images you want to share, which regions you want to make them available in, and who you want to share them with.
## Create and configure a source VM First, create a resource group with [az group create](/cli/azure/group), then create a VM with [az vm create](/cli/azure/vm#az-vm-create). This VM is then used as the source for the image. The following example creates a VM named *myVM* in the resource group named *myResourceGroup*:
az vm create \
> [!IMPORTANT] > The **ID** of your VM is shown in the output of the [az vm create](/cli/azure/vm#az-vm-create) command. Copy this someplace safe so you can use it later in this tutorial.
-## Create an image gallery
-An image gallery is the primary resource used for enabling image sharing.
+## Create an image gallery
+An image gallery is the primary resource used for enabling image sharing.
-Allowed characters for Gallery name are uppercase or lowercase letters, digits, dots, and periods. The gallery name cannot contain dashes. Gallery names must be unique within your subscription.
+Allowed characters for the gallery name are uppercase or lowercase letters, digits, and periods. The gallery name cannot contain dashes. Gallery names must be unique within your subscription.
Create an image gallery using [az sig create](/cli/azure/sig#az-sig-create). The following example creates a resource group for the gallery named *myGalleryRG* in *East US*, and a gallery named *myGallery*.
az sig create --resource-group myGalleryRG --gallery-name myGallery
``` ## Create an image definition
-Image definitions create a logical grouping for images. They are used to manage information about the image versions that are created within them.
+Image definitions create a logical grouping for images. They are used to manage information about the image versions that are created within them.
-Image definition names can be made up of uppercase or lowercase letters, digits, dots, dashes, and periods.
+Image definition names can be made up of uppercase or lowercase letters, digits, dashes, and periods.
Make sure your image definition is the right type. If you have generalized the VM (using Sysprep for Windows, or waagent -deprovision for Linux) then you should create a generalized image definition using `--os-state generalized`. If you want to use the VM without removing existing user accounts, create a specialized image definition using `--os-state specialized`.
For more information about the values you can specify for an image definition, s
Create an image definition in the gallery using [az sig image-definition create](/cli/azure/sig/image-definition#az-sig-image-definition-create).
-In this example, the image definition is named *myImageDefinition*, and is for a [specialized](../virtual-machines/shared-image-galleries.md#generalized-and-specialized-images) Linux OS image. To create a definition for images using a Windows OS, use `--os-type Windows`.
+In this example, the image definition is named *myImageDefinition*, and is for a [specialized](../virtual-machines/shared-image-galleries.md#generalized-and-specialized-images) Linux OS image. To create a definition for images using a Windows OS, use `--os-type Windows`.
-```azurecli-interactive
+```azurecli-interactive
az sig image-definition create \ --resource-group myGalleryRG \ --gallery-name myGallery \
az sig image-definition create \
> The **ID** of your image definition is shown in the output of the command. Copy this someplace safe so you can use it later in this tutorial. ## Create the image version
-Create an image version from the VM using [az image gallery create-image-version](/cli/azure/sig/image-version#az-sig-image-version-create).
+Create an image version from the VM using [az sig image-version create](/cli/azure/sig/image-version#az-sig-image-version-create).
Allowed characters for image version are numbers and periods. Numbers must be within the range of a 32-bit integer. Format: *MajorVersion*.*MinorVersion*.*Patch*.
In this example, the version of our image is *1.0.0* and we are going to create
Replace the value of `--managed-image` in this example with the ID of your VM from the previous step.
-```azurecli-interactive
+```azurecli-interactive
az sig image-version create \ --resource-group myGalleryRG \ --gallery-name myGallery \
az sig image-version create \
>Starting November 2023, VM scale sets created using PowerShell and Azure CLI will default to Flexible Orchestration Mode if no orchestration mode is specified. For more information about this change and what actions you should take, go to [Breaking Change for VMSS PowerShell/CLI Customers - Microsoft Community Hub]( https://techcommunity.microsoft.com/t5/azure-compute-blog/breaking-change-for-vmss-powershell-cli-customers/ba-p/3818295)
-Create a scale set from the specialized image using [`az vmss create`](/cli/azure/vmss#az-vmss-create).
+Create a scale set from the specialized image using [`az vmss create`](/cli/azure/vmss#az-vmss-create).
-Create the scale set using [`az vmss create`](/cli/azure/vmss#az-vmss-create) using the --specialized parameter to indicate the image is a specialized image.
+Create the scale set using [`az vmss create`](/cli/azure/vmss#az-vmss-create) using the --specialized parameter to indicate the image is a specialized image.
-Use the image definition ID for `--image` to create the scale set instances from the latest version of the image that is available. You can also create the scale set instances from a specific version by supplying the image version ID for `--image`.
+Use the image definition ID for `--image` to create the scale set instances from the latest version of the image that is available. You can also create the scale set instances from a specific version by supplying the image version ID for `--image`.
Create a scale set named *myScaleSet* using the latest version of the *myImageDefinition* image we created earlier.
az group create --name myResourceGroup --location eastus
az vmss create \ --resource-group myResourceGroup \ --name myScaleSet \
+ --orchestration-mode flexible \
--image "/subscriptions/<Subscription ID>/resourceGroups/myGalleryRG/providers/Microsoft.Compute/galleries/myGallery/images/myImageDefinition" \ --specialized ```
virtual-machine-scale-sets Tutorial Use Custom Image Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/tutorial-use-custom-image-powershell.md
New-AzResourceGroup -ResourceGroupName $resourceGroupName -Location $location
# Create a configuration $vmssConfig = New-AzVmssConfig ` -Location $location `
+ -OrchestrationMode Flexible `
-SkuCapacity 2 ` -SkuName "Standard_D2s_v3"
virtual-machine-scale-sets Tutorial Use Disks Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/tutorial-use-disks-cli.md
az vmss create \
--resource-group myResourceGroup \ --name myScaleSet \ --image Ubuntu2204 \
+ --orchestration-mode Flexible \
--admin-username azureuser \ --generate-ssh-keys \ --data-disk-sizes-gb 64 128
virtual-machine-scale-sets Tutorial Use Disks Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/tutorial-use-disks-powershell.md
New-AzResourceGroup -Name "myResourceGroup" -Location "East US"
New-AzVmss ` -ResourceGroupName "myResourceGroup" ` -Location "EastUS" `
+ -OrchestrationMode "Flexible" `
-VMScaleSetName "myScaleSet" ` -VirtualNetworkName "myVnet" ` -SubnetName "mySubnet" `
virtual-machine-scale-sets Use Spot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/use-spot.md
az vmss create \
--resource-group myResourceGroup \ --name myScaleSet \ --image Ubuntu2204 \
+ --orchestration-mode Flexible \
--single-placement-group false \ --admin-username azureuser \ --generate-ssh-keys \
Just add '-Priority Spot', and supply a `-MaxPrice` to the [New-AzVmssConfig](/
$vmssConfig = New-AzVmssConfig ` -Location "East US 2" ` -SkuCapacity 2 `
+ -OrchestrationMode "Flexible" `
-SkuName "Standard_DS2" ` -Priority "Spot" ` -max-price -1 `
virtual-machine-scale-sets Virtual Machine Scale Sets Automatic Instance Repairs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-instance-repairs.md
Last updated 07/25/2023 -
-
++ # Automatic instance repairs for Azure Virtual Machine Scale Sets > [!IMPORTANT]
-> The **Reimage** and **Restart** repair actions are currently in PREVIEW.
+> The **Reimage** and **Restart** repair actions are currently in PREVIEW.
> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. Some aspects of this feature may change prior to general availability (GA).
For instances marked as "Unhealthy" or "Unknown" (*Unknown* state is only availa
Automatic repairs policy is supported for compute API version 2018-10-01 or higher.
-The `repairAction` setting for Reimage (Preview) and Restart (Preview) is supported for compute API versions 2021-11-01 or higher.
+The `repairAction` setting for Reimage (Preview) and Restart (Preview) is supported for compute API versions 2021-11-01 or higher.
**Restrictions on resource or subscription moves**
Automatic repairs currently do not support scenarios where a VM instance is mark
## How do automatic instance repairs work?
-Automatic instance repair feature relies on health monitoring of individual instances in a scale set. VM instances in a scale set can be configured to emit application health status using either the [Application Health extension](./virtual-machine-scale-sets-health-extension.md) or [Load balancer health probes](../load-balancer/load-balancer-custom-probe-overview.md). If an instance is found to be unhealthy, the scale set will perform a preconfigured repair action on the unhealthy instance. Automatic instance repairs can be enabled in the Virtual Machine Scale Set model by using the `automaticRepairsPolicy` object.
+The automatic instance repairs feature relies on health monitoring of individual instances in a scale set. VM instances in a scale set can be configured to emit application health status using either the [Application Health extension](./virtual-machine-scale-sets-health-extension.md) or [Load balancer health probes](../load-balancer/load-balancer-custom-probe-overview.md). If an instance is found to be unhealthy, the scale set will perform a preconfigured repair action on the unhealthy instance. Automatic instance repairs can be enabled in the Virtual Machine Scale Set model by using the `automaticRepairsPolicy` object.
The automatic instance repairs process goes as follows:
The automatic instance repairs process goes as follows:
### Available repair actions > [!CAUTION]
-> The `repairAction` setting, is currently under PREVIEW and not suitable for production workloads. To preview the **Restart** and **Reimage** repair actions, you must register your Azure subscription with the AFEC flag `AutomaticRepairsWithConfigurableRepairActions` and your compute API version must be 2021-11-01 or higher.
+> The `repairAction` setting is currently in PREVIEW and not suitable for production workloads. To preview the **Restart** and **Reimage** repair actions, you must register your Azure subscription with the AFEC flag `AutomaticRepairsWithConfigurableRepairActions` and your compute API version must be 2021-11-01 or higher.
> For more information, see [set up preview features in Azure subscription](../azure-resource-manager/management/preview-features.md). There are three available repair actions for automatic instance repairs – Replace, Reimage (Preview), and Restart (Preview). The default repair action is Replace, but you can switch to Reimage (Preview) or Restart (Preview) by enrolling in the preview and modifying the `repairAction` setting under the `automaticRepairsPolicy` object. - **Replace** deletes the unhealthy instance and creates a new instance to replace it. The latest Virtual Machine Scale Set model is used to create the new instance. This repair action is the default. -- **Reimage** applies the reimage operation to the unhealthy instance.
+- **Reimage** applies the reimage operation to the unhealthy instance.
-- **Restart** applies the restart operation to the unhealthy instance.
+- **Restart** applies the restart operation to the unhealthy instance.
-The following table compares the differences between all three repair actions:
+The following table compares the differences between all three repair actions:
| Repair action | VM instance ID preserved? | Private IP preserved? | Managed data disk preserved? | Managed OS disk preserved? | Local (temporary) disk preserved? | |--|--|--|--|--|--|
If an instance in a scale set is protected by applying one of the [protection po
## Terminate notification and automatic repairs
-If the [terminate notification](./virtual-machine-scale-sets-terminate-notification.md) feature is enabled on a scale set, then during a *Replace* operation, the deletion of an unhealthy instance follows the terminate notification configuration. A terminate notification is sent through Azure metadata service ΓÇô scheduled events ΓÇô and instance deletion is delayed during the configured delay timeout. However, the creation of a new instance to replace the unhealthy one doesn't wait for the delay timeout to complete.
+If the [terminate notification](./virtual-machine-scale-sets-terminate-notification.md) feature is enabled on a scale set, then during a *Replace* operation, the deletion of an unhealthy instance follows the terminate notification configuration. A terminate notification is sent through Azure metadata service – scheduled events – and instance deletion is delayed during the configured delay timeout. However, the creation of a new instance to replace the unhealthy one doesn't wait for the delay timeout to complete.
## Enabling automatic repairs policy when creating a new scale set
The automatic instance repair feature can be enabled while creating a new scale
New-AzVmssConfig ` -Location "EastUS" ` -SkuCapacity 2 `
+ -OrchestrationMode "Flexible" `
-SkuName "Standard_DS2" ` -EnableAutomaticRepair $true ` -AutomaticRepairGracePeriod "PT30M"
az vmss create \
--resource-group <myResourceGroup> \ --name <myVMScaleSet> \ --image RHELRaw8LVMGen2 \
+ --orchestration-mode Flexible \
--admin-username <azureuser> \ --generate-ssh-keys \ --load-balancer <existingLoadBalancer> \
az vmss update \
## Configure a repair action on automatic repairs policy > [!CAUTION]
-> The `repairAction` setting, is currently under PREVIEW and not suitable for production workloads. To preview the **Restart** and **Reimage** repair actions, you must register your Azure subscription with the AFEC flag `AutomaticRepairsWithConfigurableRepairActions` and your compute API version must be 2021-11-01 or higher.
+> The `repairAction` setting is currently in PREVIEW and not suitable for production workloads. To preview the **Restart** and **Reimage** repair actions, you must register your Azure subscription with the AFEC flag `AutomaticRepairsWithConfigurableRepairActions` and your compute API version must be 2021-11-01 or higher.
> For more information, see [set up preview features in Azure subscription](../azure-resource-manager/management/preview-features.md).
-The `repairAction` setting under `automaticRepairsPolicy` allows you to specify the desired repair action performed in response to an unhealthy instance. If you are updating the repair action on an existing automatic repairs policy, you must first disable automatic repairs on the scale set and re-enable with the updated repair action. This process is illustrated in the examples below.
+The `repairAction` setting under `automaticRepairsPolicy` allows you to specify the desired repair action performed in response to an unhealthy instance. If you are updating the repair action on an existing automatic repairs policy, you must first disable automatic repairs on the scale set and re-enable with the updated repair action. This process is illustrated in the examples below.
### [REST API](#tab/rest-api-3)
-This example demonstrates how to update the repair action on a scale set with an existing automatic repairs policy. Use API version 2021-11-01 or higher.
+This example demonstrates how to update the repair action on a scale set with an existing automatic repairs policy. Use API version 2021-11-01 or higher.
-**Disable the existing automatic repairs policy on your scale set**
+**Disable the existing automatic repairs policy on your scale set**
```
-PUT or PATCH on '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachineScaleSets/{vmScaleSetName}?api-version=2021-11-01'
+PUT or PATCH on '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachineScaleSets/{vmScaleSetName}?api-version=2021-11-01'
``` ```json
PUT or PATCH on '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupNa
**Re-enable automatic repairs policy with the desired repair action** ```
-PUT or PATCH on '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachineScaleSets/{vmScaleSetName}?api-version=2021-11-01'
+PUT or PATCH on '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachineScaleSets/{vmScaleSetName}?api-version=2021-11-01'
``` ```json { "properties": { "automaticRepairsPolicy": {
- "enabled": "true",
- "gracePeriod": "PT40M",
+ "enabled": "true",
+ "gracePeriod": "PT40M",
"repairAction": "Reimage" } }
This example demonstrates how to update the repair action on a scale set with an
**Disable the existing automatic repairs policy on your scale set** ```azurecli-interactive
-az vmss update \
- --resource-group <myResourceGroup> \
- --name <myVMScaleSet> \
+az vmss update \
+ --resource-group <myResourceGroup> \
+ --name <myVMScaleSet> \
--enable-automatic-repairs false ```
-**Re-enable automatic repairs policy with the desired repair action**
+**Re-enable automatic repairs policy with the desired repair action**
```azurecli-interactive
-az vmss update \
- --resource-group <myResourceGroup> \
- --name <myVMScaleSet> \
- --enable-automatic-repairs true \
- --automatic-repairs-grace-period 30 \
- --automatic-repairs-action Replace
+az vmss update \
+ --resource-group <myResourceGroup> \
+ --name <myVMScaleSet> \
+ --enable-automatic-repairs true \
+ --automatic-repairs-grace-period 30 \
+ --automatic-repairs-action Replace
``` ### [Azure PowerShell](#tab/powershell-3)
-This example demonstrates how to update the repair action on a scale set with an existing automatic repairs policy, using [Update-AzVmss](/powershell/module/az.compute/update-azvmss). Use PowerShell Version 7.3.6 or higher.
+This example demonstrates how to update the repair action on a scale set with an existing automatic repairs policy, using [Update-AzVmss](/powershell/module/az.compute/update-azvmss). Use PowerShell Version 7.3.6 or higher.
**Disable the existing automatic repairs policy on your scale set**
-```azurepowershell-interactive
- -ResourceGroupName "myResourceGroup" `
- -VMScaleSetName "myScaleSet" `
- -EnableAutomaticRepair $false
+```azurepowershell-interactive
+Update-AzVmss `
+ -ResourceGroupName "myResourceGroup" `
+ -VMScaleSetName "myScaleSet" `
+ -EnableAutomaticRepair $false
```
-**Re-enable automatic repairs policy with the desired repair action**
+**Re-enable automatic repairs policy with the desired repair action**
```azurepowershell-interactive
-Update-AzVmss `
- -ResourceGroupName "myResourceGroup" `
- -VMScaleSetName "myScaleSet" `
- -EnableAutomaticRepair $true `
- -AutomaticRepairGracePeriod "PT40M" `
- -AutomaticRepairAction "Restart"
+Update-AzVmss `
+ -ResourceGroupName "myResourceGroup" `
+ -VMScaleSetName "myScaleSet" `
+ -EnableAutomaticRepair $true `
+ -AutomaticRepairGracePeriod "PT40M" `
+ -AutomaticRepairAction "Restart"
```
virtual-machine-scale-sets Virtual Machine Scale Sets Automatic Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-upgrade.md
-+ Last updated 10/26/2023
The availability-first model for platform orchestrated updates described below e
- VMs in different Availability Zones are not updated concurrently with the same update. **Within a 'set':**-- All VMs in a common scale set are not updated concurrently.
+- All VMs in a common scale set are not updated concurrently.
- VMs in a common Virtual Machine Scale Set are grouped in batches and updated within Update Domain boundaries as described below. The platform orchestrated updates process is followed for rolling out supported OS platform image upgrades every month. For custom images through Azure Compute Gallery, an image upgrade is only kicked off for a particular Azure region when the new image is published and [replicated](../virtual-machines/azure-compute-gallery.md#replication) to the region of that scale set.
The scale set OS upgrade orchestrator checks for the overall scale set health be
To modify the default settings associated with Rolling Upgrades, review Azure's [Rolling Upgrade Policy](/rest/api/compute/virtual-machine-scale-sets/create-or-update?tabs=HTTP#rollingupgradepolicy). > [!NOTE]
->Automatic OS upgrade does not upgrade the reference image Sku on the scale set. To change the Sku (such as Ubuntu 18.04-LTS to 20.04-LTS), you must update the [scale set model](virtual-machine-scale-sets-upgrade-scale-set.md#the-scale-set-model) directly with the desired image Sku. Image publisher and offer can't be changed for an existing scale set.
+>Automatic OS upgrade does not upgrade the reference image Sku on the scale set. To change the Sku (such as Ubuntu 18.04-LTS to 20.04-LTS), you must update the [scale set model](virtual-machine-scale-sets-upgrade-scale-set.md#the-scale-set-model) directly with the desired image Sku. Image publisher and offer can't be changed for an existing scale set.
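As a hedged sketch of that model update, the image SKU can be changed with the generic `--set` argument of `az vmss update`, assuming the new SKU shares the same publisher and offer as the current one (for example, moving a *MicrosoftWindowsServer/WindowsServer* scale set between supported SKUs from the table below):

```azurecli-interactive
# Update the scale set model to reference a different image SKU from the same publisher and offer.
az vmss update \
    --resource-group myResourceGroup \
    --name myScaleSet \
    --set virtualMachineProfile.storageProfile.imageReference.sku=2022-Datacenter
```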
-## OS image upgrade versus reimage
+## OS image upgrade versus reimage
Both **OS Image Upgrade** and **[Reimage](/rest/api/compute/virtual-machine-scale-sets/reimage)** are methods used to update VMs within a scale set, but they serve different purposes and have distinct impacts.
The following platform SKUs are currently supported (and more are added periodic
| Publisher | OS Offer | Sku | |-||--|
-| Canonical | UbuntuServer | 18.04-LTS |
-| Canonical | UbuntuServer | 18_04-LTS-Gen2 |
-| Canonical | 0001-com-ubuntu-server-focal | 20_04-LTS |
-| Canonical | 0001-com-ubuntu-server-focal | 20_04-LTS-Gen2 |
-| Canonical | 0001-com-ubuntu-server-jammy | 22_04-LTS |
-| MicrosoftCblMariner | Cbl-Mariner | cbl-mariner-1 |
-| MicrosoftCblMariner | Cbl-Mariner | 1-Gen2 |
-| MicrosoftCblMariner | Cbl-Mariner | cbl-mariner-2
-| MicrosoftCblMariner | Cbl-Mariner | cbl-mariner-2-Gen2 |
-| MicrosoftSqlServer | Sql2017-ws2019| enterprise |
+| Canonical | UbuntuServer | 18.04-LTS |
+| Canonical | UbuntuServer | 18_04-LTS-Gen2 |
+| Canonical | 0001-com-ubuntu-server-focal | 20_04-LTS |
+| Canonical | 0001-com-ubuntu-server-focal | 20_04-LTS-Gen2 |
+| Canonical | 0001-com-ubuntu-server-jammy | 22_04-LTS |
+| MicrosoftCblMariner | Cbl-Mariner | cbl-mariner-1 |
+| MicrosoftCblMariner | Cbl-Mariner | 1-Gen2 |
+| MicrosoftCblMariner | Cbl-Mariner | cbl-mariner-2 |
+| MicrosoftCblMariner | Cbl-Mariner | cbl-mariner-2-Gen2 |
+| MicrosoftSqlServer | Sql2017-ws2019| enterprise |
| MicrosoftWindowsServer | WindowsServer | 2012-R2-Datacenter | | MicrosoftWindowsServer | WindowsServer | 2016-Datacenter | | MicrosoftWindowsServer | WindowsServer | 2016-Datacenter-gensecond |
The following platform SKUs are currently supported (and more are added periodic
| MicrosoftWindowsServer | WindowsServer | 2019-Datacenter-gensecond | | MicrosoftWindowsServer | WindowsServer | 2019-Datacenter-gs | | MicrosoftWindowsServer | WindowsServer | 2019-Datacenter-smalldisk |
-| MicrosoftWindowsServer | WindowsServer | 2019-Datacenter-with-Containers |
+| MicrosoftWindowsServer | WindowsServer | 2019-Datacenter-with-Containers |
| MicrosoftWindowsServer | WindowsServer | 2019-Datacenter-with-Containers-gs | | MicrosoftWindowsServer | WindowsServer | 2022-Datacenter | | MicrosoftWindowsServer | WindowsServer | 2022-Datacenter-smalldisk |
The following platform SKUs are currently supported (and more are added periodic
- For scale sets using Windows virtual machines, starting with Compute API version 2019-03-01, the *virtualMachineProfile.osProfile.windowsConfiguration.enableAutomaticUpdates* property must be set to *false* in the scale set model definition. The *enableAutomaticUpdates* property enables in-VM patching where "Windows Update" applies operating system patches without replacing the OS disk. With automatic OS image upgrades enabled on your scale set, an extra patching process through Windows Update is not required. > [!NOTE]
-> After an OS disk is replaced through reimage or upgrade, the attached data disks may have their drive letters reassigned. To retain the same drive letters for attached disks, it is suggested to use a custom boot script.
+> After an OS disk is replaced through reimage or upgrade, the attached data disks may have their drive letters reassigned. To retain the same drive letters for attached disks, it is suggested to use a custom boot script.
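A minimal sketch of setting that property on an existing Windows scale set with the generic `--set` argument of `az vmss update` (the property path is the one named above; treat the command as illustrative rather than the only supported method):

```azurecli-interactive
# Disable in-VM Windows Update on the scale set model so automatic OS image upgrade handles patching.
az vmss update \
    --resource-group myResourceGroup \
    --name myScaleSet \
    --set virtualMachineProfile.osProfile.windowsConfiguration.enableAutomaticUpdates=false
```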
### Service Fabric requirements
Automatic OS image upgrade is supported for custom images deployed through [Azur
- The new image version should not be excluded from the latest version for that gallery image. Image versions excluded from the gallery image's latest version are not rolled out to the scale set through automatic OS image upgrade. > [!NOTE]
-> It can take up to 3 hours for a scale set to trigger the first image upgrade rollout after the scale set is first configured for automatic OS upgrades due to certain factors such as Maintenance Windows or other restrictions. Customers on the latest image may not get an upgrade until a new image is available.
+> It can take up to 3 hours for a scale set to trigger the first image upgrade rollout after the scale set is first configured for automatic OS upgrades due to certain factors such as Maintenance Windows or other restrictions. Customers on the latest image may not get an upgrade until a new image is available.
## Configure automatic OS image upgrade
az vmss update --name myScaleSet --resource-group myResourceGroup --set UpgradeP
The following example describes how to set automatic OS upgrades on a scale set model via Azure Resource Manager templates (ARM templates): ```json
-"properties": {
- "upgradePolicy": {
- "mode": "Automatic",
+"properties": {
+ "upgradePolicy": {
+ "mode": "Automatic",
"RollingUpgradePolicy": { "BatchInstancePercent": 20, "MaxUnhealthyInstancePercent": 25, "MaxUnhealthyUpgradedInstancePercent": 25, "PauseTimeBetweenBatches": "PT0S" },
- "automaticOSUpgradePolicy": {
+ "automaticOSUpgradePolicy": {
"enableAutomaticOSUpgrade": true, "useRollingUpgradePolicy": true,
- "disableAutomaticRollback": false
- }
+ "disableAutomaticRollback": false
+ }
}, }, "imagePublisher": {
The following example describes how to set automatic OS upgrades on a scale set
The following example describes how to set automatic OS upgrades on a scale set model via Bicep: ```bicep
-properties:ΓÇ»{
-    overprovision: overProvision
-    upgradePolicy: {
-      mode: 'Automatic'
-      automaticOSUpgradePolicy: {
-        enableAutomaticOSUpgrade: true
-      }
-    }
+properties: {
+    overprovision: overProvision
+    upgradePolicy: {
+      mode: 'Automatic'
+      automaticOSUpgradePolicy: {
+        enableAutomaticOSUpgrade: true
+      }
+    }
} ```
A scale set can optionally be configured with Application Health Probes to provi
If the scale set is configured to use multiple placement groups, probes using a [Standard Load Balancer](../load-balancer/load-balancer-overview.md) need to be used. > [!NOTE]
-> Only one source of health monitoring can be used for a Virtual Machine Scale Set, either an Application Health Extension or a Health Probe. If you have both options enabled, you will need to remove one before using orchestration services like Instance Repairs or Automatic OS Upgrades.
+> Only one source of health monitoring can be used for a Virtual Machine Scale Set, either an Application Health Extension or a Health Probe. If you have both options enabled, you will need to remove one before using orchestration services like Instance Repairs or Automatic OS Upgrades.
### Configuring a Custom Load Balancer Probe as Application Health Probe on a scale set
As the extension reports health from within a VM, the extension can be used in s
There are multiple ways of deploying the Application Health extension to your scale sets as detailed in the examples in [this article](virtual-machine-scale-sets-health-extension.md#deploy-the-application-health-extension). > [!NOTE]
-> Only one source of health monitoring can be used for a Virtual Machine Scale Set, either an Application Health Extension or a Health Probe. If you have both options enabled, you will need to remove one before using orchestration services like Instance Repairs or Automatic OS Upgrades.
+> Only one source of health monitoring can be used for a Virtual Machine Scale Set, either an Application Health Extension or a Health Probe. If you have both options enabled, you will need to remove one before using orchestration services like Instance Repairs or Automatic OS Upgrades.
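As one example of the deployment options linked above, the extension can be added to an existing scale set with `az vmss extension set`. The extension name, publisher, and settings below are a sketch for a Linux scale set exposing a hypothetical `/health` endpoint on port 80:

```azurecli-interactive
# Deploy the Application Health extension to a Linux scale set.
az vmss extension set \
    --resource-group myResourceGroup \
    --vmss-name myScaleSet \
    --name ApplicationHealthLinux \
    --publisher Microsoft.ManagedServices \
    --version 1.0 \
    --settings '{"protocol": "http", "port": 80, "requestPath": "/health"}'
```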
## Get the history of automatic OS image upgrades You can check the history of the most recent OS upgrade performed on your scale set with Azure PowerShell, Azure CLI 2.0, or the REST APIs. You can get history for the last five OS upgrade attempts within the past two months.
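With the Azure CLI, a quick way to view this history is `az vmss get-os-upgrade-history`; the following is a minimal sketch that reuses the scale set names from the other examples in these articles.

```azurecli-interactive
# Show the OS image upgrade history for the scale set.
az vmss get-os-upgrade-history \
    --resource-group myResourceGroup \
    --name myScaleSet \
    --output table
```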
virtual-machine-scale-sets Virtual Machine Scale Sets Instance Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-instance-protection.md
# Instance Protection for Azure Virtual Machine Scale Set instances > [!NOTE]
-> This document covers Virtual Machine Scale Sets using Uniform Orchestration mode. We recommend using Flexible Orchestration for new workloads. For more information, see [Orchesration modes for Virtual Machine Scale Sets in Azure](virtual-machine-scale-sets-orchestration-modes.md).
+> We recommend using Flexible Orchestration for new workloads. For more information, see [Orchestration modes for Virtual Machine Scale Sets in Azure](virtual-machine-scale-sets-orchestration-modes.md).
Azure Virtual Machine Scale Sets enable better elasticity for your workloads through [Autoscale](virtual-machine-scale-sets-autoscale-overview.md), so you can configure when your infrastructure scales-out and when it scales-in. Scale sets also enable you to centrally manage, configure, and update a large number of VMs through different [upgrade policy](virtual-machine-scale-sets-upgrade-policy.md) settings. You can configure an update on the scale set model and the new configuration is applied automatically to every scale set instance if you've set the upgrade policy to Automatic or Rolling.
PUT on `/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/provi
``` > [!NOTE]
->Instance protection is only supported with API version 2019-03-01 and above
+>With Flexible orchestration mode, instance protection is only supported with API version 2023-09-01 and above. For Uniform orchestration mode, instance protection is available with API version 2019-03-01 and above.
### Azure PowerShell
virtual-machine-scale-sets Virtual Machine Scale Sets Manage Fault Domains https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-manage-fault-domains.md
You can set the parameter `--platform-fault-domain-count` to 1, 2, or 3 (default
az vmss create \ --resource-group myResourceGroup \ --name myScaleSet \
+ --orchestration-mode Flexible \
--image Ubuntu2204 \ --admin-username azureuser \ --platform-fault-domain-count 3\
virtual-machine-scale-sets Virtual Machine Scale Sets Orchestration Modes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-orchestration-modes.md
The following table compares the Flexible orchestration mode, Uniform orchestrat
| Terminate Notifications (Virtual Machine Scale Set) | Yes, read [Terminate Notifications documentation](../virtual-machine-scale-sets/virtual-machine-scale-sets-terminate-notification.md) | Yes, read [Terminate Notifications documentation](../virtual-machine-scale-sets/virtual-machine-scale-sets-terminate-notification.md) | N/A | | Monitor Application Health | Application health extension | Application health extension or Azure load balancer probe | Application health extension | | Instance Repair (Virtual Machine Scale Set) | Yes, read [Instance Repair documentation](../virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-instance-repairs.md) | Yes, read [Instance Repair documentation](../virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-instance-repairs.md) | N/A |
-| Instance Protection | No, use [Azure resource lock](../azure-resource-manager/management/lock-resources.md) | Yes | No |
+| Instance Protection | Yes | Yes | No |
| Scale In Policy | Yes | Yes | No | | VMSS Get Instance View | No | Yes | N/A | | VM Batch Operations (Start all, Stop all, delete subset, etc.) | Yes | Yes | No |
The following Virtual Machine Scale Set parameters aren't currently supported wi
- Application health via SLB health probe - use Application Health Extension on instances - Virtual Machine Scale Set upgrade policy - must be null or empty - Unmanaged disks-- Virtual Machine Scale Set Instance Protection - Basic Load Balancer - Port Forwarding via Standard Load Balancer NAT Pool - you can configure NAT rules - System assigned Managed Identity - Use User assigned Managed Identity instead
virtual-machine-scale-sets Virtual Machine Scale Sets Scale In Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-scale-in-policy.md
New-AzVmss `
-ResourceGroupName "myResourceGroup" ` -Location "<VMSS location>" ` -VMScaleSetName "myScaleSet" `
+ -OrchestrationMode "Flexible" `
-ScaleInPolicy "OldestVM" ```
az group create --name <myResourceGroup> --location <VMSSLocation>
az vmss create \ --resource-group <myResourceGroup> \ --name <myVMScaleSet> \
+ --orchestration-mode flexible \
--image Ubuntu2204 \ --admin-username <azureuser> \ --generate-ssh-keys \
virtual-machine-scale-sets Virtual Machine Scale Sets Scaling Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-scaling-profile.md
By default, the Azure CLI will create a scale set with a scaling profile. Omit t
az vmss create \ --name myVmss \ --resource-group myResourceGroup \
+ --orchestration-mode flexible \
--platform-fault-domain-count 3 ```
virtual-machine-scale-sets Virtual Machine Scale Sets Terminate Notification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-terminate-notification.md
This sample script walks through the creation of a scale set and associated reso
New-AzVmssConfig ` -Location "VMSSLocation" ` -SkuCapacity 2 `
+ -OrchestrationMode "Flexible" `
-SkuName "Standard_DS2" ` -TerminateScheduledEvents $true ` -TerminateScheduledEventNotBeforeTimeoutInMinutes 10
virtual-machine-scale-sets Virtual Machine Scale Sets Use Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-use-availability-zones.md
Last updated 11/22/2022 -+ # Create a Virtual Machine Scale Set that uses Availability Zones
Virtual Machine Scale Sets supports three zonal deployment models:
- Zonal or zone aligned (single zone) - Regional
-### Zone redundant or zone spanning
+### Zone redundant or zone spanning
A zone redundant or zone spanning scale set spreads instances across all selected zones, `"zones": ["1","2","3"]`. By default, the scale set performs a best effort approach to evenly spread instances across selected zones. However, you can specify that you want strict zone balance by setting `"zoneBalance": "true"` in your deployment. Each VM and its disks are zonal, so they are pinned to a specific zone. Instances between zones are connected by high-performance network with low latency. In the event of a zonal outage or connectivity issue, connectivity to instances within the affected zone may be compromised, while instances in other availability zones should be unaffected. You may add capacity to the scale set during a zonal outage, and the scale set adds more instances to the unaffected zones. When the zone is restored, you may need to scale down your scale set to the original capacity. A best practice would be to configure [autoscale](virtual-machine-scale-sets-autoscale-overview.md) rules based on CPU or memory usage. The autoscale rules would allow the scale set to respond to a loss of the VM instances in that one zone by scaling out new instances in the remaining operational zones.
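For reference, a zone-spanning scale set can be created from the CLI by passing all of the selected zones to `--zones`; this is a sketch that reuses the resource names from the other CLI examples in these articles.

```azurecli-interactive
# Create a scale set whose instances are spread across availability zones 1, 2, and 3.
az vmss create \
    --resource-group myResourceGroup \
    --name myScaleSet \
    --image Ubuntu2204 \
    --zones 1 2 3 \
    --admin-username azureuser \
    --generate-ssh-keys
```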
A zonal or zone aligned scale set places instances in a single availability zone
### Regional
-A regional Virtual Machine Scale Set is when the zone assignment isn't explicitly set (`"zones"=[]` or `"zones"=null`). In this configuration, the scale set creates Regional (not-zone pinned) instances and implicitly places instances throughout the region. There is no guarantee for balance or spread across zones, or that instances land in the same availability zone. Disk colocation is guaranteed for Ultra and Premium v2 disks, best effort for Premium V1 disks, and not guaranteed for Standard SKU (SSD or HDD) disks.
+A regional Virtual Machine Scale Set is one where the zone assignment isn't explicitly set (`"zones"=[]` or `"zones"=null`). In this configuration, the scale set creates Regional (not-zone pinned) instances and implicitly places instances throughout the region. There is no guarantee for balance or spread across zones, or that instances land in the same availability zone. Disk colocation is guaranteed for Ultra and Premium v2 disks, best effort for Premium V1 disks, and not guaranteed for Standard SKU (SSD or HDD) disks.
In the rare case of a full zonal outage, any or all instances within the scale set may be impacted.
With max spreading, the scale set spreads your VMs across as many fault domains
### Placement groups > [!IMPORTANT]
-> Placement groups only apply to Virtual Machine Scale Sets running in Uniform orchestration mode.
+> Placement groups only apply to Virtual Machine Scale Sets running in Uniform orchestration mode.
When you deploy a scale set, you can deploy with a single [placement group](./virtual-machine-scale-sets-placement-groups.md) per Availability Zone, or with multiple per zone. For regional (non-zonal) scale sets, the choice is to have a single placement group in the region or to have multiple in the region. If the scale set property called `singlePlacementGroup` is set to false, the scale set can be composed of multiple placement groups and has a range of 0-1,000 VMs. When set to the default value of true, the scale set is composed of a single placement group, and has a range of 0-100 VMs. For most workloads, we recommend multiple placement groups, which allows for greater scale. In API version *2017-12-01*, scale sets default to multiple placement groups for single-zone and cross-zone scale sets, but they default to single placement group for regional (non-zonal) scale sets.
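A minimal sketch of opting into multiple placement groups at creation time, assuming a Uniform orchestration scale set as required by the note above:

```azurecli-interactive
# Create a Uniform scale set that can span multiple placement groups (up to 1,000 VMs).
az vmss create \
    --resource-group myResourceGroup \
    --name myScaleSet \
    --orchestration-mode Uniform \
    --image Ubuntu2204 \
    --single-placement-group false \
    --admin-username azureuser \
    --generate-ssh-keys
```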
New-AzVmss `
## Use Azure Resource Manager templates
-The process to create a scale set that uses an Availability Zone is the same as detailed in the getting started article for [Linux](quick-create-template-linux.md) or [Windows](quick-create-template-windows.md).
+The process to create a scale set that uses an Availability Zone is the same as detailed in the getting started article for [Linux](quick-create-template-linux.md) or [Windows](quick-create-template-windows.md).
```json {
Get-AzProviderPreviewFeature -Name <feature-name> -ProviderNamespace Microsoft.C
### Expand scale set to use availability zones
-You can update the scale set to scale out instances to one or more additional availability zones, up to the number of availability zones supported by the region. For regions that support zones, the minimum number of zones is 3.
+You can update the scale set to scale out instances to one or more additional availability zones, up to the number of availability zones supported by the region. For regions that support zones, the minimum number of zones is 3.
> [!IMPORTANT]
-> When you expand the scale set to additional zones, the original instances are not migrated or changed. When you scale out, new instances will be created and spread evenly across the selected availability zones. When you scale in the scale set, any regional instances will be priorized for removal first. After that, instances will be removed based on the [scale in policy](virtual-machine-scale-sets-scale-in-policy.md).
+> When you expand the scale set to additional zones, the original instances are not migrated or changed. When you scale out, new instances will be created and spread evenly across the selected availability zones. When you scale in the scale set, any regional instances will be prioritized for removal first. After that, instances will be removed based on the [scale in policy](virtual-machine-scale-sets-scale-in-policy.md).
Expanding to a zonal scale set is done in 3 steps:
Expanding to a zonal scale set is done in 3 steps:
> This preview allows you to add zones to the scale set. You can't go back to a regional scale set or remove zones once they have been added. In order to prepare for zonal expansion:
-* [Check that you have enough quota](../virtual-machines/quotas.md) for the VM size in the selected region to handle more instances.
+* [Check that you have enough quota](../virtual-machines/quotas.md) for the VM size in the selected region to handle more instances.
* Check that the VM size and disk types you are using are available in all the desired zones. You can use the [Compute Resources SKUs API](/rest/api/compute/resource-skus/list?tabs=HTTP) to determine which sizes are available in which zones * Validate that the scale set configuration is valid for zonal scale sets: * `platformFaultDomainCount` must be set to 1 or 5. Fixed spreading with 2 or 3 fault domains isn't supported for zonal deployments.
PATCH /subscriptions/subscriptionid/resourceGroups/resourcegroupo/providers/Micr
```javascript { "zones": [
- "1",
+ "1",
"2", "3" ]
PATCH /subscriptions/subscriptionid/resourceGroups/resourcegroupo/providers/Micr
#### Manually scale out and in
-[Update the capacity](virtual-machine-scale-sets-autoscale-overview.md) of the scale set to add more instances. The new capacity should be set to the original capacity plus the number of new instances. For example, if your scale set had 5 regional instances and you would like to scale out so that you have 3 instances in each of 3 zones, you should set the capacity to 14.
+[Update the capacity](virtual-machine-scale-sets-autoscale-overview.md) of the scale set to add more instances. The new capacity should be set to the original capacity plus the number of new instances. For example, if your scale set had 5 regional instances and you would like to scale out so that you have 3 instances in each of 3 zones, you should set the capacity to 14.
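For the manual path, this is a sketch of that capacity update with `az vmss scale`, using the numbers from the example above:

```azurecli-interactive
# Scale out from 5 regional instances to a total capacity of 14 (5 original + 9 new zonal instances).
az vmss scale \
    --resource-group myResourceGroup \
    --name myScaleSet \
    --new-capacity 14
```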
You can update the zones parameter and the scale set capacity in the same ARM template or REST API call.
When you are satisfied that the new instances are ready, scale in your scale set
#### Automate with Rolling upgrades + MaxSurge
-With [Rolling upgrades + MaxSurge](virtual-machine-scale-sets-upgrade-policy.md), new zonal instances are created and brought up-to-date with the latest scale model in batches. Once a batch of new instances is added to the scale set and report as healthy, a batch of old instances are automated removed from the scale set. Upgrades continue until all instances are brought up-to-date.
+With [Rolling upgrades + MaxSurge](virtual-machine-scale-sets-upgrade-policy.md), new zonal instances are created and brought up-to-date with the latest scale set model in batches. Once a batch of new instances is added to the scale set and reports as healthy, a batch of old instances is automatically removed from the scale set. Upgrades continue until all instances are brought up-to-date.
> [!IMPORTANT] > Rolling upgrades with MaxSurge is currently under Public Preview. It is only available for VMSS Uniform Orchestration Mode. ### Preview known issues and limitations
-* The preview is targeted to stateless workloads on Virtual Machine Scale Sets.
+* The preview is targeted to stateless workloads on Virtual Machine Scale Sets.
* Scale sets running Service Fabric or Azure Kubernetes Service are not supported.
With [Rolling upgrades + MaxSurge](virtual-machine-scale-sets-upgrade-policy.md)
* Capacity reservations are not supported during zone expansion. Once the scale set is fully zonal (no more regional instances), you can add a capacity reservation group to the scale set.
-* Azure Dedicated Host deployments are not supported
+* Azure Dedicated Host deployments are not supported
## Next steps
virtual-machines Automatic Vm Guest Patching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/automatic-vm-guest-patching.md
Last updated 10/20/2021 -+ # Automatic VM guest patching for Azure VMs
Patches are installed within 30 days of the monthly patch releases, following av
Definition updates and other patches not classified as *Critical* or *Security* won't be installed through automatic VM guest patching. To install patches with other patch classifications or schedule patch installation within your own custom maintenance window, you can use [Update Management](./windows/tutorial-config-management.md#manage-windows-updates).
-For IaaS VMs, customers can choose to configure VMs to enable automatic VM guest patching. This will limit the blast radius of VMs getting the updated patch and do an orchestrated update of the VMs. The service also provides [health monitoring](../virtual-machine-scale-sets/virtual-machine-scale-sets-health-extension.md) to detect issues any issues with the update.
+For IaaS VMs, customers can choose to configure VMs to enable automatic VM guest patching. This limits the blast radius of VMs getting the updated patch and performs an orchestrated update of the VMs. The service also provides [health monitoring](../virtual-machine-scale-sets/virtual-machine-scale-sets-health-extension.md) to detect any issues with the update.
### Availability-first Updates
-The patch installation process is orchestrated globally by Azure for all VMs that have automatic VM guest patching enabled. This orchestration follows availability-first principles across different levels of availability provided by Azure.
+The patch installation process is orchestrated globally by Azure for all VMs that have automatic VM guest patching enabled. This orchestration follows availability-first principles across different levels of availability provided by Azure.
For a group of virtual machines undergoing an update, the Azure platform will orchestrate updates:
VMs on Azure now support the following patch orchestration modes:
- For Windows VMs, setting this mode also disables the native Automatic Updates on the Windows virtual machine to avoid duplication. - To use this mode on Linux VMs, set the property `osProfile.linuxConfiguration.patchSettings.patchMode=AutomaticByPlatform` in the VM template. - To use this mode on Windows VMs, set the property `osProfile.windowsConfiguration.patchSettings.patchMode=AutomaticByPlatform` in the VM template.-- Enabling this mode will set the Registry Key SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU\NoAutoUpdate to 1
+- Enabling this mode will set the Registry Key SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU\NoAutoUpdate to 1
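As a minimal sketch, the Linux property listed above can also be set on an existing VM with the CLI's generic update; the resource names are placeholders:

```bash
# Sketch: switch an existing Linux VM to platform-orchestrated patching.
# myResourceGroup and myLinuxVM are placeholder names.
az vm update \
  --resource-group myResourceGroup \
  --name myLinuxVM \
  --set osProfile.linuxConfiguration.patchSettings.patchMode=AutomaticByPlatform
```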
**AutomaticByOS:** - This mode is supported only for Windows VMs.
VMs on Azure now support the following patch orchestration modes:
**Manual:** - This mode is supported only for Windows VMs.-- This mode disables Automatic Updates on the Windows virtual machine. When deploying a VM using CLI or PowerShell, setting `--enable-auto-updates` to `false` will also set `patchMode` to `manual` and will disable Automatic Updates.
+- This mode disables Automatic Updates on the Windows virtual machine. When deploying a VM using CLI or PowerShell, setting `--enable-auto-updates` to `false` will also set `patchMode` to `manual` and will disable Automatic Updates.
- This mode does not support availability-first patching. - This mode should be set when using custom patching solutions. - To use this mode on Windows VMs, set the property `osProfile.windowsConfiguration.enableAutomaticUpdates=false`, and set the property `osProfile.windowsConfiguration.patchSettings.patchMode=Manual` in the VM template.-- Enabling this mode will set the Registry Key SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU\NoAutoUpdate to 1
+- Enabling this mode will set the Registry Key SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU\NoAutoUpdate to 1
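For example, a hedged sketch of deploying a Windows VM with the `--enable-auto-updates` flag mentioned above set to `false`; the resource names, credentials, and image alias are placeholders:

```bash
# Sketch: create a Windows VM with Automatic Updates disabled, which also sets patchMode to manual.
# Resource names, the password placeholder, and the image alias are illustrative only.
az vm create \
  --resource-group myResourceGroup \
  --name myWindowsVM \
  --image Win2019Datacenter \
  --admin-username azureuser \
  --admin-password '<your-password>' \
  --enable-auto-updates false
```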
**ImageDefault:** - This mode is supported only for Linux VMs.
VMs on Azure now support the following patch orchestration modes:
- The virtual machine must be able to access the configured update endpoints. If your virtual machine is configured to use private repositories for Linux or Windows Server Update Services (WSUS) for Windows VMs, the relevant update endpoints must be accessible. - Use Compute API version 2021-03-01 or higher to access all functionality including on-demand assessment and on-demand patching. - Custom images aren't currently supported.-- VMSS Flexible Orchestration requires the installation of [Application Health extension](../virtual-machine-scale-sets/virtual-machine-scale-sets-health-extension.md). This is optional for IaaS VMs.
+- VMSS Flexible Orchestration requires the installation of [Application Health extension](../virtual-machine-scale-sets/virtual-machine-scale-sets-health-extension.md). This is optional for IaaS VMs.
## Enable automatic VM guest patching
-Automatic VM guest patching can be enabled on any Windows or Linux VM that is created from a supported platform image.
+Automatic VM guest patching can be enabled on any Windows or Linux VM that is created from a supported platform image.
### REST API for Linux VMs
az vm update --resource-group myResourceGroup --name myVM --set osProfile.window
``` ### Azure portal
-When creating a VM using the Azure portal, patch orchestration modes can be set under the **Management** tab for both Linux and Windows.
+When creating a VM using the Azure portal, patch orchestration modes can be set under the **Management** tab for both Linux and Windows.
:::image type="content" source="./media/automatic-vm-guest-patching/auto-guest-patching-portal.png" alt-text="Shows the management tab in the Azure portal used to enable patch orchestration modes.":::
az vm install-patches --resource-group myResourceGroup --name myVM --maximum-dur
``` ## Strict Safe Deployment on Canonical Images (Preview)
-[Microsoft and Canonical have partnered](https://ubuntu.com/blog/ubuntu-snapshots-on-azure-ensuring-predictability-and-consistency-in-cloud-deployments) to make it easier for our customers to stay current with Linux OS updates and increase the security and resiliency of their Ubuntu workloads on Azure. By leveraging CanonicalΓÇÖs snapshot service, Azure will now apply the same set of Ubuntu updates consistently to your fleet across regions.
+[Microsoft and Canonical have partnered](https://ubuntu.com/blog/ubuntu-snapshots-on-azure-ensuring-predictability-and-consistency-in-cloud-deployments) to make it easier for our customers to stay current with Linux OS updates and increase the security and resiliency of their Ubuntu workloads on Azure. By leveraging Canonical's snapshot service, Azure will now apply the same set of Ubuntu updates consistently to your fleet across regions.
Azure will store the package related updates within the customer repository for up to 90 days, depending on the available space. This allows customers to update their fleet leveraging Strict Safe Deployment for VMs that are up to 3 months behind on updates.
-There is no action required for customers that have enabled Auto Patching. The platform will install a package that is snapped to a point-in-time by default. In the event a snapshot-based update cannot be installed, Azure will apply the latest package on the VM to ensure the VM remains secure. The point-in-time updates will be consistent on all VMs across regions to ensure homogeneity. Customers can view the published date information related to the applied update in [Azure Resource Graph](/azure/governance/resource-graph/overview) and the [Instance View](/powershell/module/az.compute/get-azvm) of the VM.
+There is no action required for customers that have enabled Auto Patching. The platform will install a package that is snapped to a point-in-time by default. In the event a snapshot-based update cannot be installed, Azure will apply the latest package on the VM to ensure the VM remains secure. The point-in-time updates will be consistent on all VMs across regions to ensure homogeneity. Customers can view the published date information related to the applied update in [Azure Resource Graph](/azure/governance/resource-graph/overview) and the [Instance View](/powershell/module/az.compute/get-azvm) of the VM.
## Image End-of-Life (EOL)
virtual-machines Boot Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/boot-diagnostics.md
Title: Azure boot diagnostics
description: Overview of Azure boot diagnostics and managed boot diagnostics -+
Boot diagnostics is a debugging feature for Azure virtual machines (VM) that all
When you create a VM in Azure portal, boot diagnostics is enabled by default. The recommended boot diagnostics experience is to use a managed storage account, as it yields significant performance improvements in the time to create an Azure VM. An Azure managed storage account is used, removing the time it takes to create a user storage account to store the boot diagnostics data. > [!IMPORTANT]
-> The boot diagnostics data blobs (which comprise of logs and snapshot images) are stored in a managed storage account. Customers will be charged only on used GiBs by the blobs, not on the disk's provisioned size. The snapshot meters will be used for billing of the managed storage account. Because the managed accounts are created on either Standard LRS or Standard ZRS, customers will be charged at $0.05/GB per month for the size of their diagnostic data blobs only. For more information on this pricing, see [Managed disks pricing](https://azure.microsoft.com/pricing/details/managed-disks/). Customers see this charge tied to their VM resource URI.
+> The boot diagnostics data blobs (which comprise logs and snapshot images) are stored in a managed storage account. Customers will be charged only for the GiBs actually used by the blobs, not for the disk's provisioned size. The snapshot meters will be used for billing of the managed storage account. Because the managed accounts are created on either Standard LRS or Standard ZRS, customers will be charged at $0.05/GB per month for the size of their diagnostic data blobs only. For more information on this pricing, see [Managed disks pricing](https://azure.microsoft.com/pricing/details/managed-disks/). Customers see this charge tied to their VM resource URI.
An alternative boot diagnostic experience is to use a custom storage account. A user can either create a new storage account or use an existing one. When the storage firewall is enabled on the custom storage account (**Enabled from all networks** option isn't selected), you must:
An alternative boot diagnostic experience is to use a custom storage account. A
To configure the storage firewall for Azure Serial Console, see [Use Serial Console with custom boot diagnostics storage account firewall enabled](/troubleshoot/azure/virtual-machines/serial-console-windows#use-serial-console-with-custom-boot-diagnostics-storage-account-firewall-enabled). > [!NOTE]
-> The custom storage account associated with boot diagnostics requires the storage account and the associated virtual machines reside in the same region and subscription.
+> The custom storage account associated with boot diagnostics requires that the storage account and the associated virtual machines reside in the same region and subscription.
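As a minimal sketch, assuming placeholder resource names and storage endpoint, an existing VM can be pointed at a custom storage account with the Azure CLI:

```bash
# Sketch: route boot diagnostics to a custom storage account in the same region and subscription as the VM.
# myResourceGroup, myVM, and the storage endpoint are placeholders.
az vm boot-diagnostics enable \
  --resource-group myResourceGroup \
  --name myVM \
  --storage https://mystorageaccount.blob.core.windows.net/
```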
## Boot diagnostics view
Managed boot diagnostics can be enabled through the Azure portal, CLI and ARM Te
### Enable managed boot diagnostics using the Azure portal
-When you create a VM in the Azure portal, the default setting is to have boot diagnostics enabled using a managed storage account. Navigate to the *Management* tab during the VM creation to view it.
+When you create a VM in the Azure portal, the default setting is to have boot diagnostics enabled using a managed storage account. Navigate to the *Management* tab during the VM creation to view it.
:::image type="content" source="./media/boot-diagnostics/boot-diagnostics-enable-portal.png" alt-text="Screenshot enabling managed boot diagnostics during VM creation.":::
Everything after API version 2020-06-01 supports managed boot diagnostics. For m
## Limitations -- Managed boot diagnostics is only available for Azure Resource Manager VMs.
+- Managed boot diagnostics is only available for Azure Resource Manager VMs.
- Managed boot diagnostics doesn't support VMs using unmanaged OS disks.-- Boot diagnostics doesn't support premium storage accounts or zone redundant storage accounts. If either of these are used for boot diagnostics users receive an `StorageAccountTypeNotSupported` error when starting the VM.
+- Boot diagnostics doesn't support premium storage accounts or zone redundant storage accounts. If either of these is used for boot diagnostics, users receive a `StorageAccountTypeNotSupported` error when starting the VM.
- Managed storage accounts are supported in Resource Manager API version "2020-06-01" and later. - Portal only supports the use of boot diagnostics with a managed storage account for single instance VMs. - Users can't configure a retention period for Managed Boot Diagnostics. The logs are overwritten when the total size crosses 1 GB.
virtual-machines Compiling Scaling Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/compiling-scaling-applications.md
Title: Scaling HPC applications - Azure Virtual Machines | Microsoft Docs
description: Learn how to scale HPC applications on Azure VMs. -+ Last updated 04/11/2023
Adaptive Routing (AR) allows Azure Virtual Machines (VMs) running EDR and HDR In
## Process pinning -- Pin processes to cores using a sequential pinning approach (as opposed to an autobalance approach).
+- Pin processes to cores using a sequential pinning approach (as opposed to an autobalance approach).
- Binding by Numa/Core/HwThread is better than default binding. - For hybrid parallel applications (OpenMP+MPI), use four threads and one MPI rank per [CCX](/azure/virtual-machines/hb-series-overview) on HB and HBv2 VM sizes (see the sketch after this list). - For pure MPI applications, experiment with between one and four MPI ranks per CCX for optimal performance on HB and HBv2 VM sizes.
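The following is only a sketch of the hybrid (OpenMP+MPI) layout described above, assuming Open MPI 4.x; the binary name and rank count are placeholders, and the exact flags differ for other MPI libraries:

```bash
# Sketch: one MPI rank per CCX (approximated by mapping one rank per L3 cache) with four OpenMP threads each.
# ./my_hybrid_app and the rank count are placeholders; adjust both for your VM size and MPI library.
export OMP_NUM_THREADS=4
mpirun -np 30 \
  --map-by ppr:1:l3cache:pe=4 \
  --bind-to core \
  --report-bindings \
  ./my_hybrid_app
```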
The AMD Optimizing C/C++ Compiler (AOCC) compiler system offers a high level of
### Clang
-Clang is a C, C++, and Objective-C compiler handling preprocessing, parsing, optimization, code generation, assembly, and linking.
+Clang is a C, C++, and Objective-C compiler handling preprocessing, parsing, optimization, code generation, assembly, and linking.
Clang supports the `-march=znver1` flag to enable best code generation and tuning for AMD's Zen based x86 architecture. ### FLANG
The FLANG compiler is a recent addition to the AOCC suite (added April 2018) and
DragonEgg is a gcc plugin that replaces GCCΓÇÖs optimizers and code generators from the LLVM project. DragonEgg that comes with AOCC works with gcc-4.8.x, has been tested for x86-32/x86-64 targets, and has been successfully used on various Linux platforms.
-GFortran is the actual frontend for Fortran programs responsible for preprocessing, parsing, and semantic analysis generating the GCC GIMPLE intermediate representation (IR). DragonEgg is a GNU plugin, plugging into GFortran compilation flow. It implements the GNU plugin API. With the plugin architecture, DragonEgg becomes the compiler driver, driving the different phases of compilation. After following the download and installation instructions, Dragon Egg can be invoked using:
+GFortran is the actual frontend for Fortran programs, responsible for preprocessing, parsing, and semantic analysis, and for generating the GCC GIMPLE intermediate representation (IR). DragonEgg is a GNU plugin that plugs into the GFortran compilation flow. It implements the GNU plugin API. With the plugin architecture, DragonEgg becomes the compiler driver, driving the different phases of compilation. After following the download and installation instructions, DragonEgg can be invoked using:
```bash
-gfortran [gFortran flags]
- -fplugin=/path/AOCC-1.2-Compiler/AOCC-1.2-
- FortranPlugin/dragonegg.so [plugin optimization flags]
+gfortran [gFortran flags]
+ -fplugin=/path/AOCC-1.2-Compiler/AOCC-1.2-
+ FortranPlugin/dragonegg.so [plugin optimization flags]
   -c xyz.f90
$ clang -O3 -lgfortran -o xyz xyz.o
$ ./xyz
```
virtual-machines Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/configure.md
Title: Configuration and Optimization of InfiniBand enabled H-series and N-serie
description: Learn about configuring and optimizing the InfiniBand enabled H-series and N-series VMs for HPC. -+ Last updated 10/03/2023
virtual-machines Dedicated Hosts How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dedicated-hosts-how-to.md
-+ Last updated 07/12/2023
az vm host group create \
-z 1 \ --ultra-ssd-enabled true \ --platform-fault-domain-count 2 \
- --automatic-placement true
+ --automatic-placement true
``` ### [PowerShell](#tab/powershell)
Move the VM to a dedicated host using the [portal](https://portal.azure.com).
### [CLI](#tab/cli)
-Move the existing VM to a dedicated host using the CLI. The VM must be Stop/Deallocated using [az vm deallocate](/cli/azure/vm#az_vm_stop) in order to assign it to a dedicated host.
+Move the existing VM to a dedicated host using the CLI. The VM must be stopped and deallocated using [az vm deallocate](/cli/azure/vm#az_vm_stop) in order to assign it to a dedicated host.
Replace the values with your own information.
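A rough sequence with the Azure CLI might look like the following; the resource group, VM, host group, and host names are placeholders:

```bash
# Sketch: look up the dedicated host ID, deallocate the VM, assign it to the host, then start it again.
# myResourceGroup, myVM, myHostGroup, and myHost are placeholder names.
hostId=$(az vm host get --resource-group myResourceGroup --host-group myHostGroup \
  --name myHost --query id --output tsv)
az vm deallocate --resource-group myResourceGroup --name myVM
az vm update --resource-group myResourceGroup --name myVM --host "$hostId"
az vm start --resource-group myResourceGroup --name myVM
```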
Move a VM from dedicated host to multitenant infrastructure using the [portal](h
### [CLI](#tab/cli)
-Move a VM from dedicated host to multitenant infrastructure using the CLI. The VM must be Stop/Deallocated using [az vm deallocate](/cli/azure/vm#az_vm_stop) in order to assign it to reconfigure it as a multitenant VM.
+Move a VM from dedicated host to multitenant infrastructure using the CLI. The VM must be stopped and deallocated using [az vm deallocate](/cli/azure/vm#az_vm_stop) in order to reconfigure it as a multitenant VM.
Replace the values with your own information.
Stop-AzVM `
Update-AzVM ` -ResourceGroupName $vmRGName ` -VM $myVM `
- -HostId ''
+ -HostId ''
Start-AzVM ` -ResourceGroupName $vmRGName `
Restarting a host does not completely power off the host. When the host restarts
### [Portal](#tab/portal) 1. Search for and select the host.
-1. In the top menu bar, select the **Restart** button.
+1. In the top menu bar, select the **Restart** button.
1. In the **Essentials** section of the Host Resource Pane, Host Status will switch to **Host undergoing restart** during the restart. 1. Once the restart has completed, the Host Status will return to **Host available**.
virtual-machines Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/delete.md
Last updated 05/09/2022 -+ # Delete a VM and attached resources
Depending on how you delete a VM, it may only delete the VM resource, not the ne
1. When you're done making selections, select **Review + create**. 1. You can verify which resources you have chosen to delete when you delete the VM.
-1. When you're satisfied with your selections, and validation passes, select **Create** to deploy the VM.
+1. When you're satisfied with your selections, and validation passes, select **Create** to deploy the VM.
### [CLI](#tab/cli2)
New-AzVm `
-VirtualNetworkName "myVnet" ` -SubnetName "mySubnet" ` -SecurityGroupName "myNetworkSecurityGroup" `
- -PublicIpAddressName "myPublicIpAddress"
+ -PublicIpAddressName "myPublicIpAddress"
``` ### [REST](#tab/rest2)
-This example shows how to set the data disk and NIC to be deleted when the VM is deleted. Note, the API version specified in the api-version parameter must be '2021-03-01' or newer to configure the delete option.
+This example shows how to set the data disk and NIC to be deleted when the VM is deleted. Note that the API version specified in the api-version parameter must be '2021-03-01' or newer to configure the delete option.
```rest
-PUT
-https://management.azure.com/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.Compute/virtualMachines/myVM?api-version=xx
-{
- "storageProfile": {
- "dataDisks": [
- {
- "diskSizeGB": 1023,
- "name": "myVMdatadisk",
- "createOption": "Empty",
- "lun": 0,
+PUT
+https://management.azure.com/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.Compute/virtualMachines/myVM?api-version=xx
+{
+ "storageProfile": {
+ "dataDisks": [
+ {
+ "diskSizeGB": 1023,
+ "name": "myVMdatadisk",
+ "createOption": "Empty",
+ "lun": 0,
"deleteOption": "Delete" }
- ]
- },
- "networkProfile": {
- "networkInterfaces": [
- {
- "id": "/subscriptions/.../Microsoft.Network/networkInterfaces/myNIC",
- "properties": {
+ ]
+ },
+ "networkProfile": {
+ "networkInterfaces": [
+ {
+ "id": "/subscriptions/.../Microsoft.Network/networkInterfaces/myNIC",
+ "properties": {
"primary": true, "deleteOption": "Delete" }
- }
+ }
] }
-}
+}
``` You can also set this property for a Public IP associated with a NIC, so that the Public IP is automatically deleted when the NIC gets deleted. ```rest
-PUT https://management.azure.com/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.Network/networkInterfaces/test-nic?api-version=xx
-{
+PUT https://management.azure.com/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.Network/networkInterfaces/test-nic?api-version=xx
+{
- "properties": {
+ "properties": {
- "enableAcceleratedNetworking": true,
+ "enableAcceleratedNetworking": true,
- "ipConfigurations": [
+ "ipConfigurations": [
- {
+ {
- "name": "ipconfig1",
+ "name": "ipconfig1",
- "properties": {
+ "properties": {
- "publicIPAddress": {
+ "publicIPAddress": {
- "id": "/subscriptions/../publicIPAddresses/test-ip",
+ "id": "/subscriptions/../publicIPAddresses/test-ip",
-          "properties": {
+          "properties": {
            "deleteOption": "Delete" }
- },
+ },
- "subnet": {
+ "subnet": {
- "id": "/subscriptions/../virtualNetworks/rg1-vnet/subnets/default"
+ "id": "/subscriptions/../virtualNetworks/rg1-vnet/subnets/default"
- }
+ }
- }
+ }
- }
+ }
- ]
+ ]
- },
+ },
- "location": "eastus"
+ "location": "eastus"
} ```
PUT https://management.azure.com/subscriptions/subid/resourceGroups/rg1/provider
## Update the delete behavior on an existing VM
-You can change the behavior when you delete a VM.
+You can change the delete behavior for the attached resources on an existing VM.
### [CLI](#tab/cli3)
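For instance, a hedged sketch using the CLI's generic `--set`, following the same property paths as the REST example above; the resource names are placeholders:

```bash
# Sketch: mark the OS disk and the first NIC for deletion when this VM is deleted.
# myResourceGroup and myVM are placeholder names.
az vm update \
  --resource-group myResourceGroup \
  --name myVM \
  --set storageProfile.osDisk.deleteOption=Delete \
        networkProfile.networkInterfaces[0].deleteOption=Delete
```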
$vmConfig.StorageProfile.OsDisk.DeleteOption = 'Delete'
$vmConfig.StorageProfile.DataDisks | ForEach-Object { $_.DeleteOption = 'Delete' } $vmConfig.NetworkProfile.NetworkInterfaces | ForEach-Object { $_.DeleteOption = 'Delete' } $vmConfig | Update-AzVM
-```
+```
### [REST](#tab/rest3)
-The following example updates the VM to delete the NIC, OS disk, and data disk when the VM is deleted. Note, the API version specified in the api-version parameter must be '2021-03-01' or newer to configure the delete option.
+The following example updates the VM to delete the NIC, OS disk, and data disk when the VM is deleted. Note that the API version specified in the api-version parameter must be '2021-03-01' or newer to configure the delete option.
```rest
-PATCH https://management.azure.com/subscriptions/subID/resourceGroups/resourcegroup/providers/Microsoft.Compute/virtualMachines/testvm?api-version=2021-07-01
--
-{
- "properties": {
- "hardwareProfile": {
- "vmSize": "Standard_D2s_v3"
- },
- "storageProfile": {
- "imageReference": {
- "publisher": "MicrosoftWindowsServer",
- "offer": "WindowsServer",
- "sku": "2019-Datacenter",
- "version": "latest",
- "exactVersion": "17763.3124.2111130129"
- },
- "osDisk": {
- "osType": "Windows",
- "name": "OsDisk_1",
- "createOption": "FromImage",
- "caching": "ReadWrite",
- "managedDisk": {
- "storageAccountType": "Premium_LRS",
- "id": "/subscriptions/subID/resourceGroups/resourcegroup/providers/Microsoft.Compute/disks/OsDisk_1"
- },
- "deleteOption": "Delete",
- "diskSizeGB": 127
- },
- "dataDisks": [
- {
- "lun": 0,
- "name": "DataDisk_0",
- "createOption": "Attach",
- "caching": "None",
- "writeAcceleratorEnabled": false,
- "managedDisk": {
- "storageAccountType": "Premium_LRS",
- "id": "/subscriptions/subID/resourceGroups/resourcegroup/providers/Microsoft.Compute/disks/DataDisk_0"
- },
- "deleteOption": "Delete",
- "diskSizeGB": 1024,
- "toBeDetached": false
- },
- {
- "lun": 1,
- "name": "DataDisk_1",
- "createOption": "Attach",
- "caching": "None",
- "writeAcceleratorEnabled": false,
- "managedDisk": {
- "storageAccountType": "Premium_LRS",
- "id": "/subscriptions/subID/resourceGroups/resourcegroup/providers/Microsoft.Compute/disks/DataDisk_1"
- },
- "deleteOption": "Delete",
- "diskSizeGB": 1024,
- "toBeDetached": false
- }
- ]
- },
- "osProfile": {
- "computerName": "testvm",
- "adminUsername": "azureuser",
- "windowsConfiguration": {
- "provisionVMAgent": true,
- "enableAutomaticUpdates": true,
- "patchSettings": {
- "patchMode": "AutomaticByOS",
- "assessmentMode": "ImageDefault",
- "enableHotpatching": false
- }
- },
- "secrets": [],
- "allowExtensionOperations": true,
- "requireGuestProvisionSignal": true
- },
- "networkProfile": {
- "networkInterfaces": [
- {
- "id": "/subscriptions/subID/resourceGroups/resourcegroup/providers/Microsoft.Network/networkInterfaces/nic336",
- "properties": {
- "deleteOption": "Delete"
- }
- }
- ]
- }
- }
-}
+PATCH https://management.azure.com/subscriptions/subID/resourceGroups/resourcegroup/providers/Microsoft.Compute/virtualMachines/testvm?api-version=2021-07-01
++
+{
+ "properties": {
+ "hardwareProfile": {
+ "vmSize": "Standard_D2s_v3"
+ },
+ "storageProfile": {
+ "imageReference": {
+ "publisher": "MicrosoftWindowsServer",
+ "offer": "WindowsServer",
+ "sku": "2019-Datacenter",
+ "version": "latest",
+ "exactVersion": "17763.3124.2111130129"
+ },
+ "osDisk": {
+ "osType": "Windows",
+ "name": "OsDisk_1",
+ "createOption": "FromImage",
+ "caching": "ReadWrite",
+ "managedDisk": {
+ "storageAccountType": "Premium_LRS",
+ "id": "/subscriptions/subID/resourceGroups/resourcegroup/providers/Microsoft.Compute/disks/OsDisk_1"
+ },
+ "deleteOption": "Delete",
+ "diskSizeGB": 127
+ },
+ "dataDisks": [
+ {
+ "lun": 0,
+ "name": "DataDisk_0",
+ "createOption": "Attach",
+ "caching": "None",
+ "writeAcceleratorEnabled": false,
+ "managedDisk": {
+ "storageAccountType": "Premium_LRS",
+ "id": "/subscriptions/subID/resourceGroups/resourcegroup/providers/Microsoft.Compute/disks/DataDisk_0"
+ },
+ "deleteOption": "Delete",
+ "diskSizeGB": 1024,
+ "toBeDetached": false
+ },
+ {
+ "lun": 1,
+ "name": "DataDisk_1",
+ "createOption": "Attach",
+ "caching": "None",
+ "writeAcceleratorEnabled": false,
+ "managedDisk": {
+ "storageAccountType": "Premium_LRS",
+ "id": "/subscriptions/subID/resourceGroups/resourcegroup/providers/Microsoft.Compute/disks/DataDisk_1"
+ },
+ "deleteOption": "Delete",
+ "diskSizeGB": 1024,
+ "toBeDetached": false
+ }
+ ]
+ },
+ "osProfile": {
+ "computerName": "testvm",
+ "adminUsername": "azureuser",
+ "windowsConfiguration": {
+ "provisionVMAgent": true,
+ "enableAutomaticUpdates": true,
+ "patchSettings": {
+ "patchMode": "AutomaticByOS",
+ "assessmentMode": "ImageDefault",
+ "enableHotpatching": false
+ }
+ },
+ "secrets": [],
+ "allowExtensionOperations": true,
+ "requireGuestProvisionSignal": true
+ },
+ "networkProfile": {
+ "networkInterfaces": [
+ {
+ "id": "/subscriptions/subID/resourceGroups/resourcegroup/providers/Microsoft.Network/networkInterfaces/nic336",
+ "properties": {
+ "deleteOption": "Delete"
+ }
+ }
+ ]
+ }
+ }
+}
```
Force delete allows you to forcefully delete your virtual machine, reducing dele
### [Portal](#tab/portal4)
-When you go to delete an existing VM, you'll find an option to apply force delete in the delete pane.
+When you go to delete an existing VM, you'll find an option to apply force delete in the delete pane.
1. Open the [portal](https://portal.azure.com). 1. Navigate to your virtual machine.
-1. On the **Overview** page, select **Delete**.
-1. In the **Delete virtual machine** pane, select the checkbox for **Apply force delete**.
-1. Select **Ok**.
+1. On the **Overview** page, select **Delete**.
+1. In the **Delete virtual machine** pane, select the checkbox for **Apply force delete**.
+1. Select **Ok**.
### [CLI](#tab/cli4)
-Use the `--force-deletion` parameter for [az vm delete](/cli/azure/vm#az-vm-delete).
+Use the `--force-deletion` parameter for [az vm delete](/cli/azure/vm#az-vm-delete).
```azurecli-interactive az vm delete \
az vm delete \
### [PowerShell](#tab/powershell4)
-Use the `-ForceDeletion` parameter for [Remove-AzVm](/powershell/module/az.compute/remove-azvm).
+Use the `-ForceDeletion` parameter for [Remove-AzVm](/powershell/module/az.compute/remove-azvm).
```azurepowershell Remove-AzVm `
Force delete allows you to forcefully delete your Virtual Machine Scale Set, red
### [Portal](#tab/portal5)
-When you go to delete an existing scale set, you'll find an option to apply force delete in the delete pane.
+When you go to delete an existing scale set, you'll find an option to apply force delete in the delete pane.
1. Open the [portal](https://portal.azure.com). 1. Navigate to your Virtual Machine Scale Set.
-1. On the **Overview** page, select **Delete**.
-1. In the **Delete Virtual Machine Scale Set** pane, select the checkbox for **Apply force delete**.
-1. Select **Ok**.
+1. On the **Overview** page, select **Delete**.
+1. In the **Delete Virtual Machine Scale Set** pane, select the checkbox for **Apply force delete**.
+1. Select **Ok**.
### [CLI](#tab/cli5)
-Use the `--force-deletion` parameter for [`az vmss delete`](/cli/azure/vmss#az-vmss-delete).
+Use the `--force-deletion` parameter for [`az vmss delete`](/cli/azure/vmss#az-vmss-delete).
```azurecli-interactive az vmss delete \
az vmss delete \
### [PowerShell](#tab/powershell5)
-Use the `-ForceDeletion` parameter for [Remove-AzVmss](/powershell/module/az.compute/remove-azvmss).
+Use the `-ForceDeletion` parameter for [Remove-AzVmss](/powershell/module/az.compute/remove-azvmss).
```azurepowershell Remove-AzVmss `
A: No, this feature is only available on disks and NICs associated with a VM.
### Q: How does this feature work with Flexible Virtual Machine Scale Set?
-A: For Flexible Virtual Machine Scale Set the disks, NICs, and PublicIPs have `deleteOption` set to `Delete` by default so these resources are automatically cleaned up when the VMs are deleted.
+A: For Flexible Virtual Machine Scale Sets, the disks, NICs, and public IPs have `deleteOption` set to `Delete` by default, so these resources are automatically cleaned up when the VMs are deleted.
For data disks that were explicitly created and attached to the VMs, you can modify this property to `Detach` instead of `Delete` if you want the disks to persist after the VM is deleted.
For data disks that were explicitly created and attached to the VMs, you can mod
A: Yes, you can use this feature for Spot VMs just the way you would for on-demand VMs.
-### Q: How do I persist the disks, NIC, and Public IPs associated with a VM?
+### Q: How do I persist the disks, NIC, and Public IPs associated with a VM?
A: By default, disks, NICs, and Public IPs associated with a VM are persisted when the VM is deleted. If you configure these resources to be automatically deleted, you can update the settings so that the resources remain after the VM is deleted. To keep these resources, set the `deleteOption` property to `Detach`.
virtual-machines Ephemeral Os Disks Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/ephemeral-os-disks-deploy.md
Title: Deploy Ephemeral OS disks
+ Title: Deploy Ephemeral OS disks
description: Learn to deploy ephemeral OS disks for Azure VMs.
Last updated 07/23/2020 --++ # How to deploy Ephemeral OS disks for Azure VMs
This article shows you how to create a virtual machine or virtual machine scale
In the Azure portal, you can choose to use ephemeral disks when deploying a virtual machine or virtual machine scale sets by opening the **Advanced** section of the **Disks** tab. For choosing placement of Ephemeral OS disk, select **OS cache placement** or **Temp disk placement**. ![Screenshot showing the radio button for choosing to use an ephemeral OS disk](./media/virtual-machines-common-ephemeral/ephemeral-portal-temp.png)
-
+ If the option for using an ephemeral disk or OS cache placement or Temp disk placement is greyed out, you might have selected a VM size that doesn't have a cache/temp size larger than the OS image or that doesn't support Premium storage. Go back to the **Basics** page and try choosing another VM size. ## Scale set template deployment
-The process to create a scale set that uses an ephemeral OS disk is to add the `diffDiskSettings` property to the
+The process to create a scale set that uses an ephemeral OS disk is to add the `diffDiskSettings` property to the
`Microsoft.Compute/virtualMachineScaleSets/virtualMachineProfile` resource type in the template. Also, the caching policy must be set to `ReadOnly` for the ephemeral OS disk. The placement can be changed to `CacheDisk` for OS cache disk placement. ```json
-{
- "type": "Microsoft.Compute/virtualMachineScaleSets",
- "name": "myScaleSet",
- "location": "East US 2",
- "apiVersion": "2019-12-01",
- "sku": {
- "name": "Standard_DS2_v2",
- "capacity": "2"
- },
- "properties": {
- "upgradePolicy": {
- "mode": "Automatic"
- },
- "virtualMachineProfile": {
- "storageProfile": {
- "osDisk": {
- "diffDiskSettings": {
+{
+ "type": "Microsoft.Compute/virtualMachineScaleSets",
+ "name": "myScaleSet",
+ "location": "East US 2",
+ "apiVersion": "2019-12-01",
+ "sku": {
+ "name": "Standard_DS2_v2",
+ "capacity": "2"
+ },
+ "properties": {
+ "upgradePolicy": {
+ "mode": "Automatic"
+ },
+ "virtualMachineProfile": {
+ "storageProfile": {
+ "osDisk": {
+ "diffDiskSettings": {
"option": "Local" , "placement": "ResourceDisk"
- },
- "caching": "ReadOnly",
- "createOption": "FromImage"
- },
- "imageReference": {
- "publisher": "publisherName",
- "offer": "offerName",
- "sku": "skuName",
- "version": "imageVersion"
- }
- },
- "osProfile": {
- "computerNamePrefix": "myvmss",
- "adminUsername": "azureuser",
- "adminPassword": "P@ssw0rd!"
- }
- }
- }
-}
+ },
+ "caching": "ReadOnly",
+ "createOption": "FromImage"
+ },
+ "imageReference": {
+ "publisher": "publisherName",
+ "offer": "offerName",
+ "sku": "skuName",
+ "version": "imageVersion"
+ }
+ },
+ "osProfile": {
+ "computerNamePrefix": "myvmss",
+ "adminUsername": "azureuser",
+ "adminPassword": "P@ssw0rd!"
+ }
+ }
+ }
+}
``` > [!NOTE] > Replace all the other values accordingly.
-## VM template deployment
+## VM template deployment
You can deploy a VM with an ephemeral OS disk using a template. The process to create a VM that uses ephemeral OS disks is to add the `diffDiskSettings` property to the `Microsoft.Compute/virtualMachines` resource type in the template. Also, the caching policy must be set to `ReadOnly` for the ephemeral OS disk. The placement option can be changed to `CacheDisk` for OS cache disk placement. ```json
-{
- "type": "Microsoft.Compute/virtualMachines",
- "name": "myVirtualMachine",
- "location": "East US 2",
- "apiVersion": "2019-12-01",
- "properties": {
- "storageProfile": {
- "osDisk": {
- "diffDiskSettings": {
+{
+ "type": "Microsoft.Compute/virtualMachines",
+ "name": "myVirtualMachine",
+ "location": "East US 2",
+ "apiVersion": "2019-12-01",
+ "properties": {
+ "storageProfile": {
+ "osDisk": {
+ "diffDiskSettings": {
"option": "Local" , "placement": "ResourceDisk"
- },
- "caching": "ReadOnly",
- "createOption": "FromImage"
- },
- "imageReference": {
- "publisher": "MicrosoftWindowsServer",
- "offer": "WindowsServer",
- "sku": "2016-Datacenter-smalldisk",
- "version": "latest"
- },
- "hardwareProfile": {
- "vmSize": "Standard_DS2_v2"
- }
- },
- "osProfile": {
- "computerNamePrefix": "myvirtualmachine",
- "adminUsername": "azureuser",
- "adminPassword": "P@ssw0rd!"
- }
- }
- }
+ },
+ "caching": "ReadOnly",
+ "createOption": "FromImage"
+ },
+ "imageReference": {
+ "publisher": "MicrosoftWindowsServer",
+ "offer": "WindowsServer",
+ "sku": "2016-Datacenter-smalldisk",
+ "version": "latest"
+ },
+ "hardwareProfile": {
+ "vmSize": "Standard_DS2_v2"
+ }
+ },
+ "osProfile": {
+ "computerNamePrefix": "myvirtualmachine",
+ "adminUsername": "azureuser",
+ "adminPassword": "P@ssw0rd!"
+ }
+ }
+ }
``` ## CLI
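A minimal sketch with the Azure CLI, assuming placeholder names, an illustrative image alias, and a size whose temp disk is large enough for the image:

```bash
# Sketch: create a VM whose ephemeral OS disk is placed on the temp/resource disk.
# Names, the image alias, and the size are placeholders; pick a size with a temp disk larger than the OS image.
az vm create \
  --resource-group myResourceGroup \
  --name myVM \
  --image Ubuntu2204 \
  --size Standard_DS4_v2 \
  --ephemeral-os-disk true \
  --ephemeral-os-disk-placement ResourceDisk \
  --os-disk-caching ReadOnly \
  --admin-username azureuser \
  --generate-ssh-keys
```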
You can reimage a Virtual Machine instance with ephemeral OS disk using REST API
``` POST https://management.azure.com/subscriptions/{sub-
-id}/resourceGroups/{rgName}/providers/Microsoft.Compute/VirtualMachines/{vmName}/reimage?api-version=2019-12-01"
+id}/resourceGroups/{rgName}/providers/Microsoft.Compute/VirtualMachines/{vmName}/reimage?api-version=2019-12-01"
``` ## PowerShell
-To use an ephemeral disk for a PowerShell VM deployment, use [Set-AzVMOSDisk](/powershell/module/az.compute/set-azvmosdisk) in your VM configuration. Set the `-DiffDiskSetting` to `Local` and `-Caching` to `ReadOnly` and `-DiffDiskPlacement` to `ResourceDisk`.
+To use an ephemeral disk for a PowerShell VM deployment, use [Set-AzVMOSDisk](/powershell/module/az.compute/set-azvmosdisk) in your VM configuration. Set `-DiffDiskSetting` to `Local`, `-Caching` to `ReadOnly`, and `-DiffDiskPlacement` to `ResourceDisk`.
```powershell Set-AzVMOSDisk -DiffDiskSetting Local -DiffDiskPlacement ResourceDisk -Caching ReadOnly ```
-To use an ephemeral disk on cache disk for a PowerShell VM deployment, use [Set-AzVMOSDisk](/powershell/module/az.compute/set-azvmosdisk) in your VM configuration. Set the `-DiffDiskSetting` to `Local` , `-Caching` to `ReadOnly` and `-DiffDiskPlacement` to `CacheDisk`.
+To use an ephemeral disk on the cache disk for a PowerShell VM deployment, use [Set-AzVMOSDisk](/powershell/module/az.compute/set-azvmosdisk) in your VM configuration. Set `-DiffDiskSetting` to `Local`, `-Caching` to `ReadOnly`, and `-DiffDiskPlacement` to `CacheDisk`.
```PowerShell Set-AzVMOSDisk -DiffDiskSetting Local -DiffDiskPlacement CacheDisk -Caching ReadOnly ```
virtual-machines Ephemeral Os Disks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/ephemeral-os-disks.md
Title: Ephemeral OS disks
+ Title: Ephemeral OS disks
description: Learn more about ephemeral OS disks for Azure VMs. -+ Last updated 07/23/2020 -+ # Ephemeral OS disks for Azure VMs
The key features of ephemeral disks are:
- Ideal for stateless applications. - Supported by Marketplace, custom images, and by [Azure Compute Gallery](./shared-image-galleries.md) (formerly known as Shared Image Gallery).-- Ability to fast reset or reimage VMs and scale set instances to the original boot state. -- Lower latency, similar to a temporary disk.
+- Ability to fast reset or reimage VMs and scale set instances to the original boot state.
+- Lower latency, similar to a temporary disk.
- Ephemeral OS disks are free, you incur no storage cost for OS disks.-- Available in all Azure regions.
+- Available in all Azure regions.
Key differences between persistent and ephemeral OS disks:
Key differences between persistent and ephemeral OS disks:
| **Specialized OS disk support** | Yes| No| | **OS disk resize**| Supported during VM creation and after VM is stop-deallocated| Supported during VM creation only| | **Resizing to a new VM size**| OS disk data is preserved| Data on the OS disk is deleted, OS is reprovisioned |
-| **Redeploy** | OS disk data is preserved | Data on the OS disk is deleted, OS is reprovisioned |
-| **Stop/ Start of VM** | OS disk data is preserved | Not Supported |
+| **Redeploy** | OS disk data is preserved | Data on the OS disk is deleted, OS is reprovisioned |
+| **Stop/ Start of VM** | OS disk data is preserved | Not Supported |
| **Page file placement**| For Windows, page file is stored on the resource disk| For Windows, page file is stored on the OS disk (for both OS cache placement and Temp disk placement).|
-| **Maintenance of VM/VMSS using [healing](understand-vm-reboots.md#unexpected-downtime)** | OS disk data is preserved | OS disk data is not preserved |
-| **Maintenance of VM/VMSS using [Live Migration](maintenance-and-updates.md#live-migration)** | OS disk data is preserved | OS disk data is preserved |
+| **Maintenance of VM/VMSS using [healing](understand-vm-reboots.md#unexpected-downtime)** | OS disk data is preserved | OS disk data is not preserved |
+| **Maintenance of VM/VMSS using [Live Migration](maintenance-and-updates.md#live-migration)** | OS disk data is preserved | OS disk data is preserved |
\* 4 TiB is the maximum supported OS disk size for managed (persistent) disks. However, many OS disks are partitioned with master boot record (MBR) by default and because of this are limited to 2 TiB. For details, see [OS disk](managed-disks-overview.md#os-disk). ## Placement options for Ephemeral OS disks
-Ephemeral OS disk can be stored either on VM's OS cache disk or VM's temp/resource disk.
+An ephemeral OS disk can be stored either on the VM's OS cache disk or on the VM's temp/resource disk.
[DiffDiskPlacement](/rest/api/compute/virtualmachines/list#diffdiskplacement) is the new property that can be used to specify where you want to place the Ephemeral OS disk. With this feature, when a Windows VM is provisioned, we configure the pagefile to be located on the OS Disk. ## Size requirements
Ephemeral disks also require that the VM size supports **Premium storage**. The
- Disk snapshots - Azure Disk Encryption - Azure Backup-- Azure Site Recovery
+- Azure Site Recovery
- OS Disk Swap ## Trusted Launch for Ephemeral OS disks
For more information on [how to deploy a trusted launch VM](trusted-launch-porta
## Confidential VMs using Ephemeral OS disks AMD-based Confidential VMs cater to high security and confidentiality requirements of customers. These VMs provide a strong, hardware-enforced boundary to help meet your security needs. There are limitations to use Confidential VMs. Check the [region](../confidential-computing/confidential-vm-overview.md#regions), [size](../confidential-computing/confidential-vm-overview.md#size-support) and [OS supported](../confidential-computing/confidential-vm-overview.md#os-support) limitations for confidential VMs.
-Virtual machine guest state (VMGS) blob contains the security information of the confidential VM.
+Virtual machine guest state (VMGS) blob contains the security information of the confidential VM.
Confidential VMs using Ephemeral OS disks by default **1 GiB** from the **OS cache** or **temp storage** based on the chosen placement option is reserved for VMGS.The lifecycle of the VMGS blob is tied to that of the OS Disk. > [!IMPORTANT] >
For more information on [confidential VM](../confidential-computing/confidential
## Customer Managed key
-You can choose to use customer managed keys or platform managed keys when you enable end-to-end encryption for VMs using Ephemeral OS disk. Currently this option is available only via [PowerShell](./windows/disks-enable-customer-managed-keys-powershell.md), [CLI](./linux/disks-enable-customer-managed-keys-cli.md) and SDK in all regions.
+You can choose to use customer managed keys or platform managed keys when you enable end-to-end encryption for VMs using Ephemeral OS disk. Currently this option is available only via [PowerShell](./windows/disks-enable-customer-managed-keys-powershell.md), [CLI](./linux/disks-enable-customer-managed-keys-cli.md) and SDK in all regions.
> [!IMPORTANT] >
virtual-machines Agent Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/agent-linux.md
Title: Azure Linux VM Agent overview
+ Title: Azure Linux VM Agent overview
description: Learn how to install and configure the Azure Linux VM Agent (waagent) to manage your virtual machine's interaction with the Azure fabric controller. -+ Last updated 03/28/2023
Ensure that your VM has access to IP address 168.63.129.16. For more information
## Installation The supported method of installing and upgrading the Azure Linux VM Agent uses an RPM or a DEB package from your distribution's package repository. All the [endorsed distribution providers](../linux/endorsed-distros.md) integrate the Azure Linux VM Agent package into their images and repositories.
-Some Linux distributions might disable the Azure Linux VM Agent **Auto Update** feature and some of the repositories might also contain older versions, those might have issues with modern extensions so, we recommend to have the latest stable version installed.
+Some Linux distributions might disable the Azure Linux VM Agent **Auto Update** feature, and some repositories might contain older versions that can have issues with modern extensions, so we recommend installing the latest stable version.
To make sure the Azure Linux VM Agent updates properly, we recommend setting `AutoUpdate.Enabled=Y` in the `/etc/waagent.conf` file; simply commenting out that option also results in the default behavior. Setting `AutoUpdate.Enabled=N` will not allow the Azure Linux VM Agent to update properly. For advanced installation options, such as installing from a source or to custom locations or prefixes, see [Microsoft Azure Linux VM Agent](https://github.com/Azure/WALinuxAgent). Other than these scenarios, we do not support or recommend upgrading or reinstalling the Azure Linux VM Agent from source.
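For example, you can confirm the installed agent version and the current auto-update setting directly on the VM:

```bash
# Check the installed agent version and whether AutoUpdate.Enabled is set in the agent configuration.
/usr/sbin/waagent -version
grep -E '^\s*AutoUpdate\.Enabled' /etc/waagent.conf
```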
For advanced installation options, such as installing from a source or to custom
- `Nameserver` configuration in */etc/resolv.conf*. - The root password from */etc/shadow*, if `Provisioning.DeleteRootPassword` is `y` in the configuration file. - Cached DHCP client leases.
-
+ The client resets the host name to `localhost.localdomain`. > [!WARNING]
Configuration options are of three types: `Boolean`, `String`, or `Integer`. You
### Provisioning.Enabled ```txt
-Type: Boolean
+Type: Boolean
Default: y ```
This option allows the user to enable or disable the provisioning functionality
### Provisioning.DeleteRootPassword ```txt
-Type: Boolean
+Type: Boolean
Default: n ```
If the value is `y`, the agent erases the root password in the */etc/shadow* fil
### Provisioning.RegenerateSshHostKeyPair ```txt
-Type: Boolean
+Type: Boolean
Default: y ```
Configure the encryption type for the fresh key pair by using the `Provisioning.
### Provisioning.SshHostKeyPairType ```txt
-Type: String
+Type: String
Default: rsa ```
You can set this option to an encryption algorithm type that the SSH daemon supp
### Provisioning.MonitorHostName ```txt
-Type: Boolean
+Type: Boolean
Default: y ```
If the value is `y`, waagent monitors the Linux VM for a host name change, as re
### Provisioning.DecodeCustomData ```txt
-Type: Boolean
+Type: Boolean
Default: n ```
If the value is `y`, waagent decodes `CustomData` from Base64.
### Provisioning.ExecuteCustomData ```txt
-Type: Boolean
+Type: Boolean
Default: n ```
This option allows the password for the system user to be reset. It's disabled b
### Provisioning.PasswordCryptId ```txt
-Type: String
+Type: String
Default: 6 ``` This option specifies the algorithm that `crypt` uses when it's generating a password hash. Valid values are: -- `1`: MD5 -- `2a`: Blowfish -- `5`: SHA-256 -- `6`: SHA-512
+- `1`: MD5
+- `2a`: Blowfish
+- `5`: SHA-256
+- `6`: SHA-512
### Provisioning.PasswordCryptSaltLength ```txt
-Type: String
+Type: String
Default: 10 ```
This option specifies the length of random salt used in generating a password ha
### ResourceDisk.Format ```txt
-Type: Boolean
+Type: Boolean
Default: y ```
If the value is `y`, waagent formats and mounts the resource disk that the platf
### ResourceDisk.Filesystem ```txt
-Type: String
+Type: String
Default: ext4 ```
This option specifies the file system type for the resource disk. Supported valu
### ResourceDisk.MountPoint ```txt
-Type: String
-Default: /mnt/resource
+Type: String
+Default: /mnt/resource
``` This option specifies the path at which the resource disk is mounted. The resource disk is a *temporary* disk and might be emptied when the VM is deprovisioned.
This option specifies the path at which the resource disk is mounted. The resour
### ResourceDisk.MountOptions ```txt
-Type: String
+Type: String
Default: None ```
This option specifies disk mount options to be passed to the `mount -o` command.
### ResourceDisk.EnableSwap ```txt
-Type: Boolean
+Type: Boolean
Default: n ```
If you set this option, the agent creates a swap file (*/swapfile*) on the resou
### ResourceDisk.SwapSizeMB ```txt
-Type: Integer
+Type: Integer
Default: 0 ```
This option specifies the size of the swap file in megabytes.
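As an illustrative sketch only, the two swap options above could be switched on to create, for example, a 2,048 MB swap file, assuming the defaults shown above are still present in the file; restart the agent per your distribution for the change to take effect:

```bash
# Sketch: enable a 2048 MB swap file on the resource disk by editing /etc/waagent.conf in place.
# The 2048 value is an arbitrary example size; adjust it for your workload.
sudo sed -i \
  -e 's/^ResourceDisk.EnableSwap=n/ResourceDisk.EnableSwap=y/' \
  -e 's/^ResourceDisk.SwapSizeMB=0/ResourceDisk.SwapSizeMB=2048/' \
  /etc/waagent.conf
```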
### Logs.Verbose ```txt
-Type: Boolean
+Type: Boolean
Default: n ```
If you set this option, log verbosity is boosted. Waagent logs to */var/log/waag
### OS.EnableRDMA ```txt
-Type: Boolean
+Type: Boolean
Default: n ```
If you set this option, the agent attempts to install and then load an RDMA kern
### OS.RootDeviceScsiTimeout ```txt
-Type: Integer
+Type: Integer
Default: 300 ```
This option configures the SCSI timeout in seconds on the OS disk and data drive
### OS.OpensslPath ```txt
-Type: String
+Type: String
Default: None ```
You can use this option to specify an alternate path for the *openssl* binary to
### HttpProxy.Host, HttpProxy.Port ```txt
-Type: String
+Type: String
Default: None ```
Ubuntu Cloud Images use [cloud-init](https://launchpad.net/ubuntu/+source/cloud-
- `Provisioning.Enabled` defaults to `n` on Ubuntu Cloud Images that use cloud-init to perform provisioning tasks. - The following configuration parameters have no effect on Ubuntu Cloud Images that use cloud-init to manage the resource disk and swap space:
-
+ - `ResourceDisk.Format` - `ResourceDisk.Filesystem` - `ResourceDisk.MountPoint`
Ubuntu Cloud Images use [cloud-init](https://launchpad.net/ubuntu/+source/cloud-
- `ResourceDisk.SwapSizeMB` To configure the resource disk mount point and swap space on Ubuntu Cloud Images during provisioning, see the following resources:
-
+ - [Ubuntu wiki: AzureSwapPartitions](https://go.microsoft.com/fwlink/?LinkID=532955&clcid=0x409) - [Deploy applications to a Windows virtual machine in Azure with the Custom Script Extension](../windows/tutorial-automate-vm-deployment.md)
virtual-machines Custom Script Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/custom-script-linux.md
-+ Last updated 03/31/2023
You can store sensitive data in a protected configuration, which is encrypted an
"autoUpgradeMinorVersion": true, "settings": { "skipDos2Unix":false,
- "timestamp":123456789
+ "timestamp":123456789
}, "protectedSettings": { "commandToExecute": "<command-to-execute>",
You can deploy Azure VM extensions by using Azure Resource Manager templates. Th
"protectedSettings": { "commandToExecute": "sh hello.sh <param2>", "fileUris": ["https://github.com/MyProject/Archive/hello.sh"
- ]
+ ]
} } }
sudo ls -l /var/lib/waagent/custom-script/download/0/
To troubleshoot, first check the Linux Agent log and ensure that the extension ran: ```bash
-sudo cat /var/log/waagent.log
+sudo cat /var/log/waagent.log
``` Look for the extension execution. It looks something like:
virtual-machines Diagnostics Linux V3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/diagnostics-linux-v3.md
Previously updated : 12/13/2018 - Last updated : 12/13/2018+ ms.devlang: azurecli # Use Linux diagnostic extension 3.0 to monitor metrics and logs
This extension works with both Azure deployment models.
You can enable the extension by using Azure PowerShell cmdlets, Azure CLI scripts, Azure Resource Monitor templates (ARM templates), or the Azure portal. For more information, see [Extensions features](features-linux.md). >[!NOTE]
->Some components of the LAD VM extension are also shipped in the [Log Analytics VM extension](./oms-linux.md). Because of this architecture, conflicts can arise if both extensions are instantiated in the same ARM template.
+>Some components of the LAD VM extension are also shipped in the [Log Analytics VM extension](./oms-linux.md). Because of this architecture, conflicts can arise if both extensions are instantiated in the same ARM template.
> >To avoid install-time conflicts, use the [`dependsOn` directive](../../azure-resource-manager/templates/resource-dependency.md#dependson) to ensure the extensions are installed sequentially. The extensions can be installed in either order.
The downloadable configuration is just an example. Modify it to suit your needs.
* **Azure Linux Agent version 2.2.0 or later**. Most Azure VM Linux gallery images include version 2.2.7 or later. Run `/usr/sbin/waagent -version` to confirm the version installed on the VM. If the VM is running an older version, [Update the guest agent](./update-linux-agent.md). * The **Azure CLI**. If necessary, [set up the Azure CLI](/cli/azure/install-azure-cli) environment on your machine. * The **wget command**. If you don't already have it, install it using the corresponding package manager.
-* An existing **Azure subscription**.
+* An existing **Azure subscription**.
* An existing **general purpose storage account** to store the data. General purpose storage accounts must support table storage. A blob storage account won't work. * **Python 2**.
The Linux diagnostic extension requires Python 2. If your virtual machine uses a
The `python2` executable file must be aliased to *python*. Here's one method to set this alias: 1. Run the following command to remove any existing aliases.
-
+ ```bash sudo update-alternatives --remove-all python ```
The `python2` executable file must be aliased to *python*. Here's one method to
### Sample installation
-The sample configuration downloaded in the following examples collects a set of standard data and sends it to table storage. The URL for the sample configuration and its contents can change.
+The sample configuration downloaded in the following examples collects a set of standard data and sends it to table storage. The URL for the sample configuration and its contents can change.
In most cases, you should download a copy of the portal settings JSON file and customize it for your needs. Then use templates or your own automation to use a customized version of the configuration file rather than downloading from the URL each time.
$sasToken = New-AzStorageAccountSASToken -Service Blob,Table -ResourceType Servi
$protectedSettings="{'storageAccountName': '$storageAccountName', 'storageAccountSasToken': '$sasToken'}" # Finally, install the extension with the settings you built
-Set-AzVMExtension -ResourceGroupName $VMresourceGroup -VMName $vmName -Location $vm.Location -ExtensionType LinuxDiagnostic -Publisher Microsoft.Azure.Diagnostics -Name LinuxDiagnostic -SettingString $publicSettings -ProtectedSettingString $protectedSettings -TypeHandlerVersion 3.0
+Set-AzVMExtension -ResourceGroupName $VMresourceGroup -VMName $vmName -Location $vm.Location -ExtensionType LinuxDiagnostic -Publisher Microsoft.Azure.Diagnostics -Name LinuxDiagnostic -SettingString $publicSettings -ProtectedSettingString $protectedSettings -TypeHandlerVersion 3.0
``` ### Update the extension settings
After you change your protected or public settings, deploy them to the VM by run
### Migrate from previous versions of the extension
-The latest version of the extension is *4.0*.
+The latest version of the extension is *4.0*.
> [!IMPORTANT] > This extension introduces breaking changes to the configuration. One such change improved the security of the extension, so backward compatibility with 2.x couldn't be maintained. Also, the extension publisher for this extension differs from the publisher for the 2.x versions.
LAD version 3.0 supports two sink types: `EventHub` and `JsonBlob`.
] ```
-The `"sasURL"` entry contains the full URL, including the SAS token, for the event hub to which data should be published. LAD requires a SAS to name a policy that enables the send claim.
+The `"sasURL"` entry contains the full URL, including the SAS token, for the event hub to which data should be published. LAD requires a SAS to name a policy that enables the send claim.
For example:
For more information about generating and retrieving information on SAS tokens f
] ```
-Data directed to a `JsonBlob` sink is stored in blobs in Azure Storage. Each instance of LAD creates a blob every hour for each sink name. Each blob always contains a syntactically valid JSON array of objects. New entries are atomically added to the array.
+Data directed to a `JsonBlob` sink is stored in blobs in Azure Storage. Each instance of LAD creates a blob every hour for each sink name. Each blob always contains a syntactically valid JSON array of objects. New entries are atomically added to the array.
Blobs are stored in a container that has the same name as the sink. The Azure Storage rules for blob container names apply to the names of `JsonBlob` sinks. The name must have between 3 and 63 lowercase alphanumeric ASCII characters or dashes.
The following sections provide details for the remaining elements.
The `ladCfg` structure is optional. It controls the gathering of metrics and logs that are delivered to the Azure Monitor Metrics service and to other data sinks. You must specify:
-* Either `performanceCounters` or `syslogEvents` or both.
+* Either `performanceCounters` or `syslogEvents` or both.
* The `metrics` structure. Element | Value
type | Identifies the actual provider of the metric.
class | Together with `"counter"`, identifies the specific metric within the provider's namespace. counter | Together with `"class"`, identifies the specific metric within the provider's namespace. counterSpecifier | Identifies the specific metric within the Azure Monitor Metrics namespace.
-condition | (Optional) Selects a specific instance of the object to which the metric applies. Or it selects the aggregation across all instances of that object.
+condition | (Optional) Selects a specific instance of the object to which the metric applies. Or it selects the aggregation across all instances of that object.
sampleRate | The ISO 8601 interval that sets the rate at which raw samples for this metric are collected. If the value isn't set, the collection interval is set by the value of [`sampleRateInSeconds`](#ladcfg). The shortest supported sample rate is 15 seconds (PT15S). unit | Defines the unit for the metric. Should be one of these strings: `"Count"`, `"Bytes"`, `"Seconds"`, `"Percent"`, `"CountPerSecond"`, `"BytesPerSecond"`, `"Millisecond"`. Consumers of the collected data expect the collected data values to match this unit. LAD ignores this field. displayName | The label to be attached to the data in Azure Monitor Metrics. This label is in the language specified by the associated locale setting. LAD ignores this field.
-The `counterSpecifier` is an arbitrary identifier. Consumers of metrics, like the Azure portal charting and alerting feature, use `counterSpecifier` as the "key" that identifies a metric or an instance of a metric.
+The `counterSpecifier` is an arbitrary identifier. Consumers of metrics, like the Azure portal charting and alerting feature, use `counterSpecifier` as the "key" that identifies a metric or an instance of a metric.
-For `builtin` metrics, we recommend `counterSpecifier` values that begin with `/builtin/`. If you're collecting a specific instance of a metric, we recommend you attach the identifier of the instance to the `counterSpecifier` value.
+For `builtin` metrics, we recommend `counterSpecifier` values that begin with `/builtin/`. If you're collecting a specific instance of a metric, we recommend you attach the identifier of the instance to the `counterSpecifier` value.
Here are some examples:
LAD and the Azure portal don't expect the `counterSpecifier` value to match any pattern. Be consistent in how you construct `counterSpecifier` values.
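To make the table concrete, here's a sketch of a single `performanceCounters` entry. The `/builtin/Processor/PercentIdleTime` specifier, the `IsAggregate=TRUE` condition, and the `annotation` wrapper that pairs `displayName` with a locale follow a common LAD 3.0 pattern; verify element names and casing against the full sample settings in this article before using it:

```json
{
  "type": "builtin",
  "class": "Processor",
  "counter": "PercentIdleTime",
  "counterSpecifier": "/builtin/Processor/PercentIdleTime",
  "condition": "IsAggregate=TRUE",
  "sampleRate": "PT15S",
  "unit": "Percent",
  "annotation": [
    { "displayName": "Aggregate CPU %idle time", "locale": "en-us" }
  ]
}
```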
-When you specify `performanceCounters`, LAD always writes data to a table in Azure Storage. The same data can be written to JSON blobs or Event Hubs or both. But you can't disable storing data to a table.
+When you specify `performanceCounters`, LAD always writes data to a table in Azure Storage. The same data can be written to JSON blobs or Event Hubs or both. But you can't disable storing data to a table.
-All instances of LAD that use the same storage account name and endpoint add their metrics and logs to the same table. If too many VMs write to the same table partition, Azure can throttle writes to that partition.
+All instances of LAD that use the same storage account name and endpoint add their metrics and logs to the same table. If too many VMs write to the same table partition, Azure can throttle writes to that partition.
-The `eventVolume` setting causes entries to be spread across 1 (small), 10 (medium), or 100 (large) partitions. Usually, medium partitions are sufficient to avoid traffic throttling.
+The `eventVolume` setting causes entries to be spread across 1 (small), 10 (medium), or 100 (large) partitions. Usually, medium partitions are sufficient to avoid traffic throttling.
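A trimmed sketch of a `ladCfg` block that sets `eventVolume`. The `diagnosticMonitorConfiguration` nesting, the placement of `sampleRateInSeconds`, and the resource ID are placeholders based on the LAD 3.0 sample layout; confirm the exact structure against the full sample settings later in this article:

```json
{
  "ladCfg": {
    "diagnosticMonitorConfiguration": {
      "eventVolume": "Medium",
      "metrics": {
        "resourceId": "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Compute/virtualMachines/<vm-name>",
        "metricAggregation": [
          { "scheduledTransferPeriod": "PT1H" },
          { "scheduledTransferPeriod": "PT1M" }
        ]
      }
    },
    "sampleRateInSeconds": 15
  }
}
```

The `performanceCounters` and `syslogEvents` sections described in this article would sit alongside `metrics` in the same block.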
The Azure Monitor Metrics feature of the Azure portal uses the data in this table to produce graphs or to trigger alerts. The table name is the concatenation of these strings:
sinks | A comma-separated list of names of sinks to which individual log events
facilityName | A syslog facility name, such as `"LOG_USER"` or `"LOG_LOCAL0"`. For more information, see the "Facility" section of the [syslog man page](http://man7.org/linux/man-pages/man3/syslog.3.html). minSeverity | A syslog severity level, such as `"LOG_ERR"` or `"LOG_INFO"`. For more information, see the "Level" section of the [syslog man page](http://man7.org/linux/man-pages/man3/syslog.3.html). The extension captures events sent to the facility at or above the specified level.
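A sketch of a `syslogEvents` block, assuming the facility-to-minimum-severity map (`syslogEventConfiguration`) used by the LAD 3.0 sample settings and a placeholder sink name:

```json
{
  "syslogEvents": {
    "sinks": "syslogjsonblob",
    "syslogEventConfiguration": {
      "LOG_USER": "LOG_INFO",
      "LOG_LOCAL0": "LOG_ERR"
    }
  }
}
```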
-When you specify `syslogEvents`, LAD always writes data to a table in Azure Storage. The same data can be written to JSON blobs or Event Hubs or both. But you can't disable storing data to a table.
+When you specify `syslogEvents`, LAD always writes data to a table in Azure Storage. The same data can be written to JSON blobs or Event Hubs or both. But you can't disable storing data to a table.
The partitioning behavior for the table is the same as described for `performanceCounters`. The table name is the concatenation of these strings:
The `fileLogs` section controls the capture of log files. LAD captures new text
Element | Value - | -- file | The full path name of the log file to be watched and captured. The path name must name a single file. It can't name a directory or contain wildcard characters. The `omsagent` user account must have read access to the file path.
-table | (Optional) The Azure Storage table into which new lines from the "tail" of the file are written. The table must be in the designated storage account, as specified in the protected configuration.
+table | (Optional) The Azure Storage table into which new lines from the "tail" of the file are written. The table must be in the designated storage account, as specified in the protected configuration.
sinks | (Optional) A comma-separated list of names of more sinks to which log lines are sent. Either `"table"` or `"sinks"`, or both, must be specified.
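For example, a hypothetical `fileLogs` entry that tails a daemon log into a table and one extra sink (the file path, table name, and sink name are placeholders):

```json
{
  "fileLogs": [
    {
      "file": "/var/log/mydaemon.log",
      "table": "MyDaemonEvents",
      "sinks": "filelogjsonblob"
    }
  ]
}
```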
The `builtin` metric provider is a source of metrics that are the most interesti
### builtin metrics for the Processor class
-The Processor class of metrics provides information about processor usage in the VM. When percentages are aggregated, the result is the average across all CPUs.
+The Processor class of metrics provides information about processor usage in the VM. When percentages are aggregated, the result is the average across all CPUs.
In a two-vCPU VM, if one vCPU is 100 percent busy and the other is 100 percent idle, the reported `PercentIdleTime` is 50. If each vCPU is 50 percent busy for the same period, the reported result is also 50. In a four-vCPU VM, when one vCPU is 100 percent busy and the others are idle, the reported `PercentIdleTime` is 75.
This class of metrics has only one instance. The `"condition"` attribute has no
### builtin metrics for the Network class
-The Network class of metrics provides information about network activity on an individual network interface since startup.
+The Network class of metrics provides information about network activity on an individual network interface since startup.
LAD doesn't expose bandwidth metrics. You can get these metrics from host metrics.
ReadsPerSecond | Read operations per second
WritesPerSecond | Write operations per second TransfersPerSecond | Read or write operations per second
-You can get aggregated values across all file systems by setting `"condition": "IsAggregate=True"`. Get values for a specific mounted file system, such as `"/mnt"`, by setting `"condition": 'Name="/mnt"'`.
+You can get aggregated values across all file systems by setting `"condition": "IsAggregate=True"`. Get values for a specific mounted file system, such as `"/mnt"`, by setting `"condition": 'Name="/mnt"'`.
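As a sketch, a file system metric scoped to a single mount point might look like the following. Note that in JSON the inner quotes of the condition have to be escaped; the class, counter, and specifier names here are illustrative and should be checked against the metric tables in this article:

```json
{
  "type": "builtin",
  "class": "filesystem",
  "counter": "ReadsPerSecond",
  "counterSpecifier": "/builtin/filesystem/ReadsPerSecond/mnt",
  "condition": "Name=\"/mnt\"",
  "sampleRate": "PT15S",
  "unit": "CountPerSecond"
}
```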
> [!NOTE] > If you're working in the Azure portal instead of JSON, the condition field form is `Name='/mnt'`. ### builtin metrics for the Disk class
-The Disk class of metrics provides information about disk device usage. These statistics apply to the entire drive.
+The Disk class of metrics provides information about disk device usage. These statistics apply to the entire drive.
When a device has multiple file systems, the counters for that device are, effectively, aggregated across all file systems.
If your protected settings are in the file *ProtectedSettings.json* and your pub
az vm extension set --publisher Microsoft.Azure.Diagnostics --name LinuxDiagnostic --version 3.0 --resource-group <resource_group_name> --vm-name <vm_name> --protected-settings ProtectedSettings.json --settings PublicSettings.json ```
-The command assumes you're using the Azure Resource Manager mode of the Azure CLI. To configure LAD for classic deployment model VMs, switch to "asm" mode (`azure config mode asm`) and omit the resource group name in the command.
+The command assumes you're using the Azure Resource Manager mode of the Azure CLI. To configure LAD for classic deployment model VMs, switch to "asm" mode (`azure config mode asm`) and omit the resource group name in the command.
For more information, see the [cross-platform CLI documentation](/cli/azure/authenticate-azure-cli).
Set-AzVMExtension -ResourceGroupName <resource_group_name> -VMName <vm_name> -Lo
Based on the preceding definitions, this section provides a sample LAD 3.0 extension configuration and some explanation. To apply this sample to your case, use your own storage account name, account SAS token, and Event Hubs SAS tokens. > [!NOTE]
-> Depending on whether you use the Azure CLI or PowerShell to install LAD, the method for providing public and protected settings differs:
+> Depending on whether you use the Azure CLI or PowerShell to install LAD, the method for providing public and protected settings differs:
>
-> * If you're using the Azure CLI, save the following settings to *ProtectedSettings.json* and *PublicSettings.json* to use the preceding sample command.
+> * If you're using the Azure CLI, save the following settings to *ProtectedSettings.json* and *PublicSettings.json* to use the preceding sample command.
> * If you're using PowerShell, save the following settings to `$protectedSettings` and `$publicSettings` by running `$protectedSettings = '{ ... }'`. ### Protected settings
virtual-machines Diagnostics Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/diagnostics-linux.md
Last updated 04/04/2023-+ ms.devlang: azurecli # Use the Linux diagnostic extension 4.0 to monitor metrics and logs
Set-AzVMExtension -ResourceGroupName <resource_group_name> -VMName <vm_name> `
-### Enable auto update
+### Enable auto update
To enable automatic update of the agent, we recommend that you enable the [Automatic Extension Upgrade](../../virtual-machines/automatic-extension-upgrade.md) feature:
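One way to do that in a Resource Manager template is to set `enableAutomaticUpgrade` on the extension resource. The following fragment is only a sketch based on the Automatic Extension Upgrade documentation; the API version and property placement should be verified there:

```json
{
  "type": "Microsoft.Compute/virtualMachines/extensions",
  "name": "LinuxDiagnostic",
  "apiVersion": "2021-11-01",
  "location": "[parameters('location')]",
  "properties": {
    "publisher": "Microsoft.Azure.Diagnostics",
    "type": "LinuxDiagnostic",
    "typeHandlerVersion": "4.0",
    "autoUpgradeMinorVersion": true,
    "enableAutomaticUpgrade": true
  }
}
```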
my_diagnostic_storage_account_sastoken=$(az storage account generate-sas \
my_lad_protected_settings="{'storageAccountName': '$my_diagnostic_storage_account', 'storageAccountSasToken': '$my_diagnostic_storage_account_sastoken'}" # Finally, tell Azure to install and enable the extension.
-az vmss extension set --publisher Microsoft.Azure.Diagnostics --name LinuxDiagnostic \
+az vmss extension set --publisher Microsoft.Azure.Diagnostics --name LinuxDiagnostic \
--version 4.0 --resource-group $my_resource_group --vmss-name $my_linux_vmss \ --protected-settings "${my_lad_protected_settings}" --settings portal_public_settings.json ```
virtual-machines Enable Infiniband https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/enable-infiniband.md
Title: Enable InfiniBand on HPC VMs - Azure Virtual Machines | Microsoft Docs
description: Learn how to enable InfiniBand on Azure HPC VMs. -+ Last updated 04/12/2023
For Windows, download and install the [Mellanox OFED for Windows drivers](https:
## Enable IP over InfiniBand (IB) If you plan to run MPI jobs, you typically don't need IPoIB. The MPI library will use the verbs interface for IB communication (unless you explicitly use the TCP/IP channel of MPI library). But if you have an app that uses TCP/IP for communication and you want to run over IB, you can use IPoIB over the IB interface. Use the following commands (for RHEL/CentOS) to enable IP over InfiniBand.
-> [!IMPORTANT]
+> [!IMPORTANT]
> To avoid issues, ensure you aren't running older versions of Microsoft Azure Linux Agent (waagent). We recommend using at least [version 2.4.0.2](https://github.com/Azure/WALinuxAgent/releases/tag/v2.4.0.2) before enabling IP over IB. ```bash
virtual-machines Extensions Rmpolicy Howto Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/extensions-rmpolicy-howto-cli.md
description: Use Azure Policy to restrict VM extension deployments.
-+
Last updated 04/11/2023
If you want to prevent the installation of certain extensions on your Linux VMs, you can create an Azure Policy definition using the Azure CLI to restrict extensions for VMs within a resource group. To learn the basics of Azure VM extensions for Linux, see [Virtual machine extensions and features for Linux](./features-linux.md).
-This tutorial uses the CLI within the Azure Cloud Shell, which is constantly updated to the latest version. If you want to run the Azure CLI locally, you need to install version 2.0.26 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
+This tutorial uses the CLI within the Azure Cloud Shell, which is constantly updated to the latest version. If you want to run the Azure CLI locally, you need to install version 2.0.26 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
## Create a rules file
This example demonstrates how to deny the installation of disallowed VM extensio
## Create a parameters file
-You also need a [parameters](../../governance/policy/concepts/definition-structure.md#parameters) file that creates a structure for you to use for passing in a list of the unauthorized extensions.
+You also need a [parameters](../../governance/policy/concepts/definition-structure.md#parameters) file that creates a structure for you to use for passing in a list of the unauthorized extensions.
This example shows you how to create a parameter file for Linux VMs in Cloud Shell.
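A sketch of such a parameters file; the parameter name `notAllowedExtensions` and the metadata fields are illustrative and should match whatever name your policy rule references:

```json
{
  "notAllowedExtensions": {
    "type": "Array",
    "metadata": {
      "displayName": "Denied extensions",
      "description": "The list of extension types that can't be installed."
    }
  }
}
```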
virtual-machines Extensions Rmpolicy Howto Ps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/extensions-rmpolicy-howto-ps.md
Last updated 04/11/2023-+ # Use Azure Policy to restrict extensions installation on Windows VMs If you want to prevent the use or installation of certain extensions on your Windows VMs, you can create an Azure Policy definition using PowerShell to restrict extensions for VMs within a resource group.
-This tutorial uses Azure PowerShell within the Cloud Shell, which is constantly updated to the latest version.
+This tutorial uses Azure PowerShell within the Cloud Shell, which is constantly updated to the latest version.
## Create a rules file
Set-AzVMAccessExtension `
-ResourceGroupName "myResourceGroup" ` -VMName "myVM" ` -Name "myVMAccess" `
- -Location EastUS
+ -Location EastUS
``` In the portal, the password change should fail with the "The template deployment failed because of policy violation." message.
virtual-machines Features Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/features-linux.md
Last updated 05/24/2022-+ # Virtual machine extensions and features for Linux
virtual-machines Hpc Compute Infiniband Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/hpc-compute-infiniband-linux.md
vm-linux Last updated 04/21/2023-+
This extension supports the following OS distros, depending on driver support fo
| CentOS | 7.4, 7.5, 7.6, 7.7, 7.8, 7.9, 8.1, 8.2 | CX3-Pro, CX5, CX6 | | Red Hat Enterprise Linux | 7.4, 7.5, 7.6, 7.7, 7.8, 7.9, 8.1, 8.2 | CX3-Pro, CX5, CX6 |
-> [!IMPORTANT]
+> [!IMPORTANT]
> This document references a release version of Linux that is nearing or at End of Life (EOL). Please consider updating to a more current version. ### Internet connectivity
The following JSON shows the schema for the extension.
## Deployment
-### Azure Resource Manager Template
+### Azure Resource Manager Template
Azure VM extensions can be deployed with Azure Resource Manager templates. Templates are ideal when deploying one or more virtual machines that require post deployment configuration.
-The JSON configuration for a virtual machine extension can be nested inside the virtual machine resource, or placed at the root or top level of a Resource Manager JSON template. The placement of the JSON configuration affects the value of the resource name and type. For more information, see [Set name and type for child resources](../../azure-resource-manager/templates/child-resource-name-type.md).
+The JSON configuration for a virtual machine extension can be nested inside the virtual machine resource, or placed at the root or top level of a Resource Manager JSON template. The placement of the JSON configuration affects the value of the resource name and type. For more information, see [Set name and type for child resources](../../azure-resource-manager/templates/child-resource-name-type.md).
The following example assumes the extension is nested inside the virtual machine resource. When nesting the extension resource, the JSON is placed in the `"resources": []` object of the virtual machine.
az vm extension set \
--vm-name myVM \ --name InfiniBandDriverLinux \ --publisher Microsoft.HpcCompute \
- --version 1.2
+ --version 1.2
``` ### Add extension to a Virtual Machine Scale Set
virtual-machines Hpccompute Gpu Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/hpccompute-gpu-linux.md
vm-linux-+ Last updated 07/28/2023
This extension supports the following OS distros, depending on driver support fo
> [!NOTE] > The latest supported CUDA drivers for NC-series VMs are currently 470.82.01. Later driver versions aren't supported on the K80 cards in NC. While the extension is being updated with this end of support for NC, install CUDA drivers manually for K80 cards on the NC-series.
-> [!IMPORTANT]
-> This document references a release version of Linux that is nearing or at End of Life (EOL). Please consider updating to a more current version.
+> [!IMPORTANT]
+> This document references a release version of Linux that is nearing or at End of Life (EOL). Please consider updating to a more current version.
### Internet connectivity
You can deploy Azure NVIDIA VM extensions in the Azure portal.
1. Select **Review + create**, and select **Create**. Wait a few minutes for the driver to deploy. :::image type="content" source="./media/nvidia-ext-portal/create-nvidia-extension-linux.png" alt-text="Screenshot that shows selecting the Review + create button.":::
-
+ 1. Verify that the extension was added to the list of installed extensions. :::image type="content" source="./media/nvidia-ext-portal/verify-extension-linux.png" alt-text="Screenshot that shows the new extension in the list of extensions for the V M.":::
az vm extension set \
--vm-name myVM \ --name NvidiaGpuDriverLinux \ --publisher Microsoft.HpcCompute \
- --version 1.6
+ --version 1.6
``` The following example also adds two optional custom settings as an example for nondefault driver installation. Specifically, it updates the OS kernel to the latest and installs a specific CUDA toolkit version driver. Again, note the `--settings` are optional and default. Updating the kernel might increase the extension installation times. Also, choosing a specific (older) CUDA toolkit version might not always be compatible with newer kernels.
virtual-machines Key Vault Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/key-vault-linux.md
Title: Azure Key Vault VM Extension for Linux
+ Title: Azure Key Vault VM Extension for Linux
description: Deploy an agent performing automatic refresh of Key Vault certificates on virtual machines using a virtual machine extension.
Last updated 12/02/2019--++ # Key Vault virtual machine extension for Linux
-The Key Vault VM extension provides automatic refresh of certificates stored in an Azure key vault. Specifically, the extension monitors a list of observed certificates stored in key vaults. The extension retrieves and installs the corresponding certificates after detecting a change. This document details the supported platforms, configurations, and deployment options for the Key Vault VM extension for Linux.
+The Key Vault VM extension provides automatic refresh of certificates stored in an Azure key vault. Specifically, the extension monitors a list of observed certificates stored in key vaults. The extension retrieves and installs the corresponding certificates after detecting a change. This document details the supported platforms, configurations, and deployment options for the Key Vault VM extension for Linux.
### Operating system The Key Vault VM extension supports these Linux distributions: -- Ubuntu 20.04, 22.04
+- Ubuntu 20.04, 22.04
- [Azure Linux](../../azure-linux/intro-azure-linux.md) > [!NOTE]
The Key Vault VM extension supports these Linux distributions:
- [Use Azure RBAC secret, key, and certificate permissions with Azure Key Vault](/azure/key-vault/general/rbac-guide#using-azure-rbac-secret-key-and-certificate-permissions-with-key-vault) - [Key Vault scope role assignment](/azure/key-vault/general/rbac-guide?tabs=azure-cli#key-vault-scope-role-assignment) - VMSS should have the following identity setting:
- `
+ `
"identity": { "type": "UserAssigned", "userAssignedIdentities": {
The Key Vault VM extension supports these Linux distributions:
} } `
-
+ - AKV extension should have this setting: ` "authenticationSettings": {
The Key Vault VM extension supports these Linux distributions:
```azurecli az vm extension delete --name KeyVaultForLinux --resource-group ${resourceGroup} --vm-name ${vmName} az vm extension set -n "KeyVaultForLinux" --publisher Microsoft.Azure.KeyVault --resource-group "${resourceGroup}" --vm-name "${vmName}" --settings .\akvvm.json --version 2.0
-```
- The flag --version 2.0 is optional because the latest version is installed by default.
+```
+ The flag --version 2.0 is optional because the latest version is installed by default.
* If the VM has certificates downloaded by v1.0, deleting the v1.0 AKVVM extension doesn't delete the downloaded certificates. After installing v2.0, the existing certificates aren't modified. You would need to delete the certificate files or roll-over the certificate to get the PEM file with full-chain on the VM. ## Extension schema
-The following JSON shows the schema for the Key Vault VM extension. The extension doesn't require protected settings - all its settings are considered information without security impact. The extension requires a list of monitored secrets, polling frequency, and the destination certificate store. Specifically:
+The following JSON shows the schema for the Key Vault VM extension. The extension doesn't require protected settings - all its settings are considered information without security impact. The extension requires a list of monitored secrets, polling frequency, and the destination certificate store. Specifically:
```json { "type": "Microsoft.Compute/virtualMachines/extensions",
The following JSON shows the schema for the Key Vault VM extension. The extensio
> [!NOTE] > Your observed certificates URLs should be of the form `https://myVaultName.vault.azure.net/secrets/myCertName`.
->
+>
> This is because the `/secrets` path returns the full certificate, including the private key, while the `/certificates` path doesn't. More information about certificates can be found here: [Key Vault Certificates](../../key-vault/general/about-keys-secrets-certificates.md) > [!IMPORTANT] > The 'authenticationSettings' property is **required** for VMs with any **user assigned identities**. Even if you want to use a system assigned identity this is still required otherwise the VM extension doesn't know which identity to use. Without this section, a VM with user assigned identities will result in the Key Vault extension failing and being unable to download certificates. > Set msiClientId to the identity that will authenticate to Key Vault.
->
+>
> Also **required** for **Azure Arc-enabled VMs**. > Set msiEndpoint to `http://localhost:40342/metadata/identity`.
The following JSON shows the schema for the Key Vault VM extension. The extensio
## Template deployment
-Azure VM extensions can be deployed with Azure Resource Manager templates. Templates are ideal when deploying one or more virtual machines that require post deployment refresh of certificates. The extension can be deployed to individual VMs or virtual machine scale sets. The schema and configuration are common to both template types.
+Azure VM extensions can be deployed with Azure Resource Manager templates. Templates are ideal when deploying one or more virtual machines that require post deployment refresh of certificates. The extension can be deployed to individual VMs or virtual machine scale sets. The schema and configuration are common to both template types.
The JSON configuration for a virtual machine extension must be nested inside the virtual machine resource fragment of the template, specifically `"resources": []` object for the virtual machine template and for a virtual machine scale set under `"virtualMachineProfile":"extensionProfile":{"extensions" :[]` object. > [!NOTE] > The VM extension would require system or user managed identity to be assigned to authenticate to Key vault. See [How to authenticate to Key Vault and assign a Key Vault access policy.](../../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md)
->
+>
```json {
The JSON configuration for a virtual machine extension must be nested inside the
"certificateStoreName": <ingnored on linux>, "certificateStoreLocation": <disk path where certificate is stored, default: "/var/lib/waagent/Microsoft.Azure.KeyVault">, "observedCertificates": <list of KeyVault URIs representing monitored certificates, e.g.: "https://myvault.vault.azure.net/secrets/mycertificate"
- }
+ }
} } }
To turn on extension dependency, set the following:
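If this refers to the standard extension sequencing mechanism, the dependent extension's properties would name this extension in `provisionAfterExtensions`. A hypothetical fragment follows (the dependent extension's name, publisher, type, and version are placeholders; verify against the extension sequencing documentation):

```json
{
  "name": "MyDependentExtension",
  "properties": {
    "provisionAfterExtensions": [ "KeyVaultForLinux" ],
    "publisher": "<dependent-extension-publisher>",
    "type": "<dependent-extension-type>",
    "typeHandlerVersion": "<version>"
  }
}
```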
> [!WARNING] > PowerShell clients often add `\` to `"` in the settings.json, which causes akvvm_service to fail with the error: `[CertificateManagementConfiguration] Failed to parse the configuration settings with:not an object.`
-The Azure PowerShell can be used to deploy the Key Vault VM extension to an existing virtual machine or virtual machine scale set.
+The Azure PowerShell can be used to deploy the Key Vault VM extension to an existing virtual machine or virtual machine scale set.
* To deploy the extension on a VM:
-
+ ```powershell # Build settings
- $settings = '{"secretsManagementSettings":
- { "pollingIntervalInS": "' + <pollingInterval> +
- '", "certificateStoreName": "' + <certStoreName> +
- '", "certificateStoreLocation": "' + <certStoreLoc> +
+ $settings = '{"secretsManagementSettings":
+ { "pollingIntervalInS": "' + <pollingInterval> +
+ '", "certificateStoreName": "' + <certStoreName> +
+ '", "certificateStoreLocation": "' + <certStoreLoc> +
'", "observedCertificates": ["' + <observedCert1> + '","' + <observedCert2> + '"] } }' $extName = "KeyVaultForLinux" $extPublisher = "Microsoft.Azure.KeyVault" $extType = "KeyVaultForLinux"
-
-
++ # Start the deployment Set-AzVmExtension -TypeHandlerVersion "2.0" -EnableAutomaticUpgrade true -ResourceGroupName <ResourceGroupName> -Location <Location> -VMName <VMName> -Name $extName -Publisher $extPublisher -Type $extType -SettingString $settings
-
+ ``` * To deploy the extension on a virtual machine scale set: ```powershell
-
+ # Build settings
- $settings = '{"secretsManagementSettings":
- { "pollingIntervalInS": "' + <pollingInterval> +
- '", "certificateStoreName": "' + <certStoreName> +
- '", "certificateStoreLocation": "' + <certStoreLoc> +
+ $settings = '{"secretsManagementSettings":
+ { "pollingIntervalInS": "' + <pollingInterval> +
+ '", "certificateStoreName": "' + <certStoreName> +
+ '", "certificateStoreLocation": "' + <certStoreLoc> +
'", "observedCertificates": ["' + <observedCert1> + '","' + <observedCert2> + '"] } }' $extName = "KeyVaultForLinux" $extPublisher = "Microsoft.Azure.KeyVault" $extType = "KeyVaultForLinux"
-
+ # Add Extension to VMSS $vmss = Get-AzVmss -ResourceGroupName <ResourceGroupName> -VMScaleSetName <VmssName> Add-AzVmssExtension -VirtualMachineScaleSet $vmss -Name $extName -Publisher $extPublisher -Type $extType -TypeHandlerVersion "2.0" -EnableAutomaticUpgrade true -Setting $settings # Start the deployment
- Update-AzVmss -ResourceGroupName <ResourceGroupName> -VMScaleSetName <VmssName> -VirtualMachineScaleSet $vmss
+ Update-AzVmss -ResourceGroupName <ResourceGroupName> -VMScaleSetName <VmssName> -VirtualMachineScaleSet $vmss
``` ## Azure CLI deployment
-The Azure CLI can be used to deploy the Key Vault VM extension to an existing virtual machine or virtual machine scale set.
-
+The Azure CLI can be used to deploy the Key Vault VM extension to an existing virtual machine or virtual machine scale set.
+ * To deploy the extension on a VM:
-
+ ```azurecli # Start the deployment az vm extension set -n "KeyVaultForLinux" `
The Azure CLI can be used to deploy the Key Vault VM extension to an existing vi
``` Please be aware of the following restrictions/requirements: - Key Vault restrictions:
- - It must exist at the time of the deployment
+ - It must exist at the time of the deployment
- The Key Vault Access Policy must be set for VM/VMSS Identity using a Managed Identity. See [How to Authenticate to Key Vault](../../key-vault/general/authentication.md) and [Assign a Key Vault access policy](../../key-vault/general/assign-access-policy-cli.md).
Symbolic links or Symlinks are advanced shortcuts. To avoid monitoring the folde
* Is there a limit on the number of observedCertificates you can configure? No, the Key Vault VM extension doesn't have a limit on the number of observedCertificates.
-
+ ### Support
virtual-machines Network Watcher Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/network-watcher-linux.md
Title: Network Watcher Agent VM extension - Linux
+ Title: Network Watcher Agent VM extension - Linux
description: Deploy the Network Watcher Agent virtual machine extension on Linux virtual machines.
Last updated 06/29/2023-+ # Network Watcher Agent virtual machine extension for Linux
virtual-machines Stackify Retrace Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/stackify-retrace-linux.md
Title: Stackify Retrace Azure Linux Agent Extension
+ Title: Stackify Retrace Azure Linux Agent Extension
description: Deploy the Stackify Retrace Linux agent on a Linux virtual machine.
Previously updated : 04/12/2018 - Last updated : 04/12/2018+ ms.devlang: azurecli # Stackify Retrace Linux Agent Extension
Retrace is the ONLY tool that delivers all of the following capabilities across
**About Stackify Linux Agent Extension**
-This extension provides an install path for the Linux Agent for Retrace.
+This extension provides an install path for the Linux Agent for Retrace.
## Prerequisites
-### Operating system
+### Operating system
The Retrace agent can be run against these Linux distributions
The Retrace agent can be run against these Linux distributions
> [!IMPORTANT]
-> Keep in mind that Red Hat Enterprise Linux 6.X is already EOL.
+> Keep in mind that Red Hat Enterprise Linux 6.X is already EOL.
> RHEL 6.10 has [ELS support](https://www.redhat.com/en/resources/els-datasheet) available, which [will end on 06/2024]( https://access.redhat.com/product-life-cycles/?product=Red%20Hat%20Enterprise%20Linux,OpenShift%20Container%20Platform%204). ### Internet connectivity
-The Stackify Agent extension for Linux requires that the target virtual machine is connected to the internet.
+The Stackify Agent extension for Linux requires that the target virtual machine is connected to the internet.
-You may need to adjust your network configuration to allow connections to Stackify. See https://support.stackify.com/hc/en-us/articles/207891903-Adding-Exceptions-to-a-Firewall.
+You may need to adjust your network configuration to allow connections to Stackify. See https://support.stackify.com/hc/en-us/articles/207891903-Adding-Exceptions-to-a-Firewall.
## Extension schema
The following JSON shows the schema for the Stackify Retrace Agent extension. Th
"activationKey": "myActivationKey" } }
- }
+ }
```
-## Template deployment
+## Template deployment
-Azure VM extensions can be deployed with Azure Resource Manager templates. The JSON schema detailed in the previous section can be used in an Azure Resource Manager template to run the Stackify Retrace Linux Agent extension during an Azure Resource Manager template deployment.
+Azure VM extensions can be deployed with Azure Resource Manager templates. The JSON schema detailed in the previous section can be used in an Azure Resource Manager template to run the Stackify Retrace Linux Agent extension during an Azure Resource Manager template deployment.
The JSON for a virtual machine extension can be nested inside the virtual machine resource, or placed at the root or top level of a Resource Manager JSON template. The placement of the JSON affects the value of the resource name and type. For more information, see Set name and type for child resources.
The extension requires the `environment` and `activationKey`.
"activationKey": "myActivationKey" } }
- }
+ }
``` When placing the extension JSON at the root of the template, the resource name includes a reference to the parent virtual machine, and the type reflects the nested configuration.
Set-AzVMExtension -ExtensionName "Stackify.LinuxAgent.Extension" `
-Location WestUS ` ```
-## Azure CLI deployment
+## Azure CLI deployment
-The Azure CLI tool can be used to deploy the Stackify Retrace Linux Agent virtual machine extension to an existing virtual machine.
+The Azure CLI tool can be used to deploy the Stackify Retrace Linux Agent virtual machine extension to an existing virtual machine.
The extension requires the `environment` and `activationKey`.
virtual-machines Update Linux Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/update-linux-agent.md
Title: Update the Azure Linux Agent from GitHub
+ Title: Update the Azure Linux Agent from GitHub
description: Learn how to update Azure Linux Agent for your Linux VM in Azure -+
Open [the release of Azure Linux Agent in GitHub](https://github.com/Azure/WALin
For version 2.2.x or later, type: ```bash
-wget https://github.com/Azure/WALinuxAgent/archive/refs/tags/v2.2.x.zip
+wget https://github.com/Azure/WALinuxAgent/archive/refs/tags/v2.2.x.zip
unzip v2.2.x.zip cd WALinuxAgent-2.2.x ```
The following line uses version 2.2.14 as an example:
```bash wget https://github.com/Azure/WALinuxAgent/archive/refs/tags/v2.2.14.zip
-unzip v2.2.14.zip
+unzip v2.2.14.zip
cd WALinuxAgent-2.2.14 ```
virtual-machines Hb Hc Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/hb-hc-known-issues.md
Title: Troubleshooting known issues with HPC and GPU VMs - Azure Virtual Machine
description: Learn about troubleshooting known issues with HPC and GPU VM sizes in Azure. -+ Last updated 03/10/2023
If it is necessary to use the incompatible OFED, a solution is to use the **Cano
## Accelerated Networking on HB, HC, HBv2, HBv3, HBv4, HX, NDv2 and NDv4
-[Azure Accelerated Networking](https://azure.microsoft.com/blog/maximize-your-vm-s-performance-with-accelerated-networking-now-generally-available-for-both-windows-and-linux/) is now available on the RDMA and InfiniBand capable and SR-IOV enabled VM sizes [HB](hb-series.md), [HC](hc-series.md), [HBv2](hbv2-series.md), [HBv3](hbv3-series.md), [HBv4](hbv4-series.md), [HX](hx-series.md), [NDv2](ndv2-series.md) and [NDv4](nda100-v4-series.md). This capability now allows enhanced throughput (up to 30 Gbps) and improved latencies over the Azure Ethernet network. Though this is separate from the RDMA capabilities over the InfiniBand network, some platform changes for this capability may impact behavior of certain MPI implementations when running jobs over InfiniBand. Specifically, the InfiniBand interface on some VMs may have a slightly different name (mlx5_1 as opposed to earlier mlx5_0). This may require tweaking of the MPI command lines, especially when using the UCX interface (commonly with OpenMPI and HPC-X).
+[Azure Accelerated Networking](https://azure.microsoft.com/blog/maximize-your-vm-s-performance-with-accelerated-networking-now-generally-available-for-both-windows-and-linux/) is now available on the RDMA and InfiniBand capable and SR-IOV enabled VM sizes [HB](hb-series.md), [HC](hc-series.md), [HBv2](hbv2-series.md), [HBv3](hbv3-series.md), [HBv4](hbv4-series.md), [HX](hx-series.md), [NDv2](ndv2-series.md) and [NDv4](nda100-v4-series.md). This capability now allows enhanced throughput (up to 30 Gbps) and improved latencies over the Azure Ethernet network. Though this is separate from the RDMA capabilities over the InfiniBand network, some platform changes for this capability may impact behavior of certain MPI implementations when running jobs over InfiniBand. Specifically, the InfiniBand interface on some VMs may have a slightly different name (mlx5_1 as opposed to earlier mlx5_0). This may require tweaking of the MPI command lines, especially when using the UCX interface (commonly with OpenMPI and HPC-X).
The simplest solution currently is to use the latest HPC-X on the CentOS-HPC VM images where we rename the InfiniBand and Accelerated Networking interfaces accordingly or to run the [script](https://github.com/Azure/azhpc-images/blob/master/common/install_azure_persistent_rdma_naming.sh) to rename the InfiniBand interface.
virtual-machines Hb Series Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/hb-series-overview.md
Title: HB-series VM overview - Azure Virtual Machines | Microsoft Docs
description: Learn about the preview support for the HB-series VM size in Azure. -+ Last updated 04/20/2023
The following diagram shows the segregation of cores reserved for Azure Hypervis
| OS Support for SRIOV RDMA | CentOS/RHEL 7.6+, Ubuntu 18.04+, SLES 15.4, WinServer 2016+ | | Orchestrator Support | CycleCloud, Batch, AKS; [cluster configuration options](sizes-hpc.md#cluster-configuration-options) |
-> [!IMPORTANT]
+> [!IMPORTANT]
> This document references a release version of Linux that is nearing or at End of Life (EOL). Please consider updating to a more current version. ## Next steps
virtual-machines Hbv2 Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/hbv2-performance.md
-+ Last updated 03/04/2023
-
+ # HBv2-series virtual machine sizes
MPI bandwidth test from the OSU microbenchmark suite is run. Sample scripts are
## Mellanox Perftest
-The [Mellanox Perftest package](https://community.mellanox.com/s/article/perftest-package) has many InfiniBand tests such as latency (ib_send_lat) and bandwidth (ib_send_bw). An example command is below.
+The [Mellanox Perftest package](https://community.mellanox.com/s/article/perftest-package) has many InfiniBand tests such as latency (ib_send_lat) and bandwidth (ib_send_bw). An example command is below.
```bash
virtual-machines Hbv2 Series Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/hbv2-series-overview.md
Title: HBv2-series VM overview - Azure Virtual Machines | Microsoft Docs
description: Learn about the HBv2-series VM size in Azure. tags: azure-resource-manager-+ Previously updated : 07/13/2023 Last updated : 01/18/2024
-
++
-
-# HBv2 series virtual machine overview
+# HBv2 series virtual machine overview
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
-Maximizing high performance compute (HPC) application performance on AMD EPYC requires a thoughtful approach to memory locality and process placement. Below we outline the AMD EPYC architecture and our implementation of it on Azure for HPC applications. We use the term **pNUMA** to refer to a physical NUMA domain, and **vNUMA** to refer to a virtualized NUMA domain.
+Maximizing high performance compute (HPC) application performance on AMD EPYC requires a thoughtful approach to memory locality and process placement. Below we outline the AMD EPYC architecture and our implementation of it on Azure for HPC applications. We use the term **pNUMA** to refer to a physical NUMA domain, and **vNUMA** to refer to a virtualized NUMA domain.
-Physically, an [HBv2-series](hbv2-series.md) server is 2 * 64-core EPYC 7742 CPUs for a total of 128 physical cores. These 128 cores are divided into 32 pNUMA domains (16 per socket), each of which is 4 cores and termed by AMD as a **Core Complex** (or **CCX**). Each CCX has its own L3 cache, which is how an OS sees a pNUMA/vNUMA boundary. Four adjacent CCXs share access to two channels of physical DRAM.
+Physically, an [HBv2-series](hbv2-series.md) server is 2 * 64-core EPYC 7V12 CPUs for a total of 128 physical cores. These 128 cores are divided into 32 pNUMA domains (16 per socket), each of which is 4 cores and termed by AMD as a **Core Complex** (or **CCX**). Each CCX has its own L3 cache, which is how an OS sees a pNUMA/vNUMA boundary. Four adjacent CCXs share access to two channels of physical DRAM.
To provide room for the Azure hypervisor to operate without interfering with the VM, we reserve physical pNUMA domains 0 and 16 (that is, the first CCX of each CPU socket). All remaining 30 pNUMA domains are assigned to the VM at which point they become vNUMA. Thus, the VM sees:
-`(30 vNUMA domains) * (4 cores/vNUMA) = 120` cores per VM
+`(30 vNUMA domains) * (4 cores/vNUMA) = 120` cores per VM
The VM itself has no awareness that pNUMA 0 and 16 are reserved. It enumerates the vNUMA it sees as 0-29, with 15 vNUMA per socket symmetrically, vNUMA 0-14 on vSocket 0, and vNUMA 15-29 on vSocket 1.
-Process pinning works on HBv2-series VMs because we expose the underlying silicon as-is to the guest VM. We strongly recommend process pinning for optimal performance and consistency.
+Process pinning works on HBv2-series VMs because we expose the underlying silicon as-is to the guest VM. We strongly recommend process pinning for optimal performance and consistency.
-## Hardware specifications
+## Hardware specifications
-| Hardware Specifications | HBv2-series VM |
+| Hardware Specifications | HBv2-series VM |
|-|-|
-| Cores | 120 (SMT disabled) |
-| CPU | AMD EPYC 7742 |
-| CPU Frequency (non-AVX) | ~3.1 GHz (single + all cores) |
-| Memory | 4 GB/core (480 GB total) |
-| Local Disk | 960 GiB NVMe (block), 480 GB SSD (page file) |
-| Infiniband | 200 Gb/s HDR Mellanox ConnectX-6 |
-| Network | 50 Gb/s Ethernet (40 Gb/s usable) Azure second Gen SmartNIC |
+| Cores | 120 (SMT disabled) |
+| CPU | AMD EPYC 7V12 |
+| CPU Frequency (non-AVX) | ~3.1 GHz (single + all cores) |
+| Memory | 4 GB/core (480 GB total) |
+| Local Disk | 960 GiB NVMe (block), 480 GB SSD (page file) |
+| Infiniband | 200 Gb/s HDR Mellanox ConnectX-6 |
+| Network | 50 Gb/s Ethernet (40 Gb/s usable) Azure second Gen SmartNIC |
-## Software specifications
+## Software specifications
-| Software Specifications | HBv2-series VM |
+| Software Specifications | HBv2-series VM |
|--|--| | Max MPI Job Size | 36000 cores (300 VMs in a single virtual machine scale set with singlePlacementGroup=true) | | MPI Support | HPC-X, Intel MPI, OpenMPI, MVAPICH2, MPICH, Platform MPI |
Process pinning works on HBv2-series VMs because we expose the underlying silico
| OS Support for SRIOV RDMA | CentOS/RHEL 7.9+, Ubuntu 18.04+, SLES 12 SP5+, WinServer 2016+ | | Orchestrator Support | CycleCloud, Batch, AKS; [cluster configuration options](sizes-hpc.md#cluster-configuration-options) |
-> [!NOTE]
+> [!NOTE]
> Windows Server 2012 R2 is not supported on HBv2 and other VMs with more than 64 (virtual or physical) cores. See [Supported Windows guest operating systems for Hyper-V on Windows Server](/windows-server/virtualization/hyper-v/supported-windows-guest-operating-systems-for-hyper-v-on-windows) for more details. ## Next steps
virtual-machines Hbv2 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/hbv2-series.md
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
-HBv2-series VMs are optimized for applications that are driven by memory bandwidth, such as fluid dynamics, finite element analysis, and reservoir simulation. HBv2 VMs feature 120 AMD EPYC 7742 processor cores, 4 GB of RAM per CPU core, and no simultaneous multithreading. Each HBv2 VM provides up to 350 GB/s of memory bandwidth, and up to 4 teraFLOPS of FP64 compute.
+HBv2-series VMs are optimized for applications that are driven by memory bandwidth, such as fluid dynamics, finite element analysis, and reservoir simulation. HBv2 VMs feature 120 AMD EPYC 7V12 processor cores, 4 GB of RAM per CPU core, and no simultaneous multithreading. Each HBv2 VM provides up to 350 GB/s of memory bandwidth, and up to 4 teraFLOPS of FP64 compute.
HBv2-series VMs feature 200 Gb/sec Mellanox HDR InfiniBand. These VMs are connected in a non-blocking fat tree for optimized and consistent RDMA performance. These VMs support Adaptive Routing and the Dynamic Connected Transport (DCT, in addition to standard RC and UD transports). These features enhance application performance, scalability, and consistency, and their usage is recommended.
virtual-machines Hbv3 Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/hbv3-performance.md
-+ Last updated 03/04/2023
virtual-machines Hbv3 Series Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/hbv3-series-overview.md
Title: HBv3-series VM overview, architecture, topology - Azure Virtual Machines
description: Learn about the HBv3-series VM size in Azure. tags: azure-resource-manager-+
-# HBv3-series virtual machine overview
+# HBv3-series virtual machine overview
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
Standard_HB120-16rs_v3 | 4 | 4 | Dual
> [!NOTE] > The constrained cores VM sizes only reduce the number of physical cores exposed to the VM. All global shared assets (RAM, memory bandwidth, L3 cache, GMI and xGMI connectivity, InfiniBand, Azure Ethernet network, local SSD) stay constant. This allows a customer to pick a VM size best tailored to a given set of workload or software licensing needs.
-The virtual NUMA mapping of each HBv3 VM size is mapped to the underlying physical NUMA topology. There is no potentially misleading abstraction of the hardware topology.
+The virtual NUMA mapping of each HBv3 VM size is mapped to the underlying physical NUMA topology. There is no potentially misleading abstraction of the hardware topology.
The exact topology for the various [HBv3 VM size](hbv3-series.md) appears as follows using the output of [lstopo](https://linux.die.net/man/1/lstopo): ```bash
Two other, larger SSDs are provided as unformatted block NVMe devices via NVMeDi
When paired in a striped array, the NVMe SSD provides up to 7 GB/s reads and 3 GB/s writes, and up to 186,000 IOPS (reads) and 201,000 IOPS (writes) for deep queue depths.
-## Hardware specifications
+## Hardware specifications
| Hardware specifications | HBv3-series VMs | |-|-|
-| Cores | 120, 96, 64, 32, or 16 (SMT disabled) |
-| CPU | AMD EPYC 7V73X |
-| CPU Frequency (non-AVX) | 3.0 GHz (all cores), 3.5 GHz (up to 10 cores) |
-| Memory | 448 GB (RAM per core depends on VM size) |
-| Local Disk | 2 * 960 GB NVMe (block), 480 GB SSD (page file) |
-| Infiniband | 200 Gb/s Mellanox ConnectX-6 HDR InfiniBand |
-| Network | 50 Gb/s Ethernet (40 Gb/s usable) Azure second Gen SmartNIC |
+| Cores | 120, 96, 64, 32, or 16 (SMT disabled) |
+| CPU | AMD EPYC 7V73X |
+| CPU Frequency (non-AVX) | 3.0 GHz (all cores), 3.5 GHz (up to 10 cores) |
+| Memory | 448 GB (RAM per core depends on VM size) |
+| Local Disk | 2 * 960 GB NVMe (block), 480 GB SSD (page file) |
+| Infiniband | 200 Gb/s Mellanox ConnectX-6 HDR InfiniBand |
+| Network | 50 Gb/s Ethernet (40 Gb/s usable) Azure second Gen SmartNIC |
-## Software specifications
+## Software specifications
-| Software specifications | HBv3-series VMs |
+| Software specifications | HBv3-series VMs |
|--|--| | Max MPI Job Size | 36,000 cores (300 VMs in a single Virtual Machine Scale Set with singlePlacementGroup=true) | | MPI Support | HPC-X, Intel MPI, OpenMPI, MVAPICH2, MPICH |
When paired in a striped array, the NVMe SSD provides up to 7 GB/s reads and 3 G
| Azure Storage Support | Standard and Premium Disks (maximum 32 disks) | | OS Support for SRIOV RDMA | CentOS/RHEL 7.9+, Ubuntu 18.04+, SLES 15.4, WinServer 2016+ | | Recommended OS for Performance | CentOS 8.1, Windows Server 2019+
-| Orchestrator Support | Azure CycleCloud, Azure Batch, AKS; [cluster configuration options](sizes-hpc.md#cluster-configuration-options) |
+| Orchestrator Support | Azure CycleCloud, Azure Batch, AKS; [cluster configuration options](sizes-hpc.md#cluster-configuration-options) |
-> [!NOTE]
+> [!NOTE]
> Windows Server 2012 R2 is not supported on HBv3 and other VMs with more than 64 (virtual or physical) cores. For more details, see [Supported Windows guest operating systems for Hyper-V on Windows Server](/windows-server/virtualization/hyper-v/supported-windows-guest-operating-systems-for-hyper-v-on-windows).
-> [!IMPORTANT]
+> [!IMPORTANT]
> This document references a release version of Linux that is nearing or at End of Life (EOL). Please consider updating to a more current version. ## Next steps
virtual-machines Hc Series Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/hc-series-overview.md
Title: HC-series VM overview - Azure Virtual Machines| Microsoft Docs
-description: Learn about the preview support for the HC-series VM size in Azure.
+description: Learn about the preview support for the HC-series VM size in Azure.
-+ Last updated 04/18/2023
The following diagram shows the segregation of cores reserved for Azure Hypervis
| OS Support for SRIOV RDMA | CentOS/RHEL 7.6+, Ubuntu 18.04+, SLES 15.4, WinServer 2016+ | | Orchestrator Support | CycleCloud, Batch, AKS; [cluster configuration options](sizes-hpc.md#cluster-configuration-options) |
-> [!IMPORTANT]
+> [!IMPORTANT]
> This document references a release version of Linux that is nearing or at End of Life (EOL). Please consider updating to a more current version. ## Next steps
virtual-machines How To Enable Write Accelerator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/how-to-enable-write-accelerator.md
Title: Azure Write Accelerator
+ Title: Azure Write Accelerator
description: Documentation on how to enable and use Write Accelerator
Last updated 04/11/2023 --++ # Enable Write Accelerator
A new switch parameter, **-WriteAccelerator** has been added to the following cm
- [Add-AzVmssDataDisk](/powershell/module/az.compute/Add-AzVmssDataDisk) >[!NOTE]
-> If enabling Write Accelerator on Virtual Machine Scale Sets using Flexible Orchestration Mode, you need to enable it on each individual instance.
+> If enabling Write Accelerator on Virtual Machine Scale Sets using Flexible Orchestration Mode, you need to enable it on each individual instance.
Omitting the parameter sets the property to false and deploys disks that don't have Write Accelerator support.
$vmName="myVM"
#Specify your Resource Group $rgName = "myWAVMs" #data disk name
-$datadiskname = "test-log001"
-#new Write Accelerator status ($true for enabled, $false for disabled)
+$datadiskname = "test-log001"
+#new Write Accelerator status ($true for enabled, $false for disabled)
$newstatus = $true #Pulls the VM info for later $vm=Get-AzVM -ResourceGroupName $rgname -Name $vmname
virtual-machines Instance Metadata Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/instance-metadata-service.md
-+ Last updated 04/11/2023
-# Azure Instance Metadata Service
+# Azure Instance Metadata Service
-**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets
+**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets
The Azure Instance Metadata Service (IMDS) provides information about currently running virtual machine instances. You can use it to manage and configure your virtual machines. This information includes the SKU, storage, network configurations, and upcoming maintenance events. For a complete list of the data available, see the [Endpoint Categories Summary](#endpoint-categories).
Any request that doesn't meet **both** of these requirements is rejected by the
> [!IMPORTANT] > IMDS is **not** a channel for sensitive data. The API is unauthenticated and open to all processes on the VM. Information exposed through this service should be considered as shared information to all applications running inside the VM.
-If it isn't necessary for every process on the VM to access the IMDS endpoint, you can set local firewall rules to limit access.
-For example, if only a known system service needs to access the instance metadata service, you can set a firewall rule on the IMDS endpoint that allows access only for that specific process (or processes) and denies it for all other processes.
+If it isn't necessary for every process on the VM to access the IMDS endpoint, you can set local firewall rules to limit access.
+For example, if only a known system service needs to access the instance metadata service, you can set a firewall rule on the IMDS endpoint that allows access only for that specific process (or processes) and denies it for all other processes.
## Proxies
Endpoints may support required and/or optional parameters. See [Schema](#schema)
### Query parameters
-IMDS endpoints support HTTP query string parameters. For example:
+IMDS endpoints support HTTP query string parameters. For example:
```URL http://169.254.169.254/metadata/instance/compute?api-version=2021-01-01&format=json
Requests with duplicate query parameter names will be rejected.
### Route parameters
-For some endpoints that return larger json blobs, we support appending route parameters to the request endpoint to filter down to a subset of the response:
+For some endpoints that return larger json blobs, we support appending route parameters to the request endpoint to filter down to a subset of the response:
```URL http://169.254.169.254/metadata/<endpoint>/[<filter parameter>/...]?<query parameters>
Data | Description | Version introduced |
| `keyEncryptionKey.keyUrl` | The location of the key | 2021-11-01 The resource disk object contains the size of the [Local Temp Disk](managed-disks-overview.md#temporary-disk) attached to the VM, if it has one, in kilobytes.
-If there's [no local temp disk for the VM](azure-vms-no-temp-disk.yml), this value is 0.
+If there's [no local temp disk for the VM](azure-vms-no-temp-disk.yml), this value is 0.
| Data | Description | Version introduced | ||-|--|
If there's [no local temp disk for the VM](azure-vms-no-temp-disk.yml), this val
| `macAddress` | VM mac address | 2017-04-02 > [!NOTE]
-> The nics returned by the network call are not guaranteed to be in order.
+> The nics returned by the network call are not guaranteed to be in order.
### Get user data
-When creating a new VM, you can specify a set of data to be used during or after the VM provisioning, and retrieve it through IMDS. Check the end-to-end user data experience [here](user-data.md).
+When creating a new VM, you can specify a set of data to be used during or after the VM provisioning, and retrieve it through IMDS. Check the end-to-end user data experience [here](user-data.md).
To set up user data, utilize the quickstart template [here](https://aka.ms/ImdsUserDataArmTemplate). The sample below shows how to retrieve this data through IMDS. This feature is released with version `2021-01-01` and above.
curl -H Metadata:true --noproxy "*" "http://169.254.169.254/metadata/instance/co
}, "hostGroup": { "id": "testHostGroupId"
- },
+ },
"isHostCompatibilityLayerVm": "true", "licenseType": "Windows_Client", "location": "westus",
curl -H Metadata:true --noproxy "*" "http://169.254.169.254/metadata/instance/co
}, "hostGroup": { "id": "testHostGroupId"
- },
+ },
"isHostCompatibilityLayerVm": "true", "licenseType": "Windows_Client", "location": "westus",
Verification successful
"expiresOn": "11/28/18 06:16:17 -0000" }, "vmId": "d3e0e374-fda6-4649-bbc9-7f20dc379f34",
- "licenseType": "Windows_Client",
+ "licenseType": "Windows_Client",
"subscriptionId": "xxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx", "sku": "RS3-Pro" }
virtual-machines Linux Vm Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux-vm-connect.md
-+ Last updated 04/06/2023
# Connect to a Linux VM

When hosting a Linux virtual machine on Azure, the most common method for accessing that VM is through the Secure Shell Protocol (SSH). Any standard SSH client commonly found in Linux and Windows allows you to connect. You can also use [Azure Cloud Shell](../cloud-shell/overview.md) from any browser.
-
+ This document describes how to connect, via SSH, to a VM that has a public IP. If you need to connect to a VM without a public IP, see [Azure Bastion Service](../bastion/bastion-overview.md).

## Prerequisites

- You need an SSH key pair. If you don't already have one, Azure creates a key pair during the deployment process. If you need help with creating one manually, see [Create and use an SSH public-private key pair for Linux VMs in Azure](./linux/mac-create-ssh-keys.md).
- You need an existing Network Security Group (NSG). Most VMs have an NSG by default, but if you don't already have one you can create one and attach it manually. For more information, see [Create, change, or delete a network security group](../virtual-network/manage-network-security-group.md).
-- To connect to a Linux VM, you need the appropriate port open. Typically SSH uses port 22. The following instructions assume port 22 but the process is the same for other port numbers. You can validate an appropriate port is open for SSH using the troubleshooter or by checking manually in your VM settings. To check if port 22 is open:
+- To connect to a Linux VM, you need the appropriate port open. Typically SSH uses port 22. The following instructions assume port 22 but the process is the same for other port numbers. You can validate an appropriate port is open for SSH using the troubleshooter or by checking manually in your VM settings. To check if port 22 is open:
1. On the page for the VM, select **Networking** from the left menu.
1. On the **Networking** page, check to see if there's a rule that allows TCP on port 22 from the IP address of the computer you're using to connect to the VM. If the rule exists, you can move to the next section.
-
+ :::image type="content" source="media/linux-vm-connect/check-rule.png" alt-text="Screenshot showing how to check to see if there's already a rule allowing S S H connections."::: 1. If there isn't a rule, add one by selecting **Add inbound port rule**. 1. For **Service**, select **SSH** from the dropdown.
-
+ :::image type="content" source="media/linux-vm-connect/create-rule.png" alt-text="Screenshot showing where to choose S S H when creating a new N S G rule."::: 1. Edit **Priority** and **Source** if necessary
This document describes how to connect, via SSH, to a VM that has a public IP. I
1. You should now have an SSH rule in the table of inbound port rules.

- Your VM must have a public IP address. To check if your VM has a public IP address, select **Overview** from the left menu and look at the **Networking** section. If you see an IP address next to **Public IP address**, then your VM has a public IP address.
-
+ If your VM doesn't have a public IP address, it looks like this:

  :::image type="content" source="media/linux-vm-connect/no-public-ip.png" alt-text="Screenshot of how the networking section looks when you don't have a public I P.":::
-
+ To learn more about adding a public IP address to an existing VM, see [Associate a public IP address to a virtual machine](../virtual-network/ip-services/associate-public-ip-address-vm.md).

- Verify your VM is running. On the Overview tab, in the **Essentials** section, verify the status of the VM is **Running**. To start the VM, select **Start** at the top of the page.

  :::image type="content" source="media/linux-vm-connect/running.png" alt-text="Screenshot showing how to check to make sure your virtual machine is in the running state.":::
-
+ If you're having trouble connecting, you can also use the portal:

  1. Go to the [Azure portal](https://portal.azure.com/) to connect to a VM. Search for and select **Virtual machines**.
  2. Select the virtual machine from the list.
  3. Select **Connect** from the left menu.
  4. Select the option that fits with your preferred way of connecting. The portal helps walk you through the prerequisites for connecting.
-
+ ## Connect to the VM

Once the above prerequisites are met, you're ready to connect to your VM. Open your SSH client of choice. The SSH client command is typically included in Linux, macOS, and Windows. If you're using Windows 7 or older, where Win32 OpenSSH isn't included by default, consider installing [WSL](/windows/wsl/about) or using [Azure Cloud Shell](../cloud-shell/overview.md) from the browser.
Once the above prerequisites are met, you're ready to connect to your VM. Open y
1. Ensure your public and private keys are in the correct directory. The directory is usually `~/.ssh`. If you generated keys manually or generated them with the CLI, then the keys are probably already there. However, if you downloaded them in pem format from the Azure portal, you may need to move them to the right location. Moving the keys is done with the following syntax: `mv PRIVATE_KEY_SOURCE PRIVATE_KEY_DESTINATION`
-
+ For example, if the key is in the `Downloads` folder, and `myKey.pem` is the name of your SSH key, type:

   ```bash
   mv /Downloads/myKey.pem ~/.ssh
- ```
+ ```
> [!NOTE]
> If you're using WSL, local files are found in the `mnt/c/` directory. Accordingly, the path to the downloads folder and SSH key would be `/mnt/c/Users/{USERNAME}/Downloads/myKey.pem`
-
-2. Ensure you have read-only access to the private key by running
+
+2. Ensure you have read-only access to the private key by running
   ```bash
   chmod 400 ~/.ssh/myKey.pem
- ```
+ ```
3. Run the SSH command with the following syntax: `ssh -i PATH_TO_PRIVATE_KEY USERNAME@EXTERNAL_IP`
-
+ For example, if `azureuser` is the username you created and `20.51.230.13` is the public IP address of your VM, type:

   ```bash
   ssh -i ~/.ssh/myKey.pem azureuser@20.51.230.13
   ```
Once the above prerequisites are met, you're ready to connect to your VM. Open y
4. Validate the returned fingerprint. If you have never connected to this VM before, you're asked to verify the host's fingerprint. It's tempting to accept the fingerprint presented, but that exposes you to a potential person-in-the-middle attack. You should always validate the host's fingerprint. You only need to do this the first time you connect from a client. To get the host fingerprint via the portal, use the Run Command feature to execute the command:
-
+ ```bash
   ssh-keygen -lf /etc/ssh/ssh_host_ecdsa_key.pub | awk '{print $2}'
   ```
Once the above prerequisites are met, you're ready to connect to your VM. Open y
   ssh azureuser@20.51.230.13
   ```
2. Validate the returned fingerprint.
-
+ If you have never connected to the desired VM from your current SSH client before, you're asked to verify the host's fingerprint. While the default option is to accept the fingerprint presented, you're exposed to a possible "person-in-the-middle" attack. You should always validate the host's fingerprint, which only needs to be done the first time your client connects. To obtain the host fingerprint via the portal, use the Run Command feature to execute the command:
-
+ ```bash
   ssh-keygen -lf /etc/ssh/ssh_host_ecdsa_key.pub | awk '{print $2}'
   ```
Once the above prerequisites are met, you're ready to connect to your VM. Open y
3. Success! You should now be connected to your VM. If you're unable to connect, see our [troubleshooting guide](/troubleshoot/azure/virtual-machines/troubleshoot-ssh-connection).

### Password authentication
-
+ > [!WARNING]
> This type of authentication method is not as secure as an SSH key pair and is not recommended.
Once the above prerequisites are met, you're ready to connect to your VM. Open y
For example, if `azureuser` is the username you created and `20.51.230.13` is the public IP address of your VM, type:

   ```powershell
- ssh -i .\Downloads\myKey.pem azureuser@20.51.230.13
+ ssh -i .\Downloads\myKey.pem azureuser@20.51.230.13
   ```
3. Validate the returned fingerprint. If you have never connected to the desired VM from your current SSH client before, you're asked to verify the host's fingerprint. While the default option is to accept the fingerprint presented, you're exposed to a possible "person-in-the-middle" attack. You should always validate the host's fingerprint, which only needs to be done the first time your client connects. To obtain the host fingerprint via the portal, use the Run Command feature to execute the command:
-
+ ```azurepowershell-interactive
- Invoke-AzVMRunCommand -ResourceGroupName 'myResourceGroup' -VMName 'myVM' -CommandId 'RunPowerShellScript' -ScriptString
+ Invoke-AzVMRunCommand -ResourceGroupName 'myResourceGroup' -VMName 'myVM' -CommandId 'RunPowerShellScript' -ScriptString
   'ssh-keygen -lf /etc/ssh/ssh_host_ecdsa_key.pub | awk '{print $2}''
   ```
4. Success! You should now be connected to your VM. If you're unable to connect, see [Troubleshoot SSH connections](/troubleshoot/azure/virtual-machines/troubleshoot-ssh-connection).

### Password authentication
-
+ > [!WARNING]
> This type of authentication method is not as secure and is not our recommended way to connect.
-1. Run the following command in your SSH client, where `20.51.230.13` is the public IP Address of your VM and `azureuser` is the username you created when you created the VM.
+1. Run the following command in your SSH client, where `20.51.230.13` is the public IP Address of your VM and `azureuser` is the username you created when you created the VM.
   ```bash
   ssh azureuser@20.51.230.13
   ```
Once the above prerequisites are met, you're ready to connect to your VM. Open y
If you forgot your password or username, see [Reset Access to an Azure VM](./extensions/vmaccess-linux.md).

2. Validate the returned fingerprint.
-
+ If you have never connected to the desired VM from your current SSH client before, you're asked to verify the host's fingerprint. While the default option is to accept the fingerprint presented, you're exposed to a possible "person-in-the-middle" attack. You should always validate the host's fingerprint, which only needs to be done the first time your client connects. To obtain the host fingerprint via the portal, use the Run Command feature to execute the command:
-
+ ```bash
   ssh-keygen -lf /etc/ssh/ssh_host_ecdsa_key.pub | awk '{print $2}'
   ```
virtual-machines Add Disk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/add-disk.md
Title: Add a data disk to Linux VM using the Azure CLI
+ Title: Add a data disk to Linux VM using the Azure CLI
description: Learn to add a persistent data disk to your Linux VM with the Azure CLI -+ Last updated 01/09/2023
# Add a disk to a Linux VM
-**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
+**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
This article shows you how to attach a persistent disk to your VM so that you can preserve your data - even if your VM is reprovisioned due to maintenance or resizing.
ssh azureuser@10.123.123.25
### Find the disk
-Once you connect to your VM, find the disk. In this example, we're using `lsblk` to list the disks.
+Once you connect to your VM, find the disk. In this example, we're using `lsblk` to list the disks.
```bash
lsblk -o NAME,HCTL,SIZE,MOUNTPOINT | grep -i "sd"
```
lrwxrwxrwx 1 root root 12 Mar 28 19:41 lun0 -> ../../../sdc
### Format the disk
-Format the disk with `parted`, if the disk size is two tebibytes (TiB) or larger then you must use GPT partitioning, if it is under 2TiB, then you can use either MBR or GPT partitioning.
+Format the disk with `parted`. If the disk size is two tebibytes (TiB) or larger, you must use GPT partitioning; if it's under 2 TiB, you can use either MBR or GPT partitioning.
> [!NOTE]
> It is recommended that you use the latest version of `parted` that is available for your distro.
-> If the disk size is 2 tebibytes (TiB) or larger, you must use GPT partitioning. If disk size is under 2 TiB, then you can use either MBR or GPT partitioning.
+> If the disk size is 2 tebibytes (TiB) or larger, you must use GPT partitioning. If disk size is under 2 TiB, then you can use either MBR or GPT partitioning.
The following example uses `parted` on `/dev/sdc`, which is where the first data disk will typically be on most VMs. Replace `sdc` with the correct option for your disk. We're also formatting it using the [XFS](https://xfs.wiki.kernel.org/) filesystem.
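A minimal sketch of those steps (the partition name is illustrative; replace `sdc` as noted above):

```bash
# Label the disk GPT, create one XFS partition spanning the disk, format it, and re-read the partition table.
sudo parted /dev/sdc --script mklabel gpt mkpart xfspart xfs 0% 100%
sudo mkfs.xfs /dev/sdc1
sudo partprobe /dev/sdc1
```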
virtual-machines Attach Disk Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/attach-disk-portal.md
Title: Attach a data disk to a Linux VM
+ Title: Attach a data disk to a Linux VM
description: Use the portal to attach new or existing data disk to a Linux VM. -+ Last updated 08/09/2023
-# Use the portal to attach a data disk to a Linux VM
+# Use the portal to attach a data disk to a Linux VM
-**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
+**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
-This article shows you how to attach both new and existing disks to a Linux virtual machine through the Azure portal. You can also [attach a data disk to a Windows VM in the Azure portal](../windows/attach-managed-disk-portal.md).
+This article shows you how to attach both new and existing disks to a Linux virtual machine through the Azure portal. You can also [attach a data disk to a Windows VM in the Azure portal](../windows/attach-managed-disk-portal.md).
Before you attach disks to your VM, review these tips:
Before you attach disks to your VM, review these tips:
1. On the **Disks** pane, under **Data disks**, select **Create and attach a new disk**.
1. Enter a name for your managed disk. Review the default settings, and update the **Storage type**, **Size (GiB)**, **Encryption** and **Host caching** as necessary.
-
+ :::image type="content" source="./medi.png" alt-text="Review disk settings.":::
Before you attach disks to your VM, review these tips:
## Attach an existing disk

1. On the **Disks** pane, under **Data disks**, select **Attach existing disks**.
-1. Select the drop-down menu for **Disk name** and select a disk from the list of available managed disks.
+1. Select the drop-down menu for **Disk name** and select a disk from the list of available managed disks.
1. Select **Save** to attach the existing managed disk and update the VM configuration:
-
+ ## Connect to the Linux VM to mount the new disk
-To partition, format, and mount your new disk so your Linux VM can use it, SSH into your VM. For more information, see [How to use SSH with Linux on Azure](mac-create-ssh-keys.md). The following example connects to a VM with the public IP address of *10.123.123.25* with the username *azureuser*:
+To partition, format, and mount your new disk so your Linux VM can use it, SSH into your VM. For more information, see [How to use SSH with Linux on Azure](mac-create-ssh-keys.md). The following example connects to a VM with the public IP address of *10.123.123.25* with the username *azureuser*:
```bash
ssh azureuser@10.123.123.25
```
ssh azureuser@10.123.123.25
## Find the disk
-Once connected to your VM, you need to find the disk. In this example, we're using `lsblk` to list the disks.
+Once connected to your VM, you need to find the disk. In this example, we're using `lsblk` to list the disks.
```bash
lsblk -o NAME,HCTL,SIZE,MOUNTPOINT | grep -i "sd"
```
From the output of `lsblk` you can see that the 4GB disk at LUN 0 is `sdc`, the
### Prepare a new empty disk

> [!IMPORTANT]
-> If you are using an existing disk that contains data, skip to [mounting the disk](#mount-the-disk).
+> If you are using an existing disk that contains data, skip to [mounting the disk](#mount-the-disk).
> The following instructions will delete data on the disk.

If you're attaching a new disk, you need to partition the disk. The `parted` utility can be used to partition and to format a data disk.

- Use the latest version of `parted` that is available for your distro.
-- If the disk size is 2 tebibytes (TiB) or larger, you must use GPT partitioning. If disk size is under 2 TiB, then you can use either MBR or GPT partitioning.
+- If the disk size is 2 tebibytes (TiB) or larger, you must use GPT partitioning. If disk size is under 2 TiB, then you can use either MBR or GPT partitioning.
The following example uses `parted` on `/dev/sdc`, which is where the first data disk will typically be on most VMs. Replace `sdc` with the correct option for your disk. We're also formatting it using the [XFS](https://xfs.wiki.kernel.org/) filesystem.
When you're done editing the file, save and close the editor.
> [!NOTE]
> Later removing a data disk without editing fstab could cause the VM to fail to boot. Most distributions provide either the *nofail* and/or *nobootwait* fstab options. These options allow a system to boot even if the disk fails to mount at boot time. Consult your distribution's documentation for more information on these parameters.
->
+>
> The *nofail* option ensures that the VM starts even if the filesystem is corrupt or the disk does not exist at boot time. Without this option, you may encounter behavior as described in [Cannot SSH to Linux VM due to FSTAB errors](/archive/blogs/linuxonazure/cannot-ssh-to-linux-vm-after-adding-data-disk-to-etcfstab-and-rebooting)
There are two ways to enable TRIM support in your Linux VM. As usual, consult yo
UUID=33333333-3b3b-3c3c-3d3d-3e3e3e3e3e3e /datadrive xfs defaults,discard 1 2
```

* In some cases, the `discard` option may have performance implications. Alternatively, you can run the `fstrim` command manually from the command line, or add it to your crontab to run regularly:
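As a sketch of the crontab approach (mount point, schedule, and the `fstrim` path are illustrative):

```bash
# Append a weekly trim of /datadrive to the system crontab (which includes a user field).
echo "0 3 * * 0 root /sbin/fstrim /datadrive" | sudo tee -a /etc/crontab
```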
-
+ # [Ubuntu](#tab/ubuntu)

   ```bash
virtual-machines Azure Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/azure-dns.md
description: Name Resolution scenarios for Linux virtual machines in Azure IaaS,
-+ Last updated 04/11/2023
# DNS Name Resolution options for Linux virtual machines in Azure
-**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
+**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
Azure provides DNS name resolution by default for all virtual machines that are in a single virtual network. You can implement your own DNS name resolution solution by configuring your own DNS services on your virtual machines that Azure hosts. The following scenarios should help you choose the one that works for your situation.
sudo systemctl restart NetworkManager
DNS is primarily a UDP protocol. Because the UDP protocol doesn't guarantee message delivery, the DNS protocol itself handles retry logic. Each DNS client (operating system) can exhibit different retry logic depending on the creator's preference:

* Windows operating systems retry after one second and then again after another two, four, and another four seconds.
-* The default Linux setup retries after five seconds. You should change this to retry five times at one-second intervals.
+* The default Linux setup retries after five seconds. You should change this to retry five times at one-second intervals.
To check the current settings on a Linux virtual machine, run `cat /etc/resolv.conf` and look at the `options` line, for example:
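For example, an `options` line that implements the recommended behavior (retry five times at one-second intervals) looks like this; this is a sketch of the relevant line only, not a complete `resolv.conf`:

```
options timeout:1 attempts:5
```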
The `/etc/resolv.conf` file is auto-generated and should not be edited. The spec
## Name resolution using your own DNS server
-Your name resolution needs may go beyond the features that Azure provides. For example, you might require DNS resolution between virtual networks. To cover this scenario, you can use your own DNS servers.
+Your name resolution needs may go beyond the features that Azure provides. For example, you might require DNS resolution between virtual networks. To cover this scenario, you can use your own DNS servers.
DNS servers within a virtual network can forward DNS queries to recursive resolvers of Azure to resolve hostnames that are in the same virtual network. For example, a DNS server that runs in Azure can respond to DNS queries for its own DNS zone files and forward all other queries to Azure. This functionality enables virtual machines to see both your entries in your zone files and hostnames that Azure provides (via the forwarder). Access to the recursive resolvers of Azure is provided via the virtual IP 168.63.129.16.
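As a sketch of that forwarding setup (assuming BIND runs on the DNS VM; the zone name and file path are placeholders), the server stays authoritative for its own zone and forwards everything else to the Azure recursive resolver:

```
options {
    // Send queries this server can't answer to Azure's recursive resolver.
    forwarders { 168.63.129.16; };
    forward only;
};

zone "contoso.internal" {
    type master;
    file "/etc/bind/zones/db.contoso.internal";  // placeholder zone file
};
```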
If forwarding queries to Azure doesn't suit your needs, you need to provide your
* Be secured against access from the Internet to mitigate threats posed by external agents.

> [!NOTE]
-> For best performance, when you use virtual machines in Azure DNS servers, disable IPv6 and assign an [Instance-Level Public IP](/previous-versions/azure/virtual-network/virtual-networks-instance-level-public-ip) to each DNS server virtual machine.
+> For best performance, when you use Azure virtual machines as DNS servers, disable IPv6 and assign an [Instance-Level Public IP](/previous-versions/azure/virtual-network/virtual-networks-instance-level-public-ip) to each DNS server virtual machine.
> >
virtual-machines Azure Hybrid Benefit Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/azure-hybrid-benefit-linux.md
Last updated 05/02/2023 -+ # Azure Hybrid Benefit for Red Hat Enterprise Linux (RHEL) and SUSE Linux Enterprise Server (SLES) virtual machines
Azure dedicated host instances and SQL hybrid benefits aren't eligible for Azure
You can invoke AHB at the time of virtual machine creation. Benefits of doing so are threefold:

- You can provision both PAYG and BYOS virtual machines by using the same image and process.
-- It enables future licensing mode changes.
+- It enables future licensing mode changes.
- The virtual machine is connected to Red Hat Update Infrastructure (RHUI) by default, to help keep it up to date and secure. You can change the update mechanism after deployment at any time.

#### [Azure portal](#tab/ahbNewPortal)
To enable Azure Hybrid Benefit when you create a virtual machine, use the follow
![Screenshot of the Azure portal that shows checkboxes selected for licensing.](./media/azure-hybrid-benefit/create-vm-ahb-checkbox.png) 1. Create a virtual machine by following the next set of instructions.
-1. On the **Configuration** pane, confirm that the option is enabled.
+1. On the **Configuration** pane, confirm that the option is enabled.
![Screenshot of the Azure Hybrid Benefit configuration pane after you create a virtual machine.](./media/azure-hybrid-benefit/create-configuration-blade.png) #### [Azure CLI](#tab/ahbNewCli)
-You can use the `az vm extension` and `az vm update` commands to update new virtual machines after they've been created.
+You can use the `az vm extension` and `az vm update` commands to update new virtual machines after they've been created.
1. Install the extension

   ```azurecli
To enable Azure Hybrid Benefit on an existing virtual machine:
#### [Azure CLI](#tab/ahbExistingCli)
-You can use the `az vm extension` and `az vm update` commands to update existing virtual machines.
+You can use the `az vm extension` and `az vm update` commands to update existing virtual machines.
1. Install the extension

   ```azurecli
It is required the Azure Hybrid Benefit extension be installed on the VM to swit
### [Azure CLI](#tab/licenseazcli)
-1. You can use the `az vm get-instance-view` command to check whether the extension is installed or not. Look for the `AHBForSLES` or `AHBForRHEL` extension, if the corresponding one is installed, the Azure Hybrid Benefit has been enabled,
+1. You can use the `az vm get-instance-view` command to check whether the extension is installed. Look for the `AHBForSLES` or `AHBForRHEL` extension; if the corresponding one is installed, Azure Hybrid Benefit has been enabled. Then
review the license type to see which licensing model your VM is using.

   ```azurecli
review the license type to review which licensing model your VM is using.
- For RHEL: `RHEL_BYOS`
- For SLES: `SLES_BYOS`
-If the license type of the VM has not been modified, the previous command returns an empty string and the VM continues to use the billing model of the image used to deploy it.
+If the license type of the VM has not been modified, the previous command returns an empty string and the VM continues to use the billing model of the image used to deploy it.
### [Azure PowerShell](#tab/licensepowershell)
-1. You can use the `az vm get-instance-view` command to check whether the extension is installed or not. Look for the `AHBForSLES` or `AHBForRHEL` extension, if the corresponding one is installed, the Azure Hybrid Benefit has been enabled,
+1. You can use the `az vm get-instance-view` command to check whether the extension is installed. Look for the `AHBForSLES` or `AHBForRHEL` extension; if the corresponding one is installed, Azure Hybrid Benefit has been enabled. Then
review the license type to see which licensing model your VM is using.

   ```azurepowershell
review the license type to review which licensing model your VM is using.
- For RHEL: `RHEL_BYOS`
- For SLES: `SLES_BYOS`
-If the license type of the VM has not been modified, the previous command returns an empty string and the VM continues to use the billing model of the image used to deploy it.
+If the license type of the VM has not been modified, the previous command returns an empty string and the VM continues to use the billing model of the image used to deploy it.
If you deployed an Azure Marketplace image with PAYG licensing model and desire
```
-
+ ## BYOS to PAYG conversions Converting to PAYG model is supported for Azure Marketplace images labeled BYOS, machines imported from on-premises or a third party cloud provider.
Converting to PAYG model is supported for Azure Marketplace images labeled BYOS,
```azurecli
# This will enable Azure Hybrid Benefit to fetch software updates for RHEL base/regular repositories
az vm update -g myResourceGroup -n myVmName --license-type RHEL_BASE
-
+ # This will enable Azure Hybrid Benefit to fetch software updates for RHEL EUS repositories
az vm update -g myResourceGroup -n myVmName --license-type RHEL_EUS
-
+ # This will enable Azure Hybrid Benefit to fetch software updates for RHEL SAP APPS repositories
az vm update -g myResourceGroup -n myVmName --license-type RHEL_SAPAPPS
-
+ # This will enable Azure Hybrid Benefit to fetch software updates for RHEL SAP HA repositories
az vm update -g myResourceGroup -n myVmName --license-type RHEL_SAPHA
-
+ # This will enable Azure Hybrid Benefit to fetch software updates for RHEL BASE SAP APPS repositories
az vm update -g myResourceGroup -n myVmName --license-type RHEL_BASESAPAPPS
-
+ # This will enable Azure Hybrid Benefit to fetch software updates for RHEL BASE SAP HA repositories
az vm update -g myResourceGroup -n myVmName --license-type RHEL_BASESAPHA
```
To start using Azure Hybrid Benefit for Red Hat:
1. Depending on the software updates that you want, change the license type to a relevant value. Here are the available license type values and the software updates associated with them:
- | License type | Software updates | Allowed virtual machines|
+ | License type | Software updates | Allowed virtual machines|
|---|---|---|
| RHEL_BASE | Installs Red Hat regular/base repositories on your virtual machine. | RHEL BYOS virtual machines, RHEL custom image virtual machines|
| RHEL_EUS | Installs Red Hat Extended Update Support (EUS) repositories on your virtual machine. | RHEL BYOS virtual machines, RHEL custom image virtual machines|
To start using Azure Hybrid Benefit for Red Hat:
> [!NOTE] > If the extension isn't running by itself, you can run it on demand.
-1. You should now be connected to Azure Red Hat Update. The relevant repositories are installed on your machine.
+1. You should now be connected to Azure Red Hat Update. The relevant repositories are installed on your machine.
1. If you want to switch back to the bring-your-own-subscription model, just change the license type to `None` and run the extension. This action removes all Red Hat Update Infrastructure (RHUI) repositories from your virtual machine and stops the billing. > [!Note]
-> In the unlikely event that the extension can't install repositories or there are any other issues, switch the license type back to empty and reach out to Microsoft support. This ensures that you don't get billed for software updates.
+> In the unlikely event that the extension can't install repositories or there are any other issues, switch the license type back to empty and reach out to Microsoft support. This ensures that you don't get billed for software updates.
#### [SUSE (SLES)](#tab/slespaygconversion)
To start using Azure Hybrid Benefit for SLES virtual machines:
1. Install the `AHBForSLES` extension on the SLES virtual machine. 1. Change the license type to the value that reflects the software updates you want. Here are the available license type values and the software updates associated with them:
- | License type | Software updates | Allowed virtual machines|
+ | License type | Software updates | Allowed virtual machines|
|---|---|---|
| SLES | Installs SLES Standard repositories on your virtual machine. | SLES BYOS virtual machines, SLES custom image virtual machines|
| SLES_SAP | Installs SLES SAP repositories on your virtual machine. | SLES SAP BYOS virtual machines, SLES custom image virtual machines|
Customers who use Azure Hybrid Benefit for pay-as-you-go RHEL virtual machines h
Customers can use RHUI as the main update source for Azure Hybrid Benefit for pay-as-you-go RHEL virtual machines without attaching subscriptions. Customers who choose the RHUI option are responsible for ensuring RHEL subscription compliance.
-Customers who choose either Red Hat Satellite Server or Red Hat Subscription Manager should remove the RHUI configuration and then attach a cloud-access-enabled RHEL subscription to Azure Hybrid Benefit for PAYG RHEL virtual machines.
+Customers who choose either Red Hat Satellite Server or Red Hat Subscription Manager should remove the RHUI configuration and then attach a cloud-access-enabled RHEL subscription to Azure Hybrid Benefit for PAYG RHEL virtual machines.
For more information about Red Hat subscription compliance, software updates, and sources for Azure Hybrid Benefit for pay-as-you-go RHEL virtual machines, see the [Red Hat article about using RHEL subscriptions with Azure Hybrid Benefit](https://access.redhat.com/articles/5419341).
virtual-machines Build Image With Packer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/build-image-with-packer.md
-+ Last updated 04/11/2023
# How to use Packer to create Linux virtual machine images in Azure
-**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
+**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
Each virtual machine (VM) in Azure is created from an image that defines the Linux distribution and OS version. Images can include pre-installed applications and configurations. The Azure Marketplace provides many first and third-party images for most common distributions and application environments, or you can create your own custom images tailored to your needs. This article details how to use the open source tool [Packer](https://www.packer.io/) to define and build custom images in Azure.
az vm open-port \
Now you can open a web browser and enter `http://publicIpAddress` in the address bar. Provide your own public IP address from the VM create process. The default NGINX page is displayed as in the following example:
-![NGINX default site](./media/build-image-with-packer/nginx.png)
+![NGINX default site](./media/build-image-with-packer/nginx.png)
## Next steps You can also use existing Packer provisioner scripts with [Azure Image Builder](image-builder.md).
virtual-machines Cli Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/cli-manage.md
Title: Common Azure CLI commands
description: Learn some of the common Azure CLI commands to get you started managing your VMs in Azure Resource Manager mode -+ Last updated 04/11/2023 # Common Azure CLI commands for managing Azure resources
-**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets
+**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets
The Azure CLI allows you to create and manage your Azure resources on macOS, Linux, and Windows. This article details some of the most common commands to create and manage virtual machines (VMs).
virtual-machines Cli Ps Findimage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/cli-ps-findimage.md
Title: Find and use marketplace purchase plan information using the CLI
+ Title: Find and use marketplace purchase plan information using the CLI
description: Learn how to use the Azure CLI to find image URNs and purchase plan parameters, like the publisher, offer, SKU, and version, for Marketplace VM images.
Last updated 02/09/2023
-+ # Find Azure Marketplace image information using the Azure CLI
-**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets
+**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets
This topic describes how to use the Azure CLI to find VM images in the Azure Marketplace. Use this information to specify a Marketplace image when you create a VM programmatically with the CLI, Resource Manager templates, or other tools.
-You can also browse available images and offers using the [Azure Marketplace](https://azuremarketplace.microsoft.com/) or [Azure PowerShell](../windows/cli-ps-findimage.md).
+You can also browse available images and offers using the [Azure Marketplace](https://azuremarketplace.microsoft.com/) or [Azure PowerShell](../windows/cli-ps-findimage.md).
## Terminology
A Marketplace image in Azure has the following attributes:
* **Publisher**: The organization that created the image. Examples: Canonical, RedHat, SUSE.
* **Offer**: The name of a group of related images created by a publisher. Examples: 0001-com-ubuntu-server-jammy, RHEL, sles-15-sp3.
* **SKU**: An instance of an offer, such as a major release of a distribution. Examples: 22_04-lts-gen2, 8-lvm-gen2, gen2.
-* **Version**: The version number of an image SKU.
+* **Version**: The version number of an image SKU.
These values can be passed individually or as an image *URN*, combining the values separated by the colon (:). For example: *Publisher*:*Offer*:*Sku*:*Version*. You can replace the version number in the URN with `latest` to use the latest version of the image.
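For instance, putting the example values above together into a URN (a sketch; any current Canonical Ubuntu 22.04 LTS Gen2 image follows this pattern), you can inspect the image with:

```azurecli-interactive
az vm image show --location westus --urn Canonical:0001-com-ubuntu-server-jammy:22_04-lts-gen2:latest
```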
You can filter the list of images by `--publisher` or another parameter to limit
For example, the following command displays all Debian offers: ```azurecli-interactive
-az vm image list --offer Debian --all --output table
+az vm image list --offer Debian --all --output table
``` You can limit your results to a single architecture by adding the `--architecture` parameter. For example, to display all Arm64 images available from Canonical:
az vm image list --architecture Arm64 --publisher Canonical --all --output table
``` ## Look at all available images
-
+ Another way to find an image in a location is to run the [az vm image list-publishers](/cli/azure/vm/image), [az vm image list-offers](/cli/azure/vm/image), and [az vm image list-skus](/cli/azure/vm/image) commands in sequence. With these commands, you determine these values: 1. List the image publishers for a location. In this example, we're looking at the *West US* region.
-
+ ```azurecli-interactive az vm image list-publishers --location westus --output table ``` 1. For a given publisher, list their offers. In this example, we add *RedHat* as the publisher.
-
+ ```azurecli-interactive az vm image list-offers --location westus --publisher RedHat --output table ```
If you deploy a VM with a Resource Manager template, you set the image parameter
## Check the purchase plan information
-Some VM images in the Azure Marketplace have extra license and purchase terms that you must accept before you can deploy them programmatically.
+Some VM images in the Azure Marketplace have extra license and purchase terms that you must accept before you can deploy them programmatically.
To deploy a VM from such an image, you'll need to accept the image's terms the first time you use it, once per subscription. You'll also need to specify *purchase plan* parameters to deploy a VM from that image
Output:
} ```
-Running a similar command for the RabbitMQ Certified by Bitnami image shows the following `plan` properties: `name`, `product`, and `publisher`. (Some images also have a `promotion code` property.)
+Running a similar command for the RabbitMQ Certified by Bitnami image shows the following `plan` properties: `name`, `product`, and `publisher`. (Some images also have a `promotion code` property.)
```azurecli-interactive az vm image show --location westus --urn bitnami:rabbitmq:rabbitmq:latest
To view and accept the license terms, use the [az vm image terms](/cli/azure/vm/
```azurecli-interactive az vm image terms show --urn bitnami:rabbitmq:rabbitmq:latest
-```
+```
The output includes a `licenseTextLink` to the license terms, and indicates that the value of `accepted` is `true`:
To accept the terms, type:
```azurecli-interactive az vm image terms accept --urn bitnami:rabbitmq:rabbitmq:latest
-```
+```
## Deploy a new VM using the image parameters
-With information about the image, you can deploy it using the `az vm create` command.
+With information about the image, you can deploy it using the `az vm create` command.
To deploy an image that doesn't have plan information, like the latest Ubuntu Server 18.04 image from Canonical, pass the URN for `--image`:
az vm create \
--name myVM \ --admin-username azureuser \ --generate-ssh-keys \
- --image Canonical:UbuntuServer:18.04-LTS:latest
+ --image Canonical:UbuntuServer:18.04-LTS:latest
```
If you get a message about accepting the terms of the image, review section [Acc
## Using an existing VHD with purchase plan information
-If you have an existing VHD from a VM that was created using a paid Azure Marketplace image, you might need to give the purchase plan information when creating a new VM from that VHD.
+If you have an existing VHD from a VM that was created using a paid Azure Marketplace image, you might need to give the purchase plan information when creating a new VM from that VHD.
If you still have the original VM, or another VM created using the same marketplace image, you can get the plan name, publisher, and product information from it using [az vm get-instance-view](/cli/azure/vm#az-vm-get-instance-view). This example gets a VM named *myVM* in the *myResourceGroup* resource group and then displays the purchase plan information.
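A sketch of that lookup (the `--query` filter is illustrative; `plan` is the property that carries the purchase plan name, publisher, and product):

```azurecli-interactive
az vm get-instance-view --resource-group myResourceGroup --name myVM --query plan
```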
az vm create \
--attach-os-disk myVHD \ --plan-name planName \ --plan-publisher planPublisher \
- --plan-product planProduct
+ --plan-product planProduct
```
virtual-machines Cloud Init Deep Dive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/cloud-init-deep-dive.md
Title: Understanding cloud-init
+ Title: Understanding cloud-init
description: Deep dive for understanding provisioning an Azure VM using cloud-init.-+ Last updated 09/06/2023 -+ # Diving deeper into cloud-init
virtual-machines Cloud Init Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/cloud-init-troubleshooting.md
Title: Troubleshoot using cloud-init
+ Title: Troubleshoot using cloud-init
description: Troubleshoot provisioning an Azure VM using cloud-init.
Last updated 03/29/2023
-+ # Troubleshooting VM provisioning with cloud-init
Once you have found an error or warning, read backwards in the cloud-init log to
If you have access to the [Serial Console](/troubleshoot/azure/virtual-machines/serial-console-grub-single-user-mode), you can try to rerun the command that cloud-init was trying to run.
-The logging for `/var/log/cloud-init.log` can also be reconfigured within /etc/cloud/cloud.cfg.d/05_logging.cfg. For more details of cloud-init logging, refer to the [cloud-init documentation](https://cloudinit.readthedocs.io/en/latest/development/logging.html).
+The logging for `/var/log/cloud-init.log` can also be reconfigured within /etc/cloud/cloud.cfg.d/05_logging.cfg. For more details of cloud-init logging, refer to the [cloud-init documentation](https://cloudinit.readthedocs.io/en/latest/development/logging.html).
### /var/log/cloud-init-output.log
virtual-machines Cloudinit Add User https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/cloudinit-add-user.md
Title: Use cloud-init to add a user to a Linux VM on Azure
+ Title: Use cloud-init to add a user to a Linux VM on Azure
description: How to use cloud-init to add a user to a Linux VM during creation with the Azure CLI
Last updated 03/29/2022 -+ # Use cloud-init to add a user to a Linux VM in Azure
-**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
+**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
This article shows you how to use [cloud-init](https://cloudinit.readthedocs.io) to add a user on a virtual machine (VM) or virtual machine scale sets (VMSS) at provisioning time in Azure. This cloud-init script runs on first boot once the resources have been provisioned by Azure. For more information about how cloud-init works natively in Azure and the supported Linux distros, see [cloud-init overview](using-cloud-init.md).
users:
ssh-authorized-keys: - ssh-rsa AAAAB3<snip> ```
-> [!NOTE]
+> [!NOTE]
> The #cloud-config file includes the `- default` parameter. This appends the user to the existing admin user created during provisioning. If you create a user without the `- default` parameter, the auto-generated admin user created by the Azure platform would be overwritten.

Before deploying this image, you need to create a resource group with the [az group create](/cli/azure/group) command. An Azure resource group is a logical container into which Azure resources are deployed and managed. The following example creates a resource group named *myResourceGroup* in the *eastus* location.
az vm create \
--name vmName \ --image imageCIURN \ --custom-data cloud_init_add_user.txt \
- --generate-ssh-keys
+ --generate-ssh-keys
``` > [!NOTE] > Replace **myResourceGroup**, **vmName**, and **imageCIURN** values accordingly. Make sure an image with Cloud-init is chosen.
myadminuser:x:1000:
## Next steps

For additional cloud-init examples of configuration changes, see the following:
-
+ - [Add an additional Linux user to a VM](cloudinit-add-user.md) - [Run a package manager to update existing packages on first boot](cloudinit-update-vm.md)-- [Change VM local hostname](cloudinit-update-vm-hostname.md)
+- [Change VM local hostname](cloudinit-update-vm-hostname.md)
- [Install an application package, update configuration files and inject keys](tutorial-automate-vm-deployment.md)
virtual-machines Cloudinit Bash Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/cloudinit-bash-script.md
Title: Use cloud-init to run a bash script in a Linux VM on Azure
+ Title: Use cloud-init to run a bash script in a Linux VM on Azure
description: How to use cloud-init to run a bash script in a Linux VM during creation with the Azure CLI
Last updated 03/29/2023 -+ # Use cloud-init to run a bash script in a Linux VM in Azure
-**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
+**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
This article shows you how to use [cloud-init](https://cloudinit.readthedocs.io) to run an existing bash script on a Linux virtual machine (VM) or virtual machine scale sets (VMSS) at provisioning time in Azure. These cloud-init scripts run on first boot once the resources have been provisioned by Azure. For more information about how cloud-init works natively in Azure and the supported Linux distros, see [cloud-init overview](using-cloud-init.md)
With cloud-init you do not need to convert your existing scripts into a cloud-co
If you have been using the Linux Custom Script Azure Extension to run your scripts, you can migrate them to use cloud-init. However, Azure Extensions have integrated reporting to alert you to script failures, while a cloud-init image deployment will NOT fail if the script fails.
-To see this functionality in action, create a simple bash script for testing. Like the cloud-init `#cloud-config` file, this script must be local to where you will be running the AzureCLI commands to provision your virtual machine. For this example, create the file in the Cloud Shell not on your local machine. You can use any editor you wish. Make sure that the whole cloud-init file is copied correctly, especially the first line.
+To see this functionality in action, create a simple bash script for testing. Like the cloud-init `#cloud-config` file, this script must be local to where you will be running the AzureCLI commands to provision your virtual machine. For this example, create the file in the Cloud Shell not on your local machine. You can use any editor you wish. Make sure that the whole cloud-init file is copied correctly, especially the first line.
```bash
#!/bin/sh
az vm create \
--name vmName \ --image imageCIURN \ --custom-data simple_bash.sh \
- --generate-ssh-keys
+ --generate-ssh-keys
``` > [!NOTE]
SSH to the public IP address of your VM shown in the output from the preceding c
ssh <user>@<publicIpAddress>
```
-Verify that `/tmp/myScript.txt` file exists and has the appropriate text inside of it.
+Verify that `/tmp/myScript.txt` file exists and has the appropriate text inside of it.
```bash
sudo cat /tmp/myScript
Running config-scripts-user using lock Running command ['/var/lib/cloud/instance
## Next steps

For additional cloud-init examples of configuration changes, see the following:
-
+ - [Add an additional Linux user to a VM](cloudinit-add-user.md) - [Run a package manager to update existing packages on first boot](cloudinit-update-vm.md)-- [Change VM local hostname](cloudinit-update-vm-hostname.md)
+- [Change VM local hostname](cloudinit-update-vm-hostname.md)
- [Install an application package, update configuration files and inject keys](tutorial-automate-vm-deployment.md)
virtual-machines Cloudinit Configure Swapfile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/cloudinit-configure-swapfile.md
Title: Use cloud-init to configure a swap partition on a Linux VM
+ Title: Use cloud-init to configure a swap partition on a Linux VM
description: How to use cloud-init to configure a swap partition in a Linux VM during creation with the Azure CLI
Last updated 03/29/2023 -+ # Use cloud-init to configure a swap partition on a Linux VM
-**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
+**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
This article shows you how to use [cloud-init](https://cloudinit.readthedocs.io) to configure the swap partition on various Linux distributions. The swap partition was traditionally configured by the Linux Agent (WALA) based on which distributions required one. This document outlines the process for building the swap partition on demand during provisioning time using cloud-init. For more information about how cloud-init works natively in Azure and the supported Linux distros, see [cloud-init overview](using-cloud-init.md)
By default on Azure, Ubuntu gallery images do not create swap partitions. To ena
## Create swap partition for Red Hat and CentOS based images
-Create a file in your current shell named *cloud_init_swappart.txt* and paste the following configuration. For this example, create the file in the Cloud Shell not on your local machine. You can use any editor you wish. Make sure that the whole cloud-init file is copied correctly, especially the first line.
+Create a file in your current shell named *cloud_init_swappart.txt* and paste the following configuration. For this example, create the file in the Cloud Shell not on your local machine. You can use any editor you wish. Make sure that the whole cloud-init file is copied correctly, especially the first line.
```yaml #cloud-config
az vm create \
--name vmName \ --image imageCIURN \ --custom-data cloud_init_swappart.txt \
- --generate-ssh-keys
+ --generate-ssh-keys
``` > [!NOTE]
For more cloud-init examples of configuration changes, see the following:
- [Add an additional Linux user to a VM](cloudinit-add-user.md)
- [Run a package manager to update existing packages on first boot](cloudinit-update-vm.md)
-- [Change VM local hostname](cloudinit-update-vm-hostname.md)
+- [Change VM local hostname](cloudinit-update-vm-hostname.md)
- [Install an application package, update configuration files and inject keys](tutorial-automate-vm-deployment.md)
virtual-machines Cloudinit Update Vm Hostname https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/cloudinit-update-vm-hostname.md
Last updated 03/29/2023 -+ # Use cloud-init to set hostname for a Linux VM in Azure
This article shows you how to use [cloud-init](https://cloudinit.readthedocs.io)
## Set the hostname with cloud-init
-By default, the hostname is the same as the VM name when you create a new virtual machine in Azure. To run a cloud-init script to change this default hostname when you create a VM in Azure with [az vm create](/cli/azure/vm), specify the cloud-init file with the `--custom-data` switch.
+By default, the hostname is the same as the VM name when you create a new virtual machine in Azure. To run a cloud-init script to change this default hostname when you create a VM in Azure with [az vm create](/cli/azure/vm), specify the cloud-init file with the `--custom-data` switch.
-To see upgrade process in action, create a file in your current shell named *cloud_init_hostname.txt* and paste the following configuration. For this example, create the file in the Cloud Shell not on your local machine. You can use any editor you wish. Make sure that the whole cloud-init file is copied correctly, especially the first line.
+To see upgrade process in action, create a file in your current shell named *cloud_init_hostname.txt* and paste the following configuration. For this example, create the file in the Cloud Shell not on your local machine. You can use any editor you wish. Make sure that the whole cloud-init file is copied correctly, especially the first line.
```yaml #cloud-config
az vm create \
--name vmName \ --image imageCIURN \ --custom-data cloud_init_hostname.txt \
- --generate-ssh-keys
+ --generate-ssh-keys
``` > [!NOTE]
For additional cloud-init examples of configuration changes, see the following:
- [Add an additional Linux user to a VM](cloudinit-add-user.md)
- [Run a package manager to update existing packages on first boot](cloudinit-update-vm.md)
-- [Change VM local hostname](cloudinit-update-vm-hostname.md)
+- [Change VM local hostname](cloudinit-update-vm-hostname.md)
- [Install an application package, update configuration files and inject keys](tutorial-automate-vm-deployment.md)
virtual-machines Cloudinit Update Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/cloudinit-update-vm.md
Last updated 03/29/2023 -+ # Use cloud-init to update and install packages in a Linux VM in Azure
-**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
+**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
This article shows you how to use [cloud-init](https://cloudinit.readthedocs.io) to update packages on a Linux virtual machine (VM) or virtual machine scale sets at provisioning time in Azure. These cloud-init scripts run on first boot once the resources have been provisioned by Azure. For more information about how cloud-init works natively in Azure and the supported Linux distros, see [cloud-init overview](using-cloud-init.md)
This article shows you how to use [cloud-init](https://cloudinit.readthedocs.io)
For security purposes, you may want to configure a VM to apply the latest updates on first boot. As cloud-init works across different Linux distros, there is no need to specify `apt`, `zypper` or `yum` for the package manager. Instead, you define `package_upgrade` and let the cloud-init process determine the appropriate mechanism for the distro in use.
-For this example, we will be using the Azure Cloud Shell. To see the upgrade process in action, create a file named *cloud_init_upgrade.txt* and paste the following configuration. You can use any editor you wish. Make sure that the whole cloud-init file is copied correctly, especially the first line.
+For this example, we will be using the Azure Cloud Shell. To see the upgrade process in action, create a file named *cloud_init_upgrade.txt* and paste the following configuration. You can use any editor you wish. Make sure that the whole cloud-init file is copied correctly, especially the first line.
-Copy the text below and paste it into the `cloud_init_upgrade.txt` file. Make sure that the whole cloud-init file is copied correctly, especially the first line.
+Copy the text below and paste it into the `cloud_init_upgrade.txt` file. Make sure that the whole cloud-init file is copied correctly, especially the first line.
```yaml #cloud-config
az vm create \
--image imageCIURN \ --custom-data cloud_init_upgrade.txt \ --admin-username azureuser \
- --generate-ssh-keys
+ --generate-ssh-keys
``` > [!NOTE]
Run the package management tool and check for updates:
sudo yum check-update ```
-As cloud-init checked for and installed updates on boot, there should be no additional updates to apply.
+As cloud-init checked for and installed updates on boot, there should be no additional updates to apply.
- You can see the update process, number of altered packages as well as the installation of `httpd` by running the following command and review the output.
For additional cloud-init examples of configuration changes, see the following:
- [Add an additional Linux user to a VM](cloudinit-add-user.md)
- [Run a package manager to update existing packages on first boot](cloudinit-update-vm.md)
-- [Change VM local hostname](cloudinit-update-vm-hostname.md)
+- [Change VM local hostname](cloudinit-update-vm-hostname.md)
- [Install an application package, update configuration files and inject keys](tutorial-automate-vm-deployment.md)
virtual-machines Create Cli Complete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/create-cli-complete.md
Title: Create a Linux environment with the Azure CLI
description: Create storage, a Linux VM, a virtual network and subnet, a load balancer, an NIC, a public IP, and a network security group, all from the ground up by using the Azure CLI. -+ Last updated 3/29/2023
# Create a complete Linux virtual machine with the Azure CLI
-**Applies to:** :heavy_check_mark: Linux VMs
+**Applies to:** :heavy_check_mark: Linux VMs
To quickly create a virtual machine (VM) in Azure, you can use a single Azure CLI command that uses default values to create any required supporting resources. Resources such as a virtual network, public IP address, and network security group rules are automatically created. For more control of your environment in production use, you may create these resources ahead of time and then add your VMs to them. This article guides you through how to create a VM and each of the supporting resources one by one.
az group create --name myResourceGroup --location eastus
By default, the output of Azure CLI commands is in JSON (JavaScript Object Notation). To change the default output to a list or table, for example, use [az config set core.output=table](/cli/azure/reference-index). You can also add `--output` to any command for a one time change in output format. The following example shows the JSON output from the `az group create` command:
-```json
+```json
{ "id": "/subscriptions/guid/resourceGroups/myResourceGroup", "location": "eastus",
Output:
## Create an availability set
-Availability sets help spread your VMs across fault domains and update domains. Even though you only create one VM right now, it's best practice to use availability sets to make it easier to expand in the future.
+Availability sets help spread your VMs across fault domains and update domains. Even though you only create one VM right now, it's best practice to use availability sets to make it easier to expand in the future.
Fault domains define a grouping of virtual machines that share a common power source and network switch. By default, the virtual machines that are configured within your availability set are separated across up to three fault domains. A hardware issue in one of these fault domains does not affect every VM that is running your app.
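As a sketch of creating such an availability set ahead of the VMs, the command below uses placeholder names; the fault and update domain counts shown are common defaults rather than values prescribed by this article:

```azurecli-interactive
# Create an availability set that spreads VMs across fault and update domains.
az vm availability-set create \
  --resource-group myResourceGroup \
  --name myAvailabilitySet \
  --platform-fault-domain-count 3 \
  --platform-update-domain-count 5
```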
virtual-machines Create Upload Centos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/create-upload-centos.md
Title: Create and upload a CentOS-based Linux VHD
description: Learn to create and upload an Azure virtual hard disk (VHD) that contains a CentOS-based Linux operating system. -+ Last updated 12/14/2022
# Prepare a CentOS-based virtual machine for Azure
-**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
+**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
Learn to create and upload an Azure virtual hard disk (VHD) that contains a CentOS-based Linux operating system.
Preparing a CentOS 7 virtual machine for Azure is similar to CentOS 6, however t
baseurl=http://olcentgbl.trafficmanager.net/openlogic/$releasever/openlogic/$basearch/
enabled=1
gpgcheck=0
-
+
[base]
name=CentOS-$releasever - Base
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=os&infra=$infra
baseurl=http://olcentgbl.trafficmanager.net/centos/$releasever/os/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
-
+
#released updates
[updates]
name=CentOS-$releasever - Updates
Preparing a CentOS 7 virtual machine for Azure is similar to CentOS 6, however t
baseurl=http://olcentgbl.trafficmanager.net/centos/$releasever/updates/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
-
+
#additional packages that may be useful
[extras]
name=CentOS-$releasever - Extras
Preparing a CentOS 7 virtual machine for Azure is similar to CentOS 6, however t
baseurl=http://olcentgbl.trafficmanager.net/centos/$releasever/extras/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
-
+
#additional packages that extend functionality of existing packages
[centosplus]
name=CentOS-$releasever - Plus
Preparing a CentOS 7 virtual machine for Azure is similar to CentOS 6, however t
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
```
-
+
> [!Note]
> The rest of this guide will assume you're using at least the `[openlogic]` repo, which will be used to install the Azure Linux agent below.
Preparing a CentOS 7 virtual machine for Azure is similar to CentOS 6, however t
sudo sed -i 's/ResourceDisk.Format=y/ResourceDisk.Format=n/g' /etc/waagent.conf
sudo sed -i 's/ResourceDisk.EnableSwap=y/ResourceDisk.EnableSwap=n/g' /etc/waagent.conf
```
- ```bash
+ ```bash
sudo echo "Adding mounts and disk_setup to init stage" sudo sed -i '/ - mounts/d' /etc/cloud/cloud.cfg sudo sed -i '/ - disk_setup/d' /etc/cloud/cloud.cfg
Preparing a CentOS 7 virtual machine for Azure is similar to CentOS 6, however t
13. Swap configuration
-
+ Don't create swap space on the operating system disk. Previously, the Azure Linux Agent was used to automatically configure swap space by using the local resource disk that is attached to the virtual machine after the virtual machine is provisioned on Azure. However, this is now handled by cloud-init, so you **must not** use the Linux Agent to format the resource disk or create the swap file. Modify the following parameters in `/etc/waagent.conf` appropriately:
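As a quick check, and assuming the `sed` edits shown earlier in this article have been applied, the relevant settings can be confirmed like this (a sketch, not part of the original steps):

```bash
# Expected values after the earlier edits:
#   ResourceDisk.Format=n
#   ResourceDisk.EnableSwap=n
grep -E '^ResourceDisk\.(Format|EnableSwap)=' /etc/waagent.conf
```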
Preparing a CentOS 7 virtual machine for Azure is similar to CentOS 6, however t
14. Run the following commands to deprovision the virtual machine and prepare it for provisioning on Azure:

> [!NOTE]
- > If you are migrating a specific virtual machine and don't wish to create a generalized image, skip the deprovision step.
+ > If you are migrating a specific virtual machine and don't wish to create a generalized image, skip the deprovision step.
```bash
sudo rm -f /var/log/waagent.log
```
virtual-machines Create Upload Generic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/create-upload-generic.md
Title: Prepare Linux for imaging
description: Learn how to prepare a Linux system to be used for an image in Azure. -+ Last updated 12/14/2022
In this case, resize the VM by using either the Hyper-V Manager console or the [
gawk 'match($0, /"virtual-size": ([0-9]+),/, val) {print val[1]}')
rounded_size=$(((($size+$MB-1)/$MB)*$MB))
-
+
echo "Rounded Size = $rounded_size"
```
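Putting the fragment above into context, a complete sketch of the rounding step could look like the following; the `qemu-img` usage and the `disk.img` file name are assumptions for illustration:

```bash
# Round the image's virtual size up to the nearest 1 MiB, since Azure requires
# VHDs whose virtual size is an exact multiple of 1 MiB.
rawdisk="disk.img"   # assumed source image name
MB=$((1024*1024))
size=$(qemu-img info -f raw --output json "$rawdisk" | \
    gawk 'match($0, /"virtual-size": ([0-9]+),/, val) {print val[1]}')
rounded_size=$(((($size+$MB-1)/$MB)*$MB))
echo "Rounded Size = $rounded_size"
qemu-img resize -f raw "$rawdisk" "$rounded_size"
```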
Here are some considerations for using the Azure Linux Agent:
```bash
cd /boot
- sudo cp initramfs-<kernel-version>.img <kernel-version>.img.bak
+ sudo cp initramfs-<kernel-version>.img <kernel-version>.img.bak
sudo dracut -f -v initramfs-<kernel-version>.img <kernel-version> --add-drivers "hv_vmbus hv_netvsc hv_storvsc"
- sudo grub-mkconfig -o /boot/grub/grub.cfg
- sudo grub2-mkconfig -o /boot/grub2/grub.cfg
+ sudo grub-mkconfig -o /boot/grub/grub.cfg
+ sudo grub2-mkconfig -o /boot/grub2/grub.cfg
```

Add the Hyper-V module for initrd by using `mkinitramfs`:
Here are some considerations for using the Azure Linux Agent:
cd /boot
sudo cp initrd.img-<kernel-version> initrd.img-<kernel-version>.bak
sudo mkinitramfs -o initrd.img-<kernel-version> <kernel-version> --with=hv_vmbus,hv_netvsc,hv_storvsc
- sudo update-grub
+ sudo update-grub
```

4. Ensure that the SSH server is installed and configured to start at boot time. This configuration is usually the default.
Here are some considerations for using the Azure Linux Agent:
sudo waagent -force -deprovision+user
sudo rm -f ~/.bash_history
sudo export HISTSIZE=0
- ```
+ ```
On VirtualBox, you might see an error message after you run `waagent -force -deprovision` that says `[Errno 5] Input/output error`. This error message is not critical, and you can ignore it.
virtual-machines Create Upload Openbsd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/create-upload-openbsd.md
Title: Create and upload an OpenBSD image
+ Title: Create and upload an OpenBSD image
description: Learn how to create and upload a virtual hard disk (VHD) that contains the OpenBSD operating system to create an Azure virtual machine through Azure CLI -+ Last updated 05/24/2017
This article shows you how to create and upload a virtual hard disk (VHD) that c
## Prerequisites

This article assumes that you have the following items:
-* **An Azure subscription** - If you don't have an account, you can create one in just a couple of minutes. If you have an MSDN subscription, see [Monthly Azure credit for Visual Studio subscribers](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details/). Otherwise, learn how to [create a free trial account](https://azure.microsoft.com/pricing/free-trial/).
+* **An Azure subscription** - If you don't have an account, you can create one in just a couple of minutes. If you have an MSDN subscription, see [Monthly Azure credit for Visual Studio subscribers](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details/). Otherwise, learn how to [create a free trial account](https://azure.microsoft.com/pricing/free-trial/).
* **Azure CLI** - Make sure you have the latest [Azure CLI](/cli/azure/install-azure-cli) installed and logged in to your Azure account with [az login](/cli/azure/reference-index).
* **OpenBSD operating system installed in a .vhd file** - A supported OpenBSD operating system ([6.6 version AMD64](https://ftp.openbsd.org/pub/OpenBSD/7.2/amd64/)) must be installed to a virtual hard disk. Multiple tools exist to create .vhd files. For example, you can use a virtualization solution such as Hyper-V to create the .vhd file and install the operating system. For instructions about how to install and use Hyper-V, see [Install Hyper-V and create a virtual machine](/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/hh846766(v=ws.11)).
On the VM where you installed the OpenBSD operating system 6.1, which added Hype
1. If DHCP is not enabled during installation, enable the service as follows:
- ```sh
+ ```sh
doas echo dhcp > /etc/hostname.hvn0
```
On the VM where you installed the OpenBSD operating system 6.1, which added Hype
```sh
doas echo "https://ftp.openbsd.org/pub/OpenBSD" > /etc/installurl
```
-
+
4. By default, the `root` user is disabled on virtual machines in Azure. Users can run commands with elevated privileges by using the `doas` command on OpenBSD VM. Doas is enabled by default.

5. Install and configure prerequisites for the Azure Agent as follows:
On the VM where you installed the OpenBSD operating system 6.1, which added Hype
6. The latest release of the Azure agent can always be found on [GitHub](https://github.com/Azure/WALinuxAgent/releases). Install the agent as follows:

```sh
- doas git clone https://github.com/Azure/WALinuxAgent
+ doas git clone https://github.com/Azure/WALinuxAgent
doas cd WALinuxAgent
doas python setup.py install
doas waagent -register-service
az vm list-ip-addresses --resource-group myResourceGroup --name myOpenBSD61
```

Now you can SSH to your OpenBSD VM as normal:
-
+
```bash
ssh azureuser@<ip address>
```
virtual-machines Create Upload Ubuntu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/create-upload-ubuntu.md
Title: Create and upload an Ubuntu Linux VHD in Azure
description: Learn to create and upload an Azure virtual hard disk (VHD) that contains an Ubuntu Linux operating system. -+ Last updated 07/28/2021
# Prepare an Ubuntu virtual machine for Azure
-**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
+**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
Ubuntu now publishes official Azure VHDs for download at [https://cloud-images.ubuntu.com/](https://cloud-images.ubuntu.com/). If you need to build your own specialized Ubuntu image for Azure, rather than using the manual procedure below, it's recommended that you start with these known working VHDs and customize as needed. The latest image releases can always be found at the following locations:
sudo rm -f /etc/netplan/*.yaml
9. Configure cloud-init to provision the system using the Azure datasource:

```bash
-cat <<EOF | sudo tee /etc/cloud/cloud.cfg.d/90_dpkg.cfg
+cat <<EOF | sudo tee /etc/cloud/cloud.cfg.d/90_dpkg.cfg
datasource_list: [ Azure ]
EOF
sudo cp -r ubuntu/ boot
```bash
cd boot
```
-
+
19. Rename the shimx64.efi file:

```bash
sudo mv shimx64.efi bootx64.efi
sudo mv shimx64.efi bootx64.efi
20. Rename the grub.cfg file to bootx64.cfg:

```bash
-sudo mv grub.cfg bootx64.cfg
+sudo mv grub.cfg bootx64.cfg
```

## Next steps
virtual-machines Detach Disk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/detach-disk.md
Last updated 08/09/2023 -+ # How to detach a data disk from a Linux virtual machine
-**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
+**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
When you no longer need a data disk that's attached to a virtual machine, you can easily detach it. This removes the disk from the virtual machine, but doesn't remove it from storage. In this article, we are working with an Ubuntu 16.04 LTS distribution. If you are using a different distribution, the instructions for unmounting the disk might be different.

> [!WARNING]
> If you detach a disk, it is not automatically deleted. If you have subscribed to Premium storage, you will continue to incur storage charges for the disk. For more information, see [Pricing and Billing when using Premium Storage](https://azure.microsoft.com/pricing/details/storage/page-blobs/).
-If you want to use the existing data on the disk again, you can reattach it to the same virtual machine, or another one.
+If you want to use the existing data on the disk again, you can reattach it to the same virtual machine, or another one.
## Connect to the VM to unmount the disk

Before you can detach the disk using either the CLI or the portal, you need to unmount the disk and remove references to it from your fstab file.
-Connect to the VM. In this example, the public IP address of the VM is *10.0.1.4* with the username *azureuser*:
+Connect to the VM. In this example, the public IP address of the VM is *10.0.1.4* with the username *azureuser*:
```bash
ssh azureuser@10.0.1.4
The output looks similar to the following example:
```
-Edit the */etc/fstab* file to remove references to the disk.
+Edit the */etc/fstab* file to remove references to the disk.
> [!NOTE]
> Improperly editing the **/etc/fstab** file could result in an unbootable system. If unsure, refer to the distribution's documentation for information on how to properly edit this file. It is also recommended that a backup of the /etc/fstab file is created before editing.
sudo umount /dev/sdc1 /datadrive
```
-## Detach a data disk using Azure CLI
+## Detach a data disk using Azure CLI
This example detaches the *myDataDisk* disk from the VM named *myVM* in *myResourceGroup*.
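A sketch of that detach operation with the standard `az vm disk detach` parameters looks like this:

```azurecli-interactive
az vm disk detach \
    --resource-group myResourceGroup \
    --vm-name myVM \
    --name myDataDisk
```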
virtual-machines Disable Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/disable-provisioning.md
-+ Last updated 04/11/2023
# Disable or remove the Linux Agent from VMs and images
-**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
+**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
Before removing the Linux Agent, you must understand what the VM will not be able to do after the Linux Agent is removed.
sudo zypper --non-interactive remove python-azure-agent
> [!IMPORTANT] >
-> You can remove all associated artifacts of the Linux Agent, but this will mean you cannot reinstall it at a later date. Therefore, it is strongly recommended you consider disabling the Linux Agent first, removing the Linux Agent using the above only.
+> You can remove all associated artifacts of the Linux Agent, but this means you cannot reinstall it at a later date. Therefore, it is strongly recommended that you consider disabling the Linux Agent first, and remove the Linux Agent by using only the commands above.
If you know you will never reinstall the Linux Agent, then you can run the following:
sudo rm -f /var/log/waagent.log
If you have an image that already contains cloud-init and you want to remove the Linux Agent but still provision by using cloud-init, run the steps in Step 2 (and optionally Step 3) as root to remove the Azure Linux Agent. Then run the following command, which removes the cloud-init configuration and cached data and prepares the VM for creating a custom image.

```bash
-sudo cloud-init clean --logs --seed
+sudo cloud-init clean --logs --seed
```

## Deprovision and create an image
az image create -g <resource_group> -n <image_name> --source <vm_name>
```azurecli-interactive
az sig image-version create \
- -g $sigResourceGroup
- --gallery-name $sigName
- --gallery-image-definition $imageDefName
- --gallery-image-version 1.0.0
+ -g $sigResourceGroup \
+ --gallery-name $sigName \
+ --gallery-image-definition $imageDefName \
+ --gallery-image-version 1.0.0 \
--managed-image /subscriptions/00000000-0000-0000-0000-00000000xxxx/resourceGroups/imageGroups/providers/images/MyManagedImage
```
virtual-machines Disk Encryption Cli Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/disk-encryption-cli-quickstart.md
Last updated 03/29/2023-+ # Quickstart: Create and encrypt a Linux VM with the Azure CLI
-**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
+**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
The Azure CLI is used to create and manage Azure resources from the command line or in scripts. This quickstart shows you how to use the Azure CLI to create and encrypt a Linux virtual machine (VM).
When encryption is enabled, you will see "EnableEncryption" in the returned outp
## Clean up resources
-When no longer needed, you can use the [az group delete](/cli/azure/group) command to remove the resource group, VM, and Key Vault.
+When no longer needed, you can use the [az group delete](/cli/azure/group) command to remove the resource group, VM, and Key Vault.
```azurecli-interactive
az group delete --name "myResourceGroup"
virtual-machines Disk Encryption Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/disk-encryption-linux.md
Last updated 07/07/2023-+ # Azure Disk Encryption scenarios on Linux VMs
-**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
+**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
Azure Disk Encryption for Linux virtual machines (VMs) uses the DM-Crypt feature of Linux to provide full disk encryption of the OS disk and data disks. Additionally, it provides encryption of the temporary disk when using the EncryptFormatAll feature.
az account list
az account set --subscription "<subscription name or ID>"
```
-For more information, see [Get started with Azure CLI 2.0](/cli/azure/get-started-with-azure-cli).
+For more information, see [Get started with Azure CLI 2.0](/cli/azure/get-started-with-azure-cli).
# [Azure PowerShell](#tab/powershellazure)
-The [Azure PowerShell az module](/powershell/azure/new-azureps-module-az) provides a set of cmdlets that uses the [Azure Resource Manager](../../azure-resource-manager/management/overview.md) model for managing your Azure resources. You can use it in your browser with [Azure Cloud Shell](../../cloud-shell/overview.md), or you can install it on your local machine using the instructions in [Install the Azure PowerShell module](/powershell/azure/install-azure-powershell).
+The [Azure PowerShell az module](/powershell/azure/new-azureps-module-az) provides a set of cmdlets that uses the [Azure Resource Manager](../../azure-resource-manager/management/overview.md) model for managing your Azure resources. You can use it in your browser with [Azure Cloud Shell](../../cloud-shell/overview.md), or you can install it on your local machine using the instructions in [Install the Azure PowerShell module](/powershell/azure/install-azure-powershell).
If you already have it installed locally, make sure you use the latest version of Azure PowerShell SDK version to configure Azure Disk Encryption. Download the latest version of [Azure PowerShell release](https://github.com/Azure/azure-powershell/releases).
For more information, see [Getting started with Azure PowerShell](/powershell/az
In this scenario, you can enable encryption by using the Resource Manager template, PowerShell cmdlets, or CLI commands. If you need schema information for the virtual machine extension, see the [Azure Disk Encryption for Linux extension](../extensions/azure-disk-enc-linux.md) article.

>[!IMPORTANT]
- >It is mandatory to snapshot and/or backup a managed disk based VM instance outside of, and prior to enabling Azure Disk Encryption. A snapshot of the managed disk can be taken from the portal, or through [Azure Backup](../../backup/backup-azure-vms-encryption.md). Backups ensure that a recovery option is possible in the case of any unexpected failure during encryption. Once a backup is made, the Set-AzVMDiskEncryptionExtension cmdlet can be used to encrypt managed disks by specifying the -skipVmBackup parameter. The Set-AzVMDiskEncryptionExtension command will fail against managed disk based VMs until a backup has been made and this parameter has been specified.
+ >It is mandatory to snapshot and/or backup a managed disk based VM instance outside of, and prior to enabling Azure Disk Encryption. A snapshot of the managed disk can be taken from the portal, or through [Azure Backup](../../backup/backup-azure-vms-encryption.md). Backups ensure that a recovery option is possible in the case of any unexpected failure during encryption. Once a backup is made, the Set-AzVMDiskEncryptionExtension cmdlet can be used to encrypt managed disks by specifying the -skipVmBackup parameter. The Set-AzVMDiskEncryptionExtension command will fail against managed disk based VMs until a backup has been made and this parameter has been specified.
> > Encrypting or disabling encryption may cause the VM to reboot.
Use the [az vm encryption enable](/cli/azure/vm/encryption#az-vm-encryption-show
```

>[!NOTE]
- > The syntax for the value of disk-encryption-keyvault parameter is the full identifier string:
+ > The syntax for the value of disk-encryption-keyvault parameter is the full identifier string:
/subscriptions/[subscription-id-guid]/resourceGroups/[resource-group-name]/providers/Microsoft.KeyVault/vaults/[keyvault-name]</br> > The syntax for the value of the key-encryption-key parameter is the full URI to the KEK as in:
-https://[keyvault-name].vault.azure.net/keys/[kekname]/[kek-unique-id]
+https://[keyvault-name].vault.azure.net/keys/[kekname]/[kek-unique-id]
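For illustration only, an enable command that supplies both values in the formats described above might look like the following; the resource names are placeholders:

```azurecli-interactive
# Sketch: encrypt all volumes using a key vault (full resource ID) and a KEK (full URI).
az vm encryption enable \
    --resource-group "MyVirtualMachineResourceGroup" \
    --name "MySecureVM" \
    --disk-encryption-keyvault "/subscriptions/[subscription-id-guid]/resourceGroups/[resource-group-name]/providers/Microsoft.KeyVault/vaults/[keyvault-name]" \
    --key-encryption-key "https://[keyvault-name].vault.azure.net/keys/[kekname]/[kek-unique-id]" \
    --volume-type "All"
```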
-- **Verify the disks are encrypted:** To check on the encryption status of a VM, use the [az vm encryption show](/cli/azure/vm/encryption#az-vm-encryption-show) command.
+- **Verify the disks are encrypted:** To check on the encryption status of a VM, use the [az vm encryption show](/cli/azure/vm/encryption#az-vm-encryption-show) command.
```azurecli-interactive
az vm encryption show --name "MySecureVM" --resource-group "MyVirtualMachineResourceGroup"
Use the [Set-AzVMDiskEncryptionExtension](/powershell/module/az.compute/set-azvm
$KeyVault = Get-AzKeyVault -VaultName $KeyVaultName -ResourceGroupName $KVRGname;
$diskEncryptionKeyVaultUrl = $KeyVault.VaultUri;
$KeyVaultResourceId = $KeyVault.ResourceId;
- $sequenceVersion = [Guid]::NewGuid();
+ $sequenceVersion = [Guid]::NewGuid();
Set-AzVMDiskEncryptionExtension -ResourceGroupName $VMRGName -VMName $vmName -DiskEncryptionKeyVaultUrl $diskEncryptionKeyVaultUrl -DiskEncryptionKeyVaultId $KeyVaultResourceId -VolumeType '[All|OS|Data]' -SequenceVersion $sequenceVersion -skipVmBackup;
```

-- **Encrypt a running VM using KEK:** You may need to add the -VolumeType parameter if you're encrypting data disks and not the OS disk.
+- **Encrypt a running VM using KEK:** You may need to add the -VolumeType parameter if you're encrypting data disks and not the OS disk.
```azurepowershell
$KVRGname = 'MyKeyVaultResourceGroup';
Use the [Set-AzVMDiskEncryptionExtension](/powershell/module/az.compute/set-azvm
$diskEncryptionKeyVaultUrl = $KeyVault.VaultUri;
$KeyVaultResourceId = $KeyVault.ResourceId;
$keyEncryptionKeyUrl = (Get-AzKeyVaultKey -VaultName $KeyVaultName -Name $keyEncryptionKeyName).Key.kid;
- $sequenceVersion = [Guid]::NewGuid();
+ $sequenceVersion = [Guid]::NewGuid();
Set-AzVMDiskEncryptionExtension -ResourceGroupName $VMRGName -VMName $vmName -DiskEncryptionKeyVaultUrl $diskEncryptionKeyVaultUrl -DiskEncryptionKeyVaultId $KeyVaultResourceId -KeyEncryptionKeyUrl $keyEncryptionKeyUrl -KeyEncryptionKeyVaultId $KeyVaultResourceId -VolumeType '[All|OS|Data]' -SequenceVersion $sequenceVersion -skipVmBackup;
```

>[!NOTE]
- > The syntax for the value of disk-encryption-keyvault parameter is the full identifier string:
-/subscriptions/[subscription-id-guid]/resourceGroups/[resource-group-name]/providers/Microsoft.KeyVault/vaults/[keyvault-name]</br>
+ > The syntax for the value of disk-encryption-keyvault parameter is the full identifier string:
+/subscriptions/[subscription-id-guid]/resourceGroups/[resource-group-name]/providers/Microsoft.KeyVault/vaults/[keyvault-name]</br>
> The syntax for the value of the key-encryption-key parameter is the full URI to the KEK as in:
-https://[keyvault-name].vault.azure.net/keys/[kekname]/[kek-unique-id]
+https://[keyvault-name].vault.azure.net/keys/[kekname]/[kek-unique-id]
- **Verify the disks are encrypted:** To check on the encryption status of a VM, use the [Get-AzVmDiskEncryptionStatus](/powershell/module/az.compute/get-azvmdiskencryptionstatus) cmdlet.
- ```azurepowershell-interactive
+ ```azurepowershell-interactive
Get-AzVmDiskEncryptionStatus -ResourceGroupName 'MyVirtualMachineResourceGroup' -VMName 'MySecureVM'
```
The following table lists Resource Manager template parameters for existing or r
| keyVaultName | Name of the key vault that the encryption key should be uploaded to. You can get it by using the cmdlet `(Get-AzKeyVault -ResourceGroupName <MyKeyVaultResourceGroupName>). Vaultname` or the Azure CLI command `az keyvault list --resource-group "MyKeyVaultResourceGroupName"`.|
| keyVaultResourceGroup | Name of the resource group that contains the key vault. |
| keyEncryptionKeyURL | URL of the key encryption key that's used to encrypt the encryption key. This parameter is optional if you select **nokek** in the UseExistingKek drop-down list. If you select **kek** in the UseExistingKek drop-down list, you must enter the _keyEncryptionKeyURL_ value. |
-| volumeType | Type of volume that the encryption operation is performed on. Valid values are _OS_, _Data_, and _All_.
+| volumeType | Type of volume that the encryption operation is performed on. Valid values are _OS_, _Data_, and _All_.
| forceUpdateTag | Pass in a unique value like a GUID every time the operation needs to be force run. |
| location | Location for all resources. |
The **EncryptFormatAll** parameter reduces the time for Linux data disks to be e
>[!WARNING]
> EncryptFormatAll shouldn't be used when there is data you need on a VM's data volumes. You may exclude disks from encryption by unmounting them. You should first try out EncryptFormatAll on a test VM and understand the feature parameter and its implications before trying it on a production VM. The EncryptFormatAll option formats the data disk, and all the data on it will be lost. Before proceeding, verify that disks you wish to exclude are properly unmounted. </br></br>
- >If you're setting this parameter while updating encryption settings, it might lead to a reboot before the actual encryption. In this case, you will also want to remove the disk you don't want formatted from the fstab file. Similarly, you should add the partition you want encrypt-formatted to the fstab file before initiating the encryption operation.
+ >If you're setting this parameter while updating encryption settings, it might lead to a reboot before the actual encryption. In this case, you will also want to remove the disk you don't want formatted from the fstab file. Similarly, you should add the partition you want encrypt-formatted to the fstab file before initiating the encryption operation.
### EncryptFormatAll criteria
Use the [az vm encryption enable](/cli/azure/vm/encryption#az-vm-encryption-enab
# [Use the EncryptFormatAll parameter with a PowerShell cmdlet](#tab/efaps)
-Use the [Set-AzVMDiskEncryptionExtension](/powershell/module/az.compute/set-azvmdiskencryptionextension) cmdlet with the EncryptFormatAll parameter.
+Use the [Set-AzVMDiskEncryptionExtension](/powershell/module/az.compute/set-azvmdiskencryptionextension) cmdlet with the EncryptFormatAll parameter.
**Encrypt a running VM using EncryptFormatAll:** As an example, the script below initializes your variables and runs the Set-AzVMDiskEncryptionExtension cmdlet with the EncryptFormatAll parameter. The resource group, VM, and key vault were created as prerequisites. Replace MyVirtualMachineResourceGroup, MySecureVM, and MySecureVault with your values.
-
+
```azurepowershell
$KVRGname = 'MyKeyVaultResourceGroup';
$VMRGName = 'MyVirtualMachineResourceGroup';
We recommend an LVM-on-crypt setup. For detailed instructions about the LVM on c
## New VMs created from customer-encrypted VHD and encryption keys
-In this scenario, you can enable encrypting by using PowerShell cmdlets or CLI commands.
+In this scenario, you can enable encryption by using PowerShell cmdlets or CLI commands.
Use the instructions in the Azure Disk Encryption sample scripts for preparing pre-encrypted images that can be used in Azure. After the image is created, you can use the steps in the next section to create an encrypted Azure VM.
New-AzVM -VM $VirtualMachine -ResourceGroupName "MyVirtualMachineResourceGroup"
## Enable encryption on a newly added data disk
-You can add a new data disk using [az vm disk attach](add-disk.md), or [through the Azure portal](attach-disk-portal.md). Before you can encrypt, you need to mount the newly attached data disk first. You must request encryption of the data drive since the drive will be unusable while encryption is in progress.
+You can add a new data disk using [az vm disk attach](add-disk.md), or [through the Azure portal](attach-disk-portal.md). Before you can encrypt, you need to mount the newly attached data disk first. You must request encryption of the data drive since the drive will be unusable while encryption is in progress.
# [Using Azure CLI](#tab/adedatacli)
- If the VM was previously encrypted with "All" then the --volume-type parameter should remain "All". All includes both OS and data disks. If the VM was previously encrypted with a volume type of "OS", then the --volume-type parameter should be changed to "All" so that both the OS and the new data disk will be included. If the VM was encrypted with only the volume type of "Data", then it can remain "Data" as demonstrated below. Adding and attaching a new data disk to a VM is not sufficient preparation for encryption. The newly attached disk must also be formatted and properly mounted within the VM prior to enabling encryption. On Linux the disk must be mounted in /etc/fstab with a [persistent block device name](/troubleshoot/azure/virtual-machines/troubleshoot-device-names-problems).
+ If the VM was previously encrypted with "All" then the --volume-type parameter should remain "All". All includes both OS and data disks. If the VM was previously encrypted with a volume type of "OS", then the --volume-type parameter should be changed to "All" so that both the OS and the new data disk will be included. If the VM was encrypted with only the volume type of "Data", then it can remain "Data" as demonstrated below. Adding and attaching a new data disk to a VM is not sufficient preparation for encryption. The newly attached disk must also be formatted and properly mounted within the VM prior to enabling encryption. On Linux the disk must be mounted in /etc/fstab with a [persistent block device name](/troubleshoot/azure/virtual-machines/troubleshoot-device-names-problems).
In contrast to PowerShell syntax, the CLI does not require the user to provide a unique sequence version when enabling encryption. The CLI automatically generates and uses its own unique sequence version value.
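As a sketch, enabling encryption for the newly added and mounted data disk, keeping the volume type at "Data" as discussed above, could look like this (the names are placeholders):

```azurecli-interactive
az vm encryption enable \
    --resource-group "MyVirtualMachineResourceGroup" \
    --name "MySecureVM" \
    --disk-encryption-keyvault "MySecureVault" \
    --volume-type "Data"
```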
In contrast to PowerShell syntax, the CLI does not require the user to provide a
``` >[!NOTE]
- > The syntax for the value of disk-encryption-keyvault parameter is the full identifier string:
-/subscriptions/[subscription-id-guid]/resourceGroups/[KVresource-group-name]/providers/Microsoft.KeyVault/vaults/[keyvault-name]</br>
+ > The syntax for the value of disk-encryption-keyvault parameter is the full identifier string:
+/subscriptions/[subscription-id-guid]/resourceGroups/[KVresource-group-name]/providers/Microsoft.KeyVault/vaults/[keyvault-name]</br>
> The syntax for the value of the key-encryption-key parameter is the full URI to the KEK as in:
-https://[keyvault-name].vault.azure.net/keys/[kekname]/[kek-unique-id]
+https://[keyvault-name].vault.azure.net/keys/[kekname]/[kek-unique-id]
You can disable the Azure disk encryption extension, and you can remove the Azur
To remove ADE, it is recommended that you first disable encryption and then remove the extension. If you remove the encryption extension without disabling it, the disks will still be encrypted. If you disable encryption **after** removing the extension, the extension will be reinstalled (to perform the decrypt operation) and will need to be removed a second time. > [!WARNING]
-> You can **not** disable encryption if the OS disk is encrypted. (OS disks are encrypted when the original encryption operation specifies volumeType=ALL or volumeType=OS.)
+> You can **not** disable encryption if the OS disk is encrypted. (OS disks are encrypted when the original encryption operation specifies volumeType=ALL or volumeType=OS.)
> > Disabling encryption works only when data disks are encrypted but the OS disk is not.
You can disable encryption using Azure PowerShell, the Azure CLI, or with a Reso
Disable-AzVMDiskEncryption -ResourceGroupName "MyVirtualMachineResourceGroup" -VMName "MySecureVM" -VolumeType "data"
```

-- **Disable encryption with the Azure CLI:** To disable encryption, use the [az vm encryption disable](/cli/azure/vm/encryption#az-vm-encryption-disable) command.
+- **Disable encryption with the Azure CLI:** To disable encryption, use the [az vm encryption disable](/cli/azure/vm/encryption#az-vm-encryption-disable) command.
```azurecli-interactive
az vm encryption disable --name "MySecureVM" --resource-group "MyVirtualMachineResourceGroup" --volume-type "data"
```

-- **Disable encryption with a Resource Manager template:**
+- **Disable encryption with a Resource Manager template:**
1. Click **Deploy to Azure** from the [Disable disk encryption on running Linux VM](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.compute/decrypt-running-linux-vm-without-aad) template.
2. Select the subscription, resource group, location, VM, volume type, legal terms, and agreement.
You can disable encryption using Azure PowerShell, the Azure CLI, or with a Reso
If you want to decrypt your disks and remove the encryption extension, you must disable encryption **before** removing the extension; see [disable encryption](#disable-encryption).
-You can remove the encryption extension using Azure PowerShell or the Azure CLI.
+You can remove the encryption extension using Azure PowerShell or the Azure CLI.
- **Disable disk encryption with Azure PowerShell:** To remove the encryption, use the [Remove-AzVMDiskEncryptionExtension](/powershell/module/az.compute/remove-azvmdiskencryptionextension) cmdlet.
virtual-machines Disk Encryption Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/disk-encryption-overview.md
Last updated 06/14/2023-+ # Azure Disk Encryption for Linux VMs
-**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
+**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
Azure Disk Encryption helps protect and safeguard your data to meet your organizational security and compliance commitments. It uses the [DM-Crypt](https://en.wikipedia.org/wiki/Dm-crypt) feature of Linux to provide volume encryption for the OS and data disks of Azure virtual machines (VMs), and is integrated with [Azure Key Vault](../../key-vault/index.yml) to help you control and manage the disk encryption keys and secrets.
If you use [Microsoft Defender for Cloud](../../security-center/index.yml), you'
![Microsoft Defender for Cloud disk encryption alert](media/disk-encryption/security-center-disk-encryption-fig1.png)

> [!WARNING]
-> - If you have previously used Azure Disk Encryption with Microsoft Entra ID to encrypt a VM, you must continue to use this option to encrypt your VM. See [Azure Disk Encryption with Microsoft Entra ID (previous release)](disk-encryption-overview-aad.md) for details.
+> - If you have previously used Azure Disk Encryption with Microsoft Entra ID to encrypt a VM, you must continue to use this option to encrypt your VM. See [Azure Disk Encryption with Microsoft Entra ID (previous release)](disk-encryption-overview-aad.md) for details.
> - Certain recommendations might increase data, network, or compute resource usage, resulting in additional license or subscription costs. You must have a valid active Azure subscription to create resources in Azure in the supported regions. You can learn the fundamentals of Azure Disk Encryption for Linux in just a few minutes with the [Create and encrypt a Linux VM with Azure CLI quickstart](disk-encryption-cli-quickstart.md) or the [Create and encrypt a Linux VM with Azure PowerShell quickstart](disk-encryption-powershell-quickstart.md).
Linux server distributions that are not endorsed by Azure do not support Azure D
> RHEL:
> - The new Azure Disk Encryption implementation is supported for RHEL OS and data disk for RHEL7 Pay-As-You-Go images.
> - ADE is also supported for RHEL Bring-Your-Own-Subscription Gold Images, but only **after** the subscription has been registered. For more information, see [Red Hat Enterprise Linux Bring-Your-Own-Subscription Gold Images in Azure](../workloads/redhat/byos.md#encrypt-red-hat-enterprise-linux-bring-your-own-subscription-gold-images)
->
+>
> All distros:
-> - ADE support for a particular offer type does not extend beyond the end-of-life date provided by the publisher.
+> - ADE support for a particular offer type does not extend beyond the end-of-life date provided by the publisher.
> - The legacy ADE solution (using Microsoft Entra credentials) is not recommended for new VMs and is not compatible with RHEL versions later than RHEL 7.8 or with Python 3 as default. ## Additional VM requirements
-Azure Disk Encryption requires the dm-crypt and vfat modules to be present on the system. Removing or disabling vfat from the default image will prevent the system from reading the key volume and obtaining the key needed to unlock the disks on subsequent reboots. System hardening steps that remove the vfat module from the system or enforce expanding the OS mountpoints/folders on data drives are not compatible with Azure Disk Encryption.
+Azure Disk Encryption requires the dm-crypt and vfat modules to be present on the system. Removing or disabling vfat from the default image will prevent the system from reading the key volume and obtaining the key needed to unlock the disks on subsequent reboots. System hardening steps that remove the vfat module from the system or enforce expanding the OS mountpoints/folders on data drives are not compatible with Azure Disk Encryption.
Before enabling encryption, the data disks to be encrypted must be properly listed in /etc/fstab. Use the "nofail" option when creating entries, and choose a persistent block device name (as device names in the "/dev/sdX" format may not be associated with the same disk across reboots, particularly after encryption; for more detail on this behavior, see: [Troubleshoot Linux VM device name changes](/troubleshoot/azure/virtual-machines/troubleshoot-device-names-problems)).
-Make sure the /etc/fstab settings are configured properly for mounting. To configure these settings, run the mount -a command or reboot the VM and trigger the remount that way. Once that is complete, check the output of the lsblk command to verify that the drive is still mounted.
+Make sure the /etc/fstab settings are configured properly for mounting. To configure these settings, run the mount -a command or reboot the VM and trigger the remount that way. Once that is complete, check the output of the lsblk command to verify that the drive is still mounted.
- If the /etc/fstab file doesn't mount the drive properly before enabling encryption, Azure Disk Encryption won't be able to mount it properly.
- The Azure Disk Encryption process will move the mount information out of /etc/fstab and into its own configuration file as part of the encryption process. Don't be alarmed to see the entry missing from /etc/fstab after data drive encryption completes.
- Before starting encryption, be sure to stop all services and processes that could be writing to mounted data disks, and disable them so that they do not restart automatically after a reboot. These processes could keep files open on these partitions, preventing the encryption procedure from remounting them and causing the encryption to fail.
- After reboot, it will take time for the Azure Disk Encryption process to mount the newly encrypted disks. They won't be immediately available after a reboot. The process needs time to start, unlock, and then mount the encrypted drives before being available for other processes to access. This process may take more than a minute after reboot depending on the system characteristics. Here is an example of the commands used to mount the data disks and create the necessary /etc/fstab entries:
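The example itself isn't reproduced here, so the following is a sketch of what such commands could look like, assuming a single data disk exposed as `/dev/sdc1` and mounted at `/datadrive`:

```bash
# Format and mount the data disk, then add a persistent /etc/fstab entry that
# uses the filesystem UUID and the nofail option.
sudo mkfs -t ext4 /dev/sdc1
sudo mkdir -p /datadrive
sudo mount /dev/sdc1 /datadrive
UUID=$(sudo blkid -s UUID -o value /dev/sdc1)
echo "UUID=$UUID /datadrive ext4 defaults,nofail 0 2" | sudo tee -a /etc/fstab
```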
To enable the Azure Disk Encryption feature, the Linux VMs must meet the followi
- To get a token to connect to your key vault, the Linux VM must be able to connect to a Microsoft Entra endpoint, \[login.microsoftonline.com\]. - To write the encryption keys to your key vault, the Linux VM must be able to connect to the key vault endpoint. - The Linux VM must be able to connect to an Azure storage endpoint that hosts the Azure extension repository and an Azure storage account that hosts the VHD files.
- - If your security policy limits access from Azure VMs to the Internet, you can resolve the preceding URI and configure a specific rule to allow outbound connectivity to the IPs. For more information, see [Azure Key Vault behind a firewall](../../key-vault/general/access-behind-firewall.md).
+ - If your security policy limits access from Azure VMs to the Internet, you can resolve the preceding URI and configure a specific rule to allow outbound connectivity to the IPs. For more information, see [Azure Key Vault behind a firewall](../../key-vault/general/access-behind-firewall.md).
-## Encryption key storage requirements
+## Encryption key storage requirements
Azure Disk Encryption requires an Azure Key Vault to control and manage disk encryption keys and secrets. Your key vault and VMs must reside in the same Azure region and subscription.
The following table defines some of the common terms used in Azure disk encrypti
## Next steps

- [Quickstart - Create and encrypt a Linux VM with Azure CLI](disk-encryption-cli-quickstart.md)
- [Quickstart - Create and encrypt a Linux VM with Azure PowerShell](disk-encryption-powershell-quickstart.md)
- [Azure Disk Encryption scenarios on Linux VMs](disk-encryption-linux.md)
- [Azure Disk Encryption prerequisites CLI script](https://github.com/ejarvi/ade-cli-getting-started)
- [Azure Disk Encryption prerequisites PowerShell script](https://github.com/Azure/azure-powershell/tree/master/src/Compute/Compute/Extension/AzureDiskEncryption/Scripts)
virtual-machines Disk Encryption Portal Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/disk-encryption-portal-quickstart.md
Last updated 01/04/2023-+ # Quickstart: Create and encrypt a virtual machine with the Azure portal
It will take a few minutes for your VM to be deployed. When the deployment is fi
:::image type="content" source="../media/disk-encryption/portal-quickstart-keyvault-enable.png" alt-text="disks and encryption selection":::
-1. Select **Review + create**.
+1. Select **Review + create**.
1. After the key vault has passed validation, select **Create**. This will return you to the **Select key from Azure Key Vault** screen.
1. Leave the **Key** field blank and choose **Select**.
1. At the top of the encryption screen, click **Save**. A popup will warn you that the VM will reboot. Click **Yes**.
When no longer needed, you can delete the resource group, virtual machine, and a
## Next steps
-In this quickstart, you created a Key Vault that was enabled for encryption keys, created a virtual machine, and enabled the virtual machine for encryption.
+In this quickstart, you created a Key Vault that was enabled for encryption keys, created a virtual machine, and enabled the virtual machine for encryption.
> [!div class="nextstepaction"] > [Azure Disk Encryption overview](disk-encryption-overview.md)
virtual-machines Disk Encryption Powershell Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/disk-encryption-powershell-quickstart.md
Last updated 01/04/2023-+ # Quickstart: Create and encrypt a Linux VM in Azure with Azure PowerShell
-**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
+**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
The Azure PowerShell module is used to create and manage Azure resources from the PowerShell command line or in scripts. This quickstart shows you how to use the Azure PowerShell module to create a Linux virtual machine (VM), create a Key Vault for the storage of encryption keys, and encrypt the VM. This quickstart uses the Ubuntu 16.04 LTS marketplace image from Canonical and a VM Standard_D2S_V3 size. However, any [ADE supported Linux image version](/azure/virtual-machines/linux/disk-encryption-overview#supported-operating-systems) could be used instead of an Ubuntu VM.
$cred = Get-Credential
New-AzVM -Name MyVm -Credential $cred -ResourceGroupName MyResourceGroup -Image Canonical:UbuntuServer:18.04-LTS:latest -Size Standard_D2S_V3
```
-It takes a few minutes for your VM to be deployed.
+It takes a few minutes for your VM to be deployed.
## Create a Key Vault configured for encryption keys
virtual-machines Disk Encryption Sample Scripts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/disk-encryption-sample-scripts.md
Last updated 03/29/2023-+ # Azure Disk Encryption sample scripts for Linux VMs
-**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
+**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
-This article provides sample scripts for preparing pre-encrypted VHDs and other tasks.
+This article provides sample scripts for preparing pre-encrypted VHDs and other tasks.
> [!NOTE] > All scripts refer to the latest, non-AAD version of ADE, except where noted.
virtual-machines Disk Encryption Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/disk-encryption-upgrade.md
Last updated 05/27/2021-+ # Upgrading the Azure Disk Encryption version
You can determine the version of ADE with which a VM was encrypted via Azure CLI
To determine the ADE version, run the Azure CLI [az vm get-instance-view](/cli/azure/vm#az-vm-get-instance-view) command.

```azurecli-interactive
-az vm get-instance-view --resource-group <ResourceGroupName> --name <VMName>
+az vm get-instance-view --resource-group <ResourceGroupName> --name <VMName>
```

Locate the AzureDiskEncryption extension in the output and identify the version number from the "TypeHandlerVersion" field in the output.
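Optionally, a JMESPath query along these lines can pull that field directly; the extension type filter is an assumption and may need adjusting for your VM:

```azurecli-interactive
# Sketch: print the typeHandlerVersion of any extension whose type contains "AzureDiskEncryption".
az vm get-instance-view \
    --resource-group <ResourceGroupName> \
    --name <VMName> \
    --query "instanceView.extensions[?contains(type, 'AzureDiskEncryption')].typeHandlerVersion" \
    --output tsv
```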
virtual-machines Disks Enable Customer Managed Keys Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/disks-enable-customer-managed-keys-cli.md
Last updated 05/03/2023
-+ # Use the Azure CLI to enable server-side encryption with customer-managed keys for managed disks
-**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
+**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
Azure Disk Storage allows you to manage your own keys when using server-side encryption (SSE) for managed disks, if you choose. For conceptual information on SSE with customer managed keys, as well as other managed disk encryption types, see the [Customer-managed keys](../disk-encryption.md#customer-managed-keys) section of our disk encryption article.
rgName=yourResourceGroupName
vmName=yourVMName
location=westcentralus
vmSize=Standard_DS3_V2
-image=LinuxImageURN
+image=LinuxImageURN
diskEncryptionSetName=yourDiskencryptionSetName
diskEncryptionSetId=$(az disk-encryption-set show -n $diskEncryptionSetName -g $rgName --query [id] -o tsv)
diskEncryptionSetId=$(az disk-encryption-set show -n $diskEncryptionSetName -g $
az vm create -g $rgName -n $vmName -l $location --image $image --size $vmSize --generate-ssh-keys --os-disk-encryption-set $diskEncryptionSetId --data-disk-sizes-gb 128 128 --data-disk-encryption-sets $diskEncryptionSetId $diskEncryptionSetId ```
-### Encrypt existing managed disks
+### Encrypt existing managed disks
Your existing disks must not be attached to a running VM in order for you to encrypt them using the following script:
Your existing disks must not be attached to a running VM in order for you to enc
rgName=yourResourceGroupName
diskName=yourDiskName
diskEncryptionSetName=yourDiskEncryptionSetName
-
+ az disk update -n $diskName -g $rgName --encryption-type EncryptionAtRestWithCustomerKey --disk-encryption-set $diskEncryptionSetId ```
rgName=yourResourceGroupName
vmssName=yourVMSSName
location=westcentralus
vmSize=Standard_DS3_V2
-image=LinuxImageURN
+image=LinuxImageURN
diskEncryptionSetName=yourDiskencryptionSetName
diskEncryptionSetId=$(az disk-encryption-set show -n $diskEncryptionSetName -g $rgName --query [id] -o tsv)
az disk create -n $diskName -g $rgName -l $location --encryption-type Encryption
diskId=$(az disk show -n $diskName -g $rgName --query [id] -o tsv)
-az vm disk attach --vm-name $vmName --lun $diskLUN --ids $diskId
+az vm disk attach --vm-name $vmName --lun $diskLUN --ids $diskId
```
virtual-machines Disks Enable Host Based Encryption Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/disks-enable-host-based-encryption-cli.md
- references_regions - devx-track-azurecli
- - devx-track-linux
+ - linux-related-content
- ignite-2023 # Use the Azure CLI to enable end-to-end encryption using encryption at host
-**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
+**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
When you enable encryption at host, data stored on the VM host is encrypted at rest and flows encrypted to the Storage service. For conceptual information on encryption at host, and other managed disk encryption types, see [Encryption at host - End-to-end encryption for your VM data](../disk-encryption.md#encryption-at-hostend-to-end-encryption-for-your-vm-data).
rgName=yourRGName
vmName=yourVMName
location=eastus
vmSize=Standard_DS2_v2
-image=LinuxImageURN
+image=LinuxImageURN
diskEncryptionSetName=yourDiskEncryptionSetName
diskEncryptionSetId=$(az disk-encryption-set show -n $diskEncryptionSetName -g $rgName --query [id] -o tsv)
rgName=yourRGName
vmName=yourVMName
location=eastus
vmSize=Standard_DS2_v2
-image=LinuxImageURN
+image=LinuxImageURN
az vm create -g $rgName \
-n $vmName \
az vm update -n $vmName \
### Create a Virtual Machine Scale Set with encryption at host enabled with customer-managed keys
-Create a Virtual Machine Scale Set with managed disks using the resource URI of the DiskEncryptionSet created earlier to encrypt cache of OS and data disks with customer-managed keys. The temp disks are encrypted with platform-managed keys.
+Create a Virtual Machine Scale Set with managed disks using the resource URI of the DiskEncryptionSet created earlier to encrypt cache of OS and data disks with customer-managed keys. The temp disks are encrypted with platform-managed keys.
> [!IMPORTANT] >Starting November 2023, VM scale sets created using PowerShell and Azure CLI will default to Flexible Orchestration Mode if no orchestration mode is specified. For more information about this change and what actions you should take, go to [Breaking Change for VMSS PowerShell/CLI Customers - Microsoft Community Hub](https://techcommunity.microsoft.com/t5/azure-compute-blog/breaking-change-for-vmss-powershell-cli-customers/ba-p/3818295)
rgName=yourRGName
vmssName=yourVMSSName
location=westus2
vmSize=Standard_DS3_V2
-image=Ubuntu2204
+image=Ubuntu2204
diskEncryptionSetName=yourDiskEncryptionSetName
diskEncryptionSetId=$(az disk-encryption-set show -n $diskEncryptionSetName -g $rgName --query [id] -o tsv)
az vmss create -g $rgName \
-n $vmssName \
--encryption-at-host \
--image $image \
+--orchestration-mode flexible \
--admin-username azureuser \
--generate-ssh-keys \
--os-disk-encryption-set $diskEncryptionSetId \
az vmss create -g $rgName \
### Create a Virtual Machine Scale Set with encryption at host enabled with platform-managed keys
-Create a Virtual Machine Scale Set with encryption at host enabled to encrypt cache of OS/data disks and temp disks with platform-managed keys.
+Create a Virtual Machine Scale Set with encryption at host enabled to encrypt cache of OS/data disks and temp disks with platform-managed keys.
> [!IMPORTANT] >Starting November 2023, VM scale sets created using PowerShell and Azure CLI will default to Flexible Orchestration Mode if no orchestration mode is specified. For more information about this change and what actions you should take, go to [Breaking Change for VMSS PowerShell/CLI Customers - Microsoft Community Hub](https://techcommunity.microsoft.com/t5/azure-compute-blog/breaking-change-for-vmss-powershell-cli-customers/ba-p/3818295)
rgName=yourRGName
vmssName=yourVMSSName
location=westus2
vmSize=Standard_DS3_V2
-image=Ubuntu2204
+image=Ubuntu2204
az vmss create -g $rgName \
-n $vmssName \
--encryption-at-host \
--image $image \
+--orchestration-mode flexible \
--admin-username azureuser \
--generate-ssh-keys \
--data-disk-sizes-gb 64 128 \
When calling the [Resource Skus API](/rest/api/compute/resourceskus/list), check
For the Azure PowerShell module, use the [Get-AzComputeResourceSku](/powershell/module/az.compute/get-azcomputeresourcesku) cmdlet. ```azurepowershell-interactive
-$vmSizes=Get-AzComputeResourceSku | where{$_.ResourceType -eq 'virtualMachines' -and $_.Locations.Contains('CentralUSEUAP')}
+$vmSizes=Get-AzComputeResourceSku | where{$_.ResourceType -eq 'virtualMachines' -and $_.Locations.Contains('CentralUSEUAP')}
foreach($vmSize in $vmSizes) {
virtual-machines Expand Disks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/expand-disks.md
Last updated 07/12/2023 -+ # Expand virtual hard disks on a Linux VM
-**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
+**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
This article describes how to expand managed disks for a Linux virtual machine (VM). You can [add data disks](add-disk.md) to provide for additional storage space, and you can also expand an existing data disk. The default virtual hard disk size for the operating system (OS) is typically 30 GB on a Linux VM in Azure. This article covers expanding either OS disks or data disks. An OS disk has a maximum capacity of 4,095 GiB. However, many operating systems are partitioned with [master boot record (MBR)](https://wikipedia.org/wiki/Master_boot_record) by default. MBR limits the usable size to 2 TiB. If you need more than 2 TiB, create and attach data disks and use them for data storage. If you need to store data on the OS disk and require the additional space, convert it to GUID Partition Table (GPT).

> [!WARNING]
-> Always make sure that your filesystem is in a healthy state, your disk partition table type (GPT or MBR) will support the new size, and ensure your data is backed up before you perform disk expansion operations. For more information, see the [Azure Backup quickstart](../../backup/quick-backup-vm-portal.md).
+> Always make sure that your filesystem is in a healthy state, your disk partition table type (GPT or MBR) will support the new size, and ensure your data is backed up before you perform disk expansion operations. For more information, see the [Azure Backup quickstart](../../backup/quick-backup-vm-portal.md).
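For reference, growing a managed data disk from the CLI generally follows this pattern; the names and target size are placeholders, and the disk must be unattached or the VM deallocated unless online expansion applies to your disk type:

```azurecli-interactive
# Expand the data disk to 512 GiB; grow the partition and filesystem inside the VM afterwards.
az disk update \
    --resource-group myResourceGroup \
    --name myDataDisk \
    --size-gb 512
```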
## <a id="identifyDisk"></a>Identify Azure data disk object within the operating system ##
If a data disk was expanded without downtime using the procedure mentioned previ
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0x43d10aad
-
+ Device Boot Start End Sectors Size Id Type /dev/sda1 2048 536870878 536868831 256G 83 Linux ```
If a data disk was expanded without downtime using the procedure mentioned previ
I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disklabel type: dos Disk identifier: 0x43d10aad
-
+ Device Boot Start End Sectors Size Id Type /dev/sda1 2048 536870878 536868831 256G 83 Linux ```
virtual-machines How To Configure Lvm Raid On Crypt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/how-to-configure-lvm-raid-on-crypt.md
Last updated 04/06/2023-+ # Configure LVM and RAID on encrypted devices
-**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
+**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
This article is a step-by-step process for how to perform Logical Volume Management (LVM) and RAID on encrypted devices. The process applies to the following environments:
This article is a step-by-step process for how to perform Logical Volume Managem
## Scenarios
-The procedures in this article support the following scenarios:
+The procedures in this article support the following scenarios:
- Configure LVM on top of encrypted devices (LVM-on-crypt) - Configure RAID on top of encrypted devices (RAID-on-crypt)
-After the underlying device or devices are encrypted, then you can create the LVM or RAID structures on top of that encrypted layer.
+After the underlying device or devices are encrypted, then you can create the LVM or RAID structures on top of that encrypted layer.
-The physical volumes (PVs) are created on top of the encrypted layer. The physical volumes are used to create the volume group. You create the volumes and add the required entries on /etc/fstab.
+The physical volumes (PVs) are created on top of the encrypted layer. The physical volumes are used to create the volume group. You create the volumes and add the required entries on /etc/fstab.
![Diagram of the layers of LVM structures](./media/disk-encryption/lvm-raid-on-crypt/000-lvm-raid-crypt-diagram.png)
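To make the layering concrete, here's a minimal sketch of the LVM-on-crypt flow, assuming a single already-encrypted device exposed under `/dev/mapper` (the device name, volume group, and sizes are placeholders; the detailed steps follow later in this article):

```bash
# Create a physical volume on top of the encrypted (dm-crypt) layer.
sudo pvcreate /dev/mapper/<encrypted-device>
# Group it into a volume group and carve out a logical volume.
sudo vgcreate vgdata /dev/mapper/<encrypted-device>
sudo lvcreate -L 10G -n lvdata1 vgdata
# Put a file system on the logical volume; it's then mounted via /etc/fstab with the nofail option.
sudo mkfs.ext4 /dev/vgdata/lvdata1
```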
The Azure Disk Encryption dual-pass version is on a deprecation path and should
When you're using the "on-crypt" configurations, use the process outlined in the following procedures.
->[!NOTE]
+>[!NOTE]
>We're using variables throughout the article. Replace the values accordingly.
-### Deploy a VM
+### Deploy a VM
The following commands are optional, but we recommend that you apply them on a newly deployed virtual machine (VM). PowerShell:
$storageType = 'Standard_LRS'
$dataDiskName = ${VMNAME} + '_datadisk0' $diskConfig = New-AzDiskConfig -SkuName $storageType -Location $LOCATION -CreateOption Empty -DiskSizeGB 5 $dataDisk1 = New-AzDisk -DiskName $dataDiskName -Disk $diskConfig -ResourceGroupName ${RGNAME}
-$vm = Get-AzVM -Name ${VMNAME} -ResourceGroupName ${RGNAME}
+$vm = Get-AzVM -Name ${VMNAME} -ResourceGroupName ${RGNAME}
$vm = Add-AzVMDataDisk -VM $vm -Name $dataDiskName -CreateOption Attach -ManagedDiskId $dataDisk1.Id -Lun 0 Update-AzVM -VM ${VM} -ResourceGroupName ${RGNAME} ```
Portal:
OS: ```bash
-lsblk
+lsblk
``` ![List of attached disks in the OS](./media/disk-encryption/lvm-raid-on-crypt/004-lvm-raid-check-disks-os.png)
This configuration is done at the operating system level. The corresponding disk
Check the device letter assigned to the new disks. In this example, we're using four data disks. ```bash
-lsblk
+lsblk
``` ![Data disks attached to the OS](./media/disk-encryption/lvm-raid-on-crypt/004-lvm-raid-check-disks-os.png)
mkdir /tempdata${disk}; \
echo "UUID=${diskuuid} /tempdata${disk} ext4 defaults,nofail 0 0" >> /etc/fstab; \ mount -a; \ done
-```
+```
### Verify that the disks are mounted properly ```bash
cat /etc/fstab
PowerShell using a key encryption key (KEK): ```powershell
-$sequenceVersion = [Guid]::NewGuid()
+$sequenceVersion = [Guid]::NewGuid()
Set-AzVMDiskEncryptionExtension -ResourceGroupName $RGNAME ` -VMName ${VMNAME} ` -DiskEncryptionKeyVaultUrl $diskEncryptionKeyVaultUrl `
lsblk
The extension will add the file systems to /var/lib/azure_disk_encryption_config/azure_crypt_mount (an old encryption) or to /etc/crypttab (new encryptions).
->[!NOTE]
+>[!NOTE]
>Do not modify any of these files.
-This file will take care of activating these disks during the boot process so that LVM or RAID can use them later.
+This file will take care of activating these disks during the boot process so that LVM or RAID can use them later.
Don't worry about the mount points on this file. Azure Disk Encryption will lose the ability to get the disks mounted as a normal file system after we create a physical volume or a RAID device on top of those encrypted devices. (This will remove the file system format that we used during the preparation process.)
echo "y" | pvcreate /dev/mapper/4159c60a-a546-455b-985f-92865d51158c
``` ![Verification that a physical volume was created](./media/disk-encryption/lvm-raid-on-crypt/014-lvm-raid-pvcreate.png)
->[!NOTE]
+>[!NOTE]
>The /dev/mapper/device names here need to be replaced with your actual values, based on the output of **lsblk**. #### Verify the information for physical volumes
pvs
```bash lvcreate -L 10G -n lvdata1 vgdata lvcreate -L 7G -n lvdata2 vgdata
-```
+```
#### Check the created logical volumes
It's important to make sure that the **nofail** option is added to the mount poi
If you don't use the **nofail** option: -- The OS will never get into the stage where Azure Disk Encryption is started and the data disks are unlocked and mounted. -- The encrypted disks will be unlocked at the end of the boot process. The LVM volumes and file systems will be automatically mounted until Azure Disk Encryption unlocks them.
+- The OS will never get into the stage where Azure Disk Encryption is started and the data disks are unlocked and mounted.
+- The encrypted disks will be unlocked at the end of the boot process. The LVM volumes and file systems won't be automatically mounted until Azure Disk Encryption unlocks them.
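For reference, a mount entry that includes the **nofail** option might be added like this (a sketch; the UUID, mount point, and file system type are placeholders):

```bash
# Append a nofail mount entry so boot continues even if the encrypted device isn't unlocked yet.
echo "UUID=<uuid-of-logical-volume> /data/lvdata1 ext4 defaults,nofail 0 0" | sudo tee -a /etc/fstab
```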
You can test rebooting the VM and validate that the file systems are also automatically getting mounted after boot time. This process might take several minutes, depending on the number and sizes of file systems.
mdadm --create /dev/md10 \
``` ![Information for configured RAID via the mdadm command](./medi-creation.png)
->[!NOTE]
+>[!NOTE]
>The /dev/mapper/device names here need to be replaced with your actual values, based on the output of **lsblk**. ### Check/monitor RAID creation
mkfs.ext4 /dev/md10
Create a new mount point for the file system, add the new file system to /etc/fstab, and mount it:
->[!NOTE]
+>[!NOTE]
>This cycle iterates over only one device in this particular example, but it's built this way so it can be used for multiple md devices if needed. ```bash
virtual-machines How To Resize Encrypted Lvm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/how-to-resize-encrypted-lvm.md
description: This article provides instructions for resizing ADE encrypted disks
-+ Last updated 04/11/2023
Last updated 04/11/2023
# How to resize logical volume management devices that use Azure Disk Encryption
-**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
+**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
In this article, you'll learn how to resize data disks that use Azure Disk Encryption. To resize the disks, you'll use logical volume management (LVM) on Linux. The steps apply to multiple scenarios.
When you need to add a new disk to increase the VG size, extend your traditional
![Screenshot showing the code that checks the disk list. The results are highlighted.](./media/disk-encryption/resize-lvm/009-resize-lvm-scenariob-check-scsi12.png) ```bash
- sudo lsbk
+ sudo lsblk
``` ![Screenshot showing the code that checks the disk list by using l s b l k. The command and the result are highlighted.](./media/disk-encryption/resize-lvm/009-resize-lvm-scenariob-check-lsblk1.png)
When you need to add a new disk to increase the VG size, extend your traditional
sudo lvextend -r -L +2G /dev/vgname/lvname ```
- ![Screenshot showing code that increases the size of the file system online. The results are highlighted.](./media/disk-encryption/resize-lvm/013-resize-lvm-scenariob-lvextend.png)
+ ![Screenshot showing code that increases the size of the file system online. The results are highlighted.](./media/disk-encryption/resize-lvm/013-resize-lvm-scenariob-lvextend.png)
14. Verify the new sizes of the LV and file system:
When you need to add a new disk to increase the VG size, extend your traditional
> >At this point, the encrypted layer is expanded to the new disk. The actual data disk has no encryption settings at the platform level, so its encryption status isn't updated. >
- >These are some of the reasons why LVM-on-crypt is the recommended approach.
+ >These are some of the reasons why LVM-on-crypt is the recommended approach.
15. Check the encryption information from the portal:
Follow the next steps to verify your changes.
![Screenshot showing the code that verifies that the LVM layer is on top of the encrypted layer. The result is highlighted.](./media/disk-encryption/resize-lvm/038-resize-lvm-scenarioe-lsblk3.png)
- If you use `lsblk` without options, then you see the mount points multiple times. The command sorts by device and LVs.
+ If you use `lsblk` without options, then you see the mount points multiple times. The command sorts by device and LVs.
You might want to use `lsblk -fs`. In this command, `-fs` reverses the sort order so that the mount points are shown once. The disks are shown multiple times.
Follow the next steps to verify your changes.
![Screenshot showing the code that checks the size of disks. The results are highlighted.](./media/disk-encryption/resize-lvm/045-resize-lvm-scenariof-fdisk01.png)
-7. Resize the data disk. You can use the portal, CLI, or PowerShell. For more information, see the disk-resize section in [Expand virtual hard disks on a Linux VM](expand-disks.md#expand-an-azure-managed-disk).
+7. Resize the data disk. You can use the portal, CLI, or PowerShell. For more information, see the disk-resize section in [Expand virtual hard disks on a Linux VM](expand-disks.md#expand-an-azure-managed-disk).
>[!IMPORTANT] >You can't resize virtual disks while the VM is running. Deallocate your VM for this step.
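As a hedged sketch of that flow with the Azure CLI (the resource group, VM, disk names, and target size are placeholders):

```azurecli-interactive
# Deallocate the VM, grow the managed data disk, then start the VM again.
az vm deallocate --resource-group myResourceGroup --name myVM
az disk update --resource-group myResourceGroup --name myDataDisk --size-gb 256
az vm start --resource-group myResourceGroup --name myVM
```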
virtual-machines How To Verify Encryption Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/how-to-verify-encryption-status.md
Last updated 04/11/2023-+
-# Verify encryption status for Linux
+# Verify encryption status for Linux
-**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
+**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
-The scope of this article is to validate the encryption status of a virtual machine by using different methods: the Azure portal, PowerShell, the Azure CLI, or the operating system of the virtual machine (VM).
+The scope of this article is to validate the encryption status of a virtual machine by using different methods: the Azure portal, PowerShell, the Azure CLI, or the operating system of the virtual machine (VM).
You can validate the encryption status during or after the encryption, by either: -- Checking the disks attached to a particular VM.
+- Checking the disks attached to a particular VM.
- Querying the encryption settings on each disk, whether the disk is attached or unattached. This scenario applies for Azure Disk Encryption dual-pass and single-pass extensions. Linux distributions are the only environment for this scenario.
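For a quick command-line check, a minimal sketch using the Azure CLI (the resource group and VM names are placeholders):

```azurecli-interactive
# Show the Azure Disk Encryption status reported for the VM's OS and data disks.
az vm encryption show --resource-group myResourceGroup --name myVM
```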
->[!NOTE]
+>[!NOTE]
>We're using variables throughout the article. Replace the values accordingly. ## Portal
Another way to validate the encryption status is by looking at the **Disk settin
![Encryption status for OS disk and data disks](./media/disk-encryption/verify-encryption-linux/portal-check-004.png)
->[!NOTE]
+>[!NOTE]
> This status means the disks have encryption settings stamped, not that they were actually encrypted at the OS level. >
-> By design, the disks are stamped first and encrypted later. If the encryption process fails, the disks may end up stamped but not encrypted.
+> By design, the disks are stamped first and encrypted later. If the encryption process fails, the disks may end up stamped but not encrypted.
> > To confirm if the disks are truly encrypted, you can double check the encryption of each disk at the OS level.
In a single pass, the encryption settings are stamped on each of the disks (OS a
$RGNAME = "RGNAME" $VMNAME = "VMNAME"
-$VM = Get-AzVM -Name ${VMNAME} -ResourceGroupName ${RGNAME}
+$VM = Get-AzVM -Name ${VMNAME} -ResourceGroupName ${RGNAME}
$Sourcedisk = Get-AzDisk -ResourceGroupName ${RGNAME} -DiskName $VM.StorageProfile.OsDisk.Name Write-Host "=============================================================================================================================================================" Write-Host "Encryption Settings:"
sudo lsblk
![OS crypt layer for a partition](./media/disk-encryption/verify-encryption-linux/verify-os-crypt-layer.png)
-You can get more details by using the following **lsblk** variant.
+You can get more details by using the following **lsblk** variant.
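One such variant is shown below as a sketch; the exact columns you include are up to you:

```bash
# Include the file system type so crypto_LUKS layers and the crypt devices on top of them are visible.
sudo lsblk -o NAME,TYPE,FSTYPE,SIZE,MOUNTPOINT
```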
You'll see a **crypt** type layer that is mounted by the extension. The following example shows logical volumes and normal disks having **crypto\_LUKS FSTYPE**.
virtual-machines Image Builder Devops Task https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/image-builder-devops-task.md
Last updated 07/31/2023
-+ ms.devlang: azurecli
Before you begin, you must:
1. Select **Release Pipeline** > **Edit**.
-1. On the User Agent, select the plus sign (+) to add and search for **Image Builder**.
+1. On the User Agent, select the plus sign (+) to add and search for **Image Builder**.
1. Select **Add**.
The source images must be of the supported VM Image Builder operating systems. Y
If you need to get the latest Compute Gallery version, use an Azure PowerShell or Azure CLI task to get it and set a DevOps variable. Use the variable in the VM Image Builder DevOps task. For more information, see the examples in [Get the latest image version resource ID](https://github.com/danielsollondon/azvmimagebuilder/tree/master/solutions/8_Getting_Latest_SIG_Version_ResID#getting-the-latest-image-version-resourceid-from-shared-image-gallery).
-* (Marketplace) Base image: Use the dropdown list of popular images, which always uses the latest version of the supported operating systems.
+* (Marketplace) Base image: Use the dropdown list of popular images, which always uses the latest version of the supported operating systems.
If the base image isn't in the list, you can specify the exact image by using `Publisher:Offer:Sku`.
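For example, an Ubuntu Marketplace image could be referenced with a value like the following (a hedged example; confirm the exact publisher, offer, and SKU for your subscription with `az vm image list`):

```text
Canonical:0001-com-ubuntu-server-jammy:22_04-lts-gen2
```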
The following example explains how this works:
```azurepowershell-interactive # Clean up buildArtifacts directory Remove-Item -Path "C:\buildArtifacts\*" -Force -Recurse
-
+ # Delete the buildArtifacts directory
- Remove-Item -Path "C:\buildArtifacts" -Force
+ Remove-Item -Path "C:\buildArtifacts" -Force
``` * For Linux: The build artifacts are put into the */tmp* directory. However, on many Linux operating systems, the */tmp* directory contents are deleted on reboot. We suggest that you use code to remove the contents and not rely on the operating system to remove the contents. For example:
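A minimal sketch of that cleanup, assuming the artifacts were staged under a hypothetical */tmp/buildArtifacts* directory:

```bash
# Remove staged build artifacts explicitly instead of relying on /tmp being cleared at reboot.
sudo rm -rf /tmp/buildArtifacts
```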
The task uses the properties that are passed to the task to create the VM Image
* Downloads the build artifact zip file and any other associated scripts. The files are saved in a storage account in the temporary VM Image Builder resource group `IT_<DestinationResourceGroup>_<TemplateName>`.
-* Creates a template that's prefixed with *t_* and a 10-digit monotonic integer. The template is saved to the resource group that you selected, and it exists for the duration of the build in the resource group.
+* Creates a template that's prefixed with *t_* and a 10-digit monotonic integer. The template is saved to the resource group that you selected, and it exists for the duration of the build in the resource group.
Example output:
You'll see an error in the DevOps log for the VM Image Builder task, and the mes
:::image type="content" source="./media/image-builder-devops-task/devops-task-error.png" alt-text="Screenshot of an example DevOps task error that describes the failure and provides the location of the customization.log file.":::
-For more information, see [Troubleshoot the VM Image Builder service](image-builder-troubleshoot.md).
+For more information, see [Troubleshoot the VM Image Builder service](image-builder-troubleshoot.md).
After you've investigated the failure, you can delete the staging resource group. First, delete the VM Image Builder template resource artifact. The artifact is prefixed with *t_*, and you can find it in the DevOps task build log:
virtual-machines Image Builder Gallery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/image-builder-gallery.md
Last updated 11/10/2023
-+ # Create a Linux image and distribute it to an Azure Compute Gallery
This article shows you how you can use the Azure Image Builder, and the Azure CLI, to create an image version in an [Azure Compute Gallery](../shared-image-galleries.md) (formerly known as Shared Image Gallery), then distribute the image globally. You can also do this using [Azure PowerShell](../windows/image-builder-gallery.md).
-We'll be using a sample .json template to configure the image. The .json file we're using is here: [helloImageTemplateforSIG.json](https://github.com/azure/azvmimagebuilder/blob/master/quickquickstarts/1_Creating_a_Custom_Linux_Shared_Image_Gallery_Image/helloImageTemplateforSIG.json).
+We'll be using a sample .json template to configure the image. The .json file we're using is here: [helloImageTemplateforSIG.json](https://github.com/azure/azvmimagebuilder/blob/master/quickquickstarts/1_Creating_a_Custom_Linux_Shared_Image_Gallery_Image/helloImageTemplateforSIG.json).
To distribute the image to an Azure Compute Gallery, the template uses [sharedImage](image-builder-json.md#distribute-sharedimage) as the value for the `distribute` section of the template.
sed -i -e "s%<imgBuilderId>%$imgBuilderId%g" helloImageTemplateforSIG.json
## Create the image version
-This next part will create the image version in the gallery.
+This next part will create the image version in the gallery.
Submit the image configuration to the Azure Image Builder service.
az resource invoke-action \
--resource-group $sigResourceGroup \ --resource-type Microsoft.VirtualMachineImages/imageTemplates \ -n helloImageTemplateforSIG01 \
- --action Run
+ --action Run
``` Creating the image and replicating it to both regions can take a while. Wait until this part is finished before moving on to creating a VM.
az sig image-version delete \
--gallery-name $sigName \ --gallery-image-definition $imageDefName \ --subscription $subscriptionID
-```
+```
Delete the image definition.
virtual-machines Image Builder Json https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/image-builder-json.md
Last updated 10/03/2023
-+ # Create an Azure Image Builder Bicep or ARM template JSON template
Customize properties:
### File customizer
-The `File` customizer lets Image Builder download a file from a GitHub repo or Azure storage. The customizer supports both Linux and Windows. If you have an image build pipeline that relies on build artifacts, you can set the file customizer to download from the build share, and move the artifacts into the image.
+The `File` customizer lets Image Builder download a file from a GitHub repo or Azure storage. The customizer supports both Linux and Windows. If you have an image build pipeline that relies on build artifacts, you can set the file customizer to download from the build share, and move the artifacts into the image.
# [JSON](#tab/json)
The **versioning** property is for the `sharedImage` distribute type only. It's
- **latest** - New strictly increasing schema per design - **source** - Schema based upon the version number of the source image.
-The default version numbering schema is `latest`. The latest schema has an additional property, "major" which specifies the major version under which to generate the latest version.
+The default version numbering schema is `latest`. The latest schema has an additional property, "major" which specifies the major version under which to generate the latest version.
> [!NOTE] > The existing version generation logic for `sharedImage` distribution is deprecated. Two new options are provided: monotonically increasing versions that are always the latest version in a gallery, and versions generated based on the version number of the source image. The enum specifying the version generation schema allows for expansion in the future with additional version generation schemas.
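As a hedged sketch, the `latest` schema with a pinned major version might look like this inside a `sharedImage` distribute target (property casing and names should be confirmed against the current template reference):

```json
"versioning": {
    "scheme": "Latest",
    "major": 1
}
```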
The `optimize` property can be enabled while creating a VM image and allows VM o
# [JSON](#tab/json) ```json
-"optimize": {
- "vmboot": {
- "state": "Enabled"
+"optimize": {
+ "vmboot": {
+ "state": "Enabled"
} } ```
The following JSON sets the source image as an image stored in a [Direct Shared
```json source: {
- "type": "SharedImageVersion",
- "imageVersionId": "<replace with resourceId of the image stored in the Direct Shared Gallery>"
+ "type": "SharedImageVersion",
+ "imageVersionId": "<replace with resourceId of the image stored in the Direct Shared Gallery>"
}, ```
properties: {
- **The stagingResourceGroup property is specified with a resource group that exists** If the `stagingResourceGroup` property is specified with a resource group that does exist, then the Image Builder service checks to make sure the resource group isn't associated with another image template, is empty (no resources inside), in the same region as the image template, and has either "Contributor" or "Owner" RBAC applied to the identity assigned to the Azure Image Builder image template resource. If any of the aforementioned requirements aren't met, an error is thrown. The staging resource group has the following tags added to it: `usedBy`, `imageTemplateName`, `imageTemplateResourceGroupName`. Pre-existing tags aren't deleted.
-
+ > [!IMPORTANT] > You will need to assign the contributor role to the resource group for the service principal corresponding to Azure Image Builder's first party app when trying to specify a pre-existing resource group and VNet to the Azure Image Builder service with a Windows source image. For the CLI command and portal instructions on how to assign the contributor role to the resource group see the following documentation [Troubleshoot VM Azure Image Builder: Authorization error creating disk](./image-builder-troubleshoot.md#authorization-error-creating-disk)
virtual-machines Image Builder Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/image-builder-troubleshoot.md
Last updated 11/27/2023
-+ # Troubleshoot Azure VM Image Builder
-**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
+**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
Use this article to troubleshoot and resolve common issues that you might encounter when you're using Azure VM Image Builder.
Get-AzImageBuilderTemplate -ImageTemplateName <imageTemplateName> -ResourceGrou
### **Error output for version 2020-02-14 and earlier** ```output
-{
+{
"code": "ValidationFailed",
- "message": "Validation failed: 'ImageTemplate.properties.source': Field 'imageId' has a bad value: '/subscriptions/subscriptionID/resourceGroups/resourceGroupName/providers/Microsoft.Compute/images/imageName'. Please review http://aka.ms/azvmimagebuildertmplref for details on fields requirements in the Image Builder Template."
-}
+ "message": "Validation failed: 'ImageTemplate.properties.source': Field 'imageId' has a bad value: '/subscriptions/subscriptionID/resourceGroups/resourceGroupName/providers/Microsoft.Compute/images/imageName'. Please review http://aka.ms/azvmimagebuildertmplref for details on fields requirements in the Image Builder Template."
+}
``` ### **Error output for version 2021-10-01 and later** ```output
-{
+{
"error": {
- "code": "ValidationFailed",
- "message": "Validation failed: 'ImageTemplate.properties.source': Field 'imageId' has a bad value: '/subscriptions/subscriptionID/resourceGroups/resourceGroupName/providers/Microsoft.Compute/images/imageName'. Please review http://aka.ms/azvmimagebuildertmplref for details on fields requirements in the Image Builder Template."
+ "code": "ValidationFailed",
+ "message": "Validation failed: 'ImageTemplate.properties.source': Field 'imageId' has a bad value: '/subscriptions/subscriptionID/resourceGroups/resourceGroupName/providers/Microsoft.Compute/images/imageName'. Please review http://aka.ms/azvmimagebuildertmplref for details on fields requirements in the Image Builder Template."
} } ```
The assigned managed identity cannot be used. Please remove the existing one and
#### Cause
-There are cases where [Managed Service Identities (MSI)](/azure/virtual-machines/linux/image-builder-permissions-cli#create-a-user-assigned-managed-identity) assigned to the image template cannot be used:
+There are cases where [Managed Service Identities (MSI)](/azure/virtual-machines/linux/image-builder-permissions-cli#create-a-user-assigned-managed-identity) assigned to the image template cannot be used:
- The Image Builder template uses a customer provided staging resource group and the MSI is deleted before the image template is deleted ([staging resource group](./image-builder-json.md#properties-stagingresourcegroup) scenario)
Depending on your scenario, VM Image Builder might need permissions to:
- The source image or Azure Compute Gallery (formerly Shared Image Gallery) resource group. - The distribution image or Azure Compute Gallery resource.-- The storage account, container, or blob that the `File` customizer is accessing.
+- The storage account, container, or blob that the `File` customizer is accessing.
Also, ensure the staging resource group name is uniquely specified for each image template.
For more information about configuring permissions, see [Configure VM Image Buil
```output Build (Managed Image) step failed: Error getting Managed Image '/subscriptions/.../providers/Microsoft.Compute/images/mymanagedmg1': Error getting managed image (...): compute. ImagesClient#Get: Failure responding to request: StatusCode=403 -- Original Error: autorest/azure: Service returned an error.
-Status=403 Code="AuthorizationFailed" Message="The client '......' with object id '......' doesn't have authorization to perform action 'Microsoft.Compute/images/read' over scope
+Status=403 Code="AuthorizationFailed" Message="The client '......' with object id '......' doesn't have authorization to perform action 'Microsoft.Compute/images/read' over scope
``` #### Cause
For more information about configuring permissions, see [Configure VM Image Buil
#### Error ```output
-Build (Shared Image Version) step failed for Image Version '/subscriptions/.../providers/Microsoft.Compute/galleries/.../images/... /versions/0.23768.4001': Error getting Image Version '/subscriptions/.../resourceGroups/<rgName>/providers/Microsoft.Compute/galleries/.../images/.../versions/0.23768.4001': Error getting image version '... :0.23768.4001': compute.GalleryImageVersionsClient#Get: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error.
+Build (Shared Image Version) step failed for Image Version '/subscriptions/.../providers/Microsoft.Compute/galleries/.../images/... /versions/0.23768.4001': Error getting Image Version '/subscriptions/.../resourceGroups/<rgName>/providers/Microsoft.Compute/galleries/.../images/.../versions/0.23768.4001': Error getting image version '... :0.23768.4001': compute.GalleryImageVersionsClient#Get: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error.
Status=404 Code="ResourceNotFound" Message="The Resource 'Microsoft.Compute/galleries/.../images/.../versions/0.23768.4001' under resource group '<rgName>' was not found." ```
The Azure Image Builder build fails with an authorization error that looks like
#### Error ```output
-Attempting to deploy created Image template in Azure fails with an 'The client '6df325020-fe22-4e39-bd69-10873965ac04' with object id '6df325020-fe22-4e39-bd69-10873965ac04' does not have authorization to perform action 'Microsoft.Compute/disks/write' over scope '/subscriptions/<subscriptionID>/resourceGroups/<resourceGroupName>/providers/Microsoft.Compute/disks/proxyVmDiskWin_<timestamp>' or the scope is invalid. If access was recently granted, please refresh your credentials.'
+Attempting to deploy created Image template in Azure fails with an 'The client '6df325020-fe22-4e39-bd69-10873965ac04' with object id '6df325020-fe22-4e39-bd69-10873965ac04' does not have authorization to perform action 'Microsoft.Compute/disks/write' over scope '/subscriptions/<subscriptionID>/resourceGroups/<resourceGroupName>/providers/Microsoft.Compute/disks/proxyVmDiskWin_<timestamp>' or the scope is invalid. If access was recently granted, please refresh your credentials.'
``` #### Cause
-This error is caused when trying to specify a pre-existing resource group and VNet to the Azure Image Builder service with a Windows source image.
+This error is caused when trying to specify a pre-existing resource group and VNet to the Azure Image Builder service with a Windows source image.
#### Solution
az ad sp show --id {servicePrincipalName, or objectId}
Then, to implement this solution using CLI, use the following command: ```azurecli-interactive
-az role assignment create -g {ResourceGroupName} --assignee {AibrpSpOid} --role Contributor
+az role assignment create -g {ResourceGroupName} --assignee {AibrpSpOid} --role Contributor
``` To implement this solution in portal, follow the instructions in this documentation: [Assign Azure roles using the Azure portal - Azure RBAC](../../role-based-access-control/role-assignments-portal.md).
-For [Step 1: Identify the needed scope](../../role-based-access-control/role-assignments-portal.md#step-1-identify-the-needed-scope): The needed scope is your resource group.
+For [Step 1: Identify the needed scope](../../role-based-access-control/role-assignments-portal.md#step-1-identify-the-needed-scope): The needed scope is your resource group.
-For [Step 3: Select the appropriate role](../../role-based-access-control/role-assignments-portal.md#step-3-select-the-appropriate-role): The role is Contributor.
+For [Step 3: Select the appropriate role](../../role-based-access-control/role-assignments-portal.md#step-3-select-the-appropriate-role): The role is Contributor.
For [Step 4: Select who needs access](../../role-based-access-control/role-assignments-portal.md#step-4-select-who-needs-access): Select member “Azure Virtual Machine Image Builder”
The `customization.log` file includes the following stages:
(telemetry) ending file (telemetry) Starting provisioner windows-restart (telemetry) ending windows-restart
-
+ (telemetry) Finalizing. - This means the build has finished ```
-1. *Deprovision* stage. VM Image Builder adds a hidden customizer. This deprovision step is responsible for preparing the VM for deprovisioning. In Windows, it runs `Sysprep` (by using *c:\DeprovisioningScript.ps1*). In Linux, it runs `waagent-deprovision` (by using /tmp/DeprovisioningScript.sh).
+1. *Deprovision* stage. VM Image Builder adds a hidden customizer. This deprovision step is responsible for preparing the VM for deprovisioning. In Windows, it runs `Sysprep` (by using *c:\DeprovisioningScript.ps1*). In Linux, it runs `waagent-deprovision` (by using /tmp/DeprovisioningScript.sh).
For example:
Customization failure.
#### Solution
-Review the log to locate customizer failures. Search for *(telemetry)*.
+Review the log to locate customizer failures. Search for *(telemetry)*.
For example:
The build exceeded the build time-out. This error is seen in the 'lastrunstatus'
#### Error ```text
-[086cf9c4-0457-4e8f-bfd4-908cfe3fe43c] PACKER OUT
+[086cf9c4-0457-4e8f-bfd4-908cfe3fe43c] PACKER OUT
myBigFile.zip 826 B / 826000 B 1.00%
-[086cf9c4-0457-4e8f-bfd4-908cfe3fe43c] PACKER OUT
+[086cf9c4-0457-4e8f-bfd4-908cfe3fe43c] PACKER OUT
myBigFile.zip 1652 B / 826000 B 2.00%
-[086cf9c4-0457-4e8f-bfd4-908cfe3fe43c] PACKER OUT
+[086cf9c4-0457-4e8f-bfd4-908cfe3fe43c] PACKER OUT
.. hours later... .. myBigFile.zip 826000 B / 826000 B 100.00%
-[086cf9c4-0457-4e8f-bfd4-908cfe3fe43c] PACKER OUT
+[086cf9c4-0457-4e8f-bfd4-908cfe3fe43c] PACKER OUT
```
-#### Cause
+#### Cause
`File` customizer is downloading a large file.
myBigFile.zip 826000 B / 826000 B 100.00%
[864c0337-b300-48ab-8e8e-7894bc695b7c] PACKER ERR 2023/06/13 08:46:26 machine readable: azure-arm,error []string{"Timeout waiting for machine to restart."} [864c0337-b300-48ab-8e8e-7894bc695b7c] PACKER OUT --> azure-arm: Timeout waiting for machine to restart. [864c0337-b300-48ab-8e8e-7894bc695b7c] PACKER ERR ==> Builds finished but no artifacts were created.
-[864c0337-b300-48ab-8e8e-7894bc695b7c] PACKER OUT
+[864c0337-b300-48ab-8e8e-7894bc695b7c] PACKER OUT
[864c0337-b300-48ab-8e8e-7894bc695b7c] PACKER ERR 2023/06/13 08:46:26 [INFO] (telemetry) Finalizing. [864c0337-b300-48ab-8e8e-7894bc695b7c] PACKER OUT ==> Builds finished but no artifacts were created. ```
Increase the value of `buildTimeoutInMinutes`.
[45f485cf-5a8c-4379-9937-8d85493bc791] PACKER ERR 2020/04/30 23:38:59 packer-provisioner-windows-update: 2020/04/30 23:38:59 [INFO] RPC client: Communicator ended with: 1115 [45f485cf-5a8c-4379-9937-8d85493bc791] PACKER ERR 2020/04/30 23:38:59 packer-provisioner-windows-update: 2020/04/30 23:38:59 Retryable error: Machine not yet available (exit status 1115) [45f485cf-5a8c-4379-9937-8d85493bc791] PACKER OUT Build 'azure-arm' errored: unexpected EOF
-[45f485cf-5a8c-4379-9937-8d85493bc791] PACKER OUT
+[45f485cf-5a8c-4379-9937-8d85493bc791] PACKER OUT
``` #### Cause
Increase the build VM size.
```text [<log_id>] PACKER 2023/09/14 19:01:18 ui: Build 'azure-arm' finished after 3 minutes 13 seconds.
-[<log_id>] PACKER 2023/09/14 19:01:18 ui:
+[<log_id>] PACKER 2023/09/14 19:01:18 ui:
[<log_id>] PACKER ==> Wait completed after 3 minutes 13 seconds
-[<log_id>] PACKER 2023/09/14 19:01:18 ui:
+[<log_id>] PACKER 2023/09/14 19:01:18 ui:
[<log_id>] PACKER ==> Builds finished but no artifacts were created. [<log_id>] PACKER 2023/09/14 19:01:18 [INFO] (telemetry) Finalizing. [<log_id>] PACKER 2023/09/14 19:01:19 waiting for all plugin processes to complete...
The above warning can safely be ignored.
```text [<log_id>] PACKER 2023/09/14 19:00:18 ui: ==> azure-arm: -> Snapshot ID : '/subscriptions/<subscription_id>/resourceGroups/<resourcegroup_name>/providers/Microsoft.Compute/snapshots/<snapshot_name>' [<log_id>] PACKER 2023/09/14 19:00:18 ui: ==> azure-arm: Skipping image creation...
-[<log_id>] PACKER 2023/09/14 19:00:18 ui: ==> azure-arm:
+[<log_id>] PACKER 2023/09/14 19:00:18 ui: ==> azure-arm:
[<log_id>] PACKER ==> azure-arm: Deleting individual resources ... [<log_id>] PACKER 2023/09/14 19:00:18 packer-plugin-azure plugin: 202 ```
Missing permissions.
#### Solution
-Recheck to ensure that VM Image Builder has all the permissions it requires.
+Recheck to ensure that VM Image Builder has all the permissions it requires.
For more information about configuring permissions, see [Configure VM Image Builder permissions by using the Azure CLI](image-builder-permissions-cli.md) or [Configure VM Image Builder permissions by using PowerShell](image-builder-permissions-powershell.md).
For more information about configuring permissions, see [Configure VM Image Buil
[922bdf36-b53c-4e78-9cd8-6b70b9674685] PACKER ERR 2020/05/05 22:26:17 Cancelling builder after context cancellation context canceled [922bdf36-b53c-4e78-9cd8-6b70b9674685] PACKER OUT Cancelling build after receiving terminated [922bdf36-b53c-4e78-9cd8-6b70b9674685] PACKER ERR 2020/05/05 22:26:17 packer: 2020/05/05 22:26:17 Cancelling provisioning due to context cancellation: context canceled
-[922bdf36-b53c-4e78-9cd8-6b70b9674685] PACKER OUT ==> azure-arm:
+[922bdf36-b53c-4e78-9cd8-6b70b9674685] PACKER OUT ==> azure-arm:
[922bdf36-b53c-4e78-9cd8-6b70b9674685] PACKER ERR 2020/05/05 22:26:17 packer: 2020/05/05 22:26:17 Cancelling hook after context cancellation context canceled [922bdf36-b53c-4e78-9cd8-6b70b9674685] PACKER OUT ==> azure-arm: The resource group was not created by Packer, deleting individual resources ... [922bdf36-b53c-4e78-9cd8-6b70b9674685] PACKER ERR ==> azure-arm: The resource group was not created by Packer, deleting individual resources ...
Early in the build process, the build fails and the log indicates a JSON Web Tok
```text PACKER OUT Error: Failed to prepare build: "azure-arm"
-PACKER ERR
-PACKER OUT
+PACKER ERR
+PACKER OUT
PACKER ERR * client_jwt will expire within 5 minutes, please use a JWT that is valid for at least 5 minutes PACKER OUT 1 error(s) occurred: ```
Making these observations is especially important in build failures, where these
When images are stuck in template deletion, the customization log might show the below error: ```output
-error deleting resource id /subscriptions/<subscriptionID>/resourceGroups/<rgName>/providers/Microsoft.Network/networkInterfaces/<networkInterfacName>: resources.Client#DeleteByID: Failure sending request: StatusCode=400 --
+error deleting resource id /subscriptions/<subscriptionID>/resourceGroups/<rgName>/providers/Microsoft.Network/networkInterfaces/<networkInterfacName>: resources.Client#DeleteByID: Failure sending request: StatusCode=400 --
Original Error: Code="NicInUseWithPrivateEndpoint" Message="Network interface /subscriptions/<subscriptionID>/resourceGroups/<rgName>/providers/Microsoft.Network/networkInterfaces/<networkInterfacName> cannot be deleted because it is currently in use with an private endpoint (/subscriptions/<subscriptionID>/resourceGroups/<rgName>/providers/Microsoft.Network/privateEndpoints/<pIname>)." Details=[] ```
To resolve the issue, delete the below resources one by one in the specific orde
For additional assistance, you can [contact Azure support](/azure/azure-portal/supportability/how-to-create-azure-support-request) to resolve the stuck deletion error.
-### Distribute target not found in the update request
+### Distribute target not found in the update request
-#### Error
+#### Error
```text Validation failed: Distribute target with Runoutput name <runoutputname> not found in the update request. Deleting a distribution target is not allowed. ``` #### Cause
-This error occurs when an existing distribute target isn't found in the Patch request body.
+This error occurs when an existing distribute target isn't found in the Patch request body.
-#### Solution
+#### Solution
-The distribution array should contain all the distribution targets that is, new targets (if any), existing targets with no change and updated targets. If you want to remove an existing distribution target, delete and re-create the image template as deleting a distribution target is currently not supported through the Patch API.
+The distribution array should contain all the distribution targets, that is, new targets (if any), existing targets with no change, and updated targets. If you want to remove an existing distribution target, delete and re-create the image template, because deleting a distribution target is currently not supported through the Patch API.
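As a hedged illustration, a Patch request body that keeps an existing target and adds a new one might carry a distribute array like the following (IDs, names, and regions are placeholders; confirm the exact schema in the Image Builder template reference):

```json
"distribute": [
    {
        "type": "SharedImage",
        "runOutputName": "<existingRunOutputName>",
        "galleryImageId": "/subscriptions/<subscriptionID>/resourceGroups/<rgName>/providers/Microsoft.Compute/galleries/<galleryName>/images/<imageDefName>",
        "replicationRegions": [ "westus2" ]
    },
    {
        "type": "SharedImage",
        "runOutputName": "<newRunOutputName>",
        "galleryImageId": "/subscriptions/<subscriptionID>/resourceGroups/<rgName>/providers/Microsoft.Compute/galleries/<galleryName>/images/<newImageDefName>",
        "replicationRegions": [ "westus2", "eastus" ]
    }
]
```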
-### Missing required fields
+### Missing required fields
-#### Error
+#### Error
```text Validation failed: 'ImageTemplate.properties.distribute[<index>]': Missing field <fieldname>. Please review http://aka.ms/azvmimagebuildertmplref for details on fields required in the Image Builder Template.
-```
+```
#### Cause
-This error occurs when a required field is missing from a distribute target.
+This error occurs when a required field is missing from a distribute target.
-#### Solution
+#### Solution
When creating a request, please provide every required field in a distribute target even if there's no change.
When creating a request, please provide every required field in a distribute tar
The task fails only if an error occurs during customization. When this happens, the task reports the failure and leaves the staging resource group, with the logs, so that you can identify the issue.
-To locate the log, you need to know the template name. Go to **pipeline** > **failed build**, and then drill down into the VM Image Builder DevOps task.
+To locate the log, you need to know the template name. Go to **pipeline** > **failed build**, and then drill down into the VM Image Builder DevOps task.
You'll see the log and a template name:
For more information about Azure DevOps capabilities and limitations, see [Micro
#### Solution
-You can host your own DevOps agents or look to reduce the time of your build. For example, if you're distributing to Azure Compute Gallery, you can replicate them to one region or replicate them asynchronously.
+You can host your own DevOps agents or look to reduce the time of your build. For example, if you're distributing to Azure Compute Gallery, you can replicate them to one region or replicate them asynchronously.
### Slow Windows logon
Please wait for the Windows Modules Installer
1. In the image build, check to ensure that: - There are no outstanding reboots required by adding a Windows Restart customizer as the last customization.
- - All software installation is complete.
+ - All software installation is complete.
-1. Add the [/mode:vm](/windows-hardware/manufacture/desktop/sysprep-command-line-options) option to the default `Sysprep` that VM Image Builder uses. For more information, go to the ["Override the commands"](#override-the-commands) section under "VMs created from VM Image Builder images aren't created successfully."
+1. Add the [/mode:vm](/windows-hardware/manufacture/desktop/sysprep-command-line-options) option to the default `Sysprep` that VM Image Builder uses. For more information, go to the ["Override the commands"](#override-the-commands) section under "VMs created from VM Image Builder images aren't created successfully."
## VMs created from VM Image Builder images aren't created successfully
-By default, VM Image Builder runs *deprovision* code at the end of each image customization phase to *generalize* the image. To generalize an image is to set it up to reuse to create multiple VMs. As part of the process, you can pass in VM settings, such as hostname, username, and so on. In Windows, VM Image Builder runs `Sysprep`, and in Linux, VM Image Builder runs `waagent -deprovision`.
+By default, VM Image Builder runs *deprovision* code at the end of each image customization phase to *generalize* the image. To generalize an image is to set it up so it can be reused to create multiple VMs. As part of the process, you can pass in VM settings, such as hostname, username, and so on. In Windows, VM Image Builder runs `Sysprep`, and in Linux, VM Image Builder runs `waagent -deprovision`.
In Windows, VM Image Builder uses a generic `Sysprep` command. However, this command might not be suitable for every successful Windows generalization. With VM Image Builder, you can customize the `Sysprep` command. Note that VM Image Builder is an image automation tool that's responsible for running the `Sysprep` command successfully. But you might need different `Sysprep` commands to make your image reusable. In Linux, VM Image Builder uses a generic `waagent -deprovision+user` command. For more information, see [Microsoft Azure Linux Agent documentation](https://github.com/Azure/WALinuxAgent#command-line-options).
In Linux:
### The `Sysprep` command: Windows
-```azurepowershell-interactive
+```azurepowershell-interactive
Write-Output '>>> Waiting for GA Service (RdAgent) to start ...' while ((Get-Service RdAgent).Status -ne 'Running') { Start-Sleep -s 5 } Write-Output '>>> Waiting for GA Service (WindowsAzureTelemetryService) to start ...'
virtual-machines Image Builder https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/image-builder.md
Last updated 11/10/2023
-+ # Create a Linux image and distribute it to an Azure Compute Gallery by using the Azure CLI
-**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
+**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
In this article, you learn how to use Azure VM Image Builder and the Azure CLI to create an image version in an [Azure Compute Gallery](../shared-image-galleries.md) (formerly Shared Image Gallery) and then distribute the image globally. You can also create an image version by using [Azure PowerShell](../windows/image-builder-gallery.md).
-This article uses a sample JSON template to configure the image. The JSON file is at [helloImageTemplateforSIG.json](https://github.com/danielsollondon/azvmimagebuilder/blob/master/quickquickstarts/1_Creating_a_Custom_Linux_Shared_Image_Gallery_Image/helloImageTemplateforSIG.json).
+This article uses a sample JSON template to configure the image. The JSON file is at [helloImageTemplateforSIG.json](https://github.com/danielsollondon/azvmimagebuilder/blob/master/quickquickstarts/1_Creating_a_Custom_Linux_Shared_Image_Gallery_Image/helloImageTemplateforSIG.json).
To distribute the image to an Azure Compute Gallery, the template uses [sharedImage](image-builder-json.md#distribute-sharedimage) as the value for the `distribute` section of the template.
az role assignment create \
To use VM Image Builder with Azure Compute Gallery, you need to have an existing gallery and image definition. VM Image Builder doesn't create the gallery and image definition for you.
-If you don't already have a gallery and image definition to use, start by creating them.
+If you don't already have a gallery and image definition to use, start by creating them.
First, create a gallery:
sed -i -e "s%<imgBuilderId>%$imgBuilderId%g" helloImageTemplateforSIG.json
## Create the image version
-In this section you create the image version in the gallery.
+In this section you create the image version in the gallery.
Submit the image configuration to the Azure VM Image Builder service:
az resource invoke-action \
--resource-group $sigResourceGroup \ --resource-type Microsoft.VirtualMachineImages/imageTemplates \ -n helloImageTemplateforSIG01 \
- --action Run
+ --action Run
``` It can take a few moments to create the image and replicate it to both regions. Wait until this part is finished before you move on to create a VM.
virtual-machines Mac Create Ssh Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/mac-create-ssh-keys.md
Title: Create and use an SSH key pair for Linux VMs in Azure
+ Title: Create and use an SSH key pair for Linux VMs in Azure
description: How to create and use an SSH public-private key pair for Linux VMs in Azure to improve the security of the authentication process. -+ Last updated 01/02/2024
# Quick steps: Create and use an SSH public-private key pair for Linux VMs in Azure
-**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
+**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
-With a secure shell (SSH) key pair, you can create virtual machines (VMs) in Azure that use SSH keys for authentication. This article shows you how to quickly generate and use an SSH public-private key file pair for Linux VMs. You can complete these steps with the Azure Cloud Shell, a macOS, or a Linux host.
+With a secure shell (SSH) key pair, you can create virtual machines (VMs) in Azure that use SSH keys for authentication. This article shows you how to quickly generate and use an SSH public-private key file pair for Linux VMs. You can complete these steps with the Azure Cloud Shell, a macOS, or a Linux host.
For help with troubleshooting issues with SSH, see [Troubleshoot SSH connections to an Azure Linux VM that fails, errors out, or is refused](/troubleshoot/azure/virtual-machines/troubleshoot-ssh-connection). > [!NOTE]
-> VMs created using SSH keys are by default configured with passwords disabled, which greatly increases the difficulty of brute-force guessing attacks.
+> VMs created using SSH keys are by default configured with passwords disabled, which greatly increases the difficulty of brute-force guessing attacks.
For more background and examples, see [Detailed steps to create SSH key pairs](create-ssh-keys-detailed.md).
ssh-keygen -m PEM -t rsa -b 4096
If you use the [Azure CLI](/cli/azure) to create your VM with the [az vm create](/cli/azure/vm#az-vm-create) command, you can optionally generate SSH public and private key files using the `--generate-ssh-keys` option. The key files are stored in the ~/.ssh directory unless specified otherwise with the `--ssh-dest-key-path` option. If an ssh key pair already exists and the `--generate-ssh-keys` option is used, a new key pair won't be generated but instead the existing key pair will be used. In the following command, replace *VMname*, *RGname* and *UbuntuLTS* with your own values: ```azurecli-interactive
-az vm create --name VMname --resource-group RGname --image Ubuntu2204 --generate-ssh-keys
+az vm create --name VMname --resource-group RGname --image Ubuntu2204 --generate-ssh-keys
``` ## Provide an SSH public key when deploying a VM
virtual-machines Multiple Nics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/multiple-nics.md
Title: Create a Linux VM in Azure with multiple NICs
+ Title: Create a Linux VM in Azure with multiple NICs
description: Learn how to create a Linux VM with multiple NICs attached to it using the Azure CLI or Resource Manager templates. -+ Last updated 04/06/2023 # How to create a Linux virtual machine in Azure with multiple network interface cards
-**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
+**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
This article details how to create a VM with multiple NICs with the Azure CLI.
Azure Resource Manager templates use declarative JSON files to define your envir
} ```
-Read more about [creating multiple instances using *copy*](../../azure-resource-manager/templates/copy-resources.md).
+Read more about [creating multiple instances using *copy*](../../azure-resource-manager/templates/copy-resources.md).
You can also use a `copyIndex()` to then append a number to a resource name, which allows you to create `myNic1`, `myNic2`, etc. The following shows an example of appending the index value: ```json
-"name": "[concat('myNic', copyIndex())]",
+"name": "[concat('myNic', copyIndex())]",
``` You can read a complete example of [creating multiple NICs using Resource Manager templates](../../virtual-network/template-samples.md).
virtual-machines N Series Driver Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/n-series-driver-setup.md
Title: Azure N-series GPU driver setup for Linux
+ Title: Azure N-series GPU driver setup for Linux
description: How to set up NVIDIA GPU drivers for N-series VMs running Linux in Azure
-+ Last updated 04/06/2023
# Install NVIDIA GPU drivers on N-series VMs running Linux
-**Applies to:** :heavy_check_mark: Linux VMs
+**Applies to:** :heavy_check_mark: Linux VMs
To take advantage of the GPU capabilities of Azure N-series VMs backed by NVIDIA GPUs, you must install NVIDIA GPU drivers. The [NVIDIA GPU Driver Extension](../extensions/hpccompute-gpu-linux.md) installs appropriate NVIDIA CUDA or GRID drivers on an N-series VM. Install or manage the extension using the Azure portal or tools such as the Azure CLI or Azure Resource Manager templates. See the [NVIDIA GPU Driver Extension documentation](../extensions/hpccompute-gpu-linux.md) for supported distributions and deployment steps. If you choose to install NVIDIA GPU drivers manually, this article provides supported distributions, drivers, and installation and verification steps. Manual driver setup information is also available for [Windows VMs](../windows/n-series-driver-setup.md).
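As a hedged sketch, installing the extension with the Azure CLI might look like the following (resource names are placeholders; check the linked extension documentation for the current publisher, name, and version values):

```azurecli-interactive
# Install the NVIDIA GPU driver extension on an existing N-series Linux VM.
az vm extension set \
  --resource-group myResourceGroup \
  --vm-name myVM \
  --name NvidiaGpuDriverLinux \
  --publisher Microsoft.HpcCompute
```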
-For N-series VM specs, storage capacities, and disk details, see [GPU Linux VM sizes](../sizes-gpu.md?toc=/azure/virtual-machines/linux/toc.json).
+For N-series VM specs, storage capacities, and disk details, see [GPU Linux VM sizes](../sizes-gpu.md?toc=/azure/virtual-machines/linux/toc.json).
[!INCLUDE [virtual-machines-n-series-linux-support](../../../includes/virtual-machines-n-series-linux-support.md)] ## Install CUDA drivers on N-series VMs
-Here are steps to install CUDA drivers from the NVIDIA CUDA Toolkit on N-series VMs.
+Here are steps to install CUDA drivers from the NVIDIA CUDA Toolkit on N-series VMs.
C and C++ developers can optionally install the full Toolkit to build GPU-accelerated applications. For more information, see the [CUDA Installation Guide](https://docs.nvidia.com/cuda/cuda-installation-guide-linux/https://docsupdatetracker.net/index.html).
lspci lists the PCIe devices on the VM, including the InfiniBand NIC and GPUs, i
Then run installation commands specific for your distribution.
-### Ubuntu
+### Ubuntu
-1. Download and install the CUDA drivers from the NVIDIA website.
+1. Download and install the CUDA drivers from the NVIDIA website.
> [!NOTE]
- > The example shows the CUDA package path for Ubuntu 20.04. Replace the path specific to the version you plan to use.
- >
+ > The example shows the CUDA package path for Ubuntu 20.04. Replace the path specific to the version you plan to use.
+ >
> Visit the [NVIDIA Download Center](https://developer.download.nvidia.com/compute/cuda/repos/) or the [NVIDIA CUDA Resources page](https://developer.nvidia.com/cuda-downloads?target_os=Linux&target_arch=x86_64&Distribution=Ubuntu&target_version=20.04&target_type=deb_network) for the full path specific to each version.
- >
+ >
```bash
- wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/cuda-keyring_1.0-1_all.deb
+ wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/cuda-keyring_1.0-1_all.deb
sudo dpkg -i cuda-keyring_1.0-1_all.deb sudo apt-get update sudo apt-get -y install cuda-drivers
With Secure Boot enabled, all Linux kernel modules are required to be signed by
```bash sudo add-apt-repository "deb https://developer.download.nvidia.com/compute/cuda/repos/$distro/$arch/ /" ```
-
+ where `$distro/$arch` should be replaced by one of the following: ```
With Secure Boot enabled, all Linux kernel modules are required to be signed by
ubuntu2204/arm64 ubuntu2204/x86_64 ```
-
+ If `add-apt-repository` command is not found, run `sudo apt-get install software-properties-common` to install it. 4. Install kernel headers and development packages, and remove outdated signing key
With Secure Boot enabled, all Linux kernel modules are required to be signed by
``` Note: When prompted about different versions of cuda-keyring, select `Y or I : install the package maintainer's version` to proceed.
-
+ 6. Update APT repository cache and install NVIDIA GPUDirect Storage ```bash
With Secure Boot enabled, all Linux kernel modules are required to be signed by
``` 8. Verify NVIDIA CUDA drivers are installed and loaded
-
+ ```bash dpkg -l | grep -i nvidia nvidia-smi
With Secure Boot enabled, all Linux kernel modules are required to be signed by
2. Install the latest [Linux Integration Services for Hyper-V and Azure](https://www.microsoft.com/download/details.aspx?id=55106). Check if LIS is required by verifying the results of lspci. If all GPU devices are listed as expected, installing LIS isn't required.
- LIS is applicable to Red Hat Enterprise Linux, CentOS, and the Oracle Linux Red Hat Compatible Kernel 5.2-5.11, 6.0-6.10, and 7.0-7.7. Refer to the [Linux Integration Services documentation](https://www.microsoft.com/en-us/download/details.aspx?id=55106) for more details.
+ LIS is applicable to Red Hat Enterprise Linux, CentOS, and the Oracle Linux Red Hat Compatible Kernel 5.2-5.11, 6.0-6.10, and 7.0-7.7. Refer to the [Linux Integration Services documentation](https://www.microsoft.com/en-us/download/details.aspx?id=55106) for more details.
Skip this step if you plan to use CentOS/RHEL 7.8 (or higher versions) as LIS is no longer required for these versions. ```bash
With Secure Boot enabled, all Linux kernel modules are required to be signed by
``` 3. Reconnect to the VM and continue installation with the following commands:
-
+ ```bash sudo rpm -Uvh https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm sudo yum-config-manager --add-repo https://developer.download.nvidia.com/compute/cuda/repos/rhel7/x86_64/cuda-rhel7.repo
With Secure Boot enabled, all Linux kernel modules are required to be signed by
sudo yum -y install nvidia-driver-latest-dkms cuda-drivers ```
- The installation can take several minutes.
-
+ The installation can take several minutes.
+ > [!NOTE] > Visit [Fedora](https://dl.fedoraproject.org/pub/epel/) and [Nvidia CUDA repo](https://developer.download.nvidia.com/compute/cuda/repos/) to pick the correct package for the CentOS or RHEL version you want to use.
- >
+ >
   For example, CentOS 8 and RHEL 8 need the following steps.
   ```bash
   sudo rpm -Uvh https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm
   sudo yum install dkms
-
+   sudo wget https://developer.download.nvidia.com/compute/cuda/repos/rhel8/x86_64/cuda-rhel8.repo -O /etc/yum.repos.d/cuda-rhel8.repo
    sudo yum install cuda-drivers
For example, CentOS 8 and RHEL 8 need the following steps.
   ```
   > [!NOTE]
   > If you see an error message about missing packages such as vulkan-filesystem, you may need to edit /etc/yum.repos.d/rh-cloud, look for optional-rpms, and set enabled to 1.
- >
+ >
5. Reboot the VM and proceed to verify the installation. ### Verify driver installation
-To query the GPU device state, SSH to the VM and run the [nvidia-smi](https://developer.nvidia.com/nvidia-system-management-interface) command-line utility installed with the driver.
+To query the GPU device state, SSH to the VM and run the [nvidia-smi](https://developer.nvidia.com/nvidia-system-management-interface) command-line utility installed with the driver.
If the driver is installed, Nvidia SMI lists the **GPU-Util** as 0% until you run a GPU workload on the VM. Your driver version and GPU details may be different from the ones shown.
RDMA network connectivity can be enabled on RDMA-capable N-series VMs such as NC
### Distributions Deploy RDMA-capable N-series VMs from one of the images in the Azure Marketplace that supports RDMA connectivity on N-series VMs:
-
+ * **Ubuntu 16.04 LTS** - Configure RDMA drivers on the VM and register with Intel to download Intel MPI: [!INCLUDE [virtual-machines-common-ubuntu-rdma](../../../includes/virtual-machines-common-ubuntu-rdma.md)]
Deploy RDMA-capable N-series VMs from one of the images in the Azure Marketplace
## Install GRID drivers on NV or NVv3-series VMs
-To install NVIDIA GRID drivers on NV or NVv3-series VMs, make an SSH connection to each VM and follow the steps for your Linux distribution.
+To install NVIDIA GRID drivers on NV or NVv3-series VMs, make an SSH connection to each VM and follow the steps for your Linux distribution.
-### Ubuntu
+### Ubuntu
1. Run the `lspci` command. Verify that the NVIDIA M60 card or cards are visible as PCI devices.
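   For a quick check, you can filter the `lspci` output to NVIDIA entries (standard Linux tooling, not specific to this article):

   ```bash
   # List PCI devices and keep only NVIDIA entries; the M60 card(s) should appear here.
   lspci | grep -i nvidia
   ```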
To install NVIDIA GRID drivers on NV or NVv3-series VMs, make an SSH connection
5. Download and install the GRID driver: ```bash
- wget -O NVIDIA-Linux-x86_64-grid.run https://go.microsoft.com/fwlink/?linkid=874272
+ wget -O NVIDIA-Linux-x86_64-grid.run https://go.microsoft.com/fwlink/?linkid=874272
  chmod +x NVIDIA-Linux-x86_64-grid.run
  sudo ./NVIDIA-Linux-x86_64-grid.run
- ```
+ ```
6. When you're asked whether you want to run the nvidia-xconfig utility to update your X configuration file, select **Yes**.
To install NVIDIA GRID drivers on NV or NVv3-series VMs, make an SSH connection
``` 8. Add the following to `/etc/nvidia/gridd.conf`:
-
+   ```
    IgnoreSP=FALSE
    EnableUI=FALSE
    ```
-
+ 9. Remove the following from `/etc/nvidia/gridd.conf` if it is present:
-
+ ``` FeatureType=0 ```
-
+ 10. Reboot the VM and proceed to verify the installation. #### Install GRID driver on Ubuntu with Secure Boot enabled
To install NVIDIA GRID drivers on NV or NVv3-series VMs, make an SSH connection
The GRID driver installation process doesn't offer any option to skip the kernel module build and installation or to select a different source of signed kernel modules, so Secure Boot has to be disabled in Linux VMs in order to use them with GRID.
-### CentOS or Red Hat Enterprise Linux
+### CentOS or Red Hat Enterprise Linux
1. Update the kernel and DKMS (recommended). If you choose not to update the kernel, ensure that the versions of `kernel-devel` and `dkms` are appropriate for your kernel.
-
- ```bash
+
+ ```bash
    sudo yum update
    sudo yum install kernel-devel
    sudo rpm -Uvh https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
The GRID driver installation process does not offer any options to skip kernel m
blacklist lbm-nouveau ```
-3. Reboot the VM, reconnect, and install the latest [Linux Integration Services for Hyper-V and Azure](https://www.microsoft.com/download/details.aspx?id=55106). Check if LIS is required by verifying the results of lspci. If all GPU devices are listed as expected, installing LIS isn't required.
+3. Reboot the VM, reconnect, and install the latest [Linux Integration Services for Hyper-V and Azure](https://www.microsoft.com/download/details.aspx?id=55106). Check if LIS is required by verifying the results of lspci. If all GPU devices are listed as expected, installing LIS isn't required.
Skip this step if you plan to use CentOS/RHEL 7.8 (or higher versions) as LIS is no longer required for these versions.
The GRID driver installation process does not offer any options to skip kernel m
sudo reboot ```
-
+ 4. Reconnect to the VM and run the `lspci` command. Verify that the NVIDIA M60 card or cards are visible as PCI devices.
-
+ 5. Download and install the GRID driver: ```bash
- wget -O NVIDIA-Linux-x86_64-grid.run https://go.microsoft.com/fwlink/?linkid=874272
+ wget -O NVIDIA-Linux-x86_64-grid.run https://go.microsoft.com/fwlink/?linkid=874272
  chmod +x NVIDIA-Linux-x86_64-grid.run
  sudo ./NVIDIA-Linux-x86_64-grid.run
  ```
-
+ 6. When you're asked whether you want to run the nvidia-xconfig utility to update your X configuration file, select **Yes**.
7. After installation completes, copy /etc/nvidia/gridd.conf.template to a new file gridd.conf at location /etc/nvidia/
-
+   ```bash
    sudo cp /etc/nvidia/gridd.conf.template /etc/nvidia/gridd.conf
    ```
-
+ 8. Add two lines to `/etc/nvidia/gridd.conf`:
-
+ ``` IgnoreSP=FALSE
- EnableUI=FALSE
+ EnableUI=FALSE
```
-
+ 9. Remove one line from `/etc/nvidia/gridd.conf` if it is present:
-
+ ``` FeatureType=0 ```
-
+ 10. Reboot the VM and proceed to verify the installation. ### Verify driver installation
-To query the GPU device state, SSH to the VM and run the [nvidia-smi](https://developer.nvidia.com/nvidia-system-management-interface) command-line utility installed with the driver.
+To query the GPU device state, SSH to the VM and run the [nvidia-smi](https://developer.nvidia.com/nvidia-system-management-interface) command-line utility installed with the driver.
If the driver is installed, Nvidia SMI will list the **GPU-Util** as 0% until you run a GPU workload on the VM. Your driver version and GPU details may be different from the ones shown. ![Screenshot that shows the output when the GPU device state is queried.](./media/n-series-driver-setup/smi-nv.png)
-
+ ### X11 server If you need an X11 server for remote connections to an NV or NVv2 VM, [x11vnc](https://wiki.archlinux.org/title/X11vnc) is recommended because it allows hardware acceleration of graphics. The BusID of the M60 device must be manually added to the X11 configuration file (usually, `etc/X11/xorg.conf`). Add a `"Device"` section similar to the following:
-
+```
Section "Device"
    Identifier     "Device0"
Section "Device"
BusID "PCI:0@your-BusID:0:0" EndSection ```
-
+ Additionally, update your `"Screen"` section to use this device.
-
The decimal BusID can be found by running:
```bash
nvidia-xconfig --query-gpu-info | awk '/PCI BusID/{print $4}'
```
-
+ The BusID can change when a VM gets reallocated or rebooted. Therefore, you may want to create a script to update the BusID in the X11 configuration when a VM is rebooted. For example, create a script named `busidupdate.sh` (or another name you choose) with contents similar to the following:
-```bash
+```bash
#!/bin/bash
XCONFIG="/etc/X11/xorg.conf"
OLDBUSID=`awk '/BusID/{gsub(/"/, "", $2); print $2}' ${XCONFIG}`
NEWBUSID=`nvidia-xconfig --query-gpu-info | awk '/PCI BusID/{print $4}'`
if [[ "${OLDBUSID}" == "${NEWBUSID}" ]] ; then echo "NVIDIA BUSID not changed - nothing to do" else
- echo "NVIDIA BUSID changed from \"${OLDBUSID}\" to \"${NEWBUSID}\": Updating ${XCONFIG}"
+ echo "NVIDIA BUSID changed from \"${OLDBUSID}\" to \"${NEWBUSID}\": Updating ${XCONFIG}"
        sed -e 's|BusID.*|BusID '\"${NEWBUSID}\"'|' -i ${XCONFIG}
fi
```
Then, create an entry for your update script in `/etc/rc.d/rc3.d` so the script
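One possible way to wire this up, shown here as a sketch only and assuming the script was saved to the hypothetical path /usr/local/bin/busidupdate.sh:

```bash
# Make the script executable and link it into runlevel 3 so it runs at boot.
# Paths and the S99 priority are illustrative; adjust them to your setup.
sudo chmod +x /usr/local/bin/busidupdate.sh
sudo ln -s /usr/local/bin/busidupdate.sh /etc/rc.d/rc3.d/S99busidupdate
```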
## Troubleshooting
* You can set persistence mode using `nvidia-smi` so the output of the command is faster when you need to query cards. To set persistence mode, execute `nvidia-smi -pm 1`. Note that if the VM is restarted, the mode setting goes away. You can always script the mode setting to execute upon startup (see the sketch after this list).
-* If you updated the NVIDIA CUDA drivers to the latest version and find RDMA connectivity is no longer working, [reinstall the RDMA drivers](#rdma-network-connectivity) to reestablish that connectivity.
+* If you updated the NVIDIA CUDA drivers to the latest version and find RDMA connectivity is no longer working, [reinstall the RDMA drivers](#rdma-network-connectivity) to reestablish that connectivity.
* During installation of LIS, if a certain CentOS/RHEL OS version (or kernel) is not supported for LIS, an error "Unsupported kernel version" is thrown. Please report this error along with the OS and kernel versions.
-* If jobs are interrupted by ECC errors on the GPU (either correctable or uncorrectable), first check to see if the GPU meets any of Nvidia's [RMA criteria for ECC errors](https://docs.nvidia.com/deploy/dynamic-page-retirement/https://docsupdatetracker.net/index.html#faq-pre). If the GPU is eligible for RMA, please contact support about getting it serviced; otherwise, reboot your VM to reattach the GPU as described [here](https://docs.nvidia.com/deploy/dynamic-page-retirement/https://docsupdatetracker.net/index.html#bl_reset_reboot). Less invasive methods such as `nvidia-smi -r` don't work with the virtualization solution deployed in Azure.
+* If jobs are interrupted by ECC errors on the GPU (either correctable or uncorrectable), first check to see if the GPU meets any of Nvidia's [RMA criteria for ECC errors](https://docs.nvidia.com/deploy/dynamic-page-retirement/https://docsupdatetracker.net/index.html#faq-pre). If the GPU is eligible for RMA, please contact support about getting it serviced; otherwise, reboot your VM to reattach the GPU as described [here](https://docs.nvidia.com/deploy/dynamic-page-retirement/https://docsupdatetracker.net/index.html#bl_reset_reboot). Less invasive methods such as `nvidia-smi -r` don't work with the virtualization solution deployed in Azure.
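As a minimal sketch for the persistence-mode tip above (using cron's `@reboot` is an assumption; a systemd unit or rc script would work equally well):

```bash
# Re-apply NVIDIA persistence mode on every boot via root's crontab.
(sudo crontab -l 2>/dev/null; echo "@reboot /usr/bin/nvidia-smi -pm 1") | sudo crontab -
```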
## Next steps
virtual-machines No Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/no-agent.md
Title: Create Linux images without a provisioning agent
+ Title: Create Linux images without a provisioning agent
description: Create generalized Linux images without a provisioning agent in Azure.
-+ Last updated 04/11/2023
# Creating generalized images without a provisioning agent
-**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
+**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
Microsoft Azure provides provisioning agents for Linux VMs in the form of the [walinuxagent](https://github.com/Azure/WALinuxAgent) or [cloud-init](https://github.com/canonical/cloud-init) (recommended). But there could be a scenario when you don't want to use either of these applications for your provisioning agent, such as:

- Your Linux distro/version doesn't support cloud-init/Linux Agent.
- You require specific VM properties to be set, such as hostname.
-> [!NOTE]
+> [!NOTE]
> > If you do not require any properties to be set or any form of provisioning to happen you should consider creating a specialized image.
$ sudo rm -rf /var/lib/waagent /etc/waagent.conf /var/log/waagent.log
### Add required code to the VM
-Also inside the VM, because we've removed the Azure Linux Agent we need to provide a mechanism to report ready.
+Also inside the VM, because we've removed the Azure Linux Agent we need to provide a mechanism to report ready.
#### Python script
With the unit on the filesystem, run the following to enable it:
$ sudo systemctl enable azure-provisioning.service
```
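The article's actual unit definition isn't shown in this excerpt. As a rough sketch only, a one-shot unit that runs a provisioning script (hypothetical path) at boot could be written like this:

```bash
# Sketch of a one-shot unit; the script path and unit contents are illustrative,
# not the article's definition.
sudo tee /etc/systemd/system/azure-provisioning.service > /dev/null <<'EOF'
[Unit]
Description=Azure VM provisioning (report ready without the Linux Agent)
After=network-online.target
Wants=network-online.target

[Service]
Type=oneshot
ExecStart=/usr/local/bin/azure-provisioning.sh
RemainAfterExit=true

[Install]
WantedBy=multi-user.target
EOF
```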
-Now the VM is ready to be generalized and have an image created from it.
+Now the VM is ready to be generalized and have an image created from it.
#### Completing the preparation of the image
$ az vm create \
--location eastus \ --ssh-key-value <ssh_pub_key_path> \ --public-ip-address-dns-name demo12 \
- --image "$IMAGE_ID"
+ --image "$IMAGE_ID"
--enable-agent false ```
virtual-machines Oracle Create Upload Vhd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/oracle-create-upload-vhd.md
Title: Create and upload an Oracle Linux VHD
+ Title: Create and upload an Oracle Linux VHD
description: Learn to create and upload an Azure virtual hard disk (VHD) that contains an Oracle Linux operating system. -+ Last updated 11/09/2021
# Prepare an Oracle Linux virtual machine for Azure
-**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
+**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
This article assumes that you've already installed an Oracle Linux operating system to a virtual hard disk. Multiple tools exist to create .vhd files, for example a virtualization solution such as Hyper-V. For instructions, see [Install the Hyper-V Role and Configure a Virtual Machine](/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/hh846766(v=ws.11)).
This article assumes that you've already installed an Oracle Linux operating sys
* All VHDs on Azure must have a virtual size aligned to 1 MB. When converting from a raw disk to VHD, you must ensure that the raw disk size is a multiple of 1 MB before conversion. See [Linux Installation Notes](create-upload-generic.md#general-linux-installation-notes) for more information.
* Make sure that the `Addons` repository is enabled. Edit the file `/etc/yum.repos.d/public-yum-ol6.repo` (Oracle Linux 6) or `/etc/yum.repos.d/public-yum-ol7.repo` (Oracle Linux 7), and change the line `enabled=0` to `enabled=1` under **[ol6_addons]** or **[ol7_addons]** in this file.
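If you prefer not to edit the repo file by hand, a targeted `sed` range can flip the flag for just that section; this is a sketch for the Oracle Linux 6 file and section names, so adjust them for Oracle Linux 7:

```bash
# Enable only the [ol6_addons] section, leaving other repos in the file untouched.
sudo sed -i '/\[ol6_addons\]/,/^\[/ s/enabled=0/enabled=1/' /etc/yum.repos.d/public-yum-ol6.repo
```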
-## Oracle Linux 6.X
+## Oracle Linux 6.X
> [!IMPORTANT] > Keep in consideration Oracle Linux 6.x is already EOL. Oracle Linux version 6.10 has available [ELS support](https://www.oracle.com/a/ocom/docs/linux/oracle-linux-extended-support-ds.pdf), which [will end on 07/2024](https://www.oracle.com/a/ocom/docs/elsp-lifetime-069338.pdf).
You must complete specific configuration steps in the operating system for the v
4. Create a file named **network** in the `/etc/sysconfig/` directory that contains the following text:
- ```config
+ ```config
   NETWORKING=yes
   HOSTNAME=localhost.localdomain
   ```
You must complete specific configuration steps in the operating system for the v
9. Modify the kernel boot line in your grub configuration to include more kernel parameters for Azure. To do this open "/boot/grub/menu.lst" in a text editor and ensure that the kernel includes the following parameters: ```config-grub
- console=ttyS0 earlyprintk=ttyS0
+ console=ttyS0 earlyprintk=ttyS0
``` This setting ensures all console messages are sent to the first serial port, which can assist Azure support with debugging issues.
-
+ In addition to the above, we recommend to *remove* the following parameters: ```config-grub
You must complete specific configuration steps in the operating system for the v
``` Graphical and quiet boot aren't useful in a cloud environment where we want all the logs to be sent to the serial port.
-
+ The `crashkernel` option may be left configured if desired, but note that this parameter reduces the amount of available memory in the VM by 128 MB or more, which may be problematic on the smaller VM sizes. 10. Ensure that the SSH server is installed and configured to start at boot time. This is usually the default.
You must complete specific configuration steps in the operating system for the v
Installing the WALinuxAgent package removes the NetworkManager and NetworkManager-gnome packages if they weren't already removed as described in step 2. 12. Don't create swap space on the OS disk.
-
+ The Azure Linux Agent can automatically configure swap space using the local resource disk that is attached to the VM after provisioning on Azure. The local resource disk is a *temporary* disk and might be emptied when the VM is deprovisioned. After installing the Azure Linux Agent (see previous step), modify the following parameters in /etc/waagent.conf appropriately: ```config-conf
Preparing an Oracle Linux 7 virtual machine for Azure is similar to Oracle Linux
8. Run the following command to clear the current yum metadata and install any updates:
- ```bash
+ ```bash
   sudo yum clean all
   sudo yum -y update
   ```
Preparing an Oracle Linux 7 virtual machine for Azure is similar to Oracle Linux
```config-grub rhgb quiet crashkernel=auto ```
-
+ Graphical and quiet boot aren't useful in a cloud environment where we want all the logs to be sent to the serial port.
-
+ The `crashkernel` option may be left configured if desired, but note that this parameter will reduce the amount of available memory in the VM by 128 MB or more, which may be problematic on the smaller VM sizes. 10. Once you're done editing "/etc/default/grub" per above, run the following command to rebuild the grub configuration:
Preparing an Oracle Linux 7 virtual machine for Azure is similar to Oracle Linux
    sudo sed -i '/ - disk_setup/d' /etc/cloud/cloud.cfg
    sudo sed -i '/cloud_init_modules/a\\ - mounts' /etc/cloud/cloud.cfg
    sudo sed -i '/cloud_init_modules/a\\ - disk_setup' /etc/cloud/cloud.cfg
- ```
+ ```
```bash echo "Allow only Azure datasource, disable fetching network setting via IMDS"
virtual-machines Proximity Placement Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/proximity-placement-groups.md
Title: Create a proximity placement group using the Azure CLI
-description: Learn about creating and using proximity placement groups for virtual machines in Azure.
+description: Learn about creating and using proximity placement groups for virtual machines in Azure.
-+ Last updated 4/6/2023 # Deploy VMs to proximity placement groups using Azure CLI
-**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
+**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
To get VMs as close as possible, achieving the lowest possible latency, you should deploy them within a [proximity placement group](../co-location.md#proximity-placement-groups).
A proximity placement group is a logical grouping used to make sure that Azure c
## Create the proximity placement group
-Create a proximity placement group using [az ppg create](/cli/azure/ppg#az-ppg-create).
+Create a proximity placement group using [az ppg create](/cli/azure/ppg#az-ppg-create).
```azurecli-interactive az group create --name myPPGGroup --location eastus
virtual-machines Quick Create Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/quick-create-portal.md
Last updated 01/04/2024 -+ # Quickstart: Create a Linux virtual machine in the Azure portal
Sign in to the [Azure portal](https://portal.azure.com).
1. Under **Services**, select **Virtual machines**. 1. In the **Virtual machines** page, select **Create** and then **Virtual machine**. The **Create a virtual machine** page opens.
-1. In the **Basics** tab, under **Project details**, make sure the correct subscription is selected and then choose to **Create new** resource group. Enter *myResourceGroup* for the name.
+1. In the **Basics** tab, under **Project details**, make sure the correct subscription is selected and then choose to **Create new** resource group. Enter *myResourceGroup* for the name.
![Screenshot of the Project details section showing where you select the Azure subscription and the resource group for the virtual machine](./media/quick-create-portal/project-details.png)
Sign in to the [Azure portal](https://portal.azure.com).
![Screenshot of the Administrator account section where you select an authentication type and provide the administrator credentials](./media/quick-create-portal/administrator-account.png)
-1. Under **Inbound port rules** > **Public inbound ports**, choose **Allow selected ports** and then select **SSH (22)** and **HTTP (80)** from the drop-down.
+1. Under **Inbound port rules** > **Public inbound ports**, choose **Allow selected ports** and then select **SSH (22)** and **HTTP (80)** from the drop-down.
![Screenshot of the inbound port rules section where you select what ports inbound connections are allowed on](./media/quick-create-portal/inbound-port-rules.png)
Sign in to the [Azure portal](https://portal.azure.com).
Create an [SSH connection](/azure/virtual-machines/linux-vm-connect) with the VM.
-1. If you are on a Mac or Linux machine, open a Bash prompt and set read-only permission on the .pem file using `chmod 400 ~/Downloads/myKey.pem`. If you are on a Windows machine, open a PowerShell prompt.
+1. If you are on a Mac or Linux machine, open a Bash prompt and set read-only permission on the .pem file using `chmod 400 ~/Downloads/myKey.pem`. If you are on a Windows machine, open a PowerShell prompt.
1. At your prompt, open an SSH connection to your virtual machine. Replace the IP address with the one from your VM, and replace the path to the `.pem` with the path to where the key file was downloaded.
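   For example, assuming the key was downloaded to *~/Downloads/myKey.pem* and using a placeholder public IP address:

   ```bash
   ssh -i ~/Downloads/myKey.pem azureuser@10.111.12.123
   ```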
Use a web browser of your choice to view the default NGINX welcome page. Type th
When no longer needed, you can delete the resource group, virtual machine, and all related resources. 1. On the Overview page for the VM, select the **Resource group** link.
-1. At the top of the page for the resource group, select **Delete resource group**.
+1. At the top of the page for the resource group, select **Delete resource group**.
1. A page will open warning you that you are about to delete resources. Type the name of the resource group and select **Delete** to finish deleting the resources and the resource group.
virtual-machines Redhat Create Upload Vhd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/redhat-create-upload-vhd.md
Title: Create and upload a Red Hat Enterprise Linux VHD for use in Azure
+ Title: Create and upload a Red Hat Enterprise Linux VHD for use in Azure
description: Learn to create and upload an Azure virtual hard disk (VHD) that contains a Red Hat Linux operating system.
vm-linux-+ Last updated 04/25/2023
# Prepare a Red Hat-based virtual machine for Azure
-**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
+**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
In this article, you'll learn how to prepare a Red Hat Enterprise Linux (RHEL) virtual machine for use in Azure. The versions of RHEL that are covered in this article are 6.X, 7.X, and 8.X. The hypervisors for preparation that are covered in this article are Hyper-V, kernel-based virtual machine (KVM), and VMware. For more information about eligibility requirements for participating in Red Hat's Cloud Access program, see [Red Hat's Cloud Access website](https://www.redhat.com/en/technologies/cloud-computing/cloud-access) and [Running RHEL on Azure](https://access.redhat.com/ecosystem/ccsp/microsoft-azure). For ways to automate building RHEL images, see [Azure Image Builder](../image-builder-overview.md). > [!NOTE]
This section assumes that you've already obtained an ISO file from the Red Hat w
> **_Cloud-init >= 21.2 removes the udf requirement_**. However, without the udf module enabled, the cdrom won't mount during provisioning, preventing custom data from being applied. A workaround for this is to apply custom data using user data. However, unlike custom data, user data isn't encrypted. https://cloudinit.readthedocs.io/en/latest/topics/format.html
-### RHEL 6 using Hyper-V Manager
+### RHEL 6 using Hyper-V Manager
> [!IMPORTANT] > Starting on 30 November 2020, Red Hat Enterprise Linux 6 will reach end of maintenance phase. The maintenance phase is followed by the Extended Life Phase. As Red Hat Enterprise Linux 6 transitions out of the Full/Maintenance Phases, we strongly recommend upgrading to Red Hat Enterprise Linux 7, 8, or 9. If customers must stay on Red Hat Enterprise Linux 6, we recommend adding the Red Hat Enterprise Linux Extended Life Cycle Support (ELS) Add-On.
sudo cat <<EOF>> /etc/udev/rules.d/68-azure-sriov-nm-unmanaged.rules
# Accelerated Networking on Azure exposes a new SRIOV interface to the VM.
# This interface is transparently bonded to the synthetic interface,
# so NetworkManager should just ignore any SRIOV interfaces.
-SUBSYSTEM=="net", DRIVERS=="hv_pci", ACTION=="add", ENV{NM_UNMANAGED}="1"
+SUBSYSTEM=="net", DRIVERS=="hv_pci", ACTION=="add", ENV{NM_UNMANAGED}="1"
EOF ``` 7. Ensure that the network service will start at boot time by running the following command:
EOF
10. Modify the kernel boot line in your grub configuration to include additional kernel parameters for Azure. To do this modification, open `/boot/grub/menu.lst` in a text editor, and ensure that the default kernel includes the following parameters: ```config-grub
- console=ttyS0 earlyprintk=ttyS0
+ console=ttyS0 earlyprintk=ttyS0
``` This will also ensure that all console messages are sent to the first serial port, which can assist Azure support with debugging issues.
-
+ In addition, we recommended that you remove the following parameters: ```config-grub rhgb quiet crashkernel=auto ```
-
+ Graphical and quiet boots aren't useful in a cloud environment where we want all the logs to be sent to the serial port. You can leave the `crashkernel` option configured if desired. Note that this parameter reduces the amount of available memory in the virtual machine by 128 MB or more. This configuration might be problematic on smaller virtual machine sizes.
EOF
15. Run the following commands to deprovision the virtual machine and prepare it for provisioning on Azure:
- > [!NOTE]
+ > [!NOTE]
> If you're migrating a specific virtual machine and don't wish to create a generalized image, skip the deprovision step. ```bash
EOF
7. Modify the kernel boot line in your grub configuration to include additional kernel parameters for Azure. To do this modification, open `/etc/default/grub` in a text editor and edit the `GRUB_CMDLINE_LINUX` parameter. For example:
-
+   ```config-grub
    GRUB_CMDLINE_LINUX="console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 earlyprintk=ttyS0 net.ifnames=0"
    GRUB_TERMINAL_OUTPUT="serial console"
    GRUB_SERIAL_COMMAND="serial --speed=115200 --unit=0 --word=8 --parity=no --stop=1"
    ```
-
+ This will also ensure that all console messages are sent to the first serial port and enable interaction with the serial console, which can assist Azure support with debugging issues. This configuration also turns off the new RHEL 7 naming conventions for NICs. ```config rhgb quiet crashkernel=auto ```
-
+ Graphical and quiet boots aren't useful in a cloud environment where we want all the logs to be sent to the serial port. You can leave the `crashkernel` option configured if desired. Note that this parameter reduces the amount of available memory in the virtual machine by 128 MB or more, which might be problematic on smaller virtual machine sizes. 8. After you're done editing `/etc/default/grub`, run the following command to rebuild the grub configuration:
EOF
``` > [!NOTE] > If you are migrating a specific virtual machine and don't wish to create a generalized image, set `Provisioning.Agent=disabled` on the `/etc/waagent.conf` config.
-
+ 1. Configure mounts: ```bash
EOF
sudo sed -i '/cloud_init_modules/a\\ - mounts' /etc/cloud/cloud.cfg sudo sed -i '/cloud_init_modules/a\\ - disk_setup' /etc/cloud/cloud.cfg ```
-
+ 1. Configure Azure datasource: ```bash
EOF
```
-13. Swap configuration.
+13. Swap configuration.
    Don't create swap space on the operating system disk. Previously, the Azure Linux Agent was used to automatically configure swap space by using the local resource disk that is attached to the virtual machine after the virtual machine is provisioned on Azure. However, this is now handled by cloud-init, so you **must not** use the Linux Agent to format the resource disk or create the swap file. Modify the following parameters in `/etc/waagent.conf` appropriately:
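    The parameter values aren't shown in this excerpt. Because cloud-init owns swap here, the relevant `/etc/waagent.conf` settings are typically switched off, for example:

    ```bash
    # Keep the Linux Agent from formatting the resource disk or creating a swap file;
    # cloud-init handles swap instead. Values shown are the typical ones, not quoted from the article.
    sudo sed -i 's/^ResourceDisk.Format=y/ResourceDisk.Format=n/' /etc/waagent.conf
    sudo sed -i 's/^ResourceDisk.EnableSwap=y/ResourceDisk.EnableSwap=n/' /etc/waagent.conf
    ```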
EOF
sudo rm -f ~/.bash_history sudo export HISTSIZE=0 ```
-
+ 16. Click **Action** > **Shut Down** in Hyper-V Manager. Your Linux VHD is now ready to be [**uploaded to Azure**](./upload-vhd.md#option-1-upload-a-vhd).
EOF
sudo subscription-manager register --auto-attach --username=XXX --password=XXX ```
-6. Modify the kernel boot line in your grub configuration to include additional kernel parameters for Azure and enable the serial console.
+6. Modify the kernel boot line in your grub configuration to include additional kernel parameters for Azure and enable the serial console.
1. Remove current GRUB parameters: ```bash
EOF
    GRUB_TERMINAL_OUTPUT="serial console"
    GRUB_SERIAL_COMMAND="serial --speed=115200 --unit=0 --word=8 --parity=no --stop=1"
    ```
-
+ This will also ensure that all console messages are sent to the first serial port and enable interaction with the serial console, which can assist Azure support with debugging issues. This configuration also turns off the new naming conventions for NICs.
-
+ 1. Additionally, we recommend that you remove the following parameters: ```config rhgb quiet crashkernel=auto ```
-
+ Graphical and quiet boots aren't useful in a cloud environment where we want all the logs to be sent to the serial port. You can leave the `crashkernel` option configured if desired. Note that this parameter reduces the amount of available memory in the virtual machine by 128 MB or more, which might be problematic on smaller virtual machine sizes. 7. After you are done editing `/etc/default/grub`, run the following command to rebuild the grub configuration:
EOF
``` > [!NOTE] > If you're migrating a specific virtual machine and don't wish to create a generalized image, set `Provisioning.Agent=disabled` on the `/etc/waagent.conf` config.
-
+ 1. Configure mounts: ```bash
EOF
sudo sed -i '/cloud_init_modules/a\\ - mounts' /etc/cloud/cloud.cfg sudo sed -i '/cloud_init_modules/a\\ - disk_setup' /etc/cloud/cloud.cfg ```
-
+ 1. Configure Azure datasource: ```bash
EOF
fi ``` 1. Configure cloud-init logging:
-
+ ```bash sudo echo "Add console log file" sudo cat >> /etc/cloud/cloud.cfg.d/05_logging.cfg <<EOF
EOF
EOF ```
-11. Swap configuration
+11. Swap configuration
    Don't create swap space on the operating system disk. Previously, the Azure Linux Agent was used to automatically configure swap space by using the local resource disk that is attached to the virtual machine after the virtual machine is provisioned on Azure. However, this is now handled by cloud-init, so you **must not** use the Linux Agent to format the resource disk or create the swap file. Modify the following parameters in `/etc/waagent.conf` appropriately:
EOF
## KVM
-This section shows you how to use KVM to prepare a [RHEL 6](#rhel-6-using-kvm) or [RHEL 7](#rhel-7-using-kvm) distro to upload to Azure.
+This section shows you how to use KVM to prepare a [RHEL 6](#rhel-6-using-kvm) or [RHEL 7](#rhel-7-using-kvm) distro to upload to Azure.
### RHEL 6 using KVM
This section shows you how to use KVM to prepare a [RHEL 6](#rhel-6-using-kvm) o
To apply it:<br> ```
-sudo cat <<EOF>> /etc/udev/rules.d/68-azure-sriov-nm-unmanaged.rules
+sudo cat <<EOF>> /etc/udev/rules.d/68-azure-sriov-nm-unmanaged.rules
# Accelerated Networking on Azure exposes a new SRIOV interface to the VM. # This interface is transparently bonded to the synthetic interface, # so NetworkManager should just ignore any SRIOV interfaces.
-SUBSYSTEM=="net", DRIVERS=="hv_pci", ACTION=="add", ENV{NM_UNMANAGED}="1"
+SUBSYSTEM=="net", DRIVERS=="hv_pci", ACTION=="add", ENV{NM_UNMANAGED}="1"
EOF ``` 7. Ensure that the network service will start at boot time by running the following command:
EOF
9. Modify the kernel boot line in your grub configuration to include additional kernel parameters for Azure. To do this configuration, open `/boot/grub/menu.lst` in a text editor, and ensure that the default kernel includes the following parameters: ```config-grub
- console=ttyS0 earlyprintk=ttyS0
+ console=ttyS0 earlyprintk=ttyS0
``` This will also ensure that all console messages are sent to the first serial port, which can assist Azure support with debugging issues.
-
+ In addition, we recommend that you remove the following parameters: ```config-grub
EOF
Graphical and quiet boots aren't useful in a cloud environment where we want all the logs to be sent to the serial port. You can leave the `crashkernel` option configured if desired. Note that this parameter reduces the amount of available memory in the virtual machine by 128 MB or more, which might be problematic on smaller virtual machine sizes.
-10. Add Hyper-V modules to initramfs:
+10. Add Hyper-V modules to initramfs:
Edit `/etc/dracut.conf`, and add the following content:
EOF
15. Install cloud-init Follow the steps in 'Prepare a RHEL 7 virtual machine from Hyper-V Manager', step 12, 'Install cloud-init to handle the provisioning.'
-16. Swap configuration
+16. Swap configuration
Don't create swap space on the operating system disk. Follow the steps in 'Prepare a RHEL 7 virtual machine from Hyper-V Manager', step 13, 'Swap configuration'
This section assumes that you have already installed a RHEL virtual machine in V
To apply it:<br> ```
-sudo cat <<EOF>> /etc/udev/rules.d/68-azure-sriov-nm-unmanaged.rules
+sudo cat <<EOF>> /etc/udev/rules.d/68-azure-sriov-nm-unmanaged.rules
# Accelerated Networking on Azure exposes a new SRIOV interface to the VM. # This interface is transparently bonded to the synthetic interface, # so NetworkManager should just ignore any SRIOV interfaces.
-SUBSYSTEM=="net", DRIVERS=="hv_pci", ACTION=="add", ENV{NM_UNMANAGED}="1"
+SUBSYSTEM=="net", DRIVERS=="hv_pci", ACTION=="add", ENV{NM_UNMANAGED}="1"
EOF ```
EOF
```config-grub rhgb quiet crashkernel=auto ```
-
+ Graphical and quiet boots aren't useful in a cloud environment where we want all the logs to be sent to the serial port. You can leave the `crashkernel` option configured if desired. Note that this parameter reduces the amount of available memory in the virtual machine by 128 MB or more, which might be problematic on smaller virtual machine sizes. 9. Add Hyper-V modules to initramfs:
virtual-machines Spot Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/spot-template.md
-+ Last updated 05/31/2023
Here's a sample template with added properties for an Azure Spot VM. Replace the
## Simulate an eviction
-You can [simulate an eviction](/rest/api/compute/virtualmachines/simulateeviction) of an Azure Spot VM, to test your application response to a sudden eviction.
+You can [simulate an eviction](/rest/api/compute/virtualmachines/simulateeviction) of an Azure Spot VM, to test your application response to a sudden eviction.
-Replace the below parameters with your information:
+Replace the below parameters with your information:
- `subscriptionId`
- `resourceGroupName`
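As a quick way to exercise the same operation, the Azure CLI exposes an equivalent command; the names below are placeholders:

```azurecli-interactive
az vm simulate-eviction --resource-group myResourceGroup --name mySpotVM
```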
virtual-machines Static Dns Name Resolution For Linux On Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/static-dns-name-resolution-for-linux-on-azure.md
Title: Use internal DNS for VM name resolution with the Azure CLI
+ Title: Use internal DNS for VM name resolution with the Azure CLI
description: How to create virtual network interface cards and use internal DNS for VM name resolution on Azure with the Azure CLI. -+ Last updated 04/06/2023
# Create virtual network interface cards and use internal DNS for VM name resolution on Azure
-**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
+**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
This article shows you how to set static internal DNS names for Linux VMs using virtual network interface cards (vNics) and DNS label names with the Azure CLI. Static DNS names are used for permanent infrastructure services like a Jenkins build server, which is used for this document, or a Git server.
az vm create \
## Detailed walkthrough
-A full continuous integration and continuous deployment (CiCd) infrastructure on Azure requires certain servers to be static or long-lived servers. It's recommended that Azure assets like the virtual networks and Network Security Groups are static and long lived resources that are rarely deployed. Once a virtual network has been deployed, it can be reused in new deployments without any adverse effects to the infrastructure. You can later add a Git repository server or a Jenkins automation server that delivers CiCd to this virtual network for your development or test environments.
+A full continuous integration and continuous deployment (CiCd) infrastructure on Azure requires certain servers to be static or long-lived servers. It's recommended that Azure assets like the virtual networks and Network Security Groups are static and long lived resources that are rarely deployed. Once a virtual network has been deployed, it can be reused in new deployments without any adverse effects to the infrastructure. You can later add a Git repository server or a Jenkins automation server that delivers CiCd to this virtual network for your development or test environments.
Internal DNS names are only resolvable inside an Azure virtual network. Because the DNS names are internal, they aren't resolvable to the outside internet, providing extra security to the infrastructure.
az group create --name myResourceGroup --location westus
## Create the virtual network
-The next step is to build a virtual network to launch the VMs into. The virtual network contains one subnet for this walkthrough. For more information on Azure virtual networks, see [Create a virtual network](../../virtual-network/manage-virtual-network.md#create-a-virtual-network).
+The next step is to build a virtual network to launch the VMs into. The virtual network contains one subnet for this walkthrough. For more information on Azure virtual networks, see [Create a virtual network](../../virtual-network/manage-virtual-network.md#create-a-virtual-network).
Create the virtual network with [az network vnet create](/cli/azure/network/vnet). The following example creates a virtual network named `myVnet` and subnet named `mySubnet`:
az network vnet create \
``` ## Create the Network Security Group
-Azure Network Security Groups are equivalent to a firewall at the network layer. For more information about Network Security Groups, see [How to create NSGs in the Azure CLI](../../virtual-network/tutorial-filter-network-traffic-cli.md).
+Azure Network Security Groups are equivalent to a firewall at the network layer. For more information about Network Security Groups, see [How to create NSGs in the Azure CLI](../../virtual-network/tutorial-filter-network-traffic-cli.md).
Create the network security group with [az network nsg create](/cli/azure/network/nsg). The following example creates a network security group named `myNetworkSecurityGroup`:
az network vnet subnet update \
## Create the virtual network interface card and static DNS names
-To use DNS names for VM name resolution, you need to create virtual network interface cards (vNics) that include a DNS label. vNics are important as you can reuse them by connecting them to different VMs over the infrastructure lifecycle. This approach keeps the vNic as a static resource while the VMs can be temporary. By using DNS labeling on the vNic, we're able to enable simple name resolution from other VMs in the VNet. Using resolvable names enables other VMs to access the automation server by the DNS name `Jenkins` or the Git server as `gitrepo`.
+To use DNS names for VM name resolution, you need to create virtual network interface cards (vNics) that include a DNS label. vNics are important as you can reuse them by connecting them to different VMs over the infrastructure lifecycle. This approach keeps the vNic as a static resource while the VMs can be temporary. By using DNS labeling on the vNic, we're able to enable simple name resolution from other VMs in the VNet. Using resolvable names enables other VMs to access the automation server by the DNS name `Jenkins` or the Git server as `gitrepo`.
Create the vNic with [az network nic create](/cli/azure/network/nic). The following example creates a vNic named `myNic`, connects it to the virtual network named `myVnet`, and creates an internal DNS name record called `jenkins`:
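The full command isn't visible in this excerpt; a minimal sketch that reuses the resources created earlier would be:

```azurecli-interactive
az network nic create \
    --resource-group myResourceGroup \
    --name myNic \
    --vnet-name myVnet \
    --subnet mySubnet \
    --internal-dns-name jenkins
```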
az vm create \
--ssh-key-value ~/.ssh/id_rsa.pub ```
-By using the CLI flags to call out existing resources, we instruct Azure to deploy the VM inside the existing network. To reiterate, once a VNet and subnet have been deployed, they can be left as static or permanent resources inside your Azure region.
+By using the CLI flags to call out existing resources, we instruct Azure to deploy the VM inside the existing network. To reiterate, once a VNet and subnet have been deployed, they can be left as static or permanent resources inside your Azure region.
## Next steps * [Create your own custom environment for a Linux VM using Azure CLI commands directly](create-cli-complete.md)
virtual-machines Suse Create Upload Vhd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/suse-create-upload-vhd.md
-+ Last updated 12/14/2022
As an alternative to building your own VHD, SUSE also publishes BYOS (bring your
``` ```shell
- /usr/sbin/grub2-mkconfig -o /boot/grub2/grub.cfg
+ /usr/sbin/grub2-mkconfig -o /boot/grub2/grub.cfg
``` 3. Register your SUSE Linux Enterprise system to allow it to download updates and install packages.
As an alternative to building your own VHD, SUSE also publishes BYOS (bring your
6. Enable `waagent` and cloud-init to start on boot: ```bash
- sudo systemctl enable waagent
+ sudo systemctl enable waagent
  sudo systemctl enable cloud-init-local.service
  sudo systemctl enable cloud-init.service
  sudo systemctl enable cloud-config.service
As an alternative to building your own VHD, SUSE also publishes BYOS (bring your
7. Update the cloud-init configuration: ```bash
- cat <<EOF | sudo tee /etc/cloud/cloud.cfg.d/91-azure_datasource.cfg
+ cat <<EOF | sudo tee /etc/cloud/cloud.cfg.d/91-azure_datasource.cfg
datasource_list: [ Azure ] datasource: Azure:
As an alternative to building your own VHD, SUSE also publishes BYOS (bring your
```bash cat <<EOF | sudo tee -a /etc/systemd/system.conf 'DefaultEnvironment="CLOUD_CFG=/etc/cloud/cloud.cfg.d/00-azure-swap.cfg"'
- EOF
+ EOF
cat <<EOF | sudo tee /etc/cloud/cloud.cfg.d/00-azure-swap.cfg #cloud-config
As an alternative to building your own VHD, SUSE also publishes BYOS (bring your
```bash sudo rm -f /etc/udev/rules.d/70-persistent-net.rules sudo rm -f /etc/udev/rules.d/85-persistent-net-cloud-init.rules
- sudo rm -f /etc/sysconfig/network/ifcfg-eth*
+ sudo rm -f /etc/sysconfig/network/ifcfg-eth*
``` 12. We recommend that you edit the */etc/sysconfig/network/dhcp* file and change the `DHCLIENT_SET_HOSTNAME` parameter to the following:
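    The target value isn't shown in this excerpt; because the hostname on Azure is set during provisioning, the parameter is typically turned off so DHCP doesn't override it (an assumption based on that guidance):

    ```bash
    # Keep the DHCP client from overwriting the hostname set at provisioning time.
    sudo sed -i 's/^DHCLIENT_SET_HOSTNAME=.*/DHCLIENT_SET_HOSTNAME="no"/' /etc/sysconfig/network/dhcp
    ```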
As an alternative to building your own VHD, SUSE also publishes BYOS (bring your
6. Modify the kernel boot line in your GRUB configuration to include other kernel parameters for Azure. To do this, open */boot/grub/menu.lst* in a text editor and ensure that the default kernel includes the following parameters: ```config-grub
- console=ttyS0 earlyprintk=ttyS0
+ console=ttyS0 earlyprintk=ttyS0
``` This option ensures that all console messages are sent to the first serial port, which can assist Azure support with debugging issues. In addition, remove the following parameters from the kernel boot line if they exist:
virtual-machines Tutorial Automate Vm Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/tutorial-automate-vm-deployment.md
Last updated 04/06/2023 -+ #Customer intent: As an IT administrator or developer, I want learn about cloud-init so that I customize and configure Linux VMs in Azure on first boot to minimize the number of post-deployment configuration tasks required. # Tutorial - How to use cloud-init to customize a Linux virtual machine in Azure on first boot
-**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
+**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
In a previous tutorial, you learned how to SSH to a virtual machine (VM) and manually install NGINX. To create VMs in a quick and consistent manner, some form of automation is typically desired. A common approach to customize a VM on first boot is to use [cloud-init](https://cloudinit.readthedocs.io). In this tutorial you learn how to:
virtual-machines Tutorial Lamp Stack https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/tutorial-lamp-stack.md
ms.devlang: azurecli-+ Last updated 4/4/2023
# Tutorial: Install a LAMP stack on an Azure Linux VM
-**Applies to:** :heavy_check_mark: Linux VMs
+**Applies to:** :heavy_check_mark: Linux VMs
This article walks you through how to deploy an Apache web server, MySQL, and PHP (the LAMP stack) on an Ubuntu VM in Azure. To see the LAMP server in action, you can optionally install and configure a WordPress site. In this tutorial you learn how to: > [!div class="checklist"]
-> * Create an Ubuntu VM
+> * Create an Ubuntu VM
> * Open port 80 for web traffic
> * Install Apache, MySQL, and PHP
> * Verify installation and configuration
-> * Install WordPress
+> * Install WordPress
This setup is for quick tests or proof of concept. For more on the LAMP stack, including recommendations for a production environment, see the [Ubuntu documentation](https://help.ubuntu.com/community/ApacheMySQLPHP).
If you choose to install and use the CLI locally, this tutorial requires that yo
## Create a resource group
-Create a resource group with the [az group create](/cli/azure/group) command. An Azure resource group is a logical container into which Azure resources are deployed and managed.
+Create a resource group with the [az group create](/cli/azure/group) command. An Azure resource group is a logical container into which Azure resources are deployed and managed.
The following example creates a resource group named *myResourceGroup* in the *eastus* location.
az group create --name myResourceGroup --location eastus
## Create a virtual machine
-Create a VM with the [az vm create](/cli/azure/vm) command.
+Create a VM with the [az vm create](/cli/azure/vm) command.
-The following example creates a VM named *myVM* and creates SSH keys if they don't already exist in a default key location. To use a specific set of keys, use the `--ssh-key-value` option. The command also sets *azureuser* as an administrator user name. You use this name later to connect to the VM.
+The following example creates a VM named *myVM* and creates SSH keys if they don't already exist in a default key location. To use a specific set of keys, use the `--ssh-key-value` option. The command also sets *azureuser* as an administrator user name. You use this name later to connect to the VM.
```azurecli-interactive az vm create \
When the VM has been created, the Azure CLI shows information similar to the fol
-## Open port 80 for web traffic
+## Open port 80 for web traffic
+
+By default, only SSH connections are allowed into Linux VMs deployed in Azure. Because this VM is going to be a web server, you need to open port 80 from the internet. Use the [az vm open-port](/cli/azure/vm) command to open the desired port.
-By default, only SSH connections are allowed into Linux VMs deployed in Azure. Because this VM is going to be a web server, you need to open port 80 from the internet. Use the [az vm open-port](/cli/azure/vm) command to open the desired port.
-
```azurecli-interactive az vm open-port --port 80 --resource-group myResourceGroup --name myVM ```
ssh azureuser@40.68.254.142
## Install Apache, MySQL, and PHP
-Run the following command to update Ubuntu package sources and install Apache, MySQL, and PHP. Note the caret (^) at the end of the command, which is part of the `lamp-server^` package name.
+Run the following command to update Ubuntu package sources and install Apache, MySQL, and PHP. Note the caret (^) at the end of the command, which is part of the `lamp-server^` package name.
```bash sudo apt update && sudo apt install lamp-server^ ```
-You're prompted to install the packages and other dependencies. This process installs the minimum required PHP extensions needed to use PHP with MySQL.
+You're prompted to install the packages and other dependencies. This process installs the minimum required PHP extensions needed to use PHP with MySQL.
## Verify Apache
Check the version of MySQL with the following command (note the capital `V` para
mysql -V ```
-To help secure the installation of MySQL, including setting a root password, run the `mysql_secure_installation` script.
+To help secure the installation of MySQL, including setting a root password, run the `mysql_secure_installation` script.
```bash sudo mysql_secure_installation
virtual-machines Tutorial Secure Web Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/tutorial-secure-web-server.md
Last updated 04/09/2023 -+ #Customer intent: As an IT administrator or developer, I want to learn how to secure a web server with TLS/SSL certificates so that I can protect my customer data on web applications that I build and run. # Tutorial: Use TLS/SSL certificates to secure a web server
-**Applies to:** :heavy_check_mark: Linux VMs
+**Applies to:** :heavy_check_mark: Linux VMs
To secure web servers, a Transport Layer Security (TLS), previously known as Secure Sockets Layer (SSL), certificate can be used to encrypt web traffic. These TLS/SSL certificates can be stored in Azure Key Vault, and allow secure deployments of certificates to Linux virtual machines (VMs) in Azure. In this tutorial you learn how to:
Rather than using a custom VM image that includes certificates baked-in, you inj
## Create an Azure Key Vault Before you can create a Key Vault and certificates, create a resource group with [az group create](/cli/azure/group). The following example creates a resource group named *myResourceGroupSecureWeb* in the *eastus* location:
-```azurecli-interactive
+```azurecli-interactive
az group create --name myResourceGroupSecureWeb --location eastus ``` Next, create a Key Vault with [az keyvault create](/cli/azure/keyvault) and enable it for use when you deploy a VM. Each Key Vault requires a unique name, and should be all lowercase. Replace *\<mykeyvault>* in the following example with your own unique Key Vault name:
-```azurecli-interactive
+```azurecli-interactive
keyvault_name=<mykeyvault>
az keyvault create \
    --resource-group myResourceGroupSecureWeb \
az keyvault create \
## Generate a certificate and store in Key Vault For production use, you should import a valid certificate signed by trusted provider with [az keyvault certificate import](/cli/azure/keyvault/certificate). For this tutorial, the following example shows how you can generate a self-signed certificate with [az keyvault certificate create](/cli/azure/keyvault/certificate) that uses the default certificate policy:
-```azurecli-interactive
+```azurecli-interactive
az keyvault certificate create \ --vault-name $keyvault_name \ --name mycert \
az keyvault certificate create \
### Prepare a certificate for use with a VM To use the certificate during the VM create process, obtain the ID of your certificate with [az keyvault secret list-versions](/cli/azure/keyvault/secret). Convert the certificate with [az vm secret format](/cli/azure/vm/secret#az-vm-secret-format). The following example assigns the output of these commands to variables for ease of use in the next steps:
-```azurecli-interactive
+```azurecli-interactive
secret=$(az keyvault secret list-versions \
    --vault-name $keyvault_name \
    --name mycert \
vm_secret=$(az vm secret format --secrets "$secret" -g myResourceGroupSecureWeb
### Create a cloud-init config to secure NGINX [Cloud-init](https://cloudinit.readthedocs.io) is a widely used approach to customize a Linux VM as it boots for the first time. You can use cloud-init to install packages and write files, or to configure users and security. As cloud-init runs during the initial boot process, there are no extra steps or required agents to apply your configuration.
-When you create a VM, certificates and keys are stored in the protected */var/lib/waagent/* directory. To automate adding the certificate to the VM and configuring the web server, use cloud-init. In this example, you install and configure the NGINX web server. You can use the same process to install and configure Apache.
+When you create a VM, certificates and keys are stored in the protected */var/lib/waagent/* directory. To automate adding the certificate to the VM and configuring the web server, use cloud-init. In this example, you install and configure the NGINX web server. You can use the same process to install and configure Apache.
Create a file named *cloud-init-web-server.txt* and paste the following configuration:
runcmd:
### Create a secure VM Now create a VM with [az vm create](/cli/azure/vm). The certificate data is injected from Key Vault with the `--secrets` parameter. You pass in the cloud-init config with the `--custom-data` parameter:
-```azurecli-interactive
+```azurecli-interactive
az vm create \ --resource-group myResourceGroupSecureWeb \ --name myVM \
It takes a few minutes for the VM to be created, the packages to install, and th
To allow secure web traffic to reach your VM, open port 443 from the Internet with [az vm open-port](/cli/azure/vm):
-```azurecli-interactive
+```azurecli-interactive
az vm open-port \ --resource-group myResourceGroupSecureWeb \ --name myVM \
virtual-machines Use Remote Desktop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/use-remote-desktop.md
-+ Last updated 03/28/2023
This article requires an existing Ubuntu 18.04 LTS or Ubuntu 20.04 LTS VM in Azu
- The [Azure CLI](quick-create-cli.md) - The [Azure portal](quick-create-portal.md)
-
+ ## Install a desktop environment on your Linux VM

Most Linux VMs in Azure don't have a desktop environment installed by default. Linux VMs are commonly managed using SSH connections rather than a desktop environment; however, there are several desktop environments that you can choose to install. Depending on your choice of desktop environment, it consumes up to 2 GB of disk space and takes up to ten minutes to both install and configure all the required packages.
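As a rough sketch of what such an installation can look like on Ubuntu (package choices are an assumption; the article's own steps may differ), a lightweight xfce desktop plus the xrdp service is typically installed like this:

```bash
sudo apt-get update
sudo apt-get install -y xfce4 xrdp        # desktop environment and RDP server
sudo systemctl enable --now xrdp          # start xrdp and enable it at boot
echo xfce4-session > ~/.xsession          # have RDP sessions launch xfce
```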
virtual-machines Lsv2 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/lsv2-series.md
description: Specifications for the Lsv2-series VMs.
-+ Last updated 06/01/2022
The Lsv2-series features high throughput, low latency, directly mapped local NVM
> > The high throughput and IOPs of the local disk makes the Lsv2-series VMs ideal for NoSQL stores such as Apache Cassandra and MongoDB which replicate data across multiple VMs to achieve persistence in the event of the failure of a single VM. >
-> To learn more, see Optimize performance on the Lsv2-series virtual machines for [Windows](../virtual-machines/windows/storage-performance.md) or [Linux](../virtual-machines/linux/storage-performance.md).
+> To learn more, see Optimize performance on the Lsv2-series virtual machines for [Windows](../virtual-machines/windows/storage-performance.md) or [Linux](../virtual-machines/linux/storage-performance.md).
[ACU](acu.md): 150-175<br> [Premium Storage](premium-storage-performance.md): Supported<br>
Bursting: Supported<br>
<sup>4</sup> Lsv2-series VMs do not provide host cache for data disk as it does not benefit the Lsv2 workloads.
-<sup>5</sup> Lsv2-series VMs can [burst](./disk-bursting.md) their disk performance for up to 30 minutes at a time.
+<sup>5</sup> Lsv2-series VMs can [burst](./disk-bursting.md) their disk performance for up to 30 minutes at a time.
<sup>6</sup> VMs with more than 64 vCPUs require one of these supported guest operating systems:
virtual-machines M Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/m-series.md
description: Specifications for the M-series VMs.
-+ Last updated 04/12/2023
M-series VMs feature Intel&reg; Hyper-Threading Technology.
<sup>3</sup> [Constrained core sizes available](./constrained-vcpu.md).
-<sup>4</sup> M-series VMs can [burst](./disk-bursting.md) their disk performance for up to 30 minutes at a time.
+<sup>4</sup> M-series VMs can [burst](./disk-bursting.md) their disk performance for up to 30 minutes at a time.
[!INCLUDE [virtual-machines-common-sizes-table-defs](../../includes/virtual-machines-common-sizes-table-defs.md)]
virtual-machines Migration Classic Resource Manager Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/migration-classic-resource-manager-cli.md
Title: Migrate VMs to Resource Manager using Azure CLI
+ Title: Migrate VMs to Resource Manager using Azure CLI
description: This article walks through the platform-supported migration of resources from classic to Azure Resource Manager by using Azure CLI.
Last updated 04/12/2023--++ # Migrate IaaS resources from classic to Azure Resource Manager by using Azure CLI
Here are a few best practices that we recommend as you evaluate migrating IaaS r
* If you have automated scripts that deploy your infrastructure and applications today, try to create a similar test setup by using those scripts for migration. Alternatively, you can set up sample environments by using the Azure portal. > [!IMPORTANT]
-> Application Gateways are not currently supported for migration from classic to Resource Manager. To migrate a classic virtual network with an Application gateway, remove the gateway before running a Prepare operation to move the network. After you complete the migration, reconnect the gateway in Azure Resource Manager.
+> Application Gateways are not currently supported for migration from classic to Resource Manager. To migrate a classic virtual network with an Application gateway, remove the gateway before running a Prepare operation to move the network. After you complete the migration, reconnect the gateway in Azure Resource Manager.
> >ExpressRoute gateways connecting to ExpressRoute circuits in another subscription cannot be migrated automatically. In such cases, remove the ExpressRoute gateway, migrate the virtual network and recreate the gateway. Please see [Migrate ExpressRoute circuits and associated virtual networks from the classic to the Resource Manager deployment model](../expressroute/expressroute-migration-classic-resource-manager.md) for more information. >
azure account set "<azure-subscription-name>"
``` > [!NOTE]
-> Registration is a one time step but it needs to be done once before attempting migration. Without registering you'll see the following error message
->
-> *BadRequest : Subscription is not registered for migration.*
->
->
+> Registration is a one-time step, but it must be done before attempting migration. Without registering, you'll see the following error message:
+>
+> *BadRequest : Subscription is not registered for migration.*
+>
+>
Register with the migration resource provider by using the following command. Note that in some cases, this command times out. However, the registration will be successful.
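A hedged sketch of that registration step with the classic Azure CLI follows; the provider name is an assumption based on the classic-to-Resource Manager migration provider this article describes:

```bash
# Register the subscription with the classic infrastructure migration provider.
azure provider register Microsoft.ClassicInfrastructureMigrate
```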
virtual-machines Restore Point Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/restore-point-troubleshooting.md
description: Symptoms, causes, and resolutions of restore point failures related
Last updated 04/12/2023 -+ # Troubleshoot restore point failures: Issues with the agent or extension
Most common restore point failures can be resolved by following the troubleshoot
If any extension is in a failed state, then it can interfere with the restore point operation. - In the Azure portal, go to **Virtual machines** > **Settings** > **Extensions** > **Extensions status** and check if all the extensions are in **provisioning succeeded** state. - Ensure all [extension issues](../virtual-machines/extensions/overview.md#troubleshoot-extensions) are resolved and retry the restore point operation.-- **Ensure COM+ System Application** is up and running. Also, the **Distributed Transaction Coordinator service** should be running as **Network Service account**.
+- Ensure the **COM+ System Application** service is up and running. Also, the **Distributed Transaction Coordinator** service should be running under the **Network Service** account.
Follow the troubleshooting steps in [troubleshoot COM+ and MSDTC issues](../backup/backup-azure-vms-troubleshoot.md#extensionsnapshotfailedcom--extensioninstallationfailedcom--extensioninstallationfailedmdtcextension-installationoperation-failed-due-to-a-com-error) in case of issues.
Restore points use the VM Snapshot Extension to take an application consistent s
- **Ensure VMSnapshot extension isn't in a failed state**: Follow the steps in [Troubleshooting](../backup/backup-azure-troubleshoot-vm-backup-fails-snapshot-timeout.md#usererrorvmprovisioningstatefailedthe-vm-is-in-failed-provisioning-state) to verify and ensure the Azure VM snapshot extension is healthy. - **Check if antivirus is blocking the extension**: Certain antivirus software can prevent extensions from executing.
-
+ At the time of the restore point failure, verify if there are log entries in **Event Viewer Application logs** with *faulting application name: IaaSBcdrExtension.exe*. If you see entries, the antivirus configured in the VM could be restricting the execution of the VMSnapshot extension. Test by excluding the following directories in the antivirus configuration and retry the restore point operation. - `C:\Packages\Plugins\Microsoft.Azure.RecoveryServices.VMSnapshot` - `C:\WindowsAzure\Logs\Plugins\Microsoft.Azure.RecoveryServices.VMSnapshot`
Restore points use the VM Snapshot Extension to take an application consistent s
- **Ensure DHCP is enabled inside the guest VM**: This is required to get the host or fabric address from DHCP for the restore point to work. If you need a static private IP, you should configure it through the **Azure portal**, or **PowerShell** and make sure the DHCP option inside the VM is enabled. [Learn more](#the-snapshot-status-cant-be-retrieved-or-a-snapshot-cant-be-taken). -- **Ensure the VSS writer service is up and running**:
+- **Ensure the VSS writer service is up and running**:
Follow these steps to [troubleshoot VSS writer issues](../backup/backup-azure-vms-troubleshoot.md#extensionfailedvsswriterinbadstatesnapshot-operation-failed-because-vss-writers-were-in-a-bad-state). ## Common issues
The Azure VM agent might be stopped, outdated, in an inconsistent state, or not
**Error code**: VMRestorePointInternalError
-**Error message**: Restore Point creation failed due to an internal execution error while creating VM snapshot. Please retry the operation after some time.
+**Error message**: Restore Point creation failed due to an internal execution error while creating VM snapshot. Please retry the operation after some time.
-After you trigger a restore point operation, the compute service starts the job by communicating with the VM backup extension to take a point-in-time snapshot. Any of the following conditions might prevent the snapshot from being triggered. If the snapshot isn't triggered, restore point creation will fail. Complete the following troubleshooting steps in the order listed, and then retry your operation:
+After you trigger a restore point operation, the compute service starts the job by communicating with the VM backup extension to take a point-in-time snapshot. Any of the following conditions might prevent the snapshot from being triggered. If the snapshot isn't triggered, restore point creation will fail. Complete the following troubleshooting steps in the order listed, and then retry your operation:
-**Cause 1: [The agent is installed in the VM, but it's unresponsive (for Windows VMs)](#the-agent-is-installed-in-the-vm-but-its-unresponsive-for-windows-vms)**
+**Cause 1: [The agent is installed in the VM, but it's unresponsive (for Windows VMs)](#the-agent-is-installed-in-the-vm-but-its-unresponsive-for-windows-vms)**
**Cause 2: [The agent installed in the VM is out of date (for Linux VMs)](#the-agent-installed-in-the-vm-is-out-of-date-for-linux-vms)**
After you trigger a restore point operation, the compute service starts the job
**Cause 5: [Application control solution is blocking IaaSBcdrExtension.exe](#application-control-solution-is-blocking-iaasbcdrextensionexe)** This error could also occur when an extension failure puts the VM into a failed provisioning state. If the preceding steps didn't resolve your issue, do the following:
-
+ In the Azure portal, go to **Virtual Machines** > **Settings** > **Extensions** and ensure all extensions are in **provisioning succeeded** state. [Learn more](states-billing.md) about Provisioning states. - If any extension is in a failed state, it can interfere with the restore point operation. Ensure the extension issues are resolved and retry the restore point operation.
Restore point operations fail if the COM+ service is not running or if there are
**Error code**: VMRestorePointClientError
-**Error message**: Restore Point creation failed due to insufficient memory available in COM+ memory quota. Please restart windows service "COM+ System Application" (COMSysApp). If the issue persists, restart the VM.
+**Error message**: Restore Point creation failed due to insufficient memory available in COM+ memory quota. Please restart windows service "COM+ System Application" (COMSysApp). If the issue persists, restart the VM.
Restore point operations fail if there's insufficient memory in the COM+ service. Restarting the COM+ System Application service and the VM usually frees up the memory. Once restarted, retry the restore point operation.
Restore point operations fail if there's insufficient memory in the COM+ service
**Error code**: VMRestorePointClientError
-**Error message**: Restore Point creation failed due to VSS Writers in bad state. Restart VSS Writer services and reboot VM.
+**Error message**: Restore Point creation failed due to VSS Writers in bad state. Restart VSS Writer services and reboot VM.
Restore point creation invokes VSS writers to flush in-memory IOs to the disk before taking snapshots to achieve application consistency. If the VSS writers are in a bad state, it affects the restore point creation operation. Restart the VSS writer service and restart the VM before retrying the operation.
-### VMRestorePointClientError - Restore Point creation failed due to failure in installation of Visual C++ Redistributable for Visual Studio 2012.
+### VMRestorePointClientError - Restore Point creation failed due to failure in installation of Visual C++ Redistributable for Visual Studio 2012.
-**Error code**: VMRestorePointClientError
+**Error code**: VMRestorePointClientError
-**Error message**: Restore Point creation failed due to failure in installation of Visual C++ Redistributable for Visual Studio 2012. Please install Visual C++ Redistributable for Visual Studio 2012. If you are observing issues with installation or if it is already installed and you are observing this error, please restart the VM to clean installation issues.
+**Error message**: Restore Point creation failed due to failure in installation of Visual C++ Redistributable for Visual Studio 2012. Please install Visual C++ Redistributable for Visual Studio 2012. If you are observing issues with installation or if it is already installed and you are observing this error, please restart the VM to clean installation issues.
Restore point operations require Visual C++ Redistributable for Visual Studio 2012. Download and install Visual C++ Redistributable for Visual Studio 2012, and then restart the VM before retrying the restore point operation.
The number of restore points across the restore point collections and resource g
### VMRestorePointClientError - Restore Point creation failed with the error "COM+ was unable to talk to the Microsoft Distributed Transaction Coordinator".
-**Error code**: VMRestorePointClientError
+**Error code**: VMRestorePointClientError
-**Error message**: Restore Point creation failed with the error "COM+ was unable to talk to the Microsoft Distributed Transaction Coordinator".
+**Error message**: Restore Point creation failed with the error "COM+ was unable to talk to the Microsoft Distributed Transaction Coordinator".
-Follow these steps to resolve this error:
+Follow these steps to resolve this error:
- Open services.msc from an elevated command prompt-- Make sure that **Log On As** value for **Distributed Transaction Coordinator** service is set to **Network Service** and the service is running.
+- Make sure that **Log On As** value for **Distributed Transaction Coordinator** service is set to **Network Service** and the service is running.
- If this service fails to start, reinstall this service. ### VMRestorePointClientError - Restore Point creation failed due to inadequate VM resources.
-**Error code**: VMRestorePointClientError
+**Error code**: VMRestorePointClientError
-**Error message**: Restore Point creation failed due to inadequate VM resources. Increase VM resources by changing the VM size and retry the operation. To resize the virtual machine, refer https://azure.microsoft.com/blog/resize-virtual-machines/.
+**Error message**: Restore Point creation failed due to inadequate VM resources. Increase VM resources by changing the VM size and retry the operation. To resize the virtual machine, refer https://azure.microsoft.com/blog/resize-virtual-machines/.
Creating a restore point requires enough compute resources to be available. If you get the above error when creating a restore point, you need to resize the VM to a larger size. Follow the steps in [how to resize your VM](https://azure.microsoft.com/blog/resize-virtual-machines/). Once the VM is resized, retry the restore point operation.
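For example, a resize with the Azure CLI might look like the following; the resource group, VM name, and target size are placeholders rather than values from this article:

```azurecli-interactive
# List the sizes available for the VM, then resize it to a larger size.
az vm list-vm-resize-options --resource-group myResourceGroup --name myVM --output table
az vm resize --resource-group myResourceGroup --name myVM --size Standard_D4s_v3
```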
Creating a restore point requires enough compute resource to be available. If yo
After you trigger creation of restore point, the compute service starts communicating with the VM snapshot extension to take a point-in-time snapshot. Any of the following conditions might prevent the snapshot from being triggered. If the snapshot isn't triggered, a restore point failure might occur. Complete the following troubleshooting step, and then retry your operation:
-**[The snapshot status can't be retrieved, or a snapshot can't be taken].(#the-snapshot-status-cant-be-retrieved-or-a-snapshot-cant-be-taken)**
+**[The snapshot status can't be retrieved, or a snapshot can't be taken](#the-snapshot-status-cant-be-retrieved-or-a-snapshot-cant-be-taken)**
### VMRestorePointClientError - RestorePoint creation failed since a concurrent 'Create RestorePoint' operation was triggered on the VM.
To check the restore points in progress, do the following steps:
3. Select **Settings** > **Restore points** to view all the restore points. If a restore point is in progress, wait for it to complete. 4. Retry creating a new restore point.
-### DiskRestorePointClientError - Keyvault associated with DiskEncryptionSet is not found.
+### DiskRestorePointClientError - Keyvault associated with DiskEncryptionSet is not found.
**Error code**: DiskRestorePointClientError
-**Error message**: Keyvault associated with DiskEncryptionSet not found. The resource may have been deleted due to which Restore Point creation failed. Please retry the operation after re-creating the missing resource with the same name.
+**Error message**: Keyvault associated with DiskEncryptionSet not found. The resource may have been deleted due to which Restore Point creation failed. Please retry the operation after re-creating the missing resource with the same name.
If you are creating restore points for a VM that has encrypted disks, you must ensure that the key vault where the keys are stored is available. The same keys are used to create encrypted restore points.
Restore points are supported only with API version 2022-03-01 or later. If you a
**Error message**: An internal execution error occurred. Please retry later.
-After you trigger creation of restore point, the compute service starts communicating with the VM snapshot extension to take a point-in-time snapshot. Any of the following conditions might prevent the snapshot from being triggered. If the snapshot isn't triggered, a restore point failure might occur. Complete the following troubleshooting steps in the order listed, and then retry your operation:
+After you trigger creation of restore point, the compute service starts communicating with the VM snapshot extension to take a point-in-time snapshot. Any of the following conditions might prevent the snapshot from being triggered. If the snapshot isn't triggered, a restore point failure might occur. Complete the following troubleshooting steps in the order listed, and then retry your operation:
-- **Cause 1: [The agent is installed in the VM, but it's unresponsive (for Windows VMs)](#the-agent-is-installed-in-the-vm-but-its-unresponsive-for-windows-vms)**. -- **Cause 2: [The agent installed in the VM is out of date (for Linux VMs)](#the-agent-installed-in-the-vm-is-out-of-date-for-linux-vms)**. -- **Cause 3: [The snapshot status can't be retrieved, or a snapshot can't be taken](#the-snapshot-status-cant-be-retrieved-or-a-snapshot-cant-be-taken)**.
+- **Cause 1: [The agent is installed in the VM, but it's unresponsive (for Windows VMs)](#the-agent-is-installed-in-the-vm-but-its-unresponsive-for-windows-vms)**.
+- **Cause 2: [The agent installed in the VM is out of date (for Linux VMs)](#the-agent-installed-in-the-vm-is-out-of-date-for-linux-vms)**.
+- **Cause 3: [The snapshot status can't be retrieved, or a snapshot can't be taken](#the-snapshot-status-cant-be-retrieved-or-a-snapshot-cant-be-taken)**.
- **Cause 4: [Compute service does not have permission to delete the old restore points because of a resource group lock](#remove-lock-from-the-recovery-point-resource-group)**. - **Cause 5**: There's an extension version/bits mismatch with the Windows version you're running, or the following module is corrupt:
- **C:\Packages\Plugins\Microsoft.Azure.RecoveryServices.VMSnapshot\\<extension version\>\iaasvmprovider.dll**
-
+ **C:\Packages\Plugins\Microsoft.Azure.RecoveryServices.VMSnapshot\\<extension version\>\iaasvmprovider.dll**
+ To resolve this issue, check whether the module is compatible with the x86 (32-bit) or x64 (64-bit) version of _regsvr32.exe_, and then follow these steps: 1. In the affected VM, go to **Control Panel** > **Programs and Features**.
After you trigger creation of restore point, the compute service starts communic
### OSProvisioningClientError - Restore points operation failed due to an error. For details, see restore point provisioning error Message details
-**Error code**: OSProvisioningClientError
+**Error code**: OSProvisioningClientError
**Error message**: OS Provisioning did not finish in the allotted time. This error occurred too many times consecutively from image. Make sure the image has been properly prepared (generalized).
virtual-machines Vm Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/vm-applications.md
Last updated 09/18/2023
-+ # VM Applications overview
Application packages provide benefits over other deployment and packaging method
- If you have Network Security Group (NSG) rules applied on your VM or scale set, downloading the packages from an internet repository might not be possible. And with storage accounts, downloading packages onto locked-down VMs would require setting up private links. -- Support for Block Blobs: This feature allows the handling of large files efficiently by breaking them into smaller, manageable blocks. Ideal for uploading large amounts of data, streaming, and background uploading.
+- Support for Block Blobs: This feature allows the handling of large files efficiently by breaking them into smaller, manageable blocks. Ideal for uploading large amounts of data, streaming, and background uploading.
## What are VM app packages?
The VM application packages use multiple resource types:
- **Only 25 applications per VM**: No more than 25 applications may be deployed to a VM at any point. -- **2GB application size**: The maximum file size of an application version is 2 GB.
+- **2GB application size**: The maximum file size of an application version is 2 GB.
- **No guarantees on reboots in your script**: If your script requires a reboot, the recommendation is to place that application last during deployment. While the code attempts to handle reboots, it may fail.
The VM application packages use multiple resource types:
There's no extra charge for using VM Application Packages, but you're charged for the following resources: -- Storage costs of storing each package and any replicas.
+- Storage costs of storing each package and any replicas.
- Network egress charges for replication of the first image version from the source region to the replicated regions. Subsequent replicas are handled within the region, so there are no extra charges. For more information on network egress, see [Bandwidth pricing](https://azure.microsoft.com/pricing/details/bandwidth/).
VM application versions are the deployable resource. Versions are defined with t
- A link to the configuration file for the VM application, in which you can include license files - Update string for how to update the VM application to a newer version - End-of-life date. End-of-life dates are informational; you're still able to deploy VM application versions past the end-of-life date.-- Exclude from latest. You can keep a version from being used as the latest version of the application.
+- Exclude from latest. You can keep a version from being used as the latest version of the application.
- Target regions for replication - Replica count per region
MyAppe.exe /S
> If your blob was originally named "myApp.exe" instead of "myapp", then the above script would have worked without setting the `packageFileName` property.
-## Command interpreter
+## Command interpreter
The default command interpreters are:
sudo yum install --downloadonly --downloaddir=/tmp/powershell powershell
sudo tar -cvzf powershell.tar.gz *.rpm ```
-4. This tar archive is the application package file.
+4. This tar archive is the application package file.
- The install command in this case is:
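A plausible install command for such a package is sketched below, assuming the tar archive is extracted and the bundled RPMs are then installed; the exact command in the article may differ:

```bash
# Extract the archive, then install the RPM packages it contains.
sudo tar -xvzf powershell.tar.gz
sudo yum install -y *.rpm
```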
Most third party applications in Windows are available as .exe or .msi installer
Installer executables typically launch a user interface (UI) and require someone to click through the UI. If the installer supports a silent mode parameter, include it in your installation string.
-Cmd.exe also expects executable files to have the extension `.exe`, so you need to rename the file to have the `.exe` extension.
+Cmd.exe also expects executable files to have the extension `.exe`, so you need to rename the file to have the `.exe` extension.
If I want to create a VM application package for `myApp.exe`, which ships as an executable, my VM Application is called 'myApp', so I write the command assuming the application package is in the current directory: ```terminal
-"move .\\myApp .\\myApp.exe & myApp.exe /S -config myApp_config"
+"move .\\myApp .\\myApp.exe & myApp.exe /S -config myApp_config"
```
-If the installer executable file doesn't support an uninstall parameter, you can sometimes look up the registry on a test machine to know here the uninstaller is located.
+If the installer executable file doesn't support an uninstall parameter, you can sometimes look up the registry on a test machine to learn where the uninstaller is located.
In the registry, the uninstall string is stored in `Computer\HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\<installed application name>\UninstallString` so I would use the contents as my remove command:
start /wait %windir%\\system32\\msiexec.exe /x $appname /quiet /forcerestart /lo
### Zipped files
-For .zip or other zipped files, rename and unzip the contents of the application package to the desired destination.
+For .zip or other zipped files, rename and unzip the contents of the application package to the desired destination.
Example install command:
To learn more about getting the status of VM extensions, see [Virtual machine ex
To get status of VM extensions, use [Get-AzVM](/powershell/module/az.compute/get-azvm): ```azurepowershell-interactive
-Get-AzVM -name <VM name> -ResourceGroupName <resource group name> -Status | convertto-json -Depth 10
+Get-AzVM -name <VM name> -ResourceGroupName <resource group name> -Status | convertto-json -Depth 10
``` To get status of scale set extensions, use [Get-AzVMSS](/powershell/module/az.compute/get-azvmss):
$result | ForEach-Object {
$res = @{ instanceId = $_.InstanceId; vmappStatus = $_.InstanceView.Extensions | Where-Object {$_.Name -eq "VMAppExtension"}} $resultSummary.Add($res) | Out-Null }
-$resultSummary | convertto-json -depth 5
+$resultSummary | convertto-json -depth 5
``` ## Error messages
virtual-machines Build Image With Packer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/build-image-with-packer.md
Last updated 03/31/2023 -+ # PowerShell: How to use Packer to create virtual machine images in Azure
-**Applies to:** :heavy_check_mark: Windows VMs
+**Applies to:** :heavy_check_mark: Windows VMs
Each virtual machine (VM) in Azure is created from an image that defines the Windows distribution and OS version. Images can include pre-installed applications and configurations. The Azure Marketplace provides many first- and third-party images for most common operating systems and application environments, or you can create your own custom images tailored to your needs. This article details how to use the open-source tool [Packer](https://www.packer.io/) to define and build custom images in Azure.
New-AzResourceGroup -Name $rgName -Location $location
## Create Azure credentials Packer authenticates with Azure using a service principal. An Azure service principal is a security identity that you can use with apps, services, and automation tools like Packer. You control and define the permissions as to what operations the service principal can perform in Azure.
-Create a service principal with [New-AzADServicePrincipal](/powershell/module/az.resources/new-azadserviceprincipal). The value for `-DisplayName` needs to be unique; replace with your own value as needed.
+Create a service principal with [New-AzADServicePrincipal](/powershell/module/az.resources/new-azadserviceprincipal). The value for `-DisplayName` needs to be unique; replace with your own value as needed.
```azurepowershell $sp = New-AzADServicePrincipal -DisplayName "PackerPrincipal" -role Contributor -scope /subscriptions/yyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyy
Get-AzPublicIPAddress `
To see your VM in action, including the IIS installation from the Packer provisioner, enter the public IP address into a web browser.
-![IIS default site](./media/build-image-with-packer/iis.png)
+![IIS default site](./media/build-image-with-packer/iis.png)
## Next steps
virtual-machines Disks Enable Host Based Encryption Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/disks-enable-host-based-encryption-powershell.md
$ipConfig = New-AzVmssIpConfig -Name "myIPConfig" -SubnetId $Vnet.Subnets[0].Id
# Enable encryption at host by specifying EncryptionAtHost parameter
-$VMSS = New-AzVmssConfig -Location $LocationName -SkuCapacity 2 -SkuName $VMSize -EncryptionAtHost
+$VMSS = New-AzVmssConfig -Location $LocationName -SkuCapacity 2 -SkuName $VMSize -OrchestrationMode "Flexible" -EncryptionAtHost
$VMSS = Add-AzVmssNetworkInterfaceConfiguration -Name "myVMSSNetworkConfig" -VirtualMachineScaleSet $VMSS -Primary $true -IpConfiguration $ipConfig
$ipConfig = New-AzVmssIpConfig -Name "myIPConfig" -SubnetId $Vnet.Subnets[0].Id
# Enable encryption at host by specifying EncryptionAtHost parameter
-$VMSS = New-AzVmssConfig -Location $LocationName -SkuCapacity 2 -SkuName $VMSize -EncryptionAtHost
+$VMSS = New-AzVmssConfig -Location $LocationName -SkuCapacity 2 -SkuName $VMSize -OrchestrationMode "Flexible" -EncryptionAtHost
$VMSS = Add-AzVmssNetworkInterfaceConfiguration -Name "myVMSSNetworkConfig" -VirtualMachineScaleSet $VMSS -Primary $true -IpConfiguration $ipConfig
virtual-machines Image Builder https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/image-builder.md
Last updated 11/10/2023
-+ # Create a Windows VM by using Azure VM Image Builder
-**Applies to:** :heavy_check_mark: Windows VMs
+**Applies to:** :heavy_check_mark: Windows VMs
In this article, you learn how to create a customized Windows image by using Azure VM Image Builder. The example in this article uses [customizers](../linux/image-builder-json.md#properties-customize) for customizing the image: - PowerShell (ScriptUri): Download and run a [PowerShell script](https://raw.githubusercontent.com/danielsollondon/azvmimagebuilder/master/testPsScript.ps1).
In this article, you learn how to create a customized Windows image by using Azu
- `osDiskSizeGB`: Can be used to increase the size of an image. - `identity`. Provides an identity for VM Image Builder to use during the build.
-Use the following sample JSON template to configure the image: [helloImageTemplateWin.json](https://raw.githubusercontent.com/danielsollondon/azvmimagebuilder/master/quickquickstarts/0_Creating_a_Custom_Windows_Managed_Image/helloImageTemplateWin.json).
+Use the following sample JSON template to configure the image: [helloImageTemplateWin.json](https://raw.githubusercontent.com/danielsollondon/azvmimagebuilder/master/quickquickstarts/0_Creating_a_Custom_Windows_Managed_Image/helloImageTemplateWin.json).
> [!NOTE] > Windows users can run the following Azure CLI examples on [Azure Cloud Shell](https://shell.azure.com) by using Bash.
Because you'll be using some pieces of information repeatedly, create some varia
```azurecli-interactive # Resource group name - we're using myWinImgBuilderRG in this example imageResourceGroup='myWinImgBuilderRG'
-# Region location
+# Region location
location='WestUS2' # Run output name runOutputName='aibWindows'
az group create -n $imageResourceGroup -l $location
VM Image Builder uses the provided [user-identity](../../active-directory/managed-identities-azure-resources/qs-configure-cli-windows-vm.md#user-assigned-managed-identity) to inject the image into the resource group. In this example, you create an Azure role definition with specific permissions for distributing the image. The role definition is then assigned to the user identity.
-## Create a user-assigned managed identity and grant permissions
+## Create a user-assigned managed identity and grant permissions
Create a user-assigned identity so that VM Image Builder can access the storage account where the script is stored.
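A minimal sketch of that step with the Azure CLI follows; the identity name is a placeholder, and the full article also creates a custom role definition and assigns it to this identity:

```azurecli-interactive
# Create a user-assigned managed identity, then capture its resource ID and client ID.
identityName=aibBuildUserId
az identity create --resource-group $imageResourceGroup --name $identityName
imgBuilderId=$(az identity show --resource-group $imageResourceGroup --name $identityName --query id --output tsv)
imgBuilderCliId=$(az identity show --resource-group $imageResourceGroup --name $identityName --query clientId --output tsv)
```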
vi helloImageTemplateWin.json
> For the source image, always [specify a version](../linux/image-builder-troubleshoot.md#the-build-step-failed-for-the-image-version). You can't specify `latest` as the version. > > If you add or change the resource group that the image is distributed to, make sure that the [permissions are set](#create-a-user-assigned-identity-and-set-permissions-on-the-resource-group) on the resource group.
-
+ ## Create the image Submit the image configuration to the VM Image Builder service by running the following commands:
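A hedged sketch of what the submission step typically looks like with `az resource create` follows; it reuses the resource group variable and the template name that appear later in this article, and the exact parameters may differ:

```azurecli-interactive
az resource create \
    --resource-group $imageResourceGroup \
    --resource-type Microsoft.VirtualMachineImages/imageTemplates \
    --name helloImageTemplateWin01 \
    --is-full-object \
    --properties @helloImageTemplateWin.json
```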
In the background, VM Image Builder also creates a staging resource group in you
> Don't delete the staging resource group directly. First, delete the image template artifact, which causes the staging resource group to be deleted. If the service reports a failure when you submit the image configuration template, do the following:-- See [Troubleshoot the Azure VM Image Builder service](../linux/image-builder-troubleshoot.md#troubleshoot-image-template-submission-errors).
+- See [Troubleshoot the Azure VM Image Builder service](../linux/image-builder-troubleshoot.md#troubleshoot-image-template-submission-errors).
- Before you try to resubmit the template, delete it by running the following commands: ```azurecli-interactive
az resource invoke-action \
--resource-group $imageResourceGroup \ --resource-type Microsoft.VirtualMachineImages/imageTemplates \ -n helloImageTemplateWin01 \
- --action Run
+ --action Run
``` Wait until the build is complete.
virtual-machines Template Description https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/template-description.md
description: Learn more about how the virtual machine resource is defined in an
-+ Last updated 04/11/2023
This example shows a typical resource section of a template for creating a speci
```json "resources": [
- {
- "apiVersion": "2016-04-30-preview",
- "type": "Microsoft.Compute/virtualMachines",
- "name": "[concat('myVM', copyindex())]",
+ {
+ "apiVersion": "2016-04-30-preview",
+ "type": "Microsoft.Compute/virtualMachines",
+ "name": "[concat('myVM', copyindex())]",
"location": "[resourceGroup().location]", "copy": {
- "name": "virtualMachineLoop",
+ "name": "virtualMachineLoop",
"count": "[parameters('numberOfInstances')]" }, "dependsOn": [
- "[concat('Microsoft.Network/networkInterfaces/myNIC', copyindex())]"
- ],
- "properties": {
- "hardwareProfile": {
- "vmSize": "Standard_DS1"
- },
- "osProfile": {
- "computername": "[concat('myVM', copyindex())]",
- "adminUsername": "[parameters('adminUsername')]",
- "adminPassword": "[parameters('adminPassword')]"
- },
- "storageProfile": {
- "imageReference": {
- "publisher": "MicrosoftWindowsServer",
- "offer": "WindowsServer",
- "sku": "2012-R2-Datacenter",
- "version": "latest"
- },
- "osDisk": {
+ "[concat('Microsoft.Network/networkInterfaces/myNIC', copyindex())]"
+ ],
+ "properties": {
+ "hardwareProfile": {
+ "vmSize": "Standard_DS1"
+ },
+ "osProfile": {
+ "computername": "[concat('myVM', copyindex())]",
+ "adminUsername": "[parameters('adminUsername')]",
+ "adminPassword": "[parameters('adminPassword')]"
+ },
+ "storageProfile": {
+ "imageReference": {
+ "publisher": "MicrosoftWindowsServer",
+ "offer": "WindowsServer",
+ "sku": "2012-R2-Datacenter",
+ "version": "latest"
+ },
+ "osDisk": {
"name": "[concat('myOSDisk', copyindex())]",
- "caching": "ReadWrite",
- "createOption": "FromImage"
+ "caching": "ReadWrite",
+ "createOption": "FromImage"
}, "dataDisks": [ {
This example shows a typical resource section of a template for creating a speci
"lun": 0, "createOption": "Empty" }
- ]
- },
- "networkProfile": {
- "networkInterfaces": [
- {
+ ]
+ },
+ "networkProfile": {
+ "networkInterfaces": [
+ {
"id": "[resourceId('Microsoft.Network/networkInterfaces',
- concat('myNIC', copyindex()))]"
- }
- ]
+ concat('myNIC', copyindex()))]"
+ }
+ ]
}, "diagnosticsProfile": { "bootDiagnostics": { "enabled": "true", "storageUri": "[concat('https://', variables('storageName'), '.blob.core.windows.net')]" }
- }
+ }
},
- "resources": [
- {
- "name": "Microsoft.Insights.VMDiagnosticsSettings",
- "type": "extensions",
- "location": "[resourceGroup().location]",
- "apiVersion": "2016-03-30",
- "dependsOn": [
- "[concat('Microsoft.Compute/virtualMachines/myVM', copyindex())]"
- ],
- "properties": {
- "publisher": "Microsoft.Azure.Diagnostics",
- "type": "IaaSDiagnostics",
- "typeHandlerVersion": "1.5",
- "autoUpgradeMinorVersion": true,
- "settings": {
- "xmlCfg": "[base64(concat(variables('wadcfgxstart'),
- variables('wadmetricsresourceid'),
+ "resources": [
+ {
+ "name": "Microsoft.Insights.VMDiagnosticsSettings",
+ "type": "extensions",
+ "location": "[resourceGroup().location]",
+ "apiVersion": "2016-03-30",
+ "dependsOn": [
+ "[concat('Microsoft.Compute/virtualMachines/myVM', copyindex())]"
+ ],
+ "properties": {
+ "publisher": "Microsoft.Azure.Diagnostics",
+ "type": "IaaSDiagnostics",
+ "typeHandlerVersion": "1.5",
+ "autoUpgradeMinorVersion": true,
+ "settings": {
+ "xmlCfg": "[base64(concat(variables('wadcfgxstart'),
+ variables('wadmetricsresourceid'),
concat('myVM', copyindex()),
- variables('wadcfgxend')))]",
- "storageAccount": "[variables('storageName')]"
- },
- "protectedSettings": {
- "storageAccountName": "[variables('storageName')]",
- "storageAccountKey": "[listkeys(variables('accountid'),
- '2015-06-15').key1]",
- "storageAccountEndPoint": "https://core.windows.net"
- }
- }
+ variables('wadcfgxend')))]",
+ "storageAccount": "[variables('storageName')]"
+ },
+ "protectedSettings": {
+ "storageAccountName": "[variables('storageName')]",
+ "storageAccountKey": "[listkeys(variables('accountid'),
+ '2015-06-15').key1]",
+ "storageAccountEndPoint": "https://core.windows.net"
+ }
+ }
}, { "name": "MyCustomScriptExtension",
This example shows a typical resource section of a template for creating a speci
"settings": { "fileUris": [ "[concat('https://', variables('storageName'),
- '.blob.core.windows.net/customscripts/start.ps1')]"
+ '.blob.core.windows.net/customscripts/start.ps1')]"
], "commandToExecute": "powershell.exe -ExecutionPolicy Unrestricted -File start.ps1" } }
- }
+ }
]
- }
+ }
] ```
When you deploy the example template, you enter values for the name and password
[Variables](../../azure-resource-manager/templates/syntax.md) make it easy for you to set up values in the template that are used repeatedly throughout it or that can change over time. This variables section is used in the example: ```json
-"variables": {
+"variables": {
"storageName": "mystore1",
- "accountid": "[concat('/subscriptions/', subscription().subscriptionId,
+ "accountid": "[concat('/subscriptions/', subscription().subscriptionId,
'/resourceGroups/', resourceGroup().name,
- '/providers/','Microsoft.Storage/storageAccounts/', variables('storageName'))]",
- "wadlogs": "<WadCfg>
- <DiagnosticMonitorConfiguration overallQuotaInMB=\"4096\" xmlns=\"http://schemas.microsoft.com/ServiceHosting/2010/10/DiagnosticsConfiguration\">
- <DiagnosticInfrastructureLogs scheduledTransferLogLevelFilter=\"Error\"/>
- <WindowsEventLog scheduledTransferPeriod=\"PT1M\" >
- <DataSource name=\"Application!*[System[(Level = 1 or Level = 2)]]\" />
- <DataSource name=\"Security!*[System[(Level = 1 or Level = 2)]]\" />
+ '/providers/','Microsoft.Storage/storageAccounts/', variables('storageName'))]",
+ "wadlogs": "<WadCfg>
+ <DiagnosticMonitorConfiguration overallQuotaInMB=\"4096\" xmlns=\"http://schemas.microsoft.com/ServiceHosting/2010/10/DiagnosticsConfiguration\">
+ <DiagnosticInfrastructureLogs scheduledTransferLogLevelFilter=\"Error\"/>
+ <WindowsEventLog scheduledTransferPeriod=\"PT1M\" >
+ <DataSource name=\"Application!*[System[(Level = 1 or Level = 2)]]\" />
+ <DataSource name=\"Security!*[System[(Level = 1 or Level = 2)]]\" />
<DataSource name=\"System!*[System[(Level = 1 or Level = 2)]]\" />
- </WindowsEventLog>",
+ </WindowsEventLog>",
"wadperfcounters": "<PerformanceCounters scheduledTransferPeriod=\"PT1M\"> <PerformanceCounterConfiguration counterSpecifier=\"\\Process(_Total)\\Thread Count\" sampleRate=\"PT15S\" unit=\"Count\"> <annotation displayName=\"Threads\" locale=\"en-us\"/> </PerformanceCounterConfiguration>
- </PerformanceCounters>",
- "wadcfgxstart": "[concat(variables('wadlogs'), variables('wadperfcounters'),
- '<Metrics resourceId=\"')]",
- "wadmetricsresourceid": "[concat('/subscriptions/', subscription().subscriptionId,
- '/resourceGroups/', resourceGroup().name ,
- '/providers/', 'Microsoft.Compute/virtualMachines/')]",
+ </PerformanceCounters>",
+ "wadcfgxstart": "[concat(variables('wadlogs'), variables('wadperfcounters'),
+ '<Metrics resourceId=\"')]",
+ "wadmetricsresourceid": "[concat('/subscriptions/', subscription().subscriptionId,
+ '/resourceGroups/', resourceGroup().name ,
+ '/providers/', 'Microsoft.Compute/virtualMachines/')]",
"wadcfgxend": "\"><MetricAggregation scheduledTransferPeriod=\"PT1H\"/> <MetricAggregation scheduledTransferPeriod=\"PT1M\"/> </Metrics></DiagnosticMonitorConfiguration> </WadCfg>"
-},
+},
``` When you deploy the example template, variable values are used for the name and identifier of the previously created storage account. Variables are also used to provide the settings for the diagnostic extension. Use the [best practices for creating Azure Resource Manager templates](../../azure-resource-manager/templates/best-practices.md) to help you decide how you want to structure the parameters and variables in your template.
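Although this article focuses on what the template contains, a template structured like this could be deployed with the Azure CLI; a minimal sketch, in which the resource group and file names are placeholders:

```azurecli-interactive
az deployment group create \
    --resource-group myResourceGroup \
    --template-file azuredeploy.json \
    --parameters adminUsername=azureuser numberOfInstances=3
```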
When you need more than one virtual machine for your application, you can use a
```json "copy": {
- "name": "virtualMachineLoop",
+ "name": "virtualMachineLoop",
"count": "[parameters('numberOfInstances')]" }, ```
When you need more than one virtual machine for your application, you can use a
Also, notice in the example that the loop index is used when specifying some of the values for the resource. For example, if you entered an instance count of three, the names of the operating system disks are myOSDisk1, myOSDisk2, and myOSDisk3: ```json
-"osDisk": {
+"osDisk": {
"name": "[concat('myOSDisk', copyindex())]",
- "caching": "ReadWrite",
- "createOption": "FromImage"
+ "caching": "ReadWrite",
+ "createOption": "FromImage"
} ```
Also, notice in the example that the loop index is used when specifying some of
Keep in mind that creating a loop for one resource in the template may require you to use the loop when creating or accessing other resources. For example, multiple VMs can't use the same network interface, so if your template loops through creating three VMs it must also loop through creating three network interfaces. When assigning a network interface to a VM, the loop index is used to identify it: ```json
-"networkInterfaces": [ {
+"networkInterfaces": [ {
"id": "[resourceId('Microsoft.Network/networkInterfaces',
- concat('myNIC', copyindex()))]"
+ concat('myNIC', copyindex()))]"
} ] ```
Most resources depend on other resources to work correctly. Virtual machines mus
```json "dependsOn": [
- "[concat('Microsoft.Network/networkInterfaces/', 'myNIC', copyindex())]"
+ "[concat('Microsoft.Network/networkInterfaces/', 'myNIC', copyindex())]"
], ```
Resource Manager deploys in parallel any resources that aren't dependent on anot
How do you know if a dependency is required? Look at the values you set in the template. If an element in the virtual machine resource definition points to another resource that is deployed in the same template, you need a dependency. For example, your example virtual machine defines a network profile: ```json
-"networkProfile": {
- "networkInterfaces": [ {
+"networkProfile": {
+ "networkInterfaces": [ {
"id": "[resourceId('Microsoft.Network/networkInterfaces',
- concat('myNIC', copyindex())]"
- } ]
+ concat('myNIC', copyindex()))]"
+ } ]
}, ```
Several profile elements are used when defining a virtual machine resource. Some
- [size](../sizes.md) - [name](/azure/architecture/best-practices/resource-naming) and credentials - disk and [operating system settings](cli-ps-findimage.md)-- [network interface](/previous-versions/azure/virtual-network/virtual-network-deploy-multinic-classic-ps)
+- [network interface](/previous-versions/azure/virtual-network/virtual-network-deploy-multinic-classic-ps)
- boot diagnostics ## Disks and images
-In Azure, vhd files can represent [disks or images](../managed-disks-overview.md). When the operating system in a vhd file is specialized to be a specific VM, it's referred to as a disk. When the operating system in a vhd file is generalized to be used to create many VMs, it's referred to as an image.
+In Azure, VHD files can represent [disks or images](../managed-disks-overview.md). When the operating system in a VHD file is specialized for a specific VM, it's referred to as a disk. When the operating system in a VHD file is generalized so it can be used to create many VMs, it's referred to as an image.
### Create new virtual machines and new disks from a platform image When you create a VM, you must decide what operating system to use. The imageReference element is used to define the operating system of a new VM. The example shows a definition for a Windows Server operating system: ```json
-"imageReference": {
- "publisher": "MicrosoftWindowsServer",
- "offer": "WindowsServer",
- "sku": "2012-R2-Datacenter",
- "version": "latest"
+"imageReference": {
+ "publisher": "MicrosoftWindowsServer",
+ "offer": "WindowsServer",
+ "sku": "2012-R2-Datacenter",
+ "version": "latest"
}, ```
If you want to create a Linux operating system, you might use this definition:
Configuration settings for the operating system disk are assigned with the osDisk element. The example defines a new managed disk with the caching mode set to **ReadWrite** and that the disk is being created from a [platform image](cli-ps-findimage.md): ```json
-"osDisk": {
+"osDisk": {
"name": "[concat('myOSDisk', copyindex())]",
- "caching": "ReadWrite",
- "createOption": "FromImage"
+ "caching": "ReadWrite",
+ "createOption": "FromImage"
}, ```
Configuration settings for the operating system disk are assigned with the osDis
If you want to create virtual machines from existing disks, remove the imageReference and the osProfile elements and define these disk settings: ```json
-"osDisk": {
+"osDisk": {
"osType": "Windows",
- "managedDisk": {
- "id": "[resourceId('Microsoft.Compute/disks', [concat('myOSDisk', copyindex())])]"
- },
+ "managedDisk": {
+ "id": "[resourceId('Microsoft.Compute/disks', [concat('myOSDisk', copyindex())])]"
+ },
"caching": "ReadWrite",
- "createOption": "Attach"
+ "createOption": "Attach"
}, ```
If you want to create virtual machines from existing disks, remove the imageRefe
If you want to create a virtual machine from a managed image, change the imageReference element and define these disk settings: ```json
-"storageProfile": {
+"storageProfile": {
"imageReference": { "id": "[resourceId('Microsoft.Compute/images', 'myImage')]" },
- "osDisk": {
+ "osDisk": {
"name": "[concat('myOSDisk', copyindex())]", "osType": "Windows",
- "caching": "ReadWrite",
- "createOption": "FromImage"
+ "caching": "ReadWrite",
+ "createOption": "FromImage"
} }, ```
You can optionally add data disks to the VMs. The [number of disks](../sizes.md)
{ "name": "[concat('myDataDisk', copyindex())]", "diskSizeGB": "100",
- "lun": 0,
+ "lun": 0,
"caching": "ReadWrite", "createOption": "Empty" }
You can optionally add data disks to the VMs. The [number of disks](../sizes.md)
Although [extensions](../extensions/features-windows.md) are a separate resource, they're closely tied to VMs. Extensions can be added as a child resource of the VM or as a separate resource. The example shows the [Diagnostics Extension](../extensions/diagnostics-template.md) being added to the VMs: ```json
-{
- "name": "Microsoft.Insights.VMDiagnosticsSettings",
- "type": "extensions",
- "location": "[resourceGroup().location]",
- "apiVersion": "2016-03-30",
- "dependsOn": [
- "[concat('Microsoft.Compute/virtualMachines/myVM', copyindex())]"
- ],
- "properties": {
- "publisher": "Microsoft.Azure.Diagnostics",
- "type": "IaaSDiagnostics",
- "typeHandlerVersion": "1.5",
- "autoUpgradeMinorVersion": true,
- "settings": {
- "xmlCfg": "[base64(concat(variables('wadcfgxstart'),
- variables('wadmetricsresourceid'),
+{
+ "name": "Microsoft.Insights.VMDiagnosticsSettings",
+ "type": "extensions",
+ "location": "[resourceGroup().location]",
+ "apiVersion": "2016-03-30",
+ "dependsOn": [
+ "[concat('Microsoft.Compute/virtualMachines/myVM', copyindex())]"
+ ],
+ "properties": {
+ "publisher": "Microsoft.Azure.Diagnostics",
+ "type": "IaaSDiagnostics",
+ "typeHandlerVersion": "1.5",
+ "autoUpgradeMinorVersion": true,
+ "settings": {
+ "xmlCfg": "[base64(concat(variables('wadcfgxstart'),
+ variables('wadmetricsresourceid'),
concat('myVM', copyindex()),
- variables('wadcfgxend')))]",
- "storageAccount": "[variables('storageName')]"
- },
- "protectedSettings": {
- "storageAccountName": "[variables('storageName')]",
- "storageAccountKey": "[listkeys(variables('accountid'),
- '2015-06-15').key1]",
- "storageAccountEndPoint": "https://core.windows.net"
- }
- }
+ variables('wadcfgxend')))]",
+ "storageAccount": "[variables('storageName')]"
+ },
+ "protectedSettings": {
+ "storageAccountName": "[variables('storageName')]",
+ "storageAccountKey": "[listkeys(variables('accountid'),
+ '2015-06-15').key1]",
+ "storageAccountEndPoint": "https://core.windows.net"
+ }
+ }
}, ```
There are many extensions that you can install on a VM, but the most useful is p
"settings": { "fileUris": [ "[concat('https://', variables('storageName'),
- '.blob.core.windows.net/customscripts/start.ps1')]"
+ '.blob.core.windows.net/customscripts/start.ps1')]"
], "commandToExecute": "powershell.exe -ExecutionPolicy Unrestricted -File start.ps1" }
virtual-machines Ubuntu Pro In Place Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/canonical/ubuntu-pro-in-place-upgrade.md
description: Learn how to do an in-place upgrade from Ubuntu Server to Ubuntu Pr
-+ Last updated 9/12/2023
virtual-machines Centos End Of Life https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/centos/centos-end-of-life.md
description: Understand your options for moving CentOS workloads
-+ Last updated 12/1/2023
virtual-machines Deploy Ibm Db2 Purescale Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/mainframe-rehosting/ibm/deploy-ibm-db2-purescale-azure.md
description: Learn how to deploy an example architecture used recently to migrat
editor: swread-+ -+ Last updated 04/19/2023
virtual-machines Install Openframe Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/mainframe-rehosting/tmaxsoft/install-openframe-azure.md
Last updated 04/19/2023
-+ # Install TmaxSoft OpenFrame on Azure
Now that the VM is created and you are logged on, you must perform a few setup s
1. To map the name **ofdemo** to the local IP address, modify `/etc/hosts` using any text editor. Assuming the IP address is `192.168.96.148`, this is the file before the change:
- 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
- ::1 localhost localhost.localdomain localhost6 localhost6.localdomain
+ 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
+ ::1 localhost localhost.localdomain localhost6 localhost6.localdomain
<IP Address> <your hostname> ``` - This is after the change: ```config
- 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
- ::1 localhost localhost.localdomain localhost6 localhost6.localdomain
+ 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
+ ::1 localhost localhost.localdomain localhost6 localhost6.localdomain
192.168.96.148 ofdemo ``` 2. Create groups and users: ```bash
- sudo adduser -d /home/oframe7 oframe7
+ sudo adduser -d /home/oframe7 oframe7
``` 3. Change the password for user oframe7:
Now that the VM is created and you are logged on, you must perform a few setup s
``` ```output
- New password:
- Retype new password:
+ New password:
+ Retype new password:
passwd: all authentication tokens updated successfully. ``` 4. Update the kernel parameters in `/etc/sysctl.conf` using any text editor: ```text
- kernel.shmall = 7294967296
+ kernel.shmall = 7294967296
kernel.sem = 10000 32000 10000 10000 ```
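To apply the updated kernel parameters in the running system without a reboot, the standard approach is to reload them (a minimal sketch):

```bash
sudo /sbin/sysctl -p
```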
Tibero provides the several key functions in the OpenFrame environment on Azure:
2. Copy the Tibero software to the Tibero user account (oframe). For example: ```bash
- tar -xzvf tibero6-bin-6_rel_FS04-linux64-121793-opt-tested.tar.gz
+ tar -xzvf tibero6-bin-6_rel_FS04-linux64-121793-opt-tested.tar.gz
mv license.xml /opt/tmaxdb/tibero6/license/ ```
Tibero provides the several key functions in the OpenFrame environment on Azure:
```text # Tibero6 ENV
- export TB_HOME=/opt/tmaxdb/tibero6
- export TB_SID=TVSAM export TB_PROF_DIR=$TB_HOME/bin/prof
- export LD_LIBRARY_PATH=$TB_HOME/lib:$TB_HOME/client/lib:$LD_LIBRARY_PATH
+ export TB_HOME=/opt/tmaxdb/tibero6
+ export TB_SID=TVSAM
+ export TB_PROF_DIR=$TB_HOME/bin/prof
+ export LD_LIBRARY_PATH=$TB_HOME/lib:$TB_HOME/client/lib:$LD_LIBRARY_PATH
export PATH=$TB_HOME/bin:$TB_HOME/client/bin:$PATH ```
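For the new environment variables to take effect in the current shell session, reload the profile (a minimal sketch):

```bash
source ~/.bash_profile
```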
Tibero provides the several key functions in the OpenFrame environment on Azure:
6. Modify `\$TB_HOME/client/config/tbdsn.tbr` using any text editor and put 127.0.0.1 instead of localhost as shown: ```text
- TVSAM=(
+ TVSAM=(
(INSTANCE=(HOST=127.0.0.1) (PT=8629) (DB_NAME=TVSAM)
Tibero provides the several key functions in the OpenFrame environment on Azure:
Creating agent table... Done. For details, check /opt/tmaxdb/tibero6/instance/TVSAM/log/system_init.log.
- **************************************************
+ **************************************************
* Tibero Database TVSAM is created successfully on Fri Aug 12 19:10:43 UTC 2016. * Tibero home directory ($TB_HOME) = * /opt/tmaxdb/tibero6
Tibero provides the several key functions in the OpenFrame environment on Azure:
* /opt/tmaxdb/tibero6/bin:/opt/tmaxdb/tibero6/client/bin * Initialization parameter file = * /opt/tmaxdb/tibero6/config/TVSAM.tip
- *
+ *
* Make sure that you always set up environment variables $TB_HOME and * $TB_SID properly before you run Tibero. ******************************************************************************
Tibero provides the several key functions in the OpenFrame environment on Azure:
8. To recycle Tibero, first shut it down using the `tbdown` command. For example: ```bash
- tbdown
+ tbdown
``` ```output
Tibero provides the several key functions in the OpenFrame environment on Azure:
```output Change core dump dir to /opt/tmaxdb/tibero6/bin/prof. Listener port = 8629
- Tibero 6
+ Tibero 6
TmaxData Corporation Copyright (c) 2008-. All rights reserved. Tibero instance started up (NORMAL mode). ```
Tibero provides the several key functions in the OpenFrame environment on Azure:
``` ```output
- tbSQL 6
+ tbSQL 6
TmaxData Corporation Copyright (c) 2008-. All rights reserved. Connected to Tibero. ```
Tibero provides the several key functions in the OpenFrame environment on Azure:
12. Boot Tibero and verify that the Tibero processes are running: ```bash
- tbboot
+ tbboot
ps -ef | egrep tbsvr ```
To install ODBC:
6. Edit the bash profile `~/.bash_profile` using any text editor and add the following: ```text
- # UNIX ODBC ENV
- export ODBC_HOME=$HOME/unixODBC
- export PATH=$ODBC_HOME/bin:$PATH
- export LD_LIBRARY_PATH=$ODBC_HOME/lib:$LD_LIBRARY_PATH
- export ODBCINI=$HOME/unixODBC/etc/odbc.ini
+ # UNIX ODBC ENV
+ export ODBC_HOME=$HOME/unixODBC
+ export PATH=$ODBC_HOME/bin:$PATH
+ export LD_LIBRARY_PATH=$ODBC_HOME/lib:$LD_LIBRARY_PATH
+ export ODBCINI=$HOME/unixODBC/etc/odbc.ini
export ODBCSYSINI=$HOME ```
To install ODBC:
[Tibero] Description = Tibero ODBC driver for Tibero6 Driver = /opt/tmaxdb/tibero6/client/lib/libtbodbc.so
- Setup =
- FileUsage =
- CPTimeout =
- CPReuse =
+ Setup =
+ FileUsage =
+ CPTimeout =
+ CPReuse =
Driver Logging = 7 [ODBC]
- Trace = NO
- TraceFile = /home/oframe7/odbc.log
- ForceTrace = Yes
- Pooling = No
+ Trace = NO
+ TraceFile = /home/oframe7/odbc.log
+ ForceTrace = Yes
+ Pooling = No
DEBUG = 1 ```
To install ODBC:
```config [TVSAM]
- Description = Tibero ODBC driver for Tibero6
- Driver = Tibero
- DSN = TVSAM
- SID = TVSAM
- User = tibero
+ Description = Tibero ODBC driver for Tibero6
+ Driver = Tibero
+ DSN = TVSAM
+ SID = TVSAM
+ User = tibero
password = tmax ``` 8. Create a symbolic link and validate the Tibero database connection: ```bash
- ln $ODBC_HOME/lib/libodbc.so $ODBC_HOME/lib/libodbc.so.1
+ ln $ODBC_HOME/lib/libodbc.so $ODBC_HOME/lib/libodbc.so.1
ln $ODBC_HOME/lib/libodbcinst.so $ODBC_HOME/lib/libodbcinst.so.1 isql TVSAM tibero tmax ```
The Base application server is installed before the individual services that Ope
- Modify the `base.properties` file accordingly, using any text editor: ```config
- OPENFRAME_HOME= <appropriate location for installation> ex. /opt/tmaxapp/OpenFrame TP_HOST_NAME=<your IP Hostname> ex. ofdemo
- TP_HOST_IP=<your IP Address> ex. 192.168.96.148
- TP_SHMKEY=63481
- TP_TPORTNO=6623
- TP_UNBLOCK_PORT=6291
- TP_NODE_NAME=NODE1
- TP_NODE_LIST=NODE1
- MASCAT_NAME=SYS1.MASTER.ICFCAT
- MASCAT_CREATE=YES
- DEFAULT_VOLSER=DEFVOL
- VOLADD_DEFINE=YES TSAM_USERNAME=tibero
- TSAM_PASSWORD=tmax
- TSAM_DATABASE=oframe
- DATASET_SHMKEY=63211
- DSLOCK_DATA=SYS1.DSLOCK.DATA
- DSLOCK_LOG=SYS1.DSLOCK.LOG
- DSLOCK_SEQ=dslock_seq.dat
- DSLOCK_CREATE=YES
+ OPENFRAME_HOME= <appropriate location for installation> ex. /opt/tmaxapp/OpenFrame TP_HOST_NAME=<your IP Hostname> ex. ofdemo
+ TP_HOST_IP=<your IP Address> ex. 192.168.96.148
+ TP_SHMKEY=63481
+ TP_TPORTNO=6623
+ TP_UNBLOCK_PORT=6291
+ TP_NODE_NAME=NODE1
+ TP_NODE_LIST=NODE1
+ MASCAT_NAME=SYS1.MASTER.ICFCAT
+ MASCAT_CREATE=YES
+ DEFAULT_VOLSER=DEFVOL
+ VOLADD_DEFINE=YES TSAM_USERNAME=tibero
+ TSAM_PASSWORD=tmax
+ TSAM_DATABASE=oframe
+ DATASET_SHMKEY=63211
+ DSLOCK_DATA=SYS1.DSLOCK.DATA
+ DSLOCK_LOG=SYS1.DSLOCK.LOG
+ DSLOCK_SEQ=dslock_seq.dat
+ DSLOCK_CREATE=YES
OPENFRAME_LICENSE_PATH=/opt/tmaxapp/license/OPENFRAME TMAX_LICENSE_PATH=/opt/tmaxapp/license/TMAX ``` 7. Execute the installer using the `base.properties` file. For example: ```bash
- chmod a+x OpenFrame_Base7_0_Linux_x86_64.bin
+ chmod a+x OpenFrame_Base7_0_Linux_x86_64.bin
./OpenFrame_Base7_0_Linux_x86_64.bin -f base.properties ```
The Base application server is installed before the individual services that Ope
```output total 44
- drwxrwxr-x. 4 oframe7 oframe7 61 Nov 30 16:57 UninstallerData
- drwxrwxr-x. 2 oframe7 oframe7 4096 Nov 30 16:57 bin
- drwxrwxr-x. 2 oframe7 oframe7 4096 Nov 30 16:57 cpm drwxrwxr-x. 2 oframe7 oframe7 4096 Nov 30 16:57 data
- drwxrwxr-x. 2 oframe7 oframe7 4096 Nov 30 16:57 include
- drwxrwxr-x. 2 oframe7 oframe7 8192 Nov 30 16:57 lib
- drwxrwxr-x. 6 oframe7 oframe7 48 Nov 30 16:57 log
- drwxrwxr-x. 2 oframe7 oframe7 6 Nov 30 16:57 profile
- drwxrwxr-x. 7 oframe7 oframe7 62 Nov 30 16:57 sample
- drwxrwxr-x. 2 oframe7 oframe7 6 Nov 30 16:57 schema
- drwxrwxr-x. 2 oframe7 oframe7 6 Nov 30 16:57 temp
- drwxrwxr-x. 3 oframe7 oframe7 16 Nov 30 16:57 shared
- drwxrwxr-x. 2 oframe7 oframe7 4096 Nov 30 16:58 license
- drwxrwxr-x. 23 oframe7 oframe7 4096 Nov 30 16:58 core
- drwxrwxr-x. 2 oframe7 oframe7 4096 Nov 30 16:58 config
- drwxrwxr-x. 2 oframe7 oframe7 4096 Nov 30 16:58 scripts
+ drwxrwxr-x. 4 oframe7 oframe7 61 Nov 30 16:57 UninstallerData
+ drwxrwxr-x. 2 oframe7 oframe7 4096 Nov 30 16:57 bin
+ drwxrwxr-x. 2 oframe7 oframe7 4096 Nov 30 16:57 cpm drwxrwxr-x. 2 oframe7 oframe7 4096 Nov 30 16:57 data
+ drwxrwxr-x. 2 oframe7 oframe7 4096 Nov 30 16:57 include
+ drwxrwxr-x. 2 oframe7 oframe7 8192 Nov 30 16:57 lib
+ drwxrwxr-x. 6 oframe7 oframe7 48 Nov 30 16:57 log
+ drwxrwxr-x. 2 oframe7 oframe7 6 Nov 30 16:57 profile
+ drwxrwxr-x. 7 oframe7 oframe7 62 Nov 30 16:57 sample
+ drwxrwxr-x. 2 oframe7 oframe7 6 Nov 30 16:57 schema
+ drwxrwxr-x. 2 oframe7 oframe7 6 Nov 30 16:57 temp
+ drwxrwxr-x. 3 oframe7 oframe7 16 Nov 30 16:57 shared
+ drwxrwxr-x. 2 oframe7 oframe7 4096 Nov 30 16:58 license
+ drwxrwxr-x. 23 oframe7 oframe7 4096 Nov 30 16:58 core
+ drwxrwxr-x. 2 oframe7 oframe7 4096 Nov 30 16:58 config
+ drwxrwxr-x. 2 oframe7 oframe7 4096 Nov 30 16:58 scripts
drwxrwxr-x. 2 oframe7 oframe7 25 Nov 30 16:58 volume_default ```
The Base application server is installed before the individual services that Ope
11. Shut down OpenFrame Base: ```bash
- tmdown
+ tmdown
``` ```output Do you really want to down whole Tmax? (y : n): y
- TMDOWN for node(NODE1) is starting:
- TMDOWN: SERVER(ofrsasvr:36) downed: Wed Sep 7 15:37:21 2016
- TMDOWN: SERVER(ofrdsedt:39) downed: Wed Sep 7 15:37:21 2016
- TMDOWN: SERVER(vtammgr:43) downed: Wed Sep 7 15:37:21 2016
- TMDOWN: SERVER(ofrcmsvr:40) downed: Wed Sep 7 15:37:21 2016
- TMDOWN: SERVER(ofrdmsvr:38) downed: Wed Sep 7 15:37:21 2016
- TMDOWN: SERVER(ofrlhsvr:37) downed: Wed Sep 7 15:37:21 2016
- TMDOWN: SERVER(ofruisvr:41) downed: Wed Sep 7 15:37:21 2016
- TMDOWN: SERVER(ofrsmlog:42) downed: Wed Sep 7 15:37:21 2016
- TMDOWN: CLH downed: Wed Sep 7 15:37:21 2016
- TMDOWN: CLL downed: Wed Sep 7 15:37:21 2016
- TMDOWN: TLM downed: Wed Sep 7 15:37:21 2016
- TMDOWN: TMM downed: Wed Sep 7 15:37:21 2016
+ TMDOWN for node(NODE1) is starting:
+ TMDOWN: SERVER(ofrsasvr:36) downed: Wed Sep 7 15:37:21 2016
+ TMDOWN: SERVER(ofrdsedt:39) downed: Wed Sep 7 15:37:21 2016
+ TMDOWN: SERVER(vtammgr:43) downed: Wed Sep 7 15:37:21 2016
+ TMDOWN: SERVER(ofrcmsvr:40) downed: Wed Sep 7 15:37:21 2016
+ TMDOWN: SERVER(ofrdmsvr:38) downed: Wed Sep 7 15:37:21 2016
+ TMDOWN: SERVER(ofrlhsvr:37) downed: Wed Sep 7 15:37:21 2016
+ TMDOWN: SERVER(ofruisvr:41) downed: Wed Sep 7 15:37:21 2016
+ TMDOWN: SERVER(ofrsmlog:42) downed: Wed Sep 7 15:37:21 2016
+ TMDOWN: CLH downed: Wed Sep 7 15:37:21 2016
+ TMDOWN: CLL downed: Wed Sep 7 15:37:21 2016
+ TMDOWN: TLM downed: Wed Sep 7 15:37:21 2016
+ TMDOWN: TMM downed: Wed Sep 7 15:37:21 2016
TMDOWN: TMAX is down ```
OpenFrame Batch consists of several components that simulate mainframe batch env
```config OPENFRAME_HOME = /opt/tmaxapp/OpenFrame
- DEFAULT_VOLSER=DEFVOL
- TP_NODE_NAME=NODE1
- TP_NODE_LIST=NODE1
- RESOURCE_SHMKEY=66991
- #JOBQ_DATASET_CREATE=YES
- #OUTPUTQ_DATASET_CREATE=YES
- DEFAULT_JCLLIB_CREATE=YES
- DEFAULT_PROCLIB_CREATE=YES
- DEFAULT_USERLIB_CREATE=YES
- TJES_USERNAME=tibero
- TJES_PASSWORD=tmax
- TJES_DATABASE=oframe
+ DEFAULT_VOLSER=DEFVOL
+ TP_NODE_NAME=NODE1
+ TP_NODE_LIST=NODE1
+ RESOURCE_SHMKEY=66991
+ #JOBQ_DATASET_CREATE=YES
+ #OUTPUTQ_DATASET_CREATE=YES
+ DEFAULT_JCLLIB_CREATE=YES
+ DEFAULT_PROCLIB_CREATE=YES
+ DEFAULT_USERLIB_CREATE=YES
+ TJES_USERNAME=tibero
+ TJES_PASSWORD=tmax
+ TJES_DATABASE=oframe
BATCH_TABLE_CREATE=YES ```
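The Batch installer is then run against this properties file in the same way as the Base installer shown earlier. A minimal sketch, assuming the installer binary and properties file names shown here (both are illustrative; use the actual file names from your OpenFrame distribution):
```bash
# Illustrative file names; substitute the Batch installer and properties file you actually have
chmod a+x OpenFrame_Batch_installer.bin
./OpenFrame_Batch_installer.bin -f batch.properties
```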
OpenFrame Batch consists of several components that simulate mainframe batch env
7. Execute the following commands: ```bash
- $$2 NODE1 (tmadm): quit
+ $$2 NODE1 (tmadm): quit
ADM quit for node (NODE1) ```
OpenFrame Batch consists of several components that simulate mainframe batch env
```output Do you really want to down whole Tmax? (y : n): y
- TMDOWN for node(NODE1) is starting:
- TMDOWN: SERVER(ofrsasvr:36) downed: Wed Sep 7 16:01:46 2016
+ TMDOWN for node(NODE1) is starting:
+ TMDOWN: SERVER(ofrsasvr:36) downed: Wed Sep 7 16:01:46 2016
TMDOWN: SERVER(obmjmsvr:44) downed: Wed Sep 7 16:01:46 2016
- TMDOWN: SERVER(vtammgr: 43) downed: Wed Sep 7 16:01:46 2016
- TMDOWN: SERVER(ofrcmsvr:40) downed: Wed Sep 7 16:01:46 2016
- TMDOWN: SERVER(obmjmsvr:45) downed: Wed Sep 7 16:01:46 2016
- TMDOWN: SERVER(obmjmsvr:46) downed: Wed Sep 7 16:01:46 2016
- TMDOWN: SERVER(ofrdmsvr:38) downed: Wed Sep 7 16:01:46 2016
- TMDOWN: SERVER(obmjmsvr:47) downed: Wed Sep 7 16:01:46 2016
- TMDOWN: SERVER(ofrdsedt:39) downed: Wed Sep 7 16:01:46 2016
- TMDOWN: SERVER(obmjschd:54) downed: Wed Sep 7 16:01:46 2016
- TMDOWN: SERVER(obmjinit:55) downed: Wed Sep 7 16:01:46 2016
- TMDOWN: SERVER(obmjmsvr:48) downed: Wed Sep 7 16:01:46 2016
- TMDOWN: SERVER(obmjspbk:57) downed: Wed Sep 7 16:01:46 2016
- TMDOWN: SERVER(obmjmsvr:49) downed: Wed Sep 7 16:01:46 2016
- TMDOWN: SERVER(obmjmsvr:50) downed: Wed Sep 7 16:01:46 2016
- TMDOWN: SERVER(obmjmsvr:51) downed: Wed Sep 7 16:01:46 2016
- TMDOWN: SERVER(ofrlhsvr:37) downed: Wed Sep 7 16:01:46 2016
- TMDOWN: SERVER(obmjmsvr:52) downed: Wed Sep 7 16:01:46 2016
- TMDOWN: SERVER(obmjmsvr:53) downed: Wed Sep 7 16:01:46 2016
- TMDOWN: SERVER(obmjhist:56) downed: Wed Sep 7 16:01:46 2016
- TMDOWN: SERVER(ofruisvr:41) downed: Wed Sep 7 16:01:46 2016
- TMDOWN: SERVER(obmtsmgr:59) downed: Wed Sep 7 16:01:46 2016
- TMDOWN: SERVER(ofrpmsvr:58) downed: Wed Sep 7 16:01:46 2016
- TMDOWN: SERVER(ofrsmlog:42) downed: Wed Sep 7 16:01:46 2016
- TMDOWN: CLL downed: Wed Sep 7 16:01:46 2016
- TMDOWN: TLM downed: Wed Sep 7 16:01:46 2016
- TMDOWN: CLH downed: Wed Sep 7 16:01:46 2016
- TMDOWN: TMM downed: Wed Sep 7 16:01:46 2016
+ TMDOWN: SERVER(vtammgr: 43) downed: Wed Sep 7 16:01:46 2016
+ TMDOWN: SERVER(ofrcmsvr:40) downed: Wed Sep 7 16:01:46 2016
+ TMDOWN: SERVER(obmjmsvr:45) downed: Wed Sep 7 16:01:46 2016
+ TMDOWN: SERVER(obmjmsvr:46) downed: Wed Sep 7 16:01:46 2016
+ TMDOWN: SERVER(ofrdmsvr:38) downed: Wed Sep 7 16:01:46 2016
+ TMDOWN: SERVER(obmjmsvr:47) downed: Wed Sep 7 16:01:46 2016
+ TMDOWN: SERVER(ofrdsedt:39) downed: Wed Sep 7 16:01:46 2016
+ TMDOWN: SERVER(obmjschd:54) downed: Wed Sep 7 16:01:46 2016
+ TMDOWN: SERVER(obmjinit:55) downed: Wed Sep 7 16:01:46 2016
+ TMDOWN: SERVER(obmjmsvr:48) downed: Wed Sep 7 16:01:46 2016
+ TMDOWN: SERVER(obmjspbk:57) downed: Wed Sep 7 16:01:46 2016
+ TMDOWN: SERVER(obmjmsvr:49) downed: Wed Sep 7 16:01:46 2016
+ TMDOWN: SERVER(obmjmsvr:50) downed: Wed Sep 7 16:01:46 2016
+ TMDOWN: SERVER(obmjmsvr:51) downed: Wed Sep 7 16:01:46 2016
+ TMDOWN: SERVER(ofrlhsvr:37) downed: Wed Sep 7 16:01:46 2016
+ TMDOWN: SERVER(obmjmsvr:52) downed: Wed Sep 7 16:01:46 2016
+ TMDOWN: SERVER(obmjmsvr:53) downed: Wed Sep 7 16:01:46 2016
+ TMDOWN: SERVER(obmjhist:56) downed: Wed Sep 7 16:01:46 2016
+ TMDOWN: SERVER(ofruisvr:41) downed: Wed Sep 7 16:01:46 2016
+ TMDOWN: SERVER(obmtsmgr:59) downed: Wed Sep 7 16:01:46 2016
+ TMDOWN: SERVER(ofrpmsvr:58) downed: Wed Sep 7 16:01:46 2016
+ TMDOWN: SERVER(ofrsmlog:42) downed: Wed Sep 7 16:01:46 2016
+ TMDOWN: CLL downed: Wed Sep 7 16:01:46 2016
+ TMDOWN: TLM downed: Wed Sep 7 16:01:46 2016
+ TMDOWN: CLH downed: Wed Sep 7 16:01:46 2016
+ TMDOWN: TMM downed: Wed Sep 7 16:01:46 2016
TMDOWN: TMAX is down ```
TACF Manager is an OpenFrame service module that controls user access to systems
3. Modify the TACF parameters: ```config
- OPENFRAME_HOME=/opt/tmaxapp/OpenFrame
- USE_OS_AUTH=NO
- TACF_USERNAME=tibero
- TACF_PASSWORD=tmax
- TACF_DATABASE=oframe
- TACF_TABLESPACE=TACF00
+ OPENFRAME_HOME=/opt/tmaxapp/OpenFrame
+ USE_OS_AUTH=NO
+ TACF_USERNAME=tibero
+ TACF_PASSWORD=tmax
+ TACF_DATABASE=oframe
+ TACF_TABLESPACE=TACF00
TACF_TABLE_CREATE=YES ```
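The TACF install log later in this section shows the installer being invoked with `-f tacf.properties -m SILENT`. A minimal sketch of that invocation (the installer binary name is illustrative; use the file from your OpenFrame distribution):
```bash
# Illustrative binary name; the -f/-m arguments match the command-line args echoed in the install log
chmod a+x OpenFrame_TACF_installer.bin
./OpenFrame_TACF_installer.bin -f tacf.properties -m SILENT
```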
TACF Manager is an OpenFrame service module that controls user access to systems
```output Wed Dec 07 17:36:42 EDT 2016
- Free Memory: 18703 kB
+ Free Memory: 18703 kB
Total Memory: 28800 kB
- 4 Command Line Args:
- 0: -f 1: tacf.properties
- 2: -m
- 3: SILENT
- java.class.path:
- /tmp/install.dir.41422/InstallerData
- /tmp/install.dir.41422/InstallerData/installer.zip
- ZGUtil.CLASS_PATH:
- /tmp/install.dir.41422/InstallerData
- tmp/install.dir.41422/InstallerData/installer.zip
- sun.boot.class.path:
+ 4 Command Line Args:
+ 0: -f 1: tacf.properties
+ 2: -m
+ 3: SILENT
+ java.class.path:
+ /tmp/install.dir.41422/InstallerData
+ /tmp/install.dir.41422/InstallerData/installer.zip
+ ZGUtil.CLASS_PATH:
+ /tmp/install.dir.41422/InstallerData
+ tmp/install.dir.41422/InstallerData/installer.zip
+ sun.boot.class.path:
/tmp/install.dir.41422/Linux/resource/jre/lib/resources.jar /tmp/install.dir.41422/Linux/resource/jre/lib/rt.jar /tmp/install.dir.41422/Linux/resource/jre/lib/sunrsasign.jar /tmp/install.dir.41422/Linux/resource/jre/lib/jsse.jar /tmp/install.dir.41422/Linux/resource/jre/lib/jce.jar /tmp/install.dir.41422/Linux/resource/jre/lib/charsets.jar /tmp/install.dir.41422/Linux/resource/jre/lib/jfr.jar /tmp/install.dir.41422/Linux/resource/jre/classes ``` 6. At the command prompt, type `tmboot` to restart OpenFrame. The output looks something like this: ```output
- TMBOOT for node(NODE1) is starting:
- Welcome to Tmax demo system: it will expire 2016/11/4
- Today: 2016/9/7
- TMBOOT: TMM is starting: Wed Sep 7 17:48:53 2016
- TMBOOT: CLL is starting: Wed Sep 7 17:48:53 2016
- TMBOOT: CLH is starting: Wed Sep 7 17:48:53 2016
- TMBOOT: TLM(tlm) is starting: Wed Sep 7 17:48:53 2016
- TMBOOT: SVR(ofrsasvr) is starting: Wed Sep 7 17:48:53 2016
- TMBOOT: SVR(ofrlhsvr) is starting: Wed Sep 7 17:48:53 2016
- TMBOOT: SVR(ofrdmsvr) is starting: Wed Sep 7 17:48:53 2016
- TMBOOT: SVR(ofrdsedt) is starting: Wed Sep 7 17:48:53 2016
- TMBOOT: SVR(ofrcmsvr) is starting: Wed Sep 7 17:48:53 2016
- TMBOOT: SVR(ofruisvr) is starting: Wed Sep 7 17:48:53 2016
- TMBOOT: SVR(ofrsmlog) is starting: Wed Sep 7 17:48:53 2016
- TMBOOT: SVR(vtammgr) is starting: Wed Sep 7 17:48:53 2016
- TMBOOT: SVR(obmjmsvr) is starting: Wed Sep 7 17:48:53 2016
- TMBOOT: SVR(obmjmsvr) is starting: Wed Sep 7 17:48:53 2016
- TMBOOT: SVR(obmjmsvr) is starting: Wed Sep 7 17:48:53 2016
- TMBOOT: SVR(obmjmsvr) is starting: Wed Sep 7 17:48:53 2016
- TMBOOT: SVR(obmjmsvr) is starting: Wed Sep 7 17:48:53 2016
- TMBOOT: SVR(obmjmsvr) is starting: Wed Sep 7 17:48:53 2016
- TMBOOT: SVR(obmjmsvr) is starting: Wed Sep 7 17:48:53 2016
- TMBOOT: SVR(obmjmsvr) is starting: Wed Sep 7 17:48:53 2016
- TMBOOT: SVR(obmjmsvr) is starting: Wed Sep 7 17:48:53 2016
- TMBOOT: SVR(obmjmsvr) is starting: Wed Sep 7 17:48:53 2016
- TMBOOT: SVR(obmjschd) is starting: Wed Sep 7 17:48:53 2016
- TMBOOT: SVR(obmjinit) is starting: Wed Sep 7 17:48:53 2016
- TMBOOT: SVR(obmjhist) is starting: Wed Sep 7 17:48:53 2016
- TMBOOT: SVR(obmjspbk) is starting: Wed Sep 7 17:48:53 2016
- TMBOOT: SVR(ofrpmsvr) is starting: Wed Sep 7 17:48:53 2016
- TMBOOT: SVR(obmtsmgr) is starting: Wed Sep 7 17:48:53 2016
+ TMBOOT for node(NODE1) is starting:
+ Welcome to Tmax demo system: it will expire 2016/11/4
+ Today: 2016/9/7
+ TMBOOT: TMM is starting: Wed Sep 7 17:48:53 2016
+ TMBOOT: CLL is starting: Wed Sep 7 17:48:53 2016
+ TMBOOT: CLH is starting: Wed Sep 7 17:48:53 2016
+ TMBOOT: TLM(tlm) is starting: Wed Sep 7 17:48:53 2016
+ TMBOOT: SVR(ofrsasvr) is starting: Wed Sep 7 17:48:53 2016
+ TMBOOT: SVR(ofrlhsvr) is starting: Wed Sep 7 17:48:53 2016
+ TMBOOT: SVR(ofrdmsvr) is starting: Wed Sep 7 17:48:53 2016
+ TMBOOT: SVR(ofrdsedt) is starting: Wed Sep 7 17:48:53 2016
+ TMBOOT: SVR(ofrcmsvr) is starting: Wed Sep 7 17:48:53 2016
+ TMBOOT: SVR(ofruisvr) is starting: Wed Sep 7 17:48:53 2016
+ TMBOOT: SVR(ofrsmlog) is starting: Wed Sep 7 17:48:53 2016
+ TMBOOT: SVR(vtammgr) is starting: Wed Sep 7 17:48:53 2016
+ TMBOOT: SVR(obmjmsvr) is starting: Wed Sep 7 17:48:53 2016
+ TMBOOT: SVR(obmjmsvr) is starting: Wed Sep 7 17:48:53 2016
+ TMBOOT: SVR(obmjmsvr) is starting: Wed Sep 7 17:48:53 2016
+ TMBOOT: SVR(obmjmsvr) is starting: Wed Sep 7 17:48:53 2016
+ TMBOOT: SVR(obmjmsvr) is starting: Wed Sep 7 17:48:53 2016
+ TMBOOT: SVR(obmjmsvr) is starting: Wed Sep 7 17:48:53 2016
+ TMBOOT: SVR(obmjmsvr) is starting: Wed Sep 7 17:48:53 2016
+ TMBOOT: SVR(obmjmsvr) is starting: Wed Sep 7 17:48:53 2016
+ TMBOOT: SVR(obmjmsvr) is starting: Wed Sep 7 17:48:53 2016
+ TMBOOT: SVR(obmjmsvr) is starting: Wed Sep 7 17:48:53 2016
+ TMBOOT: SVR(obmjschd) is starting: Wed Sep 7 17:48:53 2016
+ TMBOOT: SVR(obmjinit) is starting: Wed Sep 7 17:48:53 2016
+ TMBOOT: SVR(obmjhist) is starting: Wed Sep 7 17:48:53 2016
+ TMBOOT: SVR(obmjspbk) is starting: Wed Sep 7 17:48:53 2016
+ TMBOOT: SVR(ofrpmsvr) is starting: Wed Sep 7 17:48:53 2016
+ TMBOOT: SVR(obmtsmgr) is starting: Wed Sep 7 17:48:53 2016
TMBOOT: SVR(tmsvr) is starting: Wed Sep 7 17:48:53 2016 ```
TACF Manager is an OpenFrame service module that controls user access to systems
8. Execute the following commands in the bash terminal: ```bash
- $$2 NODE1 (tmadm): quit
+ $$2 NODE1 (tmadm): quit
``` ```output
TACF Manager is an OpenFrame service module that controls user access to systems
```bash tacfmgr
- ```output
- Input USERNAME : ROOT
+ ```output
+ Input USERNAME : ROOT
Input PASSWORD : SYS1 TACFMGR: TACF MANAGER START!!!
TACF Manager is an OpenFrame service module that controls user access to systems
9. Shut the server down using the `tmdown` command. The output looks something like this: ```bash
- tmdown
+ tmdown
``` ```output Do you really want to down whole Tmax? (y : n): y
- TMDOWN for node(NODE1) is starting:
- TMDOWN: SERVER(ofrlhsvr:37) downed: Wed Sep 7 17:50:50 2016
- TMDOWN: SERVER(ofrdsedt:39) downed: Wed Sep 7 17:50:50 2016
- TMDOWN: SERVER(obmjschd:54) downed: Wed Sep 7 17:50:50 2016
- TMDOWN: SERVER(obmjmsvr:47) downed: Wed Sep 7 17:50:50 2016
- TMDOWN: SERVER(obmjmsvr:48) downed: Wed Sep 7 17:50:50 2016
- TMDOWN: SERVER(ofrdmsvr:38) downed: Wed Sep 7 17:50:50 2016
- TMDOWN: SERVER(obmjmsvr:50) downed: Wed Sep 7 17:50:50 2016
- TMDOWN: SERVER(obmjhist:56) downed: Wed Sep 7 17:50:50 2016
- TMDOWN: SERVER(ofrsasvr:36) downed: Wed Sep 7 17:50:50 2016
- TMDOWN: SERVER(ofrcmsvr:40) downed: Wed Sep 7 17:50:50 2016
- TMDOWN: SERVER(obmjspbk:57) downed: Wed Sep 7 17:50:50 2016
- TMDOWN: SERVER(tmsvr:60) downed: Wed Sep 7 17:50:50 2016
- TMDOWN: SERVER(ofrpmsvr:58) downed: Wed Sep 7 17:50:50 2016
- TMDOWN: SERVER(obmtsmgr:59) downed: Wed Sep 7 17:50:50 2016
- TMDOWN: CLL downed: Wed Sep 7 17:50:50 2016
- TMDOWN: CLH downed: Wed Sep 7 17:50:50 2016
- TMDOWN: TLM downed: Wed Sep 7 17:50:50 2016
- TMDOWN: TMM downed: Wed Sep 7 17:50:50 2016
+ TMDOWN for node(NODE1) is starting:
+ TMDOWN: SERVER(ofrlhsvr:37) downed: Wed Sep 7 17:50:50 2016
+ TMDOWN: SERVER(ofrdsedt:39) downed: Wed Sep 7 17:50:50 2016
+ TMDOWN: SERVER(obmjschd:54) downed: Wed Sep 7 17:50:50 2016
+ TMDOWN: SERVER(obmjmsvr:47) downed: Wed Sep 7 17:50:50 2016
+ TMDOWN: SERVER(obmjmsvr:48) downed: Wed Sep 7 17:50:50 2016
+ TMDOWN: SERVER(ofrdmsvr:38) downed: Wed Sep 7 17:50:50 2016
+ TMDOWN: SERVER(obmjmsvr:50) downed: Wed Sep 7 17:50:50 2016
+ TMDOWN: SERVER(obmjhist:56) downed: Wed Sep 7 17:50:50 2016
+ TMDOWN: SERVER(ofrsasvr:36) downed: Wed Sep 7 17:50:50 2016
+ TMDOWN: SERVER(ofrcmsvr:40) downed: Wed Sep 7 17:50:50 2016
+ TMDOWN: SERVER(obmjspbk:57) downed: Wed Sep 7 17:50:50 2016
+ TMDOWN: SERVER(tmsvr:60) downed: Wed Sep 7 17:50:50 2016
+ TMDOWN: SERVER(ofrpmsvr:58) downed: Wed Sep 7 17:50:50 2016
+ TMDOWN: SERVER(obmtsmgr:59) downed: Wed Sep 7 17:50:50 2016
+ TMDOWN: CLL downed: Wed Sep 7 17:50:50 2016
+ TMDOWN: CLH downed: Wed Sep 7 17:50:50 2016
+ TMDOWN: TLM downed: Wed Sep 7 17:50:50 2016
+ TMDOWN: TMM downed: Wed Sep 7 17:50:50 2016
TMDOWN: TMAX is down ```
ProSort is a utility used in batch transactions for sorting data.
4. Create a license subdirectory and copy the license file there. For example: ```bash
- cd /opt/tmaxapp/prosort
- mkdir license
+ cd /opt/tmaxapp/prosort
+ mkdir license
cp /opt/tmaxsw/oflicense/prosort/license.xml /opt/tmaxapp/prosort/license ```
ProSort is a utility used in batch transactions for sorting data.
```text # PROSORT
- PROSORT_HOME=/opt/tmaxapp/prosort
- PROSORT_SID=gbg
- PATH=$PATH:$PROSORT_HOME/bin LD_LIBRARY_PATH=$PROSORT_HOME/lib:$LD_LIBRARY_PATH LIBPATH$PROSORT_HOME/lib:$LIBPATH
- export PROSORT_HOME PROSORT_SID
- PATH LD_LIBRARY_PATH LIBPATH
- PATH=$PATH:$OPENFRAME_HOME/shbin
+ PROSORT_HOME=/opt/tmaxapp/prosort
+ PROSORT_SID=gbg
+ PATH=$PATH:$PROSORT_HOME/bin LD_LIBRARY_PATH=$PROSORT_HOME/lib:$LD_LIBRARY_PATH LIBPATH=$PROSORT_HOME/lib:$LIBPATH
+ export PROSORT_HOME PROSORT_SID
+ PATH LD_LIBRARY_PATH LIBPATH
+ PATH=$PATH:$OPENFRAME_HOME/shbin
export PATH ```
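After saving the profile, a quick sanity check (a minimal sketch) confirms ProSort is on the `PATH`:
```bash
# Reload the profile and print the ProSort version (see the usage output later in this section)
source ~/.bash_profile
prosort -v
```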
ProSort is a utility used in batch transactions for sorting data.
7. Create the configuration file. For example: ```bash
- cd /opt/tmaxapp/prosort/config
- ./gen_tip.sh
+ cd /opt/tmaxapp/prosort/config
+ ./gen_tip.sh
``` ```output
ProSort is a utility used in batch transactions for sorting data.
8. Create the symbolic link. For example: ```bash
- cd /opt/tmaxapp/OpenFrame/util/
+ cd /opt/tmaxapp/OpenFrame/util/
ln -s DFSORT SORT ```
ProSort is a utility used in batch transactions for sorting data.
```output Usage: prosort [options] [sort script files] options
- -h Display this information
- -v Display version information
- -s Display state information
- -j Display profile information
+ -h Display this information
+ -v Display version information
+ -s Display state information
+ -j Display profile information
-x Use SyncSort compatible mode ```
OFCOBOL is the OpenFrame compiler that interprets the mainframe's COBOL programs
4. Accept the licensing agreement. When the installation is complete, the following appears: ```output
- Choose Install Folder
+ Choose Install Folder
-- Where would you like to install? Default Install Folder: /home/oframe7/OFCOBOL ENTER AN ABSOLUTE PATH, OR PRESS <ENTER> TO ACCEPT THE DEFAULT : /opt/tmaxapp/OFCOBOL
- INSTALL FOLDER IS: /opt/tmaxapp/OFCOBOL
+ INSTALL FOLDER IS: /opt/tmaxapp/OFCOBOL
IS THIS CORRECT? (Y/N): Y[oframe7@ofdemo ~]$ vi .bash_profile
- ============================================================================ Installing...
+ ============================================================================ Installing...
- [==================|==================|==================|==================]
+ [==================|==================|==================|==================]
[|||]
- =============================================================================== Installation Complete
+ =============================================================================== Installation Complete
-- Congratulations. OpenFrame_COBOL has been successfully installed PRESS <ENTER> TO EXIT THE INSTALLER
OFCOBOL is the OpenFrame compiler that interprets the mainframe's COBOL programs
- Here's the SYSLIB section after the change: ```config
- [SYSLIB] BIN_PATH=${OPENFRAME_HOME}/bin:${OPENFRAME_HOME}/util:${COBDIR}/bin:/usr/local/bin:/bin LIB_PATH=${OPENFRAME_HOME}/lib:${OPENFRAME_HOME}/core/lib:${TB_HOME}/client/lib:${COBDIR}/lib:/ usr/lib:/lib:/lib/i686:/usr/local/lib:${PROSORT_HOME}/lib:/opt/FSUNbsort/lib :${ODBC_HOME}/lib
+ [SYSLIB] BIN_PATH=${OPENFRAME_HOME}/bin:${OPENFRAME_HOME}/util:${COBDIR}/bin:/usr/local/bin:/bin LIB_PATH=${OPENFRAME_HOME}/lib:${OPENFRAME_HOME}/core/lib:${TB_HOME}/client/lib:${COBDIR}/lib:/usr/lib:/lib:/lib/i686:/usr/local/lib:${PROSORT_HOME}/lib:/opt/FSUNbsort/lib:${ODBC_HOME}/lib
:${OFCOB_HOME}/lib ``` 9. Review the `OpenFrame_COBOL_InstallLog.log` file in vi and verify that there are no errors. For example: ```bash
- cat $OFCOB_HOME/UninstallerData/log/OpenFrame_COBOL_InstallLog.log
+ cat $OFCOB_HOME/UninstallerData/log/OpenFrame_COBOL_InstallLog.log
``` ```output
- ……..
- Summary
+ ……..
+ Summary
- Installation: Successful.
- 131 Successes
- 0 Warnings
- 0 NonFatalErrors
+ Installation: Successful.
+ 131 Successes
+ 0 Warnings
+ 0 NonFatalErrors
0 FatalError ``` 10. Use the `ofcob --version` command and review the version number to verify the installation. For example: ```bash
- ofcob --version
+ ofcob --version
``` ```output
- OpenFrame COBOL Compiler 3.0.54
+ OpenFrame COBOL Compiler 3.0.54
CommitTag:: 645f3f6bf7fbe1c366a6557c55b96c48454f4bf ```
OFASM is the OpenFrame compiler that interprets the mainframe's assembler progra
```bash source .bash_profile
- ofasm --version
+ ofasm --version
``` ```output
- # TmaxSoft OpenFrameAssembler v3 r328
+ # TmaxSoft OpenFrameAssembler v3 r328
(3ff35168d34f6e2046b96415bbe374160fcb3a34) ```
OFASM is the OpenFrame compiler that interprets the mainframe's assembler progra
``` ```output
- # OFASM ENV
- export OFASM_HOME=/opt/tmaxapp/OFASM
- export OFASM_MACLIB=$OFASM_HOME/maclib/free_macro
- export PATH="${PATH}:$OFASM_HOME/bin:"
+ # OFASM ENV
+ export OFASM_HOME=/opt/tmaxapp/OFASM
+ export OFASM_MACLIB=$OFASM_HOME/maclib/free_macro
+ export PATH="${PATH}:$OFASM_HOME/bin:"
export LD_LIBRARY_PATH="./:$OFASM_HOME/lib:$LD_LIBRARY_PATH" ```
OFASM is the OpenFrame compiler that interprets the mainframe's assembler progra
7. Validate the `OpenFrame_ASM_InstallLog.log` file, and verify that there are no errors. For example: ```bash
- cat $OFASM_HOME/UninstallerData/log/OpenFrame_ASM_InstallLog.log
+ cat $OFASM_HOME/UninstallerData/log/OpenFrame_ASM_InstallLog.log
``` ```output
- ……..
- Summary
+ ……..
+ Summary
Installation: Successful.
- 55 Successes
- 0 Warnings
- 0 NonFatalErrors
+ 55 Successes
+ 0 Warnings
+ 0 NonFatalErrors
0 FatalErrors ```
OSC is the OpenFrame environment similar to IBM CICS that supports high-speed OL
3. Execute the installer using the properties file as shown: ```bash
- chmod a+x OpenFrame_OSC7_0_Fix2_Linux_x86_64.bin
+ chmod a+x OpenFrame_OSC7_0_Fix2_Linux_x86_64.bin
./OpenFrame_OSC7_0_Fix2_Linux_x86_64.bin -f osc.properties ```
OSC is the OpenFrame environment similar to IBM CICS that supports high-speed OL
5. Review the `OpenFrame_OSC7_0_Fix2_InstallLog.log` file. It should look something like this: ```output
- Summary
-
+ Summary
+
Installation: Successful. 233 Successes
OSC is the OpenFrame environment similar to IBM CICS that supports high-speed OL
vtammgr TPFMAGENT
- #BATCH
+ #BATCH
#BATCH#obmtsmgr #BATCH#ofrpmsvr #BATCH#obmjmsvr
OSC is the OpenFrame environment similar to IBM CICS that supports high-speed OL
#TACF #TACF#tmsvr After changes #BATCH
- #BASE obmtsmgr
+ #BASE obmtsmgr
ofrsasvr ofrpmsvr ofrlhsvr obmjmsvr ofrdmsvr obmjschd
OSC is the OpenFrame environment similar to IBM CICS that supports high-speed OL
```bash cp /home/oframe7/oflicense/ofonline/licosc.dat $OPENFRAME_HOME/license
- cd $OPENFRAME_HOME/license
- ls -l
+ cd $OPENFRAME_HOME/license
+ ls -l
``` ```output
- -rwxr-xr-x. 1 oframe mqm 80 Sep 12 01:37 licosc.dat
- -rwxr-xr-x. 1 oframe mqm 80 Sep 8 09:40 lictacf.dat
+ -rwxr-xr-x. 1 oframe mqm 80 Sep 12 01:37 licosc.dat
+ -rwxr-xr-x. 1 oframe mqm 80 Sep 8 09:40 lictacf.dat
-rwxrwxr-x. 1 oframe mqm 80 Sep 3 11:54 lictjes.da ```
OSC is the OpenFrame environment similar to IBM CICS that supports high-speed OL
```output OSCBOOT : pre-processing [ OK ]
- TMBOOT for node(NODE1) is starting:
- Welcome to Tmax demo system: it will expire 2016/11/4
- Today: 2016/9/12
- TMBOOT: TMM is starting: Mon Sep 12 01:40:25 2016
- TMBOOT: CLL is starting: Mon Sep 12 01:40:25 2016
- TMBOOT: CLH is starting: Mon Sep 12 01:40:25 2016
- TMBOOT: TLM(tlm) is starting: Mon Sep 12 01:40:25 2016
+ TMBOOT for node(NODE1) is starting:
+ Welcome to Tmax demo system: it will expire 2016/11/4
+ Today: 2016/9/12
+ TMBOOT: TMM is starting: Mon Sep 12 01:40:25 2016
+ TMBOOT: CLL is starting: Mon Sep 12 01:40:25 2016
+ TMBOOT: CLH is starting: Mon Sep 12 01:40:25 2016
+ TMBOOT: TLM(tlm) is starting: Mon Sep 12 01:40:25 2016
``` 11. To verify that the process status is ready, use the `si` command in the `tmadmin` console. All the processes should display RDY in the **status** column, as in the sketch that follows.
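A minimal sketch of that check from the shell:
```bash
# Open the tmadmin console and list server status
tmadmin
# at the tmadmin prompt, enter:
#   si
# every server process should report RDY in the status column; type quit to exit
```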
Before installing JEUS, install the Apache Ant package, which provides the libra
```text # Ant ENV
- export ANT_HOME=$HOME/ant
+ export ANT_HOME=$HOME/ant
export PATH=$HOME/ant/bin:$PATH ```
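After saving the profile, a quick check (a minimal sketch) confirms Ant is available before the JEUS installation:
```bash
# Reload the profile and print the Ant version
source ~/.bash_profile
ant -version
```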
Before installing JEUS, install the Apache Ant package, which provides the libra
9. Update the `~/.bash_profile` file with the JEUS variables as shown: ```text
- # JEUS ENV
- export JEUS_HOME=/opt/tmaxui/jeus7 PATH="/opt/tmaxui/jeus7/bin:/opt/tmaxui/jeus7/lib/system:/opt/tmaxui/jeus7/webserver/bin:$ {PATH}"
+ # JEUS ENV
+ export JEUS_HOME=/opt/tmaxui/jeus7 PATH="/opt/tmaxui/jeus7/bin:/opt/tmaxui/jeus7/lib/system:/opt/tmaxui/jeus7/webserver/bin:${PATH}"
export PATH ```
Before installing JEUS, install the Apache Ant package, which provides the libra
11. *Optional*. Create an alias for easy shutdown and boot of JEUS components, using the following commands:
- ```bash
+ ```bash
# JEUS alias alias dsboot='startDomainAdminServer -domain jeus_domain -u administrator -p jeusadmin'
- alias msboot='startManagedServer -domain jeus_domain -server server1 -u administrator -p jeusadmin'
- alias msdown=`jeusadmin -u administrator -p tmax1234 "stop-server server1"'
+ alias msboot='startManagedServer -domain jeus_domain -server server1 -u administrator -p jeusadmin'
+ alias msdown='jeusadmin -u administrator -p tmax1234 "stop-server server1"'
alias dsdown='jeusadmin -domain jeus_domain -u administrator -p tmax1234 "local-shutdown"' ```
Before installing JEUS, install the Apache Ant package, which provides the libra
``` For example, `http://192.168.92.133:9736/webadmin/login`. The logon screen appears:
-
+ ![JEUS WebAdmin logon screen](media/jeus-01.png) > [!NOTE]
OFGW Is the OpenFrame gateway that supports communication between the 3270 termi
5. Verify that the URL for OFGW is working as expected: ```text
- Type URL
+ Type URL
http://192.168.92.133:8088/webterminal/ and press enter < IP > :8088/webterminal/ ```
OFManager provides operation and management functions for OpenFrame in the web e
8. Verify that the URL for OFManager is working as expected: ```text
- Type URL http://192.168.92.133:8088/ofmanager and press enter < IP > : < PORT > ofmanager Enter ID: ROOT
+ Type URL http://192.168.92.133:8088/ofmanager and press enter < IP > : < PORT > ofmanager Enter ID: ROOT
Password: SYS1 ```
virtual-machines Configure Oracle Asm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/oracle/configure-oracle-asm.md
description: Quickly get Oracle ASM up and running in your Azure environment.
-+ Last updated 07/13/2022
# Set up Oracle ASM on an Azure Linux virtual machine
-**Applies to:** :heavy_check_mark: Linux VMs
+**Applies to:** :heavy_check_mark: Linux VMs
Azure virtual machines provide a fully configurable and flexible computing environment. This tutorial covers basic Azure virtual machine deployment combined with the installation and configuration of Oracle Automatic Storage Management (ASM). You learn how to:
The .ssh directory and key files are created. For more information, refer to [Cr
### Create a resource group
-To create a resource group, use the [az group create](/cli/azure/group) command. An Azure resource group is a logical container in which Azure resources are deployed and managed.
+To create a resource group, use the [az group create](/cli/azure/group) command. An Azure resource group is a logical container in which Azure resources are deployed and managed.
```azurecli $ az group create --name ASMOnAzureLab --location westus
$ az network vnet create \
--resource-group ASMOnAzureLab \ --name AzureBastionSubnet \ --vnet-name asmVnet \
- --address-prefixes 10.0.1.0/24
+ --address-prefixes 10.0.1.0/24
``` 2. Create public IP for Bastion
$ az network vnet create \
$ az network public-ip create \ --resource-group ASMOnAzureLab \ --name asmBastionIP \
- --sku Standard
+ --sku Standard
``` 3. Create the Azure Bastion resource. It takes about 10 minutes for the resource to deploy. A minimal sketch of the command follows.
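This sketch reuses the virtual network, subnet, and public IP created in the previous steps; the Bastion name matches the `asmBastion` resource referenced later in this article:
```azurecli
az network bastion create \
    --resource-group ASMOnAzureLab \
    --name asmBastion \
    --vnet-name asmVnet \
    --public-ip-address asmBastionIP \
    --location westus
```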
az vm create --resource-group ASMOnAzureLab \
--vnet-name asmVnet \ --subnet asmSubnet1 \ --public-ip-sku Basic \
- --nsg ""
+ --nsg ""
``` ### Connect to asmVM
This lab requires a swap file on the lab virtual machine. Complete following ste
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sdd 8:48 0 40G 0 disk ====> Data disk 2 (40GB) sdb 8:16 0 20G 0 disk ====> Swap file disk (20GB)
- sr0 11:0 1 628K 0 rom
- fd0 2:0 1 4K 0 disk
+ sr0 11:0 1 628K 0 rom
+ fd0 2:0 1 4K 0 disk
sdc 8:32 0 40G 0 disk ====> Data disk 1 (40GB)
- sda 8:0 0 30G 0 disk
+ sda 8:0 0 30G 0 disk
├─sda2 8:2 0 29G 0 part /
- ├─sda14 8:14 0 4M 0 part
+ ├─sda14 8:14 0 4M 0 part
├─sda15 8:15 0 495M 0 part /boot/efi └─sda1 8:1 0 500M 0 part /boot ```
This lab requires a swap file on the lab virtual machine. Complete following ste
```output NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
- sdd 8:48 0 40G 0 disk
- sdb 8:16 0 20G 0 disk
+ sdd 8:48 0 40G 0 disk
+ sdb 8:16 0 20G 0 disk
└─sdb1 8:17 0 20G 0 part ====> Newly created partition
- sr0 11:0 1 628K 0 rom
- fd0 2:0 1 4K 0 disk
- sdc 8:32 0 40G 0 disk
- sda 8:0 0 30G 0 disk
+ sr0 11:0 1 628K 0 rom
+ fd0 2:0 1 4K 0 disk
+ sdc 8:32 0 40G 0 disk
+ sda 8:0 0 30G 0 disk
├─sda2 8:2 0 29G 0 part /
- ├─sda14 8:14 0 4M 0 part
+ ├─sda14 8:14 0 4M 0 part
├─sda15 8:15 0 495M 0 part /boot/efi └─sda1 8:1 0 500M 0 part /boot ```
This lab requires a swap file on the lab virtual machine. Complete following ste
In the output, you see a line for swap disk partition **/dev/sdb1**, note down the **UUID**. ```output
- /dev/sdb1: UUID="00000000-0000-0000-0000-000000000000" TYPE="xfs" PARTLABEL="xfspart" PARTUUID="...."
+ /dev/sdb1: UUID="00000000-0000-0000-0000-000000000000" TYPE="xfs" PARTLABEL="xfspart" PARTUUID="...."
``` 6. Paste UUID from previous step into the following command and run it. This command ensures proper mounting of drive every time system reboots.
This lab requires a swap file on the lab virtual machine. Complete following ste
To install Oracle ASM, complete the following steps.
-For more information about installing Oracle ASM, see [Oracle ASMLib Downloads for Oracle Linux 7](https://www.oracle.com/linux/downloads/linux-asmlib-v7-downloads.html).
+For more information about installing Oracle ASM, see [Oracle ASMLib Downloads for Oracle Linux 7](https://www.oracle.com/linux/downloads/linux-asmlib-v7-downloads.html).
1. You need to log in as root to continue with the ASM installation, if you haven't already done so.
For more information about installing Oracle ASM, see [Oracle ASMLib Downloads f
2. Run these additional commands to install Oracle ASM components: ```bash
- $ yum list | grep oracleasm
+ $ yum list | grep oracleasm
``` The output of the command looks like: ```output
- kmod-oracleasm.x86_64 2.0.8-28.0.1.el7 ol7_latest
- oracleasm-support.x86_64 2.1.11-2.el7 ol7_latest
+ kmod-oracleasm.x86_64 2.0.8-28.0.1.el7 ol7_latest
+ oracleasm-support.x86_64 2.1.11-2.el7 ol7_latest
``` Continue the installation by running the following commands: ```bash
- $ yum -y install kmod-oracleasm.x86_64
- $ yum -y install oracleasm-support.x86_64
+ $ yum -y install kmod-oracleasm.x86_64
+ $ yum -y install oracleasm-support.x86_64
$ wget https://download.oracle.com/otn_software/asmlib/oracleasmlib-2.0.15-1.el7.x86_64.rpm
- $ yum -y install oracleasmlib-2.0.15-1.el7.x86_64.rpm
+ $ yum -y install oracleasmlib-2.0.15-1.el7.x86_64.rpm
$ rm -f oracleasmlib-2.0.15-1.el7.x86_64.rpm ```
For more information about installing Oracle ASM, see [Oracle ASMLib Downloads f
4. ASM requires specific users and roles to function correctly. The following commands create the prerequisite user accounts and groups. ```bash
- $ groupadd -g 54345 asmadmin
- $ groupadd -g 54346 asmdba
- $ groupadd -g 54347 asmoper
+ $ groupadd -g 54345 asmadmin
+ $ groupadd -g 54346 asmdba
+ $ groupadd -g 54347 asmoper
$ usermod -a -g oinstall -G oinstall,dba,asmdba,asmadmin,asmoper oracle ```
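A quick check (a minimal sketch) confirms the group memberships took effect:
```bash
# The output should list oinstall, dba, asmdba, asmadmin, and asmoper for the oracle user
id oracle
```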
For more information about installing Oracle ASM, see [Oracle ASMLib Downloads f
6. Create the app folder and change the owner. ```bash
- $ mkdir /u01/app/grid
+ $ mkdir /u01/app/grid
$ chown oracle:oinstall /u01/app/grid ```
To set up Oracle ASM, complete the following steps:
3. **1** to select the first partition 4. press **enter** for the default first sector 5. press **enter** for the default last sector
- 6. press **w** to write the changes to the partition table
+ 6. press **w** to write the changes to the partition table
```bash $ fdisk /dev/sdc
To set up Oracle ASM, complete the following steps:
```output Welcome to fdisk (util-linux 2.23.2).
-
+ Changes will remain in memory only, until you decide to write them. Be careful before using the write command.
-
+ Device does not contain a recognized partition table Building a new DOS disklabel with disk identifier 0x947f0a91.
-
+ The device presents a logical sector size that is smaller than the physical sector size. Aligning to a physical sector (or optimal I/O) size boundary is recommended, or performance may be impacted.
-
+ Command (m for help): n Partition type: p primary (0 primary, 0 extended, 4 free) e extended Select (default p): p Partition number (1-4, default 1): 1
- First sector (2048-104857599, default 2048):
+ First sector (2048-104857599, default 2048):
Using default value 2048
- Last sector, +sectors or +size{K,M,G} (2048-104857599, default 104857599):
+ Last sector, +sectors or +size{K,M,G} (2048-104857599, default 104857599):
Using default value 104857599 Partition 1 of type Linux and of size 50 GiB is set
-
+ Command (m for help): w The partition table has been altered!
-
+ Calling ioctl() to re-read partition table. Syncing disks. ```
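The second data disk needs the same treatment before ASM disks can be created on it later. A minimal sketch, assuming `/dev/sdd` is data disk 2 as shown in the earlier `lsblk` output:
```bash
# Repeat the fdisk sequence (n, p, 1, Enter, Enter, w) for the second data disk, run as root
fdisk /dev/sdd
# Verify that both data disk partitions now exist
lsblk
```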
To set up Oracle ASM, complete the following steps:
6. Check the Oracle ASM service status and start the Oracle ASM service: ```bash
- $ oracleasm status
+ $ oracleasm status
``` ```output
To set up Oracle ASM, complete the following steps:
1. Create first disk ```bash
- $ oracleasm createdisk VOL1 /dev/sdc1
+ $ oracleasm createdisk VOL1 /dev/sdc1
``` 2. The output of the command should look like:
To set up Oracle ASM, complete the following steps:
3. Create remaining disks ```bash
- $ oracleasm createdisk VOL2 /dev/sdd1
+ $ oracleasm createdisk VOL2 /dev/sdd1
``` >[!NOTE]
To set up Oracle ASM, complete the following steps:
9. Change passwords for the root and oracle users. **Make note of these new passwords** because you'll use them later during the installation. ```bash
- $ passwd oracle
+ $ passwd oracle
$ passwd root ```
To download and prepare the Oracle Grid Infrastructure software, complete the fo
```PowerShell $asmVMid=$(az vm show --resource-group ASMOnAzureLab --name asmVM --query 'id' --output tsv)
-
+ az network bastion tunnel --name asmBastion --resource-group ASMOnAzureLab --target-resource-id $asmVMid --resource-port 22 --port 57500 ```
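With the tunnel open in one shell, you can reach `asmVM` through the forwarded local port from another shell. A minimal sketch (the user name and file name are illustrative; use the VM's admin account and the Grid Infrastructure archive you downloaded):
```bash
# SSH to the VM through the Bastion tunnel on local port 57500
ssh -p 57500 azureuser@127.0.0.1
# or copy the downloaded Grid Infrastructure archive to the VM
scp -P 57500 grid_home.zip azureuser@127.0.0.1:/tmp
```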
To install Oracle Grid Infrastructure, complete the following steps:
```bash $ sudo su - oracle $ export DISPLAY=10.0.0.4:0.0
- $ cd /opt/grid
- $ ./gridSetup.sh
+ $ cd /opt/grid
+ $ ./gridSetup.sh
``` Oracle Grid Infrastructure 19c Installer opens on **asmXServer** VM. (It might take a few minutes for the installer to start.)
Complete the following steps to set up Oracle ASM.
Run the following to set the context. If you still have the shell open from the previous command, you may skip this step. ```bash
- $ sudo su - oracle
+ $ sudo su - oracle
$ export DISPLAY=10.0.0.4:0.0 ``` Launch the Oracle Automatic Storage Management Configuration Assistant ```bash
- $ cd /opt/grid/bin
+ $ cd /opt/grid/bin
$ ./asmca ```
The Oracle database software is already installed on the Azure Marketplace image
* Run the following to set the context. If you still have the shell open from the previous command, this may not be necessary. ```bash
- $ sudo su - oracle
+ $ sudo su - oracle
$ export DISPLAY=10.0.0.4:0.0 ```
The Oracle database software is already installed on the Azure Marketplace image
```azurecli $ az vm delete --resource-group ASMOnAzureLab --name asmXServer --force-deletion yes
-$ az network public-ip delete --resource-group ASMOnAzureLab --name asmXServerPublicIP
+$ az network public-ip delete --resource-group ASMOnAzureLab --name asmXServerPublicIP
``` ## Delete ASM On Azure Lab Setup
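When you're finished with the lab, deleting the resource group removes every resource created for it. A minimal sketch:
```azurecli
az group delete --name ASMOnAzureLab --yes --no-wait
```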
virtual-machines Configure Oracle Dataguard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/oracle/configure-oracle-dataguard.md
Title: Implement Oracle Data Guard on a Linux-based Azure virtual machine
+ Title: Implement Oracle Data Guard on a Linux-based Azure virtual machine
description: Quickly get Oracle Data Guard up and running in your Azure environment. -+ Last updated 03/23/2023
Create a resource group by using the [az group create](/cli/azure/group) command
```azurecli az group create \ --name $RESOURCE_GROUP \
- --location $LOCATION
+ --location $LOCATION
``` ### Create a virtual network with two subnets
-Create a virtual network where you'll connect all compute services. One subnet will host Azure Bastion, an Azure service that helps protect your databases from public access. The second subnet will host the two Oracle database VMs.
+Create a virtual network where you'll connect all compute services. One subnet will host Azure Bastion, an Azure service that helps protect your databases from public access. The second subnet will host the two Oracle database VMs.
Also, create a network security group that all services will reference to determine what ports are publicly exposed. Only port 443 will be exposed. The Azure Bastion service will open this port automatically when you create that service instance.
az network vnet create \
--resource-group $RESOURCE_GROUP \ --location $LOCATION \ --name $VNET_NAME \
- --address-prefix "10.0.0.0/16"
+ --address-prefix "10.0.0.0/16"
az network vnet subnet create \ --resource-group $RESOURCE_GROUP \ --name AzureBastionSubnet \
Enter the username and password, and then select the **Connect** button.
```bash sudo systemctl stop firewalld
-sudo systemctl disable firewalld
+sudo systemctl disable firewalld
``` ### Configure the environment for OracleVM1
virtual-machines Configure Oracle Golden Gate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/oracle/configure-oracle-golden-gate.md
description: Quickly get an Oracle Golden Gate up and running in your Azure envi
-+ Last updated 08/02/2018
Before you start, make sure that the Azure CLI has been installed. For more info
GoldenGate is logical replication software that enables real-time replication, filtering, and transformation of data from a source database to a target database. It ensures that changes in the source database are replicated in real time, keeping the target database up to date with the latest data.
-Use GoldenGate mainly for heterogeneous replication cases, such as replicating data from different source databases to a single database. For example, a data warehouse. You can also use it for cross-platform migrations, such as from SPARC and AIX to Linux x86 environments, and advanced high availability and scalability scenarios.
+Use GoldenGate mainly for heterogeneous replication cases, such as replicating data from different source databases to a single database. For example, a data warehouse. You can also use it for cross-platform migrations, such as from SPARC and AIX to Linux x86 environments, and advanced high availability and scalability scenarios.
Additionally, GoldenGate is also suitable for near-zero downtime migrations since it supports online migrations with minimal disruption to the source systems.
We use key file based authentication with ssh to connect to the Oracle Database
Location of key files depends on your source system. Windows: %USERPROFILE%\.ssh
-Linux: ~/.ssh
+Linux: ~/.ssh
If they don't exist, you can create a new key file pair.
$ az network vnet create \
--resource-group GoldenGateOnAzureLab \ --name AzureBastionSubnet \ --vnet-name ggVnet \
- --address-prefixes 10.0.1.0/24
+ --address-prefixes 10.0.1.0/24
``` 2. Create public IP for Bastion
$ az network vnet create \
$ az network public-ip create \ --resource-group GoldenGateOnAzureLab \ --name ggBastionIP \
- --sku Standard
+ --sku Standard
``` 3. Create Azure Bastion resource. It takes about 10 minutes for the resource to deploy.
$ az vm create \
--subnet ggSubnet1 \ --public-ip-address "" \ --nsg "" \
- --zone 1
+ --zone 1
``` #### Create ggVM2 (replicate)
Creating Pluggable Databases
Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/cdb1/cdb1.log" for more details. ```
-3. Set the ORACLE_SID and LD_LIBRARY_PATH variables.
+3. Set the ORACLE_SID and LD_LIBRARY_PATH variables.
```bash $ export ORACLE_SID=cdb1
Configure firewall to allow connections from ggVM1. Following command is run on
```bash $ sudo su -
-$ firewall-cmd --permanent --zone=trusted --add-source=10.0.0.5
+$ firewall-cmd --permanent --zone=trusted --add-source=10.0.0.5
$ firewall-cmd --reload $ exit ```
SQL> EXIT;
EXTRACT EXTORA USERID C##GGADMIN@cdb1, PASSWORD ggadmin RMTHOST 10.0.0.5, MGRPORT 7809
- RMTTRAIL ./dirdat/rt
+ RMTTRAIL ./dirdat/rt
DDL INCLUDE MAPPED
- DDLOPTIONS REPORT
+ DDLOPTIONS REPORT
LOGALLSUPCOLS UPDATERECORDFORMAT COMPACT TABLE pdb1.test.TCUSTMER;
SQL> EXIT;
USERID C##GGADMIN@cdb1, PASSWORD ggadmin RMTHOST 10.0.0.6, MGRPORT 7809 RMTTASK REPLICAT, GROUP INITREP
- TABLE pdb1.test.*, SQLPREDICATE 'AS OF SCN 2172191';
+ TABLE pdb1.test.*, SQLPREDICATE 'AS OF SCN 2172191';
``` ```bash
SQL> EXIT;
SQL> CREATE USER REPUSER IDENTIFIED BY REP_PASS CONTAINER=CURRENT; SQL> GRANT DBA TO REPUSER; SQL> EXEC DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE('REPUSER',CONTAINER=>'PDB1');
- SQL> CONNECT REPUSER/REP_PASS@PDB1
+ SQL> CONNECT REPUSER/REP_PASS@PDB1
SQL> EXIT; ```
SQL> EXIT;
``` ```
- GGSCI> EDIT PARAMS REPORA
+ GGSCI> EDIT PARAMS REPORA
``` When the vi editor opens, press `i` to switch to insert mode, paste the file contents, and then press the `Esc` key and type `:wq!` to save the file.
SQL> EXIT;
ASSUMETARGETDEFS DISCARDFILE ./dirrpt/tcustmer.dsc, APPEND USERID repuser@pdb1, PASSWORD REP_PASS
- MAP pdb1.test.*, TARGET pdb1.test.*;
+ MAP pdb1.test.*, TARGET pdb1.test.*;
``` ```bash
The replication has begun, and you can test it by inserting new records to TEST
* To view reports on **ggVM1**, run the following commands. ```bash
- GGSCI> VIEW REPORT EXTORA
+ GGSCI> VIEW REPORT EXTORA
``` * To view reports on **ggVM2**, run the following commands.
The replication has begun, and you can test it by inserting new records to TEST
* To view status and history on **ggVM1**, run the following commands. ```bash
- GGSCI> DBLOGIN USERID C##GGADMIN@CDB1, PASSWORD ggadmin
+ GGSCI> DBLOGIN USERID C##GGADMIN@CDB1, PASSWORD ggadmin
GGSCI> INFO EXTRACT EXTORA, DETAIL ``` * To view status and history on **ggVM2**, run the following commands. ```bash
- GGSCI> DBLOGIN USERID REPUSER@PDB1 PASSWORD REP_PASS
+ GGSCI> DBLOGIN USERID REPUSER@PDB1 PASSWORD REP_PASS
GGSCI> INFO REP REPORA, DETAIL ```
The replication has begun, and you can test it by inserting new records to TEST
```output Sending STATS request to Extract group EXTORA ...
-
+ Start of statistics at 2023-03-24 19:41:54.
-
+ DDL replication statistics (for all trails):
-
+ *** Total statistics since extract started *** Operations 0.00 Mapped operations 0.00 Unmapped operations 0.00 Other operations 0.00 Excluded operations 0.00
-
+ Output to ./dirdat/rt:
-
+ Extracting from PDB1.TEST.TCUSTORD to PDB1.TEST.TCUSTORD:
-
+ *** Total statistics since 2023-03-24 19:41:34 *** Total inserts 1.00 Total updates 0.00
The replication has begun, and you can test it by inserting new records to TEST
Total upserts 0.00 Total discards 0.00 Total operations 1.00
-
+ *** Daily statistics since 2023-03-24 19:41:34 *** Total inserts 1.00 Total updates 0.00
The replication has begun, and you can test it by inserting new records to TEST
Total upserts 0.00 Total discards 0.00 Total operations 1.00
-
+ *** Hourly statistics since 2023-03-24 19:41:34 *** Total inserts 1.00 Total updates 0.00
The replication has begun, and you can test it by inserting new records to TEST
Total upserts 0.00 Total discards 0.00 Total operations 1.00
-
+ *** Latest statistics since 2023-03-24 19:41:34 *** Total inserts 1.00 Total updates 0.00
The replication has begun, and you can test it by inserting new records to TEST
Total upserts 0.00 Total discards 0.00 Total operations 1.00
-
+ End of statistics. ```
ggXServer VM is only used during setup. You can safely delete it after completin
```azurecli $ az vm delete --resource-group GoldenGateOnAzureLab --name ggXServer --force-deletion yes
-$ az network public-ip delete --resource-group GoldenGateOnAzureLab --name ggXServerPublicIP
+$ az network public-ip delete --resource-group GoldenGateOnAzureLab --name ggXServerPublicIP
``` ## Delete Golden Gate On Azure Lab Setup
virtual-machines Oracle Database Backup Azure Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/oracle/oracle-database-backup-azure-backup.md
description: Learn how to back up and recover an Oracle Database instance by usi
-+ Last updated 01/28/2021 -+ # Back up and recover Oracle Database on an Azure Linux VM by using Azure Backup **Applies to:** :heavy_check_mark: Linux VMs
-This article demonstrates the use of Azure Backup to take disk snapshots of virtual machine (VM) disks, which include the Oracle Database files and the Oracle fast recovery area. By using Azure Backup, you can take full disk snapshots that are suitable as backups and are stored in a [Recovery Services vault](../../../backup/backup-azure-recovery-services-vault-overview.md).
+This article demonstrates the use of Azure Backup to take disk snapshots of virtual machine (VM) disks, which include the Oracle Database files and the Oracle fast recovery area. By using Azure Backup, you can take full disk snapshots that are suitable as backups and are stored in a [Recovery Services vault](../../../backup/backup-azure-recovery-services-vault-overview.md).
Azure Backup also provides application-consistent backups, which ensure that more fixes aren't required to restore the data. Application-consistent backups work with both file system and Oracle Automatic Storage Management (ASM) databases.
To prepare the environment, complete these steps:
The Oracle Database instance's archived redo log files play a crucial role in database recovery. They store the committed transactions needed to roll forward from a database snapshot taken in the past.
-When the database is in `ARCHIVELOG` mode, it archives the contents of online redo log files when they become full and switch. Together with a backup, they're required to achieve point-in-time recovery when the database is lost.
+When the database is in `ARCHIVELOG` mode, it archives the contents of online redo log files when they become full and switch. Together with a backup, they're required to achieve point-in-time recovery when the database is lost.
Oracle provides the capability to archive redo log files to different locations. The industry best practice is that at least one of those destinations should be on remote storage, so it's separate from the host storage and protected with independent snapshots. Azure Files meets those requirements.
During Oracle installation, we recommend that you use `backupdba` as the OS grou
1. Create a new backup user named `azbackup` that belongs to the OS group that you verified or created in the previous steps. Substitute `<group name>` with the name of the verified group. The user is also added to the `oinstall` group to enable it to open ASM disks. ```bash
- sudo useradd -g <group name> -G oinstall azbackup
+ sudo useradd -g <group name> -G oinstall azbackup
``` 1. Set up external authentication for the new backup user.
During Oracle installation, we recommend that you use `backupdba` as the OS grou
SQL> QUIT ```
-### Set up application-consistent backups
+### Set up application-consistent backups
1. Switch to the root user:
During Oracle installation, we recommend that you use `backupdba` as the OS grou
--vault-name myVault \ --backup-management-type AzureIaasVM \ --container-name vmoracle19c \
- --item-name vmoracle19c
+ --item-name vmoracle19c
``` 1. Monitor the progress of the backup job by using `az backup job list` and `az backup job show`.
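A minimal sketch of those monitoring commands, using the resource group and vault names from this article (`rg-oracle` and `myVault`); replace `<job-name>` with a job name from the list output:
```azurecli
az backup job list --resource-group rg-oracle --vault-name myVault --output table
az backup job show --resource-group rg-oracle --vault-name myVault --name <job-name>
```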
To set up your storage account and file share, run the following commands:
* The blob container name, at the end of `Config Blob Container Name`. In this example, it's `vmoracle19c-75aefd4b34c64dd39fdcd3db579783f2`. * The template name, at the end of `Template Blob Uri`. In this example, it's `azuredeployc009747a-0d2e-4ac9-9632-f695bf874693.json`.
-1. Use the values from the preceding step in the following command to assign variables in preparation for creating the VM. A shared access signature (SAS) key is generated for the storage container with a 30-minute duration.
+1. Use the values from the preceding step in the following command to assign variables in preparation for creating the VM. A shared access signature (SAS) key is generated for the storage container with a 30-minute duration.
```azurecli expiretime=$(date -u -d "30 minutes" '+%Y-%m-%dT%H:%MZ')
After the VM is restored, you should reassign the original IP address to the new
```azurecli az vm nic remove --nics vmoracle19cRestoredNICc2e8a8a4fc3f47259719d5523cd32dcf --resource-group rg-oracle --vm-name vmoracle19c ```
-
+ 1. Start the VM: ```azurecli
virtual-machines Oracle Database Backup Azure Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/oracle/oracle-database-backup-azure-storage.md
description: Learn how to back up and recover an Oracle Database instance to an
-+ Last updated 01/28/2021 -+ # Back up and recover Oracle Database on an Azure Linux VM by using Azure Files
To set up your storage account and file share, run the following commands:
If you get an error similar to the following example, the Common Internet File System (CIFS) package might not be installed on your Linux host: ```output
- mount: wrong fs type, bad option, bad superblock on //orabackup1.file.core.windows.net/orabackup
+ mount: wrong fs type, bad option, bad superblock on //orabackup1.file.core.windows.net/orabackup
``` To check if the CIFS package is installed, run the following command:
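The exact check isn't shown in this excerpt; a minimal sketch that verifies the `cifs-utils` package on a RHEL-family host and installs it if missing (package manager assumed to be `yum`):
```bash
# Query the package database; install cifs-utils only if it isn't already present
rpm -q cifs-utils || sudo yum -y install cifs-utils
```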
In this section, you use Oracle RMAN to take a full backup of the database and a
rman target / RMAN> configure snapshot controlfile name to '/mnt/orabkup/snapcf_ev.f'; RMAN> configure channel 1 device type disk format '/mnt/orabkup/%d/Full_%d_%U_%T_%s';
- RMAN> configure channel 2 device type disk format '/mnt/orabkup/%d/Full_%d_%U_%T_%s';
+ RMAN> configure channel 2 device type disk format '/mnt/orabkup/%d/Full_%d_%U_%T_%s';
``` 2. In this example, you're limiting the size of RMAN backup pieces to 4 GiB. However, the RMAN backup `maxpiecesize` value can go up to 4 TiB, which is the file size limit for Azure standard file shares and premium file shares. For more information, see [Azure Files scalability and performance targets](../../../storage/files/storage-files-scale-targets.md).
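A minimal sketch of how that limit can be applied to the channel configuration shown above (RMAN prompt; the 4-GiB cap and format string follow the values discussed in this step):
```bash
rman target /
# Cap each backup piece at 4 GiB on both disk channels
RMAN> configure channel 1 device type disk format '/mnt/orabkup/%d/Full_%d_%U_%T_%s' maxpiecesize 4G;
RMAN> configure channel 2 device type disk format '/mnt/orabkup/%d/Full_%d_%U_%T_%s' maxpiecesize 4G;
```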
virtual-machines Oracle Vm Solutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/oracle/oracle-vm-solutions.md
description: Learn about supported configurations and limitations of Oracle virt
-+ Last updated 04/11/2023
virtual-machines Byos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/redhat/byos.md
description: Learn about bring-your-own-subscription images for Red Hat Enterpri
-+ Last updated 06/10/2020
# Red Hat Enterprise Linux bring-your-own-subscription Gold Images in Azure
-**Applies to:** :heavy_check_mark: Linux VMs
+**Applies to:** :heavy_check_mark: Linux VMs
Red Hat Enterprise Linux (RHEL) images are available in Azure via a pay-as-you-go or bring-your-own-subscription (BYOS) (Red Hat Gold Image) model. This article provides an overview of the Red Hat Gold Images in Azure.
The following instructions walk you through the initial deployment process for a
``` 1. Accept the image terms.
-
+ Option 1 ```azurecli az vm image terms accept --publisher redhat --offer rhel-byos --plan <SKU value here> -o=jsonc
The following instructions walk you through the initial deployment process for a
``` Option 2 ```azurecli
- az vm image terms accept --urn <SKU value here>
+ az vm image terms accept --urn <SKU value here>
``` Example ```azurecli
The following script is an example. Replace the resource group, location, VM nam
$cred = New-Object System.Management.Automation.PSCredential("azureuser",$securePassword) Get-AzMarketplaceTerms -Publisher redhat -Product rhel-byos -Name rhel-lvm87 Set-AzMarketplaceTerms -Accept -Publisher redhat -Product rhel-byos -Name rhel-lvm87
-
+ # Create a resource group New-AzResourceGroup -Name $resourceGroup -Location $location
The following script is an example. Replace the resource group, location, VM nam
# Create a virtual network $vnet = New-AzVirtualNetwork -ResourceGroupName $resourceGroup -Location $location -Name MYvNET -AddressPrefix 192.168.0.0/16 -Subnet $subnetConfig
-
+ # Create a public IP address and specify a DNS name $pip = New-AzPublicIpAddress -ResourceGroupName $resourceGroup -Location $location -Name "mypublicdns$(Get-Random)" -AllocationMethod Static -IdleTimeoutInMinutes 4
The following script is an example. Replace the resource group, location, VM nam
# Create a virtual machine configuration $vmConfig = New-AzVMConfig -VMName $vmName -VMSize Standard_D3_v2 | Set-AzVMOperatingSystem -Linux -ComputerName $vmName -Credential $cred |
- Set-AzVMSourceImage -PublisherName redhat -Offer rhel-byos -Skus rhel-lvm87 -Version latest |
+ Set-AzVMSourceImage -PublisherName redhat -Offer rhel-byos -Skus rhel-lvm87 -Version latest |
Add-AzVMNetworkInterface -Id $nic.Id Set-AzVMPlan -VM $vmConfig -Publisher redhat -Product rhel-byos -Name "rhel-lvm87"
virtual-machines Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/redhat/overview.md
description: Learn about the Red Hat product offerings available on Azure.
-+ Last updated 02/10/2020
# Red Hat workloads on Azure
-**Applies to:** :heavy_check_mark: Linux VMs
+**Applies to:** :heavy_check_mark: Linux VMs
Red Hat workloads are supported through a variety of offerings on Azure. Red Hat Enterprise Linux (RHEL) images are at the core of RHEL workloads, as is the Red Hat Update Infrastructure (RHUI). Red Hat JBoss EAP is also supported on Azure, see [Red Hat JBoss EAP](#red-hat-jboss-eap).
virtual-machines Redhat Extended Lifecycle Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/redhat/redhat-extended-lifecycle-support.md
Title: Red Hat Enterprise Linux Extended Lifecycle Support
+ Title: Red Hat Enterprise Linux Extended Lifecycle Support
description: Learn about adding Red Hat Enterprise Extended Lifecycle support add on -+ Last updated 04/16/2020
# Red Hat Enterprise Linux (RHEL) Extended Lifecycle Support
-**Applies to:** :heavy_check_mark: Linux VMs
+**Applies to:** :heavy_check_mark: Linux VMs
This article provides information on Extended Lifecycle Support for the Red Hat Enterprise images: * General Extended Update Support policy
-* Red Hat Enterprise Linux 6
+* Red Hat Enterprise Linux 6
## Red Hat Enterprise Linux Extended Update Support
virtual-machines Redhat In Place Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/redhat/redhat-in-place-upgrade.md
description: Learn how to do an in-place upgrade from Red Hat Enterprise 7.x ima
-+ Last updated 04/16/2020
# Red Hat Enterprise Linux in-place upgrades
-**Applies to:** :heavy_check_mark: Linux VMs
+**Applies to:** :heavy_check_mark: Linux VMs
->[!Note]
+>[!Note]
> Offerings of SQL Server on Red Hat Enterprise Linux don't support in-place upgrades on Azure.
->[!Important]
+>[!Important]
> Take a snapshot of the image before you start the upgrade as a precaution. ## What is RHEL in-place upgrade?
virtual-machines Redhat Rhui https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/redhat/redhat-rhui.md
description: Learn about Red Hat Update Infrastructure for on-demand Red Hat Ent
-+ Last updated 04/06/2023
For more information on Red Hat support policies for all versions of RHEL, see [
The Red Hat images provided in Azure Marketplace are connected by default to one of two different types of life-cycle repositories: - Non-EUS: Will have the latest available software published by Red Hat for their particular Red Hat Enterprise Linux (RHEL) repositories.-- Extended Update Support (EUS): Updates won't go beyond a specific RHEL minor release.
+- Extended Update Support (EUS): Updates won't go beyond a specific RHEL minor release.
> [!NOTE] > For more information on RHEL EUS, see [Red Hat Enterprise Linux Life Cycle](https://access.redhat.com/support/policy/updates/errata) and [Red Hat Enterprise Linux Extended Update Support Overview](https://access.redhat.com/articles/rhel-eus).
Support for EUS RHEL7 ended on August 30, 2021. For more information, see [Red H
### Switch a RHEL Server to EUS Repositories.
-#### [Switching to EUS repositories on RHEL7](#tab/rhel7)
+#### [Switching to EUS repositories on RHEL7](#tab/rhel7)
>[!NOTE] >Support for RHEL7 EUS ended in August 30, 2021. It is not recommended to switch to EUS repositories in RHEL7 anymore.
-
-#### [Switching to EUS repositories on RHEL8](#tab/rhel8)
+
+#### [Switching to EUS repositories on RHEL8](#tab/rhel8)
Use the following procedure to lock a RHEL 8.x VM to a particular minor release. Run the commands as `root`: >[!NOTE]
Use the following procedure to lock a RHEL 8.x VM to a particular minor release.
```bash sudo dnf --config='https://rhelimage.blob.core.windows.net/repositories/rhui-microsoft-azure-rhel8-eus.config' install rhui-azure-rhel8-eus ```
-
+ 1. Lock the `releasever` level; it must be one of 8.1, 8.2, 8.4, 8.6, or 8.8.
Use the following procedure to lock a RHEL 8.x VM to a particular minor release.
sudo sh -c 'echo 8.8 > /etc/dnf/vars/releasever' ```
- If there are permission issues to access the `releasever`, you can edit the file using a text editor, add the image version details, and save the file.
+   If there are permission issues when accessing the `releasever` file, you can edit the file in a text editor, add the image version details, and save the file.
> [!NOTE] > This instruction locks the RHEL minor release to the current minor release. Enter a specific minor release if you are looking to upgrade and lock to a later minor release that is not the latest. For example, `echo 8.1 > /etc/yum/vars/releasever` locks your RHEL version to RHEL 8.1.
Use the following procedure to lock a RHEL 8.x VM to a particular minor release.
sudo dnf update ```
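Because the preceding steps are split across several snippets, here they are again as one consolidated sketch for RHEL 8; the `8.8` value is only an example of a valid minor release.

```bash
# Run as root. Switch to the EUS repository package.
sudo dnf --config='https://rhelimage.blob.core.windows.net/repositories/rhui-microsoft-azure-rhel8-eus.config' install rhui-azure-rhel8-eus

# Lock releasever to the chosen minor release (8.1, 8.2, 8.4, 8.6, or 8.8).
sudo sh -c 'echo 8.8 > /etc/dnf/vars/releasever'

# Update against the locked repositories.
sudo dnf update
```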
-#### [Switching to EUS repositories on RHEL9](#tab/rhel9)
+#### [Switching to EUS repositories on RHEL9](#tab/rhel9)
Use the following procedure to lock a RHEL 9.x VM to a particular minor release. Run the commands as `root`:
Use the following procedure to lock a RHEL 9.x VM to a particular minor release.
```bash sudo dnf --config='https://rhelimage.blob.core.windows.net/repositories/rhui-microsoft-azure-rhel9-eus.config' install rhui-azure-rhel9-eus ```
-
+ 1. Lock the `releasever` level; currently it must be 9.0 or 9.2.
Use the following procedure to lock a RHEL 9.x VM to a particular minor release.
sudo sh -c 'echo 9.2 > /etc/dnf/vars/releasever' ```
- If there are permission issues to access the `releasever`, you can edit the file using a text editor, add the image version details, and save the file.
+   If there are permission issues when accessing the `releasever` file, you can edit the file in a text editor, add the image version details, and save the file.
> [!NOTE] > This instruction locks the RHEL minor release to the current minor release. Enter a specific minor release if you are looking to upgrade and lock to a later minor release that is not the latest. For example, `echo 9.2 > /etc/yum/vars/releasever` locks your RHEL version to RHEL 9.2.
Use the following procedure to lock a RHEL 9.x VM to a particular minor release.
### Switch a RHEL Server to non-EUS Repositories.
-#### [Switching to non-EUS repositories on RHEL7](#tab/rhel7)
+#### [Switching to non-EUS repositories on RHEL7](#tab/rhel7)
To remove the version lock, use the following commands. Run the commands as `root`.
To remove the version lock, use the following commands. Run the commands as `roo
sudo yum update ```
-#### [Switching to non-EUS repositories on RHEL8](#tab/rhel8)
+#### [Switching to non-EUS repositories on RHEL8](#tab/rhel8)
To remove the version lock, use the following commands. Run the commands as `root`.
To remove the version lock, use the following commands. Run the commands as `roo
```
-#### [Switching to non-EUS repositories on RHEL9](#tab/rhel9)
+#### [Switching to non-EUS repositories on RHEL9](#tab/rhel9)
To remove the version lock, use the following commands. Run the commands as `root`.
If you're using a network configuration (custom Firewall or UDR configurations)
```output # Azure Global
-RHUI 3
+RHUI 3
West US - 13.91.47.76 East US - 40.85.190.91 South East Asia - 52.187.75.218
Southeast Asia - 20.24.186.80
``` > [!NOTE]
->
+>
> - As of October 12, 2023, all pay-as-you-go (PAYG) clients will be directed to the Red Hat Update Infrastructure (RHUI) 4 IPs in phases over the next two months. During this time, the RHUI3 IPs will remain available for updates but will be removed at a future date. Existing routes and rules that allow access to the RHUI3 IPs must be updated to also include the RHUI4 IP addresses for uninterrupted access to packages and updates. To keep receiving updates during the transition period, don't remove the RHUI3 IPs. > > - Also, the new Azure US Government images, as of January 2020, use the public IPs mentioned previously under the Azure Global header.
This procedure is provided for reference only. RHEL PAYG images already have the
- For RHEL 8: 1. Create a `config` file by using this command or a text editor:
-
+ ```bash cat <<EOF > rhel8.config [rhui-microsoft-azure-rhel8]
virtual-network Accelerated Networking How It Works https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/accelerated-networking-how-it-works.md
vm-linux-+ Last updated 04/18/2023
The synthetic and VF interfaces have the same MAC address. Together, they consti
Both interfaces are visible via the `ifconfig` or `ip addr` command in Linux. Here's an example `ifconfig` output: ```output
-U1804:~$ ifconfig
-enP53091s1np0: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST> mtu 1500
-ether 00:0d:3a:f5:76:bd txqueuelen 1000 (Ethernet)
-RX packets 365849 bytes 413711297 (413.7 MB)
-RX errors 0 dropped 0 overruns 0 frame 0
-TX packets 9447684 bytes 2206536829 (2.2 GB)
-TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
-
-eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
-inet 10.1.19.4 netmask 255.255.255.0 broadcast 10.1.19.255
-inet6 fe80::20d:3aff:fef5:76bd prefixlen 64 scopeid 0x20<link>
-ether 00:0d:3a:f5:76:bd txqueuelen 1000 (Ethernet)
-RX packets 8714212 bytes 4954919874 (4.9 GB)
-RX errors 0 dropped 0 overruns 0 frame 0
-TX packets 9103233 bytes 2183731687 (2.1 GB)
-TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
+U1804:~$ ifconfig
+enP53091s1np0: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST> mtu 1500
+ether 00:0d:3a:f5:76:bd txqueuelen 1000 (Ethernet)
+RX packets 365849 bytes 413711297 (413.7 MB)
+RX errors 0 dropped 0 overruns 0 frame 0
+TX packets 9447684 bytes 2206536829 (2.2 GB)
+TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
+
+eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
+inet 10.1.19.4 netmask 255.255.255.0 broadcast 10.1.19.255
+inet6 fe80::20d:3aff:fef5:76bd prefixlen 64 scopeid 0x20<link>
+ether 00:0d:3a:f5:76:bd txqueuelen 1000 (Ethernet)
+RX packets 8714212 bytes 4954919874 (4.9 GB)
+RX errors 0 dropped 0 overruns 0 frame 0
+TX packets 9103233 bytes 2183731687 (2.1 GB)
+TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
``` The synthetic interface always has a name in the form `eth\<n\>`. Depending on the Linux distribution, the VF interface might have a name in the form `eth\<n\>`. Or it might have a different name in the form of `enP\<n\>` because of a udev rule that does renaming.
The synthetic interface always has a name in the form `eth\<n\>`. Depending on t
You can determine whether a particular interface is the synthetic interface or the VF interface by using the shell command line that shows the device driver that the interface uses: ```output
-$ ethtool -i <interface name> | grep driver
+$ ethtool -i <interface name> | grep driver
``` If the driver is `hv_netvsc`, it's the synthetic interface. The VF interface has a driver name that contains "mlx." The VF interface is also identifiable because its `flags` field includes `SLAVE`. This flag indicates that it's under the control of the synthetic interface that has the same MAC address.
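As one way to run these checks across every interface at once, the following sketch (assuming `ethtool` and the `iproute2` tools are installed) prints each interface's driver and notes whether it carries the SLAVE flag:

```bash
# For each interface, print the driver name and mark the bonded VF (SLAVE flag).
for dev in /sys/class/net/*; do
    name=$(basename "$dev")
    driver=$(ethtool -i "$name" 2>/dev/null | awk '/^driver:/ {print $2}')
    if ip -o link show "$name" | grep -q 'SLAVE'; then
        role="VF (bonded as SLAVE)"
    else
        role="synthetic or standalone"
    fi
    echo "$name  driver=${driver:-unknown}  $role"
done
```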
Incoming packets are received and processed on the VF interface before being pas
You can verify that packets are flowing over the VF interface from the output of `ethtool -S eth\<n\>`. The output lines that contain `vf` show the traffic over the VF interface. For example: ```output
-U1804:~# ethtool -S eth0 | grep ' vf_'
- vf_rx_packets: 111180
- vf_rx_bytes: 395460237
- vf_tx_packets: 9107646
- vf_tx_bytes: 2184786508
- vf_tx_dropped: 0
+U1804:~# ethtool -S eth0 | grep ' vf_'
+ vf_rx_packets: 111180
+ vf_rx_bytes: 395460237
+ vf_tx_packets: 9107646
+ vf_tx_bytes: 2184786508
+ vf_tx_dropped: 0
``` If these counters are incrementing on successive execution of the `ethtool` command, network traffic is flowing over the VF interface.
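A quick way to take those two samples, assuming `eth0` is the synthetic interface as in the example above:

```bash
# Sample the VF counters twice, a few seconds apart; growing values mean
# traffic is flowing over the VF interface.
ethtool -S eth0 | grep ' vf_'
sleep 5
ethtool -S eth0 | grep ' vf_'
```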
If these counters are incrementing on successive execution of the `ethtool` comm
You can verify the existence of the VF interface as a PCI device by using the `lspci` command. For example, on the Generation 1 VM, you might get output similar to the following output. (Generation 2 VMs don't have the legacy PCI devices.) ```output
-U1804:~# lspci
-0000:00:00.0 Host bridge: Intel Corporation 440BX/ZX/DX - 82443BX/ZX/DX Host bridge (AGP disabled) (rev 03)
-0000:00:07.0 ISA bridge: Intel Corporation 82371AB/EB/MB PIIX4 ISA (rev 01)
-0000:00:07.1 IDE interface: Intel Corporation 82371AB/EB/MB PIIX4 IDE (rev 01)
-0000:00:07.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 02)
-0000:00:08.0 VGA compatible controller: Microsoft Corporation Hyper-V virtual VGA
-cf63:00:02.0 Ethernet controller: Mellanox Technologies MT27710 Family [ConnectX-4 Lx Virtual Function] (rev 80)
+U1804:~# lspci
+0000:00:00.0 Host bridge: Intel Corporation 440BX/ZX/DX - 82443BX/ZX/DX Host bridge (AGP disabled) (rev 03)
+0000:00:07.0 ISA bridge: Intel Corporation 82371AB/EB/MB PIIX4 ISA (rev 01)
+0000:00:07.1 IDE interface: Intel Corporation 82371AB/EB/MB PIIX4 IDE (rev 01)
+0000:00:07.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 02)
+0000:00:08.0 VGA compatible controller: Microsoft Corporation Hyper-V virtual VGA
+cf63:00:02.0 Ethernet controller: Mellanox Technologies MT27710 Family [ConnectX-4 Lx Virtual Function] (rev 80)
``` In this example, the last line of output identifies a VF from the Mellanox ConnectX-4 physical NIC.
The `ethtool -l` or `ethtool -L` command (to get and set the number of transmit
## Interpreting startup messages
-During startup, Linux shows many messages related to the initialization and configuration of the VF interface. It also shows information about the bonding with the synthetic interface. Understanding these messages can be helpful in identifying any problems in the process.
+During startup, Linux shows many messages related to the initialization and configuration of the VF interface. It also shows information about the bonding with the synthetic interface. Understanding these messages can be helpful in identifying any problems in the process.
Here's example output from the `dmesg` command, trimmed to just the lines that are relevant to the VF interface. Depending on the Linux kernel version and distribution in your VM, the messages might vary slightly, but the overall flow is the same. ```output
-[ 2.327663] hv_vmbus: registering driver hv_netvsc
-[ 3.918902] hv_netvsc 000d3af5-76bd-000d-3af5-76bd000d3af5 eth0: VF slot 1 added
+[ 2.327663] hv_vmbus: registering driver hv_netvsc
+[ 3.918902] hv_netvsc 000d3af5-76bd-000d-3af5-76bd000d3af5 eth0: VF slot 1 added
``` The netvsc driver for `eth0` has been registered. ```output
-[ 6.944883] hv_vmbus: registering driver hv_pci
+[ 6.944883] hv_vmbus: registering driver hv_pci
``` The VMbus virtual PCI driver has been registered. This driver provides core PCI services in a Linux VM in Azure. You must register it before the VF interface can be detected and configured. ```output
-[ 6.945132] hv_pci e9ac9b28-cf63-4466-9ae3-4b849c3ee03b: PCI VMBus probing: Using version 0x10002
-[ 6.947953] hv_pci e9ac9b28-cf63-4466-9ae3-4b849c3ee03b: PCI host bridge to bus cf63:00
-[ 6.947955] pci_bus cf63:00: root bus resource [mem 0xfe0000000-0xfe00fffff window]
-[ 6.948805] pci cf63:00:02.0: [15b3:1016] type 00 class 0x020000
-[ 6.957487] pci cf63:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref]
-[ 7.035464] pci cf63:00:02.0: enabling Extended Tags
-[ 7.040811] pci cf63:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at cf63:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
-[ 7.041264] pci cf63:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref]
+[ 6.945132] hv_pci e9ac9b28-cf63-4466-9ae3-4b849c3ee03b: PCI VMBus probing: Using version 0x10002
+[ 6.947953] hv_pci e9ac9b28-cf63-4466-9ae3-4b849c3ee03b: PCI host bridge to bus cf63:00
+[ 6.947955] pci_bus cf63:00: root bus resource [mem 0xfe0000000-0xfe00fffff window]
+[ 6.948805] pci cf63:00:02.0: [15b3:1016] type 00 class 0x020000
+[ 6.957487] pci cf63:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref]
+[ 7.035464] pci cf63:00:02.0: enabling Extended Tags
+[ 7.040811] pci cf63:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at cf63:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
+[ 7.041264] pci cf63:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref]
``` The PCI device with the listed GUID (assigned by the Azure host) has been detected. It's assigned a PCI domain ID (0xcf63 in this case) based on the GUID. The PCI domain ID must be unique across all PCI devices available in the VM. This uniqueness requirement spans other Mellanox VF interfaces, GPUs, NVMe devices, and other devices that might be present in the VM. ```output
-[ 7.128515] mlx5_core cf63:00:02.0: firmware version: 14.25.8362
-[ 7.139925] mlx5_core cf63:00:02.0: handle_hca_cap:524:(pid 12): log_max_qp value in current profile is 18, changing it to HCA capability limit (12)
-[ 7.342391] mlx5_core cf63:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0)
+[ 7.128515] mlx5_core cf63:00:02.0: firmware version: 14.25.8362
+[ 7.139925] mlx5_core cf63:00:02.0: handle_hca_cap:524:(pid 12): log_max_qp value in current profile is 18, changing it to HCA capability limit (12)
+[ 7.342391] mlx5_core cf63:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0)
``` A Mellanox VF that uses the mlx5 driver has been detected. The mlx5 driver begins its initialization of the device. ```output
-[ 7.465085] hv_netvsc 000d3af5-76bd-000d-3af5-76bd000d3af5 eth0: VF registering: eth1
-[ 7.465119] mlx5_core cf63:00:02.0 eth1: joined to eth0
+[ 7.465085] hv_netvsc 000d3af5-76bd-000d-3af5-76bd000d3af5 eth0: VF registering: eth1
+[ 7.465119] mlx5_core cf63:00:02.0 eth1: joined to eth0
``` The corresponding synthetic interface that's using the netvsc driver has detected a matching VF. The mlx5 driver recognizes that it has been bonded with the synthetic interface. ```output
-[ 7.466064] mlx5_core cf63:00:02.0 eth1: Disabling LRO, not supported in legacy RQ
-[ 7.480575] mlx5_core cf63:00:02.0 eth1: Disabling LRO, not supported in legacy RQ
-[ 7.480651] mlx5_core cf63:00:02.0 enP53091s1np0: renamed from eth1
+[ 7.466064] mlx5_core cf63:00:02.0 eth1: Disabling LRO, not supported in legacy RQ
+[ 7.480575] mlx5_core cf63:00:02.0 eth1: Disabling LRO, not supported in legacy RQ
+[ 7.480651] mlx5_core cf63:00:02.0 enP53091s1np0: renamed from eth1
``` The Linux kernel initially named the VF interface `eth1`. A udev rule renamed it to avoid confusion with the names given to the synthetic interfaces.
-[ 8.087962] mlx5_core cf63:00:02.0 enP53091s1np0: Link up
+[ 8.087962] mlx5_core cf63:00:02.0 enP53091s1np0: Link up
``` The Mellanox VF interface is now up and active. ```output
-[ 8.090127] hv_netvsc 000d3af5-76bd-000d-3af5-76bd000d3af5 eth0: Data path switched to VF: enP53091s1np0
-[ 9.654979] hv_netvsc 000d3af5-76bd-000d-3af5-76bd000d3af5 eth0: Data path switched from VF: enP53091s1np0
+[ 8.090127] hv_netvsc 000d3af5-76bd-000d-3af5-76bd000d3af5 eth0: Data path switched to VF: enP53091s1np0
+[ 9.654979] hv_netvsc 000d3af5-76bd-000d-3af5-76bd000d3af5 eth0: Data path switched from VF: enP53091s1np0
``` These messages indicate that the data path for the bonded pair has switched to use the VF interface. About 1.6 seconds later, it switches back to the synthetic interface. Such switches might occur two or three times during the startup process and are normal behavior as the configuration is initialized. ```output
-[ 9.909128] mlx5_core cf63:00:02.0 enP53091s1np0: Link up
-[ 9.910595] hv_netvsc 000d3af5-76bd-000d-3af5-76bd000d3af5 eth0: Data path switched to VF: enP53091s1np0
-[ 11.411194] hv_netvsc 000d3af5-76bd-000d-3af5-76bd000d3af5 eth0: Data path switched from VF: enP53091s1np0
-[ 11.532147] mlx5_core cf63:00:02.0 enP53091s1np0: Disabling LRO, not supported in legacy RQ
-[ 11.731892] mlx5_core cf63:00:02.0 enP53091s1np0: Link up
-[ 11.733216] hv_netvsc 000d3af5-76bd-000d-3af5-76bd000d3af5 eth0: Data path switched to VF: enP53091s1np0
+[ 9.909128] mlx5_core cf63:00:02.0 enP53091s1np0: Link up
+[ 9.910595] hv_netvsc 000d3af5-76bd-000d-3af5-76bd000d3af5 eth0: Data path switched to VF: enP53091s1np0
+[ 11.411194] hv_netvsc 000d3af5-76bd-000d-3af5-76bd000d3af5 eth0: Data path switched from VF: enP53091s1np0
+[ 11.532147] mlx5_core cf63:00:02.0 enP53091s1np0: Disabling LRO, not supported in legacy RQ
+[ 11.731892] mlx5_core cf63:00:02.0 enP53091s1np0: Link up
+[ 11.733216] hv_netvsc 000d3af5-76bd-000d-3af5-76bd000d3af5 eth0: Data path switched to VF: enP53091s1np0
``` The final message indicates that the data path has switched to using the VF interface. It's expected during normal operation of the VM.
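When reviewing a VM's own log, one convenient way to pull out just these bonding-related lines is a simple filter like the following (a sketch; adjust the pattern to the messages you're interested in):

```bash
# Show only the netvsc/VF lines relevant to Accelerated Networking bonding.
dmesg | grep -E 'hv_netvsc|VF (slot|registering|unregistering)|Data path switched'
```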
The automatic switching between the VF interface and the synthetic interface ens
The removal and readd of the VF interface during a servicing event is visible in the `dmesg` output in the VM. Here's typical output: ```output
-[ 8160.911509] hv_netvsc 000d3af5-76bd-000d-3af5-76bd000d3af5 eth0: Data path switched from VF: enP53091s1np0
-[ 8160.912120] hv_netvsc 000d3af5-76bd-000d-3af5-76bd000d3af5 eth0: VF unregistering: enP53091s1np0
-[ 8162.020138] hv_netvsc 000d3af5-76bd-000d-3af5-76bd000d3af5 eth0: VF slot 1 removed
+[ 8160.911509] hv_netvsc 000d3af5-76bd-000d-3af5-76bd000d3af5 eth0: Data path switched from VF: enP53091s1np0
+[ 8160.912120] hv_netvsc 000d3af5-76bd-000d-3af5-76bd000d3af5 eth0: VF unregistering: enP53091s1np0
+[ 8162.020138] hv_netvsc 000d3af5-76bd-000d-3af5-76bd000d3af5 eth0: VF slot 1 removed
```
-The data path has been switched away from the VF interface, and the VF interface has been unregistered. At this point, Linux has removed all knowledge of the VF interface and is operating as if Accelerated Networking wasn't enabled.
+The data path has been switched away from the VF interface, and the VF interface has been unregistered. At this point, Linux has removed all knowledge of the VF interface and is operating as if Accelerated Networking wasn't enabled.
```output
-[ 8225.557263] hv_netvsc 000d3af5-76bd-000d-3af5-76bd000d3af5 eth0: VF slot 1 added
-[ 8225.557867] hv_pci e9ac9b28-cf63-4466-9ae3-4b849c3ee03b: PCI VMBus probing: Using version 0x10002
-[ 8225.566794] hv_pci e9ac9b28-cf63-4466-9ae3-4b849c3ee03b: PCI host bridge to bus cf63:00
-[ 8225.566797] pci_bus cf63:00: root bus resource [mem 0xfe0000000-0xfe00fffff window]
-[ 8225.571556] pci cf63:00:02.0: [15b3:1016] type 00 class 0x020000
-[ 8225.584903] pci cf63:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref]
-[ 8225.662860] pci cf63:00:02.0: enabling Extended Tags
-[ 8225.667831] pci cf63:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at cf63:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
-[ 8225.667978] pci cf63:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref]
+[ 8225.557263] hv_netvsc 000d3af5-76bd-000d-3af5-76bd000d3af5 eth0: VF slot 1 added
+[ 8225.557867] hv_pci e9ac9b28-cf63-4466-9ae3-4b849c3ee03b: PCI VMBus probing: Using version 0x10002
+[ 8225.566794] hv_pci e9ac9b28-cf63-4466-9ae3-4b849c3ee03b: PCI host bridge to bus cf63:00
+[ 8225.566797] pci_bus cf63:00: root bus resource [mem 0xfe0000000-0xfe00fffff window]
+[ 8225.571556] pci cf63:00:02.0: [15b3:1016] type 00 class 0x020000
+[ 8225.584903] pci cf63:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref]
+[ 8225.662860] pci cf63:00:02.0: enabling Extended Tags
+[ 8225.667831] pci cf63:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at cf63:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
+[ 8225.667978] pci cf63:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref]
``` When the VF interface is readded after servicing is complete, a new PCI device with the specified GUID is detected. It's assigned the same PCI domain ID (0xcf63) as before. The handling of the readded VF interface is like the handling during the initial startup.
-[ 8225.679672] mlx5_core cf63:00:02.0: firmware version: 14.25.8362
-[ 8225.888476] mlx5_core cf63:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0)
-[ 8226.021016] hv_netvsc 000d3af5-76bd-000d-3af5-76bd000d3af5 eth0: VF registering: eth1
-[ 8226.021058] mlx5_core cf63:00:02.0 eth1: joined to eth0
-[ 8226.021968] mlx5_core cf63:00:02.0 eth1: Disabling LRO, not supported in legacy RQ
-[ 8226.026631] mlx5_core cf63:00:02.0 eth1: Disabling LRO, not supported in legacy RQ
-[ 8226.026699] mlx5_core cf63:00:02.0 enP53091s1np0: renamed from eth1
-[ 8226.265256] mlx5_core cf63:00:02.0 enP53091s1np0: Link up
+[ 8225.679672] mlx5_core cf63:00:02.0: firmware version: 14.25.8362
+[ 8225.888476] mlx5_core cf63:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0)
+[ 8226.021016] hv_netvsc 000d3af5-76bd-000d-3af5-76bd000d3af5 eth0: VF registering: eth1
+[ 8226.021058] mlx5_core cf63:00:02.0 eth1: joined to eth0
+[ 8226.021968] mlx5_core cf63:00:02.0 eth1: Disabling LRO, not supported in legacy RQ
+[ 8226.026631] mlx5_core cf63:00:02.0 eth1: Disabling LRO, not supported in legacy RQ
+[ 8226.026699] mlx5_core cf63:00:02.0 enP53091s1np0: renamed from eth1
+[ 8226.265256] mlx5_core cf63:00:02.0 enP53091s1np0: Link up
``` The mlx5 driver initializes the VF interface, and the interface is now functional. The output is similar to the output during the initial startup. ```output
-[ 8226.267380] hv_netvsc 000d3af5-76bd-000d-3af5-76bd000d3af5 eth0: Data path switched to VF: enP53091s1np0
+[ 8226.267380] hv_netvsc 000d3af5-76bd-000d-3af5-76bd000d3af5 eth0: Data path switched to VF: enP53091s1np0
``` The data path has been switched back to the VF interface.
The data path has been switched back to the VF interface.
You can disable or enable Accelerated Networking on a virtual NIC in a nonrunning VM by using the Azure CLI. For example: ```output
-$ az network nic update --name u1804895 --resource-group testrg --accelerated-network false
+$ az network nic update --name u1804895 --resource-group testrg --accelerated-networking false
``` Disabling Accelerated Networking that's enabled in the guest VM produces a `dmesg` output. It's the same as when the VF interface is removed for Azure host servicing. Enabling Accelerated Networking produces the same `dmesg` output as when the VF interface is readded after Azure host servicing.
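A fuller sketch of that cycle, using the placeholder names from the example above and assuming the VM is deallocated first (the setting applies to a nonrunning VM), might look like this:

```azurecli
# Stop (deallocate) the VM, toggle Accelerated Networking on its NIC, then restart.
az vm deallocate --resource-group testrg --name <vm-name>

# Pass true instead of false to re-enable Accelerated Networking.
az network nic update --resource-group testrg --name u1804895 --accelerated-networking false

az vm start --resource-group testrg --name <vm-name>
```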
virtual-network Accelerated Networking Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/accelerated-networking-overview.md
Title: Accelerated Networking overview
description: Learn how Accelerated Networking can improve the networking performance of Azure VMs. -+ Last updated 04/18/2023
If you use a custom image that supports Accelerated Networking, make sure you ha
Images with cloud-init version 19.4 or later have networking correctly configured to support Accelerated Networking during provisioning.
-# [RHEL, CentOS](#tab/redhat)
+# [RHEL, CentOS](#tab/redhat)
The following example shows a sample configuration drop-in for `NetworkManager` on RHEL or CentOS: ```bash
-sudo mkdir -p /etc/NetworkManager/conf.d
-sudo cat > /etc/NetworkManager/conf.d/99-azure-unmanaged-devices.conf <<EOF
-# Ignore SR-IOV interface on Azure, since it's transparently bonded
-# to the synthetic interface
-[keyfile]
-unmanaged-devices=driver:mlx4_core;driver:mlx5_core
-EOF
+sudo mkdir -p /etc/NetworkManager/conf.d
+sudo cat > /etc/NetworkManager/conf.d/99-azure-unmanaged-devices.conf <<EOF
+# Ignore SR-IOV interface on Azure, since it's transparently bonded
+# to the synthetic interface
+[keyfile]
+unmanaged-devices=driver:mlx4_core;driver:mlx5_core
+EOF
```
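One way to confirm the drop-in took effect, assuming NetworkManager is the active network service on the image:

```bash
# Reload NetworkManager and check that the Mellanox VF is listed as unmanaged.
sudo systemctl reload NetworkManager
nmcli device status
```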
-# [openSUSE, SLES](#tab/suse)
+# [openSUSE, SLES](#tab/suse)
The following example shows a sample configuration drop-in for `networkd` on openSUSE or SLES: ```bash
-sudo mkdir -p /etc/systemd/network
-sudo cat > /etc/systemd/network/99-azure-unmanaged-devices.network <<EOF
-# Ignore SR-IOV interface on Azure, since it's transparently bonded
-# to the synthetic interface
-[Match]
-Driver=mlx4_en mlx5_en mlx4_core mlx5_core
-[Link]
-Unmanaged=yes
-EOF
+sudo mkdir -p /etc/systemd/network
+sudo cat > /etc/systemd/network/99-azure-unmanaged-devices.network <<EOF
+# Ignore SR-IOV interface on Azure, since it's transparently bonded
+# to the synthetic interface
+[Match]
+Driver=mlx4_en mlx5_en mlx4_core mlx5_core
+[Link]
+Unmanaged=yes
+EOF
```
-# [Ubuntu, Debian](#tab/ubuntu)
+# [Ubuntu, Debian](#tab/ubuntu)
The following example shows a sample configuration drop-in for `networkd` on Ubuntu, Debian, or Flatcar: ```bash
-sudo mkdir -p /etc/systemd/network
-sudo cat > /etc/systemd/network/99-azure-unmanaged-devices.network <<EOF
-# Ignore SR-IOV interface on Azure, since it's transparently bonded
-# to the synthetic interface
-[Match]
-Driver=mlx4_en mlx5_en mlx4_core mlx5_core
-[Link]
-Unmanaged=yes
-EOF
+sudo mkdir -p /etc/systemd/network
+sudo cat > /etc/systemd/network/99-azure-unmanaged-devices.network <<EOF
+# Ignore SR-IOV interface on Azure, since it's transparently bonded
+# to the synthetic interface
+[Match]
+Driver=mlx4_en mlx5_en mlx4_core mlx5_core
+[Link]
+Unmanaged=yes
+EOF
``` >[!NOTE]
virtual-network Create Peering Different Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/create-peering-different-subscriptions.md
Last updated 08/23/2023-+ # Create a virtual network peering - Resource Manager, different subscriptions and Microsoft Entra tenants In this tutorial, you learn to create a virtual network peering between virtual networks created through Resource Manager. The virtual networks exist in different subscriptions that may belong to different Microsoft Entra tenants. Peering two virtual networks enables resources in different virtual networks to communicate with each other with the same bandwidth and latency as though the resources were in the same virtual network. Learn more about [Virtual network peering](virtual-network-peering-overview.md).
-Depending on whether, the virtual networks are in the same, or different subscriptions the steps to create a virtual network peering are different. Steps to peer networks created with the classic deployment model are different. For more information about deployment models, see [Azure deployment model](../azure-resource-manager/management/deployment-models.md?toc=%2fazure%2fvirtual-network%2ftoc.json).
+The steps to create a virtual network peering differ depending on whether the virtual networks are in the same subscription or in different subscriptions. The steps to peer networks created through the classic deployment model are also different. For more information about deployment models, see [Azure deployment model](../azure-resource-manager/management/deployment-models.md?toc=%2fazure%2fvirtual-network%2ftoc.json).
Learn how to create a virtual network peering in other scenarios by selecting the scenario from the following table:
If you choose to install and use PowerShell locally, this article requires the A
-In the following steps, learn how to peer virtual networks in different subscriptions and Microsoft Entra tenants.
+In the following steps, learn how to peer virtual networks in different subscriptions and Microsoft Entra tenants.
You can use the same account that has permissions in both subscriptions or you can use separate accounts for each subscription to set up the peering. An account with permissions in both subscriptions can complete all of the steps without signing out and signing in to portal and assigning permissions.
A user account in the other subscription that you want to peer with must be adde
# [**PowerShell**](#tab/create-peering-powershell)
-Use [Get-AzVirtualNetwork](/powershell/module/az.network/get-azvirtualnetwork) to obtain the resource ID for **vnet-1**. Assign **user-2** from **subscription-2** to **vnet-1** with [New-AzRoleAssignment](/powershell/module/az.resources/new-azroleassignment).
+Use [Get-AzVirtualNetwork](/powershell/module/az.network/get-azvirtualnetwork) to obtain the resource ID for **vnet-1**. Assign **user-2** from **subscription-2** to **vnet-1** with [New-AzRoleAssignment](/powershell/module/az.resources/new-azroleassignment).
Use [Get-AzADUser](/powershell/module/az.resources/get-azaduser) to obtain the object ID for **user-2**.
$id = @{
$vnetA = Get-AzVirtualNetwork @id $vnetA.id
-```
+```
# [**Azure CLI**](#tab/create-peering-cli)
vnetidA=$(az network vnet show \
--output tsv) echo $vnetidA
-```
+```
A user account in the other subscription that you want to peer with must be adde
# [**PowerShell**](#tab/create-peering-powershell)
-Use [Get-AzVirtualNetwork](/powershell/module/az.network/get-azvirtualnetwork) to obtain the resource ID for **vnet-1**. Assign **user-1** from **subscription-1** to **vnet-2** with [New-AzRoleAssignment](/powershell/module/az.resources/new-azroleassignment).
+Use [Get-AzVirtualNetwork](/powershell/module/az.network/get-azvirtualnetwork) to obtain the resource ID for **vnet-1**. Assign **user-1** from **subscription-1** to **vnet-2** with [New-AzRoleAssignment](/powershell/module/az.resources/new-azroleassignment).
Use [Get-AzADUser](/powershell/module/az.resources/get-azaduser) to obtain the object ID for **user-1**.
New-AzRoleAssignment @role
# [**Azure CLI**](#tab/create-peering-cli)
-Use [az network vnet show](/cli/azure/network/vnet#az-network-vnet-show) to obtain the resource ID for **vnet-2**. Assign **user-1** from **subscription-1** to **vnet-2** with [az role assignment create](/cli/azure/role/assignment#az-role-assignment-create).
+Use [az network vnet show](/cli/azure/network/vnet#az-network-vnet-show) to obtain the resource ID for **vnet-2**. Assign **user-1** from **subscription-1** to **vnet-2** with [az role assignment create](/cli/azure/role/assignment#az-role-assignment-create).
Use [az ad user list](/cli/azure/ad/user#az-ad-user-list) to obtain the object ID for **user-1**.
vnetidB=$(az network vnet show \
--output tsv) echo $vnetidB
-```
+```
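Putting the pieces together, a minimal Azure CLI sketch of the assignment might look like the following. It assumes `$vnetidB` holds the **vnet-2** resource ID from the previous command; the **Network Contributor** role is shown only as a common choice, so substitute whatever role your scenario requires.

```azurecli
# Look up the object ID for user-1 (display name is a placeholder).
userid=$(az ad user list --display-name user-1 --query '[0].id' --output tsv)

# Grant user-1 access to vnet-2 so the peering can be created.
az role assignment create \
    --assignee-object-id $userid \
    --assignee-principal-type User \
    --role "Network Contributor" \
    --scope $vnetidB
```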
virtual-network Create Vm Accelerated Networking Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/create-vm-accelerated-networking-cli.md
Last updated 04/18/2023 -+ # Use Azure CLI to create a Windows or Linux VM with Accelerated Networking
In the following examples, you can replace the example parameters such as `<myRe
1. The NSG contains several default rules, one of which disables all inbound access from the internet. Use [az network nsg rule create](/cli/azure/network/nsg/rule#az-network-nsg-rule-create) to open a port to allow remote desktop protocol (RDP) or secure shell (SSH) access to the VM. # [Windows](#tab/windows)
-
+ ```azurecli az network nsg rule create \ --resource-group <myResourceGroup> \
In the following examples, you can replace the example parameters such as `<myRe
--destination-address-prefix "*" \ --destination-port-range 3389 ```
-
+ # [Linux](#tab/linux)
-
+ ```azurecli az network nsg rule create \ --resource-group <myResourceGroup> \
Once you create the VM in Azure, connect to the VM and confirm that the Ethernet
- **CentOS**: 3.10.0-693. > [!NOTE]
- > Other kernel versions may be supported. For an updated list, see the compatibility tables for each distribution at [Supported Linux and FreeBSD virtual machines for Hyper-V](/windows-server/virtualization/hyper-v/supported-linux-and-freebsd-virtual-machines-for-hyper-v-on-windows), and confirm that SR-IOV is supported. You can find more details in the release notes for [Linux Integration Services for Hyper-V and Azure](https://www.microsoft.com/download/details.aspx?id=55106). *
+ > Other kernel versions may be supported. For an updated list, see the compatibility tables for each distribution at [Supported Linux and FreeBSD virtual machines for Hyper-V](/windows-server/virtualization/hyper-v/supported-linux-and-freebsd-virtual-machines-for-hyper-v-on-windows), and confirm that SR-IOV is supported. You can find more details in the release notes for [Linux Integration Services for Hyper-V and Azure](https://www.microsoft.com/download/details.aspx?id=55106). *
1. Use the `lspci` command to confirm that the Mellanox VF device is exposed to the VM. The returned output should be similar to the following example:
virtual-network Create Vm Dual Stack Ipv6 Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/create-vm-dual-stack-ipv6-cli.md
Last updated 08/24/2023-+ ms.devlang: azurecli # Create an Azure Virtual Machine with a dual-stack network using the Azure CLI
-In this article, you create a virtual machine in Azure with the Azure CLI. The virtual machine is created along with the dual-stack network as part of the procedures. When completed, the virtual machine supports IPv4 and IPv6 communication.
+In this article, you create a virtual machine in Azure with the Azure CLI. The virtual machine is created along with the dual-stack network as part of the procedures. When completed, the virtual machine supports IPv4 and IPv6 communication.
## Prerequisites
Use [az network vnet create](/cli/azure/network/vnet#az-network-vnet-create) to
## Create public IP addresses
-You create two public IP addresses in this section, IPv4 and IPv6.
+You create two public IP addresses in this section, IPv4 and IPv6.
Use [az network public-ip create](/cli/azure/network/public-ip#az-network-public-ip-create) to create the public IP addresses.
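As a rough sketch of those two commands (resource names are placeholders, and the exact options used in the article may differ):

```azurecli
# IPv4 public IP address.
az network public-ip create \
    --resource-group myResourceGroup \
    --name myPublicIP-IPv4 \
    --sku Standard \
    --version IPv4

# IPv6 public IP address.
az network public-ip create \
    --resource-group myResourceGroup \
    --name myPublicIP-IPv6 \
    --sku Standard \
    --version IPv6
```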
virtual-network Create Vm Dual Stack Ipv6 Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/create-vm-dual-stack-ipv6-portal.md
Last updated 12/05/2023-+ # Create an Azure Virtual Machine with a dual-stack network using the Azure portal
-In this article, you create a virtual machine in Azure with the Azure portal. The virtual machine is created along with the dual-stack network as part of the procedures. When completed, the virtual machine supports IPv4 and IPv6 communication.
+In this article, you create a virtual machine in Azure with the Azure portal. The virtual machine is created along with the dual-stack network as part of the procedures. When completed, the virtual machine supports IPv4 and IPv6 communication.
## Prerequisites
In this section, you create a dual-stack virtual network for the virtual machine
1. In **IPv6 address space**, edit the default address space and change its value to **2404:f800:8000:122::/63**. 1. To add an IPv6 subnet, select **+ Add a subnet** and enter or select the following information:
-
+ | Setting | Value | | - | -- | | **Subnet** | |
In this section, you create a dual-stack virtual network for the virtual machine
## Create public IP addresses
-You create two public IP addresses in this section, IPv4 and IPv6.
+You create two public IP addresses in this section, IPv4 and IPv6.
### Create IPv4 public IP address 1. In the search box at the top of the portal, enter **Public IP address**. Select **Public IP addresses** in the search results.
-2. Select **+ Create**.
+2. Select **+ Create**.
-3. Enter or select the following information in **Create public IP address**.
+3. Enter or select the following information in **Create public IP address**.
| Setting | Value | | - | -- |
You create two public IP addresses in this section, IPv4 and IPv6.
### Create IPv6 public IP address 1. In the search box at the top of the portal, enter **Public IP address**. Select **Public IP addresses** in the search results.
-2. Select **+ Create**.
+2. Select **+ Create**.
3. Enter or select the following information in **Create public IP address**.
You create two public IP addresses in this section, IPv4 and IPv6.
| **Inbound port rules** | | | Public inbound ports | Select **None**. |
-4. Select the **Networking** tab, or **Next: Disks** then **Next: Networking**.
+4. Select the **Networking** tab, or **Next: Disks** then **Next: Networking**.
5. Enter or select the following information in the **Networking** tab.
You create two public IP addresses in this section, IPv4 and IPv6.
6. Select **Review + create**.
-7. Select **Create**.
+7. Select **Create**.
8. **Generate new key pair** appears. Select **Download private key and create resource**.
virtual-network Ip Services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/ip-services-overview.md
Last updated 08/24/2023
-+ # What is Azure Virtual Network IP Services?
For more information about private IP addresses, see [Private IP addresses](./pr
## Routing preference
-Azure routing preference enables you to choose how your traffic routes between Azure and the Internet. You can choose to route traffic either via the Microsoft network, or, via the ISP network (public internet). You can choose the routing option while creating a public IP address. By default, traffic is routed via the Microsoft global network for all Azure services.
+Azure routing preference enables you to choose how your traffic routes between Azure and the Internet. You can choose to route traffic either via the Microsoft network or via the ISP network (public internet). You can choose the routing option while creating a public IP address. By default, traffic is routed via the Microsoft global network for all Azure services.
Routing preference choices include:
virtual-network Public Ip Address Prefix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/public-ip-address-prefix.md
Last updated 08/24/2023 -+ # Public IP address prefix
-A public IP address prefix is a reserved range of [public IP addresses](public-ip-addresses.md#public-ip-addresses) in Azure. Public IP prefixes are assigned from a pool of addresses in each Azure region.
+A public IP address prefix is a reserved range of [public IP addresses](public-ip-addresses.md#public-ip-addresses) in Azure. Public IP prefixes are assigned from a pool of addresses in each Azure region.
You create a public IP address prefix in an Azure region and subscription by specifying a name and prefix size. The prefix size is the number of addresses available for use. Public IP address prefixes consist of IPv4 or IPv6 addresses. In regions with Availability Zones, Public IP address prefixes can be created as zone-redundant or associated with a specific availability zone. After the public IP prefix is created, you can create public IP addresses. ## Benefits
Resource|Scenario|Steps|
- You can't delete a prefix if any addresses within it are assigned to public IP address resources associated to a resource. Dissociate all public IP address resources that are assigned IP addresses from the prefix first. For more information on disassociating public IP addresses, see [Manage public IP addresses](virtual-network-public-ip-address.md#view-modify-settings-for-or-delete-a-public-ip-address). -- IPv6 is supported on basic public IPs with **dynamic** allocation only. Dynamic allocation means the IPv6 address changes if you delete and redeploy your resource in Azure.
+- IPv6 is supported on basic public IPs with **dynamic** allocation only. Dynamic allocation means the IPv6 address changes if you delete and redeploy your resource in Azure.
-- Standard IPv6 public IPs support static (reserved) allocation.
+- Standard IPv6 public IPs support static (reserved) allocation.
- Standard internal load balancers support dynamic allocation from within the subnet to which they're assigned. ## Pricing
-
+ For costs associated with using Azure Public IPs, both individual IP addresses and IP ranges, see [Public IP Address pricing](https://azure.microsoft.com/pricing/details/ip-addresses/). ## Next steps
virtual-network Virtual Network Multiple Ip Addresses Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/virtual-network-multiple-ip-addresses-cli.md
-+ # Assign multiple IP addresses to virtual machines using the Azure CLI
-An Azure Virtual Machine (VM) has one or more network interfaces (NIC) attached to it. Any NIC can have one or more static or dynamic public and private IP addresses assigned to it.
+An Azure Virtual Machine (VM) has one or more network interfaces (NIC) attached to it. Any NIC can have one or more static or dynamic public and private IP addresses assigned to it.
Assigning multiple IP addresses to a VM enables the following capabilities:
Every NIC attached to a VM has one or more IP configurations associated to it. E
There's a limit to how many private IP addresses can be assigned to a NIC. There's also a limit to how many public IP addresses that can be used in an Azure subscription. See [Azure limits](../../azure-resource-manager/management/azure-subscription-service-limits.md?toc=%2fazure%2fvirtual-network%2ftoc.json#azure-resource-manager-virtual-networking-limits) for details.
-This article explains how to add multiple IP addresses to a virtual machine using the Azure CLI.
+This article explains how to add multiple IP addresses to a virtual machine using the Azure CLI.
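As a minimal illustration of the idea (all names and the address are placeholders, not the article's values), adding a second static private IP configuration to an existing NIC looks roughly like this:

```azurecli
# Add a second IP configuration with a static private address to the NIC.
az network nic ip-config create \
    --resource-group myResourceGroup \
    --nic-name myVMNic \
    --name ipconfig2 \
    --private-ip-address 10.0.0.5
```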
## Prerequisites
virtual-network Manage Network Security Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/manage-network-security-group.md
Last updated 04/24/2023 -+ # Create, change, or delete a network security group
There's a limit to how many network security groups you can create for each Azur
# [**PowerShell**](#tab/network-security-group-powershell)
-Use [New-AzNetworkSecurityGroup](/powershell/module/az.network/new-aznetworksecuritygroup) to create a network security group named **myNSG** in **East US** region. **myNSG** is created in the existing **myResourceGroup** resource group.
+Use [New-AzNetworkSecurityGroup](/powershell/module/az.network/new-aznetworksecuritygroup) to create a network security group named **myNSG** in **East US** region. **myNSG** is created in the existing **myResourceGroup** resource group.
```azurepowershell-interactive New-AzNetworkSecurityGroup -Name myNSG -ResourceGroupName myResourceGroup -Location eastus
Get-AzNetworkSecurityGroup | format-table Name, Location, ResourceGroupName, Pro
# [**Azure CLI**](#tab/network-security-group-cli)
-Use [az network nsg list](/cli/azure/network/nsg#az-network-nsg-list) to list all network security groups in your subscription.
+Use [az network nsg list](/cli/azure/network/nsg#az-network-nsg-list) to list all network security groups in your subscription.
```azurecli-interactive az network nsg list --out table
Under **Monitoring**, you can enable or disable **Diagnostic settings**. For mor
Under **Help**, you can view **Effective security rules**. For more information, see [Diagnose a virtual machine network traffic filter problem](diagnose-network-traffic-filter-problem.md). To learn more about the common Azure settings listed, see the following articles:
az network nsg rule list --resource-group myResourceGroup --nsg-name myNSG
> [!NOTE] > This procedure only applies to a custom security rule. It doesn't work if you choose a default security rule.
- :::image type="content" source="./media/manage-network-security-group/view-security-rule-details.png" alt-text="Screenshot of details of an inbound security rule of a network security group in Azure portal.":::
+ :::image type="content" source="./media/manage-network-security-group/view-security-rule-details.png" alt-text="Screenshot of details of an inbound security rule of a network security group in Azure portal.":::
# [**PowerShell**](#tab/network-security-group-powershell)
az network nsg rule show --resource-group myResourceGroup --nsg-name myNSG --nam
5. Change the settings as needed, and then select **Save**. For an explanation of all settings, see [Security rule settings](#security-rule-settings).
- :::image type="content" source="./media/manage-network-security-group/change-security-rule.png" alt-text="Screenshot of change of an inbound security rule details of a network security group in Azure portal.":::
+ :::image type="content" source="./media/manage-network-security-group/change-security-rule.png" alt-text="Screenshot of change of an inbound security rule details of a network security group in Azure portal.":::
> [!NOTE] > This procedure only applies to a custom security rule. You aren't allowed to change a default security rule.
Get-AzApplicationSecurityGroup | format-table Name, ResourceGroupName, Location
# [**Azure CLI**](#tab/network-security-group-cli)
-Use [az network asg list](/cli/azure/network/asg#az-network-asg-list) to list all application security groups in a resource group.
+Use [az network asg list](/cli/azure/network/asg#az-network-asg-list) to list all application security groups in a resource group.
```azurecli-interactive az network asg list --resource-group myResourceGroup --out table
To manage network security groups, security rules, and application security grou
| Microsoft.Network/networkSecurityGroups/read | Get network security group | | Microsoft.Network/networkSecurityGroups/write | Create or update network security group | | Microsoft.Network/networkSecurityGroups/delete | Delete network security group |
-| Microsoft.Network/networkSecurityGroups/join/action | Associate a network security group to a subnet or network interface
+| Microsoft.Network/networkSecurityGroups/join/action | Associate a network security group to a subnet or network interface
>[!NOTE] > To perform `write` operations on a network security group, the subscription account must have at least `read` permissions for resource group along with `Microsoft.Network/networkSecurityGroups/write` permission.
virtual-network Manage Route Table https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/manage-route-table.md
-+ Last updated 04/24/2023
The most common changes are to [add](#create-a-route) routes, [remove](#delete-a
## Associate a route table to a subnet
-You can optionally associate a route table to a subnet. A route table can be associated to zero or more subnets. Route tables aren't associated to virtual networks. You must associate a route table to each subnet you want the route table associated to.
+You can optionally associate a route table to a subnet. A route table can be associated to zero or more subnets. Route tables aren't associated to virtual networks. You must associate a route table to each subnet you want the route table associated to.
Azure routes all traffic leaving the subnet based on routes you've created:
Azure routes all traffic leaving the subnet based on routes you've created:
* [Default routes](virtual-networks-udr-overview.md#default)
-* Routes propagated from an on-premises network, if the virtual network is connected to an Azure virtual network gateway (ExpressRoute or VPN).
+* Routes propagated from an on-premises network, if the virtual network is connected to an Azure virtual network gateway (ExpressRoute or VPN).
You can only associate a route table to subnets in virtual networks that exist in the same Azure location and subscription as the route table.
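For example, with the Azure CLI the association is a single command per subnet; the names below are placeholders:

```azurecli
# Associate an existing route table with a subnet in the same region and subscription.
az network vnet subnet update \
    --resource-group myResourceGroup \
    --vnet-name myVirtualNetwork \
    --name mySubnet \
    --route-table myRouteTable
```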
A route table contains zero or more routes. To learn more about the information
1. Select the **...** and then select **Delete**. Select **Yes** in the confirmation dialog box.
- :::image type="content" source="./media/manage-route-table/delete-route.png" alt-text="Screenshot of the delete button for a route from a route table.":::
+ :::image type="content" source="./media/manage-route-table/delete-route.png" alt-text="Screenshot of the delete button for a route from a route table.":::
### Delete a route - commands
virtual-network Setup Dpdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/setup-dpdk.md
description: Learn the benefits of the Data Plane Development Kit (DPDK) and how
-+ Last updated 04/24/2023
A list of setup instructions for DPDK on MANA VMs is available here: [Microsoft
The following distributions from the Azure Marketplace are supported:
-| Linux OS | Kernel version |
+| Linux OS | Kernel version |
|--| | | Ubuntu 18.04 | 4.15.0-1014-azure+ |
-| SLES 15 SP1 | 4.12.14-8.19-azure+ |
-| RHEL 7.5 | 3.10.0-862.11.6.el7.x86_64+ |
-| CentOS 7.5 | 3.10.0-862.11.6.el7.x86_64+ |
+| SLES 15 SP1 | 4.12.14-8.19-azure+ |
+| RHEL 7.5 | 3.10.0-862.11.6.el7.x86_64+ |
+| CentOS 7.5 | 3.10.0-862.11.6.el7.x86_64+ |
| Debian 10 | 4.19.0-1-cloud+ | The noted versions are the minimum requirements. Newer versions are supported too.
A list of requirements for DPDK on MANA VMs is available here: [Microsoft Azure
**Custom kernel support**
-For any Linux kernel version that's not listed, see [Patches for building an Azure-tuned Linux kernel](https://github.com/microsoft/azure-linux-kernel). For more information, you can also contact [aznetdpdk@microsoft.com](mailto:aznetdpdk@microsoft.com).
+For any Linux kernel version that's not listed, see [Patches for building an Azure-tuned Linux kernel](https://github.com/microsoft/azure-linux-kernel). For more information, you can also contact [aznetdpdk@microsoft.com](mailto:aznetdpdk@microsoft.com).
## Region support
All Azure regions support DPDK.
Accelerated networking must be enabled on a Linux virtual machine. The virtual machine should have at least two network interfaces, with one interface for management. Enabling Accelerated networking on management interface isn't recommended. Learn how to [create a Linux virtual machine with accelerated networking enabled](create-vm-accelerated-networking-cli.md).
-In addition, DPDK uses RDMA verbs to create data queues on the Network Adapter. In the VM, ensure the correct RDMA kernel drivers are loaded. They can be mlx4_ib, mlx5_ib or mana_ib depending on VM sizes.
+In addition, DPDK uses RDMA verbs to create data queues on the Network Adapter. In the VM, ensure the correct RDMA kernel drivers are loaded. They can be mlx4_ib, mlx5_ib or mana_ib depending on VM sizes.
After restarting, run the following commands once:
``` * Create a directory for mounting with `mkdir /mnt/huge`.
-
+ * Mount hugepages with `mount -t hugetlbfs nodev /mnt/huge`.
-
+ * Check that hugepages are reserved with `grep Huge /proc/meminfo`. * The example above is for 2M huge pages. 1G huge pages can also be used.
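Taken together, the hugepage setup above can be sketched as follows. The reservation line is an assumption added for illustration (1024 x 2M pages written through sysfs); the article's own reservation command may differ.

```bash
# Reserve 2M hugepages (illustrative count), then create and mount the hugetlbfs mount point.
echo 1024 | sudo tee /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages

sudo mkdir -p /mnt/huge
sudo mount -t hugetlbfs nodev /mnt/huge

# Confirm the reservation.
grep Huge /proc/meminfo
```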
After restarting, run the following commands once:
3. PCI addresses * Use `ethtool -i <vf interface name>` to find out which PCI address to use for *VF*.
-
+   * If *eth0* has accelerated networking enabled, make sure that testpmd doesn't accidentally take over the *VF* PCI device for *eth0*. If the DPDK application accidentally takes over the management network interface and causes you to lose your SSH connection, use the serial console to stop the DPDK application. You can also use the serial console to stop or start the virtual machine. 4. Load *ib_uverbs* on each reboot with `modprobe -a ib_uverbs`. For SLES 15 only, also load *mlx4_ib* with `modprobe -a mlx4_ib`.
virtual-network Tutorial Create Route Table Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/tutorial-create-route-table-cli.md
virtual-network
Last updated 04/20/2022 -+ # Customer intent: I want to route traffic from one subnet, to a different subnet, through a network virtual appliance.
Azure automatically routes traffic between all subnets within a virtual network,
## Create a route table
-Before you can create a route table, create a resource group with [az group create](/cli/azure/group) for all resources created in this article.
+Before you can create a route table, create a resource group with [az group create](/cli/azure/group) for all resources created in this article.
```azurecli-interactive # Create a resource group.
az group create \
--location eastus ```
-Create a route table with [az network route-table create](/cli/azure/network/route-table#az-network-route-table-create). The following example creates a route table named *myRouteTablePublic*.
+Create a route table with [az network route-table create](/cli/azure/network/route-table#az-network-route-table-create). The following example creates a route table named *myRouteTablePublic*.
```azurecli-interactive # Create a route table
az network route-table create \
## Create a route
-Create a route in the route table with [az network route-table route create](/cli/azure/network/route-table/route#az-network-route-table-route-create).
+Create a route in the route table with [az network route-table route create](/cli/azure/network/route-table/route#az-network-route-table-route-create).
```azurecli-interactive az network route-table route create \
az network vnet subnet update \
## Create an NVA
-An NVA is a VM that performs a network function, such as routing, firewalling, or WAN optimization. We will create a basic NVA from a general purpose Ubuntu VM, for demonstration purposes.
+An NVA is a VM that performs a network function, such as routing, firewalling, or WAN optimization. We will create a basic NVA from a general purpose Ubuntu VM, for demonstration purposes.
Create a VM to be used as the NVA in the *DMZ* subnet with [az vm create](/cli/azure/vm). When you create a VM, Azure creates and assigns a network interface *myVmNvaVMNic* and a public IP address to the VM, by default. The `--public-ip-address ""` parameter instructs Azure not to create and assign a public IP address to the VM, since the VM doesn't need to be connected to from the internet. If SSH keys do not already exist in a default key location, the command creates them. To use a specific set of keys, use the `--ssh-key-value` option.
az vm create \
--generate-ssh-keys ```
-The VM takes a few minutes to create. Do not continue to the next step until Azure finishes creating the VM and returns output about the VM.
+The VM takes a few minutes to create. Do not continue to the next step until Azure finishes creating the VM and returns output about the VM.
For a network interface myVmNvaVMNic to be able to forward network traffic sent to it, that is not destined for its own IP address, IP forwarding must be enabled for the network interface. Enable IP forwarding for the network interface with [az network nic update](/cli/azure/network/nic).
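That command, sketched with the NIC name from this tutorial and a placeholder resource group, looks like the following:

```azurecli
# Enable IP forwarding on the NVA's network interface.
az network nic update \
    --resource-group <resource-group> \
    --name myVmNvaVMNic \
    --ip-forwarding true
```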
The command may take up to a minute to execute. Note that this change will not
## Create virtual machines
-Create two VMs in the virtual network so you can validate that traffic from the *Public* subnet is routed to the *Private* subnet through the NVA in a later step.
+Create two VMs in the virtual network so you can validate that traffic from the *Public* subnet is routed to the *Private* subnet through the NVA in a later step.
Create a VM in the *Public* subnet with [az vm create](/cli/azure/vm). The `--no-wait` parameter enables Azure to execute the command in the background so you can continue to the next command. To streamline this article, a password is used. Keys are typically used in production deployments. If you use keys, you must also configure SSH agent forwarding. For more information, see the documentation for your SSH client. Replace `<replace-with-your-password>` in the following command with a password of your choosing.
```azurecli-interactive
az vm create \
  --admin-password $adminPassword
```
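The full command isn't shown above. A sketch, assuming the `$adminPassword` variable set earlier, the *Public* subnet, and illustrative values for the resource group, VM name, virtual network name, image, and admin username:

```azurecli-interactive
# Sketch only: create a test VM in the Public subnet and return immediately.
az vm create \
  --resource-group myResourceGroup \
  --name myVmPublic \
  --image Ubuntu2204 \
  --vnet-name myVirtualNetwork \
  --subnet Public \
  --admin-username azureuser \
  --admin-password $adminPassword \
  --no-wait
```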
-The VM takes a few minutes to create. After the VM is created, the Azure CLI shows information similar to the following example:
+The VM takes a few minutes to create. After the VM is created, the Azure CLI shows information similar to the following example:
```output
{
traceroute to myVmPublic (10.0.0.4), 30 hops max, 60 byte packets
1 10.0.0.4 (10.0.0.4) 1.404 ms 1.403 ms 1.398 ms
```
-You can see that traffic is routed directly from the *myVmPrivate* VM to the *myVmPublic* VM. Azure's default routes, route traffic directly between subnets.
+You can see that traffic is routed directly from the *myVmPrivate* VM to the *myVmPublic* VM. Azure's default routes route traffic directly between subnets.
Use the following command to SSH to the *myVmPublic* VM from the *myVmPrivate* VM:
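The command itself isn't shown here. Assuming the 10.0.0.4 address from the traceroute output above and an illustrative admin username, it would look roughly like this:

```bash
# From the SSH session on myVmPrivate, connect to myVmPublic by its private IP address.
ssh azureuser@10.0.0.4
```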
virtual-network Virtual Network Scenario Udr Gw Nva https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-scenario-udr-gw-nva.md
description: Learn how to deploy virtual appliances and route tables to create a
-+ Last updated 03/22/2023
A common scenario among larger Azure customers is the need to provide a two-tier
* Administrators must be able to manage the firewall virtual appliances from their on-premises computers, by using a third firewall virtual appliance used exclusively for management purposes.
-This example is a standard perimeter network (also known as DMZ) scenario with a DMZ and a protected network. Such scenario can be constructed in Azure by using NSGs, firewall virtual appliances, or a combination of both.
+This example is a standard perimeter network (also known as DMZ) scenario with a DMZ and a protected network. Such a scenario can be constructed in Azure by using NSGs, firewall virtual appliances, or a combination of both.
The following table shows some of the pros and cons between NSGs and firewall virtual appliances.
-| Item | Pros | Cons |
+| Item | Pros | Cons |
| -- | -- | -- |
| **NSG** | No cost. <br/>Integrated into Azure role based access. <br/>Rules can be created in Azure Resource Manager templates. | Complexity could vary in larger environments. |
| **Firewall** | Full control over data plane. <br/>Central management through firewall console. | Cost of firewall appliance. <br/>Not integrated with Azure role based access. |
You can deploy the environment explained previously in Azure using different fea
* **Virtual network**. An Azure virtual network acts in similar fashion to an on-premises network, and can be segmented into one or more subnets to provide traffic isolation, and separation of concerns.
-* **Virtual appliance**. Several partners provide virtual appliances in the Azure Marketplace that can be used for the three firewalls described previously.
+* **Virtual appliance**. Several partners provide virtual appliances in the Azure Marketplace that can be used for the three firewalls described previously.
* **Route tables**. Route tables are used by Azure networking to control the flow of packets within a virtual network. These route tables can be applied to subnets. You can apply a route table to the GatewaySubnet, which forwards all traffic entering into the Azure virtual network from a hybrid connection to a virtual appliance.
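As a rough illustration of that pattern with the Azure CLI, a user-defined route can point a destination prefix at a virtual appliance, and the route table can then be associated with the GatewaySubnet. The route table name, address prefix, and next-hop address below are illustrative, not values taken from this scenario.

```azurecli-interactive
# Sketch only: create a route table, add a route that sends a protected prefix through
# a virtual appliance, then attach the table to the GatewaySubnet.
az network route-table create \
  --resource-group AZURERG \
  --name myGatewayRouteTable

az network route-table route create \
  --resource-group AZURERG \
  --route-table-name myGatewayRouteTable \
  --name ToProtectedSubnet \
  --address-prefix 10.1.0.0/24 \
  --next-hop-type VirtualAppliance \
  --next-hop-ip-address 10.0.2.4

az network vnet subnet update \
  --resource-group AZURERG \
  --vnet-name azurevnet \
  --name GatewaySubnet \
  --route-table myGatewayRouteTable
```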
You can deploy the environment explained previously in Azure using different fea
In this example, there's a subscription that contains the following items:
-* Two resource groups, not shown in the diagram.
+* Two resource groups, not shown in the diagram.
* **ONPREMRG**. Contains all resources necessary to simulate an on-premises network.
- * **AZURERG**. Contains all resources necessary for the Azure virtual network environment.
+ * **AZURERG**. Contains all resources necessary for the Azure virtual network environment.
* A virtual network named **onpremvnet** segmented as follows used to mimic an on-premises datacenter.
In this example, there's a subscription that contains the following items:
* **azsn4**. Management subnet used exclusively to provide management access to all firewall virtual appliances. This subnet only contains a NIC for each firewall virtual appliance used in the solution.
- * **GatewaySubnet**. Azure hybrid connection subnet required for ExpressRoute and VPN Gateway to provide connectivity between Azure VNets and other networks.
+ * **GatewaySubnet**. Azure hybrid connection subnet required for ExpressRoute and VPN Gateway to provide connectivity between Azure VNets and other networks.
-* There are 3 firewall virtual appliances in the **azurevnet** network.
+* There are three firewall virtual appliances in the **azurevnet** network.
* **AZF1**. External firewall exposed to the public Internet by using a public IP address resource in Azure. You need to ensure you have a template from the Marketplace or directly from your appliance vendor that deploys a 3-NIC virtual appliance.
As an example, imagine you have the following setup in an Azure vnet:
* A user defined route linked to **onpremsn1** specifies that all traffic to **onpremsn2** must be sent to **OPFW**.
-At this point, if **onpremvm1** tries to establish a connection with **onpremvm2**, the UDR will be used and traffic will be sent to **OPFW** as the next hop. Keep in mind that the actual packet destination isn't being changed, it still says **onpremvm2** is the destination.
+At this point, if **onpremvm1** tries to establish a connection with **onpremvm2**, the UDR will be used and traffic will be sent to **OPFW** as the next hop. Keep in mind that the actual packet destination isn't changed; the packet still lists **onpremvm2** as the destination.
Without IP Forwarding enabled for **OPFW**, the Azure virtual networking logic drops the packets, since it only allows packets to be sent to a VM if the VM's IP address is the destination for the packet. With IP Forwarding, the Azure virtual network logic forwards the packets to OPFW without changing their original destination address. **OPFW** must handle the packets and determine what to do with them.
-For the scenario previously to work, you must enable IP Forwarding on the NICs for **OPFW**, **AZF1**, **AZF2**, and **AZF3** that are used for routing (all NICs except the ones linked to the management subnet).
+For the preceding scenario to work, you must enable IP Forwarding on the NICs for **OPFW**, **AZF1**, **AZF2**, and **AZF3** that are used for routing (all NICs except the ones linked to the management subnet).
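With the Azure CLI, IP forwarding is enabled per network interface. The following is a sketch for one NIC only; the NIC name is hypothetical, and **AZURERG** is the Azure-side resource group described earlier.

```azurecli-interactive
# Sketch only: enable IP forwarding on one routing NIC of a firewall appliance.
# Repeat for every routing NIC on OPFW, AZF1, AZF2, and AZF3.
az network nic update \
  --resource-group AZURERG \
  --name azf1-routing-nic \
  --ip-forwarding true
```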
## Firewall Rules
virtual-network Virtual Network Service Endpoint Policies Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-service-endpoint-policies-cli.md
virtual-network
Last updated 02/03/2020 -+ # Customer intent: I want only specific Azure Storage account to be allowed access from a virtual network subnet.
```azurecli-interactive
az network vnet create \
  --subnet-prefix 10.0.0.0/24
```
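The earlier parameters of this command aren't shown above. A sketch of a complete command follows; the virtual network name, resource group, address space, and subnet name are assumptions, and only the `10.0.0.0/24` subnet prefix comes from the fragment above.

```azurecli-interactive
# Sketch only: create a virtual network with one initial subnet.
az network vnet create \
  --resource-group myResourceGroup \
  --name myVirtualNetwork \
  --address-prefixes 10.0.0.0/16 \
  --subnet-name Public \
  --subnet-prefixes 10.0.0.0/24
```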
-## Enable a service endpoint
+## Enable a service endpoint
-In this example, a service endpoint for *Microsoft.Storage* is created for the subnet *Private*:
+In this example, a service endpoint for *Microsoft.Storage* is created for the subnet *Private*:
```azurecli-interactive
az network vnet subnet create \
```
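The remaining parameters aren't shown here. A minimal sketch of creating the *Private* subnet with the *Microsoft.Storage* endpoint enabled follows; the virtual network name, resource group, and address prefix are assumptions.

```azurecli-interactive
# Sketch only: create the Private subnet with a Microsoft.Storage service endpoint.
az network vnet subnet create \
  --resource-group myResourceGroup \
  --vnet-name myVirtualNetwork \
  --name Private \
  --address-prefixes 10.0.1.0/24 \
  --service-endpoints Microsoft.Storage
```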
```azurecli-interactive
az network vnet subnet update \
  --network-security-group myNsgPrivate
```
-Create security rules with [az network nsg rule create](/cli/azure/network/nsg/rule). The rule that follows allows outbound access to the public IP addresses assigned to the Azure Storage service:
+Create security rules with [az network nsg rule create](/cli/azure/network/nsg/rule). The rule that follows allows outbound access to the public IP addresses assigned to the Azure Storage service:
```azurecli-interactive
az network nsg rule create \
```
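The rest of the rule definition isn't shown here. A sketch using the *Storage* service tag follows; the rule name, priority, and resource group are illustrative, while *myNsgPrivate* comes from the preceding step.

```azurecli-interactive
# Sketch only: allow outbound traffic from the subnet to Azure Storage public IP addresses.
az network nsg rule create \
  --resource-group myResourceGroup \
  --nsg-name myNsgPrivate \
  --name Allow-Storage-All \
  --access Allow \
  --protocol "*" \
  --direction Outbound \
  --priority 100 \
  --source-address-prefixes VirtualNetwork \
  --source-port-ranges "*" \
  --destination-address-prefixes Storage \
  --destination-port-ranges "*"
```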
The VM takes a few minutes to create. After creation, take note of the **publicI
SSH into the *myVmPrivate* VM. Replace *\<publicIpAddress>* with the public IP address of your *myVmPrivate* VM.
-```bash
+```bash
ssh <publicIpAddress>
```
From the same VM *myVmPrivate*, create a directory for a mount point:
sudo mkdir /mnt/MyAzureFileShare2
```
-Attempt to mount the Azure file share from storage account *notallowedstorageacc* to the directory you created.
-This article assumes you deployed the latest version of Linux distribution. If you are using earlier versions of Linux distribution, see [Mount on Linux](../storage/files/storage-how-to-use-files-linux.md?toc=%2fazure%2fvirtual-network%2ftoc.json) for additional instructions about mounting file shares.
+Attempt to mount the Azure file share from storage account *notallowedstorageacc* to the directory you created.
+This article assumes you deployed a recent version of a Linux distribution. If you're using an earlier version, see [Mount on Linux](../storage/files/storage-how-to-use-files-linux.md?toc=%2fazure%2fvirtual-network%2ftoc.json) for additional instructions about mounting file shares.
Before executing the command below, replace *\<storage-account-key>* with the value of *AccountKey* from **$saConnectionString2**.
Before executing the command below, replace *\<storage-account-key>* with value
sudo mount --types cifs //notallowedstorageacc.file.core.windows.net/my-file-share /mnt/MyAzureFileShare2 --options vers=3.0,username=notallowedstorageacc,password=<storage-account-key>,dir_mode=0777,file_mode=0777,serverino
```
-Access is denied, and you receive a `mount error(13): Permission denied` error, because this storage account is not in the allow list of the service endpoint policy we applied to the subnet.
+Access is denied, and you receive a `mount error(13): Permission denied` error, because this storage account is not in the allow list of the service endpoint policy we applied to the subnet.
Exit the SSH session to the *myVmPublic* VM.
Exit the SSH session to the *myVmPublic* VM.
When no longer needed, use [az group delete](/cli/azure) to remove the resource group and all of the resources it contains.
-```azurecli-interactive
+```azurecli-interactive
az group delete --name myResourceGroup --yes
```
virtual-network Virtual Network Test Latency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-test-latency.md
-+ Last updated 03/23/2023
Run *latte.exe* from the Windows command line, not from PowerShell.
```
For example:
-
+ `latte -c -a 10.0.0.4:5005 -i 65100`
1. Wait for the results. Depending on how far apart the VMs are, the test could take a few minutes to finish. Consider starting with fewer iterations to test for success before running longer tests.
virtual-network Virtual Networks Name Resolution Ddns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-networks-name-resolution-ddns.md
ms.assetid: c315961a-fa33-45cf-82b9-4551e70d32dd
-+ Last updated 04/27/2023
virtual-network Virtual Networks Name Resolution For Vms And Role Instances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md
Last updated 04/27/2023 -+ # Name resolution for resources in Azure virtual networks
There are many different DNS caching packages available (such as dnsmasq). Here'
# [RHEL, CentOS](#tab/redhat)

**RHEL/CentOS (uses NetworkManager)**:
-
+ 1. Install the dnsmasq package with the following command:

```bash
sudo yum install dnsmasq
```
-
-1. Enable the dnsmasq service with the following command:
+
+1. Enable the dnsmasq service with the following command:
```bash
systemctl enable dnsmasq.service
There are many different DNS caching packages available (such as dnsmasq). Here'
service network restart
```
-# [openSUSE, SLES](#tab/suse)
+# [openSUSE, SLES](#tab/suse)
**openSUSE/SLES (uses netconf)**:
-
+ 1. Install the dnsmasq package with the following command: ```bash
There are many different DNS caching packages available (such as dnsmasq). Here'
# [Ubuntu, Debian](#tab/ubuntu)

**Ubuntu/Debian (uses resolvconf)**:
-
+ 1. Use the following command to install the dnsmasq package: ```bash
The resolv.conf file is autogenerated, and shouldn't be edited. The specific ste
systemctl restart NetworkManager.service
```
-# [openSUSE, SLES](#tab/suse)
+# [openSUSE, SLES](#tab/suse)
**openSUSE/SLES (uses netconf)**:
The resolv.conf file is autogenerated, and shouldn't be edited. The specific ste
This section covers VMs, role instances, and web apps.

> [!NOTE]
-> [Azure DNS Private Resolver](../dns/dns-private-resolver-overview.md) replaces the need to use VM-based DNS servers in a virtual network. The following section is provided if you wish to use a VM-based DNS solution, however there are many benefits to using Azure DNS Private Resolver, including cost reduction, built-in high availability, scalability, and flexibility.
+> [Azure DNS Private Resolver](../dns/dns-private-resolver-overview.md) replaces the need to use VM-based DNS servers in a virtual network. The following section is provided if you wish to use a VM-based DNS solution; however, there are many benefits to using Azure DNS Private Resolver, including cost reduction, built-in high availability, scalability, and flexibility.
### VMs and role instances
If necessary, you can determine the internal DNS suffix by using PowerShell or t
* For virtual networks in Azure Resource Manager deployment models, the suffix is available via the [network interface REST API](/rest/api/virtualnetwork/networkinterfaces), the [Get-AzNetworkInterface](/powershell/module/az.network/get-aznetworkinterface) PowerShell cmdlet, and the [az network nic show](/cli/azure/network/nic#az-network-nic-show) Azure CLI command.
-If forwarding queries to Azure doesn't suit your needs, provide your own DNS solution or deploy an [Azure DNS Private Resolver](../dns/dns-private-resolver-overview.md).
+If forwarding queries to Azure doesn't suit your needs, provide your own DNS solution or deploy an [Azure DNS Private Resolver](../dns/dns-private-resolver-overview.md).
If you provide your own DNS solution, it needs to:
If you provide your own DNS solution, it needs to:
> [!IMPORTANT]
> If you're using Windows DNS Servers as Custom DNS Servers forwarding DNS requests to Azure DNS Servers, make sure you increase the Forwarding Timeout value to more than 4 seconds to allow Azure Recursive DNS Servers to perform proper recursion operations.
->
+>
> For more information about this issue, see [Forwarders and conditional forwarders resolution timeouts](/troubleshoot/windows-server/networking/forwarders-resolution-timeouts).
->
+>
> This recommendation may also apply to other DNS Server platforms with forwarding timeout value of 3 seconds or less.
->
+>
> Failing to do so may result in Private DNS Zone records being resolved with public IP addresses.

### Web apps
virtual-wan Howto Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/howto-private-link.md
Last updated 03/30/2023 -+ # Use Private Link in Virtual WAN