Updates from: 03/01/2024 02:09:35
Service Microsoft Docs article Related commit history on GitHub Change details
ai-services How To Cache Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/immersive-reader/how-to-cache-token.md
Title: "Cache the authentication token"
-description: This article will show you how to cache the authentication token.
-
+description: Learn how to cache the authentication token in the Immersive Reader app.
+ Previously updated : 01/14/2020- Last updated : 02/26/2024+ # How to cache the authentication token
-This article demonstrates how to cache the authentication token in order to improve performance of your application.
+This article demonstrates how to cache the authentication token in order to improve the performance of your application.
## Using ASP.NET
-Import the **Microsoft.Identity.Client** NuGet package, which is used to acquire a token.
+Import the `Microsoft.Identity.Client` NuGet package, which is used to acquire a token. For details, see [Install Identity Client NuGet package](quickstarts/client-libraries.md?pivots=programming-language-csharp#install-identity-client-nuget-package).
Create a confidential client application property.
private IConfidentialClientApplication ConfidentialClientApplication
Next, use the following code to acquire an `AuthenticationResult`, using the authentication values you got when you [created the Immersive Reader resource](./how-to-create-immersive-reader.md). > [!IMPORTANT]
-> The [Microsoft.IdentityModel.Clients.ActiveDirectory](https://www.nuget.org/packages/Microsoft.IdentityModel.Clients.ActiveDirectory) NuGet package and Azure AD Authentication Library (ADAL) have been deprecated. No new features have been added since June 30, 2020. We strongly encourage you to upgrade, see the [migration guide](../../active-directory/develop/msal-migration.md) for more details.
+> The [Microsoft.IdentityModel.Clients.ActiveDirectory](https://www.nuget.org/packages/Microsoft.IdentityModel.Clients.ActiveDirectory) NuGet package and Azure AD Authentication Library (ADAL) have been deprecated. No new features have been added since June 30, 2020. We strongly encourage you to upgrade. To learn more, see the [migration guide](../../active-directory/develop/msal-migration.md).
```csharp
public async Task<string> GetTokenAsync()
} ```
-The `AuthenticationResult` object has an `AccessToken` property which is the actual token you will use when launching the Immersive Reader using the SDK. It also has an `ExpiresOn` property which denotes when the token will expire. Before launching the Immersive Reader, you can check whether the token has expired, and acquire a new token only if it has expired.
+The `AuthenticationResult` object has an `AccessToken` property, which is the actual token you use when launching the Immersive Reader using the SDK. It also has an `ExpiresOn` property that denotes when the token expires. Before launching the Immersive Reader, you can check whether the token is expired, and acquire a new token only if it expired.
## Using Node.JS
-Add the [**request**](https://www.npmjs.com/package/request) npm package to your project. Use the following code to acquire a token, using the authentication values you got when you [created the Immersive Reader resource](./how-to-create-immersive-reader.md).
+Add the [request](https://www.npmjs.com/package/request) npm package to your project. Use the following code to acquire a token, using the authentication values you got when you [created the Immersive Reader resource](./how-to-create-immersive-reader.md).
```javascript router.get('/token', function(req, res) {
router.get('/token', function(req, res) {
}); ```
-The `expires_on` property is the date and time at which the token expires, expressed as the number of seconds since January 1, 1970 UTC. Use this value to determine whether your token has expired before attempting to acquire a new one.
+The `expires_on` property is the date and time at which the token expires, expressed as the number of seconds since January 1, 1970 UTC. Use this value to determine whether your token is expired before attempting to acquire a new one.
```javascript async function getToken() {
async function getToken() {
} ```
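For instance, a minimal caching wrapper around `getToken` might look like the following sketch; the in-memory cache is illustrative, and the response shape (`access_token` plus `expires_on` in Unix seconds) is assumed from the description above.

```javascript
// Illustrative in-memory cache around getToken(); the response shape
// (access_token, expires_on in Unix seconds) is assumed from the text above.
let cachedTokenResponse = null;

async function getCachedToken() {
    const nowInSeconds = Math.floor(Date.now() / 1000);

    // Reuse the cached token unless it expires within the next minute.
    if (cachedTokenResponse && Number(cachedTokenResponse.expires_on) > nowInSeconds + 60) {
        return cachedTokenResponse.access_token;
    }

    cachedTokenResponse = await getToken();
    return cachedTokenResponse.access_token;
}
```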
-## Next steps
+## Next step
-* Explore the [Immersive Reader SDK Reference](./reference.md)
+> [!div class="nextstepaction"]
+> [Explore the Immersive Reader SDK reference](reference.md)
ai-services How To Configure Read Aloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/immersive-reader/how-to-configure-read-aloud.md
Title: "Configure Read Aloud"
-description: This article will show you how to configure the various options for Read Aloud.
-
+description: Learn how to configure the various options for Read Aloud in Immersive Reader.
+ Previously updated : 06/29/2020- Last updated : 02/26/2024+ # How to configure Read Aloud
-This article demonstrates how to configure the various options for Read Aloud in the Immersive Reader.
+This article demonstrates how to configure the various options for Read Aloud in your Immersive Reader application.
## Automatically start Read Aloud
ImmersiveReader.launchAsync(YOUR_TOKEN, YOUR_SUBDOMAIN, YOUR_DATA, options);
## Configure the voice
-Set `voice` to either `male` or `female`. Not all languages support both voices. For more information, see the [Language Support](./language-support.md) page.
+Set `voice` to either `male` or `female`. Not all languages support both voices. For more information, see [Language support](./language-support.md).
```typescript const options = {
const options = {
## Configure playback speed
-Set `speed` to a number between `0.5` (50%) and `2.5` (250%) inclusive. Values outside this range will get clamped to either 0.5 or 2.5.
+Set `speed` to a number between `0.5` (50%) and `2.5` (250%) inclusive. Values outside this range get clamped to either 0.5 or 2.5.
```typescript const options = {
const options = {
}; ```
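As a consolidated sketch (assuming the `readAloudOptions` object described in the [SDK reference](./reference.md#options)), the automatic start, voice, and speed settings might be combined like this:

```javascript
// Illustrative Read Aloud configuration; values are examples only.
const options = {
    readAloudOptions: {
        autoplay: true,  // start Read Aloud automatically when the reader loads
        voice: 'female', // 'male' or 'female'; not all languages support both
        speed: 1.5       // between 0.5 and 2.5 inclusive
    }
};

ImmersiveReader.launchAsync(YOUR_TOKEN, YOUR_SUBDOMAIN, YOUR_DATA, options);
```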
-## Next steps
+## Next step
-* Explore the [Immersive Reader SDK Reference](./reference.md)
+> [!div class="nextstepaction"]
+> [Explore the Immersive Reader SDK reference](reference.md)
ai-services How To Configure Translation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/immersive-reader/how-to-configure-translation.md
Title: "Configure translation"
+ Title: Configure translation in Immersive Reader
-description: This article will show you how to configure the various options for translation.
-
+description: Learn how to configure the various Immersive Reader options for translation.
+ Previously updated : 01/06/2022- Last updated : 02/27/2024+ # How to configure Translation
This article demonstrates how to configure the various options for Translation i
## Configure Translation language
-The `options` parameter contains all of the flags that can be used to configure Translation. Set the `language` parameter to the language you wish to translate to. See the [Language Support](./language-support.md) for the full list of supported languages.
+The `options` parameter contains all of the flags that can be used to configure Translation. Set the `language` parameter to the language you wish to translate to. For the full list of supported languages, see [Language support](language-support.md).
```typescript const options = {
ImmersiveReader.launchAsync(YOUR_TOKEN, YOUR_SUBDOMAIN, YOUR_DATA, options);
## Automatically translate the document on load
-Set `autoEnableDocumentTranslation` to `true` to enable automatically translating the entire document when the Immersive Reader loads.
+Set `autoEnableDocumentTranslation` to `true` to enable automatic translation of the entire document when the Immersive Reader loads.
```typescript const options = {
const options = {
}; ```
-## Automatically enable word translation
+## Enable automatic word translation
Set `autoEnableWordTranslation` to `true` to enable single word translation.
const options = {
}; ```
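Putting these settings together, a combined sketch (assuming the `translationOptions` object described in the [SDK reference](./reference.md#options)) might look like:

```javascript
// Illustrative Translation configuration; values are examples only.
const options = {
    translationOptions: {
        language: 'fr-FR',                   // target translation language
        autoEnableDocumentTranslation: true, // translate the whole document on load
        autoEnableWordTranslation: true      // enable single word translation
    }
};

ImmersiveReader.launchAsync(YOUR_TOKEN, YOUR_SUBDOMAIN, YOUR_DATA, options);
```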
-## Next steps
+## Next step
-* Explore the [Immersive Reader SDK Reference](./reference.md)
+> [!div class="nextstepaction"]
+> [Explore the Immersive Reader SDK reference](reference.md)
ai-services How To Customize Launch Button https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/immersive-reader/how-to-customize-launch-button.md
Title: "Edit the Immersive Reader launch button"
-description: This article will show you how to customize the button that launches the Immersive Reader.
+description: Learn how to customize the button that launches the Immersive Reader app.
#-+ Previously updated : 03/08/2021- Last updated : 02/23/2024+ # How to customize the Immersive Reader button
-This article demonstrates how to customize the button that launches the Immersive Reader to fit the needs of your application.
+This article demonstrates how to customize the button that launches the Immersive Reader, to fit the needs of your application.
## Add the Immersive Reader button
-The Immersive Reader SDK provides default styling for the button that launches the Immersive Reader. Use the `immersive-reader-button` class attribute to enable this styling.
+The [Immersive Reader SDK](https://github.com/microsoft/immersive-reader-sdk) provides default styling for the button that launches the Immersive Reader. To enable this styling, use the `immersive-reader-button` class attribute.
```html <div class='immersive-reader-button'></div>
The Immersive Reader SDK provides default styling for the button that launches t
## Customize the button style
-Use the `data-button-style` attribute to set the style of the button. The allowed values are `icon`, `text`, and `iconAndText`. The default value is `icon`.
+To set the style of the button, use the `data-button-style` attribute. The allowed values are `icon`, `text`, and `iconAndText`. The default value is `icon`.
### Icon button
+Use the following code to render the icon button.
+ ```html <div class='immersive-reader-button' data-button-style='icon'></div> ```
-This renders the following:
-
-![This is the rendered Text button](./media/button-icon.png)
### Text button
+Use the following code to render the button text.
+ ```html <div class='immersive-reader-button' data-button-style='text'></div> ```
-This renders the following:
-
-![This is the rendered Immersive Reader button.](./media/button-text.png)
### Icon and text button
+Use the following code to render both the button and the text.
+ ```html <div class='immersive-reader-button' data-button-style='iconAndText'></div> ```
-This renders the following:
-
-![Icon button](./media/button-icon-and-text.png)
## Customize the button text
-Configure the language and the alt text for the button using the `data-locale` attribute. The default language is English.
+To configure the language and the alt text for the button, use the `data-locale` attribute. The default language is English.
```html <div class='immersive-reader-button' data-locale='fr-FR'></div>
Configure the language and the alt text for the button using the `data-locale` a
## Customize the size of the icon
-The size of the Immersive Reader icon can be configured using the `data-icon-px-size` attribute. This sets the size of the icon in pixels. The default size is 20px.
+To configure the size of the Immersive Reader icon, use the `data-icon-px-size` attribute. This sets the size of the icon in pixels. The default size is 20 px.
```html <div class='immersive-reader-button' data-icon-px-size='50'></div> ```
-## Next steps
+## Next step
-* Explore the [Immersive Reader SDK Reference](./reference.md)
+> [!div class="nextstepaction"]
+> [Explore the Immersive Reader SDK reference](reference.md)
ai-services How To Multiple Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/immersive-reader/how-to-multiple-resources.md
Title: "Integrate multiple Immersive Reader resources"
+ Title: Integrate multiple Immersive Reader resources
-description: In this tutorial, you'll create a Node.js application that launches the Immersive Reader using multiple Immersive Reader resources.
-
+description: Learn how to create a Node.js application using multiple Immersive Reader resources.
+ Previously updated : 01/14/2020- Last updated : 02/27/2024+ #Customer intent: As a developer, I want to learn more about the Immersive Reader SDK so that I can fully utilize all that the SDK has to offer. # Integrate multiple Immersive Reader resources
-In the [overview](./overview.md), you learned about what the Immersive Reader is and how it implements proven techniques to improve reading comprehension for language learners, emerging readers, and students with learning differences. In the [quickstart](./quickstarts/client-libraries.md), you learned how to use Immersive Reader with a single resource. This tutorial covers how to integrate multiple Immersive Reader resources in the same application. In this tutorial, you learn how to:
+In the [overview](overview.md), you learned about the Immersive Reader and how it implements proven techniques to improve reading comprehension for language learners, emerging readers, and students with learning differences. In the [quickstart](quickstarts/client-libraries.md), you learned how to use Immersive Reader with a single resource. This tutorial covers how to integrate multiple Immersive Reader resources in the same application.
-> [!div class="checklist"]
-> * Create multiple Immersive Reader resource under an existing resource group
-> * Launch the Immersive Reader using multiple resources
+In this tutorial, you learn how to:
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/cognitive-services/) before you begin.
+> [!div class="checklist"]
+> * Create multiple Immersive Reader resources under an existing resource group.
+> * Launch the Immersive Reader using multiple resources.
## Prerequisites
-* Follow the [quickstart](./quickstarts/client-libraries.md?pivots=programming-language-nodejs) to create a web app that launches the Immersive Reader with NodeJS. In that quickstart, you configure a single Immersive Reader resource. We will build on top of that in this tutorial.
+* An Azure subscription. If you don't have one, create a [free account](https://azure.microsoft.com/free/ai-services).
+* A single Immersive Reader resource configured for Microsoft Entra authentication. Follow [these instructions](how-to-create-immersive-reader.md) to get set up.
+* A [NodeJS web app](quickstarts/client-libraries.md?pivots=programming-language-nodejs) that launches Immersive Reader.
-## Create the Immersive Reader resources
+## Create multiple resources
-Follow [these instructions](./how-to-create-immersive-reader.md) to create each Immersive Reader resource. The **Create-ImmersiveReaderResource** script has `ResourceName`, `ResourceSubdomain`, and `ResourceLocation` as parameters. These should be unique for each resource being created. The remaining parameters should be the same as what you used when setting up your first Immersive Reader resource. This way, each resource can be linked to the same Azure resource group and Microsoft Entra application.
+Follow [these instructions](how-to-create-immersive-reader.md) again to create each Immersive Reader resource. The `Create-ImmersiveReaderResource` script has `ResourceName`, `ResourceSubdomain`, and `ResourceLocation` as parameters. These parameters should be unique for each resource being created. The remaining parameters should be the same as what you used when setting up your first Immersive Reader resource. This way, each resource can be linked to the same Azure resource group and Microsoft Entra application.
-The example below shows how to create two resources, one in WestUS, and another in EastUS. Notice the unique values for `ResourceName`, `ResourceSubdomain`, and `ResourceLocation`.
+The following example shows how to create two resources, one in **WestUS** and another in **EastUS**. Each resource has unique values for `ResourceName`, `ResourceSubdomain`, and `ResourceLocation`.
```azurepowershell-interactive Create-ImmersiveReaderResource
- -SubscriptionName <SUBSCRIPTION_NAME> `
- -ResourceName Resource_name_wus `
- -ResourceSubdomain resource-subdomain-wus `
- -ResourceSKU <RESOURCE_SKU> `
- -ResourceLocation westus `
- -ResourceGroupName <RESOURCE_GROUP_NAME> `
- -ResourceGroupLocation <RESOURCE_GROUP_LOCATION> `
- -AADAppDisplayName <AAD_APP_DISPLAY_NAME> `
- -AADAppIdentifierUri <AAD_APP_IDENTIFIER_URI> `
- -AADAppClientSecret <AAD_APP_CLIENT_SECRET>
+ -SubscriptionName <SUBSCRIPTION_NAME>
+ -ResourceName Resource_name_westus
+ -ResourceSubdomain resource-subdomain-westus
+ -ResourceSKU <RESOURCE_SKU>
+ -ResourceLocation westus
+ -ResourceGroupName <RESOURCE_GROUP_NAME>
+ -ResourceGroupLocation <RESOURCE_GROUP_LOCATION>
+ -AADAppDisplayName <MICROSOFT_ENTRA_DISPLAY_NAME>
+ -AADAppIdentifierUri <MICROSOFT_ENTRA_IDENTIFIER_URI>
+ -AADAppClientSecret <MICROSOFT_ENTRA_CLIENT_SECRET>
Create-ImmersiveReaderResource
- -SubscriptionName <SUBSCRIPTION_NAME> `
- -ResourceName Resource_name_eus `
- -ResourceSubdomain resource-subdomain-eus `
- -ResourceSKU <RESOURCE_SKU> `
- -ResourceLocation eastus `
- -ResourceGroupName <RESOURCE_GROUP_NAME> `
- -ResourceGroupLocation <RESOURCE_GROUP_LOCATION> `
- -AADAppDisplayName <AAD_APP_DISPLAY_NAME> `
- -AADAppIdentifierUri <AAD_APP_IDENTIFIER_URI> `
- -AADAppClientSecret <AAD_APP_CLIENT_SECRET>
+ -SubscriptionName <SUBSCRIPTION_NAME>
+ -ResourceName Resource_name_eastus
+ -ResourceSubdomain resource-subdomain-eastus
+ -ResourceSKU <RESOURCE_SKU>
+ -ResourceLocation eastus
+ -ResourceGroupName <RESOURCE_GROUP_NAME>
+ -ResourceGroupLocation <RESOURCE_GROUP_LOCATION>
+ -AADAppDisplayName <MICROSOFT_ENTRA_DISPLAY_NAME>
+ -AADAppIdentifierUri <MICROSOFT_ENTRA_IDENTIFIER_URI>
+ -AADAppClientSecret <MICROSOFT_ENTRA_CLIENT_SECRET>
``` ## Add resources to environment configuration
-In the quickstart, you created an environment configuration file that contains the `TenantId`, `ClientId`, `ClientSecret`, and `Subdomain` parameters. Since all of your resources use the same Microsoft Entra application, we can use the same values for the `TenantId`, `ClientId`, and `ClientSecret`. The only change that needs to be made is to list each subdomain for each resource.
+In the quickstart, you created an environment configuration file that contains the `TenantId`, `ClientId`, `ClientSecret`, and `Subdomain` parameters. Since all of your resources use the same Microsoft Entra application, you can use the same values for the `TenantId`, `ClientId`, and `ClientSecret`. The only change that needs to be made is to list each subdomain for each resource.
-Your new __.env__ file should now look something like the following:
+Your new *.env* file should now look something like:
```text TENANT_ID={YOUR_TENANT_ID}
SUBDOMAIN_WUS={YOUR_WESTUS_SUBDOMAIN}
SUBDOMAIN_EUS={YOUR_EASTUS_SUBDOMAIN} ```
-Be sure not to commit this file into source control, as it contains secrets that should not be made public.
+> [!NOTE]
+> Be sure not to commit this file into source control because it contains secrets that shouldn't be made public.
-Next, we're going to modify the _routes\index.js_ file that we created to support our multiple resources. Replace its content with the following code.
+Next, modify the *routes\index.js* file that you created to support your multiple resources. Replace its content with the following code.
As before, this code creates an API endpoint that acquires a Microsoft Entra authentication token using your service principal password. This time, it allows the user to specify a resource location and pass it in as a query parameter. It then returns an object containing the token and the corresponding subdomain.
router.get('/GetTokenAndSubdomain', function(req, res) {
module.exports = router; ```
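A minimal sketch of what the region-aware route might look like follows; it's illustrative only and assumes the `request` package plus the `CLIENT_ID` and `CLIENT_SECRET` values from the quickstart's *.env* file, alongside the subdomain variables added above.

```javascript
// Illustrative region-aware token route; the token request mirrors the quickstart.
router.get('/GetTokenAndSubdomain', function (req, res) {
    // Pick the subdomain that matches the requested region ('eus' or 'wus').
    const subdomain = req.query.region === 'eus'
        ? process.env.SUBDOMAIN_EUS
        : process.env.SUBDOMAIN_WUS;

    request.post({
        url: `https://login.windows.net/${process.env.TENANT_ID}/oauth2/token`,
        form: {
            grant_type: 'client_credentials',
            client_id: process.env.CLIENT_ID,
            client_secret: process.env.CLIENT_SECRET,
            resource: 'https://cognitiveservices.azure.com/'
        }
    }, function (err, response, body) {
        if (err) {
            return res.status(500).send('Unable to acquire Microsoft Entra token.');
        }
        const tokenResponse = JSON.parse(body);
        // Return both the token and the subdomain the client should launch against.
        res.send({ token: tokenResponse.access_token, subdomain: subdomain });
    });
});
```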
-The **getimmersivereaderlaunchparams** API endpoint should be secured behind some form of authentication (for example, [OAuth](https://oauth.net/2/)) to prevent unauthorized users from obtaining tokens to use against your Immersive Reader service and billing; that work is beyond the scope of this tutorial.
+The `GetTokenAndSubdomain` API endpoint should be secured behind some form of authentication (for example, [OAuth](https://oauth.net/2/)) to prevent unauthorized users from obtaining tokens to use against your Immersive Reader service and billing; that work is beyond the scope of this tutorial.
-## Launch the Immersive Reader with sample content
+## Add sample content
-1. Open _views\index.pug_, and replace its content with the following code. This code populates the page with some sample content, and adds two buttons that launches the Immersive Reader. One for launching Immersive Reader for the EastUS resource, and another for the WestUS resource.
+Open *views\index.pug*, and replace its content with the following code. This code populates the page with some sample content, and adds two buttons that launch the Immersive Reader: one for the **EastUS** resource, and another for the **WestUS** resource.
- ```pug
- doctype html
- html
- head
- title Immersive Reader Quickstart Node.js
+```pug
+doctype html
+html
+ head
+ title Immersive Reader Quickstart Node.js
- link(rel='stylesheet', href='https://stackpath.bootstrapcdn.com/bootstrap/3.4.1/css/bootstrap.min.css')
+ link(rel='stylesheet', href='https://stackpath.bootstrapcdn.com/bootstrap/3.4.1/css/bootstrap.min.css')
- // A polyfill for Promise is needed for IE11 support.
- script(src='https://cdn.jsdelivr.net/npm/promise-polyfill@8/dist/polyfill.min.js')
+ // A polyfill for Promise is needed for IE11 support.
+ script(src='https://cdn.jsdelivr.net/npm/promise-polyfill@8/dist/polyfill.min.js')
- script(src='https://ircdname.azureedge.net/immersivereadersdk/immersive-reader-sdk.1.2.0.js')
- script(src='https://code.jquery.com/jquery-3.3.1.min.js')
+ script(src='https://ircdname.azureedge.net/immersivereadersdk/immersive-reader-sdk.1.2.0.js')
+ script(src='https://code.jquery.com/jquery-3.3.1.min.js')
- style(type="text/css").
- .immersive-reader-button {
- background-color: white;
- margin-top: 5px;
- border: 1px solid black;
- float: right;
- }
- body
- div(class="container")
- button(class="immersive-reader-button" data-button-style="icon" data-locale="en" onclick='handleLaunchImmersiveReader("wus")') WestUS Immersive Reader
- button(class="immersive-reader-button" data-button-style="icon" data-locale="en" onclick='handleLaunchImmersiveReader("eus")') EastUS Immersive Reader
-
- h1(id="ir-title") About Immersive Reader
- div(id="ir-content" lang="en-us")
- p Immersive Reader is a tool that implements proven techniques to improve reading comprehension for emerging readers, language learners, and people with learning differences. The Immersive Reader is designed to make reading more accessible for everyone. The Immersive Reader
-
- ul
- li Shows content in a minimal reading view
- li Displays pictures of commonly used words
- li Highlights nouns, verbs, adjectives, and adverbs
- li Reads your content out loud to you
- li Translates your content into another language
- li Breaks down words into syllables
-
- h3 The Immersive Reader is available in many languages.
-
- p(lang="es-es") El Lector inmersivo está disponible en varios idiomas.
- p(lang="zh-cn") 沉浸式阅读器支持许多语言
- p(lang="de-de") Der plastische Reader ist in vielen Sprachen verf├╝gbar.
- p(lang="ar-eg" dir="rtl" style="text-align:right") يتوفر \"القارئ الشامل\" في العديد من اللغات.
-
- script(type="text/javascript").
- function getTokenAndSubdomainAsync(region) {
- return new Promise(function (resolve, reject) {
- $.ajax({
- url: "/GetTokenAndSubdomain",
- type: "GET",
- data: {
- region: region
- },
- success: function (data) {
- if (data.error) {
- reject(data.error);
- } else {
- resolve(data);
- }
- },
- error: function (err) {
- reject(err);
+ style(type="text/css").
+ .immersive-reader-button {
+ background-color: white;
+ margin-top: 5px;
+ border: 1px solid black;
+ float: right;
+ }
+ body
+ div(class="container")
+ button(class="immersive-reader-button" data-button-style="icon" data-locale="en" onclick='handleLaunchImmersiveReader("wus")') WestUS Immersive Reader
+ button(class="immersive-reader-button" data-button-style="icon" data-locale="en" onclick='handleLaunchImmersiveReader("eus")') EastUS Immersive Reader
+
+ h1(id="ir-title") About Immersive Reader
+ div(id="ir-content" lang="en-us")
+ p Immersive Reader is a tool that implements proven techniques to improve reading comprehension for emerging readers, language learners, and people with learning differences. The Immersive Reader is designed to make reading more accessible for everyone. The Immersive Reader
+
+ ul
+ li Shows content in a minimal reading view
+ li Displays pictures of commonly used words
+ li Highlights nouns, verbs, adjectives, and adverbs
+ li Reads your content out loud to you
+ li Translates your content into another language
+ li Breaks down words into syllables
+
+ h3 The Immersive Reader is available in many languages.
+
+ p(lang="es-es") El Lector inmersivo está disponible en varios idiomas.
+ p(lang="zh-cn") 沉浸式阅读器支持许多语言
+          p(lang="de-de") Der plastische Reader ist in vielen Sprachen verfügbar.
+ p(lang="ar-eg" dir="rtl" style="text-align:right") يتوفر \"القارئ الشامل\" في العديد من اللغات.
+
+script(type="text/javascript").
+function getTokenAndSubdomainAsync(region) {
+ return new Promise(function (resolve, reject) {
+ $.ajax({
+ url: "/GetTokenAndSubdomain",
+ type: "GET",
+ data: {
+ region: region
+ },
+ success: function (data) {
+ if (data.error) {
+ reject(data.error);
+ } else {
+ resolve(data);
}
- });
+ },
+ error: function (err) {
+ reject(err);
+ }
});
- }
-
- function handleLaunchImmersiveReader(region) {
- getTokenAndSubdomainAsync(region)
- .then(function (response) {
- const token = response["token"];
- const subdomain = response["subdomain"];
- // Learn more about chunk usage and supported MIME types https://learn.microsoft.com/azure/ai-services/immersive-reader/reference#chunk
- const data = {
- Title: $("#ir-title").text(),
- chunks: [{
- content: $("#ir-content").html(),
- mimeType: "text/html"
- }]
- };
- // Learn more about options https://learn.microsoft.com/azure/ai-services/immersive-reader/reference#options
- const options = {
- "onExit": exitCallback,
- "uiZIndex": 2000
- };
- ImmersiveReader.launchAsync(token, subdomain, data, options)
- .catch(function (error) {
- alert("Error in launching the Immersive Reader. Check the console.");
- console.log(error);
- });
- })
- .catch(function (error) {
- alert("Error in getting the Immersive Reader token and subdomain. Check the console.");
- console.log(error);
- });
- }
-
- function exitCallback() {
- console.log("This is the callback function. It is executed when the Immersive Reader closes.");
- }
- ```
-
-3. Our web app is now ready. Start the app by running:
-
- ```bash
- npm start
- ```
-
-4. Open your browser and navigate to `http://localhost:3000`. You should see the above content on the page. Select either the **EastUS Immersive Reader** button or the **WestUS Immersive Reader** button to launch the Immersive Reader using those respective resources.
-
-## Next steps
-
-* Explore the [Immersive Reader SDK](https://github.com/microsoft/immersive-reader-sdk) and the [Immersive Reader SDK Reference](./reference.md)
-* View code samples on [GitHub](https://github.com/microsoft/immersive-reader-sdk/tree/master/js/samples/advanced-csharp)
+ });
+ }
+
+ function handleLaunchImmersiveReader(region) {
+ getTokenAndSubdomainAsync(region)
+ .then(function (response) {
+ const token = response["token"];
+ const subdomain = response["subdomain"];
+ // Learn more about chunk usage and supported MIME types https://learn.microsoft.com/azure/ai-services/immersive-reader/reference#chunk
+ const data = {
+ Title: $("#ir-title").text(),
+ chunks: [{
+ content: $("#ir-content").html(),
+ mimeType: "text/html"
+ }]
+ };
+ // Learn more about options https://learn.microsoft.com/azure/ai-services/immersive-reader/reference#options
+ const options = {
+ "onExit": exitCallback,
+ "uiZIndex": 2000
+ };
+ ImmersiveReader.launchAsync(token, subdomain, data, options)
+ .catch(function (error) {
+ alert("Error in launching the Immersive Reader. Check the console.");
+ console.log(error);
+ });
+ })
+ .catch(function (error) {
+ alert("Error in getting the Immersive Reader token and subdomain. Check the console.");
+ console.log(error);
+ });
+ }
+
+ function exitCallback() {
+ console.log("This is the callback function. It is executed when the Immersive Reader closes.");
+ }
+```
+
+Your web app is now ready. Start the app by running:
+
+```bash
+npm start
+```
+
+Open your browser and navigate to `http://localhost:3000`. You should see the above content on the page. Select either the **EastUS Immersive Reader** button or the **WestUS Immersive Reader** button to launch the Immersive Reader using those respective resources.
+
+## Next step
+
+> [!div class="nextstepaction"]
+> [Explore the Immersive Reader SDK](https://github.com/microsoft/immersive-reader-sdk)
ai-services How To Prepare Html https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/immersive-reader/how-to-prepare-html.md
Title: "How to prepare HTML content for Immersive Reader"
-description: Learn how to launch the Immersive reader using HTML, JavaScript, Python, Android, or iOS. Immersive Reader uses proven techniques to improve reading comprehension for language learners, emerging readers, and students with learning differences.
-
+description: Learn how to structure HTML and retrieve your content for use with Immersive Reader.
+ Previously updated : 03/04/2021- Last updated : 02/23/2024+ # How to prepare HTML content for Immersive Reader
-This article shows you how to structure your HTML and retrieve the content, so that it can be used by Immersive Reader.
+This article shows you how to structure your HTML and retrieve the content, so that your Immersive Reader application can use it.
## Prepare the HTML content
-Place the content that you want to render in the Immersive Reader inside of a container element. Be sure that the container element has a unique `id`. The Immersive Reader provides support for basic HTML elements, see the [reference](reference.md#html-support) for more information.
+Place the content that you want to render in the Immersive Reader inside of a container element. Be sure that the container element has a unique `id`. To learn more about how the Immersive Reader provides support for basic HTML elements, see the [SDK reference](reference.md#html-support).
```html <div id='immersive-reader-content'>
const data = {
ImmersiveReader.launchAsync(YOUR_TOKEN, YOUR_SUBDOMAIN, data, YOUR_OPTIONS); ```
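For example, the retrieval step might look like the following sketch; it assumes the container uses the `immersive-reader-content` ID shown above, and the chunk shape follows the SDK [chunk](reference.md#chunk) definition.

```javascript
// Read the HTML out of the container and pass it to the Immersive Reader as a chunk.
const htmlContent = document.getElementById('immersive-reader-content').innerHTML;

const data = {
    chunks: [{
        content: htmlContent,
        mimeType: 'text/html'
    }]
};

ImmersiveReader.launchAsync(YOUR_TOKEN, YOUR_SUBDOMAIN, data, YOUR_OPTIONS);
```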
-## Next steps
+## Next step
-* Explore the [Immersive Reader SDK Reference](reference.md)
+> [!div class="nextstepaction"]
+> [Explore the Immersive Reader SDK reference](reference.md)
ai-services How To Store User Preferences https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/immersive-reader/how-to-store-user-preferences.md
Title: "Store user preferences"
+ Title: Store user preferences for Immersive Reader
-description: This article will show you how to store the user's preferences.
-
+description: Learn how to store user preferences via the Immersive Reader SDK options.
+ Previously updated : 06/29/2020- Last updated : 02/23/2024+ # How to store user preferences
-This article demonstrates how to store the user's UI settings, formally known as **user preferences**, via the [-preferences](./reference.md#options) and [-onPreferencesChanged](./reference.md#options) Immersive Reader SDK options.
+This article demonstrates how to store the user's UI settings, or *user preferences*, via the [-preferences](./reference.md#options) and [-onPreferencesChanged](./reference.md#options) Immersive Reader SDK options.
-When the [CookiePolicy](./reference.md#cookiepolicy-options) SDK option is set to *Enabled*, the Immersive Reader application stores the **user preferences** (text size, theme color, font, and so on) in cookies, which are local to a specific browser and device. Each time the user launches the Immersive Reader on the same browser and device, it will open with the user's preferences from their last session on that device. However, if the user opens the Immersive Reader on a different browser or device, the settings will initially be configured with the Immersive Reader's default settings, and the user will have to set their preferences again, and so on for each device they use. The `-preferences` and `-onPreferencesChanged` Immersive Reader SDK options provide a way for applications to roam a user's preferences across various browsers and devices, so that the user has a consistent experience wherever they use the application.
+When the [CookiePolicy](./reference.md#cookiepolicy-options) SDK option is set to *Enabled*, the Immersive Reader application stores user preferences, such as text size, theme color, and font, by using cookies. These cookies are local to a specific browser and device. Each time the user launches the Immersive Reader on the same browser and device, it opens with the user's preferences from their last session on that device. However, if the user opens the Immersive Reader app on a different browser or device, the settings are initially configured with the Immersive Reader's default settings, and the user needs to set their preferences again for each device they use. The `-preferences` and `-onPreferencesChanged` Immersive Reader SDK options provide a way for applications to roam a user's preferences across various browsers and devices, so that the user has a consistent experience wherever they use the application.
-First, by supplying the `-onPreferencesChanged` callback SDK option when launching the Immersive Reader application, the Immersive Reader will send a `-preferences` string back to the host application each time the user changes their preferences during the Immersive Reader session. The host application is then responsible for storing the user preferences in their own system. Then, when that same user launches the Immersive Reader again, the host application can retrieve that user's preferences from storage, and supply them as the `-preferences` string SDK option when launching the Immersive Reader application, so that the user's preferences are restored.
+First, by supplying the `-onPreferencesChanged` callback SDK option when launching the Immersive Reader application, the Immersive Reader sends a `-preferences` string back to the host application each time the user changes their preferences during the Immersive Reader session. The host application is then responsible for storing the user preferences in their own system. Then, when that same user launches the Immersive Reader again, the host application can retrieve that user's preferences from storage, and supply them as the `-preferences` string SDK option when launching the Immersive Reader application, so that the user's preferences are restored.
-This functionality may be used as an alternate means to storing **user preferences** in the case where using cookies is not desirable or feasible.
+This functionality can be used as an alternate means to storing user preferences when using cookies isn't desirable or feasible.
> [!CAUTION]
-> **IMPORTANT** Do not attempt to programmatically change the values of the `-preferences` string sent to and from the Immersive Reader application as this may cause unexpected behavior resulting in a degraded user experience for your customers. Host applications should never assign a custom value to or manipulate the `-preferences` string. When using the `-preferences` string option, use only the exact value that was returned from the `-onPreferencesChanged` callback option.
+> Don't attempt to programmatically change the values of the `-preferences` string sent to and from the Immersive Reader application because this might cause unexpected behavior resulting in a degraded user experience. Host applications should never assign a custom value to or manipulate the `-preferences` string. When using the `-preferences` string option, use only the exact value that was returned from the `-onPreferencesChanged` callback option.
-## How to enable storing user preferences
+## Enable storing user preferences
-the Immersive Reader SDK [launchAsync](./reference.md#launchasync) `options` parameter contains the `-onPreferencesChanged` callback. This function will be called anytime the user changes their preferences. The `value` parameter contains a string, which represents the user's current preferences. This string is then stored, for that user, by the host application.
+The Immersive Reader SDK [launchAsync](./reference.md#launchasync) `options` parameter contains the `-onPreferencesChanged` callback. This function is called anytime the user changes their preferences. The `value` parameter contains a string, which represents the user's current preferences. This string is then stored, for that user, by the host application.
```typescript const options = {
const options = {
ImmersiveReader.launchAsync(YOUR_TOKEN, YOUR_SUBDOMAIN, YOUR_DATA, options); ```
-## How to load user preferences into the Immersive Reader
+## Load user preferences
-Pass in the user's preferences to the Immersive Reader using the `-preferences` option. A trivial example to store and load the user's preferences is as follows:
+Pass in the user's preferences to the Immersive Reader app by using the `-preferences` option. A trivial example to store and load the user's preferences is as follows:
```typescript const storedUserPreferences = localStorage.getItem("USER_PREFERENCES");
const options = {
}; ```
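Putting the two options together, a minimal round trip using browser `localStorage` (purely illustrative; a host application might instead store preferences server-side per user) could look like:

```javascript
// Load any previously stored preferences for this user; null on first launch is fine.
const storedUserPreferences = localStorage.getItem('USER_PREFERENCES');

const options = {
    preferences: storedUserPreferences,
    onPreferencesChanged: function (value) {
        // Store the exact string the Immersive Reader returns; never modify it.
        localStorage.setItem('USER_PREFERENCES', value);
    }
};

ImmersiveReader.launchAsync(YOUR_TOKEN, YOUR_SUBDOMAIN, YOUR_DATA, options);
```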
-## Next steps
+## Next step
-* Explore the [Immersive Reader SDK Reference](./reference.md)
+> [!div class="nextstepaction"]
+> [Explore the Immersive Reader SDK reference](reference.md)
ai-services Display Math https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/immersive-reader/how-to/display-math.md
Title: "Display math in the Immersive Reader"
-description: This article will show you how to display math in the Immersive Reader.
-
+description: Learn how to display math in the Immersive Reader app.
+ Previously updated : 01/14/2020- Last updated : 02/26/2024+ # How to display math in the Immersive Reader
-The Immersive Reader can display math when provided in the form of Mathematical Markup Language ([MathML](https://developer.mozilla.org/docs/Web/MathML)).
-The MIME type can be set through the Immersive Reader [chunk](../reference.md#chunk). See [supported MIME types](../reference.md#supported-mime-types) for more information.
+The Immersive Reader can display math expressions when provided in the form of Mathematical Markup Language ([MathML](https://developer.mozilla.org/docs/Web/MathML)).
-## Send Math to the Immersive Reader
-In order to send math to the Immersive Reader, supply a chunk containing MathML, and set the MIME type to ```application/mathml+xml```;
+## Send math to the Immersive Reader
-For example, if your content were the following:
+In order to display math in the Immersive Reader app, supply a [chunk](../reference.md#chunk) that contains MathML, and set the MIME type to `application/mathml+xml`. To learn more, see [supported MIME types](../reference.md#supported-mime-types).
+
+For example, see the following content:
```html <div id='ir-content'>
For example, if your content were the following:
</div> ```
-Then you could display your content by using the following JavaScript.
+You can then display your content by using the following JavaScript.
```javascript const data = {
ImmersiveReader.launchAsync(YOUR_TOKEN, YOUR_SUBDOMAIN, data, YOUR_OPTIONS);
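For example, that launch call might look like the following sketch; it assumes the MathML lives inside the `ir-content` element shown above.

```javascript
// Read the MathML markup from the container and send it as a MathML chunk.
const mathML = document.getElementById('ir-content').innerHTML.trim();

const data = {
    title: 'Math',
    chunks: [{
        content: mathML,
        mimeType: 'application/mathml+xml'
    }]
};

ImmersiveReader.launchAsync(YOUR_TOKEN, YOUR_SUBDOMAIN, data, YOUR_OPTIONS);
```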
When you launch the Immersive Reader, you should see:
-![Math in Immersive Reader](../media/how-tos/1-math.png)
-## Next steps
+## Next step
-* Explore the [Immersive Reader SDK](https://github.com/microsoft/immersive-reader-sdk) and the [Immersive Reader SDK Reference](../reference.md)
+> [!div class="nextstepaction"]
+> [Explore the Immersive Reader SDK reference](../reference.md)
ai-services Set Cookie Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/immersive-reader/how-to/set-cookie-policy.md
Title: "Set Immersive Reader Cookie Policy"
+ Title: "Set Immersive Reader cookie policy"
-description: This article will show you how to set the cookie policy for the Immersive Reader.
+description: Learn how to set the cookie policy for the Immersive Reader app.
#-+ Previously updated : 01/06/2020- Last updated : 02/26/2024+ # How to set the cookie policy for the Immersive Reader
-The Immersive Reader will disable cookie usage by default. If you enable cookie usage, then the Immersive Reader may use cookies to maintain user preferences and track feature usage. If you enable cookie usage in the Immersive Reader, please consider the requirements of EU Cookie Compliance Policy. It is the responsibility of the host application to obtain any necessary user consent in accordance with EU Cookie Compliance Policy.
+The Immersive Reader disables cookie usage by default. If you enable cookie usage, then the Immersive Reader can use cookies to maintain user preferences and track feature usage. If you enable cookie usage in the Immersive Reader, consider the requirements of the EU Cookie Compliance Policy. It's the responsibility of the host application to obtain any necessary user consent in accordance with the EU Cookie Compliance Policy.
The cookie policy can be set through the Immersive Reader [options](../reference.md#options).
-## Enable Cookie Usage
+## Enable cookie usage
```javascript
-var options = {
+const options = {
'cookiePolicy': ImmersiveReader.CookiePolicy.Enable }; ImmersiveReader.launchAsync(YOUR_TOKEN, YOUR_SUBDOMAIN, YOUR_DATA, options); ```
-## Disable Cookie Usage
+## Disable cookie usage
```javascript
-var options = {
+const options = {
'cookiePolicy': ImmersiveReader.CookiePolicy.Disable }; ImmersiveReader.launchAsync(YOUR_TOKEN, YOUR_SUBDOMAIN, YOUR_DATA, options); ```
-## Next steps
+## Next step
-* View the [Node.js quickstart](../quickstarts/client-libraries.md?pivots=programming-language-nodejs) to see what else you can do with the Immersive Reader SDK using Node.js
-* View the [Android tutorial](../how-to-launch-immersive-reader.md) to see what else you can do with the Immersive Reader SDK using Java or Kotlin for Android
-* View the [iOS tutorial](../how-to-launch-immersive-reader.md) to see what else you can do with the Immersive Reader SDK using Swift for iOS
-* View the [Python tutorial](../how-to-launch-immersive-reader.md) to see what else you can do with the Immersive Reader SDK using Python
-* Explore the [Immersive Reader SDK](https://github.com/microsoft/immersive-reader-sdk) and the [Immersive Reader SDK Reference](../reference.md)
+> [!div class="nextstepaction"]
+> [View the quickstart guides](../quickstarts/client-libraries.md?pivots=programming-language-nodejs)
ai-services Client Libraries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/immersive-reader/quickstarts/client-libraries.md
Title: "Quickstart: Immersive Reader client library"
+ Title: "Quickstart: Build an Immersive Reader app"
-description: "The Immersive Reader client library makes it easy to integrate the Immersive Reader service into your web applications to improve reading comprehension. In this quickstart, you'll learn how to use Immersive Reader for text selection, recognizing parts of speech, reading selected text out loud, translation, and more."
+description: Learn how to build an Immersive Reader app using C#, Node.js, Java, Kotlin, and Swift to help improve reading comprehension.
#-+ zone_pivot_groups: programming-languages-set-twenty Previously updated : 03/08/2021- Last updated : 02/14/2024+ keywords: display pictures, parts of speech, read selected text, translate words, reading comprehension
-# Quickstart: Get started with Immersive Reader
+# Quickstart: Build an Immersive Reader app
+
+[Immersive Reader](https://www.onenote.com/learningtools) is an inclusively designed tool that implements proven techniques to improve reading comprehension for new readers, language learners, and people with learning differences such as dyslexia. You can use Immersive Reader in your applications to isolate text to improve focus, display pictures for commonly used words, highlight parts of speech, read selected text out loud, translate words and phrases in real-time, and more.
::: zone pivot="programming-language-csharp"
ai-services Use Your Data Securely https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/use-your-data-securely.md
recommendations: false
# Securely use Azure OpenAI On Your Data
-Use this article to learn how to use Azure OpenAI On Your Data securely by protecting data and resources with Microsoft Entra ID role-based access control, virtual networks and private endpoints.
+Use this article to learn how to use Azure OpenAI On Your Data securely by protecting data and resources with Microsoft Entra ID role-based access control, virtual networks, and private endpoints.
This article is only applicable when using [Azure OpenAI On Your Data with text](/azure/ai-services/openai/concepts/use-your-data). It does not apply to [Azure OpenAI On Your Data with images](/azure/ai-services/openai/concepts/use-your-image-data).
To allow your Azure AI Search to call your Azure OpenAI `preprocessing-jobs` as
Set `networkAcls.bypass` as `AzureServices` from the management API. For more information, see [Virtual networks article](/azure/ai-services/cognitive-services-virtual-networks?tabs=portal#grant-access-to-trusted-azure-services-for-azure-openai).
+This step can be skipped only if you have a [shared private link](#create-shared-private-link) for your Azure AI Search resource.
+ ### Disable public network access You can disable public network access of your Azure OpenAI resource in the Azure portal.
To allow access to your Azure OpenAI service from your client machines, like usi
## Configure Azure AI Search
-You can use basic pricing tier and higher for the configuration below. You don't have to use S2 pricing tier because the configuration doesn't require [private endpoint support for indexers with a skill set](/azure/search/search-limits-quotas-capacity#shared-private-link-resource-limits). See [step 8](#data-ingestion-architecture) of the data ingestion architecture diagram. The networking for custom skill is *bypass trusted service*, not *private endpoint*.
+You can use the basic pricing tier and higher for the following configuration. The S2 pricing tier isn't required, but if you use it, you see [additional options](#create-shared-private-link) available for selection.
### Enable managed identity
To allow access to your Azure AI Search resource from your client machines, like
:::image type="content" source="../media/use-your-data/approve-private-endpoint.png" alt-text="A screenshot showing private endpoint approval screen." lightbox="../media/use-your-data/approve-private-endpoint.png":::
-The private endpoint resource is provisioned in a Microsoft managed tenant, while the linked resource is in your tenant. You can't access the private endpoint resource by just clicking the **private endpoint** link (in blue font) in the **Private access** tab of the **Networking page**. Instead, click elsewhere on the row, then the **Approve**` button above should be clickable.
+The private endpoint resource is provisioned in a Microsoft managed tenant, while the linked resource is in your tenant. You can't access the private endpoint resource by just clicking the **private endpoint** link (in blue font) in the **Private access** tab of the **Networking page**. Instead, click elsewhere on the row, then the **Approve** button above should be clickable.
Learn more about the [manual approval workflow](/azure/private-link/private-endpoint-overview#access-to-a-private-link-resource-using-approval-workflow).
+### Create shared private link
+
+> [!TIP]
+> If you're using a basic or standard pricing tier, or if this is your first time setting up all of your resources securely, you should skip this advanced topic.
+
+This section applies only to an S2 pricing tier search resource, because it requires [private endpoint support for indexers with a skill set](/azure/search/search-limits-quotas-capacity#shared-private-link-resource-limits).
+
+To create a shared private link from your search resource to your Azure OpenAI resource, see the [search documentation](/azure/search/search-indexer-howto-access-private). Select **Resource type** as `Microsoft.CognitiveServices/accounts` and **Group ID** as `openai_account`.
+
+With shared private link, [step eight](#data-ingestion-architecture) of the data ingestion architecture diagram is changed from **bypass trusted service** to **private endpoint**.
++
+The Azure AI Search shared private link you created is also in a Microsoft managed virtual network, not your virtual network. The difference compared to the other managed private endpoint created [earlier](#disable-public-network-access-1) is that the managed private endpoint `[1]` from Azure OpenAI to Azure Search is provisioned through the [form application](#disable-public-network-access-1), while the managed private endpoint `[2]` from Azure Search to Azure OpenAI is provisioned via the Azure portal or the REST API of Azure Search.
++ ## Configure Storage Account ### Enable trusted service
So far you have already setup each resource work independently. Next you need to
| `Search Service Contributor` | Azure OpenAI | Azure AI Search | Inference service queries the index schema for auto fields mapping. Data ingestion service creates index, data sources, skill set, indexer, and queries the indexer status. | | `Storage Blob Data Contributor` | Azure OpenAI | Storage Account | Reads from the input container, and writes the preprocess result to the output container. | | `Cognitive Services OpenAI Contributor` | Azure AI Search | Azure OpenAI | Custom skill |
-| `Storage Blob Data Contributor` | Azure AI Search | Storage Account | Reads blob and writes knowledge store |
+| `Storage Blob Data Contributor` | Azure AI Search | Storage Account | Reads blob and writes knowledge store. |
In the above table, the `Assignee` means the system assigned managed identity of that resource.
ai-services Role Based Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/role-based-access-control.md
You can manage access and permissions to your Speech resources with Azure role-based access control (Azure RBAC). Assigned roles can vary across Speech resources. For example, you can assign a role to a Speech resource that should only be used to train a custom speech model. You can assign another role to a Speech resource that is used to transcribe audio files. Depending on who can access each Speech resource, you can effectively set a different level of access per application or user. For more information on Azure RBAC, see the [Azure RBAC documentation](../../role-based-access-control/overview.md). > [!NOTE]
-> A Speech resource can inherit or be assigned multiple roles. The final level of access to this resource is a combination of all roles permissions from the operation level.
+> A Speech resource can inherit or be assigned multiple roles. The final level of access to the resource is a combination of all role permissions.
## Roles for Speech resources
-A role definition is a collection of permissions. When you create a Speech resource, the built-in roles in this table are assigned by default.
+A role definition is a collection of permissions. When you create a Speech resource, the built-in roles in the following table are available for assignment.
+
+> [!WARNING]
+> Speech service architecture differs from other Azure AI services in the way it uses [Azure control plane and data plane](../../azure-resource-manager/management/control-plane-and-data-plane.md). Speech service is extensively using data plane comparing to other Azure AI services, and this requires different set up for the roles. Because of this some general Cognitive Services roles have actual access right set that doesn't exactly match their name when used in Speech services scenario. For instance *Cognitive Services User* provides in effect the Contributor rights, while *Cognitive Services Contributor* provides no access at all. The same is true for generic *Owner* and *Contributor* roles which have no data plane rights and consequently provide no access to Speech resource. To keep consistency we recommend to use roles containing *Speech* in their names. These roles are *Cognitive Services Speech User* and *Cognitive Services Speech Contributor*. Their access right sets were designed specifically for the Speech service. In case you would like to use general Cognitive Services roles and Azure generic roles, we ask you to very carefully study the following access right table.
| Role | Can list resource keys | Access to data, models, and endpoints in custom projects| Access to speech transcription and synthesis APIs | | | | |
-|**Owner** |Yes |View, create, edit, and delete |Yes |
-|**Contributor** |Yes |View, create, edit, and delete |Yes |
-|**Cognitive Services Contributor** |Yes |View, create, edit, and delete |Yes |
+|**Owner** |Yes |None |No |
+|**Contributor** |Yes |None |No |
+|**Cognitive Services Contributor** |Yes |None |No |
|**Cognitive Services User** |Yes |View, create, edit, and delete |Yes | |**Cognitive Services Speech Contributor** |No | View, create, edit, and delete |Yes | |**Cognitive Services Speech User** |No |View only |Yes |
ai-studio Flow Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/flow-deploy.md
Last updated 2/24/2024 -+ # Deploy a flow for real-time inference
You can also directly go to the **Deployments** page from the left navigation, s
## Test the endpoint
-In the endpoint detail page, switch to the **Test** tab.
+In the deployment detail page, switch to the **Test** tab.
For endpoints deployed from standard flow, you can input values in form editor or JSON editor to test the endpoint.
The `chat_input` was set during development of the chat flow. You can input the
## Consume the endpoint
-In the endpoint detail page, switch to the **Consume** tab. You can find the REST endpoint and key/token to consume your endpoint. There's also sample code for you to consume the endpoint in different languages.
+In the deployment detail page, switch to the **Consume** tab. You can find the REST endpoint and key/token to consume your endpoint. There's also sample code for you to consume the endpoint in different languages.
++
+You need to input values for `RequestBody` or `data` and `api_key`. For example, if your flow has two inputs, `location` and `url`, specify the data as follows.
+
+```json
+{
+  "location": "LA",
+  "url": "<the_url_to_be_classified>"
+}
+```
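For example, a minimal client call might look like the following sketch. The endpoint URL format, key placement, and `Bearer` authorization header are assumptions for illustration; use the exact values and sample code shown on the **Consume** tab.

```javascript
// Illustrative call to the deployed flow endpoint; replace the URL and key with
// the values from the Consume tab (the auth header shown there takes precedence).
async function scoreFlow() {
    const endpointUrl = 'https://<your-endpoint>.<region>.inference.ml.azure.com/score';
    const apiKey = '<your-api-key>';

    const response = await fetch(endpointUrl, {
        method: 'POST',
        headers: {
            'Content-Type': 'application/json',
            'Authorization': `Bearer ${apiKey}`
        },
        body: JSON.stringify({
            location: 'LA',
            url: '<the_url_to_be_classified>'
        })
    });

    return response.json();
}
```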
+++ ## Clean up resources
ai-studio Azure Open Ai Gpt 4V Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/prompt-flow-tools/azure-open-ai-gpt-4v-tool.md
The following are available input parameters:
| - | - | -- | -- | | connection | AzureOpenAI | The Azure OpenAI connection to be used in the tool. | Yes | | deployment\_name | string | The language model to use. | Yes |
-| prompt | string | Text prompt that the language model uses to generate its response. | Yes |
+| prompt | string | Text prompt that the language model uses to generate its response. The Jinja template for composing prompts in this tool follows a similar structure to the chat API in the LLM tool. To represent an image input within your prompt, you can use the syntax `![image]({{INPUT NAME}})`. Image input can be passed in the `user`, `system` and `assistant` messages. | Yes |
| max\_tokens | integer | Maximum number of tokens to generate in the response. Default is 512. | No | | temperature | float | Randomness of the generated text. Default is 1. | No | | stop | list | Stopping sequence for the generated text. Default is null. | No |
The following are available output parameters:
|-|| | string | The text of one response of conversation |
-## Next steps
--- [Learn more about how to create a flow](../flow-develop.md)
+## Next steps
+- Learn more about [how to process images in prompt flow](../flow-process-image.md).
+- [Learn more about how to create a flow](../flow-develop.md).
ai-studio Python Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/prompt-flow-tools/python-tool.md
If you're developing a python tool that requires calling external services with
Create a custom connection that stores your LLM API key or other required credentials.
-1. Go to Prompt flow in your workspace, then go to **connections** tab.
-1. Select **Create** and select **Custom**.
-1. In the right panel, you can define your connection name, and you can add multiple *Key-value pairs* to store your credentials and keys by selecting **Add key-value pairs**.
+1. Go to **AI project settings**, then select **New Connection**.
+1. Select **Custom** service. You can define your connection name, and you can add multiple *Key-value pairs* to store your credentials and keys by selecting **Add key-value pairs**.
> [!NOTE]
> Make sure at least one key-value pair is set as secret; otherwise, the connection won't be created successfully. You can set a key-value pair as secret by checking **is secret**, which encrypts the value when it's stored.
+ :::image type="content" source="../../media/prompt-flow/create-connection.png" alt-text="Screenshot that shows create connection in AI Studio." lightbox = "../../media/prompt-flow/create-connection.png":::
++
1. Add the following custom keys to the connection:
   - `azureml.flow.connection_type`: `Custom`
   - `azureml.flow.module`: `promptflow.connections`
aks Azure Linux Aks Partner Solutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-linux-aks-partner-solutions.md
Our third party partners featured in this article have introduction guides to he
| Networking | [Buoyant](#buoyant) <br> [Isovalent](#isovalent) <br> [Tetrate](#tetrate) |
| Observability | [Buoyant](#buoyant) <br> [Isovalent](#isovalent) <br> [Dynatrace](#dynatrace) |
| Security | [Buoyant](#buoyant) <br> [Isovalent](#isovalent) <br> [Kong](#kong) <br> [Tetrate](#tetrate) |
-| Storage | [Veeam](#veeam) |
+| Storage | [Catalogic](#catalogic) <br> [Veeam](#veeam) |
| Config Management | [Corent](#corent) |
| Migration | [Catalogic](#catalogic) |
Migrate workloads to Azure Linux Container Host on AKS with confidence.
| Solution | Categories |
|-||
-| CloudCasa | Migration |
+| CloudCasa by Catalogic | Migration <br> Storage |
-CloudCasa is a Kubernetes backup, recovery, and migration solution that is fully compatible with AKS, as well as all other major Kubernetes distributions and managed services.
+CloudCasa by Catalogic is a Kubernetes backup, recovery, and migration solution that is fully compatible with AKS, as well as all other major Kubernetes distributions and managed services.
<details> <summary> See more </summary><br>
CloudCasa can also centrally manage Azure Backup or Velero backup installations
</details>
-For more information, see [Catalogic Solutions](https://cloudcasa.io/) and [Catalogic on Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/catalogicsoftware1625626770507.cloudcasa-aks-app).
+For more information, see [CloudCasa by Catalogic Solutions](https://cloudcasa.io/) and [CloudCasa by Catalogic on Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/catalogicsoftware1625626770507.cloudcasa-aks-app).
## Next steps
aks Enable Fips Nodes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/enable-fips-nodes.md
description: Learn how to enable Federal Information Process Standard (FIPS) for
Previously updated : 06/28/2023 Last updated : 02/29/2024
The Federal Information Processing Standard (FIPS) 140-2 is a US government stan
* FIPS-enabled node pools require Kubernetes version 1.19 and greater.
* To update the underlying packages or modules used for FIPS, you must use [Node Image Upgrade][node-image-upgrade].
* Container images on the FIPS nodes haven't been assessed for FIPS compliance.
+ * Mounting of a CIFS share fails because FIPS disables some authentication modules. To work around this issue, see [Errors when mounting a file share on a FIPS-enabled node pool][errors-mount-file-share-fips].
+ > [!IMPORTANT] > The FIPS-enabled Linux image is a different image than the default Linux image used for Linux-based node pools. To enable FIPS on a node pool, you must create a new Linux-based node pool. You can't enable FIPS on existing node pools.
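For reference, creating a FIPS-enabled node pool with the Azure CLI typically uses the `--enable-fips-image` flag; the resource group, cluster, and node pool names in this sketch are placeholders:

```azurecli
# Add a new FIPS-enabled Linux node pool to an existing AKS cluster (placeholder names).
az aks nodepool add \
  --resource-group <resource-group-name> \
  --cluster-name <cluster-name> \
  --name fipsnp \
  --enable-fips-image
```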
To learn more about AKS security, see [Best practices for cluster security and u
[fips]: /azure/compliance/offerings/offering-fips-140-2
[install-azure-cli]: /cli/azure/install-azure-cli
[node-image-upgrade]: node-image-upgrade.md
+[errors-mount-file-share-fips]: /troubleshoot/azure/azure-kubernetes/fail-to-mount-azure-file-share#fipsnodepool
aks Monitor Aks Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/monitor-aks-reference.md
For a reference of all Azure Monitor Logs / Log Analytics tables, see the [Azure
## Activity log
-The following table lists a few example operations related to AKS that may be created in the [Activity log](../azure-monitor/essentials/activity-log.md). Use the Activity log to track information such as when a cluster is created or had its configuration change. You can view this information [in the portal](../azure-monitor/essentials/activity-log.md#view-the-activity-log) or by using [other methods](../azure-monitor/essentials/activity-log.md#other-methods-to-retrieve-activity-log-events). You can also use it to create an [Activity log alert]() to be proactively notified when an event occurs.
+The following table lists a few example operations related to AKS that may be created in the [Activity log](../azure-monitor/essentials/activity-log-insights.md). Use the Activity log to track information such as when a cluster is created or had its configuration change. You can view this information in the portal or by using [other methods](../azure-monitor/essentials/activity-log.md#other-methods-to-retrieve-activity-log-events). You can also use it to create an [Activity log alert]() to be proactively notified when an event occurs.
| Operation | Description |
|:|:|
app-service Get Resource Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/get-resource-events.md
# Get resource events in Azure App Service
-Azure App Service provides built-in tools to monitor the status and health of your resources. Resource events help you understand any changes that were made to your underlying web app resources and take action as necessary. Event examples include: scaling of instances, updates to application settings, restarting of the web app, and many more. In this article, you'll learn how to view [Azure Activity Logs](../azure-monitor/essentials/activity-log.md#view-the-activity-log) and enable [Event Grid](../event-grid/index.yml) to monitor App Service resource events.
+Azure App Service provides built-in tools to monitor the status and health of your resources. Resource events help you understand any changes that were made to your underlying web app resources and take action as necessary. Event examples include: scaling of instances, updates to application settings, restarting of the web app, and many more. In this article, you'll learn how to view [Azure Activity Logs](../azure-monitor/essentials/activity-log-insights.md#view-the-activity-log) and enable [Event Grid](../event-grid/index.yml) to monitor App Service resource events.
## View Azure Activity Logs

Azure Activity Logs contain resource events emitted by operations taken on the resources in your subscription. Both the user actions in the Azure portal and Azure Resource Manager templates contribute to the events captured by the Activity log.
Azure Activity Logs for App Service details such as:
Azure Activity Logs can be queried using the Azure portal, PowerShell, REST API, or CLI. You can send the logs to a storage account, Event Hub, and Log Analytics. You can also analyze them in Power BI or create alerts to stay updated on resource events.
-[View and retrieve Azure Activity log events.](../azure-monitor/essentials/activity-log.md#view-the-activity-log)
+[View and retrieve Azure Activity log events.](../azure-monitor/essentials/activity-log-insights.md#view-the-activity-log)
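For example, assuming the Azure CLI, you can list recent activity log events for the resource group that contains your app; the resource group name and time window here are placeholders:

```azurecli
# List activity log events for a resource group over the last 7 days (placeholder resource group).
az monitor activity-log list \
  --resource-group <resource-group-name> \
  --offset 7d \
  --output table
```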
## Ship Activity Logs to Event Grid
application-gateway Quick Create Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/quick-create-portal.md
description: In this quickstart, you learn how to use the Azure portal to create
Previously updated : 02/28/2024 Last updated : 02/29/2024
# Quickstart: Direct web traffic with Azure Application Gateway - Azure portal
-In this quickstart, you use the Azure portal to create an [Azure Application Gateway](overview.md) and test it to make sure it works correctly. You will assign listeners to ports, create rules, and add resources to a backend pool. For the sake of simplicity, a simple setup is used with a public frontend IP address, a basic listener to host a single site on the application gateway, a basic request routing rule, and two virtual machines (VMs) in the backend pool.
+In this quickstart, you use the Azure portal to create an [Azure Application Gateway](overview.md) and test it to make sure it works correctly. You assign listeners to ports, create rules, and add resources to a backend pool. For the sake of simplicity, a simple setup is used with a public frontend IP address, a basic listener to host a single site on the application gateway, a basic request routing rule, and two virtual machines (VMs) in the backend pool.
![Conceptual diagram of the quickstart setup.](./media/quick-create-portal/application-gateway-qs-resources.png)
You can also complete this quickstart using [Azure PowerShell](quick-create-powe
## Prerequisites
-An Azure account with an active subscription is required. If you don't already have an account, you can [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+An Azure account with an active subscription is required. If you don't already have an account, you can [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
Sign in to the [Azure portal](https://portal.azure.com) with your Azure account.

## Create an application gateway
-You'll create the application gateway using the tabs on the **Create application gateway** page.
+Create the application gateway using the tabs on the **Create application gateway** page.
1. On the Azure portal menu or from the **Home** page, select **Create a resource**.
2. Under **Categories**, select **Networking** and then select **Application Gateway** in the **Popular Azure services** list.
You'll create the application gateway using the tabs on the **Create application
- **Resource group**: Select **myResourceGroupAG** for the resource group. If it doesn't exist, select **Create new** to create it.
- **Application gateway name**: Enter *myAppGateway* for the name of the application gateway.
+ - Use the default selections for other settings.
- ![Create new application gateway: Basics](./media/application-gateway-create-gateway-portal/application-gateway-create-basics.png)
+ ![Screenshot of create new application gateway: basics.](./media/application-gateway-create-gateway-portal/application-gateway-create-basics.png)
2. For Azure to communicate between the resources that you create, a virtual network is needed. You can either create a new virtual network or use an existing one. In this example, you'll create a new virtual network at the same time that you create the application gateway. Application Gateway instances are created in separate subnets. You create two subnets in this example: One for the application gateway, and another for the backend servers.
You'll create the application gateway using the tabs on the **Create application
- **Name**: Enter *myVNet* for the name of the virtual network.
- - **Subnet name** (Application Gateway subnet): The **Subnets** grid will show a subnet named *default*. Change the name of this subnet to *myAGSubnet*.<br>The application gateway subnet can contain only application gateways. No other resources are allowed. The default IP address range provided is 10.0.0.0/24.
-
- - **Subnet name** (backend server subnet): In the second row of the **Subnets** grid, enter *myBackendSubnet* in the **Subnet name** column.
+ - **Subnet name** (Application Gateway subnet): The **Subnets** list shows a subnet named *default*. Change the name of this subnet to *myAGSubnet*.<br>The application gateway subnet can contain only application gateways. No other resources are allowed. The default IP address range provided is 10.0.0.0/24.
- - **Address range** (backend server subnet): In the second row of the **Subnets** Grid, enter an address range that doesn't overlap with the address range of *myAGSubnet*. For example, if the address range of *myAGSubnet* is 10.0.0.0/24, enter *10.0.1.0/24* for the address range of *myBackendSubnet*.
+ ![Screenshot of create new application gateway: virtual network.](./media/application-gateway-create-gateway-portal/application-gateway-create-vnet.png)
Select **OK** to close the **Create virtual network** window and save the virtual network settings.-
- ![Create new application gateway: virtual network](./media/application-gateway-create-gateway-portal/application-gateway-create-vnet.png)
-3. On the **Basics** tab, accept the default values for the other settings and then select **Next: Frontends**.
+3. Select **Next: Frontends**.
### Frontends tab
You'll create the application gateway using the tabs on the **Create application
2. Select **Add new** for the **Public IP address** and enter *myAGPublicIPAddress* for the public IP address name, and then select **OK**.
- ![Create new application gateway: frontends](./media/application-gateway-create-gateway-portal/application-gateway-create-frontends.png)
+ ![Screenshot of create new application gateway: frontends.](./media/application-gateway-create-gateway-portal/application-gateway-create-frontends.png)
> [!NOTE] > Application Gateway frontend now supports dual-stack IP addresses (Public Preview). You can now create up to four frontend IP addresses: Two IPv4 addresses (public and private) and two IPv6 addresses (public and private).
The backend pool is used to route requests to the backend servers that serve the
3. In the **Add a backend pool** window, select **Add** to save the backend pool configuration and return to the **Backends** tab.
- ![Create new application gateway: backends](./media/application-gateway-create-gateway-portal/application-gateway-create-backends.png)
+ ![Screenshot of create new application gateway: backends.](./media/application-gateway-create-gateway-portal/application-gateway-create-backends.png)
4. On the **Backends** tab, select **Next: Configuration**.
On the **Configuration** tab, you'll connect the frontend and backend pool you c
Accept the default values for the other settings on the **Listener** tab, then select the **Backend targets** tab to configure the rest of the routing rule.
- ![Create new application gateway: listener](./media/application-gateway-create-gateway-portal/application-gateway-create-rule-listener.png)
+ ![Screenshot of create new application gateway: listener.](./media/application-gateway-create-gateway-portal/application-gateway-create-rule-listener.png)
4. On the **Backend targets** tab, select **myBackendPool** for the **Backend target**.
-5. For the **Backend setting**, select **Add new** to add a new Backend setting. The Backend setting will determine the behavior of the routing rule. In the **Add Backend setting** window that opens, enter *myBackendSetting* for the **Backend settings name** and *80* for the **Backend port**. Accept the default values for the other settings in the **Add Backend setting** window, then select **Add** to return to the **Add a routing rule** window.
+5. For the **Backend setting**, select **Add new** to add a new Backend setting. The Backend setting determines the behavior of the routing rule. In the **Add Backend setting** window that opens, enter *myBackendSetting* for the **Backend settings name** and *80* for the **Backend port**. Accept the default values for the other settings in the **Add Backend setting** window, then select **Add** to return to the **Add a routing rule** window.
- ![Create new application gateway: HTTP setting](./media/application-gateway-create-gateway-portal/application-gateway-create-backendsetting.png)
+ ![Screenshot of create new application gateway: backend setting.](./media/application-gateway-create-gateway-portal/application-gateway-create-backendsetting.png)
6. On the **Add a routing rule** window, select **Add** to save the routing rule and return to the **Configuration** tab.
- ![Create new application gateway: routing rule](./media/application-gateway-create-gateway-portal/application-gateway-create-rule-backends.png)
+ ![Screenshot of new application gateway: completed configuration tab.](./media/application-gateway-create-gateway-portal/application-gateway-create-rule-backends.png)
7. Select **Next: Tags** and then **Next: Review + create**.
On the **Configuration** tab, you'll connect the frontend and backend pool you c
Review the settings on the **Review + create** tab, and then select **Create** to create the virtual network, the public IP address, and the application gateway. It can take several minutes for Azure to create the application gateway. Wait until the deployment finishes successfully before moving on to the next section.
+ ![Screenshot of new application gateway: ready to create.](./media/application-gateway-create-gateway-portal/application-gateway-create.png)
+
## Add backend targets

In this example, you'll use virtual machines as the target backend. You can either use existing virtual machines or create new ones. You'll create two virtual machines as backend servers for the application gateway. To do this, you'll:
-1. Create two new VMs, *myVM* and *myVM2*, to be used as backend servers.
-2. Install IIS on the virtual machines to verify that the application gateway was created successfully.
-3. Add the backend servers to the backend pool.
+1. Add a backend subnet.
+2. Create two new VMs, *myVM* and *myVM2*, to be used as backend servers.
+3. Install IIS on the virtual machines to verify that the application gateway was created successfully.
+4. Add the backend servers to the backend pool.
+
+### Add a backend subnet
+
+The subnet *myAGSubnet* can only contain the application gateway, so we need another subnet to add backend targets.
+
+To create a backend subnet:
+
+1. Select the **myVNet** resource. You can select it under **Deployment details** after deployment of the application gateway is complete, or you can search for Virtual networks and select it from the list.
+2. Under **Settings**, select **Subnets** and then select **+ Subnet** to begin adding a new subnet.
+
+ - **Name**: Enter *myBackendSubnet*.
+ - **Subnet address range**: Enter an address range that doesn't overlap with the address range of *myAGSubnet*. For example, if the address range of *myAGSubnet* is 10.0.0.0/24, enter *10.0.1.0/24* for the address range of *myBackendSubnet*. This address range might be already entered by default.
+
+3. Use the default settings for other items and then select **Save**.
+
+ ![Screenshot of new application gateway subnets.](./media/application-gateway-create-gateway-portal/application-gateway-subnets.png)
### Create a virtual machine
In this example, you install IIS on the virtual machines to verify Azure created
Select **Cloud Shell** from the top navigation bar of the Azure portal and then select **PowerShell** from the drop-down list.
- ![Install custom extension](./media/application-gateway-create-gateway-portal/application-gateway-extension.png)
+ ![Screenshot of install custom extension.](./media/application-gateway-create-gateway-portal/application-gateway-extension.png)
2. Run the following command to install IIS on the virtual machine. Change the *Location* parameter if necessary:
In this example, you install IIS on the virtual machines to verify Azure created
-Location EastUS ```
-3. Create a second virtual machine and install IIS by using the steps that you previously completed. Use *myVM2* for the virtual machine name and for the **VMName** setting of the **Set-AzVMExtension** cmdlet.
+3. Create a second virtual machine and install IIS by using the steps that you previously completed. Use *myVM2* for the virtual machine name and for the `VMName` setting of the **Set-AzVMExtension** cmdlet.
### Add backend servers to backend pool

1. On the Azure portal menu, select **All resources** or search for and select *All resources*. Then select **myAppGateway**.
2. Select **Backend pools** from the left menu.
3. Select **myBackendPool**.
4. Under **Backend targets**, **Target type**, select **Virtual machine** from the drop-down list.
5. Under **Target**, select the **myVM** and **myVM2** virtual machines and their associated network interfaces from the drop-down lists.

   > [!div class="mx-imgBorder"]
   > ![Add backend servers](./media/application-gateway-create-gateway-portal/application-gateway-backend.png)

6. Select **Save**.
7. Wait for the deployment to complete before proceeding to the next step.

## Test the application gateway
Use IIS to test the application gateway:
2. Copy the public IP address, and then paste it into the address bar of your browser to browse that IP address.
3. Check the response. A valid response verifies that the application gateway was successfully created and can successfully connect with the backend.
- ![Test application gateway](./media/application-gateway-create-gateway-portal/application-gateway-iistest.png)
+ ![Screenshot of the application gateway test.](./media/application-gateway-create-gateway-portal/application-gateway-iistest.png)
Refresh the browser multiple times and you should see connections to both myVM and myVM2.
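If you prefer the command line, a quick sketch like the following (with the frontend public IP as a placeholder) sends several requests so you can watch the responses alternate between the two backend VMs:

```bash
# Send a few requests to the application gateway frontend (placeholder IP) to see both backends respond.
for i in 1 2 3 4 5; do
  curl -s http://<application-gateway-public-ip>/
  echo
done
```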
To delete the resource group:
1. On the Azure portal menu, select **Resource groups** or search for and select *Resource groups*.
2. On the **Resource groups** page, search for **myResourceGroupAG** in the list, then select it.
3. On the **Resource group page**, select **Delete resource group**.
-4. Enter *myResourceGroupAG* under **TYPE THE RESOURCE GROUP NAME** and then select **Delete**
+4. Enter *myResourceGroupAG* under **TYPE THE RESOURCE GROUP NAME** and then select **Delete**.
## Next steps
azure-arc Conceptual Azure Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/conceptual-azure-rbac.md
Title: "Azure RBAC on Azure Arc-enabled Kubernetes (preview)" Previously updated : 05/04/2023 Last updated : 02/28/2024 description: "This article provides a conceptual overview of the Azure RBAC capability on Azure Arc-enabled Kubernetes." # Azure RBAC on Azure Arc-enabled Kubernetes clusters (preview)
-Kubernetes [ClusterRoleBinding and RoleBinding](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#rolebinding-and-clusterrolebinding) object types help to define authorization in Kubernetes natively. With Azure RBAC, you can use Microsoft Entra ID and role assignments in Azure to control authorization checks on the cluster. This allows the benefits of Azure role assignments, such as activity logs showing all Azure RBAC changes to an Azure resource, to be used with your Azure Arc-enabled Kubernetes cluster.
+Kubernetes [ClusterRoleBinding and RoleBinding](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#rolebinding-and-clusterrolebinding) object types help to define authorization in Kubernetes natively. With Azure role-based access control (Azure RBAC), you can use Microsoft Entra ID and role assignments in Azure to control authorization checks on the cluster. This allows the benefits of Azure role assignments, such as activity logs showing all Azure RBAC changes to an Azure resource, to be used with your Azure Arc-enabled Kubernetes cluster.
[!INCLUDE [preview features note](./includes/preview/preview-callout.md)] ## Architecture
-[ ![Diagram showing Azure RBAC architecture.](./media/conceptual-azure-rbac.png) ](./media/conceptual-azure-rbac.png#lightbox)
In order to route all authorization access checks to the authorization service in Azure, a webhook server ([guard](https://github.com/appscode/guard)) is deployed on the cluster.
azure-arc Conceptual Cluster Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/conceptual-cluster-connect.md
Title: "Cluster connect access to Azure Arc-enabled Kubernetes clusters" Previously updated : 07/22/2022 Last updated : 02/28/2024 description: "Cluster connect allows developers to access their Azure Arc-enabled Kubernetes clusters from anywhere for interactive development and debugging."
Cluster connect allows developers to access their clusters from anywhere for int
## Architecture
-[ ![Cluster connect architecture](./media/conceptual-cluster-connect.png) ](./media/conceptual-cluster-connect.png#lightbox)
-On the cluster side, a reverse proxy agent called `clusterconnect-agent` deployed as part of the agent Helm chart, makes outbound calls to the Azure Arc service to establish the session.
+On the cluster side, a reverse proxy agent called `clusterconnect-agent`, deployed as part of the agent Helm chart, makes outbound calls to the Azure Arc service to establish the session.
When the user calls `az connectedk8s proxy`:
azure-arc Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/security-overview.md
Azure Arc resource bridge stores resource information in Azure Cosmos DB. As des
## Security audit logs
-The [activity log](../../azure-monitor/essentials/activity-log.md) is an Azure platform log that provides insight into subscription-level events. This includes tracking when the Azure Arc resource bridge is modified, deleted, or added. You can [view the activity log](../../azure-monitor/essentials/activity-log.md#view-the-activity-log) in the Azure portal or retrieve entries with PowerShell and Azure CLI. By default, activity log events are [retained for 90 days](../../azure-monitor/essentials/activity-log.md#retention-period) and then deleted.
+The [activity log](../../azure-monitor/essentials/activity-log-insights.md) is an Azure platform log that provides insight into subscription-level events. This includes tracking when the Azure Arc resource bridge is modified, deleted, or added. You can [view the activity log](../../azure-monitor/essentials/activity-log-insights.md#view-the-activity-log) in the Azure portal or retrieve entries with PowerShell and Azure CLI. By default, activity log events are [retained for 90 days](../../azure-monitor/essentials/activity-log-insights.md#retention-period) and then deleted.
## Next steps
azure-arc Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/upgrade.md
This article describes how Arc resource bridge is upgraded, and the two ways upg
## Private cloud providers

Currently, private cloud providers differ in how they perform Arc resource bridge upgrades. Review the following information to see how to upgrade your Arc resource bridge for a specific provider.
-For **Arc-enabled VMware vSphere**, manual upgrade is available, but appliances on version 1.0.15 and higher automatically receive cloud-managed upgrade as the default experience. Appliances that are earlier than version 1.0.15 must be manually upgraded. A manual upgrade only upgrades the appliance to the next version, not the latest version. If you have multiple versions to upgrade, another option is to review the steps for [performing a recovery](/azure/azure-arc/vmware-vsphere/recover-from-resource-bridge-deletion), then delete the appliance VM and perform the recovery steps. This deploys a new Arc resource bridge using the latest version and reconnects pre-existing Azure resources.
+For **Arc-enabled VMware vSphere**, manual upgrade and cloud upgrade are available. Appliances on version 1.0.15 and higher are automatically opted-in to cloud-managed upgrade. With cloud-managed upgrade, Microsoft may attempt to upgrade your Arc resource bridge at any time if it is on an appliance version that will soon be out of support. In order for either upgrade option to work, the upgrade prerequisites must be met. While Microsoft offers cloud-managed upgrade, you're still responsible for checking that your resource bridge is healthy, online, in a "Running" status, and within the supported n-3 versions. Disruptions could cause cloud-managed upgrades to fail. Any appliances that are earlier than version 1.0.15 must be manually upgraded. A manual upgrade only upgrades the appliance to the next version, not the latest version. If you have multiple versions to upgrade, another option is to review the steps for [performing a recovery](/azure/azure-arc/vmware-vsphere/recover-from-resource-bridge-deletion), then delete the appliance VM and perform the recovery steps. This deploys a new Arc resource bridge using the latest version and reconnects pre-existing Azure resources.
-For **Azure Arc VM management (preview) on Azure Stack HCI**, to use appliance version 1.0.15 or higher, you must be on Azure Stack HCI, version 23H2 (preview). In version 23H2 (preview), the LCM tool manages upgrades across all HCI, Arc resource bridge, and extension components as a "validated recipe" package. Attempting to upgrade Arc resource bridge independent of other HCI environment components by using the `az arcappliance upgrade` command may cause problems in your environment that could result in a disaster recovery scenario. For more information, visit the [Arc VM management FAQ page](/azure-stack/hci/manage/azure-arc-vms-faq). Customers on Azure Stack HCI, version 22H2 will receive limited support.
+For **Azure Arc VM management (preview) on Azure Stack HCI**, appliance version 1.0.15 or higher is only available on Azure Stack HCI build 23H2. In HCI 23H2, the LCM tool manages upgrades across all HCI, Arc resource bridge, and extension components as a "validated recipe" package. Any preview version of Arc resource bridge must be removed prior to updating from 22H2 to 23H2. Attempting to upgrade Arc resource bridge independent of other HCI environment components may cause problems in your environment that could result in a disaster recovery scenario. For more information, visit the [Arc VM management FAQ page](/azure-stack/hci/manage/azure-arc-vms-faq).
For **Arc-enabled System Center Virtual Machine Manager (SCVMM)**, the manual upgrade feature is available for appliance version 1.0.14 and higher. Appliances below version 1.0.14 need to perform the recovery option to get to version 1.0.15 or higher. Review the steps for [performing the recovery operation](/azure/azure-arc/system-center-virtual-machine-manager/disaster-recovery), then delete the appliance VM from SCVMM and perform the recovery steps. This deploys a new resource bridge and reconnects pre-existing Azure resources.
For **Arc-enabled System Center Virtual Machine Manager (SCVMM)**, the manual up
Before upgrading an Arc resource bridge, the following prerequisites must be met: -- The appliance VM must be online, its status is "Running" and the [credentials in the appliance VM](maintenance.md#update-credentials-in-the-appliance-vm) must be valid.
+- The appliance VM must be online and healthy with a status of "Running", and the [credentials in the appliance VM](maintenance.md#update-credentials-in-the-appliance-vm) must be valid.
- There must be sufficient space on the management machine (~3.5 GB) and appliance VM (35 GB) to download required images.
There are two ways to upgrade Arc resource bridge: cloud-managed upgrades manage
## Cloud-managed upgrade
-As a Microsoft-managed product, Arc resource bridges on a supported [private cloud provider](#private-cloud-providers) with an appliance version 1.0.15 or higher are automatically opted into cloud-manaaged upgrade. With cloud-managed upgrade, Microsoft manages the upgrade of your Arc resource bridge to be within supported versions, as long as the prerequisites are met. If prerequisites are not met, then cloud managed upgrade will fail.
+Arc resource bridges on a supported [private cloud provider](#private-cloud-providers) with an appliance version 1.0.15 or higher are automatically opted into cloud-managed upgrade. With cloud-managed upgrade, Microsoft may attempt to upgrade your Arc resource bridge at any time if it is on an appliance version that will soon be out of support. The upgrade prerequisites must be met for cloud-managed upgrade to work. While Microsoft offers cloud-managed upgrade, you're still responsible for checking that your resource bridge is healthy, online, in a "Running" status, and within the supported n-3 versions. Disruptions could cause cloud-managed upgrades to fail.
-While Microsoft manages the upgrade of your Arc resource bridge, it's still important for you to ensure that your resource bridge is healthy, online, in a `Running` status, and on a supported version. To do so, run the `az arcappliance show` command from your management machine, or check the Azure resource of your Arc resource bridge. If your appliance VM isn't in a healthy state, cloud-managed upgrade might fail, and your version may become unsupported.
+To check your resource bridge status and the appliance version, run the `az arcappliance show` command from your management machine, or check the Azure resource of your Arc resource bridge. If your appliance VM isn't in a healthy, Running state, cloud-managed upgrade might fail.
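For example, a status check from the management machine might look like the following sketch, where the resource group and resource bridge names are placeholders:

```azurecli
# Check the Arc resource bridge status and appliance version (placeholder names).
az arcappliance show \
  --resource-group <resource-group-name> \
  --name <resource-bridge-name>
```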
Cloud-managed upgrades are handled through Azure. A notification is pushed to Azure to reflect the state of the appliance VM as it upgrades. As the resource bridge progresses through the upgrade, its status might switch back and forth between different upgrade steps. Upgrade is complete when the appliance VM `status` is `Running` and `provisioningState` is `Succeeded`.
To upgrade a resource bridge on Azure Stack HCI, please transition to 23H2 and u
## Version releases
-The Arc resource bridge version is tied to the versions of underlying components used in the appliance image, such as the Kubernetes version. When there's a change in the appliance image, the Arc resource bridge version gets incremented. This generally happens when a new `az arcappliance` CLI extension version is released. A new extension is typically released on a monthly cadence at the end of the month. For detailed release info, see the [Arc resource bridge release notes](https://github.com/Azure/ArcResourceBridge/releases) on GitHub.
+The Arc resource bridge version is tied to the versions of underlying components used in the appliance image, such as the Kubernetes version. When there's a change in the appliance image, the Arc resource bridge version gets incremented. This generally happens when a new `az arcappliance` CLI extension version is released. A new extension is typically released on a monthly cadence at the end of the month or early in the month. For detailed release info, see the [Arc resource bridge release notes](https://github.com/Azure/ArcResourceBridge/releases) on GitHub.
## Supported versions
azure-arc Agent Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/agent-release-notes.md
This page is updated monthly, so revisit it regularly. If you're looking for ite
## Version 1.38 - February 2024
-Download for [Windows](https://download.microsoft.com/download/0/9/8/0981cd23-37aa-4cb3-8965-368586ab9fd8/AzureConnectedMachineAgent.msi) or [Linux](manage-agent.md#installing-a-specific-version-of-the-agent)
+Download for [Windows](https://download.microsoft.com/download/4/8/f/48f69eb1-f7ce-499f-b9d3-5087f330ae79/AzureConnectedMachineAgent.msi) or [Linux](manage-agent.md#installing-a-specific-version-of-the-agent)
### Known issues
azure-arc Switch To The New Version Scvmm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/system-center-virtual-machine-manager/switch-to-the-new-version-scvmm.md
Previously updated : 11/29/2023 Last updated : 02/29/2024 keywords: "VMM, Arc, Azure" #Customer intent: As a VI admin, I want to switch to the new version of Arc-enabled SCVMM and leverage the associated capabilities
keywords: "VMM, Arc, Azure"
On September 22, 2023, we rolled out major changes to **Azure Arc-enabled System Center Virtual Machine Manager**. By switching to the new version, you can use all the Azure management services that are available for Arc-enabled Servers.
+If you onboarded to Azure Arc-enabled SCVMM before **September 22, 2023**, and your VMs were Azure-enabled, you'll no longer be able to perform any operations on the VMs, except the **Remove from Azure** operation.
+
+To continue using these machines, follow these instructions to switch to the new version.
+
>[!Note]
>If you're new to Arc-enabled SCVMM, you'll be able to leverage the new capabilities by default. To get started, see [Quick Start for Azure Arc-enabled System Center Virtual Machine Manager](quickstart-connect-system-center-virtual-machine-manager-to-arc.md).

## Switch to the new version (Existing customer)
-If you've onboarded to Arc-enabled SCVMM before September 22, 2023, for VMs that are Azure-enabled, follow these steps to switch to the new version:
+If you onboarded to Arc-enabled SCVMM before September 22, 2023, for VMs that are Azure-enabled, follow these steps to switch to the new version:
>[!Note] > If you had enabled guest management on any of the VMs, [disconnect](/azure/azure-arc/servers/manage-agent?tabs=windows#step-2-disconnect-the-server-from-azure-arc) and [uninstall agents](/azure/azure-arc/servers/manage-agent?tabs=windows#step-3a-uninstall-the-windows-agent).
azure-cache-for-redis Cache How To Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-encryption.md
Previously updated : 03/28/2023 Last updated : 02/28/2024
Data in a Redis server is stored in memory by default. This data isn't encrypted
Azure Cache for Redis offers platform-managed keys (PMKs), also known as Microsoft-managed keys (MMKs), by default to encrypt data on-disk in all tiers. The Enterprise and Enterprise Flash tiers of Azure Cache for Redis additionally offer the ability to encrypt the OS and data persistence disks with a customer-managed key (CMK). Customer managed keys can be used to wrap the MMKs to control access to these keys. This makes the CMK a _key encryption key_ or KEK. For more information, see [key management in Azure](/azure/security/fundamentals/key-management).

## Scope of availability for CMK disk encryption

| Tier | Basic, Standard, Premium | Enterprise, Enterprise Flash |
Azure Cache for Redis offers platform-managed keys (PMKs), also know as Microsof
|Customer managed keys (CMK) | No | Yes |

> [!WARNING]
-> By default, all Azure Cache for Redis tiers use Microsoft managed keys to encrypt disks mounted to cache instances. However, in the Basic and Standard tiers, the C0 and C1 SKUs do not support any disk encryption.
+> By default, all Azure Cache for Redis tiers use Microsoft managed keys to encrypt disks mounted to cache instances. However, in the Basic and Standard tiers, the C0 and C1 SKUs do not support any disk encryption.
> > [!IMPORTANT]
Azure Cache for Redis offers platform-managed keys (PMKs), also know as Microsof
In the **Enterprise** tier, disk encryption is used to encrypt the persistence disk, temporary files, and the OS disk: -- persistence disk: holds persisted RDB or AOF files as part of [data persistence](cache-how-to-premium-persistence.md)
+- persistence disk: holds persisted RDB or AOF files as part of [data persistence](cache-how-to-premium-persistence.md)
- temporary files used in _export_: temporary data used during export is encrypted. When you [export](cache-how-to-import-export-data.md) data, the encryption of the final exported data is controlled by settings in the storage account.
-- the OS disk
+- the OS disk
MMK is used to encrypt these disks by default, but CMK can also be used.
-In the **Enterprise Flash** tier, keys and values are also partially stored on-disk using nonvolatile memory express (NVMe) flash storage. However, this disk isn't the same as the one used for persisted data. Instead, it's ephemeral, and data isn't persisted after the cache is stopped, deallocated, or rebooted. only MMK is only supported on this disk because this data is transient and ephemeral.
+In the **Enterprise Flash** tier, keys and values are also partially stored on-disk using nonvolatile memory express (NVMe) flash storage. However, this disk isn't the same as the one used for persisted data. Instead, it's ephemeral, and data isn't persisted after the cache is stopped, deallocated, or rebooted. MMK is only supported on this disk because this data is transient and ephemeral.
| Data stored |Disk |Encryption Options |
|-||-|
In the **Basic, Standard, and Premium** tiers, the OS disk is encrypted by defau
- Disk encryption isn't available in the Basic and Standard tiers for the C0 or C1 SKUs
- Only user assigned managed identity is supported to connect to Azure Key Vault. System assigned managed identity is not supported.
-- Changing between MMK and CMK on an existing cache instance triggers a long-running maintenance operation. We don't recommend this for production use because a service disruption occurs.
+- Changing between MMK and CMK on an existing cache instance triggers a long-running maintenance operation. We don't recommend this for production use because a service disruption occurs.
### Azure Key Vault prerequisites and limitations
In the **Basic, Standard, and Premium** tiers, the OS disk is encrypted by defau
1. Sign in to the [Azure portal](https://portal.azure.com) and start the [Create a Redis Enterprise cache](quickstart-create-redis-enterprise.md) quickstart guide.
-1. On the **Advanced** page, go to the section titled **Customer-managed key encryption at rest** and enable the **Use a customer-managed key** option.
+1. On the **Advanced** page, go to the section titled **Customer-managed key encryption at rest** and enable the **Use a customer-managed key** option.
:::image type="content" source="media/cache-how-to-encryption/cache-use-key-encryption.png" alt-text="Screenshot of the advanced settings with customer-managed key encryption checked and in a red box.":::
In the **Basic, Standard, and Premium** tiers, the OS disk is encrypted by defau
:::image type="content" source="media/cache-how-to-encryption/cache-managed-identity-user-assigned.png" alt-text="Screenshot showing user managed identity in the working pane.":::
-1. Select your chosen user assigned managed identity, and then choose the key input method to use.
+1. Select your chosen user assigned managed identity, and then choose the key input method to use.
-1. If using the **Select Azure key vault and key** input method, choose the Key Vault instance that holds your customer managed key. This instance must be in the same region as your cache.
+1. If using the **Select Azure key vault and key** input method, choose the Key Vault instance that holds your customer managed key. This instance must be in the same region as your cache.
> [!NOTE] > For instructions on how to set up an Azure Key Vault instance, see the [Azure Key Vault quickstart guide](../key-vault/secrets/quick-create-portal.md). You can also select the _Create a key vault_ link beneath the Key Vault selection to create a new Key Vault instance. Remember that both purge protection and soft delete must be enabled in your Key Vault instance.
In the **Basic, Standard, and Premium** tiers, the OS disk is encrypted by defau
### Add CMK encryption to an existing Enterprise cache
-1. Go to the **Encryption** in the Resource menu of your cache instance. If CMK is already set up, you see the key information.
+1. Go to the **Encryption** in the Resource menu of your cache instance. If CMK is already set up, you see the key information.
-1. If you haven't set up or if you want to change CMK settings, select **Change encryption settings**
+1. If you haven't set up or if you want to change CMK settings, select **Change encryption settings**
:::image type="content" source="media/cache-how-to-encryption/cache-encryption-existing-use.png" alt-text="Screenshot encryption selected in the Resource menu for an Enterprise tier cache.":::
-1. Select **Use a customer-managed key** to see your configuration options.
+1. Select **Use a customer-managed key** to see your configuration options.
1. Select **Add** to assign a [user assigned managed identity](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md) to the resource. This managed identity is used to connect to the [Azure Key Vault](../key-vault/general/overview.md) instance that holds the customer managed key.
-1. Select your chosen user assigned managed identity, and then choose which key input method to use.
+1. Select your chosen user assigned managed identity, and then choose which key input method to use.
-1. If using the **Select Azure key vault and key** input method, choose the Key Vault instance that holds your customer managed key. This instance must be in the same region as your cache.
+1. If using the **Select Azure key vault and key** input method, choose the Key Vault instance that holds your customer managed key. This instance must be in the same region as your cache.
> [!NOTE] > For instructions on how to set up an Azure Key Vault instance, see the [Azure Key Vault quickstart guide](../key-vault/secrets/quick-create-portal.md). You can also select the _Create a key vault_ link beneath the Key Vault selection to create a new Key Vault instance. 1. Choose the specific key using the **Customer-managed key (RSA)** drop-down. If there are multiple versions of the key to choose from, use the **Version** drop-down. :::image type="content" source="media/cache-how-to-encryption/cache-encryption-existing-key.png" alt-text="Screenshot showing the select identity and key fields completed for Encryption.":::
-
+ 1. If using the **URI** input method, enter the Key Identifier URI for your chosen key from Azure Key Vault. 1. Select **Save**
azure-cache-for-redis Cache How To Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-monitor.md
Title: Monitor Azure Cache for Redis
-description: Learn how to monitor the health and performance your Azure Cache for Redis instances
+description: Learn how to monitor the health and performance of your Azure Cache for Redis instances.
Use Azure Monitor to:
- pin metrics charts to the dashboard - customize the date and time range of monitoring charts - add and remove metrics from the charts-- and set alerts when certain conditions are met
+- set alerts when certain conditions are met
Metrics for Azure Cache for Redis instances are collected using the Redis [`INFO`](https://redis.io/commands/info) command. Metrics are collected approximately two times per minute and automatically stored for 30 days so they can be displayed in the metrics charts and evaluated by alert rules.
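If you want to inspect the raw server statistics yourself, you can run the same `INFO` command against your cache with `redis-cli`; in this sketch the host name and access key are placeholders, and the TLS port 6380 plus a redis-cli version with `--tls` support (6.0 or later) are assumed:

```bash
# Dump the raw INFO output from the cache over TLS (placeholder host name and access key).
redis-cli -h <cache-name>.redis.cache.windows.net -p 6380 -a <access-key> --tls INFO
```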
For scenarios where you don't need the full flexibility of Azure Monitor for Azu
## Use Insights for predefined charts
-The **Monitoring** section in the Resource menu contains **Insights**. When you select **Insights**, you see groupings of three types of charts: **Overview**, **Performance** and **Operations**.
+The **Monitoring** section in the Resource menu contains **Insights**. When you select **Insights**, you see groupings of three types of charts: **Overview**, **Performance**, and **Operations**.
:::image type="content" source="./media/cache-how-to-monitor/cache-monitoring-part.png" alt-text="Screenshot showing Monitoring Insights selected in the Resource menu.":::
Configure a storage account to use with to store your metrics. The storage accou
1. Under the table heading **metric**, check the box beside the line items you want to store, such as **AllMetrics**. Specify a **Retention (days)** policy. The maximum days retention you can specify is **365 days**. However, if you want to keep the metrics data forever, set **Retention (days)** to **0**.
1. Select **Save**.
+
   :::image type="content" source="./media/cache-how-to-monitor/cache-diagnostics.png" alt-text="Redis diagnostics":::

>[!NOTE]
In the Resource menu on the left, select **Metrics** under **Monitoring**. Here,
When you're seeing the aggregation type: - **Count** show 2, it indicates the metric received 2 data points for your time granularity (1 minute).-- **Max** shows the maximum value of a data point in the time granularity,-- **Min** shows the minimum value of a data point in the time granularity,
+- **Max** shows the maximum value of a data point in the time granularity.
+- **Min** shows the minimum value of a data point in the time granularity.
- **Average** shows the average value of all data points in the time granularity.
- **Sum** shows the sum of all data points in the time granularity and might be misleading depending on the specific metric.
In contrast, for clustered caches, we recommend using the metrics with the suffi
- Depicts the worst-case (99th percentile) latency of server-side commands. Measured by issuing `PING` commands from the load balancer to the Redis server and tracking the time to respond.
- Useful for tracking the health of your Redis instance. Latency increases if the cache is under heavy load or if there are long running commands that delay the execution of the `PING` command.
- This metric is only available in Standard and Premium tier caches.
- - This metric is not available for caches that are affected by Cloud Service retirement. See more information [here](cache-faq.yml#caches-with-a-dependency-on-cloud-services--classic)
+ - This metric isn't available for caches that are affected by Cloud Service retirement. See more information [here](cache-faq.yml#caches-with-a-dependency-on-cloud-services--classic)
- Cache Latency (preview) - The latency of the cache calculated using the internode latency of the cache. This metric is measured in microseconds, and has three dimensions: `Avg`, `Min`, and `Max`. The dimensions represent the average, minimum, and maximum latency of the cache during the specified reporting interval. - Cache Misses
In contrast, for clustered caches, we recommend using the metrics with the suffi
- The CPU utilization of the Azure Cache for Redis server as a percentage during the specified reporting interval. This value maps to the operating system `\Processor(_Total)\% Processor Time` performance counter. Note: This metric can be noisy due to low priority background security processes running on the node, so we recommend monitoring Server Load metric to track load on a Redis server. - Errors - Specific failures and performance issues that the cache could be experiencing during a specified reporting interval. This metric has eight dimensions representing different error types, but could add more in the future. The error types represented now are as follows:
- - **Failover** ΓÇô when a cache fails over (subordinate promotes to primary)
- - **Dataloss** ΓÇô when there's data loss on the cache
- - **UnresponsiveClients** ΓÇô when the clients aren't reading data from the server fast enough, and specifically, when the number of bytes in the Redis server output buffer for a client goes over 1,000,000 bytes
- - **AOF** ΓÇô when there's an issue related to AOF persistence
- - **RDB** ΓÇô when there's an issue related to RDB persistence
- - **Import** ΓÇô when there's an issue related to Import RDB
- - **Export** ΓÇô when there's an issue related to Export RDB
- - **AADAuthenticationFailure** (preview) - when there's an authentication failure using Microsoft Entra access token
- - **AADTokenExpired** (preview) - when a Microsoft Entra access token used for authentication isn't renewed and it expires.
+ - **Failover** ΓÇô when a cache fails over (subordinate promotes to primary).
+ - **Dataloss** ΓÇô when there's data loss on the cache.
+ - **UnresponsiveClients** ΓÇô when the clients aren't reading data from the server fast enough, and specifically, when the number of bytes in the Redis server output buffer for a client goes over 1,000,000 bytes.
+ - **AOF** ΓÇô when there's an issue related to AOF persistence.
+ - **RDB** ΓÇô when there's an issue related to RDB persistence.
+ - **Import** ΓÇô when there's an issue related to Import RDB.
+ - **Export** ΓÇô when there's an issue related to Export RDB.
+ - **AADAuthenticationFailure** (preview) - when there's an authentication failure using Microsoft Entra access token. Not recommended. Use **MicrosoftEntraAuthenticationFailure** instead.
+ - **AADTokenExpired** (preview) - when a Microsoft Entra access token used for authentication isn't renewed and it expires. Not recommended. Use **MicrosoftEntraTokenExpired** instead.
+ - **MicrosoftEntraAuthenticationFailure** (preview) - when there's an authentication failure using Microsoft Entra access token.
+ - **MicrosoftEntraTokenExpired** (preview) - when a Microsoft Entra access token used for authentication isn't renewed and it expires.
+ > [!NOTE]
-> Metrics for errors aren't available when using the Enterprise Tiers.
+> Metrics for errors aren't available when using the Enterprise tiers.
- Evicted Keys - The number of items evicted from the cache during the specified reporting interval because of the `maxmemory` limit.
In contrast, for clustered caches, we recommend using the metrics with the suffi
- If the geo-replication link is unhealthy for over an hour, [file a support request](../azure-portal/supportability/how-to-create-azure-support-request.md). - Gets
- - Sum of the number of get commands run on the cache during the specified reporting interval. This is a combined total of the increases in the `cmdstat` counts reported by the Redis INFO all command for all commands in the _get_ family, including `GET`, `HGET` , `MGET`, and others. This value can differ from the total number of hits and misses because some individual commands access multiple keys. For example: `MGET key1 key2 key3` only increments the number of gets by one but increments the combined number of hits and misses by three.
+ - Sum of the number of get commands run on the cache during the specified reporting interval. The sum is a combined total of the increases in the `cmdstat` counts reported by the Redis INFO all command for all commands in the _get_ family, including `GET`, `HGET` , `MGET`, and others. This value can differ from the total number of hits and misses because some individual commands access multiple keys. For example: `MGET key1 key2 key3` only increments the number of gets by one but increments the combined number of hits and misses by three.
- Operations per Second
- - The total number of commands processed per second by the cache server during the specified reporting interval. This value maps to "instantaneous_ops_per_sec" from the Redis INFO command.
+ - The total number of commands processed per second by the cache server during the specified reporting interval. This value maps to "instantaneous_ops_per_sec" from the Redis INFO command.
- Server Load - The percentage of CPU cycles in which the Redis server is busy processing and _not waiting idle_ for messages. If this counter reaches 100, the Redis server has hit a performance ceiling, and the CPU can't process work any faster. You can expect a large latency effect. If you're seeing a high Redis Server Load, such as 100, because you're sending many expensive commands to the server, then you might see timeout exceptions in the client. In this case, you should consider scaling up, scaling out to a Premium cluster, or partitioning your data into multiple caches. When _Server Load_ is only moderately high, such as 50 to 80 percent, then average latency usually remains low, and timeout exceptions could have other causes than high server latency. - The _Server Load_ metric is sensitive to other processes on the machine using the existing CPU cycles that reduce the Redis server's idle time. For example, on the _C1_ tier, background tasks such as virus scanning cause _Server Load_ to spike higher for no obvious reason. We recommended that you pay attention to other metrics such as operations, latency, and CPU, in addition to _Server Load_.
In contrast, for clustered caches, we recommend using the metrics with the suffi
> The _Server Load_ metric can present incorrect data for Enterprise and Enterprise Flash tier caches. Sometimes _Server Load_ is represented as being over 100. We are investigating this issue. We recommend using the CPU metric instead in the meantime. - Sets
- - Sum of the number of set commands run on the cache during the specified reporting interval. This is a combined total of the increases in the `cmdstat` counts reported by the Redis INFO all command for all commands in the _set_ family, including `SET`, `HSET` , `MSET`, and others.
+ - Sum of the number of set commands run on the cache during the specified reporting interval. This sum is a combined total of the increases in the `cmdstat` counts reported by the Redis INFO all command for all commands in the _set_ family, including `SET`, `HSET`, `MSET`, and others.
- Total Keys - The maximum number of keys in the cache during the past reporting time period. This number maps to `keyspace` from the Redis INFO command. Because of a limitation in the underlying metrics system for caches with clustering enabled, Total Keys returns the maximum number of keys of the shard that had the maximum number of keys during the reporting interval. - Total Operations
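To see how the gets, hits, and misses counters move in practice, here's a minimal sketch (assuming the `ioredis` client and placeholder connection values) that issues a multi-key `MGET` and then reads the related counters from `INFO`:

```typescript
import Redis from "ioredis";

// Hypothetical connection values; substitute your cache host name and access key.
const redis = new Redis({
  host: "<cache-name>.redis.cache.windows.net",
  port: 6380,
  password: "<access-key>",
  tls: { servername: "<cache-name>.redis.cache.windows.net" },
});

async function main() {
  // One MGET over three keys counts as a single command in the get family...
  await redis.mget("key1", "key2", "key3");

  // ...but contributes three to the combined hits + misses counters.
  const commandStats = await redis.info("commandstats"); // contains cmdstat_mget
  const stats = await redis.info("stats");               // contains keyspace_hits / keyspace_misses
  console.log(commandStats.split("\r\n").filter((l) => l.startsWith("cmdstat_mget")));
  console.log(stats.split("\r\n").filter((l) => l.startsWith("keyspace_")));

  redis.disconnect();
}

main().catch(console.error);
```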
The two workbooks provided are:
- **Azure Cache For Redis Resource Overview** combines many of the most commonly used metrics so that the health and performance of the cache instance can be viewed at a glance. :::image type="content" source="media/cache-how-to-monitor/cache-monitoring-resource-overview.png" alt-text="Screenshot of graphs showing a resource overview for the cache."::: -- **Geo-Replication Dashboard** pulls geo-replication health and status metrics from both the geo-primary and geo-secondary cache instances to give a complete picture of geo-replcation health. Using this dashboard is recommended, as some geo-replication metrics are only emitted from either the geo-primary or geo-secondary.
+- **Geo-Replication Dashboard** pulls geo-replication health and status metrics from both the geo-primary and geo-secondary cache instances to give a complete picture of geo-replication health. Using this dashboard is recommended, as some geo-replication metrics are only emitted from either the geo-primary or geo-secondary.
:::image type="content" source="media/cache-how-to-monitor/cache-monitoring-geo-dashboard.png" alt-text="Screenshot showing the geo-replication dashboard with a geo-primary and geo-secondary cache set."::: ## Related content
azure-cache-for-redis Cache Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-whats-new.md
Previously updated : 01/23/2024 Last updated : 02/28/2024 # What's New in Azure Cache for Redis
+## February 2024
+
+Support for using customer managed keys (CMK) for disk encryption has now reached General Availability (GA).
+
+For more information, see [How to configure CMK encryption on Enterprise caches](cache-how-to-encryption.md#how-to-configure-cmk-encryption-on-enterprise-caches).
+ ## January 2024
-All tiers of Azure Cache for Redis now support TLS 1.3.
+All tiers of Azure Cache for Redis now support TLS 1.3.
For more information, see [What are the configuration settings for the TLS protocol?](cache-tls-configuration.md).
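If you want the client side to require TLS 1.3 as well, a rough sketch (assuming the `ioredis` client, which passes TLS options through to Node's `tls.connect`) could look like this:

```typescript
import Redis from "ioredis";

// Hypothetical values; substitute your cache host name and access key.
const redis = new Redis({
  host: "<cache-name>.redis.cache.windows.net",
  port: 6380,
  password: "<access-key>",
  // Require TLS 1.3 on the client; the cache must also allow it.
  tls: { minVersion: "TLSv1.3" },
});

redis.ping().then((pong) => {
  console.log(pong); // "PONG"
  redis.disconnect();
});
```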
Microsoft Entra ID for authentication and role-based access control is available
### Microsoft Entra ID authentication and authorization (preview)
-Microsoft Entra ID based [authentication and authorization](cache-azure-active-directory-for-authentication.md) is now available for public preview with Azure Cache for Redis. With this Microsft Entra ID integration, users can connect to their cache instance without an access key and use [role-based access control](cache-configure-role-based-access-control.md) to connect to their cache instance.
+Microsoft Entra ID based [authentication and authorization](cache-azure-active-directory-for-authentication.md) is now available for public preview with Azure Cache for Redis. With this Microsoft Entra ID integration, users can connect to their cache instance without an access key and use [role-based access control](cache-configure-role-based-access-control.md) to connect to their cache instance.
This feature is available for Azure Cache for Redis Basic, Standard, and Premium SKUs. With this update, customers can look forward to increased security and a simplified authentication process when using Azure Cache for Redis.
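As a rough sketch of key-less access, the following assumes `@azure/identity` plus `ioredis`, that the caller has a Redis access policy assignment, and that the token scope and username shown are correct; these values are assumptions, so verify them against the linked articles:

```typescript
import { DefaultAzureCredential } from "@azure/identity";
import Redis from "ioredis";

async function connectWithEntraId() {
  const credential = new DefaultAzureCredential();

  // Assumed token scope for Azure Cache for Redis; confirm in the linked docs.
  const token = await credential.getToken("https://redis.azure.com/.default");

  return new Redis({
    host: "<cache-name>.redis.cache.windows.net",
    port: 6380,
    // Assumption: the username is the principal's object ID from the cache's access policy assignment.
    username: "<principal-object-id>",
    password: token.token, // Entra access token used in place of an access key
    tls: { servername: "<cache-name>.redis.cache.windows.net" },
  });
}

connectWithEntraId().then((redis) => redis.ping().then(console.log));
```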
azure-functions Functions Bindings Http Webhook Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-http-webhook-trigger.md
This example reads the body of a POST request, as a `String`, and uses it to bui
#### Read parameter from a route
-This example reads a mandatory parameter, named `id`, and an optional parameter `name` from the route path, and uses them to build a JSON document returned to the client, with content type `application/json`. T
+This example reads a mandatory parameter, named `id`, and an optional parameter `name` from the route path, and uses them to build a JSON document returned to the client, with content type `application/json`.
```java @FunctionName("TriggerStringRoute")
When you use route parameters, an `invoke_URL_template` is automatically created
You can programmatically access the `invoke_URL_template` by using the Azure Resource Manager APIs for [List Functions](/rest/api/appservice/webapps/listfunctions) or [Get Function](/rest/api/appservice/webapps/getfunction).
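For example, a hedged sketch of reading the template through the List Functions ARM call; the subscription, resource group, app name, and token source below are placeholders:

```typescript
// Sketch: list a function app's functions via Azure Resource Manager and print each invoke_url_template.
// Assumes an ARM bearer token is already available (for example, from `az account get-access-token`).
async function listInvokeUrlTemplates(): Promise<void> {
  const armToken = process.env.ARM_TOKEN;
  const url =
    "https://management.azure.com/subscriptions/<subscription-id>" +
    "/resourceGroups/<resource-group>/providers/Microsoft.Web/sites/<function-app-name>" +
    "/functions?api-version=2022-03-01";

  const response = await fetch(url, { headers: { Authorization: `Bearer ${armToken}` } });
  const body = await response.json();
  for (const fn of body.value ?? []) {
    console.log(fn.name, fn.properties?.invoke_url_template);
  }
}

listInvokeUrlTemplates().catch(console.error);
```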
+### HTTP streams (preview)
+
+You can now stream requests to and responses from your HTTP endpoint in Node.js v4 function apps. For more information, see [HTTP streams](functions-reference-node.md?pivots=nodejs-model-v4#http-streams-preview).
+ ### Working with client identities If your function app is using [App Service Authentication / Authorization](../app-service/overview-authentication-authorization.md), you can view information about authenticated clients from your code. This information is available as [request headers injected by the platform](../app-service/configure-authentication-user-identities.md#access-user-claims-in-app-code).
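For instance, in a Node.js v4 function you could read the injected headers roughly like this (the function name and route are hypothetical; see the linked article for the full list of headers):

```typescript
import { app, HttpRequest, HttpResponseInit, InvocationContext } from "@azure/functions";

app.http("whoAmI", {
  methods: ["GET"],
  authLevel: "anonymous",
  handler: async (request: HttpRequest, context: InvocationContext): Promise<HttpResponseInit> => {
    // Headers injected by App Service Authentication / Authorization.
    const name = request.headers.get("x-ms-client-principal-name");
    const idp = request.headers.get("x-ms-client-principal-idp");
    return { jsonBody: { name, idp } };
  },
});
```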
azure-functions Functions Bindings Register https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-register.md
The following table lists the currently available version ranges of the default
> [!NOTE]
-> Even though host.json supports custom ranges for `version`, you should use a version range value from this table, such as `[3.3.0, 4.0.0)`.
+> Even though host.json supports custom ranges for `version`, you should use a version range value from this table, such as `[4.0.0, 5.0.0)`.
## Explicitly install extensions
azure-functions Functions Host Json https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-host-json.md
The following sample *host.json* file for version 2.x+ has all possible options
}, "extensionBundle": { "id": "Microsoft.Azure.Functions.ExtensionBundle",
- "version": "[1.*, 2.0.0)"
+ "version": "[4.0.0, 5.0.0)"
}, "functions": [ "QueueProcessor", "GitHubWebHook" ], "functionTimeout": "00:05:00",
For more information on snapshots, see [Debug snapshots on exceptions in .NET ap
| maximumCollectionPlanSize | 50 | The maximum number of problems that we can track at any time in a range from one to 9999. |
| maximumSnapshotsRequired | 3 | The maximum number of snapshots collected for a single problem, in a range from one to 999. A problem may be thought of as an individual throw statement in your application. Once the number of snapshots collected for a problem reaches this value, no more snapshots will be collected for that problem until problem counters are reset (see `problemCounterResetInterval`) and the `thresholdForSnapshotting` limit is reached again. |
| problemCounterResetInterval | 24:00:00 | How often to reset the problem counters in a range from one minute to seven days. When this interval is reached, all problem counts are reset to zero. Existing problems that have already reached the threshold for doing snapshots, but haven't yet generated the number of snapshots in `maximumSnapshotsRequired`, remain active. |
-| provideAnonymousTelemetry | true | Determines whether to send anonymous usage and error telemetry to Microsoft. This telemetry may be used if you contact Microsoft to help troubleshoot problems with the Snapshot Debugger. It is also used to monitor usage patterns. |
+| provideAnonymousTelemetry | true | Determines whether to send anonymous usage and error telemetry to Microsoft. This telemetry may be used if you contact Microsoft to help troubleshoot problems with the Snapshot Debugger. It's also used to monitor usage patterns. |
| reconnectInterval | 00:15:00 | How often we reconnect to the Snapshot Debugger endpoint. Allowable range is one minute to one day. |
| shadowCopyFolder | null | Specifies the folder to use for shadow copying binaries. If not set, the folders specified by the following environment variables are tried in order: Fabric_Folder_App_Temp, LOCALAPPDATA, APPDATA, TEMP. |
| shareUploaderProcess | true | If true, only one instance of SnapshotUploader will collect and upload snapshots for multiple apps that share the InstrumentationKey. If set to false, the SnapshotUploader will be unique for each (ProcessName, InstrumentationKey) tuple. |
| snapshotInLowPriorityThread | true | Determines whether or not to process snapshots in a low IO priority thread. Creating a snapshot is a fast operation but, in order to upload a snapshot to the Snapshot Debugger service, it must first be written to disk as a minidump. That happens in the SnapshotUploader process. Setting this value to true uses low-priority IO to write the minidump, which won't compete with your application for resources. Setting this value to false speeds up minidump creation at the expense of slowing down your application. |
| snapshotsPerDayLimit | 30 | The maximum number of snapshots allowed in one day (24 hours). This limit is also enforced on the Application Insights service side. Uploads are rate limited to 50 per day per application (that is, per instrumentation key). This value helps prevent creating additional snapshots that will eventually be rejected during upload. A value of zero removes the limit entirely, which isn't recommended. |
-| snapshotsPerTenMinutesLimit | 1 | The maximum number of snapshots allowed in 10 minutes. Although there is no upper bound on this value, exercise caution increasing it on production workloads because it could impact the performance of your application. Creating a snapshot is fast, but creating a minidump of the snapshot and uploading it to the Snapshot Debugger service is a much slower operation that will compete with your application for resources (both CPU and I/O). |
+| snapshotsPerTenMinutesLimit | 1 | The maximum number of snapshots allowed in 10 minutes. Although there's no upper bound on this value, exercise caution increasing it on production workloads because it could impact the performance of your application. Creating a snapshot is fast, but creating a minidump of the snapshot and uploading it to the Snapshot Debugger service is a much slower operation that will compete with your application for resources (both CPU and I/O). |
| tempFolder | null | Specifies the folder to write minidumps and uploader log files. If not set, then *%TEMP%\Dumps* is used. |
| thresholdForSnapshotting | 1 | How many times Application Insights needs to see an exception before it asks for snapshots. |
| uploaderProxy | null | Overrides the proxy server used in the Snapshot Uploader process. You may need to use this setting if your application connects to the internet via a proxy server. The Snapshot Collector runs within your application's process and will use the same proxy settings. However, the Snapshot Uploader runs as a separate process and you may need to configure the proxy server manually. If this value is null, then Snapshot Collector will attempt to autodetect the proxy's address by examining `System.Net.WebRequest.DefaultWebProxy` and passing on the value to the Snapshot Uploader. If this value isn't null, then autodetection isn't used and the proxy server specified here will be used in the Snapshot Uploader. |
This setting is a child of [logging](#logging). It controls the console logging
|Property |Default | Description |
|---|---|---|
-|DisableColors|false| Suppresses log formatting in the container logs on Linux. Set to true if you are seeing unwanted ANSI control characters in the container logs when running on Linux. |
+|DisableColors|false| Suppresses log formatting in the container logs on Linux. Set to true if you're seeing unwanted ANSI control characters in the container logs when running on Linux. |
|isEnabled|false|Enables or disables console logging.|

## Azure Cosmos DB
Configuration settings for a custom handler. For more information, see [Azure Fu
|Property | Default | Description |
|---|---|---|
-| defaultExecutablePath | n/a | The executable to start as the custom handler process. It is a required setting when using custom handlers and its value is relative to the function app root. |
-| workingDirectory | *function app root* | The working directory in which to start the custom handler process. It is an optional setting and its value is relative to the function app root. |
+| defaultExecutablePath | n/a | The executable to start as the custom handler process. It's a required setting when using custom handlers and its value is relative to the function app root. |
+| workingDirectory | *function app root* | The working directory in which to start the custom handler process. It's an optional setting and its value is relative to the function app root. |
| arguments | n/a | An array of command line arguments to pass to the custom handler process. |
| enableForwardingHttpRequest | false | If set, all functions that consist of only an HTTP trigger and HTTP output are forwarded the original HTTP request instead of the custom handler [request payload](functions-custom-handlers.md#request-payload). |
Controls the logging behaviors of the function app, including Application Insigh
|Property |Default | Description |
|---|---|---|
-|fileLoggingMode|debugOnly|Determines the file logging behavior when running in Azure. Options are `never`, `always`, and `debugOnly`. This setting isn't used when running locally. When possible, you should use Application Insights when debugging your functions in Azure. Using `always` negatively impacts your app's cold start behavior and data throughput. The default `debugOnly` setting generates log files when you are debugging using the Azure portal. |
+|fileLoggingMode|debugOnly|Determines the file logging behavior when running in Azure. Options are `never`, `always`, and `debugOnly`. This setting isn't used when running locally. When possible, you should use Application Insights when debugging your functions in Azure. Using `always` negatively impacts your app's cold start behavior and data throughput. The default `debugOnly` setting generates log files when you're debugging using the Azure portal. |
|logLevel|n/a|Object that defines the log category filtering for functions in the app. This setting lets you filter logging for specific functions. For more information, see [Configure log levels](configure-monitoring.md#configure-log-levels). |
|console|n/a| The [console](#console) logging setting. |
|applicationInsights|n/a| The [applicationInsights](#applicationinsights) setting. |
A set of [shared code directories](functions-reference-csharp.md#watched-directo
## watchFiles
-An array of one or more names of files that are monitored for changes that require your app to restart. This guarantees that when code in these files are changed, the updates are picked up by your functions.
+An array of one or more names of files that are monitored for changes that require your app to restart. This guarantees that when code in these files is changed, the updates are picked up by your functions.
```json {
azure-functions Functions Reference Node https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-node.md
Title: Node.js developer reference for Azure Functions
description: Understand how to develop functions by using Node.js. ms.assetid: 45dedd78-3ff9-411f-bb4b-16d29a11384c Previously updated : 04/17/2023 Last updated : 02/28/2024 ms.devlang: javascript # ms.devlang: javascript, typescript
The following table shows each version of the Node.js programming model along wi
| --- | --- | --- | --- | --- |
| 4.x | GA | 4.25+ | 20.x (Preview), 18.x | Supports a flexible file structure and code-centric approach to triggers and bindings. |
| 3.x | GA | 4.x | 20.x (Preview), 18.x, 16.x, 14.x | Requires a specific file structure with your triggers and bindings declared in a "function.json" file |
-| 2.x | GA (EOL) | 3.x | 14.x, 12.x, 10.x | Reached end of life (EOL) on December 13, 2022. See [Functions Versions](./functions-versions.md) for more info. |
-| 1.x | GA (EOL) | 2.x | 10.x, 8.x | Reached end of life (EOL) on December 13, 2022. See [Functions Versions](./functions-versions.md) for more info. |
+| 2.x | n/a | 3.x | 14.x, 12.x, 10.x | Reached end of support on December 13, 2022. See [Functions Versions](./functions-versions.md) for more info. |
+| 1.x | n/a | 2.x | 10.x, 8.x | Reached end of support on December 13, 2022. See [Functions Versions](./functions-versions.md) for more info. |
## Folder structure
app.storageQueue('copyBlob1', {
### Generic inputs and outputs
-The `app`, `trigger`, `input`, and `output` objects exported by the `@azure/functions` module provide type-specific methods for most types. For all the types that aren't supported, a `generic` method has been provided to allow you to manually specify the configuration. The `generic` method can also be used if you want to change the default settings provided by a type-specific method.
+The `app`, `trigger`, `input`, and `output` objects exported by the `@azure/functions` module provide type-specific methods for most types. For all the types that aren't supported, a `generic` method is provided to allow you to manually specify the configuration. The `generic` method can also be used if you want to change the default settings provided by a type-specific method.
The following example is a simple HTTP triggered function using generic methods instead of type-specific methods.
The `HttpRequest` object has the following properties:
| **`params`** | `Record<string, string>` | Route parameter keys and values. | | **`user`** | `HttpRequestUser | null` | Object representing logged-in user, either through Functions authentication, SWA Authentication, or null when no such user is logged in. | | **`body`** | [`ReadableStream | null`](https://developer.mozilla.org/docs/Web/API/ReadableStream) | Body as a readable stream. |
-| **`bodyUsed`** | `boolean` | A boolean indicating if the body has been read from already. |
+| **`bodyUsed`** | `boolean` | A boolean indicating whether the body has already been read. |
In order to access a request or response's body, the following methods can be used:
The response can be set in several ways:
::: zone-end
+## HTTP streams (preview)
+
+HTTP streams is a feature that makes it easier to process large amounts of data, stream OpenAI responses, deliver dynamic content, and support other core HTTP scenarios. It lets you stream requests to and responses from HTTP endpoints in your Node.js function app. Use HTTP streams in scenarios where your app requires real-time exchange and interaction between client and server over HTTP. You can also use HTTP streams to get the best performance and reliability for your apps when using HTTP.
+
+HTTP streams is currently in preview.
+
+>[!IMPORTANT]
+>HTTP streams aren't supported in the v3 model. [Upgrade to the v4 model](./functions-node-upgrade-v4.md) to use the HTTP streaming feature.
+
+The existing `HttpRequest` and `HttpResponse` types in programming model v4 already support various ways of handling the message body, including as a stream.
+
+### Prerequisites
+- The [`@azure/functions` npm package](https://www.npmjs.com/package/@azure/functions) version 4.3.0 or later.
+- [Azure Functions runtime](./functions-versions.md) version 4.28 or later.
+- [Azure Functions Core Tools](./functions-run-local.md) version 4.0.5530 or a later version, which contains the correct runtime version.
+
+### Enable streams
+
+Use these steps to enable HTTP streams in your function app in Azure and in your local projects:
+
+1. If you plan to stream large amounts of data, modify the [`FUNCTIONS_REQUEST_BODY_SIZE_LIMIT`](./functions-app-settings.md#functions_request_body_size_limit) setting in Azure. The default maximum body size allowed is `104857600`, which limits your requests to a size of ~100 MB.
+
+1. For local development, also add `FUNCTIONS_REQUEST_BODY_SIZE_LIMIT` to the [local.settings.json file](./functions-develop-local.md#local-settings-file).
+
+1. Add the following code to your app in any file included by your [main field](./functions-reference-node.md#registering-a-function).
+
+ #### [JavaScript](#tab/javascript)
+
+ ```javascript
+ const { app } = require('@azure/functions');
+
+ app.setup({ enableHttpStream: true });
+ ```
+
+ #### [TypeScript](#tab/typescript)
+
+ ```typescript
+ import { app } from '@azure/functions';
+
+ app.setup({ enableHttpStream: true });
+ ```
+
+
+
+### Stream examples
+
+This example shows an HTTP triggered function that receives data via an HTTP POST request, and the function streams this data to a specified output file:
+
+#### [JavaScript](#tab/javascript)
++
+#### [TypeScript](#tab/typescript)
++++
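Because the sample include isn't reproduced in this digest, here's a minimal sketch of the request-streaming scenario, assuming programming model v4 with streams enabled and a hypothetical output path:

```typescript
import { app, HttpRequest, HttpResponseInit, InvocationContext } from "@azure/functions";
import { createWriteStream } from "node:fs";
import { Writable } from "node:stream";

app.http("streamToFile", {
  methods: ["POST"],
  authLevel: "anonymous",
  handler: async (request: HttpRequest, context: InvocationContext): Promise<HttpResponseInit> => {
    // Pipe the incoming request body (a web ReadableStream) into a local file.
    const writeStream = createWriteStream("<output-file-path>");
    await request.body?.pipeTo(Writable.toWeb(writeStream));
    return { body: "Upload complete" };
  },
});
```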
+This example shows an HTTP triggered function that streams a file's content as the response to incoming HTTP GET requests:
+
+#### [JavaScript](#tab/javascript)
++
+#### [TypeScript](#tab/typescript)
+
+
++
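Likewise, a sketch of the response-streaming scenario; the function name, route, and file path are placeholders:

```typescript
import { app, HttpRequest, HttpResponseInit, InvocationContext } from "@azure/functions";
import { createReadStream } from "node:fs";

app.http("streamFromFile", {
  methods: ["GET"],
  authLevel: "anonymous",
  handler: async (request: HttpRequest, context: InvocationContext): Promise<HttpResponseInit> => {
    // Return a readable stream as the response body; the runtime streams it to the client.
    return {
      headers: { "content-type": "application/octet-stream" },
      body: createReadStream("<input-file-path>"),
    };
  },
});
```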
+### Stream considerations
+
++ The `request.params` object isn't supported when using HTTP streams during preview. Refer to this [GitHub issue](https://github.com/Azure/azure-functions-nodejs-library/issues/229) for more information and a suggested workaround.
+
++ Use `request.body` to obtain the maximum benefit from using streams. You can still continue to use methods like `request.text()`, which always return the body as a string.
+
## Hooks
::: zone pivot="nodejs-model-v3"
azure-functions Functions Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-versions.md
- devx-track-js - devx-track-python - ignite-2023 Previously updated : 09/01/2023 Last updated : 02/26/2024 zone_pivot_groups: programming-languages-set-functions
For C# script, update the extension bundle reference in the host.json as follows
"version": "2.0", "extensionBundle": { "id": "Microsoft.Azure.Functions.ExtensionBundle",
- "version": "[2.*, 3.0.0)"
+ "version": "[4.0.0, 5.0.0)"
} } ```
If you receive a warning about your extension bundle version not meeting a minim
"version": "2.0", "extensionBundle": { "id": "Microsoft.Azure.Functions.ExtensionBundle",
- "version": "[2.*, 3.0.0)"
+ "version": "[4.0.0, 5.0.0)"
} } ```
azure-maps Rest Api Azure Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/rest-api-azure-maps.md
+
+ Title: Links to the Azure Maps Rest API
+
+description: Links to the Azure Maps Rest API.
++ Last updated : 02/05/2024+++++
+# Azure Maps Rest API
+
+Azure Maps is a set of mapping and geospatial services that enable developers and organizations to build intelligent location-based experiences for applications across many different industries and use cases. Use Azure Maps to bring maps, geocoding, location search, routing, real-time traffic, geolocation, time zone information and weather data into your web, mobile and server-side solutions.
+
+The following tables show overviews of the services that Azure Maps offers:
+
+## Latest release
+
+The most recent stable release of the Azure Maps services.
+
+| API | Description |
+|--|-|
+| [Data] | The Azure Maps Data v2 service is deprecated and will be retired on 9/16/24. To avoid service disruptions, all calls to the Data service need to be updated to use the Azure Maps [Data registry] service by 9/16/24. For more information, see [How to create data registry]. |
+| [Data Registry] | Programmatically store and update geospatial data to use in spatial operations. |
+| [Geolocation] | Convert IP addresses to country/region ISO codes. |
+| [Render] | Get road, satellite/aerial, weather, traffic map tiles, and static map images. |
+| [Route] | Calculate optimized travel times and distances between locations for multiple modes of transportation and get localized travel instructions. |
+| [Search] | Geocode addresses and coordinates, search for business listings and places by name or category and get administrative boundary polygons. |
+| [Spatial] | Use geofences, great circle distances and other spatial operations to analyze location data. |
+| [Timezone] | Get time zone and sunrise/sunset information for specified locations. |
+| [Traffic] | Get current traffic information including traffic flow and traffic incident details. |
+| [Weather] | Get current, forecasted, and historical weather conditions, air quality, tropical storm details and weather along a route. |
+
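For a feel of the request shape, here's a sketch calling the Geolocation service with plain `fetch`; the subscription key variable, IP address, and response fields shown are illustrative, so check the [Geolocation] reference for the authoritative contract:

```typescript
// Sketch: resolve an IP address to a country/region ISO code with the Geolocation API.
async function lookupIp(ip: string): Promise<string | undefined> {
  const key = process.env.AZURE_MAPS_KEY; // assumed environment variable holding a subscription key
  const url =
    `https://atlas.microsoft.com/geolocation/ip/json` +
    `?api-version=1.0&ip=${encodeURIComponent(ip)}&subscription-key=${key}`;

  const response = await fetch(url);
  const result = await response.json();
  return result.countryRegion?.isoCode;
}

lookupIp("8.8.8.8").then((iso) => console.log(iso)); // e.g. "US"
```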
+## Previous release
+
+A previous stable release of an Azure Maps service that is still in use. The services in this list will generally have a more recent version available, and are slated for retirement. If using a previous release, update to the latest version before it's retired to avoid disruption of service.
+
+| API | Description |
+|--|-|
+| [Data][Data-v1] | The Azure Maps Data v1 service is deprecated and will be retired on 9/16/24. To avoid service disruptions, all calls to the Data service need to be updated to use the Azure Maps [Data registry] service by 9/16/24. For more information, see [How to create data registry]. |
+| [Render][Render v1] | Get road, satellite/aerial, weather, traffic map tiles and static map images.<BR>The Azure Maps [Render v1] service is now deprecated and will be retired on 9/17/26. To avoid service disruptions, all calls to the Render v1 API need to be updated to use the latest version of the [Render] API by 9/17/26. |
+| [Search][Search-v1] | Geocode addresses and coordinates, search for business listings and places by name or category and get administrative boundary polygons. This is version 1.0 of the Search service. For the latest version, see [Search]. |
+
+## Latest preview
+
+Prerelease version of an Azure Maps service. Preview releases contain new functionality or updates to existing functionality that will be included in a future release.
+
+| API | Description |
+|--|-|
+| [Route][Route-2023-10-01-preview] | Returns the ideal route in GeoJSON between locations for multiple modes of transportation.<BR><BR>Some of the updates in this version of the Route service include:<ul><li>Routes with "via" waypoints that the route must pass through.</li><li>More geographies</li><li>More languages available for localized travel instructions.</li></ul> |
+
+<!-- Links to latest versions of each service -->
+[Data]: /rest/api/maps/data
+[How to create data registry]: /azure/azure-maps/how-to-create-data-registries
+[Data Registry]: /rest/api/maps/data-registry
+[Geolocation]: /rest/api/maps/geolocation
+[Render]: /rest/api/maps/render
+[Route]: /rest/api/maps/route
+[Search]: /rest/api/maps/search
+[Spatial]: /rest/api/maps/spatial
+[Timezone]: /rest/api/maps/timezone
+[Traffic]: /rest/api/maps/traffic
+[Weather]: /rest/api/maps/weather
+
+<!-- Links to previous versions of each service -->
+[Data-v1]: /rest/api/maps/data?view=rest-maps-1.0
+[Render v1]: /rest/api/maps/render?view=rest-maps-1.0
+[Search-v1]: /rest/api/maps/search?view=rest-maps-1.0
+
+<!-- 2023-10-01-preview is the latest preview release of the Route service,
 currently the only Azure Maps service in Preview -->
+[Route-2023-10-01-preview]: /rest/api/maps/route?view=rest-maps-2023-10-01-preview
azure-maps Rest Api Creator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/rest-api-creator.md
+
+ Title: Links to the Azure Maps Creator Rest API
+
+description: Links to the Azure Maps Creator Rest API
++ Last updated : 02/05/2024+++++
+# Creator Rest API
+
+Indoor mapping is a technology that enables the creation of digital maps of the interior of buildings. It helps visitors navigate through buildings and locate points of interest such as restrooms, conference rooms, and offices. Indoor mapping can be used to create a more convenient and enjoyable visitor experience. Visitors can spend less time searching for building directories and more time discovering new points of interest. With Azure Maps Creator, you can create indoor maps that enable customers to zoom in and out of a building to see each floor and navigate to any desired location using Creator's wayfinding service. In addition to common mapping functionality, Azure Maps Creator offers an array of useful services that enable you to implement functionality such as asset tracking, facility management, workspace optimization, hybrid work models to support a blend of in-office, remote, and on-the-go working, and much more.
+
+The following tables offer high-level overviews of the services that Azure Maps Creator offers:
+
+## Latest release
+
+The most recent stable release of the Creator services.
+
+| API | Description |
+|--|-|
+| [Alias] | This API allows the caller to assign an alias to reference a resource. |
+| [Conversion] | Used to import a set of DWG design files as a zipped [Drawing Package](https://aka.ms/am-drawing-package) into Azure Maps.|
+| [Dataset] | A collection containing the indoor map [features](/azure/azure-maps/glossary#feature) of a facility. This API allows the caller to create a dataset from previously uploaded data. |
+| [Feature State] | The Feature stateset can be used to dynamically render features in a facility according to their current state and respective map style. |
+| [Tileset] | A `tileset` is a collection of vector tiles that render on the map, created from an existing dataset. |
+| [WFS] | Use the Web Feature Service (WFS) API to query for all feature collections or a specific collection within a dataset. For example, you can use WFS to find all mid-size meeting rooms in a specific building and floor level. |
+
+## Latest preview
+
+Pre-release version of a Creator service. Preview releases contain new functionality or updates to existing functionality that will be included in a future release.
+
+| API | Description |
+|--|-|
+| [Alias][Alias-preview] | This API allows the caller to assign an alias to reference a resource. |
+| [Conversion][Conversion-preview] | Used to import a set of DWG design files as a zipped [Drawing Package](https://aka.ms/am-drawing-package) into Azure Maps.|
+| [Dataset][Dataset-preview] | A collection of indoor map [features](/azure/azure-maps/glossary#feature) in a facility. This API allows the caller to create a dataset from previously uploaded data. |
+| [Feature State][Feature State-preview] | The Feature stateset can be used to dynamically render features in a facility according to their current state and respective map style. |
+| [Features] | An instance of an object produced from the [Conversion][Conversion-preview] service that combines a geometry with metadata information. |
+| [Map Configuration] | Map Configuration in indoor mapping refers to the default settings of a map that are applied when the map is loaded. It includes the default zoom level, center point, and other map settings. |
+| [Routeset] | Use the routeset API to create the data that the wayfinding service needs to generate paths. |
+| [Style] | Use the Style API to customize your facility's look and feel. Everything is configurable from the color of a feature, the icon that renders, or the zoom level when a feature should appear or disappear. |
+| [Tileset][Tileset-preview] | A collection of vector tiles that render on the map, created from an existing dataset. |
+| [Wayfinding] | Wayfinding is a technology that helps people navigate through complex indoor environments such as malls, stadiums, airports, and office buildings. |
+
+<!-- V2 is the latest stable release of each Creator service -->
+
+[Alias]: /rest/api/maps-creator/alias
+[Conversion]: /rest/api/maps-creator/conversion
+[Dataset]: /rest/api/maps-creator/dataset
+[Feature State]: /rest/api/maps-creator/feature-state
+[Tileset]: /rest/api/maps-creator/tileset
+[WFS]: /rest/api/maps-creator/wfs
+
+<!-- 2023-03-01-preview is the latest preview release of each Creator service -->
+
+[Alias-preview]: /rest/api/maps-creator/alias?view=rest-maps-creator-2023-03-01-preview
+[Conversion-preview]: /rest/api/maps-creator/conversion?view=rest-maps-creator-2023-03-01-preview
+[Dataset-preview]: /rest/api/maps-creator/dataset?view=rest-maps-creator-2023-03-01-preview
+[Feature State-preview]: /rest/api/maps-creator/feature-state?view=rest-maps-creator-2023-03-01-preview
+[Features]: /rest/api/maps-creator/features?view=rest-maps-creator-2023-03-01-preview
+[Map configuration]: /rest/api/maps-creator/map-configuration?view=rest-maps-creator-2023-03-01-preview
+[Routeset]: /rest/api/maps-creator/routeset?view=rest-maps-creator-2023-03-01-preview
+[Style]: /rest/api/maps-creator/style?view=rest-maps-creator-2023-03-01-preview
+[Tileset-preview]: /rest/api/maps-creator/tileset?view=rest-maps-creator-2023-03-01-preview
+[Wayfinding]: /rest/api/maps-creator/wayfinding?view=rest-maps-creator-2023-03-01-preview
azure-monitor Opentelemetry Add Modify https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-add-modify.md
You can collect more data automatically when you include instrumentation librari
[!INCLUDE [azure-monitor-app-insights-opentelemetry-support](../includes/azure-monitor-app-insights-opentelemetry-community-library-warning.md)]
-### [ASP.NET Core](#tab/aspnetcore-1)
+### [ASP.NET Core](#tab/aspnetcore)
To add a community library, use the `ConfigureOpenTelemetryMeterProvider` or `ConfigureOpenTelemetryTracerProvider` methods, after adding the NuGet package for the library.
var app = builder.Build();
app.Run(); ```
-### [.NET](#tab/net-1)
+### [.NET](#tab/net)
The following example demonstrates how the [Runtime Instrumentation](https://www.nuget.org/packages/OpenTelemetry.Instrumentation.Runtime) can be added to collect extra metrics.
var metricsProvider = Sdk.CreateMeterProviderBuilder()
.AddAzureMonitorMetricExporter(); ```
-### [Java](#tab/java-1)
+### [Java](#tab/java)
You can't extend the Java Distro with community instrumentation libraries. To request that we include another instrumentation library, open an issue on our GitHub page. You can find a link to our GitHub page in [Next Steps](#next-steps).
-### [Node.js](#tab/nodejs-1)
+### [Node.js](#tab/nodejs)
Other OpenTelemetry Instrumentations are available [here](https://github.com/open-telemetry/opentelemetry-js-contrib/tree/main/plugins/node) and could be added using TraceHandler in ApplicationInsightsClient.
Other OpenTelemetry Instrumentations are available [here](https://github.com/ope
}); ```
-### [Python](#tab/python-1)
+### [Python](#tab/python)
To add a community instrumentation library (not officially supported/included in Azure Monitor distro), you can instrument directly with the instrumentations. The list of community instrumentation libraries can be found [here](https://github.com/open-telemetry/opentelemetry-python-contrib/tree/main/instrumentation).
describes the instruments and provides examples of when you might use each one.
#### Histogram example
-#### [ASP.NET Core](#tab/aspnetcore-2)
+#### [ASP.NET Core](#tab/aspnetcore)
Application startup must subscribe to a Meter by name.
myFruitSalePrice.Record(rand.Next(1, 1000), new("name", "apple"), new("color", "
myFruitSalePrice.Record(rand.Next(1, 1000), new("name", "lemon"), new("color", "yellow")); ```
-#### [.NET](#tab/net-2)
+#### [.NET](#tab/net)
```csharp public class Program
public class Program
} ```
-#### [Java](#tab/java-2)
+#### [Java](#tab/java)
```java import io.opentelemetry.api.GlobalOpenTelemetry;
public class Program {
} ```
-#### [Node.js](#tab/nodejs-2)
+#### [Node.js](#tab/nodejs)
```javascript // Import the Azure Monitor OpenTelemetry plugin and OpenTelemetry API
public class Program {
histogram.record(100, { "testKey2": "testValue" }); ```
-#### [Python](#tab/python-2)
+#### [Python](#tab/python)
```python # Import the `configure_azure_monitor()` and `metrics` functions from the appropriate packages.
input()
#### Counter example
-#### [ASP.NET Core](#tab/aspnetcore-3)
+#### [ASP.NET Core](#tab/aspnetcore)
Application startup must subscribe to a Meter by name.
myFruitCounter.Add(5, new("name", "apple"), new("color", "red"));
myFruitCounter.Add(4, new("name", "lemon"), new("color", "yellow")); ```
-#### [.NET](#tab/net-3)
+#### [.NET](#tab/net)
```csharp public class Program
public class Program
} ```
-#### [Java](#tab/java-3)
+#### [Java](#tab/java)
```Java import io.opentelemetry.api.GlobalOpenTelemetry;
public class Program {
} ```
-#### [Node.js](#tab/nodejs-3)
+#### [Node.js](#tab/nodejs)
```javascript // Import the Azure Monitor OpenTelemetry plugin and OpenTelemetry API
public class Program {
counter.add(3, { "testKey": "testValue2" }); ```
-#### [Python](#tab/python-3)
+#### [Python](#tab/python)
```python # Import the `configure_azure_monitor()` and `metrics` functions from the appropriate packages.
input()
#### Gauge Example
-#### [ASP.NET Core](#tab/aspnetcore-4)
+#### [ASP.NET Core](#tab/aspnetcore)
Application startup must subscribe to a Meter by name.
private static IEnumerable<Measurement<int>> GetThreadState(Process process)
} ```
-#### [.NET](#tab/net-4)
+#### [.NET](#tab/net)
```csharp public class Program
public class Program
} ```
-#### [Java](#tab/java-4)
+#### [Java](#tab/java)
```Java import io.opentelemetry.api.GlobalOpenTelemetry;
public class Program {
} ```
-#### [Node.js](#tab/nodejs-4)
+#### [Node.js](#tab/nodejs)
```typescript // Import the useAzureMonitor function and the metrics module from the @azure/monitor-opentelemetry and @opentelemetry/api packages, respectively.
public class Program {
}); ```
-#### [Python](#tab/python-4)
+#### [Python](#tab/python)
```python # Import the necessary packages.
However, you might want to manually report exceptions beyond what instrumentatio
For instance, exceptions caught by your code aren't ordinarily reported. You might wish to report them to draw attention in relevant experiences including the failures section and end-to-end transaction views.
-#### [ASP.NET Core](#tab/aspnetcore-5)
+#### [ASP.NET Core](#tab/aspnetcore)
- To log an Exception using an Activity: ```csharp
to draw attention in relevant experiences including the failures section and end
} ```
-#### [.NET](#tab/net-5)
+#### [.NET](#tab/net)
- To log an Exception using an Activity: ```csharp
to draw attention in relevant experiences including the failures section and end
} ```
-#### [Java](#tab/java-5)
+#### [Java](#tab/java)
You can use `opentelemetry-api` to update the status of a span and record exceptions.
You can use `opentelemetry-api` to update the status of a span and record except
span.recordException(e); ```
-#### [Node.js](#tab/nodejs-5)
+#### [Node.js](#tab/nodejs)
```javascript // Import the Azure Monitor OpenTelemetry plugin and OpenTelemetry API
You can use `opentelemetry-api` to update the status of a span and record except
} ```
-#### [Python](#tab/python-5)
+#### [Python](#tab/python)
The OpenTelemetry Python SDK is implemented in such a way that exceptions thrown are automatically captured and recorded. See the following code sample for an example of this behavior.
with tracer.start_as_current_span("hello", record_exception=False) as span:
You might want to add a custom span in two scenarios. First, when there's a dependency request not already collected by an instrumentation library. Second, when you wish to model an application process as a span on the end-to-end transaction view.
-#### [ASP.NET Core](#tab/aspnetcore-6)
+#### [ASP.NET Core](#tab/aspnetcore)
> [!NOTE] > The `Activity` and `ActivitySource` classes from the `System.Diagnostics` namespace represent the OpenTelemetry concepts of `Span` and `Tracer`, respectively. You create `ActivitySource` directly by using its constructor instead of by using `TracerProvider`. Each [`ActivitySource`](https://github.com/open-telemetry/opentelemetry-dotnet/tree/main/docs/trace/customizing-the-sdk#activity-source) class must be explicitly connected to `TracerProvider` by using `AddSource()`. That's because parts of the OpenTelemetry tracing API are incorporated directly into the .NET runtime. To learn more, see [Introduction to OpenTelemetry .NET Tracing API](https://github.com/open-telemetry/opentelemetry-dotnet/blob/main/src/OpenTelemetry.Api/README.md#introduction-to-opentelemetry-net-tracing-api).
app.Run();
`ActivityKind.Client`, `ActivityKind.Producer`, and `ActivityKind.Internal` are mapped to Application Insights `dependencies`. `ActivityKind.Server` and `ActivityKind.Consumer` are mapped to Application Insights `requests`.
-#### [.NET](#tab/net-6)
+#### [.NET](#tab/net)
> [!NOTE] > The `Activity` and `ActivitySource` classes from the `System.Diagnostics` namespace represent the OpenTelemetry concepts of `Span` and `Tracer`, respectively. You create `ActivitySource` directly by using its constructor instead of by using `TracerProvider`. Each [`ActivitySource`](https://github.com/open-telemetry/opentelemetry-dotnet/tree/main/docs/trace/customizing-the-sdk#activity-source) class must be explicitly connected to `TracerProvider` by using `AddSource()`. That's because parts of the OpenTelemetry tracing API are incorporated directly into the .NET runtime. To learn more, see [Introduction to OpenTelemetry .NET Tracing API](https://github.com/open-telemetry/opentelemetry-dotnet/blob/main/src/OpenTelemetry.Api/README.md#introduction-to-opentelemetry-net-tracing-api).
using (var activity = activitySource.StartActivity("CustomActivity"))
`ActivityKind.Client`, `ActivityKind.Producer`, and `ActivityKind.Internal` are mapped to Application Insights `dependencies`. `ActivityKind.Server` and `ActivityKind.Consumer` are mapped to Application Insights `requests`.
-#### [Java](#tab/java-6)
+#### [Java](#tab/java)
##### Use the OpenTelemetry annotation
you can add your spans by using the OpenTelemetry API.
} ```
-#### [Node.js](#tab/nodejs-6)
+#### [Node.js](#tab/nodejs)
```javascript // Import the Azure Monitor OpenTelemetry plugin and OpenTelemetry API
you can add your spans by using the OpenTelemetry API.
span.end(); ```
-#### [Python](#tab/python-6)
+#### [Python](#tab/python)
The OpenTelemetry API can be used to add your own spans, which appear in the `requests` and `dependencies` tables in Application Insights.
The OpenTelemetry Logs/Events API is still under development. In the meantime, y
> [!CAUTION] > Span Events are only recommended for when you need additional diagnostic metadata associated with your span. For other scenarios, such as describing business events, we recommend you wait for the release of the OpenTelemetry Events API.
-#### [ASP.NET Core](#tab/aspnetcore-7)
+#### [ASP.NET Core](#tab/aspnetcore)
Currently unavailable.
-#### [.NET](#tab/net-7)
+#### [.NET](#tab/net)
Currently unavailable.
-#### [Java](#tab/java-7)
+#### [Java](#tab/java)
You can use `opentelemetry-api` to create span events, which populate the `traces` table in Application Insights. The string passed in to `addEvent()` is saved to the `message` field within the trace.
You can use `opentelemetry-api` to create span events, which populate the `trace
Span.current().addEvent("eventName"); ```
-#### [Node.js](#tab/nodejs-7)
+#### [Node.js](#tab/nodejs)
Currently unavailable.
-#### [Python](#tab/python-7)
+#### [Python](#tab/python)
Currently unavailable.
Currently unavailable.
We recommend you use the OpenTelemetry APIs whenever possible, but there might be some scenarios when you have to use the Application Insights [Classic API](api-custom-events-metrics.md).
-#### [ASP.NET Core](#tab/aspnetcore-8)
+#### [ASP.NET Core](#tab/aspnetcore)
##### Events
var telemetryClient = new TelemetryClient(telemetryConfiguration);
telemetryClient.TrackEvent("testEvent"); ```
-#### [.NET](#tab/net-8)
+#### [.NET](#tab/net)
##### Events
var telemetryClient = new TelemetryClient(telemetryConfiguration);
telemetryClient.TrackEvent("testEvent"); ```
-#### [Java](#tab/java-8)
+#### [Java](#tab/java)
1. Add `applicationinsights-core` to your application:
telemetryClient.TrackEvent("testEvent");
} ```
-#### [Node.js](#tab/nodejs-8)
+#### [Node.js](#tab/nodejs)
If you want to add custom events or access the Application Insights API, replace the @azure/monitor-opentelemetry package with the `applicationinsights` [v3 Beta package](https://www.npmjs.com/package/applicationinsights/v/beta). It offers the same methods and interfaces, and all sample code for @azure/monitor-opentelemetry applies to the v3 Beta package.
Then use the `TelemetryClient` to send custom telemetry:
} ```
-#### [Python](#tab/python-8)
+#### [Python](#tab/python)
Unlike other languages, Python doesn't have an Application Insights SDK. You can meet all your monitoring needs with the Azure Monitor OpenTelemetry Distro, except for sending `customEvents`. Until the OpenTelemetry Events API stabilizes, use the [Azure Monitor Events Extension](https://pypi.org/project/azure-monitor-events-extension/0.1.0/) with the Azure Monitor OpenTelemetry Distro to send `customEvents` to Application Insights.
These attributes might include adding a custom property to your telemetry. You m
Any [attributes](#add-span-attributes) you add to spans are exported as custom properties. They populate the _customDimensions_ field in the requests, dependencies, traces, or exceptions table.
-##### [ASP.NET Core](#tab/aspnetcore-9)
+##### [ASP.NET Core](#tab/aspnetcore)
To add span attributes, use either of the following two ways:
public class ActivityEnrichingProcessor : BaseProcessor<Activity>
} ```
-#### [.NET](#tab/net-9)
+#### [.NET](#tab/net)
To add span attributes, use either of the following two ways:
public class ActivityEnrichingProcessor : BaseProcessor<Activity>
} ```
-##### [Java](#tab/java-9)
+##### [Java](#tab/java)
You can use `opentelemetry-api` to add attributes to spans.
Adding one or more span attributes populates the `customDimensions` field in the
Span.current().setAttribute(attributeKey, "myvalue1"); ```
-##### [Node.js](#tab/nodejs-9)
+##### [Node.js](#tab/nodejs)
```typescript // Import the necessary packages.
class SpanEnrichingProcessor implements SpanProcessor {
tracerProvider.addSpanProcessor(new SpanEnrichingProcessor()); ```
-##### [Python](#tab/python-9)
+##### [Python](#tab/python)
Use a custom processor:
class SpanEnrichingProcessor(SpanProcessor):
You can populate the _client_IP_ field for requests by setting the `http.client_ip` attribute on the span. Application Insights uses the IP address to generate user location attributes and then [discards it by default](ip-collection.md#default-behavior).
-##### [ASP.NET Core](#tab/aspnetcore-10)
+##### [ASP.NET Core](#tab/aspnetcore)
Use the add [custom property example](#add-a-custom-property-to-a-span), but replace the following lines of code in `ActivityEnrichingProcessor.cs`:
Use the add [custom property example](#add-a-custom-property-to-a-span), but rep
activity.SetTag("http.client_ip", "<IP Address>"); ```
-#### [.NET](#tab/net-10)
+#### [.NET](#tab/net)
Use the add [custom property example](#add-a-custom-property-to-a-span), but replace the following lines of code in `ActivityEnrichingProcessor.cs`:
Use the add [custom property example](#add-a-custom-property-to-a-span), but rep
activity.SetTag("http.client_ip", "<IP Address>"); ```
-##### [Java](#tab/java-10)
+##### [Java](#tab/java)
Java automatically populates this field.
-##### [Node.js](#tab/nodejs-10)
+##### [Node.js](#tab/nodejs)
Use the add [custom property example](#add-a-custom-property-to-a-span), but replace the following lines of code:
Use the add [custom property example](#add-a-custom-property-to-a-span), but rep
} ```
-##### [Python](#tab/python-10)
+##### [Python](#tab/python)
Use the add [custom property example](#add-a-custom-property-to-a-span), but replace the following lines of code in `SpanEnrichingProcessor.py`:
You can populate the _user_Id_ or _user_AuthenticatedId_ field for requests by u
> [!IMPORTANT] > Consult applicable privacy laws before you set the Authenticated User ID.
-##### [ASP.NET Core](#tab/aspnetcore-11)
+##### [ASP.NET Core](#tab/aspnetcore)
Use the add [custom property example](#add-a-custom-property-to-a-span).
Use the add [custom property example](#add-a-custom-property-to-a-span).
activity?.SetTag("enduser.id", "<User Id>"); ```
-##### [.NET](#tab/net-11)
+##### [.NET](#tab/net)
Use the add [custom property example](#add-a-custom-property-to-a-span).
Use the add [custom property example](#add-a-custom-property-to-a-span).
activity?.SetTag("enduser.id", "<User Id>"); ```
-##### [Java](#tab/java-11)
+##### [Java](#tab/java)
Populate the `user ID` field in the `requests`, `dependencies`, or `exceptions` table.
Populate the `user ID` field in the `requests`, `dependencies`, or `exceptions`
Span.current().setAttribute("enduser.id", "myuser"); ```
-#### [Node.js](#tab/nodejs-11)
+#### [Node.js](#tab/nodejs)
Use the add [custom property example](#add-a-custom-property-to-a-span), but replace the following lines of code:
Use the add [custom property example](#add-a-custom-property-to-a-span), but rep
} ```
-##### [Python](#tab/python-11)
+##### [Python](#tab/python)
Use the add [custom property example](#add-a-custom-property-to-a-span), but replace the following lines of code:
span._attributes["enduser.id"] = "<User ID>"
### Add log attributes
-#### [ASP.NET Core](#tab/aspnetcore-12)
+#### [ASP.NET Core](#tab/aspnetcore)
OpenTelemetry uses .NET's `ILogger`. Attaching custom dimensions to logs can be accomplished using a [message template](/dotnet/core/extensions/logging?tabs=command-line#log-message-template).
-#### [.NET](#tab/net-12)
+#### [.NET](#tab/net)
OpenTelemetry uses .NET's `ILogger`. Attaching custom dimensions to logs can be accomplished using a [message template](/dotnet/core/extensions/logging?tabs=command-line#log-message-template).
-#### [Java](#tab/java-12)
+#### [Java](#tab/java)
Logback, Log4j, and java.util.logging are [autoinstrumented](#logs). Attaching custom dimensions to your logs can be accomplished in these ways:
Logback, Log4j, and java.util.logging are [autoinstrumented](#logs). Attaching c
* [Log4j 2.0 Thread Context](https://logging.apache.org/log4j/2.x/manual/thread-context.html) * [Log4j 1.2 MDC](https://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/MDC.html)
-#### [Node.js](#tab/nodejs-12)
+#### [Node.js](#tab/nodejs)
```typescript // Import the useAzureMonitor function and the logs module from the @azure/monitor-opentelemetry and @opentelemetry/api-logs packages, respectively.
Logback, Log4j, and java.util.logging are [autoinstrumented](#logs). Attaching c
logger.emit(logRecord); ```
-#### [Python](#tab/python-12)
+#### [Python](#tab/python)
The Python [logging](https://docs.python.org/3/howto/logging.html) library is [autoinstrumented](.\opentelemetry-add-modify.md?tabs=python#included-instrumentation-libraries). You can attach custom dimensions to your logs by passing a dictionary into the `extra` argument of your logs.
logger.warning("WARNING: Warning log with properties", extra={"key1": "value1"})
You might use the following ways to filter out telemetry before it leaves your application.
-### [ASP.NET Core](#tab/aspnetcore-13)
+### [ASP.NET Core](#tab/aspnetcore)
1. Many instrumentation libraries provide a filter option. For guidance, see the readme files of individual instrumentation libraries: - [ASP.NET Core](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc9.14/src/OpenTelemetry.Instrumentation.AspNetCore/README.md#filter)
You might use the following ways to filter out telemetry before it leaves your a
1. If a particular source isn't explicitly added by using `AddSource("ActivitySourceName")`, then none of the activities created by using that source are exported.
-### [.NET](#tab/net-13)
+### [.NET](#tab/net)
1. Many instrumentation libraries provide a filter option. For guidance, see the readme files of individual instrumentation libraries: - [ASP.NET](https://github.com/open-telemetry/opentelemetry-dotnet-contrib/blob/Instrumentation.AspNet-1.0.0-rc9.8/src/OpenTelemetry.Instrumentation.AspNet/README.md#filter)
You might use the following ways to filter out telemetry before it leaves your a
1. If a particular source isn't explicitly added by using `AddSource("ActivitySourceName")`, then none of the activities created by using that source are exported.
-### [Java](#tab/java-13)
+### [Java](#tab/java)
See [sampling overrides](java-standalone-config.md#sampling-overrides-preview) and [telemetry processors](java-standalone-telemetry-processors.md).
-### [Node.js](#tab/nodejs-13)
+### [Node.js](#tab/nodejs)
1. Exclude the URL option provided by many HTTP instrumentation libraries.
Use the add [custom property example](#add-a-custom-property-to-a-span), but rep
} ```
-### [Python](#tab/python-13)
+### [Python](#tab/python)
1. Exclude the URL with the `OTEL_PYTHON_EXCLUDED_URLS` environment variable: ```
Use the add [custom property example](#add-a-custom-property-to-a-span), but rep
You might want to get the trace ID or span ID. If you have logs sent to a destination other than Application Insights, consider adding the trace ID or span ID. Doing so enables better correlation when debugging and diagnosing issues.
-### [ASP.NET Core](#tab/aspnetcore-14)
+### [ASP.NET Core](#tab/aspnetcore)
> [!NOTE] > The `Activity` and `ActivitySource` classes from the `System.Diagnostics` namespace represent the OpenTelemetry concepts of `Span` and `Tracer`, respectively. That's because parts of the OpenTelemetry tracing API are incorporated directly into the .NET runtime. To learn more, see [Introduction to OpenTelemetry .NET Tracing API](https://github.com/open-telemetry/opentelemetry-dotnet/blob/main/src/OpenTelemetry.Api/README.md#introduction-to-opentelemetry-net-tracing-api).
string traceId = activity?.TraceId.ToHexString();
string spanId = activity?.SpanId.ToHexString(); ```
-### [.NET](#tab/net-14)
+### [.NET](#tab/net)
> [!NOTE] > The `Activity` and `ActivitySource` classes from the `System.Diagnostics` namespace represent the OpenTelemetry concepts of `Span` and `Tracer`, respectively. That's because parts of the OpenTelemetry tracing API are incorporated directly into the .NET runtime. To learn more, see [Introduction to OpenTelemetry .NET Tracing API](https://github.com/open-telemetry/opentelemetry-dotnet/blob/main/src/OpenTelemetry.Api/README.md#introduction-to-opentelemetry-net-tracing-api).
string traceId = activity?.TraceId.ToHexString();
string spanId = activity?.SpanId.ToHexString(); ```
-### [Java](#tab/java-14)
+### [Java](#tab/java)
You can use `opentelemetry-api` to get the trace ID or span ID.
You can use `opentelemetry-api` to get the trace ID or span ID.
String spanId = span.getSpanContext().getSpanId(); ```
-### [Node.js](#tab/nodejs-14)
+### [Node.js](#tab/nodejs)
Get the request trace ID and the span ID in your code:
Get the request trace ID and the span ID in your code:
let traceId = trace.getActiveSpan().spanContext().traceId; ```
-### [Python](#tab/python-14)
+### [Python](#tab/python)
Get the request trace ID and the span ID in your code:
span_id = trace.get_current_span().get_span_context().span_id
## Next steps
-### [ASP.NET Core](#tab/aspnetcore-15)
+### [ASP.NET Core](#tab/aspnetcore)
- To further configure the OpenTelemetry distro, see [Azure Monitor OpenTelemetry configuration](opentelemetry-configuration.md) - To review the source code, see the [Azure Monitor AspNetCore GitHub repository](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/monitor/Azure.Monitor.OpenTelemetry.AspNetCore).
span_id = trace.get_current_span().get_span_context().span_id
- To learn more about OpenTelemetry and its community, see the [OpenTelemetry .NET GitHub repository](https://github.com/open-telemetry/opentelemetry-dotnet). - To enable usage experiences, [enable web or browser user monitoring](javascript.md).
-#### [.NET](#tab/net-15)
+#### [.NET](#tab/net)
- To further configure the OpenTelemetry distro, see [Azure Monitor OpenTelemetry configuration](opentelemetry-configuration.md) - To review the source code, see the [Azure Monitor Exporter GitHub repository](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/monitor/Azure.Monitor.OpenTelemetry.Exporter).
span_id = trace.get_current_span().get_span_context().span_id
- To learn more about OpenTelemetry and its community, see the [OpenTelemetry .NET GitHub repository](https://github.com/open-telemetry/opentelemetry-dotnet). - To enable usage experiences, [enable web or browser user monitoring](javascript.md).
-### [Java](#tab/java-15)
+### [Java](#tab/java)
- Review [Java autoinstrumentation configuration options](java-standalone-config.md). - To review the source code, see the [Azure Monitor Java autoinstrumentation GitHub repository](https://github.com/Microsoft/ApplicationInsights-Java).
- To enable usage experiences, see [Enable web or browser user monitoring](javascript.md). - See the [release notes](https://github.com/microsoft/ApplicationInsights-Java/releases) on GitHub.
-### [Node.js](#tab/nodejs-15)
+### [Node.js](#tab/nodejs)
- To review the source code, see the [Azure Monitor OpenTelemetry GitHub repository](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/monitor/monitor-opentelemetry). - To install the npm package and check for updates, see the [`@azure/monitor-opentelemetry` npm Package](https://www.npmjs.com/package/@azure/monitor-opentelemetry) page.
- To learn more about OpenTelemetry and its community, see the [OpenTelemetry JavaScript GitHub repository](https://github.com/open-telemetry/opentelemetry-js). - To enable usage experiences, [enable web or browser user monitoring](javascript.md).
-### [Python](#tab/python-15)
+### [Python](#tab/python)
- To review the source code and extra documentation, see the [Azure Monitor Distro GitHub repository](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/monitor/azure-monitor-opentelemetry/README.md). - To see extra samples and use cases, see [Azure Monitor Distro samples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/monitor/azure-monitor-opentelemetry/samples).
azure-monitor Change Analysis Visualizations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/change/change-analysis-visualizations.md
The UI supports selecting multiple subscriptions to view resource changes. Use t
### View the Activity Log change history
-Use the [View change history](../essentials/activity-log.md#view-change-history) feature to call the Azure Monitor Change Analysis service backend to view changes associated with an operation. Changes returned include:
+Use the [View change history](../essentials/activity-log-insights.md#view-change-history) feature to call the Azure Monitor Change Analysis service backend to view changes associated with an operation. Changes returned include:
- Resource level changes from [Azure Resource Graph](../../governance/resource-graph/overview.md). - Resource properties from [Azure Resource Manager](../../azure-resource-manager/management/overview.md).
azure-monitor Container Insights Deployment Hpa Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-deployment-hpa-metrics.md
Title: Deployment and HPA metrics with Container insights | Microsoft Docs description: This article describes what deployment and HPA metrics are collected with Container insights. Previously updated : 08/29/2022 Last updated : 2/28/2024
azure-monitor Container Insights Livedata Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-livedata-metrics.md
For help with setting up or troubleshooting the Live Data feature, review the [s
The Live Data feature directly accesses the Kubernetes API. For more information about the authentication model, see [The Kubernetes API](https://kubernetes.io/docs/concepts/overview/kubernetes-api/).
-This feature performs a polling operation against the metrics endpoints including `/api/v1/nodes`, `/apis/metrics.k8s.io/v1beta1/nodes`, and `/api/v1/pods`. The interval is every five seconds by default. This data is cached in your browser and charted in four performance charts included in Container insights. Each subsequent poll is charted into a rolling five-minute visualization window. To see the charts, select **Go Live (preview)** and then select the **Cluster** tab.
+This feature performs a polling operation against the metrics endpoints including `/api/v1/nodes`, `/apis/metrics.k8s.io/v1beta1/nodes`, and `/api/v1/pods`. The interval is every five seconds by default. This data is cached in your browser and charted in four performance charts included in Container insights. Each subsequent poll is charted into a rolling five-minute visualization window. To see the charts, slide the **Live** option to **On**.
:::image type="content" source="./media/container-insights-livedata-metrics/cluster-view-go-live-example-01.png" alt-text="Screenshot that shows the Go Live option in the Cluster view." lightbox="./media/container-insights-livedata-metrics/cluster-view-go-live-example-01.png":::
azure-monitor Container Insights Livedata Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-livedata-overview.md
Title: View live data with Container insights description: This article describes the real-time view of Kubernetes logs, events, and pod metrics without using kubectl in Container insights. Previously updated : 01/12/2024 Last updated : 2/28/2024
azure-monitor Container Insights Livedata Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-livedata-setup.md
Title: Configure Live Data in Container insights description: This article describes how to set up the real-time view of container logs (stdout/stderr) and events without using kubectl with Container insights. Previously updated : 05/24/2022 Last updated : 2/28/2024
azure-monitor Container Insights Log Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-log-alerts.md
Title: Log search alerts from Container insights | Microsoft Docs description: This article describes how to create custom log search alerts for memory and CPU utilization from Container insights. Previously updated : 08/29/2022 Last updated : 2/28/2024
azure-monitor Container Insights Log Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-log-query.md
Title: Query logs from Container insights description: Container insights collects metrics and log data, and this article describes the records and includes sample queries. Previously updated : 06/06/2023 Last updated : 2/28/2024
azure-monitor Container Insights Persistent Volumes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-persistent-volumes.md
Title: Configure PV monitoring with Container insights | Microsoft Docs description: This article describes how you can configure monitoring Kubernetes clusters with persistent volumes with Container insights. Previously updated : 05/24/2022 Last updated : 2/28/2024
azure-monitor Container Insights Region Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-region-mapping.md
Title: Container insights region mappings description: Describes the region mappings supported between Container insights, Log Analytics Workspace, and custom metrics. Previously updated : 05/27/2022 Last updated : 2/28/2024
azure-monitor Container Insights Syslog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-syslog.md
Title: Syslog collection with Container Insights description: This article describes how to collect Syslog from AKS nodes using Container insights. Previously updated : 01/31/2023 Last updated : 2/28/2024
azure-monitor Container Insights Transition Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-transition-solution.md
Title: Transition from the Container Monitoring Solution to using Container Insights Previously updated : 8/29/2022 Last updated : 2/28/2024 description: "Learn how to migrate from using the legacy OMS solution to monitoring your containers using Container Insights"
# Transition from the Container Monitoring Solution to using Container Insights
-With both the underlying platform and agent deprecations, on March 31, 2025 the [Container Monitoring Solution](./containers.md) will be retired. If you use the Container Monitoring Solution to ingest data to your Log Analytics workspace, make sure to transition to using [Container Insights](./container-insights-overview.md) prior to that date.
+With both the underlying platform and agent deprecations, on August 31, 2024 the [Container Monitoring Solution](./containers.md) will be retired. If you use the Container Monitoring Solution to ingest data to your Log Analytics workspace, make sure to transition to using [Container Insights](./container-insights-overview.md) prior to that date.
## Steps to complete the transition
azure-monitor Prometheus Metrics Multiple Workspaces https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/prometheus-metrics-multiple-workspaces.md
Title: Send Prometheus metrics to multiple Azure Monitor workspaces description: Describes data collection rules required to send Prometheus metrics from a cluster in Azure Monitor to multiple Azure Monitor workspaces. Previously updated : 09/28/2022 Last updated : 2/28/2024
azure-monitor Prometheus Metrics Scrape Configuration Minimal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/prometheus-metrics-scrape-configuration-minimal.md
Title: Minimal Prometheus ingestion profile in Azure Monitor description: Describes minimal ingestion profile in Azure Monitor managed service for Prometheus and how you can configure it to collect more data. Previously updated : 1/28/2023 Last updated : 2/28/2024
azure-monitor Prometheus Metrics Scrape Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/prometheus-metrics-scrape-configuration.md
Title: Customize scraping of Prometheus metrics in Azure Monitor description: Customize metrics scraping for a Kubernetes cluster with the metrics add-on in Azure Monitor. Previously updated : 09/28/2022 Last updated : 2/28/2024
azure-monitor Prometheus Metrics Scrape Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/prometheus-metrics-scrape-scale.md
Title: Scrape Prometheus metrics at scale in Azure Monitor description: Guidance on performance that can be expected when collection metrics at high scale for Azure Monitor managed service for Prometheus. Previously updated : 09/28/2022 Last updated : 2/28/2024
azure-monitor Prometheus Metrics Scrape Validate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/prometheus-metrics-scrape-validate.md
Title: Create, validate and troubleshoot custom configuration file for Prometheus metrics in Azure Monitor description: Describes how to create custom configuration file Prometheus metrics in Azure Monitor and use validation tool before applying to Kubernetes cluster. Previously updated : 09/28/2022 Last updated : 2/28/2024
azure-monitor Prometheus Metrics Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/prometheus-metrics-troubleshoot.md
Title: Troubleshoot collection of Prometheus metrics in Azure Monitor description: Steps that you can take if you aren't collecting Prometheus metrics as expected. Previously updated : 09/28/2022 Last updated : 02/28/2024
azure-monitor Prometheus Remote Write Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/prometheus-remote-write-active-directory.md
description: Learn how to set up remote write in Azure Monitor managed service f
Previously updated : 11/01/2022 Last updated : 2/28/2024 # Send Prometheus data to Azure Monitor by using Microsoft Entra authentication
azure-monitor Prometheus Remote Write Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/prometheus-remote-write-managed-identity.md
Title: Set up Prometheus remote write by using managed identity authentication
description: Learn how to set up remote write in Azure Monitor managed service for Prometheus. Use managed identity authentication to send data from a self-managed Prometheus server running in your Azure Kubernetes Server (AKS) cluster or Azure Arc-enabled Kubernetes cluster. Previously updated : 11/01/2022 Last updated : 2/28/2024 # Send Prometheus data to Azure Monitor by using managed identity authentication
azure-monitor Prometheus Remote Write https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/prometheus-remote-write.md
Title: Remote-write in Azure Monitor Managed Service for Prometheus
description: Describes how to configure remote-write to send data from self-managed Prometheus running in your AKS cluster or Azure Arc-enabled Kubernetes cluster Previously updated : 11/01/2022 Last updated : 2/28/2024 # Azure Monitor managed service for Prometheus remote write
azure-monitor Cost Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/cost-usage.md
Several other features don't have a direct cost, but you instead pay for the ing
| Alerts | Alerts are charged based on the type and number of [signals](alerts/alerts-overview.md) used by the alert rule, its frequency, and the type of [notification](alerts/action-groups.md) used in response. For [log search alerts](alerts/alerts-types.md#log-alerts) configured for [at scale monitoring](alerts/alerts-types.md#monitor-the-same-condition-on-multiple-resources-using-splitting-by-dimensions-1), the cost will also depend on the number of time series created by the dimensions resulting from your query. | | Web tests | There is a cost for [standard web tests](app/availability-standard-tests.md) and [multi-step web tests](app/availability-multistep.md) in Application Insights. Multi-step web tests have been deprecated.
+A list of Azure Monitor billing meter names is available [here](cost-meters.md).
### Data transfer charges Sending data to Azure Monitor can incur data bandwidth charges. As described in the [Azure Bandwidth pricing page](https://azure.microsoft.com/pricing/details/bandwidth/), data transfer between Azure services located in two regions is charged as outbound data transfer at the normal rate. Inbound data transfer is free. However, data transfer charges for Azure Monitor are typically very small compared to the costs for data ingestion and retention. You should focus more on your ingested data volume to control your costs.
To get started analyzing your Azure Monitor charges, open [Cost Management + Bil
:::image type="content" source="media/usage-estimated-costs/010.png" lightbox="media/usage-estimated-costs/010.png" alt-text="Screenshot that shows Azure Cost Management with cost information.":::
-To limit the view to Azure Monitor charges, [create a filter](../cost-management-billing/costs/group-filter.md) for the following **Service names**. See [Azure Monitor billing meter names](cost-meters.md) for the different charges that are included in each service.
+To limit the view to Azure Monitor charges, [create a filter](../cost-management-billing/costs/group-filter.md) for the following **Service names**. See [Azure Monitor billing meter names](cost-meters.md) for the different billing meters that are included in each service.
- Azure Monitor - Log Analytics
azure-monitor Activity Log Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/activity-log-insights.md
Title: Azure activity log insights
+ Title: Azure activity log and activity log insights
description: Learn how to monitor changes to resources and resource groups in an Azure subscription with Azure Monitor activity log insights.
Last updated 12/11/2023
-# Customer intent: As an IT manager, I want to understand how I can use activity log insights to monitor changes to resources and resource groups in an Azure subscription.
+# Customer intent: As an IT manager, I want to understand how I can use the activity log and activity log insights to monitor changes to resources and resource groups in an Azure subscription.
-# Monitor changes to resources and resource groups with Azure Monitor activity log insights
+# Use the Azure Monitor activity log and activity log insights
-Activity log insights provide you with a set of dashboards that monitor the changes to resources and resource groups in a subscription. The dashboards also present data about which users or services performed activities in the subscription and the activities' status. This article explains how to onboard and view activity log insights in the Azure portal.
+The Azure Monitor activity log is a platform log that provides insight into subscription-level events. The activity log includes information like when a resource is modified or a virtual machine is started. This article provides information on how to view the activity log and send it to different destinations.
+
+## View the activity log
+
+You can access the activity log from most menus in the Azure portal. The menu that you open it from determines its initial filter. If you open it from the **Monitor** menu, the only filter is on the subscription. If you open it from a resource's menu, the filter is set to that resource. You can always change the filter to view all other entries. Select **Add Filter** to add more properties to the filter.
+<!-- convertborder later -->
+
+For a description of activity log categories, see [Azure activity log event schema](activity-log-schema.md#categories).
+
+## Download the activity log
+
+Select **Download as CSV** to download the events in the current view.
+<!-- convertborder later -->
+
+### View change history
-Before you use activity log insights, you must [enable sending logs to your Log Analytics workspace](./diagnostic-settings.md).
+For some events, you can view the change history, which shows what changes happened during that event time. Select an event from the activity log you want to look at more deeply. Select the **Change history** tab to view any changes on the resource up to 30 minutes before and after the time of the operation.
-## How do activity log insights work?
-Azure Monitor stores all activity logs you send to a [Log Analytics workspace](../logs/log-analytics-workspace-overview.md) in a table called `AzureActivity`.
+If any changes are associated with the event, you'll see a list of changes that you can select. Selecting a change opens the **Change history** page. This page displays the changes to the resource. In the following example, you can see that the VM changed sizes. The page displays the VM size before the change and after the change. To learn more about change history, see [Get resource changes](../../governance/resource-graph/how-to/get-resource-changes.md).
++
+## Retention period
+
+Activity log events are retained in Azure for *90 days* and then deleted. There's no charge for entries during this time regardless of volume. For more functionality, such as longer retention, create a diagnostic setting and route the entries to another location based on your needs. See the criteria in the preceding section.
+
+## Activity log insights
+
+Activity log insights provide you with a set of dashboards that monitor the changes to resources and resource groups in a subscription. The dashboards also present data about which users or services performed activities in the subscription and the activities' status. This article explains how to onboard and view activity log insights in the Azure portal.
Activity log insights are a curated [Log Analytics workbook](../visualize/workbooks-overview.md) with dashboards that visualize the data in the `AzureActivity` table. For example, data might include which administrators deleted, updated, or created resources and whether the activities failed or succeeded.
+Azure Monitor stores all activity logs you send to a [Log Analytics workspace](../logs/log-analytics-workspace-overview.md) in a table called `AzureActivity`. Before you use activity log insights, you must [enable sending logs to your Log Analytics workspace](./diagnostic-settings.md).
+ :::image type="content" source="media/activity-log/activity-logs-insights-main-screen.png" lightbox= "media/activity-log/activity-logs-insights-main-screen.png" alt-text="Screenshot that shows activity log insights dashboards."::: ## View resource group or subscription-level activity log insights
azure-monitor Activity Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/activity-log.md
Title: Azure activity log
-description: View the Azure Monitor activity log and send it to Azure Monitor Logs, Azure Event Hubs, and Azure Storage.
+ Title: Stream Azure activity log data
+description: Send Azure Monitor activity log data to Azure Monitor Logs, Azure Event Hubs, and Azure Storage.
-# Azure Monitor activity log
+# Stream Azure Monitor activity log data
-The Azure Monitor activity log is a [platform log](./platform-logs-overview.md) in Azure that provides insight into subscription-level events. The activity log includes information like when a resource is modified or a virtual machine is started. You can view the activity log in the Azure portal or retrieve entries with PowerShell and the Azure CLI. This article provides information on how to view the activity log and send it to different destinations.
+The Azure Monitor activity log is a platform log that provides insight into subscription-level events. The activity log includes information like when a resource is modified or a virtual machine is started. You can view the activity log in the Azure portal or retrieve entries with PowerShell and the Azure CLI. This article provides information on how to view the activity log and send it to different destinations.
For more functionality, create a diagnostic setting to send the activity log to one or more of these locations for the following reasons:
For details on how to create a diagnostic setting, see [Create diagnostic settin
> * Entries in the Activity Log represent control plane changes, like a virtual machine restart; any unrelated entries should be written into [Azure Resource Logs](resource-logs.md) > * Entries in the Activity Log are typically the result of changes (create, update, or delete operations) or of an action having been initiated. Operations that only read the details of a resource are not typically captured.
-## Retention period
-
-Activity log events are retained in Azure for *90 days* and then deleted. There's no charge for entries during this time regardless of volume. For more functionality, such as longer retention, create a diagnostic setting and route the entries to another location based on your needs. See the criteria in the preceding section.
-
-## View the activity log
-
-You can access the activity log from most menus in the Azure portal. The menu that you open it from determines its initial filter. If you open it from the **Monitor** menu, the only filter is on the subscription. If you open it from a resource's menu, the filter is set to that resource. You can always change the filter to view all other entries. Select **Add Filter** to add more properties to the filter.
-<!-- convertborder later -->
-
-For a description of activity log categories, see [Azure activity log event schema](activity-log-schema.md#categories).
-
-## Download the activity log
-
-Select **Download as CSV** to download the events in the current view.
-<!-- convertborder later -->
-
-### View change history
-
-For some events, you can view the change history, which shows what changes happened during that event time. Select an event from the activity log you want to look at more deeply. Select the **Change history** tab to view any changes on the resource up to 30 minutes before and after the time of the operation.
--
-If any changes are associated with the event, you'll see a list of changes that you can select. Selecting a change opens the **Change history** page. This page displays the changes to the resource. In the following example, you can see that the VM changed sizes. The page displays the VM size before the change and after the change. To learn more about change history, see [Get resource changes](../../governance/resource-graph/how-to/get-resource-changes.md).
--
-### Other methods to retrieve activity log events
-
-You can also access activity log events by using the following methods:
--- Use the [Get-AzLog](/powershell/module/az.monitor/get-azlog) cmdlet to retrieve the activity log from PowerShell. See [Azure Monitor PowerShell samples](../powershell-samples.md#retrieve-activity-log).-- Use [az monitor activity-log](/cli/azure/monitor/activity-log) to retrieve the activity log from the CLI. See [Azure Monitor CLI samples](../cli-samples.md#view-activity-log).-- Use the [Azure Monitor REST API](/rest/api/monitor/) to retrieve the activity log from a REST client.- ## Send to Log Analytics workspace Send the activity log to a Log Analytics workspace to enable the [Azure Monitor Logs](../logs/data-platform-logs.md) feature, where you:
Each event is stored in the PT1H.json file with the following format. This forma
```json { "time": "2020-06-12T13:07:46.766Z", "resourceId": "/SUBSCRIPTIONS/00000000-0000-0000-0000-000000000000/RESOURCEGROUPS/MY-RESOURCE-GROUP/PROVIDERS/MICROSOFT.COMPUTE/VIRTUALMACHINES/MV-VM-01", "correlationId": "0f0cb6b4-804b-4129-b893-70aeeb63997e", "operationName": "Microsoft.Resourcehealth/healthevent/Updated/action", "level": "Information", "resultType": "Updated", "category": "ResourceHealth", "properties": {"eventCategory":"ResourceHealth","eventProperties":{"title":"This virtual machine is starting as requested by an authorized user or process. It will be online shortly.","details":"VirtualMachineStartInitiatedByControlPlane","currentHealthStatus":"Unknown","previousHealthStatus":"Unknown","type":"Downtime","cause":"UserInitiated"}}} ```
+### Other methods to retrieve activity log events
+
+You can also access activity log events by using the following methods:
+- Use the [Get-AzLog](/powershell/module/az.monitor/get-azlog) cmdlet to retrieve the activity log from PowerShell. See [Azure Monitor PowerShell samples](../powershell-samples.md#retrieve-activity-log).
+- Use [az monitor activity-log](/cli/azure/monitor/activity-log) to retrieve the activity log from the CLI. See [Azure Monitor CLI samples](../cli-samples.md#view-activity-log).
+- Use the [Azure Monitor REST API](/rest/api/monitor/) to retrieve the activity log from a REST client, as shown in the sketch after this list.
+-
+-
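As an illustration of the REST API option above, the following minimal C# sketch lists activity log events for a subscription by calling the Activity Logs - List endpoint with a token from `DefaultAzureCredential`. The subscription ID and time window are placeholders, and the sketch assumes the `Azure.Identity` package and C# top-level statements.

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using Azure.Core;
using Azure.Identity;

// Placeholder values; supply your own subscription ID and time range.
string subscriptionId = "00000000-0000-0000-0000-000000000000";
string filter = "eventTimestamp ge '2024-02-28T00:00:00Z' and eventTimestamp le '2024-02-29T00:00:00Z'";

// Acquire an ARM token with whatever credential DefaultAzureCredential resolves to.
var credential = new DefaultAzureCredential();
AccessToken token = await credential.GetTokenAsync(
    new TokenRequestContext(new[] { "https://management.azure.com/.default" }));

using var client = new HttpClient();
client.DefaultRequestHeaders.Authorization =
    new AuthenticationHeaderValue("Bearer", token.Token);

// Activity Logs - List: requires an $filter with an eventTimestamp range.
string url = $"https://management.azure.com/subscriptions/{subscriptionId}" +
             "/providers/Microsoft.Insights/eventtypes/management/values" +
             $"?api-version=2015-04-01&$filter={Uri.EscapeDataString(filter)}";

string json = await client.GetStringAsync(url);
Console.WriteLine(json);
```

Responses are paged; follow the `nextLink` property in the response body to retrieve additional events.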
## Legacy collection methods > [!NOTE]
azure-monitor Platform Logs Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/platform-logs-overview.md
- Title: Overview of Azure platform logs | Microsoft Docs
-description: Overview of logs in Azure Monitor, which provide rich, frequent data about the operation of an Azure resource.
---- Previously updated : 07/31/2023--
-# Overview of Azure platform logs
-
-Platform logs provide detailed diagnostic and auditing information for Azure resources and the Azure platform they depend on. Platform logs are automatically generated. This article provides an overview of platform logs including the information they provide, and how to configure them for collection and analysis.
-
-## Types of platform logs
-
-The following table lists the platform logs that are available at different layers within Azure.
-
-| Log | Layer | Description |
-|:|:|:|
-| [Resource logs](./resource-logs.md) | Azure Resources | Resource logs provide an insight into operations that were performed within an Azure resource. This is known as the *data plane*. Examples include getting a secret from a key vault, or making a request to a database. The contents of resource logs varies according to the Azure service and resource type.<br><br>*Resource logs were previously referred to as diagnostic logs.* |
-| [Activity logs](../essentials/activity-log.md) | Azure Subscription |Activity logs provide an insight into the operations performed *on* each Azure resource in the subscription from the outside, known as the *management plane*. in addition to updates on Service Health events. Use the Activity log to determine *what*, *who*, and *when* for any write operation (PUT, POST, DELETE) executed on the resources in your subscription. There's a single activity log for each Azure subscription. |
-| [Microsoft Entra logs](../../active-directory/reports-monitoring/overview-reports.md) | Azure Tenant | Microsoft Entra logs contain the history of sign-in activity and an audit trail of changes made in Microsoft Entra ID for a particular tenant. |
-
-> [!NOTE]
-> The Azure activity log is primarily for activities that occur in Azure Resource Manager. The activity log doesn't track resources by using the classic/RDFE model. Some classic resource types have a proxy resource provider in Resource Manager, for example, Microsoft.ClassicCompute. If you interact with a classic resource type through Resource Manager by using these proxy resource providers, the operations appear in the activity log. If you interact with a classic resource type outside of the Resource Manager proxies, your actions are only recorded in the Operation log. The [Operation log](https://portal.azure.com/?Microsoft_Azure_Monitoring_Log=#view/Microsoft_Azure_Resources/OperationLogsBlade) can be browsed in a separate section of the portal.
--
-## View platform logs
-
-There are different options for viewing and analyzing the different Azure platform logs:
--- View the activity log using the Azure portal and access events from PowerShell and the Azure CLI. See [View the activity log](../essentials/activity-log.md#view-the-activity-log) for details.-- View Microsoft Entra security and activity reports in the Azure portal. See [What are Microsoft Entra reports?](../../active-directory/reports-monitoring/overview-reports.md) for details.-- Resource logs are automatically generated by supported Azure resources. You must create a [diagnostic setting](#diagnostic-settings) for the resource to store and view the log.-
-## Diagnostic settings
-
-Resource logs must have a diagnostic setting to be viewed. Create a [diagnostic setting](../essentials/diagnostic-settings.md) to send platform logs to one of the following destinations for analysis or other purposes.
-
-| Destination | Description |
-|:|:|
-| Log Analytics workspace | Analyze the logs of all your Azure resources together and take advantage of all the features available to [Azure Monitor Logs](../logs/data-platform-logs.md) including [log queries](../logs/log-query-overview.md) and [log search alerts](../alerts/alerts-log.md). Pin the results of a log query to an Azure dashboard or include it in a workbook as part of an interactive report. |
-| Event hub | Send platform log data outside of Azure, for example, to a third-party SIEM or custom telemetry platform via Event hubs |
-| Azure Storage | Archive the logs to Azure storage for audit or backup. |
-| [Azure Monitor partner integrations](../../partner-solutions/overview.md)| Partner integrations are specialized integrations between Azure Monitor and non-Microsoft monitoring platforms. Partner integrations are especially useful when you're already using one of the supported partners. |
--- For details on how to create a diagnostic setting for activity logs or resource logs, see [Create diagnostic settings to send platform logs and metrics to different destinations](../essentials/diagnostic-settings.md).-- For details on how to create a diagnostic setting for Microsoft Entra logs, see the following articles:
- - [Integrate Microsoft Entra logs with Azure Monitor logs](../../active-directory/reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md)
- - [Tutorial: Stream Microsoft Entra logs to an Azure event hub](../../active-directory/reports-monitoring/tutorial-azure-monitor-stream-logs-to-event-hub.md)
- - [Tutorial: Archive Microsoft Entra logs to an Azure Storage account](../../active-directory/reports-monitoring/quickstart-azure-monitor-route-logs-to-storage-account.md)
-
-## Pricing model
-
-Processing data to stream logs is charged for [certain services](resource-logs-categories.md#costs) when sent to destinations other than a Log Analytics workspace.
-
-While there's no direct charge when this data is sent from the resource to a Log Analytics workspace, there's a Log Analytics charge for ingesting the data into a workspace. The charge is based on the number of bytes in the exported JSON-formatted log data, measured in GB (10^9 bytes).
-
-Pricing is available on the [Azure Monitor pricing page](https://azure.microsoft.com/pricing/details/monitor/).
-
-## Next steps
-
-* [Read more details about activity logs](../essentials/activity-log.md)
-* [Read more details about resource logs](./resource-logs.md)
azure-monitor Resource Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/resource-logs.md
Title: Azure resource logs
-description: Learn how to stream Azure resource logs to a Log Analytics workspace in Azure Monitor.
+ Title: Stream Azure resource log data
+description: Learn how to stream Azure resource logs to a Log Analytics workspace, event hub, or Azure Storage in Azure Monitor.
Last updated 08/08/2023
-# Azure resource logs
+# Stream Azure resource log data
Azure resource logs are [platform logs](../essentials/platform-logs-overview.md) that provide insight into operations that were performed within an Azure resource. The content of resource logs varies by the Azure service and resource type. Resource logs aren't collected by default. This article describes the [diagnostic setting](diagnostic-settings.md) required for each Azure resource to send its resource logs to different destinations.
azure-monitor Cost Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/cost-logs.md
The default pricing for Log Analytics is a pay-as-you-go model that's based on i
- The number and type of monitored resources. - The types of data collected from each monitored resource.
+A list of Azure Monitor billing meter names is available [here](../cost-meters.md).
+ ## Data size calculation Data volume is measured as the size of the data sent to be stored, in units of GB (10^9 bytes). The data size of a single record is calculated from a string representation of the columns that are stored in the Log Analytics workspace for that record. It doesn't matter whether the data is sent from an agent or added during the ingestion process. This calculation includes any custom columns added by the [logs ingestion API](logs-ingestion-api-overview.md), [transformations](../essentials/data-collection-transformations.md), or [custom fields](custom-fields.md) that are added as data is collected and then stored in the workspace.
Subscriptions that contained a Log Analytics workspace or Application Insights r
Access to the legacy Free Trial pricing tier was limited on July 1, 2022. Pricing information for the Standalone and Per Node pricing tiers is available [here](https://aka.ms/OMSpricing).
+A list of Azure Monitor billing meter names, including these legacy tiers, is available [here](../cost-meters.md).
+ > [!IMPORTANT] > The legacy pricing tiers do not support access to some of the newest features in Log Analytics such as ingesting data as cost-effective Basic Logs.
azure-monitor Monitor Virtual Machine Analyze https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/monitor-virtual-machine-analyze.md
Access the single machine analysis experience from the **Monitoring** section of
| Option | Description | |:|:| | Overview page | Select the **Monitoring** tab to display alerts, [platform metrics](../essentials/data-platform-metrics.md), and other monitoring information for the virtual machine host. You can see the number of active alerts on the tab. In the **Monitoring** tab, you get a quick view of:<br><br>**Alerts:** the alerts fired in the last 24 hours, with some important statistics about those alerts. If you do not have any alerts set up for this VM, there is a link to help you quickly create new alerts for your VM.<br><br>**Key metrics:** the trend over different time periods for important metrics, such as CPU, network, and disk. Because these are host metrics though, counters from the guest operating system such as memory aren't included. Select a graph to work with this data in [metrics explorer](../essentials/analyze-metrics.md) where you can perform different aggregations, and add more counters for analysis. |
-| Activity log | See [activity log](../essentials/activity-log.md#view-the-activity-log) entries filtered for the current virtual machine. Use this log to view the recent activity of the machine, such as any configuration changes and when it was stopped and started.
+| Activity log | See [activity log](../essentials/activity-log-insights.md#view-the-activity-log) entries filtered for the current virtual machine. Use this log to view the recent activity of the machine, such as any configuration changes and when it was stopped and started.
| Insights | Displays VM insights views if the VM is enabled for [VM insights](../vm/vminsights-overview.md).<br><br>Select the **Performance** tab to view trends of critical performance counters over different periods of time. When you open VM insights from the virtual machine menu, you also have a table with detailed metrics for each disk. For details on how to use the Map view for a single machine, see [Chart performance with VM insights](vminsights-performance.md#view-performance-directly-from-an-azure-vm).<br><br>If *processes and dependencies* is enabled for the VM, select the **Map** tab to view the running processes on the machine, dependencies on other machines, and external processes. For details on how to use the Map view for a single machine, see [Use the Map feature of VM insights to understand application components](vminsights-maps.md#view-a-map-from-a-vm).<br><br>If the VM is not enabled for VM insights, it offers the option to enable VM insights. | | Alerts | View [alerts](../alerts/alerts-overview.md) for the current virtual machine. These alerts only use the machine as the target resource, so there might be other alerts associated with it. You might need to use the **Alerts** option in the Azure Monitor menu to view alerts for all resources. For details, see [Monitor virtual machines with Azure Monitor - Alerts](monitor-virtual-machine-alerts.md). | | Metrics | Open metrics explorer with the scope set to the machine. This option is the same as selecting one of the performance charts from the **Overview** page except that the metric isn't already added. |
Access the multiple machine analysis experience from the **Monitor** menu in the
| Option | Description | |:|:|
-| Activity log | See [activity log](../essentials/activity-log.md#view-the-activity-log) entries filtered for all resources. Create a filter for a **Resource Type** of virtual machines or Virtual Machine Scale Sets to view events for all your machines. |
+| Activity log | See [activity log](../essentials/activity-log-insights.md#view-the-activity-log) entries filtered for all resources. Create a filter for a **Resource Type** of virtual machines or Virtual Machine Scale Sets to view events for all your machines. |
| Alerts | View [alerts](../alerts/alerts-overview.md) for all resources. This includes alerts related to all virtual machines in the workspace. Create a filter for a **Resource Type** of virtual machines or Virtual Machine Scale Sets to view alerts for all your machines. | | Metrics | Open [metrics explorer](../essentials/analyze-metrics.md) with no scope selected. This feature is particularly useful when you want to compare trends across multiple machines. Select a subscription or a resource group to quickly add a group of machines to analyze together. | | Logs | Open [Log Analytics](../logs/log-analytics-overview.md) with the [scope](../logs/scope.md) set to the workspace. You can select from a variety of existing queries to drill into log and performance data for all machines. Or you can create a custom query to perform additional analysis. |
azure-monitor Monitor Virtual Machine Data Collection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/monitor-virtual-machine-data-collection.md
Platform metrics for Azure virtual machines include important host metrics such
### Activity log The [activity log](../essentials/activity-log.md) is collected automatically. It includes the recent activity of the machine, such as any configuration changes and when it was stopped and started. You can view the platform metrics and activity log collected for each virtual machine host in the Azure portal.
-You can [view the activity log](../essentials/activity-log.md#view-the-activity-log) for an individual machine or for all resources in a subscription. [Create a diagnostic setting](../essentials/diagnostic-settings.md) to send this data into the same Log Analytics workspace used by Azure Monitor Agent to analyze it with the other monitoring data collected for the virtual machine. There's no cost for ingestion or retention of activity log data.
+You can [view the activity log](../essentials/activity-log-insights.md#view-the-activity-log) for an individual machine or for all resources in a subscription. [Create a diagnostic setting](../essentials/diagnostic-settings.md) to send this data into the same Log Analytics workspace used by Azure Monitor Agent to analyze it with the other monitoring data collected for the virtual machine. There's no cost for ingestion or retention of activity log data.
### VM availability information in Azure Resource Graph With [Azure Resource Graph](../../governance/resource-graph/overview.md), you can use the same Kusto Query Language used in log queries to query your Azure resources at scale with complex filtering, grouping, and sorting by resource properties. You can use [VM health annotations](../../service-health/resource-health-vm-annotation.md) to Resource Graph for detailed failure attribution and downtime analysis.
azure-netapp-files Backup Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/backup-introduction.md
Azure NetApp Files backup is supported for the following regions:
* Japan East * Japan West * Korea Central
+* Korea South
* North Central US * North Europe * Norway East
Azure NetApp Files backup is supported for the following regions:
* UAE Central * UAE North * UK South
+* UK West
* West Europe * West US * West US 2
azure-portal Set Preferences https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/set-preferences.md
To read all notifications received during your current session, select the **Not
:::image type="content" source="media/set-preferences/read-notifications.png" alt-text="Screenshot showing the Notifications icon in the global header.":::
-To view notifications from previous sessions, look for events in the Activity log. For more information, see [View the Activity log](../azure-monitor/essentials/activity-log.md#view-the-activity-log).
+To view notifications from previous sessions, look for events in the Activity log. For more information, see [View the Activity log](../azure-monitor/essentials/activity-log-insights.md#view-the-activity-log).
## Next steps
azure-vmware Remove Arc Enabled Azure Vmware Solution Vsphere Resources From Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/remove-arc-enabled-azure-vmware-solution-vsphere-resources-from-azure.md
During onboarding, to create a connection between your VMware vCenter and Azure,
As a last step, run the following command:
-`az rest --method delete --url` [URL](https://management.azure.com/subscriptions/%3csubscrption-id%3e/resourcegroups/%3cresource-group-name%3e/providers/Microsoft.AVS/privateClouds/%3cprivate-cloud-name%3e/addons/arc?api-version=2022-05-01%22)
+`az rest --method delete --url` [URL](https://management.azure.com/subscriptions/%3Csubscrption-id%3E/resourcegroups/%3Cresource-group-name%3E/providers/Microsoft.AVS/privateClouds/%3Cprivate-cloud-name%3E/addons/arc?api-version=2022-05-01%22)
Once that step is done, Arc no longer works on the Azure VMware Solution private cloud. When you delete Arc resources from vCenter Server, it doesn't affect the Azure VMware Solution private cloud for the customer.
backup Azure Kubernetes Service Backup Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-kubernetes-service-backup-troubleshoot.md
Title: Troubleshoot Azure Kubernetes Service backup description: Symptoms, causes, and resolutions of the Azure Kubernetes Service backup and restore operations. Previously updated : 02/28/2024 Last updated : 02/29/2024 - ignite-2023
The extension pods aren't exempt, and require the Microsoft Entra pod identity t
kubectl get Azurepodidentityexceptions --all-namespaces ```
-3. To assign the *Storage Account Contributor* role to the extension identity, run the following command:
+3. To assign the *Storage Blob Data Contributor* role to the extension identity, run the following command:
```azurecli-interactive
- az role assignment create --assignee-object-id $(az k8s-extension show --name azure-aks-backup --cluster-name aksclustername --resource-group aksclusterresourcegroup --cluster-type managedClusters --query aksAssignedIdentity.principalId --output tsv) --role 'Storage Account Contributor' --scope /subscriptions/subscriptionid/resourceGroups/storageaccountresourcegroup/providers/Microsoft.Storage/storageAccounts/storageaccountname
+ az role assignment create --assignee-object-id $(az k8s-extension show --name azure-aks-backup --cluster-name aksclustername --resource-group aksclusterresourcegroup --cluster-type managedClusters --query aksAssignedIdentity.principalId --output tsv) --role 'Storage Blob Data Contributor' --scope /subscriptions/subscriptionid/resourceGroups/storageaccountresourcegroup/providers/Microsoft.Storage/storageAccounts/storageaccountname
``` ### Scenario 3
These error codes appear due to issues based on the Backup extension installed i
### UserErrorExtensionMSIMissingPermissionsOnBackupStorageLocation
-**Cause**: The Backup extension should have the *Storage Account Contributor* role on the Backup Storage Location (storage account). The Extension Identity gets this role assigned.
+**Cause**: The Backup extension should have the *Storage Blob Data Contributor* role on the Backup Storage Location (storage account). The Extension Identity gets this role assigned.
**Recommended action**: If this role is missing, use the Azure portal or CLI to reassign the missing permission on the storage account. ### UserErrorBackupStorageLocationNotReady
-**Cause**: During extension installation, a Backup Storage Location is to be provided as input that includes a storage account and blob container. The Backup extension should have *Storage Account Contributor* role on the Backup Storage Location (storage account). The Extension Identity gets this role assigned.
+**Cause**: During extension installation, a Backup Storage Location is to be provided as input that includes a storage account and blob container. The Backup extension should have *Storage Blob Data Contributor* role on the Backup Storage Location (storage account). The Extension Identity gets this role assigned.
**Recommended action**: The error appears if the Extension Identity doesn't have the right permissions to access the storage account. This error can appear when the AKS backup extension is installed for the first time and protection is being configured, because the granted permissions take time to propagate to the extension. As a workaround, wait an hour and retry the protection configuration. Otherwise, use the Azure portal or CLI to reassign the missing permission on the storage account.
backup Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/security-overview.md
Title: Overview of security features description: Learn about security capabilities in Azure Backup that help you protect your backup data and meet the security needs of your business. Previously updated : 03/31/2023 Last updated : 02/29/2024
Storage accounts used by Recovery Services vaults are isolated and can't be acce
Azure Backup provides three [built-in roles](../role-based-access-control/built-in-roles.md) to control backup management operations:
-* Backup Contributor - to create and manage backups, except deleting Recovery Services vault and giving access to others
-* Backup Operator - everything a contributor does except removing backup and managing backup policies
-* Backup Reader - permissions to view all backup management operations
+* **Backup Contributor**: Create and manage backups, except deleting the Recovery Services vault and giving access to others
+* **Backup Operator**: Everything a contributor can do, except removing backups and managing backup policies
+* **Backup Reader**: Permissions to view all backup management operations
Learn more about [Azure role-based access control to manage Azure Backup](./backup-rbac-rs-vault.md).
Azure Backup has several security controls built into the service to prevent, de
## Separation between guest and Azure storage
-With Azure Backup, which includes virtual machine backup and SQL and SAP HANA in VM backup, the backup data is stored in Azure storage and the guest has no direct access to backup storage or its contents. With the virtual machine backup, the backup snapshot creation and storage are done by Azure fabric where the guest has no involvement other than quiescing the workload for application consistent backups. With SQL and SAP HANA, the backup extension gets temporary access to write to specific blobs. In this way, even in a compromised environment, existing backups can't be tampered with or deleted by the guest.
+With Azure Backup, which includes virtual machine backup and SQL and SAP HANA in VM backup, the backup data is stored in Azure storage and the guest has no direct access to backup storage or its contents. With the virtual machine backup, the backup snapshot creation and storage are done by Azure fabric where the guest has no involvement other than quiescing the workload for application consistent backups. With SQL and SAP HANA, the backup extension gets temporary access to write to specific blobs. In this way, even in a compromised environment, existing backups can't be tampered with or deleted by the guest.
## Internet connectivity not required for Azure VM backup
Encryption protects your data and helps you to meet your organizational security
* Within Azure, data in transit between Azure storage and the vault is [protected by HTTPS](backup-support-matrix.md#network-traffic-to-azure). This data remains on the Azure backbone network.
-* Backup data is automatically encrypted using [platform-managed keys](backup-encryption.md), and you don't need to take any explicit action to enable it. You can also encrypt your backed up data using [customer managed keys](encryption-at-rest-with-cmk.md) stored in the Azure Key Vault. It applies to all workloads being backed up to your Recovery Services vault.
+* Backup data is automatically encrypted using [platform-managed keys](backup-encryption.md), and you don't need to take any explicit action to enable it. You can also encrypt your backed-up data using [customer managed keys](encryption-at-rest-with-cmk.md) stored in the Azure Key Vault. It applies to all workloads being backed up to your Recovery Services vault.
* Azure Backup supports backup and restore of Azure VMs that have their OS/data disks encrypted with [Azure Disk Encryption (ADE)](backup-azure-vms-encryption.md#encryption-support-using-ade) and [VMs with CMK encrypted disks](backup-azure-vms-encryption.md#encryption-using-customer-managed-keys). For more information, [learn more about encrypted Azure VMs and Azure Backup](./backup-azure-vms-encryption.md).
Azure Backup service uses the Microsoft Azure Recovery Services (MARS) agent to
* For data backed up using the Microsoft Azure Recovery Services (MARS) agent, a passphrase is used to ensure data is encrypted before upload to Azure Backup and decrypted only after download from Azure Backup. The passphrase details are only available to the user who created the passphrase and the agent that's configured with it. Nothing is transmitted or shared with the service. This ensures complete security of your data, as any data that's exposed inadvertently (such as a man-in-the-middle attack on the network) is unusable without the passphrase, and the passphrase isn't sent over the network.
+## Security posture and security levels
+
+Azure Backup provides security features at the vault level to safeguard backup data stored in it. These security measures encompass the settings associated with the Azure Backup solution for the vaults, and the protected data sources contained in the vaults.
+
+Security levels for Azure Backup vaults are categorized as follows:
+
+- **Excellent (Maximum)**: This level represents the highest security, which ensures comprehensive protection. You achieve it when all backup data is protected from accidental deletions and defended against ransomware attacks. To achieve this level of security, the following conditions must be met:
+
+ - [Immutability](backup-azure-immutable-vault-concept.md) or [soft-delete](backup-azure-security-feature-cloud.md) vault setting must be enabled and irreversible (locked/always-on).
+ - [Multi-user authorization (MUA)](multi-user-authorization-concept.md) must be enabled on the vault.
+
+- **Good (Adequate)**: This signifies a robust security level, which ensures dependable data protection. It shields existing backups from unintended removal and enhances the potential for data recovery. To attain this level of security, you must enable either immutability with a lock or soft-delete.
+
+- **Fair (Minimum/Average)**: This represents a basic level of security, appropriate for standard protection requirements. Essential backup operations benefit from an extra layer of protection. To attain minimal security, you must enable Multi-user Authorization (MUA) on the vault.
+
+- **Poor (Bad/None)**: This indicates a deficiency in security measures and is less suitable for data protection. At this level, neither advanced protective features nor solely reversible capabilities are in place. The None security level protects primarily against accidental deletions only.
+
+You can [view and manage the security levels across all data sources in their respective vaults through Azure Business Continuity Center](../business-continuity-center/security-levels-concept.md).
+ ## Compliance with standardized security requirements To help organizations comply with national/regional and industry-specific requirements governing the collection and use of individuals' data, Microsoft Azure & Azure Backup offer a comprehensive set of certifications and attestations. [See the list of compliance certifications](compliance-offerings.md)
certification How To Test Pnp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/certification/how-to-test-pnp.md
- Title: Test your IoT Plug and Play device with Azure CLI
-description: A guide on how to test IoT Plug and Play device with the Azure CLI in preparation for certification.
---- Previously updated : 01/28/2022---
-# How to test IoT Plug and Play devices
-
-The IoT Plug and Play device certification program includes tools to check that a device meets the IoT Plug and Play certification requirements. The tools also help organizations to drive awareness of the availability of their IoT Plug and Play devices. These certified devices are tailored for IoT solutions and help to reduce time to market.
-
-This article shows you how to:
--- Install the Azure IoT command-line tool extension for the Azure CLI-- Run the IoT Plug and Play tests to validate your device application while in-development phase -
-> [!Note]
-> A full walk through the certification process can be found in the [Azure Certified Device certification tutorial](tutorial-00-selecting-your-certification.md).
-
-## Prepare your device
-
-The application code that runs on your IoT Plug and Play must:
--- Connect to Azure IoT Hub using the [Device Provisioning Service (DPS)](../iot-dps/about-iot-dps.md).-- Follow the [IoT Plug an Play conventions](../iot/concepts-developer-guide-device.md) to implement of telemetry, properties, and commands.-
-The application is software that's installed separately from the operating system or is bundled with the operating system in a firmware image that's flashed to the device.
-
-Prior to certifying your device through the IoT Plug and Play certification process, validate locally that the device implementation matches the telemetry, properties, and commands defined in the [Digital Twins Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl) device model before submitting the model to the [Azure IoT Public Model Repository](../iot/concepts-model-repository.md).
-
-To meet the certification requirements, your device must:
-- Connect to Azure IoT Hub using the [DPS](../iot-dps/about-iot-dps.md).-- Implement telemetry, properties, or commands following the IoT Plug and Play convention.-- Describe the device interactions with a [DTDL v2](https://aka.ms/dtdl) model.-- Send the model ID during [DPS registration](../iot/concepts-developer-guide-device.md#dps-payload) in the DPS provisioning payload.-- Announce the model ID during the [MQTT connection](../iot/concepts-developer-guide-device.md#model-id-announcement).-
-## Test with the Azure IoT Extension CLI
-
-The [Azure IoT CLI extension](/cli/azure/iot/product?view=azure-cli-latest&preserve-view=true) lets you validate that the device implementation matches the model before you submit the device for certification through the Azure Certified Device portal.
-
-The following steps show you how to prepare for and run the certification tests using the CLI:
-
-### Install the Azure IoT extension for the Azure CLI
-Install the [Azure CLI](/cli/azure/install-azure-cli) and review the installation instructions to set up the [Azure CLI](/cli/azure/iot?view=azure-cli-latest&preserve-view=true) in your environment.
-
-To install the Azure IoT Extension, run the following command:
-
-```azurecli
-az extension add --name azure-iot
-```
-
-To learn more, see [Azure CLI for Azure IoT](/cli/azure/iot/product?view=azure-cli-latest&preserve-view=true).
-
-### Create a new product test
-
-The following command creates a test using DPS with a symmetric key attestation method:
--- Creates a new product to test, and generates a test configuration. The output displays the DPS information that the device must use for provisioning: the primary key, device ID, and ID Scope.-- Specifies the folder with the DTDL files describing your model.-
-```azurecli
-az iot product test create --badge-type Pnp --at SymmetricKey --device-type FinishedProduct --models {local folder name}
-```
-
-The JSON output from the command contains the `primaryKey`, `registrationId`, and `scopeID` to use when you connect your device.
-
-Expected output:
-
-```json
-"deviceType": "FinishedProduct",
-"id": "d45d53d9-656d-4be7-9bbf-140bc87e98dc",
-"provisioningConfiguration": {
- "symmetricKeyEnrollmentInformation": {
- "primaryKey":"Ci/Ghpqp0n4m8FD5PTicr6dEikIfM3AtVWzAhObU7yc=",
- "registrationId": "d45d53d9-656d-4be7-9bbf-140bc87e98dc",
- "scopeId": "0ne000FFA42"
- }
-}
-```
-
-### Connect your device
-
-Use the DPS information output by the previous command to connect your device to the test IoT Hub instance.
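-
-As an illustration only (not part of the original walkthrough), a minimal C# sketch of provisioning through DPS with a symmetric key and sending the model ID in the provisioning payload might look like the following. It assumes the `Microsoft.Azure.Devices.Provisioning.Client` SDK; the ID scope, registration ID, key, and model ID are placeholders that you replace with the values from the test output:
-
-```csharp
-using System;
-using Microsoft.Azure.Devices.Provisioning.Client;
-using Microsoft.Azure.Devices.Provisioning.Client.Transport;
-using Microsoft.Azure.Devices.Shared;
-
-// Placeholder values; use the scopeId, registrationId, and primaryKey from the test output.
-string idScope = "<scopeId>";
-string registrationId = "<registrationId>";
-string primaryKey = "<primaryKey>";
-string modelId = "dtmi:com:example:TemperatureController;1"; // your device model ID
-
-using var security = new SecurityProviderSymmetricKey(registrationId, primaryKey, null);
-using var transport = new ProvisioningTransportHandlerMqtt();
-
-ProvisioningDeviceClient provisioningClient = ProvisioningDeviceClient.Create(
-    "global.azure-devices-provisioning.net", idScope, security, transport);
-
-// Send the model ID in the DPS provisioning payload, as the certification requirements describe.
-var registrationData = new ProvisioningRegistrationAdditionalData
-{
-    JsonData = $"{{ \"modelId\": \"{modelId}\" }}"
-};
-
-DeviceRegistrationResult result = await provisioningClient.RegisterAsync(registrationData);
-Console.WriteLine($"Assigned hub: {result.AssignedHub}, device ID: {result.DeviceId}");
-```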
-
-### Manage and configure the product tests
-
-When the device is connected and ready to interact with the IoT hub, generate a product test configuration file. To create the file:
--- Use the test `id` from the output of the previous command.-- Use the `--wait` parameter to get the test case.-
-```azurecli
-az iot product test task create --type GenerateTestCases --test-id [YourTestId] --wait
-```
-
-Expected output:
-
-```json
-{
- "deviceTestId": "d45d53d9-656d-4be7-9bbf-140bc87e98dc",
- "error": null,
- "id": "526da38e-91fc-4e20-a761-4f04b392c42b",
- "resultLink": "/deviceTests/d45d53d9-656d-4be7-9bbf-140bc87e98dc/TestCases",
- "status": "Completed",
- "type": "GenerateTestCases"
-}
-```
-
-You can use the `az iot product test case update` command to modify the test configuration file.
-
-### Run the tests
-
-After you generate the test configuration, the next step is to run the tests. Use the same `devicetestId` from the previous commands as a parameter to run the tests. Check the test results to make sure that all tests have a status of `Passed`.
-
-```azurecli
-az iot product test task create --type QueueTestRun --test-id [YourTestId] --wait
-```
-
-Example test run output
-
-```json
-"validationTasks": [
- {
- "componentName": "Default component",
- "endTime": "2020-08-25T05:18:49.5224772+00:00",
- "interfaceId": "dtmi:com:example:TemperatureController;1",
- "logs": [
- {
- "message": "Waiting for telemetry from the device",
- "time": "2020-08-25T05:18:37.3862586+00:00"
- },
- {
- "message": "Validating PnP properties",
- "time": "2020-08-25T05:18:37.3875168+00:00"
- },
- {
- "message": "Validating PnP commands",
- "time": "2020-08-25T05:18:37.3894343+00:00"
- },
- {
- "message": "{\"propertyName\":\"serialNumber\",\"expectedSchemaType\":null,\"actualSchemaType\":null,\"message\":\"Property is successfully validated\",\"passed\":true,\"time\":\"2020-08-25T05:18:37.4205985+00:00\"}",
- "time": "2020-08-25T05:18:37.4205985+00:00"
- },
- {
- "message": "PnP interface properties validation passed",
- "time": "2020-08-25T05:18:37.4206964+00:00"
- },
- ...
- ]
- }
-]
-```
-
-## Test using the Azure Certified Device portal
-
-The following steps show you how to use the [Azure Certified Device portal](https://certify.azure.com) to onboard, register product details, submit a getting started guide, and run the certification tests.
-
-### Onboarding
-
-To use the [certification portal](https://certify.azure.com), you must use a Microsoft Entra ID from your work or school tenant.
-
-To publish the models to the Azure IoT Public Model Repository, your account must be a member of the [Microsoft Partner Network](https://partner.microsoft.com). The system checks that the Microsoft Partner Network ID exists and the account is fully vetted before publishing to the device catalog.
-
-### Company profile
-
-You can manage your company profile from the left navigation menu. The company profile includes the company URL, email address, and company logo. The program agreement must be accepted on this page before you run any certification operations.
-
-The company profile information is used in the device description showcased in the device catalog.
-
-### Create new project
-
-To certify a device, you must first create a new project.
-
-Navigate to the [certification portal](https://certify.azure.com). On the **Projects** page, select *+ Create new project*. Then enter a name for the project, the device name, and select a device class.
-
-The product information you provide during the certification process falls into four categories:
-- Device information. Collects information about the device such as its name, description, certifications, and operating system.-- The **Get started** guide. You must submit the guide as a PDF document to be approved by the system administrator before publishing the device.-- Marketing details. Provide customer-ready marketing information for your device. The marketing information includes a description, a photo, and distributors.-- Additional industry certifications. This optional section lets you provide additional information about any other certifications the device has received.-
-### Connect and test
-
-The connect and test step checks that your device meets the IoT Plug and Play certification requirements.
-
-There are three steps to be completed:
-
-1. Connect and discover interfaces. The device must connect to the Azure IoT certification service through DPS. Choose the authentication method (X.509 certificate, symmetric keys, or trusted platform module) to use and update the device application with the DPS information.
-1. Review interfaces. Review the interface and make sure each one has payload inputs that make sense for testing.
-1. Test. The system tests each device model to check that the telemetry, properties, and commands described in the model follow the IoT Plug and Play conventions. When the test is complete, select the **view logs** link to see the telemetry from the device and the raw data sent to IoT Hub device twin properties.
-
-### Submit and publish
-
-The final required stage is to submit the project for review. This step notifies an Azure Certified Device team member to review your project for completeness, including the device and marketing details, and the get started guide. A team member may contact you at the company email address previously provided with questions or edit requests before approval.
-
-If your device requires further manual validation as part of certification, you'll receive a notice at this time.
-
-When a device is certified, you can choose to publish your product details to the Azure Certified Device Catalog using the **Publish to catalog** feature in the product summary page.
-
-## Next steps
-
-Now that the device submission is completed, you can contact the device certification team at [iotcert@microsoft.com](mailto:iotcert@microsoft.com) to continue to the next steps, which include Microsoft Partner Network membership validation and a review of the getting started guides. When all the requirements are satisfied, you can choose to have your device included in the [Certified for Azure IoT device catalog](https://devicecatalog.azure.com).
certification How To Troubleshoot Pnp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/certification/how-to-troubleshoot-pnp.md
- Title: Troubleshoot your IoT Plug and Play device
-description: A guide of recommended troubleshooting steps for partners certifying an IoT Plug and Play device.
---- Previously updated : 04/15/2021---
-# Troubleshoot your IoT Plug and Play certification project
-
-During the Connect & test phase of your IoT Plug and Play certification project, you may run into some scenarios that prevent you from passing the Azure for IoT Certification Service (AICS) testing.
-
-## Prerequisites
--- You should be signed in and have a project for your device created on the [Azure Certified Device portal](https://certify.azure.com). For more information, view the [tutorial](tutorial-01-creating-your-project.md).-
-## When AICS tests aren't passing
-
-AICS test may not pass because of several causes. Follow these steps to check for common issues and troubleshoot your device.
-
-1. Double-check that your device code is setting the Model ID Payload during DPS provisioning. This is a requirement for AICS to validate your device.
-1. You can view the telemetry logs from previous test runs by pressing the `View Logs` button to identify what is causing the test to fail. Both the test messaging and raw data are available for review.
-
- ![Review test data](./media/images/review-logs.png)
-
-1. In some instances where the logs indicate `Failed to get Digital Twin Model ID of device xx due to DeviceNotConnected`, try rebooting the device and restarting the device provisioning process.
-1. If the automated tests continue to fail, then you can `request a manual review` of the results instead. This triggers a request for **manual validation** with the Azure Certified Device team.
-
- ![Request manual review](./media/images/request-manual-review.png)
-
-## When you see "Passed with warnings"
-
-While running the tests, if you receive a result of `Passed with warnings`, this means that some telemetry was not received during the testing period. This may be due to a dependency of the telemetry on longer time intervals or external triggers that were not available. You can proceed with submitting your device for review, during which the review team will determine if **manual validation** is necessary in the future.
-
-## When you need help with the model repository
-
-For IoT Plug and Play issues related to the model repository, refer to [our Docs guidance about the device model repository](../iot/concepts-model-repository.md).
-
-## Next steps
-
-Hopefully this guide helps you continue with your IoT Plug and Play certification journey! Once you have passed AICS, you can then proceed with our tutorials to submit and publish your device.
--- [Tutorial: Testing your device](tutorial-03-testing-your-device.md)
certification Program Requirements Pnp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/certification/program-requirements-pnp.md
- Title: IoT Plug and Play Certification Requirements
-description: IoT Plug and Play Certification Requirements
--- Previously updated : 03/15/2021------
-# IoT Plug and Play Certification Requirements
-> [!Note]
-> The Azure Certified Device program has met its goals and will conclude on February 23, 2024. This means that the Azure Certified Device catalog, along with certifications for Azure Certified Device, Edge Managed, and IoT Plug and Play will no longer be available after this date. However, the Edge Secured-core program will remain active and will be relocated to a new home at [aka.ms/EdgeSecuredCoreHome](https://aka.ms/EdgeSecuredCoreHome).
-
-This document outlines the device-specific capabilities that will be represented in the Azure IoT Device catalog. A capability is a singular device attribute that may be a software implementation or a combination of software and hardware implementations.
-
-## Program Purpose
-
-IoT Plug and Play enables solution builders to integrate smart devices with their solutions without any manual configuration. At the core of IoT Plug and Play is a device model that advertises the device capabilities to an IoT Plug and Play-enabled application.
-
-The promises of IoT Plug and Play certification are:
-
-1. Defined device models and interfaces are compliant with the [Digital Twin Definition Language](https://github.com/Azure/opendigitaltwins-dtdl)
-1. Easy integration with Azure IoT based solutions using the [Digital Twin APIs](../iot/concepts-digital-twin.md) : Azure IoT Hub and Azure IoT Central
-1. Product truth validated through testing telemetry from end point to cloud using DTDL
-
-> [!Note]
-> Upon completed testing and validation, we may request that the product is evaluated by Microsoft.
-
-## Requirements
-
-**[Required] Device to cloud: The purpose of this test is to make sure devices that send telemetry work with IoT Hub**
-
-| **Name** | IoTPnP.D2C |
-| -- | |
-| **Target Availability** | Available now |
-| **Applies To** | Leaf device/Edge device |
-| **OS** | Agnostic |
-| **Validation Type** | Automated |
-| **Validation** | Device must send any telemetry schemas to IoT Hub. Microsoft provides the [portal workflow](https://certify.azure.com) to execute the tests. Device to cloud (required): **1.** Validates that the device can send message to AICS managed IoT Hub **2.** User must specify the number and frequency of messages. **3.** AICS validates the telemetry is received by the Hub instance |
-| **Resources** | [Certification steps](./overview.md) (has all the additional resources) |
-
-**[Required] DPS: The purpose of this test is to check that the device implements and supports IoT Hub Device Provisioning Service with one of the three attestation methods**
-
-| **Name** | AzureCertified.DPS |
-| -- | |
-| **Target Availability** | New |
-| **Applies To** | Any device |
-| **OS** | Agnostic |
-| **Validation Type** | Automated |
-| **Validation** | Device supports easy input of target DPS ID scope ownership. Microsoft provides the [portal workflow](https://certify.azure.com) to execute the tests to validate that the device supports DPS **1.** User must select one of the attestation methods (X.509, TPM and SAS key) **2.** Depending on the attestation method, user needs to take corresponding action such as **a)** Upload X.509 cert to AICS managed DPS scope **b)** Implement SAS key or endorsement key into the device |
-| **Resources** | [Device provisioning service overview](../iot-dps/about-iot-dps.md) |
-
-**[Required] DTDL v2: The purpose of this test is to ensure that defined device models and interfaces are compliant with the Digital Twins Definition Language v2.**
-
-| **Name** | IoTPnP.DTDL |
-| -- | |
-| **Target Availability** | Available now |
-| **Applies To** | Any device |
-| **OS** | Agnostic |
-| **Validation Type** | Automated |
-| **Validation** | The [portal workflow](https://certify.azure.com) validates: **1.** Model ID announcement and ensure the device is connected using either the MQTT or MQTT over WebSockets protocol **2.** Models are compliant with the DTDL v2 **3.** Telemetry, properties, and commands are properly implemented and interact between IoT Hub Digital Twin and Device Twin on the device |
-| **Resources** | [Public Preview Refresh updates](../iot/overview-iot-plug-and-play.md) |
-
-**[Required] Device models are published in public model repository**
-
-| **Name** | IoTPnP.ModelRepo |
-| -- | |
-| **Target Availability** | Available now |
-| **Applies To** | Any device |
-| **OS** | Agnostic |
-| **Validation Type** | Automated |
-| **Validation** | All device models are required to be published in public repository. Device models are resolved via models available in public repository **1.** User must manually publish the models to the public repository before submitting for the certification. **2.** Note that once the models are published, it is immutable. We strongly recommend publishing only when the models and embedded device code are finalized.*1 *1 User must contact Microsoft support to revoke the models once published to the model repository **3.** [Portal workflow](https://certify.azure.com) checks the existence of the models in the public repository when the device is connected to the certification service |
-| **Resources** | [Model repository](../iot/overview-iot-plug-and-play.md) |
--
-**[If implemented] Device info Interface: The purpose of this test is to validate that the device info interface is implemented properly in the device code**
-
-| **Name** | IoTPnP.DeviceInfoInterface |
-| -- | |
-| **Target Availability** | Available now |
-| **Applies To** | Any device |
-| **OS** | Agnostic |
-| **Validation Type** | Automated |
-| **Validation** | [Portal workflow](https://certify.azure.com) validates the device code implements device info interface **1.** Checks the values are emitted by the device code to IoT Hub **2.** Checks the interface is implemented in the DCM (this implementation will change in DTDL v2) **3.** Checks properties are not write-able (read only) **4.** Checks the schema type is string and/or long and not null |
-| **Resources** | [Microsoft defined interface](../iot/overview-iot-plug-and-play.md) |
-| **Azure Recommended** | N/A |
-
-**[If implemented] Cloud to device: The purpose of this test is to make sure messages can be sent from the cloud to devices**
-
-| **Name** | IoTPnP.C2D |
-| -- | |
-| **Target Availability** | Available now |
-| **Applies To** | Leaf device/Edge device |
-| **OS** | Agnostic |
-| **Validation Type** | Automated |
-| **Validation** | Device must be able to receive Cloud to Device messages from IoT Hub. Microsoft provides the [portal workflow](https://certify.azure.com) to execute these tests. Cloud to device (if implemented): **1.** Validates that the device can receive messages from IoT Hub **2.** AICS sends a random message and validates via a message ACK from the device |
-| **Resources** | **1.** [Certification steps](./overview.md) (has all the additional resources), **2.** [Send cloud to device messages from an IoT Hub](../iot-hub/iot-hub-devguide-messages-c2d.md) |
-
-**[If implemented] Direct methods: The purpose of this test is to make sure devices work with IoT Hub and support direct methods**
-
-| **Name** | IoTPnP.DirectMethods |
-| -- | |
-| **Target Availability** | Available now |
-| **Applies To** | Leaf device/Edge device |
-| **OS** | Agnostic |
-| **Validation Type** | Automated |
-| **Validation** | Device must be able to receive and reply to command requests from IoT Hub. Microsoft provides the [portal workflow](https://certify.azure.com) to execute the tests. Direct methods (if implemented): **1.** User has to specify the method payload of the direct method. **2.** AICS validates that the specified payload request is sent from the Hub and an ACK message is received by the device |
-| **Resources** | **1.** [Certification steps](./overview.md) (has all the additional resources), **2.** [Understand direct methods from IoT Hub](../iot-hub/iot-hub-devguide-direct-methods.md) |
-
-**[If implemented] Device twin property: The purpose of this test is to make sure devices that send telemetry work with IoT Hub and support some of the IoT Hub capabilities, such as direct methods and device twin properties**
-
-| **Name** | IoTPnP.DeviceTwin |
-| -- | |
-| **Target Availability** | Available now |
-| **Applies To** | Leaf device/Edge device |
-| **OS** | Agnostic |
-| **Validation Type** | Automated |
-| **Validation** | Device must send any telemetry schemas to IoT Hub. Microsoft provides the [portal workflow](https://certify.azure.com) to execute the tests. Device twin property (if implemented): **1.** AICS validates the read/write-able property in device twin JSON **2.** User has to specify the JSON payload to be changed **3.** AICS validates the specified desired properties sent from IoT Hub and ACK message received by the device |
-| **Resources** | **1.** [Certification steps](./overview.md) (has all the additional resources), **2.** [Use device twins with IoT Hub](../iot-hub/iot-hub-devguide-device-twins.md) |
chaos-studio Chaos Studio Quickstart Dns Outage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-quickstart-dns-outage.md
-# Quickstart: Replicate a DNS outage by using the NSG fault
+# Replicate a DNS outage by using the NSG fault
The network security group (NSG) fault enables you to modify your existing NSG rules as part of a chaos experiment in Azure Chaos Studio. By using this fault, you can block network traffic to your Azure resources and simulate a loss of connectivity or outages of dependent resources.
-In this quickstart, you create a chaos experiment that blocks all traffic to external (internet) DNS servers for 15 minutes. With this experiment, you can validate that resources connected to the Azure virtual network associated with the target NSG don't have a dependency on external DNS servers. In this way, you can validate one of the risk-threat model requirements.
+This article walks you through creating a chaos experiment that blocks all traffic to external (internet) DNS servers for 15 minutes. You can validate that resources connected to the Azure virtual network associated with the target NSG don't have a dependency on external DNS servers. In this way, you can validate one of the risk-threat model requirements.
## Prerequisites
communication-services Job Router Azure Openai Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/router/job-router-azure-openai-integration.md
+
+ Title: Quickstart - Integrate Azure OpenAI with Job Router
+
+description: In this quickstart, you learn how to integrate Azure OpenAI with ACS Job Router using an Azure Function App to match workers to jobs based on worker performance.
++++ Last updated : 02/8/2024++++
+# Quick Start: Integrating Azure OpenAI with ACS Job Router
+
+Integrate ACS Job Router with Azure OpenAI. Use Azure OpenAI to pair your jobs to agents.
+
+### Prerequisites
+- Create an Azure OpenAI resource. [Setup Guide](../../../ai-services/openai/how-to/create-resource.md)
+- Create an Azure Communication Services resource. [Setup Guide](../create-communication-resource.md)
+- Clone the GitHub solution. [Integrating Azure OpenAI with ACS Job Router](https://github.com/Azure-Samples/communication-services-dotnet-quickstarts/tree/main/JobRouterOpenAIIntegration)
+- Visual Studio Code Installed. [Visual Studio Code](https://code.visualstudio.com/)
+- Azure Functions Extension for Visual Studio Code. [Azure Function Extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions)
+
+### Overview
+
+This quick start demonstrates how to integrate Azure OpenAI with ACS Job Router to intelligently choose the best-suited worker based on performance indicators for an incoming job.
+
+### Console application
+- Manages ACS resources including policies and queues.
+- Simulates job queuing and allocation.
+
+### Azure Function project
+- Hosts API for Azure OpenAI integration.
+- Manages worker scoring and job labels.
+
+#### This guide covers two main projects:
+- A .NET console application to interact with ACS Job Router.
+- An Azure Function project for OpenAI integration with ACS Job Router.
+
+#### The console application is set up to provision the following ACS resources:
+- A distribution policy that lets ACS Job Router understand how to generate offers for workers. This application is configured to provision a Best-Worker mode distribution policy that uses a Function Router Rule for scoring workers (a sketch of this call follows this list).
+- A queue with the best-worker mode distribution policy attached.
+- Five workers that are registered to a queue with three chosen performance indicator values populated as labels.
+- Creates a Job in ACS Job Router and lets the user know which worker Azure OpenAI scored the highest based on the performance indicator labels.
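+
+As an illustration, a minimal sketch of provisioning such a distribution policy might look like the following C#. It assumes the `Azure.Communication.JobRouter` .NET SDK's `JobRouterAdministrationClient`, `BestWorkerMode`, `FunctionRouterRule`, and `FunctionRouterRuleCredential` types; the connection string, function URL, key, and policy ID are placeholders. See the Customize Worker Scoring article linked later in this quickstart for the authoritative API.
+
+```csharp
+using System;
+using Azure.Communication.JobRouter;
+
+// Placeholder ACS connection string and Azure Function details.
+var adminClient = new JobRouterAdministrationClient("<ACS connection string>");
+
+// Best-worker distribution mode that scores workers through the deployed Azure Function.
+var bestWorkerMode = new BestWorkerMode
+{
+    ScoringRule = new FunctionRouterRule(new Uri("<function URL>"))
+    {
+        Credential = new FunctionRouterRuleCredential("<function key>")
+    }
+};
+
+// Offers generated from this policy expire after five minutes.
+await adminClient.CreateDistributionPolicyAsync(
+    new CreateDistributionPolicyOptions("openai-scoring-policy", TimeSpan.FromMinutes(5), bestWorkerMode));
+```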
+
+ :::image type="content" source="./media/overview-sequence-diagram.png" lightbox="./media/overview-sequence-diagram-expanded.png" alt-text="Screenshot of Job Router with Azure OpenAI integration SequenceDiagram.":::
+
+#### The Azure Function project is set up to interact with a deployed Azure OpenAI model:
+- The Azure function receives a request from ACS Job Router with a payload containing the worker's labels (performance indicators).
+- The Azure function extracts those values from the request.
+- The Azure function was preconfigured with a prompt to explain to the Azure OpenAI model how to interpret each one of these performance indicators and to ask Azure OpenAI to score each worker based on these values to find the most suitable agent for a new job.
+ > [!NOTE]
+ > The prompt can be updated for any desired outcome. For example, you could modify the prompt to weigh the Average Handling Time as the most important data point or ask the model to optimize for customer satisfaction.
+- The Azure Function sends a request with the configured prompts and worker performance indicators, and Azure OpenAI responds with a JSON object containing the scores it generated (see the example after this list).
+- The Azure Function then sends these scores back to ACS Job Router.
+- ACS Job Router then sends an offer to the worker that was scored the highest by Azure OpenAI.
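+
+For illustration, the JSON object returned through the scoring function might look like the following, where the worker (agent) IDs and score values are hypothetical:
+
+```json
+{
+  "worker-1": 0.87,
+  "worker-2": 0.64,
+  "worker-3": 0.42
+}
+```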
+
+## Performance indicators used in this project
+
+In this guide, we've chosen three performance indicators based on typical contact center performance data points.
+Workers are evaluated based on:
+- **CSAT**: Customer satisfaction.
+- **Outcome**: Issue resolution rate.
+- **AHT**: Average handling time.
+
+## Understanding ACS Job Router
+- Learn about ACS Job Router. [Job Router Concepts](../../concepts/router/concepts.md)
+- Configure the BestWorker Distribution Policy using an Azure Function Scoring Rule. [Customize Worker Scoring](../../how-tos/router-sdk/customize-worker-scoring.md)
+
+## Understanding Azure Function deployments
+- Learn about Azure Function Deployments. [Azure Function Deployments](../../../azure-functions/functions-deployment-technologies.md)
+
+## Understanding Azure OpenAI prompts
+- Learn about Azure OpenAI prompt Engineering Techniques. [Prompt Engineering Techniques](../../../ai-services/openai/concepts/advanced-prompt-engineering.md)
+
+## Deployment and execution
+
+1. Open the OpenAiScoringFunction project in Visual Studio Code with the Azure Function Extension installed. Select 'Create Function App in Azure...'
+
+ :::image type="content" source="./media/create-azure-function-from-code.png" alt-text="Screenshot of CreateFunctionApp in VS Code.":::
+
+2. After selecting your Subscription, enter a unique name for your function app.
+
+ :::image type="content" source="./media/function-select-subscription.png" alt-text="Screenshot of selecting subscription in VS Code.":::
+
+3. Once your Function App is created, right-click on your App and select 'Deploy Function App...'
+4. Open the Azure portal and go to your Azure OpenAI resource, then go to Azure AI Studio. From here, navigate to the Deployments tab and select "+ Create new deployment"
+ - a. Select a model that can perform completions
+
+ [Azure OpenAI Service models](../../../ai-services/openai/concepts/models.md)
+ - b. Give your model a Deployment name and select "Create"
+
+ :::image type="content" source="./media/azure-openai-model-creation.png" alt-text="Screenshot of creating azure OpenAI model.":::
+
+5. Once your Azure OpenAI Model is created, copy down the 'Endpoint', 'Keys', and 'Region'
+
+ :::image type="content" source="./media/azure-openai-keys-and-endpoints.png" alt-text="Screenshot of key and endpoint page for Azure OpenAI.":::
+
+6. In the Azure portal, navigate to your newly created Function App's Environment variables blade and create the following variables (or set them from the command line, as shown after this procedure):
+
+ :::image type="content" source="./media/azure-function-environment-settings.png" alt-text="Screenshot of Azure function environment settings example.":::
+
+| Name | Value | Description |
+|--||--|
+| OpenAIBaseURI | {Endpoint} | Endpoint URI from OpenAI Resource |
+| OpenAIAPIKey | {Key} | Key from OpenAI Resource |
+| DeploymentName | {DeploymentName} | Deployment Name from OpenAI Resource |
+| Preprompt | You're helping pair a customer with an agent in a contact center. You'll evaluate the best available agent based on their performance indicators below. CSAT holds the average customer satisfaction score between 1 and 3, higher is better. Outcome is a score between 0 and 1, higher is better. AHT is average handling time, lower is better. If AHT provided is 00:00, please ignore it in the scoring.| Prompt containing preprocessing instructions for the Azure OpenAI model |
+| Postprompt | Respond with only a json object with agent ID as the key, and scores based on suitability for this customer as the value in a range of 0 to 1. Don't include any other information.| Prompt containing postprocessing instructions for the Azure OpenAI model |
+| DefaultCSAT | 1.5 | Default CSAT score for workers missing this label |
+| DefaultOutcome | 0.5 | Default Outcome score for workers missing this label |
+| DefaultAHT | 10:00 | Default AHT for workers missing this label |
++
+7. On the Overview blade of your function app, copy the function URL. On the Functions --> Keys blade of your function app, copy the master or default key.
+8. Navigate to your ACS resource and copy down your connection string.
+9. Open the JR_AOAI_Integration Console application and open the `appsettings.json` file to update the following config settings.
+
+ :::image type="content" source="./media/appsettings-configuration.png" alt-text="Screenshot of AppSettings.":::
+
+10. Run the application and follow the on-screen instructions to Create a Job.
+
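+As an alternative to configuring the settings in step 6 through the portal, you can set them with the Azure CLI. This is a sketch only; the function app name, resource group, and setting values are placeholders:
+
+```azurecli
+az functionapp config appsettings set \
+  --name <your-function-app-name> \
+  --resource-group <your-resource-group> \
+  --settings "OpenAIBaseURI=<Endpoint>" "OpenAIAPIKey=<Key>" "DeploymentName=<DeploymentName>"
+```
+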
+## Experimentation
+
+Various experiments can be conducted within this project, such as:
+
+- Experimenting with the PrePrompt string (in the Environment Variables of the Azure Function) to further tune scores provided by Azure OpenAI.
+- Add other performance indicator labels to workers: update the OpenAiScorer class in the OpenAIScorerFunction project to account for the new labels, update the prompts and default performance indicators in the Environment Variables of your function, and add the new performance indicators to the `appSettings.json` file under each worker.
+- Implement logic to update the values of the performance indicator labels as jobs are completed by each worker. This could be done by adding persistence or a cache to your application to store these values. Labels are routable attributes in Job Router. If a worker's labels are updated, any offers issued for that worker will be revoked. Consider updating labels at a point when the worker isn't expecting offers (AvailableForOffers is false, or when the worker's capacity is consumed).
communication-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/whats-new.md
Azure Communication Services Email Simple Mail Transfer Protocol (SMTP) as a Ser
### Azure AI-powered Azure Communication Services Call Automation API Actions :::image type="content" source="./media/whats-new-images/11-23/advanced-call-automation-actions.png" alt-text="A graphic showing a server interacting with the cloud":::
-Azure AI-powered Call Automation API actions are now generally available for developers who want to create enhanced calling workflows using Azure AI Speech-to-Text, Text-to-Speech and other language understanding engines. These actions allow developers to play dynamic audio prompts and recognize voice input from callers, enabling natural conversational experiences and more efficient task handling. Developers can use these actions with any of the four major SDKs - .NET, Java, JavaScript and Python - and integrate them with their Azure Open AI solutions to create virtual assistants that go beyond simple IVRs. You can learn more about this release and its capabilities from the Microsoft Ignite 2023 announcements blog and on-demand session.
+Azure AI-powered Call Automation API actions are now generally available for developers who want to create enhanced calling workflows using Azure AI Speech-to-Text, Text-to-Speech and other language understanding engines. These actions allow developers to play dynamic audio prompts and recognize voice input from callers, enabling natural conversational experiences and more efficient task handling. Developers can use these actions with any of the four major SDKs - .NET, Java, JavaScript and Python - and integrate them with their Azure OpenAI solutions to create virtual assistants that go beyond simple IVRs. You can learn more about this release and its capabilities from the Microsoft Ignite 2023 announcements blog and on-demand session.
[Read more in the Ignite Blog post.](https://techcommunity.microsoft.com/t5/azure-communication-services/ignite-2023-creating-value-with-intelligent-application/ba-p/3907629)
connectors Connectors Create Api Servicebus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-servicebus.md
ms.suite: integration Previously updated : 12/12/2023 Last updated : 02/28/2024
To create a connection when you add a Service Bus trigger or action, you need to
### Get endpoint URL for Service Bus namespace
-If you use the Service Bus managed connector, you need this endpoint URL if you select either authentication type for **Microsoft Entra integrated** or **Logic Apps Managed Identity**. The endpoint URL starts with the **https://** prefix.
+If you use the Service Bus managed connector, you need this endpoint URL if you select either authentication type for **Microsoft Entra integrated** or **Logic Apps Managed Identity**. The endpoint URL starts with the **sb://** prefix.
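
For example, a namespace endpoint looks like `sb://<your-namespace>.servicebus.windows.net/`, where the namespace name is a placeholder.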
1. In the [Azure portal](https://portal.azure.com), open your Service Bus *namespace*.
container-apps Firewall Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/firewall-integration.md
The following tables describe how to configure a collection of NSG allow rules.
#### Considerations - If you're running HTTP servers, you might need to add ports `80` and `443`.-- Adding deny rules for some ports and protocols with lower priority than `65000` might cause service interruption and unexpected behavior. - Don't explicitly deny the Azure DNS address `168.63.128.16` in the outgoing NSG rules, or your Container Apps environment won't be able to function.
cosmos-db Spark Databricks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/spark-databricks.md
This article details how to work with Azure Cosmos DB for Apache Cassandra from
* **Cassandra Spark connector:** - To integrate Azure Cosmos DB for Apache Cassandra with Spark, the Cassandra connector should be attached to the Azure Databricks cluster. To attach the cluster:
- * Review the Databricks runtime version, the Spark version. Then find the [maven coordinates](https://mvnrepository.com/artifact/com.datastax.spark/spark-cassandra-connector-assembly) that are compatible with the Cassandra Spark connector, and attach it to the cluster. See ["Upload a Maven package or Spark package"](https://docs.databricks.com/libraries) article to attach the connector library to the cluster. We recommend selecting Databricks runtime version 10.4 LTS, which supports Spark 3.2.1. To add the Apache Spark Cassandra Connector, your cluster, select **Libraries** > **Install New** > **Maven**, and then add `com.datastax.spark:spark-cassandra-connector-assembly_2.12:3.2.0` in Maven coordinates. If using Spark 2.x, we recommend an environment with Spark version 2.4.5, using spark connector at maven coordinates `com.datastax.spark:spark-cassandra-connector_2.11:2.4.3`.
+ * Review the Databricks runtime version and the Spark version. Then find the [maven coordinates](https://mvnrepository.com/artifact/com.datastax.spark/spark-cassandra-connector-assembly) that are compatible with the Cassandra Spark connector, and attach it to the cluster. See the ["Upload a Maven package or Spark package"](/azure/databricks/libraries/) article to attach the connector library to the cluster. We recommend selecting Databricks runtime version 10.4 LTS, which supports Spark 3.2.1. To add the Apache Spark Cassandra Connector to your cluster, select **Libraries** > **Install New** > **Maven**, and then add `com.datastax.spark:spark-cassandra-connector-assembly_2.12:3.2.0` in Maven coordinates. If you're using Spark 2.x, we recommend an environment with Spark version 2.4.5, using the Spark connector at maven coordinates `com.datastax.spark:spark-cassandra-connector_2.11:2.4.3`.
* **Azure Cosmos DB for Apache Cassandra-specific library:** - If you're using Spark 2.x, a custom connection factory is required to configure the retry policy from the Cassandra Spark connector to Azure Cosmos DB for Apache Cassandra. Add the `com.microsoft.azure.cosmosdb:azure-cosmos-cassandra-spark-helper:1.2.0`[maven coordinates](https://search.maven.org/artifact/com.microsoft.azure.cosmosdb/azure-cosmos-cassandra-spark-helper/1.2.0/jar) to attach the library to the cluster.
cosmos-db Priority Based Execution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/priority-based-execution.md
To get started using priority-based execution, navigate to the **Features** page
#### [.NET SDK v3](#tab/net-v3) ```csharp
-using Microsoft.Azure.Cosmos.PartitionKey;
-using Microsoft.Azure.Cosmos.PriorityLevel;
-
-Using Mircosoft.Azure.Cosmos.PartitionKey;
-Using Mircosoft.Azure.Cosmos.PriorityLevel;
+using Microsoft.Azure.Cosmos;
//update products catalog with low priority
ItemRequestOptions catalogRequestOptions = new ItemRequestOptions{PriorityLevel = PriorityLevel.Low};
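// As an illustrative follow-up only (not part of the original sample): pass the low-priority
// options on a write. "container" and "product" are assumed to be the Container instance and
// catalog item defined elsewhere in this article.
await container.UpsertItemAsync(product, requestOptions: catalogRequestOptions);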
cost-management-billing Analyze Unexpected Charges https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/analyze-unexpected-charges.md
Continue reading the following sections for more techniques to determine who own
### Analyze the audit logs for the resource
-If you have permission to view a resource, you should be able to access its audit logs. Review the logs to find the user who was responsible for the most recent changes to a resource. To learn more, see [View and retrieve Azure Activity log events](../../azure-monitor/essentials/activity-log.md#view-the-activity-log).
+If you have permission to view a resource, you should be able to access its audit logs. Review the logs to find the user who was responsible for the most recent changes to a resource. To learn more, see [View and retrieve Azure Activity log events](../../azure-monitor/essentials/activity-log-insights.md#view-the-activity-log).
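
If the activity log for the subscription is also routed to a Log Analytics workspace (an assumption, not something this article requires), a query along the following lines can surface recent write operations and their callers for a specific resource; the resource ID is a placeholder:

```kusto
AzureActivity
| where _ResourceId == "<resource ID>"
| where CategoryValue == "Administrative"
| where OperationNameValue endswith "write"
| project TimeGenerated, Caller, OperationNameValue, ActivityStatusValue
| order by TimeGenerated desc
```
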
### Analyze user permissions to the resource's parent scope
defender-for-cloud Concept Cloud Security Posture Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-cloud-security-posture-management.md
Title: Cloud Security Posture Management (CSPM)
description: Learn more about CSPM in Microsoft Defender for Cloud. Previously updated : 02/11/2024 Last updated : 02/28/2024 # Cloud security posture management (CSPM)
You can choose which ticketing system to integrate. For preview, only ServiceNow
For commercial and national cloud coverage, review the [features supported in Azure cloud environments](support-matrix-cloud-environment.md).
+## Support for Resource type in AWS and GCP
+
+For multicloud support of resource types (or services) in our foundational multicloud CSPM tier, see the [table of multicloud resource and service types for AWS and GCP](multicloud-resource-types-support-foundational-cspm.md).
+ ## Next steps - Watch [Predict future security incidents! Cloud Security Posture Management with Microsoft Defender](https://www.youtube.com/watch?v=jF3NSR_OepI).
defender-for-cloud File Integrity Monitoring Enable Log Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/file-integrity-monitoring-enable-log-analytics.md
The **Changes** tab (shown below) lists all changes for the workspace during the
Use wildcards to simplify tracking across directories. The following rules apply when you configure folder monitoring using wildcards: - Wildcards are required for tracking multiple files.-- Wildcards can only be used in the last segment of a path, such as `C:\folder\file` or` /etc/*.conf`
+- Wildcards can only be used in the last segment of a path, such as `C:\folder\file` or `/etc/*.conf`
- If an environment variable includes a path that isn't valid, validation succeeds but the path fails when inventory runs. - When setting the path, avoid general paths such as `c:\*.*`, which results in too many folders being traversed.
File Integrity Monitoring data resides within the Azure Log Analytics/Configurat
In the following example, we're retrieving all changes in the last 14 days in the categories of registry and files:
- ```
+ ```kusto
ConfigurationChange | where TimeGenerated > ago(14d) | where ConfigChangeType in ('Registry', 'Files')
File Integrity Monitoring data resides within the Azure Log Analytics/Configurat
1. Remove **Files** from the **where** clause. 1. Remove the summarization line and replace it with an ordering clause:
- ```
+ ```kusto
ConfigurationChange | where TimeGenerated > ago(14d) | where ConfigChangeType in ('Registry')
defender-for-cloud File Integrity Monitoring Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/file-integrity-monitoring-overview.md
Defender for Cloud provides the following list of recommended items to monitor b
## Next steps
-In this article, you learned about File Integrity Monitoring (FIM) in Defender for Cloud.
+In this article, you learned about File Integrity Monitoring (FIM) in Defender for Cloud.
Next, you can: - [Enable File Integrity Monitoring when using the Azure Monitor Agent](file-integrity-monitoring-enable-ama.md)-- [Enable File Integrity Monitoring when using the Log Analytics agent](file-integrity-monitoring-enable-log-analytics.md)
+- [Enable File Integrity Monitoring when using the Log Analytics agent](file-integrity-monitoring-enable-log-analytics.md)
defender-for-cloud Github Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/github-action.md
Title: Configure the Microsoft Security DevOps GitHub action
-description: Learn how to configure the Microsoft Security DevOps GitHub action.
+description: Learn how to configure the Microsoft Security DevOps GitHub action to enhance your project's security and DevOps processes.
Last updated 06/18/2023
Microsoft Security DevOps uses the following Open Source tools:
name: alerts path: ${{ steps.msdo.outputs.sarifFile }} ```+ > [!NOTE] > For additional tool configuration options, see [the Microsoft Security DevOps wiki](https://github.com/microsoft/security-devops-action/wiki) - 1. Select **Start commit** :::image type="content" source="media/msdo-github-action/start-commit.png" alt-text="Screenshot showing you where to select start commit.":::
Code scanning findings will be filtered by specific MSDO tools in GitHub. These
- Learn how to [deploy apps from GitHub to Azure](/azure/developer/github/deploy-to-azure).
-## Next steps
+## Related content
Learn more about [DevOps security in Defender for Cloud](defender-for-devops-introduction.md).
defender-for-cloud Governance Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/governance-rules.md
Title: Drive remediation of security recommendations with governance rules in Microsoft Defender for Cloud
+ Title: Drive remediation of recommendations with governance rules
description: Learn how to drive remediation of security recommendations with governance rules in Microsoft Defender for Cloud
For tracking, you can review the progress of the remediation tasks by subscripti
- The [Defender Cloud Security Posture Management (CSPM) plan](concept-cloud-security-posture-management.md) must be enabled. - You need **Contributor**, **Security Admin**, or **Owner** permissions on the Azure subscriptions.-- For AWS accounts and GCP projects, you need **Contributor**, **Security Admin**, or **Owner** permissions on the Defender for Cloud AWS or GCP connectors. -
+- For AWS accounts and GCP projects, you need **Contributor**, **Security Admin**, or **Owner** permissions on the Defender for Cloud AWS or GCP connectors.
## Define a governance rule
For tracking, you can review the progress of the remediation tasks by subscripti
1. Specify how recommendations are impacted by the rule. - **By severity** - The rule assigns the owner and due date to any recommendation in the subscription that doesn't already have them assigned.
- - **By specific recommendations** - Select the specific built-in or custom recommendations that the rule applies to.
+ - **By specific recommendations** - Select the specific built-in or custom recommendations that the rule applies to.
:::image type="content" source="./media/governance-rules/create-rule-conditions.png" alt-text="Screenshot of page for adding conditions for a governance rule." lightbox="media/governance-rules/create-rule-conditions.png":::
You can view the effect of government rules in your environment.
1. You can search for rules, or filter rules. - Filter on **Environment** to identify rules for Azure, AWS, and GCP.
-
+ - Filter on rule name, owner, or time between the recommendation being issued and due date.
-
+ - Filter on **Grace period** to find MCSB recommendations that won't affect your secure score.
-
+ - Identify by status. :::image type="content" source="./media/governance-rules/view-filter-rules.png" alt-text="Screenshot of page for viewing and filtering rules." lightbox="media/governance-rules/view-filter-rules.png":::
The governance report lets you select subscriptions that have governance rules a
From the governance report, you can drill down into recommendations by scope, display name, priority, remediation timeframe, owner type, owner details, grace period and cloud.
-## Next steps
+## Next step
Learn how to [Implement security recommendations](implement-security-recommendations.md).
defender-for-cloud Harden Docker Hosts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/harden-docker-hosts.md
Title: Review Docker host hardening recommendations
-description: How-to protect your Docker hosts and verify they're compliant with the CIS Docker benchmark
+description: How to protect your Docker hosts and verify they're compliant with the CIS Docker benchmark with Microsoft Defender for Cloud.
When vulnerabilities are found, they're grouped inside a single recommendation.
|Required roles and permissions:|**Reader** on the workspace to which the host connects| |Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: National (Azure Government, Microsoft Azure operated by 21Vianet)<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Connected AWS accounts| - ## Identify and remediate security vulnerabilities in your Docker configuration 1. From Defender for Cloud's menu, open the **Recommendations** page. 1. Filter to the recommendation **Vulnerabilities in container security configurations should be remediated** and select the recommendation.
- The recommendation page shows the affected resources (Docker hosts).
+ The recommendation page shows the affected resources (Docker hosts).
:::image type="content" source="./media/monitor-container-security/docker-host-vulnerabilities-found.png" alt-text="Recommendation to remediate vulnerabilities in container security configurations."::: > [!NOTE]
- > Machines that aren't running Docker will be shown in the **Not applicable resources** tab. They'll appear in Azure Policy as Compliant.
+ > Machines that aren't running Docker will be shown in the **Not applicable resources** tab. They'll appear in Azure Policy as Compliant.
-1. To view and remediate the CIS controls that a specific host failed, select the host you want to investigate.
+1. To view and remediate the CIS controls that a specific host failed, select the host you want to investigate.
> [!TIP] > If you started at the asset inventory page and reached this recommendation from there, select the **Take action** button on the recommendation page.
When vulnerabilities are found, they're grouped inside a single recommendation.
1. When you're sure the command is appropriate and ready for your host, select **Run**.
+## Next step
-## Next steps
-
-Docker hardening is just one aspect of Defender for Cloud's container security features.
+Docker hardening is just one aspect of Defender for Cloud's container security features.
Learn more [Container security in Defender for Cloud](defender-for-containers-introduction.md).
defender-for-cloud Multicloud Resource Types Support Foundational Cspm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/multicloud-resource-types-support-foundational-cspm.md
+
+ Title: Supported resource and service types for multicloud in Foundational CSPM
+description: Learn more about the supported resource and service types for multicloud in Microsoft Defender for Cloud's Foundational CSPM.
+ Last updated : 02/29/2024++
+# Supported resource and service types for multicloud in foundational CSPM
+
+This page lists the resource and service types that are supported for Amazon Web Services (AWS) and Google Cloud Platform (GCP) in Defender for Cloud's foundational Cloud Security Posture Management (CSPM) tier.
+
+## Resource types supported in AWS
+
+| Provider Namespace | Resource Type Name |
+|-|-|
+| AccessAnalyzer | AnalyzerSummary |
+| ApiGateway | Stage |
+| AppSync | GraphqlApi |
+| ApplicationAutoScaling | ScalableTarget |
+| AutoScaling | AutoScalingGroup |
+| AWS | Account |
+| AWS | AccountInRegion |
+| CertificateManager | CertificateTags |
+| CertificateManager | CertificateDetail |
+| CertificateManager | CertificateSummary |
+| CloudFormation | StackSummary |
+| CloudFormation | StackTemplate |
+| CloudFormation | StackInstanceSummary |
+| CloudFormation | Stack |
+| CloudFormation | StackResourceSummary |
+| CloudFront | DistributionConfig |
+| CloudFront | DistributionSummary |
+| CloudFront | DistributionTags |
+| CloudTrail | EventSelector |
+| CloudTrail | Trail |
+| CloudTrail | TrailStatus |
+| CloudTrail | TrailTags |
+| CloudWatch | MetricAlarm |
+| CloudWatch | MetricAlarmTags |
+| CloudWatchLogs | LogGroup |
+| CloudWatchLogs | MetricFilter |
+| CodeBuild | Project |
+| CodeBuild | ProjectName |
+| CodeBuild | SourceCredentialsInfo |
+| ConfigService | ConfigurationRecorder |
+| ConfigService | ConfigurationRecorderStatus |
+| ConfigService | DeliveryChannel |
+| DAX | Cluster |
+| DAX | ClusterTags |
+| DatabaseMigrationService | ReplicationInstance |
+| DynamoDB | ContinuousBackupsDescription |
+| DynamoDB | TableDescription |
+| DynamoDB | TableTags |
+| DynamoDB | TableName |
+| EC2 | Snapshot |
+| EC2 | Subnet |
+| EC2 | Volume |
+| EC2 | VPC |
+| EC2 | VpcEndpoint |
+| EC2 | VpcPeeringConnection |
+| EC2 | Instance |
+| EC2 | AccountAttribute |
+| EC2 | Address |
+| EC2 | CreateVolumePermission |
+| EC2 | EbsEncryptionByDefault |
+| EC2 | FlowLog |
+| EC2 | Image |
+| EC2 | InstanceStatus |
+| EC2 | InstanceTypeInfo |
+| EC2 | NetworkAcl |
+| EC2 | NetworkInterface |
+| EC2 | Region |
+| EC2 | Reservation |
+| EC2 | RouteTable |
+| EC2 | SecurityGroup |
+| ECR | Image |
+| ECR | Repository |
+| ECR | RepositoryPolicy |
+| ECS | TaskDefinition |
+| ECS | ServiceArn |
+| ECS | Service |
+| ECS | ClusterArn |
+| ECS | TaskDefinitionTags |
+| ECS | TaskDefinitionArn |
+| EFS | FileSystemDescription |
+| EFS | MountTargetDescription |
+| EKS | Cluster |
+| EKS | Nodegroup |
+| EKS | NodegroupName |
+| EKS | ClusterName |
+| EMR | Cluster |
+| ElasticBeanstalk | ConfigurationSettingsDescription |
+| ElasticBeanstalk | EnvironmentDescription |
+| ElasticLoadBalancing | LoadBalancerTags |
+| ElasticLoadBalancing | LoadBalancer |
+| ElasticLoadBalancing | LoadBalancerAttributes |
+| ElasticLoadBalancing | LoadBalancerPolicy |
+| ElasticLoadBalancingV2 | LoadBalancerTags |
+| ElasticLoadBalancingV2 | Rule |
+| ElasticLoadBalancingV2 | TargetGroup |
+| ElasticLoadBalancingV2 | TargetHealthDescription |
+| ElasticLoadBalancingV2 | LoadBalancer |
+| ElasticLoadBalancingV2 | Listener |
+| ElasticLoadBalancingV2 | LoadBalancerAttribute |
+| Elasticsearch | DomainInfo |
+| Elasticsearch | DomainStatus |
+| Elasticsearch | DomainTags |
+| GuardDuty | DetectorId |
+| Iam | AccountAlias |
+| Iam | AttachedPolicyType |
+| Iam | CredentialReport |
+| Iam | Group |
+| Iam | InstanceProfile |
+| Iam | MFADevice |
+| Iam | PasswordPolicy |
+| Iam | ServerCertificateMetadata |
+| Iam | SummaryMap |
+| Iam | User |
+| Iam | UserPolicies |
+| Iam | VirtualMFADevice |
+| Iam | ManagedPolicy |
+| Iam | ManagedPolicy |
+| Iam | AccessKeyLastUsed |
+| Iam | AccessKeyMetadata |
+| Iam | PolicyVersion |
+| Iam | PolicyVersion |
+| Internal | Iam_EntitiesForPolicy |
+| Internal | Iam_EntitiesForPolicy |
+| Internal | AwsSecurityConnector |
+| KMS | KeyPolicyName |
+| KMS | KeyRotationStatus |
+| KMS | KeyTags |
+| KMS | KeyPolicy |
+| KMS | KeyMetadata |
+| KMS | KeyListEntry |
+| KMS| AliasListEntry |
+| Lambda | FunctionCodeLocation |
+| Lambda | FunctionConfiguration|
+| Lambda | FunctionPolicy |
+| Lambda | FunctionTags |
+| Macie2 | JobSummary |
+| Macie2 | MacieStatus |
+| NetworkFirewall | Firewall |
+| NetworkFirewall | FirewallMetadata |
+| NetworkFirewall | FirewallPolicy |
+| NetworkFirewall | FirewallPolicyMetadata |
+| NetworkFirewall | RuleGroup |
+| NetworkFirewall | RuleGroupMetadata |
+| RDS | ExportTask |
+| RDS | DBClusterSnapshot |
+| RDS | DBSnapshot |
+| RDS | DBSnapshotAttributesResult |
+| RDS | EventSubscription |
+| RDS | DBCluster |
+| RDS | DBInstance |
+| RDS | DBClusterSnapshotAttributesResult |
+| RedShift | LoggingStatus |
+| RedShift | Parameter |
+| Redshift | Cluster |
+| Route53 | HostedZone |
+| Route53 | ResourceRecordSet |
+| Route53Domains | DomainSummary |
+| S3 | S3Region |
+| S3 | S3BucketTags |
+| S3 | S3Bucket |
+| S3 | BucketPolicy |
+| S3 | BucketEncryption |
+| S3 | BucketPublicAccessBlockConfiguration |
+| S3 | BucketVersioning |
+| S3 | LifecycleConfiguration |
+| S3 | PolicyStatus |
+| S3 | ReplicationConfiguration |
+| S3 | S3AccessControlList |
+| S3 | S3BucketLoggingConfig |
+| S3Control | PublicAccessBlockConfiguration |
+| SNS | Subscription |
+| SNS | Topic |
+| SNS | TopicAttributes |
+| SNS | TopicTags |
+| SQS | Queue |
+| SQS | QueueAttributes |
+| SQS | QueueTags |
+| SageMaker | NotebookInstanceSummary |
+| SageMaker | DescribeNotebookInstanceTags |
+| SageMaker | DescribeNotebookInstanceResponse |
+| SecretsManager | SecretResourcePolicy |
+| SecretsManager | SecretListEntry |
+| SecretsManager | DescribeSecretResponse |
+| SimpleSystemsManagement | ParameterMetadata |
+| SimpleSystemsManagement | ParameterTags |
+| SimpleSystemsManagement | ResourceComplianceSummary |
+| SimpleSystemsManagement | InstanceInformation |
+| WAF | LoggingConfiguration |
+| WAF | WebACL |
+| WAF | WebACLSummary |
+| WAFV2 | ApplicationLoadBalancerForWebACL |
+| WAFV2 | WebACLSummary |
+
+## Resource types supported in GCP
+
+| Provider Namespace | Resource Type Name |
+|-|-|
+| ApiKeys | Key |
+| ArtifactRegistry | Image |
+| ArtifactRegistry | Repository |
+| ArtifactRegistry | RepositoryPolicy |
+| Bigquery | Dataset |
+| Bigquery | DatasetData |
+| Bigquery | Table |
+| Bigquery | TablePolicy |
+| Bigquery | TablesData |
+| CloudKMS | CryptoKey |
+| CloudKMS | CryptoKeyPolicy |
+| CloudKMS | KeyRing |
+| CloudKMS | KeyRingPolicy |
+| CloudResourceManager | Project |
+| CloudResourceManager | Ancestor |
+| CloudResourceManager | AncestorPolicy |
+| CloudResourceManager | EffectiveOrgPolicy |
+| CloudResourceManager | Folder |
+| CloudResourceManager | FolderPolicy |
+| CloudResourceManager | Organization |
+| CloudResourceManager | OrganizationPolicy |
+| CloudResourceManager | Policy |
+| Compute | Instance |
+| Compute | BackendService |
+| Compute | BackendService |
+| Compute | Disk |
+| Compute | EffectiveFirewalls |
+| Compute | Firewall |
+| Compute | ForwardingRule |
+| Compute | GlobalForwardingRule |
+| Compute | InstanceGroup |
+| Compute | InstanceGroupInstance |
+| Compute | InstanceGroupManager |
+| Compute | InstanceGroupManager |
+| Compute | InstanceTemplate |
+| Compute | MachineType |
+| Compute | ManagedInstance |
+| Compute | ManagedInstance |
+| Compute | Network |
+| Compute | NetworkEffectiveFirewalls |
+| Compute | Project |
+| Compute | SslPolicy |
+| Compute | Subnetwork |
+| Compute | TargetHttpProxy |
+| Compute | TargetHttpsProxy |
+| Compute | TargetPool |
+| Compute | TargetSslProxy |
+| Compute | TargetTcpProxy |
+| Compute | UrlMap |
+| Container | Cluster |
+| Dns | ManagedZone |
+| Dns | Policy |
+| IAM | OrganizationRole |
+| IAM | ProjectRole |
+| IAM | Role |
+| IAM | ServiceAccount |
+| IAM | ServiceAccountKey |
+| Internal | GcpSecurityConnector |
+| Logging | AncestorLogSink |
+| Logging | LogEntry |
+| Logging | LogMetric |
+| Logging | LogSink |
+| Monitoring | AlertPolicy |
+| OsConfig | OSPolicyAssignment |
+| OsConfig | OSPolicyAssignmentReport |
+| SQLAdmin | DatabaseInstance |
+| SecretManager | Secret |
+| SecretManager | SecretPolicy |
+| Storage | Bucket |
+| Storage | BucketPolicy |
+
+## Learn more
+
+- Review the [features supported in Azure cloud environments](support-matrix-cloud-environment.md) for information on commercial and national cloud coverage.
+- Watch [Predict future security incidents! Cloud Security Posture Management with Microsoft Defender](https://www.youtube.com/watch?v=jF3NSR_OepI).
+- Learn about [security standards and recommendations](security-policy-concept.md).
+- Learn about [secure score](secure-score-security-controls.md).
defender-for-iot How To Set Up Snmp Mib Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-set-up-snmp-mib-monitoring.md
# Set up SNMP MIB health monitoring on an OT sensor
-This article describes show to configure your OT sensors for health monitoring via an authorized SNMP monitoring server. SNMP queries are sent up to 50 times a second, using UDP over port 161.
+This article describes how to configure your OT sensors for health monitoring via an authorized SNMP monitoring server. SNMP queries can be sent up to 50 times a second, using UDP over port 161.
-Setup for SNMP monitoring includes configuring settings on your OT sensor and on your SNMP server. To define Defender for IoT sensors on your SNMP server, either define your settings manually or use a pre-defined SNMP MIB file downloaded from the Azure portal.
-
-SNMP queries are sent up to 50 times a second, using UDP over port 161.
+Setup for SNMP monitoring includes configuring settings on your OT sensor and on your SNMP server. To define Defender for IoT sensors on your SNMP server, either define your settings manually or use a predefined SNMP MIB file downloaded from the Azure portal.
## Prerequisites
Before you perform the procedures in this article, make sure that you have the f
- **An OT sensor** [installed](ot-deploy/install-software-ot-sensor.md) and [activated](ot-deploy/activate-deploy-sensor.md), with access as an **Admin** user. For more information, see [On-premises users and roles for OT monitoring with Defender for IoT](roles-on-premises.md).
-To download a pre-defined SNMP MIB file from the Azure portal, you'll need access to the Azure portal as a [Security admin](../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), or [Owner](../../role-based-access-control/built-in-roles.md#owner) user. For more information, see [Azure user roles and permissions for Defender for IoT](roles-azure.md).
+To download a predefined SNMP MIB file from the Azure portal, you need access to the Azure portal as a [Security admin](../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), or [Owner](../../role-based-access-control/built-in-roles.md#owner) user. For more information, see [Azure user roles and permissions for Defender for IoT](roles-azure.md).
## Configure SNMP monitoring settings on your OT sensor
To download a pre-defined SNMP MIB file from the Azure portal, you'll need acces
## Download Defender for IoT's SNMP MIB file
-Defender for IoT in the Azure portal provides a downloadable MIB file for you to load into your SNMP monitoring system to pre-define Defender for IoT sensors.
+Defender for IoT in the Azure portal provides a downloadable MIB file for you to load into your SNMP monitoring system to predefine Defender for IoT sensors.
**To download the SNMP MIB file** from [Defender for IoT](https://portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) on the Azure portal, select **Sites and sensors** > **More actions** > **Download SNMP MIB file**. - ## OT sensor OIDs for manual SNMP configurations If you're configuring Defender for IoT sensors on your SNMP monitoring system manually, use the following table for reference regarding sensor object identifier values (OIDs): | Management console and sensor | OID | Format | Description | |--|--|--|--|
+| **sysDescr** | 1.3.6.1.2.1.1.1 | DISPLAYSTRING | Returns ```Microsoft Defender for IoT``` |
+| **Platform** | 1.3.6.1.2.1.1.1.0 | STRING | Sensor or on-premises management console |
+| **sysObjectID** | 1.3.6.1.2.1.1.2 | DISPLAYSTRING | Returns the private MIB allocation, for example ```1.3.6.1.4.1.53313.1.1``` is the private OID root for 1.3.6.1.4.1.53313 |
+| **sysUpTime** | 1.3.6.1.2.1.1.3 | DISPLAYSTRING | Returns the sensor uptime in hundredths of a second |
+| **sysContact** | 1.3.6.1.2.1.1.4 | DISPLAYSTRING | Returns the textual name of the admin user for this sensor |
+| **Vendor** | 1.3.6.1.2.1.1.4.0 | STRING | Microsoft Support (support.microsoft.com) |
+| **sysName** | 1.3.6.1.2.1.1.5 | DISPLAYSTRING | Returns the appliance name |
| **Appliance name** | 1.3.6.1.2.1.1.5.0 | STRING | Appliance name for the on-premises management console |
-| **Vendor** | 1.3.6.1.2.1.1.4.0 | STRING | Microsoft Support (support.microsoft.com) |
-| **Platform** | 1.3.6.1.2.1.1.1.0 | STRING | Sensor or on-premises management console |
+| **sysLocation** | 1.3.6.1.2.1.1.6 | DISPLAYSTRING | Returns the default location Portal.azure.com |
+| **sysServices** | 1.3.6.1.2.1.1.7 | INTEGER | Returns a value indicating the service this entity offers, for example, ```7``` signifies "applications" |
+| **ifIndex** | 1.3.6.1.2.1.2.2.1.1 | GAUGE32 | Returns the sequential ID numbers for each network card |
+| **ifDescription** | 1.3.6.1.2.1.2.2.1.2 | DISPLAYSTRING | Returns a string of the hardware description for each network interface card |
+| **ifType** | 1.3.6.1.2.1.2.2.1.3 | INTEGER | Returns the type of network adapter, for example ```1.3.6.1.2.1.2.2.1.3.117``` signifies Gigabit Ethernet |
+| **ifMtu** | 1.3.6.1.2.1.2.2.1.4 | GAUGE32 | Returns the MTU value for this network adapter. **Note** monitoring interfaces don't show an MTU value |
+| **ifspeed** | 1.3.6.1.2.1.2.2.1.5 | GAUGE32 | Returns the interface speed for this network adapter |
| **Serial number** | 1.3.6.1.4.1.53313.1 |STRING | String that the license uses | | **Software version** | 1.3.6.1.4.1.53313.2 | STRING | Xsense full-version string and management full-version string | | **CPU usage** | 1.3.6.1.4.1.53313.3.1 | GAUGE32 | Indication for zero to 100 | | **CPU temperature** | 1.3.6.1.4.1.53313.3.2 | STRING | Celsius indication for zero to 100 based on Linux input. <br><br> Any machine that has no actual physical temperature sensor (for example VMs) returns "No sensors found" | | **Memory usage** | 1.3.6.1.4.1.53313.3.3 | GAUGE32 | Indication for zero to 100 | | **Disk Usage** | 1.3.6.1.4.1.53313.3.4 | GAUGE32 | Indication for zero to 100 |
-| **Service Status** | 1.3.6.1.4.1.53313.5 |STRING | Online or offline if one of the four crucial components is down |
+| **Service Status** | 1.3.6.1.4.1.53313.5 | STRING | Online or offline if one of the four crucial components has failed |
| **Locally/cloud connected** | 1.3.6.1.4.1.53313.6 |STRING | Activation mode of this appliance: Cloud Connected / Locally Connected | | **License status** | 1.3.6.1.4.1.53313.7 |STRING | Activation period of this appliance: Active / Expiration Date / Expired | Note that: - Nonexisting keys respond with null, HTTP 200.-- Hardware-related MIBs (CPU usage, CPU temperature, memory usage, disk usage) should be tested on all architectures and physical sensors. CPU temperature on virtual machines is expected to be not applicable.
+- Hardware-related MIBs (CPU usage, CPU temperature, memory usage, disk usage) should be tested on all architectures and physical sensors. CPU temperature on virtual machines is expected to be not applicable.
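+
+For example, here's a minimal sketch that polls the CPU usage OID with the net-snmp command-line tools, assuming SNMP v2c and a community string configured on the sensor. Adjust the SNMP version and credentials to match your own settings; the community string and sensor address are placeholders.
+
+```bash
+# Placeholder values: replace the community string and sensor address with your own.
+# UDP port 161 is the default for SNMP queries.
+snmpget -v2c -c <community-string> <sensor-ip-address> 1.3.6.1.4.1.53313.3.1
+```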
## Next steps
defender-for-iot Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/whats-new.md
Features released earlier than nine months ago are described in the [What's new
|Service area |Updates | |||
-| **OT networks** | **Version 24.1.0**:<br> - [Alert suppression rules from the Azure portal (Public preview)](#alert-suppression-rules-from-the-azure-portal-public-preview)<br>- [Focused alerts in OT/IT environments](#focused-alerts-in-otit-environments)<br>- [Alert ID now aligned on the Azure portal and sensor console](#alert-id-now-aligned-on-the-azure-portal-and-sensor-console)<br>- [Newly supported protocols](#newly-supported-protocols)<br><br>**Cloud features**<br>- [New license renewal reminder in the Azure portal](#new-license-renewal-reminder-in-the-azure-portal) |
+| **OT networks** | **Version 24.1.0**:<br>- [Alert suppression rules from the Azure portal (Public preview)](#alert-suppression-rules-from-the-azure-portal-public-preview)<br>- [Focused alerts in OT/IT environments](#focused-alerts-in-otit-environments)<br>- [Alert ID now aligned on the Azure portal and sensor console](#alert-id-now-aligned-on-the-azure-portal-and-sensor-console)<br>- [Newly supported protocols](#newly-supported-protocols)<br><br>**Cloud features**<br>- [New license renewal reminder in the Azure portal](#new-license-renewal-reminder-in-the-azure-portal)<br>- [New fields for SNMP MIB OIDs](#new-fields-for-snmp-mib-oids)|
### Alert suppression rules from the Azure portal (Public preview)
For more information, see [Suppress irrelevant alerts](how-to-accelerate-alert-i
### Focused alerts in OT/IT environments
-Organizations where sensors are deployed between OT and IT networks deal with many alerts, related to both OT and IT traffic. The amount of alerts, some of which are irrelevant, can cause alert fatigue and affect overall performance.
+Organizations where sensors are deployed between OT and IT networks deal with many alerts, related to both OT and IT traffic. The number of alerts, some of which are irrelevant, can cause alert fatigue and affect overall performance.
To address these challenges, we've updated Defender for IoT's detection policy to automatically trigger alerts based on business impact and network context, and reduce low-value IT related alerts.
To migrate from the L60 profile to a supported profile follow the [Back up and r
### New license renewal reminder in the Azure portal
-When the license for one or more of your OT sites is about to expire, a note is visible at the top of Defender for IoT in the Azure portal, reminding you to renew your licenses. To continue to get security value from Defender for IoT, select the link in the note to renew the relevant licenses in the Microsoft 365 admin center. Learn more about [Defender for IoT billing](billing.md).
+When the license for one or more of your OT sites is about to expire, a note is visible at the top of Defender for IoT in the Azure portal, reminding you to renew your licenses. To continue to get security value from Defender for IoT, select the link in the note to renew the relevant licenses in the Microsoft 365 admin center. Learn more about [Defender for IoT billing](billing.md).
:::image type="content" source="media/whats-new/license-renewal-note.png" alt-text="Screenshot of the license renewal reminder note." lightbox="media/whats-new/license-renewal-note.png":::
+### New fields for SNMP MIB OIDs
+
+Additional standard, generic fields have been added to the SNMP MIB OIDs. For the full list of fields, see [OT sensor OIDs for manual SNMP configurations](how-to-set-up-snmp-mib-monitoring.md#ot-sensor-oids-for-manual-snmp-configurations).
+ ## January 2024 |Service area |Updates |
dms Known Issues Azure Sql Migration Azure Data Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/known-issues-azure-sql-migration-azure-data-studio.md
This article provides a list of known issues and troubleshooting steps associate
## Azure Database Migration Service Naming Rules
-If your DMS service failed with "Error: Service name 'x_y_z' is not valid", then you need to follow the Azure Database Migration Service Naming Rules. As Azure Database Migration Service uses Azure Data factory for its compute, it follows the exact same naming rules as mentioned [here](https://learn.microsoft.com/azure/data-factory/naming-rules).
+If your DMS service failed with "Error: Service name 'x_y_z' is not valid", then you need to follow the Azure Database Migration Service naming rules. Because Azure Database Migration Service uses Azure Data Factory for its compute, it follows the same naming rules described in [Azure Data Factory naming rules](../data-factory/naming-rules.md).
## Azure SQL Database limitations
expressroute Expressroute Howto Circuit Portal Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-circuit-portal-resource-manager.md
Sign in to the Azure portal with this [Preview link](https://aka.ms/expressroute
1. Select the **Subscription** and **Resource Group** for the circuit. Then select the type of **Resiliency** for your setup.
- **Maximum Resiliency** - This option provides the highest level of resiliency for your ExpressRoute circuit. It provides two ExpressRoute circuits with local redundancy in two different ExpressRoute locations.
+ **Maximum Resiliency (Recommended)** - This option provides the highest level of resiliency for your ExpressRoute circuit. It provides two ExpressRoute circuits with local redundancy in two different ExpressRoute locations.
+
+ > [!NOTE]
+ > This option provides maximum protection against location-wide outages and connectivity failures in an ExpressRoute location, and is strongly recommended for mission-critical and production workloads.
:::image type="content" source="./media/expressroute-howto-circuit-portal-resource-manager/maximum-resiliency.png" alt-text="Diagram of maximum resiliency for an ExpressRoute connection."::: **Standard Resiliency** - This option provides a single ExpressRoute circuit with local redundancy at a single ExpressRoute location.
+ > [!NOTE]
+ > This option doesn't provide protection against location-wide outages, and is recommended for non-critical and non-production workloads.
+
:::image type="content" source="./media/expressroute-howto-circuit-portal-resource-manager/standard-resiliency.png" alt-text="Diagram of standard resiliency for an ExpressRoute connection."::: 1. Enter or select the following information for the respective resiliency type.
expressroute Expressroute Howto Linkvnet Portal Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-linkvnet-portal-resource-manager.md
This article helps you create a connection to link a virtual network (virtual ne
5. Select the resiliency type for your connection. You can choose **Maximum resiliency** or **Standard resiliency**.
- **Maximum resiliency** - This option provides the highest level of resiliency to your virtual network. It provides two redundant connections from the virtual network gateway to two different ExpressRoute circuits in different ExpressRoute locations.
+ **Maximum resiliency (Recommended)** - This option provides the highest level of resiliency to your virtual network. It provides two redundant connections from the virtual network gateway to two different ExpressRoute circuits in different ExpressRoute locations.
+ > [!NOTE]
+ > This option provides maximum protection against location-wide outages and connectivity failures in an ExpressRoute location, and is strongly recommended for mission-critical and production workloads.
+ :::image type="content" source="./media/expressroute-howto-linkvnet-portal-resource-manager/maximum-resiliency.png" alt-text="Diagram of a virtual network gateway connected to two different ExpressRoute circuits."::: **Standard resiliency** - This option provides a single redundant connection from the virtual network gateway to a single ExpressRoute circuit.
+ > [!NOTE]
+ > This option doesn't provide protection against location-wide outages, and is recommended for non-critical and non-production workloads.
+ :::image type="content" source="./media/expressroute-howto-linkvnet-portal-resource-manager/standard-resiliency.png" alt-text="Diagram of a virtual network gateway connected to a single ExpressRoute circuit."::: 6. Enter the following information for the respective resiliency type and then select **Review + create**. Then select **Create** after validation completes.
governance General https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/troubleshoot/general.md
are prevented from being created or updated.
The error message from a deny policy assignment includes the policy definition and policy assignment IDs. If the error information in the message is missed, it's also available in the
-[Activity log](../../../azure-monitor/essentials/activity-log.md#view-the-activity-log). Use this
+[Activity log](../../../azure-monitor/essentials/activity-log-insights.md#view-the-activity-log). Use this
information to get more details to understand the resource restrictions and adjust the resource properties in your request to match allowed values.
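
For example, a minimal sketch that lists recent failed operations, where denied requests appear, using the Azure CLI. The resource group name and time window are placeholders:

```bash
# Placeholder values: list failed operations from the last hour in a resource group.
az monitor activity-log list --resource-group myResourceGroup --status Failed --offset 1h
```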
governance Power Bi Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/troubleshoot/power-bi-connector.md
Title: Troubleshoot Azure Resource Graph Power BI connector description: Learn how to troubleshoot issues with Azure Resource Graph Power BI connector. Previously updated : 02/22/2024 Last updated : 02/28/2024
ARG Power BI connector isn't available in the following products:
The ARG Power BI connector only supports [import connections](/power-bi/connect-data/desktop-directquery-about#import-connections). The ARG Power BI connector doesn't support `DirectQuery` connections. For more information about connectivity modes and their differences, go to [DirectQuery in Power BI](/power-bi/connect-data/desktop-directquery-about).
-## Load times and throttling
+## Load times
-The load time for ARG queries in Power BI is contingent on the query size. Larger query results might lead to extended load times.
-
-If you're experiencing a 429 error, which is due to throttling, go to [Guidance for throttled requests in Azure Resource Graph](../concepts/guidance-for-throttled-requests.md).
+The load time for ARG queries in Power BI is contingent on the query response size. Larger query results might lead to extended load times.
## Unexpected Results If your query yields unexpected or inaccurate results, consider the following scenarios: -- **Verify permissions**: Confirm that your [Azure role-based access control (Azure RBAC) permissions](../../../role-based-access-control/overview.md) are accurate. Ensure you have at least read access to the resources you want to query. Queries don't return results without adequate permissions to the Azure object or object group.-- **Check for comments**: Review your query and remove any comments (`//`) because Power BI doesn't support Kusto comments. Comments might affect your query results.
+- **Verify permissions**: Confirm that your [Azure role-based access control (Azure RBAC) permissions](../../../role-based-access-control/overview.md) are accurate, as the results of the queries are subject to Azure RBAC. Ensure you have at least read access to the resources you want to query. Queries don't return results without adequate permissions to the Azure object or object group.
+- **Check for comments**: Review your query and remove any comments (`//`) because Power BI doesn't support Kusto comments.
- **Compare results**: For parity checks, run your query in both the ARG Explorer in Azure portal and the ARG Power BI connector. Compare the results obtained from both platforms for consistency. ## Errors
The following table contains descriptions of common ARG Power BI connector error
| Error | Description | | - | - | | Invalid query | Query that was entered isn't valid. Check the syntax of your query and refer to the ARG [Kusto Query Language (KQL)](../concepts/query-language.md#supported-kql-language-elements) for guidance. |
-| Scope check | If you're querying at the tenant scope, delete all inputs in the subscription ID or management group ID fields. <br> <br> If you have inputs in the subscriptions ID or management group ID fields that you want to filter for, select either subscription or management group from the drop-down scope field. |
+| Scope check | If you're querying at the tenant scope, ensure the subscription ID and management group ID fields are empty. <br> <br> If you have inputs in the subscriptions ID or management group ID fields that you want to filter for, select either subscription or management group from the drop-down scope field. |
| Scope subscription mismatch | The subscription scope was selected from the scope drop-down field but a management group ID was entered. | | Scope management group mismatch | The management group scope was selected from the scope drop-down field but a subscription ID was entered. |
healthcare-apis Fhir Rest Api Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/fhir-rest-api-capabilities.md
For example: `https://myworkspace-myfhirserver.fhir.azurehealthcareapis.com/Pati
After you've found the record you want to restore, use the `PUT` operation to recreate the resource with the same ID, or use the `POST` operation to make a new resource with the same information. > [!NOTE]
-> There is no time-based expiration for history/soft delete data. The only way to remove history/soft deleted data is with a hard delete or the purge history operation.
+> There is no time-based expiration for history/soft delete data. The only way to remove history/soft deleted data is with a hard delete or the purge history operation.
+
+## Batch Bundles
+In FHIR, a bundle is a container that holds multiple resources. Batch bundles enable users to submit a set of actions to be performed on a server in a single HTTP request/response.
+
+A batch bundle interaction with the FHIR service is performed with an HTTP POST command at the base URL.
+```rest
+POST {{fhir_url}}
+{
+ "resourceType": "Bundle",
+ "type": "batch",
+ "entry": [
+ {
+ "resource": {
+ "resourceType": "Patient",
+ "id": "patient-1",
+ "name": [
+ {
+ "given": ["Alice"],
+ "family": "Smith"
+ }
+ ],
+ "gender": "female",
+ "birthDate": "1990-05-15"
+ },
+ "request": {
+ "method": "POST",
+ "url": "Patient"
+ }
+ },
+ {
+ "resource": {
+ "resourceType": "Patient",
+ "id": "patient-2",
+ "name": [
+ {
+ "given": ["Bob"],
+ "family": "Johnson"
+ }
+ ],
+ "gender": "male",
+ "birthDate": "1985-08-23"
+ },
+ "request": {
+ "method": "POST",
+ "url": "Patient"
+ }
+ }
+ ]
+}
+```
+
+In the case of a batch, each entry is treated as an individual interaction or operation.
+> [!NOTE]
+> For batch bundles, there should be no interdependencies between different entries in the FHIR bundle. The success or failure of one entry shouldn't affect the success or failure of another entry.
+
+### Batch bundle parallel processing in public preview
+
+Currently, batch bundles are executed serially in the FHIR service. To improve performance and throughput, we're enabling parallel processing of batch bundles in public preview.
+To use parallel batch bundle processing:
+* Set the header "x-bundle-processing-logic" value to "parallel". An example request is shown after this list.
+* Ensure that no resource ID is targeted by more than one DELETE, POST, PUT, or PATCH operation in the same bundle.
+
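+For example, here's a minimal sketch that posts a batch bundle with parallel processing enabled, using curl. The service URL, access token, and bundle file name are placeholders:
+
+```bash
+# Placeholder values: supply your own FHIR service URL, access token, and bundle file.
+curl -X POST "https://<your-fhir-service>.fhir.azurehealthcareapis.com" \
+  -H "Authorization: Bearer $TOKEN" \
+  -H "Content-Type: application/fhir+json" \
+  -H "x-bundle-processing-logic: parallel" \
+  --data @batch-bundle.json
+```
+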
+> [!IMPORTANT]
+> Bundle parallel processing is currently in public preview. Preview APIs and SDKs are provided without a service-level agreement. We recommend that you don't use them for production workloads. Some features might not be supported, or they might have constrained capabilities. For more information, review the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
## Patch and Conditional Patch
Content-Type: `application/json`
``` ## Performance consideration with Conditional operations
-Conditional interactions can be complex and performance-intensive. To enhance the latency of queries involving conditional interactions, you have the option to utilize the request header **x-conditionalquery-processing-logic** . Setting this header to **parallel** allows concurrent execution of queries with conditional interactions.
+1. Conditional interactions can be complex and performance-intensive. To reduce the latency of queries that involve conditional interactions, you can use the request header **x-conditionalquery-processing-logic**. Setting this header to **parallel** allows concurrent execution of queries with conditional interactions. An example request is shown after this list.
+2. When the **x-ms-query-latency-over-efficiency** header value is set to "true", all queries are executed with the maximum supported parallelism, which forces the scans of physical partitions to run concurrently. This capability is designed for accounts with a high number of physical partitions, where queries can take longer because of the number of physical segments that need to be scanned.
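+
+For example, here's a minimal sketch of a conditional delete request with parallel conditional query processing enabled, using curl. The service URL, access token, and search criteria are placeholders:
+
+```bash
+# Placeholder values: supply your own FHIR service URL, access token, and search criteria.
+curl -X DELETE "https://<your-fhir-service>.fhir.azurehealthcareapis.com/Patient?identifier=1032704" \
+  -H "Authorization: Bearer $TOKEN" \
+  -H "x-conditionalquery-processing-logic: parallel"
+```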
## Next steps
healthcare-apis Import Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/import-data.md
The `import` operation supports two modes: initial mode and incremental mode. Ea
> > Also, if multiple resources share the same resource ID, then only one of those resources is imported at random. An error is logged for the resources sharing the same resource ID.
+The following table shows the differences between the import modes. An example request is shown after the table.
+|Areas|Initial mode |Incremental mode |
+|:- |:-|:--|
+|Capability|Initial load of data into FHIR service|Continuous ingestion of data into FHIR service (Incremental or Near Real Time).|
+|Concurrent API calls|Blocks concurrent write operations|Data can be ingested concurrently while executing API CRUD operations on the FHIR server.|
+|Ingestion of versioned resources|Not supported|Enables ingestion of multiple versions of FHIR resources in single batch while maintaining resource history.|
+|Retain lastUpdated field value|Not supported|Retain the lastUpdated field value in FHIR resources during the ingestion process.|
+|Billing| Does not incur any charge|Incurs charges based on successfully ingested resources. Charges are incurred per API pricing.|
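+
+For illustration, here's a hedged sketch of an incremental mode `import` request using curl. The service URL, access token, and storage URL are placeholders, and the parameter names and mode values shown are assumptions; confirm them against the `import` configuration documentation for your FHIR service.
+
+```bash
+# Placeholder values throughout; verify the parameter names and mode values for your FHIR service.
+curl -X POST "https://<your-fhir-service>.fhir.azurehealthcareapis.com/\$import" \
+  -H "Authorization: Bearer $TOKEN" \
+  -H "Content-Type: application/fhir+json" \
+  -H "Prefer: respond-async" \
+  --data '{
+    "resourceType": "Parameters",
+    "parameter": [
+      { "name": "inputFormat", "valueString": "application/fhir+ndjson" },
+      { "name": "mode", "valueString": "IncrementalLoad" },
+      { "name": "input", "part": [
+        { "name": "type", "valueString": "Patient" },
+        { "name": "url", "valueUri": "https://<your-storage-account>.blob.core.windows.net/fhirimport/Patient.ndjson" }
+      ]}
+    ]
+  }'
+```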
+ ## Performance considerations To achieve the best performance with the `import` operation, consider these factors:
Here are the error messages that occur if the `import` operation fails, and reco
[Copy data to Azure Synapse Analytics](copy-to-synapse.md)
healthcare-apis Register Application Cli Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/register-application-cli-rest.md
In practice, you'll define variables, assign values to them, and set references
### Define app registration name, etc. appregname=myappregtest1 clientid=$(az ad app create --display-name $appregname --query appId --output tsv)
-objectid=$(az ad app show --id $clientid --query objectId --output tsv)
+objectid=$(az ad app show --id $clientid --query id --output tsv)
``` You can use `echo $<variable name>` to display the value of a specified variable.
iot-operations Howto Manage Secrets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/deploy-iot-ops/howto-manage-secrets.md
For more information, see [Deploy Azure IoT Operations extensions](./howto-deplo
## Configure service principal and Azure Key Vault upfront
-If the Azure account executing the `az iot ops init` command does not have permissions to query the Microsoft Graph and create service principals, you can prepare these upfront and use extra arguments when running the CLI command as described in [Deploy Azure IoT Operations extensions](./howto-deploy-iot-operations.md?tabs=cli).
+If the Azure account executing the `az iot ops init` command doesn't have permissions to query the Microsoft Graph and create service principals, you can prepare these upfront and use extra arguments when running the CLI command as described in [Deploy Azure IoT Operations extensions](./howto-deploy-iot-operations.md?tabs=cli).
### Configure service principal for interacting with Azure Key Vault via Microsoft Entra ID Follow these steps to create a new Application Registration that will be used by the AIO application to authenticate to Key Vault.
-First, register an application with Microsoft Entra ID.
+First, register an application with Microsoft Entra ID:
1. In the Azure portal search bar, search for and select **Microsoft Entra ID**.
First, register an application with Microsoft Entra ID.
1. Select **Register**.
- When your application is created, you are directed to its resource page.
+ When your application is created, you're directed to its resource page.
1. Copy the **Application (client) ID** from the app registration overview page. You'll use this value as an argument when running Azure IoT Operations deployment with the `az iot ops init` command.
-Next, give your application permissions for key vault.
+Next, give your application permissions for key vault:
1. On the resource page for your app, select **API permissions** from the **Manage** section of the app menu.
Next, give your application permissions for key vault.
1. Select **Add permissions**.
-Create a client secret that will be added to your Kubernetes cluster to authenticate to your key vault.
+Create a client secret that will be added to your Kubernetes cluster to authenticate to your key vault:
1. On the resource page for your app, select **Certificates & secrets** from the **Manage** section of the app menu.
Create a client secret that will be added to your Kubernetes cluster to authenti
1. Copy the **Value** from your new secret. You'll use this value later when you run `az iot ops init`.
-Retrieve the service principal Object Id
+Retrieve the service principal Object ID:
-1. On the **Overview** page for your app, under the section **Essentials**, click on the **Application name** link under **Managed application in local directory**. This opens the Enterprise Application properties. Copy the Object Id to use when you run `az iot ops init`.
+1. On the **Overview** page for your app, under the **Essentials** section, select the **Application name** link under **Managed application in local directory**. This opens the Enterprise Application properties. Copy the Object ID to use when you run `az iot ops init`.
### Create an Azure Key Vault
If you have an existing key vault, you can change the permission model by execut
```bash az keyvault update --name "<your unique key vault name>" --resource-group "<the name of the resource group>" --enable-rbac-authorization false ```
-You will need the Key Vault resource ID when you run `az iot ops init`. To retrieve the resource ID, run:
+You'll need the Key Vault resource ID when you run `az iot ops init`. To retrieve the resource ID, run:
```bash az keyvault show --name "<your unique key vault name>" --resource-group "<the name of the resource group>" --query id -o tsv ```
-### Set service principal access policy in Azue Key Vault
+### Set service principal access policy in Azure Key Vault
The newly created service principal needs a **Secret** `list` and `get` access policy for Azure IoT Operations to work with the secret store.
az keyvault set-policy --name "<your unique key vault name>" --resource-group "<
### Pass service principal and Key Vault arguments to Azure IoT Operations deployment
-When following the guide [Deploy Azure IoT Operations extensions](./howto-deploy-iot-operations.md?tabs=cli), you will need to pass in additional flags to the `az iot ops init` command in order to use the pre-configured service principal and key vault.
+When following the guide [Deploy Azure IoT Operations extensions](./howto-deploy-iot-operations.md?tabs=cli), you'll need to pass in additional flags to the `az iot ops init` command in order to use the pre-configured service principal and key vault.
The following example shows how to prepare the cluster for Azure IoT Operations without fully deploying it by using `--no-deploy` flag. You can also run the command without this argument for a default Azure IoT Operations deployment.
Once you have the secret store set up on your cluster, you can create and add Az
1. Save your changes and apply them to your cluster. If you use k9s, your changes are automatically applied.
-The CSI driver updates secrets according to a polling interval, so a new secret won't be updated on the pods until the next polling interval. If you want the secrets to be updated immediately, update the pods for that component. For example, for the Azure IoT Data Processor component, update the `aio-dp-reader-worker-0` and `aio-dp-runner-worker-0` pods.
+The CSI driver updates secrets by using a polling interval, so a new secret isn't available to the pod until the next polling interval. To update a component immediately, restart the pods for the component. For example, to restart the Data Processor component, run the following commands:
+
+```console
+kubectl delete pod aio-dp-reader-worker-0 -n azure-iot-operations
+kubectl delete pod aio-dp-runner-worker-0 -n azure-iot-operations
+```
## Azure IoT MQ secrets
iot-operations Quickstart Process Telemetry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/get-started/quickstart-process-telemetry.md
Before you send data to the cloud for storage and analysis, you might want to pr
## Add a secret to your cluster
+To access the lakehouse from a Data Processor pipeline, you need to enable your cluster to access the service principal details you created earlier. You need to configure your Azure Key Vault with the service principal details so that the cluster can retrieve them.
+ [!INCLUDE [add-cluster-secret](../includes/add-cluster-secret.md)] ## Create a basic pipeline
iot-operations Tutorial Anomaly Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/view-analyze-data/tutorial-anomaly-detection.md
To add a table to the `bakery_ops` database to store the anomaly data, navigate
### Add a secret to your cluster
+To access the Azure Data Explorer database from a Data Processor pipeline, you need to enable your cluster to access the service principal details you created earlier. You need to configure your Azure Key Vault with the service principal details so that the cluster can retrieve them.
+ [!INCLUDE [add-cluster-secret](../includes/add-cluster-secret.md)] ## Assets and measurements
iot-operations Tutorial Overall Equipment Effectiveness https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/view-analyze-data/tutorial-overall-equipment-effectiveness.md
Make a note of your workspace ID and lakehouse ID, you need them later. You can
### Add a secret to your cluster
+To access the lakehouse from a Data Processor pipeline, you need to enable your cluster to access the service principal details you created earlier. You need to configure your Azure Key Vault with the service principal details so that the cluster can retrieve them.
+ [!INCLUDE [add-cluster-secret](../includes/add-cluster-secret.md)] ## Understand the scenario and data
iot Iot Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/iot-glossary.md
Previously updated : 08/26/2022 Last updated : 02/28/2024 # Generated from YAML source.
Casing rules: Always capitalize as *Azure IoT Explorer*.
Applies to: Iot Hub, Device developer
+### Azure IoT Operations Preview - enabled by Azure Arc
+
+A unified data plane for the edge. It's a collection of modular, scalable, and highly available data services that run on Azure Arc-enabled edge Kubernetes clusters. It enables data capture from various systems and integrates with data modeling applications such as Microsoft Fabric to help organizations deploy the industrial metaverse.
+
+[Learn more](../iot-operations/get-started/overview-iot-operations.md)
+
+On first mention in an article, use *Azure IoT Operations Preview - enabled by Azure Arc*. On subsequent mentions, you can use *Azure IoT Operations*. Never use an acronym.
+
+Casing rules: Always capitalize as *Azure IoT Operations Preview - enabled by Azure Arc* or *Azure IoT Operations*.
+ ### Azure IoT Tools A cross-platform, open-source, Visual Studio Code extension that helps you manage Azure [IoT Hub](#iot-hub) and [devices](#device) in VS Code. With Azure IoT Tools, IoT developers can easily develop an IoT project in VS Code
Casing rules: Always lowercase.
Applies to: Iot Hub, Device Provisioning Service
-### Industry 4.0
-
-Refers to the fourth revolution that's occurred in manufacturing. Companies can build connected [solutions](#solution) to manage the manufacturing facility and equipment more efficiently by enabling manufacturing equipment to be cloud connected, allowing remote access and management from the cloud, and enabling OT personnel to have a single pane view of their entire facility.
-
-Applies to: Iot Hub, IoT Central
- ### Interface In IoT Plug and Play, an interface describes related capabilities that are implemented by a [IoT Plug and Play device](#iot-plug-and-play-device) or [digital twin](#digital-twin). You can reuse interfaces across different [device models](#device-model). When an interface is used in a [device](#device) [model](#model), it defines a [component](#component) of the device. A simple device only contains a default interface.
iot Iot Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/iot-introduction.md
Previously updated : 05/02/2023 Last updated : 02/27/2024 #Customer intent: As a newcomer to IoT, I want to understand what IoT is, what services are available, and examples of business cases so I can figure out where to start.
To build an IoT solution for your business, you typically evaluate your solution
A managed app platform lets you quickly evaluate your IoT solution by reducing the number of decisions needed to achieve results. The managed app platform takes care of most infrastructure elements in your solution, letting you focus on adding industry knowledge and evaluating the solution. Azure IoT Central is a managed app platform.
-Platform services provide all the building blocks for customized and flexible IoT applications. You have more options to choose and code when you connect your devices, and ingest, store, and analyze your data. Azure IoT platform services include Azure IoT Hub, Device Provisioning Service, and Azure Digital Twins. Other platform services that may be part of your IoT solution include Azure Data Explorer, Azure Storage platform, and Azure Functions.
+Platform services provide all the building blocks for customized and flexible IoT applications. You have more options to choose and code when you connect your devices, and ingest, store, and analyze your data. Azure IoT platform services include Azure IoT Hub, Device Provisioning Service, and Azure Digital Twins. Other platform services that might be part of your IoT solution include Azure Data Explorer, Azure Storage platform, and Azure Functions.
| Managed app platform | Platform services | |-|-| | Take advantage of a platform that handles the security and management of your IoT applications and devices. | Have full control over the underlying services in your solution. For example: </br> Scaling and securing services to meet your needs. </br> Using in-house or partner expertise to onboard devices and provision services. | | Customize branding, dashboards, user roles, devices, and telemetry. However, you can't customize the underlying IoT services. | Fully customize and control your IoT solution. | | Has a simple, predictable pricing structure. | Let you fine-tune services to control overall costs. |
-| Solution can be a single Azure service. | Solution is a collection of Azure services such as Azure IoT Hub, Device Provisioning Service, Azure Digital Twins, Azure Data Explorer, Azure Storage platform, and Azure Function. |
+| Solution can be a single Azure service. | Solution is a collection of Azure services such as Azure IoT Hub, Device Provisioning Service, Azure Digital Twins, Azure Data Explorer, Azure Storage platform, and Azure Functions. |
To learn more, see [What Azure technologies and services can you use to create IoT solutions?](iot-services-and-technologies.md).
An IoT device is typically made up of a circuit board with sensors attached that
* An accelerometer in an elevator. * Presence sensors in a room.
-There's a wide variety of devices available from different manufacturers to build your solution. For prototyping a microprocessor device, you can use a device such as a [Raspberry Pi](https://www.raspberrypi.org/). The Raspberry Pi lets you attach many different types of sensor. For prototyping a microcontroller device, you can use devices such as the [ESPRESSIF ESP32](../iot-develop/quickstart-devkit-espressif-esp32-freertos-iot-hub.md), [STMicroelectronics B-U585I-IOT02A Discovery kit](../iot-develop/quickstart-devkit-stm-b-u585i-iot-hub.md), [STMicroelectronics B-L4S5I-IOT01A Discovery kit](../iot-develop/quickstart-devkit-stm-b-l4s5i-iot-hub.md), or [NXP MIMXRT1060-EVK Evaluation kit](../iot-develop/quickstart-devkit-nxp-mimxrt1060-evk-iot-hub.md). These boards typically have built-in sensors, such as temperature and accelerometer sensors.
+There's a wide variety of devices available from different manufacturers to build your solution. For prototyping a microprocessor device, you can use a device such as a [Raspberry Pi](https://www.raspberrypi.org/). The Raspberry Pi lets you attach many different types of sensor. For prototyping a microcontroller device, use devices such as the [ESPRESSIF ESP32](../iot-develop/quickstart-devkit-espressif-esp32-freertos-iot-hub.md), [STMicroelectronics B-U585I-IOT02A Discovery kit](../iot-develop/quickstart-devkit-stm-b-u585i-iot-hub.md), [STMicroelectronics B-L4S5I-IOT01A Discovery kit](../iot-develop/quickstart-devkit-stm-b-l4s5i-iot-hub.md), or [NXP MIMXRT1060-EVK Evaluation kit](../iot-develop/quickstart-devkit-nxp-mimxrt1060-evk-iot-hub.md). These boards typically have built-in sensors, such as temperature and accelerometer sensors.
Microsoft provides open-source [Device SDKs](../iot-hub/iot-hub-devguide-sdks.md) that you can use to build the apps that run on your devices.
Typically, IoT devices send telemetry from their attached sensors to cloud servi
* A cloud service sets the target temperature for a thermostat device.
-The [IoT Device SDKs](../iot-hub/iot-hub-devguide-sdks.md) and IoT Hub support common [communication protocols](../iot-hub/iot-hub-devguide-protocols.md) such as HTTP, MQTT, and AMQP for device-to-cloud and cloud-to-device communication. In some scenarios, you may need a gateway to connect your IoT devices to your cloud services.
+The [IoT Device SDKs](../iot-hub/iot-hub-devguide-sdks.md) and IoT Hub support common [communication protocols](../iot-hub/iot-hub-devguide-protocols.md) such as HTTP, MQTT, and AMQP for device-to-cloud and cloud-to-device communication. In some scenarios, you might need a gateway to connect your IoT devices to your cloud services.
IoT devices have different characteristics when compared to other clients such as browsers and mobile apps. Specifically, IoT devices: * Are often embedded systems with no human operator. * Can be deployed in remote locations, where physical access is expensive.
-* May only be reachable through the solution back end.
-* May have limited power and processing resources.
-* May have intermittent, slow, or expensive network connectivity.
-* May need to use proprietary, custom, or industry-specific application protocols.
+* Might only be reachable through the solution back end.
+* Might have limited power and processing resources.
+* Might have intermittent, slow, or expensive network connectivity.
+* Might need to use proprietary, custom, or industry-specific application protocols.
The device SDKs help you address the challenges of connecting devices securely and reliably to your cloud services.
Some cloud services, such as IoT Hub and the Device Provisioning Service, are Io
To learn more, see: -- [Device management and control](iot-overview-device-management.md)-- [Message processing in an IoT solution](iot-overview-message-processing.md)-- [Extend your IoT solution](iot-overview-solution-extensibility.md)-- [Analyze and visualize your IoT data](iot-overview-analyze-visualize.md)
+* [Device management and control](iot-overview-device-management.md)
+* [Message processing in an IoT solution](iot-overview-message-processing.md)
+* [Extend your IoT solution](iot-overview-solution-extensibility.md)
+* [Analyze and visualize your IoT data](iot-overview-analyze-visualize.md)
## Solution-wide concerns Any IoT solution must address the following solution-wide concerns:
-* [Security](iot-overview-security.md) including physical security, authentication, authorization, and encryption
+* [Security](iot-overview-security.md) including physical security, authentication, authorization, and encryption.
* [Solution management](iot-overview-solution-management.md) including deployment and monitoring. * High availability and disaster recovery for all the components in your solution. * Scalability for all the services in your solution.
+## IoT Operations
+
+_Azure IoT Operations Preview – enabled by Azure Arc_ is a unified data plane for the edge. Azure IoT Operations is a set of modular, scalable, and highly available data services that run on Azure Arc-enabled edge Kubernetes clusters. It enables data capture from various systems and integrates with data modeling applications such as Microsoft Fabric to help organizations deploy the industrial metaverse. To learn more, see [What is Azure IoT Operations?](../iot-operations/get-started/overview-iot-operations.md).
+ ## Next steps Suggested next steps to explore Azure IoT further include: * [IoT device development](iot-overview-device-development.md) * [Device infrastructure and connectivity](iot-overview-device-connectivity.md)
-* [Azure IoT services and technologies](iot-services-and-technologies.md).
+* [Azure IoT services and technologies](iot-services-and-technologies.md)
To learn more about Azure IoT architecture, see:
iot Iot Overview Analyze Visualize https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/iot-overview-analyze-visualize.md
Previously updated : 04/11/2023 Last updated : 02/28/2024 # Customer intent: As a solution builder, I want a high-level overview of the options for analyzing and visualizing device data in an IoT solution.
There are many services you can use to analyze and visualize your IoT data. Some
Use [Azure Databricks](/azure/databricks/introduction/) to process, store, clean, share, analyze, model, and monetize datasets with solutions from BI to machine learning. Use the Azure Databricks platform to build and deploy data engineering workflows, machine learning models, analytics dashboards, and more. - [Use structured streaming with Azure Event Hubs and Azure Databricks clusters](/azure/databricks/structured-streaming/streaming-event-hubs/). You can connect a Databricks workspace to the Event Hubs-compatible endpoint on an IoT hub to read data from IoT devices.-- [Extend Azure IoT Central with custom analytics](../iot-central/core/howto-create-custom-analytics.md)
+- [Extend Azure IoT Central with custom analytics](../iot-central/core/howto-create-custom-analytics.md).
### Azure Stream Analytics
Azure Stream Analytics is a fully managed stream processing engine that is desig
### Power BI
-[Power BI](/power-bi/fundamentals/power-bi-overview) is a collection of software services, apps, and connectors that work together to turn your unrelated sources of data into coherent, visually immersive, and interactive insights. Power BI lets you easily connect to your data sources, visualize and discover what's important, and share that with anyone or everyone you want.
+[Power BI](/power-bi/fundamentals/power-bi-overview) is a collection of software services, apps, and connectors that work together to turn your unrelated sources of data into coherent, visually immersive, and interactive insights. Power BI lets you easily connect to your data sources, visualize and discover what's important, and share reports with anyone or everyone you want.
- [Visualize real-time sensor data from Azure IoT Hub using Power BI](../iot-hub/iot-hub-live-data-visualization-in-power-bi.md) - [Export data from Azure IoT Central and visualize insights in Power BI](../iot-central/retail/tutorial-in-store-analytics-export-data-visualize-insights.md) ### Azure Maps
-[Azure Maps](../azure-maps/about-azure-maps.md) is a collection of geospatial services and SDKs that use fresh mapping data to provide geographic context to web and mobile applications. For an IoT example, see [Integrate with Azure Maps (Azure Digital Twins)](../digital-twins/how-to-integrate-maps.md)
+[Azure Maps](../azure-maps/about-azure-maps.md) is a collection of geospatial services and SDKs that use fresh mapping data to provide geographic context to web and mobile applications. For an IoT example, see [Integrate with Azure Maps (Azure Digital Twins)](../digital-twins/how-to-integrate-maps.md).
### Grafana
iot Iot Overview Device Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/iot-overview-device-connectivity.md
Previously updated : 03/20/2023 Last updated : 02/28/2024 - template-overview - ignite-2023
An Azure IoT hub exposes a collection of per-device endpoints that let devices e
- *Retrieve and update device twin properties*. A device uses this endpoint to access its device twin properties. - *Receive direct method requests*. A device uses this endpoint to listen for direct method requests.
-Every IoT hub has a unique hostname that's used to connect devices to the hub. The hostname is in the format `iothubname.azure-devices.net`. If you use one of the device SDKs, you don't need to know the full names of the individual endpoints because the SDKs provide higher level abstractions. However, the device does need to know the hostname of the IoT hub to which it's connecting.
+Every IoT hub has a unique hostname that you use to connect devices to the hub. The hostname is in the format `iothubname.azure-devices.net`. If you use one of the device SDKs, you don't need to know the full names of the individual endpoints because the SDKs provide higher level abstractions. However, the device does need to know the hostname of the IoT hub to which it's connecting.
A device can establish a secure connection to an IoT hub:
To learn more about implementing automatic reconnections to endpoints, see [Mana
A device connection string provides a device with the information it needs to connect securely to an IoT hub. The connection string includes the following information: - The hostname of the IoT hub.-- The device ID that's registered with the IoT hub.
+- The device ID registered with the IoT hub.
- The security information the device needs to establish a secure connection to the IoT hub.
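
For example, a device connection string typically has the following shape. The values are placeholders for illustration only; treat the shared access key as a secret and store it securely on the device.

```bash
# Placeholder values for illustration only; never hard-code real keys in source control.
IOTHUB_DEVICE_CONNECTION_STRING="HostName=myhub.azure-devices.net;DeviceId=my-device-01;SharedAccessKey=<base64-encoded-key>"
```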
-## Authentication and authorization
+## Authentication
Azure IoT devices use TLS to verify the authenticity of the IoT hub or DPS endpoint they're connecting to. The device SDKs include the DigiCert Global Root G2 TLS certificate they currently need to establish a secure connection to the IoT hub. To learn more, see [Transport Layer Security (TLS) support in IoT Hub](../iot-hub/iot-hub-tls-support.md) and [TLS support in Azure IoT Hub Device Provisioning Service (DPS)](../iot-dps/tls-support.md).
To learn more about how to choose a protocol for your devices to connect to the
- [Communicate with DPS using the HTTPS protocol (symmetric keys)](../iot-dps/iot-dps-https-sym-key-support.md) - [Communicate with DPS using the HTTPS protocol (X.509)](../iot-dps/iot-dps-https-x509-support.md)
-Industrial IoT scenarios often use the [open platform communications unified architecture (OPC UA)](https://opcfoundation.org/about/opc-technologies/opc-u).
+Industrial IoT scenarios often use the [open platform communications unified architecture (OPC UA)](https://opcfoundation.org/about/opc-technologies/opc-u).
## Connection patterns
Ephemeral connections are brief connections for devices to send telemetry to you
## Field gateways
-Field gateways (sometimes referred to as edge gateways) are typically deployed on-premises and close to your IoT devices. Field gateways handle communication with the cloud on behalf of your IoT devices. Field gateways may:
+Field gateways (sometimes referred to as edge gateways) are typically deployed on-premises and close to your IoT devices. Field gateways handle communication with the cloud on behalf of your IoT devices. Field gateways can:
- Do protocol translation. For example, enabling Bluetooth enabled devices to connect to the cloud. - Manage offline and disconnected scenarios. For example, buffering telemetry when the cloud endpoint is unreachable.-- Filter, compress, or aggregate telemetry before it's sent to the cloud.
+- Filter, compress, or aggregate telemetry before sending it to the cloud.
- Run logic at the edge to remove the latency associated with running logic on behalf of devices in the cloud. For example, detecting a spike in temperature and opening a valve in response. You can use Azure IoT Edge to deploy a field gateway to your on-premises environment. IoT Edge provides a set of features that enable you to deploy and manage field gateways at scale. IoT Edge also provides a set of modules that you can use to implement common gateway scenarios. To learn more, see [What is Azure IoT Edge?](../iot-edge/about-iot-edge.md)
An IoT Edge device can maintain a [persistent connection](#persistent-connection
## Bridges
-A device bridge enables devices that are connected to a third-party cloud to connect to your IoT solution. Examples of third-party clouds include [Sigfox](https://www.sigfox.com/), [Particle Device Cloud](https://www.particle.io/), and [The Things Network](https://www.thethingsnetwork.org/).
+A device bridge enables devices that are connected to a non-Microsoft cloud to connect to your IoT solution. Examples of non-Microsoft clouds include [Sigfox](https://www.sigfox.com/), [Particle Device Cloud](https://www.particle.io/), and [The Things Network](https://www.thethingsnetwork.org/).
-The open source IoT Central Device Bridge acts as a translator that forwards telemetry to an IoT Central application. To learn more, see [Azure IoT Central Device Bridge](https://github.com/Azure/iotc-device-bridge). There are third-party bridge solutions, such as [Tartabit IoT Bridge](/shows/internet-of-things-show/onboarding-constrained-devices-into-azure-using-tartabits-iot-bridge), for connecting devices to an IoT hub.
+The open source IoT Central Device Bridge acts as a translator that forwards telemetry to an IoT Central application. To learn more, see [Azure IoT Central Device Bridge](https://github.com/Azure/iotc-device-bridge). There are non-Microsoft bridge solutions, such as [Tartabit IoT Bridge](/shows/internet-of-things-show/onboarding-constrained-devices-into-azure-using-tartabits-iot-bridge), for connecting devices to an IoT hub.
## Next steps
-Now that you've seen an overview of device connectivity in Azure IoT solutions, some suggested next steps include
+Now that you've seen an overview of device connectivity in Azure IoT solutions, some suggested next steps include:
- [Device management and control in IoT solutions](iot-overview-device-management.md) - [Process and route messages](iot-overview-message-processing.md)
iot Iot Overview Device Development https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/iot-overview-device-development.md
Previously updated : 03/20/2023 Last updated : 02/28/2024 # Customer intent: As a solution builder or device developer I want a high-level overview of the issues around device development so that I can easily find relevant content.
In Azure IoT, a device developer writes the code to run on the devices in the so
- Manages device state and synchronizes that state with the cloud. - Responds to commands sent from the cloud. - Enables the installation of software updates from the cloud.-- Enables the device to keep functioning while it's disconnected from the cloud.
+- Enables the device to keep functioning while disconnected from the cloud.
## Device types
For MPU devices, device SDKs are available for the following languages:
For MCU devices, see: -- [Azure RTOS Middleware](https://github.com/azure-rtos/)
+- [Azure RTOS Middleware](https://github.com/eclipse-threadx)
- [FreeRTOS Middleware](https://github.com/Azure/azure-iot-middleware-freertos) - [Azure SDK for Embedded C](https://github.com/Azure/azure-sdk-for-c)
To learn more about implementing automatic reconnections to endpoints, see [Mana
## Device development without a device SDK
-Although you're recommended to use one of the device SDKS, there may be scenarios where you prefer not to. In these scenarios, your device code must directly use one of the communication protocols that IoT Hub and the Device Provisioning Service (DPS) support.
+Although we recommend that you use one of the device SDKs, there might be scenarios where you prefer not to. In these scenarios, your device code must directly use one of the communication protocols that IoT Hub and the Device Provisioning Service (DPS) support.
For more information, see:
For more information, see:
IoT Plug and Play enables solution builders to integrate IoT devices with their solutions without any manual configuration. At the core of IoT Plug and Play, is a device model that a device uses to advertise its capabilities to an IoT Plug and Play-enabled application such as IoT Central. This model is structured as a set of elements that define: -- *Properties* that represent the read-only or writable state of a device or other entity. For example, a device serial number may be a read-only property and a target temperature on a thermostat may be a writable property.
+- *Properties* that represent the read-only or writable state of a device or other entity. For example, a device serial number might be a read-only property and a target temperature on a thermostat might be a writable property.
- *Telemetry* that's the data emitted by a device, whether the data is a regular stream of sensor readings, an occasional error, or an information message. - *Commands* that describe a function or operation that can be done on a device. For example, a command could reboot a gateway or take a picture using a remote camera.
To learn more, see:
## Containerized device code
-Using containers, such as Docker, to run your device code lets you deploy code to your devices by using the capabilities of the container infrastructure. Containers also let you define a runtime environment for your code with all the required library and package versions installed. Containers make it easier to deploy updates and to manage the lifecycle of your IoT devices.
+If you use containers, such as Docker, to run your device code, you can deploy code to your devices by using the capabilities of the container infrastructure. Containers also let you define a runtime environment for your code with all the required library and package versions installed. Containers make it easier to deploy updates and to manage the lifecycle of your IoT devices.
Azure IoT Edge runs device code in containers. You can use Azure IoT Edge to deploy code modules to your devices. To learn more, see [Develop your own IoT Edge modules](../iot-edge/module-development.md).
iot Iot Overview Device Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/iot-overview-device-management.md
Previously updated : 03/20/2023 Last updated : 02/28/2024 # Customer intent: As a solution builder or device developer I want a high-level overview of the issues around device management and control so that I can easily find relevant content.
Before a device can connect to an IoT hub, it must be registered. Device registr
- Authentication information such as symmetric keys or X.509 certificates. - The type of device. Is it an IoT Edge device or not?
-If you think a device has been compromised or isn't functioning correctly, you can disable it in the device registry to prevent it from connecting to the cloud. To allow a device to connect back to a cloud after the issue is resolved, you can re-enable it in the device registry. You can also permanently remove a device from the device registry to completely prevent it from connecting to the cloud.
+If you think a device is compromised or isn't functioning correctly, you can disable it in the device registry to prevent it from connecting to the cloud. To allow a device to connect back to a cloud after the issue is resolved, you can re-enable it in the device registry. You can also permanently remove a device from the device registry to completely prevent it from connecting to the cloud.
To learn more, see [Understand the identity registry in your IoT hub](../iot-hub/iot-hub-devguide-identity-registry.md).
IoT Central provides a UI to manage the device registry in the underlying IoT hu
## Device provisioning
-You must configure each device in your solution with the details of the IoT hub it should connect to. You can manually configure each device in your solution, but this may not be practical for a large number of devices. To get around this problem, you can use the Device Provisioning Service (DPS) to automatically register each device with an IoT hub, and then provision each device with the required connection information. If your IoT solution uses multiple IoT hubs, you can use DPS to provision devices to a hub based on criteria such as which is the closest hub to the device. You can configure your DPS with rules for registering and provisioning your devices in advance of physically deploying the device in the field.
+You must configure each device in your solution with the details of the IoT hub it should connect to. You can manually configure each device in your solution, but this approach might not be practical for a large number of devices. To get around this problem, you can use the Device Provisioning Service (DPS) to automatically register each device with an IoT hub, and then provision each device with the required connection information. If your IoT solution uses multiple IoT hubs, you can use DPS to provision devices to a hub based on criteria such as which is the closest hub to the device. You can configure your DPS with rules for registering and provisioning your devices in advance of physically deploying the device in the field.
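As an illustration, a device using the Azure IoT device SDK for Python could register through DPS at startup and then connect to its assigned IoT hub. This is a minimal sketch that assumes a symmetric key enrollment; the ID scope, registration ID, and key are placeholders from your own DPS instance.

```python
from azure.iot.device import ProvisioningDeviceClient, IoTHubDeviceClient

# placeholder values from your DPS enrollment
provisioning_host = "global.azure-devices-provisioning.net"
id_scope = "<ID_SCOPE>"
registration_id = "<REGISTRATION_ID>"
symmetric_key = "<SYMMETRIC_KEY>"

# register the device with DPS
provisioning_client = ProvisioningDeviceClient.create_from_symmetric_key(
    provisioning_host=provisioning_host,
    registration_id=registration_id,
    id_scope=id_scope,
    symmetric_key=symmetric_key,
)
registration_result = provisioning_client.register()

if registration_result.status == "assigned":
    # connect to the IoT hub that DPS assigned this device to
    device_client = IoTHubDeviceClient.create_from_symmetric_key(
        symmetric_key=symmetric_key,
        hostname=registration_result.registration_state.assigned_hub,
        device_id=registration_result.registration_state.device_id,
    )
    device_client.connect()
```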
If your IoT solution uses IoT Hub, then using DPS is optional. If you're using IoT Central, then your solution automatically uses a DPS instance that IoT Central manages.
The [Device Update for IoT Hub](../iot-hub-device-update/understand-device-updat
## Device key management and rotation
-During the lifecycle of your IoT solution, you may need to roll over the keys used to authenticate devices. For example, you may need to roll over your keys if you suspect that a key has been compromised or if a certificate expires:
+During the lifecycle of your IoT solution, you might need to roll over the keys used to authenticate devices. For example, you might need to roll over your keys if you suspect that a key is compromised or if a certificate expires:
- [Roll over the keys used to authenticate devices in IoT Hub and DPS](../iot-dps/how-to-roll-certificates.md) - [Roll over the keys used to authenticate devices in IoT Central](../iot-central/core/how-to-connect-devices-x509.md#roll-x509-device-certificates) ## Device monitoring
-As part of overall solution monitoring, you may want to monitor the health of your devices. For example, you may want to monitor the health of your devices or detect when a device is no longer connected to the cloud. Options for monitoring devices include:
+As part of overall solution monitoring, you might want to monitor the health of your devices. For example, you might want to monitor the health of your devices or detect when a device is no longer connected to the cloud. Options for monitoring devices include:
- Devices use the device twin to report their current state to the cloud. For example, a device can report its current internal temperature or its current battery level. - Devices can raise alerts by sending telemetry messages to the cloud.
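For example, a device built with the Azure IoT device SDK for Python could report its state through the device twin's reported properties. This is a minimal sketch; the connection string and property names are illustrative.

```python
from azure.iot.device import IoTHubDeviceClient

# connect using a device connection string (placeholder)
device_client = IoTHubDeviceClient.create_from_connection_string("<DEVICE_CONNECTION_STRING>")
device_client.connect()

# report the device's current state as reported properties on its device twin
device_client.patch_twin_reported_properties({
    "internalTemperature": 21.5,
    "batteryLevel": 87,
})

device_client.shutdown()
```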
To learn more, see:
## Next steps
-Now that you've seen an overview of device management and control in Azure IoT solutions, some suggested next steps include
+Now that you've seen an overview of device management and control in Azure IoT solutions, some suggested next steps include:
- [Process and route messages](iot-overview-message-processing.md) - [Extend your IoT solution](iot-overview-solution-extensibility.md)
iot Iot Overview Message Processing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/iot-overview-message-processing.md
Previously updated : 04/03/2023 Last updated : 02/28/2024 # Customer intent: As a solution builder or device developer I want a high-level overview of the message processing in IoT solutions so that I can easily find relevant content for my scenario.
To learn more, see [React to IoT Hub events by using Event Grid to trigger actio
## Enrich or transform messages
-To simplify downstream processing, you may want to add data to telemetry messages or modify their structure.
+To simplify downstream processing, you might want to add data to telemetry messages or modify their structure.
### IoT Hub message enrichments
To learn more, see [Message enrichments for device-to-cloud IoT Hub messages](..
IoT Central has two options for transforming telemetry messages: - Use [mappings](../iot-central/core/howto-map-data.md) to transform complex device telemetry into structured data on ingress to IoT Central.-- Use [transformations](../iot-central/core/howto-transform-data-internally.md) to manipulate the format and structure of the device data before it's exported to a destination.
+- Use [transformations](../iot-central/core/howto-transform-data-internally.md) to manipulate the format and structure of the device data before you export it to a destination.
## Process messages at the edge
-An Azure IoT Edge module can process telemetry from an attached sensor or device before it's sent to an IoT hub. For example, before it sends data to the cloud an IoT Edge module can:
+An Azure IoT Edge module can process telemetry from an attached sensor or device before sending it to an IoT hub. For example, before it sends data to the cloud, an IoT Edge module can:
- [Filter data](../iot-edge/tutorial-deploy-function.md) - Aggregate data
An Azure IoT Edge module can process telemetry from an attached sensor or device
You can use other Azure services to process telemetry messages from your devices. Both IoT Hub and IoT Central can route messages to other services. For example, you can forward telemetry messages to:
-[Azure Stream Analytics](../stream-analytics/stream-analytics-introduction.md) is a managed stream processing engine that is designed to analyze and process large volumes of streaming data. Stream Analytics can identify patterns in your data and then trigger actions such as creating alerts, feeding information to a reporting tool, or storing the transformed data. Stream Analytics is also available on the Azure IoT Edge runtime, enabling it to process data at the edge rather than in the cloud.
+[Azure Stream Analytics](../stream-analytics/stream-analytics-introduction.md) is a managed stream processing engine that is designed to analyze and process large volumes of streaming data. Stream Analytics can identify patterns in your data and then trigger actions such as creating alerts, sending information to a reporting tool, or storing the transformed data. Stream Analytics is also available on the Azure IoT Edge runtime, enabling it to process data at the edge rather than in the cloud.
[Azure Functions](../azure-functions/functions-overview.md) is a serverless compute service that lets you run code in response to events. You can use Azure Functions to process telemetry messages from your devices.
To learn more, see:
## Next steps
-Now that you've seen an overview of device management and control in Azure IoT solutions, some suggested next steps include
+Now that you've seen an overview of message processing in Azure IoT solutions, some suggested next steps include:
- [Extend your IoT solution](iot-overview-solution-extensibility.md) - [Analyze and visualize your IoT data](iot-overview-analyze-visualize.md)
iot Iot Overview Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/iot-overview-security.md
Microsoft Defender for IoT can automatically monitor some of the recommendations
- **Use X.509 certificates to authenticate your devices to IoT Hub or IoT Central**: IoT Hub and IoT Central support both X509 certificate-based authentication and security tokens as methods for a device to authenticate. If possible, use X509-based authentication in production environments as it provides greater security. To learn more, see [Authenticating a device to IoT Hub](../iot-hub/iot-hub-dev-guide-sas.md#authenticating-a-device-to-iot-hub) and [Device authentication concepts in IoT Central](../iot-central/core/concepts-device-authentication.md). -- **Use Transport Layer Security (TLS) 1.2 to secure connections from devices**: IoT Hub and IoT Central use TLS to secure connections from IoT devices and services. Three versions of the TLS protocol are currently supported: 1.0, 1.1, and 1.2. TLS 1.0 and 1.1 are considered legacy. To learn more, see [Authentication and authorization](iot-overview-device-connectivity.md#authentication-and-authorization).
+- **Use Transport Layer Security (TLS) 1.2 to secure connections from devices**: IoT Hub and IoT Central use TLS to secure connections from IoT devices and services. Three versions of the TLS protocol are currently supported: 1.0, 1.1, and 1.2. TLS 1.0 and 1.1 are considered legacy. To learn more, see [Authentication and authorization](iot-overview-device-connectivity.md#authentication).
- **Ensure you have a way to update the TLS root certificate on your devices**: TLS root certificates are long-lived, but they still might expire or be revoked. If there's no way of updating the certificate on the device, the device might not be able to connect to IoT Hub, IoT Central, or any other cloud service at a later date.
iot Iot Overview Solution Extensibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/iot-overview-solution-extensibility.md
Previously updated : 04/03/2023 Last updated : 02/28/2024 # Customer intent: As a solution builder, I want a high-level overview of the options for extending an IoT solution so that I can easily find relevant content for my scenario.
A typical IoT solution includes the analysis and visualization of the data from
### Integration with other services
-An IoT solution may include other systems such as asset management, work scheduling, and control automation systems. Such systems might:
+An IoT solution might include other systems such as asset management, work scheduling, and control automation systems. Such systems might:
- Use data from your IoT devices as input to predictive maintenance systems that generate entries in a work scheduling system. - Update the device registry to ensure it has up to date data from your asset management system.
An IoT solution may include other systems such as asset management, work schedul
## Azure Data Health Services
-[Azure Health Data Services](../healthcare-apis/healthcare-apis-overview.md) is a set of managed API services based on open standards and frameworks that enable workflows to improve healthcare and offer scalable and secure healthcare solutions. An IoT solution can use these services to integrate IoT data into a healthcare solution. To learn more, see [Deploy and review the continuous patient monitoring application template (IoT Central)](../iot-central/healthcare/tutorial-continuous-patient-monitoring.md)
-
-## Industrial IoT (IIoT)
-
-Azure IIoT lets you integrate data from assets and sensors, including those systems that are already operating on your factory floor, into your Azure IoT solution. To learn more, see [Microsoft OPC Publisher and Azure Industrial IoT Platform](https://github.com/Azure/Industrial-IoT/blob/main/readme.md).
+[Azure Health Data Services](../healthcare-apis/healthcare-apis-overview.md) is a set of managed API services based on open standards and frameworks that enable workflows to improve healthcare and offer scalable and secure healthcare solutions. An IoT solution can use these services to integrate IoT data into a healthcare solution.
## Extensibility mechanisms
iot Iot Overview Solution Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/iot-overview-solution-management.md
Previously updated : 05/04/2023 Last updated : 02/28/2024 # Customer intent: As a solution builder, I want a high-level overview of the options for managing an IoT solution so that I can easily find relevant content for my scenario.
The following diagram shows a high-level view of the components in a typical IoT
There are many options for managing your IoT solution including the Azure portal, PowerShell, and ARM templates. This article summarizes the main options.
+To learn about securing your IoT solution, see [Secure your IoT solution](iot-overview-security.md).
+ ## Monitoring While there are tools specifically for [monitoring devices](iot-overview-device-management.md#device-monitoring) in your IoT solution, you also need to be able to monitor the health of your IoT
While there are tools specifically for [monitoring devices](iot-overview-device-
| IoT Central | [Use audit logs to track activity in your IoT Central application](../iot-central/core/howto-use-audit-logs.md) </br> [Use Azure Monitor to monitor your IoT Central application](../iot-central/core/howto-manage-iot-central-from-portal.md#monitor-application-health) | | Azure Digital Twins | [Use Azure Monitor to monitor Azure Digital Twins resources](../digital-twins/how-to-monitor.md) |
+To learn more about the Azure Monitor service, see [Azure Monitor overview](../azure-monitor/overview.md).
+ ## Azure portal The Azure portal offers a consistent GUI environment for managing your Azure IoT services. For example, you can use the portal to:
iot Iot Phone App How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/iot-phone-app-how-to.md
Previously updated : 08/24/2022 Last updated : 02/28/2024
To learn more, see [Connect the app](#connect-the-app) later in this guide.
### Telemetry
-The app collects data from sensors on the phone to send as telemetry to the IoT service you're using. Sensor data is aggregated every five seconds by default, but you can change this on the app settings page:
+The app collects data from sensors on the phone to send as telemetry to the IoT service you're using. Sensor data is aggregated every five seconds by default, but you can change this period on the app settings page:
:::image type="content" source="media/iot-phone-app-how-to/telemetry.png" alt-text="Screenshot of telemetry page in smartphone app.":::
To register the device in IoT Central:
:::image type="content" source="media/iot-phone-app-how-to/iot-central-create-device.png" alt-text="Screenshot showing how to create a device in IoT Central.":::
-1. On the list of devices, click on the device name and then select **Connect**. On the **Device connection** page you can see the QR code that you'll scan in the smartphone app:
+1. On the list of devices, click on the device name and then select **Connect**. On the **Device connection** page you can see the QR code to scan in the smartphone app in the next section:
:::image type="content" source="media/iot-phone-app-how-to/device-connection-qr-code.png" alt-text="Screenshot showing the device connection page with the QR code.":::
To learn more about how devices connect to IoT Central, see [How devices connect
To view the data the device is sending in your IoT Central application:
-1. Sign in to your IoT Central application and navigate to the **Devices** page. Your device has been automatically assigned to the **Smartphone** device template.
+1. Sign in to your IoT Central application and navigate to the **Devices** page. Your device is automatically assigned to the **Smartphone** device template.
> [!TIP] > You may need to refresh the page in your web browser to see when the device is assigned to the **Smartphone** device template.
To view the data the device is sending in your IoT Central application:
## Next steps
-Now that you've connected your smartphone app to IoT Central, a suggested next step is to learn more about [IoT Central](../iot-central/core/overview-iot-central.md).
+Now that your smartphone app is connected to IoT Central, a suggested next step is to learn more about [IoT Central](../iot-central/core/overview-iot-central.md).
iot Iot Sdks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/iot-sdks.md
Previously updated : 02/20/2023 Last updated : 02/28/2024
iot Iot Support Help https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/iot-support-help.md
Title: Azure IoT support and help options | Microsoft Docs description: How to obtain help and support for questions or problems when you create solutions using Azure IoT Services.--++ Previously updated : 7/11/2022 Last updated : 02/28/2024
Here are suggestions for where you can get help when developing your Azure IoT s
Explore the range of [Azure support options and choose the plan](https://azure.microsoft.com/support/plans) that best fits, whether you're a developer just starting your cloud journey or a large organization deploying business-critical, strategic applications. Azure customers can create and manage support requests in the Azure portal.
-* [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview)
+* [Azure portal help + support](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview)
* [Azure portal for the United States government](https://portal.azure.us) > [!NOTE]
Explore the range of [Azure support options and choose the plan](https://azure.m
## Post a question on Microsoft Q&A
-For quick and reliable answers on your technical product questions from Microsoft Engineers, Azure Most Valuable Professionals (MVPs), or our expert community, engage with us on [Microsoft Q&A](/answers/products/azure), AzureΓÇÖs preferred destination for community support.
+For quick and reliable answers on your technical product questions from Microsoft Engineers, Azure Most Valuable Professionals (MVPs), or our expert community, engage with us on [Microsoft Q&A](/answers/products/azure), Azure's preferred destination for community support.
If you can't find an answer to your problem using search, submit a new question to Microsoft Q&A. Use one of the following tags when you ask your question: -- [Azure IoT](/answers/topics/azure-iot.html)-- [Azure IoT Central](/answers/topics/azure-iot-central.html)-- [Azure IoT Edge](/answers/topics/azure-iot-edge.html)-- [Azure IoT Hub](/answers/topics/azure-iot-hub.html)-- [Azure IoT Hub Device Provisioning Service (DPS)](/answers/topics/azure-iot-dps.html)-- [Azure IoT SDKs](/answers/topics/azure-iot-sdk.html)-- [Azure Digital Twins](/answers/topics/azure-digital-twins.html)-- [Azure RTOS](/answers/topics/azure-rtos.html)-- [Azure Sphere](/answers/topics/azure-sphere.html)-- [Azure Time Series Insights](/answers/topics/azure-time-series-insights.html)-- [Azure Maps](/answers/topics/azure-maps.html)
+* [Azure IoT](/answers/topics/azure-iot.html)
+* [Azure IoT Central](/answers/topics/azure-iot-central.html)
+* [Azure IoT Edge](/answers/topics/azure-iot-edge.html)
+* [Azure IoT Hub and Azure IoT Hub Device Provisioning Service (DPS)](/answers/topics/azure-iot-hub.html)
+* [Azure IoT SDKs](/answers/topics/azure-iot-sdk.html)
+* [Azure Digital Twins](/answers/topics/azure-digital-twins.html)
+* [Azure IoT Plug and Play](/answers/topics/azure-iot-pnp.html)
+* [Azure RTOS](/answers/topics/azure-rtos.html)
+* [Azure Sphere](/answers/topics/azure-sphere.html)
+* [Azure Time Series Insights](/answers/topics/azure-time-series-insights.html)
+* [Azure Maps](/answers/topics/azure-maps.html)
## Post a question on Stack Overflow
For answers on your developer questions from the largest community developer eco
If you do submit a new question to Stack Overflow, please use one or more of the following tags when you create the question:
+* [Azure IoT Central](https://stackoverflow.com/questions/tagged/azure-iot-central)
+* [Azure IoT Edge](https://stackoverflow.com/questions/tagged/azure-iot-edge)
+* [Azure IoT Hub](https://stackoverflow.com/questions/tagged/azure-iot-hub)
+* [Azure IoT Hub Device Provisioning Service](https://stackoverflow.com/questions/tagged/azure-iot-dps)
+* [Azure IoT SDKs](https://stackoverflow.com/questions/tagged/azure-iot-sdk)
+* [Azure Digital Twins](https://stackoverflow.com/questions/tagged/azure-digital-twins)
+* [Azure RTOS](https://stackoverflow.com/questions/tagged/azure-rtos)
+* [Azure Sphere](https://stackoverflow.com/questions/tagged/azure-sphere)
+* [Azure Time Series Insights](https://stackoverflow.com/questions/tagged/azure-timeseries-insights)
## Stay informed of updates and new releases
lighthouse View Service Provider Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/how-to/view-service-provider-activity.md
Customers who have delegated subscriptions to service providers through [Azure L
## View activity log data
-[View the activity log](../../azure-monitor/essentials/activity-log.md#view-the-activity-log) from the **Monitor** menu in the Azure portal. Use the filters if you want to show results from a specific subscription.
+[View the activity log](../../azure-monitor/essentials/activity-log-insights.md#view-the-activity-log) from the **Monitor** menu in the Azure portal. Use the filters if you want to show results from a specific subscription.
You can also [view and retrieve activity log events](../../azure-monitor/essentials/activity-log.md#other-methods-to-retrieve-activity-log-events) programmatically.
In the activity log, you'll see the name of the operation and its status, along
> [!NOTE] > Users from the service provider appear in the activity log, but these users and their role assignments aren't shown in **Access Control (IAM)** or when retrieving role assignment info via APIs.
-Logged activity is available in the Azure portal for the past 90 days. You can also [store this data for a longer period](../../azure-monitor/essentials/activity-log.md#retention-period) if needed.
+Logged activity is available in the Azure portal for the past 90 days. You can also [store this data for a longer period](../../azure-monitor/essentials/activity-log-insights.md#retention-period) if needed.
## Set alerts for critical operations
machine-learning Concept Azure Machine Learning V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-azure-machine-learning-v2.md
Previously updated : 11/04/2022 Last updated : 02/27/2024 #Customer intent: As a data scientist, I want to understand the big picture about how Azure Machine Learning works.
Azure Machine Learning includes several resources and assets to enable you to pe
This document provides a quick overview of these resources and assets.
-## Workspace
+## Prerequisites
-The workspace is the top-level resource for Azure Machine Learning, providing a centralized place to work with all the artifacts you create when you use Azure Machine Learning. The workspace keeps a history of all jobs, including logs, metrics, output, and a snapshot of your scripts. The workspace stores references to resources like datastores and compute. It also holds all assets like models, environments, components and data asset.
+### [Python SDK](#tab/sdk)
-### Create a workspace
+To use the Python SDK code examples in this article:
+
+1. Install the [Python SDK v2](https://aka.ms/sdk-v2-install)
+2. Create a connection to your Azure Machine Learning subscription. The examples all rely on `ml_client`. To create a workspace, the connection does not need a workspace name, since you may not yet have one. All other examples in this article require that the workspace name is included in the connection.
+
+ ```python
+ # import required libraries
+ from azure.ai.ml import MLClient
+ from azure.ai.ml.entities import Workspace
+ from azure.identity import DefaultAzureCredential
+
+ # Enter details of your subscription
+ subscription_id = "<SUBSCRIPTION_ID>"
+ resource_group = "<RESOURCE_GROUP>"
+
+ # get a handle to the subscription (use this if you haven't created a workspace yet)
+ ml_client = MLClient(DefaultAzureCredential(), subscription_id, resource_group)
+
+ # all other examples in this article require the connection to include workspace name
+ workspace_name = "<WORKSPACE_NAME>"
+ ml_client = MLClient(DefaultAzureCredential(), subscription_id, resource_group, workspace_name)
+ ```
### [Azure CLI](#tab/cli)
-To create a workspace using CLI v2, use the following command:
+To use the Azure CLI code examples in this article, you need the Azure CLI installed and configured. To install and configure it, see [Install and set up the CLI (v2)](how-to-configure-cli.md).
+Once you have the Azure CLI installed, sign in to your Azure account:
++
+If you have access to multiple Azure subscriptions, you can set your active subscription:
+
-```bash
-az ml workspace create --file my_workspace.yml
-```
-For more information, see [workspace YAML schema](reference-yaml-workspace.md).
+### [Studio](#tab/azure-studio)
+
+Sign in to [Azure Machine Learning studio](https://ml.azure.com).
+++
+## Workspace
+
+The workspace is the top-level resource for Azure Machine Learning, providing a centralized place to work with all the artifacts you create when you use Azure Machine Learning. The workspace keeps a history of all jobs, including logs, metrics, output, and a snapshot of your scripts. The workspace stores references to resources like datastores and compute. It also holds all assets like models, environments, components and data asset.
+
+### Create a workspace
### [Python SDK](#tab/sdk)
To create a workspace using Python SDK v2, you can use the following code:
[!INCLUDE [sdk v2](includes/machine-learning-sdk-v2.md)] ```python
-ws_basic = Workspace(
- name="my-workspace",
- location="eastus", # Azure region (location) of workspace
- display_name="Basic workspace-example",
- description="This example shows how to create a basic workspace"
+# specify the workspace details
+ws = Workspace(
+ name="my_workspace",
+ location="eastus",
+ display_name="My workspace",
+ description="This example shows how to create a workspace",
+ tags=dict(purpose="demo"),
)
-ml_client.workspaces.begin_create(ws_basic) # use MLClient to connect to the subscription and resource group and create workspace
+
+ml_client.workspaces.begin_create(ws) # use MLClient to connect to the subscription and resource group and create workspace
``` This [Jupyter notebook](https://github.com/Azure/azureml-examples/blob/main/sdk/python/resources/workspace/workspace.ipynb) shows more ways to create an Azure Machine Learning workspace using SDK v2. --
-## Compute
+### [Azure CLI](#tab/cli)
-A compute is a designated compute resource where you run your job or host your endpoint. Azure Machine Learning supports the following types of compute:
+To create a workspace using CLI v2, use the following command:
-* **Compute cluster** - a managed-compute infrastructure that allows you to easily create a cluster of CPU or GPU compute nodes in the cloud.
- [!INCLUDE [serverless compute](./includes/serverless-compute.md)]
+```bash
+az ml workspace create --file my_workspace.yml
+```
-* **Compute instance** - a fully configured and managed development environment in the cloud. You can use the instance as a training or inference compute for development and testing. It's similar to a virtual machine on the cloud.
-* **Inference cluster** - used to deploy trained machine learning models to Azure Kubernetes Service. You can create an Azure Kubernetes Service (AKS) cluster from your Azure Machine Learning workspace, or attach an existing AKS cluster.
-* **Attached compute** - You can attach your own compute resources to your workspace and use them for training and inference.
+For the content of the file, see [workspace YAML examples](https://github.com/Azure/azureml-examples/tree/main/cli/resources/workspace).
+### [Studio](#tab/azure-studio)
-### [Azure CLI](#tab/cli)
+Create a workspace in the studio welcome page by selecting **Create workspace**.
-To create a compute using CLI v2, use the following command:
+
+## Compute
-```bash
-az ml compute --file my_compute.yml
-```
+A compute is a designated compute resource where you run your job or host your endpoint. Azure Machine Learning supports the following types of compute:
-For more information, see [compute YAML schema](reference-yaml-overview.md#compute).
+* **Compute instance** - a fully configured and managed development environment in the cloud. You can use the instance as a training or inference compute for development and testing. It's similar to a virtual machine on the cloud.
+* **Compute cluster** - a managed-compute infrastructure that allows you to easily create a cluster of CPU or GPU compute nodes in the cloud.
+* **Serverless compute** - a compute cluster you access on the fly. When you use serverless compute, you don't need to create your own cluster. All compute lifecycle management is offloaded to Azure Machine Learning.
+* **Inference cluster** - used to deploy trained machine learning models to Azure Kubernetes Service. You can create an Azure Kubernetes Service (AKS) cluster from your Azure Machine Learning workspace, or attach an existing AKS cluster.
+* **Attached compute** - You can attach your own compute resources to your workspace and use them for training and inference.
+### Create a compute
### [Python SDK](#tab/sdk)
-To create a compute using Python SDK v2, you can use the following code:
+To create a compute cluster using Python SDK v2, you can use the following code:
[!INCLUDE [sdk v2](includes/machine-learning-sdk-v2.md)]
ml_client.begin_create_or_update(cluster_basic)
This [Jupyter notebook](https://github.com/Azure/azureml-examples/blob/main/sdk/python/resources/compute/compute.ipynb) shows more ways to create compute using SDK v2.
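A minimal `cluster_basic` definition might look like the following sketch; the cluster name, VM size, and scale settings are assumptions to adjust for your workload.

```python
from azure.ai.ml.entities import AmlCompute

# a basic CPU cluster that autoscales between 0 and 2 nodes (name and size are placeholders)
cluster_basic = AmlCompute(
    name="cpu-cluster",
    type="amlcompute",
    size="STANDARD_DS3_v2",
    min_instances=0,
    max_instances=2,
    idle_time_before_scale_down=120,
)

ml_client.begin_create_or_update(cluster_basic)
```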
+### [Azure CLI](#tab/cli)
+
+To create a compute using CLI v2, use the following command:
++
+```bash
+az ml compute create --file my_compute.yml
+```
+
+For the content of the file, see [compute YAML examples](https://github.com/Azure/azureml-examples/tree/main/cli/resources/compute).
+
+### [Studio](#tab/azure-studio)
+
+1. Select a workspace if you are not already in one.
+1. From the left-hand menu, select **Compute**.
+1. On the top, select a tab to specify the type of compute you want to create.
+1. Select **New** to create the new compute.
+ ## Datastore
Azure Machine Learning datastores securely keep the connection information to yo
* Azure Data Lake * Azure Data Lake Gen2
-### [Azure CLI](#tab/cli)
-
-To create a datastore using CLI v2, use the following command:
--
-```bash
-az ml datastore create --file my_datastore.yml
-```
-For more information, see [datastore YAML schema](reference-yaml-overview.md#datastore).
-
+### Create a datastore
### [Python SDK](#tab/sdk)
To create a datastore using Python SDK v2, you can use the following code:
[!INCLUDE [sdk v2](includes/machine-learning-sdk-v2.md)] ```python
+from azure.ai.ml.entities import AzureBlobDatastore
+ blob_datastore1 = AzureBlobDatastore(
- name="blob-example",
+ name="blob_example",
description="Datastore pointing to a blob container.", account_name="mytestblobstore", container_name="data-container",
ml_client.create_or_update(blob_datastore1)
This [Jupyter notebook](https://github.com/Azure/azureml-examples/blob/main/sdk/python/resources/datastores/datastore.ipynb) shows more ways to create datastores using SDK v2. --
-## Model
-
-Azure machine learning models consist of the binary file(s) that represent a machine learning model and any corresponding metadata. Models can be created from a local or remote file or directory. For remote locations `https`, `wasbs` and `azureml` locations are supported. The created model will be tracked in the workspace under the specified name and version. Azure Machine Learning supports three types of storage format for models:
-
-* `custom_model`
-* `mlflow_model`
-* `triton_model`
-
-### Creating a model
- ### [Azure CLI](#tab/cli)
-To create a model using CLI v2, use the following command:
+To create a datastore using CLI v2, use the following command:
[!INCLUDE [cli v2](includes/machine-learning-cli-v2.md)] ```bash
-az ml model create --file my_model.yml
+az ml datastore create --file my_datastore.yml
```
-For more information, see [model YAML schema](reference-yaml-model.md).
+For the content of the file, see [datastore YAML examples](https://github.com/Azure/azureml-examples/tree/main/cli/resources/datastore).
+### [Studio](#tab/azure-studio)
-### [Python SDK](#tab/sdk)
+1. Select a workspace if you are not already in one.
+1. From the left-hand menu, select **Data**.
+1. On the top, select **Datastores**.
+1. Select **Create** to create a new datastore.
-To create a model using Python SDK v2, you can use the following code:
+
+## Model
-```python
-my_model = Model(
- path="model.pkl", # the path to where my model file is located
- type="custom_model", # can be custom_model, mlflow_model or triton_model
- name="my-model",
- description="Model created from local file.",
-)
+Azure Machine Learning models consist of the binary file(s) that represent a machine learning model and any corresponding metadata. Models can be created from a local or remote file or directory. For remote locations `https`, `wasbs` and `azureml` locations are supported. The created model will be tracked in the workspace under the specified name and version. Azure Machine Learning supports three types of storage format for models:
-ml_client.models.create_or_update(my_model) # use the MLClient to connect to workspace and create/register the model
-```
+* `custom_model`
+* `mlflow_model`
+* `triton_model`
-
+### Create a model in the model registry
+
+Model registration allows you to store and version your models in the Azure cloud, in your workspace. The model registry helps you organize and keep track of your trained models.
+
+For more information on how to create models in the registry, see [Work with models in Azure Machine Learning](how-to-manage-models.md).
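A minimal Python SDK v2 sketch for registering a local model file might look like the following; the file path and model name are illustrative placeholders, and `ml_client` is the workspace connection created earlier.

```python
from azure.ai.ml.entities import Model
from azure.ai.ml.constants import AssetTypes

# register a local model file in the workspace model registry (path and name are placeholders)
my_model = Model(
    path="model.pkl",
    type=AssetTypes.CUSTOM_MODEL,  # or AssetTypes.MLFLOW_MODEL / AssetTypes.TRITON_MODEL
    name="my-model",
    description="Model created from a local file.",
)

ml_client.models.create_or_update(my_model)
```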
## Environment
In custom environments, you're responsible for setting up your environment and i
### Create an Azure Machine Learning custom environment
-### [Azure CLI](#tab/cli)
-
-To create an environment using CLI v2, use the following command:
--
-```bash
-az ml environment create --file my_environment.yml
-```
-For more information, see [environment YAML schema](reference-yaml-environment.md).
+### [Python SDK](#tab/sdk)
+To create an environment using Python SDK v2, see [Create an environment](how-to-manage-environments-v2.md?tabs=python#create-an-environment).
-### [Python SDK](#tab/sdk)
+This [Jupyter notebook](https://github.com/Azure/azureml-examples/blob/main/sdk/python/assets/environment/environment.ipynb) shows more ways to create custom environments using SDK v2.
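As a brief illustration, a minimal Python SDK v2 sketch that builds a custom environment from a base Docker image might look like this; the image, name, and description are placeholders.

```python
from azure.ai.ml.entities import Environment

# create a custom environment from a base Docker image (image and name are placeholders)
my_env = Environment(
    image="pytorch/pytorch:latest",
    name="docker-image-example",
    description="Environment created from a Docker image.",
)

ml_client.environments.create_or_update(my_env)
```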
-To create an environment using Python SDK v2, you can use the following code:
+### [Azure CLI](#tab/cli)
+To create an environment using CLI v2, see [Create an environment](how-to-manage-environments-v2.md?tabs=cli#create-an-environment).
-```python
-my_env = Environment(
- image="pytorch/pytorch:latest", # base image to use
- name="docker-image-example", # name of the model
- description="Environment created from a Docker image.",
-)
+For more information, see [environment YAML schema](reference-yaml-environment.md).
-ml_client.environments.create_or_update(my_env) # use the MLClient to connect to workspace and create/register the environment
-```
+### [Studio](#tab/azure-studio)
-This [Jupyter notebook](https://github.com/Azure/azureml-examples/blob/main/sdk/python/assets/environment/environment.ipynb) shows more ways to create custom environments using SDK v2.
+1. Select a workspace if you are not already in one.
+1. From the left-hand menu, select **Environments**.
+1. On the top, select **Custom environments**.
+1. Select **Create** to create a new custom environment.
machine-learning Concept Secure Online Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-secure-online-endpoint.md
reviewer: msakande Previously updated : 09/27/2023 Last updated : 02/29/2024 # Network isolation with managed online endpoints
To learn more about configurations for the workspace managed virtual network, se
## Scenarios for network isolation configuration
+Your Azure Machine Learning workspace and managed online endpoint each have a `public_network_access` flag that you can use to configure their inbound communication. On the other hand, outbound communication from a deployment depends on the workspace's managed virtual network.
+
+#### Communication with the managed online endpoint
+ Suppose a managed online endpoint has a deployment that uses an AI model, and you want to use an app to send scoring requests to the endpoint. You can decide what network isolation configuration to use for the managed online endpoint as follows: **For inbound communication**:
However, if you want your deployment to access the internet, you can use the wor
Finally, if your deployment doesn't need to access private Azure resources and you don't need to control access to the internet, then you don't need to use a workspace managed virtual network.
+#### Inbound communication to the Azure Machine Learning workspace
+
+You can use the `public_network_access` flag of your Azure Machine Learning workspace to enable or disable inbound workspace access.
+Typically, if you secure inbound communication to your workspace (by disabling the workspace's `public_network_access` flag), you also want to secure inbound communication to your managed online endpoint.
+
+The following chart shows a typical workflow for securing inbound communication to your Azure Machine Learning workspace and your managed online endpoint. For best security, we recommend that you disable the `public_network_access` flags for the workspace and the managed online endpoint to ensure that both can't be accessed via the public internet. If the workspace doesn't have a private endpoint, you can create one, making sure to include proper DNS resolution. You can then access the managed online endpoint by using the workspace's private endpoint.
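As an illustration, the following Python SDK v2 sketch disables public network access on an existing workspace and creates a managed online endpoint that rejects public inbound traffic. It assumes an existing `ml_client` connection; the workspace and endpoint names are placeholders.

```python
from azure.ai.ml.entities import ManagedOnlineEndpoint

# disable inbound access to the workspace from the public internet
ws = ml_client.workspaces.get("<WORKSPACE_NAME>")
ws.public_network_access = "Disabled"
ml_client.workspaces.begin_update(ws).result()

# create an endpoint that only accepts traffic through the workspace's private endpoint
endpoint = ManagedOnlineEndpoint(
    name="my-secure-endpoint",
    auth_mode="key",
    public_network_access="disabled",
)
ml_client.online_endpoints.begin_create_or_update(endpoint).result()
```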
+++
+For more information on DNS resolution for your workspace and private endpoint, see [How to use your workspace with a custom DNS server](how-to-custom-dns.md).
+ ## Appendix ### Secure outbound access with legacy network isolation method
machine-learning How To Access Data Batch Endpoints Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-data-batch-endpoints-jobs.md
azureml-model-deployment: DEPLOYMENT_NAME
You can configure some of the properties in the created job at invocation time.
+> [!NOTE]
+> Configuring job properties is currently available only in batch endpoints with Pipeline component deployments.
+ ### Configure experiment name # [Azure CLI](#tab/cli)
machine-learning How To Create Workspace Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-workspace-template.md
--- Previously updated : 11/09/2022+++ Last updated : 02/29/2024 #Customer intent: As a DevOps person, I need to automate or customize the creation of Azure Machine Learning by using templates.
For more information, see [Deploy an application with Azure Resource Manager tem
```json "type": "Microsoft.MachineLearningServices/workspaces",
- "apiVersion": "2020-03-01",
+ "apiVersion": "2023-10-01",
``` ### Multiple workspaces in the same VNet
For more information, see [Customer-managed keys](concept-customer-managed-keys.
> > For steps on creating the vault and key, see [Configure customer-managed keys](how-to-setup-customer-managed-keys.md).
-__To get the values__ for the `cmk_keyvault` (ID of the Key Vault) and the `resource_cmk_uri` (key URI) parameters needed by this template, use the following steps:
+__To get the values__ for the `cmk_keyvault` (ID of the Key Vault) and the `resource_cmk_uri` (key URI) parameters needed by this template, use the following steps:
-1. To get the Key Vault ID, use the following command:
+1. To get the Key Vault ID, use the following command:
- # [Azure CLI](#tab/azcli)
+ # [Azure CLI](#tab/azcli)
- ```azurecli
- az keyvault show --name <keyvault-name> --query 'id' --output tsv
- ```
+ ```azurecli
+ az keyvault show --name <keyvault-name> --query 'id' --output tsv
+ ```
- # [Azure PowerShell](#tab/azpowershell)
+ # [Azure PowerShell](#tab/azpowershell)
- ```azurepowershell
- Get-AzureRMKeyVault -VaultName '<keyvault-name>'
- ```
-
+ ```azurepowershell
+ Get-AzureRMKeyVault -VaultName '<keyvault-name>'
+ ```
+
- This command returns a value similar to `/subscriptions/{subscription-guid}/resourceGroups/<resource-group-name>/providers/Microsoft.KeyVault/vaults/<keyvault-name>`.
+ This command returns a value similar to `/subscriptions/{subscription-guid}/resourceGroups/<resource-group-name>/providers/Microsoft.KeyVault/vaults/<keyvault-name>`.
-1. To get the value for the URI for the customer managed key, use the following command:
+1. To get the value for the URI for the customer managed key, use the following command:
- # [Azure CLI](#tab/azcli)
+ # [Azure CLI](#tab/azcli)
- ```azurecli
- az keyvault key show --vault-name <keyvault-name> --name <key-name> --query 'key.kid' --output tsv
- ```
+ ```azurecli
+ az keyvault key show --vault-name <keyvault-name> --name <key-name> --query 'key.kid' --output tsv
+ ```
- # [Azure PowerShell](#tab/azpowershell)
+ # [Azure PowerShell](#tab/azpowershell)
- ```azurepowershell
- Get-AzureKeyVaultKey -VaultName '<keyvault-name>' -KeyName '<key-name>'
- ```
-
+ ```azurepowershell
+ Get-AzureKeyVaultKey -VaultName '<keyvault-name>' -KeyName '<key-name>'
+ ```
+
- This command returns a value similar to `https://mykeyvault.vault.azure.net/keys/mykey/{guid}`.
+ This command returns a value similar to `https://mykeyvault.vault.azure.net/keys/mykey/{guid}`.
-> [!IMPORTANT]
+> [!IMPORTANT]
> Once a workspace has been created, you cannot change the settings for confidential data, encryption, key vault ID, or key identifiers. To change these values, you must create a new workspace using the new values. To enable use of Customer Managed Keys, set the following parameters when deploying the template:
machine-learning How To Custom Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-custom-dns.md
The Fully Qualified Domains resolve to the following Canonical Names (CNAMEs) ca
The FQDNs resolve to the IP addresses of the Azure Machine Learning workspace in that region. However, resolution of the workspace Private Link FQDNs can be overridden by using a custom DNS server hosted in the virtual network. For an example of this architecture, see the [custom DNS server hosted in a vnet](#example-custom-dns-server-hosted-in-vnet) example.
-> [!NOTE]
-> Managed online endpoints share the workspace private endpoint. If you are manually adding DNS records to the private DNS zone `privatelink.api.azureml.ms`, an A record with wildcard
-> `*.<per-workspace globally-unique identifier>.inference.<region>.privatelink.api.azureml.ms` should be added to route all endpoints under the workspace to the private endpoint.
## Manual DNS server integration
machine-learning How To Manage Registries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-registries.md
To create a registry, use the following command. You can edit the JSON to change
> [!TIP] > We recommend using the latest API version when working with the REST API. For a list of the current REST API versions for Azure Machine Learning, see the [Machine Learning REST API reference](/rest/api/azureml/). The current API versions are listed in the table of contents on the left side of the page.
-```bash
```bash
curl -X PUT https://management.azure.com/subscriptions/<your-subscription-id>/resourceGroups/<your-resource-group>/providers/Microsoft.MachineLearningServices/registries/reg-from-rest?api-version=2023-04-01 -H "Authorization:Bearer <YOUR-ACCESS-TOKEN>" -H 'Content-Type: application/json' -d '
{
machine-learning How To Secure Online Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-online-endpoint.md
For examples that use the legacy method for network isolation, see the deploymen
## Prerequisites
-To begin, you need an Azure subscription, CLI or SDK to interact with Azure Machine Learning workspace and related entities, and the right permission.
- * To use Azure Machine Learning, you must have an Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/) today.
-* install and configure the [Azure CLI](/cli/azure/) and the `ml` extension to the Azure CLI. For more information, see [Install, set up, and use the CLI (v2)](how-to-configure-cli.md).
+* Install and configure the [Azure CLI](/cli/azure/) and the `ml` extension to the Azure CLI. For more information, see [Install, set up, and use the CLI (v2)](how-to-configure-cli.md).
+ >[!TIP]
- > Azure Machine Learning managed virtual network was introduced on May 23rd, 2023. If you have an older version of the ml extension, you may need to update it for the examples in this article work. To update the extension, use the following Azure CLI command:
+ > Azure Machine Learning managed virtual network was introduced on May 23rd, 2023. If you have an older version of the ml extension, you might need to update it for the examples in this article to work. To update the extension, use the following Azure CLI command:
> > ```azurecli > az extension update -n ml
To begin, you need an Azure subscription, CLI or SDK to interact with Azure Mach
* If you want to use a [user-assigned managed identity](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md?pivots=identity-mi-methods-azp) to create and manage online endpoints and online deployments, the identity should have the proper permissions. For details about the required permissions, see [Set up service authentication](./how-to-identity-based-service-authentication.md#workspace). For example, you need to assign the proper RBAC permission for Azure Key Vault on the identity.
+#### Migrate from legacy network isolation method to managed virtual network
+
+If you've used the [legacy method](concept-secure-online-endpoint.md#secure-outbound-access-with-legacy-network-isolation-method) previously for network isolation of managed online endpoints, and you want to migrate to using a workspace managed virtual network to secure your endpoints, follow these steps:
+
+1. Delete all computes in your workspace.
+1. Enable managed virtual network for your workspace. For more information on how to configure a managed network for your workspace, see [Workspace Managed Virtual Network Isolation](how-to-managed-network.md).
+1. Configure private endpoints for outbound communication to private resources that your managed online endpoints need to access. These private resources include a storage account, Azure Key Vault, and Azure Container Registry (ACR).
+1. (Optional) If you're integrating with a user registry, configure private endpoints for outbound communication to your registry, its storage account, and its ACR.
+ ## Limitations [!INCLUDE [machine-learning-managed-vnet-online-endpoint-limitations](includes/machine-learning-managed-vnet-online-endpoint-limitations.md)]
machine-learning Azure Open Ai Gpt 4V Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/tools-reference/azure-open-ai-gpt-4v-tool.md
Azure OpenAI GPT-4 Turbo with Vision tool enables you to leverage your AzureOpen
## Connection
-Setup connections to provisioned resources in prompt flow.
+Set up connections to provisioned resources in prompt flow.
| Type | Name | API KEY | API Type | API Version | |-|-|-|-|-|
Setup connections to provisioned resources in prompt flow.
||-||-| | connection | AzureOpenAI | the AzureOpenAI connection to be used in the tool | Yes | | deployment\_name | string | the language model to use | Yes |
-| prompt | string | The text prompt that the language model will use to generate its response. | Yes |
+| prompt | string | Text prompt that the language model uses to generate its response. The Jinja template for composing prompts in this tool follows a similar structure to the chat API in the LLM tool. To represent an image input within your prompt, you can use the syntax `![image]({{INPUT NAME}})`. Image input can be passed in the `user`, `system` and `assistant` messages. | Yes |
| max\_tokens | integer | the maximum number of tokens to generate in the response. Default is 512. | No | | temperature | float | the randomness of the generated text. Default is 1. | No | | stop | list | the stopping sequence for the generated text. Default is null. | No |
Setup connections to provisioned resources in prompt flow.
| Return Type | Description | |-|| | string | The text of one response of conversation |+
+## Next step
+
+Learn more about [how to process images in prompt flow](../how-to-process-image.md).
machine-learning Openai Gpt 4V Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/tools-reference/openai-gpt-4v-tool.md
Set up connections to provisioned resources in prompt flow.
||-||-| | connection | OpenAI | The OpenAI connection to be used in the tool. | Yes | | model | string | The language model to use, currently only support gpt-4-vision-preview. | Yes |
-| prompt | string | The text prompt that the language model uses to generate its response. | Yes |
+| prompt | string | Text prompt that the language model uses to generate its response. The Jinja template for composing prompts in this tool follows a similar structure to the chat API in the LLM tool. To represent an image input within your prompt, you can use the syntax `![image]({{INPUT NAME}})`. Image input can be passed in the `user`, `system` and `assistant` messages. | Yes |
| max\_tokens | integer | The maximum number of tokens to generate in the response. Default is a low value decided by [OpenAI API](https://platform.openai.com/docs/guides/vision). | No | | temperature | float | The randomness of the generated text. Default is 1. | No | | stop | list | The stopping sequence for the generated text. Default is null. | No |
Set up connections to provisioned resources in prompt flow.
| Return Type | Description | |-||
-| string | The text of one response of conversation |
+| string | The text of one response in the conversation |
+
+## Next step
+
+Learn more about [how to process images in prompt flow](../how-to-process-image.md).
machine-learning Tutorial Auto Train Image Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-auto-train-image-models.md
Once you have the input image data prepared in [JSONL](https://jsonlines.org/) (
``` %pip install --upgrade matplotlib ```+ ```python %matplotlib inline
To configure automated ML jobs for image-related tasks, create a task specific A
> resources: >  instance_type: Standard_NC24s_v3 >  instance_count: 4
-```
+> ```
```yaml task: image_object_detection
machine-learning How To Create Register Datasets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-create-register-datasets.md
- Previously updated : 09/28/2022+ Last updated : 02/28/2024 #Customer intent: As an experienced data scientist, I need to package my data into a consumable and reusable object to train my machine learning models. # Create Azure Machine Learning datasets - [!INCLUDE [sdk v1](../includes/machine-learning-sdk-v1.md)]
-In this article, you learn how to create Azure Machine Learning datasets to access data for your local or remote experiments with the Azure Machine Learning Python SDK. To understand where datasets fit in Azure Machine Learning's overall data access workflow, see the [Securely access data](concept-data.md#data-workflow) article.
-
-By creating a dataset, you create a reference to the data source location, along with a copy of its metadata. Because the data remains in its existing location, you incur no extra storage cost, and don't risk the integrity of your data sources. Also datasets are lazily evaluated, which aids in workflow performance speeds. You can create datasets from datastores, public URLs, and [Azure Open Datasets](../../open-datasets/how-to-create-azure-machine-learning-dataset-from-open-dataset.md).
+In this article, you learn how to create Azure Machine Learning datasets to access data for your local or remote experiments with the Azure Machine Learning Python SDK. For more information about how datasets fit in Azure Machine Learning's overall data access workflow, visit the [Securely access data](concept-data.md#data-workflow) article.
-For a low-code experience, [Create Azure Machine Learning datasets with the Azure Machine Learning studio.](how-to-connect-data-ui.md#create-data-assets)
+When you create a dataset, you create a reference to the data source location, along with a copy of its metadata. Because the data remains in its existing location, you incur no extra storage cost, and you don't risk the integrity of your data sources. Additionally, datasets are lazily evaluated, which helps improve workflow performance speeds. You can create datasets from datastores, public URLs, and [Azure Open Datasets](../../open-datasets/how-to-create-azure-machine-learning-dataset-from-open-dataset.md). For information about a low-code experience, visit [Create Azure Machine Learning datasets with the Azure Machine Learning studio](how-to-connect-data-ui.md#create-data-assets).
With Azure Machine Learning datasets, you can:
-* Keep a single copy of data in your storage, referenced by datasets.
+* Keep a single copy of data in your storage, referenced by datasets
-* Seamlessly access data during model training without worrying about connection strings or data paths. [Learn more about how to train with datasets](how-to-train-with-datasets.md).
+* Seamlessly access data during model training without worrying about connection strings or data paths. For more information about dataset training, visit [Learn more about how to train with datasets](how-to-train-with-datasets.md)
-* Share data and collaborate with other users.
+* Share data and collaborate with other users
> [!IMPORTANT] > Items in this article marked as "preview" are currently in public preview.
With Azure Machine Learning datasets, you can:
To create and work with datasets, you need:
-* An Azure subscription. If you don't have one, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/).
+* An Azure subscription. If you don't have one, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/)
-* An [Azure Machine Learning workspace](../quickstart-create-resources.md).
+* An [Azure Machine Learning workspace](../quickstart-create-resources.md)
-* The [Azure Machine Learning SDK for Python installed](/python/api/overview/azure/ml/install), which includes the azureml-datasets package.
+* The [Azure Machine Learning SDK for Python installed](/python/api/overview/azure/ml/install), which includes the azureml-datasets package
- * Create an [Azure Machine Learning compute instance](how-to-create-manage-compute-instance.md), which is a fully configured and managed development environment that includes integrated notebooks and the SDK already installed.
+* Create an [Azure Machine Learning compute instance](how-to-create-manage-compute-instance.md), a fully configured and managed development environment that includes integrated notebooks and the SDK already installed
**OR**
- * Work on your own Jupyter notebook and [install the SDK yourself](/python/api/overview/azure/ml/install).
+* Work on your own Jupyter notebook and [install the SDK yourself](/python/api/overview/azure/ml/install)
> [!NOTE]
-> Some dataset classes have dependencies on the [azureml-dataprep](https://pypi.org/project/azureml-dataprep/) package, which is only compatible with 64-bit Python. If you are developing on __Linux__, these classes rely on .NET Core 2.1, and are only supported on specific distributions. For more information on the supported distros, see the .NET Core 2.1 column in the [Install .NET on Linux](/dotnet/core/install/linux) article.
+> Some dataset classes have dependencies on the [azureml-dataprep](https://pypi.org/project/azureml-dataprep/) package, which is only compatible with 64-bit Python. If you develop on __Linux__, these classes rely on .NET Core 2.1, and only specific distributions support them. For more information about the supported distros, read the .NET Core 2.1 column in the [Install .NET on Linux](/dotnet/core/install/linux) article.
> [!IMPORTANT]
-> While the package may work on older versions of Linux distros, we do not recommend using a distro that is out of mainstream support. Distros that are out of mainstream support may have security vulnerabilities, as they do not receive the latest updates. We recommend using the latest supported version of your distro that is compatible with .
+> While the package may work on older versions of Linux distros, we do not recommend using a distro that is out of mainstream support. Distros that are out of mainstream support may have security vulnerabilities, because they do not receive the latest updates. We recommend using the latest supported version of your distro that is compatible with .NET Core 2.1.
## Compute size guidance
-When creating a dataset, review your compute processing power and the size of your data in memory. The size of your data in storage is not the same as the size of data in a dataframe. For example, data in CSV files can expand up to 10x in a dataframe, so a 1 GB CSV file can become 10 GB in a dataframe.
+When you create a dataset, review your compute processing power and the size of your data in memory. The size of your data in storage isn't the same as the size of data in a dataframe. For example, data in CSV files can expand up to 10 times in a dataframe, so a 1-GB CSV file can become 10 GB in a dataframe.
-If your data is compressed, it can expand further; 20 GB of relatively sparse data stored in compressed parquet format can expand to ~800 GB in memory. Since Parquet files store data in a columnar format, if you only need half of the columns, then you only need to load ~400 GB in memory.
+Compressed data can expand further. Twenty GB of relatively sparse data stored in a compressed parquet format can expand to ~800 GB in memory. Since Parquet files store data in a columnar format, if you only need half of the columns, then you only need to load ~400 GB in memory.
-[Learn more about optimizing data processing in Azure Machine Learning](../concept-optimize-data-processing.md).
+For more information, visit [Learn more about optimizing data processing in Azure Machine Learning](../concept-optimize-data-processing.md).
## Dataset types
-There are two dataset types, based on how users consume them in training; FileDatasets and TabularDatasets. Both types can be used in Azure Machine Learning training workflows involving, estimators, AutoML, hyperDrive and pipelines.
+There are two dataset types, based on how users consume datasets in training: FileDatasets and TabularDatasets. Azure Machine Learning training workflows that involve estimators, AutoML, hyperDrive, and pipelines can use both types.
### FileDataset
-A [FileDataset](/python/api/azureml-core/azureml.data.file_dataset.filedataset) references single or multiple files in your datastores or public URLs.
-If your data is already cleansed, and ready to use in training experiments, you can [download or mount](how-to-train-with-datasets.md#mount-vs-download) the files to your compute as a FileDataset object.
+A [FileDataset](/python/api/azureml-core/azureml.data.file_dataset.filedataset) references single or multiple files in your datastores or public URLs. If your data is already cleaned, and ready to use in training experiments, you can [download or mount](how-to-train-with-datasets.md#mount-vs-download) the files to your compute as a FileDataset object.
+
+We recommend FileDatasets for your machine learning workflows, because the source files can be in any format. This enables a wider range of machine learning scenarios, including deep learning.
-We recommend FileDatasets for your machine learning workflows, since the source files can be in any format, which enables a wider range of machine learning scenarios, including deep learning.
+Create a FileDataset with the [Python SDK](#create-a-filedataset) or the [Azure Machine Learning studio](how-to-connect-data-ui.md#create-data-assets).
-Create a FileDataset with the [Python SDK](#create-a-filedataset) or the [Azure Machine Learning studio](how-to-connect-data-ui.md#create-data-assets)
-.
### TabularDataset
-A [TabularDataset](/python/api/azureml-core/azureml.data.tabulardataset) represents data in a tabular format by parsing the provided file or list of files. This provides you with the ability to materialize the data into a pandas or Spark DataFrame so you can work with familiar data preparation and training libraries without having to leave your notebook. You can create a `TabularDataset` object from .csv, .tsv, [.parquet](/python/api/azureml-core/azureml.data.dataset_factory.tabulardatasetfactory#from-parquet-files-path--validate-true--include-path-false--set-column-types-none--partition-format-none-), [.jsonl files](/python/api/azureml-core/azureml.data.dataset_factory.tabulardatasetfactory#from-json-lines-files-path--validate-true--include-path-false--set-column-types-none--partition-format-none--invalid-lines--errorencoding--utf8--), and from [SQL query results](/python/api/azureml-core/azureml.data.dataset_factory.tabulardatasetfactory#from-sql-query-query--validate-true--set-column-types-none--query-timeout-30-).
+A [TabularDataset](/python/api/azureml-core/azureml.data.tabulardataset) parses the provided file or list of files, to represent data in a tabular format. You can then materialize the data into a pandas or Spark DataFrame, to work with familiar data preparation and training libraries while staying in your notebook. You can create a `TabularDataset` object from .csv, .tsv, [.parquet](/python/api/azureml-core/azureml.data.dataset_factory.tabulardatasetfactory#azureml-data-dataset-factory-tabulardatasetfactory-from-parquet-files), [.json lines files](/python/api/azureml-core/azureml.data.dataset_factory.tabulardatasetfactory#azureml-data-dataset-factory-tabulardatasetfactory-from-json-lines-files), and from [SQL query results](/python/api/azureml-core/azureml.data.dataset_factory.tabulardatasetfactory#azureml-data-dataset-factory-tabulardatasetfactory-from-sql-query).
-With TabularDatasets, you can specify a time stamp from a column in the data or from wherever the path pattern data is stored to enable a time series trait. This specification allows for easy and efficient filtering by time. For an example, see [Tabular time series-related API demo with NOAA weather data](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/work-with-data/datasets-tutorial/timeseries-datasets/tabular-timeseries-dataset-filtering.ipynb).
+With TabularDatasets, you can specify a time stamp from a column in the data, or from the location where the path pattern data is stored, to enable a time series trait. This specification enables easy and efficient filtering by time. For an example, visit [Tabular time series-related API demo with NOAA weather data](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/work-with-data/datasets-tutorial/timeseries-datasets/tabular-timeseries-dataset-filtering.ipynb).
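As a rough illustration, here's a minimal sketch of enabling the time series trait with the `with_timestamp_columns()` and `time_between()` methods, assuming the data has a datetime column named `datetime` and lives under a hypothetical `weather-data` folder in a registered datastore:

```python
from datetime import datetime
from azureml.core import Dataset, Datastore, Workspace

workspace = Workspace.from_config()
datastore = Datastore.get(workspace, '<name of your datastore>')

# create a TabularDataset and mark the 'datetime' column as its timestamp
weather_ds = Dataset.Tabular.from_delimited_files(path=[(datastore, 'weather-data/*.csv')])
weather_ds = weather_ds.with_timestamp_columns(timestamp='datetime')

# filter the dataset to a specific time window before materializing it
january_ds = weather_ds.time_between(datetime(2019, 1, 1), datetime(2019, 1, 31))
```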
Create a TabularDataset with [the Python SDK](#create-a-tabulardataset) or [Azure Machine Learning studio](how-to-connect-data-ui.md#create-data-assets). >[!NOTE] > [Automated ML](../concept-automated-ml.md) workflows generated via the Azure Machine Learning studio currently only support TabularDatasets.
->
->[!NOTE]
-> For TabularDatasets generating from [SQL query results](/python/api/azureml-core/azureml.data.dataset_factory.tabulardatasetfactory#from-sql-query-query--validate-true--set-column-types-none--query-timeout-30-), T-SQL (e.g. 'WITH' sub query) or duplicate column name is not supported. Complex queries like T-SQL can cause performance issues. Duplicate column names in a dataset can cause ambiguity issues.
+>
+>Also, TabularDatasets generated from [SQL query results](/python/api/azureml-core/azureml.data.dataset_factory.tabulardatasetfactory#azureml-data-dataset-factory-tabulardatasetfactory-from-sql-query) don't support T-SQL (for example, a 'WITH' subquery) or duplicate column names. Complex T-SQL queries can cause performance issues. Duplicate column names in a dataset can cause ambiguity issues.
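As a hedged sketch (the datastore and table names here are hypothetical), creating a TabularDataset from a registered Azure SQL datastore might look like this:

```python
from azureml.core import Dataset, Datastore, Workspace
from azureml.data.datapath import DataPath

workspace = Workspace.from_config()
sql_datastore = Datastore.get(workspace, '<name of your SQL datastore>')

# keep the query simple: avoid T-SQL constructs such as 'WITH' subqueries,
# and make sure the result set has no duplicate column names
query = DataPath(sql_datastore, 'SELECT * FROM sales_records')
sales_ds = Dataset.Tabular.from_sql_query(query, query_timeout=10)
```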
## Access datasets in a virtual network
-If your workspace is in a virtual network, you must configure the dataset to skip validation. For more information on how to use datastores and datasets in a virtual network, see [Secure a workspace and associated resources](how-to-secure-workspace-vnet.md#datastores-and-datasets).
-
+If your workspace is located in a virtual network, you must configure the dataset to skip validation. For more information about how to use datastores and datasets in a virtual network, visit [Secure a workspace and associated resources](how-to-secure-workspace-vnet.md#datastores-and-datasets).
## Create datasets from datastores
-For the data to be accessible by Azure Machine Learning, datasets must be created from paths in [Azure Machine Learning datastores](how-to-access-data.md) or web URLs.
+To make the data accessible by Azure Machine Learning, you must create datasets from paths in web URLs or [Azure Machine Learning datastores](how-to-access-data.md).
-> [!TIP]
-> You can create datasets directly from storage urls with identity-based data access. Learn more at [Connect to storage with identity-based data access](../how-to-identity-based-data-access.md).
+> [!TIP]
+> You can create datasets directly from storage URLs with identity-based data access. For more information, visit [Connect to storage with identity-based data access](how-to-identity-based-data-access.md).
-
To create datasets from a datastore with the Python SDK: 1. Verify that you have `contributor` or `owner` access to the underlying storage service of your registered Azure Machine Learning datastore. [Check your storage account permissions in the Azure portal](../../role-based-access-control/check-access.md).
-1. Create the dataset by referencing paths in the datastore. You can create a dataset from multiple paths in multiple datastores. There is no hard limit on the number of files or data size that you can create a dataset from.
+1. Create the dataset by referencing paths in the datastore. You can create a dataset from multiple paths in multiple datastores. There's no hard limit on the number of files or data size from which you can create a dataset.
> [!NOTE]
-> For each data path, a few requests will be sent to the storage service to check whether it points to a file or a folder. This overhead may lead to degraded performance or failure. A dataset referencing one folder with 1000 files inside is considered referencing one data path. We recommend creating dataset referencing less than 100 paths in datastores for optimal performance.
+> For each data path, a few requests are sent to the storage service to check whether it points to a file or a folder. This overhead might lead to degraded performance or failure. A dataset referencing one folder with 1,000 files inside is considered referencing one data path. For optimal performance, we recommend creating datasets that reference fewer than 100 paths in datastores.
### Create a FileDataset
-Use the [`from_files()`](/python/api/azureml-core/azureml.data.dataset_factory.filedatasetfactory#from-files-path--validate-true-) method on the `FileDatasetFactory` class to load files in any format and to create an unregistered FileDataset.
+Use the [`from_files()`](/python/api/azureml-core/azureml.data.dataset_factory.filedatasetfactory#azureml-data-dataset-factory-filedatasetfactory-from-files) method on the `FileDatasetFactory` class to load files in any format, and to create an unregistered FileDataset.
-If your storage is behind a virtual network or firewall, set the parameter `validate=False` in your `from_files()` method. This bypasses the initial validation step, and ensures that you can create your dataset from these secure files. Learn more about how to [use datastores and datasets in a virtual network](how-to-secure-workspace-vnet.md#datastores-and-datasets).
+If your storage is behind a virtual network or firewall, set the parameter `validate=False` in the `from_files()` method. This bypasses the initial validation step, and ensures that you can create your dataset from these secure files. For more information, visit [use datastores and datasets in a virtual network](how-to-secure-workspace-vnet.md#datastores-and-datasets).
```Python from azureml.core import Workspace, Datastore, Dataset
-# create a FileDataset pointing to files in 'animals' folder and its subfolders recursively
+# create a FileDataset recursively pointing to files in 'animals' folder and its subfolders
datastore_paths = [(datastore, 'animals')] animal_ds = Dataset.File.from_files(path=datastore_paths)
web_paths = ['https://azureopendatastorage.blob.core.windows.net/mnist/train-ima
mnist_ds = Dataset.File.from_files(path=web_paths) ```
-If you want to upload all the files from a local directory, create a FileDataset in a single method with [upload_directory()](/python/api/azureml-core/azureml.data.dataset_factory.filedatasetfactory#upload-directory-src-dir--target--pattern-none--overwrite-false--show-progress-true-). This method uploads data to your underlying storage, and as a result incur storage costs.
+To upload all the files from a local directory, create a FileDataset in a single method with
+[`upload_directory()`](/python/api/azureml-core/azureml.data.dataset_factory.filedatasetfactory#azureml-data-dataset-factory-filedatasetfactory-upload-directory). This method uploads data to your underlying storage, and as a result you incur storage costs.
```Python from azureml.core import Workspace, Datastore, Dataset
datastore = Datastore.get(ws, '<name of your datastore>')
ds = Dataset.File.upload_directory(src_dir='<path to your data>', target=DataPath(datastore, '<path on the datastore>'), show_progress=True)- ```
-To reuse and share datasets across experiment in your workspace, [register your dataset](#register-datasets).
++
+To reuse and share datasets across experiments in your workspace, [register your dataset](#register-datasets).
### Create a TabularDataset
-Use the [`from_delimited_files()`](/python/api/azureml-core/azureml.data.dataset_factory.tabulardatasetfactory#from-delimited-files-path--validate-true--include-path-false--infer-column-types-true--set-column-types-none--separatorheader-true--partition-format-none--support-multi-line-false--empty-as-string-false--encoding--utf8--) method on the `TabularDatasetFactory` class to read files in .csv or .tsv format, and to create an unregistered TabularDataset. To read in files from .parquet format, use the [`from_parquet_files()`](/python/api/azureml-core/azureml.data.dataset_factory.tabulardatasetfactory#from-parquet-files-path--validate-true--include-path-false--set-column-types-none--partition-format-none-) method. If you're reading from multiple files, results will be aggregated into one tabular representation.
+Use the [`from_delimited_files()`](/python/api/azureml-core/azureml.data.dataset_factory.tabulardatasetfactory#azureml-data-dataset-factory-tabulardatasetfactory-from-delimited-files) method on the `TabularDatasetFactory` class to read files in .csv or .tsv format, and to create an unregistered TabularDataset. To read files in `.parquet` format, use the [`from_parquet_files()`](/python/api/azureml-core/azureml.data.dataset_factory.tabulardatasetfactory#azureml-data-dataset-factory-tabulardatasetfactory-from-parquet-files) method. If you're reading from multiple files, results are aggregated into one tabular representation.
-See the [TabularDatasetFactory reference documentation](/python/api/azureml-core/azureml.data.dataset_factory.tabulardatasetfactory) for information about supported file formats, as well as syntax and design patterns such as [multiline support](/python/api/azureml-core/azureml.data.dataset_factory.tabulardatasetfactory#from-delimited-files-path--validate-true--include-path-false--infer-column-types-true--set-column-types-none--separatorheader-true--partition-format-none--support-multi-line-false--empty-as-string-false--encoding--utf8--).
+For information about supported file formats, and about syntax and design patterns such as [multiline support](/python/api/azureml-core/azureml.data.dataset_factory.tabulardatasetfactory#azureml-data-dataset-factory-tabulardatasetfactory-from-delimited-files), visit the [TabularDatasetFactory reference documentation](/python/api/azureml-core/azureml.data.dataset_factory.tabulardatasetfactory).
-If your storage is behind a virtual network or firewall, set the parameter `validate=False` in your `from_delimited_files()` method. This bypasses the initial validation step, and ensures that you can create your dataset from these secure files. Learn more about how to use [datastores and datasets in a virtual network](how-to-secure-workspace-vnet.md#datastores-and-datasets).
+If your storage is behind a virtual network or firewall, set the parameter `validate=False` in your `from_delimited_files()` method. This bypasses the initial validation step, and ensures that you can create your dataset from these secure files. For more information about data storage resources behind a virtual network or firewall, visit [datastores and datasets in a virtual network](how-to-secure-workspace-vnet.md#datastores-and-datasets).
-The following code gets the existing workspace and the desired datastore by name. And then passes the datastore and file locations to the `path` parameter to create a new TabularDataset, `weather_ds`.
+This code gets the existing workspace and the desired datastore by name. It then passes the datastore and file locations to the `path` parameter to create a new TabularDataset named `weather_ds`:
```Python from azureml.core import Workspace, Datastore, Dataset
weather_ds = Dataset.Tabular.from_delimited_files(path=datastore_paths)
``` ### Set data schema
-By default, when you create a TabularDataset, column data types are inferred automatically. If the inferred types don't match your expectations, you can update your dataset schema by specifying column types with the following code. The parameter `infer_column_type` is only applicable for datasets created from delimited files. [Learn more about supported data types](/python/api/azureml-core/azureml.data.dataset_factory.datatype).
-
+When you create a TabularDataset, column data types are automatically inferred by default. If the inferred types don't match your expectations, you can specify column types with the following code to update your dataset. The parameter `infer_column_type` is only applicable for datasets created from delimited files. For more information, visit [Learn more about supported data types](/python/api/azureml-core/azureml.data.dataset_factory.datatype).
```Python from azureml.core import Dataset
titanic_ds.take(3).to_pandas_dataframe()
To reuse and share datasets across experiments in your workspace, [register your dataset](#register-datasets). ## Wrangle data
-After you create and [register](#register-datasets) your dataset, you can load it into your notebook for data wrangling and [exploration](#explore-data) prior to model training.
-
-If you don't need to do any data wrangling or exploration, see how to consume datasets in your training scripts for submitting ML experiments in [Train with datasets](how-to-train-with-datasets.md).
+After you create and [register](#register-datasets) your dataset, you can load that dataset into your notebook for data wrangling and [exploration](#explore-data) before model training. You might not need to do any data wrangling or exploration. In that case, for more information about how to consume datasets in your training scripts for ML experiment submissions, visit [Train with datasets](how-to-train-with-datasets.md).
### Filter datasets (preview)
-Filtering capabilities depends on the type of dataset you have.
+Filtering capabilities depend on the type of dataset you have.
> [!IMPORTANT]
-> Filtering datasets with the preview method, [`filter()`](/python/api/azureml-core/azureml.data.tabulardataset#filter-expression-) is an [experimental](/python/api/overview/azure/ml/#stable-vs-experimental) preview feature, and may change at any time.
+> Filtering datasets with the [`filter()`](/python/api/azureml-core/azureml.data.tabulardataset#azureml-data-tabulardataset-filter) preview method is an [experimental](/python/api/overview/azure/ml/#stable-vs-experimental) preview feature, and may change at any time.
>
-**For TabularDatasets**, you can keep or remove columns with the [keep_columns()](/python/api/azureml-core/azureml.data.tabulardataset#keep-columns-columns--validate-false-) and [drop_columns()](/python/api/azureml-core/azureml.data.tabulardataset#drop-columns-columns-) methods.
+For **TabularDatasets**, you can keep or remove columns with the [keep_columns()](/python/api/azureml-core/azureml.data.tabulardataset#azureml-data-tabulardataset-keep-columns) and [drop_columns()](/python/api/azureml-core/azureml.data.tabulardataset#azureml-data-tabulardataset-drop-columns) methods.
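For example, a brief sketch that reuses the `titanic_ds` dataset from earlier in this article (the column names are illustrative):

```python
# keep only the columns needed for training
titanic_subset = titanic_ds.keep_columns(['Survived', 'Age', 'Sex', 'Fare'])

# alternatively, remove columns you don't need
titanic_trimmed = titanic_ds.drop_columns(['Cabin', 'Ticket'])
```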
-To filter out rows by a specific column value in a TabularDataset, use the [filter()](/python/api/azureml-core/azureml.data.tabulardataset#filter-expression-) method (preview).
+To filter out rows by a specific column value in a TabularDataset, use the [filter()](/python/api/azureml-core/azureml.data.tabulardataset#azureml-data-tabulardataset-filter) method (preview).
-The following examples return an unregistered dataset based on the specified expressions.
+These examples return an unregistered dataset based on the specified expressions:
```python # TabularDataset that only contains records where the age column value is greater than 15
tabular_dataset = tabular_dataset.filter(tabular_dataset['age'] > 15)
tabular_dataset = tabular_dataset.filter((tabular_dataset['name'].contains('Bri')) & (tabular_dataset['age'] > 15)) ```
-**In FileDatasets**, each row corresponds to a path of a file, so filtering by column value is not helpful. But, you can [filter()](/python/api/azureml-core/azureml.data.filedataset#filter-expression-) out rows by metadata like, CreationTime, Size etc.
-
-The following examples return an unregistered dataset based on the specified expressions.
+In **FileDatasets**, each row corresponds to the path of a file, so filtering by column value doesn't help. However, you can [filter()](/python/api/azureml-core/azureml.data.filedataset#azureml-data-filedataset-filter) rows by metadata such as CreationTime and Size. These examples return an unregistered dataset based on the specified expressions:
```python # FileDataset that only contains files where Size is less than 100000
file_dataset = file_dataset.filter(file_dataset.file_metadata['Size'] < 100000)
file_dataset = file_dataset.filter((file_dataset.file_metadata['CreatedTime'] < datetime(2020,1,1)) | (file_dataset.file_metadata['CanSeek'] == False)) ```
-**Labeled datasets** created from [image labeling projects](../how-to-create-image-labeling-projects.md) are a special case. These datasets are a type of TabularDataset made up of image files. For these types of datasets, you can [filter()](/python/api/azureml-core/azureml.data.tabulardataset#filter-expression-) images by metadata, and by column values like `label` and `image_details`.
+**Labeled datasets** created from [image labeling projects](../how-to-create-image-labeling-projects.md) are a special case. These datasets are a type of TabularDataset made up of image files. For these datasets, you can [filter()](/python/api/azureml-core/azureml.data.tabulardataset#azureml-data-tabulardataset-filter) images by metadata, and by `label` and `image_details` column values.
```python # Dataset that only contains records where the label column value is dog
labeled_dataset = labeled_dataset.filter((labeled_dataset['label']['isCrowd'] ==
### Partition data
-You can partition a dataset by including the `partitions_format` parameter when creating a TabularDataset or FileDataset.
+To partition a dataset, include the `partition_format` parameter when you create a TabularDataset or FileDataset.
-When you partition a dataset, the partition information of each file path is extracted into columns based on the specified format. The format should start from the position of first partition key until the end of file path.
+When you partition a dataset, the partition information of each file path is extracted into columns based on the specified format. The format should start from the position of the first partition key and continue to the end of the file path.
-For example, given the path `../Accounts/2019/01/01/data.jsonl` where the partition is by department name and time; the `partition_format='/{Department}/{PartitionDate:yyyy/MM/dd}/data.jsonl'` creates a string column 'Department' with the value 'Accounts' and a datetime column 'PartitionDate' with the value `2019-01-01`.
+For example, given the path `../Accounts/2019/01/01/data.jsonl`, where the partition is by department name and time, the `partition_format='/{Department}/{PartitionDate:yyyy/MM/dd}/data.jsonl'` creates a string column 'Department' with the value 'Accounts', and a datetime column 'PartitionDate' with the value `2019-01-01`.
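A hedged sketch of that example might look like the following, assuming the `.jsonl` files live in a registered datastore with that folder layout:

```python
from azureml.core import Dataset, Datastore, Workspace

workspace = Workspace.from_config()
datastore = Datastore.get(workspace, '<name of your datastore>')

# 'Department' and 'PartitionDate' columns are extracted from the folder structure
partitioned_ds = Dataset.Tabular.from_json_lines_files(
    path=[(datastore, '*/*/*/*/data.jsonl')],
    partition_format='/{Department}/{PartitionDate:yyyy/MM/dd}/data.jsonl')
```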
-If your data already has existing partitions and you want to preserve that format, include the `partitioned_format` parameter in your [`from_files()`](/python/api/azureml-core/azureml.data.dataset_factory.filedatasetfactory#from-files-path--validate-true--partition-format-none-) method to create a FileDataset.
+If your data already has existing partitions and you want to preserve that format, include the `partition_format` parameter in your [`from_files()`](/python/api/azureml-core/azureml.data.dataset_factory.filedatasetfactory#azureml-data-dataset-factory-filedatasetfactory-from-files) method, to create a FileDataset.
-To create a TabularDataset that preserves existing partitions, include the `partitioned_format` parameter in the [from_parquet_files()](/python/api/azureml-core/azureml.data.dataset_factory.tabulardatasetfactory#from-parquet-files-path--validate-true--include-path-false--set-column-types-none--partition-format-none-) or the
-[from_delimited_files()](/python/api/azureml-core/azureml.data.dataset_factory.tabulardatasetfactory#from-delimited-files-path--validate-true--include-path-false--infer-column-types-true--set-column-types-none--separatorheader-true--partition-format-none--support-multi-line-false--empty-as-string-false--encoding--utf8--) method.
+To create a TabularDataset that preserves existing partitions, include the `partition_format` parameter in the [`from_parquet_files()`](/python/api/azureml-core/azureml.data.dataset_factory.tabulardatasetfactory#azureml-data-dataset-factory-tabulardatasetfactory-from-parquet-files) or the
+[`from_delimited_files()`](/python/api/azureml-core/azureml.data.dataset_factory.tabulardatasetfactory#azureml-data-dataset-factory-tabulardatasetfactory-from-delimited-files) method.
-The following example,
-* Creates a FileDataset from partitioned files.
+This example:
+
+* Creates a FileDataset from partitioned files
* Gets the partition keys
-* Creates a new, indexed FileDataset using
-
+* Creates a new, indexed FileDataset
+ ```Python file_dataset = Dataset.File.from_files(data_paths, partition_format = '{userid}/*.wav')
partitions = file_dataset.get_partition_key_values(['userid'])
new_file_dataset = file_dataset.filter(file_dataset['userid'] == 'user1').download() ```
-You can also create a new partitions structure for TabularDatasets with the [partitions_by()](/python/api/azureml-core/azureml.data.tabulardataset#partition-by-partition-keys--target--name-none--show-progress-true--partition-as-file-dataset-false-) method.
+You can also create a new partition structure for TabularDatasets with the [partition_by()](/python/api/azureml-core/azureml.data.tabulardataset#azureml-data-tabulardataset-partition-by) method.
```Python
partition_keys = new_dataset.partition_keys # ['country']
## Explore data
-After you're done wrangling your data, you can [register](#register-datasets) your dataset, and then load it into your notebook for data exploration prior to model training.
+After you wrangle your data, you can [register](#register-datasets) your dataset, and then load it into your notebook for data exploration before model training.
-For FileDatasets, you can either **mount** or **download** your dataset, and apply the Python libraries you'd normally use for data exploration. [Learn more about mount vs download](how-to-train-with-datasets.md#mount-vs-download).
+For FileDatasets, you can either **mount** or **download** your dataset, and apply the Python libraries you'd normally use for data exploration. For more information, visit [Learn more about mount vs download](how-to-train-with-datasets.md#mount-vs-download).
```python # download the dataset
mount_context = dataset.mount(mounted_path)
mount_context.start() ```
-For TabularDatasets, use the [`to_pandas_dataframe()`](/python/api/azureml-core/azureml.data.tabulardataset#to-pandas-dataframe-on-error--nullout-of-range-datetime--null--) method to view your data in a dataframe.
+For TabularDatasets, use the [`to_pandas_dataframe()`](/python/api/azureml-core/azureml.data.tabulardataset#azureml-data-tabulardataset-to-pandas-dataframe) method to view your data in a dataframe.
```python # preview the first 3 rows of titanic_ds
titanic_ds.take(3).to_pandas_dataframe()
## Create a dataset from pandas dataframe
-To create a TabularDataset from an in memory pandas dataframe
-use the [`register_pandas_dataframe()`](/python/api/azureml-core/azureml.data.dataset_factory.tabulardatasetfactory#register-pandas-dataframe-dataframe--target--name--description-none--tags-none--show-progress-true-) method. This method registers the TabularDataset to the workspace and uploads data to your underlying storage, which incurs storage costs.
+To create a TabularDataset from an in-memory pandas dataframe, use the [`register_pandas_dataframe()`](/python/api/azureml-core/azureml.data.dataset_factory.tabulardatasetfactory#azureml-data-dataset-factory-tabulardatasetfactory-register-pandas-dataframe) method. This method registers the TabularDataset to the workspace and uploads data to your underlying storage. This process incurs storage costs.
```python from azureml.core import Workspace, Datastore, Dataset
dataset = Dataset.Tabular.register_pandas_dataframe(pandas_df, datastore, "datas
``` > [!TIP]
-> Create and register a TabularDataset from an in memory spark dataframe or a dask dataframe with the public preview methods, [`register_spark_dataframe()`](/python/api/azureml-core/azureml.data.dataset_factory.tabulardatasetfactory##register-spark-dataframe-dataframe--target--name--description-none--tags-none--show-progress-true-) and [`register_dask_dataframe()`](/python/api/azureml-core/azureml.data.dataset_factory.tabulardatasetfactory#register-dask-dataframe-dataframe--target--name--description-none--tags-none--show-progress-true-). These methods are [experimental](/python/api/overview/azure/ml/#stable-vs-experimental) preview features, and may change at any time.
->
-> These methods upload data to your underlying storage, and as a result incur storage costs.
+> Create and register a TabularDataset from an in-memory Spark dataframe or a Dask dataframe with the public preview methods, [`register_spark_dataframe()`](/python/api/azureml-core/azureml.data.dataset_factory.tabulardatasetfactory#azureml-data-dataset-factory-tabulardatasetfactory-register-spark-dataframe) and [`register_dask_dataframe()`](/python/api/azureml-core/azureml.data.dataset_factory.tabulardatasetfactory#azureml-data-dataset-factory-tabulardatasetfactory-register-dask-dataframe). These methods are [experimental](/python/api/overview/azure/ml/#stable-vs-experimental) preview features, and may change at any time.
+>
+> These methods upload data to your underlying storage, and as a result incur storage costs.
## Register datasets
-To complete the creation process, register your datasets with a workspace. Use the [`register()`](/python/api/azureml-core/azureml.data.abstract_dataset.abstractdataset#&preserve-view=trueregister-workspace--name--description-none--tags-none--create-new-version-false-) method to register datasets with your workspace in order to share them with others and reuse them across experiments in your workspace:
+To complete the creation process, register your datasets with a workspace. Use the [`register()`](/python/api/azureml-core/azureml.data.abstract_dataset.abstractdataset#azureml-data-abstract-dataset-abstractdataset-register) method to register datasets with your workspace, to share them with others and reuse them across experiments in your workspace:
```Python titanic_ds = titanic_ds.register(workspace=workspace,
titanic_ds = titanic_ds.register(workspace=workspace,
## Create datasets using Azure Resource Manager
-There are many templates at [https://github.com/Azure/azure-quickstart-templates/tree/master//quickstarts/microsoft.machinelearningservices](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.machinelearningservices) that can be used to create datasets.
+You can find many templates that you can use to create datasets at [microsoft.machinelearningservices](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.machinelearningservices).
-For information on using these templates, see [Use an Azure Resource Manager template to create a workspace for Azure Machine Learning](../how-to-create-workspace-template.md).
-
+For information about these templates, visit [Use an Azure Resource Manager template to create a workspace for Azure Machine Learning](../how-to-create-workspace-template.md).
## Train with datasets
Use your datasets in your machine learning experiments for training ML models. [
## Version datasets
-You can register a new dataset under the same name by creating a new version. A dataset version is a way to bookmark the state of your data so that you can apply a specific version of the dataset for experimentation or future reproduction. Learn more about [dataset versions](how-to-version-track-datasets.md).
+You can register a new dataset under the same name by creating a new version. A dataset version bookmarks the state of your data, so that you can apply a specific version of the dataset for experimentation or future reproduction. For more information, visit [dataset versions](how-to-version-track-datasets.md).
+ ```Python # create a TabularDataset from Titanic training data web_paths = ['https://dprepdata.blob.core.windows.net/demo/Titanic.csv',
titanic_ds = titanic_ds.register(workspace = workspace,
## Next steps
-* Learn [how to train with datasets](how-to-train-with-datasets.md).
-* Use automated machine learning to [train with TabularDatasets](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand/auto-ml-forecasting-energy-demand.ipynb).
-* For more dataset training examples, see the [sample notebooks](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/work-with-data/).
+* Learn [how to train with datasets](how-to-train-with-datasets.md)
+* Use automated machine learning to [train with TabularDatasets](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand/auto-ml-forecasting-energy-demand.ipynb)
+* For more dataset training examples, see the [sample notebooks](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/work-with-data/)
mariadb Concepts Connectivity Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/concepts-connectivity-architecture.md
The following table lists the gateway IP addresses of the Azure Database for Mar
We strongly encourage customers to move away from relying on any individual Gateway IP address (since these will be retired in the future). Instead allow network traffic to reach both the individual Gateway IP addresses and Gateway IP address subnets in a region.
-| **Region name** | **Gateway IP address(es)** | **Gateway IP address Subnets** |
-|:-|:-|:--|
-| Australia Central | 20.36.105.32 | 20.36.105.32/29, 20.53.48.96/27 |
-| Australia Central 2 | 20.36.113.32 | 20.36.113.32/29, 20.53.56.32/27 |
-| Australia East | 13.70.112.32, 40.79.160.32, 40.79.168.32 | 13.70.112.32/29, 40.79.160.32/29, 40.79.168.32/29, 20.53.46.128/27 |
-| Australia Southeast | 13.77.49.33 | 13.77.49.32/29, 104.46.179.160/27 |
-| Brazil South | 191.233.201.8, 191.233.200.16 | 191.234.153.32/27, 191.234.152.32/27, 191.234.157.136/29, 191.233.200.32/29, 191.234.144.32/29, 191.234.142.160/27 |
-| Canada Central | 13.71.168.32 | 13.71.168.32/29, 20.38.144.32/29, 52.246.152.32/29, 20.48.196.32/27 |
-| Canada East | 40.69.105.32 | 40.69.105.32/29, 52.139.106.192/27 |
-| Central US | 104.208.21.192, 13.89.168.192, 52.182.136.192 | 104.208.21.192/29, 13.89.168.192/29, 52.182.136.192/29, 20.40.228.128/27 |
-| China East | 52.130.112.139 | 52.130.112.136/29, 52.130.13.96/27 |
-| China East 2 | 40.73.82.1, 52.130.120.89 | 52.130.120.88/29, 52.130.7.0/27 |
-| China East 3 | 52.130.128.89 | 52.130.128.88/29, 40.72.77.128/27 |
-| China North | 52.130.128.89 | 52.130.128.88/29, 40.72.77.128/27 |
-| China North 2 | 40.73.50.0 | 52.130.40.64/29, 52.130.21.160/27 |
-| China North 3 | 13.75.32.192, 13.75.33.192 | 13.75.32.192/29, 13.75.33.192/29 |
-| East Asia | 13.75.33.20, 13.75.33.21 | 20.205.77.176/29, 20.205.83.224/29, 20.205.77.200/29, 13.75.32.192/29, 13.75.33.192/29, 20.195.72.32/27 |
-| East US | 20.42.65.64, 20.42.73.0, 52.168.116.64 | 20.42.65.64/29, 20.42.73.0/29, 52.168.116.64/29, 20.62.132.160/27 |
-| East US 2 | 104.208.150.192, 40.70.144.192, 52.167.104.192 | 104.208.150.192/29, 40.70.144.192/29, 52.167.104.192/29, 20.62.58.128/27 |
-| France Central | 40.79.129.1 | 40.79.128.32/29, 40.79.136.32/29, 40.79.144.32/29, 20.43.47.192/27 |
-| France South | 40.79.176.40 | 40.79.176.40/29, 40.79.177.32/29, 52.136.185.0/27 |
-| Germany West Central | 51.116.152.0 | 51.116.152.32/29, 51.116.240.32/29, 51.116.248.32/29, 51.116.149.32/27 |
-| India Central | 104.211.86.32, 20.192.96.33 | 40.80.48.32/29, 104.211.86.32/29, 20.192.96.32/29, 20.192.43.160/27 |
-| India South | 40.78.192.32 | 40.78.192.32/29, 40.78.193.32/29, 52.172.113.96/27 |
-| India West | 104.211.144.32 | 104.211.144.32/29, 104.211.145.32/29, 52.136.53.160/27 |
-| Japan East | 40.79.184.8, 40.79.192.23 | 13.78.104.32/29, 40.79.184.32/29, 40.79.192.32/29, 20.191.165.160/27 |
-| Japan West | 40.74.96.6 | 20.18.179.192/29, 40.74.96.32/29, 20.189.225.160/27 |
-| Korea Central | 52.231.17.13 | 20.194.64.32/29, 20.44.24.32/29, 52.231.16.32/29, 20.194.73.64/27 |
-| Korea South | 52.231.145.3 | 52.231.151.96/27, 52.231.151.88/29, 52.231.145.0/29, 52.147.112.160/27 |
-| North Central US | 52.162.104.35, 52.162.104.36 | 52.162.105.200/29, 20.125.171.192/29, 52.162.105.192/29, 20.49.119.32/27 |
-| North Europe | 52.138.224.6, 52.138.224.7 | 13.69.233.136/29, 13.74.105.192/29, 52.138.229.72/29, 52.146.133.128/27 |
-| Norway East | 51.120.96.0 | 51.120.208.32/29, 51.120.104.32/29, 51.120.96.32/29, 51.120.232.192/27 |
-| Norway West | 51.120.216.0 | 51.120.217.32/29, 51.13.136.224/27 |
-| South Africa North | 102.133.152.0 | 102.133.120.32/29, 102.133.152.32/29, 102.133.248.32/29, 102.133.221.224/27 |
-| South Africa West | 102.133.24.0 | 102.133.25.32/29, 102.37.80.96/27 |
-| South Central US | 20.45.120.0 | 20.45.121.32/29, 20.49.88.32/29, 20.49.89.32/29, 40.124.64.136/29, 20.65.132.160/27 |
-| Southeast Asia | 23.98.80.12, 40.78.233.2 | 13.67.16.192/29, 23.98.80.192/29, 40.78.232.192/29, 20.195.65.32/27 |
-| Sweden Central | 51.12.96.32 | 51.12.96.32/29, 51.12.232.32/29, 51.12.224.32/29, 51.12.46.32/27 |
-| Sweden South | 51.12.200.32 | 51.12.201.32/29, 51.12.200.32/29, 51.12.198.32/27 |
-| Switzerland North | 51.107.56.0 | 51.107.56.32/29, 51.103.203.192/29, 20.208.19.192/29, 51.107.242.32/27 |
-| Switzerland West | 51.107.152.0 | 51.107.153.32/29, 51.107.250.64/27 |
-| UAE Central | 20.37.72.64 | 20.37.72.96/29, 20.37.73.96/29, 20.37.71.64/27 |
-| UAE North | 65.52.248.0 | 20.38.152.24/29, 40.120.72.32/29, 65.52.248.32/29, 20.38.143.64/27 |
-| UK South | 51.105.64.0 | 51.105.64.32/29, 51.105.72.32/29, 51.140.144.32/29, 51.143.209.224/27 |
-| UK West | 51.140.208.98 | 51.140.208.96/29, 51.140.209.32/29, 20.58.66.128/27 |
-| West Central US | 13.71.193.34 | 13.71.193.32/29, 20.69.0.32/27 |
-| West Europe | 13.69.105.208, 104.40.169.187 | 104.40.169.32/29, 13.69.112.168/29, 52.236.184.32/29, 20.61.99.192/27 |
-| West US | 13.86.216.212, 13.86.217.212 | 20.168.163.192/29, 13.86.217.224/29, 20.66.3.64/27 |
-| West US 2 | 13.66.136.192 | 13.66.136.192/29, 40.78.240.192/29, 40.78.248.192/29, 20.51.9.128/27 |
-| West US 3 | 20.150.184.2 | 20.150.168.32/29, 20.150.176.32/29, 20.150.184.32/29, 20.150.241.128/27 |
+| **Region name** |**Current Gateway IP address**| **Gateway IP address subnets** |
+|:-|:--|:--|
+| Australia Central | 20.36.105.32 | 20.36.105.32/29, 20.53.48.96/27 |
+| Australia Central 2 | 20.36.113.32 | 20.36.113.32/29, 20.53.56.32/27 |
+| Australia East | 13.70.112.32 | 13.70.112.32/29, 40.79.160.32/29, 40.79.168.32/29, 20.53.46.128/27 |
+| Australia Southeast | 13.77.49.33 | 13.77.49.32/29, 104.46.179.160/27 |
+| Brazil South | 191.233.201.8, 191.233.200.16 | 191.234.153.32/27, 191.234.152.32/27, 191.234.157.136/29, 191.233.200.32/29, 191.234.144.32/29, 191.234.142.160/27|
+|Brazil Southeast|191.233.48.2|191.233.48.32/29, 191.233.15.160/27|
+| Canada Central | 13.71.168.32| 13.71.168.32/29, 20.38.144.32/29, 52.246.152.32/29, 20.48.196.32/27|
+| Canada East |40.69.105.32 | 40.69.105.32/29, 52.139.106.192/27 |
+| Central US | 52.182.136.37, 52.182.136.38 | 104.208.21.192/29, 13.89.168.192/29, 52.182.136.192/29, 20.40.228.128/27|
+| China East | 52.130.112.139 | 52.130.112.136/29, 52.130.13.96/27 |
+| China East 2 | 40.73.82.1, 52.130.120.89 | 52.130.120.88/29, 52.130.7.0/27|
+| China North | 52.130.128.89| 52.130.128.88/29, 40.72.77.128/27 |
+| China North 2 |40.73.50.0 | 52.130.40.64/29, 52.130.21.160/27|
+| East Asia |13.75.33.20, 13.75.33.21 | 20.205.77.176/29, 20.205.83.224/29, 20.205.77.200/29, 13.75.32.192/29, 13.75.33.192/29, 20.195.72.32/27|
+| East US | 40.71.8.203, 40.71.83.113|20.42.65.64/29, 20.42.73.0/29, 52.168.116.64/29, 20.62.132.160/27|
+| East US 2 |52.167.105.38, 40.70.144.38| 104.208.150.192/29, 40.70.144.192/29, 52.167.104.192/29, 20.62.58.128/27|
+| France Central |40.79.129.1 | 40.79.128.32/29, 40.79.136.32/29, 40.79.144.32/29, 20.43.47.192/27 |
+| France South |40.79.176.40 | 40.79.176.40/29, 40.79.177.32/29, 52.136.185.0/27|
+| Germany North| 51.116.56.0| 51.116.57.32/29, 51.116.54.96/27|
+| Germany West Central | 51.116.152.0 | 51.116.152.32/29, 51.116.240.32/29, 51.116.248.32/29, 51.116.149.32/27|
+| India Central |20.192.96.33 | 40.80.48.32/29, 104.211.86.32/29, 20.192.96.32/29, 20.192.43.160/27|
+| India South | 40.78.192.32| 40.78.192.32/29, 40.78.193.32/29, 52.172.113.96/27|
+| India West | 104.211.144.32| 104.211.144.32/29, 104.211.145.32/29, 52.136.53.160/27|
+| Japan East | 40.79.184.8, 40.79.192.23| 13.78.104.32/29, 40.79.184.32/29, 40.79.192.32/29, 20.191.165.160/27 |
+| Japan West |40.74.96.6| 20.18.179.192/29, 40.74.96.32/29, 20.189.225.160/27 |
+| Jio India Central| 20.192.233.32|20.192.233.32/29, 20.192.48.32/27|
+| Jio India West|20.193.200.32|20.193.200.32/29, 20.192.167.224/27|
+| Korea Central | 52.231.17.13 | 20.194.64.32/29, 20.44.24.32/29, 52.231.16.32/29, 20.194.73.64/27|
+| Korea South |52.231.145.3| 52.231.151.96/27, 52.231.151.88/29, 52.231.145.0/29, 52.147.112.160/27 |
+| North Central US | 52.162.104.35, 52.162.104.36 | 52.162.105.200/29, 20.125.171.192/29, 52.162.105.192/29, 20.49.119.32/27|
+| North Europe |52.138.224.6, 52.138.224.7 |13.69.233.136/29, 13.74.105.192/29, 52.138.229.72/29, 52.146.133.128/27 |
+|Norway East|51.120.96.0|51.120.208.32/29, 51.120.104.32/29, 51.120.96.32/29, 51.120.232.192/27|
+|Norway West|51.120.216.0|51.120.217.32/29, 51.13.136.224/27|
+| South Africa North | 102.133.152.0 | 102.133.120.32/29, 102.133.152.32/29, 102.133.248.32/29, 102.133.221.224/27 |
+| South Africa West |102.133.24.0 | 102.133.25.32/29, 102.37.80.96/27|
+| South Central US | 20.45.120.0 |20.45.121.32/29, 20.49.88.32/29, 20.49.89.32/29, 40.124.64.136/29, 20.65.132.160/27|
+| Southeast Asia | 23.98.80.12, 40.78.233.2 | 13.67.16.192/29, 23.98.80.192/29, 40.78.232.192/29, 20.195.65.32/27 |
+| Sweden Central|51.12.96.32|51.12.96.32/29, 51.12.232.32/29, 51.12.224.32/29, 51.12.46.32/27|
+| Sweden South|51.12.200.32|51.12.201.32/29, 51.12.200.32/29, 51.12.198.32/27|
+| Switzerland North |51.107.56.0 |51.107.56.32/29, 51.103.203.192/29, 20.208.19.192/29, 51.107.242.32/27|
+| Switzerland West | 51.107.152.0| 51.107.153.32/29, 51.107.250.64/27|
+| UAE Central | 20.37.72.64| 20.37.72.96/29, 20.37.73.96/29, 20.37.71.64/27 |
+| UAE North |65.52.248.0 |20.38.152.24/29, 40.120.72.32/29, 65.52.248.32/29, 20.38.143.64/27 |
+| UK South | 51.105.64.0|51.105.64.32/29, 51.105.72.32/29, 51.140.144.32/29, 51.143.209.224/27|
+| UK West |51.140.208.98 |51.140.208.96/29, 51.140.209.32/29, 20.58.66.128/27 |
+| West Central US |13.71.193.34 | 13.71.193.32/29, 20.69.0.32/27 |
+| West Europe | 13.69.105.208,104.40.169.187|104.40.169.32/29, 13.69.112.168/29, 52.236.184.32/29, 20.61.99.192/27|
+| West US |13.86.216.212, 13.86.217.212 |20.168.163.192/29, 13.86.217.224/29, 20.66.3.64/27|
+| West US 2 | 13.66.136.192 | 13.66.136.192/29, 40.78.240.192/29, 40.78.248.192/29, 20.51.9.128/27|
+| West US 3 |20.150.184.2 | 20.150.168.32/29, 20.150.176.32/29, 20.150.184.32/29, 20.150.241.128/27 |
## Connection redirection
migrate Concepts Azure Sql Assessment Calculation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/concepts-azure-sql-assessment-calculation.md
Previously updated : 02/12/2024 Last updated : 02/26/2024
Target and pricing settings | **Savings options - SQL Server on Azure VM (IaaS)*
Target and pricing settings | **Currency** | The billing currency for your account. Target and pricing settings | **Discount (%)** | Any subscription-specific discounts you receive on top of the Azure offer. The default setting is 0%. Target and pricing settings | **VM uptime** | Specify the duration (days per month/hour per day) that servers/VMs run. This is useful for computing cost estimates for SQL Server on Azure VM where you're aware that Azure VMs might not run continuously. <br/> Cost estimates for servers where recommended target is *SQL Server on Azure VM* are based on the duration specified. Default is 31 days per month/24 hours per day.
-Target and pricing settings | **Azure Hybrid Benefit** | Specify whether you already have a Windows Server and/or SQL Server license or Enterprise Linux subscription. Azure Hybrid Benefit is a licensing benefit that helps you to significantly reduce the costs of running your workloads in the cloud. It works by letting you use your on-premises Software Assurance-enabled Windows Server and SQL Server licenses on Azure. For example, if you have a SQL Server license and they're covered with active Software Assurance of SQL Server Subscriptions, you can apply for the Azure Hybrid Benefit when you bring licenses to Azure.
+Target and pricing settings | **Azure Hybrid Benefit** | Specify whether you already have a Windows Server and/or SQL Server license or Enterprise Linux subscription (RHEL and SLES). Azure Hybrid Benefit is a licensing benefit that helps you to significantly reduce the costs of running your workloads in the cloud. It works by letting you use your on-premises Software Assurance-enabled Windows Server and SQL Server licenses on Azure. For example, if you have a SQL Server license and they're covered with active Software Assurance of SQL Server Subscriptions, you can apply for the Azure Hybrid Benefit when you bring licenses to Azure.
Assessment criteria | **Sizing criteria** | Set to *Performance-based* by default, which means Azure Migrate collects performance metrics pertaining to SQL instances and the databases managed by it to recommend an optimal-sized SQL Server on Azure VM and/or Azure SQL Database and/or Azure SQL Managed Instance configuration. <br/><br/> You can change this to *As on-premises* to get recommendations based on just the on-premises SQL Server configuration without the performance metric based optimizations. Assessment criteria | **Performance history** | Indicate the data duration on which you want to base the assessment. (Default is one day) Assessment criteria | **Percentile utilization** | Indicate the percentile value you want to use for the performance sample. (Default is 95th percentile)
After sizing recommendations are complete, Azure SQL assessment calculates the c
### Compute cost - To calculate the compute cost for an Azure SQL configuration, the assessment considers the following properties:
- - Azure Hybrid Benefit for SQL and Windows licenses or Enterprise Linux subscription
+ - Azure Hybrid Benefit for SQL and Windows licenses or Enterprise Linux subscription (RHEL and SLES)
- Environment type - Reserved capacity - Azure target location
migrate How To Create Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-create-assessment.md
ms. Previously updated : 02/12/2024 Last updated : 02/26/2024
Run an assessment as follows:
- Cost estimates are based on the duration specified. - Default is 31 days per month/24 hours per day. - In **EA Subscription**, specify whether to take an Enterprise Agreement (EA) subscription discount into account for cost estimation.
- - In **Azure Hybrid Benefit**, specify whether you already have a Windows Server license or Enterprise Linux subscription. If you do and they're covered with active Software Assurance of Windows Server or Enterprise Linux Subscriptions, you can apply for the [Azure Hybrid Benefit](https://azure.microsoft.com/pricing/hybrid-use-benefit/) when you bring licenses to Azure.
+ - In **Azure Hybrid Benefit**, specify whether you already have a Windows Server license or Enterprise Linux subscription (RHEL and SLES). If you do and they're covered with active Software Assurance of Windows Server or Enterprise Linux Subscriptions (RHEL and SLES), you can apply for the [Azure Hybrid Benefit](https://azure.microsoft.com/pricing/hybrid-use-benefit/) when you bring licenses to Azure.
1. Select **Save** if you make changes.
migrate How To Create Azure Sql Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-create-azure-sql-assessment.md
Previously updated : 02/12/2024 Last updated : 02/26/2024
Run an assessment as follows:
Target and pricing settings | **Currency** | The billing currency for your account. Target and pricing settings | **Discount (%)** | Any subscription-specific discounts you receive on top of the Azure offer. The default setting is 0%. Target and pricing settings | **VM uptime** | Specify the duration (days per month/hour per day) that servers/VMs run. This is useful for computing cost estimates for SQL Server on Azure VM where you're aware that Azure VMs might not run continuously. <br/> Cost estimates for servers where recommended target is *SQL Server on Azure VM* are based on the duration specified. Default is 31 days per month/24 hours per day.
- Target and pricing settings | **Azure Hybrid Benefit** | Specify whether you already have a Windows Server and/or SQL Server license or Enterprise Linux subscription. Azure Hybrid Benefit is a licensing benefit that helps you to significantly reduce the costs of running your workloads in the cloud. It works by letting you use your on-premises Software Assurance-enabled Windows Server and SQL Server licenses on Azure. For example, if you have a SQL Server license and they're covered with active Software Assurance of SQL Server Subscriptions, you can apply for the Azure Hybrid Benefit when you bring licenses to Azure.
+ Target and pricing settings | **Azure Hybrid Benefit** | Specify whether you already have a Windows Server and/or SQL Server license or Enterprise Linux subscription (RHEL and SLES). Azure Hybrid Benefit is a licensing benefit that helps you to significantly reduce the costs of running your workloads in the cloud. It works by letting you use your on-premises Software Assurance-enabled Windows Server and SQL Server licenses on Azure. For example, if you have SQL Server licenses and they're covered with active Software Assurance of SQL Server Subscriptions, you can apply for the Azure Hybrid Benefit when you bring licenses to Azure.
Assessment criteria | **Sizing criteria** | Set to *Performance-based* by default, which means Azure Migrate collects performance metrics pertaining to SQL instances and the databases managed by it to recommend an optimal-sized SQL Server on Azure VM and/or Azure SQL Database and/or Azure SQL Managed Instance configuration.<br/><br/> You can change this to *As on-premises* to get recommendations based on just the on-premises SQL Server configuration without the performance metric based optimizations. Assessment criteria | **Performance history** | Indicate the data duration on which you want to base the assessment. (Default is one day) Assessment criteria | **Percentile utilization** | Indicate the percentile value you want to use for the performance sample. (Default is 95th percentile)
migrate Resources Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/resources-faq.md
description: Get answers to common questions about the Azure Migrate service.
Previously updated : 12/12/2022 Last updated : 02/27/2022
Review the supported geographies for [public](migrate-support-matrix.md#public-c
When you create a project, you select a geography of your choice. The project and related resources are created in one of the regions in the geography, as allocated by the Azure Migrate service. See the metadata storage locations for each geography [here](migrate-support-matrix.md#public-cloud).
-Azure Migrate doesn't move or store customer data outside of the region allocated, guaranteeing data residency and resiliency in the same geography.
+Azure Migrate doesn't move or store customer data outside of the region allocated, guaranteeing data residency and resiliency in the same geography.
+
+Azure Migrate provides an additional option for creating projects and ensuring data residency in preferred regions beyond the specified geographies. To use this option, go to https://aka.ms/migrate/ProjectRegionSelection.
## Does Azure Migrate offer Backup and Disaster Recovery?
migrate Troubleshoot Assessment Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/troubleshoot-assessment-faq.md
+
+ Title: Troubleshoot assessments FAQ in Azure Migrate
+description: FAQs for Troubleshooting assessments in Azure Migrate.
++
+ms.
++ Last updated : 02/16/2024+++
+# Troubleshoot assessment - FAQ
+
+This article provides answers to some of the most common questions about troubleshooting issues with assessments. See [Troubleshoot assessment issues](troubleshoot-assessment.md) and [Supported scenarios](troubleshoot-assessment-supported-scenarios.md) for more help with troubleshooting assessments.
+
+## Why is the recommended Azure disk SKU bigger than on-premises in an Azure VM assessment?
+
+Azure VM assessment might recommend a bigger disk based on the type of assessment:
+
+- Disk sizing depends on two assessment properties: sizing criteria and storage type.
+- If the sizing criteria are **Performance-based** and the storage type is set to **Automatic**, the IOPS and throughput values of the disk are considered when identifying the target disk type (Standard HDD, Standard SSD, Premium, or Ultra disk). A disk SKU from the disk type is then recommended, and the recommendation considers the size requirements of the on-premises disk.
+- If the sizing criteria are **Performance-based** and the storage type is **Premium**, a premium disk SKU in Azure is recommended based on the IOPS, throughput, and size requirements of the on-premises disk. The same logic is used to perform disk sizing when the sizing criteria is **As on-premises** and the storage type is **Standard HDD**, **Standard SSD**, **Premium**, or **Ultra disk**.
+
+For example, say you have a 32 GB on-premises disk, but the aggregated read and write IOPS for the disk is 800 IOPS. The Azure VM assessment recommends a premium disk because of the higher IOPS requirements. It also recommends a disk SKU that can support the required IOPS and size. The nearest match in this example would be P15 (256 GB, 1100 IOPS). Even though the size required by the on-premises disk was 32 GB, the Azure VM assessment recommended a larger disk because of the high IOPS requirement of the on-premises disk.
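
The following C# sketch illustrates this matching idea. It isn't the Azure Migrate sizing algorithm itself; the premium SKU list is abbreviated and only shows how a higher IOPS requirement can push the recommendation to a larger SKU such as P15.

```csharp
// Minimal sketch (not the actual Azure Migrate sizing logic): pick the
// smallest premium disk SKU that satisfies both the size and the IOPS
// requirement of an on-premises disk. The SKU list is abbreviated.
using System;
using System.Collections.Generic;
using System.Linq;

var premiumSkus = new List<(string Name, int SizeGiB, int Iops)>
{
    ("P4", 32, 120), ("P6", 64, 240), ("P10", 128, 500),
    ("P15", 256, 1100), ("P20", 512, 2300), ("P30", 1024, 5000),
};

string Recommend(int requiredSizeGiB, int requiredIops)
{
    foreach (var sku in premiumSkus.OrderBy(s => s.SizeGiB))
    {
        if (sku.SizeGiB >= requiredSizeGiB && sku.Iops >= requiredIops)
            return sku.Name;
    }
    return "No suitable premium SKU";
}

// The example from the text: a 32 GB disk with 800 aggregated IOPS maps to P15.
Console.WriteLine(Recommend(requiredSizeGiB: 32, requiredIops: 800)); // P15
```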
+
+## Why is performance data missing for some or all VMs in my assessment report?
+
+For **Performance-based** assessment, the assessment report export says 'PercentageOfCoresUtilizedMissing' or 'PercentageOfMemoryUtilizedMissing' when the Azure Migrate appliance can't collect performance data for the on-premises VMs. Make sure to check:
+
+- If the VMs are powered on for the duration for which you're creating the assessment.
+- If only memory counters are missing and you're trying to assess Hyper-V VMs, check if you have dynamic memory enabled on these VMs. Because of a known issue, currently the Azure Migrate appliance can't collect memory utilization for such VMs.
+- If all of the performance counters are missing, ensure the port access requirements for assessment are met. Learn more about the port access requirements for [VMware](./migrate-support-matrix-vmware.md#port-access-requirements), [Hyper-V](./migrate-support-matrix-hyper-v.md#port-access), and [physical](./migrate-support-matrix-physical.md#port-access) assessments.
+
+If any of the performance counters are missing, Azure Migrate: Discovery and assessment falls back to the allocated cores/memory on-premises and recommends a VM size accordingly.
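
As a rough illustration of this fallback (not the actual implementation), the sketch below sizes from the allocated value whenever a utilization counter is missing. All names and values here are hypothetical.

```csharp
using System;

// Illustrative only: when a utilization counter is missing, size from the
// allocated (as on-premises) value instead of collected performance data.
double? collectedCpuPercent = null;   // counter missing for this server
double allocatedCores = 4;
double comfortFactor = 1.3;

double effectiveCores = collectedCpuPercent is double cpu
    ? allocatedCores * (cpu / 100.0) * comfortFactor  // performance-based path
    : allocatedCores;                                 // fallback: allocated as on-premises

Console.WriteLine($"Size for at least {effectiveCores:F1} cores");
```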
+
+## Why is performance data missing for some or all servers in my Azure VM or Azure VMware Solution assessment report?
+
+For **Performance-based** assessment, the assessment report export says **PercentageOfCoresUtilizedMissing** or **PercentageOfMemoryUtilizedMissing** when the Azure Migrate appliance can't collect performance data for the on-premises servers. Make sure to check:
+
+- If the servers are powered on for the duration for which you're creating the assessment.
+- If only memory counters are missing and you're trying to assess servers in a Hyper-V environment. In this scenario, enable dynamic memory on the servers and recalculate the assessment to reflect the latest changes. The appliance can collect memory utilization values for servers in a Hyper-V environment only when the server has dynamic memory enabled.
+- If all of the performance counters are missing, ensure that outbound connections on port 443 (HTTPS) are allowed.
+
+ > [!Note]
+ > If any of the performance counters are missing, Azure Migrate: Discovery and assessment falls back to the allocated cores/memory on-premises and recommends a VM size accordingly.
+
+## Why is performance data missing for some or all SQL instances or databases in my Azure SQL assessment?
+
+To ensure performance data is collected, make sure to check:
+
+- If the SQL servers are powered on for the duration for which you're creating the assessment.
+- If the connection status of the SQL agent in Azure Migrate is **Connected**, and also check the last heartbeat.
+- If the Azure Migrate connection status for all SQL instances is **Connected** in the discovered SQL instance pane.
+- If all of the performance counters are missing, ensure that outbound connections on port 443 (HTTPS) are allowed.
+
+If any of the performance counters are missing, the Azure SQL assessment recommends the smallest Azure SQL configuration for that instance or database.
+
+## Why is the confidence rating of my assessment low?
+
+The confidence rating is calculated for **Performance-based** assessments based on the percentage of [available data points](./concepts-assessment-calculation.md#ratings) needed to compute the assessment. An assessment could get a low confidence rating for the following reasons:
+
+- You didn't profile your environment for the duration for which you're creating the assessment. For example, if you're creating an assessment with performance duration set to one week, you need to wait for at least a week after you start the discovery for all the data points to get collected. If you can't wait for the duration, change the performance duration to a shorter period and recalculate the assessment.
+- The assessment isn't able to collect the performance data for some or all the servers in the assessment period. For a high confidence rating, ensure that:
+ - Servers are powered on for the duration of the assessment.
+ - Outbound connections on port 443 are allowed.
+ - For Hyper-V Servers, dynamic memory is enabled.
+ - The connection status of agents in Azure Migrate is **Connected**. Also check the last heartbeat.
+ - For Azure SQL assessments, Azure Migrate connection status for all SQL instances is **Connected** in the discovered SQL instance pane.
+
+ Recalculate the assessment to reflect the latest changes in confidence rating.
+
+- For Azure VM and Azure VMware Solution assessments, a few servers were created after discovery had started. For example, say you're creating an assessment for the performance history of the past month, but a few servers were created in the environment only a week ago. In this case, the performance data for the new servers won't be available for the entire duration and the confidence rating would be low. [Learn more](./concepts-assessment-calculation.md#confidence-ratings-performance-based).
+- For Azure SQL assessments, a few SQL instances or databases were created after discovery had started. For example, say you're creating an assessment for the performance history of the past month, but a few SQL instances or databases were created in the environment only a week ago. In this case, the performance data for the new instances or databases won't be available for the entire duration and the confidence rating would be low. [Learn more](./concepts-azure-sql-assessment-calculation.md#confidence-ratings).
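
The sketch below shows, in simplified form, how the share of collected data points could map to a star rating. The thresholds are illustrative assumptions; refer to the ratings article linked above for the actual bands.

```csharp
using System;

// Suppose discovery expected one sample every 10 minutes over the assessment
// window, but the server was powered off (or unreachable) part of the time.
int expectedDataPoints = 7 * 24 * 6;   // one week at 10-minute samples
int collectedDataPoints = 2 * 24 * 6;  // only two days actually collected

double availabilityPercent = 100.0 * collectedDataPoints / expectedDataPoints;

// Illustrative star bands, not the documented values.
int stars = availabilityPercent switch
{
    <= 20 => 1,
    <= 40 => 2,
    <= 60 => 3,
    <= 80 => 4,
    _ => 5
};

Console.WriteLine($"{availabilityPercent:F0}% of data points available -> {stars}-star confidence");
```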
+
+## Why is my RAM utilization greater than 100%?
+
+By design, in Hyper-V, if the maximum memory provisioned is less than what the VM requires, the assessment shows memory utilization as more than 100%.
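
A small illustration of why this happens, with made-up numbers: utilization is reported against the provisioned maximum, so demand above that maximum shows as more than 100%.

```csharp
using System;

// Illustration only: with Hyper-V dynamic memory, demanded memory can exceed
// the configured maximum, so utilization reported against that maximum
// can be greater than 100%.
double maxProvisionedGB = 4.0;  // maximum memory configured for the VM
double demandedGB = 5.0;        // memory the guest actually asked for

double utilizationPercent = demandedGB / maxProvisionedGB * 100.0;
Console.WriteLine($"{utilizationPercent:F0}% memory utilization"); // 125%
```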
+
+## Is the operating system license included in an Azure VM assessment?
+
+An Azure VM assessment currently considers the operating system license cost only for Windows servers. License costs for Linux servers aren't currently considered.
+
+## How does performance-based sizing work in an Azure VM assessment?
+
+An Azure VM assessment continuously collects performance data of on-premises servers and uses it to recommend the VM SKU and disk SKU in Azure. [Learn more](concepts-assessment-calculation.md#calculate-sizing-performance-based) about how performance-based data is collected.
+
+## Can I migrate my disks to an Ultra disk by using Azure Migrate?
+
+No. Currently, both Azure Migrate and Azure Site Recovery don't support migration to Ultra disks. [Learn more](../virtual-machines/disks-enable-ultra-ssd.md?tabs=azure-portal#deploy-an-ultra-disk) about deploying an Ultra disk.
+
+## Why are the provisioned IOPS and throughput in my Ultra disk more than my on-premises IOPS and throughput?
+
+As per the [official pricing page](https://azure.microsoft.com/pricing/details/managed-disks/), Ultra disk is billed based on the provisioned size, provisioned IOPS, and provisioned throughput. For example, if you provisioned a 200-GiB Ultra disk with 20,000 IOPS and 1,000 MB/second and deleted it after 20 hours, it will map to the disk size offer of 256 GiB. You'll be billed for 256 GiB, 20,000 IOPS, and 1,000 MB/second for 20 hours.
+
+IOPS to be provisioned = (Throughput discovered) * 1024/256
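
A worked example of the formula, assuming the discovered throughput is expressed in MB/s and converted at a 256 KB I/O size (that interpretation is an assumption for illustration, not an official statement of the calculation):

```csharp
using System;

// Worked example of the formula above, assuming throughput in MB/s and a
// 256 KB I/O size: IOPS = throughput * 1024 / 256.
double discoveredThroughputMBps = 1000;

double provisionedIops = discoveredThroughputMBps * 1024 / 256;
Console.WriteLine($"IOPS to be provisioned: {provisionedIops:F0}"); // 4000
```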
+
+## Does the Ultra disk recommendation consider latency?
+
+No, currently only disk size, total throughput, and total IOPS are used for sizing and costing.
+
+## I can see M series supports Ultra disk, but in my assessment where Ultra disk was recommended, it says "No VM found for this location"
+
+This result is possible because not all VM sizes that support Ultra disks are present in all Ultra disk supported regions. Change the target assessment region to get the VM size for this server.
+
+## Why is my assessment showing a warning that it was created with an invalid offer?
+
+Your assessment was created with an offer that is no longer valid and hence, the **Edit** and **Recalculate** buttons are disabled. You can create a new assessment with any of the valid offers - *Pay as you go*, *Pay as you go Dev/Test*, and *Enterprise Agreement*. You can also use the **Discount(%)** field to specify any custom discount on top of the Azure offer. [Learn more](how-to-create-assessment.md).
+
+## Why is my assessment showing a warning that it was created with a target Azure location that has been deprecated?
+
+Your assessment was created with an Azure region that has been deprecated and hence the **Edit** and **Recalculate** buttons are disabled. You can [create a new assessment](how-to-create-assessment.md) with any of the valid target locations. [Learn more](concepts-assessment-calculation.md#whats-in-an-azure-vm-assessment).
+
+## Why is my assessment showing a warning that it was created with an invalid combination of Reserved Instances, VM uptime, and Discount (%)?
+
+When you select **Reserved Instances**, the **Discount (%)** and **VM uptime** properties aren't applicable. As your assessment was created with an invalid combination of these properties, the **Edit** and **Recalculate** buttons are disabled. Create a new assessment. [Learn more](./concepts-assessment-calculation.md#whats-an-assessment).
+
+## Why are some of my assessments marked as "to be upgraded to latest assessment version"?
+
+Recalculate your assessment to view the upgraded Azure SQL assessment experience to identify the ideal migration target for your SQL deployments across Azure SQL Managed Instances, SQL Server on Azure VM, and Azure SQL DB:
+ - We recommend migrating instances to *SQL Server on Azure VM* as per the Azure best practices.
+ - *Right sized Lift and Shift* - Server to *SQL Server on Azure VM*. We recommend this when SQL Server credentials are not available.
+ - Enhanced user-experience that covers readiness and cost estimates for multiple migration targets for SQL deployments in one assessment.
+
+We recommend that you export your existing assessment before recalculating.
+
+## I don't see performance data for some network adapters on my physical servers
+
+This issue can happen if the physical server has Hyper-V virtualization enabled. On these servers, because of a product gap, Azure Migrate currently discovers both the physical and virtual network adapters. The network throughput is captured only on the virtual network adapters discovered.
+
+## The recommended Azure VM SKU for my physical server is oversized
+
+This issue can happen if the physical server has Hyper-V virtualization enabled. On these servers, Azure Migrate currently discovers both the physical and virtual network adapters. As a result, the number of network adapters discovered is higher than the actual number. The Azure VM assessment picks an Azure VM that can support the required number of network adapters, which can potentially result in an oversized VM. [Learn more](./concepts-assessment-calculation.md#calculating-sizing) about the impact of the number of network adapters on sizing. This product gap will be addressed going forward.
+
+## The readiness category is marked "Not ready" for my physical server
+
+The readiness category might be incorrectly marked as **Not ready** in the case of a physical server that has Hyper-V virtualization enabled. On these servers, because of a product gap, Azure Migrate currently discovers both the physical and virtual adapters. As a result, the number of network adapters discovered is higher than the actual number. In both **As on-premises** and **Performance-based** assessments, the Azure VM assessment picks an Azure VM that can support the required number of network adapters. If the number of network adapters is discovered to be higher than 32, the maximum number of NICs supported on Azure VMs, the server will be marked **Not ready**. [Learn more](./concepts-assessment-calculation.md#calculating-sizing) about the impact of number of NICs on sizing.
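
As a simplified illustration of this check (not the actual readiness engine), the sketch below flags a server when the discovered NIC count exceeds the 32-NIC maximum mentioned above.

```csharp
using System;

// Simple illustration: if the discovered NIC count exceeds the largest NIC
// count supported by any Azure VM size (32), the server is flagged as not ready.
const int MaxNicsSupportedOnAzureVms = 32;

int discoveredNicCount = 40; // inflated by double-counting physical and virtual adapters

string readiness = discoveredNicCount > MaxNicsSupportedOnAzureVms
    ? "Not ready"
    : "Ready (subject to other checks)";

Console.WriteLine(readiness);
```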
+
+## The number of discovered NICs is higher than actual for physical servers
+
+This issue can happen if the physical server has Hyper-V virtualization enabled. On these servers, Azure Migrate currently discovers both the physical and virtual adapters. As a result, the number of NICs discovered is higher than the actual number.
+
+## Capture network traffic
+
+To collect network traffic logs:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Select F12 to start Developer Tools. If needed, clear the **Clear entries on navigation** setting.
+1. Select the **Network** tab, and start capturing network traffic:
+ - In Chrome, select **Preserve log**. The recording should start automatically. A red circle indicates that traffic is being captured. If the red circle doesn't appear, select the black circle to start.
+ - In Microsoft Edge and Internet Explorer, recording should start automatically. If it doesn't, select the green play button.
+1. Try to reproduce the error.
+1. After you've encountered the error while recording, stop recording and save a copy of the recorded activity:
+ - In Chrome, right-click and select **Save as HAR with content**. This action compresses and exports the logs as a .har file.
+ - In Microsoft Edge or Internet Explorer, select the **Export captured traffic** option. This action compresses and exports the log.
+1. Select the **Console** tab to check for any warnings or errors. To save the console log:
+ - In Chrome, right-click anywhere in the console log. Select **Save as** to export, and zip the log.
+ - In Microsoft Edge or Internet Explorer, right-click the errors and select **Copy all**.
+1. Close Developer Tools.
+
+## Where is the Operating System data in my assessment discovered from?
+
+- For VMware VMs, by default, it's the operating system data provided by the vCenter Server.
+ - For VMware Linux VMs, if application discovery is enabled, the OS details are fetched from the guest VM. To check which OS details are in the assessment, go to the **Discovered servers** view, and hover over the value in the **Operating system** column. In the text that pops up, you can see whether the OS data was gathered from the vCenter Server or from the guest VM by using the VM credentials.
+ - For Windows VMs, the operating system details are always fetched from the vCenter Server.
+- For Hyper-V VMs, the operating system data is gathered from the Hyper-V host.
+- For physical servers, it is fetched from the server.
+
+## Next steps
+
+[Create](how-to-create-assessment.md) or [customize](how-to-modify-assessment.md) an assessment.
migrate Troubleshoot Assessment Supported Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/troubleshoot-assessment-supported-scenarios.md
+
+ Title: Troubleshooting supported scenarios for Assessments
+description: Get help for resolving issues with assessments in supported scenarios using Azure Migrate.
++
+ms.
++ Last updated : 02/16/2024+++
+# Troubleshoot assessment - supported scenarios
+
+This article describes supported scenarios for troubleshooting assessments. See [Troubleshoot assessment issues](troubleshoot-assessment.md) and the [FAQ](troubleshoot-assessment-faq.md) for answers to common questions about troubleshooting assessment issues.
+
+## Scenario: Unknown migration tool for import-based Azure VMware Solution assessment
+
+### Cause
+
+For servers imported via a CSV file, the default migration tool in an Azure VMware Solution assessment is unknown.
+
+### Resolution
+
+For servers in a VMware environment, use the VMware Hybrid Cloud Extension (HCX) solution. [Learn more](../azure-vmware/configure-vmware-hcx.md).
+
+## Scenario: Linux VMs are "conditionally ready" in an Azure VM assessment
+
+### Cause
+
+In the case of VMware and Hyper-V VMs, an Azure VM assessment marks Linux VMs as **conditionally ready** because of a known gap.
+
+- The gap prevents it from detecting the minor version of the Linux OS installed on the on-premises VMs.
+- For example, for RHEL 6.10, currently an Azure VM assessment detects only RHEL 6 as the OS version. This behavior occurs because the vCenter Server and the Hyper-V host don't provide the kernel version for Linux VM operating systems.
+- Since Azure endorses only specific versions of Linux, the Linux VMs are currently marked as **conditionally ready** in an Azure VM assessment.
+- You can determine whether the Linux OS running on the on-premises VM is endorsed in Azure by reviewing [Azure Linux support](../virtual-machines/linux/endorsed-distros.md).
+- After you've verified the endorsed distribution, you can ignore this warning.
+
+### Resolution
+
+This gap can be addressed by enabling [application discovery](./how-to-discover-applications.md) on the VMware VMs. An Azure VM assessment uses the operating system detected from the VM by using the guest credentials provided. This Operating System data identifies the right OS information in the case of both Windows and Linux VMs.
+
+## Scenario: Operating system version not available
+
+### Cause
+
+For physical servers, the operating system minor version information should be available. If it isn't available, contact Microsoft Support. For servers in a VMware environment, Azure Migrate uses the operating system information specified for the VM in the vCenter Server. But vCenter Server doesn't provide the minor version for operating systems.
+
+### Resolution
+
+To discover the minor version, set up [application discovery](./how-to-discover-applications.md). For Hyper-V VMs, operating system minor version discovery isn't supported.
+
+## Scenario: Azure SKUs bigger than on-premises in an Azure VM assessment
+
+### Cause
+
+An Azure VM assessment might recommend Azure VM SKUs with more cores and memory than the current on-premises allocation based on the type of assessment:
+
+- The VM SKU recommendation depends on the assessment properties.
+- The recommendation is affected by the type of assessment you perform in an Azure VM assessment. The two types are **Performance-based** or **As on-premises**.
+- For performance-based assessments, the Azure VM assessment considers the utilization data of the on-premises VMs (CPU, memory, disk, and network utilization) to determine the right target VM SKU for your on-premises VMs. It also adds a comfort factor when determining effective utilization.
+- For on-premises sizing, performance data isn't considered, and the target SKU is recommended based on on-premises allocation.
+
+### Resolution
+
+Let's look at an example recommendation:
+
+We have an on-premises VM with 4 cores and 8 GB of memory, with 50% CPU utilization and 50% memory utilization, and a specified comfort factor of 1.3.
+
+- If the assessment is **As on-premises**, an Azure VM SKU with 4 cores and 8 GB of memory is recommended.
+- If the assessment is **Performance-based**, based on effective CPU and memory utilization (50% of 4 cores * 1.3 = 2.6 cores and 50% of 8 GB memory * 1.3 = 5.2 GB memory), the cheapest VM SKU of 4 cores (nearest supported core count) and 8 GB of memory (nearest supported memory size) is recommended.
+- [Learn more](concepts-assessment-calculation.md#types-of-assessments) about assessment sizing.
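
The following sketch reproduces this worked example. The supported core counts and memory sizes are abbreviated placeholders, not the real Azure VM SKU catalog.

```csharp
using System;

// Sketch of the worked example above: effective requirement =
// allocated * utilization * comfort factor, then round up to the nearest
// supported core count / memory size (supported values abbreviated here).
int allocatedCores = 4;
double allocatedMemoryGB = 8.0;
double cpuUtilization = 0.50, memUtilization = 0.50, comfortFactor = 1.3;

double effectiveCores = allocatedCores * cpuUtilization * comfortFactor;       // 2.6
double effectiveMemoryGB = allocatedMemoryGB * memUtilization * comfortFactor; // 5.2

int[] supportedCores = { 1, 2, 4, 8, 16 };
double[] supportedMemoryGB = { 2, 4, 8, 16, 32 };

int targetCores = Array.Find(supportedCores, c => c >= effectiveCores);             // 4
double targetMemoryGB = Array.Find(supportedMemoryGB, m => m >= effectiveMemoryGB); // 8

Console.WriteLine($"Recommend the cheapest SKU with {targetCores} cores and {targetMemoryGB} GB memory");
```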
++
+## Next steps
+
+[Create](how-to-create-assessment.md) or [customize](how-to-modify-assessment.md) an assessment.
migrate Troubleshoot Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/troubleshoot-assessment.md
Title: Troubleshoot assessments in Azure Migrate
-description: Get help with assessment in Azure Migrate.
+ Title: Common issues in Azure Migrate assessments
+description: Get help with assessment issues in Azure Migrate.
ms. Previously updated : 01/17/2023- Last updated : 02/20/2024+
-# Troubleshoot assessment
+# Common issues in Azure Migrate assessments
-This article helps you troubleshoot issues with assessment and dependency visualization with [Azure Migrate: Discovery and assessment](migrate-services-overview.md#azure-migrate-discovery-and-assessment-tool).
+This article helps you troubleshoot issues with assessment and dependency visualization with [Azure Migrate: Discovery and assessment](migrate-services-overview.md#azure-migrate-discovery-and-assessment-tool). See [Supported scenarios](troubleshoot-assessment-supported-scenarios.md) for troubleshooting assessment scenarios and the [FAQ](troubleshoot-assessment-faq.md) for answers to common questions about troubleshooting assessment issues.
## Common assessment errors Assessment service uses the [configuration data](discovered-metadata.md) and the [performance data](concepts-assessment-calculation.md#how-does-the-appliance-calculate-performance-data) for calculating the assessments. The data is fetched by the Azure Migrate appliance at specific intervals in case of appliance-based discovery and assessments. The following table summarizes the errors encountered while fetching the data by the assessment service.
-**Error** | **Cause** | **Action**
- | |
-60001:UnableToConnectToPhysicalServer | Either the prerequisites to connect to the server have not been met or there are network issues in connecting to the server, for instance some proxy settings. | - Ensure that the server meets the prerequisites and port access requirements. <br/><br/> - Add the IP addresses of the remote machines (discovered servers) to the WinRM TrustedHosts list on the Azure Migrate appliance and retry the operation. This is to allow remote inbound connections on servers: *Windows: WinRM port 5985 (HTTP) and Linux: SSH port 22 (TCP)*. <br/><br/> - Ensure that you have chosen the correct authentication method on the appliance to connect to the server. <br/><br/> - If the issue persists, submit a Microsoft support case, providing the appliance machine ID (available in the footer of the appliance configuration manager).
-60002: InvalidServerCredentials | Unable to connect to server due to incorrect credentials on the appliance, or the credentials previously provided have expired or the server credentials have changed. | - Ensure that you have provided the correct credentials for the server on the appliance. You can check that by trying to connect to the server using those credentials. <br/><br/> - If the credentials added are incorrect or have expired, edit the credentials on the appliance and revalidate the added servers. If the validation succeeds, the issue is resolved. <br/><br/> - If the issue persists, submit a Microsoft support case, providing the appliance machine ID (available in the footer of the appliance configuration manager).
-60004: NoPerfDataAvailableForServers | The appliance is unable to fetch the required performance data from the server due to network issues or the credentials provided on the appliance do not have enough permissions to fetch the metadata. | - Ensure that the server is accessible from the appliance. <br/><br/> - Ensure that the guest credentials provided on the appliance have [required permissions](migrate-support-matrix-physical.md#physical-server-requirements). <br/><br/> - If the issue persists, submit a Microsoft support case, providing the appliance machine ID (available in the footer of the appliance configuration manager).
-60005: SSHOperationTimeout | The operation took longer than expected either due to network latency issues or due to the lack of latest updates on Linux server.| - Ensure that the impacted server has the latest kernel and OS updates installed. <br/><br/> - Ensure that there is no network latency between the appliance and the server. It is recommended to have the appliance and source server on the same domain to avoid latency issues.<br/><br/> - Connect to the impacted server from the appliance and run the commands documented here to check if they return null or empty data. <br/><br/> - If the issue persists, submit a Microsoft support case providing the appliance machine ID (available in the footer of the appliance configuration manager).
-60006: ServerAccessDenied | The operation could not be completed due to forbidden access on the server. The guest credentials provided do not have enough permissions to access the servers. |
-60011: ServerWindowsWMICallFailed | WMI call failed due to WMI service failure. This might be a transient error, if the server is unreachable due to network issue or in case of physical sever the server might be switched off. | - Please ensure WinRM is running and the server is reachable from the appliance VM. <br/><br/> - Ensure that the server is switched on.<br/><br/> - For troubleshooting with physical servers, follow the [instructions](migrate-support-matrix-physical.md#physical-server-requirements).<br/><br/> - If the issue persists, submit a Microsoft support case providing the appliance machine ID (available in the footer of the appliance configuration manager).
-10004: CredentialNotProvidedForGuestOSType | The credentials for the server OS type weren't added on the appliance. | - Ensure that you add the credentials for the OS type of the affected server on the appliance.<br/><br/> - You can now add multiple server credentials on the appliance.
-751: Unable to connect to Server | Unable to connect to the server due to connectivity issues. | Resolve the connectivity issue mentioned in the error message.
-754: Performance Data not available | Azure Migrate is unable to collect performance data if the vCentre is not configured to give out the performance data | Configure the statistics level on VCentre server to 3 to make the performance data available. Wait for a day before running the assessment for the data to populate.
-757: Virtual Machine not found | The Azure Migrate service is unable to locate the specified virtual machine. This may occur if the virtual machine has been deleted by the administrator on the VMware environment.| Please verify that the virtual machine still exists in the VMware environment.
-758: Request timeout while fetching Performance data | Azure Migrate assessment service is unable to retrieve performance data. This could happen if the vCenter server is not reachable. | - Please verify the vCenter server credentials are correct.<br/><br/> - Ensure that the server is reachable before attempting to retrieve performance data again.<br/><br/> - If the issue persists, submit a Microsoft support case, providing the appliance machine ID (available in the footer of the appliance configuration manager).
-760: Unable to get Performance counters | Azure Migrate assessment service is unable to retrieve performance counters. This can happen due to multiple reasons. Check the error message to find the exact reason.| - Ensure that you resolve the error flagged in the error message.<br/><br/> - If the issue persists, submit a Microsoft support case, providing the appliance machine ID (available in the footer of the appliance configuration manager).
-8002: Virtual Machine could not be found | Azure Migrate discovery service could not find the virtual machine. This could happen if the virtual machine is deleted or its UUID has changed. | - Ensure that the on-premises virtual machine exists and then restart the job. <br/><br/> - If the issue persists, submit a Microsoft support case, providing the appliance machine ID (available in the footer of the appliance configuration manager).
-9003: Operating system type running on the server isn't supported. | The operating system running on the server isn't Windows or Linux. | Only Windows and Linux OS types are supported. If the server is running Windows or Linux OS, check the operating system type specified in vCenter Server.
-9004: Server isn't in a running state. | The server is in a powered-off state. | Ensure that the server is in a running state.
-9010: The server is powered off. | The server is in a powered-off state. | Ensure that the server is in a running state.
-9014: Unable to retrieve the file containing the discovered metadata because of an error encountered on the ESXi host | The error details will be mentioned with the error.| Ensure that port 443 is open on the ESXi host on which the server is running. Learn more on how to remediate the issue.
-9015: The vCenter Server user account provided for server discovery doesn't have guest operations privileges enabled. | The required privileges of guest operations haven't been enabled on the vCenter Server user account. | Ensure that the vCenter Server user account has privileges enabled for **Virtual Machines** > **Guest Operations** to interact with the server and pull the required data. [Learn more](troubleshoot-discovery.md#error-9014-httpgetrequesttoretrievefilefailed) on how to set up the vCenter Server account with required privileges.
-9022: The access is denied to run the Get-WmiObject cmdlet on the server. | The role associated with the credentials provided on the appliance or a group policy on-premises is restricting access to the WMI object. You encounter this issue when you try the following credentials on the server: `FriendlyNameOfCredentials`. | Check if the credentials provided on the appliance have created file administrator privileges and have WMI enabled.<br/><br/> If the credentials on the appliance don't have the required permissions, either provide another set of credentials or edit an existing one. (Find the friendly name of the credentials tried by Azure Migrate in the possible causes.) <br/><br/> [Learn more](tutorial-discover-vmware.md#prepare-vmware) on how to remediate the issue.
+### Error Code: 60001:UnableToConnectToPhysicalServer
+#### Cause
+
+Either the prerequisites to connect to the server haven't been met, or there are network issues in connecting to the server, for instance, proxy settings.
+
+#### Recommended Action
+
+- Ensure that the server meets the prerequisites and port access requirements.
+- Add the IP addresses of the remote machines (discovered servers) to the WinRM TrustedHosts list on the Azure Migrate appliance and retry the operation. This is to allow remote inbound connections on servers: *Windows: WinRM port 5985 (HTTP) and Linux: SSH port 22 (TCP)*.
+- Ensure that you have chosen the correct authentication method on the appliance to connect to the server.
+- If the issue persists, submit a Microsoft support case, providing the appliance machine ID (available in the footer of the appliance configuration manager).
+
+### Error Code: 60002: InvalidServerCredentials
+
+#### Cause
+
+Unable to connect to server due to incorrect credentials on the appliance, or the credentials previously provided have expired or the server credentials have changed.
+
+#### Recommended Action
+
+- Ensure that you have provided the correct credentials for the server on the appliance. You can check that by trying to connect to the server using those credentials.
+
+- If the credentials added are incorrect or have expired, edit the credentials on the appliance and revalidate the added servers. If the validation succeeds, the issue is resolved.
+
+- If the issue persists, submit a Microsoft support case, providing the appliance machine ID (available in the footer of the appliance configuration manager).
+
+### Error Code: 60004: NoPerfDataAvailableForServers
+
+#### Cause
+
+The appliance is unable to fetch the required performance data from the server due to network issues or the credentials provided on the appliance do not have enough permissions to fetch the metadata.
+
+#### Recommended Action
+
+- Ensure that the guest credentials provided on the appliance have [required permissions](migrate-support-matrix-physical.md#physical-server-requirements).
+- If the issue persists, submit a Microsoft support case, providing the appliance machine ID (available in the footer of the appliance configuration manager).
+
+### Error Code: 60005: SSHOperationTimeout
+
+#### Cause
+
+The operation took longer than expected, either due to network latency issues or because the latest updates aren't installed on the Linux server.
+
+#### Recommended Action
+
+- Ensure that the impacted server has the latest kernel and OS updates installed.
+
+- Ensure that there is no network latency between the appliance and the server. It is recommended to have the appliance and source server on the same domain to avoid latency issues.
+
+- Connect to the impacted server from the appliance and run the commands documented here to check if they return null or empty data.
+
+- If the issue persists, submit a Microsoft support case providing the appliance machine ID (available in the footer of the appliance configuration manager).
+
+### Error Code: 60006: ServerAccessDenied
+
+#### Cause
+
+The operation could not be completed due to forbidden access on the server. The guest credentials provided do not have enough permissions to access the servers.
+
+### Error Code: 60011: ServerWindowsWMICallFailed
+
+#### Cause
+
+The WMI call failed due to a WMI service failure. This might be a transient error if the server is unreachable due to a network issue, or in the case of a physical server, the server might be switched off.
+
+#### Recommended Action
+
+- Ensure WinRM is running and the server is reachable from the appliance VM.
+- Ensure that the server is switched on.
+- For troubleshooting with physical servers, follow the [instructions](migrate-support-matrix-physical.md#physical-server-requirements).
+- If the issue persists, submit a Microsoft support case providing the appliance machine ID (available in the footer of the appliance configuration manager).
+
+### Error Code: 10004: CredentialNotProvidedForGuestOSType
+
+#### Cause
+
+The credentials for the server OS type weren't added on the appliance.
+
+#### Recommended Action
+
+- Ensure that you add the credentials for the OS type of the affected server on the appliance.
+- You can now add multiple server credentials on the appliance.
+
+### Error Code: 751: Unable to connect to Server
+
+#### Cause
+
+Unable to connect to the server due to connectivity issues.
+
+#### Recommended Action
+
+Resolve the connectivity issue mentioned in the error message.
+
+### Error Code: 754: Performance Data not available
+
+#### Cause
+
+Azure Migrate is unable to collect performance data if the vCenter Server is not configured to give out the performance data.
+
+#### Recommended Action
+
+Configure the statistics level on the vCenter Server to 3 to make the performance data available. Wait for a day before running the assessment so that the data can populate.
+
+### Error Code: 757: Virtual Machine not found
+
+#### Cause
+
+The Azure Migrate service is unable to locate the specified virtual machine. This may occur if the virtual machine has been deleted by the administrator on the VMware environment.
+
+#### Recommended Action
+
+Verify that the virtual machine still exists in the VMware environment.
+
+### Error Code: 758: Request timeout while fetching Performance data
+
+#### Cause
+
+Azure Migrate assessment service is unable to retrieve performance data. This could happen if the vCenter server is not reachable.
+
+#### Recommended Action
+
+- Verify the vCenter server credentials are correct.
+- Ensure that the server is reachable before attempting to retrieve performance data again.
+- If the issue persists, submit a Microsoft support case, providing the appliance machine ID (available in the footer of the appliance configuration manager).
+
+### Error Code: 760: Unable to get Performance counters
+
+#### Cause
+
+Azure Migrate assessment service is unable to retrieve performance counters. This can happen due to multiple reasons. Check the error message to find the exact reason.
+
+#### Recommended Action
+
+- Ensure that you resolve the error flagged in the error message.
+- If the issue persists, submit a Microsoft support case, providing the appliance machine ID (available in the footer of the appliance configuration manager).
+
+### Error Code: 8002: Virtual Machine could not be found
+
+#### Cause
+Azure Migrate discovery service could not find the virtual machine. This could happen if the virtual machine is deleted or its UUID has changed.
+
+#### Recommended Action
+
+- Ensure that the on-premises virtual machine exists and then restart the job.
+- If the issue persists, submit a Microsoft support case, providing the appliance machine ID (available in the footer of the appliance configuration manager).
+
+### Error Code: 9003: Operating system type running on the server isn't supported.
+
+#### Cause
+
+The operating system running on the server isn't Windows or Linux.
+
+#### Recommended Action
+
+Only Windows and Linux OS types are supported. If the server is running Windows or Linux OS, check the operating system type specified in vCenter Server.
+
+### Error Code: 9004: Server isn't in a running state.
+
+#### Cause
+
+The server is in a powered-off state.
+
+#### Recommended Action
+
+Ensure that the server is in a running state.
+
+### Error Code: 9010: The server is powered off.
+
+#### Cause
+
+The server is in a powered-off state.
+
+#### Recommended Action
+
+Ensure that the server is in a running state.
+
+### Error Code: 9014: Unable to retrieve the file containing the discovered metadata because of an error encountered on the ESXi host
+
+#### Cause
+
+The error details are included with the error message.
+
+#### Recommended Action
+
+Ensure that port 443 is open on the ESXi host on which the server is running. Learn more on how to remediate the issue.
+
+### Error Code: 9015: The vCenter Server user account provided for server discovery doesn't have guest operations privileges enabled.
+
+#### Cause
+
+The required privileges of guest operations haven't been enabled on the vCenter Server user account.
+
+#### Recommended Action
+
+Ensure that the vCenter Server user account has privileges enabled for **Virtual Machines** > **Guest Operations** to interact with the server and pull the required data. Learn more on how to set up the vCenter Server account with required privileges.
+
+### Error Code: 9022: The access is denied to run the Get-WmiObject cmdlet on the server.
+
+#### Cause
+
+The role associated with the credentials provided on the appliance or a group policy on-premises is restricting access to the WMI object. You encounter this issue when you try the following credentials on the server: `FriendlyNameOfCredentials`.
+
+#### Recommended Action
+
+Check if the credentials provided on the appliance have created file administrator privileges and have WMI enabled.
+
+If the credentials on the appliance don't have the required permissions, either provide another set of credentials or edit an existing one. (Find the friendly name of the credentials tried by Azure Migrate in the possible causes.)
+
+[Learn more](tutorial-discover-vmware.md#prepare-vmware) on how to remediate the issue.
## Azure VM assessment readiness issues
-This table lists help for fixing the following assessment readiness issues.
-
-**Issue** | **Fix**
- |
-Unsupported boot type | Azure does not support UEFI boot type for VMs with the Windows Server 2003/Windows Server 2003 R2/Windows Server 2008/Windows Server 2008 R2 operating systems. Check the list of operating systems that support UEFI-based machines [here](./common-questions-server-migration.md#which-operating-systems-are-supported-for-migration-of-uefi-based-machines-to-azure).
-Conditionally supported Windows operating system | The operating system has passed its end-of-support date and needs a Custom Support Agreement for [support in Azure](/troubleshoot/azure/virtual-machines/server-software-support). Consider upgrading before you migrate to Azure. Review information about [preparing servers running Windows Server 2003](prepare-windows-server-2003-migration.md) for migration to Azure.
-Unsupported Windows operating system | Azure supports only [selected Windows OS versions](/troubleshoot/azure/virtual-machines/server-software-support). Consider upgrading the server before you migrate to Azure.
-Conditionally endorsed Linux OS | Azure endorses only [selected Linux OS versions](../virtual-machines/linux/endorsed-distros.md). Consider upgrading the server before you migrate to Azure. [Learn more](#linux-vms-are-conditionally-ready-in-an-azure-vm-assessment).
-Unendorsed Linux OS | The server might start in Azure, but Azure provides no operating system support. Consider upgrading to an [endorsed Linux version](../virtual-machines/linux/endorsed-distros.md) before you migrate to Azure.
-Unknown operating system | The operating system of the VM was specified as **Other** in vCenter Server or could not be identified as a known OS in Azure Migrate. This behavior blocks Azure Migrate from verifying the Azure readiness of the VM. Ensure that the operating system is [supported](./migrate-support-matrix-vmware-migration.md#azure-vm-requirements) by Azure before you migrate the server.
-Unsupported bit version | VMs with a 32-bit operating system might boot in Azure, but we recommend that you upgrade to 64-bit before you migrate to Azure.
-Requires a Microsoft Visual Studio subscription | The server is running a Windows client operating system, which is supported only through a Visual Studio subscription.
-VM not found for the required storage performance | The storage performance (input/output operations per second (IOPS) and throughput) required for the server exceeds Azure VM support. Reduce storage requirements for the server before migration.
-VM not found for the required network performance | The network performance (in/out) required for the server exceeds Azure VM support. Reduce the networking requirements for the server.
-VM not found in the specified location | Use a different target location before migration.
-One or more unsuitable disks | One or more disks attached to the VM don't meet Azure requirements.<br><br> Azure Migrate: Discovery and assessment assesses the disks based on the disk limits for Ultra disks (64 TB).<br><br> For each disk attached to the VM, make sure that the size of the disk is < 64 TB (supported by Ultra SSD disks).<br><br> If it isn't, reduce the disk size before you migrate to Azure, or use multiple disks in Azure and [stripe them together](../virtual-machines/premium-storage-performance.md#disk-striping) to get higher storage limits. Make sure that the performance (IOPS and throughput) needed by each disk is supported by [Azure managed virtual machine disks](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-storage-limits).
-One or more unsuitable network adapters | Remove unused network adapters from the server before migration.
-Disk count exceeds limit | Remove unused disks from the server before migration.
-Disk size exceeds limit | Azure Migrate: Discovery and assessment supports disks with up to 64 TB size (Ultra disks). Shrink disks to less than 64 TB before migration, or use multiple disks in Azure and [stripe them together](../virtual-machines/premium-storage-performance.md#disk-striping) to get higher storage limits.
-Disk unavailable in the specified location | Make sure the disk is in your target location before you migrate.
-Disk unavailable for the specified redundancy | The disk should use the redundancy storage type defined in the assessment settings (LRS by default).
-Couldn't determine disk suitability because of an internal error | Try creating a new assessment for the group.
-VM with required cores and memory not found | Azure couldn't find a suitable VM type. Reduce the memory and number of cores of the on-premises server before you migrate.
-Couldn't determine VM suitability because of an internal error | Try creating a new assessment for the group.
-Couldn't determine suitability for one or more disks because of an internal error | Try creating a new assessment for the group.
-Couldn't determine suitability for one or more network adapters because of an internal error | Try creating a new assessment for the group.
-No VM size found for offer currency Reserved Instance (RI) | Server marked **not suitable** because the VM size wasn't found for the selected combination of RI, offer, and currency. Edit the assessment properties to choose the valid combinations and recalculate the assessment.
+This section helps you fix the following assessment readiness issues.
-## Azure VMware Solution (AVS) assessment readiness issues
+### Issue: Unsupported boot type
-This table lists help for fixing the following assessment readiness issues.
+#### Fix
-**Issue** | **Fix**
- |
-Unsupported IPv6 | Only applicable to Azure VMware Solution assessments. Azure VMware Solution doesn't support IPv6 internet addresses. Contact the Azure VMware Solution team for remediation guidance if your server is detected with IPv6.
-Unsupported OS | Support for certain Operating System versions has been deprecated by VMware and the assessment recommends you to upgrade the operating system before migrating to Azure VMware Solution. [Learn more](https://www.vmware.com/resources/compatibility/search.php?deviceCategory=software).
+Azure does not support UEFI boot type for VMs with the Windows Server 2003/Windows Server 2003 R2/Windows Server 2008/Windows Server 2008 R2 operating systems. Check the list of operating systems that support UEFI-based machines [here](./common-questions-server-migration.md#which-operating-systems-are-supported-for-migration-of-uefi-based-machines-to-azure).
+### Issue: Conditionally supported Windows operating system
-## Suggested migration tool in an import-based Azure VMware Solution assessment is unknown
+#### Fix
-For servers imported via a CSV file, the default migration tool in an Azure VMware Solution assessment is unknown. For servers in a VMware environment, use the VMware Hybrid Cloud Extension (HCX) solution. [Learn more](../azure-vmware/configure-vmware-hcx.md).
+The operating system has passed its end-of-support date and needs a Custom Support Agreement for [support in Azure](/troubleshoot/azure/virtual-machines/server-software-support). Consider upgrading before you migrate to Azure. Review information about [preparing servers running Windows Server 2003](prepare-windows-server-2003-migration.md) for migration to Azure.
-## Linux VMs are "conditionally ready" in an Azure VM assessment
+### Issue: Unsupported Windows operating system
-In the case of VMware and Hyper-V VMs, an Azure VM assessment marks Linux VMs as **conditionally ready** because of a known gap.
+#### Fix
-- The gap prevents it from detecting the minor version of the Linux OS installed on the on-premises VMs.-- For example, for RHEL 6.10, currently an Azure VM assessment detects only RHEL 6 as the OS version. This behavior occurs because the vCenter Server and the Hyper-V host don't provide the kernel version for Linux VM operating systems.-- Since Azure endorses only specific versions of Linux, the Linux VMs are currently marked as **conditionally ready** in an Azure VM assessment.-- You can determine whether the Linux OS running on the on-premises VM is endorsed in Azure by reviewing [Azure Linux support](../virtual-machines/linux/endorsed-distros.md).-- After you've verified the endorsed distribution, you can ignore this warning.
+Azure supports only [selected Windows OS versions](/troubleshoot/azure/virtual-machines/server-software-support). Consider upgrading the server before you migrate to Azure.
-This gap can be addressed by enabling [application discovery](./how-to-discover-applications.md) on the VMware VMs. An Azure VM assessment uses the operating system detected from the VM by using the guest credentials provided. This Operating System data identifies the right OS information in the case of both Windows and Linux VMs.
+### Issue: Conditionally endorsed Linux OS
-## Operating system version not available
+#### Fix
-For physical servers, the operating system minor version information should be available. If it isn't available, contact Microsoft Support. For servers in a VMware environment, Azure Migrate uses the operating system information specified for the VM in the vCenter Server. But vCenter Server doesn't provide the minor version for operating systems. To discover the minor version, set up [application discovery](./how-to-discover-applications.md). For Hyper-V VMs, operating system minor version discovery isn't supported.
+Azure endorses only [selected Linux OS versions](../virtual-machines/linux/endorsed-distros.md). Consider upgrading the server before you migrate to Azure.
-## Azure SKUs bigger than on-premises in an Azure VM assessment
+### Issue: Unendorsed Linux OS
-An Azure VM assessment might recommend Azure VM SKUs with more cores and memory than the current on-premises allocation based on the type of assessment:
+#### Fix
-- The VM SKU recommendation depends on the assessment properties.-- The recommendation is affected by the type of assessment you perform in an Azure VM assessment. The two types are **Performance-based** or **As on-premises**.-- For performance-based assessments, the Azure VM assessment considers the utilization data of the on-premises VMs (CPU, memory, disk, and network utilization) to determine the right target VM SKU for your on-premises VMs. It also adds a comfort factor when determining effective utilization.-- For on-premises sizing, performance data isn't considered, and the target SKU is recommended based on on-premises allocation.
+The server might start in Azure, but Azure provides no operating system support. Consider upgrading to an [endorsed Linux version](../virtual-machines/linux/endorsed-distros.md) before you migrate to Azure.
-Let's look at an example recommendation:
+### Issue: Unknown operating system
-We have an on-premises VM with 4 cores and 8 GB of memory, with 50% CPU utilization and 50% memory utilization, and a specified comfort factor of 1.3.
+#### Fix
-- If the assessment is **As on-premises**, an Azure VM SKU with 4 cores and 8 GB of memory is recommended.-- If the assessment is **Performance-based**, based on effective CPU and memory utilization (50% of 4 cores * 1.3 = 2.6 cores and 50% of 8 GB memory * 1.3 = 5.2 GB memory), the cheapest VM SKU of 4 cores (nearest supported core count) and 8 GB of memory (nearest supported memory size) is recommended.-- [Learn more](concepts-assessment-calculation.md#types-of-assessments) about assessment sizing.
+The operating system of the VM was specified as **Other** in vCenter Server or could not be identified as a known OS in Azure Migrate. This behavior blocks Azure Migrate from verifying the Azure readiness of the VM. Ensure that the operating system is [supported](./migrate-support-matrix-vmware-migration.md#azure-vm-requirements) by Azure before you migrate the server.
-## Why is the recommended Azure disk SKU bigger than on-premises in an Azure VM assessment?
+### Issue: Unsupported bit version
-Azure VM assessment might recommend a bigger disk based on the type of assessment:
+#### Fix
-- Disk sizing depends on two assessment properties: sizing criteria and storage type.-- If the sizing criteria is **Performance-based** and the storage type is set to **Automatic**, the IOPS and throughput values of the disk are considered when identifying the target disk type (Standard HDD, Standard SSD, Premium, or Ultra disk). A disk SKU from the disk type is then recommended, and the recommendation considers the size requirements of the on-premises disk.-- If the sizing criteria is **Performance-based** and the storage type is **Premium**, a premium disk SKU in Azure is recommended based on the IOPS, throughput, and size requirements of the on-premises disk. The same logic is used to perform disk sizing when the sizing criteria is **As on-premises** and the storage type is **Standard HDD**, **Standard SSD**, **Premium**, or **Ultra disk**.
+VMs with a 32-bit operating system might boot in Azure, but we recommend that you upgrade to 64-bit before you migrate to Azure.
-For example, say you have an on-premises disk with 32 GB of memory, but the aggregated read and write IOPS for the disk is 800 IOPS. The Azure VM assessment recommends a premium disk because of the higher IOPS requirements. It also recommends a disk SKU that can support the required IOPS and size. The nearest match in this example would be P15 (256 GB, 1100 IOPS). Even though the size required by the on-premises disk was 32 GB, the Azure VM assessment recommended a larger disk because of the high IOPS requirement of the on-premises disk.
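As a rough illustration of why the high IOPS requirement in this example pushes the recommendation up to P15, here's a small Python sketch. The premium disk SKU figures are listed only for illustration; confirm the current values in the Azure managed disks documentation.

```python
# Illustrative premium disk SKUs as (name, size_gib, iops); values are
# examples only - confirm against the Azure managed disks documentation.
premium_skus = [("P4", 32, 120), ("P6", 64, 240), ("P10", 128, 500), ("P15", 256, 1100)]

required_size_gib, required_iops = 32, 800  # on-premises disk from the example

# Pick the smallest SKU that satisfies both the size and the IOPS requirement.
recommended = next(
    sku for sku in premium_skus
    if sku[1] >= required_size_gib and sku[2] >= required_iops
)
print(recommended)  # ('P15', 256, 1100): larger than 32 GiB because of the IOPS need
```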
+### Issue: Requires a Microsoft Visual Studio subscription
-## Why is performance data missing for some or all VMs in my assessment report?
+#### Fix
-For **Performance-based** assessment, the assessment report export says 'PercentageOfCoresUtilizedMissing' or 'PercentageOfMemoryUtilizedMissing' when the Azure Migrate appliance can't collect performance data for the on-premises VMs. Make sure to check:
+The server is running a Windows client operating system, which is supported only through a Visual Studio subscription.
-- If the VMs are powered on for the duration for which you're creating the assessment.-- If only memory counters are missing and you're trying to assess Hyper-V VMs, check if you have dynamic memory enabled on these VMs. Because of a known issue, currently the Azure Migrate appliance can't collect memory utilization for such VMs.-- If all of the performance counters are missing, ensure the port access requirements for assessment are met. Learn more about the port access requirements for [VMware](./migrate-support-matrix-vmware.md#port-access-requirements), [Hyper-V](./migrate-support-matrix-hyper-v.md#port-access), and [physical](./migrate-support-matrix-physical.md#port-access) assessments.
-If any of the performance counters are missing, Azure Migrate: Discovery and assessment falls back to the allocated cores/memory on-premises and recommends a VM size accordingly.
+### Issue: VM not found for the required storage performance
-## Why is performance data missing for some or all servers in my Azure VM or Azure VMware Solution assessment report?
+#### Fix
-For **Performance-based** assessment, the assessment report export says **PercentageOfCoresUtilizedMissing** or **PercentageOfMemoryUtilizedMissing** when the Azure Migrate appliance can't collect performance data for the on-premises servers. Make sure to check:
+The storage performance (input/output operations per second (IOPS) and throughput) required for the server exceeds Azure VM support. Reduce storage requirements for the server before migration.
-- If the servers are powered on for the duration for which you're creating the assessment.-- If only memory counters are missing and you're trying to assess servers in a Hyper-V environment. In this scenario, enable dynamic memory on the servers and recalculate the assessment to reflect the latest changes. The appliance can collect memory utilization values for servers in a Hyper-V environment only when the server has dynamic memory enabled.-- If all of the performance counters are missing, ensure that outbound connections on ports 443 (HTTPS) are allowed.
+### Issue: VM not found for the required network performance
- > [!Note]
- > If any of the performance counters are missing, Azure Migrate: Discovery and assessment falls back to the allocated cores/memory on-premises and recommends a VM size accordingly.
+#### Fix
-## Why is performance data missing for some or all SQL instances or databases in my Azure SQL assessment?
+The network performance (in/out) required for the server exceeds Azure VM support. Reduce the networking requirements for the server.
-To ensure performance data is collected, make sure to check:
+### Issue: VM not found in the specified location
-- If the SQL servers are powered on for the duration for which you're creating the assessment.-- If the connection status of the SQL agent in Azure Migrate is **Connected**, and also check the last heartbeat. -- If the Azure Migrate connection status for all SQL instances is **Connected** in the discovered SQL instance pane.-- If all of the performance counters are missing, ensure that outbound connections on port 443 (HTTPS) are allowed.
+#### Fix
-If any of the performance counters are missing, the Azure SQL assessment recommends the smallest Azure SQL configuration for that instance or database.
+Use a different target location before migration.
-## Why is the confidence rating of my assessment low?
+### Issue: One or more unsuitable disks
-The confidence rating is calculated for **Performance-based** assessments based on the percentage of [available data points](./concepts-assessment-calculation.md#ratings) needed to compute the assessment. An assessment could get a low confidence rating for the following reasons:
+#### Fix
-- You didn't profile your environment for the duration for which you're creating the assessment. For example, if you're creating an assessment with performance duration set to one week, you need to wait for at least a week after you start the discovery for all the data points to get collected. If you can't wait for the duration, change the performance duration to a shorter period and recalculate the assessment.-- The assessment isn't able to collect the performance data for some or all the servers in the assessment period. For a high confidence rating, ensure that:
- - Servers are powered on for the duration of the assessment.
- - Outbound connections on ports 443 are allowed.
- - For Hyper-V Servers, dynamic memory is enabled.
- - The connection status of agents in Azure Migrate is **Connected**. Also check the last heartbeat.
- - For Azure SQL assessments, Azure Migrate connection status for all SQL instances is **Connected** in the discovered SQL instance pane.
+One or more disks attached to the VM don't meet Azure requirements.<br><br> Azure Migrate: Discovery and assessment assesses the disks based on the disk limits for Ultra disks (64 TB).<br><br> For each disk attached to the VM, make sure that the size of the disk is < 64 TB (supported by Ultra SSD disks).<br><br> If it isn't, reduce the disk size before you migrate to Azure, or use multiple disks in Azure and [stripe them together](../virtual-machines/premium-storage-performance.md#disk-striping) to get higher storage limits. Make sure that the performance (IOPS and throughput) needed by each disk is supported by [Azure managed virtual machine disks](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-storage-limits).
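The following minimal Python sketch, using made-up disk sizes, illustrates the check described in this fix: each disk must stay under the 64 TB Ultra disk limit, and anything larger would need to be striped across multiple Azure disks.

```python
import math

ULTRA_DISK_LIMIT_TIB = 64  # per-disk limit used by the assessment

disks_tib = [2, 48, 100]  # hypothetical on-premises disk sizes

for size in disks_tib:
    if size < ULTRA_DISK_LIMIT_TIB:
        print(f"{size} TiB disk: within the per-disk limit")
    else:
        stripes = math.ceil(size / ULTRA_DISK_LIMIT_TIB)
        print(f"{size} TiB disk: exceeds the limit; stripe across about {stripes} Azure disks")
```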
- Recalculate the assessment to reflect the latest changes in confidence rating.
+### Issue: One or more unsuitable network adapters
-- For Azure VM and Azure VMware Solution assessments, few servers were created after discovery had started. For example, say you're creating an assessment for the performance history of the past month, but a few servers were created in the environment only a week ago. In this case, the performance data for the new servers won't be available for the entire duration and the confidence rating would be low. [Learn more](./concepts-assessment-calculation.md#confidence-ratings-performance-based).-- For Azure SQL assessments, few SQL instances or databases were created after discovery had started. For example, say you're creating an assessment for the performance history of the past month, but a few SQL instances or databases were created in the environment only a week ago. In this case, the performance data for the new servers won't be available for the entire duration and the confidence rating would be low. [Learn more](./concepts-azure-sql-assessment-calculation.md#confidence-ratings).
+#### Fix
-## Why is my RAM utilization greater than 100%?
+Remove unused network adapters from the server before migration.
-By design, in Hyper-V if maximum memory provisioned is less than what is required by the VM, the assessment will show memory utilization to be more than 100%.
+### Issue: Disk count exceeds limit
-## Is the operating system license included in an Azure VM assessment?
+#### Fix
-An Azure VM assessment currently considers the operating system license cost only for Windows servers. License costs for Linux servers aren't currently considered.
+Remove unused disks from the server before migration.
-## How does performance-based sizing work in an Azure VM assessment?
+### Issue: Disk size exceeds limit
-An Azure VM assessment continuously collects performance data of on-premises servers and uses it to recommend the VM SKU and disk SKU in Azure. [Learn more](concepts-assessment-calculation.md#calculate-sizing-performance-based) about how performance-based data is collected.
+#### Fix
-## Can I migrate my disks to an Ultra disk by using Azure Migrate?
+Azure Migrate: Discovery and assessment supports disks of up to 64 TB in size (Ultra disks). Shrink disks to less than 64 TB before migration, or use multiple disks in Azure and [stripe them together](../virtual-machines/premium-storage-performance.md#disk-striping) to get higher storage limits.
-No. Currently, both Azure Migrate and Azure Site Recovery don't support migration to Ultra disks. [Learn more](../virtual-machines/disks-enable-ultra-ssd.md?tabs=azure-portal#deploy-an-ultra-disk) about deploying an Ultra disk.
+### Issue: Disk unavailable in the specified location
-## Why are the provisioned IOPS and throughput in my Ultra disk more than my on-premises IOPS and throughput?
+#### Fix
-As per the [official pricing page](https://azure.microsoft.com/pricing/details/managed-disks/), Ultra disk is billed based on the provisioned size, provisioned IOPS, and provisioned throughput. For example, if you provisioned a 200-GiB Ultra disk with 20,000 IOPS and 1,000 MB/second and deleted it after 20 hours, it will map to the disk size offer of 256 GiB. You'll be billed for 256 GiB, 20,000 IOPS, and 1,000 MB/second for 20 hours.
+Make sure the disk is in your target location before you migrate.
-IOPS to be provisioned = (Throughput discovered) * 1024/256
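A minimal Python sketch of the provisioning formula above, together with the size-offer mapping from the billing example (a 200-GiB request maps to the 256-GiB offer). The list of size offers is an assumption included only for illustration.

```python
# Illustrative only: the quoted IOPS-to-provision formula plus an assumed
# list of Ultra disk size offers (the billing example maps 200 GiB to 256 GiB).
def provisioned_iops(discovered_throughput_mbps):
    # IOPS to be provisioned = (Throughput discovered) * 1024 / 256
    return discovered_throughput_mbps * 1024 / 256

def size_offer_gib(requested_gib, offers=(4, 8, 16, 32, 64, 128, 256, 512, 1024)):
    # Assumed size offers; the smallest offer that fits the request is billed.
    return next(offer for offer in offers if offer >= requested_gib)

print(provisioned_iops(1000))  # 4000.0 IOPS provisioned for 1,000 MB/s of throughput
print(size_offer_gib(200))     # 256
```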
+### Issue: Disk unavailable for the specified redundancy
-## Does the Ultra disk recommendation consider latency?
+#### Fix
-No, currently only disk size, total throughput, and total IOPS are used for sizing and costing.
+The disk should use the redundancy storage type defined in the assessment settings (LRS by default).
-## I can see M series supports Ultra disk, but in my assessment where Ultra disk was recommended, it says "No VM found for this location"
+### Issue: Couldn't determine disk suitability because of an internal error
-This result is possible because not all VM sizes that support Ultra disks are present in all Ultra disk supported regions. Change the target assessment region to get the VM size for this server.
+#### Fix
-## Why is my assessment showing a warning that it was created with an invalid offer?
+Try creating a new assessment for the group.
-Your assessment was created with an offer that is no longer valid and hence, the **Edit** and **Recalculate** buttons are disabled. You can create a new assessment with any of the valid offers - *Pay as you go*, *Pay as you go Dev/Test*, and *Enterprise Agreement*. You can also use the **Discount(%)** field to specify any custom discount on top of the Azure offer. [Learn more](how-to-create-assessment.md).
+### Issue: VM with required cores and memory not found
-## Why is my assessment showing a warning that it was created with a target Azure location that has been deprecated?
+#### Fix
-Your assessment was created with an Azure region that has been deprecated and hence the **Edit** and **Recalculate** buttons are disabled. You can [create a new assessment](how-to-create-assessment.md) with any of the valid target locations. [Learn more](concepts-assessment-calculation.md#whats-in-an-azure-vm-assessment).
+Azure couldn't find a suitable VM type. Reduce the memory and number of cores of the on-premises server before you migrate.
-## Why is my assessment showing a warning that it was created with an invalid combination of Reserved Instances, VM uptime, and Discount (%)?
+### Issue: Couldn't determine VM suitability because of an internal error
-When you select **Reserved Instances**, the **Discount (%)** and **VM uptime** properties aren't applicable. As your assessment was created with an invalid combination of these properties, the **Edit** and **Recalculate** buttons are disabled. Create a new assessment. [Learn more](./concepts-assessment-calculation.md#whats-an-assessment).
+#### Fix
-## Why are some of my assessments marked as "to be upgraded to latest assessment version"?
+Try creating a new assessment for the group.
-Recalculate your assessment to view the upgraded Azure SQL assessment experience to identify the ideal migration target for your SQL deployments across Azure SQL Managed Instances, SQL Server on Azure VM, and Azure SQL DB:
- - We recommended migrating instances to *SQL Server on Azure VM* as per the Azure best practices.
- - *Right sized Lift and Shift* - Server to *SQL Server on Azure VM*. We recommend this when SQL Server credentials are not available.
- - Enhanced user-experience that covers readiness and cost estimates for multiple migration targets for SQL deployments in one assessment.
+### Issue: Couldn't determine suitability for one or more disks because of an internal error
-We recommend that you export your existing assessment before recalculating.
+#### Fix
-## I don't see performance data for some network adapters on my physical servers
+Try creating a new assessment for the group.
-This issue can happen if the physical server has Hyper-V virtualization enabled. On these servers, because of a product gap, Azure Migrate currently discovers both the physical and virtual network adapters. The network throughput is captured only on the virtual network adapters discovered.
+### Issue: Couldn't determine suitability for one or more network adapters because of an internal error
-## The recommended Azure VM SKU for my physical server is oversized
+#### Fix
-This issue can happen if the physical server has Hyper-V virtualization enabled. On these servers, Azure Migrate currently discovers both the physical and virtual network adapters. As a result, the number of network adapters discovered is higher than the actual number. The Azure VM assessment picks an Azure VM that can support the required number of network adapters, which can potentially result in an oversized VM. [Learn more](./concepts-assessment-calculation.md#calculating-sizing) about the impact of the number of network adapters on sizing. This product gap will be addressed going forward.
+Try creating a new assessment for the group.
-## The readiness category is marked "Not ready" for my physical server
+### Issue: No VM size found for offer currency Reserved Instance (RI)
-The readiness category might be incorrectly marked as **Not ready** in the case of a physical server that has Hyper-V virtualization enabled. On these servers, because of a product gap, Azure Migrate currently discovers both the physical and virtual adapters. As a result, the number of network adapters discovered is higher than the actual number. In both **As on-premises** and **Performance-based** assessments, the Azure VM assessment picks an Azure VM that can support the required number of network adapters. If the number of network adapters is discovered to be higher than 32, the maximum number of NICs supported on Azure VMs, the server will be marked **Not ready**. [Learn more](./concepts-assessment-calculation.md#calculating-sizing) about the impact of number of NICs on sizing.
+#### Fix
-## The number of discovered NICs is higher than actual for physical servers
+The server is marked as not suitable because a VM size wasn't found for the selected combination of RI, offer, and currency. Edit the assessment properties to choose a valid combination, and then recalculate the assessment.
-This issue can happen if the physical server has Hyper-V virtualization enabled. On these servers, Azure Migrate currently discovers both the physical and virtual adapters. As a result, the number of NICs discovered is higher than the actual number.
+## Azure VMware Solution (AVS) assessment readiness issues
-## Capture network traffic
+This section provides help for fixing the following assessment readiness issues.
-To collect network traffic logs:
+### Issue: Unsupported IPv6
-1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Select F12 to start Developer Tools. If needed, clear the **Clear entries on navigation** setting.
-1. Select the **Network** tab, and start capturing network traffic:
- - In Chrome, select **Preserve log**. The recording should start automatically. A red circle indicates that traffic is being captured. If the red circle doesn't appear, select the black circle to start.
- - In Microsoft Edge and Internet Explorer, recording should start automatically. If it doesn't, select the green play button.
-1. Try to reproduce the error.
-1. After you've encountered the error while recording, stop recording and save a copy of the recorded activity:
- - In Chrome, right-click and select **Save as HAR with content**. This action compresses and exports the logs as a .har file.
- - In Microsoft Edge or Internet Explorer, select the **Export captured traffic** option. This action compresses and exports the log.
-1. Select the **Console** tab to check for any warnings or errors. To save the console log:
- - In Chrome, right-click anywhere in the console log. Select **Save as** to export, and zip the log.
- - In Microsoft Edge or Internet Explorer, right-click the errors and select **Copy all**.
-1. Close Developer Tools.
+#### Fix
-## Where is the Operating System data in my assessment discovered from?
+This issue applies only to Azure VMware Solution assessments. Azure VMware Solution doesn't support IPv6 internet addresses. If your server is detected with an IPv6 address, contact the Azure VMware Solution team for remediation guidance.
-- For VMware VMs, by default, it's the operating system data provided by the vCenter Server.
- - For VMware Linux VMs, if application discovery is enabled, the OS details are fetched from the guest VM. To check which OS details are in the assessment, go to the **Discovered servers** view, and hover over the value in the **Operating system** column. In the text that pops up, you'd be able to see whether the OS data you see is gathered from the vCenter Server or from the guest VM by using the VM credentials.
- - For Windows VMs, the operating system details are always fetched from the vCenter Server.
-- For Hyper-V VMs, the operating system data is gathered from the Hyper-V host.-- For physical servers, it is fetched from the server.
+### Issue: Unsupported OS
+
+#### Fix
+
+VMware has deprecated support for certain operating system versions, and the assessment recommends that you upgrade the operating system before migrating to Azure VMware Solution. [Learn more](https://www.vmware.com/resources/compatibility/search.php?deviceCategory=software).
## Common web apps discovery errors

Azure Migrate provides options to assess discovered ASP.NET web apps for migration to Azure App Service by using the Azure Migrate: Discovery and assessment tool. Refer to the [assessment](tutorial-assess-webapps.md) tutorial to get started.
-Typical App Service assessment errors are summarized in the table.
-
-| **Error** | **Cause** | **Recommended action** |
-|--|--|--|
-|**Application pool check**|The IIS site is using the following application pools: {0}.|Azure App Service doesn't support more than one application pool configuration per App Service application. Move the workloads to a single application pool and remove other application pools.|
-|**Application pool identity check**|The site's application pool is running as an unsupported user identity type: {0}.|App Service doesn't support using the LocalSystem or SpecificUser application pool identity types. Set the application pool to run as ApplicationPoolIdentity.|
-|**Authorization check**|The following unsupported authentication types were found: {0}.|App Service supported authentication types and configuration are different from on-premises IIS. Disable the unsupported authentication types on the site. After the migration is complete, it will be possible to configure the site by using one of the App Service supported authentication types.|
-|**Authorization check unknown**|Unable to determine enabled authentication types for all of the site configuration.|Unable to determine authentication types. Fix all configuration errors and confirm that all site content locations are accessible to the administrators group.|
-|**Configuration error check**|The following configuration errors were found: {0}.|Migration readiness can't be determined without reading all applicable configuration. Fix all configuration errors. Make sure configuration is valid and accessible.|
-|**Content size check**|The site content appears to be greater than the maximum allowed of 2 GB for successful migration.|For successful migration, site content should be less than 2 GB. Evaluate if the site could switch to using non-file-system-based storage options for static content, such as Azure Storage.|
-|**Content size check unknown**|File content size couldn't be determined, which usually indicates an access issue.|Content must be accessible to migrate the site. Confirm that the site isn't using UNC shares for content and that all site content locations are accessible to the administrators group.|
-|**Global module check**|The following unsupported global modules were detected: {0}.|App Service supports limited global modules. Remove the unsupported modules from the GlobalModules section, along with all associated configuration.|
-|**ISAPI filter check**|The following unsupported ISAPI filters were detected: {0}.|Automatic configuration of custom ISAPI filters isn't supported. Remove the unsupported ISAPI filters.|
-|**ISAPI filter check unknown**|Unable to determine ISAPI filters present for all of the site configuration.|Automatic configuration of custom ISAPI filters isn't supported. Fix all configuration errors and confirm that all site content locations are accessible to the administrators group.|
-|**Location tag check**|The following location paths were found in the applicationHost.config file: {0}.|The migration method doesn't support moving location path configuration in applicationHost.config. Move the location path configuration to either the site's root web.config file or to a web.config file associated with the specific application to which it applies.|
-|**Protocol check**|Bindings were found by using the following unsupported protocols: {0}.|App Service only supports the HTTP and HTTPS protocols. Remove the bindings with protocols that aren't HTTP or HTTPS.|
-|**Virtual directory check**|The following virtual directories are hosted on UNC shares: {0}.|Migration doesn't support migrating site content hosted on UNC shares. Move content to a local file path or consider changing to a non-file-system-based storage option, such as Azure Storage. If you use shared configuration, disable shared configuration for the server before you modify the content paths.|
-|**HTTPS binding check**|The application uses HTTPS.|More manual steps are required for HTTPS configuration in App Service. Other post-migration steps are required to associate certificates with the App Service site.|
-|**TCP port check**|Bindings were found on the following unsupported ports: {0}.|App Service supports only ports 80 and 443. Clients making requests to the site should update the port in their requests to use 80 or 443.|
-|**Framework check**|The following non-.NET frameworks or unsupported .NET framework versions were detected as possibly in use by this site: {0}.|Migration doesn't validate the framework for non-.NET sites. App Service supports multiple frameworks, but these have different migration options. Confirm that the non-.NET frameworks aren't being used by the site, or consider using an alternate migration option.|
+The following sections summarize typical App Service assessment errors.
+
+### Error: Application pool check
+
+#### Cause
+
+The IIS site is using the following application pools: {0}.
+
+#### Recommended Action
+
+Azure App Service doesn't support more than one application pool configuration per App Service application. Move the workloads to a single application pool and remove other application pools.
+
+### Error: Application pool identity check
+
+#### Cause
+
+The site's application pool is running as an unsupported user identity type: {0}.
+
+#### Recommended Action
+
+App Service doesn't support using the LocalSystem or SpecificUser application pool identity types. Set the application pool to run as ApplicationPoolIdentity.
+
+### Error: Authorization check
+
+#### Cause
+
+The following unsupported authentication types were found: {0}.
+
+#### Recommended Action
+
+App Service supported authentication types and configuration are different from on-premises IIS. Disable the unsupported authentication types on the site. After the migration is complete, you can configure the site by using one of the authentication types that App Service supports.
+
+### Error: Authorization check unknown
+
+#### Cause
+
+Unable to determine enabled authentication types for all of the site configuration.
+
+#### Recommended Action
+
+Unable to determine authentication types. Fix all configuration errors and confirm that all site content locations are accessible to the administrators group.
+
+### Error: Configuration error check
+
+#### Cause
+
+The following configuration errors were found: {0}.
+
+#### Recommended Action
+
+Migration readiness can't be determined without reading all applicable configuration. Fix all configuration errors. Make sure configuration is valid and accessible.
+
+### Error: Content size check
+
+#### Cause
+
+The site content appears to be greater than the maximum allowed of 2 GB for successful migration.
+
+#### Recommended Action
+
+For successful migration, site content should be less than 2 GB. Evaluate if the site could switch to using non-file-system-based storage options for static content, such as Azure Storage.
+
+### Error: Content size check unknown
+
+#### Cause
+
+File content size couldn't be determined, which usually indicates an access issue.
+
+#### Recommended Action
+
+Content must be accessible to migrate the site. Confirm that the site isn't using UNC shares for content and that all site content locations are accessible to the administrators group.
+
+### Error: Global module check
+
+#### Cause
+
+The following unsupported global modules were detected: {0}.
+
+#### Recommended Action
+
+App Service supports limited global modules. Remove the unsupported modules from the GlobalModules section, along with all associated configuration.
+
+### Error: ISAPI filter check
+
+#### Cause
+
+The following unsupported ISAPI filters were detected: {0}.
+
+#### Recommended Action
+
+Automatic configuration of custom ISAPI filters isn't supported. Remove the unsupported ISAPI filters.
+
+### Error: ISAPI filter check unknown
+
+#### Cause
+
+Unable to determine ISAPI filters present for all of the site configuration.
+
+#### Recommended Action
+
+Automatic configuration of custom ISAPI filters isn't supported. Fix all configuration errors and confirm that all site content locations are accessible to the administrators group.
+
+### Error: Location tag check
+
+#### Cause
+
+The following location paths were found in the applicationHost.config file: {0}.
+
+#### Recommended Action
+
+The migration method doesn't support moving location path configuration in applicationHost.config. Move the location path configuration to either the site's root web.config file or to a web.config file associated with the specific application to which it applies.
+
+### Error: Protocol check
+
+#### Cause
+
+Bindings were found by using the following unsupported protocols: {0}.
+
+#### Recommended Action
+
+App Service only supports the HTTP and HTTPS protocols. Remove the bindings with protocols that aren't HTTP or HTTPS.
+
+### Error: Virtual directory check
+
+#### Cause
+
+The following virtual directories are hosted on UNC shares: {0}.
+
+#### Recommended Action
+
+Migration doesn't support migrating site content hosted on UNC shares. Move content to a local file path or consider changing to a non-file-system-based storage option, such as Azure Storage. If you use shared configuration, disable shared configuration for the server before you modify the content paths.
+
+### Error: HTTPS binding check
+
+#### Cause
+
+The application uses HTTPS.
+
+#### Recommended Action
+
+More manual steps are required for HTTPS configuration in App Service. Other post-migration steps are required to associate certificates with the App Service site.
+
+### Error: TCP port check
+
+#### Cause
+
+Bindings were found on the following unsupported ports: {0}.
+
+#### Recommended Action
+
+App Service supports only ports 80 and 443. Clients making requests to the site should update the port in their requests to use 80 or 443.
+
+### Error: Framework check
+
+#### Cause
+
+The following non-.NET frameworks or unsupported .NET framework versions were detected as possibly in use by this site: {0}.
+
+#### Recommended Action
+
+Migration doesn't validate the framework for non-.NET sites. App Service supports multiple frameworks, but these have different migration options. Confirm that the non-.NET frameworks aren't being used by the site, or consider using an alternate migration option.
## Next steps
migrate Tutorial Assess Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-assess-aws.md
ms. Previously updated : 02/12/2024 Last updated : 02/26/2024 #Customer intent: As a server admin, I want to assess my AWS instances in preparation for migration to Azure.
Run an assessment as follows:
- Cost estimates are based on the duration specified. - Default is 31 days per month/24 hours per day. - In **EA Subscription**, specify whether to take an Enterprise Agreement (EA) subscription discount into account for cost estimation.
- - In **Azure Hybrid Benefit**, specify whether you already have a Windows Server license or Enterprise Linux subscription. If you do and they're covered with active Software Assurance of Windows Server or Enterprise Linux Subscriptions, you can apply for the [Azure Hybrid Benefit](https://azure.microsoft.com/pricing/hybrid-use-benefit/) when you bring licenses to Azure.
+ - In **Azure Hybrid Benefit**, specify whether you already have a Windows Server license or Enterprise Linux subscription (RHEL and SLES). If you do and they're covered with active Software Assurance of Windows Server or Enterprise Linux Subscriptions (RHEL and SLES), you can apply for the [Azure Hybrid Benefit](https://azure.microsoft.com/pricing/hybrid-use-benefit/) when you bring licenses to Azure.
1. Select **Save** if you make changes.
migrate Tutorial Assess Gcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-assess-gcp.md
ms. Previously updated : 02/12/2024 Last updated : 02/26/2024 #Customer intent: As a server admin, I want to assess my GCP instances in preparation for migration to Azure.
Run an assessment as follows:
- Cost estimates are based on the duration specified. - Default is 31 days per month/24 hours per day. - In **EA Subscription**, specify whether to take an Enterprise Agreement (EA) subscription discount into account for cost estimation.
- - In **Azure Hybrid Benefit**, specify whether you already have a Windows Server license or Enterprise Linux subscription. If you do and they're covered with active Software Assurance of Windows Server or Enterprise Linux Subscriptions, you can apply for the [Azure Hybrid Benefit](https://azure.microsoft.com/pricing/hybrid-use-benefit/) when you bring licenses to Azure.
+ - In **Azure Hybrid Benefit**, specify whether you already have a Windows Server license or Enterprise Linux subscription (RHEL and SLES). If you do and they're covered with active Software Assurance of Windows Server or Enterprise Linux Subscriptions (RHEL and SLES), you can apply for the [Azure Hybrid Benefit](https://azure.microsoft.com/pricing/hybrid-use-benefit/) when you bring licenses to Azure.
1. Select **Save** if you make changes.
migrate Tutorial Assess Hyper V https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-assess-hyper-v.md
ms. Previously updated : 02/12/2024 Last updated : 02/26/2024 #Customer intent: As a Hyper-V admin, I want to assess my Hyper-V VMs in preparation for migration to Azure.
Run an assessment as follows:
- Cost estimates are based on the duration specified. - Default is 31 days per month/24 hours per day. - In **EA Subscription**, specify whether to take an Enterprise Agreement (EA) subscription discount into account for cost estimation. 
- - In **Azure Hybrid Benefit**, specify whether you already have a Windows Server license or Enterprise Linux subscription. If you do and they're covered with active Software Assurance of Windows Server or Enterprise Linux Subscriptions, you can apply for the [Azure Hybrid Benefit](https://azure.microsoft.com/pricing/hybrid-use-benefit/) when you bring licenses to Azure.
+ - In **Azure Hybrid Benefit**, specify whether you already have a Windows Server license or Enterprise Linux subscription (RHEL and SLES). If you do and they're covered with active Software Assurance of Windows Server or Enterprise Linux Subscriptions (RHEL and SLES), you can apply for the [Azure Hybrid Benefit](https://azure.microsoft.com/pricing/hybrid-use-benefit/) when you bring licenses to Azure.
1. Select **Save** if you make changes. 1. In **Assess Servers**, select **Next**.
migrate Tutorial Assess Physical https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-assess-physical.md
ms. Previously updated : 02/12/2024 Last updated : 02/26/2024 #Customer intent: As a server admin, I want to assess my on-premises physical servers in preparation for migration to Azure.
Run an assessment as follows:
- Cost estimates are based on the duration specified. - Default is 31 days per month/24 hours per day. - In **EA Subscription**, specify whether to take an Enterprise Agreement (EA) subscription discount into account for cost estimation. 
- - In **Azure Hybrid Benefit**, specify whether you already have a Windows Server license or Enterprise Linux subscription. If you do and they're covered with active Software Assurance of Windows Server or Enterprise Linux Subscriptions, you can apply for the [Azure Hybrid Benefit](https://azure.microsoft.com/pricing/hybrid-use-benefit/) when you bring licenses to Azure.
+ - In **Azure Hybrid Benefit**, specify whether you already have a Windows Server license or Enterprise Linux subscription (RHEL and SLES). If you do and they're covered with active Software Assurance of Windows Server or Enterprise Linux Subscriptions (RHEL and SLES), you can apply for the [Azure Hybrid Benefit](https://azure.microsoft.com/pricing/hybrid-use-benefit/) when you bring licenses to Azure.
1. Select **Save** if you make changes.
migrate Tutorial Assess Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-assess-sql.md
Previously updated : 02/12/2024 Last updated : 02/26/2024
Run an assessment as follows:
Target and pricing settings | **Currency** | The billing currency for your account. Target and pricing settings | **Discount (%)** | Any subscription-specific discounts you receive on top of the Azure offer. The default setting is 0%. Target and pricing settings | **VM uptime** | Specify the duration (days per month/hour per day) that servers/VMs run. This is useful for computing cost estimates for SQL Server on Azure VM where you're aware that Azure VMs might not run continuously. <br/> Cost estimates for servers where recommended target is *SQL Server on Azure VM* are based on the duration specified. Default is 31 days per month/24 hours per day.
- Target and pricing settings | **Azure Hybrid Benefit** | Specify whether you already have a Windows Server and/or SQL Server license or Enterprise Linux subscription. Azure Hybrid Benefit is a licensing benefit that helps you to significantly reduce the costs of running your workloads in the cloud. It works by letting you use your on-premises Software Assurance-enabled Windows Server and SQL Server licenses on Azure. For example, if you have a SQL Server license and they're covered with active Software Assurance of SQL Server Subscriptions, you can apply for the Azure Hybrid Benefit when you bring licenses to Azure.
+ Target and pricing settings | **Azure Hybrid Benefit** | Specify whether you already have a Windows Server and/or SQL Server license or Enterprise Linux subscription (RHEL and SLES). Azure Hybrid Benefit is a licensing benefit that helps you to significantly reduce the costs of running your workloads in the cloud. It works by letting you use your on-premises Software Assurance-enabled Windows Server and SQL Server licenses on Azure. For example, if you have a SQL Server license and they're covered with active Software Assurance of SQL Server Subscriptions, you can apply for the Azure Hybrid Benefit when you bring licenses to Azure.
Assessment criteria | **Sizing criteria** | Set to *Performance-based* by default, which means Azure Migrate collects performance metrics pertaining to SQL instances and the databases managed by it to recommend an optimal-sized SQL Server on Azure VM and/or Azure SQL Database and/or Azure SQL Managed Instance configuration.<br/><br/> You can change this to *As on-premises* to get recommendations based on just the on-premises SQL Server configuration without the performance metric based optimizations. Assessment criteria | **Performance history** | Indicate the data duration on which you want to base the assessment. (Default is one day) Assessment criteria | **Percentile utilization** | Indicate the percentile value you want to use for the performance sample. (Default is 95th percentile)
migrate Tutorial Assess Vmware Azure Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-assess-vmware-azure-vm.md
ms. Previously updated : 02/12/2024 Last updated : 02/26/2024 #Customer intent: As a VMware VM admin, I want to assess my VMware VMs in preparation for migration to Azure.
Run an assessment as follows:
- Cost estimates are based on the duration specified. - Default is 31 days per month/24 hours per day. - In **EA Subscription**, specify whether to take an Enterprise Agreement (EA) subscription discount into account for cost estimation. 
- - In **Azure Hybrid Benefit**, specify whether you already have a Windows Server license or Enterprise Linux subscription. If you do and they're covered with active Software Assurance of Windows Server or Enterprise Linux Subscriptions, you can apply for the [Azure Hybrid Benefit](https://azure.microsoft.com/pricing/hybrid-use-benefit/) when you bring licenses to Azure.
+ - In **Azure Hybrid Benefit**, specify whether you already have a Windows Server license or Enterprise Linux subscription (RHEL and SLES). If you do and they're covered with active Software Assurance of Windows Server or Enterprise Linux Subscriptions (RHEL and SLES), you can apply for the [Azure Hybrid Benefit](https://azure.microsoft.com/pricing/hybrid-use-benefit/) when you bring licenses to Azure.
1. Select **Save** if you make changes. 1. In **Assess Servers**, select **Next**.
migrate Tutorial Assess Vmware Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-assess-vmware-azure-vmware-solution.md
ms. Previously updated : 02/12/2024 Last updated : 02/26/2024 #Customer intent: As a VMware VM admin, I want to assess my VMware VMs in preparation for migration to Azure VMware Solution (AVS)
Run an assessment as follows:
Target and pricing settings | **Currency** | The billing currency for your account. Target and pricing settings | **Discount (%)** | Any subscription-specific discounts you receive on top of the Azure offer. The default setting is 0%. Target and pricing settings | **VM uptime** | Specify the duration (days per month/hour per day) that servers/VMs run. This is useful for computing cost estimates for SQL Server on Azure VM where you're aware that Azure VMs might not run continuously. <br/> Cost estimates for servers where recommended target is *SQL Server on Azure VM* are based on the duration specified. Default is 31 days per month/24 hours per day.
- Target and pricing settings | **Azure Hybrid Benefit** | Specify whether you already have a Windows Server and/or SQL Server license or Enterprise Linux subscription. Azure Hybrid Benefit is a licensing benefit that helps you to significantly reduce the costs of running your workloads in the cloud. It works by letting you use your on-premises Software Assurance-enabled Windows Server and SQL Server licenses on Azure. For example, if you have a SQL Server license and they're covered with active Software Assurance of SQL Server Subscriptions, you can apply for the Azure Hybrid Benefit when you bring licenses to Azure.
+ Target and pricing settings | **Azure Hybrid Benefit** | Specify whether you already have a Windows Server and/or SQL Server license or Enterprise Linux subscription (RHEL and SLES). Azure Hybrid Benefit is a licensing benefit that helps you to significantly reduce the costs of running your workloads in the cloud. It works by letting you use your on-premises Software Assurance-enabled Windows Server and SQL Server licenses on Azure. For example, if you have a SQL Server license and they're covered with active Software Assurance of SQL Server Subscriptions, you can apply for the Azure Hybrid Benefit when you bring licenses to Azure.
Assessment criteria | **Sizing criteria** | Set to be *Performance-based* by default, which means Azure Migrate collects performance metrics pertaining to SQL instances and the databases managed by it to recommend an optimal-sized SQL Server on Azure VM and/or Azure SQL Database and/or Azure SQL Managed Instance configuration. Assessment criteria | **Performance history** | Indicate the data duration on which you want to base the assessment. (Default is one day) Assessment criteria | **Percentile utilization** | Indicate the percentile value you want to use for the performance sample. (Default is 95th percentile)
migrate Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/whats-new.md
ms. Previously updated : 02/13/2024 Last updated : 02/26/2024
## Update (February 2024) -- Public preview: Envision savings with Azure Hybrid Benefits by bringing your existing Enterprise Linux subscriptions to Azure using Azure VM assessments and business case.
+- Public preview: Envision savings with Azure Hybrid Benefits by bringing your existing Enterprise Linux subscriptions (RHEL and SLES) to Azure using Azure VM assessments and business case.
## Update (January 2024)
mysql Concepts Audit Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-audit-logs.md
By default, audit logs are disabled. To enable them, set the `audit_log_enabled`
Other parameters you can adjust to control audit logging behavior include: - `audit_log_events`: controls the events to be logged. See below table for specific audit events.-- `audit_log_include_users`: MySQL users to be included for logging. The default value for this parameter is empty, which will include all the users for logging. This has higher priority over `audit_log_exclude_users`. Max length of the parameter is 512 characters.-- `audit_log_exclude_users`: MySQL users to be excluded from logging. Max length of the parameter is 512 characters.
+- `audit_log_include_users`: MySQL users to be included for logging. The default value for this parameter is empty, which includes all users for logging. This parameter has higher priority than `audit_log_exclude_users`. The maximum length of the parameter is 512 characters. Wildcard values are supported: for example, `dev*` includes all users whose names start with `dev`, such as "dev1", "dev_user", and "dev_2", and `*dev` includes all users whose names end with `dev`, such as "stage_dev", "prod_dev", and "user_dev". A question mark (`?`) is also permitted as a single-character wildcard in patterns.
+- `audit_log_exclude_users`: MySQL users to be excluded from logging. The maximum length of the parameter is 512 characters. Wildcard values are also accepted: for example, `stage*` excludes all users whose names start with `stage`, such as "stage1", "stage_user", and "stage_2", and `*com` excludes all users whose names end with `com`. A question mark (`?`) is also permitted as a single-character wildcard in patterns.
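To illustrate which user names the documented wildcard patterns would cover, here's a small Python sketch using standard `*` and `?` matching semantics. It's illustrative only; the server evaluates these patterns itself, and nothing here changes server configuration.

```python
from fnmatch import fnmatch

include_pattern = "dev*"    # example value for audit_log_include_users
exclude_pattern = "stage*"  # example value for audit_log_exclude_users

users = ["dev1", "dev_user", "stage_user", "prod_dev", "user_dev"]

for user in users:
    # fnmatch treats '*' as any run of characters and '?' as a single character.
    print(f"{user}: include match={fnmatch(user, include_pattern)}, "
          f"exclude match={fnmatch(user, exclude_pattern)}")
```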
> [!NOTE] > `audit_log_include_users` has higher priority over `audit_log_exclude_users`. For example, if `audit_log_include_users` = `demouser` and `audit_log_exclude_users` = `demouser`, the user will be included in the audit logs because `audit_log_include_users` has higher priority. | **Event** | **Description** | | | |
-| `CONNECTION` | - Connection initiation (successful or unsuccessful)<br />- User reauthentication with different user/password during session<br />- Connection termination |
+| `CONNECTION` | - Connection initiation<br />- Connection termination |
+| `CONNECTION_V2` | - Connection initiation (successful or unsuccessful attempt, with an error code)<br />- Connection termination |
| `DML_SELECT` | SELECT queries | | `DML_NONSELECT` | INSERT/DELETE/UPDATE queries | | `DML` | DML = DML_SELECT + DML_NONSELECT |
The following sections describe the output of MySQL audit logs based on the even
| `user_s` | Name of user executing the query | | `db_s` | Name of database connected to | | `\_ResourceId` | Resource URI |
+| `status_d` | Connection [error code](https://dev.mysql.com/doc/mysql-errors/8.0/en/server-error-reference.html) entry for the CONNECTION_V2 event. |
### General
Once your audit logs are piped to Azure Monitor Logs through Diagnostic Logs, yo
| order by TimeGenerated asc nulls last ``` +
+- List CONNECTION_V2 events on a particular server. The `status_d` column denotes the client connection [error code](https://dev.mysql.com/doc/mysql-errors/8.0/en/server-error-reference.html) that the client application encountered while connecting.
+
+ ```kusto
+ AzureDiagnostics
+ | where Resource == '<your server name>' //Server name must be in Upper case
+ | where Category == 'MySqlAuditLogs' and event_subclass_s == "CONNECT"
+ | project TimeGenerated, Resource, event_class_s, event_subclass_s, user_s, ip_s, status_d
+ | order by TimeGenerated asc nulls last
+ ```
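If you'd rather run the same CONNECTION_V2 query from code, the sketch below uses the `azure-monitor-query` and `azure-identity` Python packages against a Log Analytics workspace. The workspace ID and server name are placeholders you supply; this is a sketch, not part of the documented workflow.

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

WORKSPACE_ID = "<your Log Analytics workspace ID>"  # placeholder
QUERY = """
AzureDiagnostics
| where Resource == '<your server name>' // server name must be in uppercase
| where Category == 'MySqlAuditLogs' and event_subclass_s == "CONNECT"
| project TimeGenerated, Resource, event_class_s, event_subclass_s, user_s, ip_s, status_d
| order by TimeGenerated asc nulls last
"""

# Query the last day of audit log entries and print each result row.
client = LogsQueryClient(DefaultAzureCredential())
response = client.query_workspace(WORKSPACE_ID, QUERY, timespan=timedelta(days=1))

for table in response.tables:
    for row in table.rows:
        print(row)
```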
+ - List CONNECTION events on a particular server ```kusto
mysql Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/whats-new.md
This article summarizes new releases and features in Azure Database for MySQL fl
> [!NOTE] > This article references the term slave, which Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
+## February 2024
+
+- **Audit logs now support wildcard entries**
+
+  The server parameters `audit_log_include_users` and `audit_log_exclude_users` now support wildcard entries, enhancing flexibility for specifying user inclusions and exclusions in audit logs. [Learn more](./concepts-audit-logs.md#configure-audit-logging)
+
+- **Enhanced Audit Logging with CONNECTION_V2 for Comprehensive MySQL User Audits**
+
+  The server parameter [audit_log_events](./concepts-audit-logs.md#configure-audit-logging) now supports the CONNECTION_V2 event for detailed connection logs, providing insights into user audits, connection status, and [MySQL error codes](https://dev.mysql.com/doc/mysql-errors/8.0/en/server-error-reference.html). [Learn more](./concepts-audit-logs.md#analyze-logs-in-azure-monitor-logs)
+ ## December 2023
mysql Concepts Connectivity Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-connectivity-architecture.md
The following table lists the gateway IP addresses of the Azure Database for MyS
We strongly encourage customers to move away from relying on any individual Gateway IP address (since these will be retired in the future). Instead, allow network traffic to reach both the individual Gateway IP addresses and the Gateway IP address subnets in a region.
-| **Region name** | **Gateway IP address(es)** | **Gateway IP address subnets** |
-|:-|:-|:--|
-| Australia Central | 20.36.105.32 | 20.36.105.32/29, 20.53.48.96/27 |
-| Australia Central 2 | 20.36.113.32 | 20.36.113.32/29, 20.53.56.32/27 |
-| Australia East | 13.70.112.32, 40.79.160.32, 40.79.168.32 | 13.70.112.32/29, 40.79.160.32/29, 40.79.168.32/29, 20.53.46.128/27 |
-| Australia Southeast | 13.77.49.33 | 13.77.49.32/29, 104.46.179.160/27 |
-| Brazil South | 191.233.201.8, 191.233.200.16 | 191.234.153.32/27, 191.234.152.32/27, 191.234.157.136/29, 191.233.200.32/29, 191.234.144.32/29, 191.234.142.160/27 |
-| Canada Central | 13.71.168.32 | 13.71.168.32/29, 20.38.144.32/29, 52.246.152.32/29, 20.48.196.32/27 |
-| Canada East | 40.69.105.32 | 40.69.105.32/29, 52.139.106.192/27 |
-| Central US | 104.208.21.192, 13.89.168.192, 52.182.136.192 | 104.208.21.192/29, 13.89.168.192/29, 52.182.136.192/29, 20.40.228.128/27 |
-| China East | 52.130.112.139 | 52.130.112.136/29, 52.130.13.96/27 |
-| China East 2 | 40.73.82.1, 52.130.120.89 | 52.130.120.88/29, 52.130.7.0/27 |
-| China East 3 | 52.130.128.89 | 52.130.128.88/29, 40.72.77.128/27 |
-| China North | 52.130.128.89 | 52.130.128.88/29, 40.72.77.128/27 |
-| China North 2 | 40.73.50.0 | 52.130.40.64/29, 52.130.21.160/27 |
-| China North 3 | 13.75.32.192, 13.75.33.192 | 13.75.32.192/29, 13.75.33.192/29 |
-| East Asia | 13.75.33.20, 13.75.33.21 | 20.205.77.176/29, 20.205.83.224/29, 20.205.77.200/29, 13.75.32.192/29, 13.75.33.192/29, 20.195.72.32/27 |
-| East US | 20.42.65.64, 20.42.73.0, 52.168.116.64 | 20.42.65.64/29, 20.42.73.0/29, 52.168.116.64/29, 20.62.132.160/27 |
-| East US 2 | 104.208.150.192, 40.70.144.192, 52.167.104.192 | 104.208.150.192/29, 40.70.144.192/29, 52.167.104.192/29, 20.62.58.128/27 |
-| France Central | 40.79.129.1 | 40.79.128.32/29, 40.79.136.32/29, 40.79.144.32/29, 20.43.47.192/27 |
-| France South | 40.79.176.40 | 40.79.176.40/29, 40.79.177.32/29, 52.136.185.0/27 |
-| Germany West Central | 51.116.152.0 | 51.116.152.32/29, 51.116.240.32/29, 51.116.248.32/29, 51.116.149.32/27 |
-| India Central | 104.211.86.32, 20.192.96.33 | 40.80.48.32/29, 104.211.86.32/29, 20.192.96.32/29, 20.192.43.160/27 |
-| India South | 40.78.192.32 | 40.78.192.32/29, 40.78.193.32/29, 52.172.113.96/27 |
-| India West | 104.211.144.32 | 104.211.144.32/29, 104.211.145.32/29, 52.136.53.160/27 |
-| Japan East | 40.79.184.8, 40.79.192.23 | 13.78.104.32/29, 40.79.184.32/29, 40.79.192.32/29, 20.191.165.160/27 |
-| Japan West | 40.74.96.6 | 20.18.179.192/29, 40.74.96.32/29, 20.189.225.160/27 |
-| Korea Central | 52.231.17.13 | 20.194.64.32/29, 20.44.24.32/29, 52.231.16.32/29, 20.194.73.64/27 |
-| Korea South | 52.231.145.3 | 52.231.151.96/27, 52.231.151.88/29, 52.231.145.0/29, 52.147.112.160/27 |
-| North Central US | 52.162.104.35, 52.162.104.36 | 52.162.105.200/29, 20.125.171.192/29, 52.162.105.192/29, 20.49.119.32/27 |
-| North Europe | 52.138.224.6, 52.138.224.7 | 13.69.233.136/29, 13.74.105.192/29, 52.138.229.72/29, 52.146.133.128/27 |
-| Norway East | 51.120.96.0 | 51.120.208.32/29, 51.120.104.32/29, 51.120.96.32/29, 51.120.232.192/27 |
-| Norway West | 51.120.216.0 | 51.120.217.32/29, 51.13.136.224/27 |
-| South Africa North | 102.133.152.0 | 102.133.120.32/29, 102.133.152.32/29, 102.133.248.32/29, 102.133.221.224/27 |
-| South Africa West | 102.133.24.0 | 102.133.25.32/29, 102.37.80.96/27 |
-| South Central US | 20.45.120.0 | 20.45.121.32/29, 20.49.88.32/29, 20.49.89.32/29, 40.124.64.136/29, 20.65.132.160/27 |
-| Southeast Asia | 23.98.80.12, 40.78.233.2 | 13.67.16.192/29, 23.98.80.192/29, 40.78.232.192/29, 20.195.65.32/27 |
-| Sweden Central | 51.12.96.32 | 51.12.96.32/29, 51.12.232.32/29, 51.12.224.32/29, 51.12.46.32/27 |
-| Sweden South | 51.12.200.32 | 51.12.201.32/29, 51.12.200.32/29, 51.12.198.32/27 |
-| Switzerland North | 51.107.56.0 | 51.107.56.32/29, 51.103.203.192/29, 20.208.19.192/29, 51.107.242.32/27 |
-| Switzerland West | 51.107.152.0 | 51.107.153.32/29, 51.107.250.64/27 |
-| UAE Central | 20.37.72.64 | 20.37.72.96/29, 20.37.73.96/29, 20.37.71.64/27 |
-| UAE North | 65.52.248.0 | 20.38.152.24/29, 40.120.72.32/29, 65.52.248.32/29, 20.38.143.64/27 |
-| UK South | 51.105.64.0 | 51.105.64.32/29, 51.105.72.32/29, 51.140.144.32/29, 51.143.209.224/27 |
-| UK West | 51.140.208.98 | 51.140.208.96/29, 51.140.209.32/29, 20.58.66.128/27 |
-| West Central US | 13.71.193.34 | 13.71.193.32/29, 20.69.0.32/27 |
-| West Europe | 13.69.105.208, 104.40.169.187 | 104.40.169.32/29, 13.69.112.168/29, 52.236.184.32/29, 20.61.99.192/27 |
-| West US | 13.86.216.212, 13.86.217.212 | 20.168.163.192/29, 13.86.217.224/29, 20.66.3.64/27 |
-| West US 2 | 13.66.136.192 | 13.66.136.192/29, 40.78.240.192/29, 40.78.248.192/29, 20.51.9.128/27 |
-| West US 3 | 20.150.184.2 | 20.150.168.32/29, 20.150.176.32/29, 20.150.184.32/29, 20.150.241.128/27 |
+| **Region name** |**Current Gateway IP address**| **Gateway IP address subnets** |
+|:-|:--|:--|
+| Australia Central | 20.36.105.32 | 20.36.105.32/29, 20.53.48.96/27 |
+| Australia Central 2 | 20.36.113.32 | 20.36.113.32/29, 20.53.56.32/27 |
+| Australia East | 13.70.112.32 | 13.70.112.32/29, 40.79.160.32/29, 40.79.168.32/29, 20.53.46.128/27 |
+| Australia Southeast | 13.77.49.33 | 13.77.49.32/29, 104.46.179.160/27 |
+| Brazil South | 191.233.201.8, 191.233.200.16 | 191.234.153.32/27, 191.234.152.32/27, 191.234.157.136/29, 191.233.200.32/29, 191.234.144.32/29, 191.234.142.160/27|
+|Brazil Southeast|191.233.48.2|191.233.48.32/29, 191.233.15.160/27|
+| Canada Central | 13.71.168.32| 13.71.168.32/29, 20.38.144.32/29, 52.246.152.32/29, 20.48.196.32/27|
+| Canada East |40.69.105.32 | 40.69.105.32/29, 52.139.106.192/27 |
+| Central US | 52.182.136.37, 52.182.136.38 | 104.208.21.192/29, 13.89.168.192/29, 52.182.136.192/29, 20.40.228.128/27|
+| China East | 52.130.112.139 | 52.130.112.136/29, 52.130.13.96/27 |
+| China East 2 | 40.73.82.1, 52.130.120.89 | 52.130.120.88/29, 52.130.7.0/27|
+| China North | 52.130.128.89| 52.130.128.88/29, 40.72.77.128/27 |
+| China North 2 |40.73.50.0 | 52.130.40.64/29, 52.130.21.160/27|
+| East Asia |13.75.33.20, 13.75.33.21 | 20.205.77.176/29, 20.205.83.224/29, 20.205.77.200/29, 13.75.32.192/29, 13.75.33.192/29, 20.195.72.32/27|
+| East US | 40.71.8.203, 40.71.83.113|20.42.65.64/29, 20.42.73.0/29, 52.168.116.64/29, 20.62.132.160/27|
+| East US 2 |52.167.105.38, 40.70.144.38| 104.208.150.192/29, 40.70.144.192/29, 52.167.104.192/29, 20.62.58.128/27|
+| France Central |40.79.129.1 | 40.79.128.32/29, 40.79.136.32/29, 40.79.144.32/29, 20.43.47.192/27 |
+| France South |40.79.176.40 | 40.79.176.40/29, 40.79.177.32/29, 52.136.185.0/27|
+| Germany North| 51.116.56.0| 51.116.57.32/29, 51.116.54.96/27|
+| Germany West Central | 51.116.152.0 | 51.116.152.32/29, 51.116.240.32/29, 51.116.248.32/29, 51.116.149.32/27|
+| India Central |20.192.96.33 | 40.80.48.32/29, 104.211.86.32/29, 20.192.96.32/29, 20.192.43.160/27|
+| India South | 40.78.192.32| 40.78.192.32/29, 40.78.193.32/29, 52.172.113.96/27|
+| India West | 104.211.144.32| 104.211.144.32/29, 104.211.145.32/29, 52.136.53.160/27|
+| Japan East | 40.79.184.8, 40.79.192.23| 13.78.104.32/29, 40.79.184.32/29, 40.79.192.32/29, 20.191.165.160/27 |
+| Japan West |40.74.96.6| 20.18.179.192/29, 40.74.96.32/29, 20.189.225.160/27 |
+| Jio India Central| 20.192.233.32|20.192.233.32/29, 20.192.48.32/27|
+| Jio India West|20.193.200.32|20.193.200.32/29, 20.192.167.224/27|
+| Korea Central | 52.231.17.13 | 20.194.64.32/29, 20.44.24.32/29, 52.231.16.32/29, 20.194.73.64/27|
+| Korea South |52.231.145.3| 52.231.151.96/27, 52.231.151.88/29, 52.231.145.0/29, 52.147.112.160/27 |
+| North Central US | 52.162.104.35, 52.162.104.36 | 52.162.105.200/29, 20.125.171.192/29, 52.162.105.192/29, 20.49.119.32/27|
+| North Europe |52.138.224.6, 52.138.224.7 |13.69.233.136/29, 13.74.105.192/29, 52.138.229.72/29, 52.146.133.128/27 |
+|Norway East|51.120.96.0|51.120.208.32/29, 51.120.104.32/29, 51.120.96.32/29, 51.120.232.192/27|
+|Norway West|51.120.216.0|51.120.217.32/29, 51.13.136.224/27|
+| South Africa North | 102.133.152.0 | 102.133.120.32/29, 102.133.152.32/29, 102.133.248.32/29, 102.133.221.224/27 |
+| South Africa West |102.133.24.0 | 102.133.25.32/29, 102.37.80.96/27|
+| South Central US | 20.45.120.0 |20.45.121.32/29, 20.49.88.32/29, 20.49.89.32/29, 40.124.64.136/29, 20.65.132.160/27|
+| Southeast Asia | 23.98.80.12, 40.78.233.2 | 13.67.16.192/29, 23.98.80.192/29, 40.78.232.192/29, 20.195.65.32/27 |
+| Sweden Central|51.12.96.32|51.12.96.32/29, 51.12.232.32/29, 51.12.224.32/29, 51.12.46.32/27|
+| Sweden South|51.12.200.32|51.12.201.32/29, 51.12.200.32/29, 51.12.198.32/27|
+| Switzerland North |51.107.56.0 |51.107.56.32/29, 51.103.203.192/29, 20.208.19.192/29, 51.107.242.32/27|
+| Switzerland West | 51.107.152.0| 51.107.153.32/29, 51.107.250.64/27|
+| UAE Central | 20.37.72.64| 20.37.72.96/29, 20.37.73.96/29, 20.37.71.64/27 |
+| UAE North |65.52.248.0 |20.38.152.24/29, 40.120.72.32/29, 65.52.248.32/29, 20.38.143.64/27 |
+| UK South | 51.105.64.0|51.105.64.32/29, 51.105.72.32/29, 51.140.144.32/29, 51.143.209.224/27|
+| UK West |51.140.208.98 |51.140.208.96/29, 51.140.209.32/29, 20.58.66.128/27 |
+| West Central US |13.71.193.34 | 13.71.193.32/29, 20.69.0.32/27 |
+| West Europe | 13.69.105.208,104.40.169.187|104.40.169.32/29, 13.69.112.168/29, 52.236.184.32/29, 20.61.99.192/27|
+| West US |13.86.216.212, 13.86.217.212 |20.168.163.192/29, 13.86.217.224/29, 20.66.3.64/27|
+| West US 2 | 13.66.136.192 | 13.66.136.192/29, 40.78.240.192/29, 40.78.248.192/29, 20.51.9.128/27|
+| West US 3 |20.150.184.2 | 20.150.168.32/29, 20.150.176.32/29, 20.150.184.32/29, 20.150.241.128/27 |
operator-nexus Howto Cluster Runtime Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-cluster-runtime-upgrade.md
To check on the status of the upgrade, observe the detailed status of the cluster
To view the upgrade status through the Azure portal, navigate to the targeted cluster resource. In the cluster's *Overview* screen, the detailed status is provided along with a detailed status message.
+The Cluster upgrade is in progress when `detailedStatus` is set to `Updating` and `detailedStatusMessage` shows the progress of the upgrade. Some examples of upgrade progress shown in `detailedStatusMessage` are `Waiting for control plane upgrade to complete...`, `Waiting for nodepool "<rack-id>" to finish upgrading...`, and so on.
+
+The Cluster upgrade is complete when `detailedStatus` is set to `Running` and `detailedStatusMessage` shows the message `Cluster is up and running`.
+ :::image type="content" source="./media/runtime-upgrade-cluster-detail-status.png" lightbox="./media/runtime-upgrade-cluster-detail-status.png" alt-text="Screenshot of Azure portal showing in progress cluster upgrade."::: To view the upgrade status through the Azure CLI, use `az networkcloud cluster show`.
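For a quick check from the command line, a minimal sketch narrows the same `az networkcloud cluster show` command down to the two status fields with the global `--query` parameter. The variable names are placeholders, and the sketch assumes the CLI returns `detailedStatus` and `detailedStatusMessage` at the top level of its JSON output:

```azurecli
# Show only the detailed status fields of the cluster (variable names are placeholders)
az networkcloud cluster show \
  --resource-group "$CLUSTER_RG" \
  --resource-name "$CLUSTER_RESOURCE_NAME" \
  --query "{status: detailedStatus, message: detailedStatusMessage}" \
  --output table
```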
Upon successful execution of the command, the updateStrategy values specified wi
During a runtime upgrade, it's possible that the upgrade fails to move forward but the detailed status reflects that the upgrade is still ongoing. **Because the runtime upgrade can take a very long time to finish successfully, there's currently no set timeout length specified.** Hence, it's advisable to periodically check your cluster's detailed status and logs to determine whether your upgrade is stuck and retrying indefinitely.
-We can identify when this is the case by looking at the Cluster's logs, detailed message, and detailed status message. If a timeout has occurred, we would observe that the Cluster is continuously reconciling over the same indefinitely and not moving forward. The Cluster's detailed status message would reflect, `"Cluster is in the process of being updated."`.
-From here, we recommend checking Cluster logs or configured LAW, to see if there's a failure, or a specific upgrade that is causing the lack of progress.
+We can identify when this is the case by looking at the Cluster's logs, detailed message, and detailed status message. If a timeout has occurred, we would observe that the Cluster is continuously reconciling indefinitely and not moving forward. From here, we recommend checking the Cluster logs or the configured LAW to see if there's a failure or a specific upgrade that is causing the lack of progress.
### Hardware Failure doesn't require Upgrade re-execution
operator-nexus Howto Configure Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-configure-cluster.md
See the article [Tracking Asynchronous Operations Using Azure CLI](./howto-track
## Cluster deployment validation
-View the status of the cluster:
+View the status of the cluster in the portal or via the Azure CLI:
```azurecli
az networkcloud cluster show --resource-group "$CLUSTER_RG" \
  --resource-name "$CLUSTER_RESOURCE_NAME"
```
+The Cluster deployment is in progress when `detailedStatus` is set to `Deploying` and `detailedStatusMessage` shows the progress of the deployment.
+Some examples of deployment progress shown in `detailedStatusMessage` are `Hardware validation is in progress.` (if the cluster is deployed with hardware validation), `Cluster is bootstrapping.`, `KCP initialization in progress.`, `Management plane deployment in progress.`, `Cluster extension deployment in progress.`, `waiting for "<rack-ids>" to be ready`, and so on.
+
+The Cluster deployment is complete when `detailedStatus` is set to `Running` and `detailedStatusMessage` shows the message `Cluster is up and running`.
+
View the management version of the cluster: ```azurecli
Cluster create Logs can be viewed in the following locations:
1. Azure portal Resource/ResourceGroup Activity logs. 2. Azure CLI with `--debug` flag passed on command-line.+
operator-nexus Howto Kubernetes Cluster Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-kubernetes-cluster-connect.md
Once you are connected to a cluster via Arc for Kubernetes, you can connect to i
The `az ssh arc` command allows users to remotely access a cluster VM that has been connected to Azure Arc. This method is a secure way to SSH into the cluster node directly from the command line, while in connected mode. Once the cluster VM has been registered with Azure Arc, the `az ssh arc` command can be used to manage the machine remotely, making it a quick and efficient method for remote management.
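As a hedged sketch, the command shape looks like the following; the resource group, machine name, and local user are placeholders and must match the Arc-enabled server resources created for your cluster nodes (the article's own steps below set these values as variables):

```azurecli
# SSH into an Arc-enabled cluster node VM (all names are placeholders)
az ssh arc \
  --resource-group "myResourceGroup" \
  --name "myClusterNodeVm" \
  --local-user "myLocalUser"
```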
-To use `az arc ssh`, users need to manually connect the cluster VMs to Arc by creating a service principal (SP) with the 'Azure Connected Machine Onboarding' role. For more detailed steps on how to connect an Azure Operator Nexus Kubernetes cluster node to Arc, refer to the [how to guide](./howto-monitor-naks-cluster.md#monitor-nexus-kubernetes-cluster--vm-layer).
- 1. Set the required variables. ```bash
operator-nexus Howto Monitor Naks Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-monitor-naks-cluster.md
The following resources provide you with support:
[!INCLUDE [dcr.sh](./includes/dcr.md)] - `assign.sh`: use the script to create a policy to associate the DCR with all Arc-enabled servers in a resource group [!INCLUDE [assign.sh](./includes/assign.md)]-- `install.sh`: Arc-enable Nexus Kubernetes cluster VMs and install Azure Monitoring Agent on each VM
+- `install.sh`: Install Azure Monitoring Agent on each VM to collect monitoring data from Azure Virtual Machines.
[!INCLUDE [install.sh](./includes/install.md)] ### Prerequisites-VM
For convenience, the provided **`assign.sh`** script assigns the built-in policy
./assign.sh ```
-#### Connect Arc-enabled servers and install Azure monitoring agent
+#### Install Azure monitoring agent
-Use the included **`install.sh`** script to Arc-enroll all server VMs that represent the nodes of the Nexus Kubernetes cluster.
-This script creates a Kubernetes daemonSet on the Nexus Kubernetes cluster.
-It deploys a pod to each cluster node, connecting each VM to Arc-enabled servers and installing the Azure Monitoring Agent (AMA).
+Use the included **`install.sh`** script, which creates a Kubernetes daemonSet on the Nexus Kubernetes cluster.
+It deploys a pod to each cluster node and installs the Azure Monitoring Agent (AMA).
The `daemonSet` also includes a liveness probe that monitors the server connection and AMA processes.
+> [!NOTE]
+> To install the Azure Monitoring Agent, you must first connect the Nexus Kubernetes cluster VMs to Arc. This process is automated if you're using the latest version bundle. However, if the version bundle you use doesn't support cluster VM Arc enrollment by default, you need to upgrade your cluster to the latest version bundle. For more information about version bundles, see [Nexus Kubernetes cluster supported versions](reference-nexus-kubernetes-cluster-supported-versions.md).
1. Set the environment as specified in [Environment Setup](#environment-setup). Set the current `kubeconfig` context for the Nexus Kubernetes cluster VMs. 2. Permit `Kubectl` access to the Nexus Kubernetes cluster.
kubectl logs <podname>
``` On completion, the system logs the message "Server monitoring configured successfully".
-At that point, the Arc-enabled servers appear as resources within the selected resource group.
> [!NOTE] > Associate these connected servers to the [DCR](#associate-arc-enabled-server-resources-to-dcr).
Validate the successful deployment of monitoring agents' enablement on Nexus K
az k8s-extension show --name azuremonitor-containers \ --cluster-name "<Nexus Kubernetes cluster Name>" \ --resource-group "<Nexus Kubernetes cluster Resource Group>" \
- --cluster-type conectedClusters
+ --cluster-type connectedClusters
``` Look for a Provisioning State of "Succeeded" for the extension. The "k8s-extension create" command may have also returned the status.
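To surface only the provisioning state, the same command can be narrowed with the global `--query` parameter. A minimal sketch, assuming the extension output exposes a top-level `provisioningState` field:

```azurecli
# Print only the provisioning state of the monitoring extension
az k8s-extension show --name azuremonitor-containers \
    --cluster-name "<Nexus Kubernetes cluster Name>" \
    --resource-group "<Nexus Kubernetes cluster Resource Group>" \
    --cluster-type connectedClusters \
    --query "provisioningState" --output tsv
```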
operator-nexus Reference Nexus Kubernetes Cluster Supported Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/reference-nexus-kubernetes-cluster-supported-versions.md
Note the following important changes to make before you upgrade to any of the av
| 1.25.6 | 1 | Calico v3.24.0<br>metrics-server v0.6.3<br>Multus v3.8.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.5.1 | Azure Linux 2.0 | No breaking changes | |
| 1.25.6 | 2 | Calico v3.26.1<br>metrics-server v0.6.3<br>Multus v3.8.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.7.0-48 | Azure Linux 2.0 | No breaking changes | |
| 1.25.6 | 3 | Calico v3.26.1<br>metrics-server v0.6.3<br>Multus v3.8.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.7.0-48 | Azure Linux 2.0 | No breaking changes | |
+| 1.25.6 | 4 | Calico v3.26.1<br>metrics-server v0.6.3<br>Multus v3.8.0<br>azure-arc-servers v1.0.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.7.0-48 | Azure Linux 2.0 | No breaking changes | Cluster nodes are Azure Arc-enabled |
| 1.25.11 | 1 | Calico v3.26.1<br>metrics-server v0.6.3<br>Multus v3.8.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.7.0-48 | Azure Linux 2.0 | No breaking changes | |
+| 1.25.11 | 2 | Calico v3.26.1<br>metrics-server v0.6.3<br>Multus v3.8.0<br>azure-arc-servers v1.0.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.7.0-48 | Azure Linux 2.0 | No breaking changes | Cluster nodes are Azure Arc-enabled |
| 1.26.3 | 1 | Calico v3.24.0<br>metrics-server v0.6.3<br>Multus v3.8.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.5.1 | Azure Linux 2.0 | No breaking changes | |
| 1.26.3 | 2 | Calico v3.26.1<br>metrics-server v0.6.3<br>Multus v3.8.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.7.0-48 | Azure Linux 2.0 | No breaking changes | |
| 1.26.3 | 3 | Calico v3.26.1<br>metrics-server v0.6.3<br>Multus v3.8.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.7.0-48 | Azure Linux 2.0 | No breaking changes | |
+| 1.26.3 | 4 | Calico v3.26.1<br>metrics-server v0.6.3<br>Multus v3.8.0<br>azure-arc-servers v1.0.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.7.0-48 | Azure Linux 2.0 | No breaking changes | Cluster nodes are Azure Arc-enabled |
| 1.26.6 | 1 | Calico v3.26.1<br>metrics-server v0.6.3<br>Multus v3.8.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.7.0-48 | Azure Linux 2.0 | No breaking changes | |
+| 1.26.6 | 2 | Calico v3.26.1<br>metrics-server v0.6.3<br>Multus v3.8.0<br>azure-arc-servers v1.0.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.7.0-48 | Azure Linux 2.0 | No breaking changes | Cluster nodes are Azure Arc-enabled |
| 1.27.1 | 1 | Calico v3.24.0<br>metrics-server v0.6.3<br>Multus v3.8.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.5.1 | Azure Linux 2.0 | Cgroupv2 | Steps to disable cgroupv2 can be found [here](./howto-disable-cgroupsv2.md) |
| 1.27.1 | 2 | Calico v3.26.1<br>metrics-server v0.6.3<br>Multus v3.8.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.7.0-48 | Azure Linux 2.0 | Cgroupv2 | Steps to disable cgroupv2 can be found [here](./howto-disable-cgroupsv2.md) |
| 1.27.1 | 3 | Calico v3.26.1<br>metrics-server v0.6.3<br>Multus v3.8.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.7.0-48 | Azure Linux 2.0 | Cgroupv2 | Steps to disable cgroupv2 can be found [here](./howto-disable-cgroupsv2.md) |
+| 1.27.1 | 4 | Calico v3.26.1<br>metrics-server v0.6.3<br>Multus v3.8.0<br>azure-arc-servers v1.0.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.7.0-48 | Azure Linux 2.0 | No breaking changes | Cluster nodes are Azure Arc-enabled |
| 1.27.3 | 1 | Calico v3.26.1<br>metrics-server v0.6.3<br>Multus v3.8.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.7.0-48 | Azure Linux 2.0 | Cgroupv2 | Steps to disable cgroupv2 can be found [here](./howto-disable-cgroupsv2.md) |
+| 1.27.3 | 2 | Calico v3.26.1<br>metrics-server v0.6.3<br>Multus v3.8.0<br>azure-arc-servers v1.0.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.7.0-48 | Azure Linux 2.0 | No breaking changes | Cluster nodes are Azure Arc-enabled |
| 1.28.0 | 1 | Calico v3.26.1<br>metrics-server v0.6.3<br>Multus v3.8.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.7.0-48 | Azure Linux 2.0 | No breaking changes | |
+| 1.28.0 | 2 | Calico v3.26.1<br>metrics-server v0.6.3<br>Multus v3.8.0<br>azure-arc-servers v1.0.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5<br>sriov-dp v3.7.0-48 | Azure Linux 2.0 | No breaking changes | Cluster nodes are Azure Arc-enabled |
## Upgrading Kubernetes versions
private-multi-access-edge-compute-mec Partner Programs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-multi-access-edge-compute-mec/partner-programs.md
Microsoft is collaborating with leading third-party providers to develop a robust partner ecosystem across all layers of the value chain. This ecosystem enables Azure private multi-access edge compute (MEC) to support diverse use cases and requirements, enabling partners to add unique value to enterprise customers.

## Azure private MEC ecosystem
-Azure private MEC solution partners include technology partners, application ISVs, and system integrators.
+Azure private MEC solution partners include technology partners, application independent software vendors (ISVs), and system integrators.
- **Technology Partners** bring critical hardware and software components such as network functions, Radio Access Network (RAN) technologies, and SIMs to the Azure private MEC ecosystem. Customers can mix and match these components to meet their requirements.-- **System Integrators and Operators** are responsible for planning, deployment, and operation of a customerΓÇÖs Azure private MEC implementation. These providers bring assets and expertise such as spectrum, RF planning, installation, maintenance, and support. System Integrators and Operators enable customers to rapidly deploy the Azure private MEC solution without requiring in-house expertise in complexities surrounding mobile network technologies.
+- **System Integrators and Operators** are responsible for planning, deployment, and operation of a customer's Azure private MEC implementation. These providers bring assets and expertise such as spectrum, radio frequency (RF) planning, installation, maintenance, and support. System Integrators and Operators enable customers to rapidly deploy the Azure private MEC solution without requiring in-house expertise in complexities surrounding mobile network technologies.
- **Application ISV Partners** bring ready to deploy software solutions built for Azure private MEC. These applications use the low latency edge computing capabilities of Azure private MEC to enable a customerΓÇÖs specific use-cases within industries, including manufacturing, energy, and transportation. ### System Integrators (SIs)
Our operator partners include:
Azure private MEC technology partners provide critical hardware and software components including network functions, Radio Access Network (RAN) technologies, and SIM services. ### Networking ISVs
-Networking ISV partners include software vendors that provide network functions such as firewalls and SD-WAN. The breadth of third-party network functions available enable customers to securely integrate the Azure private MEC solution into their existing edge and cloud environments. The Azure private MEC current network function partners include:
+Networking independent software vendor (ISV) partners include software vendors that provide network functions such as firewalls and SD-WAN. The breadth of third-party network functions available enable customers to securely integrate the Azure private MEC solution into their existing edge and cloud environments. The Azure private MEC current network function partners include:
|Firewall |SD-WAN | ||| | [Palo Alto Networks](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/paloaltonetworks.vmseries-ngfw-vm-edge-panos-10-2-4?tab=Overview) | [NetFoundry](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/netfoundryinc.application-ziti-private-edge?exp=ubp8&tab=Overview) |
-| | [VMware SD-WAN by Velocloud](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/vmware-inc.vmware_sdwan_edge_zones?exp=ubp8&tab=Overview) |
+|[Trend Micro](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/trendmicro.mobile-network-security?tab=Overview) | [VMware SD-WAN by Velocloud](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/vmware-inc.vmware_sdwan_edge_zones?exp=ubp8&tab=Overview) |
| | [Versa Networks](https://aka.ms/versa) | ### SIM & RAN
SIM partners provide wireless authentication technologies and embedded cellular
### Application ISVs
-Microsoft partners with Application ISVs to make their software available through the Azure Marketplace. Our ISVs partners use a combination of private 5G and edge compute capabilities to create new experiences for customers.
+Microsoft partners with Application ISVs to make their software available through the Azure Marketplace. Our ISV partners use a combination of private 5G and edge compute capabilities to create new experiences for customers.
Applications that run on supported platforms can also deploy to Azure private MEC with few code changes required. This means that application ISV solutions for Azure Stack Edge, and Azure IoT Edge can also run on the Azure private MEC solution. Our application ISV partners include: - [Cognitiwe](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/cognitiweaio1670399502095.cognitiwe_hse_v1?exp=ubp8&tab=Overview)--[Fractal Analytics](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/neal_analytics.stockview-retail?tab=Overview)--[Glartek](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/glarevisionsa1698227199975.glartek?tab=Overview)
+- [Fractal Analytics](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/neal_analytics.stockview-retail?tab=Overview)
+- [Glartek](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/glarevisionsa1698227199975.glartek?tab=Overview)
- [iLink Systems](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/ilinksystems.samplemidasvm?exp=ubp8&tab=Overview)--[inVia Robotics](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/inviaroboticsinc1629911110634.inviarobotics1?tab=Overview)
+- [inVia Robotics](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/inviaroboticsinc1629911110634.inviarobotics1?tab=Overview)
- [Ipsotek](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/atosinternationalsas.ipsotek_vi_suite_bundles?exp=ubp8&tab=Overview)--[Nsion](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/nsionltd1591969784743.nsc3_saas?tab=Overview)--[MEEP](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/eepadvancedenterprisecommunicationltd1676190998651.synch-ptt?tab=overview)
+- [Nsion](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/nsionltd1591969784743.nsc3_saas?tab=Overview)
+- [MEEP](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/eepadvancedenterprisecommunicationltd1676190998651.synch-ptt?tab=overview)
- [Red Viking](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/redviking1587070336894.rv_argonaut_on_mec?exp=ubp8&tab=Overview)--[Scenera](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/scenerainc1695952178961.scenera-maistro-saas-1?tab=Overview)
+- [Scenera](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/scenerainc1695952178961.scenera-maistro-saas-1?tab=Overview)
- [Sensing Feeling](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/sensingfeelinglimited1671143541932.001?exp=ubp8) -[Tampnet](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/tampnetas1686124551117.azure_tampnet_private_network?tab=Overview) - Taqtile--[Trend Micro](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/trendmicro.mobile-network-security?tab=Overview)--[Trilogy Networks](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/trilogynetworksinc1688507869081.farmgrid-preview?tab=Overview&flightCodes=dec2dcd1-ef23-41d8-bf58-ce0c9d9b17c1)--[Unmanned Life](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/unmanned_life.robot-orchestration?tab=Overview)
+- [Trilogy Networks](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/trilogynetworksinc1688507869081.farmgrid-preview?tab=Overview&flightCodes=dec2dcd1-ef23-41d8-bf58-ce0c9d9b17c1)
+- [Unmanned Life](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/unmanned_life.robot-orchestration?tab=Overview)
- [Weavix](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/pksolutionsllc1654260389042.smart_radio_36_ms?exp=ubp8&tab=Overview)--[Zebra](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/zebratechnologiescorporation1702409620263.offer_2?tab=Overview)
+- [Zebra](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/zebratechnologiescorporation1702409620263.offer_2?tab=Overview)
## Next steps - To partner with Microsoft and deploy Azure private MEC solutions:
reliability Availability Zones Service Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/availability-zones-service-support.md
The following regions currently support availability zones:
| West US 2 | Sweden Central | | | |
| West US 3 | Switzerland North | | | |
| Mexico Central* | Poland Central ||||
+||Spain Central* ||||
+
reliability Reliability Microsoft Purview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-microsoft-purview.md
Microsoft Purview makes commercially reasonable efforts to provide availability
[!INCLUDE [next step](includes/reliability-disaster-recovery-description-include.md)]
-There's some key information to consider upfront:
+>[!IMPORTANT]
+>Today, Microsoft Purview doesn't support automated disaster recovery. Until that support is added, you're responsible for backup and restore activities. You can manually create a secondary Microsoft Purview account as a warm standby instance in another region.
-- It isn't advisable to back up "scanned" assets' details. You should only back up the curated data such as mapping of classifications and glossaries on assets. The only case when you need to back up assets' details is when you have custom assets via custom `typeDef`.--- The backed-up asset count should be fewer than 100,000 assets. The main driver is that you have to use the search query API to get the assets, which have limitation of 100,000 assets returned. However, if you're able to segment the search query to get smaller number of assets per API call, it's possible to back up more than 100,000 assets.--- The goal is to perform one time migration. If you wish to continuously "sync" assets between two accounts, there are other steps that won't be covered in detail by this article. You have to use [Microsoft Purview's Event Hubs to subscribe and create entities to another account](/purview/manage-kafka-dotnet). However, Event Hubs only has Atlas information. Microsoft Purview has added other capabilities such as **glossaries** and **contacts** which won't be available via Event Hubs.-
-### Identify key requirements
-
-Most of enterprise organizations have critical requirement for Microsoft Purview for capabilities such as Backup, Business Continuity, and Disaster Recovery (BCDR). To get into more details of this requirement, you need to differentiate between Backup, High Availability (HA), and Disaster recovery (DR).
-
-While they're similar, HA keeps the service operational if there was a hardware fault, for example, but it wouldn't protect you if someone accidentally or deliberately deleted all the records in your database. For that, you might need to restore the service from a backup.
-
-### Backup
-
-You might need to create regular backups from a Microsoft Purview account and use a backup in case a piece of data or configuration is accidentally or deliberately deleted from the Microsoft Purview account by the users.
-
-The backup should allow saving a point in time copy of the following configurations from the Microsoft Purview account:
--- Account information (for example, Friendly name)-- Collection structure and role assignments-- Custom Scan rule sets, classifications, and classification rules-- Registered data sources-- Scan information-- Create and maintain key vaults connections-- Key vault connections and Credentials and relations with current scans-- Registered SHIRs-- Glossary terms templates-- Glossary terms-- Manual asset updates (including classification and glossary assignments)-- ADF and Synapse connections and lineage-
-Backup strategy is determined by restore strategy, or more specifically how long it will take to restore things when a disaster occurs. To answer that, you might need to engage with the affected stakeholders (the business owners) and understand what the required recovery objectives are.
-
-There are three main requirements to take into consideration:
--- **Recover Time Objective (RTO)** ΓÇô Defines the maximum allowable downtime following a disaster for which ideally the system should be back operational.-- **Recovery Point Objective (RPO)** ΓÇô Defines the acceptable amount of data loss that is ok following a disaster. Normally RPO is expressed as a timeframe in hours or minutes.-- **Recovery Level Object (RLO)** ΓÇô Defines the granularity of the data being restored. It could be a SQL server, a set of databases, tables, records, etc.-
-To implement disaster recovery for Microsoft Purview, see the [Microsoft Purview disaster recovery documentation.](/purview/concept-best-practices-migration#implementation-steps)
+To implement disaster recovery for Microsoft Purview, see the [Microsoft Purview disaster recovery documentation](/purview/disaster-recovery).
## Next steps
sap High Availability Guide Rhel With Dialog Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-rhel-with-dialog-instance.md
Title: Deploy SAP Dialog Instance with SAP ASCS/SCS high availability VMs on RHEL | Microsoft Docs
-description: Configure SAP Dialog Instance on SAP ASCS/SCS high availability VMs on RHEL
+ Title: Deploy SAP dialog instances with SAP ASCS/SCS high-availability VMs on RHEL | Microsoft Docs
+description: Configure SAP dialog instances on SAP ASCS/SCS high-availability VMs on RHEL.
Last updated 01/21/2024
-# Deploy SAP Dialog Instances with SAP ASCS/SCS high availability VMs on Red Hat Enterprise Linux
+# Deploy SAP dialog instances with SAP ASCS/SCS high-availability VMs on RHEL
-This article describes how to install and configure Primary Application Server (PAS) and Additional Application Server (AAS) dialog instance on the same SAP ASCS/SCS high availability cluster running on Red Hat Enterprise Linux (RHEL).
+This article describes how to install and configure Primary Application Server (PAS) and Additional Application Server (AAS) dialog instances on the same ABAP SAP Central Services (ASCS)/SAP Central Services (SCS) high-availability cluster running on Red Hat Enterprise Linux (RHEL).
## References * [Configuring SAP S/4HANA ASCS/ERS with Standalone Enqueue Server 2 (ENSA2) in Pacemaker](https://access.redhat.com/articles/3974941) * [Configuring SAP NetWeaver ASCS/ERS ENSA1 with Standalone Resources in RHEL 7.5+ and RHEL 8](https://access.redhat.com/articles/3569681) * SAP Note [1928533](https://launchpad.support.sap.com/#/notes/1928533), which has:
- * List of Azure VM sizes that are supported for the deployment of SAP software
- * Important capacity information for Azure VM sizes
- * Supported SAP software, and operating system (OS) and database combinations
- * Required SAP kernel version for Windows and Linux on Microsoft Azure
+ * A list of Azure virtual machine (VM) sizes that are supported for the deployment of SAP software.
+ * Important capacity information for Azure VM sizes.
+ * Supported SAP software and operating system (OS) and database combinations.
+ * Required SAP kernel version for Windows and Linux on Azure.
* SAP Note [2015553](https://launchpad.support.sap.com/#/notes/2015553) lists prerequisites for SAP-supported SAP software deployments in Azure.
-* SAP Note [2002167](https://launchpad.support.sap.com/#/notes/2002167) has recommended OS settings for Red Hat Enterprise Linux 7.x
-* SAP Note [2772999](https://launchpad.support.sap.com/#/notes/2772999) has recommended OS settings for Red Hat Enterprise Linux 8.x
-* SAP Note [2009879](https://launchpad.support.sap.com/#/notes/2009879) has SAP HANA Guidelines for Red Hat Enterprise Linux
+* SAP Note [2002167](https://launchpad.support.sap.com/#/notes/2002167) lists the recommended OS settings for Red Hat Enterprise Linux 7.x.
+* SAP Note [2772999](https://launchpad.support.sap.com/#/notes/2772999) lists the recommended OS settings for Red Hat Enterprise Linux 8.x.
+* SAP Note [2009879](https://launchpad.support.sap.com/#/notes/2009879) has SAP HANA guidelines for Red Hat Enterprise Linux.
* SAP Note [2178632](https://launchpad.support.sap.com/#/notes/2178632) has detailed information about all monitoring metrics reported for SAP in Azure. * SAP Note [2191498](https://launchpad.support.sap.com/#/notes/2191498) has the required SAP Host Agent version for Linux in Azure. * SAP Note [2243692](https://launchpad.support.sap.com/#/notes/2243692) has information about SAP licensing on Linux in Azure.
-* SAP Note [1999351](https://launchpad.support.sap.com/#/notes/1999351) has additional troubleshooting information for the Azure Enhanced Monitoring Extension for SAP.
+* SAP Note [1999351](https://launchpad.support.sap.com/#/notes/1999351) has more troubleshooting information for the Azure Enhanced Monitoring Extension for SAP.
* [SAP Community Wiki](https://wiki.scn.sap.com/wiki/display/HOME/SAPonLinuxNotes) has all required SAP Notes for Linux. * [Azure Virtual Machines planning and implementation for SAP on Linux](planning-guide.md) * [Azure Virtual Machines deployment for SAP on Linux](deployment-guide.md) * [Azure Virtual Machines DBMS deployment for SAP on Linux](dbms-guide-general.md)
-* [SAP Netweaver in pacemaker cluster](https://access.redhat.com/articles/3150081)
-* General RHEL documentation
- * [High Availability Add-On Overview](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/high_availability_add-on_overview/index)
- * [High Availability Add-On Administration](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/high_availability_add-on_administration/index)
- * [High Availability Add-On Reference](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/high_availability_add-on_reference/index)
+* [SAP Netweaver in Pacemaker cluster](https://access.redhat.com/articles/3150081)
+* General RHEL documentation:
+ * [High-Availability Add-On Overview](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/high_availability_add-on_overview/index)
+ * [High-Availability Add-On Administration](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/high_availability_add-on_administration/index)
+ * [High-Availability Add-On Reference](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/high_availability_add-on_reference/index)
* Azure-specific RHEL documentation:
- * [Support Policies for RHEL High Availability Clusters - Microsoft Azure Virtual Machines as Cluster Members](https://access.redhat.com/articles/3131341)
+ * [Support Policies for RHEL High-Availability Clusters - Microsoft Azure Virtual Machines as Cluster Members](https://access.redhat.com/articles/3131341)
* [Installing and Configuring a Red Hat Enterprise Linux 7.4 (and later) High-Availability Cluster on Microsoft Azure](https://access.redhat.com/articles/3252491) ## Overview
-This article describes the cost optimization scenario where you deploy Primary Application Server (PAS) and Additional Application Server (AAS) dialog instances with SAP ASCS/SCS and SAP ERS instances in high availability setup. To minimize the number of VMs for a single SAP system, you want to install PAS and AAS on the same host where SAP ASCS/SCS and SAP ERS are running. With SAP ASCS/SCS being configured in high availability cluster setup, you want PAS and AAS to be managed by cluster as well. The configuration is basically an addition to already configured SAP ASCS/SCS cluster setup. In this setup PAS and AAS will be installed on a virtual hostname and its instance directory is managed by the cluster.
+This article describes the cost optimization scenario where you deploy PAS and AAS dialog instances with SAP ASCS/SCS and Enqueue Replication Server (ERS) instances in a high-availability setup. To minimize the number of VMs for a single SAP system, you want to install PAS and AAS on the same host where SAP ASCS/SCS and SAP ERS are running. With SAP ASCS/SCS being configured in a high-availability cluster setup, you want PAS and AAS also to be managed by the cluster. The configuration is basically an addition to an already configured SAP ASCS/SCS cluster setup. In this setup, PAS and AAS are installed on virtual host names, and their instance directories are managed by the cluster.
-For this setup, PAS and AAS require a highly available instance directory (`/usr/sap/<SID>/D<nr>`). You can place the instance directory filesystem on the same high available storage that you've used for ASCS and ERS instance configuration. The presented architecture showcases [NFS on Azure Files](../../storage/files/files-nfs-protocol.md) or [Azure NetApp Files](../../azure-netapp-files/azure-netapp-files-introduction.md) for highly available instance directory for the setup.
+For this setup, PAS and AAS require a highly available instance directory (`/usr/sap/<SID>/D<nr>`). You can place the instance directory file system on the same highly available storage that you used for the ASCS and ERS instance configuration. The presented architecture showcases [NFS on Azure Files](../../storage/files/files-nfs-protocol.md) or [Azure NetApp Files](../../azure-netapp-files/azure-netapp-files-introduction.md) for a highly available instance directory for the setup.
-The example shown in this article to describe deployment uses following system information -
+The example shown in this article to describe deployment uses the following system information:
-| Instance name | Instance number | Virtual hostname | Virtual IP (Probe Port) |
+| Instance name | Instance number | Virtual host name | Virtual IP (Probe port) |
| -- | | - | -- | | ABAP SAP Central Services (ASCS) | 00 | sapascs | 10.90.90.10 (62000) | | Enqueue Replication Server (ERS) | 01 | sapers | 10.90.90.9 (62001) |
The example shown in this article to describe deployment uses following system i
| SAP system identifier | NW1 | | | > [!NOTE]
->
-> Install additional SAP application instances on separate VMs, if you want to scale out.
+> Install more SAP application instances on separate VMs if you want to scale out.
-![Architecture of dialog instance installation with SAP ASCS/SCS cluster](media/high-availability-guide-rhel/high-availability-rhel-dialog-instance-architecture.png)
+![Diagram that shows the architecture of dialog instance installation with an SAP ASCS/SCS cluster.](media/high-availability-guide-rhel/high-availability-rhel-dialog-instance-architecture.png)
-### Important consideration for the cost optimization solution
+### Important considerations for the cost-optimization solution
-* Only two dialog instances, PAS and one AAS can be deployed with SAP ASCS/SCS cluster setup.
-* If you want to scale out your SAP system with additional application servers (like **sapa03** and **sapa04**), you can install them in separate VMs. With PAS and AAS being installed on virtual hostnames, you can either install additional application server using physical or virtual hostname in separate VMs. To learn more on how to assign virtual hostname to a VM, refer to the blog [Use SAP Virtual Host Names with Linux in Azure](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/use-sap-virtual-host-names-with-linux-in-azure/ba-p/3251593).
-* With PAS and AAS deployment with SAP ASCS/SCS cluster setup, the instance number of ASCS, ERS, PAS and AAS must be different.
-* Consider sizing your VM SKUs appropriately based on the sizing guidelines. You have to factor in the cluster behavior where multiple SAP instances (ASCS, ERS, PAS and AAS) may run on a single VM when other VM in the cluster is unavailable.
-* The dialog instances (PAS and AAS) running with SAP ASCS/SCS cluster setup must be installed using virtual hostname.
-* You must use the same storage solution of SAP ASCS/SCS cluster setup to deploy PAS and AAS instances as well. For example, if you have configured SAP ASCS/SCS cluster using NFS on Azure files, same storage solution must be used to deploy PAS and AAS.
-* Instance directory `/usr/sap/<SID>/D<nr>` of PAS and AAS must be mounted on NFS file system and will be managed as resource by the cluster.
+* Only two dialog instances, PAS and one AAS, can be deployed with an SAP ASCS/SCS cluster setup.
+* If you want to scale out your SAP system with more application servers (like **sapa03** and **sapa04**), you can install them in separate VMs. With PAS and AAS being installed on virtual host names, you can install more application servers by using either a physical or virtual host name in separate VMs. To learn more about how to assign a virtual host name to a VM, see the blog [Use SAP Virtual Host Names with Linux in Azure](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/use-sap-virtual-host-names-with-linux-in-azure/ba-p/3251593).
+* With a PAS and AAS deployment with an SAP ASCS/SCS cluster setup, the instance numbers of ASCS, ERS, PAS, and AAS must be different.
+* Consider sizing your VM SKUs appropriately based on the sizing guidelines. You must factor in the cluster behavior where multiple SAP instances (ASCS, ERS, PAS, and AAS) might run on a single VM when another VM in the cluster is unavailable.
+* The dialog instances (PAS and AAS) running with an SAP ASCS/SCS cluster setup must be installed by using a virtual host name.
+* You also must use the same storage solution of the SAP ASCS/SCS cluster setup to deploy PAS and AAS instances. For example, if you configured an SAP ASCS/SCS cluster by using NFS on Azure Files, the same storage solution must be used to deploy PAS and AAS.
+* The instance directory `/usr/sap/<SID>/D<nr>` of PAS and AAS must be mounted on an NFS file system and is managed as a resource by the cluster.
> [!NOTE]
- >
> For SAP J2EE systems, it's not supported to place `/usr/sap/<SID>/J<nr>` on NFS on Azure Files.
-* To install additional application servers on separate VMs, you can either use NFS shares or local managed disk for instance directory filesystem. If you're installing additional application servers for SAP J2EE system, `/usr/sap/<SID>/J<nr>` on NFS on Azure Files isn't supported.
-* In traditional SAP ASCS/SCS high availability configuration, application server instances running on separate VMs aren't affected when there's any effect on SAP ASCS and ERS cluster nodes. But with the cost optimization configuration, either PAS or AAS instance will restart when there's an effect on one of the nodes in the cluster.
-* Refer [NFS on Azure Files consideration](high-availability-guide-rhel-nfs-azure-files.md#important-considerations-for-nfs-on-azure-files-shares) and [Azure NetApp Files consideration](high-availability-guide-rhel-netapp-files.md#important-considerations), as same consideration applies for this setup as well.
+* To install more application servers on separate VMs, you can either use NFS shares or a local managed disk for an instance directory file system. If you're installing more application servers for an SAP J2EE system, `/usr/sap/<SID>/J<nr>` on NFS on Azure Files isn't supported.
+* In a traditional SAP ASCS/SCS high-availability configuration, application server instances running on separate VMs aren't affected when there's any effect on SAP ASCS and ERS cluster nodes. But with the cost-optimization configuration, either the PAS or AAS instance restarts when there's an effect on one of the nodes in the cluster.
+* See [NFS on Azure Files considerations](high-availability-guide-rhel-nfs-azure-files.md#important-considerations-for-nfs-on-azure-files-shares) and [Azure NetApp Files considerations](high-availability-guide-rhel-netapp-files.md#important-considerations) because the same considerations apply to this setup.
-## Pre-requisites
+## Prerequisites
-The configuration described in this article is an addition to your already configured SAP ASCS/SCS cluster setup. In this configuration, PAS and AAS will be installed on a virtual hostname and its instance directory is managed by the cluster. Based on your storage, follow the steps described in below guide to configure `SAPInstance` resource for SAP ASCS and SAP ERS instance in the cluster.
+The configuration described in this article is an addition to your already configured SAP ASCS/SCS cluster setup. In this configuration, PAS and AAS are installed on virtual host names, and their instance directories are managed by the cluster. Based on your storage, use the steps described in the following articles to configure the `SAPInstance` resource for the SAP ASCS and SAP ERS instances in the cluster.
-* NFS on Azure Files - [Azure VMs high availability for SAP NW on RHEL with NFS on Azure Files](high-availability-guide-rhel-nfs-azure-files.md)
-* Azure NetApp Files - [Azure VMs high availability for SAP NW on RHEL with Azure NetApp Files](high-availability-guide-rhel-netapp-files.md)
+* **NFS on Azure Files**: [Azure VMs high availability for SAP NW on RHEL with NFS on Azure Files](high-availability-guide-rhel-nfs-azure-files.md)
+* **Azure NetApp Files**: [Azure VMs high availability for SAP NW on RHEL with Azure NetApp Files](high-availability-guide-rhel-netapp-files.md)
-Once you have installed **ASCS**, **ERS** and **Database** instance using SWPM, follow below steps to install PAS and AAS instances.
+After you install the **ASCS**, **ERS**, and **Database** instance by using Software Provisioning Manager (SWPM), follow the next steps to install the PAS and AAS instances.
## Configure Azure Load Balancer for PAS and AAS
-This document assumes that you already configured the load balancer for SAP ASCS/SCS cluster setup as described in [configure Azure load balancer](./high-availability-guide-rhel-nfs-azure-files.md#configure-azure-load-balancer). In the same Azure load balancer, follow below steps to create additional frontend IPs and load balancing rules for PAS and AAS.
-
-1. Open the internal load balancer that was created for SAP ASCS/SCS cluster setup.
-2. Frontend IP Configuration: Create two frontend IP, one for PAS and another for AAS (for example: 10.90.90.30 and 10.90.90.31).
-3. Backend Pool: Backend Pool remains same, as we're deploying PAS and AAS on the same backend pool.
-4. Inbound rules: Create two load balancing rule, one for PAS and another for AAS. Follow the same steps for both load balancing rules.
-5. Frontend IP address: Select frontend IP
- 1. Backend pool: Select backend pool
- 2. Check "High availability ports"
- 3. Protocol: TCP
- 4. Health Probe: Create health probe with below details (applies for both PAS and AAS)
- 1. Protocol: TCP
- 2. Port: [for example: 620<Instance-no.> for PAS, 620<Instance-no.> for AAS]
- 3. Interval: 5
- 4. Probe Threshold: 2
- 5. Idle timeout (minutes): 30
- 6. Check "Enable Floating IP"
-
-> [!NOTE]
-> Health probe configuration property numberOfProbes, otherwise known as "Unhealthy threshold" in Portal, isn't respected. So to control the number of successful or failed consecutive probes, set the property "probeThreshold" to 2. It is currently not possible to set this property using Azure portal, so use either the [Azure CLI](/cli/azure/network/lb/probe) or [PowerShell](/powershell/module/az.network/new-azloadbalancerprobeconfig) command.
+This article assumes that you already configured the load balancer for the SAP ASCS/SCS cluster setup as described in [Configure Azure Load Balancer](./high-availability-guide-rhel-nfs-azure-files.md#configure-azure-load-balancer). In the same Azure Load Balancer instance, follow these steps to create more front-end IPs and load-balancing rules for PAS and AAS.
+
+1. Open the internal load balancer that was created for the SAP ASCS/SCS cluster setup.
+1. **Frontend IP Configuration**: Create two front-end IPs, one for PAS and another for AAS (for example, **10.90.90.30** and **10.90.90.31**).
+1. **Backend Pool**: This pool remains the same because we're deploying PAS and AAS on the same back-end pool.
+1. **Inbound rules**: Create two load-balancing rules, one for PAS and another for AAS. Follow the same steps for both load-balancing rules.
+1. **Frontend IP address**: Select the front-end IP.
+ 1. **Backend pool**: Select the back-end pool.
+ 1. **High availability ports**: Select this option.
+ 1. **Protocol**: Select **TCP**.
+ 1. **Health Probe**: Create a health probe with the following details (applies for both PAS and AAS):
+ 1. **Protocol**: Select **TCP**.
+ 1. **Port**: For example, **620<Instance-no.>** for PAS and **620<Instance-no.>** for AAS.
+ 1. **Interval**: Enter **5**.
+ 1. **Probe Threshold**: Enter **2**.
+ 1. **Idle timeout (minutes)**: Enter **30**.
+ 1. **Enable Floating IP**: Select this option.
+
+The health probe configuration property `numberOfProbes`, otherwise known as **Unhealthy threshold** in the Azure portal, isn't respected. To control the number of successful or failed consecutive probes, set the property `probeThreshold` to `2`. It's currently not possible to set this property by using the Azure portal. Use either the [Azure CLI](/cli/azure/network/lb/probe) or the [PowerShell](/powershell/module/az.network/new-azloadbalancerprobeconfig) command.
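As a hedged sketch, the probe threshold can be set on an existing probe with the Azure CLI; the resource group, load balancer, and probe names are placeholders, and the sketch assumes a recent CLI version that supports the `--probe-threshold` parameter:

```azurecli
# Set probeThreshold to 2 on an existing health probe (names are placeholders)
az network lb probe update \
  --resource-group MyResourceGroup \
  --lb-name MyInternalLoadBalancer \
  --name MyPasHealthProbe \
  --probe-threshold 2
```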
> [!IMPORTANT]
-> Floating IP is not supported on a NIC secondary IP configuration in load-balancing scenarios. For details see [Azure Load balancer Limitations](../../load-balancer/load-balancer-multivip-overview.md#limitations). If you need additional IP address for the VM, deploy a second NIC.
+> Floating IP isn't supported on a NIC secondary IP configuration in load-balancing scenarios. For more information, see [Azure Load Balancer limitations](../../load-balancer/load-balancer-multivip-overview.md#limitations). If you need more IP addresses for the VMs, deploy a second NIC.
-> [!NOTE]
-> When VMs without public IP addresses are placed in the backend pool of internal (no public IP address) Standard Azure load balancer, there will be no outbound internet connectivity, unless additional configuration is performed to allow routing to public end points. For details on how to achieve outbound connectivity see [Public endpoint connectivity for Virtual Machines using Azure Standard Load Balancer in SAP high-availability scenarios](high-availability-guide-standard-load-balancer-outbound-connections.md).
+When VMs without public IP addresses are placed in the back-end pool of an internal (no public IP address) Standard Azure Load Balancer instance, there's no outbound internet connectivity unless more configuration is performed to allow routing to public endpoints. For steps on how to achieve outbound connectivity, see [Public endpoint connectivity for virtual machines using Azure Standard Load Balancer in SAP high-availability scenarios](high-availability-guide-standard-load-balancer-outbound-connections.md).
> [!IMPORTANT]
-> Do not enable TCP timestamps on Azure VMs placed behind Azure Load Balancer. Enabling TCP timestamps will cause the health probes to fail. Set parameter `net.ipv4.tcp_timestamps` to `0`. For details see [Load Balancer health probes](../../load-balancer/load-balancer-custom-probe-overview.md).
+> Don't enable TCP timestamps on Azure VMs placed behind Azure Load Balancer. Enabling TCP timestamps causes the health probes to fail. Set the parameter `net.ipv4.tcp_timestamps` to `0`. For more information, see [Load Balancer health probes](../../load-balancer/load-balancer-custom-probe-overview.md).
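A minimal sketch for applying this setting on a node, using standard `sysctl` tooling (the drop-in file name is a placeholder):

```bash
# Disable TCP timestamps now and persist the setting across reboots
sudo sysctl -w net.ipv4.tcp_timestamps=0
echo "net.ipv4.tcp_timestamps = 0" | sudo tee /etc/sysctl.d/91-sap-tcp-timestamps.conf
```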
## Prepare servers for PAS and AAS installation
-The following items are prefixed with either **[A]** - applicable to all nodes, **[1]** - only applicable to node 1 or **[2]** - only applicable to node 2.
+When steps in this document are marked with the following prefixes, they mean:
+
+- **[A]**: Applicable to all nodes.
+- **[1]**: Only applicable to node 1.
+- **[2]**: Only applicable to node 2.
-1. **[A]** Setup hostname resolution
+1. **[A]** Set up host name resolution.
- You can either use a DNS server or modify the /etc/hosts on all nodes. This example shows how to use the /etc/hosts file. Replace the IP address and the hostname in the following commands
+ You can either use a DNS server or modify `/etc/hosts` on all nodes. This example shows how to use the `/etc/hosts` file. Replace the IP address and the host name in the following commands:
```bash sudo vi /etc/hosts
The following items are prefixed with either **[A]** - applicable to all nodes,
10.90.90.31 sapaas ```
-2. **[1]** Create the SAP directories on the NFS share. Mount temporarily the NFS share **sapnw1** on one of the VMs and create the SAP directories that will be used as nested mount points.
+1. **[1]** Create the SAP directories on the NFS share. Mount the NFS share **sapnw1** temporarily on one of the VMs, and create the SAP directories to be used as nested mount points.
- 1. If using, NFS on Azure files
+ 1. If you're using NFS on Azure Files:
```bash # mount temporarily the volume
The following items are prefixed with either **[A]** - applicable to all nodes,
sudo rmdir /saptmp ```
- 2. If using, Azure NetApp Files
+ 1. If you're using Azure NetApp Files:
```bash # mount temporarily the volume
The following items are prefixed with either **[A]** - applicable to all nodes,
sudo rmdir /saptmp ```
-3. **[A]** Create the shared directories
+1. **[A]** Create the shared directories.
```bash sudo mkdir -p /usr/sap/NW1/D02
The following items are prefixed with either **[A]** - applicable to all nodes,
sudo chattr +i /usr/sap/NW1/D03 ```
-4. **[A]** Configure SWAP space. When installing dialog instance with central services, you need to configure more swap space.
+1. **[A]** Configure swap space. When you install a dialog instance with central services, you must configure more swap space.
```bash sudo vi /etc/waagent.conf
The following items are prefixed with either **[A]** - applicable to all nodes,
sudo service waagent restart ```
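The swap space itself comes from the Azure VM agent's resource disk settings. A hedged sketch of the relevant lines in `/etc/waagent.conf` follows; the swap size value is a placeholder and should follow your sizing guidance:

```bash
# /etc/waagent.conf resource disk swap settings (size value is a placeholder)
ResourceDisk.Format=y
ResourceDisk.EnableSwap=y
ResourceDisk.SwapSizeMB=2000
```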
-5. **[A]** Add firewall rules for PAS and AAS
+1. **[A]** Add firewall rules for PAS and AAS.
```bash # Probe and gateway port for PAS and AAS
The following items are prefixed with either **[A]** - applicable to all nodes,
sudo firewall-cmd --zone=public --add-port={62002,62003,3303,3303}/tcp ```
-## Installing SAP Netweaver PAS instance
+## Install an SAP Netweaver PAS instance
-1. **[1]** Check the status of the cluster. Before configuring PAS resource for installation, make sure ASCS and ERS resources are configured and started.
+1. **[1]** Check the status of the cluster. Before you configure a PAS resource for installation, make sure the ASCS and ERS resources are configured and started.
```bash sudo pcs status
The following items are prefixed with either **[A]** - applicable to all nodes,
# rsc_sap_NW1_ERS01 (ocf::heartbeat:SAPInstance): Started sap-cl2 ```
-2. **[1]** Create filesystem, virtual IP and health probe resource for PAS instance.
+1. **[1]** Create file system, virtual IP, and health probe resources for the PAS instance.
```bash sudo pcs node standby sap-cl2
The following items are prefixed with either **[A]** - applicable to all nodes,
--group g-NW1_PAS ```
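The full resource definitions aren't reproduced in this excerpt. As a hedged sketch for NFS on Azure Files, they typically follow the pattern below; the NFS device path, mount options, and the virtual IP and probe resource names are placeholders, while the IP address and probe port come from the example values used in this article:

```bash
# Hedged sketch: file system, virtual IP, and load balancer health probe
# resources for the PAS instance (NFS device path is a placeholder)
sudo pcs resource create fs_NW1_PAS Filesystem \
  device='<storage-account>.file.core.windows.net:/<share>/usrsapNW1D02' \
  directory='/usr/sap/NW1/D02' fstype='nfs' \
  --group g-NW1_PAS

sudo pcs resource create vip_NW1_PAS IPaddr2 ip=10.90.90.30 \
  --group g-NW1_PAS

sudo pcs resource create nc_NW1_PAS azure-lb port=62002 \
  --group g-NW1_PAS
```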
- Make sure that the cluster status is ok and that all resources are started. It isn't important on which node the resources are running.
+ Make sure that the cluster status is okay and that all resources are started. It isn't important on which node the resources are running.
```bash sudo pcs status
The following items are prefixed with either **[A]** - applicable to all nodes,
# fs_NW1_PAS (ocf::heartbeat:Filesystem): Started sap-cl1 ```
-3. **[1]** Change the ownership of `/usr/sap/SID/D02` folder after filesystem is mounted.
+1. **[1]** Change the ownership of the `/usr/sap/SID/D02` folder after the file system is mounted.
```bash sudo chown nw1adm:sapsys /usr/sap/NW1/D02 ```
-4. **[1]** Install SAP Netweaver PAS
+1. **[1]** Install the SAP Netweaver PAS.
- Install SAP NetWeaver PAS as root on the first node using a virtual hostname that maps to the IP address of the load balancer frontend configuration for the PAS, for example **sappas**, **10.90.90.30** and the instance number that you used for the probe of the load balancer, for example **02**.
+ Install the SAP NetWeaver PAS as root on the first node by using a virtual host name that maps to the IP address of the load balancer front-end configuration for the PAS. For example, use **sappas**, **10.90.90.30**, and the instance number that you used for the probe of the load balancer, such as **02**.
- You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a non-root user to connect to sapinst.
+ You can use the sapinst parameter `SAPINST_REMOTE_ACCESS_USER` to allow a nonroot user to connect to sapinst.
```bash
# Allow access to SWPM. This rule is not permanent. If you reboot the machine, you have to run the command again.
sudo <swpm>/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin SAPINST_USE_HOSTNAME=<pas_virtual_hostname>
```
-5. Update the `/usr/sap/sapservices` file
+1. Update the `/usr/sap/sapservices` file.
- To prevent the start of the instances by the sapinit startup script, all instances managed by pacemaker must be commented out from `/usr/sap/sapservices` file.
+ To prevent the start of the instances by the sapinit startup script, all instances managed by Pacemaker must be commented out from the `/usr/sap/sapservices` file.
```bash
sudo vi /usr/sap/sapservices
# LD_LIBRARY_PATH=/usr/sap/NW1/D02/exe:$LD_LIBRARY_PATH;export LD_LIBRARY_PATH;/usr/sap/NW1/D02/exe/sapstartsrv pf=/usr/sap/NW1/SYS/profile/NW1_D02_sappas -D -u nw1adm
```
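If you prefer not to edit the file interactively, a possible alternative (illustrative only; the pattern assumes the profile name used in this article) is to comment out the matching line with `sed`:

```bash
# Comment out the sapstartsrv entry for the Pacemaker-managed PAS instance; a .bak backup of the file is kept
sudo sed -i.bak '/NW1_D02_sappas/s/^/# /' /usr/sap/sapservices
```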
-6. **[1]** Create PAS cluster resource
+1. **[1]** Create the PAS cluster resource.
```bash
# If using NFS on Azure Files or NFSv3 on Azure NetApp Files
--group g-NW1_PAS
```
- Check the status of cluster
+ Check the status of the cluster.
```bash
sudo pcs status
# rsc_sap_NW1_PAS02 (ocf::heartbeat:SAPInstance): Started sap-cl1
```
-7. Configure constraint to start PAS resource group only after ASCS instances is started.
+1. Configure a constraint to start the PAS resource group only after the ASCS instance is started.
```bash
sudo pcs constraint order g-NW1_ASCS then g-NW1_PAS kind=Optional symmetrical=false
```
-## Installing SAP Netweaver AAS instance
+## Install an SAP Netweaver AAS instance
-1. **[2]** Check the status of the cluster. Before configure AAS resource for installation, make sure ASCS, ERS and PAS resources are started.
+1. **[2]** Check the status of the cluster. Before you configure an AAS resource for installation, make sure the ASCS, ERS, and PAS resources are started.
```bash
sudo pcs status
# rsc_sap_NW1_PAS02 (ocf::heartbeat:SAPInstance): Started sap-cl1
```
-2. **[2]** Create filesystem, virtual IP and health probe resource for AAS instance.
+1. **[2]** Create file system, virtual IP, and health probe resources for the AAS instance.
```bash
sudo pcs node unstandby sap-cl2
--group g-NW1_AAS
```
- Make sure that the cluster status is ok and that all resources are started. It isn't important on which node the resources are running. As g-NW1_PAS resource group is stopped, all the PAS resources will be in stopped (disabled) state.
+ Make sure that the cluster status is okay and that all resources are started. It isn't important on which node the resources are running. Because the g-NW1_PAS resource group is stopped, all the PAS resources are in the stopped (disabled) state.
```bash
sudo pcs status
# fs_NW1_AAS (ocf::heartbeat:Filesystem): Started sap-cl2
```
-3. **[2]** Change the ownership of `/usr/sap/SID/D03` folder after filesystem is mounted.
+1. **[2]** Change the ownership of the `/usr/sap/SID/D03` folder after the file system is mounted.
```bash
sudo chown nw1adm:sapsys /usr/sap/NW1/D03
```
-4. **[2]** Install SAP Netweaver AAS
+1. **[2]** Install an SAP Netweaver AAS.
- Install SAP NetWeaver AAS as root on the second node using a virtual hostname that maps to the IP address of the load balancer frontend configuration for the PAS, for example **sapaas**, **10.90.90.31** and the instance number that you used for the probe of the load balancer, for example **03**.
+ Install the SAP NetWeaver AAS as root on the second node by using a virtual host name that maps to the IP address of the load balancer front-end configuration for the AAS. For example, use **sapaas**, **10.90.90.31**, and the instance number that you used for the probe of the load balancer, for example, **03**.
- You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a non-root user to connect to sapinst.
+ You can use the sapinst parameter `SAPINST_REMOTE_ACCESS_USER` to allow a nonroot user to connect to sapinst.
```bash
# Allow access to SWPM. This rule is not permanent. If you reboot the machine, you have to run the command again.
sudo <swpm>/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin SAPINST_USE_HOSTNAME=<aas_virtual_hostname>
```
-5. Update the `/usr/sap/sapservices` file
+1. Update the `/usr/sap/sapservices` file.
- To prevent the start of the instances by the sapinit startup script, all instances managed by pacemaker must be commented out from `/usr/sap/sapservices` file.
+ To prevent the start of the instances by the sapinit startup script, all instances managed by Pacemaker must be commented out from the `/usr/sap/sapservices` file.
```bash
sudo vi /usr/sap/sapservices
#LD_LIBRARY_PATH=/usr/sap/NW1/D03/exe:$LD_LIBRARY_PATH;export LD_LIBRARY_PATH;/usr/sap/NW1/D03/exe/sapstartsrv pf=/usr/sap/NW1/SYS/profile/NW1_D03_sapaas -D -u nw1adm
```
-6. **[2]** Create AAS cluster resource
+1. **[2]** Create an AAS cluster resource.
```bash
# If using NFS on Azure Files or NFSv3 on Azure NetApp Files
--group g-NW1_AAS
```
- Check the status of cluster.
+ Check the status of the cluster.
```bash
sudo pcs status
# rsc_sap_NW1_AAS03 (ocf::heartbeat:SAPInstance): Started sap-cl2
```
-7. Configure constraint to start AAS resource group only after ASCS instances is started.
+1. Configure a constraint to start the AAS resource group only after the ASCS instance is started.
```bash
sudo pcs constraint order g-NW1_ASCS then g-NW1_AAS kind=Optional symmetrical=false
sap-cl1:nw1adm > scp -r nw1adm@sap-cl2:/home/nw1adm/.hdb/sapaas .
```
-2. **[1]** To ensure PAS and AAS instances don't run on the same nodes whenever both nodes are running. Add a negative colocation constraint with below command -
+1. **[1]** To ensure the PAS and AAS instances don't run on the same nodes whenever both nodes are running, add a negative colocation constraint with the following command:
```bash
sudo pcs constraint colocation add g-NW1_AAS with g-NW1_PAS score=-1000
sudo pcs resource enable g-NW1_PAS
```
- The score of -1000 is to ensure that if only one node is available then both the instances continue to run on the other node. If you would like to keep the AAS instance down in such situation, then you can use the `score=-INFINITY` to enforce this condition.
+ The score of -1000 ensures that if only one node is available, both the instances continue to run on the other node. If you want to keep the AAS instance down in such a situation, you can use `score=-INFINITY` to enforce this condition.
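For example, the stricter variant is the same constraint as above with only the score changed:

```bash
# Keep the AAS group down instead of colocating it with PAS when only one node is available
sudo pcs constraint colocation add g-NW1_AAS with g-NW1_PAS score=-INFINITY
```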
-3. Check the status of cluster.
+1. Check the status of the cluster.
```bash
sudo pcs status
```
## Test the cluster setup
-Thoroughly test your pacemaker cluster. [Execute the typical failover tests](high-availability-guide-rhel.md#test-the-cluster-setup).
+Thoroughly test your Pacemaker cluster by running [the typical failover tests](high-availability-guide-rhel.md#test-the-cluster-setup).
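As a minimal sketch of such a test, using the node names from this article, you can put one node into standby and confirm that its resources move to the other node:

```bash
# Put a node into standby, watch the resources fail over, then bring the node back online
sudo pcs node standby sap-cl1
sudo pcs status
sudo pcs node unstandby sap-cl1
```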
sap High Availability Guide Rhel With Hana Ascs Ers Dialog Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-rhel-with-hana-ascs-ers-dialog-instance.md
Title: Deploy SAP ASCS/SCS and SAP ERS with SAP HANA high availability VMs on RHEL | Microsoft Docs
-description: Configure SAP ASCS/SCS and SAP ERS with SAP HANA high availability VMs on RHEL.
+ Title: Deploy SAP ASCS/SCS and SAP ERS with SAP HANA high-availability VMs on RHEL | Microsoft Docs
+description: Configure SAP ASCS/SCS and SAP ERS with SAP HANA high-availability VMs on RHEL.
Last updated 08/16/2022
-# Deploy SAP ASCS/ERS with SAP HANA high availability VMs on Red Hat Enterprise Linux
+# Deploy SAP ASCS/ERS with SAP HANA high-availability VMs on RHEL
-This article describes how to install and configure SAP HANA along with ASCS/SCS and ERS instances on the same high availability cluster, running on Red Hat Enterprise Linux (RHEL).
+This article describes how to install and configure SAP HANA along with ABAP SAP Central Services (ASCS)/SAP Central Services (SCS) and Enqueue Replication Server (ERS) instances on the same high-availability cluster running on Red Hat Enterprise Linux (RHEL).
## References * [Configuring SAP S/4HANA ASCS/ERS with Standalone Enqueue Server 2 (ENSA2) in Pacemaker](https://access.redhat.com/articles/3974941) * [Configuring SAP NetWeaver ASCS/ERS ENSA1 with Standalone Resources in RHEL 7.5+ and RHEL 8](https://access.redhat.com/articles/3569681) * SAP Note [1928533](https://launchpad.support.sap.com/#/notes/1928533), which has:
- * List of Azure VM sizes that are supported for the deployment of SAP software
- * Important capacity information for Azure VM sizes
- * Supported SAP software, and operating system (OS) and database combinations
- * Required SAP kernel version for Windows and Linux on Microsoft Azure
+ * A list of Azure virtual machine (VM) sizes that are supported for the deployment of SAP software.
+ * Important capacity information for Azure VM sizes.
+ * Supported SAP software and operating system (OS) and database combinations.
+ * Required SAP kernel version for Windows and Linux on Azure.
* SAP Note [2015553](https://launchpad.support.sap.com/#/notes/2015553) lists prerequisites for SAP-supported SAP software deployments in Azure.
-* SAP Note [2002167](https://launchpad.support.sap.com/#/notes/2002167) has recommended OS settings for Red Hat Enterprise Linux 7.x
-* SAP Note [2772999](https://launchpad.support.sap.com/#/notes/2772999) has recommended OS settings for Red Hat Enterprise Linux 8.x
-* SAP Note [2009879](https://launchpad.support.sap.com/#/notes/2009879) has SAP HANA Guidelines for Red Hat Enterprise Linux
+* SAP Note [2002167](https://launchpad.support.sap.com/#/notes/2002167) lists the recommended OS settings for Red Hat Enterprise Linux 7.x.
+* SAP Note [2772999](https://launchpad.support.sap.com/#/notes/2772999) lists the recommended OS settings for Red Hat Enterprise Linux 8.x.
+* SAP Note [2009879](https://launchpad.support.sap.com/#/notes/2009879) has SAP HANA guidelines for Red Hat Enterprise Linux.
* SAP Note [2178632](https://launchpad.support.sap.com/#/notes/2178632) has detailed information about all monitoring metrics reported for SAP in Azure. * SAP Note [2191498](https://launchpad.support.sap.com/#/notes/2191498) has the required SAP Host Agent version for Linux in Azure. * SAP Note [2243692](https://launchpad.support.sap.com/#/notes/2243692) has information about SAP licensing on Linux in Azure.
-* SAP Note [1999351](https://launchpad.support.sap.com/#/notes/1999351) has additional troubleshooting information for the Azure Enhanced Monitoring Extension for SAP.
+* SAP Note [1999351](https://launchpad.support.sap.com/#/notes/1999351) has more troubleshooting information for the Azure Enhanced Monitoring Extension for SAP.
* [SAP Community Wiki](https://wiki.scn.sap.com/wiki/display/HOME/SAPonLinuxNotes) has all required SAP Notes for Linux. * [Azure Virtual Machines planning and implementation for SAP on Linux](planning-guide.md) * [Azure Virtual Machines deployment for SAP on Linux](deployment-guide.md) * [Azure Virtual Machines DBMS deployment for SAP on Linux](dbms-guide-general.md)
-* [SAP Netweaver in pacemaker cluster](https://access.redhat.com/articles/3150081)
-* General RHEL documentation
- * [High Availability Add-On Overview](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/high_availability_add-on_overview/index)
- * [High Availability Add-On Administration](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/high_availability_add-on_administration/index)
+* [SAP Netweaver in Pacemaker cluster](https://access.redhat.com/articles/3150081)
+* General RHEL documentation:
+ * [High-Availability Add-On Overview](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/high_availability_add-on_overview/index)
+ * [High-Availability Add-On Administration](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/high_availability_add-on_administration/index)
* [High Availability Add-On Reference](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/high_availability_add-on_reference/index) * Azure-specific RHEL documentation:
- * [Support Policies for RHEL High Availability Clusters - Microsoft Azure Virtual Machines as Cluster Members](https://access.redhat.com/articles/3131341)
+ * [Support Policies for RHEL High-Availability Clusters - Microsoft Azure Virtual Machines as Cluster Members](https://access.redhat.com/articles/3131341)
* [Installing and Configuring a Red Hat Enterprise Linux 7.4 (and later) High-Availability Cluster on Microsoft Azure](https://access.redhat.com/articles/3252491) ## Overview
-This article describes the cost optimization scenario where you deploy SAP HANA, SAP ASCS/SCS and SAP ERS instances in the same high availability setup. To minimize the number of VMs for a single SAP system, you want to install SAP ASCS/SCS and SAP ERS on the same hosts where SAP HANA is running. With SAP HANA being configured in high availability cluster setup, you want SAP ASCS/SCS and SAP ERS to be managed by cluster as well. The configuration is basically an addition to already configured SAP HANA cluster setup. In this setup SAP ASCS/SCS and SAP ERS will be installed on a virtual hostname and its instance directory is managed by the cluster.
+This article describes the cost-optimization scenario where you deploy SAP HANA, SAP ASCS/SCS, and SAP ERS instances in the same high-availability setup. To minimize the number of VMs for a single SAP system, you want to install SAP ASCS/SCS and SAP ERS on the same hosts where SAP HANA is running. With SAP HANA being configured in a high-availability cluster setup, you want SAP ASCS/SCS and SAP ERS also to be managed by the cluster. The configuration is basically an addition to an already configured SAP HANA cluster setup. In this setup, SAP ASCS/SCS and SAP ERS are installed on a virtual host name, and their instance directory is managed by the cluster.
-The presented architecture showcases [NFS on Azure Files](../../storage/files/files-nfs-protocol.md) or [Azure NetApp Files](../../azure-netapp-files/azure-netapp-files-introduction.md) for highly available instance directory for the setup.
+The presented architecture showcases [NFS on Azure Files](../../storage/files/files-nfs-protocol.md) or [Azure NetApp Files](../../azure-netapp-files/azure-netapp-files-introduction.md) for a highly available instance directory for the setup.
-The example shown in this article to describe deployment uses following system information -
+The example shown in this article to describe deployment uses the following system information:
-| Instance name | Instance number | Virtual hostname | Virtual IP (Probe Port) |
+| Instance name | Instance number | Virtual host name | Virtual IP (Probe port) |
| -- | -- | -- | -- |
| SAP HANA DB | 03 | saphana | 10.66.0.13 (62503) |
| ABAP SAP Central Services (ASCS) | 00 | sapascs | 10.66.0.20 (62000) |
| SAP system identifier | NW1 | | |

> [!NOTE]
->
-> Install SAP Dialog instances (PAS and AAS) on separate VM's.
+> Install SAP dialog instances (PAS and AAS) on separate VMs.
-![Architecture of SAP HANA, SAP ASCS/SCS and ERS installation within the same cluster](media/high-availability-guide-rhel/high-availability-rhel-hana-ascs-ers-dialog-instance.png)
+![Diagram that shows the architecture of an SAP HANA, SAP ASCS/SCS, and ERS installation within the same cluster.](media/high-availability-guide-rhel/high-availability-rhel-hana-ascs-ers-dialog-instance.png)
-### Important consideration for the cost optimization solution
+### Important considerations for the cost-optimization solution
-* SAP Dialog Instances (PAS and AAS) (like **sapa01** and **sapa02**), should be installed on separate VMs. Install SAP ASCS and ERS with virtual hostnames. To learn more on how to assign virtual hostname to a VM, refer to the blog [Use SAP Virtual Host Names with Linux in Azure](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/use-sap-virtual-host-names-with-linux-in-azure/ba-p/3251593).
-* With HANA DB, ASCS/SCS and ERS deployment in the same cluster setup, the instance number of HANA DB, ASCS/SCS and ERS must be different.
-* Consider sizing your VM SKUs appropriately based on the sizing guidelines. You have to factor in the cluster behavior where multiple SAP instances (HANA DB, ASCS/SCS and ERS) may run on a single VM, when other VM in the cluster is unavailable.
+* SAP dialog instances (PAS and AAS) (like **sapa01** and **sapa02**) should be installed on separate VMs. Install SAP ASCS and ERS with virtual host names. To learn more about how to assign a virtual host name to a VM, see the blog [Use SAP Virtual Host Names with Linux in Azure](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/use-sap-virtual-host-names-with-linux-in-azure/ba-p/3251593).
+* With a HANA DB, ASCS/SCS, and ERS deployment in the same cluster setup, the instance numbers of HANA DB, ASCS/SCS, and ERS must be different.
+* Consider sizing your VM SKUs appropriately based on the sizing guidelines. You must factor in the cluster behavior where multiple SAP instances (HANA DB, ASCS/SCS, and ERS) might run on a single VM when another VM in the cluster is unavailable.
* You can use different storage (for example, Azure NetApp Files or NFS on Azure Files) to install the SAP ASCS and ERS instances. > [!NOTE]
- >
> For SAP J2EE systems, it's not supported to place `/usr/sap/<SID>/J<nr>` on NFS on Azure Files.
- > Database filesystem like /hana/data and /hana/log are not supported on NFS on Azure Files.
-* To install additional application servers on separate VMs, you can either use NFS shares or local managed disk for instance directory filesystem. If you're installing additional application servers for SAP J2EE system, `/usr/sap/<SID>/J<nr>` on NFS on Azure Files isn't supported.
-* Refer [NFS on Azure Files consideration](high-availability-guide-rhel-nfs-azure-files.md#important-considerations-for-nfs-on-azure-files-shares) and [Azure NetApp Files consideration](high-availability-guide-rhel-netapp-files.md#important-considerations), as same consideration applies for this setup as well.
+ > Database file systems like /hana/data and /hana/log aren't supported on NFS on Azure Files.
+* To install more application servers on separate VMs, you can either use NFS shares or a local managed disk for an instance directory file system. If you're installing more application servers for an SAP J2EE system, `/usr/sap/<SID>/J<nr>` on NFS on Azure Files isn't supported.
+* See [NFS on Azure Files considerations](high-availability-guide-rhel-nfs-azure-files.md#important-considerations-for-nfs-on-azure-files-shares) and [Azure NetApp Files considerations](high-availability-guide-rhel-netapp-files.md#important-considerations) because the same considerations apply to this setup.
## Prerequisites
-The configuration described in this article is an addition to your already configured SAP HANA cluster setup. In this configuration, SAP ASCS/SCS and ERS instance are installed on a virtual hostname and the instance directory is managed by the cluster.
+The configuration described in this article is an addition to your already-configured SAP HANA cluster setup. In this configuration, an SAP ASCS/SCS and ERS instance are installed on a virtual host name. The instance directory is managed by the cluster.
-Install HANA database, set up HSR and Pacemaker cluster by following the documentation [High availability of SAP HANA on Azure VMs on Red Hat Enterprise Linux](sap-hana-high-availability-rhel.md) or [High availability of SAP HANA Scale-up with Azure NetApp Files on Red Hat Enterprise Linux](sap-hana-high-availability-netapp-files-red-hat.md) depending on what storage option you're using.
+Install the HANA database and set up HANA system replication (HSR) and a Pacemaker cluster by following the steps in [High availability of SAP HANA on Azure VMs on Red Hat Enterprise Linux](sap-hana-high-availability-rhel.md) or [High availability of SAP HANA Scale-up with Azure NetApp Files on Red Hat Enterprise Linux](sap-hana-high-availability-netapp-files-red-hat.md), depending on what storage option you're using.
-Once you've installed, configured and set-up the **HANA Cluster**, follow the steps below to install ASCS and ERS instances.
+After you install, configure, and set up the **HANA Cluster**, follow the next steps to install ASCS and ERS instances.
## Configure Azure Load Balancer for ASCS and ERS
-This document assumes that you already configured the load balancer for HANA cluster setup as described in [configure Azure load balancer](./sap-hana-high-availability-rhel.md#configure-azure-load-balancer). In the same Azure load balancer, follow below steps to create additional frontend IPs and load balancing rules for ASCS and ERS.
+This article assumes that you already configured the load balancer for a HANA cluster setup as described in [Configure Azure Load Balancer](./sap-hana-high-availability-rhel.md#configure-azure-load-balancer). In the same Azure Load Balancer instance, follow these steps to create more front-end IPs and load-balancing rules for ASCS and ERS.
1. Open the internal load balancer that was created for SAP HANA cluster setup.
-2. Frontend IP Configuration: Create two frontend IP, one for ASCS and another for ERS (for example: 10.66.0.20 and 10.66.0.30).
-3. Backend Pool: Backend Pool remains same, as we're deploying ASCS and ERS on the same backend pool.
-4. Inbound rules: Create two load balancing rule, one for ASCS and another for ERS. Follow the same steps for both load balancing rules.
-5. Frontend IP address: Select frontend IP
- 1. Backend pool: Select backend pool
- 2. Check "High availability ports"
- 3. Protocol: TCP
- 4. Health Probe: Create health probe with below details (applies for both ASCS and ERS)
- 1. Protocol: TCP
- 2. Port: [for example: 620<Instance-no.> for ASCS, 621<Instance-no.> for ERS]
- 3. Interval: 5
- 4. Probe Threshold: 2
- 5. Idle timeout (minutes): 30
- 6. Check "Enable Floating IP"
-
-> [!NOTE]
-> Health probe configuration property numberOfProbes, otherwise known as "Unhealthy threshold" in Portal, isn't respected. So to control the number of successful or failed consecutive probes, set the property "probeThreshold" to 2. It is currently not possible to set this property using Azure portal, so use either the [Azure CLI](/cli/azure/network/lb/probe) or [PowerShell](/powershell/module/az.network/new-azloadbalancerprobeconfig) command.
+1. **Frontend IP Configuration**: Create two front-end IPs, one for ASCS and another for ERS (for example, **10.66.0.20** and **10.66.0.30**).
+1. **Backend Pool**: This pool remains the same because we're deploying ASCS and ERS on the same back-end pool.
+1. **Inbound rules**: Create two load-balancing rules, one for ASCS and another for ERS. Follow the same steps for both load-balancing rules.
+1. **Frontend IP address**: Select the front-end IP.
+ 1. **Backend pool**: Select the back-end pool.
+ 1. **High availability ports**: Select this option.
+ 1. **Protocol**: Select **TCP**.
+ 1. **Health Probe**: Create a health probe with the following details (applies for both ASCS and ERS):
+ 1. **Protocol**: Select **TCP**.
+ 1. **Port**: For example, **620<Instance-no.>** for ASCS and **621<Instance-no.>** for ERS.
+ 1. **Interval**: Enter **5**.
+ 1. **Probe Threshold**: Enter **2**.
+ 1. **Idle timeout (minutes)**: Enter **30**.
+ 1. **Enable Floating IP**: Select this option.
+
+The health probe configuration property `numberOfProbes`, otherwise known as **Unhealthy threshold** in the Azure portal, isn't respected. To control the number of successful or failed consecutive probes, set the property `probeThreshold` to `2`. It's currently not possible to set this property by using the Azure portal. Use either the [Azure CLI](/cli/azure/network/lb/probe) or the [PowerShell](/powershell/module/az.network/new-azloadbalancerprobeconfig) command.
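For example, a minimal Azure CLI sketch that sets the threshold when creating the ASCS probe. The resource group and load balancer names are placeholders, the port follows the 620<Instance-no.> convention for ASCS instance 00, and the `--probe-threshold` parameter requires a recent Azure CLI version:

```bash
# Illustrative only: create a TCP health probe with probeThreshold set to 2
az network lb probe create \
  --resource-group MyResourceGroup \
  --lb-name MySapLoadBalancer \
  --name ascs-health-probe \
  --protocol Tcp \
  --port 62000 \
  --interval 5 \
  --probe-threshold 2
```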
> [!IMPORTANT]
-> Floating IP is not supported on a NIC secondary IP configuration in load-balancing scenarios. For details see [Azure Load balancer Limitations](../../load-balancer/load-balancer-multivip-overview.md#limitations). If you need additional IP address for the VM, deploy a second NIC.
+> Floating IP isn't supported on a NIC secondary IP configuration in load-balancing scenarios. For more information, see [Azure Load Balancer limitations](../../load-balancer/load-balancer-multivip-overview.md#limitations). If you need more IP addresses for the VMs, deploy a second NIC.
-> [!NOTE]
-> When VMs without public IP addresses are placed in the backend pool of internal (no public IP address) Standard Azure load balancer, there will be no outbound internet connectivity, unless additional configuration is performed to allow routing to public end points. For details on how to achieve outbound connectivity see [Public endpoint connectivity for Virtual Machines using Azure Standard Load Balancer in SAP high-availability scenarios](high-availability-guide-standard-load-balancer-outbound-connections.md).
+When VMs without public IP addresses are placed in the back-end pool of an internal (no public IP address) Standard Azure Load Balancer instance, there's no outbound internet connectivity unless more configuration is performed to allow routing to public endpoints. For steps on how to achieve outbound connectivity, see [Public endpoint connectivity for virtual machines using Azure Standard Load Balancer in SAP high-availability scenarios](high-availability-guide-standard-load-balancer-outbound-connections.md).
> [!IMPORTANT]
->
-> Do not enable TCP timestamps on Azure VMs placed behind Azure Load Balancer. Enabling TCP timestamps will cause the health probes to fail. Set parameter **net.ipv4.tcp_timestamps** to **0**. For details see [Load Balancer health probes](../../load-balancer/load-balancer-custom-probe-overview.md).
+> Don't enable TCP timestamps on Azure VMs placed behind Azure Load Balancer. Enabling TCP timestamps causes the health probes to fail. Set the parameter `net.ipv4.tcp_timestamps` to `0`. For more information, see [Load Balancer health probes](../../load-balancer/load-balancer-custom-probe-overview.md).
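One way to apply this setting persistently, shown as a sketch only (the drop-in file name is arbitrary):

```bash
# Disable TCP timestamps and keep the setting across reboots
echo "net.ipv4.tcp_timestamps = 0" | sudo tee /etc/sysctl.d/91-sap-tcp-timestamps.conf
sudo sysctl --system
```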
-## SAP ASCS/SCS and ERS Setup
+## SAP ASCS/SCS and ERS setup
-Based on your storage, follow the steps described in below guides to configure `SAPInstance` resource for SAP ASCS/SCS and SAP ERS instance in the cluster.
+Based on your storage, follow the steps described in the following articles to configure a `SAPInstance` resource for the SAP ASCS/SCS and SAP ERS instance in the cluster.
-* NFS on Azure Files - [Azure VMs high availability for SAP NW on RHEL with NFS on Azure Files](high-availability-guide-rhel-nfs-azure-files.md#prepare-for-an-sap-netweaver-installation)
-* Azure NetApp Files - [Azure VMs high availability for SAP NW on RHEL with Azure NetApp Files](high-availability-guide-rhel-netapp-files.md#prepare-for-the-sap-netweaver-installation)
+* **NFS on Azure Files**: [Azure VMs high availability for SAP NW on RHEL with NFS on Azure Files](high-availability-guide-rhel-nfs-azure-files.md#prepare-for-an-sap-netweaver-installation)
+* **Azure NetApp Files**: [Azure VMs high availability for SAP NW on RHEL with Azure NetApp Files](high-availability-guide-rhel-netapp-files.md#prepare-for-the-sap-netweaver-installation)
## Test the cluster setup
-Thoroughly test your pacemaker cluster.
+Thoroughly test your Pacemaker cluster:
-* [Execute the typical Netweaver failover tests](high-availability-guide-rhel.md#test-the-cluster-setup).
-* [Execute the typical HANA DB failover tests](sap-hana-high-availability-rhel.md#test-the-cluster-setup).
+* [Run the typical Netweaver failover tests](high-availability-guide-rhel.md#test-the-cluster-setup)
+* [Run the typical HANA DB failover tests](sap-hana-high-availability-rhel.md#test-the-cluster-setup)
search Search Howto Index Cosmosdb Gremlin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-index-cosmosdb-gremlin.md
- ignite-2023 Previously updated : 01/18/2023 Last updated : 02/28/2024 # Import data from Azure Cosmos DB for Apache Gremlin for queries in Azure AI Search
Because terminology can be confusing, it's worth noting that [Azure Cosmos DB in
## Prerequisites
-+ [Register for the preview](https://aka.ms/azure-cognitive-search/indexer-preview) to provide feedback and get help with any issues you encounter.
++ [Register for the preview](https://aka.ms/azure-cognitive-search/indexer-preview) to provide scenario feedback. You can access the feature automatically after form submission. + An [Azure Cosmos DB account, database, container, and items](../cosmos-db/sql/create-cosmosdb-resources-portal.md). Use the same region for both Azure AI Search and Azure Cosmos DB for lower latency and to avoid bandwidth charges.
search Search Howto Index Cosmosdb Mongodb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-index-cosmosdb-mongodb.md
- ignite-2023 Previously updated : 01/18/2023 Last updated : 02/28/2024 # Import data from Azure Cosmos DB for MongoDB for queries in Azure AI Search
Because terminology can be confusing, it's worth noting that [Azure Cosmos DB in
## Prerequisites
-+ [Register for the preview](https://aka.ms/azure-cognitive-search/indexer-preview) to provide feedback and get help with any issues you encounter.
-++ [Register for the preview](https://aka.ms/azure-cognitive-search/indexer-preview) to provide scenario feedback. You can access the feature automatically after form submission.
+
+ An [Azure Cosmos DB account, database, collection, and documents](../cosmos-db/sql/create-cosmosdb-resources-portal.md). Use the same region for both Azure AI Search and Azure Cosmos DB for lower latency and to avoid bandwidth charges. + An [automatic indexing policy](../cosmos-db/index-policy.md) on the Azure Cosmos DB collection, set to [Consistent](../cosmos-db/index-policy.md#indexing-mode). This is the default configuration. Lazy indexing isn't recommended and may result in missing data.
search Search Howto Index Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-index-mysql.md
- kr2b-contr-experiment - ignite-2023 Previously updated : 06/10/2022 Last updated : 02/28/2024 # Index data from Azure Database for MySQL
When configured to include a high water mark and soft deletion, the indexer take
## Prerequisites -- [Register for the preview](https://aka.ms/azure-cognitive-search/indexer-preview) to provide feedback and get help with any issues you encounter.
+- [Register for the preview](https://aka.ms/azure-cognitive-search/indexer-preview) to provide scenario feedback. You can access the feature automatically after form submission.
- [Azure Database for MySQL flexible server](../mysql/flexible-server/overview.md).
security Azure CA Details https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/azure-CA-details.md
Previously updated : 02/12/2024 Last updated : 02/29/2024
Any entity trying to access Microsoft Entra identity services via the TLS/SSL pr
| └ [DigiCert TLS Hybrid ECC SHA384 2020 CA1](https://crt.sh/?d=3422153452) | 0x0a275fe704d6eecb23d5cd5b4b1a4e04<br>51E39A8BDB08878C52D6186588A0FA266A69CF28 |
| └ [DigiCert TLS RSA SHA256 2020 CA1](https://crt.sh/?d=4385364571) | 0x06d8d904d5584346f68a2fa754227ec4<br>1C58A3A8518E8759BF075B76B750D4F2DF264FCD |
| └ [GeoTrust Global TLS RSA4096 SHA256 2022 CA1](https://crt.sh/?d=6670931375) | 0x0f622f6f21c2ff5d521f723a1d47d62d<br>7E6DB7B7584D8CF2003E0931E6CFC41A3A62D3DF |
-| └ [GeoTrust TLS DV RSA Mixed SHA256 2020 CA-1](https://crt.sh/?d=3112858728) |0x0c08966535b942a9735265e4f97540bc<br>2F7AA2D86056A8775796F798C481A079E538E004 |
| [**DigiCert Global Root G2**](https://cacerts.digicert.com/DigiCertGlobalRootG2.crt) | 0x033af1e6a711a9a0bb2864b11d09fae5<br>DF3C24F9BFD666761B268073FE06D1CC8D4F82A4 |
| └ [Microsoft Azure TLS Issuing CA 01](https://www.microsoft.com/pki/certs/Microsoft%20Azure%20TLS%20Issuing%20CA%2001.cer) | 0x0aafa6c5ca63c45141ea3be1f7c75317<br>2F2877C5D778C31E0F29C7E371DF5471BD673173 |
| └ [Microsoft Azure TLS Issuing CA 02](https://www.microsoft.com/pki/certs/Microsoft%20Azure%20TLS%20Issuing%20CA%2002.cer) | 0x0c6ae97cced599838690a00a9ea53214<br>E7EEA674CA718E3BEFD90858E09F8372AD0AE2AA |
sentinel Configure Snc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/configure-snc.md
Title: Deploy the Microsoft Sentinel for SAP data connector with Secure Network Communications (SNC) | Microsoft Docs
-description: This article shows you how to deploy the Microsoft Sentinel for SAP data connector to ingest NetWeaver/ABAP logs over a secure connection using Secure Network Communications.
+ Title: Deploy the Microsoft Sentinel for SAP data connector with SNC
+description: Deploy the Microsoft Sentinel for SAP data connector to ingest NetWeaver and ABAP logs over a secure connection by using Secure Network Communications (SNC).
Last updated 05/03/2022+
+# CustomerIntent: As an SAP admin and Microsoft Sentinel user, I want to know how to use SNC to deploy the Microsoft Sentinel for SAP data connector over a secure connection.
-# Deploy the Microsoft Sentinel for SAP data connector with SNC
+# Deploy the Microsoft Sentinel for SAP data connector by using SNC
-This article shows you how to deploy the **Microsoft Sentinel for SAP** data connector to ingest NetWeaver/ABAP logs over a secure connection using Secure Network Communications (SNC).
+This article shows you how to deploy the Microsoft Sentinel for SAP data connector to ingest SAP NetWeaver and SAP ABAP logs over a secure connection by using Secure Network Communications (SNC).
-The SAP data connector agent typically connects to an SAP ABAP server using an RFC connection, and a user's username and password for authentication.
+The SAP data connector agent typically connects to an SAP ABAP server by using a remote function call (RFC) connection and a username and password for authentication.
-However, some environments may require the connection be over an encrypted channel, and client certificates be used for authentication. In these cases you can use SAP Secure Network Communication for this purpose, and you'll have to take the appropriate steps as outlined in this article.
+However, some environments might require the connection to be made on an encrypted channel, and some environments might require client certificates to be used for authentication. In these cases, you can use SNC from SAP to securely connect your data connector. Complete the steps as they're outlined in this article.
## Prerequisites -- SAP Cryptographic library [Download the SAP Cryptographic Library](https://help.sap.com/viewer/d1d04c0d65964a9b91589ae7afc1bd45/5.0.4/en-US/86921b29cac044d68d30e7b125846860.html).-- Network connectivity. SNC uses ports *48xx* (where xx is the SAP instance number) to connect to the ABAP server.-- SAP server configured to support SNC authentication.-- Self-signed, or enterprise CA-issued certificate for user authentication.
-
+To deploy the Microsoft Sentinel for SAP data connector by using SNC, you need:
+
+- The [SAP Cryptographic Library](https://help.sap.com/viewer/d1d04c0d65964a9b91589ae7afc1bd45/5.0.4/en-US/86921b29cac044d68d30e7b125846860.html).
+- Network connectivity. SNC uses port 48*xx* (where *xx* is the SAP instance number) to connect to the ABAP server.
+- An SAP server configured to support SNC authentication.
+- A self-signed or enterprise certificate authority (CA)-issued certificate for user authentication.
+ > [!NOTE]
-> This guide is a sample case for configuring SNC. In production environments it is strongly recommended to consult with SAP administrators to devise a deployment plan.
+> This article describes a sample case for configuring SNC. In a production environment, we strongly recommend that you consult with SAP administrators to create a deployment plan.
-## Configure your SNC deployment
+## Export the server certificate
-### Export server certificate
+To begin, export the server certificate:
1. Sign in to your SAP client and run the **STRUST** transaction.
-1. Navigate and expand the **SNC SAPCryptolib** section in the left hand pane.
+1. On the left pane, go to **SNC SAPCryptolib** and expand the section.
+
+1. Select the system, and then select a value for **Subject**.
+
+ The server certificate information is shown in the **Certificate** section.
-1. Select the system, then select the value of the **Subject** field.
+1. Select **Export certificate**.
- The server certificate information will be displayed in the **Certificate** section at the bottom of the page.
+ ![Screenshot that shows how to export a server certificate.](./media/configure-snc/export-server-certificate.png)
-1. Select the **Export certificate** button at the bottom of the page.
+1. In the **Export Certificate** dialog:
- ![Screenshot showing how to export a server certificate.](./media/configure-snc/export-server-certificate.png)
+ 1. For file format, select **Base64**.
-1. In the **Export Certificate** dialog box, select **Base64** as the file format, select the double boxes icon next to the **File Path** field, and select a filename to export the certificate to, then select the green checkmark to export the certificate.
+ 1. Next to **File Path**, select the double boxes icon.
+ 1. Select a filename to export the certificate to.
-### Import your certificate
+ 1. Select the green checkmark to export the certificate.
-This section explains how to import a certificate so that it's trusted by your ABAP server. It's important to understand which certificate needs to be imported into the SAP system. In any case, only public keys of the certificates need to be imported into the SAP system.
+## Import your certificate
-- **If the user certificate is self-signed:** Import a user certificate.
+This section explains how to import a certificate so that it's trusted by your ABAP server. It's important to understand which certificate needs to be imported into the SAP system. Only public keys of the certificates need to be imported into the SAP system.
-- **If user certificate is issued by an enterprise CA:** Import an enterprise CA certificate. In the event that both root and subordinate CA servers are used, import both root and subordinate CA public certificates.
+- **If the user certificate is self-signed**: Import a user certificate.
+
+- **If the user certificate is issued by an enterprise CA**: Import an enterprise CA certificate. If both root and subordinate CA servers are used, import both the root and the subordinate CA public certificates.
+
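If your user certificate is bundled together with its private key, for example in a *.pfx* file, one possible way to extract only the public part for import is shown in the following sketch. The file names are placeholders:

```bash
# Illustrative only: extract the public certificate (no private key) in Base64 (PEM) format
openssl pkcs12 -in client.pfx -clcerts -nokeys -out client.crt
```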
+To import your certificate:
1. Run the **STRUST** transaction. 1. Select **Display<->Change**.
-1. Select **Import certificate** at the bottom of the page.
+1. Select **Import certificate**.
+
+1. In the **Import certificate** dialog:
-1. In the **Import certificate** dialog box, select the double boxes icon next to the **File path** field and locate the certificate.
+ 1. Next to **File path**, select the double boxes icon and go to the certificate.
- 1. Locate the file containing the certificate (public key only) and select the green checkmark to import the certificate.
+ 1. Go to the file that contains the certificate (public key only) and select the green checkmark to import the certificate.
- The certificate information is displayed in the **Certificate** section.
+ The certificate information is displayed in the **Certificate** section.
- 1. Select **Add to Certificate List**.
+ 1. Select **Add to Certificate List**.
- The certificate will appear in the **Certificate List** area.
+ The certificate appears in the **Certificate List** section.
-### Associate certificate with a user account
+## Associate the certificate with a user account
+
+To associate the certificate with a user account:
1. Run the **SM30** transaction.
-1. In the **Table/View** field, type **USRACLEXT**, then select **Maintain**.
+1. In **Table/View**, enter **USRACLEXT**, and then select **Maintain**.
+
+1. Review the output and identify whether the target user already has an associated SNC name. If no SNC name is associated with the user, select **New Entries**.
-1. Review the output, identify whether the target user already has an associated SNC name. If not, select **New Entries**.
+ ![Screenshot that shows how to create a new entry in the USERACLEXT table.](./media/configure-snc/usraclext-new-entry.png)
- ![Screenshot showing how to create a new entry in USER A C L E X T table.](./media/configure-snc/usraclext-new-entry.png)
+1. For **User**, enter the user's username. For **SNC Name**, enter the user's certificate subject name prefixed with **p:**, and then select **Save**.
-1. Type the target user's username in the **User** field and the user's certificate subject name prefixed with **p:** in the **SNC Name** field, then select **Save**.
+ ![Screenshot that shows how to create a new user in USERACLEXT table.](./media/configure-snc/usraclext-new-user.png)
- ![Screenshot showing how to create a new user in USER A C L E X T table.](./media/configure-snc/usraclext-new-user.png)
+## Grant logon rights by using the certificate
-### Grant logon rights using certificate
+To grant logon rights:
1. Run the **SM30** transaction.
-1. In the **Table/View** field, type **VSNCSYSACL**, then select **Maintain**.
+1. In **Table/View**, enter **VSNCSYSACL**, and then select **Maintain**.
-1. Confirm that the table is cross-client in the informational prompt that appears.
+1. In the informational prompt that appears, confirm that the table is cross-client.
-1. In **Determine Work Area: Entry** type **E** in the **Type of ACL entry** field, and select the green checkmark.
+1. In **Determine Work Area: Entry**, enter **E** for **Type of ACL entry**, and then select the green checkmark.
-1. Review the output, identify whether the target user already has an associated SNC name. If not, select **New Entries**.
+1. Review the output and identify whether the target user already has an associated SNC name. If the user doesn't have an associated SNC name, select **New Entries**.
- ![Screenshot showing how to create a new entry in the V S N C SYS A C L table.](./media/configure-snc/vsncsysacl-new-entry.png)
+ ![Screenshot that shows how to create a new entry in the VSNCSYSACL table.](./media/configure-snc/vsncsysacl-new-entry.png)
1. Enter your system ID and user certificate subject name with a **p:** prefix.
- ![Screenshot showing how to create a new user in the V S N C SYS A C L table.](./media/configure-snc/vsncsysacl-new-user.png)
+ ![Screenshot that shows how to create a new user in the VSNCSYSACL table.](./media/configure-snc/vsncsysacl-new-user.png)
-1. Ensure **Entry for RFC activated** and **Entry for certificate activated** checkboxes are marked, then select **Save**.
+1. Ensure that the checkboxes for **Entry for RFC activated** and **Entry for certificate activated** are selected, and then select **Save**.
-### Map users of the ABAP service provider to external user IDs
+## Map users of the ABAP service provider to external user IDs
+
+To map ABAP service provider users to external user IDs:
1. Run the **SM30** transaction.
-1. In the **Table/View** field, type **VUSREXTID**, then select **Maintain**.
+1. In **Table/View**, enter **VUSREXTID**, and then select **Maintain**.
-1. In the **Determine Work Area: Entry** page, select the **DN** ID type as the **Work Area**.
+1. In **Determine Work Area: Entry**, select the **DN** ID type for **Work Area**.
-1. Type these details:
+1. Enter the following values:
- - **External ID**: *CN=Sentinel*, *C=US*
- - **Seq. No**: *000*
- - **User**: *SENTINEL*
+ - For **External ID**, enter **CN=Sentinel**, **C=US**.
+ - For **Seq. No**, enter **000**.
+ - For **User**, enter **SENTINEL**.
-1. Select **Save** and **Enter**.
+1. Select **Save**, and then select **Enter**.
- :::image type="content" source="media/configure-snc/vusrextid-table-configuration.png" alt-text="Screenshot of configuring the SAP VUSREXTID table.":::
+ :::image type="content" source="media/configure-snc/vusrextid-table-configuration.png" alt-text="Screenshot that shows how to set up the SAP VUSREXTID table.":::
-### Set up the container
+## Set up the container
> [!NOTE]
-> If you set up the SAP data connector agent container via the UI, don't perform the steps in this section. Continue to set up the connector [in the connector page](deploy-data-connector-agent-container.md) instead.
+> If you set up the SAP data connector agent container by using the UI, don't complete the steps that are described in this section. Instead, continue to set up the connector [on the connector page](deploy-data-connector-agent-container.md).
+
+To set up the container:
-1. Transfer the **libsapcrypto.so** and **sapgenpse** files to the target system where the container will be created.
+1. Transfer the *libsapcrypto.so* and *sapgenpse* files to the system where the container will be created.
-1. Transfer the client certificate (private and public key) to the target system where the container will be created.
+1. Transfer the client certificate (both private and public keys) to the system where the container will be created.
- The client certificate and key can be in .p12, .pfx, or Base-64 .crt and .key format.
+ The client certificate and key can be in *.p12*, *.pfx*, or Base64 *.crt* and *.key* format.
-1. Transfer the server certificate (public key only) to the target system where the container will be created.
+1. Transfer the server certificate (public key only) to the system where the container will be created.
- The server certificate must be in Base-64 .crt format.
+ The server certificate must be in Base64 *.crt* format.
-1. If the client certificate was issued by an enterprise certification authority, transfer the issuing CA and root CA certificates to the target system where the container will be created.
+1. If the client certificate was issued by an enterprise certification authority, transfer the issuing CA and root CA certificates to the system where the container will be created.
-1. Retrieve the kickstart script from the Microsoft Sentinel GitHub repository:
+1. Get the kickstart script from the Microsoft Sentinel GitHub repository:
```bash
wget https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/sapcon-sentinel-kickstart.sh
chmod +x ./sapcon-sentinel-kickstart.sh
```
-1. Run the script, specifying the following base parameters:
+1. Run the script and specify the following base parameters:
```bash
./sapcon-sentinel-kickstart.sh \
--sapgenpse <path to sapgenpse> \
--server-cert <path to server certificate public key> \
```
-
- If the client certificate is in .crt/.key format, use the following switches:
-
+
+ If the client certificate is in *.crt* or *.key* format, use the following switches:
+ ```bash
+ --client-cert <path to client certificate public key> \
+ --client-key <path to client certificate private key> \
+ ```
-
- If the client certificate is in .pfx or .p12 format:
-
+
+ If the client certificate is in *.pfx* or *.p12* format, use these switches:
+ ```bash
+ --client-pfx <pfx filename> --client-pfx-passwd <password>
--server-cert /home/azureuser/server.crt \
+ ```
-For additional information on options available in the kickstart script, review [Reference: Kickstart script](reference-kickstart.md)
-
-## Next steps
+For more information about options that are available in the kickstart script, see [Reference: Kickstart script](reference-kickstart.md).
-Learn more about the Microsoft Sentinel solution for SAP® applications:
+## Troubleshooting and reference
-- [Deploy Microsoft Sentinel solution for SAP® applications](deployment-overview.md)-- [Prerequisites for deploying Microsoft Sentinel solution for SAP® applications](prerequisites-for-deploying-sap-continuous-threat-monitoring.md)-- [Deploy SAP Change Requests (CRs) and configure authorization](preparing-sap.md)-- [Deploy the solution content from the content hub](deploy-sap-security-content.md)-- [Deploy and configure the container hosting the SAP data connector agent](deploy-data-connector-agent-container.md)-- [Enable and configure SAP auditing](configure-audit.md)-- [Monitor the health of your SAP system](../monitor-sap-system-health.md)-- [Collect SAP HANA audit logs](collect-sap-hana-audit-logs.md)
+For troubleshooting information, see these articles:
-Troubleshooting:
+- [Troubleshoot your Microsoft Sentinel solution for SAP applications deployment](sap-deploy-troubleshoot.md)
+- [Microsoft Sentinel solutions](../sentinel-solutions.md)
-- [Troubleshoot your Microsoft Sentinel solution for SAP® applications deployment](sap-deploy-troubleshoot.md)
+For reference, see these articles:
-Reference files:
--- [Microsoft Sentinel solution for SAP® applications data reference](sap-solution-log-reference.md)-- [Microsoft Sentinel solution for SAP® applications: security content reference](sap-solution-security-content.md)
+- [Microsoft Sentinel solution for SAP applications data reference](sap-solution-log-reference.md)
+- [Microsoft Sentinel solution for SAP applications: Security content reference](sap-solution-security-content.md)
- [Kickstart script reference](reference-kickstart.md) - [Update script reference](reference-update.md) - [Systemconfig.ini file reference](reference-systemconfig.md)
-For more information, see [Microsoft Sentinel solutions](../sentinel-solutions.md).
+## Related content
+
+- [Deploy Microsoft Sentinel solution for SAP applications](deployment-overview.md)
+- [Prerequisites for deploying Microsoft Sentinel solution for SAP applications](prerequisites-for-deploying-sap-continuous-threat-monitoring.md)
+- [Deploy SAP change requests and configure authorization](preparing-sap.md)
sentinel Cross Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/cross-workspace.md
Title: Working with the Microsoft Sentinel solution for SAP® applications across multiple workspaces
-description: This article discusses working with Microsoft Sentinel solution for SAP® applications across multiple workspaces in different scenarios.
+ Title: Microsoft Sentinel solution for SAP apps across multiple workspaces
+description: Learn how to work with the Microsoft Sentinel solution for SAP applications in multiple workspaces for different deployment scenarios.
Last updated 03/22/2023+
+# customer intent: As a security admin or SAP admin, I want to know how to use the Microsoft Sentinel solution for SAP applications in multiple workspaces so that I can plan a deployment.
-# Working with the Microsoft Sentinel solution for SAP® applications across multiple workspaces
+# Work with the Microsoft Sentinel solution for SAP applications in multiple workspaces
-When you set up your Microsoft Sentinel workspace, there are [multiple architecture options](../design-your-workspace-architecture.md#decision-tree) and considerations. Considering geography, regulation, access control, and other factors, you may choose to have multiple Sentinel workspaces in your organization.
+When you set up your Microsoft Sentinel workspace, you have [multiple architecture options](../design-your-workspace-architecture.md#decision-tree) and factors to consider. Taking into account geography, regulation, access control, and other factors, you might choose to have multiple Microsoft Sentinel workspaces in your organization.
-This article discusses working with the Microsoft Sentinel solution for SAP® applications across multiple workspaces in different scenarios.
+This article discusses how to work with the Microsoft Sentinel solution for SAP applications in multiple workspaces for different deployment scenarios.
-The Microsoft Sentinel solution for SAP® applications natively supports a cross-workspace architecture to allow improved flexibility for:
+The Microsoft Sentinel solution for SAP applications natively supports a cross-workspace architecture to support improved flexibility for:
-- Managed security service providers (MSSPs) or a global or federated SOC-- Data residency requirements -- Organizational hierarchy/IT design -- Insufficient role-based access control (RBAC) in a single workspace
+- Managed security service providers (MSSPs) or a global or federated security operations center (SOC).
+- Data residency requirements.
+- Organizational hierarchy and IT design.
+- Insufficient role-based access control (RBAC) in a single workspace.
> [!IMPORTANT]
-> Working with multiple workspaces is currently in PREVIEW. This feature is provided without a service level agreement. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+> Working with multiple workspaces is currently in preview. This feature is provided without a service-level agreement. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-You can define multiple workspaces when you [deploy the SAP security content](deploy-sap-security-content.md#deploy-the-security-content-from-the-content-hub).
+You can define multiple workspaces when you [deploy SAP security content](deploy-sap-security-content.md#deploy-the-security-content-from-the-content-hub).
## Collaboration between the SOC and SAP teams in your organization
-In this article, we focus on a specific and common use case, where collaboration between the security operations center (SOC) and SAP teams in your organization requires a multi-workspace setup.
+A common use case is one in which collaboration between the SOC and SAP teams in your organization requires a multi-workspace setup.
+
+Your organization's SAP team has technical knowledge that's critical to successfully and effectively implement the Microsoft Sentinel solution for SAP applications. Therefore, it's important for the SAP team to see the relevant data and to collaborate with the SOC about the required configuration and incident response procedures.
-Your organization's SAP team has technical knowledge that's critical to successfully and effectively implement the Microsoft Sentinel solution for SAP® applications. Therefore, it's important for the SAP team see the relevant data and collaborate with the SOC on the required configuration and incident response procedures.
+There are two possible scenarios for SOC and SAP team collaboration, depending on your organization's needs:
-As part of this collaboration, there are two possible scenarios, depending on your organization's needs:
+- Scenario 1: **SAP data and SOC data maintained in separate workspaces**. Both teams can see the SAP data by using [cross-workspace queries](#scenario-1-sap-data-and-soc-data-maintained-in-separate-workspaces).
-1. **The SAP data and the SOC data reside in separate workspaces**. Both teams can see the SAP data, using [cross-workspace queries](#scenario-1-sap-and-soc-data-reside-in-separate-workspaces).
-1. **The SAP data is kept in the SOC workspace**, and SAP team can query the data using [resource context queries](#scenario-2-sap-data-is-kept-in-the-soc-workspace).
+- Scenario 2: **SAP data kept only in the SOC workspace**. The SAP team can query the data by using [resource context queries](#scenario-2-sap-data-kept-only-in-the-soc-workspace).
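As a rough sketch of what a cross-workspace query looks like, the following Azure CLI command runs a query against the SOC workspace that reads a table stored in the SAP workspace. The workspace GUID, the workspace name `sap-workspace`, and the table name are placeholders, not values from this article:

```bash
# Illustrative only: query SAP data in another workspace from the SOC workspace
az monitor log-analytics query \
  --workspace <SOC-workspace-customer-id> \
  --analytics-query "workspace('sap-workspace').ABAPAuditLog_CL | take 10"
```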
-## Scenario 1: SAP and SOC data reside in separate workspaces
+## Scenario 1: SAP data and SOC data maintained in separate workspaces
-In this scenario, the SAP and SOC teams have separate Microsoft Sentinel workspaces.
+In this scenario, the SAP team and the SOC team have separate Microsoft Sentinel workspaces where team data is kept.
-When your organization [deploys the Microsoft Sentinel solution for SAP® applications](deploy-sap-security-content.md#deploy-the-microsoft-sentinel-solution-for-sap-applications-from-the-content-hub), each team specifies its SAP workspace.
+When your organization [deploys the Microsoft Sentinel solution for SAP applications](deploy-sap-security-content.md#deploy-the-microsoft-sentinel-solution-for-sap-applications-from-the-content-hub), each team specifies its SAP workspace.
-A common practice is to provide some or all of the SOC team members with the **Sentinel Reader** role on the SAP workspace.
+A common practice is to provide some or all SOC team members with the Sentinel Reader role for the SAP workspace.
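As a hedged illustration of that practice, the following Azure CLI sketch assigns the built-in Microsoft Sentinel Reader role on the SAP workspace to a single SOC analyst. The subscription, resource group, workspace, and analyst values are placeholders, not values taken from this solution.

```bash
# A minimal sketch, assuming placeholder names: grant one SOC analyst read access
# to the SAP team's Microsoft Sentinel (Log Analytics) workspace.
az role assignment create \
  --assignee "soc-analyst@contoso.com" \
  --role "Microsoft Sentinel Reader" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<sap-rg>/providers/Microsoft.OperationalInsights/workspaces/<sap-workspace>"
```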
Creating separate workspaces for the SAP and SOC data has these benefits: 
-- Microsoft Sentinel can trigger alerts that include both SOC and SAP data, and run those alerts on the SOC workspace.
+- Microsoft Sentinel can trigger alerts that include both SOC and SAP data, and it can run those alerts on the SOC workspace.
> [!NOTE]
- > For larger SAP landscapes, running queries made by the SOC on data from the SAP workspace can impact performance, because the SAP data must travel to the SOC workspace when being queried. For improved performance and cost optimizations, consider having both the SOC and SAP workspaces on the same [dedicated cluster](../../azure-monitor/logs/logs-dedicated-clusters.md?tabs=cli#cluster-pricing-model).
+ > For larger SAP landscapes, running queries that are made by the SOC on data from the SAP workspace can affect performance. The SAP data must travel to the SOC workspace when it's being queried. For improved performance and cost optimizations, consider having both the SOC and SAP workspaces on the same [dedicated cluster](../../azure-monitor/logs/logs-dedicated-clusters.md?tabs=cli#cluster-pricing-model).
-- The SAP team has its own Microsoft Sentinel workspace, including all features, except for detections that include both SOC and SAP data. 
-- Flexibility: The SAP team can focus on the control and internal threats in its landscape, while the SOC can focus on external threats. 
-- There is no additional charge for ingestion fees, because data is only ingested once into Microsoft Sentinel. However, note that each workspace has its own [pricing tier](../design-your-workspace-architecture.md#step-5-collecting-any-non-soc-data). 
-- The SOC can see and investigate SAP incidents: If the SAP team faces an event they can't explain with the existing data, they can assign the incident to the SOC.
+- The SAP team has its own Microsoft Sentinel workspace that includes all features except detections that include both SOC and SAP data.
+- Flexibility. The SAP team can focus on the control of internal threats in its landscape, and the SOC can focus on external threats.
+- There's no additional charge for ingestion fees, because data is ingested only once into Microsoft Sentinel. However, each workspace has its own [pricing tier](../design-your-workspace-architecture.md#step-5-collecting-any-non-soc-data).
+- The SOC can see and investigate SAP incidents. If the SAP team faces an event that it can't explain by using existing data, the team can assign the incident to the SOC.
-This table maps out the access of data and features for the SAP and SOC teams in this scenario.
+The following table maps the access of data and features for the SAP and SOC teams in this scenario:
|Function |SOC team |SAP team |
||||
This table maps out the access of data and features for the SAP and SOC teams in
|SAP workspace data, analytics rules, functions, watchlists, and workbooks access | &#x2705; | &#x2705;<sup>1</sup> |
|SAP incident access and collaboration | &#x2705; | &#x2705;<sup>1</sup> |
-<sup>1</sup>The SOC team can see these functions on both workspaces, while the SAP team can see these functions only on the SAP workspace.
+<sup>1</sup> The SOC team can see these functions in both workspaces. The SAP team can see these functions only in the SAP workspace.
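As a hedged illustration of the cross-workspace queries this scenario relies on, the following sketch runs a Log Analytics query from the SOC workspace that reaches into the SAP workspace by using the `workspace()` expression. The workspace identifiers and the `ABAPAuditLog_CL` table name are assumptions for illustration only; substitute the tables your deployment actually ingests. The command assumes the Azure CLI `log-analytics` extension is installed.

```bash
# A hedged sketch of a cross-workspace query run by the SOC team from its own workspace.
# The workspace GUID, workspace name, and table name are placeholders.
az monitor log-analytics query \
  --workspace "<soc-workspace-guid>" \
  --analytics-query 'workspace("<sap-workspace-name>").ABAPAuditLog_CL | where TimeGenerated > ago(1d) | take 20'
```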
-## Scenario 2: SAP data is kept in the SOC workspace
+## Scenario 2: SAP data kept only in the SOC workspace
-In this scenario, you want to keep all of the data in one workspace and to apply access controls. You can do this using Log Analytics to [manage access to data by resource](../resource-context-rbac.md). You can also associate SAP resources with an Azure resource ID by specifying the required `azure_resource_id` field in the [connector configuration section](reference-systemconfig.md#connector-configuration-section) on the data collector used to ingest data from the SAP system into Microsoft Sentinel.
+In this scenario, you want to keep all the data in one workspace and to apply access controls. You can do this by using Log Analytics in Azure Monitor to [manage access to data by resource](../resource-context-rbac.md). You can also associate SAP resources with an Azure resource ID by specifying the required `azure_resource_id` field in the [connector configuration section](reference-systemconfig.md#connector-configuration-section) on the data collector that you use to ingest data from the SAP system into Microsoft Sentinel.
-Once the data collector agent is configured with the correct resource ID, the SAP team can access the specific SAP data in the SOC workspace using a resource-scoped query. The SAP team cannot read any of the other, non-SAP data types.
+After the data collector agent is configured with the correct resource ID, the SAP team can access the specific SAP data in the SOC workspace by using a resource-scoped query. The SAP team can't read any of the other, non-SAP data types.
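The following sketch illustrates what a resource-scoped query might look like for an SAP team member. It's an assumption-laden example rather than part of the solution's documented procedure: the Log Analytics resource-centric query endpoint, the resource path (which must match the `azure_resource_id` configured on the data collector), and the table name are all placeholders to adapt.

```bash
# A hedged sketch of a resource-centric (resource-scoped) log query. The caller's access
# is evaluated against the resource in the URL, so only logs tagged with that resource ID
# are returned. All identifiers below are placeholders.
TOKEN=$(az account get-access-token --resource https://api.loganalytics.io --query accessToken -o tsv)
curl -s -X POST \
  "https://api.loganalytics.io/v1/subscriptions/<subscription-id>/resourceGroups/<sap-rg>/providers/Microsoft.Compute/virtualMachines/<sap-collector-vm>/query" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"query": "ABAPAuditLog_CL | where TimeGenerated > ago(1d) | take 20"}'
```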
-There are no costs associated with this approach, as the data is only ingested once into Microsoft Sentinel. Using this mode of access, the SAP team only sees raw and unformatted data and cannot use any Microsoft Sentinel features. In addition to accessing the raw data via log analytics, the SAP team can also access the same data [via Power BI](../resource-context-rbac.md).
+There are no costs associated with this approach because the data is ingested only once into Microsoft Sentinel. When you use this mode of access, the SAP team sees only raw and unformatted data. The SAP team can't use any Microsoft Sentinel features. In addition to accessing the raw data via Log Analytics, the SAP team can access the same data [via Power BI](../resource-context-rbac.md).
-## Next steps
+## Next step
-In this article, you learned about working with Microsoft Sentinel solution for SAP® applications across multiple workspaces in different scenarios.
+In this article, you learned about working with Microsoft Sentinel solution for SAP applications in multiple workspaces for different deployment scenarios. Next, learn how to deploy the solution:
> [!div class="nextstepaction"]
-> [Deploy the Sentinel solution for SAP® applications](deployment-overview.md)
+> [Deploy the Microsoft Sentinel solution for SAP applications](deployment-overview.md)
sentinel Deploy Sap Btp Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/deploy-sap-btp-solution.md
Title: Deploy Microsoft Sentinel Solution for SAP® BTP
-description: This article introduces you to the process of deploying the Microsoft Sentinel Solution for SAP® BTP.
+ Title: Deploy Microsoft Sentinel solution for SAP BTP
+description: Learn how to deploy the Microsoft Sentinel solution for SAP Business Technology Platform (BTP) system.
Last updated 03/30/2023+
+# customer intent: As an SAP admin, I want to know how to deploy the Microsoft Sentinel solution for SAP BTP so that I can plan a deployment.
-# Deploy Microsoft Sentinel Solution for SAP® BTP
+# Deploy the Microsoft Sentinel solution for SAP BTP
-This article describes how to deploy the Microsoft Sentinel Solution for SAP® BTP. The Microsoft Sentinel Solution for SAP® BTP monitors and protects your SAP Business Technology Platform (BTP) system: It collects audits and activity logs from the BTP infrastructure and BTP based apps, and detects threats, suspicious activities, illegitimate activities, and more. Read more about the solution. [Read more about the solution](sap-btp-solution-overview.md).
+This article describes how to deploy the Microsoft Sentinel solution for SAP Business Technology Platform (BTP) system. The Microsoft Sentinel solution for SAP BTP monitors and protects your SAP BTP system. It collects audit logs and activity logs from the BTP infrastructure and BTP-based apps, and then detects threats, suspicious activities, illegitimate activities, and more. [Read more about the solution](sap-btp-solution-overview.md).
> [!IMPORTANT]
-> The Microsoft Sentinel Solution for SAP® BTP solution is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+> The Microsoft Sentinel solution for SAP BTP is currently in preview. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
## Prerequisites
Before you begin, verify that: 
-- The Microsoft Sentinel solution is enabled. 
-- You have a defined Microsoft Sentinel workspace and have read and write permissions to the workspace.
+- The Microsoft Sentinel solution is enabled.
+- You have a defined Microsoft Sentinel workspace, and you have read and write permissions to the workspace.
- Your organization uses SAP BTP (in a Cloud Foundry environment) to streamline interactions with SAP applications and other business applications. 
- You have an SAP BTP account (which supports BTP accounts in the Cloud Foundry environment). You can also use a [SAP BTP trial account](https://cockpit.hanatrial.ondemand.com/).
-- You have the SAP BTP auditlog-management service and service key (see [Set up the BTP account and solution](#set-up-the-btp-account-and-solution)). 
-- You can create an [Azure Function App](../../azure-functions/functions-overview.md) with the `Microsoft.Web/Sites`, `Microsoft.Web/ServerFarms`, `Microsoft.Insights/Components`, and `Microsoft.Storage/StorageAccounts` permissions.
-- You can create [Data Collection Rules/Endpoints](../../azure-monitor/essentials/data-collection-rule-overview.md) with the permissions:
- - `Microsoft.Insights/DataCollectionEndpoints`, and `Microsoft.Insights/DataCollectionRules`.
- - Assign the Monitoring Metrics Publisher role to the Azure Function.
-- You have an [Azure Key Vault](../../key-vault/general/overview.md) to hold the SAP BTP client secret.
+- You have the SAP BTP auditlog-management service and service key (see [Set up the BTP account and solution](#set-up-the-btp-account-and-solution)).
+- You can create an [Azure function app](../../azure-functions/functions-overview.md) by using the Microsoft.Web/Sites, Microsoft.Web/ServerFarms, Microsoft.Insights/Components, and Microsoft.Storage/StorageAccounts permissions.
+- You can create [data collection rules and endpoints](../../azure-monitor/essentials/data-collection-rule-overview.md) by using these permissions:
+ - Microsoft.Insights/DataCollectionEndpoints and Microsoft.Insights/DataCollectionRules.
+ - Assign the Monitoring Metrics Publisher role to the function app.
+- You have an [Azure Key Vault](../../key-vault/general/overview.md) to hold the SAP BTP client secret.
## Set up the BTP account and solution
-1. After you can log into your BTP account (see the [prerequisites](#prerequisites),) follow these [audit log retrieval steps](https://help.sap.com/docs/btp/sap-business-technology-platform/audit-log-retrieval-api-usage-for-subaccounts-in-cloud-foundry-environment) on the SAP BTP system.
-1. In the SAP BTP Cockpit, select the **Audit Log Management Service**.
+To set up the BTP account and the solution:
+
+1. After you can sign in to your BTP account (see the [prerequisites](#prerequisites)), follow the [audit log retrieval steps](https://help.sap.com/docs/btp/sap-business-technology-platform/audit-log-retrieval-api-usage-for-subaccounts-in-cloud-foundry-environment) on the SAP BTP system.
+
+1. In the SAP BTP cockpit, select the **Audit Log Management Service**.
+
+ :::image type="content" source="./media/deploy-sap-btp-solution/btp-audit-log-management-service.png" alt-text="Screenshot that shows selecting the BTP Audit Log Management Service." lightbox="./media/deploy-sap-btp-solution/btp-audit-log-management-service.png":::
+
+1. Create an instance of the Audit Log Management Service in the BTP subaccount.
- :::image type="content" source="./media/deploy-sap-btp-solution/btp-audit-log-management-service.png" alt-text="Screenshot of selecting the BTP Audit Log Management Service." lightbox="./media/deploy-sap-btp-solution/btp-audit-log-management-service.png":::
+ :::image type="content" source="./media/deploy-sap-btp-solution/btp-audit-log-sub-account.png" alt-text="Screenshot that shows creating an instance of the BTP subaccount." lightbox="./media/deploy-sap-btp-solution/btp-audit-log-sub-account.png":::
-1. Create an instance of the Audit Log Management Service in the sub account.
+1. Create a service key and record the values for `url`, `uaa.clientid`, `uaa.clientsecret`, and `uaa.url`. These values are required to deploy the data connector.
- :::image type="content" source="./media/deploy-sap-btp-solution/btp-audit-log-sub-account.png" alt-text="Screenshot of creating an instance of the BTP subaccount." lightbox="./media/deploy-sap-btp-solution/btp-audit-log-sub-account.png":::
-
-1. Create a service key and record the `url`, `uaa.clientid`, `uaa.clientecret` and `uaa.url` values. These are required to deploy the data connector.
-
- Here's an example of these field values.
+ Here are examples of these field values:
   - **url**: `https://auditlog-management.cfapps.us10.hana.ondemand.com` 
   - **uaa.clientid**: `sb-ac79fee5-8ad0-4f88-be71-d3f9c566e73a!b136532|auditlog-management!b1237`
Before you begin, verify that:
   - **uaa.url**: `https://915a0312trial.authentication.us10.hana.ondemand.com` 
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Navigate to the **Microsoft Sentinel** service.
+1. Go to the Microsoft Sentinel service.
1. Select **Content hub**, and in the search bar, search for *BTP*. 
1. Select **SAP BTP**. 
1. Select **Install**.
Before you begin, verify that:
1. Select **Create**.
- :::image type="content" source="./media/deploy-sap-btp-solution/sap-btp-create-solution.png" alt-text="Screenshot of how to create the Microsoft Sentinel Solution® for SAP BTP." lightbox="./media/deploy-sap-btp-solution/sap-btp-create-solution.png":::
+ :::image type="content" source="./media/deploy-sap-btp-solution/sap-btp-create-solution.png" alt-text="Screenshot that shows how to create the Microsoft Sentinel solution for SAP BTP." lightbox="./media/deploy-sap-btp-solution/sap-btp-create-solution.png":::
-1. Select the resource group and the Sentinel workspace in which you want to deploy the solution.
-1. Select **Next** until you pass validation and select **Create**.
-1. Once the solution deployment is complete, return to your Sentinel workspace and select **Data connectors**.
-1. In the search bar, type *BTP*, and select **SAP BTP (using Azure Function)**.
+1. Select the resource group and the Microsoft Sentinel workspace in which to deploy the solution.
+1. Select **Next** until you pass validation, and then select **Create**.
+1. When the solution deployment is finished, return to your Microsoft Sentinel workspace and select **Data connectors**.
+1. In the search bar, enter **BTP**, and then select **SAP BTP (using Azure Function)**.
1. Select **Open connector page**.
-1. In the connector page, make sure that you meet the required prerequisites and follow the configuration steps. In step 2 of the data connector configuration, specify the parameters you defined in step 4 of this procedure.
-
+1. On the connector page, make sure that you meet the required prerequisites and complete the configuration steps. In step 2 of the data connector configuration, specify the parameters that you defined in step 4 in this section.
+ > [!NOTE]
- > Retrieving audits for the global account doesn't automatically retrieve audits for the subaccount. Follow the connector configuration steps for each of the subaccounts you want to monitor, and also follow these steps for the global account. Review these [account auditing configuration considerations](#account-auditing-configuration-considerations).
+ > Retrieving audits for the global account doesn't automatically retrieve audits for the subaccount. Follow the connector configuration steps for each of the subaccounts you want to monitor, and also follow these steps for the global account. Review these [account auditing configuration considerations](#consider-your-account-auditing-configurations).
-1. Complete all configuration steps, including the Function App deployment and the Key Vault access policy configuration.
+1. Complete all configuration steps, including the function app deployment and the Azure Key Vault access policy configuration.
1. Make sure that BTP logs are flowing into the Microsoft Sentinel workspace:
- 1. Log in to your BTP subaccount and run a few activities that generate logs, such as logins, adding users, changing permissions, changing settings, and so on.
- 1. Allow 20-30 minutes for the logs to start flowing.
- 1. In the **SAP BTP** connector page, confirm that Microsoft Sentinel receives the BTP data, or query the `SAPBTPAuditLog_CL` table directly.
-1. Enable the [workbook](sap-btp-security-content.md#sap-btp-workbook) and the [analytics rules](sap-btp-security-content.md#built-in-analytics-rules) provided as part of the solution by following [these guidelines](../sentinel-solutions-deploy.md#analytics-rule).
+ 1. Sign in to your BTP subaccount and run a few activities that generate logs, such as sign-ins, adding users, changing permissions, and changing settings.
+ 1. Allow 20 to 30 minutes for the logs to start flowing.
+ 1. On the **SAP BTP** connector page, confirm that Microsoft Sentinel receives the BTP data, or query the **SAPBTPAuditLog_CL** table directly (a sample query follows these steps).
+
+1. Enable the [workbook](sap-btp-security-content.md#sap-btp-workbook) and the [analytics rules](sap-btp-security-content.md#built-in-analytics-rules) that are provided as part of the solution by following [these guidelines](../sentinel-solutions-deploy.md#analytics-rule).
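For reference, a minimal sketch of the direct query mentioned in the verification step is shown here. The workspace GUID is a placeholder, and the command assumes the Azure CLI `log-analytics` extension is installed.

```bash
# A minimal sketch: check whether BTP audit records reached the workspace in the last hour.
# The workspace GUID is a placeholder; the table name comes from the solution's data connector.
az monitor log-analytics query \
  --workspace "<sentinel-workspace-guid>" \
  --analytics-query "SAPBTPAuditLog_CL | where TimeGenerated > ago(1h) | summarize count() by bin(TimeGenerated, 10m)"
```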
-## Account auditing configuration considerations
+## Consider your account auditing configurations
+
+The final step in the deployment process is to consider your global account and subaccount auditing configurations.
### Global account auditing configuration
-When you enable audit log retrieval in the BTP cockpit for the Global account: If the subaccount for which you want to entitle the Audit Log Management Service is under a directory, you must entitle the service at the directory level first, and only then you can entitle the service at the subaccount level.
+When you enable audit log retrieval in the BTP cockpit for the global account, and the subaccount for which you want to entitle the Audit Log Management Service is under a directory, you must entitle the service at the directory level first. Only then can you entitle the service at the subaccount level.
+
+### Subaccount auditing configuration
-### Subaccount auditing configuration
+To enable auditing for a subaccount, complete the steps in the [SAP subaccounts audit retrieval API documentation](https://help.sap.com/docs/btp/sap-business-technology-platform/audit-log-retrieval-api-usage-for-subaccounts-in-cloud-foundry-environment).
-To enable auditing for a subaccount, follow the steps in the [SAP subaccounts audit retrieval API documentation](https://help.sap.com/docs/btp/sap-business-technology-platform/audit-log-retrieval-api-usage-for-subaccounts-in-cloud-foundry-environment).
+The API documentation describes how to enable the audit log retrieval by using the Cloud Foundry CLI.
-However, while this guide explains how to enable the audit log retrieval using the Cloud Foundry CLI, you can also retrieve the logs via the UI:
+You also can retrieve the logs via the UI:
-1. In your subaccount Service Marketplace, create an instance of the **Audit Log Management Service**.
-1. Create a service key in the new **Audit Log Management Service** instance.
-1. View the Service key and retrieve the required parameters mentioned in step 2 of the configuration instructions in the data connector UI (**url**, **uaa.url**, **uaa.clientid**, **uaa.clientsecret**).
+1. In your subaccount in SAP Service Marketplace, create an instance of **Audit Log Management Service**.
+1. In the new instance, create a service key.
+1. View the service key and retrieve the required parameters from step 4 of the configuration instructions in the data connector UI (**url**, **uaa.url**, **uaa.clientid**, and **uaa.clientsecret**).
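If you prefer the Cloud Foundry CLI route that the API documentation covers, the following sketch shows roughly equivalent commands. The service plan and the instance and key names are assumptions; verify the exact plan offered in your subaccount's service marketplace before running them.

```bash
# A hedged sketch of the Cloud Foundry CLI alternative to the UI steps above.
# "default" as the plan name and the instance/key names are assumptions.
cf create-service auditlog-management default audit-log-instance
cf create-service-key audit-log-instance sentinel-key
cf service-key audit-log-instance sentinel-key   # prints url, uaa.clientid, uaa.clientsecret, uaa.url
```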
-## Next steps
+## Related content
-In this article, you learned how to deploy the Microsoft Sentinel Solution® for SAP BTP.
->
-> - [Learn how to enable the security content](../sentinel-solutions-deploy.md#analytics-rule)
-> - [Review the solution's security content](sap-btp-security-content.md)
+- [Learn how to enable the security content](../sentinel-solutions-deploy.md#analytics-rule)
+- [Review the solution's security content](sap-btp-security-content.md)
sentinel Deploy Sap Security Content https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/deploy-sap-security-content.md
Title: Deploy the Microsoft Sentinel solution for SAP applications® from the content hub
-description: This article shows you how to deploy the Microsoft Sentinel solution for SAP applications® security content from the content hub into your Microsoft Sentinel workspace. This content makes up the remaining parts of the Microsoft Sentinel Solution for SAP.
+ Title: Deploy Microsoft Sentinel for SAP apps from the content hub
+description: Learn how to deploy the Microsoft Sentinel solution for SAP applications security content from the content hub to your Microsoft Sentinel workspace.
Last updated 03/23/2023+
+# customer intent: As an SAP admin, I want to know how to deploy the Microsoft Sentinel solution for SAP applications from the content hub so that I can plan a deployment.
-# Deploy the Microsoft Sentinel solution for SAP applications® from the content hub
+# Deploy the Microsoft Sentinel solution for SAP applications from the content hub
+
+This article shows you how to deploy the Microsoft Sentinel solution for SAP applications security content from the content hub to your Microsoft Sentinel workspace. This content makes up the remaining parts of the Microsoft Sentinel solution for SAP.
+
+## Prerequisites
-This article shows you how to deploy the Microsoft Sentinel solution for SAP applications® security content from the content hub into your Microsoft Sentinel workspace. This content makes up the remaining parts of the Microsoft Sentinel Solution for SAP.
+To deploy the Microsoft Sentinel solution for SAP applications from the content hub, you need:
-## Deployment milestones
+- A Microsoft Sentinel instance.
+- A defined Microsoft Sentinel workspace, and read and write permissions to the workspace.
+- A Microsoft Sentinel for SAP data connector set up.
+
+## Check deployment milestones
Track your SAP solution deployment journey through this series of articles:
Track your SAP solution deployment journey through this series of articles:
1. [Deployment prerequisites](prerequisites-for-deploying-sap-continuous-threat-monitoring.md)
-1. [Work with the solution across multiple workspaces](cross-workspace.md) (PREVIEW)
+1. [Work with the solution in multiple workspaces](cross-workspace.md) (preview)
-1. [Prepare SAP environment](preparing-sap.md)
+1. [Prepare your SAP environment](preparing-sap.md)
1. [Configure auditing](configure-audit.md)
-1. [Deploy data connector agent](deploy-data-connector-agent-container.md)
+1. [Deploy the data connector agent](deploy-data-connector-agent-container.md)
+
+1. **Deploy the Microsoft Sentinel solution for SAP applications from the content hub** (*You are here*)
-1. **Deploy the Microsoft Sentinel solution for SAP applications® from the content hub (*You are here*)**
+1. [Configure the Microsoft Sentinel solution for SAP applications](deployment-solution-configuration.md)
-1. [Configure Microsoft Sentinel solution for SAP® applications](deployment-solution-configuration.md)
+1. Optional deployment steps:
-1. Optional deployment steps
- - [Configure data connector to use SNC](configure-snc.md)
+ - [Configure the SAP data connector to use SNC](configure-snc.md)
- [Collect SAP HANA audit logs](collect-sap-hana-audit-logs.md) - [Configure audit log monitoring rules](configure-audit-log-rules.md)
- - [Deploy SAP connector manually](sap-solution-deploy-alternate.md)
+ - [Deploy the SAP data connector manually](sap-solution-deploy-alternate.md)
   - [Select SAP ingestion profiles](select-ingestion-profiles.md) 
## Deploy the security content from the content hub 
Deploy the [SAP security content](sap-solution-security-content.md) from the Microsoft Sentinel **Content hub** and **Watchlists** areas.
-Deploying the **Microsoft Sentinel solution for SAP® applications** causes the Microsoft Sentinel for SAP data connector to be displayed in the Microsoft Sentinel **Data connectors** area. The solution also deploys the **SAP - System Applications and Products** workbook and SAP-related analytics rules.
+Deploying the Microsoft Sentinel solution for SAP applications causes the Microsoft Sentinel for SAP data connector to be displayed in the Microsoft Sentinel **Data connectors** area. The solution also deploys the **SAP - System Applications and Products** workbook and SAP-related analytics rules.
-To deploy SAP solution security content, do the following:
+To deploy SAP solution security content:
1. In Microsoft Sentinel, on the left pane, select **Content hub (Preview)**. The **Content hub (Preview)** page displays a filtered, searchable list of solutions.
-1. To open the SAP solution page, select **Microsoft Sentinel solution for SAP® applications**.
+1. To open the SAP solution page, select **Microsoft Sentinel solution for SAP applications**.
- :::image type="content" source="./media/deploy-sap-security-content/sap-solution.png" alt-text="Screenshot of the 'Microsoft Sentinel solution for SAP® applications' solution pane." lightbox="./media/deploy-sap-security-content/sap-solution.png":::
+ :::image type="content" source="./media/deploy-sap-security-content/sap-solution.png" alt-text="Screenshot that shows the Microsoft Sentinel solution for SAP applications solution pane." lightbox="./media/deploy-sap-security-content/sap-solution.png":::
-1. To launch the solution deployment wizard, select **Create**, and then enter the details of the Azure subscription and resource group.
+1. To start the solution deployment wizard, select **Create**, and then enter the details of the Azure subscription and resource group.
-1. For the **Deployment target workspace**, select the Log Analytics workspace (the one used by Microsoft Sentinel) where you want to deploy the solution. <a id="multi-workspace"></a>
+1. For the **Deployment target workspace**, select the Log Analytics workspace (the one that Microsoft Sentinel uses) where you want to deploy the solution.<a id="multi-workspace"></a>
-1. If you want to [work with the Microsoft Sentinel solution for SAP® applications across multiple workspaces](cross-workspace.md) (PREVIEW), do one of the following, select **Some of the data is on a different workspace**.
- 1. Under **Configure the workspace where the SOC data resides in**, select the SOC subscription and workspace.
- 1. Under **Configure the workspace where the SAP data resides in**, select the SAP subscription and workspace.
+1. If you want to [work with the Microsoft Sentinel solution for SAP applications in multiple workspaces](cross-workspace.md) (preview), select **Some of the data is on a different workspace**, and then do the following steps:
- For example:
+ 1. Under **Configure the workspace where the SOC data resides in**, select the SOC subscription and workspace.
- :::image type="content" source="./media/deploy-sap-security-content/sap-multi-workspace.png" alt-text="Screenshot of how to configure the Microsoft Sentinel solution for SAP® applications to work across multiple workspaces.":::
+ 1. Under **Configure the workspace where the SAP data resides in**, select the SAP subscription and workspace.
- > [!Note]
- > If you want the SAP and SOC data to be kept on the same workspace with no additional access controls, do not select **Some of the data is on a different workspace**. If you want the SOC and SAP data to be kept on the same workspace, but to apply additional access controls, review [this scenario](cross-workspace.md#scenario-2-sap-data-is-kept-in-the-soc-workspace).
+ For example:
-1. Select **Next** to cycle through the **Data Connectors**, **Analytics**, and **Workbooks** tabs, where you can learn about the components that will be deployed with this solution.
+ :::image type="content" source="./media/deploy-sap-security-content/sap-multi-workspace.png" alt-text="Screenshot that shows how to configure the Microsoft Sentinel solution for SAP applications to work across multiple workspaces.":::
- For more information, see [Microsoft Sentinel solution for SAP® applications: security content reference](sap-solution-security-content.md).
+ > [!NOTE]
+ > If you want the SAP and SOC data to be kept on the same workspace with no additional access controls, do not select **Some of the data is on a different workspace**. If you want the SOC and SAP data to be kept on the same workspace, but to apply additional access controls, review [this scenario](cross-workspace.md#scenario-2-sap-data-kept-only-in-the-soc-workspace).
-1. On the **Review + create tab** pane, wait for the **Validation Passed** message, then select **Create** to deploy the solution.
+1. Select **Next** to cycle through the **Data Connectors**, **Analytics**, and **Workbooks** tabs, where you can learn about the components that are deployed with this solution.
- > [!TIP]
- > You can also select **Download a template** for a link to deploy the solution as code.
+ For more information, see [Microsoft Sentinel solution for SAP applications: security content reference](sap-solution-security-content.md).
-1. After the deployment is completed, a confirmation message appears at the upper right.
+1. On the **Review + create tab** pane, wait for the **Validation Passed** message, and then select **Create** to deploy the solution.
- To display the newly deployed content, go to:
+ > [!TIP]
+ > You can also select **Download a template** for a link to deploy the solution as code.
- - **Threat Management** > **Workbooks** > **My workbooks**, to find the [built-in SAP workbooks](sap-solution-security-content.md#built-in-workbooks).
- - **Configuration** > **Analytics** to find a series of [SAP-related analytics rules](sap-solution-security-content.md#built-in-analytics-rules).
+1. When deployment is finished, to display the newly deployed content:
-1. In Microsoft Sentinel, go to the **Microsoft Sentinel for SAP** data connector to confirm the connection:
+ - For the [built-in SAP workbooks](sap-solution-security-content.md#built-in-workbooks), go to **Threat Management** > **Workbooks** > **My workbooks**.
- :::image type="content" source="./media/deploy-sap-security-content/sap-data-connector.png" alt-text="Screenshot of the Microsoft Sentinel for SAP data connector page." lightbox="media/deploy-sap-security-content/sap-data-connector.png":::
+ - For a series of [SAP-related analytics rules](sap-solution-security-content.md#built-in-analytics-rules), go to **Configuration** > **Analytics**.
- SAP ABAP logs are displayed on the Microsoft Sentinel **Logs** page, under **Custom logs**:
+1. In Microsoft Sentinel, go to the **Microsoft Sentinel for SAP** data connector to confirm the connection:
- :::image type="content" source="./media/deploy-sap-security-content/sap-logs-in-sentinel.png" alt-text="Screenshot of the SAP ABAP logs in the 'Custom Logs' area in Microsoft Sentinel." lightbox="media/deploy-sap-security-content/sap-logs-in-sentinel.png":::
+ :::image type="content" source="./media/deploy-sap-security-content/sap-data-connector.png" alt-text="Screenshot that shows the Microsoft Sentinel for SAP data connector page." lightbox="media/deploy-sap-security-content/sap-data-connector.png":::
- For more information, see [Microsoft Sentinel solution for SAP® applications solution logs reference](sap-solution-log-reference.md).
+ SAP ABAP logs are displayed on the Microsoft Sentinel **Logs** page, under **Custom logs**:
-## Next steps
+ :::image type="content" source="./media/deploy-sap-security-content/sap-logs-in-sentinel.png" alt-text="Screenshot that shows the SAP ABAP logs in the Custom Logs area in Microsoft Sentinel." lightbox="media/deploy-sap-security-content/sap-logs-in-sentinel.png":::
-Learn more about the Microsoft Sentinel solution for SAP® applications:
+ For more information, see [Microsoft Sentinel solution for SAP applications solution logs reference](sap-solution-log-reference.md).
-- [Deploy Microsoft Sentinel solution for SAP® applications](deployment-overview.md)
-- [Prerequisites for deploying Microsoft Sentinel solution for SAP® applications](prerequisites-for-deploying-sap-continuous-threat-monitoring.md)
-- [Deploy SAP Change Requests (CRs) and configure authorization](preparing-sap.md)
-- [Deploy the solution content from the content hub](deploy-sap-security-content.md)
-- [Deploy and configure the container hosting the SAP data connector agent](deploy-data-connector-agent-container.md)
-- [Monitor the health of your SAP system](../monitor-sap-system-health.md)
-- [Deploy the Microsoft Sentinel for SAP data connector with SNC](configure-snc.md)
-- [Enable and configure SAP auditing](configure-audit.md)
-- [Collect SAP HANA audit logs](collect-sap-hana-audit-logs.md)
+## Troubleshooting and reference
-Troubleshooting:
+For troubleshooting information, see these articles:
-- [Troubleshoot your Microsoft Sentinel solution for SAP® applications deployment](sap-deploy-troubleshoot.md)
+- [Troubleshoot your Microsoft Sentinel solution for SAP applications deployment](sap-deploy-troubleshoot.md)
+- [Microsoft Sentinel solutions](../sentinel-solutions.md)
-Reference files:
+For reference, see these articles:
-- [Microsoft Sentinel solution for SAP® applications solution data reference](sap-solution-log-reference.md)-- [Microsoft Sentinel solution for SAP® applications: security content reference](sap-solution-security-content.md)
+- [Microsoft Sentinel solution for SAP applications solution data reference](sap-solution-log-reference.md)
+- [Microsoft Sentinel solution for SAP applications: Security content reference](sap-solution-security-content.md)
- [Update script reference](reference-update.md) - [Systemconfig.ini file reference](reference-systemconfig.md)
-For more information, see [Microsoft Sentinel solutions](../sentinel-solutions.md).
+## Related content
+
+- [Deploy Microsoft Sentinel solution for SAP applications](deployment-overview.md)
+- [Prerequisites for deploying Microsoft Sentinel solution for SAP applications](prerequisites-for-deploying-sap-continuous-threat-monitoring.md)
+- [Deploy SAP Change Requests and configure authorization](preparing-sap.md)
sentinel Deployment Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/deployment-overview.md
Title: Deploy Microsoft Sentinel solution for SAP® applications in Microsoft Sentinel
-description: This article introduces you to the process of deploying the Microsoft Sentinel solution for SAP® applications.
+ Title: Deploy Microsoft Sentinel solution for SAP applications
+description: Get an introduction to the process of deploying the Microsoft Sentinel solution for SAP applications.
-+ Last updated 06/19/2023+
+# customer intent: As a business user or decision maker, I want to get an overview of how to deploy the Microsoft Sentinel solution for SAP applications so that I know the scope of the information I need and how to access it.
-# Deploy Microsoft Sentinel solution for SAP® applications
+# Deploy Microsoft Sentinel solution for SAP applications
-This article introduces you to the process of deploying the Microsoft Sentinel solution for SAP® applications. The full process is detailed in a whole set of articles linked under [Deployment milestones](#deployment-milestones).
+This article introduces you to the process of deploying the Microsoft Sentinel solution for SAP applications. The full process is detailed in a set of articles linked under [Deployment milestones](#deployment-milestones).
> [!TIP] > Learn how to [monitor the health and role of your SAP systems](../monitor-sap-system-health.md).
-Microsoft Sentinel solution for SAP® applications is certified for SAP S/4HANA® Cloud, Private Edition RISE with SAP and SAP S/4 on-premises. Learn more about this [certification](solution-overview.md#certification).
+Microsoft Sentinel solution for SAP applications is certified for SAP S/4HANA Cloud, Private Edition RISE with SAP, and SAP S/4 on-premises. Learn more about this [certification](solution-overview.md#certification).
> [!NOTE]
-> If needed, you can [update an existing Microsoft Sentinel for SAP data connector](update-sap-data-connector.md) to its latest version.
+> [Update an existing Microsoft Sentinel for SAP data connector](update-sap-data-connector.md) to the latest version.
+
+## What is the Microsoft Sentinel solution for SAP applications?
-## Overview
+The Microsoft Sentinel solution for SAP applications is a [Microsoft Sentinel solution](../sentinel-solutions.md) that you can use to monitor your SAP systems. Use the solution to detect sophisticated threats throughout the business logic and application layers of your SAP applications. The solution includes the following components:
-**Microsoft Sentinel solution for SAP® applications** is a [Microsoft Sentinel solution](../sentinel-solutions.md) that you can use to monitor your SAP systems and detect sophisticated threats throughout the business logic and application layers. The solution includes the following components:
- The Microsoft Sentinel for SAP data connector for data ingestion. 
- Analytics rules and watchlists for threat detection.
-- Functions for easy data access.
-- Workbooks for interactive data visualization.
+- Functions that you can use for easy data access.
+- Workbooks that you can use to create interactive data visualization.
- Watchlists for customization of the built-in solution parameters.
-- Playbooks for automating responses to threats.
+- Playbooks that you can use to automate responses to threats.
> [!NOTE]
> The Microsoft Sentinel for SAP solution is free to install, but there is an [additional hourly charge](https://azure.microsoft.com/pricing/offers/microsoft-sentinel-sap-promo/) for activating and using the solution on production systems.
>
> - The additional hourly charge applies to connected production systems only.
> - Microsoft Sentinel identifies a production system by looking at the configuration on the SAP system. To do this, Microsoft Sentinel searches for a production entry in the T000 table.
->
+>
> For more information, see [View the roles of your connected production systems](../monitor-sap-system-health.md).
-The Microsoft Sentinel for SAP data connector is an agent that's installed on a VM, a physical server, or a Kubernetes cluster. The agent collects application logs from across the entire SAP system landscape, for all of your SAP SIDs, and sends those logs to your Log Analytics workspace in Microsoft Sentinel. Use the other content in the [Threat Monitoring for SAP solution](sap-solution-security-content.md) – the analytics rules, workbooks, and watchlists – to gain insight into your organization's SAP environment and to detect and respond to security threats.
-For example, the following image shows a multi-SID SAP landscape with a split between productive and non-productive systems, including the SAP Business Technology Platform. All of the systems in this image are onboarded to Microsoft Sentinel for the SAP solution.
+The Microsoft Sentinel for SAP data connector is an agent that's installed on a virtual machine (VM), physical server, or Kubernetes cluster. The agent collects application logs for all of your SAP SIDs from across the entire SAP system landscape, and then sends those logs to your Log Analytics workspace in Microsoft Sentinel. Use the other content in the [Threat Monitoring for SAP solution](sap-solution-security-content.md), including the analytics rules, workbooks, and watchlists, to gain insight into your organization's SAP environment and to detect and respond to security threats.
+For example, the following image shows a multi-SID SAP landscape with a split between production and nonproduction systems, including the SAP Business Technology Platform. All the systems in this image are onboarded to Microsoft Sentinel for the SAP solution.
+ ## Deployment milestones
-Follow your deployment journey through this series of articles, in which you'll learn how to navigate each of the following steps.
+Follow your deployment journey through this series of articles, in which you learn how to navigate each of the following steps.
> [!NOTE]
-> If needed, you can [update an existing Microsoft Sentinel for SAP data connector](update-sap-data-connector.md) to its latest version.
+> [Update an existing Microsoft Sentinel for SAP data connector](update-sap-data-connector.md) to the latest version.
| Milestone | Article |
| | - |
-| **1. Deployment overview** | **YOU ARE HERE** |
-| **2. Plan architecture** | Learn about [working with the solution across multiple workspaces](cross-workspace.md) (PREVIEW) |
-| **3. Deployment prerequisites** | [Prerequisites for deploying the Microsoft Sentinel Solution for SAP](prerequisites-for-deploying-sap-continuous-threat-monitoring.md) |
-| **4. Prepare SAP environment** | [Deploying SAP CRs and configuring authorization](preparing-sap.md) |
+| **1. Deployment overview** | *YOU ARE HERE* |
+| **2. Plan your architecture** | Learn how to [work with the solution in multiple workspaces](cross-workspace.md) (preview) |
+| **3. Deployment prerequisites** | [Prerequisites for deploying the Microsoft Sentinel solution for SAP](prerequisites-for-deploying-sap-continuous-threat-monitoring.md) |
+| **4. Prepare your SAP environment** | [Deploy SAP change requests and configure authorization](preparing-sap.md) |
| **5. Configure auditing** | [Configure auditing](configure-audit.md) |
-| **6. Deploy the solution content from the content hub** | [Deploy the Microsoft Sentinel solution for SAP applications® from the content hub](deploy-sap-security-content.md) |
-| **7. Deploy data connector agent** | [Deploy and configure the container hosting the data connector agent](deploy-data-connector-agent-container.md) |
-| **8. Configure Microsoft Sentinel Solution for SAP** | [Configure Microsoft Sentinel Solution for SAP](deployment-solution-configuration.md) |
-| **9. Optional steps** | - [Configure Microsoft Sentinel for SAP data connector to use SNC](configure-snc.md)<br>- [Collect SAP HANA audit logs](collect-sap-hana-audit-logs.md)<br>- [Configure audit log monitoring rules](configure-audit-log-rules.md)<br>- [Deploy SAP connector manually](sap-solution-deploy-alternate.md)<br>- [Select SAP ingestion profiles](select-ingestion-profiles.md) |
+| **6. Deploy the solution content from the content hub** | [Deploy the Microsoft Sentinel solution for SAP applications from the content hub](deploy-sap-security-content.md) |
+| **7. Deploy the data connector agent** | [Deploy and configure the container hosting the data connector agent](deploy-data-connector-agent-container.md) |
+| **8. Configure the Microsoft Sentinel solution for SAP** | [Configure the Microsoft Sentinel solution for SAP](deployment-solution-configuration.md) |
+| **9. Optional steps** | - [Configure the Microsoft Sentinel for SAP data connector to use SNC](configure-snc.md)<br>- [Collect SAP HANA audit logs](collect-sap-hana-audit-logs.md)<br>- [Configure audit log monitoring rules](configure-audit-log-rules.md)<br>- [Deploy SAP connector manually](sap-solution-deploy-alternate.md)<br>- [Select SAP ingestion profiles](select-ingestion-profiles.md) |
+
+## Next step
-## Next steps
+Begin the deployment of the Microsoft Sentinel solution for SAP applications by reviewing the prerequisites:
-Begin the deployment of the Microsoft Sentinel solution for SAP® applications by reviewing the prerequisites:
> [!div class="nextstepaction"] > [Prerequisites](prerequisites-for-deploying-sap-continuous-threat-monitoring.md)
site-recovery Azure To Azure Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-support-matrix.md
Title: Support matrix for Azure VM disaster recovery with Azure Site Recovery description: Summarizes support for Azure VMs disaster recovery to a secondary region with Azure Site Recovery. Previously updated : 02/08/2024 Last updated : 02/14/2024
RHEL 9.0 <br> RHEL 9.1 <br> RHEL 9.2 <br> RHEL 9.3 | 9.60 | 5.14.0-70.13.1.el9_
16.04 LTS | [9.55](https://support.microsoft.com/topic/update-rollup-68-for-azure-site-recovery-a81c2d22-792b-4cde-bae5-dc7df93a7810) | No new 16.04 LTS kernels supported in this release. | 
16.04 LTS | [9.54](https://support.microsoft.com/topic/update-rollup-67-for-azure-site-recovery-9fa97dbb-4539-4b6c-a0f8-c733875a119f)| No new 16.04 LTS kernels supported in this release. | 
|||
-18.04 LTS | [9.60]() | No new 18.04 LTS kernels supported in this release. |
+18.04 LTS | [9.60]() | 4.15.0-1168-azure <br> 4.15.0-1169-azure <br> 4.15.0-1170-azure <br> 4.15.0-1171-azure <br> 4.15.0-1172-azure <br> 4.15.0-1173-azure <br> 4.15.0-214-generic <br> 4.15.0-216-generic <br> 4.15.0-218-generic <br> 4.15.0-219-generic <br> 4.15.0-220-generic <br> 4.15.0-221-generic <br> 5.4.0-1110-azure <br> 5.4.0-1111-azure <br> 5.4.0-1112-azure <br> 5.4.0-1113-azure <br> 5.4.0-1115-azure <br> 5.4.0-1116-azure <br> 5.4.0-1117-azure <br> 5.4.0-1118-azure <br> 5.4.0-1119-azure <br> 5.4.0-1120-azure <br> 5.4.0-1121-azure <br> 5.4.0-1122-azure <br> 5.4.0-152-generic <br> 5.4.0-153-generic <br> 5.4.0-155-generic <br> 5.4.0-156-generic <br> 5.4.0-159-generic <br> 5.4.0-162-generic <br> 5.4.0-163-generic <br> 5.4.0-164-generic <br> 5.4.0-165-generic <br> 5.4.0-166-generic <br> 5.4.0-167-generic <br> 5.4.0-169-generic <br> 5.4.0-170-generic |
18.04 LTS | [9.57](https://support.microsoft.com/topic/e94901f6-7624-4bb4-8d43-12483d2e1d50) | No new 18.04 LTS kernels supported in this release. | 
18.04 LTS | [9.56](https://support.microsoft.com/topic/update-rollup-69-for-azure-site-recovery-kb5033791-a41c2400-0079-4f93-b4a4-366660d0a30d) | No new 18.04 LTS kernels supported in this release. | 
18.04 LTS |[9.55](https://support.microsoft.com/topic/update-rollup-68-for-azure-site-recovery-a81c2d22-792b-4cde-bae5-dc7df93a7810) | 4.15.0-1166-azure <br> 4.15.0-1167-azure <br> 4.15.0-212-generic <br> 4.15.0-213-generic <br> 5.4.0-1108-azure <br> 5.4.0-1109-azure <br> 5.4.0-149-generic <br> 5.4.0-150-generic | 
18.04 LTS |[9.54](https://support.microsoft.com/topic/update-rollup-67-for-azure-site-recovery-9fa97dbb-4539-4b6c-a0f8-c733875a119f)| 4.15.0-208-generic <br> 4.15.0-209-generic <br> 5.4.0-1105-azure <br> 5.4.0-1106-azure <br> 5.4.0-146-generic <br> 4.15.0-1163-azure <br> 4.15.0-1164-azure <br> 4.15.0-1165-azure <br> 4.15.0-210-generic <br> 4.15.0-211-generic <br> 5.4.0-1107-azure <br> 5.4.0-147-generic <br> 5.4.0-147-generic <br> 5.4.0-148-generic <br> 4.15.0-212-generic <br> 4.15.0-1166-azure <br> 5.4.0-149-generic <br> 5.4.0-150-generic <br> 5.4.0-1108-azure <br> 5.4.0-1109-azure | 
|||
-20.04 LTS | [9.60]() | No new 20.04 LTS kernels supported in this release. |
+20.04 LTS | [9.60]() | 5.15.0-1054-azure <br> 5.15.0-92-generic <br> 5.4.0-1122-azure <br> 5.4.0-170-generic |
20.04 LTS | [9.57](https://support.microsoft.com/topic/e94901f6-7624-4bb4-8d43-12483d2e1d50) | 5.15.0-1052-azure <br> 5.15.0-1053-azure <br> 5.15.0-89-generic <br> 5.15.0-91-generic <br> 5.4.0-1120-azure <br> 5.4.0-1121-azure <br> 5.4.0-167-generic <br> 5.4.0-169-generic | 
20.04 LTS | [9.56](https://support.microsoft.com/topic/update-rollup-69-for-azure-site-recovery-kb5033791-a41c2400-0079-4f93-b4a4-366660d0a30d) | 5.15.0-1049-azure <br> 5.15.0-1050-azure <br> 5.15.0-1051-azure <br> 5.15.0-86-generic <br> 5.15.0-87-generic <br> 5.15.0-88-generic <br> 5.4.0-1117-azure <br> 5.4.0-1118-azure <br> 5.4.0-1119-azure <br> 5.4.0-164-generic <br> 5.4.0-165-generic <br> 5.4.0-166-generic | 
20.04 LTS |[9.55](https://support.microsoft.com/topic/update-rollup-68-for-azure-site-recovery-a81c2d22-792b-4cde-bae5-dc7df93a7810) | 5.15.0-1039-azure <br> 5.15.0-1040-azure <br> 5.15.0-1041-azure <br> 5.15.0-73-generic <br> 5.15.0-75-generic <br> 5.15.0-76-generic <br> 5.4.0-1108-azure <br> 5.4.0-1109-azure <br> 5.4.0-1110-azure <br> 5.4.0-1111-azure <br> 5.4.0-149-generic <br> 5.4.0-150-generic <br> 5.4.0-152-generic <br> 5.4.0-153-generic <br> 5.4.0-155-generic <br> 5.4.0-1112-azure <br> 5.15.0-78-generic <br> 5.15.0-1042-azure <br> 5.15.0-79-generic <br> 5.4.0-156-generic <br> 5.15.0-1047-azure <br> 5.15.0-84-generic <br> 5.4.0-1116-azure <br> 5.4.0-163-generic <br> 5.15.0-1043-azure <br> 5.15.0-1045-azure <br> 5.15.0-1046-azure <br> 5.15.0-82-generic <br> 5.15.0-83-generic | 
20.04 LTS |[9.54](https://support.microsoft.com/topic/update-rollup-67-for-azure-site-recovery-9fa97dbb-4539-4b6c-a0f8-c733875a119f)| 5.15.0-1035-azure <br> 5.15.0-1036-azure <br> 5.15.0-69-generic <br> 5.4.0-1105-azure <br> 5.4.0-1106-azure <br> 5.4.0-146-generic <br> 5.4.0-147-generic <br> 5.15.0-1037-azure <br> 5.15.0-1038-azure <br> 5.15.0-70-generic <br> 5.15.0-71-generic <br> 5.15.0-72-generic <br> 5.4.0-1107-azure <br> 5.4.0-148-generic <br> 5.4.0-149-generic <br> 5.4.0-150-generic <br> 5.4.0-1108-azure <br> 5.4.0-1109-azure <br> 5.15.0-73-generic <br> 5.15.0-1039-azure | 
|||
-22.04 LTS |[9.60]()| 5.19.0-1025-azure <br> 5.19.0-1026-azure <br> 5.19.0-1027-azure <br> 5.19.0-41-generic <br> 5.19.0-42-generic <br> 5.19.0-43-generic <br> 5.19.0-45-generic <br> 5.19.0-46-generic <br> 5.19.0-50-generic <br> 6.2.0-1005-azure <br> 6.2.0-1006-azure <br> 6.2.0-1007-azure <br> 6.2.0-1008-azure <br> 6.2.0-1011-azure <br> 6.2.0-1012-azure <br> 6.2.0-1014-azure <br> 6.2.0-1015-azure <br> 6.2.0-1016-azure <br> 6.2.0-1017-azure <br> 6.2.0-1018-azure <br> 6.2.0-25-generic <br> 6.2.0-26-generic <br> 6.2.0-31-generic <br> 6.2.0-32-generic <br> 6.2.0-33-generic <br> 6.2.0-34-generic <br> 6.2.0-35-generic <br> 6.2.0-36-generic <br> 6.2.0-37-generic <br> 6.2.0-39-generic <br> 6.5.0-1007-azure <br> 6.5.0-1009-azure <br> 6.5.0-1010-azure <br> 6.5.0-14-generic |
+22.04 LTS |[9.60]()| 5.19.0-1025-azure <br> 5.19.0-1026-azure <br> 5.19.0-1027-azure <br> 5.19.0-41-generic <br> 5.19.0-42-generic <br> 5.19.0-43-generic <br> 5.19.0-45-generic <br> 5.19.0-46-generic <br> 5.19.0-50-generic <br> 6.2.0-1005-azure <br> 6.2.0-1006-azure <br> 6.2.0-1007-azure <br> 6.2.0-1008-azure <br> 6.2.0-1011-azure <br> 6.2.0-1012-azure <br> 6.2.0-1014-azure <br> 6.2.0-1015-azure <br> 6.2.0-1016-azure <br> 6.2.0-1017-azure <br> 6.2.0-1018-azure <br> 6.2.0-25-generic <br> 6.2.0-26-generic <br> 6.2.0-31-generic <br> 6.2.0-32-generic <br> 6.2.0-33-generic <br> 6.2.0-34-generic <br> 6.2.0-35-generic <br> 6.2.0-36-generic <br> 6.2.0-37-generic <br> 6.2.0-39-generic <br> 6.5.0-1007-azure <br> 6.5.0-1009-azure <br> 6.5.0-1010-azure <br> 6.5.0-14-generic <br> 5.15.0-1054-azure <br> 5.15.0-92-generic <br>6.2.0-1019-azure <br>6.5.0-1011-azure <br>6.5.0-15-generic |
22.04 LTS | [9.57](https://support.microsoft.com/topic/e94901f6-7624-4bb4-8d43-12483d2e1d50) | 5.15.0-1052-azure <br> 5.15.0-1053-azure <br> 5.15.0-76-generic <br> 5.15.0-89-generic <br> 5.15.0-91-generic | 
22.04 LTS | [9.56](https://support.microsoft.com/topic/update-rollup-69-for-azure-site-recovery-kb5033791-a41c2400-0079-4f93-b4a4-366660d0a30d) | 5.15.0-1049-azure <br> 5.15.0-1050-azure <br> 5.15.0-1051-azure <br> 5.15.0-86-generic <br> 5.15.0-87-generic <br> 5.15.0-88-generic | 
22.04 LTS |[9.55](https://support.microsoft.com/topic/update-rollup-68-for-azure-site-recovery-a81c2d22-792b-4cde-bae5-dc7df93a7810)| 5.15.0-1039-azure <br> 5.15.0-1040-azure <br> 5.15.0-1041-azure <br> 5.15.0-73-generic <br> 5.15.0-75-generic <br> 5.15.0-76-generic <br> 5.15.0-78-generic <br> 5.15.0-1042-azure <br> 5.15.0-1044-azure <br> 5.15.0-79-generic <br> 5.15.0-1047-azure <br> 5.15.0-84-generic <br> 5.15.0-1045-azure <br> 5.15.0-1046-azure <br> 5.15.0-82-generic <br> 5.15.0-83-generic |
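When you compare a VM against the kernel lists above, you can check which kernel the machine is actually running with a standard Linux command; this is a generic check, not an Azure Site Recovery tool.

```bash
# Print the running kernel release on the Ubuntu VM, then compare it against the
# supported-kernel column for the Mobility service (agent) version you have installed.
uname -r
```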
spring-apps How To Enterprise Application Configuration Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-enterprise-application-configuration-service.md
Title: Use Application Configuration Service for Tanzu with the Azure Spring Apps Enterprise plan
+ Title: Use Application Configuration Service for Tanzu
description: Learn how to use Application Configuration Service for Tanzu with the Azure Spring Apps Enterprise plan.
You can choose the version of Application Configuration Service when you create
Application Configuration Service supports Azure DevOps, GitHub, GitLab, and Bitbucket for storing your configuration files.
-To manage the service settings, open the **Settings** section and add a new entry under the **Repositories** section.
+To manage the service settings, open the **Settings** section. In this section, you can configure the following key aspects:
+- **Generation**: Upgrade the service generation.
+- **Refresh Interval**: Adjust the frequency at which the service checks for updates from Git repositories.
+- **Repositories**: Add new entries, or modify existing ones. This function enables you to control which repositories the service monitors and uses to pull data.
-The following table describes the properties for each entry.
+
+If your current service generation is **Gen1**, you can upgrade to **Gen2** for better performance. For more information, see the [Upgrade from Gen1 to Gen2](#upgrade-from-gen1-to-gen2) section.
+
+The **Refresh Interval** specifies the frequency (in seconds) for checking updates in the repository. The minimum value is *0*, which disables automatic refresh. For optimal performance, set this interval to a minimum value of 60 seconds.
+
+The following table describes the properties for each repository entry:
| Property | Required? | Description |
||--|-|
The following table describes the properties for each entry.
Configuration is pulled from Git backends using what you define in a pattern. A pattern is a combination of *{application}/{profile}* as described in the following guidelines. 
- *{application}* - The name of an application whose configuration you're retrieving. The value `application` is considered the default application and includes configuration information shared across multiple applications. Any other value refers to a specific application and includes properties for both the specific application and shared properties for the default application.
-- *{profile}* - Optional. The name of a profile whose properties you may be retrieving. An empty value, or the value `default`, includes properties that are shared across profiles. Non-default values include properties for the specified profile and properties for the default profile.
+- *{profile}* - Optional. The name of a profile whose properties you can retrieve. An empty value, or the value `default`, includes properties that are shared across profiles. Non-default values include properties for the specified profile and properties for the default profile.
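To make the pattern syntax concrete, the following hedged Azure CLI sketch registers a repository entry whose patterns combine an application name and a profile. The resource names, repository URI, and pattern values are placeholders, and the command assumes the Azure CLI `spring` extension is installed.

```bash
# A hedged sketch: register a Git repository entry in Application Configuration Service.
# "sample-app,sample-app/dev" follows the {application}/{profile} pattern described above;
# all names and the URI are placeholders.
az spring application-configuration-service git repo add \
  --resource-group <resource-group> \
  --service <spring-apps-instance> \
  --name sample-repo \
  --uri https://github.com/<org>/<config-repo>.git \
  --label main \
  --patterns "sample-app,sample-app/dev"
```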
### Authentication
Use the following steps to upgrade from Gen1 to Gen2:
1. In the Azure portal, navigate to the Application Configuration Service page for your Azure Spring Apps service instance.
-1. Select the **Settings** section and then select **Gen 2** in the **Generation** dropdown menu.
+1. Select the **Settings** section and then select **Gen2** in the **Generation** dropdown menu.
:::image type="content" source="media/how-to-enterprise-application-configuration-service/configuration-server-upgrade-gen2.png" alt-text="Screenshot of the Azure portal that shows the Application Configuration Service page with the Settings tab showing and the Generation menu open." lightbox="media/how-to-enterprise-application-configuration-service/configuration-server-upgrade-gen2.png"::: 1. Select **Validate** to validate access to the target URI. After validation completes successfully, select **Apply** to update the configuration settings.
- :::image type="content" source="media/how-to-enterprise-application-configuration-service/configuration-server-upgrade-gen2-settings.png" alt-text="Screenshot of the Azure portal that shows the Application Configuration Service page with the Settings tab showing and the Validate button highlighted." lightbox="media/how-to-enterprise-application-configuration-service/configuration-server-upgrade-gen2-settings.png":::
+ :::image type="content" source="media/how-to-enterprise-application-configuration-service/configuration-server-upgrade-gen2-settings.png" alt-text="Screenshot of the Azure portal that shows the Application Configuration Service page and the Settings tab with the Validate button highlighted." lightbox="media/how-to-enterprise-application-configuration-service/configuration-server-upgrade-gen2-settings.png":::
## Polyglot support
The Application Configuration Service also supports polyglot apps like dotNET, G
## Refresh strategies
-Use the following steps to refresh your Java Spring Boot application configuration after you update the configuration file in the Git repository.
+When you modify and commit your configurations in a Git repository, several steps are involved before these changes are reflected in your applications. This process, though automated, involves the following distinct stages and components, each with its own timing and behavior:
-1. Load the configuration to Application Configuration Service.
+- Polling by Application Configuration Service: The Application Configuration Service regularly polls the backend Git repositories to detect any changes. This polling occurs at a set frequency, defined by the refresh interval. When a change is detected, Application Configuration Service updates the Kubernetes `ConfigMap`.
+- ConfigMap update and interaction with kubelet cache: In Azure Spring Apps, this `ConfigMap` is mounted as a data volume to the relevant application. However, there's a natural delay in this process due to the frequency at which the kubelet refreshes its cache to recognize changes in `ConfigMap`.
+- Application reads updated configuration: Your application running in the Azure Spring Apps environment can access the updated configuration values. The existing beans in the Spring Context aren't automatically refreshed to use the updated configurations.
- Azure Spring Apps manages the refresh frequency, which is set to 60 seconds.
+These stages are summarized in the following diagram:
-1. Load the configuration to your application.
-A Spring application holds the properties as the beans of the Spring Application Context via the Environment interface. The following list shows several ways to load the new configurations:
+You can adjust the polling refresh interval of the Application Configuration Service to align with your specific needs. To apply the updated configurations in your application, a restart or refresh action is necessary.
+
+In Spring applications, properties are held or referenced as beans within the Spring Context. To load new configurations, consider using the following methods:
- Restart the application. Restarting the application always loads the new configuration.
- Call the `/actuator/refresh` endpoint exposed on the config client via the Spring Actuator.
- To use this method, add the following dependency to your configuration client's *pom.xml* file.
+ To use this method, add the following dependency to your configuration client's *pom.xml* file.
+
+ ```xml
+ <dependency>
+ <groupId>org.springframework.boot</groupId>
+ <artifactId>spring-boot-starter-actuator</artifactId>
+ </dependency>
+ ```
- ``` xml
- <dependency>
- <groupId>org.springframework.boot</groupId>
- <artifactId>spring-boot-starter-actuator</artifactId>
- </dependency>
- ```
+ You can also enable the actuator endpoint by adding the following configuration:
- You can also enable the actuator endpoint by adding the following configurations:
+ ```properties
+ management.endpoints.web.exposure.include=refresh, bus-refresh, beans, env
+ ```
- ```properties
- management.endpoints.web.exposure.include=refresh, bus-refresh, beans, env
- ```
+ After you reload the property sources by calling the `/actuator/refresh` endpoint, the attributes bound with `@Value` in the beans having the annotation `@RefreshScope` are refreshed.
- After you reload the property sources by calling the `/actuator/refresh` endpoint, the attributes bound with `@Value` in the beans having the annotation `@RefreshScope` are refreshed.
+ ```java
+ @Service
+ @Getter @Setter
+ @RefreshScope
+ public class MyService {
+ @Value("${config.activated}") // example property key; replace with your own configuration property
+ private Boolean activated;
+ }
+ ```
- ``` java
- @Service
- @Getter @Setter
- @RefreshScope
- public class MyService {
- @Value
- private Boolean activated;
- }
- ```
+ Use curl with the application endpoint to refresh the new configuration, as shown in the following example:
- Use curl with the application endpoint to refresh the new configuration.
+ ```bash
+ curl -X POST http://{app-endpoint}/actuator/refresh
+ ```
- ``` bash
- curl -X POST http://{app-endpoint}/actuator/refresh
- ```
+- Use `FileSystemWatcher` to watch for file changes and refresh the context on demand. `FileSystemWatcher` is a class shipped with `spring-boot-devtools` that watches specific directories for file changes, or you can use another utility with a similar function. The `/actuator/refresh` option requires you to initiate the refresh actively, while a file watcher monitors for changes and invokes the refresh automatically when it detects updates. You can retrieve the file path by using the environment variable `AZURE_SPRING_APPS_CONFIG_FILE_PATH`, as mentioned in the [Polyglot support](#polyglot-support) section. A sketch of this approach follows.
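The following is a minimal sketch of the file-watcher approach, not an official sample: it assumes that `spring-boot-devtools` and `spring-cloud-context` are on the classpath (so a `ContextRefresher` bean is available) and that the mounted configuration directory is exposed through the `AZURE_SPRING_APPS_CONFIG_FILE_PATH` environment variable. The polling and quiet-period durations are illustrative.

```java
import java.io.File;
import java.time.Duration;

import org.springframework.boot.devtools.filewatch.FileSystemWatcher;
import org.springframework.cloud.context.refresh.ContextRefresher;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class ConfigFileWatcherConfiguration {

    // Start the watcher with the application context and stop it on shutdown.
    @Bean(initMethod = "start", destroyMethod = "stop")
    public FileSystemWatcher configFileWatcher(ContextRefresher contextRefresher) {
        // Poll for changes every 30 seconds with a 5-second quiet period (illustrative values).
        FileSystemWatcher watcher = new FileSystemWatcher(true, Duration.ofSeconds(30), Duration.ofSeconds(5));

        // The mounted configuration directory is exposed through AZURE_SPRING_APPS_CONFIG_FILE_PATH.
        watcher.addSourceDirectory(new File(System.getenv("AZURE_SPRING_APPS_CONFIG_FILE_PATH")));

        // Refresh the Spring context whenever the mounted configuration files change,
        // so that beans annotated with @RefreshScope pick up the new values.
        watcher.addListener(changeSet -> contextRefresher.refresh());
        return watcher;
    }
}
```

With this in place, a change that reaches the mounted volume triggers `ContextRefresher.refresh()`, which has the same effect as calling the `/actuator/refresh` endpoint.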
## Configure Application Configuration Service settings
A Spring application holds the properties as the beans of the Spring Application
Use the following steps to configure Application Configuration Service: 1. Select **Application Configuration Service**.+ 1. Select **Overview** to view the running state and resources allocated to Application Configuration Service. :::image type="content" source="media/how-to-enterprise-application-configuration-service/configuration-service-overview.png" alt-text="Screenshot of the Azure portal that shows the Application Configuration Service page with Overview tab highlighted." lightbox="media/how-to-enterprise-application-configuration-service/configuration-service-overview.png":::
Use the following steps to use Application Configuration Service with applicatio
1. Open the **App binding** tab.
-1. Select **Bind app** and choose one app in the dropdown. Select **Apply** to bind.
+1. Select **Bind app** and choose one app from the dropdown. Select **Apply** to bind.
:::image type="content" source="media/how-to-enterprise-application-configuration-service/configuration-service-app-bind-dropdown.png" alt-text="Screenshot of the Azure portal that shows the Application Configuration Service page with the App binding tab highlighted." lightbox="media/how-to-enterprise-application-configuration-service/configuration-service-app-bind-dropdown.png":::

> [!NOTE]
> When you change the bind/unbind status, you must restart or redeploy the app for the binding to take effect.
-1. In the navigation menu, select **Apps** to view the list all the apps.
+1. In the navigation menu, select **Apps** to view the list of all the apps.
-1. Select the target app to configure patterns for from the `name` column.
+1. In the `name` column, select the target app that you want to configure patterns for.
1. In the navigation pane, select **Configuration** and then select **General settings**.
-1. In the **Config file patterns** dropdown, choose one or more patterns from the list. For more information, see the [Pattern](./how-to-enterprise-application-configuration-service.md#pattern) section.
+1. In the **Config file patterns** dropdown, choose one or more patterns from the list. For more information, see the [Pattern](#pattern) section.
:::image type="content" source="media/how-to-enterprise-application-configuration-service/configuration-service-pattern.png" alt-text="Screenshot of the Azure portal that shows the App Configuration page with the General settings tab and api-gateway options highlighted." lightbox="media/how-to-enterprise-application-configuration-service/configuration-service-pattern.png":::
-1. Select **Save**
+1. Select **Save**.
### [Azure CLI](#tab/Azure-CLI)
To check the logs of `application-configuration-service` and `flux-source-contro
:::image type="content" source="media/how-to-enterprise-application-configuration-service/query-logs-flux-source-controller.png" alt-text="Screenshot of the Azure portal that shows the query result of logs for flux-source-controller." lightbox="media/how-to-enterprise-application-configuration-service/query-logs-flux-source-controller.png"::: > [!NOTE]
-> There could be a few minutes delay before the logs are available in Log Analytics.
+> There might be a delay of a few minutes before the logs are available in Log Analytics.
+
+## Troubleshoot known issues
+
+If the latest changes aren't reflected in the applications, check the following items based on the [Refresh strategies](#refresh-strategies) section:
+
+- Confirm that the Git repo is updated correctly by checking the following items:
+ - Confirm that the branch that contains the desired config file changes is updated.
+ - Confirm that the pattern configured in the Application Configuration Service matches the updated config files.
+ - Confirm that the application is bound to the Application Configuration Service.
+- Confirm that the `ConfigMap` of the app is updated. If it isn't updated, raise a ticket.
+- Confirm that the `ConfigMap` is mounted to the application as a file by using the web shell, as shown in the example that follows this list. If the file isn't updated, wait for the Kubernetes refresh interval (1 minute), or force a refresh by restarting the application.
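To confirm the mounted file from the web shell, you can inspect the path exposed by the `AZURE_SPRING_APPS_CONFIG_FILE_PATH` environment variable (described in the [Polyglot support](#polyglot-support) section). This is a quick, informal check rather than an official procedure:

```bash
# Run from the app's web shell: show where the configuration is mounted,
# then list the mounted files and their modification times.
echo $AZURE_SPRING_APPS_CONFIG_FILE_PATH
ls -l $AZURE_SPRING_APPS_CONFIG_FILE_PATH
```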
+
+After checking these items, the applications should be able to read the updated configurations. If the applications still aren't updated, raise a ticket.
-## Next steps
+## Related content
-- [Azure Spring Apps](index.yml)
+[Azure Spring Apps](index.yml)
storage Elastic San Connect Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-connect-aks.md
Last updated 02/13/2024
-# Connect Azure Elastic SAN volumes to an Azure Kubernetes Service cluster (Preview)
+# Connect Azure Elastic SAN volumes to an Azure Kubernetes Service cluster
This article explains how to connect to an Azure Elastic storage area network (SAN) volume from an Azure Kubernetes Service (AKS) cluster. To make this connection, enable the [Kubernetes iSCSI CSI driver](https://github.com/kubernetes-csi/csi-driver-iscsi) on your cluster. With this driver, you can access volumes on your Elastic SAN by creating persistent volumes on your AKS cluster, and then attaching the Elastic SAN volumes to the persistent volumes.
storage File Sync Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-release-notes.md
Previously updated : 1/11/2024
Last updated : 2/28/2024

# Release notes for Azure File Sync

Azure File Sync enables centralizing your organization's file shares in Azure Files, while keeping the flexibility, performance, and compatibility of a Windows file server. While some users may opt to keep a full copy of their data locally, Azure File Sync additionally has the ability to transform Windows Server into a quick cache of your Azure file share. You can use any protocol that's available on Windows Server to access your data locally, including SMB, NFS, and FTPS. You can have as many caches as you need across the world.

This article provides the release notes for Azure File Sync. It's important to note that major releases of Azure File Sync include service and agent improvements (for example, 15.0.0.0). Minor releases of Azure File Sync are typically for agent improvements (for example, 15.2.0.0).

## Supported versions

The following Azure File Sync agent versions are supported:

| Milestone | Agent version number | Release date | Status |
|-----------|----------------------|--------------|--------|
| V17.2 Release - [KB5023055](https://support.microsoft.com/topic/dfa4c285-a4cb-4561-b0ed-bbd4ae09d91d)| 17.2.0.0 | February 28, 2024 | Supported |
| V17.1 Release - [KB5023054](https://support.microsoft.com/topic/azure-file-sync-agent-v17-1-release-february-2024-security-only-update-bd1ce41c-27f4-4e3d-a80f-92f74817c55b)| 17.1.0.0 | February 13, 2024 | Supported - Security Update|
| V16.2 Release - [KB5023052](https://support.microsoft.com/topic/azure-file-sync-agent-v16-2-release-february-2024-security-only-update-8247bf99-8f51-4eb6-b378-b86b6d1d45b8)| 16.2.0.0 | February 13, 2024 | Supported - Security Update|
-| V17.0 Release - [KB5023053](https://support.microsoft.com/topic/azure-file-sync-agent-v17-release-december-2023-flighting-2d8cba16-c035-4c54-b35d-1bd8fd795ba9)| 17.0.0.0 | December 6, 2023 | Supported - Flighting |
+| V17.0 Release - [KB5023053](https://support.microsoft.com/topic/azure-file-sync-agent-v17-release-december-2023-flighting-2d8cba16-c035-4c54-b35d-1bd8fd795ba9)| 17.0.0.0 | December 6, 2023 | Supported |
| V16.0 Release - [KB5013877](https://support.microsoft.com/topic/ffdc8fe2-c653-43c8-8b47-0865267fd520)| 16.0.0.0 | January 30, 2023 | Supported |
| V15.2 Release - [KB5013875](https://support.microsoft.com/topic/9159eee2-3d16-4523-ade4-1bac78469280)| 15.2.0.0 | November 21, 2022 | Supported - Agent version will expire on March 19, 2024 |
| V15.1 Release - [KB5003883](https://support.microsoft.com/topic/45761295-d49a-431e-98ec-4fb3329b0544)| 15.1.0.0 | September 19, 2022 | Supported - Agent version will expire on March 19, 2024 |
| V15 Release - [KB5003882](https://support.microsoft.com/topic/2f93053f-869b-4782-a832-e3c772a64a2d)| 15.0.0.0 | March 30, 2022 | Supported - Agent version will expire on March 19, 2024 |
-| V14.1 Release - [KB5001873](https://support.microsoft.com/topic/d06b8723-c4cf-4c64-b7ec-3f6635e044c5)| 14.1.0.0 | December 1, 2021 | Supported - Agent version will expire on February 8, 2024 |
-| V14 Release - [KB5001872](https://support.microsoft.com/topic/92290aa1-75de-400f-9442-499c44c92a81)| 14.0.0.0 | October 29, 2021 | Supported - Agent version will expire on February 8, 2024 |
## Unsupported versions

The following Azure File Sync agent versions have expired and are no longer supported:

| Milestone | Agent version number | Release date | Status |
|-----------|----------------------|--------------|--------|
+| V14 Release | 14.0.0.0 | N/A | Not Supported - Agent versions expired on February 8, 2024 |
| V13 Release | 13.0.0.0 | N/A | Not Supported - Agent versions expired on August 8, 2022 |
| V12 Release | 12.0.0.0 - 12.1.0.0 | N/A | Not Supported - Agent versions expired on May 23, 2022 |
| V11 Release | 11.1.0.0 - 11.3.0.0 | N/A | Not Supported - Agent versions expired on March 28, 2022 |
| Pre-GA agents | 1.1.0.0 - 3.0.13.0 | N/A | Not Supported - Agent versions expired on October 1, 2018 |

### Azure File Sync agent update policy

[!INCLUDE [storage-sync-files-agent-update-policy](../../../includes/storage-sync-files-agent-update-policy.md)]

## Windows Server 2012 R2 agent support will end on March 4, 2025
-Windows Server 2012 R2 reached [end of support](https://learn.microsoft.com/lifecycle/announcements/windows-server-2012-r2-end-of-support) on October 10, 2023. Azure File Sync will continue to support Windows Server 2012 R2 until the v17.x agent is expired on March 4, 2025. Once the v17 agent is expired, Windows Server 2012 R2 servers will stop syncing to your Azure file shares.
-**Action Required**
+Windows Server 2012 R2 reached [end of support](/lifecycle/announcements/windows-server-2012-r2-end-of-support) on October 10, 2023. Azure File Sync will continue to support Windows Server 2012 R2 until the v17.x agent is expired on March 4, 2025. Once the v17 agent is expired, Windows Server 2012 R2 servers will stop syncing to your Azure file shares.
+
+### Action Required
Perform one of the following options for your Windows Server 2012 R2 servers prior to v17 agent expiration on March 4, 2025: - Option #1: Perform an [in-place upgrade](/windows-server/get-started/perform-in-place-upgrade) to a [supported operation system version](file-sync-planning.md#operating-system-requirements). Once the in-place upgrade completes, uninstall the Azure File Sync agent for Windows Server 2012 R2, restart the server, and then install the agent for the new server operating system (Windows Server 2016, Windows Server 2019, or Windows Server 2022). -- Option #2: Deploy a new Azure File Sync server that is running a [supported operation system version](file-sync-planning.md#operating-system-requirements) to replace your Windows 2012 R2 servers. For guidance, see [Replace an Azure File Sync server](file-sync-replace-server.md).
+- Option #2: Deploy a new Azure File Sync server that's running a [supported operation system version](file-sync-planning.md#operating-system-requirements) to replace your Windows 2012 R2 servers. For guidance, see [Replace an Azure File Sync server](file-sync-replace-server.md).
->[!Note]
+>[!NOTE]
>Azure File Sync agent v17.2 is the last agent release currently planned for Windows Server 2012 R2. To continue to receive product improvements and bug fixes, upgrade your servers to Windows Server 2016 or later. ## Version 17.2.0.0+ The following release notes are for Azure File Sync version 17.2.0.0 (released February 28, 2024). This release contains improvements for the Azure File Sync service and agent. ### Improvements and issues that are fixed+ The Azure File Sync v17.2 release is a rollup update for the v17.0 and v17.1 releases:+ - [Azure File Sync Agent v17 Release - December 2023](https://support.microsoft.com/topic/azure-file-sync-agent-v17-release-december-2023-flighting-2d8cba16-c035-4c54-b35d-1bd8fd795ba9) - [Azure File Sync Agent v17.1 Release - February 2024](https://support.microsoft.com/topic/azure-file-sync-agent-v17-1-release-february-2024-security-only-update-bd1ce41c-27f4-4e3d-a80f-92f74817c55b)
-**Note**: If your server has v17.1 agent installed, you don't need to install the v17.2 agent.
+
+>[!NOTE]
+>If your server has v17.1 agent installed, you don't need to install the v17.2 agent.
### Evaluation tool
-Before deploying Azure File Sync, you should evaluate whether it's compatible with your system using the Azure File Sync evaluation tool. This tool is an Azure PowerShell cmdlet that checks for potential issues with your file system and dataset, such as unsupported characters or an unsupported OS version. For installation and usage instructions, see [Evaluation Tool](file-sync-planning.md#evaluation-cmdlet) section in the planning guide.
+
+Before deploying Azure File Sync, you should evaluate whether it's compatible with your system using the Azure File Sync evaluation tool. This tool is an Azure PowerShell cmdlet that checks for potential issues with your file system and dataset, such as unsupported characters or an unsupported OS version. For installation and usage instructions, see [Evaluation Tool](file-sync-planning.md#evaluation-cmdlet) section in the planning guide.
### Agent installation and server configuration+ For more information on how to install and configure the Azure File Sync agent with Windows Server, see [Planning for an Azure File Sync deployment](file-sync-planning.md) and [How to deploy Azure File Sync](file-sync-deployment-guide.md). - The agent installation package must be installed with elevated (admin) permissions.-- The agent is not supported on Nano Server deployment option.
+- The agent isn't supported on the Nano Server deployment option.
- The agent is supported only on Windows Server 2019, Windows Server 2016, Windows Server 2012 R2, and Windows Server 2022. - The agent installation package is for a specific operating system version. If a server with an Azure File Sync agent installed is upgraded to a newer operating system version, you must uninstall the existing agent, restart the server, and install the agent for the new server operating system (Windows Server 2016, Windows Server 2019, or Windows Server 2022). - The agent requires at least 2 GiB of memory. If the server is running in a virtual machine with dynamic memory enabled, the VM should be configured with a minimum 2048 MiB of memory. See [Recommended system resources](file-sync-planning.md#recommended-system-resources) for more information. - The Storage Sync Agent (FileSyncSvc) service doesn't support server endpoints located on a volume that has the system volume information (SVI) directory compressed. This configuration will lead to unexpected results. ### Interoperability+ - Antivirus, backup, and other applications that access tiered files can cause undesirable recall unless they respect the offline attribute and skip reading the content of those files. For more information, see [Troubleshoot Azure File Sync](/troubleshoot/azure/azure-storage/file-sync-troubleshoot?toc=/azure/storage/file-sync/toc.json). - File Server Resource Manager (FSRM) file screens can cause endless sync failures when files are blocked because of the file screen. - Running sysprep on a server that has the Azure File Sync agent installed isn't supported and can lead to unexpected results. The Azure File Sync agent should be installed after deploying the server image and completing sysprep mini-setup. ### Sync limitations+ The following items don't sync, but the rest of the system continues to operate normally:+ - Files with unsupported characters. See [Troubleshooting guide](/troubleshoot/azure/azure-storage/file-sync-troubleshoot-sync-errors?toc=/azure/storage/file-sync/toc.json#handling-unsupported-characters) for a list of unsupported characters. - Files or directories that end with a period. - Paths that are longer than 2,048 characters.
The following items don't sync, but the rest of the system continues to operate
- Compression (if it's set on a server file) isn't preserved when changes sync to that file from other endpoints. - Any file that's encrypted with EFS (or other user mode encryption) that prevents the service from reading the data.
- > [!Note]
- > Azure File Sync always encrypts data in transit. Data is always encrypted at rest in Azure.
+> [!NOTE]
+> Azure File Sync always encrypts data in transit. Data is always encrypted at rest in Azure.
### Server endpoint+ - A server endpoint can be created only on an NTFS volume. ReFS, FAT, FAT32, and other file systems aren't currently supported by Azure File Sync. - Cloud tiering isn't supported on the system volume. To create a server endpoint on the system volume, disable cloud tiering when creating the server endpoint. - Failover Clustering is supported only with clustered disks, but not with Cluster Shared Volumes (CSVs).
The following items don't sync, but the rest of the system continues to operate
- Don't store an OS or application paging file within a server endpoint location. ### Cloud endpoint+ - Azure File Sync supports making changes to the Azure file share directly. However, any changes made on the Azure file share first need to be discovered by an Azure File Sync change detection job. A change detection job is initiated for a cloud endpoint once every 24 hours. To immediately sync files that are changed in the Azure file share, you can use the [Invoke-AzStorageSyncChangeDetection](/powershell/module/az.storagesync/invoke-azstoragesyncchangedetection) PowerShell cmdlet to manually initiate the detection of changes in the Azure file share. - The storage sync service and/or storage account can be moved to a different resource group, subscription, or Azure AD tenant. After the storage sync service or storage account is moved, you need to give the Microsoft.StorageSync application access to the storage account (see [Ensure Azure File Sync has access to the storage account](/troubleshoot/azure/azure-storage/file-sync-troubleshoot-sync-errors?toc=/azure/storage/file-sync/toc.json#troubleshoot-rbac)).
- > [!Note]
- > When creating the cloud endpoint, the storage sync service and storage account must be in the same Azure AD tenant. Once the cloud endpoint is created, the storage sync service and storage account can be moved to different Azure AD tenants.
+> [!NOTE]
+> When creating the cloud endpoint, the storage sync service and storage account must be in the same Azure AD tenant. Once the cloud endpoint is created, the storage sync service and storage account can be moved to different Azure AD tenants.
### Cloud tiering+ - If a tiered file is copied to another location by using Robocopy, the resulting file isn't tiered. The offline attribute might be set because Robocopy incorrectly includes that attribute in copy operations.-- When copying files using Robocopy, use the /MIR option to preserve file timestamps. This will ensure older files are tiered sooner than recently accessed files.
+- When copying files using Robocopy, use the /MIR option to preserve file timestamps. This will ensure that older files are tiered sooner than recently accessed files.
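As an illustration, a mirroring copy along the following lines preserves timestamps. The paths are placeholders, and keep in mind that /MIR also removes files from the destination that no longer exist in the source:

```console
robocopy C:\SourceFolder D:\ServerEndpointFolder /MIR
```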
## Version 17.1.0.0 (Security Update)
-The following release notes are for Azure File Sync version 17.1.0.0 (released February 13, 2024). This release contains a security update for the Azure File Sync agent. These notes are in addition to the release notes listed for version 17.0.0.0.
+
+The following release notes are for Azure File Sync version 17.1.0.0 (released February 13, 2024). This release contains a security update for the Azure File Sync agent. These notes are in addition to the release notes listed for version 17.0.0.0.
### Improvements and issues that are fixed+ - Fixes an issue that might allow unauthorized users to create new files in locations they aren't allowed to. This is a security-only update. For more information about this vulnerability, see [CVE-2024-21397](https://msrc.microsoft.com/update-guide/en-US/advisory/CVE-2024-21397). ## Version 16.2.0.0 (Security Update)
-The following release notes are for Azure File Sync version 16.2.0.0 (released February 13, 2024). This release contains security updates for the Azure File Sync agent. These notes are in addition to the release notes listed for version 16.0.0.0.
+
+The following release notes are for Azure File Sync version 16.2.0.0 (released February 13, 2024). This release contains security updates for the Azure File Sync agent. These notes are in addition to the release notes listed for version 16.0.0.0.
### Improvements and issues that are fixed+ - Fixes an issue that might allow unauthorized users to create new files in locations they aren't allowed to. This is a security-only update. For more information about this vulnerability, see [CVE-2024-21397](https://msrc.microsoft.com/update-guide/en-US/advisory/CVE-2024-21397). ## Version 17.0.0.0 (Flighting)+ The following release notes are for Azure File Sync version 17.0.0.0 (released December 6, 2023). This release contains improvements for the Azure File Sync service and agent. ### Improvements and issues that are fixed+ - Sync upload performance improvements - Sync upload performance has improved (performance numbers to be posted in the near future). This improvement will mainly benefit file share migrations (initial upload) and high churn events on the server in which a large number of files need to be uploaded. - Expanded character support for file and directory names
- - Azure File Sync now supports an expanded list of characters. This expansion allows for users to create and sync SMB File shares with file and directory names on par with NTFS file system, for valid Unicode characters. For more information on unsupported characters, refer to the documentation [here](/troubleshoot/azure/azure-storage/file-sync-troubleshoot-sync-errors?toc=%2Fazure%2Fstorage%2Ffile-sync%2Ftoc.json&tabs=portal1%2Cazure-portal#handling-unsupported-characters).
+ - Azure File Sync now supports an expanded list of characters. This expansion allows for users to create and sync SMB file shares with file and directory names on par with NTFS file system, for valid Unicode characters. For more information on unsupported characters, refer to the documentation [here](/troubleshoot/azure/azure-storage/file-sync-troubleshoot-sync-errors?toc=%2Fazure%2Fstorage%2Ffile-sync%2Ftoc.json&tabs=portal1%2Cazure-portal#handling-unsupported-characters).
- New cloud tiering low disk space mode metric - You can now configure an alert if a server is in low disk space mode. To learn more, see [Monitor Azure File Sync](file-sync-monitoring.md). - Fixed an issue that caused the agent upgrade to hang - Miscellaneous reliability and telemetry improvements for cloud tiering and sync ### Evaluation Tool
-Before deploying Azure File Sync, you should evaluate whether it's compatible with your system using the Azure File Sync evaluation tool. This tool is an Azure PowerShell cmdlet that checks for potential issues with your file system and dataset, such as unsupported characters or an unsupported OS version. For installation and usage instructions, see [Evaluation Tool](file-sync-planning.md#evaluation-cmdlet) section in the planning guide.
+
+Before deploying Azure File Sync, you should evaluate whether it's compatible with your system using the Azure File Sync evaluation tool. This tool is an Azure PowerShell cmdlet that checks for potential issues with your file system and dataset, such as unsupported characters or an unsupported OS version. For installation and usage instructions, see [Evaluation Tool](file-sync-planning.md#evaluation-cmdlet) section in the planning guide.
### Agent installation and server configuration+ For more information on how to install and configure the Azure File Sync agent with Windows Server, see [Planning for an Azure File Sync deployment](file-sync-planning.md) and [How to deploy Azure File Sync](file-sync-deployment-guide.md). - The agent installation package must be installed with elevated (admin) permissions.-- The agent is not supported on Nano Server deployment option.
+- The agent isn't supported on the Nano Server deployment option.
- The agent is supported only on Windows Server 2019, Windows Server 2016, Windows Server 2012 R2, and Windows Server 2022. - Windows Server 2012 R2 requires .NET Framework version 4.6.2 or higher.-- The agent installation package is for a specific operating system version. If a server with an Azure File Sync agent installed is upgraded to a newer operating system version, the existing agent must be uninstalled. Restart the server and install the agent for the new server operating system (Windows Server 2016, Windows Server 2019, or Windows Server 2022).
+- The agent installation package is for a specific operating system version. If a server with an Azure File Sync agent installed is upgraded to a newer operating system version, the existing agent must be uninstalled. Restart the server and then install the agent for the new server operating system (Windows Server 2016, Windows Server 2019, or Windows Server 2022).
- The agent requires at least 2 GiB of memory. If the server is running in a virtual machine with dynamic memory enabled, the VM should be configured with a minimum 2048 MiB of memory. See [Recommended system resources](file-sync-planning.md#recommended-system-resources) for more information. - The Storage Sync Agent (FileSyncSvc) service doesn't support server endpoints located on a volume that has the system volume information (SVI) directory compressed. This configuration will lead to unexpected results. ### Interoperability+ - Antivirus, backup, and other applications that access tiered files can cause undesirable recall unless they respect the offline attribute and skip reading the content of those files. For more information, see [Troubleshoot Azure File Sync](/troubleshoot/azure/azure-storage/file-sync-troubleshoot?toc=/azure/storage/file-sync/toc.json). - File Server Resource Manager (FSRM) file screens can cause endless sync failures when files are blocked because of the file screen. - Running sysprep on a server that has the Azure File Sync agent installed isn't supported and can lead to unexpected results. The Azure File Sync agent should be installed after deploying the server image and completing sysprep mini-setup. ### Sync limitations+ The following items don't sync, but the rest of the system continues to operate normally:+ - Azure File Sync v17 agent supports all characters that are supported by the [NTFS file system](/windows/win32/fileio/naming-a-file) except invalid surrogate pairs. See [Troubleshooting guide](/troubleshoot/azure/azure-storage/file-sync-troubleshoot-sync-errors?toc=/azure/storage/file-sync/toc.json#handling-unsupported-characters) for more information. - Paths that are longer than 2,048 characters. - The system access control list (SACL) portion of a security descriptor that's used for auditing.
The following items don't sync, but the rest of the system continues to operate
- Compression (if it's set on a server file) isn't preserved when changes sync to that file from other endpoints. - Any file that's encrypted with EFS (or other user mode encryption) that prevents the service from reading the data.
- > [!Note]
- > Azure File Sync always encrypts data in transit. Data is always encrypted at rest in Azure.
+> [!NOTE]
+> Azure File Sync always encrypts data in transit. Data is always encrypted at rest in Azure.
### Server endpoint+ - A server endpoint can be created only on an NTFS volume. ReFS, FAT, FAT32, and other file systems aren't currently supported by Azure File Sync. - Cloud tiering isn't supported on the system volume. To create a server endpoint on the system volume, disable cloud tiering when creating the server endpoint. - Failover Clustering is supported only with clustered disks, but not with Cluster Shared Volumes (CSVs).
The following items don't sync, but the rest of the system continues to operate
- Don't store an OS or application paging file within a server endpoint location. ### Cloud endpoint+ - Azure File Sync supports making changes to the Azure file share directly. However, any changes made on the Azure file share first need to be discovered by an Azure File Sync change detection job. A change detection job is initiated for a cloud endpoint once every 24 hours. To immediately sync files that are changed in the Azure file share, use the [Invoke-AzStorageSyncChangeDetection](/powershell/module/az.storagesync/invoke-azstoragesyncchangedetection) PowerShell cmdlet to manually initiate the detection of changes in the Azure file share. - The storage sync service and/or storage account can be moved to a different resource group, subscription, or Microsoft Entra (formerly Azure AD) tenant. After moving the storage sync service or storage account, you need to give the Microsoft.StorageSync application access to the storage account (see [Ensure Azure File Sync has access to the storage account](/troubleshoot/azure/azure-storage/file-sync-troubleshoot-sync-errors?toc=/azure/storage/file-sync/toc.json#troubleshoot-rbac)).
- > [!Note]
- > When creating the cloud endpoint, the storage sync service and storage account must be in the same Microsoft Entra tenant. After you create the cloud endpoint, you can move the storage sync service and storage account to different Microsoft Entra tenants.
+> [!NOTE]
+> When creating the cloud endpoint, the storage sync service and storage account must be in the same Microsoft Entra tenant. After you create the cloud endpoint, you can move the storage sync service and storage account to different Microsoft Entra tenants.
### Cloud tiering+ - If a tiered file is copied to another location by using Robocopy, the resulting file isn't tiered. The offline attribute might be set because Robocopy incorrectly includes that attribute in copy operations. - When copying files using Robocopy, use the /MIR option to preserve file timestamps. This will ensure older files are tiered sooner than recently accessed files. ## Version 16.0.0.0+ The following release notes are for Azure File Sync version 16.0.0.0 (released January 30, 2023). This release contains improvements for the Azure File Sync service and agent. ### Improvements and issues that are fixed+ - Improved Azure File Sync service availability - Azure File Sync is now a zone-redundant service, which means an outage in a zone has limited impact while improving the service resiliency to minimize customer impact. To fully use this improvement, configure your storage accounts to use zone-redundant storage (ZRS) or Geo-zone redundant storage (GZRS) replication. To learn more about different redundancy options for your storage accounts, see [Azure Files redundancy](../files/files-redundancy.md). - Immediately run server change enumeration to detect files changes that were missed on the server
- - Azure File Sync uses the [Windows USN journal](/windows/win32/fileio/change-journals) feature on Windows Server to immediately detect files that were changed and upload them to the Azure file share. If files changed are missed due to journal wrap or other issues, the files will not sync to the Azure file share until the changes are detected. Azure File Sync has a server change enumeration job that runs every 24 hours on the server endpoint path to detect changes that were missed by the USN journal. If you don't want to wait until the next server change enumeration job runs, you can now use the Invoke-StorageSyncServerChangeDetection PowerShell cmdlet to immediately run server change enumeration on a server endpoint path.
+ - Azure File Sync uses the [Windows USN journal](/windows/win32/fileio/change-journals) feature on Windows Server to immediately detect files that were changed and upload them to the Azure file share. If files changed are missed due to journal wrap or other issues, the files won't sync to the Azure file share until the changes are detected. Azure File Sync has a server change enumeration job that runs every 24 hours on the server endpoint path to detect changes that were missed by the USN journal. If you don't want to wait until the next server change enumeration job runs, you can now use the `Invoke-StorageSyncServerChangeDetection` PowerShell cmdlet to immediately run server change enumeration on a server endpoint path.
To immediately run server change enumeration on a server endpoint path, run the following PowerShell commands:

```powershell
Import-Module "C:\Program Files\Azure\StorageSyncAgent\StorageSync.Management.ServerCmdlets.dll"
Invoke-StorageSyncServerChangeDetection -ServerEndpointPath <path>
```
- > [!Note]
+
+ > [!NOTE]
> By default, the server change enumeration scan will only check the modified timestamp. To perform a deeper check, use the -DeepScan parameter. - Bug fix for the PowerShell script FileSyncErrorsReport.ps1
The following release notes are for Azure File Sync version 16.0.0.0 (released J
- Miscellaneous reliability and telemetry improvements for cloud tiering and sync ### Evaluation Tool
-Before deploying Azure File Sync, you should evaluate whether it's compatible with your system using the Azure File Sync evaluation tool. This tool is an Azure PowerShell cmdlet that checks for potential issues with your file system and dataset, such as unsupported characters or an unsupported OS version. For installation and usage instructions, see [Evaluation Tool](file-sync-planning.md#evaluation-cmdlet) section in the planning guide.
+
+Before deploying Azure File Sync, you should evaluate whether it's compatible with your system using the Azure File Sync evaluation tool. This tool is an Azure PowerShell cmdlet that checks for potential issues with your file system and dataset, such as unsupported characters or an unsupported OS version. For installation and usage instructions, see [Evaluation Tool](file-sync-planning.md#evaluation-cmdlet) section in the planning guide.
### Agent installation and server configuration+ For more information on how to install and configure the Azure File Sync agent with Windows Server, see [Planning for an Azure File Sync deployment](file-sync-planning.md) and [How to deploy Azure File Sync](file-sync-deployment-guide.md). - The agent installation package must be installed with elevated (admin) permissions.-- The agent is not supported on Nano Server deployment option.
+- The agent isn't supported on the Nano Server deployment option.
- The agent is supported only on Windows Server 2019, Windows Server 2016, Windows Server 2012 R2, and Windows Server 2022.-- The agent installation package is for a specific operating system version. If a server with an Azure File Sync agent installed is upgraded to a newer operating system version, the existing agent must be uninstalled, restart the server and install the agent for the new server operating system (Windows Server 2016, Windows Server 2019, or Windows Server 2022).
+- The agent installation package is for a specific operating system version. If a server with an Azure File Sync agent installed is upgraded to a newer operating system version, you must uninstall the existing agent, restart the server, and install the agent for the new server operating system (Windows Server 2016, Windows Server 2019, or Windows Server 2022).
- The agent requires at least 2 GiB of memory. If the server is running in a virtual machine with dynamic memory enabled, the VM should be configured with a minimum 2048 MiB of memory. See [Recommended system resources](file-sync-planning.md#recommended-system-resources) for more information.-- The Storage Sync Agent (FileSyncSvc) service does not support server endpoints located on a volume that has the system volume information (SVI) directory compressed. This configuration will lead to unexpected results.
+- The Storage Sync Agent (FileSyncSvc) service doesn't support server endpoints located on a volume that has the system volume information (SVI) directory compressed. This configuration will lead to unexpected results.
### Interoperability+ - Antivirus, backup, and other applications that access tiered files can cause undesirable recall unless they respect the offline attribute and skip reading the content of those files. For more information, see [Troubleshoot Azure File Sync](/troubleshoot/azure/azure-storage/file-sync-troubleshoot?toc=/azure/storage/file-sync/toc.json). - File Server Resource Manager (FSRM) file screens can cause endless sync failures when files are blocked because of the file screen.-- Running sysprep on a server that has the Azure File Sync agent installed is not supported and can lead to unexpected results. The Azure File Sync agent should be installed after deploying the server image and completing sysprep mini-setup.
+- Running sysprep on a server that has the Azure File Sync agent installed isn't supported and can lead to unexpected results. The Azure File Sync agent should be installed after deploying the server image and completing sysprep mini-setup.
### Sync limitations+ The following items don't sync, but the rest of the system continues to operate normally:+ - Files with unsupported characters. See [Troubleshooting guide](/troubleshoot/azure/azure-storage/file-sync-troubleshoot-sync-errors?toc=/azure/storage/file-sync/toc.json#handling-unsupported-characters) for a list of unsupported characters. - Files or directories that end with a period. - Paths that are longer than 2,048 characters.
The following items don't sync, but the rest of the system continues to operate
- Compression (if it's set on a server file) isn't preserved when changes sync to that file from other endpoints. - Any file that's encrypted with EFS (or other user mode encryption) that prevents the service from reading the data.
- > [!Note]
- > Azure File Sync always encrypts data in transit. Data is always encrypted at rest in Azure.
-
+> [!NOTE]
+> Azure File Sync always encrypts data in transit. Data is always encrypted at rest in Azure.
+ ### Server endpoint+ - A server endpoint can be created only on an NTFS volume. ReFS, FAT, FAT32, and other file systems aren't currently supported by Azure File Sync.-- Cloud tiering is not supported on the system volume. To create a server endpoint on the system volume, disable cloud tiering when creating the server endpoint.
+- Cloud tiering isn't supported on the system volume. To create a server endpoint on the system volume, disable cloud tiering when creating the server endpoint.
- Failover Clustering is supported only with clustered disks, but not with Cluster Shared Volumes (CSVs). - A server endpoint can't be nested. It can coexist on the same volume in parallel with another endpoint.-- Do not store an OS or application paging file within a server endpoint location.
+- Don't store an OS or application paging file within a server endpoint location.
### Cloud endpoint+ - Azure File Sync supports making changes to the Azure file share directly. However, any changes made on the Azure file share first need to be discovered by an Azure File Sync change detection job. A change detection job is initiated for a cloud endpoint once every 24 hours. To immediately sync files that are changed in the Azure file share, the [Invoke-AzStorageSyncChangeDetection](/powershell/module/az.storagesync/invoke-azstoragesyncchangedetection) PowerShell cmdlet can be used to manually initiate the detection of changes in the Azure file share. - The storage sync service and/or storage account can be moved to a different resource group, subscription, or Azure AD tenant. After the storage sync service or storage account is moved, you need to give the Microsoft.StorageSync application access to the storage account (see [Ensure Azure File Sync has access to the storage account](/troubleshoot/azure/azure-storage/file-sync-troubleshoot-sync-errors?toc=/azure/storage/file-sync/toc.json#troubleshoot-rbac)).
- > [!Note]
- > When creating the cloud endpoint, the storage sync service and storage account must be in the same Azure AD tenant. Once the cloud endpoint is created, the storage sync service and storage account can be moved to different Azure AD tenants.
+> [!NOTE]
+> When creating the cloud endpoint, the storage sync service and storage account must be in the same Azure AD tenant. Once the cloud endpoint is created, the storage sync service and storage account can be moved to different Azure AD tenants.
### Cloud tiering+ - If a tiered file is copied to another location by using Robocopy, the resulting file isn't tiered. The offline attribute might be set because Robocopy incorrectly includes that attribute in copy operations.-- When copying files using robocopy, use the /MIR option to preserve file timestamps. This will ensure older files are tiered sooner than recently accessed files.
+- When copying files using Robocopy, use the /MIR option to preserve file timestamps. This will ensure older files are tiered sooner than recently accessed files.
## Version 15.2.0.0+ The following release notes are for Azure File Sync version 15.2.0.0 (released November 21, 2022). This release contains improvements for the Azure File Sync agent. These notes are in addition to the release notes listed for version 15.0.0.0.
-### Improvements and issues that are fixed
+### Improvements and issues that are fixed
- Fixed a cloud tiering issue in the v15.1 agent that caused the following symptoms: - Memory usage is higher after upgrading to v15.1.
The following release notes are for Azure File Sync version 15.2.0.0 (released N
- Fixed a health reporting issue with servers configured to use a non-Gregorian calendar.

## Version 15.1.0.0

The following release notes are for Azure File Sync version 15.1.0.0 (released September 19, 2022). This release contains improvements for the Azure File Sync agent. These notes are in addition to the release notes listed for version 15.0.0.0.
-### Improvements and issues that are fixed
+### Improvements and issues that are fixed
+ - Low disk space mode to prevent running out of disk space when using cloud tiering. - Low disk space mode is designed to handle volumes with low free space more effectively. On a server endpoint with cloud tiering enabled, if the free space on the volume reaches below a threshold, Azure File Sync considers the volume to be in Low disk space mode.
- In this mode, Azure File Sync does two things to free up space on the volume:
+ In this mode, Azure File Sync does two things to free up space on the volume:
- Files are tiered to the Azure file share more proactively.
- - Tiered files accessed by the user will not be persisted to the disk.
+ - Tiered files accessed by the user will not be persisted to the disk.
To learn more, see the [low disk space mode](file-sync-cloud-tiering-overview.md#low-disk-space-mode) section in the Cloud tiering overview documentation. -- Fixed a cloud tiering issue that caused high CPU usage after v15.0 agent is installed.
+- Fixed a cloud tiering issue that caused high CPU usage after v15.0 agent is installed.
- Miscellaneous reliability and telemetry improvements. ## Version 15.0.0.0+ The following release notes are for Azure File Sync version 15.0.0.0 (released March 30, 2022). This release contains improvements for the Azure File Sync service and agent. ### Improvements and issues that are fixed-- Reduced transactions when cloud change enumeration job runs
- - Azure File Sync has a cloud change enumeration job that runs every 24 hours to detect changes made directly in the Azure file share and sync those changes to servers in your sync groups. In the v14 release, we made improvements to reduce the number of transactions when this job runs and in the v15 release we made further improvements. The transaction cost is also more predictable, each job will now produce 1 List transaction per directory, per day.
+
+- Reduced transactions when cloud change enumeration job runs
+ - Azure File Sync has a cloud change enumeration job that runs every 24 hours to detect changes made directly in the Azure file share and sync those changes to servers in your sync groups. In the v14 release, we made improvements to reduce the number of transactions when this job runs, and in the v15 release we made further improvements. The transaction cost is also more predictable, as each job will now produce one List transaction per directory, per day.
- View Cloud Tiering status for a server endpoint or volume
- - The `Get-StorageSyncCloudTieringStatus` cmdlet will show cloud tiering status for a specific server endpoint or for a specific volume (depending on path specified). The cmdlet will show current policies, current distribution of tiered vs. fully downloaded data, and last tiering session statistics if the server endpoint path is specified. If the volume path is specified, it will show the effective volume free space policy, the server endpoints located on that volume, and whether these server endpoints have cloud tiering enabled.
+ - The `Get-StorageSyncCloudTieringStatus` cmdlet will show cloud tiering status for a specific server endpoint or for a specific volume (depending on path specified). The cmdlet will show current policies, current distribution of tiered versus fully downloaded data, and last tiering session statistics if the server endpoint path is specified. If the volume path is specified, it will show the effective volume free space policy, the server endpoints located on that volume, and whether these server endpoints have cloud tiering enabled.
To get the cloud tiering status for a server endpoint or volume, run the following PowerShell commands:+ ```powershell Import-Module "C:\Program Files\Azure\StorageSyncAgent\StorageSync.Management.ServerCmdlets.dll" Get-StorageSyncCloudTieringStatus -Path <server endpoint path or volume> ```-- New diagnostic and troubleshooting tool
- - The Debug-StorageSyncServer cmdlet will diagnose common issues like certificate misconfiguration and incorrect server time. Also, we have simplified Azure File Sync troubleshooting by merging the functionality of some of existing scripts and cmdlets (AFSDiag.ps1, FileSyncErrorsReport.ps1, Test-StorageSyncNetworkConnectivity) into the `Debug-StorageSyncServer` cmdlet.
+
+- New diagnostic and troubleshooting tool
+ - The `Debug-StorageSyncServer` cmdlet will diagnose common issues like certificate misconfiguration and incorrect server time. Also, we've simplified Azure File Sync troubleshooting by merging the functionality of some of existing scripts and cmdlets (AFSDiag.ps1, FileSyncErrorsReport.ps1, Test-StorageSyncNetworkConnectivity) into the `Debug-StorageSyncServer` cmdlet.
   To run diagnostics on the server, run the following PowerShell commands:

   ```powershell
   Import-Module "C:\Program Files\Azure\StorageSyncAgent\StorageSync.Management.ServerCmdlets.dll"
   Debug-StorageSyncServer -Diagnose
   ```

   To test network connectivity on the server, run the following PowerShell commands:

   ```powershell
   Import-Module "C:\Program Files\Azure\StorageSyncAgent\StorageSync.Management.ServerCmdlets.dll"
   Debug-StorageSyncServer -TestNetworkConnectivity
   ```

   To identify files that are failing to sync on the server, run the following PowerShell commands:

   ```powershell
   Import-Module "C:\Program Files\Azure\StorageSyncAgent\StorageSync.Management.ServerCmdlets.dll"
   Debug-StorageSyncServer -FileSyncErrorsReport
   ```

   To collect logs and traces on the server, run the following PowerShell commands:

   ```powershell
   Import-Module "C:\Program Files\Azure\StorageSyncAgent\StorageSync.Management.ServerCmdlets.dll"
   Debug-StorageSyncServer -AFSDiag -OutputDirectory C:\output -KernelModeTraceLevel Verbose -UserModeTraceLevel Verbose
   ```

- Miscellaneous improvements
  - Reliability and telemetry improvements for cloud tiering and sync.

### Evaluation Tool
-Before deploying Azure File Sync, you should evaluate whether it's compatible with your system using the Azure File Sync evaluation tool. This tool is an Azure PowerShell cmdlet that checks for potential issues with your file system and dataset, such as unsupported characters or an unsupported OS version. For installation and usage instructions, see [Evaluation Tool](file-sync-planning.md#evaluation-cmdlet) section in the planning guide.
+
+Before deploying Azure File Sync, you should evaluate whether it's compatible with your system using the Azure File Sync evaluation tool. This tool is an Azure PowerShell cmdlet that checks for potential issues with your file system and dataset, such as unsupported characters or an unsupported OS version. For installation and usage instructions, see [Evaluation Tool](file-sync-planning.md#evaluation-cmdlet) section in the planning guide.
### Agent installation and server configuration+ For more information on how to install and configure the Azure File Sync agent with Windows Server, see [Planning for an Azure File Sync deployment](file-sync-planning.md) and [How to deploy Azure File Sync](file-sync-deployment-guide.md). - The agent installation package must be installed with elevated (admin) permissions.-- The agent is not supported on Nano Server deployment option.
+- The agent isn't supported on the Nano Server deployment option.
- The agent is supported only on Windows Server 2019, Windows Server 2016, Windows Server 2012 R2, and Windows Server 2022. - The agent requires at least 2 GiB of memory. If the server is running in a virtual machine with dynamic memory enabled, the VM should be configured with a minimum 2048 MiB of memory. See [Recommended system resources](file-sync-planning.md#recommended-system-resources) for more information.-- The Storage Sync Agent (FileSyncSvc) service does not support server endpoints located on a volume that has the system volume information (SVI) directory compressed. This configuration will lead to unexpected results.
+- The Storage Sync Agent (FileSyncSvc) service doesn't support server endpoints located on a volume that has the system volume information (SVI) directory compressed. This configuration will lead to unexpected results.
### Interoperability+ - Antivirus, backup, and other applications that access tiered files can cause undesirable recall unless they respect the offline attribute and skip reading the content of those files. For more information, see [Troubleshoot Azure File Sync](/troubleshoot/azure/azure-storage/file-sync-troubleshoot?toc=/azure/storage/file-sync/toc.json). - File Server Resource Manager (FSRM) file screens can cause endless sync failures when files are blocked because of the file screen.-- Running sysprep on a server that has the Azure File Sync agent installed is not supported and can lead to unexpected results. The Azure File Sync agent should be installed after deploying the server image and completing sysprep mini-setup.
+- Running sysprep on a server that has the Azure File Sync agent installed isn't supported and can lead to unexpected results. The Azure File Sync agent should be installed after deploying the server image and completing sysprep mini-setup.
### Sync limitations
+
The following items don't sync, but the rest of the system continues to operate normally:
+
- Files with unsupported characters. See [Troubleshooting guide](/troubleshoot/azure/azure-storage/file-sync-troubleshoot-sync-errors?toc=/azure/storage/file-sync/toc.json#handling-unsupported-characters) for a list of unsupported characters.
- Files or directories that end with a period.
- Paths that are longer than 2,048 characters.
The following items don't sync, but the rest of the system continues to operate
- Compression (if it's set on a server file) isn't preserved when changes sync to that file from other endpoints.
- Any file that's encrypted with EFS (or other user mode encryption) that prevents the service from reading the data.
- > [!Note]
- > Azure File Sync always encrypts data in transit. Data is always encrypted at rest in Azure.
+> [!NOTE]
+> Azure File Sync always encrypts data in transit. Data is always encrypted at rest in Azure.
### Server endpoint
+
- A server endpoint can be created only on an NTFS volume. ReFS, FAT, FAT32, and other file systems aren't currently supported by Azure File Sync.
-- Cloud tiering is not supported on the system volume. To create a server endpoint on the system volume, disable cloud tiering when creating the server endpoint.
+- Cloud tiering isn't supported on the system volume. To create a server endpoint on the system volume, disable cloud tiering when creating the server endpoint.
- Failover Clustering is supported only with clustered disks, but not with Cluster Shared Volumes (CSVs).
- A server endpoint can't be nested. It can coexist on the same volume in parallel with another endpoint.
-- Do not store an OS or application paging file within a server endpoint location.
+- Don't store an OS or application paging file within a server endpoint location.
### Cloud endpoint
+
- Azure File Sync supports making changes to the Azure file share directly. However, any changes made on the Azure file share first need to be discovered by an Azure File Sync change detection job. A change detection job is initiated for a cloud endpoint once every 24 hours. To immediately sync files that are changed in the Azure file share, the [Invoke-AzStorageSyncChangeDetection](/powershell/module/az.storagesync/invoke-azstoragesyncchangedetection) PowerShell cmdlet can be used to manually initiate the detection of changes in the Azure file share (a usage sketch follows the note below).
- The storage sync service and/or storage account can be moved to a different resource group, subscription, or Azure AD tenant. After the storage sync service or storage account is moved, you need to give the Microsoft.StorageSync application access to the storage account (see [Ensure Azure File Sync has access to the storage account](/troubleshoot/azure/azure-storage/file-sync-troubleshoot-sync-errors?toc=/azure/storage/file-sync/toc.json#troubleshoot-rbac)).
- > [!Note]
- > When creating the cloud endpoint, the storage sync service and storage account must be in the same Azure AD tenant. Once the cloud endpoint is created, the storage sync service and storage account can be moved to different Azure AD tenants.
+> [!NOTE]
+> When creating the cloud endpoint, the storage sync service and storage account must be in the same Azure AD tenant. Once the cloud endpoint is created, the storage sync service and storage account can be moved to different Azure AD tenants.
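For reference, the following is a minimal sketch of invoking change detection for specific directories on a cloud endpoint. The resource group, Storage Sync Service, sync group, cloud endpoint name, and paths shown are placeholders for your own values.

```powershell
# Manually detect changes made directly in the Azure file share for specific directories.
# All names and paths below are placeholders; replace them with your own values.
Invoke-AzStorageSyncChangeDetection `
    -ResourceGroupName "myResourceGroup" `
    -StorageSyncServiceName "myStorageSyncService" `
    -SyncGroupName "mySyncGroup" `
    -CloudEndpointName "myCloudEndpointName" `
    -Path "Data","Reporting\Templates"
```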
### Cloud tiering
-- If a tiered file is copied to another location by using Robocopy, the resulting file isn't tiered. The offline attribute might be set because Robocopy incorrectly includes that attribute in copy operations.
-- When copying files using robocopy, use the /MIR option to preserve file timestamps. This will ensure older files are tiered sooner than recently accessed files.
-
-## Version 14.1.0.0
-The following release notes are for Azure File Sync version 14.1.0.0 (released December 1, 2021). This release contains improvements for the Azure File Sync agent. These notes are in addition to the release notes listed for version 14.0.0.0.
-
-### Improvements and issues that are fixed
-- Tiered files deleted on Windows Server 2022 are not detected by cloud tiering filter driver
- - This issue occurs because the DeleteFile API on Windows Server 2022 uses the FILE_DISPOSITION_INFORMATION_EX class to delete files. The v14.1 release adds support for detecting tiered files deleted using the FILE_DISPOSITION_INFORMATION_EX class.
-
- > [!Note]
- > This issue can also impact Windows 2016 and Windows Server 2019 if a tiered file is deleted using the FILE_DISPOSITION_INFORMATION_EX class.
-
-## Version 14.0.0.0
-The following release notes are for Azure File Sync version 14.0.0.0 (released October 29, 2021). This release contains improvements for the Azure File Sync service and agent.
-
-### Improvements and issues that are fixed
-- Reduced transactions when cloud change enumeration job runs
- - Azure File Sync has a cloud change enumeration job that runs every 24 hours to detect changes made directly in the Azure file share and sync those changes to servers in your sync groups. We have made improvements to reduce the number of transactions when this job runs.
--- Improved server endpoint deprovisioning guidance in the portal
- - When removing a server endpoint via the portal, we now provide step by step guidance based on the reason behind deleting the server endpoint, so that you can avoid data loss and ensure your data is where it needs to be (server or Azure file share). This feature also includes new PowerShell cmdlets (Get-StorageSyncStatus & New-StorageSyncUploadSession) that you can use on your local server to aid you through the deprovisioning process.
--- Invoke-AzStorageSyncChangeDetection cmdlet improvements
- - Prior to the v14 release, if you made changes directly in the Azure file share, you could use the Invoke-AzStorageSyncChangeDetection cmdlet to detect the changes and sync them to the servers in your sync group. However, the cmdlet would fail to run if the path specified contained more than 10,000 items. We have improved the Invoke-AzStorageSyncChangeDetection cmdlet and the 10,000 item limit no longer applies when scanning the entire share. To learn more, see the [Invoke-AzStorageSyncChangeDetection](/powershell/module/az.storagesync/invoke-azstoragesyncchangedetection) documentation.
-- Miscellaneous improvements
- - Azure File Sync is now supported in West US 3 region.
- - Fixed a bug that caused the FileSyncErrorsReport.ps1 script to not provide the list of all per-item errors.
- - Reduced transactions when a file consistently fails to upload due to a per-item sync error.
- - Reliability and telemetry improvements for cloud tiering and sync.
-
-### Evaluation Tool
-Before deploying Azure File Sync, you should evaluate whether it's compatible with your system using the Azure File Sync evaluation tool. This tool is an Azure PowerShell cmdlet that checks for potential issues with your file system and dataset, such as unsupported characters or an unsupported OS version. For installation and usage instructions, see [Evaluation Tool](file-sync-planning.md#evaluation-cmdlet) section in the planning guide.
-
-### Agent installation and server configuration
-For more information on how to install and configure the Azure File Sync agent with Windows Server, see [Planning for an Azure File Sync deployment](file-sync-planning.md) and [How to deploy Azure File Sync](file-sync-deployment-guide.md).
--- A restart is required for servers that have an existing Azure File Sync agent installation if the agent version is less than version 12.0.-- The agent installation package must be installed with elevated (admin) permissions.-- The agent is not supported on Nano Server deployment option.-- The agent is supported only on Windows Server 2019, Windows Server 2016, Windows Server 2012 R2, and Windows Server 2022.-- The agent requires at least 2 GiB of memory. If the server is running in a virtual machine with dynamic memory enabled, the VM should be configured with a minimum 2048 MiB of memory. See [Recommended system resources](file-sync-planning.md#recommended-system-resources) for more information.-- The Storage Sync Agent (FileSyncSvc) service does not support server endpoints located on a volume that has the system volume information (SVI) directory compressed. This configuration will lead to unexpected results.-
-### Interoperability
-- Antivirus, backup, and other applications that access tiered files can cause undesirable recall unless they respect the offline attribute and skip reading the content of those files. For more information, see [Troubleshoot Azure File Sync](/troubleshoot/azure/azure-storage/file-sync-troubleshoot?toc=/azure/storage/file-sync/toc.json).-- File Server Resource Manager (FSRM) file screens can cause endless sync failures when files are blocked because of the file screen.-- Running sysprep on a server that has the Azure File Sync agent installed is not supported and can lead to unexpected results. The Azure File Sync agent should be installed after deploying the server image and completing sysprep mini-setup.-
-### Sync limitations
-The following items don't sync, but the rest of the system continues to operate normally:
-- Files with unsupported characters. See [Troubleshooting guide](/troubleshoot/azure/azure-storage/file-sync-troubleshoot-sync-errors?toc=/azure/storage/file-sync/toc.json#handling-unsupported-characters) for a list of unsupported characters.-- Files or directories that end with a period.-- Paths that are longer than 2,048 characters.-- The system access control list (SACL) portion of a security descriptor that's used for auditing.-- Extended attributes.-- Alternate data streams.-- Reparse points.-- Hard links.-- Compression (if it's set on a server file) isn't preserved when changes sync to that file from other endpoints.-- Any file that's encrypted with EFS (or other user mode encryption) that prevents the service from reading the data.-
- > [!Note]
- > Azure File Sync always encrypts data in transit. Data is always encrypted at rest in Azure.
-
-### Server endpoint
-- A server endpoint can be created only on an NTFS volume. ReFS, FAT, FAT32, and other file systems aren't currently supported by Azure File Sync.-- Cloud tiering is not supported on the system volume. To create a server endpoint on the system volume, disable cloud tiering when creating the server endpoint.-- Failover Clustering is supported only with clustered disks, but not with Cluster Shared Volumes (CSVs).-- A server endpoint can't be nested. It can coexist on the same volume in parallel with another endpoint.-- Do not store an OS or application paging file within a server endpoint location.-
-### Cloud endpoint
-- Azure File Sync supports making changes to the Azure file share directly. However, any changes made on the Azure file share first need to be discovered by an Azure File Sync change detection job. A change detection job is initiated for a cloud endpoint once every 24 hours. To immediately sync files that are changed in the Azure file share, the [Invoke-AzStorageSyncChangeDetection](/powershell/module/az.storagesync/invoke-azstoragesyncchangedetection) PowerShell cmdlet can be used to manually initiate the detection of changes in the Azure file share. In addition, changes made to an Azure file share over the REST protocol will not update the SMB last modified time and will not be seen as a change by sync.-- The storage sync service and/or storage account can be moved to a different resource group, subscription, or Azure AD tenant. After the storage sync service or storage account is moved, you need to give the Microsoft.StorageSync application access to the storage account (see [Ensure Azure File Sync has access to the storage account](/troubleshoot/azure/azure-storage/file-sync-troubleshoot-sync-errors?toc=/azure/storage/file-sync/toc.json#troubleshoot-rbac)).-
- > [!Note]
- > When creating the cloud endpoint, the storage sync service and storage account must be in the same Azure AD tenant. Once the cloud endpoint is created, the storage sync service and storage account can be moved to different Azure AD tenants.
-
-### Cloud tiering
- If a tiered file is copied to another location by using Robocopy, the resulting file isn't tiered. The offline attribute might be set because Robocopy incorrectly includes that attribute in copy operations.-- When copying files using robocopy, use the /MIR option to preserve file timestamps. This will ensure older files are tiered sooner than recently accessed files.
+- When copying files using Robocopy, use the /MIR option to preserve file timestamps. This will ensure older files are tiered sooner than recently accessed files.
storage File Sync Replace Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-replace-server.md
description: How to replace an Azure File Sync server due to hardware decommissi
Previously updated : 02/27/2024 Last updated : 02/28/2024
This article provides guidance on how to replace an Azure File Sync server due to hardware decommissioning or end of support (for example, Windows Server 2012 R2).

## New Azure File Sync server
-1. Deploy a new on-premises server or Azure virtual machine that is running a [supported Windows Server operating system version](file-sync-planning.md#operating-system-requirements).
-2. [Install the latest Azure File Sync agent](file-sync-deployment-guide.md#install-the-azure-file-sync-agent) on the new server, then [register the server](file-sync-deployment-guide.md#register-windows-server-with-storage-sync-service) to the same Storage Sync Service as the server that is being replaced (referred to as old server in this guide).
-3. Create file shares on the new server and verify the share-level permissions match the permissions configured on the old server.
+
+1. Deploy a new on-premises server or Azure virtual machine that's running a [supported Windows Server operating system version](file-sync-planning.md#operating-system-requirements).
+2. [Install the latest Azure File Sync agent](file-sync-deployment-guide.md#install-the-azure-file-sync-agent) on the new server, then [register the server](file-sync-deployment-guide.md#register-windows-server-with-storage-sync-service) to the same Storage Sync Service as the server that's being replaced (referred to as "old server" in this guide).
+3. Create file shares on the new server and verify that the share-level permissions match the permissions configured on the old server.
4. Optional: To reduce the amount of data that needs to be downloaded to the new server from the Azure file share, use Robocopy to copy the files in the cache from the old server to the new server.

    ```console
-    Robocopy <source> <destination> /COPY:DATSO /MIR /DCOPY:AT /XA:O /B /IT /UNILOG:RobocopyLog.txt
+    Robocopy <source> <destination> /MT:16 /R:2 /W:1 /COPY:DATSO /MIR /DCOPY:DAT /XA:O /B /IT /UNILOG:RobocopyLog.txt
    ```

    Once the initial copy is completed, run the Robocopy command again to copy any remaining changes.

5. In the Azure portal, navigate to the Storage Sync Service. Go to the sync group which has a server endpoint for the old server and [create a server endpoint](file-sync-server-endpoint-create.md#create-a-server-endpoint) on the new server. Repeat this step for every sync group that has a server endpoint for the old server.
- For example, if the old server has 4 server endpoints (four sync groups), 4 server endpoints should be created on the new server.
+ For example, if the old server has four server endpoints (four sync groups), then you should create four server endpoints on the new server.
6. Wait for the namespace download to complete to the new server. To monitor progress, see [How do I monitor the progress of a current sync session?](/troubleshoot/azure/azure-storage/file-sync-troubleshoot-sync-errors?tabs=portal1%2Cazure-portal#how-do-i-monitor-the-progress-of-a-current-sync-session).

## User cut-over
+
To redirect user access to the new Azure File Sync server, perform one of the following options:

- Option #1: Rename the old server to a random name, then rename the new server to the same name as the old server.
- Option #2: Use [Distributed File Systems Namespaces (DFS-N)](/windows-server/storage/dfs-namespaces/dfs-overview) to redirect users to the new server.

## Old Azure File Sync server
+
1. Follow the steps in the [Deprovision or delete your Azure File Sync server endpoint](file-sync-server-endpoint-delete.md#scenario-1-you-intend-to-delete-your-server-endpoint-and-stop-using-your-local-server--vm) documentation to verify that all files have synced to the Azure file share prior to deleting one or more server endpoints on the old server.
2. Once all server endpoints are deleted on the old server, you can [unregister the server](file-sync-server-registration.md#unregister-the-server).
synapse-analytics Data Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/partner/data-management.md
description: Lists of third-party data management partners with solutions that s
Previously updated : 07/05/2023 Last updated : 02/29/2024
This article highlights Microsoft partner companies with data management tools a
## Data management partners

| Partner | Description | Website/Product link |
| - | -- | -- |
-| :::image type="content" source="./media/data-management/aginity-logo.png" alt-text="The logo of Aginity."::: |**Aginity**<br>Aginity is an analytics development tool. It puts the full power of Microsoft's Synapse platform in the hands of analysts and engineers. The rich and intuitive SQL development environment allows team members to connect to over a dozen industry leading analytics platforms. It allows users to ingest data in a variety of formats, and quickly build complex business calculation to serve the results into Business Intelligence and Machine Learning use cases. The entire application is built around a central catalog which makes collaboration across the analytics team a reality, and the sophisticated management capabilities and fine grained security make governance a breeze. |[Aginity](https://www.aginity.com/databases/microsoft/)<br> |
| :::image type="content" source="./media/data-management/alation-logo.png" alt-text="The logo of Alation."::: |**Alation**<br>Alation's data catalog dramatically improves the productivity, increases the accuracy, and drives confident data-driven decision making for analysts. Alation's data catalog empowers everyone in your organization to find, understand, and govern data. |[Alation](https://www.alation.com/product/data-catalog/)<br> | | :::image type="content" source="./media/data-integration/bibuilders-logo.png" alt-text="The logo of BI Builders (Xpert BI)."::: |**BI Builders (Xpert BI)**<br> Xpert BI provides an intuitive and searchable catalog for the line-of-business user to find, trust, and understand data and reports. The solution covers the whole data platform including Azure Synapse Analytics, ADLS Gen 2, Azure SQL Database, Analysis Services and Power BI, and also data flows and data movement end-to-end. Data stewards can update descriptions and tag data to follow regulatory requirements. Xpert BI can be integrated via APIs to other catalogs such as Microsoft Purview. It supplements traditional data catalogs with a business user perspective. |[Xpert BI](https://www.bi-builders.com/adding-automation-and-governance-to-azure-analytics/)<br>[Xpert BI in the Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/bi-builders-as.xpert-bi-vm)<br>| | :::image type="content" source="./media/data-management/coffing-data-warehousing-logo.png" alt-text="The logo of Coffing Data Warehousing."::: |**Coffing Data Warehousing**<br>Coffing Data Warehousing provides Nexus Chameleon, a tool with 10 years of design dedicated to querying systems. Nexus is available as a query tool for dedicated SQL pool in Azure Synapse Analytics. Use Nexus to query in-house and cloud computers and join data across different platforms. Point-Select-Report! |[Coffing Data Warehousing](https://coffingdw.com/software/nexus/)<br> |
+| :::image type="content" source="./media/data-management/coginiti.svg" alt-text="The logo of Coginiti."::: |**Coginiti**<br>Coginiti is an analytics development tool. It puts the full power of Microsoft's Synapse platform in the hands of analysts and engineers. The rich and intuitive SQL development environment allows team members to connect to over a dozen industry leading analytics platforms. It allows users to ingest data in a variety of formats, and quickly build complex business calculation to serve the results into Business Intelligence and Machine Learning use cases. The entire application is built around a central catalog which makes collaboration across the analytics team a reality, and the sophisticated management capabilities and fine grained security make governance a breeze. |[Coginiti](https://www.coginiti.co/database)<br> |
| :::image type="content" source="./media/data-management/inbrein-logo.png" alt-text="The logo of Inbrein."::: |**Inbrein MicroERD**<br>Inbrein MicroERD provides the tools that you need to create a precise data model, reduce data redundancy, improve productivity, and observe standards. By using its UI, which was developed based on extensive user experiences, a modeler can work on DB models easily and conveniently. You can continuously enjoy new and improved functions of MicroERD through prompt functional improvements and updates. |[Inbrein MicroDesigner](http://www.inbrein.com/en/solutions/Micro%20Designer.html)<br> | | :::image type="content" source="./media/data-management/infolibrarian-logo.png" alt-text="The logo of InfoLibrarian."::: |**InfoLibrarian (Metadata Management Server)**<br>InfoLibrarian catalogs, stores, and manages metadata to help you solve key pain points of data management. InfoLibrarian provides metadata management, data governance, and asset management solutions for managing and publishing metadata from a diverse set of tools and technologies. |[InfoLibrarian](http://www.infolibcorp.com/metadata-management/software-tools)<br> [Metadata Management Server (Data Catalog) in the Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/infolibrarian.infolibrarian-metadata-management-server)<br> |
virtual-desktop Whats New Client Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-client-windows.md
zone_pivot_groups: azure-virtual-desktop-windows-clients Previously updated : 02/14/2024 Last updated : 02/29/2024 # What's new in the Remote Desktop client for Windows
virtual-network How To Dhcp Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/how-to-dhcp-azure.md
+
+ Title: Deploy a DHCP server in Azure on a virtual machine
+
+description: Learn about how to deploy a Dynamic Host Configuration Protocol (DHCP) server in Azure on a virtual machine as a target for an on-premises DHCP relay agent.
++++ Last updated : 02/28/2024+
+#customer intent: As a Network Administrator, I want to deploy a highly available DHCP server in Azure so that I can provide DHCP services to my on-premises network.
+++
+# Deploy a DHCP server in Azure on a virtual machine
+
+Learn how to deploy a highly available DHCP server in Azure on a virtual machine. This server is used as a target for an on-premises DHCP relay agent to provide dynamic IP address allocation to on-premises clients. By design, broadcast packets sent directly from clients to a DHCP server don't work in an Azure virtual network.
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).
++
+## Create internal load balancer
+
+In this section, you create an internal load balancer that load balances virtual machines. An internal load balancer is used to load balance traffic inside a virtual network with a private IP address.
+
+During the creation of the load balancer, you configure:
+
+* Frontend IP address
+* Backend pool
+* Inbound load-balancing rules
+
+1. In the search box at the top of the portal, enter **Load balancer**. Select **Load balancers** in the search results.
+
+1. In the **Load balancer** page, select **Create**.
+
+1. In the **Basics** tab of the **Create load balancer** page, enter or select the following information:
+
+ | Setting | Value |
+ | | |
+ | **Project details** | |
+ | Subscription | Select your subscription. |
+ | Resource group | Select **test-rg**. |
+ | **Instance details** | |
+ | Name | Enter **load-balancer** |
+ | Region | Select **(US) East US 2**. |
+ | SKU | Leave the default **Standard**. |
+ | Type | Select **Internal**. |
+ | Tier | Leave the default **Regional**. |
+
+1. Select **Next: Frontend IP configuration** at the bottom of the page.
+
+1. In **Frontend IP configuration**, select **+ Add a frontend IP configuration**.
+
+1. Enter **frontend-1** in **Name**.
+
+1. Select **subnet-1 (10.0.0.0/24)** in **Subnet**.
+
+1. In **Assignment**, select **Static**.
+
+1. In **IP address**, enter **10.0.0.100**.
+
+1. Select **Add**.
+
+1. Select **Next: Backend pools** at the bottom of the page.
+
+1. In the **Backend pools** tab, select **+ Add a backend pool**.
+
+1. Enter **backend-pool** for **Name** in **Add backend pool**.
+
+1. Select **NIC** or **IP Address** for **Backend Pool Configuration**.
+
+1. Select **Save**.
+
+1. Select the blue **Review + create** button at the bottom of the page.
+
+1. Select **Create**.
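If you prefer scripting over the portal, the following is a rough Azure PowerShell sketch of the preceding steps. It assumes the **test-rg** resource group and a virtual network containing **subnet-1** already exist; the virtual network name **vnet-1** is an assumption, because the portal steps don't name it.

```powershell
# Assumptions: resource group test-rg and a virtual network containing subnet-1 already exist.
# The virtual network name vnet-1 is a placeholder; use the name of your own virtual network.
$vnet   = Get-AzVirtualNetwork -Name "vnet-1" -ResourceGroupName "test-rg"
$subnet = Get-AzVirtualNetworkSubnetConfig -Name "subnet-1" -VirtualNetwork $vnet

# Static frontend IP 10.0.0.100 (frontend-1) and an empty backend pool.
$frontend    = New-AzLoadBalancerFrontendIpConfig -Name "frontend-1" -Subnet $subnet -PrivateIpAddress "10.0.0.100"
$backendPool = New-AzLoadBalancerBackendAddressPoolConfig -Name "backend-pool"

# Create the internal Standard load balancer.
New-AzLoadBalancer -Name "load-balancer" -ResourceGroupName "test-rg" -Location "eastus2" `
    -Sku "Standard" -FrontendIpConfiguration $frontend -BackendAddressPool $backendPool
```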
+
+## Configure second load balancer frontend
+
+A second frontend is required for the load balancer to provide high availability for the DHCP server. Use the following steps to add a second frontend to the load balancer.
+
+1. In the Azure portal, search for and select **Load balancers**.
+
+1. Select **load-balancer**.
+
+1. In **Settings**, select **Frontend IP configuration**.
+
+1. Select **+ Add**.
+
+1. Enter or select the following information in **Add frontend IP configuration**:
+
+ | Setting | Value |
+ | | |
+ | **Name** | Enter **frontend-2**. |
+ | **Subnet** | Select **subnet-1 (10.0.0.0/24)**. |
+ | **Assignment** | Select **Static**. |
+ | **IP address** | Enter **10.0.0.200**. |
+ | **Availability zone** | Select **Zone-redundant**. |
+
+1. Select **Add**.
+
+1. Verify that in **Frontend IP configuration**, you have **frontend-1** and **frontend-2**.
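A rough PowerShell equivalent of adding the second frontend follows, using the same placeholder names as the earlier sketch.

```powershell
# Add frontend-2 with static IP 10.0.0.200 to the existing load balancer.
# vnet-1 is a placeholder virtual network name; substitute your own.
$vnet   = Get-AzVirtualNetwork -Name "vnet-1" -ResourceGroupName "test-rg"
$subnet = Get-AzVirtualNetworkSubnetConfig -Name "subnet-1" -VirtualNetwork $vnet

$lb = Get-AzLoadBalancer -Name "load-balancer" -ResourceGroupName "test-rg"
$lb | Add-AzLoadBalancerFrontendIpConfig -Name "frontend-2" -Subnet $subnet -PrivateIpAddress "10.0.0.200"
$lb | Set-AzLoadBalancer
```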
+
+## Create load balancer rules
+
+The load balancer rules are used to distribute traffic to the virtual machines. Use the following steps to create the load balancer rules.
+
+1. In the Azure portal, search for and select **Load balancers**.
+
+1. Select **load-balancer**.
+
+1. In **Settings**, select **Load balancing rules**.
+
+1. Select **+ Add**.
+
+1. Enter or select the following information in **Add load balancing rule**:
+
+ | Setting | Value |
+ | | |
+ | **Name** | Enter **lb-rule-1**. |
+ | **IP version** | Select **IPv4**. |
+ | **Frontend IP address** | Select **frontend-1**. |
+ | **Backend pool** | Select **backend-pool**. |
+ | **Protocol** | Select **UDP**. |
+ | **Port** | Enter **67**. |
+ | **Backend port** | Enter **67**. |
+ | **Health probe** | Select **Create new**. </br> Enter **dhcp-health-probe** for **Name**. </br> Select **TCP** for **Protocol**. </br> Enter **3389** for **Port**. </br> Enter **67** for **Interval**. </br> Enter **5** for **Unhealthy threshold**. </br> Select **Save**. |
+ | **Enable Floating IP** | Select the box. |
+
+1. Select **Save**.
+
+1. Repeat the previous steps to create the second load balancing rule. Replace the following values with the values for the second frontend:
+
+ | Setting | Value |
+ | | |
+ | **Name** | Enter **lb-rule-2**. |
+ | **Frontend IP address** | Select **frontend-2**. |
+ | **Health probe** | Select **dhcp-health-probe**. |
++
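If you're scripting the configuration, a sketch of the equivalent rule creation follows (same placeholder names as the earlier sketches). Note that the health probe uses TCP port 3389 while the rules themselves load balance UDP port 67 with floating IP enabled; the probe values mirror the portal table above.

```powershell
$lb = Get-AzLoadBalancer -Name "load-balancer" -ResourceGroupName "test-rg"

# TCP health probe on port 3389, mirroring the interval and threshold used in the portal steps.
$lb | Add-AzLoadBalancerProbeConfig -Name "dhcp-health-probe" -Protocol Tcp -Port 3389 `
    -IntervalInSeconds 67 -ProbeCount 5
$lb | Set-AzLoadBalancer
$lb = Get-AzLoadBalancer -Name "load-balancer" -ResourceGroupName "test-rg"

# One UDP/67 rule per frontend, with floating IP enabled.
foreach ($rule in @(@{ Name = "lb-rule-1"; Frontend = "frontend-1" }, @{ Name = "lb-rule-2"; Frontend = "frontend-2" })) {
    $lb | Add-AzLoadBalancerRuleConfig -Name $rule.Name `
        -FrontendIpConfiguration ($lb | Get-AzLoadBalancerFrontendIpConfig -Name $rule.Frontend) `
        -BackendAddressPool ($lb | Get-AzLoadBalancerBackendAddressPoolConfig -Name "backend-pool") `
        -Probe ($lb | Get-AzLoadBalancerProbeConfig -Name "dhcp-health-probe") `
        -Protocol Udp -FrontendPort 67 -BackendPort 67 -EnableFloatingIP
}
$lb | Set-AzLoadBalancer
```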
+## Configure DHCP server network adapters
+
+You'll sign in to the virtual machines with Azure Bastion and configure the network adapter settings and DHCP server role for each virtual machine.
+
+1. In the Azure portal, search for and select **Virtual machines**.
+
+1. Select **vm-1**.
+
+1. In the **vm-1** page, select **Connect** then **Connect via Bastion**.
+
+1. Enter the username and password you created when you created the virtual machine.
+
+1. Open **PowerShell** as an administrator.
+
+1. Run the following command to install the DHCP server role:
+
+ ```powershell
+ Install-WindowsFeature -Name DHCP -IncludeManagementTools
+ ```
+
+### Install Microsoft Loopback Adapter
+
+Use the following steps to install the Microsoft Loopback Adapter by using the Hardware Wizard:
+
+1. Open **Device Manager** on the virtual machine.
+
+1. Select the computer name **vm-1** in **Device Manager**.
+
+1. In the menu bar, select **Action** then **Add legacy hardware**.
+
+1. In the **Add Hardware Wizard**, select **Next**.
+
+1. Select **Install the hardware that I manually select from a list (Advanced)**, and then select **Next**
+
+1. In the **Common hardware types** list, select **Network adapters**, and then select **Next**.
+
+1. In the **Manufacturers** list box, select **Microsoft**.
+
+1. In the **Network Adapter** list box, select **Microsoft Loopback Adapter**, and then select **Next**.
+
+1. Select **Next** to start installing the drivers for your hardware.
+
+1. Select **Finish**.
+
+1. In **Device Manager**, expand **Network adapters**. Verify that **Microsoft Loopback Adapter** is listed.
+
+1. Close **Device Manager**.
+
+### Set static IP address for Microsoft Loopback Adapter
+
+Use the following steps to set a static IP address for the Microsoft Loopback Adapter:
+
+1. Open **Network and Internet settings** on the virtual machine.
+
+1. Select **Change adapter options**.
+
+1. Right-click **Microsoft Loopback Adapter** and select **Properties**.
+
+1. Select **Internet Protocol Version 4 (TCP/IPv4)** and select **Properties**.
+
+1. Select **Use the following IP address**.
+
+1. Enter the following information:
+
+ | Setting | Value |
+ | | |
+ | **IP address** | Enter **10.0.0.100**. |
+ | **Subnet mask** | Enter **255.255.255.0**. |
+
+1. Select **OK**.
+
+1. Select **Close**.
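If you'd rather script this step, a sketch using the built-in NetTCPIP cmdlets follows. It assumes the loopback adapter appears with the interface alias **Ethernet 3**, as in the example later in this article; check `Get-NetAdapter` for the actual alias on your VM.

```powershell
# List adapters to confirm the alias of the Microsoft Loopback Adapter (assumed to be "Ethernet 3" here).
Get-NetAdapter

# Assign the static address 10.0.0.100/24 to the loopback adapter (use 10.0.0.200 on vm-2).
New-NetIPAddress -InterfaceAlias "Ethernet 3" -IPAddress "10.0.0.100" -PrefixLength 24
```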
+
+### Enable routing between the loopback interface and the network adapter
+
+Use the following steps to enable routing between the loopback interface and the network adapter:
+
+1. Open **CMD** as an administrator.
+
+1. Run the following command to list the network interfaces:
+
+ ```cmd
+ netsh int ipv4 show int
+ ```
+
+ ```output
+ C:\Users\azureuser>netsh int ipv4 show int
+
+ Idx Met MTU State Name
+ - -
+ 1 75 4294967295 connected Loopback Pseudo-Interface 1
+ 6 5 1500 connected Ethernet
+ 11 25 1500 connected Ethernet 3
+ ```
+
+ In this example, the network interface connected to the Azure Virtual network is **Ethernet**. The loopback interface that you installed in the previous section is **Ethernet 3**.
+
+ **Make note of the `Idx` number for the primary network adapter and the loopback adapter. In this example the primary network adapter is `6` and the loopback adapter is `11`. You'll need these values for the next steps.**
+
+ > [!CAUTION]
+    > Don't confuse the **Loopback Pseudo-Interface 1** with the **Microsoft Loopback Adapter**. The **Loopback Pseudo-Interface 1** isn't used in this scenario.
+
+1. Run the following command to enable **weakhostreceive** and **weakhostsend** on the primary network adapter:
+
+ ```cmd
+ netsh int ipv4 set int 6 weakhostreceive=enabled weakhostsend=enabled
+ ```
+
+1. Run the following command to enable **weakhostreceive** and **weakhostsend** on the loopback adapter:
+
+ ```cmd
+ netsh int ipv4 set int 11 weakhostreceive=enabled weakhostsend=enabled
+ ```
+
+1. Close the bastion connection to **vm-1**.
+
+1. Repeat the previous steps to configure **vm-2**. Replace the IP address of **10.0.0.100** with **10.0.0.200** in the static IP address configuration of the loopback adapter.
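If you prefer PowerShell to `netsh` for the weak host settings, the following sketch does the same thing on each VM using interface aliases instead of index numbers. The aliases **Ethernet** and **Ethernet 3** come from the example output above; adjust them to match your VM.

```powershell
# Enable weak host send/receive on the primary network adapter and the loopback adapter.
# "Ethernet" and "Ethernet 3" are the aliases from the example output; substitute your own.
Set-NetIPInterface -InterfaceAlias "Ethernet"   -AddressFamily IPv4 -WeakHostReceive Enabled -WeakHostSend Enabled
Set-NetIPInterface -InterfaceAlias "Ethernet 3" -AddressFamily IPv4 -WeakHostReceive Enabled -WeakHostSend Enabled
```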
+
+## Next step
+
+In this article, you learned how to deploy a highly available DHCP server in Azure on a virtual machine. You also learned how to configure the network adapters and install the DHCP server role on the virtual machines. Further configuration of the DHCP server is required to provide DHCP services to on-premises clients from the Azure virtual machines. The DHCP relay agent on the on-premises network must be configured to forward DHCP requests to the DHCP servers in Azure. Consult the manufacturer's documentation for the DHCP relay agent for configuration steps.
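As an illustration of that further configuration, the following sketch (run on each DHCP server VM) creates a scope for an example on-premises range of 192.168.0.0/24. The address range, gateway, DNS server, and lease duration are assumptions; adapt them to your environment.

```powershell
# Example only: create a DHCP scope for an on-premises subnet of 192.168.0.0/24.
# The range, subnet mask, and lease duration are placeholders; use your own network values.
Add-DhcpServerv4Scope -Name "OnPremClients" -StartRange 192.168.0.100 -EndRange 192.168.0.200 `
    -SubnetMask 255.255.255.0 -LeaseDuration (New-TimeSpan -Days 1) -State Active

# Example scope options: default gateway and DNS server for the on-premises clients.
Set-DhcpServerv4OptionValue -ScopeId 192.168.0.0 -Router 192.168.0.1 -DnsServer 192.168.0.10
```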
virtual-network Service Tags Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/service-tags-overview.md
By default, service tags reflect the ranges for the entire cloud. Some service t
| **AzureArcInfrastructure** | Azure Arc-enabled servers, Azure Arc-enabled Kubernetes, and Guest Configuration traffic.<br/><br/>**Note**: This tag has a dependency on the **AzureActiveDirectory**,**AzureTrafficManager**, and **AzureResourceManager** tags. | Outbound | No | Yes | | **AzureAttestation** | Azure Attestation. | Outbound | No | Yes | | **AzureBackup** |Azure Backup.<br/><br/>**Note**: This tag has a dependency on the **Storage** and **AzureActiveDirectory** tags. | Outbound | No | Yes |
-| **AzureBotService** | Azure Bot Service. | Outbound | No | Yes |
+| **AzureBotService** | Azure Bot Service. | Both | No | Yes |
| **AzureCloud** | All [datacenter public IP addresses](https://www.microsoft.com/download/details.aspx?id=56519). Includes IPv6. | Both | Yes | Yes | | **AzureCognitiveSearch** | Azure AI Search. <br/><br/>This tag or the IP addresses covered by this tag can be used to grant indexers secure access to data sources. For more information about indexers, see [indexer connection documentation](../search/search-indexer-troubleshooting.md#connection-errors). <br/><br/> **Note**: The IP of the search service isn't included in the list of IP ranges for this service tag and **also needs to be added** to the IP firewall of data sources. | Inbound | No | Yes | | **AzureConnectors** | This tag represents the IP addresses used for managed connectors that make inbound webhook callbacks to the Azure Logic Apps service and outbound calls to their respective services, for example, Azure Storage or Azure Event Hubs. | Both | Yes | Yes |
virtual-network Virtual Networks Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-networks-faq.md
Unicast is supported in virtual networks. Multicast, broadcast, IP-in-IP encapsu
### Can I deploy a DHCP server in a virtual network?
-Azure virtual networks provide DHCP service and DNS to VMs. Client/server DHCP traffic (source port UDP/68, destination port UDP/67) is not supported in a virtual network.
+Azure virtual networks provide DHCP service and DNS to Azure Virtual Machines. However, you can also deploy a DHCP server in an Azure VM to serve on-premises clients via a DHCP relay agent.
-You can't deploy your own DHCP service to receive and provide unicast or broadcast client/server DHCP traffic for endpoints inside a virtual network. Deploying a DHCP server VM with the intent to receive unicast DHCP relay (source port UDP/67, destination port UDP/67) traffic is also an *unsupported* scenario.
+DHCP Server in Azure was previously marked as unsupported since the traffic to port UDP/67 was rate limited in Azure. However, recent platform updates have removed the rate limitation, enabling this capability.
+
+> [!NOTE]
+> Traffic from an on-premises client directly to the DHCP server (source port UDP/68, destination port UDP/67) is still not supported in Azure, because this traffic is intercepted and handled differently. This results in timeout messages at the time of DHCP RENEW at T1, when the client attempts to reach the DHCP server in Azure directly. The DHCP RENEW should succeed when the renewal attempt is made at T2 via the DHCP relay agent. For more details on the T1 and T2 DHCP RENEW timers, see [RFC 2131](https://www.ietf.org/rfc/rfc2131.txt).
### Can I ping a default gateway in a virtual network?
vpn-gateway Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/design.md
Title: 'Azure VPN Gateway topologies and design'
-description: Learn about VPN Gateway topologies and designs to connect on-premises locations to virtual networks.
+description: Learn about VPN Gateway topologies and designs you can use to connect on-premises locations to virtual networks.
Previously updated : 04/10/2023 Last updated : 02/28/2024
-# VPN Gateway design
+# VPN Gateway topology and design
-It's important to know that there are different configurations available for VPN gateway connections. You need to determine which configuration best fits your needs. In the sections below, you can view design information and topology diagrams about the following VPN gateway connections. Use the diagrams and descriptions to help select the connection topology to match your requirements. The diagrams show the main baseline topologies, but it's possible to build more complex configurations using the diagrams as guidelines.
+There are many different configuration options available for VPN Gateway connections. Use the diagrams and descriptions in the following sections to help you select the connection topology that meets your requirements. The diagrams show the main baseline topologies, but it's possible to build more complex configurations using the diagrams as guidelines.
## <a name="s2smulti"></a>Site-to-site VPN
-A Site-to-site (S2S) VPN gateway connection is a connection over IPsec/IKE (IKEv1 or IKEv2) VPN tunnel. S2S connections can be used for cross-premises and hybrid configurations. A S2S connection requires a VPN device located on-premises that has a public IP address assigned to it. For information about selecting a VPN device, see the [VPN Gateway FAQ - VPN devices](vpn-gateway-vpn-faq.md#s2s).
+A site-to-site (S2S) VPN gateway connection is a connection over an IPsec/IKE (IKEv1 or IKEv2) VPN tunnel. Site-to-site connections can be used for cross-premises and hybrid configurations. A site-to-site connection requires a VPN device located on-premises that has a public IP address assigned to it. For information about selecting a VPN device, see the [VPN Gateway FAQ - VPN devices](vpn-gateway-vpn-faq.md#s2s).
:::image type="content" source="./media/tutorial-site-to-site-portal/diagram.png" alt-text="Diagram of site-to-site VPN Gateway cross-premises connections." lightbox="./media/tutorial-site-to-site-portal/diagram.png":::
-VPN Gateway can be configured in active-standby mode using one public IP or in active-active mode using two public IPs. In active-standby mode, one IPsec tunnel is active and the other tunnel is in standby. In this setup, traffic flows through the active tunnel, and if some issue happens with this tunnel, the traffic switches over to the standby tunnel. Setting up VPN Gateway in active-active mode is *recommended* in which both the IPsec tunnels are simultaneously active, with data flowing through both tunnels at the same time. An additional advantage of active-active mode is that customers experience higher throughputs.
+VPN Gateway can be configured in active-standby mode using one public IP, or in active-active mode using two public IPs. In active-standby mode, one IPsec tunnel is active and the other tunnel is in standby. In this setup, traffic flows through the active tunnel, and if an issue occurs with that tunnel, the traffic switches over to the standby tunnel. Setting up VPN Gateway in active-active mode is *recommended*; in this mode, both IPsec tunnels are simultaneously active, with data flowing through both tunnels at the same time. Another advantage of active-active mode is that customers experience higher throughputs.
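As a sketch of what the active-active option looks like when you script gateway creation (the resource names, virtual network, and public IPs here are placeholders, not values from this article), an active-active gateway needs two public IPs and two gateway IP configurations:

```powershell
# Placeholder names throughout; adjust to your environment.
$vnet     = Get-AzVirtualNetwork -Name "vnet-1" -ResourceGroupName "test-rg"
$gwSubnet = Get-AzVirtualNetworkSubnetConfig -Name "GatewaySubnet" -VirtualNetwork $vnet

$pip1 = New-AzPublicIpAddress -Name "gw-pip-1" -ResourceGroupName "test-rg" -Location "eastus2" -AllocationMethod Static -Sku Standard
$pip2 = New-AzPublicIpAddress -Name "gw-pip-2" -ResourceGroupName "test-rg" -Location "eastus2" -AllocationMethod Static -Sku Standard

$ipconf1 = New-AzVirtualNetworkGatewayIpConfig -Name "gw-ipconf-1" -SubnetId $gwSubnet.Id -PublicIpAddressId $pip1.Id
$ipconf2 = New-AzVirtualNetworkGatewayIpConfig -Name "gw-ipconf-2" -SubnetId $gwSubnet.Id -PublicIpAddressId $pip2.Id

# Create the gateway in active-active mode with both IP configurations.
New-AzVirtualNetworkGateway -Name "vpn-gateway" -ResourceGroupName "test-rg" -Location "eastus2" `
    -GatewayType Vpn -VpnType RouteBased -GatewaySku VpnGw2 `
    -IpConfigurations $ipconf1, $ipconf2 -EnableActiveActiveFeature
```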
You can create more than one VPN connection from your virtual network gateway, typically connecting to multiple on-premises sites. When working with multiple connections, you must use a RouteBased VPN type (known as a dynamic gateway when working with classic VNets). Because each virtual network can only have one VPN gateway, all connections through the gateway share the available bandwidth. This type of connection is sometimes referred to as a "multi-site" connection.
You can create more than one VPN connection from your virtual network gateway, t
## <a name="P2S"></a>Point-to-site VPN
-A point-to-site (P2S) VPN gateway connection lets you create a secure connection to your virtual network from an individual client computer. A P2S connection is established by starting it from the client computer. This solution is useful for telecommuters who want to connect to Azure VNets from a remote location, such as from home or a conference. P2S VPN is also a useful solution to use instead of S2S VPN when you have only a few clients that need to connect to a VNet.
+A point-to-site (P2S) VPN gateway connection lets you create a secure connection to your virtual network from an individual client computer. A point-to-site connection is established by starting it from the client computer. This solution is useful for telecommuters who want to connect to Azure virtual networks from a remote location, such as from home or a conference. Point-to-site VPN is also a useful solution to use instead of site-to-site VPN when you have only a few clients that need to connect to a virtual network.
-Unlike S2S connections, P2S connections don't require an on-premises public-facing IP address or a VPN device. P2S connections can be used with S2S connections through the same VPN gateway, as long as all the configuration requirements for both connections are compatible. For more information about point-to-site connections, see [About point-to-site VPN](point-to-site-about.md).
+Unlike site-to-site connections, point-to-site connections don't require an on-premises public-facing IP address or a VPN device. Point-to-site connections can be used with site-to-site connections through the same VPN gateway, as long as all the configuration requirements for both connections are compatible. For more information about point-to-site connections, see [About point-to-site VPN](point-to-site-about.md).
:::image type="content" source="./media/vpn-gateway-howto-point-to-site-rm-ps/point-to-site-diagram.png" alt-text="Diagram of point-to-site connections." lightbox="./media/vpn-gateway-howto-point-to-site-rm-ps/point-to-site-diagram.png"::: ### Deployment models and methods for P2S +
+### P2S VPN client configuration
+ ## <a name="V2V"></a>VNet-to-VNet connections (IPsec/IKE VPN tunnel)
-Connecting a virtual network to another virtual network (VNet-to-VNet) is similar to connecting a VNet to an on-premises site location. Both connectivity types use a VPN gateway to provide a secure tunnel using IPsec/IKE. You can even combine VNet-to-VNet communication with multi-site connection configurations. This lets you establish network topologies that combine cross-premises connectivity with inter-virtual network connectivity.
+Connecting a virtual network to another virtual network (VNet-to-VNet) is similar to connecting a virtual network to an on-premises site location. Both connectivity types use a VPN gateway to provide a secure tunnel using IPsec/IKE. You can even combine VNet-to-VNet communication with multi-site connection configurations. This lets you establish network topologies that combine cross-premises connectivity with inter-virtual network connectivity.
-The VNets you connect can be:
+The virtual networks you connect can be:
* in the same or different regions
-* in the same or different subscriptions
+* in the same or different subscriptions
* in the same or different deployment models :::image type="content" source="./media/vpn-gateway-howto-vnet-vnet-resource-manager-portal/vnet-vnet-diagram.png" alt-text="Diagram of VNet-to-VNet connections." lightbox="./media/vpn-gateway-howto-vnet-vnet-resource-manager-portal/vnet-vnet-diagram.png":::
-### Connections between deployment models
-
-Azure currently has two deployment models: classic and Resource Manager. If you have been using Azure for some time, you probably have Azure VMs and instance roles running in a classic VNet. Your newer VMs and role instances may be running in a VNet created in Resource Manager. You can create a connection between the VNets to allow the resources in one VNet to communicate directly with resources in another.
-
-### VNet peering
-
-You may be able to use VNet peering to create your connection, as long as your virtual network meets certain requirements. VNet peering doesn't use a virtual network gateway. For more information, see [VNet peering](../virtual-network/virtual-network-peering-overview.md).
- ### Deployment models and methods for VNet-to-VNet +
+In some cases, you might want to use virtual network peering instead of VNet-to-VNet to connect your virtual networks. Virtual network peering doesn't use a virtual network gateway. For more information, see [Virtual network peering](../virtual-network/virtual-network-peering-overview.md).
## <a name="coexisting"></a>Site-to-site and ExpressRoute coexisting connections

[ExpressRoute](../expressroute/expressroute-introduction.md) is a direct, private connection from your WAN (not over the public Internet) to Microsoft Services, including Azure. Site-to-site VPN traffic travels encrypted over the public Internet. Being able to configure site-to-site VPN and ExpressRoute connections for the same virtual network has several advantages.
-You can configure a site-to-site VPN as a secure failover path for ExpressRoute, or use site-to-site VPNs to connect to sites that aren't part of your network, but that are connected through ExpressRoute. Notice that this configuration requires two virtual network gateways for the same virtual network, one using the gateway type 'Vpn', and the other using the gateway type 'ExpressRoute'.
+You can configure a site-to-site VPN as a secure failover path for ExpressRoute, or use site-to-site VPNs to connect to sites that aren't part of your network, but that are connected through ExpressRoute. Notice that this configuration requires two virtual network gateways for the same virtual network, one using the gateway type *Vpn*, and the other using the gateway type *ExpressRoute*.
:::image type="content" source="./media/design/expressroute-vpngateway-coexisting-connections-diagram.png" alt-text="Diagram of ExpressRoute and VPN Gateway coexisting connections." lightbox="./media/design/expressroute-vpngateway-coexisting-connections-diagram.png":::
-### Deployment models and methods for S2S and ExpressRoute coexist
+### Deployment models and methods for S2S and ExpressRoute coexisting connections
## <a name="highly-available"></a>Highly available connections
vpn-gateway Vpn Gateway About Vpn Gateway Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-about-vpn-gateway-settings.md
description: Learn about VPN Gateway resources and configuration settings.
Previously updated : 01/23/2024 Last updated : 02/29/2024 ms.devlang: azurecli # About VPN Gateway configuration settings
-A VPN gateway is a type of virtual network gateway that sends encrypted traffic between your virtual network and your on-premises location across a public connection. You can also use a VPN gateway to send traffic between virtual networks across the Azure backbone.
+VPN gateway connection architecture relies on the configuration of multiple resources, each of which contains configurable settings. The sections in this article discuss the resources and settings that relate to a VPN gateway for a virtual network created in [Resource Manager deployment model](../azure-resource-manager/management/deployment-models.md). You can find descriptions and topology diagrams for each connection solution in the [VPN Gateway topology and design](design.md) article.
-VPN gateway connections rely on the configuration of multiple resources, each of which contains configurable settings. The sections in this article discuss the resources and settings that relate to a VPN gateway for a virtual network created in [Resource Manager deployment model](../azure-resource-manager/management/deployment-models.md). You can find descriptions and topology diagrams for each connection solution in the [VPN Gateway design](design.md) article.
-
-The values in this article apply VPN gateways (virtual network gateways that use the -GatewayType Vpn). See the following articles for information regarding gateways that use these specified settings:
-
-* For values that apply to -GatewayType 'ExpressRoute', see [Virtual Network Gateways for ExpressRoute](../expressroute/expressroute-about-virtual-network-gateways.md).
+The values in this article specifically apply to VPN gateways (virtual network gateways that use the -GatewayType Vpn). If you're looking for information about the following types of gateways, see the following articles:
+* For values that apply to -GatewayType 'ExpressRoute', see [Virtual network gateways for ExpressRoute](../expressroute/expressroute-about-virtual-network-gateways.md).
* For zone-redundant gateways, see [About zone-redundant gateways](about-zone-redundant-vnet-gateways.md).- * For active-active gateways, see [About highly available connectivity](vpn-gateway-highlyavailable.md).- * For Virtual WAN gateways, see [About Virtual WAN](../virtual-wan/virtual-wan-about.md).
-## <a name="vpntype"></a>VPN types
-
-Currently, Azure supports two gateway VPN types: route-based VPN gateways and policy-based VPN gateways. They're built on different internal platforms, which result in different specifications.
-
-As of Oct 1, 2023, you can't create a policy-based VPN gateway through Azure portal. All new VPN gateways will automatically be created as route-based. If you already have a policy-based gateway, you don't need to upgrade your gateway to route-based. You can use Powershell/CLI to create the policy-based gateways.
-
-Previously, the older gateway SKUs didn't support IKEv1 for route-based gateways. Now, most of the current gateway SKUs support both IKEv1 and IKEv2.
+## <a name="gwtype"></a>Gateways and gateway types
+ A virtual network gateway is composed of two or more Azure-managed VMs that are automatically configured and deployed to a specific subnet that you create called the **gateway subnet**. The gateway VMs contain routing tables and run specific gateway services.
-## <a name="gwtype"></a>Gateway types
+When you create a virtual network gateway, the gateway VMs are automatically deployed to the gateway subnet (always named *GatewaySubnet*) and configured with the settings that you specified. This process can take 45 minutes or more to complete, depending on the gateway SKU that you selected.
-Each virtual network can only have one virtual network gateway of each type. When you're creating a virtual network gateway, you must make sure that the gateway type is correct for your configuration.
+One of the settings that you specify when creating a virtual network gateway is the **gateway type**. The gateway type determines how the virtual network gateway is used and the actions that the gateway takes. A virtual network can have two virtual network gateways; one VPN gateway and one ExpressRoute gateway. The gateway type 'Vpn' specifies that the type of virtual network gateway created is a **VPN gateway**. This distinguishes it from an ExpressRoute gateway, which uses a different gateway type.
-The available values for -GatewayType are:
+When you're creating a virtual network gateway, you must make sure that the gateway type is correct for your configuration. The available values for -GatewayType are:
* Vpn * ExpressRoute
New-AzVirtualNetworkGateway -Name vnetgw1 -ResourceGroupName testrg `
See [About Gateway SKUs](about-gateway-skus.md) article for the latest information about gateway SKUs, performance, and supported features.
+## <a name="vpntype"></a>VPN types
+
+Azure supports two different VPN types for VPN gateways: policy-based and route-based. Route-based VPN gateways are built on a different platform than policy-based VPN gateways. This results in different gateway specifications.
+
+In most cases, you'll create a route-based VPN gateway. Previously, the older gateway SKUs didn't support IKEv1 for route-based gateways. Now, most of the current gateway SKUs support both IKEv1 and IKEv2. If you already have a policy-based gateway, you aren't required to upgrade your gateway to route-based.
+
+If you want to create a policy-based gateway, use PowerShell or CLI. As of Oct 1, 2023, you can't create a policy-based VPN gateway through the Azure portal; only route-based gateways are available there.
++
## <a name="connectiontype"></a>Connection types

In the [Resource Manager deployment model](../azure-resource-manager/management/deployment-models.md), each configuration requires a specific virtual network gateway connection type. The available Resource Manager PowerShell values for `-ConnectionType` are:
New-AzVirtualNetworkGatewayConnection -Name localtovon -ResourceGroupName testrg
## <a name="gwsub"></a>Gateway subnet
-Before you create a VPN gateway, you must create a gateway subnet. The gateway subnet contains the IP addresses that the virtual network gateway VMs and services use. When you create your virtual network gateway, gateway VMs are deployed to the gateway subnet and configured with the required VPN gateway settings. Never deploy anything else (for example, additional VMs) to the gateway subnet. The gateway subnet must be named 'GatewaySubnet' to work properly. Naming the gateway subnet 'GatewaySubnet' lets Azure know that this is the subnet to which it should deploy the virtual network gateway VMs and services.
+Before you create a VPN gateway, you must create a gateway subnet. The gateway subnet contains the IP addresses that the virtual network gateway VMs and services use. When you create your virtual network gateway, gateway VMs are deployed to the gateway subnet and configured with the required VPN gateway settings. Never deploy anything else (for example, more VMs) to the gateway subnet. The gateway subnet must be named 'GatewaySubnet' to work properly. Naming the gateway subnet 'GatewaySubnet' lets Azure know that this is the subnet to which it should deploy the virtual network gateway VMs and services.
When you create the gateway subnet, you specify the number of IP addresses that the subnet contains. The IP addresses in the gateway subnet are allocated to the gateway VMs and gateway services. Some configurations require more IP addresses than others.
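For example, a minimal sketch of adding a gateway subnet to an existing virtual network follows; the virtual network name and address prefix are placeholders, and the subnet must be named GatewaySubnet.

```powershell
# Placeholder virtual network name and address range; the subnet name must be GatewaySubnet.
$vnet = Get-AzVirtualNetwork -Name "vnet-1" -ResourceGroupName "testrg"
Add-AzVirtualNetworkSubnetConfig -Name "GatewaySubnet" -AddressPrefix "10.0.255.0/27" -VirtualNetwork $vnet
$vnet | Set-AzVirtualNetwork
```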
Considerations:
## <a name="lng"></a>Local network gateways
-A local network gateway is different than a virtual network gateway. When creating a VPN gateway configuration, the local network gateway usually represents your on-premises network and the corresponding VPN device. In the classic deployment model, the local network gateway was referred to as a Local Site.
+A local network gateway is different than a virtual network gateway. When you're working with a VPN gateway site-to-site architecture, the local network gateway usually represents your on-premises network and the corresponding VPN device. In the classic deployment model, the local network gateway is referred to as a *Local Site*.
-When you configure a local network gateway, you specify the name, the public IP address or the fully qualified domain name (FQDN) of the on-premises VPN device, and the address prefixes that are located on the on-premises location. Azure looks at the destination address prefixes for network traffic, consults the configuration that you've specified for your local network gateway, and routes packets accordingly. If you use Border Gateway Protocol (BGP) on your VPN device, you provide the BGP peer IP address of your VPN device and the autonomous system number (ASN) of your on-premises network. You also specify local network gateways for VNet-to-VNet configurations that use a VPN gateway connection.
+When you configure a local network gateway, you specify the name, the public IP address or the fully qualified domain name (FQDN) of the on-premises VPN device, and the address prefixes that are located on the on-premises location. Azure looks at the destination address prefixes for network traffic, consults the configuration that you specified for your local network gateway, and routes packets accordingly. If you use Border Gateway Protocol (BGP) on your VPN device, you provide the BGP peer IP address of your VPN device and the autonomous system number (ASN) of your on-premises network. You also specify local network gateways for VNet-to-VNet configurations that use a VPN gateway connection.
The following PowerShell example creates a new local network gateway:
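As a minimal sketch, using placeholder names and address values rather than values from this article, the command can look like this:

```azurepowershell
# Represent the on-premises site by the VPN device's public IP address and the
# address prefixes located on premises.
New-AzLocalNetworkGateway -Name "Site1" -ResourceGroupName "TestRG1" -Location "East US" `
  -GatewayIpAddress "203.0.113.10" -AddressPrefix @("10.101.0.0/24","10.102.0.0/24")
```

If your VPN device uses BGP, the `-Asn` and `-BgpPeeringAddress` parameters supply the on-premises ASN and the BGP peer IP address of the device.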
Sometimes you need to modify the local network gateway settings. For example, wh
## <a name="resources"></a>REST APIs, PowerShell cmdlets, and CLI
-For additional technical resources and specific syntax requirements when using REST APIs, PowerShell cmdlets, or Azure CLI for VPN Gateway configurations, see the following pages:
+For technical resources and specific syntax requirements when using REST APIs, PowerShell cmdlets, or Azure CLI for VPN Gateway configurations, see the following pages:
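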
| **Classic** | **Resource Manager** |
| --- | --- |
vpn-gateway Vpn Gateway About Vpngateways https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-about-vpngateways.md
Title: 'About Azure VPN Gateway'
-description: Learn what VPN Gateway is, and how to use a VPN gateway to connect to IPsec IKE Site-to-Site, VNet-to-VNet, and Point-to-Site VPN virtual networks.
+description: Learn what VPN Gateway is, and how to use a VPN gateway to connect to IPsec IKE site-to-site, VNet-to-VNet, and point-to-site VPN virtual networks.
Previously updated : 01/04/2024 Last updated : 02/29/2024 # Customer intent: As someone with a basic network background who is new to Azure, I want to understand the capabilities of Azure VPN Gateway so that I can securely connect to my Azure virtual networks.
# What is Azure VPN Gateway?
-Azure VPN Gateway is a service that uses a specific type of virtual network gateway to send encrypted traffic between an Azure virtual network and on-premises locations over the public Internet. You can also use VPN Gateway to send encrypted traffic between Azure virtual networks over the Microsoft network. Multiple connections can be created to the same VPN gateway. When you create multiple connections, all VPN tunnels share the available gateway bandwidth.
+Azure VPN Gateway is a service that can be used to send encrypted traffic between an Azure virtual network and on-premises locations over the public Internet. You can also use VPN Gateway to send encrypted traffic between Azure virtual networks over the Microsoft network. VPN Gateway uses a specific type of Azure virtual network gateway called a VPN gateway. Multiple connections can be created to the same VPN gateway. When you create multiple connections, all VPN tunnels share the available gateway bandwidth.
-## <a name="vpn"></a>About VPN gateways
+## Why use VPN Gateway?
-A VPN gateway is a type of virtual network gateway. A virtual network gateway is composed of two or more Azure-managed VMs that are automatically configured and deployed to a specific subnet you create called the *GatewaySubnet*. The gateway VMs contain routing tables and run specific gateway services.
+Here are some of the key scenarios for VPN Gateway:
-One of the settings that you specify when creating a virtual network gateway is the "gateway type". The gateway type determines how the virtual network gateway will be used and the actions that the gateway takes. A virtual network can have two virtual network gateways; one VPN gateway and one ExpressRoute gateway. The gateway type 'Vpn' specifies that the type of virtual network gateway created is a **VPN gateway**. This distinguishes it from an ExpressRoute gateway, which uses a different gateway type. For more information, see [Gateway types](vpn-gateway-about-vpn-gateway-settings.md#gwtype).
+* Send encrypted traffic between an Azure virtual network and on-premises locations over the public Internet. You can do this by using the following types of connections:
-When you create a VPN gateway, gateway VMs are deployed to the gateway subnet and configured with the settings that you specified. This process can take 45 minutes or more to complete, depending on the gateway SKU that you selected. After you create a VPN gateway, you can configure connections. For example, you can create an IPsec/IKE VPN tunnel connection between that VPN gateway and another VPN gateway (VNet-to-VNet), or create a cross-premises IPsec/IKE VPN tunnel connection between the VPN gateway and an on-premises VPN device (Site-to-Site). You can also create a Point-to-Site VPN connection (VPN over OpenVPN, IKEv2, or SSTP), which lets you connect to your virtual network from a remote location, such as from a conference or from home.
+ * **Site-to-site connection:** A cross-premises IPsec/IKE VPN tunnel connection between the VPN gateway and an on-premises VPN device.
-## <a name="configuring"></a>Configuring VPN Gateway
+ * **Point-to-site connection:** VPN over OpenVPN, IKEv2, or SSTP. This type of connection lets you connect to your virtual network from a remote location, such as from a conference or from home.
+
+* Send encrypted traffic between virtual networks. You can do this by using the following types of connections:
+
+ * **VNet-to-VNet:** An IPsec/IKE VPN tunnel connection between the VPN gateway and another Azure VPN gateway that uses a *VNet-to-VNet* connection type. This connection type is designed specifically for VNet-to-VNet connections.
+
+ * **Site-to-site connection:** An IPsec/IKE VPN tunnel connection between the VPN gateway and another Azure VPN gateway. This type of connection, when used in the VNet-to-VNet architecture, uses the *Site-to-site (IPsec)* connection type, which allows cross-premises connections to the gateway in addition to connections between VPN gateways (see the PowerShell sketch after this list).
+
+* Configure a site-to-site VPN as a secure failover path for [ExpressRoute](../expressroute/expressroute-introduction.md). You can do this by using:
+
+ * **ExpressRoute + VPN Gateway:** A combination of ExpressRoute + VPN Gateway connections (coexisting connections).
-A VPN gateway connection relies on multiple resources that are configured with specific settings. Most of the resources can be configured separately, although some resources must be configured in a certain order.
+* Use site-to-site VPNs to connect to sites that aren't connected through [ExpressRoute](../expressroute/expressroute-introduction.md). You can do this with:
-### <a name="connectivity"></a> Connectivity
+ * **ExpressRoute + VPN Gateway:** A combination of ExpressRoute + VPN Gateway connections (coexisting connections).
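
As a rough sketch of how these connection types map to Azure PowerShell, the `-ConnectionType` value determines whether a connection is cross-premises or between VPN gateways. The gateway, local network gateway, and shared key values here are placeholders:

```azurepowershell
# Look up the existing gateways (placeholder names).
$gw1  = Get-AzVirtualNetworkGateway -Name "VNet1GW" -ResourceGroupName "TestRG1"
$gw2  = Get-AzVirtualNetworkGateway -Name "VNet2GW" -ResourceGroupName "TestRG1"
$site = Get-AzLocalNetworkGateway -Name "Site1" -ResourceGroupName "TestRG1"

# Site-to-site (cross-premises) connection to an on-premises VPN device.
New-AzVirtualNetworkGatewayConnection -Name "VNet1-to-Site1" -ResourceGroupName "TestRG1" `
  -Location "East US" -VirtualNetworkGateway1 $gw1 -LocalNetworkGateway2 $site `
  -ConnectionType IPsec -SharedKey "abc123"

# VNet-to-VNet connection between two Azure VPN gateways.
New-AzVirtualNetworkGatewayConnection -Name "VNet1-to-VNet2" -ResourceGroupName "TestRG1" `
  -Location "East US" -VirtualNetworkGateway1 $gw1 -VirtualNetworkGateway2 $gw2 `
  -ConnectionType Vnet2Vnet -SharedKey "abc123"
```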
-Because you can create multiple connection configurations using VPN Gateway, you need to determine which configuration best fits your needs. Point-to-Site, Site-to-Site, and coexisting ExpressRoute/Site-to-Site connections all have different instructions and configuration requirements. For connection diagrams and corresponding links to configuration steps, see [VPN Gateway design](design.md).
+## <a name="connectivity"></a> Planning and design
-* [Site-to-Site VPN connections](design.md#s2smulti)
-* [Point-to-Site VPN connections](design.md#P2S)
+Because you can create multiple connection configurations using VPN Gateway, you need to determine which configuration best fits your needs. Point-to-site, site-to-site, and coexisting ExpressRoute/site-to-site connections all have different instructions and resource configuration requirements.
+
+See the [VPN Gateway topology and design](design.md) article for design topologies and links to configuration instructions. The following sections of the article highlight some of the design topologies that are most often used.
+
+* [Site-to-site VPN connections](design.md#s2smulti)
+* [Point-to-site VPN connections](design.md#P2S)
* [VNet-to-VNet VPN connections](design.md#V2V) ### <a name="planningtable"></a>Planning table
-The following table can help you decide the best connectivity option for your solution. Note that ExpressRoute isn't a part of VPN Gateway, but is included in the table.
+The following table can help you decide the best connectivity option for your solution.
[!INCLUDE [cross-premises](../../includes/vpn-gateway-cross-premises-include.md)]
-### <a name="settings"></a>Settings
+### <a name="availability"></a>Availability Zones
+
+VPN gateways can be deployed in Azure Availability Zones. This brings resiliency, scalability, and higher availability to virtual network gateways. Deploying gateways in Azure Availability Zones physically and logically separates gateways within a region, while protecting your on-premises network connectivity to Azure from zone-level failures. See [About zone-redundant virtual network gateways in Azure Availability Zones](about-zone-redundant-vnet-gateways.md).
+
+## <a name="configuring"></a>Configuring VPN Gateway
-The settings that you chose for each resource are critical to creating a successful connection. For information about individual resources and settings for VPN Gateway, see [About VPN Gateway settings](vpn-gateway-about-vpn-gateway-settings.md) and [About gateway SKUs](about-gateway-skus.md). These articles contain information to help you understand gateway types, gateway SKUs, VPN types, connection types, gateway subnets, local network gateways, and various other resource settings that you might want to consider.
+A VPN gateway connection relies on multiple resources that are configured with specific settings. In some cases, resources must be configured in a certain order. The settings that you chose for each resource are critical to creating a successful connection.
-### <a name="tools"></a>Deployment tools
+For information about individual resources and settings for VPN Gateway, see [About VPN Gateway settings](vpn-gateway-about-vpn-gateway-settings.md) and [About gateway SKUs](about-gateway-skus.md). These articles contain information to help you understand gateway types, gateway SKUs, VPN types, connection types, gateway subnets, local network gateways, and various other resource settings that you might want to consider.
-You can start out creating and configuring resources using one configuration tool, such as the Azure portal. You can later decide to switch to another tool, such as PowerShell, to configure additional resources, or modify existing resources when applicable. Currently, you can't configure every resource and resource setting in the Azure portal. The instructions in the articles for each connection topology specify when a specific configuration tool is needed.
+For design diagrams and links to configuration articles, see the [VPN Gateway topology and design](design.md) article.
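
To make the resource relationships concrete, here's a minimal Azure PowerShell sketch of the typical creation order, assuming a virtual network named *VNet1* with a *GatewaySubnet* already in place. All names, locations, and SKU choices are placeholders:

```azurepowershell
# 1. Get the virtual network and its gateway subnet.
$vnet = Get-AzVirtualNetwork -Name "VNet1" -ResourceGroupName "TestRG1"
$subnet = Get-AzVirtualNetworkSubnetConfig -Name "GatewaySubnet" -VirtualNetwork $vnet

# 2. Create a public IP address for the gateway.
$pip = New-AzPublicIpAddress -Name "VNet1GWIP" -ResourceGroupName "TestRG1" -Location "East US" `
  -AllocationMethod Static -Sku Standard

# 3. Create the gateway IP configuration that ties the subnet and public IP together.
$ipconf = New-AzVirtualNetworkGatewayIpConfig -Name "gwipconfig1" -SubnetId $subnet.Id -PublicIpAddressId $pip.Id

# 4. Create the VPN gateway. An *AZ SKU such as VpnGw2AZ deploys a zone-redundant gateway;
#    gateway creation can take 45 minutes or more.
New-AzVirtualNetworkGateway -Name "VNet1GW" -ResourceGroupName "TestRG1" -Location "East US" `
  -IpConfigurations $ipconf -GatewayType Vpn -VpnType RouteBased -GatewaySku VpnGw2AZ
```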
## <a name="gwsku"></a>Gateway SKUs
When you create a virtual network gateway, you specify the gateway SKU that you
[!INCLUDE [Aggregated throughput by SKU](../../includes/vpn-gateway-table-gwtype-aggtput-include.md)] (*) If you need more than 100 S2S VPN tunnels, use [Virtual WAN](../virtual-wan/virtual-wan-about.md) instead of VPN Gateway.
-## <a name="availability"></a>Availability Zones
-
-VPN gateways can be deployed in Azure Availability Zones. This brings resiliency, scalability, and higher availability to virtual network gateways. Deploying gateways in Azure Availability Zones physically and logically separates gateways within a region, while protecting your on-premises network connectivity to Azure from zone-level failures. See [About zone-redundant virtual network gateways in Azure Availability Zones](about-zone-redundant-vnet-gateways.md).
- ## <a name="pricing"></a>Pricing [!INCLUDE [vpn-gateway-about-pricing-include](../../includes/vpn-gateway-about-pricing-include.md)] For more information about gateway SKUs for VPN Gateway, see [Gateway SKUs](vpn-gateway-about-vpn-gateway-settings.md#gwsku).
-## <a name="faq"></a>FAQ
+## <a name="new"></a>What's new?
-For frequently asked questions about VPN gateway, see the [VPN Gateway FAQ](vpn-gateway-vpn-faq.md).
+Azure VPN Gateway is updated regularly. To stay current with the latest announcements, see the [What's new?](whats-new.md) article. The article highlights the following points of interest:
-## <a name="new"></a>What's new?
+* Recent releases
+* Previews underway with known limitations (if applicable)
+* Known issues
+* Deprecated functionality (if applicable)
+
+You can also subscribe to the RSS feed and view the latest VPN Gateway feature updates on the [Azure Updates](https://azure.microsoft.com/updates/?category=networking&query=VPN%20Gateway) page.
-Subscribe to the RSS feed and view the latest VPN Gateway feature updates on the [Azure Updates](https://azure.microsoft.com/updates/?category=networking&query=VPN%20Gateway) page.
+## <a name="faq"></a>FAQ
+
+For frequently asked questions about VPN gateway, see the [VPN Gateway FAQ](vpn-gateway-vpn-faq.md).
## Next steps -- [Tutorial: Create and manage a VPN Gateway](tutorial-create-gateway-portal.md).-- [Learn module: Introduction to Azure VPN Gateway](/training/modules/intro-to-azure-vpn-gateway).-- [Learn module: Connect your on-premises network to Azure with VPN Gateway](/training/modules/connect-on-premises-network-with-vpn-gateway/).-- [Subscription and service limits](../azure-resource-manager/management/azure-subscription-service-limits.md#vpn-gateway-limits).
+* [Tutorial: Create and manage a VPN Gateway](tutorial-create-gateway-portal.md).
+* [Learn module: Introduction to Azure VPN Gateway](/training/modules/intro-to-azure-vpn-gateway).
+* [Learn module: Connect your on-premises network to Azure with VPN Gateway](/training/modules/connect-on-premises-network-with-vpn-gateway/).
+* [Subscription and service limits](../azure-resource-manager/management/azure-subscription-service-limits.md#vpn-gateway-limits).
web-application-firewall Waf Front Door Drs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/waf-front-door-drs.md
Previously updated : 10/25/2022 Last updated : 02/29/2024 # Web Application Firewall DRS rule groups and rules
-Azure Web Application Firewall on Azure Front Door protects web applications from common vulnerabilities and exploits. Azure-managed rule sets provide an easy way to deploy protection against a common set of security threats. Because such rule sets are managed by Azure, the rules are updated as needed to protect against new attack signatures.
+Azure Web Application Firewall on Azure Front Door protects web applications from common vulnerabilities and exploits. Azure-managed rule sets provide an easy way to deploy protection against a common set of security threats. Because Azure manages these rule sets, the rules are updated as needed to protect against new attack signatures.
The Default Rule Set (DRS) also includes the Microsoft Threat Intelligence Collection rules that are written in partnership with the Microsoft Intelligence team to provide increased coverage, patches for specific vulnerabilities, and better false positive reduction.
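As a rough sketch, assuming the `Az.FrontDoor` PowerShell module, a Front Door Premium profile, and placeholder names, assigning a managed rule set to a new WAF policy can look like this:

```azurepowershell
# Build a managed rule set object for the Microsoft Default Rule Set (version is an assumption; check
# the versions available to your profile).
$drs = New-AzFrontDoorWafManagedRuleObject -Type "Microsoft_DefaultRuleSet" -Version "2.1"

# Create a WAF policy in Prevention mode with the managed rule set attached.
New-AzFrontDoorWafPolicy -Name "ExamplePolicy" -ResourceGroupName "TestRG1" `
  -Sku "Premium_AzureFrontDoor" -Mode "Prevention" -EnabledState Enabled -ManagedRule $drs
```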
Custom rules are always applied before rules in the DRS are evaluated. If a requ
The Microsoft Threat Intelligence Collection rules are written in partnership with the Microsoft Threat Intelligence team to provide increased coverage, patches for specific vulnerabilities, and better false positive reduction.
-Some of the built-in DRS rules are disabled by default because they've been replaced by newer rules in the Microsoft Threat Intelligence Collection rules. For example, rule ID 942440, *SQL Comment Sequence Detected*, has been disabled and replaced by the Microsoft Threat Intelligence Collection rule 99031002. The replaced rule reduces the risk of false positive detections from legitimate requests.
+By default, the Microsoft Threat Intelligence Collection rules replace some of the built-in DRS rules, causing them to be disabled. For example, rule ID 942440, *SQL Comment Sequence Detected*, has been disabled and replaced by the Microsoft Threat Intelligence Collection rule 99031002. The replaced rule reduces the risk of false positive detections from legitimate requests.
### <a name="anomaly-scoring-mode"></a>Anomaly scoring
When you configure your WAF, you can decide how the WAF handles requests that ex
For example, if the anomaly score is 5 or greater on a request, and the WAF is in Prevention mode with the anomaly score action set to Block, the request is blocked. If the anomaly score is 5 or greater on a request, and the WAF is in Detection mode, the request is logged but not blocked.
-A single *Critical* rule match is enough for the WAF to block a request when in Prevention mode with the anomaly score action set to Block because the overall anomaly score is 5. However, one *Warning* rule match only increases the anomaly score by 3, which isn't enough by itself to block the traffic. When an anomaly rule is triggered, it shows a "matched" action in the logs. If the anomaly score is 5 or greater, there will be a separate rule triggered with the anomaly score action configured for the rule set. Default anomaly score action is Block, which results in a log entry with the action `blocked`.
+A single *Critical* rule match is enough for the WAF to block a request when in Prevention mode with the anomaly score action set to Block, because the overall anomaly score is 5. However, one *Warning* rule match only increases the anomaly score by 3, which isn't enough by itself to block the traffic. When an anomaly rule is triggered, it shows a "matched" action in the logs. If the anomaly score is 5 or greater, a separate rule is triggered with the anomaly score action configured for the rule set. The default anomaly score action is Block, which results in a log entry with the action `blocked`.
When your WAF uses an older version of the Default Rule Set (before DRS 2.0), your WAF runs in the traditional mode. Traffic that matches any rule is considered independently of any other rule matches. In traditional mode, you don't have visibility into the complete set of rules that a specific request matched.