Updates from: 01/17/2023 02:07:53
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Self Service Sign Up User Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/self-service-sign-up-user-flow.md
Previously updated : 01/06/2023 Last updated : 01/16/2023
Before you can add a self-service sign-up user flow to your applications, you ne
1. Select **User settings**, and then under **External users**, select **Manage external collaboration settings**.
1. Set the **Enable guest self-service sign up via user flows** toggle to **Yes**.
- ![Enable guest self-service sign-up](media/self-service-sign-up-user-flow/enable-self-service-sign-up.png)
+ :::image type="content" source="media/self-service-sign-up-user-flow/enable-self-service-sign-up.png" alt-text="Screenshot of the enable guest self-service sign up toggle.":::
+ 5. Select **Save**.

## Create the user flow for self-service sign-up
Next, you'll create the user flow for self-service sign-up and add it to an appl
3. In the left menu, select **External Identities**.
4. Select **User flows**, and then select **New user flow**.
- ![Add a new user flow button](media/self-service-sign-up-user-flow/new-user-flow.png)
+ :::image type="content" source="media/self-service-sign-up-user-flow/new-user-flow.png" alt-text="Screenshot of the new user flow button.":::
5. Select the user flow type (for example, **Sign up and sign in**), and then select the version (**Recommended** or **Preview**).
-6. On the **Create** page, enter a **Name** for the user flow. Note that the name is automatically prefixed with **B2X_1_**.
+6. On the **Create** page, enter a **Name** for the user flow. The name is automatically prefixed with **B2X_1_**.
7. In the **Identity providers** list, select one or more identity providers that your external users can use to log into your application. **Azure Active Directory Sign up** is selected by default. (See [Before you begin](#before-you-begin) earlier in this article to learn how to add identity providers.)
-8. Under **User attributes**, choose the attributes you want to collect from the user. For additional attributes, select **Show more**. For example, select **Show more**, and then choose attributes and claims for **Country/Region**, **Display Name**, and **Postal Code**. Select **OK**.
+8. Under **User attributes**, choose the attributes you want to collect from the user. For more attributes, select **Show more**. For example, select **Show more**, and then choose attributes and claims for **Country/Region**, **Display Name**, and **Postal Code**. Select **OK**.
- ![Create a new user flow page](media/self-service-sign-up-user-flow/create-user-flow.png)
+ :::image type="content" source="media/self-service-sign-up-user-flow/create-user-flow.png" alt-text="Screenshot of the new user flow creation page.":::
-> [!NOTE]
-> You can only collect attributes when a user signs up for the first time. After a user signs up, they will no longer be prompted to collect attribute information, even if you change the user flow.
+ > [!NOTE]
+ > You can only collect attributes when a user signs up for the first time. After a user signs up, they will no longer be prompted to collect attribute information, even if you change the user flow.
8. Select **Create**.
9. The new user flow appears in the **User flows** list. If necessary, refresh the page.
You can choose the order in which the attributes are displayed on the sign-up page.
2. Select **External Identities**, and then select **User flows**.
3. Select the self-service sign-up user flow from the list.
4. Under **Customize**, select **Page layouts**.
-5. The attributes you chose to collect are listed. To change the order of display, select an attribute, and then select **Move up**, **Move down**, **Move to the top**, or **Move to the bottom**.
+5. The attributes you chose to collect are listed. To change the order of display, select an attribute, and then select **Move up**, **Move down**, **Move to top**, or **Move to bottom**.
6. Select **Save**.

## Add applications to the self-service sign-up user flow
Now you'll associate applications with the user flow to enable sign-up for those
6. In the left menu, under **Use**, select **Applications**.
7. Select **Add application**.
- ![Assign an application to the user flow](media/self-service-sign-up-user-flow/assign-app-to-user-flow.png)
+ :::image type="content" source="media/self-service-sign-up-user-flow/assign-app-to-user-flow.png" alt-text="Screenshot of adding an application to the user flow.":::
8. Select the application from the list. Or use the search box to find the application, and then select it.
9. Click **Select**.
Now you'll associate applications with the user flow to enable sign-up for those
- [Add Facebook to your list of social identity providers](facebook-federation.md)
- [Use API connectors to customize and extend your user flows via web APIs](api-connectors-overview.md)
- [Add custom approval workflow to your user flow](self-service-sign-up-add-approvals.md)
-- [Learn more about initiating an OAuth 2.0 authorization code flow](../develop/v2-oauth2-auth-code-flow.md#request-an-authorization-code)
automation Automation Solution Vm Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-solution-vm-management.md
# Start/Stop VMs during off-hours overview

> [!NOTE]
-> Start/Stop VM during off-hours, version 1 is going to retire soon by CY23 and is unavailable in the marketplace now. We recommend that you start using [version 2](/articles/azure-functions/start-stop-vms/overview.md), which is now generally available. The new version offers all existing capabilities and provides new features, such as multi-subscription support from a single Start/Stop instance. If you have the version 1 solution already deployed, you can still use the feature, and we will provide support until retirement in CY23. The details on announcement will be shared soon.
+> Start/Stop VM during off-hours, version 1, will be retired by CY23 and is no longer available in the marketplace. We recommend that you start using [version 2](https://learn.microsoft.com/azure/azure-functions/start-stop-vms/overview), which is now generally available. The new version offers all existing capabilities and provides new features, such as multi-subscription support from a single Start/Stop instance. If you have the version 1 solution already deployed, you can still use the feature, and we will provide support until retirement in CY23. Details about the retirement will be announced soon.
The Start/Stop VMs during off-hours feature starts or stops enabled Azure VMs. It starts or stops machines on user-defined schedules, provides insights through Azure Monitor logs, and sends optional emails by using [action groups](../azure-monitor/alerts/action-groups.md). The feature can be enabled on both Azure Resource Manager and classic VMs for most scenarios.
azure-arc Validation Program https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/validation-program.md
The following providers and their corresponding Kubernetes distributions have su
| Canonical | [Charmed Kubernetes](https://ubuntu.com/kubernetes) | [1.24](https://ubuntu.com/kubernetes/docs/1.24/components) |
| SUSE Rancher | [Rancher Kubernetes Engine](https://rancher.com/products/rke/) | RKE CLI version: [v1.3.13](https://github.com/rancher/rke/releases/tag/v1.3.13); Kubernetes versions: 1.24.2, 1.23.8 |
| Nutanix | [Nutanix Kubernetes Engine](https://www.nutanix.com/products/kubernetes-engine) | Version [2.5](https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Kubernetes-Engine-v2_5:Nutanix-Kubernetes-Engine-v2_5); upstream K8s v1.23.11 |
-| Platform9 | [Platform9 Managed Kubernetes (PMK)](https://platform9.com/managed-kubernetes/) | PMK Version [5.3.0](https://platform9.com/docs/kubernetes/release-notes#platform9-managed-kubernetes-version-53-release-notes); Kubernetes versions: v1.20.5, v1.19.6, v1.18.10 |
| Kublr | [Kublr Managed K8s](https://kublr.com/managed-kubernetes/) Distribution | Upstream K8s Version: 1.22.10 <br> Upstream K8s Version: 1.21.3 |
| Mirantis | [Mirantis Kubernetes Engine](https://www.mirantis.com/software/mirantis-kubernetes-engine/) | MKE Version [3.6.0](https://docs.mirantis.com/mke/3.6/release-notes/3-6-0.html) <br> MKE Version [3.5.5](https://docs.mirantis.com/mke/3.5/release-notes/3-5-5.html) <br> MKE Version [3.4.7](https://docs.mirantis.com/mke/3.4/release-notes/3-4-7.html) |
| Wind River | [Wind River Cloud Platform](https://www.windriver.com/studio/operator/cloud-platform) | Wind River Cloud Platform 22.06; Upstream K8s version: 1.23.1 <br>Wind River Cloud Platform 21.12; Upstream K8s version: 1.21.8 <br>Wind River Cloud Platform 21.05; Upstream K8s version: 1.18.1 |
azure-functions Create First Function Vs Code Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-vs-code-csharp.md
Title: "Create a C# function using Visual Studio Code - Azure Functions" description: "Learn how to create a C# function, then publish the local project to serverless hosting in Azure Functions using the Azure Functions extension in Visual Studio Code. " Previously updated : 11/08/2022 Last updated : 01/05/2023 ms.devlang: csharp
-adobe-target: true
-adobe-target-activity: DocsExp–386541–A/B–Enhanced-Readability-Quickstarts–2.19.2021
-adobe-target-experience: Experience B
-adobe-target-content: ./create-first-function-vs-code-csharp-ieux
# Quickstart: Create a C# function in Azure using Visual Studio Code
-This article shows you how to create C# functions that run on .NET 6 [in the same process as the Functions host](functions-dotnet-class-library.md). These _in-process_ C# functions are only supported on [Long Term Support (LTS)](https://dotnet.microsoft.com/en-us/platform/support/policy/dotnet-core) .NET versions, such as .NET 6. When creating your project, you can choose to instead create a function that runs on .NET 6 in an [isolated worker process](dotnet-isolated-process-guide.md). [Isolated worker process](dotnet-isolated-process-guide.md) supports both LTS and Standard Term Support (STS) versions of .NET. For more information, see [Supported versions](dotnet-isolated-process-guide.md#supported-versions) in the .NET Functions isolated worker process guide. There's also a [CLI-based version](create-first-function-cli-csharp.md) of this article.
+This article creates an HTTP triggered function that runs on .NET. For information about .NET versions supported for C# functions, see [Supported versions](dotnet-isolated-process-guide.md#supported-versions).
+
+There's also a [CLI-based version](create-first-function-cli-csharp.md) of this article.
Completing this quickstart incurs a small cost of a few USD cents or less in your Azure account.
In this section, you use Visual Studio Code to create a local Azure Functions pr
:::image type="content" source="./media/functions-create-first-function-vs-code/create-new-project.png" alt-text="Screenshot of create a new project window.":::

1. Select the directory location for your project workspace and choose **Select**. You should either create a new folder or choose an empty folder for the project workspace. Don't choose a project folder that is already part of a workspace.
+
+1. For **Select a language**, choose `C#`.
+
+1. For **Select a .NET runtime**, choose one of the following options:
+
+ | .NET runtime | Process model | Description |
+ | --- | --- | --- |
+ | **.NET 6.0 (LTS)** | [In-process](functions-dotnet-class-library.md) | _In-process_ C# functions are only supported on [Long Term Support (LTS)](https://dotnet.microsoft.com/en-us/platform/support/policy/dotnet-core) .NET versions. Function code runs in the same process as the Functions host. |
+ | **.NET 6.0 Isolated (LTS)** | [Isolated worker process](dotnet-isolated-process-guide.md) | Functions run on .NET 6, but in a separate process from the Functions host. |
+ | **.NET 7.0 Isolated** | [Isolated worker process](dotnet-isolated-process-guide.md) | Because .NET 7 isn't an LTS version of .NET, your functions must run in an isolated process on .NET 7. |
+ | **.NET Framework Isolated** | [Isolated worker process](dotnet-isolated-process-guide.md) | Choose this option when your functions need to use libraries only supported on the .NET Framework. |
-1. Provide the following information at the prompts:
+ The two process models use different APIs, and each process model uses a different template when generating the function project code. If you don't see these options, press F1 and type `Preferences: Open user settings`, then search for `Azure Functions: Project Runtime` and make sure that the default runtime version is set to `~4`.
- # [In-process](#tab/in-process)
+1. Provide the remaining information at the prompts:
|Prompt|Selection|
|--|--|
- |**Select a language**|Choose `C#`.|
- |**Select a .NET runtime** | Select `.NET 6`.|
|**Select a template for your project's first function**|Choose `HTTP trigger`.|
|**Provide a function name**|Type `HttpExample`.|
|**Provide a namespace** | Type `My.Functions`. |
|**Authorization level**|Choose `Anonymous`, which enables anyone to call your function endpoint. To learn about authorization level, see [Authorization keys](functions-bindings-http-webhook-trigger.md#authorization-keys).|
|**Select how you would like to open your project**|Select `Add to workspace`.|
- # [Isolated process](#tab/isolated-process)
-
- |Prompt|Selection|
- |--|--|
- |**Select a language**|Choose `C#`.|
- | **Select a .NET runtime** | Choose `.NET 6 Isolated`.|
- |**Select a template for your project's first function**|Choose `HTTP trigger`.|
- |**Provide a function name**|Type `HttpExample`.|
- |**Provide a namespace** | Type `My.Functions`. |
- |**Authorization level**|Choose `Anonymous`, which enables anyone to call your function endpoint. To learn about authorization level, see [Authorization keys](functions-bindings-http-webhook-trigger.md#authorization-keys).|
- |**Select how you would like to open your project**|Choose `Add to workspace`.|
-
-
-
- > [!NOTE]
- > If you don't see .NET 6 as a runtime option, check the following:
- >
- > + Make sure you have installed the .NET 6.0 SDK or other available .NET SDK versions, from .NET website [here](https://dotnet.microsoft.com/download).
- > + Press F1 and type `Preferences: Open user settings`, then search for `Azure Functions: Project Runtime` and change the default runtime version to `~4`.
-
1. Visual Studio Code uses the provided information and generates an Azure Functions project with an HTTP trigger. You can view the local project files in the Explorer. For more information about the files that are created, see [Generated project files](functions-develop-vs-code.md?tabs=csharp#generated-project-files).

[!INCLUDE [functions-run-function-test-local-vs-code-csharp](../../includes/functions-run-function-test-local-vs-code-csharp.md)]
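As noted above, the two process models surface different APIs in the generated code. The following is a minimal sketch of how the same HTTP trigger differs between them; it assumes the standard template shape, and the method bodies and response text are illustrative rather than the exact generated code.

```cs
// In-process model: uses Microsoft.Azure.WebJobs and binds to ASP.NET Core types.
[FunctionName("HttpExample")]
public static IActionResult RunInProcess(
    [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post")] HttpRequest req,
    ILogger log)
{
    log.LogInformation("C# HTTP trigger function processed a request.");
    return new OkObjectResult("Welcome to Azure Functions!");
}

// Isolated worker model: uses Microsoft.Azure.Functions.Worker types instead.
[Function("HttpExample")]
public static HttpResponseData RunIsolated(
    [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post")] HttpRequestData req)
{
    var response = req.CreateResponse(HttpStatusCode.OK);
    response.WriteString("Welcome to Azure Functions!");
    return response;
}
```

Note that even the attribute names differ: `[FunctionName]` in the in-process model versus `[Function]` in the isolated worker model.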
After checking that the function runs correctly on your local computer, it's tim
You have used [Visual Studio Code](functions-develop-vs-code.md?tabs=csharp) to create a function app with a simple HTTP-triggered function. In the next article, you expand that function by connecting to either Azure Cosmos DB or Azure Queue Storage. To learn more about connecting to other Azure services, see [Add bindings to an existing function in Azure Functions](add-bindings-existing-function.md?tabs=csharp).
+The next article depends on your chosen process model.
+
# [In-process](#tab/in-process)

> [!div class="nextstepaction"]
azure-functions Functions Bindings Signalr Service Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-signalr-service-input.md
Title: Azure Functions SignalR Service input binding description: Learn to return a SignalR service endpoint URL and access token in Azure Functions.+ ms.devlang: csharp, java, javascript, python Previously updated : 03/04/2022 Last updated : 01/13/2023+ zone_pivot_groups: programming-languages-set-functions-lang-workers
public static SignalRConnectionInfo Negotiate(
# [Isolated process](#tab/isolated-process)
-The following example shows a SignalR trigger that reads a message string from one hub using a SignalR trigger and writes it to a second hub using an output binding. The data required to connect to the output binding is obtained as a `MyConnectionInfo` object from an input binding defined using a `SignalRConnectionInfo` attribute.
+The following example shows a SignalR trigger that reads a message string from one hub using a SignalR trigger and writes it to a second hub using an output binding. The data required to connect to the output binding is obtained as a `MyConnectionInfo` object from an input binding defined using a `SignalRConnectionInfo` attribute.
-
-The `MyConnectionInfo` and `MyMessage` classes are defined as follows:
- # [C# Script](#tab/csharp-script)
public static SignalRConnectionInfo Run(HttpRequest req, SignalRConnectionInfo c
```

The following example shows a SignalR connection info input binding in a *function.json* file and a function that uses the binding to return the connection information.
module.exports = async function (context, req, connectionInfo) {
};
```
-
+
Complete PowerShell examples are pending.

The following example shows a SignalR connection info input binding in a *function.json* file and a [Python function](functions-reference-python.md) that uses the binding to return the connection information.
def main(req: func.HttpRequest, connectionInfoJson: str) -> func.HttpResponse:
)
```

::: zone pivot="programming-language-java"

The following example shows a [Java function](functions-reference-java.md) that acquires SignalR connection information using the input binding and returns it over HTTP.
public SignalRConnectionInfo negotiate(
}
```

## Usage
You can set the `UserId` property of the binding to the value from either header
```cs
[FunctionName("negotiate")]
public static SignalRConnectionInfo Negotiate(
- [HttpTrigger(AuthorizationLevel.Anonymous)]HttpRequest req,
+ [HttpTrigger(AuthorizationLevel.Anonymous)]HttpRequest req,
[SignalRConnectionInfo (HubName = "chat", UserId = "{headers.x-ms-client-principal-id}")] SignalRConnectionInfo connectionInfo)
public static SignalRConnectionInfo Negotiate(
# [Isolated process](#tab/isolated-process)
-Sample code not available for the isolated worker process.
+```cs
+[Function("Negotiate")]
+public static string Negotiate([HttpTrigger(AuthorizationLevel.Anonymous)] HttpRequestData req,
+ [SignalRConnectionInfoInput(HubName = "serverless", UserId = "{headers.x-ms-client-principal-id}")] string connectionInfo)
+{
+ // The serialization of the connection info object is done by the framework. It should be camel case. The SignalR client respects the camel case response only.
+ return connectionInfo;
+}
+```
# [C# Script](#tab/csharp-script)
public static SignalRConnectionInfo Run(HttpRequest req, SignalRConnectionInfo c
::: zone-end

::: zone pivot="programming-language-java"
-SignalR trigger isn't currently supported for Java.
-
+```java
+@FunctionName("negotiate")
+public SignalRConnectionInfo negotiate(
+ @HttpTrigger(
+ name = "req",
+ methods = { HttpMethod.POST, HttpMethod.GET },
+ authLevel = AuthorizationLevel.ANONYMOUS)
+ HttpRequestMessage<Optional<String>> req,
+ @SignalRConnectionInfoInput(name = "connectionInfo", hubName = "simplechat", userId = "{headers.x-ms-signalr-userid}") SignalRConnectionInfo connectionInfo) {
+ return connectionInfo;
+}
+```
+ You can set the `userId` property of the binding to the value from either header using a [binding expression](./functions-bindings-expressions-patterns.md): `{headers.x-ms-client-principal-id}` or `{headers.x-ms-client-principal-name}`.
Here's binding data in the *function.json* file:
```

::: zone-end

Here's the JavaScript code:

```javascript
module.exports = async function (context, req, connectionInfo) {
};
```
-
+
Complete PowerShell examples are pending.

Here's the Python code:
def main(req: func.HttpRequest, connectionInfo: str) -> func.HttpResponse:
)
```

::: zone pivot="programming-language-java"

You can set the `userId` property of the binding to the value from either header using a [binding expression](./functions-bindings-expressions-patterns.md): `{headers.x-ms-client-principal-id}` or `{headers.x-ms-client-principal-name}`.
The following table explains the binding configuration properties that you set i
-
+
## Annotations

The following table explains the supported settings for the `SignalRConnectionInfoInput` annotation.
The following table explains the supported settings for the `SignalRConnectionIn
|**userId**| Optional: The value of the user identifier claim to be set in the access key token. |
|**connectionStringSetting**| The name of the app setting that contains the SignalR Service connection string, which defaults to `AzureSignalRConnectionString`. |

## Configuration

The following table explains the binding configuration properties that you set in the *function.json* file.
The following table explains the binding configuration properties that you set i
## Next steps

- [Handle messages from SignalR Service (Trigger binding)](./functions-bindings-signalr-service-trigger.md)
-- [Send SignalR Service messages (Output binding)](./functions-bindings-signalr-service-output.md)
+- [Send SignalR Service messages (Output binding)](./functions-bindings-signalr-service-output.md)
azure-functions Functions Bindings Signalr Service Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-signalr-service-output.md
Title: Azure Functions SignalR Service output binding description: Learn about the SignalR Service output binding for Azure Functions.+ ms.devlang: csharp, java, javascript, python Previously updated : 03/04/2022 Last updated : 01/13/2023+ zone_pivot_groups: programming-languages-set-functions-lang-workers
public static Task SendMessage(
The following example shows a function that sends a message using the output binding to all connected clients. The *newMessage* is the name of the method to be invoked on each client.

# [C# Script](#tab/csharp-script)
public static Task SendMessage(
# [Isolated process](#tab/isolated-process)

# [C# Script](#tab/csharp-script)
public static Task SendMessage(
```

# [Isolated process](#tab/isolated-process)

# [C# Script](#tab/csharp-script)
public static Task AddToGroup(
# [Isolated process](#tab/isolated-process)
-Specify `SignalRGroupActionType` to add or remove a member. The following example adds a user to a group.
+Specify `SignalRGroupActionType` to add or remove a member. The following example removes a user from a group.
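The sample itself isn't reproduced in this digest. As a rough sketch only, a group-remove function under the isolated worker model could look like the following; it assumes the `Microsoft.Azure.Functions.Worker.Extensions.SignalRService` package, and the hub, group, and user names are placeholders.

```cs
[Function("removeFromGroup")]
[SignalROutput(HubName = "chat")]
public static SignalRGroupAction RemoveFromGroup(
    [HttpTrigger(AuthorizationLevel.Anonymous, "post")] HttpRequestData req)
{
    // SignalRGroupActionType selects between adding and removing the member.
    return new SignalRGroupAction(SignalRGroupActionType.Remove)
    {
        GroupName = "myGroup",  // placeholder group name
        UserId = "userName"     // placeholder user ID
    };
}
```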
# [C# Script](#tab/csharp-script)
azure-functions Functions Bindings Signalr Service Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-signalr-service-trigger.md
Title: Azure Functions SignalR Service trigger binding description: Learn to send SignalR Service messages from Azure Functions.-+ ms.devlang: csharp, javascript, python Previously updated : 11/29/2021- Last updated : 01/13/2023+ zone_pivot_groups: programming-languages-set-functions-lang-workers
public static async Task Run([SignalRTrigger("SignalRTest", "messages", "SendMes
# [Isolated process](#tab/isolated-process)
-The following example shows a SignalR trigger that reads a message string from one hub using a SignalR trigger and writes it to a second hub using an output binding. The data required to connect to the output binding is obtained as a `MyConnectionInfo` object from an input binding defined using a `SignalRConnectionInfo` attribute.
+The following sample shows a C# function that receives a message event from clients and logs the message content.
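(The sample body is omitted from this digest; below is a minimal sketch under the isolated worker model, assuming the `Microsoft.Azure.Functions.Worker.Extensions.SignalRService` package, with illustrative hub and parameter names.)

```cs
[Function("SendMessage")]
public static void SendMessage(
    [SignalRTrigger("SignalRTest", "messages", "SendMessage", "message")] SignalRInvocationContext invocationContext,
    string message,
    FunctionContext functionContext)
{
    // Log the received message together with the sending connection's ID.
    var logger = functionContext.GetLogger("SendMessage");
    logger.LogInformation($"Receive {message} from {invocationContext.ConnectionId}.");
}
```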
-The `MyConnectionInfo` and `MyMessage` classes are defined as follows:
- # [C# Script](#tab/csharp-script)
public static void Run(InvocationContext invocation, string message, ILogger log
::: zone-end

::: zone pivot="programming-language-java"
-SignalR trigger isn't currently supported for Java.
-
+SignalR trigger isn't currently supported for Java.
+
Here's binding data in the *function.json* file:
module.exports = async function (context, invocation) {
context.log(`Receive ${context.bindingData.message} from ${invocation.ConnectionId}.`)
};
```
-
+
Complete PowerShell examples are pending.

Here's the Python code:
def main(invocation) -> None:
invocation_json = json.loads(invocation)
logging.info("Receive {0} from {1}".format(invocation_json['Arguments'][0], invocation_json['ConnectionId']))
```
-
+
::: zone pivot="programming-language-csharp"

## Attributes
The following table explains the properties of the `SignalRTrigger` attribute.
C# script uses a function.json file for configuration instead of attributes.
-The following table explains the binding configuration properties for C# script that you set in the *function.json* file.
+The following table explains the binding configuration properties for C# script that you set in the *function.json* file.
|function.json property |Description|
||--|
The following table explains the binding configuration properties for C# script
-
+
## Annotations
-There isn't currently a supported Java annotation for a SignalR trigger.
-
+There isn't currently a supported Java annotation for a SignalR trigger.
+
## Configuration

The following table explains the binding configuration properties that you set in the *function.json* file.
The following table explains the binding configuration properties that you set i
|**parameterNames**| (Optional) A list of names that binds to the parameters. |
|**connectionStringSetting**| The name of the app setting that contains the SignalR Service connection string, which defaults to `AzureSignalRConnectionString`. |

See the [Example section](#example) for complete examples.
-## Usage
+## Usage
### Payloads
Say you have a JavaScript SignalR client trying to invoke method `broadcast` in
await connection.invoke("broadcast", message1, message2);
```
-After you set `parameterNames`, the names you defined correspond to the arguments sent on the client side.
+After you set `parameterNames`, the names you defined correspond to the arguments sent on the client side.
```cs
[SignalRTrigger(parameterNames: new string[] {"arg1", "arg2"})]
```
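For context, here's a sketch of how those parameter names then bind to method parameters in an in-process function (the hub, category, and event names are illustrative):

```cs
[FunctionName("SignalRTest")]
public static void Run(
    [SignalRTrigger("SignalRTest", "messages", "broadcast", parameterNames: new string[] {"arg1", "arg2"})] InvocationContext invocationContext,
    string arg1,
    string arg2,
    ILogger logger)
{
    // arg1 and arg2 receive the two arguments the client passed to connection.invoke.
    logger.LogInformation($"Receive {arg1}, {arg2} from {invocationContext.ConnectionId}.");
}
```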
azure-functions Functions Create Your First Function Visual Studio Uiex https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-create-your-first-function-visual-studio-uiex.md
- Title: "Quickstart: Create your first function in Azure using Visual Studio"
-description: In this quickstart, you learn how to create and publish an HTTP trigger Azure Function by using Visual Studio.
- Previously updated : 11/8/2022---
-# Quickstart: Create your first function in Azure using Visual Studio
-
-In this article, you use Visual Studio to create a C# class library-based function that responds to HTTP requests. After testing the code locally, you deploy it to the <abbr title="A runtime computing environment in which all the details of the server are transparent to application developers, which simplifies the process of deploying and managing code.">serverless</abbr> environment of <abbr title="An Azure service that provides a low-cost serverless computing environment for applications.">Azure Functions</abbr>.
-
-Completing this quickstart incurs a small cost of a few USD cents or less in your <abbr title="The profile that maintains billing information for Azure usage.">Azure account</abbr>.
-
-## 1. Prepare your environment
-
-+ Create an Azure <abbr title="The profile that maintains billing information for Azure usage.">account</abbr> with an active <abbr title="The basic organizational structure in which you manage resources in Azure, typically associated with an individual or department within an organization.">subscription</abbr>. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).
-
-+ Install [Visual Studio 2019](https://azure.microsoft.com/downloads/) and select the **Azure development** workload during installation.
-
-<br/>
-<details>
-<summary><strong>Use an Azure Functions project instead</strong></summary>
-If you want to create an <abbr title="A logical container for one or more individual functions that can be deployed and managed together.">Azure Functions project</abbr> by using Visual Studio 2017 instead, you must first install the [latest Azure Functions tools](functions-develop-vs.md#check-your-tools-version).
-</details>
-
-## 2. Create a function app project
-
-1. From the Visual Studio menu, select **File** > **New** > **Project**.
-
-1. In **Create a new project**, enter *functions* in the search box, choose the **Azure Functions** template, and then select **Next**.
-
-1. In **Configure your new project**, enter a **<abbr title="The function app name must be valid as a C# namespace, so don't use underscores, hyphens, or any other nonalphanumeric characters.">Project name</abbr>** for your project, and then select **Create**.
-
-1. Provide the following information for the **Create a new Azure Functions application** settings:
-
- + Select **<abbr title=" This value creates a function project that uses the version 3.x runtime of Azure Functions, which supports .NET Core 3.x. Azure Functions 1.x supports the .NET Framework.">Azure Functions v3 (.NET Core)</abbr>** from the Functions runtime dropdown. (For more information, see [Azure Functions runtime versions overview](functions-versions.md).)
-
- + Select **<abbr title="This value creates a function triggered by an HTTP request.">HTTP trigger</abbr>** as the function template.
-
- + Select **<abbr title="Because an Azure Function requires a storage account, one is assigned or created when you publish your project to Azure. An HTTP trigger doesn't use an Azure Storage account connection string; all other trigger types require a valid Azure Storage account connection string.">Storage emulator</abbr>** from the Storage account dropdown.
-
- + Select **Anonymous** from the <abbr title="The created function can be triggered by any client without providing a key. This authorization setting makes it easy to test your new function.">Authorization level</abbr> dropdown. (For more information about keys and authorization, see [Authorization keys](functions-bindings-http-webhook-trigger.md#authorization-keys) and [HTTP and webhook bindings](functions-bindings-http-webhook.md).)
-
- + Select **Create**
-
-## 3. Rename the function
-
-The `FunctionName` method attribute sets the name of the function, which by default is generated as `Function1`. Because the tooling doesn't let you override the default function name when you create your project, take a minute to create a better name for the function class, file, and metadata.
-
-1. In **File Explorer**, right-click the Function1.cs file and rename it to *HttpExample.cs*.
-
-1. In the code, rename the Function1 class to `HttpExample'.
-
-1. In the `HttpTrigger` method named `Run`, rename the `FunctionName` method attribute to `HttpExample`.
--
-## 4. Run the function locally
-
-1. To run your function, press <kbd>F5</kbd> in Visual Studio.
-
-1. Copy the URL of your function from the Azure Functions runtime output.
-
- ![Azure local runtime](../../includes/media/functions-run-function-test-local-vs/functions-debug-local-vs.png)
-
-1. Paste the URL for the HTTP request into your browser's address bar. Append the query string **?name=<YOUR_NAME>** to this URL and run the request.
-
- ![Function localhost response in the browser](../../includes/media/functions-run-function-test-local-vs/functions-run-browser-local-vs.png)
-
-1. To stop debugging, press <kbd>Shift</kbd>+<kbd>F5</kbd> in Visual Studio.
-
-<br/>
-<details>
-<summary><strong>Troubleshooting</strong></summary>
- You might need to enable a firewall exception so that the tools can handle HTTP requests. Authorization levels are never enforced when you run a function locally.
-</details>
-
-## 5. Publish the project to Azure
-
-1. In **Solution Explorer**, right-click the project and select **Publish**.
-
-1. In **Target**, select **Azure**
-
- :::image type="content" source="../../includes/media/functions-vstools-publish/functions-visual-studio-publish-profile-step-1.png" alt-text="Select Azure target":::
-
-1. In **Specific target**, select **Azure Function App (Windows)**
-
- :::image type="content" source="../../includes/media/functions-vstools-publish/functions-visual-studio-publish-profile-step-2.png" alt-text="Select Azure Function App":::
-
-1. In **Function Instance**, select **Create a new Azure Function...** and then use the values specified in the following:
-
- + For **Name** provide a <abbr title="Use a name that uniquely identifies your new function app. Accept this name or enter a new name. Valid characters are: `a-z`, `0-9`, and `-`..">Globally unique name</abbr>
-
- + **Select** a subscription from the drop-down list.
-
- + **Select** an existing <abbr title="A logical container for related Azure resources that you can manage as a unit.">resource group</abbr> from the drop-down list or choose **New** to create a new resource group.
-
- + **Select** <abbr title="When you publish your project to a function app that runs in a Consumption plan, you pay only for executions of your functions app. Other hosting plans incur higher costs.">Consumption</abbr> in the Play Type drop-down. (For more information, see [Consumption plan](consumption-plan.md).)
-
- + **Select** a <abbr title="A geographical reference to a specific Azure datacenter in which resources are allocated.See [regions](https://azure.microsoft.com/regions/) for a list of available regions.">location</abbr> from the drop-down.
-
- + **Select** an <abbr="An Azure Storage account is required by the Functions runtime. Select New to configure a general-purpose storage account. You can also choose an existing account that meets the storage account requirements.">Azure Storage</abbr> account from the drop-down
-
- ![Create App Service dialog](../../includes/media/functions-vstools-publish/functions-visual-studio-publish.png)
-
-1. Select **Create**
-
-1. In the **Functions instance**, make sure that **Run from package file** is checked.
-
- :::image type="content" source="../../includes/media/functions-vstools-publish/functions-visual-studio-publish-profile-step-4.png" alt-text="Finish profile creation":::
-
- <br/>
- <details>
- <summary><strong>What does this setting do?</strong></summary>
- When using **Run from package file**, your function app is deployed using [Zip Deploy](functions-deployment-technologies.md#zip-deploy) with [Run-From-Package](run-functions-from-deployment-package.md) mode enabled. This is the recommended deployment method for your functions project, since it results in better performance.
- </details>
-
-1. Select **Finish**.
-
-1. On the Publish page, select **Publish**.
-
-1. On the Publish page, review the root URL of the function app.
-
-1. In the Publish tab, choose **Manage in <abbr title="Cloud Explorer lets you use Visual Studio to view the contents of the site, start and stop the function app, and browse directly to function app resources on Azure and in the Azure portal.">Cloud Explorer</abbr>**.
-
- :::image type="content" source="../../includes/media/functions-vstools-publish/functions-visual-studio-publish-complete.png" alt-text="Publish success message":::
-
-
-## 6. Test your function in Azure
-
-1. In Cloud Explorer, your new function app should be selected. If not, expand your subscription, expand **App Services**, and select your new function app.
-
-1. Right-click the function app and choose **Open in Browser**. This opens the root of your function app in your default web browser and displays the page that indicates your function app is running.
-
- :::image type="content" source="media/functions-create-your-first-function-visual-studio/function-app-running-azure.png" alt-text="Function app running":::
-
-1. In the address bar in the browser, append the string **/api/HttpExample?name=Functions** to the base URL and run the request.
-
- The URL that calls your HTTP trigger function is in the following format:
-
- `http://<APP_NAME>.azurewebsites.net/api/HttpExample?name=Functions`
-
-2. Go to this URL and you see a response in the browser to the remote GET request returned by the function, which looks like the following example:
-
- :::image type="content" source="media/functions-create-your-first-function-visual-studio/functions-create-your-first-function-visual-studio-browser-azure.png" alt-text="Function response in the browser":::
-
-## 7. Clean up resources
--
-## Next steps
-
-Advance to the next article to learn how to add an <abbr title="A means to associate a function with a storage queue, so that it can create messages on the queue. Bindings are declarative connections between a function and other resources. An input binding provides data to the function; an output binding provides data from the function to other resources.">Azure Storage queue output binding</abbr> to your function:
-
-> [!div class="nextstepaction"]
-> [Add an Azure Storage queue binding to your function](functions-add-output-binding-storage-queue-vs.md)
azure-functions Functions Create Your First Function Visual Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-create-your-first-function-visual-studio.md
Title: "Quickstart: Create your first C# function in Azure using Visual Studio"
description: "In this quickstart, you learn how to use Visual Studio to create and publish a C# HTTP triggered function to Azure Functions." ms.assetid: 82db1177-2295-4e39-bd42-763f6082e796 Previously updated : 11/08/2022 Last updated : 01/05/2023 ms.devlang: csharp
-adobe-target: true
-adobe-target-activity: DocsExp–386541–A/B–Enhanced-Readability-Quickstarts–2.19.2021
-adobe-target-experience: Experience B
-adobe-target-content: ./functions-create-your-first-function-visual-studio-uiex
+
# Quickstart: Create your first C# function in Azure using Visual Studio

Azure Functions lets you use Visual Studio to create local C# function projects and then easily publish this project to run in a scalable serverless environment in Azure. If you prefer to develop your C# apps locally using Visual Studio Code, you should instead consider the [Visual Studio Code-based version](create-first-function-vs-code-csharp.md) of this article.
The Azure Functions project template in Visual Studio creates a C# class library
1. In **Create a new project**, enter *functions* in the search box, choose the **Azure Functions** template, and then select **Next**.
-1. In **Configure your new project**, enter a **Project name** for your project, and then select **Create**. The function app name must be valid as a C# namespace, so don't use underscores, hyphens, or any other nonalphanumeric characters.
+1. In **Configure your new project**, enter a **Project name** for your project, and then select **Next**. The function app name must be valid as a C# namespace, so don't use underscores, hyphens, or any other nonalphanumeric characters.
-1. For the **Additional information** settings, use the values in the following table:
-
- # [In-process](#tab/in-process)
+1. In **Additional information**, choose one of the following options for **Functions worker**:
+
+ | .NET runtime | Process model | Description |
+ | --- | --- | --- |
+ | **.NET 6.0 (Long Term Support)** | [In-process](functions-dotnet-class-library.md) | _In-process_ C# functions are only supported on [Long Term Support (LTS)](https://dotnet.microsoft.com/en-us/platform/support/policy/dotnet-core) .NET versions. Function code runs in the same process as the Functions host. |
+ | **.NET 6.0 Isolated (Long Term Support)** | [Isolated worker process](dotnet-isolated-process-guide.md) | Functions run on .NET 6, but in a separate process from the Functions host. |
+ | **.NET 7.0 Isolated** | [Isolated worker process](dotnet-isolated-process-guide.md) | Because .NET 7 isn't an LTS version of .NET, your functions must run in an isolated process on .NET 7. |
+ | **.NET Framework Isolated v4** | [Isolated worker process](dotnet-isolated-process-guide.md) | Choose this option when your functions need to use libraries only supported on the .NET Framework. |
+ | **.NET Core 3.1 (Long Term Support)** | [In-process](functions-dotnet-class-library.md) | .NET Core 3.1 is no longer a supported version of .NET and isn't supported by Functions version 4.x. Use .NET 6.0 instead. |
+ | **.NET Framework v1** | [In-process](functions-dotnet-class-library.md) | Choose this option when your functions need to use libraries only supported on older versions of .NET Framework. Requires version 1.x of the Functions runtime. |
+
+ The two process models use different APIs, and each process model uses a different template when generating the function project code. If you don't see options for .NET 6.0 and later .NET runtime versions, you may need to [update your Azure Functions tools installation](https://developercommunity.visualstudio.com/t/Sometimes-the-Visual-Studio-functions-wo/10224478?).
+1. For the remaining **Additional information** settings, use the values in the following table:
+
| Setting | Value | Description |
| --- | - |-- |
- | **Functions worker** | **.NET 6** | When you choose **.NET 6**, you create a project that runs in-process with the Azure Functions runtime. Use in-process unless you need to run your function app on .NET 7.0 or on .NET Framework 4.8 (preview). To learn more, see [Supported versions](functions-dotnet-class-library.md#supported-versions). |
| **Function** | **HTTP trigger** | This value creates a function triggered by an HTTP request. |
| **Use Azurite for runtime storage account (AzureWebJobsStorage)** | Enable | Because a function app in Azure requires a storage account, one is assigned or created when you publish your project to Azure. An HTTP trigger doesn't use an Azure Storage account connection string; all other trigger types require a valid Azure Storage account connection string. When you select this option, the [Azurite emulator](../storage/common/storage-use-azurite.md?tabs=visual-studio) is used. |
| **Authorization level** | **Anonymous** | The created function can be triggered by any client without providing a key. This authorization setting makes it easy to test your new function. For more information about keys and authorization, see [Authorization keys](./functions-bindings-http-webhook-trigger.md#authorization-keys) and [HTTP and webhook bindings](./functions-bindings-http-webhook.md). |

:::image type="content" source="../../includes/media/functions-vs-tools-create/functions-project-settings-v4.png" alt-text="Screenshot of Azure Functions project settings.":::
- # [Isolated process](#tab/isolated-process)
-
- | Setting | Value | Description |
- | | - |-- |
- | **Functions worker** | **.NET 6 Isolated** | When you choose **.NET 6 Isolated**, you create a project that runs in a separate worker process. Choose isolated worker process when you need to run your function app on .NET 7.0 or on .NET Framework 4.8 (preview). To learn more, see [Supported versions](dotnet-isolated-process-guide.md#supported-versions). |
- | **Function** | **HTTP trigger** | This value creates a function triggered by an HTTP request. |
- | **Use Azurite for runtime storage account (AzureWebJobsStorage)** | Enable | Because a function app in Azure requires a storage account, one is assigned or created when you publish your project to Azure. An HTTP trigger doesn't use an Azure Storage account connection string; all other trigger types require a valid Azure Storage account connection string. When you select this option, the [Azurite emulator](../storage/common/storage-use-azurite.md?tabs=visual-studio) is used. |
- | **Authorization level** | **Anonymous** | The created function can be triggered by any client without providing a key. This authorization setting makes it easy to test your new function. For more information about keys and authorization, see [Authorization keys](./functions-bindings-http-webhook-trigger.md#authorization-keys) and [HTTP and webhook bindings](./functions-bindings-http-webhook.md). |
-
- :::image type="content" source="../../includes/media/functions-vs-tools-create/functions-project-settings-v4-isolated.png" alt-text="Screenshot of Azure Functions project settings.":::
-
-
-
- Make sure you set the **Authorization level** to **Anonymous**. If you choose the default level of **Function**, you're required to present the [function key](./functions-bindings-http-webhook-trigger.md#authorization-keys) in requests to access your function endpoint.
+ Make sure you set the **Authorization level** to **Anonymous**. If you choose the default level of **Function**, you're required to present the [function key](./functions-bindings-http-webhook-trigger.md#authorization-keys) in requests to access your function endpoint in Azure.
2. Select **Create** to create the function project and HTTP trigger function.
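One visible difference between the generated projects: the isolated worker templates add a `Program.cs` that bootstraps the worker host, roughly like the sketch below (in-process projects have no such file, because the Functions host runs the code directly). This reflects the .NET 6/7 isolated templates as a sketch, not the exact generated file.

```cs
// Program.cs in an isolated worker project: build and run the worker host.
using Microsoft.Extensions.Hosting;

var host = new HostBuilder()
    .ConfigureFunctionsWorkerDefaults()
    .Build();

host.Run();
```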
You created Azure resources to complete this quickstart. You may be billed for t
In this quickstart, you used Visual Studio to create and publish a C# function app in Azure with a simple HTTP trigger function.
+The next article depends on your chosen process model.
+
# [In-process](#tab/in-process)

To learn more about working with C# functions that run in-process with the Functions host, see [Develop C# class library functions using Azure Functions](functions-dotnet-class-library.md).
azure-monitor Metrics Store Custom Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-store-custom-rest-api.md
curl -X POST 'https://login.microsoftonline.com/<tenant ID>/oauth2/token' \
--data-urlencode 'grant_type=client_credentials' \
--data-urlencode 'client_id=<your apps client ID>' \
--data-urlencode 'client_secret=<your apps client secret>' \
---data-urlencode 'resource=https://management.azure.com'
+--data-urlencode 'resource=https://monitor.azure.com'
```

The response body appears as follows:
If you receive an error message with some part of the process, consider the foll
## Next steps

-- Learn more about [custom metrics](./metrics-custom-overview.md).
+- Learn more about [custom metrics](./metrics-custom-overview.md).
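For reference, the curl token request shown earlier maps to the following C# sketch; the placeholders match the curl example, and the `HttpClient` usage here is illustrative, not part of the original article.

```cs
using System.Net.Http;
using System.Collections.Generic;

// Request an Azure AD token for the custom metrics API (resource: https://monitor.azure.com).
using var http = new HttpClient();
var form = new FormUrlEncodedContent(new Dictionary<string, string>
{
    ["grant_type"] = "client_credentials",
    ["client_id"] = "<your apps client ID>",
    ["client_secret"] = "<your apps client secret>",
    ["resource"] = "https://monitor.azure.com",
});
var tokenResponse = await http.PostAsync("https://login.microsoftonline.com/<tenant ID>/oauth2/token", form);
string body = await tokenResponse.Content.ReadAsStringAsync();  // JSON containing access_token
```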
azure-monitor Prometheus Metrics Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-metrics-enable.md
The output for each command will look similar to the following:
"metrics": { "enabled": true, "kubeStateMetrics": {
- "metrican'tationsAllowList": "",
+ "metricAnnotationsAllowList": "",
"metricLabelsAllowlist": "" } }
The output will be similar to the following:
"metrics": { "enabled": true, "kubeStateMetrics": {
- "metrican'tationsAllowList": "pods=[k8s-annotation-1,k8s-annotation-n]",
+ "metricAnnotationsAllowList": "pods=[k8s-annotation-1,k8s-annotation-n]",
"metricLabelsAllowlist": "namespaces=[k8s-label-1,k8s-label-n]" } }
azure-signalr Signalr Howto Work With App Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-howto-work-with-app-gateway.md
Under folder samples/Chatroom, run the below commands:
```bash
# Build and publish the assemblies to publish folder
-dotnet publish -os linux -o publish
+dotnet publish --os linux -o publish
# zip the publish folder as app.zip
cd publish
zip -r app.zip .
azure-video-indexer Network Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/network-security.md
Azure Video Indexer is a service hosted on Azure. In some cases the service needs to interact with other services in order to index video files (for example, a Storage account) or when you orchestrate indexing jobs against Azure Video Indexer API endpoint using your own service hosted on Azure (for example, AKS, Web Apps, Logic Apps, Functions).

> [!NOTE]
-> If you are already using "AzureVideoAnalyzerForMedia" Network Service Tag you may experience issues with your networking security group starting 9 January 2023. This is because we are moving to a new Security Tag label "VideoIndexer" that was unfortunately not launched to GA in the UI before removing the preceding "AzureVideoAnalyzerForMedia" tag. The mitigatation is to run the following command from Powershell CLI:
-
-`$nsg | Add-AzNetworkSecurityRuleConfig -Name $rulename -Description "Testing our Service Tag" -Access Allow -Protocol * -Direction Inbound -Priority 100 -SourceAddressPrefix "YourTagDisplayName" -SourcePortRange * -DestinationAddressPrefix * -DestinationPortRange $port`
-
-Where `YourTagDisplayName` needs to be replaced with VideoIndexer
-
+> If you are already using the "AzureVideoAnalyzerForMedia" Network Service Tag, you may experience issues with your network security group starting 9 January 2023. This is because we are moving to a new Security Tag label, "VideoIndexer", which unfortunately was not launched to GA in the UI before the preceding "AzureVideoAnalyzerForMedia" tag was removed. The mitigation is to remove the old tag from your configuration. We will update this page and the release notes once the new tag is available.
Use [Network Security Groups with Service Tags](../virtual-network/service-tags-overview.md) to limit access to your resources on a network level. A service tag represents a group of IP address prefixes from a given Azure service, in this case Azure Video Indexer. Microsoft manages the address prefixes grouped by the service tag and automatically updates the service tag as addresses change in our backend, minimizing the complexity of frequent updates to network security rules by the customer.
backup Back Up Hyper V Virtual Machines Mabs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/back-up-hyper-v-virtual-machines-mabs.md
Title: Back up Hyper-V virtual machines with MABS description: This article contains the procedures for backing up and recovery of virtual machines using Microsoft Azure Backup Server (MABS). Previously updated : 07/09/2021 Last updated : 01/16/2023

# Back up Hyper-V virtual machines with Azure Backup Server
-This article explains how to back up Hyper-V virtual machines using Microsoft Azure Backup Server (MABS).
+This article describes how to back up and restore Hyper-V virtual machines using Microsoft Azure Backup Server (MABS).
## Supported scenarios
MABS can do a host or guest-level backup of Hyper-V VMs. At the host level, the
Both methods have pros and cons:

-- Host-level backups are flexible because they work regardless of the type of OS running on the guest machines and don't require the installation of the MABS protection agent on each VM. If you deploy host level backup, you can recover an entire virtual machine, or files and folders (item-level recovery).
+| Host-level backups | Guest-level backup |
+| --- | --- |
+| - These backups are flexible because they work regardless of the type of OS running on the guest machines and don't require the installation of the MABS protection agent on each VM. <br><br> - If you deploy host level backup, you can recover an entire virtual machine, or files and folders (item-level recovery). | - This backup is useful if you want to protect specific workloads running on a virtual machine. <br><br> - At host-level you can recover an entire VM or specific files, but it won't provide recovery in the context of a specific application. For example, to recover specific SharePoint items from a backed-up VM, you should do guest-level backup of that VM. Use guest-level backup if you want to protect data stored on passthrough disks. Passthrough allows the virtual machine to directly access the storage device and doesn't store virtual volume data in a VHD file. |
-- Guest-level backup is useful if you want to protect specific workloads running on a virtual machine. At host-level you can recover an entire VM or specific files, but it won't provide recovery in the context of a specific application. For example, to recover specific SharePoint items from a backed-up VM, you should do guest-level backup of that VM. Use guest-level backup if you want to protect data stored on passthrough disks. Passthrough allows the virtual machine to directly access the storage device and doesn't store virtual volume data in a VHD file.
+## How the backup process works?
-## How the backup process works
-
-MABS performs backup with VSS as follows. The steps in this description are numbered to help with clarity.
+MABS performs backup with VSS as follows:
1. The MABS block-based synchronization engine makes an initial copy of the protected virtual machine and ensures that the copy of the virtual machine is complete and consistent.
MABS performs backup with VSS as follows. The steps in this description are numb
## Backup prerequisites
-These are the prerequisites for backing up Hyper-V virtual machines with MABS:
+The following table lists the prerequisites to back up Hyper-V virtual machines with MABS:
|Prerequisite|Details| ||-|
-|MABS prerequisites|- If you want to perform item-level recovery for virtual machines (recover files, folders, volumes), then you'll need to have the Hyper-V role enabled on the MABS server (the Hyper-V role gets installed by default during the installation of MABS). If you only want to recover the virtual machine and not item-level, then the role isn't required.<br />- You can protect up to 800 virtual machines of 100 GB each on one MABS server and allow multiple MABS servers that support larger clusters.<br />- MABS excludes the page file from incremental backups to improve virtual machine backup performance.<br />- MABS can back up a Hyper-V server or cluster in the same domain as the MABS server, or in a child or trusted domain. If you want to back up Hyper-V in a workgroup or an untrusted domain, you'll need to set up authentication. For a single Hyper-V server, you can use NTLM or certificate authentication. For a cluster, you can use certificate authentication only.<br />- Using host-level backup to back up virtual machine data on passthrough disks isn't supported. In this scenario, we recommend you use host-level backup to back up VHD files and guest-level backup to back up the other data that isn't visible on the host.<br /> -You can back up VMs stored on deduplicated volumes.|
-|Hyper-V VM prerequisites|- The version of Integration Components that's running on the virtual machine should be the same as the version of the Hyper-V host. <br />- For each virtual machine backup you'll need free space on the volume hosting the virtual hard disk files to allow Hyper-V enough room for differencing disks (AVHD's) during backup. The space must be at least equal to the calculation **Initial disk size\*Churn rate\*Backup** window time. If you're running multiple backups on a cluster, you'll need enough storage capacity to accommodate the AVHDs for each of the virtual machines using this calculation.<br />- To back up virtual machines located on Hyper-V host servers running Windows Server 2012 R2, the virtual machine should have a SCSI controller specified, even if it's not connected to anything. (In Windows Server 2012 R2 backup, the Hyper-V host mounts a new VHD in the VM and then later dismounts it. Only the SCSI controller can support this and therefore is required for online backup of the virtual machine. Without this setting, event ID 10103 will be issued when you try to back up the virtual machine.)|
+|MABS prerequisites|- If you want to perform item-level recovery for virtual machines (recover files, folders, volumes), then you'll need to have the Hyper-V role enabled on the MABS server (the Hyper-V role gets installed by default during the installation of MABS). If you only want to recover the virtual machine and not item-level, then the role isn't required.<br />- You can protect up to 800 virtual machines of 100 GB each on one MABS server and allow multiple MABS servers that support larger clusters.<br />- MABS excludes the page file from incremental backups to improve virtual machine backup performance.<br />- MABS can back up a Hyper-V server or cluster in the same domain as the MABS server, or in a child or trusted domain. If you want to back up Hyper-V in a workgroup or an untrusted domain, you'll need to set up authentication. For a single Hyper-V server, you can use NTLM or certificate authentication. For a cluster, you can use certificate authentication only.<br />- Using the host-level backup to back up virtual machine data on passthrough disks isn't supported. In this scenario, we recommend you use host-level backup to back up VHD files and guest-level backup to back up the other data that isn't visible on the host.<br /> -You can back up VMs stored on deduplicated volumes.|
+|Hyper-V VM prerequisites|- The version of Integration Components that's running on the virtual machine should be the same as the version of the Hyper-V host.<br />- For each virtual machine backup, you'll need free space on the volume hosting the virtual hard disk files to allow Hyper-V enough room for differencing disks (AVHDs) during backup. The space must be at least equal to the calculation **Initial disk size \* Churn rate \* Backup window time**. If you're running multiple backups on a cluster, you'll need enough storage capacity to accommodate the AVHDs for each of the virtual machines using this calculation.<br />- To back up virtual machines located on Hyper-V host servers running Windows Server 2012 R2, the virtual machine should have a SCSI controller specified, even if it's not connected to anything. (In Windows Server 2012 R2 backup, the Hyper-V host mounts a new VHD in the VM and then later dismounts it. Only the SCSI controller can support this, and it's therefore required for online backup of the virtual machine. Without this setting, event ID 10103 will be issued when you try to back up the virtual machine.)|
|Linux prerequisites|- You can back up Linux virtual machines using MABS. Only file-consistent snapshots are supported.| |Back up VMs with CSV storage|- For CSV storage, install the Volume Shadow Copy Services (VSS) hardware provider on the Hyper-V server. Contact your storage area network (SAN) vendor for the VSS hardware provider.<br />- If a single node shuts down unexpectedly in a CSV cluster, MABS will perform a consistency check against the virtual machines that were running on that node.<br />- If you need to restart a Hyper-V server that has BitLocker Drive Encryption enabled on the CSV cluster, you must run a consistency check for Hyper-V virtual machines.| |Back up VMs with SMB storage|- Turn on auto-mount on the server that's running Hyper-V to enable virtual machine protection.<br />- Disable TCP Chimney Offload.<br />- Ensure that all Hyper-V machine$ accounts have full permissions on the specific remote SMB file shares.<br />- Ensure that the file path for all virtual machine components during recovery to an alternate location is fewer than 260 characters. If not, recovery might succeed, but Hyper-V won't be able to mount the virtual machine.<br />- The following scenarios aren't supported:<br /> Deployments where some components of the virtual machine are on local volumes and some components are on remote volumes; an IPv4 or IPv6 address for the storage location file server; and recovery of a virtual machine to a computer that uses remote SMB shares.<br />- You'll need to enable the File Server VSS Agent service on each SMB server. Add it in **Add roles and features** > **Select server roles** > **File and Storage Services** > **File Services** > **File Service** > **File Server VSS Agent Service**.| ## Back up virtual machines
+To back up a virtual machine, follow these steps:
+ 1. Set up your [MABS server](backup-azure-microsoft-azure-backup.md) and [your storage](backup-mabs-add-storage.md). When setting up your storage, use these storage capacity guidelines: average virtual machine size - 100 GB; number of virtual machines per MABS server - 800.
These are the prerequisites for backing up Hyper-V virtual machines with MABS:
2. Set up the MABS protection agent on the Hyper-V server or Hyper-V cluster nodes.
-3. In the MABS Administrator console, select **Protection** > **Create protection group** to open the **Create New Protection Group** wizard.
+3. On the MABS Administrator console, select **Protection** > **Create protection group** to open the **Create New Protection Group** wizard.
4. On the **Select Group Members** page, select the VMs you want to protect from the Hyper-V host servers on which they're located. We recommend you put all VMs that will have the same protection policy into one protection group. To make efficient use of space, enable colocation. Colocation allows you to locate data from different protection groups on the same disk or tape storage, so that multiple data sources have a single replica and recovery point volume. 5. On the **Select Data Protection Method** page, specify a protection group name. Select **I want short-term protection using Disk** and select **I want online protection** if you want to back up data to Azure using the Azure Backup service.
-6. In **Specify Short-Term Goals** > **Retention range**, specify how long you want to retain disk data. In **Synchronization frequency**, specify how often incremental backups of the data should run. Alternatively, instead of selecting an interval for incremental backups you can enable **Just before a recovery point**. With this setting enabled, MABS will run an express full backup just before each scheduled recovery point.
+6. On **Specify Short-Term Goals** > **Retention range**, specify how long you want to retain disk data. In **Synchronization frequency**, specify how often incremental backups of the data should run. Alternatively, instead of selecting an interval for incremental backups, you can enable **Just before a recovery point**. With this setting enabled, MABS will run an express full backup just before each scheduled recovery point.
> [!NOTE] > >If you're protecting application workloads, recovery points are created in accordance with Synchronization frequency, provided the application supports incremental backups. If it doesn't, then MABS runs an express full backup, instead of an incremental backup, and creates recovery points in accordance with the express backup schedule.<br></br>The backup process doesn't back up the checkpoints associated with VMs.
-7. In the **Review disk allocation** page, review the storage pool disk space allocated for the protection group.
+7. On the **Review disk allocation** page, review the storage pool disk space allocated for the protection group.
**Total Data size** is the size of the data you want to back up, and **Disk space to be provisioned on MABS** is the space that MABS recommends for the protection group. MABS chooses the ideal backup volume based on the settings. However, you can edit the backup volume choices in the **Disk allocation details**. For the workloads, select the preferred storage in the dropdown menu. Your edits change the values for **Total Storage** and **Free Storage** in the **Available Disk Storage** pane. Underprovisioned space is the amount of storage MABS suggests you add to the volume so that backups continue smoothly in the future.
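As a rough aid for this disk-allocation step, the sizing guidance in the prerequisites (the AVHD formula and the 800-VM/100-GB capacity guideline) can be turned into a quick estimate. The following is a minimal sketch; the churn rate, backup window, and VM counts below are illustrative assumptions, not MABS defaults:

```python
def avhd_space_gb(initial_disk_gb: float, churn_rate: float, backup_window_hours: float) -> float:
    """Free space Hyper-V needs for differencing disks (AVHDs) during backup,
    per the prerequisite formula: initial disk size * churn rate * backup window time.
    Here churn_rate is interpreted as fractional change per hour (an assumption)."""
    return initial_disk_gb * churn_rate * backup_window_hours

# Illustrative values only: a 100 GB VM, 5% hourly churn, an 8-hour backup window.
per_vm = avhd_space_gb(initial_disk_gb=100, churn_rate=0.05, backup_window_hours=8)
print(f"AVHD headroom per VM: {per_vm:.0f} GB")

# Capacity guideline: up to 800 VMs of 100 GB each per MABS server.
vm_count, avg_vm_gb = 200, 100
status = "within" if vm_count <= 800 else "exceeds"
print(f"Protected data: {vm_count * avg_vm_gb} GB ({status} the 800-VM guideline)")
```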
These are the prerequisites for backing up Hyper-V virtual machines with MABS:
If MABS is running on Windows Server 2012 R2 or later, you can back up replica virtual machines. This is useful for several reasons:
-**Reduces the impact of backups on the running workload** - Taking a backup of a virtual machine incurs some overhead as a snapshot is created. By offloading the backup process to a secondary remote site, the running workload is no longer affected by the backup operation. This is applicable only to deployments where the backup copy is stored on a remote site. For example, you might take daily backups and store data locally to ensure quick restore times, but take monthly or quarterly backups from replica virtual machines stored remotely for long-term retention.
+**Reduces the impact of backups on the running workload** - Taking a backup of a virtual machine incurs some overhead as a snapshot is created. When the backup process is offloaded to a secondary remote site, the running workload is no longer affected by the backup operation. This is applicable only to deployments where the backup copy is stored on a remote site. For example, you might take daily backups and store data locally to ensure quick restore times, but take monthly or quarterly backups from replica virtual machines stored remotely for long-term retention.
**Saves bandwidth** - In a typical remote branch office/headquarters deployment you need an appropriate amount of provisioned bandwidth to transfer backup data between sites. If you create a replication and failover strategy, in addition to your data backup strategy, you can reduce the amount of redundant data sent over the network. By backing up the replica virtual machine data rather than the primary, you save the overhead of sending the backed-up data over the network.
A replica virtual machine is turned off until a failover is initiated, and VSS c
- Migration or failover of the replica virtual machine is in progress
-## Recover backed up virtual machines
+## Recover backed-up virtual machines
-When you can recover a backed up virtual machine, you use the Recovery wizard to select the virtual machine and the specific recovery point. To open the Recovery Wizard and recover a virtual machine:
+When you recover a backed-up virtual machine, you use the Recovery wizard to select the virtual machine and the specific recovery point.
-1. In the MABS Administrator console, type the name of the VM, or expand the list of protected items, navigate to **All Protected HyperV Data**, and select the VM you want to recover.
+To open the Recovery Wizard and recover a virtual machine, follow these steps:
-2. In the **Recovery points for** pane, on the calendar, select any date to see the recovery points available. Then in the **Path** pane, select the recovery point you want to use in the Recovery wizard.
+1. On the MABS Administrator console, type the name of the VM, or expand the list of protected items, navigate to **All Protected HyperV Data**, and select the VM you want to recover.
+
+2. On the **Recovery points for** pane, on the calendar, select any date to see the recovery points available. Then in the **Path** pane, select the recovery point you want to use in the Recovery wizard.
3. From the **Actions** menu, select **Recover** to open the Recovery Wizard.
When you can recover a backed up virtual machine, you use the Recovery wizard to
- **Copy to a network folder**: MABS supports item-level recovery (ILR), which lets you recover files, folders, volumes, and virtual hard disks (VHDs) from a host-level backup of Hyper-V virtual machines to a network share or a volume on a MABS-protected server. The MABS protection agent doesn't have to be installed inside the guest to perform item-level recovery. If you choose this option, the Recovery Wizard presents you with an additional screen for identifying the destination and destination path.
-5. In **Specify Recovery Options** configure the recovery options and select **Next**:
+5. On **Specify Recovery Options**, configure the recovery options, and then select **Next**:
- - If you are recovering a VM over low bandwidth, select **Modify** to enable **Network bandwidth usage throttling**. After turning on the throttling option, you can specify the amount of bandwidth you want to make available and the time when that bandwidth is available.
+ - If you're recovering a VM over low bandwidth, select **Modify** to enable **Network bandwidth usage throttling**. After turning on the throttling option, you can specify the amount of bandwidth you want to make available and the time when that bandwidth is available.
- Select **Enable SAN based recovery using hardware snapshots** if you've configured your network. - Select **Send an e-mail when the recovery completes** and then provide the email addresses if you want email notifications sent once the recovery process completes.
-6. In the Summary screen, make sure all details are correct. If the details aren't correct, or you want to make a change, select **Back**. If you're satisfied with the settings, select **Recover** to start the recovery process.
+6. On the **Summary** screen, make sure all details are correct. If the details aren't correct, or you want to make a change, select **Back**. If you're satisfied with the settings, select **Recover** to start the recovery process.
7. The **Recovery Status** screen provides information about the recovery job. ## Restore an individual file from a Hyper-V VM
-You can restore individual files from a protected Hyper-V VM recovery point. This feature is only available for Windows Server VMs. Restoring individual files is similar to restoring the entire VM, except you browse into the VMDK and find the file(s) you want, before starting the recovery process. To recover an individual file or select files from a Windows Server VM:
+You can restore individual files from a protected Hyper-V VM recovery point. This feature is only available for Windows Server VMs. Restoring individual files is similar to restoring the entire VM, except you browse into the VHD and find the file(s) you want before starting the recovery process.
+
+To recover an individual file or select files from a Windows Server VM, follow these steps:
> [!NOTE] > Restoring an individual file from a Hyper-V VM is available only for Windows VMs and disk recovery points.
-1. In the MABS Administrator Console, select **Recovery** view.
+1. On the MABS Administrator Console, select the **Recovery** view.
-1. Using the **Browse** pane, browse or filter to find the VM you want to recover. Once you select a Hyper-V VM or folder, the **Recovery points for** pane displays the available recovery points.
+1. On the **Browse** pane, browse or filter to find the VM you want to recover. Once you select a Hyper-V VM or folder, the **Recovery points for** pane displays the available recovery points.
- !["Recovery points for" pane to recover files from Hyper-v VM](./media/back-up-hyper-v-virtual-machines-mabs/hyper-v-vm-rp-disk.png)
+ ![Screenshot shows how to recover files from Hyper-V VM from the "Recovery points for" pane.](./media/back-up-hyper-v-virtual-machines-mabs/hyper-v-vm-rp-disk.png)
-1. In the **Recovery Points for** pane, use the calendar to select the date that contains the desired recovery point(s). Depending on how the backup policy has been configured, dates can have more than one recovery point. Once you've selected the day when the recovery point was taken, make sure you've chosen the correct **Recovery time**. If the selected date has multiple recovery points, choose your recovery point by selecting it in the Recovery time drop-down menu. Once you chose the recovery point, the list of recoverable items appears in the Path pane.
+1. On the **Recovery Points for** pane, use the calendar to select the date that contains the desired recovery point(s). Depending on how the backup policy has been configured, dates can have more than one recovery point. Once you've selected the day when the recovery point was taken, make sure you've chosen the correct **Recovery time**. If the selected date has multiple recovery points, choose your recovery point by selecting it in the **Recovery time** drop-down menu. Once you've chosen the recovery point, the list of recoverable items appears in the **Path** pane.
1. To find the files you want to recover, in the **Path** pane, double-click the item in the **Recoverable Item** column to open it. Select the file, files, or folders you want to recover. To select multiple items, press the **Ctrl** key while selecting each item. Use the **Path** pane to search the list of files or folders appearing in the **Recoverable Item** column. **Search list below** doesn't search into subfolders. To search through subfolders, double-click the folder. Use the **Up** button to move from a child folder into the parent folder. You can select multiple items (files and folders), but they must be in the same parent folder. You can't recover items from multiple folders in the same recovery job.
- ![Review Recovery Selection in Hyper-v VM](./media/back-up-hyper-v-virtual-machines-mabs/hyper-v-vm-rp-disk-ilr-2.png)
+ ![Screenshot shows how to review Recovery Selection in Hyper-V VM.](./media/back-up-hyper-v-virtual-machines-mabs/hyper-v-vm-rp-disk-ilr-2.png)
1. Once you've selected the item(s) for recovery, in the Administrator Console tool ribbon, select **Recover** to open the **Recovery Wizard**. In the Recovery Wizard, the **Review Recovery Selection** screen shows the selected items to be recovered.
You can restore individual files from a protected Hyper-V VM recovery point. Thi
1. On the **Specify Destination** screen, select **Browse** to find a network location for your files or folders. MABS creates a folder where all recovered items are copied. The folder name has the prefix MABS_day-month-year. When you select a location for the recovered files or folders, the details for that location (Destination, Destination path, and available space) are provided.
- ![Specify location to recover files from Hyper-v VM](./media/back-up-hyper-v-virtual-machines-mabs/hyper-v-vm-specify-destination.png)
+ ![Screenshot shows how to specify location to recover files from Hyper-V VM.](./media/back-up-hyper-v-virtual-machines-mabs/hyper-v-vm-specify-destination.png)
1. On the **Specify Recovery Options** screen, choose which security setting to apply. You can opt to modify the network bandwidth usage throttling, but throttling is disabled by default. Also, **SAN Recovery** and **Notification** aren't enabled.
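If you recover to an alternate location, recall the prerequisite that every virtual machine component's file path must be fewer than 260 characters, or Hyper-V may fail to mount the recovered VM. A minimal pre-check sketch; the destination share and component paths below are hypothetical:

```python
import os

MAX_PATH = 260  # limit called out in the SMB storage prerequisites

destination = r"\\fileserver\recovered\MABS_16-01-2023"  # hypothetical destination folder
components = [
    r"Virtual Hard Disks\app-server-01.vhdx",  # hypothetical VM files
    r"Virtual Machines\1A2B3C4D-AAAA-BBBB-CCCC-000000000000.vmcx",
]

# Flag any recovered component whose full path would hit the 260-character limit.
for component in components:
    full_path = os.path.join(destination, component)
    verdict = "TOO LONG" if len(full_path) >= MAX_PATH else "OK"
    print(f"{verdict} ({len(full_path)} chars): {full_path}")
```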
cognitive-services Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/concepts/models.md
Azure OpenAI's model names typically correspond to the following standard naming
| `{input-type}` | ([Embeddings models](#embeddings-models) only) The input type of the embedding supported by the model. For example, text search embedding models support `doc` and `query`.| | `{identifier}` | The version identifier of the model. |
-For example, our most powerful GPT-3 model is called `text-davinci-002`, while our most powerful Codex model is called `code-davinci-002`.
+For example, our most powerful GPT-3 model is called `text-davinci-003`, while our most powerful Codex model is called `code-davinci-002`.
> Older versions of the GPT-3 models are available, named `ada`, `babbage`, `curie`, and `davinci`. These older models do not follow the standard naming conventions, and they are primarily intended for fine-tuning. For more information, see [Learn how to customize a model for your application](../how-to/fine-tuning.md).
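To make the convention concrete, here's a small sketch that splits a model name into the fields described above. It's purely illustrative: the capability list is an assumption chosen so that multi-word capabilities like `text-search` parse correctly, and it deliberately rejects the older non-conforming names such as `davinci`:

```python
# Assumed capability prefixes; capabilities themselves may contain hyphens.
CAPABILITIES = ["text-similarity", "text-search", "code-search", "text", "code"]

def parse_model_name(name: str) -> dict:
    """Split a model name of the form {capability}-{family}[-{input-type}]-{identifier}.
    Longest capability prefixes are tried first."""
    for cap in sorted(CAPABILITIES, key=len, reverse=True):
        if name.startswith(cap + "-"):
            rest = name[len(cap) + 1:].split("-")
            if len(rest) == 2:  # {family}-{identifier}
                return {"capability": cap, "family": rest[0], "identifier": rest[1]}
            if len(rest) == 3:  # embeddings: {family}-{input-type}-{identifier}
                return {"capability": cap, "family": rest[0],
                        "input-type": rest[1], "identifier": rest[2]}
    raise ValueError(f"{name!r} doesn't follow the standard naming convention")

print(parse_model_name("text-davinci-003"))
print(parse_model_name("text-search-curie-doc-001"))
```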
The GPT-3 models can understand and generate natural language. The service offer
- `text-ada-001` - `text-babbage-001` - `text-curie-001`-- `text-davinci-002`
+- `text-davinci-003`
While Davinci is the most capable, the other models provide significant speed advantages. We recommend starting with Davinci while experimenting, because it produces the best results and validates the value our service can provide. Once you have a prototype working, you can then optimize your model choice for the best latency/performance balance for your application.
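Prototyping with Davinci might look like the following minimal sketch, using the `openai` Python package as it existed in this era (0.x); the resource name, key, API version, and deployment name shown are placeholders/assumptions, not values from this article:

```python
import openai

openai.api_type = "azure"
openai.api_base = "https://YOUR-RESOURCE-NAME.openai.azure.com/"  # placeholder
openai.api_version = "2022-12-01"  # assumed completions API version from this period
openai.api_key = "YOUR-API-KEY"    # placeholder

# "engine" is your *deployment* name, assumed here to front text-davinci-003.
response = openai.Completion.create(
    engine="my-davinci-003-deployment",  # hypothetical deployment name
    prompt="Summarize why you might start prototyping with a Davinci model.",
    max_tokens=60,
)
print(response["choices"][0]["text"])
```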
When using our Embeddings models, keep in mind their limitations and risks.
| Davinci* | Yes | No | N/A | East US, South Central US, West Europe | | Text-davinci-001 | Yes | No | South Central US, West Europe | N/A | | Text-davinci-002 | Yes | No | East US, South Central US, West Europe | N/A |
+| Text-davinci-003 | Yes | No | East US | N/A |
| Text-davinci-fine-tune-002* | Yes | No | N/A | East US, West Europe | \*Models available by request only. Please open a support request.
cognitive-services Quotas Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/quotas-limits.md
The following sections provide you with a quick guide to the quotas and limits t
| Limit Name | Limit Value | |--|--| | OpenAI resources per region | 2 |
-| Requests per second per deployment | 20 requests per second for: text-davinci-002, text-davinci-fine-tune-002, code-cushman-002, code-davinci-002, code-davinci-fine-tune-002 <br ><br> 50 requests per second for all other text models.
+| Requests per second per deployment | 20 requests per second for: text-davinci-003, text-davinci-002, text-davinci-fine-tune-002, code-cushman-002, code-davinci-002, code-davinci-fine-tune-002 <br /><br /> 50 requests per second for all other text models.
| | Max fine-tuned model deployments | 2 | | Ability to deploy same model to multiple deployments | Not allowed |
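Staying under the 20-requests-per-second deployment limit is typically handled client-side. A minimal sketch of a sleep-based limiter follows; the limit value mirrors the table above, and everything else (the workload and the commented-out request call) is illustrative:

```python
import time

class RateLimiter:
    """Naive client-side limiter: spaces calls so a deployment never sees more
    than max_rps requests per second (20 for the Davinci/Codex models above)."""

    def __init__(self, max_rps: int):
        self.min_interval = 1.0 / max_rps
        self.last_call = 0.0

    def wait(self) -> None:
        # Sleep just long enough to keep calls at least min_interval apart.
        delay = self.min_interval - (time.monotonic() - self.last_call)
        if delay > 0:
            time.sleep(delay)
        self.last_call = time.monotonic()

limiter = RateLimiter(max_rps=20)
for prompt in ["first prompt", "second prompt"]:  # illustrative workload
    limiter.wait()
    # send_completion_request(prompt)  # hypothetical call into your deployment
    print(f"sent: {prompt}")
```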
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/whats-new.md
keywords:
# What's new in Azure OpenAI
+## January 2023
+
+### New features
+
+* **Service GA**. Azure OpenAI is now generally available.
+
+* **New models**: Addition of the latest text model, `text-davinci-003`
++ ## December 2022 ### New features
databox-online Azure Stack Edge Gpu Kubernetes Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-kubernetes-overview.md
Previously updated : 11/07/2021 Last updated : 01/13/2023
The Kubernetes master and the worker nodes are virtual machines that consume CPU
|Kubernetes VM type|CPU and memory requirement| |||
-|Master VM|4 cores, 4-GB RAM|
-|Worker VM|12 cores, 32-GB RAM|
+|Master VM|CPU: 4 cores, RAM: 4 GB|
+|Worker VM|CPU: 30% of available physical cores, RAM: 25% of device specification|
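Because the worker-VM row is relative to the device, the actual reservation depends on the hardware. A quick sketch of the arithmetic; the device core and RAM figures below are made-up examples, not a specific SKU:

```python
def kubernetes_vm_reservation(device_cores: int, device_ram_gb: int) -> dict:
    """Apply the table above: the master VM is fixed at 4 cores / 4 GB, while the
    worker VM takes 30% of available physical cores and 25% of device RAM."""
    return {
        "master": {"cores": 4, "ram_gb": 4},
        "worker": {
            "cores": round(device_cores * 0.30, 1),
            "ram_gb": round(device_ram_gb * 0.25, 1),
        },
    }

# Illustrative device only (not a specific SKU): 40 physical cores, 256 GB RAM.
print(kubernetes_vm_reservation(device_cores=40, device_ram_gb=256))
# {'master': {'cores': 4, 'ram_gb': 4}, 'worker': {'cores': 12.0, 'ram_gb': 64.0}}
```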
<!--The Kubernetes cluster control plane components make global decisions about the cluster. The control plane has:
defender-for-cloud Alerts Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/alerts-reference.md
description: This article lists the security alerts visible in Microsoft Defende
Previously updated : 11/15/2022 Last updated : 01/16/2023 # Security alerts - a reference guide
Microsoft Defender for Servers Plan 2 provides unique detections and alerts, in
| **Suspicious Activity Detected**<br>(VM_SuspiciousActivity) | Analysis of host data has detected a sequence of one or more processes running on %{machine name} that have historically been associated with malicious activity. While individual commands may appear benign, the alert is scored based on an aggregation of these commands. This could either be legitimate activity, or an indication of a compromised host. | Execution | Medium | | **Suspicious authentication activity**<br>(VM_LoginBruteForceValidUserFailed) | Although none of the authentication attempts succeeded, some of the accounts used were recognized by the host. This resembles a dictionary attack, in which an attacker performs numerous authentication attempts using a dictionary of predefined account names and passwords in order to find valid credentials to access the host. This indicates that some of your host account names might exist in a well-known account name dictionary. | Probing | Medium | | **Suspicious code segment detected** | Indicates that a code segment has been allocated by using non-standard methods, such as reflective injection and process hollowing. The alert provides more characteristics of the code segment that have been processed to provide context for the capabilities and behaviors of the reported code segment. | - | Medium |
-| **Suspicious command execution**<br>(VM_SuspiciousCommandLineExecution) | Machine logs indicate a suspicious command-line execution by user %{user name}. | Execution | High |
| **Suspicious double extension file executed** | Analysis of host data indicates an execution of a process with a suspicious double extension. This extension may trick users into thinking files are safe to be opened and might indicate the presence of malware on the system. | - | High | | **Suspicious download using Certutil detected [seen multiple times]** | Analysis of host data on %{Compromised Host} detected the use of certutil.exe, a built-in administrator utility, for the download of a binary instead of its mainstream purpose that relates to manipulating certificates and certificate data. Attackers are known to abuse functionality of legitimate administrator tools to perform malicious actions, for example using certutil.exe to download and decode a malicious executable that will then be subsequently executed. This behavior was seen [x] times today on the following machines: [Machine names] | - | Medium | | **Suspicious download using Certutil detected** | Analysis of host data on %{Compromised Host} detected the use of certutil.exe, a built-in administrator utility, for the download of a binary instead of its mainstream purpose that relates to manipulating certificates and certificate data. Attackers are known to abuse functionality of legitimate administrator tools to perform malicious actions, for example using certutil.exe to download and decode a malicious executable that will then be subsequently executed. | - | Medium |
Microsoft Defender for Servers Plan 2 provides unique detections and alerts, in
| **Suspicious process name detected [seen multiple times]** | Analysis of host data on %{Compromised Host} detected a process whose name is suspicious, for example corresponding to a known attacker tool or named in a way that is suggestive of attacker tools that try to hide in plain sight. This process could be legitimate activity, or an indication that one of your machines has been compromised. This behavior was seen [x] times today on the following machines: [Machine names] | - | Medium | | **Suspicious process name detected** | Analysis of host data on %{Compromised Host} detected a process whose name is suspicious, for example corresponding to a known attacker tool or named in a way that is suggestive of attacker tools that try to hide in plain sight. This process could be legitimate activity, or an indication that one of your machines has been compromised. | - | Medium | | **Suspicious process termination burst**<br>(VM_TaskkillBurst) | Analysis of host data indicates a suspicious process termination burst in %{Machine Name}. Specifically, %{NumberOfCommands} processes were killed between %{Begin} and %{Ending}. | Defense Evasion | Low |
-| **Suspicious Screensaver process executed**<br>(VM_SuspiciousScreenSaverExecution) | The process '%{process name}' was observed executing from an uncommon location. Files with the .scr extensions are screen saver files and are normally reside and execute from the Windows system directory. | Defense Evasion, Execution | Medium |
| **Suspicious SQL activity** | Machine logs indicate that '%{process name}' was executed by account: %{user name}. This activity is uncommon with this account. | - | Medium | | **Suspicious SVCHOST process executed** | The system process SVCHOST was observed running in an abnormal context. Malware often uses SVCHOST to masquerade its malicious activity. | - | High | | **Suspicious system process executed**<br>(VM_SystemProcessInAbnormalContext) | The system process %{process name} was observed running in an abnormal context. Malware often uses this process name to masquerade its malicious activity. | Defense Evasion, Execution | High |
Microsoft Defender for Servers Plan 2 provides unique detections and alerts, in
| **Unusual user password reset in your virtual machine**<br>(VM_VMAccessUnusualPasswordReset) | An unusual user password reset was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription.<br>While this action may be legitimate, attackers can try utilizing the VM Access extension to reset the credentials of a local user in your virtual machine and compromise it. | Credential Access | Medium | | **Unusual user SSH key reset in your virtual machine**<br>(VM_VMAccessUnusualSSHReset) | An unusual user SSH key reset was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription.<br>While this action may be legitimate, attackers can try utilizing VM Access extension to reset SSH key of a user account in your virtual machine and compromise it. | Credential Access | Medium | | **VBScript HTTP object allocation detected** | Creation of a VBScript file using Command Prompt has been detected. The following script contains HTTP object allocation command. This action can be used to download malicious files. | - | High |
-| **Windows registry persistence method detected**<br>(VM_RegistryPersistencyKey) | Analysis of host data has detected an attempt to persist an executable in the Windows registry. Malware often uses such a technique to survive a boot. | Persistence | Low |
Microsoft Defender for Servers Plan 2 provides unique detections and alerts, in
|Alert (alert type)|Description|MITRE tactics<br>([Learn more](#intentions))|Severity| |-|-|:-:|--| |**a history file has been cleared**|Analysis of host data indicates that the command history log file has been cleared. Attackers may do this to cover their traces. The operation was performed by user: '%{user name}'.|-|Medium|
-|**Access of htaccess file detected**<br>(VM_SuspectHtaccessFileAccess)|Analysis of host data on %{Compromised Host} detected possible manipulation of a htaccess file. Htaccess is a powerful configuration file that allows you to make multiple changes to a web server running the Apache Web software including basic redirect functionality, or for more advanced functions such as basic password protection. Attackers will often modify htaccess files on machines they've compromised to gain persistence.|Persistence, Defense Evasion, Execution|Medium|
|**Antimalware broad files exclusion in your virtual machine**<br>(VM_AmBroadFilesExclusion) | Files exclusion from antimalware extension with broad exclusion rule was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription. Such an exclusion practically disables the antimalware protection.<br>Attackers might exclude files from the antimalware scan on your virtual machine to prevent detection while running arbitrary code or infecting the machine with malware. | - | Medium | |**Antimalware disabled and code execution in your virtual machine**<br>(VM_AmDisablementAndCodeExecution) | Antimalware disabled at the same time as code execution on your virtual machine. This was detected by analyzing Azure Resource Manager operations in your subscription.<br>Attackers disable antimalware scanners to prevent detection while running unauthorized tools or infecting the machine with malware. | - | High | |**Antimalware disabled in your virtual machine**<br>(VM_AmDisablement) | Antimalware disabled in your virtual machine. This was detected by analyzing Azure Resource Manager operations in your subscription.<br>Attackers might disable the antimalware on your virtual machine to prevent detection. | Defense Evasion | Medium |
Microsoft Defender for Servers Plan 2 provides unique detections and alerts, in
|**Antimalware scans blocked for files potentially related to malware campaigns on your virtual machine (Preview)**<br>(VM_AmMalwareCampaignRelatedExclusion) | An exclusion rule was detected in your virtual machine to prevent your antimalware extension from scanning certain files that are suspected of being related to a malware campaign. The rule was detected by analyzing the Azure Resource Manager operations in your subscription. Attackers might exclude files from antimalware scans to prevent detection while running arbitrary code or infecting the machine with malware. | Defense Evasion | Medium | |**Antimalware temporarily disabled in your virtual machine**<br>(VM_AmTemporarilyDisablement) | Antimalware temporarily disabled in your virtual machine. This was detected by analyzing Azure Resource Manager operations in your subscription.<br>Attackers might disable the antimalware on your virtual machine to prevent detection. | - | Medium | |**Antimalware unusual file exclusion in your virtual machine**<br>(VM_UnusualAmFileExclusion) | Unusual file exclusion from antimalware extension was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription.<br>Attackers might exclude files from the antimalware scan on your virtual machine to prevent detection while running arbitrary code or infecting the machine with malware. | Defense Evasion | Medium |
-|**Attempt to stop apt-daily-upgrade.timer service detected [seen multiple times]**|Analysis of host data on %{Compromised Host} detected an attempt to stop apt-daily-upgrade.timer service. In some recent attacks, attackers have been observed stopping this service, to download malicious files and granting execution privileges for their attack. This behavior was seen [x] times today on the following machines: [Machine names]|-|Low|
-|**Attempt to stop apt-daily-upgrade.timer service detected**<br>(VM_TimerServiceDisabled)|Analysis of host data on %{Compromised Host} detected an attempt to stop apt-daily-upgrade.timer service. In some recent attacks, attackers have been observed stopping this service, to download malicious files and granting execution privileges for their attack.|Defense Evasion|Low|
-|**Behavior similar to common Linux bots detected [seen multiple times]**|Analysis of host data on %{Compromised Host} detected the execution of a process normally associated with common Linux botnets. This behavior was seen [x] times today on the following machines: [Machine names]|-|Medium|
-|**Behavior similar to common Linux bots detected**<br>(VM_CommonBot)|Analysis of host data on %{Compromised Host} detected the execution of a process normally associated with common Linux botnets.|Execution, Collection, Command and Control|Medium|
-|**Behavior similar to Fairware ransomware detected [seen multiple times]**|Analysis of host data on %{Compromised Host} detected the execution of rm -rf commands applied to suspicious locations. As rm -rf will recursively delete files, it's normally used on discrete folders. In this case, it's being used in a location that could remove a lot of data. Fairware ransomware is known to execute rm -rf commands in this folder. This behavior was seen [x] times today on the following machines: [Machine names]|-|Medium|
-|**Behavior similar to Fairware ransomware detected**<br>(VM_FairwareMalware)|Analysis of host data on %{Compromised Host} detected the execution of rm -rf commands applied to suspicious locations. As rm -rf will recursively delete files, it's normally used on discrete folders. In this case, it's being used in a location that could remove a lot of data. Fairware ransomware is known to execute rm -rf commands in this folder.|Execution|Medium|
|**Behavior similar to ransomware detected [seen multiple times]**|Analysis of host data on %{Compromised Host} detected the execution of files that resemble known ransomware, which can prevent users from accessing their system or personal files and demands ransom payment in order to regain access. This behavior was seen [x] times today on the following machines: [Machine names]|-|High| |**Communication with suspicious domain identified by threat intelligence**<br>(AzureDNS_ThreatIntelSuspectDomain) | Communication with suspicious domain was detected by analyzing DNS transactions from your resource and comparing against known malicious domains identified by threat intelligence feeds. Communication to malicious domains is frequently performed by attackers and could imply that your resource is compromised. | Initial Access, Persistence, Execution, Command And Control, Exploitation | Medium |
-|**Container with a miner image detected**<br>(VM_MinerInContainerImage) | Machine logs indicate execution of a Docker container that run an image associated with a digital currency mining. | Execution | High |
-|**Crypto coin miner execution** <br> (VM_CryptoCoinMinerExecution) | Analysis of host/device data detected a process being started in a way very similar to a coin mining process. | Execution | Medium |
+|**Container with a miner image detected**<br>(VM_MinerInContainerImage) | Machine logs indicate execution of a Docker container that runs an image associated with digital currency mining. | Execution | High |
|**Custom script extension with suspicious command in your virtual machine**<br>(VM_CustomScriptExtensionSuspiciousCmd) | Custom script extension with suspicious command was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription.<br>Attackers may use custom script extension to execute a malicious code on your virtual machine via the Azure Resource Manager. | Execution | Medium | |**Custom script extension with suspicious entry-point in your virtual machine**<br>(VM_CustomScriptExtensionSuspiciousEntryPoint) | Custom script extension with a suspicious entry-point was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription. The entry-point refers to a suspicious GitHub repository.<br>Attackers may use custom script extensions to execute malicious code on your virtual machines via the Azure Resource Manager. | Execution | Medium | |**Custom script extension with suspicious payload in your virtual machine**<br>(VM_CustomScriptExtensionSuspiciousPayload) | Custom script extension with a payload from a suspicious GitHub repository was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription.<br>Attackers may use custom script extensions to execute malicious code on your virtual machines via the Azure Resource Manager. | Execution | Medium | |**Detected anomalous mix of upper and lower case characters in command line**|Analysis of host data on %{Compromised Host} detected a command line with anomalous mix of upper and lower case characters. This kind of pattern, while possibly benign, is also typical of attackers trying to hide from case-sensitive or hash-based rule matching when performing administrative tasks on a compromised host.|-|Medium|
-|**Detected file download from a known malicious source [seen multiple times]**<br>(VM_SuspectDownload)|Analysis of host data has detected the download of a file from a known malware source on %{Compromised Host}. This behavior was seen over [x] times today on the following machines: [Machine names]|Privilege Escalation, Execution, Exfiltration, Command and Control|Medium|
|**Detected file download from a known malicious source**|Analysis of host data has detected the download of a file from a known malware source on %{Compromised Host}.|-|Medium|
-|**Detected persistence attempt [seen multiple times]**|Analysis of host data on %{Compromised Host} has detected installation of a startup script for single-user mode. It's extremely rare that any legitimate process needs to execute in that mode, so this may indicate that an attacker has added a malicious process to every run-level to guarantee persistence. This behavior was seen [x] times today on the following machines: [Machine names]|-|Medium|
-|**Detected persistence attempt**<br>(VM_NewSingleUserModeStartupScript)|Host data analysis has detected that a startup script for single-user mode has been installed.<br>Because it's rare that any legitimate process would be required to run in that mode, this might indicate that an attacker has added a malicious process to every run-level to guarantee persistence. |Persistence|Medium|
-|**Detected suspicious file download [seen multiple times]**|Analysis of host data has detected suspicious download of remote file on %{Compromised Host}. This behavior was seen 10 times today on the following machines: [Machine name]|-|Low|
-|**Detected suspicious file download**<br>(VM_SuspectDownloadArtifacts)|Analysis of host data has detected suspicious download of remote file on %{Compromised Host}.|Persistence|Low|
|**Detected suspicious network activity**|Analysis of network traffic from %{Compromised Host} detected suspicious network activity. Such traffic, while possibly benign, is typically used by an attacker to communicate with malicious servers for downloading of tools, command-and-control and exfiltration of data. Typical related attacker activity includes copying remote administration tools to a compromised host and exfiltrating user data from it.|-|Low|
-|**Detected suspicious use of the useradd command [seen multiple times]**|Analysis of host data has detected suspicious use of the useradd command on %{Compromised Host}. This behavior was seen [x] times today on the following machines: [Machine names]|-|Medium|
-|**Detected suspicious use of the useradd command**<br>(VM_SuspectUserAddition)|Analysis of host data has detected suspicious use of the useradd command on %{Compromised Host}.|Persistence|Medium|
|**Digital currency mining related behavior detected**|Analysis of host data on %{Compromised Host} detected the execution of a process or command normally associated with digital currency mining.|-|High| |**Disabling of auditd logging [seen multiple times]**|The Linux Audit system provides a way to track security-relevant information on the system. It records as much information about the events that are happening on your system as possible. Disabling auditd logging could hamper discovering violations of security policies used on the system. This behavior was seen [x] times today on the following machines: [Machine names]|-|Low|
-|**Executable found running from a suspicious location**<br>(VM_SuspectExecutablePath)|Analysis of host data detected an executable file on %{Compromised Host} that is running from a location in common with known suspicious files. This executable could either be legitimate activity, or an indication of a compromised host.| Execution |High|
|**Exploitation of Xorg vulnerability [seen multiple times]**|Analysis of host data on %{Compromised Host} detected the use of Xorg with suspicious arguments. Attackers may use this technique in privilege escalation attempts. This behavior was seen [x] times today on the following machines: [Machine names]|-|Medium|
-|**Exposed Docker daemon on TCP socket**<br>(VM_ExposedDocker)|Machine logs indicate that your Docker daemon (dockerd) exposes a TCP socket. By default, Docker configuration, doesn't use encryption or authentication when a TCP socket is enabled. This enables full access to the Docker daemon, by anyone with access to the relevant port.|Execution, Exploitation|Medium|
|**Failed SSH brute force attack**<br>(VM_SshBruteForceFailed)|Failed brute force attacks were detected from the following attackers: %{Attackers}. Attackers were trying to access the host with the following user names: %{Accounts used on failed sign in to host attempts}.|Probing|Medium| |**Fileless Attack Behavior Detected**<br>(VM_FilelessAttackBehavior.Linux)| The memory of the process specified below contains behaviors commonly used by fileless attacks.<br>Specific behaviors include: {list of observed behaviors} | Execution | Low | |**Fileless Attack Technique Detected**<br>(VM_FilelessAttackTechnique.Linux)| The memory of the process specified below contains evidence of a fileless attack technique. Fileless attacks are used by attackers to execute code while evading detection by security software.<br>Specific behaviors include: {list of observed behaviors} | Execution | High | |**Fileless Attack Toolkit Detected**<br>(VM_FilelessAttackToolkit.Linux)| The memory of the process specified below contains a fileless attack toolkit: {ToolKitName}. Fileless attack toolkits typically don't have a presence on the filesystem, making detection by traditional anti-virus software difficult.<br>Specific behaviors include: {list of observed behaviors} | Defense Evasion, Execution | High | |**Hidden file execution detected**|Analysis of host data indicates that a hidden file was executed by %{user name}. This activity could either be legitimate activity, or an indication of a compromised host.|-|Informational|
-|**Indicators associated with DDOS toolkit detected [seen multiple times]**|Analysis of host data on %{Compromised Host} detected file names that are part of a toolkit associated with malware capable of launching DDoS attacks, opening ports and services and taking full control over the infected system. This could also possibly be legitimate activity. This behavior was seen [x] times today on the following machines: [Machine names]|-|Medium|
-|**Indicators associated with DDOS toolkit detected**<br>(VM_KnownLinuxDDoSToolkit)|Analysis of host data on %{Compromised Host} detected file names that are part of a toolkit associated with malware capable of launching DDoS attacks, opening ports and services and taking full control over the infected system. This could also possibly be legitimate activity.|Persistence, Lateral Movement, Execution, Exploitation|Medium|
-|**Local host reconnaissance detected [seen multiple times]**|Analysis of host data on %{Compromised Host} detected the execution of a command normally associated with common Linux bot reconnaissance. This behavior was seen [x] times today on the following machines: [Machine names]|-|Medium|
-|**Local host reconnaissance detected**<br>(VM_LinuxReconnaissance)|Analysis of host data on %{Compromised Host} detected the execution of a command normally associated with common Linux bot reconnaissance.|Discovery|Medium|
-|**Manipulation of host firewall detected [seen multiple times]**<br>(VM_FirewallDisabled)|Analysis of host data on %{Compromised Host} detected possible manipulation of the on-host firewall. Attackers will often disable this to exfiltrate data. This behavior was seen [x] times today on the following machines: [Machine names]|Defense Evasion, Exfiltration|Medium|
-|**Manipulation of host firewall detected**|Analysis of host data on %{Compromised Host} detected possible manipulation of the on-host firewall. Attackers will often disable this to exfiltrate data.|-|Medium|
-|**MITRE Caldera agent detected**<br>(VM_MitreCalderaTools)|Machine logs indicate that the suspicious process: '%{Suspicious Process}' was running on %{Compromised Host}. This is often associated with the MITRE 54ndc47 agent, which could be used maliciously to attack other machines in some way.|All |Medium|
|**New SSH key added [seen multiple times]**<br>(VM_SshKeyAddition)|A new SSH key was added to the authorized keys file. This behavior was seen [x] times today on the following machines: [Machine names]|Persistence|Low| |**New SSH key added**|A new SSH key was added to the authorized keys file|-|Low|
-|**Possible attack tool detected [seen multiple times]**|Machine logs indicate that the suspicious process: '%{Suspicious Process}' was running on %{Compromised Host}. This tool is often associated with malicious users attacking other machines in some way. This behavior was seen [x] times today on the following machines: [Machine names]|-|Medium|
-|**Possible attack tool detected**<br>(VM_KnownLinuxAttackTool)|Machine logs indicate that the suspicious process: '%{Suspicious Process}' was running on %{Compromised Host}. This tool is often associated with malicious users attacking other machines in some way.| Execution, Collection, Command and Control, Probing |Medium|
|**Possible backdoor detected [seen multiple times]**|Analysis of host data has detected a suspicious file being downloaded then run on %{Compromised Host} in your subscription. This activity has previously been associated with installation of a backdoor. This behavior was seen [x] times today on the following machines: [Machine names]|-|Medium|
-|**Possible credential access tool detected [seen multiple times]**|Machine logs indicate a possible known credential access tool was running on %{Compromised Host} launched by process: '%{Suspicious Process}'. This tool is often associated with attacker attempts to access credentials. This behavior was seen [x] times today on the following machines: [Machine names]|-|Medium|
-|**Possible credential access tool detected**<br>(VM_KnownLinuxCredentialAccessTool)|Machine logs indicate a possible known credential access tool was running on %{Compromised Host} launched by process: '%{Suspicious Process}'. This tool is often associated with attacker attempts to access credentials.|Credential Access|Medium|
-|**Possible data exfiltration [seen multiple times]**|Analysis of host data on %{Compromised Host} detected a possible data egress condition. Attackers will often egress data from machines they've compromised. This behavior was seen [x]] times today on the following machines: [Machine names]|-|Medium|
-|**Possible data exfiltration**<br>(VM_DataEgressArtifacts)|Analysis of host data on %{Compromised Host} detected a possible data egress condition. Attackers will often egress data from machines they've compromised.|Collection, Exfiltration|Medium|
-|**Possible exploitation of Hadoop Yarn**<br>(VM_HadoopYarnExploit)|Analysis of host data on %{Compromised Host} detected the possible exploitation of the Hadoop Yarn service.|Exploitation|Medium|
|**Possible exploitation of the mailserver detected**<br>(VM_MailserverExploitation)|Analysis of host data on %{Compromised Host} detected an unusual execution under the mail server account.|Exploitation|Medium|
-|**Possible Log Tampering Activity Detected [seen multiple times]**|Analysis of host data on %{Compromised Host} detected possible removal of files that tracks user's activity during the course of its operation. Attackers often try to evade detection and leave no trace of malicious activities by deleting such log files. This behavior was seen [x] times today on the following machines: [Machine names]|-|Medium|
-|**Possible Log Tampering Activity Detected**<br>(VM_SystemLogRemoval)|Analysis of host data on %{Compromised Host} detected possible removal of files that tracks user's activity during the course of its operation. Attackers often try to evade detection and leave no trace of malicious activities by deleting such log files.|Defense Evasion|Medium|
-|**Possible malicious web shell detected [seen multiple times]**<br>(VM_Webshell)|Analysis of host data on %{Compromised Host} detected a possible web shell. Attackers will often upload a web shell to a machine they've compromised to gain persistence or for further exploitation. This behavior was seen [x] times today on the following machines: [Machine names]|Persistence, Exploitation|Medium|
|**Possible malicious web shell detected**|Analysis of host data on %{Compromised Host} detected a possible web shell. Attackers will often upload a web shell to a machine they've compromised to gain persistence or for further exploitation.|-|Medium| |**Possible password change using crypt-method detected [seen multiple times]**|Analysis of host data on %{Compromised Host} detected password change using crypt method. Attackers can make this change to maintain access and gain persistence after compromise. This behavior was seen [x] times today on the following machines: [Machine names]|-|Medium|
-|**Potential overriding of common files [seen multiple times]**|Analysis of host data has detected common executables being overwritten on %{Compromised Host}. Attackers will overwrite common files as a way to obfuscate their actions or for persistence. This behavior was seen [x] times today on the following machines: [Machine names]|-|Medium|
-|**Potential overriding of common files**<br>(VM_OverridingCommonFiles)|Analysis of host data has detected common executables being overwritten on %{Compromised Host}. Attackers will overwrite common files as a way to obfuscate their actions or for persistence.|Persistence|Medium|
-|**Potential port forwarding to external IP address [seen multiple times]**|Analysis of host data on %{Compromised Host} detected the initiation of port forwarding to an external IP address. This behavior was seen [x] times today on the following machines: [Machine names]|-|Medium|
-|**Potential port forwarding to external IP address**<br>(VM_SuspectPortForwarding)|Host data analysis detected the initiation of port forwarding to an external IP address.|Exfiltration, Command and Control|Medium|
-|**Potential reverse shell detected [seen multiple times]**|Analysis of host data on %{Compromised Host} detected a potential reverse shell. These are used to get a compromised machine to call back into a machine an attacker owns. This behavior was seen [x] times today on the following machines: [Machine names]|-|Medium|
-|**Potential reverse shell detected**<br>(VM_ReverseShell)|Analysis of host data on %{Compromised Host} detected a potential reverse shell. These are used to get a compromised machine to call back into a machine an attacker owns.|Exfiltration, Exploitation|Medium|
-|**Privileged command run in container**<br>(VM_PrivilegedExecutionInContainer) | Machine logs indicate that a privileged command was run in a Docker container. A privileged command has extended privileges on the host machine. | Privilege Escalation | Low |
-|**Privileged Container Detected**<br>(VM_PrivilegedContainerArtifacts) | Machine logs indicate that a privileged Docker container is running. A privileged container has a full access to the host's resources. If compromised, an attacker can use the privileged container to gain access to the host machine. | Privilege Escalation, Execution | Low |
|**Process associated with digital currency mining detected [seen multiple times]**|Analysis of host data on %{Compromised Host} detected the execution of a process normally associated with digital currency mining. This behavior was seen over 100 times today on the following machines: [Machine name]|-|Medium| |**Process associated with digital currency mining detected**|Host data analysis detected the execution of a process that is normally associated with digital currency mining.|Exploitation, Execution|Medium|
-|**Process seen accessing the SSH authorized keys file in an unusual way**<br>(VM_SshKeyAccess)|An SSH authorized keys file has been accessed in a method similar to known malware campaigns. This access can indicate that an attacker is attempting to gain persistent access to a machine.|-|Low|
|**Python encoded downloader detected [seen multiple times]**|Analysis of host data on %{Compromised Host} detected the execution of encoded Python that downloads and runs code from a remote location. This may be an indication of malicious activity. This behavior was seen [x] times today on the following machines: [Machine names]|-|Low| |**Screenshot taken on host [seen multiple times]**|Analysis of host data on %{Compromised Host} detected the use of a screen capture tool. Attackers may use these tools to access private data. This behavior was seen [x] times today on the following machines: [Machine names]|-|Low|
-|**Script extension mismatch detected [seen multiple times]**|Analysis of host data on %{Compromised Host} detected a mismatch between the script interpreter and the extension of the script file provided as input. This has frequently been associated with attacker script executions. This behavior was seen [x] times today on the following machines: [Machine names]|-|Medium|
-|**Script extension mismatch detected**<br>(VM_MismatchedScriptFeatures)|Analysis of host data on %{Compromised Host} detected a mismatch between the script interpreter and the extension of the script file provided as input. This has frequently been associated with attacker script executions.|Defense Evasion|Medium|
|**Shellcode detected [seen multiple times]**|Analysis of host data on %{Compromised Host} detected shellcode being generated from the command line. This process could be legitimate activity, or an indication that one of your machines has been compromised. This behavior was seen [x] times today on the following machines: [Machine names]|-|Medium| |**Successful SSH brute force attack**<br>(VM_SshBruteForceSuccess)|Analysis of host data has detected a successful brute force attack. The IP %{Attacker source IP} was seen making multiple login attempts. Successful logins were made from that IP with the following user(s): %{Accounts used to successfully sign in to host}. This means that the host may be compromised and controlled by a malicious actor.|Exploitation|High|
-|**Suspect Password File Access** <br> (VM_SuspectPasswordFileAccess) | Analysis of host data has detected suspicious access to encrypted user passwords. | Persistence | Informational |
|**Suspicious Account Creation Detected**|Analysis of host data on %{Compromised Host} detected creation or use of a local account %{Suspicious account name} : this account name closely resembles a standard Windows account or group name '%{Similar To Account Name}'. This is potentially a rogue account created by an attacker, so named in order to avoid being noticed by a human administrator.|-|Medium|
-|**Suspicious compilation detected [seen multiple times]**|Analysis of host data on %{Compromised Host} detected suspicious compilation. Attackers will often compile exploits on a machine they've compromised to escalate privileges. This behavior was seen [x] times today on the following machines: [Machine names]|-|Medium|
-|**Suspicious compilation detected**<br>(VM_SuspectCompilation)|Analysis of host data on %{Compromised Host} detected suspicious compilation. Attackers will often compile exploits on a machine they've compromised to escalate privileges.|Privilege Escalation, Exploitation|Medium|
-|**Suspicious DNS Over Https** <br> (VM_SuspiciousDNSOverHttps) | Analysis of host data indicates the use of a DNS call over HTTPS in an uncommon fashion. This technique is used by attackers to hide calls out to suspect or malicious sites. | DefenseEvasion, Exfiltration | Medium |
|**Suspicious failed execution of custom script extension in your virtual machine**<br>(VM_CustomScriptExtensionSuspiciousFailure) | Suspicious failure of a custom script extension was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription.<br>Such failures may be associated with malicious scripts run by this extension. | Execution | Medium | |**Suspicious kernel module detected [seen multiple times]**|Analysis of host data on %{Compromised Host} detected a shared object file being loaded as a kernel module. This could be legitimate activity, or an indication that one of your machines has been compromised. This behavior was seen [x] times today on the following machines: [Machine names]|-|Medium| |**Suspicious password access [seen multiple times]**|Analysis of host data has detected suspicious access to encrypted user passwords on %{Compromised Host}. This behavior was seen [x] times today on the following machines: [Machine names]|-|Informational| |**Suspicious password access**|Analysis of host data has detected suspicious access to encrypted user passwords on %{Compromised Host}.|-|Informational|
-|**Suspicious PHP execution detected**<br>(VM_SuspectPhp)|Machine logs indicate that a suspicious PHP process is running. The action included an attempt to run OS commands or PHP code from the command line using the PHP process. While this behavior can be legitimate, in web applications this behavior is also observed in malicious activities such as attempts to infect websites with web shells.|Execution|Medium|
|**Suspicious request to the Kubernetes Dashboard**<br>(VM_KubernetesDashboard) | Machine logs indicate that a suspicious request was made to the Kubernetes Dashboard. The request was sent from a Kubernetes node, possibly from one of the containers running in the node. Although this behavior can be intentional, it might indicate that the node is running a compromised container. |LateralMovement| Medium |
-|**Threat Intel Command Line Suspect Domain** <br> (VM_ThreatIntelCommandLineSuspectDomain) | The process 'PROCESSNAME' on 'HOST' connected to a location that has been reported to be malicious or unusual. This is an indicator that a compromise may have occurred.| Initial Access | Medium |
|**Unusual config reset in your virtual machine**<br>(VM_VMAccessUnusualConfigReset) | An unusual config reset was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription.<br>While this action may be legitimate, attackers can try utilizing VM Access extension to reset the configuration in your virtual machine and compromise it. | Credential Access | Medium | |**Unusual deletion of custom script extension in your virtual machine**<br>(VM_CustomScriptExtensionUnusualDeletion) | Unusual deletion of a custom script extension was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription.<br>Attackers may use custom script extensions to execute malicious code on your virtual machines via the Azure Resource Manager. | Execution | Medium | |**Unusual execution of custom script extension in your virtual machine**<br>(VM_CustomScriptExtensionUnusualExecution) | Unusual execution of a custom script extension was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription.<br>Attackers may use custom script extensions to execute malicious code on your virtual machines via the Azure Resource Manager. | Execution | Medium |
Microsoft Defender for Containers provides security alerts on the cluster level
<sup><a name="footnote1"></a>1</sup>: **Preview for non-AKS clusters**: This alert is generally available for AKS clusters, but it is in preview for other environments, such as Azure Arc, EKS and GKE.
-<sup><a name="footnote2"></a>2</sup>: **Limitations on GKE clusters**: GKE uses a Kuberenetes audit policy that doesn't support all alert types. As a result, this security alert, which is based on Kubernetes audit events, is not supported for GKE clusters.
+<sup><a name="footnote2"></a>2</sup>: **Limitations on GKE clusters**: GKE uses a Kubernetes audit policy that doesn't support all alert types. As a result, this security alert, which is based on Kubernetes audit events, is not supported for GKE clusters.
<sup><a name="footnote3"></a>3</sup>: This alert is supported on Windows nodes/containers.
Defender for Cloud's supported kill chain intents are based on [version 9 of the
| **Collection** | V7, V9 | Collection consists of techniques used to identify and gather information, such as sensitive files, from a target network prior to exfiltration. This category also covers locations on a system or network where the adversary may look for information to exfiltrate. | | **Command and Control** | V7, V9 | The command and control tactic represents how adversaries communicate with systems under their control within a target network. | | **Exfiltration** | V7, V9 | Exfiltration refers to techniques and attributes that result or aid in the adversary removing files and information from a target network. This category also covers locations on a system or network where the adversary may look for information to exfiltrate. |
-| **Impact** | V7, V9 | Impact events primarily try to directly reduce the availability or integrity of a system, service, or network; including manipulation of data to impact a business or operational process. This would often refer to techniques such as ransomware, defacement, data manipulation, and others. |
--
+| **Impact** | V7, V9 | Impact events primarily try to directly reduce the availability or integrity of a system, service, or network; including manipulation of data to impact a business or operational process. This would often refer to techniques such as ransomware, defacement, data manipulation, and others.
+
> [!NOTE] > For alerts that are in preview: [!INCLUDE [Legalese](../../includes/defender-for-cloud-preview-legal-text.md)]
+## Defender for Servers alerts to be deprecated
+
+The following tables include the Defender for Servers security alerts [to be deprecated in April 2023](upcoming-changes.md#deprecation-and-improvement-of-selected-alerts-for-windows-and-linux-servers).
+
+### Linux alerts to be deprecated
+
+| **Alert Type** | **Alert Display Name** | **Severity** |
+|--|--|--|
+VM_AbnormalDaemonTermination | Abnormal Termination | Low
+VM_BinaryGeneratedFromCommandLine | Suspicious binary detected | Medium
+VM_CommandlineSuspectDomain | Suspicious domain name reference | Low
+VM_CommonBot | Behavior similar to common Linux bots detected | Medium
+VM_CompCommonBots | Commands similar to common Linux bots detected |Medium
+VM_CompSuspiciousScript | Shell Script Detected | Medium
+VM_CompTestRule | Composite Analytic Test Alert | Low
+VM_CronJobAccess | Manipulation of scheduled tasks detected | Informational
+VM_CryptoCoinMinerArtifacts | Process associated with digital currency mining detected | Medium
+VM_CryptoCoinMinerDownload | Possible Cryptocoinminer download detected | Medium
+VM_CryptoCoinMinerExecution | Potential crypto coin miner started | Medium
+VM_DataEgressArtifacts | Possible data exfiltration detected | Medium
+VM_DigitalCurrencyMining | Digital currency mining related behavior detected | High
+VM_DownloadAndRunCombo | Suspicious Download Then Run Activity | Medium
+VM_EICAR | Microsoft Defender for Cloud test alert (not a threat) | High
+VM_ExecuteHiddenFile | Execution of hidden file | Informational
+VM_ExploitAttempt | Possible command line exploitation attempt | Medium
+VM_ExposedDocker | Exposed Docker daemon on TCP socket | Medium
+VM_FairwareMalware | Behavior similar to Fairware ransomware detected | Medium
+VM_FirewallDisabled | Manipulation of host firewall detected | Medium
+VM_HadoopYarnExploit | Possible exploitation of Hadoop Yarn | Medium
+VM_HistoryFileCleared | A history file has been cleared | Medium
+VM_KnownLinuxAttackTool | Possible attack tool detected | Medium
+VM_KnownLinuxCredentialAccessTool | Possible credential access tool detected | Medium
+VM_KnownLinuxDDoSToolkit | Indicators associated with DDOS toolkit detected | Medium
+VM_KnownLinuxScreenshotTool | Screenshot taken on host | Low
+VM_LinuxBackdoorArtifact | Possible backdoor detected | Medium
+VM_LinuxReconnaissance | Local host reconnaissance detected | Medium
+VM_MismatchedScriptFeatures | Script extension mismatch detected | Medium
+VM_MitreCalderaTools | MITRE Caldera agent detected | Medium
+VM_NewSingleUserModeStartupScript | Detected Persistence Attempt | Medium
+VM_NewSudoerAccount | Account added to sudo group | Low
+VM_OverridingCommonFiles | Potential overriding of common files | Medium
+VM_PrivilegedContainerArtifacts | Container running in privileged mode | Low
+VM_PrivilegedExecutionInContainer | Command within a container running with high privileges | Low
+VM_ReadingHistoryFile | Unusual access to bash history file | Informational
+VM_ReverseShell | Potential reverse shell detected | Medium
+VM_SshKeyAccess | Process seen accessing the SSH authorized keys file in an unusual way | Low
+VM_SshKeyAddition | New SSH key added | Low
+VM_SuspectCompilation | Suspicious compilation detected | Medium
+VM_SuspectConnection | An uncommon connection attempt detected | Medium
+VM_SuspectDownload | Detected file download from a known malicious source | Medium
+VM_SuspectDownloadArtifacts | Detected suspicious file download | Low
+VM_SuspectExecutablePath | Executable found running from a suspicious location | Medium
+VM_SuspectHtaccessFileAccess | Access of htaccess file detected | Medium
+VM_SuspectInitialShellCommand | Suspicious first command in shell | Low
+VM_SuspectMixedCaseText | Detected anomalous mix of uppercase and lowercase characters in command line | Medium
+VM_SuspectNetworkConnection | Suspicious network connection | Informational
+VM_SuspectNohup | Detected suspicious use of the nohup command | Medium
+VM_SuspectPasswordChange | Possible password change using crypt-method detected | Medium
+VM_SuspectPasswordFileAccess | Suspicious password access | Informational
+VM_SuspectPhp | Suspicious PHP execution detected| Medium
+VM_SuspectPortForwarding | Potential port forwarding to external IP address| Medium
+VM_SuspectProcessAccountPrivilegeCombo | Process running in a service account became root unexpectedly | Medium
+VM_SuspectProcessTermination | Security-related process termination detected | Low
+VM_SuspectUserAddition | Detected suspicious use of the useradd command| Medium
+VM_SuspiciousCommandLineExecution | Suspicious command execution | High
+VM_SuspiciousDNSOverHttps| Suspicious use of DNS over HTTPS | Medium
+VM_SystemLogRemoval | Possible Log Tampering Activity Detected | Medium
+VM_ThreatIntelCommandLineSuspectDomain | A possible connection to malicious location has been detected | Medium
+VM_ThreatIntelSuspectLogon | A logon from a malicious IP has been detected | High
+VM_TimerServiceDisabled | Attempt to stop apt-daily-upgrade.timer service detected | Informational
+VM_TimestampTampering | Suspicious file timestamp modification | Low
+VM_Webshell | Possible malicious web shell detected | Medium
+
+### Windows alerts to be deprecated
+
+| **Alert Type** | **Alert Display Name** | **Severity** |
+|--|--|--|
+SCUBA_MULTIPLEACCOUNTCREATE | Suspicious creation of accounts on multiple hosts | Medium
+SCUBA_PSINSIGHT_CONTEXT | Suspicious use of PowerShell detected | Informational
+SCUBA_RULE_AddGuestToAdministrators | Addition of Guest account to Local Administrators group | Medium
+SCUBA_RULE_Apache_Tomcat_executing_suspicious_commands | Apache_Tomcat_executing_suspicious_commands | Medium
+SCUBA_RULE_KnownBruteForcingTools | Suspicious process executed | High
+SCUBA_RULE_KnownCollectionTools | Suspicious process executed | High
+SCUBA_RULE_KnownDefenseEvasionTools | Suspicious process executed | High
+SCUBA_RULE_KnownExecutionTools | Suspicious process executed | High
+SCUBA_RULE_KnownPassTheHashTools | Suspicious process executed | High
+SCUBA_RULE_KnownSpammingTools | Suspicious process executed | Medium
+SCUBA_RULE_Lowering_Security_Settings | Detected the disabling of critical services | Medium
+SCUBA_RULE_OtherKnownHackerTools | Suspicious process executed | High
+SCUBA_RULE_RDP_session_hijacking_via_tscon | Suspect integrity level indicative of RDP hijacking | Medium
+SCUBA_RULE_RDP_session_hijacking_via_tscon_service | Suspect service installation | Medium
+SCUBA_RULE_Suppress_pesky_unauthorized_use_prohibited_notices | Detected suppression of legal notice displayed to users at logon | Low
+SCUBA_RULE_WDigest_Enabling | Detected enabling of the WDigest UseLogonCredential registry key | Medium
+VM.Windows_ApplockerBypass | Potential attempt to bypass AppLocker detected | High
+VM.Windows_BariumKnownSuspiciousProcessExecution | Detected suspicious file creation | High
+VM.Windows_Base64EncodedExecutableInCommandLineParams | Detected encoded executable in command line data | High
+VM.Windows_CalcsCommandLineUse | Detected suspicious use of Cacls to lower the security state of the system | Medium
+VM.Windows_CommandLineStartingAllExe | Detected suspicious command line used to start all executables in a directory | Medium
+VM.Windows_DisablingAndDeletingIISLogFiles | Detected actions indicative of disabling and deleting IIS log files | Medium
+VM.Windows_DownloadUsingCertutil | Suspicious download using Certutil detected | Medium
+VM.Windows_EchoOverPipeOnLocalhost | Detected suspicious named pipe communications | High
+VM.Windows_EchoToConstructPowerShellScript | Dynamic PowerShell script construction | Medium
+VM.Windows_ExecutableDecodedUsingCertutil | Detected decoding of an executable using built-in certutil.exe tool | Medium
+VM.Windows_FileDeletionIsSospisiousLocation | Suspicious file deletion detected | Medium
+VM.Windows_KerberosGoldenTicketAttack | Suspected Kerberos Golden Ticket attack parameters observed | Medium
+VM.Windows_KeygenToolKnownProcessName | Detected possible execution of keygen executable <br/> Suspicious process executed | Medium
+VM.Windows_KnownCredentialAccessTools | Suspicious process executed | High
+VM.Windows_KnownSuspiciousPowerShellScript | Suspicious use of PowerShell detected | High
+VM.Windows_KnownSuspiciousSoftwareInstallation | High risk software detected | Medium
+VM.Windows_MsHtaAndPowerShellCombination | Detected suspicious combination of HTA and PowerShell | Medium
+VM.Windows_MultipleAccountsQuery | Multiple Domain Accounts Queried | Medium
+VM.Windows_NewAccountCreation | Account creation detected | Informational
+VM.Windows_ObfuscatedCommandLine | Detected obfuscated command line | High
+VM.Windows_PcaluaUseToLaunchExecutable | Detected suspicious use of Pcalua.exe to launch executable code | Medium
+VM.Windows_PetyaRansomware | Detected Petya ransomware indicators | High
+VM.Windows_PowerShellPowerSploitScriptExecution | Suspicious PowerShell cmdlets executed | Medium
+VM.Windows_RansomwareIndication | Ransomware indicators detected | High
+VM.Windows_SqlDumperUsedSuspiciously | Possible credential dumping detected [seen multiple times] | Medium
+VM.Windows_StopCriticalServices | Detected the disabling of critical services | Medium
+VM.Windows_SubvertingAccessibilityBinary | Sticky keys attack detected <br/> Suspicious account creation detected | Medium
+VM.Windows_SuspiciousAccountCreation | Suspicious Account Creation Detected | Medium
+VM.Windows_SuspiciousFirewallRuleAdded | Detected suspicious new firewall rule | Medium
+VM.Windows_SuspiciousFTPSSwitchUsage | Detected suspicious use of FTP -s switch | Medium
+VM.Windows_SuspiciousSQLActivity | Suspicious SQL activity | Medium
+VM.Windows_SVCHostFromInvalidPath | Suspicious process executed | High
+VM.Windows_SystemEventLogCleared | The Windows Security log was cleared | Informational
+VM.Windows_TelegramInstallation | Detected potentially suspicious use of Telegram tool | Medium
+VM.Windows_UndercoverProcess | Suspiciously named process detected | High
+VM.Windows_UserAccountControlBypass | Detected change to a registry key that can be abused to bypass UAC | Medium
+VM.Windows_VBScriptEncoding | Detected suspicious execution of VBScript.Encode command | Medium
+VM.Windows_WindowPositionRegisteryChange | Suspicious WindowPosition registry value detected | Low
+VM.Windows_ZincPortOpenningUsingFirewallRule | Malicious firewall rule created by ZINC server implant | High
+VM_DigitalCurrencyMining | Digital currency mining related behavior detected | High
+VM_MaliciousSQLActivity | Malicious SQL activity | High
+VM_ProcessWithDoubleExtensionExecution | Suspicious double extension file executed | High
+VM_RegistryPersistencyKey | Windows registry persistence method detected | Low
+VM_ShadowCopyDeletion | Suspicious Volume Shadow Copy Activity <br/> Executable found running from a suspicious location | High
+VM_SuspectExecutablePath | Executable found running from a suspicious location <br/> Detected anomalous mix of uppercase and lowercase characters in command line | Informational <br/> Medium |
+VM_SuspectPhp | Suspicious PHP execution detected | Medium
+VM_SuspiciousCommandLineExecution | Suspicious command execution | High
+VM_SuspiciousScreenSaverExecution | Suspicious Screensaver process executed | Medium
+VM_SvcHostRunInRareServiceGroup | Rare SVCHOST service group executed | Informational
+VM_SystemProcessInAbnormalContext | Suspicious system process executed | Medium
+VM_ThreatIntelCommandLineSuspectDomain | A possible connection to malicious location has been detected | Medium
+VM_ThreatIntelSuspectLogon | A logon from a malicious IP has been detected | High
+VM_VbScriptHttpObjectAllocation| VBScript HTTP object allocation detected | High
++ ## Next steps To learn more about Microsoft Defender for Cloud security alerts, see the following:
defender-for-cloud Defender For Containers Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-architecture.md
The **Azure Policy add-on for Kubernetes** collects cluster and workload configu
| Pod Name | Namespace | Kind | Short Description | Capabilities | Resource limits | Egress Required | |--|--|--|--|--|--|--|
-| microsoft-defender-collector-ds-* | kube-system | [DaemonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/) | A set of containers that focus on collecting inventory and security events from the Kubernetes environment. | SYS_ADMIN, <br>SYS_RESOURCE, <br>SYS_PTRACE | memory: 64Mi<br> <br> cpu: 60m | No |
+| microsoft-defender-collector-ds-* | kube-system | [DaemonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/) | A set of containers that focus on collecting inventory and security events from the Kubernetes environment. | SYS_ADMIN, <br>SYS_RESOURCE, <br>SYS_PTRACE | memory: 296Mi<br> <br> cpu: 360m | No |
| microsoft-defender-collector-misc-* | kube-system | [Deployment](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/) | A set of containers that focus on collecting inventory and security events from the Kubernetes environment that aren't bounded to a specific node. | N/A | memory: 64Mi <br> <br>cpu: 60m | No | | microsoft-defender-publisher-ds-* | kube-system | [DaemonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/) | Publish the collected data to Microsoft Defender for Containers backend service where the data will be processed for and analyzed. | N/A | memory: 200Mi  <br> <br> cpu: 60m | Https 443 <br> <br> Learn more about the [outbound access prerequisites](../aks/limit-egress-traffic.md#microsoft-defender-for-containers) |
-\* resource limits aren't configurable
+\* Resource limits aren't configurable. Learn more about [Kubernetes resource limits](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-units-in-kubernetes)
## [**On-premises / IaaS (Arc)**](#tab/defender-for-container-arch-arc)
defender-for-cloud Upcoming Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/upcoming-changes.md
Title: Important changes coming to Microsoft Defender for Cloud description: Upcoming changes to Microsoft Defender for Cloud that you might need to be aware of and for which you might need to plan Previously updated : 12/28/2022 Last updated : 01/16/2023+ # Important upcoming changes to Microsoft Defender for Cloud
If you're looking for the latest release notes, you'll find them in the [What's
| [Recommendation to enable diagnostic logs for Virtual Machine Scale Sets to be deprecated](#recommendation-to-enable-diagnostic-logs-for-virtual-machine-scale-sets-to-be-deprecated) | January 2023 | | [The policy Vulnerability Assessment settings for SQL server should contain an email address to receive scan reports is set to be deprecated](#the-policy-vulnerability-assessment-settings-for-sql-server-should-contain-an-email-address-to-receive-scan-reports-is-set-to-be-deprecated) | January 2023 | | [The name of the Secure score control Protect your applications with Azure advanced networking solutions will be changed](#the-name-of-the-secure-score-control-protect-your-applications-with-azure-advanced-networking-solutions-will-be-changed) | January 2023 |
+| [Deprecation and improvement of selected alerts for Windows and Linux Servers](#deprecation-and-improvement-of-selected-alerts-for-windows-and-linux-servers) | April 2023 |
### Recommendation to enable diagnostic logs for Virtual Machine Scale Sets to be deprecated
The related [policy definition](https://portal.azure.com/#view/Microsoft_Azure_P
| Recommendation | Description | Severity | |--|--|--|
-| Diagnostic logs in Virtual Machine Scale Sets should be enabled | Enable logs and retain them for up to a year. This enables you to recreate activity trails for investigation purposes when a security incident occurs or your network is compromised. | Low |
+| Diagnostic logs in Virtual Machine Scale Sets should be enabled | Enable logs and retain them for up to a year, enabling you to recreate activity trails for investigation purposes when a security incident occurs or your network is compromised. | Low |
### The policy Vulnerability Assessment settings for SQL server should contain an email address to receive scan reports is set to be deprecated
The secure score control `Protect your applications with Azure advanced networki
The updated name will be reflected on Azure Resource Graph (ARG), Secure Score Controls API and the `Download CSV report`.
+### Deprecation and improvement of selected alerts for Windows and Linux Servers
+
+**Estimated date for change: April 2023**
+
+The security alert quality improvement process for Defender for Servers includes the deprecation of some alerts for both Windows and Linux servers. The deprecated alerts will instead be covered by Defender for Endpoint threat alerts.
+
+If you already have the Defender for Endpoint integration enabled, no further action is required. You may experience a decrease in your alert volume in April 2023.
+
+If you don't have the Defender for Endpoint integration enabled in Defender for Servers, you'll need to enable the Defender for Endpoint integration to maintain and improve your alert coverage.
+
+All Defender for Servers customers have full access to the Defender for Endpoint integration as part of the [Defender for Servers plan](plan-defender-for-servers-select-plan.md#plan-features).
+
+You can learn more about [Microsoft Defender for Endpoint onboarding options](integration-defender-for-endpoint.md#enable-the-microsoft-defender-for-endpoint-integration).
+
+You can also view the [full list of alerts](alerts-reference.md#defender-for-servers-alerts-to-be-deprecated) that are set to be deprecated.
++ ## Next steps For all recent changes to Defender for Cloud, see [What's new in Microsoft Defender for Cloud?](release-notes.md).
defender-for-iot Dell Edge 5200 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/dell-edge-5200.md
This article describes the Dell Edge 5200 appliance for OT sensors.
| Appliance characteristic |Details | ||| |**Hardware profile** | E500|
-|**Performance** | Max bandwidth: 1 Gbp/s<br>Max devices: 10,000 |
+|**Performance** | Max bandwidth: 1 Gbps<br>Max devices: 10,000 |
|**Physical specifications** | Mounting: Wall Mount<br>Ports: 3x RJ45 |
-|**Status** | Supported, Not available preconfigured|
+|**Status** | Supported, Not available pre-configured|
## Specifications
This article describes the Dell Edge 5200 appliance for OT sensors.
|Quantity|PN|Description| |:-|:-|:-|
-|1|210-BCNV|Dell EMC Edge Gateway 5200,Core i7-9700TE.32G.512G, Win 10 IoT.TPM,OEM|
+|1|210-BCNV|Dell EMC Edge Gateway 5200, Core i7-9700TE.32G.512G, Win 10 IoT.TPM, OEM|
|1|631-ADIJ|User Documentation EMEA 2| |1|683-1187|No Installation Service Selected (Contact Sales Rep for more details)| |1|709-BDGW|Parts Only Warranty 15 Months|
defender-for-iot Dell Poweredge R340 Xl Legacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/dell-poweredge-r340-xl-legacy.md
This article describes the Dell PowerEdge R340 XL appliance, supported for OT sensors and on-premises management consoles. > [!NOTE]
-> Legacy appliances are certified but aren't currently offered as preconfigured appliances.
+> Legacy appliances are certified but aren't currently offered as pre-configured appliances.
|Appliance characteristic | Description| ||| |**Hardware profile** | E1800|
-|**Performance** | Max bandwidth: 1 Gbp/s<br>Max devices: 10,000 |
+|**Performance** | Max bandwidth: 1 Gbps<br>Max devices: 10,000 |
|**Physical Specifications** | Mounting: 1U<br>Ports: 8x RJ45 or 6x SFP (OPT)| |**Status** | Supported, not available as a preconfigured appliance|
defender-for-iot Hpe Edgeline El300 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/hpe-edgeline-el300.md
This article describes the HPE Edgeline EL300 appliance for OT sensors or on-premises management consoles. > [!NOTE]
-> Legacy appliances are certified but aren't currently offered as preconfigured appliances.
+> Legacy appliances are certified but aren't currently offered as pre-configured appliances.
| Appliance characteristic |Details | ||| |**Hardware profile** | L500 |
-|**Performance** |Max bandwidth: 100 Mbp/s<br>Max devices: 800 |
+|**Performance** |Max bandwidth: 100 Mbps<br>Max devices: 800 |
|**Physical specifications** | Mounting: Mounting kit, Din Rail<br>Ports: 5x RJ45| |**Status** | Supported, Not available pre-configured|
defender-for-iot Hpe Proliant Dl20 Legacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/hpe-proliant-dl20-legacy.md
This article describes the **HPE ProLiant DL20 Gen10** appliance for OT sensors in an enterprise deployment. > [!NOTE]
-> Legacy appliances are certified but aren't currently offered as preconfigured appliances.
+> Legacy appliances are certified but aren't currently offered as pre-configured appliances.
| Appliance characteristic |Details | ||| |**Hardware profile** | E1800 |
-|**Performance** | Max bandwidth: 1 Gbp/s <br>Max devices: 10,000 |
+|**Performance** | Max bandwidth: 1 Gbps <br>Max devices: 10,000 |
|**Physical specifications** | Mounting: 1U <br> Ports: 8x RJ45 or 6x SFP (OPT)| |**Status** | Supported, not available pre-configured |
defender-for-iot Hpe Proliant Dl20 Plus Enterprise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/hpe-proliant-dl20-plus-enterprise.md
The HPE ProLiant DL20 Gen10 Plus is also available for the on-premises managemen
| Appliance characteristic |Details | ||| |**Hardware profile** | E1800 |
-|**Performance** | Max bandwidth: 1 Gbp/s <br>Max devices: 10,000<br> Up to 8x RJ45 monitoring ports or 6x SFP (OPT) |
-|**Physical specifications** | Mounting: 1U <br> Minimum dimensions ( H x W x D)1.70 x 17.11 x 15.05 in<br>Minimum dimensions ( H x W x D)4.32 x 43.46 x 38.22 cm|
+|**Performance** | Max bandwidth: 1 Gbps <br>Max devices: 10,000<br> Up to 8x RJ45 monitoring ports or 6x SFP (OPT) |
+|**Physical specifications** | Mounting: 1U <br> Minimum dimensions (H x W x D) 1.70 x 17.11 x 15.05 in<br>Minimum dimensions (H x W x D) 4.32 x 43.46 x 38.22 cm|
|**Status** | Supported, available pre-configured | The following image shows a sample of the HPE ProLiant DL20 front panel:
The following image shows a sample of the HPE ProLiant DL20 back panel:
|-||-| |1| P44111-B21 | HPE DL20 Gen10+ 4SFF CTO Server| |1| P45252-B21 | Intel Xeon E-2334 FIO CPU for HPE|
-|4| P28610-B21 | HPE 1TB SATA 7.2K SFF BC HDD|
-|2| P43019-B21 | HPE 16GB 1Rx8 PC4-3200AA-E Standard Kit|
+|4| P28610-B21 | HPE 1 TB SATA 7.2K SFF BC HDD|
+|2| P43019-B21 | HPE 16 GB 1Rx8 PC4-3200AA-E Standard Kit|
|1| 869079-B21 | HPE Smart Array E208i-a SR G10 LH Ctrlr (RAID10)| |1| P21106-B21 | INT I350 1GbE 4p BASE-T Adapter| |1| P45948-B21 | HPE DL20 Gen10+ RPS FIO Enable Kit|
Optional modules for port expansion include:
|Location |Type|Specifications| |--|--|| | PCI Slot 1 (Low profile) | DP F/O NIC |P26262-B21 - Broadcom BCM57414 Ethernet 10/25Gb 2-port SFP28 Adapter for HPE |
-| PCI Slot 1 (Low profile) | DP F/O NIC |P28787-B21 - Intel X710-DA2 Ethernet 10Gb 2-port SFP+ Adapter for HPE |
-| PCI Slot 2 (High profile) | Quad Port Ethernet NIC| P21106-B21 - Intel I350-T4 Ethernet 1Gb 4-port BASE-T Adapter for HPE |
-| PCI Slot 2 (High profile) | DP F/O NIC |P26262-B21 - Broadcom BCM57414 Ethernet 10/25Gb 2-port SFP28 Adapter for HPE |
-| PCI Slot 2 (High profile) | DP F/O NIC |P28787-B21 - Intel X710-DA2 Ethernet 10Gb 2-port SFP+ Adapter for HPE |
+| PCI Slot 1 (Low profile) | DP F/O NIC |P28787-B21 - Intel X710-DA2 Ethernet 10 Gb 2-port SFP+ Adapter for HPE |
+| PCI Slot 2 (High profile) | Quad Port Ethernet NIC| P21106-B21 - Intel I350-T4 Ethernet 1 Gb 4-port BASE-T Adapter for HPE |
+| PCI Slot 2 (High profile) | DP F/O NIC |P26262-B21 - Broadcom BCM57414 Ethernet 10/25 Gb 2-port SFP28 Adapter for HPE |
+| PCI Slot 2 (High profile) | DP F/O NIC |P28787-B21 - Intel X710-DA2 Ethernet 10 Gb 2-port SFP+ Adapter for HPE |
| SFPs for Fiber Optic NICs|MultiMode, Short Range|455883-B21 - HPE BLc 10G SFP+ SR Transceiver| | SFPs for Fiber Optic NICs|SingleMode, Long Range | 455886-B21 - HPE BLc 10G SFP+ LR Transceiver|
Installation includes:
- Installing Defender for IoT software > [!NOTE]
-> Installation procedures are only relevant if you need to re-install software on a preconfigured device, or if you buy your own hardware and configure the appliance yourself.
+> Installation procedures are only relevant if you need to re-install software on a pre-configured device, or if you buy your own hardware and configure the appliance yourself.
> ### Enable remote access and update the password
This procedure describes how to update the HPE BIOS configuration for your OT de
> For **Data-at-Rest** encryption, see HPE guidance for activating RAID SR Secure Encryption or using Self-Encrypting-Drives (SED). > + ### Install Defender for IoT software on the HPE ProLiant DL20 Gen10 Plus This procedure describes how to install Defender for IoT software on the HPE ProLiant DL20 Gen10 Plus.
defender-for-iot Hpe Proliant Dl20 Plus Smb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/hpe-proliant-dl20-plus-smb.md
The HPE ProLiant DL20 Gen10 Plus is also available for the on-premises managemen
| Appliance characteristic |Details | ||| |**Hardware profile** | L500|
-|**Performance** | Max bandwidth: 200Mbp/s <br>Max devices: 1,000 <br>Up to 8x Monitoring ports|
-|**Physical specifications** | Mounting: 1U<br>Minimum dimensions ( H x W x D)1.70 x 17.11 x 15.05 in<br>Minimum dimensions ( H x W x D)4.32 x 43.46 x 38.22 cm|
+|**Performance** | Max bandwidth: 200 Mbps <br>Max devices: 1,000 <br>Up to 8x Monitoring ports|
+|**Physical specifications** | Mounting: 1U<br>Minimum dimensions (H x W x D) 1.70 x 17.11 x 15.05 in<br>Minimum dimensions (H x W x D) 4.32 x 43.46 x 38.22 cm|
|**Status** | Supported; available pre-configured | The following image shows a sample of the HPE ProLiant DL20 Gen10 front panel:
The following image shows a sample of the HPE ProLiant DL20 Gen10 back panel:
|-||-| |1| P44111-B21 | HPE DL20 Gen10+ NHP 2LFF CTO Server| |1| P45252-B21 | Intel Xeon E-2334 FIO CPU for HPE|
-|2| P28610-B21 | HPE 1TB SATA 7.2K SFF BC HDD|
-|1| P43016-B21 | HPE 8GB 1Rx8 PC4-3200AA-E Standard Kit|
+|2| P28610-B21 | HPE 1 TB SATA 7.2K SFF BC HDD|
+|1| P43016-B21 | HPE 8 GB 1Rx8 PC4-3200AA-E Standard Kit|
|1| 869079-B21 | HPE Smart Array E208i-a SR G10 LH Ctrlr (RAID10)| |1| P21106-B21 | INT I350 1GbE 4p BASE-T Adapter| |1| P45948-B21 | HPE DL20 Gen10+ RPS FIO Enable Kit|
Optional modules for port expansion include:
|Location |Type|Specifications| |--|--||
-| PCI Slot 1 (Low profile) | DP F/O NIC |P26262-B21 - Broadcom BCM57414 Ethernet 10/25Gb 2-port SFP28 Adapter for HPE |
-| PCI Slot 1 (Low profile) | DP F/O NIC |P28787-B21 - Intel X710-DA2 Ethernet 10Gb 2-port SFP+ Adapter for HPE |
-| PCI Slot 2 (High profile) | Quad Port Ethernet NIC| P21106-B21 - Intel I350-T4 Ethernet 1Gb 4-port BASE-T Adapter for HPE |
-| PCI Slot 2 (High profile) | DP F/O NIC |P26262-B21 - Broadcom BCM57414 Ethernet 10/25Gb 2-port SFP28 Adapter for HPE |
-| PCI Slot 2 (High profile) | DP F/O NIC |P28787-B21 - Intel X710-DA2 Ethernet 10Gb 2-port SFP+ Adapter for HPE |
+| PCI Slot 1 (Low profile) | DP F/O NIC |P26262-B21 - Broadcom BCM57414 Ethernet 10/25 Gb 2-port SFP28 Adapter for HPE |
+| PCI Slot 1 (Low profile) | DP F/O NIC |P28787-B21 - Intel X710-DA2 Ethernet 10 Gb 2-port SFP+ Adapter for HPE |
+| PCI Slot 2 (High profile) | Quad Port Ethernet NIC| P21106-B21 - Intel I350-T4 Ethernet 1 Gb 4-port BASE-T Adapter for HPE |
+| PCI Slot 2 (High profile) | DP F/O NIC |P26262-B21 - Broadcom BCM57414 Ethernet 10/25 Gb 2-port SFP28 Adapter for HPE |
+| PCI Slot 2 (High profile) | DP F/O NIC |P28787-B21 - Intel X710-DA2 Ethernet 10 Gb 2-port SFP+ Adapter for HPE |
| SFPs for Fiber Optic NICs|MultiMode, Short Range|455883-B21 - HPE BLc 10G SFP+ SR Transceiver| | SFPs for Fiber Optic NICs|SingleMode, Long Range | 455886-B21 - HPE BLc 10G SFP+ LR Transceiver|
-## HPE ProLiant HPE ProLiant DL20 Gen10 Plus installation
+## HPE ProLiant DL20 Gen10 Plus installation
This section describes how to install Defender for IoT software on the HPE ProLiant DL20 Gen10 Plus appliance.
This procedure describes how to update the HPE BIOS configuration for your OT de
:::image type="content" source="../media/tutorial-install-components/boot-override-window-two-v2.png" alt-text="Screenshot that shows the second Boot Override window."::: + ### Install Defender for IoT software on the HPE ProLiant DL20 Gen10 Plus This procedure describes how to install Defender for IoT software on the HPE ProLiant DL20 Gen10 Plus.
defender-for-iot Hpe Proliant Dl20 Smb Legacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/hpe-proliant-dl20-smb-legacy.md
This article describes the **HPE ProLiant DL20 Gen10** appliance for OT sensors for monitoring production lines. > [!NOTE]
-> Legacy appliances are certified but aren't currently offered as preconfigured appliances.
+> Legacy appliances are certified but aren't currently offered as pre-configured appliances.
| Appliance characteristic |Details | ||| |**Hardware profile** | L500|
-|**Performance** | Max bandwidth: 200Mbp/s <br>Max devices: 1,000 |
+|**Performance** | Max bandwidth: 200 Mbps <br>Max devices: 1,000 |
|**Physical specifications** | Mounting: 1U<br>Ports: 4x RJ45| |**Status** | Supported, not available pre-configured |
defender-for-iot Hpe Proliant Dl360 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/hpe-proliant-dl360.md
This article describes the **HPE ProLiant DL360** appliance for OT sensors, cust
| Appliance characteristic |Details | ||| |**Hardware profile** | C5600 |
-|**Performance** | Max bandwidth: 3Gbp/s <br> Max devices: 12,000 |
+|**Performance** | Max bandwidth: 3 Gbps <br> Max devices: 12,000 |
|**Physical specifications** | Mounting: 1U<br>Ports: 15x RJ45 or 8x SFP (OPT)|
-|**Status** | Supported, Available preconfigured|
+|**Status** | Supported, available pre-configured|
The following image describes the hardware elements on the HPE ProLiant DL360 back panel that are used by Defender for IoT:
Optional modules for port expansion include:
| **PCI Slot 1 (Low profile)**| Quad Port Ethernet NIC| 811546-B21 - HPE 1 GbE 4p BASE-T I350 Adapter SI (FW 1.52)| | **PCI Slot 1 (Low profile)** | DP F/O NIC|727054-B21 - HPE 10 GbE 2p FLR-SFP+ X710 Adapter (FW 10.57.3)| |**PCI Slot 2 (High profile)**| Quad Port Ethernet NIC|811546-B21 - HPE 1 GbE 4p BASE-T I350 Adapter SI (FW 1.52)|
-|**PCI Slot 2 (High profile)**| Quad Port Ethernet NIC|647594-B21 - HPE 1 GbE 4p BASE-T BCM5719 Adapter (FW 5719-v1.45 NCSI v1.3.12.0 )|
+|**PCI Slot 2 (High profile)**| Quad Port Ethernet NIC|647594-B21 - HPE 1 GbE 4p BASE-T BCM5719 Adapter (FW 5719-v1.45 NCSI v1.3.12.0)|
| **PCI Slot 2 (High profile)**|DP F/O NIC| 727055-B21 - HPE 10 GbE 2p FLR-SFP+ X710 Adapter (FW 10.57.3)|
-| **PCI Slot 2 (High profile)**|DP F/O NIC| P08421-B21 - HPE Ethernet 10Gb 2-port SFP+ BCM57414 Adapter (FW 214.4.9.6/pkg 214.0.286012)|
+| **PCI Slot 2 (High profile)**|DP F/O NIC| P08421-B21 - HPE Ethernet 10 Gb 2-port SFP+ BCM57414 Adapter (FW 214.4.9.6/pkg 214.0.286012)|
| **PCI Slot 2 (High profile)**|Quad Port F/O NIC| 869585-B21 - HPE 10 GbE 4p SFP+ X710 Adapter SI (FW 10.57.3)| | **SFPs for Fiber Optic NICs**|MultiMode, Short Range| 455883-B21 - HPE BLc 10G SFP+ SR Transceiver| |**SFPs for Fiber Optic NICs**|SingleMode, Long Range | 455886-B21 - HPE BLc 10G SFP+ LR Transceiver|
This section describes how to install OT sensor software on the HPE ProLiant DL3
During this procedure, you'll configure the iLO port. We recommend that you also change the default password provided for the administrative user. > [!NOTE]
-> Installation procedures are only relevant if you need to re-install software on a preconfigured device, or if you buy your own hardware and configure the appliance yourself.
+> Installation procedures are only relevant if you need to re-install software on a pre-configured device, or if you buy your own hardware and configure the appliance yourself.
> ### Enable remote access and update the password
This procedure describes how to update the HPE BIOS configuration for your OT se
> For **Data-at-Rest** encryption, see the HPE guidance for activating RAID Secure Encryption or using Self-Encrypting-Drives (SED). >
-### Install OT sensor software with iLO
-
-This procedure describes how to install iLO software remotely from a virtual drive.
-
-1. Sign in to the iLO console, and then right-click the servers' screen.
-
-1. Select **HTML5 Console**.
-
-1. In the console, select the CD icon, and choose the CD/DVD option.
-
-1. Select **Local ISO file**.
-
-1. In the dialog box, choose the D4IoT sensor installation ISO file.
-
-1. Go to the left icon, select **Power**, and the select **Reset**.
-
-1. The appliance will restart, and run the sensor installation process.
### Install OT sensor software on the HPE DL360
defender-for-iot Neousys Nuvo 5006Lp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/neousys-nuvo-5006lp.md
This article describes the Neousys Nuvo-5006LP appliance for OT sensors. > [!NOTE]
-> Legacy appliances are certified but aren't currently offered as preconfigured appliances.
+> Legacy appliances are certified but aren't currently offered as pre-configured appliances.
| Appliance characteristic |Details | ||| |**Hardware profile** | L100 |
-|**Performance** | Max bandwidth: 30 Mbp/s<br>Max devices: 400 |
+|**Performance** | Max bandwidth: 30 Mbps<br>Max devices: 400 |
|**Physical specifications** | Mounting: Mounting kit, Din Rail<br>Ports: 5x RJ45| |**Status** | Supported, Not available pre-configured|
defender-for-iot Ys Techsystems Ys Fit2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/ys-techsystems-ys-fit2.md
This article describes the **YS-techsystems YS-FIT2** appliance deployment and i
| Appliance characteristic |Details | ||| |**Hardware profile** | L100|
-|**Performance** | Max bandwidth: 10Mbp/s<br>Max devices: 100|
+|**Performance** | Max bandwidth: 10 Mbps<br>Max devices: 100|
|**Physical specifications** | Mounting: DIN/VESA<br>Ports: 2x RJ45| |**Status** | Supported; Available as pre-configured |
The following image shows a view of the YS-FIT2 back panel:
This section describes how to install OT sensor software on the YS-FIT2 appliance. Before you install the OT sensor software, you must adjust the appliance's BIOS configuration. > [!NOTE]
-> Installation procedures are only relevant if you need to re-install software on a preconfigured device, or if you buy your own hardware and configure the appliance yourself.
+> Installation procedures are only relevant if you need to re-install software on a pre-configured device, or if you buy your own hardware and configure the appliance yourself.
> ### Configure the YS-FIT2 BIOS
energy-data-services How To Set Up Private Links https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-set-up-private-links.md
Use the following steps to create a private endpoint for an existing Microsoft E
## Next steps <!-- Add a context sentence for the following links -->
-To learn more about data security and encryption
+To learn more about using Customer Lockbox as an interface to review and approve or reject access requests
> [!div class="nextstepaction"]
-> [Data security and encryption in Microsoft Energy Data Services](how-to-manage-data-security-and-encryption.md)
+> [Use Lockbox for Microsoft Energy Data Services](how-to-create-lockbox.md)
energy-data-services How To Use Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-use-managed-identity.md
+
+ Title: Use managed identities for Microsoft Energy Data Services on Azure
+description: Learn how to use Managed Identity to access Microsoft Energy Data Services from other Azure services.
++++ Last updated : 01/04/2023+
+#Customer intent: As a developer, I want to use managed identity to access Microsoft Energy Data Services from other Azure services such as Azure Functions.
+++
+# Use managed identity to access Microsoft Energy Data Services from other Azure services
+
+This article provides an overview of how to access the data plane or control plane of Microsoft Energy Data Services from other Azure services by using a *managed identity*.
+
+Services such as Azure Functions often need to consume Microsoft Energy Data Services APIs. This interoperability lets you combine the strengths of multiple Azure services; for example, you can write a script in an Azure Function to ingest data into Microsoft Energy Data Services. In this scenario, Azure Functions is the source service and Microsoft Energy Data Services is the target service. To understand how this scenario works, it's important to understand the concept of managed identity.
+
+## Managed Identity
+
+A managed identity from Azure Active Directory (Azure AD) allows your application to easily access other Azure AD-protected resources. The identity is managed by the Azure platform and doesn't require you to create or rotate any secrets. Any Azure service that wants to access Microsoft Energy Data Services control plane or data plane for any operation can use managed identity to do so.
+
+There are two types of managed identity: system-assigned and user-assigned. System-assigned managed identities have their lifecycle tied to the resource that created them. User-assigned managed identities can be used on multiple resources. To learn more about managed identities, see [What are managed identities for Azure resources?](../active-directory/managed-identities-azure-resources/overview.md)
+
+Currently, other services can connect to Microsoft Energy Data Services by using a system-assigned or user-assigned managed identity. However, Microsoft Energy Data Services itself doesn't support system-assigned managed identities.
+
+For this scenario, we'll use a user-assigned managed identity in an Azure Function to call a data plane API in Microsoft Energy Data Services.
+
+## Prerequisites
+
+Before you begin, make sure:
+
+* You've created a [Microsoft Energy Data Services instance](quickstart-create-microsoft-energy-data-services-instance.md).
+
+* You've created an [Azure Function App](../azure-functions/functions-create-function-app-portal.md).
+
+* You've created a [Python Azure Function using the portal](../azure-functions/create-first-function-vs-code-python.md) or using the [command line](../azure-functions/create-first-function-cli-python.md).
+
+* You've created a [user-assigned managed identity](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md). You can use a system-assigned identity as well; however, this article explains the flow using a user-assigned managed identity. (A CLI sketch for creating one follows this list.)
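+
+If you prefer to script that last prerequisite, the following Azure CLI commands are a minimal sketch for creating a user-assigned managed identity. The resource group, region, and identity names are placeholders, not values from this article:
+
+```bash
+# Create a resource group to hold the identity (skip if you already have one).
+az group create --name <resource-group> --location <region>
+
+# Create the user-assigned managed identity. The output includes the
+# principalId (Object ID) and clientId (Application ID) used in the steps below.
+az identity create --resource-group <resource-group> --name <identity-name>
+```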
++
+## Steps for Azure Functions to access Microsoft Energy Data Services using Managed Identity
+
+There are five important steps to configure Azure Functions to access Microsoft Energy Data Services.
++
+### Step 1: Retrieve the Object ID of the system-assigned or user-assigned identity that will access the Microsoft Energy Data Services APIs
+
+1. You can get the *Object ID* of the system-assigned identity associated with an Azure Function by navigating to the *Identity* screen of the Azure Function.
+
+[![Screenshot of object id for system assigned identity.](media/how-to-use-managed-identity/1-object-id-system-assigned-identity.png)](media/how-to-use-managed-identity/1-object-id-system-assigned-identity.png#lightbox)
+
+2. Similarly, navigate to the *Overview* tab of the user-assigned identity to find its *Object ID*.
+
+[![Screenshot of object id for user assigned identity.](media/how-to-use-managed-identity/2-object-id-user-assigned-identity.png)](media/how-to-use-managed-identity/2-object-id-user-assigned-identity.png#lightbox)
+
+### Step 2: Retrieve the *Application ID* of the system-assigned or user-assigned identity by using the Object ID
+
+1. Navigate to *Azure Active Directory (Azure AD)* in the Azure portal.
+2. Navigate to the *Enterprise applications* tab.
+3. Search for the *Object ID* of the user-assigned or system-assigned identity in the *Search by application name or Object ID* search box.
+4. Copy the *Application ID* from the *Enterprise applications* section of Azure Active Directory.
+
+[![Screenshot of Application Id for user assigned identity.](media/how-to-use-managed-identity/3-object-id-application-id-user-assigned-identity.png)](media/how-to-use-managed-identity/3-object-id-application-id-user-assigned-identity.png#lightbox)
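+
+Alternatively, if you'd rather not use the portal, the following Azure CLI sketch resolves the *Application ID* from the *Object ID*. It assumes the managed identity's service principal is visible to your signed-in account:
+
+```bash
+# Look up the service principal by its Object ID and print its Application (client) ID.
+az ad sp show --id <object-id-of-managed-identity> --query appId --output tsv
+```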
+
+### Step 3: Add the user assigned managed identity to Azure Functions
+
+1. Sign in to the Azure portal.
+2. In the Azure portal, navigate to your Azure Function.
+3. Under Account Settings, select Identity.
+4. Select the User assigned tab, and then select Add.
+5. Select your existing user-assigned managed identity and then select Add. You'll then be returned to the User assigned tab.
+
+[![Screenshot of adding user assigned identity to Azure Function.](media/how-to-use-managed-identity/4-user-assigned-identity-azure-function.png)](media/how-to-use-managed-identity/4-user-assigned-identity-azure-function.png#lightbox)
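+
+As a scripted alternative to the portal steps above, this Azure CLI sketch attaches an existing user-assigned identity to the Function App. The names and IDs are placeholders:
+
+```bash
+# Assign the user-assigned managed identity to the Azure Function app.
+az functionapp identity assign \
+  --resource-group <resource-group> \
+  --name <function-app-name> \
+  --identities "/subscriptions/<subscription-id>/resourcegroups/<resource-group>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<identity-name>"
+```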
+
+### Step 4: Add the Application ID to entitlement groups to access Microsoft Energy Data Services APIs
+Next, you need to add this Application ID to the appropriate groups by using the entitlement service to access Microsoft Energy Data Services APIs. You need to perform the following actions:
+
+1. Find the tenant-id, client-id, client-secret, Microsoft Energy Data Services URI, and data-partition-id, and generate an [access token](how-to-manage-users.md#prerequisites). You should have the following information handy:
+
+* tenant-id
+* client-id
+* client-secret
+* microsoft energy data services uri
+* data-partition-id
+* access token
+* Application ID of the managed identity
++
+2. Next, use the [add-member-api](https://microsoft.github.io/meds-samples/rest-apis/https://docsupdatetracker.net/index.html?page=/meds-samples/rest-apis/entitlements_openapi.yaml#/add-member-api/addMemberUsingPOST) to add the Application ID of the user-assigned managed identity to the appropriate entitlement groups. In this case, we'll add the Application ID to two groups:
+
+* users@[partition ID].dataservices.energy
+* users.datalake.editors@[partition ID].dataservices.energy
+
+> [!NOTE]
+> In the following commands, use the Application ID of the managed identity, not its Object ID.
+
+* Adding the Application ID of the managed identity to users@[partition ID].dataservices.energy
+
+3. Run the following cURL command in Bash:
+
+```bash
+ curl --location --request POST 'https://<microsoft energy data services uri>/api/entitlements/v2/groups/users@<data-partition-id>.dataservices.energy/members' \
+ --header 'data-partition-id: <data-partition-id>' \
+ --header 'Authorization: Bearer <access_token>' \
+ --header 'Content-Type: application/json' \
+ --data-raw '{
+     "email": "<Application ID of the managed identity>",
+     "role": "MEMBER"
+ }'
+```
+
+Sample response:
+```JSON
+{
+    "email": "<Application ID of the managed identity>",
+    "role": "MEMBER"
+}
+```
+* Adding the Application ID of the managed identity to users.datalake.editors@[partition ID].dataservices.energy
+
+4. Run the following cURL command in Bash:
+
+```bash
+ curl --location --request POST 'https://<microsoft energy data services uri>/api/entitlements/v2/groups/users.datalake.editors@<data-partition-id>.dataservices.energy/members' \
+ --header 'data-partition-id: <data-partition-id>' \
+ --header 'Authorization: Bearer <access_token>' \
+ --header 'Content-Type: application/json' \
+ --data-raw '{
+     "email": "<Application ID of the managed identity>",
+     "role": "MEMBER"
+ }'
+```
+
+Sample response:
+```JSON
+{
+    "email": "<Application ID of the managed identity>",
+    "role": "MEMBER"
+}
+```
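+
+To confirm that both memberships took effect, you can list the members of a group. This sketch assumes the entitlements service exposes a corresponding GET endpoint on the same path used for adding members:
+
+```bash
+ curl --location --request GET 'https://<microsoft energy data services uri>/api/entitlements/v2/groups/users@<data-partition-id>.dataservices.energy/members' \
+ --header 'data-partition-id: <data-partition-id>' \
+ --header 'Authorization: Bearer <access_token>'
+```
+
+The response should include an entry whose `email` matches the Application ID of the managed identity.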
+
+### Step 5: Generate a token for accessing Microsoft Energy Data Services from the Azure Function
+
+Now Azure Functions is ready to access Microsoft Energy Data Services APIs.
+
+In this case, the Azure Function generates a token by using the user-assigned identity. While generating the token, the Azure Function uses the Application ID of the Microsoft Energy Data Services instance as the resource.
+Here's sample Azure Function code:
+
+```python
+import logging
+
+import azure.functions as func
+import requests
+from msrestazure.azure_active_directory import MSIAuthentication
+
+def main(req: func.HttpRequest) -> str:
+    logging.info('Python HTTP trigger function processed a request.')
+
+    # To authenticate using a managed identity, pass the Microsoft Energy Data
+    # Services Application ID as the resource.
+    # System-assigned identity: MSIAuthentication(resource=...)
+    # User-assigned identity: also pass the client ID of the identity:
+    # MSIAuthentication(client_id=..., resource=...)
+    creds = MSIAuthentication(client_id="<client_id_of_managed_identity>", resource="<meds_app_id>")
+
+    url = "https://<meds-uri>/api/entitlements/v2/groups"
+
+    # Pass the data partition ID of Microsoft Energy Data Services in the
+    # headers, along with the token acquired through the managed identity.
+    headers = {
+        'data-partition-id': '<data partition id>',
+        'Authorization': 'Bearer ' + creds.token["access_token"]
+    }
+    # Note: verify=False disables TLS certificate validation; avoid it in production.
+    response = requests.get(url, headers=headers, verify=False)
+    return response.text
+
+```
+
+You should get the following successful response from the Azure Function:
+
+[![Screenshot of success message from Azure Function.](media/how-to-use-managed-identity/5-azure-function-success.png)](media/how-to-use-managed-identity/5-azure-function-success.png#lightbox)
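+
+For reference, the managed identity token that `MSIAuthentication` acquires can also be requested directly over REST. The following sketch approximates the call inside App Service or Azure Functions; `IDENTITY_ENDPOINT` and `IDENTITY_HEADER` are environment variables injected by the platform, and the api-version shown is an assumption based on the current App Service managed identity protocol:
+
+```bash
+# Request a token for the Microsoft Energy Data Services resource from the
+# App Service managed identity endpoint.
+curl -s -H "X-IDENTITY-HEADER: $IDENTITY_HEADER" \
+  "$IDENTITY_ENDPOINT?api-version=2019-08-01&resource=<meds_app_id>&client_id=<client_id_of_managed_identity>"
+```
+
+The JSON response contains an `access_token` field, which is the bearer token passed in the `Authorization` header.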
+
+With these steps completed, you can now use Azure Functions to access Microsoft Energy Data Services APIs with the appropriate use of managed identities.
+
+## Next steps
+<!-- Add a context sentence for the following links -->
+To learn more about Lockbox in Microsoft Energy Data Services
+> [!div class="nextstepaction"]
+> [Lockbox in Microsoft Energy Data Services](how-to-create-lockbox.md)
frontdoor Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/managed-identity.md
Azure Front Door also supports using managed identities to access Key Vault certificate. A managed identity generated by Azure Active Directory (Azure AD) allows your Azure Front Door instance to easily and securely access other Azure AD-protected resources, such as Azure Key Vault. Azure manages this identity, so you don't have to create or rotate any secrets. For more information about managed identities, seeΓÇ»[What are managed identities for Azure resources?](../active-directory/managed-identities-azure-resources/overview.md). > [!IMPORTANT]
-> Migration identity for Azure Front Door is currently in PREVIEW.
+> Managed identity for Azure Front Door is currently in PREVIEW.
> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. > [!NOTE]
hdinsight Hdinsight Use Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-use-availability-zones.md
description: Learn how to create an Azure HDInsight cluster that uses Availabili
Previously updated : 01/05/2023 Last updated : 01/16/2023
-# Create an HDInsight cluster that uses Availability Zones (Preview)
+# Create an HDInsight cluster that uses Availability Zones
An Azure HDInsight cluster consists of multiple nodes (head nodes, worker nodes, gateway nodes and zookeeper nodes). By default, in a region that supports Availability Zones, the user has no control over which cluster nodes are provisioned in which Availability Zone.
In the resources section, you need to add a section of 'zones' and provide w
"resources": [ { "type": "Microsoft.HDInsight/clusters",
- "apiVersion": "2018-06-01-preview",
+ "apiVersion": "2021-06-01",
"name": "[parameters('cluster name')]", "location": "East US 2", "zones": [
lab-services Account Setup Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/account-setup-guide.md
# Lab account setup guide If you're an administrator, before you set up your Azure Lab Services environment, you first need to create a *lab account* within your Azure subscription. A lab account is a container for one or more labs, and it takes only a few minutes to set up.
lab-services Administrator Guide 1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/administrator-guide-1.md
Last updated 10/20/2020
# Azure Lab Services - Administrator guide when using lab accounts Information technology (IT) administrators who manage a university's cloud resources are ordinarily responsible for setting up the lab account for their school. After they've set up a lab account, administrators or educators create the labs that are contained within the account. This article provides a high-level overview of the Azure resources that are involved and the guidance for creating them.
lab-services Classroom Labs Fundamentals 1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/classroom-labs-fundamentals-1.md
# Architecture Fundamentals in Azure Lab Services when using lab accounts Azure Lab Services is a SaaS (software as a service) solution, which means that the resources needed by Lab Services are handled for you. This article will cover the fundamental resources used by Lab Services and basic architecture of a lab.
lab-services Concept Nested Virtualization Template Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/concept-nested-virtualization-template-vm.md
+
+ Title: Nested virtualization on a template VM
+
+description: In this article, learn about nested virtualization on a template virtual machine in Azure Lab Services.
+++++ Last updated : 01/13/2023++
+# Nested virtualization on a template virtual machine in Azure Lab Services
+
+Azure Lab Services enables you to set up a [template virtual machine](./classroom-labs-concepts.md#template-virtual-machine) in a lab, which serves as a base image for the VMs of your students. Teaching a networking, security or IT class can require an environment with multiple VMs. The VMs also need to communicate with each other.
+
+Nested virtualization enables you to create a multi-VM environment inside a lab's template virtual machine. Publishing the template will provide each lab user with a virtual machine that has multiple VMs within it. This article explains the concepts of nested virtualization on a template VM in Azure Lab Services, and how to enable it.
+
+## What is nested virtualization?
+
+Nested virtualization enables you to create virtual machines within a virtual machine. Nested virtualization is done through Hyper-V, and is only available on Windows VMs.
+
+For more information about nested virtualization, see the following articles:
+
+- [How nested virtualization works](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization#how-nested-virtualization-works).
+- [Nested Virtualization in Azure](https://azure.microsoft.com/blog/nested-virtualization-in-azure/).
+
+## Considerations
+
+Before setting up a lab with nested virtualization, here are a few things to take into consideration.
+
+- Not all VM sizes support nested virtualization. When you create a new lab, select the **Medium (Nested virtualization)** or **Large (Nested virtualization)** VM size.
+
+- Choose a size that provides good performance for both the host (lab VM) and client VMs (VMs inside the lab VM). Make sure the size you choose can run the host VM and any Hyper-V machines at the same time.
+
+- Client VMs don't have access to Azure resources, such as DNS servers, on the Azure virtual network.
+
+- The host VM requires additional configuration to let the client machines have internet connectivity.
+
+- Hyper-V client VMs are licensed as independent machines. For information about licensing for Microsoft operating systems and products, see [Microsoft Licensing](https://www.microsoft.com/licensing/default). Check the licensing agreements for any other software before installing it on the template VM or client VMs.
+
+## Enable nested virtualization on a template VM
+
+To enable nested virtualization on a template VM, you first connect to the template VM with a remote desktop client. Then, you make several configuration changes inside the VM.
+
+1. Follow these steps to [connect to and update the template machine](./how-to-create-manage-template.md#update-a-template-vm).
+
+1. Next, make the following changes inside the template VM to enable nested virtualization:
+
+ - **Enable the Hyper-V role**. The Hyper-V role must be enabled for the creation and running of VMs inside the template VM.
+ - **Enable DHCP**. When the template VM has the DHCP role enabled, the VMs inside the template VM get an IP address automatically assigned to them.
+ - **Create a NAT network for the Hyper-V VMs**. You set up a Network Address Translation (NAT) network to allow the VMs inside the template VM to have internet access and communicate with each other.
+
+ >[!NOTE]
+ >The NAT network created on the Lab Services VM will allow a Hyper-V VM to access the internet and other Hyper-V VMs on the same Lab Services VM. The Hyper-V VM won't be able to access Azure resources, such as DNS servers, on an Azure virtual network.
+
+You can accomplish the tasks listed above by using a script, or by using Windows tools. Learn how you can [enable nested virtualization on a template VM in Azure Lab Services](./how-to-enable-nested-virtualization-template-vm-using-script.md).
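+
+As a rough sketch (not the exact Lab Services script), the role-related part of that setup on Windows Server boils down to a single cmdlet, run in an elevated PowerShell session on the template VM; the script then also creates the switch, NAT network, and DHCP scope described above:
+
+```powershell
+# Illustrative sketch only, not the exact Lab Services script.
+# Enables all roles needed for nested virtualization and restarts if required.
+Install-WindowsFeature -Name Hyper-V, DHCP, RemoteAccess, Routing `
+    -IncludeManagementTools -Restart
+```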
+
+## Processor compatibility
+
+The nested virtualization VM sizes may use different processors as shown in the following table:
+
+| Size | Series | Processor |
+| - | -- | -- |
+| Medium (nested virtualization) | [Standard_D4s_v4](../virtual-machines/dv4-dsv4-series.md) | 3rd Generation Intel® Xeon® Platinum 8370C (Ice Lake) or the Intel® Xeon® Platinum 8272CL (Cascade Lake) |
+| Large (nested virtualization) | [Standard_D8s_v4](../virtual-machines/dv4-dsv4-series.md) | 3rd Generation Intel® Xeon® Platinum 8370C (Ice Lake) or the Intel® Xeon® Platinum 8272CL (Cascade Lake) |
+
+Each time that a template VM or a student VM is stopped and started, the underlying processor may change. To help ensure that nested VMs work consistently across processors, try enabling [processor compatibility mode](/windows-server/virtualization/hyper-v/manage/processor-compatibility-mode-hyper-v) on the nested VMs. It's recommended to enable **Processor Compatibility** mode on the template VM's nested VMs before publishing or exporting the image.
+
+You should also test the performance of the nested VMs with the **Processor Compatibility** mode enabled to ensure performance isn't negatively impacted. For more information, see [ramifications of using processor compatibility mode](/windows-server/virtualization/hyper-v/manage/processor-compatibility-mode-hyper-v#ramifications-of-using-processor-compatibility-mode).
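+
+For example, assuming a nested VM named 'NestedVM1' (the name is illustrative), you can turn on processor compatibility mode from an elevated PowerShell session on the template VM while the nested VM is stopped:
+
+```powershell
+# Enable processor compatibility mode on a nested VM (the VM must be off).
+Set-VMProcessor -VMName 'NestedVM1' -CompatibilityForMigrationEnabled $true
+```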
+
+## Next steps
+
+* Learn how to [enable nested virtualization on a lab VM](./how-to-enable-nested-virtualization-template-vm-using-script.md).
lab-services How To Enable Nested Virtualization Template Vm Using Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-enable-nested-virtualization-template-vm-using-script.md
Title: Enable nested virtualization on a template VM in Azure Lab Services (Script) | Microsoft Docs
-description: Learn how to create a template VM with multiple VMs inside by using a script. In other words, enable nested virtualization on a template VM in Azure Lab Services.
+ Title: Enable nested virtualization on a template VM
+
+description: Learn how to enable nested virtualization on a template VM in Azure Lab Services. Nested virtualization enables you to create a lab with multiple VMs inside it.
++++ Previously updated : 06/26/2020 Last updated : 01/13/2023
-# Enable nested virtualization on a template virtual machine in Azure Lab Services using a script
+# Enable nested virtualization on a template virtual machine in Azure Lab Services
-Nested virtualization enables you to create a multi-VM environment inside a lab's template virtual machine. Publishing the template will provide each user in the lab with a virtual machine set up with multiple VMs within it. For more information about nested virtualization and Azure Lab Services, see [Enable nested virtualization on a template virtual machine in Azure Lab Services](how-to-enable-nested-virtualization-template-vm.md).
+Nested virtualization enables you to create a multi-VM environment inside a lab's template virtual machine. Publishing the template provides each user in the lab with a VM that's set up with multiple VMs within it.
-The steps in this article focus on setting up nested virtualization for Windows Server 2016, Windows Server 2019, or Windows 10. You will use a script to set up template machine with Hyper-V. The following steps will guide you through how to use the [Lab Services Hyper-V scripts](https://github.com/Azure/LabServices/tree/main/ClassTypes/PowerShell/HyperV).
+For more information about nested virtualization and Azure Lab Services, see [Nested virtualization on a template virtual machine](./concept-nested-virtualization-template-vm.md).
+
+To enable nested virtualization on the template VM, you first connect to the VM by using a remote desktop (RDP) client. Then you can apply the configuration changes in either of two ways:
+
+- [Enable nested virtualization by using a script](#enable-nested-virtualization-by-using-a-script).
+- [Enable nested virtualization by using Windows tools](#enable-nested-virtualization-by-using-windows-tools).
>[!IMPORTANT]
->Select **Large (nested virtualization)** or **Medium (nested virtualization)** for the virtual machine size when creating the lab. Nested virtualization will not work otherwise.
+>Select **Large (nested virtualization)** or **Medium (nested virtualization)** for the virtual machine size when creating the lab. Nested virtualization will not work otherwise.
+
+## Prerequisites
+
+- A lab plan and one or more labs. Learn how to [Set up a lab plan](tutorial-setup-lab-plan.md) and [Set up a lab](tutorial-setup-lab.md).
+- Permission to edit the lab. Learn how to [Add a user to the Lab Creator role](tutorial-setup-lab-plan.md#add-a-user-to-the-lab-creator-role). For more role options, see [Lab Services built-in roles](administrator-guide.md#rbac-roles).
-## Run script
+## Enable nested virtualization by using a script
+
+You can use a PowerShell script to set up nested virtualization on a template VM in Azure Lab Services. The following steps guide you through using the [Lab Services Hyper-V scripts](https://github.com/Azure/LabServices/tree/main/ClassTypes/PowerShell/HyperV). The steps are intended for Windows Server 2016, Windows Server 2019, or Windows 10.
+
+1. Follow these steps to [connect to and update the template machine](./how-to-create-manage-template.md#update-a-template-vm).
1. Launch **PowerShell** in **Administrator** mode.+ 1. You may have to change the execution policy to successfully run the script. Run the following command: ```powershell
The steps in this article focus on setting up nested virtualization for Windows
> [!NOTE]
> The script may require the machine to be restarted. Follow instructions from the script and re-run the script until **Script completed** is seen in the output.
+
1. Don't forget to reset the execution policy. Run the following command:

    ```powershell
    Set-ExecutionPolicy default -force
    ```
+You've now configured your template VM to use nested virtualization, and you can create VMs inside it.
+
+## Enable nested virtualization by using Windows tools
+
+You can set up nested virtualization on a template VM in Azure Lab Services by using Windows roles and tools directly. A few things are needed on the template VM to enable nested virtualization. The following steps describe how to manually set up a Lab Services machine template with Hyper-V. The steps are intended for Windows Server 2016 or Windows Server 2019.
+
+First, follow these steps to [connect to the template virtual machine by using a remote desktop client](./how-to-create-manage-template.md#update-a-template-vm).
+
+### 1. Enable the Hyper-V role
+
+The following steps describe how to enable Hyper-V on Windows Server by using Server Manager. After you enable Hyper-V, Hyper-V Manager is available to add, modify, and delete client VMs.
+
+1. In **Server Manager**, on the Dashboard page, select **Add Roles and Features**.
+
+2. On the **Before you begin** page, select **Next**.
+3. On the **Select installation type** page, keep the default selection of **Role-based or feature-based installation**, and then select **Next**.
+4. On the **Select destination server** page, select **Select a server from the server pool**. The current server will already be selected. Select **Next**.
+5. On the **Select server roles** page, select **Hyper-V**.
+6. The **Add Roles and Features Wizard** pop-up will appear. Select **Include management tools (if applicable)**. Select the **Add Features** button.
+7. On the **Select server roles** page, select **Next**.
+8. On the **Select features page**, select **Next**.
+9. On the **Hyper-V** page, select **Next**.
+10. On the **Create Virtual Switches** page, accept the defaults, and select **Next**.
+11. On the **Virtual Machine Migration** page, accept the defaults, and select **Next**.
+12. On the **Default Stores** page, accept the defaults, and select **Next**.
+13. On the **Confirm installation selections** page, select **Restart the destination server automatically if required**.
+14. When the **Add Roles and Features Wizard** pop-up appears, select **Yes**.
+15. Select **Install**.
+16. Wait for the **Installation progress** page to indicate that the Hyper-V role is complete. The machine may restart in the middle of the installation.
+17. Select **Close**.
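+
+If you prefer the command line, a sketch of the equivalent of this wizard from an elevated PowerShell session is:
+
+```powershell
+# Equivalent to the Server Manager wizard: enables Hyper-V with management
+# tools and restarts the server automatically if required.
+Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart
+```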
+
+### 2. Enable the DHCP role
+
+When you create a client VM, it needs an IP address in the Network Address Translation (NAT) network. You'll create the NAT network in a later step.
+
+To assign the IP addresses automatically, configure the lab VM template as a DHCP server:
+
+1. In **Server Manager**, on the **Dashboard** page, select **Add Roles and Features**.
+2. On the **Before you begin** page, select **Next**.
+3. On the **Select installation type** page, select **Role-based or feature-based installation** and then select **Next**.
+4. On the **Select destination server** page, select the current server from the server pool and then select **Next**.
+5. On the **Select server roles** page, select **DHCP Server**.
+6. The **Add Roles and Features Wizard** pop-up will appear. Select **Include management tools (if applicable)**. Select **Add Features**.
+
+ >[!NOTE]
+ >You may see a validation error stating that no static IP addresses were found. This warning can be ignored for our scenario.
+
+7. On the **Select server roles** page, select **Next**.
+8. On the **Select features** page, select **Next**.
+9. On the **DHCP Server** page, select **Next**.
+10. On the **Confirm installation selections** page, select **Install**.
+11. Wait for the **Installation progress page** to indicate that the DHCP role is complete.
+12. Select **Close**.
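+
+A sketch of the PowerShell equivalent of these wizard steps:
+
+```powershell
+# Enables the DHCP Server role together with its management tools.
+Install-WindowsFeature -Name DHCP -IncludeManagementTools
+```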
+
+### 3. Enable the Routing and Remote Access role
+
+Next, enable the [Routing service](/windows-server/remote/remote-access/remote-access#routing-service) so that network traffic can be routed between the VMs on the template VM.
+
+1. In **Server Manager**, on the **Dashboard** page, select **Add Roles and Features**.
+
+2. On the **Before you begin** page, select **Next**.
+3. On the **Select installation type** page, select **Role-based or feature-based installation** and then select **Next**.
+4. On the **Select destination server** page, select the current server from the server pool and then select **Next**.
+5. On the **Select server roles** page, select **Remote Access**. Select **OK**.
+6. On the **Select features** page, select **Next**.
+7. On the **Remote Access** page, select **Next**.
+8. On the **Role Services** page, select **Routing**.
+9. The **Add Roles and Features Wizard** pop-up will appear. Select **Include management tools (if applicable)**. Select **Add Features**.
+10. Select **Next**.
+11. On the **Web Server Role (IIS)** page, select **Next**.
+12. On the **Select role services** page, select **Next**.
+13. On the **Confirm installation selections** page, select **Install**.
+14. Wait for the **Installation progress** page to indicate that the Remote Access role is complete.
+15. Select **Close**.
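+
+A sketch of the PowerShell equivalent, enabling Remote Access with the Routing role service:
+
+```powershell
+# Enables the Remote Access role with the Routing role service and tools.
+Install-WindowsFeature -Name RemoteAccess, Routing -IncludeManagementTools
+```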
+
+### 4. Create virtual NAT network
+
+Now that you've installed all the necessary roles, you can create the NAT network. The creation process involves creating a switch and the NAT network itself.
+
+A NAT network assigns a public IP address to a group of VMs on a private network to allow connectivity to the internet. In this case, the group of private VMs consists of the nested VMs. The NAT network allows the nested VMs to communicate with one another.
+
+A switch is a network device that handles receiving and routing of traffic in a network.
+
+#### Create a new virtual switch
+
+To create a virtual switch in Hyper-V:
+
+1. Open **Hyper-V Manager** from Windows Administrative Tools.
+
+2. Select the current server in the left-hand navigation menu.
+3. Select **Virtual Switch Manager…** from the **Actions** menu on the right-hand side of the **Hyper-V Manager**.
+4. On the **Virtual Switch Manager** pop-up, select **Internal** for the type of switch to create. Select **Create Virtual Switch**.
+5. For the newly created virtual switch, set the name to something memorable. For this example, we'll use 'LabServicesSwitch'. Select **OK**.
+6. A new network adapter will be created. The name will be similar to 'vEthernet (LabServicesSwitch)'. To verify, open **Control Panel**, select **Network and Internet**, and then select **View network status and tasks**. On the left, select **Change adapter settings**.
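+
+A sketch of the PowerShell equivalent, which also verifies the new host-side adapter:
+
+```powershell
+# Create the internal switch; Hyper-V adds a matching host-side adapter
+# named 'vEthernet (LabServicesSwitch)', which Get-NetAdapter can verify.
+New-VMSwitch -Name 'LabServicesSwitch' -SwitchType Internal
+Get-NetAdapter -Name 'vEthernet (LabServicesSwitch)'
+```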
+
+#### Create a NAT network
+
+To create a NAT network on the lab template VM:
+
+1. Open the **Routing and Remote Access** tool from Windows Administrative Tools.
+
+2. Select the local server in the left navigation pane.
+3. Choose **Action** -> **Configure and Enable Routing and Remote Access**.
+4. When the **Routing and Remote Access Server Setup Wizard** appears, select **Next**.
+5. On the **Configuration** page, select **Network address translation (NAT)** configuration. Select **Next**.
+
+ >[!WARNING]
+ >Do not choose the 'Virtual private network (VPN) access and NAT' option.
+
+6. On the **NAT Internet Connection** page, choose 'Ethernet'. Don't choose the 'vEthernet (LabServicesSwitch)' connection that you created in Hyper-V Manager. Select **Next**.
+7. Select **Finish** on the last page of the wizard.
+8. When the **Start the service** dialog appears, select **Start Service**.
+9. Wait until the service has started.
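+
+As an alternative to the Routing and Remote Access wizard, the built-in NetNat module can create the NAT network directly; this is a sketch, assuming the 192.168.0.0/24 subnet used later in this article:
+
+```powershell
+# Alternative to the RRAS wizard: a NAT object covering the nested VM subnet.
+New-NetNat -Name 'LabServicesNat' -InternalIPInterfaceAddressPrefix '192.168.0.0/24'
+```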
+
+### 5. Update network adapter settings
+
+Next, assign the default gateway IP of the NAT network to the network adapter for the virtual switch that you created earlier. In this example, assign an IP address of 192.168.0.1, with a subnet mask of 255.255.255.0.
+
+1. Open the **Control Panel**, select **Network and Internet**, select **View network status and tasks**.
+
+2. On the left, select **Change adapter settings**.
+3. In the **Network Connections** window, double-click on 'vEthernet (LabServicesSwitch)' to show the **vEthernet (LabServicesSwitch) Status** details dialog.
+4. Select the **Properties** button.
+5. Select **Internet Protocol Version 4 (TCP/IPv4)** item and select the **Properties** button.
+6. In the **Internet Protocol Version 4 (TCP/IPv4) Properties** dialog:
+
+    - Select **Use the following IP address**.
+    - For the IP address, enter 192.168.0.1.
+    - For the subnet mask, enter 255.255.255.0.
+    - Leave the default gateway and DNS servers blank.
+
+ >[!NOTE]
+ > The range for the NAT network will be, in CIDR notation, 192.168.0.0/24. This range provides usable IP addresses from 192.168.0.1 to 192.168.0.254. By convention, gateways have the first IP address in a subnet range.
+
+7. Select **OK**.
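+
+A sketch of the same configuration from PowerShell:
+
+```powershell
+# Assign the NAT gateway address to the switch's host-side adapter.
+New-NetIPAddress -InterfaceAlias 'vEthernet (LabServicesSwitch)' `
+    -IPAddress '192.168.0.1' -PrefixLength 24
+```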
+
+### 6. Create DHCP Scope
+
+Next, you can add a DHCP scope. In this case, our NAT network is 192.168.0.0/24 in CIDR notation. This range provides usable IP addresses from 192.168.0.1 to 192.168.0.254. The scope you create must be in that range of usable addresses, excluding the IP address you assigned in the previous step.
+
+1. Open **Administrative Tools** and open the **DHCP** administrative tool.
+2. In the **DHCP** tool, expand the node for the current server and select **IPv4**.
+3. From the Action menu, choose **New Scope…**.
+4. When the **New Scope Wizard** appears, select **Next** on the **Welcome** page.
+5. On the **Scope Name** page, enter 'LabServicesDhcpScope' or something else memorable for the name. Select **Next**.
+6. On the **IP Address Range** page, enter the following values.
+
+ - 192.168.0.100 for the Start IP address
+ - 192.168.0.200 for the End IP address
+ - 24 for the Length
+ - 255.255.255.0 for the Subnet mask
+
+7. Select **Next**.
+8. On the **Add Exclusions and Delay** page, select **Next**.
+9. On the **Lease Duration** page, select **Next**.
+10. On the **Configure DHCP Options** page, select **Yes, I want to configure these options now**. Select **Next**.
+11. On the **Router (Default Gateway)** page, add 192.168.0.1, if not done already.
+12. Select **Next**.
+13. On the **Domain Name and DNS Servers** page, add 168.63.129.16 as a DNS server IP address, if not done already. 168.63.129.16 is the IP address for an Azure static DNS server. Select **Next**.
+14. On the **WINS Servers** page, select **Next**.
+15. On the **Activate Scope** page, select **Yes, I want to activate this scope now**. Select **Next**.
+16. On the **Completing the New Scope Wizard** page, select **Finish**.
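+
+A sketch of the PowerShell equivalent of the scope wizard, using the same values:
+
+```powershell
+# Create the scope (addresses 192.168.0.100-200), then set the gateway and
+# the Azure static DNS server as scope options.
+Add-DhcpServerv4Scope -Name 'LabServicesDhcpScope' -StartRange 192.168.0.100 `
+    -EndRange 192.168.0.200 -SubnetMask 255.255.255.0
+Set-DhcpServerv4OptionValue -ScopeId 192.168.0.0 -Router 192.168.0.1 `
+    -DnsServer 168.63.129.16
+```
+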
++ ## Conclusion
-Now your template machine is ready to create Hyper-V virtual machines. See [Create a Virtual Machine in Hyper-V](/windows-server/virtualization/hyper-v/get-started/create-a-virtual-machine-in-hyper-v) for instructions on how to create Hyper-V virtual machines. Also, see [Microsoft Evaluation Center](https://www.microsoft.com/evalcenter/) to check out available operating systems and software.
+Now your template machine is ready to create Hyper-V virtual machines. See [Create a Virtual Machine in Hyper-V](/windows-server/virtualization/hyper-v/get-started/create-a-virtual-machine-in-hyper-v) for instructions on how to create Hyper-V virtual machines. Also, see [Microsoft Evaluation Center](https://www.microsoft.com/evalcenter/) to check out available operating systems and software.
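+
+For example, a minimal sketch of creating one nested VM attached to the lab switch (names and paths are illustrative):
+
+```powershell
+# Create a Generation 2 VM with a new 40 GB disk on the lab switch.
+New-VM -Name 'NestedVM1' -MemoryStartupBytes 2GB -Generation 2 `
+    -NewVHDPath 'C:\VMs\NestedVM1.vhdx' -NewVHDSizeBytes 40GB `
+    -SwitchName 'LabServicesSwitch'
+```
+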
## Next steps
lab-services How To Enable Nested Virtualization Template Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-enable-nested-virtualization-template-vm.md
- Title: Enable nested virtualization on a template VM in Azure Lab Services | Microsoft Docs
-description: In this article, learn how to set up nested virtualization on a template machine in Azure Lab Services.
- Previously updated : 01/04/2022--
-# Enable nested virtualization on a template virtual machine in Azure Lab Services
-
-Azure Lab Services enables you to set up one template virtual machine in a lab and make a single copy available to each of your students. Teaching a networking, security of IT class can require an environment with multiple VMs. The VMs also need to communicate with each other.
-
-Nested virtualization enables you to create a multi-VM environment inside a lab's template virtual machine. Publishing the template will provide each user in the lab with a virtual machine set up with multiple VMs within it. This article covers how to set up nested virtualization on a template machine in Azure Lab Services.
-
-## What is nested virtualization?
-
-Nested virtualization enables you to create virtual machines within a virtual machine. Nested virtualization is done through Hyper-V, and is only available on Windows VMs.
-
-For more information about nested virtualization, see the following articles:
--- [Nested Virtualization in Azure](https://azure.microsoft.com/blog/nested-virtualization-in-azure/)-- [How to enable nested virtualization in an Azure VM](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization)-
-## Considerations
-
-Before setting up a lab with nested virtualization, here are a few things to take into consideration.
--- When creating a new lab, select **Medium (Nested virtualization)** or **Large (Nested virtualization)** sizes for the virtual machine size.-- Choose a size that will provide good performance for both the host and client virtual machines. Make sure the size you choose can run the host VM and any Hyper-V machines at the same time.-- Client virtual machines won't have access to Azure resources, such as DNS servers, on the Azure virtual network.-- The host virtual machine requires setup to allow for the client machine to have internet connectivity.-- Hyper-V client virtual machines are licensed as independent machines. For information about licensing for Microsoft operation systems and products, see [Microsoft Licensing](https://www.microsoft.com/licensing/default). Check licensing agreements for any other software being used before installing it on the template virtual machine or client virtual machines.-
-## Enable nested virtualization on a template VM
-
-This article assumes that you've created a lab account/lab plan and lab. For more information about creating a new lab plan, see [Tutorial: Set up a lab plan](tutorial-setup-lab-plan.md). For more information how to create lab, see [Tutorial: Set up a lab](tutorial-setup-lab.md).
-
->[!IMPORTANT]
->Select **Large (nested virtualization)** or **Medium (nested virtualization)** for the virtual machine size when creating the lab. Nested virtualization will not work otherwise.
-
-To connect to the template machine, see [Create and manage a template in Azure Lab Services](how-to-create-manage-template.md).
-
-To enable nested virtualization, there are a few tasks to accomplish.
--- **Enable Hyper-V role**. Hyper-V role must be enabled for the creation and running of Hyper-V virtual machines.-- **Enable DHCP**. When the Lab Services virtual machine has the DHCP role enabled, the Hyper-V virtual machines can automatically be assigned an IP address.-- **Create NAT network for Hyper-V VMs**. The NAT network is set up to allow the Hyper-V virtual machines to have internet access. The Hyper-V virtual machines can communicate with each other.-
->[!NOTE]
->The NAT network created on the Lab Services VM will allow a Hyper-V VM to access the internet and other Hyper-V VMs on the same Lab Services VM. The Hyper-V VM won't be able to access Azure resources, such as DNS servers, on an Azure virtual network.
-
-Accomplishing the tasks listed above can be done using a script or using Windows tools. Read the sections below for further details.
-
-### Using script to enable nested virtualization
-
-To use the automated setup for nested virtualization with Windows Server 2016 or Windows Server 2019, see [Enable nested virtualization on a template virtual machine in Azure Lab Services using a script](how-to-enable-nested-virtualization-template-vm-using-script.md). You'll use scripts from [Lab Services Hyper-V scripts](https://aka.ms/azlabs/scripts/hyperV) to install the Hyper-V role. The scripts will also set up networking so the Hyper-V virtual machines can have internet access.
-
-### Using Windows tools to enable nested virtualization
-
-To configure nested virtualization for Windows Server 2016 or 2019 manually, see [Enable nested virtualization on a template virtual machine in Azure Lab Services manually](how-to-enable-nested-virtualization-template-vm-ui.md). Instructions will also cover configuring networking so the Hyper-V VMs have internet access.
-
-### Processor compatibility
-
-The nested virtualization VM sizes may use different processors as shown in the following table:
-
- Size | Series | Processor |
-| - | -- | -- |
-| Medium (nested virtualization) | [Standard_D4s_v4](../virtual-machines/dv4-dsv4-series.md) | 3rd Generation Intel® Xeon® Platinum 8370C (Ice Lake) or the Intel® Xeon® Platinum 8272CL (Cascade Lake) |
-| Large (nested virtualization) | [Standard_D8s_v4](../virtual-machines/dv4-dsv4-series.md) | 3rd Generation Intel® Xeon® Platinum 8370C (Ice Lake) or the Intel® Xeon® Platinum 8272CL (Cascade Lake) |
-
-Each time that a template VM or a student VM is stopped and started, the underlying processor may change. To help ensure that nested VMs work consistently across processors, try enabling [processor compatibility mode](/windows-server/virtualization/hyper-v/manage/processor-compatibility-mode-hyper-v) on the nested VMs. It's recommended to enable **Processor Compatibility** mode on the template VM's nested VMs before publishing or exporting the image. You should also test the performance of the nested VMs with the **Processor Compatibility** mode enabled to ensure performance isn't negatively impacted. For more information, see [ramifications of using processor compatibility mode](/windows-server/virtualization/hyper-v/manage/processor-compatibility-mode-hyper-v#ramifications-of-using-processor-compatibility-mode).
lab-services How To Setup Lab Gpu 1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-setup-lab-gpu-1.md
# Set up GPU virtual machines in labs contained within lab accounts This article shows you how to do the following tasks:
lab-services Lab Services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/lab-services-whats-new.md
# What's new in Azure Lab Services August 2022 Update + We've made fundamental improvements for the service to boost performance, reliability, and scalability. In this article, we'll describe all the great changes and new features that are available in this update! ## Overview
lab-services Migrate To 2022 Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/migrate-to-2022-update.md
This article applies to users of Azure Lab Services with labs created with a lab
In this article, you'll learn the sequence for getting started with the features and resources made available beginning with the August 2022 update. The Azure Lab Services August 2022 update includes important enhancements that boost performance, reliability, and scalability. It also gives you more flexibility in the way you manage labs, use capacity, and track costs.
->[!Important]
-> While you don't have to migrate to the August 2022 update of Azure Lab Services yet, we do recommend you begin using the update for all new labs.
## What's different in the update?
lab-services Reference Powershell Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/reference-powershell-module.md
# Az.LabServices PowerShell module for lab accounts in Azure Lab Services > [!NOTE] > To learn more about the integrated Az module experience available with the August 2022 Update, see [Quickstart: Create a lab plan using PowerShell and the Azure modules](quick-create-lab-plan-powershell.md).
lab-services Specify Marketplace Images 1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/specify-marketplace-images-1.md
Last updated 02/15/2022
# Specify Marketplace images available to lab creators in a lab account + As a lab account owner, you can specify the Marketplace images that lab creators can use to create labs in the lab account. ## Select images available for labs
machine-learning How To Deploy Kubernetes Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-kubernetes-extension.md
In this article, you can learn:
- [Disabling local accounts](../aks/managed-aad.md#disable-local-accounts) for AKS is **not supported** by Azure Machine Learning. When the AKS Cluster is deployed, local accounts are enabled by default. - If your AKS cluster has an [Authorized IP range enabled to access the API server](../aks/api-server-authorized-ip-ranges.md), enable the AzureML control plane IP ranges for the AKS cluster. The AzureML control plane is deployed across paired regions. Without access to the API server, the machine learning pods can't be deployed. Use the [IP ranges](https://www.microsoft.com/download/confirmation.aspx?id=56519) for both the [paired regions](../availability-zones/cross-region-replication-azure.md) when enabling the IP ranges in an AKS cluster. - Azure Machine Learning does not guarantee support for all preview stage features in AKS. For example, [Azure AD pod identity](../aks/use-azure-ad-pod-identity.md) is not supported.-- If you've previously followed the steps from [AzureML AKS v1 document](./v1/how-to-create-attach-kubernetes.md) to create or attach your AKS as inference cluster, use the following link to [clean up the legacy azureml-fe related resources](./v1/how-to-create-attach-kubernetes.md#delete-azureml-fe-related-resources) before you continue the next step.
+- If you've previously followed the steps from [AzureML AKS v1 document](./v1/how-to-create-attach-kubernetes.md) to create or attach your AKS as inference cluster, use the following link to [clean up the legacy azureml-fe related resources](./v1/how-to-create-attach-kubernetes.md#delete-azureml-fe-related-resources) before you continue the next step.
+- We currently don't support attaching your AKS cluster across subscriptions, which means that your AKS cluster must be in the same subscription as your workspace.
+  - To meet a cross-subscription requirement, first connect your AKS cluster to Azure Arc, and then attach the Arc-enabled Kubernetes resource.
## Review AzureML extension configuration settings
You can use AzureML CLI command `k8s-extension create` to deploy AzureML extensi
|`sslSecret`| The name of the Kubernetes secret in the `azureml` namespace. This config is used to store `cert.pem` (PEM-encoded TLS/SSL cert) and `key.pem` (PEM-encoded TLS/SSL key), which are required for inference HTTPS endpoint support when ``allowInsecureConnections`` is set to `False`. For a sample YAML definition of `sslSecret`, see [Configure sslSecret](./how-to-secure-kubernetes-online-endpoint.md#configure-sslsecret). Use this config or a combination of `sslCertPemFile` and `sslKeyPemFile` protected config settings. |N/A| Optional | Optional | |`sslCname` |An TLS/SSL CNAME is used by inference HTTPS endpoint. **Required** if `allowInsecureConnections=False` | N/A | Optional | Optional| | `inferenceRouterHA` |`True` or `False`, default `True`. By default, AzureML extension will deploy three inference router replicas for high availability, which requires at least three worker nodes in a cluster. Set to `False` if your cluster has fewer than three worker nodes, in this case only one inference router service is deployed. | N/A| Optional | Optional |
- |`nodeSelector` | By default, the deployed kubernetes resources are randomly deployed to one or more nodes of the cluster, and DaemonSet resources are deployed to ALL nodes. If you want to restrict the extension deployment to specific nodes with label `key1=value1` and `key2=value2`, use `nodeSelector.key1=value1`, `nodeSelector.key2=value2` correspondingly. | Optional| Optional | Optional |
+ |`nodeSelector` | By default, the deployed kubernetes resources and your machine learning workloads are randomly deployed to one or more nodes of the cluster, and DaemonSet resources are deployed to ALL nodes. If you want to restrict the extension deployment and your training/inference workloads to specific nodes with label `key1=value1` and `key2=value2`, use `nodeSelector.key1=value1`, `nodeSelector.key2=value2` correspondingly. | Optional| Optional | Optional |
|`installNvidiaDevicePlugin` | `True` or `False`, default `False`. [NVIDIA Device Plugin](https://github.com/NVIDIA/k8s-device-plugin#nvidia-device-plugin-for-kubernetes) is required for ML workloads on NVIDIA GPU hardware. By default, AzureML extension deployment won't install NVIDIA Device Plugin regardless Kubernetes cluster has GPU hardware or not. User can specify this setting to `True`, to install it, but make sure to fulfill [Prerequisites](https://github.com/NVIDIA/k8s-device-plugin#prerequisites). | Optional |Optional |Optional | |`installPromOp`|`True` or `False`, default `True`. AzureML extension needs prometheus operator to manage prometheus. Set to `False` to reuse the existing prometheus operator. For more information about reusing the existing prometheus operator, refer to [reusing the prometheus operator](./how-to-troubleshoot-kubernetes-extension.md#prometheus-operator)| Optional| Optional | Optional | |`installVolcano`| `True` or `False`, default `True`. AzureML extension needs volcano scheduler to schedule the job. Set to `False` to reuse existing volcano scheduler. For more information about reusing the existing volcano scheduler, refer to [reusing volcano scheduler](./how-to-troubleshoot-kubernetes-extension.md#volcano-scheduler) | Optional| N/A | Optional |
If you plan to deploy AzureML extension for real-time inference workload and wan
* Type `LoadBalancer`. Exposes `azureml-fe` externally using a cloud provider's load balancer. To specify this value, ensure that your cluster supports load balancer provisioning. Note most on-premises Kubernetes clusters might not support external load balancer. * Type `NodePort`. Exposes `azureml-fe` on each Node's IP at a static port. You'll be able to contact `azureml-fe`, from outside of cluster, by requesting `<NodeIP>:<NodePort>`. Using `NodePort` also allows you to set up your own load balancing solution and TLS/SSL termination for `azureml-fe`. * Type `ClusterIP`. Exposes `azureml-fe` on a cluster-internal IP, and it makes `azureml-fe` only reachable from within the cluster. For `azureml-fe` to serve inference requests coming outside of cluster, it requires you to set up your own load balancing solution and TLS/SSL termination for `azureml-fe`.
- * To ensure high availability of `azureml-fe` routing service, AzureML extension deployment by default creates three replicas of `azureml-fe` for clusters having three nodes or more. If your cluster has **less than 3 nodes**, set `inferenceLoadbalancerHA=False`.
+ * To ensure high availability of `azureml-fe` routing service, AzureML extension deployment by default creates three replicas of `azureml-fe` for clusters having three nodes or more. If your cluster has **less than 3 nodes**, set `inferenceRouterHA=False`.
* You also want to consider using **HTTPS** to restrict access to model endpoints and secure the data that clients submit. For this purpose, you would need to specify either `sslSecret` config setting or combination of `sslKeyPemFile` and `sslCertPemFile` config-protected settings. * By default, AzureML extension deployment expects config settings for **HTTPS** support. For development or testing purposes, **HTTP** support is conveniently provided through config setting `allowInsecureConnections=True`.
machine-learning How To Manage Kubernetes Instance Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-kubernetes-instance-types.md
and [resources](https://kubernetes.io/docs/concepts/configuration/manage-resourc
In short, a `nodeSelector` lets you specify which node a pod should run on. The node must have a corresponding label. In the `resources` section, you can set the compute resources (CPU, memory and NVIDIA GPU) for the pod.
+>[!IMPORTANT]
+>
+> If you have [specified a nodeSelector when deploying the AzureML extension](./how-to-deploy-kubernetes-extension.md#review-azureml-extension-configuration-settings), the nodeSelector will be applied to all instance types. This means that:
+> - For each instance type that you create, the specified nodeSelector should be a subset of the extension-specified nodeSelector.
+> - If you use an instance type **with a nodeSelector**, the workload will run on any node matching both the extension-specified nodeSelector and the instance type-specified nodeSelector.
+> - If you use an instance type **without a nodeSelector**, the workload will run on any node matching the extension-specified nodeSelector.
++ ## Default instance type By default, a `defaultinstancetype` with the following definition is created when you attach a Kubernetes cluster to an AzureML workspace:
machine-learning How To Secure Kubernetes Online Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-kubernetes-online-endpoint.md
TLS/SSL certificates expire and must be renewed. Typically, this happens every y
If you directly configured the PEM files in the extension deployment command before, you need to run the extension update command and specify the new PEM file's path: ```azurecli
- az k8s-extension update --name <extension-name> --extension-type Microsoft.AzureML.Kubernetes --config-protected sslCertPemFile=<file-path-to-cert-PEM> sslKeyPemFile=<file-path-to-cert-KEY> --cluster-type managedClusters --cluster-name <your-AKS-cluster-name> --resource-group <your-RG-name> --scope cluster
+ az k8s-extension update --name <extension-name> --extension-type Microsoft.AzureML.Kubernetes --config sslCname=<ssl cname> --config-protected sslCertPemFile=<file-path-to-cert-PEM> sslKeyPemFile=<file-path-to-cert-KEY> --cluster-type managedClusters --cluster-name <your-AKS-cluster-name> --resource-group <your-RG-name> --scope cluster
``` ## Disable TLS
To disable TLS for a model deployed to Kubernetes:
1. Run the following Azure CLI command in your Kubernetes cluster, and then perform an update. This command assumes that you're using AKS. ```azurecli
- az k8s-extension create --name <extension-name> --extension-type Microsoft.AzureML.Kubernetes --config enableInference=True inferenceRouterServiceType=LoadBalancer allowInsercureconnection=True --cluster-type managedClusters --cluster-name <your-AKS-cluster-name> --resource-group <your-RG-name> --scope cluster
+ az k8s-extension update --name <extension-name> --extension-type Microsoft.AzureML.Kubernetes --config enableInference=True inferenceRouterServiceType=LoadBalancer allowInsercureconnection=True --cluster-type managedClusters --cluster-name <your-AKS-cluster-name> --resource-group <your-RG-name> --scope cluster
``` > [!WARNING]
machine-learning How To Troubleshoot Kubernetes Compute https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-kubernetes-compute.md
Below is a list of error types in **compute scope** that you might encounter whe
* [ERROR: GenericComputeError](#error-genericcomputeerror) * [ERROR: ComputeNotFound](#error-computenotfound) * [ERROR: ComputeNotAccessible](#error-computenotaccessible)
+* [ERROR: InvalidComputeInformation](#error-invalidcomputeinformation)
+* [ERROR: InvalidComputeNoKubernetesConfiguration](#error-invalidcomputenokubernetesconfiguration)
#### ERROR: GenericComputeError
Cannot find Kubernetes compute.
This error should occur when: * The system can't find the compute when creating/updating a new online endpoint/deployment.
-* The compute of existing online endpoints/deployments have been removed.
+* The compute of existing online endpoints/deployments has been removed.
You can check the following items to troubleshoot the issue: * Try to recreate the endpoint and deployment.
The Kubernetes compute is not accessible.
This error should occur when the workspace MSI (managed identity) doesn't have access to the AKS cluster. Check whether the workspace MSI has access to the AKS cluster; if not, follow this [document](how-to-identity-based-service-authentication.md) to manage access and identity.
+#### ERROR: InvalidComputeInformation
+
+The error message is as follows:
+
+```bash
+The compute information is invalid.
+```
+There is a compute target validation process when deploying models to your Kubernetes cluster. This error occurs when the compute information fails validation; for example, the compute target isn't found, or the configuration of the Azure Machine Learning extension has been updated in your Kubernetes cluster.
+
+You can check the following items to troubleshoot the issue:
+* Check whether the compute target you used is correct and exists in your workspace.
+* Try to detach and reattach the compute to the workspace. See the additional notes on [reattaching](#error-genericcomputeerror).
+
+#### ERROR: InvalidComputeNoKubernetesConfiguration
+
+The error message is as follows:
+
+```bash
+The compute kubeconfig is invalid.
+```
+
+This error should occur when the system fails to find any configuration to connect to the cluster, such as:
+* For an Arc-enabled Kubernetes cluster, no Azure Relay configuration can be found.
+* For an AKS cluster, no AKS configuration can be found.
+
+To rebuild the configuration of the compute connection in your cluster, try to detach and reattach the compute to the workspace. See the additional notes on [reattaching](#error-genericcomputeerror).
+ ### Kubernetes cluster error Below is a list of error types in **cluster scope** that you might encounter when using Kubernetes compute to create online endpoints and online deployments for real-time model inference, which you can troubleshoot by following the guideline: * [ERROR: GenericClusterError](#error-genericclustererror) * [ERROR: ClusterNotReachable](#error-clusternotreachable)
+* [ERROR: ClusterNotFound](#error-clusternotfound)
#### ERROR: GenericClusterError
For AKS clusters:
For an AKS cluster or an Azure Arc enabled Kubernetes cluster:
-1. Check if the Kubernetes API server is accessible by running `kubectl` command in cluster.
+* Check if the Kubernetes API server is accessible by running a `kubectl` command in the cluster.
#### ERROR: ClusterNotReachable
For AKS clusters:
For an AKS cluster or an Azure Arc enabled Kubernetes cluster: * Check if the Kubernetes API server is accessible by running `kubectl` command in cluster.
+#### ERROR: ClusterNotFound
+
+The error message is as follows:
+
+```bash
+Cannot found Kubernetes cluster.
+```
+
+This error should occur when the system can't find the AKS or Arc-enabled Kubernetes cluster.
+
+You can check the following items to troubleshoot the issue:
+* First, check the cluster resource ID in the Azure portal to verify whether the Kubernetes cluster resource still exists and is running normally.
+* If the cluster exists and is running, try to detach and reattach the compute to the workspace. See the additional notes on [reattaching](#error-genericcomputeerror).
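+
+For example, a quick existence check from Azure PowerShell (the Az.Aks module and the resource names here are assumptions):
+
+```powershell
+# Verify that the AKS cluster resource still exists and check its state.
+Get-AzAksCluster -ResourceGroupName 'my-rg' -Name 'my-aks-cluster' |
+    Select-Object Name, ProvisioningState
+```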
+
> [!TIP]
> For more troubleshooting guidance on common errors when creating or updating Kubernetes online endpoints and deployments, see [How to troubleshoot online endpoints](how-to-troubleshoot-online-endpoints.md).
+ ## Training guide
machine-learning How To Troubleshoot Kubernetes Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-kubernetes-extension.md
volcano-scheduler.conf: |
- name: nodeorder - name: binpack ```
-You need to use the same config settings as above, and disable `job/validate` webhook in the volcano admission, so that AzureML training workloads can perform properly.
+You need to use the same config settings as above, and if your **volcano version is lower than 1.6**, you also need to disable the `job/validate` webhook in the volcano admission, so that AzureML training workloads can perform properly.
+
+#### Volcano scheduler integration supporting cluster autoscaler
+As discussed in this [thread](https://github.com/volcano-sh/volcano/issues/2558), the **gang plugin** doesn't work well with the cluster autoscaler (CA) or the node autoscaler in AKS.
+
+If you use the volcano scheduler that comes with the AzureML extension by setting `installVolcano=true`, the extension has a scheduler config by default, which configures the **gang** plugin to prevent job deadlock. Therefore, the cluster autoscaler (CA) in the AKS cluster isn't supported with the volcano scheduler installed by the extension.
+
+In this case, if you prefer that the AKS cluster autoscaler works normally, you can configure the `volcanoScheduler.schedulerConfigMap` parameter when updating the extension, and specify a custom **no gang** volcano scheduler config to it, for example:
+
+```yaml
+volcano-scheduler.conf: |
+ actions: "enqueue, allocate, backfill"
+ tiers:
+ - plugins:
+ - name: sla
+ arguments:
+ sla-waiting-time: 1m
+ - plugins:
+ - name: conformance
+ - plugins:
+ - name: overcommit
+ - name: drf
+ - name: predicates
+ - name: proportion
+ - name: nodeorder
+ - name: binpack
+```
+
+To use this config in your AKS cluster, you need to follow the steps below:
+1. Create a configmap file with the above config in the `azureml` namespace. This namespace is generally created when you install the AzureML extension. (A `kubectl` sketch for this step follows the list.)
+1. Set `volcanoScheduler.schedulerConfigMap=<configmap name>` in the extension config to apply this configmap. You also need to skip the resource validation when installing the extension by configuring `amloperator.skipResourceValidation=true`. For example:
+ ```azurecli
+ az k8s-extension update --name <extension-name> --extension-type Microsoft.AzureML.Kubernetes --config volcanoScheduler.schedulerConfigMap=<configmap name> amloperator.skipResourceValidation=true --cluster-type managedClusters --cluster-name <your-AKS-cluster-name> --resource-group <your-RG-name> --scope cluster
+ ```
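+
+A sketch of step 1 with `kubectl` (the configmap name is illustrative; it assumes a local file `volcano-scheduler.conf` containing the scheduler configuration shown above, that is, the content under the `volcano-scheduler.conf: |` key):
+
+```powershell
+# Create the scheduler configmap in the azureml namespace from a local file;
+# the file name becomes the configmap key.
+kubectl create configmap no-gang-scheduler-config -n azureml --from-file=volcano-scheduler.conf
+```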
+
+> [!NOTE]
+> Since the gang plugin is removed, there's potential that the deadlock happens when volcano schedules the job.
+>
+> * To avoid this situation, you can **use same instance type across the jobs**.
+>
+> Note that you need to disable `job/validate` webhook in the volcano admission if your **volcano version is lower than 1.6**.
+
machine-learning How To Troubleshoot Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-online-endpoints.md
To run the `score.py` provided as part of the deployment, Azure creates a contai
- A failure in the `init()` method. - If `get-logs` isn't producing any logs, it usually means that the container has failed to start. To debug this issue, try [deploying locally](#deploy-locally) instead. - Readiness or liveness probes aren't set up correctly.-- There's an error in the environment setup of the container, such as a missing dependency.
+- There's an error in the environment setup of the container, such as a missing dependency.
- When you face `TypeError: register() takes 3 positional arguments but 4 were given` error, the error may be caused by the dependency between flask v2 and `azureml-inference-server-http`. See [FAQs for inference HTTP server](how-to-inference-server-http.md#1-i-encountered-the-following-error-during-server-startup) for more details. ### ERROR: ResourceNotFound
Retrying the operation after waiting several seconds up to a minute may allow it
### ERROR: NamespaceNotFound
-The reason you might run into this error when using Kubernetes online endpoint is because the namespace your Kubernetes compute used is unavailable in your cluster.
+The reason you might run into this error when creating/updating Kubernetes online endpoints is that the namespace your Kubernetes compute uses is unavailable in your cluster.
You can check the Kubernetes compute in your workspace portal and check the namespace in your Kubernetes cluster. If the namespace is not available, you can detach the legacy compute and re-attach to create a new one, specifying a namespace that already exists in your cluster.
-### ERROR: KubernetesCrashLoopBackOff
+### ERROR: EndpointAlreadyExists
-Below is a list of reasons you might run into this error when using Kubernetes online endpoint:
-* There is an error in `score.py` and the container crashed when init your score code, please following [ERROR: ResourceNotReady](#error-resourcenotfound) part.
-* Your scoring process needs more memory that your deployment config limit is insufficient, you can try to update the deployment with a larger memory limit.
+The reason you might run into this error when creating a Kubernetes online endpoint is that an endpoint with the same name already exists in your cluster.
+
+The endpoint name should be unique per workspace and per cluster, so in this case, you should create the endpoint with another name.
+
+### ERROR: ScoringFeUnhealthy
+
+The reason you might run into this error when creating/updating a Kubernetes online endpoint/deployment is that [azureml-fe](how-to-kubernetes-inference-routing-azureml-fe.md), the system service that runs in the cluster, isn't found or is unhealthy.
+
+To troubleshoot this issue, reinstall or update the Azure Machine Learning extension in your cluster.
### ERROR: ACRSecretError
-Below is a list of reasons you might run into this error when using Kubernetes online endpoint:
+Below is a list of reasons you might run into this error when creating/updating the Kubernetes online deployments:
* Role assignment has not yet been completed. In this case, please wait for a few seconds and try again later.
-* The Azure ARC (For Azure Arc Kubernetes cluster) or AMLArc extension (For AKS) is not properly installed or configured. Please try to check the Azure ARC or AMLArc extension configuration and status.
-* The Kubernetes cluster has improper network configuration, please check the proxy, network policy or certificate.
+* The Azure Arc (for Azure Arc-enabled Kubernetes clusters) or Azure Machine Learning extension (for AKS) is not properly installed or configured. Check the Azure Arc or Azure Machine Learning extension configuration and status.
+* The Kubernetes cluster has an improper network configuration. Check the proxy, network policy, or certificate.
+   * If you are using a private AKS cluster, it's necessary to set up private endpoints for ACR, the storage account, and the workspace in the AKS virtual network.
+
+### ERROR: EndpointNotFound
+
+The reason you might run into this error when creating/updating Kubernetes online deployments is that the system can't find the endpoint resource for the deployment in the cluster. Create the deployment under an existing endpoint, or create the endpoint first in your cluster.
+
+### ERROR: ValidateScoringFailed
+
+The reason you might run into this error when creating/updating Kubernetes online deployments is that validation of the scoring request URL failed while processing the model deployment.
+
+In this case, first check the endpoint URL and then try to redeploy the deployment.
+
+### ERROR: InvalidDeploymentSpec
+
+The reason you might run into this error when creating/updating Kubernetes online deployments is that the deployment spec is invalid.
+
+In this case, you can check the error message.
+* Make sure the `instance count` is valid.
+* If you have enabled auto scaling, make sure the `minimum instance count` and `maximum instance count` are both valid.
+
+### ERROR: ImagePullLoopBackOff
+
+The reason you might run into this error when creating/updating Kubernetes online deployments is that the images can't be downloaded from the container registry, resulting in an image pull failure.
+
+In this case, check the cluster network policy and the workspace container registry to verify that the cluster can pull images from the container registry.
+
+### ERROR: KubernetesCrashLoopBackOff
+
+Below is a list of reasons you might run into this error when creating/updating the Kubernetes online endpoints/deployments:
+* One or more pods are stuck in CrashLoopBackoff status. Check whether the deployment log exists and whether there are error messages in the log.
+* There's an error in `score.py` and the container crashed when initializing your scoring code. Follow the [ERROR: ResourceNotReady](#error-resourcenotready) section.
+* Your scoring process needs more memory than your deployment config limit allows. Try to update the deployment with a larger memory limit.
+
+### ERROR: PodUnschedulable
+
+Below is a list of reasons you might run into this error when creating/updating the Kubernetes online endpoints/deployments:
+* The pod can't be scheduled to any node, because of insufficient resources in your cluster.
+* No node matches the node affinity/selector.
+
+To mitigate this error, refer to the following steps:
+* Check the `node selector` definition of the `instance type` you used, and `node label` configuration of your cluster nodes.
+* Check the `instance type` and the node SKU size for an AKS cluster, or the node resources for an Arc-enabled Kubernetes cluster.
+   * If the cluster is under-resourced, you can reduce the resource requirements of the instance type or use another instance type with smaller resource requirements.
+* If the cluster has no more resources to meet the requirements of the deployment, delete some deployments to release resources.
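+
+To see why a specific pod is pending, you can describe it (the namespace and pod name are placeholders); the `Events` section shows the scheduling failure reason, such as insufficient CPU or memory, or no node matching the selector:
+
+```powershell
+# Inspect scheduling events for a pending pod.
+kubectl describe pod <pod-name> -n <namespace>
+```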
+ ### ERROR: InferencingClientCallFailed
-The reason you might run into this error when using Kubernetes online endpoint is because the k8s-extension of the Kubernetes cluster is not connectable.
+The reason you might run into this error when creating/updating Kubernetes online endpoints/deployments is that the k8s-extension of the Kubernetes cluster isn't connectable.
In this case, you can detach and then **re-attach** your compute.
Managed online endpoints have bandwidth limits for each endpoint. You find the l
When you access online endpoints with REST requests, the returned status codes adhere to the standards for [HTTP status codes](https://aka.ms/http-status-codes). Below are details about how endpoint invocation and prediction errors map to HTTP status codes.
+#### Common error codes for managed online endpoints
Below are common error codes when consuming managed online endpoints with REST requests: | Status code | Reason phrase | Why this code might get returned |
Below are common error codes when consuming managed online endpoints with REST r
| 429 | Rate-limiting | The number of requests per second reached the [limit](./how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints) of managed online endpoints. |
| 500 | Internal server error | AzureML-provisioned infrastructure is failing. |
+#### Common error codes for Kubernetes online endpoints
+
+Below are common error codes when consuming Kubernetes online endpoints with REST requests:
+
+| Status code | Reason phrase | Why this code might get returned |
+|--|--|--|
Below are common error codes when consuming Kubernetes online endpoints with RES
| 409 | Conflict error | When an operation is already in progress, any new operation on the same online endpoint responds with a 409 conflict error. For example, if a create or update operation is in progress and you trigger a new delete operation, it throws an error. |
| 502 | Has thrown an exception or crashed in the `run()` method of the score.py file | When there's an error in `score.py`, for example an imported package doesn't exist in the conda environment, a syntax error, or a failure in the `init()` method. You can follow [here](#error-resourcenotready) to debug the file. |
| 503 | Receive large spikes in requests per second | The autoscaler is designed to handle gradual changes in load. If you receive large spikes in requests per second, clients may receive an HTTP status code 503. Even though the autoscaler reacts quickly, it takes AKS a significant amount of time to create more containers. You can follow [here](#how-to-prevent-503-status-codes) to prevent 503 status codes. |
-| 504 | Request has timed out | A 504 status code indicates that the request has timed out. The default timeout is 1 minute. You can increase the timeout or try to speed up the endpoint by modifying the score.py to remove unnecessary calls. If these actions don't correct the problem, you can follow [here](#error-resourcenotready) to debug the score.py file. The code may be in a non-responsive state or an infinite loop. |
+| 504 | Request has timed out | A 504 status code indicates that the request has timed out. The default timeout setting is 5 seconds. You can increase the timeout or try to speed up the endpoint by modifying the score.py to remove unnecessary calls. If these actions don't correct the problem, you can follow [here](#error-resourcenotready) to debug the score.py file. The code may be in a non-responsive state or an infinite loop. |
| 500 | Internal server error | Azure ML-provisioned infrastructure is failing. |
There are two things that can help prevent 503 status codes:
```

> [!NOTE]
- > If you receive request spikes larger than the new minimum replicas can handle, you may receive 503s again. For example, as traffic to your endpoint increases, you may need to increase the minimum replicas.
+ > If you receive request spikes larger than the new minimum replicas can handle, you may receive 503 errors again. For example, as traffic to your endpoint increases, you may need to increase the minimum replicas.
#### How to calculate instance count

To increase the number of instances, you can calculate the required replicas by using the following code:
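A minimal sketch of that calculation (the values are illustrative; substitute your own measured throughput, latency, and utilization targets):

```python
from math import ceil

target_rps = 20                           # target requests per second
request_process_time = 10                 # seconds to process one request
max_concurrent_requests_per_instance = 1  # concurrency limit per instance
target_utilization = 0.7                  # target CPU utilization (70%)

concurrent_requests = target_rps * request_process_time / target_utilization

# Required number of instances for the deployment
instance_count = ceil(concurrent_requests / max_concurrent_requests_per_instance)
print(instance_count)  # 286 with these example values
```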
We recommend that you use Azure Functions, Azure Application Gateway, or any ser
- [Deploy and score a machine learning model by using an online endpoint](how-to-deploy-online-endpoints.md) - [Safe rollout for online endpoints](how-to-safely-rollout-online-endpoints.md)-- [Online endpoint YAML reference](reference-yaml-endpoint-online.md)
+- [Online endpoint YAML reference](reference-yaml-endpoint-online.md)
+- [Troubleshoot Kubernetes compute](how-to-troubleshoot-kubernetes-compute.md)
machine-learning Reference Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-kubernetes.md
When you deploy the AzureML extension, some related services will be deployed to
|volcano-scheduler |1 |N/A |**&check;**|50|500|128|512|
-Excluding the user deployments/pods, the **total minimum system resources requirements** are as follows:
+Excluding your own deployments/pods, the **total minimum system resource requirements** are as follows:
|Scenario | Enabled Inference | Enabled Training | CPU Request(m) |CPU Limit(m)| Memory Request(Mi) | Memory Limit(Mi) | Node count | Recommended minimum VM size | Corresponding AKS VM SKU |
|-- |-- |--|--|--|--|--|--|--|--|
For AzureML extension deployment on ARO or OCP cluster, grant privileged access
> * `{EXTENSION-NAME}`: is the extension name specified with the `az k8s-extension create --name` CLI command. >* `{KUBERNETES-COMPUTE-NAMESPACE}`: is the namespace of the Kubernetes compute specified when attaching the compute to the Azure Machine Learning workspace. Skip configuring `system:serviceaccount:{KUBERNETES-COMPUTE-NAMESPACE}:default` if `KUBERNETES-COMPUTE-NAMESPACE` is `default`.
+## Collected log details
+
+Some logs about AzureML workloads in the cluster, such as status, metrics, and life cycle events, are collected through extension components. The following list shows all the log details collected, including the type of logs and where they're sent or stored.
+
+|Pod |Resource description |Detail logging info |
+|--|--|--|
+|amlarc-identity-controller |Request and renew Azure Blob/Azure Container Registry token through managed identity. |Only used when `enableInference=true` is set when installing the extension. It has trace logs for status on getting identity for endpoints to authenticate with AzureML service.|
+|amlarc-identity-proxy |Request and renew Azure Blob/Azure Container Registry token through managed identity. |Only used when `enableInference=true` is set when installing the extension. It has trace logs for status on getting identity for the cluster to authenticate with AzureML service.|
+|aml-operator | Manage the lifecycle of training jobs. |The logs contain AzureML training job pod status in the cluster.|
+|azureml-fe-v2| The front-end component that routes incoming inference requests to deployed services. |Access logs at request level, including request ID, start time, response code, error details and durations for request latency. Trace logs for service metadata changes, service running healthy status, etc. for debugging purpose.|
+| gateway | The gateway is used to communicate and send data back and forth. | Trace logs on requests from AzureML services to the clusters.|
+|healthcheck |--| The logs contain azureml namespace resource (AzureML extension) status to diagnose what makes the extension nonfunctional. |
+|inference-operator-controller-manager| Manage the lifecycle of inference endpoints. |The logs contain AzureML inference endpoint and deployment pod status in the cluster.|
+| metrics-controller-manager | Manage the configuration for Prometheus.|Trace logs for status of uploading training job and inference deployment metrics on CPU utilization and memory utilization.|
+| relay server | The relay server is only needed in Arc-connected clusters and isn't installed in AKS clusters.| The relay server works with Azure Relay to communicate with the cloud services. The logs contain request-level info from Azure Relay. |
+
## AzureML jobs connect with custom data storage
According to your scheduling requirements of the Azureml-dedicated nodes, you ca
- `amlarc workspace (has this <compute X>)` taint - `amlarc <compute X>` taint
+
+## Integrate other load balancers with AzureML extension over HTTP or HTTPS
+
+In addition to the default AzureML inference load balancer [azureml-fe](../machine-learning/how-to-kubernetes-inference-routing-azureml-fe.md), you can also integrate other load balancers with AzureML extension over HTTP or HTTPS.
+
+This tutorial shows how to integrate the [Nginx Ingress Controller](https://github.com/kubernetes/ingress-nginx) or the [Azure Application Gateway](../application-gateway/overview.md).
+
+### Prerequisites
+
+- [Deploy the AzureML extension](../machine-learning/how-to-deploy-kubernetes-extension.md) with `inferenceRouterServiceType=ClusterIP` and `allowInsecureConnections=True`, so that the Nginx Ingress Controller can handle TLS termination by itself instead of handing it over to [azureml-fe](../machine-learning/how-to-kubernetes-inference-routing-azureml-fe.md) when the service is exposed over HTTPS.
+- For integrating with the **Nginx Ingress Controller**, you'll need a Kubernetes cluster set up with the Nginx Ingress Controller.
+  - [**Create a basic controller**](../aks/ingress-basic.md): If you're starting from scratch, refer to these instructions.
+- For integrating with **Azure Application Gateway**, you'll need a Kubernetes cluster set up with the Azure Application Gateway Ingress Controller.
+  - [**Greenfield Deployment**](../application-gateway/tutorial-ingress-controller-add-on-new.md): If you're starting from scratch, refer to these instructions.
+  - [**Brownfield Deployment**](../application-gateway/tutorial-ingress-controller-add-on-existing.md): If you have an existing AKS cluster and Application Gateway, refer to these instructions.
+- If you want to use HTTPS on this application, you'll need an X.509 certificate and its private key.
+
+### Expose services over HTTP
+
+To expose azureml-fe, we'll use the following ingress resource:
+
+```yaml
+# Nginx Ingress Controller example
+apiVersion: networking.k8s.io/v1
+kind: Ingress
+metadata:
+ name: azureml-fe
+ namespace: azureml
+spec:
+ ingressClassName: nginx
+ rules:
+ - http:
+ paths:
+ - path: /
+ backend:
+ service:
+ name: azureml-fe
+ port:
+ number: 80
+ pathType: Prefix
+```
+This ingress will expose the `azureml-fe` service and the selected deployment as a default backend of the Nginx Ingress Controller.
+++
+```yaml
+# Azure Application Gateway example
+apiVersion: networking.k8s.io/v1
+kind: Ingress
+metadata:
+ name: azureml-fe
+ namespace: azureml
+spec:
+ ingressClassName: azure-application-gateway
+ rules:
+ - http:
+ paths:
+ - path: /
+ backend:
+ service:
+ name: azureml-fe
+ port:
+ number: 80
+ pathType: Prefix
+```
+This ingress will expose the `azureml-fe` service and the selected deployment as a default backend of the Application Gateway.
+
+Save the above ingress resource as `ing-azureml-fe.yaml`.
+
+1. Deploy `ing-azureml-fe.yaml` by running:
+
+ ```bash
+ kubectl apply -f ing-azureml-fe.yaml
+ ```
+
+2. Check the log of the ingress controller for deployment status.
+
+3. Now the `azureml-fe` application should be available. You can check this by visiting:
+ - **Nginx Ingress Controller**: the public LoadBalancer address of Nginx Ingress Controller
+ - **Azure Application Gateway**: the public address of the Application Gateway.
+4. [Create an inference job and invoke](https://github.com/Azure/AML-Kubernetes/blob/master/docs/simple-flow.md).
+
+ >[!NOTE]
+ >
+ > Replace the IP in scoring_uri with the public LoadBalancer address of the Nginx Ingress Controller before invoking, as in the sketch below.
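+
+   A minimal sketch of such an invocation (the IP address, endpoint name, and key are placeholders for your own values):
+
+   ```python
+   import requests
+
+   # Placeholder scoring URI: substitute the public LoadBalancer address of
+   # the Nginx Ingress Controller (or Application Gateway) and your endpoint name.
+   scoring_uri = "http://<loadbalancer-ip>/api/v1/endpoint/<endpoint-name>/score"
+   headers = {
+       "Content-Type": "application/json",
+       "Authorization": "Bearer <endpoint-key>",  # placeholder endpoint key
+   }
+   data = {"data": [[1, 2, 3, 4]]}  # input shape depends on your model
+
+   response = requests.post(scoring_uri, json=data, headers=headers)
+   print(response.status_code, response.text)
+   ```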
+
+### Expose services over HTTPS
+
+1. Before deploying ingress, you need to create a Kubernetes secret to host the certificate and private key. You can create a Kubernetes secret by running:
+
+ ```bash
+ kubectl create secret tls <ingress-secret-name> -n azureml --key <path-to-key> --cert <path-to-cert>
+ ```
+
+2. Define the following ingress. In the ingress, specify the name of the secret in the `secretName` section.
+
+ ```yaml
+ # Nginx Ingress Controller example
+ apiVersion: networking.k8s.io/v1
+ kind: Ingress
+ metadata:
+ name: azureml-fe
+ namespace: azureml
+ spec:
+ ingressClassName: nginx
+ tls:
+ - hosts:
+ - <domain>
+ secretName: <ingress-secret-name>
+ rules:
+ - host: <domain>
+ http:
+ paths:
+ - path: /
+ backend:
+ service:
+ name: azureml-fe
+ port:
+ number: 80
+ pathType: Prefix
+ ```
+
+ ```yaml
+ # Azure Application Gateway example
+ apiVersion: networking.k8s.io/v1
+ kind: Ingress
+ metadata:
+ name: azureml-fe
+ namespace: azureml
+ spec:
+ ingressClassName: azure-application-gateway
+ tls:
+ - hosts:
+ - <domain>
+ secretName: <ingress-secret-name>
+ rules:
+ - host: <domain>
+ http:
+ paths:
+ - path: /
+ backend:
+ service:
+ name: azureml-fe
+ port:
+ number: 80
+ pathType: Prefix
+ ```
+
+ >[!NOTE]
+ >
+ > Replace `<domain>` and `<ingress-secret-name>` in the above Ingress Resource with the domain pointing to the LoadBalancer of the **Nginx ingress controller/Application Gateway** and the name of your secret. Store the above Ingress Resource in a file named `ing-azureml-fe-tls.yaml`.
+
+1. Deploy `ing-azureml-fe-tls.yaml` by running:
+
+ ```bash
+ kubectl apply -f ing-azureml-fe-tls.yaml
+ ```
+
+2. Check the log of the ingress controller for deployment status.
+
+3. Now the `azureml-fe` application is available over HTTPS. You can check this by visiting the public LoadBalancer address of the Nginx Ingress Controller.
+
+4. [Create an inference job and invoke](../machine-learning/how-to-deploy-online-endpoints.md).
+
+ >[!NOTE]
+ >
+ > Replace the protocol and IP in scoring_uri with `https` and the domain pointing to the LoadBalancer of the Nginx Ingress Controller or the Application Gateway before invoking.
+
+## Use an ARM template to deploy the extension
+The extension on a managed cluster can be deployed with an ARM template. A sample template can be found at [deployextension.json](https://github.com/Azure/AML-Kubernetes/blob/master/files/deployextension.json), with a demo parameter file [deployextension.parameters.json](https://github.com/Azure/AML-Kubernetes/blob/master/files/deployextension.parameters.json).
+
+To use the sample deployment template, edit the parameter file with the correct values, then run the following command:
+
+```azurecli
+az deployment group create --name <ARM deployment name> --resource-group <resource group name> --template-file deployextension.json --parameters deployextension.parameters.json
+```
+For more information about how to use ARM templates, see the [ARM template documentation](../azure-resource-manager/templates/overview.md)
++
## AzureML extension release note

> [!NOTE]
>
- > New features are released at a biweekly cadance.
+ > New features are released at a biweekly cadence.
| Date | Version |Version description | ||||
-| Aug 29, 2022 | 1.1.9 | Improved health check logic. Bugs fixed.|
-| Jun 23, 2022 | 1.1.6 | Bugs fixed. |
-| Jun 15, 2022 | 1.1.5 | Updated training to use new common runtime to run jobs. Removed Azure Relay usage for AKS extension. Removed service bus usage from the extension. Updated security context usage. Updated inference scorefe to v2. Updated to use Volcano as training job scheduler. Bugs fixed. |
+| Dec 27, 2022 | 1.1.17 | Move the Fluent-bit from DaemonSet to sidecars. Add MDC support. Refine error messages. Support cluster mode (windows, linux) jobs. Bug fixes.|
+| Nov 29, 2022 | 1.1.16 |Add instance type validation by new CRD. Support Tolerance. Shorten SVC Name. Workload Core hour. Multiple bug fixes and improvements. |
+| Sep 13, 2022 | 1.1.10 | Bug fixes.|
+| Aug 29, 2022 | 1.1.9 | Improved health check logic. Bug fixes.|
+| Jun 23, 2022 | 1.1.6 | Bug fixes. |
+| Jun 15, 2022 | 1.1.5 | Updated training to use new common runtime to run jobs. Removed Azure Relay usage for AKS extension. Removed service bus usage from the extension. Updated security context usage. Updated inference azureml-fe to v2. Updated to use Volcano as training job scheduler. Bug fixes. |
| Oct 14, 2021 | 1.0.37 | PV/PVC volume mount support in AMLArc training job. |
-| Sept 16, 2021 | 1.0.29 | New regions available, WestUS, CentralUS, NorthCentralUS, KoreaCentral. Job queue explainability. See job queue details in AML Workspace Studio. Auto-killing policy. Support max_run_duration_seconds in ScriptRunConfig. The system will attempt to automatically cancel the run if it took longer than the setting value. Performance improvement on cluster autoscale support. Arc agent and ML extension deployment from on premises container registry.|
+| Sept 16, 2021 | 1.0.29 | New regions available, WestUS, CentralUS, NorthCentralUS, KoreaCentral. Job queue explainability. See job queue details in AML Workspace Studio. Auto-killing policy. Support max_run_duration_seconds in ScriptRunConfig. The system will attempt to automatically cancel the run if it took longer than the setting value. Performance improvement on cluster auto scaling support. Arc agent and ML extension deployment from on premises container registry.|
| August 24, 2021 | 1.0.28 | Compute instance type is supported in job YAML. Assign Managed Identity to AMLArc compute.|
| August 10, 2021 | 1.0.20 |New Kubernetes distribution support, K3S - Lightweight Kubernetes. Deploy AzureML extension to your AKS cluster without connecting via Azure Arc. Automated Machine Learning (AutoML) via Python SDK. Use 2.0 CLI to attach the Kubernetes cluster to AML Workspace. Optimize AzureML extension components CPU/memory resources utilization.|
| July 2, 2021 | 1.0.13 | New Kubernetes distributions support, OpenShift Kubernetes and GKE (Google Kubernetes Engine). Autoscale support. If the user-managed Kubernetes cluster enables the autoscale, the cluster will be automatically scaled out or scaled in according to the volume of active runs and deployments. Performance improvement on job launcher, which shortens the job execution time a great deal.|
machine-learning How To Deploy Advanced Entry Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-deploy-advanced-entry-script.md
def run(request):
# For a real-world solution, you would load the data from reqBody # and send it to the model. Then return the response.
- # For demonstration purposes, this example just returns the size of the image as the response..
+ # For demonstration purposes, this example just returns the size of the image as the response.
        return AMLResponse(json.dumps(image.size), 200)
    else:
        return AMLResponse("bad request", 500)
def run(request):
> pip install azureml-contrib-services
> ```
+> [!NOTE]
+> 500 isn't recommended as a custom status code because, on the azureml-fe side, the status code is rewritten to 502 (see the sketch after this note).
+> * The status code is passed through azureml-fe and then sent to the client.
+> * azureml-fe only rewrites a 500 returned from the model side to 502; the client receives 502.
+> * But if azureml-fe itself returns 500, the client still receives 500.
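+
+A minimal sketch of returning a custom, non-500 status code from `run()` with `AMLResponse` (the 405 code and message here are illustrative):
+
+```python
+from azureml.contrib.services.aml_request import rawhttp
+from azureml.contrib.services.aml_response import AMLResponse
+
+@rawhttp
+def run(request):
+    if request.method == 'POST':
+        # ... handle the scoring request here ...
+        return AMLResponse("ok", 200)
+    # Prefer a specific 4xx code over 500, which azureml-fe rewrites to 502.
+    return AMLResponse("method not allowed", 405)
+```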
++ The `AMLRequest` class only allows you to access the raw posted data in score.py; there's no client-side component. From a client, you post data as normal. For example, the following Python code reads an image file and posts the data:

```python
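import requests

# Sketch of the truncated sample; the URI below is a placeholder for your
# deployed service's actual scoring URI.
scoring_uri = "http://<service-ip>:<port>/score"
image_path = "test.jpg"

# Read the raw image bytes and post them to the service.
with open(image_path, "rb") as f:
    data = f.read()

headers = {"Content-Type": "application/octet-stream"}
response = requests.post(scoring_uri, data=data, headers=headers)
print(response.text)
```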
marketplace Azure Container Technical Assets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/azure-container-technical-assets.md
You can't deploy an image to Azure Container Instances from an on-premises regis
- If you already have a working container in your local registry, create an Azure Registry and upload your container image to the Azure Container Registry. To learn more, see [Tutorial: Build and deploy container images in the cloud with Azure Container Registry Tasks](../container-registry/container-registry-tutorial-quick-task.md). -- If don't have a container image yet and need to containerize your existing application or create a new container based application, clone the application source code from GitHub, create a container image from the application source, and test the image in a local Docker environment. To learn more, see [Tutorial: Create a container image for deployment to Azure Container Instances](../container-instances/container-instances-tutorial-prepare-app.md).
+- If you don't have a container image yet, and you need to containerize your existing application or create a new container-based application, clone the application source code from GitHub, create a container image from the application source, and test the image in a local Docker environment. To learn more, see [Tutorial: Create a container image for deployment to Azure Container Instances](../container-instances/container-instances-tutorial-prepare-app.md).
## Next steps -- [Create your container offer](azure-container-offer-setup.md)
+- [Create your container offer](azure-container-offer-setup.md)
migrate Common Questions Business Case https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/common-questions-business-case.md
This article answers common questions about Business case in Azure Migrate. If y
## General
-### Why is the export gesture disabled?
+### How can I export the business case?
-Currently, Business case export in .xlsx file is not supported.
+You can select **Export** in the business case to export it to an .xlsx file. If the **Export** option is disabled, recalculate the business case by modifying any one assumption (Azure or on-premises) in the business case and saving it. For example:
+ 1. Go to the business case, select **Edit assumptions**, and choose **Azure assumptions**.
+ 1. Select **Reset** next to the 'Performance history duration date range is outdated.' warning. You can also choose to change any other setting.
+ 1. Select **Save**.
+
+This recalculates the business case with the updated assumptions and enables the **Export** option.
### What is the difference between an assessment and a business case?
migrate How To Build A Business Case https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-build-a-business-case.md
There are three types of migration strategies that you can choose while building
- With the default *Azure recommended approach to minimize cost*, you can get the most cost-efficient and compatible target recommendation in Azure across Azure IaaS and Azure PaaS targets. - With *Migrate to all IaaS (Infrastructure as a Service)*, you can get a quick lift and shift recommendation to Azure IaaS. - With *Modernize to PaaS (Platform as a Service)*, you can get cost effective recommendations for Azure IaaS and more PaaS preferred targets in Azure PaaS.
+1. In **Savings options**, specify the combination of savings options that you want considered when optimizing your Azure costs and maximizing savings. Based on the availability of the savings option in the chosen region and the targets, the business case recommends the appropriate savings options to maximize your savings on Azure.
+ - Choose 'Reserved Instance' if your datacenter comprises mostly consistently running resources.
+ - Choose 'Reserved Instance + Azure Savings Plan' if you want additional flexibility and automated cost optimization for workloads applicable for Azure Savings Plan (compute targets including Azure VM and Azure App Service).
+ 1. In **Discount (%) on Pay as you go**, add any subscription-specific discounts you receive on top of the Azure offer. The default setting is 0%. The discount isn't applicable on top of the reserved instance savings option.
1. **Currency** is defaulted to USD and can't be edited.
1. Review the chosen inputs, and select **Build business case**.
migrate How To View A Business Case https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-view-a-business-case.md
It covers cost components for on-premises and Azure, savings, and insights to un
This section contains the cost estimate by recommended target (Annual cost and also includes Compute, Storage, Network, labor components) and savings from Hybrid benefits. - Azure VM:
- - **Estimated cost by savings options**: This card includes compute cost for Azure VMs. It is recommended that all idle servers are migrated via Pay as you go Dev/Test and others (Active and unknown) are migrated using 3 year Reserved Instance to maximize savings.
+ - **Estimated cost by savings options**: This card includes compute cost for Azure VMs. It is recommended that all idle servers are migrated via Pay as you go Dev/Test and others (Active and unknown) are migrated using 3 year Reserved Instance or 3 year Azure Savings Plan to maximize savings.
- **Recommended VM family**: This card covers the VM sizes recommended. The ones marked Unknown are the VMs that have some readiness issues and no SKUs could be found for them. - **Recommended storage type**: This card covers the storage cost distribution across different recommended storage types. - SQL Server on Azure VM: This section assumes instance to SQL Server on Azure VM migration recommendation, and the number of VMs here are the number of instances recommended to be migrated as SQL Server on Azure VM:
- - **Estimated cost by savings options**: This card includes compute cost for SQL Server on Azure VMs. It is recommended that all idle servers are migrated via Pay as you go Dev/Test and others (Active and unknown) are migrated using 3 year Reserved Instance to maximize savings.
+ - **Estimated cost by savings options**: This card includes compute cost for SQL Server on Azure VMs. It is recommended that all idle servers are migrated via Pay as you go Dev/Test and others (Active and unknown) are migrated using 3 year Reserved Instance or 3 year Azure Savings Plan to maximize savings.
- **Recommended VM family**: This card covers the VM sizes recommended. The ones marked Unknown are the VMs that have some readiness issues and no SKUs could be found for them. - **Recommended storage type**: This card covers the storage cost distribution across different recommended storage types.
This section assumes instance to SQL Server on Azure VM migration recommendation
This section contains the cost estimate by recommended target (Annual cost and also includes Compute, Storage, Network, labor components) and savings from Hybrid benefits. - Azure SQL:
- - Estimated cost by savings options: This card includes compute cost for Azure SQL MI.
- - Distribution by recommended service tier.
+ - **Estimated cost by savings options**: This card includes compute cost for Azure SQL MI. It is recommended that all idle SQL instances are migrated via Pay as you go Dev/Test and others (Active and unknown) are migrated using 3 year Reserved Instance to maximize savings.
+ - **Distribution by recommended service tier**: This card covers the recommended service tier.
- Azure App Service:
- - Estimated cost by savings options: This card includes Azure App Service Plans cost.
- - Distribution by recommended plans.
+ - **Estimated cost by savings options**: This card includes Azure App Service Plans cost. It is recommended that the web apps are migrated using 3 year Reserved Instance or 3 year Savings Plan to maximize savings.
+ - **Distribution by recommended plans**: This card covers the recommended App Service plan.
**On-premises tab**
migrate Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/whats-new.md
[Azure Migrate](migrate-services-overview.md) helps you to discover, assess, and migrate on-premises servers, apps, and data to the Microsoft Azure cloud. This article summarizes new releases and features in Azure Migrate.

## Update (January 2023)
-- Envision savings with [Azure Savings Plan for compute](https://azure.microsoft.com/pricing/offers/savings-plan-compute) (ASP) savings option with Azure Migrate assessments. ASP as a savings option setting is now available for Azure VM assessment, Azure SQL assessment and Azure App Service assessment.
+- Envision savings with [Azure Savings Plan for compute](https://azure.microsoft.com/pricing/offers/savings-plan-compute) (ASP) savings option with Azure Migrate business case and assessments. ASP as a savings option assumption/setting is now available for business case, Azure VM assessment, Azure SQL assessment and Azure App Service assessment.
+- Support for export of business case report in an .xlsx workbook from the portal. [Learn more]()
- Azure Migrate is now supported in Sweden geography. [Learn more](migrate-support-matrix.md#public-cloud) ## Update (December 2022)
purview Catalog Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/catalog-firewall.md
+
+ Title: Configure Microsoft Purview firewall
+description: This article describes how to configure firewall settings for your Microsoft Purview account
+++++ Last updated : 01/13/2023
+# Customer intent: As a Microsoft Purview admin, I want to set firewall settings for my Microsoft Purview account.
++
+# Configure firewall settings for your Microsoft Purview account
+
+This article describes how to configure firewall settings for Microsoft Purview.
+
+## Prerequisites
+
+To configure Microsoft Purview account firewall settings, ensure you meet the following prerequisites:
+
+1. An Azure account with an active subscription. [Create an account for free.](https://azure.microsoft.com/free/?WT.mc_id=A261C142F)
+2. An existing Microsoft Purview account.
+
+## Microsoft Purview firewall deployment scenarios
+
+To configure the Microsoft Purview firewall, follow these steps:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+2. Navigate to your Microsoft Purview account in the portal.
+
+3. Under **Settings**, choose **Networking**.
+
+4. In the **Firewall** tab, under **Public network access**, change the firewall settings to the option that suits your scenario:
+
+- **Enabled from all networks**
+
+ :::image type="content" source="media/catalog-private-link/purview-firewall-public.png" alt-text="Screenshot showing the purview account firewall page, selecting public network in the Azure portal.":::
+
+ By choosing this option:
+
+ - All public network access into your Microsoft Purview account is allowed.
+ - Public network access is set to _Enabled from all networks_ on your Microsoft Purview account's Managed storage account.
+ - Public network access is set to _All networks_ on your Microsoft Purview account's Managed Event Hubs, if it's used.
+
+ > [!NOTE]
+ > Even though network access is enabled through the public internet, users must first be authenticated and authorized to gain access to the Microsoft Purview governance portal.
+
+- **Disabled for ingestion only (Preview)**
+
+ :::image type="content" source="media/catalog-private-link/purview-firewall-ingestion.png" alt-text="Screenshot showing the purview account firewall page, selecting ingestion only in the Azure portal.":::
+
+ > [!NOTE]
+ > Currently, this option is available in public preview.
+
+ By choosing this option:
+ - Public network access to your Microsoft Purview account through APIs and the Microsoft Purview governance portal is allowed.
+ - All public network traffic for ingestion is disabled. In this case, you must configure a private endpoint for ingestion before setting up any scans. For more information, see [Use private endpoints for your Microsoft Purview account](catalog-private-link.md).
+ - Public network access is set to _Disabled_ on your Microsoft Purview account's Managed storage account.
+ - Public network access is set to _Disabled_ on your Microsoft Purview account's Managed Event Hubs, if it's used.
+
+- **Disabled from all networks**
+
+ :::image type="content" source="media/catalog-private-link/purview-firewall-private.png" alt-text="Screenshot showing the purview account firewall page, selecting private network in the Azure portal.":::
+
+ By choosing this option:
+
+ - All public network access into your Microsoft Purview account is disabled.
+ - All network access to your Microsoft Purview account through APIs or Microsoft Purview governance portal including traffic to run scans is allowed only through private network using private endpoints. For more information, see [Connect to your Microsoft Purview and scan data sources privately and securely](catalog-private-link-end-to-end.md).
+ - Public network access is set to _Disabled_ on your Microsoft Purview account's Managed storage account.
+ - Public network access is set to _Disabled_ on your Microsoft Purview account's Managed Event Hubs, if it's used.
+
+5. Select **Save**.
+
+ :::image type="content" source="media/catalog-private-link/purview-firewall-save.png" alt-text="Screenshot showing the purview account firewall page, selecting save in the Azure portal.":::
+
+## Next steps
+
+- [Deploy end to end private networking](./catalog-private-link-end-to-end.md)
+- [Deploy private networking for the Microsoft Purview governance portal](./catalog-private-link-account-portal.md)
purview Catalog Private Link End To End https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/catalog-private-link-end-to-end.md
Previously updated : 12/09/2022 Last updated : 01/13/2023 # Customer intent: As a Microsoft Purview admin, I want to set up private endpoints for my Microsoft Purview account to access purview account and scan data sources from restricted network.
In this guide, you will learn how to deploy _account_, _portal_ and _ingestion_
The Microsoft Purview _account_ private endpoint is used to add another layer of security by enabling scenarios where only client calls that originate from within the virtual network are allowed to access the Microsoft Purview account. This private endpoint is also a prerequisite for the portal private endpoint.
-The Microsoft Purview _portal_ private endpoint is required to enable connectivity to [Microsoft Purview governance portal](https://web.purview.azure.com/resource/) using a private network.
+The Microsoft Purview _governance portal_ private endpoint is required to enable connectivity to the [Microsoft Purview governance portal](https://web.purview.azure.com/resource/) using a private network.
Microsoft Purview can scan data sources in Azure or an on-premises environment by using _ingestion_ private endpoints. Three private endpoint resources are required to be deployed and linked to Microsoft Purview managed or configured resources when ingestion private endpoint is deployed:
Using one of the deployment options explained further in this guide, you can dep
## Enable access to Azure Active Directory > [!NOTE]
-> If your VM, VPN gateway, or VNet Peering gateway has public internet access, it can access the Microsoft Purview portal and the Microsoft Purview account enabled with private endpoints. For this reason, you don't have to follow the rest of the instructions. If your private network has network security group rules set to deny all public internet traffic, you'll need to add some rules to enable Azure Active Directory (Azure AD) access. Follow the instructions to do so.
+> If your VM, VPN gateway, or VNet Peering gateway has public internet access, it can access the Microsoft Purview governance portal and the Microsoft Purview account enabled with private endpoints. For this reason, you don't have to follow the rest of the instructions. If your private network has network security group rules set to deny all public internet traffic, you'll need to add some rules to enable Azure Active Directory (Azure AD) access. Follow the instructions to do so.
These instructions are provided for accessing Microsoft Purview securely from an Azure VM. Similar steps must be followed if you're using VPN or other VNet Peering gateways.
These instructions are provided for accessing Microsoft Purview securely from an
:::image type="content" source="media/catalog-private-link/aadcdn-rule.png" alt-text="Screenshot that shows the Azure A D Content Delivery Network rule.":::
-1. After the new rule is created, go back to the VM and try to sign in by using your Azure AD credentials again. If sign-in succeeds, then the Microsoft Purview portal is ready to use. But in some cases, Azure AD redirects to other domains to sign in based on a customer's account type. For example, for a live.com account, Azure AD redirects to live.com to sign in, and then those requests are blocked again. For Microsoft employee accounts, Azure AD accesses msft.sts.microsoft.com for sign-in information.
+1. After the new rule is created, go back to the VM and try to sign in by using your Azure AD credentials again. If sign-in succeeds, then the Microsoft Purview governance portal is ready to use. But in some cases, Azure AD redirects to other domains to sign in based on a customer's account type. For example, for a live.com account, Azure AD redirects to live.com to sign in, and then those requests are blocked again. For Microsoft employee accounts, Azure AD accesses msft.sts.microsoft.com for sign-in information.
Check the networking requests on the browser **Networking** tab to see which domain's requests are getting blocked, redo the previous step to get its IP, and add outbound port rules in the network security group to allow requests for that IP. If possible, add the URL and IP to the VM's host file to fix the DNS resolution. If you know the exact sign-in domain's IP ranges, you can also directly add them into networking rules.
-1. Now your Azure AD sign-in should be successful. The Microsoft Purview portal will load successfully, but listing all the Microsoft Purview accounts won't work because it can only access a specific Microsoft Purview account. Enter `web.purview.azure.com/resource/{PurviewAccountName}` to directly visit the Microsoft Purview account that you successfully set up a private endpoint for.
+1. Now your Azure AD sign-in should be successful. The Microsoft Purview governance portal will load successfully, but listing all the Microsoft Purview accounts won't work because it can only access a specific Microsoft Purview account. Enter `web.purview.azure.com/resource/{PurviewAccountName}` to directly visit the Microsoft Purview account that you successfully set up a private endpoint for.
## Deploy self-hosted integration runtime (IR) and scan your data sources

Once you deploy ingestion private endpoints for your Microsoft Purview account, you need to set up and register at least one self-hosted integration runtime (IR):
Follow the steps in [Create and manage a self-hosted integration runtime](manage
To cut off access to the Microsoft Purview account completely from the public internet, follow these steps. This setting applies to both private endpoint and ingestion private endpoint connections.
-1. Go to the Microsoft Purview account from the Azure portal, and under **Settings** > **Networking**, select **Private endpoint connections**.
+1. From the [Azure portal](https://portal.azure.com), go to the Microsoft Purview account, and under **Settings**, select **Networking**.
-1. Go to the **Firewall** tab, and ensure that the toggle is set to **Deny**.
+1. Go to the **Firewall** tab, and ensure that the toggle is set to **Disable from all networks**.
- :::image type="content" source="media/catalog-private-link/private-endpoint-firewall.png" alt-text="Screenshot that shows private endpoint firewall settings.":::
+ :::image type="content" source="media/catalog-private-link/purview-firewall-private.png" alt-text="Screenshot that shows private endpoint firewall settings.":::
## Next steps
purview Concept Best Practices Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-best-practices-network.md
Previously updated : 12/09/2022 Last updated : 01/13/2023
This guide covers the following network options:
- Use [Azure public endpoints](#option-1-use-public-endpoints). - Use [private endpoints](#option-2-use-private-endpoints). - Use [private endpoints and allow public access on the same Microsoft Purview account](#option-3-use-both-private-and-public-endpoints).
+- Use Azure [public endpoints to access the Microsoft Purview governance portal and private endpoints for ingestion](#option-4-use-private-endpoints-for-ingestion-only).
This guide describes a few of the most common network architecture scenarios for Microsoft Purview. Though you're not limited to those scenarios, keep in mind the [limitations](#current-limitations) of the service when you're planning networking for your Microsoft Purview accounts.
You might choose an option in which a subset of your data sources uses private e
If you need to scan some data sources by using an ingestion private endpoint and some data sources by using public endpoints or a service endpoint, you can: 1. Use private endpoints for your Microsoft Purview account.
-1. Set **Public network access** to **allow** on your Microsoft Purview account.
+1. Set **Public network access** to **Enabled from all networks** on your Microsoft Purview account.
### Integration runtime options
If you need to scan some data sources by using an ingestion private endpoint and
- You must create a credential in Microsoft Purview based on each secret that you create in Azure Key Vault. At minimum, assign _get_ and _list_ access for secrets for Microsoft Purview on the Key Vault resource in Azure. Otherwise, the credentials won't work in the Microsoft Purview account.
+## Option 4: Use private endpoints for ingestion only
+
+You might choose this option if you need to:
+
+- Scan all data sources using an ingestion private endpoint.
+- Disable public network access on managed resources.
+- Allow access to the Microsoft Purview governance portal through public networks.
+
+To enable this option:
+
+1. Configure ingestion private endpoint for your Microsoft Purview account.
+1. Set **Public network access** to **Disabled for ingestion only (Preview)** on your [Microsoft Purview account](catalog-firewall.md).
+
+### Integration runtime options
+
+Follow the recommendations for option 2.
+
+### Authentication options
+
+Follow the recommendations for option 2.
+ ## Self-hosted integration runtime network and proxy recommendations For scanning data sources across your on-premises and Azure networks, you may need to deploy and use one or multiple [self-hosted integration runtime virtual machines](manage-integration-runtimes.md) inside an Azure VNet or an on-premises network, for any of the scenarios mentioned earlier in this document.
purview Create Microsoft Purview Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/create-microsoft-purview-dotnet.md
# Quickstart: Create a Microsoft Purview (formerly Azure Purview) account using .NET SDK
-In this quickstart, you'll use the [.NET SDK](/dotnet/api/overview/azure/purviewresourceprovider) to create a Microsoft Purview (formerly Azure Purview) account.
+In this quickstart, you'll use the [.NET SDK](/dotnet/api/overview/azure/purview) to create a Microsoft Purview (formerly Azure Purview) account.
The Microsoft Purview governance portal surfaces tools like the Microsoft Purview Data Map and Microsoft Purview Data Catalog that help you manage and govern your data landscape. By connecting to data across your on-premises, multi-cloud, and software-as-a-service (SaaS) sources, the Microsoft Purview Data Map creates an up-to-date map of your information. It identifies and classifies sensitive data, and provides end-to-end lineage. Data consumers are able to discover data across your organization, and data administrators are able to audit, secure, and ensure right use of your data.
purview Create Microsoft Purview Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/create-microsoft-purview-portal.md
Title: 'Quickstart: Create a Microsoft Purview (formerly Azure Purview) account'
description: This Quickstart describes how to create a Microsoft Purview (formerly Azure Purview) account and configure permissions to begin using it. Previously updated : 12/09/2022 Last updated : 01/13/2023
For more information about the governance capabilities of Microsoft Purview, for
1. You can choose a name for your managed resource group. Microsoft Purview will create a managed storage account in this group that it will use during its processes.
-1. On the **Networking** tab you can choose to connect to all networks, or to use private endpoints. For more information and configuration options, see our [private endpoints for Microsoft Purview articles.](catalog-private-link.md)
+1. On the **Networking** tab you can choose to connect to all networks, or to use private endpoints. For more information and configuration options, see [Configure firewall settings for your Microsoft Purview account](catalog-firewall.md) and [private endpoints for Microsoft Purview articles.](catalog-private-link.md)
1. On **Configuration** tab you can choose to configure Event Hubs namespaces to programmatically monitor your Microsoft Purview account using Event Hubs and Atlas Kafka. - [Steps to configure Event Hubs namespaces](configure-event-hubs-for-kafka.md)
search Search Security Api Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-security-api-keys.md
Last updated 01/14/2023
Cognitive Search offers key-based authentication that you can use on connections to your search service. An API key is a unique string composed of 52 randomly generated numbers and letters. A request made to a search service endpoint will be accepted if both the request and the API key are valid.
-API keys are used for content-related requests, such as creating or querying an index. Upon service creation, it's the only authentication mechanism for data plane (content) operations, but you can replace or supplement key authentication with [Azure roles](search-security-rbac.md) if you can't use hard-coded keys in your code.
- > [!NOTE] > A quick note about how "key" terminology is used in Cognitive Search. An "API key", which is described in this article, refers to a GUID used for authenticating a request. A separate term, "document key", refers to a unique string in your indexed content that's used to uniquely identify documents in a search index.
Visually, there's no distinction between an admin key or query key. Both keys ar
## Use API keys on connections
+API keys are used for data plane (content) requests, such as creating or accessing an index or any other request that's represented in the [Search REST APIs](/rest/api/searchservice/). Upon service creation, an API key is the only authentication mechanism for data plane operations, but you can replace or supplement key authentication with [Azure roles](search-security-rbac.md) if you can't use hard-coded keys in your code.
+ API keys are specified on client requests to a search service. Passing a valid API key on the request is considered proof that the request is from an authorized client. If you're creating, modifying, or deleting objects, you'll need an admin API key. Otherwise, query keys are typically distributed to client applications that issue queries. You can specify API keys in a request header for REST API calls, or in code that calls the azure.search.documents client libraries in the Azure SDKs. If you're using the Azure portal to perform tasks, your role assignment determines the [level of access](#permissions-to-view-or-manage-api-keys).
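For example, a minimal sketch of a REST query that passes the key in the `api-key` header (the service name, index name, key, and API version shown are placeholders; adjust them for your service):

```python
import requests

# Placeholders: substitute your search service, index, and query key.
url = "https://<service-name>.search.windows.net/indexes/<index-name>/docs"
params = {"api-version": "2020-06-30", "search": "hotel"}
headers = {"api-key": "<query-key>"}

response = requests.get(url, params=params, headers=headers)
print(response.json())
```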
search Search Security Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-security-rbac.md
This approach assumes Postman as the REST client and uses a Postman collection a
az login ```
-1. Get your subscription ID. You'll provide this value as variable in a future step.
+1. Get your subscription ID. You'll provide this value as a variable in a future step.
```azurecli
az account show --query id -o tsv
```
-1. Create a resource group for your security principal, specifying a location and name. This example uses the West US region. You'll provide this value as variable in a future step.
+1. Create a resource group for your security principal, specifying a location and name. This example uses the West US region. You'll provide this value as a variable in a future step. The role you'll create will be scoped to the resource group.
```azurecli az group create -l westus -n MyResourceGroup
service-bus-messaging Service Bus Python How To Use Queues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-python-how-to-use-queues.md
description: This tutorial shows you how to send messages to and receive message
documentationcenter: python Previously updated : 02/16/2022 Last updated : 01/12/2023 ms.devlang: python-+ # Send messages to and receive messages from Azure Service Bus queues (Python)
> * [JavaScript](service-bus-nodejs-how-to-use-queues.md) > * [Python](service-bus-python-how-to-use-queues.md)
-This article shows you how to use Python to send messages to, and receive messages from Azure Service Bus queues.
+In this tutorial, you complete the following steps:
+
+1. Create a Service Bus namespace, using the Azure portal.
+1. Create a Service Bus queue, using the Azure portal.
+1. Write Python code to use the [azure-servicebus](https://pypi.org/project/azure-servicebus/) package to:
+ 1. Send a set of messages to the queue.
+ 1. Receive those messages from the queue.
> [!NOTE] > This quick start provides step-by-step instructions for a simple scenario of sending messages to a Service Bus queue and receiving them. You can find pre-built Python samples for Azure Service Bus in the [Azure SDK for Python repository on GitHub](https://github.com/azure/azure-sdk-for-python/tree/main/sdk/servicebus/azure-servicebus/samples). ## Prerequisites-- An Azure subscription. You can activate your [Visual Studio or MSDN subscriber benefits](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details/?WT.mc_id=A85619ABF) or sign-up for a [free account](https://azure.microsoft.com/free/?WT.mc_id=A85619ABF).-- If you don't have a queue to work with, follow steps in the [Use Azure portal to create a Service Bus queue](service-bus-quickstart-portal.md) article to create a queue. Note down the **connection string** for your Service Bus namespace and the name of the **queue** you created.-- Python 2.7 or higher, with the [Python Azure Service Bus](https://pypi.python.org/pypi/azure-servicebus) package installed. For more information, see the [Python Installation Guide](/azure/developer/python/sdk/azure-sdk-install). +
+If you're new to the service, see [Service Bus overview](service-bus-messaging-overview.md) before you do this quickstart.
+
+- An Azure subscription. To complete this tutorial, you need an Azure account. You can activate your [MSDN subscriber benefits](https://azure.microsoft.com/pricing/member-offers/credit-for-visual-studio-subscribers/?WT.mc_id=A85619ABF) or sign up for a [free account](https://azure.microsoft.com/free/?WT.mc_id=A85619ABF).
+
+- [Python 3.7](https://www.python.org/downloads/) or higher.
+
+### [Passwordless (Recommended)](#tab/passwordless)
+
+To use this quickstart with your own Azure account:
+* Install [Azure CLI](/cli/azure/install-azure-cli), which provides the passwordless authentication to your developer machine.
+* Sign in with your Azure account at the terminal or command prompt with `az login`.
+* Use the same account when you add the appropriate data role to your resource.
+* Run the code in the same terminal or command prompt.
+* Note the **queue** name for your Service Bus namespace. You'll need that in the code.
+
+### [Connection string](#tab/connection-string)
+
+Note the following, which you'll use in the code below:
+* Service Bus namespace **connection string**
+* Service Bus namespace **queue** you created
+++
+>[!NOTE]
+> This tutorial works with samples that you can copy and run using Python. For instructions on how to create a Python application, see [Create and deploy a Python application to an Azure Website](../app-service/quickstart-python.md). For more information about installing packages used in this tutorial, see the [Python Installation Guide](/azure/developer/python/sdk/azure-sdk-install).
++++
+## Use pip to install packages
+
+### [Passwordless (Recommended)](#tab/passwordless)
+
+1. To install the required Python packages for this Service Bus tutorial, open a command prompt that has Python in its path, and change to the folder where you want to keep your samples.
+
+1. Install the following packages:
+
+ ```shell
+ pip install azure-servicebus
+ pip install azure-identity
+ pip install aiohttp
+ ```
+
+### [Connection string](#tab/connection-string)
+
+1. To install the required Python packages for this Service Bus tutorial, open a command prompt that has Python in its path, and change to the folder where you want to keep your samples.
+
+1. Install the following package:
+
+ ```bash
+ pip install azure-servicebus
+ ```
++ ## Send messages to a queue
-1. Add the following import statement.
+The following sample code shows you how to send a message to a queue. Open your favorite editor, such as [Visual Studio Code](https://code.visualstudio.com/), create a file *send.py*, and add the following code into it.
+
+### [Passwordless (Recommended)](#tab/passwordless)
+
+1. Add import statements.
```python
- from azure.servicebus import ServiceBusClient, ServiceBusMessage
+ import asyncio
+ from azure.servicebus.aio import ServiceBusClient
+ from azure.servicebus import ServiceBusMessage
+ from azure.identity.aio import DefaultAzureCredential
```
-2. Add the following constants.
+1. Add constants and define a credential.
```python
- CONNECTION_STR = "<NAMESPACE CONNECTION STRING>"
- QUEUE_NAME = "<QUEUE NAME>"
+ FULLY_QUALIFIED_NAMESPACE = "FULLY_QUALIFIED_NAMESPACE"
+ QUEUE_NAME = "QUEUE_NAME"
+
+ credential = DefaultAzureCredential()
``` > [!IMPORTANT]
- > - Replace `<NAMESPACE CONNECTION STRING>` with the connection string for your Service Bus namespace.
- > - Replace `<QUEUE NAME>` with the name of the queue.
-3. Add a method to send a single message.
+ > - Replace `FULLY_QUALIFIED_NAMESPACE` with the fully qualified namespace for your Service Bus namespace.
+ > - Replace `QUEUE_NAME` with the name of the queue.
+
+1. Add a method to send a single message.
```python
- def send_single_message(sender):
- # create a Service Bus message
+ async def send_single_message(sender):
+ # Create a Service Bus message and send it to the queue
message = ServiceBusMessage("Single Message")
- # send the message to the queue
- sender.send_messages(message)
+ await sender.send_messages(message)
print("Sent a single message") ```
- The sender is an object that acts as a client for the queue you created. You'll create it later and send as an argument to this function.
-4. Add a method to send a list of messages.
+ The sender is an object that acts as a client for the queue you created. You'll create it later and pass it as an argument to this function.
+
+1. Add a method to send a list of messages.
```python
- def send_a_list_of_messages(sender):
- # create a list of messages
+ async def send_a_list_of_messages(sender):
+ # Create a list of messages and send it to the queue
messages = [ServiceBusMessage("Message in list") for _ in range(5)]
- # send the list of messages to the queue
- sender.send_messages(messages)
+ await sender.send_messages(messages)
print("Sent a list of 5 messages") ```
-5. Add a method to send a batch of messages.
-
- ```python
- def send_batch_message(sender):
- # create a batch of messages
- batch_message = sender.create_message_batch()
- for _ in range(10):
- try:
- # add a message to the batch
- batch_message.add_message(ServiceBusMessage("Message inside a ServiceBusMessageBatch"))
- except ValueError:
- # ServiceBusMessageBatch object reaches max_size.
- # New ServiceBusMessageBatch object can be created here to send more data.
- break
- # send the batch of messages to the queue
- sender.send_messages(batch_message)
+
+1. Add a method to send a batch of messages.
+
+ ```python
+ async def send_batch_message(sender):
+ # Create a batch of messages
+ async with sender:
+ batch_message = await sender.create_message_batch()
+ for _ in range(10):
+ try:
+ # Add a message to the batch
+ batch_message.add_message(ServiceBusMessage("Message inside a ServiceBusMessageBatch"))
+ except ValueError:
+ # ServiceBusMessageBatch object reaches max_size.
+ # New ServiceBusMessageBatch object can be created here to send more data.
+ break
+ # Send the batch of messages to the queue
+ await sender.send_messages(batch_message)
print("Sent a batch of 10 messages") ```
-6. Create a Service Bus client and then a queue sender object to send messages.
-
- ```python
- # create a Service Bus client using the connection string
- servicebus_client = ServiceBusClient.from_connection_string(conn_str=CONNECTION_STR, logging_enable=True)
- with servicebus_client:
- # get a Queue Sender object to send messages to the queue
- sender = servicebus_client.get_queue_sender(queue_name=QUEUE_NAME)
- with sender:
- # send one message
- send_single_message(sender)
- # send a list of messages
- send_a_list_of_messages(sender)
- # send a batch of messages
- send_batch_message(sender)
+
+1. Create a Service Bus client and then a queue sender object to send messages.
+
+ ```python
+    async def run():
+        # Create a Service Bus client using the credential
+        async with ServiceBusClient(
+            fully_qualified_namespace=FULLY_QUALIFIED_NAMESPACE,
+            credential=credential,
+            logging_enable=True) as servicebus_client:
+            # Get a Queue Sender object to send messages to the queue
+            sender = servicebus_client.get_queue_sender(queue_name=QUEUE_NAME)
+            async with sender:
+                # Send one message
+                await send_single_message(sender)
+                # Send a list of messages
+                await send_a_list_of_messages(sender)
+                # Send a batch of messages
+                await send_batch_message(sender)
+            # Close credential when no longer needed.
+            await credential.close()
+ ```
+
+1. Call the `run` method and print a message.
+
+ ```python
+ asyncio.run(run())
print("Done sending messages") print("--") ```
-
+
+### [Connection string](#tab/connection-string)
+
+1. Add import statements.
+
+ ```python
+ import asyncio
+ from azure.servicebus.aio import ServiceBusClient
+ from azure.servicebus import ServiceBusMessage
+ ```
+
+1. Add constants.
+
+ ```python
+ NAMESPACE_CONNECTION_STR = "NAMESPACE_CONNECTION_STR"
+ QUEUE_NAME = "QUEUE_NAME"
+ ```
+
+ > [!IMPORTANT]
+ > - Replace `NAMESPACE_CONNECTION_STR` with the connection string for your Service Bus namespace.
+ > - Replace `QUEUE_NAME` with the name of the queue.
+
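    Hard-coding a connection string is fine for a quickstart, but in real code you'd keep it out of source. As a sketch of one common alternative, you could read it from environment variables; the variable names `SERVICE_BUS_CONNECTION_STR` and `SERVICE_BUS_QUEUE_NAME` here are examples of our own, not names the SDK looks for:

    ```python
    import os

    # Fail fast with a KeyError if either variable isn't set.
    NAMESPACE_CONNECTION_STR = os.environ["SERVICE_BUS_CONNECTION_STR"]
    QUEUE_NAME = os.environ["SERVICE_BUS_QUEUE_NAME"]
    ```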
+1. Add a method to send a single message.
+
+ ```python
+ async def send_single_message(sender):
+ # Create a Service Bus message and send it to the queue
+ message = ServiceBusMessage("Single Message")
+ await sender.send_messages(message)
+ print("Sent a single message")
+ ```
+
+    The sender is an object that acts as a client for the queue you created. You'll create it later and pass it as an argument to this function.
+
+1. Add a method to send a list of messages.
+
+ ```python
+ async def send_a_list_of_messages(sender):
+ # Create a list of messages and send it to the queue
+ messages = [ServiceBusMessage("Message in list") for _ in range(5)]
+ await sender.send_messages(messages)
+ print("Sent a list of 5 messages")
+ ```
+
+1. Add a method to send a batch of messages.
+
+ ```python
+    async def send_batch_message(sender):
+        # Create a batch of messages
+        batch_message = await sender.create_message_batch()
+        for _ in range(10):
+            try:
+                # Add a message to the batch
+                batch_message.add_message(ServiceBusMessage("Message inside a ServiceBusMessageBatch"))
+            except ValueError:
+                # The ServiceBusMessageBatch has reached max_size.
+                # A new ServiceBusMessageBatch object can be created here to send more data.
+                break
+        # Send the batch of messages to the queue
+        await sender.send_messages(batch_message)
+        print("Sent a batch of 10 messages")
+    ```
+
+1. Create a Service Bus client and then a queue sender object to send messages.
+
+ ```python
+ async def run():
+        # Create a Service Bus client using the connection string
+ async with ServiceBusClient.from_connection_string(
+ conn_str=NAMESPACE_CONNECTION_STR,
+ logging_enable=True) as servicebus_client:
+ # Get a Queue Sender object to send messages to the queue
+ sender = servicebus_client.get_queue_sender(queue_name=QUEUE_NAME)
+ async with sender:
+ # Send one message
+ await send_single_message(sender)
+ # Send a list of messages
+ await send_a_list_of_messages(sender)
+ # Send a batch of messages
+ await send_batch_message(sender)
+ ```
+
+1. Call the `run` method and print a message.
+
+ ```python
+ asyncio.run(run())
+ print("Done sending messages")
+ print("--")
+ ```
+

## Receive messages from a queue
-Add the following code after the print statement. This code continually receives new messages until it doesn't receive any new messages for 5 (`max_wait_time`) seconds.
-
-```python
-with servicebus_client:
- # get the Queue Receiver object for the queue
- receiver = servicebus_client.get_queue_receiver(queue_name=QUEUE_NAME, max_wait_time=5)
- with receiver:
- for msg in receiver:
- print("Received: " + str(msg))
- # complete the message so that the message is removed from the queue
- receiver.complete_message(msg)
-```
-## Full code
-
-```python
-# import os
-from azure.servicebus import ServiceBusClient, ServiceBusMessage
-
-CONNECTION_STR = "<NAMESPACE CONNECTION STRING>"
-QUEUE_NAME = "<QUEUE NAME>"
-
-def send_single_message(sender):
- message = ServiceBusMessage("Single Message")
- sender.send_messages(message)
- print("Sent a single message")
-
-def send_a_list_of_messages(sender):
- messages = [ServiceBusMessage("Message in list") for _ in range(5)]
- sender.send_messages(messages)
- print("Sent a list of 5 messages")
-
-def send_batch_message(sender):
- batch_message = sender.create_message_batch()
- for _ in range(10):
- try:
- batch_message.add_message(ServiceBusMessage("Message inside a ServiceBusMessageBatch"))
- except ValueError:
- # ServiceBusMessageBatch object reaches max_size.
- # New ServiceBusMessageBatch object can be created here to send more data.
- break
- sender.send_messages(batch_message)
- print("Sent a batch of 10 messages")
-
-servicebus_client = ServiceBusClient.from_connection_string(conn_str=CONNECTION_STR, logging_enable=True)
-
-with servicebus_client:
- sender = servicebus_client.get_queue_sender(queue_name=QUEUE_NAME)
- with sender:
- send_single_message(sender)
- send_a_list_of_messages(sender)
- send_batch_message(sender)
-
-print("Done sending messages")
-print("--")
-
-with servicebus_client:
- receiver = servicebus_client.get_queue_receiver(queue_name=QUEUE_NAME, max_wait_time=5)
- with receiver:
- for msg in receiver:
- print("Received: " + str(msg))
- receiver.complete_message(msg)
-```
+The following sample code shows you how to receive messages from a queue. The code receives up to 20 messages in one call (`max_message_count`) and waits up to 5 seconds (`max_wait_time`) for new messages before returning.
+
+Open your favorite editor, such as [Visual Studio Code](https://code.visualstudio.com/), create a file *recv.py*, and add the following code to it.
+
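If you'd rather keep receiving messages continuously than fetch a single batch, the async receiver can also be used as an async iterator. A minimal sketch, assuming it runs inside a coroutine such as `run` with a `servicebus_client` created as in the steps below; iteration stops after no message arrives for `max_wait_time` seconds:

```python
receiver = servicebus_client.get_queue_receiver(queue_name=QUEUE_NAME, max_wait_time=5)
async with receiver:
    async for msg in receiver:
        print("Received: " + str(msg))
        # Complete the message so that it's removed from the queue
        await receiver.complete_message(msg)
```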
+### [Passwordless (Recommended)](#tab/passwordless)
+
+1. Similar to the send sample, add `import` statements, define constants that you should replace with your own values, and define a credential.
+
+ ```python
+ import asyncio
+
+ from azure.servicebus.aio import ServiceBusClient
+ from azure.identity.aio import DefaultAzureCredential
+
+ FULLY_QUALIFIED_NAMESPACE = "FULLY_QUALIFIED_NAMESPACE"
+ QUEUE_NAME = "QUEUE_NAME"
+
+ credential = DefaultAzureCredential()
+ ```
+
+1. Create a Service Bus client and then a queue receiver object to receive messages.
+
+ ```python
+    async def run():
+        # Create a Service Bus client using the credential
+        async with ServiceBusClient(
+            fully_qualified_namespace=FULLY_QUALIFIED_NAMESPACE,
+            credential=credential,
+            logging_enable=True) as servicebus_client:
+            # Get the Queue Receiver object for the queue
+            receiver = servicebus_client.get_queue_receiver(queue_name=QUEUE_NAME)
+            async with receiver:
+                received_msgs = await receiver.receive_messages(max_wait_time=5, max_message_count=20)
+                for msg in received_msgs:
+                    print("Received: " + str(msg))
+                    # Complete the message so that the message is removed from the queue
+                    await receiver.complete_message(msg)
+
+            # Close credential when no longer needed.
+            await credential.close()
+ ```
+
+1. Call the `run` method.
+
+ ```python
+ asyncio.run(run())
+ ```
+
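`complete_message` is only one way to settle a message. If your processing can fail, you may want to return the message to the queue instead of removing it. A minimal sketch of that pattern for the loop inside `run`, assuming a `process` handler of your own (hypothetical) that raises on failure:

```python
for msg in received_msgs:
    try:
        process(msg)  # hypothetical handler; raises on failure
        # Success: remove the message from the queue.
        await receiver.complete_message(msg)
    except Exception:
        # Failure: release the message so it can be delivered again.
        await receiver.abandon_message(msg)
```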
+### [Connection string](#tab/connection-string)
+
+1. Similar to the send sample, add `import` statements and define constants that you should replace with your own values.
+
+ ```python
+ import asyncio
+ from azure.servicebus.aio import ServiceBusClient
+
+ NAMESPACE_CONNECTION_STR = "NAMESPACE_CONNECTION_STR"
+ QUEUE_NAME = "QUEUE_NAME"
+ ```
+
+1. Create a Service Bus client and then a queue receiver object to receive messages.
+
+ ```python
+    async def run():
+        # Create a Service Bus client using the connection string
+        async with ServiceBusClient.from_connection_string(
+            conn_str=NAMESPACE_CONNECTION_STR,
+            logging_enable=True) as servicebus_client:
+            # Get the Queue Receiver object for the queue
+            receiver = servicebus_client.get_queue_receiver(queue_name=QUEUE_NAME)
+            async with receiver:
+                received_msgs = await receiver.receive_messages(max_wait_time=5, max_message_count=20)
+                for msg in received_msgs:
+                    print("Received: " + str(msg))
+                    # Complete the message so that the message is removed from the queue
+                    await receiver.complete_message(msg)
+ ```
+
+1. Call the `run` method.
+
+ ```python
+ asyncio.run(run())
+ ```
+

## Run the app
-When you run the application, you should see the following output:
+
+Open a command prompt that has Python in its path, and then run the code to send and receive messages from the queue.
+
+```shell
+python send.py; python recv.py
+```
+
+> [!NOTE]
+> The `;` separator works in PowerShell and bash. In the Windows Command Prompt, run the two commands one at a time, or chain them with `&&`.
+
+You should see the following output:
```console
Sent a single message
Select the queue on this **Overview** page to navigate to the **Service Bus Queu
## Next steps
+
See the following documentation and samples:

- [Azure Service Bus client library for Python](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/servicebus/azure-servicebus)
site-recovery Azure To Azure Network Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-network-mapping.md
Before you map networks, you should have [Azure VNets](../virtual-network/virtua
## Set up network mapping manually (Optional)
->[!NOTE
+>[!NOTE]
> Replication can now be done between any two Azure regions around the world. Customers are no longer limited to enabling replication within their continent. Map networks as follows:
site-recovery Encryption Feature Deprecation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/encryption-feature-deprecation.md
To continue successful failover operations, and replications follow the steps me
Follow these steps for each VM: 1. [Disable replication](./site-recovery-manage-registration-and-protection.md#disable-protection-for-a-hyper-v-virtual-machine-replicating-to-azure-using-the-system-center-vmm-to-azure-scenario).
-2. [Create a new replication policy](./hyper-v-azure-tutorial.md#set-up-a-replication-policy).
+2. [Create a new replication policy](./hyper-v-azure-tutorial.md#replication-policy).
3. [Enable replication](./hyper-v-vmm-azure-tutorial.md#enable-replication) and select a storage account with SSE enabled. After completing the initial replication to storage accounts with SSE enabled, your VMs will be using Encryption at Rest with Azure Site Recovery.
site-recovery Hybrid How To Enable Replication Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/hybrid-how-to-enable-replication-private-endpoints.md
After you finish the preceding tasks, continue with the setup of your on-premise
infrastructure. Continue by completing one of the following tasks: - [Deploy a configuration server for VMware and physical machines](./vmware-azure-deploy-configuration-server.md)-- [Set up the Hyper-V environment for replication](./hyper-v-azure-tutorial.md#set-up-the-source-environment)
+- [Set up the Hyper-V environment for replication](./hyper-v-azure-tutorial.md#source-settings)
After the setup is complete, enable replication for your source machines. Don't set up the infrastructure until after the private endpoints for the vault are created in the
site-recovery Hyper V Azure Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/hyper-v-azure-tutorial.md
Title: Set up Hyper-V disaster recovery using Azure Site Recovery
description: Learn how to set up disaster recovery of on-premises Hyper-V VMs (without VMM) to Azure by using Site Recovery. Previously updated : 11/12/2019 Last updated : 01/16/2023
This is the third tutorial in a series. It shows you how to set up disaster reco
In this tutorial, you learn how to: > [!div class="checklist"]
-> * Select your replication source and target.
> * Set up the source replication environment, including on-premises Site Recovery components and the target replication environment. > * Create a replication policy. > * Enable replication for a VM.
In this tutorial, you learn how to:
> Tutorials show you the simplest deployment path for a scenario. They use default options where possible, and don't show all possible settings and paths. For detailed instructions, review the articles in the **How-to Guides** section of the [Site Recovery documentation](./index.yml). -
-## Before you begin
+## Prerequisites
This is the third tutorial in a series. It assumes that you have already completed the tasks in the previous tutorials: 1. [Prepare Azure](./tutorial-prepare-azure-for-hyperv.md) 2. [Prepare on-premises Hyper-V](./hyper-v-prepare-on-premises-tutorial.md)
-## Select a replication goal
-
-1. In the Azure portal, go to **Recovery Services vaults** and select the vault. We prepared the vault **ContosoVMVault** in the previous tutorial.
-2. In **Getting Started**, select **Site Recovery**, and then select **Prepare Infrastructure**.
-3. In **Protection goal** > **Where are your machines located?**, select **On-premises**.
-4. In **Where do you want to replicate your machines?**, select **To Azure**.
-5. In **Are your machines virtualized?**, select **Yes, with Hyper-V**.
-6. In **Are you using System Center VMM to manage your Hyper-V hosts?**, select **No**.
-7. Select **OK**.
+## Prepare infrastructure
- ![Screenshot of the Protection goal options in Prepare infrastructure.](./media/hyper-v-azure-tutorial/replication-goal.png)
+Before you set up disaster recovery of on-premises Hyper-V VMs to Azure, prepare your infrastructure as described in this section.
-## Confirm deployment planning
+### Deployment planning
-1. In **Deployment planning**, if you're planning a large deployment, download the Deployment Planner for Hyper-V from the link on the page. [Learn more](hyper-v-deployment-planner-overview.md) about Hyper-V deployment planning.
-2. For this tutorial, we don't need the Deployment Planner. In **Have you completed deployment planning?**, select **I will do it later**, and then select **OK**.
+1. In the [Azure portal](https://portal.azure.com), go to **Recovery Services vaults** and select the vault. We prepared the vault **ContosoVMVault** in the previous tutorial.
+2. On the vault home page, select **Enable Site Recovery**.
+1. Navigate to the bottom of the page, and select **Prepare infrastructure** under the **Hyper-V machines to Azure** section. This opens the **Prepare infrastructure** pane.
+1. In the **Prepare infrastructure** pane, on the **Deployment planning** tab, do the following:
+ > [!TIP]
+ > If you're planning a large deployment, download the Deployment Planner for Hyper-V from the link on the page. [Learn more](hyper-v-deployment-planner-overview.md) about Hyper-V deployment planning.
+ 1. For this tutorial, we don't need the Deployment Planner. In **Deployment planning completed?**, select **I will do it later**.
+ 1. Select **Next**.
- ![Screenshot of the Deployment planning options in Prepare infrastructure.](./media/hyper-v-azure-tutorial/deployment-planning.png)
+ :::image type="content" source="./media/hyper-v-azure-tutorial/deployment-planning.png" alt-text="Screenshot displays Deployment settings page." lightbox="./media/hyper-v-azure-tutorial/deployment-planning.png":::
-## Set up the source environment
+### Source settings
To set up the source environment, you create a Hyper-V site and add to that site the Hyper-V hosts containing VMs that you want to replicate. Then, you download and install the Azure Site Recovery Provider and the Azure Recovery Services agent on each host, and register the Hyper-V site in the vault.
-1. Under **Prepare Infrastructure**, select **Source**.
-2. In **Prepare source**, select **+ Hyper-V Site**.
-3. In **Create Hyper-V site**, specify the site name. We're using **ContosoHyperVSite**.
-
- ![Screenshot of Hyper-V site selection in Prepare infrastructure.](./media/hyper-v-azure-tutorial/hyperv-site.png)
+1. In the **Source settings** tab, do the following:
+   1. For **Are you using System Center VMM to manage Hyper-V hosts?**, select **No**. Additional options appear.
+   1. Under **Hyper-V site**, specify the site name. You can also use the **Add Hyper-V site** option to add a new Hyper-V site. In this tutorial, we're using **ContosoHyperVSite**.
+ 1. Under **Hyper-V servers**, select **Add Hyper-V servers** to add servers.
+ :::image type="content" source="./media/hyper-v-azure-tutorial/source-setting.png" alt-text="Screenshot displays Source settings page." lightbox="./media/hyper-v-azure-tutorial/source-setting.png":::
-4. After the site is created, in **Prepare source** > **Step 1: Select Hyper-V site**, select the site you created.
-5. Select **+ Hyper-V Server**.
+ 1. On the new **Add Server** pane, do the following:
+ 1. [Download the installer](#install-the-provider) for the Microsoft Azure Site Recovery Provider.
+ :::image type="content" source="./media/hyper-v-azure-tutorial/add-server.png" alt-text="Screenshot displays Add server page." lightbox="./media/hyper-v-azure-tutorial/add-server.png":::
+    1. Download the vault registration key. You need this key to install the Provider. The key is valid for five days. [Learn more](#install-the-provider-on-a-hyper-v-core-server).
+ 1. Select the site you created.
+ 1. Select **Next**.
+
- ![Screenshot of Hyper-V server selection in Prepare infrastructure.](./media/hyper-v-azure-tutorial/hyperv-server.png)
-
-6. Download the installer for the Microsoft Azure Site Recovery Provider.
-7. Download the vault registration key. You need this key to install the Provider. The key is valid for five days after you generate it.
-
- ![Screenshot of the options to download the Provider and registration key.](./media/hyper-v-azure-tutorial/download.png)
-
+Site Recovery checks whether you have one or more compatible Azure storage accounts and networks.
-### Install the Provider
+#### Install the Provider
Install the downloaded setup file (AzureSiteRecoveryProvider.exe) on each Hyper-V host that you want to add to the Hyper-V site. Setup installs the Azure Site Recovery Provider and Recovery Services agent on each Hyper-V host.
If you're running a Hyper-V core server, download the setup file and follow thes
"C:\Program Files\Microsoft Azure Site Recovery Provider\DRConfigurator.exe" /r /Friendlyname "FriendlyName of the Server" /Credentials "path to where the credential file is saved" ```
-## Set up the target environment
+### Target settings
Select and verify target resources:
-1. Select **Prepare infrastructure** > **Target**.
-2. Select the subscription and the resource group **ContosoRG** in which the Azure VMs will be created after failover.
-3. Select the **Resource Manager"** deployment model.
+1. In the **Target settings** tab, do the following:
+ 1. In **Subscription**, select the subscription and the resource group **ContosoRG** in which the Azure VMs will be created after failover.
+ 1. Under **Post-failover deployment model**, select the **Resource Manager** deployment model.
+ 1. Select **Next**.
+
+ :::image type="content" source="./media/hyper-v-azure-tutorial/target-settings.png" alt-text="Screenshot displays Target settings." lightbox="./media/hyper-v-azure-tutorial/target-settings.png":::
-Site Recovery checks that you have one or more compatible Azure storage accounts and networks.
-## Set up a replication policy
+### Replication policy
-1. Select **Prepare infrastructure** > **Replication Settings** > **+Create and associate**.
-2. In **Create and associate policy**, specify a policy name. We're using **ContosoReplicationPolicy**.
-3. For this tutorial, we'll leave the default settings:
- - **Copy frequency** indicates how often delta data (after initial replication) will replicate. The default frequency is every five minutes.
- - **Recovery point retention** indicates that recovery points will be retained for two hours. The maximum allowed value for retention when protecting virtual machines hosted on Hyper-V hosts is 24 hours.
+On the **Replication policy** tab, do the following:
+1. Under **Replication policy**, select an existing replication policy.
+ :::image type="content" source="./media/hyper-v-azure-tutorial/replication-policy.png" alt-text="Screenshot displays Replication policy." lightbox="./media/hyper-v-azure-tutorial/replication-policy.png":::
+1. If you do not have a replication policy, use the **Create new policy and associate** option to create a new policy.
+1. In the **Create and associate policy** page, do the following:
+ - **Name** - specify a policy name. We're using **ContosoReplicationPolicy**.
+ - **Source type** - select the ContosoHyperVSite site.
+ - **Target type** - verify the target (Azure), the vault subscription, and the Resource Manager deployment mode.
+ - **Copy frequency** - indicates how often delta data (after initial replication) will replicate. The default frequency is every five minutes.
+   - **Recovery point retention in hours** - indicates that recovery points will be retained for two hours. The maximum allowed value for retention when protecting virtual machines hosted on Hyper-V hosts is 24 hours.
- **App-consistent snapshot frequency** indicates that recovery points containing app-consistent snapshots will be created every hour.
- - **Initial replication start time** indicates that initial replication will start immediately.
-4. After the policy is created, select **OK**. When you create a new policy, it's automatically associated with the specified Hyper-V site. In our tutorial, that's **ContosoHyperVSite**.
+ - **Initial replication start time** indicates that initial replication will start immediately.
+
+1. After the policy is created, select **OK**. When you create a new policy, it's automatically associated with the specified Hyper-V site.
+ :::image type="content" source="./media/hyper-v-azure-tutorial/create-policy.png" alt-text="Screenshot displays Create policy." lightbox="./media/hyper-v-azure-tutorial/create-policy.png":::
+1. Select **Next**.
- ![Replication policy](./media/hyper-v-azure-tutorial/replication-policy.png)
+On the **Review** tab, review your selections and select **Create**.
-## Enable replication
+You can track progress in the notifications. After the job finishes, the initial replication is complete, and the VM is ready for failover.
-1. In **Replicate application**, select **Source**.
-2. In **Source**, select the **ContosoHyperVSite** site. Then, select **OK**.
-3. In **Target**, verify the target (Azure), the vault subscription, and the **Resource Manager** deployment model.
-4. If you're using tutorial settings, select the **contosovmsacct1910171607** storage account created in the previous tutorial for replicated data. Also select the **ContosoASRnet** network, in which Azure VMs will be located after failover.
-5. In **Virtual machines** > **Select**, select the VM that you want to replicate. Then, select **OK**.
+## Enable replication
- You can track progress of the **Enable Protection** action in **Jobs** > **Site Recovery jobs**. After the **Finalize Protection** job finishes, the initial replication is complete, and the VM is ready for failover.
+1. In the [Azure portal](https://portal.azure.com), go to **Recovery Services vaults** and select the vault.
+2. On the vault home page, select **Enable Site Recovery**.
+3. Navigate to the bottom of the page, and select **Enable replication** under the **Hyper-V machines to Azure** section.
+1. Under **Source environment** tab, specify the **source location** and select **Next**.
+
+ :::image type="content" source="./media/hyper-v-azure-tutorial/enable-replication-source.png" alt-text="Screenshot of the source environment page.":::
+
+1. Under **Target environment** tab, do the following:
+ 1. In **Subscription**, specify the subscription name.
+   1. For **Post-failover resource group**, specify the resource group in which the Azure VMs will be created after failover.
+ 1. For **Post-failover deployment model**, specify **Resource Manager**.
+ 1. In **Storage account**, specify the storage account name.
+ 1. Select **Next**.
+ :::image type="content" source="./media/hyper-v-azure-tutorial/enable-replication-target.png" alt-text="Screenshot of the target environment page.":::
+
+1. Under **Virtual machine selection** tab, select the VM that you want to replicate and select **Next**.
+
+1. Under **Replication settings** tab, select and verify the disk details.
+ :::image type="content" source="./media/hyper-v-azure-tutorial/enable-replication-settings.png" alt-text="Screenshot of the replication setting page.":::
+1. Under **Replication policy** tab, verify that the correct replication policy is selected.
+ :::image type="content" source="./media/hyper-v-azure-tutorial/enable-replication-policy.png" alt-text="Screenshot of the replication policy page.":::
+1. Under **Review** tab, review your selections and select **Enable Replication**.
## Next steps
-> [!div class="nextstepaction"]
-> [Run a disaster recovery drill](tutorial-dr-drill-azure.md)
+
+[Learn more](tutorial-dr-drill-azure.md) about running a disaster recovery drill.
site-recovery Site Recovery Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-whats-new.md
For Site Recovery components, we support N-4 versions, where N is the latest rel
**Update** | **Unified Setup** | **Configuration server/Replication appliance** | **Mobility service agent** | **Site Recovery Provider** | **Recovery Services agent**
--- | --- | --- | --- | --- | ---
-[Rollup 65](https://support.microsoft.com/topic/update-rollup-65-for-azure-site-recovery-kb5021964-15db362f-faac-417d-ad71-c22424df43e0) | 9.52.6522.1 | 5.1.7870.0 | 9.52.6522.1 | 5.1.7870.0 | 2.0.9257.0
+[Rollup 65](https://support.microsoft.com/topic/update-rollup-65-for-azure-site-recovery-kb5021964-15db362f-faac-417d-ad71-c22424df43e0) | 9.52.6522.1 | 5.1.7870.0 | 9.52.6522.1 | 5.1.7870.0 (VMware) & 5.1.7882.0 (Hyper-V) | 2.0.9259.0
[Rollup 64](https://support.microsoft.com/topic/update-rollup-64-for-azure-site-recovery-kb5020102-23db9799-102c-4378-9754-2f19f6c7858a) | 9.51.6477.1 | 5.1.7802.0 | 9.51.6477.1 | 5.1.7802.0 | 2.0.9257.0
[Rollup 63](https://support.microsoft.com/topic/update-rollup-63-for-azure-site-recovery-992e63af-aa94-4ea6-8d1b-2dd89a9cc70b) | 9.50.6419.1 | 5.1.7626.0 | 9.50.6419.1 | 5.1.7626.0 | 2.0.9249.0
-[Rollup 61](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | 9.48.6349.1 | 5.1.7387.0 | 9.48.6349.1 | 5.1.7387.0 | 2.0.9259.0
+[Rollup 61](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | 9.48.6349.1 | 5.1.7387.0 | 9.48.6349.1 | 5.1.7387.0 | 2.0.9245.0
[Learn more](service-updates-how-to.md) about update installation and support.
site-recovery Upgrade 2012R2 To 2016 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/upgrade-2012R2-to-2016.md
The list of steps mentioned below applies to the user configuration from [Hyper-
1. Follow the steps to perform the [rolling cluster upgrade.](/windows-server/failover-clustering/cluster-operating-system-rolling-upgrade#cluster-os-rolling-upgrade-process) to execute the rolling cluster upgrade process. 2. With every new Windows Server 2016 host that is introduced in the cluster, remove the reference of a Windows Server 2012 R2 host from Azure Site Recovery by following steps mentioned [here]. This should be the host you chose to drain & evict from the cluster. 3. Once the *Update-VMVersion* command has been executed for all virtual machines, the upgrades have been completed.
-4. Use the steps mentioned [here](./hyper-v-azure-tutorial.md#set-up-the-source-environment) to register the new Windows Server 2016 host to Azure Site Recovery. Please note that the Hyper-V site is already active and you just need to register the new host in the cluster.
+4. Use the steps mentioned [here](./hyper-v-azure-tutorial.md#source-settings) to register the new Windows Server 2016 host to Azure Site Recovery. Please note that the Hyper-V site is already active and you just need to register the new host in the cluster.
5. Go to the Azure portal and verify the replicated health status inside the Recovery Services ## Upgrade Windows Server 2012 R2 hosts managed by stand-alone SCVMM 2012 R2 server
update-center Manage Workbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/manage-workbooks.md
+
+ Title: Create reports using workbooks in update management center (preview)
+description: This article describes how to create and manage workbooks in update management center (preview).
+++ Last updated : 01/16/2023+++
+# Manage workbooks in update management center (preview)
+
+**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure Arc-enabled servers.
+
+This article describes how to create a workbook and how to edit a workbook to create customized reports.
+
+## Create a workbook
+
+1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to Update management center (preview).
+1. Under **Monitoring**, select **Workbooks** to view the **Update management center (Preview) | Workbooks | Gallery** page.
+1. Select the **Quick start** tile > **Empty**, or select **+New** to create a workbook.
+1. Select **+Add** to add any of the available [elements](../azure-monitor/visualize/workbooks-create-workbook.md#create-a-new-azure-workbook) to the workbook.
+
+ :::image type="content" source="./media/manage-workbooks/create-workbook-elements.png" alt-text="Screenshot of how to create workbook using elements.":::
+
+1. Select **Done Editing**.
+
+## Edit a workbook
+1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to Update management center (preview).
+1. Under **Monitoring**, select **Workbooks** to view the **Update management center (Preview) | Workbooks | Gallery** page.
+1. Select the **Update management center** tile > **Overview** to view the **Update management center (Preview) | Workbooks | Overview** page.
+1. Select your subscription, and select **Edit** to enable edit mode for all four options:
+
+ - Machines overall status & configuration
+ - Updates data overview
+ - Schedules/Maintenance configurations
+ - History of Installation runs
+
+ :::image type="content" source="./media/manage-workbooks/edit-workbooks-inline.png" alt-text="Screenshot of enabling the edit mode for all the options in workbooks." lightbox="./media/manage-workbooks/edit-workbooks-expanded.png":::
+
   You can customize the visualizations to create interactive reports, and edit the parameters, chart sizes, and chart settings to define how each chart is rendered. Each visualization is driven by a query that you can also edit; see the query sketch after these steps.
+
+ :::image type="content" source="./media/manage-workbooks/workbooks-edit-query-inline.png" alt-text="Screenshot of various edit options in workbooks." lightbox="./media/manage-workbooks/workbooks-edit-query-expanded.png":::
+
+1. Select **Done Editing**.
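Most of the charts in these workbooks are driven by Azure Resource Graph queries. As an illustration of what such a query looks like outside the portal, here's a sketch using the `azure-identity` and `azure-mgmt-resourcegraph` packages. Update management center surfaces its data through tables such as `patchassessmentresources`, but treat the exact query and the `<subscription-id>` placeholder as examples to adapt, not a supported sample:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.resourcegraph import ResourceGraphClient
from azure.mgmt.resourcegraph.models import QueryRequest

credential = DefaultAzureCredential()
client = ResourceGraphClient(credential)

# Illustrative query: count patch assessment records by status.
request = QueryRequest(
    subscriptions=["<subscription-id>"],
    query="patchassessmentresources | summarize count() by tostring(properties.status)",
)
result = client.resources(request)
print(result.data)
```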
++
+## Next steps
+
+* [View updates for single machine](view-updates.md)
+* [Deploy updates now (on-demand) for single machine](deploy-updates.md)
+* [Schedule recurring updates](scheduled-patching.md)
+* [Manage update settings via Portal](manage-update-settings.md)
+* [Manage multiple machines using update management center](manage-multiple-machines.md)
update-center Workbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/workbooks.md
+
+ Title: An overview of Workbooks
+description: This article provides information on how workbooks provide a flexible canvas for data analysis and the creation of rich visual reports.
+ Last updated : 01/16/2023+++++
+# About Workbooks
+
+**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure Arc-enabled servers.
+
+Workbooks provide a flexible canvas for data analysis and let you create rich visual reports. This article describes the various features that Workbooks offer in Update management center (preview).
+
+## Key benefits
+- Provide a canvas for data analysis and the creation of visual reports.
+- Access specific metrics from within the reports.
+- Create interactive reports with various kinds of visualizations.
+- Create, share, and pin workbooks to the dashboard.
+- Combine text, log queries, metrics, and parameters to make rich visual reports.
+
+## The gallery
+
+The gallery lists all the saved workbooks and templates for your workspace. You can easily organize, sort, and manage workbooks of all types.
+
+ :::image type="content" source="./media/workbooks/workbooks-gallery.png" alt-text="Screenshot of workbooks gallery.":::
+
+- It comprises the following four tabs that help you organize workbook types:
+
+ | Tab | Description |
+ |||
+ | All | Shows the top four items for workbooks, public templates, and my templates. Workbooks are sorted by modified date, so you'll see the most recent eight modified workbooks.|
+ | Workbooks | Shows the list of all the available workbooks that you created or are shared with you. |
+    | Public Templates | Shows the list of all the available ready-to-use workbook templates published by Microsoft, grouped by category. |
+    | My Templates | Shows the list of all the available deployed workbook templates that you created or that were shared with you, grouped by category. |
+
+- In the **Quick start** tile, you can create new workbooks.
+ :::image type="content" source="./media/workbooks/quickstart-workbooks.png" alt-text="Screenshot of creating a new workbook using Quick start.":::
+
+- In the **Recently modified** tile, you can view and edit the workbooks.
+
+- In the **Update management center** tile, you can view the following summary:
+ :::image type="content" source="./media/workbooks/workbooks-summary-inline.png" alt-text="Screenshot of workbook summary." lightbox="./media/workbooks/workbooks-summary-expanded.png":::
+
+
+ - **Machines overall status and configurations** - provides the status of all machines in a specific subscription.
+
+ :::image type="content" source="./media/workbooks/workbooks-machine-overall-status-inline.png" alt-text="Screenshot of the overall status and configuration of machines." lightbox="./media/workbooks/workbooks-machine-overall-status-expanded.png":::
+
+   - **Updates data overview** - provides a summary of machines with no updates, pending assessments, and reboots needed, including the pending Windows and Linux updates by classification and by machine count.
+
:::image type="content" source="./media/workbooks/workbooks-machines-updates-status-inline.png" alt-text="Screenshot of the summary of machines that have no updates and pending assessments." lightbox="./media/workbooks/workbooks-machines-updates-status-expanded.png":::
+
   - **Schedules/maintenance configurations** - provides a summary of schedules and maintenance configurations, and lists the machines attached to each schedule. You can also access the maintenance configuration overview page from this section.
+
+ :::image type="content" source="./media/workbooks/workbooks-schedules-maintenance-inline.png" alt-text="Screenshot of summary of schedules and maintenance configurations." lightbox="./media/workbooks/workbooks-schedules-maintenance-expanded.png":::
+
+ - **History of installation runs** - provides a history of machines and maintenance runs.
+ :::image type="content" source="./media/workbooks/workbooks-history-installation-inline.png" alt-text="Screenshot of history of installation runs." lightbox="./media/workbooks/workbooks-history-installation-expanded.png":::
+
+For information on how to use the workbooks for customized reporting, see [Edit a workbook](manage-workbooks.md#edit-a-workbook).
+
+## Next steps
+
 Learn about deploying updates to your machines to maintain security compliance by reading [deploy updates](deploy-updates.md).
virtual-desktop Environment Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/environment-setup.md
A host pool can be one of two types:
- Personal, where each session host is assigned to an individual user. Personal host pools provide dedicated desktops to end-users that optimize environments for performance and data separation. - Pooled, where user sessions can be load balanced to any session host in the host pool. There can be multiple different users on a single session host at the same time. Pooled host pools provide a shared remote experience to end-users, which ensures lower costs and greater efficiency.
-The following table goes into more detail about the features each type of host pool has:
+The following table goes into more detail about the differences between each type of host pool:
|Feature|Personal host pools|Pooled host pools| ||||