Updates from: 12/25/2020 04:03:59
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/g-suite-provisioning-tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/g-suite-provisioning-tutorial.md new file mode 100644
@@ -0,0 +1,283 @@
+---
+title: 'Tutorial: Configure G Suite for automatic user provisioning with Azure Active Directory | Microsoft Docs'
+description: Learn how to automatically provision and de-provision user accounts from Azure AD to G Suite.
+services: active-directory
+author: zchia
+writer: zchia
+manager: CelesteDG
+ms.service: active-directory
+ms.subservice: saas-app-tutorial
+ms.workload: identity
+ms.topic: tutorial
+ms.date: 01/06/2020
+ms.author: Zhchia
+---
+
+# Tutorial: Configure G Suite for automatic user provisioning
+
+This tutorial describes the steps you need to perform in both G Suite and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [G Suite](https://gsuite.google.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
+
+> [!NOTE]
+> This tutorial describes a connector built on top of the Azure AD User Provisioning Service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
+
+> [!NOTE]
+> The G Suite connector was updated in October 2019. Changes made to the G Suite connector include:
+>
+> * Added support for additional G Suite user and group attributes.
+> * Updated G Suite target attribute names to match those defined in the [Directory API](https://developers.google.com/admin-sdk/directory).
+> * Updated default attribute mappings.
+
+> [!NOTE]
+> This article contains references to the term *whitelist*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
+
+## Capabilities supported
+> [!div class="checklist"]
+> * Create users in G Suite
+> * Remove users in G Suite when they no longer require access
+> * Keep user attributes synchronized between Azure AD and G Suite
+> * Provision groups and group memberships in G Suite
+> * [Single sign-on](./google-apps-tutorial.md) to G Suite (recommended)
+
+## Prerequisites
+
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
+
+* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md)
+* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application Administrator, Application Owner, or Global Administrator).
+* [A G Suite tenant](https://gsuite.google.com/pricing.html)
+* A user account in G Suite with admin permissions.
+
+## Step 1. Plan your provisioning deployment
+1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
+2. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+3. Determine what data to [map between Azure AD and G Suite](../app-provisioning/customize-application-attributes.md).
+
+## Step 2. Configure G Suite to support provisioning with Azure AD
+
+Before configuring G Suite for automatic user provisioning with Azure AD, you need to enable API access in G Suite.
+
+1. Sign in to the [G Suite Admin console](https://admin.google.com/) with your administrator account, and then select **Security**. If you don't see the link, it might be hidden under the **More Controls** menu at the bottom of the screen.
+
+ ![G Suite Security](./media/google-apps-provisioning-tutorial/gapps-security.png)
+
+2. On the **Security** page, select **API Reference**.
+
+ ![G Suite API](./media/google-apps-provisioning-tutorial/gapps-api.png)
+
+3. Select **Enable API access**.
+
+ ![G Suite API Enabled](./media/google-apps-provisioning-tutorial/gapps-api-enabled.png)
+
+ > [!IMPORTANT]
+ > For every user that you intend to provision to G Suite, their user name in Azure AD **must** be tied to a custom domain. For example, user names that look like bob@contoso.onmicrosoft.com are not accepted by G Suite. On the other hand, bob@contoso.com is accepted. You can change an existing user's domain by following the instructions [here](../fundamentals/add-custom-domain.md).
+
+4. Once you have added and verified your desired custom domains with Azure AD, you must verify them again with G Suite. To verify domains in G Suite, refer to the following steps:
+
+ a. In the [G Suite Admin Console](https://admin.google.com/), select **Domains**.
+
+ ![G Suite Domains](./media/google-apps-provisioning-tutorial/gapps-domains.png)
+
+ b. Select **Add a domain or a domain alias**.
+
+ ![G Suite Add Domain](./media/google-apps-provisioning-tutorial/gapps-add-domain.png)
+
+ c. Select **Add another domain**, and then type in the name of the domain that you want to add.
+
+ ![G Suite Add Another](./media/google-apps-provisioning-tutorial/gapps-add-another.png)
+
+ d. Select **Continue and verify domain ownership**. Then follow the steps to verify that you own the domain name. For comprehensive instructions on how to verify your domain with Google, see [Verify your site ownership](https://support.google.com/webmasters/answer/35179).
+
+ e. Repeat the preceding steps for any additional domains that you intend to add to G Suite.
+
+5. Next, determine which admin account you want to use to manage user provisioning in G Suite. Navigate to **Admin Roles**.
+
+ ![G Suite Admin](./media/google-apps-provisioning-tutorial/gapps-admin.png)
+
+6. For the **Admin role** of that account, edit the **Privileges** for that role. Make sure to enable all **Admin API Privileges** so that this account can be used for provisioning.
+
+ ![G Suite Admin Privileges](./media/google-apps-provisioning-tutorial/gapps-admin-privileges.png)
+
+## Step 3. Add G Suite from the Azure AD application gallery
+
+Add G Suite from the Azure AD application gallery to start managing provisioning to G Suite. If you have previously set up G Suite for SSO, you can use the same application. However, it is recommended that you create a separate app when initially testing out the integration. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
+
+## Step 4. Define who will be in scope for provisioning
+
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application, or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* When assigning users and groups to G Suite, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+## Step 5. Configure automatic user provisioning to G Suite
+
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in G Suite based on user and/or group assignments in Azure AD.
+
+> [!NOTE]
+> To learn more about G Suite's Directory API endpoint, refer to [Directory API](https://developers.google.com/admin-sdk/directory).
+
+### To configure automatic user provisioning for G Suite in Azure AD:
+
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**. Users need to sign in at portal.azure.com; this configuration is not available at aad.portal.azure.com.
+
+ ![Enterprise applications blade](./media/google-apps-provisioning-tutorial/enterprise-applications.png)
+
+ ![All applications blade](./media/google-apps-provisioning-tutorial/all-applications.png)
+
+2. In the applications list, select **G Suite**.
+
+ ![The G Suite link in the Applications list](common/all-applications.png)
+
+3. Select the **Provisioning** tab. Click on **Get started**.
+
+ ![Screenshot of the Manage options with the Provisioning option called out.](common/provisioning.png)
+
+ ![Get started blade](./media/google-apps-provisioning-tutorial/get-started.png)
+
+4. Set the **Provisioning Mode** to **Automatic**.
+
+ ![Screenshot of the Provisioning Mode dropdown list with the Automatic option called out.](common/provisioning-automatic.png)
+
+5. Under the **Admin Credentials** section, click on **Authorize**. You will be redirected to a Google authorization dialog box in a new browser window.
+
+ ![G Suite authorize](./media/google-apps-provisioning-tutorial/authorize-1.png)
+
+6. Confirm that you want to give Azure AD permissions to make changes to your G Suite tenant. Select **Accept**.
+
+ ![G Suite Tenant Auth](./media/google-apps-provisioning-tutorial/gapps-auth.png)
+
+7. In the Azure portal, select **Test Connection** to ensure Azure AD can connect to G Suite. If the connection fails, ensure that your G Suite account has admin permissions, and then try the **Authorize** step again.
+
+8. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications, and then select the **Send an email notification when a failure occurs** check box.
+
+ ![Notification Email](common/provisioning-notification-email.png)
+
+9. Select **Save**.
+
+10. Under the **Mappings** section, select **Provision Azure Active Directory Users**.
+
+11. Review the user attributes that are synchronized from Azure AD to G Suite in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in G Suite for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you will need to ensure that the G Suite API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|
+ |---|---|
+ |primaryEmail|String|
+ |relations.[type eq "manager"].value|String|
+ |name.familyName|String|
+ |name.givenName|String|
+ |suspended|String|
+ |externalIds.[type eq "custom"].value|String|
+ |externalIds.[type eq "organization"].value|String|
+ |addresses.[type eq "work"].country|String|
+ |addresses.[type eq "work"].streetAddress|String|
+ |addresses.[type eq "work"].region|String|
+ |addresses.[type eq "work"].locality|String|
+ |addresses.[type eq "work"].postalCode|String|
+ |emails.[type eq "work"].address|String|
+ |organizations.[type eq "work"].department|String|
+ |organizations.[type eq "work"].title|String|
+ |phoneNumbers.[type eq "work"].value|String|
+ |phoneNumbers.[type eq "mobile"].value|String|
+ |phoneNumbers.[type eq "work_fax"].value|String|
+ |emails.[type eq "work"].address|String|
+ |organizations.[type eq "work"].department|String|
+ |organizations.[type eq "work"].title|String|
+ |phoneNumbers.[type eq "work"].value|String|
+ |phoneNumbers.[type eq "mobile"].value|String|
+ |phoneNumbers.[type eq "work_fax"].value|String|
+ |addresses.[type eq "home"].country|String|
+ |addresses.[type eq "home"].formatted|String|
+ |addresses.[type eq "home"].locality|String|
+ |addresses.[type eq "home"].postalCode|String|
+ |addresses.[type eq "home"].region|String|
+ |addresses.[type eq "home"].streetAddress|String|
+ |addresses.[type eq "other"].country|String|
+ |addresses.[type eq "other"].formatted|String|
+ |addresses.[type eq "other"].locality|String|
+ |addresses.[type eq "other"].postalCode|String|
+ |addresses.[type eq "other"].region|String|
+ |addresses.[type eq "other"].streetAddress|String|
+ |addresses.[type eq "work"].formatted|String|
+ |changePasswordAtNextLogin|String|
+ |emails.[type eq "home"].address|String|
+ |emails.[type eq "other"].address|String|
+ |externalIds.[type eq "account"].value|String|
+ |externalIds.[type eq "custom"].customType|String|
+ |externalIds.[type eq "customer"].value|String|
+ |externalIds.[type eq "login_id"].value|String|
+ |externalIds.[type eq "network"].value|String|
+ |gender.type|String|
+ |GeneratedImmutableId|String|
+ |Identifier|String|
+ |ims.[type eq "home"].protocol|String|
+ |ims.[type eq "other"].protocol|String|
+ |ims.[type eq "work"].protocol|String|
+ |includeInGlobalAddressList|String|
+ |ipWhitelisted|String|
+ |organizations.[type eq "school"].costCenter|String|
+ |organizations.[type eq "school"].department|String|
+ |organizations.[type eq "school"].domain|String|
+ |organizations.[type eq "school"].fullTimeEquivalent|String|
+ |organizations.[type eq "school"].location|String|
+ |organizations.[type eq "school"].name|String|
+ |organizations.[type eq "school"].symbol|String|
+ |organizations.[type eq "school"].title|String|
+ |organizations.[type eq "work"].costCenter|String|
+ |organizations.[type eq "work"].domain|String|
+ |organizations.[type eq "work"].fullTimeEquivalent|String|
+ |organizations.[type eq "work"].location|String|
+ |organizations.[type eq "work"].name|String|
+ |organizations.[type eq "work"].symbol|String|
+ |OrgUnitPath|String|
+ |phoneNumbers.[type eq "home"].value|String|
+ |phoneNumbers.[type eq "other"].value|String|
+ |websites.[type eq "home"].value|String|
+ |websites.[type eq "other"].value|String|
+ |websites.[type eq "work"].value|String|
+
+
+12. Under the **Mappings** section, select **Provision Azure Active Directory Groups**.
+
+13. Review the group attributes that are synchronized from Azure AD to G Suite in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in G Suite for update operations. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|
+ |---|---|
+ |email|String|
+ |Members|String|
+ |name|String|
+ |description|String|
+
+14. To configure scoping filters, refer to the instructions in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+15. To enable the Azure AD provisioning service for G Suite, change the **Provisioning Status** to **On** in the **Settings** section.
+
+ ![Provisioning Status Toggled On](common/provisioning-toggle-on.png)
+
+16. Define the users and/or groups that you would like to provision to G Suite by choosing the desired values in **Scope** in the **Settings** section.
+
+ ![Provisioning Scope](common/provisioning-scope.png)
+
+17. When you are ready to provision, click **Save**.
+
+ ![Saving Provisioning Configuration](common/provisioning-configuration-save.png)
+
+This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
+
+> [!NOTE]
+> If a user already has an existing personal/consumer Google account that uses the email address of the Azure AD user, it may cause an issue that can be resolved by using the Google Transfer Tool before you perform the directory sync.
+
+## Step 6. Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment:
+
+1. Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully.
+2. Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion.
+3. If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
+
+## Additional resources
+
+* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
+* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+
+## Next steps
+
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
\ No newline at end of file
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/durable/durable-functions-custom-orchestration-status https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/durable/durable-functions-custom-orchestration-status.md
@@ -120,7 +120,7 @@ public static async Task<HttpResponseMessage> Run(
{
    // Function input comes from the request content.
    dynamic eventData = await req.Content.ReadAsAsync<object>();
- string instanceId = await starter.StartNewAsync(functionName, eventData);
+ string instanceId = await starter.StartNewAsync(functionName, (string)eventData);
log.LogInformation($"Started orchestration with ID = '{instanceId}'.");
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/durable/durable-functions-overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/durable/durable-functions-overview.md
@@ -3,7 +3,7 @@ title: Durable Functions Overview - Azure
description: Introduction to the Durable Functions extension for Azure Functions.
author: cgillum
ms.topic: overview
-ms.date: 03/12/2020
+ms.date: 12/23/2020
ms.author: cgillum
ms.reviewer: azfuncdf
#Customer intent: As a < type of user >, I want < what? > so that < why? >.
@@ -19,10 +19,12 @@ Durable Functions currently supports the following languages:
* **C#**: both [precompiled class libraries](../functions-dotnet-class-library.md) and [C# script](../functions-reference-csharp.md).
* **JavaScript**: supported only for version 2.x of the Azure Functions runtime. Requires version 1.7.0 of the Durable Functions extension, or a later version.
-* **Python**: requires version 1.8.5 of the Durable Functions extension, or a later version. Support for Durable Functions is currently in public preview.
+* **Python**: requires version 2.3.1 of the Durable Functions extension, or a later version. Support for Durable Functions is currently in public preview.
* **F#**: precompiled class libraries and F# script. F# script is only supported for version 1.x of the Azure Functions runtime.
* **PowerShell**: support for Durable Functions is currently in public preview. Supported only for version 3.x of the Azure Functions runtime and PowerShell 7. Requires version 2.2.2 of the Durable Functions extension, or a later version. Only the following patterns are currently supported: [Function chaining](#chaining), [Fan-out/fan-in](#fan-in-out), [Async HTTP APIs](#async-http).
+To access the latest features and updates, we recommend that you use the latest versions of the Durable Functions extension and the language-specific Durable Functions libraries. Learn more about [Durable Functions versions](durable-functions-versions.md).
+
Durable Functions has a goal of supporting all [Azure Functions languages](../supported-languages.md). See the [Durable Functions issues list](https://github.com/Azure/azure-functions-durable-extension/issues) for the latest status of work to support additional languages.

Like Azure Functions, there are templates to help you develop Durable Functions using [Visual Studio 2019](durable-functions-create-first-csharp.md), [Visual Studio Code](quickstart-js-vscode.md), and the [Azure portal](durable-functions-create-portal.md).
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/durable/durable-functions-versions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/durable/durable-functions-versions.md
@@ -3,7 +3,7 @@ title: Durable Functions versions overview - Azure Functions
description: Learn about Durable Functions versions.
author: cgillum
ms.topic: conceptual
-ms.date: 08/20/2020
+ms.date: 12/23/2020
ms.author: azfuncdf
---
@@ -45,6 +45,8 @@ Install the latest 2.x version of the Durable Functions bindings extension in yo
Durable Functions 2.x is available in version 2.x of the [Azure Functions extension bundle](../functions-bindings-register.md#extension-bundles).
+Python support in Durable Functions requires Durable Functions 2.x.
+
To update the extension bundle version in your project, open host.json and update the `extensionBundle` section to use version 2.x (`[2.*, 3.0.0)`).

```json
@@ -57,6 +59,9 @@ To update the extension bundle version in your project, open host.json and updat
}
```
+> [!NOTE]
+> If Visual Studio Code is not displaying the correct templates after you change the extension bundle version, reload the window by running the *Developer: Reload Window* command (<kbd>Ctrl+R</kbd> on Windows and Linux, <kbd>Command+R</kbd> on macOS).
+
#### .NET

Update your .NET project to use the latest version of the [Durable Functions bindings extension](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.DurableTask).
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/durable/quickstart-python-vscode https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/durable/quickstart-python-vscode.md
@@ -4,7 +4,7 @@ description: Create and publish an Azure Durable Function in Python using Visual
author: anthonychu
ms.topic: quickstart
-ms.date: 04/04/2020
+ms.date: 12/23/2020
ms.reviewer: azfuncdf, antchu
---
@@ -36,7 +36,7 @@ To complete this tutorial:
In this section, you use Visual Studio Code to create a local Azure Functions project.
-1. In Visual Studio Code, press F1 (or Ctrl/Cmd+Shift+P) to open the command palette. In the command palette, search for and select `Azure Functions: Create New Project...`.
+1. In Visual Studio Code, press F1 (or <kbd>Ctrl/Cmd+Shift+P</kbd>) to open the command palette. In the command palette, search for and select `Azure Functions: Create New Project...`.
![Create function](media/quickstart-python-vscode/functions-create-project.png)
@@ -56,18 +56,33 @@ Visual Studio Code installs the Azure Functions Core Tools, if needed. It also c
A requirements.txt file is also created in the root folder. It specifies the Python packages needed to run your function app.
+## Update Azure Functions extension bundles version
+
+Python Azure Functions require version 2.x of [Azure Functions extension bundles](../functions-bindings-register.md#access-extensions-in-non-net-languages). Extension bundles are configured in *host.json*.
+
+1. Open *host.json* in the project. Update the extension bundle `version` to `[2.*, 3.0.0)`. This specifies a version range that is greater than or equal to 2.0, and less than 3.0.
+
+ ```json
+ "extensionBundle": {
+ "id": "Microsoft.Azure.Functions.ExtensionBundle",
+ "version": "[2.*, 3.0.0)"
+ }
+ ```
+
+1. VS Code must be reloaded before the updated extension bundle version takes effect. In the command palette, search for the *Developer: Reload Window* command and run it.
+
## Install azure-functions-durable from PyPI

When you created the project, the Azure Functions VS Code extension automatically created a virtual environment with your selected Python version. You will activate the virtual environment in a terminal and install some dependencies required by Azure Functions and Durable Functions.
-1. Open `requirements.txt` in the editor and change its content to the following:
+1. Open *requirements.txt* in the editor and change its content to the following:
    ```
    azure-functions
- azure-functions-durable>=1.0.0b6
+ azure-functions-durable>=1.0.0b12
```
-1. Open the editor's integrated terminal in the current folder (`` Ctrl-Shift-` ``).
+1. Open the editor's integrated terminal in the current folder (<kbd>Ctrl+Shift+`</kbd>).
1. In the integrated terminal, activate the virtual environment in the current folder:
@@ -199,7 +214,7 @@ Azure Functions Core Tools lets you run an Azure Functions project on your local
}
```
-1. To stop debugging, press **Shift + F5** in VS Code.
+1. To stop debugging, press <kbd>Shift+F5</kbd> in VS Code.
After you've verified that the function runs correctly on your local computer, it's time to publish the project to Azure.
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/functions-bindings-signalr-service-input https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-bindings-signalr-service-input.md
@@ -241,11 +241,11 @@ Example function.json:
Here's the Python code:

```python
-def main(req: func.HttpRequest, connectionInfoJson: str) -> func.HttpResponse:
+def main(req: func.HttpRequest, connectionInfo: str) -> func.HttpResponse:
    # connectionInfo contains an access key token with a name identifier
    # claim set to the authenticated user
    return func.HttpResponse(
- connectionInfoJson,
+ connectionInfo,
        status_code=200,
        headers={
            'Content-type': 'application/json'
@@ -276,4 +276,5 @@ public SignalRConnectionInfo negotiate(
## Next steps
+- [Handle messages from SignalR Service (Trigger binding)](./functions-bindings-signalr-service-trigger.md)
- [Send SignalR Service messages (Output binding)](./functions-bindings-signalr-service-output.md)
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/functions-bindings-signalr-service-output https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-bindings-signalr-service-output.md
@@ -749,4 +749,5 @@ The following table explains the binding configuration properties that you set i
## Next steps
+- [Handle messages from SignalR Service (Trigger binding)](./functions-bindings-signalr-service-trigger.md)
- [Return the service endpoint URL and access token (Input binding)](./functions-bindings-signalr-service-input.md)
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/functions-bindings-signalr-service-trigger https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-bindings-signalr-service-trigger.md
@@ -13,6 +13,9 @@ ms.author: chenyl
Use the *SignalR* trigger binding to respond to messages sent from Azure SignalR Service. When the function is triggered, messages passed to the function are parsed as a JSON object.
+In SignalR Service serverless mode, SignalR Service uses the [Upstream](../azure-signalr/concept-upstream.md) feature to send messages from clients to the function app. The function app then uses the SignalR Service trigger binding to handle these messages. The general architecture is shown below:
+:::image type="content" source="media/functions-bindings-signalr-service/signalr-trigger.png" alt-text="SignalR Trigger Architecture":::
+
For information on setup and configuration details, see the [overview](functions-bindings-signalr-service.md).

## Example
@@ -199,15 +202,22 @@ InvocationContext contains all the content in the message send from SignalR Serv
## Using `ParameterNames`
-The property `ParameterNames` in `SignalRTrigger` allows you to bind arguments of invocation messages to the parameters of functions. That gives you a more convenient way to access arguments of `InvocationContext`.
+The property `ParameterNames` in `SignalRTrigger` allows you to bind arguments of invocation messages to the parameters of functions. The names you define can be used as part of [binding expressions](../azure-functions/functions-bindings-expressions-patterns.md) in other bindings or as parameters in your code. That gives you a more convenient way to access the arguments of `InvocationContext`.
-Say you have a JavaScript SignalR client trying to invoke method `broadcast` in Azure Function with two arguments.
+Say you have a JavaScript SignalR client trying to invoke the method `broadcast` in an Azure Function with two arguments, `message1` and `message2`.
```javascript
await connection.invoke("broadcast", message1, message2);
```
-You can access these two arguments from parameter as well as assign type of parameter for them by using `ParameterNames`.
+After you set `ParameterNames`, the names you define correspond, in order, to the arguments sent on the client side.
+
+```cs
+[SignalRTrigger(parameterNames: new string[] {"arg1", "arg2"})]
+```
+
+Then `arg1` will contain the content of `message1`, and `arg2` will contain the content of `message2`.
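Putting this together, a hedged sketch of a complete trigger function using `ParameterNames` might look like the following; the hub name `chat` and the logging body are illustrative assumptions, not taken from this commit:

```cs
[FunctionName("broadcast")]
public static void Broadcast(
    [SignalRTrigger("chat", "messages", "broadcast", parameterNames: new string[] { "arg1", "arg2" })]
    InvocationContext invocationContext,
    string arg1,
    string arg2,
    ILogger logger)
{
    // arg1 and arg2 are bound, in order, to message1 and message2 from the
    // client's connection.invoke("broadcast", message1, message2) call.
    logger.LogInformation($"Received '{arg1}' and '{arg2}' from connection {invocationContext.ConnectionId}.");
}
```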
+
### Remarks
@@ -215,20 +225,28 @@ For the parameter binding, the order matters. If you are using `ParameterNames`,
`ParameterNames` and attribute `[SignalRParameter]` **cannot** be used at the same time, or you will get an exception.
-## Send messages to SignalR Service trigger binding
+## SignalR Service integration
+
+SignalR Service needs a URL to access the function app when you're using the SignalR Service trigger binding. The URL should be configured in **Upstream Settings** on the SignalR Service side.
+
+:::image type="content" source="../azure-signalr/media/concept-upstream/upstream-portal.png" alt-text="Upstream Portal":::
-Azure Function generates a URL for SignalR Service trigger binding and it is formatted as following:
+When using SignalR Service trigger, the URL can be simple and formatted as shown below:
```http
-https://<APP_NAME>.azurewebsites.net/runtime/webhooks/signalr?code=<API_KEY>
+<Function_App_URL>/runtime/webhooks/signalr?code=<API_KEY>
```
-The `API_KEY` is generated by Azure Function. You can get the `API_KEY` from Azure portal as you're using SignalR Service trigger binding.
+The `Function_App_URL` can be found on the function app's **Overview** page, and the `API_KEY` is generated by Azure Functions. You can get the `API_KEY` from `signalr_extension` in the **App keys** blade of the function app.
:::image type="content" source="media/functions-bindings-signalr-service/signalr-keys.png" alt-text="API key":::
-You should set this URL in `UrlTemplate` in the upstream settings of SignalR Service.
+If you want to use more than one function app together with one SignalR Service instance, the upstream feature can also support complex routing rules. Find more details at [Upstream settings](../azure-signalr/concept-upstream.md).
+
+## Step-by-step sample
+
+You can follow the sample on GitHub to deploy a chat room on a function app with the SignalR Service trigger binding and the upstream feature: [Bidirectional chat room sample](https://github.com/aspnet/AzureSignalR-samples/tree/master/samples/BidirectionChat)
## Next steps

* [Azure Functions development and configuration with Azure SignalR Service](../azure-signalr/signalr-concept-serverless-development-config.md)
-* [SignalR Service Trigger binding sample](https://github.com/Azure/azure-functions-signalrservice-extension/tree/dev/samples/bidirectional-chat)
+* [SignalR Service Trigger binding sample](https://github.com/aspnet/AzureSignalR-samples/tree/master/samples/BidirectionChat)
azure-functions https://docs.microsoft.com/en-us/azure/azure-functions/functions-bindings-signalr-service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-bindings-signalr-service.md
@@ -14,6 +14,7 @@ This set of articles explains how to authenticate and send real-time messages to
| Action | Type |
|---------|---------|
+| Handle messages from SignalR Service | [Trigger binding](./functions-bindings-signalr-service-trigger.md) |
| Return the service endpoint URL and access token | [Input binding](./functions-bindings-signalr-service-input.md) |
| Send SignalR Service messages |[Output binding](./functions-bindings-signalr-service-output.md) |
@@ -51,5 +52,6 @@ To use the SignalR Service annotations in Java functions, you need to add a depe
## Next steps
+- [Handle messages from SignalR Service (Trigger binding)](./functions-bindings-signalr-service-trigger.md)
- [Return the service endpoint URL and access token (Input binding)](./functions-bindings-signalr-service-input.md)
- [Send SignalR Service messages (Output binding)](./functions-bindings-signalr-service-output.md)
\ No newline at end of file
azure-monitor https://docs.microsoft.com/en-us/azure/azure-monitor/log-query/logs-dedicated-clusters https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/log-query/logs-dedicated-clusters.md
@@ -366,7 +366,7 @@ You can unlink a workspace from a cluster. After unlinking a workspace from the
Old data of the unlinked workspace might be left on the cluster. If this data is encrypted using customer-managed keys (CMK), the Key Vault secrets are kept. The system abstracts this change from Log Analytics users. Users can just query the workspace as usual. The system performs cross-cluster queries on the backend as needed, with no indication to users.

> [!WARNING]
-> There is a limit of two linking operations per workspace within a month. Take time to consider and plan unlinking actions accordingly.
+> There is a limit of two linking operations for a specific workspace within a month. Take time to consider and plan unlinking actions accordingly.
## Delete a dedicated cluster
@@ -378,6 +378,9 @@ A *Cluster* resource that was deleted in the last 14 days is in soft-delete stat
Within the 14 days after deletion, the cluster resource name is reserved and cannot be used by other resources.
+> [!WARNING]
+> There is a limit of three clusters per subscription. Both active and soft-deleted clusters count toward this limit. Don't create recurring procedures that create and delete clusters; doing so has a significant impact on Log Analytics back-end systems.
+
**PowerShell**

Use the following PowerShell command to delete a cluster:
azure-signalr https://docs.microsoft.com/en-us/azure/azure-signalr/concept-upstream https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-signalr/concept-upstream.md
@@ -48,6 +48,19 @@ When a client in the "chat" hub invokes the hub method `broadcast`, a message wi
http://host.com/chat/api/messages/broadcast
```
+### Key Vault secret reference in URL template settings
+
+The upstream URL is not encrypted at rest. If it contains any sensitive information, we suggest saving that information in Key Vault, where access control is better assured. Specifically, you can enable the managed identity of Azure SignalR Service, grant it read permission on a Key Vault instance, and then use a Key Vault reference instead of plaintext in the Upstream URL Pattern.
+
+1. Add a system-assigned identity or user-assigned identity. See [How to add a managed identity in the Azure portal](./howto-use-managed-identity.md#add-a-system-assigned-identity).
+
+2. Grant secret read permission for the managed identity in the Access policies in the Key Vault. See [Assign a Key Vault access policy using the Azure portal](https://docs.microsoft.com/azure/key-vault/general/assign-access-policy-portal)
+
+3. Replace your sensitive text with the syntax `{@Microsoft.KeyVault(SecretUri=<secret-identity>)}` in the Upstream URL Pattern.
+
+> [!NOTE]
+> The secret content is reread only when you change the Upstream settings or change the managed identity. Make sure you have granted secret read permission to the managed identity before using the Key Vault secret reference.
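For example, an Upstream URL Pattern that hides a function key behind a Key Vault reference might look like the following; the function-app host, vault name, and secret name are hypothetical placeholders:

```
https://contoso-func.azurewebsites.net/runtime/webhooks/signalr?code={@Microsoft.KeyVault(SecretUri=https://contoso-vault.vault.azure.net/secrets/signalr-key/)}
```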
+
### Rule settings

You can set rules for *hub rules*, *category rules*, and *event rules* separately. The matching rule supports three formats. Take event rules as an example:
@@ -56,8 +69,8 @@ You can set rules for *hub rules*, *category rules*, and *event rules* separatel
- Use the full event name to match the event. For example, `connected` matches the connected event.

> [!NOTE]
-> If you're using Azure Functions and [SignalR trigger](../azure-functions/functions-bindings-signalr-service-trigger.md), SignalR trigger will expose a single endpoint in the following format: `https://<APP_NAME>.azurewebsites.net/runtime/webhooks/signalr?code=<API_KEY>`.
-> You can just configure url template to this url.
+> If you're using Azure Functions and [SignalR trigger](../azure-functions/functions-bindings-signalr-service-trigger.md), SignalR trigger will expose a single endpoint in the following format: `<Function_App_URL>/runtime/webhooks/signalr?code=<API_KEY>`.
+> You can just configure **URL template settings** to this URL and keep **Rule settings** at the default. See [SignalR Service integration](../azure-functions/functions-bindings-signalr-service-trigger.md#signalr-service-integration) for details about how to find `<Function_App_URL>` and `<API_KEY>`.
### Authentication settings
@@ -110,7 +123,7 @@ To create upstream settings by using an [Azure Resource Manager template](../azu
## Serverless protocols
-Azure SignalR Service sends messages to endpoints that follow the following protocols.
+Azure SignalR Service sends messages to endpoints that adhere to the following protocols. You can use the [SignalR Service trigger binding](../azure-functions/functions-bindings-signalr-service-trigger.md) with a function app, which handles these protocols for you.
+
### Method
@@ -164,4 +177,6 @@ Hex_encoded(HMAC_SHA256(accessKey, connection-id))
## Next steps
- [Managed identities for Azure SignalR Service](howto-use-managed-identity.md)
-- [Azure Functions development and configuration with Azure SignalR Service](signalr-concept-serverless-development-config.md)
\ No newline at end of file
+- [Azure Functions development and configuration with Azure SignalR Service](signalr-concept-serverless-development-config.md)
+- [Handle messages from SignalR Service (Trigger binding)](../azure-functions/functions-bindings-signalr-service-trigger.md)
+- [SignalR Service Trigger binding sample](https://github.com/aspnet/AzureSignalR-samples/tree/master/samples/BidirectionChat)
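The hunk header above cites the upstream signature formula `Hex_encoded(HMAC_SHA256(accessKey, connection-id))`. As a hedged sketch of computing that value for validation in C# (the class and method names are illustrative, and the exact header name and any prefix the service applies to the value should be checked against the protocol description):

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

static class UpstreamSignature
{
    // Computes Hex_encoded(HMAC_SHA256(accessKey, connection-id)) for
    // comparison against the signature header of an upstream request.
    public static string Compute(string accessKey, string connectionId)
    {
        using var hmac = new HMACSHA256(Encoding.UTF8.GetBytes(accessKey));
        byte[] hash = hmac.ComputeHash(Encoding.UTF8.GetBytes(connectionId));
        return BitConverter.ToString(hash).Replace("-", string.Empty).ToLowerInvariant();
    }
}
```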
azure-signalr https://docs.microsoft.com/en-us/azure/azure-signalr/howto-use-managed-identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-signalr/howto-use-managed-identity.md
@@ -39,7 +39,7 @@ Creating an Azure SignalR Service instance with a user-assigned identity require
4. On the **User assigned** tab, select **Add**.
-5. Search for the identity that you created earlier and select it. Select **Add**.
+5. Search for the identity that you created earlier and select it. Select **Add**.
:::image type="content" source="media/signalr-howto-use-managed-identity/user-identity-portal.png" alt-text="Add a user-assigned identity in the portal":::
@@ -51,7 +51,10 @@ Azure SignalR Service is a fully managed service, so you can't use a managed ide
1. Add a system-assigned identity or user-assigned identity.
-2. Configure upstream settings and use **ManagedIdentity** as the **Auth** settings. To learn how to create upstream settings with authentication, see [Upstream settings](concept-upstream.md).
+2. Add an Upstream Setting and select any asterisk to open a detailed page, as shown below.
+ :::image type="content" source="media/signalr-howto-use-managed-identity/pre-msi-settings.png" alt-text="pre-msi-setting":::
+
+ :::image type="content" source="media/signalr-howto-use-managed-identity/msi-settings.png" alt-text="msi-setting":::
3. In the managed identity authentication settings, for **Resource**, you can specify the target resource. The resource will become an `aud` claim in the obtained access token, which can be used as a part of validation in your upstream endpoints. The resource can be one of the following:
   - Empty
@@ -72,6 +75,38 @@ The Azure Active Directory (Azure AD) middleware has built-in capabilities for v
We provide libraries and code samples that show how to handle token validation. There are also several open-source partner libraries available for JSON Web Token (JWT) validation. There's at least one option for almost every platform and language out there. For more information about Azure AD authentication libraries and code samples, see [Microsoft identity platform authentication libraries](../active-directory/develop/reference-v2-libraries.md).
+#### Authentication in Function App
+
+You can set up access token validation in your function app easily and efficiently, without code changes.
+
+1. In the **Authentication / Authorization** page, switch **App Service Authentication** to **On**.
+
+2. Select **Log in with Azure Active Directory** in **Action to take when request is not authenticated**.
+
+3. Under the authentication providers, select **Azure Active Directory**.
+
+4. On the new page, select **Express** and **Create New AD App**, and then select **OK**.
+ :::image type="content" source="media/signalr-howto-use-managed-identity/function-aad.png" alt-text="Function Aad":::
+
+5. Navigate to SignalR Service and follow [steps](howto-use-managed-identity.md#add-a-system-assigned-identity) to add a system-assigned identity or user-assigned identity.
+
+6. Go to **Upstream settings** in SignalR Service, choose **Use Managed Identity**, and then choose **Select from existing Applications**. Select the application that you created previously.
+
+After these settings, the Function App will reject requests without an access token in the header.
+
+## Use a managed identity for Key Vault reference
+
+SignalR Service can use the managed identity to access Key Vault and retrieve secrets.
+
+1. Add a system-assigned identity or user-assigned identity for Azure SignalR Service.
+
+2. Grant secret read permission for the managed identity in the Access policies in the Key Vault. See [Assign a Key Vault access policy using the Azure portal](https://docs.microsoft.com/azure/key-vault/general/assign-access-policy-portal)
+
+Currently, this feature can be used in the following scenarios:
+
+- [Reference secret in Upstream URL Pattern](./concept-upstream.md#key-vault-secret-reference-in-url-template-settings)
+
## Next steps
- [Azure Functions development and configuration with Azure SignalR Service](signalr-concept-serverless-development-config.md)
\ No newline at end of file
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/database/features-comparison https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/features-comparison.md
@@ -10,8 +10,8 @@ ms.devlang:
ms.topic: conceptual
author: jovanpop-msft
ms.author: jovanpop
-ms.reviewer: bonova, sstein
-ms.date: 11/10/2020
+ms.reviewer: bonova, sstein, danil
+ms.date: 12/24/2020
---

# Features comparison: Azure SQL Database and Azure SQL Managed Instance
@@ -142,7 +142,8 @@ The Azure platform provides a number of PaaS capabilities that are added as an a
| [Query Performance Insights (QPI)](query-performance-insight-use.md) | Yes | No. Use built-in reports in SQL Server Management Studio and Azure Data Studio. |
| [VNet](../../virtual-network/virtual-networks-overview.md) | Partial, it enables restricted access using [VNet Endpoints](vnet-service-endpoint-rule-overview.md) | Yes, SQL Managed Instance is injected in customer's VNet. See [subnet](../managed-instance/transact-sql-tsql-differences-sql-server.md#subnet) and [VNet](../managed-instance/transact-sql-tsql-differences-sql-server.md#vnet) |
| VNet Service endpoint | [Yes](vnet-service-endpoint-rule-overview.md) | No |
-| VNet Global peering | Yes, using [Private IP and service endpoints](vnet-service-endpoint-rule-overview.md) | No, [SQL Managed Instance is not supported](../../virtual-network/virtual-networks-faq.md#what-are-the-constraints-related-to-global-vnet-peering-and-load-balancers) due to [load balancer constraint in VNet global peering](../../virtual-network/virtual-network-manage-peering.md#requirements-and-constraints).
+| VNet Global peering | Yes, using [Private IP and service endpoints](vnet-service-endpoint-rule-overview.md) | No, [SQL Managed Instance is not supported](../../virtual-network/virtual-networks-faq.md#what-are-the-constraints-related-to-global-vnet-peering-and-load-balancers) due to [load balancer constraint in VNet global peering](../../virtual-network/virtual-network-manage-peering.md#requirements-and-constraints). |
+| [Elastic jobs](elastic-jobs-overview.md) | Yes - see [Elastic jobs (preview)](elastic-jobs-overview.md) | No ([SQL Agent](../managed-instance/transact-sql-tsql-differences-sql-server.md#sql-server-agent) can be used instead). |
## Tools
azure-sql https://docs.microsoft.com/en-us/azure/azure-sql/managed-instance/resource-limits https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/resource-limits.md
@@ -78,7 +78,7 @@ SQL Managed Instance has two service tiers: [General Purpose](../database/servic
| Max sessions | 30000 | 30000 |
| Max concurrent workers (requests) | Gen4: 210 * number of vCores + 800<br>Gen5: 105 * number of vCores + 800 | Gen4: 210 * vCore count + 800<br>Gen5: 105 * vCore count + 800 |
| [Read-only replicas](../database/read-scale-out.md) | 0 | 1 (included in price) |
-| Compute isolation | Gen5:<br/>-supported for 80 vCores<br/>-not supported for other sizes<br/><br/>Gen4 is not supported due to deprecation|Gen5:<br/>-supported for 60, 64, 80 vCores<br/>-not supported for other sizes<br/><br/>Gen4 is not supported due to deprecation|
+| Compute isolation | Gen5 is not supported because General Purpose instances may share physical hardware with other instances<br/>Gen4 is not supported due to deprecation|Gen5:<br/>-supported for 40, 64, 80 vCores<br/>-not supported for other sizes<br/><br/>Gen4 is not supported due to deprecation|
A few additional considerations:
@@ -144,7 +144,7 @@ The following table shows the **default regional limits** for supported subscrip
|Visual Studio Enterprise|2 |64|
|Visual Studio Professional and MSDN Platforms|2|32|
-\* In planning deployments, please take into consideration that Business Critical (BC) service tier requires four (4) times more vCore capacity than General Purpose (GP) service tier. For example: 1 GP vCore = 1 vCore unit and 1 BC vCore = 4 vCore units. To simplify your consumption analysis against the default limits, summarize the vCore units across all subnets in the region where SQL Managed Instance is deployed and compare the results with the instance unit limits for your subscription type. **Max number of vCore units** limit applies to each subscription in a region. There is no limit per individual subnets except that the sum of all vCores deployed across multiple subnets must be lower or equal to **max number of vCore units**.
+\* In planning deployments, please take into consideration that Business Critical (BC) service tier requires four (4) times more vCore capacity than General Purpose (GP) service tier. For example: 1 GP vCore = 1 vCore unit and 1 BC vCore = 4 vCore. To simplify your consumption analysis against the default limits, summarize the vCore units across all subnets in the region where SQL Managed Instance is deployed and compare the results with the instance unit limits for your subscription type. **Max number of vCore units** limit applies to each subscription in a region. There is no limit per individual subnets except that the sum of all vCores deployed across multiple subnets must be lower or equal to **max number of vCore units**.
\*\* Larger subnet and vCore limits are available in the following regions: Australia East, East US, East US 2, North Europe, South Central US, Southeast Asia, UK South, West Europe, West US 2.
azure-vmware https://docs.microsoft.com/en-us/azure/azure-vmware/deploy-azure-vmware-solution https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/deploy-azure-vmware-solution.md
@@ -2,12 +2,15 @@
title: Deploy and configure Azure VMware Solution
description: Learn how to use the information gathered in the planning stage to deploy the Azure VMware Solution private cloud.
ms.topic: tutorial
-ms.date: 11/09/2020
+ms.date: 12/24/2020
---

# Deploy and configure Azure VMware Solution
-In this article, you'll use the information from the [planning section](production-ready-deployment-steps.md) to deploy Azure VMware Solution. If you haven't defined the information, go back to the [planning section](production-ready-deployment-steps.md) before continuing.
+In this article, you'll use the information from the [planning section](production-ready-deployment-steps.md) to deploy Azure VMware Solution.
+
+>[!IMPORTANT]
+>If you haven't defined the information yet, go back to the [planning section](production-ready-deployment-steps.md) before continuing.
## Register the resource provider
azure-vmware https://docs.microsoft.com/en-us/azure/azure-vmware/includes/register-resource-provider-steps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/includes/register-resource-provider-steps.md
@@ -2,16 +2,30 @@
title: Register the Azure VMware Solution resource provider
description: Steps to register the Azure VMware Solution resource provider.
ms.topic: include
-ms.date: 09/21/2020
+ms.date: 12/24/2020
---

<!-- Used in avs-deployment.md and tutorial-create-private-cloud.md -->
-To use Azure VMware Solution, you must first register the resource provider with your subscription.
+To use Azure VMware Solution, you must first register the resource provider with your subscription.
+
+### Azure CLI
```azurecli-interactive
az provider register -n Microsoft.AVS --subscription <your subscription ID>
```
->[!TIP]
->Alternatively, you can use the GUI to register the **Microsoft.AVS** resource provider. For more information, see the [Register resource provider and types](../../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider) article.
+
+### Azure portal
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. On the Azure portal menu, select **All services**.
+
+1. In the **All services** box, enter **subscription**, and then select **Subscriptions**.
+
+1. From the subscription list, select the subscription that you want to view.
+
+1. Select **Resource providers** and enter **Microsoft.AVS** into the search.
+
+1. If the resource provider is not registered, select **Register**.
\ No newline at end of file
cognitive-services https://docs.microsoft.com/en-us/azure/cognitive-services/Speech-Service/language-support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/language-support.md
@@ -330,7 +330,7 @@ More than 75 standard voices are available in over 45 languages and locales, whi
| English (United Kingdom) | `en-GB` | Female | `en-GB-Susan`|
| English (United States) | `en-US` | Male | `en-US-BenjaminRUS`|
| English (United States) | `en-US` | Male | `en-US-GuyRUS`|
-| English (United States) | `en-US` | Female | `en-US-JessaRUS`|
+| English (United States) | `en-US` | Female | `en-US-AriaRUS`|
| English (United States) | `en-US` | Female | `en-US-ZiraRUS`|
| Finnish (Finland) | `fi-FI` | Female | `fi-FI-HeidiRUS`|
| French (Canada) | `fr-CA` | Female | `fr-CA-Caroline`|
communication-services https://docs.microsoft.com/en-us/azure/communication-services/quickstarts/includes/user-access-token-net https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/includes/user-access-token-net.md
@@ -82,7 +82,7 @@ Add the following code to the `Main` method:
// This code demonstrates how to fetch your connection string
// from an environment variable.
string connectionString = Environment.GetEnvironmentVariable("COMMUNICATION_SERVICES_CONNECTION_STRING");
-var client = new CommunicationIdentityClient(ConnectionString);
+var client = new CommunicationIdentityClient(connectionString);
```

## Create an identity
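As a hedged illustration of what this section covers, creating an identity and issuing an access token with the client generally looks like the sketch below. It assumes the GA `Azure.Communication.Identity` package (`CreateUser` and `GetToken`); preview builds of the SDK used slightly different names, so treat the exact calls as assumptions rather than the quickstart's verbatim code:

```csharp
// Create a new Communication Services identity.
Response<CommunicationUserIdentifier> identityResponse = client.CreateUser();
CommunicationUserIdentifier user = identityResponse.Value;
Console.WriteLine($"Created an identity with ID: {user.Id}");

// Issue an access token scoped to chat for that identity.
Response<AccessToken> tokenResponse = client.GetToken(user, scopes: new[] { CommunicationTokenScope.Chat });
Console.WriteLine($"Issued an access token that expires at {tokenResponse.Value.ExpiresOn}");
```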
databox-online https://docs.microsoft.com/en-us/azure/databox-online/azure-stack-edge-gpu-create-virtual-machine-image https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-create-virtual-machine-image.md new file mode 100644
@@ -0,0 +1,81 @@
+---
+title: Create VM images for your Azure Stack Edge Pro GPU device
+description: Describes how to create Linux or Windows VM images to use with your Azure Stack Edge Pro GPU device.
+services: databox
+author: alkohli
+
+ms.service: databox
+ms.subservice: edge
+ms.topic: how-to
+ms.date: 12/08/2020
+ms.author: alkohli
+#Customer intent: As an IT admin, I need to understand how to create and upload Azure VM images that I can use with my Azure Stack Edge Pro device so that I can deploy VMs on the device.
+---
+
+# Create custom VM images for your Azure Stack Edge Pro device
+
+<!--[!INCLUDE [applies-to-skus](../../includes/azure-stack-edge-applies-to-all-sku.md)]-->
+
+To deploy VMs on your Azure Stack Edge Pro device, you first need custom VM images. This article describes the steps required to create Linux or Windows custom VM images that you can use to deploy VMs on your Azure Stack Edge Pro device.
+
+## VM image workflow
+
+The workflow requires you to create a virtual machine in Azure, customize the VM, generalize it, and then download the VHD corresponding to that VM. This generalized VHD is uploaded to Azure Stack Edge Pro, a managed disk is created from that VHD, an image is created from the managed disk, and finally VMs are created from that image.
+
+For more information, go to [Deploy a VM on your Azure Stack Edge Pro device using Azure PowerShell](azure-stack-edge-gpu-deploy-virtual-machine-powershell.md).
+
+## Create a Windows custom VM image
+
+Do the following steps to create a Windows VM image.
+
+1. Create a Windows Virtual Machine. For more information, go to [Tutorial: Create and manage Windows VMs with Azure PowerShell](../virtual-machines/windows/tutorial-manage-vm.md).
+
+2. Download an existing OS disk.
+
+ - Follow the steps in [Download a VHD](../virtual-machines/windows/download-vhd.md).
+
+ - Use the following `sysprep` command instead of what is described in the preceding procedure.
+
+ `c:\windows\system32\sysprep\sysprep.exe /oobe /generalize /shutdown /mode:vm`
+
+ You can also refer to [Sysprep (system preparation) overview](/windows-hardware/manufacture/desktop/sysprep--system-preparation--overview).
+
+Now use this VHD to create and deploy a VM on your Azure Stack Edge Pro device.
+
+## Create a Linux custom VM image
+
+Do the following steps to create a Linux VM image.
+
+1. Create a Linux Virtual Machine. For more information, go to [Tutorial: Create and manage Linux VMs with the Azure CLI](../virtual-machines/linux/tutorial-manage-vm.md).
+
+1. Deprovision the VM. Use the Azure VM agent to delete machine-specific files and data. Use the `waagent` command with the `-deprovision+user` parameter on your source Linux VM. For more information, see [Understanding and using Azure Linux Agent](../virtual-machines/extensions/agent-linux.md).
+
+ 1. Connect to your Linux VM with an SSH client.
+ 2. In the SSH window, enter the following command:
+
+ ```bash
+ sudo waagent -deprovision+user
+ ```
+ > [!NOTE]
+ > Only run this command on a VM that you'll capture as an image. This command does not guarantee that the image is cleared of all sensitive information or is suitable for redistribution. The `+user` parameter also removes the last provisioned user account. To keep user account credentials in the VM, use only `-deprovision`.
+
+ 3. Enter **y** to continue. You can add the `-force` parameter to avoid this confirmation step.
+ 4. After the command completes, enter **exit** to close the SSH client. The VM will still be running at this point.
+
+1. [Download existing OS disk](../virtual-machines/linux/download-vhd.md).
+
+Now use this VHD to create and deploy a VM on your Azure Stack Edge Pro device. You can use the following two Azure Marketplace images to create Linux custom images:
+
+|Item name |Description |Publisher |
+|---------|---------|---------|
+|[Ubuntu Server](https://azuremarketplace.microsoft.com/marketplace/apps/canonical.ubuntuserver) |Ubuntu Server is the world's most popular Linux for cloud environments.|Canonical|
+|[Debian 8 "Jessie"](https://azuremarketplace.microsoft.com/marketplace/apps/credativ.debian) |Debian GNU/Linux is one of the most popular Linux distributions. |credativ|
+
+For a full list of Azure Marketplace images that could work (presently not tested), go to [Azure Marketplace items available for Azure Stack Hub](/azure-stack/operator/azure-stack-marketplace-azure-items?view=azs-1910).
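+
+If you want to check which SKUs of these images are currently available before you build a source VM, you can query the Marketplace with an AzureRM cmdlet. This is a minimal sketch, assuming a signed-in AzureRM session in public Azure; the region is a placeholder:
+
+```powershell
+# List the Ubuntu Server SKUs that Canonical publishes in a given region
+Get-AzureRmVMImageSku -Location "eastus" -PublisherName "Canonical" -Offer "UbuntuServer"
+```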
+
+## Next steps
+
+[Deploy VMs on your Azure Stack Edge Pro device](azure-stack-edge-gpu-deploy-virtual-machine-powershell.md).
\ No newline at end of file
databox-online https://docs.microsoft.com/en-us/azure/databox-online/azure-stack-edge-gpu-deploy-gpu-virtual-machine https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-deploy-gpu-virtual-machine.md new file mode 100644
@@ -0,0 +1,641 @@
+---
+title: Overview and deployment of GPU VMs on your Azure Stack Edge Pro device
+description: Describes how to create and manage GPU virtual machines (VMs) on an Azure Stack Edge Pro device using templates.
+services: databox
+author: alkohli
+
+ms.service: databox
+ms.subservice: edge
+ms.topic: how-to
+ms.date: 12/21/2020
+ms.author: alkohli
+#Customer intent: As an IT admin, I need to understand how to create and manage virtual machines (VMs) on my Azure Stack Edge Pro device using APIs so that I can efficiently manage my VMs.
+---
+
+# GPU VMs for your Azure Stack Edge Pro device
+
+This article provides an overview of GPU virtual machines (VMs) on your Azure Stack Edge Pro device. It describes how to create a GPU VM and then install the GPU driver extension, which installs the appropriate Nvidia drivers. Use Azure Resource Manager templates to create the GPU VM and install the GPU driver extension.
+
+This article applies to Azure Stack Edge Pro GPU and Azure Stack Edge Pro R devices.
+
+## About GPU VMs
+
+Your Azure Stack Edge Pro device is equipped with one or two Nvidia Tesla T4 GPUs. To deploy GPU-accelerated VM workloads on these devices, use GPU-optimized VM sizes. For example, use the NC T4 v3-series to deploy inference workloads featuring T4 GPUs.
+
+For more information, see [NC T4 v3-series VMs](../virtual-machines/nct4-v3-series.md).
+
+## Supported OS and GPU drivers
+
+To take advantage of the GPU capabilities of Azure N-series VMs, Nvidia GPU drivers must be installed.
+
+The Nvidia GPU driver extension installs appropriate Nvidia CUDA or GRID drivers. You can install or manage the extension using the Azure Resource Manager templates.
+
+### Supported OS for GPU extension for Windows
+
+This extension supports the following operating systems (OSs). Other versions may work but have not been tested in-house on GPU VMs running on Azure Stack Edge Pro devices.
+
+| Distribution | Version |
+|---|---|
+| Windows Server 2019 | Core |
+| Windows Server 2016 | Core |
+
+### Supported OS for GPU extension for Linux
+
+This extension supports the following OS distros, depending on driver support for the specific OS version. Other versions may work but have not been tested in-house on GPU VMs running on Azure Stack Edge Pro devices.
+
+| Distribution | Version |
+|---|---|
+| Ubuntu | 18.04 LTS |
+| Red Hat Enterprise Linux | 7.4 |
+
+## GPU VMs and Kubernetes
+
+Before you deploy GPU VMs on your device, review the following considerations if Kubernetes is configured on the device.
+
+#### For 1-GPU device
+
+- **Create a GPU VM followed by Kubernetes configuration on your device**: In this scenario, the GPU VM creation and Kubernetes configuration will both be successful. Kubernetes will not have access to the GPU in this case.
+
+- **Configure Kubernetes on your device followed by creation of a GPU VM**: In this scenario, Kubernetes will claim the GPU on your device, and the VM creation will fail as there are no GPU resources available.
+
+#### For 2-GPU device
+
+- **Create a GPU VM followed by Kubernetes configuration on your device**: In this scenario, the GPU VM that you create will claim one GPU on your device. The Kubernetes configuration will also be successful and will claim the remaining GPU.
+
+- **Create two GPU VMs followed by Kubernetes configuration on your device**: In this scenario, the two GPU VMs will claim the two GPUs on the device, and Kubernetes is then configured successfully with no GPUs.
+
+- **Configure Kubernetes on your device followed by creation of a GPU VM**: In this scenario, Kubernetes will claim both the GPUs on your device, and the VM creation will fail as no GPU resources are available.
+
+If you have GPU VMs running on your device and Kubernetes is also configured, then anytime a VM is deallocated (when you stop or remove it using `Stop-AzureRmVM` or `Remove-AzureRmVM`), there is a risk that the Kubernetes cluster will claim all the GPUs available on the device. In that case, you will not be able to restart the GPU VMs deployed on your device or create new GPU VMs.
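+
+For reference, the deallocation call mentioned above looks like this; the resource group and VM names are placeholders matching later samples in this article:
+
+```powershell
+# Deallocates VM2. On a device that also runs Kubernetes, the freed GPU may then be claimed by the cluster.
+Stop-AzureRmVM -ResourceGroupName "myasegpuvm1" -Name "VM2"
+```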
+
+## Create GPU VMs
+
+Follow these steps when deploying GPU VMs on your device:
+
+1. Identify if your device will also be running Kubernetes. If the device will run Kubernetes, then you'll need to create the GPU VM first and then configure Kubernetes. If Kubernetes is configured first, then it will claim all the available GPU resources and the GPU VM creation will fail.
+
+1. [Download the VM templates and parameters files](https://aka.ms/ase-vm-templates) to your client machine. Unzip it into a directory you'll use as a working directory.
+
+1. To create GPU VMs, follow all the steps in the [Deploy VM on your Azure Stack Edge Pro using templates](azure-stack-edge-gpu-deploy-virtual-machine-templates.md) except for the following differences:
+
+    1. While configuring the compute network, enable the internet-connected port for compute. This allows you to download the GPU drivers required for the GPU extensions on your GPU VMs.
+
+ Here is an example where Port 2 was connected to the internet and was used to enable the compute network. If you've identified that Kubernetes is not needed in the earlier step, you can skip the Kubernetes node IP and external service IP assignment.
+
+ ![Enable compute settings on port connected to internet](media/azure-stack-edge-gpu-deploy-gpu-virtual-machine/enable-compute-network-1.png)
+
+
+    1. Create a VM using the templates. When specifying GPU VM sizes, make sure to use the NCasT4-v3-series in `CreateVM.parameters.json`, as these sizes are supported for GPU VMs. For more information, see [Supported VM sizes for GPU VMs](azure-stack-edge-gpu-virtual-machine-sizes.md#ncast4_v3-series-preview).
+
+ ```json
+ "vmSize": {
+ "value": "Standard_NC4as_T4_v3"
+ },
+ ```
+
+ 1. Once the GPU VM is successfully created, you can view this VM in the list of virtual machines in your Azure Stack Edge resource in the Azure portal.
+
+ ![GPU VM in list of virtual machines in Azure portal](media/azure-stack-edge-gpu-deploy-gpu-virtual-machine/list-virtual-machine-1.png)
+
+1. Select the VM and drill down to the details. Copy the IP allocated to the VM.
+
+ ![IP allocated to GPU VM in Azure portal](media/azure-stack-edge-gpu-deploy-gpu-virtual-machine/get-ip-gpu-virtual-machine-1.png)
+
+1. After the VM is created, deploy the GPU extension using the extension template. For Linux VMs, see [Install GPU extension for Linux](#gpu-extension-for-linux); for Windows VMs, see [Install GPU extension for Windows](#gpu-extension-for-windows).
+
+1. To verify GPU extension install, connect to the GPU VM:
+ 1. If using a Windows VM, follow the steps in [Connect to a Windows VM](azure-stack-edge-gpu-deploy-virtual-machine-powershell.md#connect-to-windows-vm). [Verify the installation](#verify-windows-driver-installation).
+ 1. If using a Linux VM, follow the steps in [Connect to a Linux VM](azure-stack-edge-gpu-deploy-virtual-machine-powershell.md#connect-to-linux-vm). [Verify the installation](#verify-linux-driver-installation).
+
+1. If needed, you can switch the compute network back to its previous settings.
+
+> [!NOTE]
+> When updating your device software version from 2012 to later, you will need to manually stop the GPU VMs.
+
+## Install GPU extension
+
+Depending on the operating system of your VM, you can install the GPU extension for Windows or for Linux.
+
+> [!NOTE]
+> Before you install the GPU extension, make sure that the port enabled for the compute network on your device is connected to the internet and has access. The GPU drivers are downloaded over this internet connection.
+
+### GPU extension for Windows
+
+To deploy Nvidia GPU drivers for an existing VM, edit the `addGPUExtWindowsVM.parameters.json` parameters file and then deploy the template `addGPUextensiontoVM.json`.
+
+#### Edit parameters file
+
+The file `addGPUExtWindowsVM.parameters.json` takes the following parameters:
+
+```json
+"parameters": {
+ "vmName": {
+ "value": "<name of the VM>"
+ },
+ "extensionName": {
+ "value": "<name for the extension. Example: windowsGpu>"
+ },
+ "publisher": {
+ "value": "Microsoft.HpcCompute"
+ },
+ "type": {
+ "value": "NvidiaGpuDriverWindows"
+ },
+ "typeHandlerVersion": {
+ "value": "1.3"
+ },
+ "settings": {
+ "value": {
+ "DriverURL" : "http://us.download.nvidia.com/tesla/442.50/442.50-tesla-desktop-winserver-2019-2016-international.exe",
+ "DriverCertificateUrl" : "https://go.microsoft.com/fwlink/?linkid=871664",
+ "DriverType":"CUDA"
+ }
+ }
+ }
+```
+
+Here is sample output from the deployment that created the VM used in this article:
+
+```powershell
+PS C:\WINDOWS\system32> $templateFile = "C:\12-09-2020\CreateVM\CreateVM.json"
+PS C:\WINDOWS\system32> $templateParameterFile = "C:\12-09-2020\CreateVM\CreateVM.parameters.json"
+PS C:\WINDOWS\system32> $RGName = "myasegpuvm1"
+PS C:\WINDOWS\system32> New-AzureRmResourceGroupDeployment -ResourceGroupName $RGName -TemplateFile $templateFile -TemplateParameterFile $templateParameterFile -Name "deployment2"
+
+DeploymentName : deployment2
+ResourceGroupName : myasegpuvm1
+ProvisioningState : Succeeded
+Timestamp : 12/16/2020 12:02:56 AM
+Mode : Incremental
+TemplateLink :
+Parameters :
+ Name Type Value
+ =============== ========================= ==========
+ vmName String VM2
+ adminUsername String Administrator
+ password String Password1
+ imageName String myasewindowsimg
+ vmSize String Standard_NC4as_T4_v3
+ vnetName String ASEVNET
+ vnetRG String aserg
+ subnetName String ASEVNETsubNet
+ nicName String nic6
+ ipConfigName String ipconfig6
+ privateIPAddress String
+
+Outputs :
+DeploymentDebugLogLevel :
+PS C:\WINDOWS\system32>
+```
+#### Deploy template
+
+Deploy the template `addGPUextensiontoVM.json`. This template deploys the extension to an existing VM. Run the following command:
+
+```powershell
+$templateFile = "<Path to addGPUextensiontoVM.json>"
+$templateParameterFile = "<Path to addGPUExtWindowsVM.parameters.json>"
+$RGName = "<Name of your resource group>"
+New-AzureRmResourceGroupDeployment -ResourceGroupName $RGName -TemplateFile $templateFile -TemplateParameterFile $templateParameterFile -Name "<Name for your deployment>"
+```
+> [!NOTE]
+> The extension deployment is a long-running job and takes about 10 minutes to complete.
+
+Here is a sample output:
+
+```powershell
+PS C:\WINDOWS\system32> "C:\12-09-2020\ExtensionTemplates\addGPUextensiontoVM.json"
+C:\12-09-2020\ExtensionTemplates\addGPUextensiontoVM.json
+PS C:\WINDOWS\system32> $templateFile = "C:\12-09-2020\ExtensionTemplates\addGPUextensiontoVM.json"
+PS C:\WINDOWS\system32> $templateParameterFile = "C:\12-09-2020\ExtensionTemplates\addGPUExtWindowsVM.parameters.json"
+PS C:\WINDOWS\system32> $RGName = "myasegpuvm1"
+PS C:\WINDOWS\system32> New-AzureRmResourceGroupDeployment -ResourceGroupName $RGName -TemplateFile $templateFile -TemplateParameterFile $templateParameterFile -Name "deployment3"
+
+DeploymentName : deployment3
+ResourceGroupName : myasegpuvm1
+ProvisioningState : Succeeded
+Timestamp : 12/16/2020 12:18:50 AM
+Mode : Incremental
+TemplateLink :
+Parameters :
+ Name Type Value
+ =============== ========================= ==========
+ vmName String VM2
+ extensionName String windowsgpuext
+ publisher String Microsoft.HpcCompute
+ type String NvidiaGpuDriverWindows
+ typeHandlerVersion String 1.3
+ settings Object {
+ "DriverURL": "http://us.download.nvidia.com/tesla/442.50/442.50-tesla-desktop-winserver-2019-2016-international.exe",
+ "DriverCertificateUrl": "https://go.microsoft.com/fwlink/?linkid=871664",
+ "DriverType": "CUDA"
+ }
+
+Outputs :
+DeploymentDebugLogLevel :
+PS C:\WINDOWS\system32>
+```
+#### Track deployment
+
+To check the deployment state of extensions for a given VM, run the following command:
+
+```powershell
+Get-AzureRmVMExtension -ResourceGroupName <Name of resource group> -VMName <Name of VM> -Name <Name of the extension>
+```
+Here is a sample output:
+
+```powershell
+PS C:\WINDOWS\system32> Get-AzureRmVMExtension -ResourceGroupName myasegpuvm1 -VMName VM2 -Name windowsgpuext
+
+ResourceGroupName : myasegpuvm1
+VMName : VM2
+Name : windowsgpuext
+Location : dbelocal
+Etag : null
+Publisher : Microsoft.HpcCompute
+ExtensionType : NvidiaGpuDriverWindows
+TypeHandlerVersion : 1.3
+Id : /subscriptions/947b3cfd-7a1b-4a90-7cc5-e52caf221332/resourceGroups/myasegpuvm1/providers/Microsoft.Compute/virtualMachines/VM2/extensions/windowsgpuext
+PublicSettings : {
+ "DriverURL": "http://us.download.nvidia.com/tesla/442.50/442.50-tesla-desktop-winserver-2019-2016-international.exe",
+ "DriverCertificateUrl": "https://go.microsoft.com/fwlink/?linkid=871664",
+ "DriverType": "CUDA"
+ }
+ProtectedSettings :
+ProvisioningState : Creating
+Statuses :
+SubStatuses :
+AutoUpgradeMinorVersion : True
+ForceUpdateTag :
+
+PS C:\WINDOWS\system32>
+```
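+
+Rather than rerunning the command by hand, you could poll until the extension leaves the `Creating` state. Here is a small sketch that reuses the names from the preceding sample:
+
+```powershell
+# Poll the extension state every 60 seconds until provisioning completes
+do {
+    $ext = Get-AzureRmVMExtension -ResourceGroupName "myasegpuvm1" -VMName "VM2" -Name "windowsgpuext"
+    Write-Host "ProvisioningState: $($ext.ProvisioningState)"
+    Start-Sleep -Seconds 60
+} while ($ext.ProvisioningState -eq "Creating")
+```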
+
+Extension execution output is logged to the file `C:\Packages\Plugins\Microsoft.HpcCompute.NvidiaGpuDriverWindows\1.3.0.0\Status`. Refer to this file to track the status of the installation.
+
+A successful install is indicated by a `message` of `Enable Extension` and a `status` of `success`.
+
+```json
+"status": {
+ "formattedMessage": {
+ "message": "Enable Extension",
+ "lang": "en"
+ },
+ "name": "NvidiaGpuDriverWindows",
+ "status": "success",
+```
+
+#### Verify Windows driver installation
+
+Sign in to the VM and run the `nvidia-smi` command-line utility installed with the driver. The `nvidia-smi.exe` file is located at `C:\Program Files\NVIDIA Corporation\NVSMI\nvidia-smi.exe`. If you do not see the file, the driver installation may still be running in the background. Wait 10 minutes and check again.
+
+If the driver is installed, you see an output similar to the following sample:
+
+```powershell
+PS C:\Users\Administrator> cd "C:\Program Files\NVIDIA Corporation\NVSMI"
+PS C:\Program Files\NVIDIA Corporation\NVSMI> ls
+
+ Directory: C:\Program Files\NVIDIA Corporation\NVSMI
+
+Mode LastWriteTime Length Name
+---- ------------- ------ ----
+-a---- 2/26/2020 12:00 PM 849640 MCU.exe
+-a---- 2/26/2020 12:00 PM 443104 nvdebugdump.exe
+-a---- 2/25/2020 2:06 AM 81823 nvidia-smi.1.pdf
+-a---- 2/26/2020 12:01 PM 566880 nvidia-smi.exe
+-a---- 2/26/2020 12:01 PM 991344 nvml.dll
+
+PS C:\Program Files\NVIDIA Corporation\NVSMI> .\nvidia-smi.exe
+Wed Dec 16 00:35:51 2020
++-----------------------------------------------------------------------------+
+| NVIDIA-SMI 442.50 Driver Version: 442.50 CUDA Version: 10.2 |
+|-------------------------------+----------------------+----------------------+
+| GPU Name TCC/WDDM | Bus-Id Disp.A | Volatile Uncorr. ECC |
+| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
+|===============================+======================+======================|
+| 0 Tesla T4 TCC | 0000503C:00:00.0 Off | 0 |
+| N/A 35C P8 11W / 70W | 8MiB / 15205MiB | 0% Default |
++-------------------------------+----------------------+----------------------++++-----------------------------------------------------------------------------+
+| Processes: GPU Memory |
+| GPU PID Type Process name Usage |
+|=============================================================================|
+| No running processes found |
++-----------------------------------------------------------------------------+
+PS C:\Program Files\NVIDIA Corporation\NVSMI>
+```
+
+For more information, see [Nvidia GPU driver extension for Windows](../virtual-machines/extensions/hpccompute-gpu-windows.md).
+
+### GPU extension for Linux
+
+To deploy Nvidia GPU drivers for an existing VM, edit the `addGPUExtLinuxVM.parameters.json` parameters file and then deploy the template `addGPUextensiontoVM.json`.
+
+#### Edit parameters file
+
+If using Ubuntu, the `addGPUExtLinuxVM.parameters.json` file takes the following parameters:
+
+```json
+"parameters": {
+ "vmName": {
+ "value": "<name of the VM>"
+ },
+ "extensionName": {
+ "value": "<name for the extension. Example: linuxGpu>"
+ },
+ "publisher": {
+ "value": "Microsoft.HpcCompute"
+ },
+ "type": {
+ "value": "NvidiaGpuDriverLinux"
+ },
+ "typeHandlerVersion": {
+ "value": "1.3"
+ },
+ "settings": {
+ "value": {
+ "DRIVER_URL": "https://go.microsoft.com/fwlink/?linkid=874271",
+ "PUBKEY_URL": "http://download.microsoft.com/download/F/F/A/FFAC979D-AD9C-4684-A6CE-C92BB9372A3B/7fa2af80.pub",
+ "CUDA_ver": "10.0.130",
+ "InstallCUDA": "true"
+ }
+ }
+ }
+```
+If using Red Hat Enterprise Linux (RHEL), the file takes the following parameters:
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "vmName": {
+ "value": "<name of the VM>"
+ },
+ "extensionName": {
+ "value": "<name for the extension. Example: linuxGpu>"
+ },
+ "publisher": {
+ "value": "Microsoft.HpcCompute"
+ },
+ "type": {
+ "value": "NvidiaGpuDriverLinux"
+ },
+ "typeHandlerVersion": {
+ "value": "1.3"
+ },
+ "settings": {
+ "value": {
+ "isCustomInstall":true,
+ "DRIVER_URL":"https://go.microsoft.com/fwlink/?linkid=874273",
+ "CUDA_ver":"10.0.130",
+ "PUBKEY_URL":"http://download.microsoft.com/download/F/F/A/FFAC979D-AD9C-4684-A6CE-C92BB9372A3B/7fa2af80.pub",
+ "DKMS_URL":"https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm",
+ "LIS_URL":"https://aka.ms/lis",
+ "LIS_RHEL_ver":"3.10.0-1062.9.1.el7"
+ }
+ }
+ }
+}
+```
+
+Here is a sample Ubuntu parameter file that was used in this article:
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "vmName": {
+ "value": "VM1"
+ },
+ "extensionName": {
+ "value": "gpuLinux"
+ },
+ "publisher": {
+ "value": "Microsoft.HpcCompute"
+ },
+ "type": {
+ "value": "NvidiaGpuDriverLinux"
+ },
+ "typeHandlerVersion": {
+ "value": "1.3"
+ },
+ "settings": {
+ "value": {
+ "DRIVER_URL": "https://go.microsoft.com/fwlink/?linkid=874271",
+ "PUBKEY_URL": "http://download.microsoft.com/download/F/F/A/FFAC979D-AD9C-4684-A6CE-C92BB9372A3B/7fa2af80.pub",
+ "CUDA_ver": "10.0.130",
+ "InstallCUDA": "true"
+ }
+ }
+ }
+}
+```
+
+#### Deploy template
+
+Deploy the template `addGPUextensiontoVM.json`. This template deploys the extension to an existing VM. Run the following command:
+
+```powershell
+$templateFile = "Path to addGPUextensiontoVM.json"
+$templateParameterFile = "Path to addGPUExtLinuxVM.parameters.json"
+$RGName = "<Name of your resource group>"
+New-AzureRmResourceGroupDeployment -ResourceGroupName $RGName -TemplateFile $templateFile -TemplateParameterFile $templateParameterFile -Name "<Name for your deployment>"
+```
+
+> [!NOTE]
+> The extension deployment is a long-running job and takes about 10 minutes to complete.
+
+Here is a sample output:
+
+```powershell
+Copyright (C) Microsoft Corporation. All rights reserved.
+Try the new cross-platform PowerShell https://aka.ms/pscore6
+
+PS C:\WINDOWS\system32> $templateFile = "C:\12-09-2020\ExtensionTemplates\addGPUextensiontoVM.json"
+PS C:\WINDOWS\system32> $templateParameterFile = "C:\12-09-2020\ExtensionTemplates\addGPUExtLinuxVM.parameters.json"
+PS C:\WINDOWS\system32> $RGName = "rg2"
+PS C:\WINDOWS\system32> New-AzureRmResourceGroupDeployment -ResourceGroupName $RGName -TemplateFile $templateFile -TemplateParameterFile $templateParameterFile -Name "delpoyment7"
+
+DeploymentName : delpoyment7
+ResourceGroupName : rg2
+ProvisioningState : Succeeded
+Timestamp : 12/10/2020 10:43:23 PM
+Mode : Incremental
+TemplateLink :
+Parameters :
+ Name Type Value
+ =============== ========================= ==========
+ vmName String VM1
+ extensionName String gpuLinux
+ publisher String Microsoft.HpcCompute
+ type String NvidiaGpuDriverLinux
+ typeHandlerVersion String 1.3
+ settings Object {
+ "DRIVER_URL": "https://go.microsoft.com/fwlink/?linkid=874271",
+ "PUBKEY_URL":
+ "http://download.microsoft.com/download/F/F/A/FFAC979D-AD9C-4684-A6CE-C92BB9372A3B/7fa2af80.pub",
+ "CUDA_ver": "10.0.130",
+ "InstallCUDA": "true"
+ }
+
+Outputs :
+DeploymentDebugLogLevel :
+PS C:\WINDOWS\system32>
+```
+
+#### Track deployment status
+
+Template deployment is a long-running job. To check the deployment state of extensions for a given VM, open another PowerShell session (run as administrator). Run the following command:
+
+```powershell
+Get-AzureRmVMExtension -ResourceGroupName myResourceGroup -VMName <VM Name> -Name <Extension Name>
+```
+Here is a sample output:
+
+```powershell
+Copyright (C) Microsoft Corporation. All rights reserved.
+Try the new cross-platform PowerShell https://aka.ms/pscore6
+
+PS C:\WINDOWS\system32> Get-AzureRmVMExtension -ResourceGroupName rg2 -VMName VM1 -Name gpulinux
+
+ResourceGroupName : rg2
+VMName : VM1
+Name : gpuLinux
+Location : dbelocal
+Etag : null
+Publisher : Microsoft.HpcCompute
+ExtensionType : NvidiaGpuDriverLinux
+TypeHandlerVersion : 1.3
+Id : /subscriptions/947b3cfd-7a1b-4a90-7cc5-e52caf221332/resourceGroups/rg2/providers/Microsoft.Compute/virtualMachines/VM1/extensions/gpuLinux
+PublicSettings : {
+ "DRIVER_URL": "https://go.microsoft.com/fwlink/?linkid=874271",
+ "PUBKEY_URL": "http://download.microsoft.com/download/F/F/A/FFAC979D-AD9C-4684-A6CE-C92BB9372A3B/7fa2af80.pub",
+ "CUDA_ver": "10.0.130",
+ "InstallCUDA": "true"
+ }
+ProtectedSettings :
+ProvisioningState : Creating
+Statuses :
+SubStatuses :
+AutoUpgradeMinorVersion : True
+ForceUpdateTag :
+
+PS C:\WINDOWS\system32>
+```
+
+> [!NOTE]
+> When the deployment is complete, the `ProvisioningState` changes to `Succeeded`.
+
+The extension execution output is logged to the following file: `/var/log/azure/nvidia-vmext-status`.
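+
+To watch the installation as it progresses, you could tail this log over SSH from your client. This sketch assumes the OpenSSH client on Windows and the sample VM's user name and IP address used later in this article:
+
+```powershell
+# Stream the extension log from the Linux VM; press Ctrl+C to stop
+ssh Administrator@10.57.50.60 "tail -f /var/log/azure/nvidia-vmext-status"
+```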
+
+#### Verify Linux driver installation
+
+Follow these steps to verify the driver installation:
+
+1. Connect to the GPU VM. Follow the instructions in [Connect to a Linux VM](azure-stack-edge-gpu-deploy-virtual-machine-powershell.md#connect-to-linux-vm).
+
+ Here is a sample output:
+
+ ```powershell
+ PS C:\WINDOWS\system32> ssh -l Administrator 10.57.50.60
+ Administrator@10.57.50.60's password:
+ Welcome to Ubuntu 18.04.4 LTS (GNU/Linux 5.0.0-1031-azure x86_64)
+ * Documentation: https://help.ubuntu.com
+ * Management: https://landscape.canonical.com
+ * Support: https://ubuntu.com/advantage
+ System information as of Thu Dec 10 22:57:01 UTC 2020
+
+ System load: 0.0 Processes: 133
+ Usage of /: 24.8% of 28.90GB Users logged in: 0
+ Memory usage: 2% IP address for eth0: 10.57.50.60
+ Swap usage: 0%
+
+ 249 packages can be updated.
+ 140 updates are security updates.
+
+ Welcome to Ubuntu 18.04.4 LTS (GNU/Linux 5.0.0-1031-azure x86_64)
+ * Documentation: https://help.ubuntu.com
+ * Management: https://landscape.canonical.com
+ * Support: https://ubuntu.com/advantage
+ System information as of Thu Dec 10 22:57:01 UTC 2020
+ System load: 0.0 Processes: 133
+ Usage of /: 24.8% of 28.90GB Users logged in: 0
+ Memory usage: 2% IP address for eth0: 10.57.50.60
+ Swap usage: 0%
+
+ 249 packages can be updated.
+ 140 updates are security updates.
+
+ New release '20.04.1 LTS' available.
+ Run 'do-release-upgrade' to upgrade to it.
+
+ *** System restart required ***
+ Last login: Thu Dec 10 21:49:29 2020 from 10.90.24.23
+ To run a command as administrator (user "root"), use "sudo <command>".
+ See "man sudo_root" for details.
+
+ Administrator@VM1:~$
+ ```
+2. Run the `nvidia-smi` command-line utility installed with the driver. If the driver is successfully installed, you can run the utility and see output similar to the following:
+
+ ```powershell
+ Administrator@VM1:~$ nvidia-smi
+ Thu Dec 10 22:58:46 2020
+ +-----------------------------------------------------------------------------+
+ | NVIDIA-SMI 455.45.01 Driver Version: 455.45.01 CUDA Version: 11.1 |
+ |-------------------------------+----------------------+----------------------+
+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
+ | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
+ | | | MIG M. |
+ |===============================+======================+======================|
+ | 0 Tesla T4 Off | 0000941F:00:00.0 Off | 0 |
+ | N/A 48C P0 27W / 70W | 0MiB / 15109MiB | 5% Default |
+ | | | N/A |
+ +-------------------------------+----------------------+----------------------+
+
+ +-----------------------------------------------------------------------------+
+ | Processes: |
+ | GPU GI CI PID Type Process name GPU Memory |
+ | ID ID Usage |
+ |=============================================================================|
+ | No running processes found |
+ +-----------------------------------------------------------------------------+
+ Administrator@VM1:~$
+ ```
+
+For more information, see [Nvidia GPU driver extension for Linux](../virtual-machines/extensions/hpccompute-gpu-linux.md).
+
+## Remove GPU extension
+
+To remove the GPU extension, use the following command:
+
+`Remove-AzureRmVMExtension -ResourceGroupName <Resource group name> -VMName <VM name> -Name <Extension name>`
+
+Here is a sample output:
+
+```powershell
+PS C:\azure-stack-edge-deploy-vms> Remove-AzureRmVMExtension -ResourceGroupName rgl -VMName WindowsVM -Name windowsgpuext
+Virtual machine extension removal operation
+This cmdlet will remove the specified virtual machine extension. Do you want to continue? [Y] Yes [N] No [S] Suspend [?] Help (default is "Y"): y
+RequestId IsSuccessStatusCode StatusCode ReasonPhrase
+--------- ------------------- ---------- ------------
+ True OK OK
+```
+
+## Next steps
+
+[Azure Resource Manager cmdlets](/powershell/module/azurerm.resources/?view=azurermps-6.13.0)
\ No newline at end of file
databox-online https://docs.microsoft.com/en-us/azure/databox-online/azure-stack-edge-gpu-deploy-virtual-machine-custom-script-extension https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-deploy-virtual-machine-custom-script-extension.md new file mode 100644
@@ -0,0 +1,396 @@
+---
+title: Use of Custom Script Extensions for VMs on your Azure Stack Edge Pro device
+description: Describes how to install custom script extensions on virtual machines (VMs) running on an Azure Stack Edge Pro device using templates.
+services: databox
+author: alkohli
+
+ms.service: databox
+ms.subservice: edge
+ms.topic: how-to
+ms.date: 12/21/2020
+ms.author: alkohli
+#Customer intent: As an IT admin, I need to understand how to create and manage virtual machines (VMs) on my Azure Stack Edge Pro device using APIs so that I can efficiently manage my VMs.
+---
+
+# Deploy Custom Script Extension on VMs running on your Azure Stack Edge Pro device
+
+The Custom Script Extension downloads and runs scripts or commands on virtual machines running on your Azure Stack Edge Pro devices. This article details how to install and run the Custom Script Extension by using an Azure Resource Manager template.
+
+This article applies to Azure Stack Edge Pro GPU, Azure Stack Edge Pro R, and Azure Stack Edge Mini R devices.
+
+## About custom script extension
+
+The Custom Script Extension is useful for post-deployment configuration, software installation, or any other configuration/management task. You can download scripts from Azure Storage or another accessible internet location, or you can provide scripts or commands to the extension runtime.
+
+The Custom Script Extension integrates with Azure Resource Manager templates. You can also run it by using Azure CLI, PowerShell, or the Azure Virtual Machines REST API.
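+
+For example, outside of templates, the extension can be set on a Windows VM with a single AzureRM cmdlet. This article uses the template approach; the following is only a minimal sketch in which the resource names, location, and script URI are placeholders:
+
+```powershell
+# Minimal sketch: download a script from a URI and run it on a Windows VM.
+# All names and the script URI below are hypothetical.
+Set-AzureRmVMCustomScriptExtension -ResourceGroupName "myResourceGroup" `
+    -VMName "myVM" -Location "dbelocal" -Name "CustomScriptExtension" `
+    -TypeHandlerVersion "1.10" `
+    -FileUri "https://example.com/scripts/configure.ps1" `
+    -Run "configure.ps1"
+```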
+
+## OS for Custom Script Extension
+
+#### Supported OS for Custom Script Extension on Windows
+
+The Custom Script Extension for Windows will run on the following OSs. Other versions may work but have not been tested in-house on VMs running on Azure Stack Edge Pro devices.
+
+| Distribution | Version |
+|---|---|
+| Windows Server 2019 | Core |
+| Windows Server 2016 | Core |
+
+#### Supported OS for Custom Script Extension on Linux
+
+The Custom Script Extension for Linux will run on the following OSs. Other versions may work but have not been tested in-house on VMs running on Azure Stack Edge Pro devices.
+
+| Distribution | Version |
+|---|---|
+| Ubuntu | 18.04 LTS |
+| Red Hat Enterprise Linux | 7.4 |
+
+<!--### Script location
+
+Instead of the scripts, in this article, we pass a command to execute via the Custom Script Extension.
+
+### Internet Connectivity
+
+To download a script externally such as from GitHub or Azure Storage, make sure that the port on which you enable compute network, is connected to the internet.
+
+If your script is on a local server, then you may still need additional firewall and Network Security Group ports need to be opened.
+
+> [!NOTE]
+> Before you install the Custom Script extension, make sure that the port enabled for compute network on your device is connected to Internet and has access. -->
+
+## Prerequisites
+
+1. [Download the VM templates and parameters files](https://aka.ms/ase-vm-templates) to your client machine. Unzip it into a directory you'll use as a working directory.
+
+1. You should have a VM created and deployed on your device. To create VMs, follow all the steps in the [Deploy VM on your Azure Stack Edge Pro using templates](azure-stack-edge-gpu-deploy-virtual-machine-templates.md).
+
+    If you need to download a script externally, such as from GitHub or Azure Storage, then while configuring the compute network, enable the internet-connected port for compute. This allows you to download the script.
+
+ Here is an example where Port 2 was connected to the internet and was used to enable the compute network. If you've identified that Kubernetes is not needed in the earlier step, you can skip the Kubernetes node IP and external service IP assignment.
+
+ ![Enable compute settings on port connected to internet](media/azure-stack-edge-gpu-deploy-gpu-virtual-machine/enable-compute-network-1.png)
+
+## Install Custom Script Extension
+
+Depending on the operating system of your VM, you can install the Custom Script Extension for Windows or for Linux.
+
+
+### Custom Script Extension for Windows
+
+To deploy the Custom Script Extension for Windows for a VM running on your device, edit the `addCSExtWindowsVM.parameters.json` parameters file and then deploy the template `addCSextensiontoVM.json`.
+
+#### Edit parameters file
+
+The file `addCSExtWindowsVM.parameters.json` takes the following parameters:
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "vmName": {
+ "value": "<Name of VM>"
+ },
+ "extensionName": {
+ "value": "<Name of extension>"
+ },
+ "publisher": {
+ "value": "Microsoft.Compute"
+ },
+ "type": {
+ "value": "CustomScriptExtension"
+ },
+ "typeHandlerVersion": {
+ "value": "1.10"
+ },
+ "settings": {
+ "value": {
+ "commandToExecute" : "<Command to execute>"
+ }
+ }
+ }
+}
+```
+Provide your VM name, a name for the extension, and the command that you want to execute.
+
+Here is a sample parameter file that was used in this article.
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "vmName": {
+ "value": "VM5"
+ },
+ "extensionName": {
+ "value": "CustomScriptExtension"
+ },
+ "publisher": {
+ "value": "Microsoft.Compute"
+ },
+ "type": {
+ "value": "CustomScriptExtension"
+ },
+ "typeHandlerVersion": {
+ "value": "1.10"
+ },
+ "settings": {
+ "value": {
+ "commandToExecute" : "md C:\\Users\\Public\\Documents\\test"
+ }
+ }
+ }
+}
+```
+#### Deploy template
+
+Deploy the template `addCSextensiontoVM.json`. This template deploys the extension to an existing VM. Run the following command:
+
+```powershell
+$templateFile = "<Path to addCSExtensiontoVM.json file>"
+$templateParameterFile = "<Path to addCSExtWindowsVM.parameters.json file>"
+$RGName = "<Resource group name>"
+New-AzureRmResourceGroupDeployment -ResourceGroupName $RGName -TemplateFile $templateFile -TemplateParameterFile $templateParameterFile -Name "<Deployment name>"
+```
+> [!NOTE]
+> The extension deployment is a long-running job and takes about 10 minutes to complete.
+
+Here is a sample output:
+
+```powershell
+PS C:\WINDOWS\system32> $templateFile = "C:\12-09-2020\ExtensionTemplates\addCSExtensiontoVM.json"
+PS C:\WINDOWS\system32> $templateParameterFile = "C:\12-09-2020\ExtensionTemplates\addCSExtWindowsVM.parameters.json"
+PS C:\WINDOWS\system32> $RGName = "myasegpuvm1"
+PS C:\WINDOWS\system32> New-AzureRmResourceGroupDeployment -ResourceGroupName $RGName -TemplateFile $templateFile -TemplateParameterFile $templateParameterFile -Name "deployment7"
+
+DeploymentName : deployment7
+ResourceGroupName : myasegpuvm1
+ProvisioningState : Succeeded
+Timestamp : 12/17/2020 10:07:44 PM
+Mode : Incremental
+TemplateLink :
+Parameters :
+ Name Type Value
+ =============== ========================= ==========
+ vmName String VM5
+ extensionName String CustomScriptExtension
+ publisher String Microsoft.Compute
+ type String CustomScriptExtension
+ typeHandlerVersion String 1.10
+ settings Object {
+ "commandToExecute": "md C:\\Users\\Public\\Documents\\test"
+ }
+
+Outputs :
+DeploymentDebugLogLevel :
+
+PS C:\WINDOWS\system32>
+```
+#### Track deployment
+
+To check the deployment state of extensions for a given VM, run the following command:
+
+```powershell
+Get-AzureRmVMExtension -ResourceGroupName <Name of resource group> -VMName <Name of VM> -Name <Name of the extension>
+```
+Here is a sample output:
+
+```powershell
+PS C:\WINDOWS\system32> Get-AzureRmVMExtension -ResourceGroupName myasegpuvm1 -VMName VM5 -Name CustomScriptExtension
+
+ResourceGroupName : myasegpuvm1
+VMName : VM5
+Name : CustomScriptExtension
+Location : dbelocal
+Etag : null
+Publisher : Microsoft.Compute
+ExtensionType : CustomScriptExtension
+TypeHandlerVersion : 1.10
+Id : /subscriptions/947b3cfd-7a1b-4a90-7cc5-e52caf221332/resourceGroups/myasegpuvm1/providers/Microsoft.Compute/virtualMachines/VM5/extensions/CustomScriptExtension
+PublicSettings : {
+ "commandToExecute": "md C:\\Users\\Public\\Documents\\test"
+ }
+ProtectedSettings :
+ProvisioningState : Creating
+Statuses :
+SubStatuses :
+AutoUpgradeMinorVersion : True
+ForceUpdateTag :
+
+PS C:\WINDOWS\system32>
+```
+
+> [!NOTE]
+> When the deployment is complete, the `ProvisioningState` changes to `Succeeded`.
+
+Extension output is logged to files found under the following folder on the target virtual machine.
+
+```cmd
+C:\WindowsAzure\Logs\Plugins\Microsoft.Compute.CustomScriptExtension
+```
+
+The specified files are downloaded into the following folder on the target virtual machine.
+
+```cmd
+C:\Packages\Plugins\Microsoft.Compute.CustomScriptExtension\1.*\Downloads\<n>
+```
+where `<n>` is a decimal integer that may change between executions of the extension. The `1.*` value matches the actual, current `typeHandlerVersion` value of the extension. For example, the actual directory in this instance was `C:\Packages\Plugins\Microsoft.Compute.CustomScriptExtension\1.10.9\Downloads\0`.
+
+In this instance, the command to execute for the custom extension was `md C:\\Users\\Public\\Documents\\test`. When the extension is successfully installed, you can verify that the directory was created in the VM at the path specified in the command.
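+
+For instance, after connecting to the VM, a quick check from a PowerShell session inside the VM confirms the result:
+
+```powershell
+# Inside the Windows VM: returns True if the extension's command created the directory
+Test-Path "C:\Users\Public\Documents\test"
+```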
+
+### Custom Script Extension for Linux
+
+To deploy the Custom Script Extension for Linux for a VM running on your device, edit the `addCSExtLinuxVM.parameters.json` parameters file and then deploy the template `addCSExtensiontoVM.json`.
+
+#### Edit parameters file
+
+The file `addCSExtLinuxVM.parameters.json` takes the following parameters:
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "vmName": {
+ "value": "<Name of your VM>"
+ },
+ "extensionName": {
+ "value": "<Name of your extension>"
+ },
+ "publisher": {
+ "value": "Microsoft.Azure.Extensions"
+ },
+ "type": {
+ "value": "CustomScript"
+ },
+ "typeHandlerVersion": {
+ "value": "2.0"
+ },
+ "settings": {
+ "value": {
+ "commandToExecute" : "<Command to execute>"
+ }
+ }
+ }
+}
+```
+Provide your VM name, a name for the extension, and the command that you want to execute.
+
+#### Deploy template
+
+Deploy the template `addCSExtensiontoVM.json`. This template deploys the extension to an existing VM. Run the following command:
+
+```powershell
+$templateFile = "<Path to addCSExtensionToVM.json file>"
+$templateParameterFile = "<Path to addCSExtLinuxVM.parameters.json file>"
+$RGName = "<Resource group name>"
+New-AzureRmResourceGroupDeployment -ResourceGroupName $RGName -TemplateFile $templateFile -TemplateParameterFile $templateParameterFile -Name "<Deployment name>"
+```
+
+> [!NOTE]
+> The extension deployment is a long-running job and takes about 10 minutes to complete.
+
+Here is a sample output:
+
+```powershell
+PS C:\WINDOWS\system32> $templateFile = "C:\12-09-2020\ExtensionTemplates\addCSExtensionToVM.json"
+PS C:\WINDOWS\system32> $templateParameterFile = "C:\12-09-2020\ExtensionTemplates\addCSExtLinuxVM.parameters.json"
+PS C:\WINDOWS\system32> $RGName = "myasegpuvm1"
+PS C:\WINDOWS\system32> New-AzureRmResourceGroupDeployment -ResourceGroupName $RGName -TemplateFile $templateFile -TemplateParameterFile $templateParameterFile -Name "deployment99"
+
+DeploymentName : deployment99
+ResourceGroupName : myasegpuvm1
+ProvisioningState : Succeeded
+Timestamp : 12/18/2020 1:55:23 AM
+Mode : Incremental
+TemplateLink :
+Parameters :
+ Name Type Value
+ =============== ========================= ==========
+ vmName String VM6
+ extensionName String LinuxCustomScriptExtension
+ publisher String Microsoft.Azure.Extensions
+ type String CustomScript
+ typeHandlerVersion String 2.0
+ settings Object {
+ "commandToExecute": "sudo echo 'some text' >> /home/Administrator/file2.txt"
+ }
+
+Outputs :
+DeploymentDebugLogLevel :
+
+PS C:\WINDOWS\system32>
+```
+
+In this deployment, `commandToExecute` was set to create a file `file2.txt` in the `/home/Administrator` directory, with `some text` as the file's contents. You can verify that the file was created in the specified path:
+
+```powershell
+Administrator@VM6:~$ dir
+file2.txt
+Administrator@VM6:~$ cat file2.txt
+some text
+Administrator@VM6:
+```
+
+#### Track deployment status
+
+Template deployment is a long-running job. To check the deployment state of extensions for a given VM, open another PowerShell session (run as administrator). Run the following command:
+
+```powershell
+Get-AzureRmVMExtension -ResourceGroupName myResourceGroup -VMName <VM Name> -Name <Extension Name>
+```
+Here is a sample output:
+
+```powershell
+PS C:\WINDOWS\system32> Get-AzureRmVMExtension -ResourceGroupName myasegpuvm1 -VMName VM5 -Name CustomScriptExtension
+
+ResourceGroupName : myasegpuvm1
+VMName : VM5
+Name : CustomScriptExtension
+Location : dbelocal
+Etag : null
+Publisher : Microsoft.Compute
+ExtensionType : CustomScriptExtension
+TypeHandlerVersion : 1.10
+Id : /subscriptions/947b3cfd-7a1b-4a90-7cc5-e52caf221332/resourceGroups/myasegpuvm1/providers/Microsoft.Compute/virtualMachines/VM5/extensions/CustomScriptExtension
+PublicSettings : {
+ "commandToExecute": "md C:\\Users\\Public\\Documents\\test"
+ }
+ProtectedSettings :
+ProvisioningState : Creating
+Statuses :
+SubStatuses :
+AutoUpgradeMinorVersion : True
+ForceUpdateTag :
+
+PS C:\WINDOWS\system32>
+```
+
+> [!NOTE]
+> When the deployment is complete, the `ProvisioningState` changes to `Succeeded`.
+
+The extension execution output is logged to files under the following directory: `/var/lib/waagent/custom-script/download/0/`.
+
+## Remove Custom Script Extension
+
+To remove the Custom Script Extension, use the following command:
+
+`Remove-AzureRmVMExtension -ResourceGroupName <Resource group name> -VMName <VM name> -Name <Extension name>`
+
+Here is a sample output:
+
+```powershell
+PS C:\WINDOWS\system32> Remove-AzureRmVMExtension -ResourceGroupName myasegpuvm1 -VMName VM6 -Name LinuxCustomScriptExtension
+Virtual machine extension removal operation
+This cmdlet will remove the specified virtual machine extension. Do you want to continue?
+[Y] Yes [N] No [S] Suspend [?] Help (default is "Y"): Yes
+RequestId IsSuccessStatusCode StatusCode ReasonPhrase
+--------- ------------------- ---------- ------------
+ True OK OK
+```
+
+## Next steps
+
+[Azure Resource Manager cmdlets](/powershell/module/azurerm.resources/?view=azurermps-6.13.0)
\ No newline at end of file
databox-online https://docs.microsoft.com/en-us/azure/databox-online/azure-stack-edge-gpu-deploy-virtual-machine-portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-deploy-virtual-machine-portal.md
@@ -119,7 +119,7 @@ Follow these steps to create a VM after you have created a VM image.
 |---------|---------|
 |Virtual machine name | |
 |Image | Select from the VM images available on the device. |
- |Size | Choose from the [Supported VM sizes](azure-stack-edge-gpu-deploy-virtual-machine-templates.md#supported-vm-sizes). |
+ |Size | Choose from the [Supported VM sizes](azure-stack-edge-gpu-virtual-machine-sizes.md). |
 |Username | Use the default username *azureuser*. |
 |Authentication type | Choose from SSH public key or a user-defined password. |
 |Password | Enter a password to sign into the virtual machine. The password must be at least 12 characters long and meet the defined [Complexity requirements](../virtual-machines/windows/faq.md#what-are-the-password-requirements-when-creating-a-vm). |
databox-online https://docs.microsoft.com/en-us/azure/databox-online/azure-stack-edge-gpu-deploy-virtual-machine-powershell-script https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-deploy-virtual-machine-powershell-script.md new file mode 100644
@@ -0,0 +1,328 @@
+---
+title: Deploy VMs on your Azure Stack Edge Pro GPU device via Azure PowerShell script
+description: Describes how to create and manage virtual machines (VMs) on an Azure Stack Edge Pro device using an Azure PowerShell script.
+services: databox
+author: alkohli
+
+ms.service: databox
+ms.subservice: edge
+ms.topic: how-to
+ms.date: 12/22/2020
+ms.author: alkohli
+#Customer intent: As an IT admin, I need to understand how to create and manage virtual machines (VMs) on my Azure Stack Edge Pro device using an Azure PowerShell script so that I can efficiently manage my VMs.
+---
+
+# Deploy VMs on your Azure Stack Edge Pro GPU device via Azure PowerShell script
+
+<!--[!INCLUDE [applies-to-skus](../../includes/azure-stack-edge-applies-to-all-sku.md)]-->
+
+This tutorial describes how to create and manage a VM on your Azure Stack Edge Pro device using an Azure PowerShell script.
+
+## Prerequisites
+
+Before you begin creating and managing a VM on your Azure Stack Edge Pro device using this script, make sure that you have completed the following prerequisites:
+
+### For Azure Stack Edge Pro device via the local web UI
+
+[!INCLUDE [azure-stack-edge-gateway-deploy-vm-prerequisites](../../includes/azure-stack-edge-gateway-deploy-virtual-machine-prerequisites.md)]
+
+### For your Windows client
+
+1. Make sure that you have modified:
+
+    - The hosts file on the client, OR
+ - The DNS server configuration
+
+ > [!IMPORTANT]
+ > We recommend that you modify the DNS server configuration for endpoint name resolution.
+
+    1. Start **Notepad** as an administrator (administrator privileges are required to save the file), and then open the **hosts** file located at `C:\Windows\System32\Drivers\etc`.
+
+ ![Windows Explorer hosts file](media/azure-stack-edge-j-series-connect-resource-manager/hosts-file.png)
+
+    2. Add the following entries to your **hosts** file, replacing the placeholders with appropriate values for your device:
+
+ ```
+ <device IP> login.<appliance name>.<DNS domain>
+ <device IP> management.<appliance name>.<DNS domain>
+ <device IP> <storage name>.blob.<appliance name>.<DNS domain>
+ ```
+    For the storage account, you can provide a name that you want the script to use later to create a new storage account. The script does not check whether that storage account already exists.
+
+ 3. Use the following image for reference. Save the **hosts** file.
+
+ ![hosts file in Notepad](media/azure-stack-edge-j-series-deploy-virtual-machine-cli-python/hosts-screenshot-boxed.png)
+
+2. [Download the PowerShell script](https://aka.ms/ase-vm-powershell) used in this procedure.
+
+3. Make sure that your Windows client is running PowerShell 5.0 or later.
+
+4. Make sure that version 4.5.0 of the `Azure.Storage` module is installed on your system. You can get this module from the [PowerShell Gallery](https://www.powershellgallery.com/packages/Azure.Storage/4.5.0). To install this module, type:
+
+ `Install-Module -Name Azure.Storage -RequiredVersion 4.5.0`
+
+ To verify the version of the installed module, type:
+
+ `Get-InstalledModule -name Azure.Storage`
+
+ To uninstall any other version modules, type:
+
+ `Uninstall-Module -Name Azure.Storage`
+
+5. [Download AzCopy 10](../storage/common/storage-use-azcopy-v10.md#download-azcopy) to your Windows client. Make a note of this location as you will pass it as a parameter while running the script.
+
+6. Make sure that your Windows client is running TLS 1.2 or later.
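+
+    If your PowerShell session defaults to an older protocol, you can force TLS 1.2 for the current session:
+
+    ```powershell
+    # Force TLS 1.2 for the current PowerShell session
+    [Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
+    ```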
+
+## Create a VM
+
+1. Run PowerShell as an administrator.
+1. Go to the folder where you downloaded the script on your client.
+1. Before you run the script, make sure you are still connected to the local Azure Resource Manager of the device and the connection has not expired.
+
+ ```powershell
+ PS C:\windows\system32> login-AzureRMAccount -EnvironmentName aztest1 -TenantId c0257de7-538f-415c-993a-1b87a031879d
+
+ Account SubscriptionName TenantId Environment
+ ------- ---------------- -------- -----------
+ EdgeArmUser@localhost Default Provider Subscription c0257de7-538f-415c-993a-1b87a031879d aztest1
+
+ PS C:\windows\system32> cd C:\Users\v2
+ PS C:\Users\v2>
+ ```
+1. Use the following command to run the script:
+
+   `.\ArmPowershellClient.ps1 -NicPrivateIp <Private IP> -VHDPath <Path> -VHDFile <VHD File, with extension> -StorageAccountName <Name> -OS <Windows/Linux> -VMSize <Supported VM Size> -VMUserName <Username to be used to sign in to VM> -VMPassword <Password for the VM> -AzCopy10Path <Absolute Path>`
+
+ If you want the IP to be dynamically allocated to the VM, omit the `-NicPrivateIp` parameter.
+
+   Here are examples of running the script to create a Windows VM and a Linux VM.
+
+ **For a Windows VM:**
+
+ Here is a sample output for a Windows VM that was created.
+
+ ```powershell
+ PS C:\Users\v2> .\ArmPowershellClient.ps1 -VHDPath \\asefs\Logs\vmvhd -VHDFile WindowsServer2016Datacenter.vhd -StorageAccountName myasesatest -OS Windows -VMSize Standard_D1_v2 -VMUserName Administrator -VMPassword Password1 -AzCopy10Path C:\Users\AzCopy10\AzCopy.exe
+ New-AzureRmResourceGroup -Name rg201221071831 -Location DBELocal -Force
+ Successfully created Resource Group:rg201221071831
+ Successfully created Resource Group:StorAccRG
+ Get-AzureRmStorageAccount -Name myasesatest -ResourceGroupName StorAccRG -ErrorAction SilentlyContinue
+ New-AzureRmStorageAccount -Name myasesatest -ResourceGroupName StorAccRG -SkuName Standard_LRS -Location DBELocal
+
+ Created New Storage Account
+ Get-AzureRmStorageAccount -name myasesatest -resourcegroupname
+ StorageAccountName ResourceGroupName Location SkuName Kind AccessTier CreationTime ProvisioningState EnableHttpsTrafficOnly
+ ------------------ ----------------- -------- ------- ---- ---------- ------------ ----------------- ----------------------
+ myasesatest StorAccRG DBELocal StandardLRS Storage 12/22/2020 3:18:38 AM Succeeded False
+ myasesatest StorAccRG DBELocal StandardLRS Storage 12/22/2020 3:18:38 AM Succeeded False
+
+ Uploading Vhd to Storage Account
+
+ New-AzureStorageContext -StorageAccountName myasesatest -StorageAccountKey hyibjhbVlOROgTlU1nQJIlxrg94eGDhF+RIQ71Z7UVZIxoOPMlHP274NUhZtA1hMxGBcpk2BVApiFasFPEhY/A== -Endpoint https://myasesatest.blob.myasegpuvm.wdshcsso.com/
+
+ New-AzureStorageAccountSASToken -Service Blob,File,Queue,Table -ResourceType Container,Service,Object -Permission
+
+ SAS Token : ?sv=2017-07-29&sig=TXaGbjum9tFFaJnu3SFmDuslJuqNiNQwvuHfpPJMYN0%3D&spr=https&se=2020-12-22T04%3A18%3A43Z&srt=sco&ss=bfqt&sp=racwdl
+
+ C:\Users\AzCopy10\AzCopy.exe make https://myasesatest.blob.myasegpuvm.wdshcsso.com/vmimages?sv=2017-07-29&sig=TXaGbjum9tFFaJnu3SFmDuslJuqNiNQwvuHfpPJMYN0%3D&spr=https&se=2020-12-22T04%3A18%3A43Z&srt=sco&ss=bfqt&sp=racwdl
+
+ Successfully created the resource.
+
+ AzCopy cp \\asefs\Logs\vmvhd\WindowsServer2016Datacenter.vhd https://myasesatest.blob.myasegpuvm.wdshcsso.com/vmimages?sv=2017-07-29&sig=TXaGbjum9tFFaJnu3SFmDuslJuqNiNQwvuHfpPJMYN0%3D&spr=https&se=2020-12-22T04%3A18%3A43Z&srt=sco&ss=bfqt&sp=racwdl
+
+ INFO: Scanning...
+
+ Job b6f54665-93c4-2f47-4770-5f3b7b0de2dc has started
+ Log file is located at: C:\Users\Administrator\.azcopy\b6f54665-93c4-2f47-4770-5f3b7b0de2dc.log
+
+ INFO: AzCopy.exe: A newer version 10.8.0 is available to download
+
+ 99.9 %, 0 Done, 0 Failed, 1 Pending, 0 Skipped, 1 Total, (Disk may be limiting speed)
+
+ Job b6f54665-93c4-2f47-4770-5f3b7b0de2dc summary
+ Elapsed Time (Minutes): 12.7717
+ Total Number Of Transfers: 1
+ Number of Transfers Completed: 1
+ Number of Transfers Failed: 0
+ Number of Transfers Skipped: 0
+ TotalBytesTransferred: 13958644224
+ Final Job Status: Completed
+
+ VHD Upload Done
+
+ Creating a new managed disk
+
+ = New-AzureRmDiskConfig -Location DBELocal -CreateOption Import -SourceUri
+
+ Microsoft.Azure.Commands.Compute.Automation.Models.PSDisk
+
+ New-AzureRmDisk -ResourceGroupName rg201221071831 -DiskName ld201221071831 -Disk
+
+ ResourceGroupName : rg201221071831
+ ManagedBy :
+ Sku : Microsoft.Azure.Management.Compute.Models.DiskSku
+ Zones :
+ TimeCreated : 12/21/2020 7:31:35 PM
+ OsType :
+ CreationData : Microsoft.Azure.Management.Compute.Models.CreationData
+ DiskSizeGB : 13
+ EncryptionSettings :
+ ProvisioningState : Succeeded
+ Id : /subscriptions/947b3cfd-7a1b-4a90-7cc5-e52caf221332/resourceGroups/rg201221071831/providers/Microsoft.Compute/disks/ld201221071831
+ Name : ld201221071831
+ Type : Microsoft.Compute/disks
+ Location : DBELocal
+ Tags : {}
+
+ Created a new managed disk
+
+ Creating a new Image out of managed disk
+
+ ResourceGroupName :
+ SourceVirtualMachine :
+ StorageProfile : Microsoft.Azure.Management.Compute.Models.ImageStorageProfile
+ ProvisioningState :
+ Id :
+ Name :
+ Type :
+ Location : DBELocal
+ Tags :
+
+ New-AzureRmImage -Image Microsoft.Azure.Commands.Compute.Automation.Models.PSImage -ImageName ig201221071831 -ResourceGroupName rg201221071831
+
+ ResourceGroupName : rg201221071831
+ SourceVirtualMachine :
+ StorageProfile : Microsoft.Azure.Management.Compute.Models.ImageStorageProfile
+ ProvisioningState : Succeeded
+ Id : /subscriptions/947b3cfd-7a1b-4a90-7cc5-e52caf221332/resourceGroups/rg201221071831/providers/Microsoft.Compute/images/ig201221071831
+ Name : ig201221071831
+ Type : Microsoft.Compute/images
+ Location : dbelocal
+ Tags : {}
+
+ Created a new Image
+
+ Using Vnet /subscriptions/947b3cfd-7a1b-4a90-7cc5-e52caf221332/resourceGroups/ASERG/providers/Microsoft.Network/virtualNetworks/ASEVNET
+
+ Creating a new Newtork Interface
+ WARNING: The output object type of this cmdlet will be modified in a future release.
+
+ VirtualMachine :
+ IpConfigurations : {ip201221071831}
+ DnsSettings : Microsoft.Azure.Commands.Network.Models.PSNetworkInterfaceDnsSettings
+ MacAddress : 001DD87D7216
+ Primary :
+ EnableAcceleratedNetworking : False
+ EnableIPForwarding : False
+ NetworkSecurityGroup :
+ ProvisioningState : Succeeded
+ VirtualMachineText : null
+ IpConfigurationsText : [
+ {
+ "Name": "ip201221071831",
+ "Etag": "W/\"27785dd5-d12a-4d73-9495-ffad7847261a\"",
+ "Id": "/subscriptions/947b3cfd-7a1b-4a90-7cc5-e52caf221332/resourceGroups/rg201221071831/providers/Microsoft.Network/networkInterfaces/nic201221071831/ipConfigurations/ip201221071831",
+ "PrivateIpAddress": "10.57.51.61",
+ "PrivateIpAllocationMethod": "Dynamic",
+ "Subnet": {
+ "Id": "/subscriptions/947b3cfd-7a1b-4a90-7cc5-e52caf221332/resourceGroups/ASERG/providers/Microsoft.Network/virtualNetworks/ASEVNET/subnets/ASEVNETsubNet",
+ "ResourceNavigationLinks": [],
+ "ServiceEndpoints": []
+ },
+ "ProvisioningState": "Succeeded",
+ "PrivateIpAddressVersion": "IPv4",
+ "LoadBalancerBackendAddressPools": [],
+ "LoadBalancerInboundNatRules": [],
+ "Primary": true,
+ "ApplicationGatewayBackendAddressPools": [],
+ "ApplicationSecurityGroups": []
+ }
+ ]
+ DnsSettingsText : {
+ "DnsServers": [],
+ "AppliedDnsServers": [],
+ "InternalDomainNameSuffix": "qgotb4hjdh4efnhn0vz5adtb3f.a--x.internal.cloudapp.net"
+ }
+ NetworkSecurityGroupText : null
+ ResourceGroupName : rg201221071831
+ Location : dbelocal
+ ResourceGuid : e6327ab9-0855-4f04-9b36-17bbf31b5bd8
+ Type : Microsoft.Network/networkInterfaces
+ Tag :
+ TagsTable :
+ Name : nic201221071831
+ Etag : W/"27785dd5-d12a-4d73-9495-ffad7847261a"
+ Id : /subscriptions/947b3cfd-7a1b-4a90-7cc5-e52caf221332/resourceGroups/rg201221071831/providers/Microsoft.Network/networkInterfaces/nic201221071831
+
+ Created Network Interface
+
+ Creating a new VM
+
+ New-AzureRmVMConfig -VMName VM201221071831 -VMSize Standard_D1_v2
+
+ Set-AzureRmVMOperatingSystem -VM Microsoft.Azure.Commands.Compute.Models.PSVirtualMachine -Windows -ComputerName COM201221071831 -Credential System.Management.Automation.PSCredential
+
+ Microsoft.Azure.Commands.Compute.Models.PSVirtualMachine = Set-AzureRmVMOSDisk -VM Microsoft.Azure.Commands.Compute.Models.PSVirtualMachine -Name osld201221071831 -Caching ReadWrite -CreateOption FromImage -Windows -StorageAccountType StandardLRS
+
+ Add-AzureRmVMNetworkInterface -VM Microsoft.Azure.Commands.Compute.Models.PSVirtualMachine -Id /subscriptions/947b3cfd-7a1b-4a90-7cc5-e52caf221332/resourceGroups/rg201221071831/providers/Microsoft.Network/networkInterfaces/nic201221071831.Id
+
+ Set-AzureRmVMSourceImage -VM Microsoft.Azure.Commands.Compute.Models.PSVirtualMachine -Id /subscriptions/947b3cfd-7a1b-4a90-7cc5-e52caf221332/resourceGroups/rg201221071831/providers/Microsoft.Compute/images/ig201221071831
+
+ New-AzureRmVM -ResourceGroupName rg201221071831 -Location DBELocal -VM Microsoft.Azure.Commands.Compute.Models.PSVirtualMachine -Verbose
+ WARNING: Since the VM is created using premium storage or managed disk, existing standard storage account, myasesa1, is used for boot
+ diagnostics.
+ VERBOSE: Performing the operation "New" on target "VM201221071831".
+
+ Ticks : 1533424841
+ Days : 0
+ Hours : 0
+ Milliseconds : 342
+ Minutes : 2
+ Seconds : 33
+ TotalDays : 0.00177479726967593
+ TotalHours : 0.0425951344722222
+ TotalMilliseconds : 153342.4841
+ TotalMinutes : 2.55570806833333
+ TotalSeconds : 153.3424841
+
+ RequestId :
+ IsSuccessStatusCode : True
+ StatusCode : OK
+ ReasonPhrase : OK
+
+ PS C:\Users\v2>
+ ```
+
+ **For a Linux VM:**
+
+    Here is a sample of the command that was used to create a Linux VM.
+
+ ```powershell
+ .\ArmPowershellClient.ps1 -VHDPath \\asefs\Logs\vmvhd -VHDFile ubuntu13.vhd -StorageAccountName myasesatest -OS Linux -VMSize Standard_D1_v2 -VMUserName Administrator -VMPassword Password1 -AzCopy10Path C:\Users\AzCopy10\AzCopy.exe
+ New-AzureRmResourceGroup -Name rg201221075546 -Location DBELocal -Force
+ ```
+
+
+1. Once the VMs are successfully created, they show up in the list of virtual machines in the Azure portal. To view the VMs, in the Azure Stack Edge resource for your device in the Azure portal, go to **Edge services > Virtual machines**.
+
+ ![View list of virtual machines](media/azure-stack-edge-gpu-deploy-virtual-machine-powershell-script/list-virtual-machine-1.png)
+
+    To view the details of a VM, select the VM name. Note the dynamically allocated IP address for this VM.
+
+ ![View VM details](media/azure-stack-edge-gpu-deploy-virtual-machine-powershell-script/view-virtual-machine-details-1.png)
+
+1. To clean up the resources that the script created, use the following commands:
+
+ ```powershell
+ Get-AzureRmVM | Remove-AzureRmVM -Force
+ Get-AzureRmNetworkInterface | Remove-AzureRmNetworkInterface -Force
+ Get-AzureRmImage | Remove-AzureRmImage -Force
+ Get-AzureRmDisk | Remove-AzureRmDisk -Force
+ Get-AzureRmStorageAccount | Remove-AzureRmStorageAccount -Force
+ ```
+
+## Next steps
+
+[Deploy VMs using Azure PowerShell cmdlets](azure-stack-edge-gpu-deploy-virtual-machine-powershell.md)
\ No newline at end of file
databox-online https://docs.microsoft.com/en-us/azure/databox-online/azure-stack-edge-gpu-deploy-virtual-machine-powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-deploy-virtual-machine-powershell.md new file mode 100644
@@ -0,0 +1,537 @@
+---
+title: Deploy VMs on your Azure Stack Edge Pro GPU device via Azure PowerShell
+description: Describes how to create and manage virtual machines (VMs) on an Azure Stack Edge Pro GPU device using Azure PowerShell.
+services: databox
+author: alkohli
+
+ms.service: databox
+ms.subservice: edge
+ms.topic: how-to
+ms.date: 12/23/2020
+ms.author: alkohli
+#Customer intent: As an IT admin, I need to understand how to create and manage virtual machines (VMs) on my Azure Stack Edge Pro device using APIs so that I can efficiently manage my VMs.
+---
+
+# Deploy VMs on your Azure Stack Edge Pro GPU device via Azure PowerShell
+
+This article describes how to create and manage a VM on your Azure Stack Edge Pro device using Azure PowerShell. This article applies to Azure Stack Edge Pro GPU, Azure Stack Edge Pro R, and Azure Stack Edge Mini R devices.
+
+## VM deployment workflow
+
+The deployment workflow is illustrated in the following diagram.
+
+![VM deployment workflow](media/azure-stack-edge-gpu-deploy-virtual-machine-powershell/vm-workflow-r.svg)
+
+## Prerequisites
+
+[!INCLUDE [azure-stack-edge-gateway-deploy-vm-prerequisites](../../includes/azure-stack-edge-gateway-deploy-virtual-machine-prerequisites.md)]
+
+## Query for the built-in subscription on the device
+
+For Azure Resource Manager, only a single, fixed, user-visible subscription is supported. This subscription is unique per device, and neither the subscription name nor the subscription ID can be changed.
+
+This subscription contains all the resources that are required for VM creation.
+
+> [!IMPORTANT]
+> This subscription is created when you enable VMs from the Azure portal, and it lives locally on your device.
+
+This subscription is used to deploy the VMs.
+
+1. To list this subscription, type:
+
+ ```powershell
+ Get-AzureRmSubscription
+ ```
+
+ A sample output is shown below.
+
+ ```powershell
+ PS C:\windows\system32> Get-AzureRmSubscription
+
+ Name Id TenantId State
+ ---- -- -------- -----
+ Default Provider Subscription A4257FDE-B946-4E01-ADE7-674760B8D1A3 c0257de7-538f-415c-993a-1b87a031879d Enabled
+
+ PS C:\windows\system32>
+ ```
+
+2. Get the list of the registered resource providers running on the device. This list typically includes Compute, Network, and Storage.
+
+ ```powershell
+ Get-AzureRMResourceProvider
+ ```
+
+ > [!NOTE]
+ > The resource providers are pre-registered and cannot be modified or changed.
+
+ A sample output is shown below:
+
+ ```powershell
+ Get-AzureRmResourceProvider
+ ProviderNamespace : Microsoft.Compute
+ RegistrationState : Registered
+ ResourceTypes : {virtualMachines, virtualMachines/extensions, locations, operations...}
+ Locations : {DBELocal}
+ ZoneMappings :
+
+ ProviderNamespace : Microsoft.Network
+ RegistrationState : Registered
+ ResourceTypes : {operations, locations, locations/operations, locations/usages...}
+ Locations : {DBELocal}
+ ZoneMappings :
+
+ ProviderNamespace : Microsoft.Resources
+ RegistrationState : Registered
+ ResourceTypes : {tenants, locations, providers, checkresourcename...}
+ Locations : {DBELocal}
+ ZoneMappings :
+
+ ProviderNamespace : Microsoft.Storage
+ RegistrationState : Registered
+ ResourceTypes : {storageaccounts, storageAccounts/blobServices, storageAccounts/tableServices,
+ storageAccounts/queueServices...}
+ Locations : {DBELocal}
+ ZoneMappings :
+ ```
+
+## Create a resource group
+
+Create an Azure resource group with [New-AzureRmResourceGroup](/powershell/module/az.resources/new-azresourcegroup). A resource group is a logical container into which Azure resources such as storage accounts, disks, and managed disks are deployed and managed.
+
+> [!IMPORTANT]
+> All the resources are created in the same location as the device, and the location is set to **DBELocal**.
+
+```powershell
+New-AzureRmResourceGroup -Name <Resource group name> -Location DBELocal
+```
+
+A sample output is shown below.
+
+```powershell
+New-AzureRmResourceGroup -Name rg191113014333 -Location DBELocal
+Successfully created Resource Group:rg191113014333
+```
+
+## Create a storage account
+
+Create a new storage account using the resource group created in the previous step. This is a **local storage account** that will be used to upload the virtual disk image for the VM.
+
+```powershell
+New-AzureRmStorageAccount -Name <Storage account name> -ResourceGroupName <Resource group name> -Location DBELocal -SkuName Standard_LRS
+```
+
+> [!NOTE]
+> Only local storage accounts, such as locally redundant storage (Standard_LRS or Premium_LRS), can be created via Azure Resource Manager. To create tiered storage accounts, see the steps in [Add, connect to storage accounts on your Azure Stack Edge Pro](azure-stack-edge-j-series-deploy-add-storage-accounts.md).
+
+A sample output is shown below.
+
+```powershell
+New-AzureRmStorageAccount -Name sa191113014333 -ResourceGroupName rg191113014333 -SkuName Standard_LRS -Location DBELocal
+
+ResourceGroupName : rg191113014333
+StorageAccountName : sa191113014333
+Id : /subscriptions/a4257fde-b946-4e01-ade7-674760b8d1a3/resourceGroups/rg191113014333/providers/Microsoft.Storage/storageaccounts/sa191113014333
+Location : DBELocal
+Sku : Microsoft.Azure.Management.Storage.Models.Sku
+Kind : Storage
+Encryption : Microsoft.Azure.Management.Storage.Models.Encryption
+AccessTier :
+CreationTime : 11/13/2019 9:43:49 PM
+CustomDomain :
+Identity :
+LastGeoFailoverTime :
+PrimaryEndpoints : Microsoft.Azure.Management.Storage.Models.Endpoints
+PrimaryLocation : DBELocal
+ProvisioningState : Succeeded
+SecondaryEndpoints :
+SecondaryLocation :
+StatusOfPrimary : Available
+StatusOfSecondary :
+Tags :
+EnableHttpsTrafficOnly : False
+NetworkRuleSet :
+Context : Microsoft.WindowsAzure.Commands.Common.Storage.LazyAzureStorageContext
+ExtendedProperties : {}
+```
+
+To get the storage account key, run the `Get-AzureRmStorageAccountKey` command. A sample output of this command is shown here.
+
+```powershell
+PS C:\Users\Administrator> Get-AzureRmStorageAccountKey
+
+cmdlet Get-AzureRmStorageAccountKey at command pipeline position 1
+Supply values for the following parameters:
+(Type !? for Help.)
+ResourceGroupName: my-resource-ase
+Name:myasestoracct
+
+KeyName Value
+------- -----
+key1 /IjVJN+sSf7FMKiiPLlDm8mc9P4wtcmhhbnCa7...
+key2 gd34TcaDzDgsY9JtDNMUgLDOItUU0Qur3CBo6Q...
+```
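+
+If you prefer not to be prompted, you can pass the parameters directly and capture the first key. A minimal sketch, reusing the hypothetical names from the sample above:
+
+```powershell
+# Retrieve the first key for the storage account
+$key = (Get-AzureRmStorageAccountKey -ResourceGroupName "my-resource-ase" -Name "myasestoracct")[0].Value
+```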
+
+## Add blob URI to hosts file
+
+You already added the blob URI to the hosts file for the client that you are using to connect to Blob storage in the section [Modify host file for endpoint name resolution](azure-stack-edge-j-series-connect-resource-manager.md#step-5-modify-host-file-for-endpoint-name-resolution). This was the entry for the blob URI:
+
+\<Azure consistent network services VIP \> \<storage name\>.blob.\<appliance name\>.\<dnsdomain\>
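+
+For example, a hosts file entry with hypothetical values might look like this:
+
+```
+10.126.68.10    myasesa1.blob.myase1.contoso.com
+```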
+
+## Install certificates
+
+If you are using *https*, then you need to install appropriate certificates on your device. In this case, install the blob endpoint certificate. For more information, see how to create and upload certificates in [Manage certificates](azure-stack-edge-j-series-manage-certificates.md).
+
+## Upload a VHD
+
+Copy any disk images to be used into page blobs in the local storage account that you created in the earlier steps. You can use a tool such as [AzCopy](../storage/common/storage-use-azcopy-v10.md) to upload the VHD to that storage account.
+
+<!--Before you use AzCopy, make sure that the [AzCopy is configured correctly](#configure-azcopy) for use with the blob storage REST API version that you are using with your Azure Stack Edge Pro device.
+
+```powershell
+AzCopy /Source:<sourceDirectoryForVHD> /Dest:<blobContainerUri> /DestKey:<storageAccountKey> /Y /S /V /NC:32 /BlobType:page /destType:blob
+```
+
+> [!NOTE]
+> Set `BlobType` to page for creating a managed disk out of VHD. Set `BlobType` to block when writing to tiered storage accounts using AzCopy.
+
+You can download the disk images from the marketplace. For detailed steps, go to [Get the virtual disk image from Azure marketplace](azure-stack-edge-j-series-create-virtual-machine-image.md).
+
+A sample output using AzCopy 7.3 is shown below. For more information on this command, go to [Upload VHD file to storage account using AzCopy](../devtest-labs/devtest-lab-upload-vhd-using-azcopy.md).
+
+```powershell
+AzCopy /Source:\\hcsfs\scratch\vm_vhds\linux\ /Dest:http://sa191113014333.blob.dbe-1dcmhq2.microsoftdatabox.com/vmimages /DestKey:gJKoyX2Amg0Zytd1ogA1kQ2xqudMHn7ljcDtkJRHwMZbMK== /Y /S /V /NC:32 /BlobType:page /destType:blob /z:2e7d7d27-c983-410c-b4aa-b0aa668af0c6
+```-->
+Use the following commands with AzCopy 10:
+
+```powershell
+$StorageAccountKey = (Get-AzureRmStorageAccountKey -ResourceGroupName <ResourceGroupName> -Name <StorageAccountName>)[0].Value
+
+$endPoint = (Get-AzureRmStorageAccount -name <StorageAccountName> -ResourceGroupName <ResourceGroupName>).PrimaryEndpoints.Blob
+
+$StorageAccountContext = New-AzureStorageContext -StorageAccountName <StorageAccountName> -StorageAccountKey <StorageAccountKey> -Endpoint <Endpoint>
+
+$StorageAccountSAS = New-AzureStorageAccountSASToken -Service Blob,File,Queue,Table -ResourceType Container,Service,Object -Permission "acdlrw" -Context <StorageAccountContext> -Protocol HttpsOnly
+
+<AzCopy exe path> cp "Full VHD path" "<BlobEndPoint>/<ContainerName><StorageAccountSAS>"
+```
+
+Here is an example:
+
+```powershell
+$ContainerName = <ContainerName>
+$ResourceGroupName = <ResourceGroupName>
+$StorageAccountName = <StorageAccountName>
+$VHDPath = "Full VHD Path"
+$VHDFile = <VHDFileName>
+
+$StorageAccountKey = (Get-AzureRmStorageAccountKey -ResourceGroupName $ResourceGroupName -Name $StorageAccountName)[0].Value
+
+$endPoint = (Get-AzureRmStorageAccount -name $StorageAccountName -resourcegroupname $ResourceGroupName).PrimaryEndpoints.Blob
+
+$StorageAccountContext = New-AzureStorageContext -StorageAccountName $StorageAccountName -StorageAccountKey $StorageAccountKey -Endpoint $endPoint
+
+$StorageAccountSAS = New-AzureStorageAccountSASToken -Service Blob,File,Queue,Table -ResourceType Container,Service,Object -Permission "acdlrw" -Context $StorageAccountContext -Protocol HttpsOnly
+
+C:\AzCopy.exe cp "$VHDPath\$VHDFile" "$endPoint$ContainerName$StorageAccountSAS"
+```
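+
+To verify that the VHD was uploaded, you can list the blobs in the destination container. A quick check, reusing the storage context created above:
+
+```powershell
+# List the blobs in the container that the VHD was copied to
+Get-AzureStorageBlob -Container $ContainerName -Context $StorageAccountContext
+```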
+
+## Create managed disks from the VHD
+
+Create a managed disk from the uploaded VHD.
+
+```powershell
+$DiskConfig = New-AzureRmDiskConfig -Location DBELocal -CreateOption Import -SourceUri "Source URL for your VHD"
+```
+A sample command is shown below:
+
+```powershell
+$DiskConfig = New-AzureRmDiskConfig -Location DBELocal -CreateOption Import -SourceUri http://sa191113014333.blob.dbe-1dcmhq2.microsoftdatabox.com/vmimages/ubuntu13.vhd
+```
+
+```powershell
+New-AzureRMDisk -ResourceGroupName <Resource group name> -DiskName <Disk name> -Disk $DiskConfig
+```
+
+A sample output is shown below. For more information on this cmdlet, go to [New-AzureRmDisk](/powershell/module/azurerm.compute/new-azurermdisk?view=azurermps-6.13.0).
+
+```powershell
+Tags :
+New-AzureRmDisk -ResourceGroupName rg191113014333 -DiskName ld191113014333 -Disk $DiskConfig
+
+ResourceGroupName : rg191113014333
+ManagedBy :
+Sku : Microsoft.Azure.Management.Compute.Models.DiskSku
+Zones :
+TimeCreated : 11/13/2019 1:49:07 PM
+OsType :
+CreationData : Microsoft.Azure.Management.Compute.Models.CreationData
+DiskSizeGB : 30
+EncryptionSettings :
+ProvisioningState : Succeeded
+Id : /subscriptions/a4257fde-b946-4e01-ade7-674760b8d1a3/resourceGroups/rg191113014333/providers/Micros
+ oft.Compute/disks/ld191113014333
+Name : ld191113014333
+Type : Microsoft.Compute/disks
+Location : DBELocal
+Tags : {}
+```
+
+## Create a VM image from the image managed disk
+
+Use the following command to create a VM image from the managed disk. Replace the values within \< \> with the names you choose.
+
+```powershell
+$imageConfig = New-AzureRmImageConfig -Location DBELocal
+$ManagedDiskId = (Get-AzureRmDisk -Name <Disk name> -ResourceGroupName <Resource group name>).Id
+Set-AzureRmImageOsDisk -Image $imageConfig -OsType '<OS type>' -OsState 'Generalized' -DiskSizeGB <Disk size> -ManagedDiskId $ManagedDiskId
+New-AzureRmImage -Image $imageConfig -ImageName <Image name> -ResourceGroupName <Resource group name>
+```
+
+The supported OS types are Linux and Windows. For OS type Linux, for example:
+
+```powershell
+Set-AzureRmImageOsDisk -Image $imageConfig -OsType 'Linux' -OsState 'Generalized' -DiskSizeGB <Disk size> -ManagedDiskId $ManagedDiskId
+```
+
+A sample output is shown below. For more information on this cmdlet, go to [New-AzureRmImage](/powershell/module/azurerm.compute/new-azurermimage?view=azurermps-6.13.0).
+
+```powershell
+New-AzureRmImage -Image Microsoft.Azure.Commands.Compute.Automation.Models.PSImage -ImageName ig191113014333 -ResourceGroupName rg191113014333
+ResourceGroupName : rg191113014333
+SourceVirtualMachine :
+StorageProfile : Microsoft.Azure.Management.Compute.Models.ImageStorageProfile
+ProvisioningState : Succeeded
+Id : /subscriptions/a4257fde-b946-4e01-ade7-674760b8d1a3/resourceGroups/rg191113014333/providers/Micr
+ osoft.Compute/images/ig191113014333
+Name : ig191113014333
+Type : Microsoft.Compute/images
+Location : dbelocal
+Tags : {}
+```
+
+## Create VM with previously created resources
+
+Before you create and deploy the VM, you must create one virtual network and associate a virtual network interface with it.
+
+> [!IMPORTANT]
+> While creating the virtual network and the virtual network interface, the following rules apply:
+> - Only one Vnet can be created (even across resource groups), and its address space must exactly match that of the logical network.
+> - Only one subnet is allowed in the Vnet, and the subnet must have the exact same address space as the Vnet.
+> - Only the static allocation method is allowed during Vnic creation, and you must provide a private IP address.
+
+
+**Query the automatically created Vnet**
+
+When you enable compute from the local UI of your device, a Vnet named `ASEVNET` is automatically created under the `ASERG` resource group.
+Use the following command to query the existing Vnet:
+
+```powershell
+$aRmVN = Get-AzureRMVirtualNetwork -Name ASEVNET -ResourceGroupName ASERG
+```
+
+<!--```powershell
+$subNetId=New-AzureRmVirtualNetworkSubnetConfig -Name <Subnet name> -AddressPrefix <Address Prefix>
+$aRmVN = New-AzureRmVirtualNetwork -ResourceGroupName <Resource group name> -Name <Vnet name> -Location DBELocal -AddressPrefix <Address prefix> -Subnet $subNetId
+```-->
+
+**Create a Vnic using the Vnet subnet ID**
+
+```powershell
+$ipConfig = New-AzureRmNetworkInterfaceIpConfig -Name <IP config Name> -SubnetId $aRmVN.Subnets[0].Id -PrivateIpAddress <Private IP>
+$Nic = New-AzureRmNetworkInterface -Name <Nic name> -ResourceGroupName <Resource group name> -Location DBELocal -IpConfiguration $ipConfig
+```
+
+The sample output of these commands is shown below:
+
+```powershell
+PS C:\Users\Administrator> $subNetId=New-AzureRmVirtualNetworkSubnetConfig -Name my-ase-subnet -AddressPrefix "5.5.0.0/16"
+
+PS C:\Users\Administrator> $aRmVN = New-AzureRmVirtualNetwork -ResourceGroupName Resource-my-ase -Name my-ase-virtualnetwork -Location DBELocal -AddressPrefix "5.5.0.0/16" -Subnet $subNetId
+WARNING: The output object type of this cmdlet will be modified in a future release.
+PS C:\Users\Administrator> $ipConfig = New-AzureRmNetworkInterfaceIpConfig -Name my-ase-ip -SubnetId $aRmVN.Subnets[0].Id
+PS C:\Users\Administrator> $Nic = New-AzureRmNetworkInterface -Name my-ase-nic -ResourceGroupName Resource-my-ase -Location DBELocal -IpConfiguration $ipConfig
+WARNING: The output object type of this cmdlet will be modified in a future release.
+
+PS C:\Users\Administrator> $Nic
+
+PS C:\Users\Administrator> (Get-AzureRmNetworkInterface)[0]
+
+Name : nic200108020444
+ResourceGroupName : rg200108020444
+Location : dbelocal
+Id : /subscriptions/a4257fde-b946-4e01-ade7-674760b8d1a3/resourceGroups/rg200108020444/providers/Microsoft.Network/networ
+ kInterfaces/nic200108020444
+Etag : W/"f9d1759d-4d49-42fa-8826-e218e0b1d355"
+ResourceGuid : 3851ae62-c13e-4416-9386-e21d9a2fef0f
+ProvisioningState : Succeeded
+Tags :
+VirtualMachine : {
+ "Id": "/subscriptions/a4257fde-b946-4e01-ade7-674760b8d1a3/resourceGroups/rg200108020444/providers/Microsoft.Compu
+ te/virtualMachines/VM200108020444"
+ }
+IpConfigurations : [
+ {
+ "Name": "ip200108020444",
+ "Etag": "W/\"f9d1759d-4d49-42fa-8826-e218e0b1d355\"",
+ "Id": "/subscriptions/a4257fde-b946-4e01-ade7-674760b8d1a3/resourceGroups/rg200108020444/providers/Microsoft.Net
+ work/networkInterfaces/nic200108020444/ipConfigurations/ip200108020444",
+ "PrivateIpAddress": "5.5.166.65",
+ "PrivateIpAllocationMethod": "Static",
+ "Subnet": {
+ "Id": "/subscriptions/a4257fde-b946-4e01-ade7-674760b8d1a3/resourceGroups/DbeSystemRG/providers/Microsoft.Netw
+ ork/virtualNetworks/vSwitch1/subnets/subnet123",
+ "ResourceNavigationLinks": [],
+ "ServiceEndpoints": []
+ },
+ "ProvisioningState": "Succeeded",
+ "PrivateIpAddressVersion": "IPv4",
+ "LoadBalancerBackendAddressPools": [],
+ "LoadBalancerInboundNatRules": [],
+ "Primary": true,
+ "ApplicationGatewayBackendAddressPools": [],
+ "ApplicationSecurityGroups": []
+ }
+ ]
+DnsSettings : {
+ "DnsServers": [],
+ "AppliedDnsServers": []
+ }
+EnableIPForwarding : False
+EnableAcceleratedNetworking : False
+NetworkSecurityGroup : null
+Primary : True
+MacAddress                  : 00155D18E432
+```
+
+Optionally, while creating a Vnic for a VM, you can pass a public IP address. In this instance, the public IP that is returned is the same as the private IP.
+
+```powershell
+New-AzureRmPublicIPAddress -Name <Public IP> -ResourceGroupName <ResourceGroupName> -AllocationMethod Static -Location DBELocal
+$publicIP = (Get-AzureRmPublicIPAddress -Name <Public IP> -ResourceGroupName <Resource group name>).Id
+$ipConfig = New-AzureRmNetworkInterfaceIpConfig -Name <ConfigName> -PublicIpAddressId $publicIP -SubnetId $subNetId
+```
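+
+You can then create the network interface with this IP configuration, in the same way as before:
+
+```powershell
+$Nic = New-AzureRmNetworkInterface -Name <Nic name> -ResourceGroupName <Resource group name> -Location DBELocal -IpConfiguration $ipConfig
+```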
+
+**Create a VM**
+
+You can now use the VM image to create a VM and attach it to the virtual network that you created earlier.
+
+```powershell
+$pass = ConvertTo-SecureString "<Password>" -AsPlainText -Force;
+$cred = New-Object System.Management.Automation.PSCredential("<Enter username>", $pass)
+
+# You will use this username and password to log in to the VM after it is created and powered up.
+
+$VirtualMachine = New-AzureRmVMConfig -VMName <VM name> -VMSize "Standard_D1_v2"
+
+$VirtualMachine = Set-AzureRmVMOperatingSystem -VM $VirtualMachine -<OS type> -ComputerName <Your computer Name> -Credential $cred
+
+$VirtualMachine = Set-AzureRmVMOSDisk -VM $VirtualMachine -Name <OS Disk Name> -Caching "ReadWrite" -CreateOption "FromImage" -Linux -StorageAccountType StandardLRS
+
+$nicID = (Get-AzureRmNetworkInterface -Name <nic name> -ResourceGroupName <Resource Group Name>).Id
+
+$VirtualMachine = Add-AzureRmVMNetworkInterface -VM $VirtualMachine -Id $nicID
+
+$image = (Get-AzureRmImage -ResourceGroupName <Resource Group Name> -ImageName $ImageName).Id
+
+$VirtualMachine = Set-AzureRmVMSourceImage -VM $VirtualMachine -Id $image
+
+New-AzureRmVM -ResourceGroupName <Resource Group Name> -Location DBELocal -VM $VirtualMachine -Verbose
+```
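+
+After the deployment completes, you can confirm that the VM was created and check its provisioning state:
+
+```powershell
+Get-AzureRmVM -ResourceGroupName <Resource Group Name> -Name <VM name>
+```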
+
+## Connect to a VM
+
+Depending on whether you created a Windows or a Linux VM, the steps to connect can be different.
+
+### Connect to Linux VM
+
+Follow these steps to connect to a Linux VM.
+
+[!INCLUDE [azure-stack-edge-gateway-connect-vm](../../includes/azure-stack-edge-gateway-connect-virtual-machine-linux.md)]
+
+### Connect to Windows VM
+
+Follow these steps to connect to a Windows VM.
+
+[!INCLUDE [azure-stack-edge-gateway-connect-vm](../../includes/azure-stack-edge-gateway-connect-virtual-machine-windows.md)]
+
+<!--Connect to the VM using the private IP that you passed during the VM creation.
+
+Open an SSH session to connect with the IP address.
+
+`ssh -l <username> <ip address>`
+
+When prompted, provide the password that you used when creating the VM.
+
+If you need to provide the SSH key, use this command.
+
+ssh -i c:/users/Administrator/.ssh/id_rsa Administrator@5.5.41.236
+
+If you used a public IP address during VM creation, you can use that IP to connect to the VM. To get the public IP:
+
+```powershell
+$publicIp = Get-AzureRmPublicIpAddress -Name <Public IP> -ResourceGroupName <Resource group name>
+```
+The public IP in this case will be the same as the private IP that you passed during virtual network interface creation.-->
+
+## Manage VM
+
+The following sections describe some of the common operations for the VMs that you create on your Azure Stack Edge Pro device.
+
+### List VMs running on the device
+
+To return the VMs running on your Azure Stack Edge Pro device, run the `Get-AzureRmVM` command. To get a specific VM, specify the resource group and the VM name:
+
+`Get-AzureRmVM -ResourceGroupName <String> -Name <String>`
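+
+For example, a sketch using hypothetical names:
+
+```powershell
+# List every VM on the device
+Get-AzureRmVM
+
+# Get a specific VM
+Get-AzureRmVM -ResourceGroupName "rg201221071831" -Name "VM201221071831"
+```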
+
+### Turn on the VM
+
+Run the following cmdlet to turn on a virtual machine running on your device:
+
+`Start-AzureRmVM [-Name] <String> [-ResourceGroupName] <String>`
+
+For more information on this cmdlet, go to [Start-AzureRmVM](/powershell/module/azurerm.compute/start-azurermvm?view=azurermps-6.13.0).
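+
+For example, with hypothetical names:
+
+```powershell
+Start-AzureRmVM -Name "VM201221071831" -ResourceGroupName "rg201221071831"
+```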
+
+### Suspend or shut down the VM
+
+Run the following cmdlet to stop or shut down a virtual machine running on your device:
+
+```powershell
+Stop-AzureRmVM [-Name] <String> [-StayProvisioned] [-ResourceGroupName] <String>
+```
+
+For more information on this cmdlet, go to [Stop-AzureRmVM cmdlet](/powershell/module/azurerm.compute/stop-azurermvm?view=azurermps-6.13.0).
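+
+For example, to shut down a VM while keeping it provisioned on the device (hypothetical names):
+
+```powershell
+# -StayProvisioned stops the VM without deallocating its resources
+Stop-AzureRmVM -Name "VM201221071831" -ResourceGroupName "rg201221071831" -StayProvisioned
+```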
+
+### Add a data disk
+
+If the workload requirements on your VM increase, you may need to add a data disk.
+
+```powershell
+Add-AzureRmVMDataDisk -VM $VirtualMachine -Name "disk1" -VhdUri "https://contoso.blob.core.windows.net/vhds/diskstandard03.vhd" -LUN 0 -Caching ReadOnly -DiskSizeinGB 1 -CreateOption Empty
+
+Update-AzureRmVM -ResourceGroupName "<Resource Group Name string>" -VM $VirtualMachine
+```
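+
+These commands assume that `$VirtualMachine` holds the VM object. If you are starting from a new session, a sketch of retrieving it first (hypothetical names):
+
+```powershell
+$VirtualMachine = Get-AzureRmVM -ResourceGroupName "rg201221071831" -Name "VM201221071831"
+```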
+
+### Delete the VM
+
+Run the following cmdlet to remove a virtual machine from your device:
+
+```powershell
+Remove-AzureRmVM [-Name] <String> [-ResourceGroupName] <String>
+```
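+
+For example, to delete a VM without a confirmation prompt (hypothetical names):
+
+```powershell
+Remove-AzureRmVM -Name "VM201221071831" -ResourceGroupName "rg201221071831" -Force
+```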
+
+For more information on this cmdlet, go to [Remove-AzureRmVm cmdlet](/powershell/module/azurerm.compute/remove-azurermvm?view=azurermps-6.13.0).
+
+## Next steps
+
+[Azure Resource Manager cmdlets](/powershell/module/azurerm.resources/?view=azurermps-6.13.0)
\ No newline at end of file
databox-online https://docs.microsoft.com/en-us/azure/databox-online/azure-stack-edge-gpu-deploy-virtual-machine-templates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-deploy-virtual-machine-templates.md
@@ -49,7 +49,7 @@ The high level summary of the deployment workflow using templates is as follows:
2. **Create VM from templates**
- 1. Create a VM image and a VNet using `CreateImageAndVnet.parameters.json` parameters file and `CreateImageAndVnet.json` deployment template.
+ 1. Create a VM image using `CreateImage.parameters.json` parameters file and `CreateImage.json` deployment template.
 1. Create a VM with previously created resources using `CreateVM.parameters.json` parameters file and `CreateVM.json` deployment template.

## Device prerequisites
@@ -150,9 +150,9 @@ Skip this step if you will connect via Storage Explorer using *http*. If you are
### Create and upload a VHD
-Make sure that you have a virtual disk image that you can use to upload in the later step. Follow the steps in [Create a VM image](azure-stack-edge-j-series-create-virtual-machine-image.md).
+Make sure that you have a virtual disk image that you can use to upload in the later step. Follow the steps in [Create a VM image](azure-stack-edge-gpu-create-virtual-machine-image.md).
-Copy any disk images to be used into page blobs in the local storage account that you created in the earlier steps. You can use a tool such as [Storage Explorer](https://azure.microsoft.com/features/storage-explorer/) or [AzCopy to upload the VHD to the storage account](azure-stack-edge-j-series-deploy-virtual-machine-powershell.md#upload-a-vhd) that you created in earlier steps.
+Copy any disk images to be used into page blobs in the local storage account that you created in the earlier steps. You can use a tool such as [Storage Explorer](https://azure.microsoft.com/features/storage-explorer/) or [AzCopy to upload the VHD to the storage account](azure-stack-edge-gpu-deploy-virtual-machine-powershell.md#upload-a-vhd) that you created in earlier steps.
### Use Storage Explorer for upload
@@ -210,35 +210,15 @@ Copy any disk images to be used into page blobs in the local storage account tha
![Copy URI](media/azure-stack-edge-gpu-deploy-virtual-machine-templates/copy-uri-1.png)
-<!--### Use AzCopy for upload
-Before you use AzCopy, make sure that the [AzCopy is configured correctly](#configure-azcopy) for use with the blob storage REST API version that you are using with your Azure Stack Edge Pro device.
+## Create image for your VM
-
-```powershell
-AzCopy /Source:<sourceDirectoryForVHD> /Dest:<blobContainerUri> /DestKey:<storageAccountKey> /Y /S /V /NC:32 /BlobType:page /destType:blob
-```
-
-> ![NOTE]
-> Set `BlobType` to page for creating a managed disk out of VHD. Set `BlobType` to block when writing to tiered storage accounts using AzCopy.
-
-You can download the disk images from the marketplace. For detailed steps, go to [Get the virtual disk image from Azure marketplace](azure-stack-edge-j-series-create-virtual-machine-image.md).
-
-A sample output using AzCopy 7.3 is shown below. For more information on this command, go to [Upload VHD file to storage account using AzCopy](../devtest-labs/devtest-lab-upload-vhd-using-azcopy.md).
-
-```powershell
-AzCopy /Source:\\hcsfs\scratch\vm_vhds\linux\ /Dest:http://sa191113014333.blob.dbe-1dcmhq2.microsoftdatabox.com/vmimages /DestKey:gJKoyX2Amg0Zytd1ogA1kQ2xqudMHn7ljcDtkJRHwMZbMK== /Y /S /V /NC:32 /BlobType:page /destType:blob /z:2e7d7d27-c983-410c-b4aa-b0aa668af0c6
-```-->
-
-## Create image and VNet for your VM
-
-To create image and a virtual network for your VM, you will need to edit the `CreateImageAndVnet.parameters.json` parameters file and then deploy the template `CreateImageAndVnet.json` that uses this parameter file.
+To create image for your VM, edit the `CreateImage.parameters.json` parameters file and then deploy the template `CreateImage.json` that uses this parameter file.
### Edit parameters file
-The file `CreateImageAndVnet.parameters.json` takes the following parameters:
+The file `CreateImage.parameters.json` takes the following parameters:
```json
"parameters": {
@@ -251,22 +231,10 @@ The file `CreateImageAndVnet.parameters.json` takes the following parameters:
"imageUri": { "value": "<Path to the VHD that you uploaded in the Storage account>" },
- "vnetName": {
- "value": "<Name for the virtual network where you will deploy the VM>"
- },
- "subnetName": {
- "value": "<Name for the subnet for the VNet>"
- },
- "addressPrefix": {
- "value": "<Address prefix for the virtual network>"
- },
- "subnetPrefix": {
- "value": "<Subnet prefix for the subnet for the Vnet>"
- }
}
```
-Edit the file `CreateImageAndVnet.parameters.json` to include the following for your Azure Stack Edge Pro device:
+Edit the file `CreateImage.parameters.json` to include the following for your Azure Stack Edge Pro device:
1. Provide the OS type corresponding to the VHD you will upload. The OS type can be Windows or Linux.
@@ -284,20 +252,9 @@ Edit the file `CreateImageAndVnet.parameters.json` to include the following for
"value": "https://myasegpusavm.blob.myasegpu1.wdshcsso.com/windows/WindowsServer2016Datacenter.vhd" }, ```
-    If you're using *http* with Storage Explorer, change this to an *https* URI.
+    If you're using *http* with Storage Explorer, change this to an *http* URI.
-3. Change the `addressPrefix` and `subnetPrefix`. In the local UI of your device, go to the **Network** page. Find the port you enabled for compute. Get the IP address of the base network and add the subnet mask to create the CIDR notation. If you have a standard 255.255.255.0 subnet, do this by replacing the last number of the IP address with 0 and adding /24 to the end. So, 10.126.68.0 with a 255.255.255.0 subnet mask becomes 10.126.68.0/24.
-
- ```json
- "addressPrefix": {
- "value": "10.126.68.0/24"
- },
- "subnetPrefix": {
- "value": "10.126.68.0/24"
- }
- ```
-
-4. Provide the unique image name, VNet name, and subnet name for the parameters.
+3. Provide a unique image name. This image is used to create VM in the later steps.
Here is a sample json that is used in this article.
@@ -307,25 +264,13 @@ Edit the file `CreateImageAndVnet.parameters.json` to include the following for
"contentVersion": "1.0.0.0", "parameters": { "osType": {
- "value": "Windows"
+ "value": "Linux"
}, "imageName": {
- "value": "image1"
+ "value": "myaselinuximg"
}, "imageUri": {
- "value": "https://myasegpusavm.blob.myasegpu1.wdshcsso.com/windows/WindowsServer2016Datacenter.vhd"
- },
- "vnetName": {
- "value": "vnet1"
- },
- "subnetName": {
- "value": "subnet1"
- },
- "addressPrefix": {
- "value": "10.126.68.0/24"
- },
- "subnetPrefix": {
- "value": "10.126.68.0/24"
+ "value": "https://sa2.blob.myasegpuvm.wdshcsso.com/con1/ubuntu18.04waagent.vhd"
        }
    }
}
@@ -335,7 +280,7 @@ Edit the file `CreateImageAndVnet.parameters.json` to include the following for
### Deploy template
-Deploy the template `CreateImageAndVnet.json`. This template deploys the VNet and image resources that will be used to create VMs in the later step.
+Deploy the template `CreateImage.json`. This template deploys the image resources that will be used to create VMs in the later step.
> [!NOTE] > When you deploy the template if you get an authentication error, your Azure credentials for this session may have expired. Rerun the `login-AzureRM` command to connect to Azure Resource Manager on your Azure Stack Edge Pro device again.
@@ -343,8 +288,8 @@ Deploy the template `CreateImageAndVnet.json`. This template deploys the VNet an
1. Run the following command:

    ```powershell
- $templateFile = "Path to CreateImageAndVnet.json"
- $templateParameterFile = "Path to CreateImageAndVnet.parameters.json"
+ $templateFile = "Path to CreateImage.json"
+ $templateParameterFile = "Path to CreateImage.parameters.json"
$RGName = "<Name of your resource group>" New-AzureRmResourceGroupDeployment ` -ResourceGroupName $RGName `
@@ -352,47 +297,42 @@ Deploy the template `CreateImageAndVnet.json`. This template deploys the VNet an
    -TemplateParameterFile $templateParameterFile `
    -Name "<Name for your deployment>"
    ```
+ This command deploys an image resource. To query the resource, run the following command:
-2. Check if the image and the VNet resources are successfully provisioned. Here is a sample output of a successfully created image and VNet.
+ ```powershell
+ Get-AzureRmImage -ResourceGroupName <Resource Group Name> -name <Image Name>
+ ```
+ Here is a sample output of a successfully created image.
```powershell
- PS C:\07-30-2020> login-AzureRMAccount -EnvironmentName aztest1 -TenantId c0257de7-538f-415c-993a-1b87a031879d
+ PS C:\WINDOWS\system32> login-AzureRMAccount -EnvironmentName aztest -TenantId c0257de7-538f-415c-993a-1b87a031879d
    Account               SubscriptionName                     TenantId                             Environment
    -------               ----------------                     --------                             -----------
- EdgeArmUser@localhost Default Provider Subscription c0257de7-538f-415c-993a-1b87a031879d aztest1
+ EdgeArmUser@localhost Default Provider Subscription c0257de7-538f-415c-993a-1b87a031879d aztest
- PS C:\07-30-2020> $templateFile = "C:\07-30-2020\CreateImageAndVnet.json"
- PS C:\07-30-2020> $templateParameterFile = "C:\07-30-2020\CreateImageAndVnet.parameters.json"
- PS C:\07-30-2020> $RGName = "myasegpurgvm"
- PS C:\07-30-2020> New-AzureRmResourceGroupDeployment `
- >> -ResourceGroupName $RGName `
- >> -TemplateFile $templateFile `
- >> -TemplateParameterFile $templateParameterFile `
- >> -Name "Deployment1"
-
- DeploymentName : Deployment1
- ResourceGroupName : myasegpurgvm
+ PS C:\WINDOWS\system32> $templateFile = "C:\12-09-2020\CreateImage\CreateImage.json"
+ PS C:\WINDOWS\system32> $templateParameterFile = "C:\12-09-2020\CreateImage\CreateImage.parameters.json"
+ PS C:\WINDOWS\system32> $RGName = "rg2"
+ PS C:\WINDOWS\system32> New-AzureRmResourceGroupDeployment -ResourceGroupName $RGName -TemplateFile $templateFile -TemplateParameterFile $templateParameterFile -Name "deployment4"
+
+ DeploymentName : deployment4
+ ResourceGroupName : rg2
ProvisioningState : Succeeded
- Timestamp : 7/30/2020 5:53:32 PM
+ Timestamp : 12/10/2020 7:06:57 PM
     Mode                    : Incremental
     TemplateLink            :
     Parameters              :
                               Name             Type                       Value
                               ===============  =========================  ==========
- osType String Windows
- imageName String image1
+ osType String Linux
+ imageName String myaselinuximg
imageUri String
- https://myasegpusavm.blob.myasegpu1.wdshcsso.com/windows/WindowsServer2016Datacenter.vhd
- vnetName String vnet1
- subnetName String subnet1
- addressPrefix String 10.126.68.0/24
- subnetPrefix String 10.126.68.0/24
+ https://sa2.blob.myasegpuvm.wdshcsso.com/con1/ubuntu18.04waagent.vhd
Outputs :
- DeploymentDebugLogLevel :
-
- PS C:\07-30-2020>
+ DeploymentDebugLogLevel :
+ PS C:\WINDOWS\system32>
    ```

## Create VM
@@ -418,10 +358,13 @@ To create a VM, use the `CreateVM.parameters.json` parameter file. It takes the
"value": "<A supported size for your VM>" }, "vnetName": {
- "value": "<Name for the virtual network you created earlier>"
+ "value": "<Name for the virtual network, use ASEVNET>"
}, "subnetName": {
- "value": "<Name for the subnet you created earlier>"
+ "value": "<Name for the subnet, use ASEVNETsubNet>"
+ },
+ "vnetRG": {
+ "value": "<Resource group for Vnet, use ASERG>"
}, "nicName": { "value": "<Name for the network interface>"
@@ -438,7 +381,56 @@ Assign appropriate parameters in `CreateVM.parameters.json` for your Azure Stack
1. Provide a unique name, network interface name, and ipconfig name.
1. Enter a username, password, and a supported VM size.
-1. Give the same name for **VnetName**, **subnetName**, and **ImageName** as given in the parameters for `CreateImageAndVnet.parameters.json`. For example, if you have given VnetName, subnetName and ImageName as **vnet1**, **subnet1**, and **image1**, keep those values same for the parameters in this template as well.
+1. When you enabled the network interface for compute, a virtual switch and a virtual network were automatically created on that network interface. You can query the existing virtual network to get the Vnet name, subnet name, and Vnet resource group name.
+
+ Run the following command:
+
+ ```powershell
+ Get-AzureRmVirtualNetwork
+ ```
+ Here is the sample output:
+
+ ```powershell
+
+ PS C:\WINDOWS\system32> Get-AzureRmVirtualNetwork
+
+ Name : ASEVNET
+ ResourceGroupName : ASERG
+ Location : dbelocal
+ Id : /subscriptions/947b3cfd-7a1b-4a90-7cc5-e52caf221332/resourceGroups/ASERG/providers/Microsoft
+ .Network/virtualNetworks/ASEVNET
+ Etag : W/"990b306d-18b6-41ea-a456-b275efe21105"
+ ResourceGuid : f8309d81-19e9-42fc-b4ed-d573f00e61ed
+ ProvisioningState : Succeeded
+ Tags :
+ AddressSpace : {
+ "AddressPrefixes": [
+ "10.57.48.0/21"
+ ]
+ }
+ DhcpOptions : null
+ Subnets : [
+ {
+ "Name": "ASEVNETsubNet",
+ "Etag": "W/\"990b306d-18b6-41ea-a456-b275efe21105\"",
+ "Id": "/subscriptions/947b3cfd-7a1b-4a90-7cc5-e52caf221332/resourceGroups/ASERG/provider
+ s/Microsoft.Network/virtualNetworks/ASEVNET/subnets/ASEVNETsubNet",
+ "AddressPrefix": "10.57.48.0/21",
+ "IpConfigurations": [],
+ "ResourceNavigationLinks": [],
+ "ServiceEndpoints": [],
+ "ProvisioningState": "Succeeded"
+ }
+ ]
+ VirtualNetworkPeerings : []
+ EnableDDoSProtection : false
+ EnableVmProtection : false
+
+ PS C:\WINDOWS\system32>
+ ```
+
+   Use ASEVNET for the Vnet name, ASEVNETsubNet for the subnet name, and ASERG for the Vnet resource group name.
+
+1. Now you'll need a static IP address to assign to the VM that is in the subnet network defined above. Replace **PrivateIPAddress** with this address in the parameter file. To have the VM get an IP address from your local DHCP server, leave the `privateIPAddress` value blank.

    ```json
@@ -453,40 +445,43 @@ Assign appropriate parameters in `CreateVM.parameters.json` for your Azure Stack
```json
{
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "vmName": {
- "value": "mywindowsvm"
- },
- "adminUsername": {
- "value": "Administrator"
- },
- "Password": {
- "value": "Password1"
- },
- "imageName": {
- "value": "image1"
- },
- "vmSize": {
- "value": "Standard_D1_v2"
- },
- "vnetName": {
- "value": "vnet1"
- },
- "subnetName": {
- "value": "subnet1"
- },
- "nicName": {
- "value": "nic1"
- },
- "privateIPAddress": {
- "value": "10.126.68.186"
- },
- "IPConfigName": {
- "value": "ipconfig1"
- }
+ "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "vmName": {
+ "value": "VM1"
+ },
+ "adminUsername": {
+ "value": "Administrator"
+ },
+ "Password": {
+ "value": "Password1"
+ },
+ "imageName": {
+ "value": "myaselinuximg"
+ },
+ "vmSize": {
+ "value": "Standard_NC4as_T4_v3"
+ },
+ "vnetName": {
+ "value": "ASEVNET"
+ },
+ "subnetName": {
+ "value": "ASEVNETsubNet"
+ },
+ "vnetRG": {
+ "value": "aserg"
+ },
+ "nicName": {
+ "value": "nic5"
+ },
+ "privateIPAddress": {
+ "value": ""
+ },
+ "IPConfigName": {
+ "value": "ipconfig5"
}
+ }
}
```
@@ -513,39 +508,36 @@ Deploy the VM creation template `CreateVM.json`. This template creates a network
    The VM creation will take 15-20 minutes. Here is a sample output of a successfully created VM.

    ```powershell
- PS C:\07-30-2020> $templateFile = "C:\07-30-2020\CreateWindowsVM.json"
- PS C:\07-30-2020> $templateParameterFile = "C:\07-30-2020\CreateWindowsVM.parameters.json"
- PS C:\07-30-2020> $RGName = "myasegpurgvm"
- PS C:\07-30-2020> New-AzureRmResourceGroupDeployment `
- >> -ResourceGroupName $RGName `
- >> -TemplateFile $templateFile `
- >> -TemplateParameterFile $templateParameterFile `
- >> -Name "Deployment2"
-
- DeploymentName : Deployment2
- ResourceGroupName : myasegpurgvm
- ProvisioningState : Succeeded
- Timestamp : 7/30/2020 6:21:09 PM
- Mode : Incremental
- TemplateLink :
- Parameters :
- Name Type Value
- =============== ========================= ==========
- vmName String MyWindowsVM
- adminUsername String Administrator
- password String Password1
- imageName String image1
- vmSize String Standard_D1_v2
- vnetName String vnet1
- subnetName String subnet1
- nicName String Nic1
- ipConfigName String ipconfig1
- privateIPAddress String 10.126.68.186
-
- Outputs :
- DeploymentDebugLogLevel :
-
- PS C:\07-30-2020>
+ PS C:\WINDOWS\system32> $templateFile = "C:\12-09-2020\CreateVM\CreateVM.json"
+ PS C:\WINDOWS\system32> $templateParameterFile = "C:\12-09-2020\CreateVM\CreateVM.parameters.json"
+ PS C:\WINDOWS\system32> $RGName = "rg2"
+ PS C:\WINDOWS\system32> New-AzureRmResourceGroupDeployment -ResourceGroupName $RGName -TemplateFile $templateFile -TemplateParameterFile $templateParameterFile -Name "Deployment6"
+
+ DeploymentName : Deployment6
+ ResourceGroupName : rg2
+ ProvisioningState : Succeeded
+ Timestamp : 12/10/2020 7:51:28 PM
+ Mode : Incremental
+ TemplateLink :
+ Parameters :
+ Name Type Value
+ =============== ========================= ==========
+ vmName String VM1
+ adminUsername String Administrator
+ password String Password1
+ imageName String myaselinuximg
+ vmSize String Standard_NC4as_T4_v3
+ vnetName String ASEVNET
+ vnetRG String aserg
+ subnetName String ASEVNETsubNet
+ nicName String nic5
+ ipConfigName String ipconfig5
+ privateIPAddress String
+
+ Outputs :
+ DeploymentDebugLogLevel :
+
+ PS C:\WINDOWS\system32
    ```

You can also run the `New-AzureRmResourceGroupDeployment` command asynchronously with the `-AsJob` parameter. Here is a sample output when the cmdlet runs in the background. You can then query the status of the job that is created, using the `Get-Job` cmdlet.
@@ -589,39 +581,6 @@ Follow these steps to connect to a Linux VM.
[!INCLUDE [azure-stack-edge-gateway-connect-vm](../../includes/azure-stack-edge-gateway-connect-virtual-machine-linux.md)]
-<!--## Manage VM
-
-The following section describes some of the common operations around the VM that you will create on your Azure Stack Edge Pro device.
-
-[!INCLUDE [azure-stack-edge-gateway-manage-vm](../../includes/azure-stack-edge-gateway-manage-vm.md)]-->
-
-## Supported VM sizes
-
-[!INCLUDE [azure-stack-edge-gateway-supported-vm-sizes](../../includes/azure-stack-edge-gateway-supported-vm-sizes.md)]
-
-## Unsupported VM operations and cmdlets
-
-Extensions, scale sets, availability sets, snapshots are not supported.
-
-<!--## Configure AzCopy
-
-When you install the latest version of AzCopy, you will need to configure AzCopy to ensure that it matches the blob storage REST API version of your Azure Stack Edge Pro device.
-
-On the client used to access your Azure Stack Edge Pro device, set up a global variable to match the blob storage REST API version.
-
-### On Windows client
-
-`$Env:AZCOPY_DEFAULT_SERVICE_API_VERSION = "2017-11-09"`
-
-### On Linux client
-
-`export AZCOPY_DEFAULT_SERVICE_API_VERSION=2017-11-09`
-
-To verify if the environment variable for AzCopy was set correctly, take the following steps:
-
-1. Run "azcopy env".
-2. Find `AZCOPY_DEFAULT_SERVICE_API_VERSION` parameter. This should have the value you set in the preceding steps.-->
## Next steps
databox-online https://docs.microsoft.com/en-us/azure/databox-online/azure-stack-edge-gpu-virtual-machine-sizes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-virtual-machine-sizes.md new file mode 100644
@@ -0,0 +1,30 @@
+---
+title: Supported virtual machine sizes on your Azure Stack Edge
+description: Describes the supported sizes for virtual machines (VMs) on an Azure Stack Edge Pro device.
+services: databox
+author: alkohli
+
+ms.service: databox
+ms.subservice: edge
+ms.topic: conceptual
+ms.date: 12/21/2020
+ms.author: alkohli
+#Customer intent: As an IT admin, I need to understand how to create and manage virtual machines (VMs) on my Azure Stack Edge Pro device using APIs so that I can efficiently manage my VMs.
+---
+
+# VM sizes and types for your Azure Stack Edge Pro
+
+This article describes the supported sizes for the virtual machines running on your Azure Stack Edge Pro devices. Use this article before you deploy virtual machines on your Azure Stack Edge Pro devices.
+
+## Supported VM sizes
+
+[!INCLUDE [azure-stack-edge-gateway-supported-vm-sizes](../../includes/azure-stack-edge-gateway-supported-vm-sizes.md)]
+
+## Unsupported VM operations and cmdlets
+
+Scale sets, availability sets, and snapshots are not supported.
+
+## Next steps
+
+[Deploy VM on your Azure Stack Edge Pro via the Azure portal](azure-stack-edge-gpu-deploy-virtual-machine-portal.md)
\ No newline at end of file
databox-online https://docs.microsoft.com/en-us/azure/databox-online/azure-stack-edge-j-series-connect-resource-manager https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-j-series-connect-resource-manager.md
@@ -457,4 +457,4 @@ ExtendedProperties : {}
## Next steps
-[Deploy VMs on your Azure Stack Edge Pro device](azure-stack-edge-j-series-deploy-virtual-machine-powershell.md).
\ No newline at end of file
+[Deploy VMs on your Azure Stack Edge Pro device](azure-stack-edge-gpu-deploy-virtual-machine-powershell.md).
\ No newline at end of file
databox-online https://docs.microsoft.com/en-us/azure/databox-online/azure-stack-edge-j-series-deploy-virtual-machine-cli-python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-j-series-deploy-virtual-machine-cli-python.md
@@ -24,7 +24,7 @@ This tutorial describes how to create and manage a VM on your Azure Stack Edge P
The deployment workflow is illustrated in the following diagram.
-![VM deployment workflow](media/azure-stack-edge-j-series-deploy-virtual-machine-powershell/vm-workflow_r.svg)
+![VM deployment workflow](media/azure-stack-edge-gpu-deploy-virtual-machine-powershell/vm-workflow-r.svg)
The high level summary of the deployment workflow are as follows:
@@ -40,7 +40,7 @@ The high level summary of the deployment workflow are as follows:
10. Create a VNet 11. Create a VNIC using the VNet subnet ID
-For a detailed explanation of the workflow diagram, see [Deploy VMs on your Azure Stack Edge Pro device using Azure PowerShell](azure-stack-edge-j-series-deploy-virtual-machine-powershell.md). For information on how to connect to Azure Resource Manager, see [Connect to Azure Resource Manager using Azure PowerShell](azure-stack-edge-j-series-connect-resource-manager.md).
+For a detailed explanation of the workflow diagram, see [Deploy VMs on your Azure Stack Edge Pro device using Azure PowerShell](azure-stack-edge-gpu-deploy-virtual-machine-powershell.md). For information on how to connect to Azure Resource Manager, see [Connect to Azure Resource Manager using Azure PowerShell](azure-stack-edge-j-series-connect-resource-manager.md).
## Prerequisites
databox-online https://docs.microsoft.com/en-us/azure/databox-online/azure-stack-edge-j-series-deploy-virtual-machine-powershell-script https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-j-series-deploy-virtual-machine-powershell-script.md deleted file mode 100644
@@ -1,118 +0,0 @@
-title: Deploy VMs on your Azure Stack Edge Pro GPU device via Azure PowerShell script
-description: Describes how to create and manage virtual machines (VMs) on a Azure Stack Edge Pro device using an Azure PowerShell script.
-services: databox
-author: alkohli
-
-ms.service: databox
-ms.subservice: edge
-ms.topic: how-to
-ms.date: 08/28/2020
-ms.author: alkohli
-#Customer intent: As an IT admin, I need to understand how to create and manage virtual machines (VMs) on my Azure Stack Edge Pro device using an Azure PowerShell script so that I can efficiently manage my VMs.
-
-# Deploy VMs on your Azure Stack Edge Pro GPU device via Azure PowerShell script
-
-<!--[!INCLUDE [applies-to-skus](../../includes/azure-stack-edge-applies-to-all-sku.md)]-->
-
-This tutorial describes how to create and manage a VM on your Azure Stack Edge Pro device using an Azure PowerShell script.
-
-## Prerequisites
-
-Before you begin creating and managing a VM on your Azure Stack Edge Pro device using this script, you need to make sure you have completed the prerequisites listed in the following steps:
-
-### For Azure Stack Edge Pro device via the local web UI
-
-1. You completed the network settings on your Azure Stack Edge Pro device as described in [Step 1: Configure Azure Stack Edge Pro device](azure-stack-edge-j-series-connect-resource-manager.md#step-1-configure-azure-stack-edge-pro-device).
-
-2. Enabled a network interface for compute. This network interface IP is used to create a virtual switch for the VM deployment. The following steps walk you through the process:
-
- 1. Go to the **Compute settings**. Select the network interface that you will use to create a virtual switch.
-
- > [!IMPORTANT]
- > You can only configure one port for compute.
-
- 2. Enable compute on the network interface. Azure Stack Edge Pro creates and manages a virtual switch corresponding to that network interface.
-
-3. You created and installed all the certificates on your Azure Stack Edge Pro device and in the trusted root store of your client. Follow the procedure described in [Step 2: Create and install certificates](azure-stack-edge-j-series-connect-resource-manager.md#step-2-create-and-install-certificates).
-
-### For your Windows client
-
-1. You defined the Azure consistent services virtual internet protocol (VIP) in your **Network** page in the local web UI of device. You need to add this VIP to:
-
- - The host file on the client, OR,
- - The DNS server configuration
-
- > [!IMPORTANT]
- > We recommend that you modify the DNS server configuration for endpoint name resolution.
-
- 1. Start **Notepad** as an administrator (Administrator privileges is required to save the file), and then open the **hosts** file located at `C:\Windows\System32\Drivers\etc`.
-
- ![Windows Explorer hosts file](media/azure-stack-edge-j-series-connect-resource-manager/hosts-file.png)
-
- 2. Add the following entries to your **hosts** file replacing with appropriate values for your device:
-
- ```
- <Azure consistent services VIP> login.<appliance name>.<DNS domain>
- <Azure consistent services VIP> management.<appliance name>.<DNS domain>
- <Azure consistent services VIP> <storage name>.blob.<appliance name>.<DNS domain>
- ```
- For the storage account, you can provide a name that you want the script to use later to create a new storage account. The script does not check if that storage account is existing.
-
- 3. Use the following image for reference. Save the **hosts** file.
-
- ![hosts file in Notepad](media/azure-stack-edge-j-series-deploy-virtual-machine-cli-python/hosts-screenshot-boxed.png)
-
-2. [Download the PowerShell script](https://aka.ms/ase-vm-powershell) used in this procedure.
-
-3. Make sure that your Windows client is running PowerShell 5.0 or later.
-
-4. Make sure that the `Azure.Storage Module version 4.5.0` is installed on your system. You can get this module from the [PowerShell Gallery](https://www.powershellgallery.com/packages/Azure.Storage/4.5.0). To install this module, type:
-
- `Install-Module -Name Azure.Storage -RequiredVersion 4.5.0`
-
- To verify the version of the installed module, type:
-
- `Get-InstalledModule -name Azure.Storage`
-
- To uninstall any other version modules, type:
-
- `Uninstall-Module -Name Azure.Storage`
-
-5. [Download AzCopy 10](../storage/common/storage-use-azcopy-v10.md#download-azcopy) to your Windows client. Make a note of this location as you will pass it as a parameter while running the script.
-
-6. Make sure that your Windows client is running TLS 1.2 or later.
-
-## Create a VM
-
-1. Run PowerShell as an administrator.
-2. Go to the folder where you downloaded the script on your client.
-3. Use the following command to run the script:
-
- `.\ArmPowershellClient.ps1 -VNetAddressSpace <AddressSpace> -NicPrivateIp <Private IP> -VHDPath <Path> -VHDFile <VHD File, with extension> -StorageAccountName <Name> -OS <Windows/Linux> -VMSize <Supported VM Size> -VMUserName <UserName to be used to login into VM> -VMPassword <Password for the VM login> --AzCopy10Path <Absolute Path>`
-
- Here are the examples when the script is run to create a Windows VM and a Linux VM.
-
- **For a Windows VM:**
-
- `.\ArmPowershellClient.ps1 -VNetAddressSpace 5.5.0.0/16 -NicPrivateIp 5.5.168.73 -VHDPath \\intel01\d$\vm_vhds\AzureWindowsVMmode -VHDFile WindowsServer2016Datacenter.vhd -StorageAccountName teaaccount1 -OS Windows -VMSize Standard_D1_v2 -VMUserName Administrator -VMPassword Password1 -AzCopy10Path C:\azcopy10\azcopy.exe`
-
- **For a Linux VM:**
-
- `.\ArmPowershellClient.ps1 -VNetAddressSpace 5.5.0.0/16 -NicPrivateIp 5.5.168.83 -VHDPath \\intel01\d$\vm_vhds\AzurestackLinux -VHDFile ubuntu13.vhd -StorageAccountName sa2 -OS Linux -VMSize Standard_D1_v2 -VMUserName Administrator -VMPassword Password1 -AzCopy10Path C:\azcopy10\azcopy.exe`
-
-4. To clean up the resources that the script created, use the following commands:
-
- ```powershell
- Get-AzureRmVM | Remove-AzureRmVM -Force
- Get-AzureRmNetworkInterface | Remove-AzureRmNetworkInterface -Force
- Get-AzureRmResource | Remove-AzureRmResource -f
- Get-AzureRmResourceGroup | Remove-AzureRmResourceGroup -f
- ```
-
-## Next steps
-
-[Deploy VMs using Azure PowerShell cmdlets](azure-stack-edge-j-series-deploy-virtual-machine-powershell.md)
\ No newline at end of file
hpc-cache https://docs.microsoft.com/en-us/azure/hpc-cache/directory-services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hpc-cache/directory-services.md
@@ -41,7 +41,7 @@ Under **Active directory details**, supply these values:
* **Cache server name (computer account)** - Set the name that will be assigned to this HPC cache when it joins the AD domain. Specify a name that is easy to recognize as this cache. The name can be up to 15 characters long and can include capital or lowercase letters, numbers, hyphens (-), and underscores (_).
-* In the **Credentials** section, provide an AD administrator username and password that the Azure HPC Cache can use to access the AD server. This information is encrypted when stored, and can't be queried.
+In the **Credentials** section, provide an AD administrator username and password that the Azure HPC Cache can use to access the AD server. This information is encrypted when stored, and can't be queried.
Save the settings by clicking the button at the top of the page.
machine-learning https://docs.microsoft.com/en-us/azure/machine-learning/how-to-monitor-datasets https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-monitor-datasets.md
@@ -10,19 +10,13 @@ ms.author: copeters
author: lostmygithubaccount
ms.date: 06/25/2020
ms.topic: conceptual
-ms.custom: how-to, data4ml
+ms.custom: how-to, data4ml, contperf-fy21q2
-## Customer intent: As a data scientist, I want to monitor data drift in my datasets and set alerts.
+## Customer intent: As a data scientist, I want to detect data drift in my datasets and set alerts for when drift is large.
---

# Detect data drift (preview) on datasets

-
-> [!IMPORTANT]
-> Data drift detection for datasets is currently in public preview.
-> The preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
- Learn how to monitor data drift and set alerts when drift is high. With Azure Machine Learning dataset monitors (preview), you can:
@@ -31,11 +25,17 @@ With Azure Machine Learning dataset monitors (preview), you can:
* **Monitor new data** for differences between any baseline and target dataset.
* **Profile features in data** to track how statistical properties change over time.
* **Set up alerts on data drift** for early warnings to potential issues.
+* **[Create a new dataset version](how-to-version-track-datasets.md)** when you determine the data has drifted too much.
An [Azure Machine learning dataset](how-to-create-register-datasets.md) is used to create the monitor. The dataset must include a timestamp column. You can view data drift metrics with the Python SDK or in Azure Machine Learning studio. Other metrics and insights are available through the [Azure Application Insights](../azure-monitor/app/app-insights-overview.md) resource associated with the Azure Machine Learning workspace.
+> [!IMPORTANT]
+> Data drift detection for datasets is currently in public preview.
+> The preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
## Prerequisites

To create and work with dataset monitors, you need:
@@ -90,15 +90,20 @@ Dataset monitors depend on the following Azure services.
| *Application insights*| Drift emits metrics to Application Insights belonging to the machine learning workspace.
| *Azure blob storage*| Drift emits metrics in json format to Azure blob storage.
-## How dataset monitors data
+### Baseline and target datasets
+
+You monitor [Azure machine learning datasets](how-to-create-register-datasets.md) for data drift. When you create a dataset monitor, you will reference your:
+* Baseline dataset - usually the training dataset for a model.
+* Target dataset - usually model input data - is compared over time to your baseline dataset. This comparison means that your target dataset must have a timestamp column specified.
-Use Machine Learning datasets to monitor for data drift. Specify a baseline dataset - usually the training dataset for a model. A target dataset - usually model input data - is compared over time to your baseline dataset. This comparison means that your target dataset must have a timestamp column specified.
+The monitor will compare the baseline and target datasets.
## Create target dataset The target dataset needs the `timeseries` trait set on it by specifying the timestamp column, either from a column in the data or a virtual column derived from the path pattern of the files. Create the dataset with a timestamp through the [Python SDK](#sdk-dataset) or [Azure Machine Learning studio](#studio-dataset). A column representing a "timestamp" must be specified to add the `timeseries` trait to the dataset. If your data is partitioned into a folder structure with time info, such as '{yyyy/MM/dd}', create a virtual column through the path pattern setting and set it as the "partition timestamp" to enable time-series functionality.
-### <a name="sdk-dataset"></a>Python SDK
+# [Python](#tab/python)
+<a name="sdk-dataset"></a>
The [`Dataset`](/python/api/azureml-core/azureml.data.tabulardataset?preserve-view=true&view=azure-ml-py#&preserve-view=truewith-timestamp-columns-timestamp-none--partition-timestamp-none--validate-false----kwargs-) class [`with_timestamp_columns()`](/python/api/azureml-core/azureml.data.tabulardataset?preserve-view=true&view=azure-ml-py#&preserve-view=truewith-timestamp-columns-timestamp-none--partition-timestamp-none--validate-false----kwargs-) method defines the time stamp column for the dataset.
@@ -127,9 +132,12 @@ dset = dset.with_timestamp_columns('date')
dset = dset.register(ws, 'target') ```
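For reference, a minimal end-to-end sketch of this pattern, assuming a placeholder datastore path (`weather-data/*.csv`) and timestamp column (`date`):

```python
# Minimal sketch: build and register a tabular dataset with the
# `timeseries` trait. Paths and column names are placeholders.
from azureml.core import Workspace, Datastore, Dataset

ws = Workspace.from_config()                      # reads config.json
dstore = Datastore.get(ws, 'workspaceblobstore')  # default blob datastore

# Build a tabular dataset from delimited files in the datastore
dset = Dataset.Tabular.from_delimited_files(path=(dstore, 'weather-data/*.csv'))

# Mark the 'date' column as the timestamp to add the `timeseries` trait
dset = dset.with_timestamp_columns('date')

# Register the dataset so it can serve as a monitor's target
dset = dset.register(ws, 'target')
```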
-For a full example of using the `timeseries` trait of datasets, see the [example notebook](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/work-with-data/datasets-tutorial/timeseries-datasets/tabular-timeseries-dataset-filtering.ipynb) or the [datasets SDK documentation](/python/api/azureml-core/azureml.data.tabulardataset?preserve-view=true&view=azure-ml-py#&preserve-view=truewith-timestamp-columns-timestamp-none--partition-timestamp-none--validate-false----kwargs-).
+> [!TIP]
+> For a full example of using the `timeseries` trait of datasets, see the [example notebook](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/work-with-data/datasets-tutorial/timeseries-datasets/tabular-timeseries-dataset-filtering.ipynb) or the [datasets SDK documentation](/python/api/azureml-core/azureml.data.tabulardataset?preserve-view=true&view=azure-ml-py#&preserve-view=truewith-timestamp-columns-timestamp-none--partition-timestamp-none--validate-false----kwargs-).
-### <a name="studio-dataset"></a>Azure Machine Learning studio
+# [Studio](#tab/azure-studio)
+
+<a name="studio-dataset"></a>
If you create your dataset using Azure Machine Learning studio, ensure the path to your data contains timestamp information, include all subfolders with data, and set the partition format.
@@ -145,13 +153,14 @@ If your data is partitioned by date, as is the case here, you can also specify t
:::image type="content" source="media/how-to-monitor-datasets/timeseries-partitiontimestamp.png" alt-text="Partition timestamp":::
+---
-## Create dataset monitors
-
-Create dataset monitors to detect and alert to data drift on a new dataset. Use either the [Python SDK](#sdk-monitor) or [Azure Machine Learning studio](#studio-monitor).
+## Create dataset monitor
-### <a name="sdk-monitor"></a>Python SDK
+Create a dataset monitor to detect and alert to data drift on a new dataset. Use either the [Python SDK](#sdk-monitor) or [Azure Machine Learning studio](#studio-monitor).
+# [Python](#tab/python)
+<a name="sdk-monitor"></a>
See the [Python SDK reference documentation on data drift](/python/api/azureml-datadrift/azureml.datadrift) for full details. The following example shows how to create a dataset monitor using the Python SDK.
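A minimal sketch of that pattern, assuming placeholder dataset names, a compute cluster named `cpu-cluster`, and an illustrative feature list:

```python
# Minimal sketch: create and schedule a weekly dataset monitor.
from azureml.core import Workspace, Dataset
from azureml.datadrift import DataDriftDetector

ws = Workspace.from_config()
baseline = Dataset.get_by_name(ws, 'baseline')    # e.g. training data
target = Dataset.get_by_name(ws, 'target')        # timestamped input data

monitor = DataDriftDetector.create_from_datasets(
    ws, 'weather-monitor', baseline, target,
    compute_target='cpu-cluster',             # existing AML compute cluster
    frequency='Week',                         # how often the monitor runs
    feature_list=['temperature', 'humidity'],
    drift_threshold=0.3,                      # alert above this magnitude
    latency=24)                               # hours to wait for late data

monitor = monitor.enable_schedule()           # start the scheduled runs
```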
@@ -200,9 +209,12 @@ monitor = monitor.disable_schedule()
monitor = monitor.enable_schedule() ```
-For a full example of setting up a `timeseries` dataset and data drift detector, see our [example notebook](https://aka.ms/datadrift-notebook).
+> [!TIP]
+> For a full example of setting up a `timeseries` dataset and data drift detector, see our [example notebook](https://aka.ms/datadrift-notebook).
+
-### <a name="studio-monitor"></a> Azure Machine Learning studio
+# [Studio](#tab/azure-studio)
+<a name="studio-monitor"></a>
1. Navigate to the [studio's homepage](https://ml.azure.com). 1. Select the **Datasets** tab on the left.
@@ -232,6 +244,8 @@ For a full example of setting up a `timeseries` dataset and data drift detector,
After finishing the wizard, the resulting dataset monitor will appear in the list. Select it to go to that monitor's details page.
+---
+ ## Understand data drift results This section shows you the results of monitoring a dataset, found in the **Datasets** / **Dataset monitors** page in Azure Machine Learning studio. You can update the settings as well as analyze existing data for a specific time period on this page.
@@ -334,7 +348,7 @@ Limitations and known issues for data drift monitors:
| Categorical | string, bool, int, float | The number of unique values in the feature is less than 100 and less than 5% of the number of rows. | Null is treated as its own category. | | Numerical | int, float | The values in the feature are of a numerical data type and do not meet the condition for a categorical feature. | Feature dropped if >15% of values are null. |
-* When you have [created a data drift monitor](how-to-monitor-datasets.md) but cannot see data on the **Dataset monitors** page in Azure Machine Learning studio, try the following.
+* When you have created a data drift monitor but cannot see data on the **Dataset monitors** page in Azure Machine Learning studio, try the following.
1. Check if you have selected the right date range at the top of the page. 1. On the **Dataset Monitors** tab, select the experiment link to check run status. This link is on the far right of the table.
media-services https://docs.microsoft.com/en-us/azure/media-services/live-video-analytics-edge/faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/live-video-analytics-edge/faq.md
@@ -1,67 +1,66 @@
---
-title: Live Video Analytics on IoT Edge FAQs - Azure
-description: This topic gives answers to Live Video Analytics on IoT Edge FAQs.
+title: Live Video Analytics on IoT Edge FAQ - Azure
+description: This article answers commonly asked questions about Live Video Analytics on IoT Edge.
ms.topic: conceptual ms.date: 12/01/2020 ---
-# Frequently asked questions (FAQs)
+# Live Video Analytics on IoT Edge FAQ
-This topic gives answers to Live Video Analytics on IoT Edge FAQs.
+This article answers commonly asked questions about Live Video Analytics on Azure IoT Edge.
## General
-### What are the system variables that can be used in graph topology definition?
+**What system variables can I use in the graph topology definition?**
-|Variable |Description|
-|---|---|
-|[System.DateTime](/dotnet/framework/data/adonet/sql/linq/system-datetime-methods)|Represents an instant in UTC time, typically expressed as a date and time of day (basic representation yyyyMMddTHHmmssZ).|
-|System.PreciseDateTime|Represents an UTC date time instance in ISO8601 file compliant format with milliseconds (basic representation yyyyMMddTHHmmss.fffZ).|
-|System.GraphTopologyName |Represents a media graph topology, holds the blueprint of a graph.|
-|System.GraphInstanceName| Represents a media graph instance, holds parameter values and references the topology.|
+| Variable | Description |
+| --- | --- |
+| [System.DateTime](/dotnet/framework/data/adonet/sql/linq/system-datetime-methods) | Represents an instant in UTC time, typically expressed as a date and time of day in the following format:<br>*yyyyMMddTHHmmssZ* |
+| System.PreciseDateTime | Represents a Coordinated Universal Time (UTC) date-time instance in an ISO8601 file-compliant format with milliseconds, in the following format:<br>*yyyyMMddTHHmmss.fffZ* |
+| System.GraphTopologyName | Represents a media graph topology, and holds the blueprint of a graph. |
+| System.GraphInstanceName | Represents a media graph instance, holds parameter values, and references the topology. |
## Configuration and deployment
-### Can I deploy the media edge module to a Windows 10 device?
+**Can I deploy the media edge module to a Windows 10 device?**
-Yes. See the article on [Linux Containers on Windows 10](/virtualization/windowscontainers/deploy-containers/linux-containers).
+Yes. For more information, see [Linux containers on Windows 10](/virtualization/windowscontainers/deploy-containers/linux-containers).
## Capture from IP camera and RTSP settings
-### Do I need to use a special SDK on my device to send in a video stream?
+**Do I need to use a special SDK on my device to send in a video stream?**
-No. Live Video Analytics on IoT Edge supports capturing media using RTSP video streaming protocol (which is supported on most IP cameras).
+No, Live Video Analytics on IoT Edge supports capturing media by using RTSP (Real-Time Streaming Protocol) for video streaming, which is supported on most IP cameras.
-### Can I push media to Live Video Analytics on IoT Edge using RTMP or Smooth (like a Media Services Live Event)?
+**Can I push media to Live Video Analytics on IoT Edge by using Real-Time Messaging Protocol (RTMP) or Smooth Streaming Protocol (such as a Media Services Live Event)?**
-* No. Live Video Analytics only support RTSP for capturing video from IP cameras.
-* Any camera that supports RTSP streaming over TCP/HTTP should work.
+No, Live Video Analytics supports only RTSP for capturing video from IP cameras. Any camera that supports RTSP streaming over TCP/HTTP should work.
-### Can I reset or update the RTSP source URL on a graph instance?
+**Can I reset or update the RTSP source URL in a graph instance?**
-Yes, when the graph instance is in inactive state.
+Yes, when the graph instance is in *inactive* state.
-### Is there a RTSP simulator available to use during testing and development?
+**Is an RTSP simulator available to use during testing and development?**
-Yes. There is an [RTSP simulator](https://github.com/Azure/live-video-analytics/tree/master/utilities/rtspsim-live555) edge module available for use in the quick starts and tutorials to support the learning process. This module is provided as best-effort and may not always be available. It is strongly encouraged not to use this for more than a few hours. You should invest in testing with your actual RTSP source before making plans for a production deployment.
+Yes, an [RTSP simulator](https://github.com/Azure/live-video-analytics/tree/master/utilities/rtspsim-live555) edge module is available for use in the quickstarts and tutorials to support the learning process. This module is provided as best-effort and might not always be available. We recommend strongly that you *not* use the simulator for more than a few hours. You should invest in testing with your actual RTSP source before you plan a production deployment.
-### Do you support ONVIF discovery of IP cameras at the edge?
+**Do you support ONVIF discovery of IP cameras at the edge?**
-No, there is no support for ONVIF discovery of devices on the edge.
+No, we don't support Open Network Video Interface Forum (ONVIF) discovery of devices on the edge.
## Streaming and playback
-### Can assets recorded to AMS from the edge be played back using Media Services streaming technologies like HLS or DASH?
+**Can I play back assets recorded to Azure Media Services from the edge by using streaming technologies such as HLS or DASH?**
-Yes. The recorded assets can be streamed like any other asset in Azure Media Services. To stream the content, you must have a Streaming Endpoint created and in the running state. Using the standard Streaming Locator creation process will give you access to an HLS or DASH manifest for streaming to any capable player framework. For details on creating publishing HLS or DASH manifests, see [dynamic packaging](../latest/dynamic-packaging-overview.md).
+Yes. You can stream recorded assets like any other asset in Azure Media Services. To stream the content, you must have a streaming endpoint created and in the running state. Using the standard Streaming Locator creation process will give you access to an Apple HTTP Live Streaming (HLS) or Dynamic Adaptive Streaming over HTTP (DASH, also known as MPEG-DASH) manifest for streaming to any capable player framework. For more information about creating and publishing HLS or DASH manifests, see [dynamic packaging](../latest/dynamic-packaging-overview.md).
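As an illustration, a minimal sketch with the `azure-mgmt-media` Python SDK, assuming placeholder service principal values and resource names:

```python
# Minimal sketch: start a streaming endpoint and publish a recorded asset
# for HLS/DASH playback. All names and credentials are placeholders.
from azure.common.credentials import ServicePrincipalCredentials
from azure.mgmt.media import AzureMediaServices
from azure.mgmt.media.models import StreamingLocator

creds = ServicePrincipalCredentials(
    client_id='<app-id>', secret='<secret>', tenant='<tenant-id>')
client = AzureMediaServices(creds, '<subscription-id>')

# The streaming endpoint must be running to serve manifests
client.streaming_endpoints.start('myResourceGroup', 'myMediaAccount', 'default')

# Publish the recorded asset with a predefined clear streaming policy
client.streaming_locators.create(
    'myResourceGroup', 'myMediaAccount', 'myLocator',
    StreamingLocator(asset_name='myRecordedAsset',
                     streaming_policy_name='Predefined_ClearStreamingOnly'))

# Print the HLS and DASH manifest paths for any capable player
paths = client.streaming_locators.list_paths(
    'myResourceGroup', 'myMediaAccount', 'myLocator')
for sp in paths.streaming_paths:
    print(sp.streaming_protocol, sp.paths)
```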
-### Can I use the standard content protection and DRM features of Media Services on an archived asset?
+**Can I use the standard content protection and DRM features of Media Services on an archived asset?**
-Yes. All of the standard dynamic encryption content protection and DRM features are available for use on the assets recorded from a media graph.
+Yes. All the standard dynamic encryption content protection and digital rights management (DRM) features are available for use on assets that are recorded from a media graph.
-### What players can I use to view content from the recorded assets?
+**What players can I use to view content from the recorded assets?**
-All standard players that support compliant Apple HTTP Live Streaming (HLS) version 3 or version 4 are supported. In addition, any player that is capable of compliant MPEG-DASH playback is also supported.
+All standard players that support compliant HLS version 3 or version 4 are supported. In addition, any player that's capable of compliant MPEG-DASH playback is also supported.
Recommended players for testing include:
@@ -72,26 +71,26 @@ Recommended players for testing include:
* [Shaka Player](https://github.com/google/shaka-player) * [ExoPlayer](https://github.com/google/ExoPlayer) * [Apple native HTTP Live Streaming](https://developer.apple.com/streaming/)
-* Edge, Chrome, or Safari built in HTML5 video player
+* Edge, Chrome, or Safari built-in HTML5 video player
* Commercial players that support HLS or DASH playback
-### What are the limits on streaming a media graph asset?
+**What are the limits on streaming a media graph asset?**
-Streaming a live or recorded asset from a media graph uses the same high scale infrastructure and streaming endpoint that Media Services supports for on-demand and live streaming for Media & Entertainment, OTT, and broadcast customers. This means that you can quickly and easily enable the Azure CDN, Verizon or Akamai to deliver your content to an audience as small as a few viewers, or up to millions depending on your scenario.
+Streaming a live or recorded asset from a media graph uses the same high-scale infrastructure and streaming endpoint that Media Services supports for on-demand and live streaming for Media & Entertainment, Over the Top (OTT), and broadcast customers. This means that you can quickly and easily enable Azure Content Delivery Network, Verizon, or Akamai to deliver your content to an audience as small as a few viewers or up to millions, depending on your scenario.
-Content can be delivered using both Apple HTTP Live Streaming (HLS) or MPEG-DASH.
+You can deliver content by using either Apple HLS or MPEG-DASH.
## Design your AI model
-### I have multiple AI models wrapped in a docker container. How should I use them with Live Video Analytics?
+**I have multiple AI models wrapped in a Docker container. How should I use them with Live Video Analytics?**
-Solutions are different depending on the communication protocol used by the inferencing server to communicate with Live Video Analytics. Below are some ways of doing this.
+Solutions vary depending on the communication protocol that's used by the inferencing server to communicate with Live Video Analytics. The following sections describe how each protocol works.
-#### HTTP protocol:
+*Use the HTTP protocol*:
* Single container (single lvaExtension):
- In your inferencing server, you can use a single port but different endpoints for different AI models. For example, for a Python sample you can use different `route`s per model as:
+ In your inferencing server, you can use a single port but different endpoints for different AI models. For example, for a Python sample you can use different `route`s per model, as shown here:
``` @app.route('/score/face_detection', methods=['POST'])
@@ -104,149 +103,144 @@ Solutions are different depending on the communication protocol used by the infe
… ```
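Filled out, such a single-port, multi-endpoint inference server might look like the following sketch; the model invocations are stand-ins:

```python
# Hypothetical sketch: one Flask server, one port, one route per AI model.
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route('/score/face_detection', methods=['POST'])
def score_face_detection():
    frame = request.get_data()   # video frame sent by Live Video Analytics
    # ... run the face-detection model on `frame` (stand-in) ...
    return jsonify({'inferences': []})

@app.route('/score/vehicle_detection', methods=['POST'])
def score_vehicle_detection():
    frame = request.get_data()
    # ... run the vehicle-detection model on `frame` (stand-in) ...
    return jsonify({'inferences': []})

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=44000)   # single port, multiple endpoints
```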
- And then in your Live Video Analytics deployment, when you instantiate graphs, set the inference server URL for each instance as:
+ And then, in your Live Video Analytics deployment, when you instantiate graphs, set the inference server URL for each instance, as shown here:
1st instance: inference server URL=`http://lvaExtension:44000/score/face_detection`<br/> 2nd instance: inference server URL=`http://lvaExtension:44000/score/vehicle_detection`
+
> [!NOTE]
- > Alternatively, you can also also expose your AI models on different ports and call them when you instantiate graphs.
+ > Alternatively, you can expose your AI models on different ports and call them when you instantiate graphs.
* Multiple containers:
- Each container is deployed with a different name. Currently, in the Live Video Analytics documentation set, we showed you how to deploy an extension with the name: **lvaExtension**. Now you can develop two different containers. Each container has the same HTTP interface (meaning same `/score` endpoint). Deploy these two containers with different names and be sure that both are listening on **different ports**.
+ Each container is deployed with a different name. Previously, in the Live Video Analytics documentation set, we showed you how to deploy an extension named *lvaExtension*. Now you can develop two different containers, each with the same HTTP interface, which means they have the same `/score` endpoint. Deploy these two containers with different names, and ensure that both are listening on *different ports*.
- For example, one container with the name `lvaExtension1` is listening for the port `44000`, other container with the name `lvaExtension2` is listening for the port `44001`.
+ For example, one container named `lvaExtension1` is listening for the port `44000`, and a second container named `lvaExtension2` is listening for the port `44001`.
- In your Live Video Analytics topology, you instantiate two graphs with different inference URLs like:
+ In your Live Video Analytics topology, you instantiate two graphs with different inference URLs, as shown here:
First instance: inference server URL = `http://lvaExtension1:44000/score` Second instance: inference server URL = `http://lvaExtension2:44001/score`
-#### GRPC protocol:
+*Use the gRPC protocol*:
-With Live Video Analytics module 1.0, when using a gRPC protocol, the only way would be if the gRPC server exposed different AI models via different ports. In [this example](https://raw.githubusercontent.com/Azure/live-video-analytics/master/MediaGraph/topologies/grpcExtension/topology.json), there is a single port, 44000 that is exposing all the yolo models. In theory the yolo gRPC server could be rewritten to expose some models at 44000, others at 45000, …
+* With Live Video Analytics module 1.0, when you use a general-purpose remote procedure call (gRPC) protocol, the only way to do so is if the gRPC server exposes different AI models via different ports. In [this code example](https://raw.githubusercontent.com/Azure/live-video-analytics/master/MediaGraph/topologies/grpcExtension/topology.json), a single port, 44000, exposes all the yolo models. In theory, the yolo gRPC server could be rewritten to expose some models at port 44000 and others at port 45000.
-With Live Video Analytics module 2.0, a new property is added to the gRPC extension node. This property is called **extensionConfiguration** which is an optional string that can be used as a part of the gRPC contract. When you have multiple AI models packaged in a single inference server, you will not need to expose a node for every AI model. Instead, for a graph instance, the extension provider (you) can define how to select the different AI models using the **extensionConfiguration** property and during execution, Live Video Analytics will pass this string to the inferencing server which can use this to invoke the desired AI model.
+* With Live Video Analytics module 2.0, a new property is added to the gRPC extension node. This property, **extensionConfiguration**, is an optional string that can be used as a part of the gRPC contract. When you have multiple AI models packaged in a single inference server, you don't need to expose a node for every AI model. Instead, for a graph instance, you, as the extension provider, can define how to select the different AI models by using the **extensionConfiguration** property. During execution, Live Video Analytics passes this string to the inferencing server, which can use it to invoke the desired AI model.
-### I am building a gRPC server around an AI model, and want to be able to support being used by multiple cameras/graph instances. How should I build my server?
+**I'm building a gRPC server around an AI model, and I want to be able to support its use by multiple cameras or graph instances. How should I build my server?**
- Firstly, be sure that your server can handle more than one requests at a time. Or be sure that your server works in parallel threads.
+ First, be sure that your server can either handle more than one request at a time or work in parallel threads.
-For example, in one of [Live Video Analytics GRPC samples](https://github.com/Azure/live-video-analytics/blob/master/utilities/video-analysis/notebooks/Yolo/yolov3/yolov3-grpc-icpu-onnx/lvaextension/server/server.py), there is a default number of parallel channels set. See:
+For example, a default number of parallel channels has been set in the following [Live Video Analytics gRPC sample](https://github.com/Azure/live-video-analytics/blob/master/utilities/video-analysis/notebooks/Yolo/yolov3/yolov3-grpc-icpu-onnx/lvaextension/server/server.py):
``` server = grpc.server(futures.ThreadPoolExecutor(max_workers=3)) ```
-In the above gRPC server instantiation, the server can open only three channels per camera (so per graph topology instance) at a time. You should not try to connect more than three instances to the server. If you do try to open more than three channels, requests will be pending until an existing one drops.
-
-Above gRPC server implementation is used in our Python samples. Developers can implement their own servers or in the above default implementation can increase the worker number set to the number of cameras used to get video feed from.
+In the preceding gRPC server instantiation, the server can open only three channels at a time per camera, or per graph topology instance. Don't try to connect more than three instances to the server. If you do try to open more than three channels, requests will be pending until an existing channel drops.
-To set up and use multiple cameras, developers can instantiate multiple graph topology instance where each instance pointing to same or different inference server (for example, server mentioned in the above paragraph).
+The preceding gRPC server implementation is used in our Python samples. As a developer, you can implement your own server or use the preceding default implementation to increase the worker number, which you set to the number of cameras to use for video feeds.
-### I want to be able to receive multiple frames from upstream before I make an inferencing decision. How can I enable that?
+To set up and use multiple cameras, you can instantiate multiple graph topology instances, each pointing to the same or a different inference server (for example, the server mentioned in the preceding paragraph).
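For example, a sketch of a server whose worker count scales with the number of camera feeds; the servicer registration is a placeholder for your generated gRPC code:

```python
# Sketch: size the gRPC thread pool to the number of camera feeds.
from concurrent import futures
import grpc

def serve(port=44000, num_cameras=3):
    # One worker per expected camera / graph topology instance
    server = grpc.server(futures.ThreadPoolExecutor(max_workers=num_cameras))
    # Register your generated servicer here, for example:
    # extension_pb2_grpc.add_MediaGraphExtensionServicer_to_server(MyServicer(), server)
    server.add_insecure_port(f'[::]:{port}')
    server.start()
    server.wait_for_termination()

if __name__ == '__main__':
    serve(num_cameras=5)   # raise this to serve more cameras concurrently
```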
-Current [default samples](https://github.com/Azure/live-video-analytics/tree/master/utilities/video-analysis) work in "stateless" mode. These sample are not keeping the state of the previous calls and even who called (meaning multiple topology instance may call same inference server, and server will not be able to distinguish who is calling and state per caller)
+**I want to be able to receive multiple frames from upstream before I make an inferencing decision. How can I enable that?**
-#### HTTP protocol
+Our current [default samples](https://github.com/Azure/live-video-analytics/tree/master/utilities/video-analysis) work in a *stateless* mode. They don't keep the state of the previous calls or even who called. This means that multiple topology instances might call the same inference server, but the server can't distinguish who is calling or the state per caller.
-When using HTTP protocol:
+*Use the HTTP protocol*:
-To keep the state, each caller (graph topology instance) will call the inferencing server with HTTP Query parameter unique to caller. For example, inference server URL address for
+To keep the state, each caller, or graph topology instance, calls the inferencing server by using the HTTP query parameter that's unique to caller. For example, the inference server URL addresses for each instance are shown here:
1st topology instance= `http://lvaExtension:44000/score?id=1`<br/> 2nd topology instance= `http://lvaExtension:44000/score?id=2` …
-On the server side, the score route will know who is calling. If ID=1, then it can keep the state separately for that caller (graph topology instance). You can then keep the received video frames in a buffer (for example, array, or a dictionary with DateTime Key, and Value is the frame) and then you can define the server to process (infer) after x frames are received.
-
-#### GRPC protocol
-
-When using gRPC protocol:
+On the server side, the score route knows who is calling. If ID=1, then it can keep the state separately for that caller or graph topology instance. You can then keep the received video frames in a buffer. For example, use an array, or a dictionary with a DateTime key, and the value is the frame. You can then define the server to process (infer) after *x* number of frames are received.
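A hypothetical sketch of that buffering pattern; the batch size and the inference call are stand-ins:

```python
# Hypothetical sketch: buffer frames per caller ID, infer every N frames.
from collections import defaultdict
from flask import Flask, request, jsonify

app = Flask(__name__)
buffers = defaultdict(list)   # graph topology instance ID -> buffered frames
BATCH_SIZE = 10               # illustrative value of x

@app.route('/score', methods=['POST'])
def score():
    caller_id = request.args.get('id', 'default')
    buffers[caller_id].append(request.get_data())
    if len(buffers[caller_id]) < BATCH_SIZE:
        return jsonify({'inferences': []})    # keep buffering
    frames, buffers[caller_id] = buffers[caller_id], []
    # ... run the model across `frames` (stand-in) ...
    return jsonify({'inferences': []})
```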
-With a gRPC extension, each session is for a single camera feed so there is no need to provide an ID. So now with the extensionConfiguration property, you can store the video frames in a buffer and define the server to process(infer) after x frames are received.
+*Use the gRPC protocol*:
-### Do all ProcessMediaStreams on a particular container run the same AI model?
+With a gRPC extension, each session is for a single camera feed, so there's no need to provide an ID. Now, with the extensionConfiguration property, you can store the video frames in a buffer and define the server to process (infer) after *x* number of frames are received.
-No.
+**Do all ProcessMediaStreams on a particular container run the same AI model?**
-Start/stop calls from the end user on a graph instance constitutes a session, or perhaps there is a camera disconnect/reconnect. The goal is to persist one session if the camera is streaming video.
+No. Start or stop calls from the end user in a graph instance constitute a session, or perhaps there's a camera disconnect or reconnect. The goal is to persist one session if the camera is streaming video.
-* Two cameras sending video for processing, creates two sessions.
-* One camera going to a graph that has two gRPCExtension nodes creates two sessions.
+* Two cameras sending video for processing creates two sessions.
+* One camera going to a graph that has two gRPC extension nodes creates two sessions.
-Each session is a full duplex connection between Live Video Analytics and the gRPC Server and each session can have a different model/pipeline.
+Each session is a full duplex connection between Live Video Analytics and the gRPC server, and each session can have a different model or pipeline.
> [!NOTE]
-> In case of a camera disconnect/reconnect (with camera going offline for a period beyond tolerance limits), Live Video Analytics will open a new session with the gRPC Server. There is no requirement for the server to track state across these sessions.
+> In case of a camera disconnect or reconnect, with the camera going offline for a period beyond tolerance limits, Live Video Analytics will open a new session with the gRPC server. There's no requirement for the server to track the state across these sessions.
-Live Video Analytics also added support of multiple gRPC extensions for a single camera in a graph instance. You will be able to use these gRPC extensions to carry out AI processing sequentially or in parallel or even have a combination of both.
+Live Video Analytics also adds support for multiple gRPC extensions for a single camera in a graph instance. You can use these gRPC extensions to carry out AI processing sequentially, in parallel, or as a combination of both.
> [!NOTE]
-> Having multiple extensions run in parallel will impact your hardware resources and you will have to keep this mind while choosing the hardware that will suit your computational needs.
+> Having multiple extensions run in parallel will affect your hardware resources. Keep this in mind as you're choosing the hardware that suits your computational needs.
-### What is the max # of simultaneous ProcessMediaStreams?
+**What is the maximum number of simultaneous ProcessMediaStreams?**
-There is no limit that Live Video Analytics applies.
+Live Video Analytics applies no limits to this number.
-### How should I decide if my inferencing server should use CPU or GPU or any other hardware accelerator?
+**How can I decide whether my inferencing server should use CPU or GPU or any other hardware accelerator?**
-This is completely dependent on how complex the AI model is developed and how the developer wants to use the CPU and hardware accelerators. While developing the AI model, the developers can specify what resources should be used by the model to perform what actions.
+Your decision depends on the complexity of the developed AI model and how you want to use the CPU and hardware accelerators. As you're developing the AI model, you can specify what resources the model should use and what actions it should perform.
-### How do I store images with bounding boxes post processing?
+**How do I store images with bounding boxes post-processing?**
-Today, we are providing bounding box coordinates as inference messages only. Developers can build a custom MJPEG streamer that can use these messages and overlay the bounding boxes over the video frames.
+Today, we are providing bounding box coordinates as inference messages only. You can build a custom MJPEG streamer that can use these messages and overlay the bounding boxes on the video frames.
## gRPC compatibility
-### How will I know what the mandatory fields for the media stream descriptor are?
+**How will I know what the mandatory fields for the media stream descriptor are?**
-Any field value which is not supplied will be given a default [as specified by gRPC](https://developers.google.com/protocol-buffers/docs/proto3#default).
+Any field that you don't supply a value to is given a [default value, as specified by gRPC](https://developers.google.com/protocol-buffers/docs/proto3#default).
-Live Video Analytics uses **proto3** version of the protocol buffer language. All the protocol buffer data used by Live Video Analytics contracts are available in the protocol buffer files [defined here](https://github.com/Azure/live-video-analytics/tree/master/contracts/grpc).
+Live Video Analytics uses the *proto3* version of the protocol buffer language. All the protocol buffer data that's used by Live Video Analytics contracts is available in the [protocol buffer files](https://github.com/Azure/live-video-analytics/tree/master/contracts/grpc).
-### How should I ensure that I am using the latest protocol buffer files?
+**How can I ensure that I'm using the latest protocol buffer files?**
-The latest protocol buffer files can be [obtained here](https://github.com/Azure/live-video-analytics/tree/master/contracts/grpc). Whenever we update the contract files, they will appear in this location. While there is no immediate plan to update the protocol files, look for the package name at the top of the files to know the version. It should read:
+You can obtain the latest protocol buffer files on the [contract files site](https://github.com/Azure/live-video-analytics/tree/master/contracts/grpc). Whenever we update the contract files, they'll be in this location. There's no immediate plan to update the protocol files, so look for the package name at the top of the files to know the version. It should read:
``` microsoft.azure.media.live_video_analytics.extensibility.grpc.v1 ```
-Any updates to these files, will increment the "v-value" at the end of the name.
+Any updates to these files will increment the "v-value" at the end of the name.
> [!NOTE]
-> Since Live Video Analytics uses proto3 version of the language, the fields are optional, and this makes it backward and forward compatible.
+> Because Live Video Analytics uses the proto3 version of the language, the fields are optional, and the version is backward and forward compatible.
-### What gRPC features are available for me to use with Live Video Analytics? Which features are mandatory and which ones are optional?
+**What gRPC features are available for me to use with Live Video Analytics? Which features are mandatory and which are optional?**
-Any server-side gRPC features may be used provided the protobuf contract is fulfilled.
+You can use any server-side gRPC features, provided that the Protocol Buffers (Protobuf) contract is fulfilled.
## Monitoring and metrics
-### Can I monitor the media graph on the edge using Event Grid?
+**Can I monitor the media graph on the edge by using Azure Event Grid?**
-Yes. You can consume the prometheus metrics and publish them to event grid.
+Yes. You can consume Prometheus metrics and publish them to your event grid.
-### Can I use Azure Monitor to view the health, metrics, and performance of my media graphs in the cloud or on the edge?
+**Can I use Azure Monitor to view the health, metrics, and performance of my media graphs in the cloud or on the edge?**
-Yes. This is supported. Learn more on [How to use Azure Monitor Metrics](https://docs.microsoft.com/azure/azure-monitor/platform/data-platform-metrics).
+Yes, we support this approach. To learn more, see [Azure Monitor Metrics overview](https://docs.microsoft.com/azure/azure-monitor/platform/data-platform-metrics).
-### Are there any tools to make it easier to monitor the Media Services IoT Edge module?
+**Are there any tools to make it easier to monitor the Media Services IoT Edge module?**
-Visual Studio Code supports the "Azure IoT Tools " extension that allows you to easily monitor the LVAEdge module endpoints. You can use this tool to quickly start monitoring the IoT Hub built-in endpoint for "events" and see the inference messages that are routed from the edge device to the cloud.
+Visual Studio Code supports the Azure IoT Tools extension, with which you can easily monitor the LVAEdge module endpoints. You can use this tool to quickly start monitoring your IoT hub built-in endpoint for "events" and view the inference messages that are routed from the edge device to the cloud.
-In addition, you can use this extension to edit the Module Twin for the LVAEdge module to modify the media graph settings.
+In addition, you can use this extension to edit the module twin for the LVAEdge module to modify the media graph settings.
For more information, see the [monitoring and logging](monitoring-logging.md) article. ## Billing and availability
-### How is Live Video Analytics on IoT Edge billed?
+**How is Live Video Analytics on IoT Edge billed?**
-See [pricing page](https://azure.microsoft.com/pricing/details/media-services/) for details.
+For billing details, see [Media Services pricing](https://azure.microsoft.com/pricing/details/media-services/).
## Next steps
-[Quickstart: Get started - Live Video Analytics on IoT Edge](get-started-detect-motion-emit-events-quickstart.md)
+[Quickstart: Get started with Live Video Analytics on IoT Edge](get-started-detect-motion-emit-events-quickstart.md)
media-services https://docs.microsoft.com/en-us/azure/media-services/video-indexer/animated-characters-recognition-how-to https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/video-indexer/animated-characters-recognition-how-to.md
@@ -95,22 +95,37 @@ Before tagging and training the model, all animated characters will be named "
### Customize the animated characters models
-1. Tag and train the model.
-
- 1. Tag the detected character by editing its name. Once a character is trained into the model, it will be recognized it the next video indexed with that model.
- 1. To tag an animated character in your video, go to the **Insights** tab and click on the **Edit** button on the top-right corner of the window.
- 1. In the **Insights** pane, click on any of the detected animated characters and change their names from "Unknown #X" (or the name that was previously assigned to the character).
- 1. After typing in the new name, click on the check icon next to the new name. This saves the new name in the model in Video Indexer.
- 1. After you finished editing all names you want, you need to train the model.
-
- Open the customization page and click on the **Animated characters** tab and then click on the **Train** button to train your model.
-
- If you have a paid account, you can click the **Manage models in Customer Vision** link (as shown below). You will then be forwarded to the model's page in **Custom Vision**.
-
- ![Content model customization](./media/animated-characters-recognition/content-model-customization-tab.png)
-
- 1. Once trained, any video that will be indexed or reindexed with that model will recognize the trained characters.
- Paid accounts that have access to their Custom Vision account can see the models and tagged images there. Learn more about [improving your classifier in Custom Vision](../../cognitive-services/custom-vision-service/getting-started-improving-your-classifier.md).
+1. Name the characters in Video Indexer.
+
+    1. After the model has created the character groups, we recommend reviewing these groups in Custom Vision.
+ 1. To tag an animated character in your video, go to the **Insights** tab and click on the **Edit** button on the top-right corner of the window.
+ 1. In the **Insights** pane, click on any of the detected animated characters and change their names from "Unknown #X" to a temporary name (or the name that was previously assigned to the character).
+ 1. After typing in the new name, click on the check icon next to the new name. This saves the new name in the model in Video Indexer.
+1. Paid accounts only: Review the groups in Custom Vision
+
+ > [!NOTE]
+ > Paid accounts that have access to their Custom Vision account can see the models and tagged images there. Learn more about [improving your classifier in Custom Vision](https://docs.microsoft.com/azure/cognitive-services/custom-vision-service/getting-started-improving-your-classifier). It's important to note that training of the model should be done only via Video Indexer (as described in this topic), and not via the Custom Vision website.
+
+ 1. Go to the **Custom Models** page in Video Indexer and choose the **Animated characters** tab.
+ 1. Click on the Edit button for the model you are working on to manage it in Custom Vision.
+ 1. Review each character group:
+
+    * If the group contains unrelated images, it is recommended to delete these in the Custom Vision website.
+    * If there are images that belong to a different character, change the tag on these specific images by clicking on the image, adding the right tag, and deleting the wrong tag.
+    * If the group is not correct, meaning it contains mainly non-character images or images from multiple characters, you can delete it in the Custom Vision website or in the Video Indexer insights.
+    * The grouping algorithm will sometimes split your characters into different groups. It is therefore recommended to give all the groups that belong to the same character the same name (in Video Indexer insights), which will immediately cause all these groups to appear as one in the Custom Vision website.
+ 1. Once the group is refined, make sure the initial name you tagged it with reflects the character in the group.
+1. Train the model
+
+ 1. After you finished editing all names you want, you need to train the model.
+    1. Once a character is trained into the model, it will be recognized in the next video indexed with that model.
+    1. Open the customization page, click on the **Animated characters** tab, and then click on the **Train** button to train your model. To keep the connection between Video Indexer and the model, don't train the model in the Custom Vision website (paid accounts have access to the Custom Vision website); train it only in Video Indexer.
+Once trained, any video that will be indexed or reindexed with that model will recognize the trained characters.
+
+## Delete an animated character and the model
+ 1. Delete an animated character. 1. To delete an animated character in your video insights, go to the **Insights** tab and click on the **Edit** button on the top-right corner of the window.
security-center https://docs.microsoft.com/en-us/azure/security-center/continuous-export https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/continuous-export.md
@@ -6,7 +6,7 @@ author: memildin
manager: rkarlin ms.service: security-center ms.topic: how-to
-ms.date: 12/08/2020
+ms.date: 12/24/2020
ms.author: memildin ---
@@ -19,6 +19,7 @@ Azure Security Center generates detailed security alerts and recommendations. Yo
- All high severity alerts are sent to an Azure Event Hub - All medium or higher severity findings from vulnerability assessment scans of your SQL servers are sent to a specific Log Analytics workspace - Specific recommendations are delivered to an Event Hub or Log Analytics workspace whenever they're generated
+- The secure score for a subscription is sent to a Log Analytics workspace whenever the score for a control changes by 0.01 or more
This article describes how to configure continuous export to Log Analytics workspaces or Azure Event Hubs.
@@ -40,8 +41,15 @@ This article describes how to configure continuous export to Log Analytics works
|||
+## What data types can be exported?
+Continuous export can export the following data types whenever they change:
+- Security alerts
+- Security recommendations
+- Security findings, which can be thought of as 'sub' recommendations, like findings from vulnerability assessment scanners or specific system updates. You can choose to include them with their 'parent' recommendations, such as "System updates should be installed on your machines".
+- Secure score (per subscription or per control)
+- Regulatory compliance data
## Set up a continuous export
@@ -62,7 +70,7 @@ The steps below are necessary whether you're setting up a continuous export to L
Here you see the export options. There's a tab for each available export target. 1. Select the data type you'd like to export and choose from the filters on each type (for example, export only high severity alerts).
-1. Optionally, if your selection includes one of these four recommendations, you can include the vulnerability assessment findings together with them:
+1. Optionally, if your selection includes one of these recommendations, you can include the vulnerability assessment findings together with them:
- Vulnerability Assessment findings on your SQL databases should be remediated - Vulnerability Assessment findings on your SQL servers on machines should be remediated (Preview) - Vulnerabilities in Azure Container Registry images should be remediated (powered by Qualys)
@@ -211,6 +219,9 @@ No. Continuous export is built for streaming of **events**:
- **Alerts** received before you enabled export won't be exported. - **Recommendations** are sent whenever a resource's compliance state changes. For example, when a resource turns from healthy to unhealthy. Therefore, as with alerts, recommendations for resources that haven't changed state since you enabled export won't be exported.
+- **Secure score (preview)** per security control or subscription is sent when a security control's score changes by 0.01 or more.
+- **Regulatory compliance status (preview)** is sent when the status of the resource's compliance changes.
+ ### Why are recommendations sent at different intervals?
security-center https://docs.microsoft.com/en-us/azure/security-center/release-notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/release-notes.md
@@ -10,7 +10,7 @@ ms.devlang: na
ms.topic: overview ms.tgt_pltfrm: na ms.workload: na
-ms.date: 12/23/2020
+ms.date: 12/24/2020
ms.author: memildin ---
@@ -39,6 +39,7 @@ Updates in December include:
- [Revitalized Security Center experience in Azure SQL Database & SQL Managed Instance](#revitalized-security-center-experience-in-azure-sql-database--sql-managed-instance) - [Asset inventory tools and filters updated](#asset-inventory-tools-and-filters-updated) - [Recommendation about web apps requesting SSL certificates no longer part of secure score](#recommendation-about-web-apps-requesting-ssl-certificates-no-longer-part-of-secure-score)
+- [Continuous export gets new data types and improved deployifnotexist policies](#continuous-export-gets-new-data-types-and-improved-deployifnotexist-policies)
### Azure Defender for SQL servers on machines is generally available
@@ -151,6 +152,28 @@ With this change, the recommendation is now a recommended best practice which do
Learn which recommendations are in each security control in [Security controls and their recommendations](secure-score-security-controls.md#security-controls-and-their-recommendations). +
+### Continuous export gets new data types and improved deployifnotexist policies
+
+Azure Security Center's continuous export tools enable you to export Security Center's recommendations and alerts for use with other monitoring tools in your environment.
+
+Continuous export lets you fully customize what will be exported, and where it will go. For full details, see [Continuously export Security Center data](continuous-export.md).
+
+These tools have been enhanced and expanded in the following ways:
+
+- **Continuous export's deployifnotexist policies enhanced**. The policies now:
+
+ - **Check whether the configuration is enabled.** If it isn't, the policy will show as non-compliant and create a compliant resource. Learn more about the supplied Azure Policy templates in the "Deploy at scale with Azure Policy" tab in [Set up a continuous export](continuous-export.md#set-up-a-continuous-export).
+
+ - **Support exporting security findings.** When using the Azure Policy templates, you can configure your continuous export to include findings. This is relevant when exporting recommendations that have 'sub' recommendations, like findings from vulnerability assessment scanners or specific system updates for the 'parent' recommendation "System updates should be installed on your machines".
+
+ - **Support exporting secure score data.**
+
+- **Regulatory compliance assessment data added (in preview).** You can now continuously export updates to regulatory compliance assessments, including for any custom initiatives, to a Log Analytics workspace or Event Hub. This feature is unavailable on national/sovereign clouds.
+
+ :::image type="content" source="media/release-notes/continuous-export-regulatory-compliance-option.png" alt-text="The options for including regulatory compliant assessment information with your continuous export data.":::
++ ## November 2020 Updates in November include:
storage https://docs.microsoft.com/en-us/azure/storage/common/storage-monitoring-diagnosing-troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/storage-monitoring-diagnosing-troubleshooting.md
@@ -114,8 +114,7 @@ You can use the [Azure portal](https://portal.azure.com) to view the health of t
The [Azure portal](https://portal.azure.com) can also provide notifications of incidents that affect the various Azure services. Note: This information was previously available, along with historical data, on the [Azure Service Dashboard](https://status.azure.com).-
-While the [Azure portal](https://portal.azure.com) collects health information from inside the Azure datacenters (inside-out monitoring), you could also consider adopting an outside-in approach to generate synthetic transactions that periodically access your Azure-hosted web application from multiple locations. The services offered by [Dynatrace](https://www.dynatrace.com/en/synthetic-monitoring) and Application Insights for Azure DevOps are examples of this approach. For more information about Application Insights for Azure DevOps, see the appendix "[Appendix 5: Monitoring with Application Insights for Azure DevOps](#appendix-5)."
+For more information about Application Insights for Azure DevOps, see the appendix "[Appendix 5: Monitoring with Application Insights for Azure DevOps](#appendix-5)."
### <a name="monitoring-capacity"></a>Monitoring capacity Storage Metrics only stores capacity metrics for the blob service because blobs typically account for the largest proportion of stored data (at the time of writing, it is not possible to use Storage Metrics to monitor the capacity of your tables and queues). You can find this data in the **$MetricsCapacityBlob** table if you have enabled monitoring for the Blob service. Storage Metrics records this data once per day, and you can use the value of the **RowKey** to determine whether the row contains an entity that relates to user data (value **data**) or analytics data (value **analytics**). Each stored entity contains information about the amount of storage used (**Capacity** measured in bytes) and the current number of containers (**ContainerCount**) and blobs (**ObjectCount**) in use in the storage account. For more information about the capacity metrics stored in the **$MetricsCapacityBlob** table, see [Storage Analytics Metrics Table Schema](/rest/api/storageservices/Storage-Analytics-Metrics-Table-Schema).
@@ -860,4 +859,4 @@ For more information about analytics in Azure Storage, see these resources:
[7]: ./media/storage-monitoring-diagnosing-troubleshooting/wireshark-screenshot-2.png [8]: ./media/storage-monitoring-diagnosing-troubleshooting/wireshark-screenshot-3.png [9]: ./media/storage-monitoring-diagnosing-troubleshooting/mma-screenshot-1.png
-[10]: ./media/storage-monitoring-diagnosing-troubleshooting/mma-screenshot-2.png
\ No newline at end of file
+[10]: ./media/storage-monitoring-diagnosing-troubleshooting/mma-screenshot-2.png
stream-analytics https://docs.microsoft.com/en-us/azure/stream-analytics/connect-job-to-vnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/stream-analytics/connect-job-to-vnet.md
@@ -37,8 +37,6 @@ Your jobs can connect to the following Azure services using this technique:
If your jobs need to connect to other input or output types, then the only option is to use private endpoints in Stream Analytics clusters.
-You can implement machine learning models as a user-defined function (UDF) in your Azure Stream Analytics jobs to do real-time scoring and predictions on your streaming input data. [Azure Machine Learning](../machine-learning/overview-what-is-azure-ml.md) allows you to use any popular open-source tool, such as Tensorflow, scikit-learn, or PyTorch, to prep, train, and deploy models.
- ## Next steps * [Create and remove Private Endpoints in Stream Analytics clusters](https://docs.microsoft.com/azure/stream-analytics/private-endpoints)
virtual-machine-scale-sets https://docs.microsoft.com/en-us/azure/virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-upgrade https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-upgrade.md
@@ -40,6 +40,9 @@ The upgrade process works as follows:
The scale set OS upgrade orchestrator checks for the overall scale set health before upgrading every batch. While upgrading a batch, there could be other concurrent planned or unplanned maintenance activities that could impact the health of your scale set instances. In such cases, if more than 20% of the scale set's instances become unhealthy, the scale set upgrade stops at the end of the current batch.
+> [!NOTE]
> Automatic OS upgrade does not upgrade the reference image SKU on the scale set. To change the SKU (such as Ubuntu 16.04-LTS to 18.04-LTS), you must update the [scale set model](virtual-machine-scale-sets-upgrade-scale-set.md#the-scale-set-model) directly with the desired image SKU. The image publisher and offer can't be changed for an existing scale set.
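For example, a hedged sketch of updating the model's image reference SKU with the `azure-mgmt-compute` Python SDK; the resource names and target SKU are placeholders:

```python
# Sketch: update a scale set model's image reference SKU.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

client = ComputeManagementClient(DefaultAzureCredential(), '<subscription-id>')

vmss = client.virtual_machine_scale_sets.get('myResourceGroup', 'myScaleSet')
vmss.virtual_machine_profile.storage_profile.image_reference.sku = '18.04-LTS'

client.virtual_machine_scale_sets.begin_create_or_update(
    'myResourceGroup', 'myScaleSet', vmss).result()   # wait for completion
```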
+ ## Supported OS images Only certain OS platform images are currently supported. Custom images [are supported](virtual-machine-scale-sets-automatic-upgrade.md#automatic-os-image-upgrade-for-custom-images) if the scale set uses custom images through [Shared Image Gallery](shared-image-galleries.md).
@@ -49,16 +52,15 @@ The following platform SKUs are currently supported (and more are added periodic
|-------------------------|---------------|--------------------| | Canonical | UbuntuServer | 16.04-LTS | | Canonical | UbuntuServer | 18.04-LTS |
-| Rogue Wave (OpenLogic) | CentOS | 7.5 |
-| CoreOS | CoreOS | Stable |
-| Microsoft Corporation | WindowsServer | 2012-R2-Datacenter |
-| Microsoft Corporation | WindowsServer | 2016-Datacenter |
-| Microsoft Corporation | WindowsServer | 2016-Datacenter-Smalldisk |
-| Microsoft Corporation | WindowsServer | 2016-Datacenter-with-Containers |
-| Microsoft Corporation | WindowsServer | 2019-Datacenter |
-| Microsoft Corporation | WindowsServer | 2019-Datacenter-Smalldisk |
-| Microsoft Corporation | WindowsServer | 2019-Datacenter-with-Containers |
-| Microsoft Corporation | WindowsServer | Datacenter-Core-1903-with-Containers-smalldisk |
+| OpenLogic | CentOS | 7.5 |
+| MicrosoftWindowsServer | WindowsServer | 2012-R2-Datacenter |
+| MicrosoftWindowsServer | WindowsServer | 2016-Datacenter |
+| MicrosoftWindowsServer | WindowsServer | 2016-Datacenter-Smalldisk |
+| MicrosoftWindowsServer | WindowsServer | 2016-Datacenter-with-Containers |
+| MicrosoftWindowsServer | WindowsServer | 2019-Datacenter |
+| MicrosoftWindowsServer | WindowsServer | 2019-Datacenter-Smalldisk |
+| MicrosoftWindowsServer | WindowsServer | 2019-Datacenter-with-Containers |
+| MicrosoftWindowsServer | WindowsServer | Datacenter-Core-1903-with-Containers-smalldisk |
## Requirements for configuring automatic OS image upgrade
virtual-machines https://docs.microsoft.com/en-us/azure/virtual-machines/windows/automatic-vm-guest-patching https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/windows/automatic-vm-guest-patching.md
@@ -5,7 +5,7 @@ author: mayanknayar
ms.service: virtual-machines-windows ms.workload: infrastructure ms.topic: how-to
-ms.date: 09/09/2020
+ms.date: 12/23/2020
ms.author: manayar ---
@@ -29,11 +29,11 @@ Automatic VM guest patching has the following characteristics:
If automatic VM guest patching is enabled on a VM, then the available *Critical* and *Security* patches are downloaded and applied automatically on the VM. This process kicks off automatically every month when new patches are released through Windows Update. Patch assessment and installation are automatic, and the process includes rebooting the VM as required.
-The VM is assessed periodically to determine the applicable patches for that VM. The patches can be installed any day on the VM during off-peak hours for the VM. This automatic assessment ensures that any missing patches are discovered at the earliest possible opportunity.
+The VM is assessed every few days, and multiple times within any 30-day period, to determine the applicable patches for that VM. The patches can be installed on any day during the VM's off-peak hours. This automatic assessment ensures that any missing patches are discovered at the earliest possible opportunity.
-Patches are installed within 30 days of the monthly Windows Update release, following availability-first orchestration described below. Patches are installed only during off-peak hours for the VM, depending on the time zone of the VM. The VM must be running during the off-peak hours for patches to be automatically installed. If a VM is powered off during a periodic assessment, the VM will be automatically assessed and applicable patches will be installed automatically during the next periodic assessment when the VM is powered on.
+Patches are installed within 30 days of the monthly Windows Update release, following availability-first orchestration described below. Patches are installed only during off-peak hours for the VM, depending on the time zone of the VM. The VM must be running during the off-peak hours for patches to be automatically installed. If a VM is powered off during a periodic assessment, the VM will be automatically assessed and applicable patches will be installed automatically during the next periodic assessment (usually within a few days) when the VM is powered on.
-To install patches with other patch classifications or schedule patch installation within your own custom maintenance window, you can use [Update Management](tutorial-config-management.md#manage-windows-updates).
+Definition updates and other patches not classified as *Critical* or *Security* will not be installed through automatic VM guest patching. To install patches with other patch classifications or schedule patch installation within your own custom maintenance window, you can use [Update Management](tutorial-config-management.md#manage-windows-updates).
### Availability-first patching
@@ -64,11 +64,11 @@ The following platform SKUs are currently supported (and more are added periodic
| Publisher | OS Offer | Sku |
|-------------------------|---------------|--------------------|
-| Microsoft Corporation | WindowsServer | 2012-R2-Datacenter |
-| Microsoft Corporation | WindowsServer | 2016-Datacenter |
-| Microsoft Corporation | WindowsServer | 2016-Datacenter-Server-Core |
-| Microsoft Corporation | WindowsServer | 2019-Datacenter |
-| Microsoft Corporation | WindowsServer | 2019-Datacenter-Server-Core |
+| MicrosoftWindowsServer | WindowsServer | 2012-R2-Datacenter |
+| MicrosoftWindowsServer | WindowsServer | 2016-Datacenter |
+| MicrosoftWindowsServer | WindowsServer | 2016-Datacenter-Server-Core |
+| MicrosoftWindowsServer | WindowsServer | 2019-Datacenter |
+| MicrosoftWindowsServer | WindowsServer | 2019-Datacenter-Core |
## Patch orchestration modes
Windows VMs on Azure now support the following patch orchestration modes:
@@ -78,7 +78,7 @@ Windows VMs on Azure now support the following patch orchestration modes:
- This mode is required for availability-first patching.
- Setting this mode also disables the native Automatic Updates on the Windows virtual machine to avoid duplication.
- This mode is only supported for VMs that are created using the supported OS platform images above.
-- To use this mode, set the property `osProfile.windowsConfiguration.enableAutomaticUpdates=true`, and set the property `osProfile.windowsConfiguration.patchSettings.patchMode=AutomaticByPlatfom` in the VM template.
+- To use this mode, set the property `osProfile.windowsConfiguration.enableAutomaticUpdates=true`, and set the property `osProfile.windowsConfiguration.patchSettings.patchMode=AutomaticByPlatform` in the VM template.
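As a sketch, the same two properties can be set on an existing VM with the Azure CLI's generic `--set` syntax, assuming the patch settings are updatable in place on your VM; the resource names are placeholders:

```azurecli
# Opt a VM into platform-orchestrated patching by setting both properties.
# "myResourceGroup" and "myVM" are placeholder names.
az vm update \
  --resource-group myResourceGroup \
  --name myVM \
  --set osProfile.windowsConfiguration.enableAutomaticUpdates=true \
        osProfile.windowsConfiguration.patchSettings.patchMode=AutomaticByPlatform
```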
**AutomaticByOS:**
- This mode enables Automatic Updates on the Windows virtual machine, and patches are installed on the VM through Automatic Updates.
@@ -102,7 +102,7 @@ Windows VMs on Azure now support the following patch orchestration modes:
- The virtual machine must be able to access Windows Update endpoints. If your virtual machine is configured to use Windows Server Update Services (WSUS), the relevant WSUS server endpoints must be accessible.
- Use Compute API version 2020-06-01 or higher.
-Enabling the preview functionality requires a one-time opt-in for the feature *InGuestAutoPatchVMPreview* per subscription, as detailed below.
+Enabling the preview functionality requires a one-time opt-in for the feature **InGuestAutoPatchVMPreview** per subscription, as detailed below.
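The opt-in can also be done with the Azure CLI's feature commands; a sketch (the REST example follows below):

```azurecli
# One-time opt-in for the preview feature, per subscription.
az feature register --namespace Microsoft.Compute --name InGuestAutoPatchVMPreview

# After the feature shows as Registered, propagate it to the subscription.
az provider register --namespace Microsoft.Compute
```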
### REST API
The following example describes how to enable the preview for your subscription:
@@ -249,10 +249,10 @@ The patch installation results for your VM can be reviewed under the `lastPatchI
## On-demand patch assessment
If automatic VM guest patching is already enabled for your VM, a periodic patch assessment is performed on the VM during the VM's off-peak hours. This process is automatic and the results of the latest assessment can be reviewed through the VM's instance view as described earlier in this document. You can also trigger an on-demand patch assessment for your VM at any time. Patch assessment can take a few minutes to complete and the status of the latest assessment is updated on the VM's instance view.
-Enabling the preview functionality requires a one-time opt-in for the feature *InGuestPatchVMPreview* per subscription. The feature preview for on-demand patch assessment can be enabled following the [preview enablement process](automatic-vm-guest-patching.md#requirements-for-enabling-automatic-vm-guest-patching) described earlier for automatic VM guest patching.
+Enabling the preview functionality requires a one-time opt-in for the feature **InGuestPatchVMPreview** per subscription. This enrollment is separate from, and in addition to, the **InGuestAutoPatchVMPreview** enrollment done earlier for automatic VM guest patching. The feature preview for on-demand patch assessment can be enabled following the [preview enablement process](automatic-vm-guest-patching.md#requirements-for-enabling-automatic-vm-guest-patching) described earlier for automatic VM guest patching.
> [!NOTE]
->On-demand patch assessment does not automatically trigger patch installation. Assessed and applicable patches for the VM will only be installed during the VM's off-peak hours, following the availability-first patching process described earlier in this document.
+>On-demand patch assessment does not automatically trigger patch installation. If you have enabled automatic VM guest patching, the assessed and applicable patches for the VM will be installed during the VM's off-peak hours, following the availability-first patching process described earlier in this document.
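As a rough sketch, an on-demand assessment can be triggered by invoking the VM's `assessPatches` action with `az rest`; the subscription, resource group, and VM names in the URL are placeholders, and the dedicated REST example follows in the next section:

```azurecli
# Trigger an on-demand patch assessment via the assessPatches action.
# All names in the URL are placeholders.
az rest --method post \
  --url "https://management.azure.com/subscriptions/mySubscriptionId/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM/assessPatches?api-version=2020-06-01"
```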
### REST API
```