Updates from: 05/18/2021 03:05:12
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory What Is Application Proxy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-proxy/what-is-application-proxy.md
Up to this point, we've focused on using Application Proxy to publish on-premise
* **Securely publish REST APIs**. When you have business logic or APIs running on-premises or hosted on virtual machines in the cloud, Application Proxy provides a public endpoint for API access. API endpoint access lets you control authentication and authorization without requiring incoming ports. It provides additional security through Azure AD Premium features such as multi-factor authentication and device-based Conditional Access for desktop, iOS, macOS, and Android devices using Intune. To learn more, see [How to enable native client applications to interact with proxy applications](../manage-apps/application-proxy-configure-native-client-application.md) and [Protect an API by using OAuth 2.0 with Azure Active Directory and API Management](../../api-management/api-management-howto-protect-backend-with-aad.md).
* **Remote Desktop Services (RDS)**. Standard RDS deployments require open inbound connections. However, the [RDS deployment with Application Proxy](../manage-apps/application-proxy-integrate-with-remote-desktop-services.md) has a permanent outbound connection from the server running the connector service. This way, you can offer more applications to end users by publishing on-premises applications through Remote Desktop Services. You can also reduce the attack surface of the deployment with a limited set of two-step verification and Conditional Access controls to RDS.
-* **Publish applications that connect using WebSockets**. Support with [Qlik Sense](../manage-apps/application-proxy-qlik.md) is in Public Preview and will be expanded to other apps in the future.
+* **Publish applications that connect using WebSockets**. Support with [Qlik Sense](/azure/active-directory/app-proxy/application-proxy-qlik) is in Public Preview and will be expanded to other apps in the future.
* **Enable native client applications to interact with proxy applications**. You can use Azure AD Application Proxy to publish web apps, but it also can be used to publish [native client applications](../manage-apps/application-proxy-configure-native-client-application.md) that are configured with the Azure AD Authentication Library (ADAL). Native client applications differ from web apps because they're installed on a device, while web apps are accessed through a browser.

## Conclusion
active-directory Concept Authentication Passwordless https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/concept-authentication-passwordless.md
The following providers offer FIDO2 security keys of different form factors that
> [!NOTE] > If you purchase and plan to use NFC-based security keys, you need a supported NFC reader for the security key. The NFC reader isn't an Azure requirement or limitation. Check with the vendor for your NFC-based security key for a list of supported NFC readers.
-If you're a vendor and want to get your device on this list of supported devices, contact [Fido2Request@Microsoft.com](mailto:Fido2Request@Microsoft.com).
+If you're a vendor and want to get your device on this list of supported devices, check out our guidance on how to [become a Microsoft-compatible FIDO2 security key vendor](https://docs.microsoft.com/security/zero-trust/isv/fido2-hardware-vendor).
To get started with FIDO2 security keys, complete the following how-to:
active-directory Tutorial Existing Forest https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/cloud-sync/tutorial-existing-forest.md
In this scenario, there is an existing forest synced using Azure AD Connect sync
9. On the **Configuration complete** screen, click **Confirm**. This operation will register and restart the agent.</br> ![Screenshot that shows the "Configuration complete" screen.](media/how-to-install/install-4a.png)</br>
+ > [!NOTE]
+ > The group managed service account (for example, CONTOSO\provAgentgMSA$) is created in the same Active Directory domain where the host server has joined.
+ 10. Once this operation completes, you should see a notice: **Your agent configuration was successfully verified.** You can click **Exit**.</br> ![Welcome screen](media/how-to-install/install-5.png)</br> 11. If you still see the initial splash screen, click **Close**.
You have now successfully set up a hybrid identity environment that you can use
## Next steps - [What is provisioning?](what-is-provisioning.md)-- [What is Azure AD Connect cloud sync?](what-is-cloud-sync.md)
+- [What is Azure AD Connect cloud sync?](what-is-cloud-sync.md)
active-directory Quickstart V2 Aspnet Core Webapp Calls Graph https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-aspnet-core-webapp-calls-graph.md
Previously updated : 12/10/2020 Last updated : 05/17/2021 # Customer intent: As an application developer, I want to download and run a demo ASP.NET Core web app that can sign in users with personal Microsoft accounts (MSA) and work/school accounts from any Azure Active Directory instance, then access their data in Microsoft Graph on their behalf.
See [How the sample works](#how-the-sample-works) for an illustration.
> * [Visual Studio 2019](https://visualstudio.microsoft.com/vs/) or [Visual Studio Code](https://code.visualstudio.com/) > * [.NET Core SDK 3.1+](https://dotnet.microsoft.com/download) >
-> ## Register and download the quickstart app
-> You have two options to start your quickstart application:
-> * [Express] [Option 1: Register and auto configure your app and then download your code sample](#option-1-register-and-auto-configure-your-app-and-then-download-your-code-sample)
-> * [Manual] [Option 2: Register and manually configure your application and code sample](#option-2-register-and-manually-configure-your-application-and-code-sample)
>
-> ### Option 1: Register and auto configure your app and then download your code sample
->
-> 1. Go to the <a href="https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/applicationsListBlade/quickStartType/AspNetCoreWebAppQuickstartPage/sourceType/docs" target="_blank">Azure portal - App registrations</a> quickstart experience.
-> 1. Enter a name for your application and select **Register**.
-> 1. Follow the instructions to download and automatically configure your new application for you in one click.
->
-> ### Option 2: Register and manually configure your application and code sample
->
-> #### Step 1: Register your application
+> ## Step 1: Register your application
> To register your application and add the app's registration information to your solution manually, follow these steps: > > 1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
See [How the sample works](#how-the-sample-works) for an illustration.
> 1. Select **Add** and immediately record the secret's **Value** for use in a later step. The secret value is *never displayed again* and is irretrievable by any other means. Record it in a secure location as you would any password. > [!div class="sxs-lookup" renderon="portal"]
-> #### Step 1: Configure your application in the Azure portal
+> ## Step 1: Configure your application in the Azure portal
+>
> For the code sample in this quickstart to work, add a **Redirect URI** of `https://localhost:44321/signin-oidc` and **Front-channel logout URL** of `https://localhost:44321/signout-oidc` in the app registration. > > [!div renderon="portal" id="makechanges" class="nextstepaction"] > > [Make this change for me]()
See [How the sample works](#how-the-sample-works) for an illustration.
> > [!div id="appconfigured" class="alert alert-info"] > > ![Already configured](media/quickstart-v2-aspnet-webapp/green-check.png) Your application is configured with these attributes.
-#### Step 2: Download the ASP.NET Core project
+## Step 2: Download the ASP.NET Core project
> [!div renderon="docs"] > [Download the ASP.NET Core solution](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/archive/aspnetcore3-1-callsgraph.zip)
See [How the sample works](#how-the-sample-works) for an illustration.
[!INCLUDE [active-directory-develop-path-length-tip](../../../includes/active-directory-develop-path-length-tip.md)] > [!div class="sxs-lookup" renderon="portal"]
-> #### Step 3: Your app is configured and ready to run
+> ## Step 3: Your app is configured and ready to run
+>
> We have configured your project with values of your app's properties and it's ready to run. > [!div class="sxs-lookup" renderon="portal"] > > [!NOTE] > > `Enter_the_Supported_Account_Info_Here` > [!div renderon="docs"]
-> #### Step 3: Configure your ASP.NET Core project
+>
+> ## Step 3: Configure your ASP.NET Core project
> 1. Extract the .zip archive into a local folder near the root of your drive. For example, into *C:\Azure-Samples*. > 1. Open the solution in Visual Studio 2019. > 1. Open the *appsettings.json* file and modify the following:
See [How the sample works](#how-the-sample-works) for an illustration.
> > For this quickstart, don't change any other values in the *appsettings.json* file. >
-> #### Step 4: Build and run the application
+> ## Step 4: Build and run the application
> > Build and run the app in Visual Studio by selecting the **Debug** menu > **Start Debugging**, or by pressing the `F5` key. >
active-directory Scenario Spa Sign In https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/scenario-spa-sign-in.md
const loginRequest = {
scopes: ["User.ReadWrite"] }
-let username = "";
+let accountId = "";
const myMsal = new PublicClientApplication(config); myMsal.loginPopup(loginRequest) .then(function (loginResponse) {
- //login success
-
- // In case multiple accounts exist, you can select
- const currentAccounts = myMsal.getAllAccounts();
-
- if (currentAccounts === null) {
- // no accounts detected
- } else if (currentAccounts.length > 1) {
- // Add choose account code here
- } else if (currentAccounts.length === 1) {
- username = currentAccounts[0].username;
- }
-
+ accountId = loginResponse.account.homeAccountId;
+ // Display signed-in user content, call API, etc.
}).catch(function (error) { //login failure console.log(error);
const loginRequest = {
scopes: ["User.ReadWrite"] }
-let username = "";
+let accountId = "";
const myMsal = new PublicClientApplication(config); function handleResponse(response) {
- //handle redirect response
-
- // In case multiple accounts exist, you can select
- const currentAccounts = myMsal.getAllAccounts();
-
- if (currentAccounts === null) {
- // no accounts detected
- } else if (currentAccounts.length > 1) {
- // Add choose account code here
- } else if (currentAccounts.length === 1) {
- username = currentAccounts[0].username;
+ if (response !== null) {
+ accountId = response.account.homeAccountId;
+ // Display signed-in user content, call API, etc.
+ } else {
+ // In case multiple accounts exist, you can select
+ const currentAccounts = myMsal.getAllAccounts();
+
+ if (currentAccounts.length === 0) {
+ // no accounts signed-in, attempt to sign a user in
+ myMsal.loginRedirect(loginRequest);
+ } else if (currentAccounts.length > 1) {
+ // Add choose account code here
+ } else if (currentAccounts.length === 1) {
+ accountId = currentAccounts[0].homeAccountId;
+ }
} } myMsal.handleRedirectPromise().then(handleResponse);-
-myMsal.loginRedirect(loginRequest);
``` # [JavaScript (MSAL.js v1)](#tab/javascript1)
active-directory Pim Apis Concept https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/privileged-identity-management/pim-apis-concept.md
- Title: API concepts in Privileged Identity management - Azure AD | Microsoft Docs
-description: Information for understanding the APIs in Azure AD Privileged Identity Management (PIM).
------- Previously updated : 05/04/2021----
-# Understand the Privileged Identity Management APIs
-
-You can perform Privileged Identity Management (PIM) tasks using the Microsoft Graph APIs for Azure Active Directory (Azure AD) roles and the Azure Resource Manager API for Azure resource roles (sometimes called Azure RBAC roles). This article describes important concepts for using the APIs for Privileged Identity Management.
-
-For requests and other details about PIM APIs, check out:
--- PIM for Azure AD roles API reference-- [PIM for Azure resource roles API reference](/rest/api/authorization/roleeligibilityschedulerequests)-
-> [!IMPORTANT]
-> PIM APIs [!INCLUDE [PREVIEW BOILERPLATE](../../../includes/active-directory-develop-preview.md)]
-
-## PIM API history
-
-There have been several iterations of the PIM API over the past few years. You'll find some overlaps in functionality, but they don't represent a linear progression of versions.
-
-### Iteration 1 – only supports Azure AD roles, deprecating
-
-Under the /beta/privilegedRoles endpoint, Microsoft had a classic version of the PIM API which is no longer supported in most tenants. We are in the process of deprecating remaining access to this API on 05/31.
-
-### Iteration 2 – supports Azure AD roles and Azure resource roles
-
-Under the /beta/privilegedAccess endpoint, Microsoft supported both /aadRoles and /azureResources. This endpoint is still available in your tenant but Microsoft recommends against starting any new development with this API. This beta API will never be released to general availability and will be eventually deprecated.
-
-### Current iteration – Azure AD roles in Microsoft Graph and Azure resource roles in Azure Resource Manager
-
-Now in beta, Microsoft has the final iteration of the PIM API before we release the API to general availability. Based on customer feedback, the Azure AD PIM API is now under the unifiedRoleManagement set of API and the Azure Resource PIM API is now under the Azure Resource Manager role assignment API. These locations also provide a few additional benefits including:
--- Alignment of the PIM API for regular role assignment API for both Azure AD roles and Azure Resource roles.-- Reducing the need to call additional PIM API to onboard a resource, get a resource, or get role definition.-- Supporting app-only permissions.-- New features such as approval and email notification configuration.-
-In the current iteration, there is *no API support* for PIM alerts and privileged access groups. They are on the roadmap for future development.
-
-## Current permissions required
--- Azure AD roles-
- To call the PIM Graph API for Azure AD roles, you will need at least one of the following permissions:
-
- - RoleManagement.ReadWrite.Directory
- - RoleManagement.Read.Directory
-
- The easiest way to specify the required permissions is to use the Azure AD consent framework.
--- Azure resource roles-
-  The PIM API for Azure resource roles is developed on top of the Azure Resource Manager framework. You will need to give consent to Azure Resource Management but won't need any graph permission. You will also need to make sure the user or the service principal calling the API has at least the Owner or User Access Administrator role on the resource you are trying to administer.
-
-## Calling PIM API with an app-only token
--- Azure AD roles-
- PIM API now supports app-only permissions on top of delegated permissions. For app-only permissions, you must call the API with an application that's already been consented to the above permissions. For delegated permission, you must call the PIM API with both a user and an application token. The user must be assigned to either the Global Administrator role or Privileged Role Administrator role, and ensure that the service principal calling the API has at least the Owner or User Access Administrator role on the resource you are trying to administer.
--- Azure resource roles-
- PIM API for Azure resources supports both user only and application only calls. Simply make sure the service principal has either the owner or user access administrator role on the resource.
-
-## Design of current API iteration
-
-PIM API consists of two categories that are consistent for both the API for Azure AD roles and Azure resource roles: assignment and activation API requests, and policy settings.
-
-### Assignment and activation API
-
-To make eligible assignments, time-bound eligible/active assignments, and to activate assignments, PIM provides the following entities:
--- RoleAssignmentSchedule-- RoleEligibilitySchedule-- RoleAssignmentScheduleInstance-- RoleEligibilityScheduleInstance-- RoleAssignmentScheduleRequest-- RoleEligibilityScheduleRequest-
-These entities work alongside pre-existing roleDefinition and roleAssignment entities for both Azure AD roles and Azure roles to allow you to create end to end scenarios.
--- If you are trying to create or retrieve a persistent (active) role assignment that does not have a schedule (start or end time), you should avoid these PIM entities and focus on the read/write operations under the roleAssignment entity--- To create an eligible assignment with or without an expiration time you can use the write operation on roleEligibilityScheduleRequest--- To create a persistent (active) assignment with a schedule (start or end time), you can use the write operation on roleAssignmentScheduleRequest --- To activate an eligible assignment, you should also use the write operation on roleAssignmentScheduleRequest with a modified action parameter called selfActivate-
-Each of the request objects would either create a roleAssignmentSchedule or a roleEligibilitySchedule object. These objects are read-only and show a schedule of all the current and future assignments.
-
-When an eligible assignment is activated, the roleEligibilityScheduleInstance continues to exist. The roleAssignmentScheduleRequest for the activation would create a separate roleAssignmentSchedule and roleAssignmentScheduleInstance for that activated duration.
-
-The instance objects are the actual assignments that currently exist whether it is an eligible assignment or an active assignment. You should use the GET operation on the instance entity to retrieve a list of eligible assignments / active assignments to a role/user.
-
-### Policy setting API
-
-To manage the setting, we provide the following entities:
--- roleManagementPolicy-- roleManagementPolicyAssignment-
-The *role management policy* defines the setting of the rule. For example, whether MFA/approval is required, whether and who to send the email notifications to, or whether permanent assignments are allowed or not. The *policy assignment* attaches the policy to a specific role.
-
-The two-entity design could support future scenarios such as attaching a policy to multiple roles. For now, use this API is to get a list of all the roleManagementPolicyAssignments, filter it by the roleDefinitionID you want to modify, and then update the policy associated with the policyAssignment.
-
-## Relationship between PIM entities and role assignment entities
-
-The only link between the PIM entity and the role assignment entity for persistent (active) assignment for either Azure AD roles or Azure roles is the roleAssignmentScheduleInstance. There is a one-to-one mapping between the two entities. That mapping means roleAssignment and roleAssignmentScheduleInstance would both include:
--- Persistent (active) assignments made outside of PIM-- Persistent (active) assignments with a schedule made inside PIM-- Activated eligible assignments-
-## Next steps
active-directory Pim Apis https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/privileged-identity-management/pim-apis.md
Title: Microsoft Graph APIs for PIM (Preview) - Azure AD | Microsoft Docs
-description: Provides information about using the Microsoft Graph APIs for Azure AD Privileged Identity Management (PIM) (Preview).
+ Title: API concepts in Privileged Identity management - Azure AD | Microsoft Docs
+description: Information for understanding the APIs in Azure AD Privileged Identity Management (PIM).
documentationcenter: ''
Previously updated : 01/02/2020 Last updated : 05/14/2021
-# Microsoft Graph APIs for Privileged Identity Management (Preview)
+# Understand the Privileged Identity Management APIs
-You can perform Privileged Identity Management tasks using the [Microsoft Graph APIs](/graph/overview) for Azure Active Directory. This article describes important concepts for using the Microsoft Graph APIs for Privileged Identity Management.
+You can perform Privileged Identity Management (PIM) tasks using the Microsoft Graph APIs for Azure Active Directory (Azure AD) roles and the Azure Resource Manager API for Azure resource roles (sometimes called Azure RBAC roles). This article describes important concepts for using the APIs for Privileged Identity Management.
-For details about the Microsoft Graph APIs, check out the [Azure AD Privileged Identity Management API reference](/graph/api/resources/privilegedidentitymanagement-root?view=graph-rest-beta&preserve-view=true).
+For requests and other details about PIM APIs, check out:
+
+- [PIM for Azure AD roles API reference](/graph/api/resources/unifiedroleeligibilityschedulerequest?view=graph-rest-beta&preserve-view=true)
+- [PIM for Azure resource roles API reference](/rest/api/authorization/roleeligibilityschedulerequests)
> [!IMPORTANT]
-> APIs under the /beta version in Microsoft Graph are in preview and are subject to change. Use of these APIs in production applications is not supported.
+> PIM APIs [!INCLUDE [PREVIEW BOILERPLATE](../../../includes/active-directory-develop-preview.md)]
+
+## PIM API history
+
+There have been several iterations of the PIM API over the past few years. You'll find some overlaps in functionality, but they don't represent a linear progression of versions.
+
+### Iteration 1 – only supports Azure AD roles, deprecating
+
+Under the /beta/privilegedRoles endpoint, Microsoft had a classic version of the PIM API which is no longer supported in most tenants. We are in the process of deprecating remaining access to this API on 05/31.
+
+### Iteration 2 – supports Azure AD roles and Azure resource roles
+
+Under the /beta/privilegedAccess endpoint, Microsoft supported both /aadRoles and /azureResources. This endpoint is still available in your tenant but Microsoft recommends against starting any new development with this API. This beta API will never be released to general availability and will be eventually deprecated.
+
+### Current iteration – Azure AD roles in Microsoft Graph and Azure resource roles in Azure Resource Manager
+
+Now in beta, Microsoft has the final iteration of the PIM API before we release the API to general availability. Based on customer feedback, the Azure AD PIM API is now under the unifiedRoleManagement set of APIs and the Azure Resource PIM API is now under the Azure Resource Manager role assignment API. These locations also provide a few additional benefits, including:
+
+- Alignment of the PIM API with the regular role assignment API for both Azure AD roles and Azure resource roles.
+- Reducing the need to call additional PIM API to onboard a resource, get a resource, or get role definition.
+- Supporting app-only permissions.
+- New features such as approval and email notification configuration.
+
+In the current iteration, there is no API support for PIM alerts and privileged access groups.
+
+## Current permissions required
+
+### Azure AD roles
+
+ To call the PIM Graph API for Azure AD roles, you will need at least one of the following permissions:
+
+- RoleManagement.ReadWrite.Directory
+- RoleManagement.Read.Directory
+
+ The easiest way to specify the required permissions is to use the Azure AD consent framework.
+
+### Azure resource roles
+
+   The PIM API for Azure resource roles is developed on top of the Azure Resource Manager framework. You will need to give consent to Azure Resource Management but won't need any Graph API permission. You will also need to make sure the user or the service principal calling the API has at least the Owner or User Access Administrator role on the resource you are trying to administer.
+
+## Calling PIM API with an app-only token
+
+### Azure AD roles
+
+ PIM API now supports app-only permissions on top of delegated permissions.
+
+- For app-only permissions, you must call the API with an application that has already been granted the required Azure AD or Azure role permissions.
+- For delegated permission, you must call the PIM API with both a user and an application token. The user must be assigned to either the Global Administrator role or Privileged Role Administrator role, and ensure that the service principal calling the API has at least the Owner or User Access Administrator role on the resource you are trying to administer.
+
+### Azure resource roles
+
+  PIM API for Azure resources supports both user-only and application-only calls. Simply make sure the service principal has either the Owner or User Access Administrator role on the resource.
+
+## Design of current API iteration
+
+PIM API consists of two categories that are consistent for both the API for Azure AD roles and Azure resource roles: assignment and activation API requests, and policy settings.
+
+### Assignment and activation API
+
+To make eligible assignments, time-bound eligible/active assignments, and to activate assignments, PIM provides the following entities:
+
+- RoleAssignmentSchedule
+- RoleEligibilitySchedule
+- RoleAssignmentScheduleInstance
+- RoleEligibilityScheduleInstance
+- RoleAssignmentScheduleRequest
+- RoleEligibilityScheduleRequest
+
+These entities work alongside pre-existing roleDefinition and roleAssignment entities for both Azure AD roles and Azure roles to allow you to create end to end scenarios.
+
+- If you are trying to create or retrieve a persistent (active) role assignment that does not have a schedule (start or end time), you should avoid these PIM entities and focus on the read/write operations under the roleAssignment entity
+
+- To create an eligible assignment with or without an expiration time you can use the write operation on roleEligibilityScheduleRequest
-## Required permissions
+- To create a persistent (active) assignment with a schedule (start or end time), you can use the write operation on roleAssignmentScheduleRequest
-To call the Microsoft Graph APIs for Privileged Identity Management, you must have **one or more** of the following permissions:
+- To activate an eligible assignment, you should also use the write operation on roleAssignmentScheduleRequest with a modified action parameter called selfActivate
-- `Directory.AccessAsUser.All`-- `Directory.Read.All`-- `Directory.ReadWrite.All`-- `PrivilegedAccess.ReadWrite.AzureAD`
+Each of the request objects would either create a roleAssignmentSchedule or a roleEligibilitySchedule object. These objects are read-only and show a schedule of all the current and future assignments.
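As a minimal, hedged sketch of the write operation on roleEligibilityScheduleRequest described above, the following `az rest` call posts to the beta Microsoft Graph endpoint. The GUID placeholders, the `AdminAssign` action casing, and the `scheduleInfo` shape are assumptions used to illustrate the pattern; verify them against the PIM for Azure AD roles API reference before use.

```azurecli-interactive
# Sketch: create an eligible Azure AD role assignment via the beta Graph endpoint.
# Placeholders: <principal-object-id> and <role-definition-id> must be replaced with real GUIDs.
az rest --method POST \
  --url "https://graph.microsoft.com/beta/roleManagement/directory/roleEligibilityScheduleRequests" \
  --body '{
    "action": "AdminAssign",
    "justification": "Example eligible assignment",
    "principalId": "<principal-object-id>",
    "roleDefinitionId": "<role-definition-id>",
    "directoryScopeId": "/",
    "scheduleInfo": {
      "startDateTime": "2021-06-01T00:00:00Z",
      "expiration": { "type": "AfterDuration", "duration": "P180D" }
    }
  }'
```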
-### Set permissions
+When an eligible assignment is activated, the roleEligibilityScheduleInstance continues to exist. The roleAssignmentScheduleRequest for the activation would create a separate roleAssignmentSchedule and roleAssignmentScheduleInstance for that activated duration.
-For applications to call the Microsoft Graph APIs for Privileged Identity Management, they must have the required permissions. The easiest way to specify the required permissions is to use the [Azure AD consent framework](../develop/consent-framework.md).
+The instance objects are the actual assignments that currently exist whether it is an eligible assignment or an active assignment. You should use the GET operation on the instance entity to retrieve a list of eligible assignments / active assignments to a role/user.
-### Set permissions in Graph Explorer
+### Policy setting API
-If you are using the Graph Explorer to test your calls, you can specify the permissions in the tool.
+To manage the setting, we provide the following entities:
-1. Sign in to [Graph Explorer](https://developer.microsoft.com/graph/graph-explorer) as a Global Administrator.
+- roleManagementPolicy
+- roleManagementPolicyAssignment
-1. Click **modify permissions**.
+The *role management policy* defines the settings of the rule: for example, whether MFA or approval is required, whether email notifications are sent and to whom, and whether permanent assignments are allowed. The *policy assignment* attaches the policy to a specific role.
- ![Screenshot that shows the "Graph Explorer" page with the "modify permissions" action selected.](./media/pim-apis/graph-explorer.png)
+Use this API to get a list of all the roleManagementPolicyAssignments, filter it by the roleDefinitionID you want to modify, and then update the policy associated with the policyAssignment.
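A hedged sketch of that lookup with `az rest`; the `scopeId`/`scopeType` filter values shown are assumptions for directory (Azure AD) roles, so confirm the exact filter syntax in the policy API reference:

```azurecli-interactive
# Sketch: list role management policy assignments for Azure AD (directory) roles,
# then pick out the one whose roleDefinitionId matches the role you want to modify.
az rest --method GET \
  --url "https://graph.microsoft.com/beta/policies/roleManagementPolicyAssignments?\$filter=scopeId eq '/' and scopeType eq 'DirectoryRole'"
```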
-1. Select the checkboxes next to the permissions you want to include. `PrivilegedAccess.ReadWrite.AzureAD` is not yet available in Graph Explorer.
+## Relationship between PIM entities and role assignment entities
- ![Graph Explorer - modify permissions](./media/pim-apis/graph-explorer-modify-permissions.png)
+The only link between the PIM entity and the role assignment entity for persistent (active) assignment for either Azure AD roles or Azure roles is the roleAssignmentScheduleInstance. There is a one-to-one mapping between the two entities. That mapping means roleAssignment and roleAssignmentScheduleInstance would both include:
-1. Click **Modify Permissions** to apply the permission changes.
+- Persistent (active) assignments made outside of PIM
+- Persistent (active) assignments with a schedule made inside PIM
+- Activated eligible assignments
## Next steps -- [Azure AD Privileged Identity Management API reference](/graph/api/resources/privilegedidentitymanagement-root?view=graph-rest-beta&preserve-view=true)
+- [Azure AD Privileged Identity Management API reference](/graph/api/resources/privilegedidentitymanagement-root?view=graph-rest-beta&preserve-view=true)
active-directory Howto Download Logs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/reports-monitoring/howto-download-logs.md
Previously updated : 05/05/2021 Last updated : 05/14/2021
This article explains how to download activity logs in Azure AD.
The option to download the data of an activity log is available in all editions of Azure AD.
+You can also download activity logs using Microsoft Graph; however, downloading logs programmatically requires a premium license.
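For example, a minimal sketch of pulling recent sign-in events through Microsoft Graph with `az rest` (assuming the signed-in caller already holds the required reporting permissions):

```azurecli-interactive
# Sketch: download the 100 most recent sign-in events to a local JSON file.
az rest --method GET \
  --url 'https://graph.microsoft.com/v1.0/auditLogs/signIns?$top=100' > signins.json
```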
+ ## Who can do it?
active-directory Howto Manage Inactive User Accounts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/reports-monitoring/howto-manage-inactive-user-accounts.md
To access this property, you need an Azure Active Directory Premium edition.
To read this property, you need to grant the following rights: - AuditLogs.Read.All-- Organisation.Read.All
+- Organization.Read.All
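As an illustrative sketch, once those permissions are granted you can read the property from the beta users endpoint with `az rest` (the `signInActivity` property was exposed only in beta at the time of writing):

```azurecli-interactive
# Sketch: list users together with their last sign-in activity (beta endpoint).
az rest --method GET \
  --url 'https://graph.microsoft.com/beta/users?$select=displayName,userPrincipalName,signInActivity'
```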
### When does Azure AD update the property?
To generate a lastSignInDateTime timestamp, you need a successful sign-in. Becau
* [Get data using the Azure Active Directory reporting API with certificates](tutorial-access-api-with-certificates.md) * [Audit API reference](/graph/api/resources/directoryaudit) * [Sign-in activity report API reference](/graph/api/resources/signin)+
aks Cluster Container Registry Integration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/cluster-container-registry-integration.md
nginx0-deployment-669dfc4d4b-xdpd6 1/1 Running 0 20s
### Troubleshooting * Run the [az aks check-acr](/cli/azure/aks#az_aks_check_acr) command to validate that the registry is accessible from the AKS cluster.
-* Learn more about [ACR Diagnostics](../container-registry/container-registry-diagnostics-audit-logs.md)
+* Learn more about [ACR Monitoring](../container-registry/monitor-service.md)
* Learn more about [ACR Health](../container-registry/container-registry-check-health.md) <!-- LINKS - external -->
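To illustrate the `az aks check-acr` bullet above, a minimal sketch (the cluster and registry names are placeholders):

```azurecli-interactive
# Sketch: validate that nodes in the AKS cluster can authenticate to and pull from the registry.
az aks check-acr --resource-group myResourceGroup --name myAKSCluster --acr myregistry.azurecr.io
```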
aks Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/faq.md
spec:
The issue has been resolved by Kubernetes v1.20, refer [Kubernetes 1.20: Granular Control of Volume Permission Changes](https://kubernetes.io/blog/2020/12/14/kubernetes-release-1.20-fsgroupchangepolicy-fsgrouppolicy/) for more details.
+## Can I use FIPS cryptographic libraries with deployments on AKS?
+
+FIPS-enabled nodes are currently available in preview on Linux-based node pools. For more details, see [Add a FIPS-enabled node pool (preview)](use-multiple-node-pools.md#add-a-fips-enabled-node-pool-preview).
<!-- LINKS - internal -->
aks Planned Maintenance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/planned-maintenance.md
To delete a certain maintenance configuration window in your AKS Cluster, use th
az aks maintenanceconfiguration delete -g MyResourceGroup --cluster-name myAKSCluster --name default ```
+## Using Planned Maintenance with Cluster Auto-Upgrade
+
+Planned Maintenance will detect if you are using Cluster Auto-Upgrade and schedule your upgrades during your maintenance window automatically. For more details about Cluster Auto-Upgrade, see [Upgrade an Azure Kubernetes Service (AKS) cluster][aks-upgrade].
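For instance, a hedged sketch of defining the window that auto-upgrade would then honor; the `--weekday` and `--start-hour` flags follow the preview `az aks maintenanceconfiguration add` command, so check the command reference for your CLI version:

```azurecli-interactive
# Sketch: allow maintenance (including auto-upgrades) only on Mondays starting at 01:00 UTC.
az aks maintenanceconfiguration add \
  --resource-group MyResourceGroup \
  --cluster-name myAKSCluster \
  --name default \
  --weekday Monday \
  --start-hour 1
```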
+ ## Next steps - To get started with upgrading your AKS cluster, see [Upgrade an AKS cluster][aks-upgrade]
aks Upgrade Cluster https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/upgrade-cluster.md
To set the auto-upgrade channel on existing cluster, update the *auto-upgrade-ch
az aks update --resource-group myResourceGroup --name myAKSCluster --auto-upgrade-channel stable ```
+## Using Cluster Auto-Upgrade with Planned Maintenance
+
+If you are using Planned Maintenance as well as Auto-Upgrade, your upgrade will start during your specified maintenance window. For more details on Planned Maintenance, see [Use Planned Maintenance to schedule maintenance windows for your Azure Kubernetes Service (AKS) cluster (preview)][planned-maintenance].
+ ## Next steps This article showed you how to upgrade an existing AKS cluster. To learn more about deploying and managing AKS clusters, see the set of tutorials.
This article showed you how to upgrade an existing AKS cluster. To learn more ab
[az-provider-register]: /cli/azure/provider#az_provider_register [nodepool-upgrade]: use-multiple-node-pools.md#upgrade-a-node-pool [upgrade-cluster]: #upgrade-an-aks-cluster
+[planned-maintenance]: planned-maintenance.md
aks Use Multiple Node Pools https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/use-multiple-node-pools.md
az aks nodepool list -g myResourceGroup --cluster-name myAKSCluster
] ```
+## Add a FIPS-enabled node pool (preview)
+
+The Federal Information Processing Standard (FIPS) 140-2 is a US government standard that defines minimum security requirements for cryptographic modules in information technology products and systems. AKS allows you to create Linux-based node pools with FIPS 140-2 enabled. Deployments running on FIPS-enabled node pools can use those cryptographic modules to provide increased security and help meet security controls as part of FedRAMP compliance. For more details on FIPS 140-2, see [Federal Information Processing Standard (FIPS) 140-2][fips].
+
+FIPS-enabled node pools are currently in preview.
++
+You will need the *aks-preview* Azure CLI extension version *0.5.11* or later. Install the *aks-preview* Azure CLI extension by using the [az extension add][az-extension-add] command. Or install any available updates by using the [az extension update][az-extension-update] command.
+
+```azurecli-interactive
+# Install the aks-preview extension
+az extension add --name aks-preview
+
+# Update the extension to make sure you have the latest version installed
+az extension update --name aks-preview
+```
+
+To use the feature, you must also enable the `FIPSPreview` feature flag on your subscription.
+
+Register the `FIPSPreview` feature flag by using the [az feature register][az-feature-register] command, as shown in the following example:
+
+```azurecli-interactive
+az feature register --namespace "Microsoft.ContainerService" --name "FIPSPreview"
+```
+
+It takes a few minutes for the status to show *Registered*. Verify the registration status by using the [az feature list][az-feature-list] command:
+
+```azurecli-interactive
+az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/FIPSPreview')].{Name:name,State:properties.state}"
+```
+
+When ready, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [az provider register][az-provider-register] command:
+
+```azurecli-interactive
+az provider register --namespace Microsoft.ContainerService
+```
+
+FIPS-enabled node pools have the following limitations:
+
+* Currently, you can only have FIPS-enabled Linux-based node pools running on Ubuntu 18.04.
+* FIPS-enabled node pools require Kubernetes version 1.19 and greater.
+* To update the underlying packages or modules used for FIPS, you must use [Node Image Upgrade][node-image-upgrade].
+
+> [!IMPORTANT]
+> The FIPS-enabled Linux image is a different image than the default Linux image used for Linux-based node pools. To enable FIPS on a node pool, you must create a new Linux-based node pool. You can't enable FIPS on existing node pools.
+>
+> FIPS-enabled node images may have different version numbers, such as kernel version, than images that are not FIPS-enabled. Also, the update cycle for FIPS-enabled node pools and node images may differ from node pools and images that are not FIPS-enabled.
+
+To create a FIPS-enabled node pool, use [az aks nodepool add][az-aks-nodepool-add] with the *--enable-fips-image* parameter when creating a node pool.
+
+```azurecli-interactive
+az aks nodepool add \
+ --resource-group myResourceGroup \
+ --cluster-name myAKSCluster \
+ --name fipsnp \
+ --enable-fips-image
+```
+
+> [!NOTE]
+> You can also use the *--enable-fips-image* parameter with [az aks create][az-aks-create] when creating a cluster to enable FIPS on the default node pool. When adding node pools to a cluster created in this way, you still must use the *--enable-fips-image* parameter when adding node pools to create a FIPS-enabled node pool.
+
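A minimal sketch of that cluster-level variant, with placeholder resource names:

```azurecli-interactive
# Sketch: create a cluster whose default (initial) node pool is FIPS-enabled.
az aks create \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --enable-fips-image \
  --generate-ssh-keys
```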
+To verify your node pool is FIPS-enabled, use [az aks show][az-aks-show] to check the *enableFIPS* value in *agentPoolProfiles*.
+
+```azurecli-interactive
+az aks show --resource-group myResourceGroup --name myAKSCluster --query="agentPoolProfiles[].{Name:name, enableFips:enableFips}" -o table
+```
+
+The following example output shows the *fipsnp* node pool is FIPS-enabled and *nodepool1* is not.
+
+```output
+Name enableFips
+
+fipsnp True
+nodepool1 False
+```
+
+You can also verify deployments have access to the FIPS cryptographic libraries using `kubectl debug` on a node in the FIPS-enabled node pool. Use `kubectl get nodes` to list the nodes:
+
+```output
+$ kubectl get nodes
+NAME STATUS ROLES AGE VERSION
+aks-fipsnp-12345678-vmss000000 Ready agent 6m4s v1.19.9
+aks-fipsnp-12345678-vmss000001 Ready agent 5m21s v1.19.9
+aks-fipsnp-12345678-vmss000002 Ready agent 6m8s v1.19.9
+aks-nodepool1-12345678-vmss000000 Ready agent 34m v1.19.9
+```
+
+In the above example, the nodes starting with `aks-fipsnp` are part of the FIPS-enabled node pool. Use `kubectl debug` to run a deployment with an interactive session on one of those nodes in the FIPS-enabled node pool.
+
+```azurecli-interactive
+kubectl debug node/aks-fipsnp-12345678-vmss000000 -it --image=mcr.microsoft.com/aks/fundamental/base-ubuntu:v0.0.11
+```
+
+From the interactive session, you can verify the FIPS cryptographic libraries are enabled:
+
+```output
+root@aks-fipsnp-12345678-vmss000000:/# cat /proc/sys/crypto/fips_enabled
+1
+```
+
+FIPS-enabled node pools also have a *kubernetes.azure.com/fips_enabled=true* label, which can be used by deployments to target those node pools.
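For example, a quick way to confirm which nodes carry that label (the label key is taken from the sentence above):

```azurecli-interactive
# Sketch: list only the nodes that belong to FIPS-enabled node pools by selecting on the label.
kubectl get nodes -l kubernetes.azure.com/fips_enabled=true
```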
+ ## Manage node pools using a Resource Manager template When you use an Azure Resource Manager template to create and managed resources, you can typically update the settings in your template and redeploy to update the resource. With node pools in AKS, the initial node pool profile can't be updated once the AKS cluster has been created. This behavior means that you can't update an existing Resource Manager template, make a change to the node pools, and redeploy. Instead, you must create a separate Resource Manager template that updates only the node pools for an existing AKS cluster.
Use [proximity placement groups][reduce-latency-ppg] to reduce latency for your
[az-aks-nodepool-upgrade]: /cli/azure/aks/nodepool?view=azure-cli-latest&preserve-view=true#az_aks_nodepool_upgrade [az-aks-nodepool-scale]: /cli/azure/aks/nodepool?view=azure-cli-latest&preserve-view=true#az_aks_nodepool_scale [az-aks-nodepool-delete]: /cli/azure/aks/nodepool?view=azure-cli-latest&preserve-view=true#az_aks_nodepool_delete
+[az-aks-show]: /cli/azure/aks#az_aks_show
[az-extension-add]: /cli/azure/extension?view=azure-cli-latest&preserve-view=true#az_extension_add [az-extension-update]: /cli/azure/extension?view=azure-cli-latest&preserve-view=true#az_extension_update
+[az-feature-register]: /cli/azure/feature#az_feature_register
+[az-feature-list]: /cli/azure/feature#az_feature_list
+[az-provider-register]: /cli/azure/provider#az_provider_register
[az-group-create]: /cli/azure/group?view=azure-cli-latest&preserve-view=true#az_group_create [az-group-delete]: /cli/azure/group?view=azure-cli-latest&preserve-view=true#az_group_delete [az-deployment-group-create]: /cli/azure/deployment/group?view=azure-cli-latest&preserve-view=true#az_deployment_group_create
Use [proximity placement groups][reduce-latency-ppg] to reduce latency for your
[reduce-latency-ppg]: reduce-latency-ppg.md [public-ip-prefix-benefits]: ../virtual-network/public-ip-address-prefix.md#why-create-a-public-ip-address-prefix [az-public-ip-prefix-create]: /cli/azure/network/public-ip/prefix?view=azure-cli-latest&preserve-view=true#az_network_public_ip_prefix_create
+[node-image-upgrade]: node-image-upgrade.md
+[fips]: /azure/compliance/offerings/offering-fips-140-2
api-management Api Management Caching Policies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/api-management-caching-policies.md
For more information and examples of this policy, see [Custom caching in Azure A
| caching-type | Choose between the following values of the attribute:<br />- `internal` to use the built-in API Management cache,<br />- `external` to use the external cache as described in [Use an external Azure Cache for Redis in Azure API Management](api-management-howto-cache-external.md),<br />- `prefer-external` to use external cache if configured or internal cache otherwise. | No | `prefer-external` | | default-value | A value that will be assigned to the variable if the cache key lookup resulted in a miss. If this attribute is not specified, `null` is assigned. | No | `null` | | key | Cache key value to use in the lookup. | Yes | N/A |
-| variable-name | Name of the [context variable](api-management-policy-expressions.md#ContextVariables) the looked up value will be assigned to, if lookup is successful. If lookup results in a miss, the variable will be assigned the value of the `default-value` attribute or `null`, if the `default-value` attribute is omitted. | Yes | N/A |
+| variable-name | Name of the [context variable](api-management-policy-expressions.md#ContextVariables) the looked up value will be assigned to, if lookup is successful. If lookup results in a miss, the variable will not be set. | Yes | N/A |
### Usage This policy can be used in the following policy [sections](./api-management-howto-policies.md#sections) and [scopes](./api-management-howto-policies.md#scopes).
For more information working with policies, see:
+ [Policies in API Management](api-management-howto-policies.md) + [Transform APIs](transform-api.md) + [Policy Reference](./api-management-policies.md) for a full list of policy statements and their settings
-+ [Policy samples](./policy-reference.md)
++ [Policy samples](./policy-reference.md)
api-management Api Management Howto Policies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/api-management-howto-policies.md
See [Policy samples](./policy-reference.md) for more code examples.
### Apply policies specified at different scopes
-If you have a policy at the global level and a policy configured for an API, then whenever that particular API is used both policies will be applied. API Management allows for deterministic ordering of combined policy statements via the base element.
+If you have a policy at the global level and a policy configured for an API, then whenever that particular API is used both policies will be applied. API Management allows for deterministic ordering of combined policy statements via the `base` element.
```xml <policies>
api-management Api Management Howto Use Managed Service Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/api-management-howto-use-managed-service-identity.md
Last updated 03/09/2021-+
To set up a managed identity in the Azure portal, you'll first create an API Man
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)]
-The following steps walk you through creating an API Management instance and assigning it an identity by using Azure PowerShell.
+The following steps walk you through creating an API Management instance and assigning it an identity by using Azure PowerShell.
1. If needed, install Azure PowerShell by using the instructions in the [Azure PowerShell guide](/powershell/azure/install-az-ps). Then run `Connect-AzAccount` to create a connection with Azure.
To set up a managed identity in the portal, you'll first create an API Managemen
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)]
-The following steps walk you through creating an API Management instance and assigning it an identity by using Azure PowerShell.
+The following steps walk you through creating an API Management instance and assigning it an identity by using Azure PowerShell.
1. If needed, install the Azure PowerShell by using the instructions in the [Azure PowerShell guide](/powershell/azure/install-az-ps). Then run `Connect-AzAccount` to create a connection with Azure.
For example, a complete Azure Resource Manager template might look like the foll
"[resourceId('Microsoft.ManagedIdentity/userAssignedIdentities', variables('identityName'))]": {} } },
- "dependsOn": [
+ "dependsOn": [
"[resourceId('Microsoft.ManagedIdentity/userAssignedIdentities', variables('identityName'))]" ] }]
The `principalId` property is a unique identifier for the identity that's used f
## Supported scenarios using User Assigned Managed Identity ### <a name="use-ssl-tls-certificate-from-azure-key-vault-ua"></a>Obtain a custom TLS/SSL certificate for the API Management instance from Azure Key Vault
-You can use any user-assigned identity to establish trust between an API Management instance and KeyVault. This trust can then be used to retrieve custom TLS/SSL certificates stored in Azure Key Vault. You can then assign these certificates to custom domains in the API Management instance.
+You can use any user-assigned identity to establish trust between an API Management instance and KeyVault. This trust can then be used to retrieve custom TLS/SSL certificates stored in Azure Key Vault. You can then assign these certificates to custom domains in the API Management instance.
Keep these considerations in mind:
Keep these considerations in mind:
> [!Important] > If you don't provide the object version of the certificate, API Management will automatically obtain the newer version of the certificate within four hours after it's updated in Key Vault.
-For the complete template, see [API Management with KeyVault based SSL using User Assigned Identity](https://github.com/Azure/azure-quickstart-templates/blob/master/101-api-management-key-vault-create/azuredeploy.json).
+For the complete template, see [API Management with KeyVault based SSL using User Assigned Identity](https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.apimanagement/api-management-key-vault-create/azuredeploy.json).
In this template, you will deploy:
app-service App Service Ip Restrictions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/app-service-ip-restrictions.md
As part of any rule, you can add additional http header filters. The following h
* X-Azure-FDID * X-FD-HealthProbe
-For each header name you can add up to 8 values separated by comma. The http header filters are evaluated after the rule itself and both conditions must be true for the rule to apply.
+For each header name, you can add up to eight values separated by commas. The http header filters are evaluated after the rule itself and both conditions must be true for the rule to apply.
### Multi-source rules
-Multi-source rules allow you to combine up to 8 IP ranges or 8 Service Tags in a single rule. You might use this if you have more than 512 IP ranges or you want to create logical rules where multiple IP ranges are combined with a single http header filter.
+Multi-source rules allow you to combine up to eight IP ranges or eight Service Tags in a single rule. You might use this if you have more than 512 IP ranges or you want to create logical rules where multiple IP ranges are combined with a single http header filter.
Multi-source rules are defined the same way you define single-source rules, but with each range separated by a comma.
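A hedged sketch of a multi-source rule with the Azure CLI; the resource names are placeholders, and this assumes a CLI version that supports multi-source rules (see the version note later in this article):

```azurecli-interactive
# Sketch: one Allow rule that covers two IP ranges, passed as a single comma-separated value.
az webapp config access-restriction add \
  --resource-group ResourceGroup \
  --name AppName \
  --rule-name "Multi-source example" \
  --action Allow \
  --ip-address "192.168.1.0/24,192.168.10.0/24" \
  --priority 200
```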
You can add access restrictions programmatically by doing either of the followin
--rule-name 'IP example rule' --action Allow --ip-address 122.133.144.0/24 --priority 100 ```
+ > [!NOTE]
+ > Working with service tags, http headers or multi-source rules in Azure CLI requires at least version 2.23.0. You can verify the version of the installed module with: ```az version```
+ * Use [Azure PowerShell](/powershell/module/Az.Websites/Add-AzWebAppAccessRestrictionRule). For example:
You can add access restrictions programmatically by doing either of the followin
-Name "Ip example rule" -Priority 100 -Action Allow -IpAddress 122.133.144.0/24 ``` > [!NOTE]
- > Working with service tags, http headers or multi-source rules requires at least version 5.7.0. You can verify the version of the installed module with: **Get-InstalledModule -Name Az**
+ > Working with service tags, http headers or multi-source rules in Azure PowerShell requires at least version 5.7.0. You can verify the version of the installed module with: ```Get-InstalledModule -Name Az```
You can also set values manually by doing either of the following:
app-service App Service Sql Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/app-service-sql-github-actions.md
+
+ Title: "Tutorial: Use GitHub Actions to deploy to App Service for Containers and connect to a database"
+description: Learn how to deploy an ASP.NET core app to Azure and to Azure SQL Database with GitHub Actions
+ms.devlang: csharp
+ Last updated : 04/22/2021++++
+# Tutorial: Use GitHub Actions to deploy to App Service for Containers and connect to a database
+
+This tutorial walks you through setting up a GitHub Actions workflow to deploy a containerized ASP.NET Core application with an [Azure SQL Database](../azure-sql/database/sql-database-paas-overview.md) backend. When you're finished, you have an ASP.NET app running in Azure and connected to SQL Database. You'll first create Azure resources with an [ARM template](/azure/azure-resource-manager/templates/overview) GitHub Actions workflow.
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+>
+> - Use a GitHub Actions workflow to add resources to Azure with an Azure Resource Manager template (ARM template)
+> - Use a GitHub Actions workflow to build a container with the latest web app changes
++
+## Prerequisites
+
+To complete this tutorial, you'll need:
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- A GitHub account. If you don't have one, sign up for [free](https://github.com/join).
+ - A GitHub repository to store your Resource Manager templates and your workflow files. To create one, see [Creating a new repository](https://docs.github.com/en/github/creating-cloning-and-archiving-repositories/creating-a-new-repository).
+
+## Download the sample
+
+[Fork the sample project](https://github.com/Azure-Samples/dotnetcore-containerized-sqldb-ghactions/) in the Azure Samples repo.
+
+```
+https://github.com/Azure-Samples/dotnetcore-containerized-sqldb-ghactions/
+```
+
+## Create the resource group
+
+Open the Azure Cloud Shell at https://shell.azure.com. You can alternatively use the Azure CLI if you've installed it locally. (For more information on Cloud Shell, see the Cloud Shell Overview.)
+
+```azurecli-interactive
+ az group create --name {resource-group-name} --location {resource-group-location}
+```
+
+## Generate deployment credentials
+
+You'll need to authenticate with a service principal for the resource deployment script to work. You can create a [service principal](../active-directory/develop/app-objects-and-service-principals.md#service-principal-object) with the [az ad sp create-for-rbac](/cli/azure/ad/sp#az_ad_sp_create_for_rbac) command in the [Azure CLI](/cli/azure/). Run this command with [Azure Cloud Shell](https://shell.azure.com/) in the Azure portal or by selecting the **Try it** button.
+
+```azurecli-interactive
+ az ad sp create-for-rbac --name "{service-principal-name}" --sdk-auth --role contributor --scopes /subscriptions/{subscription-id}/resourceGroups/{resource-group-name}
+```
+
+In the example, replace the placeholders with your subscription ID, resource group name, and service principal name. The output is a JSON object with the role assignment credentials that provide access to your App Service app. Copy this JSON object for later. For help, go to [configure deployment credentials](https://github.com/Azure/login#configure-deployment-credentials).
+
+```output
+ {
+ "clientId": "<GUID>",
+ "clientSecret": "<GUID>",
+ "subscriptionId": "<GUID>",
+ "tenantId": "<GUID>",
+ (...)
+ }
+```
+
+> [!IMPORTANT]
+> It is always a good practice to grant minimum access. The scope in the previous example is limited to the resource group and not the entire subscription.
+
+## Configure the GitHub secret for authentication
+
+In [GitHub](https://github.com/), browse your repository, select **Settings > Secrets > Add a new secret**.
+
+To use [user-level credentials](#generate-deployment-credentials), paste the entire JSON output from the Azure CLI command into the secret's value field. Give the secret the name `AZURE_CREDENTIALS`.
+
+## Add a SQL Server secret
+
+Create a new secret in your repository for `SQL_SERVER_ADMIN_PASSWORD`. This secret can be any password that meets the Azure standards for password security. You won't be able to access this password again, so save it separately.
+
+## Create Azure resources
+
+The create Azure resources workflow runs an [ARM template](/azure/azure-resource-manager/templates/overview) to deploy resources to Azure. The workflow:
+
+- Checks out source code with the [Checkout action](https://github.com/marketplace/actions/checkout).
+- Logs into Azure with the [Azure Login action](https://github.com/marketplace/actions/azure-login) and gathers environment and Azure resource information.
+- Deploys resources with the [Azure Resource Manager Deploy action](https://github.com/marketplace/actions/deploy-azure-resource-manager-arm-template).
+
+To run the create Azure resources workflow:
+
+1. Open the `azuredeploy.yaml` file in `.github/workflows` within your repository.
+
+1. Update the value of `AZURE_RESOURCE_GROUP` to your resource group name.
+
+1. Go to **Actions** and select **Run workflow**.
+
+ :::image type="content" source="media/github-actions-workflows/github-actions-run-workflow.png" alt-text="Run the GitHub Actions workflow to add resources.":::
+
+1. Verify that your action ran successfully by checking for a green checkmark on the **Actions** page.
+
+ :::image type="content" source="media/github-actions-workflows/create-resources-success.png" alt-text="Successful run of create resources. ":::
+
+## Add container registry and SQL secrets
+
+1. In the Azure portal, open your newly created Azure Container Registry in your resource group.
+
+1. Go to **Access keys** and copy the username and password values.
+
+1. Create new GitHub secrets named `ACR_USERNAME` and `ACR_PASSWORD` in your repository, using the username and password values you copied.
+
+1. In the Azure portal, open your Azure SQL database. Open **Connection strings** and copy the value.
+
+1. Create a new secret for `SQL_CONNECTION_STRING`. Replace `{your_password}` with your `SQL_SERVER_ADMIN_PASSWORD`.
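+
+If you'd rather collect these values from the command line than from the portal, a rough Azure CLI sketch follows (registry, server, and database names are placeholders, and the admin user must be enabled on the registry):
+
+```azurecli-interactive
+# Container registry admin credentials for the ACR_USERNAME and ACR_PASSWORD secrets
+az acr credential show --name {registry-name} --query "{username:username, password:passwords[0].value}"
+
+# ADO.NET connection string template for SQL_CONNECTION_STRING; substitute the password yourself
+az sql db show-connection-string --client ado.net --server {server-name} --name {database-name}
+```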
+
+## Build, push, and deploy your image
+
+The build, push, and deploy workflow builds a container with the latest app changes, pushes the container to [Azure Container Registry](/azure/container-registry/), and updates the web application staging slot to point to the latest container pushed. The workflow contains a build job and a deploy job:
+
+- The build job checks out source code with the [Checkout action](https://github.com/marketplace/actions/checkout). The job then uses the [Docker login action](https://github.com/marketplace/actions/docker-login) and a custom script to authenticate with Azure Container Registry, build a container image, and deploy it to Azure Container Registry.
+- The deployment job logs into Azure with the [Azure Login action](https://github.com/marketplace/actions/azure-login) and gathers environment and Azure resource information. The job then updates Web App Settings with the [Azure App Service Settings action](https://github.com/marketplace/actions/azure-app-service-settings) and deploys to an App Service staging slot with the [Azure Web Deploy action](https://github.com/marketplace/actions/azure-webapp). Last, the job runs a custom script to update the SQL database and swaps the staging slot into production.
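+
+As background, the build-and-push portion of such a job typically boils down to commands like the following (a sketch only, not the repository's actual script; the registry and image names are placeholders):
+
+```bash
+# Authenticate to the registry, then build and push the application image
+az acr login --name {registry-name}
+docker build -t {registry-name}.azurecr.io/{image-name}:{tag} .
+docker push {registry-name}.azurecr.io/{image-name}:{tag}
+```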
+
+To run the build, push, and deploy workflow:
+
+1. Open your `build-deploy.yaml` file in `.github/workflows` within your repository.
+
+1. Verify that the environment variables for `AZURE_RESOURCE_GROUP` and `WEB_APP_NAME` match the ones in `azuredeploy.yaml`.
+
+1. Update the `ACR_LOGIN_SERVER` value for your Azure Container Registry login server.
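+
+   If you're unsure of the login server value, it can be read with the Azure CLI (the registry name is a placeholder):
+
+   ```azurecli-interactive
+   az acr show --name {registry-name} --query loginServer --output tsv
+   ```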
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Learn about Azure and GitHub integration](/azure/developer/github/)
app-service Deploy Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/deploy-resource-manager-template.md
You deploy resources in the following order:
Typically, your solution includes only some of these resources and tiers. For missing tiers, map lower resources to the next-higher tier.
-The following example shows part of a template. The value of the connection string configuration depends on the MSDeploy extension. The MSDeploy extension depends on the web app and database.
+The following example shows part of a template. The value of the connection string configuration depends on the MSDeploy extension. The MSDeploy extension depends on the web app and database.
```json {
The following example shows part of a template. The value of the connection stri
} ```
-For a ready-to-run sample that uses the code above, see [Template: Build a simple Umbraco Web App](https://github.com/Azure/azure-quickstart-templates/tree/master/umbraco-webapp-simple).
+For a ready-to-run sample that uses the code above, see [Template: Build a simple Umbraco Web App](https://github.com/Azure/azure-quickstart-templates/tree/master/application-workloads/umbraco/umbraco-webapp-simple).
## Find information about MSDeploy errors
app-service App Gateway With Service Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/networking/app-gateway-with-service-endpoints.md
You can now access the App Service through Application Gateway, but if you try t
![Screenshot shows the text of an Error 403 - Forbidden.](./media/app-gateway-with-service-endpoints/website-403-forbidden.png) ## Using Azure Resource Manager template
-The [Resource Manager deployment template][template-app-gateway-app-service-complete] will provision a complete scenario. The scenario consists of an App Service instance locked down with service endpoints and access restriction to only receive traffic from Application Gateway. The template includes many Smart Defaults and unique postfixes added to the resource names for it to be simple. To override them, you'll have to clone the repo or download the template and edit it.
+The [Resource Manager deployment template][template-app-gateway-app-service-complete] will provision a complete scenario. The scenario consists of an App Service instance locked down with service endpoints and access restriction to only receive traffic from Application Gateway. The template includes many Smart Defaults and unique postfixes added to the resource names for it to be simple. To override them, you'll have to clone the repo or download the template and edit it.
To apply the template you can use the Deploy to Azure button found in the description of the template, or you can use appropriate PowerShell/CLI.
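A minimal CLI sketch for deploying the template into an existing resource group is shown below; `{template-uri}` is a placeholder for the raw URL of the `azuredeploy.json` file in the linked repository, and parameters are left at their defaults:

```azurecli
az deployment group create --resource-group myRG --template-uri {template-uri}
```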
az webapp config access-restriction add --resource-group myRG --name myWebApp --
In the default configuration, the command will ensure both setup of the service endpoint configuration in the subnet and the access restriction in the App Service. ## Considerations for ILB ASE
-ILB ASE isn't exposed to the internet and traffic between the instance and an Application Gateway is therefore already isolated to the Virtual Network. The following [how-to guide](../environment/integrate-with-application-gateway.md) configures an ILB ASE and integrates it with an Application Gateway using Azure portal.
+ILB ASE isn't exposed to the internet and traffic between the instance and an Application Gateway is therefore already isolated to the Virtual Network. The following [how-to guide](../environment/integrate-with-application-gateway.md) configures an ILB ASE and integrates it with an Application Gateway using Azure portal.
If you want to ensure that only traffic from the Application Gateway subnet is reaching the ASE, you can configure a Network security group (NSG) which affect all web apps in the ASE. For the NSG, you are able to specify the subnet IP range and optionally the ports (80/443). Make sure you don't override the [required NSG rules](../environment/network-info.md#network-security-groups) for ASE to function correctly.
To isolate traffic to an individual web app you'll need to use ip-based access r
## Considerations for External ASE External ASE has a public facing load balancer like multi-tenant App Service. Service endpoints don't work for ASE, and that's why you'll have to use ip-based access restrictions using the public IP of the Application Gateway instance. To create an External ASE using the Azure portal, you can follow this [Quickstart](../environment/create-external-ase.md)
-[template-app-gateway-app-service-complete]: https://github.com/Azure/azure-quickstart-templates/tree/master/201-web-app-with-app-gateway-v2/ "Azure Resource Manager template for complete scenario"
+[template-app-gateway-app-service-complete]: https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.web/web-app-with-app-gateway-v2/ "Azure Resource Manager template for complete scenario"
## Considerations for kudu/scm site The scm site, also known as kudu, is an admin site, which exists for every web app. It isn't possible to reverse proxy the scm site and you most likely also want to lock it down to individual IP addresses or a specific subnet.
app-service Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/networking/private-endpoint.md
We are improving Private Link feature and Private Endpoint regularly, check [thi
[howtoguide2]: ../scripts/cli-deploy-privateendpoint.md [howtoguide3]: ../scripts/powershell-deploy-private-endpoint.md [howtoguide4]: ../scripts/template-deploy-private-endpoint.md
-[howtoguide5]: https://github.com/Azure/azure-quickstart-templates/tree/master/101-webapp-privateendpoint-vnet-injection
+[howtoguide5]: https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.web/webapp-privateendpoint-vnet-injection
[howtoguide6]: ../scripts/terraform-secure-backend-frontend.md [TiP]: https://docs.microsoft.com/azure/app-service/deploy-staging-slots#route-traffic
app-service Samples Resource Manager Templates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/samples-resource-manager-templates.md
To learn about the JSON syntax and properties for App Services resources, see [M
| [App Service plan and basic Windows app](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.web/webapp-basic-windows) | Deploys an App Service app that is configured for Windows. | | [App linked to a GitHub repository](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.web/web-app-github-deploy)| Deploys an App Service app that pulls code from GitHub. | | [App with custom deployment slots](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.web/webapp-custom-deployment-slots)| Deploys an App Service app with custom deployment slots/environments. |
-| [App with Private Endpoint](https://github.com/Azure/azure-quickstart-templates/tree/master/101-private-endpoint-webapp)| Deploys an App Service app with a Private Endpoint. |
+| [App with Private Endpoint](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.web/private-endpoint-webapp)| Deploys an App Service app with a Private Endpoint. |
|**Configuring an app**| **Description** | | [App certificate from Key Vault](https://github.com/Azure/azure-quickstart-templates/tree/master/201-web-app-certificate-from-key-vault)| Deploys an App Service app certificate from an Azure Key Vault secret and uses it for TLS/SSL binding. | | [App with a custom domain and SSL](https://github.com/Azure/azure-quickstart-templates/tree/master/201-web-app-custom-domain-and-ssl)| Deploys an App Service app with a custom host name, and gets an app certificate from Key Vault for TLS/SSL binding. | | [App with a GoLang extension](https://github.com/Azure/azure-quickstart-templates/tree/master/101-webapp-with-golang)| Deploys an App Service app with the Golang site extension. You can then run web applications developed on Golang on Azure. |
-| [App with Java 8 and Tomcat 8](https://github.com/Azure/azure-quickstart-templates/tree/master/201-web-app-java-tomcat)| Deploys an App Service app with Java 8 and Tomcat 8 enabled. You can then run Java applications in Azure. |
-| [App with regional VNet integration](https://github.com/Azure/azure-quickstart-templates/tree/master/101-app-service-regional-vnet-integration)| Deploys an App Service app with regional VNet integration enabled. |
+| [App with Java 8 and Tomcat 8](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.web/web-app-java-tomcat)| Deploys an App Service app with Java 8 and Tomcat 8 enabled. You can then run Java applications in Azure. |
+| [App with regional VNet integration](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.web/app-service-regional-vnet-integration)| Deploys an App Service app with regional VNet integration enabled. |
|**Protecting an app**| **Description** |
-| [App integrated with Azure Application Gateway](https://github.com/Azure/azure-quickstart-templates/tree/master/201-web-app-with-app-gateway-v2)| Deploys an App Service app and an Application Gateway, and isolates the traffic using service endpoint and access restrictions. |
+| [App integrated with Azure Application Gateway](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.web/web-app-with-app-gateway-v2)| Deploys an App Service app and an Application Gateway, and isolates the traffic using service endpoint and access restrictions. |
|**Linux app with connected resources**| **Description** | | [App on Linux with MySQL](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.web/webapp-linux-managed-mysql) | Deploys an App Service app on Linux with Azure Database for MySQL. | | [App on Linux with PostgreSQL](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.web/webapp-linux-managed-postgresql) | Deploys an App Service app on Linux with Azure Database for PostgreSQL. |
To learn about the JSON syntax and properties for App Services resources, see [M
| [App with a database in Azure SQL Database](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.web/web-app-sql-database)| Deploys an App Service app and a database in Azure SQL Database at the Basic service level. | | [App with a Blob storage connection](https://github.com/Azure/azure-quickstart-templates/tree/master/201-web-app-blob-connection)| Deploys an App Service app with an Azure Blob storage connection string. You can then use Blob storage from the app. | | [App with an Azure Cache for Redis](https://github.com/Azure/azure-quickstart-templates/tree/master/201-web-app-with-redis-cache)| Deploys an App Service app with an Azure Cache for Redis. |
-| [App connected to a backend webapp](https://github.com/Azure/azure-quickstart-templates/tree/master/101-webapp-privateendpoint-vnet-injection)| Deploys two web apps (frontend and backend) securely connected together with VNet injection and Private Endpoint. |
+| [App connected to a backend webapp](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.web/webapp-privateendpoint-vnet-injection)| Deploys two web apps (frontend and backend) securely connected together with VNet injection and Private Endpoint. |
|**App Service Environment**| **Description** | | [Create an App Service environment v2](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.web/web-app-asev2-create) | Creates an App Service environment v2 in your virtual network. | | [Create an App Service environment v2 with an ILB address](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.web/web-app-asev2-ilb-create) | Creates an App Service environment v2 in your virtual network with a private internal load balancer address. |
automation Automation Role Based Access Control https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-role-based-access-control.md
description: This article describes how to use Azure role-based access control (
keywords: automation rbac, role based access control, azure rbac Previously updated : 07/21/2020 Last updated : 05/17/2021
The following sections describe the minimum required permissions needed for enab
Update management reaches across multiple services to provide its service. The following table shows the permissions needed to manage update management deployments:
-|**Resource** |**Role** |**Scope** |
+|**Resource** |**Role** |**Scope** |
||||
-|Automation account | Log Analytics Contributor | Automation account |
-|Automation account | Virtual Machine Contributor | Resource Group for the account |
-|Log Analytics workspace | Log Analytics Contributor| Log Analytics workspace |
-|Log Analytics workspace |Log Analytics Reader| Subscription|
-|Solution |Log Analytics Contributor | Solution|
-|Virtual Machine | Virtual Machine Contributor | Virtual Machine |
+|Automation account |Log Analytics Contributor |Automation account |
+|Automation account |Virtual Machine Contributor |Resource Group for the account |
+|Log Analytics workspace |Log Analytics Contributor |Log Analytics workspace |
+|Log Analytics workspace |Log Analytics Reader|Subscription|
+|Solution |Log Analytics Contributor |Solution|
+|Virtual Machine |Virtual Machine Contributor |Virtual Machine |
+|**Actions on Virtual Machine** | | |
+|View history of update schedule execution ([Software Update Configuration Machine Runs](/rest/api/automation/softwareupdateconfigurationmachineruns)) |Reader |Automation account |
+|**Actions on virtual machine** |**Permission** | |
+|Create update schedule ([Software Update Configurations](/rest/api/automation/softwareupdateconfigurations)) |Microsoft.Compute/virtualMachines/write |For static VM list and resource groups |
+|Create update schedule ([Software Update Configurations](/rest/api/automation/softwareupdateconfigurations)) |Microsoft.OperationalInsights/workspaces/analytics/query/action |For workspace resource ID when using non-Azure dynamic list.|
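+
+As an illustration, one of the role assignments above could be granted from the Azure CLI along the following lines (the assignee and resource IDs are placeholders):
+
+```azurecli-interactive
+az role assignment create --assignee {user-or-group-object-id} --role "Log Analytics Contributor" --scope /subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/Microsoft.Automation/automationAccounts/{automation-account-name}
+```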
## Configure Azure RBAC for your Automation account
automation Automation Runbook Types https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-runbook-types.md
Title: Azure Automation runbook types
description: This article describes the types of runbooks that you can use in Azure Automation and considerations for determining which type to use. Previously updated : 02/17/2021 Last updated : 05/17/2021
Python 3 runbooks are supported in the following Azure global infrastructures:
* You must be familiar with Python scripting. * To use third-party libraries, you must [import the packages](python-packages.md) into the Automation account. * Using **Start-AutomationRunbook** cmdlet in PowerShell/PowerShell Workflow to start a Python 3 runbook (preview) does not work. You can use **Start-AzAutomationRunbook** cmdlet from Az.Automation module or **Start-AzureRmAutomationRunbook** cmdlet from AzureRm.Automation module to work around this limitation. 
-* Python 3 runbooks (preview) and packages do not work with PowerShell.
* Azure Automation does not supportΓÇ»**sys.stderr**. ### Known issues
automation Create Run As Account https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/create-run-as-account.md
Title: Create an Azure Automation Run As account
description: This article tells how to create an Azure Automation Run As account with PowerShell or from the Azure portal. Previously updated : 04/29/2021 Last updated : 05/17/2021
The following list provides the requirements to create a Run As account in Power
* An Automation account, which is referenced as the value for the `AutomationAccountName` and `ApplicationDisplayName` parameters. * Permissions equivalent to the ones listed in [Required permissions to configure Run As accounts](automation-security-overview.md#permissions).
+If you are planning to use a certificate from your enterprise or third-party certificate authority (CA), Automation requires the certificate to have the following configuration:
+
+ * Configured with the provider **Microsoft Enhanced RSA and AES Cryptographic Provider**
+ * Marked as exportable
+ * Configured to use the SHA256 algorithm
+ * Saved in the `*.pfx` or `*.cer` format.
+ To get the values for `AutomationAccountName`, `SubscriptionId`, and `ResourceGroupName`, which are required parameters for the PowerShell script, complete the following steps. 1. Sign in to the Azure portal.
automation Automation Tutorial Runbook Textual Python2 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/learn/automation-tutorial-runbook-textual-python2.md
Title: Create a Python runbook in Azure Automation
description: This article teaches you to create, test, and publish a simple Python runbook in your Azure Automation account. Previously updated : 04/28/2021 Last updated : 05/17/2021
This tutorial walks you through the creation of a [Python runbook](../automation
> * Run and track the status of the runbook job > * Update the runbook to start an Azure virtual machine with runbook parameters
-> [!NOTE]
-> Using a webhook to start a Python runbook is not supported.
- ## Prerequisites To complete this tutorial, you need the following:
automation Manage Runas Account https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/manage-runas-account.md
Title: Manage an Azure Automation Run As account
description: This article tells how to manage your Azure Automation Run As account with PowerShell or from the Azure portal. Previously updated : 04/29/2021 Last updated : 05/17/2021 # Manage an Azure Automation Run As account
-Run As accounts in Azure Automation provide authentication for managing resources on the Azure Resource Manager or Azure Classic deployment model using Automation runbooks and other Automation features. This article provides guidance on how to manage a Run As or Classic Run As account.
+Run As accounts in Azure Automation provide authentication for managing resources on the Azure Resource Manager or Azure Classic deployment model using Automation runbooks and other Automation features.
+
+In this article we cover how to manage a Run As or Classic Run As account, including:
+
+ * How to renew a self-signed certificate
+ * How to renew a certificate from an enterprise or third-party certificate authority (CA)
+ * How to manage permissions for the Run As account
To learn more about Azure Automation account authentication and guidance related to process automation scenarios, see [Automation Account authentication overview](automation-security-overview.md).
When you renew the self-signed certificate, the current valid certificate is ret
>If you think that the Run As account has been compromised, you can delete and re-create the self-signed certificate. >[!NOTE]
->If you have configured your Run As account to use a certificate issued by your enterprise or third-party certificate authority (CA) and you use the option to renew a self-signed certificate option, the enterprise certificate is replaced by a self-signed certificate.
+>If you have configured your Run As account to use a certificate issued by your enterprise or third-party CA and you use the option to renew a self-signed certificate option, the enterprise certificate is replaced by a self-signed certificate. To renew your certificate in this case, see [Renew an enterprise or third-party certificate](#renew-an-enterprise-or-third-party-certificate).
Use the following steps to renew the self-signed certificate.
Use the following steps to renew the self-signed certificate.
1. While the certificate is being renewed, you can track the progress under **Notifications** from the menu.
+## Renew an enterprise or third-party certificate
+
+Every certificate has a built-in expiration date. If the certificate you assigned to the Run As account was issued by a certification authority (CA), you need to perform a different set of steps to configure the Run As account with the new certificate. You can renew the certificate at any time before it expires.
+
+1. Import the renewed certificate following the steps for [Create a new certificate](./shared-resources/certificates.md#create-a-new-certificate). Automation requires the certificate to have the following configuration:
+
+ * Configured with the provider **Microsoft Enhanced RSA and AES Cryptographic Provider**
+ * Marked as exportable
+ * Configured to use the SHA256 algorithm
+ * Saved in the `*.pfx` or `*.cer` format.
+
+ After you import the certificate, note or copy the certificate **Thumbprint** value. This value is used to update the Run As connection properties with the new certificate.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Search for and select **Automation Accounts**.
+
+1. On the Automation Accounts page, select your Automation account from the list.
+
+1. In the left pane, select **Connections**.
+
+1. On the **Connections** page, select **AzureRunAsConnection** and update the **Certificate Thumbprint** with the new certificate thumbprint.
+
+1. Select **Save** to commit your changes.
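+
+If you want to cross-check the **Thumbprint** value you noted when importing the renewed certificate, one option is to compute it locally from the certificate file. A minimal sketch, assuming a PEM-encoded `.cer` file named `runascert.cer` (the file name is only an example); the portal shows the SHA-1 fingerprint in uppercase without colons:
+
+```bash
+# Print the SHA-1 fingerprint with the "SHA1 Fingerprint=" prefix and colons removed
+openssl x509 -in runascert.cer -noout -fingerprint -sha1 | sed 's/^.*=//; s/://g'
+```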
+ ## Grant Run As account permissions in other subscriptions Azure Automation supports using a single Automation account from one subscription, and executing runbooks against Azure Resource Manager resources across multiple subscriptions. This configuration does not support the Azure Classic deployment model.
azure-arc Managed Instance High Availability https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/managed-instance-high-availability.md
Capabilities that availability groups enable:
- All databases are automatically added to the availability group, including all user and system databases like `master` and `msdb`. This capability provides a single-system view across the availability group replicas. Notice both `containedag_master` and `containedag_msdb` databases if you connect directly to the instance. The `containedag_*` databases represent the `master` and `msdb` inside the availability group. -- An external endpoint is automatically provisioned for connecting to databases within the availability group. This endpoint `<managed_instance_name>-svc-external` plays the role of the availability group listener.
+- An external endpoint is automatically provisioned for connecting to databases within the availability group. This endpoint `<managed_instance_name>-external-svc` plays the role of the availability group listener.
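+
+If you want to confirm the endpoint name and its external IP, one option is to list the service directly (a sketch, assuming `kubectl` access to the cluster; the namespace is a placeholder):
+
+```bash
+kubectl get service <managed_instance_name>-external-svc -n <namespace>
+```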
### Deploy
azure-functions Functions Infrastructure As Code https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-infrastructure-as-code.md
A Consumption plan does not need to be defined. One will automatically be create
The Consumption plan is a special type of "serverfarm" resource. For Windows, you can specify it by using the `Dynamic` value for the `computeMode` and `sku` properties: ```json
-{
+{
"type":"Microsoft.Web/serverfarms", "apiVersion":"2016-09-01", "name":"[variables('hostingPlanName')]", "location":"[resourceGroup().location]",
- "properties":{
+ "properties":{
"name":"[variables('hostingPlanName')]", "computeMode":"Dynamic" },
- "sku":{
+ "sku":{
"name":"Y1", "tier":"Dynamic", "size":"Y1",
If you do explicitly define your Consumption plan, you will need to set the `ser
### Create a function app
-The settings required by a function app running in Consumption plan defer between Windows and Linux.
+The settings required by a function app running in a Consumption plan differ between Windows and Linux.
#### Windows
On Windows, a Consumption plan requires an additional setting in the site config
``` > [!IMPORTANT]
-> Don't set the [`WEBSITE_CONTENTSHARE`](functions-app-settings.md#website_contentshare) setting as it's generated for you when the site is first created.
+> Don't set the [`WEBSITE_CONTENTSHARE`](functions-app-settings.md#website_contentshare) setting as it's generated for you when the site is first created.
#### Linux
-On Linux, the function app must have its `kind` set to `functionapp,linux`, and it must have the `reserved` property set to `true`.
+On Linux, the function app must have its `kind` set to `functionapp,linux`, and it must have the `reserved` property set to `true`.
```json {
A function app on a Premium plan must have the `serverFarmId` property set to th
"type": "Microsoft.Web/sites", "name": "[variables('functionAppName')]", "location": "[resourceGroup().location]",
- "kind": "functionapp",
+ "kind": "functionapp",
"dependsOn": [ "[resourceId('Microsoft.Web/serverfarms', variables('hostingPlanName'))]", "[resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName'))]"
A function app on a Premium plan must have the `serverFarmId` property set to th
} ``` > [!IMPORTANT]
-> Don't set the [`WEBSITE_CONTENTSHARE`](functions-app-settings.md#website_contentshare) setting as it's generated for you when the site is first created.
+> Don't set the [`WEBSITE_CONTENTSHARE`](functions-app-settings.md#website_contentshare) setting as it's generated for you when the site is first created.
<a name="app-service-plan"></a>
Learn more about how to develop and configure Azure Functions.
<!-- LINKS --> [Function app on Consumption plan]: https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.web/function-app-create-dynamic/azuredeploy.json
-[Function app on Azure App Service plan]: https://github.com/Azure/azure-quickstart-templates/blob/master/101-function-app-create-dedicated/azuredeploy.json
+[Function app on Azure App Service plan]: https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.web/azuredeploy.json
azure-monitor Activity Log Alerts Webhook https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/activity-log-alerts-webhook.md
For specific schema details on all other activity log alerts, see [Overview of t
* [Learn more about the activity log](../essentials/platform-logs-overview.md). * [Execute Azure automation scripts (Runbooks) on Azure alerts](https://go.microsoft.com/fwlink/?LinkId=627081). * [Use a logic app to send an SMS via Twilio from an Azure alert](https://github.com/Azure/azure-quickstart-templates/tree/master/demos/alert-to-text-message-with-logic-app). This example is for metric alerts, but it can be modified to work with an activity log alert.
-* [Use a logic app to send a Slack message from an Azure alert](https://github.com/Azure/azure-quickstart-templates/tree/master/201-alert-to-slack-with-logic-app). This example is for metric alerts, but it can be modified to work with an activity log alert.
+* [Use a logic app to send a Slack message from an Azure alert](https://github.com/Azure/azure-quickstart-templates/tree/master/demos/alert-to-slack-with-logic-app). This example is for metric alerts, but it can be modified to work with an activity log alert.
* [Use a logic app to send a message to an Azure queue from an Azure alert](https://github.com/Azure/azure-quickstart-templates/tree/master/demos/alert-to-queue-with-logic-app). This example is for metric alerts, but it can be modified to work with an activity log alert.
azure-monitor Alerts Webhooks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/alerts-webhooks.md
The POST operation contains the following JSON payload and schema for all metric
* Learn more about Azure alerts and webhooks in the video [Integrate Azure alerts with PagerDuty](https://go.microsoft.com/fwlink/?LinkId=627080). * Learn how to [execute Azure Automation scripts (runbooks) on Azure alerts](https://go.microsoft.com/fwlink/?LinkId=627081). * Learn how to [use a logic app to send an SMS message via Twilio from an Azure alert](https://github.com/Azure/azure-quickstart-templates/tree/master/demos/alert-to-text-message-with-logic-app).
-* Learn how to [use a logic app to send a Slack message from an Azure alert](https://github.com/Azure/azure-quickstart-templates/tree/master/201-alert-to-slack-with-logic-app).
+* Learn how to [use a logic app to send a Slack message from an Azure alert](https://github.com/Azure/azure-quickstart-templates/tree/master/demos/alert-to-slack-with-logic-app).
* Learn how to [use a logic app to send a message to an Azure queue from an Azure alert](https://github.com/Azure/azure-quickstart-templates/tree/master/demos/alert-to-queue-with-logic-app).
azure-monitor Asp Net Core https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/asp-net-core.md
The [Application Insights SDK for ASP.NET Core](https://nuget.org/packages/Micro
* **Deployment method**: Framework dependent or self-contained. * **Web server**: IIS (Internet Information Server) or Kestrel. * **Hosting platform**: The Web Apps feature of Azure App Service, Azure VM, Docker, Azure Kubernetes Service (AKS), and so on.
-* **.NET Core version**: All officially [supported](https://dotnet.microsoft.com/download/dotnet-core) .NET Core versions.
+* **.NET Core version**: All officially [supported .NET Core versions](https://dotnet.microsoft.com/download/dotnet-core) that are not in preview.
* **IDE**: Visual Studio, VS Code, or command line. > [!NOTE]
azure-monitor Azure Web Apps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/azure-web-apps.md
Title: Monitor Azure app services performance | Microsoft Docs description: Application performance monitoring for Azure app services. Chart load and response time, dependency information, and set alerts on performance. Previously updated : 08/06/2020 Last updated : 05/17/2021
-# Monitor Azure App Service performance
+# Application Monitoring for Azure App Service
-Enabling monitoring on your ASP.NET, ASP.NET Core, and Node.js based web applications running on [Azure App Services](../../app-service/index.yml) is now easier than ever. Whereas previously you needed to manually install a site extension, the latest extension/agent is now built into the app service image by default. This article will walk you through enabling Application Insights monitoring as well as provide preliminary guidance for automating the process for large-scale deployments.
+Enabling monitoring on your ASP.NET, ASP.NET Core, Java, and Node.js based web applications running on [Azure App Services](../../app-service/index.yml) is now easier than ever. Whereas previously you needed to manually instrument your app, the latest extension/agent is now built into the App Service image by default. This article will walk you through enabling Azure Monitor Application Insights monitoring as well as provide preliminary guidance for automating the process for large-scale deployments.
> [!NOTE]
-> Manually adding an Application Insights site extension via **Development Tools** > **Extensions** is deprecated. This method of extension installation was dependent on manual updates for each new version. The latest stable release of the extension is now [preinstalled](https://github.com/projectkudu/kudu/wiki/Azure-Site-Extensions) as part of the App Service image. The files are located in `d:\Program Files (x86)\SiteExtensions\ApplicationInsightsAgent` and are automatically updated with each stable release. If you follow the agent based instructions to enable monitoring below, it will automatically remove the deprecated extension for you.
+> For .Net on Windows only: manually adding an Application Insights site extension via **Development Tools** > **Extensions** is deprecated. This method of extension installation was dependent on manual updates for each new version. The latest stable release of the extension is now [preinstalled](https://github.com/projectkudu/kudu/wiki/Azure-Site-Extensions) as part of the App Service image. The files are located in `d:\Program Files (x86)\SiteExtensions\ApplicationInsightsAgent` and are automatically updated with each stable release. If you follow the agent based instructions to enable monitoring below, it will automatically remove the deprecated extension for you.
## Enable Application Insights There are two ways to enable application monitoring for Azure App Services hosted applications: * **Agent-based application monitoring** (ApplicationInsightsAgent).
- * This method is the easiest to enable, and no advanced configuration is required. It is often referred to as "runtime" monitoring. For Azure App Services we recommend at a minimum enabling this level of monitoring, and then based on your specific scenario you can evaluate whether more advanced monitoring through manual instrumentation is needed.
+ * This method is the easiest to enable, and no code change or advanced configurations are required. It is often referred to as "runtime" monitoring. For Azure App Services we recommend at a minimum enabling this level of monitoring, and then based on your specific scenario you can evaluate whether more advanced monitoring through manual instrumentation is needed.
* **Manually instrumenting the application through code** by installing the Application Insights SDK.
There are two ways to enable application monitoring for Azure App Services hoste
# [ASP.NET Core](#tab/netcore) > [!IMPORTANT]
-> The following versions of ASP.NET Core are supported: ASP.NET Core 2.1 and 3.1. Versions 2.0, 2.2, and 3.0 have been retired and are no longer supported. Please upgrade to a [supported version](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) of .NET Core for auto-instrumentation to work.
+> The following versions of ASP.NET Core are supported: ASP.NET Core 2.1, 3.1, and 5.0. Versions 2.0, 2.2, and 3.0 have been retired and are no longer supported. Please upgrade to a [supported version](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) of .NET Core for auto-instrumentation to work.
Targeting the full framework from ASP.NET Core, self-contained deployment, and Linux based applications are currently **not supported** with agent/extension based monitoring. ([Manual instrumentation](./asp-net-core.md) via code will work in all of the previous scenarios.)
Targeting the full framework from ASP.NET Core, self-contained deployment, and L
> [!NOTE] > When you click **OK** to create the new resource you will be prompted to **Apply monitoring settings**. Selecting **Continue** will link your new Application Insights resource to your app service, doing so will also **trigger a restart of your app service**.
- ![Instrument your web app](./media/azure-web-apps/create-resource-01.png)
+ ![Instrument your web app](./media/azure-web-apps/create-resource-01.png)
2. After specifying which resource to use, you can choose how you want Application Insights to collect data per platform for your application. ASP.NET Core offers **Recommended collection** or **Disabled** for ASP.NET Core 2.1 and 3.1.
- ![Choose options per platform](./media/azure-web-apps/choose-options-new-net-core.png)
+ ![Choose options per platform.](./media/azure-web-apps/choose-options-new-net-core.png)
# [Node.js](#tab/nodejs) Windows agent-based monitoring is not supported, to enable with Linux visit the [Node.js App Service documentation](../../app-service/configure-language-nodejs.md?pivots=platform-linux#monitor-with-application-insights).
+You can monitor your Node.js apps running in Azure App Service without any code change, just with a couple of simple steps. Application Insights for Node.js applications is integrated with App Service on Linux - both code-based and custom containers, and with App Service on Windows for code-based apps. The integration is in public preview. The integration adds the Node.js SDK, which is generally available.
+
+1. **Select Application Insights** in the Azure control panel for your app service.
+
+ > [!div class="mx-imgBorder"]
+ > ![Under Settings, choose Application Insights.](./media/azure-web-apps/ai-enable.png)
+
+ * Choose to create a new resource, unless you already set up an Application Insights resource for this application.
+
+ > [!NOTE]
+ > When you click **OK** to create the new resource you will be prompted to **Apply monitoring settings**. Selecting **Continue** will link your new Application Insights resource to your app service, doing so will also **trigger a restart of your app service**.
+
+ ![Instrument your web app.](./media/azure-web-apps/create-resource-01.png)
+
+2. Once you have specified which resource to use, you are all set to go.
+
+ > [!div class="mx-imgBorder"]
+ > ![Choose options per platform.](./media/azure-web-apps/app-service-node.png)
+ # [Java](#tab/java)
-Follow the guidelines for [Application Insights Java 3.0 agent](./java-in-process-agent.md) to enable auto-instrumentation for your Java apps without changing your code.
-The automatic integration is not yet available for App Service.
+You can turn on monitoring for your Java apps running in Azure App Service with one click, no code change required. Application Insights for Java is integrated with App Service on Linux - both code-based and custom containers, and with App Service on Windows - code-based apps. The integration is in public preview. The integration adds [Application Insights Java 3.0](./java-in-process-agent.md), which is generally available, and you get all the telemetry that it auto-collects.
+
+1. **Select Application Insights** in the Azure control panel for your app service.
+
+ > [!div class="mx-imgBorder"]
+ > ![Under Settings, choose Application Insights.](./media/azure-web-apps/ai-enable.png)
+
+ * Choose to create a new resource, unless you already set up an application insights resource for this application.
+
+ > [!NOTE]
+ > When you click **OK** to create the new resource you will be prompted to **Apply monitoring settings**. Selecting **Continue** will link your new Application Insights resource to your app service, doing so will also **trigger a restart of your app service**.
+
+ ![Instrument your web app.](./media/azure-web-apps/create-resource-01.png)
+
+2. After specifying which resource to use, you can configure the Java agent. The full [set of configurations](./java-standalone-config.md) is available; just paste a valid JSON file, without specifying the connection string, because you have already selected the Application Insights resource to connect to.
+
+ > [!div class="mx-imgBorder"]
+ > ![Choose options per platform.](./media/azure-web-apps/create-app-service-ai.png)
# [Python](#tab/python)
If the upgrade is done from a version prior to 2.5.1, check that the Application
Below is our step-by-step troubleshooting guide for extension/agent based monitoring for ASP.NET and ASP.NET Core based applications running on Azure App Services.
-> [!NOTE]
-> The recommended approach to monitor Java applications is to use the auto-instrumentation without changing the code. Please follow the guidelines for [Application Insights Java 3.0 agent](./java-in-process-agent.md).
- 1. Check that the application is monitored via `ApplicationInsightsAgent`.
- * Check that `ApplicationInsightsAgent_EXTENSION_VERSION` app setting is set to a value of "~2".
+ * Check that `ApplicationInsightsAgent_EXTENSION_VERSION` app setting is set to a value of "~3".
2. Ensure that the application meets the requirements to be monitored. * Browse to `https://yoursitename.scm.azurewebsites.net/ApplicationInsights`
azure-monitor Codeless Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/codeless-overview.md
description: Overview of auto-instrumentation for Azure Monitor Application Insi
Previously updated : 05/31/2020 Last updated : 05/17/2021
As we're adding additional integrations, the auto-instrumentation capability mat
|Environment/Resource Provider | .NET | .NET Core | Java | Node.js | Python | ||--|--|--|--|--|
-|Azure App Service on Windows | GA, OnBD* | GA, opt-in | In progress | In progress | Not supported |
-|Azure App Service on Linux | N/A | Not supported | In progress | Public Preview | Not supported |
+|Azure App Service on Windows | GA, OnBD* | GA, opt-in | Public Preview | Public Preview | Not supported |
+|Azure App Service on Linux | N/A | Not supported | Public Preview | Public Preview | Not supported |
|Azure Functions - basic | GA, OnBD* | GA, OnBD* | GA, OnBD* | GA, OnBD* | GA, OnBD* | |Azure Functions Windows - dependencies | Not supported | Not supported | Public Preview | Not supported | Not supported |
+|Azure Spring Cloud | Not supported | Not supported | Public Preview | Not supported | Not supported |
|Azure Kubernetes Service | N/A | In design | Through agent | In design | Not supported |
-|Azure VMs Windows | Public Preview | Not supported | Not supported | Not supported | Not supported |
+|Azure VMs Windows | Public Preview | Not supported | Through agent | Not supported | Not supported |
|On-Premises VMs Windows | GA, opt-in | Not supported | Through agent | Not supported | Not supported | |Standalone agent - any env. | Not supported | Not supported | GA | Not supported | Not supported |
As we're adding additional integrations, the auto-instrumentation capability mat
### Windows
-#### .NET
-Application monitoring on Azure App Service on Windows is available for [.NET applications](./azure-web-apps.md?tabs=net) .NET and is enabled by default.
+Application monitoring on Azure App Service on Windows is available for **[.NET](./azure-web-apps.md?tabs=net)** (enabled by default), **[.NET Core](./azure-web-apps.md?tabs=netcore)**, **[Java](./azure-web-apps.md?tabs=java)** (in public preview), and **[Node.js](./azure-web-apps.md?tabs=nodejs)** applications. To monitor a Python app, add the [SDK](./opencensus-python.md) to your code.
-#### .NETCore
-Monitoring for [.NETCore applications](./azure-web-apps.md?tabs=netcore) can be enabled with one click.
-
-#### Java
-The portal integration for monitoring of Java applications on App Service on Windows is currently unavailable, however, you can add Application Insights [Java 3.0 standalone agent](./java-in-process-agent.md) to your application without any code changes before deploying the apps to App Service. Application Insights Java 3.0 agent is generally available.
-
-#### Node.js
-Monitoring for Node.js applications on Windows cannot currently be enabled from the portal. To monitor Node.js applications, use the [SDK](./nodejs.md).
+> [!NOTE]
+> Application monitoring is currently available for Windows code-based applications on App Service. Monitoring for apps on Windows Containers on App Service is not yet supported through the integration with Application Insights.
### Linux
+You can enable monitoring for **[Java](./azure-web-apps.md?tabs=java)** and **[Node.js](./azure-web-apps.md?tabs=nodejs)** apps running on Linux in App Service through the portal - the experience for both languages is in public preview and available in all regions.
-#### .NETCore
-To monitor .NETCore applications running on Linux, use the [SDK](./asp-net-core.md).
-
-#### Java
-Enabling Java application monitoring for App Service on Linux from portal isn't available, but you can add [Application Insights Java 3.0 agent](./java-in-process-agent.md) to your app before deploying the apps to App Service. Application Insights Java 3.0 agent is generally available.
+For other languages - [.NET Core](./asp-net-core.md) and [Python](./opencensus-python.md), use the SDK.
-#### Node.js
-[Monitoring Node.js applications in App Service on Linux](./azure-web-apps.md?tabs=nodejs) is in public preview and can be enabled in Azure portal, available in all regions.
+## Azure Functions
-#### Python
-Use the SDK to [monitor your Python app](./opencensus-python.md)
+The basic monitoring for Azure Functions is enabled by default to collect logs, performance and error data, and HTTP requests. For Java applications, you can enable richer monitoring with distributed tracing and get the end-to-end transaction details. This functionality for Java is in public preview for Windows and you can [enable it in the Azure portal](./monitor-functions.md).
-## Azure Functions
+## Azure Spring Cloud
-The basic monitoring for Azure Functions is enabled by default to collects log, performance, error data, and HTTP requests. For Java applications, you can enable richer monitoring with distributed tracing and get the end-to-end transaction details. This functionality for Java is in public preview and you can [enable it in Azure portal](./monitor-functions.md).
+### Java
+Application monitoring for Java apps running in Azure Spring Cloud is integrated into the portal. You can enable Application Insights directly from the Azure portal, for both existing and newly created Azure Spring Cloud resources.
## Azure Kubernetes Service
azure-monitor Resource Logs Schema https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/essentials/resource-logs-schema.md
The schema for resource logs varies depending on the resource and log category.
| Azure Automation |[Log analytics for Azure Automation](../../automation/automation-manage-send-joblogs-log-analytics.md) | | Azure Batch |[Azure Batch logging](../../batch/batch-diagnostics.md) | | Cognitive Services | [Logging for Azure Cognitive Services](../../cognitive-services/diagnostic-logging.md) |
-| Container Registry | [Logging for Azure Container Registry](../../container-registry/container-registry-diagnostics-audit-logs.md) |
+| Container Registry | [Monitor Azure Container Registry](../../container-registry/monitor-service.md) |
| Content Delivery Network | [Azure Logs for CDN](../../cdn/cdn-azure-diagnostic-logs.md) | | CosmosDB | [Azure Cosmos DB Logging](../../cosmos-db/monitor-cosmos-db.md) | | Data Factory | [Monitor Data Factories using Azure Monitor](../../data-factory/monitor-using-azure-monitor.md) |
azure-monitor Logs Data Export https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/logs-data-export.md
Log Analytics workspace data export continuously exports data from a Log Analyti
## Limitations -- Configuration can be performed using CLI or REST requests currently. Azure portal or PowerShell are not supported yet.-- The ```--export-all-tables``` option in CLI and REST isn't supported and will be removed. You should provide the list of tables in export rules explicitly.-- Supported tables are currently limited those specific in the [supported tables](#supported-tables) section below. For example, custom log tables aren't supported currently.-- If the data export rule includes an unsupported table, the operation will succeed, but no data will be exported for that table until table gets supported. -- If the data export rule includes a table that doesn't exist, it will fail with error ```Table <tableName> does not exist in the workspace```.-- Data export will be available in all regions, but currently not available in the following: Azure Government regions, Japan West, Brazil south east, Norway East, Norway West, UAE North, UAE Central, Australia Central 2, Switzerland North, Switzerland West, Germany West Central, South India, France South, Japan West
+- Configuration currently can be performed using CLI or REST requests. Azure portal or PowerShell are not supported yet.
+- The `--export-all-tables` option in CLI and REST isn't supported and will be removed. You should provide the list of tables in export rules explicitly.
+- Supported tables are currently limited to those specified in the [supported tables](#supported-tables) section below. For example, custom log tables currently aren't supported.
+- If the data export rule includes an unsupported table, the operation will succeed, but no data will be exported for that table until the table gets supported.
+- If the data export rule includes a table that doesn't exist, it will fail with error `Table <tableName> does not exist in the workspace`.
+- Data export will be available in all regions, but currently it's not available in the following: Azure Government regions, Japan West, Brazil south east, Norway East, Norway West, UAE North, UAE Central, Australia Central 2, Switzerland North, Switzerland West, Germany West Central, South India, France South, Japan West
- You can define up to 10 enabled rules in your workspace. Additional rules are allowed but in disable state. - Destination must be unique across all export rules in your workspace. - The destination storage account or event hub must be in the same region as the Log Analytics workspace.-- Names of tables to be exported can be no longer than 60 characters for a storage account and no more than 47 characters to an event hub. Tables with longer names will not be exported.
+- Names of tables to be exported can be no longer than 60 characters for a storage account and no more than 47 characters for an event hub. Tables with longer names will not be exported.
## Data completeness Data export will continue to retry sending data for up to 30 minutes in the event that the destination is unavailable. If it's still unavailable after 30 minutes then data will be discarded until the destination becomes available. ## Cost
-There are currently no additional charges for the data export feature. Pricing for data export will be announced in the future and a notice provided prior to start of billing. If you choose to continue using data export after the notice period, you will be billed at the applicable rate.
+Currently, there are no additional charges for the data export feature. Pricing for data export will be announced in the future and a notice period provided prior to the start of billing. If you choose to continue using data export after the notice period, you will be billed at the applicable rate.
## Export destinations
Data is sent to your event hub in near-real-time as it reaches Azure Monitor. An
> The [number of supported event hubs per 'Basic' and 'Standard' namespaces tiers is 10](../../event-hubs/event-hubs-quotas.md#common-limits-for-all-tiers). If you export more than 10 tables, either split the tables between several export rules to different event hub namespaces, or provide event hub name in the export rule and export all tables to that event hub. Considerations:
-1. 'Basic' event hub tier supports lower [event size](../../event-hubs/event-hubs-quotas.md) and some logs in your workspace can exceed it and be dropped. We recommend to use 'Standard' or 'Dedicated' event hub as export destination.
-2. The volume of exported data often increase over time, and the event hub scale needs to be increased to handle larger transfer rates and avoid throttling scenarios and data latency. You should use the auto-inflate feature of Event Hubs to automatically scale up and increase the number of throughput units and meet usage needs. See [Automatically scale up Azure Event Hubs throughput units](../../event-hubs/event-hubs-auto-inflate.md) for details.
+1. The 'Basic' event hub SKU supports a lower event size [limit](../../event-hubs/event-hubs-quotas.md#basic-vs-standard-tiers) and some logs in your workspace can exceed it and be dropped. We recommend using a 'Standard' or 'Dedicated' event hub as an export destination.
+2. The volume of exported data often increases over time, and the event hub scale needs to be increased to handle larger transfer rates and avoid throttling scenarios and data latency. You should use the auto-inflate feature of Event Hubs to automatically scale up and increase the number of throughput units to meet usage needs. See [Automatically scale up Azure Event Hubs throughput units](../../event-hubs/event-hubs-auto-inflate.md) for details.
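+
+As a sketch, auto-inflate can be enabled from the Azure CLI (the namespace name and the throughput ceiling are placeholders):
+
+```azurecli-interactive
+az eventhubs namespace update --resource-group {resource-group-name} --name {event-hub-namespace} --enable-auto-inflate true --maximum-throughput-units 20
+```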
## Prerequisites
-Following are prerequisites that must be completed before configuring Log Analytics data export.
+The following prerequisites must be completed before configuring Log Analytics data export:
- Destinations must be created prior to the export rule configuration and should be in the same region as your Log Analytics workspace. If you need to replicate your data to other storage accounts, you can use any of the [Azure Storage redundancy options](../../storage/common/storage-redundancy.md). - The storage account must be StorageV1 or StorageV2. Classic storage is not supported - If you have configured your storage account to allow access from selected networks, you need to add an exception in your storage account settings to allow Azure Monitor to write to your storage. ## Enable data export
-The follow steps must be performed to enable Log Analytics data export. See the following sections for more details on each.
+The following steps must be performed to enable Log Analytics data export. See the following sections for more details on each.
- Register resource provider. - Allow trusted Microsoft services.
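As an example of the rule configuration step, a data export rule targeting a storage account can be created with the Azure CLI along the following lines (a sketch with placeholder names; the `az monitor log-analytics workspace data-export` commands must be available in your CLI version):

```azurecli
az monitor log-analytics workspace data-export create --resource-group {resource-group-name} --workspace-name {workspace-name} --name {export-rule-name} --tables SecurityEvent Heartbeat --destination {storage-account-resource-id}
```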
If the data export rule includes a table that doesn't exist, it will fail with t
## Supported tables
-Supported tables are currently limited to those specified below. All data from the table will be exported unless limitations are specified. This list will be updated as support for additional tables is added.
+Supported tables currently are limited to those specified below. All data from the table will be exported unless limitations are specified. This list will be updated as support for additional tables is added.
| Table | Limitations |
Supported tables are currently limited to those specified below. All data from t
| NWConnectionMonitorTestResult | | | OfficeActivity | Partial support ΓÇô some of the data to ingested via webhooks from O365 into LA. This portion is missing in export currently. | | Operation | Partial support ΓÇô some of the data is ingested through internal services that isn't supported for export. This portion is missing in export currently. |
-| Perf | Partial support ΓÇô only windows perf data is currently supported. The Linux perf data is missing in export currently. |
+| Perf | Partial support ΓÇô only Windows perf data currently is supported. The Linux perf data is missing in export currently. |
| PowerBIDatasetsTenant | | | PowerBIDatasetsWorkspace | | | PowerBIDatasetsWorkspacePreview | |
azure-netapp-files Azacsnap Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azacsnap-troubleshoot.md
na ms.devlang: na Previously updated : 04/21/2021 Last updated : 05/17/2021 # Troubleshoot Azure Application Consistent Snapshot tool
-This article provides troubleshooting content for using the Azure Application Consistent Snapshot tool that you can use with Azure NetApp Files.
+This article provides troubleshooting content for using the Azure Application Consistent Snapshot tool that you can use with Azure NetApp Files and Azure Large Instance.
The following are common issues that you may encounter while running the commands. Follow the resolution instructions to fix the issue. If you still encounter an issue, open a service request from the Azure portal and assign the request to the SAP HANA Large Instance queue for Microsoft Support to respond.
When validating communication with Azure NetApp Files, communication might fail
- (https://)management.azure.com:443 - (https://)login.microsoftonline.com:443
-## Failed communication with SAP HANA
+## Problems with SAP HANA
+
+### Running the test command fails
When validating communication with SAP HANA by running a test with `azacsnap -c test --test hana`, you might see the following error:
Cannot get SAP HANA version, exiting with error: 127
In this example, the `hdbsql` command isn't in the user's `$PATH`. ```bash
- hdbsql -n 172.18.18.50 - i 00 -U SCADMIN "select version from sys.m_database"
+ hdbsql -n 172.18.18.50 -i 00 -U AZACSNAP "select version from sys.m_database"
``` ```output
Cannot get SAP HANA version, exiting with error: 127
``` ```bash
- hdbsql -n 172.18.18.50 -i 00 -U SCADMIN "select version from sys.m_database"
+ hdbsql -n 172.18.18.50 -i 00 -U AZACSNAP "select version from sys.m_database"
``` ```output
- * -10104: Invalid value for KEY (SCADMIN)
+ * -10104: Invalid value for KEY (AZACSNAP)
``` > [!NOTE] > To permanently add to the user's `$PATH`, update the user's `$HOME/.profile` file
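For example, a minimal sketch that appends the SAP HANA client directory (assuming it is `/usr/sap/hdbclient`) to the user's `$PATH` via `$HOME/.profile`:

```bash
# Append the SAP HANA client directory to the user's PATH (assumes /usr/sap/hdbclient).
echo 'export PATH="$PATH:/usr/sap/hdbclient"' >> "$HOME/.profile"

# Reload the profile and confirm hdbsql can now be resolved.
source "$HOME/.profile"
which hdbsql
```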
-## The `hdbuserstore` location
+### Insufficient privilege
+
+If running `azacsnap` presents an error such as `* 258: insufficient privilege`, check to ensure the appropriate privilege has been assigned to the "AZACSNAP" database user (assuming this is the user created per the [installation guide](azacsnap-installation.md#enable-communication-with-sap-hana)). Verify the user's current privilege with the following command:
+
+```bash
+hdbsql -U AZACSNAP "select GRANTEE,GRANTEE_TYPE,PRIVILEGE,IS_VALID,IS_GRANTABLE from sys.granted_privileges" | grep -i -e GRANTEE -e azacsnap
+```
+
+```output
+GRANTEE,GRANTEE_TYPE,PRIVILEGE,IS_VALID,IS_GRANTABLE
+"AZACSNAP","USER","BACKUP ADMIN","TRUE","FALSE"
+"AZACSNAP","USER","CATALOG READ","TRUE","FALSE"
+"AZACSNAP","USER","CREATE ANY","TRUE","TRUE"
+```
+
+The error might also provide further information to help determine the required SAP HANA privileges, such as the output `Detailed info for this error can be found with guid '99X9999X99X9999X99X99XX999XXX999' SQLSTATE: HY000`. In this case, follow SAP's instructions at [SAP Help Portal - GET_INSUFFICIENT_PRIVILEGE_ERROR_DETAILS](https://help.sap.com/viewer/b3ee5778bc2e4a089d3299b82ec762a7/2.0.05/en-US/9a73c4c017744288b8d6f3b9bc0db043.html), which recommend using the following SQL query to determine the details of the required privilege.
+
+```sql
+CALL SYS.GET_INSUFFICIENT_PRIVILEGE_ERROR_DETAILS ('99X9999X99X9999X99X99XX999XXX999', ?)
+```
+
+```output
+GUID,CREATE_TIME,CONNECTION_ID,SESSION_USER_NAME,CHECKED_USER_NAME,PRIVILEGE,IS_MISSING_ANALYTIC_PRIVILEGE,IS_MISSING_GRANT_OPTION,DATABASE_NAME,SCHEMA_NAME,OBJECT_NAME,OBJECT_TYPE
+"99X9999X99X9999X99X99XX999XXX999","2021-01-01 01:00:00.180000000",120212,"AZACSNAP","AZACSNAP","DATABASE ADMIN or DATABASE BACKUP ADMIN","FALSE","FALSE","","","",""
+```
+
+In the example above, adding the 'DATABASE BACKUP ADMIN' privilege to the SYSTEMDB's AZACSNAP user should resolve the insufficient privilege error.
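A minimal sketch of granting that privilege with `hdbsql`, assuming a hypothetical `SYSTEMDBKEY` hdbuserstore key exists for a SYSTEMDB user that is allowed to grant privileges:

```bash
# Grant the missing system privilege to the AZACSNAP user in the SYSTEMDB.
# SYSTEMDBKEY is a hypothetical hdbuserstore key for a suitably privileged SYSTEMDB user.
hdbsql -U SYSTEMDBKEY "GRANT DATABASE BACKUP ADMIN TO AZACSNAP"
```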
+
+### The `hdbuserstore` location
When setting up communication with SAP HANA, the `hdbuserstore` program is used to create the secure communication settings. The `hdbuserstore` program is usually found under `/usr/sap/<SID>/SYS/exe/hdb/` or `/usr/sap/hdbclient`. Normally the installer adds the correct location to the `azacsnap` user's `$PATH`.
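As an illustration, a sketch of creating and checking the secure user store key that `azacsnap` uses; the host, SQL port, and password are placeholders (port 30013 assumes the SYSTEMDB SQL port of instance 00):

```bash
# Create the AZACSNAP key in the SAP HANA secure user store (placeholder host, port, and password).
hdbuserstore Set AZACSNAP 172.18.18.50:30013 AZACSNAP <password>

# Confirm the key was stored.
hdbuserstore List AZACSNAP
```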
azure-netapp-files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/whats-new.md
Azure NetApp Files is updated regularly. This article provides a summary about t
Azure NetApp Files now supports LDAP signing for secure LDAP lookups between the Azure NetApp Files service and the user-specified Active Directory Domain Services domain controllers. This feature is currently in preview.
-* [AES encryption for AD authentication](azure-netapp-files-create-volumes-smb.md) (Preview)
+* [AES encryption for AD authentication](create-active-directory-connections.md#create-an-active-directory-connection) (Preview)
Azure NetApp Files now supports AES encryption on LDAP connection to DC to enable AES encryption for an SMB volume. This feature is currently in preview.
azure-percept Dev Tools Installer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/dev-tools-installer.md
# Dev Tools Pack Installer overview
-The Dev Tools Pack Installer is a one-stop solution that installs and configures all of the tools required to develop an advanced intelligent edge solution.
+The Dev Tools Pack Installer is a one-stop solution that installs and configures all the tools required to develop an advanced intelligent edge solution.
## Mandatory tools * [Visual Studio Code](https://code.visualstudio.com/) * [Python 3.6 or later](https://www.python.org/)
-* [Docker 19.03](https://www.docker.com/)
-* [PIP3](https://pip.pypa.io/en/stable/user_guide/)
-* [TensorFlow 1.13](https://www.tensorflow.org/)
-* [Azure Machine Learning SDK 1.1](/python/api/overview/azure/ml/)
+* [Docker 20.10](https://www.docker.com/)
+* [PIP3 21.1](https://pip.pypa.io/en/stable/user_guide/)
+* [TensorFlow 2.0](https://www.tensorflow.org/)
+* [Azure Machine Learning SDK 1.2](/python/api/overview/azure/ml/)
## Optional tools
-* [Nvidia DeepStream SDK 5](https://developer.nvidia.com/deepstream-sdk) (toolkit for developing solutions for Nvidia Accelerators)
-* [Intel OpenVino Toolkit 2020.2](https://docs.openvinotoolkit.org/) (toolkit for developing solutions for Intel Accelerators)
-* [Lobe.ai](https://lobe.ai/)
-* [Streamlit](https://www.streamlit.io/)
+* [NVIDIA DeepStream SDK 5](https://developer.nvidia.com/deepstream-sdk) (toolkit for developing solutions for NVIDIA Accelerators)
+* [Intel OpenVINO Toolkit 2021.3](https://docs.openvinotoolkit.org/) (toolkit for developing solutions for Intel Accelerators)
+* [Lobe.ai 0.9](https://lobe.ai/)
+* [Streamlit 0.8](https://www.streamlit.io/)
* [Pytorch 1.4.0 (Windows) or 1.2.0 (Linux)](https://pytorch.org/)
-* [Miniconda3](https://docs.conda.io/en/latest/miniconda.html)
-* [Chainer 5.2](https://chainer.org/)
-* [Caffe](https://caffe.berkeleyvision.org/)
-* [CUDA Toolkit 10.0.130](https://developer.nvidia.com/cuda-toolkit)
+* [Miniconda 4.5](https://docs.conda.io/en/latest/miniconda.html)
+* [Chainer 7.7](https://chainer.org/)
+* [Caffe 1.0](https://caffe.berkeleyvision.org/)
+* [CUDA Toolkit 11.2](https://developer.nvidia.com/cuda-toolkit)
* [Microsoft Cognitive Toolkit 2.5.1](https://www.microsoft.com/research/product/cognitive-toolkit/?lang=fr_ca) ## Known issues -- Optional Caffe install may fail if Docker is not running properly. If you would like to install Caffe, make sure Docker is installed and running before attempting the Caffe installation through the Dev Tools Pack Installer.
+- Optional Caffe, NVIDIA DeepStream SDK, and Intel OpenVINO Toolkit installations might fail if Docker isn't running properly. To install these optional tools, ensure that Docker is installed and running before you attempt the installations through the Dev Tools Pack Installer.
-- Optional CUDA install fails on incompatible systems. Before attempting to install the [CUDA Toolkit 10.0.130](https://developer.nvidia.com/cuda-toolkit) through the Dev Tools Pack Installer, verify your system compatibility.
+- The optional CUDA Toolkit installed on the Mac version is 10.0.130. CUDA Toolkit 11 no longer supports development or running applications on macOS.
## Docker minimum requirements
azure-percept Quickstart Percept Audio Setup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/quickstart-percept-audio-setup.md
Azure Percept Audio works out of the box with Azure Percept DK. No unique setup
- Azure Percept Audio - [Azure subscription](https://azure.microsoft.com/free/) - [Azure Percept DK setup experience](./quickstart-percept-dk-set-up.md): you connected your devkit to a Wi-Fi network, created an IoT Hub, and connected your devkit to the IoT Hub-- Speaker or headphones that can connect to a 3.5mm audio jack (optional)
+- Speaker or headphones that can connect to a 3.5-mm audio jack (optional)
## Connecting your devices 1. Connect the Azure Percept Audio device to the Azure Percept DK carrier board with the included Micro USB to USB Type-A cable. Connect the Micro USB end of the cable to the Audio interposer (developer) board and the Type-A end to the Percept DK carrier board.
-1. (Optional) connect your speaker or headphones to your Azure Percept Audio device via the audio jack, which is labeled "Line Out." This will allow you to hear audio responses.
+1. (Optional) connect your speaker or headphones to your Azure Percept Audio device via the audio jack, labeled "Line Out." This will allow you to hear audio responses.
-1. Power on the devkit. LED L02 on the Audio interposer board will change to blinking white to indicate that the device was powered on and that the Audio SoM is authenticating.
+1. Power on the devkit. LED L02 will change to blinking white, which indicates that the device was powered on and is authenticating.
-1. Wait for the authentication process to complete--this can take up to 3 minutes.
+1. Wait for the authentication process to complete, which takes up to 5 minutes.
-1. You are ready to begin prototyping when you see one of the following:
+1. You're ready to begin prototyping when you see one of the following LED states:
- - LED L02 will change to solid white: this indicates that authentication is complete, and the devkit has not been configured with a keyword yet.
- - All three LEDs turn blue: this indicates that authentication is complete, and the devkit is configured with a keyword.
+ - LED L02 will change to solid white, indicating that the authentication is complete and the devkit is configured without a keyword.
+ - All three LEDs turn blue, indicating that the authentication is complete and the devkit is configured with a keyword.
## Next steps
azure-percept Quickstart Percept Dk Set Up https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/quickstart-percept-dk-set-up.md
To verify if your Azure account is an "owner" or "contributor" within th
> [!WARNING] > While connected to the Azure Percept DK Wi-Fi access point, your host computer will temporarily lose its connection to the Internet. Active video conference calls, web streaming, or other network-based experiences will be interrupted.
-1. Once connected to the dev kit's Wi-Fi access point, the host computer will automatically launch the setup experience in a new browser window with **your.new.device/** in the address bar. If the tab does not open automatically, launch the setup experience by going to [http://10.1.1.1](http://10.1.1.1). Make sure your browser is signed in with the same Azure account credentials you intend to use with Azure Percept.
+1. Once connected to the dev kit's Wi-Fi access point, the host computer will automatically launch the setup experience in a new browser window with **your.new.device/** in the address bar. If the tab does not open automatically, launch the setup experience by going to [http://10.1.1.1](http://10.1.1.1) in a web browser. Make sure your browser is signed in with the same Azure account credentials you intend to use with Azure Percept.
:::image type="content" source="./media/quickstart-percept-dk-setup/main-01-welcome.png" alt-text="Welcome page.":::
azure-percept Troubleshoot Audio Accessory Speech Module https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/troubleshoot-audio-accessory-speech-module.md
Use the guidelines below to troubleshoot voice assistant application issues.
+## Understanding Ear SoM LED indicators
+
+You can use LED indicators to understand which state your device is in. It takes around 4-5 minutes for the device to power on and the module to fully initialize. As it goes through initialization steps, you will see:
+
+1. Center white LED on (static): the device is powered on.
+1. Center white LED on (blinking): authentication is in progress.
+1. Center white LED on (static): the device is authenticated but a keyword is not configured.
+1. All three LEDs will change to blue once a demo has been deployed and the device is ready to use.
+
+For more information, see [Azure Percept Audio button and LED behavior](./audio-button-led-behavior.md).
+
+### Troubleshooting LED issues
+- **If the center LED is solid white**, try [using a template to create a voice assistant](./tutorial-no-code-speech.md).
+- **If the center LED is always blinking**, it indicates an authentication issue. Try these troubleshooting steps:
+ - Make sure that your USB-A and micro USB connections are secured.
+ - Check to see if the [speech module is running](./troubleshoot-audio-accessory-speech-module.md#checking-runtime-status-of-the-speech-module).
+ - Restart the device.
+ - [Collect logs](./troubleshoot-audio-accessory-speech-module.md#collecting-speech-module-logs) and attach them to a support request.
+ - Check to see if your dev kit is running the latest software and apply an update if available.
+
+## Checking runtime status of the speech module
+
+Check if the runtime status of **azureearspeechclientmodule** shows as **running**. To locate the runtime status of your device modules, open the [Azure portal](https://portal.azure.com/) and navigate to **All resources** -> **[your IoT hub]** -> **IoT Edge** -> **[your device ID]**. Click the **Modules** tab to see the runtime status of all installed modules.
++
+If the runtime status of **azureearspeechclientmodule** is not listed as **running**, click **Set modules** -> **azureearspeechclientmodule**. On the **Module Settings** page, set **Desired Status** to **running** and click **Update**.
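If you prefer the command line, a hedged sketch using the Azure CLI IoT extension (placeholder hub and device names; the query path assumes the standard IoT Edge agent reported properties):

```bash
# Read the reported runtime status of the speech module from the IoT Edge agent module twin.
az iot hub module-twin show \
  --hub-name myIotHub \
  --device-id myPerceptDevice \
  --module-id '$edgeAgent' \
  --query "properties.reported.modules.azureearspeechclientmodule.runtimeStatus"
```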
+ ## Collecting speech module logs To run these commands, [SSH into the dev kit](./how-to-ssh-into-percept-dk.md) and enter the commands into the SSH client prompt.
After redirecting output to a .txt file, copy the file to your host PC via SCP:
scp [remote username]@[IP address]:[remote file path]/[file name].txt [local host file path] ```
-[local host file path] refers to the location on your host PC which you would like to copy the .txt file to. [remote username] is the SSH username chosen during the [setup experience](./quickstart-percept-dk-set-up.md).
-
-## Checking runtime status of the speech module
-
-Check if the runtime status of **azureearspeechclientmodule** shows as **running**. To locate the runtime status of your device modules, open the [Azure portal](https://portal.azure.com/) and navigate to **All resources** -> **[your IoT hub]** -> **IoT Edge** -> **[your device ID]**. Click the **Modules** tab to see the runtime status of all installed modules.
--
-If the runtime status of **azureearspeechclientmodule** is not listed as **running**, click **Set modules** -> **azureearspeechclientmodule**. On the **Module Settings** page, set **Desired Status** to **running** and click **Update**.
-
-## Understanding Ear SoM LED indicators
-
-You can use LED indicators to understand which state you device is in. Usually it takes around 2 minutes for the module to fully initialize after the device powers on. As it goes through initialization steps, you will see:
-
-1. Center white LED on (static): the device is powered on.
-2. Center white LED on (blinking): authentication is in progress.
-3. All three LEDs will change to blue once the device is authenticated and ready to use.
-
-|LED|LED State|Ear SoM Status|
-|||--|
-|L02|1x white, static on|Power on |
-|L02|1x white, 0.5 Hz flashing|Authentication in progress |
-|L01 & L02 & L03|3x blue, static on|Waiting for keyword|
-|L01 & L02 & L03|LED array flashing, 20fps |Listening or speaking|
-|L01 & L02 & L03|LED array racing, 20fps|Thinking|
-|L01 & L02 & L03|3x red, static on |Mute|
+[local host file path] refers to the location on your host PC to which you would like to copy the .txt file. [remote username] is the SSH username chosen during the [setup experience](./quickstart-percept-dk-set-up.md).
-## Next steps
+## Known issues
+- If using a free trial, the speech model may exceed the free trial price plan. In this case, the model will stop working without an error message.
+- If more than 5 IoT Edge devices are connected, the report (the text sent via telemetry to IoT Hub and Speech Studio) may be blocked.
+- If the device is in a different region than the resources, the report message may be delayed.
-See the [general troubleshooting guide](./troubleshoot-dev-kit.md) for more information on troubleshooting your Azure Percept DK.
+## Useful links
+- [Azure Percept Audio setup](./quickstart-percept-audio-setup.md)
+- [Azure Percept Audio button and LED behavior](./audio-button-led-behavior.md)
+- [Create a voice assistant with Azure Percept DK and Azure Percept Audio](./tutorial-no-code-speech.md)
+- [Azure Percept DK general troubleshooting guide](./troubleshoot-dev-kit.md)
azure-resource-manager Tutorial Create Managed App With Custom Provider https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/managed-applications/tutorial-create-managed-app-with-custom-provider.md
description: This tutorial describes how to create an Azure Managed Application
Previously updated : 06/20/2019 Last updated : 06/20/2019
To complete this tutorial, you need to know:
In this tutorial, you create a managed application, and its managed resource group will contain a custom provider instance, a storage account, and a function. The Azure Function used in this example implements an API that handles custom provider operations for actions and resources. An Azure storage account is used as basic storage for your custom provider resources.
-The user interface definition for creating a managed application instance includes `funcname` and `storagename` input elements. Storage account name and function name must be globally unique. By default function files will be deployed from [sample function package](https://github.com/Azure/azure-quickstart-templates/tree/master/101-custom-rp-with-function/artifacts/functionzip), but you can change it by adding an input element for a package link in *createUiDefinition.json*:
+The user interface definition for creating a managed application instance includes `funcname` and `storagename` input elements. The storage account name and function name must be globally unique. By default, function files are deployed from the [sample function package](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.customproviders/custom-rp-with-function/artifacts/functionzip), but you can change this by adding an input element for a package link in *createUiDefinition.json*:
```json {
Package the following managed application artifacts to zip archive and upload it
* mainTemplate.json * viewDefinition.json
-All files must be at root level. The package with artifacts can be stored in any storage, for example GitHub blob or Azure Storage Account blob. Here is a script to upload the application package to storage account:
+All files must be at the root level. The package with artifacts can be stored in any storage, for example, a GitHub blob or an Azure storage account blob. Here is a script to upload the application package to a storage account:
```powershell $resourceGroup="appResourcesGroup"
Set-AzStorageBlobContent `
-File "path_to_your_zip_package" ` -Container appcontainer ` -Blob app.zip `
- -Context $ctx
+ -Context $ctx
# Get blob absolute uri $blobUri=(Get-AzureStorageBlob -Container appcontainer -Blob app.zip -Context $ctx).ICloudBlob.uri.AbsoluteUri
az managedapp definition create \
# [Portal](#tab/azure-portal) 1. In the Azure portal, select **All services**. In the list of resources, type and select **Managed Applications Center**.
-2. On the **Managed Applications Center**, choose **Service Catalog application definition** and click **Add**.
-
+2. On the **Managed Applications Center**, choose **Service Catalog application definition** and click **Add**.
+ ![Add Service Catalog](./media/tutorial-create-managed-app-with-custom-provider/service-catalog-managed-application.png) 3. Provide values for creating a Service Catalog definition:
az managedapp create \
# [Portal](#tab/azure-portal) 1. In the Azure portal, select **All services**. In the list of resources, type and select **Managed Applications Center**.
-2. On the **Managed Applications Center**, choose **Service Catalog applications** and click **Add**.
+2. On the **Managed Applications Center**, choose **Service Catalog applications** and click **Add**.
![Add managed application](./media/tutorial-create-managed-app-with-custom-provider/add-managed-application.png)
az managedapp create \
![Application settings](./media/tutorial-create-managed-app-with-custom-provider/application-settings.png)
-5. When validation passed, click **OK** to deploy an instance of a managed application.
-
+5. When validation passes, click **OK** to deploy an instance of a managed application.
+ ![Deploy managed application](./media/tutorial-create-managed-app-with-custom-provider/deploy-managed-application.png)
You can go to managed application instance and perform **custom action** in "Ove
## Looking for help
-If you have questions about Azure Managed Applications, you can try asking on [Stack Overflow](https://stackoverflow.com/questions/tagged/azure-managed-app) with tag azure-managed-app or [Microsoft Q&A](/answers/topics/azure-managed-applications.html) with tag azure-managed-application. A similar question may have already been asked and answered, so check first before posting. Please use respective tags for faster response.
+If you have questions about Azure Managed Applications, you can try asking on [Stack Overflow](https://stackoverflow.com/questions/tagged/azure-managed-app) with the tag azure-managed-app or [Microsoft Q&A](/answers/topics/azure-managed-applications.html) with the tag azure-managed-application. A similar question may have already been asked and answered, so check first before posting. Use the respective tags for a faster response.
## Next steps
azure-resource-manager Key Vault Parameter https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/key-vault-parameter.md
Title: Key Vault secret with template description: Shows how to pass a secret from a key vault as a parameter during deployment. Previously updated : 04/23/2021 Last updated : 05/17/2021
$secret = Set-AzKeyVaultSecret -VaultName ExampleVault -Name 'ExamplePassword' -
-As the owner of the key vault, you automatically have access to create secrets. If the user working with secrets isn't the owner of the key vault, grant access with:
+As the owner of the key vault, you automatically have access to create secrets. If you need to let another user create secrets, use:
# [Azure CLI](#tab/azure-cli)
Set-AzKeyVaultAccessPolicy `
+The access policies aren't needed if the user is deploying a template that retrieves a secret. Add a user to the access policies only if the user needs to work directly with the secrets. The deployment permissions are defined in the next section.
+ For more information about creating key vaults and adding secrets, see: - [Set and retrieve a secret by using CLI](../../key-vault/secrets/quick-create-cli.md)
For more information about creating key vaults and adding secrets, see:
- [Set and retrieve a secret by using .NET](../../key-vault/secrets/quick-create-net.md) - [Set and retrieve a secret by using Node.js](../../key-vault/secrets/quick-create-node.md)
-## Grant access to the secrets
+## Grant deployment access to the secrets
+
+The user who deploys the template must have the `Microsoft.KeyVault/vaults/deploy/action` permission for the scope of the resource group and key vault. By checking this access, Azure Resource Manager prevents an unapproved user from accessing the secret by passing in the resource ID for the key vault. You can grant deployment access to users without granting write access to the secrets.
-The user who deploys the template must have the `Microsoft.KeyVault/vaults/deploy/action` permission for the scope of the resource group and key vault. The [Owner](../../role-based-access-control/built-in-roles.md#owner) and [Contributor](../../role-based-access-control/built-in-roles.md#contributor) roles both grant this access. If you created the key vault, you're the owner and have the permission.
+The [Owner](../../role-based-access-control/built-in-roles.md#owner) and [Contributor](../../role-based-access-control/built-in-roles.md#contributor) roles both grant this access. If you created the key vault, you're the owner and have the permission.
-The following procedure shows how to create a role with the minimum permission, and how to assign the user.
+For other users, grant the `Microsoft.KeyVault/vaults/deploy/action` permission. The following procedure shows how to create a role with the minimum permission, and assign it to a user.
1. Create a custom role definition JSON file:
The following template dynamically creates the key vault ID and passes it as a p
``` > [!NOTE]
-> As of Bicep version 0.3.255, a parameter file is needed to retrieve a key vault secret because the `reference` keyword isn't supported. There's work in progress to add support and for more information, see [GitHub issue 1028](https://github.com/Azure/bicep/issues/1028).
+> As of Bicep version 0.3.539, you can use an **existing** key vault secret. The key vault and secret must exist before a deployment begins. For more information, see the [Bicep spec](https://github.com/Azure/bicep/blob/main/docs/spec/modules.md#using-existing-key-vaults-secret-as-input-for-secure-string-module-parameter).
## Next steps
azure-resource-manager Template Syntax https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/template-syntax.md
Title: Template structure and syntax description: Describes the structure and properties of Azure Resource Manager templates (ARM templates) using declarative JSON syntax. Previously updated : 03/26/2021 Last updated : 05/17/2021 # Understand the structure and syntax of ARM templates
You have a few options for adding comments and metadata to your template.
### Comments
-For inline comments, you can use either `//` or `/* ... */` but this syntax doesn't work with all tools. If you add this style of comment, be sure the tools you use support inline JSON comments.
+For inline comments, you can use either `//` or `/* ... */`.
> [!NOTE]
-> To deploy templates with comments by using Azure CLI with version 2.3.0 or older, you must use the `--handle-extended-json-format` switch.
+>
+> To deploy templates with comments, use Azure PowerShell or Azure CLI. For CLI, use version 2.3.0 or later, and specify the `--handle-extended-json-format` switch.
+>
+> Comments aren't supported when you deploy the template through the Azure portal, a DevOps pipeline, or the REST API.
```json {
You can't add a `metadata` object to user-defined functions.
You can break a string into multiple lines. For example, see the `location` property and one of the comments in the following JSON example.
+> [!NOTE]
+>
+> To deploy templates with multi-line strings, use Azure PowerShell or Azure CLI. For CLI, use version 2.3.0 or later, and specify the `--handle-extended-json-format` switch.
+>
+> Multi-line strings aren't supported when you deploy the template through the Azure portal, a DevOps pipeline, or the REST API.
++ ```json { "type": "Microsoft.Compute/virtualMachines",
You can break a string into multiple lines. For example, see the `location` prop
], ```
-> [!NOTE]
-> To deploy templates with multi-line strings by using Azure CLI with version 2.3.0 or older, you must use the `--handle-extended-json-format` switch.
- ## Next steps * To view complete templates for many different types of solutions, see the [Azure Quickstart Templates](https://azure.microsoft.com/documentation/templates/).
azure-signalr Howto Use Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-signalr/howto-use-managed-identity.md
We provide libraries and code samples that show how to handle token validation.
Setting up access token validation in a Function App is easy and efficient, and doesn't require any code changes.
-1. In the **Authentication / Authorization** page, switch **App Service Authentication** to **On**.
+1. In the **Authentication (classic)** page, switch **App Service Authentication** to **On**.
2. Select **Log in with Azure Active Directory** in **Action to take when request is not authenticated**.
Setting access token validation in Function App is easy and efficient without co
After these settings, the Function App will reject requests without an access token in the header.
+> [!Important]
> To pass the authentication, the *Issuer Url* must match the *iss* claim in the token. Currently, we only support the v1 endpoint (see [v1.0 and v2.0](../active-directory/develop/access-tokens.md#v10-and-v20)), so the *Issuer Url* should look like `https://sts.windows.net/<tenant-id>/`. Check the *Issuer Url* configured in the Azure Function. For **Authentication**, go to *Identity provider* -> *Edit* -> *Issuer Url*, and for **Authentication (classic)**, go to *Azure Active Directory* -> *Advanced* -> *Issuer Url*.
++ ## Use a managed identity for Key Vault reference SignalR Service can access Key Vault to get secret using the managed identity.
Currently, this feature can be used in the following scenarios:
## Next steps -- [Azure Functions development and configuration with Azure SignalR Service](signalr-concept-serverless-development-config.md)
+- [Azure Functions development and configuration with Azure SignalR Service](signalr-concept-serverless-development-config.md)
azure-sql Active Geo Replication Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/active-geo-replication-overview.md
For more information on the SQL Database compute sizes, see [What are SQL Databa
## Cross-subscription geo-replication
+> [!NOTE]
+> Creating a geo-replica on a logical server in a different Azure tenant is not supported when [Azure Active Directory](https://techcommunity.microsoft.com/t5/azure-sql/support-for-azure-ad-user-creation-on-behalf-of-azure-ad/ba-p/2346849) auth is active (enabled) on either primary or secondary logical server.
+ To set up active geo-replication between two databases belonging to different subscriptions (whether under the same tenant or not), you must follow the special procedure described in this section. The procedure is based on SQL commands and requires: - Creating a privileged login on both servers
azure-sql Arm Templates Content Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/arm-templates-content-guide.md
The following table includes links to Azure Resource Manager templates for Azure
| [Auditing to Azure Event Hub](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.sql/sql-auditing-server-policy-to-eventhub) | This template allows you to deploy a server with auditing enabled to write audit logs to an existing event hub. In order to send audit events to Event Hubs, set auditing settings with `Enabled` `State`, and set `IsAzureMonitorTargetEnabled` as `true`. Also, configure Diagnostic Settings with the `SQLSecurityAuditEvents` log category on the `master` database (for server-level auditing). Auditing tracks database events and writes them to an audit log that can be placed in your Azure storage account, OMS workspace, or Event Hubs.| | [Azure Web App with SQL Database](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.web/web-app-sql-database) | This sample creates a free Azure web app and a database in Azure SQL Database at the "Basic" service level.| | [Azure Web App and Redis Cache with SQL Database](https://github.com/Azure/azure-quickstart-templates/tree/master/201-web-app-redis-cache-sql-database) | This template creates a web app, Redis Cache, and database in the same resource group and creates two connection strings in the web app for the database and Redis Cache.|
-| [Import data from Blob storage using ADF V2](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.datafactory/101-data-factory-v2-blob-to-sql-copy) | This Azure Resource Manager template creates an instance of Azure Data Factory V2 that copies data from Azure Blob storage to SQL Database.|
-| [HDInsight cluster with a database](https://github.com/Azure/azure-quickstart-templates/tree/master/101-hdinsight-linux-with-sql-database) | This template allows you to create an HDInsight cluster, a logical SQL server, a database, and two tables. This template is used by the [Use Sqoop with Hadoop in HDInsight article](../../hdinsight/hadoop/hdinsight-use-sqoop.md). |
-| [Azure Logic App that runs a SQL Stored Procedure on a schedule](https://github.com/Azure/azure-quickstart-templates/tree/master/101-logic-app-sql-proc) | This template allows you to create a logic app that will run a SQL stored procedure on schedule. Any arguments for the procedure can be put into the body section of the template.|
+| [Import data from Blob storage using ADF V2](https://github.com/Azure/azure-quickstart-templates/tree/master/101-data-factory-v2-blob-to-sql-copy) | This Azure Resource Manager template creates an instance of Azure Data Factory V2 that copies data from Azure Blob storage to SQL Database.|
+| [HDInsight cluster with a database](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.hdinsight/hdinsight-linux-with-sql-database) | This template allows you to create an HDInsight cluster, a logical SQL server, a database, and two tables. This template is used by the [Use Sqoop with Hadoop in HDInsight article](../../hdinsight/hadoop/hdinsight-use-sqoop.md). |
+| [Azure Logic App that runs a SQL Stored Procedure on a schedule](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.logic/logic-app-sql-proc) | This template allows you to create a logic app that will run a SQL stored procedure on schedule. Any arguments for the procedure can be put into the body section of the template.|
## [Azure SQL Managed Instance](#tab/managed-instance)
azure-sql Connectivity Architecture https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/connectivity-architecture.md
If you are connecting from outside Azure, your connections have a connection pol
## Gateway IP addresses
-The table below lists the IP Addresses of Gateways by region. To connect to SQL Database or Azure Synapse, you need to allow network traffic to and from **all** Gateways for the region.
-
-Details of how traffic shall be migrated to new Gateways in specific regions are in the following article: [Azure SQL Database traffic migration to newer Gateways](gateway-migration.md)
-
-| Region name | Gateway IP addresses |
-| | |
-| Australia Central | 20.36.105.0, 20.36.104.6, 20.36.104.7 |
-| Australia Central 2 | 20.36.113.0, 20.36.112.6 |
-| Australia East | 13.75.149.87, 40.79.161.1, 13.70.112.9 |
-| Australia South East | 191.239.192.109, 13.73.109.251, 13.77.48.10, 13.77.49.32 |
-| Brazil South | 191.233.200.14, 191.234.144.16, 191.234.152.3 |
-| Canada Central | 40.85.224.249, 52.246.152.0, 20.38.144.1 |
-| Canada East | 40.86.226.166, 52.242.30.154, 40.69.105.9 , 40.69.105.10 |
-| Central US | 13.67.215.62, 52.182.137.15, 104.208.16.96, 104.208.21.1, 13.89.169.20 |
-| China East | 139.219.130.35 |
-| China East 2 | 40.73.82.1 |
-| China North | 139.219.15.17 |
-| China North 2 | 40.73.50.0 |
-| East Asia | 52.175.33.150, 13.75.32.4, 13.75.32.14 |
-| East US | 40.121.158.30, 40.79.153.12, 40.78.225.32 |
-| East US 2 | 40.79.84.180, 52.177.185.181, 52.167.104.0, 191.239.224.107, 104.208.150.3, 40.70.144.193 |
-| France Central | 40.79.137.0, 40.79.129.1, 40.79.137.8, 40.79.145.12 |
-| France South | 40.79.177.0, 40.79.177.10 ,40.79.177.12 |
-| Germany Central | 51.4.144.100 |
-| Germany North East | 51.5.144.179 |
-| Germany West Central | 51.116.240.0, 51.116.248.0, 51.116.152.0 |
-| India Central | 104.211.96.159, 104.211.86.30 , 104.211.86.31 |
-| India South | 104.211.224.146 |
-| India West | 104.211.160.80, 104.211.144.4 |
-| Japan East | 13.78.61.196, 40.79.184.8, 13.78.106.224, 40.79.192.5, 13.78.104.32 |
-| Japan West | 104.214.148.156, 40.74.100.192, 40.74.97.10 |
-| Korea Central | 52.231.32.42, 52.231.17.22 ,52.231.17.23, 20.44.24.32, 20.194.64.33 |
-| Korea South | 52.231.200.86, 52.231.151.96 |
-| North Central US | 23.96.178.199, 23.98.55.75, 52.162.104.33, 52.162.105.9 |
-| North Europe | 40.113.93.91, 52.138.224.1, 13.74.104.113 |
-| Norway East | 51.120.96.0, 51.120.96.33 |
-| Norway West | 51.120.216.0 |
-| South Africa North | 102.133.152.0, 102.133.120.2, 102.133.152.32 |
-| South Africa West | 102.133.24.0 |
-| South Central US | 13.66.62.124, 104.214.16.32, 20.45.121.1, 20.49.88.1 |
-| South East Asia | 104.43.15.0, 40.78.232.3, 13.67.16.193 |
-| Switzerland North | 51.107.56.0, 51.107.57.0 |
-| Switzerland West | 51.107.152.0, 51.107.153.0 |
-| UAE Central | 20.37.72.64 |
-| UAE North | 65.52.248.0 |
-| UK South | 51.140.184.11, 51.105.64.0, 51.140.144.36, 51.105.72.32 |
-| UK West | 51.141.8.11, 51.140.208.96, 51.140.208.97 |
-| West Central US | 13.78.145.25, 13.78.248.43, 13.71.193.32, 13.71.193.33 |
-| West Europe | 40.68.37.158, 104.40.168.105, 52.236.184.163 |
-| West US | 104.42.238.205, 13.86.216.196 |
-| West US 2 | 13.66.226.202, 40.78.240.8, 40.78.248.10 |
-| West US 2 | 13.66.226.202, 40.78.240.8, 40.78.248.10 |
-| | |
+The table below lists the individual Gateway IP addresses and the Gateway IP address ranges per region.
+Periodically, we retire Gateways running on older hardware and migrate the traffic to new Gateways as per the process outlined at [Azure SQL Database traffic migration to newer Gateways](gateway-migration.md). We strongly encourage customers to use the **Gateway IP address ranges** so that they are not impacted by this activity in a region.
+
+> [!IMPORTANT]
+> Logins for SQL Database or Azure Synapse can land on **any of the Gateways in a region**. For consistent connectivity to SQL Database or Azure Synapse, allow network traffic to and from **ALL** Gateway IP addresses or Gateway IP address ranges for the region.
+
+| Region name | Gateway IP addresses | Gateway IP address ranges |
+| | | |
+| Australia Central | 20.36.105.0, 20.36.104.6, 20.36.104.7 | 20.36.105.32/29 |
+| Australia Central 2 | 20.36.113.0, 20.36.112.6 | 20.36.113.32/29 |
+| Australia East | 13.75.149.87, 40.79.161.1, 13.70.112.9 | 13.70.112.32/29, 40.79.160.32/29, 40.79.168.32/29 |
+| Australia South East | 191.239.192.109, 13.73.109.251, 13.77.48.10, 13.77.49.32 | 13.77.49.32/29 |
+| Brazil South | 191.233.200.14, 191.234.144.16, 191.234.152.3 | 191.233.200.32/29, 191.234.144.32/29 |
+| Canada Central | 40.85.224.249, 52.246.152.0, 20.38.144.1 | 13.71.168.32/29, 20.38.144.32/29, 52.246.152.32/29 |
+| Canada East | 40.86.226.166, 52.242.30.154, 40.69.105.9 , 40.69.105.10 | 40.69.105.32/29|
+| Central US | 13.67.215.62, 52.182.137.15, 104.208.21.1, 13.89.169.20 | 104.208.21.192/29, 13.89.168.192/29, 52.182.136.192/29 |
+| China East | 139.219.130.35 | 52.130.112.136/29 |
+| China East 2 | 40.73.82.1 | 52.130.120.88/29 |
+| China North | 139.219.15.17 | 52.130.128.88/29 |
+| China North 2 | 40.73.50.0 | 52.130.40.64/29 |
+| East Asia | 52.175.33.150, 13.75.32.4, 13.75.32.14 | 13.75.32.192/29, 13.75.33.192/29 |
+| East US | 40.121.158.30, 40.79.153.12, 40.78.225.32 | 20.42.65.64/29, 20.42.73.0/29, 52.168.116.64/29 |
+| East US 2 | 40.79.84.180, 52.177.185.181, 52.167.104.0, 191.239.224.107, 104.208.150.3, 40.70.144.193 | 104.208.150.192/29, 40.70.144.192/29, 52.167.104.192/29 |
+| France Central | 40.79.137.0, 40.79.129.1, 40.79.137.8, 40.79.145.12 | 40.79.136.32/29, 40.79.144.32/29 |
+| France South | 40.79.177.0, 40.79.177.10 ,40.79.177.12 | 40.79.176.40/29, 40.79.177.32/29 |
+| Germany West Central | 51.116.240.0, 51.116.248.0, 51.116.152.0 | 51.116.152.32/29, 51.116.240.32/29, 51.116.248.32/29 |
+| India Central | 104.211.96.159, 104.211.86.30 , 104.211.86.31 | 104.211.86.32/29, 20.192.96.32/29 |
+| India South | 104.211.224.146 | 40.78.192.32/29, 40.78.193.32/29 |
+| India West | 104.211.160.80, 104.211.144.4 | 104.211.144.32/29, 104.211.145.32/29 |
+| Japan East | 13.78.61.196, 40.79.184.8, 13.78.106.224, 40.79.192.5, 13.78.104.32 | 13.78.104.32/29, 40.79.184.32/29, 40.79.192.32/29 |
+| Japan West | 104.214.148.156, 40.74.100.192, 40.74.97.10 | 40.74.96.32/29 |
+| Korea Central | 52.231.32.42, 52.231.17.22 ,52.231.17.23, 20.44.24.32, 20.194.64.33 | 20.194.64.32/29,20.44.24.32/29, 52.231.16.32/29 |
+| Korea South | 52.231.200.86, 52.231.151.96 | |
+| North Central US | 23.96.178.199, 23.98.55.75, 52.162.104.33, 52.162.105.9 | 52.162.105.192/29 |
+| North Europe | 40.113.93.91, 52.138.224.1, 13.74.104.113 | 13.69.233.136/29, 13.74.105.192/29, 52.138.229.72/29 |
+| Norway East | 51.120.96.0, 51.120.96.33 | 51.120.96.32/29 |
+| Norway West | 51.120.216.0 | 51.120.217.32/29 |
+| South Africa North | 102.133.152.0, 102.133.120.2, 102.133.152.32 | 102.133.120.32/29, 102.133.152.32/29, 102.133.248.32/29|
+| South Africa West | 102.133.24.0 | 102.133.25.32/29 |
+| South Central US | 13.66.62.124, 104.214.16.32, 20.45.121.1, 20.49.88.1 | 20.45.121.32/29, 20.49.88.32/29, 20.49.89.32/29, 40.124.64.136/29 |
+| South East Asia | 104.43.15.0, 40.78.232.3, 13.67.16.193 | 13.67.16.192/29, 23.98.80.192/29, 40.78.232.192/29|
+| Switzerland North | 51.107.56.0, 51.107.57.0 | 51.107.56.32/29 |
+| Switzerland West | 51.107.152.0, 51.107.153.0 | 51.107.153.32/29 |
+| UAE Central | 20.37.72.64 | 20.37.72.96/29, 20.37.73.96/29 |
+| UAE North | 65.52.248.0 | 40.120.72.32/29, 65.52.248.32/29 |
+| UK South | 51.140.184.11, 51.105.64.0, 51.140.144.36, 51.105.72.32 | 51.105.64.32/29, 51.105.72.32/29, 51.140.144.32/29 |
+| UK West | 51.141.8.11, 51.140.208.96, 51.140.208.97 | 51.140.208.96/29, 51.140.209.32/29 |
+| West Central US | 13.78.145.25, 13.78.248.43, 13.71.193.32, 13.71.193.33 | 13.71.193.32/29 |
+| West Europe | 40.68.37.158, 104.40.168.105, 52.236.184.163 | 104.40.169.32/29, 13.69.112.168/29, 52.236.184.32/29 |
+| West US | 104.42.238.205, 13.86.216.196 | 13.86.217.224/29 |
+| West US 2 | 13.66.226.202, 40.78.240.8, 40.78.248.10 | 13.66.136.192/29, 40.78.240.192/29, 40.78.248.192/29 |
+| West US 2 | 13.66.226.202, 40.78.240.8, 40.78.248.10 | 20.150.168.32/29, 20.150.176.32/29, 20.150.184.32/29 |
+| | | |
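As an illustration only, a hedged Azure CLI sketch (placeholder resource names; the prefixes shown are the Australia East ranges from the table above) that allows outbound SQL traffic to a region's Gateway IP address ranges through a network security group rule:

```bash
# Allow outbound TCP 1433 to the Gateway IP address ranges of a region (example: Australia East).
az network nsg rule create \
  --resource-group myResourceGroup \
  --nsg-name myNsg \
  --name AllowSqlDbGateways \
  --priority 200 \
  --direction Outbound \
  --access Allow \
  --protocol Tcp \
  --destination-port-ranges 1433 \
  --destination-address-prefixes 13.70.112.32/29 40.79.160.32/29 40.79.168.32/29
```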
## Next steps
azure-sql Database Copy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/database-copy.md
AS COPY OF source_server_name.source_database_name;
> [!TIP] > Database copy using T-SQL supports copying a database from a subscription in a different Azure tenant. This is only supported when using a SQL authentication login to log in to the target server.
+> Creating a database copy on a logical server in a different Azure tenant is not supported when [Azure Active Directory](https://techcommunity.microsoft.com/t5/azure-sql/support-for-azure-ad-user-creation-on-behalf-of-azure-ad/ba-p/2346849) auth is active (enabled) on either source or target logical server.
## Monitor the progress of the copying operation
azure-sql Db2 To Sql Database Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/database/db2-to-sql-database-guide.md
Previously updated : 11/06/2020 Last updated : 05/14/2021 # Migration guide: IBM Db2 to Azure SQL Database [!INCLUDE[appliesto-sqldb-sqlmi](../../includes/appliesto-sqldb.md)]
For additional assistance, see the following resources, which were developed in
|[Data workload assessment model and tool](https://github.com/Microsoft/DataMigrationTeam/tree/master/Data%20Workload%20Assessment%20Model%20and%20Tool)| This tool provides suggested "best fit" target platforms, cloud readiness, and application/database remediation level for a given workload. It offers simple, one-click calculation and report generation that helps to accelerate large estate assessments by providing and automated and uniform target platform decision process.| |[Db2 zOS data assets discovery and assessment package](https://github.com/microsoft/DataMigrationTeam/tree/master/DB2%20zOS%20Data%20Assets%20Discovery%20and%20Assessment%20Package)|After running the SQL script on a database, you can export the results to a file on the file system. Several file formats are supported, including *.csv, so that you can capture the results in external tools such as spreadsheets. This method can be useful if you want to easily share results with teams that do not have the workbench installed.| |[IBM Db2 LUW inventory scripts and artifacts](https://github.com/microsoft/DataMigrationTeam/blob/master/IBM%20DB2%20LUW%20Inventory%20Scripts%20and%20Artifacts)|This asset includes a SQL query that hits IBM Db2 LUW version 11.1 system tables and provides a count of objects by schema and object type, a rough estimate of "raw data" in each schema, and the sizing of tables in each schema, with results stored in a CSV format.|
-|[Db2 LUW pure scale on Azure - setup guide](https://github.com/microsoft/DataMigrationTeam/blob/master/Whitepapers/DB2%20PureScale%20on%20Azure.pdf)|This guide serves as a starting point for a Db2 implementation plan. Although business requirements will differ, the same basic pattern applies. This architectural pattern can also be used for OLAP applications on Azure.|
The Data SQL Engineering team developed these resources. This team's core charter is to unblock and accelerate complex modernization for data platform migration projects to Microsoft's Azure data platform.
azure-sql Db2 To Managed Instance Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/managed-instance/db2-to-managed-instance-guide.md
Previously updated : 11/06/2020 Last updated : 05/14/2021 # Migration guide: IBM Db2 to Azure SQL Managed Instance [!INCLUDE[appliesto-sqldb-sqlmi](../../includes/appliesto-sqlmi.md)]
For additional assistance, see the following resources, which were developed in
|[Data workload assessment model and tool](https://github.com/Microsoft/DataMigrationTeam/tree/master/Data%20Workload%20Assessment%20Model%20and%20Tool)| This tool provides suggested "best fit" target platforms, cloud readiness, and application/database remediation level for a given workload. It offers simple, one-click calculation and report generation that helps to accelerate large estate assessments by providing and automated and uniform target platform decision process.| |[Db2 zOS data assets discovery and assessment package](https://github.com/microsoft/DataMigrationTeam/tree/master/DB2%20zOS%20Data%20Assets%20Discovery%20and%20Assessment%20Package)|After running the SQL script on a database, you can export the results to a file on the file system. Several file formats are supported, including *.csv, so that you can capture the results in external tools such as spreadsheets. This method can be useful if you want to easily share results with teams that do not have the workbench installed.| |[IBM Db2 LUW inventory scripts and artifacts](https://github.com/microsoft/DataMigrationTeam/blob/master/IBM%20DB2%20LUW%20Inventory%20Scripts%20and%20Artifacts)|This asset includes a SQL query that hits IBM Db2 LUW version 11.1 system tables and provides a count of objects by schema and object type, a rough estimate of "raw data" in each schema, and the sizing of tables in each schema, with results stored in a CSV format.|
-|[Db2 LUW pure scale on Azure - setup guide](https://github.com/microsoft/DataMigrationTeam/blob/master/Whitepapers/DB2%20PureScale%20on%20Azure.pdf)|This guide serves as a starting point for a Db2 implementation plan. Although business requirements will differ, the same basic pattern applies. This architectural pattern can also be used for OLAP applications on Azure.|
The Data SQL Engineering team developed these resources. This team's core charter is to unblock and accelerate complex modernization for data platform migration projects to Microsoft's Azure data platform.
azure-sql Db2 To Sql On Azure Vm Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/virtual-machines/db2-to-sql-on-azure-vm-guide.md
Previously updated : 11/06/2020 Last updated : 05/14/2021 # Migration guide: IBM Db2 to SQL Server on Azure VM [!INCLUDE[appliesto--sqlmi](../../includes/appliesto-sqlvm.md)]
For additional assistance, see the following resources, which were developed in
|[Data workload assessment model and tool](https://github.com/Microsoft/DataMigrationTeam/tree/master/Data%20Workload%20Assessment%20Model%20and%20Tool)| This tool provides suggested "best fit" target platforms, cloud readiness, and application/database remediation level for a given workload. It offers simple, one-click calculation and report generation that helps to accelerate large estate assessments by providing and automated and uniform target platform decision process.| |[Db2 zOS data assets discovery and assessment package](https://github.com/microsoft/DataMigrationTeam/tree/master/DB2%20zOS%20Data%20Assets%20Discovery%20and%20Assessment%20Package)|After running the SQL script on a database, you can export the results to a file on the file system. Several file formats are supported, including *.csv, so that you can capture the results in external tools such as spreadsheets. This method can be useful if you want to easily share results with teams that do not have the workbench installed.| |[IBM Db2 LUW inventory scripts and artifacts](https://github.com/microsoft/DataMigrationTeam/tree/master/IBM%20DB2%20LUW%20Inventory%20Scripts%20and%20Artifacts)|This asset includes a SQL query that hits IBM Db2 LUW version 11.1 system tables and provides a count of objects by schema and object type, a rough estimate of "raw data" in each schema, and the sizing of tables in each schema, with results stored in a CSV format.|
-|[Db2 LUW pure scale on Azure - setup guide](https://github.com/microsoft/DataMigrationTeam/blob/master/Whitepapers/DB2%20PureScale%20on%20Azure.pdf)|This guide serves as a starting point for a Db2 implementation plan. Although business requirements will differ, the same basic pattern applies. This architectural pattern can also be used for OLAP applications on Azure.|
The Data SQL Engineering team developed these resources. This team's core charter is to unblock and accelerate complex modernization for data platform migration projects to Microsoft's Azure data platform.
azure-sql Storage Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/storage-configuration.md
This article teaches you how to configure your storage for your SQL Server on Azure Virtual Machines (VMs).
-SQL Server VMs deployed through marketplace images automatically follow default [storage best practices](performance-guidelines-best-practices-storage.md) which can be modified during deployment. Some of these configuration settings can be changed after deployment.
+SQL Server VMs deployed through marketplace images automatically follow default [storage best practices](performance-guidelines-best-practices-storage.md), which can be modified during deployment. Some of these configuration settings can be changed after deployment.
## Prerequisites
The following sections describe how to configure storage for new SQL Server virt
### Azure portal
-When provisioning an Azure VM using a SQL Server gallery image, select **Change configuration** on the **SQL Server Settings** tab to open the Performance Optimized Storage Configuration page. You can either leave the values at default, or modify the type of disk configuration that best suits your needs based on your workload.
+When provisioning an Azure VM using a SQL Server gallery image, select **Change configuration** on the **SQL Server Settings** tab to open the Performance Optimized Storage Configuration page. You can either leave the values at default, or modify the type of disk configuration that best suits your needs based on your workload.
![Screenshot that highlights the SQL Server settings tab and the Change configuration option.](./media/storage-configuration/sql-vm-storage-configuration-provisioning.png)
-Select the type of workload you're deploying your SQL Server for under **Storage optimization**. With the **General** optimization option, by default you will have one data disk with 5000 max IOPS, and you will use this same drive for your data, transaction log, and TempDB storage.
+Select the type of workload you're deploying your SQL Server for under **Storage optimization**. With the **General** optimization option, by default you will have one data disk with 5000 max IOPS, and you will use this same drive for your data, transaction log, and TempDB storage.
-Selecting either **Transactional processing** (OLTP) or **Data warehousing** will create a separate disk for data, a separate disk for the transaction log, and use local SSD for TempDB. There are no storage differences between **Transactional processing** and **Data warehousing**, but it does change your [stripe configuration, and trace flags](#workload-optimization-settings). Choosing premium storage sets the caching to *ReadOnly* for the data drive, and *None* for the log drive as per [SQL Server VM performance best practices](./performance-guidelines-best-practices-checklist.md).
+Selecting either **Transactional processing** (OLTP) or **Data warehousing** will create a separate disk for data, a separate disk for the transaction log, and use local SSD for TempDB. There are no storage differences between **Transactional processing** and **Data warehousing**, but it does change your [stripe configuration, and trace flags](#workload-optimization-settings). Choosing premium storage sets the caching to *ReadOnly* for the data drive, and *None* for the log drive as per [SQL Server VM performance best practices](./performance-guidelines-best-practices-checklist.md).
![SQL Server VM Storage Configuration During Provisioning](./media/storage-configuration/sql-vm-storage-configuration.png)
-The disk configuration is completely customizable so that you can configure the storage topology, disk type and IOPs you need for your SQL Server VM workload. You also have the ability to use UltraSSD (preview) as an option for the **Disk type** if your SQL Server VM is in one of the supported regions (East US 2, SouthEast Asia and North Europe) and you've enabled [ultra disks for your subscription](../../../virtual-machines/disks-enable-ultra-ssd.md).
+The disk configuration is completely customizable so that you can configure the storage topology, disk type, and IOPS you need for your SQL Server VM workload. You also have the ability to use UltraSSD (preview) as an option for the **Disk type** if your SQL Server VM is in one of the supported regions (East US 2, SouthEast Asia and North Europe) and you've enabled [ultra disks for your subscription](../../../virtual-machines/disks-enable-ultra-ssd.md).
-Additionally, you have the ability to set the caching for the disks. Azure VMs have a multi-tier caching technology called [Blob Cache](../../../virtual-machines/premium-storage-performance.md#disk-caching) when used with [Premium Disks](../../../virtual-machines/disks-types.md#premium-ssd). Blob Cache uses a combination of the Virtual Machine RAM and local SSD for caching.
+Additionally, you have the ability to set the caching for the disks. Azure VMs have a multi-tier caching technology called [Blob Cache](../../../virtual-machines/premium-storage-performance.md#disk-caching) when used with [Premium Disks](../../../virtual-machines/disks-types.md#premium-ssd). Blob Cache uses a combination of the Virtual Machine RAM and local SSD for caching.
-Disk caching for Premium SSD can be *ReadOnly*, *ReadWrite* or *None*.
+Disk caching for Premium SSD can be *ReadOnly*, *ReadWrite* or *None*.
-- *ReadOnly* caching is highly beneficial for SQL Server data files that are stored on Premium Storage. *ReadOnly* caching brings low read latency, high read IOPS, and throughput as, reads are performed from cache, which is within the VM memory and local SSD. These reads are much faster than reads from data disk, which is from Azure Blob storage. Premium storage does not count the reads served from cache towards the disk IOPS and throughput. Therefore, your applicable is able to achieve higher total IOPS and throughput. -- *None* cache configuration should be used for the disks hosting SQL Server Log file as the log file is written sequentially and does not benefit from *ReadOnly* caching. -- *ReadWrite* caching should not be used to host SQL Server files as SQL Server does not support data consistency with the *ReadWrite* cache. Writes waste capacity of the *ReadOnly* blob cache and latencies slightly increase if writes go through *ReadOnly* blob cache layers.
+- *ReadOnly* caching is highly beneficial for SQL Server data files that are stored on Premium Storage. *ReadOnly* caching brings low read latency, high read IOPS, and high throughput because reads are performed from cache, which is within the VM memory and local SSD. These reads are much faster than reads from the data disk, which come from Azure Blob storage. Premium storage does not count the reads served from cache towards the disk IOPS and throughput. Therefore, your application is able to achieve higher total IOPS and throughput. (A scripted example of attaching disks with these caching settings follows this list.)
+- *None* cache configuration should be used for the disks hosting SQL Server Log file as the log file is written sequentially and does not benefit from *ReadOnly* caching.
+- *ReadWrite* caching should not be used to host SQL Server files as SQL Server does not support data consistency with the *ReadWrite* cache. Writes waste capacity of the *ReadOnly* blob cache and latencies slightly increase if writes go through *ReadOnly* blob cache layers.
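For example, here's a minimal PowerShell sketch that attaches one data disk with *ReadOnly* caching and one log disk with *None* caching to an existing VM. The resource group, VM name, disk names, LUNs, and sizes are placeholders for illustration, not values from this article.

```powershell
# Sketch: attach a data disk with ReadOnly caching and a log disk with None caching.
# All names, LUNs, and sizes below are illustrative placeholders.
$rg     = "<resource-group>"
$vmName = "<sql-vm-name>"

$vm = Get-AzVM -ResourceGroupName $rg -Name $vmName

# Data files: ReadOnly caching gives low-latency, high-IOPS reads served from cache.
$vm = Add-AzVMDataDisk -VM $vm -Name "$vmName-data-1" -Lun 1 -CreateOption Empty `
        -DiskSizeInGB 1024 -Caching ReadOnly

# Transaction log: None, because the log is written sequentially and doesn't benefit from read caching.
$vm = Add-AzVMDataDisk -VM $vm -Name "$vmName-log-1" -Lun 2 -CreateOption Empty `
        -DiskSizeInGB 512 -Caching None

Update-AzVM -ResourceGroupName $rg -VM $vm
```

You would still initialize and format the new disks inside the VM before moving SQL Server files onto them.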
> [!TIP]
- > Be sure that your storage configuration matches the limitations imposed by the the selected VM size. Choosing storage parameters that exceed the performance cap of the VM size will result in warning: `The desired performance might not be reached due to the maximum virtual machine disk performance cap`. Either decrease the IOPs by changing the disk type, or increase the performance cap limitation by increasing the VM size. This will not stop provisioning.
+ > Be sure that your storage configuration matches the limitations imposed by the selected VM size. Choosing storage parameters that exceed the performance cap of the VM size will result in the warning: `The desired performance might not be reached due to the maximum virtual machine disk performance cap`. Either decrease the IOPS by changing the disk type, or increase the performance cap limitation by increasing the VM size. This will not stop provisioning.
Based on your choices, Azure performs the following storage configuration tasks after creating the VM:
For a full walkthrough of how to create a SQL Server VM in the Azure portal, see
If you use the following Resource Manager templates, two premium data disks are attached by default, with no storage pool configuration. However, you can customize these templates to change the number of premium data disks that are attached to the virtual machine.
-* [Create VM with Automated Backup](https://github.com/Azure/azure-quickstart-templates/tree/master/201-vm-sql-full-autobackup)
+* [Create VM with Automated Backup](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.compute/vm-sql-full-autobackup)
* [Create VM with Automated Patching](https://github.com/Azure/azure-quickstart-templates/tree/master/201-vm-sql-full-autopatching) * [Create VM with AKV Integration](https://github.com/Azure/azure-quickstart-templates/tree/master/201-vm-sql-full-keyvault) ### Quickstart template
-You can use the following quickstart template to deploy a SQL Server VM using storage optimization.
+You can use the following quickstart templates to deploy a SQL Server VM using storage optimization. A scripted deployment example follows the links.
* [Create VM with storage optimization](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.sqlvirtualmachine/sql-vm-new-storage/)
-* [Create VM using UltraSSD](https://github.com/Azure/azure-quickstart-templates/tree/master/101-sql-vm-new-storage-ultrassd)
+* [Create VM using UltraSSD](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.sqlvirtualmachine/sql-vm-new-storage-ultrassd)
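If you'd rather deploy one of these templates from the command line than the portal, the following PowerShell sketch shows the general pattern. The resource group, location, template folder, and parameter names are placeholders; check each template's README for its actual required parameters.

```powershell
# Sketch: deploy a quickstart template into a new resource group.
# The template URI and parameters below are placeholders for whichever template you choose.
$rg = "sql-vm-quickstart-rg"
New-AzResourceGroup -Name $rg -Location "eastus2"

New-AzResourceGroupDeployment -ResourceGroupName $rg `
    -TemplateUri "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/<template-folder>/azuredeploy.json" `
    -TemplateParameterObject @{ virtualMachineName = "sqlvm01"; adminUsername = "azureuser" }
```

Any required parameters you leave out of `-TemplateParameterObject` are prompted for interactively.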
## Existing VMs
For existing SQL Server VMs, you can modify some storage settings in the Azure p
* Other (non-SQL storage) * Available
-To modify the storage settings, select **Configure** under **Settings**.
+To modify the storage settings, select **Configure** under **Settings**.
![Screenshot that highlights the Configure option and the Storage Usage section.](./media/storage-configuration/sql-vm-storage-configuration-existing.png)
-You can modify the disk settings for the drives that were configured during the SQL Server VM creation process. Selecting **Extend drive** opens the drive modification page, allowing you to change the disk type, as well as add additional disks.
+You can modify the disk settings for the drives that were configured during the SQL Server VM creation process. Selecting **Extend drive** opens the drive modification page, allowing you to change the disk type, as well as add additional disks.
![Configure Storage for Existing SQL Server VM](./media/storage-configuration/sql-vm-storage-extend-drive.png)
The following table describes the three workload type options available and thei
> [!NOTE] > You can only specify the workload type when you provision a SQL Server virtual machine by selecting it in the storage configuration step.
-## Enable caching
+## Enable caching
-Change the caching policy at the disk level. You can do so using the Azure portal, [PowerShell](/powershell/module/az.compute/set-azvmdatadisk), or the [Azure CLI](/cli/azure/vm/disk).
+Change the caching policy at the disk level. You can do so using the Azure portal, [PowerShell](/powershell/module/az.compute/set-azvmdatadisk), or the [Azure CLI](/cli/azure/vm/disk).
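For example, here's a minimal PowerShell sketch of the same change; the resource group, VM, and disk names are placeholders, and the SQL Server service should be stopped first, just as in the portal steps that follow.

```powershell
# Sketch: switch an existing data disk to ReadOnly caching (placeholder names).
$rg = "<resource-group>"
$vm = Get-AzVM -ResourceGroupName $rg -Name "<sql-vm-name>"

# "<data-disk-name>" is the disk name shown under Disks for the VM.
Set-AzVMDataDisk -VM $vm -Name "<data-disk-name>" -Caching ReadOnly | Out-Null

Update-AzVM -ResourceGroupName $rg -VM $vm
```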
To change your caching policy in the Azure portal, follow these steps:
-1. Stop your SQL Server service.
-1. Sign into the [Azure portal](https://portal.azure.com).
-1. Navigate to your virtual machine, select **Disks** under **Settings**.
-
+1. Stop your SQL Server service.
+1. Sign into the [Azure portal](https://portal.azure.com).
+1. Navigate to your virtual machine, select **Disks** under **Settings**.
+ ![Screenshot showing the VM disk configuration blade in the Azure portal.](./media/storage-configuration/disk-in-portal.png)
-1. Choose the appropriate caching policy for your disk from the drop-down.
+1. Choose the appropriate caching policy for your disk from the drop-down.
![Screenshot showing the disk caching policy configuration in the Azure portal.](./media/storage-configuration/azure-disk-config.png)
-1. After the change takes effect, reboot the SQL Server VM and start the SQL Server service.
+1. After the change takes effect, reboot the SQL Server VM and start the SQL Server service.
## Enable Write Accelerator
-Write Acceleration is a disk feature that is only available for the M-Series Virtual Machines (VMs). The purpose of write acceleration is to improve the I/O latency of writes against Azure Premium Storage when you need single digit I/O latency due to high volume mission critical OLTP workloads or data warehouse environments.
+Write Accelerator is a disk feature that is only available for M-series virtual machines (VMs). The purpose of Write Accelerator is to improve the I/O latency of writes against Azure Premium Storage when you need single-digit I/O latency due to high-volume, mission-critical OLTP workloads or data warehouse environments.
-Stop all SQL Server activity and shut down the SQL Server service before making changes to your write acceleration policy.
+Stop all SQL Server activity and shut down the SQL Server service before making changes to your write acceleration policy.
-If your disks are striped, enable Write Acceleration for each disk individually, and your Azure VM should be shut down before making any changes.
+If your disks are striped, enable Write Acceleration for each disk individually, and your Azure VM should be shut down before making any changes.
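If you prefer to script the change rather than use the portal steps that follow, here's a minimal PowerShell sketch. It assumes the `-WriteAccelerator` switch available on `Set-AzVMDataDisk` in recent Az.Compute versions, and the names are placeholders.

```powershell
# Sketch: enable Write Accelerator on the log disk of an M-series VM (placeholder names).
# Stop SQL Server first, and shut down the VM if the disk is part of a striped volume.
$rg = "<resource-group>"
$vm = Get-AzVM -ResourceGroupName $rg -Name "<m-series-sql-vm>"

Set-AzVMDataDisk -VM $vm -Name "<log-disk-name>" -WriteAccelerator | Out-Null

Update-AzVM -ResourceGroupName $rg -VM $vm
```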
To enable Write Acceleration using the Azure portal, follow these steps:
-1. Stop your SQL Server service. If your disks are striped, shut down the virtual machine.
-1. Sign into the [Azure portal](https://portal.azure.com).
-1. Navigate to your virtual machine, select **Disks** under **Settings**.
-
+1. Stop your SQL Server service. If your disks are striped, shut down the virtual machine.
+1. Sign into the [Azure portal](https://portal.azure.com).
+1. Navigate to your virtual machine, select **Disks** under **Settings**.
+ ![Screenshot showing the VM disk configuration blade in the Azure portal.](./media/storage-configuration/disk-in-portal.png)
-1. Choose the cache option with **Write Accelerator** for your disk from the drop-down.
+1. Choose the cache option with **Write Accelerator** for your disk from the drop-down.
![Screenshot showing the write accelerator cache policy.](./media/storage-configuration/write-accelerator.png)
-1. After the change takes effect, start the virtual machine and SQL Server service.
+1. After the change takes effect, start the virtual machine and SQL Server service.
## Disk striping
For example, the following PowerShell creates a new storage pool with the interl
```powershell $PhysicalDisks = Get-PhysicalDisk | Where-Object {$_.FriendlyName -like "*2" -or $_.FriendlyName -like "*3"}
-
+ New-StoragePool -FriendlyName "DataFiles" -StorageSubsystemFriendlyName "Storage Spaces*" ` -PhysicalDisks $PhysicalDisks | New-VirtualDisk -FriendlyName "DataFiles" ` -Interleave 65536 -NumberOfColumns $PhysicalDisks.Count -ResiliencySettingName simple ` -UseMaximumSize | Initialize-Disk -PartitionStyle GPT -PassThru | New-Partition -AssignDriveLetter ` -UseMaximumSize | Format-Volume -FileSystem NTFS -NewFileSystemLabel "DataDisks" `
- -AllocationUnitSize 65536 -Confirm:$false
+ -AllocationUnitSize 65536 -Confirm:$false
``` * For Windows 2008 R2 or earlier, you can use dynamic disks (OS striped volumes) and the stripe size is always 64 KB. This option is deprecated as of Windows 8/Windows Server 2012. For information, see the support statement at [Virtual Disk Service is transitioning to Windows Storage Management API](/windows/win32/w8cookbook/vds-is-transitioning-to-wmiv2-based-windows-storage-management-api).
-
+ * If you are using [Storage Spaces Direct (S2D)](/windows-server/storage/storage-spaces/storage-spaces-direct-in-vm) with [SQL Server Failover Cluster Instances](./failover-cluster-instance-storage-spaces-direct-manually-configure.md), you must configure a single pool. Although different volumes can be created on that single pool, they will all share the same characteristics, such as the same caching policy.
-
+ * Determine the number of disks associated with your storage pool based on your load expectations. Keep in mind that different VM sizes allow different numbers of attached data disks. For more information, see [Sizes for virtual machines](../../../virtual-machines/sizes.md?toc=/azure/virtual-machines/windows/toc.json).
azure-vmware Configure Dhcp Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/configure-dhcp-azure-vmware-solution.md
Title: Configure and manage DHCP in Azure VMware Solution
description: Learn how to create and manage DHCP for your Azure VMware Solution private cloud. Previously updated : 05/14/2021 Last updated : 05/17/2021 # Configure and manage DHCP in Azure VMware Solution
Applications and workloads running in a private cloud environment require DHCP s
- If you're using a third-party external DHCP server in your network, you'll need to [create DHCP relay service](#create-dhcp-relay-service). When you create a relay to a DHCP server, whether using NSX-T or a third-party to host your DHCP server, you'll need to specify the DHCP IP address range. >[!IMPORTANT]
->DHCP does not work for virtual machines (VMs) on the VMware HCX L2 stretch network when the DHCP server is in the on-premises datacenter. NSX, by default, blocks all DHCP requests from traversing the L2 stretch. For the solution, see the [Configure DHCP on L2 stretched VMware HCX networks](#configure-dhcp-on-l2-stretched-vmware-hcx-networks) procedure.
+>DHCP does not work for virtual machines (VMs) on the VMware HCX L2 stretch network when the DHCP server is in the on-premises datacenter. NSX, by default, blocks all DHCP requests from traversing the L2 stretch. For the solution, see the [Configure DHCP on L2 stretched VMware HCX networks](configure-l2-stretched-vmware-hcx-networks.md) procedure.
## Create a DHCP server
If you want to use a third-party external DHCP server, you'll need to create a D
:::image type="content" source="./media/manage-dhcp/assigned-to-segment.png" alt-text="DHCP server pool assigned to segment" border="true":::
-## Configure DHCP on L2 stretched VMware HCX networks
-If you want to send DHCP requests from your Azure VMware Solution VMs to a non-NSX-T DHCP server, you'll create a new security segment profile.
->[!IMPORTANT]
->VMs on the same L2 segment that runs as DHCP servers are blocked from serving client requests. Because of this, it's important to follow the steps in this section.
-
-1. (Optional) If you need to locate the segment name of the L2 extension:
-
- 1. Sign in to your on-premises vCenter, and under **Home**, select **HCX**.
-
- 1. Select **Network Extension** under **Services**.
-
- 1. Select the network extension you want to support DHCP requests from Azure VMware Solution to on-premises.
-
- 1. Take note of the destination network name.
- :::image type="content" source="media/manage-dhcp/hcx-find-destination-network.png" alt-text="Screenshot of a network extension in VMware vSphere Client" lightbox="media/manage-dhcp/hcx-find-destination-network.png":::
-
-1. In the Azure VMware Solution NSX-T Manager, select **Networking** > **Segments** > **Segment Profiles**.
-
-1. Select **Add Segment Profile** and then **Segment Security**.
-
- :::image type="content" source="media/manage-dhcp/add-segment-profile.png" alt-text="Screenshot of how to add a segment profile in NSX-T" lightbox="media/manage-dhcp/add-segment-profile.png":::
-1. Provide a name and a tag, and then set the **BPDU Filter** toggle to ON and all the DHCP toggles to OFF.
-
- :::image type="content" source="media/manage-dhcp/add-segment-profile-bpdu-filter-dhcp-options.png" alt-text="Screenshot showing the BPDU Filter toggled on and the DHCP toggles off" lightbox="media/manage-dhcp/add-segment-profile-bpdu-filter-dhcp-options.png":::
-
- :::image type="content" source="media/manage-dhcp/edit-segment-security.png" alt-text="Screenshot of the Segment Security field" lightbox="media/manage-dhcp/edit-segment-security.png":::
+## Next steps
+If you want to send DHCP requests from your Azure VMware Solution VMs to a non-NSX-T DHCP server, you'll create a new security segment profile. For the solution, see the [Configure DHCP on L2 stretched VMware HCX networks](configure-l2-stretched-vmware-hcx-networks.md) procedure.
-## Next steps
-Learn more about [Host maintenance and lifecycle management](concepts-private-clouds-clusters.md#host-maintenance-and-lifecycle-management).
azure-vmware Configure L2 Stretched Vmware Hcx Networks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/configure-l2-stretched-vmware-hcx-networks.md
+
+ Title: Configure DHCP on L2 stretched VMware HCX networks
+description: Learn how to send DHCP requests from your Azure VMware Solution VMs to a non-NSX-T DHCP server.
+ Last updated : 05/17/2021++
+# Configure DHCP on L2 stretched VMware HCX networks
+If you want to send DHCP requests from your Azure VMware Solution VMs to a non-NSX-T DHCP server, you'll create a new security segment profile.
+
+>[!IMPORTANT]
+>VMs that run as DHCP servers on the same L2 segment are blocked from serving client requests. Because of this, it's important to follow the steps in this section.
+
+1. (Optional) If you need to locate the segment name of the L2 extension:
+
+ 1. Sign in to your on-premises vCenter, and under **Home**, select **HCX**.
+
+ 1. Select **Network Extension** under **Services**.
+
+ 1. Select the network extension you want to support DHCP requests from Azure VMware Solution to on-premises.
+
+ 1. Take note of the destination network name.
+
+ :::image type="content" source="media/manage-dhcp/hcx-find-destination-network.png" alt-text="Screenshot of a network extension in VMware vSphere Client" lightbox="media/manage-dhcp/hcx-find-destination-network.png":::
+
+1. In the Azure VMware Solution NSX-T Manager, select **Networking** > **Segments** > **Segment Profiles**.
+
+1. Select **Add Segment Profile** and then **Segment Security**.
+
+ :::image type="content" source="media/manage-dhcp/add-segment-profile.png" alt-text="Screenshot of how to add a segment profile in NSX-T" lightbox="media/manage-dhcp/add-segment-profile.png":::
+1. Provide a name and a tag, and then set the **BPDU Filter** toggle to ON and all the DHCP toggles to OFF.
+
+ :::image type="content" source="media/manage-dhcp/add-segment-profile-bpdu-filter-dhcp-options.png" alt-text="Screenshot showing the BPDU Filter toggled on and the DHCP toggles off" lightbox="media/manage-dhcp/add-segment-profile-bpdu-filter-dhcp-options.png":::
+
+ :::image type="content" source="media/manage-dhcp/edit-segment-security.png" alt-text="Screenshot of the Segment Security field" lightbox="media/manage-dhcp/edit-segment-security.png":::
azure-vmware Deploy Traffic Manager Balance Workloads https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/deploy-traffic-manager-balance-workloads.md
The following steps verify the configuration of the NSX-T segment in the Azure V
3. Provide the following information and then select **Create**: - Profile name
- - Routing method (use [weighted](../traffic-manager/traffic-manager-routing-methods.md)
+ - Routing method (use [weighted](../traffic-manager/traffic-manager-routing-methods.md))
- Subscription - Resource group
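As a scripted alternative for creating the profile, here's a minimal PowerShell sketch using weighted routing; the profile name, DNS prefix, and resource group are placeholders, and you would still add the endpoints afterward.

```powershell
# Sketch: create a Traffic Manager profile that uses weighted routing (placeholder names).
New-AzTrafficManagerProfile -Name "avs-web-app-tm" -ResourceGroupName "<resource-group>" `
    -TrafficRoutingMethod Weighted -RelativeDnsName "avs-web-app-demo" -Ttl 30 `
    -MonitorProtocol HTTP -MonitorPort 80 -MonitorPath "/"
```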
backup Backup Azure Backup Server Vmware https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-backup-server-vmware.md
Add the vCenter Server to Azure Backup Server.
6. Select **Add** to add the VMware server to the servers list. Then select **Next**.
- ![Add VMWare server and credential](./media/backup-azure-backup-server-vmware/add-vmware-server-credentials.png)
+ ![Add VMware server and credential](./media/backup-azure-backup-server-vmware/add-vmware-server-credentials.png)
7. In the **Summary** page, select **Add** to add the VMware server to Azure Backup Server. The new server is added immediately, no agent is needed on the VMware server.
Add VMware VMs for backup. Protection groups gather multiple VMs and apply the s
>[!NOTE] > This feature is applicable for MABS V3 UR1.
-With earlier versions of MABS, parallel backups were performed only across protection groups. With MABS V3 UR1, all your VMWare VMs backups within a single protection group are parallel, leading to faster VM backups. All VMWare delta replication jobs run in parallel. By default, the number of jobs to run in parallel is set to 8.
+With earlier versions of MABS, parallel backups were performed only across protection groups. With MABS V3 UR1, all your VMware VM backups within a single protection group run in parallel, leading to faster VM backups. All VMware delta replication jobs run in parallel. By default, the number of jobs to run in parallel is set to 8.
You can modify the number of jobs by using the registry key as shown below (not present by default, you need to add it):
-**Key Path**: `Software\Microsoft\Microsoft Data Protection Manager\Configuration\ MaxParallelIncrementalJobs\VMWare`<BR>
+**Key Path**: `Software\Microsoft\Microsoft Data Protection Manager\Configuration\ MaxParallelIncrementalJobs\VMware`<BR>
**Key Type**: DWORD (32-bit) value. > [!NOTE]
-> You can modify the number of jobs to a higher value. If you set the jobs number to 1, replication jobs run serially. To increase the number to a higher value, you must consider the VMWare performance. Consider the number of resources in use and additional usage required on VMWare vSphere Server, and determine the number of delta replication jobs to run in parallel. Also, this change will affect only the newly created protection groups. For existing protection groups you must temporarily add another VM to the protection group. This should update the protection group configuration accordingly. You can remove this VM from the protection group after the procedure is completed.
+> You can modify the number of jobs to a higher value. If you set the number of jobs to 1, replication jobs run serially. To increase the number to a higher value, you must consider the VMware performance. Consider the number of resources in use and the additional usage required on the VMware vSphere Server, and determine the number of delta replication jobs to run in parallel. Also, this change will affect only newly created protection groups. For existing protection groups, you must temporarily add another VM to the protection group. This should update the protection group configuration accordingly. You can remove this VM from the protection group after the procedure is completed.
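A minimal PowerShell sketch for creating this value is shown below. It assumes the key lives under HKEY_LOCAL_MACHINE and that **VMware** is the DWORD value name beneath the `MaxParallelIncrementalJobs` key; the value data `4` is only an illustration.

```powershell
# Sketch: run on the MABS server to create the MaxParallelIncrementalJobs\VMware value.
# The HKLM hive and the value data (4) are assumptions for illustration.
$path = "HKLM:\SOFTWARE\Microsoft\Microsoft Data Protection Manager\Configuration\MaxParallelIncrementalJobs"

if (-not (Test-Path $path)) {
    New-Item -Path $path -Force | Out-Null
}

New-ItemProperty -Path $path -Name "VMware" -PropertyType DWord -Value 4 -Force | Out-Null
```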
-## VMWare vSphere 6.7
+## VMware vSphere 6.7
To back up vSphere 6.7, do the following: - Enable TLS 1.2 on the MABS Server >[!NOTE]
->VMWare 6.7 onwards had TLS enabled as communication protocol.
+>VMware 6.7 onwards has TLS enabled as the communication protocol.
- Set the registry keys as follows:
With MABS V3 UR1, you can exclude the specific disk from VMware VM backup. The c
To configure the disk exclusion, follow the steps below:
-### Identify the VMWare VM and disk details to be excluded
+### Identify the VMware VM and disk details to be excluded
1. On the VMware console, go to VM settings for which you want to exclude the disk. 2. Select the disk that you want to exclude and note the path for that disk.
backup Backup Rm Template Samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-rm-template-samples.md
Title: Azure Resource Manager templates
+ Title: Azure Resource Manager templates
description: Azure Resource Manager templates for use with Recovery Services vaults and Azure Backup features Last updated 01/31/2019
The following table includes links to Azure Resource Manager templates for use w
| [Back up Resource Manager VMs](https://github.com/Azure/azure-quickstart-templates/tree/master/101-recovery-services-backup-vms) | Use the existing Recovery Services vault and Backup policy to back up Resource Manager-virtual machines in the same resource group.| | [Back up IaaS VMs to Recovery Services vault](https://github.com/Azure/azure-quickstart-templates/tree/master/201-recovery-services-backup-classic-resource-manager-vms) | Template to back up classic and Resource Manager-virtual machines. | | [Create Weekly Backup policy for IaaS VMs](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.recoveryservices/recovery-services-weekly-backup-policy-create) | Template creates Recovery Services vault and a weekly backup policy, which is used to back up classic and Resource Manager-virtual machines.|
-| [Create Daily Backup policy for IaaS VMs](https://github.com/Azure/azure-quickstart-templates/tree/master/101-recovery-services-daily-backup-policy-create) | Template creates Recovery Services vault and a daily backup policy, which is used to back up classic and Resource Manager-virtual machines.|
+| [Create Daily Backup policy for IaaS VMs](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.recoveryservices/recovery-services-daily-backup-policy-create) | Template creates Recovery Services vault and a daily backup policy, which is used to back up classic and Resource Manager-virtual machines.|
| [Deploy Windows Server VM with backup enabled](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.recoveryservices/recovery-services-create-vm-and-configure-backup) | Template creates a Windows Server VM and Recovery Services vault with the default backup policy enabled.| |**Monitor Backup jobs** | |
-| [Use Azure Monitor logs with Azure Backup](https://github.com/Azure/azure-quickstart-templates/tree/master/101-backup-oms-monitoring) | Template deploys Azure Monitor logs with Azure Backup, which allows you to monitor backup and restore jobs, backup alerts, and the Cloud storage used in your Recovery Services vaults.|
+| [Use Azure Monitor logs with Azure Backup](https://github.com/Azure/azure-quickstart-templates/tree/master/101-backup-oms-monitoring) | Template deploys Azure Monitor logs with Azure Backup, which allows you to monitor backup and restore jobs, backup alerts, and the Cloud storage used in your Recovery Services vaults.|
|**Back up SQL Server in Azure VM** | | | [Back up SQL Server in Azure VM](https://github.com/Azure/azure-quickstart-templates/tree/master/101-recovery-services-vm-workload-backup) | Template creates a Recovery Services vault and Workload specific Backup Policy. It Registers the VM with Azure Backup service and Configures Protection on that VM. Currently, it only works for SQL Gallery images. | |**Back up Azure file shares** | |
cognitive-services Identify Anomalies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Anomaly-Detector/How-to/identify-anomalies.md
By sending your time series data at once, the API will generate a model using th
To continuously detect anomalies on streaming data, use the following request URI with your latest data point:
-`/timeseries/last/detect'`.
+`/timeseries/last/detect`.
By sending new data points as you generate them, you can monitor your data in real time. A model will be generated with the data points you send, and the API will determine if the latest point in the time series is an anomaly.
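For example, here's a minimal PowerShell sketch of such a request. The resource name, key, and data values are placeholders, and the series is padded to the minimum of 12 points that the API expects.

```powershell
# Sketch: ask whether the latest point of a short series is an anomaly (placeholder endpoint/key).
$endpoint = "https://<your-resource-name>.cognitiveservices.azure.com"
$key      = "<your-anomaly-detector-key>"

# Build a small illustrative series; the API requires at least 12 points.
$series = 0..12 | ForEach-Object {
    @{
        timestamp = ([datetime]"2021-05-17T00:00:00Z").ToUniversalTime().AddMinutes($_).ToString("yyyy-MM-ddTHH:mm:ssZ")
        value     = 1.0 + ($_ * 0.1)
    }
}

$body = @{ granularity = "minutely"; series = $series } | ConvertTo-Json -Depth 5

Invoke-RestMethod -Method Post `
    -Uri "$endpoint/anomalydetector/v1.0/timeseries/last/detect" `
    -Headers @{ "Ocp-Apim-Subscription-Key" = $key } `
    -ContentType "application/json" `
    -Body $body
```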
cognitive-services Best Practices Multivariate https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Anomaly-Detector/concepts/best-practices-multivariate.md
keywords: anomaly detection, machine learning, algorithms
This article will provide guidance around recommended practices to follow when using the multivariate Anomaly Detector APIs.
-## How to prepare data for training
+## Training data
-To use the Anomaly Detector multivariate APIs, we need to train our own model before using detection. Data used for training is a batch of time series, each time series should be in CSV format with two columns, timestamp and value. All of the time series should be zipped into one zip file and uploaded to Azure Blob storage. By default the file name will be used to represent the variable for the time series. Alternatively, an extra meta.json file can be included in the zip file if you wish the name of the variable to be different from the .zip file name. Once we generate a [blob SAS (Shared access signatures) URL](../../../storage/common/storage-sas-overview.md), we can use it for training.
+### Data schema
+To use the Anomaly Detector multivariate APIs, you need to first train your own models. Training data is a set of multiple time series that meet the following requirements:
-## Data quality and quantity
+Each time series should be a CSV file with two (and only two) columns, **"timestamp"** and **"value"** (all in lowercase) as the header row. The "timestamp" values should conform to ISO 8601; the "value" could be integers or decimals with any number of decimal places. For example:
-The Anomaly Detector multivariate API uses state-of-the-art deep neural networks to learn normal patterns from historical data and predicts whether future values are anomalies. The quality and quantity of training data is important to train an optimal model. As the model learns normal patterns from historical data, the training data should represent the overall normal state of the system. It is hard for the model to learn these types of patterns if the training data is full of anomalies. Also, the model has millions of parameters and it needs a minimum number of data points to learn an optimal set of parameters. The general rule is that you need to provide at least 15,000 data points per variable to properly train the model. The more data, the better the model.
+|timestamp | value|
+|-|-|
+|2019-04-01T00:00:00Z| 5|
+|2019-04-01T00:01:00Z| 3.6|
+|2019-04-01T00:02:00Z| 4|
+|`...`| `...` |
+
+Each CSV file should be named after a different variable that will be used for model training. For example, "temperature.csv" and "humidity.csv". All the CSV files should be zipped into one zip file without any subfolders. The zip file can have whatever name you want. The zip file should be uploaded to Azure Blob storage. Once you generate the [blob SAS (Shared access signatures) URL](../../../storage/common/storage-sas-overview.md) for the zip file, it can be used for training. Refer to this document for how to generate SAS URLs from Azure Blob Storage.
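As an illustration, here's a minimal PowerShell sketch that packages per-variable CSV files and produces a read-only SAS URL for training. The storage account, container, and paths are placeholders, and the Az.Storage module is assumed.

```powershell
# Sketch: zip per-variable CSVs (no subfolders) and generate a read-only SAS URL (placeholder names).
Compress-Archive -Path .\variables\*.csv -DestinationPath .\training-data.zip -Force

$ctx = New-AzStorageContext -StorageAccountName "<storage-account>" -StorageAccountKey "<account-key>"
Set-AzStorageBlobContent -File .\training-data.zip -Container "training" -Blob "training-data.zip" `
    -Context $ctx -Force | Out-Null

# Read-only SAS valid for one day; pass the resulting URL as the training data source.
New-AzStorageBlobSASToken -Container "training" -Blob "training-data.zip" `
    -Permission r -ExpiryTime (Get-Date).AddDays(1) -FullUri -Context $ctx
```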
-It is common that many time series have missing values, which may affect the performance of trained models. The missing ratio of each time series should be controlled under a reasonable value. A time series having 90% values missing provides little information about normal patterns of the system. Even worse, the model may consider filled values as normal patterns, which are usually straight segments or constant values. When new data flows in, the data might be detected as anomalies.
+### Data quality
+- As the model learns normal patterns from historical data, the training data should **represent the overall normal state of the system**. It is hard for the model to learn these types of patterns if the training data is full of anomalies.
+- The model has millions of parameters and it needs a minimum number of data points to learn an optimal set of parameters. The general rule is that you need to provide **at least 15,000 data points per variable** to properly train the model. The more data, the better the model.
+- In general, the **missing value ratio of training data should be under 20%**. Too much missing data may end up with automatically filled values (usually straight segments or constant values) being learnt as normal patterns. That may result in real data points being detected as anomalies.
-A recommended max missing value threshold is 20%, but a higher threshold might be acceptable under some circumstances. For example, if you have a time series with one-minute granularity and another time series with hourly granularity. Each hour there are 60 data points per minute of data and 1 data point for hourly data, which means that the missing ratio for hourly data is 98.33%. However, it is fine to fill the hourly data with the only value if the hourly time series does not typically fluctuate too much.
+ However, there are cases when a high missing ratio is acceptable. For example, suppose you have two time series in a group that uses `Outer` mode to align timestamps: one has one-minute granularity and the other has hourly granularity. The hourly time series then has, by nature, at least 59 / 60 = 98.33% missing data points. In such cases, it's fine to fill the hourly time series with the only value available, provided it doesn't typically fluctuate too much.
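Once the zip file and SAS URL are ready, training is started with a REST call against the preview API. The sketch below uses PowerShell; the endpoint, key, time range, and the exact request body fields (`source`, `startTime`, `endTime`, `slidingWindow`, `alignPolicy`) are based on the v1.1-preview API and should be checked against the current API reference before use.

```powershell
# Sketch: submit a multivariate model training request (placeholder endpoint, key, and SAS URL).
$endpoint = "https://<your-resource-name>.cognitiveservices.azure.com"
$key      = "<your-anomaly-detector-key>"

$body = @{
    source        = "<SAS URL of the training zip file>"
    startTime     = "2021-01-01T00:00:00Z"
    endTime       = "2021-01-02T12:00:00Z"
    slidingWindow = 200
    alignPolicy   = @{ alignMode = "Outer"; fillNAMethod = "Linear"; paddingValue = 0 }
} | ConvertTo-Json -Depth 5

Invoke-RestMethod -Method Post `
    -Uri "$endpoint/anomalydetector/v1.1-preview/multivariate/models" `
    -Headers @{ "Ocp-Apim-Subscription-Key" = $key } `
    -ContentType "application/json" `
    -Body $body
```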
## Parameters
Multivariate Anomaly Detection, as an unsupervised model. The best way to evalua
## Next steps - [Quickstarts](../quickstarts/client-libraries-multivariate.md).-- [Learn about the underlying algorithms that power Anomaly Detector Multivariate](https://arxiv.org/abs/2009.02040)
+- [Learn about the underlying algorithms that power Anomaly Detector Multivariate](https://arxiv.org/abs/2009.02040)
cognitive-services Overview Multivariate https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Anomaly-Detector/overview-multivariate.md
keywords: anomaly detection, machine learning, algorithms
# Multivariate time series Anomaly Detection (public preview)
-The first release of the Azure Cognitive Services Anomaly Detector allowed you to build metrics monitoring solutions using the easy-to-use [univariate time series Anomaly Detector APIs](overview.md). By allowing analysis of time series individually, Anomaly Detector univariate provides simplicity and scalability.
- The new **multivariate anomaly detection** APIs further enable developers by easily integrating advanced AI for detecting anomalies from groups of metrics, without the need for machine learning knowledge or labeled data. Dependencies and inter-correlations between up to 300 different signals are now automatically counted as key factors. This new capability helps you to proactively protect your complex systems such as software applications, servers, factory machines, spacecraft, or even your business, from failures.
+![Multiple time series line graphs for variables of: vibration, temperature, pressure, velocity, rotation speed with anomalies highlighted in orange](./media/multivariate-graph.png)
+ Imagine 20 sensors from an auto engine generating 20 different signals like vibration, temperature, fuel pressure, etc. The readings of those signals individually may not tell you much about system level issues, but together they can represent the health of the engine. When the interaction of those signals deviates outside the usual range, the multivariate anomaly detection feature can sense the anomaly like a seasoned expert. The underlying AI models are trained and customized using your data such that it understands the unique needs of your business. With the new APIs in Anomaly Detector, developers can now easily integrate the multivariate time series anomaly detection capabilities into predictive maintenance solutions, AIOps monitoring solutions for complex enterprise software, or business intelligence tools. ## When to use **multivariate** versus **univariate**
-Use univariate anomaly detection APIs, if your goal is to detect anomalies out of a normal pattern on each individual time series purely based on their own historical data. Examples: you want to detect daily revenue anomalies based on revenue data itself, or you want to detect a CPU spike purely based on CPU data.
-- `POST /anomalydetector/v1.0/timeseries/last/detect`-- `POST /anomalydetector/v1.0/timeseries/batch/detect`-- `POST /anomalydetector/v1.0/timeseries/changepoint/detect`
+If your goal is to detect anomalies out of a normal pattern on each individual time series purely based on their own historical data, use univariate anomaly detection APIs. For example, you want to detect daily revenue anomalies based on revenue data itself, or you want to detect a CPU spike purely based on CPU data.
-![Time series line graph with a single variable's fluctuating values captured by a blue line with anomalies identified by orange circles](./media/anomaly_detection2.png)
+If your goal is to detect system-level anomalies from a group of time series data, use multivariate anomaly detection APIs. This is particularly useful when any individual time series won't tell you much, and you have to look at all signals (a group of time series) holistically to determine a system-level issue. For example, you have an expensive physical asset like an aircraft, equipment on an oil rig, or a satellite. Each of these assets has tens or hundreds of different types of sensors. You would have to look at all those time series signals from those sensors to decide whether there is a system-level issue.
-Use multivariate anomaly detection APIs below, if your goal is to detect system level anomalies from a group of time series data. Particularly, when any individual time series won't tell you much, and you have to look at all signals (a group of time series) holistically to determine a system level issue. Example: you have an expensive physical asset like aircraft, equipment on an oil rig, or a satellite. Each of these assets has tens or hundreds of different types of sensors. You would have to look at all those time series signals from those sensors to decide whether there is system level issue.
+## Notebook
-- `POST /anomalydetector/v1.1-preview/multivariate/models`-- `GET /anomalydetector/v1.1-preview/multivariate/models[?$skip][&$top]`-- `GET /anomalydetector/v1.1-preview/multivariate/models/{modelId}`-- `POST/anomalydetector/v1.1-preview/multivariate/models/{modelId}/detect`-- `GET /anomalydetector/v1.1-preview/multivariate/results/{resultId}`-- `DELETE /anomalydetector/v1.1-preview/multivariate/models/{modelId}`-- `GET /anomalydetector/v1.1-preview/multivariate/models/{modelId}/export`
+To learn how to call the Anomaly Detector API (multivariate), try this [Notebook](https://github.com/Azure-Samples/AnomalyDetector/blob/master/ipython-notebook/Multivariate%20API%20Demo%20Notebook.ipynb). This Jupyter Notebook shows you how to send an API request and visualize the result.
+
+To run the Notebook, you should get a valid Anomaly Detector API **subscription key** and an **API endpoint**. In the notebook, add your valid Anomaly Detector API subscription key to the `subscription_key` variable, and change the `endpoint` variable to your endpoint.
-![Multiple time series line graphs for variables of: vibration, temperature, pressure, velocity, rotation speed with anomalies highlighted in orange](./media/multivariate-graph.png)
## Region support
The public preview of Anomaly Detector multivariate is currently available in th
## Algorithms -- [Multivariate time series Anomaly Detection via Graph Attention Network](https://arxiv.org/abs/2009.02040)
+See the following technical documents for information about the algorithms used:
+
+* Blog: [Introducing Multivariate Anomaly Detection](https://techcommunity.microsoft.com/t5/azure-ai/introducing-multivariate-anomaly-detection/ba-p/2260679)
+* Paper: [Multivariate time series Anomaly Detection via Graph Attention Network](https://arxiv.org/abs/2009.02040)
++
+> [!VIDEO https://www.youtube.com/watch?v=FwuI02edclQ]
+ ## Join the Anomaly Detector community
The public preview of Anomaly Detector multivariate is currently available in th
## Next steps - [Quickstarts](./quickstarts/client-libraries-multivariate.md).-- [Best Practices](./concepts/best-practices-multivariate.md): This article is about recommended patterns to use with the multivariate APIs.
+- [Best Practices](./concepts/best-practices-multivariate.md): This article is about recommended patterns to use with the multivariate APIs.
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Anomaly-Detector/overview.md
To run the demo, you need to create an Anomaly Detector resource and get the API
To learn how to call the Anomaly Detector API, try this [Notebook](https://aka.ms/adNotebook). This Jupyter Notebook shows you how to send an API request and visualize the result.
-To run the Notebook, complete the following steps:
-
-1. Get a valid Anomaly Detector API subscription key and an API endpoint. The section below has instructions for signing up.
-1. Sign in, and select Clone, in the upper right corner.
-1. Uncheck the "public" option in the dialog box before completing the clone operation, otherwise your notebook, including any subscription keys, will be public.
-1. Select **Run on free compute**
-1. Select one of the notebooks.
-1. Add your valid Anomaly Detector API subscription key to the `subscription_key` variable.
-1. Change the `endpoint` variable to your endpoint. For example: `https://westus2.api.cognitive.microsoft.com/anomalydetector/v1.0/timeseries/last/detect`
-1. On the top menu bar, select **Cell**, then **Run All**.
+To run the Notebook, you should get a valid Anomaly Detector API **subscription key** and an **API endpoint**. In the notebook, add your valid Anomaly Detector API subscription key to the `subscription_key` variable, and change the `endpoint` variable to your endpoint.
## Workflow
No customer configuration is necessary to enable zone-resiliency. Zone-resilienc
* [Quickstart: Detect anomalies in your time series data using the Anomaly Detector](quickstarts/client-libraries.md) * The Anomaly Detector API [online demo](https://github.com/Azure-Samples/AnomalyDetector/tree/master/ipython-notebook)
-* The Anomaly Detector [REST API reference](https://aka.ms/anomaly-detector-rest-api-ref)
+* The Anomaly Detector [REST API reference](https://aka.ms/anomaly-detector-rest-api-ref)
cognitive-services Spatial Analysis Web App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/spatial-analysis-web-app.md
Last updated 01/12/2021
-# How to: Deploy a People Counting web application
+# How to: Deploy a Spatial Analysis web application
-Use this article to learn how to integrate Spatial Analysis into a web app that understands the movement of people, and monitors the number of people occupying a physical space.
+Use this article to learn how to deploy a web app that collects Spatial Analysis data (insights) from IoT Hub and visualizes it. This can have useful applications across a wide range of scenarios and industries. For example, if a company wants to optimize the use of its real estate space, it can quickly create a solution with different scenarios.
In this tutorial you will learn how to:
In this tutorial you will learn how to:
* Configure the IoT Hub connection in the Web Application * Deploy and test the Web Application
+This app showcases the following scenarios:
+
+* Count of people entering and exiting a space/store
+* Count of people entering and exiting a checkout area/zone and the time spent in the checkout line (dwell time)
+* Count of people wearing a face mask
+* Count of people violating social distancing guidelines
+ ## Prerequisites * Azure subscription - [create one for free](https://azure.microsoft.com/free/cognitive-services/)
az iot hub device-identity create --hub-name "<IoT Hub Name>" --device-id "<Edge
### Deploy the container on Azure IoT Edge on the host computer
-Deploy the Spatial Analysis container as an IoT Module on the host computer, using the Azure CLI. The deployment process requires a deployment manifest file which outlines the required containers, variables, and configurations for your deployment. You can find a sample [Azure Stack Edge specific deployment manifest](https://go.microsoft.com/fwlink/?linkid=2142179), [non-Azure Stack Edge specific deployment manifest](https://go.microsoft.com/fwlink/?linkid=2152189), and [Azure VM with GPU specific deployment manifest](https://go.microsoft.com/fwlink/?linkid=2152189) on GitHub, which include a basic deployment configuration for the *spatial-analysis* container.
-
-Alternatively, you can use the Azure IoT extensions for Visual Studio Code to perform operations with your IoT hub. Go to [Deploy Azure IoT Edge Modules from Visual Studio Code](../../iot-edge/how-to-deploy-modules-vscode.md) to learn more.
-
-> [!NOTE]
-> The *spatial-analysis-telegraf* and *spatial-analysis-diagnostics* containers are optional. You may decide to remove them from the *DeploymentManifest.json* file. For more information see the [telemetry and troubleshooting](./spatial-analysis-logging.md) article. You can find three sample *DeploymentManifest.json* files on GitHub, for [Azure Stack Edge devices](https://go.microsoft.com/fwlink/?linkid=2142179), a [Desktop machine](https://go.microsoft.com/fwlink/?linkid=2152189), or an [Azure VM with GPU](https://go.microsoft.com/fwlink/?linkid=2152189)
+The next step is to deploy the **spatial analysis** container as an IoT Module on the host computer using the Azure CLI. The deployment process requires a Deployment Manifest file that outlines the required containers, variables, and configurations for your deployment. A sample Deployment Manifest can be found at [DeploymentManifest.json](https://github.com/Azure-Samples/cognitive-services-spatial-analysis/blob/main/deployment.json), which includes pre-built configurations for all scenarios.
### Set environment variables
-Most of the **Environment Variables** for the IoT Edge Module are already set in the sample *DeploymentManifest.json* files linked above. In the file, search for the `BILLING_ENDPOINT` and `API_KEY` environment variables, shown below. Replace the values with the Endpoint URI and the API Key that you created earlier. Ensure that the EULA value is set to "accept".
+Most of the **Environment Variables** for the IoT Edge Module are already set in the sample *DeploymentManifest.json* file linked above. In the file, search for the `ENDPOINT` and `APIKEY` environment variables, shown below. Replace the values with the Endpoint URI and the API Key that you created earlier. Ensure that the EULA value is set to "accept".
```json "EULA": { "value": "accept" },-
-"BILLING_ENDPOINT":{
+"ENDPOINT":{
"value": "<Use a key from your Computer Vision resource>" },
-"API_KEY":{
+"APIKEY":{
"value": "<Use the endpoint from your Computer Vision resource>" } ``` ### Configure the operation parameters
-Now that the initial configuration of the *spatial-analysis* container is complete, the next step is to configure the operations parameters and add them to the deployment.
+If you are using the sample [DeploymentManifest.json](https://github.com/Azure-Samples/cognitive-services-spatial-analysis/blob/main/deployment.json), which already has all of the required configurations (operations, recorded video file URLs, zones, and so on), then you can skip to the **Execute the deployment** section.
-The first step is to update the sample deployment manifest linked above and configure the operationId for `cognitiveservices.vision.spatialanalysis-personcount` as shown below:
+Now that the initial configuration of the spatial analysis container is complete, the next step is to configure the operations parameters and add them to the deployment.
+The first step is to update the sample [DeploymentManifest.json](https://github.com/Azure-Samples/cognitive-services-spatial-analysis/blob/main/deployment.json) and configure the desired operation. For example, the configuration for `cognitiveservices.vision.spatialanalysis-personcount` is shown below:
```json "personcount": {
Locate the *Runtime Status* in the IoT Edge Module Settings for the spatial-anal
![Example deployment verification](./media/spatial-analysis/deployment-verification.png)
-At this point, the spatial-analysis container is running the operation. It emits AI insights for the `cognitiveservices.vision.spatialanalysis-personcount` operation and it routes these insights as telemetry to your Azure IoT Hub instance. To configure additional cameras, you can update the deployment manifest file and execute the deployment again.
+At this point, the spatial analysis container is running the configured operations. It emits AI insights for those operations and routes these insights as telemetry to your Azure IoT Hub instance. To configure additional cameras, you can update the deployment manifest file and execute the deployment again.
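If you're scripting that re-deployment, a minimal Azure CLI sketch (run from a PowerShell session) looks like the following; the hub name, device ID, and manifest path are placeholders, and the `azure-iot` CLI extension is assumed to be installed.

```powershell
# Sketch: push the edited deployment manifest to the IoT Edge device (placeholder names).
az extension add --name azure-iot   # one-time setup if the extension isn't installed

az iot edge set-modules `
    --hub-name "<iot-hub-name>" `
    --device-id "<edge-device-id>" `
    --content ".\DeploymentManifest.json"
```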
-## Person Counting Web Application
+## Spatial Analysis Web Application
-This person counting web application enables you to quickly configure a sample web app and host it in your Azure environment.
+The Spatial Analysis Web Application enables developers to quickly configure a sample web app, host it in their Azure environment, and use the app to validate E2E events.
-### Get the person counting app container
+## Build Docker Image
-A container form of this app available on the Azure Container Registry. Use the following docker pull command to download it. Contact Microsoft at projectarchon@microsoft.com for the access token.
+Follow the [guide](https://github.com/Azure-Samples/cognitive-services-spatial-analysis/blob/main/README.md#docker-image) to build and push the image to an Azure Container Registry in your subscription.
-```bash
-docker login rtvsofficial.azurecr.io -u <token name> -p <password>
-docker pull rtvsofficial.azurecr.io/acceleratorapp.personcount:1.0
-```
-
-Push the container to your Azure Container Registry (ACR).
-
-```bash
-az acr login --name <your ACR name>
-
-docker tag rtvsofficial.azurecr.io/acceleratorapp.personcount:1.0 [desired local image name]
-
-docker push [desired local image name]
-```
+## Setup Steps
To install the container, create a new Azure App Service and fill in the required parameters. Then go to the **Docker** Tab and select **Single Container**, then **Azure Container Registry**. Use your instance of Azure Container Registry where you pushed the image above.
cognitive-services Custom Keyword Basics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/custom-keyword-basics.md
Before you can use a custom keyword, you need to create a keyword using the [Cus
1. At the [Custom Keyword](https://aka.ms/sdsdk-wakewordportal) page, create a **New project**.
-1. Enter a **Name**, an optional **Description**, and select the language. You need one project per language, and support is currently limited to the `en-US` language.
+1. Enter a **Name**, an optional **Description**, and select the language. You need one project per language, and support is currently limited to English (United States) and Chinese (Mandarin, Simplified).
![Describe your keyword project](media/custom-keyword/custom-kws-portal-new-project.png)
Before you can use a custom keyword, you need to create a keyword using the [Cus
1. To create a new keyword model, click **Train model**.
-1. Enter a **Name** for the model, an optional **Description**, and the **Keyword** of your choice, then click **Next**. See the [guidelines](./custom-keyword-overview.md#choose-an-effective-keyword) on choosing an effective keyword.
+1. Enter a **Name** for the model, an optional **Description**, and the **Keyword** of your choice, then click **Next**. See the [guidelines](keyword-recognition-guidelines.md#choosing-an-effective-keyword) on choosing an effective keyword.
![Enter your keyword](media/custom-keyword/custom-kws-portal-new-model.png)
Before you can use a custom keyword, you need to create a keyword using the [Cus
1. The downloaded file is a `.zip` archive. Extract the archive, and you see a file with the `.table` extension. This is the file you use with the SDK in the next section, so make sure to note its path. The file name mirrors your keyword name; for example, a keyword **Activate device** has the file name `Activate_device.table`.
-## Use a keyword model with the SDK
+## Use a keyword model with the Speech SDK
::: zone pivot="programming-language-csharp" [!INCLUDE [C# Basics include](includes/how-to/keyword-recognition/keyword-basics-csharp.md)]
Before you can use a custom keyword, you need to create a keyword using the [Cus
## Next steps
-Test your custom keyword with the [Speech Devices SDK Quickstart](./speech-devices-sdk-quickstart.md?pivots=platform-android).
+> [!div class="nextstepaction"]
+> [Get the Speech SDK](speech-sdk.md)
cognitive-services Custom Keyword Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/custom-keyword-overview.md
- Title: Custom Keywords - Speech service-
-description: An overview of the features, capabilities, and restrictions for custom keywords using the Speech Software Development Kit (SDK).
------ Previously updated : 04/06/2020----
-# What is a keyword?
-
-A keyword is a word or short phrase which allows your product to be voice activated. For example, "Hey Cortana" is the keyword for the Cortana assistant. Voice activation allows your users to start interacting with your product completely hands-free by simply speaking the keyword. As your product continuously listens for the keyword, all audio is processed locally on the user's device until a detection occurs to ensure their data stays as private as possible.
-
-## Core features of Custom Keyword
-
-With Custom Keyword's customization, performance, and integration features, you can tailor voice activation to best suit your product's vision and users' needs.
-
-| Feature | Description |
-|-|-|
-| Keyword customization | As an extension of your brand, a keyword reinforces the equity you've built with your customers. The Custom Keyword portal on Speech Studio allows you to specify any word or short phrase that best represents your brand. You can further personalize your keyword by choosing the right pronunciations, which will be honored by the keyword model generated.
-| Keyword verification | When there's high confidence in the keyword being detected locally, audio is sent to the cloud for further verification that a user said the keyword. Keyword verification provides an additional layer of security by reducing the impact of an incorrect local detection and protecting user privacy.
-| Voice assistant & Speech SDK integration | Keywords generated from the Custom Keyword on Speech Studio can be easily integrated within your device or application via the Speech SDK. Simply point the SDK to the keyword model provided by Speech Studio and your product will be voice activated, backed by keyword verification. You can complete your product's voice experiences by building your own [voice assistant](voice-assistants.md).
-
-## Get started with custom keywords
-
-* See [custom keyword basics](custom-keyword-basics.md) for basic usage and design patterns.
-* How to [voice-activate your product with the Speech SDK, using C#](tutorial-voice-enable-your-bot-speech-sdk.md)
-
-## Choose an effective keyword
-
-Creating an effective keyword is vital to ensuring your device will consistently and accurately respond. Customizing your keyword is an effective way to differentiate your device and strengthen your branding. Consider the following guidelines when you choose a keyword:
-
-> [!div class="checklist"]
-> * Your keyword should be an English word or phrase.
-> * It should take no longer than two seconds to say.
-> * Words of 4 to 7 syllables work best. For example, "Hey, Computer" is a good keyword. Just "Hey" is a poor one.
-> * Keywords should follow common English pronunciation rules.
-> * A unique or even a made-up word that follows common English pronunciation rules might reduce false positives. For example, "computerama" might be a good keyword.
-> * Do not choose a common word. For example, "eat" and "go" are words that people say frequently in ordinary conversation. They might be false triggers for your device.
-> * Avoid using a keyword that might have alternative pronunciations. Users would have to know the "right" pronunciation to get their device to respond. For example, "509" can be pronounced "five zero nine," "five oh nine," or "five hundred and nine." "R.E.I." can be pronounced "r-e-i" or "ray." "Live" can be pronounced "/l─½v/" or "/liv/".
-> * Do not use special characters, symbols, or digits. For example, "Go#" and "20 + cats" could be problematic keywords. However, "go sharp" or "twenty plus cats" might work. You can still use the symbols in your branding and use marketing and documentation to reinforce the proper pronunciation.
-
-> [!NOTE]
-> If you choose a trademarked word as your keyword, be sure that you own that trademark or that you have permission from the trademark owner to use the word. Microsoft is not liable for any legal issues that might arise from your choice of keyword.
-
-## See samples on GitHub
-
-* [Recognize keywords with the Speech SDK, on Universal Windows Platform using C#](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/quickstart/csharp/uwp/keyword-recognizer)
-* [Recognize keywords with the Speech SDK, on Android using Java](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/quickstart/java/android/keyword-recognizer)
-
-## Next steps
-
-* [Get a Speech service subscription key for free](overview.md#try-the-speech-service-for-free)
-* [Get the Speech SDK](speech-sdk.md)
-* [Learn more about Voice Assistants](voice-assistants.md)
cognitive-services Keyword Recognition Guidelines https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/keyword-recognition-guidelines.md
+
+ Title: Keyword recognition recommendations and guidelines - Speech service
+
+description: An overview of recommendations and guidelines when using keyword recognition.
++++++ Last updated : 04/30/2021++++
+# Recommendations and guidelines for keyword recognition
+
+This article outlines how to choose your keyword, optimize its accuracy characteristics, and design your user experiences with Keyword Verification.
+
+## Choosing an effective keyword
+
+Creating an effective keyword is vital to ensuring your product will consistently and accurately respond. Consider the following guidelines when you choose a keyword.
+
+> [!NOTE]
+> The examples below are in English but the guidelines apply to all languages supported by Custom Keyword. For a list of all supported languages, see [Language support](language-support.md#custom-keyword-and-keyword-verification).
+
+- It should take no longer than two seconds to say.
+- Words of 4 to 7 syllables work best. For example, "Hey, Computer" is a good keyword. Just "Hey" is a poor one.
+- Keywords should follow common pronunciation rules specific to the native language of your end-users.
+- A unique or even a made-up word that follows common pronunciation rules might reduce false positives. For example, "computerama" might be a good keyword.
+- Do not choose a common word. For example, "eat" and "go" are words that people say frequently in ordinary conversation. They might lead to higher than desired false accept rates for your product.
+- Avoid using a keyword that might have alternative pronunciations. Users would have to know the "right" pronunciation to get their product to voice activate. For example, "509" can be pronounced "five zero nine," "five oh nine," or "five hundred and nine." "R.E.I." can be pronounced "r-e-i" or "ray." "Live" can be pronounced "/līv/" or "/liv/".
+- Do not use special characters, symbols, or digits. For example, "Go#" and "20 + cats" could be problematic keywords. However, "go sharp" or "twenty plus cats" might work. You can still use the symbols in your branding and use marketing and documentation to reinforce the proper pronunciation.
++
+## User experience recommendations with Keyword Verification
+
+With a multi-stage keyword recognition scenario where [Keyword Verification](keyword-recognition-overview.md#keyword-verification) is used, applications can choose when the end-user is notified of a keyword detection. The recommendation for rendering any visual or audible indicator is to rely on responses from the Keyword Verification service:
+
+![User experience guideline when optimizing for accuracy.](media/custom-keyword/keyword-verification-ux-accuracy.png)
+
+This ensures the optimal experience in terms of accuracy, minimizing the user-perceived impact of false accepts, but it incurs additional latency.
+
+For applications that require latency optimization, you can provide light and unobtrusive indicators to the end-user based on the on-device keyword recognition, for example, lighting an LED pattern or pulsing an icon. The indicators can remain if Keyword Verification responds with a keyword accept, or can be dismissed if the response is a keyword reject:
+
+![User experience guideline when optimizing for latency.](media/custom-keyword/keyword-verification-ux-latency.png)
+
+## Next steps
+
+* [Get the Speech SDK.](speech-sdk.md)
+* [Learn more about Voice Assistants.](voice-assistants.md)
cognitive-services Keyword Recognition Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/keyword-recognition-overview.md
+
+ Title: Keyword recognition - Speech service
+
+description: An overview of the features, capabilities, and restrictions for keyword recognition using the Speech Software Development Kit (SDK).
++++++ Last updated : 04/30/2021++++
+# Keyword recognition
+
+Keyword recognition refers to speech technology that recognizes the existence of a word or short phrase within a given stream of audio. It is often synonymously referred to as keyword spotting. The most common use case of keyword recognition is voice activation of virtual assistants. For example, "Hey Cortana" is the keyword for the Cortana assistant. Upon recognition of the keyword, a scenario-specific action is carried out. For virtual assistant scenarios, a common resulting action is speech recognition of audio that follows the keyword.
+
+Generally, virtual assistants are always listening. Keyword recognition acts as a privacy boundary for the user. A keyword requirement acts as a gate that prevents unrelated user audio from crossing the local device to the cloud.
+
+To balance accuracy, latency, and computational complexity, keyword recognition is implemented as a multi-stage system. For all stages beyond the first, audio is processed only if the prior stage is believed to have recognized the keyword of interest.
+
+The current system is designed with multiple stages spanning across the edge and cloud:
+
+![Multiple stages of keyword recognition across edge and cloud.](media/custom-keyword/keyword-recognition-multi-stage.png)
+
+Accuracy of keyword recognition is measured via the following metrics:
+* **Correct accept rate (CA)** – Measures the system's ability to recognize the keyword when it is spoken by an end-user. This is also known as the true positive rate.
+* **False accept rate (FA)** – Measures the system's ability to filter out audio that is not the keyword spoken by an end-user. This is also known as the false positive rate.
+
+The goal is to maximize the correct accept rate while minimizing the false accept rate. The current system is designed to detect a keyword or phrase preceded by a short amount of silence. Detecting a keyword in the middle of a sentence or utterance is not supported.
+
+## Custom Keyword for on-device models
+
+The [Custom Keyword portal on Speech Studio](https://speech.microsoft.com/customkeyword) allows you to generate keyword recognition models that execute at the edge by specifying any word or short phrase. You can further personalize your keyword model by choosing the right pronunciations.
+
+### Pricing
+
+There is no cost to using Custom Keyword for generating models, including both Basic and Advanced models. There is also no cost for running models on-device with the Speech SDK.
+
+### Types of models
+
+Custom Keyword allows you to generate two types of on-device models for any keyword:
+
+| Model type | Description |
+| - | -- |
+| Basic | Best suited for demo or rapid prototyping purposes. Models are generated with a common base model and can take up to 15 minutes to be ready. Models may not have optimal accuracy characteristics. |
+| Advanced | Best suited for product integration purposes. Models are generated with adaptation of a common base model using simulated training data to improve accuracy characteristics. It can take up to 48 hours for models to be ready. |
+
+Neither model type requires you to upload training data. Custom Keyword fully handles data generation and model training.
+
+### Pronunciations
+
+When creating a new model, Custom Keyword automatically generates possible pronunciations of the provided keyword. You can listen to each pronunciation and choose all that closely represent the way you expect end-users to say the keyword. All other pronunciations should not be selected.
+
+It is important to be deliberate about the pronunciations you select to ensure the best accuracy characteristics. For example, choosing more pronunciations than needed can lead to higher false accept rates. Choosing too few pronunciations, where not all expected variations are covered, can lead to lower correct accept rates.
+
+### Testing models
+
+Once on-device models are generated by Custom Keyword, they can be tested directly on the portal. The portal allows you to speak directly into your browser and get keyword recognition results.
+
+## Keyword Verification
+
+Keyword Verification is a cloud service that reduces the impact of false accepts from on-device models with robust models running on Azure. There is no tuning or training required for Keyword Verification to work with your keyword. Incremental model updates are continually deployed to the service to improve accuracy and latency, completely transparent to client applications.
+
+### Pricing
+
+Keyword Verification is always used in combination with Speech-to-text, and there is no cost to using Keyword Verification beyond the cost of Speech-to-text.
+
+### Keyword Verification and Speech-to-text
+
+When Keyword Verification is used, it is always in combination with Speech-to-text. Both services run in parallel. This means that audio is sent to both services for simultaneous processing.
+
+![Parallel processing of Keyword Verification and Speech-to-text.](media/custom-keyword/keyword-verification-parallel-processing.png)
+
+Running Keyword Verification and Speech-to-text in parallel yields the following benefits:
+* **No additional latency on Speech-to-text results** – Parallel execution means Keyword Verification adds no latency, and the client receives Speech-to-text results just as quickly. If Keyword Verification determines the keyword was not present in the audio, Speech-to-text processing is terminated, which protects against unnecessary Speech-to-text processing. However, network and cloud model processing increases the user-perceived latency of voice activation. For details, see [Recommendations and guidelines](keyword-recognition-guidelines.md).
+* **Forced keyword prefix in Speech-to-text results** – Speech-to-text processing will ensure that the results sent to the client are prefixed with the keyword. This allows for increased accuracy in the Speech-to-text results for speech that follows the keyword.
+* **Increased Speech-to-text timeout** – Due to the expected presence of the keyword at the beginning of audio, Speech-to-text will allow for a longer pause of up to 5 seconds after the keyword, before determining end of speech and terminating Speech-to-text processing. This ensures the end-user experience is correctly handled for both staged commands (*\<keyword> \<pause> \<command>*) and chained commands (*\<keyword> \<command>*).
+
+### Keyword Verification responses and latency considerations
+
+For each request to the service, Keyword Verification will return one of two responses: Accepted or Rejected. The processing latency varies depending on the length of the keyword and the length of the audio segment expected to contain the keyword. Processing latency does not include network cost between the client and Azure Speech services.
+
+| Keyword Verification response | Description |
+| -- | -- |
+| Accepted | Indicates the service believed the keyword was present in the audio stream provided as part of the request. |
+| Rejected | Indicates the service believed the keyword was not present in the audio stream provided as part of the request. |
+
+Rejected cases often yield higher latencies as the service processes more audio than accepted cases. By default, Keyword Verification will process a maximum of two seconds of audio to search for the keyword. If the keyword is determined not to be present in the two seconds, the service will time out and signal a rejected response to the client.
+
+### Using Keyword Verification with on-device models from Custom Keyword
+
+The Speech SDK facilitates seamless use of on-device models generated using Custom Keyword with Keyword Verification and Speech-to-text. It transparently handles:
+* Audio gating to Keyword Verification & Speech recognition based on the outcome of the on-device model.
+* Communicating the keyword to the Keyword Verification service.
+* Communicating any additional metadata to the cloud for orchestrating the end-to-end scenario.
+
+You do not need to explicitly specify any configuration parameters. All necessary information will automatically be extracted from the on-device model generated by Custom Keyword.
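+
+For illustration, a minimal Python sketch of this end-to-end flow might look like the following. The subscription key, region, and `keyword.table` file name are placeholders; substitute your own Speech resource values and the model file exported from Custom Keyword.
+
+```python
+import azure.cognitiveservices.speech as speechsdk
+
+# Placeholder values: use your Speech resource key/region and the model file
+# downloaded from the Custom Keyword portal.
+speech_config = speechsdk.SpeechConfig(subscription="YourSubscriptionKey", region="YourServiceRegion")
+keyword_model = speechsdk.KeywordRecognitionModel("keyword.table")
+
+speech_recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config)
+
+# Print final results; audio is only streamed to the cloud after the on-device model fires.
+speech_recognizer.recognized.connect(
+    lambda evt: print("Recognized: {}".format(evt.result.text)))
+
+speech_recognizer.start_keyword_recognition(keyword_model)
+input("Listening for the keyword. Press Enter to stop.\n")
+speech_recognizer.stop_keyword_recognition()
+```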
+
+The sample and tutorials linked below show how to use the Speech SDK:
+ * [Voice assistant samples on GitHub](https://github.com/Azure-Samples/Cognitive-Services-Voice-Assistant)
+ * [Tutorial: Voice enable your assistant built using Azure Bot Service with the C# Speech SDK](./tutorial-voice-enable-your-bot-speech-sdk.md)
+ * [Tutorial: Create a Custom Commands application with simple voice commands](./how-to-develop-custom-commands-application.md)
+
+## Speech SDK integration and scenarios
+
+The Speech SDK facilitates easy use of personalized on-device keyword recognition models generated with Custom Keyword and the Keyword Verification service. To ensure your product needs can be met, the SDK supports two scenarios:
+
+| Scenario | Description | Samples |
+| -- | -- | - |
+| End-to-end keyword recognition with Speech-to-text | Best suited for products that will use a customized on-device keyword model from Custom Keyword with Azure Speech's Keyword Verification and Speech-to-text services. This is the most common scenario. | <ul><li>[Voice assistant sample code.](https://github.com/Azure-Samples/Cognitive-Services-Voice-Assistant)</li><li>[Tutorial: Voice enable your assistant built using Azure Bot Service with the C# Speech SDK.](./tutorial-voice-enable-your-bot-speech-sdk.md)</li><li>[Tutorial: Create a Custom Commands application with simple voice commands.](./how-to-develop-custom-commands-application.md)</li></ul> |
+| Offline keyword recognition | Best suited for products without network connectivity that will use a customized on-device keyword model from Custom Keyword. | <ul><li>[C# on Windows UWP sample.](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/quickstart/csharp/uwp/keyword-recognizer)</li><li>[Java on Android sample.](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/quickstart/java/android/keyword-recognizer)</li></ul>
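+
+For the offline scenario in the table above, a minimal Python sketch might look like the following. The `keyword.table` file name is a placeholder for the model exported from Custom Keyword; detection runs entirely on-device, so no Speech resource key is needed.
+
+```python
+import azure.cognitiveservices.speech as speechsdk
+
+# Placeholder file name: use the model downloaded from the Custom Keyword portal.
+model = speechsdk.KeywordRecognitionModel("keyword.table")
+
+# Uses the default microphone; detection happens on-device with no cloud connection.
+keyword_recognizer = speechsdk.KeywordRecognizer()
+result = keyword_recognizer.recognize_once_async(model).get()
+
+if result.reason == speechsdk.ResultReason.RecognizedKeyword:
+    print("Keyword recognized.")
+```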
+
+## Next steps
+
+* [Read the quickstart to generate on-device keyword recognition models using Custom Keyword.](custom-keyword-basics.md)
+* [Learn more about Voice Assistants.](voice-assistants.md)
cognitive-services Keyword Recognition Region Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/keyword-recognition-region-support.md
+
+ Title: Keyword recognition region support - Speech service
+
+description: An overview of the Azure regions supported for keyword recognition.
++++++ Last updated : 04/30/2021++++
+# Keyword recognition region support
+
+| Region | Custom Keyword (Basic models) | Custom Keyword (Advanced models) | Keyword Verification |
+| ------ | -- | -- | -- |
+| West US | Yes | No | Yes |
+| West US 2 | Yes | Yes | Yes |
+| East US | Yes | Yes | Yes |
+| East US 2 | Yes | Yes | Yes |
+| West Central US | Yes | No | Yes |
+| South Central US | Yes | Yes | Yes |
+| West Europe | Yes | Yes | Yes |
+| North Europe | Yes | Yes | Yes |
+| UK South | Yes | Yes | No |
+| East Asia | Yes | No | Yes |
+| Southeast Asia | Yes | Yes | Yes |
+| Central India | Yes | Yes | Yes |
+| Japan East | Yes | No | Yes |
+| Japan West | Yes | No | No |
+| Australia East | Yes | Yes | No |
+| Brazil South | Yes | No | No |
+| Canada Central | Yes | No | No |
+| Korea Central | Yes | No | No |
+| France Central | Yes | No | No |
+| North Central US | Yes | Yes | No |
+| Central US | Yes | No | No |
+| South Africa North | Yes | No | No |
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/language-support.md
See the following table for supported languages for the various Speaker Recognit
|Spanish (Mexico) | es-MX | n/a | yes | yes|
|Spanish (Spain) | es-ES | n/a | yes | yes|
+## Custom Keyword and Keyword Verification
+
+The following table outlines supported languages for Custom Keyword and Keyword Verification.
+
+| Language | Locale (BCP-47) | Custom Keyword | Keyword Verification |
+| -- | --------------- | -- | -- |
+| Chinese (Mandarin, Simplified) | zh-CN | Yes | Yes |
+| English (United States) | en-US | Yes | Yes |
+| Japanese (Japan) | ja-JP | No | Yes |
+| Portuguese (Brazil) | pt-BR | No | Yes |
+ ## Next steps * [Create a free Azure account](https://azure.microsoft.com/free/cognitive-services/)
cognitive-services Releasenotes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/releasenotes.md
Stay healthy!
**New features** - **Go**: New Go language support for [speech recognition](./get-started-speech-to-text.md?pivots=programming-language-go) and [custom voice assistant](./quickstarts/voice-assistants.md?pivots=programming-language-go). Set up your dev environment [here](./quickstarts/setup-platform.md?pivots=programming-language-go). For sample code, see the Samples section below. - **JavaScript**: Added Browser support for Text-To-Speech. See documentation [here](./get-started-text-to-speech.md?pivots=programming-language-JavaScript).-- **C++, C#, Java**: New `KeywordRecognizer` object and APIs supported on Windows, Android, Linux & iOS platforms. Read the documentation [here](./custom-keyword-overview.md). For sample code, see the Samples section below.
+- **C++, C#, Java**: New `KeywordRecognizer` object and APIs supported on Windows, Android, Linux & iOS platforms. Read the documentation [here](./keyword-recognition-overview.md). For sample code, see the Samples section below.
- **Java**: Added multi-device conversation with translation support. See the reference doc [here](/java/api/com.microsoft.cognitiveservices.speech.transcription). **Improvements & Optimizations**
cognitive-services Speech Sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/speech-sdk.md
The Speech SDK exposes many features from the Speech service, but not all of the
#### Keyword recognition
-The concept of [keyword recognition](./custom-keyword-basics.md) is supported in the Speech SDK. Keyword recognition is the act of identifying a keyword in speech, followed by an action upon hearing the keyword. For example, "Hey Cortana" would activate the Cortana assistant.
+The concept of [keyword recognition](custom-keyword-basics.md) is supported in the Speech SDK. Keyword recognition is the act of identifying a keyword in speech, followed by an action upon hearing the keyword. For example, "Hey Cortana" would activate the Cortana assistant.
**Keyword recognition** is available on the following platforms: - C++/Windows & Linux - C#/Windows & Linux - Python/Windows & Linux
- - Java/Windows & Linux & Android (Speech Devices SDK)
- - Keyword recognition functionality might work with any microphone type, official keyword recognition support, however, is currently limited to the microphone arrays found in the Azure Kinect DK hardware or the Speech Devices SDK
+ - Java/Windows & Linux & Android
### Meeting scenarios
cognitive-services Voice Assistants https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/voice-assistants.md
Voice assistants built using Azure Speech services can use the full range of cus
* [Custom Speech](./custom-speech-overview.md) * [Custom Voice](how-to-custom-voice.md)
-* [Custom Keyword](custom-keyword-overview.md)
+* [Custom Keyword](keyword-recognition-overview.md)
> [!NOTE] > Customization options vary by language/locale (see [Supported languages](language-support.md)).
communication-services Teams Interop https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/teams-interop.md
> [!NOTE] > Interoperability between Azure Communication Services and Microsoft Teams enables your applications and users to participate in Teams calls, meetings, and chat. It is your responsibility to ensure that the users of your application are notified when recording or transcription are enabled in a Teams call or meeting. Microsoft will indicate to you via the Azure Communication Services API that recording or transcription has commenced and you must communicate this fact, in real time, to your users within your application's user interface. You agree to indemnify Microsoft for all costs and damages incurred as a result of your failure to comply with this obligation. -
+> [!NOTE]
+> VoIP and Chat usage is only billed to your Azure resource when using Azure APIs and SDKs. Teams clients interacting with Azure Communication Services applications are free.
Azure Communication Services can be used to build custom meeting experiences that interact with Microsoft Teams. Users of your Communication Services solution(s) can interact with Teams participants over voice, video, chat, and screen sharing.
Azure Communication Services interoperability isn't compatible with Teams deploy
For more information, see the following articles: -- Learn about [UI Framework](./ui-framework/ui-sdk-overview.md)-- Learn about [UI Framework capabilities](./ui-framework/ui-sdk-features.md)
+- Learn about [UI Library](./ui-library/ui-library-overview.md)
+- Learn about [UI Library capabilities](./ui-library/ui-library-use-cases.md)
communication-services Ui Sdk Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/ui-framework/ui-sdk-features.md
- Title: Azure Communication Services UI Framework capabilities-
-description: Learn about UI Framework capabilities
-- Previously updated : 03/10/2021-----
-# UI Framework capabilities
--
-The Azure Communication Services UI Framework lets you build communication experiences using a set of reusable components. These components come in two flavors: **Base** components are the most basic building blocks of your UI experience, while combinations of these base components are called **composite** components.
-
-## UI Framework composite components
-
-| Composite | Description | Web | Android | iOS |
-|-|--|-||-|
-| Group Calling Composite | Light-weight voice and video outbound calling experience for Azure Communication Services calling using Fluent UI design assets. Supports group calling using Azure Communication Services Group ID. The composite allows for one-to-one calling to be used by referencing an Azure Communication Services identity or a phone number for PSTN using a phone number procured through Azure. | React | | |
-| Group Chat Composite | Light-weight chat experience for Azure Communication Services using Fluent UI design assets. This experience concentrates on delivering a simple chat client that can connect to Azure Communication Services threads. It allows users to send messages and see received messages with typing indicators and read receipts. It scales from 1:1 to group chat scenarios. Supports a single chat thread. | React | | |
-
-## UI Framework base components
-
-| Component | Description | Web | Android | iOS |
-|--||-||--|
-| Calling Provider | Core initializing component for calling. Required component to then initialize other components on top of it. Handles core logic to initialize the calling client using Azure Communication Services access tokens. Supports Group join. | React | N/A | N/A |
-| Media Controls | Allows users to manage the current call by toggling mute, turning video on/off and end the call. | React | N/A | N/A |
-| Media Gallery | Showcase all call participants in a single gallery. Gallery supports both video-enabled and static participants. For video-enabled participants, video is rendered. | React | N/A | N/A |
-| Microphone Settings | Pick the microphone to be used for calling. This control can be used before and during a call to select the microphone device. | React | N/A | N/A |
-| Camera Settings | Pick the camera to be used for video calling. This control can be used before and during a call to select the video device. | React | N/A | N/A |
-| Device Settings | Combines microphone and camera settings into a single component | React | N/A | N/A |
-| Chat Provider | Core initializing component for chat. Required component to then initialize other components on top of it. Handles core logic to initialize the chat client with an Azure Communication Services access token and the thread that it will join. | React | N/A | N/A |
-| Send Box | Input component that allows users to send messages to the chat thread. Input supports text, hyperlinks, emojis and other Unicode characters including other alphabets. | React | N/A | N/A |
-| Chat Thread | Thread component that shows the user both received and sent messages with their sender information. The thread supports typing indicators and read receipts. You can scroll these threads to review chat history.
-| Participant List | Show all the participants of the call or chat thread as a list. | React | N/A | N/A |
-
-## UI Framework capabilities
-
-| Feature | Group Calling Composite | Group Chat Composite | Base Components |
-||-|-|--|
-| Join Teams Meeting | | |
-| Join Teams Live Event | | |
-| Start VoIP call to Teams user | | |
-| Join a Teams Meeting Chat | | |
-| Join Azure Communication Services call with Group Id | ✔ | | ✔
-| Start a VoIP call to one or more Azure communication Services users | | |
-| Join an Azure Communication Services chat thread | | ✔ | ✔
-| Mute/unmute call | ✔ | | ✔
-| Video on/off on call | ✔ | | ✔
-| Screen Sharing | ✔ | | ✔
-| Participant gallery | ✔ | | ✔
-| Microphone management | ✔ | | ✔
-| Camera management | ✔ | | ✔
-| Call Lobby | | | ✔
-| Send chat message | | ✔ |
-| Receive chat message | | ✔ | ✔
-| Typing Indicators | | ✔ | ✔
-| Read Receipt | | ✔ | ✔
-| Participant List | | | ✔
--
-## Customization support
-
-| Component Type | Themes | Layout | Data Models |
-||||-|
-| Composite Component | N/A | N/A | N/A |
-| Base Component | N/A | Layout of components can be modified using external styling | N/A |
--
-## Platform support
-
-| SDK | Windows | macOS | Ubuntu | Linux | Android | iOS |
-|--|--|-|-|-|-||
-| UI SDK | Chrome\*, new Edge | Chrome\*, Safari\*\* | Chrome\* | Chrome\* | Chrome\* | Safari\*\* |
-
-\*Note that the latest version of Chrome is supported in addition to the
-previous two releases.
-
-\*\*Note that Safari versions 13.1+ are supported. Outgoing video for Safari
-macOS is not yet supported, but it is supported on iOS. Outgoing screen sharing
-is only supported on desktop iOS.
communication-services Ui Sdk Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/ui-framework/ui-sdk-overview.md
- Title: Azure Communication Services UI Framework overview-
-description: Learn about Azure Communication Services UI Framework
-- Previously updated : 03/10/2021-----
-# Azure Communication Services UI Framework
---
-Azure Communication Services UI Framework makes it easy for you to build modern communications user experiences. It gives you a library of production-ready UI components that you can drop into your applications:
--- **Composite Components** - These components are turn-key solutions that implement common communication scenarios. You can quickly add video calling or chat experiences to their applications. Composites are open-source components built using base components.-- **Base Components** - These components are open-source building blocks that let you build custom communications experience. Components are offered for both calling and chat capabilities that can be combined to build experiences. -
-These UI SDKs all use [Microsoft's Fluent design language](https://developer.microsoft.com/fluentui/) and assets. Fluent UI provides a foundational layer for the UI Framework that has been battle tested across Microsoft products.
-
-## **Differentiating Components and Composites**
-
-**Base Components** are built on top of core Azure Communication Services SDKs and implement basic actions such as initializing the core SDKs, rendering video, and providing user controls for muting, video on/off, etc. You can use these **Base Components** to build your own custom layout experiences using pre-built, production ready communication components.
--
-**Composite Components** combine multiple **Base Components** to create more complete communication experiences. These higher-level components can be easily integrated into an existing app to drop a fully fledge communication experience without the task of building it from scratch. Developers can concentrate on building the surrounding experience and flow desired into their apps and leave the communications complexity to Composite Components.
--
-## What UI Framework is best for my project?
-
-Understanding these requirements will help you choose the right SDK:
--- **How much customization do you desire?** Azure Communication core SDKs don't have a UX and are designed so you can build whatever UX you want. UI Framework components provide UI assets at the cost of reduced customization.-- **Do you require Meeting features?** The Meeting system has several unique capabilities not currently available in the core Azure Communication Services SDKs, such as blurred background and raised hand.-- **What platforms are you targeting?** Different platforms have different capabilities.-
-Details about feature availability in the varied [UI SDKs is available here](ui-sdk-features.md), but key trade-offs are summarized below.
-
-|SDK / SDK|Implementation Complexity| Customization Ability| Calling| Chat| [Teams Interop](./../teams-interop.md)
-||||||||
-|Composite Components|Low|Low|✔|✔|✕
-|Base Components|Medium|Medium|✔|✔|✕
-|Core SDKs|High|High|✔|✔ |✔
-
-## Cost
-
-Usage of Azure UI Frameworks does not have any extra Azure cost or metering. You only pay for the
-usage of the underlying service, using the same Calling, Chat, and PSTN meters.
-
-## Supported use cases
-
-Calling:
--- Join Azure Communication Services call with Group ID-
-Chat:
--- Join Azure Communication Services chat with Thread ID-
-## Supported identities:
-
-An Azure Communication Services identity is required to initialize the UI Framework and authenticate to the service. For more information on authentication, see [Authentication](../authentication.md) and [Access Tokens](../../quickstarts/access-tokens.md)
--
-## Recommended architecture
--
-Composite and Base Components are initialized using an Azure Communication Services access token. Access tokens should be procured from Azure Communication Services through a
-trusted service that you manage. See [Quickstart: Create Access Tokens](../../quickstarts/access-tokens.md) and [Trusted Service Tutorial](../../tutorials/trusted-service-tutorial.md) for more information.
-
-These SDKs also require the context for the call or chat they will join. Similar to user access tokens, this context should be disseminated to clients via your own trusted service. The list below summarizes the initialization and resource management functions that you need to operationalize.
-
-| Contoso Responsibilities | UI Framework Responsibilities |
-|-|--|
-| Provide access token from Azure | Pass through given access token to initialize components |
-| Provide refresh function | Refresh access token using developer provided function |
-| Retrieve/Pass join information for call or chat | Pass through call and chat information to initialize components |
-| Retrieve/Pass user information for any custom data model | Pass through custom data model to components to render |
communication-services Teams Embed https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/ui-library/teams-embed.md
+
+ Title: UI Library Teams Embed
+
+description: In this document, you'll learn how the Azure Communication Services UI Library Teams Embed capability can be used to build turnkey calling experiences.
++ Last updated : 11/16/2020+++++
+# Teams Embed
+++
+Teams Embed is an Azure Communication Services capability focused on common business-to-consumer and business-to-business calling interactions. The core of the Teams Embed system is [video and voice calling](../voice-video-calling/calling-sdk-features.md), but the Teams Embed system builds on Azure's calling primitives to deliver a complete user experience based on Microsoft Teams meetings.
+
+Teams Embed SDKs are closed-source and make these capabilities available to you in a turnkey, composite format. You drop Teams Embed into your app's canvas and the SDK generates a complete user experience. Because this user experience is very similar to Microsoft Teams meetings, you can take advantage of:
+
+- Reduced development time and engineering complexity
+- End-user familiarity with Teams
+- Ability to re-use [Teams end-user training content](https://support.microsoft.com/office/meetings-in-teams-e0b0ae21-53ee-4462-a50d-ca9b9e217b67)
+
+Teams Embed provides most of the features supported in Teams meetings, including:
+
+- Pre-meeting experience where a user configures their audio and video devices
+- In-meeting experience for configuring audio and video devices
+- [Video Backgrounds](https://support.microsoft.com/office/change-your-background-for-a-teams-meeting-f77a2381-443a-499d-825e-509a140f4780): allowing participants to blur or replace their backgrounds
+- [Multiple options for the video gallery](https://support.microsoft.com/office/using-video-in-microsoft-teams-3647fc29-7b92-4c26-8c2d-8a596904cdae): large gallery, together mode, focus, pinning, and spotlight
+- [Content Sharing](https://support.microsoft.com/office/share-content-in-a-meeting-in-teams-fcc2bf59-aecd-4481-8f99-ce55dd836ce8): allowing participants to share their screen
+
+For more information about this UI compared to other Azure Communication SDKs, see the [UI SDK concept introduction](ui-library-overview.md).
communication-services Ui Library Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/ui-library/ui-library-overview.md
+
+ Title: UI Library overview
+
+description: Learn about the Azure Communication Services UI Library.
+++++ Last updated : 05/11/2021++++
+# UI Library Overview
++
+> [!NOTE]
+> For detailed documentation on the UI Library, visit the [UI Library Storybook](https://azure.github.io/communication-ui-sdk). There you will find additional conceptual documentation, quickstarts, and examples.
+
+Azure Communication Services - UI Library makes it easy for you to build modern communications user experiences using Azure Communication Services.
+It gives you a library of production-ready UI components that you can drop into your applications:
+
+- **Composites** - These are turn-key solutions that implement common communication scenarios.
+ You can quickly add video calling or chat experiences to your applications.
+ Composites are open-source higher order components built using UI components.
+
+- **UI Components** - These components are open-source building blocks that let you build custom communications experience.
+ Components are offered for both calling and chat capabilities that can be combined to build experiences.
+
+These UI client libraries all use [Microsoft's Fluent design language](https://developer.microsoft.com/fluentui/) and assets. Fluent UI provides a foundational layer for the UI Library and is actively used across Microsoft products.
+
+In conjunction with the UI components, the UI Library exposes a stateful client library for calling and chat.
+This client is agnostic to any specific state management framework and can be integrated with common state managers like Redux or React Context.
+This stateful client library can be used with the UI Components to pass props and methods for the UI Components to render data. See [Stateful Client Overview](https://azure.github.io/communication-ui-sdk/?path=/story/stateful-client-what-is-stateful--page) for more information.
+
+## Installing UI Library
+
+Stateful clients are found as part of the `@azure/communication-react` package.
+
+```bash
+npm i --save @azure/communication-react
+```
+
+## Composites overview
+
+Composites are higher-level components composed of UI components that deliver turn-key solutions for common communication scenarios using Azure Communication Services.
+Developers can easily instantiate the Composite using an Azure Communication Services access token and the required configuration attributes for the call or chat.
+
+| Composite | Use Cases |
+| | - |
+| [Group Calling](https://azure.github.io/communication-ui-sdk/?path=/docs/composites-groupcall--group-call-composite) | Calling experience that allows users to start or join a call. Inside the experience, users can configure their devices, participate in the call with video, and see other participants, including those with video turned on. For Teams interop, it includes lobby functionality so users can wait to be admitted. |
+| [Group Chat](https://azure.github.io/communication-ui-sdk/?path=/docs/composites-groupchat--group-chat-composite) | Chat experience where users can send and receive messages. Thread events like typing, reads, and participants entering and leaving are displayed to the user as part of the chat thread. |
+
+## UI Component overview
+
+UI Components are pure building blocks that developers can use to compose communication experiences, from stitching video tiles into a grid to showcase remote participants, to organizing components to fit your application's specifications.
+UI Components support customization to give the components the right look and feel to match an application's branding and style.
+
+| Area | Component | Description |
+| - | | -- |
+| Calling | [Grid Layout](https://azure.github.io/communication-ui-sdk/?path=/story/ui-components-gridlayout--grid-layout-component) | Grid component to organize Video Tiles into an NxN grid |
+| | [Video Tile](https://azure.github.io/communication-ui-sdk/?path=/story/ui-components-videotile--video-tile-component) | Component that displays video stream when available and a default static component when not |
+| | [Control Bar](https://azure.github.io/communication-ui-sdk/?path=/story/ui-components-controlbar--control-bar-component) | Container to organize DefaultButtons to hook up to specific call actions like mute or share screen |
+| Chat | [Message Thread](https://azure.github.io/communication-ui-sdk/?path=/story/ui-components-messagethread--message-thread-component) | Container that renders chat messages, system messages and custom messages |
+| | [Send Box](https://azure.github.io/communication-ui-sdk/?path=/story/ui-components-sendbox--send-box-component) | Text input component with a discrete send button |
+| | [Read Receipt](https://azure.github.io/communication-ui-sdk/?path=/story/ui-components-readreceipt--read-reciept-icon-component) | Multi-state read receipt component to show state of sent message |
+| | [Typing indicator](https://azure.github.io/communication-ui-sdk/?path=/story/ui-components-typingindicator--typing-indicator-component) | Text component to render the participants who are actively typing on a thread |
+| Common | [Participant Item](https://azure.github.io/communication-ui-sdk/?path=/story/ui-components-participantitem--participant-item-component) | Common component to render a call or chat participant including avatar and display name |
+| | [Error Bar](https://azure.github.io/communication-ui-sdk/?path=/story/ui-components-errorbar--error-bar-component) | Common error bar with multiple built-in states to show user events |
+
+## What UI artifact is best for my project?
+
+Understanding these requirements will help you choose the right client library:
+
+- **How much customization do you desire?** Azure Communication core client libraries don't have a UX and are designed so you can build whatever UX you want. UI Library components provide UI assets at the cost of reduced customization.
+- **What platforms are you targeting?** Different platforms have different capabilities.
+
+Details about feature availability in the [UI Library are available here](https://azure.github.io/communication-ui-sdk/?path=/story/use-cases--page), but key trade-offs are summarized below.
+
+| Client library / SDK | Implementation Complexity | Customization Ability | Calling | Chat | [Teams Interop](../teams-interop.md) |
+| | - | | - | - | -- |
+| Composite Components | Low | Low | ✔ | ✔ | ✔ |
+| Base Components | Medium | Medium | ✔ | ✔ | ✔ |
+| Core client libraries | High | High | ✔ | ✔ | ✔ |
+
+> [!div class="nextstepaction"]
+> [Visit UI Library Storybook](https://azure.github.io/communication-ui-sdk)
communication-services Ui Library Use Cases https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/ui-library/ui-library-use-cases.md
+
+ Title: UI Library use cases
+
+description: Learn about the UI Library and how it can help you build communication experiences
+++++ Last updated : 05/11/2021++++
+# UI Library use cases
++
+> [!NOTE]
+> For detailed documentation on the UI Library, visit the [UI Library Storybook](https://azure.github.io/communication-ui-sdk). There you will find additional conceptual documentation, quickstarts, and examples.
++
+UI Library supports a number of use cases across calling and chat experiences.
+These capabilities are available through UI Components and Composites.
+For Composites, these capabilities are built directly in and exposed when the composite is integrated into an application.
+
+For UI Components, these capabilities are exposed through a combination of UI functionality and underlying stateful libraries.
+
+To take full advantage of these capabilities we recommend using the UI Components in conjunction with the stateful call and chat client libraries.
+
+## Calling use cases
+
+| Area | Use Cases |
+| - | |
+| Call Types | Join Teams Meeting |
+| | Join Azure Communication Services call with Group Id |
+| Teams Interop | Call Lobby |
+| | Transcription and recording alert banner |
+| Call Controls | Mute/unmute call |
+| | Video on/off on call |
+| | Screen Sharing |
+| | End call |
+| Participant Gallery | Remote participants are displayed on grid |
+| | Video preview available throughout call for local user |
+| | Default avatars available when video is off |
+| | Shared screen content displayed on participant gallery |
+| Call configuration | Microphone device management |
+| | Camera device management |
+| | Speaker device management |
+| | Local preview available for user to check video |
+| Participants | Participant roster |
+
+## Chat use cases
+
+| Area | Use Cases |
+| | |
+| Chat Types | Join a Teams Meeting Chat |
+| | Join an Azure Communication Services chat thread |
+| Chat Actions | Send chat message |
+| | Receive chat message |
+| Chat Events | Typing Indicators |
+| | Read Receipt |
+| | Participant added/removed |
+| | Chat title changed |
+| Participants | Participant roster |
+
+## Supported identities
+
+An Azure Communication Services identity is required to initialize the stateful client libraries and authenticate to the service.
+For more information on authentication, see [Authentication](../authentication.md) and [Access Tokens](../../quickstarts/access-tokens.md?pivots=programming-language-javascript)
+
+## Customization
+
+UI Library exposes patterns for developers to modify components to fit the look and feel of their application.
+This is a key area of differentiation between Composites and UI Components, where Composites provide fewer customization options in favor of a simpler integration experience.
+
+| Use Case | Composites | UI Components |
+| | - | - |
+| Fluent based Theming | X | X |
+| Experience layout is composable | | X |
+| CSS Styling can be used to modify style properties | | X |
+| Icons can be replaced | | X |
+| Participant gallery layout can be modified | | X |
+| Call control layout can be modified | X | X |
+| Data models can be injected to modify user metadata | X | X |
+
+## Observability
+
+As part of the decoupled state management architecture of the UI Library, developers are able to access the stateful calling and chat clients directly.
+
+Developers can hook into the stateful client to read the state, handle events and override behavior to pass onto the UI Components.
+
+| Use Case | Composites | UI Components |
+| -- | - | - |
+| Call/Chat client state can be accessed | X | X |
+| Client events can be accessed and handled | X | X |
+| UI events can be accessed and handled | X | X |
+
+## Recommended architecture
++
+Composite and Base Components are initialized using an Azure Communication Services access token. Access tokens should be procured from Azure Communication Services through a
+trusted service that you manage. See [Quickstart: Create Access Tokens](../../quickstarts/access-tokens.md?pivots=programming-language-javascript) and [Trusted Service Tutorial](../../tutorials/trusted-service-tutorial.md) for more information.
+
+These client libraries also require the context for the call or chat they will join. Similar to user access tokens, this context should be disseminated to clients via your own trusted service. The list below summarizes the initialization and resource management functions that you need to operationalize.
+
+| Contoso Responsibilities | UI Library Responsibilities |
+| -- | |
+| Provide access token from Azure | Pass through given access token to initialize components |
+| Provide refresh function | Refresh access token using developer provided function |
+| Retrieve/Pass join information for call or chat | Pass through call and chat information to initialize components |
+| Retrieve/Pass user information for any custom data model | Pass through custom data model to components to render |
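+
+As an illustration of the responsibilities above, a minimal Python sketch of such a trusted token service might look like the following. The connection string is a placeholder for your own Azure Communication Services resource, and in a real service the token (and join context) would be returned to the client application over an authenticated endpoint rather than printed.
+
+```python
+from azure.communication.identity import CommunicationIdentityClient, CommunicationTokenScope
+
+# Placeholder connection string: read this from your Communication Services resource configuration.
+connection_string = "endpoint=https://<resource-name>.communication.azure.com/;accesskey=<key>"
+client = CommunicationIdentityClient.from_connection_string(connection_string)
+
+# Create an identity and issue an access token scoped to calling and chat.
+user, token = client.create_user_and_token(
+    scopes=[CommunicationTokenScope.VOIP, CommunicationTokenScope.CHAT])
+
+# Hand token.token (plus the call or chat join context) to the client application,
+# which passes it through to the UI Library components.
+print("Access token expires on:", token.expires_on)
+```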
+
+## Platform support
+
+| SDK | Windows | macOS | Ubuntu | Linux | Android | iOS |
+| | | -- | -- | -- | -- | - |
+| UI SDK | Chrome\*, new Edge | Chrome\*, Safari\*\* | Chrome\* | Chrome\* | Chrome\* | Safari\*\* |
+
+\*Note that the latest version of Chrome is supported in addition to the
+previous two releases.
+
+\*\*Note that Safari versions 13.1+ are supported. Outgoing video for Safari
+macOS is not yet supported, but it is supported on iOS. Outgoing screen sharing
+is only supported on desktop iOS.
+
+## Accessibility
+
+Accessibility by design is a principle across Microsoft products.
+UI Library follows this principle in making sure that all UI Components are fully accessible.
+During public preview, the UI Library will continue to improve and add accessibility features to the UI Components.
+We expect to add more details on accessibility ahead of the UI Library being in General Availability.
+
+## Localization
+
+Localization is key to making products that can be used across the world and by people who speak different languages.
+UI Library will provide out of the box support for some languages and capabilities such as RTL.
+Developers can provide their own localization files to be used for the UI Library.
+These localization capabilities will be added ahead of General Availability.
+
+> [!div class="nextstepaction"]
+> [Visit UI Library Storybook](https://azure.github.io/communication-ui-sdk)
communication-services Meeting Interop https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/chat/meeting-interop.md
Last updated 03/10/2021
+zone_pivot_groups: acs-web-ios
> [!IMPORTANT] > To enable/disable [Teams tenant interoperability](../../concepts/teams-interop.md), complete [this form](https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR21ouQM6BHtHiripswZoZsdURDQ5SUNQTElKR0VZU0VUU1hMOTBBMVhESS4u).
-Get started with Azure Communication Services by connecting your chat solution to Microsoft Teams using the JavaScript SDK.
+Get started with Azure Communication Services by connecting your chat solution to Microsoft Teams.
-## Prerequisites
-1. A [Teams deployment](/deployoffice/teams-install).
-2. A working [chat app](./get-started.md).
-
-## Enable Teams interoperability
-
-A Communication Services user that joins a Teams meeting as a guest user can access the meeting's chat only when they've joined the Teams meeting call. See the [Teams interop](../voice-video-calling/get-started-teams-interop.md) documentation to learn how to add a Communication Services user to a Teams meeting call.
-
-You must be a member of the owning organization of both entities to use this feature.
- ## Clean up resources
container-instances Container Instances Application Gateway https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-instances/container-instances-application-gateway.md
Title: Static IP address for container group
-description: Create a container group in a virtual network and use an Azure application gateway to expose a static frontend IP address to a containerized web app
+description: Create a container group in a virtual network and use an Azure application gateway to expose a static frontend IP address to a containerized web app
Last updated 03/16/2020 # Expose a static IP address for a container group
-This article shows one way to expose a static, public IP address for a [container group](container-instances-container-groups.md) by using an Azure [application gateway](../application-gateway/overview.md). Follow these steps when you need a static entry point for an external-facing containerized app that runs in Azure Container Instances.
+This article shows one way to expose a static, public IP address for a [container group](container-instances-container-groups.md) by using an Azure [application gateway](../application-gateway/overview.md). Follow these steps when you need a static entry point for an external-facing containerized app that runs in Azure Container Instances.
In this article you use the Azure CLI to create the resources for this scenario:
az network public-ip create \
## Create container group
-Run the following [az container create][az-container-create] to create a container group in the virtual network you configured in the previous step.
+Run the following [az container create][az-container-create] to create a container group in the virtual network you configured in the previous step.
-The group is deployed in the *myACISubnet* subnet and contains a single instance named *appcontainer* that pulls the `aci-helloworld` image. As shown in other articles in the documentation, this image packages a small web app written in Node.js that serves a static HTML page.
+The group is deployed in the *myACISubnet* subnet and contains a single instance named *appcontainer* that pulls the `aci-helloworld` image. As shown in other articles in the documentation, this image packages a small web app written in Node.js that serves a static HTML page.
```azurecli az container create \
az network application-gateway create \
--public-ip-address myAGPublicIPAddress \ --vnet-name myVNet \ --subnet myAGSubnet \
- --servers "$ACI_IP"
+ --servers "$ACI_IP"
```
-It can take up to 15 minutes for Azure to create the application gateway.
+It can take up to 15 minutes for Azure to create the application gateway.
## Test public IP address
-
+ Now you can test access to the web app running in the container group behind the application gateway. Run the [az network public-ip show][az-network-public-ip-show] command to retrieve the frontend public IP address of the gateway:
To view the running web app when successfully configured, navigate to the gatewa
## Next steps
-* See a [quickstart template](https://github.com/Azure/azure-quickstart-templates/tree/master/201-aci-wordpress-vnet) to create a container group with a WordPress container instance as a backend server behind an application gateway.
+* See a [quickstart template](https://github.com/Azure/azure-quickstart-templates/tree/master/application-workloads/wordpress/aci-wordpress-vnet) to create a container group with a WordPress container instance as a backend server behind an application gateway.
* You can also configure an application gateway with a certificate for SSL termination. See the [overview](../application-gateway/ssl-overview.md) and the [tutorial](../application-gateway/create-ssl-portal.md). * Depending on your scenario, consider using other Azure load-balancing solutions with Azure Container Instances. For example, use [Azure Traffic Manager](../traffic-manager/traffic-manager-overview.md) to distribute traffic across multiple container instances and across multiple regions. See this [blog post](https://aaronmsft.com/posts/azure-container-instances/).
container-instances Container Instances Container Group Ssl https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-instances/container-instances-container-group-ssl.md
Last updated 07/02/2020
This article shows how to create a [container group](container-instances-container-groups.md) with an application container and a sidecar container running a TLS/SSL provider. By setting up a container group with a separate TLS endpoint, you enable TLS connections for your application without changing your application code. You set up an example container group consisting of two containers:
-* An application container that runs a simple web app using the public Microsoft [aci-helloworld](https://hub.docker.com/_/microsoft-azuredocs-aci-helloworld) image.
-* A sidecar container running the public [Nginx](https://hub.docker.com/_/nginx) image, configured to use TLS.
+* An application container that runs a simple web app using the public Microsoft [aci-helloworld](https://hub.docker.com/_/microsoft-azuredocs-aci-helloworld) image.
+* A sidecar container running the public [Nginx](https://hub.docker.com/_/nginx) image, configured to use TLS.
-In this example, the container group only exposes port 443 for Nginx with its public IP address. Nginx routes HTTPS requests to the companion web app, which listens internally on port 80. You can adapt the example for container apps that listen on other ports.
+In this example, the container group only exposes port 443 for Nginx with its public IP address. Nginx routes HTTPS requests to the companion web app, which listens internally on port 80. You can adapt the example for container apps that listen on other ports.
See [Next steps](#next-steps) for other approaches to enabling TLS in a container group.
http {
location / { proxy_pass http://localhost:80; # TODO: replace port if app listens on port other than 80
-
+ proxy_set_header Connection ""; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr;
If you deploy your container group in an [Azure virtual network](container-insta
* [Azure Functions Proxies](../azure-functions/functions-proxies.md) * [Azure API Management](../api-management/api-management-key-concepts.md)
-* [Azure Application Gateway](../application-gateway/overview.md) - see a sample [deployment template](https://github.com/Azure/azure-quickstart-templates/tree/master/201-aci-wordpress-vnet).
+* [Azure Application Gateway](../application-gateway/overview.md) - see a sample [deployment template](https://github.com/Azure/azure-quickstart-templates/tree/master/application-workloads/wordpress/aci-wordpress-vnet).
container-instances Container Instances Image Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-instances/container-instances-image-security.md
Maintain an accurate audit trail of administrative access to your container ecos
* [Integration of Azure Kubernetes Service with Azure Security Center](../security-center/defender-for-kubernetes-introduction.md) to monitor the security configuration of the cluster environment and generate security recommendations * [Azure Container Monitoring solution](../azure-monitor/containers/containers.md)
-* Resource logs for [Azure Container Instances](container-instances-log-analytics.md) and [Azure Container Registry](../container-registry/container-registry-diagnostics-audit-logs.md)
+* Resource logs for [Azure Container Instances](container-instances-log-analytics.md) and [Azure Container Registry](../container-registry/monitor-service.md)
## Next steps
container-registry Container Registry Diagnostics Audit Logs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/container-registry-diagnostics-audit-logs.md
- Title: Collect & analyze resource logs
-description: Record and analyze resource log events for Azure Container Registry such as authentication, image push, and image pull.
- Previously updated : 06/01/2020-
-# Azure Container Registry logs for diagnostic evaluation and auditing
-
-This article explains how to collect log data for an Azure container registry using features of [Azure Monitor](../azure-monitor/overview.md). Azure Monitor collects [resource logs](../azure-monitor/essentials/platform-logs-overview.md) (formerly called *diagnostic logs*) for user-driven events in your registry. Collect and consume this data to meet needs such as:
-
-* Audit registry authentication events to ensure security and compliance
-
-* Provide a complete activity trail on registry artifacts such as pull and pull events so you can diagnose operational issues with your registry
-
-Collecting resource log data using Azure Monitor may incur additional costs. See [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/).
-
-## Repository events
-
-The following repository-level events for images and other artifacts are currently logged:
-
-* **Push**
-* **Pull**
-* **Untag**
-* **Delete** (including repository delete events)
-* **Purge tag** and **Purge manifest**
-
-> [!NOTE]
-> Purge events are logged only if a registry [retention policy](container-registry-retention-policy.md) is configured.
-
-## Registry resource logs
-
-Resource logs contain information emitted by Azure resources that describe their internal operation. For an Azure container registry, the logs contain authentication and repository-level events stored in the following tables.
-
-* **ContainerRegistryLoginEvents** - Registry authentication events and status, including the incoming identity and IP address
-* **ContainerRegistryRepositoryEvents** - Operations such as push and pull for images and other artifacts in registry repositories
-* **AzureMetrics** - [Container registry metrics](../azure-monitor/essentials/metrics-supported.md#microsoftcontainerregistryregistries) such as aggregated push and pull counts.
-
-For operations, log data includes:
- * Success or failure status
- * Start and end time stamps
-
-In addition to resource logs, Azure provides an [activity log](../azure-monitor/essentials/platform-logs-overview.md), a single subscription-level record of Azure management events such as the creation or deletion of a container registry.
-
-## Enable collection of resource logs
-
-Collection of resource logs for a container registry isn't enabled by default. Explicitly enable diagnostic settings for each registry you want to monitor. For options to enable diagnostic settings, see [Create diagnostic setting to collect platform logs and metrics in Azure](../azure-monitor/essentials/diagnostic-settings.md).
-
-For example, to view logs and metrics for a container registry in near real-time in Azure Monitor, collect the resource logs in a Log Analytics workspace. To enable this diagnostic setting using the Azure portal:
-
-1. If you don't already have a workspace, create a workspace using the [Azure portal](../azure-monitor/logs/quick-create-workspace.md). To minimize latency in data collection, ensure that the workspace is in the **same region** as your container registry.
-1. In the portal, select the registry, and select **Monitoring > Diagnostic settings > Add diagnostic setting**.
-1. Enter a name for the setting, and select **Send to Log Analytics**.
-1. Select the workspace for the registry diagnostic logs.
-1. Select the log data you want to collect, and click **Save**.
-
-The following image shows creation of a diagnostic setting for a registry using the portal.
-
-![Enable diagnostic settings](media/container-registry-diagnostics-audit-logs/diagnostic-settings.png)
-
-> [!TIP]
-> Collect only the data that you need, balancing cost and your monitoring needs. For example, if you only need to audit authentication events, select only the **ContainerRegistryLoginEvents** log.
-
-## View data in Azure Monitor
-
-After you enable collection of diagnostic logs in Log Analytics, it can take a few minutes for data to appear in Azure Monitor. To view the data in the portal, select the registry, and select **Monitoring > Logs**. Select one of the tables that contains data for the registry.
-
-Run queries to view the data. Several sample queries are provided, or run your own. For example, the following query retrieves the most recent 24 hours of data from the **ContainerRegistryRepositoryEvents** table:
-
-```Kusto
-ContainerRegistryRepositoryEvents
-| where TimeGenerated > ago(1d)
-```
-
-The following image shows sample output:
-
-![Query log data](media/container-registry-diagnostics-audit-logs/azure-monitor-query.png)
-
-For a tutorial on using Log Analytics in the Azure portal, see [Get started with Azure Monitor Log Analytics](../azure-monitor/logs/log-analytics-tutorial.md), or try the Log Analytics [Demo environment](https://portal.loganalytics.io/demo).
-
-For more information on log queries, see [Overview of log queries in Azure Monitor](../azure-monitor/logs/log-query-overview.md).
-
-## Query examples
-
-### Error events from the last hour
-
-```Kusto
-union Event, Syslog // Event table stores Windows event records, Syslog stores Linux records
-| where TimeGenerated > ago(1h)
-| where EventLevelName == "Error" // EventLevelName is used in the Event (Windows) records
- or SeverityLevel== "err" // SeverityLevel is used in Syslog (Linux) records
-```
-
-### 100 most recent registry events
-
-```Kusto
-ContainerRegistryRepositoryEvents
-| union ContainerRegistryLoginEvents
-| top 100 by TimeGenerated
-| project TimeGenerated, LoginServer, OperationName, Identity, Repository, DurationMs, Region , ResultType
-```
-
-### Identity of user or object that deleted repository
-
-```Kusto
-ContainerRegistryRepositoryEvents
-| where OperationName contains "Delete"
-| project LoginServer, OperationName, Repository, Identity, CallerIpAddress
-```
-
-### Identity of user or object that deleted tag
-
-```Kusto
-ContainerRegistryRepositoryEvents
-| where OperationName contains "Untag"
-| project LoginServer, OperationName, Repository, Tag, Identity, CallerIpAddress
-```
-
-### Repository-level operation failures
-
-```kusto
-ContainerRegistryRepositoryEvents
-| where ResultDescription contains "40"
-| project TimeGenerated, OperationName, Repository, Tag, ResultDescription
-```
-
-### Registry authentication failures
-
-```kusto
-ContainerRegistryLoginEvents
-| where ResultDescription != "200"
-| project TimeGenerated, Identity, CallerIpAddress, ResultDescription
-```
--
-## Additional log destinations
-
-In addition to sending the logs to Log Analytics, or as an alternative, a common scenario is to select an Azure Storage account as a log destination. To archive logs in Azure Storage, create a storage account before enabling archiving through the diagnostic settings.
-
-You can also stream diagnostic log events to an [Azure Event Hub](../event-hubs/event-hubs-about.md). Event Hubs can ingest millions of events per second, which you can then transform and store using any real-time analytics provider.
-
-## Next steps
-
-* Learn more about using [Log Analytics](../azure-monitor/logs/log-analytics-tutorial.md) and creating [log queries](../azure-monitor/logs/get-started-queries.md).
-* See [Overview of Azure platform logs](../azure-monitor/essentials/platform-logs-overview.md) to learn about platform logs that are available at different layers of Azure.
container-registry Container Registry Troubleshoot Access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/container-registry-troubleshoot-access.md
Related links:
* [Configure rules to access an Azure container registry behind a firewall](container-registry-firewall-access-rules.md) * [HTTP/HTTPS proxy configuration](https://docs.docker.com/config/daemon/systemd/#httphttps-proxy) * [Geo-replication in Azure Container Registry](container-registry-geo-replication.md)
-* [Azure Container Registry logs for diagnostic evaluation and auditing](container-registry-diagnostics-audit-logs.md)
+* [Monitor Azure Container Registry](monitor-service.md)
### Configure public access to registry
Related links:
## Advanced troubleshooting
-If [collection of resource logs](container-registry-diagnostics-audit-logs.md) is enabled in the registry, review the ContainterRegistryLoginEvents log. This log stores authentication events and status, including the incoming identity and IP address. Query the log for [registry authentication failures](container-registry-diagnostics-audit-logs.md#registry-authentication-failures).
+If [collection of resource logs](monitor-service.md) is enabled in the registry, review the ContainerRegistryLoginEvents log. This log stores authentication events and status, including the incoming identity and IP address. Query the log for [registry authentication failures](monitor-service.md#registry-authentication-failures).
Related links:
-* [Logs for diagnostic evaluation and auditing](container-registry-diagnostics-audit-logs.md)
+* [Monitor Azure Container Registry](monitor-service.md)
* [Container registry FAQ](container-registry-faq.md) * [Azure Security Baseline for Azure Container Registry](security-baseline.md) * [Best practices for Azure Container Registry](container-registry-best-practices.md)
container-registry Container Registry Troubleshoot Login https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/container-registry-troubleshoot-login.md
Run the [az acr check-health](/cli/azure/acr#az_acr_check_health) command to get
See [Check the health of an Azure container registry](container-registry-check-health.md) for command examples. If errors are reported, review the [error reference](container-registry-health-error-reference.md) and the following sections for recommended solutions.
-If you're experiencing problems using the registry wih Azure Kubernetes Service, run the [az aks check-acr](/cli/azure/aks#az_aks_check_acr) command to validate that the registry is accessible from the AKS cluster.
+If you're experiencing problems using the registry with Azure Kubernetes Service, run the [az aks check-acr](/cli/azure/aks#az_aks_check_acr) command to validate that the registry is accessible from the AKS cluster.
> [!NOTE] > Some authentication or authorization errors can also occur if there are firewall or network configurations that prevent registry access. See [Troubleshoot network issues with registry](container-registry-troubleshoot-access.md).
Related links:
## Advanced troubleshooting
-If [collection of resource logs](container-registry-diagnostics-audit-logs.md) is enabled in the registry, review the ContainterRegistryLoginEvents log. This log stores authentication events and status, including the incoming identity and IP address. Query the log for [registry authentication failures](container-registry-diagnostics-audit-logs.md#registry-authentication-failures).
+If [collection of resource logs](monitor-service.md) is enabled in the registry, review the ContainerRegistryLoginEvents log. This log stores authentication events and status, including the incoming identity and IP address. Query the log for [registry authentication failures](monitor-service.md#registry-authentication-failures).
Related links:
-* [Logs for diagnostic evaluation and auditing](container-registry-diagnostics-audit-logs.md)
+* [Monitor Azure Container Registry](monitor-service.md)
* [Container registry FAQ](container-registry-faq.md) * [Best practices for Azure Container Registry](container-registry-best-practices.md)
If you don't resolve your problem here, see the following options.
* [Troubleshoot registry performance](container-registry-troubleshoot-performance.md) * [Community support](https://azure.microsoft.com/support/community/) options * [Microsoft Q&A](/answers/products/)
-* [Open a support ticket](https://azure.microsoft.com/support/create-ticket/) - based on information you provide, a quick diagnostic might be run for authentication failures in your registry
+* [Open a support ticket](https://azure.microsoft.com/support/create-ticket/) - based on information you provide, a quick diagnostic might be run for authentication failures in your registry
container-registry Container Registry Troubleshoot Performance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/container-registry-troubleshoot-performance.md
Related links:
If your permissions to registry resources allow, [check the health of the registry environment](container-registry-check-health.md). If errors are reported, review the [error reference](container-registry-health-error-reference.md) for potential solutions.
-If [collection of resource logs](container-registry-diagnostics-audit-logs.md) is enabled in the registry, review the ContainterRegistryRepositoryEvents log. This log stores information for operations such as push or pull events. Query the log for [repository-level operation failures](container-registry-diagnostics-audit-logs.md#repository-level-operation-failures).
+If [collection of resource logs](monitor-service.md) is enabled in the registry, review the ContainerRegistryRepositoryEvents log. This log stores information for operations such as push or pull events. Query the log for [repository-level operation failures](monitor-service.md#repository-level-operation-failures).
Related links:
-* [Logs for diagnostic evaluation and auditing](container-registry-diagnostics-audit-logs.md)
+* [Monitor Azure Container Registry](monitor-service.md)
* [Container registry FAQ](container-registry-faq.md) * [Best practices for Azure Container Registry](container-registry-best-practices.md)
container-registry Monitor Service Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/monitor-service-reference.md
+
+ Title: Monitoring Azure Container Registry data reference
+description: Important reference material needed when you monitor your Azure container registry. Provides details about metrics, resource logs, and log schemas.
+++++ Last updated : 03/19/2021++
+# Monitoring Azure Container Registry data reference
+
+See [Monitor Azure Container Registry](monitor-service.md) for details on collecting and analyzing monitoring data for Azure Container Registry.
+
+## Metrics
+
+### Container Registry metrics
+
+Resource Provider and Type: [Microsoft.ContainerRegistry/registries](/azure/azure-monitor/platform/metrics-supported#microsoftcontainerregistryregistries)
+
+| Metric | Exportable via Diagnostic Settings? | Unit | Aggregation Type | Description | Dimensions |
+|:---|:---|:---|:---|:---|:---|
+| AgentPoolCPUTime | Yes | Seconds | Total | CPU time used by [ACR tasks](container-registry-tasks-overview.md) running on dedicated [agent pools](tasks-agent-pools.md) | None |
+| RunDuration | Yes | Milliseconds | Total | Duration of [ACR tasks](container-registry-tasks-overview.md) runs | None |
+| StorageUsed | No | Bytes | Average | Storage used by the container registry<br/><br/>Sum of storage for unique and shared layers, manifest files, and replica copies in all repositories<sup>1</sup> | Geolocation |
+| SuccessfulPullCount | Yes | Count | Total | Successful pulls of container images and other artifacts from the registry | None |
+| SuccessfulPushCount | Yes | Count | Total | Successful pushes of container images and other artifacts to the registry | None |
+| TotalPullCount | Yes | Count | Total | Total pulls of container images and other artifacts from the registry | None |
+| TotalPushCount | Yes | Count | Total | Total pushes of container images and other artifacts to the registry | None |
+
+<sup>1</sup>Because of layer sharing, registry storage used may be less than the sum of storage for individual repositories. When you [delete](container-registry-delete.md) a repository or tag, you recover only the storage used by manifest files and the unique layers referenced.
+
+For more information, see a list of [all platform metrics supported in Azure Monitor](https://docs.microsoft.com/azure/azure-monitor/platform/metrics-supported).
+
+## Metric Dimensions
+
+For more information on what metric dimensions are, see [Multi-dimensional metrics](/azure/azure-monitor/platform/data-platform-metrics#multi-dimensional-metrics).
+
+Azure Container Registry has the following dimensions associated with its metrics.
+
+| Dimension Name | Description |
+| - | -- |
+| **Geolocation** | The Azure region for a registry or [geo-replica](container-registry-geo-replication.md). |
++
+## Resource logs
+
+This section lists the types of resource logs you can collect for Azure Container Registry.
+
+For reference, see a list of [all resource logs category types supported in Azure Monitor](/azure/azure-monitor/platform/resource-logs-schema).
+
+### Container Registries
+
+Resource Provider and Type: [Microsoft.ContainerRegistry/registries](/azure/azure-monitor/essentials/resource-logs-categories#microsoftcontainerregistryregistries)
+
+| Category | Display Name | Details |
+|:---|:---|:---|
+| ContainerRegistryLoginEvents | Login Events | Registry authentication events and status, including the incoming identity and IP address |
+| ContainerRegistryRepositoryEvents | Repository Events | Operations on images and other artifacts in registry repositories<br/><br/> The following operations are logged: push, pull, untag, delete (including repository delete), purge tag, and purge manifest<sup>1</sup> |
+
+<sup>1</sup>Purge events are logged only if a registry [retention policy](container-registry-retention-policy.md) is configured.
+
+## Azure Monitor Logs tables
+
+This section refers to all of the Azure Monitor Logs Kusto tables relevant to Azure Container Registry and available for query by Log Analytics.
+
+### Container Registry
+
+| Table | Description |
+|:---|:---|
+| [AzureActivity](/azure/azure-monitor/reference/tables/azureactivity) | Entries from the Azure Activity log that provide insight into any subscription-level or management group level events that have occurred in Azure. |
+| [AzureMetrics](/azure/azure-monitor/reference/tables/azuremetrics) | Metric data emitted by Azure services that measure their health and performance. |
+| [ContainerRegistryLoginEvents](/azure/azure-monitor/reference/tables/containerregistryloginevents) | Azure Container Registry Login Auditing Logs |
+| [ContainerRegistryRepositoryEvents](/azure/azure-monitor/reference/tables/containerregistryrepositoryevents) | Azure Container Registry Repository Auditing Logs |
+
+For a reference of all Azure Monitor Logs / Log Analytics tables, see the [Azure Monitor Log Table Reference](/azure/azure-monitor/reference/tables/tables-resourcetype).
+
+## Activity log
+
+The following table lists operations related to Azure Container Registry that may be created in the [Activity log](/azure/azure-monitor/platform/activity-log). This list is not exhaustive.
+
+| Operation | Description |
+|:---|:---|
+| Create or Update Container Registry | Create a container registry or update a registry property |
+| Delete Container Registry | Delete a container registry |
+| List Container Registry Login Credentials | Show credentials for registry's admin account |
+| Import Image | Import an image or other artifact to a registry |
+| Create Role Assignment | Assign an RBAC role to an identity so it can access a resource |
++
+## Schemas
+
+The following schemas are in use by Azure Container Registry's resource logs.
+
+| Schema | Description |
+|:---|:---|
+| [ContainerRegistryLoginEvents](/azure/azure-monitor/reference/tables/ContainerRegistryLoginEvents) | Schema for registry authentication events and status, including the incoming identity and IP address |
+| [ContainerRegistryRepositoryEvents](/azure/azure-monitor/reference/tables/ContainerRegistryRepositoryEvents) | Schema for operations on images and other artifacts in registry repositories |
+## Next steps
+
+- See [Monitor Azure Container Registry](monitor-service.md) for a description of monitoring an Azure container registry.
+- See [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/insights/monitor-azure-resources) for details on monitoring Azure resources.
container-registry Monitor Service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/monitor-service.md
+
+ Title: Monitor Azure Container Registry
+description: Start here to learn how to monitor your Azure container registry using features of Azure Monitor
+++++ Last updated : 03/19/2021++
+# Monitor Azure Container Registry
+
+When you have critical applications and business processes relying on Azure resources, you want to monitor those resources for their availability, performance, and operation. This article describes the monitoring data generated by Azure Container Registry and how you can use the features of Azure Monitor to analyze and alert on this data.
+
+## Monitor overview
+
+The **Overview** page in the Azure portal for each registry includes a brief view of recent resource usage and activity, such as push and pull operations. This high-level information is useful, but only a small amount of the monitoring data is shown there.
++
+## What is Azure Monitor?
+
+Azure Container Registry creates monitoring data using [Azure Monitor](/azure/azure-monitor/overview), which is a full stack monitoring service in Azure that provides a complete set of features to monitor your Azure resources in addition to resources in other clouds and on-premises.
+
+Start with the article [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/insights/monitor-azure-resource), which describes the following concepts:
+
+- What is Azure Monitor?
+- Costs associated with monitoring
+- Monitoring data collected in Azure
+- Configuring data collection
+- Standard tools in Azure for analyzing and alerting on monitoring data
+
+The following sections build on this article by describing the specific data gathered for Azure Container Registry and providing examples for configuring data collection and analyzing this data with Azure tools.
+
+## Monitoring data
+
+Azure Container Registry collects the same kinds of monitoring data as other Azure resources that are described in [Monitoring data from Azure resources](/azure/azure-monitor/insights/monitor-azure-resource#monitoring-data-from-azure-resources).
+
+See [Monitoring Azure Container Registry data reference](monitor-service-reference.md) for detailed information on the metrics and logs created by Azure Container Registry.
+
+## Collection and routing
+
+Platform metrics and the Activity log are collected and stored automatically, but can be routed to other locations by using a diagnostic setting.
+
+Resource Logs are not collected and stored until you create a diagnostic setting and route them to one or more locations.
+
+See [Create diagnostic setting to collect platform logs and metrics in Azure](/azure/azure-monitor/platform/diagnostic-settings) for the detailed process for creating a diagnostic setting using the Azure portal, CLI, or PowerShell. When you create a diagnostic setting, you specify which categories of logs to collect. The categories for Azure Container Registry are listed in [Azure Container Registry monitoring data reference](monitor-service-reference.md#resource-logs).
+
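+As an alternative to the portal, the following Azure CLI sketch routes both registry log categories and all platform metrics to a Log Analytics workspace. The registry, resource group, and workspace names are assumed values.
+
+```azurecli
+az monitor diagnostic-settings create \
+  --name myregistry-diagnostics \
+  --resource $(az acr show --name myregistry --resource-group myresourcegroup --query id --output tsv) \
+  --workspace $(az monitor log-analytics workspace show --resource-group myresourcegroup --workspace-name myworkspace --query id --output tsv) \
+  --logs '[{"category":"ContainerRegistryLoginEvents","enabled":true},{"category":"ContainerRegistryRepositoryEvents","enabled":true}]' \
+  --metrics '[{"category":"AllMetrics","enabled":true}]'
+```
+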
+> [!TIP]
+> You can also create registry diagnostic settings by navigating to your registry in the portal. In the menu, select **Diagnostic settings** under **Monitoring**.
+
+The following image shows the options when you enable diagnostic setting for a registry.
++
+The metrics and logs you can collect are discussed in the following sections.
+
+## Analyzing metrics (preview)
+
+You can analyze metrics for an Azure container registry with metrics from other Azure services using metrics explorer by opening **Metrics** from the **Azure Monitor** menu. See [Getting started with Azure Metrics Explorer](/azure/azure-monitor/platform/metrics-getting-started) for details on using this tool.
+
+> [!TIP]
+> You can also go to the metrics explorer by navigating to your registry in the portal. In the menu, select **Metrics (preview)** under **Monitoring**.
+
+For a list of the platform metrics collected for Azure Container Registry, see [Monitoring Azure Container Registry data reference metrics](monitor-service-reference.md#metrics).
+
+For reference, you can see a list of [all resource metrics supported in Azure Monitor](/azure/azure-monitor/platform/metrics-supported).
+
+### Azure CLI
+
+The following Azure CLI commands can be used to get information about the Azure Container Registry metrics.
+
+* [az acr show-usage](/cli/azure/acr/#az_acr_show_usage) - Show the current storage used by an Azure container registry
+* [az monitor metrics list-definitions](/cli/azure/monitor/metrics#az_monitor_metrics_list_definitions) - List metric definitions and dimensions
+* [az monitor metrics list](/cli/azure/monitor/metrics#az_monitor_metrics_list) - Retrieve metric values
+
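+For example, the following sketch retrieves the total successful pull count for a registry over the past day, aggregated hourly. The registry and resource group names are assumed values.
+
+```azurecli
+az monitor metrics list \
+  --resource $(az acr show --name myregistry --resource-group myresourcegroup --query id --output tsv) \
+  --metric SuccessfulPullCount \
+  --aggregation Total \
+  --interval PT1H \
+  --offset 1d \
+  --output table
+```
+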
+### REST API
+
+You can use the Azure Monitor REST API to get information programmatically about the Azure Container Registry metrics.
+
+* [List metric definitions and dimensions](/rest/api/monitor/metricdefinitions/list)
+* [Retrieve metric values](/rest/api/monitor/metrics/list)
+
+## Analyzing logs
+
+Data in Azure Monitor Logs is stored in tables where each table has its own set of unique properties.
+
+All resource logs in Azure Monitor have the same fields followed by service-specific fields. The common schema is outlined in [Azure Monitor resource log schema](/azure/azure-monitor/essentials/diagnostic-logs-schema#top-level-resource-logs-schema). The schema for Azure Container Registry resource logs is found in the [Azure Container Registry Data Reference](monitor-service-reference.md#schemas).
+
+The [Activity log](/azure/azure-monitor/platform/activity-log) is a platform log in Azure that provides insight into subscription-level events. You can view it independently or route it to Azure Monitor Logs, where you can do much more complex queries using Log Analytics.
+
+For a list of the types of resource logs collected for Azure Container Registry, see [Monitoring Azure Container Registry data reference](monitor-service-reference.md#resource-logs).
+
+For a list of the tables used by Azure Monitor Logs and queryable by Log Analytics, see [Monitoring Azure Container Registry data reference](monitor-service-reference.md#azure-monitor-logs-tables).
+
+### Sample Kusto queries
+
+> [!IMPORTANT]
+> When you select **Logs** from the **Azure Container Registry** menu, Log Analytics is opened with the query scope set to the current registry. This means that log queries will only include data from that resource. If you want to run a query that includes data from other registries or data from other Azure services, select **Logs** from the **Azure Monitor** menu. See [Log query scope and time range in Azure Monitor Log Analytics](/azure/azure-monitor/log-query/scope/) for details.
+
+For example, the following query retrieves the most recent 24 hours of data from the **ContainerRegistryRepositoryEvents** table:
+
+```Kusto
+ContainerRegistryRepositoryEvents
+| where TimeGenerated > ago(1d)
+```
+
+The following image shows sample output:
++
+The following queries can help you monitor your registry resource.
+
+### Error events from the last hour
+
+```Kusto
+union Event, Syslog // Event table stores Windows event records, Syslog stores Linux records
+| where TimeGenerated > ago(1h)
+| where EventLevelName == "Error" // EventLevelName is used in the Event (Windows) records
+    or SeverityLevel == "err" // SeverityLevel is used in Syslog (Linux) records
+```
+
+### 100 most recent registry events
+
+```Kusto
+ContainerRegistryRepositoryEvents
+| union ContainerRegistryLoginEvents
+| top 100 by TimeGenerated
+| project TimeGenerated, LoginServer, OperationName, Identity, Repository, DurationMs, Region, ResultType
+```
+
+### Identity of user or object that deleted repository
+
+```Kusto
+ContainerRegistryRepositoryEvents
+| where OperationName contains "Delete"
+| project LoginServer, OperationName, Repository, Identity, CallerIpAddress
+```
+
+### Identity of user or object that deleted tag
+
+```Kusto
+ContainerRegistryRepositoryEvents
+| where OperationName contains "Untag"
+| project LoginServer, OperationName, Repository, Tag, Identity, CallerIpAddress
+```
+
+### Repository-level operation failures
+
+```kusto
+ContainerRegistryRepositoryEvents
+| where ResultDescription contains "40"
+| project TimeGenerated, OperationName, Repository, Tag, ResultDescription
+```
+
+### Registry authentication failures
+
+```kusto
+ContainerRegistryLoginEvents
+| where ResultDescription != "200"
+| project TimeGenerated, Identity, CallerIpAddress, ResultDescription
+```
++
+## Alerts
+
+Azure Monitor alerts proactively notify you when important conditions are found in your monitoring data. They allow you to identify and address issues in your system before your customers notice them. You can set alerts on [metrics](/azure/azure-monitor/platform/alerts-metric-overview), [logs](/azure/azure-monitor/platform/alerts-unified-log), and the [activity log](/azure/azure-monitor/platform/activity-log-alerts). Different types of alerts have benefits and drawbacks.
++
+
+The following table lists common and recommended alert rules for Azure Container Registry.
+
+| Alert type | Condition | Description |
+|:|:|:|
+| metric | Signal: Storage used<br/>Operator: Greater than<br/>Aggregation type: Average<br/>Threshold value: 5 GB| Alerts if the registry storage used exceeds a specified value.|
+
+### Example: Send email alert when registry storage used exceeds a value
+
+1. In the Azure portal, navigate to your registry.
+1. Select **Metrics (preview)** under **Monitoring**.
+1. In the metrics explorer, in **Metric**, select **Storage used**.
+1. Select **New alert rule**.
+1. In **Scope**, confirm the registry resource for which you want to create an alert rule.
+1. In **Condition**, select **Add condition**.
+ 1. In **Signal name**, select **Storage used**.
+ 1. In **Chart period**, select **Over the last 24 hours**.
+ 1. In **Alert logic**, in **Threshold value**, select a value such as *5*. In **Unit**, select a value such as *GB*.
+ 1. Accept default values for the remaining settings, and select **Done**.
+1. In **Actions**, select **Add action groups** > **+ Create action group**.
+ 1. Enter details of the action group.
+ 1. On the **Notifications** tab, select **Email/SMS message/Push/Voice** and enter a recipient such as *admin@contoso.com*. Select **Review + create**.
+1. Enter a name and description of the alert rule, and select the severity level.
+1. Select **Create alert rule**.
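+
+The same alert can be scripted with the Azure CLI. The following is a minimal sketch rather than the article's procedure; the registry, resource group, and action group names are assumed values (5 GB = 5368709120 bytes).
+
+```azurecli
+az monitor metrics alert create \
+  --name acr-storage-used-alert \
+  --resource-group myresourcegroup \
+  --scopes $(az acr show --name myregistry --resource-group myresourcegroup --query id --output tsv) \
+  --condition "avg StorageUsed > 5368709120" \
+  --window-size 1h \
+  --evaluation-frequency 15m \
+  --action $(az monitor action-group show --name myactiongroup --resource-group myresourcegroup --query id --output tsv) \
+  --description "Registry storage used exceeded 5 GB"
+```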
+
+## Next steps
+
+- See [Monitoring Azure Container Registry data reference](monitor-service-reference.md) for a reference of the metrics, logs, and other important values created by Azure Container Registry.
+- See [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/insights/monitor-azure-resource) for details on monitoring Azure resources.
cost-management-billing Ea Portal Administration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/ea-portal-administration.md
If you're not an enterprise administrator, contact an enterprise administrator t
#### If your enterprise administrator can't help you
-If your enterprise administrator can't assist you, create an [Azure Enterprise portal support request](https://support.microsoft.com/supportrequestform/cf791efa-485b-95a3-6fad-3daf9cd4027c). Provide the following information:
+If your enterprise administrator can't assist you, create an [Azure support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest). Provide the following information:
- Enrollment number - Email address to add, and authentication type (work, school, or Microsoft account)
cost-management-billing Ea Portal Vm Reservations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/ea-portal-vm-reservations.md
Title: Azure EA VM reserved instances
description: This article summaries how Azure reservations for VM reserved instances can help you save you money with your enterprise enrollment. Previously updated : 10/14/2020 Last updated : 05/17/2021
For more information about reservation costs and usage, see [Get Enterprise Agre
For information about pricing, see [Linux Virtual Machines Pricing](https://azure.microsoft.com/pricing/details/virtual-machines/linux/) or [Windows Virtual Machines Pricing](https://azure.microsoft.com/pricing/details/virtual-machines/windows/).
+### Reservation prices
+
+Any reservation discounts that your organization might have negotiated aren't shown in the EA portal price sheet. Previously, the discounted rates were available in the EA portal, but that functionality was removed. If you've negotiated reduced reservation prices, currently the only way to get a list of them is to create an [Azure support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).
+
+The prices for reservations aren't necessarily the same between retail rates and EA rates. They could be the same, but if you've negotiated a discount, the rates will differ.
+
+Prices shown in the [Azure Pricing calculator](https://azure.microsoft.com/pricing/calculator/) and [Retail Prices API](/rest/api/cost-management/retail-prices/azure-retail-prices) are the same. Querying the API is the best way to view all prices at once.
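+
+For example, one way to pull the retail reservation rates programmatically is to filter the Retail Prices API on `priceType`. The following curl sketch uses assumed filter values (service and region), and it returns retail prices only, not EA-negotiated prices.
+
+```bash
+curl --get "https://prices.azure.com/api/retail/prices" \
+  --data-urlencode "\$filter=serviceName eq 'Virtual Machines' and armRegionName eq 'westus2' and priceType eq 'Reservation'"
+```
+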
+ ## Reserved instances API support Use Azure APIs to programmatically get information for your organization about Azure service or software reservations. For example, use the APIs to:
cost-management-billing Download Azure Invoice https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/understand/download-azure-invoice.md
tags: billing
Previously updated : 08/20/2020 Last updated : 05/17/2021
To download an invoice:
## Get MOSP subscription invoice in email
-You must have an account admin role on a subscription or a support plan to opt in to receive its invoice by email. Email invoices are available only for subscriptions and support plans, not for reservations or Azure Marketplace purchases. Once you've opted-in you can add additional recipients, who receive the invoice by email as well.
+You must have an account admin role on a subscription or a support plan to opt in to receive its invoice by email. Once you've opted-in you can add additional recipients, who receive the invoice by email as well.
1. Sign in to the [Azure portal](https://portal.azure.com). 2. Search for **Cost Management + Billing**.
cost-management-billing Review Individual Bill https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/understand/review-individual-bill.md
tags: billing
Previously updated : 10/26/2020 Last updated : 05/17/2021
Costs shown in cost analysis should match precisely to the *usage charges* cost
![Invoice usage charges](./media/review-individual-bill/invoice-usage-charges.png)
-## External Marketplace services are billed separately
+## External Marketplace services
<a name="external"></a> External services or marketplace charges are for resources that have been created by third-party software vendors. Those resources are available for use from the Azure Marketplace. For example, a Barracuda Firewall is an Azure Marketplace resource offered by a third-party. All charges for the firewall and its corresponding meters appear as external service charges.
-External service charges are billed separately. The charges don't show up on your Azure invoice.
+External service charges appear on a separate invoice.
### Resources are billed by usage meters
data-factory Concepts Integration Runtime https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/concepts-integration-runtime.md
The following diagram shows location settings of Data Factory and its integratio
![Integration runtime location](media/concepts-integration-runtime/integration-runtime-location.png) ## Determining which IR to use
-If one data factory activity associates with more than one type of integration runtime, it will resolve to one of them。 The self-hosted integration runtime takes precedence over Azure integration runtime in Azure Data Factory managed virtual network. And the latter takes precedence over public Azure integration runtime.
+If one data factory activity associates with more than one type of integration runtime, it will resolve to one of them. The self-hosted integration runtime takes precedence over Azure integration runtime in Azure Data Factory managed virtual network. And the latter takes precedence over public Azure integration runtime.
For example, suppose a copy activity copies data from a source to a sink. If the public Azure integration runtime is associated with the linked service for the source, and an Azure integration runtime in an Azure Data Factory managed virtual network is associated with the linked service for the sink, then both the source and sink linked services use the Azure integration runtime in the managed virtual network. But if a self-hosted integration runtime is associated with the linked service for the source, then both the source and sink linked services use the self-hosted integration runtime. ### Copy activity
data-factory Copy Activity Log https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/copy-activity-log.md
You can log your copied file names in copy activity, which can help you to furth
When you enable fault tolerance setting in copy activity to skip faulty data, the skipped files and skipped rows can also be logged. You can get more details from [fault tolerance in copy activity](copy-activity-fault-tolerance.md).
+Because enabling the session log lets you capture the names of all files copied by an ADF copy activity, it can help you in the following scenarios:
+- After you use ADF copy activities to copy files from one store to another, you find files in the destination store that shouldn't be there. You can scan the copy activity session logs to see which copy activity copied those files and when. With that information, you can quickly find the root cause and fix your ADF configuration.
+- After you use ADF copy activities to copy files from one store to another, you suspect the files in the destination differ from the files in the source store. You can scan the copy activity session logs to get the timestamps of the copy jobs and the file metadata captured when the ADF copy activities read the files from the source store. With that information, you can tell whether the files were updated by other applications in the source store after ADF copied them.
++ ## Configuration The following example provides a JSON definition to enable session log in Copy Activity:
data-factory Tutorial Bulk Copy Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/tutorial-bulk-copy-portal.md
The **IterateAndCopySQLTables** pipeline takes a list of tables as a parameter.
1. Click the **Pre-copy Script** input box -> select the **Add dynamic content** below -> enter the following expression as script -> select **Finish**. ```sql
- TRUNCATE TABLE [@{item().TABLE_SCHEMA}].[@{item().TABLE_NAME}]
+    IF EXISTS (SELECT * FROM [@{item().TABLE_SCHEMA}].[@{item().TABLE_NAME}]) TRUNCATE TABLE [@{item().TABLE_SCHEMA}].[@{item().TABLE_NAME}]
``` ![Copy sink settings](./media/tutorial-bulk-copy-portal/copy-sink-settings.png)
data-lake-store Data Lake Store Hdinsight Hadoop Use Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-lake-store/data-lake-store-hdinsight-hadoop-use-resource-manager-template.md
Before you begin this tutorial, you must have the following:
**If you are not an Azure AD administrator**, you will not be able to perform the steps required to create a service principal. In such a case, your Azure AD administrator must first create a service principal before you can create an HDInsight cluster with Data Lake Storage Gen1. Also, the service principal must be created using a certificate, as described at [Create a service principal with certificate](../active-directory/develop/howto-authenticate-service-principal-powershell.md#create-service-principal-with-certificate-from-certificate-authority). ## Create an HDInsight cluster with Data Lake Storage Gen1
-The Resource Manager template, and the prerequisites for using the template, are available on GitHub at [Deploy a HDInsight Linux cluster with new Data Lake Storage Gen1](https://github.com/Azure/azure-quickstart-templates/tree/master/201-hdinsight-datalake-store-azure-storage). Follow the instructions provided at this link to create an HDInsight cluster with Data Lake Storage Gen1 as the additional storage.
+The Resource Manager template, and the prerequisites for using the template, are available on GitHub at [Deploy a HDInsight Linux cluster with new Data Lake Storage Gen1](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.hdinsight/hdinsight-datalake-store-azure-storage). Follow the instructions provided at this link to create an HDInsight cluster with Data Lake Storage Gen1 as the additional storage.
The instructions at the link mentioned above require PowerShell. Before you start with those instructions, make sure you log in to your Azure account. From your desktop, open a new Azure PowerShell window, and enter the following snippets. When prompted to log in, make sure you log in as one of the subscription administrators/owner:
databox Data Box Deploy Copy Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox/data-box-deploy-copy-data.md
Previously updated : 11/11/2020 Last updated : 05/17/2021 ms.localizationpriority: high
If using a Windows Server host computer, follow these steps to connect to the Da
3. To access the shares associated with your storage account (*utsac1* in the following example) from your host computer, open a command window. At the command prompt, type:
- `net use \\<IP address of the device>\<share name> /u:<user name for the share>`
+ `net use \\<IP address of the device>\<share name> /u:<IP address of the device>\<user name for the share>`
Depending upon your data format, the share paths are as follows: - Azure Block blob - `\\10.126.76.138\utSAC1_202006051000_BlockBlob`
If using a Windows Server host computer, follow these steps to connect to the Da
4. Enter the password for the share when prompted. If the password has special characters, add double quotation marks before and after it. The following sample shows connecting to a share via the preceding command. ```
- C:\Users\Databoxuser>net use \\10.126.76.138\utSAC1_202006051000_BlockBlob /u:testuser1
+ C:\Users\Databoxuser>net use \\10.126.76.138\utSAC1_202006051000_BlockBlob /u:10.126.76.138\testuser1
Enter the password for 'testuser1' to connect to '10.126.76.138': "ab1c2def$3g45%6h7i&j8kl9012345" The command completed successfully. ```
databox Data Box Deploy Export Copy Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox/data-box-deploy-export-copy-data.md
Previously updated : 12/18/2020 Last updated : 05/17/2021 ms.localizationpriority: high
If using a Windows Server host computer, follow these steps to connect to the Da
3. To access the shares associated with your storage account (*exportbvtdataset2* in the following example) from your host computer, open a command window. At the command prompt, type:
- `net use \\<IP address of the device>\<share name> /u:<user name for the share>`
+ `net use \\<IP address of the device>\<share name> /u:<IP address of the device>\<user name for the share>`
Depending upon your data format, the share paths are as follows: - Azure Block blob - `\\169.254.143.85\exportbvtdataset2_BlockBlob`
If using a Windows Server host computer, follow these steps to connect to the Da
4. Enter the password for the share when prompted. The following sample shows connecting to a share via the preceding command. ```
- C:\Users\Databoxuser>net use \\169.254.143.85\exportbvtdataset2_BlockBlob /u:exportbvtdataset2
+ C:\Users\Databoxuser>net use \\169.254.143.85\exportbvtdataset2_BlockBlob /u:169.254.143.85\exportbvtdataset2
Enter the password for 'exportbvtdataset2' to connect to '169.254.143.85': The command completed successfully. ```
digital-twins Concepts High Availability Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/concepts-high-availability-disaster-recovery.md
Azure Digital Twins provides intra-region HA by implementing redundancies within
There could be some rare situations when a data center experiences extended outages due to power failures or other events in the region. Such events are rare, and during such failures, the intra region HA capability described above may not help. Azure Digital Twins addresses this with Microsoft-initiated failover.
-**Microsoft-initiated failover** is exercised by Microsoft in rare situations to failover all the Azure Digital Twins instances from an affected region to the corresponding geo-paired region. This process is a default option (with no way for users to opt out), and requires no intervention from the user. Microsoft reserves the right to make a determination of when this option will be exercised. This mechanism doesn't involve a user consent before the user's instance is failed over.
+**Microsoft-initiated failover** is exercised in rare situations to failover all the Azure Digital Twins instances from an affected region to the corresponding [geo-paired region](../best-practices-availability-paired-regions.md). This process is a default option (with no way for users to opt out), and requires no intervention from the user. Microsoft reserves the right to make a determination of when this option will be exercised. This mechanism doesn't involve user consent before the user's instance is failed over.
>[!NOTE]
-> Some Azure services also provide an additional option called **customer-initiated failover**, which enables customers to initiate a failover just for their instance, such as to run a DR drill. This mechanism is currently **not supported** by Azure Digital Twins.
+> Some Azure services provide an additional option called **customer-initiated failover**, which enables customers to initiate a failover just for their instance, such as to run a DR drill. This mechanism is currently **not supported** by Azure Digital Twins.
+
+If it's important for you to keep all data within certain geographical areas, please check the location of the [geo-paired region](../best-practices-availability-paired-regions.md#azure-regional-pairs) for the region where you're creating your instance, to ensure that it meets your data residency requirements.
+
+>[!NOTE]
+> Some Azure services provide an option for users to configure a different region for failover, as a way to meet data residency requirements. This capability is currently **not supported** by Azure Digital Twins.
## Monitor service health
digital-twins How To Enable Managed Identities Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-enable-managed-identities-portal.md
Last updated 1/21/2021 -+ # Optional fields. Don't forget to remove # if you need a field. #
digital-twins How To Manage Twin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-manage-twin.md
You can create patches using the Azure .NET SDK's [JsonPatchDocument](/dotnet/ap
:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/twin_operations_other.cs" id="UpdateTwin":::
-### Update properties in digital twin components
+### Update sub-properties in digital twin components
Recall that a model may contain components, allowing it to be made up of other models.
To patch properties in a digital twin's components, you can use path syntax in J
:::code language="json" source="~/digital-twins-docs-samples/models/patch-component.json":::
+### Update sub-properties in object-type properties
+
+Models may contain properties that are of an object type. Those objects may have their own properties, and you may want to update one of those sub-properties belonging to the object-type property. This process is similar to the process for [updating sub-properties in components](#update-sub-properties-in-digital-twin-components), but may require some extra steps.
+
+Consider a model with an object-type property, `ObjectProperty`. `ObjectProperty` has a string property named `StringSubProperty`.
+
+When a twin is created using this model, it's not necessary to instantiate the `ObjectProperty` at that time. If the object property is not instantiated during twin creation, there is no default path created to access `ObjectProperty` and its `StringSubProperty` for a patch operation. You will need to add the path to `ObjectProperty` yourself before you can update its properties.
+
+This can be done with a JSON Patch `add` operation, like this:
+
+```json
+[
+ {
+ "op": "add",
+ "path": "/ObjectProperty",
+ "value": {"StringSubProperty":"<string-value>"}
+ }
+]
+```
+
+>[!NOTE]
+> If `ObjectProperty` has more than one property, you should include all of them in the `value` field of this operation, even if you're only updating one:
+> ```json
+>... "value": {"StringSubProperty":"<string-value>", "Property2":"<property2-value>", ...}
+>```
++
+After this has been done once, a path to `StringSubProperty` exists, and it can be updated directly from now on with a typical `replace` operation:
+
+```json
+[
+ {
+ "op": "replace",
+ "path": "/ObjectProperty/StringSubProperty",
+ "value": "<string-value>"
+ }
+]
+```
+
+Although the first step isn't necessary in cases where `ObjectProperty` was instantiated when the twin was created, it's recommended to use it every time you update a sub-property for the first time, since you may not always know for sure whether the object property was initially instantiated or not.
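+
+If you're working from the command line instead of the SDK, the same two patches can be applied with `az dt twin update` (part of the azure-iot CLI extension). The following is a sketch with assumed instance, twin, and property values:
+
+```azurecli
+# Step 1 (only needed if ObjectProperty was never instantiated): add the object property with its sub-property.
+az dt twin update --dt-name myDigitalTwinsInstance --twin-id myTwin \
+  --json-patch '[{"op":"add", "path":"/ObjectProperty", "value":{"StringSubProperty":"value1"}}]'
+
+# Step 2: once the path exists, replace the sub-property directly.
+az dt twin update --dt-name myDigitalTwinsInstance --twin-id myTwin \
+  --json-patch '[{"op":"replace", "path":"/ObjectProperty/StringSubProperty", "value":"value2"}]'
+```
+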
+ ### Update a digital twin's model The `UpdateDigitalTwin()` function can also be used to migrate a digital twin to a different model.
digital-twins How To Query Graph https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-query-graph.md
You can select the several "top" items in a query using the `Select TOP` clause.
## Filter results: specify return set with projections
-By using projections in the `SELECT` statement, you can choose which columns a query will return.
-
->[!NOTE]
->At this time, complex properties are not supported. To make sure that projection properties are valid, combine the projections with an `IS_PRIMITIVE` check.
+By using projections in the `SELECT` statement, you can choose which columns a query will return. Projection is now supported for both primitive and complex properties. For more information about projections with Azure Digital Twins, see the [SELECT clause reference documentation](reference-query-clause-select.md#select-columns-with-projections).
Here is an example of a query that uses projection to return twins and relationships. The following query projects the Consumer, Factory and Edge from a scenario where a Factory with an ID of *ABC* is related to the Consumer through a relationship of *Factory.customer*, and that relationship is presented as the *Edge*.
digital-twins How To Set Up Instance Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-set-up-instance-portal.md
If you do want to configure more details for your instance, the next section des
Here are the additional options you can configure during setup, using the other tabs in the **Create Resource** process. * **Networking**: In this tab, you can enable private endpoints with [Azure Private Link](../private-link/private-link-overview.md) to eliminate public network exposure to your instance. For instructions, see [How-to: Enable private access with Private Link (preview)](./how-to-enable-private-link-portal.md#add-a-private-endpoint-during-instance-creation).
-* **Advanced**: In this tab, you can enable a [system-managed identity](../active-directory/managed-identities-azure-resources/overview.md) for your instance that can be used when forwarding events to [endpoints](concepts-route-events.md). For instructions, see [How-to: Enable managed identities for routing events (preview)](./how-to-enable-managed-identities-portal.md#add-a-system-managed-identity-during-instance-creation).
+* **Advanced**: In this tab, you can enable a system-managed identity for your instance that can be used when forwarding events to [endpoints](concepts-route-events.md). For more information about using system-managed identities with Azure Digital Twins, see [Concepts: Security for Azure Digital Twins solutions](concepts-security.md#managed-identity-for-accessing-other-resources-preview).
* **Tags**: In this tab, you can add tags to your instance to help you organize it among your Azure resources. For more about Azure resource tags, see [Tag resources, resource groups, and subscriptions for logical organization](../azure-resource-manager/management/tag-resources.md). ### Verify success and collect important values
education-hub Azure Students Starter Program https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/education-hub/azure-dev-tools-teaching/azure-students-starter-program.md
There is no cost to you. This benefit provides you access to a free tier of the
- Azure Notification Hubs - Azure Database for MySQL - Application Insights-- Azure DevOps Server (formerly Visual Studio Team Services)
+- Azure DevOps Server (formerly Visual Studio Team Foundation Server)
Azure for Students Starter is available to verified students at no cost and without commitment or time limit. See the [Azure for Students Starter Offer](https://azure.microsoft.com/offers/ms-azr-0144p/)
Account portal](https://account.azure.com/).
- [Get help with login errors](troubleshoot-login.md) - [Download software (Azure for Students Starter)](download-software.md) - [Azure for Students program](azure-students-program.md)-- [Microsoft Learn: a free online learning platform](/learn/)
+- [Microsoft Learn: a free online learning platform](/learn/)
expressroute Expressroute Locations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-locations.md
-
+ Title: 'Connectivity providers and locations: Azure ExpressRoute | Microsoft Docs' description: This article provides a detailed overview of locations where services are offered and how to connect to Azure regions. Sorted by connectivity provider.
If you are remote and don't have fiber connectivity or you want to explore other
| **[Airgate Technologies, Inc.](https://www.airgate.ca/)** | Equinix, Cologix | Toronto, Montreal | | **[Alaska Communications](https://www.alaskacommunications.com/Business)** |Equinix |Seattle | | **[Altice Business](https://golightpath.com/transport)** |Equinix |New York, Washington DC |
+| **[Aptum Technologies](https://aptum.com/services/cloud/managed-azure/)**| Equinix | Montreal, Toronto |
| **[Arteria Networks Corporation](https://www.arteria-net.com/business/service/cloud/sca/)** |Equinix |Tokyo | | **[Axtel](https://alestra.mx/landing/expressrouteazure/)** |Equinix |Dallas| | **[Beanfield Metroconnect](https://www.beanfield.com/business/cloud-exchange)** |Megaport |Toronto|
If you are remote and don't have fiber connectivity or you want to explore other
| **[Cinia](https://www.cinia.fi/en/services/connectivity-services/direct-public-cloud-connection.html)** | Equinix, Megaport | Frankfurt, Hamburg | | **[CloudXpress](https://www2.telenet.be/fr/business/produits-services/internet/cloudxpress/)** | Equinix | Amsterdam | | **[CMC Telecom](https://cmctelecom.vn/san-pham/value-added-service-and-it/cmc-telecom-cloud-express-en/)** | Equinix | Singapore |
-| **[Aptum Technologies](https://aptum.com/services/cloud/managed-azure/)**| Equinix | Montreal, Toronto |
| **[CoreAzure](https://www.coreazure.com/)**| Equinix | London | | **[Cox Business](https://www.cox.com/business/networking/cloud-connectivity.html)**| Equinix | Dallas, Silicon Valley, Washington DC | | **[Crown Castle](https://fiber.crowncastle.com/solutions/added/cloud-connect)**| Equinix | Atlanta, Chicago, Dallas, Los Angeles, New York, Washington DC |
If you are remote and don't have fiber connectivity or you want to explore other
| **[Proximus](https://www.proximus.be/en/id_b_cl_proximus_external_cloud_connect/companies-and-public-sector/discover/magazines/expert-blog/proximus-external-cloud-connect.html)**|Equinix | Amsterdam, Dublin, London, Paris | | **[QSC AG](https://www2.qbeyond.de/en/)** |Interxion | Frankfurt | | **[RETN](https://retn.net/services/cloud-connect/)** | Equinix | Amsterdam |
-| **[Tata Teleservices](https://www.tatateleservices.com/business-services/data-services/secure-cloud-connect)** | Tata Communications | Chennai, Mumbai |
| **Rogers** | Cologix, Equinix | Montreal, Toronto | | **[Spectrum Enterprise](https://enterprise.spectrum.com/services/cloud/cloud-connect.html)** | Equinix | Chicago, Dallas, Los Angeles, New York, Silicon Valley | | **[Tamares Telecom](http://www.tamarestelecom.com/our-services/#Connectivity)** | Equinix | London |
+| **[Tata Teleservices](https://www.tatateleservices.com/business-services/data-services/secure-cloud-connect)** | Tata Communications | Chennai, Mumbai |
| **[TDC Erhverv](https://tdc.dk/Produkter/cloudaccessplus)** | Equinix | Amsterdam | | **[Telecom Italia Sparkle](https://www.tisparkle.com/our-platform/corporate-platform/sparkle-cloud-connect#catalogue)**| Equinix | Amsterdam | | **[Telekom Deutschland GmbH](https://cloud.telekom.de/de/infrastruktur/managed-it-services/managed-hybrid-infrastructure-mit-microsoft-azure)** | Interxion | Amsterdam, Frankfurt |
If you are remote and don't have fiber connectivity or you want to explore other
| **[QTS Data Centers](https://www.qtsdatacenters.com/hybrid-solutions/connectivity/azure-cloud )** | Megaport, PacketFabric | | **[Stream Data Centers]( https://www.streamdatacenters.com/products-services/network-cloud/ )** | Megaport | | **[RagingWire Data Centers](https://www.ragingwire.com/wholesale/wholesale-data-centers-worldwide-nexcenters)** | IX Reach, Megaport, PacketFabric |
-| **[vXchnge](https://www.vxchnge.com/colocation-services/interconnection)** | IX Reach, Megaport |
| **[T5 Datacenters](https://t5datacenters.com/)** | IX Reach |
+| **[vXchnge](https://www.vxchnge.com/colocation-services/interconnection)** | IX Reach, Megaport |
## Connectivity through National Research and Education Networks (NREN)
frontdoor Front Door Geo Filtering https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/front-door-geo-filtering.md
# Geo-filtering on a domain for Azure Front Door
-By default, Azure Front Door will respond to all user requests regardless of the location where the request is coming from. In some scenarios, you may want to restrict the access to your web application by countries/regions. The Web application firewall (WAF) service in Front Door enables you to define a policy using custom access rules for a specific path on your endpoint to either allow or block access from specified countries/regions.
+By default, Azure Front Door responds to all user requests regardless of the location the request comes from. In some scenarios, you may want to restrict access to your web application by country or region. The Web Application Firewall (WAF) service in Front Door enables you to define a policy using custom access rules for a specific path on your endpoint to either allow or block access from specified countries/regions.
-A WAF policy contains a set of custom rules. The rule consists of match conditions, an action, and a priority. In a match condition, you define a match variable, operator, and match value. For a geo filtering rule, a match variable is REMOTE_ADDR, the operator is GeoMatch, and the value is a two letter country/region code of interest. "ZZ" country code or "Unknown" country captures IP addresses that are not yet mapped to a country in our dataset. You may add ZZ to your match condition to avoid false positives. You can combine a GeoMatch condition and a REQUEST_URI string match condition to create a path-based geo-filtering rule.
+A WAF policy contains a set of custom rules. A rule consists of match conditions, an action, and a priority. In a match condition, you define a match variable, operator, and match value. For a geo-filtering rule, the match variable is REMOTE_ADDR, the operator is GeoMatch, and the value is a two-letter country/region code of interest. The "ZZ" country code (or "Unknown" country) captures IP addresses that aren't yet mapped to a country in our dataset. You may add ZZ to your match condition to avoid false positives. You can combine a GeoMatch condition and a REQUEST_URI string match condition to create a path-based geo-filtering rule.
-You can configure a geo-filtering policy for your Front Door by using [Azure PowerShell](front-door-tutorial-geo-filtering.md) or by using a [quickstart template](https://github.com/Azure/azure-quickstart-templates/tree/master/101-front-door-geo-filtering).
+You can configure a geo-filtering policy for your Front Door by using [Azure PowerShell](front-door-tutorial-geo-filtering.md) or by using a [quickstart template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.network/front-door-geo-filtering).
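As a rough sketch of the rule structure described above (the policy name, resource group, and country codes below are placeholders, not values from this article), a geo-filtering rule could be created with Azure PowerShell like this:

```powershell
# Sketch only: requires the Az.FrontDoor module; all names and country codes are placeholders.
# Match requests whose source IP is NOT geo-mapped to the US, also treating ZZ (unknown) as allowed.
$geoMatchCondition = New-AzFrontDoorWafMatchConditionObject `
    -MatchVariable RemoteAddr `
    -OperatorProperty GeoMatch `
    -MatchValue "US", "ZZ" `
    -NegateCondition $true

# Block anything that matches the condition above.
$geoFilterRule = New-AzFrontDoorWafCustomRuleObject `
    -Name "GeoFilterExample" `
    -RuleType MatchRule `
    -MatchCondition $geoMatchCondition `
    -Action Block `
    -Priority 1

# Create the WAF policy that holds the custom rule; associate it with your Front Door separately.
New-AzFrontDoorWafPolicy `
    -Name "ExampleGeoFilterPolicy" `
    -ResourceGroupName "myResourceGroup" `
    -Customrule $geoFilterRule `
    -Mode Prevention `
    -EnabledState Enabled
```

To make the rule path-based, you could add a second match condition on the RequestUri variable to the same rule so that only requests for a specific path are geo-filtered.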
## Country/Region code reference
frontdoor Front Door Lb With Azure App Delivery Suite https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/front-door-lb-with-azure-app-delivery-suite.md
ms.devlang: na
na Previously updated : 04/06/2021 Last updated : 05/16/2021
When choosing a global load balancer between Traffic Manager and Azure Front Doo
## Building with Azure's application delivery suite We recommend that all websites, APIs, and services be geographically redundant so they can deliver traffic to their users from the nearest location whenever possible. Combining multiple load-balancing services enables you to build geographical and zonal redundancy to maximize reliability and performance.
-In the following diagram, we describe an example architecture that uses a combination of all these services to build a global web service. The architect used Traffic Manager to route traffic to global backends for file and object delivery. While using Front Door, to globally route URL paths that match the pattern /store/* to the service they've migrated to App Service. Lastly, routing all other requests to regional Application Gateways.
+In the following diagram, we describe an example architecture that uses a combination of all these services to build a global web service. The architecture uses Traffic Manager to route traffic to global backends for file and object delivery, Front Door to globally route URL paths that match the pattern /store/* to the service they've migrated to App Service, and all other requests to regional Application Gateways.
In each region of IaaS service, the application developer has decided that any URLs that match the pattern /images/* get served from a dedicated pool of VMs. This pool of VMs is different from the rest of the web farm.
frontdoor Front Door Quickstart Template Samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/front-door-quickstart-template-samples.md
na Previously updated : 02/04/2020 Last updated : 05/14/2021 # Azure Resource Manager deployment model templates for Front Door
The following table includes links to Azure Resource Manager deployment model te
| Template | Description | | | | | [Create a basic Front Door](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.network/front-door-create-basic)| Creates a basic Front Door configuration with a single backend. |
-| [Create a Front Door with multiple backends and backend pools and URL based routing](https://github.com/Azure/azure-quickstart-templates/tree/master/101-front-door-create-multiple-backends)| Creates a Front Door with load balancing configured for multiple backends in ta backend pool and also across backend pools based on URL path. |
+| [Create a Front Door with multiple backends and backend pools and URL based routing](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.network/front-door-create-multiple-backends)| Creates a Front Door with load balancing configured for multiple backends in a backend pool and also across backend pools based on URL path. |
| [Onboard a custom domain with Front Door](https://github.com/Azure/azure-quickstart-templates/tree/master/101-front-door-custom-domain)| Add a custom domain to your Front Door. |
-| [Create Front Door with geo filtering](https://github.com/Azure/azure-quickstart-templates/tree/master/101-front-door-geo-filtering)| Create a Front Door that allows/blocks traffic from certain countries/regions. |
+| [Create Front Door with geo filtering](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.network/front-door-geo-filtering)| Create a Front Door that allows/blocks traffic from certain countries/regions. |
| [Control Health Probes for your backends on Front Door](https://github.com/Azure/azure-quickstart-templates/tree/master/201-front-door-health-probes)| Update your Front Door to change the health probe settings by updating the probe path and also the intervals in which the probes will be sent. | | [Create Front Door with Active/Standby backend configuration](https://github.com/Azure/azure-quickstart-templates/tree/master/201-front-door-priority-lb)| Creates a Front Door that demonstrates priority-based routing for Active/Standby application topology, that is, by default send all traffic to the primary (highest-priority) backend until it becomes unavailable. | | [Create Front Door with caching enabled for certain routes](https://github.com/Azure/azure-quickstart-templates/tree/master/201-front-door-create-caching)| Creates a Front Door with caching enabled for the defined routing configuration thus caching any static assets for your workload. |
frontdoor Resource Manager Template Samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/standard-premium/resource-manager-template-samples.md
The following table includes links to Azure Resource Manager templates for Azure
| [WAF policy with geo-filtering](https://github.com/Azure/azure-quickstart-templates/tree/master/201-front-door-standard-premium-geo-filtering/) | Creates a Front Door profile and WAF with a custom rule to perform geo-filtering. | |**App Service origins**| **Description** | | [App Service](https://github.com/Azure/azure-quickstart-templates/tree/master/201-front-door-standard-premium-app-service-public) | Creates an App Service app with a public endpoint, and a Front Door profile. |
-| [App Service with Private Link](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.network/front-door-premium-app-service-private-link) | Creates an App Service app with a private endpoint, and a Front Door profile. |
+| [App Service with Private Link](https://github.com/Azure/azure-quickstart-templates/tree/master/201-front-door-premium-app-service-private-link) | Creates an App Service app with a private endpoint, and a Front Door profile. |
|**Azure Functions origins**| **Description** | | [Azure Functions](https://github.com/Azure/azure-quickstart-templates/tree/master/201-front-door-standard-premium-function-public/) | Creates an Azure Functions app with a public endpoint, and a Front Door profile. |
-| [Azure Functions with Private Link](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.network/front-door-premium-function-private-link) | Creates an Azure Functions app with a private endpoint, and a Front Door profile. |
+| [Azure Functions with Private Link](https://github.com/Azure/azure-quickstart-templates/tree/master/201-front-door-premium-function-private-link) | Creates an Azure Functions app with a private endpoint, and a Front Door profile. |
|**API Management origins**| **Description** | | [API Management (external)](https://github.com/Azure/azure-quickstart-templates/tree/master/201-front-door-standard-premium-api-management-external) | Creates an API Management instance with external VNet integration, and a Front Door profile. | |**Storage origins**| **Description** | | [Storage static website](https://github.com/Azure/azure-quickstart-templates/tree/master/201-front-door-standard-premium-storage-static-website) | Creates an Azure Storage account and static website with a public endpoint, and a Front Door profile. |
-| [Storage blobs with Private Link](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.network/front-door-premium-storage-blobs-private-link) | Creates an Azure Storage account and blob container with a private endpoint, and a Front Door profile. |
+| [Storage blobs with Private Link](https://github.com/Azure/azure-quickstart-templates/tree/master/201-front-door-premium-storage-blobs-private-link) | Creates an Azure Storage account and blob container with a private endpoint, and a Front Door profile. |
|**Application Gateway origins**| **Description** | | [Application Gateway](https://github.com/Azure/azure-quickstart-templates/tree/master/201-front-door-standard-premium-application-gateway-public) | Creates an Application Gateway, and a Front Door profile. | |**Virtual machine origins**| **Description** |
governance Work With Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/resource-graph/concepts/work-with-data.md
Title: Work with large data sets description: Understand how to get, format, page, and skip records in large data sets while working with Azure Resource Graph. Previously updated : 05/01/2021 Last updated : 05/17/2021
is configured with the **resultFormat** parameter as part of the request options
is the default value for **resultFormat**. Results from Azure CLI are provided in JSON by default. Results in Azure PowerShell are a
-**PSCustomObject** by default, but they can quickly be converted to JSON using the `ConvertTo-Json`
-cmdlet. For other SDKs, the query results can be configured to output the _ObjectArray_ format.
+**PSResourceGraphResponse** object, but they can quickly be converted to JSON using the
+`ConvertTo-Json` cmdlet on the **Data** property. For other SDKs, the query results can be
+configured to output the _ObjectArray_ format.
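For example, a minimal sketch along these lines (assuming the Az.ResourceGraph module is installed and you're signed in; the query itself is an arbitrary placeholder) converts the results to JSON:

```powershell
# Sketch only: the query is a placeholder; the exact response shape can vary by module version.
$response = Search-AzGraph -Query "Resources | project name, type | limit 5"

# Convert the Data property of the response to JSON for downstream processing.
$response.Data | ConvertTo-Json -Depth 10
```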
### Format - Table
hdinsight Hdinsight Troubleshoot Converting Service Principal Certificate https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hadoop/hdinsight-troubleshoot-converting-service-principal-certificate.md
When using PowerShell or Azure template deployment to create clusters with Data
## Resolution
-Once you have the service principal certificate in pfx format (see [here](https://github.com/Azure/azure-quickstart-templates/tree/master/201-hdinsight-datalake-store-azure-storage) for sample service principal creation steps), use the following PowerShell command or C# snippet to convert the certificate contents to base-64 format.
+Once you have the service principal certificate in pfx format (see [here](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.hdinsight/hdinsight-datalake-store-azure-storage) for sample service principal creation steps), use the following PowerShell command or C# snippet to convert the certificate contents to base-64 format.
```powershell
$servicePrincipalCertificateBase64 = [System.Convert]::ToBase64String([System.IO.File]::ReadAllBytes('path-to-servicePrincipalCertificatePfxFile'))
```
hdinsight Hdinsight Custom Ambari Db https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hdinsight-custom-ambari-db.md
When you host your Apache Ambari DB in an external database, remember the follow
## Deploy clusters with a custom Ambari DB
-To create an HDInsight cluster that uses your own external Ambari database, use the [custom Ambari DB Quickstart template](https://github.com/Azure/azure-quickstart-templates/tree/master/101-hdinsight-custom-ambari-db).
+To create an HDInsight cluster that uses your own external Ambari database, use the [custom Ambari DB Quickstart template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.hdinsight/hdinsight-custom-ambari-db).
Edit the parameters in the `azuredeploy.parameters.json` to specify information about your new cluster and the database that will hold Ambari.
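Once the parameters file reflects your environment, a deployment sketch like the following (the resource group name is a placeholder, and it assumes you run it from a local copy of the quickstart template folder) submits the template with Azure PowerShell:

```powershell
# Sketch only: run from the folder that contains the custom Ambari DB quickstart template files.
New-AzResourceGroupDeployment `
    -ResourceGroupName "myResourceGroup" `
    -TemplateFile "./azuredeploy.json" `
    -TemplateParameterFile "./azuredeploy.parameters.json"
```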
hdinsight Hdinsight Hadoop Linux Use Ssh Unix https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hdinsight-hadoop-linux-use-ssh-unix.md
Title: Use SSH with Hadoop - Azure HDInsight
+ Title: Use SSH with Hadoop - Azure HDInsight
description: "You can access HDInsight using Secure Shell (SSH). This document provides information on connecting to HDInsight using the ssh commands from Windows, Linux, Unix, or macOS clients."
Replace `<clustername>` with the name of your cluster. Replace `<edgenodename>`
If your cluster contains an edge node, we recommend that you __always connect to the edge node__ using SSH. The head nodes host services that are critical to the health of Hadoop. The edge node runs only what you put on it. For more information on using edge nodes, see [Use edge nodes in HDInsight](hdinsight-apps-use-edge-node.md#access-an-edge-node).
-> [!TIP]
+> [!TIP]
> When you first connect to HDInsight, your SSH client may display a warning that the authenticity of the host can't be established. When prompted select 'yes' to add the host to your SSH client's trusted server list. > > If you have previously connected to a server with the same name, you may receive a warning that the stored host key does not match the host key of the server. Consult the documentation for your SSH client on how to remove the existing entry for the server name.
If your SSH account is secured using a key, the client must provide the matching
* If you have __multiple private keys__ for use with different servers, consider using a utility such as [ssh-agent (https://en.wikipedia.org/wiki/Ssh-agent)](https://en.wikipedia.org/wiki/Ssh-agent). The `ssh-agent` utility can be used to automatically select the key to use when establishing an SSH session.
-> [!IMPORTANT]
+> [!IMPORTANT]
> If you secure your private key with a passphrase, you must enter the passphrase when using the key. Utilities such as `ssh-agent` can cache the password for your convenience. ### Create an SSH key pair
You're prompted for information during the key creation process. For example, wh
* The __private key__ is used to authenticate your client to the HDInsight cluster.
-> [!IMPORTANT]
+> [!IMPORTANT]
> You can secure your keys using a passphrase. A passphrase is effectively a password on your private key. Even if someone obtains your private key, they must have the passphrase to use the key. ### Create HDInsight using the public key
You're prompted for information during the key creation process. For example, wh
| Azure portal | Uncheck __Use cluster login password for SSH__, and then select __Public Key__ as the SSH authentication type. Finally, select the public key file or paste the text contents of the file in the __SSH public key__ field.</br>:::image type="content" source="./media/hdinsight-hadoop-linux-use-ssh-unix/create-hdinsight-ssh-public-key.png" alt-text="SSH public key dialog in HDInsight cluster creation"::: | | Azure PowerShell | Use the `-SshPublicKey` parameter of the [New-AzHdinsightCluster](/powershell/module/az.hdinsight/new-azhdinsightcluster) cmdlet and pass the contents of the public key as a string.| | Azure CLI | Use the `--sshPublicKey` parameter of the [`az hdinsight create`](/cli/azure/hdinsight#az_hdinsight_create) command and pass the contents of the public key as a string. |
-| Resource Manager Template | For an example of using SSH keys with a template, see [Deploy HDInsight on Linux with SSH key](https://azure.microsoft.com/resources/templates/101-hdinsight-linux-ssh-publickey/). The `publicKeys` element in the [azuredeploy.json](https://github.com/Azure/azure-quickstart-templates/blob/master/101-hdinsight-linux-ssh-publickey/azuredeploy.json) file is used to pass the keys to Azure when creating the cluster. |
+| Resource Manager Template | For an example of using SSH keys with a template, see [Deploy HDInsight on Linux with SSH key](https://azure.microsoft.com/resources/templates/quickstarts/microsoft.hdinsight/hdinsight-linux-ssh-publickey/). The `publicKeys` element in the [azuredeploy.json](https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.hdinsight/hdinsight-linux-ssh-publickey/azuredeploy.json) file is used to pass the keys to Azure when creating the cluster. |
## Authentication: Password SSH accounts can be secured using a password. When you connect to HDInsight using SSH, you're prompted to enter the password.
-> [!WARNING]
+> [!WARNING]
> Microsoft does not recommend using password authentication for SSH. Passwords can be guessed and are vulnerable to brute force attacks. Instead, we recommend that you use [SSH keys for authentication](#sshkey).
-> [!IMPORTANT]
+> [!IMPORTANT]
> The SSH account password expires 70 days after the HDInsight cluster is created. If your password expires, you can change it using the information in the [Manage HDInsight](hdinsight-administer-use-portal-linux.md#change-passwords) document. ### Create HDInsight using a password
The head nodes and edge node (if there's one) can be accessed over the internet
ssh sshuser@edgnodename.clustername-ssh.azurehdinsight.net ```
-> [!IMPORTANT]
+> [!IMPORTANT]
> The previous examples assume that you are using password authentication, or that certificate authentication is occurring automatically. If you use an SSH key-pair for authentication, and the certificate is not used automatically, use the `-i` parameter to specify the private key. For example, `ssh -i ~/.ssh/mykey sshuser@clustername-ssh.azurehdinsight.net`. Once connected, the prompt changes to indicate the SSH user name and the node you're connected to. For example, when connected to the primary head node as `sshuser`, the prompt is `sshuser@<active-headnode-name>:~$`.
If the SSH account is secured using a __password__, enter the password when conn
If the SSH account is secured using __SSH keys__, make sure that SSH forwarding is enabled on the client.
-> [!NOTE]
+> [!NOTE]
> Another way to directly access all nodes in the cluster is to install HDInsight into an Azure Virtual Network. Then, you can join your remote machine to the same virtual network and directly access all nodes in the cluster. > > For more information, see [Plan a virtual network for HDInsight](hdinsight-plan-virtual-network-deployment.md). ### Configure SSH agent forwarding
-> [!IMPORTANT]
+> [!IMPORTANT]
> The following steps assume a Linux or UNIX-based system, and work with Bash on Windows 10. If these steps do not work for your system, you may need to consult the documentation for your SSH client. 1. Using a text editor, open `~/.ssh/config`. If this file doesn't exist, you can create it by entering `touch ~/.ssh/config` at a command line.
healthcare-apis Configure Export Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/fhir/configure-export-data.md
Previously updated : 3/18/2021 Last updated : 5/11/2021
Now, you can move to the next step by creating a storage account and assign perm
## Adding permission to storage account
-The next step in export is to assign permission for Azure API for FHIR service to write to the storage account.
+The next step in exporting data is to assign permission for the Azure API for FHIR service to write to the storage account.
-After you've created a storage account, go to **Access Control (IAM)** in the storage account and select **Add role assignment**.
+After you've created a storage account, go to **Access control (IAM)** in the storage account, and then select **Add role assignment**.
-![Export Role Assignment](media/export-data/fhir-export-role-assignment.png)
+For more information about assigning roles in the Azure portal, see [Azure built-in roles](../../role-based-access-control/role-assignments-portal.md).
-It is here that you'll add the role **Storage Blob Data Contributor** to our service name, and then select **Save**.
+Here, you'll assign the [Storage Blob Data Contributor](https://docs.microsoft.com/azure/role-based-access-control/built-in-roles#storage-blob-data-contributor) role to the Azure API for FHIR service name, and then select **Save**.
-![Add Role](media/export-data/fhir-export-role-add.png)
+![Add role assignment page](../../../includes/role-based-access-control/media/add-role-assignment-page.png)
Now you are ready to select the storage account in Azure API for FHIR as a default storage account for $export.
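If you prefer scripting to the portal, the following sketch shows the equivalent role assignment with Azure PowerShell. The service and storage account names are placeholders, and it assumes the Azure API for FHIR identity is discoverable as a service principal with the same display name:

```powershell
# Sketch only: replace the placeholder names with your own resources.
$fhirPrincipal  = Get-AzADServicePrincipal -DisplayName "my-fhir-service"
$storageAccount = Get-AzStorageAccount -ResourceGroupName "myResourceGroup" -Name "myexportstorage"

# Grant the FHIR service's identity write access to the export storage account.
New-AzRoleAssignment `
    -ObjectId $fhirPrincipal.Id `
    -RoleDefinitionName "Storage Blob Data Contributor" `
    -Scope $storageAccount.Id
```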
healthcare-apis Convert Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/fhir/convert-data.md
Previously updated : 01/19/2021 Last updated : 05/11/2021
# How to convert data to FHIR (Preview) > [!IMPORTANT]
-> This capability is in public preview, is provided without a service level agreement,
-> and is not recommended for production workloads. Certain features might not be supported
+> This capability is in public preview, and it's provided without a service level agreement.
+> It's not recommended for production workloads. Certain features might not be supported
> or might have constrained capabilities. For more information, see > [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
$convert-data takes a [Parameter](http://hl7.org/fhir/parameters.html) resource
| -- | -- | -- | | inputData | Data to be converted. | A valid value of JSON String datatype| | inputDataType | Data type of input. | ```HL7v2``` |
-| templateCollectionReference | Reference to a template collection. It can be a reference either to the **Default templates**, or a custom template image that is registered with Azure API for FHIR. See below to learn about customizing the templates, hosting those on ACR, and registering to the Azure API for FHIR. | ```microsofthealth/fhirconverter:default```, \<RegistryServer\>/\<imageName\>@\<imageDigest\> |
+| templateCollectionReference | Reference to a template collection. It can be a reference either to the **Default templates**, or a custom template image that's registered with Azure API for FHIR. See below to learn about customizing the templates, hosting them on ACR, and registering them with the Azure API for FHIR. | ```microsofthealth/fhirconverter:default```, \<RegistryServer\>/\<imageName\>@\<imageDigest\> |
| rootTemplate | The root template to use while transforming the data. | ```ADT_A01```, ```OML_O21```, ```ORU_R01```, ```VXU_V04``` | > [!WARNING]
$convert-data takes a [Parameter](http://hl7.org/fhir/parameters.html) resource
## Customize templates
-You can use the [FHIR Converter extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-health-fhir-converter) for Visual Studio Code to customize the templates as per your needs. The extension provides an interactive editing experience, and makes it easy to download Microsoft-published templates and sample data. See the documentation in the extension for details.
+You can use the [FHIR Converter extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-health-fhir-converter) for Visual Studio Code to customize the templates to suit your needs. The extension provides an interactive editing experience and makes it easy to download Microsoft-published templates and sample data. Refer to the documentation in the extension for more details.
## Host and use templates
-It is strongly recommended that you host your own copy of templates on ACR. There're four steps involved in hosting your own copy of templates and using those in the $convert-data operation:
+It's strongly recommended that you host your own copy of templates on ACR. There are four steps involved in hosting your own copy of the templates and using them in the $convert-data operation:
1. Push the templates to your Azure Container Registry. 1. Enable Managed Identity on your Azure API for FHIR instance.
After creating an ACR instance, you can use the _FHIR Converter: Push Templates_
### Enable Managed Identity on Azure API for FHIR
-Browse to your instance of Azure API for FHIR service in the Azure portal and select the **Identity** blade.
+Browse to your instance of Azure API for FHIR service in the Azure portal, and then select the **Identity** blade.
Change the status to **On** to enable managed identity in Azure API for FHIR. ![Enable Managed Identity](media/convert-data/fhir-mi-enabled.png) ### Provide access of the ACR to Azure API for FHIR
-Navigate to Access Control (IAM) blade in your ACR instance and select _Add Role Assignments_.
+1. Browse to the **Access control (IAM)** blade.
-![ACR Role Assignment](media/convert-data/fhir-acr-role-assignment.png)
+1. Select **Add**, and then select **Add role assignment** to open the Add role assignment page.
-Grant AcrPull role to your Azure API for FHIR service instance.
+1. Assign the [AcrPull](https://docs.microsoft.com/azure/role-based-access-control/built-in-roles#acrpull) role.
-![Add Role](media/convert-data/fhir-acr-role-add.png)
+ ![Add role assignment page](../../../includes/role-based-access-control/media/add-role-assignment-page.png)
+
+For more information about assigning roles in the Azure portal, see [Azure built-in roles](../../role-based-access-control/role-assignments-portal.md).
### Register the ACR servers in Azure API for FHIR You can register the ACR server using the Azure portal, or using CLI. #### Registering the ACR server using Azure portal
-Navigate to the _Artifacts_ blade under _Data transformation_ in your Azure API for FHIR instance. You will see the list of currently registered ACR servers. Select _Add_ and then select your registry server from the drop-down . You will need to select _Save_ for the registration to take effect. It may take a few minutes to apply the change and restart your instance.
+Browse to the **Artifacts** blade under **Data transformation** in your Azure API for FHIR instance. You will see the list of currently registered ACR servers. Select **Add**, and then select your registry server from the drop-down menu. You'll need to select **Save** for the registration to take effect. It may take a few minutes to apply the change and restart your instance.
#### Registering the ACR server using CLI You can register up to 20 ACR servers in the Azure API for FHIR.
-Install the healthcareapis CLI from Azure PowerShell if needed:
+Install the Healthcare APIs CLI from Azure PowerShell if needed:
```powershell
az extension add -n healthcareapis
```
In the table below, you'll find the IP address for the Azure region where the Az
> [!NOTE]
-> The above steps are similar to the configuration steps described in the document How to export FHIR data. For more information, see [Secure Export to Azure Storage](https://docs.microsoft.com/azure/healthcare-apis/fhir/export-data#secure-export-to-azure-storage)
+> The above steps are similar to the configuration steps described in the document How to export FHIR data. For more information, see [Secure Export to Azure Storage](https://docs.microsoft.com/azure/healthcare-apis/fhir/export-data#secure-export-to-azure-storage)
### Verify
healthcare-apis Fhir Features Supported https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/fhir/fhir-features-supported.md
Currently, the allowed actions for a given role are applied *globally* on the AP
* [**Request Units (RUs)**](../../cosmos-db/concepts-limits.md) - You can configure up to 10,000 RUs in the portal for Azure API for FHIR. You will need a minimum of 400 RUs or 40 RUs/GB, whichever is larger. If you need more than 10,000 RUs, you can put in a support ticket to have this increased. The maximum available is 1,000,000.
-* **Concurrent connections** and **Instances** - By default, you have five concurrent connections on two instances in the cluster (for a total of 10 concurrent requests). If you believe you need more concurrent requests, open a support ticket with details on your needs.
- * **Bundle size** - Each bundle is limited to 500 items. * **Data size** - Data/Documents must each be slightly less than 2 MB.
healthcare-apis Overview Of Search https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/fhir/overview-of-search.md
Previously updated : 5/3/2021 Last updated : 5/17/2021 # Overview of FHIR search
For more information, see the HL7 [Composite Search Parameters](https://www.hl7.
| :type (reference) | Yes | Yes | Yes | | :not | Yes | Yes | Yes | | :below (uri) | Yes | Yes | Yes |
-| :above (uri) | No | No | No |
+| :above (uri) | Yes | Yes | Yes |
| :in (token) | No | No | No | | :below (token) | No | No | No | | :above (token) | No | No | No |
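To illustrate how the uri modifiers in the table are used, here's a hedged PowerShell sketch (the FHIR endpoint, access token, and canonical URL are placeholders, and token acquisition isn't shown):

```powershell
# Sketch only: $accessToken is assumed to have been acquired separately.
$fhirServerUrl = "https://my-fhir-service.azurehealthcareapis.com"
$headers = @{ Authorization = "Bearer $accessToken" }

# Find ValueSet resources whose canonical url starts with the supplied prefix (:below);
# the :above modifier works the same way in reverse.
Invoke-RestMethod -Method Get `
    -Uri "$fhirServerUrl/ValueSet?url:below=http://example.org/fhir" `
    -Headers $headers
```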
iot-central Concepts Device Templates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/concepts-device-templates.md
A device template includes the following sections:
## Device models
-A device model defines how a device interacts with your IoT Central application. The device developer must make sure that the device implements the behaviors defined in the device model so that IoT Central can monitor and manage the device. A device model is made up of one or more _interfaces_, and each interface can define a collection of _telemetry_ types, _device properties_, and _commands_. A solution developer can import a JSON file that defines the device model into a device template, or use the web UI in IoT Central to create or edit a device model. Changes to a device model made using the Web UI require the [device template to be versioned](./howto-version-device-template.md).
+A device model defines how a device interacts with your IoT Central application. The device developer must make sure that the device implements the behaviors defined in the device model so that IoT Central can monitor and manage the device. A device model is made up of one or more _interfaces_, and each interface can define a collection of _telemetry_ types, _device properties_, and _commands_. A solution developer can import a JSON file that defines the device model into a device template, or use the web UI in IoT Central to create or edit a device model.
+
+To learn more about editing a device model, see [Edit an existing device template](howto-edit-device-template.md).
A solution developer can also export a JSON file that contains the device model. A device developer can use this JSON document to understand how the device should communicate with the IoT Central application.
iot-central Concepts Iot Edge https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/concepts-iot-edge.md
To learn more, see [Tutorial: Add an Azure IoT Edge device to your Azure IoT Cen
### Update a deployment manifest
-If you create a new [version](howto-version-device-template.md) of the device template, you can replace the deployment manifest with a new version:
- When you replace the deployment manifest, any connected IoT Edge devices download the new manifest and update their modules. However, IoT Central doesn't update the interfaces in the device template with any changes to the module configuration. For example, if you replace the manifest shown in the previous snippet with the following manifest, you don't automatically see the **SendUnits** property in the **management** interface in the device template. Manually add the new property to the **management** interface for IoT Central to recognize it: ```json
iot-central Howto Edit Device Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-edit-device-template.md
+
+ Title: Edit a device template in your Azure IoT Central application | Microsoft Docs
+description: Iterate over your device templates without impacting your live connected devices
++ Last updated : 04/26/2021++++++
+# Edit an existing device template
+
+*This article applies to solution builders and device developers.*
+
+A device template includes a model that describes how a device interacts with IoT Central. This model defines the capabilities of the device and how IoT Central interacts with them. Devices can send telemetry and property values to IoT Central, and IoT Central can send property updates and commands to a device. IoT Central also uses the model to define interactions with IoT Central features such as jobs, rules, and exports.
+
+Changes to the model in a device template can affect your entire application, including any connected devices. Changes to a capability that's used by rules, exports, device groups, or jobs may cause them to behave unexpectedly or not work at all. For example, if you remove a telemetry definition from a template:
+
+- IoT Central is no longer able to interpret that value. IoT Central shows device data that it can't interpret as **Unmodeled data** on the device's **Raw data** page.
+- IoT Central no longer includes the value in any data exports.
+
+To help you avoid any unintended consequences from editing a device template, this article includes recommendations based on your current development life-cycle stage. In general, the earlier you are in the development life cycle, the more tolerant you can be to device template changes.
+
+To learn more about device templates and how to create one, see [What are device templates?](concepts-device-templates.md) and [Set up a device template](howto-set-up-template.md).
+
+## Modify a device template
+
+Additive changes, such as adding a capability or interface to a model, are non-breaking changes. You can make additive changes to a model at any stage of the development life cycle.
+
+Breaking changes include removing parts of a model, or changing a capability name or schema type. These changes could cause application features such as rules, exports, or dashboards to display error messages and stop working.
+
+In early device development phases, while you're still designing and testing the model, there's greater tolerance for making changes directly to your device model. Before you connect production devices to a device template, you can edit the device template directly. IoT Central applies those changes automatically to devices when you publish the device template.
+
+After you attach production devices to a device template, evaluate the impact of any changes before you edit a device template. You shouldn't make breaking changes to a device template in production. To make such changes, create a new version of the device template. Test the new device template and then migrate your production devices to the new template at a scheduled downtime.
+
+## Update an IoT Edge device template
+
+IoT Edge device templates contain a _deployment manifest_ in addition to the device model. For an IoT Edge device, the model groups capabilities by modules that correspond to the IoT Edge modules running on the device. The deployment manifest is a separate JSON document that tells an IoT Edge device which modules to install and how to configure them. The same guidance as outlined in the previous section applies to the modules in the device model. Also, every module defined in the device model must be included in the deployment manifest. Once an IoT Edge device template is published, you must create a new version if you need to replace the deployment manifest. For IoT Edge devices to receive the new deployment manifest, migrate them to the new template version.
+
+To learn more, see [IoT Edge deployment manifests and IoT Central device templates](concepts-iot-edge.md#iot-edge-deployment-manifests-and-iot-central-device-templates).
+
+### Edit and publish actions
+
+The following actions are useful when you edit a device template:
+
+- _Save_. When you change part of your device template, saving the changes creates a draft that you can return to. These changes don't yet affect connected devices. Any devices created from this template won't have the saved changes until you publish it.
+- _Publish_. When you publish the device template, it applies any saved changes to existing device instances. Newly created device instances always use the latest published template.
+- _Version a template_. When you version a device template, it creates a new template with all the latest saved changes. Existing device instances aren't impacted by changes made to a new version. To learn more, see [Version a device template](#version-a-device-template).
+- _Version an interface_. When you version an interface, it creates a new interface with all the latest saved capabilities. You can reuse an interface in multiple locations within a template. That's why a change made to one reference to an interface changes all the places in the template that use the interface. When you version an interface, this behavior changes because the new version is now a separate interface. To learn more, see [Version an interface](#version-an-interface).
+- _Migrate a device_. When you migrate a device, the device instance swaps from one device template to another. Device migration can cause a brief interruption while IoT Central processes the changes. To learn more, see [Migrate a device across versions](#migrate-a-device-across-versions).
+
+### Version numbers
+
+Both device models and interfaces have version numbers. Different version numbers let models or interfaces share an `@id` value, while providing a history of updates. Version numbers only increment if you choose to version the template or interface, or if you deliberately change the version number. You should change a version number when you make a major change to a template or interface.
+
+The following snippet shows the device model for a thermostat device. The device model has a single interface. You can see the version number, `1`, at the end of the `@id` field.
+
+```json
+{
+ "@context": "dtmi:dtdl:context;2",
+ "@id": "dtmi:com:example:Thermostat;1",
+ "@type": "Interface",
+ "displayName": "Thermostat",
+ "description": "Reports current temperature and provides desired temperature control.",
+ "contents": [
+ // ...
+ ]
+}
+```
+
+To view this information in the IoT Central UI, select **View identity** in the device template editor:
++
+## Version a device template
+
+To version a device template:
+
+1. Go to the **Device templates** page.
+1. Select the device template you want to version.
+1. Select **Version** at the top of the page and give the template a new name. IoT Central suggests a new name, which you can edit.
+1. Select **Create**.
+
+Now you've created a new template with a unique identity that isn't attached to any existing devices.
+
+## Version an interface
+
+To version an interface:
+
+1. Go to the **Device templates** page.
+1. Select the device template you have in a draft mode.
+1. Select the published interface that you want to version and edit.
+1. Select **Version** at the top of the interface page.
+1. Select **Create**.
+
+Now you've created a new interface with a unique identity that isn't synchronized with the previous interface version.
+
+## Migrate a device across versions
+
+You can create multiple versions of the device template. Over time, you'll have multiple connected devices using these device templates. You can migrate devices from one version of your device template to another. The following steps describe how to migrate a device:
+
+1. Go to the **Devices** page.
+1. Select the device you need to migrate to another version.
+1. Choose **Migrate**:
+
+ :::image type="content" source="media/howto-edit-device-template/migrate-device.png" alt-text="Choose the option to start migrating a device":::
+
+1. Select the device template with the version you want to migrate the device to and select **Migrate**.
+
+## Next steps
+
+If you're an operator or solution builder, a suggested next step is to learn [how to manage your devices](./howto-manage-devices.md).
+
+If you're a device developer, a suggested next step is to read about [Azure IoT Edge devices and Azure IoT Central](./concepts-iot-edge.md).
iot-central Howto Run A Job https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-run-a-job.md
Enter a job name and description, and then select **Rerun job**. A new job is su
Now that you've learned how to create jobs in your Azure IoT Central application, here are some next steps: - [Manage your devices](howto-manage-devices.md)-- [Version your device template](howto-version-device-template.md)
+- [Edit a device template](howto-edit-device-template.md)
iot-central Howto Set Up Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-set-up-template.md
To create a device template in IoT Central:
## Manage a device template
-You can rename or delete a template from the template's home page.
+You can rename or delete a template from the template's editor page.
-After you've added a device model to your template, you can publish it. Until you've published the template, you can't connect a device based on this template for your operators to see in the **Devices** page.
+After you've defined the template, you can publish it. Until the template is published, you can't connect a device to it, and it doesn't appear on the **Devices** page.
+
+To learn more about modifying device templates, see [Edit an existing device template](howto-edit-device-template.md).
## Create a capability model
The following table shows the configuration settings for a command capability:
| Display Name | The display name for the command used on views and forms. | | Name | The name of the command. IoT Central generates a value for this field from the display name, but you can choose your own value if necessary. This field needs to be alphanumeric. | | Capability Type | Command. |
+| Queue if offline | If enabled, you can call the command even if the device is offline. If not enabled, you can only call the command when the device is online. |
| Comment | Any comments about the command capability. | | Description | A description of the command capability. | | Request | If enabled, a definition of the request parameter, including: name, display name, schema, unit, and display unit. |
Cloud-to-device messages:
## Manage a component
-If you haven't published the component, you can edit the capabilities defined by the component. After you publish the component, if you want to make any changes, you must create a new version of the device template and [version the component](howto-version-device-template.md). You can make changes that don't require versioning, such as display names or units, in the **Customize** section.
+Use components to assemble a device template from other interfaces. For example, the device template for a temperature controller could include several thermostat components. Components can be edited directly in the device template or exported and imported as JSON files. Devices can interact with component instances. For example, a device with two thermostats can send telemetry from each thermostat to separate components in your IoT Central application.
+
+## Inheritance
-You can also export the component as a JSON file if you want to reuse it in another capability model.
+You can extend an interface using inheritance. Use inheritance to add capabilities to existing interfaces. Inherited interfaces are transparent to devices.
## Add cloud properties
The following table shows the configuration settings for a cloud property:
## Add customizations
-Use customizations when you need to modify an imported component or add IoT Central-specific features to a capability. You can only customize fields that don't break component compatibility. For example, you can:
--- Customize the display name and units of a capability.-- Add a default color to use when the value appears on a chart.-- Specify initial, minimum, and maximum values for a property.-
-You can't customize the capability name or capability type. If there are changes you can't make in the **Customize** section, you'll need to version your device template and component to modify the capability.
+Use customizations when you need to modify an imported component or add IoT Central-specific features to a capability. You can customize any part of an existing device template's capabilities.
### Generate default views
To add a form to a device template:
Before you can connect a device that implements your device model, you must publish your device template.
-After you publish a device template, you can only make limited changes to the device model. To modify a component, you need to [create and publish a new version](./howto-version-device-template.md).
+To learn more about modifying a device template after it's published, see [Edit an existing device template](howto-edit-device-template.md).
To publish a device template, go to you your device template, and select **Publish**.
After you publish a device template, an operator can go to the **Devices** page,
## Next steps
-A suggested next step is to read about [device template versioning](./howto-version-device-template.md).
+A suggested next step is to read about how to [Make changes to an existing device template](howto-edit-device-template.md).
iot-central Howto Version Device Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-version-device-template.md
- Title: Understanding device template versioning for your Azure IoT Central apps | Microsoft Docs
-description: Iterate over your device templates by creating new versions and without impacting your live connected devices
-- Previously updated : 11/06/2020-----
-# This article applies to solution builders and device developers.
--
-# Create a new device template version
-
-A device template includes a schema that describes how a device interacts with IoT Central. These interactions include telemetry, properties, and commands. Both the device and the IoT Central application rely on a shared understanding of this schema to exchange information. You can only make limited changes to the schema without breaking the contract, that's why most schema changes require a new version of the device template. Versioning the device template lets older devices continue with the schema version they understand, while newer or updated devices use a later schema version.
-
-The schema in a device template is defined in the device model and its interfaces. Device templates include other information, such as cloud properties, display customizations, and views. If you make changes to those parts of the device template that don't define how the device exchanges data with IoT Central, you don't need to version the template.
-
-You must always publish an updated device template before an operator can use it. IoT Central stops you from publishing breaking changes to a device template without first versioning the template.
-
-> [!NOTE]
-> To learn more about how to create a device template see [Set up and manage a device template](howto-set-up-template.md)
-
-## Versioning rules
-
-This section summarizes the versioning rules that apply to device templates. Both device models and interfaces have version numbers. The following snippet shows the device model for a thermostat device. The device model has a single interface. You can see the version number at the end of the`@id` field.
-
-```json
-{
- "@context": "dtmi:dtdl:context;2",
- "@id": "dtmi:com:example:Thermostat;1",
- "@type": "Interface",
- "displayName": "Thermostat",
- "description": "Reports current temperature and provides desired temperature control.",
- "contents": [
- // ...
- ]
-}
-```
-
-To view this information in the IoT Central UI, select **View identity** in the device template editor:
--
-The following list shows the rules that determine when you must create a new version:
-
-* After a device model is published, you can't remove any interfaces, even in a new version of the device template.
-* After a device model is published, you can add an interface if you create a new version of the device template.
-* After a device model is published, you can replace an interface with a newer version if you create a new version of the device template. For example, if the sensor v1 device template uses the thermostat v1 interface, you can create a sensor v2 device template that uses the thermostat v2 interface.
-* After an interface is published, you can't remove any of the interface contents, even in a new version of the device template.
-* After an interface is published, you can add items to the contents of an interface if you create a new version of the interface and device template. Items that you can add to the interface include telemetry, properties, and commands.
-* After an interface is published, you can make non-schema changes to existing items in the interface if you create a new version of the interface and device template. Non-schema parts of an interface item include the display name and the semantic type. The schema parts of an interface item that you can't change are name, capability type, and schema.
-
-The following sections walk you through some examples of modifying device templates in IoT Central.
-
-## Customize the device template without versioning
-
-Certain elements of your device capabilities can be edited without needing to version your device template and interfaces. For example, some of these fields include display name, semantic type, minimum value, maximum value, decimal places, color, unit, display unit, comment, and description. To add one of these customizations:
-
-1. Go to the **Device templates** page.
-1. Select the device template you wish to customize.
-1. Choose the **Customize** tab.
-1. All the capabilities defined in your device model are listed here. You can edit, save, and use all of these fields without the need to version your device template. If there are fields you wish to edit that are read-only, you must version your device template to change them. Select a field you wish to edit and enter in any new values.
-1. Select **Save**. Now these values override anything that was initially saved in your device template and are used across the application.
-
-## Version a device template
-
-Creating a new version of your device template creates a draft version of the template where the device model can be edited. Any published interfaces remain published until they're individually versioned. To modify a published interface, first create a new device template version.
-
-Only version the device template when you're trying to edit a part of the device model that you can't edit in the customizations section.
-
-To version a device template:
-
-1. Go to the **Device templates** page.
-1. Select the device template you're trying to version.
-1. Select the **Version** button at the top of the page and give the template a new name. IoT Central suggests a new name, which you can edit.
-1. Select **Create**.
-1. Now your device template is in draft mode. You can see your interfaces are still locked. Version any interfaces you want to modify.
-
-## Version an interface
-
-Versioning an interface allows you to add and update capabilities inside the interface you had already created.
-
-To version an interface:
-
-1. Go to the **Device templates** page.
-1. Select the device template you have in a draft mode.
-1. Select the interface that is in published mode that you want to version and edit.
-1. Select the **Version** button at the top of the interface page.
-1. Select **Create**.
-1. Now your interface is in draft mode. You can add or edit capabilities to your interface without breaking existing customizations and views.
-
-## Migrate a device across versions
-
-You can create multiple versions of the device template. Over time, you'll have multiple connected devices using these device templates. You can migrate devices from one version of your device template to another. The following steps describe how to migrate a device:
-
-1. Go to the **Devices** page.
-1. Select the device you need to migrate to another version.
-1. Choose **Migrate**:
- :::image type="content" source="media/howto-version-device-template/migrate-device.png" alt-text="Choose the option to start migrating a device":::
-1. Select the device template with the version number you want to migrate the device to and select **Migrate**.
-
-## Next steps
-
-A suggested next step is to learn [how to manage your devices](./howto-manage-devices.md).
iot-central Overview Iot Central Developer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/overview-iot-central-developer.md
Communication protocols that a device can use to connect to IoT Central include
An IoT Central device template includes a _model_ that specifies the behaviors a device of that type should implement. Behaviors include telemetry, properties, and commands.
+To learn more about best practices when you edit a model, see [Edit an existing device template](howto-edit-device-template.md).
+ 
> [!TIP]
> You can export the model from IoT Central as a [Digital Twins Definition Language (DTDL) v2](https://github.com/Azure/opendigitaltwins-dtdl) JSON file.
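Because the exported model is plain DTDL v2 JSON, it can be inspected with ordinary tooling. The snippet below is a minimal Python sketch (an illustration, not part of the original article) that lists the telemetry, property, and command capabilities declared in a single exported interface; the file name `thermostat-model.json` is a hypothetical example.

```python
import json

# Hypothetical path to a model exported from IoT Central as DTDL v2 JSON.
with open("thermostat-model.json", "r", encoding="utf-8") as f:
    model = json.load(f)

# A DTDL v2 interface declares its capabilities in the "contents" array;
# each entry carries an "@type" of Telemetry, Property, or Command.
for capability in model.get("contents", []):
    cap_type = capability.get("@type")
    # "@type" can be a string or a list (for example ["Telemetry", "Temperature"]).
    if isinstance(cap_type, list):
        cap_type = cap_type[0]
    print(f"{cap_type}: {capability.get('name')} ({capability.get('schema')})")
```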
iot-central Quick Configure Rules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/quick-configure-rules.md
Before you begin, you should complete the two previous quickstarts [Create an Az
:::image type="content" source="media/quick-configure-rules/condition.png" alt-text="Screenshot showing the rule condition":::
-1. To add an email action to run when the rule triggers, select **+ Email**.
+1. To add an email action to run when the rule triggers, in the **Action** section, select **+ Email**.
1. Use the information in the following table to define your action and then select **Done**:
iot-central Quick Create Simulated Device https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/quick-create-simulated-device.md
To publish a device template:
1. On the dialog that appears, select **Publish**.
-After you publish a device template, it's visible on the **Devices** page. In a published device template, you can't edit a device model without creating a new version. However, you can modify cloud properties, customizations, and views in a published device template without versioning. After making any changes, select **Publish** to push those changes for real and simulated devices to use.
+After you publish a device template, it's visible on the **Devices** page. Publishing a template makes it available for your operator to create devices, device groups, rules, exports, and jobs. Once a template is published, any modifications to capabilities, interfaces, or modules directly impact your device instances and the behavior of other areas of the application.
## Add a simulated device
iot-central Tutorial Define Gateway Device Type https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/tutorial-define-gateway-device-type.md
To add cloud properties to the **Smart Building gateway device** template.
1. In the **Smart Building gateway device** template, select **Cloud properties**.
-1. Use the information in the following table to add two cloud properties to your gateway device template.
+1. Use the information in the following table to add two cloud properties to your gateway device template.
| Display name | Semantic type | Schema |
| -- | -- | -- |
| Last Service Date | None | Date |
| Customer Name | None | String |
-2. Select **Save**.
+1. Select **Save**.
### Create views
To publish the gateway device template:
3. In the **Publish a Device Template** dialog box, choose **Publish**.
-After a device template is published, it's visible on the **Devices** page and to the operator. In a published device template, you can't edit a device model without creating a new version. However, you can make updates to cloud properties, customizations, and views, in a published device template. These updates don't cause a new version to be created. After making any changes, select **Publish** to push those changes out to your operator.
+After a device template is published, it's visible on the **Devices** page and to the operator. The operator can use the template to create device instances or establish rules and monitoring. Editing a published template could affect behavior across the application.
+
+To learn more about modifying a device template after it's published, see [Edit an existing device template](howto-edit-device-template.md).
## Create the simulated devices
iot-develop Quickstart Device Development https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-develop/quickstart-device-development.md
The following tutorials are included in the getting started guide:
|[Getting started with the NXP MIMXRT1060-EVK Evaluation kit](https://go.microsoft.com/fwlink/p/?linkid=2129821) |[NXP MIMXRT1060-EVK](https://www.nxp.com/design/development-boards/i-mx-evaluation-and-development-boards/mimxrt1060-evk-i-mx-rt1060-evaluation-kit:MIMXRT1060-EVK)|
|[Getting started with the NXP MIMXRT1050-EVKB Evaluation kit](https://github.com/azure-rtos/getting-started/tree/master/NXP/MIMXRT1050-EVKB) |[NXP MIMXRT1050-EVKB](https://www.nxp.com/design/development-boards/i-mx-evaluation-and-development-boards/i-mx-rt1050-evaluation-kit:MIMXRT1050-EVK)|
|[Getting started with the Microchip ATSAME54-XPRO Evaluation kit](https://go.microsoft.com/fwlink/p/?linkid=2129537) |[Microchip ATSAME54-XPRO](https://www.microchip.com/developmenttools/productdetails/atsame54-xpro)|
-|[Getting started with the MXChip AZ3166 IoT DevKit](https://github.com/azure-rtos/getting-started/tree/master/MXChip/AZ3166) |[MXChip AZ3166 IoT DevKit](https://microsoft.github.io/azure-iot-developer-kit/)|
|[Getting started with the Renesas Starter Kit+ for RX65N-2MB](https://github.com/azure-rtos/getting-started/tree/master/Renesas/RSK_RX65N_2MB) |[Renesas Starter Kit+ for RX65N-2MB](https://www.renesas.com/us/en/products/microcontrollers-microprocessors/rx-32-bit-performance-efficiency-mcus/rx65n-2mb-starter-kit-plus-renesas-starter-kit-rx65n-2mb)|

## Next steps
iot-develop Quickstart Devkit Mxchip Az3166 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-develop/quickstart-devkit-mxchip-az3166.md
To install the tools:
cmake --version ```
-## Create the cloud components
-
-### Create the IoT Central application
-
-There are several ways to connect devices to Azure IoT. In this section, you learn how to connect a device by using Azure IoT Central. IoT Central is an IoT application platform that reduces the cost and complexity of creating and managing IoT solutions.
-
-To create a new application:
-1. From [Azure IoT Central portal](https://apps.azureiotcentral.com/), select **My apps** on the side navigation menu.
-1. Select **+ New application**.
-1. Select **Custom apps**.
-1. Add Application Name and a URL.
-1. Choose the **Free** Pricing plan to activate a 7-day trial.
-
- :::image type="content" source="media/quickstart-devkit-mxchip-az3166/iot-central-create-custom.png" alt-text="Create a custom app in Azure IoT Central":::
-
-1. Select **Create**.
-
- After IoT Central provisions the application, it redirects you automatically to the new application dashboard.
-
- > [!NOTE]
- > If you have an existing IoT Central application, you can use it to complete the steps in this article rather than create a new application.
-
-### Create a new device
-
-In this section, you use the IoT Central application dashboard to create a new device. You will use the connection information for the newly created device to securely connect your physical device in a later section.
-
-To create a device:
-1. From the application dashboard, select **Devices** on the side navigation menu.
-1. Select **+ New** to open the **Create a new device** window.
-1. Leave Device template as **Unassigned**.
-1. Fill in the desired Device name and Device ID.
-
- :::image type="content" source="media/quickstart-devkit-mxchip-az3166/iot-central-create-device.png" alt-text="Create a device in Azure IoT Central":::
-
-1. Select the **Create** button.
-1. The newly created device will appear in the **All devices** list. Select on the device name to show details.
-1. Select **Connect** in the top right menu bar to display the connection information used to configure the device in the next section.
-
- :::image type="content" source="media/quickstart-devkit-mxchip-az3166/iot-central-device-connection-info.png" alt-text="View device connection details":::
-
-1. Note the connection values for the following connection string parameters displayed in **Connect** dialog. You'll add these values to a configuration file in the next step:
-
- > * `ID scope`
- > * `Device ID`
- > * `Primary key`
## Prepare the device
iot-pnp Concepts Modeling Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-pnp/concepts-modeling-guide.md
To ensure devices and server-side solutions that use models continue to work, pu
The DTMI includes a version number that you can use to create multiple versions of a model. Devices and server-side solutions can use the specific version they were designed to use.
-IoT Central implements more versioning rules for device models. If you version a device template and its model in IoT Central, you can migrate devices from previous versions to later versions. However, migrated devices can't use new capabilities without a firmware upgrade. To learn more, see [Create a new device template version](../iot-central/core/howto-version-device-template.md).
+IoT Central implements more versioning rules for device models. If you version a device template and its model in IoT Central, you can migrate devices from previous versions to later versions. However, migrated devices can't use new capabilities without a firmware upgrade. To learn more, see [Edit a device template](../iot-central/core/howto-edit-device-template.md).
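As an illustration (not from the article), a DTMI such as `dtmi:com:example:Thermostat;2` carries its version after the semicolon, so the version can be read or bumped with simple string handling; the identifier below is a made-up example.

```python
# Hypothetical model identifier; the segment after ';' is the model version.
dtmi = "dtmi:com:example:Thermostat;2"

path, _, version = dtmi.rpartition(";")
print(path)     # dtmi:com:example:Thermostat
print(version)  # 2

# A device built against version 2 keeps using version 2, while a newer
# solution can reference a later version of the same path, for example:
next_version = f"{path};{int(version) + 1}"
print(next_version)  # dtmi:com:example:Thermostat;3
```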
## Limits and constraints
load-balancer Backend Pool Management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-balancer/backend-pool-management.md
There are two ways of configuring a backend pool:
* Network Interface Card (NIC) * Combination of IP address and Virtual Network (VNET) Resource ID
-Configure your backend pool by NIC when using existing virtual machines and virtual machine scale sets. This method builds the most direct link between your resource and the backend pool.
+Configure your backend pool by NIC when using existing virtual machines and virtual machine scale sets. This method builds the most direct link between your resource and the backend pool.
When preallocating your backend pool with an IP address range which you plan to later create virtual machines and virtual machine scale sets, configure your backend pool by IP address and VNET ID combination.
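If you prefer to script this outside PowerShell or the CLI, the management SDKs expose the same operation. The following Python sketch is an assumption-laden illustration rather than part of the original article: it uses the `azure-identity` and `azure-mgmt-network` packages, passes the request body as a plain dictionary, and all resource names and the IP address are placeholders; check the SDK reference for the exact model classes and API versions before relying on it.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

# Placeholder values; replace with your own subscription and resources.
subscription_id = "<subscription-id>"
resource_group = "myResourceGroup"
lb_name = "myLB"
vnet_id = (
    "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup"
    "/providers/Microsoft.Network/virtualNetworks/myVNet"
)

client = NetworkManagementClient(DefaultAzureCredential(), subscription_id)

# Preallocate a backend pool entry by IP address and virtual network,
# before the corresponding VM or scale set instance exists.
poller = client.load_balancer_backend_address_pools.begin_create_or_update(
    resource_group_name=resource_group,
    load_balancer_name=lb_name,
    backend_address_pool_name="myBackendPool",
    parameters={
        "properties": {
            "loadBalancerBackendAddresses": [
                {
                    "name": "address1",
                    "properties": {
                        "virtualNetwork": {"id": vnet_id},
                        "ipAddress": "10.0.0.4",
                    },
                }
            ]
        }
    },
)
print(poller.result().name)
```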
The configuration sections of this article will focus on:
* Azure PowerShell * Azure CLI * REST API
-* Azure Resource Manager templates
+* Azure Resource Manager templates
These sections give insight into how the backend pools are structured for each configuration option.
The backend pool is created as part of the load balancer operation. The IP confi
The following examples are focused on the create and populate operations for the backend pool to highlight this workflow and relationship.
- >[!NOTE]
+ >[!NOTE]
>It is important to note that backend pools configured via network interface cannot be updated as part of an operation on the backend pool. Any addition or deletion of backend resources must occur on the network interface of the resource. ### PowerShell
$resourceGroup = "myResourceGroup"
$loadBalancerName = "myLoadBalancer" $backendPoolName = "myBackendPool"
-$backendPool =
+$backendPool =
New-AzLoadBalancerBackendAddressPool -ResourceGroupName $resourceGroup -LoadBalancerName $loadBalancerName -BackendAddressPoolName $backendPoolName   ```
$nicname = "myNic"
$location = "eastus" $vnetname = <your-vnet-name>
-$vnet =
+$vnet =
Get-AzVirtualNetwork -Name $vnetname -ResourceGroupName $resourceGroup
-$nic =
+$nic =
New-AzNetworkInterface -ResourceGroupName $resourceGroup -Location $location -Name $nicname -LoadBalancerBackendAddressPool $backendPoolName -Subnet $vnet.Subnets[0] ```
$location = "eastus"
$nic = Get-AzNetworkInterface -Name $nicname -ResourceGroupName $resourceGroup
-$vmConfig =
+$vmConfig =
New-AzVMConfig -VMName $vmname -VMSize $vmsize | Set-AzVMOperatingSystem -Windows -ComputerName $vmname -Credential $cred | Set-AzVMSourceImage -PublisherName $pubname -Offer $off -Skus $sku -Version latest | Add-AzVMNetworkInterface -Id $nic.Id
-
+ # Create a virtual machine using the configuration $vm1 = New-AzVM -ResourceGroupName $resourceGroup -Zone 1 -Location $location -VM $vmConfig ```
Create the backend pool:
az network lb address-pool create \ --resource-group myResourceGroup \ --lb-name myLB \name myBackendPool
+--name myBackendPool
``` Create a new network interface and add it to the backend pool:
az vm create \
### Resource Manager Template
-Follow this [quickstart Resource Manager template](https://github.com/Azure/azure-quickstart-templates/tree/master/101-load-balancer-standard-create/) to deploy a load balancer and virtual machines and add the virtual machines to the backend pool via network interface.
+Follow this [quickstart Resource Manager template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.network/load-balancer-standard-create/) to deploy a load balancer and virtual machines and add the virtual machines to the backend pool via network interface.
Follow this [quickstart Resource Manager template](https://github.com/Azure/azure-quickstart-templates/tree/master/101-load-balancer-ip-configured-backend-pool) to deploy a load balancer and virtual machines and add the virtual machines to the backend pool via IP address.
Get-AzLoadBalancerBackendAddressPool -ResourceGroupName $resourceGroup -LoadB
Create a network interface and add it to the backend pool. Set the IP address to one of the backend addresses: ```azurepowershell-interactive
-$nic =
+$nic =
New-AzNetworkInterface -ResourceGroupName $resourceGroup -Location $location -Name $nicName -PrivateIpAddress 10.0.0.4 -Subnet $virtualNetwork.Subnets[0] ```
$location = "eastus"
$nic = Get-AzNetworkInterface -Name $nicname -ResourceGroupName $resourceGroup
-$vmConfig =
+$vmConfig =
New-AzVMConfig -VMName $vmname -VMSize $vmsize | Set-AzVMOperatingSystem -Windows -ComputerName $vmname -Credential $cred | Set-AzVMSourceImage -PublisherName $pubname -Offer $off -Skus $sku -Version latest | Add-AzVMNetworkInterface -Id $nic.Id # Create a virtual machine using the configuration
$vm1 = New-AzVM -ResourceGroupName $resourceGroup -Zone 1 -Location $location -V
``` ### CLI
-Using CLI you can either populate the backend pool via command-line parameters or through a JSON configuration file.
+Using CLI you can either populate the backend pool via command-line parameters or through a JSON configuration file.
Create and populate the backend pool via the command-line parameters:
az vm create \
--admin-username azureuser \ --generate-ssh-keys ```
-
+ 
### Limitations
A Backend Pool configured by IP address has the following limitations:
* Can only be used for Standard load balancers
logic-apps Monitor Logic Apps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/monitor-logic-apps.md
To get alerts based on specific metrics or exceeded thresholds for your logic ap
> [request trigger](../connectors/connectors-native-reqres.md) in your workflow, > which lets you perform tasks like these examples: >
-> * [Post to Slack](https://github.com/Azure/azure-quickstart-templates/tree/master/201-alert-to-slack-with-logic-app)
+> * [Post to Slack](https://github.com/Azure/azure-quickstart-templates/tree/master/demos/alert-to-slack-with-logic-app)
> * [Send a text](https://github.com/Azure/azure-quickstart-templates/tree/master/demos/alert-to-text-message-with-logic-app) > * [Add a message to a queue](https://github.com/Azure/azure-quickstart-templates/tree/master/demos/alert-to-queue-with-logic-app)
logic-apps Workflow Definition Language Functions Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/workflow-definition-language-functions-reference.md
And returns this result:
This example uses the `json()` and `xml()` functions to convert XML that has a single child element in the root element into a JSON object named `person` for that child element:
-`json(xml('<?xml version="1.0"?> <root> <person id='1'> <name>Sophia Owen</name> <occupation>Engineer</occupation> </person> </root>'))`
+`json(xml('<?xml version="1.0"?> <root> <person id="1"> <name>Sophia Owen</name> <occupation>Engineer</occupation> </person> </root>'))`
And returns this result:
And returns this result:
This example uses the `json()` and `xml()` functions to convert XML that has multiple child elements in the root element into an array named `person` that contains JSON objects for those child elements:
-`json(xml('<?xml version="1.0"?> <root> <person id='1'> <name>Sophia Owen</name> <occupation>Engineer</occupation> </person> <person id='2'> <name>John Doe</name> <occupation>Engineer</occupation> </person> </root>'))`
+`json(xml('<?xml version="1.0"?> <root> <person id="1"> <name>Sophia Owen</name> <occupation>Engineer</occupation> </person> <person id="2"> <name>John Doe</name> <occupation>Engineer</occupation> </person> </root>'))`
And returns this result:
xml('<value>')
*Example 1*
+This example converts the string to XML:
+
+`xml('<name>Sophia Owen</name>')`
+
+And returns this result XML:
+
+```xml
+<name>Sophia Owen</name>
+```
+
+*Example 2*
+ This example creates the XML version for this string, which contains a JSON object:
-`xml(json('{ \"name\": \"Sophia Owen\" }'))`
+`xml(json('{ "name": "Sophia Owen" }'))`
And returns this result XML:
And returns this result XML:
<name>Sophia Owen</name> ```
-*Example 2*
+*Example 3*
Suppose you have this JSON object:
Suppose you have this JSON object:
This example creates XML for a string that contains this JSON object:
-`xml(json('{\"person\": {\"name\": \"Sophia Owen\", \"city\": \"Seattle\"}}'))`
+`xml(json('{"person": {"name": "Sophia Owen", "city": "Seattle"}}'))`
And returns this result XML:
machine-learning Concept Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/concept-compute-instance.md
Title: 'What is an Azure Machine Learning compute instance?'
-description: Learn about the Azure Machine Learning compute instance, a fully managed cloud-based workstation.
+description: Learn about the Azure Machine Learning compute instance, a fully managed cloud-based workstation.
Last updated 10/02/2020
An Azure Machine Learning compute instance is a managed cloud-based workstation for data scientists.
-Compute instances make it easy to get started with Azure Machine Learning development as well as provide management and enterprise readiness capabilities for IT administrators.
+Compute instances make it easy to get started with Azure Machine Learning development as well as provide management and enterprise readiness capabilities for IT administrators.
-Use a compute instance as your fully configured and managed development environment in the cloud for machine learning. They can also be used as a compute target for training and inferencing for development and testing purposes.
+Use a compute instance as your fully configured and managed development environment in the cloud for machine learning. They can also be used as a compute target for training and inferencing for development and testing purposes.
For production grade model training, use an [Azure Machine Learning compute cluster](how-to-create-attach-compute-cluster.md) with multi-node scaling capabilities. For production grade model deployment, use [Azure Kubernetes Service cluster](how-to-deploy-azure-kubernetes-service.md).
You can also **[use a setup script (preview)](how-to-create-manage-compute-insta
> [!IMPORTANT] > Items marked (preview) in this article are currently in public preview.
-> The preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> The preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). Azure Machine Learning compute instance enables you to author, train, and deploy models in a fully integrated notebook experience in your workspace. You can run Jupyter notebooks in [VS Code](https://techcommunity.microsoft.com/t5/azure-ai/power-your-vs-code-notebooks-with-azml-compute-instances/ba-p/1629630) using compute instance as the remote server with no SSH needed. You can also enable VS Code integration through [remote SSH extension](https://devblogs.microsoft.com/python/enhance-your-azure-machine-learning-experience-with-the-vs-code-extension/).
-You can [install packages](how-to-access-terminal.md#install-packages) and [add kernels](how-to-access-terminal.md#add-new-kernels) to your compute instance.
+You can [install packages](how-to-access-terminal.md#install-packages) and [add kernels](how-to-access-terminal.md#add-new-kernels) to your compute instance.
-Following tools and environments are already installed on the compute instance:
+Following tools and environments are already installed on the compute instance:
|General tools & environments|Details| |-|:-:|
Following tools and environments are already installed on the compute instance:
|Docker|| |Nginx|| |NCCL 2.0 ||
-|Protobuf||
+|Protobuf||
|**R** tools & environments|Details| |-|:-:|
The files in the file share are accessible from all compute instances in the sam
You can also clone the latest Azure Machine Learning samples to your folder under the user files directory in the workspace file share.
-Writing small files can be slower on network drives than writing to the compute instance local disk itself. If you are writing many small files, try using a directory directly on the compute instance, such as a `/tmp` directory. Note these files will not be accessible from other compute instances.
+Writing small files can be slower on network drives than writing to the compute instance local disk itself. If you are writing many small files, try using a directory directly on the compute instance, such as a `/tmp` directory. Note these files will not be accessible from other compute instances.
-Do not store training data on the notebooks file share. You can use the `/tmp` directory on the compute instance for your temporary data. However, do not write very large files of data on the OS disk of the compute instance. OS disk on compute instance has 128 GB capacity. You can also store temporary training data on temporary disk mounted on /mnt. Temporary disk size is configurable based on the VM size chosen and can store larger amounts of data if a higher size VM is chosen. You can also mount [datastores and datasets](concept-azure-machine-learning-architecture.md#datasets-and-datastores).
+Do not store training data on the notebooks file share. You can use the `/tmp` directory on the compute instance for your temporary data. However, do not write very large files of data on the OS disk of the compute instance. OS disk on compute instance has 128 GB capacity. You can also store temporary training data on temporary disk mounted on /mnt. Temporary disk size is configurable based on the VM size chosen and can store larger amounts of data if a higher size VM is chosen. You can also mount [datastores and datasets](concept-azure-machine-learning-architecture.md#datasets-and-datastores).
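To make this guidance concrete, here is a small illustrative Python snippet (not from the article) that keeps many small intermediate files on the compute instance's local `/tmp` disk rather than on the shared notebooks file share; the directory prefix and file names are arbitrary examples.

```python
import os
import tempfile

# Scratch space on the compute instance's local disk: fast for many small
# writes, but not visible from other compute instances and not persisted
# with the workspace file share.
scratch = tempfile.mkdtemp(prefix="train-scratch-", dir="/tmp")

for i in range(1_000):
    with open(os.path.join(scratch, f"chunk-{i:04d}.txt"), "w") as f:
        f.write("intermediate result\n")

print(f"Wrote {len(os.listdir(scratch))} files to {scratch}")
```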
## Managing a compute instance
For more about managing the compute instance, see [Create and manage an Azure Ma
### <a name="create"></a>Create a compute instance
-As an administrator, you can **[create a compute instance for others in the workspace (preview)](how-to-create-manage-compute-instance.md#on-behalf)**.
+As an administrator, you can **[create a compute instance for others in the workspace (preview)](how-to-create-manage-compute-instance.md#on-behalf)**.
You can also **[use a setup script (preview)](how-to-create-manage-compute-instance.md#setup-script)** for an automated way to customize and configure the compute instance.
-To create your a compute instance for yourself, use your workspace in Azure Machine Learning studio, [create a new compute instance](how-to-create-attach-compute-studio.md#compute-instance) from either the **Compute** section or in the **Notebooks** section when you are ready to run one of your notebooks.
+To create a compute instance for yourself, use your workspace in Azure Machine Learning studio and [create a new compute instance](how-to-create-attach-compute-studio.md#compute-instance) from either the **Compute** section or the **Notebooks** section when you are ready to run one of your notebooks.
You can also create an instance
* Directly from the [integrated notebooks experience](tutorial-train-models-with-aml.md#azure)
* In Azure portal
-* From Azure Resource Manager template. For an example template, see the [create an Azure Machine Learning compute instance template](https://github.com/Azure/azure-quickstart-templates/tree/master/101-machine-learning-compute-create-computeinstance).
+* From Azure Resource Manager template. For an example template, see the [create an Azure Machine Learning compute instance template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.machinelearningservices/machine-learning-compute-create-computeinstance).
* With [Azure Machine Learning SDK](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/machine-learning/concept-compute-instance.md) * From the [CLI extension for Azure Machine Learning](reference-azure-machine-learning-cli.md#computeinstance)
Compute instance comes with P10 OS disk. Temp disk type depends on the VM size c
## Compute target
-Compute instances can be used as a [training compute target](concept-compute-target.md#train) similar to Azure Machine Learning compute training clusters.
+Compute instances can be used as a [training compute target](concept-compute-target.md#train) similar to Azure Machine Learning compute training clusters.
A compute instance: * Has a job queue.
machine-learning Concept Compute Target https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/concept-compute-target.md
You can create Azure Machine Learning compute instances or compute clusters from
* [Compute instance](how-to-create-manage-compute-instance.md). * [Compute cluster](how-to-create-attach-compute-cluster.md). * The [R SDK](https://azure.github.io/azureml-sdk-for-r/reference/https://docsupdatetracker.net/index.html#section-compute-targets) (preview).
-* An Azure Resource Manager template. For an example template, see [Create an Azure Machine Learning compute cluster](https://github.com/Azure/azure-quickstart-templates/tree/master/101-machine-learning-compute-create-amlcompute).
+* An Azure Resource Manager template. For an example template, see [Create an Azure Machine Learning compute cluster](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.machinelearningservices/machine-learning-compute-create-amlcompute).
* A machine learning [extension for the Azure CLI](reference-azure-machine-learning-cli.md#resource-management). When created, these compute resources are automatically part of your workspace, unlike other kinds of compute targets.
See the following table to learn more about supported series and restrictions.
| NC | None. | GPU | Compute clusters and instance |
| NC Promo | None. | GPU | Compute clusters and instance |
| NCsv2 | Requires approval. | GPU | Compute clusters and instance |
-| NCsv3 | Requires approval. | GPU | Compute clusters and instance |
-| NDs | Requires approval. | GPU | Compute clusters and instance |
-| NDv2 | Requires approval. | GPU | Compute clusters and instance |
-| NV | None. | GPU | Compute clusters and instance |
-| NVv3 | Requires approval. | GPU | Compute clusters and instance |
+| NCsv3 | Requires approval. | GPU | Compute clusters and instance |
+| NDs | Requires approval. | GPU | Compute clusters and instance |
+| NDv2 | Requires approval. | GPU | Compute clusters and instance |
+| NV | None. | GPU | Compute clusters and instance |
+| NVv3 | Requires approval. | GPU | Compute clusters and instance |
While Azure Machine Learning supports these VM series, they might not be available in all Azure regions. To check whether VM series are available, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=virtual-machines).
machine-learning How To Create Manage Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-create-manage-compute-instance.md
Use a compute instance as your fully configured and managed development environm
In this article, you learn how to:
-* Create a compute instance
+* Create a compute instance
* Manage (start, stop, restart, delete) a compute instance
-* Access the terminal window
+* Access the terminal window
* Install R or Python packages * Create new environments or Jupyter kernels
-Compute instances can run jobs securely in a [virtual network environment](how-to-secure-training-vnet.md), without requiring enterprises to open up SSH ports. The job executes in a containerized environment and packages your model dependencies in a Docker container.
+Compute instances can run jobs securely in a [virtual network environment](how-to-secure-training-vnet.md), without requiring enterprises to open up SSH ports. The job executes in a containerized environment and packages your model dependencies in a Docker container.
## Prerequisites
Compute instances can run jobs securely in a [virtual network environment](how-t
> [!IMPORTANT] > Items marked (preview) below are currently in public preview.
-> The preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> The preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). **Time estimate**: Approximately 5 minutes.
For information on creating a compute instance in the studio, see [Create comput
-You can also create a compute instance with an [Azure Resource Manager template](https://github.com/Azure/azure-quickstart-templates/tree/master/101-machine-learning-compute-create-computeinstance).
+You can also create a compute instance with an [Azure Resource Manager template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.machinelearningservices/machine-learning-compute-create-computeinstance).
You can also create a compute instance with an [Azure Resource Manager template]
As an administrator, you can create a compute instance on behalf of a data scientist and assign the instance to them with:
-* [Azure Resource Manager template](https://github.com/Azure/azure-quickstart-templates/tree/master/101-machine-learning-compute-create-computeinstance). For details on how to find the TenantID and ObjectID needed in this template, see [Find identity object IDs for authentication configuration](../healthcare-apis/fhir/find-identity-object-ids.md). You can also find these values in the Azure Active Directory portal.
+* [Azure Resource Manager template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.machinelearningservices/machine-learning-compute-create-computeinstance). For details on how to find the TenantID and ObjectID needed in this template, see [Find identity object IDs for authentication configuration](../healthcare-apis/fhir/find-identity-object-ids.md). You can also find these values in the Azure Active Directory portal.
* REST API
-The data scientist you create the compute instance for needs the following be [Azure role-based access control (Azure RBAC)](../role-based-access-control/overview.md) permissions:
+The data scientist you create the compute instance for needs the following [Azure role-based access control (Azure RBAC)](../role-based-access-control/overview.md) permissions:
* *Microsoft.MachineLearningServices/workspaces/computes/start/action*
* *Microsoft.MachineLearningServices/workspaces/computes/stop/action*
* *Microsoft.MachineLearningServices/workspaces/computes/restart/action*
The data scientist can start, stop, and restart the compute instance. They can u
## <a name="setup-script"></a> Customize the compute instance with a script (preview)
-Use a setup script for an automated way to customize and configure the compute instance at provisioning time. As an administrator, you can write a customization script to be used to provision all compute instances in the workspace according to your requirements.
+Use a setup script for an automated way to customize and configure the compute instance at provisioning time. As an administrator, you can write a customization script to be used to provision all compute instances in the workspace according to your requirements.
Some examples of what you can do in a setup script:
The setup script is a shell script which runs as *rootuser*. Create or upload t
When the script runs, the current working directory of the script is the directory where it was uploaded. For example, if you upload the script to **Users>admin**, the location of the script on the compute instance and current working directory when the script runs is */home/azureuser/cloudfiles/code/Users/admin*. This would enable you to use relative paths in the script.
-Script arguments can be referred to in the script as $1, $2, etc.
+Script arguments can be referred to in the script as $1, $2, etc.
If your script was doing something specific to azureuser such as installing conda environment or jupyter kernel you will have to put it within *sudo -u azureuser* block like this
Please note that if workspace storage is attached to a virtual network you might
### Use script in a Resource Manager template
-In a Resource Manager [template](https://github.com/Azure/azure-quickstart-templates/tree/master/101-machine-learning-compute-create-computeinstance), add `setupScripts` to invoke the setup script when the compute instance is provisioned. For example:
+In a Resource Manager [template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.machinelearningservices/machine-learning-compute-create-computeinstance), add `setupScripts` to invoke the setup script when the compute instance is provisioned. For example:
```json "setupScripts":{
In the examples below, the name of the compute instance is **instance**
* Delete ```python
- # delete() is used to delete the ComputeInstance target. Useful if you want to re-use the compute name
+ # delete() is used to delete the ComputeInstance target. Useful if you want to re-use the compute name
instance.delete(wait_for_completion=True, show_output=True) ```
In the examples below, the name of the compute instance is **instance**
For more information, see [az ml computetarget stop computeinstance](/cli/azure/ml/computetarget/computeinstance#az_ml_computetarget_computeinstance_stop).
-* Start
+* Start
```azurecli-interactive az ml computetarget start computeinstance -n instance -v
In the examples below, the name of the compute instance is **instance**
For more information, see [az ml computetarget start computeinstance](/cli/azure/ml/computetarget/computeinstance#az_ml_computetarget_computeinstance_start).
-* Restart
+* Restart
```azurecli-interactive az ml computetarget restart computeinstance -n instance -v
In your workspace in Azure Machine Learning studio, select **Compute**, then sel
You can perform the following actions:
-* Create a new compute instance
+* Create a new compute instance
* Refresh the compute instances tab. * Start, stop, and restart a compute instance. You do pay for the instance whenever it is running. Stop the compute instance when you are not using it to reduce cost. Stopping a compute instance deallocates it. Then start it again when you need it. * Delete a compute instance.
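The same lifecycle operations shown for the CLI and the studio are also available from the Python SDK. Below is a short complementary sketch, assuming a compute instance named **instance** as in the examples above (the method parameters mirror the `delete()` call shown earlier):

```python
from azureml.core import Workspace
from azureml.core.compute import ComputeInstance

ws = Workspace.from_config()
instance = ComputeInstance(workspace=ws, name="instance")

# Stop the instance when it's idle to avoid paying for compute you aren't using.
instance.stop(wait_for_completion=True, show_output=True)

# Start it again when you need it.
instance.start(wait_for_completion=True, show_output=True)
```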
machine-learning How To Deploy With Triton https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-deploy-with-triton.md
Previously updated : 02/16/2020 Last updated : 05/17/2021
Before attempting to use Triton for your own model, it's important to understand
:::image type="content" source="./media/how-to-deploy-with-triton/triton-deploy.png" alt-text="Inferenceconfig deployment with Triton only, and no Python middleware":::
-**Inference configuration deployment with Triton**
-
-* Multiple [Gunicorn](https://gunicorn.org/) workers are started to concurrently handle incoming requests.
-* The requests are forwarded to the **Triton server**.
-* Triton processes requests in batches to maximize GPU utilization.
-* The client uses the __Azure ML scoring URI__ to make requests. For example, `https://myservice.azureml.net/score`.
--
-The workflow to use Triton for your model deployment is:
-
-1. Serve your model with Triton directly.
-1. Verify you can send requests to your Triton-deployed model.
-1. (Optional) Create a layer of Python middleware for server-side pre- and post-processing
## Deploying Triton without Python pre- and post-processing
az ml model register -n my_triton_model -p models --model-framework=Multi
For more information on `az ml model register`, consult the [reference documentation](/cli/azure/ml/model).
-When registering the model in Azure Machine Learning, the value for the `--model-path -p` parameter must be the name of the parent folder of the Triton.
+When registering the model in Azure Machine Learning, the value for the `--model-path -p` parameter must be the name of the parent folder of the Triton Model Repository.
In the example above, `--model-path` is 'models'.
-The value for `--name -n` parameter, ‘my_triton_model’ in the example, will be the model name known to Azure Machine Learning Workspace.
+The value for the `--name -n` parameter, `my_triton_model` in the example, will be the model name known to your Azure Machine Learning workspace.
# [Python](#tab/python)
-```python
-
-from azureml.core.model import Model
-
-model_path = "models"
-
-model = Model.register(
- model_path=model_path,
- model_name="bidaf-9-tutorial",
- tags={"area": "Natural language processing", "type": "Question-answering"},
- description="Question answering from ONNX model zoo",
- workspace=ws,
- model_framework=Model.Framework.MULTI, # This line tells us you are registering a Triton model
-)
+[!notebook-python[] (~/Azureml-examples-main/python-sdk/experimental/deploy-triton/1.bidaf-ncd-local.ipynb?name=register-model)]
-```
For more information, see the documentation for the [Model class](/python/api/azureml-core/azureml.core.model.model).
az ml model deploy -n triton-webservice -m triton_model:1 --dc deploymentconfig.
# [Python](#tab/python)
-```python
-from azureml.core.webservice import AksWebservice
-from azureml.core.model import InferenceConfig
-from random import randint
-
-service_name = "triton-webservice"
-
-config = AksWebservice.deploy_configuration(
- compute_target_name="aks-gpu",
- gpu_cores=1,
- cpu_cores=1,
- memory_gb=4,
- auth_enabled=True,
-)
-
-service = Model.deploy(
- workspace=ws,
- name=service_name,
- models=[model],
- deployment_config=config,
- overwrite=True,
-)
-```
+[!notebook-python[] (~/Azureml-examples-main/python-sdk/experimental/deploy-triton/1.bidaf-ncd-local.ipynb?name=deploy-webservice)]
+ See [this documentation for more details on deploying models](how-to-deploy-and-where.md). ### Call into your deployed model
-First, get your scoring URI and Bearer tokens.
+First, get your scoring URI and bearer tokens.
# [Azure CLI](#tab/azcli) - ```azurecli az ml service show --name=triton-webservice ``` # [Python](#tab/python)
-```python
-import requests
-
-print(service.scoring_uri)
-print(service.get_keys())
-
-```
+[!notebook-python[] (~/Azureml-examples-main/python-sdk/experimental/deploy-triton/1.bidaf-ncd-local.ipynb?name=get-keys)]
Then, ensure your service is running by doing:
+# [Azure CLI](#tab/azcli)
+
+```azurecli
```{bash} !curl -v $scoring_uri/v2/health/ready -H 'Authorization: Bearer '"$service_key"'' ```
This command returns information similar to the following. Note the `200 OK`; th
< HTTP/1.1 200 OK HTTP/1.1 200 OK ```-
-Once you've performed a health check, you can create a client to send data to Triton for inference. For more information on creating a client, see the [client examples](https://aka.ms/nvidia-client-examples) in the NVIDIA documentation. There are also [Python samples at the Triton GitHub](https://aka.ms/nvidia-triton-docs).
-
-At this point, if you do not want to add Python pre- and post-processing to your deployed webservice, you may be done. If you want to add this pre- and post-processing logic, read on.
-
-## (Optional) Re-deploy with a Python entry script for pre- and post-processing
-
-After verifying that the Triton is able to serve your model, you can add pre and post-processing code by defining an _entry script_. This file is named `score.py`. For more information on entry scripts, see [Define an entry script](how-to-deploy-and-where.md#define-an-entry-script).
-
-The two main steps are to initialize a Triton HTTP client in your `init()` method, and to call into that client in your `run()` function.
-
-### Initialize the Triton Client
-
-Include code like the following example in your `score.py` file. Triton in Azure Machine Learning expects to be addressed on localhost, port 8000. In this case, localhost is inside the Docker image for this deployment, not a port on your local machine:
-
-> [!TIP]
-> The `tritonhttpclient` pip package is included in the curated `AzureML-Triton` environment, so there's no need to specify it as a pip dependency.
-
-```python
-import tritonhttpclient
-
-def init():
- global triton_client
- triton_client = tritonhttpclient.InferenceServerClient(url="localhost:8000")
-```
-
-### Modify your scoring script to call into Triton
-
-The following example demonstrates how to dynamically request the metadata for the model:
-
-> [!TIP]
-> You can dynamically request the metadata of models that have been loaded with Triton by using the `.get_model_metadata` method of the Triton client. See the [sample notebook](https://aka.ms/triton-aml-sample) for an example of its use.
-
-```python
-input = tritonhttpclient.InferInput(input_name, data.shape, datatype)
-input.set_data_from_numpy(data, binary_data=binary_data)
-
-output = tritonhttpclient.InferRequestedOutput(
- output_name, binary_data=binary_data, class_count=class_count)
-
-# Run inference
-res = triton_client.infer(model_name,
- [input]
- request_id='0',
- outputs=[output])
-
-```
-
-<a id="redeploy"></a>
-
-### Redeploy with an inference configuration
-
-An inference configuration allows you use an entry script, as well as the Azure Machine Learning deployment process using the Python SDK or Azure CLI.
-
-> [!IMPORTANT]
-> You must specify the `AzureML-Triton` [curated environment](./resource-curated-environments.md).
->
-> The Python code example clones `AzureML-Triton` into another environment called `My-Triton`. The Azure CLI code also uses this environment. For more information on cloning an environment, see the [Environment.Clone()](/python/api/azureml-core/azureml.core.environment.environment#clone-new-name-) reference.
-
-# [Azure CLI](#tab/azcli)
-
-> [!TIP]
-> For more information on creating an inference configuration, see the [inference configuration schema](./reference-azure-machine-learning-cli.md#inference-configuration-schema).
-
-```azurecli
-az ml model deploy -n triton-densenet-onnx \
--m densenet_onnx:1 \ic inference-config.json \--e My-Triton --dc deploymentconfig.json \overwrite --compute-target=aks-gpu
-```
- # [Python](#tab/python)
-```python
-from azureml.core.webservice import LocalWebservice
-from azureml.core import Environment
-from azureml.core.model import InferenceConfig
--
-local_service_name = "triton-bidaf-onnx"
-env = Environment.get(ws, "AzureML-Triton").clone("My-Triton")
-
-for pip_package in ["nltk"]:
- env.python.conda_dependencies.add_pip_package(pip_package)
-
-inference_config = InferenceConfig(
- entry_script="score_bidaf.py", # This entry script is where we dispatch a call to the Triton server
- source_directory=os.path.join("..", "scripts"),
- environment=env
-)
-
-local_config = LocalWebservice.deploy_configuration(
- port=6789
-)
-
-local_service = Model.deploy(
- workspace=ws,
- name=local_service_name,
- models=[model],
- inference_config=inference_config,
- deployment_config=local_config,
- overwrite=True)
-
-local_service.wait_for_deployment(show_output = True)
-print(local_service.state)
-# Print the URI you can use to call the local deployment
-print(local_service.scoring_uri)
-```
+[!notebook-python[] (~/Azureml-examples-main/python-sdk/experimental/deploy-triton/1.bidaf-ncd-local.ipynb?name=query-service)]
-After deployment completes, the scoring URI is displayed. For this local deployment, it will be `http://localhost:6789/score`. If you deploy to the cloud, you can use the [az ml service show](/cli/azure/ml/service#az_ml_service_show) CLI command to get the scoring URI.
-
-For information on how to create a client that sends inference requests to the scoring URI, see [consume a model deployed as a web service](how-to-consume-web-service.md).
-
-### Setting the number of workers
-
-To set the number of workers in your deployment, set the environment variable `WORKER_COUNT`. Given you have an [Environment](/python/api/azureml-core/azureml.core.environment.environment) object called `env`, you can do the following:
-
-```{py}
-env.environment_variables["WORKER_COUNT"] = "1"
-```
-
-This will tell Azure ML to spin up the number of workers you specify.
+Once you've performed a health check, you can create a client to send data to Triton for inference. For more information on creating a client, see the [client examples](https://aka.ms/nvidia-client-examples) in the NVIDIA documentation. There are also [Python samples at the Triton GitHub](https://aka.ms/nvidia-triton-docs).
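As an illustrative sketch only (it is not part of the article), the snippet below shows the general shape of such a client using the `tritonclient` pip package; the model name, input/output names, shape, and datatype are placeholders you would replace with your model's metadata, and the scoring host and key come from the `az ml service show` output above.

```python
import numpy as np
import tritonclient.http as tritonhttpclient

# Placeholders: take the host from your scoring URI and the key from the service.
scoring_host = "myservice.azureml.net"
service_key = "<primary-key>"
headers = {"Authorization": f"Bearer {service_key}"}

client = tritonhttpclient.InferenceServerClient(url=scoring_host, ssl=True)

# Hypothetical model metadata; use client.get_model_metadata() to discover the real values.
model_name = "bidaf-9"
infer_input = tritonhttpclient.InferInput("input_ids", [1, 256], "INT64")
infer_input.set_data_from_numpy(np.zeros((1, 256), dtype=np.int64), binary_data=True)
infer_output = tritonhttpclient.InferRequestedOutput("logits", binary_data=True)

result = client.infer(
    model_name, inputs=[infer_input], outputs=[infer_output], headers=headers
)
print(result.as_numpy("logits").shape)
```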
## Clean up resources
az ml service delete -n triton-densenet-onnx
``` # [Python](#tab/python)
-```python
-local_service.delete()
-```
-
+[!notebook-python[] (~/Azureml-examples-main/python-sdk/experimental/deploy-triton/1.bidaf-ncd-local.ipynb?name=delete-service)]
## Troubleshoot * [Troubleshoot a failed deployment](how-to-troubleshoot-deployment.md), learn how to troubleshoot and solve, or work around, common errors you may encounter when deploying a model.
-* If deployment logs show that **TritonServer failed to start**, please refer to [Nvidia’s open source documentation.](https://github.com/triton-inference-server/server)
+* If deployment logs show that **TritonServer failed to start**, please refer to [Nvidia's open source documentation.](https://github.com/triton-inference-server/server)
## Next steps
media-services Analyze Face Redaction Concept https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/analyze-face-redaction-concept.md
Azure Media Services v3 API includes a Face Detector preset that offers scalable
This article gives details about **Face Detector Preset** and shows how to use it with Azure Media Services SDK for .NET.
-
## Compliance, privacy, and security
-
+ As an important reminder, you must comply with all applicable laws in your use of analytics in Azure Media Services. You must not use Azure Media Services or any other Azure service in a manner that violates the rights of others. Before uploading any videos, including any biometric data, to the Azure Media Services service for processing and storage, you must have all the proper rights, including all appropriate consents, from the individuals in the video. To learn about compliance, privacy, and security in Azure Media Services, see the Azure [Cognitive Services Terms](https://azure.microsoft.com/support/legal/cognitive-services-compliance-and-privacy/). For Microsoft's privacy obligations and handling of your data, review Microsoft's [Privacy Statement](https://privacy.microsoft.com/PrivacyStatement), the [Online Services Terms](https://www.microsoft.com/licensing/product-licensing/products) (OST), and the [Data Processing Addendum](https://www.microsoftvolumelicensing.com/DocumentSearch.aspx?Mode=3&DocumentTypeId=67) ("DPA"). More privacy information, including on data retention, deletion/destruction, is available in the OST and [here](../video-indexer/faq.md). By using Azure Media Services, you agree to be bound by the Cognitive Services Terms, the OST, the DPA, and the Privacy Statement.

## Face redaction modes
This produces a redacted MP4 video file in a single pass without any manual edit
### Analyze mode
-The **Analyze** pass of the two-pass workflow takes a video input and produces a JSON file with a list of the face locations, Face ID's and jpg images of each detected face.
+The **Analyze** pass of the two-pass workflow takes a video input and produces a JSON file with a list of the face locations, face IDs, and JPG images of each detected face.
| Stage | File Name | Notes | | | | |
In the **Combined** or **Redact** mode, there are five different blur modes you
You can find samples of the blur types below.
-
#### Low

![Low resolution blur setting example.](./media/media-services-face-redaction/blur-1.png)
This code sample shows how the preset is passed into a Transform object during c
## Provide feedback -
migrate How To Use Azure Migrate With Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/how-to-use-azure-migrate-with-private-endpoints.md
Last updated 05/10/2020
# Using Azure Migrate with private endpoints
-This article describes how to use Azure Migrate to discover, assess, and migrate servers over a private network using [Azure private link](../private-link/private-endpoint-overview.md).
+This article describes how to use Azure Migrate to discover, assess, and migrate servers over a private network using [Azure Private Link](../private-link/private-endpoint-overview.md).
-You can use the [Azure Migrate: Discovery and Assessment](./migrate-services-overview.md#azure-migrate-discovery-and-assessment-tool) and [Azure Migrate: Server Migration](./migrate-services-overview.md#azure-migrate-server-migration-tool) tools to connect privately and securely to the Azure Migrate service over an ExpressRoute private peering or a site to site VPN connection, using Azure private link.
+You can use the [Azure Migrate: Discovery and Assessment](./migrate-services-overview.md#azure-migrate-discovery-and-assessment-tool) and [Azure Migrate: Server Migration](./migrate-services-overview.md#azure-migrate-server-migration-tool) tools to connect privately and securely to the Azure Migrate service over an ExpressRoute private peering or a site to site VPN connection, using Azure Private Link.
-The private endpoint connectivity method is recommended when there is an organizational requirement to access the Azure Migrate service and other Azure resources without traversing public networks. You can also use the private link support to use your existing ExpressRoute private peering circuits for better bandwidth or latency requirements.
+The private endpoint connectivity method is recommended when there is an organizational requirement to access the Azure Migrate service and other Azure resources without traversing public networks. With Private Link, you can also use your existing ExpressRoute private peering circuits to meet bandwidth or latency requirements.
## Support requirements
The private endpoint connectivity method is recommended when there is an organiz
Other migration tools may not be able to upload usage data to the Azure Migrate project if the public network access is disabled. The Azure Migrate project should be configured to allow traffic from all networks to receive data from other Microsoft or external [independent software vendor (ISV)](./migrate-services-overview.md#isv-integration) offerings.
-To enable public network access for the Azure Migrate project, go to the Azure Migrate **properties page** on the Azure portal, select **No**, and select **Save**.
+To enable public network access for the Azure Migrate project, sign in to the Azure portal, go to the **Azure Migrate properties** page, and select **No** > **Save**.
![Diagram that shows how to change the network access mode.](./media/how-to-use-azure-migrate-with-private-endpoints/migration-project-properties.png)
To enable public network access for the Azure Migrate project, go to the Azure M
**Considerations** | **Details** |
-**Pricing** | For pricing information, see [Azure blob pricing](https://azure.microsoft.com/pricing/details/storage/page-blobs/) and [Azure private link pricing](https://azure.microsoft.com/pricing/details/private-link/).
+**Pricing** | For pricing information, see [Azure blob pricing](https://azure.microsoft.com/pricing/details/storage/page-blobs/) and [Azure Private Link pricing](https://azure.microsoft.com/pricing/details/private-link/).
**Virtual network requirements** | The ExpressRoute/VPN gateway endpoint should reside in the selected virtual network or a virtual network connected to it. You may need ~15 IP addresses in the virtual network. ## Create a project with private endpoint connectivity
Use this [article](./create-manage-projects.md#create-a-project-for-the-first-ti
> You cannot change the connectivity method to private endpoint connectivity for existing Azure Migrate projects. In the **Advanced** configuration section, provide the below details to create a private endpoint for your Azure Migrate project.
-- In **Connectivity method**, choose **Private endpoint**.
-- In **Disable public endpoint access**, keep the default setting **No**. Some migration tools may not be able to upload usage data to the Azure Migrate project if public network access is disabled. [Learn more.](#other-integrated-tools)
-- In **Virtual network subscription**, select the subscription for the private endpoint virtual network.
-- In **Virtual network**, select the virtual network for the private endpoint. The Azure Migrate appliance and other software components that need to connect to the Azure Migrate project must be on this network or a connected virtual network.
-- In **Subnet**, select the subnet for the private endpoint.
+1. In **Connectivity method**, choose **Private endpoint**.
+2. In **Disable public endpoint access**, keep the default setting **No**. Some migration tools may not be able to upload usage data to the Azure Migrate project if public network access is disabled. [Learn more.](#other-integrated-tools)
+3. In **Virtual network subscription**, select the subscription for the private endpoint virtual network.
+4. In **Virtual network**, select the virtual network for the private endpoint. The Azure Migrate appliance and other software components that need to connect to the Azure Migrate project must be on this network or a connected virtual network.
+5. In **Subnet**, select the subnet for the private endpoint.
-Select **Create**. Wait a few minutes for the Azure Migrate project to deploy. Do not close this page while the project creation is in progress.
+ ![Create project](./media/how-to-use-azure-migrate-with-private-endpoints/create-project.png)
-![Create project](./media/how-to-use-azure-migrate-with-private-endpoints/create-project.png)
+6. Select **Create** to create the migrate project and attach a private endpoint to it. Wait a few minutes for the Azure Migrate project to deploy. Do not close this page while the project creation is in progress.
-
-This creates a migrate project and attaches a private endpoint to it.
-
-## Discover and assess servers for migration using Azure private link
+## Discover and assess servers for migration using Azure Private Link
### Set up the Azure Migrate appliance
This creates a migrate project and attaches a private endpoint to it.
> [!Important] > Do not close the Discover machines page during the creation of resources. - At this step, Azure Migrate creates a key vault, storage account, Recovery Services vault (only for agentless VMware migrations), and a few internal resources and attaches a private endpoint to each resource. The private endpoints are created in the virtual network selected during the project creation.
- - Once the private endpoints are created, the DNS CNAME resource records for the Azure Migrate resources are updated to an alias in a subdomain with the prefix 'privatelink'. By default, Azure Migrate also creates a private DNS zone corresponding to the 'privatelink' subdomain for each resource type and inserts DNS A records for the associated private endpoints. This enables the Azure Migrate appliance and other software components residing in the source network to reach the Azure Migrate resource endpoints on private IP addresses.
+ - Once the private endpoints are created, the DNS CNAME resource records for the Azure Migrate resources are updated to an alias in a subdomain with the prefix *privatelink*. By default, Azure Migrate also creates a private DNS zone corresponding to the *privatelink* subdomain for each resource type and inserts DNS A records for the associated private endpoints. This enables the Azure Migrate appliance and other software components residing in the source network to reach the Azure Migrate resource endpoints on private IP addresses.
- Azure Migrate also enables a [managed identity](../active-directory/managed-identities-azure-resources/overview.md) for the migrate project, and grants permissions to the managed identity to securely access the storage account. 4. After the key is successfully generated, copy the key details to configure and register the appliance.
Azure Migrate: Discovery and assessment use a lightweight Azure Migrate applianc
> [!Note] > The option to deploy an appliance using a template (OVA for servers on VMware environment and VHD Hyper-V environment) isn't supported for Azure Migrate projects with private endpoint connectivity.
-To set up the appliance, download the zipped file containing the installer script from the portal. Copy the zipped file on the server that will host the appliance. After downloading the zipped file, verify the file security and run the installer script to deploy the appliance.
+To set up the appliance:
+ 1. Download the zipped file containing the installer script from the portal.
+ 2. Copy the zipped file to the server that will host the appliance.
+ 3. Verify the file security of the downloaded zipped file (a hash-check sketch follows this list).
+ 4. Run the installer script to deploy the appliance.
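To verify the file security (step 3), you can compare the SHA256 hash of the downloaded zipped file with the published hash value for your scenario. A minimal PowerShell sketch; the file path below is a hypothetical example:

```powershell
# Compute the SHA256 hash of the downloaded installer package.
# The path is a placeholder - use your actual download location.
$zipPath = "C:\Downloads\AzureMigrateInstaller-VMware-public-PrivateLink.zip"
Get-FileHash -Path $zipPath -Algorithm SHA256

# Compare the reported hash with the value published for your scenario before extracting the file.
```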
-Here are the download links for each of the scenario with their hash values:
+Here are the download links for each scenario:
Scenario | Download link | Hash value | |
Make sure the server meets the [hardware requirements](./migrate-appliance.md) f
3. Change the PowerShell directory to the folder containing the contents extracted from the downloaded zipped file. 4. Run the script **AzureMigrateInstaller.ps1**, as follows:
- ``` PS C:\Users\administrator\Desktop\AzureMigrateInstaller-VMware-public-PrivateLink> .\AzureMigrateInstaller.ps1```
+ ```
+ PS C:\Users\administrator\Desktop\AzureMigrateInstaller-VMware-public-PrivateLink> .\AzureMigrateInstaller.ps1
+ ```
5. After the script runs successfully, it launches the appliance configuration manager so that you can configure the appliance. If you encounter any issues, review the script logs at C:\ProgramData\Microsoft Azure\Logs\AzureMigrateScenarioInstaller_<em>Timestamp</em>.log.
Open a browser on any machine that can connect to the appliance server, and open
- **Connectivity**: The appliance checks for access to the required URLs. If the server uses a proxy: - Select **Set up proxy** to specify the proxy address `http://ProxyIPAddress` or `http://ProxyFQDN` and listening port. - Specify credentials if the proxy needs authentication. Only HTTP proxy is supported.
- - If you want, you can add a list of URLs/IP addresses that should bypass the proxy server. If you are using ExpressRoute private peering, ensure that you bypass these [URLs](./replicate-using-expressroute.md#configure-proxy-bypass-rules-on-the-azure-migrate-appliance-for-vmware-agentless-migrations).
- - You need to select **Save** to register the configuration if you have updated the proxy server details or added URLs/IP addresses to bypass proxy.
+ - You can add a list of URLs/IP addresses that should bypass the proxy server. If you are using ExpressRoute private peering, ensure that you bypass these [URLs](./replicate-using-expressroute.md#configure-proxy-bypass-rules-on-the-azure-migrate-appliance-for-vmware-agentless-migrations).
+ - Select **Save** to register the configuration if you have updated the proxy server details or added URLs/IP addresses to bypass proxy.
> [!Note]
- > If you are getting an error with aka.ms/* link during connectivity check and you do not want the appliance to access this URL over the internet, you need to disable the auto update service on the appliance by following the steps [**here**](./migrate-appliance.md#turn-off-auto-update). After the auto-update has been disabled, the aka.ms/* URL connectivity check will be skipped.
+ > If you get an error with an aka.ms/* link during the connectivity check and you do not want the appliance to access this URL over the internet, disable the auto update service on the appliance by following the steps [**here**](./migrate-appliance.md#turn-off-auto-update). After auto-update has been disabled, the aka.ms/* URL connectivity check will be skipped.
- **Time sync**: The time on the appliance should be in sync with internet time for discovery to work properly.
- - **Install updates**: The appliance ensures that the latest updates are installed. After the check completes, you can select **View appliance services** to see the status and versions of the services running on the appliance server.
+ - **Install updates**: The appliance ensures that the latest updates are installed. After the check completes, select **View appliance services** to see the status and versions of the services running on the appliance server.
> [!Note] > If you have chosen to disable auto update service on the appliance, you can update the appliance services manually to get the latest versions of the services by following the steps [**here**](./migrate-appliance.md#manually-update-an-older-version).
- - **Install VDDK**: (_Needed only for VMware appliance)_ The appliance checks that VMware vSphere Virtual Disk Development Kit (VDDK) is installed. If it isn't installed, download VDDK 6.7 from VMware, and extract the downloaded zip contents to the specified location on the appliance, as provided in the **Installation instructions**.
+ - **Install VDDK**: (_Needed only for VMware appliance)_ The appliance checks that VMware vSphere Virtual Disk Development Kit (VDDK) is installed. If it isn't installed, download VDDK 6.7 from VMware, and extract the downloaded zipped contents to the specified location on the appliance, as provided in the **Installation instructions**.
#### Register the appliance and start continuous discovery
-After the prerequisites check has completed, follow these steps to register the appliance and start continuous discovery for respective scenarios:
-[VMware VMs](./tutorial-discover-vmware.md#register-the-appliance-with-azure-migrate),
-[Hyper-V VMs](./tutorial-discover-hyper-v.md#register-the-appliance-with-azure-migrate),
-[Physical Servers](./tutorial-discover-physical.md#register-the-appliance-with-azure-migrate),
-[AWS VMs](./tutorial-discover-aws.md#register-the-appliance-with-azure-migrate),
-[GCP VMs](./tutorial-discover-gcp.md#register-the-appliance-with-azure-migrate).
+After the prerequisites check has completed, follow the steps to register the appliance and start continuous discovery for respective scenarios:
+- [VMware VMs](./tutorial-discover-vmware.md#register-the-appliance-with-azure-migrate)
+- [Hyper-V VMs](./tutorial-discover-hyper-v.md#register-the-appliance-with-azure-migrate)
+- [Physical Servers](./tutorial-discover-physical.md#register-the-appliance-with-azure-migrate)
+- [AWS VMs](./tutorial-discover-aws.md#register-the-appliance-with-azure-migrate)
+- [GCP VMs](./tutorial-discover-gcp.md#register-the-appliance-with-azure-migrate)
>[!Note]
-> If you are getting DNS resolution issues during appliance registration or at the time of starting discovery, ensure that Azure Migrate resources created during the **Generate key** step on portal are reachable from the on-premises server hosting the Azure Migrate appliance. [Learn more on how to verify network connectivity](#troubleshoot-network-connectivity).
+> If you get DNS resolution issues during appliance registration or when starting discovery, ensure that the Azure Migrate resources created during the **Generate key** step in the portal are reachable from the on-premises server hosting the Azure Migrate appliance. [Learn more on how to verify network connectivity](#troubleshoot-network-connectivity).
### Assess your servers for migration to Azure After the discovery is complete, assess your servers ([VMware VMs](./tutorial-assess-vmware-azure-vm.md), [Hyper-V VMs](./tutorial-assess-hyper-v.md), [physical servers](./tutorial-assess-vmware-azure-vm.md), [AWS VMs](./tutorial-assess-aws.md), [GCP VMs](./tutorial-assess-gcp.md)) for migration to Azure VMs or Azure VMware Solution (AVS), using the Azure Migrate: Discovery and Assessment tool. You can also [assess your on-premises machines](./tutorial-discover-import.md#prepare-the-csv) with the Azure Migrate: Discovery and Assessment tool using an imported comma-separated values (CSV) file.
-## Migrate servers to Azure using Azure private link
+## Migrate servers to Azure using Azure Private Link
The following sections describe the steps required to use Azure Migrate with [private endpoints](../private-link/private-endpoint-overview.md) for migrations using ExpressRoute private peering or VPN connections. This article shows a proof-of-concept deployment path for agent-based replications to migrate your [VMware VMs](./tutorial-migrate-vmware-agent.md), [Hyper-V VMs](./tutorial-migrate-physical-virtual-machines.md), [physical servers](./tutorial-migrate-physical-virtual-machines.md), [VMs running on AWS](./tutorial-migrate-aws-virtual-machines.md), [VMs running on GCP](./tutorial-migrate-gcp-virtual-machines.md), or VMs running on a different virtualization provider using Azure private endpoints. You can use a similar approach for performing [agentless Hyper-V migrations](./tutorial-migrate-hyper-v.md) using private link. >[!Note]
->[Agentless VMware migrations](./tutorial-assess-physical.md) require Internet access or connectivity via ExpressRoute Microsoft peering.
+>[Agentless VMware migrations](./tutorial-assess-physical.md) require Internet access or connectivity via ExpressRoute Microsoft peering.
### Set up a replication appliance for migration
The following diagram illustrates the agent-based replication workflow with priv
![Replication architecture](./media/how-to-use-azure-migrate-with-private-endpoints/replication-architecture.png)
-The tool uses a replication appliance to replicate your servers to Azure. Use this article to [prepare and set up a machine for the replication appliance. ](./tutorial-migrate-physical-virtual-machines.md#prepare-a-machine-for-the-replication-appliance)
+The tool uses a replication appliance to replicate your servers to Azure. See this article to [prepare and set up a machine for the replication appliance. ](./tutorial-migrate-physical-virtual-machines.md#prepare-a-machine-for-the-replication-appliance)
After you set up the replication appliance, use the following instructions to create the required resources for migration.
After you set up the replication appliance, use the following instructions to cr
- This creates a Recovery Services vault in the background and enables a managed identity for the vault. A Recovery Services vault is an entity that contains the replication information of servers and is used to trigger replication operations. - If the Azure Migrate project has private endpoint connectivity, a private endpoint is created for the Recovery Services vault. This adds five fully qualified private names (FQDNs) to the private endpoint, one for each microservice linked to the Recovery Services vault. - The five domain names are formatted in this pattern: <br/> _{Vault-ID}-asr-pod01-{type}-.{target-geo-code}_.privatelink.siterecovery.windowsazure.com
- - By default, Azure Migrate automatically creates a private DNS zone and adds DNS A records for the Recovery Services vault microservices. The private DNS zone is then linked to the private endpoint virtual network. This allows the on-premises replication appliance to resolve the fully qualified domain names to their private IP addresses.
+ - By default, Azure Migrate automatically creates a private DNS zone and adds DNS A records for the Recovery Services vault microservices. The private DNS zone links to the private endpoint virtual network and allows the on-premises replication appliance to resolve the fully qualified domain names to their private IP addresses.
4. Before you register the replication appliance, ensure that the vault's private link FQDNs are reachable from the machine hosting the replication appliance. [Learn more on how to verify network connectivity.](#troubleshoot-network-connectivity) 5. Once you verify the connectivity, download the appliance setup and key file, run the installation process, and register the appliance to Azure Migrate. Review the [detailed steps here](./tutorial-migrate-physical-virtual-machines.md#set-up-the-replication-appliance). After you set up the replication appliance, follow these instructions to [install the mobility service](./tutorial-migrate-physical-virtual-machines.md#install-the-mobility-service) on the machines you want to migrate.
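Before registering the replication appliance, you can confirm from that machine that a vault microservice FQDN resolves to a private IP and that port 443 is reachable. A minimal sketch; the host name below is only a hypothetical example of the FQDN pattern described above:

```powershell
# Hypothetical private link FQDN following the pattern
# {Vault-ID}-asr-pod01-{type}-.{target-geo-code}.privatelink.siterecovery.windowsazure.com
$vaultFqdn = "contosovault-asr-pod01-rcm1-.eus.privatelink.siterecovery.windowsazure.com"

# Confirm the name resolves to a private IP address and that HTTPS connectivity succeeds
Resolve-DnsName -Name $vaultFqdn
Test-NetConnection -ComputerName $vaultFqdn -Port 443
```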
-### Replicate servers to Azure using Azure private link
+### Replicate servers to Azure using Azure Private Link
-Now, follow [these steps](./tutorial-migrate-physical-virtual-machines.md#replicate-machines) to select servers for replication.
+Follow [these steps](./tutorial-migrate-physical-virtual-machines.md#replicate-machines) to select servers for replication.
In **Replicate** > **Target settings** > **Cache/Replication storage account**, use the drop-down to select a storage account to replicate over a private link.
You can find the details of the Recovery Services vault on the Azure Migrate: Se
![Overview page on the Azure Migrate hub](./media/how-to-use-azure-migrate-with-private-endpoints/hub-overview.png)
-2. On the left pane, select **Properties**. Make note of the Recovery Services vault name and managed identity ID. The vault will have _Private endpoint_ as the **connectivity type** and _Other_ as the **replication type**. You will need this information while providing access to the vault.
+2. On the left pane, select **Properties**. Make a note of the Recovery Services vault name and managed identity ID. The vault will have _Private endpoint_ as the **connectivity type** and _Other_ as the **replication type**. You will need this information while providing access to the vault.
![Azure Migrate: Server Migration properties page](./media/how-to-use-azure-migrate-with-private-endpoints/vault-info.png) **_Grant the required permissions to access the storage account_**
- The managed identity of the vault must be granted the following role permissions on the storage account required for replication. In this case, you must create the storage account in advance.
+   You must grant the managed identity of the vault the following role permissions on the storage account required for replication. In this case, you must create the storage account in advance. (A scripted sketch of a role assignment follows the note below.)
>[!Note] > For migrating Hyper-V VMs to Azure using private link, you must grant access to both the replication storage account and cache storage account.
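If you prefer to script the role assignment, the following Azure PowerShell sketch shows the general shape. The resource IDs are placeholders and the role name is an assumption used only for illustration; assign the role permissions called out for your scenario:

```powershell
# Placeholder values - replace with the managed identity ID noted from the vault
# properties page and the resource ID of your replication/cache storage account.
$vaultIdentityObjectId = "<managed-identity-object-id>"
$storageAccountId = "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<account>"

# Example role assignment; "Contributor" is an illustrative role only.
New-AzRoleAssignment -ObjectId $vaultIdentityObjectId `
    -RoleDefinitionName "Contributor" `
    -Scope $storageAccountId
```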
To replicate using ExpressRoute with private peering, [create a private endpoint
>[!Note] >
-> - You can create private endpoints only on a General Purpose v2 (GPv2) storage account. For pricing information, see [Azure Page Blobs pricing](https://azure.microsoft.com/pricing/details/storage/page-blobs/) and [Azure private link pricing](https://azure.microsoft.com/pricing/details/private-link/)
+> - You can create private endpoints only on a General Purpose v2 (GPv2) storage account. For pricing information, see [Azure Page Blobs pricing](https://azure.microsoft.com/pricing/details/storage/page-blobs/) and [Azure Private Link pricing](https://azure.microsoft.com/pricing/details/private-link/)
-The private endpoint for the storage account should be created in the same virtual network as the Azure Migrate project private endpoint or another virtual network connected to this network.
+Create the private endpoint for the storage account in the same virtual network as the Azure Migrate project private endpoint, or in another virtual network connected to this network.
Select **Yes** and integrate with a private DNS zone. The private DNS zone helps in routing the connections from the virtual network to the storage account over a private link. Selecting **Yes** automatically links the DNS zone to the virtual network and adds the DNS records for the resolution of new IPs and fully qualified domain names created. Learn more about [private DNS zones.](../dns/private-dns-overview.md)
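If you prefer to script this step instead of using the portal, here is a minimal Azure PowerShell sketch that attaches a private endpoint to the storage account's blob sub-resource. All resource names are hypothetical:

```powershell
# Hypothetical resource names - replace with your own
$rg = "MigratePrivateLinkRG"
$storageAccount = Get-AzStorageAccount -ResourceGroupName $rg -Name "replcachestorage01"
$vnet = Get-AzVirtualNetwork -ResourceGroupName $rg -Name "migrate-vnet"
$subnet = $vnet.Subnets | Where-Object { $_.Name -eq "pe-subnet" }

# Connect the private endpoint to the storage account's blob sub-resource
$connection = New-AzPrivateLinkServiceConnection -Name "cache-storage-connection" `
    -PrivateLinkServiceId $storageAccount.Id -GroupId "blob"

New-AzPrivateEndpoint -ResourceGroupName $rg -Name "cache-storage-pe" `
    -Location $vnet.Location -Subnet $subnet -PrivateLinkServiceConnection $connection
```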
-If the user creating the private endpoint is also the owner of the storage account, the private endpoint will be auto-approved. Otherwise, the owner of the storage account must approve the private endpoint for usage. To approve or reject a requested private endpoint connection, go to **Private endpoint connections** under **Networking** on the storage account page.
+If the user creating the private endpoint is also the storage account owner, the private endpoint creation will be auto-approved. Otherwise, the owner of the storage account must approve the private endpoint for usage. To approve or reject a requested private endpoint connection, go to **Private endpoint connections** under **Networking** on the storage account page.
Review the status of the private endpoint connection state before proceeding.
Next, follow these instructions to [review and start replication](./tutorial-mig
Make sure the private endpoint is in an approved state. 1. Go to the Azure Migrate: Discovery and Assessment and Server Migration properties page. 2. The properties page contains the list of private endpoints and private link FQDNs that were automatically created by Azure Migrate. 3. Select the private endpoint you want to diagnose.
- 1. Validate that the connection state is Approved.
- 2. If the connection is in a Pending state, you need to get it approved.
- 3. You may also navigate to the private endpoint resource and review if the virtual network matches the Migrate project private endpoint virtual network.
+ a. Validate that the connection state is Approved.
+ b. If the connection is in a Pending state, you need to get it approved.
+ c. You may also navigate to the private endpoint resource and review if the virtual network matches the Migrate project private endpoint virtual network.
- ![View Private Endpoint connection](./media/how-to-use-azure-migrate-with-private-endpoints/private-endpoint-connection.png)
+ ![View Private Endpoint connection](./media/how-to-use-azure-migrate-with-private-endpoints/private-endpoint-connection.png)
### Validate the Data flow through the private endpoints
Review the data flow metrics to verify the traffic flow through private endpoint
### Verify DNS resolution
-The on-premises appliance (or replication provider) will access the Azure Migrate resources using their fully qualified private link domain names (FQDNs). You may require additional DNS settings to resolve the private IP address of the private endpoints from the source environment. [Use this article](../private-link/private-endpoint-dns.md#on-premises-workloads-using-a-dns-forwarder) to understand the DNS configuration scenarios that can help troubleshoot any network connectivity issues.
+The on-premises appliance (or replication provider) will access the Azure Migrate resources using their fully qualified private link domain names (FQDNs). You may require additional DNS settings to resolve the private IP address of the private endpoints from the source environment. [See this article](../private-link/private-endpoint-dns.md#on-premises-workloads-using-a-dns-forwarder) to understand the DNS configuration scenarios that can help troubleshoot any network connectivity issues.
To validate the private link connection, perform a DNS resolution of the Azure Migrate resource endpoints (private link resource FQDNs) from the on-premises server hosting the Migrate appliance and ensure that it resolves to a private IP address. The private endpoint details and private link resource FQDNs' information is available in the Discovery and Assessment and Server Migration properties pages. Select **Download DNS settings** to view the list.
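For example, from the server hosting the appliance you can run a quick lookup against one of the private link FQDNs shown in the downloaded DNS settings. The host name below is a placeholder:

```powershell
# The FQDN is a placeholder - use a private link FQDN from the downloaded DNS settings.
Resolve-DnsName -Name "migratestorage01.privatelink.blob.core.windows.net"

# The lookup should return an A record with a private IP address from the
# private endpoint subnet, not a public IP address.
```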
If the DNS resolution is incorrect, follow these steps:
If the DNS resolution is not working as described in the previous section, there might be an issue with your Private DNS Zone. #### Confirm that the required Private DNS Zone resource exists
-By default, Azure Migrate also creates a private DNS zone corresponding to the 'privatelink' subdomain for each resource type. The private DNS zone will be created in the same Azure resource group as the private endpoint resource group. The Azure resource group should contain private DNS zone resources with the following format:
+By default, Azure Migrate also creates a private DNS zone corresponding to the *privatelink* subdomain for each resource type. The private DNS zone will be created in the same Azure resource group as the private endpoint resource group. The Azure resource group should contain private DNS zone resources with the following format:
- privatelink.vaultcore.azure.net for the key vault - privatelink.blob.core.windows.net for the storage account - privatelink.siterecovery.windowsazure.com for the recovery services vault (for Hyper-V and agent-based replications) - privatelink.prod.migration.windowsazure.com - migrate project, assessment project, and discovery site.
-The private DNS zone will be automatically created by Azure Migrate (except for the cache/replication storage account selected by the user). You can locate the linked private DNS zone by navigating to the private endpoint page and selecting DNS configurations. You should see the private DNS zone under the private DNS integration section.
+Azure Migrate automatically creates the private DNS zone (except for the cache/replication storage account selected by the user). You can locate the linked private DNS zone by navigating to the private endpoint page and selecting DNS configurations. Here, you should see the private DNS zone under the private DNS integration section.
![DNS configuration screenshot](./media/how-to-use-azure-migrate-with-private-endpoints/dns-configuration.png)
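You can also list the private DNS zones in the resource group from Azure PowerShell to confirm that they exist and contain A records for the private endpoints. A minimal sketch with placeholder names:

```powershell
# Placeholder resource group - use the resource group that contains the private endpoints.
$rg = "MigratePrivateLinkRG"

# List the privatelink DNS zones created for the Azure Migrate resources
Get-AzPrivateDnsZone -ResourceGroupName $rg | Select-Object Name

# Inspect the A records in one of the zones, for example the blob zone
Get-AzPrivateDnsRecordSet -ResourceGroupName $rg `
    -ZoneName "privatelink.blob.core.windows.net" -RecordType A
```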
migrate Migrate Support Matrix Hyper V Migration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/migrate-support-matrix-hyper-v-migration.md
You can select up to 10 VMs at once for replication. If you want to migrate more
| **Linux boot** | If /boot is on a dedicated partition, it should reside on the OS disk, and not be spread across multiple disks.<br/> If /boot is part of the root (/) partition, then the '/' partition should be on the OS disk, and not span other disks. | | **UEFI boot** | Supported. UEFI-based VMs will be migrated to Azure generation 2 VMs. | | **UEFI - Secure boot** | Not supported for migration.|
-| **Disk size** | 2 TB for the OS disk (BIOS boot), 4 TB for the OS disk (UEFI boot), 4 TB for the data disks.|
+| **Disk size** | Up to 2 TB for the OS disk, 8 TB for the data disks.|
| **Disk number** | A maximum of 16 disks per VM.| | **Encrypted disks/volumes** | Not supported for migration.| | **RDM/passthrough disks** | Not supported for migration.|
migrate Migrate Support Matrix Physical Migration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/migrate-support-matrix-physical-migration.md
The table summarizes support for physical servers you want to migrate using agen
**UEFI boot** | Supported. UEFI-based machines will be migrated to Azure generation 2 VMs. <br/><br/> The OS disk should have up to four partitions, and volumes should be formatted with NTFS. **UEFI - Secure boot** | Not supported for migration. **Target disk** | Machines can only be migrated to managed disks (standard HDD, standard SSD, premium SSD) in Azure.
-**Disk size** | 2 TB OS disk; 32 TB for data disks.
+**Disk size** | Up to 2 TB OS disk for Gen 1 VMs; up to 4 TB OS disk for Gen 2 VMs; 32 TB for data disks.
**Disk limits** | Up to 63 disks per machine. **Encrypted disks/volumes** | Machines with encrypted disks/volumes aren't supported for migration. **Shared disk cluster** | Not supported.
migrate Migrate Support Matrix Vmware Migration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/migrate-support-matrix-vmware-migration.md
The table summarizes agentless migration requirements for VMware VMs.
**Linux VMs in Azure** | Some VMs might require changes so that they can run in Azure.<br/><br/> For Linux, Azure Migrate makes the changes automatically for these operating systems:<br/> - Red Hat Enterprise Linux 7.8, 7.7, 7.6, 7.5, 7.4, 7.0, 6.x<br/> - Cent OS 7.7, 7.6, 7.5, 7.4, 6.x</br> - SUSE Linux Enterprise Server 12 SP1+<br/> - SUSE Linux Enterprise Server 15 SP1 <br/>- Ubuntu 19.04, 19.10, 14.04LTS, 16.04LTS, 18.04LTS<br/> - Debian 7, 8, 9 <br/> Oracle Linux 7.7, 7.7-CI<br/> For other operating systems you make the [required changes](prepare-for-migration.md#verify-required-changes-before-migrating) manually. **Linux boot** | If /boot is on a dedicated partition, it should reside on the OS disk, and not be spread across multiple disks.<br/> If /boot is part of the root (/) partition, then the '/' partition should be on the OS disk, and not span other disks. **UEFI boot** | Supported. UEFI-based VMs will be migrated to Azure generation 2 VMs.
-**Disk size** | 2 TB OS disk; 32 TB for data disks.
+**Disk size** | Up to 2 TB OS disk for Gen 1 and Gen 2 VMs; 32 TB for data disks.
**Disk limits** | Up to 60 disks per VM. **Encrypted disks/volumes** | VMs with encrypted disks/volumes aren't supported for migration. **Shared disk cluster** | Not supported.
The table summarizes VMware VM support for VMware VMs you want to migrate using
**UEFI boot** | Supported. UEFI-based VMs will be migrated to Azure generation 2 VMs. **UEFI - Secure boot** | Not supported for migration. **Target disk** | VMs can only be migrated to managed disks (standard HDD, standard SSD, premium SSD) in Azure.
-**Disk size** | 2 TB OS disk; 32 TB for data disks.
+**Disk size** | Up to 2 TB OS disk for Gen 1 VMs; up to 4 TB OS disk for Gen 2 VMs; 32 TB for data disks.
**Disk limits** | Up to 63 disks per VM. **Encrypted disks/volumes** | VMs with encrypted disks/volumes aren't supported for migration. **Shared disk cluster** | Not supported.
migrate Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/whats-new.md
[Azure Migrate](migrate-services-overview.md) helps you to discover, assess, and migrate on-premises servers, apps, and data to the Microsoft Azure cloud. This article summarizes new releases and features in Azure Migrate.
+## Update (May 2021)
+- Migration of VMs and physical servers with OS disks up to 4 TB is now supported using the agent-based migration method.
+ ## Update (March 2021) - Support to provide multiple server credentials on Azure Migrate appliance to discover installed applications (software inventory), agentless dependency analysis and discover SQL Server instances and databases in your VMware environment. [Learn more](tutorial-discover-vmware.md#provide-server-credentials) - Discovery and assessment of SQL Server instances and databases running in your VMware environment is now in preview. [Learn More](concepts-azure-sql-assessment-calculation.md) Refer to the [Discovery](tutorial-discover-vmware.md) and [assessment](tutorial-assess-sql.md) tutorials to get started.
network-watcher Connection Monitor Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/network-watcher/connection-monitor-overview.md
Use Log Analytics to create custom views of your monitoring data. All data that
#### Metrics in Azure Monitor
-In connection monitors that were created before the Connection Monitor experience, all four metrics are available: % Probes Failed, AverageRoundtripMs, ChecksFailedPercent (Preview), and RoundTripTimeMs (Preview). In connection monitors that were created in the Connection Monitor experience, data is available only for the metrics that are tagged with *(Preview)*.
+In connection monitors that were created before the Connection Monitor experience, all four metrics are available: % Probes Failed, AverageRoundtripMs, ChecksFailedPercent, and RoundTripTimeMs. In connection monitors that were created in the Connection Monitor experience, data is available only for ChecksFailedPercent and RoundTripTimeMs metrics.
:::image type="content" source="./media/connection-monitor-2-preview/monitor-metrics.png" alt-text="Screenshot showing metrics in Connection Monitor" lightbox="./media/connection-monitor-2-preview/monitor-metrics.png":::
When you use metrics, set the resource type as Microsoft.Network/networkWatchers
| AverageRoundtripMs (classic) | Avg. Round-trip Time (ms) (classic) | Milliseconds | Average | Average network RTT for connectivity monitoring probes sent between source and destination. | No dimensions | | ChecksFailedPercent | % Checks Failed | Percentage | Average | Percentage of failed checks for a test. | ConnectionMonitorResourceId <br>SourceAddress <br>SourceName <br>SourceResourceId <br>SourceType <br>Protocol <br>DestinationAddress <br>DestinationName <br>DestinationResourceId <br>DestinationType <br>DestinationPort <br>TestGroupName <br>TestConfigurationName <br>Region | | RoundTripTimeMs | Round-trip Time (ms) | Milliseconds | Average | RTT for checks sent between source and destination. This value isn't averaged. | ConnectionMonitorResourceId <br>SourceAddress <br>SourceName <br>SourceResourceId <br>SourceType <br>Protocol <br>DestinationAddress <br>DestinationName <br>DestinationResourceId <br>DestinationType <br>DestinationPort <br>TestGroupName <br>TestConfigurationName <br>Region |
-| TestResult | Test Result | Count | Average | Connection monitor test result | SourceAddress <br>SourceName <br>SourceResourceId <br>SourceType <br>Protocol <br>DestinationAddress <br>DestinationName <br>DestinationResourceId <br>DestinationType <br>DestinationPort <br>TestGroupName <br>TestConfigurationName <br>SourceIP <br>DestinationIP <br>SourceSubnet <br>DestinationSubnet |
+| TestResult | Test Result | Count | Average | Connection monitor test result <br>Interpretation of result values is as follows: <br>0- Indeterminate <br>1- Pass <br>2- Warning <br>3- Fail| SourceAddress <br>SourceName <br>SourceResourceId <br>SourceType <br>Protocol <br>DestinationAddress <br>DestinationName <br>DestinationResourceId <br>DestinationType <br>DestinationPort <br>TestGroupName <br>TestConfigurationName <br>SourceIP <br>DestinationIP <br>SourceSubnet <br>DestinationSubnet |
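If you want to pull these metrics programmatically rather than through the portal, a minimal Azure PowerShell sketch follows; the connection monitor resource ID is a placeholder:

```powershell
# Placeholder connection monitor resource ID under the regional network watcher
$cmId = "/subscriptions/<sub-id>/resourceGroups/NetworkWatcherRG/providers/Microsoft.Network/networkWatchers/NetworkWatcher_eastus/connectionMonitors/cm-web-test"

# Retrieve the last hour of ChecksFailedPercent and TestResult values
Get-AzMetric -ResourceId $cmId -MetricName "ChecksFailedPercent","TestResult" `
    -StartTime (Get-Date).AddHours(-1) -EndTime (Get-Date) `
    -TimeGrain 00:01:00 -AggregationType Average

# TestResult values map to: 0 = Indeterminate, 1 = Pass, 2 = Warning, 3 = Fail
```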
#### Metric based alerts for Connection Monitor
You can create metric alerts on connection monitors using the methods below
1. From Azure Monitor - To create an alert in Azure Monitor: 1. Choose the connection monitor resource that you created in Connection Monitor. 1. Ensure that **Metric** shows up as signal type for the connection monitor.
 1. In **Add Condition**, for the **Signal Name**, select **ChecksFailedPercent** or **RoundTripTimeMs**.
 1. For **Signal Type**, choose **Metrics**. For example, select **ChecksFailedPercent**.
1. All of the dimensions for the metric are listed. Choose the dimension name and dimension value. For example, select **Source Address** and then enter the IP address of any source in your connection monitor. 1. In **Alert Logic**, fill in the following details: * **Condition Type**: **Static**.
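The same alert can also be created from Azure PowerShell. A minimal sketch, assuming placeholder resource names and an illustrative 5% failure threshold:

```powershell
# Placeholder connection monitor resource ID
$targetId = "/subscriptions/<sub-id>/resourceGroups/NetworkWatcherRG/providers/Microsoft.Network/networkWatchers/NetworkWatcher_eastus/connectionMonitors/cm-web-test"

# Alert when the ChecksFailedPercent metric averages above 5%
$criteria = New-AzMetricAlertRuleV2Criteria -MetricName "ChecksFailedPercent" `
    -TimeAggregation Average -Operator GreaterThan -Threshold 5

Add-AzMetricAlertRuleV2 -Name "cm-checks-failed-alert" -ResourceGroupName "NetworkWatcherRG" `
    -TargetResourceId $targetId -Condition $criteria -Severity 2 `
    -WindowSize (New-TimeSpan -Minutes 5) -Frequency (New-TimeSpan -Minutes 1)
```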
For networks whose sources are Azure VMs, the following issues can be detected:
* BGP isn't enabled on the gateway connection. * The DIP probe is down at the load balancer.
+## FAQ
+
+### Are classic VMs supported?
+No, Connection Monitor does not support classic VMs. We recommend that you migrate IaaS resources from classic to Azure Resource Manager because classic resources will be [deprecated](../virtual-machines/classic-vm-deprecation.md). See this article to understand [how to migrate](../virtual-machines/migration-classic-resource-manager-overview.md).
+ ## Next Steps * Learn [How to create Connection Monitor using Azure portal](./connection-monitor-create-using-portal.md)
purview Purview Connector Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/purview-connector-overview.md
details.
||[Azure SQL Database](register-scan-azure-sql-database.md)|Yes| Yes| No| Yes| Yes| Yes| ||[Azure SQL Database Managed Instance](register-scan-azure-sql-database-managed-instance.md)|Yes| Yes| No| Yes| Yes| Yes| ||[Azure Synapse Analytics (formerly SQL DW)](register-scan-azure-synapse-analytics.md)|Yes| Yes| No| Yes| Yes| Yes|
-|Database|[Oracle DB](register-scan-oracle-source.md)|Yes| Yes| No| No| No| Yes|
+|Database|[Hive Metastore DB](register-scan-hive-metastore-source.md)|Yes| Yes| No| No| No| Yes|
+||[Oracle DB](register-scan-oracle-source.md)|Yes| Yes| No| No| No| Yes|
||[SQL Server](register-scan-on-premises-sql-server.md)|Yes| Yes| No| Yes| Yes| Yes| ||[Teradata](register-scan-teradata-source.md)|Yes| Yes| No| No| No| Yes| |Power BI|[Power BI](register-scan-power-bi-tenant.md)|Yes| Yes| No| No| No| Yes|
purview Register Scan Hive Metastore Source https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/register-scan-hive-metastore-source.md
+
+ Title: Register Hive Metastore database and setup scans in Azure Purview
+description: This article outlines how to register Hive Metastore database in Azure Purview and set up a scan.
+ Last updated: 5/17/2021
+# Register and Scan Hive Metastore Database
+
+This article outlines how to register a Hive Metastore database in
+Purview and set up a scan.
+
+## Supported Capabilities
+
+The Hive Metastore source supports full scans to extract metadata from a **Hive Metastore database** and fetches lineage between data assets. The supported platforms are Apache Hadoop, Cloudera, Hortonworks, and Databricks.
+
+## Prerequisites
+
+1. Set up the latest [self-hosted integration
+ runtime](https://www.microsoft.com/download/details.aspx?id=39717).
+ For more information, see [Create and configure a self-hosted integration runtime](../data-factory/create-self-hosted-integration-runtime.md).
+
+2. Make sure [JDK
+ 11](https://www.oracle.com/java/technologies/javase-jdk11-downloads.html)
+ is installed on your virtual machine where self-hosted integration
+ runtime is installed.
+
+3. Make sure "Visual C++ Redistributable 2012 Update 4" is installed
+    on the self-hosted integration runtime machine. If you don't yet
+    have it installed, download it from
+    [here](https://www.microsoft.com/download/details.aspx?id=30679).
+
+4. Manually download the Hive Metastore database's JDBC driver to the
+    virtual machine where the self-hosted integration runtime is running.
+    For example, if the database used is mssql, make sure to download
+    [Microsoft's JDBC driver for SQL Server](https://docs.microsoft.com/sql/connect/jdbc/download-microsoft-jdbc-driver-for-sql-server?view=sql-server-ver15).
+
+ > [!Note]
+ > The driver should be accessible to all accounts in the VM. Do not install it in a user account.
+
+5. Supported Hive versions are 2.x to 3.x.
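Before configuring the scan, you can optionally confirm from the self-hosted integration runtime machine that the Hive Metastore database server is reachable. A minimal sketch with a hypothetical host name and port:

```powershell
# Hypothetical Metastore database server and port - replace with your own values.
Test-NetConnection -ComputerName "hive.database.windows.net" -Port 1433
```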
+
+## Setting up authentication for a scan
+
+The only supported authentication for a Hive Metastore database is **Basic authentication.**
+
+## Register a Hive Metastore database
+
+To register a new Hive Metastore database in your data catalog, do the
+following:
+
+1. Navigate to your Purview account.
+
+2. Select **Sources** on the left navigation.
+
+3. Select **Register.**
+
+4. On **Register sources**, select **Hive Metastore**. Select **Continue**.
+
+ :::image type="content" source="media/register-scan-hive-metastore-source/register-sources.png" alt-text="register hive source" border="true":::
+
+On the Register sources (Hive Metastore) screen, do the following:
+
+1. Enter a **Name** that the data source will be listed as within the
+    Catalog.
+
+2. Enter the **Hive Cluster URL**. The cluster URL can be obtained either
+    from the Ambari URL or from the Databricks workspace URL. For
+    example, hive.azurehdinsight.net
+
+3. Enter the **Hive Metastore Server URL.** For example,
+ sqlserver://hive.database.windows.net
+
+4. Select a collection or create a new one (Optional)
+
+5. Select **Finish** to register the data source.
+
+ :::image type="content" source="media/register-scan-hive-metastore-source/configure-sources.png" alt-text="configure hive source" border="true":::
++
+## Creating and running a scan
+
+To create and run a new scan, do the following:
+
+1. In the Management Center, select **Integration runtimes**. Make sure a
+    self-hosted integration runtime is set up. If it is not set up, use
+    the steps mentioned [here](./manage-integration-runtimes.md) to set up
+    a self-hosted integration runtime.
+
+2. Navigate to **Sources**.
+
+3. Select the registered **Hive Metastore** database.
+
+4. Select **+ New scan**.
+
+5. Provide the below details:
+
+ a. **Name**: The name of the scan
+
+ b. **Connect via integration runtime**: Select the configured
+ self-hosted integration runtime.
+
+ c. **Credential**: Select the credential to connect to your data
+ source. Make sure to:
+
+ - Select Basic Authentication while creating a credential.
+ - Provide the Metastore username in the User name input field
+ - Store the Metastore password in the secret key.
+
+ To understand more on credentials, refer to the link [here](manage-credentials.md)
+
+    d. **Metastore JDBC Driver Location**: Specify the path to the JDBC
+    driver location on the VM where the self-hosted integration runtime is
+    running. This should be the path to a valid JAR folder location.
+
+ > [!Note]
+ > The driver should be accessible to all accounts in the VM. Please do not install in a user account.
+
+ e. **Metastore JDBC Driver Class**: Provide the connection driver class
+    name. For example, com.microsoft.sqlserver.jdbc.SQLServerDriver
+
+ f. **Metastore JDBC URL**: Provide the Connection URL value and define
+ connection to Metastore DB server URL. For example,
+ jdbc:sqlserver://hive.database.windows.net;database=hive;encrypt=true;trustServerCertificate=true;create=false;loginTimeout=300
+
+ g. **Metastore database name**: Provide the Hive Metastore Database name
+
+    h. **Schema**: Specify a list of Hive schemas to import. For example,
+    schema1; schema2. All user schemas are imported if the list is
+    empty. All system schemas (for example, SysAdmin) and objects are
+    ignored by default.
+    Acceptable schema name patterns use SQL LIKE expression syntax, including %. For example, A%; %B; %C%; D means:
+
+ - start with A or
+ - end with B or
+ - contain C or
+ - equal D
+
+    Usage of NOT and special characters is not acceptable.
+
+ i. **Maximum memory available**: Maximum memory (in GB) available on
+ customer's VM to be used by scanning processes. This is dependent on
+    the size of the Hive Metastore database to be scanned.
+
+ :::image type="content" source="media/register-scan-hive-metastore-source/scan.png" alt-text="scan hive source" border="true":::
+
+6. Click on **Continue**.
+
+7. Choose your **scan trigger**. You can set up a schedule or run the
+    scan once.
+
+8. Review your scan and click on **Save and Run**.
+
+## Viewing your scans and scan runs
+
+1. Navigate to the management center. Select **Data sources** under the **Sources and scanning** section.
+
+2. Select the desired data source. You will see a list of existing scans on that data source.
+
+3. Select the scan whose results you want to view.
+
+4. This page will show you all of the previous scan runs along with metrics and status for each scan run. It will also display whether your scan was scheduled or manual, how many assets had classifications applied, how many total assets were discovered, the start and end time of the scan, and the total scan duration.
+
+## Manage your scans
+
+To manage or delete a scan, do the following:
+
+1. Navigate to the management center. Select **Data sources** under the **Sources and scanning** section, then select the desired data source.
+
+2. Select the scan you would like to manage. You can edit the scan by selecting **Edit**.
+
+3. You can delete your scan by selecting **Delete**.
+
+## Next steps
+
+- [Browse the Azure Purview Data catalog](how-to-browse-catalog.md)
+- [Search the Azure Purview Data Catalog](how-to-search-catalog.md)
purview Register Scan Oracle Source https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/register-scan-oracle-source.md
Title: Register Oracle source and setup scans (preview) in Azure Purview
+ Title: Register Oracle source and setup scans in Azure Purview
description: This article outlines how to register Oracle source in Azure Purview and set up a scan.
Last updated 2/25/2021
-# Register and Scan Oracle source (preview)
+# Register and Scan Oracle source
This article outlines how to register an Oracle data base in Purview and set up a scan.
To create and run a new scan, do the following:
6. **Driver location**: Specify the path to the JDBC driver location in your VM where self-host integration runtime is running. This should be the path to valid JAR folder location.
+ > [!Note]
+ > The driver should be accessible to all accounts in the VM. Please do not install in a user account.
7. **Maximum memory available**: Maximum memory (in GB) available on customer's VM to be used by scanning processes. This is dependent on
purview Register Scan Sapecc Source https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/register-scan-sapecc-source.md
Title: Register SAP ECC source and setup scans (preview) in Azure Purview
+ Title: Register SAP ECC source and setup scans in Azure Purview
description: This article outlines how to register SAP ECC source in Azure Purview and set up a scan.
Last updated 2/25/2021
-# Register and scan SAP ECC source (preview)
+# Register and scan SAP ECC source
This article outlines how to register an SAP ECC source in Purview and set up a scan.
purview Register Scan Saps4hana Source https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/register-scan-saps4hana-source.md
Title: Register SAP S/4HANA source and setup scans (preview) in Azure Purview
+ Title: Register SAP S/4HANA source and setup scans in Azure Purview
description: This article outlines how to register SAP S/4HANA source in Azure Purview and set up a scan.
Last updated 2/25/2021
-# Register and Scan a SAP S/4HANA source (preview)
+# Register and Scan a SAP S/4HANA source
This article outlines how to register an SAP S/4HANA source in Purview and set up a scan.
purview Register Scan Teradata Source https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/register-scan-teradata-source.md
Title: Register a Teradata source and setup scans (preview) in Azure Purview
+ Title: Register a Teradata source and setup scans in Azure Purview
description: This article outlines how to register a Teradata source in Azure Purview and set up a scan.
Last updated 2/25/2021
-# Register and scan Teradata source (preview)
+# Register and scan Teradata source
This article outlines how to register a Teradata source in Purview and set up a scan.
role-based-access-control Transfer Subscription https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/role-based-access-control/transfer-subscription.md
To complete these steps, you will need:
- [Bash in Azure Cloud Shell](../cloud-shell/overview.md) or [Azure CLI](/cli/azure) - Account Administrator of the subscription you want to transfer in the source directory-- [Owner](built-in-roles.md#owner) role in the target directory
+- A user account in both the source and target directory for the user making the directory change
## Step 1: Prepare for the transfer
search Search Blob Storage Integration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-blob-storage-integration.md
The Blob indexer comes with configuration parameters and supports change trackin
### Supported access tiers
-Blob storage [access tiers](../storage/blobs/storage-blob-storage-tiers.md) include hot, cool, and archive. Only hot and hool can be accessed by indexers.
+Blob storage [access tiers](../storage/blobs/storage-blob-storage-tiers.md) include hot, cool, and archive. Only hot and cool can be accessed by indexers.
### Supported content types
search Search Howto Indexing Azure Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-howto-indexing-azure-blob-storage.md
This article shows you how to configure a blob indexer for either scenario. If y
## Supported access tiers
-Blob storage [access tiers](../storage/blobs/storage-blob-storage-tiers.md) include hot, cool, and archive. Only hot and hool can be accessed by indexers.
+Blob storage [access tiers](../storage/blobs/storage-blob-storage-tiers.md) include hot, cool, and archive. Only hot and cool can be accessed by indexers.
<a name="SupportedFormats"></a>
search Semantic How To Query Request https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/semantic-how-to-query-request.md
Only the top 50 matches from the initial results can be semantically ranked, and
[Search explorer](search-explorer.md) has been updated to include options for semantic queries. These options become visible in the portal after completing the following steps:
-1. [Sign up](https://aka.ms/SemanticSearchPreviewSignup) and admittance of your search service into the preview program
+1. Complete the [preview sign up](https://aka.ms/SemanticSearchPreviewSignup). Support for semantic query types must be enabled internally for your service.
1. Open the portal with this syntax: `https://portal.azure.com/?feature.semanticSearch=true`
search Semantic Search Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/semantic-search-overview.md
Semantic search is a collection of query-related capabilities that add semantic
Semantic search is a premium feature. We recommend this article for background, but if you'd rather get started, follow these steps: 1. [Check regional and service tier requirements](#availability-and-pricing).
-1. [Sign up for the preview program](https://aka.ms/SemanticSearchPreviewSignup).
+1. [Sign up for the preview program](https://aka.ms/SemanticSearchPreviewSignup). It can take up to two business days to process the request.
1. Upon acceptance, create or modify queries to return [semantic captions and highlights](semantic-how-to-query-request.md). 1. Add a few more query properties to also return [semantic answers](semantic-answers.md). 1. Optionally, include a [spell check](speller-how-to-add.md) property to maximize precision and recall.
Final pricing information will be documented in the [Cognitive Search pricing pa
[Sign-up](https://aka.ms/SemanticSearchPreviewSignup) for the preview on a search service that meets the tier and regional requirements noted in the previous section.
-When your service is ready, [create a semantic query](semantic-how-to-query-request.md) to see semantic ranking in action.
+It can take up to two business days to process the request. Once your service is ready, [create a semantic query](semantic-how-to-query-request.md) to see semantic ranking in action.
search Speller How To Add https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/speller-how-to-add.md
You can improve recall by spell-correcting individual search query terms before
The queryLanguage is required for speller, and currently "en-us" is the only valid value. > [!Note]
-> The speller parameter is available on all tiers, in the same regions that provide semantic search. You do not need to sign up for access to this preview feature. For more information, see [Availability and pricing](semantic-search-overview.md#availability-and-pricing).
+> The speller parameter is available on all tiers, in the same regions that provide semantic search. Sign-up is required, but there is no charge and there are no tier restrictions. For more information, see [Availability and pricing](semantic-search-overview.md#availability-and-pricing).
## Spell correction with simple search
service-bus-messaging Service Bus Resource Manager Namespace https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/service-bus-resource-manager-namespace.md
If you don't have an Azure subscription, [create a free account](https://azure.m
## Create a service bus namespace
-In this quickstart, you use an [existing Resource Manager template](https://github.com/Azure/azure-quickstart-templates/blob/master/101-servicebus-create-namespace/azuredeploy.json) from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/):
+In this quickstart, you use an [existing Resource Manager template](https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.servicebus/servicebus-create-namespace/azuredeploy.json) from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/):
[!code-json[create-azure-service-bus-namespace](~/quickstart-templates/quickstarts/microsoft.servicebus/servicebus-create-namespace/azuredeploy.json)]
To create a service bus namespace by deploying a template:
$serviceBusNamespaceName = Read-Host -Prompt "Enter a name for the service bus namespace to be created" $location = Read-Host -Prompt "Enter the location (i.e. centralus)" $resourceGroupName = "${serviceBusNamespaceName}rg"
- $templateUri = "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/101-servicebus-create-namespace/azuredeploy.json"
+ $templateUri = "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.servicebus/servicebus-create-namespace/azuredeploy.json"
New-AzResourceGroup -Name $resourceGroupName -Location $location New-AzResourceGroupDeployment -ResourceGroupName $resourceGroupName -TemplateUri $templateUri -serviceBusNamespaceName $serviceBusNamespaceName
service-fabric How To Managed Cluster Networking https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-fabric/how-to-managed-cluster-networking.md
Be aware of these considerations when creating new NSG rules for your managed cl
## Apply NSG rules
-With classic (non-managed) Service Fabric clusters, you must declare and manage a separate *Microsoft.Network/networkSecurityGroups* resource in order to [apply Network Security Group (NSG) rules to your cluster](https://github.com/Azure/azure-quickstart-templates/tree/master/service-fabric-secure-nsg-cluster-65-node-3-nodetype). Service Fabric managed clusters enable you to assign NSG rules directly within the cluster resource of your deployment template.
+With classic (non-managed) Service Fabric clusters, you must declare and manage a separate *Microsoft.Network/networkSecurityGroups* resource in order to [apply Network Security Group (NSG) rules to your cluster](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.servicefabric/service-fabric-secure-nsg-cluster-65-node-3-nodetype). Service Fabric managed clusters enable you to assign NSG rules directly within the cluster resource of your deployment template.
Use the [networkSecurityRules](/azure/templates/microsoft.servicefabric/managedclusters#managedclusterproperties-object) property of your *Microsoft.ServiceFabric/managedclusters* resource (version `2021-05-01` or later) to assign NSG rules. For example:
Use the [networkSecurityRules](/azure/templates/microsoft.servicefabric/managedc
"destinationPortRange": "33000-33499", "access": "Allow", "priority": 2001,
- "direction": "Inbound"
+ "direction": "Inbound"
}, { "name": "AllowARM",
Use the [networkSecurityRules](/azure/templates/microsoft.servicefabric/managedc
"destinationPortRange": "33500-33699", "access": "Allow", "priority": 2002,
- "direction": "Inbound"
+ "direction": "Inbound"
}, { "name": "DenyCustomers",
Use the [networkSecurityRules](/azure/templates/microsoft.servicefabric/managedc
Service Fabric managed clusters do not allow access to the RDP ports by default. You can open RDP ports to the internet by setting the following property on a Service Fabric managed cluster resource. ```json
-"allowRDPAccess": true
+"allowRDPAccess": true
```
-When the allowRDPAccess property is set to true, the following NSG rule will be added to your cluster deployment.
+When the allowRDPAccess property is set to true, the following NSG rule will be added to your cluster deployment.
```json {
- "name": "SFMC_AllowRdpPort",
+ "name": "SFMC_AllowRdpPort",
"type": "Microsoft.Network/networkSecurityGroups/securityRules", "properties": { "description": "Optional rule to open RDP ports.",
A default NSG rule is added to allow the Service Fabric resource provider to acc
>This rule is always added and cannot be overridden. ```json
-{
- "name": "SFMC_AllowServiceFabricGatewayToSFRP",
- "type": "Microsoft.Network/networkSecurityGroups/securityRules",
- "properties": {
- "description": "This is required rule to allow SFRP to connect to the cluster. This rule cannot be overridden.",
- "protocol": "TCP",
- "sourcePortRange": "*",
- "sourceAddressPrefix": "ServiceFabric",
- "destinationAddressPrefix": "VirtualNetwork",
- "access": "Allow",
- "priority": 500,
- "direction": "Inbound",
- "sourcePortRanges": [],
- "destinationPortRanges": [
- "19000",
- "19080"
- ]
- }
+{
+ "name": "SFMC_AllowServiceFabricGatewayToSFRP",
+ "type": "Microsoft.Network/networkSecurityGroups/securityRules",
+ "properties": {
+ "description": "This is required rule to allow SFRP to connect to the cluster. This rule cannot be overridden.",
+ "protocol": "TCP",
+ "sourcePortRange": "*",
+ "sourceAddressPrefix": "ServiceFabric",
+ "destinationAddressPrefix": "VirtualNetwork",
+ "access": "Allow",
+ "priority": 500,
+ "direction": "Inbound",
+ "sourcePortRanges": [],
+ "destinationPortRanges": [
+ "19000",
+ "19080"
+ ]
+ }
} ``` ### NSG rule: SFMC_AllowServiceFabricGatewayPorts
-This is an optional NSG rule to allow access to the clientConnectionPort, and httpGatewayPort from the internet. This rule allows customers to access SFX, connect to the cluster using PowerShell, and use Service Fabric cluster API endpoints from outside of the.
+This is an optional NSG rule to allow access to the clientConnectionPort and httpGatewayPort from the internet. This rule allows customers to access SFX, connect to the cluster using PowerShell, and use Service Fabric cluster API endpoints from outside of the network.
>[!NOTE]
->This rule will not be added if there is a custom rule with the same access, direction, and protocol values for the same port. You can override this rule with custom NSG rules.
+>This rule will not be added if there is a custom rule with the same access, direction, and protocol values for the same port. You can override this rule with custom NSG rules.
```json
-{
- "name": "SFMC_AllowServiceFabricGatewayPorts",
- "type": "Microsoft.Network/networkSecurityGroups/securityRules",
- "properties": {
- "description": "Optional rule to open SF cluster gateway ports. To override add a custom NSG rule for gateway ports in priority range 1000-3000.",
- "protocol": "tcp",
- "sourcePortRange": "*",
- "sourceAddressPrefix": "*",
- "destinationAddressPrefix": "VirtualNetwork",
- "access": "Allow",
- "priority": 3001,
- "direction": "Inbound",
- "sourcePortRanges": [],
- "destinationPortRanges": [
- "19000",
- "19080"
- ]
- }
+{
+ "name": "SFMC_AllowServiceFabricGatewayPorts",
+ "type": "Microsoft.Network/networkSecurityGroups/securityRules",
+ "properties": {
+ "description": "Optional rule to open SF cluster gateway ports. To override add a custom NSG rule for gateway ports in priority range 1000-3000.",
+ "protocol": "tcp",
+ "sourcePortRange": "*",
+ "sourceAddressPrefix": "*",
+ "destinationAddressPrefix": "VirtualNetwork",
+ "access": "Allow",
+ "priority": 3001,
+ "direction": "Inbound",
+ "sourcePortRanges": [],
+ "destinationPortRanges": [
+ "19000",
+ "19080"
+ ]
+ }
} ``` ## Load balancer ports
-Service Fabric managed clusters create an NSG rule in default priority range for all the load balancer (LB) ports configured under "loadBalancingRules" section under *ManagedCluster* properties. This rule opens LB ports for inbound traffic from the internet.
+Service Fabric managed clusters create an NSG rule in default priority range for all the load balancer (LB) ports configured under "loadBalancingRules" section under *ManagedCluster* properties. This rule opens LB ports for inbound traffic from the internet.
>[!NOTE] >This rule is added in the optional priority range and can be overridden by adding custom NSG rules.
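One way to override such a default rule is to add your own NSG rule at a higher priority (lower number), in the custom-rule priority range mentioned in the rule descriptions above. The following is a minimal sketch using Az PowerShell; the resource names, source address range, and port are placeholders, not values from this article.

```powershell
# Sketch only: restrict an application's LB port to a known source range instead of the internet.
$nsg = Get-AzNetworkSecurityGroup -Name 'MyClusterNsg' -ResourceGroupName 'MyResourceGroup'

Add-AzNetworkSecurityRuleConfig -NetworkSecurityGroup $nsg `
    -Name 'Custom_AllowAppPortFromCorpNet' `
    -Description 'Allow port 8080 only from a known address range' `
    -Access Allow -Protocol Tcp -Direction Inbound -Priority 1000 `
    -SourceAddressPrefix '203.0.113.0/24' -SourcePortRange '*' `
    -DestinationAddressPrefix 'VirtualNetwork' -DestinationPortRange '8080'

# Persist the change to the network security group
Set-AzNetworkSecurityGroup -NetworkSecurityGroup $nsg
```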
Service Fabric managed clusters create an NSG rule in default priority range for
"type": "Microsoft.Network/networkSecurityGroups/securityRules", "properties": { "description": "Optional rule to open LB ports",
- "protocol": "*",
+ "protocol": "*",
"sourcePortRange": "*", "sourceAddressPrefix": "*", "destinationAddressPrefix": "VirtualNetwork",
Service Fabric managed clusters create an NSG rule in default priority range for
Service Fabric managed clusters automatically create load balancer probes for fabric gateway ports as well as all ports configured under the "loadBalancingRules" section of managed cluster properties. ```json
-{
- "value": [
- {
- "name": "FabricTcpGateway",
- "properties": {
- "provisioningState": "Succeeded",
- "protocol": "Tcp",
- "port": 19000,
- "intervalInSeconds": 5,
- "numberOfProbes": 2,
- "loadBalancingRules": [
- {
+{
+ "value": [
+ {
+ "name": "FabricTcpGateway",
+ "properties": {
+ "provisioningState": "Succeeded",
+ "protocol": "Tcp",
+ "port": 19000,
+ "intervalInSeconds": 5,
+ "numberOfProbes": 2,
+ "loadBalancingRules": [
+ {
"id": "<>"
- }
- ]
- },
- "type": "Microsoft.Network/loadBalancers/probes"
- },
- {
- "name": "FabricHttpGateway",
- "properties": {
- "provisioningState": "Succeeded",
- "protocol": "Tcp",
- "port": 19080,
- "intervalInSeconds": 5,
- "numberOfProbes": 2,
- "loadBalancingRules": [
- {
- "id": "<>"
- }
+ }
] },
- "type": "Microsoft.Network/loadBalancers/probes"
+ "type": "Microsoft.Network/loadBalancers/probes"
}, {
- "name": "probe1_tcp_8080",
- "properties": {
- "provisioningState": "Succeeded",
- "protocol": "Tcp",
- "port": 8080,
- "intervalInSeconds": 5,
- "numberOfProbes": 2,
- "loadBalancingRules": [
- {
- "id": "<>"
- }
- ]
- },
- "type": "Microsoft.Network/loadBalancers/probes"
- }
- ]
-}
+ "name": "FabricHttpGateway",
+ "properties": {
+ "provisioningState": "Succeeded",
+ "protocol": "Tcp",
+ "port": 19080,
+ "intervalInSeconds": 5,
+ "numberOfProbes": 2,
+ "loadBalancingRules": [
+ {
+ "id": "<>"
+ }
+ ]
+ },
+ "type": "Microsoft.Network/loadBalancers/probes"
+ },
+ {
+ "name": "probe1_tcp_8080",
+ "properties": {
+ "provisioningState": "Succeeded",
+ "protocol": "Tcp",
+ "port": 8080,
+ "intervalInSeconds": 5,
+ "numberOfProbes": 2,
+ "loadBalancingRules": [
+ {
+ "id": "<>"
+ }
+ ]
+ },
+ "type": "Microsoft.Network/loadBalancers/probes"
+ }
+ ]
+}
``` ## Next steps
service-fabric Service Fabric Backuprestoreservice Quickstart Azurecluster https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-fabric/service-fabric-backuprestoreservice-quickstart-azurecluster.md
Last updated 5/24/2019
# Periodic backup and restore in an Azure Service Fabric cluster > [!div class="op_single_selector"]
-> * [Clusters on Azure](service-fabric-backuprestoreservice-quickstart-azurecluster.md)
+> * [Clusters on Azure](service-fabric-backuprestoreservice-quickstart-azurecluster.md)
> * [Standalone Clusters](service-fabric-backuprestoreservice-quickstart-standalonecluster.md)
->
+>
Service Fabric is a distributed systems platform that makes it easy to develop and manage reliable, distributed, microservices-based cloud applications. It supports running both stateless and stateful microservices. Stateful services can maintain mutable, authoritative state beyond the request and response or a complete transaction. If a stateful service goes down for a long time or loses information due to a disaster, it may need to be restored to a recent backup of its state in order to continue providing service after it comes back up. Service Fabric replicates the state across multiple nodes to ensure that the service is highly available. Even if one node in the cluster fails, the service continues to be available. In certain cases, however, it is still desirable for the service data to be protected against broader failures.
-
+ For example, a service may want to back up its data in order to protect against the following scenarios: - Permanent loss of an entire Service Fabric cluster. - Permanent loss of a majority of the replicas of a service partition
For example, service may want to back up its data in order to protect from the f
Service Fabric provides a built-in API for point-in-time [backup and restore](service-fabric-reliable-services-backup-restore.md). Application developers may use these APIs to back up the state of the service periodically. Additionally, if service administrators want to trigger a backup from outside of the service at a specific time, like before upgrading the application, developers need to expose backup (and restore) as an API from the service. Maintaining the backups is an additional cost on top of this. For example, you may want to take five incremental backups every half hour, followed by a full backup. After the full backup, you can delete the prior incremental backups. This approach requires additional code, which leads to additional cost during application development.
-The Backup and Restore service in Service Fabric enables easy and automatic backup of information stored in stateful services. Backing up application data on a periodic basis is fundamental for guarding against data loss and service unavailability. Service Fabric provides an optional backup and restore service, which allows you to configure periodic backup of stateful Reliable Services (including Actor Services) without having to write any additional code. It also facilitates restoring previously taken backups.
+The Backup and Restore service in Service Fabric enables easy and automatic backup of information stored in stateful services. Backing up application data on a periodic basis is fundamental for guarding against data loss and service unavailability. Service Fabric provides an optional backup and restore service, which allows you to configure periodic backup of stateful Reliable Services (including Actor Services) without having to write any additional code. It also facilitates restoring previously taken backups.
Service Fabric provides a set of APIs to achieve the following functionality related to periodic backup and restore feature:
Service Fabric provides a set of APIs to achieve the following functionality rel
```powershell
- Connect-SFCluster -ConnectionEndpoint 'https://mysfcluster.southcentralus.cloudapp.azure.com:19080' -X509Credential -FindType FindByThumbprint -FindValue '1b7ebe2174649c45474a4819dafae956712c31d3' -StoreLocation 'CurrentUser' -StoreName 'My' -ServerCertThumbprint '1b7ebe2174649c45474a4819dafae956712c31d3'
+ Connect-SFCluster -ConnectionEndpoint 'https://mysfcluster.southcentralus.cloudapp.azure.com:19080' -X509Credential -FindType FindByThumbprint -FindValue '1b7ebe2174649c45474a4819dafae956712c31d3' -StoreLocation 'CurrentUser' -StoreName 'My' -ServerCertThumbprint '1b7ebe2174649c45474a4819dafae956712c31d3'
```
Enable `Include backup restore service` check box under `+ Show optional setting
### Using Azure Resource Manager Template
-First you need to enable the _backup and restore service_ in your cluster. Get the template for the cluster that you want to deploy. You can either use the [sample templates](https://github.com/Azure/azure-quickstart-templates/tree/master/service-fabric-secure-cluster-5-node-1-nodetype) or create a Resource Manager template. Enable the _backup and restore service_ with the following steps:
+First you need to enable the _backup and restore service_ in your cluster. Get the template for the cluster that you want to deploy. You can either use the [sample templates](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.servicefabric/service-fabric-secure-cluster-5-node-1-nodetype) or create a Resource Manager template. Enable the _backup and restore service_ with the following steps:
1. Check that the `apiversion` is set to **`2018-02-01`** for the `Microsoft.ServiceFabric/clusters` resource, and if not, update it as shown in the following snippet:
First you need to enable the _backup and restore service_ in your cluster. Get t
} ```
-2. Now enable the _backup and restore service_ by adding the following `addonFeatures` section under `properties` section as shown in the following snippet:
+2. Now enable the _backup and restore service_ by adding the following `addonFeatures` section under the `properties` section, as shown in the following snippet:
```json "properties": {
First you need to enable the _backup and restore service_ in your cluster. Get t
} ```
-3. Configure X.509 certificate for encryption of credentials. This is important to ensure that the credentials provided to connect to storage are encrypted before persisting. Configure encryption certificate by adding the following `BackupRestoreService` section under `fabricSettings` section as shown in the following snippet:
+3. Configure an X.509 certificate for encryption of credentials. This is important to ensure that the credentials provided to connect to storage are encrypted before they are persisted. Configure the encryption certificate by adding the following `BackupRestoreService` section under the `fabricSettings` section, as shown in the following snippet:
```json "properties": {
First you need to enable the _backup and restore service_ in your cluster. Get t
} ```
-4. Once you have updated your cluster template with the preceding changes, apply them and let the deployment/upgrade complete. Once complete, the _backup and restore service_ starts running in your cluster. The Uri of this service is `fabric:/System/BackupRestoreService` and the service can be located under system service section in the Service Fabric explorer.
+4. Once you have updated your cluster template with the preceding changes, apply them and let the deployment/upgrade complete. Once complete, the _backup and restore service_ starts running in your cluster. The URI of this service is `fabric:/System/BackupRestoreService`, and the service can be found under the system services section in Service Fabric Explorer.
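You can also confirm from PowerShell that the system service is up; a quick sketch using the Service Fabric SDK module, assuming an existing cluster connection:

```powershell
# Assumes you have already connected with Connect-ServiceFabricCluster
Get-ServiceFabricService -ApplicationName fabric:/System |
    Where-Object { $_.ServiceName.ToString() -eq 'fabric:/System/BackupRestoreService' }
```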
## Enabling periodic backup for Reliable Stateful service and Reliable Actors Let's walk through steps to enable periodic backup for Reliable Stateful service and Reliable Actors. These steps assume
Let's walk through steps to enable periodic backup for Reliable Stateful service
### Create backup policy
-First step is to create backup policy describing backup schedule, target storage for backup data, policy name, maximum incremental backups to be allowed before triggering full backup and retention policy for backup storage.
+The first step is to create a backup policy describing the backup schedule, the target storage for backup data, the policy name, the maximum number of incremental backups allowed before triggering a full backup, and the retention policy for backup storage.
For backup storage, use the Azure Storage account created above. Container `backup-container` is configured to store backups. A container with this name is created, if it does not already exist, during backup upload. Populate `ConnectionString` with a valid connection string for the Azure Storage account, replacing `account-name` with your storage account name, and `account-key` with your storage account key.
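For instance, the Azure Blob storage portion of the policy body can be captured in a hashtable along the following lines. This is a sketch only: the variable name is hypothetical, and the connection-string placeholders must be replaced with your storage account values.

```powershell
# Sketch of the backup storage description for an Azure Blob store target
$StorageInfo = @{
    StorageKind      = 'AzureBlobStore'
    ConnectionString = 'DefaultEndpointsProtocol=https;AccountName=<account-name>;AccountKey=<account-key>'
    ContainerName    = 'backup-container'
}
```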
$ScheduleInfo = @{
ScheduleKind = 'FrequencyBased' }
-$RetentionPolicy = @{
+$RetentionPolicy = @{
RetentionPolicyType = 'Basic' RetentionDuration = 'P10D' }
$body = (ConvertTo-Json $BackupPolicyReference)
$url = "https://mysfcluster.southcentralus.cloudapp.azure.com:19080/Applications/SampleApp/$/EnableBackup?api-version=6.4" Invoke-WebRequest -Uri $url -Method Post -Body $body -ContentType 'application/json' -CertificateThumbprint '1b7ebe2174649c45474a4819dafae956712c31d3'
-```
+```
#### Using Service Fabric Explorer
-Make sure the [advanced mode](service-fabric-visualizing-your-cluster.md#backup-and-restore) for Service Fabric Explorer is enabled
+Make sure the [advanced mode](service-fabric-visualizing-your-cluster.md#backup-and-restore) for Service Fabric Explorer is enabled
1. Select an application and go to action. Click Enable/Update Application Backup.
Make sure the [advanced mode](service-fabric-visualizing-your-cluster.md#backup-
### Verify that periodic backups are working
-After enabling backup at the application level, all partitions belonging to Reliable Stateful services and Reliable Actors under the application will start getting backed-up periodically as per the associated backup policy.
+After enabling backup at the application level, all partitions belonging to Reliable Stateful services and Reliable Actors under the application are backed up periodically according to the associated backup policy.
![Partition BackedUp Health Event][0]
Backups associated with all partitions belonging to Reliable Stateful services a
#### PowerShell using Microsoft.ServiceFabric.Powershell.Http Module ```powershell
-
+ Get-SFApplicationBackupList -ApplicationId WordCount ```
BackupType : Full
EpochOfLastBackupRecord : @{DataLossNumber=131675205859825409; ConfigurationNumber=8589934592} LsnOfLastBackupRecord : 3334 CreationTimeUtc : 2018-04-06T20:55:16Z
-FailureError :
+FailureError :
BackupId : b0035075-b327-41a5-a58f-3ea94b68faa4 BackupChainId : b9577400-1131-4f88-b309-2bb1e943322c
BackupType : Incremental
EpochOfLastBackupRecord : @{DataLossNumber=131675205859825409; ConfigurationNumber=8589934592} LsnOfLastBackupRecord : 3552 CreationTimeUtc : 2018-04-06T21:10:27Z
-FailureError :
+FailureError :
BackupId : 69436834-c810-4163-9386-a7a800f78359 BackupChainId : b9577400-1131-4f88-b309-2bb1e943322c
BackupType : Incremental
EpochOfLastBackupRecord : @{DataLossNumber=131675205859825409; ConfigurationNumber=8589934592} LsnOfLastBackupRecord : 3764 CreationTimeUtc : 2018-04-06T21:25:36Z
-FailureError :
+FailureError :
``` #### Using Service Fabric Explorer
service-fabric Service Fabric Dnsservice https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-fabric/service-fabric-dnsservice.md
Title: Azure Service Fabric DNS service
+ Title: Azure Service Fabric DNS service
description: Use Service Fabric's dns service for discovering microservices from inside the cluster.
Last updated 7/20/2018
# DNS Service in Azure Service Fabric
-The DNS Service is an optional system service that you can enable in your cluster to discover other services using the DNS protocol.
+The DNS Service is an optional system service that you can enable in your cluster to discover other services using the DNS protocol.
-Many services, especially containerized services, are addressable through a pre-existing URL. Being able to resolve these services using the standard DNS protocol, rather than the Service Fabric Naming Service protocol, is desirable. The DNS service enables you to map DNS names to a service name and hence resolve endpoint IP addresses. Such functionality maintains the portability of containerized services across different platforms and can make "lift and shift" scenarios easier, by letting you use existing service URLs rather than having to rewrite code to leverage the Naming Service.
+Many services, especially containerized services, are addressable through a pre-existing URL. Being able to resolve these services using the standard DNS protocol, rather than the Service Fabric Naming Service protocol, is desirable. The DNS service enables you to map DNS names to a service name and hence resolve endpoint IP addresses. Such functionality maintains the portability of containerized services across different platforms and can make "lift and shift" scenarios easier, by letting you use existing service URLs rather than having to rewrite code to leverage the Naming Service.
The DNS service maps DNS names to service names, which in turn are resolved by the Naming Service to return the service endpoint. The DNS name for the service is provided at the time of creation. The following diagram shows how the DNS service works for stateless services.
When you create a cluster using the portal, the DNS service is enabled by defaul
If you're not using the portal to create your cluster or if you're updating an existing cluster, you'll need to enable the DNS service in a template: -- To deploy a new cluster, you can either use the [sample templates](https://github.com/Azure/azure-quickstart-templates/tree/master/service-fabric-secure-cluster-5-node-1-nodetype) or create your own Resource Manager template.
+- To deploy a new cluster, you can either use the [sample templates](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.servicefabric/service-fabric-secure-cluster-5-node-1-nodetype) or create your own Resource Manager template.
- To update an existing cluster, you can navigate to the cluster's resource group on the portal and click **Automation Script** to work with a template that reflects the current state of the cluster and other resources in the group. To learn more, see [Export the template from resource group](../azure-resource-manager/templates/export-template-portal.md). After you have a template, you can enable the DNS service with the following steps:
After you have a template, you can enable the DNS service with the following ste
```json "properties": {
- ...
+ ...
"fabricSettings": [ ... {
After you have a template, you can enable the DNS service with the following ste
] } ```
-3. Once you have updated the cluster template with your changes, apply them and let the upgrade complete. When the upgrade completes, the DNS system service starts running in your cluster. The service name is `fabric:/System/DnsService`, and you can find it under the **System** service section in Service Fabric explorer.
+3. Once you have updated the cluster template with your changes, apply them and let the upgrade complete. When the upgrade completes, the DNS system service starts running in your cluster. The service name is `fabric:/System/DnsService`, and you can find it under the **System** service section in Service Fabric explorer.
> [!NOTE] > When upgrading DNS from disabled to enabled, Service Fabric Explorer may not reflect the new state. To solve, restart the nodes by modifying the UpgradePolicy in your Azure Resource Manager template. See the [Service Fabric Template Reference](/azure/templates/microsoft.servicefabric/2019-03-01/clusters/applications) for more.
After you have a template, you can enable the DNS service with the following ste
## Setting the DNS name for your service You can set a DNS name for your services either declaratively for default services in the ApplicationManifest.xml file or through PowerShell commands.
-The DNS name for your service is resolvable throughout the cluster so it is important to ensure the uniqueness of the DNS name across the cluster.
+The DNS name for your service is resolvable throughout the cluster, so it is important to ensure that the DNS name is unique across the cluster.
It is highly recommended that you use a naming scheme of `<ServiceDnsName>.<AppInstanceName>`; for example, `service1.application1`. If an application is deployed using Docker compose, services are automatically assigned DNS names using this naming scheme.
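If you create the service from PowerShell instead of declaring it as a default service, a sketch along the following lines assigns the DNS name at creation time; the application, service, and type names here are hypothetical.

```powershell
# Assumes an existing cluster connection (Connect-ServiceFabricCluster); names are placeholders
New-ServiceFabricService -ApplicationName fabric:/application1 `
    -ServiceName fabric:/application1/service1 `
    -ServiceTypeName Service1Type `
    -Stateless -PartitionSchemeSingleton -InstanceCount 3 `
    -ServiceDnsName service1.application1
```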
Open your project in Visual Studio, or your favorite editor, and open the Applic
</StatelessService> </Service> ```
-Once the application is deployed, the service instance in the Service Fabric explorer shows the DNS name for this instance, as shown in the following figure:
+Once the application is deployed, the service instance in the Service Fabric explorer shows the DNS name for this instance, as shown in the following figure:
![service endpoints](./media/service-fabric-dnsservice/service-fabric-explorer-dns.png)
Where:
- *First-Label-Of-Partitioned-Service-DNSName* is the first part of your service DNS name. - *PartitionPrefix* is a value that can be set in the DnsService section of the cluster manifest or through the cluster's Resource Manager template. The default value is "--". To learn more, see [DNS Service settings](./service-fabric-cluster-fabric-settings.md#dnsservice).-- *Target-Partition-Name* is the name of the partition.
+- *Target-Partition-Name* is the name of the partition.
- *PartitionSuffix* is a value that can be set in the DnsService section of the cluster manifest or through the cluster's Resource Manager template. The default value is empty string. To learn more, see [DNS Service settings](./service-fabric-cluster-fabric-settings.md#dnsservice). - *Remaining-Partitioned-Service-DNSName* is the remaining part of your service DNS name.
-The following examples show DNS queries for partitioned services running on a cluster that has default settings for `PartitionPrefix` and `PartitionSuffix`:
+The following examples show DNS queries for partitioned services running on a cluster that has default settings for `PartitionPrefix` and `PartitionSuffix`:
- To resolve partition "0" of a service with DNS name `backendrangedschemesvc.application` that uses a ranged partitioning scheme, use `backendrangedschemesvc-0.application`. - To resolve partition "first" of a service with DNS name `backendnamedschemesvc.application` that uses a named partitioning scheme, use `backendnamedschemesvc-first.application`.
The following examples show DNS queries for partitioned services running on a cl
The DNS service returns the IP address of the primary replica of the partition. If no partition is specified, the service returns the IP address of the primary replica of a randomly selected partition. ## Using DNS in your services
-If you deploy more than one service, you can find the endpoints of other services to communicate with by using a DNS name. The DNS service works for stateless services, and, in Service Fabric version 6.3 and later, for stateful services. For stateful services running on versions of Service Fabric prior to 6.3, you can use the built-in [reverse proxy service](./service-fabric-reverseproxy.md) for http calls to call a particular service partition.
+If you deploy more than one service, you can find the endpoints of other services to communicate with by using a DNS name. The DNS service works for stateless services, and, in Service Fabric version 6.3 and later, for stateful services. For stateful services running on versions of Service Fabric prior to 6.3, you can use the built-in [reverse proxy service](./service-fabric-reverseproxy.md) for http calls to call a particular service partition.
Dynamic ports are not supported by the DNS service. You can use the reverse proxy service to resolve services that use dynamic ports.
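Before calling an endpoint from code, you can sanity-check that a service's DNS name resolves from inside the cluster; a quick sketch run on a Windows node or container (the service name is hypothetical):

```powershell
# Run from a node or container inside the cluster
Resolve-DnsName -Name service1.application1 -Type A
```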
public class ValuesController : Controller
HttpClient client = new HttpClient(); var response = await client.GetAsync(uri); result = await response.Content.ReadAsStringAsync();
-
+ } catch (Exception e) {
public class ValuesController : Controller
HttpClient client = new HttpClient(); var response = await client.GetAsync(uri); result = await response.Content.ReadAsStringAsync();
-
+ } catch (Exception e) {
public class ValuesController : Controller
``` ## Known Issues
-* For Service Fabric versions 6.3 and higher, there is a problem with DNS lookups for service names containing a hyphen in the DNS name. For more information on this issue, please track the following [GitHub Issue](https://github.com/Azure/service-fabric-issues/issues/1197). A fix for this is coming in the next 6.3 update.
+* For Service Fabric versions 6.3 and higher, there is a problem with DNS lookups for service names containing a hyphen in the DNS name. For more information on this issue, please track the following [GitHub Issue](https://github.com/Azure/service-fabric-issues/issues/1197). A fix for this is coming in the next 6.3 update.
* DNS service for Service Fabric services is not yet supported on Linux. DNS service is supported for containers on Linux. Manual resolution using Fabric Client/ServicePartitionResolver is the available alternative.
service-fabric Service Fabric Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-fabric/service-fabric-support.md
We have created a number of support request options to serve the needs of managing your Service Fabric clusters and application workloads, depending on the urgency of support needed and the severity of the issue. ## Create an Azure support request
+<div class='icon is-large'>
+ <img alt='Azure support' src='./media/logos/logo-azure.svg'>
+</div>
To report issues related to your Service Fabric cluster running on Azure, open a support ticket [on the Azure portal](https://ms.portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview) or [Microsoft support portal](https://support.microsoft.com/oas/default.aspx?prid=16146).
Learn more about:
<a id="getlivesitesupportonprem"></a> ## Create a support request for standalone Service Fabric clusters
+<div class='icon is-large'>
+ <img alt='Azure support' src='./media/logos/logo-azure.svg'>
+</div>
To report issues related to Service Fabric clusters running on-premises or on other clouds, you may open a ticket for professional support on the [Microsoft support portal](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview).
Learn more about:
- [Microsoft premier support](https://support.microsoft.com/en-us/premier). ## Post a question to Microsoft Q&A
+<div class='icon is-large'>
+ <img alt='Microsoft Q&A' src='./media/logos/microsoft-logo.png'>
+</div>
Get answers to Service Fabric questions directly from Microsoft engineers, Azure Most Valuable Professionals (MVPs), and members of our expert community.
-[Microsoft Q&A](https://docs.microsoft.com/answers/products/) is Azure's recommended source of community support.
+[Microsoft Q&A](https://docs.microsoft.com/answers/topics/azure-service-fabric.html) is Azure's recommended source of community support.
If you can't find an answer to your problem by searching Microsoft Q&A, submit a new question. Be sure to post your question using the **azure-service-fabric** tag. Here are some Microsoft Q&A tips for writing [high-quality questions](https://docs.microsoft.com/answers/articles/24951/how-to-write-a-quality-question.html). ## Open a GitHub issue
+<div class='icon is-large'>
+ <img alt='GitHub-image' src='./media/logos/github-logo.png'>
+</div>
Report Azure Service Fabric issues at the [Service Fabric GitHub](https://github.com/microsoft/service-fabric/issues). This repo is intended for reporting and tracking issues as well as making small feature requests related to Azure Service Fabric. **Do not use this medium to report live-site issues**. ## Check the StackOverflow forum
+<div class='icon is-large'>
+ <img alt='Stack Overflow' src='./media/logos/logo-stack-overflow.svg'>
+</div>
The `azure-service-fabric` tag on [StackOverflow][stackoverflow] is used for asking general questions about how the platform works and how you may use it to accomplish certain tasks. ## Submit feedback on Azure Feedback
+<div class='icon is-large'>
+ <img alt='UserVoice' src='./media/logos/logo-uservoice.svg'>
+</div>
The [Azure Feedback Forum for Service Fabric][uservoice-forum] is the best place for submitting significant product feature ideas. We review the most popular requests and factor them for our medium to long-term planning. We encourage you to rally support for your suggestions within the community.
static-web-apps Key Vault Secrets https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/static-web-apps/key-vault-secrets.md
Previously updated : 05/07/2021 Last updated : 05/17/2021 # Securing authentication secrets in Azure Key Vault
-When configuring custom authentication providers, you may want to store connection secrets in Key Vault. This article demonstrates how to use a managed identity to grant Azure Static Web Apps access to Key Vault for secrets custom authentication.
+When configuring custom authentication providers, you may want to store connection secrets in Azure Key Vault. This article demonstrates how to use a managed identity to grant Azure Static Web Apps access to Key Vault for custom authentication secrets.
Security secrets require the following items to be in place.
static-web-apps Plans https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/static-web-apps/plans.md
+
+ Title: Azure Static Web Apps hosting plans
+description: Compare and contrast the different Azure Static Web Apps hosting plans.
++++ Last updated : 05/14/2021+++
+# Azure Static Web Apps hosting plans
+
+Azure Static Web Apps is available through two different plans, Free and Standard. See the [pricing page for Standard plan costs](https://azure.microsoft.com/pricing/details/app-service/static/).
+
+## Features
+
+| Feature | Free plan <br> (For personal projects) | Standard plan <br> (For production apps) |
+| | | |
+| Web hosting | ✔ | ✔ |
+| GitHub integration | ✔ | ✔ |
+| Azure DevOps integration | ✔ | ✔ |
+| Globally distributed static content | ✔ | ✔ |
+| Free, automatically renewing SSL certificates | ✔ | ✔ |
+| Staging environments | 3 per app | 10 per app |
+| Max app size | 250 MB per app | 500 MB per app |
+| Custom domains | 2 per app | 5 per app |
+| APIs via Azure Functions | Managed | Managed or<br>[Bring your own Functions app](functions-bring-your-own.md) |
+| Authentication provider integration | [Pre-configured](authentication-authorization.md)<br>(Service defined) | [Custom registrations](authentication-custom.md) |
+| [Service Level Agreement (SLA)](https://azure.microsoft.com/support/legal/sla/app-service-static/v1_0/) | None | ✔ |
+
+## Selecting a plan
+
+The following scenarios can help you decide if the Standard plan best fits your needs.
+
+- Expected traffic volumes exceed bandwidth maximums.
+- The existing Azure Functions app you want to use either has triggers and bindings beyond HTTP endpoints, or can't be converted to a managed Functions app.
+- Your security requirements call for a [custom provider registration](authentication-custom.md).
+- The total file size of the site's web assets exceeds the storage maximums.
+- You require formal customer support.
+- You require more than three [staging environments](review-publish-pull-requests.md).
+
+See the [quotas guide](quotas.md) for limitation details.
+
+## Changing plans
+
+You can move between Free or Standard plans via the Azure portal.
+
+1. Navigate to your Static Web Apps resource in the Azure portal.
+
+1. Under the _Settings_ menu, select **Hosting plan**.
+
+1. Select the hosting plan you want for your static web app.
+
+1. Select **Save**.
storage Storage Blob Reserved Capacity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/storage-blob-reserved-capacity.md
Previously updated : 10/08/2020 Last updated : 05/17/2021
All access tiers (hot, cool, and archive) are supported for reservations. For mo
All types of redundancy are supported for reservations. For more information about redundancy options, see [Azure Storage redundancy](../common/storage-redundancy.md). > [!NOTE]
-> Azure Storage reserved capacity is not available for premium storage accounts, general-purpose v1 (GPv1) storage accounts, Azure Data Lake Storage Gen1, page blobs, Azure Queue storage, Azure Table storage, or Azure Files.
+> Azure Storage reserved capacity is not available for premium storage accounts, general-purpose v1 (GPv1) storage accounts, Azure Data Lake Storage Gen1, page blobs, Azure Queue storage, or Azure Table storage. For information about reserved capacity for Azure Files, see [Optimize costs for Azure Files with reserved capacity](../files/files-reserve-capacity.md).
### Security requirements for purchase
If you have questions or need help, [create a support request](https://go.micros
## Next steps - [What are Azure Reservations?](../../cost-management-billing/reservations/save-compute-costs-reservations.md)-- [Understand how the reservation discount is applied to Azure Storage](../../cost-management-billing/reservations/understand-storage-charges.md)
+- [Understand how the reservation discount is applied to Azure Storage](../../cost-management-billing/reservations/understand-storage-charges.md)
storage Storage Auth Abac Attributes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/storage-auth-abac-attributes.md
The following table lists the descriptions for the supported attributes for cond
> [!NOTE] > When specifying conditions for `Microsoft.Storage/storageAccounts/blobServices/containers/blobs:path` attribute, the values shouldn't include the container name or a preceding '/' character. Use the path characters without any URL encoding.
+> [!NOTE]
+> Blob index tags are not supported for Data Lake Storage Gen2 storage accounts, which have a [hierarchical namespace](../blobs/data-lake-storage-namespace.md) (HNS). You should not author role-assignment conditions using index tags on storage accounts that have HNS enabled.
+ ## Attributes available for each action The following table lists which attributes you can use in your condition expressions depending on the action you target. If you select multiple actions for a single condition, there might be fewer attributes to choose from for your condition because the attributes must be available across the selected actions.
storage Storage Auth Abac https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/storage-auth-abac.md
Conditions in Azure Storage are supported for blobs. This includes blobs in acco
In this preview, you can add conditions to built-in roles for accessing Blob data, including [Storage Blob Data Contributor](../../role-based-access-control/built-in-roles.md#storage-blob-data-contributor) and [Storage Blob Data Reader](../../role-based-access-control/built-in-roles.md#storage-blob-data-reader). If you're working with conditions based on [blob index tags](../blobs/storage-manage-find-blobs.md), you might have to use [Storage Blob Data Owner](../../role-based-access-control/built-in-roles.md#storage-blob-data-owner) since permissions for tag operations are included in this role.
+> [!NOTE]
+> Blob index tags are not supported for Data Lake Storage Gen2 storage accounts, which have a [hierarchical namespace](../blobs/data-lake-storage-namespace.md) (HNS). You should not author role-assignment conditions using index tags on storage accounts that have HNS enabled.
+ The [Azure role assignment condition format](../../role-based-access-control/conditions-format.md) allows use of `@Resource` or `@Request` attributes in the conditions. A `@Resource` attribute refers to an existing attribute of a storage resource that is being accessed, such as a storage account, a container, or a blob. A `@Request` attribute refers to an attribute included in a storage operation request. Azure Storage supports a select set of request or resource attributes that may be used in conditions on role assignments for each DataAction. For the full list of attributes supported for each DataAction, please see the [Actions and attributes for Azure role assignment conditions in Azure Storage (preview)](storage-auth-abac-attributes.md).
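For illustration only, a conditional role assignment might be created with Az PowerShell along the following lines. The principal, scope, and container name are placeholders, the `-Condition` parameters require a recent Az.Resources module, and the exact attribute paths should be confirmed against the attributes article linked above.

```powershell
# Sketch: grant Storage Blob Data Reader, limited to blobs in one container (assumed attribute path)
$condition = @"
(
 (
  !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'})
 )
 OR
 (
  @Resource[Microsoft.Storage/storageAccounts/blobServices/containers:name] StringEquals 'contoso-container'
 )
)
"@

New-AzRoleAssignment -ObjectId '<principal-object-id>' `
    -RoleDefinitionName 'Storage Blob Data Reader' `
    -Scope '/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<account>' `
    -Condition $condition -ConditionVersion '2.0'
```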
storage Storage Files Enable Smb Multichannel https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/files/storage-files-enable-smb-multichannel.md
description: Learn how to enable SMB Multichannel on Azure premium file shares.
Previously updated : 04/15/2021 Last updated : 05/17/2021 # Enable SMB Multichannel on a FileStorage account (preview)
-Azure FileStorage accounts support SMB Multichannel (preview), which increases the performance from an SMB 3.x client by establishing multiple network connections to your premium file shares. This article provides step-by-step guidance to enable SMB Multichannel on an existing storage account. For detailed information on Azure Files SMB Multichannel, see SMB Multichannel performance.
+Azure FileStorage accounts support SMB Multichannel (preview), which increases the performance from an SMB 3.x client by establishing multiple network connections to your premium file shares. This article provides step-by-step guidance to enable SMB Multichannel on an existing storage account. For detailed information on Azure Files SMB Multichannel, see [SMB Multichannel performance](storage-files-smb-multichannel-performance.md).
## Limitations
storage Storage Files Smb Multichannel Performance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/files/storage-files-smb-multichannel-performance.md
Title: SMB Multichannel performance - Azure Files description: Learn about SMB Multichannel performance.-+ Previously updated : 11/16/2020- Last updated : 05/17/2021+
stream-analytics Azure Synapse Analytics Output https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/stream-analytics/azure-synapse-analytics-output.md
Azure Stream Analytics jobs can output to a dedicated SQL pool table in Azure Sy
The dedicated SQL pool table must exist before you can add it as output to your Stream Analytics job. The table's schema must match the fields and their types in your job's output.
-To use Azure Synapse as output, you need to ensure that you have the storage account configured. Navigate to Storage account settings to configure the storage account. Only the storage account types that support tables are permitted: General-purpose V2 and General-purpose V1. Select Standard Tier only. Premium tier is not supported.
+> [!NOTE]
+> To use Azure Synapse Analytics as output, ensure that the storage account is configured at the job level, not at the output level. To change the storage account settings, in the **Configure** menu of the Stream Analytics job, go to **Storage account settings**. Use only storage account types that support tables: General Purpose V2 and General Purpose V1. Choose only Standard tier. Premium tier isn't supported in this scenario.
## Output configuration
The following table lists the property names and their descriptions for creating
## Next steps * [Use managed identities to access Azure SQL Database or Azure Synapse Analytics from an Azure Stream Analytics job (Preview)](sql-database-output-managed-identity.md)
-* [Quickstart: Create a Stream Analytics job by using the Azure portal](stream-analytics-quick-create-portal.md)
+* [Quickstart: Create a Stream Analytics job by using the Azure portal](stream-analytics-quick-create-portal.md)
synapse-analytics Tutorial Logical Data Warehouse https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/tutorial-logical-data-warehouse.md
To optimize performance, you should use the smallest possible types in the `WITH
## Access and permissions As a final step, you should create database users that should be able to access your LDW, and give them permissions to select data from the external tables and views.
-In the following script you can see how to add a new user and provide permissions to read data:
+In the following script, you can see how to add a new user that authenticates by using an Azure AD identity:
```sql CREATE USER [jovan@contoso.com] FROM EXTERNAL PROVIDER; GO
+```
+
+Instead of Azure AD principals, you can create SQL principals that authenticate with the login name and password.
+
+```sql
+CREATE LOGIN [jovan] WITH PASSWORD = 'My Very strong Password ! 1234';
+CREATE USER [jovan] FROM LOGIN [jovan];
+```
+
+In both cases, you can assign permissions to the users.
+
+```sql
DENY ADMINISTER DATABASE BULK OPERATIONS TO [jovan@contoso.com] GO GRANT SELECT ON SCHEMA::ecdc_adls TO [jovan@contoso.com]
This user has minimal permissions needed to query external data. If you want to
GRANT CONTROL TO [jovan@contoso.com] ```
+### Role-based security
+
+Instead of assigning permissions to individual users, a good practice is to organize the users into roles and manage permissions at the role level.
+The following code sample creates a new role representing the people who can analyze COVID-19 cases, and adds three users to this role:
+
+```sql
+CREATE ROLE CovidAnalyst;
+
+ALTER ROLE CovidAnalyst ADD MEMBER [jovan@contoso.com];
+ALTER ROLE CovidAnalyst ADD MEMBER [milan@contoso.com];
+ALTER ROLE CovidAnalyst ADD MEMBER [petar@contoso.com];
+```
+
+You can assign permissions to all users that belong to the role:
+
+```sql
+GRANT SELECT ON SCHEMA::ecdc_cosmosdb TO [CovidAnalyst];
+GO
+DENY SELECT ON SCHEMA::ecdc_adls TO [CovidAnalyst];
+GO
+DENY ADMINISTER DATABASE BULK OPERATIONS TO [CovidAnalyst];
+```
+
+This role-based access control can simplify the management of your security rules.
+ ## Next steps - To learn how to connect serverless SQL pool to Power BI Desktop and create reports, see [Connect serverless SQL pool to Power BI Desktop and create reports](tutorial-connect-power-bi-desktop.md).
virtual-desktop Language Packs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-desktop/language-packs.md
You need the following things to customize your Windows 10 Enterprise multi-sess
- [Windows 10, version 2004 or 20H2 **1C** LXP ISO](https://software-download.microsoft.com/download/pr/LanguageExperiencePack.2101C.iso) - [Windows 10, version 2004 or 20H2 **2C** LXP ISO](https://software-download.microsoft.com/download/pr/LanguageExperiencePack.2102C.iso) - [Windows 10, version 2004 or 20H2 **4B** LXP ISO](https://software-download.microsoft.com/download/sg/LanguageExperiencePack.2104B.iso)
+ - [Windows 10, version 2004 or 20H2 **4C** LXP ISO](https://software-download.microsoft.com/download/pr/LanguageExperiencePack.2104C.iso)
- An Azure Files Share or a file share on a Windows File Server Virtual Machine
virtual-machine-scale-sets Disk Encryption Azure Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machine-scale-sets/disk-encryption-azure-resource-manager.md
You can encrypt or decrypt Linux virtual machine scale sets using Azure Resource
First, select the template that fits your scenario. -- [Enable disk encryption on a running Linux virtual machine scale set](https://github.com/Azure/azure-quickstart-templates/tree/master/201-encrypt-running-vmss-linux)
+- [Enable disk encryption on a running Linux virtual machine scale set](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.compute/encrypt-running-vmss-linux)
- [Enable disk encryption on a running Windows virtual machine scale set](https://github.com/Azure/azure-quickstart-templates/tree/master/201-encrypt-running-vmss-windows)
First, select the template that fits your scenario.
- [Deploy a virtual machine scale set of Windows VMs with a jumpbox and enables encryption on Windows virtual machine scale sets](https://github.com/Azure/azure-quickstart-templates/tree/master/201-encrypt-vmss-windows-jumpbox) -- [Disable disk encryption on a running Linux virtual machine scale set](https://github.com/Azure/azure-quickstart-templates/tree/master/201-decrypt-vmss-linux)
+- [Disable disk encryption on a running Linux virtual machine scale set](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.compute/decrypt-vmss-linux)
-- [Disable disk encryption on a running Windows virtual machine scale set](https://github.com/Azure/azure-quickstart-templates/tree/master/201-decrypt-vmss-windows)
+- [Disable disk encryption on a running Windows virtual machine scale set](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.compute/decrypt-vmss-windows)
Then follow these steps:
virtual-machines Azure Disk Enc Linux https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/extensions/azure-disk-enc-linux.md
Title: Azure Disk Encryption for Linux
+ Title: Azure Disk Encryption for Linux
description: Deploys Azure Disk Encryption for Linux to a virtual machine using a virtual machine extension. -+ Last updated 03/19/2020
For a full list of prerequisites, see [Azure Disk Encryption for Linux VMs](../l
There are two versions of extension schema for Azure Disk Encryption (ADE): - v1.1 - A newer recommended schema that does not use Azure Active Directory (AAD) properties.-- v0.1 - An older schema that requires Azure Active Directory (AAD) properties.
+- v0.1 - An older schema that requires Azure Active Directory (AAD) properties.
To select a target schema, the `typeHandlerVersion` property must be set equal to the version of the schema you want to use.
The v1.1 schema is recommended and does not require Azure Active Directory (AAD)
```
-### Schema v0.1: with AAD
+### Schema v0.1: with AAD
The 0.1 schema requires `AADClientID` and either `AADClientSecret` or `AADClientCertificate`.
Using `AADClientCertificate`:
| publisher | Microsoft.Azure.Security | string | | type | AzureDiskEncryptionForLinux | string | | typeHandlerVersion | 1.1, 0.1 | int |
-| (0.1 schema) AADClientID | xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx | guid |
+| (0.1 schema) AADClientID | xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx | guid |
| (0.1 schema) AADClientSecret | password | string | | (0.1 schema) AADClientCertificate | thumbprint | string | | (optional) (0.1 schema) Passphrase | password | string | | DiskFormatQuery | {"dev_path":"","name":"","file_system":""} | JSON dictionary |
-| EncryptionOperation | EnableEncryption, EnableEncryptionFormatAll | string |
+| EncryptionOperation | EnableEncryption, EnableEncryptionFormatAll | string |
| (optional - default RSA-OAEP ) KeyEncryptionAlgorithm | 'RSA-OAEP', 'RSA-OAEP-256', 'RSA1_5' | string | | KeyVaultURL | url | string | | KeyVaultResourceId | url | string |
Using `AADClientCertificate`:
For an example of template deployment based on schema v1.1, see the Azure Quickstart Template [201-encrypt-running-linux-vm-without-aad](https://github.com/Azure/azure-quickstart-templates/tree/master/201-encrypt-running-linux-vm-without-aad).
-For an example of template deployment based on schema v0.1, see the Azure Quickstart Template [201-encrypt-running-linux-vm](https://github.com/Azure/azure-quickstart-templates/tree/master/201-encrypt-running-linux-vm).
+For an example of template deployment based on schema v0.1, see the Azure Quickstart Template [encrypt-running-linux-vm](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.compute/encrypt-running-linux-vm).
>[!WARNING] > - If you have previously used Azure Disk Encryption with Azure AD to encrypt a VM, you must continue to use this option to encrypt your VM.
-> - When encrypting Linux OS volumes, the VM should be considered unavailable. We strongly recommend to avoid SSH logins while the encryption is in progress to avoid issues blocking any open files that will need to be accessed during the encryption process. To check progress, use the [Get-AzVMDiskEncryptionStatus](/powershell/module/az.compute/get-azvmdiskencryptionstatus) PowerShell cmdlet or the [vm encryption show](/cli/azure/vm/encryption#az_vm_encryption_show) CLI command. This process can be expected to take a few hours for a 30GB OS volume, plus additional time for encrypting data volumes. Data volume encryption time will be proportional to the size and quantity of the data volumes unless the encrypt format all option is used.
-> - Disabling encryption on Linux VMs is only supported for data volumes. It is not supported on data or OS volumes if the OS volume has been encrypted.
+> - When encrypting Linux OS volumes, the VM should be considered unavailable. We strongly recommend to avoid SSH logins while the encryption is in progress to avoid issues blocking any open files that will need to be accessed during the encryption process. To check progress, use the [Get-AzVMDiskEncryptionStatus](/powershell/module/az.compute/get-azvmdiskencryptionstatus) PowerShell cmdlet or the [vm encryption show](/cli/azure/vm/encryption#az_vm_encryption_show) CLI command. This process can be expected to take a few hours for a 30GB OS volume, plus additional time for encrypting data volumes. Data volume encryption time will be proportional to the size and quantity of the data volumes unless the encrypt format all option is used.
+> - Disabling encryption on Linux VMs is only supported for data volumes. It is not supported on data or OS volumes if the OS volume has been encrypted.
>[!NOTE] > Also if `VolumeType` parameter is set to All, data disks will be encrypted only if they are properly mounted.
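As a point of reference, enabling encryption on a running Linux VM and checking its progress can look like the following sketch with Az PowerShell; the resource group, VM, and key vault names are placeholders.

```powershell
# Placeholder names; assumes an existing key vault in the same region as the VM
$rgName = 'MyResourceGroup'
$vmName = 'MyLinuxVm'
$vault  = Get-AzKeyVault -VaultName 'MyKeyVault' -ResourceGroupName $rgName

Set-AzVMDiskEncryptionExtension -ResourceGroupName $rgName -VMName $vmName `
    -DiskEncryptionKeyVaultUrl $vault.VaultUri `
    -DiskEncryptionKeyVaultId $vault.ResourceId `
    -VolumeType All

# OS volume encryption on Linux can take several hours; check progress with:
Get-AzVMDiskEncryptionStatus -ResourceGroupName $rgName -VMName $vmName
```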
For troubleshooting, refer to the [Azure Disk Encryption troubleshooting guide](
### Support
-If you need more help at any point in this article, you can contact the Azure experts on the [MSDN Azure and Stack Overflow forums](https://azure.microsoft.com/support/community/).
+If you need more help at any point in this article, you can contact the Azure experts on the [MSDN Azure and Stack Overflow forums](https://azure.microsoft.com/support/community/).
Alternatively, you can file an Azure support incident. Go to [Azure support](https://azure.microsoft.com/support/options/) and select Get support. For information about using Azure Support, read the [Microsoft Azure Support FAQ](https://azure.microsoft.com/support/faq/).
virtual-machines Azure Disk Enc Windows https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/extensions/azure-disk-enc-windows.md
Title: Azure Disk Encryption for Windows
+ Title: Azure Disk Encryption for Windows
description: Deploys Azure Disk Encryption to a Windows virtual machine using a virtual machine extension. -+ Last updated 03/19/2020
Last updated 03/19/2020
## Overview
-Azure Disk Encryption leverages BitLocker to provide full disk encryption on Azure virtual machines running Windows. This solution is integrated with Azure Key Vault to manage disk encryption keys and secrets in your key vault subscription.
+Azure Disk Encryption leverages BitLocker to provide full disk encryption on Azure virtual machines running Windows. This solution is integrated with Azure Key Vault to manage disk encryption keys and secrets in your key vault subscription.
## Prerequisites
For a full list of prerequisites, see [Azure Disk Encryption for Windows VMs](..
There are two versions of extension schema for Azure Disk Encryption (ADE): - v2.2 - A newer recommended schema that does not use Azure Active Directory (AAD) properties.-- v1.1 - An older schema that requires Azure Active Directory (AAD) properties.
+- v1.1 - An older schema that requires Azure Active Directory (AAD) properties.
To select a target schema, the `typeHandlerVersion` property must be set equal to version of schema you want to use.
The v2.2 schema is recommended for all new VMs and does not require Azure Active
```
-### Schema v1.1: with AAD
+### Schema v1.1: with AAD
The 1.1 schema requires `aadClientID` and either `aadClientSecret` or `AADClientCertificate` and is not recommended for new VMs.
Using `aadClientSecret`:
"properties": { "protectedSettings": { "AADClientSecret": "[aadClientSecret]"
- },
+ },
"publisher": "Microsoft.Azure.Security", "type": "AzureDiskEncryption", "typeHandlerVersion": "1.1",
Using `AADClientCertificate`:
"properties": { "protectedSettings": { "AADClientCertificate": "[aadClientCertificate]"
- },
+ },
"publisher": "Microsoft.Azure.Security", "type": "AzureDiskEncryption", "typeHandlerVersion": "1.1",
Using `AADClientCertificate`:
| publisher | Microsoft.Azure.Security | string | | type | AzureDiskEncryption | string | | typeHandlerVersion | 2.2, 1.1 | string |
-| (1.1 schema) AADClientID | xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx | guid |
+| (1.1 schema) AADClientID | xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx | guid |
| (1.1 schema) AADClientSecret | password | string | | (1.1 schema) AADClientCertificate | thumbprint | string |
-| EncryptionOperation | EnableEncryption | string |
+| EncryptionOperation | EnableEncryption | string |
| (optional - default RSA-OAEP ) KeyEncryptionAlgorithm | 'RSA-OAEP', 'RSA-OAEP-256', 'RSA1_5' | string | | KeyVaultURL | url | string | | KeyVaultResourceId | url | string |
Using `AADClientCertificate`:
## Template deployment
-For an example of template deployment based on schema v2.2, see Azure QuickStart Template [201-encrypt-running-windows-vm-without-aad](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.compute/encrypt-running-windows-vm-without-aad).
+For an example of template deployment based on schema v2.2, see Azure QuickStart Template [encrypt-running-windows-vm-without-aad](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.compute/encrypt-running-windows-vm-without-aad).
For an example of template deployment based on schema v1.1, see Azure QuickStart Template [201-encrypt-running-windows-vm](https://github.com/Azure/azure-quickstart-templates/tree/master/201-encrypt-running-windows-vm). >[!NOTE]
-> Also if `VolumeType` parameter is set to All, data disks will be encrypted only if they are properly formatted.
+> Also if `VolumeType` parameter is set to All, data disks will be encrypted only if they are properly formatted.
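To confirm which volumes were actually encrypted after the deployment, you can query the VM's encryption status; a quick sketch with Az PowerShell (the names are placeholders):

```powershell
# Reports OsVolumeEncrypted and DataVolumesEncrypted for the VM
Get-AzVMDiskEncryptionStatus -ResourceGroupName 'MyResourceGroup' -VMName 'MyWindowsVm'
```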
## Troubleshoot and support
For troubleshooting, refer to the [Azure Disk Encryption troubleshooting guide](
### Support
-If you need more help at any point in this article, you can contact the Azure experts on the [MSDN Azure and Stack Overflow forums](https://azure.microsoft.com/support/community/).
+If you need more help at any point in this article, you can contact the Azure experts on the [MSDN Azure and Stack Overflow forums](https://azure.microsoft.com/support/community/).
Alternatively, you can file an Azure support incident. Go to [Azure support](https://azure.microsoft.com/support/options/) and select Get support. For information about using Azure Support, read the [Microsoft Azure Support FAQ](https://azure.microsoft.com/support/faq/).
virtual-machines Dsc Windows https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/extensions/dsc-windows.md
The following JSON shows the schema for the settings portion of the DSC Extensio
Azure VM extensions can be deployed with Azure Resource Manager templates. Templates are ideal when deploying one or more virtual machines that require post deployment configuration. A sample Resource Manager template that includes the DSC extension for Windows can be found on the
-[Azure Quick Start Gallery](https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/automation-configuration/automation-configuration/nested/provisionServer.json#L91).
+[Azure Quick Start Gallery](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.automation/automation-configuration/nested/provisionServer.json#L91).
## Troubleshoot and support
virtual-machines Disk Encryption Linux Aad https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/linux/disk-encryption-linux-aad.md
You can enable many disk-encryption scenarios, and the steps might vary accordin
- [Networking and Group Policy](disk-encryption-overview-aad.md#networking-and-group-policy) - [Encryption key storage requirements](disk-encryption-overview-aad.md#encryption-key-storage-requirements)
-Take a [snapshot](snapshot-copy-managed-disk.md), make a backup, or both before you encrypt the disks. Backups ensure that a recovery option is possible if an unexpected failure occurs during encryption. VMs with managed disks require a backup before encryption occurs. After a backup is made, you can use the Set-AzVMDiskEncryptionExtension cmdlet to encrypt managed disks by specifying the -skipVmBackup parameter. For more information about how to back up and restore encrypted VMs, see [Azure Backup](../../backup/backup-azure-vms-encryption.md).
+Take a [snapshot](snapshot-copy-managed-disk.md), make a backup, or both before you encrypt the disks. Backups ensure that a recovery option is possible if an unexpected failure occurs during encryption. VMs with managed disks require a backup before encryption occurs. After a backup is made, you can use the Set-AzVMDiskEncryptionExtension cmdlet to encrypt managed disks by specifying the -skipVmBackup parameter. For more information about how to back up and restore encrypted VMs, see [Azure Backup](../../backup/backup-azure-vms-encryption.md).
>[!WARNING] > - If you previously used [Azure Disk Encryption with the Azure AD app](disk-encryption-overview-aad.md) to encrypt this VM, you must continue to use this option to encrypt your VM. You can't use [Azure Disk Encryption](disk-encryption-overview.md) on this encrypted VM because this isn't a supported scenario, which means switching away from the Azure AD application for this encrypted VM isn't supported yet. > - To make sure the encryption secrets don't cross regional boundaries, Azure Disk Encryption needs the key vault and the VMs to be co-located in the same region. Create and use a key vault that's in the same region as the VM to be encrypted. > - When you encrypt Linux OS volumes, the process can take a few hours. It's normal for Linux OS volumes to take longer than data volumes to encrypt.
-> - When you encrypt Linux OS volumes, the VM should be considered unavailable. We strongly recommend that you avoid SSH logins while the encryption is in progress to avoid blocking any open files that need to be accessed during the encryption process. To check progress, use the [Get-AzVMDiskEncryptionStatus](/powershell/module/az.compute/get-azvmdiskencryptionstatus) or [vm encryption show](/cli/azure/vm/encryption#az_vm_encryption_show) commands. You can expect this process to take a few hours for a 30-GB OS volume, plus additional time for encrypting data volumes. Data volume encryption time is proportional to the size and quantity of the data volumes unless the **encrypt format all** option is used.
- > - Disabling encryption on Linux VMs is only supported for data volumes. It's not supported on data or OS volumes if the OS volume has been encrypted.
+> - When you encrypt Linux OS volumes, the VM should be considered unavailable. We strongly recommend that you avoid SSH logins while the encryption is in progress to avoid blocking any open files that need to be accessed during the encryption process. To check progress, use the [Get-AzVMDiskEncryptionStatus](/powershell/module/az.compute/get-azvmdiskencryptionstatus) or [vm encryption show](/cli/azure/vm/encryption#az_vm_encryption_show) commands. You can expect this process to take a few hours for a 30-GB OS volume, plus additional time for encrypting data volumes. Data volume encryption time is proportional to the size and quantity of the data volumes unless the **encrypt format all** option is used.
+ > - Disabling encryption on Linux VMs is only supported for data volumes. It's not supported on data or OS volumes if the OS volume has been encrypted.
+
-
## <a name="bkmk_RunningLinux"> </a> Enable encryption on an existing or running IaaS Linux VM
-In this scenario, you can enable encryption by using the Azure Resource Manager template, PowerShell cmdlets, or Azure CLI commands.
+In this scenario, you can enable encryption by using the Azure Resource Manager template, PowerShell cmdlets, or Azure CLI commands.
>[!IMPORTANT]
- >It's mandatory to take a snapshot or back up a managed disk-based VM instance outside of and prior to enabling Azure Disk Encryption. You can take a snapshot of the managed disk from the Azure portal, or you can use [Azure Backup](../../backup/backup-azure-vms-encryption.md). Backups ensure that a recovery option is possible in the case of any unexpected failure during encryption. After a backup is made, use the Set-AzVMDiskEncryptionExtension cmdlet to encrypt managed disks by specifying the -skipVmBackup parameter. The Set-AzVMDiskEncryptionExtension command fails against managed disk-based VMs until a backup is made and this parameter is specified.
+ >It's mandatory to take a snapshot or back up a managed disk-based VM instance outside of and prior to enabling Azure Disk Encryption. You can take a snapshot of the managed disk from the Azure portal, or you can use [Azure Backup](../../backup/backup-azure-vms-encryption.md). Backups ensure that a recovery option is possible in the case of any unexpected failure during encryption. After a backup is made, use the Set-AzVMDiskEncryptionExtension cmdlet to encrypt managed disks by specifying the -skipVmBackup parameter. The Set-AzVMDiskEncryptionExtension command fails against managed disk-based VMs until a backup is made and this parameter is specified.
>
->Encrypting or disabling encryption might cause the VM to reboot.
+>Encrypting or disabling encryption might cause the VM to reboot.
>
-### <a name="bkmk_RunningLinuxCLI"> </a>Enable encryption on an existing or running Linux VM by using the Azure CLI
+### <a name="bkmk_RunningLinuxCLI"> </a>Enable encryption on an existing or running Linux VM by using the Azure CLI
You can enable disk encryption on your encrypted VHD by installing and using the [Azure CLI 2.0](/cli/azure) command-line tool. You can use it in your browser with [Azure Cloud Shell](../../cloud-shell/overview.md), or you can install it on your local machine and use it in any PowerShell session. To enable encryption on existing or running IaaS Linux VMs in Azure, use the following CLI commands: Use the [az vm encryption enable](/cli/azure/vm/encryption#az_vm_encryption_enable) command to enable encryption on a running IaaS virtual machine in Azure. - **Encrypt a running VM by using a client secret:**
-
+ ```azurecli-interactive az vm encryption enable --resource-group "MyVirtualMachineResourceGroup" --name "MySecureVM" --aad-client-id "<my spn created with CLI/my Azure AD ClientID>" --aad-client-secret "My-AAD-client-secret" --disk-encryption-keyvault "MySecureVault" --volume-type [All|OS|Data] ``` - **Encrypt a running VM by using KEK to wrap the client secret:**
-
+ ```azurecli-interactive az vm encryption enable --resource-group "MyVirtualMachineResourceGroup" --name "MySecureVM" --aad-client-id "<my spn created with CLI which is the Azure AD ClientID>" --aad-client-secret "My-AAD-client-secret" --disk-encryption-keyvault "MySecureVault" --key-encryption-key "MyKEK_URI" --key-encryption-keyvault "MySecureVaultContainingTheKEK" --volume-type [All|OS|Data] ``` >[!NOTE]
- > The syntax for the value of the disk-encryption-keyvault parameter is the full identifier string:
+ > The syntax for the value of the disk-encryption-keyvault parameter is the full identifier string:
/subscriptions/[subscription-id-guid]/resourceGroups/[resource-group-name]/providers/Microsoft.KeyVault/vaults/[keyvault-name].</br> </br> The syntax for the value of the key-encryption-key parameter is the full URI to the KEK as in: https://[keyvault-name].vault.azure.net/keys/[kekname]/[kek-unique-id]. -- **Verify that the disks are encrypted:** To check on the encryption status of an IaaS VM, use the [az vm encryption show](/cli/azure/vm/encryption#az_vm_encryption_show) command.
+- **Verify that the disks are encrypted:** To check on the encryption status of an IaaS VM, use the [az vm encryption show](/cli/azure/vm/encryption#az_vm_encryption_show) command.
```azurecli-interactive az vm encryption show --name "MySecureVM" --resource-group "MyVirtualMachineResourceGroup" ``` - **Disable encryption:** To disable encryption, use the [az vm encryption disable](/cli/azure/vm/encryption#az_vm_encryption_disable) command. Disabling encryption is only allowed on data volumes for Linux VMs.
-
+ ```azurecli-interactive az vm encryption disable --name "MySecureVM" --resource-group "MyVirtualMachineResourceGroup" --volume-type DATA ```
Use the [Set-AzVMDiskEncryptionExtension](/powershell/module/az.compute/set-azvm
$diskEncryptionKeyVaultUrl = $KeyVault.VaultUri; $KeyVaultResourceId = $KeyVault.ResourceId; $sequenceVersion = [Guid]::NewGuid();
-
+ Set-AzVMDiskEncryptionExtension -ResourceGroupName $VMRGName -VMName $vmName -AadClientID $aadClientID -AadClientSecret $aadClientSecret -DiskEncryptionKeyVaultUrl $diskEncryptionKeyVaultUrl -DiskEncryptionKeyVaultId $KeyVaultResourceId -VolumeType '[All|OS|Data]' -SequenceVersion $sequenceVersion -skipVmBackup; ```-- **Encrypt a running VM by using KEK to wrap the client secret:** Azure Disk Encryption lets you specify an existing key in your key vault to wrap disk encryption secrets that were generated while enabling encryption. When a key encryption key is specified, Azure Disk Encryption uses that key to wrap the encryption secrets before writing to the key vault. Modify the -VolumeType parameter to specify which disks you're encrypting.
+- **Encrypt a running VM by using KEK to wrap the client secret:** Azure Disk Encryption lets you specify an existing key in your key vault to wrap disk encryption secrets that were generated while enabling encryption. When a key encryption key is specified, Azure Disk Encryption uses that key to wrap the encryption secrets before writing to the key vault. Modify the -VolumeType parameter to specify which disks you're encrypting.
```azurepowershell $KVRGname = 'MyKeyVaultResourceGroup';
Use the [Set-AzVMDiskEncryptionExtension](/powershell/module/az.compute/set-azvm
$KeyVaultResourceId = $KeyVault.ResourceId; $keyEncryptionKeyUrl = (Get-AzKeyVaultKey -VaultName $KeyVaultName -Name $keyEncryptionKeyName).Key.kid; $sequenceVersion = [Guid]::NewGuid();
-
+ Set-AzVMDiskEncryptionExtension -ResourceGroupName $VMRGName -VMName $vmName -AadClientID $aadClientID -AadClientSecret $aadClientSecret -DiskEncryptionKeyVaultUrl $diskEncryptionKeyVaultUrl -DiskEncryptionKeyVaultId $KeyVaultResourceId -KeyEncryptionKeyUrl $keyEncryptionKeyUrl -KeyEncryptionKeyVaultId $KeyVaultResourceId -VolumeType '[All|OS|Data]' -SequenceVersion $sequenceVersion -skipVmBackup; ``` >[!NOTE]
- > The syntax for the value of the disk-encryption-keyvault parameter is the full identifier string:
+ > The syntax for the value of the disk-encryption-keyvault parameter is the full identifier string:
/subscriptions/[subscription-id-guid]/resourceGroups/[KVresource-group-name]/providers/Microsoft.KeyVault/vaults/[keyvault-name].</br> </br> > The syntax for the value of the key-encryption-key parameter is the full URI to the KEK as in:
- https://[keyvault-name].vault.azure.net/keys/[kekname]/[kek-unique-id].
-
-- **Verify that the disks are encrypted:** To check on the encryption status of an IaaS VM, use the [Get-AzVmDiskEncryptionStatus](/powershell/module/az.compute/get-azvmdiskencryptionstatus) cmdlet.
-
- ```azurepowershell-interactive
+ https://[keyvault-name].vault.azure.net/keys/[kekname]/[kek-unique-id].
+
+- **Verify that the disks are encrypted:** To check on the encryption status of an IaaS VM, use the [Get-AzVmDiskEncryptionStatus](/powershell/module/az.compute/get-azvmdiskencryptionstatus) cmdlet.
+
+ ```azurepowershell-interactive
Get-AzVmDiskEncryptionStatus -ResourceGroupName MyVirtualMachineResourceGroup -VMName MySecureVM ```
-
+ - **Disable disk encryption:** To disable the encryption, use the [Disable-AzVMDiskEncryption](/powershell/module/az.compute/disable-azvmdiskencryption) cmdlet. Disabling encryption is only allowed on data volumes for Linux VMs.
-
- ```azurepowershell-interactive
+
+ ```azurepowershell-interactive
Disable-AzVMDiskEncryption -ResourceGroupName 'MyVirtualMachineResourceGroup' -VMName 'MySecureVM' ``` ### <a name="bkmk_RunningLinux"> </a> Enable encryption on an existing or running IaaS Linux VM with a template
-You can enable disk encryption on an existing or running IaaS Linux VM in Azure by using the [Resource Manager template](https://github.com/Azure/azure-quickstart-templates/tree/master/201-encrypt-running-linux-vm).
+You can enable disk encryption on an existing or running IaaS Linux VM in Azure by using the [Resource Manager template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.compute/encrypt-running-linux-vm).
1. Select **Deploy to Azure** on the Azure quickstart template.
The EncryptFormatAll parameter reduces the time for Linux data disks to be encry
>[!WARNING] > EncryptFormatAll shouldn't be used when there's needed data on a VM's data volumes. You can exclude disks from encryption by unmounting them. Try out the EncryptFormatAll parameter on a test VM first to understand the feature parameter and its implication before you try it on the production VM. The EncryptFormatAll option formats the data disk, so all the data on it will be lost. Before you proceed, verify that any disks you want to exclude are properly unmounted. </br></br>
- >If you set this parameter while you update encryption settings, it might lead to a reboot before the actual encryption. In this case, you also want to remove the disk you don't want formatted from the fstab file. Similarly, you should add the partition you want encrypt-formatted to the fstab file before you initiate the encryption operation.
+ >If you set this parameter while you update encryption settings, it might lead to a reboot before the actual encryption. In this case, you also want to remove the disk you don't want formatted from the fstab file. Similarly, you should add the partition you want encrypt-formatted to the fstab file before you initiate the encryption operation.
### <a name="bkmk_EFACriteria"> </a> EncryptFormatAll criteria
-The parameter goes through all partitions and encrypts them as long as they meet *all* of the following criteria:
+The parameter goes through all partitions and encrypts them as long as they meet *all* of the following criteria:
- Is not a root/OS/boot partition - Is not already encrypted - Is not a BEK volume
Encrypt the disks that compose the RAID or LVM volume rather than the RAID or LV
### <a name="bkmk_EFATemplate"> </a> Use the EncryptFormatAll parameter with a template To use the EncryptFormatAll option, use any preexisting Azure Resource Manager template that encrypts a Linux VM and change the **EncryptionOperation** field for the AzureDiskEncryption resource.
-1. As an example, use the [Resource Manager template to encrypt a running Linux IaaS VM](https://github.com/vermashi/azure-quickstart-templates/tree/encrypt-format-running-linux-vm/201-encrypt-running-linux-vm).
+1. As an example, use the [Resource Manager template to encrypt a running Linux IaaS VM](https://github.com/vermashi/azure-quickstart-templates/tree/encrypt-format-running-linux-vm/201-encrypt-running-linux-vm).
2. Select **Deploy to Azure** on the Azure quickstart template. 3. Change the **EncryptionOperation** field from **EnableEncryption** to **EnableEncryptionFormatAll**. 4. Select the subscription, resource group, resource group location, other parameters, legal terms, and agreement. Select **Create** to enable encryption on the existing or running IaaS VM.
To use the EncryptFormatAll option, use any preexisting Azure Resource Manager t
Use the [Set-AzVMDiskEncryptionExtension](/powershell/module/az.compute/set-azvmdiskencryptionextension) cmdlet with the EncryptFormatAll parameter. **Encrypt a running VM by using a client secret and EncryptFormatAll:** As an example, the following script initializes your variables and runs the Set-AzVMDiskEncryptionExtension cmdlet with the EncryptFormatAll parameter. The resource group, VM, key vault, Azure AD app, and client secret should have already been created as prerequisites. Replace MyKeyVaultResourceGroup, MyVirtualMachineResourceGroup, MySecureVM, MySecureVault, My-AAD-client-ID, and My-AAD-client-secret with your values.
-
+ ```azurepowershell $KVRGname = 'MyKeyVaultResourceGroup';
- $VMRGName = 'MyVirtualMachineResourceGroup';
+ $VMRGName = 'MyVirtualMachineResourceGroup';
$aadClientID = 'My-AAD-client-ID'; $aadClientSecret = 'My-AAD-client-secret'; $KeyVaultName = 'MySecureVault'; $KeyVault = Get-AzKeyVault -VaultName $KeyVaultName -ResourceGroupName $KVRGname; $diskEncryptionKeyVaultUrl = $KeyVault.VaultUri; $KeyVaultResourceId = $KeyVault.ResourceId;
-
+ Set-AzVMDiskEncryptionExtension -ResourceGroupName $VMRGName -VMName $vmName -AadClientID $aadClientID -AadClientSecret $aadClientSecret -DiskEncryptionKeyVaultUrl $diskEncryptionKeyVaultUrl -DiskEncryptionKeyVaultId $KeyVaultResourceId -EncryptFormatAll ```
-### <a name="bkmk_EFALVM"> </a> Use the EncryptFormatAll parameter with Logical Volume Manager (LVM)
+### <a name="bkmk_EFALVM"> </a> Use the EncryptFormatAll parameter with Logical Volume Manager (LVM)
We recommend an LVM-on-crypt setup. For all the following examples, replace the device-path and mountpoints with whatever suits your use case. This setup can be done as follows: - Add the data disks that will compose the VM. - Format, mount, and add these disks to the fstab file. 1. Format the newly added disk. We use symlinks generated by Azure here. Using symlinks avoids problems related to device names changing. For more information, see [Troubleshoot device names problems](/troubleshoot/azure/virtual-machines/troubleshoot-device-names-problems).
-
+ ```console mkfs -t ext4 /dev/disk/azure/scsi1/lun0 ```
We recommend an LVM-on-crypt setup. For all the following examples, replace the
## <a name="bkmk_VHDpre"> </a> New IaaS VMs created from customer-encrypted VHD and encryption keys
-In this scenario, you can enable encrypting by using the Resource Manager template, PowerShell cmdlets, or CLI commands. The following sections explain in greater detail the Resource Manager template and CLI commands.
+In this scenario, you can enable encrypting by using the Resource Manager template, PowerShell cmdlets, or CLI commands. The following sections explain in greater detail the Resource Manager template and CLI commands.
Use the instructions in the appendix for preparing pre-encrypted images that can be used in Azure. After the image is created, you can use the steps in the next section to create an encrypted Azure VM. * [Prepare a pre-encrypted Linux VHD](disk-encryption-sample-scripts.md) >[!IMPORTANT]
- >It's mandatory to take a snapshot or back up a managed disk-based VM instance outside of and prior to enabling Azure Disk Encryption. You can take a snapshot of the managed disk from the portal, or you can use [Azure Backup](../../backup/backup-azure-vms-encryption.md). Backups ensure that a recovery option is possible in the case of any unexpected failure during encryption. After a backup is made, use the Set-AzVMDiskEncryptionExtension cmdlet to encrypt managed disks by specifying the -skipVmBackup parameter. The Set-AzVMDiskEncryptionExtension command fails against managed disk-based VMs until a backup is made and this parameter is specified.
+ >It's mandatory to take a snapshot or back up a managed disk-based VM instance outside of and prior to enabling Azure Disk Encryption. You can take a snapshot of the managed disk from the portal, or you can use [Azure Backup](../../backup/backup-azure-vms-encryption.md). Backups ensure that a recovery option is possible in the case of any unexpected failure during encryption. After a backup is made, use the Set-AzVMDiskEncryptionExtension cmdlet to encrypt managed disks by specifying the -skipVmBackup parameter. The Set-AzVMDiskEncryptionExtension command fails against managed disk-based VMs until a backup is made and this parameter is specified.
> >Encrypting or disabling encryption might cause the VM to reboot.
-### <a name="bkmk_VHDprePSH"> </a> Use Azure PowerShell to encrypt IaaS VMs with pre-encrypted VHDs
-You can enable disk encryption on your encrypted VHD by using the PowerShell cmdlet [Set-AzVMOSDisk](/powershell/module/az.compute/set-azvmosdisk#examples). The following example gives you some common parameters.
+### <a name="bkmk_VHDprePSH"> </a> Use Azure PowerShell to encrypt IaaS VMs with pre-encrypted VHDs
+You can enable disk encryption on your encrypted VHD by using the PowerShell cmdlet [Set-AzVMOSDisk](/powershell/module/az.compute/set-azvmosdisk#examples). The following example gives you some common parameters.
```powershell $VirtualMachine = New-AzVMConfig -VMName "MySecureVM" -VMSize "Standard_A1"
New-AzVM -VM $VirtualMachine -ResourceGroupName "MyVirtualMachineResourceGroup"
``` ## Enable encryption on a newly added data disk
-You can add a new data disk by using [az vm disk attach](add-disk.md) or [through the Azure portal](attach-disk-portal.md). Before you can encrypt, you need to mount the newly attached data disk first. You must request encryption of the data drive because the drive will be unusable while encryption is in progress.
+You can add a new data disk by using [az vm disk attach](add-disk.md) or [through the Azure portal](attach-disk-portal.md). Before you can encrypt, you need to mount the newly attached data disk first. You must request encryption of the data drive because the drive will be unusable while encryption is in progress.
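If you work in Azure PowerShell rather than the CLI, the following is a minimal sketch of attaching an empty data disk (hypothetical names); the disk must still be partitioned, formatted, and mounted inside the VM before you request encryption.

```azurepowershell
# Minimal sketch (hypothetical names): attach an empty 128-GB data disk to an existing VM.
$vm = Get-AzVM -ResourceGroupName 'MyVirtualMachineResourceGroup' -Name 'MySecureVM'
Add-AzVMDataDisk -VM $vm -Name 'MySecureVM-data1' -Lun 1 -CreateOption Empty -DiskSizeInGB 128
Update-AzVM -ResourceGroupName 'MyVirtualMachineResourceGroup' -VM $vm
```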
### Enable encryption on a newly added disk with the Azure CLI
- If the VM was previously encrypted with "All," then the --volume-type parameter should remain All. All includes both OS and data disks. If the VM was previously encrypted with a volume type of "OS," then the --volume-type parameter should be changed to All so that both the OS and the new data disk will be included. If the VM was encrypted with only the volume type of "Data," then it can remain Data as demonstrated here. Adding and attaching a new data disk to a VM isn't sufficient preparation for encryption. The newly attached disk must also be formatted and properly mounted within the VM before you enable encryption. On Linux, the disk must be mounted in /etc/fstab with a [persistent block device name](/troubleshoot/azure/virtual-machines/troubleshoot-device-names-problems).
+ If the VM was previously encrypted with "All," then the --volume-type parameter should remain All. All includes both OS and data disks. If the VM was previously encrypted with a volume type of "OS," then the --volume-type parameter should be changed to All so that both the OS and the new data disk will be included. If the VM was encrypted with only the volume type of "Data," then it can remain Data as demonstrated here. Adding and attaching a new data disk to a VM isn't sufficient preparation for encryption. The newly attached disk must also be formatted and properly mounted within the VM before you enable encryption. On Linux, the disk must be mounted in /etc/fstab with a [persistent block device name](/troubleshoot/azure/virtual-machines/troubleshoot-device-names-problems).
In contrast to PowerShell syntax, the CLI doesn't require you to provide a unique sequence version when you enable encryption. The CLI automatically generates and uses its own unique sequence version value. -- **Encrypt a running VM by using a client secret:**
-
+- **Encrypt a running VM by using a client secret:**
+ ```azurecli-interactive az vm encryption enable --resource-group "MyVirtualMachineResourceGroup" --name "MySecureVM" --aad-client-id "<my spn created with CLI/my Azure AD ClientID>" --aad-client-secret "My-AAD-client-secret" --disk-encryption-keyvault "MySecureVault" --volume-type "Data" ```
In contrast to PowerShell syntax, the CLI doesn't require you to provide a uniqu
``` ### Enable encryption on a newly added disk with Azure PowerShell
- When you use PowerShell to encrypt a new disk for Linux, a new sequence version needs to be specified. The sequence version has to be unique. The following script generates a GUID for the sequence version.
-
+ When you use PowerShell to encrypt a new disk for Linux, a new sequence version needs to be specified. The sequence version has to be unique. The following script generates a GUID for the sequence version.
+ - **Encrypt a running VM by using a client secret:** The following script initializes your variables and runs the Set-AzVMDiskEncryptionExtension cmdlet. The resource group, VM, key vault, Azure AD app, and client secret should have already been created as prerequisites. Replace MyVirtualMachineResourceGroup, MyKeyVaultResourceGroup, MySecureVM, MySecureVault, My-AAD-client-ID, and My-AAD-client-secret with your values. The -VolumeType parameter is set to data disks and not the OS disk. If the VM was previously encrypted with a volume type of "OS" or "All," then the -VolumeType parameter should be changed to All so that both the OS and the new data disk will be included. ```azurepowershell $KVRGname = 'MyKeyVaultResourceGroup';
- $VMRGName = 'MyVirtualMachineResourceGroup';
+ $VMRGName = 'MyVirtualMachineResourceGroup';
$vmName = 'MySecureVM'; $aadClientID = 'My-AAD-client-ID'; $aadClientSecret = 'My-AAD-client-secret';
In contrast to PowerShell syntax, the CLI doesn't require you to provide a uniqu
$diskEncryptionKeyVaultUrl = $KeyVault.VaultUri; $KeyVaultResourceId = $KeyVault.ResourceId; $sequenceVersion = [Guid]::NewGuid();
-
+ Set-AzVMDiskEncryptionExtension -ResourceGroupName $VMRGName -VMName $vmName -AadClientID $aadClientID -AadClientSecret $aadClientSecret -DiskEncryptionKeyVaultUrl $diskEncryptionKeyVaultUrl -DiskEncryptionKeyVaultId $KeyVaultResourceId -VolumeType 'data' -SequenceVersion $sequenceVersion; ``` - **Encrypt a running VM by using KEK to wrap the client secret:** Azure Disk Encryption lets you specify an existing key in your key vault to wrap disk encryption secrets that were generated while enabling encryption. When a key encryption key is specified, Azure Disk Encryption uses that key to wrap the encryption secrets before writing to the key vault. The -VolumeType parameter is set to data disks and not the OS disk. If the VM was previously encrypted with a volume type of "OS" or "All," then the -VolumeType parameter should be changed to All so that both the OS and the new data disk will be included.
In contrast to PowerShell syntax, the CLI doesn't require you to provide a uniqu
$KeyVaultResourceId = $KeyVault.ResourceId; $keyEncryptionKeyUrl = (Get-AzKeyVaultKey -VaultName $KeyVaultName -Name $keyEncryptionKeyName).Key.kid; $sequenceVersion = [Guid]::NewGuid();
-
+ Set-AzVMDiskEncryptionExtension -ResourceGroupName $VMRGName -VMName $vmName -AadClientID $aadClientID -AadClientSecret $aadClientSecret -DiskEncryptionKeyVaultUrl $diskEncryptionKeyVaultUrl -DiskEncryptionKeyVaultId $KeyVaultResourceId -KeyEncryptionKeyUrl $keyEncryptionKeyUrl -KeyEncryptionKeyVaultId $KeyVaultResourceId -VolumeType 'data' -SequenceVersion $sequenceVersion; ``` >[!NOTE]
-> The syntax for the value of the disk-encryption-keyvault parameter is the full identifier string:
+> The syntax for the value of the disk-encryption-keyvault parameter is the full identifier string:
/subscriptions/[subscription-id-guid]/resourceGroups/[resource-group-name]/providers/Microsoft.KeyVault/vaults/[keyvault-name]. </br> </br> > The syntax for the value of the key-encryption-key parameter is the full URI to the KEK as in: https://[keyvault-name].vault.azure.net/keys/[kekname]/[kek-unique-id]. ## Disable encryption for Linux VMs
-You can disable encryption by using Azure PowerShell, the Azure CLI, or a Resource Manager template.
+You can disable encryption by using Azure PowerShell, the Azure CLI, or a Resource Manager template.
>[!IMPORTANT]
->Disabling encryption with Azure Disk Encryption on Linux VMs is only supported for data volumes. It's not supported on data or OS volumes if the OS volume has been encrypted.
+>Disabling encryption with Azure Disk Encryption on Linux VMs is only supported for data volumes. It's not supported on data or OS volumes if the OS volume has been encrypted.
-- **Disable disk encryption with Azure PowerShell:** To disable encryption, use the [Disable-AzureRmVMDiskEncryption](/powershell/module/az.compute/disable-azvmdiskencryption) cmdlet.
+- **Disable disk encryption with Azure PowerShell:** To disable encryption, use the [Disable-AzVMDiskEncryption](/powershell/module/az.compute/disable-azvmdiskencryption) cmdlet.
```azurepowershell-interactive Disable-AzVMDiskEncryption -ResourceGroupName 'MyVirtualMachineResourceGroup' -VMName 'MySecureVM' [--volume-type {ALL, DATA, OS}] ``` -- **Disable encryption with the Azure CLI:** To disable encryption, use the [az vm encryption disable](/cli/azure/vm/encryption#az_vm_encryption_disable) command.
+- **Disable encryption with the Azure CLI:** To disable encryption, use the [az vm encryption disable](/cli/azure/vm/encryption#az_vm_encryption_disable) command.
```azurecli-interactive az vm encryption disable --name "MySecureVM" --resource-group "MyVirtualMachineResourceGroup" --volume-type [ALL, DATA, OS] ``` - **Disable encryption with a Resource Manager template:** To disable encryption, use the [Disable encryption on a running Linux VM](https://aka.ms/decrypt-linuxvm) template. 1. Select **Deploy to Azure**. 2. Select the subscription, resource group, location, VM, legal terms, and agreement.
- 3. Select **Purchase** to disable disk encryption on a running Windows VM.
+ 3. Select **Purchase** to disable disk encryption on a running Linux VM.
## Next steps
virtual-machines Disk Encryption Sample Scripts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/linux/disk-encryption-sample-scripts.md
# Azure Disk Encryption sample scripts for Linux VMs
-This article provides sample scripts for preparing pre-encrypted VHDs and other tasks.
+This article provides sample scripts for preparing pre-encrypted VHDs and other tasks.
> [!NOTE] > All scripts refer to the latest, non-AAD version of ADE, except where noted.
-## Sample PowerShell scripts for Azure Disk Encryption
+## Sample PowerShell scripts for Azure Disk Encryption
- **List all encrypted VMs in your subscription**
-
+ You can find all ADE-encrypted VMs and the extension version, in all resource groups present in a subscription, using [this PowerShell script](https://raw.githubusercontent.com/Azure/azure-powershell/master/src/Compute/Compute/Extension/AzureDiskEncryption/Scripts/Find_1passAdeVersion_VM.ps1). Alternatively, these cmdlets will show all ADE-encrypted VMs (but not the extension version):
This article provides sample scripts for preparing pre-encrypted VHDs and other
``` - **List all encrypted VMSS instances in your subscription**
-
+ You can find all ADE-encrypted VMSS instances and the extension version, in all resource groups present in a subscription, using [this PowerShell script](https://raw.githubusercontent.com/Azure/azure-powershell/master/src/Compute/Compute/Extension/AzureDiskEncryption/Scripts/Find_1passAdeVersion_VMSS.ps1). -- **List all disk encryption secrets used for encrypting VMs in a key vault**
+- **List all disk encryption secrets used for encrypting VMs in a key vault**
```azurepowershell-interactive Get-AzKeyVaultSecret -VaultName $KeyVaultName | where {$_.Tags.ContainsKey('DiskEncryptionKeyFileName')} | format-table @{Label="MachineName"; Expression={$_.Tags['MachineName']}}, @{Label="VolumeLetter"; Expression={$_.Tags['VolumeLetter']}}, @{Label="EncryptionKeyURL"; Expression={$_.Id}} ``` ### Using the Azure Disk Encryption prerequisites PowerShell script
-If you're already familiar with the prerequisites for Azure Disk Encryption, you can use the [Azure Disk Encryption prerequisites PowerShell script](https://raw.githubusercontent.com/Azure/azure-powershell/master/src/Compute/Compute/Extension/AzureDiskEncryption/Scripts/AzureDiskEncryptionPreRequisiteSetup.ps1 ). For an example of using this PowerShell script, see the [Encrypt a VM Quickstart](disk-encryption-powershell-quickstart.md). You can remove the comments from a section of the script, starting at line 211, to encrypt all disks for existing VMs in an existing resource group.
+If you're already familiar with the prerequisites for Azure Disk Encryption, you can use the [Azure Disk Encryption prerequisites PowerShell script](https://raw.githubusercontent.com/Azure/azure-powershell/master/src/Compute/Compute/Extension/AzureDiskEncryption/Scripts/AzureDiskEncryptionPreRequisiteSetup.ps1 ). For an example of using this PowerShell script, see the [Encrypt a VM Quickstart](disk-encryption-powershell-quickstart.md). You can remove the comments from a section of the script, starting at line 211, to encrypt all disks for existing VMs in an existing resource group.
-The following table shows which parameters can be used in the PowerShell script:
+The following table shows which parameters can be used in the PowerShell script:
|Parameter|Description|Mandatory?|
The following table shows which parameters can be used in the PowerShell script:
### Encrypt or decrypt VMs without an Azure AD app -- [Enable disk encryption on an existing or running Linux VM](https://github.com/Azure/azure-quickstart-templates/tree/master/201-encrypt-running-linux-vm-without-aad) -- [Disable encryption on a running Linux VM](https://github.com/Azure/azure-quickstart-templates/tree/master/201-decrypt-running-linux-vm-without-aad)
- - Disabling encryption is only allowed on Data volumes for Linux VMs.
+- [Enable disk encryption on an existing or running Linux VM](https://github.com/Azure/azure-quickstart-templates/tree/master/201-encrypt-running-linux-vm-without-aad)
+- [Disable encryption on a running Linux VM](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.compute/encrypt-running-linux-vm)
+ - Disabling encryption is only allowed on Data volumes for Linux VMs.
### Encrypt or decrypt VMs with an Azure AD app (previous release)
-
-- [Enable disk encryption on an existing or running Linux VM](https://github.com/Azure/azure-quickstart-templates/tree/master/201-encrypt-running-linux-vm)
+- [Enable disk encryption on an existing or running Linux VM](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.compute/encrypt-running-linux-vm)
-- [Disable encryption on a running Linux VM](https://github.com/Azure/azure-quickstart-templates/tree/master/201-decrypt-running-linux-vm)
- - Disabling encryption is only allowed on Data volumes for Linux VMs.
+
+- [Disable encryption on a running Linux VM](https://github.com/Azure/azure-quickstart-templates/tree/master/201-decrypt-running-linux-vm)
+ - Disabling encryption is only allowed on Data volumes for Linux VMs.
- [Create a new encrypted managed disk from a pre-encrypted VHD/storage blob](https://github.com/Azure/azure-quickstart-templates/tree/master/201-create-encrypted-managed-disk)
The following table shows which parameters can be used in the PowerShell script:
### Prerequisites for OS disk encryption
-* The VM must be using a distribution compatible with OS disk encryption as listed in the [Azure Disk Encryption supported operating systems](disk-encryption-overview.md#supported-vms)
+* The VM must be using a distribution compatible with OS disk encryption as listed in the [Azure Disk Encryption supported operating systems](disk-encryption-overview.md#supported-vms)
* The VM must be created from the Marketplace image in Azure Resource Manager. * Azure VM with at least 4 GB of RAM (recommended size is 7 GB). * (For RHEL and CentOS) Disable SELinux. To disable SELinux, see "4.4.2. Disabling SELinux" in the [SELinux User's and Administrator's Guide](https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/SELinux_Users_and_Administrators_Guide/sect-Security-Enhanced_Linux-Working_with_SELinux-Changing_SELinux_Modes.html#sect-Security-Enhanced_Linux-Enabling_and_Disabling_SELinux-Disabling_SELinux) on the VM.
You can monitor OS encryption progress in three ways:
|-- virtualMachines |-- [Your virtual machine] |-- InstanceView
- ```
+ ```
In the InstanceView, scroll down to see the encryption status of your drives.
You can monitor OS encryption progress in three ways:
We recommend that you don't sign-in to the VM while OS encryption is in progress. Copy the logs only when the other two methods have failed. ## Prepare a pre-encrypted Linux VHD
-The preparation for pre-encrypted VHDs can vary depending on the distribution. Examples on preparing Ubuntu 16, openSUSE 13.2, and CentOS 7 are available.
+The preparation for pre-encrypted VHDs can vary depending on the distribution. Examples on preparing Ubuntu 16, openSUSE 13.2, and CentOS 7 are available.
### Ubuntu 16 Configure encryption during the distribution installation by doing the following steps:
To configure encryption to work with Azure, do the following steps:
break fi done
- ```
+ ```
5. Run the "/usr/sbin/dracut -f -v" to update the initrd. ![CentOS 7 Setup - run /usr/sbin/dracut -f -v](./media/disk-encryption/centos-encrypt-fig5.png)
After DM-Crypt encryption is enabled, the local encrypted VHD needs to be upload
## Upload the secret for the pre-encrypted VM to your key vault When encrypting using an Azure AD app (previous release), the disk-encryption secret that you obtained previously must be uploaded as a secret in your key vault. The key vault needs to have disk encryption and permissions enabled for your Azure AD client.
-```powershell
+```powershell
$AadClientId = "My-AAD-Client-Id" $AadClientSecret = "My-AAD-Client-Secret"
When encrypting using an Azure AD app (previous release), the disk-encryption se
Set-AzKeyVaultAccessPolicy -VaultName $KeyVaultName -ResourceGroupName $ResourceGroupName -ServicePrincipalName $AadClientId -PermissionsToKeys all -PermissionsToSecrets all Set-AzKeyVaultAccessPolicy -VaultName $KeyVaultName -ResourceGroupName $ResourceGroupName -EnabledForDiskEncryption
-```
+```
### Disk encryption secret not encrypted with a KEK To set up the secret in your key vault, use [Set-AzKeyVaultSecret](/powershell/module/az.keyvault/set-azkeyvaultsecret). The passphrase is encoded as a base64 string and then uploaded to the key vault. In addition, make sure that the following tags are set when you create the secret in the key vault.
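For illustration, the following is a minimal Azure PowerShell sketch of that upload, assuming a local passphrase file and the DiskEncryptionKeyFileName tag convention used elsewhere in this article; the file path, secret name, and tag value are hypothetical.

```azurepowershell
# Minimal sketch (hypothetical file path, secret name, and tag value): base64-encode the
# passphrase and upload it as a key vault secret with the expected tag.
$passphraseBytes = [System.IO.File]::ReadAllBytes('./LinuxPassPhraseFileName')
$base64Secret    = [System.Convert]::ToBase64String($passphraseBytes)
$secureSecret    = ConvertTo-SecureString -String $base64Secret -AsPlainText -Force
Set-AzKeyVaultSecret -VaultName $KeyVaultName -Name 'MyDiskEncryptionSecret' `
    -SecretValue $secureSecret -Tag @{ 'DiskEncryptionKeyFileName' = 'LinuxPassPhraseFileName' }
```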
virtual-machines Network Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/network-overview.md
This table lists the methods that you can use to create a network interface.
| Azure portal | When you create a VM in the Azure portal, a network interface is automatically created for you (you cannot use a NIC you create separately). The portal creates a VM with only one NIC. If you want to create a VM with more than one NIC, you must create it with a different method. | | [Azure PowerShell](./windows/multiple-nics.md) | Use [New-AzNetworkInterface](/powershell/module/az.network/new-aznetworkinterface) with the **-PublicIpAddressId** parameter to provide the identifier of the public IP address that you previously created. | | [Azure CLI](./linux/multiple-nics.md) | To provide the identifier of the public IP address that you previously created, use [az network nic create](/cli/azure/network/nic) with the **--public-ip-address** parameter. |
-| [Template](../virtual-network/template-samples.md) | Use [Network Interface in a Virtual Network with Public IP Address](https://github.com/Azure/azure-quickstart-templates/tree/master/101-nic-publicip-dns-vnet) as a guide for deploying a network interface using a template. |
+| [Template](../virtual-network/template-samples.md) | Use [Network Interface in a Virtual Network with Public IP Address](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.network/nic-publicip-dns-vnet) as a guide for deploying a network interface using a template. |
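As a compact illustration of the PowerShell row above, the following sketch creates a network interface that references an existing subnet and public IP address; all names and the location are hypothetical.

```azurepowershell
# Minimal sketch (hypothetical names): create a NIC bound to an existing subnet and public IP.
$pip  = Get-AzPublicIpAddress -ResourceGroupName 'MyResourceGroup' -Name 'MyPublicIp'
$vnet = Get-AzVirtualNetwork -ResourceGroupName 'MyResourceGroup' -Name 'MyVnet'
New-AzNetworkInterface -Name 'MyNic' -ResourceGroupName 'MyResourceGroup' -Location 'eastus' `
    -SubnetId $vnet.Subnets[0].Id -PublicIpAddressId $pip.Id
```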
## IP addresses
This table lists the methods that you can use to create a network security group
| [Azure portal](../virtual-network/tutorial-filter-network-traffic.md) | When you create a VM in the Azure portal, an NSG is automatically created and associated to the NIC the portal creates. The name of the NSG is a combination of the name of the VM and **-nsg**. This NSG contains one inbound rule with a priority of 1000, service set to RDP, the protocol set to TCP, port set to 3389, and action set to Allow. If you want to allow any other inbound traffic to the VM, you must add additional rules to the NSG. | | [Azure PowerShell](../virtual-network/tutorial-filter-network-traffic.md) | Use [New-AzNetworkSecurityRuleConfig](/powershell/module/az.network/new-aznetworksecurityruleconfig) and provide the required rule information. Use [New-AzNetworkSecurityGroup](/powershell/module/az.network/new-aznetworksecuritygroup) to create the NSG. Use [Set-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/set-azvirtualnetworksubnetconfig) to configure the NSG for the subnet. Use [Set-AzVirtualNetwork](/powershell/module/az.network/set-azvirtualnetwork) to add the NSG to the VNet. | | [Azure CLI](../virtual-network/tutorial-filter-network-traffic-cli.md) | Use [az network nsg create](/cli/azure/network/nsg) to initially create the NSG. Use [az network nsg rule create](/cli/azure/network/nsg/rule) to add rules to the NSG. Use [az network vnet subnet update](/cli/azure/network/vnet/subnet) to add the NSG to the subnet. |
-| [Template](../virtual-network/template-samples.md) | Use [Create a Network Security Group](https://github.com/Azure/azure-quickstart-templates/tree/master/101-security-group-create) as a guide for deploying a network security group using a template. |
+| [Template](../virtual-network/template-samples.md) | Use [Create a Network Security Group](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.network/security-group-create) as a guide for deploying a network security group using a template. |
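To make the PowerShell row above concrete, the following is a minimal sketch that creates an NSG with one inbound rule and attaches it to an existing subnet; the names, address prefix, and port are hypothetical.

```azurepowershell
# Minimal sketch (hypothetical names): create an NSG with one inbound SSH rule
# and associate it with an existing subnet.
$rule = New-AzNetworkSecurityRuleConfig -Name 'allow-ssh' -Protocol Tcp -Direction Inbound `
    -Priority 1000 -SourceAddressPrefix '*' -SourcePortRange '*' `
    -DestinationAddressPrefix '*' -DestinationPortRange 22 -Access Allow
$nsg  = New-AzNetworkSecurityGroup -ResourceGroupName 'MyResourceGroup' -Location 'eastus' `
    -Name 'MyNsg' -SecurityRules $rule
$vnet = Get-AzVirtualNetwork -ResourceGroupName 'MyResourceGroup' -Name 'MyVnet'
Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name 'default' `
    -AddressPrefix '10.0.0.0/24' -NetworkSecurityGroup $nsg
$vnet | Set-AzVirtualNetwork
```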
## Load balancers
This table lists the methods that you can use to create an internet-facing load
| Azure portal | You can [load balance internet traffic to VMs using the Azure portal](../load-balancer/quickstart-load-balancer-standard-public-portal.md). | | [Azure PowerShell](../load-balancer/quickstart-load-balancer-standard-public-powershell.md) | To provide the identifier of the public IP address that you previously created, use [New-AzLoadBalancerFrontendIpConfig](/powershell/module/az.network/new-azloadbalancerfrontendipconfig) with the **-PublicIpAddress** parameter. Use [New-AzLoadBalancerBackendAddressPoolConfig](/powershell/module/az.network/new-azloadbalancerbackendaddresspoolconfig) to create the configuration of the back-end address pool. Use [New-AzLoadBalancerInboundNatRuleConfig](/powershell/module/az.network/new-azloadbalancerinboundnatruleconfig) to create inbound NAT rules associated with the front-end IP configuration that you created. Use [New-AzLoadBalancerProbeConfig](/powershell/module/az.network/new-azloadbalancerprobeconfig) to create the probes that you need. Use [New-AzLoadBalancerRuleConfig](/powershell/module/az.network/new-azloadbalancerruleconfig) to create the load balancer configuration. Use [New-AzLoadBalancer](/powershell/module/az.network/new-azloadbalancer) to create the load balancer.| | [Azure CLI](../load-balancer/quickstart-load-balancer-standard-public-cli.md) | Use [az network lb create](/cli/azure/network/lb) to create the initial load balancer configuration. Use [az network lb frontend-ip create](/cli/azure/network/lb/frontend-ip) to add the public IP address that you previously created. Use [az network lb address-pool create](/cli/azure/network/lb/address-pool) to add the configuration of the back-end address pool. Use [az network lb inbound-nat-rule create](/cli/azure/network/lb/inbound-nat-rule) to add NAT rules. Use [az network lb rule create](/cli/azure/network/lb/rule) to add the load balancer rules. Use [az network lb probe create](/cli/azure/network/lb/probe) to add the probes. |
-| [Template](../load-balancer/quickstart-load-balancer-standard-public-template.md) | Use [3 VMs in a Load Balancer](https://github.com/Azure/azure-quickstart-templates/tree/master/101-load-balancer-standard-create) as a guide for deploying a load balancer using a template. |
+| [Template](../load-balancer/quickstart-load-balancer-standard-public-template.md) | Use [3 VMs in a Load Balancer](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.network/load-balancer-standard-create) as a guide for deploying a load balancer using a template. |
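The PowerShell chain in the table above can be hard to follow as prose; the following is a minimal sketch, with hypothetical names, that wires the pieces together into a single Standard load balancer.

```azurepowershell
# Minimal sketch (hypothetical names): assemble the front end, back-end pool, probe,
# and rule, then create the load balancer.
$pip   = Get-AzPublicIpAddress -ResourceGroupName 'MyResourceGroup' -Name 'MyPublicIp'
$fe    = New-AzLoadBalancerFrontendIpConfig -Name 'MyFrontEnd' -PublicIpAddress $pip
$pool  = New-AzLoadBalancerBackendAddressPoolConfig -Name 'MyBackEndPool'
$probe = New-AzLoadBalancerProbeConfig -Name 'MyHealthProbe' -Protocol Tcp -Port 80 `
    -IntervalInSeconds 15 -ProbeCount 2
$rule  = New-AzLoadBalancerRuleConfig -Name 'MyLbRule' -FrontendIpConfiguration $fe `
    -BackendAddressPool $pool -Probe $probe -Protocol Tcp -FrontendPort 80 -BackendPort 80
New-AzLoadBalancer -ResourceGroupName 'MyResourceGroup' -Name 'MyLoadBalancer' `
    -Location 'eastus' -Sku 'Standard' -FrontendIpConfiguration $fe `
    -BackendAddressPool $pool -Probe $probe -LoadBalancingRule $rule
```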
This table lists the methods that you can use to create an internal load balancer.
virtual-machines Tag Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/tag-template.md
Last updated 10/26/2018
This article describes how to tag a VM in Azure using a Resource Manager template. Tags are user-defined key/value pairs which can be placed directly on a resource or a resource group. Azure currently supports up to 50 tags per resource and resource group. Tags may be placed on a resource at the time of creation or added to an existing resource.
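Besides templates, tags can also be added to an existing resource from Azure PowerShell; the following is a minimal sketch with hypothetical resource names and tag values.

```azurepowershell
# Minimal sketch (hypothetical names and tag values): merge tags onto an existing VM.
$vm = Get-AzVM -ResourceGroupName 'MyResourceGroup' -Name 'MyVM'
Update-AzTag -ResourceId $vm.Id -Tag @{ Department = 'Finance'; Environment = 'Test' } -Operation Merge
```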
-[This template](https://github.com/Azure/azure-quickstart-templates/tree/master/101-vm-tags) places tags on the following resources: Compute (Virtual Machine), Storage (Storage Account), and Network (Public IP Address, Virtual Network, and Network Interface). This template is for a Windows VM but can be adapted for Linux VMs.
+[This template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.compute/vm-tags) places tags on the following resources: Compute (Virtual Machine), Storage (Storage Account), and Network (Public IP Address, Virtual Network, and Network Interface). This template is for a Windows VM but can be adapted for Linux VMs.
-Click the **Deploy to Azure** button from the [template link](https://github.com/Azure/azure-quickstart-templates/tree/master/101-vm-tags). This will navigate to the [Azure portal](https://portal.azure.com/) where you can deploy this template.
+Click the **Deploy to Azure** button from the [template link](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.compute/vm-tags). This will navigate to the [Azure portal](https://portal.azure.com/) where you can deploy this template.
![Simple deployment with Tags](./media/tag/deploy-to-azure-tags.png)
virtual-machines Using Managed Disks Template Deployments https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/using-managed-disks-template-deployments.md
# Using disks in Azure Resource Manager Templates
-This document walks through the differences between managed and unmanaged disks when using Azure Resource Manager templates to provision virtual machines. The examples help you to update existing templates that are using unmanaged Disks to managed disks. For reference, we are using the [vm-simple-windows](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.compute/vm-simple-windows) template as a guide. You can see the template using both [managed Disks](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.compute/vm-simple-windowsazuredeploy.json) and a prior version using [unmanaged disks](https://github.com/Azure/azure-quickstart-templates/tree/93b5f72a9857ea9ea43e87d2373bf1b4f724c6aa/101-vm-simple-windows/azuredeploy.json) if you'd like to directly compare them.
+This document walks through the differences between managed and unmanaged disks when using Azure Resource Manager templates to provision virtual machines. The examples help you to update existing templates that are using unmanaged Disks to managed disks. For reference, we are using the [vm-simple-windows](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.compute/vm-simple-windows) template as a guide. You can see the template using both [managed Disks](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.compute/vm-simple-windows/azuredeploy.json) and a prior version using [unmanaged disks](https://github.com/Azure/azure-quickstart-templates/tree/93b5f72a9857ea9ea43e87d2373bf1b4f724c6aa/101-vm-simple-windows/azuredeploy.json) if you'd like to directly compare them.
## Unmanaged Disks template formatting
virtual-machines Vm Support Help https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/vm-support-help.md
Here are suggestions for where you can get help when developing your Azure Virtu
## Self help troubleshooting Content <div class='icon is-large'>
- <img alt='Self help content' src='https://docs.microsoft.com/media//common/i_article.svg'>
+ <img alt='Self help content' src='./media/logos/i-article.svg'>
</div> Various articles explain how to determine, diagnose, and fix issues that you might encounter when using Azure Virtual Machines. Use these articles to troubleshoot deployment failures, unexpected restarts, connection issues and more.
For a full list of self help troubleshooting content, see [Azure Virtual Machine
## Post a question on Microsoft Q&A <div class='icon is-large'>
- <img alt='Microsoft Q&A' src='./media/microsoft-logo.png'>
+ <img alt='Microsoft Q&A' src='./media/logos/microsoft-logo.png'>
</div> For quick and reliable answers on your technical product questions from Microsoft Engineers, Azure Most Valuable Professionals (MVPs), or our expert community, engage with us on [Microsoft Q&A](/answers/products/azure), Azure's preferred destination for community support.
If you can't find an answer to your problem using search, submit a new question
## Create an Azure support request <div class='icon is-large'>
- <img alt='Azure support' src='https://docs.microsoft.com/media/logos/logo_azure.svg'>
+ <img alt='Azure support' src='./media/logos/logo-azure.svg'>
</div> Explore the range of [Azure support options and choose the plan](https://azure.microsoft.com/support/plans) that best fits, whether you're a developer just starting your cloud journey or a large organization deploying business-critical, strategic applications. Azure customers can create and manage support requests in the Azure portal.
Explore the range of [Azure support options and choose the plan](https://azure.m
## Create a GitHub issue <div class='icon is-large'>
- <img alt='GitHub-image' src='../active-directory/develop/media/common/github.svg'>
+ <img alt='GitHub-image' src='./media/logos/github-logo.png'>
</div> If you need help with the language and tools used to develop and manage Azure Virtual Machines, open an issue in its repository on GitHub.
If you need help with the language and tools used to develop and manage Azure Vi
## Submit feature requests on Azure Feedback <div class='icon is-large'>
- <img alt='UserVoice' src='https://docs.microsoft.com/media/logos/logo-uservoice.svg'>
+ <img alt='UserVoice' src='./media/logos/logo-uservoice.svg'>
</div> To request new features, post them on Azure Feedback. Share your ideas for improving Azure Virtual Machines.
To request new features, post them on Azure Feedback. Share your ideas for impro
## Stay informed of updates and new releases <div class='icon is-large'>
- <img alt='Stay informed' src='https://docs.microsoft.com/media/common/i_blog.svg'>
+ <img alt='Stay informed' src='./media/logos/i-blog.svg'>
</div> Learn about important product updates, roadmap, and announcements in [Azure Updates](https://azure.microsoft.com/updates/?category=compute).
virtual-machines Disk Encryption Sample Scripts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/windows/disk-encryption-sample-scripts.md
-# Azure Disk Encryption sample scripts
+# Azure Disk Encryption sample scripts
This article provides sample scripts for preparing pre-encrypted VHDs and other tasks. > [!NOTE] > All scripts refer to the latest, non-AAD version of ADE, except where noted.
-## Sample PowerShell scripts for Azure Disk Encryption
+## Sample PowerShell scripts for Azure Disk Encryption
- **List all encrypted VMs in your subscription**
This article provides sample scripts for preparing pre-encrypted VHDs and other
``` - **List all encrypted VMSS instances in your subscription**
-
+ You can find all ADE-encrypted VMSS instances and the extension version, in all resource groups present in a subscription, using [this PowerShell script](https://raw.githubusercontent.com/Azure/azure-powershell/master/src/Compute/Compute/Extension/AzureDiskEncryption/Scripts/Find_1passAdeVersion_VMSS.ps1).
-
+ - **List all disk encryption secrets used for encrypting VMs in a key vault** ```azurepowershell-interactive
Get-AzKeyVaultSecret -VaultName $KeyVaultName | where {$_.Tags.ContainsKey('Disk
### Using the Azure Disk Encryption prerequisites PowerShell script
-If you're already familiar with the prerequisites for Azure Disk Encryption, you can use the [Azure Disk Encryption prerequisites PowerShell script](https://raw.githubusercontent.com/Azure/azure-powershell/master/src/Compute/Compute/Extension/AzureDiskEncryption/Scripts/AzureDiskEncryptionPreRequisiteSetup.ps1 ). For an example of using this PowerShell script, see the [Encrypt a VM Quickstart](disk-encryption-powershell-quickstart.md). You can remove the comments from a section of the script, starting at line 211, to encrypt all disks for existing VMs in an existing resource group.
+If you're already familiar with the prerequisites for Azure Disk Encryption, you can use the [Azure Disk Encryption prerequisites PowerShell script](https://raw.githubusercontent.com/Azure/azure-powershell/master/src/Compute/Compute/Extension/AzureDiskEncryption/Scripts/AzureDiskEncryptionPreRequisiteSetup.ps1 ). For an example of using this PowerShell script, see the [Encrypt a VM Quickstart](disk-encryption-powershell-quickstart.md). You can remove the comments from a section of the script, starting at line 211, to encrypt all disks for existing VMs in an existing resource group.
-The following table shows which parameters can be used in the PowerShell script:
+The following table shows which parameters can be used in the PowerShell script:
|Parameter|Description|Mandatory?| ||||
The following table shows which parameters can be used in the PowerShell script:
### Encrypt or decrypt VMs without an Azure AD app -- [Enable disk encryption on an existing or running Windows VM](https://github.com/Azure/azure-quickstart-templates/tree/master/201-encrypt-running-windows-vm-without-aad) -- [Disable encryption on a running Windows VM](https://github.com/Azure/azure-quickstart-templates/tree/master/201-decrypt-running-windows-vm-without-aad)
+- [Enable disk encryption on an existing or running Windows VM](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.compute/encrypt-running-windows-vm-without-aad)
+- [Disable encryption on a running Windows VM](https://github.com/Azure/azure-quickstart-templates/tree/master/201-decrypt-running-windows-vm-without-aad)
-### Encrypt or decrypt VMs with an Azure AD app (previous release)
-
-- [Enable disk encryption on an existing or running Windows VM](https://github.com/Azure/azure-quickstart-templates/tree/master/201-encrypt-running-windows-vm) -- [Disable encryption on a running Windows VM](https://github.com/Azure/azure-quickstart-templates/tree/master/201-decrypt-running-windows-vm)
+### Encrypt or decrypt VMs with an Azure AD app (previous release)
+
+- [Enable disk encryption on an existing or running Windows VM](https://github.com/Azure/azure-quickstart-templates/tree/master/201-encrypt-running-windows-vm)
+- [Disable encryption on a running Windows VM](https://github.com/Azure/azure-quickstart-templates/tree/master/201-decrypt-running-windows-vm)
- [Create a new encrypted managed disk from a pre-encrypted VHD/storage blob](https://github.com/Azure/azure-quickstart-templates/tree/master/201-create-encrypted-managed-disk) - Creates a new encrypted managed disk provided a pre-encrypted VHD and its corresponding encryption settings
ServerManagerCmd -install BitLockers
To compress the OS partition and prepare the machine for BitLocker, execute the [bdehdcfg](/windows/security/information-protection/bitlocker/bitlocker-basic-deployment) if needed: ```console
-bdehdcfg -target c: shrink -quiet
+bdehdcfg -target c: shrink -quiet
``` ### Protect the OS volume by using BitLocker
After DM-Crypt encryption is enabled, the local encrypted VHD needs to be upload
## Upload the secret for the pre-encrypted VM to your key vault The disk encryption secret that you obtained previously must be uploaded as a secret in your key vault. This requires granting the set secret permission and the wrapkey permission to the account that will upload the secrets.
-```powershell
+```powershell
# Typically, account Id is the user principal name (in user@domain.com format) $upn = (Get-AzureRmContext).Account.Id Set-AzKeyVaultAccessPolicy -VaultName $kvname -UserPrincipalName $acctid -PermissionsToKeys wrapKey -PermissionsToSecrets set
-# In cloud shell, the account ID is a managed service identity, so specify the username directly
-# $upn = "user@domain.com"
+# In cloud shell, the account ID is a managed service identity, so specify the username directly
+# $upn = "user@domain.com"
# Set-AzKeyVaultAccessPolicy -VaultName $kvname -UserPrincipalName $acctid -PermissionsToKeys wrapKey -PermissionsToSecrets set
-# When running as a service principal, retrieve the service principal ID from the account ID, and set access policy to that
+# When running as a service principal, retrieve the service principal ID from the account ID, and set access policy to that
# $acctid = (Get-AzureRmContext).Account.Id # $spoid = (Get-AzureRmADServicePrincipal -ServicePrincipalName $acctid).Id # Set-AzKeyVaultAccessPolicy -VaultName $kvname -ObjectId $spoid -BypassObjectIdValidation -PermissionsToKeys wrapKey -PermissionsToSecrets set
virtual-machines Disk Encryption Windows https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/windows/disk-encryption-windows.md
You can only apply disk encryption to virtual machines of [supported VM sizes an
- [Encryption key storage requirements](disk-encryption-overview.md#encryption-key-storage-requirements) >[!IMPORTANT]
-> - If you have previously used Azure Disk Encryption with Azure AD to encrypt a VM, you must continue use this option to encrypt your VM. See [Azure Disk Encryption with Azure AD (previous release)](disk-encryption-overview-aad.md) for details.
+> - If you have previously used Azure Disk Encryption with Azure AD to encrypt a VM, you must continue to use this option to encrypt your VM. See [Azure Disk Encryption with Azure AD (previous release)](disk-encryption-overview-aad.md) for details.
>
-> - You should [take a snapshot](snapshot-copy-managed-disk.md) and/or create a backup before disks are encrypted. Backups ensure that a recovery option is possible if an unexpected failure occurs during encryption. VMs with managed disks require a backup before encryption occurs. Once a backup is made, you can use the [Set-AzVMDiskEncryptionExtension cmdlet](/powershell/module/az.compute/set-azvmdiskencryptionextension) to encrypt managed disks by specifying the -skipVmBackup parameter. For more information about how to back up and restore encrypted VMs, see [Back up and restore encrypted Azure VM](../../backup/backup-azure-vms-encryption.md).
+> - You should [take a snapshot](snapshot-copy-managed-disk.md) and/or create a backup before disks are encrypted. Backups ensure that a recovery option is possible if an unexpected failure occurs during encryption. VMs with managed disks require a backup before encryption occurs. Once a backup is made, you can use the [Set-AzVMDiskEncryptionExtension cmdlet](/powershell/module/az.compute/set-azvmdiskencryptionextension) to encrypt managed disks by specifying the -skipVmBackup parameter. For more information about how to back up and restore encrypted VMs, see [Back up and restore encrypted Azure VM](../../backup/backup-azure-vms-encryption.md).
> > - Encrypting or disabling encryption may cause a VM to reboot.
You can only apply disk encryption to virtual machines of [supported VM sizes an
## Enable encryption on an existing or running Windows VM In this scenario, you can enable encryption by using the Resource Manager template, PowerShell cmdlets, or CLI commands. If you need schema information for the virtual machine extension, see the [Azure Disk Encryption for Windows extension](../extensions/azure-disk-enc-windows.md) article.
-### Enable encryption on existing or running VMs with Azure PowerShell
-Use the [Set-AzVMDiskEncryptionExtension](/powershell/module/az.compute/set-azvmdiskencryptionextension) cmdlet to enable encryption on a running IaaS virtual machine in Azure.
+### Enable encryption on existing or running VMs with Azure PowerShell
+Use the [Set-AzVMDiskEncryptionExtension](/powershell/module/az.compute/set-azvmdiskencryptionextension) cmdlet to enable encryption on a running IaaS virtual machine in Azure.
- **Encrypt a running VM:** The script below initializes your variables and runs the Set-AzVMDiskEncryptionExtension cmdlet. The resource group, VM, and key vault should have already been created as prerequisites. Replace MyKeyVaultResourceGroup, MyVirtualMachineResourceGroup, MySecureVM, and MySecureVault with your values.
Use the [Set-AzVMDiskEncryptionExtension](/powershell/module/az.compute/set-azvm
Set-AzVMDiskEncryptionExtension -ResourceGroupName $VMRGname -VMName $vmName -DiskEncryptionKeyVaultUrl $diskEncryptionKeyVaultUrl -DiskEncryptionKeyVaultId $KeyVaultResourceId; ```-- **Encrypt a running VM using KEK:**
+- **Encrypt a running VM using KEK:**
```azurepowershell $KVRGname = 'MyKeyVaultResourceGroup';
Use the [Set-AzVMDiskEncryptionExtension](/powershell/module/az.compute/set-azvm
Set-AzVMDiskEncryptionExtension -ResourceGroupName $VMRGname -VMName $vmName -DiskEncryptionKeyVaultUrl $diskEncryptionKeyVaultUrl -DiskEncryptionKeyVaultId $KeyVaultResourceId -KeyEncryptionKeyUrl $keyEncryptionKeyUrl -KeyEncryptionKeyVaultId $KeyVaultResourceId; ```
-
+ >[!NOTE] > The syntax for the value of the disk-encryption-keyvault parameter is the full identifier string:
-/subscriptions/[subscription-id-guid]/resourceGroups/[resource-group-name]/providers/Microsoft.KeyVault/vaults/[keyvault-name]</br>
+/subscriptions/[subscription-id-guid]/resourceGroups/[resource-group-name]/providers/Microsoft.KeyVault/vaults/[keyvault-name]</br>
> The syntax for the value of the key-encryption-key parameter is the full URI to the KEK as in:
-https://[keyvault-name].vault.azure.net/keys/[kekname]/[kek-unique-id]
+https://[keyvault-name].vault.azure.net/keys/[kekname]/[kek-unique-id]
-- **Verify the disks are encrypted:** To check on the encryption status of an IaaS VM, use the [Get-AzVmDiskEncryptionStatus](/powershell/module/az.compute/get-azvmdiskencryptionstatus) cmdlet.
+- **Verify the disks are encrypted:** To check on the encryption status of an IaaS VM, use the [Get-AzVmDiskEncryptionStatus](/powershell/module/az.compute/get-azvmdiskencryptionstatus) cmdlet.
```azurepowershell-interactive Get-AzVmDiskEncryptionStatus -ResourceGroupName 'MyVirtualMachineResourceGroup' -VMName 'MySecureVM' ```
-
+ - **Disable disk encryption:** To disable encryption, use the [Disable-AzVMDiskEncryption](/powershell/module/az.compute/disable-azvmdiskencryption) cmdlet. Disabling data disk encryption on a Windows VM when both OS and data disks have been encrypted doesn't work as expected. Disable encryption on all disks instead, as shown in the sketch below. ```azurepowershell-interactive
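# A minimal sketch of the disable call, assuming the example resource group and
# VM names used earlier in this article.
Disable-AzVMDiskEncryption -ResourceGroupName 'MyVirtualMachineResourceGroup' -VMName 'MySecureVM' -VolumeType 'All'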
Use the [az vm encryption enable](/cli/azure/vm/encryption#az_vm_encryption_enab
``` >[!NOTE]
- > The syntax for the value of disk-encryption-keyvault parameter is the full identifier string:
- /subscriptions/[subscription-id-guid]/resourceGroups/[resource-group-name]/providers/Microsoft.KeyVault/vaults/[keyvault-name] </br>
+ > The syntax for the value of the disk-encryption-keyvault parameter is the full identifier string:
+ /subscriptions/[subscription-id-guid]/resourceGroups/[resource-group-name]/providers/Microsoft.KeyVault/vaults/[keyvault-name] </br>
> The syntax for the value of the key-encryption-key parameter is the full URI to the KEK as in:
- https://[keyvault-name].vault.azure.net/keys/[kekname]/[kek-unique-id]
+ https://[keyvault-name].vault.azure.net/keys/[kekname]/[kek-unique-id]
-- **Verify the disks are encrypted:** To check on the encryption status of an IaaS VM, use the [az vm encryption show](/cli/azure/vm/encryption#az_vm_encryption_show) command.
+- **Verify the disks are encrypted:** To check on the encryption status of an IaaS VM, use the [az vm encryption show](/cli/azure/vm/encryption#az_vm_encryption_show) command.
```azurecli-interactive az vm encryption show --name "MySecureVM" --resource-group "MyVirtualMachineResourceGroup"
Use the [az vm encryption enable](/cli/azure/vm/encryption#az_vm_encryption_enab
### Using the Resource Manager template
-You can enable disk encryption on existing or running IaaS Windows VMs in Azure by using the [Resource Manager template to encrypt a running Windows VM](https://github.com/Azure/azure-quickstart-templates/tree/master/201-encrypt-running-windows-vm-without-aad).
+You can enable disk encryption on existing or running IaaS Windows VMs in Azure by using the [Resource Manager template to encrypt a running Windows VM](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.compute/encrypt-running-windows-vm-without-aad).
1. On the Azure quickstart template, click **Deploy to Azure**.
The following table lists the Resource Manager template parameters for existing
| keyVaultName | Name of the key vault that the BitLocker key should be uploaded to. You can get it by using the cmdlet `(Get-AzKeyVault -ResourceGroupName <MyKeyVaultResourceGroupName>).VaultName` or the Azure CLI command `az keyvault list --resource-group "MyKeyVaultResourceGroup"`|
| keyVaultResourceGroup | Name of the resource group that contains the key vault|
| keyEncryptionKeyURL | The URL of the key encryption key, in the format https://&lt;keyvault-name&gt;.vault.azure.net/keys/&lt;key-name&gt;. If you do not wish to use a KEK, leave this field blank. |
-| volumeType | Type of volume that the encryption operation is performed on. Valid values are _OS_, _Data_, and _All_.
+| volumeType | Type of volume that the encryption operation is performed on. Valid values are _OS_, _Data_, and _All_.
| forceUpdateTag | Pass in a unique value like a GUID every time the operation needs to be force run. |
| resizeOSDisk | Should the OS partition be resized to occupy full OS VHD before splitting system volume. |
| location | Location for all resources. |
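For reference, these parameters could also be supplied from Azure PowerShell instead of the portal. The sketch below is illustrative only: the raw azuredeploy.json URI is assumed from the quickstart repository path above, the parameter values reuse the example names from this article, and any remaining mandatory template parameters (such as the VM name) will be prompted for.

```azurepowershell
# Sketch only: the template URI and parameter values are assumptions for illustration.
$templateUri = "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.compute/encrypt-running-windows-vm-without-aad/azuredeploy.json"

New-AzResourceGroupDeployment -ResourceGroupName "MyVirtualMachineResourceGroup" `
    -TemplateUri $templateUri `
    -keyVaultName "MySecureVault" `
    -keyVaultResourceGroup "MyKeyVaultResourceGroup" `
    -volumeType "All"
```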
In addition to the scenarios listed in the [Unsupported Scenarios](#unsupported-
## New IaaS VMs created from customer-encrypted VHD and encryption keys
-In this scenario, you can create a new VM from a pre-encrypted VHD and the associated encryption keys using PowerShell cmdlets or CLI commands.
+In this scenario, you can create a new VM from a pre-encrypted VHD and the associated encryption keys using PowerShell cmdlets or CLI commands.
Use the instructions in [Prepare a pre-encrypted Windows VHD](disk-encryption-sample-scripts.md#prepare-a-pre-encrypted-windows-vhd). After the image is created, you can use the steps in the next section to create an encrypted Azure VM. ### Encrypt VMs with pre-encrypted VHDs with Azure PowerShell
-You can enable disk encryption on your encrypted VHD by using the PowerShell cmdlet [Set-AzVMOSDisk](/powershell/module/az.compute/set-azvmosdisk#examples). The example below gives you some common parameters.
+You can enable disk encryption on your encrypted VHD by using the PowerShell cmdlet [Set-AzVMOSDisk](/powershell/module/az.compute/set-azvmosdisk#examples). The example below gives you some common parameters.
```azurepowershell $VirtualMachine = New-AzVMConfig -VMName "MySecureVM" -VMSize "Standard_A1"
New-AzVM -VM $VirtualMachine -ResourceGroupName "MyVirtualMachineResourceGroup"
``` ## Enable encryption on a newly added data disk
-You can [add a new disk to a Windows VM using PowerShell](attach-disk-ps.md), or [through the Azure portal](attach-managed-disk-portal.md).
+You can [add a new disk to a Windows VM using PowerShell](attach-disk-ps.md), or [through the Azure portal](attach-managed-disk-portal.md).
### Enable encryption on a newly added disk with Azure PowerShell
- When using PowerShell to encrypt a new disk for Windows VMs, a new sequence version should be specified. The sequence version has to be unique. The script below generates a GUID for the sequence version. In some cases, a newly added data disk might be encrypted automatically by the Azure Disk Encryption extension. Auto encryption usually occurs when the VM reboots after the new disk comes online. This is typically caused because "All" was specified for the volume type when disk encryption previously ran on the VM. If auto encryption occurs on a newly added data disk, we recommend running the Set-AzVmDiskEncryptionExtension cmdlet again with new sequence version. If your new data disk is auto encrypted and you do not wish to be encrypted, decrypt all drives first then re-encrypt with a new sequence version specifying OS for the volume type.
-
-
+ When using PowerShell to encrypt a new disk for Windows VMs, a new sequence version should be specified. The sequence version has to be unique. The script below generates a GUID for the sequence version. In some cases, a newly added data disk might be encrypted automatically by the Azure Disk Encryption extension. Auto encryption usually occurs when the VM reboots after the new disk comes online. This typically happens because "All" was specified for the volume type when disk encryption previously ran on the VM. If auto encryption occurs on a newly added data disk, we recommend running the Set-AzVmDiskEncryptionExtension cmdlet again with a new sequence version. If your new data disk is auto encrypted and you do not wish it to be encrypted, decrypt all drives first, then re-encrypt with a new sequence version, specifying OS for the volume type.
+
+
-- **Encrypt a running VM:** The script below initializes your variables and runs the Set-AzVMDiskEncryptionExtension cmdlet. The resource group, VM, and key vault should have already been created as prerequisites. Replace MyKeyVaultResourceGroup, MyVirtualMachineResourceGroup, MySecureVM, and MySecureVault with your values.
+- **Encrypt a running VM:** The script below initializes your variables and runs the Set-AzVMDiskEncryptionExtension cmdlet. The resource group, VM, and key vault should have already been created as prerequisites. Replace MyKeyVaultResourceGroup, MyVirtualMachineResourceGroup, MySecureVM, and MySecureVault with your values. This example uses "All" for the -VolumeType parameter, which includes both OS and Data volumes. If you only want to encrypt the OS volume, use "OS" for the -VolumeType parameter.
```azurepowershell $KVRGname = 'MyKeyVaultResourceGroup';
You can [add a new disk to a Windows VM using PowerShell](attach-disk-ps.md), or
``` >[!NOTE]
- > The syntax for the value of disk-encryption-keyvault parameter is the full identifier string:
-/subscriptions/[subscription-id-guid]/resourceGroups/[resource-group-name]/providers/Microsoft.KeyVault/vaults/[keyvault-name]</br>
+ > The syntax for the value of the disk-encryption-keyvault parameter is the full identifier string:
+/subscriptions/[subscription-id-guid]/resourceGroups/[resource-group-name]/providers/Microsoft.KeyVault/vaults/[keyvault-name]</br>
> The syntax for the value of the key-encryption-key parameter is the full URI to the KEK as in:
-https://[keyvault-name].vault.azure.net/keys/[kekname]/[kek-unique-id]
+https://[keyvault-name].vault.azure.net/keys/[kekname]/[kek-unique-id]
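For reference, a condensed sketch of the sequence-version step described above, assuming the key vault variables are already populated as in the earlier running-VM example:

```azurepowershell
# Sketch only: assumes $diskEncryptionKeyVaultUrl and $KeyVaultResourceId are set
# as in the earlier example; a fresh GUID provides the unique sequence version.
$sequenceVersion = [Guid]::NewGuid().ToString()

Set-AzVMDiskEncryptionExtension -ResourceGroupName 'MyVirtualMachineResourceGroup' -VMName 'MySecureVM' `
    -DiskEncryptionKeyVaultUrl $diskEncryptionKeyVaultUrl -DiskEncryptionKeyVaultId $KeyVaultResourceId `
    -VolumeType 'All' -SequenceVersion $sequenceVersion
```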
### Enable encryption on a newly added disk with Azure CLI
- The Azure CLI command will automatically provide a new sequence version for you when you run the command to enable encryption. The example uses "All" for the volume-type parameter. You may need to change the volume-type parameter to OS if you're only encrypting the OS disk. In contrast to PowerShell syntax, the CLI does not require the user to provide a unique sequence version when enabling encryption. The CLI automatically generates and uses its own unique sequence version value.
+ The Azure CLI automatically generates and uses its own unique sequence version value when you run the command to enable encryption, so unlike the PowerShell syntax, you don't need to supply one. The example uses "All" for the volume-type parameter; change it to OS if you're only encrypting the OS disk.
- **Encrypt a running VM:**
virtual-network Virtual Networks Name Resolution For Vms And Role Instances https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md
Your name resolution needs might go beyond the features provided by Azure. For e
DNS servers within a virtual network can forward DNS queries to the recursive resolvers in Azure. This enables you to resolve host names within that virtual network. For example, a domain controller (DC) running in Azure can respond to DNS queries for its domains, and forward all other queries to Azure. Forwarding queries allows VMs to see both your on-premises resources (via the DC) and Azure-provided host names (via the forwarder). Access to the recursive resolvers in Azure is provided via the virtual IP 168.63.129.16.
-DNS forwarding also enables DNS resolution between virtual networks, and allows your on-premises machines to resolve Azure-provided host names. In order to resolve a VM's host name, the DNS server VM must reside in the same virtual network, and be configured to forward host name queries to Azure. Because the DNS suffix is different in each virtual network, you can use conditional forwarding rules to send DNS queries to the correct virtual network for resolution. The following image shows two virtual networks and an on-premises network doing DNS resolution between virtual networks, by using this method. An example DNS forwarder is available in the [Azure Quickstart Templates gallery](https://azure.microsoft.com/documentation/templates/demos/dns-forwarder/) and [GitHub](https://github.com/Azure/azure-quickstart-templates/tree/master/demos/dns-forwarder).
+DNS forwarding also enables DNS resolution between virtual networks, and allows your on-premises machines to resolve Azure-provided host names. In order to resolve a VM's host name, the DNS server VM must reside in the same virtual network, and be configured to forward host name queries to Azure. Because the DNS suffix is different in each virtual network, you can use conditional forwarding rules to send DNS queries to the correct virtual network for resolution. The following image shows two virtual networks and an on-premises network doing DNS resolution between virtual networks, by using this method. An example DNS forwarder is available in the [Azure Quickstart Templates gallery](https://azure.microsoft.com/en-us/resources/templates/dns-forwarder) and [GitHub](https://github.com/Azure/azure-quickstart-templates/tree/master/demos/dns-forwarder).
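As a rough illustration of such a forwarder (not an authoritative configuration), the sketch below assumes a Windows Server VM with the DNS Server role installed; the conditional zone name and peer DNS server address are placeholders:

```azurepowershell
# Sketch only: run on the DNS forwarder VM. Zone name and peer address are placeholders.
Import-Module DnsServer

# Forward anything this server can't answer to the Azure-provided resolver.
Set-DnsServerForwarder -IPAddress 168.63.129.16

# Conditionally forward the other virtual network's DNS suffix to its DNS server VM.
Add-DnsServerConditionalForwarderZone -Name "contoso.internal.cloudapp.net" -MasterServers 10.1.0.4
```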
> [!NOTE] > A role instance can perform name resolution of VMs within the same virtual network. It does so by using the FQDN, which consists of the VM's host name and **internal.cloudapp.net** DNS suffix. However, in this case, name resolution is only successful if the role instance has the VM name defined in the [Role Schema (.cscfg file)](/previous-versions/azure/reference/jj156212(v=azure.100)).
Suppose you need to perform name resolution from your web app built by using App
If you need to perform name resolution from your web app built by using App Service, linked to a virtual network, to VMs in a different virtual network, you have to use custom DNS servers on both virtual networks, as follows:
-* Set up a DNS server in your target virtual network, on a VM that can also forward queries to the recursive resolver in Azure (virtual IP 168.63.129.16). An example DNS forwarder is available in the [Azure Quickstart Templates gallery](https://azure.microsoft.com/documentation/templates/demos/dns-forwarder) and [GitHub](https://github.com/Azure/azure-quickstart-templates/tree/master/demos/dns-forwarder).
+* Set up a DNS server in your target virtual network, on a VM that can also forward queries to the recursive resolver in Azure (virtual IP 168.63.129.16). An example DNS forwarder is available in the [Azure Quickstart Templates gallery](https://azure.microsoft.com/en-us/resources/templates/dns-forwarder/) and [GitHub](https://github.com/Azure/azure-quickstart-templates/tree/master/demos/dns-forwarder).
* Set up a DNS forwarder in the source virtual network on a VM. Configure this DNS forwarder to forward queries to the DNS server in your target virtual network. * Configure your source DNS server in your source virtual network's settings. * Enable virtual network integration for your web app to link to the source virtual network, following the instructions in [Integrate your app with a virtual network](../app-service/web-sites-integrate-with-vnet.md?toc=%2fazure%2fvirtual-network%2ftoc.json).
Classic deployment model:
* [Azure Service Configuration Schema](/previous-versions/azure/reference/ee758710(v=azure.100)) * [Virtual Network Configuration Schema](/previous-versions/azure/reference/jj157100(v=azure.100))
-* [Configure a Virtual Network by using a network configuration file](/previous-versions/azure/virtual-network/virtual-networks-using-network-configuration-file)
+* [Configure a Virtual Network by using a network configuration file](/previous-versions/azure/virtual-network/virtual-networks-using-network-configuration-file)
virtual-wan Quickstart Route Shared Services Vnet Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-wan/quickstart-route-shared-services-vnet-template.md
This quickstart describes how to use an Azure Resource Manager template (ARM tem
If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
-[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3a%2f%2fraw.githubusercontent.com%2fAzure%2fazure-quickstart-templates%2fmaster%2f301-virtual-wan-with-route-tables%2fazuredeploy.json)
+[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3a%2f%2fraw.githubusercontent.com%2fAzure%2fazure-quickstart-templates%2fmaster%2fquickstarts%2fmicrosoft.network%2fvirtual-wan-with-route-tables%2fazuredeploy.json)
## Prerequisites
If your environment meets the prerequisites and you're familiar with using ARM t
## <a name="review"></a>Review the template
-The template used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/301-virtual-wan-with-route-tables). The template for this article is too long to show here. To view the template, see [azuredeploy.json](https://github.com/Azure/azure-quickstart-templates/blob/master/301-virtual-wan-with-route-tables/azuredeploy.json).
+The template used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/virtual-wan-with-route-tables). The template for this article is too long to show here. To view the template, see [azuredeploy.json](https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.network/virtual-wan-with-route-tables/azuredeploy.json).
In this quickstart, you'll create an Azure Virtual WAN multi-hub deployment, including all gateways and VNet connections. The list of input parameters has been purposely kept at a minimum. The IP addressing scheme can be changed by modifying the variables inside of the template. The scenario is explained further in the [Scenario: Shared services VNet](scenario-shared-services-vnet.md) article.
To deploy this template properly, you must use the button to Deploy to Azure but
1. Click **Deploy to Azure**.
- [![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3a%2f%2fraw.githubusercontent.com%2fAzure%2fazure-quickstart-templates%2fmaster%2f301-virtual-wan-with-route-tables%2fazuredeploy.json)
+ [![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3a%2f%2fraw.githubusercontent.com%2fAzure%2fazure-quickstart-templates%2fmaster%2fquickstarts%2fmicrosoft.network%2fvirtual-wan-with-route-tables%2fazuredeploy.json)
1. To view the template, click **Edit template**. On this page, you can adjust some of the values such as address space or the name of certain resources. **Save** to save your changes, or **Discard**. 1. On the template page, enter the values. For this template, the P2S public certificate data is required. If you are using this article as an exercise, you can use the following data from this .cer file as sample data for both hubs. Once the template runs and deployment is complete, in order to use the P2S configuration, you must replace this information with the public key [certificate data](certificates-point-to-site.md#cer) for your own deployment.
web-application-firewall Waf Front Door Geo Filtering https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/web-application-firewall/afds/waf-front-door-geo-filtering.md
# What is geo-filtering on a domain for Azure Front Door Service?
-By default, Azure Front Door Service responds to user requests regardless of the location of the user making the request. However, in some cases, you may want to restrict access to your web applications by country/region. Web application firewall (WAF) service at Front Door enables you to define a policy using custom access rules for specific path on your endpoint to allow or block access from specified countries/regions.
+By default, Azure Front Door Service responds to user requests regardless of the location of the user making the request. However, in some cases, you may want to restrict access to your web applications by country/region. The web application firewall (WAF) service at Front Door enables you to define a policy using custom access rules for a specific path on your endpoint to allow or block access from specified countries/regions.
A WAF policy usually includes a set of custom rules. A rule consists of match conditions, an action, and a priority. In a match condition, you define a match variable, an operator, and a match value. For a geo-filtering rule, the match variable is REMOTE_ADDR, the operator is GeoMatch, and the value is the two-letter country/region code of interest. You may combine a GeoMatch condition and a REQUEST_URI string match condition to create a path-based geo-filtering rule.
-You can configure a geo-filtering policy for your Front Door by either using [Azure PowerShell](waf-front-door-tutorial-geo-filtering.md) or by using our [quickstart template](https://github.com/Azure/azure-quickstart-templates/tree/master/101-front-door-geo-filtering).
+You can configure a geo-filtering policy for your Front Door by either using [Azure PowerShell](waf-front-door-tutorial-geo-filtering.md) or by using our [quickstart template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.network/front-door-geo-filtering).
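To make the rule anatomy above concrete, here is a minimal sketch using the Az.FrontDoor PowerShell cmdlets; the resource group, policy name, and country code are placeholders, and the linked tutorial remains the authoritative walkthrough:

```azurepowershell
# Sketch only: blocks requests originating from the specified country/region.
$geoCondition = New-AzFrontDoorWafMatchConditionObject -MatchVariable RemoteAddr `
    -OperatorProperty GeoMatch -MatchValue "US"

$geoRule = New-AzFrontDoorWafCustomRuleObject -Name "BlockUS" -RuleType MatchRule `
    -MatchCondition $geoCondition -Action Block -Priority 1

New-AzFrontDoorWafPolicy -ResourceGroupName "MyResourceGroup" -Name "geoPolicy" `
    -Customrule $geoRule -Mode Prevention -EnabledState Enabled
```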
## Country/Region code reference
web-application-firewall Application Gateway Web Application Firewall Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/web-application-firewall/ag/application-gateway-web-application-firewall-portal.md
Previously updated : 03/25/2021 Last updated : 05/17/2021 #Customer intent: As an IT administrator, I want to use the Azure portal to set up an application gateway with Web Application Firewall so I can protect my applications.
Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.c
- **Subnet name** (backend server subnet): In the second row of the **Subnets** grid, enter *myBackendSubnet* in the **Subnet name** column.
- - **Address range** (backend server subnet): In the second row of the **Subnets** Grid, enter an address range that doesn't overlap with the address range of *myAGSubnet*. For example, if the address range of *myAGSubnet* is 10.0.0.0/24, enter *10.0.1.0/24* for the address range of *myBackendSubnet*.
+ - **Address range** (backend server subnet): In the second row of the **Subnets** Grid, enter an address range that doesn't overlap with the address range of *myAGSubnet*. For example, if the address range of *myAGSubnet* is 10.21.0.0/24, enter *10.21.1.0/24* for the address range of *myBackendSubnet*.
Select **OK** to close the **Create virtual network** window and save the virtual network settings.
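If you prefer to script the equivalent network setup instead of using the portal, a minimal Azure PowerShell sketch might look like the following; the virtual network name, resource group, location, and the 10.21.0.0/16 address space are assumptions consistent with the example ranges above:

```azurepowershell
# Sketch only: names, location, and the overall address space are illustrative.
$agSubnet = New-AzVirtualNetworkSubnetConfig -Name "myAGSubnet" -AddressPrefix "10.21.0.0/24"
$beSubnet = New-AzVirtualNetworkSubnetConfig -Name "myBackendSubnet" -AddressPrefix "10.21.1.0/24"

New-AzVirtualNetwork -Name "myVNet" -ResourceGroupName "myResourceGroupAG" -Location "eastus" `
    -AddressPrefix "10.21.0.0/16" -Subnet $agSubnet, $beSubnet
```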