Updates from: 05/20/2021 03:03:31
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Concept Condition Filters For Devices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/conditional-access/concept-condition-filters-for-devices.md
Previously updated : 05/18/2021 Last updated : 05/19/2021
Policy 1: All users with the directory role of Global administrator, accessing t
1. Confirm your settings and set **Enable policy** to **On**. 1. Select **Create** to create and enable your policy.
-Policy 2: All users with the directory role of Global administrator, accessing the Microsoft Azure Management cloud app, excluding filters for devices using rule expression device.extensionAttribute1 not equals SAW and for Access controls, Block.
+Policy 2: All users with the directory role of Global administrator, accessing the Microsoft Azure Management cloud app, excluding filters for devices using rule expression device.extensionAttribute1 equals SAW and for Access controls, Block.
1. Select **New policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
active-directory Developer Guide Conditional Access Authentication Context https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/developer-guide-conditional-access-authentication-context.md
+
+ Title: Developer guidance for Azure AD Conditional Access authentication context
+description: Developer guidance and scenarios for Azure AD Conditional Access authentication context
+++++ Last updated : 05/18/2021++++++++++
+# Developers' guide to Conditional Access authentication context
+
+[Conditional Access](../conditional-access/overview.md) is the Zero Trust control plane that allows you to target policies for access to all your apps – old or new, private or public, on-premises or multi-cloud. With [Conditional Access authentication context](../conditional-access/concept-conditional-access-cloud-apps.md#authentication-context-preview), you can apply different policies within those apps.
+
+Conditional Access authentication context (auth context) allows you to apply granular policies to sensitive data and actions instead of just at the app level. You can refine your Zero Trust policies for least privileged access while minimizing user friction, keeping users more productive and your resources more secure. Today, it can be used by applications developed by your company that use [OpenID Connect](https://openid.net/specs/openid-connect-core-1_0.html) for authentication, to protect sensitive resources like high-value transactions or viewing employee personal data.
+
+Use the Azure AD Conditional Access engine's new auth context feature to trigger a demand for step-up authentication from within your applications and services. Developers can now selectively demand stronger authentication, such as MFA, from their end users from within their applications. This feature helps developers build smoother user experiences for most of their application, while access to more secure operations and data stays behind stronger authentication controls.
+
+## Problem statement
+
+IT administrators and regulators often struggle to balance prompting their users for extra factors of authentication too frequently against achieving adequate security and policy adherence for applications and services that contain sensitive data or operations. Prompting users too often degrades the user experience, while prompting too rarely can degrade the security posture.
+
+So, what if apps could mix both approaches: functioning with relatively lower security and less frequent prompts for most users and operations, yet conditionally stepping up the security requirement when users access more sensitive parts?
+
+## Common scenarios
+
+For example, users can browse and work in a certain web application with standard authentication, but access to a portion of the app containing highly sensitive documents should be possible only for users who have performed additional methods of authentication, like MFA (2FA), and who also satisfy other policies, like accessing the document from within a known IP range.
+
+## Steps
+
+The following are the prerequisites and steps for using Conditional Access authentication context.
+
+### Prerequisites
+
+First, your app should be integrated with the Microsoft Identity Platform using the OpenID Connect/OAuth 2.0 protocols for authentication and authorization. We recommend you use the [Microsoft identity platform authentication libraries](reference-v2-libraries.md) to integrate and secure your application with Azure Active Directory. The [Microsoft identity platform documentation](index.yml) is a good place to start learning how to integrate your apps with the Microsoft Identity Platform. Conditional Access auth context feature support is built on top of protocol extensions provided by the industry-standard OpenID Connect. Developers use a Conditional Access auth context reference value with the claims request parameter to give apps a way to trigger and satisfy policy.
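For illustration only, a claims request that carries an auth context requirement has roughly the following shape (the auth context ID `c1` is a placeholder); the same payload appears base64-encoded in the claims challenge raised later in this article:

```json
{
    "access_token": {
        "acrs": {
            "essential": true,
            "value": "c1"
        }
    }
}
```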
+
+Second, [Conditional Access](../conditional-access/overview.md) requires Azure AD Premium P1 licensing. More information about licensing can be found on the [Azure AD pricing page](https://azure.microsoft.com/pricing/details/active-directory/).
+
+Third, today this feature is only available to applications that sign in users. Applications that authenticate as themselves are not supported. Use the [Authentication flows and application scenarios guide](authentication-flows-app-scenarios.md) to learn about the supported authentication app types and flows in the Microsoft Identity Platform.
+
+### Integration steps
+
+Once your application is integrated using the supported authentication protocols and registered in an Azure AD tenant that has the Conditional Access feature available for use, you can start integrating this feature in your applications that sign in users.
+
+First, declare and make the authentication contexts available in your tenant. For more information, see [Configure authentication contexts](../conditional-access/concept-conditional-access-cloud-apps.md#configure-authentication-contexts).
+
+Values C1-C25 are available for use as auth context IDs in a tenant. Examples of auth contexts may be:
+
+- C1 - Require strong authentication
+- C2 - Require compliant devices
+- C3 - Require trusted locations
+
+Create or modify your Conditional Access policies to use the Conditional Access auth contexts. Example policies could be:
+
+- All users signing-into this web application should have successfully completed 2FA for auth context ID C1.
+- All users signing into this web application should have successfully completed 2FA and also access the web app from a certain IP address range for auth context ID C3.
+
+> [!NOTE]
+> The Conditional Access auth context values are declared and maintained separately from applications. It is not advisable for applications to take a hard dependency on auth context IDs. Conditional Access policies are usually crafted by IT administrators, as they have a better understanding of the resources available to apply policies on. For example, for an Azure AD tenant, IT admins would know how many of the tenant's users are equipped to use 2FA for MFA and thus can ensure that Conditional Access policies that require 2FA are scoped to these equipped users.
+> Similarly, if the application is used in multiple tenants, the auth context IDs in use could be different and, in some cases, not available at all.
+
+Second: The developers of an application planning to use Conditional Access auth context are advised to first provide the application admins or IT admins a means to map potentially sensitive actions to auth context IDs. The steps are roughly:
+
+1. Identify actions in the code that can be made available to map against auth context IDs.
+1. Build a screen in the admin portal of the app (or an equivalent functionality) that IT admins can use to map sensitive actions against an available auth context ID.
+1. See the code sample, [Use the Conditional Access auth context to perform step-up authentication](https://github.com/Azure-Samples/ms-identity-ca-auth-context/blob/main/README.md) for an example on how it is done.
+
+These steps are the changes that you need to carry out in your code base. The steps broadly comprise:
+
+- Query MS Graph to list all the available auth contexts ([conditionalaccess resource type](/graph/api/resources/conditionalaccessroot?view=graph-rest-beta&preserve-view=true)). A sample request is sketched after this list.
+- Allow IT admins to select sensitive/ high-privileged operations and assign them against the available auth contexts.
+- Save this mapping information in your database, per tenant, unless your application is only ever going to be used in a single tenant.
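The first bullet above can be satisfied with a Microsoft Graph request along the lines of the following sketch (the beta endpoint is assumed here, matching the resource type linked above):

```HTTP
GET https://graph.microsoft.com/beta/identity/conditionalAccess/authenticationContextClassReferences
```

The response should list the available auth context IDs and their display names as configured by the tenant's administrators.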
++
+Third: Your application (for this example, we'll assume it's a web API) then needs to evaluate calls against the saved mapping and accordingly raise claims challenges for its client apps. To prepare for this action, take the following steps:
+
+1. Request the Authentication Context Class Reference (`acrs`) as an optional claim in its [access token](access-tokens.md) by requesting it in the [web API's app manifest](reference-app-manifest.md).
+
+ ```json
+ "optionalClaims":
+ {
+ "accessToken": [
+ {
+ "additionalProperties": [],
+ "essential": false,
+ "name": "acrs",
+ "source": null
+ }
+ ],
+ "idToken": [],
+ "saml2Token": []
+ }
+ ```
+
+1. In an operation that is sensitive and protected by auth context, evaluate the values in the `acrs` claim against the auth context ID mapping saved earlier and raise a claims challenge as provided in the code snippet below.
+1. The following diagram shows the interaction between the user, client app, and the web API.
+
+ :::image type="content" source="media/developer-guide-conditional-access-authentication-context/authentication-context-application-flow.png" alt-text="Diagram showing the interaction of user, web app, API, and Azure AD":::
+
+    The code snippet that follows is from the code sample, [Use the Conditional Access auth context to perform step-up authentication](https://github.com/Azure-Samples/ms-identity-ca-auth-context/blob/main/README.md). The first method, `EnsureUserHasElevatedScope()`, in the API checks whether the action being called:
+
+ - Requires step-up authentication. It does so by checking its database for a saved mapping for this method
+ - If this action indeed requires an elevated auth context, it checks the acrs claim for an existing, matching auth context ID.
+ - If a matching auth context ID is not found, it raises a [claims challenge](claims-challenge.md#claims-challenge-header-format).
+
+    ```csharp
+ public void EnsureUserHasElevatedScope(string method)
+ {
+ string authType = _commonDBContext.AuthContext.FirstOrDefault(x => x.Operation == method
+ && x.TenantId == _configuration["AzureAD:TenantId"])?.AuthContextId;
+
+ if (!string.IsNullOrEmpty(authType))
+ {
+ HttpContext context = this.HttpContext;
+ string authenticationContextClassReferencesClaim = "acrs";
+
+ if (context == null || context.User == null || context.User.Claims == null
+ || !context.User.Claims.Any())
+ {
+ throw new ArgumentNullException("No Usercontext is available to pick claims from");
+ }
+
+ Claim acrsClaim = context.User.FindAll(authenticationContextClassReferencesClaim).FirstOrDefault(x
+ => x.Value == authType);
+
+ if (acrsClaim == null || acrsClaim.Value != authType)
+ {
+ if (IsClientCapableofClaimsChallenge(context))
+ {
+ string clientId = _configuration.GetSection("AzureAd").GetSection("ClientId").Value;
+ var base64str = Convert.ToBase64String(Encoding.UTF8.GetBytes("{\"access_token\":{\"acrs\":{\"essential\":true,\"value\":\"" + authType + "\"}}}"));
+
+ context.Response.Headers.Append("WWW-Authenticate", $"Bearer realm=\"\", authorization_uri=\"https://login.microsoftonline.com/common/oauth2/authorize\", client_id=\"" + clientId + "\", error=\"insufficient_claims\", claims=\"" + base64str + "\", cc_type=\"authcontext\"");
+ context.Response.StatusCode = (int)HttpStatusCode.Unauthorized;
+ string message = string.Format(CultureInfo.InvariantCulture, "The presented access tokens had insufficient claims. Please request for claims requested in the WWW-Authentication header and try again.");
+ context.Response.WriteAsync(message);
+ context.Response.CompleteAsync();
+ throw new UnauthorizedAccessException(message);
+ }
+ else
+ {
+                    throw new UnauthorizedAccessException("The caller does not meet the authentication bar to carry out this operation. The service cannot allow this operation");
+ }
+ }
+ }
+ }
+ ```
+
+ > [!NOTE]
+ > The format of the claims challenge is described in the article, [Claims Challenge in the Microsoft Identity Platform](claims-challenge.md).
+
+1. In the client application, intercept the claims challenge and redirect the user back to Azure AD for further policy evaluation. The code snippet that follows is from the code sample, [Use the Conditional Access auth context to perform step-up authentication](https://github.com/Azure-Samples/ms-identity-ca-auth-context/blob/main/README.md).
+
+    ```csharp
+ internal static string ExtractHeaderValues(WebApiMsalUiRequiredException response)
+ {
+ if (response.StatusCode == System.Net.HttpStatusCode.Unauthorized && response.Headers.WwwAuthenticate.Any())
+ {
+ AuthenticationHeaderValue bearer = response.Headers.WwwAuthenticate.First(v => v.Scheme == "Bearer");
+ IEnumerable<string> parameters = bearer.Parameter.Split(',').Select(v => v.Trim()).ToList();
+ var errorValue = GetParameterValue(parameters, "error");
+
+ try
+ {
+                // Read the header and check whether it contains an error with the insufficient_claims value.
+ if (null != errorValue && "insufficient_claims" == errorValue)
+ {
+ var claimChallengeParameter = GetParameterValue(parameters, "claims");
+ if (null != claimChallengeParameter)
+ {
+ var claimChallenge = ConvertBase64String(claimChallengeParameter);
+
+ return claimChallenge;
+ }
+ }
+ }
+ catch (Exception ex)
+ {
+ throw ex;
+ }
+ }
+ return null;
+ }
+ ```
+
+    Handle the exception in the call to the web API; if a claims challenge is presented, redirect the user back to Azure AD for further processing.
+
+    ```csharp
+ try
+ {
+ // Call the API
+ await _todoListService.AddAsync(todo);
+ }
+ catch (WebApiMsalUiRequiredException hex)
+ {
+ // Challenges the user if exception is thrown from Web API.
+ try
+ {
+ var claimChallenge =ExtractHeaderValues(hex);
+ _consentHandler.ChallengeUser(new string[] { "user.read" }, claimChallenge);
+
+ return new EmptyResult();
+ }
+ catch (Exception ex)
+ {
+ _consentHandler.HandleException(ex);
+ }
+
+ Console.WriteLine(hex.Message);
+ }
+ return RedirectToAction("Index");
+ ```
+
+1. (Optional) Declare client capability. Client capabilities help resource providers (RPs), like our web API above, detect whether the calling client application understands the claims challenge so they can customize their response accordingly. This capability could be useful where not all of the API's clients are capable of handling claims challenges and some older ones still expect a different response. For more information, see the section [Client capabilities](claims-challenge.md#client-capabilities). A sketch of declaring the capability follows.
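    As an illustrative sketch only (MSAL.NET is assumed as the client library, and the client and tenant IDs below are placeholders), a client could declare the `cp1` capability when building its application object:

    ```csharp
    // Sketch: declare the "cp1" client capability so the web API knows this client
    // can handle claims challenges. The IDs below are placeholders.
    using Microsoft.Identity.Client;

    IPublicClientApplication app = PublicClientApplicationBuilder
        .Create("<client-id>")
        .WithAuthority(AzureCloudInstance.AzurePublic, "<tenant-id>")
        .WithClientCapabilities(new[] { "cp1" })
        .Build();
    ```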
+
+## Caveats and recommendations
+
+Do not hardcode auth context values in your app. Apps should read and apply auth context using MS Graph calls [Link to auth context APIs]. This practice is critical for [multi-tenant applications](howto-convert-app-to-be-multi-tenant.md). The auth context values will vary between Azure AD tenants and are not available in the Azure AD free edition. For more information on how an app should query, set, and use auth context in its code, see the code sample, [Use the Conditional Access auth context to perform step-up authentication](https://github.com/Azure-Samples/ms-identity-ca-auth-context/blob/main/README.md).
+
+Do not use auth context where the app itself is going to be a target of Conditional Access policies. The feature works best when parts of the application require the user to meet a higher bar of authentication.
+
+## Next steps
+
+- [Zero trust with the Microsoft Identity platform](/security/zero-trust/identity-developer)
+- [Use the Conditional Access auth context to perform step-up authentication for high-privilege operations in a Web API](https://github.com/Azure-Samples/ms-identity-ca-auth-context/blob/main/README.md)
+- [Conditional Access authentication context](../conditional-access/concept-conditional-access-cloud-apps.md#authentication-context-preview)
+- [Claims challenge, claims request, and client capabilities in the Microsoft Identity Platform](claims-challenge.md)
+- [Using authentication context with Microsoft Information Protection and SharePoint](/microsoft-365/compliance/sensitivity-labels-teams-groups-sites#more-information-about-the-dependencies-for-the-authentication-context-option)
+- [Authentication flows and application scenarios](authentication-flows-app-scenarios.md)
+- [How to use Continuous Access Evaluation enabled APIs in your applications](app-resilience-continuous-access-evaluation.md)
aks Managed Aad https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/managed-aad.md
If you want to access the cluster, follow the steps [here][access-cluster].
There are some non-interactive scenarios, such as continuous integration pipelines, that aren't currently available with kubectl. You can use [`kubelogin`](https://github.com/Azure/kubelogin) to access the cluster with non-interactive service principal sign-in.
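For example, a pipeline could convert its kubeconfig to service principal sign-in as sketched below (the service principal values are placeholders, and `kubelogin` must be installed separately):

```bash
# Sketch: non-interactive service principal sign-in with kubelogin (placeholder values).
az aks get-credentials --resource-group <resource-group> --name <cluster-name>

# Convert the kubeconfig to use service principal (spn) login instead of interactive sign-in.
kubelogin convert-kubeconfig -l spn

# kubelogin reads the service principal credentials from these environment variables.
export AAD_SERVICE_PRINCIPAL_CLIENT_ID=<appId>
export AAD_SERVICE_PRINCIPAL_CLIENT_SECRET=<password>

kubectl get nodes
```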
+## Disable local accounts (preview)
+
+When deploying an AKS Cluster, local accounts are enabled by default. Even when enabling RBAC or Azure Active Directory integration, `--admin` access still exists, essentially as a non-auditable backdoor option. With this in mind, AKS offers users the ability to disable local accounts via a flag, `disable-local`. A field, `properties.disableLocalAccounts`, has also been added to the managed cluster API to indicate whether the feature has been enabled on the cluster.
+
+> [!NOTE]
+> On clusters with Azure AD integration enabled, users belonging to a group specified by `aad-admin-group-object-ids` will still be able to gain access via non-admin credentials. On clusters without Azure AD integration enabled and `properties.disableLocalAccounts` set to true, obtaining both user and admin credentials will fail.
+
+### Register the `DisableLocalAccountsPreview` preview feature
++
+To use an AKS cluster without local accounts, you must enable the `DisableLocalAccountsPreview` feature flag on your subscription. Ensure you are using the latest version of the Azure CLI and the `aks-preview` extension.
+
+Register the `DisableLocalAccountsPreview` feature flag using the [az feature register][az-feature-register] command as shown in the following example:
+
+```azurecli-interactive
+az feature register --namespace "Microsoft.ContainerService" --name "DisableLocalAccountsPreview"
+```
+
+It takes a few minutes for the status to show *Registered*. You can check on the registration status using the [az feature list][az-feature-list] command:
+
+```azurecli-interactive
+az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/DisableLocalAccountsPreview')].{Name:name,State:properties.state}"
+```
+
+When ready, refresh the registration of the *Microsoft.ContainerService* resource provider using the [az provider register][az-provider-register] command:
+
+```azurecli-interactive
+az provider register --namespace Microsoft.ContainerService
+```
+
+### Create a new cluster without local accounts
+
+To create a new AKS cluster without any local accounts, use the [az aks create][az-aks-create] command with the `disable-local` flag:
+
+```azurecli-interactive
+az aks create -g <resource-group> -n <cluster-name> --enable-aad --aad-admin-group-object-ids <aad-group-id> --disable-local
+```
+
+In the output, confirm local accounts have been disabled by checking the field `properties.disableLocalAccounts` is set to true:
+
+```output
+"properties": {
+ ...
+ "disableLocalAccounts": true,
+ ...
+}
+```
+
+Attempting to get admin credentials will fail with an error message indicating the feature is preventing access:
+
+```azurecli-interactive
+az aks get-credentials --resource-group <resource-group> --name <cluster-name> --admin
+
+Operation failed with status: 'Bad Request'. Details: Getting static credential is not allowed because this cluster is set to disable local accounts.
+```
+
+### Disable local accounts on an existing cluster
+
+To disable local accounts on an existing AKS cluster, use the [az aks update][az-aks-update] command with the `disable-local` flag:
+
+```azurecli-interactive
+az aks update -g <resource-group> -n <cluster-name> --enable-aad --aad-admin-group-object-ids <aad-group-id> --disable-local
+```
+
+In the output, confirm local accounts have been disabled by checking the field `properties.disableLocalAccounts` is set to true:
+
+```output
+"properties": {
+ ...
+ "disableLocalAccounts": true,
+ ...
+}
+```
+
+Attempting to get admin credentials will fail with an error message indicating the feature is preventing access:
+
+```azurecli-interactive
+az aks get-credentials --resource-group <resource-group> --name <cluster-name> --admin
+
+Operation failed with status: 'Bad Request'. Details: Getting static credential is not allowed because this cluster is set to disable local accounts.
+```
+
+### Re-enable local accounts on an existing cluster
+
+AKS also offers the ability to re-enable local accounts on an existing cluster with the `enable-local` flag:
+
+```azurecli-interactive
+az aks update -g <resource-group> -n <cluster-name> --enable-aad --aad-admin-group-object-ids <aad-group-id> --enable-local
+```
+
+In the output, confirm local accounts have been re-enabled by checking the field `properties.disableLocalAccounts` is set to false:
+
+```output
+"properties": {
+ ...
+ "disableLocalAccounts": false,
+ ...
+}
+```
+
+Attempting to get admin credentials will succeed:
+
+```azurecli-interactive
+az aks get-credentials --resource-group <resource-group> --name <cluster-name> --admin
+
+Merged "<cluster-name>-admin" as current context in C:\Users\<username>\.kube\config
+```
+ ## Use Conditional Access with Azure AD and AKS When integrating Azure AD with your AKS cluster, you can also use [Conditional Access][aad-conditional-access] to control access to your cluster.
Make sure the admin of the security group has given your account an *Active* ass
[access-cluster]: #access-an-azure-ad-enabled-cluster [aad-migrate]: #upgrading-to-aks-managed-azure-ad-integration [aad-assignments]: ../active-directory/privileged-identity-management/groups-assign-member-owner.md#assign-an-owner-or-member-of-a-group
+[az-feature-register]: /cli/azure/feature#az_feature_register
+[az-feature-list]: /cli/azure/feature#az_feature_list
+[az-provider-register]: /cli/azure/provider#az_provider_register
+[az-aks-update]: /cli/azure/aks#az_aks_update
api-management Api Management Role Based Access Control https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/api-management-role-based-access-control.md
na Previously updated : 06/20/2018 Last updated : 05/18/2021
The following table provides brief descriptions of the built-in roles. You can a
| API Management Service Contributor | ✓ | ✓ | ✓ | ✓ | Super user. Has full CRUD access to API Management services and entities (for example, APIs and policies). Has access to the legacy publisher portal. |
| API Management Service Reader | ✓ | | | | Has read-only access to API Management services and entities. |
| API Management Service Operator | ✓ | | ✓ | | Can manage API Management services, but not entities. |
-| API Management Service Editor<sup>*</sup> | ✓ | ✓ | | | Can manage API Management entities, but not services.|
-| API Management Content Manager<sup>*</sup> | ✓ | | | ✓ | Can manage the developer portal. Read-only access to services and entities.|
<sup>[1] Read access to API Management services and entities (for example, APIs and policies).</sup> <sup>[2] Write access to API Management services and entities except the following operations: instance creation, deletion, and scaling; VPN configuration; and custom domain setup.</sup>
-<sup>\* The Service Editor role will be available after we migrate all the admin UI from the existing publisher portal to the Azure portal. The Content Manager role will be available after the publisher portal is refactored to only contain functionality related to managing the developer portal.</sup>
- ## Custom roles If none of the built-in roles meet your specific needs, custom roles can be created to provide more granular access management for API Management entities. For example, you can create a custom role that has read-only access to an API Management service, but only has write access to one specific API. To learn more about custom roles, see [Custom roles in Azure RBAC](../role-based-access-control/custom-roles.md).
azure-functions Durable Functions Sub Orchestrations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/durable/durable-functions-sub-orchestrations.md
In addition to calling activity functions, orchestrator functions can call other orchestrator functions. For example, you can build a larger orchestration out of a library of smaller orchestrator functions. Or you can run multiple instances of an orchestrator function in parallel.
-An orchestrator function can call another orchestrator function using the `CallSubOrchestratorAsync` or the `CallSubOrchestratorWithRetryAsync` methods in .NET, or the `callSubOrchestrator` or `callSubOrchestratorWithRetry` methods in JavaScript. The [Error Handling & Compensation](durable-functions-error-handling.md#automatic-retry-on-failure) article has more information on automatic retry.
+An orchestrator function can call another orchestrator function using the `CallSubOrchestratorAsync` or the `CallSubOrchestratorWithRetryAsync` methods in .NET, the `callSubOrchestrator` or `callSubOrchestratorWithRetry` methods in JavaScript, and the `call_sub_orchestrator` or `call_sub_orchestrator_with_retry` methods in Python. The [Error Handling & Compensation](durable-functions-error-handling.md#automatic-retry-on-failure) article has more information on automatic retry.
Sub-orchestrator functions behave just like activity functions from the caller's perspective. They can return a value, throw an exception, and can be awaited by the parent orchestrator function. > [!NOTE]
-> Sub-orchestrations are currently supported in .NET and JavaScript.
+> Sub-orchestrations are currently supported in .NET, JavaScript, and Python.
## Example
module.exports = df.orchestrator(function*(context) {
}); ```
+# [Python](#tab/python)
+
+```python
+import azure.functions as func
+import azure.durable_functions as df
+
+def orchestrator_function(context: df.DurableOrchestrationContext):
+ device_id = context.get_input()
+
+ # Step 1: Create an installation package in blob storage and return a SAS URL.
+    sas_url = yield context.call_activity("CreateInstallationPackage", device_id)
+
+ # Step 2: Notify the device that the installation package is ready.
+ yield context.call_activity("SendPackageUrlToDevice", { "id": device_id, "url": sas_url })
+
+ # Step 3: Wait for the device to acknowledge that it has downloaded the new package.
+ yield context.call_activity("DownloadCompletedAck")
+
+ # Step 4: ...
+```
+
-This orchestrator function can be used as-is for one-off device provisioning or it can be part of a larger orchestration. In the latter case, the parent orchestrator function can schedule instances of `DeviceProvisioningOrchestration` using the `CallSubOrchestratorAsync` (.NET) or `callSubOrchestrator` (JavaScript) API.
+This orchestrator function can be used as-is for one-off device provisioning or it can be part of a larger orchestration. In the latter case, the parent orchestrator function can schedule instances of `DeviceProvisioningOrchestration` using the `CallSubOrchestratorAsync` (.NET), `callSubOrchestrator` (JavaScript), or `call_sub_orchestrator` (Python) API.
Here is an example that shows how to run multiple orchestrator functions in parallel.
module.exports = df.orchestrator(function*(context) {
}); ``` +
+# [Python](#tab/python)
+
+```python
+import azure.functions as func
+import azure.durable_functions as df
+
+def orchestrator_function(context: df.DurableOrchestrationContext):
+
+ device_IDs = yield context.call_activity("GetNewDeviceIds")
+
+ # Run multiple device provisioning flows in parallel
+ provisioning_tasks = []
+ id_ = 0
+ for device_id in device_IDs:
+        child_id = context.instance_id + ":" + str(id_)
+ provision_task = context.call_sub_orchestrator("DeviceProvisioningOrchestration", device_id, child_id)
+ provisioning_tasks.append(provision_task)
+ id_ += 1
+
+ yield context.task_all(provisioning_tasks)
+
+ # ...
+```
> [!NOTE]
azure-functions Functions Create Your First Function Visual Studio https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-create-your-first-function-visual-studio.md
Title: "Quickstart: Create your first function in Azure using Visual Studio"
-description: In this quickstart, you learn how to create and publish an HTTP trigger Azure Function by using Visual Studio.
+ Title: "Quickstart: Create your first C# function in Azure using Visual Studio"
+description: In this quickstart, you learn how to use Visual Studio to create and publish a C# HTTP triggered function to Azure Functions that runs on .NET Core 3.1.
ms.assetid: 82db1177-2295-4e39-bd42-763f6082e796 Previously updated : 09/30/2020- Last updated : 05/18/2021+ adobe-target: true adobe-target-activity: DocsExp–386541–A/B–Enhanced-Readability-Quickstarts–2.19.2021 adobe-target-experience: Experience B adobe-target-content: ./functions-create-your-first-function-visual-studio-uiex
-# Quickstart: Create your first function in Azure using Visual Studio
+# Quickstart: Create your first C# function in Azure using Visual Studio
-In this article, you use Visual Studio to create a C# class library-based function that responds to HTTP requests. After testing the code locally, you deploy it to the serverless environment of Azure Functions.
+In this article, you use Visual Studio to create a C# class library (.NET Core 3.1) function that responds to HTTP requests. After testing the code locally, you deploy it to the serverless environment of Azure Functions. This project runs in-process on .NET Core 3.1. If you instead want to run out-of-process on .NET 5.0, see [Develop and publish .NET 5 functions using Azure Functions](dotnet-isolated-process-developer-howtos.md).
Completing this quickstart incurs a small cost of a few USD cents or less in your Azure account. ## Prerequisites
-To complete this tutorial, first install [Visual Studio 2019](https://azure.microsoft.com/downloads/). Ensure you select the **Azure development** workload during installation. If you want to create an Azure Functions project by using Visual Studio 2017 instead, you must first install the [latest Azure Functions tools](functions-develop-vs.md#check-your-tools-version).
++ [Visual Studio 2019](https://azure.microsoft.com/downloads/). Make sure to select the **Azure development** workload during installation.
-![Install Visual Studio with the Azure development workload](media/functions-create-your-first-function-visual-studio/functions-vs-workloads.png)
-
-If you don't have an [Azure subscription](../guides/developer/azure-developer-guide.md#understanding-accounts-subscriptions-and-billing), create a [free account](https://azure.microsoft.com/free/dotnet/) before you begin.
++ [Azure subscription](../guides/developer/azure-developer-guide.md#understanding-accounts-subscriptions-and-billing). If you don't already have an account [create a free one](https://azure.microsoft.com/free/dotnet/) before you begin. ## Create a function app project
The `FunctionName` method attribute sets the name of the function, which by defa
1. In **File Explorer**, right-click the Function1.cs file and rename it to `HttpExample.cs`.
-1. In the code, rename the Function1 class to `HttpExample'.
+1. In the code, rename the Function1 class to `HttpExample`.
1. In the `HttpTrigger` method named `Run`, rename the `FunctionName` method attribute to `HttpExample`.
+Your function definition should now look like the following code:
+
+
Now that you've renamed the function, you can test it on your local computer. ## Run the function locally
azure-functions Functions Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-monitoring.md
The full list of Application Insights features available to your function app is
## Application Insights integration
-Typically, you create an Application Insights instance when you create your function app. In this case, the instrumentation key required for the integration is already set as an application setting named *APPINSIGHTS_INSTRUMENTATIONKEY*. If for some reason your function app doesn't have the instrumentation key set, you need to [enable Application Insights integration](configure-monitoring.md#enable-application-insights-integration).
+Typically, you create an Application Insights instance when you create your function app. In this case, the instrumentation key required for the integration is already set as an application setting named `APPINSIGHTS_INSTRUMENTATIONKEY`. If for some reason your function app doesn't have the instrumentation key set, you need to [enable Application Insights integration](configure-monitoring.md#enable-application-insights-integration).
+
+> [!IMPORTANT]
+> Sovereign clouds, such as Azure Government, require the use of the Application Insights connection string (`APPLICATIONINSIGHTS_CONNECTION_STRING`) instead of the instrumentation key. To learn more, see the [APPLICATIONINSIGHTS_CONNECTION_STRING reference](functions-app-settings.md#applicationinsights_connection_string).
## Collecting telemetry data
azure-maps Creator Long Running Operation V2 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/creator-long-running-operation-v2.md
+
+ Title: Azure Maps long-running operation API V2
+description: Learn about long-running asynchronous V2 background processing in Azure Maps
++ Last updated : 05/18/2021+++++
+
+
+# Creator Long-Running Operation API V2
+
+Some APIs in Azure Maps use an [Asynchronous Request-Reply pattern](/azure/architecture/patterns/async-request-reply). This pattern allows Azure Maps to provide highly available and responsive services. This article explains Azure Maps' specific implementation of long-running asynchronous background processing.
+
+## Submit a request
+
+A client application starts a long-running operation through a synchronous call to an HTTP API. Typically, this call is in the form of an HTTP POST request. When an asynchronous workload is successfully created, the API will return an HTTP `202` status code, indicating that the request has been accepted. This response contains a `Location` header pointing to an endpoint that the client can poll to check the status of the long-running operation.
+
+### Example of a success response
+
+```HTTP
+Status: 202 Accepted
+Operation-Location: https://atlas.microsoft.com/service/operations/{operationId}
+
+```
+
+If the call doesn't pass validation, the API will instead return an HTTP `400` response for a Bad Request. The response body will provide the client more information on why the request was invalid.
+
+### Monitor the operation status
+
+The location endpoint provided in the accepted response headers can be polled to check the status of the long-running operation. The response body from the operation status request will always contain the `status` and the `created` properties. The `status` property shows the current state of the long-running operation. Possible states include `"NotStarted"`, `"Running"`, `"Succeeded"`, and `"Failed"`. The `created` property shows the time the initial request was made to start the long-running operation. When the state is either `"NotStarted"` or `"Running"`, a `Retry-After` header will also be provided with the response. The `Retry-After` header, measured in seconds, can be used to determine when the next polling call to the operation status API should be made.
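As an illustrative sketch of a client honoring this contract (this helper is hypothetical and not part of an Azure Maps SDK), a caller could poll the operation endpoint and respect the `Retry-After` header like this:

```csharp
using System;
using System.Net.Http;
using System.Text.Json;
using System.Threading.Tasks;

public static class OperationPoller
{
    // Polls the operation status endpoint until it reports "Succeeded" or "Failed",
    // waiting the number of seconds indicated by the Retry-After header between calls.
    public static async Task<JsonDocument> WaitForCompletionAsync(HttpClient client, Uri operationLocation)
    {
        while (true)
        {
            HttpResponseMessage response = await client.GetAsync(operationLocation);
            response.EnsureSuccessStatusCode();

            JsonDocument body = JsonDocument.Parse(await response.Content.ReadAsStringAsync());
            string status = body.RootElement.GetProperty("status").GetString();

            if (status == "Succeeded" || status == "Failed")
            {
                // The caller then inspects the Resource-Location header (on success) or the error property.
                return body;
            }

            // Honor the Retry-After header; fall back to 30 seconds if it is missing.
            TimeSpan delay = response.Headers.RetryAfter?.Delta ?? TimeSpan.FromSeconds(30);
            await Task.Delay(delay);
        }
    }
}
```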
+
+### Example of a running status response
+
+```HTTP
+Status: 200 OK
+Retry-After: 30
+{
+ "operationId": "c587574e-add9-4ef7-9788-1635bed9a87e",
+ "created": "3/11/2020 8:45:13 PM +00:00",
+ "status": "Running"
+}
+```
+
+## Handle operation completion
+
+Upon completing the long-running operation, the status of the response will either be `"Succeeded"` or `"Failed"`. All responses will return an HTTP 200 OK code. When a new resource has been created from a long-running operation, the response will also contain a `Resource-Location` header that points to metadata about the resource. Upon a failure, the response will have an `error` property in the body. The error data adheres to the OData error specification.
+
+### Example of a success response
+
+```HTTP
+Status: 200 OK
+Resource-Location: "https://atlas.microsoft.com/tileset/{tileset-id}"
+ {
+ "operationId": "c587574e-add9-4ef7-9788-1635bed9a87e",
+ "created": "2021-05-06T07:55:19.5256829+00:00",
+ "status": "Succeeded"
+}
+```
+
+### Example of a failure response
+
+```HTTP
+Status: 200 OK
+
+{
+ "operationId": "c587574e-add9-4ef7-9788-1635bed9a87e",
+ "created": "3/11/2020 8:45:13 PM +00:00",
+ "status": "Failed",
+ "error": {
+ "code": "InvalidFeature",
+ "message": "The provided feature is invalid.",
+ "details": {
+ "code": "NoGeometry",
+ "message": "No geometry was provided with the feature."
+ }
+ }
+}
+```
azure-monitor Sdk Connection String https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/sdk-connection-string.md
The key value pairs provide an easy way for users to define a prefix suffix comb
> [!IMPORTANT] > We don't recommend setting both Connection String and Instrumentation key. In the event that a user does set both, whichever was set last will take precedence.
+> [!TIP]
+> We recommend the use of connection strings over instrumentation keys.
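As an illustrative sketch (the key and endpoint values below are placeholders), a connection string is typically supplied through the `APPLICATIONINSIGHTS_CONNECTION_STRING` environment variable or application setting rather than the legacy instrumentation key setting:

```
APPLICATIONINSIGHTS_CONNECTION_STRING=InstrumentationKey=00000000-0000-0000-0000-000000000000;IngestionEndpoint=https://<region>.in.applicationinsights.azure.com/
```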
## Scenario overview
azure-netapp-files Solutions Benefits Azure Netapp Files Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/solutions-benefits-azure-netapp-files-sql-server.md
na ms.devlang: na Previously updated : 03/19/2021 Last updated : 05/19/2021 # Benefits of using Azure NetApp Files for SQL Server deployment
-Azure NetApp Files reduces SQL Server total cost of ownership (TCO) as compared to block storage solutions. With block storage, virtual machines have imposed limits on I/O and bandwidth for disk operations, network bandwidth limits alone are applied against Azure NetApp Files only at that. In other words, no VM level I/O limits are applied to Azure NetApp Files. Without these I/O limits, SQL Server running on smaller virtual machines connected to Azure NetApp Files can perform as well as SQL Server running on much larger virtual machines. Sizing instances down as such reduces the compute cost to 25% of the former price tag. *You can reduce compute costs with Azure NetApp Files.*
+Azure NetApp Files reduces SQL Server total cost of ownership (TCO) as compared to block storage solutions. With block storage, virtual machines have imposed limits on I/O and bandwidth for disk operations. Only network bandwidth limits are applied against Azure NetApp Files, and on egress only at that. In other words, no VM level I/O limits are applied to Azure NetApp Files. Without these I/O limits, SQL Server running on smaller virtual machines connected to Azure NetApp Files can perform as well as SQL Server running on much larger virtual machines. Sizing instances down as such reduces the compute cost to 25% of the former price tag. *You can reduce compute costs with Azure NetApp Files.*
Compute costs, however, are small compared to SQL Server license costs. Microsoft SQL Server [licensing](https://download.microsoft.com/download/B/C/0/BC0B2EA7-D99D-42FB-9439-2C56880CAFF4/SQL_Server_2017_Licensing_Datasheet.pdf) is tied to physical core count. As such, decreasing instance size introduces an even larger cost saving for software licensing. *You can reduce software license costs with Azure NetApp Files.*
-The cost of the storage itself is variable depending on the actual size of the database. Regardless of the storage selected, capacity has cost, whether it is a managed disk or file share. As database sizes increase and the storage increases in cost, the storage contributes to the TCO increases, affecting the overall cost. As such, the assertion is adjusted to as follows: *You can reduce SQL Server deployment costs with Azure NetApp Files.*
- This article shows a detailed cost analysis and performance benefits about using Azure NetApp Files for SQL Server deployment. Not only do smaller instances have sufficient CPU to do the database work only possible with block on larger instances, *in many cases, the smaller instances are even more performant than their larger, disk-based counterparts because of Azure NetApp Files.* ## Detailed cost analysis The two sets of graphics in this section show the TCO example. The number and type of managed disks, the Azure NetApp Files service level, and the capacity for each scenario have been selected to achieve the best price-capacity-performance. Each graphic is made up of grouped machines (D16 with Azure NetApp Files, compared to D64 with managed disk by example), and prices are broken down for each machine type.
-The first set of graphic shows the overall cost of the solution using a 1-TiB database size, comparing the D16s_v3 to the D64, the D8 to the D32, and the D4 to the D16. The projected IOPs for each configuration are indicated by a green or yellow line and corresponds to the right-hand side Y axis.
+The first set of graphic shows the overall cost of the solution using a 1-TiB database size, comparing the D16s_v4 to the D64, the D8 to the D32, and the D4 to the D16. The projected IOPs for each configuration are indicated by a green or yellow line and corresponds to the right-hand side Y axis.
[ ![Graphic that shows overall cost of the solution using a 1-TiB database size.](../media/azure-netapp-files/solution-sql-server-cost-1-tib.png) ](../media/azure-netapp-files/solution-sql-server-cost-1-tib.png#lightbox)
The second set of graphic shows the overall cost using a 50-TiB database. The co
## Performance, and lots of it
-To deliver on the significant cost reduction assertion requires lots of performance - the largest instances in the general Azure inventory support 80,000 disk IOPS by example. A single Azure NetApp Files volume can achieve 80,000 database IOPS, and instances such as the D16 are able to consume the same. The D16, normally capable of 25,600 disk IOPS, is 25% the size of the D64. The D64s_v3 is capable of 80,000 disk IOPS, and as such, presents an excellent upper level comparison point.
+To deliver on the significant cost reduction assertion requires lots of performance - the largest instances in the general Azure inventory support 80,000 disk IOPS by example. A single Azure NetApp Files volume can achieve 80,000 database IOPS, and instances such as the D16 are able to consume the same. The D16, normally capable of 25,600 disk IOPS, is 25% the size of the D64. The D64s_v4 is capable of 80,000 disk IOPS, and as such, presents an excellent upper level comparison point.
-The D16s_v3 can drive an Azure NetApp Files volume to 80,000 database IOPS. As proven by the SQL Storage Benchmark (SSB) benchmarking tool, the D16 instance achieved a workload 125% greater than that achievable to disk from the D64 instance. See the [SSB testing tool](#ssb-testing-tool) section for details about the tool.
+The D16s_v4 can drive an Azure NetApp Files volume to 80,000 database IOPS. As proven by the SQL Storage Benchmark (SSB) benchmarking tool, the D16 instance achieved a workload 125% greater than that achievable to disk from the D64 instance. See the [SSB testing tool](#ssb-testing-tool) section for details about the tool.
-Using a 1-TiB working set size and an 80% read, 20% update SQL Server workload, performance capabilities of most the instances in the D instance class were measured; most, not all, as the D2 and D64 instances themselves were excluded from testing. The former was left out as it doesn't support accelerated networking, and the latter because it's the comparison point. See the following graph to understand the limits of D4s_v3, D8s_v3, D16s_v3, and D32s_v3, respectively. Managed disk storage tests are not shown in the graph. Comparison values are drawn directly from the [Azure Virtual Machine limits table](../virtual-machines/dv3-dsv3-series.md) for the D class instance type.
+Using a 1-TiB working set size and an 80% read, 20% update SQL Server workload, performance capabilities of most the instances in the D instance class were measured; most, not all, as the D2 and D64 instances themselves were excluded from testing. The former was left out as it doesn't support accelerated networking, and the latter because it's the comparison point. See the following graph to understand the limits of D4s_v4, D8s_v4, D16s_v4, and D32s_v4, respectively. Managed disk storage tests are not shown in the graph. Comparison values are drawn directly from the [Azure Virtual Machine limits table](../virtual-machines/dv3-dsv3-series.md) for the D class instance type.
With Azure NetApp Files, each of the instances in the D class can meet or exceed the disk performance capabilities of instances two times larger. *You can reduce software license costs significantly with Azure NetApp Files.*
azure-sql Features Comparison https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/features-comparison.md
If you need more details about the differences, you can find them in the separat
- [Azure SQL Database vs. SQL Server differences](transact-sql-tsql-differences-sql-server.md) - [Azure SQL Managed Instance vs. SQL Server differences](../managed-instance/transact-sql-tsql-differences-sql-server.md) + ## Features of SQL Database and SQL Managed Instance The following table lists the major features of SQL Server and provides information about whether the feature is partially or fully supported in Azure SQL Database and Azure SQL Managed Instance, with a link to more information about the feature.
The Azure platform provides a number of PaaS capabilities that are added as an a
| [VNet](../../virtual-network/virtual-networks-overview.md) | Partial, it enables restricted access using [VNet Endpoints](vnet-service-endpoint-rule-overview.md) | Yes, SQL Managed Instance is injected in customer's VNet. See [subnet](../managed-instance/transact-sql-tsql-differences-sql-server.md#subnet) and [VNet](../managed-instance/transact-sql-tsql-differences-sql-server.md#vnet) | | VNet Service endpoint | [Yes](vnet-service-endpoint-rule-overview.md) | No | | VNet Global peering | Yes, using [Private IP and service endpoints](vnet-service-endpoint-rule-overview.md) | Yes, using [Virtual network peering](https://techcommunity.microsoft.com/t5/azure-sql/new-feature-global-vnet-peering-support-for-azure-sql-managed/ba-p/1746913). |
+| [Private connectivity](../../private-link/private-link-overview.md) | Yes, using [Private Link](/database/private-endpoint-overview.md) | Yes, using VNet. |
## Tools
Azure SQL Database and Azure SQL Managed Instance support various data tools tha
| [SQL Server Management Studio (SSMS)](/sql/ssms/download-sql-server-management-studio-ssms) | Yes | Yes [version 18.0 and higher](/sql/ssms/download-sql-server-management-studio-ssms) | | [SQL Server PowerShell](/sql/relational-databases/scripting/sql-server-powershell) | Yes | Yes | | [SQL Server Profiler](/sql/tools/sql-server-profiler/sql-server-profiler) | No - see [Extended events](xevent-db-diff-from-svr.md) | Yes |
-| [System Center Operations Manager (SCOM)](/system-center/scom/welcome) | [Yes](https://www.microsoft.com/download/details.aspx?id=38829) | [Yes](https://www.microsoft.com/en-us/download/details.aspx?id=101203) |
+| [System Center Operations Manager](/system-center/scom/welcome) | [Yes](https://www.microsoft.com/download/details.aspx?id=38829) | [Yes](https://www.microsoft.com/en-us/download/details.aspx?id=101203) |
## Migration methods
azure-sql Gateway Migration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/gateway-migration.md
New SQL Gateways are being added to the following regions:
- Australia Central 2: 20.36.112.6 - Brazil South: 191.234.144.16 ,191.234.152.3 - Canada East: 40.69.105.9 ,40.69.105.10-- India Central: 104.211.86.30 , 104.211.86.31
+- Central India: 104.211.86.30 , 104.211.86.31
- East Asia: 13.75.32.14 - France Central: 40.79.137.8, 40.79.145.12 - France South: 40.79.177.10 ,40.79.177.12 - Korea Central: 52.231.17.22 ,52.231.17.23-- India West: 104.211.144.4
+- West India: 104.211.144.4
These SQL Gateways shall start accepting customer traffic on 31 January 2021.
azure-sql Recovery Using Backups https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/recovery-using-backups.md
For a PowerShell script that shows how to perform geo-restore for a managed inst
### Geo-restore considerations
-You can't perform a point-in-time restore on a geo-secondary database. You can do so only on a primary database. For detailed information about using geo-restore to recover from an outage, see [Recover from an outage](../../key-vault/general/disaster-recovery-guidance.md).
+You can't perform a point-in-time restore on a geo-secondary database. You can do so only on a primary database. For detailed information about using geo-restore to recover from an outage, see [Recover from an outage](disaster-recovery-guidance.md#recover-using-geo-restore).
> [!IMPORTANT] > Geo-restore is the most basic disaster-recovery solution available in SQL Database and SQL Managed Instance. It relies on automatically created geo-replicated backups with a recovery point objective (RPO) up to 1 hour and an estimated recovery time of up to 12 hours. It doesn't guarantee that the target region will have the capacity to restore your databases after a regional outage, because a sharp increase of demand is likely. If your application uses relatively small databases and is not critical to the business, geo-restore is an appropriate disaster-recovery solution.
azure-sql Sql Database Vulnerability Assessment Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/sql-database-vulnerability-assessment-storage.md
Last updated 12/01/2020
If you are limiting access to your storage account in Azure for certain VNets or services, you'll need to enable the appropriate configuration so that Vulnerability Assessment (VA) scanning for SQL Databases or Managed Instances have access to that storage account.
+## Prerequisites
+
+The SQL Vulnerability Assessment service needs permission to the storage account to save baseline and scan results. There are three methods:
+- **Use Storage Account key**: Azure creates the SAS key and saves it (though we don't save the account key)
+- **Use Storage SAS key**: The SAS key must have: Write | List | Read | Delete permissions
+- **Use SQL Server managed identity**: The SQL Server must have a managed identity. The storage account must have a role assignment for the SQL Managed Identity as StorageBlobContributor. When you apply the settings, the VA fields storageContainerSasKey and storageAccountAccessKey must be empty. When storage is behind a firewall or virtual network, then the SQL managed identity is required.
+
+When you use the Azure portal to save SQL VA settings, Azure checks if you have permission to assign a new role assignment for the managed identity as StorageBlobContributor on the storage. If permissions are assigned, Azure uses SQL Server managed identity, otherwise Azure uses the key method.
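As a sketch (the IDs and names below are placeholders, and the built-in *Storage Blob Data Contributor* role is assumed to be the role referred to above), the role assignment for the SQL Server managed identity could be created with the Azure CLI:

```azurecli-interactive
az role assignment create \
    --assignee <sql-server-managed-identity-object-id> \
    --role "Storage Blob Data Contributor" \
    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>"
```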
+ ## Enable Azure SQL Database VA scanning access to the storage account If you have configured your VA storage account to only be accessible by certain networks or services, you'll need to ensure that VA scans for your Azure SQL Database are able to store the scans on the storage account. You can use the existing storage account, or create a new storage account to store VA scan results for all databases on your [logical SQL server](logical-servers.md).
azure-sql Synchronize Vnet Dns Servers Setting On Virtual Cluster https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/synchronize-vnet-dns-servers-setting-on-virtual-cluster.md
If this change is implemented after [virtual cluster](connectivity-architecture-
User synchronizing DNS server configuration will need to have one of the following Azure roles: -- Subscription Owner role, or-- Managed Instance Contributor role, or
+- Subscription contributor role, or
- Custom role with the following permission: - `Microsoft.Sql/virtualClusters/updateManagedInstanceDnsServers/action`
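For illustration only, a custom role carrying just that permission might be defined as follows (the name, description, and scope are placeholders) and created with `az role definition create`:

```json
{
    "Name": "SQL Managed Instance DNS synchronizer",
    "IsCustom": true,
    "Description": "Can synchronize virtual network DNS server settings on SQL Managed Instance virtual clusters.",
    "Actions": [
        "Microsoft.Sql/virtualClusters/updateManagedInstanceDnsServers/action"
    ],
    "NotActions": [],
    "AssignableScopes": [
        "/subscriptions/<subscription-id>"
    ]
}
```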
azure-sql Frequently Asked Questions Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/frequently-asked-questions-faq.md
This article provides answers to some of the most common questions about running
No. The SQL Server license type is not an optional property when you're registering with the SQL IaaS Agent extension. You have to set the SQL Server license type as pay-as-you-go or Azure Hybrid Benefit when registering with the SQL IaaS Agent extension in all manageability modes (NoAgent, lightweight, and full). If you have any of the free versions of SQL Server installed, such as Developer or Evaluation edition, you must register with pay-as-you-go licensing. Azure Hybrid Benefit is only available for paid versions of SQL Server such as Enterprise and Standard editions.
+1. **What is the default license type when using the automatic registration feature?**
+
+ The license type automatically defaults to that of the VM image. If you use a pay-as-you-go image for your VM, then your license type will be `PAYG`, otherwise your license type will be `AHUB` by default.
+ 1. **Can I upgrade the SQL Server IaaS extension from NoAgent mode to full mode?** No. Upgrading the manageability mode to full or lightweight is not available for NoAgent mode. This is a technical limitation of Windows Server 2008. You will need to upgrade the OS first to Windows Server 2008 R2 or greater, and then you will be able to upgrade to full management mode.
azure-sql Performance Guidelines Best Practices Checklist https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/performance-guidelines-best-practices-checklist.md
The following is a quick checklist of storage configuration best practices for r
- Always stop the SQL Server service before changing the cache settings of your disk. - For development and test workloads consider using standard storage. It is not recommended to use Standard HDD/SSD for production workloads. - [Credit-based Disk Bursting](../../../virtual-machines/disk-bursting.md#credit-based-bursting) (P1-P20) should only be considered for smaller dev/test workloads and departmental systems.-- Provision the storage account in the same region as the SQL Server VM. -- Disable Azure geo-redundant storage (geo-replication) and use LRS (local redundant storage) on the storage account. - Format your data disk to use 64 KB allocation unit size for all data files placed on a drive other than the temporary `D:\` drive (which has a default of 4 KB). SQL Server VMs deployed through Azure Marketplace come with data disks formatted with allocation unit size and interleave for the storage pool set to 64 KB. To learn more, see the comprehensive [Storage best practices](performance-guidelines-best-practices-storage.md).
azure-sql Performance Guidelines Best Practices Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/performance-guidelines-best-practices-storage.md
Review the following checklist for a brief overview of the storage best practice
- Always stop the SQL Server service before changing the cache settings of your disk. - For development and test workloads, and long-term backup archival consider using standard storage. It is not recommended to use Standard HDD/SSD for production workloads. - [Credit-based Disk Bursting](../../../virtual-machines/disk-bursting.md#credit-based-bursting) (P1-P20) should only be considered for smaller dev/test workloads and departmental systems.-- Provision the storage account in the same region as the SQL Server VM. -- Disable Azure geo-redundant storage (geo-replication) and use LRS (local redundant storage) on the storage account. - Format your data disk to use 64 KB block size (allocation unit size) for all data files placed on a drive other than the temporary `D:\` drive (which has a default of 4 KB). SQL Server VMs deployed through Azure Marketplace come with data disks formatted with a block size and interleave for the storage pool set to 64 KB. To compare the storage checklist with the others, see the comprehensive [Performance best practices checklist](performance-guidelines-best-practices-checklist.md).
azure-sql Sql Agent Extension Automatic Registration All Vms https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/sql-agent-extension-automatic-registration-all-vms.md
Registering your SQL Server VM with the [SQL IaaS Agent extension](sql-server-ia
When automatic registration is enabled, a job runs daily to detect whether SQL Server is installed on any of the unregistered VMs in the subscription. This is done by copying the SQL IaaS Agent extension binaries to the VM, and then running a one-time utility that checks for the SQL Server registry hive. If the SQL Server hive is detected, the virtual machine is registered with the extension in lightweight mode. If no SQL Server hive exists in the registry, the binaries are removed. Automatic registration can take up to 4 days to detect newly created SQL Server VMs.
-Once automatic registration is enabled for a subscription, all current and future VMs that have SQL Server installed will be registered with the SQL IaaS Agent extension **in lightweight mode without downtime, and without restarting the SQL Server service**. You still need to [manually upgrade to full manageability mode](sql-agent-extension-manually-register-single-vm.md#upgrade-to-full) to take advantage of the full feature set.
+Once automatic registration is enabled for a subscription, all current and future VMs that have SQL Server installed will be registered with the SQL IaaS Agent extension **in lightweight mode without downtime, and without restarting the SQL Server service**. You still need to [manually upgrade to full manageability mode](sql-agent-extension-manually-register-single-vm.md#upgrade-to-full) to take advantage of the full feature set. The license type automatically defaults to that of the VM image. If you use a pay-as-you-go image for your VM, your license type will be `PAYG`; otherwise, it defaults to `AHUB`.
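As a rough sketch, assuming the Az.SqlVirtualMachine module and placeholder resource names, you can inspect how automatic registration configured a VM and change the license type afterward if the default is not what you want:

```powershell
# Sketch only: resource group and VM names are placeholders.
# Inspect the registration created by automatic registration (mode, license type, and so on).
Get-AzSqlVM -ResourceGroupName 'my-resource-group' -Name 'my-sql-vm' | Format-List

# Switch the license type, for example to Azure Hybrid Benefit.
Update-AzSqlVM -ResourceGroupName 'my-resource-group' -Name 'my-sql-vm' -LicenseType 'AHUB'
```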
> [!IMPORTANT] > The SQL IaaS Agent extension collects data for the express purpose of giving customers optional benefits when using SQL Server within Azure Virtual Machines. Microsoft will not use this data for licensing audits without the customer's advance consent. See the [SQL Server privacy supplement](/sql/sql-server/sql-server-privacy#non-personal-data) for more information.
azure-sql Storage Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/storage-configuration.md
For more throughput, you can add additional data disks and use disk striping. To
For example, the following PowerShell creates a new storage pool with the interleave size set to 64 KB and the number of columns equal to the number of physical disks in the storage pool:
+# [Windows Server 2016 +](#tab/windows2016)
+ ```powershell
+ $PhysicalDisks = Get-PhysicalDisk | Where-Object {$_.FriendlyName -like "*2" -or $_.FriendlyName -like "*3"}
-
- New-StoragePool -FriendlyName "DataFiles" -StorageSubsystemFriendlyName "Storage Spaces*" `
+
+ New-StoragePool -FriendlyName "DataFiles" -StorageSubsystemFriendlyName "Windows Storage on <VM Name>" `
  -PhysicalDisks $PhysicalDisks | New-VirtualDisk -FriendlyName "DataFiles" `
  -Interleave 65536 -NumberOfColumns $PhysicalDisks.Count -ResiliencySettingName simple `
  -UseMaximumSize |Initialize-Disk -PartitionStyle GPT -PassThru |New-Partition -AssignDriveLetter `
  -UseMaximumSize |Format-Volume -FileSystem NTFS -NewFileSystemLabel "DataDisks" `
  -AllocationUnitSize 65536 -Confirm:$false
  ```
+In Windows Server 2016 and later, the default value for `-StorageSubsystemFriendlyName` is `Windows Storage on <VM Name>`.
+++
+# [Windows Server 2008 - 2012 R2](#tab/windows2012)
+++
+ ```powershell
+ $PhysicalDisks = Get-PhysicalDisk | Where-Object {$_.FriendlyName -like "*2" -or $_.FriendlyName -like "*3"}
+
+ New-StoragePool -FriendlyName "DataFiles" -StorageSubsystemFriendlyName "Storage Spaces on <VMName>" `
+ -PhysicalDisks $PhysicalDisks | New-VirtualDisk -FriendlyName "DataFiles" `
+ -Interleave 65536 -NumberOfColumns $PhysicalDisks.Count -ResiliencySettingName simple `
+ -UseMaximumSize |Initialize-Disk -PartitionStyle GPT -PassThru |New-Partition -AssignDriveLetter `
+ -UseMaximumSize |Format-Volume -FileSystem NTFS -NewFileSystemLabel "DataDisks" `
+ -AllocationUnitSize 65536 -Confirm:$false
+ ```
+
+In Windows Server 2008 to 2012 R2, the default value for `-StorageSubsystemFriendlyName` is `Storage Spaces on <VMName>`.
++++ * For Windows 2008 R2 or earlier, you can use dynamic disks (OS striped volumes) and the stripe size is always 64 KB. This option is deprecated as of Windows 8/Windows Server 2012. For information, see the support statement at [Virtual Disk Service is transitioning to Windows Storage Management API](/windows/win32/w8cookbook/vds-is-transitioning-to-wmiv2-based-windows-storage-management-api). * If you are using [Storage Spaces Direct (S2D)](/windows-server/storage/storage-spaces/storage-spaces-direct-in-vm) with [SQL Server Failover Cluster Instances](./failover-cluster-instance-storage-spaces-direct-manually-configure.md), you must configure a single pool. Although different volumes can be created on that single pool, they will all share the same characteristics, such as the same caching policy.
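To sanity-check the storage pool created by the example above, the following sketch assumes Windows Server 2016 or later and the `DataFiles`/`DataDisks` names used in the sample; adjust them if you chose different names.

```powershell
# Sketch only: names match the storage pool example above.
# Confirm the virtual disk uses a 64 KB interleave across all physical disks.
Get-VirtualDisk -FriendlyName "DataFiles" | Select-Object FriendlyName, Interleave, NumberOfColumns

# Confirm the volume was formatted with a 64 KB allocation unit size.
Get-Volume -FileSystemLabel "DataDisks" | Select-Object FileSystemLabel, AllocationUnitSize
```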
For example, the following PowerShell creates a new storage pool with the interl
## Next steps
-For other topics related to running SQL Server in Azure VMs, see [SQL Server on Azure Virtual Machines](sql-server-on-azure-vm-iaas-what-is-overview.md).
+For other topics related to running SQL Server in Azure VMs, see [SQL Server on Azure Virtual Machines](sql-server-on-azure-vm-iaas-what-is-overview.md).
+
cognitive-services Custom Neural Voice https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/custom-neural-voice.md
Previously updated : 02/01/2020 Last updated : 05/18/2021 # What is Custom Neural Voice?
-Custom Neural Voice is a
-[text-to-Speech](./text-to-speech.md)
-(TTS) feature that allows you to create a one-of-a-kind customized synthetic voice for your applications by providing your own audio data as a sample. Text-to-Speech works by converting text into synthetic speech using a machine learning model that sounds like a chosen voice. With the [REST API](./rest-text-to-speech.md),
-you can enable your apps to speak with [pre-built voices](./language-support.md#neural-voices)
-or your own [custom voice](./how-to-custom-voice-prepare-data.md)
-models developed through the Custom Neural Voice feature. Custom Neural
-Voice is based on Neural TTS technology that creates a natural sounding
-voice that is often indistinguishable when compared with a human voice.
-The realistic and natural sounding voice of Custom Neural Voice can
-represent brands, personify machines, and allow users to interact with
-applications conversationally in a natural way.
+Custom Neural Voice is a text-to-speech (TTS) feature that lets you create a one-of-a-kind customized synthetic voice for your applications. With Custom Neural Voice, you can build a highly natural-sounding voice by providing your audio samples as training data. Based on Neural TTS technology and the multilingual, multi-speaker universal model, Custom Neural Voice lets you create synthetic voices that are rich in speaking styles, or adaptable across languages. The realistic and natural-sounding voice of Custom Neural Voice can represent brands, personify machines, and allow users to interact with applications conversationally.
> [!NOTE] > The Custom Neural Voice feature requires registration, and access to it is limited based upon Microsoft's eligibility and use criteria. Customers who wish to use this feature are required to register their use cases through the [intake form](https://aka.ms/customneural).
Neural TTS voice models are trained using deep neural networks based on
the recording samples of human voices. In this [blog](https://techcommunity.microsoft.com/t5/azure-ai/neural-text-to-speech-extends-support-to-15-more-languages-with/ba-p/1505911), we describe how Neural TTS works with state-of-the-art neural speech
-synthesis models. The blog also explains how a universal base model can be adapted with less
-than 2 hours of speech data (or less than 2,000 recorded utterances)
-from a target speaker, and learn to speak in that target speaker's voice. To read about how a neural vocoder is trained, see the [blog post](https://techcommunity.microsoft.com/t5/azure-ai/azure-neural-tts-upgraded-with-hifinet-achieving-higher-audio/ba-p/1847860).
-
-With the customization capability of Custom Neural Voice, you can adapt
-the Neural TTS engine to better fit your user scenarios. To create a
-custom neural voice, use [Speech Studio](https://speech.microsoft.com/customvoice) to upload the recorded
-audio and corresponding scripts, train the model, and deploy the voice
-to a custom endpoint. Depending on the use case, Custom Neural Voice can
-be used to convert text into speech in real-time (e.g., used in a smart
-virtual assistant) or generate audio content offline (e.g., used as in
-audio book or instructions in e-learning applications) with the text
-input provided by the user. This is made available via the [REST API](./rest-text-to-speech.md), the
-[Speech SDK](./get-started-text-to-speech.md?pivots=programming-language-csharp&tabs=script%2cwindowsinstall),
-or the [web portal](https://speech.microsoft.com/audiocontentcreation).
+synthesis models. The blog also explains how a universal base model can be adapted to a target speaker's voice with less
+than 2 hours of speech data (or less than 2,000 recorded utterances), and additionally transfer the voice to another language or style. To read about how a neural vocoder is trained, see the [blog post](https://techcommunity.microsoft.com/t5/azure-ai/azure-neural-tts-upgraded-with-hifinet-achieving-higher-audio/ba-p/1847860).
+
+Custom Neural Voice lets you adapt the Neural TTS engine to fit your scenarios. To create a custom neural voice, use [Speech Studio](https://speech.microsoft.com/customvoice) to upload the recorded audio and corresponding scripts, train the model, and deploy the voice to a custom endpoint. Custom Neural Voice can convert text provided by the user into speech in real time, or generate audio content offline from text input. This is made available via the [REST API](./rest-text-to-speech.md), the [Speech SDK](./get-started-text-to-speech.md), or the [web portal](https://speech.microsoft.com/audiocontentcreation).
+
+## Get started
+
+* To get started with Custom Neural Voice and create a project, see [Get started with Custom Voice](how-to-custom-voice.md).
+* To prepare and upload your audio data, see [Prepare training data](how-to-custom-voice-prepare-data.md).
+* To train and deploy your models, see [Create and use your voice model](how-to-custom-voice-create-voice.md).
## Terms and definitions
To learn how to use Custom Neural Voice responsibly, see the [transparency note]
## Next steps * [Get started with Custom Neural Voice](how-to-custom-voice.md)
-* [Create and use your voice model](how-to-custom-voice-create-voice.md)
cognitive-services How To Custom Voice https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-custom-voice.md
Title: "Improve synthesis with Custom Voice - Speech service"
+ Title: "Get started with Custom Neural Voice - Speech service"
-description: "Custom Voice is a set of online tools that allow you to create a recognizable, one-of-a-kind voice for your brand. All it takes to get started are a handful of audio files and the associated transcriptions. Follow the links below to start creating a custom speech-to-text experience."
+description: "Custom Neural Voice is a set of online tools that allow you to create a recognizable, one-of-a-kind voice for your brand. All it takes to get started are a handful of audio files and the associated transcriptions."
Previously updated : 02/17/2020 Last updated : 05/18/2021
-# Get started with Custom Voice
+# Get started with Custom Neural Voice
-[Custom Voice](https://aka.ms/customvoice) is a set of online tools that allow you to create a recognizable, one-of-a-kind voice for your brand. All it takes to get started are a handful of audio files and the associated transcriptions. Follow the links below to start creating a custom text-to-speech experience.
-
-## What's in Custom Voice?
-
-Before starting with Custom Voice, you'll need an Azure account and a Speech service subscription. Once you've created an account, you can prepare your data, train and test your models, evaluate voice quality, and ultimately deploy your custom voice model.
-
-The diagram below highlights the steps to create a custom voice model using the [Custom Voice portal](https://aka.ms/customvoice). Use the links to learn more.
-
-![Custom Voice architecture diagram](media/custom-voice/custom-voice-diagram.png)
-
-1. [Subscribe and create a project](#set-up-your-azure-account) - Create an Azure account and create a Speech service subscription. This unified subscription gives you access to speech-to-text, text-to-speech, speech translation, and the Custom Voice portal. Then, using your Speech service subscription, create your first Custom Voice project.
-
-2. [Upload data](how-to-custom-voice-create-voice.md#upload-your-datasets) - Upload data (audio and text) using the Custom Voice portal or Custom Voice API. From the portal, you can investigate and evaluate pronunciation scores and signal-to-noise ratios. For more information, see [How to prepare data for Custom Voice](how-to-custom-voice-prepare-data.md).
-
-3. [Train your model](how-to-custom-voice-create-voice.md#train-your-custom-neural-voice-model) ΓÇô Use your data to create a custom text-to-speech voice model. You can train a model in different languages. After training, test your model, and if you're satisfied with the result, you can deploy the model.
-
-4. [Deploy your model](how-to-custom-voice-create-voice.md#create-and-use-a-custom-neural-voice-endpoint) - Create a custom endpoint for your text-to-speech voice model, and use it for speech synthesis in your products, tools, and applications.
-
-## Custom Neural voices
-
-Custom Voice currently supports both standard and neural tiers. Custom Neural Voice empowers users to build higher quality voice models while requiring less data, and provides measures to help you deploy AI responsibly. We recommend you should use Custom Neural Voice to develop more realistic voices for more natural conversational interfaces and enable your customers and end users to benefit from the latest Text-to-Speech technology, in a responsible way. [Learn more about Custom Neural Voice](/legal/cognitive-services/speech-service/custom-neural-voice/transparency-note-custom-neural-voice?context=%2fazure%2fcognitive-services%2fspeech-service%2fcontext%2fcontext).
+[Custom Neural Voice](https://aka.ms/customvoice) is a set of online tools that allow you to create a recognizable, one-of-a-kind voice for your brand. All it takes to get started are a handful of audio files and the associated transcriptions. Follow the links below to start creating a custom text-to-speech experience. See the supported [languages](language-support.md#customization) and [regions](regions.md#custom-voices) for Custom Neural Voice.
> [!NOTE] > As part of Microsoft's commitment to designing responsible AI, we have limited the use of Custom Neural Voice. You may gain access to the technology only after your applications are reviewed and you have committed to using it in alignment with our responsible AI principles. Learn more about our [policy on the limit access](/legal/cognitive-services/speech-service/custom-neural-voice/limited-access-custom-neural-voice?context=%2fazure%2fcognitive-services%2fspeech-service%2fcontext%2fcontext) and [apply here](https://aka.ms/customneural).
-> The [languages](language-support.md#customization) and [regions](regions.md#custom-voices) supported for the standard and neural version of Custom Voice are different. Check the details before you start.
-
+
## Set up your Azure account
-A Speech service subscription is required before you can use the Custom Speech portal to create a custom model. Follow these instructions to create a Speech service subscription in Azure. If you do not have an Azure account, you can sign up for a new one.
+A Speech service subscription is required before you can use Custom Neural Voice. Follow these instructions to create a Speech service subscription in Azure. If you do not have an Azure account, you can sign up for a new one.
Once you've created an Azure account and a Speech service subscription, you'll need to sign in to Speech Studio and connect your subscription.
Once you've created an Azure account and a Speech service subscription, you'll n
Content like data, models, tests, and endpoints are organized into **Projects** in Speech Studio. Each project is specific to a country/language and the gender of the voice you want to create. For example, you may create a project for a female voice for your call center's chat bots that use English in the United States ('en-US').
-To create your first project, select the **Text-to-Speech/Custom Voice** tab, then click **Create project**. Follow the instructions provided by the wizard to create your project. After you've created a project, you will see four tabs: **Set up voice talent**, **Prepare training data**, **Train model**, and **Deploy model**. Use the links provided in [Next steps](#next-steps) to learn how to use each tab.
-
-> [!IMPORTANT]
-> The [Custom Voice portal](https://aka.ms/custom-voice) was recently updated! If you created previous data, models, tests, and published endpoints in the CRIS.ai portal or with APIs, you need to create a new project in the new portal to connect to these old entities.
+To create your first project, select the **Text-to-Speech/Custom Voice** tab, then click **Create project**. Follow the instructions provided by the wizard to create your project. After you've created a project, you will see four tabs: **Set up voice talent**, **Prepare training data**, **Train model**, and **Deploy model**. Use the links provided in [next steps](#next-steps) to learn how to use each tab.
## Tips for creating a custom neural voice
The standard/non-neural training tier (adaptive, statistical parametric, concace
If you are using non-neural/standard Custom Voice, migrate to Custom Neural Voice immediately by following the steps below. Moving to Custom Neural Voice will help you develop more realistic voices for even more natural conversational interfaces and enable your customers and end users to benefit from the latest Text-to-Speech technology, in a responsible way. 1. Learn more about our [policy on the limit access](/legal/cognitive-services/speech-service/custom-neural-voice/limited-access-custom-neural-voice?context=%2fazure%2fcognitive-services%2fspeech-service%2fcontext%2fcontext) and [apply here](https://aka.ms/customneural). Note that access to the Custom Neural Voice service is subject to Microsoft's sole discretion based on our eligibility criteria. Customers may gain access to the technology only after their application is reviewed and they have committed to using it in alignment with our [Responsible AI principles](https://microsoft.com/ai/responsible-ai) and the [code of conduct](/legal/cognitive-services/speech-service/tts-code-of-conduct?context=%2fazure%2fcognitive-services%2fspeech-service%2fcontext%2fcontext).
-2. Once your application is approved, you will be provided with the access to the "neural" training feature. Make sure you log in to the [Custom Voice portal](https://speech.microsoft.com/customvoice) using the same Azure subscription that you provide in your application.
+2. Once your application is approved, you will be provided with access to the "neural" training feature. Make sure you log in to [Speech Studio](https://speech.microsoft.com) using the same Azure subscription that you provided in your application.
> [!IMPORTANT] > To protect voice talent and prevent training of voice models with unauthorized recording or without the acknowledgement from the voice talent, we require the customer to upload a recorded statement of the voice talent giving their consent. When preparing your recording script, make sure you include this sentence. > "I [state your first and last name] am aware that recordings of my voice will be used by [state the name of the company] to create and use a synthetic version of my voice."
If you are using non-neural/standard Custom Voice, migrate to Custom Neural Voi
## Next steps -- [Prepare Custom Voice data](how-to-custom-voice-prepare-data.md)-- [Create a Custom Voice](how-to-custom-voice-create-voice.md)-- [Tutorial: Record your voice samples](record-custom-voice-samples.md)
+- [Prepare data for Custom Neural Voice](how-to-custom-voice-prepare-data.md)
+- [Train and deploy a Custom Neural Voice](how-to-custom-voice-create-voice.md)
+- [How to record voice samples](record-custom-voice-samples.md)
cognitive-services Text To Speech https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/text-to-speech.md
This documentation contains the following article types:
* Neural voices - Deep neural networks are used to overcome the limits of traditional speech synthesis with regard to stress and intonation in spoken language. Prosody prediction and voice synthesis are performed simultaneously, which results in more fluid and natural-sounding outputs. Neural voices can be used to make interactions with chatbots and voice assistants more natural and engaging, convert digital texts such as e-books into audiobooks, and enhance in-car navigation systems. With the human-like natural prosody and clear articulation of words, neural voices significantly reduce listening fatigue when you interact with AI systems. For a full list of neural voices, see [supported languages](language-support.md#text-to-speech).
-* Adjust speaking styles with SSML - Speech Synthesis Markup Language (SSML) is an XML-based markup language used to customize speech-to-text outputs. With SSML, you can adjust pitch, add pauses, improve pronunciation, change speaking rate, adjust volume, and attribute multiple voices to a single document. See the [how-to](speech-synthesis-markup.md) for adjusting speaking styles.
+* Adjust speaking styles with SSML - Speech Synthesis Markup Language (SSML) is an XML-based markup language used to customize text-to-speech outputs. With SSML, you can adjust pitch, add pauses, improve pronunciation, change speaking rate, adjust volume, and attribute multiple voices to a single document. With the multilingual voices, you can also adjust the speaking language via SSML. See the [how-to](speech-synthesis-markup.md) for adjusting speaking styles.
* Visemes - [Visemes](how-to-speech-synthesis-viseme.md) are the key poses in observed speech, including the position of the lips, jaw and tongue when producing a particular phoneme. Visemes have a strong correlation with voices and phonemes. Using viseme events in Speech SDK, you can generate facial animation data, which can be used to animate faces in lip-reading communication, education, entertainment, and customer service. Viseme is currently only supported for the `en-US` English (United States) [neural voices](language-support.md#text-to-speech).
digital-twins How To Create Twin Level Role Based Access Control https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-create-twin-level-role-based-access-control.md
-
-# Mandatory fields.
Title: Create twin-level RBAC with marker tags-
-description: Understand how to create twin-level RBAC using marker tags.
-- Previously updated : 5/12/2021---
-# Optional fields. Don't forget to remove # if you need a field.
-#
-#
-#
--
-# Use marker tags for twin-level RBAC
-
-While [Azure Digital Twins security](concepts-security.md) currently only supports instance-level role-based access control (RBAC), there are a few ways to implement twin-level RBAC in your solution. One method leverages **marker tags**, which are a flexible way of adding information at runtime when a twin is created. You can read more about marker tags in [How-to: Add tags to digital twins](how-to-use-tags.md).
-
-This article describes the marker tag strategy for implementing twin-level RBAC in your solution.
-
-## Prerequisites
-
-Before proceeding with creating twin-level RBAC for your solution, you should complete the following steps:
-1. Set up an **Azure Digital Twins instance** and the required permissions for using it. If you don't have this already set up, follow the instructions in [How-to: Set up an instance and authentication](how-to-set-up-instance-portal.md). The instructions contain information to help you verify that you've completed each step successfully.
-2. Review and familiarize yourself with the security details for Azure Digital Twins in [Concepts: Security for Azure Digital Twins solutions](concepts-security.md).
-3. (Recommended) Complete the Azure Digital Twins tutorial, [Connect an end-to-end solution](tutorial-end-to-end.md). Having an end-to-end solution will allow you to concretely implement and test out twin-level RBAC. This tutorial includes setting up and [configuring permissions for](tutorial-end-to-end.md#configure-permissions-for-the-function-app) an Azure Functions app, which is a great way to test out the twin-level RBAC described in this article.
-
-After completing these steps, you will have an Azure Digital Twins flow which includes instance-level RBAC for a function app user. Next, you can set up marker tags to control access to individual twins.
-
-## Sample scenario
-
-This article uses a sample scenario to explain the marker tag strategy.
-
-Imagine that you have an Azure Digital Twins instance which contains assets from three tenants represented in its graph: Earth Group, Fire Group, and Wind Group. The Earth and Fire groups should each respectively have access to a subset of twins, while the Wind group should have access to all twins in the graph.
--
-An important consideration here is that for this specific pool logic, we say that tenants can be assigned multiple roles (for instance, a tenant could be a member of both Earth Group and Fire Group), but each twin is only assigned one role. Therefore, a Wind Group user would need to be assigned a role for all of its subset groups (Fire Group, Earth Group) in order to have access to all resources. You may design your logical separations differently than this.
-
-Next, you'll read about how to implement marker tags to support the different permissions needed by the groups in this example.
-
-## Create custom Azure RBAC roles
-
-To ensure tenants cannot reach across these defined boundaries, you'll add an additional layer of authentication on top of instance-level RBAC. In Azure RBAC, you are able to define custom roles. This process is explained in [Azure custom roles - Azure RBAC](../role-based-access-control/custom-roles.md).
-
-For the sample scenario described [earlier](#sample-scenario), you can create custom roles for each group: **Fire**, **Earth**, and **Wind**.
-
-Then, use the identity associated with the function app from the [Prerequisites](#prerequisites) section to assign the custom roles to the function app. You can test that the twin-level RBAC is working for your app by assigning different role configurations to the function (such as all the roles, no roles, or only **Fire**) and verifying that access is working as expected for each situation.
-
-## Add tags
-
-Next, you'll re-create or update the models in your Azure Digital Twins instance to allow all twins to have a tag property.
-
-To accomplish this, a tag property of type Map is defined in the model for each twin. Then, each twin can define any key/value pair within the tag property. Later, you can perform queries filtering by these tags.
-
-Based on the sample scenario for this article, the addition to a twin model may look like this:
-
-```json
-{
-  "@type": "Property",
-  "name": "tags",
-  "schema": {
-    "@type": "Map",
-    "mapKey": {
-      "name": "role",
-      "schema": "string"
-    }
-  }
-}
-```
-
-For instructions on how to update an existing model in your instance, see [How-to: Manage models](how-to-manage-model.md#update-models).
-
-Once twins contain the Map property, you can now "tag" twins with the roles that are allowed to access them. For instructions on updating twin property values, see [How-to: Manage digital twins](how-to-manage-twin.md#update-a-digital-twin).
-
-## Query
-
-[Earlier in this article](#create-custom-azure-rbac-roles), you made use of a service principal to associate tenants with roles, and then updated the twins so that they would utilize marker tags to identify which role was allowed to access them. We can now see in action how this logic can enable us to check whether a tenant has access to a specific resource at runtime. Before accessing a resource, use the following code to fetch the roles associated with that tenant:
-
-```C#
-var roles = "['role1', 'role2'..'rolen']";
-// in our case
-var rolesExample = "['Fire', 'Earth', 'Wind']";
-```
-
-Now, when querying the graph on behalf of that tenant, you can include the following clause to only return twins that tenants with the specified roles have access to:
-
-```SQL
-WHERE is_defined(tags.role) AND tags.role IN ['Fire', 'Earth', 'Wind'])
-```
-
-You should find that the query results are now scoped to the specified roles.
-
-## Next steps
-
-Read more about the concepts discussed in this article:
-* [Azure RBAC](../role-based-access-control/overview.md)
-* [Concepts: Security for Azure Digital Twins solutions](concepts-security.md)
-* [How-to: Add tags to digital twins](how-to-use-tags.md)
hdinsight Apache Domain Joined Run Kafka https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/domain-joined/apache-domain-joined-run-kafka.md
Based on the Ranger policies configured, **sales_user** can produce/consume topi
export KAFKABROKERS=<brokerlist>:9092 ```
- Example: `export KAFKABROKERS=wn0-khdicl.contoso.com:9092,wn1-khdicl.contoso.com:9092`
+ Example: `export KAFKABROKERS=<brokername1>.contoso.com:9092,<brokername2>.contoso.com:9092`
3. Follow Step 3 under **Build and deploy the example** in [Tutorial: Use the Apache Kafka Producer and Consumer APIs](../kafk#build-and-deploy-the-example) to ensure that the `kafka-producer-consumer.jar` is also available to **sales_user**.
hdinsight Apache Hbase Backup Replication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hbase/apache-hbase-backup-replication.md
The destination address is composed of the following three parts:
`<destinationAddress> = <ZooKeeperQuorum>:<Port>:<ZnodeParent>`
-* `<ZooKeeperQuorum>` is a comma-separated list of Apache ZooKeeper nodes, for example:
+* `<ZooKeeperQuorum>` is a comma-separated list of Apache ZooKeeper node FQDNs, for example:
- zk0-hdizc2.54o2oqawzlwevlfxgay2500xtg.dx.internal.cloudapp.net,zk4-hdizc2.54o2oqawzlwevlfxgay2500xtg.dx.internal.cloudapp.net,zk3-hdizc2.54o2oqawzlwevlfxgay2500xtg.dx.internal.cloudapp.net
+ \<zookeepername1>.54o2oqawzlwevlfxgay2500xtg.dx.internal.cloudapp.net,\<zookeepername2>.54o2oqawzlwevlfxgay2500xtg.dx.internal.cloudapp.net,\<zookeepername3>.54o2oqawzlwevlfxgay2500xtg.dx.internal.cloudapp.net
* `<Port>` on HDInsight defaults to 2181, and `<ZnodeParent>` is `/hbase-unsecure`, so the complete `<destinationAddress>` would be:
- zk0-hdizc2.54o2oqawzlwevlfxgay2500xtg.dx.internal.cloudapp.net,zk4-hdizc2.54o2oqawzlwevlfxgay2500xtg.dx.internal.cloudapp.net,zk3-hdizc2.54o2oqawzlwevlfxgay2500xtg.dx.internal.cloudapp.net:2181:/hbase-unsecure
+ \<zookeepername1>.54o2oqawzlwevlfxgay2500xtg.dx.internal.cloudapp.net,\<zookeepername2>.54o2oqawzlwevlfxgay2500xtg.dx.internal.cloudapp.net,\<zookeepername3>.54o2oqawzlwevlfxgay2500xtg.dx.internal.cloudapp.net:2181:/hbase-unsecure
See [Manually Collecting the Apache ZooKeeper Quorum List](#manually-collect-the-apache-zookeeper-quorum-list) in this article for details on how to retrieve these values for your HDInsight cluster.
curl -u admin:<password> -X GET -H "X-Requested-By: ambari" "https://<clusterNam
The curl command retrieves a JSON document with HBase configuration information, and the grep command returns only the "hbase.zookeeper.quorum" entry, for example: ```output
-"hbase.zookeeper.quorum" : "zk0-hdizc2.54o2oqawzlwevlfxgay2500xtg.dx.internal.cloudapp.net,zk4-hdizc2.54o2oqawzlwevlfxgay2500xtg.dx.internal.cloudapp.net,zk3-hdizc2.54o2oqawzlwevlfxgay2500xtg.dx.internal.cloudapp.net"
+"hbase.zookeeper.quorum" : "<zookeepername1>.54o2oqawzlwevlfxgay2500xtg.dx.internal.cloudapp.net,<zookeepername2>.54o2oqawzlwevlfxgay2500xtg.dx.internal.cloudapp.net,<zookeepername3>.54o2oqawzlwevlfxgay2500xtg.dx.internal.cloudapp.net"
``` The quorum host names value is the entire string to the right of the colon.
To retrieve the IP addresses for these hosts, use the following curl command for
curl -u admin:<password> -X GET -H "X-Requested-By: ambari" "https://<clusterName>.azurehdinsight.net/api/v1/clusters/<clusterName>/hosts/<zookeeperHostFullName>" | grep "ip" ```
-In this curl command, `<zookeeperHostFullName>` is the full DNS name of a ZooKeeper host, such as the example `zk0-hdizc2.54o2oqawzlwevlfxgay2500xtg.dx.internal.cloudapp.net`. The output of the command contains the IP address for the specified host, for example:
+In this curl command, `<zookeeperHostFullName>` is the full DNS name of a ZooKeeper host, such as the example `<zookeepername1>.54o2oqawzlwevlfxgay2500xtg.dx.internal.cloudapp.net`. The output of the command contains the IP address for the specified host, for example:
`100 "ip" : "10.0.0.9",`
hdinsight Apache Hbase Query With Phoenix https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hbase/apache-hbase-query-with-phoenix.md
A portion of the output will look similar to:
```output {
- "href" : "http://hn*.432dc3rlshou3ocf251eycoapa.bx.internal.cloudapp.net:8080/api/v1/clusters/myCluster/hosts/zk0-brim.432dc3rlshou3ocf251eycoapa.bx.internal.cloudapp.net/host_components/ZOOKEEPER_SERVER",
+ "href" : "http://hn*.432dc3rlshou3ocf251eycoapa.bx.internal.cloudapp.net:8080/api/v1/clusters/myCluster/hosts/<zookeepername1>.432dc3rlshou3ocf251eycoapa.bx.internal.cloudapp.net/host_components/ZOOKEEPER_SERVER",
"HostRoles" : { "cluster_name" : "myCluster", "component_name" : "ZOOKEEPER_SERVER",
- "host_name" : "zk0-brim.432dc3rlshou3ocf251eycoapa.bx.internal.cloudapp.net"
+ "host_name" : "<zookeepername1>.432dc3rlshou3ocf251eycoapa.bx.internal.cloudapp.net"
} ```
hdinsight Apache Hbase Replication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hbase/apache-hbase-replication.md
The `print_usage()` section of the [script](https://github.com/Azure/hbase-utils
- **Copy specific tables (test1, test2, and test3) for all rows edited until now (current time stamp)**:
- `-m hn1 -t "test1::;test2::;test3::" -p "zk5-hbrpl2;zk1-hbrpl2;zk5-hbrpl2:2181:/hbase-unsecure" -everythingTillNow`
+ `-m hn1 -t "test1::;test2::;test3::" -p "<zookeepername1>;<zookeepername2>;<zookeepername3>:2181:/hbase-unsecure" -everythingTillNow`
Or:
- `-m hn1 -t "test1::;test2::;test3::" --replication-peer="zk5-hbrpl2;zk1-hbrpl2;zk5-hbrpl2:2181:/hbase-unsecure" -everythingTillNow`
+ `-m hn1 -t "test1::;test2::;test3::" --replication-peer="<zookeepername1>;<zookeepername2>;<zookeepername3>:2181:/hbase-unsecure" -everythingTillNow`
- **Copy specific tables with a specified time range**:
- `-m hn1 -t "table1:0:452256397;table2:14141444:452256397" -p "zk5-hbrpl2;zk1-hbrpl2;zk5-hbrpl2:2181:/hbase-unsecure"`
+ `-m hn1 -t "table1:0:452256397;table2:14141444:452256397" -p "<zookeepername1>;<zookeepername2>;<zookeepername3>:2181:/hbase-unsecure"`
## Disable replication
hdinsight Hdinsight Apache Storm With Kafka https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hdinsight-apache-storm-with-kafka.md
To create an Azure Virtual Network, and then create the Kafka and Storm clusters
The value returned is similar to the following text: ```output
- wn0-kafka.53qqkiavjsoeloiq3y1naf4hzc.ex.internal.cloudapp.net:9092,wn1-kafka.53qqkiavjsoeloiq3y1naf4hzc.ex.internal.cloudapp.net:9092
+ <brokername1>.53qqkiavjsoeloiq3y1naf4hzc.ex.internal.cloudapp.net:9092,<brokername2>.53qqkiavjsoeloiq3y1naf4hzc.ex.internal.cloudapp.net:9092
``` > [!IMPORTANT]
To create an Azure Virtual Network, and then create the Kafka and Storm clusters
The value returned is similar to the following text: ```output
- zk0-kafka.53qqkiavjsoeloiq3y1naf4hzc.ex.internal.cloudapp.net:2181,zk2-kafka.53qqkiavjsoeloiq3y1naf4hzc.ex.internal.cloudapp.net:2181
+ <zookeepername1>.53qqkiavjsoeloiq3y1naf4hzc.ex.internal.cloudapp.net:2181,<zookeepername2>.53qqkiavjsoeloiq3y1naf4hzc.ex.internal.cloudapp.net:2181
``` > [!IMPORTANT]
To create an Azure Virtual Network, and then create the Kafka and Storm clusters
3. Edit the `dev.properties` file in the root of the project. Add the Broker and Zookeeper hosts information for the __Kafka__ cluster to the matching lines in this file. The following example is configured using the sample values from the previous steps: ```bash
- kafka.zookeeper.hosts: zk0-kafka.53qqkiavjsoeloiq3y1naf4hzc.ex.internal.cloudapp.net:2181,zk2-kafka.53qqkiavjsoeloiq3y1naf4hzc.ex.internal.cloudapp.net:2181
- kafka.broker.hosts: wn0-kafka.53qqkiavjsoeloiq3y1naf4hzc.ex.internal.cloudapp.net:9092,wn1-kafka.53qqkiavjsoeloiq3y1naf4hzc.ex.internal.cloudapp.net:9092
+ kafka.zookeeper.hosts: <zookeepername1>.53qqkiavjsoeloiq3y1naf4hzc.ex.internal.cloudapp.net:2181,<zookeepername2>.53qqkiavjsoeloiq3y1naf4hzc.ex.internal.cloudapp.net:2181
+ kafka.broker.hosts: <brokername1>.53qqkiavjsoeloiq3y1naf4hzc.ex.internal.cloudapp.net:9092,<brokername2>.53qqkiavjsoeloiq3y1naf4hzc.ex.internal.cloudapp.net:9092
kafka.topic: stormtopic ```
hdinsight Hdinsight Plan Virtual Network Deployment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hdinsight-plan-virtual-network-deployment.md
Azure provides name resolution for Azure services that are installed in a virtua
* Any resource that is in the same Azure Virtual Network, by using the __internal DNS name__ of the resource. For example, when using the default name resolution, the following are examples of internal DNS names assigned to HDInsight worker nodes:
- * wn0-hdinsi.0owcbllr5hze3hxdja3mqlrhhe.ex.internal.cloudapp.net
- * wn2-hdinsi.0owcbllr5hze3hxdja3mqlrhhe.ex.internal.cloudapp.net
+ * \<workername1>.0owcbllr5hze3hxdja3mqlrhhe.ex.internal.cloudapp.net
+ * \<workername2>.0owcbllr5hze3hxdja3mqlrhhe.ex.internal.cloudapp.net
Both these nodes can communicate directly with each other, and other nodes in HDInsight, by using internal DNS names.
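As a small sketch, assuming a VM in the same virtual network and a placeholder internal DNS name taken from the examples above, you can confirm that the name resolves and that the node is reachable:

```powershell
# Sketch only: the FQDN below is a placeholder internal DNS name.
$node = '<workername1>.0owcbllr5hze3hxdja3mqlrhhe.ex.internal.cloudapp.net'

# Resolve the internal DNS name from another VM in the same virtual network.
Resolve-DnsName -Name $node

# Optionally test reachability on a port the node listens on (SSH on 22 for Linux-based clusters).
Test-NetConnection -ComputerName $node -Port 22
```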
hdinsight Apache Hive Warehouse Connector https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/interactive-query/apache-hive-warehouse-connector.md
Hive Warehouse Connector needs separate clusters for Spark and Interactive Query
1. From a web browser, navigate to `https://LLAPCLUSTERNAME.azurehdinsight.net/#/main/services/HIVE` where LLAPCLUSTERNAME is the name of your Interactive Query cluster.
-1. Navigate to **Summary** > **HiveServer2 Interactive JDBC URL** and note the value. The value may be similar to: `jdbc:hive2://zk0-iqgiro.rekufuk2y2ce.bx.internal.cloudapp.net:2181,zk1-iqgiro.rekufuk2y2ce.bx.internal.cloudapp.net:2181,zk4-iqgiro.rekufuk2y2ce.bx.internal.cloudapp.net:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2-interactive`.
+1. Navigate to **Summary** > **HiveServer2 Interactive JDBC URL** and note the value. The value may be similar to: `jdbc:hive2://<zookeepername1>.rekufuk2y2ce.bx.internal.cloudapp.net:2181,<zookeepername2>.rekufuk2y2ce.bx.internal.cloudapp.net:2181,<zookeepername3>.rekufuk2y2ce.bx.internal.cloudapp.net:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2-interactive`.
-1. Navigate to **Configs** > **Advanced** > **Advanced hive-site** > **hive.zookeeper.quorum** and note the value. The value may be similar to: `zk0-iqgiro.rekufuk2y2cezcbowjkbwfnyvd.bx.internal.cloudapp.net:2181,zk1-iqgiro.rekufuk2y2cezcbowjkbwfnyvd.bx.internal.cloudapp.net:2181,zk4-iqgiro.rekufuk2y2cezcbowjkbwfnyvd.bx.internal.cloudapp.net:2181`.
+1. Navigate to **Configs** > **Advanced** > **Advanced hive-site** > **hive.zookeeper.quorum** and note the value. The value may be similar to: `<zookeepername1>.rekufuk2y2cezcbowjkbwfnyvd.bx.internal.cloudapp.net:2181,<zookeepername2>.rekufuk2y2cezcbowjkbwfnyvd.bx.internal.cloudapp.net:2181,<zookeepername3>.rekufuk2y2cezcbowjkbwfnyvd.bx.internal.cloudapp.net:2181`.
1. Navigate to **Configs** > **Advanced** > **General** > **hive.metastore.uris** and note the value. The value may be similar to: `thrift://iqgiro.rekufuk2y2cezcbowjkbwfnyvd.bx.internal.cloudapp.net:9083,thrift://hn*.rekufuk2y2cezcbowjkbwfnyvd.bx.internal.cloudapp.net:9083`.
hdinsight Interactive Query Troubleshoot Inaccessible Hive View https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/interactive-query/interactive-query-troubleshoot-inaccessible-hive-view.md
This article describes troubleshooting steps and possible resolutions for issues
The Hive View is inaccessible, and the logs in `/var/log/hive` show an error similar to the following: ```
-ERROR [Curator-Framework-0]: curator.ConnectionState (ConnectionState.java:checkTimeouts(200)) - Connection timed out for connection string (zk0-cluster.cloud.wbmi.com:2181,zk1-cluster.cloud.wbmi.com:2181,zk2-cluster.cloud.wbmi.com:2181) and timeout (15000) / elapsed (21852)
+ERROR [Curator-Framework-0]: curator.ConnectionState (ConnectionState.java:checkTimeouts(200)) - Connection timed out for connection string (<zookeepername1>.cloud.wbmi.com:2181,<zookeepername2>.cloud.wbmi.com:2181,<zookeepername3>.cloud.wbmi.com:2181) and timeout (15000) / elapsed (21852)
``` ## Cause
It is possible that Hive may fail to establish a connection to Zookeeper, which
1. Check if the Zookeeper service has a ZNode entry for Hive Server2. The value will be missing or incorrect. ```
- /usr/hdp/2.6.2.25-1/zookeeper/bin/zkCli.sh -server zk1-wbwdhs
- [zk: zk0-cluster(CONNECTED) 0] ls /hiveserver2-hive2
+ /usr/hdp/2.6.2.25-1/zookeeper/bin/zkCli.sh -server <zookeepername1>
+ [zk: <zookeepername1>(CONNECTED) 0] ls /hiveserver2-hive2
``` 1. To re-establish connectivity, reboot the Zookeeper nodes, and reboot HiveServer2.
hdinsight Apache Kafka Connector Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/kafka/apache-kafka-connector-iot-hub.md
From your SSH connection to the edge node, use the following steps to configure
Copy the values for later use. The value returned is similar to the following text:
- `wn0-kafka.w5ijyohcxt5uvdhhuaz5ra4u5f.ex.internal.cloudapp.net:9092,wn1-kafka.w5ijyohcxt5uvdhhuaz5ra4u5f.ex.internal.cloudapp.net:9092`
+ `<brokername1>.w5ijyohcxt5uvdhhuaz5ra4u5f.ex.internal.cloudapp.net:9092,<brokername2>.w5ijyohcxt5uvdhhuaz5ra4u5f.ex.internal.cloudapp.net:9092`
1. Get the address of the Apache Zookeeper nodes. There are several Zookeeper nodes in the cluster, but you only need to reference one or two. Use the following command to the store the addresses in the variable `KAFKAZKHOSTS`:
hdinsight Apache Kafka Get Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/kafka/apache-kafka-get-started.md
In this section, you get the host information from the Apache Ambari REST API on
This command returns information similar to the following text:
- `zk0-kafka.eahjefxxp1netdbyklgqj5y1ud.ex.internal.cloudapp.net:2181,zk2-kafka.eahjefxxp1netdbyklgqj5y1ud.ex.internal.cloudapp.net:2181`
+ `<zookeepername1>.eahjefxxp1netdbyklgqj5y1ud.ex.internal.cloudapp.net:2181,<zookeepername2>.eahjefxxp1netdbyklgqj5y1ud.ex.internal.cloudapp.net:2181`
1. To set an environment variable with Apache Kafka broker host information, use the following command:
In this section, you get the host information from the Apache Ambari REST API on
This command returns information similar to the following text:
- `wn1-kafka.eahjefxxp1netdbyklgqj5y1ud.cx.internal.cloudapp.net:9092,wn0-kafka.eahjefxxp1netdbyklgqj5y1ud.cx.internal.cloudapp.net:9092`
+ `<brokername1>.eahjefxxp1netdbyklgqj5y1ud.cx.internal.cloudapp.net:9092,<brokername2>.eahjefxxp1netdbyklgqj5y1ud.cx.internal.cloudapp.net:9092`
## Manage Apache Kafka topics
hdinsight Apache Kafka Quickstart Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/kafka/apache-kafka-quickstart-powershell.md
In this section, you get the host information from the Apache Ambari REST API on
This command returns information similar to the following text:
- `zk0-kafka.eahjefxxp1netdbyklgqj5y1ud.ex.internal.cloudapp.net:2181,zk2-kafka.eahjefxxp1netdbyklgqj5y1ud.ex.internal.cloudapp.net:2181`
+ `<zookeepername1>.eahjefxxp1netdbyklgqj5y1ud.ex.internal.cloudapp.net:2181,<zookeepername2>.eahjefxxp1netdbyklgqj5y1ud.ex.internal.cloudapp.net:2181`
5. To set an environment variable with Kafka broker host information, use the following command:
In this section, you get the host information from the Apache Ambari REST API on
This command returns information similar to the following text:
- `wn1-kafka.eahjefxxp1netdbyklgqj5y1ud.cx.internal.cloudapp.net:9092,wn0-kafka.eahjefxxp1netdbyklgqj5y1ud.cx.internal.cloudapp.net:9092`
+ `<brokername1>.eahjefxxp1netdbyklgqj5y1ud.cx.internal.cloudapp.net:9092,<brokername2>.eahjefxxp1netdbyklgqj5y1ud.cx.internal.cloudapp.net:9092`
## Manage Apache Kafka topics
hdinsight Apache Kafka Quickstart Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/kafka/apache-kafka-quickstart-resource-manager-template.md
In this section, you get the host information from the Ambari REST API on the cl
This command returns information similar to the following text:
- `zk0-kafka.eahjefxxp1netdbyklgqj5y1ud.ex.internal.cloudapp.net:2181,zk2-kafka.eahjefxxp1netdbyklgqj5y1ud.ex.internal.cloudapp.net:2181`
+ `<zookeepername1>.eahjefxxp1netdbyklgqj5y1ud.ex.internal.cloudapp.net:2181,<zookeepername2>.eahjefxxp1netdbyklgqj5y1ud.ex.internal.cloudapp.net:2181`
1. To set an environment variable with Kafka broker host information, use the following command:
In this section, you get the host information from the Ambari REST API on the cl
This command returns information similar to the following text:
- `wn1-kafka.eahjefxxp1netdbyklgqj5y1ud.cx.internal.cloudapp.net:9092,wn0-kafka.eahjefxxp1netdbyklgqj5y1ud.cx.internal.cloudapp.net:9092`
+ `<brokername1>.eahjefxxp1netdbyklgqj5y1ud.cx.internal.cloudapp.net:9092,<brokername2>.eahjefxxp1netdbyklgqj5y1ud.cx.internal.cloudapp.net:9092`
## Manage Apache Kafka topics
hdinsight Apache Spark Troubleshoot Outofmemory https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/spark/apache-spark-troubleshoot-outofmemory.md
Livy Server cannot be started on an Apache Spark [(Spark 2.1 on Linux (HDI 3.6)]
17/07/27 17:52:50 INFO ZooKeeper: Client environment:user.name=livy 17/07/27 17:52:50 INFO ZooKeeper: Client environment:user.home=/home/livy 17/07/27 17:52:50 INFO ZooKeeper: Client environment:user.dir=/home/livy
-17/07/27 17:52:50 INFO ZooKeeper: Initiating client connection, connectString=zk2-kcspark.cxtzifsbseee1genzixf44zzga.gx.internal.cloudapp.net:2181,zk3-kcspark.cxtzifsbseee1genzixf44zzga.gx.internal.cloudapp.net:2181,zk6-kcspark.cxtzifsbseee1genzixf44zzga.gx.internal.cloudapp.net:2181 sessionTimeout=60000 watcher=org.apache.curator.ConnectionState@25fb8912
+17/07/27 17:52:50 INFO ZooKeeper: Initiating client connection, connectString=<zookeepername1>.cxtzifsbseee1genzixf44zzga.gx.internal.cloudapp.net:2181,<zookeepername2>.cxtzifsbseee1genzixf44zzga.gx.internal.cloudapp.net:2181,<zookeepername3>.cxtzifsbseee1genzixf44zzga.gx.internal.cloudapp.net:2181 sessionTimeout=60000 watcher=org.apache.curator.ConnectionState@25fb8912
17/07/27 17:52:50 INFO StateStore$: Using ZooKeeperStateStore for recovery. 17/07/27 17:52:50 INFO ClientCnxn: Opening socket connection to server 10.0.0.61/10.0.0.61:2181. Will not attempt to authenticate using SASL (unknown error) 17/07/27 17:52:50 INFO ClientCnxn: Socket connection established to 10.0.0.61/10.0.0.61:2181, initiating session
Delete all entries using steps detailed below.
1. The command above lists all the ZooKeeper hosts for the cluster ```bash
- /etc/hadoop/conf/core-site.xml: <value>zk1-hwxspa.lnuwp5akw5ie1j2gi2amtuuimc.dx.internal.cloudapp.net:2181,zk2- hwxspa.lnuwp5akw5ie1j2gi2amtuuimc.dx.internal.cloudapp.net:2181,zk4-hwxspa.lnuwp5akw5ie1j2gi2amtuuimc.dx.internal.cloudapp.net:2181</value>
+ /etc/hadoop/conf/core-site.xml: <value><zookeepername1>.lnuwp5akw5ie1j2gi2amtuuimc.dx.internal.cloudapp.net:2181,<zookeepername2>.lnuwp5akw5ie1j2gi2amtuuimc.dx.internal.cloudapp.net:2181,<zookeepername3>.lnuwp5akw5ie1j2gi2amtuuimc.dx.internal.cloudapp.net:2181</value>
``` 1. Get all the IP addresses of the ZooKeeper nodes using ping, or connect to ZooKeeper from the headnode using the ZooKeeper name ```bash
- /usr/hdp/current/zookeeper-client/bin/zkCli.sh -server zk2-hwxspa:2181
+ /usr/hdp/current/zookeeper-client/bin/zkCli.sh -server <zookeepername1>:2181
``` 1. Once you are connected to ZooKeeper, execute the following command to list all the sessions that have attempted to restart.
iot-hub Iot Hub Devguide Messages D2c https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-devguide-messages-d2c.md
Title: Understand Azure IoT Hub message routing | Microsoft Docs description: Developer guide - how to use message routing to send device-to-cloud messages. Includes information about sending both telemetry and non-telemetry data.--++ Previously updated : 05/15/2019- Last updated : 05/14/2021+
Message routing enables you to send messages from your devices to cloud services in an automated, scalable, and reliable manner. Message routing can be used for:
-* **Sending device telemetry messages as well as events** namely, device lifecycle events, device twin change events, and digital twin change events to the built-in-endpoint and custom endpoints. Learn about [routing endpoints](#routing-endpoints). To learn more about the events sent from IoT Plug and Play devices, see [Understand IoT Plug and Play digital twins](../iot-pnp/concepts-digital-twin.md).
+* **Sending device telemetry messages as well as events**, namely device lifecycle events, device twin change events, digital twin change events, and device connection state events, to the built-in endpoint and custom endpoints. Learn about [routing endpoints](#routing-endpoints). To learn more about the events sent from IoT Plug and Play devices, see [Understand IoT Plug and Play digital twins](../iot-pnp/concepts-digital-twin.md).
* **Filtering data before routing it to various endpoints** by applying rich queries. Message routing allows you to query on the message properties and message body as well as device twin tags and device twin properties. Learn more about using [queries in message routing](iot-hub-devguide-routing-query-syntax.md).
Use the following tutorials to learn how to read messages from an endpoint.
## Fallback route
-The fallback route sends all the messages that don't satisfy query conditions on any of the existing routes to the built-in-Event Hubs (**messages/events**), that is compatible with [Event Hubs](../event-hubs/index.yml). If message routing is turned on, you can enable the fallback route capability. Once a route is created, data stops flowing to the built-in-endpoint, unless a route is created to that endpoint. If there are no routes to the built-in-endpoint and a fallback route is enabled, only messages that don't match any query conditions on routes will be sent to the built-in-endpoint. Also, if all existing routes are deleted, fallback route must be enabled to receive all data at the built-in-endpoint.
+The fallback route sends all the messages that don't satisfy query conditions on any of the existing routes to the built-in endpoint (**messages/events**), which is compatible with [Event Hubs](../event-hubs/index.yml). If message routing is turned on, you can enable the fallback route capability. Once a route is created, data stops flowing to the built-in endpoint unless a route is created to that endpoint. If there are no routes to the built-in endpoint and a fallback route is enabled, only messages that don't match any query conditions on routes will be sent to the built-in endpoint. Also, if all existing routes are deleted, the fallback route must be enabled to receive all data at the built-in endpoint.
You can enable/disable the fallback route in the Azure portal->Message Routing blade. You can also use Azure Resource Manager for [FallbackRouteProperties](/rest/api/iothub/iothubresource/createorupdate#fallbackrouteproperties) to use a custom endpoint for fallback route. ## Non-telemetry events
-In addition to device telemetry, message routing also enables sending device twin change events, device lifecycle events, and digital twin change events. For example, if a route is created with data source set to **device twin change events**, IoT Hub sends messages to the endpoint that contain the change in the device twin. Similarly, if a route is created with data source set to **device lifecycle events**, IoT Hub sends a message indicating whether the device was deleted or created. Finally, as part of [Azure IoT Plug and Play](../iot-pnp/overview-iot-plug-and-play.md), a developer can create routes with data source set to **digital twin change events** and IoT Hub sends messages whenever a digital twin property is set or changed, a digital twin is replaced, or when a change event happens for the underlying device twin.
+In addition to device telemetry, message routing also enables sending device twin change events, device lifecycle events, digital twin change events and device connection state events. For example, if a route is created with data source set to **device twin change events**, IoT Hub sends messages to the endpoint that contain the change in the device twin. Similarly, if a route is created with data source set to **device lifecycle events**, IoT Hub sends a message indicating whether the device was deleted or created. As part of [Azure IoT Plug and Play](../iot-pnp/overview-iot-plug-and-play.md), a developer can create routes with data source set to **digital twin change events** and IoT Hub sends messages whenever a digital twin property is set or changed, a digital twin is replaced, or when a change event happens for the underlying device twin. Finally, if a route is created with data source set to **device connection state events**, IoT Hub sends a message indicating whether the device was connected or disconnected.
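As a hedged sketch, assuming the Az.IotHub PowerShell module (with a version whose routing source enumeration includes `DeviceConnectionStateEvents`) and placeholder resource names, a route that sends device connection state events to the built-in endpoint could look like this:

```powershell
# Sketch only: hub, resource group, and route names are placeholders.
# Requires the Az.IotHub module; DeviceConnectionStateEvents needs a recent module version.
Add-AzIotHubRoute -ResourceGroupName 'my-resource-group' -Name 'my-iot-hub' `
    -RouteName 'ConnectionStateRoute' `
    -Source DeviceConnectionStateEvents `
    -EndpointName 'events' `
    -Enabled
```

The same pattern applies to the other non-telemetry sources, such as `TwinChangeEvents` or `DeviceLifecycleEvents`.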
+ [IoT Hub also integrates with Azure Event Grid](iot-hub-event-grid.md) to publish device events to support real-time integrations and automation of workflows based on these events. See key [differences between message routing and Event Grid](iot-hub-event-grid-routing-comparison.md) to learn which works best for your scenario.
+## Limitations for device connection state events
+
+To receive device connection state events, a device must call either the *device-to-cloud send telemetry* or the *cloud-to-device receive message* operation with IoT Hub. However, if a device uses the AMQP protocol to connect with IoT Hub, we recommend that the device call the *cloud-to-device receive message* operation; otherwise, its connection state notifications may be delayed by a few minutes. If your device connects with the MQTT protocol, IoT Hub keeps the cloud-to-device link open. To open the cloud-to-device link for AMQP, call the [Receive Async API](/rest/api/iothub/device/receivedeviceboundnotification).
+
+The device-to-cloud link stays open as long as the device sends telemetry.
+
+If the device connection flickers, meaning the device connects and disconnects frequently, IoT Hub doesn't send every single connection state event, but publishes the current connection state taken at a periodic 60-second snapshot until the flickering stops. Receiving either the same connection state event with different sequence numbers, or different connection state events, means that there was a change in the device connection state.
## Testing routes When you create a new route or edit an existing route, you should test the route query with a sample message. You can test individual routes or test all routes at once; no messages are routed to the endpoints during the test. The Azure portal, Azure Resource Manager, Azure PowerShell, and the Azure CLI can be used for testing. Outcomes help identify whether the sample message matched the query, didn't match the query, or the test couldn't run because the sample message or query syntax is incorrect. To learn more, see [Test Route](/rest/api/iothub/iothubresource/testroute) and [Test all routes](/rest/api/iothub/iothubresource/testallroutes).
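If you prefer to script the test, the following is only a sketch under the assumption that your Az.IotHub module version includes the `Test-AzIotHubRoute` cmdlet with these parameters; check `Get-Help Test-AzIotHubRoute` before relying on it.

```powershell
# Sketch only: assumes Test-AzIotHubRoute exists in your Az.IotHub version;
# the hub and resource group names, and the sample body, are placeholders.
Test-AzIotHubRoute -ResourceGroupName 'my-resource-group' -Name 'my-iot-hub' `
    -Source DeviceMessages `
    -Body '{"temperature": 50}'
```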
iot-hub Iot Hub Event Grid Routing Comparison https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-event-grid-routing-comparison.md
While both message routing and Event Grid enable alert configuration, there are
| Feature | IoT Hub message routing | IoT Hub integration with Event Grid |
| - | - | - |
-| **Device messages and events** | Yes, message routing can be used for telemetry data, device twin changes, device lifecycle events, and digital twin change events. | Yes, Event Grid can be used for telemetry data and device events like device created/deleted/connected/disconnected. But Event grid cannot be used for device twin change events and digital twin change events. |
+| **Device messages and events** | Yes, message routing can be used for telemetry data, device twin changes, device lifecycle events, digital twin change events, and device connection state events. | Yes, Event Grid can be used for telemetry data and device events like device created/deleted/connected/disconnected. However, Event Grid cannot be used for device twin change events and digital twin change events. |
| **Ordering** | Yes, ordering of events is maintained. | No, order of events is not guaranteed. |
| **Filtering** | Rich filtering on message application properties, message system properties, message body, device twin tags, and device twin properties. Filtering isn't applied to digital twin change events. For examples, see [Message Routing Query Syntax](iot-hub-devguide-routing-query-syntax.md). | Filtering based on event type, subject type and attributes in each event. For examples, see [Understand filtering events in Event Grid Subscriptions](../event-grid/event-filtering.md). When subscribing to telemetry events, you can apply additional filters on the data to filter on message properties, message body and device twin in your IoT Hub, before publishing to Event Grid. See [how to filter events](../iot-hub/iot-hub-event-grid.md#filter-events). |
| **Endpoints** | <ul><li>Event Hubs</li> <li>Azure Blob Storage</li> <li>Service Bus queue</li> <li>Service Bus topics</li></ul><br>Paid IoT Hub SKUs (S1, S2, and S3) are limited to 10 custom endpoints. 100 routes can be created per IoT Hub. | <ul><li>Azure Functions</li> <li>Azure Automation</li> <li>Event Hubs</li> <li>Logic Apps</li> <li>Storage Blob</li> <li>Custom Topics</li> <li>Queue Storage</li> <li>Power Automate</li> <li>Third-party services through WebHooks</li></ul><br>500 endpoints per IoT Hub are supported. For the most up-to-date list of endpoints, see [Event Grid event handlers](../event-grid/overview.md#event-handlers). |
iot-hub Iot Hub Event Grid https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-event-grid.md
To filter messages before telemetry data is sent, you can update your [routing q
## Limitations for device connected and device disconnected events
-To receive device connection state events, a device must do either a 'D2C Send Telemetry' OR a 'C2D Receive Message' operation with Iot Hub. However, note that if a device is using AMQP protocol to connect with Iot Hub, it is recommended that they do a 'C2D Receive Message' operation otherwise their connection state notifications may be delayed by few minutes. If your device is using MQTT protocol, IoT Hub will keep the C2D link open. For AMQP, you can open the C2D link by calling the Receive Async API for IoT Hub C# SDK, or [device client for AMQP](iot-hub-amqp-support.md#device-client).
+To receive device connection state events, a device must call either a *device-to-cloud send telemetry* or a *cloud-to-device receive message* operation with IoT Hub. However, if a device uses the AMQP protocol to connect with IoT Hub, we recommend that the device call the *cloud-to-device receive message* operation; otherwise, its connection state notifications may be delayed by a few minutes. If your device connects with the MQTT protocol, IoT Hub keeps the cloud-to-device link open. To open the cloud-to-device link for AMQP, call the [Receive Async API](/rest/api/iothub/device/receivedeviceboundnotification).
-The D2C link is open if you are sending telemetry.
+The device-to-cloud link stays open as long as the device sends telemetry.
-If the device connection is flickering, which means the device connects and disconnects frequently, we will not send every single connection state, but will publish the current connection state taken at a periodic snapshot, till the flickering continues. Receiving either the same connection state event with different sequence numbers or different connection state events both mean that there was a change in the device connection state.
+If the device connection flickers, meaning the device connects and disconnects frequently, IoT Hub doesn't send every single connection state event. Instead, it publishes the current connection state, taken at a periodic 60-second snapshot, until the flickering stops. Receiving either the same connection state event with different sequence numbers or different connection state events both mean that there was a change in the device connection state.
## Tips for consuming events
lab-services Administrator Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/lab-services/administrator-guide.md
By default, each lab has its own virtual network. If you have virtual network p
## Shared image gallery
-A shared image gallery is attached to a lab account and serves as a central repository for storing images. An image is saved in the gallery when an educator chooses to export it from a lab's template VM. Each time an educator makes changes to the template VM and exports it, new versions of the image are saved and the previous versions are maintained.
+A shared image gallery is attached to a lab account and serves as a central repository for storing images. An image is saved in the gallery when an educator chooses to export it from a lab's template VM. Each time an educator makes changes to the template VM and exports it, new image definitions and/or versions are created in the gallery.
-Instructors can publish an image version from the shared image gallery when they create a new lab. Although the gallery stores multiple versions of an image, educators can select only the latest version during lab creation.
+Instructors can publish an image version from the shared image gallery when they create a new lab. Although the gallery stores multiple versions of an image, educators can select only the most recent version during lab creation. The most recent version is chosen based on the highest value of MajorVersion, then MinorVersion, then Patch. For more information about versioning, see [Image versions](../virtual-machines/shared-image-galleries.md#image-versions).
-The Shared Image Gallery service is an optional resource that you might not need immediately if you're starting with only a few labs. However, Shared Image Gallery offers many benefits that are helpful as you scale up to additional labs:
+The shared image gallery service is an optional resource that you might not need immediately if you're starting with only a few labs. However, shared image gallery offers many benefits that are helpful as you scale up to additional labs:
- **You can save and manage versions of a template VM image**
- It's useful to create a custom image or make changes (software, configuration, and so on) to an image from the public Azure Marketplace gallery. For example, it's common for educators to require different software or tooling be installed. Rather than requiring students to manually install these prerequisites on their own, different versions of the template VM image can be exported to a shared image gallery. You can then use these image versions when you create new labs.
+ It's useful to create a custom image or make changes (software, configuration, and so on) to an image from the Azure Marketplace gallery. For example, it's common for educators to require different software or tooling be installed. Rather than requiring students to manually install these prerequisites on their own, different versions of the template VM image can be exported to a shared image gallery. You can then use these image versions when you create new labs.
- **You can share and reuse template VM images across labs** You can save and reuse an image so that you don't have to configure it from scratch each time that you create a new lab. For example, if multiple classes need to use the same image, you can create it once and export it to the shared image gallery so that it can be shared across labs. -- **Image availability is ensured through automatic replication**
+- **You can upload your own custom images from other environments outside of labs**
- When you save an image from a lab to the shared image gallery, it's automatically replicated to other [regions within the same geography](https://azure.microsoft.com/global-infrastructure/regions/). If there's an outage for a region, publishing the image to your lab is unaffected, because an image replica from another region can be used. Publishing VMs from multiple replicas can also help with performance.
+ You can [upload custom images from other environments outside of the context of labs](how-to-attach-detach-shared-image-gallery.md). For example, you can upload images from your own physical lab environment or from an Azure VM into a shared image gallery. Once an image is imported into the gallery, you can then use those images to create labs.
To logically group shared images, you can do either of the following:
Instead, we recommend the second approach which is to install 3rd party software
If your school needs to do content filtering, contact us via the [Azure Lab Services' forums](https://techcommunity.microsoft.com/t5/azure-lab-services/bd-p/AzureLabServices) for more information.
+## Endpoint management
+
+Many endpoint management tools, such as [Microsoft Endpoint Manager](https://techcommunity.microsoft.com/t5/azure-lab-services/configuration-manager-azure-lab-services/ba-p/1754407), require Windows VMs to have unique machine security identifiers (SIDs). Using SysPrep to create a *generalized* image typically ensures that each Windows machine will have a new, unique machine SID generated when the VM boots from the image.
+
+With Lab Services, even if you use a *generalized* image to create a lab, the template VM and student VMs will all have the same machine SID. The VMs have the same SID because the template VM's image is in a *specialized* state when it's published to create the student VMs.
+
+For example, the Azure Marketplace images are generalized. If you create a lab from the Windows 10 Marketplace image and publish the template VM, all of the student VMs within a lab will have the same machine SID as the template VM. The machine SIDs can be verified by using a tool such as [PsGetSid](https://docs.microsoft.com/sysinternals/downloads/psgetsid).
+
+If you plan to use an endpoint management tool or similar software, we recommend that you test it with lab VMs to ensure that it works properly when machine SIDs are the same.
+ ## Pricing ### Azure Lab Services
Creating a shared image gallery and attaching it to your lab account is free. No
#### Storage charges
-To store image versions, a shared image gallery uses standard hard disk drive (HDD)-managed disks. The size of the HDD-managed disk that's used depends on the size of the image version that's being stored. To learn about pricing, see [Managed disks pricing](https://azure.microsoft.com/pricing/details/managed-disks/).
+To store image versions, a shared image gallery uses standard hard disk drive (HDD) managed disks by default. We recommend using HDD-managed disks when using shared image gallery with Lab Services. The size of the HDD-managed disk that's used depends on the size of the image version that's being stored. Lab Services supports image and disk sizes up to 128 GB. To learn about pricing, see [Managed disks pricing](https://azure.microsoft.com/pricing/details/managed-disks/).
#### Replication and network egress charges
The total cost per month is estimated as:
* *Number of images &times; number of versions &times; number of replicas &times; managed disk price = total cost per month*
-In this example, the cost is:
+In this example, the cost is:
* 1 custom image (32 GB) &times; 2 versions &times; 8 US regions &times; $1.54 = $24.64 per month
lab-services How To Attach Detach Shared Image Gallery https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/lab-services/how-to-attach-detach-shared-image-gallery.md
Only one shared image gallery can be attached to a lab. If you would like to att
## Next steps To learn about how to save a lab image to the shared image gallery or use an image from the shared image gallery to create a VM, see [How to use shared image gallery](how-to-use-shared-image-gallery.md).
+To bring a Windows custom image to shared image gallery outside of the context of a lab, see [Bring custom image to shared image gallery](upload-custom-image-shared-image-gallery.md).
+ For more information about shared image galleries in general, see [shared image gallery](../virtual-machines/shared-image-galleries.md).
lab-services How To Configure Student Usage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/lab-services/how-to-configure-student-usage.md
In this section, you can get the registration link from the portal and send it b
![List of registered users](./media/tutorial-track-usage/registered-users.png)
+ > [!NOTE]
+ > If you [republish a lab](how-to-create-manage-template.md#publish-the-template-vm) or [reset student VMs](how-to-set-virtual-machine-passwords.md#reset-vms), the students will remain registered for the lab's VMs. However, the contents of the VMs will be deleted and the VMs will be recreated with the template VM's image.
+ ## Set quotas for users You can set an hour quota for each student by doing the following:
lab-services How To Use Shared Image Gallery https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/lab-services/how-to-use-shared-image-gallery.md
Here are the couple of scenarios supported by this feature:
- A lab account admin attaches a shared image gallery to the lab account, and uploads an image to the shared image gallery outside the context of a lab. Then, lab creators can use that image from the shared image gallery to create labs. - A lab account admin attaches a shared image gallery to the lab account. A lab creator (instructor) saves the customized image of his/her lab to the shared image gallery. Then, other lab creators can select this image from the shared image gallery to create a template for their labs.
- When an image is saved to a shared image gallery, Azure Lab Services replicates the saved image to other regions available in the same [geography](https://azure.microsoft.com/global-infrastructure/geographies/). It ensures that the image is available for labs created in other regions in the same geography. Saving images to a shared image gallery incurs an additional cost, which includes cost for all replicated images. This cost is separate from the Azure Lab Services usage cost. For more information about Shared Image Gallery pricing, see [Shared Image Gallery ΓÇô Billing]( https://docs.microsoft.com/azure/virtual-machines/windows/shared-image-galleries#billing).
+ When an image is saved to a shared image gallery, Azure Lab Services replicates the saved image to other regions available in the same [geography](https://azure.microsoft.com/global-infrastructure/geographies/). It ensures that the image is available for labs created in other regions in the same geography. Saving images to a shared image gallery incurs an additional cost, which includes cost for all replicated images. This cost is separate from the Azure Lab Services usage cost. For more information about Shared Image Gallery pricing, see [Shared Image Gallery - Billing](../virtual-machines/windows/shared-image-galleries.md#billing).
## Prerequisites - Create a shared image gallery by using either [Azure PowerShell](../virtual-machines/shared-images-powershell.md) or [Azure CLI](../virtual-machines/shared-images-cli.md).
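As an example, the gallery itself can be created from the Azure CLI with a couple of commands. This is a minimal sketch; the resource group and gallery names are placeholders:

```azurecli
# Create a resource group and a shared image gallery in it (placeholder names).
az group create --name LabGalleryRG --location westus2
az sig create --resource-group LabGalleryRG --gallery-name LabImageGallery
```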
After a shared image gallery is attached, a lab account admin or an educator can
2. On the **Export to Shared Image Gallery** dialog, enter a **name for the image**, and then select **Export**. ![Export to Shared Image Gallery dialog](./media/how-to-use-shared-image-gallery/export-to-shared-image-gallery-dialog.png)+ 3. You can see the progress of this operation on the **Template** page. This operation can take some time. ![Export in progress](./media/how-to-use-shared-image-gallery/exporting-image-in-progress.png)
After a shared image gallery is attached, a lab account admin or an educator can
![Export completed](./media/how-to-use-shared-image-gallery/exporting-image-completed.png)
- After you save the image to the shared image gallery, you can use that image from the gallery when creating another lab. You can also upload an image to the shared image gallery outside the context of a lab. For more information, see [Shared image gallery overview](../virtual-machines/shared-images-powershell.md).
+ After you save the image to the shared image gallery, you can use that image from the gallery when creating another lab. You can also upload an image to the shared image gallery outside the context of a lab. For more information, see:
+
+ - [Shared image gallery overview](../virtual-machines/shared-images-powershell.md)
+ - [Upload custom image to shared image gallery](upload-custom-image-shared-image-gallery.md)
> [!IMPORTANT] > When you [save a template image of a lab](how-to-use-shared-image-gallery.md#save-an-image-to-the-shared-image-gallery) in Azure Lab Services to a shared image gallery, the image is uploaded to the gallery as a **specialized image**. [Specialized images](../virtual-machines/shared-image-galleries.md#generalized-and-specialized-images) keep machine-specific information and user profiles. You can still directly upload a generalized image to the gallery outside of Azure Lab Services.
-## Use an image from the shared image gallery
-An educator can pick a custom image available in the shared image gallery for the template during new lab creation.
+## Use a custom image from the shared image gallery
+An educator can pick a custom image available in the shared image gallery for the template VM that is created when you set up a new lab.
![Use virtual machine image from the gallery](./media/how-to-use-shared-image-gallery/use-shared-image.png) > [!NOTE]
-> You can create a template VM based on both **generalized** and **specialized** images in Azure Lab Services.
+> You can create a template VM based on both **generalized** and **specialized** images in Azure Lab Services.
+
+### Resave a custom image to shared image gallery
+
+After you've created a lab from a custom image in a shared image gallery, you can make changes to the image using the template VM and reexport the image to shared image gallery. When you reexport, you have the option to either create a new image or to update the original image.
+ ![Reexport to Shared Image Gallery dialog](./media/how-to-use-shared-image-gallery/reexport-to-shared-image-gallery-dialog.png)
+
+If you choose **Create new image**, a new [image definition](../virtual-machines/shared-image-galleries.md#image-definitions) is created. This allows you to save an entirely new custom image without changing the original custom image that already exists in shared image gallery.
+
+If instead you choose **Update existing image**, the original custom image's definition is updated with a new [version](../virtual-machines/shared-image-galleries.md#image-versions). Lab Services will automatically use the most recent version the next time a lab is created using the custom image.
## Next steps
-For more information about shared image galleries, see [shared image gallery](../virtual-machines/shared-image-galleries.md).
+To learn about how to set up shared image gallery by attaching and detaching it to a lab account, see [How to attach and detach shared image gallery](how-to-attach-detach-shared-image-gallery.md).
+
+To bring a Windows custom image to shared image gallery outside of the context of a lab, see [Bring custom image to shared image gallery](upload-custom-image-shared-image-gallery.md).
+
+For more information about shared image galleries in general, see [shared image gallery](../virtual-machines/shared-image-galleries.md).
lab-services Upload Custom Image Shared Image Gallery https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/lab-services/upload-custom-image-shared-image-gallery.md
Title: Azure Lab Services - upload a custom image to Shared Image Gallery
-description: Describes how to upload a custom image to Shared Image Gallery. University IT departments will find importing images especially beneficial.
+ Title: Azure Lab Services - Bring a Windows custom image to shared image gallery
+description: Describes how to bring a Windows custom image to shared image gallery.
Last updated 09/30/2020
-# Upload a custom image to Shared Image Gallery
+# Bring a Windows custom image to shared image gallery
-Shared Image Gallery is available to you for importing your own custom images for creating labs in Azure Lab Services. University IT departments will find importing images especially beneficial for the following reasons:
+You can use shared image gallery to bring your own Windows custom images and use these images to create labs in Azure Lab Services. This article shows how to bring a custom image from:
-* You donΓÇÖt have to manually create images using a labΓÇÖs template VM.
-* You can upload images created using other tools, such as SCCM, Endpoint Manager, etc.
+* Your [physical lab environment](upload-custom-image-shared-image-gallery.md#bring-a-windows-custom-image-from-a-physical-lab-environment).
+* An [Azure virtual machine](upload-custom-image-shared-image-gallery.md#bring-a-windows-custom-image-from-an-azure-virtual-machine).
-This article describes steps that can be taken to bring a custom image and use it in Azure Lab Services.
+This task is typically performed by a school's IT department.
-> [!IMPORTANT]
-> When you move your images from a physical lab environment to Az Labs, you need to restructure them appropriatly. Don't simply reuse your existing images from physical labs. <br/>For details, see the [Moving from a Physical Lab to Azure Lab Services](https://techcommunity.microsoft.com/t5/azure-lab-services/moving-from-a-physical-lab-to-azure-lab-services/ba-p/1654931) blog post.
+## Bring a Windows custom image from a physical lab environment
+
+The steps in this section show how to import a custom image that starts from your physical lab environment. With this approach, you create a VHD from your physical environment and import the VHD into shared image gallery so that it can be used within Lab Services. Here are some key benefits with this approach:
-## Bring custom image from a physical lab environment
+* You have flexibility to create either [generalized or specialized](../virtual-machines/shared-image-galleries.md#generalized-and-specialized-images) images to use in your labs. Otherwise, if you use a [lab's template VM to export an image](how-to-use-shared-image-gallery.md), the image is always specialized.
+* You can upload images created using other tools, such as [Microsoft Endpoint Configuration Manager](https://docs.microsoft.com/mem/configmgr/core/understand/introduction), so that you don't have to manually set up an image using a lab's template VM.
-The following steps show how to import a custom image that starts from a physical lab environment. A VHD is then created from this environment and imported into Shared Image Gallery in Azure so that it can be used within Azure Lab Services.
+The steps in this section require that you have permission to create a [managed disk](../virtual-machines/managed-disks-overview.md) in your school's Azure subscription.
+
+> [!IMPORTANT]
+> When moving images from a physical lab environment to Lab Services, you should restructure each image so that it only includes software needed for a lab's class. For more information, read the [Moving from a Physical Lab to Azure Lab Services](https://techcommunity.microsoft.com/t5/azure-lab-services/moving-from-a-physical-lab-to-azure-lab-services/ba-p/1654931) blog post.
-Many options exist for creating a VHD from a physical lab environment. The following steps show how to create a VHD from a Windows Hyper-V VM:
+Many options exist for creating a VHD from a physical lab environment. The following steps show how to create a VHD from a Windows Hyper-V virtual machine (VM) using Hyper-V Manager.
-1. Start with a Hyper-V VM in your physical lab environment that has been created from your image.
- 1. The VM must be created as a Generation 1 VM.
- 1. The VM must use a fixed disk size. You also can specify the size of the disk in this window. The disk size must be no greater than 128 GB.<br/>
- Images with disk size > 128 GB are not supported by Azure Lab Services.
+1. Start with a Hyper-V VM in your physical lab environment that has been created from your image. Read the article on [how to create a virtual machine in Hyper-V](https://docs.microsoft.com/windows-server/virtualization/hyper-v/get-started/create-a-virtual-machine-in-hyper-v) for more information.
+ 1. The VM must be created as a **Generation 1** VM.
+    1. The VM's virtual disk must be a fixed-size VHD. The disk size must *not* be greater than 128 GB. When you create the VM, enter the size of the disk as shown in the following image.
- :::image type="content" source="./media/upload-custom-image-shared-image-gallery/connect-virtual-hard-disk.png" alt-text="Connect virtual hard disk":::
- 1. Image the VM as you normally would.
-1. [Connect to the VM and prepare it for Azure](../virtual-machines/windows/prepare-for-upload-vhd-image.md).
- 1. [Set Windows configurations for Azure](../virtual-machines/windows/prepare-for-upload-vhd-image.md#set-windows-configurations-for-azure)
- 1. [Check the Windows Services that are the minimum needed to ensure VM connectivity](../virtual-machines/windows/prepare-for-upload-vhd-image.md#check-the-windows-services)
- 1. [Update remote desktop registry settings](../virtual-machines/windows/prepare-for-upload-vhd-image.md#update-remote-desktop-registry-settings)
- 1. [Configure Windows Firewall rules](../virtual-machines/windows/prepare-for-upload-vhd-image.md#configure-windows-firewall-rules)
- 1. Install Windows Updates
+ :::image type="content" source="./media/upload-custom-image-shared-image-gallery/connect-virtual-hard-disk.png" alt-text="Connect virtual hard disk":::
+
+ > [!IMPORTANT]
+ > Images with disk size greater than 128 GB are *not* supported by Lab Services.
+
+1. Connect to the Hyper-V VM and [prepare it for Azure](../virtual-machines/windows/prepare-for-upload-vhd-image.md) by following these steps:
+ 1. [Set Windows configurations for Azure](../virtual-machines/windows/prepare-for-upload-vhd-image.md#set-windows-configurations-for-azure).
+ 1. [Check the Windows Services that are needed to ensure VM connectivity](../virtual-machines/windows/prepare-for-upload-vhd-image.md#check-the-windows-services).
+ 1. [Update remote desktop registry settings](../virtual-machines/windows/prepare-for-upload-vhd-image.md#update-remote-desktop-registry-settings).
+ 1. [Configure Windows Firewall rules](../virtual-machines/windows/prepare-for-upload-vhd-image.md#configure-windows-firewall-rules).
+ 1. [Install Windows Updates](../virtual-machines/windows/prepare-for-upload-vhd-image.md).
    1. [Install the Azure VM Agent and complete the additional recommended configurations](../virtual-machines/windows/prepare-for-upload-vhd-image.md#complete-the-recommended-configurations).
-
- Above steps will create a specialized image. If creating a generalized image, you also will need to run [SysPrep](../virtual-machines/windows/prepare-for-upload-vhd-image.md#determine-when-to-use-sysprep). <br/>
- You should create a specialized image if you want to maintain the User directory (which may contain files, user account info, etc.) that is needed by software included in the image.
+
+    You can upload either specialized or generalized images to shared image gallery and use them to create labs. The steps above create a specialized image. If you instead need to create a generalized image, you will also need to [run SysPrep](../virtual-machines/windows/prepare-for-upload-vhd-image.md#determine-when-to-use-sysprep).
+
+ > [!IMPORTANT]
+ > You should create a specialized image if you want to maintain machine-specific information and user profiles. For more information about the differences between generalized and specialized images, see [Generalized and specialized images](../virtual-machines/shared-image-galleries.md#generalized-and-specialized-images).
+ 1. Since **Hyper-V** creates a **VHDX** file by default, you need to convert this to a VHD file. 1. Navigate to **Hyper-V Manager** -> **Action** -> **Edit Disk**.
- 1. Here, you'll have the option to **Convert** the disk from a VHDX to a VHD
- 1. When trying to expand the disk size, make sure to not exceed 128 GB.
+ 1. Next, **Convert** the disk from a VHDX to a VHD.
+ - If you expand the disk size, make sure that you do *not* exceed 128 GB.
:::image type="content" source="./media/upload-custom-image-shared-image-gallery/choose-action.png" alt-text="Choose action":::
-1. Upload VHD to Azure to create a managed disk.
- 1. You can use either Storage Explorer or AzCopy from the command line, as described in [Upload a VHD to Azure or copy a managed disk to another region](../virtual-machines/windows/disks-upload-vhd-to-managed-disk-powershell.md).
- If your machine goes to sleep or locks, the upload process may get interrupted and fail.
- 1. The result of this step is that you now have a managed disk that you can see in the Azure portal.
- You can use the Azure portal's "Size\PerformanceΓÇ¥ tab to choose your disk size. As mentioned before, the size has to be no > 128 GB.
+
+ For more information, read the article that shows how to [convert the virtual disk to a fixed size VHD](../virtual-machines/windows/prepare-for-upload-vhd-image.md#convert-the-virtual-disk-to-a-fixed-size-vhd).
+
+1. Upload the VHD to Azure to create a managed disk.
+ 1. You can use either Storage Explorer or AzCopy from the command line, as shown in [Upload a VHD to Azure or copy a managed disk to another region](../virtual-machines/windows/disks-upload-vhd-to-managed-disk-powershell.md).
+
+ 1. After you've uploaded the VHD, you should now have a managed disk that you can see in the Azure portal.
+
+ > [!WARNING]
+ > If your machine goes to sleep or locks, the upload process may get interrupted and fail.
+
+ > [!IMPORTANT]
+ > The Azure portal's **Size+Performance** tab for the managed disk allows you to change your disk size. As mentioned before, the size must *not* be greater than 128 GB.
+ 1. Take a snapshot of the managed disk.
- This can be done either from PowerShell, using the Azure portal, or from within Storage Explorer, as described in [Create a snapshot using the portal or PowerShell](../virtual-machines/windows/snapshot-copy-managed-disk.md).
-1. In Shared Image Gallery, create an image definition and version:
- 1. [Create an image definition](../virtual-machines/windows/shared-images-portal.md#create-an-image-definition).
- 1. You need to also specify here whether you are creating a specialized/generalized image.
-1. Create the lab in Azure Lab Services and select the custom image from the Shared Image Gallery.
+ This can be done either from PowerShell, using the Azure portal, or from within Storage Explorer, as shown in [Create a snapshot using the portal or PowerShell](../virtual-machines/windows/snapshot-copy-managed-disk.md).
- If you expanded disk after the OS was installed on the original Hyper-V VM, you also will need to extend the C drive in Windows to use the unallocated disk space. To do this, log into the template VM after the lab is created, then follow steps similar to what is shown in [Extend a basic volume](/windows-server/storage/disk-management/extend-a-basic-volume). There are options to do this through the UI as well as using PowerShell.
+1. In shared image gallery, create an image definition and version:
+ 1. [Create an image definition](../virtual-machines/windows/shared-images-portal.md#create-an-image-definition).
+ - Choose **Gen 1** for the **VM generation**.
+ - Choose whether you are creating a **specialized** or **generalized** image for the **Operating system state**.
+
+ For more information about the values you can specify for an image definition, see [Image definitions](../virtual-machines/shared-image-galleries.md#image-definitions).
+
+ > [!NOTE]
+ > You can also choose to use an existing image definition and create a new version for your custom image.
+
+1. [Create an image version](../virtual-machines/windows/shared-images-portal.md#create-an-image-version).
+ - The **Version number** property uses the following format: *MajorVersion.MinorVersion.Patch*. When you use Lab Services to create a lab and choose a custom image, the most recent version of the image is automatically used. The most recent version is chosen based on the highest value of MajorVersion, then MinorVersion, then Patch.
+ - For the **Source**, choose **Disks and/or snapshots** from the drop-down list.
+ - For the **OS disk** property, choose the snapshot that you created in previous steps.
+
+    For more information about the values you can specify for an image version, see [Image versions](../virtual-machines/shared-image-galleries.md#image-versions). For a rough Azure CLI sketch of the snapshot and image creation steps, see the example after this procedure.
+
+1. [Create the lab](tutorial-setup-classroom-lab.md) in Lab Services and select the custom image from the shared image gallery.
+
+ If you expanded the disk *after* the OS was installed on the original Hyper-V VM, you also will need to extend the C drive in Windows to use the unallocated disk space:
+ - Log into the lab's template VM and follow steps similar to what is shown in [Extend a basic volume](https://docs.microsoft.com/windows-server/storage/disk-management/extend-a-basic-volume).
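+
+ If you prefer to script the snapshot and shared image gallery steps above, the following Azure CLI commands are a rough sketch under the assumption that the VHD has already been uploaded to a managed disk named `lab-image-disk`; all resource names are placeholders:
+
+ ```azurecli
+ # Snapshot the managed disk that holds the uploaded VHD.
+ az snapshot create \
+     --resource-group LabGalleryRG \
+     --name lab-image-snapshot \
+     --source lab-image-disk
+
+ # Create an image definition for a Generation 1, specialized Windows image.
+ az sig image-definition create \
+     --resource-group LabGalleryRG \
+     --gallery-name LabImageGallery \
+     --gallery-image-definition Win10Lab \
+     --publisher Contoso --offer Windows10 --sku LabBase \
+     --os-type Windows --os-state Specialized \
+     --hyper-v-generation V1
+
+ # Create an image version from the snapshot; Lab Services picks the highest version when a lab is created.
+ az sig image-version create \
+     --resource-group LabGalleryRG \
+     --gallery-name LabImageGallery \
+     --gallery-image-definition Win10Lab \
+     --gallery-image-version 1.0.0 \
+     --os-snapshot lab-image-snapshot
+ ```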
+
+## Bring a Windows custom image from an Azure virtual machine
+Another approach is to set up your Windows image using an [Azure virtual machine](../virtual-machines/windows/overview.md). Using an Azure virtual machine (VM) gives you flexibility to create either a specialized or generalized image to use with your labs. The process to prepare an image from an Azure VM is simpler compared to [creating an image from a VHD](upload-custom-image-shared-image-gallery.md#bring-a-windows-custom-image-from-a-physical-lab-environment) because the image is already prepared to run in Azure.
+
+You will need permission to create an Azure VM in your school's Azure subscription to complete the following steps:
+
+1. Create an Azure VM using the [Azure portal](../virtual-machines/windows/quick-create-portal.md), [PowerShell](../virtual-machines/windows/quick-create-powershell.md), the [Azure CLI](../virtual-machines/windows/quick-create-cli.md), or from an [ARM template](../virtual-machines/windows/quick-create-template.md).
+
+ - When you specify the disk settings, ensure the disk's size is *not* greater than 128 GB.
+
+1. Install software and make any necessary configuration changes to the Azure VM's image.
+
+1. Run [SysPrep](../virtual-machines/windows/capture-image-resource.md#generalize-the-windows-vm-using-sysprep) if you need to create a generalized image. For more information, see [Generalized and specialized images](../virtual-machines/shared-image-galleries.md#generalized-and-specialized-images).
+
+1. In shared image gallery, [create an image definition](../virtual-machines/windows/shared-images-portal.md#create-an-image-definition) or choose an existing image definition.
+ - Choose **Gen 1** for the **VM generation**.
+ - Choose whether you are creating a **specialized** or **generalized** image for the **Operating system state**.
+
+1. [Create an image version](../virtual-machines/windows/shared-images-portal.md#create-an-image-version).
+ - The **Version number** property uses the following format: *MajorVersion.MinorVersion.Patch*.
+ - For the **Source**, choose **Disks and/or snapshots** from the drop-down list.
+ - For the **OS disk** property, choose your Azure VM's disk that you created in previous steps.
+
+1. [Create the lab](tutorial-setup-classroom-lab.md) in Lab Services and select the custom image from the shared image gallery.
+
+You can also import your custom image from an Azure VM to shared image gallery using PowerShell. See the following script and accompanying ReadMe for more information:
+
+- [Bring image to shared image gallery script](https://github.com/Azure/azure-devtestlab/tree/master/samples/ClassroomLabs/Scripts/BringImageToSharedImageGallery/)
## Next steps
-* [Shared Image Gallery overview](../virtual-machines/shared-image-galleries.md)
+* [Shared image gallery overview](../virtual-machines/shared-image-galleries.md)
+* [Attach or detach a shared image gallery](how-to-attach-detach-shared-image-gallery.md)
* [How to use shared image gallery](how-to-use-shared-image-gallery.md)
machine-learning How To Use Automlstep In Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-use-automlstep-in-pipelines.md
Comparing the two techniques:
|`OutputTabularDatasetConfig`| Higher performance |
|| Natural route from `OutputFileDatasetConfig` |
|| Data isn't persisted after pipeline run |
-|| [Notebook showing `OutputTabularDatasetConfig` technique](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/machine-learning-pipelines/nyc-taxi-data-regression-model-building/nyc-taxi-data-regression-model-building.ipynb) |
+|| |
| Registered `Dataset` | Lower performance |
| | Can be generated in many ways |
| | Data persists and is visible throughout workspace |
mysql How To Migrate Rds Mysql Workbench https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/how-to-migrate-rds-mysql-workbench.md
+
+ Title: Migrate Amazon RDS for MySQL to Azure Database for MySQL using the MySQL Workbench
+description: This article describes how to migrate Amazon RDS for MySQL to Azure Database for MySQL by using the MySQL Workbench Migration Wizard.
++++ Last updated : 05/20/2021++
+# Migrate Amazon RDS for MySQL to Azure Database for MySQL using the MySQL Workbench
+
+You can use various utilities, such as MySQL Workbench Export/Import, Azure Database Migration Service (DMS), and MySQL dump and restore, to migrate Amazon RDS for MySQL to Azure Database for MySQL. However, using the MySQL Workbench Migration Wizard provides an easy and convenient way to move your Amazon RDS for MySQL databases to Azure Database for MySQL.
+
+With the Migration Wizard, you can conveniently select which schemas and objects to migrate. It also allows you to view server logs to identify errors and bottlenecks in real time. As a result, you can edit and modify tables or database structures and objects during the migration process when an error is detected, and then resume migration without having to restart from scratch.
+
+> [!NOTE]
+> You can also use the Migration Wizard to migrate other sources, such as Microsoft SQL Server, Oracle, PostgreSQL, MariaDB, etc., which are outside the scope of this article.
+
+## Prerequisites
+
+Before you start the migration process, it's recommended that you ensure that several parameters and features are configured and set up properly, as described below.
+
+- Make sure that the character sets of the source and target databases are the same.
+- Set the wait timeout to a reasonable time depending on the amount of data or workload you want to import or migrate.
+- Set the `max_allowed_packet` parameter to a reasonable value depending on the size of the database you want to import or migrate.
+- Verify that all of your tables use InnoDB, as Azure Database for MySQL Server only supports the InnoDB storage engine. (See the example commands after this list for one way to check and configure these settings.)
+- Remove, replace, or modify all triggers, stored procedures, and other functions containing root user or super user definers (Azure Database for MySQL doesn't support the SUPER privilege). To replace the definers with the name of the admin user that is running the import process, modify them as shown in the following example:
+
+ ```
+ DELIMITER; ;/*!50003 CREATE*/ /*!50017 DEFINER=`root`@`127.0.0.1`*/ /*!50003
+ DELIMITER;
+ /* Modified to */
+ DELIMITER;
+ /*!50003 CREATE*//*!50017 DEFINER=`AdminUserName`@`ServerName`*/ /*!50003
+ DELIMITER;
+
+ ```
+
+- If User Defined Functions (UDFs) are running on your database server, you need to delete the privilege for the mysql database. To determine if any UDFs are running on your server, use the following query:
+
+ ```
+ SELECT * FROM mysql.func;
+ ```
+
+ If you discover that UDFs are running, you can drop the UDFs by using the following query:
+
+ ```
+ DROP FUNCTION your_UDFunction;
+ ```
+
+- Make sure that the server on which the tool is running, and ultimately the export location, has ample disk space and compute power (vCores, CPU, and Memory) to perform the export operation, especially when exporting a very large database.
+- Create a path between the on-premises or AWS instance and Azure Database for MySQL if the workload is behind firewalls or other network security layers.
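+
+Assuming you have the `mysql` client and the Azure CLI available, the following commands are one way to check and configure these prerequisites. This is a hedged sketch: host, user, and resource names are placeholders, and the Azure CLI commands assume an Azure Database for MySQL single server target.
+
+```
+# Compare character set and collation on the source (repeat against the target).
+mysql -h <source-host> -u <user> -p -e "SHOW VARIABLES LIKE 'character_set_server'; SHOW VARIABLES LIKE 'collation_server';"
+
+# List any tables that don't use the InnoDB storage engine.
+mysql -h <source-host> -u <user> -p -e "SELECT table_schema, table_name, engine FROM information_schema.tables WHERE engine <> 'InnoDB' AND table_schema NOT IN ('mysql','sys','information_schema','performance_schema');"
+
+# Raise max_allowed_packet and wait_timeout on the Azure Database for MySQL target via server parameters.
+az mysql server configuration set --resource-group <rg> --server-name <target-server> --name max_allowed_packet --value 536870912
+az mysql server configuration set --resource-group <rg> --server-name <target-server> --name wait_timeout --value 28800
+```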
+
+## Begin the migration process
+
+1. To start the migration process, sign in to MySQL Workbench, and then select the home icon.
+2. In the left-hand navigation bar, select the Migration Wizard icon, as shown in the screenshot below.
+
+ :::image type="content" source="./media/how-to-migrate-rds-mysql-workbench/begin-the-migration.png" alt-text="MySQL Workbench start screen":::
+
+ The **Overview** page of the Migration Wizard is displayed, as shown below.
+
+ :::image type="content" source="./media/how-to-migrate-rds-mysql-workbench/migration-wizard-welcome.png" alt-text="MySQL Workbench Migration Wizard welcome page":::
+
+3. Determine if you have an ODBC driver for MySQL Server installed by selecting **Open ODBC Administrator**.
+
+   In our case, on the **Drivers** tab, you'll notice that there are already two MySQL Server ODBC drivers installed.
+
+ :::image type="content" source="./media/how-to-migrate-rds-mysql-workbench/obdc-administrator-page.png" alt-text="ODBC Data Source Administrator page":::
+
+ If a MySQL ODBC driver isn't installed, use the MySQL Installer you used to install MySQL Workbench to install the driver. For more information about MySQL ODBC driver installation, see the following resources:
+
+ - [MySQL :: MySQL Connector/ODBC Developer Guide :: 4.1 Installing Connector/ODBC on Windows](https://dev.mysql.com/doc/connector-odbc/en/connector-odbc-installation-binary-windows.html)
+   - [ODBC Driver for MySQL: How to Install and Set up Connection (Step-by-step) - {coding}Sight (codingsight.com)](https://codingsight.com/install-and-configure-odbc-drivers-for-mysql/)
+
+4. Close the **ODBC Data Source Administrator** dialog box, and then continue with the migration process.
+
+## Configure source database server connection parameters
+
+1. On the **Overview** page, select **Start Migration**.
+
+ The **Source Selection** page appears. Use this page to provide information about the RDBMS you're migrating from and the parameters for the connection.
+
+2. In the **Database System** field, select **MySQL**.
+3. In the **Stored Connection** field, select one of the saved connection settings for that RDBMS.
+
+ You can save connections by marking the checkbox at the bottom of the page and providing a name of your preference.
+
+4. In the **Connection Method** field, select **Standard TCP/IP**.
+5. In the **Hostname** field, specify the name of your source database server.
+6. In the **Port** field, specify **3306**, and then enter the username and password for connecting to the server.
+7. In the **Database** field, enter the name of the database you want to migrate if you know it; otherwise leave this field blank.
+8. Select **Test Connection** to check the connection to your MySQL Server instance.
+
+   If you've entered the correct parameters, a message appears indicating a successful connection attempt.
+
+ :::image type="content" source="./media/how-to-migrate-rds-mysql-workbench/source-connection-parameters.png" alt-text="Source database connection parameters page":::
+
+9. Select **Next**.
+
+## Configure target database server connection parameters
+
+1. On the **Target Selection** page, set the parameters to connect to your target MySQL Server instance using a process similar to that for setting up the connection to the source server.
+2. To verify a successful connection, select **Test Connection**.
+
+ :::image type="content" source="./media/how-to-migrate-rds-mysql-workbench/target-connection-parameters.png" alt-text="Target database connection parameters page":::
+
+3. Select **Next**.
+
+## Select the schemas to migrate
+
+The Migration Wizard communicates with your MySQL Server instance and fetches a list of schemas from the source server.
+
+1. Select **Show logs** to view this operation.
+
+ The screenshot below shows how the schemas are being retrieved from the source database server.
+
+ :::image type="content" source="./media/how-to-migrate-rds-mysql-workbench/retrieve-schemas.png" alt-text="Fetch schemas list page":::
+
+2. Select **Next** to verify that all the schemas were successfully fetched.
+
+ The screenshot below shows the list of fetched schemas.
+
+ :::image type="content" source="./media/how-to-migrate-rds-mysql-workbench/schemas-selection.png" alt-text="Schemas selection page":::
+
+ You can only migrate schemas that appear in this list.
+
+3. Select the schemas that you want to migrate, and then select **Next**.
+
+## Object migration
+
+Next, specify the object(s) that you want to migrate.
+
+1. Select **Show Selection**, and then, under **Available Objects**, select and add the objects that you want to migrate.
+
+ When you've added the objects, they'll appear under **Objects to Migrate**, as shown in the screenshot below.
+
+ :::image type="content" source="./media/how-to-migrate-rds-mysql-workbench/source-objects.png" alt-text="Source objects selection page":::
+
+   In this scenario, we've selected all table objects.
+
+2. Select **Next**.
+
+## Edit data
+
+In this section, you have the option of editing the objects that you want to migrate.
+
+1. On the **Manual Editing** page, notice the **View** drop-down menu in the top-right corner.
+
+ :::image type="content" source="./media/how-to-migrate-rds-mysql-workbench/manual-editing.png" alt-text="Manual Editing selection page":::
+
+ The **View** drop-down box includes three items:
+
+    - **All Objects** - Displays all objects. With this option, you can manually edit the generated SQL before applying it to the target database server. To do this, select the object, and then select **Show Code and Messages**. You can see (and edit!) the generated MySQL code that corresponds to the selected object.
+    - **Migration problems** - Displays any problems that occurred during the migration, which you can review and verify.
+    - **Column Mapping** - Displays column mapping information. You can use this view to edit the name and columns of the target object.
+
+2. Select **Next**.
+
+## Create the target database
+
+1. Select the **Create schema in target RDBMS** check box.
+
+ You can also choose to keep already existing schemas, so they won't be modified or updated.
+
+ :::image type="content" source="./media/how-to-migrate-rds-mysql-workbench/create-target-database.png" alt-text="Target Creation Options page":::
+
+ In this article, we've chosen to create the schema in target RDBMS, but you can also select the **Create a SQL script file** check box to save the file on your local computer or for other purposes.
+
+2. Select **Next**.
+
+## Run the MySQL script to create the database objects
+
+Since we've elected to create the schema in the target RDBMS, the migrated SQL script will be executed on the target MySQL server. You can view its progress as shown in the screenshot below:
++
+1. After the creation of the schemas and their objects completes, select **Next**.
+
+   On the **Create Target Results** page, you're presented with a list of the objects created and notification of any errors that were encountered while creating them, as shown in the following screenshot.
+
+ :::image type="content" source="./media/how-to-migrate-rds-mysql-workbench/create-target-results.png" alt-text="Create Target Results page":::
+
+2. Review the detail on this page to verify that everything completed as intended.
+
+ For this article, we donΓÇÖt have any errors. If there's no need to address any error messages, you can edit the migration script.
+
+3. In the **Object** box, select the object that you want to edit.
+4. Under **SQL CREATE script for selected object**, modify your SQL script, and then select **Apply** to save the changes.
+5. Select **Recreate Objects** to run the script including your changes.
+
+ If the script fails, you may need to edit the generated script. You can then manually fix the SQL script and run everything again. In this article, weΓÇÖre not changing anything, so weΓÇÖll leave the script as it is.
+   If the script fails, you may need to edit the generated script. You can then manually fix the SQL script and run everything again. In this article, we're not changing anything, so we'll leave the script as it is.
+6. Select **Next**.
+
+## Transfer data
+
+This part of the process moves data from the source MySQL Server database instance into your newly created target MySQL database instance. Use the **Data Transfer Setup** page to configure this process.
++
+This page provides options for setting up the data transfer. For the purposes of this article, we'll accept the default values.
+
+1. To begin the actual process of transferring data, select **Next**.
+
+ The progress of the data transfer process appears as shown in the following screenshot.
+
+ :::image type="content" source="./media/how-to-migrate-rds-mysql-workbench/bulk-data-transfer.png" alt-text="Bulk Data Transfer page":::
+
+   Note that the data transfer process will take some time to complete.
+
+2. After the transfer completes, select **Next**.
+
+ The **Migration Report** page appears, providing a report summarizing the whole process, as shown on the screenshot below:
+
+ :::image type="content" source="./media/how-to-migrate-rds-mysql-workbench/migration-report.png" alt-text="Migration Progress Report page":::
+
+3. Select **Finish** to close the Migration Wizard.
+
+ The migration is now successfully completed.
+
+## Verify consistency of the migrated schemas and tables
+
+1. Next, log into your MySQL target database instance to verify that the migrated schemas and tables are consistent with your MySQL source database.
+
+ In our case, you can see that all schemas (sakila, moda, items, customer, clothes, world, and world_x) from the Amazon RDS for MySQL: **MyjolieDB** database have been successfully migrated to the Azure Database for MySQL: **azmysql** instance.
+
+2. To verify the table and rows counts, run the following query on both instances:
+
+   `SELECT COUNT(*) FROM sakila.actor;`
+
+ You can see from the screenshot below that the row count for Amazon RDS MySQL is 200, which matches the Azure Database for MySQL instance.
+
+ :::image type="content" source="./media/how-to-migrate-rds-mysql-workbench/table-row-size-source.png" alt-text="Table and Row size source database":::
+
+ :::image type="content" source="./media/how-to-migrate-rds-mysql-workbench/table-row-size-target.png" alt-text="Table and Row size target database":::
+
+   While you can run the above query on every single schema and table, that will be quite a bit of work if you're dealing with hundreds of thousands or even millions of tables. You can use the queries below to verify the schema (database) and table size instead.
+
+3. To check the database size, run the following query:
+
+ ```
+ SELECT table_schema AS "Database",
+ ROUND(SUM(data_length + index_length) / 1024 / 1024, 2) AS "Size (MB)"
+ FROM information_schema.TABLES
+ GROUP BY table_schema;
+ ```
+
+4. To check the table size, run the following query:
+
+ ```
+ SELECT table_name AS "Table",
+ ROUND(((data_length + index_length) / 1024 / 1024), 2) AS "Size (MB)"
+ FROM information_schema.TABLES
+ WHERE table_schema = "database_name"
+ ORDER BY (data_length + index_length) DESC;
+ ```
+
+   You can see from the screenshots below that the schema (database) size of the source Amazon RDS MySQL instance is the same as that of the target Azure Database for MySQL instance.
+
+ :::image type="content" source="./media/how-to-migrate-rds-mysql-workbench/database-size-source.png" alt-text="Database size source database":::
+
+ :::image type="content" source="./media/how-to-migrate-rds-mysql-workbench/database-size-target.png" alt-text="Database size target database":::
+
+ Since the schema (database) sizes are the same in both instances, it's not really necessary to check individual table sizes. In any case, you can always use the above query to check your table sizes, as necessary.
+
+   You've now confirmed that your migration completed successfully.
+
+## Next steps
+
+- For more information about migrating databases to Azure Database for MySQL, see the [Database Migration Guide](https://github.com/Azure/azure-mysql/tree/master/MigrationGuide).
+- View the video [Easily migrate MySQL/PostgreSQL apps to Azure managed service](https://medius.studios.ms/Embed/Video/THR2201?sid=THR2201), which contains a demo showing how to migrate MySQL apps to Azure Database for MySQL.
service-bus-messaging Migrate Jms Activemq To Servicebus https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/migrate-jms-activemq-to-servicebus.md
Service Bus enables various enterprise security and high availability features.
For each Service Bus namespace, you publish metrics onto Azure Monitor. You can use these metrics for alerting and dynamic scaling of resources allocated to the namespace.
-For more information about the different metrics and how to set up alerts on them, see [Service Bus metrics in Azure Monitor](service-bus-metrics-azure-monitor.md). You can also find out more about [client side tracing for data operations](service-bus-end-to-end-tracing.md) and [operational/diagnostic logging for management operations](service-bus-diagnostic-logs.md).
+For more information about the different metrics and how to set up alerts on them, see [Service Bus metrics in Azure Monitor](monitor-service-bus-reference.md). You can also find out more about [client side tracing for data operations](service-bus-end-to-end-tracing.md) and [operational/diagnostic logging for management operations](service-bus-diagnostic-logs.md).
### Metrics - New Relic
service-bus-messaging Monitor Service Bus Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/monitor-service-bus-reference.md
+
+ Title: Monitoring Azure Service Bus data reference
+description: Important reference material needed when you monitor Azure Service Bus.
++ Last updated : 05/18/2021+++
+# Monitoring Azure Service Bus data reference
+See [Monitoring Azure Service Bus](monitor-service-bus.md) for details on collecting and analyzing monitoring data for Azure Service Bus.
+
+## Metrics
+This section lists all the automatically collected platform metrics collected for Azure Service Bus. The resource provider for these metrics is **Microsoft.ServiceBus/namespaces**.
+
+### Request metrics
+Counts the number of data and management operations requests.
+
+| Metric Name | Description |
+| - | -- |
+| Incoming Requests| The number of requests made to the Service Bus service over a specified period. <br/><br/> Unit: Count <br/> Aggregation Type: Total <br/> Dimension: Entity name|
+|Successful Requests|The number of successful requests made to the Service Bus service over a specified period.<br/><br/> Unit: Count <br/> Aggregation Type: Total <br/> Dimension: Entity name|
+|Server Errors|The number of requests not processed because of an error in the Service Bus service over a specified period.<br/><br/> Unit: Count <br/> Aggregation Type: Total <br/> Dimension: Entity name|
+|User Errors |The number of requests not processed because of user errors over a specified period.<br/><br/> Unit: Count <br/> Aggregation Type: Total <br/> Dimension: Entity name|
+|Throttled Requests|The number of requests that were throttled because the usage was exceeded.<br/><br/> Unit: Count <br/> Aggregation Type: Total <br/> Dimension: Entity name|
+
+The following two types of errors are classified as **user errors**:
+
+1. Client-side errors (in HTTP, these would be 400 errors).
+2. Errors that occur while processing messages, such as [MessageLockLostException](/dotnet/api/microsoft.azure.servicebus.messagelocklostexception).
++
+### Message metrics
+
+| Metric Name | Description |
+| - | -- |
+|Incoming Messages|The number of events or messages sent to Service Bus over a specified period. This metric doesn't include messages that are auto forwarded.<br/><br/> Unit: Count <br/> Aggregation Type: Total <br/> Dimension: Entity name|
+|Outgoing Messages|The number of events or messages received from Service Bus over a specified period.<br/><br/> Unit: Count <br/> Aggregation Type: Total <br/> Dimension: Entity name|
+| Messages| Count of messages in a queue/topic. <br/><br/> Unit: Count <br/> Aggregation Type: Average <br/> Dimension: Entity name |
+| Active Messages| Count of active messages in a queue/topic. <br/><br/> Unit: Count <br/> Aggregation Type: Average <br/> Dimension: Entity name |
+| Dead-lettered messages| Count of dead-lettered messages in a queue/topic. <br/><br/> Unit: Count <br/> Aggregation Type: Average <br/>Dimension: Entity name |
+| Scheduled messages| Count of scheduled messages in a queue/topic. <br/><br/> Unit: Count <br/> Aggregation Type: Average <br/> Dimension: Entity name |
+| Completed Messages| Count of completed messages in a queue/topic. <br/><br/> Unit: Count <br/> Aggregation Type: Average <br/> Dimension: Entity name |
+| Abandoned Messages| Count of abandoned messages in a queue/topic. <br/><br/> Unit: Count <br/> Aggregation Type: Average <br/> Dimension: Entity name |
+| Size | Size of an entity (queue or topic) in bytes. <br/><br/>Unit: Count <br/>Aggregation Type: Average <br/>Dimension: Entity name |
+
+> [!NOTE]
+> The values for the Messages, Active Messages, Dead-lettered Messages, Scheduled Messages, Completed Messages, and Abandoned Messages metrics are point-in-time values. Incoming messages that were consumed immediately after that point in time may not be reflected in these metrics.
+
+### Connection metrics
+
+| Metric Name | Description |
+| - | -- |
+|Active Connections|The number of active connections on a namespace and on an entity in the namespace. Value for this metric is a point-in-time value. Connections that were active immediately after that point-in-time may not be reflected in the metric.<br/><br/> Unit: Count <br/> Aggregation Type: Total <br/> Dimension: Entity name|
+|Connections Opened |The number of open connections.<br/><br/> Unit: Count <br/> Aggregation Type: Total <br/> Dimension: Entity name|
+|Connections Closed |The number of closed connections.<br/><br/> Unit: Count <br/> Aggregation Type: Total <br/> Dimension: Entity name|
+
+### Resource usage metrics
+
+> [!NOTE]
+> The following metrics are available only with the **premium** tier.
+>
+> The important metrics to monitor for any outages for a premium tier namespace are: **CPU usage per namespace** and **memory size per namespace**. [Set up alerts](../azure-monitor/alerts/alerts-metric.md) for these metrics using Azure Monitor.
+>
+> Another metric to monitor is **throttled requests**. Throttling shouldn't be an issue as long as the namespace stays within its memory, CPU, and brokered connection limits. For more information, see [Throttling in Azure Service Bus Premium tier](service-bus-throttling.md#throttling-in-azure-service-bus-premium-tier).
+
+| Metric Name | Description |
+| - | -- |
+|CPU usage per namespace|The percentage CPU usage of the namespace.<br/><br/> Unit: Percent <br/> Aggregation Type: Maximum <br/> Dimension: Entity name|
+|Memory size usage per namespace|The percentage memory usage of the namespace.<br/><br/> Unit: Percent <br/> Aggregation Type: Maximum <br/> Dimension: Entity name|
+
+## Metric dimensions
+
+Azure Service Bus supports the following dimensions for metrics in Azure Monitor. Adding dimensions to your metrics is optional. If you don't add dimensions, metrics are specified at the namespace level.
+
+|Dimension name|Description|
+| - | -- |
+|Entity Name| The name of the messaging entity (queue or topic) under the namespace.|
+
+## Resource logs
+This section lists the types of resource logs you can collect for Azure Service Bus.
+
+- Operational logs
+- Virtual network and IP filtering logs
+
+### Operational logs
+Operational log entries include elements listed in the following table:
+
+| Name | Description |
+| - | - |
+| ActivityId | Internal ID, used to identify the specified activity |
+| EventName | Operation name |
+| ResourceId | Azure Resource Manager resource ID |
+| SubscriptionId | Subscription ID |
+| EventTimeString | Operation time |
+| EventProperties | Operation properties |
+| Status | Operation status |
+| Caller | Caller of operation (the Azure portal or management client) |
+| Category | OperationalLogs |
+
+Here's an example of an operational log JSON string:
+
+```json
+{
+ "ActivityId": "0000000000-0000-0000-0000-00000000000000",
+ "EventName": "Create Queue",
+ "resourceId": "/SUBSCRIPTIONS/<AZURE SUBSCRPTION ID>/RESOURCEGROUPS/<RESOURCE GROUP NAME>/PROVIDERS/MICROSOFT.SERVICEBUS/NAMESPACES/<SERVICE BUS NAMESPACE NAME>",
+ "SubscriptionId": "0000000000-0000-0000-0000-00000000000000",
+ "EventTimeString": "9/28/2016 8:40:06 PM +00:00",
+ "EventProperties": "{\"SubscriptionId\":\"0000000000-0000-0000-0000-00000000000000\",\"Namespace\":\"mynamespace\",\"Via\":\"https://mynamespace.servicebus.windows.net/f8096791adb448579ee83d30e006a13e/?api-version=2016-07\",\"TrackingId\":\"5ee74c9e-72b5-4e98-97c4-08a62e56e221_G1\"}",
+ "Status": "Succeeded",
+ "Caller": "ServiceBus Client",
+ "category": "OperationalLogs"
+}
+```
+
+### Events and operations captured in operational logs
+Operational logs capture all management operations performed on the Azure Service Bus namespace. Data operations aren't captured because of the high volume of data operations conducted on Azure Service Bus.
+
+> [!NOTE]
+> To help you better track data operations, we recommend using client-side tracing.
+
+The following management operations are captured in operational logs:
+
+| Scope | Operation|
+|-| -- |
+| Namespace | <ul> <li> Create Namespace</li> <li> Update Namespace </li> <li> Delete Namespace </li> <li> Update Namespace SharedAccess Policy </li> </ul> |
+| Queue | <ul> <li> Create Queue</li> <li> Update Queue</li> <li> Delete Queue </li> <li> AutoDelete Delete Queue </li> </ul> |
+| Topic | <ul> <li> Create Topic </li> <li> Update Topic </li> <li> Delete Topic </li> <li> AutoDelete Delete Topic </li> </ul> |
+| Subscription | <ul> <li> Create Subscription </li> <li> Update Subscription </li> <li> Delete Subscription </li> <li> AutoDelete Delete Subscription </li> </ul> |
+
+> [!NOTE]
+> Currently, *Read* operations aren't tracked in the operational logs.
+
+## Azure Monitor Logs tables
+Azure Service Bus uses Kusto tables from Azure Monitor Logs. You can query these tables with Log Analytics. For a list of Kusto tables the service uses, see [Azure Monitor Logs table reference](/azure/azure-monitor/reference/tables/tables-resourcetype#service-bus).
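+
+For example, if you route the operational logs to a Log Analytics workspace, you can query the **AzureDiagnostics** table with Kusto. The following query is a minimal sketch that lists the management operations recorded over the last day; it uses only columns (`EventName_s`, `Category`, `_ResourceId`) that appear in Service Bus operational log records.
+
+```kusto
+// Management operations recorded for Service Bus namespaces in the last day.
+AzureDiagnostics
+| where TimeGenerated > ago(1d)
+| where ResourceProvider == "MICROSOFT.SERVICEBUS"
+| where Category == "OperationalLogs"
+| project TimeGenerated, EventName_s, _ResourceId
+| order by TimeGenerated desc
+```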
++
+## Next steps
+- For details on monitoring Azure Service Bus, see [Monitoring Azure Service Bus](monitor-service-bus.md).
+- For details on monitoring Azure resources, see [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md).
service-bus-messaging Monitor Service Bus https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/monitor-service-bus.md
+
+ Title: Monitoring Azure Service Bus
+description: Learn how to use Azure Monitor to view, analyze, and create alerts on metrics from Azure Service Bus.
++ Last updated : 05/18/2021++
+# Monitor Azure Service Bus
+When you have critical applications and business processes relying on Azure resources, you want to monitor those resources for their availability, performance, and operation. This article describes the monitoring data generated by Azure Service Bus and how to analyze and alert on this data with Azure Monitor.
+
+## What is Azure Monitor?
+Azure Service Bus creates monitoring data using [Azure Monitor](../azure-monitor/overview.md), which is a full stack monitoring service in Azure. Azure Monitor provides a complete set of features to monitor your Azure resources. It can also monitor resources in other clouds and on-premises.
+
+Start with the article [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md), which describes the following concepts:
+
+- What is Azure Monitor?
+- Costs associated with monitoring
+- Monitoring data collected in Azure
+- Configuring data collection
+- Standard tools in Azure for analyzing and alerting on monitoring data
+
+The following sections build on this article by describing the specific data gathered for Azure Service Bus. These sections also provide examples for configuring data collection and analyzing this data with Azure tools.
+
+> [!TIP]
+> To understand costs associated with Azure Monitor, see [Usage and estimated costs](../azure-monitor/usage-estimated-costs.md). To understand the time it takes for your data to appear in Azure Monitor, see [Log data ingestion time](../azure-monitor/logs/data-ingestion-time.md).
+
+## Monitoring data from Azure Service Bus
+Azure Service Bus collects the same kinds of monitoring data as other Azure resources that are described in [Monitoring data from Azure resources](../azure-monitor/essentials/monitor-azure-resource.md#monitoring-data).
+
+See [Azure Service Bus monitoring data reference](monitor-service-bus-reference.md) for a detailed reference of the logs and metrics created by Azure Service Bus.
+
+## Collection and routing
+Platform metrics and the activity log are collected and stored automatically, but can be routed to other locations by using a diagnostic setting.
+
+Resource Logs aren't collected and stored until you create a diagnostic setting and route them to one or more locations.
+
+See [Create diagnostic setting to collect platform logs and metrics in Azure](../azure-monitor/essentials/diagnostic-settings.md) for the detailed process for creating a diagnostic setting using the Azure portal, CLI, or PowerShell. When you create a diagnostic setting, you specify which categories of logs to collect. The categories for Azure Service Bus are listed in [Azure Service Bus monitoring data reference](monitor-service-bus-reference.md#resource-logs).
+
+### Azure Storage
+The diagnostic logging information is stored in containers named **insights-logs-operationallogs** and **insights-metrics-pt1m**.
+
+Sample URL for an operation log: `https://<Azure Storage account>.blob.core.windows.net/insights-logs-operationallogs/resourceId=/SUBSCRIPTIONS/<Azure subscription ID>/RESOURCEGROUPS/<Resource group name>/PROVIDERS/MICROSOFT.SERVICEBUS/NAMESPACES/<Namespace name>/y=<YEAR>/m=<MONTH-NUMBER>/d=<DAY-NUMBER>/h=<HOUR>/m=<MINUTE>/PT1H.json`. The URL for a metric log is similar.
+
+### Azure Event Hubs
+The diagnostic logging information is stored in event hubs named **insights-logs-operationlogs** and **insights-metrics-pt1m**. You can also select your own event hub.
+
+### Log Analytics
+The diagnostic logging information is stored in tables named **AzureDiagnostics** and **AzureMetrics**.
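+
+As a quick check that data is arriving, you can run a Kusto query such as the following minimal sketch, which counts the Service Bus records collected in each log category over the last day:
+
+```kusto
+// Count Service Bus diagnostic records per category.
+AzureDiagnostics
+| where TimeGenerated > ago(1d)
+| where ResourceProvider == "MICROSOFT.SERVICEBUS"
+| summarize count() by Category
+```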
+
+### Sample operational log output (formatted)
+
+```json
+{
+ "Environment": "PROD",
+ "Region": "East US",
+ "ScaleUnit": "PROD-BL2-002",
+ "ActivityId": "a097a88a-33e5-4c9c-9c64-20f506ec1375",
+ "EventName": "Retrieve Namespace",
+ "resourceId": "/SUBSCRIPTIONS/<Azure subscription ID>/RESOURCEGROUPS/SPSBUS0213RG/PROVIDERS/MICROSOFT.SERVICEBUS/NAMESPACES/SPSBUS0213NS",
+ "SubscriptionId": "<Azure subscription ID>",
+ "EventTimeString": "5/18/2021 3:25:55 AM +00:00",
+ "EventProperties": "{\"SubscriptionId\":\"<Azure subscription ID>\",\"Namespace\":\"spsbus0213ns\",\"Via\":\"https://spsbus0213ns.servicebus.windows.net/$Resources/topics?api-version=2017-04&$skip=0&$top=100\",\"TrackingId\":\"a097a88a-33e5-4c9c-9c64-20f506ec1375_M8CH3_M8CH3_G8\"}",
+ "Status": "Succeeded",
+ "Caller": "rpfrontdoor",
+ "category": "OperationalLogs"
+}
+```
+
+### Sample metric log output (formatted)
+
+```json
+{
+ "count": 1,
+ "total": 4,
+ "minimum": 4,
+ "maximum": 4,
+ "average": 4,
+ "resourceId": "/SUBSCRIPTIONS/<Azure subscription ID>/RESOURCEGROUPS/SPSBUS0213RG/PROVIDERS/MICROSOFT.SERVICEBUS/NAMESPACES/SPSBUS0213NS",
+ "time": "2021-05-18T03:27:00.0000000Z",
+ "metricName": "IncomingMessages",
+ "timeGrain": "PT1M"
+}
+```
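+
+When metrics are routed to a Log Analytics workspace, records like the one above land in the **AzureMetrics** table. The following query is a minimal sketch that charts hourly incoming messages; `IncomingMessages` is the metric name shown in the sample record, and `MetricName`, `Total`, and `ResourceId` are standard **AzureMetrics** columns.
+
+```kusto
+// Hourly total of the IncomingMessages metric for Service Bus namespaces.
+AzureMetrics
+| where TimeGenerated > ago(1d)
+| where ResourceId contains "MICROSOFT.SERVICEBUS"
+| where MetricName == "IncomingMessages"
+| summarize IncomingMessages = sum(Total) by bin(TimeGenerated, 1h)
+| render timechart
+```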
+
+> [!IMPORTANT]
+> Enabling these settings requires additional Azure services (storage account, event hub, or Log Analytics), which may increase your cost. To calculate an estimated cost, visit the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator).
+
+> [!NOTE]
+> When you enable metrics in a diagnostic setting, dimension information is not currently included as part of the information sent to a storage account, an event hub, or Log Analytics.
+
+The metrics and logs you can collect are discussed in the following sections.
+
+## Analyzing metrics
+You can analyze metrics for Azure Service Bus, along with metrics from other Azure services, by selecting **Metrics** from the **Azure Monitor** section on the home page for your Service Bus namespace. See [Getting started with Azure Metrics Explorer](../azure-monitor/essentials/metrics-getting-started.md) for details on using this tool. For a list of the platform metrics collected, see [Monitoring Azure Service Bus data reference metrics](monitor-service-bus-reference.md#metrics).
+
+![Metrics Explorer with Service Bus namespace selected](./media/monitor-service-bus/metrics.png)
+
+For reference, you can see a list of [all resource metrics supported in Azure Monitor](../azure-monitor/essentials/metrics-supported.md).
+
+> [!TIP]
+> Azure Monitor metrics data is available for 90 days. However, when you create charts, only 30 days can be visualized. For example, to visualize a 90-day period, you must break it into three 30-day charts.
+
+### Filtering and splitting
+For metrics that support dimensions, you can apply filters using a dimension value. For example, add a filter with `EntityName` set to the name of a queue or a topic. You can also split a metric by dimension to visualize how different segments of the metric compare with each other. For more information about filtering and splitting, see [Advanced features of Azure Monitor](../azure-monitor/essentials/metrics-charts.md).
+
+## Analyzing logs
+Using Azure Monitor Log Analytics requires you to create a diagnostic configuration and enable __Send information to Log Analytics__. For more information, see the [Collection and routing](#collection-and-routing) section. Data in Azure Monitor Logs is stored in tables, with each table having its own set of unique properties. Azure Service Bus stores data in the following tables: **AzureDiagnostics** and **AzureMetrics**.
+
+> [!IMPORTANT]
+> When you select **Logs** from the Azure Service Bus menu, Log Analytics is opened with the query scope set to the current Azure Service Bus namespace. This means that log queries will only include data from that resource. If you want to run a query that includes data from other workspaces or data from other Azure services, select **Logs** from the **Azure Monitor** menu. See [Log query scope and time range in Azure Monitor Log Analytics](../azure-monitor/logs/scope.md) for details.
+
+For a detailed reference of the logs and metrics, see [Azure Service Bus monitoring data reference](monitor-service-bus-reference.md).
+
+### Sample Kusto queries
+
+Following are sample queries that you can use to help you monitor your Azure Service Bus resources:
+++ Get management operations in the last 7 days. +
+ ```Kusto
+ AzureDiagnostics
+ | where TimeGenerated > ago(7d)
+ | where ResourceProvider =="MICROSOFT.SERVICEBUS"
+ | where Category == "OperationalLogs"
+ | summarize count() by EventName_s, _ResourceId
+ ```
+++ Get access attempts to a key vault that resulted in "key not found" error.+
+ ```Kusto
+ AzureDiagnostics
+ | where ResourceProvider == "MICROSOFT.SERVICEBUS"
+ | where Category == "Error" and OperationName == "wrapkey"
+ | project Message, _ResourceId
+ ```
+++ Get errors from the past 7 days+
+ ```Kusto
+ AzureDiagnostics
+ | where TimeGenerated > ago(7d)
+ | where ResourceProvider =="MICROSOFT.SERVICEBUS"
+ | where Category == "Error"
+ | summarize count() by EventName_s, _ResourceId
+ ```
+++ Get operations performed with a key vault to disable or restore the key.+
+ ```Kusto
+ AzureDiagnostics
+ | where ResourceProvider == "MICROSOFT.SERVICEBUS"
+ | where (Category == "info" and (OperationName == "disable" or OperationName == "restore"))
+ | project Message, _ResourceId
+ ```
+++ Get all the entities that have been autodeleted+
+ ```kusto
+ AzureDiagnostics
+ | where ResourceProvider == "MICROSOFT.SERVICEBUS"
+ | where Category == "OperationalLogs"
+ | where EventName_s startswith "AutoDelete"
+ | summarize count() by EventName_s, _ResourceId
+ ```
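+
++ Get the daily count of management operations over the last 30 days. The query below is a minimal sketch that reuses only the columns already used in the queries above.
+
+    ```kusto
+    AzureDiagnostics
+    | where TimeGenerated > ago(30d)
+    | where ResourceProvider == "MICROSOFT.SERVICEBUS"
+    | where Category == "OperationalLogs"
+    | summarize DailyOperations = count() by bin(TimeGenerated, 1d)
+    ```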
+
+## Alerts
+You can access alerts for Azure Service Bus by selecting **Alerts** from the **Azure Monitor** section on the home page for your Service Bus namespace. See [Create, view, and manage metric alerts using Azure Monitor](../azure-monitor/alerts/alerts-metric.md) for details on creating alerts.
++
+## Next steps
+
+- For a reference of the logs and metrics, see [Monitoring Azure Service Bus data reference](monitor-service-bus-reference.md).
+- For details on monitoring Azure resources, see [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md).
service-bus-messaging Service Bus Azure And Service Bus Queues Compared Contrasted https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/service-bus-azure-and-service-bus-queues-compared-contrasted.md
This section compares advanced capabilities provided by Storage queues and Servi
| Poison message support |**Yes** |**Yes** | | In-place update |**Yes** |**Yes** | | Server-side transaction log |**Yes** |**No** |
-| Storage metrics |**Yes**<br/><br/>**Minute Metrics** provides real-time metrics for availability, TPS, API call counts, error counts, and more. They're all in real time, aggregated per minute and reported within a few minutes from what just happened in production. For more information, see [About Storage Analytics Metrics](/rest/api/storageservices/fileservices/About-Storage-Analytics-Metrics). |**Yes**<br/><br/>For information about metrics supported by Azure Service Bus, see [Message metrics](service-bus-metrics-azure-monitor.md#message-metrics). |
+| Storage metrics |**Yes**<br/><br/>**Minute Metrics** provides real-time metrics for availability, TPS, API call counts, error counts, and more. They're all in real time, aggregated per minute and reported within a few minutes from what just happened in production. For more information, see [About Storage Analytics Metrics](/rest/api/storageservices/fileservices/About-Storage-Analytics-Metrics). |**Yes**<br/><br/>For information about metrics supported by Azure Service Bus, see [Message metrics](monitor-service-bus-reference.md#message-metrics). |
| State management |**No** |**Yes** (Active, Disabled, SendDisabled, ReceiveDisabled. For details on these states, see [Queue status](entity-suspend.md#queue-status)) | | Message autoforwarding |**No** |**Yes** | | Purge queue function |**Yes** |**No** |
service-bus-messaging Service Bus Metrics Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/service-bus-metrics-azure-monitor.md
- Title: Azure Service Bus metrics in Azure Monitor| Microsoft Docs
-description: This article explains how to use Azure Monitor to monitor Service Bus entities (queues, topics, and subscriptions).
- Previously updated : 02/12/2021--
-# Azure Service Bus metrics in Azure Monitor
-
-Service Bus metrics give you the state of resources in your Azure subscription. With a rich set of metrics data, you can assess the overall health of your Service Bus resources, not only at the namespace level, but also at the entity level. These statistics can be important as they help you to monitor the state of Service Bus. Metrics can also help troubleshoot root-cause issues without needing to contact Azure support.
-
-Azure Monitor provides unified user interfaces for monitoring across various Azure services. For more information, see [Monitoring in Microsoft Azure](../azure-monitor/overview.md) and the [Retrieve Azure Monitor metrics with .NET](https://github.com/Azure-Samples/monitor-dotnet-metrics-api) sample on GitHub.
-
-> [!IMPORTANT]
-> When there has not been any interaction with an entity for 2 hours, the metrics will start showing "0" as a value until the entity is no longer idle.
-
-## Access metrics
-
-Azure Monitor provides multiple ways to access metrics. You can either access metrics through the [Azure portal](https://portal.azure.com), or use the Azure Monitor APIs (REST and .NET) and analysis solutions such as Azure Monitor logs and Event Hubs. For more information, see [Metrics in Azure Monitor](../azure-monitor/essentials/data-platform-metrics.md).
-
-Metrics are enabled by default, and you can access the most recent 30 days of data. If you need to keep data for a longer period of time, you can archive metrics data to an Azure Storage account. This value is configured in [diagnostic settings](../azure-monitor/essentials/diagnostic-settings.md) in Azure Monitor.
-
-## Access metrics in the portal
-
-You can monitor metrics over time in the [Azure portal](https://portal.azure.com). The following example shows how to view successful requests and incoming requests at the account level:
-
-![Screenshot of the Monitor - Metrics (preview) page in the Azure portal.][1]
-
-You can also access metrics directly via the namespace. To do so, select your namespace and then select **Metrics**. To display metrics filtered to the scope of the entity, select the entity and then select **Metrics**.
-
-![Screenshot of the Monitor - Metrics (preview) page filtered to the scope of the entity.][2]
-
-For metrics supporting dimensions, you must filter with the desired dimension value.
-
-## Billing
-
-Metrics and Alerts on Azure Monitor are charged on a per alert basis. These charges should be available on the portal when the alert is set up and before it's saved.
-
-Additional solutions that ingest metrics data are billed directly by those solutions. For example, you're billed by Azure Storage if you archive metrics data to an Azure Storage account. you're also billed by Log Analytics if you stream metrics data to Log Analytics for advanced analysis.
-
-The following metrics give you an overview of the health of your service.
-
-> [!NOTE]
-> We are deprecating several metrics as they are moved under a different name. This might require you to update your references. Metrics marked with the "deprecated" keyword will not be supported going forward.
-
-All metrics values are sent to Azure Monitor every minute. The time granularity defines the time interval for which metrics values are presented. The supported time interval for all Service Bus metrics is 1 minute.
-
-## Request metrics
-
-Counts the number of data and management operations requests.
-
-| Metric Name | Description |
-| - | -- |
-| Incoming Requests| The number of requests made to the Service Bus service over a specified period. <br/><br/> Unit: Count <br/> Aggregation Type: Total <br/> Dimension: Entity name|
-|Successful Requests|The number of successful requests made to the Service Bus service over a specified period.<br/><br/> Unit: Count <br/> Aggregation Type: Total <br/> Dimension: Entity name|
-|Server Errors|The number of requests not processed because of an error in the Service Bus service over a specified period.<br/><br/> Unit: Count <br/> Aggregation Type: Total <br/> Dimension: Entity name|
-|User Errors (see the following subsection)|The number of requests not processed because of user errors over a specified period.<br/><br/> Unit: Count <br/> Aggregation Type: Total <br/> Dimension: Entity name|
-|Throttled Requests|The number of requests that were throttled because the usage was exceeded.<br/><br/> Unit: Count <br/> Aggregation Type: Total <br/> Dimension: Entity name|
-
-### User errors
-
-The following two types of errors are classified as user errors:
-
-1. Client-side errors (In HTTP that would be 400 errors).
-2. Errors that occur while processing messages, such as [MessageLockLostException](/dotnet/api/microsoft.azure.servicebus.messagelocklostexception).
--
-## Message metrics
-
-| Metric Name | Description |
-| - | -- |
-|Incoming Messages|The number of events or messages sent to Service Bus over a specified period. This metric doesn't include messages that are auto forwarded.<br/><br/> Unit: Count <br/> Aggregation Type: Total <br/> Dimension: Entity name|
-|Outgoing Messages|The number of events or messages received from Service Bus over a specified period.<br/><br/> Unit: Count <br/> Aggregation Type: Total <br/> Dimension: Entity name|
-| Messages| Count of messages in a queue/topic. <br/><br/> Unit: Count <br/> Aggregation Type: Average <br/> Dimension: Entity name |
-| Active Messages| Count of active messages in a queue/topic. <br/><br/> Unit: Count <br/> Aggregation Type: Average <br/> Dimension: Entity name |
-| Dead-lettered messages| Count of dead-lettered messages in a queue/topic. <br/><br/> Unit: Count <br/> Aggregation Type: Average <br/>Dimension: Entity name |
-| Scheduled messages| Count of scheduled messages in a queue/topic. <br/><br/> Unit: Count <br/> Aggregation Type: Average <br/> Dimension: Entity name |
-| Completed Messages| Count of completed messages in a queue/topic. <br/><br/> Unit: Count <br/> Aggregation Type: Average <br/> Dimension: Entity name |
-| Abandoned Messages| Count of abandoned messages in a queue/topic. <br/><br/> Unit: Count <br/> Aggregation Type: Average <br/> Dimension: Entity name |
-| Size | Size of an entity (queue or topic) in bytes. <br/><br/>Unit: Count <br/>Aggregation Type: Average <br/>Dimension: Entity name |
-
-> [!NOTE]
-> Values for messages, active, dead-lettered, scheduled, completed, and abandoned messages are point-in-time values. Incoming messages that were consumed immediately after that point-in-time may not be reflected in these metrics.
-
-## Connection metrics
-
-| Metric Name | Description |
-| - | -- |
-|Active Connections|The number of active connections on a namespace and on an entity in the namespace. Value for this metric is a point-in-time value. Connections that were active immediately after that point-in-time may not be reflected in the metric.<br/><br/> Unit: Count <br/> Aggregation Type: Total <br/> Dimension: Entity name|
-|Connections Opened |The number of open connections.<br/><br/> Unit: Count <br/> Aggregation Type: Total <br/> Dimension: Entity name|
-|Connections Closed |The number of closed connections.<br/><br/> Unit: Count <br/> Aggregation Type: Total <br/> Dimension: Entity name|
-
-## Resource usage metrics
-
-> [!NOTE]
-> The following metrics are available only with the **premium** tier.
->
-> The important metrics to monitor for any outages for a premium tier namespace are: **CPU usage per namespace** and **memory size per namespace**. [Set up alerts](../azure-monitor/alerts/alerts-metric.md) for these metrics using Azure Monitor.
->
-> The other metric you could monitor is: **throttled requests**. It shouldn't be an issue though as long as the namespace stays within its memory, CPU, and brokered connections limits. For more information, see [Throttling in Azure Service Bus Premium tier](service-bus-throttling.md#throttling-in-azure-service-bus-premium-tier)
-
-| Metric Name | Description |
-| - | -- |
-|CPU usage per namespace|The percentage CPU usage of the namespace.<br/><br/> Unit: Percent <br/> Aggregation Type: Maximum <br/> Dimension: Entity name|
-|Memory size usage per namespace|The percentage memory usage of the namespace.<br/><br/> Unit: Percent <br/> Aggregation Type: Maximum <br/> Dimension: Entity name|
-
-## Metrics dimensions
-
-Azure Service Bus supports the following dimensions for metrics in Azure Monitor. Adding dimensions to your metrics is optional. If you don't add dimensions, metrics are specified at the namespace level.
-
-|Dimension name|Description|
-| - | -- |
-|Entity Name| Service Bus supports messaging entities under the namespace.|
-
-## Set up alerts on metrics
-
-1. On the **Metrics** tab of the **Service Bus Namespace** page, select **Configure alerts**.
-
- ![Metrics page - Configure alerts menu](./media/service-bus-metrics-azure-monitor/metrics-page-configure-alerts-menu.png)
-2. Select the **Select target** option, and do the following actions on the **Select a resource** page:
- 1. Select **Service Bus Namespaces** for the **Filter by resource type** field.
- 2. Select your subscription for the **Filter by subscription** field.
- 3. Select the **service bus namespace** from the list.
- 4. Select **Done**.
-
- ![Select namespace](./media/service-bus-metrics-azure-monitor/select-namespace.png)
-1. Select **Add criteria**, and do the following actions on the **Configure signal logic** page:
- 1. Select **Metrics** for **Signal type**.
- 2. Select a signal. For example: **Service errors**.
-
- ![Select server errors](./media/service-bus-metrics-azure-monitor/select-server-errors.png)
- 1. Select **Greater than** for **Condition**.
- 2. Select **Total** for **Time Aggregation**.
- 3. Enter **5** for **Threshold**.
- 4. Select **Done**.
-
- ![Specify condition](./media/service-bus-metrics-azure-monitor/specify-condition.png)
-1. On the **Create rule** page, expand **Define alert details**, and do the following actions:
- 1. Enter a **name** for the alert.
- 2. Enter a **description** for the alert.
- 3. Select **severity** for the alert.
-
- ![Screenshot of the Create rule page. Define alert details is expanded and the fields for Alert rule name, Description, and Severity are highlighted.](./media/service-bus-metrics-azure-monitor/alert-details.png)
-1. On the **Create rule** page, expand **Define action group**, select **New action group**, and do the following actions on the **Add action group page**.
- 1. Enter a name for the action group.
- 2. Enter a short name for the action group.
- 3. Select your subscription.
- 4. Select a resource group.
- 5. For this walkthrough, enter **Send email** for **ACTION NAME**.
- 6. Select **Email/SMS/Push/Voice** for **ACTION TYPE**.
- 7. Select **Edit details**.
- 8. On the **Email/SMS/Push/Voice** page, do the following actions:
- 1. Select **Email**.
- 2. Type the **email address**.
- 3. Select **OK**.
-
- ![Screenshot of the Add action group page. An Action named "Send email" with the Action type Email/SMS/Push/Voice is being added to the group.](./media/service-bus-metrics-azure-monitor/add-action-group.png)
- 4. On the **Add action group** page, select **OK**.
-1. On the **Create rule** page, select **Create alert rule**.
-
- ![Create alert rule button](./media/service-bus-metrics-azure-monitor/create-alert-rule.png)
-
-## Next steps
-
-See the [Azure Monitor overview](../azure-monitor/overview.md).
-
-[1]: ./media/service-bus-metrics-azure-monitor/service-bus-monitor1.png
-[2]: ./media/service-bus-metrics-azure-monitor/service-bus-monitor2.png
service-bus-messaging Service Bus Premium Messaging https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/service-bus-premium-messaging.md
The number of messaging units allocated to the Service Bus Premium namespace can
There are a number of factors to take into consideration when deciding the number of messaging units for your architecture: - Start with ***1 or 2 messaging units*** allocated to your namespace.-- Study the CPU usage metrics within the [Resource usage metrics](service-bus-metrics-azure-monitor.md#resource-usage-metrics) for your namespace.
+- Study the CPU usage metrics within the [Resource usage metrics](monitor-service-bus-reference.md#resource-usage-metrics) for your namespace.
- If CPU usage is ***below 20%***, you might be able to ***scale down*** the number of messaging units allocated to your namespace. - If CPU usage is ***above 70%***, your application will benefit from ***scaling up*** the number of messaging units allocated to your namespace.
service-bus-messaging Service Bus Throttling https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/service-bus-throttling.md
If the number of requests are more than the current resources can service, then
### How will I know that I'm being throttled? There are various ways to identify throttling in Azure Service Bus Premium -
- * **Throttled Requests** show up on the [Azure Monitor Request metrics](service-bus-metrics-azure-monitor.md#request-metrics) to identify how many requests were throttled.
+ * **Throttled Requests** show up on the [Azure Monitor Request metrics](monitor-service-bus-reference.md#request-metrics) to identify how many requests were throttled.
* High **CPU Usage** indicates that current resource allocation is high and requests may get throttled if the current workload doesn't reduce. * High **Memory Usage** indicates that current resource allocation is high and requests may get throttled if the current workload doesn't reduce.
service-fabric Service Fabric Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-fabric/service-fabric-support.md
We have created a number of support request options to serve the needs of managing your Service Fabric clusters and application workloads, depending on the urgency of support needed and the severity of the issue.
-## Self help troubleshooting content
+## Self help troubleshooting
<div class='icon is-large'> <img alt='Self help content' src='./media/logos/i-article.svg'> </div>
virtual-desktop Language Packs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-desktop/language-packs.md
You need the following things to customize your Windows 10 Enterprise multi-sess
- If you use Local Experience Pack (LXP) ISO files to localize your images, you will also need to download the appropriate LXP ISO for the best language experience - If you're using Windows 10, version 1903 or 1909: - [Windows 10, version 1903 or 1909 LXP ISO](https://software-download.microsoft.com/download/pr/Win_10_1903_32_64_ARM64_MultiLng_LngPkAll_LXP_ONLY.iso)
- - If you're using Windows 10, version 2004 or 20H2, use the information in [Adding languages in Windows 10: Known issues](/windows-hardware/manufacture/desktop/language-packs-known-issue) to figure out which of the following LXP ISOs is right for you:
- - [Windows 10, version 2004 or 20H2 **9B** LXP ISO](https://software-download.microsoft.com/download/pr/Win_10_2004_64_ARM64_MultiLang_LangPckAll_LIP_LXP_ONLY)
- - [Windows 10, version 2004 or 20H2 **9C** LXP ISO](https://software-download.microsoft.com/download/pr/Win_10_2004_32_64_ARM64_MultiLng_LngPkAll_LIP_9C_LXP_ONLY)
- - [Windows 10, version 2004 or 20H2 **10C** LXP ISO](https://software-download.microsoft.com/download/pr/LanguageExperiencePack.2010C.iso)
- - [Windows 10, version 2004 or 20H2 **11C** LXP ISO](https://software-download.microsoft.com/download/pr/LanguageExperiencePack.2011C.iso)
- - [Windows 10, version 2004 or 20H2 **1C** LXP ISO](https://software-download.microsoft.com/download/pr/LanguageExperiencePack.2101C.iso)
- - [Windows 10, version 2004 or 20H2 **2C** LXP ISO](https://software-download.microsoft.com/download/pr/LanguageExperiencePack.2102C.iso)
- - [Windows 10, version 2004 or 20H2 **4B** LXP ISO](https://software-download.microsoft.com/download/sg/LanguageExperiencePack.2104B.iso)
- - [Windows 10, version 2004 or 20H2 **4C** LXP ISO](https://software-download.microsoft.com/download/pr/LanguageExperiencePack.2104C.iso)
+ - If you're using Windows 10, version 2004, 20H2, or 21H1, use the information in [Adding languages in Windows 10: Known issues](/windows-hardware/manufacture/desktop/language-packs-known-issue) to figure out which of the following LXP ISOs is right for you:
+ - [Windows 10, version 2004, 20H2, or 21H1 **9B** LXP ISO](https://software-download.microsoft.com/download/pr/Win_10_2004_64_ARM64_MultiLang_LangPckAll_LIP_LXP_ONLY)
+ - [Windows 10, version 2004, 20H2, or 21H1 **9C** LXP ISO](https://software-download.microsoft.com/download/pr/Win_10_2004_32_64_ARM64_MultiLng_LngPkAll_LIP_9C_LXP_ONLY)
+ - [Windows 10, version 2004, 20H2, or 21H1 **10C** LXP ISO](https://software-download.microsoft.com/download/pr/LanguageExperiencePack.2010C.iso)
+ - [Windows 10, version 2004, 20H2, or 21H1 **11C** LXP ISO](https://software-download.microsoft.com/download/pr/LanguageExperiencePack.2011C.iso)
+ - [Windows 10, version 2004, 20H2, or 21H1 **1C** LXP ISO](https://software-download.microsoft.com/download/pr/LanguageExperiencePack.2101C.iso)
+ - [Windows 10, version 2004, 20H2, or 21H1 **2C** LXP ISO](https://software-download.microsoft.com/download/pr/LanguageExperiencePack.2102C.iso)
+ - [Windows 10, version 2004, 20H2, or 21H1 **4B** LXP ISO](https://software-download.microsoft.com/download/sg/LanguageExperiencePack.2104B.iso)
+ - [Windows 10, version 2004, 20H2, or 21H1 **4C** LXP ISO](https://software-download.microsoft.com/download/pr/LanguageExperiencePack.2104C.iso)
- An Azure Files Share or a file share on a Windows File Server Virtual Machine
virtual-machine-scale-sets Vmss Support Help https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machine-scale-sets/vmss-support-help.md
+
+ Title: Azure virtual machine scale sets support and help options
+description: How to obtain help and support for questions or problems when you create solutions using Azure virtual machine scale sets.
++++ Last updated : 4/28/2021+++
+# Support and troubleshooting for Azure virtual machine scale sets
+
+Here are suggestions for where you can get help when developing your Azure virtual machine scale sets solutions.
+
+## Self help troubleshooting
+<div class='icon is-large'>
+ <img alt='Self help content' src='./media/logos/i-article.svg'>
+</div>
+
+Various articles explain how to determine, diagnose, and fix issues that you might encounter when using [Azure Virtual Machines](https://docs.microsoft.com/azure/virtual-machines/) and [virtual machine scale sets](overview.md).
+
+- [Azure Virtual Machine troubleshooting documentation](https://docs.microsoft.com/troubleshoot/azure/virtual-machines/welcome-virtual-machines)
+- [Frequently asked questions about Azure virtual machine scale sets](virtual-machine-scale-sets-faq.yml)
++
+## Post a question on Microsoft Q&A
+
+<div class='icon is-large'>
+ <img alt='Microsoft Q&A' src='./media/logos/microsoft-logo.png'>
+</div>
+
+For quick and reliable answers to your technical product questions from Microsoft engineers, Azure Most Valuable Professionals (MVPs), or our expert community, engage with us on [Microsoft Q&A](/answers/products/azure), Azure's preferred destination for community support.
+
+If you can't find an answer to your problem using search, submit a new question to Microsoft Q&A. Use one of the following tags when asking your question:
++
+| Area | Tag |
+|-|-|
+| [Azure virtual machine scale sets](overview.md) | [azure-virtual-machine-scale-set](/answers/topics/azure-virtual-machines-scale-set.html) |
+| [Azure Virtual Machines](../virtual-machines/linux/overview.md) | [azure-virtual-machines](/answers/topics/azure-virtual-machines.html) |
+| [Azure SQL Virtual Machines](https://docs.microsoft.com/azure/azure-sql/virtual-machines/) | [azure-sql-virtual-machines](/answers/topics/azure-sql-virtual-machines.html)|
+| [Azure Virtual Machine backup](../virtual-machines/backup-recovery.md) | [azure-virtual-machine-backup](/answers/questions/36892/azure-virtual-machine-backups.html) |
+| [Azure Virtual Machine extension](../virtual-machines/extensions/overview.md) | [azure-virtual-machine-extension](/answers/topics/azure-virtual-machines-extension.html)|
+| [Azure Virtual Machine Images](../virtual-machines/shared-image-galleries.md) | [azure-virtual-machine-images](/answers/topics/azure-virtual-machines-images.html) |
+| [Azure Virtual Machine migration](../virtual-machines/classic-vm-deprecation.md) | [azure-virtual-machine-migration](/answers/topics/azure-virtual-machines-migration.html) |
+| [Azure Virtual Machine monitoring](../azure-monitor/vm/monitor-vm-azure.md) | [azure-virtual-machine-monitoring](/answers/topics/azure-virtual-machines-monitoring.html) |
+| [Azure Virtual Machine networking](../virtual-machines/network-overview.md) | [azure-virtual-machine-networking](/answers/topics/azure-virtual-machines-networking.html) |
+| [Azure Virtual Machine storage](../virtual-machines/managed-disks-overview.md) | [azure-virtual-machine-storage](/answers/topics/azure-virtual-machines-storage.html) |
+
+## Create an Azure support request
+
+<div class='icon is-large'>
+ <img alt='Azure support' src='./media/logos/logo-azure.svg'>
+</div>
+
+Explore the range of [Azure support options and choose the plan](https://azure.microsoft.com/support/plans) that best fits, whether you're a developer just starting your cloud journey or a large organization deploying business-critical, strategic applications. Azure customers can create and manage support requests in the Azure portal.
+
+- If you already have an Azure Support Plan, [open a support request here](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).
+
+- To sign up for a new Azure Support Plan, [compare support plans](https://azure.microsoft.com/support/plans/) and select the plan that works for you.
++
+## Create a GitHub issue
+
+<div class='icon is-large'>
+ <img alt='GitHub-image' src='./media/logos/github-logo.png'>
+</div>
+
+If you need help with the language and tools used to develop and manage Azure virtual machine scale sets, open an issue in the relevant repository on GitHub.
+
+| Library | GitHub issues URL|
+| | |
+| Azure PowerShell | https://github.com/Azure/azure-powershell/issues |
+| Azure CLI | https://github.com/Azure/azure-cli/issues |
+| Azure REST API | https://github.com/Azure/azure-rest-api-specs/issues |
+| Azure SDK for Java | https://github.com/Azure/azure-sdk-for-java/issues |
+| Azure SDK for Python | https://github.com/Azure/azure-sdk-for-python/issues |
+| Azure SDK for .NET | https://github.com/Azure/azure-sdk-for-net/issues |
+| Azure SDK for JavaScript | https://github.com/Azure/azure-sdk-for-js/issues |
+| Jenkins | https://github.com/Azure/jenkins/issues |
+| Terraform | https://github.com/Azure/terraform/issues |
+| Ansible | https://github.com/Azure/Ansible/issues |
+++
+## Submit feature requests on Azure Feedback
+
+<div class='icon is-large'>
+ <img alt='UserVoice' src='./media/logos/logo-uservoice.svg'>
+</div>
+
+To request new features, post them on Azure Feedback. Share your ideas for improving Azure virtual machine scale sets.
+
+| Service | Azure Feedback URL |
+|-||
+| Azure Virtual Machines | https://feedback.azure.com/forums/216843-virtual-machines |
+
+## Stay informed of updates and new releases
+
+<div class='icon is-large'>
+ <img alt='Stay informed' src='./media/logos/i-blog.svg'>
+</div>
+
+Learn about important product updates, roadmap, and announcements in [Azure Updates](https://azure.microsoft.com/updates/?category=compute).
+
+News and information about Azure virtual machine scale sets is shared at the [Azure blog](https://azure.microsoft.com/blog/topics/virtual-machines/).
++
+## Next steps
+
+Learn more about [Azure virtual machine scale sets](overview.md).
virtual-machines Vm Support Help https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/vm-support-help.md
Here are suggestions for where you can get help when developing your Azure Virtual Machines solutions.
-## Self help troubleshooting Content
+## Self help troubleshooting
<div class='icon is-large'> <img alt='Self help content' src='./media/logos/i-article.svg'> </div>
virtual-machines Hana Available Skus https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/sap/hana-available-skus.md
vm-linux Last updated 5/13/2021-+
virtual-machines Hana Certification https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/sap/hana-certification.md
vm-linux Last updated 05/13/2021-+
virtual-machines Hana Data Tiering Extension Nodes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/sap/hana-data-tiering-extension-nodes.md
vm-linux Last updated 05/17/2021-+
virtual-machines Hana Onboarding Requirements https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/sap/hana-onboarding-requirements.md
vm-linux Last updated 05/14/2021-+
virtual-machines Hana Operations Model https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/sap/hana-operations-model.md
vm-linux Last updated 05/17/2021-+
virtual-machines Hana Sizing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/sap/hana-sizing.md
vm-linux Last updated 05/14/2021-+
virtual-machines Hana Supported Scenario https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/sap/hana-supported-scenario.md
Title: Supported scenarios for SAP HANA on Azure (Large Instances)| Microsoft Docs
-description: Scenarios supported and their architecture details for SAP HANA on Azure (Large Instances)
+description: Learn about scenarios supported for SAP HANA on Azure (Large Instances) and their architectural details.
documentationcenter:
vm-linux Previously updated : 11/26/2019 Last updated : 05/18/2021 # Supported scenarios for HANA Large Instances
-This article describes the supported scenarios and architecture details for HANA Large Instances (HLI).
+This article describes the supported scenarios and architectural details for HANA Large Instances (HLI).
>[!NOTE] >If your required scenario is not mentioned in this article, contact the Microsoft Service Management team to assess your requirements. Before you set up the HLI unit, validate the design with SAP or your service implementation partner. ## Terms and definitions
-Let's understand the terms and definitions that are used in this article:
+Let's understand the terms and definitions used in this article:
-- **SID**: A system identifier for the HANA system-- **HLI**: Hana Large Instances-- **DR**: Disaster recovery-- **Normal DR**: A system setup with a dedicated resource for DR purposes only-- **Multipurpose DR**: A DR-site system that's configured to use a non-production environment alongside a production instance that's configured for a DR event -- **Single-SID**: A system with one instance installed-- **Multi-SID**: A system with multiple instances configured; also called an MCOS environment-- **HSR**: SAP HANA System Replication
+- **SID**: A system identifier for the HANA system.
+- **HLI**: HANA Large Instances.
+- **DR**: Disaster recovery.
+- **Normal DR**: A system setup with a dedicated resource for DR purposes only.
+- **Multipurpose DR**: A DR site system that's configured to use a non-production environment alongside a production instance that's configured for a DR event.
+- **Single-SID**: A system with one instance installed.
+- **Multi-SID**: A system with multiple instances configured; also called an MCOS environment.
+- **HSR**: SAP HANA system replication.
## Overview HANA Large Instances supports a variety of architectures to help you accomplish your business requirements. The following sections cover the architectural scenarios and their configuration details.
-The derived architecture design is purely from an infrastructure perspective, and you must consult SAP or your implementation partners for the HANA deployment. If your scenarios are not listed in this article, contact the Microsoft account team to review the architecture and derive a solution for you.
+The derived architectural designs are purely from an infrastructure perspective. Consult SAP or your implementation partners for the HANA deployment. If your scenarios aren't listed in this article, contact the Microsoft account team to review the architecture and derive a solution for you.
> [!NOTE]
-> These architectures are fully compliant with Tailored Data Integration (TDI) design and supported by SAP.
+> These architectures are fully compliant with Tailored Datacenter Integration (TDI) design and are supported by SAP.
This article describes the details of the two components in each supported architecture:
This article describes the details of the two components in each supported archi
Each provisioned server comes preconfigured with sets of Ethernet interfaces. The Ethernet interfaces configured on each HLI unit are categorized into four types: - **A**: Used for or by client access.-- **B**: Used for node-to-node communication. This interface is configured on all servers (irrespective of the topology requested) but used only for scale-out scenarios.
+- **B**: Used for node-to-node communication. This interface is configured on all servers irrespective of the topology requested. However, it is used only for scale-out scenarios.
- **C**: Used for node-to-storage connectivity. - **D**: Used for node-to-iSCSI device connection for STONITH setup. This interface is configured only when an HSR setup is requested.
Each provisioned server comes preconfigured with sets of Ethernet interfaces. Th
| C | TYPE II | vlan\<tenantNo+1> | team0.tenant+1 | Node-to-storage | | D | TYPE II | vlan\<tenantNo+3> | team0.tenant+3 | STONITH |
-You choose the interface based on the topology that's configured on the HLI unit. For example, interface ΓÇ£BΓÇ¥ is set up for node-to-node communication, which is useful when you have a scale-out topology configured. This interface isn't used for single node, scale-up configurations. For more information about interface usage, review your required scenarios (later in this article).
+You choose the interface based on the topology that's configured on the HLI unit. For example, interface ΓÇ£BΓÇ¥ is set up for node-to-node communication, which is useful when you have a scale-out topology configured. This interface isn't used for single node scale-up configurations. For more information about interface usage, review your required scenarios (later in this article).
-If necessary, you can define additional NIC cards on your own. However, the configurations of existing NICs *can't* be changed.
+If necessary, you can define more NIC cards on your own. However, the configurations of existing NICs *can't* be changed.
>[!NOTE] >You might find additional interfaces that are physical interfaces or bonding. You should consider only the previously mentioned interfaces for your use case. Any others can be ignored.
-The distribution for units with two assigned IP addresses should look like:
+The distribution for units with two assigned IP addresses should look as follows:
-- Ethernet ΓÇ£AΓÇ¥ should have an assigned IP address that's within the server IP pool address range that you submitted to Microsoft. This IP address should be maintained in the */etc/hosts* directory of the OS.
+- Ethernet ΓÇ£AΓÇ¥ should have an assigned IP address that's within the server IP pool address range that you submitted to Microsoft. This IP address should be maintained in the */etc/hosts* directory of the operating system (OS).
- Ethernet ΓÇ£CΓÇ¥ should have an assigned IP address that's used for communication to NFS. This address does *not* need to be maintained in the *etc/hosts* directory to allow instance-to-instance traffic within the tenant.
-For HANA System Replication or HANA scale-out deployment, a blade configuration with two assigned IP addresses is not suitable. If you have only two assigned IP addresses and you want to deploy such a configuration, contact SAP HANA on Azure Service Management. They can assign you a third IP address in a third VLAN. For HANA Large Instances units with three assigned IP addresses on three NIC ports, the following usage rules apply:
+For HANA system replication or HANA scale-out deployment, a blade configuration with two assigned IP addresses isn't suitable. If you have only two assigned IP addresses and you want to deploy such a configuration, contact SAP HANA on Azure Service Management. They can assign you a third IP address in a third VLAN. For HANA Large Instances units with three assigned IP addresses on three NIC ports, the following usage rules apply:
-- Ethernet ΓÇ£AΓÇ¥ should have an assigned IP address that's outside of the server IP pool address range that you submitted to Microsoft. This IP address should not be maintained in the *etc/hosts* directory of the OS.
+- Ethernet ΓÇ£AΓÇ¥ should have an assigned IP address that's outside of the server IP pool address range that you submitted to Microsoft. This IP address shouldn't be maintained in the *etc/hosts* directory of the OS.
- Ethernet ΓÇ£BΓÇ¥ should be maintained exclusively in the *etc/hosts* directory for communication between the various instances. These are the IP addresses to be maintained in scale-out HANA configurations as the IP addresses that HANA uses for the inter-node configuration. -- Ethernet ΓÇ£CΓÇ¥ should have an assigned IP address that's used for communication to NFS storage. This type of address should not be maintained in the *etc/hosts* directory.
+- Ethernet ΓÇ£CΓÇ¥ should have an assigned IP address that's used for communication to NFS storage. This type of address shouldn't be maintained in the *etc/hosts* directory.
-- Ethernet ΓÇ£DΓÇ¥ should be used exclusively for access to STONITH devices for Pacemaker. This interface is required when you configure HANA System Replication and want to achieve auto failover of the operating system by using an SBD-based device.
+- Ethernet ΓÇ£DΓÇ¥ should be used exclusively for access to STONITH devices for Pacemaker. This interface is required when you configure HANA system replication and want to achieve auto failover of the operating system by using an SBD-based device.
### Storage
-Storage is preconfigured based on the requested topology. The volume sizes and mount points vary depending on the number of servers, the number of SKUs, and the configured topology. For more information, review your required scenarios (later in this article). If you require more storage, you can purchase it in 1-TB increments.
+Storage is preconfigured based on the requested topology. The volume sizes and mount points vary depending on the number of servers and SKUs, and the configured topology. For more information, review your required scenarios (later in this article). If you require more storage, you can purchase it in 1-TB increments.
>[!NOTE] >The mount point /usr/sap/\<SID> is a symbolic link to the /hana/shared mount point.
Storage is preconfigured based on the requested topology. The volume sizes and m
## Supported scenarios
-The architecture diagrams in the next sections use the following notations:
+The architectural diagrams in the next sections use the following notations:
[ ![Table of architecture diagrams](media/hana-supported-scenario/Legends.png)](media/hana-supported-scenario/Legends.png#lightbox)
The following mount points are preconfigured:
## HSR with STONITH for high availability
-This topology support two nodes for the HANA System Replication configuration. This configuration is supported only for single HANA instances on a node. This means that MCOS scenarios are *not* supported.
+This topology support two nodes for the HANA system replication configuration. This configuration is supported only for single HANA instances on a node. This means that MCOS scenarios are *not* supported.
> [!NOTE] > As of December 2019, this architecture is supported only for the SUSE operating system.
The following mount points are preconfigured:
## High availability with HSR and DR with storage replication
-This topology supports two nodes for the HANA System Replication configuration. Both normal and multipurpose DRs are supported. These configurations are supported only for single HANA instances on a node. This means that MCOS scenarios are *not* supported with these configurations.
+This topology supports two nodes for the HANA system replication configuration. Both normal and multipurpose DRs are supported. These configurations are supported only for single HANA instances on a node. This means that MCOS scenarios are *not* supported with these configurations.
In the diagram, a multipurpose scenario is depicted at the DR site, where the HLI unit is used for the QA instance while production operations are running from the primary site. During DR failover (or failover test), the QA instance at the DR site is taken down.
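To give a feel for how the two nodes are paired for HANA system replication, the following sketch uses SAP's hdbnsutil tool. The host name, instance number, and site names are placeholders, and the exact procedure should follow SAP's documentation.

```bash
# Sketch only (run as <sid>adm; host name, instance number, and site names are placeholders).
# On the primary node: enable HANA system replication.
hdbnsutil -sr_enable --name=SITE_PRIMARY

# On the secondary node, with its HANA instance stopped: register against the primary.
hdbnsutil -sr_register --remoteHost=hli-hana-node1 --remoteInstance=00 \
  --replicationMode=sync --operationMode=logreplay --name=SITE_SECONDARY

# Check the replication state from the primary node.
hdbnsutil -sr_state
```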
The following mount points are preconfigured:
## Scale-out with standby
-This topology supports multiple nodes in a scale-out configuration. There is one node with a master role, one or more nodes with a worker role, and one or more nodes as standby. However, there can be only one master node at any single point in time.
+This topology supports multiple nodes in a scale-out configuration. There is one node with a master role, one or more nodes with a worker role, and one or more nodes as standby. However, there can be only one master node at any point in time.
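One way to see which node currently holds the master role and which nodes act as workers or standby is SAP's landscapeHostConfiguration.py report. The SID, instance number, and path below are illustrative.

```bash
# Example only (placeholder SID "H11" and instance 00): report the configured and
# actual master/worker/standby roles of the scale-out landscape as <sid>adm.
cd /usr/sap/H11/HDB00/exe/python_support
python landscapeHostConfiguration.py
```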
### Architecture diagram
The following mount points are preconfigured:
## Scale-out without standby
-This topology supports multiple nodes in a scale-out configuration. There is one node with a master role, and one or more nodes with a worker role. However, there can be only one master node at any single point in time.
+This topology supports multiple nodes in a scale-out configuration. There is one node with a master role, and one or more nodes with a worker role. However, there can be only one master node at any point in time.
### Architecture diagram
The following mount points are preconfigured:
## Single node with DR using HSR
-This topology supports one node in a scale-up configuration with one SID, with HANA System Replication to the DR site for a primary SID. In the diagram, only a single-SID system is depicted at the primary site, but multi-SID (MCOS) systems are supported as well.
+This topology supports one node in a scale-up configuration with one SID, with HANA system replication to the DR site for a primary SID. In the diagram, only a single-SID system is depicted at the primary site, but multi-SID (MCOS) systems are supported as well.
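For the DR pairing, registration is conceptually the same as for local high availability but typically uses asynchronous replication. This sketch again uses placeholder host, instance, and site names.

```bash
# Sketch only: on the DR-site node (as <sid>adm, with HANA stopped), register it as an
# asynchronous HANA system replication secondary of the primary site.
hdbnsutil -sr_register --remoteHost=hli-prod-node1 --remoteInstance=00 \
  --replicationMode=async --operationMode=logreplay --name=SITE_DR
```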
### Architecture diagram
The following network interfaces are preconfigured:
| D | TYPE II | vlan\<tenantNo+3> | team0.tenant+3 | Configured but not in use |

### Storage
-The following mount points are preconfigured on both the HLI units (Primary and DR):
+The following mount points are preconfigured on both HLI units (Primary and DR):
| Mount point | Use case |
| --- | --- |
The following mount points are preconfigured on both the HLI units (Primary and
### Key considerations

- /usr/sap/SID is a symbolic link to /hana/shared/SID.
- For MCOS: Volume size distribution is based on the database size in memory. To learn what database sizes in memory are supported in a multi-SID environment, see [Overview and architecture](./hana-overview-architecture.md).
-- The primary node syncs with the DR node by using HANA System Replication.
+- The primary node syncs with the DR node by using HANA system replication.
- [Global Reach](../../../expressroute/expressroute-global-reach.md) is used to link the ExpressRoute circuits together to make a private network between your regional networks.

## Single node HSR to DR (cost optimized)
- This topology supports one node in a scale-up configuration with one SID, with HANA System Replication to the DR site for a primary SID. In the diagram, only a single-SID system is depicted at the primary site, but multi-SID (MCOS) systems are supported as well. At the DR site, an HLI unit is used for the QA instance while production operations are running from the primary site. During DR failover (or failover test), the QA instance at the DR site is taken down.
+ This topology supports one node in a scale-up configuration with one SID, with HANA system replication to the DR site for a primary SID. In the diagram, only a single-SID system is depicted at the primary site, but multi-SID (MCOS) systems are supported as well. At the DR site, an HLI unit is used for the QA instance while production operations are running from the primary site. During DR failover (or failover test), the QA instance at the DR site is taken down.
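As an illustration, taking the QA instance down on the DR unit before a failover test could look like the following. The instance number is a placeholder, and your own runbook may differ.

```bash
# Example only (placeholder instance number 01): stop the QA HANA instance on the
# DR unit so the production instance can take over during a DR failover or test.
sapcontrol -nr 01 -function StopSystem HDB
```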
### Architecture diagram
The following mount points are preconfigured:
- For MCOS: Volume size distribution is based on the database size in memory. To learn what database sizes in memory are supported in a multi-SID environment, see [Overview and architecture](./hana-overview-architecture.md).
- At the DR site: The volumes and mount points are configured (marked as "PROD Instance at DR site") for the production HANA instance installation at the DR HLI unit.
- At the DR site: The data, log backups, log, and shared volumes for QA (marked as "QA instance installation") are configured for the QA instance installation.
-- The primary node syncs with the DR node by using HANA System Replication.
+- The primary node syncs with the DR node by using HANA system replication.
- [Global Reach](../../../expressroute/expressroute-global-reach.md) is used to link the ExpressRoute circuits together to make a private network between your regional networks.

## High availability and disaster recovery with HSR
- This topology support two nodes for the HANA System Replication configuration for the local regions' high availability. For the DR, the third node at the DR region syncs with the primary site by using HSR (async mode).
+ This topology supports two nodes for the HANA system replication configuration for the local region's high availability. For the DR, the third node at the DR region syncs with the primary site by using HSR (async mode).
### Architecture diagram
The following mount points are preconfigured:
### Key considerations

- /usr/sap/SID is a symbolic link to /hana/shared/SID.
- At the DR site: The volumes and mount points are configured (marked as "PROD DR instance") for the production HANA instance installation at the DR HLI unit.
-- The primary site node syncs with the DR node by using HANA System Replication.
+- The primary site node syncs with the DR node by using HANA system replication.
- [Global Reach](../../../expressroute/expressroute-global-reach.md) is used to link the ExpressRoute circuits together to make a private network between your regional networks.

## High availability and disaster recovery with HSR (cost optimized)
- This topology supports two nodes for the HANA System Replication configuration for the local regions' high availability. For the DR, the third node at the DR region syncs with the primary site by using HSR (async mode), while another instance (for example, QA) is already running out from the DR node.
+ This topology supports two nodes for the HANA system replication configuration for the local region's high availability. For the DR, the third node at the DR region syncs with the primary site by using HSR (async mode), while another instance (for example, QA) is already running on the DR node.
### Architecture diagram
The following mount points are preconfigured:
- /usr/sap/SID is a symbolic link to /hana/shared/SID.
- At the DR site: The volumes and mount points are configured (marked as "PROD DR instance") for the production HANA instance installation at the DR HLI unit.
- At the DR site: The data, log backups, log, and shared volumes for QA (marked as "QA instance installation") are configured for the QA instance installation.
-- The primary site node syncs with the DR node by using HANA System Replication.
+- The primary site node syncs with the DR node by using HANA system replication.
- [Global Reach](../../../expressroute/expressroute-global-reach.md) is used to link the ExpressRoute circuits together to make a private network between your regional networks.

## Scale-out with DR using HSR
-This topology supports multiple nodes in a scale-out with a DR. You can request this topology with or without the standby node. The primary site node syncs with the DR site node by using HANA System Replication (async mode).
+This topology supports multiple nodes in a scale-out configuration with DR. You can request this topology with or without the standby node. The primary site node syncs with the DR site node by using HANA system replication (async mode).
### Architecture diagram
The following mount points are preconfigured:
### Key considerations

- /usr/sap/SID is a symbolic link to /hana/shared/SID.
- At the DR site: The volumes and mount points are configured for the production HANA instance installation at the DR HLI unit.
-- The primary site node syncs with the DR node by using HANA System Replication.
+- The primary site node syncs with the DR node by using HANA system replication.
- [Global Reach](../../../expressroute/expressroute-global-reach.md) is used to link the ExpressRoute circuits together to make a private network between your regional networks.

## Next steps
+
+Learn about:
+- [Infrastructure and connectivity](./hana-overview-infrastructure-connectivity.md) for HANA Large Instances
+- [High availability and disaster recovery](./hana-overview-high-availability-disaster-recovery.md) for HANA Large Instances
virtual-machines Os Compatibility Matrix Hana Large Instance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/sap/os-compatibility-matrix-hana-large-instance.md
Title: Operating System Compatibility Matrix for SAP HANA (Large Instances)| Microsoft Docs
-description: The compatibility matrix represents the compatibility of different versions of Operating System with different hardware types (Large Instances)
+ Title: Operating system compatibility matrix for SAP HANA (Large Instances)| Microsoft Docs
+description: The compatibility matrix represents the compatibility of different operating system versions with different hardware types (Large Instances).
documentationcenter:-+ editor: vm-linux Previously updated : 08/21/2020- Last updated : 05/18/2021+
-# Compatible Operating Systems for HANA Large Instances
+# Compatible operating systems for HANA Large Instances
## HANA Large Instance Type I

| Operating System | Availability | SKUs |
|---|---|---|
| RHEL 7.6 | Available | S72, S72m, S96, S144, S144m, S192, S192m, S192xm, S224, S224m |
-### Persistent Memory SKUs
+### Persistent memory SKUs
+| Operating System | Availability | SKUs |
+|---|---|---|
+| SLES 12 SP4 | Available | S224oo, S224om, S224ooo, S224oom |
| SLES 15 SP1 | Available | S384, S384m, S384xm, S384xxm, S576m, S576xm, S768m, S768xm, S896m, S960m |
| RHEL 7.6 | Available | S384, S384m, S384xm, S384xxm, S576m, S576xm, S768m, S768xm, S896m, S960m |
-## Related Documents
+## Next steps
-- To know more about [Available SKUs](hana-available-skus.md)
-- To know about [Upgrading the Operating System](os-upgrade-hana-large-instance.md)
-
+Learn more about:
-
-
+- [Available SKUs](hana-available-skus.md)
+- [Upgrading the operating system](os-upgrade-hana-large-instance.md)
+- [Supported scenarios for HANA Large Instances](hana-supported-scenario.md)
+