Updates from: 03/11/2022 02:11:31
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Identity Provider Azure Ad Multi Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-azure-ad-multi-tenant.md
Previously updated : 02/25/2022 Last updated : 03/10/2022
If the sign-in process is successful, your browser is redirected to `https://jwt.ms`, which displays the contents of the token returned by Azure AD B2C.
## Next steps
-Learn how to [pass the Azure AD token to your application](idp-pass-through-user-flow.md).
+- Learn how to [pass the Azure AD token to your application](idp-pass-through-user-flow.md).
+- Check out the Azure AD multi-tenant federation [Live demo](https://github.com/azure-ad-b2c/unit-tests/tree/main/Identity-providers#azure-active-directory), and learn how to pass an Azure AD access token in the [Live demo](https://github.com/azure-ad-b2c/unit-tests/tree/main/Identity-providers#azure-active-directory-with-access-token).
::: zone-end
active-directory-b2c Identity Provider Facebook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-facebook.md
Previously updated : 09/16/2021 Last updated : 03/10/2022
Update the relying party (RP) file that initiates the user journey that you crea
If the sign-in process is successful, your browser is redirected to `https://jwt.ms`, which displays the contents of the token returned by Azure AD B2C.

## Next steps
-Learn how to [pass Facebook token to your application](idp-pass-through-user-flow.md).
+- Learn how to [pass the Facebook token to your application](idp-pass-through-user-flow.md).
+- Check out the Facebook federation [Live demo](https://github.com/azure-ad-b2c/unit-tests/tree/main/Identity-providers#facebook), and learn how to pass a Facebook access token in the [Live demo](https://github.com/azure-ad-b2c/unit-tests/tree/main/Identity-providers#facebook-with-access-token).
+
active-directory-b2c Identity Provider Github https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-github.md
Previously updated : 09/16/2021 Last updated : 03/10/2022
The GitHub technical profile requires the **CreateIssuerUserId** claim transform
If the sign-in process is successful, your browser is redirected to `https://jwt.ms`, which displays the contents of the token returned by Azure AD B2C.
+## Next steps
+
+- Learn how to [pass the GitHub token to your application](idp-pass-through-user-flow.md).
+- Check out the GitHub federation [Live demo](https://github.com/azure-ad-b2c/unit-tests/tree/main/Identity-providers#github), and learn how to pass a GitHub access token in the [Live demo](https://github.com/azure-ad-b2c/unit-tests/tree/main/Identity-providers#github-with-access-token).
+ ::: zone-end
active-directory-b2c Identity Provider Google https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-google.md
Previously updated : 09/16/2021 Last updated : 03/10/2022
You can define a Google account as a claims provider by adding it to the **Claim
If the sign-in process is successful, your browser is redirected to `https://jwt.ms`, which displays the contents of the token returned by Azure AD B2C.

## Next steps
-Learn how to [pass a Google token to your application](idp-pass-through-user-flow.md).
+- Learn how to [pass the Google token to your application](idp-pass-through-user-flow.md).
+- Check out the Google federation [Live demo](https://github.com/azure-ad-b2c/unit-tests/tree/main/Identity-providers#google), and learn how to pass a Google access token in the [Live demo](https://github.com/azure-ad-b2c/unit-tests/tree/main/Identity-providers#google-with-access-token).
++
active-directory-b2c Idp Pass Through User Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/idp-pass-through-user-flow.md
Previously updated : 09/16/2021 Last updated : 03/10/2022
Azure AD B2C supports passing the access token of [OAuth 2.0](add-identity-provi
::: zone pivot="b2c-custom-policy"
-Azure AD B2C supports passing the access token of [OAuth 2.0](authorization-code-flow.md) and [OpenID Connect](openid-connect.md) identity providers. For all other identity providers, the claim is returned blank.
+Azure AD B2C supports passing the access token of [OAuth 2.0](authorization-code-flow.md) and [OpenID Connect](openid-connect.md) identity providers. For all other identity providers, the claim is returned blank. For more details, check out the identity provider federation [Live demo](https://github.com/azure-ad-b2c/unit-tests/tree/main/Identity-providers).
::: zone-end
active-directory-b2c Partner Bindid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-bindid.md
For [Applications](https://admin.bindid-sandbox.io/console/#/applications) to co
| Name | Azure AD B2C/your desired application name|
| Domain | name.onmicrosoft.com|
| Redirect URIs| `https://jwt.ms` |
-| Redirect URLs |Specify the page to which users are redirected after BindID authentication: https://your-B2C-tenant-name.b2clogin.com/your-B2C-tenant-name.onmicrosoft.com/oauth2/authresp<br>For Example: `https://fabrikam.b2clogin.com/fabrikam.onmicrosoft.com/oauth2/authresp`<br>If you use a custom domain, enter https://your-domain-name/your-tenant-name.onmicrosoft.com/oauth2/authresp.<br>Replace your-domain-name with your custom domain, and your-tenant-name with the name of your tenant.|
+| Redirect URLs |Specify the page to which users are redirected after BindID authentication: `https://your-B2C-tenant-name.b2clogin.com/your-B2C-tenant-name.onmicrosoft.com/oauth2/authresp`<br>For Example: `https://fabrikam.b2clogin.com/fabrikam.onmicrosoft.com/oauth2/authresp`<br>If you use a custom domain, enter https://your-domain-name/your-tenant-name.onmicrosoft.com/oauth2/authresp.<br>Replace your-domain-name with your custom domain, and your-tenant-name with the name of your tenant.|
>[!NOTE]
>BindID will provide you with a Client ID and Client Secret, which you'll need later to configure the identity provider in Azure AD B2C.
active-directory Skip Out Of Scope Deletions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/skip-out-of-scope-deletions.md
Title: Skip deletion of out of scope users in Azure Active Directory Application
description: Learn how to override the default behavior of de-provisioning out of scope users in Azure Active Directory.
This article describes how to use the Microsoft Graph API and the Microsoft Grap
* If ***SkipOutOfScopeDeletions*** is set to 0 (false), accounts that go out of scope will be disabled in the target.
* If ***SkipOutOfScopeDeletions*** is set to 1 (true), accounts that go out of scope will not be disabled in the target.

This flag is set at the *Provisioning App* level and can be configured using the Graph API.
-Because this configuration is widely used with the *Workday to Active Directory user provisioning* app, the following steps include screenshots of the Workday application. However, the configuration can also be used with *all other apps*, such as ServiceNow, Salesforce, and Dropbox.
+Because this configuration is widely used with the *Workday to Active Directory user provisioning* app, the following steps include screenshots of the Workday application. However, the configuration can also be used with *all other apps*, such as ServiceNow, Salesforce, and Dropbox. To successfully complete this procedure, you must first have set up app provisioning for the app. Each app has its own configuration article. For example, to configure the Workday application, see [Tutorial: Configure Workday to Azure AD user provisioning](../saas-apps/workday-inbound-cloud-only-tutorial.md).
## Step 1: Retrieve your Provisioning App Service Principal ID (Object ID)
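If you prefer a command line for this step, a minimal sketch using the Azure CLI follows; the display name is an assumption, and newer CLI builds return the object ID in the `id` property:

```console
# Hypothetical display name; replace with the name of your provisioning app.
az ad sp list --display-name "Workday to Active Directory User Provisioning" \
    --query "[].{displayName:displayName, objectId:id}" --output table
```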
active-directory Howto Sspr Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-sspr-windows.md
The following limitations apply to using SSPR from the Windows sign-in screen:
- *BlockNonAdminUserInstall* is set to enabled or 1
- *EnableLostMode* is set on the device
- Explorer.exe is replaced with a custom shell
+ - Interactive logon: Require smart card is set to enabled or 1
- The combination of the following specific three settings can cause this feature to not work.
  - Interactive logon: Do not require CTRL+ALT+DEL = Disabled
  - *DisableLockScreenAppNotifications* = 1 or Enabled
active-directory Concept Conditional Access Conditions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-conditions.md
Previously updated : 03/03/2022 Last updated : 03/10/2022
These browsers support device authentication, allowing the device to be identifi
> [!NOTE]
> Edge 85+ requires the user to be signed in to the browser to properly pass device identity. Otherwise, it behaves like Chrome without the accounts extension. This sign-in might not occur automatically in a Hybrid Azure AD Join scenario.
> Safari is supported for device-based Conditional Access, but it cannot satisfy the **Require approved client app** or **Require app protection policy** conditions. A managed browser like Microsoft Edge will satisfy approved client app and app protection policy requirements.
+> [Firefox 91+](https://support.mozilla.org/kb/windows-sso) is supported for device-based Conditional Access, but "Allow Windows single sign-on for Microsoft, work, and school accounts" needs to be enabled.
#### Why do I see a certificate prompt in the browser
active-directory Concept Continuous Access Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-continuous-access-evaluation.md
There are two scenarios that make up continuous access evaluation, critical even
### Critical event evaluation
-Continuous access evaluation is implemented by enabling services, like Exchange Online, SharePoint Online, and Teams, to subscribe to critical Azure AD events. Those events can then be evaluated and enforced near real time. Critical event evaluation doesn't rely on Conditional Access policies so is available in any tenant. The following events are currently evaluated:
+Continuous access evaluation is implemented by enabling services, like Exchange Online, SharePoint Online, and Teams, to subscribe to critical Azure AD events. Those events can then be evaluated and enforced near real time. Critical event evaluation doesn't rely on Conditional Access policies so it is available in any tenant. The following events are currently evaluated:
- User Account is deleted or disabled
- Password for a user is changed or reset
active-directory Developer Support Help Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/developer-support-help-options.md
Previously updated : 10/28/2021 Last updated : 03/09/2022
If you need help with one of the Microsoft Authentication Libraries (MSAL), open
- [Azure Active Directory Identity Blog](https://techcommunity.microsoft.com/t5/azure-active-directory-identity/bg-p/Identity): Get news and information about Azure AD.
- [Tech Community](https://techcommunity.microsoft.com/t5/azure-active-directory-identity/bg-p/Identity/): Share your experiences, engage, and learn from experts.
+## Share your product ideas
+
+Have an idea for improving the Microsoft identity platform? Browse and vote for ideas submitted by others or submit your own:
+
+https://feedback.azure.com/d365community/forum/22920db1-ad25-ec11-b6e6-000d3a4f0789
++
active-directory Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/whats-new-docs.md
Previously updated : 03/01/2022 Last updated : 03/10/2022
Welcome to what's new in the Microsoft identity platform documentation. This art
## February 2022
-### New articles
-
-- [Quickstart: Sign in users and call the Microsoft Graph API from an Android app](mobile-app-quickstart-portal-android.md)
-- [Quickstart: Sign in users and call the Microsoft Graph API from an iOS or macOS app](mobile-app-quickstart-portal-ios.md)

### Updated articles

- [Desktop app that calls web APIs: Acquire a token using WAM](scenario-desktop-acquire-token-wam.md)
Welcome to what's new in the Microsoft identity platform documentation. This art
### New articles

- [Access Azure AD protected resources from an app in Google Cloud (preview)](workload-identity-federation-create-trust-gcp.md)
-- [Quickstart: Acquire a token and call the Microsoft Graph API by using a console app's identity](console-app-quickstart.md)
-- [Quickstart: Acquire a token and call Microsoft Graph API from a desktop application](desktop-app-quickstart.md)
-- [Quickstart: Add sign-in with Microsoft to a web app](web-app-quickstart.md)
-- [Quickstart: Protect a web API with the Microsoft identity platform](web-api-quickstart.md)
-- [Quickstart: Sign in users and call the Microsoft Graph API from a mobile application](mobile-app-quickstart.md)

### Updated articles
Welcome to what's new in the Microsoft identity platform documentation. This art
- [Exchange a SAML token issued by AD FS for a Microsoft Graph access token](v2-saml-bearer-assertion.md)
- [Logging in MSAL.js](msal-logging-js.md)
- [Permissions and consent in the Microsoft identity platform](v2-permissions-and-consent.md)
-- [Quickstart: Acquire a token and call Microsoft Graph API from a Java console app using app's identity](quickstart-v2-java-daemon.md)
-- [Quickstart: Acquire a token and call Microsoft Graph API from a Python console app using app's identity](quickstart-v2-python-daemon.md)
-- [Quickstart: Add sign-in with Microsoft to a Java web app](quickstart-v2-java-webapp.md)
-- [Quickstart: Add sign-in with Microsoft to a Python web app](quickstart-v2-python-webapp.md)
-- [Quickstart: Add sign-in with Microsoft to an ASP.NET Core web app](quickstart-v2-aspnet-core-webapp.md)
-- [Quickstart: ASP.NET web app that signs in Azure AD users](quickstart-v2-aspnet-webapp.md)
-- [Quickstart: Get a token and call the Microsoft Graph API by using a console app's identity](quickstart-v2-netcore-daemon.md)
-- [Quickstart: Protect an ASP.NET Core web API with the Microsoft identity platform](quickstart-v2-aspnet-core-web-api.md)
-- [Quickstart: Sign in users and call the Microsoft Graph API from an Android app](quickstart-v2-android.md)
-- [Quickstart: Sign in users and call the Microsoft Graph API from an iOS or macOS app](quickstart-v2-ios.md)
+- [Quickstart: Acquire a token and call the Microsoft Graph API by using a console app's identity](console-app-quickstart.md)
+- [Quickstart: Acquire a token and call Microsoft Graph API from a desktop application](desktop-app-quickstart.md)
+- [Quickstart: Add sign-in with Microsoft to a web app](web-app-quickstart.md)
+- [Quickstart: Protect a web API with the Microsoft identity platform](web-api-quickstart.md)
+- [Quickstart: Sign in users and call the Microsoft Graph API from a mobile application](mobile-app-quickstart.md)
## December 2021
Welcome to what's new in the Microsoft identity platform documentation. This art
- [Claims mapping policy type](reference-claims-mapping-policy-type.md)
- [Microsoft identity platform developer glossary](developer-glossary.md)
-- [Quickstart: Sign in and get an access token in an Angular SPA using the auth code flow](quickstart-v2-javascript-auth-code-angular.md)
- [Token cache serialization in MSAL.NET](msal-net-token-cache-serialization.md)
active-directory Licensing Service Plan Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/licensing-service-plan-reference.md
Previously updated : 02/16/2022 Last updated : 03/10/2022
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
- **Service plans included (friendly names)**: A list of service plans (friendly names) in the product that correspond to the string ID and GUID

>[!NOTE]
->This information last updated on February 16th, 2022.<br/>You can also download a CSV version of this table [here](https://download.microsoft.com/download/e/3/e/e3e9faf2-f28b-490a-9ada-c6089a1fc5b0/Product%20names%20and%20service%20plan%20identifiers%20for%20licensing.csv).
+>This information last updated on March 10th, 2022.<br/>You can also download a CSV version of this table [here](https://download.microsoft.com/download/e/3/e/e3e9faf2-f28b-490a-9ada-c6089a1fc5b0/Product%20names%20and%20service%20plan%20identifiers%20for%20licensing.csv).
><br/>

| Product name | String ID | GUID | Service plans included | Service plans included (friendly names) |
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| SKYPE FOR BUSINESS PSTN DOMESTIC CALLING | MCOPSTN1 | 0dab259f-bf13-4952-b7f8-7db8f131b28d | MCOPSTN1 (4ed3ff63-69d7-4fb7-b984-5aec7f605ca8) | DOMESTIC CALLING PLAN (4ed3ff63-69d7-4fb7-b984-5aec7f605ca8) |
| SKYPE FOR BUSINESS PSTN DOMESTIC CALLING (120 Minutes)| MCOPSTN5 | 54a152dc-90de-4996-93d2-bc47e670fc06 | MCOPSTN5 (54a152dc-90de-4996-93d2-bc47e670fc06) | DOMESTIC CALLING PLAN (54a152dc-90de-4996-93d2-bc47e670fc06) |
| Skype for Business PSTN Usage Calling Plan | MCOPSTNPP | 06b48c5f-01d9-4b18-9015-03b52040f51a | MCOPSTN3 (6b340437-d6f9-4dc5-8cc2-99163f7f83d6) | MCOPSTN3 (6b340437-d6f9-4dc5-8cc2-99163f7f83d6) |
+| Teams Phone with Calling Plan | MCOTEAMS_ESSENTIALS | ae2343d1-0999-43f6-ae18-d816516f6e78 | MCOPSTN1 (4ed3ff63-69d7-4fb7-b984-5aec7f605ca8)<br/>MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) | Microsoft 365 Domestic Calling Plan (4ed3ff63-69d7-4fb7-b984-5aec7f605ca8)<br/>Microsoft 365 Phone System (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) |
| Teams Rooms Premium | MTR_PREM | 4fb214cb-a430-4a91-9c91-4976763aa78f | MMR_P1 (bdaa59a3-74fd-4137-981a-31d4f84eb8a0)<br/>MCOMEETADV (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>WHITEBOARD_PLAN3 (4a51bca5-1eff-43f5-878c-177680f191af) | Meeting Room Managed Services (bdaa59a3-74fd-4137-981a-31d4f84eb8a0)<br/>Microsoft 365 Audio Conferencing (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>Microsoft 365 Phone System (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Whiteboard (Plan 3) (4a51bca5-1eff-43f5-878c-177680f191af) |
| TELSTRA CALLING FOR O365 | MCOPSTNEAU2 | de3312e1-c7b0-46e6-a7c3-a515ff90bc86 | MCOPSTNEAU (7861360b-dc3b-4eba-a3fc-0d323a035746) | AUSTRALIA CALLING PLAN (7861360b-dc3b-4eba-a3fc-0d323a035746) |
| Universal Print | UNIVERSAL_PRINT | 9f3d9c1d-25a5-4aaa-8e59-23a1e6450a67 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>UNIVERSAL_PRINT_01 (795f6fe0-cc4d-4773-b050-5dde4dc704c9) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Universal Print (795f6fe0-cc4d-4773-b050-5dde4dc704c9) |
active-directory F5 Big Ip Sap Erp Easy Button https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-big-ip-sap-erp-easy-button.md
Selected policies should either have an **Include** or **Exclude** option checke
![Screenshot for CA policies](./media/f5-big-ip-easy-button-ldap/conditional-access-policy.png)

>[!NOTE]
->The policy list is enumerated only once when first switching to this tab. A refresh button is available to manually force the wizard to query your tenant, but this button is displayed only when the application has been deployed.
+>The policy list is enumerated only once when first switching to this tab. A refresh button is available to manually force the wizard to query your tenant, but this button is displayed only when the application has been deployed.
### Virtual Server Properties
active-directory Admin Units Assign Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/admin-units-assign-roles.md
Previously updated : 01/28/2022 Last updated : 03/07/2022
# Assign Azure AD roles with administrative unit scope
-In Azure Active Directory (Azure AD), for more granular administrative control, you can assign an Azure AD role with a scope that's limited to one or more administrative units.
+In Azure Active Directory (Azure AD), for more granular administrative control, you can assign an Azure AD role with a scope that's limited to one or more administrative units. When an Azure AD role is assigned at the scope of an administrative unit, role permissions apply only when managing members of the administrative unit itself, and do not apply to tenant-wide settings or configurations.
+
+For example, an administrator who is assigned the Groups Administrator role at the scope of an administrative unit can manage groups that are members of the administrative unit, but they cannot manage other groups in the tenant. They also cannot manage tenant-level settings related to groups, such as expiration or group naming policies.
+
+This article describes how to assign Azure AD roles with administrative unit scope.
## Prerequisites
The following Azure AD roles can be assigned with administrative unit scope:
| Role | Description |
| --| -- |
| [Authentication Administrator](permissions-reference.md#authentication-administrator) | Has access to view, set, and reset authentication method information for any non-admin user in the assigned administrative unit only. |
-| [Groups Administrator](permissions-reference.md#groups-administrator) | Can manage all aspects of groups and groups settings, such as naming and expiration policies, in the assigned administrative unit only. |
+| [Groups Administrator](permissions-reference.md#groups-administrator) | Can manage all aspects of groups in the assigned administrative unit only. |
| [Helpdesk Administrator](permissions-reference.md#helpdesk-administrator) | Can reset passwords for non-administrators in the assigned administrative unit only. |
| [License Administrator](permissions-reference.md#license-administrator) | Can assign, remove, and update license assignments within the administrative unit only. |
| [Password Administrator](permissions-reference.md#password-administrator) | Can reset passwords for non-administrators within the assigned administrative unit only. |
-| [SharePoint Administrator](permissions-reference.md#sharepoint-administrator) * | Can manage all aspects of the SharePoint service. |
-| [Teams Administrator](permissions-reference.md#teams-administrator) * | Can manage the Microsoft Teams service. |
+| [SharePoint Administrator](permissions-reference.md#sharepoint-administrator) | Can manage Microsoft 365 groups in the assigned administrative unit only. For SharePoint sites associated with Microsoft 365 groups in an administrative unit, can also update site properties (site name, URL, and external sharing policy) using the Microsoft 365 admin center. Cannot use the SharePoint admin center or SharePoint APIs to manage sites. |
+| [Teams Administrator](permissions-reference.md#teams-administrator) | Can manage Microsoft 365 groups in the assigned administrative unit only. Can manage team members in the Microsoft 365 admin center for teams associated with groups in the assigned administrative unit only. Cannot use the Teams admin center. |
| [Teams Devices Administrator](permissions-reference.md#teams-devices-administrator) | Can perform management related tasks on Teams certified devices. |
| [User Administrator](permissions-reference.md#user-administrator) | Can manage all aspects of users and groups, including resetting passwords for limited admins within the assigned administrative unit only. |
-(*) The SharePoint Administrator and Teams Administrator roles can only be used for managing properties in the Microsoft 365 admin center. Teams admin center and SharePoint admin center currently do not support administrative unit-scoped administration.
-
Certain role permissions apply only to non-administrator users when assigned with the scope of an administrative unit. In other words, administrative unit scoped [Helpdesk Administrators](permissions-reference.md#helpdesk-administrator) can reset passwords for users in the administrative unit only if those users do not have administrator roles. The following list of permissions is restricted when the target of an action is another administrator:

- Read and modify user authentication methods, or reset user passwords
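As a hedged illustration of the assignment itself, Microsoft Graph accepts a role assignment whose `directoryScopeId` points at an administrative unit. A sketch using `az rest`; the IDs are placeholders, and `fe930be7-5e62-47db-91af-98c3a49a38b1` is the built-in User Administrator role template ID:

```console
# Assign a role scoped to an administrative unit via Microsoft Graph.
az rest --method POST \
  --url "https://graph.microsoft.com/v1.0/roleManagement/directory/roleAssignments" \
  --body '{
    "@odata.type": "#microsoft.graph.unifiedRoleAssignment",
    "roleDefinitionId": "fe930be7-5e62-47db-91af-98c3a49a38b1",
    "principalId": "<user-object-id>",
    "directoryScopeId": "/administrativeUnits/<admin-unit-id>"
  }'
```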
active-directory Sonarqube Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sonarqube-tutorial.md
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
### Create Sonarqube test user
-In this section, you create a user called B.Simon in Sonarqube. Work with [Sonarqube Client support team](https://www.sonarsource.com/support/) to add the users in the Sonarqube platform. Users must be created and activated before you use single sign-on.
+In this section, you create a user called B.Simon in Sonarqube. Work with the [Sonarqube Client support team](https://sonarsource.com/company/contact/) to add the users to the Sonarqube platform. Users must be created and activated before you use single sign-on.
## Test SSO
aks Azure Disk Csi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-disk-csi.md
Title: Use Container Storage Interface (CSI) drivers for Azure Disks on Azure Ku
description: Learn how to use the Container Storage Interface (CSI) drivers for Azure disks in an Azure Kubernetes Service (AKS) cluster.
Previously updated : 10/15/2021 Last updated : 03/09/2022
Besides original in-tree driver features, Azure Disk CSI driver already provides
- `Premium_ZRS`, `StandardSSD_ZRS` disk types are supported. For more information, see [Zone-redundant storage for managed disks](../virtual-machines/disks-redundancy.md)
- [Snapshot](#volume-snapshots)
- [Volume clone](#clone-volumes)
+- [Resize disk PV without downtime](#resize-a-persistent-volume-without-downtime)
## Use CSI persistent volumes with Azure disks
outfile
test.txt
```
-## Resize a persistent volume
+## Resize a persistent volume without downtime
You can instead request a larger volume for a PVC. Edit the PVC object, and specify a larger size. This change triggers the expansion of the underlying volume that backs the PV.
Filesystem Size Used Avail Use% Mounted on
```

> [!IMPORTANT]
-> Currently, the Azure disk CSI driver only supports resizing PVCs with no pods associated (and the volume not mounted to a specific node).
-
-As such, let's delete the pod we created earlier:
-
-```console
-$ kubectl delete -f https://raw.githubusercontent.com/kubernetes-sigs/azuredisk-csi-driver/master/deploy/example/nginx-pod-azuredisk.yaml
-
-pod "nginx-azuredisk" deleted
-```
+> Currently, the Azure disk CSI driver supports resizing PVCs without downtime in specific regions.
+> Follow this [link][expand-an-azure-managed-disk] to register the disk online resize feature.
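For reference, a sketch of the feature registration flow from the linked guide, assuming the preview feature is named `LiveResize` in the `Microsoft.Compute` namespace:

```console
az feature register --namespace Microsoft.Compute --name LiveResize
# Wait until the feature shows as "Registered", then refresh the provider.
az feature show --namespace Microsoft.Compute --name LiveResize
az provider register --namespace Microsoft.Compute
```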
Let's expand the PVC by increasing the `spec.resources.requests.storage` field:
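One way to do this, assuming the `pvc-azuredisk` claim from the earlier examples, is a single `kubectl patch`:

```console
# Request a larger size on the claim; the CSI driver expands the backing disk.
kubectl patch pvc pvc-azuredisk --type merge \
  -p '{"spec": {"resources": {"requests": {"storage": "15Gi"}}}}'
```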
pvc-391ea1a6-0191-4022-b915-c8dc4216174a 15Gi RWO Delete
(...)
```
-> [!NOTE]
-> The PVC won't reflect the new size until it has a pod associated to it again.
-
-Let's create a new pod:
-
-```console
-$ kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/azuredisk-csi-driver/master/deploy/example/nginx-pod-azuredisk.yaml
-
-pod/nginx-azuredisk created
-```
-
-And, finally, confirm the size of the PVC and inside the pod:
+After a few minutes, confirm the new size of the PVC and of the file system inside the pod:
```console
$ kubectl get pvc pvc-azuredisk
$ kubectl exec -it busybox-azuredisk-0 -- cat c:\mnt\azuredisk\data.txt # on Win
(...)
```
-## Using Azure tags
-
-For more details on using Azure tags, see [Use Azure tags in Azure Kubernetes Service (AKS)][use-tags].
-
## Next steps

- To learn how to use CSI drivers for Azure Files, see [Use Azure Files with CSI drivers](azure-files-csi.md).
For more details on using Azure tags, see [Use Azure tags in Azure Kubernetes Se
[azure-disk-volume]: azure-disk-volume.md
[azure-files-pvc]: azure-files-dynamic-pv.md
[premium-storage]: ../virtual-machines/disks-types.md
+[expand-an-azure-managed-disk]: ../virtual-machines/linux/expand-disks.md#expand-an-azure-managed-disk
[az-disk-list]: /cli/azure/disk#az_disk_list
[az-snapshot-create]: /cli/azure/snapshot#az_snapshot_create
[az-disk-create]: /cli/azure/disk#az_disk_create
For more details on using Azure tags, see [Use Azure tags in Azure Kubernetes Se
[az-feature-register]: /cli/azure/feature#az_feature_register
[az-feature-list]: /cli/azure/feature#az_feature_list
[az-provider-register]: /cli/azure/provider#az_provider_register
-[use-tags]: use-tags.md
aks Azure Disk Volume https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-disk-volume.md
Title: Create a static volume for pods in Azure Kubernetes Service (AKS)
description: Learn how to manually create a volume with Azure disks for use with a pod in Azure Kubernetes Service (AKS)
Previously updated : 03/01/2019 Last updated : 03/09/2022
#Customer intent: As a developer, I want to learn how to manually create and attach storage to a specific pod in AKS.
The disk resource ID is displayed once the command has successfully completed, a
```

## Mount disk as volume
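If you no longer have the resource ID handy, one way to look it up again is with the Azure CLI; the disk name and node resource group below are assumptions matching the earlier example:

```console
# Print the resource ID of the disk created previously.
az disk show --resource-group MC_myAKSCluster_myAKSCluster_eastus \
    --name myAKSDisk --query id --output tsv
```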
+Create a *pv-azuredisk.yaml* file with a *PersistentVolume*. Update `volumeHandle` with the disk resource ID. For example:
-To mount the Azure disk into your pod, configure the volume in the container spec. Create a new file named `azure-disk-pod.yaml` with the following contents. Update `diskName` with the name of the disk created in the previous step, and `diskURI` with the disk ID shown in output of the disk create command. If desired, update the `mountPath`, which is the path where the Azure disk is mounted in the pod. For Windows Server containers, specify a *mountPath* using the Windows path convention, such as *'D:'*.
+```yaml
+apiVersion: v1
+kind: PersistentVolume
+metadata:
+ name: pv-azuredisk
+spec:
+ capacity:
+ storage: 100Gi
+ accessModes:
+ - ReadWriteOnce
+ persistentVolumeReclaimPolicy: Retain
+ csi:
+ driver: disk.csi.azure.com
+ readOnly: false
+ volumeHandle: /subscriptions/<subscriptionID>/resourceGroups/MC_myAKSCluster_myAKSCluster_eastus/providers/Microsoft.Compute/disks/myAKSDisk
+ volumeAttributes:
+ fsType: ext4
+```
+
+Create a *pvc-azuredisk.yaml* file with a *PersistentVolumeClaim* that uses the *PersistentVolume*. For example:
+
+```yaml
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+ name: pvc-azuredisk
+spec:
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 100Gi
+ volumeName: pv-azuredisk
+ storageClassName: ""
+```
+
+Use the `kubectl` commands to create the *PersistentVolume* and *PersistentVolumeClaim*.
+
+```console
+kubectl apply -f pv-azuredisk.yaml
+kubectl apply -f pvc-azuredisk.yaml
+```
+
+Verify your *PersistentVolumeClaim* is created and bound to the *PersistentVolume*.
+
+```console
+$ kubectl get pvc pvc-azuredisk
+
+NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
+pvc-azuredisk Bound pv-azuredisk 100Gi RWO 5s
+```
+
+Create an *azure-disk-pod.yaml* file to reference your *PersistentVolumeClaim*. For example:
```yaml
apiVersion: v1
spec:
      - name: azure
        mountPath: /mnt/azure
  volumes:
-      - name: azure
-        azureDisk:
-          kind: Managed
-          diskName: myAKSDisk
-          diskURI: /subscriptions/<subscriptionID>/resourceGroups/MC_myAKSCluster_myAKSCluster_eastus/providers/Microsoft.Compute/disks/myAKSDisk
+      - name: azure
+        persistentVolumeClaim:
+          claimName: pvc-azuredisk
```
-Use the `kubectl` command to create the pod.
-
```console
kubectl apply -f azure-disk-pod.yaml
```
-You now have a running pod with an Azure disk mounted at `/mnt/azure`. You can use `kubectl describe pod mypod` to verify the disk is mounted successfully. The following condensed example output shows the volume mounted in the container:
-
-```
-[...]
-Volumes:
- azure:
- Type: AzureDisk (an Azure Data Disk mount on the host and bind mount to the pod)
- DiskName: myAKSDisk
- DiskURI: /subscriptions/<subscriptionID/resourceGroups/MC_myResourceGroupAKS_myAKSCluster_eastus/providers/Microsoft.Compute/disks/myAKSDisk
- Kind: Managed
- FSType: ext4
- CachingMode: ReadWrite
- ReadOnly: false
- default-token-z5sd7:
- Type: Secret (a volume populated by a Secret)
- SecretName: default-token-z5sd7
- Optional: false
-[...]
-Events:
- Type Reason Age From Message
- - - - -
- Normal Scheduled 1m default-scheduler Successfully assigned mypod to aks-nodepool1-79590246-0
- Normal SuccessfulMountVolume 1m kubelet, aks-nodepool1-79590246-0 MountVolume.SetUp succeeded for volume "default-token-z5sd7"
- Normal SuccessfulMountVolume 41s kubelet, aks-nodepool1-79590246-0 MountVolume.SetUp succeeded for volume "azure"
-[...]
-```
-
-## Using Azure tags
-
-For more details on using Azure tags, see [Use Azure tags in Azure Kubernetes Service (AKS)][use-tags].
-
## Next steps

For associated best practices, see [Best practices for storage and backups in AKS][operator-best-practices-storage].
For more information about how AKS clusters interact with Azure disks, see the [Kube
[azure-files-volume]: azure-files-volume.md
[operator-best-practices-storage]: operator-best-practices-storage.md
[concepts-storage]: concepts-storage.md
-[use-tags]: use-tags.md
aks Azure Files Csi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-files-csi.md
$ kubectl exec -it busybox-azurefile-0 -- cat c:\mnt\azurefile\data.txt # on Win
(...)
```
-## Using Azure tags
-
-For more details on using Azure tags, see [Use Azure tags in Azure Kubernetes Service (AKS)][use-tags].
-
## Next steps

- To learn how to use CSI drivers for Azure disks, see [Use Azure disks with CSI drivers](azure-disk-csi.md).
For more details on using Azure tags, see [Use Azure tags in Azure Kubernetes Se
[az-provider-register]: /cli/azure/provider#az_provider_register
[node-resource-group]: faq.md#why-are-two-resource-groups-created-with-aks
[storage-skus]: ../storage/common/storage-redundancy.md
-[use-tags]: use-tags.md
+[use-tags]: use-tags.md
aks Azure Files Volume https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-files-volume.md
description: Learn how to manually create a volume with Azure Files for use with multiple concurrent pods in Azure Kubernetes Service (AKS)
Previously updated : 01/29/2022 Last updated : 03/09/2022
#Customer intent: As a developer, I want to learn how to manually create and attach storage using Azure Files to a pod in AKS.
spec:
    - mfsymlinks
    - cache=strict
    - nosharesock
+    - nobrl
```

Create an *azurefile-mount-options-pvc.yaml* file with a *PersistentVolumeClaim* that uses the *PersistentVolume*. For example:
aks Csi Storage Drivers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/csi-storage-drivers.md
Title: Enable Container Storage Interface (CSI) drivers on Azure Kubernetes Serv
description: Learn how to enable the Container Storage Interface (CSI) drivers for Azure disks and Azure Files in an Azure Kubernetes Service (AKS) cluster.
Previously updated : 10/15/2021 Last updated : 03/10/2022
The Container Storage Interface (CSI) is a standard for exposing arbitrary block
The CSI storage driver support on AKS allows you to natively use:

- [*Azure disks*](azure-disk-csi.md), which can be used to create a Kubernetes *DataDisk* resource. Disks can use Azure Premium Storage, backed by high-performance SSDs, or Azure Standard Storage, backed by regular HDDs or Standard SSDs. For most production and development workloads, use Premium Storage. Azure disks are mounted as *ReadWriteOnce*, so are only available to a single pod. For storage volumes that can be accessed by multiple pods simultaneously, use Azure Files.
-- [*Azure Files*](azure-files-csi.md), which can be used to mount an SMB 3.0 share backed by an Azure Storage account to pods. With Azure Files, you can share data across multiple nodes and pods. Azure Files can use Azure Standard Storage backed by regular HDDs or Azure Premium Storage backed by high-performance SSDs.
+- [*Azure Files*](azure-files-csi.md), which can be used to mount an SMB 3.0/3.1 share backed by an Azure Storage account to pods. With Azure Files, you can share data across multiple nodes and pods. Azure Files can use Azure Standard Storage backed by regular HDDs or Azure Premium Storage backed by high-performance SSDs.
> [!IMPORTANT]
> Starting in Kubernetes version 1.21, Kubernetes will use CSI drivers only and by default. These drivers are the future of storage support in Kubernetes.
Whilst explicit migration to the CSI provider is not needed for your storage cla
Migration of these storage classes will involve deleting the existing storage classes, and re-provisioning them with the provisioner set to **disk.csi.azure.com** if using Azure Disks, and **file.csi.azure.com** if using Azure Files.
-Whilst this will update the mapping of the storage classes, the binding of the Persistent Volume to the CSI provisioner will only take place at provisioning time. This could be during a cordon & drain operation (cluster update) or by detaching and reattaching the Volume.
-
-> [!IMPORTANT]
-> If your Storage class reclaimPolicy is set to Delete you will need to change the Persistent Volume to Retain to persist your data. This can be achieved via a [patch operation on the PV](https://kubernetes.io/docs/tasks/administer-cluster/change-pv-reclaim-policy/).
+Whilst this will update the mapping of the storage classes, the binding of the Persistent Volume to the CSI provisioner will only take place at provisioning time. This could be during a cordon & drain operation (cluster update) or by detaching and reattaching the Volume.
### Migrating Storage Class provisioner
parameters:
The CSI storage system supports the same features as the in-tree drivers, so the only change needed would be the provisioner.
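For illustration, a minimal sketch of a re-created disk storage class; the class name and `skuName` parameter are assumptions, and only the `provisioner` value differs from an equivalent in-tree class:

```console
kubectl apply -f - <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-csi
provisioner: disk.csi.azure.com
parameters:
  skuName: StandardSSD_LRS
reclaimPolicy: Delete
allowVolumeExpansion: true
EOF
```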
+### Migrating in-tree disk persistent volumes
+
+> [!IMPORTANT]
+> If your in-tree Persistent Volume reclaimPolicy is set to Delete you will need to change the Persistent Volume to Retain to persist your data. This can be achieved via a [patch operation on the PV](https://kubernetes.io/docs/tasks/administer-cluster/change-pv-reclaim-policy/). For example:
+> ```console
+> $ kubectl patch pv pv-azuredisk --type merge --patch '{"spec": {"persistentVolumeReclaimPolicy": "Retain"}}'
+> ```
+
+If you have in-tree persistent volumes, get the disk ID from `azureDisk.diskURI`, and then follow this [guide][azure-disk-static-mount] to set up CSI driver persistent volumes.
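For example, a one-liner to read the disk ID out of an existing in-tree persistent volume; the PV name is a placeholder:

```console
kubectl get pv <pv-name> -o jsonpath='{.spec.azureDisk.diskURI}'
```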
+
## Next steps

- To use the CSI drivers for Azure disks, see [Use Azure disks with CSI drivers](azure-disk-csi.md).
The CSI storage system supports the same features as the In-tree drivers, so the
<!-- LINKS - internal --> [azure-disk-volume]: azure-disk-volume.md
+[azure-disk-static-mount]: azure-disk-volume.md#mount-disk-as-volume
[azure-files-pvc]: azure-files-dynamic-pv.md
[premium-storage]: ../virtual-machines/disks-types.md
[az-disk-list]: /cli/azure/disk#az_disk_list
api-management Api Management Advanced Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-advanced-policies.md
This example shows one way to verify a reference token with an authorization ser
| mode="string" | Determines whether this is a new request or a copy of the current request. In outbound mode, mode=copy does not initialize the request body. | No | New | | response-variable-name="string" | The name of context variable that will receive a response object. If the variable doesn't exist, it will be created upon successful execution of the policy and will become accessible via [`context.Variable`](api-management-policy-expressions.md#ContextVariables) collection. | Yes | N/A | | timeout="integer" | The timeout interval in seconds before the call to the URL fails. | No | 60 |
-| ignore-error | If true and the request results in an error:<br /><br /> - If response-variable-name was specified it will contain a null value.<br />- If response-variable-name was not specified, context.Request will not be updated. | No | false |
+| ignore-error | If true and the request results in an error, the error will be ignored, and the response variable will contain a null value. | No | false |
| name | Specifies the name of the header to be set. | Yes | N/A |
| exists-action | Specifies what action to take when the header is already specified. This attribute must have one of the following values.<br /><br /> - override - replaces the value of the existing header.<br />- skip - does not replace the existing header value.<br />- append - appends the value to the existing header value.<br />- delete - removes the header from the request.<br /><br /> When set to `override` enlisting multiple entries with the same name results in the header being set according to all entries (which will be listed multiple times); only listed values will be set in the result. | No | override |
app-service Overview Vnet Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-vnet-integration.md
If the virtual network is in a different subscription than the app, you must ens
### Routes
-There are three types of routing to consider when you configure regional virtual network integration. [Application routing](#application-routing) defines what traffic is routed from your app and into the virtual network. [Configuration routing](#configuration-routing) affects operations that happen before or during startup of you app. Examples are container image pull and app settings with Key Vault reference. [Network routing](#network-routing) is the ability to handle how both app and configuration traffic is routed from your virtual network and out.
+You can control what traffic goes through the virtual network integration. There are three types of routing to consider when you configure regional virtual network integration. [Application routing](#application-routing) defines what traffic is routed from your app and into the virtual network. [Configuration routing](#configuration-routing) affects operations that happen before or during startup of your app. Examples are container image pull and app settings with Key Vault references. [Network routing](#network-routing) is the ability to handle how both app and configuration traffic is routed from your virtual network and out.
+
+By default, only private traffic (also known as [RFC1918](https://datatracker.ietf.org/doc/html/rfc1918#section-3) traffic) sent from your app is routed through the virtual network integration. Unless you configure application routing or configuration routing options, all other traffic will not be sent through the virtual network integration. Traffic is only subject to [network routing](#network-routing) if it is sent through the virtual network integration.
#### Application routing
-Application routing affects all the traffic that is sent from your app after it has been started. See [configuration routing](#configuration-routing) for traffic during start up. When you configure application routing, you can either route all traffic or only private traffic (also known as [RFC1918](https://datatracker.ietf.org/doc/html/rfc1918#section-3) traffic) into your virtual network. You configure this behavior through the **Route All** setting. If **Route All** is disabled, your app only routes private traffic into your virtual network. If you want to route all your outbound app traffic into your virtual network, make sure that **Route All** is enabled.
+Application routing applies to traffic that is sent from your app after it has been started. See [configuration routing](#configuration-routing) for traffic during start up. When you configure application routing, you can either route all traffic or only private traffic into your virtual network. You configure this behavior through the **Route All** setting. If **Route All** is disabled, your app only routes private traffic into your virtual network. If you want to route all your outbound app traffic into your virtual network, make sure that **Route All** is enabled.
> [!NOTE]
-> * When **Route All** is enabled, all app traffic is subject to the NSGs and UDRs that are applied to your integration subnet. When **Route All** is enabled, outbound traffic is still sent from the addresses that are listed in your app properties, unless you provide routes that direct the traffic elsewhere.
+> * Only traffic configured in application or configuration routing is subject to the NSGs and UDRs that are applied to your integration subnet.
+> * When **Route All** is enabled, outbound traffic from your app is still sent from the addresses that are listed in your app properties, unless you provide routes that direct the traffic elsewhere.
> * Regional virtual network integration can't use port 25.

Learn [how to configure application routing](./configure-vnet-integration-routing.md).
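As a hedged sketch, recent Azure CLI versions expose the **Route All** setting directly; the resource group and app names are placeholders:

```console
# Route all outbound app traffic into the integration subnet.
az webapp config set --resource-group <group-name> --name <app-name> \
    --vnet-route-all-enabled true
```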
We recommend that you use the **Route All** configuration setting to enable rout
#### Configuration routing
-When you are using virtual network integration, you can configure how parts of the configuration traffic is managed. By default, configuration traffic will go directly over the public route, but individual components you actively configure it to be routed through the virtual network integration.
+When you are using virtual network integration, you can configure how parts of the configuration traffic are managed. By default, configuration traffic goes directly over the public route, but for the individual components mentioned here, you can actively configure it to be routed through the virtual network integration, as shown in the example below.
> [!NOTE]
> * Windows containers don't support routing App Service Key Vault references or pulling custom container images over virtual network integration.
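For example, pulling a custom container image over the integration is commonly enabled with an app setting; this sketch assumes `WEBSITE_PULL_IMAGE_OVER_VNET` is the setting that controls it:

```console
# Route container image pull traffic through the virtual network integration.
az webapp config appsettings set --resource-group <group-name> --name <app-name> \
    --settings WEBSITE_PULL_IMAGE_OVER_VNET=true
```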
App settings using Key Vault references will attempt to get secrets over the pub
#### Network routing
-You can use route tables to route outbound traffic from your app to wherever you want. Route tables affect your destination traffic. When **Route All** is disabled in [application routing](#application-routing), only private traffic (RFC1918) is affected by your route tables. Common destinations can include firewall devices or gateways. Routes that are set on your integration subnet won't affect replies to inbound app requests.
+You can use route tables to route outbound traffic from your app to wherever you want. Route tables affect your destination traffic. Route tables only apply to traffic routed through the virtual network integration. See [application routing](#application-routing) and [configuration routing](#configuration-routing) for details. Common destinations can include firewall devices or gateways. Routes that are set on your integration subnet won't affect replies to inbound app requests.
-When you want to route all outbound traffic on-premises, you can use a route table to send all outbound traffic to your Azure ExpressRoute gateway. If you do route traffic to a gateway, set routes in the external network to send any replies back.
+When you want to route outbound traffic on-premises, you can use a route table to send outbound traffic to your Azure ExpressRoute gateway. If you do route traffic to a gateway, set routes in the external network to send any replies back.
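A sketch of such a route, sending all outbound traffic from the integration subnet to a virtual network gateway; the names are placeholders:

```console
az network route-table route create --resource-group <group-name> \
    --route-table-name <route-table-name> --name default-route \
    --address-prefix 0.0.0.0/0 --next-hop-type VirtualNetworkGateway
```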
Border Gateway Protocol (BGP) routes also affect your app traffic. If you have BGP routes from something like an ExpressRoute gateway, your app outbound traffic is affected. Similar to user-defined routes, BGP routes affect traffic according to your routing scope setting. ### Network security groups
-An app that uses regional virtual network integration can use a [network security group](../virtual-network/network-security-groups-overview.md) to block outbound traffic to resources in your virtual network or the internet. To block traffic to public addresses, enable [Route All](#application-routing) to the virtual network. When **Route All** isn't enabled, NSGs are only applied to RFC1918 traffic.
+An app that uses virtual network integration can use a [network security group](../virtual-network/network-security-groups-overview.md) to block outbound traffic to resources in your virtual network or the internet. To block traffic to public addresses, enable [Route All](#application-routing). When **Route All** isn't enabled, NSGs are only applied to RFC1918 traffic from your app.
An NSG that's applied to your integration subnet is in effect regardless of any route tables applied to your integration subnet.
app-service Tutorial Python Postgresql App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-python-postgresql-app.md
In your Python code, you use these settings as environment variables with statem
Having issues? Refer first to the [Troubleshooting guide](configure-language-python.md#troubleshooting), otherwise, [let us know](https://aka.ms/DjangoCLITutorialHelp).
+> [!NOTE]
+> If you want to try an alternative approach to connect your app to the Postgres database in Azure, see the [Service Connector version](../service-connector/tutorial-django-webapp-postgres-cli.md) of this tutorial. Service Connector is a new Azure service that is currently in public preview. [Section 4.2](../service-connector/tutorial-django-webapp-postgres-cli.md#42-configure-environment-variables-to-connect-the-database) of that tutorial introduces a simplified process for creating the connection.
+ ### 4.3 Run Django database migrations Django database migrations ensure that the schema in the PostgreSQL on Azure database matches with those described in your code.
Learn how to map a custom DNS name to your app:
Learn how App Service runs a Python app: > [!div class="nextstepaction"]
-> [Configure Python app](configure-language-python.md)
+> [Configure Python app](configure-language-python.md)
applied-ai-services How To Create Immersive Reader https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/immersive-reader/how-to-create-immersive-reader.md
The script is designed to be flexible. It will first look for existing Immersive
-ResourceGroupName 'MyResourceGroupName' -ResourceGroupLocation 'westus2' -AADAppDisplayName 'MyOrganizationImmersiveReaderAADApp'
- -AADAppIdentifierUri 'https://MyOrganizationImmersiveReaderAADApp'
+ -AADAppIdentifierUri 'api://MyOrganizationImmersiveReaderAADApp'
-AADAppClientSecret 'SomeStrongPassword'
-AADAppClientSecretExpiration '2021-12-31'
```
The script is designed to be flexible. It will first look for existing Immersive
| ResourceGroupName |Resources are created in resource groups within subscriptions. Supply the name of an existing resource group. If the resource group does not already exist, a new one with this name will be created. |
| ResourceGroupLocation |If your resource group doesn't exist, you need to supply a location in which to create the group. To find a list of locations, run `az account list-locations`. Use the *name* property (without spaces) of the returned result. This parameter is optional if your resource group already exists. |
| AADAppDisplayName |The Azure Active Directory application display name. If an existing Azure AD application is not found, a new one with this name will be created. This parameter is optional if the Azure AD application already exists. |
- | AADAppIdentifierUri |The URI for the Azure AD app. If an existing Azure AD app is not found, a new one with this URI will be created. For example, `https://immersivereaderaad-mycompany`. |
+ | AADAppIdentifierUri |The URI for the Azure AD app. If an existing Azure AD app is not found, a new one with this URI will be created. For example, `api://MyOrganizationImmersiveReaderAADApp`. Here we are using the default Azure AD URI scheme prefix of `api://` for compatibility with the [Azure AD policy of using verified domains](../../active-directory/develop/reference-breaking-changes.md#appid-uri-in-single-tenant-applications-will-require-use-of-default-scheme-or-verified-domains). |
| AADAppClientSecret |A password you create that will be used later to authenticate when acquiring a token to launch the Immersive Reader. The password must be at least 16 characters long, contain at least 1 special character, and contain at least 1 numeric character. To manage Azure AD application client secrets after you've created this resource please visit https://portal.azure.com and go to Home -> Azure Active Directory -> App Registrations -> `[AADAppDisplayName]` -> Certificates and Secrets blade -> Client Secrets section (as shown in the "Manage your Azure AD application secrets" screenshot below). |
| AADAppClientSecretExpiration |The date or datetime after which your `[AADAppClientSecret]` will expire (e.g. '2020-12-31T11:59:59+00:00' or '2020-12-31'). |
azure-app-configuration Concept Soft Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/concept-soft-delete.md
+
+ Title: Soft Delete in Azure App Configuration
+description: Soft Delete in Azure App Configuration
+Last updated : 03/01/2022
+# Soft delete
+
+Azure App Configuration's soft delete feature allows recovery of your data, such as key-values, feature flags, and the revision history of a deleted store. It's automatically enabled for all stores in the standard tier. In this article, learn more about the soft delete feature and its functionality.
+
+Learn how to [recover Azure App Configuration stores](./howto-recover-deleted-stores-in-azure-app-configuration.md) using the soft delete feature.
+
+> [!NOTE]
+> When an App Configuration store is soft-deleted, services that are integrated with the store will be deleted. For example, Azure RBAC role assignments, managed identities, Event Grid subscriptions, and private endpoints. Recovering a soft-deleted App Configuration store will not restore these services. They will need to be recreated.
+
+## Scenarios
+
+The soft delete feature addresses the recovery of deleted stores, whether the deletion was accidental or intentional. It acts as a safeguard in the following scenarios:
+
+* **Recovery of a deleted App Configuration store**: A deleted App Configuration store can be recovered within the retention period.
+
+* **Permanent deletion of an App Configuration store**: This feature lets you permanently delete an App Configuration store.
+
+## Recover
+Recover is the operation that returns a soft-deleted store to an active state, in which it can again serve requests for configuration and feature management.
+
+## Retention period
+The retention period specifies the time, in days, for which a soft-deleted store is retained. This value can only be set when the store is created, and once set, it can't be changed. Once the retention period elapses, the store is permanently deleted automatically.
+
+## Purge
+Purge is the operation that permanently deletes a store in a soft-deleted state, provided the store doesn't have purge protection enabled. To recreate an App Configuration store with the same name as a deleted store, you need to purge the deleted store first if it isn't already past its retention period.
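For reference, a hedged sketch of these operations, assuming the soft delete commands available in recent Azure CLI builds:

```console
az appconfig list-deleted                   # enumerate soft-deleted stores
az appconfig recover --name <store-name>    # return a store to the active state
az appconfig purge --name <store-name>      # permanently delete a store
```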
+
+## Purge protection
+With purge protection enabled, soft-deleted stores can't be purged during the retention period. If it's disabled, a soft-deleted store can be purged before the retention period expires. Once purge protection is enabled on a store, it can't be disabled.
+
+## Permissions to recover or purge store
+
+A user must have the following permissions to recover or purge a soft-deleted App Configuration store. The built-in Contributor and Owner roles already include the required permissions to recover and purge.
+
+- Permission to recover - `Microsoft.AppConfiguration/configurationStores/write`
+
+- Permission to purge - `Microsoft.AppConfiguration/configurationStores/action`
+
+## Billing implications
+
+There are no charges for soft-deleted stores. Once you recover a soft-deleted store, the usual charges apply again. Soft delete isn't available in the free tier.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Recover Azure App Configuration stores](./howto-recover-deleted-stores-in-azure-app-configuration.md)
azure-app-configuration Howto Recover Deleted Stores In Azure App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-recover-deleted-stores-in-azure-app-configuration.md
+
+ Title: Recover Azure App Configuration stores (Preview)
+description: Recover or purge soft-deleted Azure App Configuration stores
+Last updated : 03/01/2022
+# Recover Azure App Configuration stores (Preview)
+
+This article covers the soft delete feature of Azure App Configuration stores. You'll learn how to set the retention policy, enable purge protection, and recover or purge a soft-deleted store.
+
+To learn more about the soft delete feature, see [Soft-Delete in Azure App Configuration](./concept-soft-delete.md).
+
+## Prerequisites
+
+* An Azure subscription - [create one for free](https://azure.microsoft.com/free/dotnet)
+
+* Refer to [Soft-Delete in Azure App Configuration](./concept-soft-delete.md#permissions-to-recover-or-purge-store) for permission requirements.
+
+## Set retention policy and enable purge protection at store creation
+
+To create a new App Configuration store in the Azure portal, follow these steps:
+
+1. Sign in to the [Azure portal](https://portal.azure.com). In the upper-left corner of the home page, select **Create a resource**. In the **Search the Marketplace** box, type *App Configuration* and press Enter.
+
+ :::image type="content" source="./media/how-to-soft-delete-app-config-3.png" alt-text="In MarketPlace Search results, App Configuration is highlighted":::
+
+1. Select **App Configuration** from the search results, and then select **Create**.
+
+ :::image type="content" source="./media/how-to-soft-delete-app-config-7.png" alt-text="In Snapshot, Create option is highlighted":::
+
+1. On the **Create App Configuration** pane, enter the following settings:
+
+ | Setting | Suggested value | Description |
+ ||||
+ | **Subscription** | Your subscription | Select the Azure subscription for your store |
+ | **Resource group** | Your resource group | Select the Azure resource group for your store |
+ | **Resource name** | Globally unique name | Enter a unique resource name to use for the App Configuration store. The name can't be the same as that of a previously deleted store that hasn't been purged. |
+ | **Location** | Your desired Location | Select the region you want to create your configuration store in. |
+ | **Pricing tier** | *Standard* | Select the standard pricing tier. For more information, see the [App Configuration pricing page](https://azure.microsoft.com/pricing/details/app-configuration). |
+ | **Days to retain deleted stores** | Retention period for soft deleted stores | Select the number of days for which you would want the soft deleted stores and their content to be retained. |
+ | **Enable Purge protection** | Purge protection status | Check to enable Purge protection on the store so no one can purge it before the retention period expires. |
+
+ :::image type="content" source="./media/how-to-soft-delete-app-config-6.png" alt-text="In Create, Recovery options are highlighted":::
+
+1. Select **Review + create** to validate your settings.
+1. Select **Create**. The deployment might take a few minutes.
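+
+Alternatively, you can set the same recovery options when creating the store with the Azure CLI. This is a sketch; the `--retention-days` and `--enable-purge-protection` parameters are assumptions based on recent Azure CLI versions, and the names are placeholders:
+
+```azurecli
+az appconfig create --name MyAppConfigStore --resource-group MyResourceGroup --location eastus \
+    --sku Standard --retention-days 7 --enable-purge-protection true
+```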
+
+## Enable Purge Protection in an existing store
+
+1. Sign in to the Azure portal.
+1. Select your standard tier App Configuration store.
+1. Refer to the screenshot below to check the soft delete status of an existing store.
+
+ :::image type="content" source="./media/how-to-soft-delete-app-config-1.png" alt-text="In Overview, Soft-delete is highlighted.":::
+
+1. Click the **Enabled** value of **Soft delete**. You'll be redirected to the **Properties** page of your store. At the bottom of the page, you can review the information related to soft delete. The retention period is shown as **Days to retain deleted stores**; you can't change this value once it's set. The **Purge protection** checkbox shows whether purge protection is enabled for this particular store. Once enabled, purge protection can't be disabled.
+
+ :::image type="content" source="./media/how-to-soft-delete-app-config-2.png" alt-text="In Properties, Soft delete, Days to retain are highlighted.":::
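+
+You can also enable purge protection on an existing store from the command line. This is a sketch; the `--enable-purge-protection` parameter is an assumption based on recent Azure CLI versions:
+
+```azurecli
+az appconfig update --name MyAppConfigStore --resource-group MyResourceGroup --enable-purge-protection true
+```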
+
+## List, recover, or purge a soft deleted App Configuration store
+
+1. Sign in to the Azure portal.
+1. Click on the search bar at the top of the page.
+1. Search for "App Configuration" and click on **App Configuration** under **Services**. Don't click on an individual App Configuration store.
+1. At the top of the screen, click the option to **Manage deleted stores**. A context pane will open on the right side of your screen.
+
+ :::image type="content" source="./media/how-to-soft-delete-app-config-4.png" alt-text="On App Configuration stores, the Manage deleted stores option is highlighted.":::
+
+1. Select your subscription from the dropdown. If you've deleted one or more App Configuration stores, these stores appear in the context pane on the right. Click **Load more** at the bottom of the context pane if not all deleted stores are loaded.
+1. Once you find the store that you wish to recover or purge, select the checkbox next to it. You can select multiple stores.
+1. Click **Recover** at the bottom of the context pane to recover the store, or click **Purge** to permanently delete it. Note that you can't purge a store that has purge protection enabled.
+
+ :::image type="content" source="./media/how-to-soft-delete-app-config-5.png" alt-text="On Manage deleted stores panel, one store is selected, and the Recover button is highlighted.":::
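+
+If you prefer the command line, recent Azure CLI versions also expose soft deleted stores. This is a sketch; command availability is an assumption, and the store name is a placeholder:
+
+```azurecli
+# List all soft deleted stores in the current subscription
+az appconfig list-deleted
+
+# Show the details of a single soft deleted store
+az appconfig show-deleted --name MyAppConfigStore
+```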
+
+## Recover an App Configuration store with customer-managed key enabled
+
+When recovering stores that use customer-managed keys, there are extra steps to perform before you can access the recovered data. This is because the recovered store no longer has a managed identity assigned that has access to the customer-managed key. You should assign a new managed identity to the store and reconfigure the customer-managed key settings to use the newly assigned identity. When updating the key settings to use the new identity, make sure you continue to use the same key from the key vault. For more details on how to use customer-managed keys in App Configuration stores, see [Use customer-managed keys to encrypt your App Configuration data](./concept-customer-managed-keys.md).
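+
+For example, the reconfiguration might look like the following Azure CLI sketch. The encryption parameters are assumptions based on recent CLI versions, the names are placeholders, and the new identity must be granted access to the key vault before the key settings are updated:
+
+```azurecli
+# Assign a new system-assigned managed identity to the recovered store
+az appconfig identity assign --name MyAppConfigStore --resource-group MyResourceGroup
+
+# Reconfigure customer-managed key encryption, keeping the same key vault key as before
+az appconfig update --name MyAppConfigStore --resource-group MyResourceGroup \
+    --encryption-key-name MyKey --encryption-key-vault https://myvault.vault.azure.net
+```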
+
+## Next steps
+> [!div class="nextstepaction"]
+> [Soft-Delete in Azure App Configuration](./concept-soft-delete.md)
azure-arc Quickstart Connect Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/quickstart-connect-cluster.md
For a conceptual look at connecting clusters to Azure Arc, see [Azure Arc-enable
| -- | - |
| `https://management.azure.com` (for Azure Cloud), `https://management.usgovcloudapi.net` (for Azure US Government) | Required for the agent to connect to Azure and register the cluster. |
| `https://<region>.dp.kubernetesconfiguration.azure.com` (for Azure Cloud), `https://<region>.dp.kubernetesconfiguration.azure.us` (for Azure US Government) | Data plane endpoint for the agent to push status and fetch configuration information. |
-| `https://login.microsoftonline.com`, `https://<region>.login.microsoft.com`, `login.windows.net` (for Azure Cloud), `https://login.microsoftonline.us` (for Azure US Government) | Required to fetch and update Azure Resource Manager tokens. |
+| `https://login.microsoftonline.com`, `https://<region>.login.microsoft.com`, `login.windows.net` (for Azure Cloud), `https://login.microsoftonline.us`, `<region>.login.microsoftonline.us` (for Azure US Government) | Required to fetch and update Azure Resource Manager tokens. |
| `https://mcr.microsoft.com`, `https://*.data.mcr.microsoft.com` | Required to pull container images for Azure Arc agents. |
| `https://gbl.his.arc.azure.com` (for Azure Cloud), `https://gbl.his.arc.azure.us` (for Azure US Government) | Required to get the regional endpoint for pulling system-assigned Managed Identity certificates. |
| `https://*.his.arc.azure.com` (for Azure Cloud), `https://usgv.his.arc.azure.us` (for Azure US Government) | Required to pull system-assigned Managed Identity certificates. |
azure-functions Functions App Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-app-settings.md
Connection string for storage account where the function app code and configurat
|||
|WEBSITE_CONTENTAZUREFILECONNECTIONSTRING|`DefaultEndpointsProtocol=https;AccountName=...`|
-Only used when deploying to a Premium plan or to a Consumption plan running on Windows. Not supported for Consumptions plans running Linux. Changing or removing this setting may cause your function app to not start. To learn more, see [this troubleshooting article](functions-recover-storage-account.md#storage-account-application-settings-were-deleted).
+Only used when deploying to a Windows or Linux Premium plan or to a Windows Consumption plan. Not supported for Linux Consumption plans or Windows or Linux Dedicated plans. Changing or removing this setting may cause your function app to not start. To learn more, see [this troubleshooting article](functions-recover-storage-account.md#storage-account-application-settings-were-deleted).
## WEBSITE\_CONTENTOVERVNET
The file path to the function app code and configuration in an event-driven scal
|||
|WEBSITE_CONTENTSHARE|`functionapp091999e2`|
-Only used when deploying to a Premium plan or to a Consumption plan running on Windows. Not supported for Consumptions plans running Linux. Changing or removing this setting may cause your function app to not start. To learn more, see [this troubleshooting article](functions-recover-storage-account.md#storage-account-application-settings-were-deleted).
+Only used when deploying to a Windows or Linux Premium plan or to a Windows Consumption plan. Not supported for Linux Consumption plans or Windows or Linux Dedicated plans. Changing or removing this setting may cause your function app to not start. To learn more, see [this troubleshooting article](functions-recover-storage-account.md#storage-account-application-settings-were-deleted).
When using an Azure Resource Manager template to create a function app during deployment, don't include WEBSITE_CONTENTSHARE in the template. This slot setting is generated during deployment. To learn more, see [Automate resource deployment for your function app](functions-infrastructure-as-code.md?tabs=windows#create-a-function-app).
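+
+For example, you can verify the current value of this setting with the Azure CLI (a sketch; the function app and resource group names are placeholders):
+
+```azurecli
+az functionapp config appsettings list --name MyFunctionApp --resource-group MyResourceGroup \
+    --query "[?name=='WEBSITE_CONTENTSHARE']"
+```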
azure-functions Functions Bindings Service Bus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-service-bus.md
This section describes the configuration settings available for this binding, wh
"maxAutoLockRenewalDuration": "00:05:00", "maxConcurrentCalls": 16, "maxConcurrentSessions": 8,
- "maxMessages": 1000,
+ "maxMessageBatchSize": 1000,
"sessionIdleTimeout": "00:01:00", "enableCrossEntityTransactions": false }
azure-monitor Azure Monitor Agent Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-manage.md
description: Options for managing the Azure Monitor agent (AMA) on Azure virtual
Previously updated : 01/27/2022 Last updated : 03/09/2022
We strongly recommended to update to generally available versions listed as foll
| September 2021 | <ul><li>Fixed issue causing data loss on restarting the agent</li><li>Fixed issue for Arc Windows servers</li></ul> | 1.1.3.2<sup>Hotfix</sup> | 1.12.2.0 <sup>1</sup> |
| December 2021 | <ul><li>Fixed issues impacting Linux Arc-enabled servers</li><li>'Heartbeat' table > 'Category' column reports "Azure Monitor Agent" in Log Analytics for Windows</li></ul> | 1.1.4.0 | 1.14.7.0<sup>2</sup> |
| January 2022 | <ul><li>Syslog RFC compliance for Linux</li><li>Fixed issue for Linux perf counters not flowing on restart</li><li>Fixed installation failure on Windows Server 2008 R2 SP1</li></ul> | 1.1.5.1<sup>Hotfix</sup> | 1.15.2.0<sup>Hotfix</sup> |
+| February 2022 | <ul><li>Bug fixes for the AMA Client installer (private preview)</li><li>Versioning fix to reflect appropriate Windows major/minor/hotfix versions</li></ul> | 1.2.0.0 | Not yet available |
<sup>Hotfix</sup> Do not use AMA Linux versions v1.10.7, v1.15.1 and AMA Windows v1.1.3.1, v1.1.5.0. Please use hotfixed versions listed above.

<sup>1</sup> Known issue: No data collected from Linux Arc-enabled servers
azure-monitor Azure Monitor Agent Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-overview.md
description: Overview of the Azure Monitor agent, which collects monitoring data
Previously updated : 3/3/2022 Last updated : 3/9/2022

# Azure Monitor agent overview
-The Azure Monitor agent (AMA) collects monitoring data from the guest operating system of Azure virtual machines and delivers it to Azure Monitor. This article provides an overview of the Azure Monitor agent and includes information on how to install it and how to configure data collection.
+The Azure Monitor agent (AMA) collects monitoring data from the guest operating system of Azure virtual machines and delivers it to Azure Monitor. This article provides an overview of the Azure Monitor agent and includes information on how to install it and how to configure data collection.
+Here's an **introductory video** about the new agent, including a quick demo of how to set it up by using the Azure portal: [ITOps Talk: Azure Monitor Agent](https://www.youtube.com/watch?v=f8bIrFU8tCs)
## Relationship to other agents

The Azure Monitor agent replaces the following legacy agents that are currently used by Azure Monitor to collect guest data from virtual machines ([view known gaps](../faq.yml)):
The following table shows the current support for the Azure Monitor agent with A
| Azure Monitor feature | Current support | More information |
|:|:|:|
| File based logs and Windows IIS logs | Private preview | [Sign-up link](https://aka.ms/amadcr-privatepreviews) |
+| Windows Client OS installer | Private preview | [Sign-up link](https://aka.ms/amadcr-privatepreviews) |
| [VM insights](../vm/vminsights-overview.md) | Private preview | [Sign-up link](https://aka.ms/amadcr-privatepreviews) |
| [Connect using private links](azure-monitor-agent-data-collection-endpoint.md) | Public preview | No sign-up needed |
azure-monitor Java 2X Micrometer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-2x-micrometer.md
Add the following dependencies to your pom.xml or build.gradle file:
* [Application Insights spring-boot-starter](https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/spring/azure-spring-boot-starter) 2.5.0 or later
* Micrometer Azure Registry 1.1.0 or above
-* [Micrometer Spring Legacy](https://micrometer.io/docs/ref/spring/1.5) 1.1.0 or above (this backports the autoconfig code in the Spring framework).
+* [Micrometer Spring Legacy](https://docs.spring.io/spring-boot/docs/current/reference/htmlsingle/#production-ready-metrics) 1.1.0 or above (this backports the autoconfig code in the Spring framework).
* [ApplicationInsights Resource](./create-new-resource.md)

Steps
azure-monitor Java Standalone Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-troubleshoot.md
In this article, we cover some of the common issues that you might face while in
## Check the self-diagnostic log file
-By default, Application Insights Java 3.x produces a log file named `applicationinsights.log` in the same directory that holds the `applicationinsights-agent-3.2.2.jar` file.
+By default, Application Insights Java 3.x produces a log file named `applicationinsights.log` in the same directory
+that holds the `applicationinsights-agent-3.2.7.jar` file.
This log file is the first place to check for hints to any issues you might be experiencing.
+If no log file is generated, check that your Java application has write permission to the directory that holds the
+`applicationinsights-agent-3.2.7.jar` file.
+
+If still no log file is generated, check the stdout log from your Java application. Application Insights Java 3.x
+should log any errors to stdout that would prevent it from logging to its normal location.
## JVM fails to start

If the JVM fails to start with "Error opening zip file or JAR manifest missing",
In this case, the server side is the Application Insights ingestion endpoint or
If you're using Java 9 or later, check whether the JVM has the `jdk.crypto.cryptoki` module included in the jmods folder. Also, if you're building a custom Java runtime using `jlink`, make sure to include the same module.
+Otherwise, these cipher suites should already be part of modern Java 8+ distributions,
+so we recommend checking where you installed your Java distribution from, and investigating why the security
+providers in that Java distribution's `java.security` configuration file differ from those of standard Java distributions.
azure-monitor Basic Logs Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/basic-logs-configure.md
You can currently configure the following tables for Basic Logs:
> [!NOTE]
> Tables created with the [Data Collector API](data-collector-api.md) do not support Basic Logs.
+
## Set table configuration
+# [API](#tab/api-1)
+
To configure a table for Basic Logs or Analytics Logs, call the **Tables - Update** API:

```http
PATCH https://management.azure.com/subscriptions/<subscriptionId>/resourcegroups
> [!IMPORTANT]
> Use the Bearer token for authentication. Read more about [using Bearer tokens](https://social.technet.microsoft.com/wiki/contents/articles/51140.azure-rest-management-api-the-quickest-way-to-get-your-bearer-token.aspx).
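+
+For example, you can obtain a Bearer token for Azure Resource Manager with the Azure CLI (a sketch for testing purposes):
+
+```azurecli
+az account get-access-token --resource https://management.azure.com --query accessToken --output tsv
+```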
-### Request body
+**Request body**
+
|Name | Type | Description |
| | | |
|properties.plan | string | The table plan. Possible values are *Analytics* and *Basic*.|
-### Example
+**Example**
+ This example configures the `ContainerLog` table for Basic Logs.
-#### Sample request
+
+**Sample request**
```http
PATCH https://management.azure.com/subscriptions/ContosoSID/resourcegroups/ContosoRG/providers/Microsoft.OperationalInsights/workspaces/ContosoWorkspace/tables/ContainerLog?api-version=2021-12-01-preview
Use this request body to change to Analytics Logs:
}
```
-#### Sample response
+**Sample response**
+ This is the response for a table changed to Basic Logs. Status code: 200
Status code: 200
}
```
+# [CLI](#tab/cli-1)
+
+To configure a table for Basic Logs or Analytics Logs, run the [az monitor log-analytics workspace table update](/cli/azure/monitor/log-analytics/workspace/table#az-monitor-log-analytics-workspace-table-update) command and set the `--plan` parameter to `Basic` or `Analytics`.
+
+For example:
+
+- To set Basic Logs:
+
+ ```azurecli
+ az monitor log-analytics workspace table update --subscription ContosoSID --resource-group ContosoRG --workspace-name ContosoWorkspace \
+ --name ContainerLog --plan Basic
+ ```
+
+- To set Analytics Logs:
+
+ ```azurecli
+ az monitor log-analytics workspace table update --subscription ContosoSID --resource-group ContosoRG --workspace-name ContosoWorkspace \
+ --name ContainerLog --plan Analytics
+ ```
+
## Check table configuration

# [Portal](#tab/portal-1)
Status code: 200
}
```
+# [CLI](#tab/cli-2)
+
+To check the configuration of a table, run the [az monitor log-analytics workspace table show](/cli/azure/monitor/log-analytics/workspace/table#az-monitor-log-analytics-workspace-table-show) command.
+
+For example:
+
+```azurecli
+az monitor log-analytics workspace table show --subscription ContosoSID --resource-group ContosoRG --workspace-name ContosoWorkspace \
+ --name Syslog --output table
+```
+ ## Retention and archiving of Basic Logs
azure-monitor Data Retention Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/data-retention-archive.md
To set the default workspace retention policy:
## Set retention and archive policy by table
-You can set retention policies for individual tables, except for workspaces in the legacy Free Trial pricing tier, using Azure Resource Manager APIs. You cannot currently configure data retention for individual tables in the Azure portal.
+You can set retention policies for individual tables, except for workspaces in the legacy Free Trial pricing tier, using Azure Resource Manager APIs. You can't currently configure data retention for individual tables in the Azure portal.
You can keep data in interactive retention between 4 and 730 days. You can set the archive period for a total retention time of up to 2,555 days (seven years).
-Each table is a sub-resource of the workspace it's in. For example, you can address the `SecurityEvent` table in [Azure Resource Manager](../../azure-resource-manager/management/overview.md) as:
+Each table is a subresource of the workspace it's in. For example, you can address the `SecurityEvent` table in [Azure Resource Manager](../../azure-resource-manager/management/overview.md) as:
```
/subscriptions/00000000-0000-0000-0000-00000000000/resourceGroups/MyResourceGroupName/providers/Microsoft.OperationalInsights/workspaces/MyWorkspaceName/Tables/SecurityEvent
```
-Note that the table name is case-sensitive.
+The table name is case-sensitive.
-### Get retention and archive policy by table
-
-To get the retention policy of a particular table (in this example, `SecurityEvent`), Call the **Tables - Get** API:
-
-```JSON
-GET /subscriptions/00000000-0000-0000-0000-00000000000/resourceGroups/MyResourceGroupName/providers/Microsoft.OperationalInsights/workspaces/MyWorkspaceName/Tables/SecurityEvent?api-version=2021-12-01-preview
-```
-
-To get all table-level retention policies in your workspace, don't set a table name; for example:
-
-```JSON
-GET /subscriptions/00000000-0000-0000-0000-00000000000/resourceGroups/MyResourceGroupName/providers/Microsoft.OperationalInsights/workspaces/MyWorkspaceName/Tables?api-version=2021-12-01-preview
-```
-### Set the retention and archive policy for a table
+# [API](#tab/api-1)
To set the retention and archive duration for a table, call the **Tables - Update** API:
You can use either PUT or PATCH, with the following difference:
- The **PUT** API sets *retentionInDays* and *totalRetentionInDays* to the default value if you don't set non-null values.
- The **PATCH** API doesn't change the *retentionInDays* or *totalRetentionInDays* values if you don't specify values.
+**Request body**
-#### Request body
The request body includes the values in the following table.

|Name | Type | Description |
The request body includes the values in the following table.
|properties.retentionInDays | integer | The table's data retention in days. This value can be between 4 and 730; or 1095, 1460, 1826, 2191, or 2556. <br/>Setting this property to null will default to the workspace retention. For a Basic Logs table, the value is always 8. |
|properties.totalRetentionInDays | integer | The table's total data retention including archive period. Set this property to null if you don't want to archive data. |
-#### Example
-The following table sets table retention to workspace default of 30 days, and total of 2 years. This means that the archive duration would be 23 months.
-###### Request
+**Example**
+
+This example sets the table's interactive retention to the workspace default of 30 days, and the total retention to two years. This means the archive duration is 23 months.
+
+**Request**
```http
PATCH https://management.azure.com/subscriptions/00000000-0000-0000-0000-00000000000/resourcegroups/testRG/providers/Microsoft.OperationalInsights/workspaces/testWS/tables/CustomLog_CL?api-version=2021-12-01-preview
```
-#### Request body
+**Request body**
```http
{
  "properties": {
PATCH https://management.azure.com/subscriptions/00000000-0000-0000-0000-0000000
} ```
-###### Response
+**Response**
Status code: 200
Status code: 200
...
}
```
+
+# [CLI](#tab/cli-1)
+
+To set the retention and archive duration for a table, run the [az monitor log-analytics workspace table update](/cli/azure/monitor/log-analytics/workspace/table#az-monitor-log-analytics-workspace-table-update) command and pass the `--retention-time` and `--total-retention-time` parameters.
+
+This example sets the table's interactive retention to 30 days, and the total retention to two years. This means the archive duration is 23 months:
+
+```azurecli
+az monitor log-analytics workspace table update --subscription ContosoSID --resource-group ContosoRG --workspace-name ContosoWorkspace \
+--name AzureMetrics --retention-time 30 --total-retention-time 730
+```
+
+To reapply the workspace's default interactive retention value to the table and reset its total retention to 0, run the [az monitor log-analytics workspace table update](/cli/azure/monitor/log-analytics/workspace/table#az-monitor-log-analytics-workspace-table-update) command with the `--retention-time` and `--total-retention-time` parameters set to `-1`.
+
+For example:
+
+```azurecli
+az monitor log-analytics workspace table update --subscription ContosoSID --resource-group ContosoRG --workspace-name ContosoWorkspace \
+ --name Syslog --retention-time -1 --total-retention-time -1
+```
++
+## Get retention and archive policy by table
+
+# [API](#tab/api-2)
+
+To get the retention policy of a particular table (in this example, `SecurityEvent`), call the **Tables - Get** API:
+
+```JSON
+GET /subscriptions/00000000-0000-0000-0000-00000000000/resourceGroups/MyResourceGroupName/providers/Microsoft.OperationalInsights/workspaces/MyWorkspaceName/Tables/SecurityEvent?api-version=2021-12-01-preview
+```
+
+To get all table-level retention policies in your workspace, don't set a table name; for example:
+
+```JSON
+GET /subscriptions/00000000-0000-0000-0000-00000000000/resourceGroups/MyResourceGroupName/providers/Microsoft.OperationalInsights/workspaces/MyWorkspaceName/Tables?api-version=2021-12-01-preview
+```
+
+# [CLI](#tab/cli-2)
+
+To get the retention policy of a particular table, run the [az monitor log-analytics workspace table show](/cli/azure/monitor/log-analytics/workspace/table#az-monitor-log-analytics-workspace-table-show) command.
+
+For example:
+
+```azurecli
+az monitor log-analytics workspace table show --subscription ContosoSID --resource-group ContosoRG --workspace-name ContosoWorkspace \
+ --name SecurityEvent
+```
## Purge retained data

When you shorten an existing retention policy, it takes several days for Azure Monitor to remove data that you no longer want to keep.
-If you set the data retention policy to 30 days, you can purge older data immediately using the `immediatePurgeDataOn30Days` parameter in Azure Resource Manager. This can be useful when you need to remove personal data immediately. The immediate purge functionality is not available through the Azure portal.
+If you set the data retention policy to 30 days, you can purge older data immediately using the `immediatePurgeDataOn30Days` parameter in Azure Resource Manager. The purge functionality is useful when you need to remove personal data immediately. The immediate purge functionality isn't available through the Azure portal.
Note that workspaces with a 30-day retention policy might actually keep data for 31 days if you don't set the `immediatePurgeDataOn30Days` parameter.
-You can also purge data from a workspace using the [purge feature](personal-data-mgmt.md#how-to-export-and-delete-private-data), which removes personal data. You cannot purge data from archived logs.
+You can also purge data from a workspace using the [purge feature](personal-data-mgmt.md#how-to-export-and-delete-private-data), which removes personal data. You can't purge data from archived logs.
The Log Analytics [Purge API](/rest/api/loganalytics/workspacepurge/purge) doesn't affect retention billing. **To lower retention costs, decrease the retention period for the workspace or for specific tables.**
azure-monitor Logs Dedicated Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/logs-dedicated-clusters.md
Azure Monitor Logs Dedicated Clusters are a deployment option that enables advanced capabilities for Azure Monitor Logs customers. Customers can select which of their Log Analytics workspaces should be hosted on dedicated clusters.
-Dedicated clusters require customers to commit for at least 500 GB of data ingestion per day. You can migrate an existing workspace to a dedicated cluster with no data loss or service interruption.
+Dedicated clusters require customers to commit to at least 500 GB of data ingestion per day. You can link an existing workspace to a dedicated cluster and unlink it with no data loss or service interruption.
Capabilities that require dedicated clusters:
Capabilities that require dedicated clusters:
Dedicated clusters are managed with an Azure resource that represents Azure Monitor Log clusters. Operations are performed programmatically using [CLI](/cli/azure/monitor/log-analytics/cluster), [PowerShell](/powershell/module/az.operationalinsights) or the [REST](/rest/api/loganalytics/clusters).
-Once a cluster is created, workspaces can be linked to it and new ingested data to them is stored on the cluster. Workspaces can be unlinked from a cluster at any time and new data is stored in shared Log Analytics clusters. The link and unlink operation doesn't affect your queries and the access to data before and after the operation with subjection to retention in workspaces. The Cluster and workspaces must be in the same region to allow linking.
+Once a cluster is created, workspaces can be linked to it, and newly ingested data for them is stored on the cluster. Workspaces can be unlinked from a cluster at any time, and new data is then stored on shared Log Analytics clusters. The link and unlink operations don't affect your queries or your access to data from before and after the operation. The cluster and workspaces must be in the same region.
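+
+For example, you can link a workspace to a cluster with the Azure CLI. This is a sketch that assumes the documented linked-service name `cluster`; the resource names and IDs are placeholders:
+
+```azurecli
+az monitor log-analytics workspace linked-service create --resource-group MyResourceGroup \
+    --workspace-name MyWorkspace --name cluster \
+    --write-access-resource-id "/subscriptions/<subscription-id>/resourceGroups/MyResourceGroup/providers/Microsoft.OperationalInsights/clusters/MyCluster"
+```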
All operations on the cluster level require the `Microsoft.OperationalInsights/clusters/write` action permission on the cluster. This permission could be granted via the Owner or Contributor that contains the `*/write` action or via the Log Analytics Contributor role that contains the `Microsoft.OperationalInsights/*` action. For more information on Log Analytics permissions, see [Manage access to log data and workspaces in Azure Monitor](./manage-access.md).
The cluster Commitment Tier level is configured programmatically with Azure Reso
There are two modes of billing for usage on a cluster. These can be specified by the `billingType` parameter when configuring your cluster.
-1. **Cluster (default)**: Billing for ingested data is done at the cluster level. The ingested data quantities from each workspace associated to a cluster are aggregated to calculate the daily bill for the cluster.
+1. **Cluster (default)**--Billing for ingested data is done at the cluster level. The ingested data quantities from each workspace associated to a cluster are aggregated to calculate the daily bill for the cluster.
-2. **Workspaces**: The Commitment Tier costs for your Cluster are attributed proportionately to the workspaces in the cluster, by each workspace's data ingestion volume (after accounting for per-node allocations from [Microsoft Defender for Cloud](../../security-center/index.yml) for each workspace.) This full details of this pricing model are explained [here](./manage-cost-storage.md#log-analytics-dedicated-clusters).
+2. **Workspaces**--The Commitment Tier costs for your cluster are attributed proportionately to the workspaces in the cluster, by each workspace's data ingestion volume (after accounting for per-node allocations from [Microsoft Defender for Cloud](../../security-center/index.yml) for each workspace). Details of the pricing model are explained [here](./manage-cost-storage.md#log-analytics-dedicated-clusters).
-If your workspace is using legacy Per Node pricing tier, when it is linked to a cluster it will be billed based on data ingested against the cluster's Commitment Tier, and no longer Per Node. Per-node data allocations from Microsoft Defender for Cloud will continue to be applied.
+If your linked workspace is using the legacy Per Node pricing tier, it's billed based on data ingested against the cluster's Commitment Tier, and no longer Per Node. Per-node data allocations from Microsoft Defender for Cloud continue to apply.
+
+When you link workspaces to a cluster, the pricing tier is changed to cluster, and ingestion is billed based on the cluster's Commitment Tier. Workspaces can be unlinked from a cluster at any time, and the pricing tier then changes to per-GB.
Complete billing details for Log Analytics dedicated clusters are available [here](./manage-cost-storage.md#log-analytics-dedicated-clusters).
The user account that creates the clusters must have the standard Azure resource
After you create your cluster resource, you can edit additional properties such as *sku*, *keyVaultProperties*, or *billingType*. See more details below.
-You can have up to 2 active clusters per subscription per region. If the cluster is deleted, it is still reserved for 14 days. You can have up to 4 reserved clusters per subscription per region (active or recently deleted).
+You can have up to two active clusters per subscription per region. If the cluster is deleted, it is still reserved for 14 days. You can have up to four reserved clusters per subscription per region (active or recently deleted).
> [!NOTE]
> Cluster creation triggers resource allocation and provisioning. This operation can take a few hours to complete.
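+
+For example, a minimal cluster creation with the Azure CLI might look like the following sketch (names are placeholders; the Commitment Tier is set with `--sku-capacity`):
+
+```azurecli
+az monitor log-analytics cluster create --resource-group MyResourceGroup --name MyCluster \
+    --location eastus --sku-capacity 500
+```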
Authorization: Bearer <token>
## Change cluster properties
-After you create your cluster resource and it is fully provisioned, you can edit additional properties using CLI, PowerShell or REST API. The additional properties that can be set after the cluster has been provisioned include the following:
+After you create your cluster resource and it's fully provisioned, you can edit additional properties using CLI, PowerShell or REST API. The additional properties that can be set after the cluster has been provisioned include the following:
- **keyVaultProperties** - Contains the key in Azure Key Vault with the following parameters: *KeyVaultUri*, *KeyName*, *KeyVersion*. See [Update cluster with Key identifier details](../logs/customer-managed-keys.md#update-cluster-with-key-identifier-details).
- **Identity** - The identity used to authenticate to your Key Vault. This can be system-assigned or user-assigned.
- **billingType** - Billing attribution for the cluster resource and its data. It includes the following values:
- - **Cluster (default)** - The costs for your cluster are attributed to the cluster resource.
- - **Workspaces** - The costs for your cluster are attributed proportionately to the workspaces in the Cluster, with the cluster resource being billed some of the usage if the total ingested data for the day is under the commitment tier. See [Log Analytics Dedicated Clusters](./manage-cost-storage.md#log-analytics-dedicated-clusters) to learn more about the cluster pricing model.
+ - **Cluster (default)**--The costs for your cluster are attributed to the cluster resource.
+ - **Workspaces**--The costs for your cluster are attributed proportionately to the workspaces in the Cluster, with the cluster resource being billed some of the usage if the total ingested data for the day is under the commitment tier. See [Log Analytics Dedicated Clusters](./manage-cost-storage.md#log-analytics-dedicated-clusters) to learn more about the cluster pricing model.
>[!IMPORTANT]
>Cluster update should not include both identity and key identifier details in the same operation. If you need to update both, the update should be in two consecutive operations.

> [!NOTE]
-> The *billingType* property is not supported in CLI.
+> The *billingType* property isn't supported in CLI.
## Get all clusters in resource group
Content-type: application/json
### Update billingType in cluster
+### PowerShell
+
+```powershell
+Select-AzSubscription "cluster-subscription-id"
+
+Update-AzOperationalInsightsCluster -ResourceGroupName "resource-group-name" -ClusterName "cluster-name" -BillingType "Workspaces"
+```
The *billingType* property determines the billing attribution for the cluster and its data:

- *Cluster* (default) -- The billing is attributed to the Cluster resource
- *Workspaces* -- The billing is attributed to linked workspaces proportionally. When data volume from all workspaces is below the Commitment Tier level, the remaining volume is attributed to the cluster
Content-type: application/json
### Unlink a workspace from cluster
-You can unlink a workspace from a cluster. After unlinking a workspace from the cluster, new data associated with this workspace is not sent to the dedicated cluster. Also, the workspace billing is no longer done via the cluster.
+You can unlink a workspace from a cluster; new data for the workspace is then no longer ingested to the cluster. Also, the workspace pricing tier is set to per-GB.
Old data of the unlinked workspace might be left on the cluster. If this data is encrypted using customer-managed keys (CMK), the Key Vault secrets are kept. The system abstracts this change from Log Analytics users. Users can query the workspace as usual. The system performs cross-cluster queries on the backend as needed, with no indication to users.

> [!WARNING]
Remove-AzOperationalInsightsLinkedService -ResourceGroupName "resource-group-nam
## Delete cluster
-It's recommended that you unlink all workspaces from a dedicated cluster before deleting it. You need to have *write* permissions on the cluster resource. When deleting a cluster, you are losing access to all data ingested to the cluster from linked workspaces and from workspaces that were linked previously. This operation is not reversible. If you delete your cluster when workspaces are linked, these get unlinked automatically and new data get ingested to Log Analytics storage instead.
+It's recommended that you unlink all workspaces from a dedicated cluster before deleting it. You need to have *write* permissions on the cluster resource. When deleting a cluster, you lose access to all data ingested to the cluster from linked workspaces and from workspaces that were linked previously. This operation isn't reversible. If you delete your cluster when workspaces are linked, these get unlinked automatically and new data gets ingested to Log Analytics storage instead.
-A cluster resource that was deleted in the last 14 days is kept in soft-delete state and its name remained reserved. After the soft-delete period, the cluster is permanently deleted and it's name can be used.
+A cluster resource that was deleted in the last 14 days is kept in soft-delete state and its name remained reserved. After the soft-delete period, the cluster is permanently deleted and its name can be reused to create a cluster.
> [!WARNING]
> - The recovery of soft-deleted clusters isn't supported, and a cluster can't be recovered once deleted.
-> - There is a limit of 4 clusters per subscription. Both active and soft-deleted clusters are counted as part of this. Customers should not create recurrent procedures that create and delete clusters. It has a significant impact on Log Analytics backend systems.
+> - There is a limit of 4 clusters per subscription. Both active and soft-deleted clusters are counted as part of this. Customers shouldn't create recurrent procedures that create and delete clusters. It has a significant impact on Log Analytics backend systems.
Use the following commands to delete a cluster:
Authorization: Bearer <token>
- [Double encryption](../../storage/common/storage-service-encryption.md#doubly-encrypt-data-with-infrastructure-encryption) is configured automatically for clusters created from October 2020 in supported regions. You can verify if your cluster is configured for double encryption by sending a GET request on the cluster and observing that the `isDoubleEncryptionEnabled` value is `true` for clusters with Double encryption enabled. - If you create a cluster and get an error "region-name doesn't support Double Encryption for clusters.", you can still create the cluster without Double encryption by adding `"properties": {"isDoubleEncryptionEnabled": false}` in the REST request body.
- - Double encryption setting can not be changed after the cluster has been created.
+ - The double encryption setting can't be changed after the cluster has been created.
+
+- Deleting a workspace is permitted while it's linked to a cluster. If you decide to [recover](./delete-workspace.md#recover-workspace) the workspace during the [soft-delete](./delete-workspace.md#soft-delete-behavior) period, it returns to its previous state and remains linked to the cluster.
## Troubleshooting

-- If you get conflict error when creating a cluster, it may be that you have deleted your cluster in the last 14 days and it's in a soft-delete state. The cluster name remains reserved during the soft-delete period and you can't create a new cluster with that name. The name is released after the soft-delete period when the cluster is permanently deleted.
+- If you get a conflict error when creating a cluster, it may be that you've deleted your cluster in the last 14 days and it's in a soft-delete state. The cluster name remains reserved during the soft-delete period, and you can't create a new cluster with that name. The name is released after the soft-delete period, when the cluster is permanently deleted.
- If you update your cluster while the cluster is at provisioning or updating state, the update will fail.
Authorization: Bearer <token>
### Cluster Create

-- 400 -- Cluster name is not valid. Cluster name can contain characters a-z, A-Z, 0-9 and length of 3-63.
-- 400 -- The body of the request is null or in bad format.
-- 400 -- SKU name is invalid. Set SKU name to capacityReservation.
-- 400 -- Capacity was provided but SKU is not capacityReservation. Set SKU name to capacityReservation.
-- 400 -- Missing Capacity in SKU. Set Capacity value to 500, 1000, 2000 or 5000 GB/day.
-- 400 -- Capacity is locked for 30 days. Decreasing capacity is permitted 30 days after update.
-- 400 -- No SKU was set. Set the SKU name to capacityReservation and Capacity value to 500, 1000, 2000 or 5000 GB/day.
-- 400 -- Identity is null or empty. Set Identity with systemAssigned type.
-- 400 -- KeyVaultProperties are set on creation. Update KeyVaultProperties after cluster creation.
-- 400 -- Operation cannot be executed now. Async operation is in a state other than succeeded. Cluster must complete its operation before any update operation is performed.
+- 400--Cluster name is not valid. Cluster name can contain characters a-z, A-Z, 0-9 and length of 3-63.
+- 400--The body of the request is null or in bad format.
+- 400--SKU name is invalid. Set SKU name to capacityReservation.
+- 400--Capacity was provided but SKU is not capacityReservation. Set SKU name to capacityReservation.
+- 400--Missing Capacity in SKU. Set Capacity value to 500, 1000, 2000 or 5000 GB/day.
+- 400--Capacity is locked for 30 days. Decreasing capacity is permitted 30 days after update.
+- 400--No SKU was set. Set the SKU name to capacityReservation and Capacity value to 500, 1000, 2000 or 5000 GB/day.
+- 400--Identity is null or empty. Set Identity with systemAssigned type.
+- 400--KeyVaultProperties are set on creation. Update KeyVaultProperties after cluster creation.
+- 400--Operation cannot be executed now. Async operation is in a state other than succeeded. Cluster must complete its operation before any update operation is performed.
### Cluster Update

-- 400 -- Cluster is in deleting state. Async operation is in progress. Cluster must complete its operation before any update operation is performed.
-- 400 -- KeyVaultProperties is not empty but has a bad format. See [key identifier update](../logs/customer-managed-keys.md#update-cluster-with-key-identifier-details).
-- 400 -- Failed to validate key in Key Vault. Could be due to lack of permissions or when key doesn't exist. Verify that you [set key and access policy](../logs/customer-managed-keys.md#grant-key-vault-permissions) in Key Vault.
-- 400 -- Key is not recoverable. Key Vault must be set to Soft-delete and Purge-protection. See [Key Vault documentation](../../key-vault/general/soft-delete-overview.md)
-- 400 -- Operation cannot be executed now. Wait for the Async operation to complete and try again.
-- 400 -- Cluster is in deleting state. Wait for the Async operation to complete and try again.
+- 400--Cluster is in deleting state. Async operation is in progress. Cluster must complete its operation before any update operation is performed.
+- 400--KeyVaultProperties is not empty but has a bad format. See [key identifier update](../logs/customer-managed-keys.md#update-cluster-with-key-identifier-details).
+- 400--Failed to validate key in Key Vault. Could be due to lack of permissions or when key doesn't exist. Verify that you [set key and access policy](../logs/customer-managed-keys.md#grant-key-vault-permissions) in Key Vault.
+- 400--Key is not recoverable. Key Vault must be set to Soft-delete and Purge-protection. See [Key Vault documentation](../../key-vault/general/soft-delete-overview.md)
+- 400--Operation cannot be executed now. Wait for the Async operation to complete and try again.
+- 400--Cluster is in deleting state. Wait for the Async operation to complete and try again.
### Cluster Get
+ - 404--Cluster not found, the cluster may have been deleted. If you try to create a cluster with that name and get conflict, the cluster is in soft-delete for 14 days. You can contact support to recover it, or use another name to create a new cluster.
### Cluster Delete
+ - 409--Can't delete a cluster while in provisioning state. Wait for the Async operation to complete and try again.
### Workspace link

-- 404 -- Workspace not found. The workspace you specified doesn't exist or was deleted.
-- 409 -- Workspace link or unlink operation in process.
-- 400 -- Cluster not found, the cluster you specified doesn't exist or was deleted. If you try to create a cluster with that name and get conflict, the cluster is in soft-delete for 14 days. You can contact support to recover it.
+- 404--Workspace not found. The workspace you specified doesn't exist or was deleted.
+- 409--Workspace link or unlink operation in process.
+- 400--Cluster not found, the cluster you specified doesn't exist or was deleted. If you try to create a cluster with that name and get conflict, the cluster is in soft-delete for 14 days. You can contact support to recover it.
### Workspace unlink

-- 404 -- Workspace not found. The workspace you specified doesn't exist or was deleted.
-- 409 -- Workspace link or unlink operation in process.
+- 404--Workspace not found. The workspace you specified doesn't exist or was deleted.
+- 409--Workspace link or unlink operation in process.
## Next steps
azure-monitor Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/restore.md
The restore operation creates the restore table and allocates additional compute
The destination table provides a view of the underlying source data, but does not affect it in any way. The table has no retention setting, and you must explicitly [dismiss the restored data](#dismiss-restored-data) when you no longer need it.
-## Restore data using API
+## Restore data
+
+# [API](#tab/api-1)
To restore data from a table, call the **Tables - Create or Update** API. The name of the destination table must end with *_RST*.

```http
PUT https://management.azure.com/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{workspaceName}/tables/{user defined name}_RST?api-version=2021-12-01-preview
```
-### Request body
+
+**Request body**
+ The body of the request must include the following values: |Name | Type | Description |
The body of the request must include the following values:
|properties.restoredLogs.startRestoreTime | string | Start of the time range to restore. |
|properties.restoredLogs.endRestoreTime | string | End of the time range to restore. |
-### Restore table status
+**Restore table status**
+ The **provisioningState** property indicates the current state of the restore table operation. The API returns this property when you start the restore, and you can retrieve this property later using a GET operation on the table. The **provisioningState** property has one of the following values: | Value | Description
The **provisioningState** property indicates the current state of the restore ta
| Succeeded | Restore operation completed. |
| Deleting | Deleting the restored table. |
-#### Sample request
+**Sample request**
This sample restores data from the month of January 2020 from the *Usage* table to a table called *Usage_RST*.

**Request**
PUT https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000
  }
}
```
+# [CLI](#tab/cli-1)
+To restore data from a table, run the [az monitor log-analytics workspace table restore create](/cli/azure/monitor/log-analytics/workspace/table/restore#az-monitor-log-analytics-workspace-table-restore-create) command.
+
+For example:
+
+```azurecli
+az monitor log-analytics workspace table restore create --subscription ContosoSID --resource-group ContosoRG --workspace-name ContosoWorkspace \
+ --name Heartbeat_RST --restore-source-table Heartbeat --start-restore-time "2022-01-01T00:00:00.000Z" --end-restore-time "2022-01-08T00:00:00.000Z" --no-wait
+```
## Dismiss restored data

To save costs, dismiss restored data when you no longer need it by deleting the restored table.
+Deleting the restored table does not delete the data in the source table.
+
+> [!NOTE]
+> Restored data is available as long as the underlying source data is available. When you delete the source table from the workspace or when the source table's retention period ends, the data is dismissed from the restored table. However, the empty table will remain if you do not delete it explicitly.
+
+# [API](#tab/api-2)
To delete a restore table, call the **Tables - Delete** API:

```http
DELETE https://management.azure.com/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{workspaceName}/tables/{user defined name}_RST?api-version=2021-12-01-preview
```
-Deleting the restored table does not delete the data in the source table.
+# [CLI](#tab/cli-2)
-> [!NOTE]
-> Restored data is available as long as the underlying source data is available. When you delete the source table from the workspace or when the source table's retention period ends, the data is dismissed from the restored table. However, the empty table will remain if you do not delete it explicitly.
+To delete a restore table, run the [az monitor log-analytics workspace table delete](/cli/azure/monitor/log-analytics/workspace/table#az-monitor-log-analytics-workspace-table-delete) command.
+
+For example:
+```azurecli
+az monitor log-analytics workspace table delete --subscription ContosoSID --resource-group ContosoRG --workspace-name ContosoWorkspace \
+ --name Heartbeat_RST
+```
## Limitations

Restore is subject to the following limitations.
azure-monitor Search Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/search-jobs.md
Search jobs are asynchronous queries that fetch records into a new search table
## When to use search jobs
-Use a search job when the log query timeout of 10 minutes is not enough time to search through large volumes of data or when you are running a slow query.
+Use a search job when the log query timeout of 10 minutes isn't enough time to search through large volumes of data or when you're running a slow query.
Search jobs also let you retrieve records from [Archived Logs](data-retention-archive.md) and [Basic Logs](basic-logs-configure.md) tables into a new log table you can use for queries. In this way, running a search job can be an alternative to:
Search jobs also let you retrieve records from [Archived Logs](data-retention-ar
A search job sends its results to a new table in the same workspace as the source data. The results table is available as soon as the search job begins, but it may take time for results to begin to appear.
-The search job results table is a [Log Analytics](log-analytics-workspace-overview.md#log-data-plans-preview) table that is available for log queries or any other features of Azure Monitor that use tables in a workspace. The table uses the [retention value](data-retention-archive.md) set for the workspace, but you can modify this retention once the table is created.
+The search job results table is a [Log Analytics](log-analytics-workspace-overview.md#log-data-plans-preview) table that is available for log queries and other Azure Monitor features that use tables in a workspace. The table uses the [retention value](data-retention-archive.md) set for the workspace, but you can modify this value after the table is created.
The search results table schema is based on the source table schema and the specified query. The following additional columns help you track the source records:
The search results table schema is based on the source table schema and the spec
Queries on the results table appear in [log query auditing](query-audit.md) but not the initial search job.

## Create a search job
+
+# [API](#tab/api-1)
To run a search job, call the **Tables - Create or Update** API. The call includes the name of the results table to be created. The name of the results table must end with *_SRCH*.

```http
PUT https://management.azure.com/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{workspaceName}/tables/<TableName>_SRCH?api-version=2021-12-01-preview
```
-### Request body
+**Request body**
+ Include the following values in the body of the request: |Name | Type | Description |
Include the following values in the body of the request:
|properties.searchResults.endSearchTime | string | End of the time range to search. |
-### Sample request
+**Sample request**
This example creates a table called *Syslog_suspected_SRCH* with the results of a query that searches for particular records in the *Syslog* table.

**Request**
+
```http
PUT https://management.azure.com/subscriptions/00000000-0000-0000-0000-00000000000/resourcegroups/testRG/providers/Microsoft.OperationalInsights/workspaces/testWS/tables/Syslog_suspected_SRCH?api-version=2021-12-01-preview
```

**Request body**
+
```json
{
  "properties": {
PUT https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000
}
```
-**Response**<br>
+**Response**
+ Status code: 202 accepted.
+# [CLI](#tab/cli-1)
+
+To run a search job, run the [az monitor log-analytics workspace table search-job create](/cli/azure/monitor/log-analytics/workspace/table/search-job#az-monitor-log-analytics-workspace-table-search-job-create) command. The name of the results table, which you set using the `--name` parameter, must end with *_SRCH*.
+
+For example:
+
+```azurecli
+az monitor log-analytics workspace table search-job create --subscription ContosoSID --resource-group ContosoRG --workspace-name ContosoWorkspace \
+ --name HeartbeatByIp_SRCH --search-query 'Heartbeat | where ComputerIP has "00.000.00.000"' --limit 1500 \
+ --start-search-time "2022-01-01T00:00:00.000Z" --end-search-time "2022-01-08T00:00:00.000Z" --no-wait
+```
## Get search job status and details
+# [API](#tab/api-2)
+
Call the **Tables - Get** API to get the status and details of a search job:

```http
GET https://management.azure.com/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{workspaceName}/tables/<TableName>_SRCH?api-version=2021-12-01-preview
```
-### Table status
+**Table status**
+ Each search job table has a property called *provisioningState*, which can have one of the following values: | Status | Description |
Each search job table has a property called *provisioningState*, which can have
| Deleting | Deleting the search job table. |
-#### Sample request
+**Sample request**
+
This example retrieves the table status for the search job in the previous example.

**Request**
+
```http
GET https://management.azure.com/subscriptions/00000000-0000-0000-0000-00000000000/resourcegroups/testRG/providers/Microsoft.OperationalInsights/workspaces/testWS/tables/Syslog_SRCH?api-version=2021-12-01-preview
```
-**Response**<br>
+**Response**
+
```json
{
  "properties": {
GET https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000
}
```
+# [CLI](#tab/cli-2)
+
+To check the status and details of a search job table, run the [az monitor log-analytics workspace table show](/cli/azure/monitor/log-analytics/workspace/table#az-monitor-log-analytics-workspace-table-show) command.
+
+For example:
+
+```azurecli
+az monitor log-analytics workspace table show --subscription ContosoSID --resource-group ContosoRG --workspace-name ContosoWorkspace \
+    --name HeartbeatByIp_SRCH --output table
+```
+
+
## Delete search job table
-We recommend deleting the search job table when you're done querying the table. This reduces workspace clutter and additional charges for data retention.
+We recommend deleting the search job table when you're done querying the table. This reduces workspace clutter and extra charges for data retention.
+
+# [API](#tab/api-3)
To delete a table, call the **Tables - Delete** API:

```http
DELETE https://management.azure.com/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{workspaceName}/tables/<TableName>_SRCH?api-version=2021-12-01-preview
```
+# [CLI](#tab/cli-3)
+
+To delete a search table, run the [az monitor log-analytics workspace table delete](/cli/azure/monitor/log-analytics/workspace/table#az-monitor-log-analytics-workspace-table-delete) command.
+
+For example:
+
+```azurecli
+az monitor log-analytics workspace table delete --subscription ContosoSID --resource-group ContosoRG --workspace-name ContosoWorkspace \
+ --name HeartbeatByIp_SRCH
+```
+
+
+
## Limitations

Search jobs are subject to the following limitations:
azure-monitor Vmext Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/vmext-troubleshoot.md
If the *Log Analytics agent for Linux* VM extension is not installing or reporti
2. For other unhealthy statuses, review the Log Analytics agent for Linux VM extension log files in `/var/log/azure/Microsoft.EnterpriseCloud.Monitoring.OmsAgentForLinux/*/extension.log` and `/var/log/azure/Microsoft.EnterpriseCloud.Monitoring.OmsAgentForLinux/*/CommandExecution.log`.
3. If the extension status is healthy but data is not being uploaded, review the Log Analytics agent for Linux log files in `/var/opt/microsoft/omsagent/log/omsagent.log`.
-For more information, see [troubleshooting Linux extensions](../../virtual-machines/extensions/oms-linux.md).
-
## Next steps
-For additional troubleshooting guidance related to the Log Analytics agent for Linux hosted on computers outside of Azure, see [Troubleshoot Azure Log Analytics Linux Agent](../agents/agent-linux-troubleshoot.md).
+For additional troubleshooting guidance related to the Log Analytics agent for Linux, see [Troubleshoot Azure Log Analytics Linux Agent](../agents/agent-linux-troubleshoot.md).
azure-portal Azure Portal Safelist Urls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/azure-portal-safelist-urls.md
Title: Allow the Azure portal URLs on your firewall or proxy server description: To optimize connectivity between your network and the Azure portal and its services, we recommend you add these URLs to your allowlist. Previously updated : 12/13/2021 Last updated : 03/09/2022
The URL endpoints to allow for the Azure portal are specific to the Azure cloud
#### [China Government Cloud](#tab/china-government-cloud)

```
+aadcdn.msauth.cn
+aadcdn.msftauth.cn
+login.live.com
*.azure.cn
*.microsoft.cn
*.microsoftonline.cn
*.chinacloudapi.cn
*.trafficmanager.cn
-*.chinacloudsites.cn
*.windowsazure.cn
```
azure-resource-manager Cli Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/cli-samples.md
# Azure CLI Samples for Azure Managed Applications
-The following table includes links to bash scripts for Azure Managed Applications that use the Azure CLI.
+The following table includes links to a sample CLI script for Azure Managed Applications.
| Create managed application | Description |
| -- | -- |
-| [Create managed application definition](scripts/managed-application-cli-sample-create-definition.md) | Creates a managed application definition in the service catalog. |
-| [Deploy managed application](scripts/managed-application-cli-sample-create-application.md) | Deploys a managed application from the service catalog. |
-|**Update managed resource group**| **Description** |
-| [Get resources in managed resource group and resize VMs](scripts/managed-application-cli-sample-get-managed-group-resize-vm.md) | Gets resources from the managed resource group, and resizes the VMs. |
+| [Define and create a managed application](scripts/managed-application-define-create-cli-sample.md) | Creates a managed application definition in the service catalog and then deploys the managed application from the service catalog. |
azure-resource-manager Managed Application Cli Sample Create Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/scripts/managed-application-cli-sample-create-application.md
- Title: Azure CLI script sample - Deploy a managed application
-description: Provides Azure CLI sample script that deploys an Azure Managed Application definition to the subscription.
--- Previously updated : 10/25/2017----
-# Deploy a managed application for service catalog with Azure CLI
-
-This script deploys a managed application definition from the service catalog.
----
-## Sample script
-
-[!code-azurecli[main](../../../../cli_scripts/managed-applications/create-application/create-application.sh "Create application")]
--
-## Script explanation
-
-This script uses the following command to deploy the managed application. Each command in the table links to command-specific documentation.
-
-| Command | Notes |
-|||
-| [az managedapp create](/cli/azure/managedapp#az_managedapp_create) | Create a managed application. Provide the definition ID and parameters for the template. |
--
-## Next steps
-
-* For an introduction to managed applications, see [Azure Managed Application overview](../overview.md).
-* For more information on the Azure CLI, see [Azure CLI documentation](/cli/azure).
azure-resource-manager Managed Application Cli Sample Create Definition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/scripts/managed-application-cli-sample-create-definition.md
- Title: Create Managed Application definition - Azure CLI
-description: Provides an Azure CLI script sample that creates a managed application definition in the subscription.
--- Previously updated : 10/25/2017----
-# Create a managed application definition with Azure CLI
-
-This script publishes a managed application definition to a service catalog.
----
-## Sample script
-
-[!code-azurecli[main](../../../../cli_scripts/managed-applications/create-definition/create-definition.sh "Create definition")]
--
-## Script explanation
-
-This script uses the following command to create the managed application definition. Each command in the table links to command-specific documentation.
-
-| Command | Notes |
-|||
-| [az managedapp definition create](/cli/azure/managedapp/definition#az_managedapp_definition_create) | Create a managed application definition. Provide the package that contains the required files. |
--
-## Next steps
-
-* For an introduction to managed applications, see [Azure Managed Application overview](../overview.md).
-* For more information on the Azure CLI, see [Azure CLI documentation](/cli/azure).
azure-resource-manager Managed Application Cli Sample Get Managed Group Resize Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/scripts/managed-application-cli-sample-get-managed-group-resize-vm.md
- Title: Get managed resource group & resize VMs - Azure CLI
-description: Provides Azure CLI sample script that gets a managed resource group in an Azure Managed Application. The script resizes VMs.
--- Previously updated : 10/25/2017----
-# Get resources in a managed resource group and resize VMs with Azure CLI
-
-This script retrieves resources from a managed resource group, and resizes the VMs in that resource group.
----
-## Sample script
-
-[!code-azurecli[main](../../../../cli_scripts/managed-applications/get-application/get-application.sh "Get application")]
--
-## Script explanation
-
-This script uses the following commands to deploy the managed application. Each command in the table links to command-specific documentation.
-
-| Command | Notes |
-|||
-| [az managedapp list](/cli/azure/managedapp#az_managedapp_list) | List managed applications. Provide query values to focus the results. |
-| [az resource list](/cli/azure/resource#az_resource_list) | List resources. Provide a resource group and query values to focus the result. |
-| [az vm resize](/cli/azure/vm#az_vm_resize) | Update a virtual machine's size. |
--
-## Next steps
-
-* For an introduction to managed applications, see [Azure Managed Application overview](../overview.md).
-* For more information on the Azure CLI, see [Azure CLI documentation](/cli/azure).
azure-resource-manager Managed Application Define Create Cli Sample https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/scripts/managed-application-define-create-cli-sample.md
+
+ Title: Create managed application definition - Azure CLI
+description: Provides an Azure CLI script sample that publishes a managed application definition to a service catalog and then deploys a managed application from the service catalog.
+
+ms.devlang: azurecli
+ Last updated : 03/07/2022++++
+# Create a managed application definition in the service catalog and deploy a managed application from the service catalog with Azure CLI
+
+This script publishes a managed application definition to a service catalog and then deploys a managed application from the service catalog.
+++
+## Sample script
++
+### Run the script
++
+## Clean up resources
++
+```azurecli
+az group delete --name $appResourceGroup -y
+az group delete --name $appDefinitionResourceGroup -y
+```
+
+## Sample reference
+
+This script uses the following commands to create the managed application definition and deploy the managed application. Each command in the table links to command-specific documentation.
+
+| Command | Notes |
+|||
+| [az managedapp definition create](/cli/azure/managedapp/definition#az_managedapp_definition_create) | Create a managed application definition. Provide the package that contains the required files. |
+| [az managedapp create](/cli/azure/managedapp#az_managedapp_create) | Create a managed application. Provide the definition ID and parameters for the template. |
+
+## Next steps
+
+* For an introduction to managed applications, see [Azure Managed Application overview](../overview.md).
+* For more information on the Azure CLI, see [Azure CLI documentation](/cli/azure).
azure-sql Automated Backups Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/automated-backups-overview.md
PUT https://management.azure.com/subscriptions/00000000-1111-2222-3333-444444444
```json { "properties":{
- "retentionDays":28
+ "retentionDays":28,
"diffBackupIntervalInHours":24 } }
PUT https://management.azure.com/subscriptions/00000000-1111-2222-3333-444444444
"name": "default", "type": "Microsoft.Sql/resourceGroups/servers/databases/backupShortTermRetentionPolicies", "properties": {
- "retentionDays": 28
+ "retentionDays": 28,
"diffBackupIntervalInHours":24 } }
azure-sql Database Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/database-export.md
When you need to export a database for archiving or for moving to another platfo
> [!NOTE] > BACPACs are not intended to be used for backup and restore operations. Azure automatically creates backups for every user database. For details, see [business continuity overview](business-continuity-high-availability-disaster-recover-hadr-overview.md) and [SQL Database backups](automated-backups-overview.md).
+> [!NOTE]
+> [Import and Export using Private Link](database-import-export-private-link.md) is in preview.
+
## The Azure portal

Exporting a BACPAC of a database from [Azure SQL Managed Instance](../managed-instance/sql-managed-instance-paas-overview.md) or from a database in the [Hyperscale service tier](service-tier-hyperscale.md) using the Azure portal is not currently supported. See [Considerations](#considerations).
azure-sql Database Import Export Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/database-import-export-private-link.md
$importRequest = New-AzSqlDatabaseExport -ResourceGroupName "<resourceGroupName>
### Create Import-Export Private Link using REST API

Existing APIs to perform Import and Export jobs have been enhanced to support Private Link. Refer to the [Import Database API](/rest/api/sql/2021-08-01-preview/servers/import-database).
+## Limitations
+
+- Import using Private Link does not support specifying a backup storage redundancy while creating a new database; the database is created with the default geo-redundant backup storage redundancy. As a workaround, first create an empty database with the desired backup storage redundancy by using the Azure portal or PowerShell, and then import the BACPAC into that empty database (see the sketch after this list).
+- Import and Export operations are not yet supported in the Azure SQL Database Hyperscale tier.
+- Import using the REST API with Private Link can only target an existing database, because the API uses database extensions. To work around this, create an empty database with the desired name, and then call the Import REST API with Private Link.
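+
+The empty target database for the first workaround can also be created with Transact-SQL. This is a minimal sketch, assuming you're connected to the `master` database of the target logical server; the database name is a placeholder:
+
+```sql
+-- Creates an empty database with locally redundant backup storage.
+-- Allowed values for BACKUP_STORAGE_REDUNDANCY are 'LOCAL', 'ZONE', and 'GEO'.
+CREATE DATABASE [ImportTarget] WITH BACKUP_STORAGE_REDUNDANCY = 'LOCAL';
+```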
+
+
## Next steps

- [Import or Export Azure SQL Database without allowing Azure services to access the server](database-import-export-azure-services-off.md)
- [Import a database from a BACPAC file](database-import.md)
azure-sql Database Import https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/database-import.md
You can import a SQL Server database into Azure SQL Database or SQL Managed Inst
> [!IMPORTANT] > After importing your database, you can choose to operate the database at its current compatibility level (level 100 for the AdventureWorks2008R2 database) or at a higher level. For more information on the implications and options for operating a database at a specific compatibility level, see [ALTER DATABASE Compatibility Level](/sql/t-sql/statements/alter-database-transact-sql-compatibility-level). See also [ALTER DATABASE SCOPED CONFIGURATION](/sql/t-sql/statements/alter-database-scoped-configuration-transact-sql) for information about additional database-level settings related to compatibility levels.
+> [!NOTE]
+> [Import and Export using Private Link](database-import-export-private-link.md) is in preview.
+
## Using Azure portal

Watch this video to see how to import from a BACPAC file in the Azure portal or continue reading below:
Stop-AzSqlDatabaseActivity -ResourceGroupName $ResourceGroupName -ServerName $Se
- Import does not support specifying a backup storage redundancy while creating a new database; the database is created with the default geo-redundant backup storage redundancy. As a workaround, first create an empty database with the desired backup storage redundancy by using the Azure portal or PowerShell, and then import the BACPAC into that empty database.
- Storage behind a firewall is currently not supported.
-> [!NOTE]
-> Azure SQL Database Configurable Backup Storage Redundancy is currently available in public preview in Southeast Asia Azure region only.
## Import using wizards
azure-sql Failover Group Add Elastic Pool Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/failover-group-add-elastic-pool-tutorial.md
Last updated 01/26/2022
[!INCLUDE[appliesto-sqldb](../includes/appliesto-sqldb.md)]

> [!div class="op_single_selector"]
-> * [Azure SQL Database (single database)](failover-group-add-single-database-tutorial.md)
-> * [Azure SQL Database (elastic pool)](failover-group-add-elastic-pool-tutorial.md)
-> * [Azure SQL Managed Instance](../managed-instance/failover-group-add-instance-tutorial.md)
-
+>
+> - [Azure SQL Database (single database)](failover-group-add-single-database-tutorial.md)
+> - [Azure SQL Database (elastic pool)](failover-group-add-elastic-pool-tutorial.md)
+> - [Azure SQL Managed Instance](../managed-instance/failover-group-add-instance-tutorial.md)
-Configure an [auto-failover group](auto-failover-group-sql-db.md) for an Azure SQL Database elastic pool and test failover using the Azure portal.
+Configure an [auto-failover group](auto-failover-group-sql-db.md) for an Azure SQL Database elastic pool and test failover using the Azure portal.
In this tutorial, you'll learn how to:
To complete the tutorial, make sure you have the following items:
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
-
-
## 1 - Create a single database
In this step, you create your elastic pool and add your database to the elastic
Set these additional parameter values for use in creating the elastic pool.

### Create elastic pool on primary server

Use the [az sql elastic-pool create](/cli/azure/sql/elastic-pool#az-sql-elastic-pool-create) command to create an elastic pool.

### Add database to elastic pool

Use the [az sql db update](/cli/azure/sql/db#az_sql_db_update) command to add a database to an elastic pool.

This portion of the tutorial uses the following Azure CLI cmdlets:
Set these additional parameter values for use in creating the failover group.
Change the failover location as appropriate for your environment.

### Create secondary server
Use the [az sql server create](/cli/azure/sql/server#az_sql_server_create) comma
> [!NOTE]
> The server login and firewall settings must match that of your primary server.

### Create elastic pool on secondary server

Use the [az sql elastic-pool create](/cli/azure/sql/elastic-pool#az-sql-elastic-pool-create) command to create an elastic pool on the secondary server.

### Create failover group

Use the [az sql failover-group create](/cli/azure/sql/failover-group#az_sql_failover_group_create) command to create a failover group.

### Add database to the failover group

Use the [az sql failover-group update](/cli/azure/sql/failover-group#az_sql_failover_group_update) command to add a database to the failover group.

### Azure CLI failover group creation reference
Test failover using the Azure CLI.
Use the [az sql failover-group show](/cli/azure/sql/failover-group#az_sql_failover_group_show) command to confirm the roles of each server in the failover group.

### Fail over to the secondary server

Use the [az sql failover-group set-primary](/cli/azure/sql/failover-group#az_sql_failover_group_set_primary) command to fail over to the secondary server.

Use the [az sql failover-group show](/cli/azure/sql/failover-group#az_sql_failover_group_show) command to verify a successful failover.

### Revert failover group back to the primary server

Use the [az sql failover-group set-primary](/cli/azure/sql/failover-group#az_sql_failover_group_set_primary) command to fail back to the primary server.

### Azure CLI failover group management reference
This script uses the following commands. Each command in the table links to comm
# [Azure CLI](#tab/azure-cli)

# [Azure portal](#tab/azure-portal)
azure-sql Failover Group Add Single Database Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/failover-group-add-single-database-tutorial.md
To complete the tutorial, make sure you have the following items:
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
-
-
## 1 - Create a database
Set these additional parameter values for use in creating the failover group, in
Change the failover location as appropriate for your environment.

### Create the secondary server
Use the [az sql server create](/cli/azure/sql/server#az_sql_server_create) comma
> [!NOTE]
> The server login and firewall settings must match that of your primary server.

### Create the failover group

Use the [az sql failover-group create](/cli/azure/sql/failover-group#az_sql_failover_group_create) command to create a failover group.

### Azure CLI failover group creation reference
Test failover using the Azure CLI.
Use the [az sql failover-group show](/cli/azure/sql/failover-group#az_sql_failover_group_show) command to confirm the roles of each server.

### Fail over to the secondary server

Use the [az sql failover-group set-primary](/cli/azure/sql/failover-group#az_sql_failover_group_set_primary) command to fail over to the secondary server.

Use the [az sql failover-group show](/cli/azure/sql/failover-group#az_sql_failover_group_show) command to verify a successful failover.

### Revert failover group back to the primary server

Use the [az sql failover-group set-primary](/cli/azure/sql/failover-group#az_sql_failover_group_set_primary) command to fail back to the primary server.

### Azure CLI failover group management reference
This script uses the following commands. Each command in the table links to comm
# [Azure CLI](#tab/azure-cli)

This script uses the following commands. Each command in the table links to command-specific documentation.
azure-sql Metrics Diagnostic Telemetry Logging Streaming Export Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/metrics-diagnostic-telemetry-logging-streaming-export-configure.md
Previously updated : 11/17/2021 Last updated : 3/10/2022

# Configure streaming export of Azure SQL Database and SQL Managed Instance diagnostic telemetry
Learn more about [database wait statistics](/sql/relational-databases/system-dyn
|ElasticPoolName_s|Name of the elastic pool for the database, if any |
|DatabaseName_s|Name of the database |
|ResourceId|Resource URI |
-|error_state_d|Error state code |
+|error_state_d|A numeric state value associated with the query timeout (an [attention](/sql/relational-databases/errors-events/mssqlserver-3617-database-engine-error) event) |
|query_hash_s|Query hash, if available |
|query_plan_hash_s|Query plan hash, if available |
azure-sql Add Database To Failover Group Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/scripts/add-database-to-failover-group-cli.md
Last updated 01/26/2022
-# Use Azure CLI to add a database to a failover group
+# Add a database to a failover group using the Azure CLI
[!INCLUDE[appliesto-sqldb](../../includes/appliesto-sqldb.md)]
This Azure CLI script example creates a database in Azure SQL Database, creates
### Run the script

## Clean up resources
azure-sql Add Elastic Pool To Failover Group Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/scripts/add-elastic-pool-to-failover-group-cli.md
Last updated 01/26/2022
-# Use CLI to add an Azure SQL Database elastic pool to a failover group
+# Add an Azure SQL Database elastic pool to a failover group using the Azure CLI
[!INCLUDE[appliesto-sqldb](../../includes/appliesto-sqldb.md)]
This Azure CLI script example creates a single database, adds it to an elastic p
### Run the script

## Clean up resources
azure-sql Auditing Threat Detection Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/scripts/auditing-threat-detection-cli.md
Last updated 01/26/2022
-# Use CLI to configure SQL Database auditing and Advanced Threat Protection
+# Configure SQL Database auditing and Advanced Threat Protection using the Azure CLI
[!INCLUDE[appliesto-sqldb](../../includes/appliesto-sqldb.md)]
This Azure CLI script example configures SQL Database auditing and Advanced Thre
### Run the script

## Clean up resources
azure-sql Backup Database Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/scripts/backup-database-cli.md
Last updated 01/26/2022
-# Use CLI to backup an Azure SQL single database to an Azure storage container
+# Backup an Azure SQL single database to an Azure storage container using the Azure CLI
[!INCLUDE[appliesto-sqldb](../../includes/appliesto-sqldb.md)]
This Azure CLI example backs up a database in SQL Database to an Azure storage c
### Run the script

## Clean up resources
azure-sql Copy Database To New Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/scripts/copy-database-to-new-server-cli.md
Last updated 01/26/2022
-# Use CLI to copy a database in Azure SQL Database to a new server
+# Copy a database in Azure SQL Database to a new server using the Azure CLI
[!INCLUDE[appliesto-sqldb](../../includes/appliesto-sqldb.md)]
This Azure CLI script example creates a copy of an existing database in a new se
### Run the script

## Clean up resources
azure-sql Create And Configure Database Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/scripts/create-and-configure-database-cli.md
Last updated 01/26/2022
-# Use Azure CLI to create a single database and configure a firewall rule
+# Create a single database and configure a firewall rule using the Azure CLI
[!INCLUDE[appliesto-sqldb](../../includes/appliesto-sqldb.md)]
This Azure CLI script example creates a single database in Azure SQL Database an
### Run the script

## Clean up resources
azure-sql Import From Bacpac Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/scripts/import-from-bacpac-cli.md
Last updated 01/26/2022
-# Use CLI to import a BACPAC file into a database in SQL Database
+# Import a BACPAC file into a database in SQL Database using the Azure CLI
[!INCLUDE[appliesto-sqldb](../../includes/appliesto-sqldb.md)]
This Azure CLI script example imports a database from a *.bacpac* file into a da
### Run the script

## Clean up resources
azure-sql Monitor And Scale Database Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/scripts/monitor-and-scale-database-cli.md
Last updated 01/26/2022
-# Use the Azure CLI to monitor and scale a single database in Azure SQL Database
+# Monitor and scale a single database in Azure SQL Database using the Azure CLI
[!INCLUDE[appliesto-sqldb](../../includes/appliesto-sqldb.md)]
This Azure CLI script example scales a single database in Azure SQL Database to
### Run the script

> [!TIP]
> Use [az sql db op list](/cli/azure/sql/db/op#az_sql_db_op_list) to get a list of operations performed on the database, and use [az sql db op cancel](/cli/azure/sql/db/op#az_sql_db_op_cancel) to cancel an update operation on the database.
azure-sql Move Database Between Elastic Pools Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/scripts/move-database-between-elastic-pools-cli.md
Last updated 01/26/2022
-# Use Azure CLI to move a database in SQL Database in a SQL elastic pool
+# Move a database in SQL Database in a SQL elastic pool using the Azure CLI
[!INCLUDE[appliesto-sqldb](../../includes/appliesto-sqldb.md)]
This Azure CLI script example creates two elastic pools, moves a pooled database
### Run the script

## Clean up resources
azure-sql Restore Database Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/scripts/restore-database-cli.md
Last updated 02/11/2022
-# Use CLI to restore a single database in Azure SQL Database to an earlier point in time
+# Restore a single database in Azure SQL Database to an earlier point in time using the Azure CLI
[!INCLUDE[appliesto-sqldb](../../includes/appliesto-sqldb.md)]
This Azure CLI example restores a single database in Azure SQL Database to a spe
### Run the script

## Clean up resources
azure-sql Scale Pool Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/scripts/scale-pool-cli.md
Last updated 01/26/2022
-# Use the Azure CLI to scale an elastic pool in Azure SQL Database
+# Scale an elastic pool in Azure SQL Database using the Azure CLI
[!INCLUDE[appliesto-sqldb](../../includes/appliesto-sqldb.md)]
This Azure CLI script example creates elastic pools in Azure SQL Database, moves
### Run the script

## Clean up resources
azure-sql Setup Geodr Failover Database Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/scripts/setup-geodr-failover-database-cli.md
Last updated 01/26/2022
-# Use CLI to configure active geo-replication for a single database in Azure SQL Database
+# Configure active geo-replication for a single database in Azure SQL Database using the Azure CLI
[!INCLUDE[appliesto-sqldb](../../includes/appliesto-sqldb.md)]
This Azure CLI script example configures active geo-replication for a single dat
### Run the script

## Clean up resources
azure-sql Setup Geodr Failover Group Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/scripts/setup-geodr-failover-group-cli.md
Last updated 01/26/2022
-# Use CLI to configure a failover group for a group of databases in Azure SQL Database
+# Configure a failover group for a group of databases in Azure SQL Database using the Azure CLI
[!INCLUDE[appliesto-sqldb](../../includes/appliesto-sqldb.md)]
Last updated 01/26/2022
### Run the script

## Clean up resources
azure-sql Setup Geodr Failover Pool Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/scripts/setup-geodr-failover-pool-cli.md
Last updated 01/26/2022
-# Use CLI to configure active geo-replication for a pooled database in Azure SQL Database
+# Configure active geo-replication for a pooled database in Azure SQL Database using the Azure CLI
[!INCLUDE[appliesto-sqldb](../../includes/appliesto-sqldb.md)]
This Azure CLI script example configures active geo-replication for a pooled dat
### Run the script

## Clean up resources
azure-sql Service Tier Hyperscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/service-tier-hyperscale.md
The vCore-based service tiers are differentiated based on database availability
| **Storage type** | Premium remote storage (per instance) | De-coupled storage with local SSD cache (per instance) | Super-fast local SSD storage (per instance) |
| **Storage size**<sup>1</sup> | 5 GB – 4 TB | Up to 100 TB | 5 GB – 4 TB |
| **IOPS** | 500 IOPS per vCore with 7000 maximum IOPS | Hyperscale is a multi-tiered architecture with caching at multiple levels. Effective IOPS will depend on the workload. | 5000 IOPS with 200,000 maximum IOPS |
-|**Availability** | 1 replica, no Read Scale-out, zone-redundant HA (preview), no local cache | Multiple replicas, up to 4 Read Scale-out, zone-redundant HA (preview), partial local cache | 3 replicas, 1 Read Scale-out, zone-redundant HA, full local storage |
+| **Availability** | 1 replica, no Read Scale-out, zone-redundant HA (preview), no local cache | Multiple replicas, up to 4 Read Scale-out, zone-redundant HA (preview), partial local cache | 3 replicas, 1 Read Scale-out, zone-redundant HA, full local storage |
| **Backups** | A choice of geo-redundant, zone-redundant, or locally-redundant backup storage, 1-35 day retention (default 7 days) | A choice of geo-redundant, zone-redundant, or locally-redundant backup storage, 7 day retention. | A choice of geo-redundant, zone-redundant, or locally-redundant backup storage, 1-35 day retention (default 7 days) |
-|||||
<sup>1</sup> Elastic pools are not supported in the Hyperscale service tier.
These are the current limitations to the Hyperscale service tier as of GA. We'r
| Database integrity check | DBCC CHECKDB isn't currently supported for Hyperscale databases. DBCC CHECKTABLE ('TableName') WITH TABLOCK and DBCC CHECKFILEGROUP WITH TABLOCK may be used as a workaround. See [Data Integrity in Azure SQL Database](https://azure.microsoft.com/blog/data-integrity-in-azure-sql-database/) for details on data integrity management in Azure SQL Database. |
| Elastic Jobs | Using a Hyperscale database as the Job database is not supported. However, elastic jobs can target Hyperscale databases in the same way as any other Azure SQL database. |
| Data Sync | Using a Hyperscale database as a Hub or Sync Metadata database is not supported. However, a Hyperscale database can be a member database in a Data Sync topology. |
+|Import Export | The Import-Export service is currently not supported for Hyperscale databases. |
## Next steps
azure-sql Single Database Create Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/single-database-create-quickstart.md
The following values are used in subsequent commands to create the database and
Change the location as appropriate for your environment. Replace `0.0.0.0` with the IP address range to match your specific environment. Use the public IP address of the computer you're using to restrict access to the server to only your IP address.

### Create a resource group

Create a resource group with the [az group create](/cli/azure/group) command. An Azure resource group is a logical container into which Azure resources are deployed and managed. The following example creates a resource group named *myResourceGroup* in the *eastus* location:

### Create a server

Create a server with the [az sql server create](/cli/azure/sql/server) command.

### Configure a server-based firewall rule

Create a firewall rule with the [az sql server firewall-rule create](/cli/azure/sql/server/firewall-rule) command.

### Create a single database

Create a database with the [az sql db create](/cli/azure/sql/db) command in the [serverless compute tier](serverless-tier-overview.md).

```azurecli
+echo "Creating $database in serverless tier"
az sql db create \
    --resource-group $resourceGroup \
    --server $server \
The following values are used in subsequent commands to create the database and
Change the location as appropriate for your environment. Replace `0.0.0.0` with the IP address range to match your specific environment.

> [!NOTE]
> [az sql up](/cli/azure/sql#az_sql_up) is currently in preview and does not currently support the serverless compute tier. Also, the use of non-alphabetic and non-numeric characters in the database name is not currently supported.
Use the [az sql up](/cli/azure/sql#az_sql_up) command to create and configure a
   --database-name $database \
   --admin-user $login \
   --admin-password $password
   ```

2. A server firewall rule is automatically created. If the server declines your IP address, create a new firewall rule using the `az sql server firewall-rule create` command and specifying appropriate start and end IP addresses.
azure-sql Link Feature Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/link-feature-best-practices.md
+
+ Title: The link feature best practices
+
+description: Learn about best practices when using the link feature for Azure SQL Managed Instance.
++++
+ms.devlang:
++++ Last updated : 03/10/2022+
+# Best practices with link feature for Azure SQL Managed Instance (preview)
+
+This article outlines best practices when using the link feature for Azure SQL Managed Instance. The link feature for Azure SQL Managed Instance connects your SQL Servers hosted anywhere to SQL Managed Instance, providing near real-time data replication to the cloud.
+
+> [!NOTE]
+> The link feature for Azure SQL Managed Instance is currently in preview.
+
+## Take log backups regularly
+
+The link feature replicates data using the [Distributed availability groups](/sql/database-engine/availability-groups/windows/distributed-availability-groups) concept based on the Always On availability groups technology stack. Data replication with distributed availability groups is based on replicating transaction log records. No transaction log records can be truncated from the database on the primary instance until they're replicated to the database on the secondary instance. If transaction log record replication is slow or blocked due to network connection issues, the log file keeps growing on the primary instance. Growth speed depends on the intensity of the workload and the network speed. If there's a prolonged network connection outage and a heavy workload on the primary instance, the log file may take up all the available storage space.
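+
+To see whether pending replication is what's currently preventing log truncation, one quick check is the `log_reuse_wait_desc` column in `sys.databases`; a value of `AVAILABILITY_REPLICA` means log records are still waiting to be sent to a secondary:
+
+```sql
+-- Shows, for each database, the reason its transaction log can't be truncated right now.
+SELECT name, log_reuse_wait_desc
+FROM sys.databases;
+```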
++
+To minimize the risk of running out of space on your primary instance due to log file growth, make sure to take database log backups regularly. By taking log backups regularly, you make your database more resilient to unplanned log growth events. Consider scheduling daily log backup tasks by using a SQL Server Agent job (a sketch follows the sample script below).
+
+You can use a Transact-SQL (T-SQL) script to back up the log file, such as the sample provided in this section. Replace the placeholders in the sample script with the name of your database, the name and path of the backup file, and a description.
+
+To back up your transaction log, use the following sample Transact-SQL (T-SQL) script:
+
+```sql
+
+-- Set the current database inside the job step or script
+USE [<DatabaseName>]
+
+-- Check that you are executing the script on the primary instance
+IF (SELECT role
+    FROM sys.dm_hadr_availability_replica_states AS a
+    JOIN sys.availability_replicas AS b
+        ON b.replica_id = a.replica_id
+    WHERE b.replica_server_name = @@SERVERNAME) = 1
+BEGIN
+    -- Take the log backup
+    BACKUP LOG [<DatabaseName>]
+    TO DISK = N'<DiskPathandFileName>'
+    WITH NOFORMAT, NOINIT,
+    NAME = N'<Description>', SKIP, NOREWIND, NOUNLOAD, COMPRESSION, STATS = 1
+END
+```
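+
+As a hedged illustration of the daily scheduling suggestion above, the following sketch wraps a log backup command in a SQL Server Agent job with a daily schedule. The job name, schedule name, and placeholders are illustrative, not prescribed by the product:
+
+```sql
+-- Create a SQL Server Agent job that takes a daily log backup at 02:00.
+USE msdb;
+GO
+EXEC dbo.sp_add_job @job_name = N'DailyLogBackup';
+EXEC dbo.sp_add_jobstep @job_name = N'DailyLogBackup',
+    @step_name = N'Backup log',
+    @subsystem = N'TSQL',
+    @command = N'BACKUP LOG [<DatabaseName>] TO DISK = N''<DiskPathandFileName>'' WITH COMPRESSION;',
+    @database_name = N'master';
+-- freq_type 4 = daily; freq_interval 1 = every day; start time is HHMMSS.
+EXEC dbo.sp_add_schedule @schedule_name = N'Daily2AM',
+    @freq_type = 4,
+    @freq_interval = 1,
+    @active_start_time = 020000;
+EXEC dbo.sp_attach_schedule @job_name = N'DailyLogBackup', @schedule_name = N'Daily2AM';
+EXEC dbo.sp_add_jobserver @job_name = N'DailyLogBackup', @server_name = N'(local)';
+```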
++
+Use the following Transact-SQL (T-SQL) command to check the log space used by your database:
+
+```sql
+DBCC SQLPERF(LOGSPACE);
+```
+
+The query output looks like the following example for the sample database **tpcc**:
++
+In this example, the database has used 76% of the available log, with an absolute log file size of approximately 27 GB (27,971 MB). The thresholds for action may vary based on your workload, but a reading like this typically indicates that you should take a log backup to truncate the log file and free up some space.
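+
+If you prefer a per-database DMV over `DBCC SQLPERF`, a roughly equivalent check (assuming you run it in the context of the database you care about) is:
+
+```sql
+-- Log size and usage for the current database.
+SELECT total_log_size_in_bytes / 1048576.0 AS TotalLogSizeMB,
+       used_log_space_in_bytes / 1048576.0 AS UsedLogSpaceMB,
+       used_log_space_in_percent AS UsedLogSpacePercent
+FROM sys.dm_db_log_space_usage;
+```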
+
+## Add startup trace flags
+
+There are two trace flags (`-T1800` and `-T9567`) that, when added as startup parameters, can optimize the performance of data replication through the link. See [Enable startup trace flags](managed-instance-link-preparation.md#enable-startup-trace-flags) to learn more.
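+
+After the instance restarts, one quick way to confirm the flags are active is:
+
+```sql
+-- Lists globally enabled trace flags; 1800 and 9567 should appear in the output.
+DBCC TRACESTATUS(-1);
+```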
+
+## Next steps
+
+To get started with the link feature, [prepare your environment for replication](managed-instance-link-preparation.md).
+
+For more information on the link feature, see the following articles:
+
+- [Managed Instance link – overview](link-feature.md)
+- [Managed Instance link – connecting SQL Server to Azure reimagined](https://aka.ms/mi-link-techblog)
azure-sql Link Feature https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/link-feature.md
Last updated 02/04/2022
-# Link feature for Azure SQL Managed Instance (limited preview)
+# Link feature for Azure SQL Managed Instance (preview)
[!INCLUDE[appliesto-sqlmi](../includes/appliesto-sqlmi.md)]

The new link feature in Azure SQL Managed Instance connects your SQL Servers hosted anywhere to SQL Managed Instance, providing hybrid flexibility and database mobility. With an approach that uses near real-time data replication to the cloud, you can offload workloads to a read-only secondary in Azure to take advantage of Azure-only features, performance, and scale. After a disastrous event, you can continue running your read-only workloads on SQL Managed Instance in Azure. You can also choose to migrate one or more applications from SQL Server to SQL Managed Instance at the same time, at your own pace, and with the minimum possible downtime compared to other solutions in Azure today.
-## Sign-up for link
-
To use the link feature, you'll need:
-- SQL Server 2019 Enterprise Edition with [CU15 (or above)](https://support.microsoft.com/en-us/topic/kb5008996-cumulative-update-15-for-sql-server-2019-4b6a8ee9-1c61-482d-914f-36e429901fb6) installed on-premises, or on an Azure VM.
+- SQL Server 2019 Enterprise Edition or Developer Edition with [CU15 (or above)](https://support.microsoft.com/en-us/topic/kb5008996-cumulative-update-15-for-sql-server-2019-4b6a8ee9-1c61-482d-914f-36e429901fb6) installed on-premises, or on an Azure VM.
- Network connectivity between your SQL Server and managed instance is required. If your SQL Server is running on-premises, use a VPN link or Express route. If your SQL Server is running on an Azure VM, either deploy your VM to the same subnet as your managed instance, or use global VNet peering to connect two separate subnets.
- Azure SQL Managed Instance provisioned on any service tier.
-Use the following link to sign-up for the limited preview of the link feature.
-
-> [!div class="nextstepaction"]
-> [Sign up for link feature preview](https://aka.ms/mi-link-signup)
+> [!NOTE]
+> The SQL Managed Instance link feature is available in the following regions: Australia Central, Australia Central 2, Australia Southeast, Brazil South, Brazil Southeast, France Central, France South, South India, Central India, West India, Japan West, Japan East, Jio India West, Jio India Central, Korea Central, Korea South, North Central US, North Europe, Norway West, Norway East, South Africa North, South Africa West, South Central US, Southeast Asia, Sweden Central, Switzerland North, Switzerland West, UK South, UK West, West Central US, West Europe, West US, West US 2, West US 3. We're working on enabling the link feature in all regions.
## Overview
The underlying technology of near real-time data replication between SQL Server
There's no need to have an existing availability group or multiple nodes. The link supports single node SQL Server instances without existing availability groups, and also multiple-node SQL Server instances with existing availability groups. Through the link, you can leverage the modern benefits of Azure without migrating your entire SQL Server data estate to the cloud.
-You can keep running the link for as long as you need it, for months and even years at a time. And for your modernization journey, if/when you're ready to migrate to Azure, the link enables a considerably-improved migration experience with the minimum possible downtime compared to all other options available today, providing a true online migration to SQL Managed Instance.
+You can keep running the link for as long as you need it, for months and even years at a time. And for your modernization journey, if or when you're ready to migrate to Azure, the link enables a considerably-improved migration experience with the minimum possible downtime compared to all other options available today, providing a true online migration to SQL Managed Instance.
## Supported scenarios
Secure connectivity, such as VPN or Express Route is used between an on-premises
Up to 100 links can exist from the same or various SQL Server sources to a single SQL Managed Instance. This limit is governed by the number of databases that can be hosted on a managed instance at this time. Likewise, a single SQL Server can establish multiple parallel database replication links with several managed instances in different Azure regions, in a 1-to-1 relationship between a database and a managed instance. The feature requires CU15 or higher to be installed on SQL Server 2019.
-> [!NOTE]
-> The link feature is released in limited public preview with support for currently only SQL Server 2019 Enterprise Edition CU13 (or above). [Sign-up now](https://aka.ms/mi-link-signup) to participate in the limited public preview.
-
## Limitations

This section describes the product's functional limitations.
Some Managed Instance link features and capabilities are limited **at this time*
## Next steps
+If you're interested in using the link feature for Azure SQL Managed Instance with versions and editions that are currently not supported, sign up [here](https://aka.ms/mi-link-signup).
+
For more information on the link feature, see the following:

- [Managed Instance link – connecting SQL Server to Azure reimagined](https://aka.ms/mi-link-techblog).
+- [Prepare for SQL Managed Instance link](./managed-instance-link-preparation.md).
+- [Use SQL Managed Instance link via SSMS to replicate database](./managed-instance-link-use-ssms-to-replicate-database.md).
+- [Use SQL Managed Instance link via SSMS to migrate database](./managed-instance-link-use-ssms-to-failover-database.md).
For other replication scenarios, consider:
azure-sql Managed Instance Link Preparation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/managed-instance-link-preparation.md
+
+ Title: Prepare environment for link feature
+
+description: This guide teaches you how to prepare your environment to use the SQL Managed Instance link to replicate your database to Azure SQL Managed Instance, and potentially fail over.
++++
+ms.devlang:
++++ Last updated : 03/07/2022++
+# Prepare environment for link feature - Azure SQL Managed Instance
+
+This article teaches you to prepare your environment for the [Managed Instance link feature](link-feature.md) so that you can replicate your databases from your instance of SQL Server to your instance of Azure SQL Managed Instance.
+
+> [!NOTE]
+> The link feature for Azure SQL Managed Instance is currently in preview.
+
+## Prerequisites
+
+To use the Managed Instance link feature, you need the following prerequisites:
+
+- An active Azure subscription. If you don't have one, [create a free account](https://azure.microsoft.com/free/).
+- [SQL Server 2019 Enterprise or Developer edition](https://www.microsoft.com/en-us/evalcenter/evaluate-sql-server-2019?filetype=EXE), starting with [CU15 (15.0.4198.2)](https://support.microsoft.com/topic/kb5008996-cumulative-update-15-for-sql-server-2019-4b6a8ee9-1c61-482d-914f-36e429901fb6).
+- An instance of Azure SQL Managed Instance. [Get started](instance-create-quickstart.md) if you don't have one.
+++
+## Prepare your SQL Server instance
+
+To prepare your SQL Server instance, you need to validate that you're on the minimum supported version, that you've enabled the availability groups feature, and that you've added the proper trace flags as startup parameters. You'll need to restart SQL Server for these changes to take effect.
+
+### Install CU15 (or higher)
+
+The link feature for SQL Managed Instance was introduced in CU15 of SQL Server 2019.
+
+To check your SQL Server version, run the following Transact-SQL (T-SQL) script:
+
+```sql
+-- Shows the version and CU of the SQL Server
+SELECT @@VERSION
+```
+
+If your SQL Server version is lower than CU15 (15.0.4198.2), either install the minimum supported [CU15](https://support.microsoft.com/topic/kb5008996-cumulative-update-15-for-sql-server-2019-4b6a8ee9-1c61-482d-914f-36e429901fb6) or the latest cumulative update. Your SQL Server instance will be restarted during the update.
++
+### Enable availability groups feature
+
+The link feature for SQL Managed Instance relies on the Always On availability groups feature, which is not enabled by default. To learn more, review [enabling the Always On availability groups feature](/sql/database-engine/availability-groups/windows/enable-and-disable-always-on-availability-groups-sql-server).
+
+To confirm the Always On availability groups feature is enabled, run the following Transact-SQL (T-SQL) script:
+
+```sql
+-- Is HADR enabled on this SQL Server?
+DECLARE @IsHadrEnabled sql_variant = (SELECT SERVERPROPERTY('IsHadrEnabled'))
+SELECT
+    @IsHadrEnabled AS IsHadrEnabled,
+    CASE @IsHadrEnabled
+        WHEN 0 THEN 'The Always On availability groups feature is disabled.'
+        WHEN 1 THEN 'The Always On availability groups feature is enabled.'
+        ELSE 'Unknown status.'
+    END AS 'HadrStatus'
+```
+
+If the availability groups feature is not enabled, follow these steps to enable it:
+
+1. Open the **SQL Server Configuration Manager**.
+1. Choose the SQL Server service from the navigation pane.
+1. Right-click on the SQL Server service, and select **Properties**:
+
+ :::image type="content" source="./media/managed-instance-link-preparation/sql-server-configuration-manager-sql-server-properties.png" alt-text="Screenshot showing S Q L Server configuration manager.":::
+
+1. Go to the **Always On Availability Groups** tab.
+1. Select the checkbox to enable **Always On Availability Groups**. Select **OK**:
+
+ :::image type="content" source="./media/managed-instance-link-preparation/always-on-availability-groups-properties.png" alt-text="Screenshot showing always on availability groups properties.":::
+
+1. Select **OK** on the dialog box to restart the SQL Server service.
+
+### Enable startup trace flags
+
+To optimize Managed Instance link performance, enabling trace flags `-T1800` and `-T9567` at startup is highly recommended:
+
+- **-T1800**: This trace flag optimizes SQL Server performance when the disks hosting the log files for the primary and secondary replica in an availability group have different sector sizes, such as 512 bytes and 4k. If both primary and secondary replicas have a disk sector size of 4k, this trace flag isn't required. To learn more, review [KB3009974](https://support.microsoft.com/topic/kb3009974-fix-slow-synchronization-when-disks-have-different-sector-sizes-for-primary-and-secondary-replica-log-files-in-sql-server-ag-and-logshipping-environments-ed181bf3-ce80-b6d0-f268-34135711043c).
+- **-T9567**: This trace flag enables compression of the data stream for availability groups during automatic seeding, which increases the load on the processor but can significantly reduce transfer time during seeding.
+
+To enable these trace flags at startup, follow these steps:
+
+1. Open **SQL Server Configuration Manager**.
+1. Choose the SQL Server service from the navigation pane.
+1. Right-click on the SQL Server service, and select **Properties**:
+
+ :::image type="content" source="./media/managed-instance-link-preparation/sql-server-configuration-manager-sql-server-properties.png" alt-text="Screenshot showing S Q L Server configuration manager.":::
+
+1. Go to the **Startup Parameters** tab. In **Specify a startup parameter**, enter `-T1800` and select **Add** to add the startup parameter. After the trace flag has been added, enter `-T9567` and select **Add** to add the other trace flag as well. Select **Apply** to save your changes:
+
+ :::image type="content" source="./media/managed-instance-link-preparation/startup-parameters-properties.png" alt-text="Screenshot showing Startup parameter properties.":::
+
+1. Select **OK** to close the **Properties** window.
++
+To learn more, review [enabling trace flags](/sql/t-sql/database-console-commands/dbcc-traceon-transact-sql).
++
+### Restart SQL Server and validate configuration
++
+After you've validated you're on a supported version of SQL Server, enabled the Always On availability groups feature, and added your startup trace flags, restart your SQL Server instance to apply all of these changes.
+
+To restart your SQL Server instance, follow these steps:
+
+1. Open **SQL Server Configuration Manager**.
+1. Choose the SQL Server service from the navigation pane.
+1. Right-click on the SQL Server service, and select **Restart**:
+
+ :::image type="content" source="./media/managed-instance-link-preparation/sql-server-configuration-manager-sql-server-restart.png" alt-text="Screenshot showing S Q L Server restart command call.":::
+
+After the restart, use Transact-SQL to validate the configuration of your SQL Server. Your SQL Server version should be 15.0.4198.2 or greater, the Always On availability groups feature should be enabled, and trace flags 1800 and 9567 should be enabled.
+
+To validate your configuration, run the following Transact-SQL (T-SQL) script:
+
+```sql
+-- Shows the version and CU of SQL Server
+SELECT @@VERSION
+
+-- Shows if Always On availability groups feature is enabled
+SELECT SERVERPROPERTY ('IsHadrEnabled')
+
+-- Lists all trace flags enabled on the SQL Server
+DBCC TRACESTATUS
+```
+
+The following screenshot is an example of the expected outcome for a SQL Server that's been properly configured:
+++
+## Configure network connectivity
+
+For the Managed Instance link to work, there must be network connectivity between SQL Server and SQL Managed Instance. The network option that you choose depends on where your SQL Server resides - whether it's on-premises or on a virtual machine (VM).
+
+### SQL Server on Azure VM
+
+Deploying your SQL Server to an Azure VM in the same Azure virtual network (VNet) that hosts your SQL Managed Instance is the simplest method, as there will automatically be network connectivity between the two instances. To learn more, see the detailed tutorial [Deploy and configure an Azure VM to connect to Azure SQL Managed Instance](./connect-vm-instance-configure.md).
+
+If your SQL Server on Azure VM is in a different VNet from your managed instance, either connect the two Azure VNets using [Global VNet peering](https://techcommunity.microsoft.com/t5/azure-sql/new-feature-global-vnet-peering-support-for-azure-sql-managed/ba-p/1746913), or configure [VPN gateways](../../vpn-gateway/tutorial-create-gateway-portal.md).
+
+>[!NOTE]
+> Global VNet peering is enabled by default on managed instances provisioned after November 2020. [Raise a support ticket](../database/quota-increase-request.md) to enable Global VNet peering on older instances.
++
+### SQL Server outside of Azure
+
+If your SQL Server is hosted outside of Azure, establish a VPN connection between your SQL Server and your SQL Managed Instance with either option:
+
+- [Site-to-site virtual private network (VPN) connection](/office365/enterprise/connect-an-on-premises-network-to-a-microsoft-azure-virtual-network)
+- [Azure Express Route connection](../../expressroute/expressroute-introduction.md)
+
+> [!TIP]
+> Azure Express Route is recommended for the best network performance when replicating data. Make sure to provision a gateway with sufficiently large bandwidth for your use case.
+
+### Open network ports between the environments
+
+The network must allow inbound and outbound traffic on port 5022 between SQL Server and SQL Managed Instance. Port 5022 is the standard port used for availability groups, and cannot be changed or customized.
+
+The following table describes port actions for each environment:
+
+|Environment|What to do|
+|:|:--|
+|SQL Server (in Azure) | Open both inbound and outbound traffic on port 5022 for the network firewall to the entire subnet of the SQL Managed Instance. If necessary, do the same on the Windows firewall as well. Create an NSG rule in the virtual network hosting the VM that allows communication on port 5022. |
+|SQL Server (outside of Azure) | Open both inbound and outbound traffic on port 5022 for the network firewall to the entire subnet of the SQL Managed Instance. If necessary, do the same on the Windows firewall as well. |
+|SQL Managed Instance |[Create an NSG rule](../../virtual-network/manage-network-security-group.md#create-a-security-rule) in the Azure portal to allow inbound and outbound traffic from the IP address of the SQL Server on port 5022 to the virtual network hosting the SQL Managed Instance. |
+
+Use the following PowerShell script on the host SQL Server to open ports in the Windows Firewall:
+
+```powershell
+New-NetFirewallRule -DisplayName "Allow TCP port 5022 inbound" -Direction inbound -Profile Any -Action Allow -LocalPort 5022 -Protocol TCP
+New-NetFirewallRule -DisplayName "Allow TCP port 5022 outbound" -Direction outbound -Profile Any -Action Allow -LocalPort 5022 -Protocol TCP
+```
++
+## Test bidirectional network connectivity
+
+Bidirectional network connectivity between SQL Server and SQL Managed Instance is necessary for the Managed Instance link feature to work. After opening your ports on the SQL Server side, and configuring an NSG rule on the SQL Managed Instance side, test connectivity.
++
+### Test connection from SQL Server to SQL Managed Instance
+
+To check whether SQL Server can reach your SQL Managed Instance, use the `tnc` (`Test-NetConnection`) command in PowerShell from the SQL Server host machine. Replace `<ManagedInstanceFQDN>` with the fully qualified domain name of the Azure SQL Managed Instance.
+
+```powershell
+tnc <ManagedInstanceFQDN> -port 5022
+```
+
+A successful test shows `TcpTestSucceeded : True`:
+++
+If the response is unsuccessful, verify the following:
+- There are rules in both the network firewall *and* the Windows firewall that allow traffic to the *subnet* of the SQL Managed Instance.
+- There is an NSG rule allowing communication on port 5022 for the virtual network hosting the SQL Managed Instance.
++
+#### Test connection from SQL Managed Instance to SQL Server
+
+To check that the SQL Managed Instance can reach your SQL Server, create a test endpoint, and then use the SQL Agent to execute a PowerShell script with the `tnc` command pinging SQL Server on port 5022.
+++
+Connect to the SQL Managed Instance and run the following Transact-SQL (T-SQL) script to create a test endpoint:
+
+```sql
+-- Create certificate needed for the test endpoint
+USE MASTER
+CREATE CERTIFICATE TEST_CERT
+WITH SUBJECT = N'Certificate for SQL Server',
+EXPIRY_DATE = N'3/30/2051'
+GO
+
+-- Create test endpoint
+USE MASTER
+CREATE ENDPOINT TEST_ENDPOINT
+ STATE=STARTED
+ AS TCP (LISTENER_PORT=5022, LISTENER_IP = ALL)
+ FOR DATABASE_MIRRORING (
+ ROLE=ALL,
+ AUTHENTICATION = CERTIFICATE TEST_CERT,
+ ENCRYPTION = REQUIRED ALGORITHM AES
+ )
+```
+
+Next, create a new SQL Agent job called `NetHelper`. For `SQL_SERVER_ADDRESS`, use the public IP address or a DNS name that can be resolved from the SQL Managed Instance.
+
+To create the SQL Agent Job, run the following Transact-SQL (T-SQL) script:
+```sql
+-- SQL_SERVER_ADDRESS should be public IP address, or DNS name that can be resolved from the Managed Instance host machine.
+DECLARE @SQLServerIpAddress NVARCHAR(MAX) = '<SQL_SERVER_ADDRESS>'
+DECLARE @tncCommand NVARCHAR(MAX) = 'tnc ' + @SQLServerIpAddress + ' -port 5022 -InformationLevel Quiet'
+DECLARE @jobId BINARY(16)
+
+EXEC msdb.dbo.sp_add_job @job_name=N'NetHelper',
+ @enabled=1,
+ @description=N'Test Managed Instance to SQL Server network connectivity on port 5022.',
+ @category_name=N'[Uncategorized (Local)]',
+ @owner_login_name=N'cloudSA', @job_id = @jobId OUTPUT
+
+EXEC msdb.dbo.sp_add_jobstep @job_id=@jobId, @step_name=N'tnc step',
+ @step_id=1,
+ @os_run_priority=0, @subsystem=N'PowerShell',
+ @command = @tncCommand,
+ @database_name=N'master',
+ @flags=40
+
+EXEC msdb.dbo.sp_update_job @job_id = @jobId, @start_step_id = 1
+
+EXEC msdb.dbo.sp_add_jobserver @job_id = @jobId, @server_name = N'(local)'
+
+EXEC msdb.dbo.sp_start_job @job_name = N'NetHelper'
+```
+The script above starts the job once it's created. To execute the SQL Agent job again, run the following T-SQL command:
+
+```sql
+EXEC msdb.dbo.sp_start_job @job_name = N'NetHelper'
+```
+
+Execute the following query to show the log of the SQL Agent job:
+
+```sql
+SELECT
+ sj.name JobName, sjs.step_id, sjs.step_name, sjsl.log, sjsl.date_modified
+FROM
+ msdb.dbo.sysjobs sj
+ LEFT OUTER JOIN msdb.dbo.sysjobsteps sjs
+ ON sj.job_id = sjs.job_id
+ LEFT OUTER JOIN msdb.dbo.sysjobstepslogs sjsl
+ ON sjs.step_uid = sjsl.step_uid
+WHERE
+ sj.name = 'NetHelper'
+```
+
+If the connection is successful, the log will show `True`. If the connection is unsuccessful, the log will show `False`.
+Finally, drop the test endpoint and certificate with the following Transact-SQL (T-SQL) commands:
+
+```sql
+DROP ENDPOINT TEST_ENDPOINT
+GO
+DROP CERTIFICATE TEST_CERT
+GO
+```
+
+If the connection is unsuccessful, verify the following:
+- The firewall on the host SQL Server allows inbound and outbound communication on port 5022.
+- There is an NSG rule for the virtual network hosting the SQL Managed Instance that allows communication on port 5022.
+- If your SQL Server is on an Azure VM, there is an NSG rule allowing communication on port 5022 on the virtual network hosting the VM.
+- SQL Server is running.
+
+> [!CAUTION]
+> Proceed with the next steps only after you've validated network connectivity between your source and target environments. Otherwise, troubleshoot the network connectivity issues before proceeding any further.
+## Install SSMS
+
+SQL Server Management Studio (SSMS) v18.11.1 or later is the easiest way to use the Managed Instance link. [Download SSMS version 18.11.1 or later](/sql/ssms/download-sql-server-management-studio-ssms) and install it on your client machine.
+
+After installation completes, open SSMS and connect to your supported SQL Server instance. Right-click a user database and validate that the **Azure SQL Managed Instance link** option appears in the menu.
+## Next steps
+
+After your environment has been prepared, you're ready to start [replicating your database](managed-instance-link-use-ssms-to-replicate-database.md). To learn more, review [Link feature in Azure SQL Managed Instance](link-feature.md).
azure-sql Managed Instance Link Use Ssms To Failover Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/managed-instance-link-use-ssms-to-failover-database.md
Title: Managed Instance link - Use SSMS to failover database
+ Title: Failover database with link feature in SSMS
-description: This tutorial teaches you how to use Managed Instance link and SSMS to failover database from SQL Server to Azure SQL Managed Instance.
+description: This guide teaches you how to use the SQL Managed Instance link in SQL Server Management Studio (SSMS) to fail over a database from SQL Server to Azure SQL Managed Instance.
-+ ms.devlang: -+ Last updated 03/07/2022
-# Tutorial: Perform Managed Instance link database failover with SSMS
+# Failover database with link feature in SSMS - Azure SQL Managed Instance
[!INCLUDE[appliesto-sqlmi](../includes/appliesto-sqlmi.md)]
-Managed Instance link is in preview.
+This article teaches you to use the [Managed Instance link feature](link-feature.md) to fail over your database from SQL Server to Azure SQL Managed Instance in SQL Server Management Studio (SSMS).
-Managed Instance link feature enables you to replicate and optionally migrate your database hosted on SQL Server to Azure SQL Managed Instance.
+Failing over your database from your SQL Server instance to your SQL Managed Instance breaks the link between the two databases, stops replication, and leaves both databases in an independent state, ready for individual read-write workloads.
-Once Managed Instance link database failover is performed from SSMS, the Managed Instance link is cut. Database hosted on SQL Server will become independent from database on Managed Instance and both databases will be able to perform read-write workload. This tutorial will cover performing Managed Instance link database failover by using latest version of SSMS (v18.11 and newer).
+Before failing over your database, make sure you've [prepared your environment](managed-instance-link-preparation.md) and [configured replication through the link feature](managed-instance-link-use-ssms-to-replicate-database.md).
-## Managed Instance link database failover (migration)
+> [!NOTE]
+> The link feature for Azure SQL Managed Instance is currently in preview.
-Follow the steps described in this section to perform Managed Instance link database failover.
+## Prerequisites
-1. Managed Instance link database failover starts with connecting to SQL Server from SSMS.
- To perform Managed Instance link database failover and migrate database from SQL Server to Managed Instance, open the context menu of the SQL Server database. Then select Azure SQL Managed Instance link and then choose Failover database option.
+To fail over your databases to Azure SQL Managed Instance, you need the following prerequisites:
- :::image type="content" source="./media/managed-instance-link-ssms/link-failover-ssms-database-context-failover-database.png" alt-text="Screenshot showing database's context menu option for database failover.":::
+- An active Azure subscription. If you don't have one, [create a free account](https://azure.microsoft.com/free/).
+- [SQL Server 2019 Enterprise or Developer edition](https://www.microsoft.com/en-us/evalcenter/evaluate-sql-server-2019), starting with [CU15 (15.0.4198.2)](https://support.microsoft.com/topic/kb5008996-cumulative-update-15-for-sql-server-2019-4b6a8ee9-1c61-482d-914f-36e429901fb6).
+- An instance of Azure SQL Managed Instance. [Get started](instance-create-quickstart.md) if you don't have one.
+- [SQL Server Management Studio (SSMS) v18.11.1 or later](/sql/ssms/download-sql-server-management-studio-ssms).
+- [Prepared your environment for replication](managed-instance-link-preparation.md).
+- [Set up the link feature and replicated your database to your managed instance in Azure](managed-instance-link-use-ssms-to-replicate-database.md).
-2. When the wizard starts, click Next.
+## Failover database
- :::image type="content" source="./media/managed-instance-link-ssms/link-failover-introduction.png" alt-text="Screenshot showing Introduction window.":::
+Use the **Failover database to Managed Instance** wizard in SQL Server Management Studio (SSMS) to fail over your database from your instance of SQL Server to your instance of SQL Managed Instance. The wizard takes you through failing over your database, breaking the link between the two instances in the process.
-3. On the Log in to Azure window, sign-in to your Azure account, select Subscription that is hosting the Managed Instance and click Next.
+> [!CAUTION]
+> If you are performing a planned manual failover, stop the workload on the database hosted on the source SQL Server to allow the replicated database on the SQL Managed Instance to completely catch up and fail over without data loss. If you are performing a forced failover, there may be data loss.
- :::image type="content" source="./media/managed-instance-link-ssms/link-failover-login-to-azure.png" alt-text="Screenshot showing Log in to Azure window.":::
+To fail over your database, follow these steps:
-4. On the Failover type window, select the failover type, fill in the required details and click Next.
+1. Open SQL Server Management Studio (SSMS) and connect to your instance of SQL Server.
+1. In **Object Explorer**, right-click your database, hover over **Azure SQL Managed Instance link** and select **Failover database** to open the **Failover database to Managed Instance** wizard:
- In regular situations you should choose planned manual failover option and confirm that the workload on SQL Server database is stopped.
+ :::image type="content" source="./media/managed-instance-link-use-ssms-to-failover-database/link-failover-ssms-database-context-failover-database.png" alt-text="Screenshot showing database's context menu option for database failover.":::
- :::image type="content" source="./media/managed-instance-link-ssms/link-failover-failover-type.png" alt-text="Screenshot showing Failover Type window.":::
+1. Select **Next** on the **Introduction** page of the **Failover database to Managed Instance** wizard:
+
+ :::image type="content" source="./media/managed-instance-link-use-ssms-to-failover-database/link-failover-introduction.png" alt-text="Screenshot showing Introduction page.":::
++
+1. On the **Log in to Azure** page, select **Sign-in** to provide your credentials and sign in to your Azure account. Select the subscription that hosts your SQL Managed Instance from the drop-down, and then select **Next**:
+
+ :::image type="content" source="./media/managed-instance-link-use-ssms-to-failover-database/link-failover-login-to-azure.png" alt-text="Screenshot showing Log in to Azure page.":::
+
+1. On the **Failover type** page, choose the type of failover you're performing and check the box to confirm that you've either stopped the workload for a planned failover, or you understand that there may be data loss for a forced failover. Select **Next**:
+
+ :::image type="content" source="./media/managed-instance-link-use-ssms-to-failover-database/link-failover-failover-type.png" alt-text="Screenshot showing Failover Type page.":::
+
+1. On the **Clean up (optional)** page, choose to drop the availability group if it was created solely to migrate your database to Azure and you no longer need it. If you want to keep the availability group, leave the boxes unchecked. Select **Next**:
-> [!NOTE]
-> If you are performing planned manual failover, you should stop the workload on the database hosted on the SQL Server to allow Managed Instance link to completely catch up with the replication, so that failover without data loss is possible.
-5. In case Availability Group and Distributed Availability Group were created only for the purpose of Managed Instance link, you can choose to drop these objects on the Clean-up window. Dropping these objects is optional. Click Next.
+ :::image type="content" source="./media/managed-instance-link-use-ssms-to-failover-database/link-failover-cleanup-optional.png" alt-text="Screenshot showing Cleanup (optional) page.":::
- :::image type="content" source="./media/managed-instance-link-ssms/link-failover-cleanup-optional.png" alt-text="Screenshot showing Cleanup (optional) window.":::
+1. On the **Summary** page, review the actions that will be performed for your failover. Optionally, you can also create a script to save and run yourself at a later time. When you're ready to proceed with the failover, select **Finish**:
-6. In the Summary window, you will be able to review the upcoming process. Optionally you can create the script to save it, or to execute it manually. If everything is as expected and you want to proceed with the Managed Instance link database failover, click Finish.
+ :::image type="content" source="./media/managed-instance-link-use-ssms-to-failover-database/link-failover-summary.png" alt-text="Screenshot showing Summary page.":::
- :::image type="content" source="./media/managed-instance-link-ssms/link-failover-summary.png" alt-text="Screenshot showing Summary window.":::
+1. The **Executing actions** page displays the progress of each action:
-7. You will be able to track the progress of the process.
+ :::image type="content" source="./media/managed-instance-link-use-ssms-to-failover-database/link-failover-executing-actions.png" alt-text="Screenshot showing Executing actions page.":::
- :::image type="content" source="./media/managed-instance-link-ssms/link-failover-executing-actions.png" alt-text="Screenshot showing Executing actions window.":::
+1. After all steps complete, the **Results** page shows a completed status, with checkmarks next to each successfully completed action. You can now close the window:
-8. Once all steps are completed, click Close.
+ :::image type="content" source="./media/managed-instance-link-use-ssms-to-failover-database/link-failover-results.png" alt-text="Screenshot showing Results window.":::
- :::image type="content" source="./media/managed-instance-link-ssms/link-failover-results.png" alt-text="Screenshot showing Results window.":::
+## View failed over database
-9. After this, Managed Instance link no longer exists. Both databases on SQL Server and Managed Instance can execute read-write workload and are independent.
- With this step, the migration of the database from SQL Server to Managed Instance is completed.
+During the failover process, the Managed Instance link is dropped and no longer exists. Both databases on the source SQL Server instance and target SQL Managed Instance can execute a read-write workload, and are completely independent.
- Database on SQL Server.
+You can validate this by reviewing the database on the SQL Server:
- :::image type="content" source="./media/managed-instance-link-ssms/link-failover-ssms-sql-server-database.png" alt-text="Screenshot showing database on SQL Server in SSMS.":::
- Database on Managed Instance.
+And then reviewing the database on the SQL Managed Instance:
- :::image type="content" source="./media/managed-instance-link-ssms/link-failover-ssms-managed-instance-database.png" alt-text="Screenshot showing database on Managed Instance in SSMS.":::
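+You can also confirm the state of a database from a script. A minimal sketch with `Invoke-Sqlcmd` (assuming the SqlServer PowerShell module; the server name and database name are placeholders), run against each side:
+
+```powershell
+# After failover, the database should be ONLINE and is_read_only = 0 on both instances.
+Invoke-Sqlcmd -ServerInstance '<ServerOrInstanceName>' -Database 'master' `
+    -Query "SELECT name, state_desc, is_read_only FROM sys.databases WHERE name = '<DatabaseName>';"
+```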
## Next steps For more information about Managed Instance link feature, see the following resources: -- [Managed Instance link feature](./link-feature.md)
+To learn more, review [Link feature in Azure SQL Managed Instance](link-feature.md).
azure-sql Managed Instance Link Use Ssms To Replicate Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/managed-instance-link-use-ssms-to-replicate-database.md
Title: Managed Instance link - Use SSMS to replicate database
+ Title: Replicate database with link feature in SSMS
-description: This tutorial teaches you how to use Managed Instance link and SSMS to replicate database from SQL Server to Azure SQL Managed Instance.
+description: This guide teaches you how to use the SQL Managed Instance link in SQL Server Management Studio (SSMS) to replicate a database from SQL Server to Azure SQL Managed Instance.
-+ ms.devlang: -+ Last updated 03/07/2022
-# Tutorial: Create Managed Instance link and replicate database with SSMS
+# Replicate database with link feature in SSMS - Azure SQL Managed Instance
[!INCLUDE[appliesto-sqlmi](../includes/appliesto-sqlmi.md)]
-Managed Instance link is in public preview.
+This article teaches you to use the [Managed Instance link feature](link-feature.md) to replicate your database from SQL Server to Azure SQL Managed Instance in SQL Server Management Studio (SSMS).
-Managed Instance link feature enables you to replicate your database hosted on SQL Server to Azure SQL Managed Instance. This tutorial will cover setting up Managed Instance link. More specifically, setting up database replication from SQL Server to Managed Instance with latest version of SSMS. This functionality is available in SSMS version 18.11 and newer.
+Before configuring replication for your database through the link feature, make sure you've [prepared your environment](managed-instance-link-preparation.md).
-## Managed Instance link database replication setup
+> [!NOTE]
+> The link feature for Azure SQL Managed Instance is currently in preview.
-Follow the steps described in this section to create Managed Instance link.
+## Prerequisites
-1. Managed Instance link database replication setup starts with connecting to SQL Server from SSMS.
- In the object explorer, select the database you want to replicate to Azure SQL Managed Instance. From database's context menu, choose "Azure SQL Managed Instance link" and then "Replicate database", as shown in the screenshot below.
+To replicate your databases to Azure SQL Managed Instance, you need the following prerequisites:
- :::image type="content" source="./media/managed-instance-link-ssms/link-replicate-ssms-database-context-replicate-database.png" alt-text="Screenshot showing database's context menu option for replicate database.":::
+- An active Azure subscription. If you don't have one, [create a free account](https://azure.microsoft.com/free/).
+- [SQL Server 2019 Enterprise or Developer edition](https://www.microsoft.com/en-us/evalcenter/evaluate-sql-server-2019), starting with [CU15 (15.0.4198.2)](https://support.microsoft.com/topic/kb5008996-cumulative-update-15-for-sql-server-2019-4b6a8ee9-1c61-482d-914f-36e429901fb6).
+- An instance of Azure SQL Managed Instance. [Get started](instance-create-quickstart.md) if you don't have one.
+- [SQL Server Management Studio (SSMS) v18.11.1 or later](/sql/ssms/download-sql-server-management-studio-ssms).
+- A properly [prepared environment](managed-instance-link-preparation.md).
-2. Wizard that takes you thought the process of creating Managed Instance link will be started. Once the link is created, your source database will get its read-only replica on your target Azure SQL Managed Instance.
- Once the wizard starts, you'll see the Introduction window. Click Next to proceed.
+## Replicate database
- :::image type="content" source="./media/managed-instance-link-ssms/link-replicate-introduction.png" alt-text="Screenshot showing the introduction window for Managed Instance link replicate database wizard.":::
+Use the **New Managed Instance link** wizard in SQL Server Management Studio (SSMS) to set up the link between your instance of SQL Server and your instance of SQL Managed Instance. The wizard takes you through the process of creating the Managed Instance link. Once the link is created, your source database gets a read-only replica copy on your target Azure SQL Managed Instance.
-3. Wizard will check Managed Instance link requirements. If all requirements are met and you'll be able to click the Next button to continue.
+To set up the Managed Instance link, follow these steps:
- :::image type="content" source="./media/managed-instance-link-ssms/link-replicate-sql-server-requirements.png" alt-text="Screenshot showing SQL Server requirements window.":::
+1. Open SQL Server Management Studio (SSMS) and connect to your instance of SQL Server.
+1. In **Object Explorer**, right-click your database, hover over **Azure SQL Managed Instance link** and select **Replicate database** to open the **New Managed Instance link** wizard:
-4. On the Select Databases window, choose one or more databases to be replicated via Managed Instance link. Make database selection and click Next.
+ :::image type="content" source="./media/managed-instance-link-use-ssms-to-replicate-database/link-replicate-ssms-database-context-replicate-database.png" alt-text="Screenshot showing database's context menu option to replicate database after hovering over Azure SQL Managed Instance link.":::
- :::image type="content" source="./media/managed-instance-link-ssms/link-replicate-select-databases.png" alt-text="Screenshot showing Select Databases window.":::
+1. Select **Next** on the **Introduction** page of the **New Managed Instance link** wizard:
-5. On the Login to Azure and select Managed Instance window you'll need to sign-in to Microsoft Azure, select Subscription, Resource Group and Managed Instance. Finally, you'll need to provide login details for the chosen Managed Instance.
+ :::image type="content" source="./media/managed-instance-link-use-ssms-to-replicate-database/link-replicate-introduction.png" alt-text="Screenshot showing the introduction page for Managed Instance link replicate database wizard.":::
- :::image type="content" source="./media/managed-instance-link-ssms/link-replicate-login-to-azure.png" alt-text="Screenshot showing Login to Azure and select Managed Instance window.":::
+1. On the **Requirements** page, the wizard validates requirements to establish a link to your SQL Managed Instance. Select **Next** once all the requirements are validated:
-6. Once all of that is populated, you'll be able to click Next.
+ :::image type="content" source="./media/managed-instance-link-use-ssms-to-replicate-database/link-replicate-sql-server-requirements.png" alt-text="Screenshot showing S Q L Server requirements page.":::
- :::image type="content" source="./media/managed-instance-link-ssms/link-replicate-login-to-azure-populated.png" alt-text="Screenshot showing Login to Azure and select Managed Instance populated window.":::
+1. On the **Select Databases** page, choose one or more databases you want to replicate to your SQL Managed Instance via the Managed Instance link. Select **Next**:
-7. On the Specify Distributed AG Options window, you'll see prepopulated values for the various parameters. Unless you need to customize something, you can proceed with the default options and click Next.
+ :::image type="content" source="./media/managed-instance-link-use-ssms-to-replicate-database/link-replicate-select-databases.png" alt-text="Screenshot showing Select Databases page.":::
- :::image type="content" source="./media/managed-instance-link-ssms/link-replicate-distributed-ag-options.png" alt-text="Screenshot showing Specify Distributed AG options window.":::
+1. On the **Login to Azure and select Managed Instance** page, select **Sign In...** to sign in to Microsoft Azure. Choose the subscription, resource group, and target managed instance from the drop-downs. Select **Login** and provide login details for the SQL Managed Instance:
-8. On the Summary window you'll be able to see the steps for creating Managed Instance link. Optionally, you can generate the setup Script to save it or to run it yourself.
- Complete the wizard process by clicking on the Finish.
+ :::image type="content" source="./media/managed-instance-link-use-ssms-to-replicate-database/link-replicate-login-to-azure.png" alt-text="Screenshot showing Login to Azure and select Managed Instance page.":::
- :::image type="content" source="./media/managed-instance-link-ssms/link-replicate-summary.png" alt-text="Screenshot showing Summary window.":::
+1. After providing all necessary information, select **Next**:
-9. The Executing actions window will display the progress of the process.
+ :::image type="content" source="./media/managed-instance-link-use-ssms-to-replicate-database/link-replicate-login-to-azure-populated.png" alt-text="Screenshot showing Login to Azure and select Managed Instance populated page.":::
- :::image type="content" source="./media/managed-instance-link-ssms/link-replicate-executing-actions.png" alt-text="Screenshot showing Executing actions window.":::
+1. Review the prepopulated values on the **Specify Distributed AG Options** page, and change any that need customization. When ready, select **Next**.
-10. Results window will show up once the process is completed and all steps are marked with a green check sign. At this point, you can close the wizard.
+ :::image type="content" source="./media/managed-instance-link-use-ssms-to-replicate-database/link-replicate-distributed-ag-options.png" alt-text="Screenshot showing Specify Distributed A G options page.":::
- :::image type="content" source="./media/managed-instance-link-ssms/link-replicate-results.png" alt-text="Screenshot showing Results window.":::
+1. Review the actions on the **Summary** page, and select **Finish** when ready. Optionally, you can also create a script to save and run yourself at a later time.
-11. With this, Managed Instance link has been created and chosen databases are being replicated to the Managed Instance.
+ :::image type="content" source="./media/managed-instance-link-use-ssms-to-replicate-database/link-replicate-summary.png" alt-text="Screenshot showing Summary window.":::
- In Object explorer, you'll see that the source database hosted on SQL Server is now in "Synchronized" state. Also, under Always On High Availability > Availability Groups, you'll see that an Availability Group and a Distributed Availability Group are created for the Managed Instance link.
+1. The **Executing actions** page displays the progress of each action:
- :::image type="content" source="./media/managed-instance-link-ssms/link-replicate-ssms-sql-server-database.png" alt-text="Screenshot showing the state of SQL Server database and Availability Group and Distributed Availability Group in SSMS.":::
+ :::image type="content" source="./media/managed-instance-link-use-ssms-to-replicate-database/link-replicate-executing-actions.png" alt-text="Screenshot showing Executing actions page.":::
- We can also see a new database under the target Managed Instance. Depending on the database size and network speed, initially you may see the database on the Managed Instance side in the "Restoring" state. Once the seeding from the SQL Server to the Managed Instance is done, the database will be ready for read-only workload and visible as in the screenshot below.
+1. After all steps complete, the **Results** page shows a completed status, with checkmarks next to each successfully completed action. You can now close the window:
- :::image type="content" source="./media/managed-instance-link-ssms/link-replicate-ssms-managed-instance-database.png" alt-text="Screenshot showing the state of Managed Instance database.":::
+ :::image type="content" source="./media/managed-instance-link-use-ssms-to-replicate-database/link-replicate-results.png" alt-text="Screenshot showing Results page.":::
-## Next steps
+## View replicated database
+
+After the Managed Instance link is created, the selected databases are replicated to the SQL Managed Instance.
+
+Use **Object Explorer** on your SQL Server instance to view the `Synchronized` status of the replicated database, and expand **Always On High Availability** and **Availability Groups** to view the distributed availability group that is created for the Managed Instance link.
+
-For more information about Managed Instance link feature, see the following resources:
+Connect to your SQL Managed Instance and use **Object Explorer** to view your replicated database. Depending on the database size and network speed, the database may initially be in a `Restoring` state. After initial seeding completes, the database is restored to the SQL Managed Instance and ready for read-only workloads.
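+If you prefer a scripted check, the replica's state is also visible in `sys.databases` on the managed instance. A minimal sketch with `Invoke-Sqlcmd` (assuming the SqlServer PowerShell module; the instance FQDN, login, and database name are placeholders):
+
+```powershell
+# RESTORING means seeding is still in progress; ONLINE means the replica is ready for read-only workloads.
+Invoke-Sqlcmd -ServerInstance '<ManagedInstanceFQDN>' -Database 'master' `
+    -Username '<login>' -Password '<password>' `
+    -Query "SELECT name, state_desc FROM sys.databases WHERE name = '<DatabaseName>';"
+```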
+## Next steps
-- [Managed Instance link feature](./link-feature.md)
+To break the link and fail over your database to the SQL Managed Instance, see [failover database](managed-instance-link-use-ssms-to-failover-database.md). To learn more, see [Link feature in Azure SQL Managed Instance](link-feature.md).
azure-sql Resource Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/resource-limits.md
Support for the premium-series hardware generations (public preview) is currentl
| Australia Central | Yes | | | Australia East | Yes | Yes | | Canada Central | Yes | |
-| Canada East | Yes | |
-| Central US | Yes | |
-| East US | Yes | |
-| East US 2 | Yes | |
-| Germany West Central | | Yes |
| Japan East | Yes | | | Korea Central | Yes | | | North Central US | Yes | |
-| North Europe | Yes | |
| South Central US | Yes | Yes | | Southeast Asia | Yes | | | West Europe | | Yes |
azure-sql Create Configure Managed Instance Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/scripts/create-configure-managed-instance-cli.md
Last updated 01/26/2022
-# Use CLI to create an Azure SQL Managed Instance
+# Create an Azure SQL Managed Instance using the Azure CLI
[!INCLUDE[appliesto-sqldb](../../includes/appliesto-sqlmi.md)]
This Azure CLI script example creates an Azure SQL Managed Instance in a dedicat
### Run the script ## Clean up resources
azure-sql Restore Geo Backup Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/scripts/restore-geo-backup-cli.md
Last updated 02/11/2022
-# Use CLI to restore a Managed Instance database to another geo-region
+# Restore a Managed Instance database to another geo-region using the Azure CLI
[!INCLUDE[appliesto-sqldb](../../includes/appliesto-sqlmi.md)]
This sample requires an existing pair of managed instances, see [Use Azure CLI t
### Run the script ## Clean up resources
azure-sql Transparent Data Encryption Byok Sql Managed Instance Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/scripts/transparent-data-encryption-byok-sql-managed-instance-cli.md
Last updated 01/26/2022
-# Manage Transparent Data Encryption in a Managed Instance using your own key from Azure Key Vault
+# Manage Transparent Data Encryption in a Managed Instance using your own key from Azure Key Vault with the Azure CLI
[!INCLUDE[appliesto-sqldb](../../includes/appliesto-sqlmi.md)]
This sample requires an existing Managed Instance, see [Use Azure CLI to create
### Run the script ## Clean up resources
backup Backup Managed Disks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-managed-disks.md
Title: Back up Azure Managed Disks description: Learn how to back up Azure Managed Disks from the Azure portal. Previously updated : 11/25/2021 Last updated : 03/10/2022
To configure disk backup, follow these steps:
- You can't create an incremental snapshot for a particular disk outside of that disk's subscription. So, choose the resource group within the same subscription where the disk needs to be backed up. [Learn more](../virtual-machines/disks-incremental-snapshots.md#restrictions) about incremental snapshot for managed disks.
- - Once you configure the backup of a disk, you can't change the Snapshot Resource Group that's assigned to a backup instance.
-
- - During a backup operation, Azure Backup creates a Storage Account in the Snapshot resource group. Only one Storage Account is created per snapshot resource group. The account is reused across multiple disk backup instances that use the same resource group as the snapshot resource group.
-
- - The Storage account doesn't store the Snapshots. The Managed-disk's incremental snapshots are ARM resources created on resource group and not in a Storage Account.
- - Storage Account stores the metadata for each recovery point. Azure Backup service creates a blob container per disk backup instance. For each recovery point, a block blob will be created to store metadata describing the recovery point (such as subscription, disk ID, disk attributes, and so on) that occupies a small space (in a few KiBs).
- - Storage Account is created as RA GZRS if the region supports zonal redundancy. If the region doesn't support zonal redundancy, the Storage Account is created as RA GRS.
- If any existing policy stops the creation of a Storage Account on the subscription or resource group with GRS redundancy, the Storage Account is created as LRS. The Storage Account that's created is **General Purpose v2**, with block blobs stored on the hot tier in the Blob container.
- - The number of recovery points is determined by the Backup policy used to configure backup of the disk backup instance. According to the Garbage collection process, the older block blobs are deleted, as the corresponding older recovery points are pruned.
-
- - Don't apply resource locks, policies, or a firewall on the snapshot resource group or Storage Account created by the Azure Backup service. The service creates and manages resources in this Snapshot resource group that's assigned to a backup instance when you configure a disk backup. The service creates the Storage Account and its resources, and they shouldn't be deleted or moved.
-
- >[!Note]
- >If a Storage Account is deleted, backups will fail, and restore will fail for all existing recovery points.
+ - Once you configure the backup of a disk, you can't change the Snapshot Resource Group that's assigned to a backup instance.
:::image type="content" source="./media/backup-managed-disks/validate-snapshot-resource-group-inline.png" alt-text="Screenshot showing the process to initiate prerequisites checks." lightbox="./media/backup-managed-disks/validate-snapshot-resource-group-expanded.png":::
backup Disk Backup Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/disk-backup-overview.md
Title: Overview of Azure Disk Backup description: Learn about the Azure Disk backup solution. Previously updated : 05/27/2021 Last updated : 03/10/2022+++ # Overview of Azure Disk Backup
Incremental snapshots are always stored on standard storage, irrespective of the
The snapshots created by Azure Backup are stored in the resource group within your Azure subscription and incur Snapshot Storage charges. For more details about snapshot pricing, see [Managed Disk Pricing](https://azure.microsoft.com/pricing/details/managed-disks/). Because the snapshots aren't copied to the Backup Vault, Azure Backup doesn't charge a Protected Instance fee and Backup Storage cost doesn't apply.
-During a backup operation, the Azure Backup service creates a Storage Account in the Snapshot Resource Group, where the snapshots are stored. Managed disk's incremental snapshots are ARM resources created on the Resource group and not in a Storage Account.
-
-Storage Account is used to store metadata for each recovery point. Azure Backup service creates a Blob container per disk backup instance. For each recovery point, a block blob is created to store metadata information describing the recovery point, such as subscription, disk ID, disk attributes, and so on, that occupies a small space (in a few KiBs).
-
-The storage account is created as RA GZRS if the region supports zonal redundancy. If the region doesn't support zonal redundancy, the storage account is created as RA GRS. If your existing policy stops creation of storage accounts on the subscription or resource group with GRS redundancy, the storage account is created as LRS. The storage account created is General Purpose v2 with block blobs stored on the Hot tier in the blob container. You're charged for the Storage Account according to the storage account's redundancy. These charges are for the size of the block blobs. However, this will be a minimal amount as it stores metadata only, which is a few KiBs per recovery point.
- The number of recovery points is determined by the Backup policy used to configure backups of the disk backup instances. Older block blobs are deleted according to the garbage collection process as the corresponding older recovery points are pruned. ## Next steps
cognitive-services Speech Container Batch Processing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-container-batch-processing.md
The batch processing kit offers three modes, using the `--run-mode` parameter.
#### [REST](#tab/rest)
-`REST` mode is an API server mode that provides a basic set of HTTP endpoints for audio file batch submission, status checking, and long polling. Also enables programmatic consumption using a python module extension, or importing as a submodule.
+`REST` mode is an API server mode that provides a basic set of HTTP endpoints for audio file batch submission, status checking, and long polling. Also enables programmatic consumption using a Python module extension, or importing as a submodule.
:::image type="content" source="media/containers/batch-rest-api-mode.png" alt-text="A diagram showing the batch-kit container processing files in REST mode.":::
cognitive-services Samples Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/big-data/samples-python.md
from requests import Request
from mmlspark.io.http import HTTPTransformer, http_udf from pyspark.sql.functions import udf, col
-# Use any requests from the python requests library
+# Use any requests from the Python requests library
def world_bank_request(country): return Request("GET", "http://api.worldbank.org/v2/country/{}?format=json".format(country))
cognitive-services Tutorial Use Azure Notebook Generate Loop Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/tutorial-use-azure-notebook-generate-loop-data.md
Last updated 04/27/2020
-#Customer intent: As a python developer, I want use Personalizer in an Azure Notebook so that I can understand the end to end lifecycle of a Personalizer loop.
+#Customer intent: As a Python developer, I want to use Personalizer in an Azure Notebook so that I can understand the end-to-end lifecycle of a Personalizer loop.
# Tutorial: Use Personalizer in Azure Notebook
These values have a very short duration in order to show changes in this tutoria
Run each executable cell and wait for it to return. You know it is done when the brackets next to the cell display a number instead of a `*`. The following sections explain what each cell does programmatically and what to expect for the output.
-### Include the python modules
+### Include the Python modules
-Include the required python modules. The cell has no output.
+Include the required Python modules. The cell has no output.
```python import json
communication-services Sub Eligibility Number Capability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/sub-eligibility-number-capability.md
The tables below summarize current availability:
|:--|:--|:--|:--|:--|:--|
| Denmark | Toll-Free | Not Available | Not Available | Public Preview | Public Preview\* |
| Denmark | Local | Not Available | Not Available | Public Preview | Public Preview\* |
+| USA & Puerto Rico | Toll-Free | General Availability | General Availability | Public Preview | Public Preview\* |
+| USA & Puerto Rico | Local | Not Available | Not Available | Public Preview | Public Preview\* |
\* Available through Azure Bot Framework and Dynamics only
communication-services Dominant Speaker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/calling-sdk/dominant-speaker.md
Title: Get active speakers description: Use Azure Communication Services SDKs to render the active speakers in a call.--++ Last updated 08/10/2021
+zone_pivot_groups: acs-plat-web-ios-android-windows
#Customer intent: As a developer, I want to get a list of active speakers within a call.
During an active call, you may want to get a list of active speakers in order to
- A user access token to enable the calling client. For more information, see [Create and manage access tokens](../../quickstarts/access-tokens.md). - Optional: Complete the quickstart to [add voice calling to your application](../../quickstarts/voice-video-calling/getting-started-with-calling.md) [!INCLUDE [Dominant Speaker JavaScript](./includes/dominant-speaker/dominant-speaker-web.md)]+++ ## Next steps - [Learn how to manage video](./manage-video.md)
confidential-ledger Quickstart Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-ledger/quickstart-python.md
pip install azure.confidentialledger
[!INCLUDE [Register the microsoft.ConfidentialLedger resource provider](../../includes/confidential-ledger-register-rp.md)]
-## Create your python app
+## Create your Python app
### Initialization
-We can now start writing our python application. First, we'll import the required packages.
+We can now start writing our Python application. First, we'll import the required packages.
```python # Import the Azure authentication library
cosmos-db Diagnostic Queries Cassandra https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/diagnostic-queries-cassandra.md
For [resource-specific tables](../cosmosdb-monitor-resource-logs.md#create-setti
:::image type="content" source="./media/cassandra-log-analytics/log-analytics-questions-bubble.png" alt-text="Image of a bubble word map with possible questions on how to leverage Log Analytics within Cosmos DB"::: ### RU consumption-- What application queries are causing high RU consumption
+- Cassandra operations that are consuming high RU/s.
```kusto CDBCassandraRequests
-| where DatabaseName startswith "azure"
+| where DatabaseName=="azure_comos" and CollectionName=="user"
| project TimeGenerated, RequestCharge, OperationName, requestType=split(split(PIICommandText,'"')[3], ' ')[0]
-| summarize max(RequestCharge) by bin(TimeGenerated, 10m), tostring(requestType);
+| summarize max(RequestCharge) by bin(TimeGenerated, 10m), tostring(requestType), OperationName;
``` -- Monitoring RU Consumption per operation on logical partition keys.
+- Monitoring RU consumption per operation on logical partition keys.
```kusto CDBPartitionKeyRUConsumption
-| where DatabaseName startswith "azure"
+| where DatabaseName=="azure_comos" and CollectionName=="user"
| summarize TotalRequestCharge=sum(todouble(RequestCharge)) by PartitionKey, PartitionKeyRangeId | order by TotalRequestCharge; CDBPartitionKeyRUConsumption
-| where DatabaseName startswith "azure"
+| where DatabaseName=="azure_comos" and CollectionName=="user"
| summarize TotalRequestCharge=sum(todouble(RequestCharge)) by OperationName, PartitionKey | order by TotalRequestCharge; - CDBPartitionKeyRUConsumption
-| where DatabaseName startswith "azure"
-| summarize TotalRequestCharge=sum(todouble(RequestCharge)) by bin(TimeGenerated, 1m), PartitionKey, PartitionKeyRangeId
+| where DatabaseName=="azure_comos" and CollectionName=="user"
+| summarize TotalRequestCharge=sum(todouble(RequestCharge)) by bin(TimeGenerated, 1m), PartitionKey
| render timechart; ``` - What are the top queries impacting RU consumption? ```kusto
-let topRequestsByRUcharge = CDBDataPlaneRequests
-| where TimeGenerated > ago(24h)
-| project RequestCharge , TimeGenerated, ActivityId;
CDBCassandraRequests
-| project ActivityId, DatabaseName, CollectionName, queryText=split(split(PIICommandText,'"')[3], ' ')[0]
-| join kind=inner topRequestsByRUcharge on ActivityId
-| project DatabaseName, CollectionName, tostring(queryText), RequestCharge, TimeGenerated
-| order by RequestCharge desc
-| take 10;
+| where DatabaseName=="azure_cosmos" and CollectionName=="user"
+| where TimeGenerated > ago(24h)
+| project ActivityId, DatabaseName, CollectionName, queryText=split(split(PIICommandText,'"')[3], ' ')[0], RequestCharge, TimeGenerated
+| order by RequestCharge desc;
```-- RU Consumption based on variations in payload sizes for read and write operations.
+- RU consumption based on variations in payload sizes for read and write operations.
```kusto // This query is looking at read operations
-CDBDataPlaneRequests
-| where OperationName in ("Read", "Query")
-| summarize maxResponseLength=max(ResponseLength), maxRU=max(RequestCharge) by bin(TimeGenerated, 10m), OperationName
+CDBCassandraRequests
+| where DatabaseName=="azure_cosmos" and CollectionName=="user"
+| project ResponseLength, TimeGenerated, RequestCharge, cassandraOperationName=split(split(PIICommandText,'"')[3], ' ')[0]
+| where cassandraOperationName =="SELECT"
+| summarize maxResponseLength=max(ResponseLength), maxRU=max(RequestCharge) by bin(TimeGenerated, 10m), tostring(cassandraOperationName)
// This query is looking at write operations
-CDBDataPlaneRequests
-| where OperationName in ("Create", "Upsert", "Delete", "Execute")
-| summarize maxResponseLength=max(ResponseLength), maxRU=max(RequestCharge) by bin(TimeGenerated, 10m), OperationName
+CDBCassandraRequests
+| where DatabaseName=="azure_cosmos" and CollectionName=="user"
+| project ResponseLength, TimeGenerated, RequestCharge, cassandraOperationName=split(split(PIICommandText,'"')[3], ' ')[0]
+| where cassandraOperationName in ("CREATE", "UPDATE", "INSERT", "DELETE", "DROP")
+| summarize maxResponseLength=max(ResponseLength), maxRU=max(RequestCharge) by bin(TimeGenerated, 10m), tostring(cassandraOperationName)
// Write operations over a time period.
-CDBDataPlaneRequests
-| where OperationName in ("Create", "Update", "Delete", "Execute")
-| summarize maxResponseLength=max(ResponseLength) by bin(TimeGenerated, 1m), OperationName
+CDBCassandraRequests
+| where DatabaseName=="azure_cosmos" and CollectionName=="user"
+| project ResponseLength, TimeGenerated, RequestCharge, cassandraOperationName=split(split(PIICommandText,'"')[3], ' ')[0]
+| where cassandraOperationName in ("CREATE", "UPDATE", "INSERT", "DELETE", "DROP")
+| summarize maxResponseLength=max(ResponseLength), maxRU=max(RequestCharge) by bin(TimeGenerated, 10m), tostring(cassandraOperationName)
+| render timechart;
+
+// Read operations over a time period.
+CDBCassandraRequests
+| where DatabaseName=="azure_cosmos" and CollectionName=="user"
+| project ResponseLength, TimeGenerated, RequestCharge, cassandraOperationName=split(split(PIICommandText,'"')[3], ' ')[0]
+| where cassandraOperationName =="SELECT"
+| summarize maxResponseLength=max(ResponseLength), maxRU=max(RequestCharge) by bin(TimeGenerated, 10m), tostring(cassandraOperationName)
| render timechart; ```
+- RU consumption based on read and write operations by logical partition.
+```kusto
+CDBPartitionKeyRUConsumption
+| where DatabaseName=="azure_cosmos" and CollectionName=="user"
+| where OperationName in ("Delete", "Read", "Upsert")
+| summarize totalRU=max(RequestCharge) by OperationName, PartitionKeyRangeId
+```
+ - RU consumption by physical and logical partition. ```kusto CDBPartitionKeyRUConsumption
-| where DatabaseName=="uprofile" and AccountName startswith "azure"
+| where DatabaseName=="azure_cosmos" and CollectionName=="user"
| summarize totalRequestCharge=sum(RequestCharge) by PartitionKey, PartitionKeyRangeId; ``` -- Is there a high RU consumption because of having hot partition?
+- Is a hot partition leading to high RU consumption?
```kusto CDBPartitionKeyStatistics
-| where AccountName startswith "azure"
+| where DatabaseName=="azure_cosmos" and CollectionName=="user"
| where TimeGenerated > now(-8h) | summarize StorageUsed = sum(SizeKb) by PartitionKey | order by StorageUsed desc
CDBPartitionKeyStatistics
| project AccountName=tolower(AccountName), PartitionKey, SizeKb; CDBCassandraRequests | project AccountName=tolower(AccountName),RequestCharge, ErrorCode, OperationName, ActivityId, DatabaseName, CollectionName, PIICommandText, RegionName
-| where DatabaseName != "<empty>"
+| where DatabaseName=="azure_cosmos" and CollectionName=="user"
| join kind=inner storageUtilizationPerPartitionKey on $left.AccountName==$right.AccountName | where ErrorCode != -1 //successful | project AccountName, PartitionKey,ErrorCode,RequestCharge,SizeKb, OperationName, ActivityId, DatabaseName, CollectionName, PIICommandText, RegionName;
CDBCassandraRequests
### Latency - Number of server-side timeouts (Status Code - 408) seen in the time window. ```kusto
-CDBDataPlaneRequests
-| where TimeGenerated >= now(-6h)
-| where AccountName startswith "azure"
-| where StatusCode == 408
-| summarize count() by bin(TimeGenerated, 10m)
-| render timechart
+CDBCassandraRequests
+| where DatabaseName=="azure_cosmos" and CollectionName=="user"
+| where ErrorCode in (4608, 4352) //Corresponding code in Cassandra
+| summarize max(DurationMs) by bin(TimeGenerated, 10m), ErrorCode
+| render timechart;
``` - Do we observe spikes in server-side latencies in the specified time window? ```kusto
-CDBDataPlaneRequests
+CDBCassandraRequests
| where TimeGenerated > now(-6h)
-| where AccountName startswith "azure"
+| DatabaseName=="azure_cosmos" and CollectionName=="user"
| summarize max(DurationMs) by bin(TimeGenerated, 10m)
-| render timechart
+| render timechart;
``` -- Query operations that are getting throttled.
+- Operations that are getting throttled.
```kusto CDBCassandraRequests
+| where DatabaseName=="azure_cosmos" and CollectionName=="user"
| project RequestLength, ResponseLength, RequestCharge, DurationMs, TimeGenerated, OperationName, query=split(split(PIICommandText,'"')[3], ' ')[0]
CDBCassandraRequests
``` - What queries are causing your application to throttle with a specified time period looking specifically at 429. ```kusto
-let throttledRequests = CDBDataPlaneRequests
-| where StatusCode==429
-| project OperationName , TimeGenerated, ActivityId;
CDBCassandraRequests
-| project PIICommandText, ActivityId, DatabaseName , CollectionName
-| join kind=inner throttledRequests on ActivityId
+| where DatabaseName=="azure_cosmos" and CollectionName=="user"
+| where ErrorCode==4097 // Corresponding error code in Cassandra
| project DatabaseName , CollectionName , CassandraCommands=split(split(PIICommandText,'"')[3], ' ')[0] , OperationName, TimeGenerated; ```
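+You can also run any of these queries outside the Azure portal. A minimal sketch with the Az.OperationalInsights PowerShell module (the workspace GUID is a placeholder; substitute any query from this article):
+
+```powershell
+# Placeholder workspace ID; replace with your Log Analytics workspace GUID.
+$query = 'CDBCassandraRequests | where DatabaseName=="azure_cosmos" and CollectionName=="user" | take 10'
+$result = Invoke-AzOperationalInsightsQuery -WorkspaceId '<workspace-guid>' -Query $query
+$result.Results | Format-Table
+```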
cosmos-db Migrate Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/migrate-data.md
You can move data from existing Cassandra workloads to Azure Cosmos DB by using
Use the [CQL COPY command](https://cassandra.apache.org/doc/latest/cassandra/tools/cqlsh.html#cqlshrc) to copy local data to the Cassandra API account in Azure Cosmos DB.
+> [!WARNING]
+> Only use the CQL COPY command to migrate small datasets. To move large datasets, [migrate data by using Spark](#migrate-data-by-using-spark).
+ 1. To be certain that your csv file contains the correct file structure, use the `COPY TO` command to export data directly from your source Cassandra table to a csv file (ensure that cqlsh is connected to the source table using the appropriate credentials): ```bash
cosmos-db Local Emulator Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/local-emulator-release-notes.md
This article shows the Azure Cosmos DB Emulator released versions and it details
## Release notes
+### 2.14.6 (March 7, 2022)
+
+ - This release updates the Azure Cosmos DB Emulator background services to match the latest online functionality of Azure Cosmos DB. In addition to this update, a couple of issues were addressed in this release:
+ * Fix for an issue related to high CPU usage when the emulator is running.
+ * Add PowerShell option to set the Mongo API version: "-MongoApiVersion". Valid settings are: "3.2", "3.6", and "4.0".
+ ### 2.14.5 (January 18, 2022) - This release updates the Azure Cosmos DB Emulator background services to match the latest online functionality of the Azure Cosmos DB. One other important update with this release is to reduce the number of services executed in the background and start them as needed.
cosmos-db Partial Document Update Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/partial-document-update-getting-started.md
if (response.isSuccessStatusCode()) {
Support for Partial document update (Patch API) in the [Azure Cosmos DB JavaScript SDK](sql/sql-api-sdk-node.md) is available from version *3.15.0* onwards. You can download it from the [NPM Registry](https://www.npmjs.com/package/@azure/cosmos/v/3.15.0) > [!NOTE]
-> A complete partial document update sample can be found in the [.js v3 samples repository](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/cosmosdb/cosmos/samples/v3/typescript/src/ItemManagement.ts#L167) on GitHub. In the sample, as the container is created without a partition key specified, the Javascript SDK
+> A complete partial document update sample can be found in the [.js v3 samples repository](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/cosmosdb/cosmos/samples/v3/typescript/src/ItemManagement.ts#L167) on GitHub. In the sample, as the container is created without a partition key specified, the JavaScript SDK
resolves the partition key values from the items through the container's partition key definition.
cosmos-db Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/cassandra/autoscale.md
The script in this article demonstrates creating an Azure Cosmos DB account, key
### Run the script ## Clean up resources
cosmos-db Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/cassandra/create.md
The script in this article demonstrates creating an Azure Cosmos DB account, key
### Run the script ## Clean up resources
cosmos-db Lock https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/cassandra/lock.md
The script in this article demonstrates preventing resources from being deleted
### Run the script ## Clean up resources
cosmos-db Serverless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/cassandra/serverless.md
The script in this article demonstrates creating a serverless Azure Cosmos DB ac
### Run the script ## Clean up resources
cosmos-db Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/cassandra/throughput.md
The script in this article creates a Cassandra keyspace with shared throughput a
### Run the script ## Clean up resources
cosmos-db Ipfirewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/common/ipfirewall.md
The script in this article demonstrates creating a Cosmos DB account with defaul
### Run the script ## Clean up resources
cosmos-db Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/common/keys.md
The script in this article demonstrates four operations.
### Run the script ## Clean up resources
cosmos-db Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/common/regions.md
This script uses a SQL (Core) API account, but these operations are identical ac
### Run the script ## Clean up resources
cosmos-db Service Endpoints Ignore Missing Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/common/service-endpoints-ignore-missing-vnet.md
This script uses a SQL (Core) API account. To use this sample for other APIs, ap
### Run the script ## Clean up resources
cosmos-db Service Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/common/service-endpoints.md
This script uses a Core (SQL) API account. To use this sample for other APIs, ap
### Run the script ## Clean up resources
cosmos-db Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/gremlin/autoscale.md
The script in this article demonstrates creating a Gremlin API database and grap
### Run the script ## Clean up resources
cosmos-db Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/gremlin/create.md
The script in this article demonstrates creating a Gremlin database and graph.
### Run the script ## Clean up resources
cosmos-db Lock https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/gremlin/lock.md
The script in this article demonstrates performing resource lock operations for
### Run the script ## Clean up resources
cosmos-db Serverless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/gremlin/serverless.md
The script in this article demonstrates creating a Gremlin serverless account, d
### Run the script ## Clean up resources
cosmos-db Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/gremlin/throughput.md
The script in this article creates a Gremlin database with shared throughput and
### Run the script ## Clean up resources
cosmos-db Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/mongodb/autoscale.md
The script in this article demonstrates creating a MongoDB API database with aut
### Run the script ## Clean up resources
cosmos-db Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/mongodb/create.md
The script in this article demonstrates creating a MongoDB API database and coll
### Run the script ## Clean up resources
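A minimal sketch with placeholder names, assuming the account is created with the MongoDB kind and the collection is sharded on a key:

```bash
az cosmosdb create --name myaccount --resource-group myrg --kind MongoDB --server-version 4.0
az cosmosdb mongodb database create --account-name myaccount --resource-group myrg --name mydb
az cosmosdb mongodb collection create --account-name myaccount --resource-group myrg \
  --database-name mydb --name mycollection --shard "shardKey" --throughput 400
```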
cosmos-db Lock https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/mongodb/lock.md
The script in this article demonstrates performing resource lock operations for
### Run the script ## Clean up resources
cosmos-db Serverless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/mongodb/serverless.md
The script in this article demonstrates creating a MongoDB API serverless accoun
### Run the script ## Clean up resources
cosmos-db Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/mongodb/throughput.md
The script in this article creates a MongoDB database with shared throughput and
### Run the script ## Clean up resources
cosmos-db Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/sql/autoscale.md
The script in this article demonstrates creating a SQL API database and containe
### Run the script ## Clean up resources
cosmos-db Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/sql/create.md
The script in this article demonstrates creating a SQL API database and containe
### Run the script ## Clean up resources
cosmos-db Lock https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/sql/lock.md
The script in this article demonstrates performing resource lock operations for
### Run the script ## Clean up resources
cosmos-db Serverless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/sql/serverless.md
The script in this article demonstrates creating a SQL API serverless account wi
### Run the script ## Clean up resources
cosmos-db Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/sql/throughput.md
The script in this article creates a Core (SQL) API database with shared through
### Run the script ## Clean up resources
cosmos-db Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/table/autoscale.md
The script in this article demonstrates creating a Table API table with autoscal
### Run the script ## Clean up resources
cosmos-db Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/table/create.md
The script in this article demonstrates creating a Table API table.
### Run the script ## Clean up resources
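A minimal sketch with placeholder names:

```bash
az cosmosdb create --name myaccount --resource-group myrg --capabilities EnableTable
az cosmosdb table create --account-name myaccount --resource-group myrg \
  --name mytable --throughput 400
```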
cosmos-db Lock https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/table/lock.md
The script in this article demonstrates performing resource lock operations for
### Run the script ## Clean up resources
cosmos-db Serverless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/table/serverless.md
The script in this article demonstrates creating a Table API serverless account
### Run the script ## Clean up resources
cosmos-db Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/table/throughput.md
The script in this article creates a Table API table then updates the throughput
### Run the script ## Clean up resources
data-factory Concepts Data Flow Performance Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-data-flow-performance-pipelines.md
If your data flows execute in parallel, we recommend that you don't enable the A
## Execute data flows sequentially
-If you execute your data flow activities in sequence, it is recommended that you set a TTL in the Azure IR configuration. The service will reuse the compute resources, resulting in a faster cluster start-up time. Each activity will still be isolated and receive a new Spark context for each execution. To reduce the time between sequential activities even more, set the **quick re-use** checkbox on the Azure IR to tell the service to re-use the existing cluster.
+If you execute your data flow activities in sequence, it is recommended that you set a TTL in the Azure IR configuration. The service will reuse the compute resources, resulting in a faster cluster start-up time. Each activity will still be isolated and receive a new Spark context for each execution.
## Overloading a single data flow
data-factory Connector File System https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-file-system.md
The following properties are supported for file system under `location` settings
| Property | Description | Required | | - | | -- | | type | The type property under `location` in dataset must be set to **FileServerLocation**. | Yes |
-| folderPath | The path to folder. If you want to use wildcard to filter folder, skip this setting and specify in activity source settings. | No |
+| folderPath | The path to the folder. If you want to use a wildcard to filter folders, skip this setting and specify it in the activity source settings. Note that you will need to set up the file share location in your Windows or Linux environment to expose the folder for sharing. | No |
| fileName | The file name under the given folderPath. If you want to use wildcard to filter files, skip this setting and specify in activity source settings. | No | **Example:**
data-factory Control Flow Execute Data Flow Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-execute-data-flow-activity.md
The Core Count and Compute Type properties can be set dynamically to adjust to t
Choose which Integration Runtime to use for your Data Flow activity execution. By default, the service will use the auto-resolve Azure Integration runtime with four worker cores. This IR has a general purpose compute type and runs in the same region as your service instance. For operationalized pipelines, it is highly recommended that you create your own Azure Integration Runtimes that define specific regions, compute type, core counts, and TTL for your data flow activity execution.
-A minimum compute type of General Purpose (compute optimized is not recommended for large workloads) with an 8+8 (16 total v-cores) configuration and a 10-minute is the minimum recommendation for most production workloads. By setting a small TTL, the Azure IR can maintain a warm cluster that will not incur the several minutes of start time for a cold cluster. You can speed up the execution of your data flows even more by select "Quick re-use" on the Azure IR data flow configurations. For more information, see [Azure integration runtime](concepts-integration-runtime.md).
+A General Purpose compute type with an 8+8 (16 total v-cores) configuration and a 10-minute time to live (TTL) is the minimum recommendation for most production workloads. By setting a small TTL, the Azure IR can maintain a warm cluster that avoids the several minutes of start-up time a cold cluster incurs. For more information, see [Azure integration runtime](concepts-integration-runtime.md).
:::image type="content" source="media/data-flow/ir-new.png" alt-text="Azure Integration Runtime":::
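As a sketch of that recommendation, assuming the `datafactory` CLI extension (flag names vary by extension version, and the factory and IR names here are hypothetical):

```bash
az extension add --name datafactory
# 16-core General Purpose data flow runtime with a 10-minute TTL
az datafactory integration-runtime managed create \
  --factory-name myFactory --resource-group myrg --name DataFlowProdIR \
  --compute-properties '{"location":"AutoResolve","dataFlowProperties":{"computeType":"General","coreCount":16,"timeToLive":10}}'
```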
data-factory Data Factory Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-factory-private-link.md
Finally, you must create the private endpoint in your data factory.
| **Private DNS integration** | | | Integrate with private DNS zone | Leave the default of **Yes**. | | Subscription | Select your subscription. |
- | Private DNS zones | Leave the default of **(New) privatelink.azurewebsites.net**.
+ | Private DNS zones | Leave the default value for both target sub-resources: 1. datafactory: **(New) privatelink.datafactory.azure.net**. 2. portal: **(New) privatelink.adf.azure.com**.|
7. Select **Review + create**.
data-factory Transform Data Spark Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/scripts/transform-data-spark-powershell.md
This sample PowerShell script creates a pipeline that transforms data in the clo
[!INCLUDE [sample-powershell-install](../../../includes/sample-powershell-install-no-ssh-az.md)] ## Prerequisites
-* **Azure Storage account**. Create a python script and an input file, and upload them to the Azure storage. The output from the spark program is stored in this storage account. The on-demand Spark cluster uses the same storage account as its primary storage.
+* **Azure Storage account**. Create a Python script and an input file, and upload them to Azure Storage. The output from the Spark program is stored in this storage account. The on-demand Spark cluster uses the same storage account as its primary storage.
-### Upload python script to your Blob Storage account
-1. Create a python file named **WordCount_Spark.py** with the following content:
+### Upload Python script to your Blob Storage account
+1. Create a Python file named **WordCount_Spark.py** with the following content:
```python import sys
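The upload itself can be scripted; a hedged sketch, with the storage account, container, and paths as placeholders:

```bash
az storage container create --name adftutorial --account-name mystorageaccount
az storage blob upload --account-name mystorageaccount --container-name adftutorial \
  --name spark/script/WordCount_Spark.py --file WordCount_Spark.py
az storage blob upload --account-name mystorageaccount --container-name adftutorial \
  --name spark/inputfiles/input.txt --file input.txt
```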
data-factory Transform Data Using Spark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/transform-data-using-spark.md
The following table describes the JSON properties used in the JSON definition:
| getDebugInfo | Specifies when the Spark log files are copied to the Azure storage used by HDInsight cluster (or) specified by sparkJobLinkedService. Allowed values: None, Always, or Failure. Default value: None. | No | ## Folder structure
-Spark jobs are more extensible than Pig/Hive jobs. For Spark jobs, you can provide multiple dependencies such as jar packages (placed in the java CLASSPATH), python files (placed on the PYTHONPATH), and any other files.
+Spark jobs are more extensible than Pig/Hive jobs. For Spark jobs, you can provide multiple dependencies such as jar packages (placed in the Java CLASSPATH), Python files (placed on the PYTHONPATH), and any other files.
-Create the following folder structure in the Azure Blob storage referenced by the HDInsight linked service. Then, upload dependent files to the appropriate sub folders in the root folder represented by **entryFilePath**. For example, upload python files to the pyFiles subfolder and jar files to the jars subfolder of the root folder. At runtime, the service expects the following folder structure in the Azure Blob storage:
+Create the following folder structure in the Azure Blob storage referenced by the HDInsight linked service. Then, upload dependent files to the appropriate sub folders in the root folder represented by **entryFilePath**. For example, upload Python files to the pyFiles subfolder and jar files to the jars subfolder of the root folder. At runtime, the service expects the following folder structure in the Azure Blob storage:
| Path | Description | Required | Type | | | - | -- | |
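One way to push a local mirror of that layout in a single call, assuming placeholder storage and container names:

```bash
# Local layout, mirrored to the root folder that entryFilePath points into:
#   sparkjob/
#     test.py      <- entry file
#     pyFiles/     <- Python dependencies (PYTHONPATH)
#     jars/        <- jar dependencies (CLASSPATH)
#     files/       <- other files
az storage blob upload-batch --account-name mystorageaccount \
  --destination adfspark --destination-path sparkjob --source ./sparkjob
```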
data-factory Tutorial Transform Data Spark Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-transform-data-spark-powershell.md
If you don't have an Azure subscription, create a [free](https://azure.microsoft
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)]
-* **Azure Storage account**. You create a python script and an input file, and upload them to the Azure storage. The output from the spark program is stored in this storage account. The on-demand Spark cluster uses the same storage account as its primary storage.
+* **Azure Storage account**. You create a Python script and an input file, and upload them to Azure Storage. The output from the Spark program is stored in this storage account. The on-demand Spark cluster uses the same storage account as its primary storage.
* **Azure PowerShell**. Follow the instructions in [How to install and configure Azure PowerShell](/powershell/azure/install-Az-ps).
-### Upload python script to your Blob Storage account
-1. Create a python file named **WordCount_Spark.py** with the following content:
+### Upload Python script to your Blob Storage account
+1. Create a Python file named **WordCount_Spark.py** with the following content:
```python import sys
data-factory Data Factory Json Scripting Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-json-scripting-reference.md
Note the following points:
- The **type** property is set to **HDInsightSpark**. - The **rootPath** is set to **adfspark\\pyFiles** where adfspark is the Azure Blob container and pyFiles is the folder in that container. In this example, the Azure Blob Storage is the one that is associated with the Spark cluster. You can upload the file to a different Azure Storage. If you do so, create an Azure Storage linked service to link that storage account to the data factory. Then, specify the name of the linked service as a value for the **sparkJobLinkedService** property. See Spark Activity properties for details about this property and other properties supported by the Spark Activity.-- The **entryFilePath** is set to the **test.py**, which is the python file.
+- The **entryFilePath** is set to the **test.py**, which is the Python file.
- The **getDebugInfo** property is set to **Always**, which means the log files are always generated (success or failure). > [!IMPORTANT]
databox-online Azure Stack Edge Gpu Deploy Configure Compute https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-configure-compute.md
Before you set up a compute role on your Azure Stack Edge Pro device, make sure
To configure a client to access Kubernetes cluster, you will need the Kubernetes endpoint. Follow these steps to get Kubernetes API endpoint from the local UI of your Azure Stack Edge Pro device.
-1. In the local web UI of your device, go to **Devices** page.
-2. Under the **Device endpoints**, copy the **Kubernetes API service** endpoint. This endpoint is a string in the following format: `https://compute.<device-name>.<DNS-domain>[Kubernetes-cluster-IP-address]`.
+1. In the local web UI of your device, go to the **Device** page.
+2. Under the **Device endpoints**, copy the **Kubernetes API** endpoint. This endpoint is a string in the following format: `https://compute.<device-name>.<DNS-domain>[Kubernetes-cluster-IP-address]`.
![Device page in local UI](./media/azure-stack-edge-gpu-create-kubernetes-cluster/device-kubernetes-endpoint-1.png)
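With the endpoint copied and a config file downloaded (step 4 below), a quick client-side check might look like this; the file name is hypothetical, and the endpoint's DNS name must resolve from the client (for example, through a hosts-file entry):

```bash
export KUBECONFIG=./ase-kubeconfig.yaml   # the config file downloaded from the local UI
kubectl cluster-info
kubectl get nodes
```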
To configure a client to access Kubernetes cluster, you will need the Kubernetes
4. While you are in the local web UI, you can:
- - Go to Kubernetes API, select **advanced settings**, and download an advanced configuration file for Kubernetes.
+ - If you have been provided a key from Microsoft (select users may have a key), go to Kubernetes API, select **Advanced config**, and download an advanced configuration file for Kubernetes.
![Device page in local UI 1](./media/azure-stack-edge-gpu-deploy-configure-compute/download-advanced-config-1.png)-
- If you have been provided a key from Microsoft (select users may have a key), then you can use this config file.
-
+
![Device page in local UI 2](./media/azure-stack-edge-gpu-deploy-configure-compute/download-advanced-config-2.png) - You can also go to **Kubernetes dashboard** endpoint and download an `aseuser` config file.
databox-online Azure Stack Edge Mini R Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-mini-r-overview.md
Previously updated : 10/04/2021 Last updated : 03/09/2022 #Customer intent: As an IT admin, I need to understand what Azure Stack Edge Mini R is and how it works so I can use it to process and transform data before sending to Azure.
Azure Stack Edge Mini R has the following capabilities:
|Accelerated AI inferencing| Enabled by the Intel Movidius Myriad X VPU. | |Wired and wireless | Allows wired and wireless data transfers.| |Data access | Direct data access from Azure Storage Blobs and Azure Files using cloud APIs for additional data processing in the cloud. Local cache on the device is used for fast access of most recently used files.|
-|Disconnected mode| Device and service can be optionally managed via Azure Stack Hub. Deploy, run, manage applications in offline mode. <br> Disconnected mode supports offline upload scenarios.|
+|Disconnected mode| Deploy, run, manage applications in offline mode. <br> Disconnected mode supports offline upload scenarios.|
|Supported file transfer protocols |Supports standard SMB, NFS, and REST protocols for data ingestion. <br> For more information on supported versions, go to [Azure Stack Edge Mini R system requirements](azure-stack-edge-gpu-system-requirements.md).| |Data refresh | Ability to refresh local files with the latest from cloud. <br> For more information, see [Refresh a share on your Azure Stack Edge](azure-stack-edge-gpu-manage-shares.md#refresh-shares).| |Double encryption | Use of self-encrypting drive provides the first layer of encryption. VPN provides the second layer of encryption. BitLocker support to locally encrypt data and secure data transfer to cloud over *https* . <br> For more information, see [Configure VPN on your Azure Stack Edge Pro R device](azure-stack-edge-mini-r-configure-vpn-powershell.md).| |Bandwidth throttling| Throttle to limit bandwidth usage during peak hours. <br> For more information, see [Manage bandwidth schedules on your Azure Stack Edge](azure-stack-edge-gpu-manage-bandwidth-schedules.md).|
-|Easy ordering| Bulk ordering and tracking of the device via Azure Edge Hardware Center (Preview). <br> For more information, see [Order a device via Azure Edge Hardware Center](azure-stack-edge-gpu-deploy-prep.md#create-a-new-resource).|
+|Easy ordering| Bulk ordering and tracking of the device via Azure Edge Hardware Center. <br> For more information, see [Order a device via Azure Edge Hardware Center](azure-stack-edge-gpu-deploy-prep.md#create-a-new-resource).|
## Use cases
For a discussion of considerations for choosing a region for the Azure Stack Edg
## Next steps -- Review the [Azure Stack Edge Mini R system requirements](azure-stack-edge-gpu-system-requirements.md).
+- Review the [Azure Stack Edge Mini R system requirements](azure-stack-edge-gpu-system-requirements.md).
defender-for-cloud Alerts Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/alerts-reference.md
Title: Reference table for all security alerts in Microsoft Defender for Cloud description: This article lists the security alerts visible in Microsoft Defender for Cloud Previously updated : 03/03/2022 Last updated : 03/10/2022 # Security alerts - a reference guide
Microsoft Defender for Containers provides security alerts on the cluster level
| **Container running in privileged mode (Preview)**<br>(K8S.NODE_PrivilegedContainerArtifacts) | Machine logs indicate that a privileged Docker container is running. A privileged container has full access to the host's resources. If compromised, an attacker can use the privileged container to gain access to the host machine. | PrivilegeEscalation, Execution | Low | | **Container with a sensitive volume mount detected**<br>(K8S_SensitiveMount) | Kubernetes audit log analysis detected a new container with a sensitive volume mount. The volume that was detected is a hostPath type which mounts a sensitive file or folder from the node to the container. If the container gets compromised, the attacker can use this mount for gaining access to the node. | Privilege Escalation | Medium | | **CoreDNS modification in Kubernetes detected**<br>(K8S_CoreDnsModification) <sup>[1](#footnote1)</sup> | Kubernetes audit log analysis detected a modification of the CoreDNS configuration. The configuration of CoreDNS can be modified by overriding its configmap. While this activity can be legitimate, if attackers have permissions to modify the configmap, they can change the behavior of the cluster's DNS server and poison it. | Lateral Movement | Low |
-| **Creation of admission webhook configuration detected**<br>(K8S_AdmissionController) | Kubernetes audit log analysis detected a new admission webhook configuration. Kubernetes has two built-in generic admission controllers: MutatingAdmissionWebhook and ValidatingAdmissionWebhook. The behavior of these admission controllers is determined by an admission webhook that the user deploys to the cluster. The usage of such admission controllers can be legitimate, however attackers can use such webhooks for modifying the requests (in case of MutatingAdmissionWebhook) or inspecting the requests and gain sensitive information (in case of ValidatingAdmissionWebhook). | Credential Access, Persistence | Low |
+| **Creation of admission webhook configuration detected**<br>(K8S_AdmissionController) <sup>[2](#footnote2)</sup>| Kubernetes audit log analysis detected a new admission webhook configuration. Kubernetes has two built-in generic admission controllers: MutatingAdmissionWebhook and ValidatingAdmissionWebhook. The behavior of these admission controllers is determined by an admission webhook that the user deploys to the cluster. The usage of such admission controllers can be legitimate; however, attackers can use such webhooks to modify requests (in the case of MutatingAdmissionWebhook) or to inspect requests and gain sensitive information (in the case of ValidatingAdmissionWebhook). | Credential Access, Persistence | Low |
| **Detected file download from a known malicious source (Preview)**<br>(K8S.NODE_SuspectDownload) | Analysis of processes running within a container detected download of a file from a source frequently used to distribute malware. | PrivilegeEscalation, Execution, Exfiltration, Command And Control | Medium | | **Detected Persistence Attempt (Preview)**<br>(K8S.NODE_NewSingleUserModeStartupScript) | Analysis of processes running within a container detected installation of a startup script for single-user mode. It is extremely rare that any legitimate process needs to execute in that mode so it may indicate an attacker has added a malicious process to every run-level to guarantee persistence. | Persistence | Medium | | **Detected suspicious file download (Preview)**<br>(K8S.NODE_SuspectDownloadArtifacts) | Analysis of processes running within a container detected suspicious download of a remote file. | Persistence | Low | | **Detected suspicious use of the nohup command (Preview)**<br>(K8S.NODE_SuspectNohup) | Analysis of processes running within a container detected suspicious use of the nohup command. Attackers have been seen using the command nohup to run hidden files from a temporary directory to allow their executables to run in the background. It is rare to see this command run on hidden files located in a temporary directory. | Persistence, DefenseEvasion | Medium | | **Detected suspicious use of the useradd command (Preview)**<br>(K8S.NODE_SuspectUserAddition) | Analysis of processes running within a container detected suspicious use of the useradd command. | Persistence | Medium |
-| **Digital currency mining container detected**<br>(K8S_MaliciousContainerImage) | Kubernetes audit log analysis detected a container that has an image associated with a digital currency mining tool. | Execution | High |
+| **Digital currency mining container detected**<br>(K8S_MaliciousContainerImage) <sup>[2](#footnote2)</sup> | Kubernetes audit log analysis detected a container that has an image associated with a digital currency mining tool. | Execution | High |
| **Digital currency mining related behavior detected (Preview)**<br>(K8S.NODE_DigitalCurrencyMining) | Analysis of host data detected the execution of a process or command normally associated with digital currency mining. | Execution | High | | **Docker build operation detected on a Kubernetes node (Preview)**<br>(K8S.NODE_ImageBuildOnNode) | Analysis of processes running within a container indicates a build operation of a container image on a Kubernetes node. While this behavior might be legitimate, attackers might build their malicious images locally to avoid detection. | DefenseEvasion | Low | | **Excessive role permissions assigned in Kubernetes cluster (Preview)**<br>(K8S_ServiceAcountPermissionAnomaly) | Analysis of the Kubernetes audit logs detected an excessive permissions role assignment to your cluster. The listed permissions for the assigned roles are uncommon to the specific service account. This detection considers previous role assignments to the same service account across clusters monitored by Azure, volume per permission, and the impact of the specific permission. The anomaly detection model used for this alert takes into account how this permission is used across all clusters monitored by Microsoft Defender for Cloud. | Privilege Escalation | Low | | **Executable found running from a suspicious location (Preview)**<br>(K8S.NODE_SuspectExecutablePath) | Analysis of host data detected an executable file that is running from a location associated with known suspicious files. This executable could either be legitimate activity, or an indication of a compromised host. | Execution | Medium | | **Execution of hidden file (Preview)**<br>(K8S.NODE_ExecuteHiddenFile) | Analysis of host data indicates that a hidden file was executed by the specified user account. | Persistence, DefenseEvasion | Informational | | **Exposed Docker daemon on TCP socket (Preview)**<br>(K8S.NODE_ExposedDocker) | Machine logs indicate that your Docker daemon (dockerd) exposes a TCP socket. By default, Docker configuration, does not use encryption or authentication when a TCP socket is enabled. This enables full access to the Docker daemon, by anyone with access to the relevant port. | Execution, Exploitation | Medium |
-| **Exposed Kubeflow dashboard detected**<br>(K8S_ExposedKubeflow) | The Kubernetes audit log analysis detected exposure of the Istio Ingress by a load balancer in a cluster that runs Kubeflow. This action might expose the Kubeflow dashboard to the internet. If the dashboard is exposed to the internet, attackers can access it and run malicious containers or code on the cluster. Find more details in the following article: https://aka.ms/exposedkubeflow-blog | Initial Access | Medium |
-| **Exposed Kubernetes dashboard detected**<br>(K8S_ExposedDashboard) | Kubernetes audit log analysis detected exposure of the Kubernetes Dashboard by a LoadBalancer service. Exposed dashboard allows an unauthenticated access to the cluster management and poses a security threat. | Initial Access | High |
-| **Exposed Kubernetes service detected**<br>(K8S_ExposedService) | The Kubernetes audit log analysis detected exposure of a service by a load balancer. This service is related to a sensitive application that allows high impact operations in the cluster such as running processes on the node or creating new containers. In some cases, this service doesn't require authentication. If the service doesn't require authentication, exposing it to the internet poses a security risk. | Initial Access | Medium |
+| **Exposed Kubeflow dashboard detected**<br>(K8S_ExposedKubeflow) <sup>[2](#footnote2)</sup> | The Kubernetes audit log analysis detected exposure of the Istio Ingress by a load balancer in a cluster that runs Kubeflow. This action might expose the Kubeflow dashboard to the internet. If the dashboard is exposed to the internet, attackers can access it and run malicious containers or code on the cluster. Find more details in the following article: https://aka.ms/exposedkubeflow-blog | Initial Access | Medium |
+| **Exposed Kubernetes dashboard detected**<br>(K8S_ExposedDashboard) <sup>[2](#footnote2)</sup>| Kubernetes audit log analysis detected exposure of the Kubernetes Dashboard by a LoadBalancer service. Exposed dashboard allows an unauthenticated access to the cluster management and poses a security threat. | Initial Access | High |
+| **Exposed Kubernetes service detected**<br>(K8S_ExposedService) <sup>[2](#footnote2)</sup> | The Kubernetes audit log analysis detected exposure of a service by a load balancer. This service is related to a sensitive application that allows high impact operations in the cluster such as running processes on the node or creating new containers. In some cases, this service doesn't require authentication. If the service doesn't require authentication, exposing it to the internet poses a security risk. | Initial Access | Medium |
| **Exposed Redis service in AKS detected**<br>(K8S_ExposedRedis) | The Kubernetes audit log analysis detected exposure of a Redis service by a load balancer. If the service doesn't require authentication, exposing it to the internet poses a security risk. | Initial Access | Low | | **Indicators associated with DDOS toolkit detected (Preview)**<br>(K8S.NODE_KnownLinuxDDoSToolkit) | Analysis of processes running within a container detected file names that are part of a toolkit associated with malware capable of launching DDoS attacks, opening ports and services, and taking full control over the infected system. This could also possibly be legitimate activity. | Persistence, LateralMovement, Execution, Exploitation | Medium |
-| **K8S API requests from proxy IP address detected**<br>(K8S_TI_Proxy) | Kubernetes audit log analysis detected API requests to your cluster from an IP address that is associated with proxy services, such as TOR. While this behavior can be legitimate, it's often seen in malicious activities, when attackers try to hide their source IP. | Execution | Low |
-| **Kubernetes events deleted**<br>(K8S_DeleteEvents) <sup>[1](#footnote1)</sup> | Defender for Cloud detected that some Kubernetes events have been deleted. Kubernetes events are objects in Kubernetes which contain information about changes in the cluster. Attackers might delete those events for hiding their operations in the cluster. | Defense Evasion | Low |
-| **Kubernetes penetration testing tool detected**<br>(K8S_PenTestToolsKubeHunter) | Kubernetes audit log analysis detected usage of Kubernetes penetration testing tool in the AKS cluster. While this behavior can be legitimate, attackers might use such public tools for malicious purposes. | Execution | Low |
+| **K8S API requests from proxy IP address detected**<br>(K8S_TI_Proxy) <sup>[2](#footnote2)</sup>| Kubernetes audit log analysis detected API requests to your cluster from an IP address that is associated with proxy services, such as TOR. While this behavior can be legitimate, it's often seen in malicious activities, when attackers try to hide their source IP. | Execution | Low |
+| **Kubernetes events deleted**<br>(K8S_DeleteEvents) <sup>[1](#footnote1)</sup> <sup>[2](#footnote2)</sup> | Defender for Cloud detected that some Kubernetes events have been deleted. Kubernetes events are objects in Kubernetes which contain information about changes in the cluster. Attackers might delete those events for hiding their operations in the cluster. | Defense Evasion | Low |
+| **Kubernetes penetration testing tool detected**<br>(K8S_PenTestToolsKubeHunter) <sup>[2](#footnote2)</sup> | Kubernetes audit log analysis detected usage of Kubernetes penetration testing tool in the AKS cluster. While this behavior can be legitimate, attackers might use such public tools for malicious purposes. | Execution | Low |
| **Local host reconnaissance detected (Preview)**<br>(K8S.NODE_LinuxReconnaissance) | Analysis of processes running within a container detected the execution of a command normally associated with common Linux bot reconnaissance. | Discovery | Medium | | **Manipulation of host firewall detected (Preview)**<br>(K8S.NODE_FirewallDisabled) | Analysis of processes running within a container detected possible manipulation of the on-host firewall. Attackers will often disable this to exfiltrate data. | DefenseEvasion, Exfiltration | Medium | | **Microsoft Defender for Cloud test alert (not a threat). (Preview)**<br>(K8S.NODE_EICAR) | This is a test alert generated by Microsoft Defender for Cloud. No further action is needed. | Execution | High | | **MITRE Caldera agent detected (Preview)**<br>(K8S.NODE_MitreCalderaTools) | Analysis of processes running within a container indicate that a suspicious process was running. This is often associated with the MITRE 54ndc47 agent which could be used maliciously to attack other machines. | Persistence, PrivilegeEscalation, DefenseEvasion, CredentialAccess, Discovery, LateralMovement, Execution, Collection, Exfiltration, Command And Control, Probing, Exploitation | Medium |
-| **New container in the kube-system namespace detected**<br>(K8S_KubeSystemContainer) | Kubernetes audit log analysis detected a new container in the kube-system namespace that isn't among the containers that normally run in this namespace. The kube-system namespaces should not contain user resources. Attackers can use this namespace for hiding malicious components. | Persistence | Low |
-| **New high privileges role detected**<br>(K8S_HighPrivilegesRole) | Kubernetes audit log analysis detected a new role with high privileges. A binding to a role with high privileges gives the user\group high privileges in the cluster. Unnecessary privileges might cause privilege escalation in the cluster. | Persistence | Low |
+| **New container in the kube-system namespace detected**<br>(K8S_KubeSystemContainer) <sup>[2](#footnote2)</sup>| Kubernetes audit log analysis detected a new container in the kube-system namespace that isn't among the containers that normally run in this namespace. The kube-system namespace should not contain user resources. Attackers can use this namespace for hiding malicious components. | Persistence | Low |
+| **New high privileges role detected**<br>(K8S_HighPrivilegesRole) <sup>[2](#footnote2)</sup>| Kubernetes audit log analysis detected a new role with high privileges. A binding to a role with high privileges gives the user\group high privileges in the cluster. Unnecessary privileges might cause privilege escalation in the cluster. | Persistence | Low |
| **Possible attack tool detected (Preview)**<br>(K8S.NODE_KnownLinuxAttackTool) | Analysis of processes running within a container indicates a suspicious tool ran. This tool is often associated with malicious users attacking others. | Execution, Collection, Command And Control, Probing | Medium | | **Possible backdoor detected (Preview)**<br>(K8S.NODE_LinuxBackdoorArtifact) | Analysis of processes running within a container detected a suspicious file being downloaded and run. This activity has previously been associated with installation of a backdoor. | Persistence, DefenseEvasion, Execution, Exploitation | Medium | | **Possible command line exploitation attempt (Preview)**<br>(K8S.NODE_ExploitAttempt) | Analysis of processes running within a container detected a possible exploitation attempt against a known vulnerability. | Exploitation | Medium |
Microsoft Defender for Containers provides security alerts on the cluster level
| **Privileged container detected**<br>(K8S_PrivilegedContainer) | Kubernetes audit log analysis detected a new privileged container. A privileged container has access to the node's resources and breaks the isolation between containers. If compromised, an attacker can use the privileged container to gain access to the node. | Privilege Escalation | Low | | **Process associated with digital currency mining detected (Preview)**<br>(K8S.NODE_CryptoCoinMinerArtifacts) | Analysis of processes running within a container detected the execution of a process normally associated with digital currency mining. | Execution, Exploitation | Medium | | **Process seen accessing the SSH authorized keys file in an unusual way (Preview)**<br>(K8S.NODE_SshKeyAccess) | An SSH authorized_keys file was accessed in a method similar to known malware campaigns. This access could signify that an actor is attempting to gain persistent access to a machine. | Unknown | Low |
-| **Role binding to the cluster-admin role detected**<br>(K8S_ClusterAdminBinding) | Kubernetes audit log analysis detected a new binding to the cluster-admin role which gives administrator privileges. Unnecessary administrator privileges might cause privilege escalation in the cluster. | Persistence | Low |
+| **Role binding to the cluster-admin role detected**<br>(K8S_ClusterAdminBinding) <sup>[2](#footnote2)</sup> | Kubernetes audit log analysis detected a new binding to the cluster-admin role which gives administrator privileges. Unnecessary administrator privileges might cause privilege escalation in the cluster. | Persistence | Low |
| **Screenshot taken on host (Preview)**<br>(K8S.NODE_KnownLinuxScreenshotTool) | Analysis of host/device data detected the use of a screen capture tool. Attackers may use these tools to access private data. | Collection | Low | | **Script extension mismatch detected (Preview)**<br>(K8S.NODE_MismatchedScriptFeatures) | Analysis of processes running within a container detected a mismatch between the script interpreter and the extension of the script file provided as input. This has frequently been associated with attacker script executions. | DefenseEvasion | Medium | | **Security-related process termination detected (Preview)**<br>(K8S.NODE_SuspectProcessTermination) | Analysis of processes running within a container detected attempt to terminate processes related to security monitoring on the container. Attackers will often try to terminate such processes using predefined scripts post-compromise. | Persistence | Low |
Microsoft Defender for Containers provides security alerts on the cluster level
| | | | | <sup><a name="footnote1"></a>1</sup>: **Limitations on GKE clusters**: GKE uses a Kubernetes audit policy that doesn't support all alert types. As a result, this security alert, which is based on Kubernetes audit events, is not supported for GKE clusters.
-
+
+<sup><a name="footnote2"></a>2</sup>: This alert is supported on Windows.
+ ## <a name="alerts-sql-db-and-warehouse"></a>Alerts for SQL Database and Azure Synapse Analytics [Further details and notes](defender-for-sql-introduction.md)
defender-for-cloud Defender For Container Registries Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-container-registries-usage.md
Title: How to use Microsoft Defender for container registries
-description: Learn about using Microsoft Defender for container registries to scan Linux images in your Linux-hosted registries
Previously updated : 12/09/2021
+ Title: How to use Defender for Containers
+description: Learn how to use Defender for Containers to scan Linux images in your Linux-hosted registries
Last updated : 03/07/2022
-# Use Microsoft Defender for container registries to scan your images for vulnerabilities
+# Use Defender for Containers to scan your ACR images for vulnerabilities
[!INCLUDE [Banner for top of topics](./includes/banner.md)] This page explains how to use the built-in vulnerability scanner to scan the container images stored in your Azure Resource Manager-based Azure Container Registry.
-When **Microsoft Defender for container registries** is enabled, any image you push to your registry will be scanned immediately. In addition, any image pulled within the last 30 days is also scanned.
+When **Defender for Containers** is enabled, any image you push to your registry will be scanned immediately. In addition, any image pulled within the last 30 days is also scanned.
When the scanner reports vulnerabilities to Defender for Cloud, Defender for Cloud presents the findings and related information as recommendations. In addition, the findings include related information such as remediation steps, relevant CVEs, CVSS scores, and more. You can view the identified vulnerabilities for one or more subscriptions, or for a specific registry.
When the scanner reports vulnerabilities to Defender for Cloud, Defender for Clo
To enable vulnerability scans of images stored in your Azure Resource Manager-based Azure Container Registry:
-1. Enable **Microsoft Defender for container registries** for your subscription. Defender for Cloud is now ready to scan images in your registries.
+1. Enable **Defender for Containers** for your subscription. Defender for Cloud is now ready to scan images in your registries.
>[!NOTE] > This feature is charged per image.
To enable vulnerability scans of images stored in your Azure Resource Manager-ba
1. Follow the steps in the remediation section of this pane.
-1. When you have taken the steps required to remediate the security issue, replace the image in your registry:
+1. When you've taken the steps required to remediate the security issue, replace the image in your registry:
- 1. Push the updated image. This will trigger a scan.
+ 1. Push the updated image to trigger a scan.
1. Check the recommendations page for the recommendation [Container registry images should have vulnerability findings resolved (powered by Qualys)](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/dbd0cb49-b563-45e7-9724-889e799fa648). If the recommendation still appears and the image you've handled still appears in the list of vulnerable images, check the remediation steps again.
 - 1. When you are sure the updated image has been pushed, scanned, and is no longer appearing in the recommendation, delete the "old" vulnerable image from your registry.
 + 1. When you're sure the updated image has been pushed, scanned, and is no longer appearing in the recommendation, delete the "old" vulnerable image from your registry.
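For reference, replacing a vulnerable image usually reduces to a rebuild and push; the registry, repository, and tag below are placeholders:

```bash
az acr login --name myregistry
docker build -t myregistry.azurecr.io/myapp:1.1-patched .
docker push myregistry.azurecr.io/myapp:1.1-patched   # the push triggers a fresh scan
```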
## Disable specific findings > [!NOTE] > [!INCLUDE [Legalese](../../includes/defender-for-cloud-preview-legal-text.md)]
-If you have an organizational need to ignore a finding, rather than remediate it, you can optionally disable it. Disabled findings don't impact your secure score or generate unwanted noise.
+If you have an organizational need to ignore a finding, rather than remediate it, you can optionally disable it. Disabled findings don't affect your secure score or generate unwanted noise.
When a finding matches the criteria you've defined in your disable rules, it won't appear in the list of findings. Typical scenarios include:
defender-for-cloud Defender For Containers Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-introduction.md
Title: Container security with Microsoft Defender for Cloud description: Learn about Microsoft Defender for Containers Previously updated : 02/28/2022 Last updated : 03/09/2022 # Overview of Microsoft Defender for Containers
Microsoft Defender for Containers is the cloud-native solution for securing your
On this page, you'll learn how you can use Defender for Containers to improve, monitor, and maintain the security of your clusters, containers, and their applications.
-## Availability
+## Microsoft Defender for Containers plan availability
| Aspect | Details | |--|--|
-| Release state: | General availability (GA)<br>Where indicated, specific features are in preview. [!INCLUDE [Legalese](../../includes/defender-for-cloud-preview-legal-text.md)] |
+| Release state: | General availability (GA)<br> Certain features are in preview; for a full list, see the [availability](supported-machines-endpoint-solutions-clouds-containers.md) section. |
+| Feature availability | Refer to the [availability](supported-machines-endpoint-solutions-clouds-containers.md) section for additional information on feature release state and availability.|
| Pricing: | **Microsoft Defender for Containers** is billed as shown on the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/) |
-| Registries and images: | **Supported**<br> • Linux images in Azure Container Registry (ACR) registries accessible from the public internet with shell access<br> • Private registries with access granted to [Trusted Services](../container-registry/allow-access-trusted-services.md#trusted-services)<br> • [ACR registries protected with Azure Private Link](../container-registry/container-registry-private-link.md)<br><br>**Unsupported**<br> • Windows images<br> • Super-minimalist images such as [Docker scratch](https://hub.docker.com/_/scratch/) images<br> • "Distroless" images that only contain an application and its runtime dependencies without a package manager, shell, or OS<br> • Images with [Open Container Initiative (OCI) Image Format Specification](https://github.com/opencontainers/image-spec/blob/master/spec.md) |
-| Kubernetes distributions and configurations: | **Supported**<br> • Any Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters<br><br>**Unsupported**<br> • Any [taints](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/) applied to your nodes *might* disrupt the configuration of Defender for Containers<br>• The AKS Defender profile doesn't support AKS clusters that don't have RBAC role enabled.<br><br>**Tested on**<br> • [Azure Kubernetes Service](../aks/intro-kubernetes.md)<br> • [Amazon Elastic Kubernetes Service (EKS)](https://aws.amazon.com/eks/)<br> • [Google GKE Standard clusters](https://cloud.google.com/kubernetes-engine/) <br> • [Azure Kubernetes Service on Azure Stack HCI](/azure-stack/aks-hci/overview)<br> • [Kubernetes](https://kubernetes.io/docs/home/)<br> • [AKS Engine](https://github.com/Azure/aks-engine)<br> • [Azure Red Hat OpenShift](https://azure.microsoft.com/services/openshift/)<br> • [Red Hat OpenShift](https://www.openshift.com/learn/topics/kubernetes/) (version 4.6 or newer)<br> • [VMware Tanzu Kubernetes Grid](https://tanzu.vmware.com/kubernetes-grid)<br> • [Rancher Kubernetes Engine](https://rancher.com/docs/rke/latest/en/) |
| Required roles and permissions: | • To auto provision the required components, [Contributor](../role-based-access-control/built-in-roles.md#contributor), [Log Analytics Contributor](../role-based-access-control/built-in-roles.md#log-analytics-contributor), or [Azure Kubernetes Service Contributor Role](../role-based-access-control/built-in-roles.md#azure-kubernetes-service-contributor-role)<br> • **Security admin** can dismiss alerts<br> • **Security reader** can view vulnerability assessment findings<br> See also [Azure Container Registry roles and permissions](../container-registry/container-registry-roles.md) |
-| Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: National (Azure Government, Azure China 21Vianet) (Except for preview features)<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Connected AWS accounts (Preview) <br> :::image type="icon" source="./media/icons/yes-icon.png"::: Connected GCP projects|
+| Clouds: | **Azure**:<br>:::image type="icon" source="./medi#defender-for-containers-feature-availability). |
| | | ## What are the benefits of Microsoft Defender for Containers?
The **Azure Policy add-on for Kubernetes** collects cluster and workload configu
| azuredefender-collector-misc-* | kube-system | [Deployment](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/) | A set of containers that focus on collecting inventory and security events from the Kubernetes environment that aren't bound to a specific node. | N/A | memory: 64Mi <br> <br>cpu: 60m | No | | azuredefender-publisher-ds-* | kube-system | [DaemonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/) | Publish the collected data to Microsoft Defender for Containers' backend service where the data will be processed and analyzed. | N/A | memory: 64Mi  <br> <br> cpu: 60m | HTTPS 443 <br> <br> Learn more about the [outbound access prerequisites](../aks/limit-egress-traffic.md#microsoft-defender-for-containers) |
-\* resource limits are not configurable
+\* resource limits aren't configurable
### [**On-premises / IaaS (Arc)**](#tab/defender-for-container-arch-arc)
Defender for Containers includes an integrated vulnerability scanner for scannin
There are four triggers for an image scan: -- **On push** - Whenever an image is pushed to your registry, Defender for container registries automatically scans that image. To trigger the scan of an image, push it to your repository.
+- **On push** - Whenever an image is pushed to your registry, Defender for Containers automatically scans that image. To trigger the scan of an image, push it to your repository.
- **Recently pulled** - Since new vulnerabilities are discovered every day, **Microsoft Defender for Containers** also scans, on a weekly basis, any image that has been pulled within the last 30 days. There's no extra charge for these rescans; as mentioned above, you're billed once per image.
There are four triggers for an image scan:
- **Continuous scan**- This trigger has two modes:
- - A Continuous scan based on an image pull. This scan is performed every 7 days after an image was pulled, and only for 30 days after the image was pulled. This mode doesn't require the security profile, or extension.
+ - A Continuous scan based on an image pull. This scan is performed every seven days after an image was pulled, and only for 30 days after the image was pulled. This mode doesn't require the security profile, or extension.
- - (Preview) Continuous scan for running images. This scan is performed every 7 days for as long as the image runs. This mode runs instead of the above mode when the Defender profile, or extension is running on the cluster.
+ - (Preview) Continuous scan for running images. This scan is performed every seven days for as long as the image runs. This mode runs instead of the above mode when the Defender profile, or extension is running on the cluster.
This scan typically completes within 2 minutes, but it might take up to 40 minutes. For every vulnerability identified, Defender for Cloud provides actionable recommendations, along with a severity classification, and guidance for how to remediate the issue.
Defender for Cloud filters, and classifies findings from the scanner. When an im
### View vulnerabilities for running images
-Defender for Containers expands on the registry scanning features of the Defender for container registries plan by introducing the **preview feature** of run-time visibility of vulnerabilities powered by the Defender profile, or extension.
+Defender for Containers expands on the registry scanning features by introducing the **preview feature** of run-time visibility of vulnerabilities powered by the Defender profile, or extension.
+
+> [!NOTE]
+> There's no Defender profile for Windows; it's only available on Linux.
The new recommendation, **Running container images should have vulnerability findings resolved**, only shows vulnerabilities for running images, and relies on the Defender security profile, or extension to discover which images are currently running. This recommendation groups running images that have vulnerabilities, and provides details about the issues discovered, and how to remediate them. The Defender profile, or extension is used to gain visibility into vulnerable containers that are active.
-This recommendation shows running images, and their vulnerabilities based on ACR image image. Images that are deployed from a non ACR registry, will not be scanned, and will appear under the Not applicable tab.
+This recommendation shows running images and their vulnerabilities based on the ACR image. Images that are deployed from a non-ACR registry won't be scanned and will appear under the Not applicable tab.
:::image type="content" source="media/defender-for-containers/running-image-vulnerabilities-recommendation.png" alt-text="Screenshot showing where the recommendation is viewable" lightbox="media/defender-for-containers/running-image-vulnerabilities-recommendation-expanded.png":::
The full list of available alerts can be found in the [Reference table of alerts
## FAQ - Defender for Containers -- [What happens to subscriptions with Microsoft Defender for Kubernetes or Microsoft Defender for container registries enabled?](#what-happens-to-subscriptions-with-microsoft-defender-for-kubernetes-or-microsoft-defender-for-container-registries-enabled)
+- [What happens to subscriptions with Microsoft Defender for Kubernetes or Microsoft Defender for Containers enabled?](#what-happens-to-subscriptions-with-microsoft-defender-for-kubernetes-or-microsoft-defender-for-containers-enabled)
- [Is Defender for Containers a mandatory upgrade?](#is-defender-for-containers-a-mandatory-upgrade) - [Does the new plan reflect a price increase?](#does-the-new-plan-reflect-a-price-increase) - [What are the options to enable the new plan at scale?](#what-are-the-options-to-enable-the-new-plan-at-scale)
-### What happens to subscriptions with Microsoft Defender for Kubernetes or Microsoft Defender for container registries enabled?
+### What happens to subscriptions with Microsoft Defender for Kubernetes or Microsoft Defender for Containers enabled?
Subscriptions that already have one of these plans enabled can continue to benefit from it.
If you haven't enabled them yet, or create a new subscription, these plans can n
### Is Defender for Containers a mandatory upgrade?
-No. Subscriptions that have either Microsoft Defender for Kubernetes or Microsoft Defender for container registries enabled don't need to be upgraded to the new Microsoft Defender for Containers plan. However, they won't benefit from the new and improved capabilities and theyΓÇÖll have an upgrade icon shown alongside them in the Azure portal.
+No. Subscriptions that have either Microsoft Defender for Kubernetes or Microsoft Defender for container registries enabled don't need to be upgraded to the new Microsoft Defender for Containers plan. However, they won't benefit from the new and improved capabilities and they'll have an upgrade icon shown alongside them in the Azure portal.
### Does the new plan reflect a price increase? No. There's no direct price increase. The new comprehensive Container security plan combines Kubernetes protection and container registry image scanning, and removes the previous dependency on the (paid) Defender for Servers plan.
defender-for-cloud Defender For Servers Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-servers-introduction.md
Title: Microsoft Defender for servers - the benefits and features description: Learn about the benefits and features of Microsoft Defender for servers. Previously updated : 11/09/2021 Last updated : 03/08/2022 # Introduction to Microsoft Defender for servers
To protect machines in hybrid and multi-cloud environments, Defender for Cloud u
- [Connect your AWS accounts to Microsoft Defender for Cloud](quickstart-onboard-aws.md) > [!TIP]
-> For details of which Defender for servers features are relevant for machines running on other cloud environments, see [Supported features for virtual machines and servers](supported-machines-endpoint-solutions-clouds.md?tabs=features-windows#supported-features-for-virtual-machines-and-servers-).
+> For details of which Defender for servers features are relevant for machines running on other cloud environments, see [Supported features for virtual machines and servers](supported-machines-endpoint-solutions-clouds-servers.md?tabs=features-windows#supported-features-for-virtual-machines-and-servers-).
## What are the benefits of Microsoft Defender for servers?
defender-for-cloud Endpoint Protection Recommendations Technical https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/endpoint-protection-recommendations-technical.md
Title: Endpoint protection recommendations in Microsoft Defender for Cloud description: How the endpoint protection solutions are discovered and identified as healthy. Previously updated : 12/14/2021 Last updated : 03/08/2022 # Endpoint protection assessment and recommendations in Microsoft Defender for Cloud [!INCLUDE [Banner for top of topics](./includes/banner.md)]
-Microsoft Defender for Cloud provides health assessments of [supported](supported-machines-endpoint-solutions-clouds.md#endpoint-supported) versions of Endpoint protection solutions. This article explains the scenarios that lead Defender for Cloud to generate the following two recommendations:
+Microsoft Defender for Cloud provides health assessments of [supported](supported-machines-endpoint-solutions-clouds-servers.md#endpoint-supported) versions of Endpoint protection solutions. This article explains the scenarios that lead Defender for Cloud to generate the following two recommendations:
- [Endpoint protection should be installed on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/4fb67663-9ab9-475d-b026-8c544cced439) - [Endpoint protection health issues should be resolved on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/37a3689a-818e-4a0e-82ac-b1392b9bb000)
defender-for-cloud Kubernetes Workload Protections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/kubernetes-workload-protections.md
Title: Workload protections for your Kubernetes workloads description: Learn how to use Microsoft Defender for Cloud's set of Kubernetes workload protection security recommendations Previously updated : 02/28/2022 Last updated : 03/08/2022 # Protect your Kubernetes workloads
This page describes how to use Microsoft Defender for Cloud's set of security re
> [!TIP] > For a list of the security recommendations that might appear for Kubernetes clusters and nodes, see the [Container recommendations](recommendations-reference.md#container-recommendations) of the recommendations reference table.
-## Availability
-
-| Aspect | Details |
-|--|--|
-| Release state: | AKS - General availability (GA) <br> Arc enabled Kubernetes - Preview |
-| Pricing: | Free for AKS workloads<br>For Azure Arc-enabled Kubernetes, it's billed according to the Microsoft Defender for Containers plan |
-| Required roles and permissions: | **Owner** or **Security admin** to edit an assignment<br>**Reader** to view the recommendations |
-| Environment requirements: | Kubernetes v1.14 (or newer) is required<br>No PodSecurityPolicy resource (old PSP model) on the clusters<br>Windows nodes are not supported |
-| Azure Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: National (Azure Government, Azure China 21Vianet) |
-| Non-Azure Clouds, and On-prem: | supported via Arc enabled Kubernetes. |
-| | |
- ## Set up your workload protection Microsoft Defender for Cloud includes a bundle of recommendations that are available once you've installed the **Azure Policy add-on for Kubernetes or extensions**.
defender-for-cloud Os Coverage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/os-coverage.md
Defender for Cloud depends on the [Log Analytics agent](../azure-monitor/agents/
Also ensure your Log Analytics agent is [properly configured to send data to Defender for Cloud](enable-data-collection.md#manual-agent)
-To learn more about the specific Defender for Cloud features available on Windows and Linux, see [Feature coverage for machines](supported-machines-endpoint-solutions-clouds.md).
+To learn more about the specific Defender for Cloud features available on Windows and Linux, see [Feature coverage for machines](supported-machines-endpoint-solutions-clouds-containers.md).
> [!NOTE] > Even though **Microsoft Defender for servers** is designed to protect servers, most of its features are supported for Windows 10 machines. One feature that isn't currently supported is [Defender for Cloud's integrated EDR solution: Microsoft Defender for Endpoint](integration-defender-for-endpoint.md).
defender-for-cloud Quickstart Onboard Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-aws.md
Title: Connect your AWS account to Microsoft Defender for Cloud description: Defend your AWS resources with Microsoft Defender for Cloud Previously updated : 02/27/2022 Last updated : 03/10/2022 zone_pivot_groups: connect-aws-accounts
To protect your AWS-based resources, you can connect an account with one of two
- **Defender for Cloud's CSPM features** extend to your AWS resources. This agentless plan assesses your AWS resources according to AWS-specific security recommendations and these are included in your secure score. The resources will also be assessed for compliance with built-in standards specific to AWS (AWS CIS, AWS PCI DSS, and AWS Foundational Security Best Practices). Defender for Cloud's [asset inventory page](asset-inventory.md) is a multi-cloud enabled feature helping you manage your AWS resources alongside your Azure resources. - **Microsoft Defender for Containers** extends Defender for Cloud's container threat detection and advanced defenses to your **Amazon EKS clusters**.
- - **Microsoft Defender for servers** brings threat detection and advanced defenses to your Windows and Linux EC2 instances. This plan includes the integrated license for Microsoft Defender for Endpoint, security baselines and OS level assessments, vulnerability assessment scanning, adaptive application controls (AAC), file integrity monitoring (FIM), and more. You can view the full list of available features in the [Supported features for virtual machines and servers](supported-machines-endpoint-solutions-clouds.md?tabs=tab/features-multi-cloud) table.
+ - **Microsoft Defender for servers** brings threat detection and advanced defenses to your Windows and Linux EC2 instances. This plan includes the integrated license for Microsoft Defender for Endpoint, security baselines and OS level assessments, vulnerability assessment scanning, adaptive application controls (AAC), file integrity monitoring (FIM), and more. You can view the full list of available features in the [Supported features for virtual machines and servers](supported-machines-endpoint-solutions-clouds-servers.md?tabs=tab/features-multi-cloud) table.
For a reference list of all the recommendations Defender for Cloud can provide for AWS resources, see [Security recommendations for AWS resources - a reference guide](recommendations-reference-aws.md).
This screenshot shows AWS accounts displayed in Defender for Cloud's [overview d
|-|:-| |Release state:|Preview.<br>[!INCLUDE [Legalese](../../includes/defender-for-cloud-preview-legal-text.md)]| |Pricing:|The **CSPM plan** is free.<br>The **[Defender for Containers](defender-for-containers-introduction.md)** plan is free during the preview. After which, it will be billed for AWS at the same price as for Azure resources.<br>For every AWS machine connected to Azure with [Azure Arc-enabled servers](../azure-arc/servers/overview.md), the **Defender for servers** plan is billed at the same price as the [Microsoft Defender for servers](defender-for-servers-introduction.md) plan for Azure machines. If an AWS EC2 doesn't have the Azure Arc agent deployed, you won't be charged for that machine.|
-|Required roles and permissions:|**Owner** on the relevant Azure subscription<br>**Contributor** can also connect an AWS account if an owner provides the service principal details (required for the Defender for servers plan)|
+|Required roles and permissions:|**Contributor** permission for the relevant Azure subscription.|
|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: National (Azure Government, Azure China 21Vianet)| |||
This screenshot shows AWS accounts displayed in Defender for Cloud's [overview d
- The resource capacity to create a new SQS queue, Kinesis Fire Hose delivery stream, and S3 bucket in the cluster's region. - **To enable the Defender for servers plan**, you'll need:
+
+ - Microsoft Defender for servers enabled on your subscription. Learn how to enable plans in the [Enable enhanced security features](enable-enhanced-security.md) article, or see the CLI sketch after this list.
+
- An active AWS account, with EC2 instances.
+
- Azure Arc for servers installed on your EC2 instances. - (Recommended) Use the auto provisioning process to install Azure Arc on all of your existing, and future EC2 instances managed by AWS Systems Manager (SSM) and using the SSM agent. Some Amazon Machine Images (AMIs) already have the SSM agent pre-installed. If that is the case, their AMI's are listed in [AMIs with SSM Agent preinstalled](https://docs.aws.amazon.com/systems-manager/latest/userguide/ssm-agent-technical-details.html#ami-preinstalled-agent). If your EC2 instances don't have the SSM Agent, you will need to install it using either of the following relevant instructions from Amazon: - [Install SSM Agent for a hybrid environment (Windows)](https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-install-managed-win.html) - [Install SSM Agent for a hybrid environment (Linux)](https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-install-managed-linux.html)
- - To manually install Azure Arc on your existing and future EC2 instances, follow the instructions in the [EC2 instances should be connected to Azure Arc](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/231dee23-84db-44d2-bd9d-c32fbcfb42a3) recommendation.
+ > [!NOTE]
+ > To enable the Azure Arc auto-provisioning, you'll need **Owner** permission on the relevant Azure subscription.
+
+ - To manually install Azure Arc on your existing and future EC2 instances, follow the instructions in the [EC2 instances should be connected to Azure Arc](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/231dee23-84db-44d2-bd9d-c32fbcfb42a3) recommendation.
+
- Additional extensions should be enabled on the Arc-connected machines. These extensions are currently configured in the subscription level. It means that all the multicloud accounts and projects (from both AWS and GCP) under the same subscription will inherit the subscription settings with regards to these components. - Microsoft Defender for Endpoint - VA solution (TVM/ Qualys)
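
As a sketch of the first Defender for servers prerequisite in the list above, the plan can be enabled on a subscription with the Azure CLI. `VirtualMachines` is the pricing name that Defender for Cloud uses for this plan:

```azurecli
# Enable the Microsoft Defender for servers plan on the active subscription.
az security pricing create --name VirtualMachines --tier Standard

# Confirm the plan's pricing tier.
az security pricing show --name VirtualMachines
```
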
defender-for-cloud Quickstart Onboard Gcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-gcp.md
To protect your GCP-based resources, you can connect an account in two different
- **Environment settings page** (Recommended) - This page provides the onboarding experience (including auto provisioning). This mechanism also extends Defender for Cloud's enhanced security features to your GCP resources: - **Defender for Cloud's CSPM features** extends to your GCP resources. This agentless plan assesses your GCP resources according to GCP-specific security recommendations and these are included in your secure score. The resources will also be assessed for compliance with built-in standards specific to GCP. Defender for Cloud's [asset inventory page](asset-inventory.md) is a multi-cloud enabled feature helping you manage your GCP resources alongside your Azure resources.
- - **Microsoft Defender for servers** brings threat detection and advanced defenses to your GCP VM instances. This plan includes the integrated license for Microsoft Defender for Endpoint, security baselines and OS level assessments, vulnerability assessment scanning, adaptive application controls (AAC), file integrity monitoring (FIM), and more. You can view the full list of available features in the [Supported features for virtual machines and servers table](supported-machines-endpoint-solutions-clouds.md)
+ - **Microsoft Defender for servers** brings threat detection and advanced defenses to your GCP VM instances. This plan includes the integrated license for Microsoft Defender for Endpoint, security baselines and OS level assessments, vulnerability assessment scanning, adaptive application controls (AAC), file integrity monitoring (FIM), and more. You can view the full list of available features in the [Supported features for virtual machines and servers table](supported-machines-endpoint-solutions-clouds-servers.md)
- **Microsoft Defender for Containers** - Microsoft Defender for Containers brings threat detection and advanced defenses to your Google's Kubernetes Engine (GKE) Standard clusters. This plan includes Kubernetes threat protection, behavioral analytics, Kubernetes best practices, admission control recommendations and more. :::image type="content" source="./media/quickstart-onboard-gcp/gcp-account-in-overview.png" alt-text="Screenshot of GCP projects shown in Microsoft Defender for Cloud's overview dashboard." lightbox="./media/quickstart-onboard-gcp/gcp-account-in-overview.png":::
defender-for-cloud Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes-archive.md
Title: Archive of what's new in Microsoft Defender for Cloud description: A description of what's new and changed in Microsoft Defender for Cloud from six months ago and earlier. Previously updated : 03/02/2022 Last updated : 03/08/2022 # Archive for what's new in Defender for Cloud?
We've added two **preview** recommendations to deploy and maintain the endpoint
|Recommendation |Description |Severity | |||| |[Endpoint protection should be installed on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/4fb67663-9ab9-475d-b026-8c544cced439) |To protect your machines from threats and vulnerabilities, install a supported endpoint protection solution. <br> <a href="/azure/defender-for-cloud/endpoint-protection-recommendations-technical">Learn more about how Endpoint Protection for machines is evaluated.</a><br />(Related policy: [Monitor missing Endpoint Protection in Azure Security Center](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2faf6cd1bd-1635-48cb-bde7-5b15693900b9)) |High |
-|[Endpoint protection health issues should be resolved on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/37a3689a-818e-4a0e-82ac-b1392b9bb000) |Resolve endpoint protection health issues on your virtual machines to protect them from latest threats and vulnerabilities. Azure Security Center supported endpoint protection solutions are documented [here](./supported-machines-endpoint-solutions-clouds.md?tabs=features-windows). Endpoint protection assessment is documented <a href='/azure/defender-for-cloud/endpoint-protection-recommendations-technical'>here</a>.<br />(Related policy: [Monitor missing Endpoint Protection in Azure Security Center](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2faf6cd1bd-1635-48cb-bde7-5b15693900b9)) |Medium |
+|[Endpoint protection health issues should be resolved on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/37a3689a-818e-4a0e-82ac-b1392b9bb000) |Resolve endpoint protection health issues on your virtual machines to protect them from latest threats and vulnerabilities. Azure Security Center supported endpoint protection solutions are documented [here](./supported-machines-endpoint-solutions-clouds-servers.md?tabs=features-windows). Endpoint protection assessment is documented <a href='/azure/defender-for-cloud/endpoint-protection-recommendations-technical'>here</a>.<br />(Related policy: [Monitor missing Endpoint Protection in Azure Security Center](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2faf6cd1bd-1635-48cb-bde7-5b15693900b9)) |Medium |
||| > [!NOTE]
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
Title: Release notes for Microsoft Defender for Cloud description: A description of what's new and changed in Microsoft Defender for Cloud Previously updated : 03/03/2022 Last updated : 03/10/2022 # What's new in Microsoft Defender for Cloud?
To learn about *planned* changes that are coming soon to Defender for Cloud, see
Updates in March include: - [Deprecated the recommendations to install the network traffic data collection agent](#deprecated-the-recommendations-to-install-the-network-traffic-data-collection-agent)-
+- [Defender for Containers can now scan for vulnerabilities in Windows images (preview)](#defender-for-containers-can-now-scan-for-vulnerabilities-in-windows-images-preview)
+- [New alert for Microsoft Defender for Storage (preview)](#new-alert-for-microsoft-defender-for-storage-preview)
+- [Configure email notifications settings from an alert](#configure-email-notifications-settings-from-an-alert)
+
### Deprecated the recommendations to install the network traffic data collection agent Changes in our roadmap and priorities have removed the need for the network traffic data collection agent. Consequently, the following two recommendations and their related policies were deprecated.
Changes in our roadmap and priorities have removed the need for the network traf
|[Network traffic data collection agent should be installed on Windows virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/24d8af06-d441-40b4-a49c-311421aa9f58) |Defender for Cloud uses the Microsoft Dependency agent to collect network traffic data from your Azure virtual machines to enable advanced network protection features such as traffic visualization on the network map, network hardening recommendations, and specific network threats. |Medium | |||
+### Defender for Containers can now scan for vulnerabilities in Windows images (preview)
+
+Defender for Containers' image scan now supports Windows images that are hosted in Azure Container Registry. This feature is free while in preview, and will incur a cost when it becomes generally available.
+
+Learn more in [Use Microsoft Defender for Containers to scan your images for vulnerabilities](defender-for-container-registries-usage.md).
+
+### New alert for Microsoft Defender for Storage (preview)
+
+To expand the threat protections provided by Microsoft Defender for Storage, we've added a new preview alert.
+
+Threat actors use applications and tools to discover and access storage accounts. Microsoft Defender for Storage detects these applications and tools so that you can block them and remediate your posture.
+
+This preview alert is called `Access from a suspicious application`. The alert is relevant to Azure Blob Storage and ADLS Gen2 only; a CLI sketch for listing this alert type follows the table below.
+
+| Alert (alert type) | Description | MITRE tactic | Severity |
+|--|--|--|--|
+| **PREVIEW - Access from a suspicious application**<br>(Storage.Blob_SuspiciousApp) | Indicates that a suspicious application has successfully accessed a container of a storage account with authentication.<br>This might indicate that an attacker has obtained the credentials necessary to access the account, and is exploiting it. This could also be an indication of a penetration test carried out in your organization.<br>Applies to: Azure Blob Storage, Azure Data Lake Storage Gen2 | Initial Access | Medium |
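+
+If the subscription's alerts are queried programmatically, the following minimal Azure CLI sketch lists only this alert type. The `alertType` property name in the JMESPath filter is an assumption based on the alert type shown in the table above:
+
+```azurecli
+# List security alerts and keep only the new preview storage alert type.
+az security alert list --query "[?alertType=='Storage.Blob_SuspiciousApp']"
+```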
+
+### Configure email notifications settings from an alert
+
+A new section has been added to the alert user interface (UI) that lets you view and edit who will receive email notifications for alerts that are triggered on the current subscription.
++
+Learn how to [Configure email notifications for security alerts](configure-email-notifications.md).
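+
+The same notification settings can also be managed with the Azure CLI. A minimal sketch follows; the contact name `default1` and the email address are placeholders:
+
+```azurecli
+# Configure who receives security alert emails for the subscription.
+az security contact create --name "default1" \
+    --email "secops@contoso.com" \
+    --alert-notifications "on" \
+    --alerts-admins "on"
+```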
+ ## February 2022 Updates in February include:
The new automated onboarding of GCP environments allows you to protect GCP workl
- **Microsoft Defender for servers** brings threat detection and advanced defenses to your GCP compute instances. This plan includes the integrated license for Microsoft Defender for Endpoint, vulnerability assessment scanning, and more.
- For a full list of available features, see [Supported features for virtual machines and servers](supported-machines-endpoint-solutions-clouds.md). Automatic onboarding capabilities will allow you to easily connect any existing, and new compute instances discovered in your environment.
+ For a full list of available features, see [Supported features for virtual machines and servers](supported-machines-endpoint-solutions-clouds-servers.md). Automatic onboarding capabilities will allow you to easily connect any existing, and new compute instances discovered in your environment.
Learn how to protect, and [connect your GCP projects](quickstart-onboard-gcp.md) with Microsoft Defender for Cloud.
defender-for-cloud Supported Machines Endpoint Solutions Clouds Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/supported-machines-endpoint-solutions-clouds-containers.md
+
+ Title: Microsoft Defender for Containers feature availability
+description: Learn about the availability of Microsoft Defender for Cloud containers features according to OS, machine type, and cloud deployment.
+ Last updated : 03/08/2022+++
+# Defender for Containers feature availability
++
+The **tabs** below show the features of Microsoft Defender for Containers that are available for each environment.
+
+## Supported features by environment
+
+### [**Azure (AKS)**](#tab/azure-aks)
+
+| Domain | Feature | Supported Resources | Release state <sup>[1](#footnote1)</sup> | Windows support | Agentless/Agent-based | Pricing Tier | Azure clouds availability |
+|--|--|--|--|--|--|--|--|
+| Compliance | Docker CIS | VMs | GA | X | Log Analytics agent | Defender for Servers | |
+| VA | Registry scan | ACR, Private ACR | GA | ✓ (Preview) | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
+| VA | View vulnerabilities for running images | AKS | Preview | X | Defender profile | Defender for Containers | Commercial clouds |
+| Hardening | Control plane recommendations | ACR, AKS | GA | ✓ | Agentless | Free | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
+| Hardening | Kubernetes data plane recommendations | AKS | GA | X | Azure Policy | Free | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
+| Runtime Threat Detection | Agentless threat detection | AKS | GA | ✓ | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
+| Runtime Threat Detection | Agent-based threat detection | AKS | Preview | X | Defender profile | Defender for Containers | Commercial clouds |
+| Discovery and Auto provisioning | Discovery of uncovered/unprotected clusters | AKS | GA | ✓ | Agentless | Free | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
+| Discovery and Auto provisioning | Auditlog collection for agentless threat detection | AKS | GA | ✓ | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
+| Discovery and Auto provisioning | Auto provisioning of Defender profile | AKS | GA | X | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
+| Discovery and Auto provisioning | Auto provisioning of Azure policy add-on | AKS | GA | X | Agentless | Free | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
+
+<sup><a name="footnote1"></a>1</sup> Specific features are in preview. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+### [**AWS (EKS)**](#tab/aws-eks)
+
+| Domain | Feature | Supported Resources | Release state <sup>[1](#footnote1)</sup> | Windows support | Agentless/Agent-based | Pricing tier |
+|--|--| -- | -- | -- | -- | --|
+| Compliance | Docker CIS | EC2 | Preview | X | Log Analytics agent | Defender for Servers |
+| VA | Registry scan | N/A | - | - | - | - |
+| VA | View vulnerabilities for running images | N/A | - | - | - | - |
+| Hardening | Control plane recommendations | N/A | - | - | - | - |
+| Hardening | Kubernetes data plane recommendations | EKS | Preview | X | Azure Policy extension | Defender for Containers |
+| Runtime Threat Detection | Agentless threat detection | EKS | Preview | X | Agentless | Defender for Containers |
+| Runtime Threat Detection | Agent-based threat detection | EKS | Preview | X | Defender extension | Defender for Containers |
+| Discovery and Auto provisioning | Discovery of uncovered/unprotected clusters | EKS | Preview | X | Agentless | Free |
+| Discovery and Auto provisioning | Auditlog collection for agentless threat detection | EKS | Preview | X | Agentless | Defender for Containers |
+| Discovery and Auto provisioning | Auto provisioning of Defender extension | N/A | N/A | X | - | - |
+| Discovery and Auto provisioning | Auto provisioning of Azure policy extension | N/A | N/A | X | - | - |
+
+<sup><a name="footnote1"></a>1</sup> Specific features are in preview. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+### [**GCP (GKE)**](#tab/gcp-gke)
+
+| Domain | Feature | Supported Resources | Release state <sup>[1](#footnote1)</sup> | Windows support | Agentless/Agent-based | Pricing tier |
+|--|--| -- | -- | -- | -- | --|
+| Compliance | Docker CIS | GCP VMs | Preview | X | Log Analytics agent | Defender for Servers |
+| VA | Registry scan | N/A | - | - | - | - |
+| VA | View vulnerabilities for running images | N/A | - | - | - | - |
+| Hardening | Control plane recommendations | N/A | - | - | - | - |
+| Hardening | Kubernetes data plane recommendations | GKE | Preview | X | Azure Policy extension | Defender for Containers |
+| Runtime Threat Detection | Agentless threat detection | GKE | Preview | X | Agentless | Defender for Containers |
+| Runtime Threat Detection | Agent-based threat detection | GKE | Preview | X | Defender extension | Defender for Containers |
+| Discovery and Auto provisioning | Discovery of uncovered/unprotected clusters | GKE | Preview | X | Agentless | Free |
+| Discovery and Auto provisioning | Auditlog collection for agentless threat detection | GKE | Preview | X | Agentless | Defender for Containers |
+| Discovery and Auto provisioning | Auto provisioning of Defender DaemonSet | GKE | Preview | X | Agentless | Defender for Containers |
+| Discovery and Auto provisioning | Auto provisioning of Azure policy extension | GKE | Preview | X | Agentless | Defender for Containers |
+
+<sup><a name="footnote1"></a>1</sup> Specific features are in preview. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+### [**On-prem/IaaS (Arc)**](#tab/iass-arc)
+
+| Domain | Feature | Supported Resources | Release state <sup>[1](#footnote1)</sup> | Windows support | Agentless/Agent-based | Pricing tier |
+|--|--| -- | -- | -- | -- | --|
+| Compliance | Docker CIS | Arc enabled VMs | Preview | X | Log Analytics agent | Defender for Servers |
+| VA | Registry scan | ACR, Private ACR | Preview | ✓ | Agentless | Defender for Containers |
+| VA | View vulnerabilities for running images | Arc enabled K8s clusters | Preview | X | Defender extension | Defender for Containers |
+| Hardening | Control plane recommendations | N/A | - | - | - | - |
+| Hardening | Kubernetes data plane recommendations | Arc enabled K8s clusters | Preview | X | Azure Policy extension | Defender for Containers |
+| Runtime Threat Detection | Threat detection via auditlog | Arc enabled K8s clusters | - | ✓ | Defender extension | Defender for Containers |
+| Runtime Threat Detection | Agent-based threat detection | Arc enabled K8s clusters | Preview | X | Defender extension | Defender for Containers |
+| Discovery and Auto provisioning | Discovery of uncovered/unprotected clusters | Arc enabled K8s clusters | Preview | - | Agentless | Free |
+| Discovery and Auto provisioning | Auditlog collection for threat detection | Arc enabled K8s clusters | Preview | ✓ | Defender extension | Defender for Containers |
+| Discovery and Auto provisioning | Auto provisioning of Defender extension | Arc enabled K8s clusters | Preview | ✓ | Agentless | Defender for Containers |
+| Discovery and Auto provisioning | Auto provisioning of Azure policy extension | Arc enabled K8s clusters | Preview | X | Agentless | Defender for Containers |
+
+<sup><a name="footnote1"></a>1</sup> Specific features are in preview. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+++
+## Additional information
+
+### Registries and images
+
+| Aspect | Details |
+|--|--|
+| Registries and images | **Supported**<br> • [ACR registries protected with Azure Private Link](../container-registry/container-registry-private-link.md) (private registries require access to Trusted Services) <br> • Windows images (Preview). This is free while it's in preview, and will incur charges (based on the Defender for Containers plan) when it becomes generally available.<br><br>**Unsupported**<br> • Super-minimalist images such as [Docker scratch](https://hub.docker.com/_/scratch/) images<br> • "Distroless" images that only contain an application and its runtime dependencies without a package manager, shell, or OS<br> • Images with [Open Container Initiative (OCI) Image Format Specification](https://github.com/opencontainers/image-spec/blob/master/spec.md) |
++
+### Kubernetes distributions and configurations
+
+| Aspect | Details |
+|--|--|
+| Kubernetes distributions and configurations | **Supported**<br> • Any Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters<br>• [Azure Kubernetes Service (AKS)](../aks/intro-kubernetes.md)<sup>[1](#footnote1)</sup><br> • [Amazon Elastic Kubernetes Service (EKS)](https://aws.amazon.com/eks/)<br> • [Google Kubernetes Engine (GKE) Standard](https://cloud.google.com/kubernetes-engine/) <br><br> **Supported via Arc enabled Kubernetes** <sup>[2](#footnote2)</sup> <sup>[3](#footnote3)</sup><br>• [Azure Kubernetes Service on Azure Stack HCI](/azure-stack/aks-hci/overview)<br> • [Kubernetes](https://kubernetes.io/docs/home/)<br> • [AKS Engine](https://github.com/Azure/aks-engine)<br> • [Azure Red Hat OpenShift](https://azure.microsoft.com/services/openshift/)<br> • [Red Hat OpenShift](https://www.openshift.com/learn/topics/kubernetes/) (version 4.6 or newer)<br> • [VMware Tanzu Kubernetes Grid](https://tanzu.vmware.com/kubernetes-grid)<br> • [Rancher Kubernetes Engine](https://rancher.com/docs/rke/latest/en/)<br><br>**Unsupported**<br> • Any [taints](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/) applied to your nodes *might* disrupt the configuration of Defender for Containers<br> |
+
+<sup><a name="footnote1"></a>1</sup>The AKS Defender profile doesn't support AKS clusters that don't have RBAC role enabled.<br>
+<sup><a name="footnote2"></a>2</sup>Any Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters should be supported, but only the specified clusters have been tested.<br>
+<sup><a name="footnote3"></a>3</sup>To get [Microsoft Defender for Containers](../azure-arc/kubernetes/overview.md) protection for you should onboard to [Azure Arc-enabled Kubernetes](https://mseng.visualstudio.com/TechnicalContent/_workitems/recentlyupdated/) and enable Defender for Containers as an Arc extension.
+
+> [!NOTE]
+> For additional requirements for Kubernetes workload protection, see [existing limitations](../governance/policy/concepts/policy-for-kubernetes.md#limitations).
+
+## Next steps
+
+- Learn how [Defender for Cloud collects data using the Log Analytics Agent](enable-data-collection.md).
+- Learn how [Defender for Cloud manages and safeguards data](data-security.md).
+- Review the [platforms that support Defender for Cloud](security-center-os-coverage.md).
defender-for-cloud Supported Machines Endpoint Solutions Clouds Servers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/supported-machines-endpoint-solutions-clouds-servers.md
+
+ Title: Microsoft Defender for Cloud's servers features according to OS, machine type, and cloud
+description: Learn about the availability of Microsoft Defender for Cloud's servers features according to OS, machine type, and cloud deployment.
+ Last updated : 03/08/2022+++
+# Feature coverage for machines
++
+The **tabs** below show the features of Microsoft Defender for Cloud that are available for Windows and Linux machines.
+
+## Supported features for virtual machines and servers <a name="vm-server-features"></a>
+
+### [**Windows machines**](#tab/features-windows)
+
+| **Feature** | **Azure Virtual Machines** | **Azure Virtual Machine Scale Sets** | **Azure Arc-enabled machines** | **Defender for servers required** |
+|--|:-:|:-:|:-:|:-:|
+| [Microsoft Defender for Endpoint integration](integration-defender-for-endpoint.md) | ✔</br>(on supported versions) | ✔</br>(on supported versions) | ✔ | Yes |
+| [Virtual machine behavioral analytics (and security alerts)](alerts-reference.md) | ✔ | ✔ | ✔ | Yes |
+| [Fileless security alerts](alerts-reference.md#alerts-windows) | ✔ | ✔ | ✔ | Yes |
+| [Network-based security alerts](other-threat-protections.md#network-layer) | ✔ | ✔ | - | Yes |
+| [Just-in-time VM access](just-in-time-access-usage.md) | ✔ | - | - | Yes |
+| [Integrated Qualys vulnerability scanner](deploy-vulnerability-assessment-vm.md#overview-of-the-integrated-vulnerability-scanner) | ✔ | - | ✔ | Yes |
+| [File integrity monitoring](file-integrity-monitoring-overview.md) | ✔ | ✔ | ✔ | Yes |
+| [Adaptive application controls](adaptive-application-controls.md) | ✔ | - | ✔ | Yes |
+| [Network map](protect-network-resources.md#network-map) | ✔ | ✔ | - | Yes |
+| [Adaptive network hardening](adaptive-network-hardening.md) | ✔ | - | - | Yes |
+| [Regulatory compliance dashboard & reports](regulatory-compliance-dashboard.md) | ✔ | ✔ | ✔ | Yes |
+| [Docker host hardening](./harden-docker-hosts.md) | - | - | - | Yes |
+| Missing OS patches assessment | ✔ | ✔ | ✔ | Azure: No<br><br>Azure Arc-enabled: Yes |
+| Security misconfigurations assessment | ✔ | ✔ | ✔ | Azure: No<br><br>Azure Arc-enabled: Yes |
+| [Endpoint protection assessment](supported-machines-endpoint-solutions-clouds-servers.md#supported-endpoint-protection-solutions-) | ✔ | ✔ | ✔ | Azure: No<br><br>Azure Arc-enabled: Yes |
+| Disk encryption assessment | ✔</br>(for [supported scenarios](../virtual-machines/windows/disk-encryption-windows.md#unsupported-scenarios)) | ✔ | - | No |
+| Third-party vulnerability assessment | ✔ | - | ✔ | No |
+| [Network security assessment](protect-network-resources.md) | ✔ | ✔ | - | No |
+| | | | | |
+
+### [**Linux machines**](#tab/features-linux)
+
+| **Feature** | **Azure Virtual Machines** | **Azure Virtual Machine Scale Sets** | **Azure Arc-enabled machines** | **Defender for servers required** |
+|--|:-:|:-:|:-:|:-:|
+| [Microsoft Defender for Endpoint integration](integration-defender-for-endpoint.md) | ✔ | - | ✔ | Yes |
+| [Virtual machine behavioral analytics (and security alerts)](./azure-defender.md) | ✔</br>(on supported versions) | ✔</br>(on supported versions) | ✔ | Yes |
+| [Fileless security alerts](alerts-reference.md#alerts-windows) | - | - | - | Yes |
+| [Network-based security alerts](other-threat-protections.md#network-layer) | ✔ | ✔ | - | Yes |
+| [Just-in-time VM access](just-in-time-access-usage.md) | ✔ | - | - | Yes |
+| [Integrated Qualys vulnerability scanner](deploy-vulnerability-assessment-vm.md#overview-of-the-integrated-vulnerability-scanner) | ✔ | - | ✔ | Yes |
+| [File integrity monitoring](file-integrity-monitoring-overview.md) | ✔ | ✔ | ✔ | Yes |
+| [Adaptive application controls](adaptive-application-controls.md) | ✔ | - | ✔ | Yes |
+| [Network map](protect-network-resources.md#network-map) | ✔ | ✔ | - | Yes |
+| [Adaptive network hardening](adaptive-network-hardening.md) | ✔ | - | - | Yes |
+| [Regulatory compliance dashboard & reports](regulatory-compliance-dashboard.md) | ✔ | ✔ | ✔ | Yes |
+| [Docker host hardening](./harden-docker-hosts.md) | ✔ | ✔ | ✔ | Yes |
+| Missing OS patches assessment | ✔ | ✔ | ✔ | Azure: No<br><br>Azure Arc-enabled: Yes |
+| Security misconfigurations assessment | ✔ | ✔ | ✔ | Azure: No<br><br>Azure Arc-enabled: Yes |
+| [Endpoint protection assessment](supported-machines-endpoint-solutions-clouds-servers.md#supported-endpoint-protection-solutions-) | - | - | - | No |
+| Disk encryption assessment | ✔</br>(for [supported scenarios](../virtual-machines/windows/disk-encryption-windows.md#unsupported-scenarios)) | ✔ | - | No |
+| Third-party vulnerability assessment | ✔ | - | ✔ | No |
+| [Network security assessment](protect-network-resources.md) | ✔ | ✔ | - | No |
+| | | | | |
++
+### [**Multi-cloud machines**](#tab/features-multi-cloud)
+
+| **Feature** | **Availability in AWS** | **Availability in GCP** |
+|--|:-:|:-:|
+| [Microsoft Defender for Endpoint integration](integration-defender-for-endpoint.md) | ✔ | ✔ |
+| [Virtual machine behavioral analytics (and security alerts)](alerts-reference.md) | ✔ | ✔ |
+| [Fileless security alerts](alerts-reference.md#alerts-windows) | ✔ | ✔ |
+| [Network-based security alerts](other-threat-protections.md#network-layer) | - | - |
+| [Just-in-time VM access](just-in-time-access-usage.md) | - | - |
+| [Integrated Qualys vulnerability scanner](deploy-vulnerability-assessment-vm.md#overview-of-the-integrated-vulnerability-scanner) | ✔ | ✔ |
+| [File integrity monitoring](file-integrity-monitoring-overview.md) | ✔ | ✔ |
+| [Adaptive application controls](adaptive-application-controls.md) | ✔ | ✔ |
+| [Network map](protect-network-resources.md#network-map) | - | - |
+| [Adaptive network hardening](adaptive-network-hardening.md) | - | - |
+| [Regulatory compliance dashboard & reports](regulatory-compliance-dashboard.md) | ✔ | ✔ |
+| [Docker host hardening](harden-docker-hosts.md) | ✔ | ✔ |
+| Missing OS patches assessment | ✔ | ✔ |
+| Security misconfigurations assessment | ✔ | ✔ |
+| [Endpoint protection assessment](supported-machines-endpoint-solutions-clouds-servers.md#supported-endpoint-protection-solutions-) | ✔ | ✔ |
+| Disk encryption assessment | ✔</br>(for [supported scenarios](../virtual-machines/windows/disk-encryption-windows.md#unsupported-scenarios)) | ✔</br>(for [supported scenarios](../virtual-machines/windows/disk-encryption-windows.md#unsupported-scenarios)) |
+| Third-party vulnerability assessment | - | - |
+| [Network security assessment](protect-network-resources.md) | - | - |
+| | | |
+
+
+
+> [!TIP]
+> To experiment with features that are only available with enhanced security features enabled, you can enroll in a 30-day trial. For more information, see the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
++
+## Supported endpoint protection solutions <a name="endpoint-supported"></a>
+
+The following table provides a matrix of supported endpoint protection solutions and whether you can use Microsoft Defender for Cloud to install each solution for you.
+
+For information about when recommendations are generated for each of these solutions, see [Endpoint Protection Assessment and Recommendations](endpoint-protection-recommendations-technical.md).
+
+| Solution | Supported platforms | Defender for Cloud installation |
+||||
+| Microsoft Defender Antivirus | Windows Server 2016 or later | No (built into OS) |
+| System Center Endpoint Protection (Microsoft Antimalware) | Windows Server 2012 R2 | Via extension |
+| Trend Micro – Deep Security | Windows Server (all) | No |
+| Symantec v12.1.1100+ | Windows Server (all) | No |
+| McAfee v10+ | Windows Server (all) | No |
+| McAfee v10+ | Linux (GA) | No |
+| Microsoft Defender for Endpoint for Linux<sup>[1](#footnote1)</sup> | Linux (GA) | Via extension |
+| Sophos V9+ | Linux (GA) | No |
+| | | |
+
+<sup><a name="footnote1"></a>1</sup> It's not enough to have Microsoft Defender for Endpoint on the Linux machine: the machine will only appear as healthy if the always-on scanning feature (also known as real-time protection (RTP)) is active. By default, the RTP feature is **disabled** to avoid clashes with other AV software.
++++
+## Feature support in government and national clouds
+
+| Feature/Service | Azure | Azure Government | Azure China 21Vianet |
+||-|--|--|
+| **Defender for Cloud free features** | | | |
+| - [Continuous export](./continuous-export.md) | GA | GA | GA |
+| - [Workflow automation](./workflow-automation.md) | GA | GA | GA |
+| - [Recommendation exemption rules](./exempt-resource.md) | Public Preview | Not Available | Not Available |
+| - [Alert suppression rules](./alerts-suppression-rules.md) | GA | GA | GA |
+| - [Email notifications for security alerts](./configure-email-notifications.md) | GA | GA | GA |
+| - [Auto provisioning for agents and extensions](./enable-data-collection.md) | GA | GA | GA |
+| - [Asset inventory](./asset-inventory.md) | GA | GA | GA |
+| - [Azure Monitor Workbooks reports in Microsoft Defender for Cloud's workbooks gallery](./custom-dashboards-azure-workbooks.md) | GA | GA | GA |
+| - [Integration with Microsoft Defender for Cloud Apps](./other-threat-protections.md#display-recommendations-in-microsoft-defender-for-cloud-apps-) | GA | Not Available | Not Available |
+| **Microsoft Defender plans and extensions** | | | |
+| - [Microsoft Defender for servers](./defender-for-servers-introduction.md) | GA | GA | GA |
+| - [Microsoft Defender for App Service](./defender-for-app-service-introduction.md) | GA | Not Available | Not Available |
+| - [Microsoft Defender for DNS](./defender-for-dns-introduction.md) | GA | GA | GA |
+| - [Microsoft Defender for container registries](./defender-for-container-registries-introduction.md) <sup>[1](#footnote1)</sup> | GA | GA <sup>[2](#footnote2)</sup> | GA <sup>[2](#footnote2)</sup> |
+| - [Microsoft Defender for container registries scanning of images in CI/CD workflows](./defender-for-container-registries-cicd.md) <sup>[3](#footnote3)</sup> | Public Preview | Not Available | Not Available |
+| - [Microsoft Defender for Kubernetes](./defender-for-kubernetes-introduction.md) <sup>[4](#footnote4)</sup> | GA | GA | GA |
+| - [Microsoft Defender for Containers](./defender-for-containers-introduction.md) <sup>[10](#footnote10)</sup> | GA | GA | GA |
+| - [Defender extension for Azure Arc-enabled Kubernetes clusters, servers or data services](./defender-for-kubernetes-azure-arc.md) <sup>[5](#footnote5)</sup> | Public Preview | Not Available | Not Available |
+| - [Microsoft Defender for Azure SQL database servers](./defender-for-sql-introduction.md) | GA | GA | GA <sup>[9](#footnote9)</sup> |
+| - [Microsoft Defender for SQL servers on machines](./defender-for-sql-introduction.md) | GA | GA | Not Available |
+| - [Microsoft Defender for open-source relational databases](./defender-for-databases-introduction.md) | GA | Not Available | Not Available |
+| - [Microsoft Defender for Key Vault](./defender-for-key-vault-introduction.md) | GA | Not Available | Not Available |
+| - [Microsoft Defender for Resource Manager](./defender-for-resource-manager-introduction.md) | GA | GA | GA |
+| - [Microsoft Defender for Storage](./defender-for-storage-introduction.md) <sup>[6](#footnote6)</sup> | GA | GA | Not Available |
+| - [Microsoft Defender for Azure Cosmos DB](concept-defender-for-cosmos.md) | Public Preview | Not Available | Not Available |
+| - [Kubernetes workload protection](./kubernetes-workload-protections.md) | GA | GA | GA |
+| - [Bi-directional alert synchronization with Sentinel](../sentinel/connect-azure-security-center.md) | Public Preview | Not Available | Not Available |
+| **Microsoft Defender for servers features** <sup>[7](#footnote7)</sup> | | | |
+| - [Just-in-time VM access](./just-in-time-access-usage.md) | GA | GA | GA |
+| - [File integrity monitoring](./file-integrity-monitoring-overview.md) | GA | GA | GA |
+| - [Adaptive application controls](./adaptive-application-controls.md) | GA | GA | GA |
+| - [Adaptive network hardening](./adaptive-network-hardening.md) | GA | Not Available | Not Available |
+| - [Docker host hardening](./harden-docker-hosts.md) | GA | GA | GA |
+| - [Integrated Qualys vulnerability scanner](./deploy-vulnerability-assessment-vm.md) | GA | Not Available | Not Available |
+| - [Regulatory compliance dashboard & reports](./regulatory-compliance-dashboard.md) <sup>[8](#footnote8)</sup> | GA | GA | GA |
+| - [Microsoft Defender for Endpoint deployment and integrated license](./integration-defender-for-endpoint.md) | GA | GA | Not Available |
+| - [Connect AWS account](./quickstart-onboard-aws.md) | GA | Not Available | Not Available |
+| - [Connect GCP project](./quickstart-onboard-gcp.md) | GA | Not Available | Not Available |
+| | | | |
+
+<sup><a name="footnote1"></a>1</sup> Partially GA: The ability to disable specific findings from vulnerability scans is in public preview.
+
+<sup><a name="footnote2"></a>2</sup> Vulnerability scans of container registries on the Azure Government cloud can only be performed with the scan on push feature.
+
+<sup><a name="footnote3"></a>3</sup> Requires Microsoft Defender for container registries.
+
+<sup><a name="footnote4"></a>4</sup> Partially GA: Support for Azure Arc-enabled clusters is in public preview and not available on Azure Government.
+
+<sup><a name="footnote5"></a>5</sup> Requires Microsoft Defender for Kubernetes or Microsoft Defender for Containers.
+
+<sup><a name="footnote6"></a>6</sup> Partially GA: Some of the threat protection alerts from Microsoft Defender for Storage are in public preview.
+
+<sup><a name="footnote7"></a>7</sup> These features all require [Microsoft Defender for servers](./defender-for-servers-introduction.md).
+
+<sup><a name="footnote8"></a>8</sup> There may be differences in the standards offered per cloud type.
+
+<sup><a name="footnote9"></a>9</sup> Partially GA: Subset of alerts and vulnerability assessment for SQL servers. Behavioral threat protections aren't available.
+
+<sup><a name="footnote4"></a>10</sup> Partially GA: Support for Arc-enabled Kubernetes clusters (and therefore AWS EKS too) is in public preview and not available on Azure Government. Run-time visibility of vulnerabilities in container images is also a preview feature.
+
+## Next steps
+
+- Learn how [Defender for Cloud collects data using the Log Analytics Agent](enable-data-collection.md).
+- Learn how [Defender for Cloud manages and safeguards data](data-security.md).
+- Review the [platforms that support Defender for Cloud](security-center-os-coverage.md).
defender-for-cloud Upcoming Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/upcoming-changes.md
Title: Important changes coming to Microsoft Defender for Cloud description: Upcoming changes to Microsoft Defender for Cloud that you might need to be aware of and for which you might need to plan Previously updated : 03/06/2022 Last updated : 03/08/2022 # Important upcoming changes to Microsoft Defender for Cloud
When the recommendations are released to general availability, they will replace
- Assessment key for the **GA** recommendation: 3bcd234d-c9c7-c2a2-89e0-c01f419c1a8a Learn more:-- [Defender for Cloud's supported endpoint protection solutions](supported-machines-endpoint-solutions-clouds.md#endpoint-supported)
+- [Defender for Cloud's supported endpoint protection solutions](supported-machines-endpoint-solutions-clouds-servers.md#endpoint-supported)
- [How these recommendations assess the status of your deployed solutions](endpoint-protection-recommendations-technical.md) ### AWS recommendations to GA
devtest-labs Configure Lab Remote Desktop Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/configure-lab-remote-desktop-gateway.md
Title: Configure a lab to use Remote Desktop Gateway
-description: Learn how to configure a lab in Azure DevTest Labs with a remote desktop gateway to ensure secure access to the lab VMs without having to expose the RDP port.
+ Title: Configure a lab to use a remote desktop gateway
+description: Learn how to configure a remote desktop gateway in Azure DevTest Labs for secure access to lab VMs without exposing RDP ports.
Previously updated : 06/26/2020 Last updated : 03/07/2022
-# Configure your lab in Azure DevTest Labs to use a remote desktop gateway
-In Azure DevTest Labs, you can configure a remote desktop gateway for your lab to ensure secure access to the lab virtual machines (VMs) without having to expose the RDP port. The lab provides a central place for your lab users to view and connect to all virtual machines they have access to. The **Connect** button on the **Virtual Machine** page creates a machine-specific RDP file that you can open to connect to the machine. You can further customize and secure the RDP connection by connecting your lab to a remote desktop gateway.
+# Configure and use a remote desktop gateway in Azure DevTest Labs
-This approach is more secure because the lab user authenticates directly to the gateway machine or can use company credentials on a domain-joined gateway machine to connect to their machines. The lab also supports using token authentication to the gateway machine that allows users to connect to their lab virtual machines without having the RDP port exposed to the internet. This article walks through an example on how to set up a lab that uses token authentication to connect to lab machines.
+This article describes how to set up and use a gateway for secure remote desktop access to lab virtual machines (VMs) in Azure DevTest Labs. Using a gateway improves security because you don't expose the VMs' remote desktop protocol (RDP) ports to the internet. This remote desktop gateway solution also supports token authentication.
-Looking to connect through Bastion, read "[Enable browser connection to DevTest Labs VMs with Azure Bastion](enable-browser-connection-lab-virtual-machines.md)".
+DevTest Labs provides a central place for lab users to view and connect to their VMs. Selecting **Connect** > **RDP** on a lab VM's **Overview** page creates a machine-specific RDP file, and users can open the file to connect to the VM.
-## Architecture of the solution
+With a remote desktop gateway, lab users connect to their VMs through a gateway machine. Users authenticate directly to the gateway machine, and can use company-supplied credentials on domain-joined machines. Token authentication provides an extra layer of security.
-![Architecture of the solution](./media/configure-lab-remote-desktop-gateway/architecture.png)
+Another way to securely access lab VMs without exposing ports or IP addresses is through a browser with Azure Bastion. For more information, see [Enable browser connection to DevTest Labs VMs with Azure Bastion](enable-browser-connection-lab-virtual-machines.md).
-1. The [Get RDP file contents](/rest/api/dtl/virtualmachines/getrdpfilecontents) action is called when you select the **Connect** button.1.
-1. The Get RDP file contents action invokes `https://{gateway-hostname}/api/host/{lab-machine-name}/port/{port-number}` to request an authentication token.
- 1. `{gateway-hostname}` is the gateway hostname specified on the **Lab Settings** page for your lab in the Azure portal.
- 1. `{lab-machine-name}` is the name of the machine that you're trying to connect.
- 1. `{port-number}` is the port on which the connection needs to be made. Usually this port is 3389. If the lab VM is using the [shared IP](devtest-lab-shared-ip.md) feature in DevTest Labs, the port will be different.
-1. The remote desktop gateway defers the call from `https://{gateway-hostname}/api/host/{lab-machine-name}/port/{port-number}` to an Azure function to generate the authentication token. The DevTest Labs service automatically includes the function key in the request header. The function key is to be saved in the lab's key vault. The name for that secret to be shown as **Gateway token secret** on the **Lab Settings** page for the lab.
-1. The Azure function is expected to return a token for certificate-based token authentication against the gateway machine.
-1. The Get RDP file contents action then returns the complete RDP file, including the authentication information.
-1. You open the RDP file using your preferred RDP connection program. Remember that not all RDP connection programs support token authentication. The authentication token does have an expiration date, set by the function app. Make the connection to the lab VM before the token expires.
-1. Once the remote desktop gateway machine authenticates the token in the RDP file, the connection is forwarded to your lab machine.
+## Architecture
-### Solution requirements
-To work with the DevTest Labs token authentication feature, there are a few configuration requirements for the gateway machines, domain name services (DNS), and functions.
+The following diagram shows how a remote desktop gateway applies token authentication and connects to DevTest Labs VMs.
-### Requirements for remote desktop gateway machines
-- TLS/SSL certificate must be installed on the gateway machine to handle HTTPS traffic. The certificate must match the fully qualified domain name (FQDN) of the load balancer for the gateway farm or the FQDN of the machine itself if there's only one machine. Wild-card TLS/SSL certificates don't work. -- A signing certificate installed on gateway machine(s). Create a signing certificate by using [Create-SigningCertificate.ps1](https://github.com/Azure/azure-devtestlab/blob/master/samples/DevTestLabs/GatewaySample/tools/Create-SigningCertificate.ps1) script.-- Install the [Pluggable Authentication](https://code.msdn.microsoft.com/windowsdesktop/Remote-Desktop-Gateway-517d6273) module that supports token authentication for the remote desktop gateway. One example of such a module is `RDGatewayFedAuth.msi` that comes with [System Center Virtual Machine Manager (VMM) images](/system-center/vmm/install-console?view=sc-vmm-1807&preserve-view=true). For more information about System Center, see [System Center documentation](/system-center/) and [pricing details](https://www.microsoft.com/cloud-platform/system-center-pricing). -- The gateway server can handle requests made to `https://{gateway-hostname}/api/host/{lab-machine-name}/port/{port-number}`.
+![Diagram that shows the remote desktop gateway architecture.](./media/configure-lab-remote-desktop-gateway/architecture.png)
- The gateway-hostname is the FQDN of the load balancer of the gateway farm or the FQDN of machine itself if there's only one machine. The `{lab-machine-name}` is the name of the lab machine that you're trying to connect, and the `{port-number}` is port on which the connection will be made. By default, this port is 3389. However, if the virtual machine is using the [shared IP](devtest-lab-shared-ip.md) feature in DevTest Labs, the port will be different.
-- The [Application Routing Request](/iis/extensions/planning-for-arr/using-the-application-request-routing-module) module for Internet Information Server (IIS) can be used to redirect `https://{gateway-hostname}/api/host/{lab-machine-name}/port/{port-number}` requests to the azure function, which handles the request to get a token for authentication.
+1. Selecting **Connect** > **RDP** from a lab VM invokes the [getRdpFileContents](/rest/api/dtl/virtualmachines/getrdpfilecontents) REST command:
+ ```http
+ POST https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DevTestLab/labs/{labName}/virtualmachines/{name}/getRdpFileContents
+ ```
-## Requirements for Azure function
-Azure function handles request with format of `https://{function-app-uri}/app/host/{lab-machine-name}/port/{port-number}` and returns the authentication token based on the same signing certificate installed on the gateway machines. The `{function-app-uri}` is the uri used to access the function. The function key is automatically be passed in the header of the request. For a sample function, see [https://github.com/Azure/azure-devtestlab/blob/master/samples/DevTestLabs/GatewaySample/src/RDGatewayAPI/Functions/CreateToken.cs](https://github.com/Azure/azure-devtestlab/blob/master/samples/DevTestLabs/GatewaySample/src/RDGatewayAPI/Functions/CreateToken.cs).
+1. When the lab has a gateway configured, the `getRdpFileContents` action invokes `https://{gateway-hostname}/api/host/{lab-machine-name}/port/{port-number}` to request an authentication token.
+ - `{gateway-hostname}`, or `{lb-uri}` for a load balancer, is the gateway hostname specified on the **Lab settings** page for the lab.
+ - `{lab-machine-name}` is the name of the VM to connect to.
+ - `{port-number}` is the port to use for the connection. Usually this port is 3389, but if the lab VM uses a [shared IP](devtest-lab-shared-ip.md), the port number is different.
+1. The remote desktop gateway uses `https://{function-app-uri}/api/host/{lab-machine-name}/port/{port-number}` to defer the call to an Azure Functions function app.
-## Requirements for network
+ > [!NOTE]
+ > The request header automatically includes the function key, which it gets from the lab's key vault. The function key secret's name is the **Gateway token secret** on the lab's **Lab settings** page.
-- DNS for the FQDN associated with the TLS/SSL certificate installed on the gateway machines must direct traffic to the gateway machine or the load balancer of the gateway machine farm.
-- If the lab machine uses private IPs, there must be a network path from the gateway machine to the lab machine, either through sharing the same virtual network or using peered virtual networks.
+1. The Azure function generates and returns a token for certificate-based authentication on the gateway machine.
-## Configure the lab to use token authentication
-This section shows how to configure a lab to use a remote desktop gateway machine that supports token authentication. This section doesn't cover how to set up a remote desktop gateway farm itself. For that information, See the [Sample to create a remote desktop gateway](#sample-to-create-a-remote-desktop-gateway) section at the end of this article.
+1. The `getRdpFileContents` action returns the complete RDP file, including the authentication token.
-Before you update the lab settings, store the key needed to successfully execute the function to return an authentication token in the lab's key vault. You can get the function key value in the **Manage** page for the function in the Azure portal. For more information on how to save a secret in a key vault, see [Add a secret to Key Vault](../key-vault/secrets/quick-create-portal.md#add-a-secret-to-key-vault). Save the name of the secret for later use.
+When an RDP connection program opens the RDP file, the remote desktop gateway authenticates the token and forwards the connection to the lab VM.
-To find the ID of the lab's key vault, run the following Azure CLI command:
+> [!NOTE]
+> Not all RDP connection programs support token authentication.
-```azurecli
-az resource show --name {lab-name} --resource-type 'Microsoft.DevTestLab/labs' --resource-group {lab-resource-group-name} --query properties.vaultName
-```
+> [!IMPORTANT]
+> The Azure function sets an expiration date for the authentication token. A user must connect to the VM before the token expires.
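+
+The following PowerShell sketch illustrates the shape of the token request in the preceding steps. The gateway FQDN, VM name, and key value are placeholders, and the exact HTTP method and response format are defined by the sample function, so treat this as an illustration rather than a supported client.
+
+```powershell
+# Hypothetical values for illustration only.
+$gatewayHostname = 'rdg.contoso.com'   # gateway or load-balancer FQDN
+$labMachineName  = 'MyLabVm'           # lab VM to connect to
+$port            = 3389                # standard RDP port
+
+# In the real flow, the function key comes from the lab's key vault.
+$headers = @{ 'x-functions-key' = '<function-key>' }
+
+# Request an authentication token for the connection.
+$uri = "https://$gatewayHostname/api/host/$labMachineName/port/$port"
+Invoke-RestMethod -Uri $uri -Method Post -Headers $headers
+```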
+
+## Configuration requirements
+
+There are some configuration requirements for gateway machines, Azure Functions, and networks to work with DevTest Labs RDP access and token authentication.
+
+### Gateway machine requirements
-Configure the lab to use the token authentication by using these steps:
+The gateway machine must have the following configuration:
-1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Select **All Services**, and then select **DevTest Labs** from the list.
-1. From the list of labs, select your **lab**.
-1. On the lab's page, select **Configuration and policies**.
-1. On the left menu, in the **Settings** section, select **Lab settings**.
-1. In the **Remote desktop** section, enter the fully qualified domain name (FQDN) or IP address of the remote desktop services gateway machine or farm for the **Gateway hostname** field. This value must match the FQDN of the TLS/SSL certificate used on gateway machines.
+- A TLS/SSL certificate to handle HTTPS traffic. The certificate must match the fully qualified domain name (FQDN) of the gateway machine if there's only one machine, or the load balancer of a gateway farm. Wild-card TLS/SSL certificates don't work.
- ![Remote desktop options in lab settings](./media/configure-lab-remote-desktop-gateway/remote-desktop-options-in-lab-settings.png)
-1. In the **Remote desktop** section, for **Gateway token** secret, enter the name of the secret created earlier. This value isn't the function key itself, but the name of the secret in the lab's key vault that holds the function key.
+- A signing certificate. You can create a signing certificate by using the [Create-SigningCertificate.ps1](https://github.com/Azure/azure-devtestlab/blob/master/samples/DevTestLabs/GatewaySample/tools/Create-SigningCertificate.ps1) PowerShell script.
- ![Gateway token secret in lab settings](./media/configure-lab-remote-desktop-gateway/gateway-token-secret.png)
-1. **Save** Changes.
+- A [pluggable authentication module](https://en.wikipedia.org/wiki/Pluggable_authentication_module) that supports token authentication. One example is *RDGatewayFedAuth.msi*, which comes with [System Center Virtual Machine Manager (VMM)](/system-center/vmm/install-console?view=sc-vmm-1807&preserve-view=true) images.
- > [!NOTE]
- > By clicking **Save**, you agree to [Remote Desktop Gateway's license terms](https://www.microsoft.com/licensing/product-licensing/products). For more information about remote gateway, see [Welcome to Remote Desktop Services](/windows-server/remote/remote-desktop-services/Welcome-to-rds) and [Deploy your remote desktop environment](/windows-server/remote/remote-desktop-services/rds-deploy-infrastructure).
+- The ability to handle requests to `https://{gateway-hostname}/api/host/{lab-machine-name}/port/{port-number}`.
+  You can use the [Application Request Routing (ARR) module for Internet Information Services (IIS)](/iis/extensions/planning-for-arr/using-the-application-request-routing-module) to redirect `https://{gateway-hostname}/api/host/{lab-machine-name}/port/{port-number}` requests to the function app.
-If configuring the lab via automation is preferred, see [Set-DevTestLabGateway.ps1](https://github.com/Azure/azure-devtestlab/blob/master/samples/DevTestLabs/GatewaySample/tools/Set-DevTestLabGateway.ps1) for a sample PowerShell script to set **gateway hostname** and **gateway token secret** settings. The [Azure DevTest Labs GitHub repository](https://github.com/Azure/azure-devtestlab) also provides an Azure Resource Manager template that creates or updates a lab with the **gateway hostname** and **gateway token secret** settings.
+### Azure Functions requirements
-## Configure network security group
-To further secure the lab, a network security group (NSG) can be added to the virtual network used by the lab virtual machines. For instructions how to set up an NSG, see [Create, change, or delete a network security group](../virtual-network/manage-network-security-group.md).
+An Azure Functions function app handles requests with the `https://{function-app-uri}/api/host/{lab-machine-name}/port/{port-number}` format, and creates and returns the authentication token based on the gateway machine's signing certificate. The `{function-app-uri}` is the URI used to access the function.
-Here is an example NSG that only allows traffic that first goes through the gateway to reach lab machines. The source in this rule is the IP address of the single gateway machine, or the IP address of the load balancer in front of the gateway machines.
+The request header must include the function key, which comes from the lab's key vault.
-![Network security group - rules](./media/configure-lab-remote-desktop-gateway/network-security-group-rules.png)
+For a sample function, see [CreateToken.cs](https://github.com/Azure/azure-devtestlab/blob/master/samples/DevTestLabs/GatewaySample/src/RDGatewayAPI/Functions/CreateToken.cs).
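+
+As a rough sketch of the certificate-based signing involved, the following PowerShell signs a sample payload with a certificate from the local machine store. The thumbprint and payload are placeholders; the actual token fields and format are defined by the sample function and *RDGatewayFedAuth.msi*, not by this snippet.
+
+```powershell
+# Hypothetical sketch: sign a payload with the gateway's signing certificate.
+$cert = Get-ChildItem Cert:\LocalMachine\My |
+    Where-Object { $_.Thumbprint -eq '<signing-certificate-thumbprint>' }
+
+# Example payload only; the real token format comes from the sample function.
+$bytes = [System.Text.Encoding]::UTF8.GetBytes('MyLabVm:3389')
+
+# Sign with the certificate's RSA private key (SHA-256, PKCS#1 padding).
+$rsa = [System.Security.Cryptography.X509Certificates.RSACertificateExtensions]::GetRSAPrivateKey($cert)
+$signature = $rsa.SignData($bytes,
+    [System.Security.Cryptography.HashAlgorithmName]::SHA256,
+    [System.Security.Cryptography.RSASignaturePadding]::Pkcs1)
+[System.Convert]::ToBase64String($signature)
+```
+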
-## Sample to create a remote desktop gateway
+### Network requirements
+
+- The DNS for the FQDN associated with the gateway machine's TLS/SSL certificate must direct traffic to the gateway machine or to the load balancer of a gateway machine farm.
+
+- If the lab VM uses a private IP address, there must be a network path from the gateway machine to the lab machine. The two machines must either share the same virtual network or use peered virtual networks.
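+
+You can sanity check both requirements from the gateway machine with built-in PowerShell cmdlets. The names and addresses below are placeholders:
+
+```powershell
+# DNS: the certificate's FQDN should resolve to the gateway or its load balancer.
+Resolve-DnsName 'rdg.contoso.com'
+
+# Network path: the gateway must reach the lab VM on the RDP port.
+Test-NetConnection -ComputerName '10.0.8.4' -Port 3389
+```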
+
+## Create a remote desktop gateway
+
+The [Azure DevTest Labs GitHub repository](https://github.com/Azure/azure-devtestlab) has Azure Resource Manager (ARM) templates that help set up DevTest Labs token authentication and remote desktop gateway resources. There are templates for gateway machine creation, lab settings, and a function app.
> [!NOTE]
-> By using the sample templates, you agree to [Remote Desktop Gateway's license terms](https://www.microsoft.com/licensing/product-licensing/products). For more information about remote gateway, see [Welcome to Remote Desktop Services](/windows-server/remote/remote-desktop-services/Welcome-to-rds) and [Deploy your remote desktop environment](/windows-server/remote/remote-desktop-services/rds-deploy-infrastructure).
+> By using the sample templates, you agree to the [Remote Desktop Gateway license terms](https://www.microsoft.com/licensing/product-licensing/products).
-The [Azure DevTest Labs GitHub repository](https://github.com/Azure/azure-devtestlab) provides a few samples to help setup the resources needed to use token authentication and remote desktop gateway with DevTest Labs. These samples include Azure Resource Manager templates for gateway machines, lab settings, and function app.
+Follow these steps to set up a sample remote desktop gateway farm.
-Follow these steps to set up a sample solution for the remote desktop gateway farm.
+1. Create a signing certificate.
-1. Create a signing certificate. Run [Create-SigningCertificate.ps1](https://github.com/Azure/azure-devtestlab/blob/master/samples/DevTestLabs/GatewaySample/tools/Create-SigningCertificate.ps1). Save the thumbprint, password, and Base64 encoding of the created certificate.
-2. Get a TLS/SSL certificate. FQDN associated with the TLS/SSL certificate must be for the domain you control. Save the thumbprint, password, and Base64 encoding for this certificate. To get thumbprint using PowerShell, use the following commands.
+ Run [Create-SigningCertificate.ps1](https://github.com/Azure/azure-devtestlab/blob/master/samples/DevTestLabs/GatewaySample/tools/Create-SigningCertificate.ps1). Record the thumbprint, password, and Base64 encoding of the created certificate to use later.
+
+1. Get a TLS/SSL certificate. The FQDN associated with the TLS/SSL certificate must be for a domain you control.
- ```powershell
- $cer = New-Object System.Security.Cryptography.X509Certificates.X509Certificate;
- $cer.Import('path-to-certificate');
- $hash = $cer.GetCertHashString()
- ```
+1. Record the password, thumbprint, and Base64 encoding for the TLS/SSL certificate to use later.
- To get the Base64 encoding using PowerShell, use the following command.
+ - To get the thumbprint, use the following PowerShell commands:
- ```powershell
- [System.Convert]::ToBase64String([System.IO.File]::ReadAllBytes('path-to-certificate'))
- ```
-3. Download files from [https://github.com/Azure/azure-devtestlab/tree/master/samples/DevTestLabs/GatewaySample/arm/gateway](https://github.com/Azure/azure-devtestlab/tree/master/samples/DevTestLabs/GatewaySample/arm/gateway).
-
- The template requires access to a few other Resource Manager templates and related resources at the same base URI. Copy all the files from [https://github.com/Azure/azure-devtestlab/blob/master/samples/DevTestLabs/GatewaySample/arm/gateway](https://github.com/Azure/azure-devtestlab/blob/master/samples/DevTestLabs/GatewaySample/arm/gateway) and RDGatewayFedAuth.msi to a blob container in a storage account.
-4. Deploy **azuredeploy.json** from [https://github.com/Azure/azure-devtestlab/tree/master/samples/DevTestLabs/GatewaySample/arm/gateway](https://github.com/Azure/azure-devtestlab/tree/master/samples/DevTestLabs/GatewaySample/arm/gateway). The template takes the following parameters:
- - adminUsername ΓÇô Required. Administrator user name for the gateway machines.
- - adminPassword ΓÇô Required. Password for the administrator account for the gateway machines.
- - instanceCount ΓÇô Number of gateway machines to create.
- - alwaysOn ΓÇô Indicates whether to keep the created Azure Functions app in a warm state or not. Keeping the Azure Functions app will avoid delays when users first try to connect to their lab VM, but it does have cost implications.
- - tokenLifetime ΓÇô The length of time the created token will be valid. Format is HH:MM:SS.
- - sslCertificate ΓÇô The Base64 encoding of the TLS/SSL certificate for the gateway machine.
- - sslCertificatePassword ΓÇô The password of the TLS/SSL certificate for the gateway machine.
- - sslCertificateThumbprint - The certificate thumbprint for identification in the local certificate store of the TLS/SSL certificate.
- - signCertificate ΓÇô The Base64 encoding for signing certificate for the gateway machine.
- - signCertificatePassword ΓÇô The password for signing certificate for the gateway machine.
- - signCertificateThumbprint - The certificate thumbprint for identification in the local certificate store of the signing certificate.
- - _artifactsLocation ΓÇô URI location where all supporting resources can be found. This value must be a fully qualified UIR, not a relative path.
- - _artifactsLocationSasToken ΓÇô The Shared Access Signature (SAS) token used to access supporting resources, if the location is an Azure storage account.
-
- The template can be deployed using the Azure CLI by using the following command:
-
- ```azurecli
- az deployment group create --resource-group {resource-group} --template-file azuredeploy.json --parameters @azuredeploy.parameters.json -ΓÇôparameters _artifactsLocation="{storage-account-endpoint}/{container-name}" -ΓÇôparameters _artifactsLocationSasToken = "?{sas-token}"
+    ```powershell
+    # Load the certificate file and compute its thumbprint (certificate hash string).
+    $cer = New-Object System.Security.Cryptography.X509Certificates.X509Certificate;
+    $cer.Import('path-to-certificate');
+    $hash = $cer.GetCertHashString()
+    ```
+
+ - To get the Base64 encoding, use the following PowerShell command:
+
+    ```powershell
+    # Read the raw certificate bytes and convert them to a Base64 string.
+    [System.Convert]::ToBase64String([System.IO.File]::ReadAllBytes('path-to-certificate'))
+    ```
+
+1. Download all the files from [https://github.com/Azure/azure-devtestlab/tree/master/samples/DevTestLabs/GatewaySample/arm/gateway](https://github.com/Azure/azure-devtestlab/tree/master/samples/DevTestLabs/GatewaySample/arm/gateway). Copy all the files and *RDGatewayFedAuth.msi* to a blob container in a storage account.
+
+1. Open *azuredeploy.json* from [https://github.com/Azure/azure-devtestlab/tree/master/samples/DevTestLabs/GatewaySample/arm/gateway](https://github.com/Azure/azure-devtestlab/tree/master/samples/DevTestLabs/GatewaySample/arm/gateway), and fill out the following parameters:
+
+   - `adminUsername` - **Required**. Administrator user name for the gateway machines.
+   - `adminPassword` - **Required**. Password for the administrator account for the gateway machines.
+   - `instanceCount` - Number of gateway machines to create.
+   - `alwaysOn` - Whether to keep the created Azure Functions app in a warm state. Keeping the Azure Functions app on avoids delays when users first try to connect to their lab VMs, but has cost implications.
+   - `tokenLifetime` - The length of time, in HH:MM:SS format, that the created token remains valid.
+   - `sslCertificate` - **Required**. The Base64 encoding of the TLS/SSL certificate for the gateway machine.
+   - `sslCertificatePassword` - **Required**. The password of the TLS/SSL certificate for the gateway machine.
+   - `sslCertificateThumbprint` - **Required**. The certificate thumbprint for identification in the local certificate store of the TLS/SSL certificate.
+   - `signCertificate` - **Required**. The Base64 encoding of the signing certificate for the gateway machine.
+   - `signCertificatePassword` - **Required**. The password for the signing certificate for the gateway machine.
+   - `signCertificateThumbprint` - **Required**. The certificate thumbprint for identification in the local certificate store of the signing certificate.
+   - `_artifactsLocation` - **Required**. The URI where the artifacts this template requires are located. This value must be a fully qualified URI, not a relative path. The artifacts include other templates, PowerShell scripts, and the Remote Desktop Gateway pluggable authentication module, expected to be named *RDGatewayFedAuth.msi*, that supports token authentication.
+   - `_artifactsLocationSasToken` - **Required**. The shared access signature (SAS) token to access artifacts, if `_artifactsLocation` is an Azure storage account.
+
+1. Deploy *azuredeploy.json* by using the following Azure CLI command:
+
+ ```azurecli
+    az deployment group create --resource-group {resource-group} --template-file azuredeploy.json --parameters @azuredeploy.parameters.json --parameters _artifactsLocation="{storage-account-endpoint}/{container-name}" --parameters _artifactsLocationSasToken="?{sas-token}"
```
- Here are the descriptions of the parameters:
+ - Get the `{storage-account-endpoint}` by running
+ `az storage account show --name {storage-account-name} --query primaryEndpoints.blob`.
+
+ - Get the `{sas-token}` by running
+    `az storage container generate-sas --name {container-name} --account-name {storage-account-name} --https-only --permissions drlw --expiry {utc-expiration-date}`.
+
+ - `{storage-account-name}` is the name of the storage account that holds the files you uploaded.
+ - `{container-name}` is the container in the `{storage-account-name}` that holds the files you uploaded.
+ - `{utc-expiration-date}` is the date, in UTC, when the SAS token will expire and can no longer be used to access the storage account.
+
+1. Record the values for `gatewayFQDN` and `gatewayIP` from the template deployment output. Also save the value of the key for the newly created function, which you can find in the function app's [Application settings tab](../azure-functions/functions-how-to-use-azure-function-app-settings.md#settings).
+
+1. Configure DNS so that the FQDN of the TLS/SSL certificate directs to the `gatewayIP` IP address.
- - The {storage-account-endpoint} can be obtained by running `az storage account show --name {storage-acct-name} --query primaryEndpoints.blob`. The {storage-acct-name} is the name of the storage account that holds files that you uploaded.
- - The {container-name} is the name of the container in the {storage-acct-name} that holds files that you uploaded.
- - The {sas-token} can be obtained by running `az storage container generate-sas --name {container-name} --account-name {storage-acct-name} --https-only ΓÇôpermissions drlw ΓÇôexpiry {utc-expiration-date}`.
- - The {storage-acct-name} is the name of the storage account that holds files that you uploaded.
- - The {container-name} is the name of the container in the {storage-acct-name} that holds files that you uploaded.
- - The {utc-expiration-date} is the date, in UTC, at which the SAS token will expire and the SAS token can no longer be used to access the storage account.
+After you create the remote desktop gateway farm and update DNS, you can configure Azure DevTest Labs to use the gateway.
- Record the values for gatewayFQDN and gatewayIP from the template deployment output. You'll also need to save the value of the function key for the newly created function, which can be found in the [Function app settings](../azure-functions/functions-how-to-use-azure-function-app-settings.md) tab.
-5. Configure DNS so that FQDN of TLS/SSL cert directs to IP address of gatewayIP from previous step.
+## Configure the lab to use token authentication
- After the Remote Desktop Gateway farm is created and appropriate DNS updates are made, it's ready to be used by a lab in DevTest Labs. The **gateway hostname** and **gateway token secret** settings must be configured to use the gateway machine(s) you deployed.
+Before you update the lab settings, store the key for the authentication token function in the lab's key vault. You can get the function key value on the function's **Function Keys** page in the Azure portal.
- > [!NOTE]
- > If the lab machine uses private IPs, there must be a network path from the gateway machine to the lab machine, either through sharing the same virtual network or using a peered virtual network.
+To find the ID of the lab's key vault, run the following Azure CLI command:
+
+```azurecli
+az resource show --name {lab-name} --resource-type 'Microsoft.DevTestLab/labs' --resource-group {lab-resource-group-name} --query properties.vaultName
+```
+
+For more information on how to save a secret in a key vault, see [Add a secret to Key Vault](../key-vault/secrets/quick-create-portal.md#add-a-secret-to-key-vault). Record the secret name to use later. This value isn't the function key itself, but the name of the key vault secret that holds the function key.
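+
+For example, with Az PowerShell, you could store the function key as a secret along these lines. The vault name, secret name, and key value are placeholders; use the vault name that the previous command returns:
+
+```powershell
+# Store the function key in the lab's key vault under a name of your choosing.
+$functionKey = ConvertTo-SecureString '<function-key-value>' -AsPlainText -Force
+Set-AzKeyVaultSecret -VaultName '<lab-key-vault-name>' -Name 'GatewayTokenSecret' -SecretValue $functionKey
+```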
+
+To configure a lab's **Gateway hostname** and **Gateway token secret** to use token authentication with the gateway machine(s), follow these steps:
+
+1. On the lab's **Overview** page, select **Configuration and policies** from the left navigation.
+
+1. On the **Configuration and policies** page, select **Lab settings** from the **Settings** section of the left navigation.
+
+1. In the **Remote desktop** section:
+
+ - For the **Gateway hostname** field, enter the FQDN or IP address of the remote desktop services gateway machine or farm. This value must match the FQDN of the TLS/SSL certificate used on gateway machines.
+
+   - For **Gateway token**, enter the name of the key vault secret you recorded earlier, not the function key itself.
- Once both gateway and lab are configured, the connection file created when the lab user clicks on the **Connect** will automatically include information necessary to connect using token authentication.
+ ![Screenshot of Remote desktop options in Lab settings.](./media/configure-lab-remote-desktop-gateway/remote-desktop-options-in-lab-settings.png)
+
+1. Select **Save**.
+
+ > [!NOTE]
+ > By selecting **Save**, you agree to [Remote Desktop Gateway license terms](https://www.microsoft.com/licensing/product-licensing/products).
+
+Once you configure both the gateway and the lab, the RDP connection file created when the lab user selects **Connect** includes the necessary information to connect to the gateway and use token authentication.
+
+### Configure a lab via automation
+
+- [Set-DevTestLabGateway.ps1](https://github.com/Azure/azure-devtestlab/blob/master/samples/DevTestLabs/GatewaySample/tools/Set-DevTestLabGateway.ps1) is a sample PowerShell script to automatically set **Gateway hostname** and **Gateway token secret** settings.
+
+- The [Azure DevTest Labs GitHub repository](https://github.com/Azure/azure-devtestlab/tree/master/samples/DevTestLabs/GatewaySample/arm/lab) has sample ARM templates that create or update a lab with **Gateway hostname** and **Gateway token secret** settings.
+
+### Configure a network security group
+
+To further secure the lab, you can add a network security group (NSG) to the virtual network the lab VMs use. For instructions, see [Create, change, or delete a network security group](../virtual-network/manage-network-security-group.md).
+
+For example, an NSG could allow only traffic that first goes through the gateway to reach lab VMs. The rule source is the IP address of the gateway machine or load balancer for the gateway farm.
+
+![Screenshot of a Network security group rule.](./media/configure-lab-remote-desktop-gateway/network-security-group-rules.png)
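+
+As an illustration, you could define a rule like the one in the screenshot with Az PowerShell. The source IP and priority below are placeholder values:
+
+```powershell
+# Allow RDP to lab VMs only when traffic originates from the gateway's IP.
+New-AzNetworkSecurityRuleConfig -Name 'AllowRdpFromGateway' `
+    -Access Allow -Protocol Tcp -Direction Inbound -Priority 100 `
+    -SourceAddressPrefix '203.0.113.4' -SourcePortRange '*' `
+    -DestinationAddressPrefix 'VirtualNetwork' -DestinationPortRange '3389'
+```
+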
## Next steps
-See the following article to learn more about Remote Desktop
+
+- [Remote Desktop Services documentation](/windows-server/remote/remote-desktop-services/Welcome-to-rds)
+- [Deploy your remote desktop environment](/windows-server/remote/remote-desktop-services/rds-deploy-infrastructure)
+- [System Center documentation](/system-center/)
digital-twins Concepts Apis Sdks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-apis-sdks.md
API metrics such as requests, latency, and failure rate can be viewed in the [Az
From the portal homepage, search for your Azure Digital Twins instance to pull up its details. Select the **Metrics** option from the Azure Digital Twins instance's menu to bring up the **Metrics** page. From here, you can view the metrics for your instance and create custom views.
digital-twins Concepts High Availability Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-high-availability-disaster-recovery.md
To view Service Health events...
:::image type="content" source="media/concepts-high-availability-disaster-recovery/issue-updates.png" alt-text="Screenshot of the Azure portal showing the 'Health History' page with the 'Issue updates' tab highlighted. The tab displays the status of entries." lightbox="media/concepts-high-availability-disaster-recovery/issue-updates.png":::
-The information displayed in this tool isn't specific to one Azure Digital instance. After using Service Health to understand what's going with the Azure Digital Twins service in a certain region or subscription, you can take monitoring a step further by using the [Resource health tool](troubleshoot-resource-health.md) to drill down into specific instances and see whether they're affected.
+The information displayed in this tool isn't specific to one Azure Digital Twins instance. After using Service Health to understand what's going on with the Azure Digital Twins service in a certain region or subscription, you can take monitoring a step further by using [Azure Resource Health](how-to-monitor-resource-health.md) to drill down into specific instances and see whether they're affected.
## Best practices
digital-twins How To Manage Routes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-manage-routes.md
Routing metrics such as count, latency, and failure rate can be viewed in the [A
From the portal homepage, search for your Azure Digital Twins instance to pull up its details. Select the **Metrics** option from the Azure Digital Twins instance's navigation menu on the left to bring up the **Metrics** page. From here, you can view the metrics for your instance and create custom views.
-For more on viewing Azure Digital Twins metrics, see [Troubleshooting: Metrics](troubleshoot-metrics.md).
+For more on viewing Azure Digital Twins metrics, see [Monitor with metrics](how-to-monitor-metrics.md).
## Next steps
digital-twins How To Monitor Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-monitor-alerts.md
+
+# Mandatory fields.
+ Title: Monitor with alerts
+
+description: Learn how to troubleshoot Azure Digital Twins by setting up alerts based on service metrics.
+ Last updated : 03/10/2022
+# Monitor Azure Digital Twins with alerts
+
+In this article, you'll learn how to set up *alerts* in the [Azure portal](https://portal.azure.com). These alerts will notify you when configurable conditions you've defined based on the metrics of your Azure Digital Twins instance are met, allowing you to take important actions.
+
+Azure Digital Twins collects [metrics](how-to-monitor-metrics.md) for your service instance that give information about the state of your resources. You can use these metrics to assess the overall health of the Azure Digital Twins service and the resources connected to it.
+
+Alerts proactively notify you when important conditions are found in your metrics data. They allow you to identify and address issues before the users of your system notice them. You can read more about alerts in [Overview of alerts in Microsoft Azure](../azure-monitor/alerts/alerts-overview.md).
+
+## Turn on alerts
+
+Here's how to enable alerts for your Azure Digital Twins instance:
+
+1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to your Azure Digital Twins instance. You can find it by typing its name into the portal search bar.
+
+2. Select **Alerts** from the menu, then **+ New alert rule**.
+
+ :::image type="content" source="media/how-to-monitor-alerts/alerts-pre.png" alt-text="Screenshot of the Azure portal showing the button to create a new alert rule in the Alerts section of an Azure Digital Twin instance." lightbox="media/how-to-monitor-alerts/alerts-pre.png":::
+
+3. On the **Create alert rule** page that follows, you can follow the prompts to define conditions, actions to be triggered, and alert details.
+ * **Scope** details should fill automatically with the details for your instance
+ * You'll define **Condition** and **Action group** details to customize alert triggers and responses. For more information about this process, see the [Select conditions](#select-conditions) section later in this article.
+ * In the **Alert rule details** section, enter a name and optional description for your rule.
+ - You can select the **Enable alert rule upon creation** checkbox if you want the alert to become active as soon as it's created.
+ - You can select the **Automatically resolve alerts** checkbox if you want to resolve the alert when the condition isn't met anymore.
+ - This section is also where you select a **subscription**, **resource group**, and **Severity** level.
+
+4. Select the **Create alert rule** button to create your alert rule.
+
+ :::image type="content" source="media/how-to-monitor-alerts/create-alert-rule.png" alt-text="Screenshot of the Azure portal showing the Create Alert Rule page with sections for scope, condition, action group, and alert rule details." lightbox="media/how-to-monitor-alerts/create-alert-rule.png":::
+
+For a guided walkthrough of filling out these fields, see [Overview of alerts in Microsoft Azure](../azure-monitor/alerts/alerts-overview.md). Below are some examples of what the steps will look like for Azure Digital Twins.
+
+## Select conditions
+
+Here's an excerpt from the **Select condition** process illustrating what types of alert signals are available for Azure Digital Twins. On this page you can filter the type of signal, and select the signal that you want from a list.
+
+After selecting a signal, you'll be asked to configure the logic of the alert. You can filter on a dimension, set a threshold value for your alert, and set the frequency of checks for the condition. Here's an example of setting up an alert for when the average Routing Failure Rate metric goes above 5%.
+
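+The same alert can also be scripted with Az PowerShell. The resource IDs and names below are placeholders, and you can attach an action group with the `-ActionGroupId` parameter:
+
+```powershell
+# Alert when the average Routing Failure Rate exceeds 5% over 5-minute windows.
+$criteria = New-AzMetricAlertRuleV2Criteria -MetricName 'RoutingFailureRate' `
+    -TimeAggregation Average -Operator GreaterThan -Threshold 5
+
+Add-AzMetricAlertRuleV2 -Name 'routing-failure-rate-alert' `
+    -ResourceGroupName '<resource-group>' `
+    -TargetResourceId '/subscriptions/<sub-id>/resourceGroups/<resource-group>/providers/Microsoft.DigitalTwins/digitalTwinsInstances/<instance-name>' `
+    -WindowSize (New-TimeSpan -Minutes 5) -Frequency (New-TimeSpan -Minutes 5) `
+    -Condition $criteria -Severity 3
+```
+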
+## Verify success
+
+After you set up alerts, they show up back on the **Alerts** page for your instance.
+
+## Next steps
+
+* For more information about alerts with Azure Monitor, see [Overview of alerts in Microsoft Azure](../azure-monitor/alerts/alerts-overview.md).
+* For information about the Azure Digital Twins metrics, see [Monitor with metrics](how-to-monitor-metrics.md).
+* To see how to enable diagnostics logging for your Azure Digital Twins metrics, see [Monitor with diagnostics logs](how-to-monitor-diagnostics.md).
digital-twins How To Monitor Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-monitor-diagnostics.md
+
+# Mandatory fields.
+ Title: Monitor with diagnostic logs
+
+description: In this article, learn how to enable logging with diagnostics settings and query the logs for immediate viewing. Also, learn about the log categories and their schemas.
+ Last updated : 03/10/2022
+# Monitor Azure Digital Twins with diagnostics logs
+
+This article shows you how to configure diagnostic settings in the [Azure portal](https://portal.azure.com), including what types of logs to collect and where to store them (such as Log Analytics or a storage account of your choice). Then, you can query the logs to quickly gather custom insights.
+
+Azure Digital Twins can collect *logs* for your service instance to monitor its performance, access, and other data. You can use these logs to get an idea of what's happening in your Azure Digital Twins instance, and to analyze the root causes of issues without needing to contact Azure support.
+
+This article also contains information about all the log categories that Azure Digital Twins can collect, and their schemas.
+
+## Turn on diagnostic settings
+
+Turn on diagnostic settings to start collecting logs on your Azure Digital Twins instance. You can also choose the destination where the exported logs should be stored. Here's how to enable diagnostic settings for your Azure Digital Twins instance.
+
+1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to your Azure Digital Twins instance. You can find it by typing its name into the portal search bar.
+
+2. Select **Diagnostic settings** from the menu, then **Add diagnostic setting**.
+
+ :::image type="content" source="media/how-to-monitor-diagnostics/diagnostic-settings.png" alt-text="Screenshot showing the diagnostic settings page in the Azure portal and button to add." lightbox="media/how-to-monitor-diagnostics/diagnostic-settings.png":::
+
+3. On the page that follows, fill in the following values:
+ * **Diagnostic setting name**: Give the diagnostic settings a name.
+ * **Category details**: Choose which operations you want to monitor, and check the boxes to enable diagnostics for those operations. The operations that diagnostic settings can report on are:
+ - DigitalTwinsOperation
+ - EventRoutesOperation
+ - ModelsOperation
+ - QueryOperation
+ - AllMetrics
+
+ For more details about these categories and the information they contain, see the [Log categories](#log-categories) section below.
+ * **Destination details**: Choose where you want to send the logs. You can select any combination of the three options:
+ - Send to Log Analytics
+ - Archive to a storage account
+ - Stream to an event hub
+
+ You may be asked to fill in more details if they're necessary for your destination selection.
+
+4. Save the new settings.
+
+ :::image type="content" source="media/how-to-monitor-diagnostics/diagnostic-settings-details.png" alt-text="Screenshot showing the diagnostic setting page in the Azure portal where the user has filled in a diagnostic setting information." lightbox="media/how-to-monitor-diagnostics/diagnostic-settings-details.png":::
+
+New settings take effect in about 10 minutes. After that, logs appear in the configured target back on the **Diagnostic settings** page for your instance.
+
+For more detailed information on diagnostic settings and their setup options, you can visit [Create diagnostic settings to send platform logs and metrics to different destinations](../azure-monitor/essentials/diagnostic-settings.md).
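+
+If you prefer scripting, you can create a similar diagnostic setting with Az PowerShell. The resource ID and workspace ID below are placeholders, and the available parameters vary across Az.Monitor versions:
+
+```powershell
+# Send Azure Digital Twins query logs to a Log Analytics workspace.
+Set-AzDiagnosticSetting -Name 'adt-diagnostics' `
+    -ResourceId '/subscriptions/<sub-id>/resourceGroups/<resource-group>/providers/Microsoft.DigitalTwins/digitalTwinsInstances/<instance-name>' `
+    -WorkspaceId '/subscriptions/<sub-id>/resourceGroups/<resource-group>/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>' `
+    -Category 'QueryOperation' -Enabled $true
+```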
+
+## View and query logs
+
+After configuring storage details of your Azure Digital Twins logs, you can write *custom queries* for them to generate insights and troubleshoot issues. The service also provides a few example queries that can help you get started by addressing common questions that customers may have about their instances.
+
+Here's how to query the logs for your instance.
+
+1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to your Azure Digital Twins instance. You can find it by typing its name into the portal search bar.
+
+2. Select **Logs** from the menu to open the log query page. The page opens to a window called **Queries**.
+
+ :::image type="content" source="media/how-to-monitor-diagnostics/logs.png" alt-text="Screenshot showing the Logs page for an Azure Digital Twins instance in the Azure portal with the Queries window overlaid, showing prebuilt queries." lightbox="media/how-to-monitor-diagnostics/logs.png":::
+
+ These queries are prebuilt examples written for various logs. You can select one of the queries to load it into the query editor and run it to see these logs for your instance.
+
+ You can also close the **Queries** window without running anything to go straight to the query editor page, where you can write or edit custom query code.
+
+3. After exiting the **Queries** window, you'll see the main query editor page. Here you can view and edit the text of the example queries, or write your own queries from scratch.
+ :::image type="content" source="media/how-to-monitor-diagnostics/logs-query.png" alt-text="Screenshot showing the Logs page for an Azure Digital Twins instance in the Azure portal. It includes a list of logs, query code, and Queries History." lightbox="media/how-to-monitor-diagnostics/logs-query.png":::
+
+    In the left pane:
+ - The **Tables** tab shows the different Azure Digital Twins [log categories](#log-categories) that are available to use in your queries.
+ - The **Queries** tab contains the example queries that you can load into the editor.
+ - The **Filter** tab lets you customize a filtered view of the data that the query returns.
+
+For more detailed information on log queries and how to write them, you can visit [Overview of log queries in Azure Monitor](../azure-monitor/logs/log-query-overview.md).
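+
+You can also run these queries outside the portal. For example, the following Az PowerShell sketch runs a Kusto query against the backing Log Analytics workspace; the workspace ID is a placeholder:
+
+```powershell
+# Count failed API calls per operation over the query time range.
+$kusto = 'ADTQueryOperation | where ResultType != "Success" | summarize failures = count() by OperationName'
+Invoke-AzOperationalInsightsQuery -WorkspaceId '<workspace-guid>' -Query $kusto
+```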
+
+## Log categories
+
+Here are more details about the categories of logs that Azure Digital Twins collects.
+
+| Log category | Description |
+| | |
+| ADTModelsOperation | Log all API calls related to Models |
+| ADTQueryOperation | Log all API calls related to Queries |
+| ADTEventRoutesOperation | Log all API calls related to Event Routes and egress of events from Azure Digital Twins to an endpoint service like Event Grid, Event Hubs, and Service Bus |
+| ADTDigitalTwinsOperation | Log all API calls related to individual twins |
+
+Each log category consists of operations of write, read, delete, and action. These categories map to REST API calls as follows:
+
+| Event type | REST API operations |
+| | |
+| Write | PUT and PATCH |
+| Read | GET |
+| Delete | DELETE |
+| Action | POST |
+
+Here's a comprehensive list of the operations and corresponding [Azure Digital Twins REST API calls](/rest/api/azure-digitaltwins/) that are logged in each category.
+
+>[!NOTE]
+> Each log category contains several operations/REST API calls. In the table below, each log category maps to all operations/REST API calls underneath it until the next log category is listed.
+
+| Log category | Operation | REST API calls and other events |
+| | | |
+| ADTModelsOperation | Microsoft.DigitalTwins/models/write | Digital Twin Models Update API |
+| | Microsoft.DigitalTwins/models/read | Digital Twin Models Get By ID and List APIs |
+| | Microsoft.DigitalTwins/models/delete | Digital Twin Models Delete API |
+| | Microsoft.DigitalTwins/models/action | Digital Twin Models Add API |
+| ADTQueryOperation | Microsoft.DigitalTwins/query/action | Query Twins API |
+| ADTEventRoutesOperation | Microsoft.DigitalTwins/eventroutes/write | Event Routes Add API |
+| | Microsoft.DigitalTwins/eventroutes/read | Event Routes Get By ID and List APIs |
+| | Microsoft.DigitalTwins/eventroutes/delete | Event Routes Delete API |
+| | Microsoft.DigitalTwins/eventroutes/action | Failure while attempting to publish events to an endpoint service (not an API call) |
+| ADTDigitalTwinsOperation | Microsoft.DigitalTwins/digitaltwins/write | Digital Twins Add, Add Relationship, Update, Update Component |
+| | Microsoft.DigitalTwins/digitaltwins/read | Digital Twins Get By ID, Get Component, Get Relationship by ID, List Incoming Relationships, List Relationships |
+| | Microsoft.DigitalTwins/digitaltwins/delete | Digital Twins Delete, Delete Relationship |
+| | Microsoft.DigitalTwins/digitaltwins/action | Digital Twins Send Component Telemetry, Send Telemetry |
+
+## Log schemas
+
+Each log category has a schema that defines how events in that category are reported. Each individual log entry is stored as text and formatted as a JSON blob. The fields in the log and example JSON bodies are provided for each log type below.
+
+`ADTDigitalTwinsOperation`, `ADTModelsOperation`, and `ADTQueryOperation` use a consistent API log schema. `ADTEventRoutesOperation` extends the schema to contain an `endpointName` field in properties.
+
+### API log schemas
+
+This log schema is consistent for `ADTDigitalTwinsOperation`, `ADTModelsOperation`, and `ADTQueryOperation`. The same schema is also used for `ADTEventRoutesOperation`, except for the `Microsoft.DigitalTwins/eventroutes/action` operation name (for more information about that schema, see the next section, [Egress log schemas](#egress-log-schemas)).
+
+The schema contains information pertinent to API calls to an Azure Digital Twins instance.
+
+Here are the field and property descriptions for API logs.
+
+| Field name | Data type | Description |
+|--||-|
+| `Time` | DateTime | The date and time that this event occurred, in UTC |
+| `ResourceId` | String | The Azure Resource Manager Resource ID for the resource where the event took place |
+| `OperationName` | String | The type of action being performed during the event |
+| `OperationVersion` | String | The API Version used during the event |
+| `Category` | String | The type of resource being emitted |
+| `ResultType` | String | Outcome of the event |
+| `ResultSignature` | String | Http status code for the event |
+| `ResultDescription` | String | Additional details about the event |
+| `DurationMs` | String | How long it took to perform the event in milliseconds |
+| `CallerIpAddress` | String | A masked source IP address for the event |
+| `CorrelationId` | Guid | Customer provided unique identifier for the event |
+| `ApplicationId` | Guid | Application ID used in bearer authorization |
+| `Level` | Int | The logging severity of the event |
+| `Location` | String | The region where the event took place |
+| `RequestUri` | Uri | The endpoint used during the event |
+| `TraceId` | String | `TraceId`, as part of [W3C's Trace Context](https://www.w3.org/TR/trace-context/). The ID of the whole trace used to uniquely identify a distributed trace across systems. |
+| `SpanId` | String | `SpanId` as part of [W3C's Trace Context](https://www.w3.org/TR/trace-context/). The ID of this request in the trace. |
+| `ParentId` | String | `ParentId` as part of [W3C's Trace Context](https://www.w3.org/TR/trace-context/). A request without a parent ID is the root of the trace. |
+| `TraceFlags` | String | `TraceFlags` as part of [W3C's Trace Context](https://www.w3.org/TR/trace-context/). Controls tracing flags such as sampling, trace level, and so on. |
+| `TraceState` | String | `TraceState` as part of [W3C's Trace Context](https://www.w3.org/TR/trace-context/). Additional vendor-specific trace identification information to span across different distributed tracing systems. |
+
+Below are example JSON bodies for these types of logs.
+
+#### ADTDigitalTwinsOperation
+
+```json
+{
+ "time": "2020-03-14T21:11:14.9918922Z",
+ "resourceId": "/SUBSCRIPTIONS/BBED119E-28B8-454D-B25E-C990C9430C8F/RESOURCEGROUPS/MYRESOURCEGROUP/PROVIDERS/MICROSOFT.DIGITALTWINS/DIGITALTWINSINSTANCES/MYINSTANCENAME",
+ "operationName": "Microsoft.DigitalTwins/digitaltwins/write",
+ "operationVersion": "2020-10-31",
+ "category": "DigitalTwinOperation",
+ "resultType": "Success",
+ "resultSignature": "200",
+ "resultDescription": "",
+ "durationMs": 8,
+ "callerIpAddress": "13.68.244.*",
+ "correlationId": "2f6a8e64-94aa-492a-bc31-16b9f0b16ab3",
+ "identity": {
+ "claims": {
+ "appId": "872cd9fa-d31f-45e0-9eab-6e460a02d1f1"
+ }
+ },
+ "level": "4",
+ "location": "southcentralus",
+ "uri": "https://myinstancename.api.scus.digitaltwins.azure.net/digitaltwins/factory-58d81613-2e54-4faa-a930-d980e6e2a884?api-version=2020-10-31",
+ "properties": {},
+ "traceContext": {
+ "traceId": "95ff77cfb300b04f80d83e64d13831e7",
+ "spanId": "b630da57026dd046",
+ "parentId": "9f0de6dadae85945",
+ "traceFlags": "01",
+ "tracestate": "k1=v1,k2=v2"
+ }
+}
+```
+
+#### ADTModelsOperation
+
+```json
+{
+ "time": "2020-10-29T21:12:24.2337302Z",
+ "resourceId": "/SUBSCRIPTIONS/BBED119E-28B8-454D-B25E-C990C9430C8F/RESOURCEGROUPS/MYRESOURCEGROUP/PROVIDERS/MICROSOFT.DIGITALTWINS/DIGITALTWINSINSTANCES/MYINSTANCENAME",
+ "operationName": "Microsoft.DigitalTwins/models/write",
+ "operationVersion": "2020-10-31",
+ "category": "ModelsOperation",
+ "resultType": "Success",
+ "resultSignature": "201",
+ "resultDescription": "",
+ "durationMs": "80",
+ "callerIpAddress": "13.68.244.*",
+ "correlationId": "9dcb71ea-bb6f-46f2-ab70-78b80db76882",
+ "identity": {
+ "claims": {
+ "appId": "872cd9fa-d31f-45e0-9eab-6e460a02d1f1"
+ }
+ },
+ "level": "4",
+ "location": "southcentralus",
+ "uri": "https://myinstancename.api.scus.digitaltwins.azure.net/Models?api-version=2020-10-31",
+ "properties": {},
+ "traceContext": {
+ "traceId": "95ff77cfb300b04f80d83e64d13831e7",
+ "spanId": "b630da57026dd046",
+ "parentId": "9f0de6dadae85945",
+ "traceFlags": "01",
+ "tracestate": "k1=v1,k2=v2"
+ }
+}
+```
+
+#### ADTQueryOperation
+
+```json
+{
+ "time": "2020-12-04T21:11:44.1690031Z",
+ "resourceId": "/SUBSCRIPTIONS/BBED119E-28B8-454D-B25E-C990C9430C8F/RESOURCEGROUPS/MYRESOURCEGROUP/PROVIDERS/MICROSOFT.DIGITALTWINS/DIGITALTWINSINSTANCES/MYINSTANCENAME",
+ "operationName": "Microsoft.DigitalTwins/query/action",
+ "operationVersion": "2020-10-31",
+ "category": "QueryOperation",
+ "resultType": "Success",
+ "resultSignature": "200",
+ "resultDescription": "",
+ "durationMs": "314",
+ "callerIpAddress": "13.68.244.*",
+ "correlationId": "1ee2b6e9-3af4-4873-8c7c-1a698b9ac334",
+ "identity": {
+ "claims": {
+ "appId": "872cd9fa-d31f-45e0-9eab-6e460a02d1f1"
+ }
+ },
+ "level": "4",
+ "location": "southcentralus",
+ "uri": "https://myinstancename.api.scus.digitaltwins.azure.net/query?api-version=2020-10-31",
+ "properties": {},
+ "traceContext": {
+ "traceId": "95ff77cfb300b04f80d83e64d13831e7",
+ "spanId": "b630da57026dd046",
+ "parentId": "9f0de6dadae85945",
+ "traceFlags": "01",
+ "tracestate": "k1=v1,k2=v2"
+ }
+}
+```
+
+#### ADTEventRoutesOperation
+
+Here's an example JSON body for an `ADTEventRoutesOperation` that isn't of `Microsoft.DigitalTwins/eventroutes/action` type (for more information about that schema, see the next section, [Egress log schemas](#egress-log-schemas)).
+
+```json
+ {
+ "time": "2020-10-30T22:18:38.0708705Z",
+ "resourceId": "/SUBSCRIPTIONS/BBED119E-28B8-454D-B25E-C990C9430C8F/RESOURCEGROUPS/MYRESOURCEGROUP/PROVIDERS/MICROSOFT.DIGITALTWINS/DIGITALTWINSINSTANCES/MYINSTANCENAME",
+ "operationName": "Microsoft.DigitalTwins/eventroutes/write",
+ "operationVersion": "2020-10-31",
+ "category": "EventRoutesOperation",
+ "resultType": "Success",
+ "resultSignature": "204",
+ "resultDescription": "",
+ "durationMs": 42,
+ "callerIpAddress": "212.100.32.*",
+ "correlationId": "7f73ab45-14c0-491f-a834-0827dbbf7f8e",
+ "identity": {
+ "claims": {
+ "appId": "872cd9fa-d31f-45e0-9eab-6e460a02d1f1"
+ }
+ },
+ "level": "4",
+ "location": "southcentralus",
+ "uri": "https://myinstancename.api.scus.digitaltwins.azure.net/EventRoutes/egressRouteForEventHub?api-version=2020-10-31",
+ "properties": {},
+ "traceContext": {
+ "traceId": "95ff77cfb300b04f80d83e64d13831e7",
+ "spanId": "b630da57026dd046",
+ "parentId": "9f0de6dadae85945",
+ "traceFlags": "01",
+ "tracestate": "k1=v1,k2=v2"
+ }
+ },
+```
+
+### Egress log schemas
+
+The following example is the schema for `ADTEventRoutesOperation` logs specific to the `Microsoft.DigitalTwins/eventroutes/action` operation name. These contain details related to exceptions and the API operations around egress endpoints connected to an Azure Digital Twins instance.
+
+|Field name | Data type | Description |
+|--||-|
+| `Time` | DateTime | The date and time that this event occurred, in UTC |
+| `ResourceId` | String | The Azure Resource Manager Resource ID for the resource where the event took place |
+| `OperationName` | String | The type of action being performed during the event |
+| `Category` | String | The type of resource being emitted |
+| `ResultDescription` | String | Additional details about the event |
+| `CorrelationId` | Guid | Customer provided unique identifier for the event |
+| `ApplicationId` | Guid | Application ID used in bearer authorization |
+| `Level` | Int | The logging severity of the event |
+| `Location` | String | The region where the event took place |
+| `TraceId` | String | `TraceId`, as part of [W3C's Trace Context](https://www.w3.org/TR/trace-context/). The ID of the whole trace used to uniquely identify a distributed trace across systems. |
+| `SpanId` | String | `SpanId` as part of [W3C's Trace Context](https://www.w3.org/TR/trace-context/). The ID of this request in the trace. |
+| `ParentId` | String | `ParentId` as part of [W3C's Trace Context](https://www.w3.org/TR/trace-context/). A request without a parent ID is the root of the trace. |
+| `TraceFlags` | String | `TraceFlags` as part of [W3C's Trace Context](https://www.w3.org/TR/trace-context/). Controls tracing flags such as sampling, trace level, and so on. |
+| `TraceState` | String | `TraceState` as part of [W3C's Trace Context](https://www.w3.org/TR/trace-context/). Additional vendor-specific trace identification information to span across different distributed tracing systems. |
+| `EndpointName` | String | The name of the egress endpoint created in Azure Digital Twins |
+
+Below are example JSON bodies for these types of logs.
+
+#### ADTEventRoutesOperation for Microsoft.DigitalTwins/eventroutes/action
+
+Here's an example JSON body for an `ADTEventRoutesOperation` that is of the `Microsoft.DigitalTwins/eventroutes/action` type.
+
+```json
+{
+ "time": "2020-11-05T22:18:38.0708705Z",
+ "resourceId": "/SUBSCRIPTIONS/BBED119E-28B8-454D-B25E-C990C9430C8F/RESOURCEGROUPS/MYRESOURCEGROUP/PROVIDERS/MICROSOFT.DIGITALTWINS/DIGITALTWINSINSTANCES/MYINSTANCENAME",
+ "operationName": "Microsoft.DigitalTwins/eventroutes/action",
+ "operationVersion": "",
+ "category": "EventRoutesOperation",
+ "resultType": "",
+ "resultSignature": "",
+ "resultDescription": "Unable to send EventHub message to [myPath] for event Id [f6f45831-55d0-408b-8366-058e81ca6089].",
+ "durationMs": -1,
+ "callerIpAddress": "",
+ "correlationId": "7f73ab45-14c0-491f-a834-0827dbbf7f8e",
+ "identity": {
+ "claims": {
+ "appId": "872cd9fa-d31f-45e0-9eab-6e460a02d1f1"
+ }
+ },
+ "level": "4",
+ "location": "southcentralus",
+ "uri": "",
+ "properties": {
+ "endpointName": "myEventHub"
+ },
+ "traceContext": {
+ "traceId": "95ff77cfb300b04f80d83e64d13831e7",
+ "spanId": "b630da57026dd046",
+ "parentId": "9f0de6dadae85945",
+ "traceFlags": "01",
+ "tracestate": "k1=v1,k2=v2"
+ }
+},
+```
+
+## Next steps
+
+* For more information about configuring diagnostics, see [Collect and consume log data from your Azure resources](../azure-monitor/essentials/platform-logs-overview.md).
+* For information about the Azure Digital Twins metrics, see [Monitor with metrics](how-to-monitor-metrics.md).
+* To see how to enable alerts for your Azure Digital Twins metrics, see [Monitor with alerts](how-to-monitor-alerts.md).
digital-twins How To Monitor Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-monitor-metrics.md
+
+# Mandatory fields.
+ Title: Monitor with metrics
+
+description: Learn how to view Azure Digital Twins metrics in Azure Monitor to troubleshoot and oversee your instance.
+ Last updated : 03/10/2022
+# Optional fields. Don't forget to remove # if you need a field.
+#
+#
+#
+
+# Monitor Azure Digital Twins with metrics
+
+The metrics described in this article give you information about the state of Azure Digital Twins resources in your Azure subscription. Azure Digital Twins metrics help you assess the overall health of the Azure Digital Twins service and the resources connected to it. These user-facing statistics help you see what's going on with your Azure Digital Twins instance and analyze the root causes of issues without needing to contact Azure support.
+
+Metrics are enabled by default. You can view Azure Digital Twins metrics from the [Azure portal](https://portal.azure.com).
+
+## View the metrics
+
+1. Create an Azure Digital Twins instance. You can find instructions on how to set up an Azure Digital Twins instance in [Set up an instance and authentication](how-to-set-up-instance-portal.md).
+
+2. Find your Azure Digital Twins instance in the [Azure portal](https://portal.azure.com) (you can open the page for it by typing its name into the portal search bar).
+
+ From the instance's menu, select **Metrics**.
+
+ :::image type="content" source="media/how-to-monitor-metrics/azure-digital-twins-metrics.png" alt-text="Screenshot showing the metrics page for Azure Digital Twins in the Azure portal.":::
+
+ This page displays the metrics for your Azure Digital Twins instance. You can also create custom views of your metrics by selecting the ones you want to see from the list.
+
+3. You can choose to send your metrics data to an Event Hubs endpoint or an Azure Storage account by selecting **Diagnostics settings** from the menu, then **Add diagnostic setting**.
+
+ :::image type="content" source="media/how-to-monitor-diagnostics/diagnostic-settings.png" alt-text="Screenshot showing the diagnostic settings page and button to add in the Azure portal.":::
+
+ For more information about this process, see [Monitor with diagnostics logs](how-to-monitor-diagnostics.md).
+
+4. You can choose to set up alerts for your metrics data by selecting **Alerts** from the menu, then **+ New alert rule**.
+ :::image type="content" source="media/how-to-monitor-alerts/alerts-pre.png" alt-text="Screenshot showing the Alerts page and button to add in the Azure portal.":::
+
+ For more information about this process, see [Monitor with alerts](how-to-monitor-alerts.md).
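+
+You can also retrieve metric values programmatically. For example, here's a minimal Az PowerShell sketch; the resource ID is a placeholder:
+
+```powershell
+# Retrieve the total API request count for an Azure Digital Twins instance.
+$resourceId = '/subscriptions/<sub-id>/resourceGroups/<resource-group>/providers/Microsoft.DigitalTwins/digitalTwinsInstances/<instance-name>'
+Get-AzMetric -ResourceId $resourceId -MetricName 'ApiRequests' -AggregationType Total
+```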
+
+## List of metrics
+
+Azure Digital Twins provides several metrics to give you an overview of the health of your instance and its associated resources. You can also combine information from multiple metrics to paint a bigger picture of the state of your instance.
+
+The following tables describe the metrics tracked by each Azure Digital Twins instance, and how each metric relates to the overall status of your instance.
+
+#### Metrics for tracking service limits
+
+You can configure these metrics to track when you're approaching a [published service limit](reference-service-limits.md#functional-limits) for some aspect of your solution.
+
+To set up tracking, use the [alerts](how-to-monitor-alerts.md) feature in Azure Monitor. You can define thresholds for these metrics so that you receive an alert when a metric reaches a certain percentage of its published limit.
+
+| Metric | Metric display name | Unit | Aggregation type| Description | Dimensions |
+| | | | | | |
+| TwinCount | Twin Count (Preview) | Count | Total | Total number of twins in the Azure Digital Twins instance. Use this metric to determine if you're approaching the [service limit](reference-service-limits.md#functional-limits) for max number of twins allowed per instance. | None |
+| ModelCount | Model Count (Preview) | Count | Total | Total number of models in the Azure Digital Twins instance. Use this metric to determine if you're approaching the [service limit](reference-service-limits.md#functional-limits) for max number of models allowed per instance. | None |
+
+#### API request metrics
+
+Metrics having to do with API requests:
+
+| Metric | Metric display name | Unit | Aggregation type| Description | Dimensions |
+| | | | | | |
+| ApiRequests | API Requests | Count | Total | The number of API Requests made for Digital Twins read, write, delete, and query operations. | Authentication, <br>Operation, <br>Protocol, <br>Status Code, <br>Status Code Class, <br>Status Text |
+| ApiRequestsFailureRate | API Requests Failure Rate | Percent | Average | The percentage of API requests that the service receives for your instance that give an internal error (500) response code for Digital Twins read, write, delete, and query operations. | Authentication, <br>Operation, <br>Protocol, <br>Status Code, <br>Status Code Class, <br>Status Text
+| ApiRequestsLatency | API Requests Latency | Milliseconds | Average | The response time for API requests. This value refers to the time from when the request is received by Azure Digital Twins until the service sends a success/fail result for Digital Twins read, write, delete, and query operations. | Authentication, <br>Operation, <br>Protocol |
+
+#### Billing metrics
+
+The following metrics relate to billing:
+
+| Metric | Metric display name | Unit | Aggregation type| Description | Dimensions |
+| | | | | | |
+| BillingApiOperations | Billing API Operations | Count | Total | Billing metric for the count of all API requests made against the Azure Digital Twins service. | Meter ID |
+| BillingMessagesProcessed | Billing Messages Processed | Count | Total | Billing metric for the number of messages sent out from Azure Digital Twins to external endpoints.<br><br>To be considered a single message for billing purposes, a payload must be no larger than 1 KB. Payloads larger than this limit will be counted as additional messages in 1 KB increments (so a message between 1 KB and 2 KB will be counted as 2 messages, between 2 KB and 3 KB will be 3 messages, and so on).<br>This restriction also applies to responses, so a call that returns 1.5 KB in the response body, for example, will be billed as 2 operations. | Meter ID |
+| BillingQueryUnits | Billing Query Units | Count | Total | The number of Query Units, an internally computed measure of service resource usage, consumed to execute queries. There's also a helper API available for measuring Query Units: [QueryChargeHelper Class](/dotnet/api/azure.digitaltwins.core.querychargehelper?view=azure-dotnet&preserve-view=true) | Meter ID |
+
+For more information on the way Azure Digital Twins is billed, see [Azure Digital Twins pricing](https://azure.microsoft.com/pricing/details/digital-twins/).
+
+#### Ingress metrics
+
+The following metrics relate to data ingress:
+
+| Metric | Metric display name | Unit | Aggregation type| Description | Dimensions |
+| | | | | | |
+| IngressEvents | Ingress Events | Count | Total | The number of incoming telemetry events into Azure Digital Twins. | Result |
+| IngressEventsFailureRate | Ingress Events Failure Rate | Percent | Average | The percentage of incoming telemetry events for which the service returns an internal error (500) response code. | Result |
+| IngressEventsLatency | Ingress Events Latency | Milliseconds | Average | The time from when an event arrives to when it's ready to be egressed by Azure Digital Twins, at which point the service sends a success/fail result. | Result |
+
+#### Routing metrics
+
+The following metrics relate to routing:
+
+| Metric | Metric display name | Unit | Aggregation type| Description | Dimensions |
+| | | | | | |
+| MessagesRouted | Messages Routed | Count | Total | The number of messages routed to an endpoint Azure service such as Event Hubs, Service Bus, or Event Grid. | Endpoint Type, <br>Result |
+| RoutingFailureRate | Routing Failure Rate | Percent | Average | The percentage of events that result in an error as they're routed from Azure Digital Twins to an endpoint Azure service such as Event Hubs, Service Bus, or Event Grid. | Endpoint Type, <br>Result |
+| RoutingLatency | Routing Latency | Milliseconds | Average | Time elapsed between an event getting routed from Azure Digital Twins to when it's posted to the endpoint Azure service such as Event Hubs, Service Bus, or Event Grid. | Endpoint Type, <br>Result |
+
+## Dimensions
+
+Dimensions help identify more details about the metrics. Some metrics, such as the routing metrics, provide information per endpoint type. The table below lists the possible values for these dimensions.
+
+| Dimension | Values |
+| | |
+| Authentication | OAuth |
+| Operation (for API Requests) | Microsoft.DigitalTwins/digitaltwins/delete, <br>Microsoft.DigitalTwins/digitaltwins/write, <br>Microsoft.DigitalTwins/digitaltwins/read, <br>Microsoft.DigitalTwins/eventroutes/read, <br>Microsoft.DigitalTwins/eventroutes/write, <br>Microsoft.DigitalTwins/eventroutes/delete, <br>Microsoft.DigitalTwins/models/read, <br>Microsoft.DigitalTwins/models/write, <br>Microsoft.DigitalTwins/models/delete, <br>Microsoft.DigitalTwins/query/action |
+| Endpoint Type | Event Grid, <br>Event Hubs, <br>Service Bus |
+| Protocol | HTTPS |
+| Result | Success, <br>Failure |
+| Status Code | 200, 404, 500, and so on. |
+| Status Code Class | 2xx, 4xx, 5xx, and so on. |
+| Status Text | Internal Server Error, Not Found, and so on. |
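+
+For example, you can split a metric by its dimensions with the Azure CLI. In this sketch, the dimension keys `Operation` and `StatusCode` are assumptions based on the display names above with spaces removed; check the metric definitions for your instance if the query returns no data.
+
+```azurecli
+# Break down API request counts by operation and status code
+az monitor metrics list \
+  --resource <Azure-Digital-Twins-instance-resource-ID> \
+  --metric "ApiRequests" \
+  --aggregation Total \
+  --dimension Operation StatusCode \
+  --output table
+```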
+
+## Next steps
+
+To learn more about managing recorded metrics for Azure Digital Twins, see [Monitor with diagnostics logs](how-to-monitor-diagnostics.md).
digital-twins How To Monitor Resource Health https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-monitor-resource-health.md
+
+# Mandatory fields.
+ Title: Monitor resource health
+
+description: Learn how to use Azure Resource Health to check the health of your Azure Digital Twins instance.
+ Last updated : 03/10/2022
+# Optional fields. Don't forget to remove # if you need a field.
+#
+#
+#
++
+# Monitor Azure Digital Twins resource health
+
+[Azure Service Health](../service-health/index.yml) is a suite of experiences that can help you diagnose and get support for service problems that affect your Azure resources. It contains resource health, service health, and status information, and reports on both current and past health information.
+
+## Use Azure Resource Health
+
+[Azure Resource Health](../service-health/resource-health-overview.md) can help you monitor whether your Azure Digital Twins instance is up and running. You can also use it to learn whether a regional outage is impacting the health of your instance.
+
+To check the health of your instance, follow these steps:
+
+1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to your Azure Digital Twins instance. You can find it by typing its name into the portal search bar.
+
+2. From your instance's menu, select **Resource health** under Support + troubleshooting. This will take you to the page for viewing resource health history.
+
+ :::image type="content" source="media/how-to-monitor-resource-health/resource-health.png" alt-text="Screenshot showing the 'Resource health' page. There is a 'Health history' section showing a daily report from the last nine days.":::
+
+In the image above, this instance is showing as **Available**, and has been for the past nine days. To learn more about the Available status and the other status types that may appear, see [Resource Health overview](../service-health/resource-health-overview.md).
+
+You can also learn more about the different checks that go into resource health for different types of Azure resources in [Resource types and health checks in Azure resource health](../service-health/resource-health-checks-resource-types.md).
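+
+If you prefer a scripted check, the same availability data is exposed through the Azure Resource Health REST API. The following sketch assumes the `2020-05-01` API version; substitute your own subscription, resource group, and instance name.
+
+```azurecli
+# Query the current availability status of the instance
+az rest --method get \
+  --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.DigitalTwins/digitalTwinsInstances/<instance-name>/providers/Microsoft.ResourceHealth/availabilityStatuses/current?api-version=2020-05-01"
+```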
+
+## Use Azure Service Health
+
+[Azure Service Health](../service-health/service-health-overview.md) can help you check the health of the entire Azure Digital Twins service in a certain region, and be aware of events like ongoing service issues and upcoming planned maintenance.
+
+To check service health, sign in to the [Azure portal](https://portal.azure.com) and navigate to the **Service Health** service. You can find it by typing "service health" into the portal search bar.
+
+You can then filter service issues by subscription, region, and service.
+
+For more information on using Azure Service Health, see [Service Health overview](../service-health/service-health-overview.md).
+
+## Use Azure status
+
+The [Azure status](../service-health/azure-status-overview.md) page provides a global view of the health of Azure services and regions. While Azure Service Health and Azure Resource Health are personalized to your specific resource, Azure status has a larger scope and can be useful to understand incidents with wide-ranging impact.
+
+To check Azure status, navigate to the [Azure status](https://status.azure.com/status/) page. The page displays a table of Azure services along with health indicators per region. You can view Azure Digital Twins by searching for its table entry on the page.
+
+For more information on using the Azure status page, see [Azure status overview](../service-health/azure-status-overview.md).
+
+## Next steps
+
+Read about other ways to monitor your Azure Digital Twins instance in the following articles:
+* [Monitor with metrics](how-to-monitor-metrics.md)
+* [Monitor with diagnostics logs](how-to-monitor-diagnostics.md)
+* [Monitor with alerts](how-to-monitor-alerts.md)
digital-twins Reference Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/reference-service-limits.md
When a limit is reached, any requests beyond it are throttled by the service, wh
To manage the throttling, here are some recommendations for working with limits. * Use retry logic. The [Azure Digital Twins SDKs](concepts-apis-sdks.md) implement retry logic for failed requests, so if you're working with a provided SDK, this functionality is already built-in. Otherwise, consider implementing retry logic in your own application. The service sends back a `Retry-After` header in the failure response, which you can use to determine how long to wait before retrying.
-* Use thresholds and notifications to warn about approaching limits. Some of the service limits for Azure Digital Twins have corresponding [metrics](troubleshoot-metrics.md) that can be used to track usage in these areas. To configure thresholds and set up an alert on any metric when a threshold is approached, see the instructions in [Troubleshooting: Alerts](troubleshoot-alerts.md). To set up notifications for other limits where metrics aren't provided, consider implementing this logic in your own application code.
+* Use thresholds and notifications to warn about approaching limits. Some of the service limits for Azure Digital Twins have corresponding [metrics](how-to-monitor-metrics.md) that can be used to track usage in these areas. To configure thresholds and set up an alert on any metric when a threshold is approached, see the instructions in [Monitor with alerts](how-to-monitor-alerts.md). To set up notifications for other limits where metrics aren't provided, consider implementing this logic in your own application code.
* Deploy at scale across multiple instances. Avoid having a single point of failure. Instead of one large graph for your entire deployment, consider sectioning out subsets of twins logically (like by region or tenant) across multiple instances. >[!NOTE]
digital-twins Troubleshoot Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/troubleshoot-performance.md
# Mandatory fields. Title: "Troubleshooting: Performance"
+ Title: "Troubleshooting performance"
description: Tips for troubleshooting performance of an Azure Digital Twins instance. Previously updated : 10/8/2021 Last updated : 03/10/2022 # Optional fields. Don't forget to remove # if you need a field.
#
-# Troubleshooting Azure Digital Twins: Performance
+# Troubleshooting Azure Digital Twins performance
If you're experiencing delays or other performance issues when working with Azure Digital Twins, use the tips in this article to help you troubleshoot. ## Isolate the source of the delay
-Determine whether the delay is coming from Azure Digital Twins or another service in your solution. To investigate this delay, you can use the **API Latency** metric in [Azure Monitor](../azure-monitor/essentials/quick-monitor-azure-resource.md) through the Azure portal. For instructions on how to view Azure Monitor metrics for an Azure Digital Twins instance, see [Troubleshooting: Metrics](troubleshoot-metrics.md).
+Determine whether the delay is coming from Azure Digital Twins or another service in your solution. To investigate this delay, you can use the **API Latency** metric in [Azure Monitor](../azure-monitor/essentials/quick-monitor-azure-resource.md) through the Azure portal. For instructions on how to view Azure Monitor metrics for an Azure Digital Twins instance, see [Monitor with metrics](how-to-monitor-metrics.md).
## Check regions
If your solution uses Azure Digital Twins in combination with other Azure servic
## Check logs
-Azure Digital Twins can collect logs for your service instance to help monitor its performance, among other data. Logs can be sent to [Log Analytics](../azure-monitor/logs/log-analytics-overview.md) or your custom storage mechanism. To enable logging in your instance, use the instructions in [Troubleshooting: Diagnostics logs](troubleshoot-diagnostics.md). You can analyze the timestamps on the logs to measure latencies, evaluate if they're consistent, and understand their source.
+Azure Digital Twins can collect logs for your service instance to help monitor its performance, among other data. Logs can be sent to [Log Analytics](../azure-monitor/logs/log-analytics-overview.md) or your custom storage mechanism. To enable logging in your instance, use the instructions in [Monitor with diagnostic logs](how-to-monitor-diagnostics.md). You can analyze the timestamps on the logs to measure latencies, evaluate if they're consistent, and understand their source.
## Check API frequency
If you're still experiencing performance issues after troubleshooting with the s
Follow these steps:
-1. Gather [metrics](troubleshoot-metrics.md) and [logs](troubleshoot-diagnostics.md) for your instance.
+1. Gather [metrics](how-to-monitor-metrics.md) and [logs](how-to-monitor-diagnostics.md) for your instance.
2. Navigate to [Azure Help + support](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest) in the Azure portal. Use the prompts to provide details of your issue, see recommended solutions, share your metrics/log files, and submit any other information that the support team can use to help investigate your issue. For more information on creating support requests, see [Create an Azure support request](../azure-portal/supportability/how-to-create-azure-support-request.md). ## Next steps
-Read about other ways to troubleshoot your Azure Digital Twins instance in the following articles:
-* [Troubleshooting: Metrics](troubleshoot-metrics.md)
-* [Troubleshooting: Diagnostics logs](troubleshoot-diagnostics.md).
-* [Troubleshooting: Alerts](troubleshoot-alerts.md)
-* [Troubleshooting: Resource health](troubleshoot-resource-health.md)
+Read about other ways to monitor your Azure Digital Twins instance to help with troubleshooting:
+* [Monitor with metrics](how-to-monitor-metrics.md)
+* [Monitor with diagnostics logs](how-to-monitor-diagnostics.md)
+* [Monitor with alerts](how-to-monitor-alerts.md)
+* [Monitor resource health](how-to-monitor-resource-health.md)
dms Tutorial Sql Server Managed Instance Online Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-managed-instance-online-ads.md
To complete this tutorial, you need to:
* [Install the Azure SQL Migration extension](/sql/azure-data-studio/extensions/azure-sql-migration-extension) from the Azure Data Studio marketplace * Have an Azure account that is assigned to one of the built-in roles listed below: - Contributor for the target Azure SQL Managed Instance (and Storage Account to upload your database backup files from SMB network share).
- - Owner or Contributor role for the Azure Resource Groups containing the target Azure SQL Managed Instance or the Azure storage account.
+ - Reader role for the Azure Resource Groups containing the target Azure SQL Managed Instance or the Azure storage account.
- Owner or Contributor role for the Azure subscription (required if creating a new DMS service). > [!IMPORTANT] > Azure account is only required when configuring the migration steps and is not required for assessment or Azure recommendation steps in the migration wizard.
To complete this tutorial, you need to:
> [!NOTE] > If your database backups are provided in an on-premises network share, DMS will require you to setup self-hosted integration runtime in the next step of the wizard. Self-hosted integration runtime is required to access your source database backups, check the validity of the backup set and upload them to Azure storage account.<br/> If your database backups are already on an Azure storage blob container, you do not need to setup self-hosted integration runtime.
-1. If you picked the first option for network share, provide details of your source SQL Server, source backup location, target database name and Azure storage account for the backup files to be uploaded to.
+* For backups located on a network share, provide the following details of your source SQL Server, source backup location, target database name, and the Azure storage account to upload the backup files to.
|Field |Description | ||-|
To complete this tutorial, you need to:
|**Target database name** |The target database name can be modified if you wish to change the database name on the target during the migration process. | |**Storage account details** |The resource group and storage account where backup files will be uploaded to. You do not need to create a container as DMS will automatically create a blob container in the specified storage account during the upload process.
-1. If you picked the second option for backups stored in an Azure Blob Container specify the **Target database name**, **Resource group**, **Azure storage account**, **Blob container** and **Last backup file** from the corresponding drop-down lists. This Azure storage account will be used by DMS to upload the database backups from network share. You do not need to create a container as DMS will automatically create a blob container in the specified storage account during the upload process.
+* For backups stored in an Azure storage blob container, specify the **Target database name**, **Resource group**, **Azure storage account**, and **Blob container** from the corresponding drop-down lists.
+
+ |Field |Description |
+ ||-|
+ |**Target database name** |The target database name can be modified if you wish to change the database name on the target during the migration process. |
+ |**Storage account details** |The resource group, storage account, and container where the backup files are located. |
+
> [!IMPORTANT] > If loopback check functionality is enabled and the source SQL Server and file share are on the same computer, then the source won't be able to access the file share using its FQDN. To fix this issue, disable loopback check functionality using the instructions [here](https://support.microsoft.com/help/926642/error-message-when-you-try-to-access-a-server-locally-by-using-its-fqd)
frontdoor Front Door Quickstart Template Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-quickstart-template-samples.md
na Previously updated : 02/16/2022 Last updated : 03/10/2022
+zone_pivot_groups: front-door-tiers
# Azure Resource Manager deployment model templates for Front Door

The following table includes links to Azure Resource Manager deployment model templates for Azure Front Door.
-## Azure Front Door
-
-| Template | Description |
-| | |
-| [Create a basic Front Door](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.network/front-door-create-basic)| Creates a basic Front Door configuration with a single backend. |
-| [Create a Front Door with multiple backends and backend pools and URL based routing](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.network/front-door-create-multiple-backends)| Creates a Front Door with load balancing configured for multiple backends in ta backend pool and also across backend pools based on URL path. |
-| [Onboard a custom domain and managed TLS certificate with Front Door](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.network/front-door-custom-domain)| Add a custom domain to your Front Door and use a Front Door-managed TLS certificate. |
-| [Onboard a custom domain and customer-managed TLS certificate with Front Door](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.network/front-door-custom-domain-customer-certificate)| Add a custom domain to your Front Door and use your own TLS certificate by using Key Vault. |
-| [Create Front Door with geo filtering](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.network/front-door-geo-filtering)| Create a Front Door that allows/blocks traffic from certain countries/regions. |
-| [Control Health Probes for your backends on Front Door](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.network/front-door-health-probes)| Update your Front Door to change the health probe settings by updating the probe path and also the intervals in which the probes will be sent. |
-| [Create Front Door with Active/Standby backend configuration](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.network/front-door-priority-lb)| Creates a Front Door that demonstrates priority-based routing for Active/Standby application topology, that is, by default send all traffic to the primary (highest-priority) backend until it becomes unavailable. |
-| [Create Front Door with caching enabled for certain routes](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.network/front-door-create-caching)| Creates a Front Door with caching enabled for the defined routing configuration thus caching any static assets for your workload. |
-| [Configure Session Affinity for your Front Door host names](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.network/front-door-session-affinity) | Updates a Front Door to enable session affinity for your frontend host, thereby, sending subsequent traffic from the same user session to the same backend. |
-| [Configure Front Door for client IP allowlisting or blocklisting](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.network/front-door-waf-clientip)| Configures a Front Door to restrict traffic certain client IPs using custom access control using client IPs. |
-| [Configure Front Door to take action with specific http parameters](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.network/front-door-waf-http-params)| Configures a Front Door to allow or block certain traffic based on the http parameters in the incoming request by using custom rules for access control using http parameters. |
-| [Configure Front Door rate limiting](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.network/front-door-rate-limiting)| Configures a Front Door to rate limit incoming traffic for a given frontend host. |
-| | |
-
-## Azure Front Door Standard/Premium (Preview)
| Sample | Description |
|-|-|
| [Front Door (quick create)](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.cdn/front-door-standard-premium/) | Creates a basic Front Door profile including an endpoint, origin group, origin, and route. |
| [Rule set](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.cdn/front-door-standard-premium-rule-set/) | Creates a Front Door profile and rule set. |
+|**Custom domains**| **Description** |
+| [Custom domain and managed TLS certificate](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.cdn/front-door-standard-premium-custom-domain/) | Creates a Front Door profile with a custom domain and a Microsoft-managed TLS certificate. |
+| [Custom domain and customer-managed TLS certificate](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.cdn/front-door-standard-premium-custom-domain-customer-certificate/) | Creates a Front Door profile with a custom domain and use your own TLS certificate by using Key Vault. |
+| [Custom domain and Azure DNS](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.cdn/front-door-standard-premium-custom-domain-azure-dns/) | Creates a Front Door profile with a custom domain and an Azure DNS zone. |
+|**Web Application Firewall**| **Description** |
| [WAF policy with managed rule set](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.cdn/front-door-premium-waf-managed/) | Creates a Front Door profile and WAF with managed rule set. |
| [WAF policy with custom rule](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.cdn/front-door-standard-premium-waf-custom/) | Creates a Front Door profile and WAF with custom rule. |
| [WAF policy with rate limit](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.cdn/front-door-standard-premium-rate-limit/) | Creates a Front Door profile and WAF with a custom rule to perform rate limiting. |
The following table includes links to Azure Resource Manager deployment model te
| [Virtual machine with Private Link service](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.cdn/front-door-premium-vm-private-link) | Creates a virtual machine and Private Link service, and a Front Door profile. |
| | |
+| Template | Description |
+| | |
+| [Create a basic Front Door](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.network/front-door-create-basic)| Creates a basic Front Door configuration with a single backend. |
+| [Create a Front Door with multiple backends and backend pools and URL based routing](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.network/front-door-create-multiple-backends)| Creates a Front Door with load balancing configured for multiple backends in a backend pool and also across backend pools based on URL path. |
+| [Onboard a custom domain and managed TLS certificate with Front Door](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.network/front-door-custom-domain)| Add a custom domain to your Front Door and use a Front Door-managed TLS certificate. |
+| [Onboard a custom domain and customer-managed TLS certificate with Front Door](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.network/front-door-custom-domain-customer-certificate)| Add a custom domain to your Front Door and use your own TLS certificate by using Key Vault. |
+| [Create Front Door with geo filtering](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.network/front-door-geo-filtering)| Create a Front Door that allows/blocks traffic from certain countries/regions. |
+| [Control Health Probes for your backends on Front Door](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.network/front-door-health-probes)| Update your Front Door to change the health probe settings by updating the probe path and also the intervals in which the probes will be sent. |
+| [Create Front Door with Active/Standby backend configuration](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.network/front-door-priority-lb)| Creates a Front Door that demonstrates priority-based routing for Active/Standby application topology, that is, by default send all traffic to the primary (highest-priority) backend until it becomes unavailable. |
+| [Create Front Door with caching enabled for certain routes](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.network/front-door-create-caching)| Creates a Front Door with caching enabled for the defined routing configuration thus caching any static assets for your workload. |
+| [Configure Session Affinity for your Front Door host names](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.network/front-door-session-affinity) | Updates a Front Door to enable session affinity for your frontend host, thereby, sending subsequent traffic from the same user session to the same backend. |
+| [Configure Front Door for client IP allowlisting or blocklisting](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.network/front-door-waf-clientip)| Configures a Front Door to restrict traffic from certain client IPs by using custom access control. |
+| [Configure Front Door to take action with specific http parameters](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.network/front-door-waf-http-params)| Configures a Front Door to allow or block certain traffic based on the http parameters in the incoming request by using custom rules for access control using http parameters. |
+| [Configure Front Door rate limiting](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.network/front-door-rate-limiting)| Configures a Front Door to rate limit incoming traffic for a given frontend host. |
+| | |
## Next steps
+- Learn how to [create a Front Door profile](standard-premium/create-front-door-portal.md).
- Learn how to [create a Front Door](quickstart-create-front-door.md).
-- Learn [how Front Door works](front-door-routing-architecture.md).
frontdoor Front Door Url Redirect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-url-redirect.md
Title: Azure Front Door - URL Redirect | Microsoft Docs description: This article helps you understand how Azure Front Door supports URL redirection for their routing rules. Previously updated : 09/28/2020 Last updated : 03/09/2022
+zone_pivot_groups: front-door-tiers
# URL redirect

Azure Front Door can redirect traffic at each of the following levels: protocol, hostname, path, query string. These functionalities can be configured for individual microservices since the redirection is path-based. This can simplify application configuration by optimizing resource usage, and supports new redirection scenarios including global and path-based redirection.
-</br>
++
+In Azure Front Door Standard/Premium tier, you can configure URL redirect using a Rule Set.
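+
+For example, the following Azure CLI sketch adds a rule that redirects HTTP requests to HTTPS with a 301 response. The resource group, profile, and rule set names are placeholders.
+
+```azurecli
+# Redirect all HTTP requests to HTTPS with a 301 (Moved) response
+az afd rule create \
+  --resource-group <your-resource-group> \
+  --profile-name <your-profile-name> \
+  --rule-set-name <your-rule-set-name> \
+  --rule-name redirecttohttps \
+  --order 1 \
+  --match-variable RequestScheme \
+  --operator Equal \
+  --match-values HTTP \
+  --action-name UrlRedirect \
+  --redirect-protocol Https \
+  --redirect-type Moved
+```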
++
+> [!IMPORTANT]
+> Azure Front Door Standard/Premium (Preview) is currently in public preview.
+> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
:::image type="content" source="./media/front-door-url-redirect/front-door-url-redirect.png" alt-text="Azure Front Door URL Redirect":::

## Redirection types

A redirect type sets the response status code for the clients to understand the purpose of the redirect. The following types of redirection are supported:
The destination fragment is the portion of URL after '#', which is used by the b
## Next steps

-- Learn how to [create a Front Door](quickstart-create-front-door.md).
-- Learn [how Front Door works](front-door-routing-architecture.md).
+* Learn how to [create a Front Door](quickstart-create-front-door.md).
+* Learn more about [Azure Front Door Rule Set](front-door-rules-engine.md).
+* Learn [how Front Door works](front-door-routing-architecture.md).
frontdoor Front Door Url Rewrite https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-url-rewrite.md
Title: Azure Front Door - URL Rewrite | Microsoft Docs
-description: This article helps you understand how Azure Front Door does URL Rewrite for your routes, if configured.
+description: This article helps you understand how URL rewrites works in Azure Front Door.
Previously updated : 09/28/2020 Last updated : 03/09/2022
+zone_pivot_groups: front-door-tiers
-# URL rewrite (custom forwarding path)
+# URL rewrite
++
+Azure Front Door Standard/Premium supports URL rewrite to change the path of a request that's being routed to your origin. URL rewrite also allows you to add conditions to make sure that the URL or the specified headers get rewritten only when certain conditions are met. These conditions are based on the request and response information.
+
+With this feature, you can redirect users to different origins based on scenarios, device types, or the requested file type.
+
+URL rewrite settings can be found in the Rule set configuration.
++
+## Source pattern
+
+Source pattern is the URL path in the source request to replace. Currently, source pattern uses a prefix-based match. To match all URL paths, use a forward slash (/) as the source pattern value.
+
+For the URL rewrite source pattern, only the path after the route configuration "patterns to match" is considered. For example, if you have the following incoming URL format `<Frontend-domain>/<route-patterns-to-match-path>/<Rule-URL-Rewrite-Source-pattern>`, only `/<Rule-URL-Rewrite-Source-pattern>` will be considered by the rule engine as the source pattern to be rewritten. Therefore, when you have a URL rewrite rule using source pattern match, the format of the outgoing URL will be `<Frontend-domain>/<route-patterns-to-match-path>/<Rule-URL-Rewrite-destination>`.
+
+For scenarios where the `/<route-patterns-to-match-path>` segment of the URL path must be removed, set the Origin path of the Origin group in the route configuration to `/`.
+
+## Destination
+
+You can define the destination path to use in the rewrite. The destination path overwrites the source pattern.
+
+## Preserve unmatched path
+
+Preserve unmatched path allows you to append the remaining path after the source pattern to the new path.
+
+For example, if you set **Preserve unmatched path** to **Yes**:
+* If the incoming request is `www.contoso.com/sub/1.jpg`, the source pattern is set to `/`, and the destination is set to `/foo/`, then the content is served from `/foo/sub/1.jpg` on the origin.
+
+* If the incoming request is `www.contoso.com/sub/image/1.jpg`, the source pattern is set to `/sub/`, and the destination is set to `/foo/`, then the content is served from `/foo/image/1.jpg` on the origin.
+
+If you set **Preserve unmatched path** to **No**:
+* If the incoming request is `www.contoso.com/sub/image/1.jpg`, the source pattern is set to `/sub/`, and the destination is set to `/foo/2.jpg`, then the content is always served from `/foo/2.jpg` on the origin, no matter what path follows `www.contoso.com/sub/`.
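+
+As a sketch, the `/sub/` to `/foo/` rewrite above might be created with the `az afd rule create` command. The resource names are placeholders, and the parameter names, in particular `--preserve-unmatched-path`, are assumptions to confirm against your CLI version.
+
+```azurecli
+# Rewrite /sub/* to /foo/*, appending the unmatched remainder of the path
+az afd rule create \
+  --resource-group <your-resource-group> \
+  --profile-name <your-profile-name> \
+  --rule-set-name <your-rule-set-name> \
+  --rule-name rewriteexample \
+  --order 1 \
+  --action-name UrlRewrite \
+  --source-pattern "/sub/" \
+  --destination "/foo/" \
+  --preserve-unmatched-path true
+```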
+++ Azure Front Door supports URL rewrite by configuring an optional **Custom Forwarding Path** to use when constructing the request to forward to the backend. By default, if a custom forwarding path isn't provided, the Front Door will copy the incoming URL path to the URL used in the forwarded request. The Host header used in the forwarded request is as configured for the selected backend. Read [Backend Host Header](front-door-backend-pool.md#hostheader) to learn what it does and how you can configure it.
-The powerful part of URL rewrite is that the custom forwarding path will copy any part of the incoming path that matches to a wildcard path to the forwarded path (these path segments are the **green** segments in the example below):
-</br>
+The robust part of URL rewrite is that the custom forwarding path will copy any part of the incoming path that matches the wildcard path to the forwarded path (these path segments are the **green** segments in the example below):
:::image type="content" source="./media/front-door-url-rewrite/front-door-url-rewrite-example.jpg" alt-text="Azure Front Door URL Rewrite"::: ## URL rewrite example+ Consider a routing rule with the following combination of frontend hosts and paths configured:
-| Hosts | Paths |
-||-|
-| www\.contoso.com | /\* |
-| | /foo |
-| | /foo/\* |
-| | /foo/bar/\* |
+| Hosts | Paths |
+|--|--|
+| www\.contoso.com | /\* |
+| | /foo |
+| | /foo/\* |
+| | /foo/bar/\* |
The first column of the table below shows examples of incoming requests and the second column shows what would be the "most-specific" matching route 'Path'. The third and ensuing columns of the table are examples of configured **Custom Forwarding Paths**. For example, if we read across the second row, it's saying that for incoming request `www.contoso.com/sub`, if the custom forwarding path was `/`, then the forwarded path would be `/sub`. If the custom forwarding path was `/fwd/`, then the forwarded path would be `/fwd/sub`. And so forth, for the remaining columns. The **emphasized** parts of the paths below represent the portions that are part of the wildcard match.
-| Incoming request | Most-specific match path | / | /fwd/ | /foo/ | /foo/bar/ |
-||--||-|-|--|
-| www\.contoso.com/ | /\* | / | /fwd/ | /foo/ | /foo/bar/ |
-| www\.contoso.com/**sub** | /\* | /**sub** | /fwd/**sub** | /foo/**sub** | /foo/bar/**sub** |
-| www\.contoso.com/**a/b/c** | /\* | /**a/b/c** | /fwd/**a/b/c** | /foo/**a/b/c** | /foo/bar/**a/b/c** |
-| www\.contoso.com/foo | /foo | / | /fwd/ | /foo/ | /foo/bar/ |
-| www\.contoso.com/foo/ | /foo/\* | / | /fwd/ | /foo/ | /foo/bar/ |
-| www\.contoso.com/foo/**bar** | /foo/\* | /**bar** | /fwd/**bar** | /foo/**bar** | /foo/bar/**bar** |
+| Incoming request | Most-specific match path | / | /fwd/ | /foo/ | /foo/bar/ |
+|--|--|--|--|--|--|
+| www\.contoso.com/ | /\* | / | /fwd/ | /foo/ | /foo/bar/ |
+| www\.contoso.com/**sub** | /\* | /**sub** | /fwd/**sub** | /foo/**sub** | /foo/bar/**sub** |
+| www\.contoso.com/**a/b/c** | /\* | /**a/b/c** | /fwd/**a/b/c** | /foo/**a/b/c** | /foo/bar/**a/b/c** |
+| www\.contoso.com/foo | /foo | / | /fwd/ | /foo/ | /foo/bar/ |
+| www\.contoso.com/foo/ | /foo/\* | / | /fwd/ | /foo/ | /foo/bar/ |
+| www\.contoso.com/foo/**bar** | /foo/\* | /**bar** | /fwd/**bar** | /foo/**bar** | /foo/bar/**bar** |
> [!NOTE]
-> Azure Front Door only supports URL rewrite from a static path to another static path. Preserve unmatched path is supported with Azure Front Door Standard/Premium SKU. See [preserve unmatched path](standard-premium/concept-rule-set-url-redirect-and-rewrite.md#preserve-unmatched-path) for more details.
+> Azure Front Door only supports URL rewrite from a static path to another static path. Preserve unmatched path is supported with Azure Front Door Standard/Premium SKU. For more information, see [Preserve unmatched path](front-door-url-rewrite.md#preserve-unmatched-path).
> ## Optional settings
-There are additional optional settings you can also specify for any given routing rule settings:
+
+There are extra optional settings you can specify for any given routing rule:
* **Cache Configuration** - If disabled or not specified, requests that match to this routing rule won't attempt to use cached content and instead will always fetch from the backend. Read more about [Caching with Front Door](front-door-caching.md).

## Next steps

- Learn how to [create a Front Door](quickstart-create-front-door.md).
+- Learn more about [Azure Front Door Rules engine](front-door-rules-engine.md)
+- Learn about [Azure Front Door routing architecture](front-door-routing-architecture.md).
frontdoor Concept Rule Set Url Redirect And Rewrite https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/concept-rule-set-url-redirect-and-rewrite.md
- Title: 'URL redirect and URL rewrite with Azure Front Door Standard/Premium (Preview)'
-description: This article helps you understand how Azure Front Door supports URL redirection and URL rewrite using Azure Front Door Rule Set.
---- Previously updated : 02/18/2021---
-# URL redirect and URL rewrite with Azure Front Door Standard/Premium (Preview)
-
-> [!Note]
-> This documentation is for Azure Front Door Standard/Premium (Preview). Looking for information on Azure Front Door? View [here](../front-door-overview.md).
-
-This article helps you understand how Azure Front Door Standard/Premium supports URL redirect and URL rewrite used in a Rule Set.
-
-> [!IMPORTANT]
-> Azure Front Door Standard/Premium (Preview) is currently in public preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-
-## URL redirect
-
-Azure Front Door can redirect traffic at each of the following levels: protocol, hostname, path, query string, and fragment. These functionalities can be configured for individual micro-service since the redirection is path-based. With URL redirect you can simplify application configuration by optimizing resource usage, and supports new redirection scenarios including global and path-based redirection.
-
-You can configure URL redirect via Rule Set.
--
-### Redirection types
-A redirect type sets the response status code for the clients to understand the purpose of the redirect. The following types of redirection are supported:
-
-* **301 (Moved permanently)**: Indicates that the target resource has been assigned a new permanent URI. Any future references to this resource will use one of the enclosed URIs. Use 301 status code for HTTP to HTTPS redirection.
-* **302 (Found)**: Indicates that the target resource is temporarily under a different URI. Since the redirection can change on occasion, the client should continue to use the effective request URI for future requests.
-* **307 (Temporary redirect)**: Indicates that the target resource is temporarily under a different URI. The user agent MUST NOT change the request method if it does an automatic redirection to that URI. Since the redirection can change over time, the client ought to continue using the original effective request URI for future requests.
-* **308 (Permanent redirect)**: Indicates that the target resource has been assigned a new permanent URI. Any future references to this resource should use one of the enclosed URIs.
-
-### Redirection protocol
-You can set the protocol that will be used for redirection. The most common use cases of the redirect feature, is to set HTTP to HTTPS redirection.
-
-* **HTTPS only**: Set the protocol to HTTPS only, if you're looking to redirect the traffic from HTTP to HTTPS. Azure Front Door recommends that you should always set the redirection to HTTPS only.
-* **HTTP only**: Redirects the incoming request to HTTP. Use this value only if you want to keep your traffic HTTP that is, non-encrypted.
-* **Match request**: This option keeps the protocol used by the incoming request. So, an HTTP request remains HTTP and an HTTPS request remains HTTPS post redirection.
-
-### Destination host
-As part of configuring a redirect routing, you can also change the hostname or domain for the redirect request. You can set this field to change the hostname in the URL for the redirection or otherwise preserve the hostname from the incoming request. So, using this field you can redirect all requests sent on `https://www.contoso.com/*` to `https://www.fabrikam.com/*`.
-
-### Destination path
-For cases where you want to replace the path segment of a URL as part of redirection, you can set this field with the new path value. Otherwise, you can choose to preserve the path value as part of redirect. So, using this field, you can redirect all requests sent to `https://www.contoso.com/\*` to `https://www.contoso.com/redirected-site`.
-
-### Query string parameters
-You can also replace the query string parameters in the redirected URL. To replace any existing query string from the incoming request URL, set this field to 'Replace' and then set the appropriate value. Otherwise, you can keep the original set of query strings by setting the field to 'Preserve'. As an example, using this field, you can redirect all traffic sent to `https://www.contoso.com/foo/bar` to `https://www.contoso.com/foo/bar?&utm_referrer=https%3A%2F%2Fwww.bing.com%2F`.
-
-### Destination fragment
-The destination fragment is the portion of URL after '#', which is used by the browser to land on a specific section of a web page. You can set this field to add a fragment to the redirect URL.
-
-## URL rewrite
-
-Azure Front Door supports URL rewrite to rewrite the path of a request that's en route to your origin. URL rewrite allows you to add conditions to ensure that the URL or the specified headers get rewritten only when certain conditions get met. These conditions are based on the request and response information.
-
-With this feature, you can redirect users to different origins based on scenario, device type, and requested file type.
-
-You can configure URL redirect via Rule Set.
--
-### Source pattern
-
-Source pattern is the URL path in the source request to replace. Currently, source pattern uses a prefix-based match. To match all URL paths, use a forward slash (/) as the source pattern value.
-
-For URL rewrite source pattern, only the path after the route configuration "patterns to match" is considered. For example, you have the following incoming URL format `<Frontend-domain>/<route-patterns-to-match-path>/<Rule-URL-Rewrite-Source-pattern>`, only `/<Rule-URL-Rewrite-Source-pattern>` will be considered by the rule engine as the source pattern to be rewritten. Therefore, when you have a URL rewrite rule using source pattern match, the format for the outgoing URL will be `<Frontend-domain>/<route-patterns-to-match-path>/<Rule-URL-Rewrite-destination>`.
-
-For scenarios, where `/<route-patterns-to-match-path` segment of the URL path must be removed, set the Origin path of the Origin group in route configuration to `/`.
-
-### Destination
-
-You can define the destination path to use in the rewrite. The destination path overwrites the source pattern.
-
-### Preserve unmatched path
-
-Preserve unmatched path allows you to append the remaining path after the source pattern to the new path.
-
-For example, if I set **Preserve unmatched path to Yes**.
-* If the incoming request is `www.contoso.com/sub/1.jpg`, the source pattern gets set to `/`, the destination get set to `/foo/`, and the content get served from `/foo/sub/1`.jpg from the origin.
-
-* If the incoming request is `www.contoso.com/sub/image/1.jpg`, the source pattern gets set to `/sub/`, the destination get set to `/foo/`, the content get served from `/foo/image/1.jpg` from the origin.
-
-For example, if I set **Preserve unmatched path to No**.
-* If the incoming request is `www.contoso.com/sub/image/1.jpg`, the source pattern gets set to `/sub/`, the destination get set to `/foo/2.jpg`, the content will always be served from `/foo/2.jpg` from the origin no matter what paths followed in `wwww.contoso.com/sub/`.
-
-## Next steps
-
-* Learn more about [Azure Front Door Standard/Premium Rule Set](../front-door-rules-engine.md).
frontdoor Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/faq.md
Azure Front Door is a globally distributed multi-tenant service. The infrastruct
### Is HTTP->HTTPS redirection supported?
-Yes. In fact, Azure Front Door supports host, path, query string redirection, and part of URL redirection. Learn more about [URL redirection](concept-rule-set-url-redirect-and-rewrite.md).
+Yes. In fact, Azure Front Door supports host, path, query string redirection, and part of URL redirection. Learn more about [URL redirection](../front-door-url-redirect.md).
### How do I lock down the access to my backend to only Azure Front Door?
iot-central Concepts Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-architecture.md
Key capabilities in an IoT Central application include:
IoT Central lets you manage the fleet of [IoT devices](#devices) that are sending data to your solution. For example, you can: -- Control which devices can [connect](concepts-get-connected.md) to your application and how they authenticate.
+- Control which devices can [connect](overview-iot-central-developer.md#how-devices-connect) to your application and how they authenticate.
- Use [device templates](concepts-device-templates.md) to define the types of device that can connect to your application. - Manage devices by setting properties or calling commands on connected devices. For example, set a target temperature property for a thermostat device or call a command to trigger a device to update its firmware. You can set properties and call commands on: - Individual devices through a [customizable](concepts-device-templates.md#views) web UI.
In an IoT Central application, you can view and analyze data for individual devi
In an IoT Central application you can manage the following security aspects of your solution: -- [Device connectivity](concepts-get-connected.md): Create, revoke, and update the security keys that your devices use to establish a connection to your application.
+- [Device authentication](concepts-device-authentication.md): Create, revoke, and update the security keys that your devices use to establish a connection to your application.
- [App integrations](howto-authorize-rest-api.md#get-an-api-token): Create, revoke, and update the security keys that other applications use to establish secure connections with your application. - [Data export](howto-export-data.md#connection-options): Use managed identities to secure the connection to your data export destinations. - [User management](howto-manage-users-roles.md): Manage the users that can sign in to the application and the roles that determine what permissions those users have.
A device can use properties to report its state, such as whether a valve is open
IoT Central can also control devices by calling commands on the device. For example, instructing a device to download and install a firmware update.
-The [telemetry, properties, and commands](concepts-telemetry-properties-commands.md) that a device implements are collectively known as the device capabilities. You define these capabilities in a model that's shared between the device and the IoT Central application. In IoT Central, this model is part of the device template that defines a specific type of device. To learn more, see [Associate a device with a device template](concepts-get-connected.md#associate-a-device-with-a-device-template).
+The [telemetry, properties, and commands](concepts-telemetry-properties-commands.md) that a device implements are collectively known as the device capabilities. You define these capabilities in a model that's shared between the device and the IoT Central application. In IoT Central, this model is part of the device template that defines a specific type of device. To learn more, see [Assign a device to a device template](concepts-device-templates.md#assign-a-device-to-a-device-template).
The [device implementation](tutorial-connect-device.md) should follow the [IoT Plug and Play conventions](../../iot-develop/concepts-convention.md) to ensure that it can communicate with IoT Central. For more information, see the various language [SDKs and samples](../../iot-develop/libraries-sdks.md).
iot-central Concepts Device Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-device-authentication.md
+
+ Title: Device authentication in Azure IoT Central | Microsoft Docs
+description: This article introduces key concepts relating to device authentication in Azure IoT Central
+ Last updated : 03/02/2022
+# This article applies to operators and device developers.
++
+# Device authentication concepts in IoT Central
+
+This article describes how devices authenticate to an IoT Central application. To learn more about the overall connection process, see [Connect a device](overview-iot-central-developer.md#how-devices-connect).
+
+Devices authenticate with the IoT Central application by using either a _shared access signature (SAS) token_ or an _X.509 certificate_. X.509 certificates are recommended in production environments.
+
+You use _enrollment groups_ to manage the device authentication options in your IoT Central application.
+
+This article describes the following device authentication options:
+
+- [X.509 enrollment group](#x509-enrollment-group)
+- [SAS enrollment group](#sas-enrollment-group)
+- [Individual enrollment](#individual-enrollment)
+
+## X.509 enrollment group
+
+In a production environment, using X.509 certificates is the recommended device authentication mechanism for IoT Central. To learn more, see [Device Authentication using X.509 CA Certificates](../../iot-hub/iot-hub-x509ca-overview.md).
+
+An X.509 enrollment group contains a root or intermediate X.509 certificate. Devices can authenticate if they have a valid leaf certificate that's derived from the root or intermediate certificate.
+
+To connect a device with an X.509 certificate to your application:
+
+1. Create an _enrollment group_ that uses the **Certificates (X.509)** attestation type.
+1. Add and verify an intermediate or root X.509 certificate in the enrollment group.
+1. Generate a leaf certificate from the root or intermediate certificate in the enrollment group. Install the leaf certificate on the device for it to use when it connects to your application.
+
+To learn more, see [How to connect devices with X.509 certificates](how-to-connect-devices-x509.md).
+
+### For testing purposes only
+
+In a production environment, use certificates from your certificate provider. For testing only, you can use the following utilities to generate root, intermediate, and device certificates:
+
+- [Tools for the Azure IoT Device Provisioning Device SDK](https://github.com/Azure/azure-iot-sdk-node/blob/main/provisioning/tools/readme.md): a collection of Node.js tools that you can use to generate and verify X.509 certificates and keys.
+- [Manage test CA certificates for samples and tutorials](https://github.com/Azure/azure-iot-sdk-c/blob/master/tools/CACertificates/CACertificateOverview.md): a collection of PowerShell and Bash scripts to:
+ - Create a certificate chain.
+ - Save the certificates as .cer files to upload to your IoT Central application.
+ - Use the verification code from the IoT Central application to generate the verification certificate.
+ - Create leaf certificates for your devices using your device IDs as a parameter to the tool.
+
+## SAS enrollment group
+
+A SAS enrollment group contains group-level SAS keys. Devices can authenticate if they have a valid SAS token that's derived from a group-level SAS key.
+
+To connect a device with a device SAS token to your application:
+
+1. Create an _enrollment group_ that uses the **Shared Access Signature (SAS)** attestation type.
+1. Copy the group primary or secondary key from the enrollment group.
+1. Use the Azure CLI to generate a device token from the group key:
+
+ ```azurecli
+ az iot central device compute-device-key --primary-key <enrollment group primary key> --device-id <device ID>
+ ```
+
+1. Use the generated device token when the device connects to your IoT Central application.
+
+> [!NOTE]
+> To use existing SAS keys in your enrollment groups, disable the **Auto generate keys** toggle and manually enter your SAS keys.
+
+## Individual enrollment
+
+Typically, devices connect by using credentials derived from an enrollment group X.509 certificate or SAS key. However, if your devices each have their own credentials, you can use individual enrollments. An individual enrollment is an entry for a single device that's allowed to connect. Individual enrollments can use either X.509 leaf certificates or SAS tokens (from a physical or virtual trusted platform module) as attestation mechanisms. For more information, see [DPS individual enrollment](../../iot-dps/concepts-service.md#individual-enrollment).
+
+> [!NOTE]
+> When you create an individual enrollment for a device, it takes precedence over the default enrollment group options in your IoT Central application.
+
+### Create individual enrollments
+
+IoT Central supports the following attestation mechanisms for individual enrollments:
+
+- **Symmetric key attestation:** Symmetric key attestation is a simple approach to authenticating a device with the DPS instance. To create an individual enrollment that uses symmetric keys, open the **Device connection** page for the device, select **Individual enrollment** as the authentication type, and **Shared access signature (SAS)** as the authentication method. Enter the base64 encoded primary and secondary keys, and save your changes. Use the **ID scope**, **Device ID**, and either the primary or secondary key to connect your device.
+
+ > [!TIP]
+ > For testing, you can use **OpenSSL** to generate base64 encoded keys: `openssl rand -base64 64`
+
+- **X.509 certificates:** To create an individual enrollment with X.509 certificates, open the **Device Connection** page, select **Individual enrollment** as the authentication type, and **Certificates (X.509)** as the authentication method. Device certificates used with an individual enrollment entry must have the issuer and subject CN set to the device ID.
+
+ > [!TIP]
+ > For testing, you can use [Tools for the Azure IoT Device Provisioning Device SDK for Node.js](https://github.com/Azure/azure-iot-sdk-node/tree/main/provisioning/tools) to generate a self-signed certificate: `node create_test_cert.js device "mytestdevice"`
+
+- **Trusted Platform Module (TPM) attestation:** A [TPM](../../iot-dps/concepts-tpm-attestation.md) is a type of hardware security module. Using a TPM is one of the most secure ways to connect a device. This article assumes you're using a discrete, firmware, or integrated TPM. Software emulated TPMs are well suited for prototyping or testing, but they don't provide the same level of security as discrete, firmware, or integrated TPMs. Don't use software TPMs in production. To create an individual enrollment that uses a TPM, open the **Device Connection** page, select **Individual enrollment** as the authentication type, and **TPM** as the authentication method. Enter the TPM endorsement key and save the device connection information.
+
+## Automatically register devices
+
+This scenario enables OEMs to mass manufacture devices that can connect without first being registered in an application. An OEM generates suitable device credentials, and configures the devices in the factory.
+
+To automatically register devices that use X.509 certificates:
+
+1. Generate the leaf certificates for your devices using the root or intermediate certificate you added to your [X.509 enrollment group](#x509-enrollment-group). Use the device IDs as the subject common name (CN) in the leaf certificates (a Python sketch follows these steps). A device ID can contain letters, numbers, and the `-` character.
+
+1. As an OEM, flash each device with a device ID, a generated X.509 leaf certificate, and the application **ID scope** value. The device code should also send the model ID of the device model it implements.
+
+1. When you switch on a device, it first connects to DPS to retrieve its IoT Central connection information.
+
+1. The device uses the information from DPS to connect to, and register with, your IoT Central application.
+
+1. The IoT Central application uses the model ID sent by the device to [assign the registered device to a device template](concepts-device-templates.md#assign-a-device-to-a-device-template).
+
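+For illustration, the following sketch uses the Python `cryptography` package to issue a leaf certificate whose subject common name is the device ID. It assumes you've already loaded the enrollment group's signing certificate and private key as `ca_cert` and `ca_key`; in production, use your PKI provider's tooling instead:
+
+```python
+import datetime
+
+from cryptography import x509
+from cryptography.hazmat.primitives import hashes
+from cryptography.hazmat.primitives.asymmetric import rsa
+from cryptography.x509.oid import NameOID
+
+def issue_leaf_cert(device_id: str, ca_cert: x509.Certificate, ca_key):
+    """Issue a device (leaf) certificate with the device ID as the subject CN."""
+    device_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
+    now = datetime.datetime.utcnow()
+    cert = (
+        x509.CertificateBuilder()
+        .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, device_id)]))
+        .issuer_name(ca_cert.subject)
+        .public_key(device_key.public_key())
+        .serial_number(x509.random_serial_number())
+        .not_valid_before(now)
+        .not_valid_after(now + datetime.timedelta(days=365))
+        .sign(ca_key, hashes.SHA256())
+    )
+    return device_key, cert
+```
+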
+To automatically register devices that use SAS tokens:
+
+1. Copy the group primary key from the **SAS-IoT-Devices** enrollment group:
+
+ :::image type="content" source="media/concepts-device-authentication/group-primary-key.png" alt-text="Group primary key from S A S - I o T - Devices enrollment group.":::
+
+1. Use the `az iot central device compute-device-key` command to generate the device SAS keys. Use the group primary key from the previous step. The device ID can contain letters, numbers, and the `-` character:
+
+ ```azurecli
+ az iot central device compute-device-key --primary-key <enrollment group primary key> --device-id <device ID>
+ ```
+
+1. As an OEM, flash each device with the device ID, the generated device SAS key, and the application **ID scope** value. The device code should also send the model ID of the device model it implements.
+
+1. When you switch on a device, it first connects to DPS to retrieve its IoT Central registration information.
+
+1. The device uses the information from DPS to connect to, and register with, your IoT Central application.
+
+1. The IoT Central application uses the model ID sent by the device to [assign the registered device to a device template](concepts-device-templates.md#assign-a-device-to-a-device-template).
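+
+The following sketch shows this flow in code with the Azure IoT device SDK for Python (`pip install azure-iot-device`): the device registers through DPS, sends its model ID in the provisioning payload, and then connects to the assigned hub. The ID scope, device ID, and key values are placeholders:
+
+```python
+import asyncio
+
+from azure.iot.device.aio import IoTHubDeviceClient, ProvisioningDeviceClient
+
+ID_SCOPE = "<application ID scope>"
+DEVICE_ID = "<device ID>"
+DEVICE_KEY = "<derived device SAS key>"
+MODEL_ID = "dtmi:com:example:Thermostat;1"  # the model the device implements
+
+async def main():
+    # Step 1: register with DPS, announcing the model ID.
+    dps_client = ProvisioningDeviceClient.create_from_symmetric_key(
+        provisioning_host="global.azure-devices-provisioning.net",
+        registration_id=DEVICE_ID,
+        id_scope=ID_SCOPE,
+        symmetric_key=DEVICE_KEY,
+    )
+    dps_client.provisioning_payload = {"modelId": MODEL_ID}
+    result = await dps_client.register()
+
+    # Step 2: connect to the assigned IoT hub and send telemetry.
+    device_client = IoTHubDeviceClient.create_from_symmetric_key(
+        symmetric_key=DEVICE_KEY,
+        hostname=result.registration_state.assigned_hub,
+        device_id=result.registration_state.device_id,
+        product_info=MODEL_ID,
+    )
+    await device_client.connect()
+    await device_client.send_message('{"temperature": 21.5}')
+    await device_client.shutdown()
+
+asyncio.run(main())
+```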
+
+## Next steps
+
+Some suggested next steps are to:
+
+- Review [best practices](concepts-device-implementation.md#best-practices) for developing devices.
+- Review some sample code that shows how to use SAS tokens in [Tutorial: Create and connect a client application to your Azure IoT Central application](tutorial-connect-device.md)
+- Learn how to [connect devices with X.509 certificates using the Node.js device SDK](how-to-connect-devices-x509.md)
+- Learn how to [Monitor device connectivity using Azure CLI](./howto-monitor-devices-azure-cli.md)
+- Read about [Azure IoT Edge devices and Azure IoT Central](./concepts-iot-edge.md)
iot-central Concepts Device Implementation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-device-implementation.md
+
+ Title: Device implementation in Azure IoT Central | Microsoft Docs
+description: This article introduces the key concepts and best practices for implementing a device that connects to your IoT Central application.
+ Last updated : 03/04/2022
+# This article applies to device developers.
+
+# Device implementation and best practices for IoT Central
+
+This article describes how to implement devices that connect to your IoT Central application, and includes some best practices. To learn more about the overall connection process, see [Connect a device](overview-iot-central-developer.md#how-devices-connect).
+
+For sample device implementation code, see [Tutorial: Create and connect a client application to your Azure IoT Central application](tutorial-connect-device.md).
+
+## Implement the device
+
+Devices that connect to IoT Central should follow the _IoT Plug and Play conventions_. One of these conventions is that a device should send the _model ID_ of the device model it implements when it connects. The model ID enables the IoT Central application to assign the device to the correct device template.
+
+An IoT Central device template includes a _model_ that specifies the behaviors a device of that type should implement. Behaviors include telemetry, properties, and commands.
+
+Each model has a unique _digital twin model identifier_ (DTMI), such as `dtmi:com:example:Thermostat;1`. When a device connects to IoT Central, it sends the DTMI of the model it implements. IoT Central can then assign the correct device template to the device.
+
+[IoT Plug and Play](../../iot-develop/overview-iot-plug-and-play.md) defines a set of [conventions](../../iot-develop/concepts-convention.md) that a device should follow when it implements a DTDL model.
+
+The [Azure IoT device SDKs](#device-sdks) include support for the IoT Plug and Play conventions.
+
+### Device model
+
+A device model is defined by using the [DTDL](https://github.com/Azure/opendigitaltwins-dtdl) modeling language. This language lets you define:
+
+- The telemetry the device sends. The definition includes the name and data type of the telemetry. For example, a device sends temperature telemetry as a double.
+- The properties the device reports to IoT Central. A property definition includes its name and data type. For example, a device reports the state of a valve as a Boolean.
+- The properties the device can receive from IoT Central. Optionally, you can mark a property as writable. For example, IoT Central sends a target temperature as a double to a device.
+- The commands a device responds to. The definition includes the name of the command, and the names and data types of any parameters. For example, a device responds to a reboot command that specifies how many seconds to wait before rebooting.
+
+A DTDL model can be a _no-component_ or a _multi-component_ model:
+
+- No-component model: A simple model that doesn't use embedded or cascaded components. All the telemetry, properties, and commands are defined in a single _root component_. For an example, see the [Thermostat](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/samples/Thermostat.json) model.
+- Multi-component model: A more complex model that includes a single root component and one or more nested components. For an example, see the [Temperature Controller](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/samples/TemperatureController.json) model.
+
+> [!TIP]
+> You can export the model from an IoT Central device template as a [Digital Twins Definition Language (DTDL) v2](https://github.com/Azure/opendigitaltwins-dtdl) JSON file.
+
+To learn more, see the [IoT Plug and Play modeling guide](../../iot-develop/concepts-modeling-guide.md).
+
+### Conventions
+
+A device should follow the IoT Plug and Play conventions when it exchanges data with IoT Central. The conventions include:
+
+- Send the DTMI when it connects to IoT Central.
+- Send correctly formatted JSON payloads and metadata to IoT Central (an example appears after the note that follows this list).
+- Correctly respond to writable properties and commands from IoT Central.
+- Follow the naming conventions for component commands.
+
+> [!NOTE]
+> Currently, IoT Central does not fully support the DTDL **Array** and **Geospatial** data types.
+
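+For example, with the Python device SDK, a telemetry message that follows the conventions declares a JSON content type and encoding, and names the component (if any) in the `$.sub` message property. A minimal sketch; the component name `thermostat1` is a stand-in for a component defined in your model:
+
+```python
+import json
+
+from azure.iot.device import Message
+
+def send_telemetry(device_client, payload: dict, component: str = None):
+    # Build a message that follows the IoT Plug and Play conventions.
+    msg = Message(json.dumps(payload))
+    msg.content_type = "application/json"
+    msg.content_encoding = "utf-8"
+    if component:
+        msg.custom_properties["$.sub"] = component  # the component name
+    device_client.send_message(msg)
+
+# Usage, assuming device_client is a connected IoTHubDeviceClient:
+# send_telemetry(device_client, {"temperature": 21.5}, component="thermostat1")
+```
+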
+To learn more about the format of the JSON messages that a device exchanges with IoT Central, see [Telemetry, property, and command payloads](concepts-telemetry-properties-commands.md).
+
+To learn more about the IoT Plug and Play conventions, see [IoT Plug and Play conventions](../../iot-develop/concepts-convention.md).
+
+### Device SDKs
+
+Use one of the [Azure IoT device SDKs](../../iot-hub/iot-hub-devguide-sdks.md#azure-iot-hub-device-sdks) to implement the behavior of your device (a Python sketch follows this list). The code should:
+
+- Register the device with DPS and use the information from DPS to connect to the internal IoT hub in your IoT Central application.
+- Announce the DTMI of the model the device implements.
+- Send telemetry in the format that the device model specifies. IoT Central uses the model in the device template to determine how to use the telemetry for visualizations and analysis.
+- Synchronize property values between the device and IoT Central. The model specifies the property names and data types so that IoT Central can display the information.
+- Implement command handlers for the commands specified in the model. The model specifies the command names and parameters that the device should use.
+
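+As an illustration, the following Python sketch wires up a command handler and a writable-property acknowledgement with the `azure-iot-device` SDK. The `reboot` command and `targetTemperature` property come from a hypothetical thermostat model, and the connection values are placeholders:
+
+```python
+from azure.iot.device import IoTHubDeviceClient, MethodResponse
+
+device_client = IoTHubDeviceClient.create_from_symmetric_key(
+    symmetric_key="<device key>", hostname="<assigned hub>", device_id="<device ID>"
+)
+
+def handle_command(request):
+    # Answer direct-method calls; names and parameters come from the model.
+    status = 200 if request.name == "reboot" else 404
+    device_client.send_method_response(
+        MethodResponse.create_from_method_request(request, status)
+    )
+
+def handle_desired_properties(patch):
+    # Acknowledge a writable-property change using the IoT Plug and Play convention.
+    if "targetTemperature" in patch:
+        device_client.patch_twin_reported_properties({
+            "targetTemperature": {
+                "value": patch["targetTemperature"],
+                "ac": 200,                # acknowledgement status code
+                "av": patch["$version"],  # acknowledged version
+                "ad": "completed",        # description
+            }
+        })
+
+device_client.on_method_request_received = handle_command
+device_client.on_twin_desired_properties_patch_received = handle_desired_properties
+device_client.connect()
+```
+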
+For more information about the role of device templates, see [What are device templates?](./concepts-device-templates.md).
+
+The following table summarizes how Azure IoT Central device features map on to IoT Hub features:
+
+| Azure IoT Central | Azure IoT Hub |
+| -- | - |
+| Telemetry | [Device-to-cloud messaging](../../iot-hub/iot-hub-devguide-messages-d2c.md) |
+| Offline commands | [Cloud-to-device messaging](../../iot-hub/iot-hub-devguide-messages-c2d.md) |
+| Property | [Device twin reported properties](../../iot-hub/iot-hub-devguide-device-twins.md) |
+| Property (writable) | [Device twin desired and reported properties](../../iot-hub/iot-hub-devguide-device-twins.md) |
+| Command | [Direct methods](../../iot-hub/iot-hub-devguide-direct-methods.md) |
+
+### Communication protocols
+
+Communication protocols that a device can use to connect to IoT Central include MQTT, AMQP, and HTTPS. Internally, IoT Central uses an IoT hub to enable device connectivity. For more information about the communication protocols that IoT Hub supports for device connectivity, see [Choose a communication protocol](../../iot-hub/iot-hub-devguide-protocols.md).
+
+If your device can't use any of the supported protocols, use Azure IoT Edge to do protocol conversion. IoT Edge supports other intelligence-on-the-edge scenarios to offload processing from the Azure IoT Central application.
+
+## Best practices
+
+These recommendations show how to implement devices to take advantage of the [built-in high availability, disaster recovery, and automatic scaling](concepts-faq-scalability-availability.md) in IoT Central.
+
+### Handle connection failures
+
+For scaling or disaster recovery purposes, IoT Central may update its underlying IoT hubs. To maintain connectivity, your device code should handle specific connection errors by establishing a connection to a new IoT Hub endpoint.
+
+If the device gets any of the following errors when it connects, it should reprovision itself with DPS to get a new connection string. These errors mean the connection string is no longer valid:
+
+- Unreachable IoT Hub endpoint.
+- Expired security token.
+- Device disabled in IoT Hub.
+
+If the device gets any of the following errors when it connects, it should use a back-off strategy to retry the connection. These errors mean the connection string is still valid, but transient conditions are stopping the device from connecting:
+
+- Operator blocked device.
+- Internal error 500 from the service.
+
+To learn more about device error codes, see [Troubleshooting device connections](troubleshoot-connection.md).
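+
+The following Python sketch outlines the retry logic. It's deliberately schematic: `provision` and `connect` stand in for your DPS registration and hub connection code, and the two exception classes stand in for however your device SDK surfaces permanent versus transient errors:
+
+```python
+import random
+import time
+
+class PermanentConnectionError(Exception):
+    """Stale assignment: unreachable endpoint, expired token, or disabled device."""
+
+class TransientConnectionError(Exception):
+    """Temporary condition: blocked device or an internal 500 from the service."""
+
+def connect_with_backoff(provision, connect, max_attempts=10):
+    hub_info = provision()  # initial DPS registration
+    for attempt in range(max_attempts):
+        try:
+            return connect(hub_info)
+        except PermanentConnectionError:
+            hub_info = provision()  # reprovision with DPS for fresh connection info
+        except TransientConnectionError:
+            pass  # same hub; wait and retry
+        # Exponential back-off with jitter, capped at 60 seconds.
+        time.sleep(min(2 ** attempt, 60) + random.uniform(0, 1))
+    raise RuntimeError("Device could not connect after retries")
+```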
+
+### Test failover capabilities
+
+The Azure CLI lets you test the failover capabilities of your device code. The CLI command works by temporarily switching a device registration to a different internal IoT hub. To verify the device failover worked, check that the device still sends telemetry and responds to commands.
+
+To run the failover test for your device, run the following command:
+
+```azurecli
+az iot central device manual-failover \
+ --app-id {Application ID of your IoT Central application} \
+ --device-id {Device ID of the device you're testing} \
+ --ttl-minutes {How long to wait before moving the device back to its original IoT hub}
+```
+
+> [!TIP]
+> To find the **Application ID**, navigate to **Application > Management** in your IoT Central application.
+
+If the command succeeds, you see output that looks like the following:
+
+```output
+Command group 'iot central device' is in preview and under development. Reference and support levels: https://aka.ms/CLI_refstatus
+{
+ "hubIdentifier": "6bd4...bafa",
+ "message": "Success! This device is now being failed over. You can check your device'ΓÇÖ's status using 'iot central device registration-info' command. The device will revert to its original hub at Tue, 18 May 2021 11:03:45 GMT. You can choose to failback earlier using device-manual-failback command. Learn more: https://aka.ms/iotc-device-test"
+}
+```
+
+To learn more about the CLI command, see [az iot central device manual-failover](/cli/azure/iot/central/device#az_iot_central_device_manual_failover).
+
+You can now check that telemetry from the device still reaches your IoT Central application.
+
+> [!TIP]
+> To see sample device code that handles failovers in various programming languages, see [IoT Central high availability clients](/samples/azure-samples/iot-central-high-availability-clients/iotc-high-availability-clients/).
+
+## Next steps
+
+Some suggested next steps are to:
+
+- Complete the tutorial [Create and connect a client application to your Azure IoT Central application](tutorial-connect-device.md)
+- Review [Device authentication concepts in IoT Central](concepts-device-authentication.md)
+- Learn how to [Monitor device connectivity using Azure CLI](./howto-monitor-devices-azure-cli.md)
+- Read about [Azure IoT Edge devices and Azure IoT Central](./concepts-iot-edge.md)
iot-central Concepts Device Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-device-templates.md
A solution builder adds device templates to an IoT Central application. A device
A device template includes the following sections: -- _A device model_. This part of the device template defines how the device interacts with your application. A device developer implements the behaviors defined in the model.
- - _Root component_. Every device model has a root component. The root component's interface describes capabilities that are specific to the device model.
- - _Components_. A device model may include components in addition to the root component to describe device capabilities. Each component has an interface that describes the component's capabilities. Component interfaces may be reused in other device models. For example several phone device models could use the same camera interface.
- - _Inherited interfaces_. A device model contains one or more interfaces that extend the capabilities of the root component.
+- _A device model_. This part of the device template defines how the device interacts with your application. Every device model has a unique ID. A device developer implements the behaviors defined in the model.
+ - _Root component_. Every device model has a root component. The root component's interface describes capabilities that are specific to the device model.
+ - _Components_. A device model may include components in addition to the root component to describe device capabilities. Each component has an interface that describes the component's capabilities. Component interfaces may be reused in other device models. For example, several phone device models could use the same camera interface.
+ - _Inherited interfaces_. A device model contains one or more interfaces that extend the capabilities of the root component.
- _Cloud properties_. This part of the device template lets the solution developer specify any device metadata to store. Cloud properties are never synchronized with devices and only exist in the application. Cloud properties don't affect the code that a device developer writes to implement the device model. - _Customizations_. This part of the device template lets the solution developer override some of the definitions in the device model. Customizations are useful if the solution developer wants to refine how the application handles a value, such as changing the display name for a property or the color used to display a telemetry value. Customizations don't affect the code that a device developer writes to implement the device model. - _Views_. This part of the device template lets the solution developer define visualizations to view data from the device, and forms to manage and control a device. The views use the device model, cloud properties, and customizations. Views don't affect the code that a device developer writes to implement the device model.
+## Assign a device to a device template
+
+For a device to interact with IoT Central, it must be assigned to a device template. This assignment is done in one of four ways:
+
+- When you register a device on the **Devices** page, you can identify the template the device should use.
+- When you bulk import a list of devices, you can choose the device template all the devices on the list should use.
+- You can manually assign an unassigned device to a device template after it connects.
+- You can automatically assign a device to a device template by sending a model ID when the device first connects to your application.
+
+### Automatic assignment
+
+IoT Central can automatically assign a device to a device template when the device connects. A device should send a [model ID](../../iot-fundamentals/iot-glossary.md?toc=/azure/iot-central/toc.json&bc=/azure/iot-central/breadcrumb/toc.json#model-id) when it connects. IoT Central uses the model ID to identify the device template for that specific device model. The discovery process works as follows:
+
+1. If the device template is already published in the IoT Central application, the device is assigned to the device template.
+1. If the device template isn't already published in the IoT Central application, IoT Central looks for the device model in the [public model repository](https://github.com/Azure/iot-plugandplay-models). If IoT Central finds the model, it uses it to generate a basic device template.
+1. If IoT Central doesn't find the model in the public model repository, the device is marked as **Unassigned**. An operator can either create a device template for the device and then migrate the unassigned device to the new device template, or [autogenerate a device template](howto-set-up-template.md#autogenerate-a-device-template) based on the data the device sends.
+
+The following screenshot shows you how to view the model ID of a device template in IoT Central. In a device template, select a component, and then select **Edit identity**:
+
+You can view the [thermostat model](https://github.com/Azure/iot-plugandplay-models/blob/main/dtmi/com/example/thermostat-1.json) in the public model repository. The model ID definition looks like:
+
+```json
+"@id": "dtmi:com:example:Thermostat;1"
+```
+
+Use the following DPS payload to assign the device to a device template:
+
+```json
+{
+ "modelId":"dtmi:com:example:TemperatureController;2"
+}
+```
+
+To learn more about the DPS payload, see the sample code used in the [Tutorial: Create and connect a client application to your Azure IoT Central application](tutorial-connect-device.md).
+
 ## Device models

 A device model defines how a device interacts with your IoT Central application. The device developer must make sure that the device implements the behaviors defined in the device model so that IoT Central can monitor and manage the device. A device model is made up of one or more _interfaces_, and each interface can define a collection of _telemetry_ types, _device properties_, and _commands_. A solution developer can import a JSON file that defines the device model into a device template, or use the web UI in IoT Central to create or edit a device model.
iot-central Concepts Faq Scalability Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-faq-scalability-availability.md
Before a device connects to IoT Central, it must be registered and provisioned i
When a device first connects to your IoT Central application, DPS provisions the device in one of the enrollment group's linked IoT hubs. The device is then associated with that IoT hub. DPS uses an allocation policy to load balance the provisioning across the IoT hubs in the application. This process makes sure each IoT hub has a similar number of provisioned devices.
-To learn more about registration and provisioning in IoT Central, see [Get connected to Azure IoT Central](concepts-get-connected.md).
+To learn more about registration and provisioning in IoT Central, see [IoT Central device connectivity guide](overview-iot-central-developer.md#how-devices-connect).
### Device connections After DPS provisions a device to an IoT hub, the device always tries to connect to that hub. If a device can't reach the IoT hub it's provisioned to, it can't connect to your IoT Central application. To handle this scenario, your device firmware should include a retry strategy that reprovisions the device to another hub.
-To learn more about how device firmware should handle connection errors and connect to a different hub, see [Best practices](overview-iot-central-developer.md#best-practices).
+To learn more about how device firmware should handle connection errors and connect to a different hub, see [Best practices](concepts-device-implementation.md#best-practices).
-To learn more about how to verify your device firmware can handle connection failures, see [Test failover capabilities](overview-iot-central-developer.md#test-failover-capabilities).
+To learn more about how to verify your device firmware can handle connection failures, see [Test failover capabilities](concepts-device-implementation.md#test-failover-capabilities).
## Data export
iot-central Concepts Get Connected https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-get-connected.md
- Title: Device connectivity in Azure IoT Central | Microsoft Docs
-description: This article introduces key concepts relating to device connectivity in Azure IoT Central
- Previously updated : 12/21/2021
-# This article applies to operators and device developers.
--
-# Get connected to Azure IoT Central
-
-This article describes how devices connect to an Azure IoT Central application. Before a device can exchange data with IoT Central, it must:
--- *Authenticate*. Authentication with the IoT Central application uses either a _shared access signature (SAS) token_ or an _X.509 certificate_. X.509 certificates are recommended in production environments.-- *Register*. Devices must be registered with the IoT Central application. You can view registered devices on the **Devices** page in the application.-- *Associate with a device template*. In an IoT Central application, device templates define the UI that operators use to view and manage connected devices.-
-IoT Central supports the following two device registration scenarios:
--- *Automatic registration*. The device is registered automatically when it first connects. This scenario enables OEMs to mass manufacture devices that can connect without first being registered. An OEM generates suitable device credentials, and configures the devices in the factory. Optionally, you can require an operator to approve the device before it starts sending data. This scenario requires you to configure an X.509 or SAS _group enrollment_ in your application.-- *Manual registration*. Operators either register individual devices on the **Devices** page, or [import a CSV file](howto-manage-devices-in-bulk.md#import-devices) to bulk register devices. In this scenario you can use X.509 or SAS _group enrollment_, or X.509 or SAS _individual enrollment_.-
-Devices that connect to IoT Central should follow the *IoT Plug and Play conventions*. One of these conventions is that a device should send the _model ID_ of the device model it implements when it connects. The model ID enables the IoT Central application to associate the device with the correct device template.
-
-IoT Central uses the [Azure IoT Hub Device Provisioning service (DPS)](../../iot-dps/about-iot-dps.md) to manage the connection process. A device first connects to a DPS endpoint to retrieve the information it needs to connect to your application. Internally, your IoT Central application uses an IoT hub to handle device connectivity. Using DPS enables:
--- IoT Central to support onboarding and connecting devices at scale.-- You to generate device credentials and configure the devices offline without registering the devices through IoT Central UI.-- You to use your own device IDs to register devices in IoT Central. Using your own device IDs simplifies integration with existing back-office systems.-- A single, consistent way to connect devices to IoT Central.-
-This article describes the following device connection steps:
--- [X.509 group enrollment](#x509-group-enrollment)-- [SAS group enrollment](#sas-group-enrollment)-- [Individual enrollment](#individual-enrollment)-- [Device registration](#device-registration)-- [Associate a device with a device template](#associate-a-device-with-a-device-template)-
-## X.509 group enrollment
-
-In a production environment, using X.509 certificates is the recommended device authentication mechanism for IoT Central. To learn more, see [Device Authentication using X.509 CA Certificates](../../iot-hub/iot-hub-x509ca-overview.md).
-
-To connect a device with an X.509 certificate to your application:
-
-1. Create an *enrollment group* that uses the **Certificates (X.509)** attestation type.
-1. Add and verify an intermediate or root X.509 certificate in the enrollment group.
-1. Generate a leaf certificate from the root or intermediate certificate in the enrollment group. Send the leaf certificate from the device when it connects to your application.
-
-To learn more, see [How to connect devices with X.509 certificates](how-to-connect-devices-x509.md)
-
-### For testing purposes only
-
-For testing only, you can use the following utilities to generate root, intermediate, and device certificates:
--- [Tools for the Azure IoT Device Provisioning Device SDK](https://github.com/Azure/azure-iot-sdk-node/blob/main/provisioning/tools/readme.md): a collection of Node.js tools that you can use to generate and verify X.509 certificates and keys.-- [Manage test CA certificates for samples and tutorials](https://github.com/Azure/azure-iot-sdk-c/blob/master/tools/CACertificates/CACertificateOverview.md): a collection of PowerShell and Bash scripts to:
- - Create a certificate chain.
- - Save the certificates as .cer files to upload to your IoT Central application.
- - Use the verification code from the IoT Central application to generate the verification certificate.
- - Create leaf certificates for your devices using your device IDs as a parameter to the tool.
-
-## SAS group enrollment
-
-To connect a device with device SAS key to your application:
-
-1. Create an *enrollment group* that uses the **Shared Access Signature (SAS)** attestation type.
-1. Copy the group primary or secondary key from the enrollment group.
-1. Use the Azure CLI to generate a device key from the group key:
-
- ```azurecli
- az iot central device compute-device-key --primary-key <enrollment group primary key> --device-id <device ID>
- ```
-
-1. Use the generated device key when the device connects to your IoT Central application.
-
-> [!NOTE]
-> To use existing SAS keys in your enrollment groups, disable the **Auto generate keys** toggle and type-in the SAS keys.
-
-## Individual enrollment
-
-Customers connecting devices that each have their own authentication credentials, use individual enrollments. An individual enrollment is an entry for a single device that's allowed to connect. Individual enrollments can use either X.509 leaf certificates or SAS tokens (from a physical or virtual trusted platform module) as attestation mechanisms. A device ID can contain letters, numbers, and the `-` character. For more information, see [DPS individual enrollment](../../iot-dps/concepts-service.md#individual-enrollment).
-
-> [!NOTE]
-> When you create an individual enrollment for a device, it takes precedence over the default group enrollment options in your IoT Central application.
-
-### Create individual enrollments
-
-IoT Central supports the following attestation mechanisms for individual enrollments:
--- **Symmetric key attestation:** Symmetric key attestation is a simple approach to authenticating a device with the DPS instance. To create an individual enrollment that uses symmetric keys, open the **Device connection** page for the device, select **Individual enrollment** as the connection method, and **Shared access signature (SAS)** as the mechanism. Enter base64 encoded primary and secondary keys, and save your changes. Use the **ID scope**, **Device ID**, and either the primary or secondary key to connect your device.-
- > [!TIP]
- > For testing, you can use **OpenSSL** to generate base64 encoded keys: `openssl rand -base64 64`
--- **X.509 certificates:** To create an individual enrollment with X.509 certificates, open the **Device Connection** page, select **Individual enrollment** as the connection method, and **Certificates (X.509)** as the mechanism. Device certificates used with an individual enrollment entry have a requirement that the issuer and subject CN are set to the device ID.-
- > [!TIP]
- > For testing, you can use [Tools for the Azure IoT Device Provisioning Device SDK for Node.js](https://github.com/Azure/azure-iot-sdk-node/tree/main/provisioning/tools) to generate a self-signed certificate: `node create_test_cert.js device "mytestdevice"`
--- **Trusted Platform Module (TPM) attestation:** A [TPM](../../iot-dps/concepts-tpm-attestation.md) is a type of hardware security module. Using a TPM is one of the most secure ways to connect a device. This article assumes you're using a discrete, firmware, or integrated TPM. Software emulated TPMs are well suited for prototyping or testing, but they don't provide the same level of security as discrete, firmware, or integrated TPMs. Don't use software TPMs in production. To create an individual enrollment that uses a TPM, open the **Device Connection** page, select **Individual enrollment** as the connection method, and **TPM** as the mechanism. Enter the TPM endorsement key and save the device connection information.-
-## Device registration
-
-Before a device can connect to an IoT Central application, it must be registered in the application:
--- Devices can automatically register themselves when they first connect. To use this option, you must use either [X.509 group enrollment](#x509-group-enrollment) or [SAS group enrollment](#sas-group-enrollment).-- An operator can import a CSV file to bulk register a list of devices in the application.-- An operator can manually register an individual device on the **Devices** page in the application.-
-IoT Central enables OEMs to mass manufacture devices that can register themselves automatically. An OEM generates suitable device credentials, and configures the devices in the factory. When a customer turns on a device for the first time, it connects to DPS, which then automatically connects the device to the correct IoT Central application. Optionally, you can require an operator to approve the device before it starts sending data to the application.
-
-> [!TIP]
-> On the **Administration > Device connection** page, the **Auto approve** option controls whether an operator must manually approve the device before it can start sending data.
-
-### Automatically register devices that use X.509 certificates
-
-1. Generate the leaf-certificates for your devices using the root or intermediate certificate you added to your [X.509 enrollment group](#x509-group-enrollment). Use the device IDs as the `CNAME` in the leaf certificates. A device ID can contain letters, numbers, and the `-` character.
-
-1. As an OEM, flash each device with a device ID, a generated X.509 leaf-certificate, and the application **ID scope** value. The device code should also send the model ID of the device model it implements.
-
-1. When you switch on a device, it first connects to DPS to retrieve its IoT Central connection information.
-
-1. The device uses the information from DPS to connect to, and register with, your IoT Central application.
-
-The IoT Central application uses the model ID sent by the device to [associate the registered device with a device template](#associate-a-device-with-a-device-template).
-
-### Automatically register devices that use SAS tokens
-
-1. Copy the group primary key from the **SAS-IoT-Devices** enrollment group:
-
- :::image type="content" source="media/concepts-get-connected/group-primary-key.png" alt-text="Group primary key from SAS-IoT-Devices enrollment group":::
-
-1. Use the `az iot central device compute-device-key` command to generate the device SAS keys. Use the group primary key from the previous step. The device ID can contain letters, numbers, and the `-` character:
-
- ```azurecli
- az iot central device compute-device-key --primary-key <enrollment group primary key> --device-id <device ID>
- ```
-
-1. As an OEM, flash each device with the device ID, the generated device SAS key, and the application **ID scope** value. The device code should also send the model ID of the device model it implements.
-
-1. When you switch on a device, it first connects to DPS to retrieve its IoT Central registration information.
-
-1. The device uses the information from DPS to connect to, and register with, your IoT Central application.
-
-The IoT Central application uses the model ID sent by the device to [associate the registered device with a device template](#associate-a-device-with-a-device-template).
-
-### Bulk register devices in advance
-
-To register a large number of devices with your IoT Central application, use a CSV file to [import device IDs and device names](howto-manage-devices-in-bulk.md#import-devices).
-
-If your devices use SAS tokens to authenticate, [export a CSV file from your IoT Central application](howto-manage-devices-in-bulk.md#export-devices). The exported CSV file includes the device IDs and the SAS keys.
-
-If your devices use X.509 certificates to authenticate, generate X.509 leaf certificates for your devices using the root or intermediate certificate you uploaded to your X.509 enrollment group. Use the device IDs you imported as the `CNAME` value in the leaf certificates.
-
-Devices must use the **ID Scope** value for your application and send a model ID when they connect.
-
-> [!TIP]
-> You can find the **ID Scope** value in **Administration > Device connection**.
-
-### Register a single device in advance
-
-This approach is useful when you're experimenting with IoT Central or testing devices. Select **+ New** on the **Devices** page to register an individual device. You can use the device connection SAS keys to connect the device to your IoT Central application. Copy the _device SAS key_ from the connection information for a registered device:
-
-![SAS keys for an individual device](./media/concepts-get-connected/single-device-sas.png)
-
-## Associate a device with a device template
-
-IoT Central automatically associates a device with a device template when the device connects. A device sends a [model ID](../../iot-fundamentals/iot-glossary.md?toc=/azure/iot-central/toc.json&bc=/azure/iot-central/breadcrumb/toc.json#model-id) when it connects. IoT Central uses the model ID to identify the device template for that specific device model. The discovery process works as follows:
-
-1. If the device template is already published in the IoT Central application, the device is associated with the device template.
-1. If the device template isn't already published in the IoT Central application, IoT Central looks for the device model in the [public model repository](https://github.com/Azure/iot-plugandplay-models). If IoT Central finds the model, it uses it to generate a basic device template.
-1. If IoT Central doesn't find the model in the public model repository, the device is marked as **Unassociated**. An operator can either create a device template for the device and then migrate the unassociated device to the new device template, or [autogenerate a device template](howto-set-up-template.md#autogenerate-a-device-template) based on the data the device sends.
-
-The following screenshot shows you how to view the model ID of a device template in IoT Central. In a device template, select a component, and then select **Edit identity**:
--
-You can view the [thermostat model](https://github.com/Azure/iot-plugandplay-models/blob/main/dtmi/com/example/thermostat-1.json) in the public model repository. The model ID definition looks like:
-
-```json
-"@id": "dtmi:com:example:Thermostat;1"
-```
-
-Use the following DPS payload to associate the device to a device template:
-
-```json
-{
- "modelId":"dtmi:com:example:TemperatureController;2"
-}
-```
-
-To learn more about the DPS payload, see the sample code used in the [Tutorial: Create and connect a client application to your Azure IoT Central application](tutorial-connect-device.md).
-
-## Device status values
-
-When a real device connects to your IoT Central application, its device status changes as follows:
-
-1. The device status is first **Registered**. This status means the device is created in IoT Central, and has a device ID. A device is registered when:
- - A new real device is added on the **Devices** page.
- - A set of devices is added using **Import** on the **Devices** page.
-
-1. The device status changes to **Provisioned** when the device that connected to your IoT Central application with valid credentials completes the provisioning step. In this step, the device uses DPS to automatically retrieve a connection string from the IoT Hub used by your IoT Central application. The device can now connect to IoT Central and start sending data.
-
-1. An operator can block a device. When a device is blocked, it can't send data to your IoT Central application. Blocked devices have a status of **Blocked**. An operator must reset the device before it can resume sending data. When an operator unblocks a device the status returns to its previous value, **Registered** or **Provisioned**.
-
-1. If the device status is **Waiting for Approval**, it means the **Auto approve** option is disabled. An operator must explicitly approve a device before it starts sending data. Devices not registered manually on the **Devices** page, but connected with valid credentials will have the device status **Waiting for Approval**. Operators can approve these devices from the **Devices** page using the **Approve** button.
-
-1. If the device status is **Unassociated**, it means the device connecting to IoT Central doesn't have an associated device template. This situation typically happens in the following scenarios:
-
- - A set of devices is added using **Import** on the **Devices** page without specifying the device template.
- - A device was registered manually on the **Devices** page without specifying the device template. The device then connected with valid credentials.
-
- An operator can associate a device to a device template from the **Devices** page using the **Migrate** button.
-
-## Device connection status
-
-When a device or edge device connects using the MQTT protocol, _connected_ and _disconnected_ events for the device are generated. These events are not sent by the device, they are generated internally by IoT Central.
-
-The following diagram shows how, when a device connects, the connection is registered at the end of a time window. If multiple connection and disconnection events occur, IoT Central registers the one that's closest to the end of the time window. For example, if a device disconnects and reconnects within the time window, IoT Central registers the connection event. Currently, the time window is approximately one minute.
--
-Watch the following video to learn more about how to monitor device connection status:
-
-> [!VIDEO https://www.youtube.com/embed/EUZH_6Ihtto]
-
-You can include connection and disconnection events in [exports from IoT Central](howto-export-data.md#set-up-a-data-export). To learn more, see [React to IoT Hub events > Limitations for device connected and device disconnected events](../../iot-hub/iot-hub-event-grid.md#limitations-for-device-connected-and-device-disconnected-events).
-
-## SDK support
-
-The Azure Device SDKs offer the easiest way for you to implement your device code. The following device SDKs are available:
--- [Azure IoT SDK for C](https://github.com/azure/azure-iot-sdk-c)-- [Azure IoT SDK for Python](https://github.com/azure/azure-iot-sdk-python)-- [Azure IoT SDK for Node.js](https://github.com/azure/azure-iot-sdk-node)-- [Azure IoT SDK for Java](https://github.com/azure/azure-iot-sdk-java)-- [Azure IoT SDK for .NET](https://github.com/azure/azure-iot-sdk-csharp)-
-### SDK features and IoT Hub connectivity
-
-All device communication with IoT Hub uses the following IoT Hub connectivity options:
--- [Device-to-cloud messaging](../../iot-hub/iot-hub-devguide-messages-d2c.md)-- [Cloud-to-device messaging](../../iot-hub/iot-hub-devguide-messages-c2d.md)-- [Device twins](../../iot-hub/iot-hub-devguide-device-twins.md)-
-The following table summarizes how Azure IoT Central device features map on to IoT Hub features:
-
-| Azure IoT Central | Azure IoT Hub |
-| -- | - |
-| Telemetry | Device-to-cloud messaging |
-| Offline commands | Cloud-to-device messaging |
-| Property | Device twin reported properties |
-| Property (writable) | Device twin desired and reported properties |
-| Command | Direct methods |
-
-### Protocols
-
-The Device SDKs support the following network protocols for connecting to an IoT hub:
--- MQTT-- AMQP-- HTTPS-
-For information about these different protocols and guidance on choosing one, see [Choose a communication protocol](../../iot-hub/iot-hub-devguide-protocols.md).
-
-If your device can't use any of the supported protocols, use Azure IoT Edge to do protocol conversion. IoT Edge supports other intelligence-on-the-edge scenarios to offload processing from the Azure IoT Central application.
-
-## Security
-
-All data exchanged between devices and your Azure IoT Central is encrypted. IoT Hub authenticates every request from a device that connects to any of the device-facing IoT Hub endpoints. To avoid exchanging credentials over the wire, a device uses signed tokens to authenticate. For more information, see, [Control access to IoT Hub](../../iot-hub/iot-hub-devguide-security.md).
-
-## Next steps
-
-Some suggested next steps are to:
--- Review [best practices](overview-iot-central-developer.md#best-practices) for developing devices.-- Review some sample code that shows how to use SAS tokens in [Tutorial: Create and connect a client application to your Azure IoT Central application](tutorial-connect-device.md)-- Learn how to [How to connect devices with X.509 certificates using Node.js device SDK for IoT Central Application](how-to-connect-devices-x509.md)-- Learn how to [Monitor device connectivity using Azure CLI](./howto-monitor-devices-azure-cli.md)-- Read about [Azure IoT Edge devices and Azure IoT Central](./concepts-iot-edge.md)
iot-central Concepts Iot Edge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-iot-edge.md
An IoT Edge device can be:
## IoT Edge devices and IoT Central
-IoT Edge devices can use *shared access signature* tokens or X.509 certificates to authenticate with IoT Central. You can manually register your IoT Edge devices in IoT Central before they connect for the first time, or use the Device Provisioning Service to handle the registration. To learn more, see [Get connected to Azure IoT Central](concepts-get-connected.md).
+IoT Edge devices can use *shared access signature* tokens or X.509 certificates to authenticate with IoT Central. You can manually register your IoT Edge devices in IoT Central before they connect for the first time, or use the Device Provisioning Service to handle the registration. To learn more, see [How devices connect](overview-iot-central-developer.md#how-devices-connect).
IoT Central uses [device templates](concepts-device-templates.md) to define how IoT Central interacts with a device. For example, a device template specifies:
iot-central Concepts Telemetry Properties Commands https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-telemetry-properties-commands.md
If you enable the **Queue if offline** option in the device template UI for the
## Next steps
-Now that you've learned about device templates, a suggested next steps is to read [Get connected to Azure IoT Central](./concepts-get-connected.md) to learn more about how to register devices with IoT Central and how IoT Central secures device connections.
+Now that you've learned about device templates, a suggested next step is to read the [IoT Central device connectivity guide](overview-iot-central-developer.md) to learn more about how to register devices with IoT Central and how IoT Central secures device connections.
iot-central How To Connect Devices X509 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/how-to-connect-devices-x509.md
zone_pivot_groups: programming-languages-set-ten
# How to connect devices with X.509 certificates to IoT Central Application
-IoT Central supports both shared access signatures (SAS) and X.509 certificates to secure the communication between a device and your application. The [Create and connect a client application to your Azure IoT Central application](./tutorial-connect-device.md) tutorial uses SAS. In this article, you learn how to modify the code sample to use X.509 certificates. X.509 certificates are recommended in production environments. For more information, see [Get connected to Azure IoT Central](./concepts-get-connected.md).
+IoT Central supports both shared access signatures (SAS) and X.509 certificates to secure the communication between a device and your application. The [Create and connect a client application to your Azure IoT Central application](./tutorial-connect-device.md) tutorial uses SAS. In this article, you learn how to modify the code sample to use X.509 certificates. X.509 certificates are recommended in production environments. For more information, see [Device authentication concepts](concepts-device-authentication.md).
This guide shows two ways to use X.509 certificates - [group enrollments](how-to-connect-devices-x509.md#use-group-enrollment) typically used in a production environment, and [individual enrollments](how-to-connect-devices-x509.md#use-individual-enrollment) useful for testing. The article also describes how to [roll device certificates](#roll-x509-device-certificates) to maintain connectivity when certificates expire.
iot-central Howto Build Iotc Device Bridge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-build-iotc-device-bridge.md
# Mandatory fields. See more on aka.ms/skyeye/meta. Title: Deploy the Azure IoT Central device bridge | Microsoft Docs
-description: Deploy the IoT Central device bridge to connect other IoT clouds to your IoT Central app. Other IoT clouds include Sigfox, Particle Device Cloud, and The Things Network.
+description: Deploy the IoT Central device bridge to connect other IoT clouds to your IoT Central app. Examples of other IoT clouds include Sigfox, Particle Device Cloud, and The Things Network.
The function app transforms the data into a format accepted by IoT Central and f
:::image type="content" source="media/howto-build-iotc-device-bridge/azure-function.png" alt-text="Screenshot of Azure Functions.":::
-If your IoT Central application recognizes the device ID in the forwarded message, the telemetry from the device appears in IoT Central. If the device ID isn't recognized by your IoT Central application, the function app attempts to register a new device with the device ID. The new device appears as an **Unassociated device** on the **Devices** page in your IoT Central application. From the **Devices** page, you can associate the new device with a device template and then view the telemetry.
+If your IoT Central application recognizes the device ID in the forwarded message, the telemetry from the device appears in IoT Central. If the device ID isn't recognized by your IoT Central application, the function app attempts to register a new device with the device ID. The new device appears as an **Unassigned** device on the **Devices** page in your IoT Central application. From the **Devices** page, you can assign the new device to a device template and then view the telemetry.
## Deploy the device bridge
Each key in the `measurements` object must match the name of a telemetry type in
You can include a `timestamp` field in the body to specify the UTC date and time of the message. This field must be in ISO 8601 format. For example, `2020-06-08T20:16:54.602Z`. If you don't include a timestamp, the current date and time is used.
-You can include a `modelId` field in the body. Use this field to associate the device with a device template during provisioning. This functionality is only supported by [V3 applications](howto-faq.yml#how-do-i-get-information-about-my-application-).
+You can include a `modelId` field in the body. Use this field to assign the device to a device template during provisioning. This functionality is only supported by [V3 applications](howto-faq.yml#how-do-i-get-information-about-my-application-).
The `deviceId` must be alphanumeric, lowercase, and may contain hyphens.
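
Putting these fields together, a minimal POST to the device bridge function looks like the following Python sketch. The function URL and field values are placeholders; the `device`/`measurements` envelope follows the device bridge sample's message format:

```python
import requests

FUNCTION_URL = "<device bridge function URL>"  # placeholder

body = {
    "device": {"deviceId": "my-cloud-device-01"},
    "measurements": {"temperature": 21.5},
    "timestamp": "2020-06-08T20:16:54.602Z",    # optional; ISO 8601, UTC
    "modelId": "dtmi:com:example:Thermostat;1", # optional; assigns a device template
}

response = requests.post(FUNCTION_URL, json=body)
# A 403 response means the device isn't assigned to a device template yet.
response.raise_for_status()
```
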
-If you don't include the `modelId` field, or if IoT Central doesn't recognize the model ID, then a message with an unrecognized `deviceId` creates a new _unassociated device_ in IoT Central. An operator can manually migrate the device to the correct device template. To learn more, see [Manage devices in your Azure IoT Central application > Migrating devices to a template](howto-manage-devices-individually.md).
+If you don't include the `modelId` field, or if IoT Central doesn't recognize the model ID, then a message with an unrecognized `deviceId` creates a new _unassigned device_ in IoT Central. An operator can manually migrate the device to the correct device template. To learn more, see [Manage devices in your Azure IoT Central application > Migrating devices to a template](howto-manage-devices-individually.md).
-In [V2 applications](howto-faq.yml#how-do-i-get-information-about-my-application-), the new device appears on the **Device Explorer > Unassociated devices** page. Select **Associate** and choose a device template to start receiving incoming telemetry from the device.
+In [V2 applications](howto-faq.yml#how-do-i-get-information-about-my-application-), the new device appears as an unassigned device on the **Devices** page. Select **Assign template** and choose a device template to start receiving incoming telemetry from the device.
> [!NOTE]
-> Until the device is associated to a template, all HTTP calls to the function return a 403 error status.
+> Until the device is assigned to a template, all HTTP calls to the function return a 403 error status.
To switch on logging for the function app with Application Insights, navigate to **Monitoring > Logs** in your function app in the Azure portal. Select **Turn on Application Insights**.
The Resource Manager template provisions the following resources in your Azure s
The key vault stores the SAS group key for your IoT Central application.
-The function app runs on a [consumption plan](https://azure.microsoft.com/pricing/details/functions/). While this option doesn't offer dedicated compute resources, it enables the device bridge to handle hundreds of device messages per minute, suitable for smaller fleets of devices or devices that send messages less frequently. If your application depends on streaming a large number of device messages, replace the consumption plan with a dedicated a [App service plan](https://azure.microsoft.com/pricing/details/app-service/windows/). This plan offers dedicated compute resources, which give faster server response times. Using a standard App Service Plan, the maximum observed performance of the function from Azure in this repository was around 1,500 device messages per minute. To learn more, see [Azure Functions hosting options](../../azure-functions/functions-scale.md).
+The function app runs on a [consumption plan](https://azure.microsoft.com/pricing/details/functions/). While this option doesn't offer dedicated compute resources, it enables the device bridge to handle hundreds of device messages per minute, suitable for smaller fleets of devices or devices that send messages less frequently. If your application depends on streaming a large number of device messages, replace the consumption plan with a dedicated [App service plan](https://azure.microsoft.com/pricing/details/app-service/windows/). This plan offers dedicated compute resources, which give faster server response times. Using a standard App Service Plan, the maximum observed performance of the function from Azure in this repository was around 1,500 device messages per minute. To learn more, see [Azure Functions hosting options](../../azure-functions/functions-scale.md).
To use a dedicated App Service plan instead of a consumption plan, edit the custom template before deploying. Select **Edit template**.
To connect a Particle device through the device bridge to IoT Central, go to the
} ```
-Paste in the **function URL** from your function app, and you see Particle devices appear as unassociated devices in IoT Central. To learn more, see the [Here's how to integrate your Particle-powered projects with Azure IoT Central](https://blog.particle.io/2019/09/26/integrate-particle-with-azure-iot-central/) blog post.
+Paste in the **function URL** from your function app, and you see Particle devices appear as unassigned devices in IoT Central. To learn more, see the [Here's how to integrate your Particle-powered projects with Azure IoT Central](https://blog.particle.io/2019/09/26/integrate-particle-with-azure-iot-central/) blog post.
### Example 2: Connecting Sigfox devices through the device bridge
iot-central Howto Connect Rigado Cascade 500 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-connect-rigado-cascade-500.md
You're now ready to use your C500 device in your IoT Central application.
Some suggested next steps are to: -- Read about [Device connectivity in Azure IoT Central](./concepts-get-connected.md)
+- Read about [How devices connect](overview-iot-central-developer.md#how-devices-connect)
- Learn how to [Monitor device connectivity using Azure CLI](./howto-monitor-devices-azure-cli.md)
iot-central Howto Connect Ruuvi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-connect-ruuvi.md
To create a simulated RuuviTag:
Some suggested next steps are to: -- Read about [Device connectivity in Azure IoT Central](./concepts-get-connected.md)
+- Read about [how devices connect](overview-iot-central-developer.md#how-devices-connect)
- Learn how to [Monitor device connectivity using Azure CLI](./howto-monitor-devices-azure-cli.md)
iot-central Howto Export Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-export-data.md
Each exported message contains a normalized form of the full message the device
- `messageSource`: The source for the message - `telemetry`. - `deviceId`: The ID of the device that sent the telemetry message. - `schema`: The name and version of the payload schema.-- `templateId`: The ID of the device template associated with the device.
+- `templateId`: The ID of the device template assigned to the device.
- `enqueuedTime`: The time at which this message was received by IoT Central. - `enrichments`: Any enrichments set up on the export. - `module`: The IoT Edge module that sent this message. This field only appears if the message came from an IoT Edge module.
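To make these fields concrete, the following sketch shows what a normalized telemetry record might look like, expressed as a Python dictionary. Every value is illustrative only; the `module` field is omitted because this hypothetical message doesn't come from an IoT Edge module.

```python
# Hypothetical normalized telemetry record; all values are illustrative only.
exported_record = {
    "messageSource": "telemetry",
    "deviceId": "sample-device-01",             # ID of the device that sent the message
    "schema": "default@v1",                     # name and version of the payload schema
    "templateId": "dtmi:example:thermostat;1",  # device template assigned to the device
    "enqueuedTime": "2022-03-10T18:00:00Z",     # when IoT Central received the message
    "enrichments": {"plant": "Redmond"},        # enrichments set up on the export, if any
    "telemetry": {"temperature": 21.5},         # the telemetry payload itself
}
```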
Each message or record represents changes to device and cloud properties. Inform
- `deviceId`: The ID of the device that sent the telemetry message. - `schema`: The name and version of the payload schema. - `enqueuedTime`: The time at which this change was detected by IoT Central.-- `templateId`: The ID of the device template associated with the device.
+- `templateId`: The ID of the device template assigned to the device.
- `properties`: An array of properties that changed, including the names of the properties and values that changed. The component and module information is included if the property is modeled within a component or an IoT Edge module. - `enrichments`: Any enrichments set up on the export.
Each message or record represents a connectivity event from a single device. Inf
- `messageType`: Either `connected` or `disconnected`. - `deviceId`: The ID of the device that was changed. - `schema`: The name and version of the payload schema.-- `templateId`: The ID of the device template associated with the device.
+- `templateId`: The ID of the device template assigned to the device.
- `enqueuedTime`: The time at which this change occurred in IoT Central. - `enrichments`: Any enrichments set up on the export.
Each message or record represents one change to a single device. Information in
- `messageType`: The type of change that occurred. One of: `registered`, `deleted`, `provisioned`, `enabled`, `disabled`, `displayNameChanged`, and `deviceTemplateChanged`. - `deviceId`: The ID of the device that was changed. - `schema`: The name and version of the payload schema.-- `templateId`: The ID of the device template associated with the device.
+- `templateId`: The ID of the device template assigned to the device.
- `enqueuedTime`: The time at which this change occurred in IoT Central. - `enrichments`: Any enrichments set up on the export.
Each message or record represents one change to a single published device templa
- `messageSource`: The source for the message - `deviceTemplateLifecycle`. - `messageType`: Either `created`, `updated`, or `deleted`. - `schema`: The name and version of the payload schema.-- `templateId`: The ID of the device template associated with the device.
+- `templateId`: The ID of the device template assigned to the device.
- `enqueuedTime`: The time at which this change occurred in IoT Central. - `enrichments`: Any enrichments set up on the export.
iot-central Howto Manage Devices In Bulk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-devices-in-bulk.md
Enter a job name and description, and then select **Rerun job**. A new job is su
## Import devices
-To connect large number of devices to your application, you can bulk import devices from a CSV file. You can find an example CSV file in the [Azure Samples repository](https://github.com/Azure-Samples/iot-central-docs-samples/tree/master/bulk-upload-devices). The CSV file should include the following column headers:
+To register a large number of devices to your application, you can bulk import devices from a CSV file. You can find an example CSV file in the [Azure Samples repository](https://github.com/Azure-Samples/iot-central-docs-samples/tree/master/bulk-upload-devices). The CSV file should include the following column headers:
| Column | Description | | - | - |
To bulk-register devices in your application:
If the device import operation fails, you see an error message on the **Device Operations** panel. A log file capturing all the errors is generated that you can download.
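If you generate the import file programmatically, a minimal sketch like the following may help. The `IOTC_DEVICEID` and `IOTC_DEVICENAME` column headers are assumed from the sample CSV file in the Azure Samples repository; check that sample for the full set of supported columns.

```python
import csv

# Minimal sketch: build a bulk-import CSV for IoT Central.
# Column headers are assumed from the sample file in the Azure Samples repository.
devices = [
    ("sample-device-01", "Building 1 thermostat"),
    ("sample-device-02", "Building 2 thermostat"),
]

with open("devices.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["IOTC_DEVICEID", "IOTC_DEVICENAME"])
    writer.writerows(devices)
```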
+If your devices use SAS tokens to authenticate, [export a CSV file from your IoT Central application](#export-devices). The exported CSV file includes the device IDs and the SAS keys.
+
+If your devices use X.509 certificates to authenticate, generate X.509 leaf certificates for your devices using the root or intermediate certificate in your X.509 enrollment group. Use the device IDs you imported as the `CNAME` value in the leaf certificates.
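As a sketch of that step, the following Python code uses the `cryptography` package to issue a leaf certificate signed by an intermediate certificate, with the certificate common name set to the device ID. The file names and the one-year validity period are assumptions for illustration.

```python
import datetime

from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

device_id = "sample-device-01"  # must match the device ID you imported

# Load the signing (root or intermediate) certificate and its private key.
with open("intermediate-cert.pem", "rb") as f:
    ca_cert = x509.load_pem_x509_certificate(f.read())
with open("intermediate-key.pem", "rb") as f:
    ca_key = serialization.load_pem_private_key(f.read(), password=None)

# Generate a key pair for the device and issue a leaf certificate for it.
device_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
cert = (
    x509.CertificateBuilder()
    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, device_id)]))
    .issuer_name(ca_cert.subject)
    .public_key(device_key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.datetime.utcnow())
    .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=365))
    .sign(ca_key, hashes.SHA256())
)

with open(f"{device_id}-cert.pem", "wb") as f:
    f.write(cert.public_bytes(serialization.Encoding.PEM))
with open(f"{device_id}-key.pem", "wb") as f:
    f.write(device_key.private_bytes(
        serialization.Encoding.PEM,
        serialization.PrivateFormat.TraditionalOpenSSL,
        serialization.NoEncryption(),
    ))
```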
++ ## Export devices To connect a real device to IoT Central, you need its connection string. You can export device details in bulk to get the information you need to create device connection strings. The export process creates a CSV file with the device identity, device name, and keys for all the selected devices.
To bulk export devices from your application:
* IOTC_X509THUMBPRINT_PRIMARY * IOTC_X509THUMBPRINT_SECONDARY
-For more information about connecting real devices to your IoT Central application, see [Device connectivity in Azure IoT Central](concepts-get-connected.md).
+For more information about connecting real devices to your IoT Central application, see [How devices connect](overview-iot-central-developer.md#how-devices-connect).
## Next steps
iot-central Howto Manage Devices Individually https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-devices-individually.md
Title: Manage devices individually in your Azure IoT Central application | Microsoft Docs
-description: Learn how to manage devices individually in your Azure IoT Central application. Create, delete, and update devices.
+description: Learn how to manage devices individually in your Azure IoT Central application. Monitor, manage, create, delete, and update devices.
Previously updated : 12/27/2021 Last updated : 03/02/2022
To view an individual device:
> [!TIP] > You can use the filter tool on this page to view devices in a specific organization.
+## Monitor your devices
+
+Use the **Devices** page to monitor and manage your devices.
+
+### Device status values
+
+When a device connects to your IoT Central application, its device status changes as follows:
+
+1. The device status is first **Registered**. This status means the device is created in IoT Central, and has a device ID. A device is registered when:
+ - A new real device is added on the **Devices** page.
+ - A set of devices is added using **Import** on the **Devices** page.
+
+1. The device status changes to **Provisioned** when a device that connected to your IoT Central application with valid credentials completes the provisioning step. In this step, the device uses DPS to automatically retrieve a connection string from the IoT hub used by your IoT Central application. The device can now connect to IoT Central and start sending data.
+
+1. An operator can block a device. When a device is blocked, it can't send data to your IoT Central application. Blocked devices have a status of **Blocked**. An operator must unblock the device before it can resume sending data. When an operator unblocks a device, the status returns to its previous value, **Registered** or **Provisioned**.
+
+1. If the device status is **Waiting for Approval**, it means the **Auto approve** option is disabled. An operator must explicitly approve a device before it starts sending data. Devices that aren't registered manually on the **Devices** page but connect with valid credentials have the device status **Waiting for Approval**. Operators can approve these devices from the **Devices** page using the **Approve** button.
+
+1. If the device status is **Unassigned**, it means the device connecting to IoT Central isn't assigned to a device template. This situation typically happens in the following scenarios:
+
+ - A set of devices is added using **Import** on the **Devices** page without specifying the device template.
+ - A device was registered manually on the **Devices** page without specifying the device template. The device then connected with valid credentials.
+
+ An operator can assign a device to a device template from the **Devices** page using the **Migrate** button.
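If you monitor devices programmatically instead of in the UI, you can read a device record through the IoT Central REST API. The following is a minimal sketch; the application subdomain, API version, token value, and the exact response fields shown are assumptions for illustration.

```python
import requests

app_subdomain = "my-iot-central-app"        # hypothetical application subdomain
device_id = "sample-device-01"
api_token = "SharedAccessSignature sr=..."  # an API token from Administration > API tokens

url = f"https://{app_subdomain}.azureiotcentral.com/api/devices/{device_id}"
response = requests.get(
    url,
    params={"api-version": "1.0"},
    headers={"Authorization": api_token},
)
response.raise_for_status()

device = response.json()
# The device record typically includes flags such as 'provisioned' and 'enabled'.
print(device.get("provisioned"), device.get("enabled"))
```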
+
+### Device connection status
+
+When a device or edge device connects using the MQTT protocol, _connected_ and _disconnected_ events for the device are generated. These events aren't sent by the device; IoT Central generates them internally.
+
+The following diagram shows how, when a device connects, the connection is registered at the end of a time window. If multiple connection and disconnection events occur, IoT Central registers the one that's closest to the end of the time window. For example, if a device disconnects and reconnects within the time window, IoT Central registers the connection event. Currently, the time window is approximately one minute.
++
+Watch the following video to learn more about how to monitor device connection status:
+
+> [!VIDEO https://www.youtube.com/embed/EUZH_6Ihtto]
+
+You can include connection and disconnection events in [exports from IoT Central](howto-export-data.md#set-up-a-data-export). To learn more, see [React to IoT Hub events > Limitations for device connected and device disconnected events](../../iot-hub/iot-hub-event-grid.md#limitations-for-device-connected-and-device-disconnected-events).
+ ## Add a device To add a device to your Azure IoT Central application:
To move a device to a different organization, you must have access to both the s
## Migrate devices to a template
-If you register devices by starting the import under **All devices**, then the devices are created without any device template association. Devices must be associated with a template to explore the data and other details about the device. Follow these steps to associate devices with a template:
+If you register devices by starting the import under **All devices**, then the devices are created without any device template association. Devices must be assigned to a template to explore the data and other details about the device. Follow these steps to assign devices to a template:
1. Choose **Devices** on the left pane. 1. On the left panel, choose **All devices**:
- :::image type="content" source="media/howto-manage-devices-individually/unassociated-devices-1.png" alt-text="Screenshot showing unassociated devices.":::
+ :::image type="content" source="media/howto-manage-devices-individually/unassociated-devices-1.png" alt-text="Screenshot showing unassigned devices.":::
-1. Use the filter on the grid to determine if the value in the **Device Template** column is **Unassociated** for any of your devices.
+1. Use the filter on the grid to determine if the value in the **Device Template** column is **Unassigned** for any of your devices.
-1. Select the devices you want to associate with a template:
+1. Select the devices you want to assign to a template:
1. Select **Migrate**:
- :::image type="content" source="media/howto-manage-devices-individually/unassociated-devices-2.png" alt-text="Screenshot showing how to associate a device.":::
+ :::image type="content" source="media/howto-manage-devices-individually/unassociated-devices-2.png" alt-text="Screenshot showing how to assign a device.":::
1. Choose the template from the list of available templates and select **Migrate**.
-1. The selected devices are associated with the device template you chose.
+1. The selected devices are assigned to the device template you chose.
## Delete a device
iot-central Overview Iot Central Admin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/overview-iot-central-admin.md
To learn more, see [Create an IoT Central organization](howto-create-organizatio
Devices that connect to your IoT Central application typically use X.509 certificates or shared access signatures (SAS) as credentials. An administrator manages the group certificates or keys that these device credentials are derived from. To learn more, see: -- [X.509 group enrollment](concepts-get-connected.md#x509-group-enrollment)-- [SAS group enrollment](concepts-get-connected.md#sas-group-enrollment)
+- [X.509 group enrollment](concepts-device-authentication.md#x509-enrollment-group)
+- [SAS group enrollment](concepts-device-authentication.md#sas-enrollment-group)
- [How to roll X.509 device certificates](how-to-connect-devices-x509.md). An administrator can also create and manage the API tokens that a client application uses to authenticate with your IoT Central application. Client applications use the REST API to interact with IoT Central. To learn more, see:
iot-central Overview Iot Central Developer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/overview-iot-central-developer.md
Title: Azure IoT Central device connectivity guide | Microsoft Docs
-description: Azure IoT Central is an IoT application platform that simplifies the creation of IoT solutions. This guide describes how to connect IoT devices to your IoT Central application. After a device connects, it uses telemetry to send streaming data and properties to report device state. Iot Central can set device state using writable properties and call commands on a device. This article outlines best practices for device connectivity.
+description: Azure IoT Central is an IoT application platform that simplifies the creation of IoT solutions. This guide describes how IoT devices connect to your IoT Central application. After a device connects, it uses telemetry to send streaming data and properties to report device state. IoT Central can set device state using writable properties and call commands on a device.
Previously updated : 01/28/2022 Last updated : 03/02/2022
To learn more, see [Add an Azure IoT Edge device to your Azure IoT Central appli
A gateway device manages one or more downstream devices that connect to your IoT Central application. A gateway device can process the telemetry from the downstream devices before it's forwarded to your IoT Central application. Both IoT devices and IoT Edge devices can act as gateways. To learn more, see [Define a new IoT gateway device type in your Azure IoT Central application](./tutorial-define-gateway-device-type.md) and [How to connect devices through an IoT Edge transparent gateway](how-to-connect-iot-edge-transparent-gateway.md).
-## Connect a device
+## How devices connect
-Azure IoT Central uses the [Azure IoT Hub Device Provisioning service (DPS)](../../iot-dps/about-iot-dps.md) to manage all device registration and connection.
+As you connect a device to IoT Central, it goes through the following stages: _registered_, _provisioned_, and _connected_.
+
+To learn how to monitor the status of a device, see [Monitor your devices](howto-manage-devices-individually.md#monitor-your-devices).
+
+### Register a device
+
+When you register a device with IoT Central, you're telling IoT Central the ID of a device that you want to connect to the application. Optionally at this stage, you can assign the device to a [device template](concepts-device-templates.md) that declares the capabilities of the device to your application.
+
+> [!TIP]
+> A device ID can contain letters, numbers, and the `-` character.
+
+There are three ways to register a device in an IoT Central application:
+
+- Use the **Devices** page in your IoT Central application to register devices individually. To learn more, see [Add a device](howto-manage-devices-individually.md#add-a-device).
+- Add devices in bulk from a CSV file. To learn more, see [Import devices](howto-manage-devices-in-bulk.md#import-devices).
+- Automatically register devices when they first try to connect. This scenario enables OEMs to mass manufacture devices that can connect without first being registered. To learn more, see [Automatically register devices](concepts-device-authentication.md#automatically-register-devices).
+
+ Optionally, you can require an operator to approve the device before it starts sending data.
+
+ > [!TIP]
+ > On the **Administration > Device connection** page, the **Auto approve** option controls whether an operator must manually approve the device before it can start sending data.
+
+You only need to register a device once in your IoT Central application.
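Beyond the options above, you can also automate registration through the IoT Central REST API. The following is a minimal sketch; the URL shape, API version, and template ID are assumptions for illustration, and the token is a placeholder.

```python
import requests

app_subdomain = "my-iot-central-app"        # hypothetical application subdomain
device_id = "sample-device-01"
api_token = "SharedAccessSignature sr=..."  # an API token from Administration > API tokens

body = {
    "displayName": "Building 1 thermostat",
    "template": "dtmi:example:thermostat;1",  # optional: assign a device template now
    "simulated": False,
}

url = f"https://{app_subdomain}.azureiotcentral.com/api/devices/{device_id}"
response = requests.put(
    url,
    params={"api-version": "1.0"},
    headers={"Authorization": api_token},
    json=body,
)
response.raise_for_status()
print(response.json())
```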
+
+### Provision a device
+
+When a device first tries to connect to your IoT Central application, it starts the process by connecting to the Device Provisioning Service (DPS). DPS checks the device's credentials and, if they're valid, provisions the device with a connection string for one of IoT Central's internal IoT hubs. DPS uses the _group enrollment_ configurations in your IoT Central application to manage this provisioning process for you.
+
+> [!TIP]
+> The device also sends the **ID scope** value that tells DPS which IoT Central application the device is connecting to. You can look up the **ID scope** in your IoT Central application on the **Permissions > Device connection groups** page.
+
+Typically, a device should cache the connection string it receives from DPS but should be prepared to retrieve new connection details if the current connection fails. To learn more, see [Handle connection failures](concepts-device-implementation.md#handle-connection-failures).
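The following is a minimal sketch of the provisioning step using the `azure-iot-device` Python SDK with a SAS group enrollment. The ID scope, group key, device ID, and model ID values are placeholders; deriving the per-device key as an HMAC-SHA256 of the device ID, keyed with the group key, is the commonly documented group enrollment pattern.

```python
import base64
import hashlib
import hmac

from azure.iot.device import ProvisioningDeviceClient

ID_SCOPE = "0ne000000A0"                # placeholder: Permissions > Device connection groups
GROUP_SAS_KEY = "base64-group-key=="    # placeholder: SAS group enrollment primary key
DEVICE_ID = "sample-device-01"
MODEL_ID = "dtmi:example:thermostat;1"  # placeholder model ID

def derive_device_key(device_id: str, group_key: str) -> str:
    """Derive the per-device key: HMAC-SHA256 of the device ID, keyed with the group key."""
    digest = hmac.new(
        base64.b64decode(group_key), device_id.encode("utf-8"), hashlib.sha256
    ).digest()
    return base64.b64encode(digest).decode("utf-8")

device_key = derive_device_key(DEVICE_ID, GROUP_SAS_KEY)

client = ProvisioningDeviceClient.create_from_symmetric_key(
    provisioning_host="global.azure-devices-provisioning.net",
    registration_id=DEVICE_ID,
    id_scope=ID_SCOPE,
    symmetric_key=device_key,
)
client.provisioning_payload = {"modelId": MODEL_ID}  # announce the model to IoT Central

result = client.register()
print(result.status, result.registration_state.assigned_hub)
```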
Using DPS enables: -- IoT Central to support onboarding and connecting devices at scale.
+- IoT Central to onboard and connect devices at scale.
- You to generate device credentials and configure the devices offline without registering the devices through IoT Central UI. - You to use your own device IDs to register devices in IoT Central. Using your own device IDs simplifies integration with existing back-office systems. - A single, consistent way to connect devices to IoT Central.
-To learn more, see [Get connected to Azure IoT Central](./concepts-get-connected.md) and [best practices](#best-practices).
+### Authenticate and connect device
-### Security
+A device uses its credentials and the connection string it received from DPS to connect to and authenticate with your IoT Central application. A device should also send a [model ID that identifies the device template it's assigned to](concepts-device-templates.md#assign-a-device-to-a-device-template).
-The connection between a device and your IoT Central application is secured by using either [shared access signatures](./concepts-get-connected.md#sas-group-enrollment) or industry-standard [X.509 certificates](./concepts-get-connected.md#x509-group-enrollment).
+IoT Central supports two types of device credential:
-### Communication protocols
+- Shared access signatures
+- X.509 certificates
-Communication protocols that a device can use to connect to IoT Central include MQTT, AMQP, and HTTPS. Internally, IoT Central uses an IoT hub to enable device connectivity. For more information about the communication protocols that IoT Hub supports for device connectivity, see [Choose a communication protocol](../../iot-hub/iot-hub-devguide-protocols.md).
+To learn more, see [Device authentication concepts](concepts-device-authentication.md).
-## Connectivity patterns
+All data exchanged between devices and your Azure IoT Central application is encrypted. IoT Hub authenticates every request from a device that connects to any of the device-facing IoT Hub endpoints. To avoid exchanging credentials over the wire, a device uses signed tokens to authenticate. For more information, see [Control access to IoT Hub](../../iot-hub/iot-hub-devguide-security.md).
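Continuing the provisioning sketch above, the device can then connect to the assigned hub and send telemetry. Passing the model ID through `product_info` follows the pattern used in the Python SDK's IoT Plug and Play samples; `result`, `device_key`, and `MODEL_ID` come from the previous sketch, and the telemetry value is a placeholder.

```python
from azure.iot.device import IoTHubDeviceClient, Message

# Continues the provisioning sketch: 'result', 'device_key', and MODEL_ID are defined there.
client = IoTHubDeviceClient.create_from_symmetric_key(
    symmetric_key=device_key,
    hostname=result.registration_state.assigned_hub,
    device_id=result.registration_state.device_id,
    product_info=MODEL_ID,  # announces the assigned device template's model ID
)
client.connect()

msg = Message('{"temperature": 21.5}')
msg.content_type = "application/json"
msg.content_encoding = "utf-8"
client.send_message(msg)

client.disconnect()
```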
-Device developers typically use one of the device SDKs to implement devices that connect to an IoT Central application. Some scenarios, such as for devices that can't connect to the internet, also require a gateway. To learn more about the device connectivity options available to device developers, see:
+## Connectivity patterns
-- [Get connected to Azure IoT Central](concepts-get-connected.md)-- [Connect Azure IoT Edge devices to an Azure IoT Central application](concepts-iot-edge.md)
+Device developers typically use one of the device SDKs to implement devices that connect to an IoT Central application. Some scenarios, such as for devices that can't connect to the internet, also require a gateway.
A solution design must take into account the required device connectivity pattern. These patterns fall in to two broad categories. Both categories include devices sending telemetry to your IoT Central application: ### Persistent connections
-Persistent connections are required your solution needs *command and control* capabilities. In command and control scenarios, the IoT Central application sends commands to devices to control their behavior in near real time. Persistent connections maintain a network connection to the cloud and reconnect whenever there's a disruption. Use either the MQTT or the AMQP protocol for persistent device connections to IoT Central.
+Persistent connections are required when your solution needs _command and control_ capabilities. In command and control scenarios, the IoT Central application sends commands to devices to control their behavior in near real time. Persistent connections maintain a network connection to the cloud and reconnect whenever there's a disruption. Use either the MQTT or the AMQP protocol for persistent device connections to IoT Central.
The following options support persistent device connections: - Use the IoT device SDKs to connect devices and send telemetry:
- The device SDKs enable both the MQTT and AMQP protocols for creating persistent connections to IoT Central. To learn more, see [Get connected to Azure IoT Central](concepts-get-connected.md).
+ The device SDKs enable both the MQTT and AMQP protocols for creating persistent connections to IoT Central.
- Connect devices over a local network to an IoT Edge device that forwards telemetry to IoT Central:
The following options are available for custom transformations or computations b
To learn more, see [Transform data for IoT Central](howto-transform-data.md).
-## Implement the device
-
-An IoT Central device template includes a _model_ that specifies the behaviors a device of that type should implement. Behaviors include telemetry, properties, and commands.
-
-To learn more, see [Edit an existing device template](howto-edit-device-template.md).
-
-> [!TIP]
-> You can export the model from IoT Central as a [Digital Twins Definition Language (DTDL) v2](https://github.com/Azure/opendigitaltwins-dtdl) JSON file.
-
-Each model has a unique _device twin model identifier_ (DTMI), such as `dtmi:com:example:Thermostat;1`. When a device connects to IoT Central, it sends the DTMI of the model it implements. IoT Central can then associate the correct device template with the device.
-
-[IoT Plug and Play](../../iot-develop/overview-iot-plug-and-play.md) defines a set of [conventions](../../iot-develop/concepts-convention.md) that a device should follow when it implements a DTDL model.
-
-The [Azure IoT device SDKs](#languages-and-sdks) include support for the IoT Plug and Play conventions.
-
-### Device model
-
-A device model is defined by using the [DTDL](https://github.com/Azure/opendigitaltwins-dtdl) modeling language. This language lets you define:
--- The telemetry the device sends. The definition includes the name and data type of the telemetry. For example, a device sends temperature telemetry as a double.-- The properties the device reports to IoT Central. A property definition includes its name and data type. For example, a device reports the state of a valve as a Boolean.-- The properties the device can receive from IoT Central. Optionally, you can mark a property as writable. For example, IoT Central sends a target temperature as a double to a device.-- The commands a device responds to. The definition includes the name of the command, and the names and data types of any parameters. For example, a device responds to a reboot command that specifies how many seconds to wait before rebooting.-
-A DTDL model can be a _no-component_ or a _multi-component_ model:
--- No-component model: A simple model doesn't use embedded or cascaded components. All the telemetry, properties, and commands are defined a single _root component_. For an example, see the [Thermostat](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/samples/Thermostat.json) model.-- Multi-component model. A more complex model that includes two or more components. These components include a single root component, and one or more nested components. For an example, see the [Temperature Controller](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/samples/TemperatureController.json) model.-
-To learn more, see [IoT Plug and Play modeling guide](../../iot-develop/concepts-modeling-guide.md)
-
-### Conventions
-
-A device should follow the IoT Plug and Play conventions when it exchanges data with IoT Central. The conventions include:
--- Send the DTMI when it connects to IoT Central.-- Send correctly formatted JSON payloads and metadata to IoT Central.-- Correctly respond to writable properties and commands from IoT Central.-- Follow the naming conventions for component commands.-
-> [!NOTE]
-> Currently, IoT Central does not fully support the DTDL **Array** and **Geospatial** data types.
-
-To learn more about the format of the JSON messages that a device exchanges with IoT Central, see [Telemetry, property, and command payloads](concepts-telemetry-properties-commands.md).
-
-To learn more about the IoT Plug and Play conventions, see [IoT Plug and Play conventions](../../iot-develop/concepts-convention.md).
-
-### Device SDKs
-
-Use one of the [Azure IoT device SDKs](../../iot-hub/iot-hub-devguide-sdks.md#azure-iot-hub-device-sdks) to implement the behavior of your device. The code should:
--- Register the device with DPS and use the information from DPS to connect to the internal IoT hub in your IoT Central application.-- Announce the DTMI of the model the device implements.-- Send telemetry in the format that the device model specifies. IoT Central uses the model in the device template to determine how to use the telemetry for visualizations and analysis.-- Synchronize property values between the device and IoT Central. The model specifies the property names and data types so that IoT Central can display the information.-- Implement command handlers for the commands specified in the model. The model specifies the command names and parameters that the device should use.-
-For more information about the role of device templates, see [What are device templates?](./concepts-device-templates.md).
-
-For some sample code, see [Create and connect a client application](./tutorial-connect-device.md).
-
-### Languages and SDKs
-
-For more information about the supported languages and SDKs, see [Understand and use Azure IoT Hub device SDKs](../../iot-hub/iot-hub-devguide-sdks.md#azure-iot-hub-device-sdks).
-
-## Best practices
-
-These recommendations show how to implement devices to take advantage of the built-in disaster recovery and automatic scaling in IoT Central.
-
-The following steps show the high-level flow when a device connects to IoT Central:
-
-1. Use DPS to provision the device and get a device connection string.
-
-1. Use the connection string to connect IoT Central's internal IoT Hub endpoint. Send data to and receive data from your IoT Central application.
-
-1. If the device gets connection failures, then depending on the error type, either retry the connection or reprovision the device.
-
-### Use DPS to provision the device
-
-To provision a device with DPS, use the scope ID, credentials, and device ID from your IoT Central application. To learn more about the credential types, see [X.509 group enrollment](concepts-get-connected.md#x509-group-enrollment) and [SAS group enrollment](concepts-get-connected.md#sas-group-enrollment). To learn more about device IDs, see [Device registration](concepts-get-connected.md#device-registration).
-
-On success, DPS returns a connection string the device can use to connect to your IoT Central application. To troubleshoot provisioning errors, see [Check the provisioning status of your device](troubleshoot-connection.md#check-the-provisioning-status-of-your-device).
-
-The device can cache the connection string to use for later connections. However, the device must be prepared to [handle connection failures](#handle-connection-failures).
-
-### Handle connection failures
-
-For scaling or disaster recovery purposes, IoT Central may update its underlying IoT hub. To maintain connectivity, your device code should handle specific connection errors by establishing a connection to the new IoT Hub endpoint.
-
-If the device gets any of the following errors when it connects, it should reprovision the device with DPS to get a new connection string. These errors mean the connection string is no longer valid:
--- Unreachable IoT Hub endpoint.-- Expired security token.-- Device disabled in IoT Hub.-
-If the device gets any of the following errors when it connects, it should use a back-off strategy to retry the connection. These errors mean the connection string is still valid, but transient conditions are stopping the device from connecting:
--- Operator blocked device.-- Internal error 500 from the service.-
-To learn more about device error codes, see [Troubleshooting device connections](troubleshoot-connection.md).
-
-### Test failover capabilities
-
-The Azure CLI lets you test the failover capabilities of your device code. The CLI command works by temporarily switching a device registration to a different internal IoT hub. To verify the device failover worked, check that the device still sends telemetry and responds to commands.
-
-To run the failover test for your device, run the following command:
-
-```azurecli
-az iot central device manual-failover \
- --app-id {Application ID of your IoT Central application} \
- --device-id {Device ID of the device you're testing} \
- --ttl-minutes {How to wait before moving the device back to it's original IoT hub}
-```
-
-> [!TIP]
-> To find the **Application ID**, navigate to **Administration > Your application** in your IoT Central application.
-
-If the command succeeds, you see output that looks like the following:
-
-```output
-Command group 'iot central device' is in preview and under development. Reference and support levels: https://aka.ms/CLI_refstatus
-{
- "hubIdentifier": "6bd4...bafa",
- "message": "Success! This device is now being failed over. You can check your device'ΓÇÖ's status using 'iot central device registration-info' command. The device will revert to its original hub at Tue, 18 May 2021 11:03:45 GMT. You can choose to failback earlier using device-manual-failback command. Learn more: https://aka.ms/iotc-device-test"
-}
-```
-
-To learn more about the CLI command, see [az iot central device manual-failover](/cli/azure/iot/central/device#az_iot_central_device_manual_failover).
-
-You can now check that telemetry from the device still reaches your IoT Central application.
-
-> [!TIP]
-> To see sample device code that handles failovers in various programing languages, see [IoT Central high availability clients](/samples/azure-samples/iot-central-high-availability-clients/iotc-high-availability-clients/).
- ## Next steps If you're a device developer and want to dive into some code, the suggested next step is to [Create and connect a client application to your Azure IoT Central application](./tutorial-connect-device.md).
+If you want to learn more about device implementation, see [Device implementation and best practices for IoT Central](concepts-device-implementation.md).
+ To learn more about using IoT Central, the suggested next steps are to try the quickstarts, beginning with [Create an Azure IoT Central application](./quick-deploy-iot-central.md).
iot-central Overview Iot Central Solution Builder https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/overview-iot-central-solution-builder.md
Built-in features of IoT Central you can use to extract business value include:
- To learn more about dashboards, see [Create and manage multiple dashboards](howto-manage-dashboards.md) and [Configure the application dashboard](howto-manage-dashboards.md).
- - When a device connects to an IoT Central, the device is associated with a device template for the device type. A device template has customizable views that an operator uses to manage individual devices. You can create and customize the available views for each device type. To learn more, see [Add views](howto-set-up-template.md#views).
+ - When a device connects to an IoT Central application, the device is assigned to a device template for the device type. A device template has customizable views that an operator uses to manage individual devices. You can create and customize the available views for each device type. To learn more, see [Add views](howto-set-up-template.md#views).
- Use built-in rules and analytics:
iot-central Troubleshoot Connection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/troubleshoot-connection.md
https://aka.ms/iotcentral-docs-dps-SAS",
| - | - | - | | Provisioned | No immediately recognizable issue. | N/A | | Registered | The device has not yet connected to IoT Central. | Check your device logs for connectivity issues. |
-| Blocked | The device is blocked from connecting to IoT Central. | Device is blocked from connecting to the IoT Central application. Unblock the device in IoT Central and retry. To learn more, see [Block devices](concepts-get-connected.md#device-status-values). |
-| Unapproved | The device is not approved. | Device isn't approved to connect to the IoT Central application. Approve the device in IoT Central and retry. To learn more, see [Approve devices](concepts-get-connected.md#device-registration) |
-| Unassociated | The device is not associated with a device template. | Associate the device with a device template so that IoT Central knows how to parse the data. |
+| Blocked | The device is blocked from connecting to IoT Central. | Device is blocked from connecting to the IoT Central application. Unblock the device in IoT Central and retry. To learn more, see [Device status values](howto-manage-devices-individually.md#device-status-values). |
+| Unapproved | The device is not approved. | Device isn't approved to connect to the IoT Central application. Approve the device in IoT Central and retry. To learn more, see [Device status values](howto-manage-devices-individually.md#device-status-values). |
+| Unassigned | The device is not assigned to a device template. | Assign the device to a device template so that IoT Central knows how to parse the data. |
-Learn more about [device status codes](concepts-get-connected.md#device-status-values).
+Learn more about [Device status values](howto-manage-devices-individually.md#device-status-values).
### Error codes
iot-central Tutorial Create Telemetry Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/tutorial-create-telemetry-rules.md
To create a telemetry rule, the device template must include at least one teleme
1. Enter the name _Temperature monitor_ to identify the rule and press Enter.
-1. Select the **Sensor Controller** device template. By default, the rule automatically applies to all the devices associated with the device template. To filter for a subset of the devices, select **+ Filter** and use device properties to identify the devices. To disable the rule, toggle the **Enabled/Disabled** button:
+1. Select the **Sensor Controller** device template. By default, the rule automatically applies to all the devices assigned to the device template. To filter for a subset of the devices, select **+ Filter** and use device properties to identify the devices. To disable the rule, toggle the **Enabled/Disabled** button:
:::image type="content" source="media/tutorial-create-telemetry-rules/device-filters.png" alt-text="Screenshot that shows the selection of the device template in the rule definition":::
iot-central Tutorial Define Gateway Device Type https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/tutorial-define-gateway-device-type.md
Both your simulated downstream devices are now connected to your simulated gatew
In the [Create and connect a client application to your Azure IoT Central application](tutorial-connect-device.md) tutorial, the sample code shows how to include the model ID from the device template in the provisioning payload the device sends.
-When you connect a downstream device, you can modify the provisioning payload to include the the ID of the gateway device. The model ID lets IoT Central associate the device with the correct downstream device template. The gateway ID lets IoT Central establish the relationship between the downstream device and its gateway. In this case the provisioning payload the device sends looks like the following JSON:
+When you connect a downstream device, you can modify the provisioning payload to include the ID of the gateway device. The model ID lets IoT Central assign the device to the correct downstream device template. The gateway ID lets IoT Central establish the relationship between the downstream device and its gateway. In this case, the provisioning payload the device sends looks like the following JSON:
```json {
iot-dps About Iot Dps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/about-iot-dps.md
DPS automates device provisioning with Azure IoT Hub. Learn more about [IoT Hub]
IoT Central applications use an internal DPS instance to manage device connections. To learn more, see:
-* [Get connected to Azure IoT Central](../iot-central/core/concepts-get-connected.md)
+* [How devices connect to IoT Central](../iot-central/core/overview-iot-central-developer.md)
* [Tutorial: Create and connect a client application to your Azure IoT Central application](../iot-central/core/tutorial-connect-device.md) ## Next steps
iot-edge How To Vs Code Develop Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-vs-code-develop-module.md
Install [Visual Studio Code](https://code.visualstudio.com/) first and then add
You'll also need to install some additional, language-specific tools to develop your module: -- C#, including Azure Functions: [.NET Core 2.1 SDK](https://dotnet.microsoft.com/download/dotnet/2.1)
+- C#, including Azure Functions: [.NET Core 3.1 SDK](https://dotnet.microsoft.com/download/dotnet/3.1)
- Python: [Python](https://www.python.org/downloads/) and [Pip](https://pip.pypa.io/en/stable/installing/#installation) for installing Python packages (typically included with your Python installation).
iot-fundamentals Iot Phone App How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-fundamentals/iot-phone-app-how-to.md
After you register the device in IoT Central, you can connect the smartphone app
1. On the **Settings > Registration** page, you can see the device ID and ID scope that the app used to connect to IoT Central.
-To learn more about how devices connect to IoT Central, see [Get connected to Azure IoT Central](../iot-central/core/concepts-get-connected.md).
+To learn more about how devices connect to IoT Central, see [How devices connect](../iot-central/core/overview-iot-central-developer.md).
### Verify the connection
iot-hub Monitor Iot Hub Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/monitor-iot-hub-reference.md
To learn about metrics supported by other Azure services, see [Supported metrics
**Topics in this section** -- [Monitoring Azure IoT Hub data reference](#monitoring-azure-iot-hub-data-reference)
- - [Metrics](#metrics)
- - [Supported aggregations](#supported-aggregations)
- - [Cloud to device command metrics](#cloud-to-device-command-metrics)
- - [Cloud to device direct methods metrics](#cloud-to-device-direct-methods-metrics)
- - [Cloud to device twin operations metrics](#cloud-to-device-twin-operations-metrics)
- - [Configurations metrics](#configurations-metrics)
- - [Daily quota metrics](#daily-quota-metrics)
- - [Device metrics](#device-metrics)
- - [Device telemetry metrics](#device-telemetry-metrics)
- - [Device to cloud twin operations metrics](#device-to-cloud-twin-operations-metrics)
- - [Event grid metrics](#event-grid-metrics)
- - [Jobs metrics](#jobs-metrics)
- - [Routing metrics](#routing-metrics)
- - [Twin query metrics](#twin-query-metrics)
- - [Metric dimensions](#metric-dimensions)
- - [Resource logs](#resource-logs)
- - [Connections](#connections)
- - [Device telemetry](#device-telemetry)
- - [Cloud-to-device commands](#cloud-to-device-commands)
- - [Device identity operations](#device-identity-operations)
- - [File upload operations](#file-upload-operations)
- - [Routes](#routes)
- - [Device-to-cloud twin operations](#device-to-cloud-twin-operations)
- - [Cloud-to-device twin operations](#cloud-to-device-twin-operations)
- - [Twin queries](#twin-queries)
- - [Jobs operations](#jobs-operations)
- - [Direct Methods](#direct-methods)
- - [Distributed Tracing (Preview)](#distributed-tracing-preview)
- - [IoT Hub D2C (device-to-cloud) logs](#iot-hub-d2c-device-to-cloud-logs)
- - [IoT Hub ingress logs](#iot-hub-ingress-logs)
- - [IoT Hub egress logs](#iot-hub-egress-logs)
- - [Configurations](#configurations)
- - [Device Streams (Preview)](#device-streams-preview)
- - [Azure Monitor Logs tables](#azure-monitor-logs-tables)
- - [See Also](#see-also)
+- [Supported aggregations](#supported-aggregations)
+- [Cloud to device command metrics](#cloud-to-device-command-metrics)
+- [Cloud to device direct methods metrics](#cloud-to-device-direct-methods-metrics)
+- [Cloud to device twin operations metrics](#cloud-to-device-twin-operations-metrics)
+- [Configurations metrics](#configurations-metrics)
+- [Daily quota metrics](#daily-quota-metrics)
+- [Device metrics](#device-metrics)
+- [Device telemetry metrics](#device-telemetry-metrics)
+- [Device to cloud twin operations metrics](#device-to-cloud-twin-operations-metrics)
+- [Event grid metrics](#event-grid-metrics)
+- [Jobs metrics](#jobs-metrics)
+- [Routing metrics](#routing-metrics)
+- [Twin query metrics](#twin-query-metrics)
### Supported aggregations
To learn more about metric dimensions, see [Multi-dimensional metrics](../azure-
## Resource logs
-This section lists all the resource log category types and schemas collected for Azure IoT Hub. The resource provider and type for all IoT Hub logs is [Microsoft.Devices/IotHubs](../azure-monitor/essentials/resource-logs-categories.md#microsoftdevicesiothubs).
+This section lists all the resource log category types and schemas collected for Azure IoT Hub. The resource provider and type for all IoT Hub logs is [Microsoft.Devices/IotHubs](../azure-monitor/essentials/resource-logs-categories.md#microsoftdevicesiothubs). Be aware that events are emitted only for errors in some categories.
**Topics in this section** -- [Monitoring Azure IoT Hub data reference](#monitoring-azure-iot-hub-data-reference)
- - [Metrics](#metrics)
- - [Supported aggregations](#supported-aggregations)
- - [Cloud to device command metrics](#cloud-to-device-command-metrics)
- - [Cloud to device direct methods metrics](#cloud-to-device-direct-methods-metrics)
- - [Cloud to device twin operations metrics](#cloud-to-device-twin-operations-metrics)
- - [Configurations metrics](#configurations-metrics)
- - [Daily quota metrics](#daily-quota-metrics)
- - [Device metrics](#device-metrics)
- - [Device telemetry metrics](#device-telemetry-metrics)
- - [Device to cloud twin operations metrics](#device-to-cloud-twin-operations-metrics)
- - [Event grid metrics](#event-grid-metrics)
- - [Jobs metrics](#jobs-metrics)
- - [Routing metrics](#routing-metrics)
- - [Twin query metrics](#twin-query-metrics)
- - [Metric dimensions](#metric-dimensions)
- - [Resource logs](#resource-logs)
- - [Connections](#connections)
- - [Device telemetry](#device-telemetry)
- - [Cloud-to-device commands](#cloud-to-device-commands)
- - [Device identity operations](#device-identity-operations)
- - [File upload operations](#file-upload-operations)
- - [Routes](#routes)
- - [Device-to-cloud twin operations](#device-to-cloud-twin-operations)
- - [Cloud-to-device twin operations](#cloud-to-device-twin-operations)
- - [Twin queries](#twin-queries)
- - [Jobs operations](#jobs-operations)
- - [Direct Methods](#direct-methods)
- - [Distributed Tracing (Preview)](#distributed-tracing-preview)
- - [IoT Hub D2C (device-to-cloud) logs](#iot-hub-d2c-device-to-cloud-logs)
- - [IoT Hub ingress logs](#iot-hub-ingress-logs)
- - [IoT Hub egress logs](#iot-hub-egress-logs)
- - [Configurations](#configurations)
- - [Device Streams (Preview)](#device-streams-preview)
- - [Azure Monitor Logs tables](#azure-monitor-logs-tables)
- - [See Also](#see-also)
+- [Connections](#connections)
+- [Device telemetry](#device-telemetry)
+- [Cloud-to-device commands](#cloud-to-device-commands)
+- [Device identity operations](#device-identity-operations)
+- [File upload operations](#file-upload-operations)
+- [Routes](#routes)
+- [Device-to-cloud twin operations](#device-to-cloud-twin-operations)
+- [Cloud-to-device twin operations](#cloud-to-device-twin-operations)
+- [Twin queries](#twin-queries)
+- [Jobs operations](#jobs-operations)
+- [Direct Methods](#direct-methods)
+- [Distributed Tracing (Preview)](#distributed-tracing-preview)
+ - [IoT Hub D2C (device-to-cloud) logs](#iot-hub-d2c-device-to-cloud-logs)
+ - [IoT Hub ingress logs](#iot-hub-ingress-logs)
+ - [IoT Hub egress logs](#iot-hub-egress-logs)
+- [Configurations](#configurations)
+- [Device Streams (Preview)](#device-streams-preview)
### Connections
iot-hub Monitor Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/monitor-iot-hub.md
The following screenshot shows a diagnostic setting for routing the resource log
:::image type="content" source="media/monitor-iot-hub/diagnostic-setting-portal.png" alt-text="Diagnostic Settings pane for an IoT hub.":::
-See [Create diagnostic setting to collect platform logs and metrics in Azure](../azure-monitor/essentials/diagnostic-settings.md) for the detailed process for creating a diagnostic setting using the Azure portal, CLI, or PowerShell. When you create a diagnostic setting, you specify which categories of logs to collect. The categories for Azure IoT Hub are listed under [Resource logs in the Monitoring Azure IoT Hub data reference](monitor-iot-hub-reference.md#resource-logs).
+See [Create diagnostic setting to collect platform logs and metrics in Azure](../azure-monitor/essentials/diagnostic-settings.md) for the detailed process for creating a diagnostic setting using the Azure portal, CLI, or PowerShell. When you create a diagnostic setting, you specify which categories of logs to collect. The categories for Azure IoT Hub are listed under [Resource logs in the Monitoring Azure IoT Hub data reference](monitor-iot-hub-reference.md#resource-logs). Be aware that events are emitted only for errors in some categories.
When routing IoT Hub platform metrics to other locations, be aware that:
In Azure portal, you can select **Logs** under **Monitoring** on the left-pane o
For a list of the tables used by Azure Monitor Logs and queryable by Log Analytics, see [Azure Monitor Logs tables in the Monitoring Azure IoT Hub data reference](monitor-iot-hub-reference.md#azure-monitor-logs-tables).
-All resource logs in Azure Monitor have the same fields followed by service-specific fields. The common schema is outlined in [Azure Monitor resource log schema](../azure-monitor/essentials/resource-logs-schema.md#top-level-common-schema). You can find the schema and categories of resource logs collected for Azure IoT Hub in [Resource logs in the Monitoring Azure IoT Hub data reference](monitor-iot-hub-reference.md#resource-logs).
+All resource logs in Azure Monitor have the same fields followed by service-specific fields. The common schema is outlined in [Azure Monitor resource log schema](../azure-monitor/essentials/resource-logs-schema.md#top-level-common-schema). You can find the schema and categories of resource logs collected for Azure IoT Hub in [Resource logs in the Monitoring Azure IoT Hub data reference](monitor-iot-hub-reference.md#resource-logs). Be aware that events are emitted only for errors in some categories.
The [Activity log](../azure-monitor/essentials/activity-log.md) is a platform log in Azure that provides insight into subscription-level events. You can view it independently or route it to Azure Monitor Logs, where you can do much more complex queries using Log Analytics.
load-balancer Manage Probes How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/manage-probes-how-to.md
# Manage health probes for Azure Load Balancer using the Azure portal
-Azure Load Balancer supports health probes to monitor the health of backend instances. In this article, you'll learn how to manage health probes for Azure Load Balancer.
+Azure Load Balancer uses health probes to monitor the health of backend instances. In this article, you'll learn how to manage health probes for Azure Load Balancer.
There are three types of health probes:
In this article, you learned how to manage health probes for an Azure Load Balan
For more information about Azure Load Balancer, see: - [What is Azure Load Balancer?](load-balancer-overview.md) - [Frequently asked questions - Azure Load Balancer](load-balancer-faqs.yml)-- [Azure Load Balancer health probes](load-balancer-custom-probe-overview.md)
+- [Azure Load Balancer health probes](load-balancer-custom-probe-overview.md)
logic-apps Concepts Schedule Automated Recurring Tasks Workflows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/concepts-schedule-automated-recurring-tasks-workflows.md
This article describes the capabilities for the Schedule built-in triggers and a
## Schedule triggers
-You can start your logic app workflow by using the Recurrence trigger or Sliding Window trigger, which isn't associated with any specific service or system. These triggers start and run your workflow based on your specified recurrence where you select the interval and frequency, such as the number of seconds, minutes, hours, days, weeks, or months. You can also set the start date and time along with the time zone. Each time that a trigger fires, Azure Logic Apps creates and runs a new workflow instance for your logic app.
+You can start your logic app workflow by using the [Recurrence trigger](../connectors/connectors-native-recurrence.md) or [Sliding Window trigger](../connectors/connectors-native-sliding-window.md), which isn't associated with any specific service or system. These triggers start and run your workflow based on your specified recurrence where you select the interval and frequency, such as the number of seconds, minutes, hours, days, weeks, or months. You can also set the start date and time along with the time zone. Each time that a trigger fires, Azure Logic Apps creates and runs a new workflow instance for your logic app.
Here are the differences between these triggers: * **Recurrence**: Runs your workflow at regular time intervals based on your specified schedule. If the trigger misses recurrences, for example, due to disruptions or disabled workflows, the Recurrence trigger doesn't process the missed recurrences but restarts recurrences with the next scheduled interval.
- If you select **Day** as the frequency, you can specify the hours of the day and minutes of the hour, for example, every day at 2:30. If you select **Week** as the frequency, you can also select days of the week, such as Wednesday and Saturday. You can also specify a start date and time along with a time zone for your recurrence schedule.
+ If you select **Day** as the frequency, you can specify the hours of the day and minutes of the hour, for example, every day at 2:30. If you select **Week** as the frequency, you can also select days of the week, such as Wednesday and Saturday. You can also specify a start date and time along with a time zone for your recurrence schedule. For more information about time zone formatting, see [Add a Recurrence trigger](../connectors/connectors-native-recurrence.md#add-the-recurrence-trigger).
> [!IMPORTANT] > If you use the **Day** or **Week** frequency and specify a future date and time, make sure that you set up the recurrence in advance:
logic-apps Create Managed Service Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/create-managed-service-identity.md
Following this `Microsoft.Web/connections` resource definition, make sure that y
{ "type": "Microsoft.Web/connections/accessPolicies", "apiVersion": "2016-06-01",
- "name": "[concat('<connection-name>'),'/','<object-ID>')]",
+ "name": "[concat('<connection-name>','/','<object-ID>')]",
"location": "<location>", "dependsOn": [ "[resourceId('Microsoft.Web/connections', parameters('connection_name'))]"
machine-learning Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/release-notes.md
Azure portal users will always find the latest image available for provisioning
See the [list of known issues](reference-known-issues.md) to learn about known bugs and workarounds.
+## March 9, 2022
+
+[Data Science Virtual Machine - Windows 2019](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.dsvm-win-2019?tab=Overview)
+
+Version: 21.12.03
+
+Windows 2019 DSVM is now supported under publisher: microsoft-dsvm, offer ID: dsvm-win-2019, plan ID/SKU ID: winserver-2019.
+
+If you use an ARM template or virtual machine scale set to deploy Windows DSVM machines, configure the SKU with winserver-2019 instead of server-2019, because updates to Windows DSVM images will ship on the new SKU beginning in March 2022.
+ ## December 3, 2021 New image for [Windows Server 2019](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.dsvm-win-2019?tab=Overview).
media-services Concept Media Reserved Units https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/concept-media-reserved-units.md
[!INCLUDE [media services api v3 logo](./includes/v3-hr.md)]
-Media Reserved Units (MRUs) were previously used in Azure Media Services v2 to control encoding concurrency and performance. You no longer need to manage MRUs or request quota increases for any media services account as the system will automatically scale up and down based on load. You will also see performance that is equal to or improved in comparison to using MRUs.
+Media Reserved Units (MRUs) were previously used in Azure Media Services v2 to control encoding concurrency and performance. You no longer need to manage MRUs or request quota increases for any media services account as the system will automatically scale up and down based on load. You'll also see performance that is equal to or improved in comparison to using MRUs.
-If you have an account that was created using a version prior to the 2020-05-01 API, you will still have access to APIΓÇÖs for managing MRUs, however none of the MRU configuration that you set will be used to control encoding concurrency or performance. If you donΓÇÖt see the option to manage MRUs in the Azure portal, you have an account that was created with the 2020-05-01 API or later.
+If you have an account that was created using a version prior to the 2020-05-01 API, you'll still have access to APIs for managing MRUs. However, none of the MRU configuration that you set will be used to control encoding concurrency or performance. If you don't see the option to manage MRUs in the Azure portal, you have an account that was created with the 2020-05-01 API or later.
## Billing
-While there were previously charges for Media Reserved Units, as of April 17, 2021 there are no longer any charges for accounts that have configuration for Media Reserved Units. For more information on billing for encoding jobs, please see [Encoding video and audio with Media Services](encoding-concept.md)
+While there were previously charges for Media Reserved Units, as of April 17, 2021 there are no longer any charges for accounts that have configuration for Media Reserved Units. For more information on billing for encoding jobs, see [Encoding video and audio with Media Services](encoding-concept.md).
-For accounts created in with the **2020-05-01** version of the API (i.e. the v3 version) or through the Azure portal, scaling and media reserved units are no longer required. Scaling is now automatically handled by the service internally. Media reserved units are no longer needed or supported for any Azure Media Services account. See [Media reserved units (legacy)](concept-media-reserved-units.md) for additional information.
+For accounts created with the **2020-05-01** version of the API (that is, the v3 version) or through the Azure portal, scaling and media reserved units are no longer required. Scaling is now automatically handled by the service internally. Media reserved units are no longer needed or supported for any Azure Media Services account. See [Media reserved units (legacy)](concept-media-reserved-units.md) for additional information.
## See also * [Migrate from Media Services v2 to v3](migrate-v-2-v-3-migration-introduction.md)
-* [Scale Media Reserved Units with CLI](media-reserved-units-cli-how-to.md)
+* [Scale Media Reserved Units with CLI](media-reserved-units-how-to.md)
media-services Configure Connect Dotnet Howto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/configure-connect-dotnet-howto.md
namespace ConsoleApp1
- [Tutorial: Analyze videos with Media Services v3 - .NET](analyze-videos-tutorial.md) - [Create a job input from a local file - .NET](job-input-from-local-file-how-to.md) - [Create a job input from an HTTPS URL - .NET](job-input-from-http-how-to.md)-- [Encode with a custom Transform - .NET](transform-custom-presets-how-to.md)
+- [Encode with a custom Transform - .NET](transform-custom-transform-how-to.md)
- [Use AES-128 dynamic encryption and the key delivery service - .NET](drm-playready-license-template-concept.md) - [Use DRM dynamic encryption and license delivery service - .NET](drm-protect-with-drm-tutorial.md)-- [Get a signing key from the existing policy - .NET](drm-get-content-key-policy-dotnet-how-to.md)
+- [Get a signing key from the existing policy - .NET](drm-get-content-key-policy-how-to.md)
- [Create filters with Media Services - .NET](filters-dynamic-manifest-dotnet-how-to.md) - [Advanced video on-demand examples of Azure Functions v2 with Media Services v3](https://aka.ms/ams3functions)
media-services Drm Add Option Content Key Policy How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/drm-add-option-content-key-policy-how-to.md
+
+ Title: Add an option to a content key policy
+description: This article shows how to add an option to a content key policy.
+++++ Last updated : 03/10/2022+++
+# Add an option to a content key policy
++
+## Methods
+
+Use the following methods to add an option to a content key policy.
+
+## [CLI](#tab/cli/)
+++
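For example, here is a minimal Azure CLI sketch; the account, resource group, policy, and option names are placeholder values, and the option shown uses a clear key configuration with an open restriction:

```azurecli
az ams content-key-policy option add \
  -a amsaccount \
  -g amsResourceGroup \
  -n myContentKeyPolicy \
  --policy-option-name ClearKeyOption \
  --clear-key-configuration \
  --open-restriction
```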
media-services Drm Content Key Policy Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/drm-content-key-policy-concept.md
Usually, you associate your content key policy with your [Streaming Locator](str
## Example
-To get to the key, use `GetPolicyPropertiesWithSecretsAsync`, as shown in the [Get a signing key from the existing policy](drm-get-content-key-policy-dotnet-how-to.md#get-contentkeypolicy-with-secrets) example.
+To get to the key, use `GetPolicyPropertiesWithSecretsAsync`, as shown in the [Get a signing key from the existing policy](drm-get-content-key-policy-how-to.md#get-contentkeypolicy-with-secrets) example.
## Filtering, ordering, paging
media-services Drm Create Content Key Policy How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/drm-create-content-key-policy-how-to.md
+
+ Title: Create a content key policy
+description: This article shows how to create a content key policy.
+++++ Last updated : 03/10/2022+++
+# Create a content key policy
++
+## Methods
+
+Use the following methods to create a content key policy.
+
+## [CLI](#tab/cli/)
++
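For example, here is a minimal Azure CLI sketch; the account, resource group, policy, and option names are placeholder values. A policy is created with at least one option, here a clear key configuration with an open restriction:

```azurecli
az ams content-key-policy create \
  -a amsaccount \
  -g amsResourceGroup \
  -n myContentKeyPolicy \
  --policy-option-name ClearKeyOption \
  --clear-key-configuration \
  --open-restriction
```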
+## [REST](#tab/rest/)
+++
media-services Drm Delete Content Key Policy How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/drm-delete-content-key-policy-how-to.md
+
+ Title: Delete a content key policy
+description: This article shows how to delete a content key policy.
+++++ Last updated : 03/10/2022+++
+# Delete a content key policy
++
+## Methods
+
+Use the following methods to delete a content key policy.
+
+## [CLI](#tab/cli/)
++
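For example, here is a minimal Azure CLI sketch; the account, resource group, and policy names are placeholder values:

```azurecli
az ams content-key-policy delete \
  -a amsaccount \
  -g amsResourceGroup \
  -n myContentKeyPolicy
```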
+## [REST](#tab/rest/)
+++
media-services Drm Get Content Key Policy How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/drm-get-content-key-policy-how-to.md
+
+ Title: Get a signing key from a policy
+description: This topic shows how to get a signing key from the existing policy using Media Services v3.
++++ Last updated : 03/09/2022+++
+# Get a signing key from the existing policy
++
+One of the key design principles of the v3 API is to make the API more secure. v3 APIs do not return secrets or credentials on **Get** or **List** operations. For a detailed explanation, see [Azure RBAC and Media Services accounts](security-rbac-concept.md).
+
+The example in this article shows how to get a signing key from the existing policy.
+
+## Download
+
+Clone a GitHub repository that contains the full .NET sample to your machine using the following command:
+
+ ```bash
+ git clone https://github.com/Azure-Samples/media-services-v3-dotnet-tutorials.git
+ ```
+
+The ContentKeyPolicy with secrets example is located in the [EncryptWithDRM](https://github.com/Azure-Samples/media-services-v3-dotnet-tutorials/tree/main/AMSV3Tutorials/EncryptWithDRM) folder.
+
+## [.NET](#tab/net/)
+
+## Get ContentKeyPolicy with secrets
+
+To get to the key, use **GetPolicyPropertiesWithSecretsAsync**, as shown in the example below.
+
+[!code-csharp[Main](../../../media-services-v3-dotnet-tutorials/AMSV3Tutorials/EncryptWithDRM/Program.cs#GetOrCreateContentKeyPolicy)]
++
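If you prefer the Azure CLI, a sketch of the equivalent is to retrieve the policy with its secret values using the `--with-secrets` flag of `az ams content-key-policy show`; the account, resource group, and policy names below are placeholder values:

```azurecli
az ams content-key-policy show \
  -a amsaccount \
  -g amsResourceGroup \
  -n myContentKeyPolicy \
  --with-secrets
```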
media-services Drm List Content Key Policy How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/drm-list-content-key-policy-how-to.md
+
+ Title: List the content key policies
+description: This article shows how to list the content key policies.
+++++ Last updated : 03/10/2022+++
+# List the content key policies
++
+## Methods
+
+Use the following methods to list the content key policies.
+
+## [CLI](#tab/cli/)
++
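For example, here is a minimal Azure CLI sketch; the account and resource group names are placeholder values:

```azurecli
az ams content-key-policy list \
  -a amsaccount \
  -g amsResourceGroup \
  --output table
```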
+## [REST](#tab/rest/)
+++
media-services Drm Offline Fairplay For Ios Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/drm-offline-fairplay-for-ios-concept.md
Title: Media Services v3 offline FairPlay Streaming for iOS description: This topic gives an overview and shows how to use Azure Media Services v3 to dynamically encrypt your HTTP Live Streaming (HLS) content with Apple FairPlay in offline mode.- Previously updated : 05/25/2021 Last updated : 03/09/2022 + # Offline FairPlay Streaming for iOS with Media Services v3 [!INCLUDE [media services api v3 logo](./includes/v3-hr.md)]
Before you implement offline DRM for FairPlay on an iOS 10+ device:
You will need to modify the code in [Encrypt with DRM using .NET](https://github.com/Azure-Samples/media-services-v3-dotnet-tutorials/tree/main/AMSV3Tutorials/EncryptWithDRM) to add FairPlay configurations.
+## [.NET](#tab/net/)
+ ## Configure content protection in Azure Media Services In the [GetOrCreateContentKeyPolicyAsync](https://github.com/Azure-Samples/media-services-v3-dotnet-tutorials/blob/main/AMSV3Tutorials/EncryptWithDRM/Program.cs#L192) method, do the following:
Three test samples in Media Services cover the following three scenarios:
You can find these samples at [this demo site](https://aka.ms/poc#22), with the corresponding application certificate hosted in an Azure web app. With either the version 3 or version 4 sample of the FPS Server SDK, if a master playlist contains alternate audio, during offline mode it plays audio only. Therefore, you need to strip the alternate audio. In other words, the second and third samples listed previously work in online and offline mode. The sample listed first plays audio only during offline mode, while online streaming works properly. ++ ## Offline Fairplay questions See [offline fairplay questions in the FAQ](frequently-asked-questions.yml).
media-services Drm Offline Playready Streaming For Windows 10 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/drm-offline-playready-streaming-for-windows-10.md
Title: Configure offline PlayReady streaming
description: This article shows how to configure your Azure Media Services v3 account for streaming PlayReady for Windows 10 offline. keywords: DASH, DRM, Widevine Offline Mode, ExoPlayer, Android -+ -- Previously updated : 08/31/2020--+ Last updated : 03/09/2022+ # Offline PlayReady Streaming for Windows 10 with Media Services v3
Below are two sets of test assets, the first one using PlayReady license deliver
For playback testing, we used a Universal Windows Application on Windows 10. In [Windows 10 Universal samples](https://github.com/Microsoft/Windows-universal-samples), there is a basic player sample called [Adaptive Streaming Sample](https://github.com/Microsoft/Windows-universal-samples/tree/master/Samples/AdaptiveStreaming). All we have to do is to add the code for us to pick downloaded video and use it as the source, instead of adaptive streaming source. The changes are in button click event handler:
+## [.NET](#tab/net/)
+ ```csharp private async void LoadUri_Click(object sender, RoutedEventArgs e) {
In summary, we have achieved offline mode on Azure Media
* Content can be hosted in Azure Media Services or Azure Storage for progressive download; * PlayReady license delivery can be from Azure Media Services or elsewhere; * The prepared smooth streaming content can still be used for online streaming via DASH or smooth with PlayReady as the DRM.++
media-services Drm Offline Widevine For Android https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/drm-offline-widevine-for-android.md
-- Previously updated : 05/25/2021+ Last updated : 03/09/2022
Before implementing offline DRM for Widevine on Android devices, you should firs
- [ExoPlayer Developer Guide](https://google.github.io/ExoPlayer/guide.html) - [ExoPlayer Developer Blog](https://medium.com/google-exoplayer)
+## [.NET](#tab/net/)
+ ## Configure content protection in Azure Media Services In the [GetOrCreateContentKeyPolicyAsync](https://github.com/Azure-Samples/media-services-v3-dotnet-tutorials/blob/main/AMSV3Tutorials/EncryptWithDRM/Program.cs#L192) method, the following necessary steps are present:
The above open-source PWA app is authored in Node.js. If you want to host your o
- The certificate must be issued by a trusted CA; a self-signed development certificate does not work - The certificate must have a CN matching the DNS name of the web server or gateway ++ ## More information For more information, see [Content Protection in the FAQ](frequently-asked-questions.yml).
media-services Drm Remove Option Content Key Policy How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/drm-remove-option-content-key-policy-how-to.md
+
+ Title: Remove an option from a content key policy
+description: This article shows how to remove an option from a content key policy.
+++++ Last updated : 03/10/2022+++
+# Remove an option from a content key policy
++
+## Methods
+
+Use the following methods to remove an option from a content key policy.
+
+## [CLI](#tab/cli/)
+++
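For example, here is a minimal Azure CLI sketch; the account, resource group, and policy names are placeholder values, and the option ID is a placeholder GUID that you can look up with `az ams content-key-policy show`:

```azurecli
az ams content-key-policy option remove \
  -a amsaccount \
  -g amsResourceGroup \
  -n myContentKeyPolicy \
  --policy-option-id 00000000-0000-0000-0000-000000000000
```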
media-services Drm Show Content Key Policy How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/drm-show-content-key-policy-how-to.md
+
+ Title: Show an existing content key policy
+description: This article shows how to show an existing content key policy.
+++++ Last updated : 03/10/2022+++
+# Show an existing content key policy
++
+## Methods
+
+Use the following methods to show an existing content key policy.
+
+## [CLI](#tab/cli/)
++
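For example, here is a minimal Azure CLI sketch; the account, resource group, and policy names are placeholder values. Add `--with-secrets` to include secret values in the output:

```azurecli
az ams content-key-policy show \
  -a amsaccount \
  -g amsResourceGroup \
  -n myContentKeyPolicy
```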
+## [REST](#tab/rest/)
+++
media-services Drm Update Content Key Policy How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/drm-update-content-key-policy-how-to.md
+
+ Title: Update an existing content key policy
+description: This article shows how to update an existing content key policy.
+++++ Last updated : 03/10/2022+++
+# Update an existing content key policy
++
+## Methods
+
+Use the following methods to update an existing content key policy.
+
+## [CLI](#tab/cli/)
++
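For example, here is a minimal Azure CLI sketch that updates the policy description; the account, resource group, and policy names are placeholder values:

```azurecli
az ams content-key-policy update \
  -a amsaccount \
  -g amsResourceGroup \
  -n myContentKeyPolicy \
  --description "Updated content key policy"
```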
+## [REST](#tab/rest/)
+++
media-services Drm Update Option Content Key Policy How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/drm-update-option-content-key-policy-how-to.md
+
+ Title: Update an option in a content key policy
+description: This article shows how to update an option in a content key policy.
+++++ Last updated : 03/10/2022+++
+# Update an option in a content key policy
++
+## Methods
+
+Use the following methods to update an option in a content key policy.
+
+## [CLI](#tab/cli/)
+++
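For example, here is a minimal Azure CLI sketch; the account, resource group, policy, and option names are placeholder values, and the option ID is a placeholder GUID taken from the output of `az ams content-key-policy show`:

```azurecli
az ams content-key-policy option update \
  -a amsaccount \
  -g amsResourceGroup \
  -n myContentKeyPolicy \
  --policy-option-id 00000000-0000-0000-0000-000000000000 \
  --policy-option-name RenamedOption
```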
media-services Encode Concept Preset Overrides https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/encode-concept-preset-overrides.md
When encoding or using analytics with Media Services you can define custom prese
Preset overrides let you pass in a customized preset that overrides the settings supplied to a transform object after it was first created. This property is available on the [job output asset](/dotnet/api/microsoft.azure.management.media.models.joboutputasset) when submitting a new job to a transform.
-This can be useful for situations where you need to override some properties of your custom defined transforms, or a property on a built-in preset. For example, consider the scenario where you have created a custom transform that uses the [audio analyzer built-in preset](/rest/api/media/transforms/create-or-update#audioanalyzerpreset), but you initially set up that preset to use the audio language setting of "en-us" for English. This would result in a transform where each jobs submitted would be sent to the speech-to-text transcription engine as US English only. Every job submitted to that transform would be locked to the "en-us" language setting. You could work around this scenario by having a transform defined for every language, but that would be much more difficult to manage and you could hit transform quota limitations in your account.
+This can be useful for situations where you need to override some properties of your custom defined transforms, or a property on a built-in preset. For example, consider the scenario where you have created a custom transform that uses the [audio analyzer built-in preset](/rest/api/media/transforms/create-or-update#audioanalyzerpreset), but you initially set up that preset to use the audio language setting of "en-us" for English. This would result in a transform where each job submitted would be sent to the speech-to-text transcription engine as US English only. Every job submitted to that transform would be locked to the "en-us" language setting. You could work around this scenario by having a transform defined for every language, but that would be much more difficult to manage and you could hit transform quota limitations in your account.
To best solve for this scenario, you use a preset override on the job output asset prior to submitting the job to the transform. You can then define a single "Audio transcription" transform and pass in the required language settings per-job. The preset override provides you a way to pass in a new custom preset definition with each job submitted to the transform. This property is available on the [job output](/dotnet/api/microsoft.azure.management.media.models.joboutput) entity in all SDK versions based off the 2021-06-01 version of the API.
For reference, see the [presetOverride](https://github.com/Azure/azure-rest-api-
## Example of preset override in .NET
-A complete example using the .NET SDK for Media Services showing how to use preset override with a basic audio analyzer transform is available in github.
+A complete example using the .NET SDK for Media Services showing how to use preset override with a basic audio analyzer transform is available in GitHub.
See the [Analyze a media file with an audio analyzer preset](https://github.com/Azure-Samples/media-services-v3-dotnet/tree/main/AudioAnalytics/AudioAnalyzer) sample for details on how to use the preset override property of the job output. ## Sample code of preset override in .NET
Check out the [Azure Media Services community](media-services-community.md) arti
* [Upload, encode, and stream using Media Services](stream-files-tutorial-with-api.md). * [Encode from an HTTPS URL using built-in presets](job-input-from-http-how-to.md). * [Encode a local file using built-in presets](job-input-from-local-file-how-to.md).
-* [Build a custom preset to target your specific scenario or device requirements](transform-custom-presets-how-to.md).
+* [Build a custom preset to target your specific scenario or device requirements](transform-custom-transform-how-to.md).
media-services Encode Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/encode-concept.md
To encode with Media Services v3, you need to create a [Transform](/rest/api/med
When encoding with Media Services, you use presets to tell the encoder how the input media files should be processed. In Media Services v3, you use Standard Encoder to encode your files. For example, you can specify the video resolution and/or the number of audio channels you want in the encoded content.
-You can get started quickly with one of the recommended built-in presets based on industry best practices or you can choose to build a custom preset to target your specific scenario or device requirements. For more information, see [Encode with a custom Transform](transform-custom-presets-how-to.md).
+You can get started quickly with one of the recommended built-in presets based on industry best practices or you can choose to build a custom preset to target your specific scenario or device requirements. For more information, see [Encode with a custom Transform](transform-custom-transform-how-to.md).
Starting with January 2019, when encoding with the Standard Encoder to produce MP4 file(s), a new .mpi file is generated and added to the output Asset. This MPI file is intended to improve performance for [dynamic packaging](encode-dynamic-packaging-concept.md) and streaming scenarios.
You can specify to create a [Job](/rest/api/media/jobs/create) with a single cli
See examples:
-* [Subclip a video with .NET](transform-subclip-video-dotnet-how-to.md)
-* [Subclip a video with REST](transform-subclip-video-rest-how-to.md)
+* [Subclip a video with .NET](transform-subclip-video-how-to.md)
## Built-in presets
Media Services fully supports customizing all values in presets to meet your spe
#### Examples -- [Customize presets with .NET](transform-custom-presets-how-to.md)-- [Customize presets with CLI](transform-custom-preset-cli-how-to.md)-- [Customize presets with REST](transform-custom-preset-rest-how-to.md)-
+- [Customize presets with .NET](transform-custom-transform-how-to.md)
## Preset schema
In Media Services v3, presets are strongly typed entities in the API itself. You
## Scaling encoding in v3
-To scale media processing, see [Scale with CLI](media-reserved-units-cli-how-to.md).
+To scale media processing, see [Scale with CLI](media-reserved-units-how-to.md).
For accounts created with the **2020-05-01** or later version of the API or through the Azure portal, scaling and media reserved units are no longer required. Scaling will be automatic and handled by the service internally. ## Billing
For accounts created with the **2020-05-01** or later version of the API or thro
Media Services does not bill for canceled or errored jobs. For example, a job that has reached 50% progress and is canceled is not billed at 50% of the job minutes. You are only charged for finished jobs. For more information, see [pricing](https://azure.microsoft.com/pricing/details/media-services/).-
-## Ask questions, give feedback, get updates
-
-Check out the [Azure Media Services community](media-services-community.md) article to see different ways you can ask questions, give feedback, and get updates about Media Services.
-
-## Next steps
-
-* [Upload, encode, and stream using Media Services](stream-files-tutorial-with-api.md).
-* [Encode from an HTTPS URL using built-in presets](job-input-from-http-how-to.md).
-* [Encode a local file using built-in presets](job-input-from-local-file-how-to.md).
-* [Build a custom preset to target your specific scenario or device requirements](transform-custom-presets-how-to.md).
media-services Encode Dynamic Packaging Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/encode-dynamic-packaging-concept.md
The advantages of just-in-time packaging are the following:
* You can store all your files in standard MP4 file format * You do not need to store multiple copies of static packaged HLS and DASH formats in blob storage, reducing the amount of video content stored and lowering your overall costs of storage * You can instantly take advantage of new protocol updates and changes to the specifications as they evolve over time without need of re-packaging the static content in your catalog
-* You can deliver content with our without encryption and DRM using the same MP4 files in storage
+* You can deliver content with or without encryption and DRM using the same MP4 files in storage
* You can dynamically filter or alter the manifests with simple asset-level or global filters to remove specific tracks, resolutions, languages, or provide shorter highlight clips from the same MP4 files without re-encoding or re-rendering the content. ## To prepare your source files for delivery
The following articles show examples of [how to encode a video with Media Servic
* [Use content aware encoding](encode-content-aware-concept.md). * [Encode from an HTTPS URL by using built-in presets](job-input-from-http-how-to.md). * [Encode a local file by using built-in presets](job-input-from-local-file-how-to.md).
-* [Build a custom preset to target your specific scenario or device requirements](transform-custom-presets-how-to.md).
+* [Build a custom preset to target your specific scenario or device requirements](transform-custom-transform-how-to.md).
* [Code samples for encoding with Standard Encoder using .NET](https://github.com/Azure-Samples/media-services-v3-dotnet/tree/main/VideoEncoding) See the list of supported Standard Encoder input [formats and codecs](encode-media-encoder-standard-formats-reference.md).
media-services Encode Media Encoder Standard Formats Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/encode-media-encoder-standard-formats-reference.md
[!INCLUDE [media services api v3 logo](./includes/v3-hr.md)]
-This article contains a list of the most common import and export file formats that you can use with [StandardEncoderPreset](/rest/api/medi).
+This article contains a list of the most common import and export file formats that you can use with [StandardEncoderPreset](/rest/api/medi).
## Input container/file formats
The following table lists the codecs and file formats that are supported for exp
| File format | Video codecs | Audio codecs |
| --- | --- | --- |
| MP4 <br/><br/>(including multi-bitrate MP4 containers) | H.264 (High, Main, and Baseline Profiles), HEVC (H.265) 8-bit | AAC-LC, HE-AAC v1, HE-AAC v2 |
| MPEG2-TS | H.264 (High, Main, and Baseline Profiles) | AAC-LC, HE-AAC v1, HE-AAC v2 |
-
-## Next steps
-
-[Create a transform with a custom preset](transform-custom-presets-how-to.md)
media-services Job Cancel How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/job-cancel-how-to.md
+
+ Title: Cancel a job
+description: This article shows how to cancel a job.
+++++ Last updated : 03/10/2022+++
+# Cancel a job
++
+## Methods
+
+Use the following methods to cancel a job.
+
+## [CLI](#tab/cli/)
++
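For example, here is a minimal Azure CLI sketch; the account, resource group, transform, and job names are placeholder values (jobs are addressed through their parent transform):

```azurecli
az ams job cancel \
  -a amsaccount \
  -g amsResourceGroup \
  --transform-name testEncodingTransform \
  -n testJob001
```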
+## [REST](#tab/rest/)
+++
media-services Job Create How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/job-create-how-to.md
Last updated 03/01/2022
-# CLI example: Create and submit a job
+# Create a job
[!INCLUDE [media services api v3 logo](./includes/v3-hr.md)]
In Media Services v3, when you submit Jobs to process your videos, you have to t
[Create a Media Services account](./account-create-how-to.md).
-## [Portal](#tab/rest/)
+## [Portal](#tab/portal/)
## [CLI](#tab/cli/)
-## Example script
-When you run `az ams job start`, you can set a label on the job's output. The label can later be used to identify what this output asset is for.
+## [REST](#tab/rest/)
-- If you assign a value to the label, set '--output-assets' to "assetname=label"-- If you do not assign a value to the label, set '--output-assets' to "assetname=".
- Notice that you add "=" to the `output-assets`.
-
-```azurecli
-az ams job start \
- --name testJob001 \
- --transform-name testEncodingTransform \
- --base-uri 'https://nimbuscdn-nimbuspm.streaming.mediaservices.windows.net/2b533311-b215-4409-80af-529c3e853622/' \
- --files 'Ignite-short.mp4' \
- --output-assets testOutputAssetName= \
- -a amsaccount \
- -g amsResourceGroup
-```
-
-You get a response similar to this:
-
-```
-{
- "correlationData": {},
- "created": "2019-02-15T05:08:26.266104+00:00",
- "description": null,
- "id": "/subscriptions/<id>/resourceGroups/amsResourceGroup/providers/Microsoft.Media/mediaservices/amsaccount/transforms/testEncodingTransform/jobs/testJob001",
- "input": {
- "baseUri": "https://nimbuscdn-nimbuspm.streaming.mediaservices.windows.net/2b533311-b215-4409-80af-529c3e853622/",
- "files": [
- "Ignite-short.mp4"
- ],
- "label": null,
- "odatatype": "#Microsoft.Media.JobInputHttp"
- },
- "lastModified": "2019-02-15T05:08:26.266104+00:00",
- "name": "testJob001",
- "outputs": [
- {
- "assetName": "testOutputAssetName",
- "error": null,
- "label": "",
- "odatatype": "#Microsoft.Media.JobOutputAsset",
- "progress": 0,
- "state": "Queued"
- }
- ],
- "priority": "Normal",
- "resourceGroup": "amsResourceGroup",
- "state": "Queued",
- "type": "Microsoft.Media/mediaservices/transforms/jobs"
-}
-```
media-services Job Delete How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/job-delete-how-to.md
+
+ Title: Delete a job
+description: This article shows how to delete a job.
+++++ Last updated : 03/10/2022+++
+# Delete a job
++
+## Methods
+
+Use the following methods to delete a job.
+
+## [CLI](#tab/cli/)
++
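For example, here is a minimal Azure CLI sketch; the account, resource group, transform, and job names are placeholder values:

```azurecli
az ams job delete \
  -a amsaccount \
  -g amsResourceGroup \
  --transform-name testEncodingTransform \
  -n testJob001
```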
+## [REST](#tab/rest/)
+++
media-services Job Download Results How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/job-download-results-how-to.md
Title: Download the results of a job - Azure Media Services description: This article demonstrates how to download the results of a job.- ---+ Previously updated : 08/31/2020 Last updated : 03/09/2022 - # Download the results of a job
In Azure Media Services, when processing your videos (for example, encoding or a
This article demonstrates how to download the results using Java and .NET SDKs.
-## Java
-
-```java
-/**
- * Use Media Service and Storage APIs to download the output files to a local folder
- * @param manager The entry point of Azure Media resource management
- * @param resourceGroup The name of the resource group within the Azure subscription
- * @param accountName The Media Services account name
- * @param assetName The asset name
- * @param outputFolder The output folder for downloaded files.
- * @throws StorageException
- * @throws URISyntaxException
- * @throws IOException
- */
-private static void downloadResults(MediaManager manager, String resourceGroup, String accountName,
- String assetName, File outputFolder) throws StorageException, URISyntaxException, IOException {
- ListContainerSasInput parameters = new ListContainerSasInput()
- .withPermissions(AssetContainerPermission.READ)
- .withExpiryTime(DateTime.now().plusHours(1));
- AssetContainerSas assetContainerSas = manager.assets()
- .listContainerSasAsync(resourceGroup, accountName, assetName, parameters).toBlocking().first();
-
- String strSas = assetContainerSas.assetContainerSasUrls().get(0);
- CloudBlobContainer container = new CloudBlobContainer(new URI(strSas));
-
- File directory = new File(outputFolder, assetName);
- directory.mkdir();
-
- ArrayList<ListBlobItem> blobs = container.listBlobsSegmented(null, true, EnumSet.noneOf(BlobListingDetails.class), 200, null, null, null).getResults();
-
- for (ListBlobItem blobItem: blobs) {
- if (blobItem instanceof CloudBlockBlob) {
- CloudBlockBlob blob = (CloudBlockBlob)blobItem;
- File downloadTo = new File(directory, blob.getName());
-
- blob.downloadToFile(downloadTo.getPath());
- }
- }
-
- System.out.println("Download complete.");
-}
-```
-
-See the full code sample: [EncodingWithMESPredefinedPreset](https://github.com/Azure-Samples/media-services-v3-java/blob/master/VideoEncoding/EncodingWithMESPredefinedPreset/src/main/java/sample/EncodingWithMESPredefinedPreset.java)
+## Methods
-## .NET
+## [.NET](#tab/net/)
```csharp /// <summary>
private async static Task DownloadResults(IAzureMediaServicesClient client, stri
} ```
-See the full code sample: [EncodingWithMESPredefinedPreset](https://github.com/Azure-Samples/media-services-v3-dotnet/blob/main/VideoEncoding/Encoding_PredefinedPreset/Program.cs)
+## Code sample
-## Next steps
+See the full code sample: [EncodingWithMESPredefinedPreset](https://github.com/Azure-Samples/media-services-v3-dotnet/blob/main/VideoEncoding/Encoding_PredefinedPreset/Program.cs)
-[Create a job input from an HTTPS URL](job-input-from-http-how-to.md).
+
media-services Job Input From Http How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/job-input-from-http-how-to.md
In Media Services v3, when you submit Jobs to process your videos, you have to t
> [!TIP] > Before you start developing, review [Developing with Media Services v3 APIs](media-services-apis-overview.md) (includes information on accessing APIs, naming conventions, etc.)
+## Methods
+
+## [.NET](#tab/net/)
+ ## .NET sample The following code shows how to create a job with an HTTPS URL input. [!code-csharp[Main](../../../media-services-v3-dotnet-quickstarts/AMSV3Quickstarts/EncodeAndStreamFiles/Program.cs#SubmitJob)]
-## Job error codes
-
-See [Error codes](/rest/api/media/jobs/get#joberrorcode).
-
-## Next steps
-
-[Create a job input from a local file](job-input-from-local-file-how-to.md).
+
media-services Job Input From Local File How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/job-input-from-local-file-how-to.md
Title: Create a job input from a local file description: This article demonstrates how to create an Azure Media Services job input from a local file.- --+ Previously updated : 05/25/2021 Last updated : 03/09/2022
In Media Services v3, when you submit Jobs to process your videos, you have to t
* [Create a Media Services account](./account-create-how-to.md).
+## [.NET](#tab/net/)
+ ## .NET sample The following code shows how to create an input asset and use it as the input for the job. The CreateInputAsset function performs the following actions:
The following code snippet submits an encoding job:
See [Error codes](/rest/api/media/jobs/get#joberrorcode).
-## Next steps
-
-[Create a job input from an HTTPS URL](job-input-from-http-how-to.md).
+
media-services Job List How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/job-list-how-to.md
+
+ Title: List jobs
+description: This article shows how to list jobs.
+++++ Last updated : 03/10/2022+++
+# List jobs
++
+## Methods
+
+Use the following methods to list jobs.
+
+## [CLI](#tab/cli/)
++
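For example, here is a minimal Azure CLI sketch; the account, resource group, and transform names are placeholder values (jobs are listed per transform):

```azurecli
az ams job list \
  -a amsaccount \
  -g amsResourceGroup \
  --transform-name testEncodingTransform \
  --output table
```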
+## [REST](#tab/rest/)
+++
media-services Job Multiple Transform Outputs How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/job-multiple-transform-outputs-how-to.md
This topic shows how to create a Transform with two Transform Outputs. The first
> [!TIP] > Before you start developing, review [Developing with Media Services v3 APIs](media-services-apis-overview.md) (includes information on accessing APIs, naming conventions, etc.)
+## [.NET](#tab/net/)
+ ## Create a transform The following code shows how to create a transform that produces two outputs.
private static async Task<Job> SubmitJobAsync(IAzureMediaServicesClient client,
return job; } ```
-## Job error codes
-
-See [Error codes](/rest/api/media/jobs/get#joberrorcode).
-## Next steps
-
-[Azure Media Services v3 samples using .NET](https://github.com/Azure-Samples/media-services-v3-dotnet/tree/master/)
+
media-services Job Show How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/job-show-how-to.md
+
+ Title: Show or get a job
+description: This article shows how to show or get a job.
+++++ Last updated : 03/10/2022+++
+# Show the details of a job
++
+## Methods
+
+Use the following methods to show or get a job.
+
+## [CLI](#tab/cli/)
++
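For example, here is a minimal Azure CLI sketch; the account, resource group, transform, and job names are placeholder values:

```azurecli
az ams job show \
  -a amsaccount \
  -g amsResourceGroup \
  --transform-name testEncodingTransform \
  -n testJob001
```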
+## [REST](#tab/rest/)
+++
media-services Job Update How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/job-update-how-to.md
+
+ Title: Update a job
+description: This article shows how to update a job.
+++++ Last updated : 03/10/2022+++
+# Update a job
++
+## Methods
+
+Use the following methods to update a job.
+
+## [CLI](#tab/cli/)
++
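For example, here is a minimal Azure CLI sketch that raises the priority of a queued job; the account, resource group, transform, and job names are placeholder values:

```azurecli
az ams job update \
  -a amsaccount \
  -g amsResourceGroup \
  --transform-name testEncodingTransform \
  -n testJob001 \
  --priority High
```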
+## [REST](#tab/rest/)
+++
media-services Live Event Cloud Dvr Time How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/live-event-cloud-dvr-time-how-to.md
For more information, see:
## Next steps
-* [Subclip your videos](transform-subclip-video-rest-how-to.md).
+* [Subclip your videos](transform-subclip-video-how-to.md).
* [Define filters for your assets](filters-dynamic-manifest-rest-howto.md).
media-services Media Reserved Units How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/media-reserved-units-how-to.md
+
+ Title: Scale Media Reserved Units (MRUs)
+description: This topic shows how to scale media processing with Azure Media Services.
+++++ Last updated : 08/25/2021++
+# How to scale media reserved units (legacy)
++
+This article shows you how to scale Media Reserved Units (MRUs) for faster encoding.
+
+> [!WARNING]
+> This command no longer works for Media Services accounts that are created with the 2020-05-01 (or later) version of the API. For these accounts, media reserved units are no longer needed, as the system automatically scales up and down based on load. If you don't see the option to manage MRUs in the Azure portal, you're using an account that was created with the 2020-05-01 API or later.
+> The purpose of this article is to document the legacy process of using MRUs.
+
+## Prerequisites
+
+[Create a Media Services account](./account-create-how-to.md).
+
+Understand [Media Reserved Units](concept-media-reserved-units.md).
+
+## [CLI](#tab/cli/)
+
+## Scale Media Reserved Units with CLI
+
+Run the `mru` command.
+
+The following [az ams account mru](/cli/azure/ams/account/mru) command sets Media Reserved Units on the "amsaccount" account using the **count** and **type** parameters.
+
+```azurecli
+az ams account mru set -n amsaccount -g amsResourceGroup --count 10 --type S3
+```
+
+## Billing
+
+ While there were previously charges for Media Reserved Units, as of April 17, 2021 there are no longer any charges for accounts that have configuration for Media Reserved Units.
+
+## See also
+
+* [Migrate from Media Services v2 to v3](migrate-v-2-v-3-migration-introduction.md)
++
media-services Media Services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/media-services-overview.md
Title: Azure Media Services v3 overview : Azure Media Services description: A high-level overview of Azure Media Services v3 with links to quickstarts, tutorials, and code samples.-
-tags: ''
-keywords: azure media services, stream, broadcast, live, offline
- - Previously updated : 3/10/2021 Last updated : 03/09/2022 -+ #Customer intent: As a developer or a content provider, I want to encode, stream (on demand or live), analyze my media content so that my customers can: view the content on a wide variety of browsers and devices, gain valuable insights from recorded content.
How-to guides contain code samples that demonstrate how to complete a task. In t
* [Encode with HTTPS as job input - .NET](job-input-from-http-how-to.md) * [Monitor events - Portal](monitoring/monitor-events-portal-how-to.md) * [Encrypt dynamically with multi-DRM - .NET](drm-protect-with-drm-tutorial.md)
-* [How to encode with a custom transform - CLI](transform-custom-preset-cli-how-to.md)
+* [How to encode with a custom transform - CLI](transform-custom-transform-how-to.md)
## Ask questions, give feedback, get updates
media-services Migrate V 2 V 3 Migration Scenario Based Content Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/migrate-v-2-v-3-migration-scenario-based-content-protection.md
You should first unpublish (remove all Streaming Locators) on the Asset via the
### How to guides -- [Get a signing key from the existing policy](drm-get-content-key-policy-dotnet-how-to.md)
+- [Get a signing key from the existing policy](drm-get-content-key-policy-how-to.md)
- [Offline FairPlay Streaming for iOS with Media Services v3](drm-offline-fairplay-for-ios-concept.md) - [Offline Widevine streaming for Android with Media Services v3](drm-offline-widevine-for-android.md) - [Offline PlayReady Streaming for Windows 10 with Media Services v3](drm-offline-playready-streaming-for-windows-10.md)
media-services Migrate V 2 V 3 Migration Scenario Based Encoding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/migrate-v-2-v-3-migration-scenario-based-encoding.md
For customers using the Indexer v1 processor in the v2 API, you need to create a
- [Create a job input from an HTTPS URL](job-input-from-http-how-to.md) - [Create a job input from a local file](job-input-from-local-file-how-to.md) - [Create a basic audio transform](transform-create-basic-audio-how-to.md)-- With .NET
- - [How to encode with a custom transform - .NET](transform-custom-presets-how-to.md)
- - [How to create an overlay with Media Encoder Standard](transform-create-overlay-how-to.md)
- - [How to generate thumbnails using Encoder Standard with .NET](transform-generate-thumbnails-dotnet-how-to.md)
-- With Azure CLI
- - [How to encode with a custom transform - Azure CLI](transform-custom-preset-cli-how-to.md)
-- With REST
- - [How to encode with a custom transform - REST](transform-custom-preset-rest-how-to.md)
- - [How to generate thumbnails using Encoder Standard with REST](transform-generate-thumbnails-rest-how-to.md)
-- [Subclip a video when encoding with Media Services - .NET](transform-subclip-video-dotnet-how-to.md)-- [Subclip a video when encoding with Media Services - REST](transform-subclip-video-rest-how-to.md)
+- [How to encode with a custom transform](transform-custom-transform-how-to.md)
+- [How to create an overlay with Media Encoder Standard](transform-create-overlay-how-to.md)
+- [How to generate thumbnails using Encoder Standard](transform-generate-thumbnails-dotnet-how-to.md)
+- [Subclip a video when encoding with Media Services - REST](transform-subclip-video-how-to.md)
## Samples
media-services Migrate V 2 V 3 Migration Scenario Based Media Reserved Units https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/migrate-v-2-v-3-migration-scenario-based-media-reserved-units.md
Please migrate your MRUs based on the following scenarios:
* If you are an existing V2 customer, you need to create a new V3 account to support your existing application prior to the completion of migration. * Indexer V1 or other media processors that are not fully deprecated yet may need to be enabled again.
-For more information about MRUs, see [Media Reserved Units](concept-media-reserved-units.md) and [How to scale media reserved units](media-reserved-units-cli-how-to.md).
+For more information about MRUs, see [Media Reserved Units](concept-media-reserved-units.md) and [How to scale media reserved units](media-reserved-units-how-to.md).
## MRU concepts, tutorials and how to guides
For more information about MRUs, see [Media Reserved Units](concept-media-reserv
### How to guides
-[How to scale media reserved units](media-reserved-units-cli-how-to.md)
+[How to scale media reserved units](media-reserved-units-how-to.md)
## Samples
media-services Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/release-notes.md
Azure Media Services is now available in the Norway East region in the Azure por
### Basic Audio Analysis
-The Audio Analysis preset now includes a Basic mode pricing tier. The new Basic Audio Analyzer mode provides a low-cost option to extract speech transcription, and format output captions and subtitles. This mode performs speech-to-text transcription and generation of a VTT subtitle/caption file. The output of this mode includes an Insights JSON file including only the keywords, transcription,and timing information. Automatic language detection and speaker diarization are not included in this mode. See the list of [supported languages.](analyze-video-audio-files-concept.md#built-in-presets)
+The Audio Analysis preset now includes a Basic mode pricing tier. The new Basic Audio Analyzer mode provides a low-cost option to extract speech transcription, and format output captions and subtitles. This mode performs speech-to-text transcription and generation of a VTT subtitle/caption file. The output of this mode includes an Insights JSON file including only the keywords, transcription, and timing information. Automatic language detection and speaker diarization are not included in this mode. See the list of [supported languages](analyze-video-audio-files-concept.md#built-in-presets).
Customers using Indexer v1 and Indexer v2 should migrate to the Basic Audio Analysis preset.
This functionality works with any [Transform](/rest/api/media/transforms) that i
See examples:
-* [Subclip a video with .NET](transform-subclip-video-dotnet-how-to.md)
-* [Subclip a video with REST](transform-subclip-video-rest-how-to.md)
+* [Subclip a video with REST](transform-subclip-video-how-to.md)
## May 2019
The CLI 2.0 module is now available for [Azure Media Services v3 GA](/cli/azure/
- [az ams live-output](/cli/azure/ams/live-output) - [az ams streaming-endpoint](/cli/azure/ams/streaming-endpoint) - [az ams streaming-locator](/cli/azure/ams/streaming-locator)-- [az ams account mru](/cli/azure/ams/account/mru) - enables you to manage Media Reserved Units. For more information, see [Scale Media Reserved Units](media-reserved-units-cli-how-to.md).
+- [az ams account mru](/cli/azure/ams/account/mru) - enables you to manage Media Reserved Units. For more information, see [Scale Media Reserved Units](media-reserved-units-how-to.md).
### New features and breaking changes
Starting with this release, you can use Resource Manager templates to create Liv
The following improvements were introduced: - Ingest from HTTP(s) URLs or Azure Blob Storage SAS URLs.-- Specify you own container names for Assets.
+- Specify your own container names for Assets.
- Easier output support to create custom workflows with Azure Functions. #### New Transform object
media-services Security Rbac Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/security-rbac-concept.md
See the following articles for more information:
## Next steps - [Developing with Media Services v3 APIs](media-services-apis-overview.md)-- [Get content key policy using Media Services .NET](drm-get-content-key-policy-dotnet-how-to.md)
+- [Get content key policy using Media Services .NET](drm-get-content-key-policy-how-to.md)
media-services Signal Descriptive Audio Howto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/signal-descriptive-audio-howto.md
Title: Signal descriptive audio tracks with Media Services v3 description: Follow the steps of this tutorial to upload a file, encode the video, add descriptive audio tracks, and stream your content with Media Services v3. - - Previously updated : 08/31/2020 Last updated : 03/09/2022
This article shows how to encode a video, upload an audio-only MP4 file (AAC cod
- Review [Dynamic packaging](encode-dynamic-packaging-concept.md). - Review the [Upload, encode, and stream videos](stream-files-tutorial-with-api.md) tutorial.
-## Create an input asset and upload a local file into it
+## [.NET](#tab/net/)
+
+## Create an input asset and upload a local file into it
The **CreateInputAsset** function creates a new input [Asset](/rest/api/media/assets) and uploads the specified local video file into it. This **Asset** is used as the input to your encoding Job. In Media Services v3, the input to a **Job** can either be an **Asset**, or it can be content that you make available to your Media Services account via HTTPS URLs.
To test the stream, this article uses Azure Media Player.
Azure Media Player can be used for testing but should not be used in a production environment.
-## Next steps
-
-[Analyze videos](analyze-videos-tutorial.md)
+
media-services Stream Files Tutorial With Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/stream-files-tutorial-with-api.md
When encoding or processing content in Media Services, it's a common pattern to
When creating a new [Transform](/rest/api/medi).
-You can use a built-in EncoderNamedPreset or use custom presets. For more information, see [How to customize encoder presets](transform-custom-presets-how-to.md).
+You can use a built-in EncoderNamedPreset or use custom presets. For more information, see [How to customize encoder presets](transform-custom-transform-how-to.md).
When creating a [Transform](/rest/api/media/transforms), you should first check if one already exists using the **Get** method, as shown in the code that follows. In Media Services v3, **Get** methods on entities return **null** if the entity doesn't exist (a case-insensitive check on the name).
media-services Transform Create Basic Audio How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/transform-create-basic-audio-how-to.md
Title: Create a basic audio transform description: Create a basic audio transform using Media Services API. - - Previously updated : 11/18/2020 Last updated : 03/09/2022
Follow the steps in [Create a Media Services account](./account-create-how-to.md
## Methods
+## [REST](#tab/rest/)
+ ### Using the REST API [!INCLUDE [media-services-cli-instructions.md](./includes/task-create-basic-audio-rest.md)]
Follow the steps in [Create a Media Services account](./account-create-how-to.md
## Next steps [!INCLUDE [transforms next steps](./includes/transforms-next-steps.md)]++
media-services Transform Create Copy Video Audio How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/transform-create-copy-video-audio-how-to.md
Title: Create a CopyVideo CopyAudio transform description: Create a CopyVideo CopyAudio transform using Media Services API. - - Previously updated : 11/19/2020 Last updated : 03/09/2022
This article shows how to create a `CopyVideo/CopyAudio` transform.
-This transform allows you have input video / input audio streams copied from the input asset to the output asset without any changes. This can be of value with multi bitrate encoding output where the input video and/or audio would be part of the output. It simply writes the manifest and other files needed to stream content.
+This transform allows you to have input video/input audio streams copied from the input asset to the output asset without any changes. This can be of value with multi-bitrate encoding output where the input video and/or audio would be part of the output. It simply writes the manifest and other files needed to stream content.
## Prerequisites
Follow the steps in [Create a Media Services account](./account-create-how-to.md
## Methods
+## [REST](#tab/rest/)
+ ### Using the REST API [!INCLUDE [task-create-copy-video-audio-rest.md](./includes/task-create-copy-video-audio-rest.md)]
-## Next steps
-+
media-services Transform Create Copyallbitratenoninterleaved How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/transform-create-copyallbitratenoninterleaved-how-to.md
Title: Create a CopyAllBitrateNonInterleaved transform
-description: Create a CopyAllBitrateNonInterleaved transform using Media Services API.
-
+description: Create a CopyAllBitrateNonInterleaved transform.
- - Previously updated : 10/23/2020 Last updated : 03/09/2022
Follow the steps in [Create a Media Services account](./account-create-how-to.md
## Methods
+## [REST](#tab/rest/)
+ ### Using the REST API [!INCLUDE [task-create-copyallbitratenoninterleaved.md](./includes/task-create-copyallbitratenoninterleaved.md)]
-## Next steps
-+
media-services Transform Create Overlay How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/transform-create-overlay-how-to.md
Previously updated : 08/31/2020 Last updated : 03/09/2022 # How to create an image overlay
Last updated 08/31/2020
Media Services allows you to overlay an image, audio file, or another video on top of a video. The input must specify exactly one image file. You can specify an image file in JPG, PNG, GIF or BMP format, or an audio file (such as a WAV, MP3, WMA or M4A file), or a video file in a supported file format.
+## [.NET](#tab/net/)
## Prerequisites
Media Services allows you to overlay an image, audio file, or another video on t
If you aren't already familiar with the creation of Transforms, it is recommended that you complete the following activities: * Read [Encoding video and audio with Media Services](encode-concept.md)
-* Read [How to encode with a custom transform - .NET](transform-custom-presets-how-to.md). Follow the steps in that article to set up the .NET needed to work with transforms, then return here to try out an overlays preset sample.
+* Read [How to encode with a custom transform - .NET](transform-custom-transform-how-to.md). Follow the steps in that article to set up the .NET needed to work with transforms, then return here to try out an overlays preset sample.
* See the [Transforms reference document](/rest/api/media/transforms). Once you are familiar with Transforms, download the overlays sample.
The sample also publishes the content for streaming and will output the full HLS
* [Filters](/rest/api/media/transforms/create-or-update#filters) * [StandardEncoderPreset](/rest/api/media/transforms/create-or-update#standardencoderpreset) - [!INCLUDE [reference dotnet sdk references](./includes/reference-dotnet-sdk-references.md)]
-## Next steps
-+
media-services Transform Crop How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/transform-crop-how-to.md
+
+ Title: How to crop video files with Media Services
+description: Cropping is the process of selecting a rectangular window within the video frame, and encoding just the pixels within that window. This topic shows how to crop video files with Media Services.
+++++ Last updated : 03/09/2022+++
+# How to crop video files with Media Services
++
+You can use Media Services to crop an input video. Cropping is the process of selecting a rectangular window within the video frame, and encoding just the pixels within that window. The following diagram helps illustrate the process.
+
+## Pre-processing stage
+
+Cropping is a pre-processing stage, so the *cropping parameters* in the encoding preset apply to the *input* video. Encoding is a subsequent stage, and the width/height settings apply to the *pre-processed* video, and not to the original video. When designing your preset, do the following:
+
+1. Select the crop parameters based on the original input video
+1. Select your encode settings based on the cropped video.
+
+> [!WARNING]
+> If you do not match your encode settings to the cropped video, the output will not be as you expect.
+
+For example, your input video has a resolution of 1920x1080 pixels (16:9 aspect ratio), but has black bars (pillar boxes) at the left and right, so that only a 4:3 window of 1440x1080 pixels contains active video. You can crop the black bars, and encode the 1440x1080 area.
+
+## [.NET](#tab/net/)
+
+## Transform code
+
+The following code snippet illustrates how to write a transform in .NET to crop videos. The code assumes that you have a local file to work with.
+
+- `Left` is the left-most location of the crop.
+- `Top` is the top-most location of the crop.
+- `Width` is the final width of the crop.
+- `Height` is the final height of the crop.
+
+```csharp
+// A crop-and-encode preset: select a 1280x720 window starting at (200, 200)
+// in the input frame, then encode that window as 1280x720 H.264 with AAC audio.
+var preset = new StandardEncoderPreset
+{
+    Filters = new Filters
+    {
+        Crop = new Rectangle
+        {
+            Left = "200",
+            Top = "200",
+            Width = "1280",
+            Height = "720"
+        }
+    },
+    Codecs = new Codec[]
+    {
+        new AacAudio(),
+        new H264Video
+        {
+            Layers = new H264Layer[]
+            {
+                new H264Layer
+                {
+                    Bitrate = 1000000,
+                    Width = "1280",
+                    Height = "720"
+                }
+            }
+        }
+    },
+    Formats = new Format[]
+    {
+        new Mp4Format
+        {
+            FilenamePattern = "{Basename}_{Bitrate}{Extension}"
+        }
+    }
+};
+```
++
media-services Transform Custom Preset Cli How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/transform-custom-preset-cli-how-to.md
- Title: Encode a custom transform CLI
-description: This topic shows how to use Azure Media Services v3 to encode a custom transform using Azure CLI.
------- Previously updated : 08/31/2020---
-# How to encode with a custom transform - Azure CLI
--
-When encoding with Azure Media Services, you can get started quickly with one of the recommended built-in presets, based on industry best practices, as demonstrated in the [Streaming files](stream-files-cli-quickstart.md#create-a-transform-for-adaptive-bitrate-encoding) quickstart. You can also build a custom preset to target your specific scenario or device requirements.
-
-## Considerations
-
-When creating custom presets, the following considerations apply:
-
-* All values for height and width on AVC content must be a multiple of 4.
-* In Azure Media Services v3, all of the encoding bitrates are in bits per second. This is different from the presets with our v2 APIs, which used kilobits/second as the unit. For example, if the bitrate in v2 was specified as 128 (kilobits/second), in v3 it would be set to 128000 (bits/second).
-
-## Prerequisites
-
-[Create a Media Services account](./account-create-how-to.md).
-
-Make sure to remember the resource group name and the Media Services account name.
-
-## Define a custom preset
-
-The following example defines the request body of a new Transform. We define a set of outputs that we want to be generated when this Transform is used.
-
-In this example, we first add an AacAudio layer for the audio encoding and two H264Video layers for the video encoding. In the video layers, we assign labels so that they can be used in the output file names. Next, we want the output to also include thumbnails. In the example below we specify images in PNG format, generated at 50% of the resolution of the input video, and at three timestamps - {25%, 50%, 75} of the length of the input video. Lastly, we specify the format for the output files - one for video + audio, and another for the thumbnails. Since we have multiple H264Layers, we have to use macros that produce unique names per layer. We can either use a `{Label}` or `{Bitrate}` macro, the example shows the former.
-
-We are going to save this transform in a file. In this example, we name the file `customPreset.json`.
-
-```json
-{
- "@odata.type": "#Microsoft.Media.StandardEncoderPreset",
- "codecs": [
- {
- "@odata.type": "#Microsoft.Media.AacAudio",
- "channels": 2,
- "samplingRate": 48000,
- "bitrate": 128000,
- "profile": "AacLc"
- },
- {
- "@odata.type": "#Microsoft.Media.H264Video",
- "keyFrameInterval": "PT2S",
- "stretchMode": "AutoSize",
- "sceneChangeDetection": false,
- "complexity": "Balanced",
- "layers": [
- {
- "width": "1280",
- "height": "720",
- "label": "HD",
- "bitrate": 3400000,
- "maxBitrate": 3400000,
- "bFrames": 3,
- "slices": 0,
- "adaptiveBFrame": true,
- "profile": "Auto",
- "level": "auto",
- "bufferWindow": "PT5S",
- "referenceFrames": 3,
- "entropyMode": "Cabac"
- },
- {
- "width": "640",
- "height": "360",
- "label": "SD",
- "bitrate": 1000000,
- "maxBitrate": 1000000,
- "bFrames": 3,
- "slices": 0,
- "adaptiveBFrame": true,
- "profile": "Auto",
- "level": "auto",
- "bufferWindow": "PT5S",
- "referenceFrames": 3,
- "entropyMode": "Cabac"
- }
- ]
- },
- {
- "@odata.type": "#Microsoft.Media.PngImage",
- "stretchMode": "AutoSize",
- "start": "25%",
- "step": "25%",
- "range": "80%",
- "layers": [
- {
- "width": "50%",
- "height": "50%"
- }
- ]
- }
- ],
- "formats": [
- {
- "@odata.type": "#Microsoft.Media.Mp4Format",
- "filenamePattern": "Video-{Basename}-{Label}-{Bitrate}{Extension}",
- "outputFiles": []
- },
- {
- "@odata.type": "#Microsoft.Media.PngFormat",
- "filenamePattern": "Thumbnail-{Basename}-{Index}{Extension}"
- }
- ]
-}
-```
-
-## Create a new transform
-
-In this example, we create a **Transform** that is based on the custom preset we defined earlier. When creating a Transform, you should first check if one already exist. If the Transform exists, reuse it. The following `show` command returns the `customTransformName` transform if it exists:
-
-```azurecli-interactive
-az ams transform show -a amsaccount -g amsResourceGroup -n customTransformName
-```
-
-The following Azure CLI command creates the Transform based on the custom preset (defined earlier).
-
-```azurecli-interactive
-az ams transform create -a amsaccount -g amsResourceGroup -n customTransformName --description "Basic Transform using a custom encoding preset" --preset customPreset.json
-```
-
-For Media Services to apply the Transform to the specified video or audio, you need to submit a Job under that Transform. For a complete example that shows how to submit a job under a transform, see [Quickstart: Stream video files - Azure CLI](stream-files-cli-quickstart.md).
-
-## See also
-
-[Azure CLI](/cli/azure/ams)
media-services Transform Custom Preset Rest How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/transform-custom-preset-rest-how-to.md
- Title: Encode a custom transform REST
-description: This topic shows how to use Azure Media Services v3 to encode a custom transform using REST.
------- Previously updated : 08/31/2020---
-# How to encode with a custom transform - REST
--
-When encoding with Azure Media Services, you can get started quickly with one of the recommended built-in presets, based on industry best practices, as demonstrated in the [Streaming files](stream-files-tutorial-with-rest.md#create-a-transform) tutorial. You can also build a custom preset to target your specific scenario or device requirements.
---
-## Considerations
-
-When creating custom presets, the following considerations apply:
-
-* All values for height and width on AVC content must be a multiple of 4.
-* In Azure Media Services v3, all of the encoding bitrates are in bits per second. This is different from the presets with our v2 APIs, which used kilobits/second as the unit. For example, if the bitrate in v2 was specified as 128 (kilobits/second), in v3 it would be set to 128000 (bits/second).
-
-## Prerequisites
--- [Create a Media Services account](./account-create-how-to.md). <br/>Make sure to remember the resource group name and the Media Services account name. -- [Configure Postman for Azure Media Services REST API calls](setup-postman-rest-how-to.md).<br/>Make sure to follow the last step in the topic [Get Azure AD Token](setup-postman-rest-how-to.md#get-azure-ad-token). -
-## Define a custom preset
-
-The following example defines the request body of a new Transform. We define a set of outputs that we want to be generated when this Transform is used.
-
-In this example, we first add an AacAudio layer for the audio encoding and two H264Video layers for the video encoding. In the video layers, we assign labels so that they can be used in the output file names. Next, we want the output to also include thumbnails. In the example below we specify images in PNG format, generated at 50% of the resolution of the input video, and at three timestamps - {25%, 50%, 75} of the length of the input video. Lastly, we specify the format for the output files - one for video + audio, and another for the thumbnails. Since we have multiple H264Layers, we have to use macros that produce unique names per layer. We can either use a `{Label}` or `{Bitrate}` macro, the example shows the former.
-
-```json
-{
- "properties": {
- "description": "Basic Transform using a custom encoding preset",
- "outputs": [
- {
- "onError": "StopProcessingJob",
- "relativePriority": "Normal",
- "preset": {
- "@odata.type": "#Microsoft.Media.StandardEncoderPreset",
- "codecs": [
- {
- "@odata.type": "#Microsoft.Media.AacAudio",
- "channels": 2,
- "samplingRate": 48000,
- "bitrate": 128000,
- "profile": "AacLc"
- },
- {
- "@odata.type": "#Microsoft.Media.H264Video",
- "keyFrameInterval": "PT2S",
- "stretchMode": "AutoSize",
- "sceneChangeDetection": false,
- "complexity": "Balanced",
- "layers": [
- {
- "width": "1280",
- "height": "720",
- "label": "HD",
- "bitrate": 3400000,
- "maxBitrate": 3400000,
- "bFrames": 3,
- "slices": 0,
- "adaptiveBFrame": true,
- "profile": "Auto",
- "level": "auto",
- "bufferWindow": "PT5S",
- "referenceFrames": 3,
- "entropyMode": "Cabac"
- },
- {
- "width": "640",
- "height": "360",
- "label": "SD",
- "bitrate": 1000000,
- "maxBitrate": 1000000,
- "bFrames": 3,
- "slices": 0,
- "adaptiveBFrame": true,
- "profile": "Auto",
- "level": "auto",
- "bufferWindow": "PT5S",
- "referenceFrames": 3,
- "entropyMode": "Cabac"
- }
- ]
- },
- {
- "@odata.type": "#Microsoft.Media.PngImage",
- "stretchMode": "AutoSize",
- "start": "25%",
- "step": "25%",
- "range": "80%",
- "layers": [
- {
- "width": "50%",
- "height": "50%"
- }
- ]
- }
- ],
- "formats": [
- {
- "@odata.type": "#Microsoft.Media.Mp4Format",
- "filenamePattern": "Video-{Basename}-{Label}-{Bitrate}{Extension}",
- "outputFiles": []
- },
- {
- "@odata.type": "#Microsoft.Media.PngFormat",
- "filenamePattern": "Thumbnail-{Basename}-{Index}{Extension}"
- }
- ]
- }
- }
- ]
- }
-}
-
-```
-
-## Create a new transform
-
-In this example, we create a **Transform** that is based on the custom preset we defined earlier. When creating a Transform, you should first use [Get](/rest/api/media/transforms/get) to check if one already exists. If the Transform exists, reuse it.
-
-In the Postman's collection that you downloaded, select **Transforms and Jobs**->**Create or Update Transform**.
-
-The **PUT** HTTP request method is similar to:
-
-```
-PUT https://management.azure.com/subscriptions/:subscriptionId/resourceGroups/:resourceGroupName/providers/Microsoft.Media/mediaServices/:accountName/transforms/:transformName?api-version={{api-version}}
-```
-
-Select the **Body** tab and replace the body with the json code you [defined earlier](#define-a-custom-preset). For Media Services to apply the Transform to the specified video or audio, you need to submit a Job under that Transform.
-
-Select **Send**.
-
-For Media Services to apply the Transform to the specified video or audio, you need to submit a Job under that Transform. For a complete example that shows how to submit a job under a transform, see [Tutorial: Stream video files - REST](stream-files-tutorial-with-rest.md).
-
-## Next steps
-
-See [other REST operations](/rest/api/media/)
media-services Transform Custom Presets How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/transform-custom-presets-how-to.md
- Title: Encode custom transform .NET
-description: This topic shows how to use Azure Media Services v3 to encode a custom transform using .NET.
------ Previously updated : 05/11/2021----
-# How to encode with a custom transform - .NET
--
-When encoding with Azure Media Services, you can get started quickly with one of the recommended built-in presets based on industry best practices as demonstrated in the [Streaming files](stream-files-tutorial-with-api.md) tutorial. You can also build a custom preset to target your specific scenario or device requirements.
-
-## Considerations
-
-When creating custom presets, the following considerations apply:
-
-* All values for height and width on AVC content must be a multiple of 4.
-* In Azure Media Services v3, all of the encoding bitrates are in bits per second. This is different from the presets with our v2 APIs, which used kilobits/second as the unit. For example, if the bitrate in v2 was specified as 128 (kilobits/second), in v3 it would be set to 128000 (bits/second).
-
-## Prerequisites
-
-[Create a Media Services account](./account-create-how-to.md)
-
-## Download the sample
-
-Clone a GitHub repository that contains the full .NET Core sample to your machine using the following command:
-
- ```bash
- git clone https://github.com/Azure-Samples/media-services-v3-dotnet.git
- ```
-
-The custom preset sample is located in the [Encoding with a custom preset using .NET](https://github.com/Azure-Samples/media-services-v3-dotnet/tree/main/VideoEncoding/Encoding_H264) folder.
-
-## Create a transform with a custom preset
-
-When creating a new [Transform](/rest/api/media/transforms), you need to specify what you want it to produce as an output. The required parameter is a [TransformOutput](/rest/api/media/transforms/createorupdate#transformoutput) object, as shown in the code below. Each **TransformOutput** contains a **Preset**. The **Preset** describes the step-by-step instructions of video and/or audio processing operations that are to be used to generate the desired **TransformOutput**. The following **TransformOutput** creates custom codec and layer output settings.
-
-When creating a [Transform](/rest/api/media/transforms), you should first check if one already exists using the **Get** method, as shown in the code that follows. In Media Services v3, **Get** methods on entities return **null** if the entity doesn't exist (a case-insensitive check on the name).
-
-### Example custom transform
-
-The following example defines a set of outputs that we want to be generated when this Transform is used. We first add an AacAudio layer for the audio encoding and two H264Video layers for the video encoding. In the video layers, we assign labels so that they can be used in the output file names. Next, we want the output to also include thumbnails. In the example below we specify images in PNG format, generated at 50% of the resolution of the input video, and at three timestamps - {25%, 50%, 75%} of the length of the input video. Lastly, we specify the format for the output files - one for video + audio, and another for the thumbnails. Since we have multiple H264Layers, we have to use macros that produce unique names per layer. We can either use a `{Label}` or `{Bitrate}` macro, the example shows the former.
-
-[!code-csharp[Main](../../../media-services-v3-dotnet/VideoEncoding/Encoding_H264/Program.cs#EnsureTransformExists)]
-
-## Next steps
-
-[Streaming files](stream-files-tutorial-with-api.md)
media-services Transform Custom Transform How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/transform-custom-transform-how-to.md
+
+ Title: Encode custom transform
+description: This topic shows how to use Azure Media Services v3 to encode a custom transform.
+++++ Last updated : 03/09/2022+++
+# How to encode with a custom transform
++
+When encoding with Azure Media Services, you can get started quickly with one of the recommended built-in presets, based on industry best practices, as demonstrated in the [Streaming files](stream-files-tutorial-with-api.md) tutorial. You can also build a custom preset to target your specific scenario or device requirements.
+
+## Considerations
+
+When creating custom presets, the following considerations apply:
+
+* All values for height and width on AVC content must be a multiple of 4.
+* In Azure Media Services v3, all of the encoding bitrates are in bits per second. This is different from the presets with our v2 APIs, which used kilobits/second as the unit. For example, if the bitrate in v2 was specified as 128 (kilobits/second), in v3 it would be set to 128000 (bits/second).
+
+## Prerequisites
+
+[Create a Media Services account](./account-create-how-to.md)
+
+## [CLI](#tab/cli/)
+
+## Define a custom preset
+
+The following example defines the request body of a new Transform. We define a set of outputs that we want to be generated when this Transform is used.
+
+In this example, we first add an AacAudio layer for the audio encoding and two H264Video layers for the video encoding. In the video layers, we assign labels so that they can be used in the output file names. Next, we want the output to also include thumbnails. In the example below, we specify images in PNG format, generated at 50% of the resolution of the input video, and at three timestamps - {25%, 50%, 75%} of the length of the input video. Lastly, we specify the format for the output files - one for video + audio, and another for the thumbnails. Since we have multiple H264Layers, we have to use macros that produce unique names per layer. We can use either a `{Label}` or `{Bitrate}` macro; the example shows the former.
+
+We are going to save this preset in a file. In this example, we name the file `customPreset.json`.
+
+```json
+{
+ "@odata.type": "#Microsoft.Media.StandardEncoderPreset",
+ "codecs": [
+ {
+ "@odata.type": "#Microsoft.Media.AacAudio",
+ "channels": 2,
+ "samplingRate": 48000,
+ "bitrate": 128000,
+ "profile": "AacLc"
+ },
+ {
+ "@odata.type": "#Microsoft.Media.H264Video",
+ "keyFrameInterval": "PT2S",
+ "stretchMode": "AutoSize",
+ "sceneChangeDetection": false,
+ "complexity": "Balanced",
+ "layers": [
+ {
+ "width": "1280",
+ "height": "720",
+ "label": "HD",
+ "bitrate": 3400000,
+ "maxBitrate": 3400000,
+ "bFrames": 3,
+ "slices": 0,
+ "adaptiveBFrame": true,
+ "profile": "Auto",
+ "level": "auto",
+ "bufferWindow": "PT5S",
+ "referenceFrames": 3,
+ "entropyMode": "Cabac"
+ },
+ {
+ "width": "640",
+ "height": "360",
+ "label": "SD",
+ "bitrate": 1000000,
+ "maxBitrate": 1000000,
+ "bFrames": 3,
+ "slices": 0,
+ "adaptiveBFrame": true,
+ "profile": "Auto",
+ "level": "auto",
+ "bufferWindow": "PT5S",
+ "referenceFrames": 3,
+ "entropyMode": "Cabac"
+ }
+ ]
+ },
+ {
+ "@odata.type": "#Microsoft.Media.PngImage",
+ "stretchMode": "AutoSize",
+ "start": "25%",
+ "step": "25%",
+ "range": "80%",
+ "layers": [
+ {
+ "width": "50%",
+ "height": "50%"
+ }
+ ]
+ }
+ ],
+ "formats": [
+ {
+ "@odata.type": "#Microsoft.Media.Mp4Format",
+ "filenamePattern": "Video-{Basename}-{Label}-{Bitrate}{Extension}",
+ "outputFiles": []
+ },
+ {
+ "@odata.type": "#Microsoft.Media.PngFormat",
+ "filenamePattern": "Thumbnail-{Basename}-{Index}{Extension}"
+ }
+ ]
+}
+```
+
+## Create a new transform
+
+In this example, we create a **Transform** that is based on the custom preset we defined earlier. When creating a Transform, you should first check if one already exists. If the Transform exists, reuse it. The following `show` command returns the `customTransformName` transform if it exists:
+
+```azurecli-interactive
+az ams transform show -a amsaccount -g amsResourceGroup -n customTransformName
+```
+
+The following Azure CLI command creates the Transform based on the custom preset (defined earlier).
+
+```azurecli-interactive
+az ams transform create -a amsaccount -g amsResourceGroup -n customTransformName --description "Basic Transform using a custom encoding preset" --preset customPreset.json
+```
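+
+If you're scripting these steps, a minimal bash sketch can combine the two commands so the transform is created only when it doesn't already exist. The account, resource group, and transform names are the same placeholders used above:
+
+```azurecli-interactive
+# Sketch: create the transform only if it doesn't already exist.
+if ! az ams transform show -a amsaccount -g amsResourceGroup -n customTransformName --output none 2>/dev/null; then
+  az ams transform create -a amsaccount -g amsResourceGroup -n customTransformName \
+    --description "Basic Transform using a custom encoding preset" \
+    --preset customPreset.json
+fi
+```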
+
+For Media Services to apply the Transform to the specified video or audio, you need to submit a Job under that Transform. For a complete example that shows how to submit a job under a transform, see [Quickstart: Stream video files - Azure CLI](stream-files-cli-quickstart.md).
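+
+To try the transform, you can submit a job under it with `az ams job start`. This is only a sketch; the job name and the input and output asset names (`myFirstJob`, `myInputAsset`, `myOutputAsset`) are assumptions, and both assets must already exist:
+
+```azurecli-interactive
+# Sketch: submit a job under the transform (job and asset names are assumptions).
+az ams job start -a amsaccount -g amsResourceGroup \
+  --transform-name customTransformName -n myFirstJob \
+  --input-asset-name myInputAsset \
+  --output-assets myOutputAsset=
+```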
+
+## [REST](#tab/rest/)
+
+## Define a custom preset
+
+The following example defines the request body of a new Transform. We define a set of outputs that we want to be generated when this Transform is used.
+
+In this example, we first add an AacAudio layer for the audio encoding and two H264Video layers for the video encoding. In the video layers, we assign labels so that they can be used in the output file names. Next, we want the output to also include thumbnails. In the example below, we specify images in PNG format, generated at 50% of the resolution of the input video, and at three timestamps - {25%, 50%, 75%} of the length of the input video. Lastly, we specify the format for the output files - one for video + audio, and another for the thumbnails. Since we have multiple H264Layers, we have to use macros that produce unique names per layer. We can use either a `{Label}` or `{Bitrate}` macro; the example shows the former.
+
+```json
+{
+ "properties": {
+ "description": "Basic Transform using a custom encoding preset",
+ "outputs": [
+ {
+ "onError": "StopProcessingJob",
+ "relativePriority": "Normal",
+ "preset": {
+ "@odata.type": "#Microsoft.Media.StandardEncoderPreset",
+ "codecs": [
+ {
+ "@odata.type": "#Microsoft.Media.AacAudio",
+ "channels": 2,
+ "samplingRate": 48000,
+ "bitrate": 128000,
+ "profile": "AacLc"
+ },
+ {
+ "@odata.type": "#Microsoft.Media.H264Video",
+ "keyFrameInterval": "PT2S",
+ "stretchMode": "AutoSize",
+ "sceneChangeDetection": false,
+ "complexity": "Balanced",
+ "layers": [
+ {
+ "width": "1280",
+ "height": "720",
+ "label": "HD",
+ "bitrate": 3400000,
+ "maxBitrate": 3400000,
+ "bFrames": 3,
+ "slices": 0,
+ "adaptiveBFrame": true,
+ "profile": "Auto",
+ "level": "auto",
+ "bufferWindow": "PT5S",
+ "referenceFrames": 3,
+ "entropyMode": "Cabac"
+ },
+ {
+ "width": "640",
+ "height": "360",
+ "label": "SD",
+ "bitrate": 1000000,
+ "maxBitrate": 1000000,
+ "bFrames": 3,
+ "slices": 0,
+ "adaptiveBFrame": true,
+ "profile": "Auto",
+ "level": "auto",
+ "bufferWindow": "PT5S",
+ "referenceFrames": 3,
+ "entropyMode": "Cabac"
+ }
+ ]
+ },
+ {
+ "@odata.type": "#Microsoft.Media.PngImage",
+ "stretchMode": "AutoSize",
+ "start": "25%",
+ "step": "25%",
+ "range": "80%",
+ "layers": [
+ {
+ "width": "50%",
+ "height": "50%"
+ }
+ ]
+ }
+ ],
+ "formats": [
+ {
+ "@odata.type": "#Microsoft.Media.Mp4Format",
+ "filenamePattern": "Video-{Basename}-{Label}-{Bitrate}{Extension}",
+ "outputFiles": []
+ },
+ {
+ "@odata.type": "#Microsoft.Media.PngFormat",
+ "filenamePattern": "Thumbnail-{Basename}-{Index}{Extension}"
+ }
+ ]
+ }
+ }
+ ]
+ }
+}
+
+```
+
+## Create a new transform
+
+In this example, we create a **Transform** that is based on the custom preset we defined earlier. When creating a Transform, you should first use [Get](/rest/api/media/transforms/get) to check if one already exists. If the Transform exists, reuse it.
+
+In the Postman collection that you downloaded, select **Transforms and Jobs**->**Create or Update Transform**.
+
+The **PUT** HTTP request method is similar to:
+
+```
+PUT https://management.azure.com/subscriptions/:subscriptionId/resourceGroups/:resourceGroupName/providers/Microsoft.Media/mediaServices/:accountName/transforms/:transformName?api-version={{api-version}}
+```
+
+Select the **Body** tab and replace the body with the JSON code you [defined earlier](#define-a-custom-preset).
+
+Select **Send**.
+
+For Media Services to apply the Transform to the specified video or audio, you need to submit a Job under that Transform. For a complete example that shows how to submit a job under a transform, see [Tutorial: Stream video files - REST](stream-files-tutorial-with-rest.md).
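+
+If you prefer not to use Postman, the same **PUT** request can be sent with any HTTP client. The following `curl` sketch assumes you saved the request body shown earlier to a file named `customTransform.json`; the bearer token, the path parameters, and the API version are placeholders you must supply:
+
+```bash
+# Sketch: create or update the transform via the REST API (all <values> are placeholders).
+curl -X PUT \
+  -H "Authorization: Bearer <token>" \
+  -H "Content-Type: application/json" \
+  -d @customTransform.json \
+  "https://management.azure.com/subscriptions/<subscriptionId>/resourceGroups/<resourceGroup>/providers/Microsoft.Media/mediaServices/<accountName>/transforms/<transformName>?api-version=<api-version>"
+```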
+
+## [.NET](#tab/net/)
+
+## Download the sample
+
+Clone a GitHub repository that contains the full .NET Core sample to your machine using the following command:
+
+ ```bash
+ git clone https://github.com/Azure-Samples/media-services-v3-dotnet.git
+ ```
+
+The custom preset sample is located in the [Encoding with a custom preset using .NET](https://github.com/Azure-Samples/media-services-v3-dotnet/tree/main/VideoEncoding/Encoding_H264) folder.
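+
+After cloning, you can build and run the sample from its folder. This is a sketch that assumes the .NET SDK is installed and that the sample's configuration has been filled in with your account details:
+
+```bash
+cd media-services-v3-dotnet/VideoEncoding/Encoding_H264
+dotnet build
+dotnet run
+```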
+
+## Create a transform with a custom preset
+
+When creating a new [Transform](/rest/api/media/transforms), you need to specify what you want it to produce as an output. The required parameter is a [TransformOutput](/rest/api/media/transforms/createorupdate#transformoutput) object, as shown in the code below. Each **TransformOutput** contains a **Preset**. The **Preset** describes the step-by-step instructions of video and/or audio processing operations that are to be used to generate the desired **TransformOutput**. The following **TransformOutput** creates custom codec and layer output settings.
+
+When creating a [Transform](/rest/api/media/transforms), you should first check if one already exists using the **Get** method, as shown in the code that follows. In Media Services v3, **Get** methods on entities return **null** if the entity doesn't exist (a case-insensitive check on the name).
+
+### Example custom transform
+
+The following example defines a set of outputs that we want to be generated when this Transform is used. We first add an AacAudio layer for the audio encoding and two H264Video layers for the video encoding. In the video layers, we assign labels so that they can be used in the output file names. Next, we want the output to also include thumbnails. In the example below, we specify images in PNG format, generated at 50% of the resolution of the input video, and at three timestamps - {25%, 50%, 75%} of the length of the input video. Lastly, we specify the format for the output files - one for video + audio, and another for the thumbnails. Since we have multiple H264Layers, we have to use macros that produce unique names per layer. We can use either a `{Label}` or `{Bitrate}` macro; the example shows the former.
+
+[!code-csharp[Main](../../../media-services-v3-dotnet/VideoEncoding/Encoding_H264/Program.cs#EnsureTransformExists)]
++
media-services Transform Generate Thumbnails Dotnet How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/transform-generate-thumbnails-dotnet-how-to.md
- Title: Generate thumbnails using Media Encoder Standard .NET
-description: This article shows how to use .NET to encode an asset and generate thumbnails at the same time using Media Encoder Standard.
------ Previously updated : 12/01/2020---
-# How to generate thumbnails using Encoder Standard with .NET
--
-You can use Media Encoder Standard to generate one or more thumbnails from your input video in [JPEG](https://en.wikipedia.org/wiki/JPEG) or [PNG](https://en.wikipedia.org/wiki/Portable_Network_Graphics) image file formats.
-
-## Recommended reading and practice
-
-It is recommended that you become familiar with custom transforms by reading [How to encode with a custom transform - .NET](transform-custom-presets-how-to.md).
-
-## Transform code example
-
-The below code example creates just a thumbnail. You should set the following parameters:
--- **start** - The position in the input video from where to start generating thumbnails. The value can be in ISO 8601 format (For example, PT05S to start at 5 seconds), or a frame count (For example, 10 to start at the 10th frame), or a relative value to stream duration (For example, 10% to start at 10% of stream duration). Also supports a macro {Best}, which tells the encoder to select the best thumbnail from the first few seconds of the video and will only produce one thumbnail, no matter what other settings are for Step and Range. The default value is macro {Best}.-- **step** - The intervals at which thumbnails are generated. The value can be in ISO 8601 format (for example, PT05S for one image every 5 seconds), or a frame count (for example, 30 for one image every 30 frames), or a relative value to stream duration (for example, 10% for one image every 10% of stream duration). Step value will affect the first generated thumbnail, which may not be exactly the one specified at transform preset start time. This is due to the encoder, which tries to select the best thumbnail between start time and step position from start time as the first output. As the default value is 10%, it means that if the stream has long duration, the first generated thumbnail might be far away from the one specified at start time. Try to select a reasonable value for step if the first thumbnail is expected be close to start time, or set the range value to 1 if only one thumbnail is needed at start time.-- **range** - The position relative to transform preset start time in the input video at which to stop generating thumbnails. The value can be in ISO 8601 format (For example, PT5M30S to stop at 5 minutes and 30 seconds from start time), or a frame count (For example, 300 to stop at the 300th frame from the frame at start time. If this value is 1, it means only producing one thumbnail at start time), or a relative value to the stream duration (For example, 50% to stop at half of stream duration from start time). The default value is 100%, which means to stop at the end of the stream.-- **layers** - A collection of output image layers to be produced by the encoder.-
-```csharp
-
-private static Transform EnsureTransformExists(IAzureMediaServicesClient client, string resourceGroupName, string accountName, string transformName)
-{
- // Does a Transform already exist with the desired name? Assume that an existing Transform with the desired name
- // also uses the same recipe or Preset for processing content.
- Transform transform = client.Transforms.Get(resourceGroupName, accountName, transformName);
-
- if (transform == null)
- {
- // Create a new Transform Outputs array - this defines the set of outputs for the Transform
- TransformOutput[] outputs = new TransformOutput[]
- {
- // Create a new TransformOutput with a custom Standard Encoder Preset
- // This demonstrates how to create custom codec and layer output settings
-
- new TransformOutput(
- new StandardEncoderPreset(
- codecs: new Codec[]
- {
- // Generate a set of PNG thumbnails
- new PngImage(
- start: "25%",
- step: "25%",
- range: "80%",
- layers: new PngLayer[]{
- new PngLayer(
- width: "50%",
- height: "50%"
- )
- }
- )
- },
- // Specify the format for the output files for the thumbnails
- formats: new Format[]
- {
- new PngFormat(
- filenamePattern:"Thumbnail-{Basename}-{Index}{Extension}"
- )
- }
- ),
- onError: OnErrorType.StopProcessingJob,
- relativePriority: Priority.Normal
- )
- };
-
- string description = "A transform that includes thumbnails.";
- // Create the custom Transform with the outputs defined above
- transform = client.Transforms.CreateOrUpdate(resourceGroupName, accountName, transformName, outputs, description);
- }
-
- return transform;
-}
-```
media-services Transform Generate Thumbnails How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/transform-generate-thumbnails-how-to.md
+
+ Title: Generate thumbnails using Media Encoder Standard
+description: This article shows how to encode an asset and generate thumbnails at the same time using Media Encoder Standard.
+++++ Last updated : 03/09/2022++
+# How to generate thumbnails using Encoder Standard
++
+You can use Media Encoder Standard to generate one or more thumbnails from your input video in [JPEG](https://en.wikipedia.org/wiki/JPEG), [PNG](https://en.wikipedia.org/wiki/Portable_Network_Graphics), or [BMP](https://en.wikipedia.org/wiki/BMP_file_format) image file formats.
++
+## [REST](#tab/rest/)
+
+## Recommended reading and practice
+
+It is recommended that you become familiar with custom transforms by reading [How to encode with a custom transform](transform-custom-transform-how-to.md).
+
+## Thumbnail parameters
+
+You should set the following parameters:
+
+- **start** - The position in the input video from where to start generating thumbnails. The value can be in ISO 8601 format (for example, PT05S to start at 5 seconds), a frame count (for example, 10 to start at the 10th frame), or a relative value to stream duration (for example, 10% to start at 10% of stream duration). It also supports the {Best} macro, which tells the encoder to select the best thumbnail from the first few seconds of the video and produces only one thumbnail, no matter what the other settings are for Step and Range. The default value is the {Best} macro.
+- **step** - The intervals at which thumbnails are generated. The value can be in ISO 8601 format (for example, PT05S for one image every 5 seconds), a frame count (for example, 30 for one image every 30 frames), or a relative value to stream duration (for example, 10% for one image every 10% of stream duration). The step value affects the first generated thumbnail, which may not be exactly the one specified at the transform preset start time, because the encoder tries to select the best thumbnail between the start time and the step position from the start time as the first output. Because the default value is 10%, if the stream has a long duration, the first generated thumbnail might be far from the one specified at the start time. Select a reasonable value for step if the first thumbnail is expected to be close to the start time, or set the range value to 1 if only one thumbnail is needed at the start time.
+- **range** - The position relative to the transform preset start time in the input video at which to stop generating thumbnails. The value can be in ISO 8601 format (for example, PT5M30S to stop at 5 minutes and 30 seconds from the start time), a frame count (for example, 300 to stop at the 300th frame from the frame at the start time; if this value is 1, only one thumbnail is produced, at the start time), or a relative value to the stream duration (for example, 50% to stop at half of the stream duration from the start time). The default value is 100%, which means to stop at the end of the stream.
+- **layers** - A collection of output image layers to be produced by the encoder.
+
+## Example of a "single PNG file" preset
+
+The following JSON preset can be used to produce a single output PNG file from the first few seconds of the input video, where the encoder makes a best-effort attempt at finding an "interesting" frame. Note that the output image dimensions have been set to 100%, meaning they match the dimensions of the input video. Note also how the "PngFormat" entry in "formats" is required to match the use of "PngImage" in the "codecs" section.
+
+```json
+{
+ "properties": {
+ "description": "Basic Transform using a custom encoding preset for thumbnails",
+ "outputs": [
+ {
+ "onError": "StopProcessingJob",
+ "relativePriority": "Normal",
+ "preset": {
+ "@odata.type": "#Microsoft.Media.StandardEncoderPreset",
+ "codecs": [
+ {
+ "@odata.type": "#Microsoft.Media.PngImage",
+ "stretchMode": "AutoSize",
+ "start": "{Best}",
+ "step": "25%",
+ "range": "80%",
+ "layers": [
+ {
+                  "width": "100%",
+                  "height": "100%"
+ }
+ ]
+ }
+ ],
+ "formats": [
+ {
+ "@odata.type": "#Microsoft.Media.PngFormat",
+ "filenamePattern": "Thumbnail-{Basename}-{Index}{Extension}"
+ }
+ ]
+ }
+ }
+ ]
+ }
+}
+
+```
+
+## Example of a "series of JPEG images" preset
+
+The following JSON preset can be used to produce a set of 10 images at timestamps of 5%, 15%, …, 95% of the input timeline, where the image size is specified to be one quarter that of the input video.
+
+### JSON preset
+
+```json
+{
+ "Version": 1.0,
+ "Codecs": [
+ {
+ "JpgLayers": [
+ {
+ "Quality": 90,
+ "Type": "JpgLayer",
+ "Width": "25%",
+ "Height": "25%"
+ }
+ ],
+ "Start": "5%",
+ "Step": "10%",
+ "Range": "96%",
+ "Type": "JpgImage"
+ }
+ ],
+ "Outputs": [
+ {
+ "FileName": "{Basename}_{Index}{Extension}",
+ "Format": {
+ "Type": "JpgFormat"
+ }
+ }
+ ]
+}
+```
+
+## Example of a "one image at a specific timestamp" preset
+
+The following JSON preset can be used to produce a single JPEG image at the 30-second mark of the input video. This preset expects the input video to be more than 30 seconds in duration (else the job fails).
+
+### JSON preset
+
+```json
+{
+ "Version": 1.0,
+ "Codecs": [
+ {
+ "JpgLayers": [
+ {
+ "Quality": 90,
+ "Type": "JpgLayer",
+ "Width": "25%",
+ "Height": "25%"
+ }
+ ],
+ "Start": "00:00:30",
+ "Step": "1",
+ "Range": "1",
+ "Type": "JpgImage"
+ }
+ ],
+ "Outputs": [
+ {
+ "FileName": "{Basename}_{Index}{Extension}",
+ "Format": {
+ "Type": "JpgFormat"
+ }
+ }
+ ]
+}
+```
+
+## Example of a "thumbnails at different resolutions" preset
+
+The following preset can be used to generate thumbnails at different resolutions in one task. In the example, at positions 5%, 15%, …, 95% of the input timeline, the encoder generates two images – one at 100% of the input video resolution and the other at 50%.
+
+Note the use of the {Resolution} macro in the FileName; it tells the encoder to include the width and height that you specified in the preset when generating the file names of the output images. This also helps you distinguish between the different images easily.
+
+### JSON preset
+
+```json
+{
+  "Version": 1.0,
+  "Codecs": [
+    {
+      "JpgLayers": [
+        {
+          "Quality": 90,
+          "Type": "JpgLayer",
+          "Width": "100%",
+          "Height": "100%"
+        },
+        {
+          "Quality": 90,
+          "Type": "JpgLayer",
+          "Width": "50%",
+          "Height": "50%"
+        }
+      ],
+      "Start": "5%",
+      "Step": "10%",
+      "Range": "96%",
+      "Type": "JpgImage"
+    }
+  ],
+  "Outputs": [
+    {
+      "FileName": "{Basename}_{Resolution}_{Index}{Extension}",
+      "Format": {
+        "Type": "JpgFormat"
+      }
+    }
+  ]
+}
+```
+
+## Example of generating a thumbnail while encoding
+
+While all of the above examples have discussed how you can submit an encoding task that only produces images, you can also combine video/audio encoding with thumbnail generation. The following JSON preset tells Encoder Standard to generate a thumbnail during encoding.
+
+### JSON preset
+
+For information about schema, see [this](../previous/media-services-mes-schema.md) article.
+
+```json
+{
+ "Version": 1.0,
+ "Codecs": [
+ {
+ "KeyFrameInterval": "00:00:02",
+ "SceneChangeDetection": "true",
+ "H264Layers": [
+ {
+ "Profile": "Auto",
+ "Level": "auto",
+ "Bitrate": 4500,
+ "MaxBitrate": 4500,
+ "BufferWindow": "00:00:05",
+ "Width": 1280,
+ "Height": 720,
+ "ReferenceFrames": 3,
+ "EntropyMode": "Cabac",
+ "AdaptiveBFrame": true,
+ "Type": "H264Layer",
+ "FrameRate": "0/1"
+
+ }
+ ],
+ "Type": "H264Video"
+ },
+ {
+ "JpgLayers": [
+ {
+ "Quality": 90,
+ "Type": "JpgLayer",
+ "Width": "100%",
+ "Height": "100%"
+ }
+ ],
+ "Start": "{Best}",
+ "Type": "JpgImage"
+ },
+ {
+ "Channels": 2,
+ "SamplingRate": 48000,
+ "Bitrate": 128,
+ "Type": "AACAudio"
+ }
+ ],
+ "Outputs": [
+ {
+ "FileName": "{Basename}_{Index}{Extension}",
+ "Format": {
+ "Type": "JpgFormat"
+ }
+ },
+ {
+ "FileName": "{Basename}_{Resolution}_{VideoBitrate}.mp4",
+ "Format": {
+ "Type": "MP4Format"
+ }
+ }
+ ]
+}
+```
+
+## [.NET](#tab/net/)
+
+## Recommended reading and practice
+
+It is recommended that you become familiar with custom transforms by reading [How to encode with a custom transform](transform-custom-transform-how-to.md).
+
+## Transform code example
+
+The following code example creates just a thumbnail. You should set the following parameters:
+
+- **start** - The position in the input video from where to start generating thumbnails. The value can be in ISO 8601 format (for example, PT05S to start at 5 seconds), a frame count (for example, 10 to start at the 10th frame), or a relative value to stream duration (for example, 10% to start at 10% of stream duration). It also supports the {Best} macro, which tells the encoder to select the best thumbnail from the first few seconds of the video and produces only one thumbnail, no matter what the other settings are for Step and Range. The default value is the {Best} macro.
+- **step** - The intervals at which thumbnails are generated. The value can be in ISO 8601 format (for example, PT05S for one image every 5 seconds), a frame count (for example, 30 for one image every 30 frames), or a relative value to stream duration (for example, 10% for one image every 10% of stream duration). The step value affects the first generated thumbnail, which may not be exactly the one specified at the transform preset start time, because the encoder tries to select the best thumbnail between the start time and the step position from the start time as the first output. Because the default value is 10%, if the stream has a long duration, the first generated thumbnail might be far from the one specified at the start time. Select a reasonable value for step if the first thumbnail is expected to be close to the start time, or set the range value to 1 if only one thumbnail is needed at the start time.
+- **range** - The position relative to the transform preset start time in the input video at which to stop generating thumbnails. The value can be in ISO 8601 format (for example, PT5M30S to stop at 5 minutes and 30 seconds from the start time), a frame count (for example, 300 to stop at the 300th frame from the frame at the start time; if this value is 1, only one thumbnail is produced, at the start time), or a relative value to the stream duration (for example, 50% to stop at half of the stream duration from the start time). The default value is 100%, which means to stop at the end of the stream.
+- **layers** - A collection of output image layers to be produced by the encoder.
+
+```csharp
+
+private static Transform EnsureTransformExists(IAzureMediaServicesClient client, string resourceGroupName, string accountName, string transformName)
+{
+ // Does a Transform already exist with the desired name? Assume that an existing Transform with the desired name
+ // also uses the same recipe or Preset for processing content.
+ Transform transform = client.Transforms.Get(resourceGroupName, accountName, transformName);
+
+ if (transform == null)
+ {
+ // Create a new Transform Outputs array - this defines the set of outputs for the Transform
+ TransformOutput[] outputs = new TransformOutput[]
+ {
+ // Create a new TransformOutput with a custom Standard Encoder Preset
+ // This demonstrates how to create custom codec and layer output settings
+
+ new TransformOutput(
+ new StandardEncoderPreset(
+ codecs: new Codec[]
+ {
+ // Generate a set of PNG thumbnails
+ new PngImage(
+ start: "25%",
+ step: "25%",
+ range: "80%",
+ layers: new PngLayer[]{
+ new PngLayer(
+ width: "50%",
+ height: "50%"
+ )
+ }
+ )
+ },
+ // Specify the format for the output files for the thumbnails
+ formats: new Format[]
+ {
+ new PngFormat(
+ filenamePattern:"Thumbnail-{Basename}-{Index}{Extension}"
+ )
+ }
+ ),
+ onError: OnErrorType.StopProcessingJob,
+ relativePriority: Priority.Normal
+ )
+ };
+
+ string description = "A transform that includes thumbnails.";
+ // Create the custom Transform with the outputs defined above
+ transform = client.Transforms.CreateOrUpdate(resourceGroupName, accountName, transformName, outputs, description);
+ }
+
+ return transform;
+}
+```
+
media-services Transform Stitch How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/transform-stitch-how-to.md
Title: How to stitch two or more video files with .NET | Microsoft Docs
+ Title: How to stitch two or more video files | Microsoft Docs
description: This article shows how to stitch two or more video files. Previously updated : 03/24/2021 Last updated : 03/09/2022 -
-# How to stitch two or more video files with .NET
+# How to stitch two or more video files
[!INCLUDE [media services api v3 logo](./includes/v3-hr.md)]
The following example illustrates how you can generate a preset to stitch two or
> [!NOTE] > Video files edited together should share properties (video resolution, frame rate, audio track count, etc.). You should take care not to mix videos of different frame rates or with different numbers of audio tracks.
+## [.NET](#tab/net/)
+ ## Prerequisites Clone or download the [Media Services .NET samples](https://github.com/Azure-Samples/media-services-v3-dotnet/).
media-services Transform Subclip Video Dotnet How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/transform-subclip-video-dotnet-how-to.md
- Title: Subclip a video when encoding with Media Services
-description: This topic describes how to subclip a video when encoding with Azure Media Services using .NET SDK
------ Previously updated : 06/09/2019---
-# Subclip a video when encoding with Media Services - .NET
-
-You can trim or subclip a video when encoding it using a [Job](/rest/api/media/jobs). This functionality works with any [Transform](/rest/api/media/transforms) that is built using either the [BuiltInStandardEncoderPreset](/rest/api/media/transforms/createorupdate#builtinstandardencoderpreset) presets, or the [StandardEncoderPreset](/rest/api/media/transforms/createorupdate#standardencoderpreset) presets.
-
-The following C# example creates a job that trims a video in an Asset as it submits an encoding job.
-
-## Prerequisites
-
-To complete the steps described in this topic, you have to:
--- [Create an Azure Media Services account](./account-create-how-to.md)-- Create a Transform and an input and output Assets. You can see how to create a Transform and input and output Assets in the [Upload, encode, and stream videos using .NET](stream-files-tutorial-with-api.md) tutorial.-- Review the [Encoding concept](encode-concept.md) topic.-
-## Example
-
-```csharp
-/// <summary>
-/// Submits a request to Media Services to apply the specified Transform to a given input video.
-/// </summary>
-/// <param name="client">The Media Services client.</param>
-/// <param name="resourceGroupName">The name of the resource group within the Azure subscription.</param>
-/// <param name="accountName"> The Media Services account name.</param>
-/// <param name="transformName">The name of the transform.</param>
-/// <param name="jobName">The (unique) name of the job.</param>
-/// <param name="inputAssetName">The name of the input asset.</param>
-/// <param name="outputAssetName">The (unique) name of the output asset that will store the result of the encoding job. </param>
-// <SubmitJob>
-private static async Task<Job> JobWithBuiltInStandardEncoderWithSingleClipAsync(
- IAzureMediaServicesClient client,
- string resourceGroupName,
- string accountName,
- string transformName,
- string jobName,
- string inputAssetName,
- string outputAssetName)
-{
- var jobOutputs = new List<JobOutputAsset>
- {
- new JobOutputAsset(state: JobState.Queued, progress: 0, assetName: outputAssetName)
- };
-
- var clipStart = new AbsoluteClipTime()
- {
- Time = new TimeSpan(0, 0, 20)
- };
-
- var clipEnd = new AbsoluteClipTime()
- {
- Time = new TimeSpan(0, 0, 30)
- };
-
- var jobInput = new JobInputAsset(assetName: inputAssetName, start: clipStart, end: clipEnd);
-
- Job job = await client.Jobs.CreateAsync(
- resourceGroupName,
- accountName,
- transformName,
- jobName,
- new Job(input: jobInput, outputs: jobOutputs.ToArray(), name: jobName)
- {
- Description = $"A Job with transform {transformName} and single clip.",
- Priority = Priority.Normal,
- });
-
- return job;
-}
-```
-
-## Next steps
-
-[How to encode with a custom transform](transform-custom-presets-how-to.md)
media-services Transform Subclip Video How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/transform-subclip-video-how-to.md
+
+ Title: Subclip a video when encoding with Media Services
+description: This topic describes how to subclip a video when encoding with Azure Media Services.
++++ Last updated : 03/09/2022+++
+# Subclip a video
+
+You can trim or subclip a video when encoding it using a Media Services Job.
+
+This functionality works with any Transform that is built using either the BuiltInStandardEncoderPreset or the StandardEncoderPreset presets.
+
+## [REST](#tab/rest/)
+
+The REST example in this topic creates a [Job](/rest/api/media/jobs) that trims a video as it submits an encoding job. The Job uses a [Transform](/rest/api/media/transforms) that is built using either the [BuiltInStandardEncoderPreset](/rest/api/media/transforms/createorupdate#builtinstandardencoderpreset) or [StandardEncoderPreset](/rest/api/media/transforms/createorupdate#standardencoderpreset) presets.
++
+## Prerequisites
+
+To complete the steps described in this topic, you have to:
+
+- [Create an Azure Media Services account](./account-create-how-to.md).
+- [Configure Postman for Azure Media Services REST API calls](setup-postman-rest-how-to.md).
+
+ Make sure to follow the last step in the topic [Get Azure AD Token](setup-postman-rest-how-to.md#get-azure-ad-token).
+- Create a Transform and an output Asset. You can see how to create a Transform and an output Asset in the [Encode a remote file based on URL and stream the video - REST](stream-files-tutorial-with-rest.md) tutorial.
+- Review the [Encoding concept](encode-concept.md) topic.
+
+## Create a subclipping job
+
+1. In the Postman collection that you downloaded, select **Transforms and jobs** -> **Create Job with Sub Clipping**.
+
+ The **PUT** request looks like this:
+
+ ```
+ https://management.azure.com/subscriptions/:subscriptionId/resourceGroups/:resourceGroupName/providers/Microsoft.Media/mediaServices/:accountName/transforms/:transformName/jobs/:jobName?api-version={{api-version}}
+ ```
+1. Update the value of the "transformName" environment variable with your transform name.
+1. Select the **Body** tab and update "myOutputAsset" with your output Asset name.
+
+ ```json
+ {
+ "properties": {
+ "description": "A Job with transform cb9599fb-03b3-40eb-a2ff-7ea909f53735 and single clip.",
+
+ "input": {
+ "@odata.type": "#Microsoft.Media.JobInputHttp",
+ "baseUri": "https://nimbuscdn-nimbuspm.streaming.mediaservices.windows.net/2b533311-b215-4409-80af-529c3e853622/",
+ "files": [
+ "Ignite-short.mp4"
+ ],
+ "start": {
+ "@odata.type": "#Microsoft.Media.AbsoluteClipTime",
+ "time": "PT10S"
+ },
+ "end": {
+ "@odata.type": "#Microsoft.Media.AbsoluteClipTime",
+ "time": "PT40S"
+ }
+ },
+
+ "outputs": [
+ {
+ "@odata.type": "#Microsoft.Media.JobOutputAsset",
+ "assetName": "myOutputAsset"
+ }
+ ],
+ "priority": "Normal"
+ }
+ }
+ ```
+1. Press **Send**.
+
+   You see the **Response** with information about the job that was created and submitted, along with the job's status.
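+
+   To poll the job until it completes, you can send a **GET** request to the same job URL. The following `curl` sketch uses placeholders for the token, path parameters, and API version:
+
+   ```bash
+   curl -H "Authorization: Bearer <token>" \
+     "https://management.azure.com/subscriptions/<subscriptionId>/resourceGroups/<resourceGroup>/providers/Microsoft.Media/mediaServices/<accountName>/transforms/<transformName>/jobs/<jobName>?api-version=<api-version>"
+   ```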
+
+## [.NET](#tab/net/)
+
+The following C# example creates a job that trims a video in an Asset as it submits an encoding job.
+
+## Prerequisites
+
+To complete the steps described in this topic, you have to:
+
+- [Create an Azure Media Services account](./account-create-how-to.md)
+- Create a Transform and input and output Assets. You can see how to create a Transform and input and output Assets in the [Upload, encode, and stream videos using .NET](stream-files-tutorial-with-api.md) tutorial.
+- Review the [Encoding concept](encode-concept.md) topic.
+
+## Example
+
+```csharp
+/// <summary>
+/// Submits a request to Media Services to apply the specified Transform to a given input video.
+/// </summary>
+/// <param name="client">The Media Services client.</param>
+/// <param name="resourceGroupName">The name of the resource group within the Azure subscription.</param>
+/// <param name="accountName"> The Media Services account name.</param>
+/// <param name="transformName">The name of the transform.</param>
+/// <param name="jobName">The (unique) name of the job.</param>
+/// <param name="inputAssetName">The name of the input asset.</param>
+/// <param name="outputAssetName">The (unique) name of the output asset that will store the result of the encoding job. </param>
+// <SubmitJob>
+private static async Task<Job> JobWithBuiltInStandardEncoderWithSingleClipAsync(
+ IAzureMediaServicesClient client,
+ string resourceGroupName,
+ string accountName,
+ string transformName,
+ string jobName,
+ string inputAssetName,
+ string outputAssetName)
+{
+ var jobOutputs = new List<JobOutputAsset>
+ {
+ new JobOutputAsset(state: JobState.Queued, progress: 0, assetName: outputAssetName)
+ };
+
+ var clipStart = new AbsoluteClipTime()
+ {
+ Time = new TimeSpan(0, 0, 20)
+ };
+
+ var clipEnd = new AbsoluteClipTime()
+ {
+ Time = new TimeSpan(0, 0, 30)
+ };
+
+ var jobInput = new JobInputAsset(assetName: inputAssetName, start: clipStart, end: clipEnd);
+
+ Job job = await client.Jobs.CreateAsync(
+ resourceGroupName,
+ accountName,
+ transformName,
+ jobName,
+ new Job(input: jobInput, outputs: jobOutputs.ToArray(), name: jobName)
+ {
+ Description = $"A Job with transform {transformName} and single clip.",
+ Priority = Priority.Normal,
+ });
+
+ return job;
+}
+
+```
mysql How To Troubleshoot Cli Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-troubleshoot-cli-errors.md
description: This topic gives guidance on troubleshooting common issues with Azu
-+ Last updated 08/24/2021 # Troubleshoot Azure Database for MySQL Flexible Server CLI errors+ [!INCLUDE[applies-to-mysql-flexible-server](../includes/applies-to-mysql-flexible-server.md)] This doc will help you troubleshoot common issues with Azure CLI when using MySQL Flexible Server.
This doc will help you troubleshoot common issues with Azure CLI when using MySQ
If you receive an error that a command **is misspelled or not recognized by the system**, the CLI version on your client machine may not be up to date. Run ```az upgrade``` to upgrade to the latest version. Upgrading your CLI version can help resolve incompatibilities of a command due to API changes.
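
For example, a minimal sketch (output varies by installation):

```azurecli
az version   # show the installed CLI and extension versions
az upgrade   # upgrade the CLI to the latest version
```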
-
## Debug deployment failures + Currently, Azure CLI doesn't support turning on debug logging, but you can retrieve debug logging following the steps below. >[!NOTE]
+>
> - Replace ```examplegroup``` and ```exampledeployment``` with the correct resource group and deployment name for your database server. > - You can see the Deployment name in the deployments page in your resource group. See [how to find the deployment name](../../azure-resource-manager/templates/deployment-history.md?tabs=azure-portal).
Currently, Azure CLI doesn't support turning on debug logging, but you can retri
## Next steps -- If you are still experiencing issues, please [report the issue](https://github.com/Azure/azure-cli/issues).
+- If you are still experiencing issues, please [report the issue](https://github.com/Azure/azure-cli/issues).
- If you have questions, visit our Stack Overflow page: https://aka.ms/azcli/questions. - Let us know how we are doing with this short survey https://aka.ms/azureclihats.
mysql Sample Cli Audit Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/scripts/sample-cli-audit-logs.md
This sample CLI script enables [audit logs](../concepts-audit-logs.md) on an Azu
### Run the script ## Clean up resources
mysql Sample Cli Change Server Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/scripts/sample-cli-change-server-parameters.md
This sample CLI script lists all available [server parameters](../concepts-serve
### Run the script ## Clean up resources
mysql Sample Cli Create Connect Private Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/scripts/sample-cli-create-connect-private-access.md
This sample CLI script creates an Azure Database for MySQL - Flexible Server in
### Run the script ## Test connectivity to the MySQL server from the VM
mysql Sample Cli Create Connect Public Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/scripts/sample-cli-create-connect-public-access.md
Once the script runs successfully, the MySQL Flexible Server will be accessible
### Run the script ## Clean up resources
mysql Sample Cli Monitor And Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/scripts/sample-cli-monitor-and-scale.md
This sample CLI script scales compute, storage and IOPS for a single Azure Datab
### Run the script ## Clean up resources
mysql Sample Cli Read Replicas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/scripts/sample-cli-read-replicas.md
This sample CLI script creates and manages [read replicas](../concepts-read-repl
### Run the script ## Clean up resources
mysql Sample Cli Restart Stop Start https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/scripts/sample-cli-restart-stop-start.md
Also, see [stop/start limitations](../concepts-limitations.md#stopstart-operatio
### Run the script ## Clean up resources
mysql Sample Cli Restore Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/scripts/sample-cli-restore-server.md
The new Flexible Server is created with the original server's configuration and
### Run the script ## Clean up resources
mysql Sample Cli Same Zone Ha https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/scripts/sample-cli-same-zone-ha.md
Currently, Same-Zone high availability is supported only for the General purpose
### Run the script ## Clean up resources
mysql Sample Cli Slow Query Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/scripts/sample-cli-slow-query-logs.md
This sample CLI script configures [slow query logs](../concepts-slow-query-logs.
### Run the script ## Clean up resources
mysql Sample Cli Zone Redundant Ha https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/scripts/sample-cli-zone-redundant-ha.md
Currently, Zone-Redundant high availability is supported only for the General pu
### Run the script ## Clean up resources
mysql How To Fix Corrupt Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/how-to-fix-corrupt-database.md
description: In this article, you'll learn about how to fix database corruption
-+ Last updated 09/21/2020
You typically notice a database or table is corrupt when your application access
## Use the dump and restore method We recommend that you resolve corruption problems by using a *dump and restore* method. This method involves:+ 1. Accessing the corrupt table.
-1. Using the mysqldump utility to create a logical backup of the table. The backup will retain the table structure and the data within it.
-1. Reloading the table into the database.
+2. Using the mysqldump utility to create a logical backup of the table. The backup will retain the table structure and the data within it.
+3. Reloading the table into the database.
### Back up your database or tables > [!Important]
+>
> - Make sure you have configured a firewall rule to access the server from your client machine. For more information, see [configure a firewall rule on Single Server](howto-manage-firewall-using-portal.md) and [configure a firewall rule on Flexible Server](flexible-server/how-to-connect-tls-ssl.md). > - Use SSL option `--ssl-cert` for mysqldump if you have SSL enabled.
mysql Howto Data Encryption Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-data-encryption-troubleshoot.md
description: Learn how to troubleshoot data encryption in Azure Database for MyS
-+ Last updated 02/13/2020
mysql Howto Manage Vnet Using Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-manage-vnet-using-cli.md
VNets and Azure service resources can be in the same or different subscriptions.
### Run the script ## Clean up resources
mysql Howto Troubleshoot Common Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-troubleshoot-common-errors.md
-+ Last updated 5/21/2021
The above error may occur while executing CREATE VIEW with DEFINER statements as
**Resolution**:
-* Use the definer user to execute CREATE VIEW if possible. It's likely that there are many views with different definers having different permissions so this may not be feasible. OR
-* Edit the dump file or CREATE VIEW script and remove the DEFINER= statement from the dump file OR
+* Use the definer user to execute CREATE VIEW if possible. It's likely that there are many views with different definers having different permissions, so this may not be feasible. OR
+* Edit the dump file or CREATE VIEW script and remove the DEFINER= statement from the dump file. OR
* Edit the dump file or CREATE VIEW script and replace the definer values with a user with admin permissions who is performing the import, or execute the script file.
mysql Sample Change Server Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/scripts/sample-change-server-configuration.md
This sample CLI script lists all available configuration parameters as well as t
### Run the script ## Clean up resources
mysql Sample Create Server And Firewall Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/scripts/sample-create-server-and-firewall-rule.md
This sample CLI script creates an Azure Database for MySQL server and configures
### Run the script ## Clean up resources
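A hedged sketch of the two core steps such a script performs, using hypothetical names, location, SKU, and a placeholder client IP:

```azurecli
# Create the Azure Database for MySQL server (replace the placeholder password).
az mysql server create \
  --resource-group myResourceGroup \
  --name mydemoserver \
  --location eastus \
  --admin-user myadmin \
  --admin-password '<secure-password>' \
  --sku-name GP_Gen5_2

# Open the server firewall to a single client IP address.
az mysql server firewall-rule create \
  --resource-group myResourceGroup \
  --server-name mydemoserver \
  --name AllowMyIP \
  --start-ip-address 203.0.113.5 \
  --end-ip-address 203.0.113.5
```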
mysql Sample Point In Time Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/scripts/sample-point-in-time-restore.md
This sample CLI script restores a single Azure Database for MySQL server to a pr
### Run the script ## Clean up resources
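As a rough illustration, restoring to an earlier point in time creates a new server from the source server's backups; all names and the timestamp below are hypothetical placeholders:

```azurecli
# Restore mydemoserver to a new server at a UTC timestamp within
# the backup retention period.
az mysql server restore \
  --resource-group myResourceGroup \
  --name mydemoserver-restored \
  --source-server mydemoserver \
  --restore-point-in-time "2022-03-10T13:10:00Z"
```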
mysql Sample Scale Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/scripts/sample-scale-server.md
This sample CLI script scales compute and storage for a single Azure Database fo
### Run the script ## Clean up resources
mysql Sample Server Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/scripts/sample-server-logs.md
This sample CLI script enables and downloads the slow query logs of a single Azu
### Run the script ## Clean up resources
postgresql Concepts Compare Single Server Flexible Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-compare-single-server-flexible-server.md
The following table provides a high-level features and capabilities comparisons
| Saturation | Backup storage used, CPU %, IO %, Memory %, Server log storage limit, server log storage %, server log storage used, Storage limit, Storage %, Storage used | Backup storage used, CPU credits consumed, CPU credits remaining, CPU %, Disk queue depth, IOPS, Memory %, Read IOPS, Read throughput bytes/s, storage free, storage %, storage used, Transaction log storage used, Write IOPS, Write throughput bytes/s | | Traffic | Active connections, Network In, Network out | Active connections, Max. used transaction ID, Network In, Network Out, succeeded connections | | **Extensions** | | (offers latest versions)|
-| TimescaleDB, orafce, plv8 | Yes | No |
+| TimescaleDB, orafce | Yes | Yes |
| PgCron, lo, pglogical | No | Yes | | pgAudit | Preview | Yes | | **Security** | | |
postgresql Concepts High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-high-availability.md
The health of primary and standby servers are continuously monitored and appropr
| **Status** | **Description** | | - | |
-| <b> Initializing | In the process of creating a new standby server |
+| <b> Initializing | In the process of creating a new standby server. |
| <b> Replicating Data | After the standby is created, it is catching up with the primary. | | <b> Healthy | Replication is in steady state and healthy. | | <b> Failing Over | The database server is in the process of failing over to the standby. |
The health of primary and standby servers are continuously monitored and appropr
| <b> Not Enabled | Zone redundant high availability is not enabled. | >[!NOTE]
-> You can enable high availability during server creation or at a later time as well. If you are enabling or disabling high availability during post-create stage, it is recommended to perform the opreation when the primary server activity is low.
+> You can enable high availability during server creation or at a later time. If you are enabling or disabling high availability after the server is created, it is recommended to perform the operation when the primary server activity is low.
## Steady-state operations
PostgreSQL client applications are connected to the primary server using the DB
:::image type="content" source="./media/business-continuity/concepts-high-availability-steady-state.png" alt-text="zone redundant high availability - steady state":::
-1. Clients connect to the flexible server and performs write operations.
+1. Clients connect to the flexible server and perform write operations.
2. Changes are replicated to the standby site. 3. Primary receives acknowledgment. 4. Writes/commits are acknowledged.
For flexible servers configured with high availability, these maintenance activi
## Failover process - unplanned downtimes
-Unplanned outages include software bugs or infrastructure component failures that impact the availability of the database. If the primary server becomes unavailable, it is detected by the monitoring system and initiates a failover process. The process includes a few seconds of wait time to make sure it is not a false positive. The replication to the standby replica is severed and the standby replica is activated to be the primary database server. That includes the standby to recovery any residual WAL files. Once it is fully recovered, DNS for the same end point is updated with the standby server's IP address. Clients can then retry connecting to the database server using the same connection string and resume their operations.
+Unplanned outages include software bugs or infrastructure component failures that impact the availability of the database. If the primary server becomes unavailable, the monitoring system detects the failure and initiates a failover process. The process includes a few seconds of wait time to make sure it is not a false positive. Replication to the standby replica is severed and the standby replica is activated to be the primary database server, which includes recovering any residual WAL files on the standby. Once it is fully recovered, DNS for the same endpoint is updated with the standby server's IP address. Clients can then retry connecting to the database server using the same connection string and resume their operations.
>[!NOTE] > Flexible servers configured with zone-redundant high availability provide a recovery point objective (RPO) of **Zero** (no data loss). The recovery time objective (RTO) is expected to be **less than 120s** in typical cases. However, depending on the activity in the primary database server at the time of the failover, the failover may take longer.
After the failover, while a new standby server is being provisioned, application
1. Primary database server is down and the clients lose database connectivity. 2. Standby server is activated to become the new primary server. The client connects to the new primary server using the same connection string. Having the client application in the same zone as the primary database server reduces latency and improves performance. 3. Standby server is established in the same zone as the old primary server and the streaming replication is initiated.
-4. Once the steady-state replication is established, the client application commits and writes are acknowledged after the data is persisted on both the sites.
+4. Once the steady-state replication is established, the client application commits and writes are acknowledged after the data is persisted on both sites.
## On-demand failover
Application downtime is expected to start after step #1 and persists until step
You can use this feature for failing over to the standby server with reduced downtime. For example, after an unplanned failover, your primary could be in a different availability zone than the application, and you want to bring the primary server back to the previous zone to colocate with your application.
-When executing this feature, the standby server is first prepared to make sure it is caught up with recent transactions allowing the application to continue to perform read/writes. The standby is then promoted and the connections to the primary is severed. Your application can continue to write to the primary while a new standby server is established in the background. The following are the steps involved with planned failover.
+When executing this feature, the standby server is first prepared to make sure it is caught up with recent transactions, allowing the application to continue to perform read/writes. The standby is then promoted and the connections to the primary are severed. Your application can continue to write to the primary while a new standby server is established in the background. The following are the steps involved with planned failover.
| **Step** | **Description** | **App downtime expected?** | | - | | -- |
See [this guide](how-to-manage-high-availability-portal.md) for managing high av
## Point-in-time restore of HA servers
-Flexible servers that are configured with high availability, log data is replicated in real time to the standby server. Any user errors on the primary server - such as an accidental drop of a table or incorrect data updates are replicated to the standby replica as well. So, you cannot use standby to recover from such logical errors. To recover from such errors, you have to perform point-in-time restore from the backup. Using flexible server's point-in-time restore capability, you can restore to the time before the error occurred. For databases configured with high availability, a new database server will be restored as a single zone flexible server with a new user-provided server name. You can use the restored server for few use cases:
+For flexible servers configured with high availability, log data is replicated in real time to the standby server. Any user errors on the primary server - such as an accidental drop of a table or incorrect data updates - are replicated to the standby replica as well. So, you cannot use the standby to recover from such logical errors. To recover from such errors, you have to perform a point-in-time restore from the backup. Using flexible server's point-in-time restore capability, you can restore to a time before the error occurred. For databases configured with high availability, a new database server will be restored as a single zone flexible server with a new user-provided server name. You can use the restored server for a few use cases:
- 1. You can use the restored server for production usage and can optionally enable zone-redundant high availability.
- 2. If you just want to restore an object, you can then export the object from the restored database server and import it to your production database server.
+ 1. You can use the restored server for production usage and can optionally enable zone-redundant high availability.
+ 2. If you just want to restore an object, you can then export the object from the restored database server and import it to your production database server.
3. If you want to clone your database server for testing and development purposes, or you want to restore for any other purposes, you can perform point-in-time restore. ## Zone redundant high availability - features
postgresql Howto Manage Vnet Using Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/howto-manage-vnet-using-cli.md
VNets and Azure service resources can be in the same or different subscriptions.
### Run the script ## Clean up deployment
postgresql Quickstart Create Server Database Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/quickstart-create-server-database-azure-cli.md
The following values are used in subsequent commands to create the database and
Change the location as appropriate for your environment. Replace `0.0.0.0` with the IP address range to match your specific environment. Use the public IP address of the computer you're using to restrict access to the server to only your IP address. ## Create a resource group Create a resource group with the [az group create](/cli/azure/group) command. An Azure resource group is a logical container into which Azure resources are deployed and managed. The following example creates a resource group named *myResourceGroup* in the *eastus* location: ## Create a server Create a server with the [az postgres server create](/cli/azure/postgres/server#az-postgres-server-create) command. > [!NOTE] >
Create a server with the [az postgres server create](/cli/azure/postgres/server#
Create a firewall rule with the [az postgres server firewall-rule create](/cli/azure/postgres/server/firewall-rule) command to give your local environment access to connect to the server. > [!TIP] > If you don't know your IP address, go to [WhatIsMyIPAddress.com](https://whatismyipaddress.com/) to get it.
Create a firewall rule with the [az postgres server firewall-rule create](/cli/a
To list the existing server firewall rules, run the [az postgres server firewall-rule list](/cli/azure/postgres/server/firewall-rule) command. The output lists the firewall rules, if any, by default in JSON format. You may use the switch `--output table` for a more readable table format as the output.
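A hedged example of both commands, using hypothetical names and a placeholder client IP:

```azurecli
# Allow a single client IP through the server firewall.
az postgres server firewall-rule create \
  --resource-group myResourceGroup \
  --server-name mydemoserver \
  --name AllowMyIP \
  --start-ip-address 203.0.113.5 \
  --end-ip-address 203.0.113.5

# List the rules as a readable table instead of the default JSON.
az postgres server firewall-rule list \
  --resource-group myResourceGroup \
  --server-name mydemoserver \
  --output table
```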
postgresql Sample Change Server Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/scripts/sample-change-server-configuration.md
This sample CLI script lists all available configuration parameters as well as t
### Run the script ## Clean up deployment
postgresql Sample Create Server And Firewall Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/scripts/sample-create-server-and-firewall-rule.md
This sample CLI script creates an Azure Database for PostgreSQL server and confi
### Run the script ## Clean up deployment
postgresql Sample Create Server With Vnet Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/scripts/sample-create-server-with-vnet-rule.md
This sample CLI script creates an Azure Database for PostgreSQL server and confi
### Run the script ## Clean up resources
This script uses the commands outlined in the following table:
| **Command** | **Notes** | ||| | [az group create](/cli/azure/group#az_group_create) | Creates a resource group in which all resources are stored. |
-| [az postgresql server create](/cli/azure/postgres/server/vnet-rule?view=azure-cli-latest#az-postgres-server-vnet-rule-create) | Creates a PostgreSQL server that hosts the databases. |
+| [az postgres server create](/cli/azure/postgres/server#az-postgres-server-create) | Creates a PostgreSQL server that hosts the databases. |
| [az network vnet list-endpoint-services](/cli/azure/network/vnet#az-network-vnet-list-endpoint-services) | Lists which services support VNet service tunneling in a given region. | | [az network vnet create](/cli/azure/network/vnet#az-network-vnet-create) | Creates a virtual network. | | [az network vnet subnet create](/cli/azure/network/vnet#az-network-vnet-subnet-create) | Creates a subnet and associates an existing NSG and route table. | | [az network vnet subnet show](/cli/azure/network/vnet#az-network-vnet-subnet-show) | Shows details of a subnet. |
-| [az postgresql server vnet-rule create](/cli/azure/postgres/server/vnet-rule?view=azure-cli-latest#az-postgres-server-vnet-rule-create) | Create a virtual network rule to allows access to a PostgreSQL server. |
+| [az postgres server vnet-rule create](/cli/azure/postgres/server/vnet-rule#az-postgres-server-vnet-rule-create) | Creates a virtual network rule to allow access to a PostgreSQL server. |
| [az group delete](/cli/azure/group#az_group_delete) | Deletes a resource group including all nested resources. | ## Next steps
postgresql Sample Point In Time Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/scripts/sample-point-in-time-restore.md
This sample CLI script restores a single Azure Database for PostgreSQL server to
### Run the script ## Clean up deployment
postgresql Sample Scale Server Up Or Down https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/scripts/sample-scale-server-up-or-down.md
This sample CLI script scales compute and storage for a single Azure Database fo
### Run the script ## Clean up deployment
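A hedged sketch of such scaling using `az postgres server update`, with hypothetical names; storage can typically be scaled up but not down:

```azurecli
# Scale compute to a larger vCore SKU (hypothetical names and SKU).
az postgres server update \
  --resource-group myResourceGroup \
  --name mydemoserver \
  --sku-name GP_Gen5_4

# Grow storage to 100 GiB (value is in megabytes).
az postgres server update \
  --resource-group myResourceGroup \
  --name mydemoserver \
  --storage-size 102400
```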
postgresql Sample Server Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/scripts/sample-server-logs.md
This sample CLI script enables and downloads the slow query logs of a single Azu
### Run the script ## Clean up deployment
postgresql Tutorial Design Database Using Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/tutorial-design-database-using-azure-cli.md
The following values are used in subsequent commands to create the database and
Change the location as appropriate for your environment. Replace `0.0.0.0` with the IP address range to match your specific environment. Use the public IP address of the computer you're using to restrict access to the server to only your IP address. ## Create a resource group Create a resource group with the [az group create](/cli/azure/group) command. An Azure resource group is a logical container into which Azure resources are deployed and managed. The following example creates a resource group named *myResourceGroup* in the *eastus* location: ## Create a server Create a server with the [az postgres server create](/cli/azure/postgres/server#az-postgres-server-create) command. > [!NOTE] >
Create a server with the [az postgres server create](/cli/azure/postgres/server#
Create a firewall rule with the [az postgres server firewall-rule create](/cli/azure/postgres/server/firewall-rule) command to give your local environment access to connect to the server. > [!TIP] > If you don't know your IP address, go to [WhatIsMyIPAddress.com](https://whatismyipaddress.com/) to get it.
Create a firewall rule with the [az postgres server firewall-rule create](/azure
To list the existing server firewall rules, run the [az postgres server firewall-rule list](/cli/azure/postgres/server/firewall-rule) command. The output lists the firewall rules, if any, by default in JSON format. You may use the switch `--output table` for a more readable table format as the output.
purview Catalog Managed Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/catalog-managed-vnet.md
Previously updated : 01/13/2022 Last updated : 03/10/2022 # Customer intent: As an Azure Purview admin, I want to set up Managed Virtual Network and managed private endpoints for my Azure Purview account.
> Currently, Managed Virtual Network and managed private endpoints are available for Azure Purview accounts that are deployed in the following regions: > - Australia East > - Canada Central
+> - East US
> - East US 2
+> - North Europe
> - West Europe ## Conceptual overview
This article describes how to configure Managed Virtual Network and managed priv
### Supported regions Currently, Managed Virtual Network and managed private endpoints are available for Azure Purview accounts that are deployed in the following regions:-- Australia East-- Canada Central-- East US 2-- West Europe
+- Australia East
+- Canada Central
+- East US
+- East US 2
+- North Europe
+- West Europe
### Supported data sources
Currently, the following data sources are supported to have a managed private en
- Azure Blob Storage - Azure Data Lake Storage Gen 2 - Azure SQL Database
+- Azure SQL Database Managed Instances
- Azure Cosmos DB - Azure Synapse Analytics - Azure Files
purview Catalog Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/catalog-permissions.md
Previously updated : 11/22/2021 Last updated : 03/09/2022 # Access control in Azure Purview
Azure Purview uses a set of predefined roles to control who can access what with
- **Data readers** - a role that provides read-only access to data assets, classifications, classification rules, collections and glossary terms. - **Data source administrator** - a role that allows a user to manage data sources and scans. If a user is granted only the **Data source admin** role on a given data source, they can run new scans using an existing scan rule. To create new scan rules, the user must also be granted either the **Data reader** or **Data curator** role. - **Policy author (Preview)** - a role that allows a user to view, update, and delete Azure Purview policies through the policy management app within Azure Purview.
+- **Workflow administrator** - a role that allows a user to access the workflow authoring page in the Azure Purview studio and publish workflows on collections where they have access permissions. Workflow administrators only have access to authoring, so they'll also need at least Data reader permission on a collection to be able to access the Purview Studio.
> [!NOTE] > At this time, Azure Purview Policy author role is not sufficient to create policies. The Azure Purview Data source admin role is also required.
Azure Purview uses a set of predefined roles to control who can access what with
|I need to enable a Service Principal or group to set up and monitor scans in Azure Purview without allowing them to access the catalog's information |Data source administrator| |I need to put users into roles in Azure Purview | Collection administrator | |I need to create and publish access policies | Data source administrator and policy author |
+|I need to create workflows for my Azure Purview account | Workflow administrator |
>[!NOTE] > **\*Data source administrator permissions on Policies** - Data source administrators are also able to publish data policies.
When an Azure Purview account is created, it starts with a root collection that
Sources, assets, and objects can be added directly to this root collection, but so can other collections. Adding collections will give you more control over who has access to data across your Azure Purview account.
-All other users can only access information within the Azure Purview account if they, or a group they're in, are given one of the above roles. This means, when you create an Azure Purview account, no one but the creator can access or use its APIs until they are [added to one or more of the above roles in a collection](how-to-create-and-manage-collections.md#add-role-assignments).
+All other users can only access information within the Azure Purview account if they, or a group they're in, are given one of the above roles. This means, when you create an Azure Purview account, no one but the creator can access or use its APIs until they're [added to one or more of the above roles in a collection](how-to-create-and-manage-collections.md#add-role-assignments).
Users can only be added to a collection by a collection admin, or through permissions inheritance. The permissions of a parent collection are automatically inherited by its subcollections. However, you can choose to [restrict permission inheritance](how-to-create-and-manage-collections.md#restrict-inheritance) on any collection. If you do this, its subcollections will no longer inherit permissions from the parent and will need to be added directly, though collection admins that are automatically inherited from a parent collection can't be removed.
Similarly with the Data Curator and Data Source Admin roles, permissions for tho
### Add users to roles
-Role assignment is managed through the collections. Only a user with the [collection admin role](#roles) can grant permissions to other users on that collection. When new permissions need to be added, a collection admin will access the [Azure Purview Studio](https://web.purview.azure.com/resource/), navigate to data map, then the collections tab, and select the collection where a user needs to be added. From the Role Assignments tab they will be able to add and manage users who need permissions.
+Role assignment is managed through the collections. Only a user with the [collection admin role](#roles) can grant permissions to other users on that collection. When new permissions need to be added, a collection admin will access the [Azure Purview Studio](https://web.purview.azure.com/resource/), navigate to data map, then the collections tab, and select the collection where a user needs to be added. From the Role Assignments tab they'll be able to add and manage users who need permissions.
For full instructions, see our [how-to guide for adding role assignments](how-to-create-and-manage-collections.md#add-role-assignments).
purview Concept Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-workflow.md
+
+ Title: Workflows in Azure Purview
+description: This article describes workflows in Azure Purview, the roles they play, and who can create and manage them.
+++++ Last updated : 03/09/2022+++
+# Workflows in Azure Purview
++
+Workflows are automated, repeatable business processes that users can create within Azure Purview to validate and orchestrate CUD (create, update, delete) operations on their data entities. Enabling these processes allows organizations to track changes, enforce policy compliance, and ensure quality data across their data landscape.
+
+Since the workflows are created and managed within Azure Purview, manual change monitoring or approval is no longer required to ensure quality updates to the data catalog.
+
+## What are workflows?
+
+Workflows are automated processes that are made up of [connectors](#workflow-connectors) that contain a common set of pre-established actions and are run when specified operations occur in your data catalog.
+
+For example: A user attempts to delete a business glossary term that is bound to a workflow. When the user submits this operation, the workflow runs through its actions instead of, or before, the original delete operation.
+
+Workflow [actions](#workflow-connectors) include things like generating approval requests or sending notifications, which allow users to automate validation and notification systems across their organization.
+
+Currently, there are two kinds of workflows:
+
+* **Data governance** - for data policy, access governance, and loss prevention. [Scoped](#workflow-scope) at the collection level.
+* **Data catalog** - to manage approvals for CUD (create, update, delete) operations for glossary terms. [Scoped](#workflow-scope) at the glossary level.
+
+These workflows can be built from pre-established [workflow templates](#workflow-templates) provided in the Azure Purview studio, but are fully customizable using the available workflow connectors.
++
+## Workflow templates
+
+For all the different types of user-defined workflows enabled and available for your use, Azure Purview provides templates to help [workflow administrators](#who-can-manage-workflows) create workflows without needing to build them from scratch. The templates are built into the authoring experience and automatically populate based on the workflow being created, so there's no need to search for them.
+
+Templates are available to launch the workflow authoring experience. However, a workflow admin can customize the template to meet the requirements in their organization.
+
+## Workflow connectors
+
+Workflow connectors are a common set of actions applicable across all workflows. They can be used in any workflow in Azure Purview to create processes customized to your organization. Currently, the available connectors are:
+
+- **Approval connector** – Generates approval requests and assigns the requests to individual users or Microsoft Azure Active Directory groups.
+
+ Azure Purview workflow approval connector currently supports two approval types:
+ * First to Respond – This implies that the first approver's outcome (Approve/Reject) is considered final.
+ * Everyone must approve – This implies everyone identified as an approver must approve the request for the request to be considered approved. If one approver rejects the request, regardless of other approvers, the request is rejected.
+
+- **Task Connector** - Creates, assigns, and tracks a task to a user or Azure AD group as part of a workflow.
+
+- **Send Email** – Sends emails as part of a workflow.
+
+## Workflow scope
+
+Once a workflow is created and enabled, it can be bound to a particular scope. This gives you the flexibility to run different workflows for different areas/departments in your organization.
+
+Data governance workflows are scoped to collections, and can be bound to the root collection to govern the whole Azure Purview catalog, or any subcollection.
+
+Data catalog workflows are scoped to the glossary and can be bound to the entire glossary, any single term, or any parent term to manage child-terms.
+
+If there's no workflow directly associated with a scope, the workflow engine will traverse upward in the scope hierarchy to determine the closest workflow, and run that workflow for the operation.
+
+For example, the AdatumCorp Purview account has the following collection hierarchy:
+
+Root Collection > Sales | Finance | Marketing
+
+- **Root collection** has the workflow _Self-Service data access default workflow_ defined and bound.
+- **Sales** has _Self-Service data access for sales collection_ defined and bound.
+- **Finance** has _Self-Service data access for finance collection_ defined and bound.
+- **Marketing** has no workflows directly bound.
+
+In the above setup, when an access request is made for a data asset in the Finance collection, the _Self-Service data access for finance collection_ workflow is run.
+
+However, when an access request is made for a data asset in the Marketing collection, the _Self-Service data access default workflow_ is triggered. Because there are no workflows bound at the Marketing scope, the workflow engine traverses to the next level in the scope hierarchy, which is Marketing's parent: the root collection. The workflow at the parent scope, the root collection scope, is run.
+
+## Who can manage workflows?
+
+A new role, **Workflow Admin**, is being introduced with the workflow functionality.
+
+A Workflow admin defined for a collection can create self-service workflows and bind these workflows to the collections they have access to.
+
+A Workflow admin defined for any collection can create approval workflows for the business glossary. To bind glossary workflows to a term, you need at least [Data reader permissions](catalog-permissions.md).
+
+## Next steps
+
+Now that you understand what workflows are, you can follow these guides to use them in your Azure Purview account:
+
+- [Self-service data access workflow for hybrid data estates](how-to-workflow-self-service-data-access-hybrid.md)
+- [Approval workflow for business terms](how-to-workflow-business-terms-approval.md)
+- [Manage workflow requests and approvals](how-to-workflow-manage-requests-approvals.md)
+
purview How To Certify Assets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-certify-assets.md
To certify an asset, you must be a **data curator** for the collection containin
:::image type="content" source="media/how-to-certify-assets/view-certified-asset.png" alt-text="An asset with a certified label" border="true"::: > [!NOTE]
-> PowerBI assets can only be [certified in a PowerBI workspace](https://docs.microsoft.com/power-bi/collaborate-share/service-endorse-content). PowerBI endorsement labels are displayed in Azure Purview's search and browse experiences.
+> PowerBI assets can only be [certified in a PowerBI workspace](/power-bi/collaborate-share/service-endorse-content). PowerBI endorsement labels are displayed in Azure Purview's search and browse experiences.
### Certify assets in bulk
When search or browsing the data catalog, you'll see a certification label on an
Discover your assets in the Azure Purview data catalog by either: - [Browsing the data catalog](how-to-browse-catalog.md)-- [Searching the data catalog](how-to-search-catalog.md)
+- [Searching the data catalog](how-to-search-catalog.md)
purview How To Create Import Export Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-create-import-export-glossary.md
Title: How to create, import, and export glossary terms
-description: Learn how to create, import, and export glossary terms in Azure Purview.
+ Title: How to create, import, export, and manage glossary terms
+description: Learn how to create, import, export, and manage business glossary terms in Azure Purview.
Previously updated : 09/27/2021 Last updated : 03/09/2022 # How to create, import, and export glossary terms
This article describes how to work with the business glossary in Azure Purview.
## Create a new term
-To create a new glossary term, do the following steps:
+To create a new glossary term, follow these steps:
1. Select **Data catalog** in the left navigation on the home page, and then select the **Manage glossary** button in the center of the page.
To create a new glossary term, do the following steps:
These status markers are metadata associated with the term. Currently you can set the following status on each term:
- - **Draft**: This term is not yet officially implemented.
+ - **Draft**: This term isn't yet officially implemented.
- **Approved**: This term is official/standard/approved. - **Expired**: This term should no longer be used. - **Alert**: This term needs attention.
Notice that term names are case-sensitive. For example, `Sample` and `saMple` co
:::image type="content" source="media/how-to-create-import-export-glossary/select-term-template-for-import.png" alt-text="Screenshot of the Glossary terms page, Import terms button.":::
-3. Download the csv template and use it to enter your terms you would like to add. When naming your template csv file, the name needs to start with a letter and can only include letters, numbers, spaces, '_', or other non-ascii unicode characters. Special characters in the file name will create an error.
+3. Download the csv template and use it to enter the terms you would like to add. Give your template csv file a name that starts with a letter and includes only letters, numbers, spaces, '_', or other non-ascii unicode characters. Special characters in the file name will create an error.
> [!Important] > The system only supports importing columns that are available in the template. The "System Default" template will have all the default attributes.
You should be able to export terms from glossary as long as the selected terms b
> [!Important] > If the terms in a hierarchy belong to different term templates then you need to split them into different .CSV files for import. Also, updating a parent of a term is currently not supported using import process.
+## Delete terms
+
+1. Select **Data catalog** in the left navigation on the home page, and then select the **Manage glossary** button in the center of the page.
+
+ :::image type="content" source="media/how-to-create-import-export-glossary/find-glossary.png" alt-text="Screenshot of the data catalog with the glossary highlighted." border="true":::
+
+1. Using checkboxes, select the terms you want to delete. You can select a single term, or multiple terms for deletion.
+
+ :::image type="content" source="media/how-to-create-import-export-glossary/select-terms.png" alt-text="Screenshot of the glossary, with a few terms selected." border="true":::
+
+1. Select the **Delete** button in the top menu.
+
+ :::image type="content" source="media/how-to-create-import-export-glossary/select-delete.png" alt-text="Screenshot of the glossary, with the Delete button highlighted in the top menu." border="true":::
++
+1. You'll be presented with a window that shows all the terms selected for deletion.
+
+ > [!NOTE]
+ > If a parent is selected for deletion, all the children for that parent are automatically selected for deletion.
+
+ :::image type="content" source="media/how-to-create-import-export-glossary/delete-window.png" alt-text="Screenshot of the glossary delete window, with a list of all terms to be deleted. The Revenue term is a parent to two other terms, and because it was selected to be deleted, its child terms are also in the list to be deleted." border="true":::
+
+1. Review the list. You can remove the terms you don't want to delete after review by selecting **Remove**.
+
+ :::image type="content" source="media/how-to-create-import-export-glossary/select-remove.png" alt-text="Screenshot of the glossary delete window, with a list of all terms to be deleted, and the 'Remove' column highlighted on the right." border="true":::
+
+1. You can also see which terms will require an approval process in the column **Approval Needed**. If Approval needed is **Yes**, the term will go through an approval workflow before deletion. If the value is **No** then the term will be deleted without any approvals.
+
+ > [!NOTE]
+ > If a parent has an associated approval process, but the child does not, the parent's delete term workflow will be triggered. This is because the selection is done on the parent, and you are acknowledging that the child terms will be deleted along with the parent.
+
+ :::image type="content" source="media/how-to-create-import-export-glossary/approval-needed.png" alt-text="Screenshot of the glossary delete window, with a list of all terms to be deleted, and the 'Approval needed' column highlighted." border="true":::
+
+1. If there's at least one term that needs to be approved, you'll be presented with **Submit for approval** and **Cancel** buttons. Selecting **Submit for approval** will delete all the terms where approval isn't needed and will trigger approval workflows for terms that require it.
+
+ :::image type="content" source="media/how-to-create-import-export-glossary/yes-approval-needed.png" alt-text="Screenshot of the glossary delete window, with a list of all terms to be deleted, and the 'Approval needed' column highlighted. An item is listed as approval needed, so at the bottom, buttons available are 'Submit for approval' and 'Cancel'." border="true":::
+
+1. If there are no terms that need to be approved you'll be presented with **Delete** and **Cancel** buttons. Selecting **Delete** will delete all the selected terms.
+
+ :::image type="content" source="media/how-to-create-import-export-glossary/no-approval-needed.png" alt-text="Screenshot of the glossary delete window, with a list of all terms to be deleted, and the 'Approval needed' column highlighted. All items are listed as no approval needed, so at the bottom, buttons available are 'Delete' and 'Cancel'." border="true":::
++
+## Business terms with approval workflow enabled
+
+If [workflows](concept-workflow.md) are enabled on a term, then any creates, updates, or deletes to the term will go through an approval before they're saved in data catalog.
+
+- **New terms** - when a create approval workflow is enabled on the parent term, during the creation process you'll see **Submit for approval** instead of **Create** after you've entered all the details. Selecting **Submit for approval** will trigger the workflow. You'll receive notification when your request is approved or rejected.
+
+- **Updates to existing terms** - when an update approval workflow is enabled on parent, you'll see **Submit for approval** instead of **Save** when updating the term. Selecting **Submit for approval** will trigger the workflow. The changes won't be saved in catalog until all the approvals are met.
+
+- **Deletion** - when a delete approval workflow is enabled on the parent term, you'll see **Submit for approval** instead of **Delete** when deleting the term. Selecting **Submit for approval** will trigger the workflow. However, the term won't be deleted from catalog until all the approvals are met.
+
+- **Importing terms** - when an import approval workflow is enabled for Azure Purview's glossary, you'll see **Submit for approval** instead of **OK** in the Import window when importing terms via csv. Selecting **Submit for approval** will trigger the workflow. However, the terms in the file won't be updated in the catalog until all the approvals are met.
++ ## Next steps
-* For more information about glossary terms, see the [glossary reference](reference-azure-purview-glossary.md)
+* For more information about glossary terms, see the [glossary reference](reference-azure-purview-glossary.md)
+* For more information about approval workflows of business glossary, see the [Approval workflow for business terms](how-to-workflow-business-terms-approval.md)
purview How To Request Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-request-access.md
+
+ Title: How to request access to a data source in Azure Purview.
+description: This article describes how a user can request access to a data source from within Azure Purview.
+++++ Last updated : 03/01/2022+++
+# How to request access for a data asset
++
+If you discover a data asset in the catalog that you would like access to, you can request access directly through Azure Purview.
+
+The request will trigger a workflow that asks the owners of the data resource to grant you access to that data source.
+
+This article outlines how to make an access request.
+
+1. To find a data asset, use Azure Purview's [search](how-to-search-catalog.md) or [browse](how-to-browse-catalog.md) functionality.
+
+ :::image type="content" source="./media/how-to-request-access/search-or-browse.png" alt-text="Screenshot of the Azure Purview studio, with the search bar and browse buttons highlighted.":::
+
+1. Select the asset to go to asset details.
+
+1. Select **Request access**.
+
+ :::image type="content" source="./media/how-to-request-access/request-access.png" alt-text="Screenshot of a data asset's overview page, with the Request button highlighted in the mid-page menu.":::
+
+1. The **Request access** window will open. You can provide comments on why data access is requested.
+1. Select **Send** to trigger the self-service data access workflow.
+
+ :::image type="content" source="./media/how-to-request-access/send.png" alt-text="Screenshot of a data asset's overview page, with the Request access window overlaid. The Send button is highlighted at the bottom of the Request access window.":::
+
+ > [!NOTE]
+ > A request for access to a resource set will actually submit the data access request for the folder one level up, which contains all of these resource set files.
+
+1. Data owners will be notified of your request and will either approve or reject the request.
++
+## Next steps
+
+- [What are Azure Purview workflows](concept-workflow.md)
+- [Approval workflow for business terms](how-to-workflow-business-terms-approval.md)
+- [Self-service data access workflow for hybrid data estates](how-to-workflow-self-service-data-access-hybrid.md)
purview How To Workflow Business Terms Approval https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-workflow-business-terms-approval.md
+
+ Title: Business terms approval workflow
+description: This article describes how to create and manage workflows to approve business terms in Azure Purview.
+++++ Last updated : 03/01/2022++++
+# Approval workflow for business terms
++
+This guide will take you through the creation and management of approval workflows for business terms.
+
+## Create and enable a new approval workflow for business terms
+
+1. Sign in to the [Azure Purview Studio](https://web.purview.azure.com/resource/) and select the Management center. You'll see three new icons in the table of contents.
+
+ :::image type="content" source="./media/how-to-workflow-business-terms-approval/workflow-section.png" alt-text="Screenshot showing the management center left menu with the new workflow section highlighted.":::
+
+1. To create new workflows, select **Authoring** in the workflow section. This will take you to the workflow authoring experience.
+
+ :::image type="content" source="./media/how-to-workflow-business-terms-approval/workflow-authoring-experience.png" alt-text="Screenshot showing the authoring workflows page, showing a list of all workflows.":::
+
+ >[!NOTE]
+ >If the authoring tab is greyed out, you don't have the permissions to be able to author workflows. You'll need the [workflow admin role](catalog-permissions.md).
+
+1. To create a new workflow, select the **+New** button.
+
+ :::image type="content" source="./media/how-to-workflow-business-terms-approval/workflow-authoring-select-new.png" alt-text="Screenshot showing the authoring workflows page, with the + New button highlighted.":::
+
+1. To create an **Approval workflow for business terms**, select **Data Catalog** and select **Continue**.
+
+ :::image type="content" source="./media/how-to-workflow-business-terms-approval/select-data-catalog.png" alt-text="Screenshot showing the new workflows menu, with Data Catalog selected.":::
+
+1. In the next screen, you'll see all the templates provided by Azure Purview to create a workflow. Select the template you want to start your authoring experience from, and select **Continue**. Each of these templates specifies the kind of action that will trigger the workflow. In the screenshot below we've selected **Create glossary term**. The four different templates available for business glossary are:
+ * Create glossary term - when a term is created, approval will be requested.
+ * Update glossary term - when a term is updated, approval will be requested.
+ * Delete glossary term - when a term is deleted, approval will be requested.
+ * Import terms - when terms are imported, approval will be requested.
+
+ :::image type="content" source="./media/how-to-workflow-business-terms-approval/create-glossary-term-select-continue.png" alt-text="Screenshot showing the new data catalog workflow menu, showing template options, with the Continue button selected.":::
+
+1. Next, enter a workflow name and optionally add a description. Then select **Continue**.
+
+ :::image type="content" source="./media/how-to-workflow-business-terms-approval/name-and-continue.png" alt-text="Screenshot showing the new data catalog workflow menu with a name entered into the name textbox.":::
+
+1. You'll now be presented with a canvas where the selected template is loaded by default.
+
+ :::image type="content" source="./media/how-to-workflow-business-terms-approval/workflow-authoring-canvas-inline.png" alt-text="Screenshot showing the workflow authoring canvas, with the selected template workflow populated in the central workspace." lightbox="./media/how-to-workflow-business-terms-approval/workflow-authoring-canvas-expanded.png":::
+
+1. The default template can be used as is by populating the approver's email address in the **Start and Wait for approval** connector.
+
+ :::image type="content" source="./media/how-to-workflow-business-terms-approval/add-approver-email-inline.png" alt-text="Screenshot showing the workflow authoring canvas, with the start and wait for an approval step opened, and the Assigned to textbox highlighted." lightbox="./media/how-to-workflow-business-terms-approval/add-approver-email-expanded.png":::
+
+ The default template has the following steps:
+ 1. Trigger when a glossary term is created/updated/deleted/imported depending on the template selected.
+ 1. Approval connector that specifies a user or group that will be contacted to approve the request.
+ 1. Condition to check approval status
+ - If approved:
+ 1. Create/update/delete/import the glossary term
+ 1. Send an email to requestor that their request is approved, and term CUD (create, update, delete) operation is successful.
+ - If rejected:
+ 1. Send email to requestor that their request is denied.
+
+1. You can also modify the template by adding more connectors to suit your organizational needs. Add a new step to the end of the template by selecting the **New step** button. Add steps between any already existing steps by selecting the arrow icon between any steps.
+
+ :::image type="content" source="./media/how-to-workflow-business-terms-approval/modify-template-inline.png" alt-text="Screenshot showing the workflow authoring canvas, with a + button highlighted on the arrow between the two top steps, and the Next Step button highlighted at the bottom of the workspace." lightbox="./media/how-to-workflow-business-terms-approval/modify-template-expanded.png":::
+
+1. Once you're done defining a workflow, you need to bind the workflow to a glossary hierarchy path. The binding implies that this workflow is triggered only for CUD operations within the specified glossary hierarchy path. A workflow can be bound to only one hierarchy path. To bind a workflow, or to apply a scope to a workflow, select **Apply workflow**. Select the scopes you want this workflow to be associated with and select **OK**.
+
+ :::image type="content" source="./media/how-to-workflow-business-terms-approval/select-apply-workflow.png" alt-text="Screenshot showing the new data catalog workflow menu with the Apply Workflow button highlighted at the top of the workspace.":::
+
+ :::image type="content" source="./media/how-to-workflow-business-terms-approval/select-okay.png" alt-text="Screenshot showing the apply workflow window, showing a list of items that the workflow can be applied to. At the bottom of the window, the O K button is selected.":::
+
+ >[!NOTE]
+ > - The Azure Purview workflow engine will always resolve to the closest workflow that the term hierarchy path is associated with. In case a direct binding is not found, it will traverse up in the tree to find the workflow associated with the closest parent in the glossary tree.
+ > - Import terms can only be bound to the root glossary path, as the .CSV can contain terms from different hierarchy paths.
+
+1. By default, the workflow will be enabled. To disable, toggle the Enable button in the top menu.
+
+1. Finally, select **Save and close** to create and enable the workflow.
+
+ :::image type="content" source="./media/how-to-workflow-business-terms-approval/workflow-enabled.png" alt-text="Screenshot showing the workflow authoring page, showing the newly created workflow listed among all other workflows.":::
+
+## Edit an existing workflow
+
+To modify an existing workflow, select the workflow and then select **Edit** in the top menu. You'll then be presented with the canvas containing the workflow definition. Modify the workflow and select **Save** to commit your changes.
++
+## Disable a workflow
+
+To disable a workflow, select the workflow and then select **Disable** in the top menu. You can also disable the workflow by selecting **Edit** and changing the enable toggle in the workflow canvas.
++
+## Delete a workflow
+
+To delete a workflow, select the workflow and then select **Delete** in the top menu.
++
+## Limitations for business terms with approval workflow enabled
+
+* Non-approved glossary terms aren't saved in the Purview catalog.
+* The behavior of tagging terms to assets/schemas is the same as before. That is, previously created draft terms can be tagged to assets/schemas.
+
+## Next steps
+
+For more information about workflows, see these articles:
+
+- [What are Azure Purview workflows](concept-workflow.md)
+- [Self-service data access workflow for hybrid data estates](how-to-workflow-self-service-data-access-hybrid.md)
+- [Manage workflow requests and approvals](how-to-workflow-manage-requests-approvals.md)
purview How To Workflow Manage Requests Approvals https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-workflow-manage-requests-approvals.md
+
+ Title: Manage workflow requests and approvals
+description: This article outlines how to manage requests and approvals generated by a workflow in Azure Purview.
+++++ Last updated : 03/09/2022+++
+# Manage workflow requests and approvals
++
+This article outlines how to manage requests and approvals that are generated by a [workflow](concept-workflow.md) in Azure Purview.
+
+To view requests you've made, or requests for approval that have been sent to you by a workflow instance, navigate to the management center in the [Azure Purview Studio](https://web.purview.azure.com/resource/), and select **Requests and Approvals**.
+++
+You'll be presented with three tabs:
+
+* [Waiting for a response](#waiting-for-a-response) - This tab shows the requests (tasks) and approvals that are waiting for you to act on.
+* [Pending requests](#pending-requests) - You can view all the approval requests and tasks you've submitted in this tab.
+* [History](#history) - All the completed approvals and tasks are moved to this tab.
+
+## Waiting for a response
+
+This tab shows the requests (tasks) and approvals that are waiting for your action.
++
+Select the request to take action.
+
+### Approvals
+
+1. To approve/reject a request, select the request and you'll be presented with the following window:
+
+ :::image type="content" source="./media/how-to-workflow-manage-requests-approval/select-request.png" alt-text="Screenshot showing that the Waiting for a response tab is selected, and an open request has been selected. The Respond page is shown with some details, a space for a response, and a space for commentary.":::
+
+1. An approval activity has the following available responses:
+ - **Approved** – An approver can mark the response as **Approved**, indicating their approval for the changes proposed by the requestor.
+ - **Rejected** - An approver can mark the response as **Rejected**, indicating that they don't approve the changes proposed by the requestor.
+1. Optionally, select the value to see details of the request. The screenshot below shows the term details. If the approval is for a data asset, you'll be able to view the details, as shown below.
+
+ :::image type="content" source="./media/how-to-workflow-manage-requests-approval/select-value.png" alt-text="Screenshot showing the respond page is open, with the Value detail highlighted.":::
+
+ :::image type="content" source="./media/how-to-workflow-manage-requests-approval/view-details.png" alt-text="Screenshot showing the request value has been selected, so the request details page is open showing an overview of the request, related information, and contacts.":::
+
+1. If there are updates, you'll also be able to see the current value and proposed value.
+1. Choose your response, optionally add comments, and select **Confirm**.
+
+ :::image type="content" source="./media/how-to-workflow-manage-requests-approval/select-option-and-confirm.png" alt-text="Screenshot showing the Respond page is open, with the response section Highlighted, a response selected, and the confirm button highlighted at the bottom.":::
+
+### Tasks
+
+1. To complete a task, select the task request and you'll be presented with the following window:
+
+ :::image type="content" source="./media/how-to-workflow-manage-requests-approval/task-request.png" alt-text="Screenshot showing the task selected and the Respond page is open, with details, a status, and a place for comments.":::
+
+1. A task has the following statuses:
+ - Not started – This is the status of the task when it's initially created by a workflow.
+ - In Progress – A task owner can mark the task as **In progress** to indicate that they're currently working on it.
+ - Complete – Once the task is complete, a task owner can change the status to **Complete**. This marks the completion of the task activity, and the workflow will now move to the next step.
+
+1. Select the correct status, add any comments, and select **Confirm**.
+
+## Pending requests
+
+In this tab you can view all the approval requests and tasks that you've submitted.
+
+Select the request to see the status and the outcomes for each approver/task owner.
++
+## History
+
+All the completed approvals and tasks are moved to this tab.
++
+Select an approval or task to see details and responses from all approvals or task owners.
+
+## Email notifications
+
+Purview approval and task connectors have built-in email capabilities. Every time an approval or task action is triggered in a workflow, an email is sent to all the users who need to act on it.
++
+Users can respond by selecting the links in the email, or by navigating to the Azure Purview studio and viewing their pending tasks.
+
+## Next steps
+
+- [What are Azure Purview workflows](concept-workflow.md)
+- [Approval workflow for business terms](how-to-workflow-business-terms-approval.md)
+- [Self-service data access workflow for hybrid data estates](how-to-workflow-self-service-data-access-hybrid.md)
+- [Manage workflow runs](how-to-workflow-manage-runs.md)
purview How To Workflow Manage Runs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-workflow-manage-runs.md
+
+ Title: Manage workflow runs
+description: This article outlines how to manage workflow runs.
+++++ Last updated : 03/01/2022+++
+# Manage workflow runs
++
+This article outlines how to manage workflows that are already running.
+
+1. To view workflow runs you triggered, sign in to the [Azure Purview Studio](https://web.purview.azure.com/resource/), select the Management center, and select **Workflow runs**.
+
+ :::image type="content" source="./media/how-to-workflow-manage-runs/select-workflow-runs.png" alt-text="Screenshot of the management menu in the Azure Purview studio. The Workflow runs tab is highlighted.":::
+
+1. You'll be presented with the list of workflow runs and their statuses.
+
+ :::image type="content" source="./media/how-to-workflow-manage-runs/workflow-runs.png" alt-text="Screenshot of the workflow runs page, showing a list of all workflow runs, their status, and their run IDs.":::
+
+1. You can filter the results by using workflow name, status, or time.
+
+ :::image type="content" source="./media/how-to-workflow-manage-runs/filters.png" alt-text="Screenshot of the workflow runs page, with the keyword, name, status, and time filters highlighted above the list of workflows.":::
+
+1. Select a workflow name to see the details of the workflow run.
+
+1. This will present a window that shows all the actions that are completed, the actions that are in progress, and the next action for that workflow run.
+
+ :::image type="content" source="./media/how-to-workflow-manage-runs/workflow-details.png" alt-text="Screenshot of the workflow runs page, with an example workflow name selected, and the workflow details page overlaid, showing workflow run, submission time, run I D, status, and a list of all steps in the request timeline.":::
+
+1. You can select any of the actions in the request timeline to see the specific status and substep details.
+
+ :::image type="content" source="./media/how-to-workflow-manage-runs/select-stages.png" alt-text="Screenshot of the workflow runs page, with the workflow details page overlaid. Some workflow run actions in the request timeline have been expanded to show more information and sub steps.":::
++
+## Next steps
+
+- [What are Azure Purview workflows](concept-workflow.md)
+- [Approval workflow for business terms](how-to-workflow-business-terms-approval.md)
+- [Self-service data access workflow for hybrid data estates](how-to-workflow-self-service-data-access-hybrid.md)
+- [Manage workflow requests and approvals](how-to-workflow-manage-requests-approvals.md)
purview How To Workflow Self Service Data Access Hybrid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-workflow-self-service-data-access-hybrid.md
+
+ Title: Self-service hybrid data access workflows
+description: This article describes how to create and manage hybrid self-service data access workflows in Azure Purview.
+++++ Last updated : 03/09/2022+++
+# Self-service data access workflows for hybrid data estates
++
+This guide will take you through the creation and management of self-service data access [workflows](concept-workflow.md) for hybrid data estates.
+
+## Create and enable self-service data access workflow
+
+1. Sign in to [Azure Purview Studio](https://web.purview.azure.com/resource/) and select the Management center. You'll see three new icons in the table of contents.
+
+ :::image type="content" source="./media/how-to-workflow-self-service-data-access-hybrid/workflow-section.png" alt-text="Screenshot showing the management center left menu with the new workflow section highlighted.":::
+
+1. To create new workflows, select **Authoring**. This will take you to the workflow authoring experience.
+
+ :::image type="content" source="./media/how-to-workflow-self-service-data-access-hybrid/workflow-authoring-experience.png" alt-text="Screenshot showing the authoring workflows page, showing a list of all workflows.":::
+
+ >[!NOTE]
+    >If the authoring tab is greyed out, you don't have permission to author workflows. You'll need the [workflow admin role](catalog-permissions.md).
+
+1. To create a new self-service workflow, select the **+ New** button.
+
+ :::image type="content" source="./media/how-to-workflow-self-service-data-access-hybrid/workflow-authoring-select-new.png" alt-text="Screenshot showing the authoring workflows page, with the + New button highlighted.":::
+
+1. You'll be presented with the different categories of workflows that you can create in Azure Purview. To create an access request workflow, select **Governance**, and then select **Continue**.
+
+ :::image type="content" source="./media/how-to-workflow-self-service-data-access-hybrid/select-governance.png" alt-text="Screenshot showing the new workflow window, with the Governance option selected.":::
+
+1. In the next screen, you'll see all the templates provided by Azure Purview to create a self-service data access workflow. Select the template **Data access request** and select **Continue**.
+
+ :::image type="content" source="./media/how-to-workflow-self-service-data-access-hybrid/select-data-access-request.png" alt-text="Screenshot showing the new workflow window, with the Data access request option selected.":::
+
+1. Next, enter a workflow name and optionally add a description. Then select **Continue**.
+
+ :::image type="content" source="./media/how-to-workflow-self-service-data-access-hybrid/name-and-continue.png" alt-text="Screenshot showing the new workflow window, with a name entered in the textbox.":::
+
+1. You'll now be presented with a canvas where the selected template is loaded by default.
+
+ :::image type="content" source="./media/how-to-workflow-self-service-data-access-hybrid/workflow-canvas-inline.png" alt-text="Screenshot showing the workflow canvas with the selected template workflow steps displayed." lightbox="./media/how-to-workflow-self-service-data-access-hybrid/workflow-canvas-expanded.png":::
+
+    The template has the following steps:
+    1. Trigger when a data access request is made.
+    1. Approval connector that specifies a user or group that will be contacted to approve the request.
+    1. Condition to check the approval status.
+        - If approved:
+            1. Condition to check whether the data source is registered for data use governance (policy).
+            1. If the data source is registered with policy:
+                1. Create a self-service policy.
+                1. Send an email to the requestor that access is provided.
+            1. If the data source isn't registered with policy:
+                1. Task connector to assign a task to a user or Azure Active Directory group to manually provide access to the requestor.
+                1. Send an email to the requestor that access is provided once the task is complete.
+        - If rejected:
+            1. Send an email to the requestor that the data access request is denied.
+1. The default template can be used as is by populating two fields:
+    * Adding an approver's email address or Azure Active Directory group in the **Start and Wait for approval** connector.
+    * Adding a user's email address or Azure Active Directory group in the **Create task** connector to denote who is responsible for manually providing access if the source isn't registered with policy.
+
+ :::image type="content" source="./media/how-to-workflow-self-service-data-access-hybrid/required-fields-for-template-inline.png" alt-text="Screenshot showing the workflow canvas with the start and wait for an approval step, and the Create Task and wait for task completion steps highlighted, and the Assigned to textboxes highlighted within those steps." lightbox="./media/how-to-workflow-self-service-data-access-hybrid/required-fields-for-template-expanded.png":::
+
+ > [!NOTE]
+    > Please configure the workflow to create self-service policies ONLY for sources supported by Azure Purview's policy feature. To see what's supported by policy, check the [Data owner policies documentation](tutorial-data-owner-policies-storage.md).
+
+1. You can also modify the template by adding more connectors to suit your organizational needs.
+
+ :::image type="content" source="./media/how-to-workflow-self-service-data-access-hybrid/more-connectors-inline.png" alt-text="Screenshot showing the workflow authoring canvas, with a + button highlighted on the arrow between the two top steps, and the Next Step button highlighted at the bottom of the workspace." lightbox="./media/how-to-workflow-self-service-data-access-hybrid/more-connectors-expanded.png":::
+
+1. Once you're done defining a workflow, you need to bind the workflow to a collection hierarchy path. The binding (or scoping) implies that this workflow is triggered only for data access requests in that collection. To bind a workflow or to apply a scope to a workflow, you need to select **Apply workflow**. Select the scope you want this workflow to be associated with and select **OK**.
+
+ :::image type="content" source="./media/how-to-workflow-self-service-data-access-hybrid/apply-workflow.png" alt-text="Screenshot showing the workflow workspace with the Apply workflow button selected at the top of the space, and the Apply workflow menu open, showing a list of items. One item is selected, and the O K button is highlighted at the bottom.":::
+
+    >[!NOTE]
+    > The Purview workflow engine always resolves to the closest workflow that the collection hierarchy path is associated with. If a direct binding isn't found, the engine traverses up the tree to find the workflow associated with the closest parent in the collection tree. The sketch after these steps illustrates this lookup.
+
+1. By default, the workflow will be enabled. You can disable it by selecting the **Enable** toggle.
+1. Finally, select **Save and close** to create and enable the workflow.
+
+ :::image type="content" source="./media/how-to-workflow-self-service-data-access-hybrid/completed-workflows.png" alt-text="Screenshot showing the workflow authoring page with the newly created workflow listed among the other workflows.":::
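+
+To make the resolution behavior concrete, here's a minimal sketch of the closest-binding lookup described in the note above. It's illustrative only: the actual resolution happens inside the Purview workflow engine, and the collection paths and workflow names are hypothetical.
+
+```azurepowershell
+# Hypothetical bindings from collection hierarchy paths to workflows.
+$bindings = @{
+    "root"       = "Default data access workflow"
+    "root/sales" = "Sales data access workflow"
+}
+
+# Collection that received the data access request.
+$path = "root/sales/emea"
+
+while ($true) {
+    if ($bindings.ContainsKey($path)) {
+        # Direct binding found at this level of the hierarchy.
+        Write-Output "Resolved workflow: $($bindings[$path])"
+        break
+    }
+    if ($path -notmatch '/') {
+        # Reached the root without finding a binding.
+        Write-Output "No workflow bound on this path"
+        break
+    }
+    # No direct binding; traverse up to the parent collection.
+    $path = $path.Substring(0, $path.LastIndexOf('/'))
+}
+```
+
+For a request on `root/sales/emea`, the lookup falls through to `root/sales` and resolves to the workflow bound there.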
+
+## Edit an existing workflow
+
+To modify an existing workflow, select the workflow and then select the **Edit** button. You'll now be presented with the canvas containing the workflow definition. Modify the workflow and select **Save** to commit your changes.
++
+## Disable a workflow
+
+To disable a workflow, you can select the workflow and then select **Disable**. You can also disable the workflow by selecting **Edit**, changing the **Enable** toggle in the workflow canvas, and then saving.
++
+## Delete a workflow
+
+To delete a workflow, select the workflow and then select **Delete**.
++
+## Next steps
+
+For more information about workflows, see these articles:
+
+- [What are Azure Purview workflows](concept-workflow.md)
+- [Approval workflow for business terms](how-to-workflow-business-terms-approval.md)
+- [Manage workflow requests and approvals](how-to-workflow-manage-requests-approvals.md)
+
purview Manage Credentials https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/manage-credentials.md
These credential types are supported in Azure Purview:
- Basic authentication: You add the **password** as a secret in key vault.
- Service Principal: You add the **service principal key** as a secret in key vault.
- SQL authentication: You add the **password** as a secret in key vault.
+- Windows authentication: You add the **password** as a secret in key vault.
- Account Key: You add the **account key** as a secret in key vault.
- Role ARN: For an Amazon S3 data source, add your **role ARN** in AWS.
- Consumer Key: For Salesforce data sources, you can add the **password** and the **consumer secret** in key vault.
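+
+For example, to store a password as a key vault secret with PowerShell (a minimal sketch, assuming an existing key vault; the vault and secret names are placeholders):
+
+```azurepowershell
+# Convert the password to a secure string, then store it as a secret.
+$secretValue = ConvertTo-SecureString "<your-password>" -AsPlainText -Force
+Set-AzKeyVaultSecret -VaultName "<your-key-vault>" -Name "scan-credential-password" -SecretValue $secretValue
+```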
purview Register Scan On Premises Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-on-premises-sql-server.md
This section describes how to register an on-premises SQL server instance in Azu
### Authentication for registration
-There is only one way to set up authentication for SQL server on-premises:
+There are two ways to set up authentication for SQL server on-premises:
- SQL Authentication
+- Windows Authentication
-#### SQL Authentication to register
+#### Set up SQL server authentication
-Ensure the SQL Server deployment is configured to allow SQL Server and Windows Authentication.
+If SQL Authentication is applied, ensure the SQL Server deployment is configured to allow SQL Server and Windows Authentication.
To enable this, within SQL Server Management Studio (SSMS), navigate to "Server Properties" and change from "Windows Authentication Mode" to "SQL Server and Windows Authentication mode". :::image type="content" source="media/register-scan-on-premises-sql-server/enable-sql-server-authentication.png" alt-text="The Server Properties window is open with the security page selected. Under Server authentication, S Q L Server and Windows Authentication mode is selected.":::
+If Windows Authentication is applied, configure the SQL Server deployment to use Windows Authentication mode.
+
A change to the Server Authentication will require a restart of the SQL Server instance and SQL Server Agent. This can be triggered within SSMS by navigating to the SQL Server instance and selecting "Restart" within the right-click options pane.

##### Creating a new login and user

If you would like to create a new login and user to be able to scan your SQL server, follow the steps below:
-The SQL account must have access to the **master** database. This is because the `sys.databases` is in the master database. The Azure Purview scanner needs to enumerate `sys.databases` in order to find all the SQL databases on the server.
+The account must have access to the **master** database. This is because the `sys.databases` is in the master database. The Azure Purview scanner needs to enumerate `sys.databases` in order to find all the SQL databases on the server.
> [!Note] > All the steps below can be executed using the code provided [here](https://github.com/Azure/Purview-Samples/blob/master/TSQL-Code-Permissions/grant-access-to-on-prem-sql-databases.sql)
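+
+If you prefer scripting to SSMS, the following is a hedged sketch of the first steps using the `Invoke-Sqlcmd` cmdlet from the SqlServer PowerShell module. The login name, password, and server name are placeholders, and the linked sample above remains the authoritative script for the full set of permissions.
+
+```azurepowershell
+# Create a login and grant it access to the master database,
+# where the scanner enumerates sys.databases.
+$query = @"
+CREATE LOGIN [purview-scan] WITH PASSWORD = '<strong-password>';
+CREATE USER [purview-scan] FOR LOGIN [purview-scan];
+"@
+Invoke-Sqlcmd -ServerInstance "<your-server>" -Database "master" -Query $query
+```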
-1. Navigate to SQL Server Management Studio (SSMS), connect to the server, navigate to security, select and hold (or right-click) on login and create New login. Make sure to select "SQL authentication".
+1. Navigate to SQL Server Management Studio (SSMS), connect to the server, navigate to **Security**, select and hold (or right-click) **Logins**, and create a new login. If Windows Authentication is applied, select "Windows authentication". If SQL Authentication is applied, make sure to select "SQL authentication".
:::image type="content" source="media/register-scan-on-premises-sql-server/create-new-login-user.png" alt-text="Create new login and user.":::
The SQL account must have access to the **master** database. This is because the
1. Select OK to save.
-1. Navigate again to the user you created, by selecting and holding (or right-clicking) and selecting **Properties**. Enter a new password and confirm it. Select the 'Specify old password' and enter the old password. **It is required to change your password as soon as you create a new login.**
+1. If SQL Authentication is applied, navigate again to the user you created by selecting and holding (or right-clicking) it and selecting **Properties**. Enter a new password and confirm it. Select 'Specify old password' and enter the old password. **It is required to change your password as soon as you create a new login.**
:::image type="content" source="media/register-scan-on-premises-sql-server/change-password.png" alt-text="change password.":::
The SQL account must have access to the **master** database. This is because the
1. Select **+ Generate/Import** and enter the **Name** and **Value** as the *password* from your SQL server login 1. Select **Create** to complete 1. If your key vault is not connected to Azure Purview yet, you will need to [create a new key vault connection](manage-credentials.md#create-azure-key-vaults-connections-in-your-azure-purview-account)
-1. Finally, [create a new credential](manage-credentials.md#create-a-new-credential) using the **username** and **password** to set up your scan.
+1. Finally, [create a new credential](manage-credentials.md#create-a-new-credential) using the **username** and **password** to set up your scan. Make sure the right authentication method is selected when creating a new credential. If SQL Authentication is applied, select "SQL authentication" as the authentication method. If Windows Authentication is applied, then select "Windows authentication".
### Steps to register
To create and run a new scan, do the following:
1. Select **New scan**
-1. Select the credential to connect to your data source.
+1. Select the credential to connect to your data source. The credentials are grouped and listed under different authentication methods.
- :::image type="content" source="media/register-scan-on-premises-sql-server/on-premises-sql-set-up-scan.png" alt-text="Set up scan":::
+ :::image type="content" source="media/register-scan-on-premises-sql-server/on-premises-sql-set-up-scan-win-auth.png" alt-text="Set up scan":::
1. You can scope your scan to specific tables by choosing the appropriate items in the list.
purview Register Scan Power Bi Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-power-bi-tenant.md
To create and run a new scan, do the following:
This scenario can be used when Azure Purview, the Power BI tenant, or both are configured to use a private endpoint and deny public access. This option is also applicable if Azure Purview and the Power BI tenant are configured to allow public access.
-> [!Note]
-> Additional configuration may be required for your Power BI tenant and Azure Purview account, if you are planning to scan Power BI tenant through private network where either Azure Purview account, Power BI tenant or both are configured with private endpoint with public access denied.
-> For more information related to Power BI network, see [How to configure private endpoints for accessing Power BI](/power-bi/admin/service-security-private-links.md).
+> [!IMPORTANT]
+> Additional configuration may be required for your Power BI tenant and Azure Purview account if you plan to scan the Power BI tenant through a private network where the Azure Purview account, the Power BI tenant, or both are configured with a private endpoint and public access denied.
+>
+> For more information related to Power BI network, see [How to configure private endpoints for accessing Power BI](/power-bi/enterprise/service-security-private-links).
+>
> For more information about Azure Purview network settings, see [Use private endpoints for your Azure Purview account](catalog-private-link.md). To create and run a new scan, do the following:
search Query Lucene Syntax https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/query-lucene-syntax.md
description: Reference for the full Lucene query syntax, as used in Azure Cognitive Search for wildcard, fuzzy search, RegEx, and other advanced query constructs. --++ Last updated 06/08/2021
search Query Simple Syntax https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/query-simple-syntax.md
description: Reference for the simple query syntax used for full text search queries in Azure Cognitive Search. --++ Last updated 12/14/2020
search Search Api Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-api-migration.md
description: Review differences in API versions and learn which actions are required to migrate existing code to the newest Azure Cognitive Search service REST API version. --++ Last updated 09/16/2021
search Search Dotnet Mgmt Sdk Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-dotnet-mgmt-sdk-migration.md
description: Upgrade to the Azure Search .NET Management SDK from previous versions. Learn about new features and the code changes necessary for migration. --++ ms.devlang: csharp
search Search Dotnet Sdk Migration Version 1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-dotnet-sdk-migration-version-1.md
description: Migrate code to the Azure Search .NET SDK version 1.1 from older API versions. Learn what's new and what code changes are required. --++ ms.devlang: csharp
search Search Dotnet Sdk Migration Version 5 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-dotnet-sdk-migration-version-5.md
description: Migrate code to the Azure Search .NET SDK version 5 from older versions. Learn what is new and which code changes are required. --++ ms.devlang: csharp
search Search Dotnet Sdk Migration Version 9 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-dotnet-sdk-migration-version-9.md
description: Migrate code to the Azure Search .NET SDK version 9 from older versions. Learn what is new and which code changes are required. --++ ms.devlang: csharp
search Search Dotnet Sdk Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-dotnet-sdk-migration.md
description: Migrate code to the Azure Search .NET SDK version 3 from older versions. Learn what's new and which code changes are required. --++ ms.devlang: csharp
search Search Howto Complex Data Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-complex-data-types.md
description: Nested or hierarchical data structures can be modeled in an Azure Cognitive Search index using ComplexType and Collections data types. --++ tags: complex data types; compound data types; aggregate data types
search Search Howto Connecting Azure Sql Mi To Azure Search Using Indexers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-connecting-azure-sql-mi-to-azure-search-using-indexers.md
Previously updated : 06/26/2021 Last updated : 03/10/2022 # Indexer connections to Azure SQL Managed Instance through a public endpoint
-If you are setting up an [Azure SQL indexer](search-howto-connecting-azure-sql-database-to-azure-search-using-indexers.md) that connects to an Azure SQL managed instance, you'll need to enable a public endpoint on the managed instance as a prerequisite. By default, an indexer connects to a managed instance over a public endpoint. You can also use a [private endpoint](search-indexer-howto-access-private.md).
+If you are setting up an [Azure SQL indexer](search-howto-connecting-azure-sql-database-to-azure-search-using-indexers.md) that connects to an Azure SQL managed instance, you'll need to enable a public endpoint on the managed instance as a prerequisite. By default, an indexer connects to a managed instance over a public endpoint.
This article provides basic steps that include collecting information necessary for data source configuration. For more information and methodologies, see [Configure public endpoint in Azure SQL Managed Instance](../azure-sql/managed-instance/public-endpoint-configure.md).
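+
+As a minimal sketch, you can enable the public endpoint with the Az.Sql PowerShell module; the instance and resource group names are placeholders, and you must also allow inbound traffic to port 3342 in the managed instance's network security group:
+
+```azurepowershell
+# Enable the public data endpoint on the managed instance.
+Set-AzSqlInstance -Name "<your-managed-instance>" -ResourceGroupName "<your-resource-group>" -PublicDataEndpointEnabled $true -Force
+```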
Copy the connection string to use in the search indexer's data source connection
## Next steps
-With configuration out of the way, you can now specify a [SQL Managed Instance as an indexer data source](search-howto-connecting-azure-sql-database-to-azure-search-using-indexers.md).
+With configuration out of the way, you can now specify a [SQL Managed Instance as an indexer data source](search-howto-connecting-azure-sql-database-to-azure-search-using-indexers.md).
search Search Howto Dotnet Sdk V10 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-dotnet-sdk-v10.md
description: Learn how to create and manage search objects in a .NET application using C# and Microsoft.Azure.Search (version 10) of the .NET SDK. Code snippets demonstrate connecting to the service, creating indexes, and queries. --++ ms.devlang: csharp
search Search Howto Managed Identities Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-managed-identities-storage.md
Previously updated : 02/11/2022 Last updated : 03/10/2022 # Set up a connection to an Azure Storage account using a managed identity
This article assumes familiarity with indexer concepts and configuration. If you
For a code example in C#, see [Index Data Lake Gen2 using Azure AD](https://github.com/Azure-Samples/azure-search-dotnet-samples/blob/master/data-lake-gen2-acl-indexing/README.md) on GitHub.
+> [!NOTE]
+> If your indexer has an attached skillset that writes back to Azure Storage (for example, it creates a knowledge store or caches enriched content), a managed identity won't work if the storage account is behind a firewall or has IP restrictions. This is a known limitation that will be lifted when managed identity support for skillset scenarios becomes generally available. The solution is to use a full access connection string instead of a managed identity.
+ ## Prerequisites * [Create a managed identity](search-howto-managed-identities-data-sources.md) for your search service.
search Search Indexer Howto Access Trusted Service Exception https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-indexer-howto-access-trusted-service-exception.md
Previously updated : 02/11/2022 Last updated : 03/10/2022 # Make indexer connections to Azure Storage as a trusted service
Indexers in an Azure Cognitive Search service that access blob data in Azure Sto
+ Content in Azure Blob Storage or Azure Data Lake Storage Gen2 (ADLS Gen2) that you want to index. > [!NOTE]
-> This capability is limited to blobs and ADLS Gen2 on Azure Storage. The trusted service exception is not supported for indexer connections to Azure Table Storage and Azure File Storage.
+> This capability is limited to blobs and ADLS Gen2 on Azure Storage. The trusted service exception is not supported for indexer connections to Azure Table Storage and Azure File Storage. It's also not currently supported for indexers that invoke skillsets that write to Azure Storage (knowledge store, enrichment cache, or debug sessions).
## Check service identity
search Search Indexer Securing Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-indexer-securing-resources.md
Previously updated : 02/18/2022 Last updated : 03/10/2022 # Indexer access to content protected by Azure network security features
Your Azure resources could be protected using any number of the network isolatio
| Resource | IP Restriction | Private endpoint | | | | - |
-| Azure Storage (blobs, tables, ADLS Gen 2) | Supported only if the storage account and search service are in different regions | Supported |
+| Azure Storage for text-based indexing (blobs, tables, ADLS Gen 2) | Supported only if the storage account and search service are in different regions. | Supported |
+| Azure Storage for AI enrichment (caching, knowledge store, debug sessions) | Supported only if the storage account and search service are in different regions, and when the search service connects using a full access connection string. Managed identity is not currently supported for write back operations to an IP restricted storage account. | Unsupported |
| Azure Cosmos DB - SQL API | Supported | Supported | | Azure Cosmos DB - MongoDB API | Supported | Unsupported | | Azure Cosmos DB - Gremlin API | Supported | Unsupported |
search Search Query Odata Collection Operators https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-query-odata-collection-operators.md
description: When creating filter expressions in Azure Cognitive Search queries, use "any" and "all" operators in lambda expressions when the filter is on a collection or complex collection field. --++ Last updated 09/16/2021
search Search Query Odata Comparison Operators https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-query-odata-comparison-operators.md
description: Syntax and reference documentation for using OData comparison operators (eq, ne, gt, lt, ge, and le) in Azure Cognitive Search queries. --++ Last updated 09/16/2021
search Search Query Odata Filter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-query-odata-filter.md
description: OData language reference and full syntax used for creating filter expressions in Azure Cognitive Search queries. --++ Last updated 09/16/2021
search Search Query Odata Full Text Search Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-query-odata-full-text-search-functions.md
description: OData full-text search functions, search.ismatch and search.ismatchscoring, in Azure Cognitive Search queries. --++ Last updated 09/16/2021
search Search Query Odata Geo Spatial Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-query-odata-geo-spatial-functions.md
description: Syntax and reference documentation for using OData geo-spatial functions, geo.distance and geo.intersects, in Azure Cognitive Search queries. --++ Last updated 09/16/2021
search Search Query Odata Logical Operators https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-query-odata-logical-operators.md
description: Syntax and reference documentation for using OData logical operators, and, or, and not, in Azure Cognitive Search queries. --++ Last updated 09/16/2021
search Search Query Odata Orderby https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-query-odata-orderby.md
description: Syntax and language reference documentation for using order-by in Azure Cognitive Search queries. --++ Last updated 09/16/2021
search Search Query Odata Search In Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-query-odata-search-in-function.md
description: Syntax and reference documentation for using the search.in function in Azure Cognitive Search queries. --++ Last updated 09/16/2021
search Search Query Odata Search Score Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-query-odata-search-score-function.md
description: Syntax and reference documentation for using the search.score function in Azure Cognitive Search queries. --++ Last updated 09/16/2021
search Search Query Odata Select https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-query-odata-select.md
description: Syntax and language reference for explicit selection of fields to return in the search results of Azure Cognitive Search queries. --++ Last updated 09/16/2021
search Search Query Odata Syntax Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-query-odata-syntax-reference.md
description: Formal grammar and syntax specification for OData expressions in Azure Cognitive Search queries. --++ Last updated 09/16/2021
-translation.priority.mt:
- - "de-de"
- - "es-es"
- - "fr-fr"
- - "it-it"
- - "ja-jp"
- - "ko-kr"
- - "pt-br"
- - "ru-ru"
- - "zh-cn"
- - "zh-tw"
+ # OData expression syntax reference for Azure Cognitive Search
sentinel Dns Normalization Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/dns-normalization-schema.md
For more information, see the [Internet Assigned Numbers Authority (IANA) DNS pa
### Common fields
-Fields common to all schemas are described in the [ASIM schema overview](normalization-about-schemas.md#common). The following fields have specific guidelines for DNS Events:
+> [!IMPORTANT]
+> Fields common to all schemas are described in the [ASIM schema overview](normalization-about-schemas.md#common). The following list mentions only fields that have specific guidelines for DNS events.
+>
| **Field** | **Class** | **Type** | **Description** | | | | | | | **EventType** | Mandatory | Enumerated | Indicates the operation reported by the record. <br><br> For DNS records, this value would be the [DNS op code](https://www.iana.org/assignments/dns-parameters/dns-parameters.xhtml). <br><br>Example: `lookup`|
-| **EventSubType** | Optional | Enumerated | Either **request** or **response**. <br><br>For most sources, [only the responses are logged](#guidelines-for-collecting-dns-events), and therefore the value is often **response**. |
-| <a name=eventresultdetails></a>**EventResultDetails** | Alias | | Reason or details for the result reported in the **_EventResult_** field. <br><br> Aliases the [ResponseCodeName](#responsecodename) field.|
+| **EventSubType** | Optional | Enumerated | Either `request` or `response`. <br><br>For most sources, [only the responses are logged](#guidelines-for-collecting-dns-events), and therefore the value is often **response**. |
+| <a name=eventresultdetails></a>**EventResultDetails** | Mandatory | Enumerated | For DNS events, this field provides the [DNS response code](https://www.iana.org/assignments/dns-parameters/dns-parameters.xhtml). <br><br>**Note**: IANA doesn't define the case for the values, so analytics must normalize the case. If the source provides only a numerical response code and not a response code name, the parser must include a lookup table to enrich with this value. <br><br> If this record represents a request and not a response, set to **NA**. <br><br>Example: `NXDOMAIN` |
| **EventSchemaVersion** | Mandatory | String | The version of the schema documented here is **0.1.3**. | | **EventSchema** | Mandatory | String | The name of the schema documented here is **Dns**. | | **Dvc** fields| - | - | For DNS events, device fields refer to the system that reports the DNS event. |
The fields listed in this section are specific to DNS events, although many are
| **DnsQueryType** | Optional | Integer | The [DNS Resource Record Type codes](https://www.iana.org/assignments/dns-parameters/dns-parameters.xhtml). <br><br>Example: `28`| | **DnsQueryTypeName** | Recommended | Enumerated | The [DNS Resource Record Type](https://www.iana.org/assignments/dns-parameters/dns-parameters.xhtml) names. <br><br>**Note**: IANA doesn't define the case for the values, so analytics must normalize the case as needed. If the source provides only a numerical query type code and not a query type name, the parser must include a lookup table to enrich with this value.<br><br>Example: `AAAA`| | <a name=responsename></a>**DnsResponseName** | Optional | String | The content of the response, as included in the record.<br> <br> The DNS response data is inconsistent across reporting devices, is complex to parse, and has less value for source-agnostic analytics. Therefore the information model doesn't require parsing and normalization, and Microsoft Sentinel uses an auxiliary function to provide response information. For more information, see [Handling DNS response](#handling-dns-response).|
-| <a name=responsecodename></a>**DnsResponseCodeName** | Mandatory | Enumerated | The [DNS response code](https://www.iana.org/assignments/dns-parameters/dns-parameters.xhtml). <br><br>**Note**: IANA doesn't define the case for the values, so analytics must normalize the case. If the source provides only a numerical response code and not a response code name, the parser must include a lookup table to enrich with this value. <br><br> If this record represents a request and not a response, set to **NA**. <br><br>Example: `NXDOMAIN` |
+| <a name=responsecodename></a>**DnsResponseCodeName** | Alias | | Alias to [EventResultDetails](#eventresultdetails) |
| **DnsResponseCode** | Optional | Integer | The [DNS numerical response code](https://www.iana.org/assignments/dns-parameters/dns-parameters.xhtml). <br><br>Example: `3`| | <a name="transactionidhex"></a>**TransactionIdHex** | Recommended | String | The DNS query unique ID as assigned by the DNS client, in hexadecimal format. Note that this value is part of the DNS protocol and different from [DnsSessionId](#dnssessionid), the network layer session ID, typically assigned by the reporting device. | | **NetworkProtocol** | Optional | Enumerated | The transport protocol used by the network resolution event. The value can be **UDP** or **TCP**, and is most commonly set to **UDP** for DNS. <br><br>Example: `UDP`|
sentinel Network Normalization Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/network-normalization-schema.md
The descriptor `Dvc` is used for the reporting device, which is the local system
### Common fields > [!IMPORTANT]
-> Fields common to all schemas are described in the [ASIM schema overview](normalization-about-schemas.md#common). The following list mentions only fields that have specific guidelines for user management events.
+> Fields common to all schemas are described in the [ASIM schema overview](normalization-about-schemas.md#common). The following list mentions only fields that have specific guidelines for network session events.
> | Field | Class | Type | Description |
service-bus-messaging Service Bus Geo Dr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-geo-dr.md
Last updated 07/28/2021
Resilience against disastrous outages of data processing resources is a requirement for many enterprises and in some cases even required by industry regulations.
-Azure Service Bus already spreads the risk of catastrophic failures of individual machines or even complete racks across clusters that span multiple failure domains within a datacenter and it implements transparent failure detection and failover mechanisms such that the service will continue to operate within the assured service-levels and typically without noticeable interruptions when such failures occur. If a Service Bus namespace has been created with the enabled option for [availability zones](../availability-zones/az-overview.md), the risk is outage risk is further spread across three physically separated facilities, and the service has enough capacity reserves to instantly cope with the complete, catastrophic loss of the entire facility.
+Azure Service Bus already spreads the risk of catastrophic failures of individual machines or even complete racks across clusters that span multiple failure domains within a datacenter and it implements transparent failure detection and failover mechanisms such that the service will continue to operate within the assured service-levels and typically without noticeable interruptions when such failures occur. If a Service Bus namespace has been created with the enabled option for [availability zones](../availability-zones/az-overview.md), the outage risk is further spread across three physically separated facilities, and the service has enough capacity reserves to instantly cope with the complete, catastrophic loss of the entire facility.
The all-active Azure Service Bus cluster model with availability zone support is superior to any on-premises message broker product in terms of resiliency against grave hardware failures and even catastrophic loss of entire datacenter facilities. Still, there might be grave situations with widespread physical destruction that even those measures can't sufficiently defend against.
service-bus-messaging Service Bus Migrate Standard Premium https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-migrate-standard-premium.md
Title: Migrate Azure Service Bus namespaces - standard to premium description: Guide to allow migration of existing Azure Service Bus standard namespaces to premium Previously updated : 09/20/2021 Last updated : 03/09/2022 # Migrate existing Azure Service Bus standard namespaces to the premium tier
The downtime that is experienced by the application is limited to the time it ta
No, there are no code or configuration changes needed to do the migration. The connection string that sender and receiver applications use to access the standard Namespace is automatically mapped to act as an alias for the premium namespace.
-### What happens when I abort the migration?
+### How do I abort the migration?
The migration can be aborted either by using the `Abort` command or by using the Azure portal.
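+
+As a minimal sketch, assuming the Az.ServiceBus PowerShell module and placeholder resource names, the abort looks like this:
+
+```azurepowershell
+# Revert the standard-to-premium migration for the given standard namespace.
+Stop-AzServiceBusMigration -ResourceGroupName "<your-resource-group>" -Name "<your-standard-namespace>"
+```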
When it's complete, you see the following page:
:::image type="content" source="./media/service-bus-standard-premium-migration/abort3.png" alt-text="Image showing the Abort complete page.":::
+### What happens when I abort the migration?
When the migration process is aborted, it aborts the process of copying the entities (topics, subscriptions, and filters) from the standard to the premium namespace and breaks the pairing. The connection string isn't updated to point to the premium namespace. Your existing applications continue to work as they did before you started the migration.
site-recovery Azure To Azure How To Enable Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-how-to-enable-replication.md
You can modify the default target settings used by Site Recovery.
- If you want Linux VMs to be part of a replication group, ensure the outbound traffic on port 20004 is manually opened according to guidance for the specific Linux version. ![Screenshot that shows the Multi-VM consistency settings.](./media/azure-to-azure-how-to-enable-replication/multi-vm-settings.PNG)
-5. Click **View or Edit Capacity Reservation group assignment** to modify the capacity reservation settings.
+5. Click **View or Edit Capacity Reservation group assignment** to modify the capacity reservation settings. When failover is triggered, the new VM is created in the assigned Capacity Reservation Group.
+
+    Capacity Reservation lets you purchase capacity in the recovery region, and then fail over to that capacity. You can either create a new Capacity Reservation Group, or use an existing one. For more information on how capacity reservation works, see [the on-demand capacity reservation documentation](https://aka.ms/on-demand-capacity-reservations-docs).
+ ![Screenshot that shows the Capacity Reservation settings.](./media/azure-to-azure-how-to-enable-replication/capacity-reservation-edit-button.png)
-6. Click **Create target resource** > **Enable Replication**.
-7. After the VMs are enabled for replication, you can check the status of VM health under **Replicated items**
+1. Click **Create target resource** > **Enable Replication**.
+1. After the VMs are enabled for replication, you can check the status of VM health under **Replicated items**
>[!NOTE] >
spring-cloud How To Appdynamics Java Agent Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-appdynamics-java-agent-monitor.md
To activate an application through the Azure CLI, use the following steps.
--env APPDYNAMICS_AGENT_APPLICATION_NAME=<your-app-name> \ APPDYNAMICS_AGENT_ACCOUNT_ACCESS_KEY=<your-agent-access-key> \ APPDYNAMICS_AGENT_ACCOUNT_NAME=<your-agent-account-name> \
- APPDYNAMICS_AGENT_NODE_NAME=<your-agent-node-name> \
+ APPDYNAMICS_JAVA_AGENT_REUSE_NODE_NAME=true \
+ APPDYNAMICS_JAVA_AGENT_REUSE_NODE_NAME_PREFIX=<your-agent-node-name> \
APPDYNAMICS_AGENT_TIER_NAME=<your-agent-tier-name> \ APPDYNAMICS_CONTROLLER_HOST_NAME=<your-AppDynamics-controller-host-name> \ APPDYNAMICS_CONTROLLER_SSL_ENABLED=true \
To activate an application through the Azure portal, use the following steps.
1. Select **Apps** from the **Settings** section of the left navigation pane.
- :::image type="content" source="media/how-to-appdynamics-java-agent-monitor/azure-spring-cloud-app-list.png" alt-text="Azure portal screenshot showing the Apps section" lightbox="media/how-to-appdynamics-java-agent-monitor/azure-spring-cloud-app-list.png":::
+ :::image type="content" source="media/how-to-appdynamics-java-agent-monitor/azure-spring-cloud-app-list.png" alt-text="Azure portal screenshot showing the Apps section." lightbox="media/how-to-appdynamics-java-agent-monitor/azure-spring-cloud-app-list.png":::
1. Select the application to navigate to the **Overview** page.
- :::image type="content" source="media/how-to-appdynamics-java-agent-monitor/azure-spring-cloud-app-overview.png" alt-text="Azure portal screenshot the app's Overview page" lightbox="media/how-to-appdynamics-java-agent-monitor/azure-spring-cloud-app-overview.png":::
+ :::image type="content" source="media/how-to-appdynamics-java-agent-monitor/azure-spring-cloud-app-overview.png" alt-text="Azure portal screenshot the app's Overview page." lightbox="media/how-to-appdynamics-java-agent-monitor/azure-spring-cloud-app-overview.png":::
1. Select **Configuration** in the left navigation pane to add, update, or delete the environment variables of the application.
- :::image type="content" source="media/how-to-appdynamics-java-agent-monitor/azure-spring-cloud-app-configuration-env.png" alt-text="Azure portal screenshot showing the 'Environment variables' section of the app's Configuration page " lightbox="media/how-to-appdynamics-java-agent-monitor/azure-spring-cloud-app-configuration-env.png":::
+ :::image type="content" source="media/how-to-appdynamics-java-agent-monitor/azure-spring-cloud-app-configuration-env.png" alt-text="Azure portal screenshot showing the 'Environment variables' section of the app's Configuration page." lightbox="media/how-to-appdynamics-java-agent-monitor/azure-spring-cloud-app-configuration-env.png":::
1. Select **General settings** to add, update, or delete the JVM options of the application.
- :::image type="content" source="media/how-to-appdynamics-java-agent-monitor/azure-spring-cloud-app-configuration-general.png" alt-text="Azure portal screenshot showing the 'General settings' section of the app's Configuration page, with 'JVM options' highlighted" lightbox="media/how-to-appdynamics-java-agent-monitor/azure-spring-cloud-app-configuration-general.png":::
+ :::image type="content" source="media/how-to-appdynamics-java-agent-monitor/azure-spring-cloud-app-configuration-general.png" alt-text="Azure portal screenshot showing the 'General settings' section of the app's Configuration page, with 'JVM options' highlighted." lightbox="media/how-to-appdynamics-java-agent-monitor/azure-spring-cloud-app-configuration-general.png":::
## Automate provisioning
resource "azurerm_spring_cloud_java_deployment" "example" {
"APPDYNAMICS_AGENT_APPLICATION_NAME" : "<your-app-name>", "APPDYNAMICS_AGENT_ACCOUNT_ACCESS_KEY" : "<your-agent-access-key>", "APPDYNAMICS_AGENT_ACCOUNT_NAME" : "<your-agent-account-name>",
- "APPDYNAMICS_AGENT_NODE_NAME" : "<your-agent-node-name>",
+ "APPDYNAMICS_JAVA_AGENT_REUSE_NODE_NAME" : "true",
+ "APPDYNAMICS_JAVA_AGENT_REUSE_NODE_NAME_PREFIX" : "<your-agent-node-name>",
"APPDYNAMICS_AGENT_TIER_NAME" : "<your-agent-tier-name>", "APPDYNAMICS_CONTROLLER_HOST_NAME" : "<your-AppDynamics-controller-host-name>", "APPDYNAMICS_CONTROLLER_SSL_ENABLED" : "true",
To configure the environment variables in an ARM template, add the following cod
"APPDYNAMICS_AGENT_APPLICATION_NAME" : "<your-app-name>", "APPDYNAMICS_AGENT_ACCOUNT_ACCESS_KEY" : "<your-agent-access-key>", "APPDYNAMICS_AGENT_ACCOUNT_NAME" : "<your-agent-account-name>",
- "APPDYNAMICS_AGENT_NODE_NAME" : "<your-agent-node-name>",
+ "APPDYNAMICS_JAVA_AGENT_REUSE_NODE_NAME" : "true",
+ "APPDYNAMICS_JAVA_AGENT_REUSE_NODE_NAME_PREFIX" : "<your-agent-node-name>",
"APPDYNAMICS_AGENT_TIER_NAME" : "<your-agent-tier-name>", "APPDYNAMICS_CONTROLLER_HOST_NAME" : "<your-AppDynamics-controller-host-name>", "APPDYNAMICS_CONTROLLER_SSL_ENABLED" : "true",
This section shows various reports in AppDynamics.
The following screenshot shows an overview of your apps in the AppDynamics dashboard: The **Application Dashboard** shows the overall information for each of your apps, as shown in the following screenshots using example applications: - `api-gateway`
- :::image type="content" source="media/how-to-appdynamics-java-agent-monitor/appdynamics-dashboard-api-gateway.jpg" alt-text="AppDynamics screenshot showing the Application Dashboard for the example api-gateway app" lightbox="media/how-to-appdynamics-java-agent-monitor/appdynamics-dashboard-api-gateway.jpg":::
+ :::image type="content" source="media/how-to-appdynamics-java-agent-monitor/appdynamics-dashboard-api-gateway.jpg" alt-text="AppDynamics screenshot showing the Application Dashboard for the example api-gateway app." lightbox="media/how-to-appdynamics-java-agent-monitor/appdynamics-dashboard-api-gateway.jpg":::
- `customers-service`
- :::image type="content" source="media/how-to-appdynamics-java-agent-monitor/appdynamics-dashboard-customers-service.jpg" alt-text="AppDynamics screenshot showing the Application Dashboard for the example customers-service app" lightbox="media/how-to-appdynamics-java-agent-monitor/appdynamics-dashboard-customers-service.jpg":::
+ :::image type="content" source="media/how-to-appdynamics-java-agent-monitor/appdynamics-dashboard-customers-service.jpg" alt-text="AppDynamics screenshot showing the Application Dashboard for the example customers-service app." lightbox="media/how-to-appdynamics-java-agent-monitor/appdynamics-dashboard-customers-service.jpg":::
The following screenshot shows how you can get basic information from the **Database Calls** dashboard. You can also get information about the slowest database calls, as shown in these screenshots: The following screenshot shows memory usage analysis in the **Heap** section of the **Memory** page: You can also see the garbage collection process, as shown in this screenshot: The following screenshot shows the **Slow Transactions** page: You can define more metrics for the JVM, as shown in this screenshot of the **Metric Browser**: ## View AppDynamics Agent logs
storage Authorize Access Azure Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/authorize-access-azure-active-directory.md
# Authorize access to blobs using Azure Active Directory
-Azure Storage supports using Azure Active Directory (Azure AD) to authorize requests to blob data. With Azure AD, you can use Azure role-based access control (Azure RBAC) to grant permissions to a security principal, which may be a user, group, or application service principal. The security principal is authenticated by Azure AD to return an OAuth 2.0 token. The token can then be used to authorize a request against the Blob service. Note that this is only supported for API versions 2017-11-09 and later. For more information, see [Versioning for the Azure Storage services](/rest/api/storageservices/versioning-for-the-azure-storage-services#specifying-service-versions-in-requests).
+Azure Storage supports using Azure Active Directory (Azure AD) to authorize requests to blob data. With Azure AD, you can use Azure role-based access control (Azure RBAC) to grant permissions to a security principal, which may be a user, group, or application service principal. The security principal is authenticated by Azure AD to return an OAuth 2.0 token. The token can then be used to authorize a request against the Blob service.
Authorizing requests against Azure Storage with Azure AD provides superior security and ease of use over Shared Key authorization. Microsoft recommends using Azure AD authorization with your blob applications when possible to assure access with minimum required privileges.
The authorization step requires that one or more Azure RBAC roles be assigned to
Native applications and web applications that make requests to the Azure Blob service can also authorize access with Azure AD. To learn how to request an access token and use it to authorize requests for blob data, see [Authorize access to Azure Storage with Azure AD from an Azure Storage application](../common/storage-auth-aad-app.md).
+Authorizing blob data operations with Azure AD is supported only for REST API versions 2017-11-09 and later. For more information, see [Versioning for the Azure Storage services](/rest/api/storageservices/versioning-for-the-azure-storage-services#specifying-service-versions-in-requests).
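+
+As a minimal sketch of this flow, assuming you're already signed in with `Connect-AzAccount` and that `<account>` and `<container>` are placeholders for your own resources, you can acquire a token and use it to authorize a Blob service request:
+
+```azurepowershell
+# Acquire an OAuth 2.0 access token for Azure Storage.
+$token = (Get-AzAccessToken -ResourceUrl "https://storage.azure.com/").Token
+$headers = @{
+    Authorization  = "Bearer $token"
+    "x-ms-version" = "2017-11-09"  # Azure AD authorization requires this version or later.
+}
+# List the blobs in a container to verify that the token authorizes the request.
+Invoke-RestMethod -Uri "https://<account>.blob.core.windows.net/<container>?restype=container&comp=list" -Headers $headers
+```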
+
## Assign Azure roles for access rights

Azure Active Directory (Azure AD) authorizes access rights to secured resources through Azure RBAC. Azure Storage defines a set of built-in RBAC roles that encompass common sets of permissions used to access blob data. You can also define custom roles for access to blob data. To learn more about assigning Azure roles for blob access, see [Assign an Azure role for access to blob data](../blobs/assign-azure-role-data-access.md).
storage Blob Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blob-powershell.md
All blob data is stored within containers, so you'll need at least one container
```azurepowershell #Create a container object
-$container = New-AzStorageContainer -Name "demo-container" -Context $ctx
+$container = New-AzStorageContainer -Name "mycontainer" -Context $ctx
``` When you use the following examples, you'll need to replace the placeholder values in brackets with your own values. For more information about signing into Azure with PowerShell, see [Sign in with Azure PowerShell](/powershell/azure/authenticate-azureps).
The following example specifies a `-File` parameter value to upload a single, na
```azurepowershell #Set variables $path = "C:\temp\"
-$containerName = "demo-container"
+$containerName = "mycontainer"
$filename = "demo-file.txt" $imageFiles = $path + "*.png" $file = $path + $filename
The following example shows several approaches used to provide a list of blobs.
```azurepowershell #Set variables $namedContainer = "named-container"
-$demoContainer = "demo-container"
+$demoContainer = "mycontainer"
$containerPrefix = "demo" $maxCount = 1000
The following sample code provides an example of both single and multiple downlo
```azurepowershell #Set variables
-$containerName = "demo-container"
+$containerName = "mycontainer"
$path = "C:\temp\downloads\" $blobName = "demo-file.txt" $fileList = "*.png"
To read blob properties or metadata, you must first retrieve the blob from the s
The following example retrieves a blob and lists its properties. ```azurepowershell
-$blob = Get-AzStorageBlob -Blob "blue-moon.mp3" -Container "demo-container" -Context $ctx
+$blob = Get-AzStorageBlob -Blob "blue-moon.mp3" -Container "mycontainer" -Context $ctx
$properties = $blob.BlobClient.GetProperties() Echo $properties.Value ```
The example below first updates and then commits a blob's metadata, and then ret
```azurepowershell #Set variable
-$container = "demo-container"
+$container = "mycontainer"
$blobName = "blue-moon.mp3" #Retrieve blob
You can delete either a single blob or series of blobs with the `Remove-AzStorag
```azurepowershell #Create variables
-$containerName = "demo-container"
+$containerName = "mycontainer"
$blobName = "demo-file.txt" $prefixName = "file"
file3.txt BlockBlob 22 application/octet-stream 2021-12-17 00:14:24Z C
file4.txt BlockBlob 22 application/octet-stream 2021-12-17 00:14:25Z Cool True ```
-## Restore a soft-deleted blob
-As mentioned in the [List blobs](#list-blobs) section, you can configure the soft delete data protection option on your storage account. When enabled, it's possible to restore containers deleted within the associated retention period.
+## Restore a deleted blob
+As mentioned in the [List blobs](#list-blobs) section, you can configure the soft delete data protection option on your storage account. When enabled, it's possible to restore blobs deleted within the associated retention period. You may also use versioning to maintain previous versions of your blobs for recovery and restoration.
-The following example explains how to restore a soft-deleted blob with the `BlobBaseClient.Undelete` method. Before you can follow this example, you'll need to enable soft delete and configure it on at least one of your storage accounts.
+If blob versioning and blob soft delete are both enabled, then modifying, overwriting, deleting, or restoring a blob automatically creates a new version. The method you'll use to restore a deleted blob will depend upon whether versioning is enabled on your storage account.
+
+The following code sample restores all soft-deleted blobs or, if versioning is enabled, restores the latest version of a blob. It first determines whether versioning is enabled with the `Get-AzStorageBlobServiceProperty` cmdlet.
+
+If versioning is enabled, the `Get-AzStorageBlob` cmdlet retrieves a list of all uniquely named blobs and their versions. Next, the versions of each blob are retrieved and ordered from newest to oldest. If the newest version isn't the latest (active) version, meaning the blob has been deleted, the `Copy-AzStorageBlob` cmdlet is used to make an active copy of that newest version.
+
+If versioning is disabled, the `BlobBaseClient.Undelete` method is used to restore each soft-deleted blob in the container.
+
+Before you can follow this example, you'll need to enable soft delete or versioning on at least one of your storage accounts.
To learn more about the soft delete data protection option, refer to the [Soft delete for blobs](soft-delete-blob-overview.md) article. ```azurepowershell
-#Create variables
-$container = "demo-container"
-$prefix = "file"
-
-#Retrieve all blobs, filter deleted resources, restore deleted
-$blobs = Get-AzStorageBlob -Container "demo-container" -Prefix "file" -Context $ctx -IncludeDeleted
-Foreach($blob in $blobs)
+$accountName = "mystorageaccount"
+$groupName = "myResourceGroup"
+$containerName = "mycontainer"
++
+$blobSvc = Get-AzStorageBlobServiceProperty `
+ -StorageAccountName $accountName `
+ -ResourceGroupName $groupName
+
+# If soft delete is enabled
+if($blobSvc.DeleteRetentionPolicy.Enabled)
+{
+ # If versioning is enabled
+ if($blobSvc.IsVersioningEnabled -eq $true)
+ {
+ # Set context
+ $ctx = New-AzStorageContext `
+ -StorageAccountName $accountName `
+ -UseConnectedAccount
+
+ # Get all blobs and versions using -Unique
+ # to avoid processing duplicates/versions
+ $blobs = Get-AzStorageBlob `
+ -Container $containerName `
+ -Context $ctx -IncludeVersion | `
+ Where-Object {$_.VersionId -ne $null} | `
+ Sort-Object -Property Name -Unique
+
+ # Iterate the collection
+ foreach ($blob in $blobs)
+ {
+
+ # Process versions
+ if($blob.VersionId -ne $null)
+ {
+
+ # Get all versions of the blob, newest to oldest
+ $delBlob = Get-AzStorageBlob `
+ -Container $containerName `
+ -Context $ctx `
+ -Prefix $blob.Name `
+ -IncludeDeleted -IncludeVersion | `
+ Sort-Object -Property VersionId -Descending
+
+ # Verify that the newest version is NOT the latest (that the version is "deleted")
+ if (-Not $delBlob[0].IsLatestVersion)
+ {
+ $delBlob[0] | Copy-AzStorageBlob `
+ -DestContainer $containerName `
+ -DestBlob $delBlob[0].Name
+ }
+
+ #Dispose the temporary object
+ $delBlob = $null
+
+ }
+
+ }
+
+ }
+
+ # Otherwise (if versioning is disabled)
+ else
+ {
+ $blobs = Get-AzStorageBlob `
+ -Container $containerName `
+ -Context $ctx -IncludeDeleted | `
+ Where-Object {$_.IsDeleted}
+ foreach($blob in $blobs)
+ {
+ if($blob.IsDeleted) { $blob.BlobBaseClient.Undelete() }
+ }
+ }
+}
+else
{
- if($blob.IsDeleted) { $blob.BlobBaseClient.Undelete() }
+ echo "Sorry, the delete retention policy is not enabled."
}
```
- [Run PowerShell commands with Azure AD credentials to access blob data](./authorize-data-operations-powershell.md)
- [Create a storage account](../common/storage-account-create.md?tabs=azure-portal&toc=%2fazure%2fstorage%2fblobs%2ftoc.json)
-- [Manage blob containers using PowerShell](blob-containers-powershell.md)
+- [Manage blob containers using PowerShell](blob-containers-powershell.md)
storage Network File System Protocol Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/network-file-system-protocol-known-issues.md
When you enable NFS 3.0 protocol support, some Blob Storage features will be fully supported, but others might be only partially supported or not yet supported at all.
To see how each Blob Storage feature is supported in accounts that have NFS 3.0 support enabled, see [Blob Storage feature support for Azure Storage accounts](storage-feature-support-in-storage-accounts.md).
+> [!NOTE]
+> The static website feature is an example of a partially supported feature: its configuration page doesn't yet appear in the Azure portal for accounts that have NFS 3.0 support enabled, so you can enable static websites only by using PowerShell or the Azure CLI.
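For example, here's a minimal sketch of enabling the feature without the portal, assuming an existing account named `mystorageaccount` and hypothetical document names:

```azurepowershell
# Enable static websites outside the portal (document names are hypothetical)
$ctx = New-AzStorageContext -StorageAccountName "mystorageaccount" -UseConnectedAccount
Enable-AzStorageStaticWebsite -Context $ctx -IndexDocument "index.html" -ErrorDocument404Path "404.html"
```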
+
## See also

- [Network File System (NFS) 3.0 protocol support for Azure Blob Storage](network-file-system-protocol-support.md)
storage Secure File Transfer Protocol Host Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/secure-file-transfer-protocol-host-keys.md
When you connect to Blob Storage by using an SFTP client, you might be prompted to trust a host key. You can verify that host key by finding its fingerprint in the list below.
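If you'd rather check a fingerprint yourself, you can recompute it from the value in the Public key column: the SHA 256 fingerprint is the unpadded Base64 digest of the decoded key blob. A minimal sketch (the placeholder is to be replaced with a value from the table that follows):

```azurepowershell
# Recompute a host key's SHA 256 fingerprint from its base64-encoded public key
$publicKey = "<value from the Public key column>"   # placeholder
$keyBytes  = [Convert]::FromBase64String($publicKey)
$digest    = [System.Security.Cryptography.SHA256]::Create().ComputeHash($keyBytes)
[Convert]::ToBase64String($digest).TrimEnd('=')     # matches the fingerprint column
```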
> [!div class="mx-tdBreakAll"]
> | Region | Host key type | SHA 256 fingerprint <sup>1</sup> | Public key |
> |---|---|---|---|
-> | europewest | rsa-sha2-256 | `IeHrQ+N6WAdLMKSMsJiML4XqMrkF1kyOiTeTjh1PFyc=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDZL63ZKHrWlwN8gkPvq43uTh88n0V6GwlTH2/sEpIyPxN56/gpgWW6aDyzyv6PIRI/zlLjZNdOBhqmEO+MhnBPkAI8edlvFoVOA6c/ft5RljQOhv+nFzgELyP8qAlZOi1iQHx7UeB1NGkQ5AIwNIkRDImeft9Iga+bDF6yWu60gY43QdGQCTNhjglNuZ6lkGnrTxQtPSC01AyU51V1yXKHzgaTByrA4tK6cGtwjFjMBsnXtX2+yoyyuQz/xNnIN63awqpQxZameGOtjAYhLhtEgl39XEIgvpAs1hXDWcSEBSMWP4z04U/tw2R5mtorL3QU1CmokWmuAQZNQcLSLLlt` |
-> | europewest | rsa-sha2-512 | `7+VdJ21y+HcaNRZZeaaBtk1AjkCNK4weG5mkkoyabi0=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDYAmiv6Tk/o02McJi79dlIdPLu1I5HfhsdPlUycW+t1zQwZL+WaI182G6SY728hJOGzAz51XqD4e5yueAZYjOJwcGhHVq6MfabbhvT1sxWQplnk3QKrUMRXnyuuSua1j+AwXsm957RlbW9bi1aQKdJgKq3y2yz+hqBS76SX9d8BxOHWJl5KwCIFaaJWb0u32W2HGb9eLDMQNipzHyANEQXI9Uq2qRL7Z20GiRGyy7VPP6AbPYTprrivo3QpYXSXe9VUuuXA9g3Bz3itxmOw6RV9aGQhCSp22BdJKDl70FMxTm1d87LEwOQmAViqelEeY+DEowPHwVLQs3rIJrZHxYV` |
-> | europewest | ecdsa-sha2-nistp256 | `0WNMHmCNJE1YFBpHNeADuT5h+PfJ/jJPtUDHCxCSrO0=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBANx85rJLXM8QZi33y8fzvUbH+O5Cujn0oJFDGQrwhGJQTHsjIhd5bhFFgDvJ64/4SGrtP1LHDKLwr9+ltzgxIE=` |
-> | europewest | ecdsa-sha2-nistp384 | `90g+JfQChjbb3OOV0YIGSVTkNotnefCV2NcSuMdPrzY=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNJgtrLFy2zsyhNvXlwHUmDBw1De++05pr1ZTxOIVnB17XZix0Euwq/wZTs0cE01c5/kYdAp+gQHEz594e7AQXBTCTqUiIS1a4+IXzfiCcShVfMsLFBvzjm9Yn8qgW9Ofg==` |
-> | useast | rsa-sha2-256 | `F6pNN5Py68/1hVRGEoCwpY5H7vWhXZM/4L442lY4ydE=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDAiUB94zwLf0e/++OeiAjE0X7Od2nuqyLyAqpOb7nfQUAOWyqgRL04yaan6R2Ir2YtI0FRwA6yRETUBf2+NuVhIONgLNsgPw3RakL1BUqAEzZAyF4sOjWnYE5/s/1KmYOE052SefzMciqjgkBV2+YrPW1CLivNhL4d1vuQh05kADLgHJiAVD6BqSM7Z6VoLhW+hfP4JklyQAojCF6ejXW7ZGWdqQGKLCUhdaOPSRAxjOmr9gZxJ69OvdJT2Cy6KO1YQt2gY2GbPs+4uAeNrz40swffjut4zn1NILImpHi8PTM+wcGYzbW4Nn7t5lhvT9kmX9BkSYXLVTlI9p1neT9t` |
-> | useast | rsa-sha2-512 | `MIpoRIiCtEKI23MN+S2bLqm5GKClzgmRpMnh90DaHx8=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC8Ut7Rq7Vak26F29czig5auq334N9mNaJmdtWoT32uDzWjZNw/N8uxXQS51oSeD7c0oXWIMBklH0AS8JR1xvMUGVnv5aRXwubicQ6z4poG5RSudYDA3BjMs61LZUKZH/DRj7qR/KUBMNieT1X+0DbopZkO9etxXdKx+VqJaK3fRC5Zflxj5Z9Stfx/XlaBXptDdqnInHZAUbZxnNziPYrBOuXYl5/Cd6W4lR7dBsMCbjINSIShvrhPpVfd3qOv/xPpU172nqkOx2VsV4mrfqqg62ZdcenLJDYsiXd/AVNUAL+dvzmj1/3/yVtFwadA2l83Em6CgGpqUmvK6brY3bPh` |
-> | useast | ecdsa-sha2-nistp256 | `ixDeCdmQOB9ROqdJiVdXyFVsRqLmJJUb2M4shrWj8gI=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNrdcVT12fftnDcjYL8K3zLX3bdMwYLjNu2ZJ1kUZwpVHSjNc+1KWB2CGHca+cMLuPSao4TxjIX0drn9/o+GHMQ=` |
-> | useast | ecdsa-sha2-nistp384 | `DPTC6EIORrsxzpGt6IZzAN67nlZUXzg5ANQ3QGz987Y=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEP3CUvPVWNVnFuojR43KRxTQt1xiClbgDzqN/s9F5luivP+Gh0QrK5UHf6diEju4ZQ9k2O10MEDs6c46g4fT56rY8CQkeBsaaBq8WYLRhSQsFZ6SZuw14oFNodniAO33g==` |
-> | indiawest | rsa-sha2-256 | `Fkh7r/tOJy1cZC6nI75VsO1sS3ugMvJ56U02uGGJHFo=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDHCzLI51bbBLWK7TcXvXvEHaLQMzuYKEwyoS1/oC5EN3NsLZl4BV5d2zbLETFDjsky/btWiAkCvHuzxealxGgzw69ll90aWSOEY/epaYJvueOTvGy4+rJY8Xyc64VdHml8n3EEZTQmBEi3Tn6bViLwvC0iT2/noLeYGXh0/NL0T3BeblwSm3cNXyemkBQO/zyYcchqRtKJu8w8brYVZYFINlTeBu4LyDP1k9DMtuewGoeH8SmvDxUmiIGh2VDlPmXe3IkMR0nSgz10jMl3F0fei7ZJ+8zdCVbBuIqsJf+koJa/q9npstWGMFddMX3nR0A3HnG4v5aCAGVmfl11iC0J` |
-> | indiawest | rsa-sha2-512 | `xDtcgfElRGUUgWlU9tRmSQ58WEOKoUSKrHFDruhgDIM=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQCXehufp18nKehU4/GOWMkXJ87t22TyG5bNdVCPO2AgLJ88FBwZJvDurLgdPRDRuJImysbD7ucwk2WoDNC39q0TWtCRyIKTXfwvPmyG+JZKkT+/QfslMqiAXAPIQtVr2iXTeuHmn3tk+PksGXnTwb3oFV4wv40Wi1CbwvtCkUsBSujq4AR7BqksPnAqPrAyw+fFR3w4iD3EdtHBdIVULez3lkpMH/d04rf2bjh6lpI9YUdcdAmTGYeMtsf/ef8z0G2xpN2aniLCoCPQP85cooKq7YEhBDR8Lzem3vWnqS3gPc4rUrCJoDkGm0iL/4GCWRyG+RPi70WSdVysJ+HIm0Ct` |
-> | indiawest | ecdsa-sha2-nistp256 | `t+PVPMSVEgQ3FPNploXz7mO25PFiEwzxutMjypoA2DM=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCzR5dhW3wfN5bRqLfeZ2hlj7iRerE4lF5jk+iQl6HJHKXIsH6lQ63Wyg7wOzF65jNnvubAJoEmzyyYig+D3A+w=` |
-> | indiawest | ecdsa-sha2-nistp384 | `pLODd+3JNeLVcPYYnI0rSWoemhMWws0jLc3J8cV6+GU=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL2PEknfZpPAT4ejqBJW8InHPELP1G7hGvroW5J3evJr8Qrr//voa6aH8ZF7Ak0HcVVOOCSzfjcEpZYjjrXrzuCOekU48DkSF8i1kKqV4iXejNNQ1ohDCbsiAyoxQMY9cA==` |
-> | useast2 | rsa-sha2-256 | `K+QQglmdpev3bvEKUgBTiOGMxwTlbD7gaYnLZhPfe1c=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDOA2Aj1tIG/TUXoVFHOltyW8qnCOtm2gq2NFTdfyDFw3/C4jk2HHQf2smDX54g8ixcLuSX3bRDtKRJItUhrKFY6A0AfN1+r46kkJJdFjzdcgi7C3M0BehH7HlHZ7Fv2u01VdROiXocHpNOVeLFKyt516ooe6b6bxrdc480RrgopPYpf6etJUm8d4WrDtGXB3ctip8p6Z2Z/ORfK77jTeKO4uzaHLM0W7G5X+nZJWn3axaf4H092rDAIH1tjEuWIhEivhkG9stUSeI3h6zw7q9FsJTGo0mIMZ9BwgE+Q2WLZtE2uMpwQ0mOqEPDnm0uJ5GiSmQLVyaV6E5SqhTfvVZ1` |
-> | useast2 | rsa-sha2-512 | `UKT1qPRfpm+yzpRMukKpBCRFnOd257uSxGizI7fPLTw=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC/HCjYc4tMVNKmbEDT0HXVhyYkyzufrad8pvGb3bW1qGnpM1ZT3qauJrKizJFIuT3cPu43slhwR/Ryy79x6fLTKXNNucHHEpwT/yzf5H6L14N+i0rB/KWvila2enB2lTDVkUW50Fo+k5U/JPTn8vdLPkYJbtx9s0s3RMwaRrRBkW6+36Xrh0h7rxV5LfY/EI1331f+1bgNM7xD59D3U76OafZMh5VfSbCisvDWyIPebXkOMF/eL8ATlaOfab0TAC8lriCkLQolR+El9ARZ69CJtKg4gBB3IY766Ag3+rry1/J97kr4X3aVrDxMps1Pq+Q8TCOf4zFDPf2JwZhUpDPp` |
-> | useast2 | ecdsa-sha2-nistp256 | `bouiC5HdtURUU19RJbym8R94fbMOTw/bUxFUkoAByoI=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJshJI18IECu6neLrash/Q622MAXO07C+hbIOiVPC6M/ZIJM8HyYvQEh4DKI1CMEaeAIs/HA905QKeU/syvt7QI=` |
-> | useast2 | ecdsa-sha2-nistp384 | `vWnPlGaQOY4LFj9XSQ2qN/NMF92+UOfKPjGNSPA2bOg=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBByJNAblwxCNVqedg5FcdbdwiuzTMVEWj/uF3uzI8wp890Xv2M4H+aMTpeItxgQsuiQCptgITsO+XCf2dBTHOGWpd90QtvcznzHyy/FEWVAKWs9brvyaNVe82c4TOFqYRg==` |
-> | uswest | rsa-sha2-256 | `kqxoK1j6vHU8o8XyaiV87tZFEX9nE6o/yU0lOR5S6lE=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDAd7gh0mFw3iRAKusX3ai6OE0KO5O2CMlezvZOAJEH88fzWQ/zp0RZ1j7zJ8sbwslA6v3oRQ7Cx9ptAMTrL8SW4CZYcwETlfL3ZP39Llh+t7rZovIgvCDU0tijYvsa1W0T9XZgcwWEm6cWQzdm+i9U0KUdh7KgsubPAhGQ7xrOVEqgB9MYMofSSdIfKMt8K7xOSam6mhWiTSSIEGgeMTIZ9TgXkgAEJ8TNl3QHRoM8HxMnRFjtkXbT3EeSg6VOqi69Cei3hrmS64qvdzt2WwoTQwTFjxHocWGgA+Ow53wqWt8iYgOudpoB1neXiIcF4p0CN8zjvXNiRbZPg9lXFM9R` |
-> | uswest | rsa-sha2-512 | `/PP9B/9KEa+QUyump1Yt05Lfk0LY/eyQhHyojh5zMEg=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC8R8bFe8QSTYKK+4evMpnlB8y0rQCqikTyviqD4rva7i4f1f/JxmptJQ/wkipHPXk6E7Du6oK/iJaZ+wjZ03tNIWwAGn0SdlTvWuwQwigK9k3JRlLYO+Uj/SSnBQWf8Dmp+cA6RDalteHpM2KwaUK65BHYC75bWKHaNntadTIU4kQ0BvFzmNRcJWL6otd5RkdYXjJWHu21zcv4EpRHGmVCD0na+UWce6UGDbLDtsZVJd2Q7IyeTrXpWxEO0fFN2Gu9gINfWC1FpuffGaqWSa4nK69n39lUKz4PUdu6Owmd9aNbLXknvtnW4+xGbX6oQa8wYulINHjdNz8Ez6nOiNZ9` |
-> | uswest | ecdsa-sha2-nistp256 | `peqBbfcWZRW4QzLi69HicUUTwdtfW7/E9WGkgRMheAo=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBcTos/zmSn15kzn1Lk8N8QQh9hzOwqOSOf/bCpu6AQbWJtvjf1pHMuZlS2PpIV7G+/ImxXGpqpHqQlcD+Lg8Ro=` |
-> | uswest | ecdsa-sha2-nistp384 | `sg63Cc3Mvnn9hoapGaEuZByscUEMa+xgw/3ruz49szk=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGzX2t9ALjFwpcBLm4B0+/D47PMrDya0KTva5w4E5GZNb5OwQujQvtUS2owd8BcKdMBeXx2S7qbcw6cQFffRxE+ZTr4J+3GoCmDM0PqraXxJHBRxyeK6vlrSR8ojRzIApA==` |
-> | useast2euap | rsa-sha2-256 | `dkP64W5LSbRoRlv2MV02TwH5wFPbV6D3R3nyTGivVfk=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC3PqLDKkkqUXrZSAbiEZsI6T1jYRh5cp+F5ktPCw7aXq6E9Vn2e6Ngu+vr+nNrwwtHqPzcZhxuu9ej2vAKTfp2FcExvy3fKKEhJKq0fJX8dc/aBNAGihKqxTKUI7AX5XsjhtIf0uuhig506g9ZssyaDWXuQ/3gvTDn923R9Hz5BdqQEH9RSHKW+intO8H4CgbhgwfuVZ0mD4ioJKCwfdhakJ2cKMDfgi/FS6QQqeh1wI+uPoS7DjW8Zurd7fhXEfJQFyuy5yZ7CZc7qV381kyo/hV1az6u3W4mrFlGPlNHhp9TmGFBij5QISC6yfmyFS4ZKMbt6n8xFZTJODiU2mT1` |
-> | useast2euap | rsa-sha2-512 | `M39Ofv6366yGPdeFZ0/2B7Ui6JZeBUoTpxmFPkwIo4c=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC+1NvYoRon15Tr2wwSNGmL+uRi7GoVKwBsKFVhbRHI/w8oa3kndnXWI4rRRyfOS7KVlwFgtb/WolWzBdKOGVe6IaUHBU8TjOx2nKUhUvL605O0aNuaGylACJpponYxy7Kazftm2rV/WfxCcV7TmOGV1159mbbILCXdEWbHXZkA3qWe4JPGCT+XoEzrsXdPUDsXuUkSGVp0wWFI2Sr13KvygrwFdv4jxH1IkzJ5uk6Sxn0iVE+efqUOmBftQdVetleVdgR9qszQxxye0P2/FuXr0S+LUrwX4+lsWo3TSxXAUHxDd8jZoyYZFjAsVYGdp0NDQ+Y6yOx5L9bR6whSvKE1` |
-> | useast2euap | ecdsa-sha2-nistp256 | `X+c1NIpAJGvWU31UJ3Vd2Os4J7bCfgvyZGh35b2oSBQ=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBK+U6CE6con74cCntkFAm6gxbzGxm9RgjboKuLcwBiFanNs/uYywMCpj+1PMYXVx/nMM4vFbAjEOA20fJeoQtN8=` |
-> | useast2euap | ecdsa-sha2-nistp384 | `Q3zIFfOI1UfCrMq6Eh7nP1/VIvgPn3QluTBkyZ2lfCw=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDWRjO+e8kZpalcdg7HblZ4I3q9yzURY5VXGjvs5+XFuvxyq4CoAIPskCsgtDLjB5u6NqYeFMPzlvo406XeugO4qAui+zUMoQDY8prNjTGk5t7JVc4wYeAWbBJ2WUFyMrQ==` |
-> | australiac | rsa-sha2-256 | `q2pDjwwgUuAMU3irDl2D+sbH8wQpPB5LHBOFFzwM9Sk=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDnqOrNklxmyreRYe7N72ylBCensxxPTBWX/CfbdbGfEbcGRtMGHReeojkvf4iJ7mDMZRzecgYxZ9o2bwTH9UImNkuZTsFNH6APuJ075WyxoDgdBX1UAQ3eE6BrCNI0BcwLakU9lq0rNhmxMpt/quBXxxWbRieKR9liTOg5CGSqoUPo7TpwaZQBltJCEf7rN5wGUlHV49iuiJIasSldYT6F1c3vS4bJb2sdIvVnKVLq+yTMzaPzWn34BD+KHx/pkB+s7/vQtdMfBBEdgEdPVvMPsyXtIKhx4Q79LnfZT19RDY8KW1mJrbPo67oEcjJYTXSZTKysjCUNmNNrnXvp6sHd` |
-> | australiac | rsa-sha2-512 | `+tdLVAC4I+7DhQn9JguFBPu0/Hdt5Ru2zjuOOat+Opw=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQCnd0ETMwpU8w7uwWu1AWDv6COIwLKMmxXu+/1rR429cNXuPrBkMldUfI7NKVaiwnu1kLPuJsgUDkvs/fc7lxx2l5i6mYBWJOXcWzAfXSBfB1a+1SK+2tDPYT3j4/W/KRW74DFPokWTINre22UVc+8sbzkmdtX/FzZdVcqI4+xJSjwdsp2hbzcsVWkxWhrFzKmBU40m5E/YwKQwAcbkzmX6AN5O8s66TQs2uPkRuTItDWI3ShW7QzW05jb6W8TeYdkouZ5PY0Yz/h3/oysFzo4VaUc0y3JP98KRWNXPiBrmvphpKnU1TQrjvVkYEsiCBHMOUnNVHdR1oIHd2zPRneK5` |
-> | australiac | ecdsa-sha2-nistp256 | `m2HCt3ESvMLlVBMwuo9jsQd9hJzPc/fe0WOJcoqO3RA=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBElXRuNJbnDPWZF84vNtTjt4I/842dWBPvPi2fkgOV//2e/Y9gh0koVVAYp6MotNodg4L9MS7IfV9nnFSKaJW3o=` |
-> | australiac | ecdsa-sha2-nistp384 | `uoYLwsgkLp4D5diAulDKlLb7C5nT4gMCyf9MFvjr7qg=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBARO/VI5TyirrsNZZkI2IBS0TelywsJKj71zjBGB8+mmki+mmdtooSTPgH0zmmyWb/z3iJG+BnEEv/58zIvJ+cXsVoRChzN+ewvsqdfzfCqVrzwyro52x5ymB08yBwDYig==` |
-> | usnorth | rsa-sha2-256 | `9AV5CnZNkf9nd6WO6WGNu7x6c4FdlxyC0k6w6wRO0cs=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDJTv+aoDs1ngYi5OPrRl1R6hz+ko4D35hS0pgPTAjx/VbktVC9WGIlZMRjIyerfalN6niJkyUqYMzE4OoR9Z2NZCtHN+mJ7rc88WKg7RlXmQJUYtuAVV3BhNEFniufXC7rB/hPfAJSl+ogfZoPW4MeP/2V2g+jAKvGyjaixqMczjC2IVAA1WHB5zr/JqP2p2B6JiNNqNrsFWwrTScbQg0OzR4zcLcaICJWqLo3fWPo5ErNIPsWlLLY6peO0lgzOPrIZe4lRRdNc1D//63EajPgHzvWeT30fkl8fT/gd7WTyGjnDe4TK3MEEBl3CW8GB71I4NYlH4QBx13Ra20IxMlN` |
-> | usnorth | rsa-sha2-512 | `R3HlMn2cnNblX4qnHxdReba31GMPphUl9+BQYSeR6+E=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDeM6MOS9Av7a5PGhYLyLmT09xETbcvdt9jgNE1rFnZho5ikzjzRH4nz60cJsUbbOxZ38+DDyZdR84EfTOYR2Fyvv08mg98AYXdKVWMyFlx08w1xI4vghjN2QQWa8cfWI02RgkxBHMlxxvkBYEyfXcV1wrKHSggqBtzpxPO94mbrqqO+2nZrPrPFkBg4xbiN8J2j+8c7d6mXJjAbSddVfwEbRs4mH8GwK8yd/PXPd1U0+f62bJRIbheWbB+NTfOnjND5XFGL9vziCTXO8AbFEz0vEZ9NmxfFTuVVxGtJBePVdCAYbifQbxe/gRTEGiaJnwDRnQHn/zzK+RUNesJuuFJ` |
-> | usnorth | ecdsa-sha2-nistp256 | `6xMRs7dmIdi3vUOgNnOf6xOTbF9RlGk6Pj7lLk6z/bM=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJw1dXTy1YqYLJhAo1tB+F5NNaimQwDI+vfEDG4KXIFfS83mUFqr9VO9o+zgL3+0vTrlWQQTsP/hLHrjhHd9If8=` |
-> | usnorth | ecdsa-sha2-nistp384 | `0cJkHHeTNQpl7ewPTZwug5+/hfebiH6Yxl2rOTtYZQo=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG8aqja46A9Q5PmhPzhxcklcJGp+CiC3MCjVR6Qdl9oQGMywOHfe+kCD72YBKnA6KNudZdx7pUUB/ZahvI5vwt4bi593adUMTY1/RlTRjplz6c2fSfwSO/0Ia4+0mxQyjw==` |
-> | brazilsouth | rsa-sha2-256 | `qNzxx1kid41tZGcmbbyZrzlCIPJ9TFa20pUqvRbcjro=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC04g5K8emsS4NpL6jCT3wlpi6Msb5ax6QGlefO3IKp3wDKWAEqN+PvqBdrNp1PsitTKeyRSCLofq9k2wzeAMzV2n3UVqmUpNf9Q0Yd8SuXPhKG6VhqG2hL5+ztrlVTMI2Ak18SLaAEA1x7y9Z1lkEYGvCzJQaAw5EG8kd7XHGaI9nSCJ7RFOdJQF/40gq8z6E+bWW9Xs55JpWQ0i44i/ZvQUEiv5nyAa7D86y23wk1pTIFkRT99Kwdua0GtyUlcgCRDDTOzsCTn4qTo/MAF1Uq/ol4G0ZxwKnAEkazSZ1c+zEmh6GJNwT64nWBZ+pt5Rp3ugW+iDc/mIlXtxEV2k7V` |
-> | brazilsouth | rsa-sha2-512 | `KAmGT8A7nRdxxQD7gulgmGTJvRhRdWPVDdagGCDmJug=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC6W0FiaS21Dze6Sr6CTB8rLBu1T1Zej+11m7Kt283PSkQNYmjDDPUx0wSgylHoElTnFcXG+eFMznnjxDqkH+GnYBmlpW3nxxdTYD/MrdP4dX9ibPCFjDupIDJ4thv+9xWCw/0RpGc1NlUx2YmenDVMFJtYqjB1IDa2UUEeUHeQa1qmiBs1tbBQdlws1MCFnfldliB5H+cO4xnbAUjOlaa01k7GKqPf0H75+R83VcIcFw8hSuCvgMT+86H6jRRfqiIzE7WGbQBTPQs0rGcvxcGR3oGOmtB2UmOD232XTEk+sG3q2RxtPKWTz8wz1Tt2c1BOxmtuXTtzXnigZjB2t8y5` |
-> | brazilsouth | ecdsa-sha2-nistp256 | `rbOdmodk5Aq+nxHt04TN7g6WyuwbW5o+sDbj86l6jp8=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNFqueXBigofrM5Kp2eA4wd4XxHcwcNgNFWGgEd0EoNdKWt9NroU47bN43f79Y5vPiSa4prKW1ccMBl40nNN4S4=` |
-> | brazilsouth | ecdsa-sha2-nistp384 | `cenQeg58JZ+Dvu3AC7P7lC/Jq7V3+YPaS37/BBn3OlQ=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHBhfnlfXV9/m6ZgOoqmSgX3VPnRdTOraBhMv8v7lEN1lWwyBpiWepu52KS0jR1RhttfXB+n+p6i2+9djJ1zT7fHq4sNn/d/3k2J6IjJlymZ32GwPvDk+fGefupUtabvRQ==` |
-> | ukwest | rsa-sha2-256 | `2NQ5z6fQjt4SZKdViPS+I2kX7GoXOx3fVE81t8/BCVE=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDNq0xtA0tdZmkSDTNgA05YLH5ZuLFKD7RbruzuL4KVU2In0DQUtJkVqRXIaB3f+cEBTs9QrMUqolOdCCunhzosr5FvCO3I6HZ8BLnVNshtUBf2C1aT9yonlkdiIyc2pCHonds8vHKC4SBNu3Jr584bhyan8NuzJqzPCnKTdHwyWjf8m5mB4liK/ka4QGiaLLYTAjCCXmaXXOVZI2u0yDcJQXAjAP5niCOQaPHgdGk6oSjs0YKB29V+lIdB8twUnBaJA9jgECM2brywksmXrAyUPnIFD6AVEiFZsUH3iwgFAH7O6PLZTOSgJuu994CNwigrOXTbABfpH2YMjvUF///5` |
-> | ukwest | rsa-sha2-512 | `MrfRlQmVjukl5Q5KvQ6YDYulC3EWnGH9StlLnR2JY7Q=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQClZODHJMnlU29q0Dk1iwWFO0Sa0whbWIvUlJJFLgOKF5hGBz9t9L6JhKFd1mKzDJYnP9FXaK1x9kk7l0Rl+u1A4BJMsIIhESuUBYq62atL5po18YOQX5zv8mt0ou2aFlUDJiZQ4yuWyKd44jJCD5xUaeG8QVV4A8IgxKIUu2erV5hvfVDCmSK07OCuDudZGlYcRDOFfhu8ewu/qNd7M0LCU5KvTwAvAq55HiymifqrMJdXDhnjzojNs4gfudiwjeTFTXCYg02uV/ubR1iaSAKeLV649qxJekwsCmusjsEGQF5qMUkezl2WbOQcRsAVrajjqMoW/w1GEFiN6c70kYil` |
-> | ukwest | ecdsa-sha2-nistp256 | `bNYdYVgicvl1yaOR/1xLqocxT8bamjezGFqFdO6Od0I=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKWKoJuxB3EO5bKjxnviF+QTv3PBSViD1SNKbfj0qYfAjObQKZuiqcFYeDoPrkhk9jfan2jU6oCEN4+KDwivz3k=` |
-> | ukwest | ecdsa-sha2-nistp384 | `6V8vLtRf6I5BjuLgToJ1cROM72UqPD+SC0N9L9WG6PA=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA+7R/5qSfsXACmseiErhfwhiE7Rref/TNHONqiFlAZq2KCW3w3u8+O4gpJEflibMFP/Mj5YeoygdPUwflFNcST9K+vnkEL3/lqzoGOarGBYIKtEZwixv3qlBR+KyoRUkw==` |
-> | uswestcentral | rsa-sha2-256 | `aSNxepEhr3CEijxbPB4D5I+vj8Um7OO6UtpzJ/iVeRg=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDDWmd8Zd7dCfamYd/c1i4wYhhRnaIgUmK7z/o8ehr4bzJgWRbjrxMtbkD2y7ImjE2NIBG5xglz6v9z4CFNjCKUmoUl7+Le3Rsc5sJ/JmHAmEXb0uiDMzhq9f6Qztp+Pb9uqLfsPmm6pt1WOcpu+KNpiGtPWTL21sJApv6JPKU+msUrrCIekutsHtW6044YPXNVOnvUXv08BaPFhbpeGZ4zkrji0mCdGfz2RNcgLw0y3ZzgUuv0Lw+xV0/xwanJu4IOFI1X9Ab7NnoGMkqN/upBLJ4lRhjYVTNEv01IX2/r5WZzTn4c38Nfw4Ma3hR0BiLMTFfklFVGg2R64Z7IILoB` |
-> | uswestcentral | rsa-sha2-512 | `vVHVYoH1kU1IZk+uZnStj3Qv2UCyOR9qVxJfmTc20jQ=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC9Q8Tvvnea8hdaqt+SZr4XN1JIeR43nX6vrdhcS6yyfRgaTcEdKbAKQbwj9Fu3kq80c4F+SNzh1KQWlqLu3MJHSaSdQLN9RaHO1Dd+iVK1WgZtsPM9+6U7wupMZq8Hdmao5sqaMT5lj7g+win2J+Wibz7t8YwS7g2Xi+ode8tFPFKduZ5WvKLjI0EiAS4mvcyWEWca142E8fxV9TobUjAICfgtL4vCpmLYKnSL/kUgplD0ow86k/MHp9zghDLVSVDj8MGMra+IJEpgHOUrFNnuyua2WSJVuXR2ITfaecRKrGg7Z4IJzExPoQzDIWdCHptiGLAqvtKT0NE2rPj9U4Rp` |
-> | uswestcentral | ecdsa-sha2-nistp256 | `rkHjcTK2BvryQAFvjugTHdbpYBGfOdbBUNOuzctzJqM=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKMjEAUTIttG+f5eocMzRIhRx5GjHH7yYPzh/h9rp9Yb3c9q2Yxw/j35JNWxpGwpkb9W1QG86Hjt4xbB+7q/D8c=` |
-> | uswestcentral | ecdsa-sha2-nistp384 | `gS9SYvaH6dCqyugArvFb13cwi8q90glNaK+fyfPie+Y=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD0HqM8ubcDBRMwuruX5zqCWSp1DaLcS9cA9ndXbQHzb2gJ5bJkjzxZEeIOM+AHPJB8UUZoD12It4tCRCCOkFnKgruT61hXbn0GSg4zjpTslLRYsbJzJ/q6F2DjlsOnvQQ==` |
-> | uscentral | rsa-sha2-256 | `GOPn34T1cCkLHO0xjLwmkEphxKKBQIjIf9QE1OAk3lU=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC9oA4N2MxbHjcdSrOlJOdIPjTB2LpQUMwJJj+aI2KEALYDGWWJnv0E14XjY1/M35jk8z0hX4MHGE/MEocSsTVdFRdWdW9CKTWT6eICpg9frTj6wfkB/Dxz/BAYb/YXq5OMrFlkFJUG8FMp9N80W6UWichcltmSrCpRi5N3ZGpVXEYhJF+I0mCH7Yheoq2KzIG2iWU/EJT5fui4t51wD8CQ1NWG8/THnNr0gjCr3AtB+ZPAl/6N7i2vO3FlZEHUj6BHoQ4dhIGjGCkgFDNU6RpdifqMJRihP9fSMOq4qksch1TE5sOnp0sOaP/RQvChb4oXB8Pru+j45RxPzIvzzOZZ` |
-> | uscentral | rsa-sha2-512 | `VLhZbZjHzoNRMyRSa3GYvk2rgacjmldxQ2YNzvsMpZA=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDPnuJixcZd6gAIifABQ377Mn0ZootRmdJs1J3R8/u7mbdfmpX2ItI0VfgMh4BzGEdgCqewx4BjADhfXRurfimuP8P9PLRq89AHX2V+IfeizLZkrnrxKiijjGEz640gORzzwIp2X+bmnBABWzEZjSNOeE3CKVr4ONvH80bYGFFqR4+arOelDqWEgxktM1QBlId7xR7efmtEGAuAhFbZVaqjBNsbqyiR/hlkMQfmWn1bjGSoenUoPojc7UAp9+Xf6ujkhCihRV/O4A69tVvp5E0Qv5MJ1Qj3kzAYbHQcIQ2l47MQq1wdZYxkYBHmH5leAjHgQbbccPalOLSbLRYjF169` |
-> | uscentral | ecdsa-sha2-nistp256 | `qN1Fm+zcCQ4xEkNTarKiQduCd9S+Aq3vH8BlfCaqL74=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBN6KpNy9XBIlV6jsqyRDSxPO2niTAEesFjIScsq8q36bZpKTXOLV4MjML0rOTD4VLm0mPGCwhY5riLZ743fowWA=` |
-> | uscentral | ecdsa-sha2-nistp384 | `9no3/m09BEugyFxhaChilKiwyOaykwghTlV+dWfPT6c=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCiEYrlw9pskKzDp/6HsA2g0uMXNxJKrO5n16cHwXS1lVlgYMK3mmQdA+CjzMdJflvVw7sZO2caApr+sxI3rMmGWCt5gNvBaU6E9oUN8kdcNDvsfFavCb3vguOgtgbvHTg==` |
-> | europenorth | rsa-sha2-256 | `vTEOsEjvg/jHYH1xIWf2rKrtENlIScpBx450ROw52UI=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQChnfrsd1M0nb7mOYhWqgjpA+ChNf7Ch6Eul6wnGbs7ZLxXtXIPyEkFKlEUw4bnozSRDCfrGFY78pjx4FXrPe5/m1sCCojZX8iaxCOyj00ETj+oIgw/87Mke1pQPjyPCL29TeId16e7Wmv5XlRhop8IN6Z9baeLYxg6phTH9ilA5xwc9a1AQVoQslG0k/eTyL4gVNVOgjhz94dlPYjwcsmMFif6nq2YgQgJlIjFJ+OwMqFIzCEZIIME1Mc04tRtPlClnZN/I+Hgnxl8ysroLBJrNXGYhuRMJjJm0J1AZyFIugp/z3X1SmBIjupu1RFn/M/iB6AxziebQcsaaFEkee0l` |
-> | europenorth | rsa-sha2-512 | `c4FqTQY/IjTcovY/g7RRxOVS5oObxpiu3B0ZFvC0y+A=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQCanDNwi72RmI2j6ZEhRs4/tWoeDE4HTHgKs5DRgRfkH/opK6KHM64WnVADFxAvwNws1DYT1cln3eUs6VvxUDq5mVb6SGNSz4BWGuLQ4onRxOUS/L90qUgBp4JNgQvjxBI1LX2VNmFSed34jUkkdZnLfY+lCIA/svxwzMFDw5YTp+zR0pyPhTsdHB6dST7qou+gJvyRwbrcV4BxdBnZZ7gpJxnAPIYV0oLECb9GiNOlLiDZkdsG+SpL7TPduCsOrKb/J0gtjjWHrAejXoyfxP5R054nDk+NfhIeOVhervauxZPWeLPvqdskRNiEbFBhBzi9PZSTsV4Cvh5S5bkGCfV5` |
-> | europenorth | ecdsa-sha2-nistp256 | `wUF5N8VjGTnA/PYBVzQrhcrMgHuCfAYL1tu+p6s28Ms=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCh4oFTmr3wzccXcayCwvcx+EyvZ7yANMYfc3epZqEzAcDeoPV+6v58gGhYLaEVDh69fGdhiwIvMcB7yWXtqHxE=` |
-> | europenorth | ecdsa-sha2-nistp384 | `w7dzF6HD42eE2dgf/G1O73dh+QaZ7OPPZqzeKIT1H68=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLgyasQj6FYeRa1jiQE4TzOGY/BcQwrWFxXNEmbyoG89ruJcmXD01hS2RzsOPaVLHfr/l71fslVrB8MQzlj3MFwgfeJdiPn7k/4owFoQolaZO7mr/vY/bqOienHN4uxLEA==` |
-> | uaen | rsa-sha2-256 | `Vazz+KIADh85GQHAylrlI1tTY8/ckoRqTe/kbLXPmd0=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDRGQHLLR9ruI0GcNF2u3EpS2CbHdZlqcgSR1bkaOXA9ZufHyxuhIpzG2IgYQ8wrjGzIilYds6UIH7CAw9FApKLNpLR6qdm8qjM0tJiyHLm3KloU27FfjCQjE9JhmsbTWCRH3N52A9HXIdiVCE3BBSoXhg/mF+3cvm1JvabKr1twoyfbUgDFuF7fDyhSxJ/MTig8SpgzWqcd5J+wbzjXG0ob2yWVhwtrcB6k97g25p77EKXo3VhSs0jN7VR+SAHupVwWsUgx4fZzi2I5xTUTBdOXW+e3EiXytPL2N5N/MtFKVY/JVhFkKkcTRgeuOds51tkByteSkc32kakcUxw6CjJ` |
-> | uaen | rsa-sha2-512 | `NDeTZPUor2OuTdgSjLLhSaqJiTJUdfwTAzpkjNbNQUY=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDAx9LfiyVmWwGD/rjQeHiHTMWYaE/mMP6rxmfs9/I4wEFkaTBbc4qewxUlrB1jd7Se2a0kljI3lqQJ9h+gjtH/IaVTZOKCOZD8yV9Dh4ZENRqH/TOVz6LCvZifVbjUtxRtbvOuh1lJIVBSBFciNr0HThFMnTEIwcs5V48EFIT6eS9Krggu+cWAX2RbjM0VQnIgkA5BeM33MjSjNz86zhO+e7e1lhflPKL5RTIswtWbwatgkyvfM33pJql/zJz+3/usSpIA/pgWw23c8WziYXiHPTShJXN+N+9iLKf9YUkpzQUZSaRw8XDPyjJNx327Lot0Bh4YLpe37R0SrOvitBsN` |
-> | uaen | ecdsa-sha2-nistp256 | `vAuGgsr0IQnOLUaWCCOBt+Jg0DV9C6rqHhnoJnwORM8=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEYpnxgANJNJ4IIvSwvrRtjgbejCpTc3D+l5iob7dBK4KQ7MB40rq+CtdBDGZ1J7d6oCevW6gb1SIxU/PxCuvMI=` |
-> | uaen | ecdsa-sha2-nistp384 | `A5fa4Pzkdl0H2kVJxlNiEQkOhPzBYkrfQrcviQUUWUA=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOz4ENDgFpo0547D5XCRCJLg8brp+iUyId2IdEhZAhuNX9spxlVe6uSkiQbd+8D5hHPVNuLFTFx7v2wXObycM8tr/WGejn/934BvSUhM6lDpU+d5n+ZcxEEhp4gDiy1l+Q==` |
-> | germanywc | rsa-sha2-256 | `0SKtGye+E9pp4QLtWNLLiPSx+qKvDLNjrqHwYcDjyZ8=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDsbkjxK9IJP8K98j+4/wGJQVdkO/x0Msf89wpjd/3O4VIbmZuQ/Ilfo6OClSMVxah6biDdt3ErqeyszSaDH9n3qnaLxSd5f+317oVpBlgr2FRoxBEgzLvR/a2ZracH14zWLiEmCePp/5dgseeN7TqPtFGalvGewHEol6y0C6rkiSBzuWwFK+FzXgjOFvme7M6RYbUS9/MF7cbQbq696jyetw2G5lzEdPpXuOxJdf0GqYWpgU7XNVm+XsMXn66lp87cijNBYkX7FnXyn4XhlG4Q6KlsJ/BcM3BMk+WxT+equ7R7sU/oMQ0ti/QNahd5E/5S/hDWxg6ZI1zN8WTzypid` |
-> | germanywc | rsa-sha2-512 | `9OYO7Hn5p+JJeGGVsTSanmHK3rm+iC6KKhLEWRPD9ro=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQCwrSTqa0GD36iymT4ZxSMz3mf5iMIHk6APQ2snhR5FUvacnqTOHt3xhMF+UwYmGLbQtmr4HdXIKd7Dgn5EzHcfaYFbaLJs2aDngfv7Pd6TyLW3TtSgJ6K+mC1MDI/vHzGvRxizuxwdN0uMXv5kflQvnEtWlsKAHW/H7Ypk4R8s+Kl2AIVEKdy+PYwzRd2ojqqNs+4T2tPP5Y6pnJpzTlcHkIIIf7V0Bk/bFG2B7r73DG2cSUlnJz8QW9pLXIn7268YDOR/5nozSXj7DinVDBlE5oeZh4qkdMHO1FSWynj/isUCm5qBn76WNa6sAeMBS3dYiJHUgmKMc+ZHgpu6sqgd` |
-> | germanywc | ecdsa-sha2-nistp256 | `Ce+h+7thT5tt75ypIkWZ6+JnmQMZEl1N7Tt3Ldalb64=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBmVDE0INhtrKI83oB4r8eU1tXq7bRbzrtIhZdkgiy3lrsvNTEzsEExtWae2uy8zFHdkpyTbBlcUYCZEtNr9w3U=` |
-> | germanywc | ecdsa-sha2-nistp384 | `hhQQi2iRjSX5d9c+4714hAFvTA3c63+TGknhuQi7Tss=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDlFF3ceA17ZFERfvijHkPI2Na1wuti9/AOY5E/bDvZfP08kkmYTb9Ma6omhB0dHR6e1CmRJfKmFXfTd81iVWPa7yXCxbS8yG+uNKCuHxuNv8hFhNM84h2727BSBHBBHBA==` |
-> | switzerlandw | rsa-sha2-256 | `yoVjbjB+U4Cp/ZpMgKKuji9T2pIFtdeXnJudyeNvPs0=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDFl9NO3CJyKTdYxDIgCjygwIxlT1ppJQm/ykv2zDz6C7mjweiuuwhVM3LRua3WyP5mbgl3qYm+PHlA7UyIMY5jtsg7GaSfhiBSGZAdfgfDgOp3qRkgyep84P69SLb2b0hwgsPVkx8eWLDDVbOEdQLLx7TVndyxtdw+X4bZs6UdEcLMvLUWl7v3SoD5oiuJN6vOJPQl0VBeEaK/uhujjFgnlEu7/31rYEKQ8vQBbx22a4kIyBtUSAGo/VfKGRWF9oXL7Umh2xHAPwNbGwP+DdCKUY27wWG7Qe18O+QS9AOu0yL4+MRIHZg8ODLQsk0Hp3q8Iw2JjohSkk4lcjHYgb69` |
-> | switzerlandw | rsa-sha2-512 | `UgWxFaVY0YYMiNQ82Wt3D1LDg3xta1DfRUUKWjZYllk=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC6svukqfg7147raZZrA1bZFOO/EDFgi+WRsxoOfH/EEWGmZ89QQ5m855TpsTPZ5ZARQD9kxrYEtqefcSPuWgth4Ze5PNVwRfAwedsSfnYwHZqHRlRM54bOQ6Img7T292ERl4KNJUI7SLyF+kKB7eXqp5nMBrTZ4rSHXoeibv2yZAph0cyf4V/NnfRj6KZSf6YDs0LW1VuovWAC6S7mpBjwtabBmd1gIiJleWhB7Jj48yiyh0m7L9oIoR4NRiuFC535JwqCYhrgFwujuk6iIR9ScRdayEr6gVcv6tBms3MyR16ytA/MHRxYHfPKb1kHUrpFjDQZZZswoDJDnhQGOm8Z` |
-> | switzerlandw | ecdsa-sha2-nistp256 | `5MyZiuHQIMDh/+QEnbr3Zm6/HnsLpYT2GXetsWD6M8Q=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEj5nXHEjkVlLcf9R9fPQw9k2QGyUUP6NrFRj1gbxKzwHsgG2YKWDdOJiyguiro0xV9+JRdW3VC49/psIYUFDPA=` |
-> | switzerlandw | ecdsa-sha2-nistp384 | `nS9RIUnm5ULmNIG+d7qSeIl/kNzuJxAX9/PcwfCxcB0=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB/Ps4Wp15xhNenavSHZijwVXdZcvhzVq8IcfHR3+Gz3tKLed36OdHRTdWpvjrg0mENw4L1mEZnHnDx96WMtA+FfagGWXMVMMfcyM4riIedemHsz45KAR2suqcdkNHfdVA==` |
-> | swedenc | rsa-sha2-256 | `feu0rEf3KhvHGfhxEjcuFcPtIl+f0ZVzOMXyxy+f8f4=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDOimUzZHr0DxrjdWEPQqkrBudLW2P2dvPE9DoaXSNbehU13bxzsF6lzO65JBPh9rlNwwyt2yWtrR4XI0Qh/QSXmBntefOeH6BZVrN06aHrsd1dQBr4UFT5chCwy6Keu0ARW3fY8kO9lycTmMIeoiaYahicxyRRC8WLs0cSCH8tO0dA2aoaMxafBWqR6D5dNzu00rIcsCxvyjtN3Y8C4fw3YnNvPB/qWHdZ4aNcu7sQMRhCYVNPqX9UNGeXkbw8gHf9uL9dFu1c+P+VFIEs5bIecgT5HiGvtuXsWRdtEcM1v3mrRnNdmeWWQIqXzLrs5svipMIbnYXekhhLYHIlVo4d` |
-> | swedenc | rsa-sha2-512 | `5fx+Ic5p/MMR6TZvjj2yrb4HMHwc1TgM4x1xQw4aD3Y=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC2nRaxWTg4KGLClTZLQ5QgPZPyQ/XYbH4prjhg1uK7m/JKlmJw5LjmIUVKnlXS38qTKpWpJZyGU/eBCa5FPQODvoAXfNncgtIQxd7j00P8aO2tho+uIxSgiTCte8sgrAyx22uIJlORJn2x1cBFBJrlgQDJOKEAs9IakMNdLvlfjJV405gk7pstF4eeIANRWC3eOTrMs0O1gCTt2rnWR5BNQJu8swj9FEWreNQ3PvUliM6Ig6u8b+4d8ryYGuzh5+E8wy/aNxlowkoCI4D/+dBnH43pSYyjhrVx966JMlrJZjDmbgtygkJI+FoEEfBoFlrpIGfisqIX41Np9ZRre4Ux` |
-> | swedenc | ecdsa-sha2-nistp256 | `6HikgYBMSL9VguDq9bmwRaVXdOIUKEQUf4hlOjfvv6I=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBErZhZNNmDhMKbSUXLB1VcTmR7pXcXWAqfFpdI81OP1FeCxBtpRNpIeWMyVoP3FeO3yWcODLm/ZkK7BICFpjleo=` |
-> | swedenc | ecdsa-sha2-nistp384 | `apRb96GLQ3LZ3E+rt2dyr9imMFDXYbaZERiireEO6ks=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKA5kwsqDKzZWmQCjIFBBjZun3rjg62pv8BOULwvwImaPvMFuR2OipExQZIyKSbR7wS9HA4/QKVA5rLRrSGpYvOBG438/7fwVZy5rOj3GXq6X7Havr1ExRXwsw5rJ56acA==` |
-> | asiaeast | rsa-sha2-256 | `XYuEB+zABdpXRklca8RCoWy4hZp9wAxjk50MD9w6UjE=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDNKlaGhRiHdomU5uGvkcEjFQ0mhmNnWXsHAUNoGUhH6BU8LmsgWS61QOKHf1d3qQ0C9bPaRWMpemAa3DKGGqbgIdRrI2Yd9op+tqM+3hrZ8cBvBCgqKgaj4ZitoFnYm+iwwuReOz+x0I2/NmWUxuQlbiHTzcu8TVIln/5sj+n9PbwXC8Zk6vsCt6aon/P7hESHBJ4yf2E+Io30m+vaPNzxQGmwHjmBrZXzX8gAjGi6p823v5zdL4mq3tT5aPPsFQcfjkSMRDGq6yFSMMEA7i2dfczBQmTIJkYihdS8LBE0Ir5islJbaoPQxeXIrF+EgYgla505kJEogrLprcTGCY/t` |
-> | asiaeast | rsa-sha2-512 | `FUYhL0FaN8Zkj/M0/VJnm8jPL+2WxMsHrrc/G+xo5QM=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC7x8s74EH+il+k99G2aLl1ip5cfGfO/WUd3foiwwq+qT/95xdtstPYmOP77VBQ4G6EnauP2dY6RHKzSM2qUdmBWiYaK0aaI/2lCAaPxco12Te5Htf7igWyAHYz7W99I2CfJCEUm1Soa0v/57gLtvUg/HOqTgFX44W+PEOstMhqGoU9bSpw2IKlos9ZP87C6IQB5xPQQ1HlnIQRIzluJoFCuT7YHXFWU+F4ZOwq5+uofNH3tLlCy7D4JlxLQ0hkqq3IhF4y5xXJyuWaBYF2H8OGjOL4QN+r9osrP7iithf1Q0EZwuPYqcT1QeIhgqI7OIYEKwqMfMIMNxZwnzKgnDC1` |
-> | asiaeast | ecdsa-sha2-nistp256 | `/iq1i88fRFHFBw4DBtZUX7GRbT5dQq4g7KfUi5346co=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCvI7Dc7W3K919GK2VHZZkzJhTM+n2tX3mxq4EAI7l8p0HO0UHSmucHdQhpKApTIBR0j9O/idZ/Ew6Yr4nusBwE=` |
-> | asiaeast | ecdsa-sha2-nistp384 | `KssXSE1WC6Oca0dS2CNySgObkbVshqRGE2JcaNsUvpA=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNEYGYGolx8LNs5TVJRF/yoxOCal3a4C0fw1Wlj1BxzUsDtxaQAxSfzQhZG+lFCF7RVQyiUwKjCxmWoZbSb19aE7AnRx9UOVmrbTt2PMD3dx8VmPj1K8rsPOSq+XX4KGdQ==` |
-> | southafrican | rsa-sha2-256 | `qU1qry+E/fBbRtDoO+CdKiLxxKNfGaI9gAplekDpYvk=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC2UBC1KeTx8/tQIxVEBUypcu/5n3B/g0zqE7tFmPYMFYngrXqEysIzgAdpiu2+ZX/vY8AF/0UkhYec/X/rwKQL8CCVwYqa2hufbSrX/qSuUHZd/95LFB2Nh+hJ23fn3EK8Gpgo/Xkmx9YVZoaQPGPsWVWVKjU6aVpM54cd6iuDT3y9SAnqbUMqgwwz3mK7bQGFPrbUVOUwVIcYKZD9HMNZhpo8HpjllKYIt1AFy4db8lSrLyuX8Nn/U7XAlPUndUCpKsAfWw8SemyuxSHziFDHF5xo8eLU+QYxdtzirgDAgEYWv9aa0TSx5Q2Mq8XJ7POffQxKj44ocHzmMGq/wPS1` |
-> | southafrican | rsa-sha2-512 | `1/ogzd+xjh3itFg3IpAYA2pwj1o3DprEabjObSpY/DY=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDLAkEygbVyp189UwvaslGRgaqcGWXaYJVq+gUB0906xkkjGoJeqSgTW5C/77vOk0zBCZM3yBgtDFZL1d6lze1QJZ6kGGPynJa5SeyydAds9G745yaFFuE53zJUyMy+y5I1ytfx003PKvk8+fHZK3rPYYr+LKm2u+9BmnuDB/0t561oFg1ZiMCPgNnDdUwkya2EtsJAifkUaBlYmzBZAFbIYyGfb898utZHyI+ix2TrMS/RHEDIchG8qSBMpOPmcpa29ADVsmAQDd5ds5D7WjirfMXwBxgJTMyuy+N9rJRgHoqDnt/GsgI2GtoPM7YSET8uYug941hAvFm5TI/dW3YR` |
-> | southafrican | ecdsa-sha2-nistp256 | `e6v7pRdZE0i1U2/VePcQLguy7d+bHXdQf3RZ4jhae+g=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIEQemJxERZKre+A+MAs0T0R7++E6XanZ7uiKXZEFCyDgqjVcjk8Xvtrpk5pqo4+tMWM7DbtE0sgm1XmKhDSWFs=` |
-> | southafrican | ecdsa-sha2-nistp384 | `NmxPlXzK2GpozWY374nvAFnYUBwJ2cCs9v/VEnk0N6Q=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKgEuS9xgExVxicW0HMK4RLO5ZC6S0ZyENe5XVVJY0WKZ5IfIXEhVTkYXMnbtrYIdfrTdDuHstoWY9uu4bS8PtFDheNn3MyNfObqpoBPAh1qJdwfJgzo5e7pEoxVORUMnw==` |
-> | uksouth | rsa-sha2-256 | `3nrDdWUOwG0XgfrFgW27xhWSizttjabHXTRX8AOHmGw=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQCdLm+9OROp5zrc6nLKBJWNrTnUeCeo8n1v9Y3qWicwYMqmRs/sS9t5V3ABWnus4TxH3bqgnQW3OqWLgOHse/3S+K1wGERmBbEdKOl7A7kQ9QgDkWEZoftwJ9hp+AMVTfCYhcOOsG+gW021difNx+WW2O5TldL31pk+UvdhnQKRHLX31cqx5vuUmiwq4mlbBx+rY8B/xngP2bzx/oYXdy1I9fZbWWAQ6FwJBav1sSWL0l7snRdOsy5ASeMnYollEw1IATwYeUv8g3PzrryZuru+7gu/Ku9w8d5jbFyI6Up4KLwjs/gZNuqQ5dif7utiQYbVe4L0TPWOmuLA25JJRZaF` |
-> | uksouth | rsa-sha2-512 | `Csnl8SFblkdpVVsJC1jNVSyc2eDWdCBVQj9t6J3KHvw=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDIwNEfrP6Httmm5GoxwprQ57AyD6b3EOVe5pTGQWIOzxnrIw2KnDPL07KNa33xZOmtXro5PYyhr5eNXUkFiQMEe+RblilZSNAvc4MHbp2TVD0L9N7Pdy2SetoF4m5BCXdC48kZntqgkpzXoDbFiaAVln5zQCHB5fOuBPS1id8+k3zqG0o+K0MHb6qcbYV8gdQeOn/PlJzKE4M0Ie8na3aWHdGvfJjDdK/hNN0J+eUK8qIb9KCJkSMDj/l3rnue9L8XgeKKA2Pkvh3nch4VBXCcCsDVhgSf+aoiJ0Fy8GVOTk2s7QDMzD9y37D9V2OPl66q4pjFGOfK0mJmrgqxWNy5` |
-> | uksouth | ecdsa-sha2-nistp256 | `weMVzOmQnlMdMp5XBoU9SdN5meBbx/8nvA8dB45w8Ck=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEnBllEm4/HsTP+ZMhlc8YnSAYWF23tibZDqGxf0yBRTU/ncuaavuQdIJ5TcJb0NcXG7skEmq3StwHT0FPMWN8Y=` |
-> | uksouth | ecdsa-sha2-nistp384 | `HpsZ8zoOCCsUbpD3nAOtxpuKIvn0L8KGyg1KMLuMUqU=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGd/672brwX1kOhH31ZTdBRj+bcEmemcdmTEe0J88cJ3RRQy7nDFs25UrnR+h3P0ov9Uq24EJQS8auxRgNCUJ3i3ZH9QjcwX/MDRFPnUrNosH8NkcPmJ/pezVeMJLqs3Qw==` |
-> | australiasoutheast | rsa-sha2-256 | `YafIMxux7NtoKCrjQ2eDxaoRKHBzpanatwsYbRhrDKQ=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC7omLu37G00gLoGvrPOJXpRcI5GTszUSldKjrARq0WeJIXEdekaSTz5qv2kSN/JaBDJiO9E4AJFI9q5AvchdmMVs4I59EIJ0bsR9lK+9eRP4071EEc8pb3u/EPFaZQ8plKkvINJrdK6p0R2FhlFxa7wrRlKybenF1v7aU3Vw79RQYYXaZifiNrIQFB8XQy3QQj2DcWoEEOjbOgZir9XzPBvmeR8LLEWPTLYangYd3TsQtybDpP6acpOKaGYDEyXiA8Lxv8O276LUBny6katPrNvfGZScxn6vbTEZyog+By8vyXMWKEbC1Qc/ecBBk5obFzbUnq3RP1VXLJspo99cex` |
-> | australiasoutheast | rsa-sha2-512 | `FpFCo9sNUkdnD1kOSkWDIfnasYhExvRr1pJlE631QrA=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDmuW2VZAhR6IoIOr32WnLlsr/rt3y4bPFpFcNhXaLifCenkflj9BufX3lk5aEXadcemfKlUJJdwBTvTt1j4+X3P2ecCZX1/GSsRKSTuiivuOgkPxk3UlfggkgN9flE9EdUxHi/jN/OQ9CjGtHxxk72NJSMNAjvIe0Ixs7TfqqyEytYAcirYcSGcc0r70juiiWozflXlt+bS7mXvkxpqMjjIivX+wTAizzzJRaC6WcRbjQAkL2GP6UCFfBI1o9NBfXbz+qvs1KTmNA0ugRQ7g6MdiNOePHrvoF1JgTlCxEjy+/IqPiC8nNQUVCW6/gcATQoDQn0n9Lwm1ekycS35xEh` |
-> | australiasoutheast | ecdsa-sha2-nistp256 | `4xc49pnNg4t/tr91pdtbZLDkqzQVCguwyUc16ACuYTc=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCdswzJ+/Bw5ia/wRMaa0llZOjlz67MyZXkq7Ye38XMSHbS4k/GwM0AzdX+qFEwR00lxZCmpHH28SS+RyCdIzO0=` |
-> | australiasoutheast | ecdsa-sha2-nistp384 | `DEyjMyyAYkegwLtMBROR/8STr1kNoQzEV+EZbAMhb1s=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJRZx6caZTXnbTW/zRzKfoKC4LGzvD5fnr2p8yGWxDq27CjUEMxkctWcT6XS3AstF2MLMTOFp/UkkSr8eP0vXI8g99YDeFGXtoBiFSIgYF2Aiu/kpfEu3feiIUl3SVzxKw==` |
-> | frances | rsa-sha2-256 | `aywTR4RYJBQrwWsiALXc1lDDHpJ34jIEnq3DQhYny0g=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDELY4UcRAMkJpEBZT40Oh5TIxI6o6Enmlv+KxWkkcyFcNJlFtaF2Hl+afWlysrg+lB5Un4XpveWY64pl7a/dSju7aPfujcXowELIPqFSoWW7xQ+jkfJdyI0daa0l2h2oNCPqWnx8+04Vx5kcb2GktlNG4RMLx7Q6COJgQ3pGHtyfZ5fnmrWNBsuv4mvsXp0u1KGWX6s2LZtO+BpKE6DegSNLMVapAZ0ju8pagqtm6aeWEtqmkAvsI0U31qhL25FQX4DzjIbGzXd6I25AJcSXcpnwQefsaOwO/ztvIKeIf3i/h2rXdigXV1wyhvIdKm1uWwj6ph4XvOiHMZhsRUe02B` |
-> | frances | rsa-sha2-512 | `+y5oZsLMVG6kfdlHltp475WoKuqhFbTZnvY0KvLyOpA=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDmsS9WimMMG95CMXFZiStR/peQU1VA6dklMbGmYwLqpxLNxxsaQuQi6NpyU6/TS8C3CX0832v1uutW38IfQGrQfcTGdAz+GjKverzaSXqZGgTMh/JSj06rxreSKvRjYae596aPdxX5P+9YVuTEeTMSdzeklpxaElPfOoZ7Ba5A2iCnB/5l/piHiN8qlXBPmfGLdZrTUFtgRkE4Ie4zaoWo19611XgUDMDX4N4be/qilb95cUBE73ceXwdVKJ3QVQinZgbwWFUq0fMlyd8ZNb9XN6bwXH7K6cLS6HYGgG6uJhkYSAqpAZK2pOFn3MCh8gw2BkM/Rg+1ahqPNAzGPVz9` |
-> | frances | ecdsa-sha2-nistp256 | `LHWlPtDIQAHBlMkOagvMJUO0Wr41aGgM+B/zjsDXCl0=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHdj2SxQdvNbizz8DPcRSZHLyw5dOtQbzNgjedSmFwOqiRuZ2Vzu88m2v5achBwIj9gp0Ga14T7YMGyAm04OOA0=` |
-> | frances | ecdsa-sha2-nistp384 | `btqtCD/hJWVahHWz/qftHV3B+ezJPY1I3JEI/WpgOuQ=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB2rbgGSTtFMciVSpWMvmGGTu8p1vGYfS2nlm+5pAM85A4Em1mYlgHfVZx+SdG5FSYcsX4vTWt4Yw2OnDmxV3W0ycrKBs4Bx3ASX4rx3oZezVafHsUUV0ErM+LmdmKfH8g==` |
-> | uswest2 | rsa-sha2-256 | `ktnBebdoHk7eDo2tvJuv36XnMtfauhvF/r0uSo6DBfk=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDoskHzExtM+YSXGK6cBgmnLlXsZLWXkEexPKC7wHdt0kSqkIk9F31wD+2LefZzaTGfAmY5/EWrOsyBJvIgOoksH+ZPMnE9+TOWqy6vsS+Ml/ITvUkWajS1bKDPDSoIrCM1rQ9PlbgMQFg4o0FfyxLVCP7hcgvHO+aycOxkiDqtvwANvIn2Qwt7xwpIv1Mnc4OpcBSxbigb7ISlrvR9XWivE/piWjXS3IEYkGv7VitAlmWEoTt9L7K94bYol2nCXSXJ33X6xVVwVNpdxVtnUQBIRinN+vkOccgG0jvWtWPtrMjyDg/lyvr6lBdO/CQy4VO4VrIBuL6pjsS8KfIfTxKd` |
-> | uswest2 | rsa-sha2-512 | `i8v3Xxh/phaa5EyZEr5NM4nTSC/Rz7Nz0KJLdvAL0Ls=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDOOo5f0ACypThvoDEokPfzGJUxbkyMoQKca9AgEb3YkQ/lsYCfLtfGxMr2FTOGQyx5wfhOrN0B2SpI4DBgF3B0YSLK0omZRY7fpVPspWWHrsbTOJm/Fn7bWCM+p63xurZ6RUPCA6J1gXd3xbdW7WQXLGBJZ6fjG7PbqphIOfFtwcs/JvjhjhvleHrXOtfGw9b4Jr8W1ldtgKslGCU1mnUhOWWXUi+AhwGFTI0G/AShlpX8ywulk2R+fxet3SNGNQmjydnNkcsrBI/EMytO1kwoJB3KmLHEeijaQzK7iJxRDZEHlHWos6G7jwaGPI4rV5/S1N+xnG+OhCDYAUbunp5R` |
-> | uswest2 | ecdsa-sha2-nistp256 | `rt5kaA0neIFDIWTP2MjWk9cOSapzEyafirEgPGt+9HM=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKEKP+1QZf3GfEvkNZtzoKr05iAwGq+yPhUsVdyA7uKnwvTwZAi7NBr4hMkGIIdgQlGrMNNXKS0V+rhMNI1sH48=` |
-> | uswest2 | ecdsa-sha2-nistp384 | `g0vDKd4G5MKnxWewbnCYahCv1lZgfnZeEXfPAhv+trs=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB1+/Qu9Y1BqqV3seN0+0ITYCueuv0TFAnfG9z1Io8VYjmxLvdmaDvTi9kJ0ShjFJRKjbCfYKNekqYZDW4gBfmE9EyvMPI6VXPVLNY3TQ/z+Y7qO/oa28cSirW9fhp7vbA==` |
-> | indiasouth | rsa-sha2-256 | `5gFLJvQvQodZxKBi3DnGywpf9dliWguiMTqcgkTmtu8=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDlxVnaYnmg1cK+g/PI1jB1fgQQJiX39ZmfBss3mSW3kUxP3KWhm7lHBTkrbnfhVHnGpP6GcGFy09YBQa6UiyVpD8p8APtx0j9Jp8m3yhhgqOIjup0C7crl49NqMVryOZmCLOvA7KTyTxxV37GpRI+ffqQ8LOO+anWVWVaJlVCYBMct/OVhA7ePXblcbJg5eu5JjUiWW+cPdVqAqWojNHZzzprCFEBTCvYaZtzBx4kFGiipPmJSN6yvBPEfnA7Lzr/T9iXV/XkmI1txuJRBasoQMt+4jCZG25sCCN8y4iuUJCioUELr//TWaDyTsQAR4MbRW+L/GSIM9VUY4Uc+Impp` |
-> | indiasouth | rsa-sha2-512 | `T4mrHCEHbFNAQSng//m0Viu/hXfi11JMnyA0PqAuTtg=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQCz9tQa7D4dyrULCLH75yKwH27AQMRNWFUqgUQQXYHR1MYegLf7JEmFn126bxgEHPRO0bNwBM9S626Gcr1R1uDI/luL6uvG0Q57k+Pmv7HNQtv12J3fAuxuhSPcppE5IE5QR94Qgd1RzGXv954TK1Z+kCXHyLA583XTQ4btOEwqUo/16tSCqaoTSdyNp17q8BrOCPaTWMqT774lSGELIDc6RaGKHRu/Qa+F5FRMswdZt5YJDEKtlKdvbyIiSfIP2GZGhWBiSW2D6xpnzSjstR3LfRfFek/ryGkDPu5c5HNaVJwc1fatP6ggAbhVCcyDgWiCCpEBICV2wnPpGfDUdbRh` |
-> | indiasouth | ecdsa-sha2-nistp256 | `7PQhzR5S6sEFYkn2s3GxK6k8bwHgAy0000zb07YvI44=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLgZw/ouE23XQnzO8bBPSCJp/KR+N/xfuJS5QtWU/PzlNLmSYS20b65GRP6ThwZdaigMhwHOEc8twpJ7aA7LBu0=` |
-> | indiasouth | ecdsa-sha2-nistp384 | `sXR2nhTTNof58ne5K+Xjm9Pu8miEbKJn4Bo9NYoqQs4=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLwbzUI8q9f5YTLIs6ddRTPlHdb35xrbsJeOQII/nEXhlNjzpdL9XnDJjQunQL2vg6XND1pfp3TNBJ9LF3mud442LbpwSt9B7EZD8tQ5u0+2NeNjn8JnCu6/tdvS+xoNiA==` |
-> | japanwest | rsa-sha2-256 | `DRVsSje7BbYlVJCfXqLzIzncbVU4/ETFGzdxFwocl8E=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDl/rlTgQpomq4FmJKSR2fjgAklV818RcjR/e/C1VUJVpbntJoWUlBhKYDFPTVQaHXDTK5HyJU5APsdy6CJo8ia32qc2E/573LDNk4dgFFrh+KFRiD+ULt3IH15i1DieVw61MAVOvzh+DmTJHPLaTufXoQ62YACm3yC1st1kXv4bawfXs0ssmeqrBcCOQvMvW/DexnnGXO6QXYTcjUktNrO2h2dd355n5FP4fcsBEdGmfT79HYPM6ZoqkItRZEO6Nel65KxtenAwQub8SK3iJgFyJwd3zIH4OCHp3z4tcGXw5yNAX15dJMSnls0zvzhx0f4ThwfgB4t1g9jVb47Ig7B` |
-> | japanwest | rsa-sha2-512 | `yLl9t2jlkrTVWAxsZ59Wbpq+ZCnwHfdMW8foUmMvGwI=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC9zrpnjY7c0dHpE1BMv+sUp+hmvkBl3zPW/uCInYM5SgtViSQqn/DowySeq+2qMEnOpHGZ8DnEjq55PmHEumkYGWUUAs38xVGdvRZk6yU7TxGU42GBz0fT/sdungPHLQ2WvqTZYOFqBeulRaWsSBgovrOnQEa2bNTejK9m353/dmAtKHfu68zVT+XYADrT3PY5KZ1tpKJA0ZO9/ScUvXEAYs20WSYRZBcNDoSC9xz4K8hv9/6w3O3k0LyBKMFM5ZW8WVDfpZx1X0GBCypqS+RNZuVvx81h3nxVAZSx80CygYcV4UHml7wtnWDYEIBSyVRsJWVNGBlQrQ4voNdoTrk5` |
-> | japanwest | ecdsa-sha2-nistp256 | `VYWgC6A4ol1B7MolIeKuF2zhhgdzQAoGBj5WgnQj9XE=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFLIuhTo1et/bNvYUj+sanWnQEqiqs9ArKANIWG19F9Db6HVMtX8Y4i7qX6eFxXhZL17YB2cEpReNSEjs+DcEw4=` |
-> | japanwest | ecdsa-sha2-nistp384 | `+gvZrOQRq3lVOUqDqgsSawKvj6v/IWraGInqvqOmC6I=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD3ZiyS1p7F1xdf6sJ3ebarzA5DbQl1HazzLUJCqnrA84U8yliLvPolUQJw4aYORIb5pMgijsN3v9l0spRYwjGHxbJZY/V6tmcaGbNPekJWzgXA1DY35EbFYJTkxh/Yezw==` |
-> | norwaye | rsa-sha2-256 | `vmcel/XXoNut7YsRo79fP5WAKYhTQUOrUcwnbasj/fQ=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC4Y1b2Bomv8tc/JwPgW0jR5YQhF031XOk4G0l3FOdZWY31L8fLTW6rOaJdizOnWCvMwYQK39tyHe6deN9TZESobh0kVVuCWaZNI6NUR0PSHi0OfbUkuV0gm/nwtwJkH5G9QbtiJ5miNb4Ys3+467/7JkqFZmqN6vBLhL9RVInO00LPYkUGtGfTv+/hmsPDGzSAujNDCFybti4c+wMgkrIH6/uqenGfA1zW3AjBYN2bBBDZopzysLHNJYQi3nQHQSiD4Mdl7IGZtJQeC/tH9CKH5R4U4jdPN1TmvNMuaBR/Etw4+v0vrDALG1aTmWJ7kJiBXEZKoWq/vWRfLzhxd4oB` |
-> | norwaye | rsa-sha2-512 | `JZPRhXmx44tEnXp+wPvexDys1tSYq9EDkolj9k6nRCM=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC11j19LeEqRzJOs8sWeNarue+bknE3vvkvSnsewApVMQH35t9kpqRGMSr6RTU2QCYDiQTCKI2vzLSTLGoizoPBiY/7lvdylDRCbeEpuFUkgvKZrapkJ6JqKOySPpFNhqCs27rdY5dJ2C7/nmTL/kvcyhXFXZT2lJaOIdRSKv/1Q3DAWQ9icNGbDokQDubF5etlkquqTV6r/ioFuh7hdKE+fJooyHa2oYTD+j5cNDKBxrJWBEidOe2HwplR4lYPggUcVtGu9aoSVIMmswztFF6+MNIdOT1kdvHewKLjkVB1hbIHl/E+uexsyMGcCg5fPy7dDIipFi1aED+6R7CnAynJ` |
-> | norwaye | ecdsa-sha2-nistp256 | `mE43kdFMTV2ioIOQxwwHD7z+VvI4lvLCYW8ZRDtWCxI=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDWP6vJCbOhnvdmr7gPe8awR/E+Bx+c8fhjeFLRwp6/0xvhcywT9a1AFp7FdAhkVahNKuNMU1dZ0WTbOEhEGvdg=` |
-> | norwaye | ecdsa-sha2-nistp384 | `cKF2asIQufOuV0C/wau4exb9ioVTrGUJjJDWfj+fcxg=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDGb8w8jVrPU1n68/hz9lblILow6YA9SPOYh5r9ClAW0VdaVvCIR/9cvQCHljOMJQbWwfQOcBXUQkO5yI4kgAN3oCTwLpFYcCNEK6RVug9Q5ULQh1MRcGCy3IcUcmvnYdg==` |
-> | francec | rsa-sha2-256 | `zYLnY1rtM2sgP5vwYCtaU8v2isldoWWcR8eMmQSQ9KQ=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDCmdsufvzqydsoecjXzxxL9AqnnRNCjlIRPRGohdspT9AfApKA9ZmoJUPY8461hD9qzsd7ps8RSIOkbGzgNfDUU9+ekEZLnhvrc7sSS9bikWyKmGtjDdr3PrPSZ/4zePAlYwDzRqtlWa/GKzXQrnP/h9SU4/3pj21gyUssOu2Mpr6zdPk59lO/n/w2JRTVVmkRghCmEVaWV25qmIEslWmbgI3WB5ysKfXZp79YRuByVZHZpuoQSBbU0s7Kjh3VRX8+ZoUnBuq7HKnIPwt+YzSxHx7ePHR+Ny4EEwU7NFzyfVYiUZflBK+Sf8e1cHnwADjv/qu/nhSinf3JcyQDG1lN` |
-> | francec | rsa-sha2-512 | `ixum/Dragma5DAMBzA/c5/MY02FjUBD/gI8+XQDzJvc=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDjTJ9EvMFWicBCcmYF0zO2GaWZJXLc7F5QrvFv6Nm/6pV72YrRmSdiY9znZowNK0NvwnucKjjQj0RkJOlwVEnsq7OVS+RqGA35vN6u6c0iGl4q2Jp+XLRm8nazC1B5uLVurVzYCH0SOl1vkkeXTqMOAZQlhj9e7RiFibDdv8toxU3Fl87KtexFYeSm3kHBVBJHoo5sD2CdeCv5/+nw9/vRQVhFKy2DyLaxtS+l2b0QXUqh6Op7KzjaMr3hd168yCaqRjtm8Jtth/Nzp+519H7tT0c0M+pdAeB7CQ9PAUqieXZJK+IvycM5gfi0TnmSoGRG8TPMGHMFQlcmr3K1eZ8h` |
-> | francec | ecdsa-sha2-nistp256 | `N61PH8SVCAXOq7Z7eIV4mRnotafmNoPrpc+TaLxtPX4=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBK3UBFa/Ke9y3aLs1q1b8gh/tXiS7lpOTzUiDFpXdbq00/V9Ag+v2z5MIaicFdum9Ih4fls1Mg07Ert16bi5M8E=` |
-> | francec | ecdsa-sha2-nistp384 | `/CkQnHA57ehNeC9ZHkTyvVr8yVyl/P1dau2AwCg579k=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG/x6qX+DRtmxOoMZwe7d7ZckHyeLkBWxB7SNH6Wnw2tXvtNekI9d9LGl1DaSmiZLJnawtX+MPj64S31v8AhZcVle9OPVIvH5im3IcoPSKQ6TIfZ26e2WegwJxuc1CjZZg==` |
-> | uswest3 | rsa-sha2-256 | `pOKzaf3mrTJhfdR/9dbodyNza30TpQrYRFwKAndeaMo=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC0KEDBaFSLsI28jdc854Rq6AL9Ku8g8L+OWQfWvb1ooBChMMd/oqVvFF9hkLzJ8nFPQw7+esVKys5uFwRTpBNuobF/RVtY0zLsNd+jkPxoUhs7Yl0hI2XXAPdp3uCsID56O+OrB7XbOsPCrJ2aXfiaRheRQg84/92c357uQ/epsva8XCMjIIGOAyEL6d4mnCNJ2Y0mXPJT1lfswoC8i2GSUKdJZhTLCe9zVDvTCTWuZJSH3A8nM3RVtnNgMXfNjh2blwW9YFv5BrMOXA205fahuDcPjwvXo9OMfEneDsrODmiEGYzbYLby/5/KPzz5OVn7BDJma6HL0z07i3PmEzXN` |
-> | uswest3 | rsa-sha2-512 | `KKcoWCeuJeepexnJCxoFqKJM88XrpsPKavXOoNFEGuY=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDNzhiVgDjCIarGEjKgmSxRh4vWjV6PxFbNK3cD0M4jWGlxPx/otJNEXCMee0hW29b7bwo2+aiyv3AEt7JYTeM/G9SHmenU6MTpqD/lC/LABtqTB7EV9FIFkc8MbbOvEkdTnRJw1d09MTqqwbkR9wq297AWggSzCuPDqMq+268UzsthMzODRVqW3yTr3M6vhlBCPfN5ptcvYwqRaa7Yhe4bdRZ+xYB5I2+ZMkalfn7SQiySSgAGjUJxrxK+LnJKSi32CfqTU8KjWNjCc40eAqexLFjg6AN9BtC0+ZYcD2KQmeqJ8oRCWw9r4CsaduSmcjc7XD75RKGdArjYzjeiVSlt` |
-> | uswest3 | ecdsa-sha2-nistp256 | `j4NlZP/wOXKnM3MkNEcTksqwBdF0Z46+rdi2Ic1Oj54=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBETvvRvehAQ2Ol0FfTt649/4Xsd0DQQ7vyZ666B92wRhvyziGIrOhy8klXHcijmRYRz3EjTHyXHZ4W8kcSKB4Lo=` |
-> | uswest3 | ecdsa-sha2-nistp384 | `DkJet/6Pm6EXfpz2Ut6aahJ94OvjG3R7+dlK0H4O1ts=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEu+HpgDp0a02miiJjD5qVcMcjWiZg5iIExECqD/KQVkfyraJ3WZ8P28JwB+IYlEGa2SHQxScDjG2t3iOSuU9BtpA0KK5PGtu3ZxhN1UmZbQgz6ANov7/+WHChg7/lhK0Q==` |
-> | indiacentral | rsa-sha2-256 | `OcX6wPaofbb+UG/lLYr30CPhntKUXyTW7PUAhC6iDN0=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDWuKbOJR8ZZqhE3k2HMBWO99rNWoHwUa+PVWPQHyCELyLR19hfrygNL9uugMQKTvPYPWx8VM6PrQBrvioifktc/HMNRsoOxlBifQETfRgBseXcIWorNlslfFhBnSn6ZGn8q4XICGgZ1hWUj9z1PUmcM2LZDjJS33LLzd23uIdLePizAliJAzlPyea8JNpCVjfmwnNwtuxXc48uAUXlmX+e0ZXRwuEGble8c1PbrWWTWU4xhWNJ+MInyvIGv9s6cGN7+fxAFaUAJS0wNEa3poCnhyNxrckvaqiI3WhPQ8Hefy2DpXTY03mdxCz8PZPcLWdQU3H5nmuAc/pypnc0Avax` |
-> | indiacentral | rsa-sha2-512 | `HSgc5u8s+QILdyBq6wGJkxRcK5nxj81gxvpkR5bcH6k=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDSO/R/yw8q33yLkSHOw0Bi2WKDWQPrll8skh3hdRUB6wtw9dvtQFEV3suvFJsTVvAbnGBe2Fjgi69X0zkIygxg74XuQsx7GZO6gyaKDwljyanFoCzer+OzFSpDcVJ0zOfhY99uHeYT6k4leb2ngABqjiqieDHMZ9JQX12KOK3cAks/oytrNUo9krGb1Nyv5BYu4dWXHmuFgtigDd043khaARfdWkg88lKgb6G9k+vQTGKphLnFMqhada/aP8GsaA2Dq5d/LH5P5CTU7MRPA8TuuyLOtbv8FtQ2TyaAXhYCplCQELtto1yXZ79WVjQE/uKuX8xK5M2rfOH+H5ck/Rxl` |
-> | indiacentral | ecdsa-sha2-nistp256 | `zBKGtf770MPVvxgqLl/4pJinGPJDlvh/mM963AwH6rs=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBjHx8+PF0VBspl6l9Xa3BGyJwSx2eDX0qTDnhrdayzHMWsHGX3vz0wr7oMeBVdQ26dOckExa6iPrEDSt8foV1M=` |
-> | indiacentral | ecdsa-sha2-nistp384 | `PzKXWvO/DR/KnUElcVWIwSdabp6ZJqce37DJZzNl3Sk=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJwEy1f+GYN4rxhlCAkXGgqAU1S7ssI4JPEJs8z1mcs8dDAAVy1cqdsir9yZ9RSZzOz/BOIubZsG137G2+po0Pz0FfJ0jZVGzlx1UHXu7OMuKQ7d2+1TkPpBYFy6PiCa3w==` |
-> | koreasouth | rsa-sha2-256 | `J1W5chMr9yRceU2fqpywvhEQLG7jC6avayPoqUDQTXHtB2oTlQy2rQB` |
-> | koreasouth | rsa-sha2-512 | `sHzKpDvhndbXaRAfJUskmpCCB3HgPbsDFI/9HFrSi3U=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQCfGUmJIogHgbhxjEunkOALMjG77m+jgZqujO3MwTIQxQNd/mDeNDQaWDBVb2FJrw15TD3uvkctztGn2ear3lLOfPFt0NjYAaZ8u5g9JYCtdZUTo5CETQFU/sfbu2P2RJ/vIucMMg8HuuuIMO059+etsDZ5dZHu9cySfwbz/XtGA0jDaTlWG0ZDT+evOE0KmFABjgMFWyPnupzmSEXAjzlD/muGeeUhtXUB8F6HVUCXLz7ffzgYiYj+1OB0eZlG/cF8+aW7MOpnWvfpBxwm16soSE1gmZnXhPrz/KXlqPmEhgIhq7Cwk54r3rgfg/wCqFw+1JcbNOv5d4levu/aA7pt` |
-> | koreasouth | ecdsa-sha2-nistp256 | `XM5xNHAWcYsC5WxEMUMIFCoNJU/g90kjk/rfLdqK7aw=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHTHpO85vgsI6/SEJWgEhP5VDTikLrNrSi6myIqoJvRx6x8+doTPkH87L/bOe/pTU/rCgkuPi1kXTC7iUTSzZYk=` |
-> | koreasouth | ecdsa-sha2-nistp384 | `6T8uMI9gcG3HtjYUYqNNxi99ksghHvsDitIYpdQ4BL4=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAgPPIDWZqvB/kuIguFnmCws7F4vzb6QG7pqSG/L9E1VfhlJBeKfngQwyUJxzS2tCSwXlto/1/W302g0HQSIzCtsR4vSbx827Hu2pGMGECPJmNrN3g82P8M0zz7y3dSJPA==` |
-> | ussouth | rsa-sha2-256 | `n7P8NrxY8pWNSaNIh8tSZxi9rXi11g3JuzWZF93Ws4g=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQD4PgB8PxPPpGfvrIUGSiiDFIfkRk2/u1DmhcoGoIfW+/KR8KC2JA0kY4Yj+AceGnDUiBbSPz7lcmy2eGATfCCL6fC5swgJoDoYDJiJoaKACuVA0Nk2y0OeO58kS6qVHGX/XHzx8+IkfGdlhUUttcga7RNeppT5iqSz49q9x6Ly42yrV3DIAkOgh+f9SsMMfR6dQQmvWN3HYDOtiO2DvVN+ZenViQVcsynspF3z4ysk53ZYw5YcLhZu8JFw4u0F6QJAznR6TfNqIlhSjR1ub8DiHvIwrmDNf8TgG5kPVGhIcibYPf+y0B0M8nr9OKCxZzUTlXX4Xcnx+VOQ1e1qGHvV` |
-> | ussouth | rsa-sha2-512 | `B2oOtHpXzwezblrKxGcNBc3QJLQG/TiVgOjnmNorqkA=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC+LJA8W3BcwITzJv6CAkx/0HBPdy3LjKPK2NQgV9mxSMw8mhz4Ere59u2vRsVFcdW6iAeGrH66VF6mJSCgUKiYnyZAfTp1O6p6DnUg4tktMQFo4BEwSz1S5SGDuRhpWvoKjzvljESf/vZBqgms7nMRWe3MGuvlUWBqB+2CnJ7bxhvGQCdBTQeoPO9EZKYKi/fPlcxBmLFGcZnRRpB6nu/Cxhhj1aHLJdjqCd+4ahtjBHeFrPxeQv9gTJ1B+EipJZu7WgPZOTI8iZaIcnCbhuGOy0iOFXeuexC9/ptHDW9UEgKVLyZ4UIPJkSLFVgW5NRujWyZ/thc5+EfHY9Db3UAl` |
-> | ussouth | ecdsa-sha2-nistp256 | `Wg9hTlPmrRH9aC9lTSf8hGFqa85AnW3jqvSXjmHAdg4=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJnEz4iwyq7aaBNKiABce+CsVIUfiw9Jw3pp6pGbL6cUaJs9mEVg1RMLHgPg2I+7XV0doisYhYb/XtufxzGCe94=` |
-> | ussouth | ecdsa-sha2-nistp384 | `rgRhPelmxAix6TBDahmGqXnKjdImdI3MnDPVc6qhF2o=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKXGKbWfVe18G9gbCxFQiBGkGYM9LktSPKkRI18WRQ50qyuxVRXRDoV+iIEJyCQTpuFTPprQ6glQYeF+ztEb4MZaXpVrcs1/Og191dcEtty3UWuJBCrv/t1kezlwBWKyXg==` |
-> | koreacentral | rsa-sha2-256 | `Ek+yOmuAfsZhTF4w7ToRcWdOevgZPYXCxLiM10q44oA=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQCyUTae7QtAd3lmH+4lKJNEBNWnPUB+PELE9f4us5GxP8rGYRar1v3ZGXiP2gzPF1km1cGNrPvBChlwFMjW+O5HavIFYugVIe8NzfI7S3t+kgTylXegSo1cWen18MAZe6Q5vxqqFzfs+ZChWEa/P37lTXVkLVOYCe5NJUPm8Zvip7DHB2vk25Fk3HMHG9M50KNj1Hp4etPI7yiLNLNCh5V410mf3xhZChMUrH6PMl/A+sVv68ulcVeIZ68eMuQktxz1ULohBdSExZGmknVrwfF/fLTKWxHlVBjB3yDlLIJO3nTFKaQ4RzPa/0If+FcbY+hIdzSjIAK6W3fRlbVuWHYR` |
-> | koreacentral | rsa-sha2-512 | `KAji7Q8E2lT3+lSe7h74L6rfPnLEfGVzYZ/xyM96//0=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDxZYb5eIWhBmWSwNU6G9FFDRgqlZjYYorMSXJ4swHm4YYHKGZTf4JOE5d87MNtkVgKe2942TQxA1t2TaENlmNejeVG5QZ4to+nVnwsFov2iqAYChoI6GlhpwzyPsO0RkqLB8mvhoKMel1sNGfmxjxYVFt4OSPHDzNIU4XjGfW24YURx/xRkLU1M9zBNADDx+41EMNRT7aBXrKW9MzsxkfCM3bYwjdBbI2Yi2nUqARm+e/sBPLTqVfjuMFvosacYc43MqepFSQoZE5snwYxkLJzltAbxNUysJs277isnGgezh9p5T2MCxtCERU0lvp7M52hd1p75QEtNrdadfDprzT9` |
-> | koreacentral | ecdsa-sha2-nistp256 | `XjVSEyGlBtkONdvdw11tA0X1nKpw5nlCvN/0vXEy1Gc=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPYiomaLB3ROxZvdfqiilpZ+2XJiPDeIIv4/fnQRZxnCBCFrUm7ATB6bMBSUTd00WfMhnOGj4hKRGFjkE+7SPy4=` |
-> | koreacentral | ecdsa-sha2-nistp384 | `p/jQTk9VsbsKZYk09opQVQi3HzvJ/GOKjALEMYmCRHw=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBN3NA7U4ZC576ibkC/ACWl6/jHvQixx+C6i2gUeYxp7Tq6k4YLn7Gr1GNr+XIitT6Jy5lNgTkTqmuLTOp7Bx9rGIw9Or37EGf7keUM42Urtd+9xF1dyVKbBw0pIuUSSy+w==` |
-> | asiasoutheast | rsa-sha2-256 | `f0cyRMVxUgtpsa9J6pwAMysk2MY/sybo5ioPjhy9LZk=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDWPK6PAGMTdzNkwKZt+A3Dhbnete6jyLLboOXWdv/QdhvjR2pNCMhGuWUxadaiLUxzZM7IvugSLGexQlZi5aCJ06DpaVYqZk/Q8l+QUydp9TfNg/kP+0OJXCJ6XdsVggboDIfrEN8ku4nfasD4QTo2tnmqZhmbIDUr38SP16PsH2bQAi2lZKg4DfWgnSFyj5sbMSDLljBEY6JQkLGiPcbqlYEN4kjB5mudE9c/ts6Jn1fhizBwJY/pE3kOydq8dCMXYFMZ6NafPacCi7Pe5zcTKfi/daioVlSXQhWK3jNzCVENonF2xWSPH+1T5F2IOV0wb0HL2l8d02x5Bw2Su4aF` |
-> | asiasoutheast | rsa-sha2-512 | `vh8Uh40NCD3iHVh5KEcURUZrT3hictlF9pMDEoK5Rxk=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQCdL+E/W2RpmJiWMRg5EtMs0AE7BF2Qb5jnXXaIbwqr5/BGuUPLm43eVJJt5R0BmEJe2lYfYLAzinC9MhsxKSTHIt5u8QleyIAxI759M3DWZwFSKngjsHFRe/SvZOzc7gvtR7osdnVaXCTXY5NccLT34gDybEbjlmp+SEvSZZmXyy2wmUR3O022euBifKN0t9Tk1mkLYhbfRySQi0ZADWazjd7loM9ZHArVe8y9oDrs7QYX4eHIVRbgtsBbkR3g9zP3VWVMERFyi6cU0Dyvue8DCx9YzNsdmKjkB2dvYTMVcUkad81pbO81jpLb1wL25WPHIPHqTOLZhdn9JxLn245Z` |
-> | asiasoutheast | ecdsa-sha2-nistp256 | `q7OsE02p9SZ6E63b+Mxri1wbI5WfkdWcIJgAP2+WTg8=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEbvjkwSA0RQuT2nQf8ABKc21s/kcC/7I5431oNEwQPZQ8S18RAKktv6ti19Ju8op6NOZZ3Up9lOn3iybxHgy+s=` |
-> | asiasoutheast | ecdsa-sha2-nistp384 | `HpneuSwbRG7eiqHGEAkSXF0HtjvccoT3OIgeQbPDzoE=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMGAMUN+0oyuXuf6rkS+eopeoISA2US3UrgAovMwoqAeYSPoHKy9n/WKczsHPy/G+FKsXM4VlMHtNhEAxYwjtueF0Sb2GRZFzngeXMfVZPVL5Twph/pT6ZJnUD8iloW0Mw==` |
-> | australiaeast | rsa-sha2-256 | `MrPZLU8llsG+SzgBN8eH702H4zuynyYgqqQLQmWGDEs=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDsRwHZ+DKINZZNP0GO6l7mFIIgTRnJy7ikg07h54iuk+KStoB2Cwppj+ulPs0NiR2RgAsP5nchWRZQysjsfYDui8wha6JEXKvWPlQ90rEYEs96gpUcbVQesgfH8ILXK06Kn1xY/4CWAHEc5U++66e+pHQulkkFyDXTsRYHsjTk574OiUI1` |
-> | australiaeast | rsa-sha2-512 | `jkDaVBMh+d9CUJq0QtH5LNwCIpc9DuWxetgJsE5vgNc=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDFHirQKaYqkcecqdutyMQr1eFwIaIM/h302KjROiocgb4iywMAJkBLXmhJn+sSbagM5tzIk4K4k5LRjAizEIhC26sc2aa7spyvDu7+HMqDmNQ+nRgBgvO7kpxVRcK45ZjFsgZ6+mq9jK/eRnA8wgG3LnM+3zWaNLhWlrcCM0Pdy87Cswev/CEFZu6o6E6PgpBGw0MiPVY8CbdhFoTkT8Nt6tx9VhMTpcA2yzkd3LT7JGdC2I6MvRpuyZH1q+VhW9bC4eUVoVuIHJ81hH0vzzhIci2DKsikz2P4pJT0osg5YE/o9hVJs+4CG5n1MZN/l11K8lVb9Ns7oXYsvVdtR2Jp` |
-> | australiaeast | ecdsa-sha2-nistp256 | `s8NdoxI0mdWchKMMt/oYtnlFNAD8RUDa1a4lO8aPMpQ=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBKG2nz5SnoR5KVYAnBMdt8be1HNIOkiZ5UrHxm4pZpLG3LCuzLEXyWlhTm8rynuM/8rATVB5FZqrDCIrnn8pkw=` |
-> | australiaeast | ecdsa-sha2-nistp384 | `YmeF1kX0R0W/ssqzKCkjoSLh3CciQvtV7iacYpRU2xc=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFJi5nieNPCIxkYS7HKMH2fQgONSy2kGkViQucVhWrTJCEQMVz5peL2JZJFjf2a6zaB2olPaBNEkeuJRHxGyW0luTII9ZXXUoiGQH9l05B41mweVtG6pljHfuKQ4HzoUJA==` |
-> | japaneast | rsa-sha2-256 | `P3w0fZQMpmRcFBtnIQH2R88eWc+fYudlPy7fT5NaQbY=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQCZucqkz4UicI20DdIyMMeuFs+xUxMytNp7QaqufmA2SgUOoM387jesl27rwvadT6PlJmzFIBCSnFzjWe5xYy3GE59hv4Q3Fp3HMr5twlvAdYc5Ns5BEBEKiU0m88VPIXgsXfoWbF0wzhChx8duxHgG4Cm+F8SOsEw/yvl+Z/d42U9YzliQ1AafNj4siFVcAkoytxKZZgIqIL4VUI322uc93K5OBi9lgBqciFnvLMiVjxTWS/wXtVEjORFqbuTAu/gM4FuKHqKzD1o39hvBenyZF2BjIAfkiE6iYqROd75KaVfZlBSOOIIgrkdhvyj9IfaZFYs3HkLc7XgawYe6JVPR` |
-> | japaneast | rsa-sha2-512 | `4adNtgbPGYD+r/yLQZfuSpkirI9zD5ase01a+G7ppDw=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQCjHai98wsFv0iy+RPFPxcSv8fvTs3hN/YnuPxesS21tUtf0j5t8BTZiicFg6MLOQJxT4jv5AfwEwlfTqvSj3db6lZaUf/7qs/X9aN1gSoQNnUvALgnQDYGjNYO8frhR7S0/D/WggQo2YKMAeNLRScT7Pg/MJaOI12UhoUloCXbTAP1c85hYx0TGKlGWpFjfen/2fwYEKR1vuqaQxj+amRatnG+k18KWsqvHKze8I2D19cn5fp2VkqXzh6zQ1s5AMc5B9qIF48NIec9FAemb9pXzOoYBDFna0qNT4dfeWOQK6tM/Ll10jafaw2P32dGBF8MQKXB2sxtcC0nU4EEtS5d` |
-> | japaneast | ecdsa-sha2-nistp256 | `IFt/j4bH2Jc0UvhUUADfcy3TvesQO+vhVdY4KPBeZY8=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKVq+uiJXmIlYS367Ir9AFq/mL3iliLgUNIWqdLSh7XV+R8UJUz1jpcT1F6sJlCdGovM3R5xW/PrTQOr3DmikyI=` |
-> | japaneast | ecdsa-sha2-nistp384 | `9XLsxg1xqDtoZOsvWZ/m74I8HwdOw9dx7rqbYGZokqA=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFh7i1cfUoXeyAgXs+LxFGo7NwrO2vjDwCmONLuPMnwPT+Ujt7xelTlAW72G3aPeG2eoLgr6zkE48VguyhzSSQKy7fSpLkJCKt9s0DZg2w0+Bqs44XuB43ao6ZnxbMelJQ==` |
-> | canadaeast | rsa-sha2-256 | `SRhd9gnvJS630A8VtCYMqc4djz5R8EiG7spwAUCYSJk=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQD2nSByIh/NC3ZHsjK3zt7mspcUUXcq9Y/jc9QQsfHXQetOH/fBalf17d5odCwQSyNY5Mm+RWTt+Aae5t8kGm0f+sKVO/4HcBIihNlAnXkf1ah5NoeJ+R0eFxRs6Uz/cJILD4wuJnyDptRk1GFhpAphvBi0fLEnvn6lGJbrfOxuHJSXhjJcxDCbmcTlcWoU1l+1SaYfOzkVBcqelYIimspCmIznMdE2D9vNar77FVaNlx4J9Ew+HQRPSLG1zAh5ae1806B6CHG1+4puuTUFxJR1AO+BuT6fqy1p0V77CrhkBTHs8DNqw9ZYI27fjyTrSW4SixyfcH16DAegeHO+d2YZ` |
-> | canadaeast | rsa-sha2-512 | `60yzcSSOHlubdGkuNPWMXB9j21HqIkIzGdJUv0J57iY=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDDmA4meGZwkdDzrgA9jAgcrlglZro0+IVzkLDCo791vsjJ29bTM6UbXVYFoKEkYliXSueL0q92W91IaFH/NhlOdW81Dbjs3jE+CuE4OX5pMisIMKx45QDcYCx3MJxqZrIOkDdS+m8JLs6XwM07LxiTX+6bH5vSwuGwvqg5gpnYfUpN0U5o7Wq7H7UplyUN8vsiDvTux3glXBLAI3ugjn6FC/YVPwMOq7Luwry3kxwEMx4Fnewe6hAlz47lbBHW6l/qmzzu4wfhJC20GqPzMJHD3kjHEGFBHpcmRbyijUUIyd7QBrnfS4J0xPVLftGJsrOOUP7Oq8AAru66/00We501` |
-> | canadaeast | ecdsa-sha2-nistp256 | `YPqDobCavdQ/zGV7FuR/gzYqgUIzWePgERDTQjYEE0M=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKlfnJ9/rlO5/YGeGle1K6I6Ctan4Z3cKpGE3W9BPe1ZcSfkXq47u/f6F/nR7WgrC6+NwJHaMkhiBGadEWbuA3Q=` |
-> | canadaeast | ecdsa-sha2-nistp384 | `Y6FK9rWscBkyKN7mgPAEj0jKFXrv4mGNzoaZ9ttc4io=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDS8gYaqmJ8eEjmDF2ET7d2d6WAO7SgBQdTvqt6cUEjp7I11AYATKVN4Isz1hx8qBCWGIjA42X1/jNzk3YR7Bv/hgXO7PgAfDZ41AcT4+cJd0WrAWnxv0xgOvgLKL/8GYQ==` |
-> | canadacentral | rsa-sha2-256 | `KOYkeGvx4egH9DTGgxiONDMvSlkEkoU8cXWnynOEQRE=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC7jhZvp5GMrYyA2gYjbQXTC/QoSeDeluBUpji6ndy52KuqBNXelmHIsaEBc69MbixqfoopaFyJshdu7X8maxcRSsdDcnhbCgUO/MnJ+am6yb33v/25qtLToqzJRXb5y86o9/WtyA9DXbJMwwzQFqxIsa1gB` |
-> | canadacentral | rsa-sha2-512 | `tdixmLr++BVpFMpiWyVkr5iAXM4TDmj3jp5EC0x8mrw=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDNMZwL0AuF2Uyn4NIK+XdaHuon2jEBwUFSNAXo4JP7WDfmewISzMWqqi1btip/7VwZbxiz98C4NUEcsPNweaw3VdpYiXXXc7NN45cC32uM8yFeV6TqizoerHf+8Hm8avWQOfBv17kvGihob2vx8wZo4HkZg9KacQGvyuUyfUKa9LJI9BnpI2Wo3RPue4kbaV3JKmzxl8sF9i6OTT8Adj6+H7SkluITm105NX32uKBMjipEeMwDSQvkWGwlh2oZwJpL+Tvi2G0hQ/Q/FCQS5MAW9MCwnp0SSPWZaLiA9EDnzFrugFoundyBa0vRjNGZoj+X4+8MVG2fYgOzDED1JSPB` |
-> | canadacentral | ecdsa-sha2-nistp256 | `HhbpllbdxrinWvNsk/OvkowI9nWd9ZRVXXkQmwn2cq4=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBuyYEUpBjzEnYljSwksmHMxl5uoErbC30R8wstMIDLexpjSpdUxty1u2nDE3WY7m4W/doyXVSBYiHUUYhdNFjg=` |
-> | canadacentral | ecdsa-sha2-nistp384 | `EjEadkKaEgaNfdwXtzlqanUbDigzsdzcZJeTzJfQXP0=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBORAcpaBXKmSUyCLbAOzghHvH8NKzk0khR0QGHdru0kiFiE16uz9j07aV9AiQQ3PRyRZzsf+dnheD7zuEZAewRiWc54Vg8v8QVi9VUkOHCeSNaYxzaDTcMsKP/A7lR2AOQ==` |
-> | switzerlandn | rsa-sha2-256 | `4cXg5pca9HCvAxDMrE7GdwvUZl5RlaivApaqz8gl7vs=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQCqqSS6hVSmykLqNCqZntOao0QSS1xG89BiwNaR7uQvz7Y2H+gJiXhgot6wtc4/A5743t7svXZqsCBGPvkpK05JMNZDUy0UTwQ1eI9WAcgFAHqzmazKT1B5/aK0P5IMcK00dVap4jTwxaoQbtc973E5XAiUW1ZRt6YComeoZB6cFVX28MaE6auWOPdEaSg8SlcmWyw73Q9X5SsJkDTW5543tzjJI5hnH03LAvPIs8pIvqxntsKPEeWnyIMHWtc5Vpg8LB7CnAr4C86++hxt3mws7+AOtcjfUu2LmLzG1A34B1yEa/wLqJCz7jWV/Wm21KlTp1VdBk+4qFoVfy2IFeX9` |
-> | switzerlandn | rsa-sha2-512 | `E63lmwPWd5a6K3wJLj4ksx0wPab1lqle2a4kwjXuR4c=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQCtSlbkDdzwqHy2C/pAteV2mrkZFpJHAlL05iOrJSFk0dhq8iwsmOmQiF9Xwth6T1n3NVVncAodIN2MyHR7pQTUJu1dmHcikG/JU6wGPVN8law0+3f9aClbqWRV5tdOx1vWQP3uPrppYlT90bWbD0IBmmHnxPJXsXm+7tI1n+P1/bKewG7FvU1yF+gqOXyTXrdb3sEZOD6IYW/PusR44mDl/rV5dFilBvmluHY5155hk1O2HBOWlCiDGBdEIOmB73waUQabqBCicAWfyloGZqB1n8Eay6FksLtRSAUcCSyBSnA81phYdLiLBd9UmiVKPC7gvdBWPztWB+2MeLsXtim9` |
-> | switzerlandn | ecdsa-sha2-nistp256 | `DfyPsw04f2rU6PXeLx8iVRu+hrtSLushETT3zs5Dq7U=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJICveabT6GPfbyaCSeU7D553Q4Rr/IgGjTMC8vMCIUJKUzazeCeS3q46mXL2kwnBLIge9wTzzvP7JSWf+I2Fis=` |
-> | switzerlandn | ecdsa-sha2-nistp384 | `Rw0TLDVU4PqsXbOunR2BZcn2/wqFty6rCgWN4cCD/1Y=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLLhGaEyHYvfVU05lmKV4Rnrl9YiuSSOCXjUaJjJJRhe5ZXbDMHeiC67CAWW3mm/+c5i1hoob/8pHg7vmeC+ve+Ztu/ww12JsC4qy/CG8qIIQvlnDDqnfmOgr0Svw3/Izw==` |
-> | uaec | rsa-sha2-256 | `GW5lrSx75BsjFe4y4vwJFdg454fndPjm4ez2mYsG3zs=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDAQiEpj9zkZ8F3iDkDDbZV4A3+1RC/0Un6HZVYv5MCVYKqsVzmyn+7rbseUTkZMO/EqgF8+VWlwSU5C2JOesZtKXAgNzXBSOER3NbiucB5v1b1cC+8Qo4C2+iTHXyJSKxV0bTz55crCfhKO1KTQw3uZoYh6jE9xI1RzCI1J4qP+afZQQhn3H+7q+8kTMhmlQrfKuMWennoWZih+uTe9LPHjlvzwYiXkS2sOIlKtx8eLDJJg2ONl7YKSE4XVq7K33807Gz5sCD/ZV+Bn+NyP2yX14QKcyI97pkrFdcJf2DZi7LdTuEVPx3qK/rHzmzotwe6ne6sfV+FJpowUUTbKgT5` |
-> | uaec | rsa-sha2-512 | `zflL4olL2bga9JCxPA/qfvT2jSYmIfr2RY6GagpUjkE=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDAtxSG7lHzGFclWVuErRZZo6VG5uaWy1ikhb67rJSXdTLuSGDU+4Boj4wKxK0EyVKXpdQ3VrIwC4rOEy/lKAlnI2PrkrMjluau2aetlwW0hCBKAcgEOpMeMJJxCvv9EVatmEhvCe0ARyVM539058da9LzoZ2geFnFIbh3t8fNCaJZTNSS5PW1SLkspSqYXUYJWzu8Kx9l3LTzlmJT1DukKLIKj5ZDwuzOIN5m1ePYp4MzfIeBN6ys8df8HqXLoEXE+vOZWOzwkPVWoTsYvwB8j9+FHECAVf4Gcm8sPvRZA/RKDn1dGW2THzVw/VI/F87fFC7stLmZJ1v+a9TTFE649` |
-> | uaec | ecdsa-sha2-nistp256 | `P3KxgoZgjHHxid66gbkRETjPsHUsNiPt5/TFU0Kby6I=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOvHAXCWC9HGJnr5SRW8I1zZWsyHIczEdPpzmafrU8drYmhpRxlD6HlKnY7iXqfq8bOIK063tpVOsPbrVevAKPs=` |
-> | uaec | ecdsa-sha2-nistp384 | `E+jKxd6hnfVIXPQYreABXpZB7tppZnWUxAelvEDh874=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMDLyroqceuIpmDQk/gvHHzFup7NZbyzjXMdGrkDvZDE2H+6XTthCGSVNVmwqdyHE4yGw88jgW1TfWTAZxCxTfXD+xF72iYyBAsejgiyYY/0x9NKM/lrtw8mnRtkZzLyrA==` |
-> | germanyn | rsa-sha2-256 | `ppHnlruDLR73KzW/m7yc3dHQ0JvxzuC1QKJWHPom9KU=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDNNjCcDxjL3ess1QQEkb9n5bPYpxXpekd32ZX4oTcXXFDOu+tz/jpA8JZL8lOBAcBQ5n+mZF0Pot1o+B1JxQOHHiEZdcdKtLtPWrI2OQyxZnvo7sCBeSk+r/j3mjqpvq3+KpwoTZKpYF/oNRXVHU4VFs+MzvqWd6vgLXsDwtJrriojtkrWy0bTa4NjjN/+olsITxDmR0TGAu+epCJptdpKjTcgcn25QuIKy37/zVW8BJ5QsZmIRwvlCYxj11UOAoDcbapJcnzJYpOmQTNpdzkazjViX17DZW17Jmfhc6Dk3H+TEseilkbq1ZjsUyGBBxklWHid7+BgKVXOoIgG6+0x` |
-> | germanyn | rsa-sha2-512 | `m/OFTRHkc3HxfhCKk1+jY1rPJrT9t4FYtQ/Wmo3MOUE=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDkN3CN1VITaHy/CduQaZIkuKSC/+oX19sYntRdgCblJlIzUBmiGlhmnKXyhA29lwWWAYxSbUu0jEJUZfQ6xdQ4uALOb815DLNZtVrxqSm4SjvP5anNa7zRyCFfo4V8M4i6ji6NB+u+PwH5DOhxKLu6/Ml9pF8hWyfLRft8cg4wORLLhwGt2+agizq7N7vF2nmLBojmS0MMmpH5ON/NFshYIDNKPEeK9ehpaARf4fuXm440Zqzy/FfpptSspJIhbY2zsg4qGQgYGZyuRxkLzYgtD/uKW5ieFwXPn+tvVeVzezZTmGMoDlkPX18HSsuNaRkdnwpX8yk1/uoBCsuOFSph` |
-> | germanyn | ecdsa-sha2-nistp256 | `F4o8Z9llB5SRp0faYFwKMQtNw/+JYFKZdNoIuO7XUU0=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMoIo/OXMP7W5a5QRDAVBo+9YQg4YBrl3J7xM91PUGUiALDE1Iw8Uq4e1gLiSNA6A46om5yY/6oGj4iqEk8Ar8Y=` |
-> | germanyn | ecdsa-sha2-nistp384 | `BgW5e9lciYG1oIxolnVUcpdh3JpN/eQxfOyeyuZ6ZjI=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ69kH0urhCdcMMaqpID2m+u8MECowtNlYjYXoSUn6oEhj7VPxvCRZi5R02vHrtrTJslsrbpgYHXz+/jSLplKpccQGJFaZso9WWgEJH1k7tJOuOv0NIjoBTv7fY5IxeAvQ==` |
-> | australiac2 | rsa-sha2-256 | `sqVq1zdzD3OiAbnDjs70/why2c3UZOPMTuk5sXeOu4Y=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDKNZVZ5RVnGa0fYSn+Nx3tnt526fmMf+VufOBOy5/hEnqV6mPKXMiDijx2gFhKY4nyy957jYUwcqp1XasweWX6ISuhfg4QWcygW0HgmVdlSDezobPDueuP0WdhVsG3vXGbEYnrZOUR5kQHagX/wWf6Diy1J5Cn2ojIKGuSuGY/9bu3bnZzKt08fj+gQCEd1GxpPoBUfjF/73MM57IRhdmv919rsGD5nsyZCBmqFoKlLH/gKYZ4B3hylqf/6gER7OeZmG2S/U/fRAN0hVK7RkHNf2CFoCmuxXS6r87BznT5vF3nmd7tsf0akaxLjfWRbKLMWWyZkzU4/jijpbDDuu1x` |
-> | australiac2 | rsa-sha2-512 | `p6vLHCTBcKOkqz7eiVCT6pLuIg7h4Jp41lvL/WOQLWM=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDcqD2zICW1RLKweRXMG9wtOGxA5unQO/nd9yslfOIo54Ef0dlhAXGFPmCd3Yj60Gt/CIpqguzKLGm4D3nf19KjXE8V59cD7/lN6mVrFrm+6CU44JAzKN9ERUelxhSQKi/dsDR773wt4jsAt4SLBRrs19RC2fkYnxZgC/LzNZKXXY3FFb06uwheJjGOHyeQJbGpaV3hlelhOSV1UF2JAB8v6d8+9+S+b666EcpQ70JtxtA8h1s30hqhTKgYdRYMPfz7lqKXvact2NBXlqYRPod5cLW7lYBb2LzqTk1D44d8cwDknX2pYQJpgeFwJhB6SO9mF/Ot+jk+jV/CxUI55DPd` |
-> | australiac2 | ecdsa-sha2-nistp256 | `m7Go9P1bwcPHAcZzRSXdwYroDIdZzt0jhxkJW42YGKY=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHp76felOL7GAHcJoW6vcCS83jkeR6RdFCwUk0Jf6v7SFoqYNZfTaryy2n0vwG1W1dAyHvOjB1+gzTZOkHN/cAI=` |
-> | australiac2 | ecdsa-sha2-nistp384 | `9Jc39OueTg3pQcq8KJgzsmPlVXxILG24Euw27on7SkY=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEduOE61sSP2BozvJ6QLtRDZ7j0TenX7PjcpPVtYIQuKQ+h3qakXFwFnj8N3m8+LpTXYO41mgX7N02Rl12QvD7lDpUgHUChaNpUcMcSpm5qvguLyG6XZg2BDNd6pyx+fpw==` |
-> | southafricaw | rsa-sha2-256 | `aMMzaNmXR+V1NrwLmovyvKwfbKQ6aAKYiA5n8ETYQmU=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDGhe98UTnljsYaeJwtP3ABvT/hZP6Mp1r5beyJ2SWpdqZSZaKC+UQlWLu6WhLxLZ+5snB+YAlC56u4qOdDHLoid6vbAR/FPIcJlvQfcFJD88nihv9sq1kUX3JXrh0ZUrl2/Zj71aNlM/RL1OnXK/Pg2E+wu4EfnQTrzlWMhR8bxlQA0jH1zmfFN/6BTwP2if29TNlQkWuW3uq3rccY1GA6n0QtlucanPNRzsBtAzsH5/oFuB5R4sD/Msw0itvWuQP4e0y+Vdov1My/rjK19xLce6AhWmmhwkn5qxHdIy158C4cWnSkQvkYzPnwsi7KT9WRH7vfr8qD9zlA5mO+IDxJ` |
-> | southafricaw | rsa-sha2-512 | `Uc7QB0fT4NGyBp34GCAt8G4j1ZBXh/3Wa2YRlILu818=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQCijtmaOHIcXjI07fVugz1M33+amlOEqdqtVgOlLmFRKSehPW2+6iGpAjQVwzsYOx32Hp5O07xj/PhiFsbBBqZXGHmuSIOJYa7tQSFvwclO+JW/kuoELXQLwnHxUfPyq4tYoj83GSZ5k/KRlEtbmjEwcozMQVya/7MzulAeV4nN6PDxoLjXlfGEQU2ZCGz2neeisQEM8+hZNuEH+O9O03g7CW8bwiI1Y70/bnNq95xJ5F7lRpwtJNWlx+kmUiNpfXOUPxZAUsny7z1Ka5XKEB1fDP8E/jAtrSWrRPDJew8lFpQeWukwB5tf3F3bh1SuSKaSQqKBArnSpJizWxp0brZZ` |
-> | southafricaw | ecdsa-sha2-nistp256 | `pr1KB8apI+FNQLKkzvUXx0/waiqBGZPEXMNglKimUwA=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPvbvOfXQjT+/3+jQtW3FBAnPnaypYSUhZMkTTSfd7RQMmSxsLNmDooERhVuUTa7XCTlpDNTSPdnnaa6P1a+F6A=` |
-> | southafricaw | ecdsa-sha2-nistp384 | `A3RfMOd6dGgUlcrkXL1YRKNXIdAB8M1lF9qwmy6PjFg=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNaJmo4QGmo6pbLHOXh06Rz9inntdxmuOtVxlJBO1i/ZK5les/AuaILMW7oQCxOKvZs/xI+P0MWRfrNgWSSapy5hNuTkbl8IqO4pH/lO//zdaHmVBC1kPnujDM9znJs6Rg==` |
-> | jioinw | rsa-sha2-256 | `hcy1XbIniEZloraGrvecJCvlw6zZhTOrzgMJag5b5DA=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDOBU9e1Ae68+ScLUA5O1gaZ3eq0EGqBIEqL3+QuN8LYpF3Bi/+m43kgjhgiOx5imPK6peHHaaT/nEBQFJKFtWyn8q2kspcDy1xvJfG8Jaks1GQG33djOItiHlKjRWMcyWFvisFE2vVkp3uO0xG4nMDLM2rFazkax+6XA5cf2iW2SfL6Trs4v1waakU/jQLA7vsrx14S+wGEdVINTSPeh5DHqkLzTa3m2tpXVcUA4CG8uQZM8E/3/y0BuIW0Ahl/P6dx35W1Al7gnaTqmx7+idcc/YVe0auorZWWdyclf1sjnAw6U8uMhWmQ0dZgDehDtshlHyx84vvJ1JOJs0+6S2l` |
-> | jioinw | rsa-sha2-512 | `LPctDLIz/vqg4POMOPqI1yD9EE9sNS1gxY6cJoX+gEY=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDOH+IZFFfJN4lpFFpvp5x1lRzuOxLXs0WfpcCIACMOhCor2tkaa/MHlmPIbAqgZgth5NZIWpYkPAv7GpzPBOwTp3Bg5lUM7MXSayO/5+eJjMhB5PUCJ0We8Kfgf/U+vbaMIg9R8gJKutXrANd3sAWXMwWqKUw+ZX/AC7h58w04gb1s+lNOQbfhpqkw8+mrOj2eKH8zHYUJQBUYEyDHqirj565r7HhBtEZImn/ioJS+nYT5Zl/SNtW/ehhUsARG9p6O4wSy20Ysdk7b9Ur2YL0RyFa6QhWQeKktKPVFQuMMLRkYX7dv35uAKq8YN833lLjGESYNdCzYmGTJXk5KYZ8B` |
-> | jioinw | ecdsa-sha2-nistp256 | `mBx6CZ+6DseVrwsKfkNPh9NdrVLgwhHT4DEq9cYfZrg=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPXqhYQKwmkGb8qRq52ulEkXrNVjzVU4sGEuRFn4xXK8xanasbEea3iMOTihzMDflMwgTDmVGoTKtDXy8tQ+Y8k=` |
-> | jioinw | ecdsa-sha2-nistp384 | `lwQX9Yfn7uDz/8gXpG4sZcWLCAoXIpkpSYlgh8NpK1E=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLKY2+wwHIzFOfiKFKFHyfiqjUrscm0qwYTAirNPE1GI6OwAjconeX072ecY3/1G0dE7kAUaWbCKWSO3DqM4r6O+AewzxJoey85zMexW23g2lXFH7HkYn9rldURoUdk31A==` |
-> | swedens | rsa-sha2-256 | `kS1NUherycqJAYe8KZi8AnqIQ9UdDbpoEpcdHlZ702g=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDJ+Imy6VuOvZsel9SCoMmej4kFvP8MDgDY9EdgfkgpjEfOSk+vmXBMCFtthI7sHRggkuXQE5v6OkOPwuWuVWjAWmclfFIz+TTNE5dUUY6L+UMipDEcwFxtufnY3AW0v2MW5lOFHWbx3w7605yb2AFQuZjvngkjdelhDpVpX9a0XdPa7zUYBwXdxWeteH+i4ZJ62sjlBGzYRjFhK/y1rUKR3BVR5xtP9ofzqE1n/TRLpViU8iy4bpsQntTWa71xVoTFtE29h3ESw4QG2lRCwk7NIf8efyNdR25+YpVGIysAxXG2smGAi2W/YXUjteCE7k3IU+ehHJdWKB3spUBSoF/V` |
-> | swedens | rsa-sha2-512 | `G+oX014UJXR0t1xHrCi715XuoHBkBxJMdH8hmVMilJc=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDCa5Ny0EUd8yLOgzczm6Zge+D39VY7hpG+et2ln0i/HdYLd1aijEiF/0RDgnJYxZM4RhPZHxrVZXJXLsLa2T+ud+cqifvsjudsUSCzWNY3pHAwKBTSuu8Po+TrJXx8b+ogg+EhTh1BZQzIVQbtLwqRFJ3beLtvhp+V1pPWOoXRiN6Rq+x6ciT37jOdp033rbEM3AtzWdRBvRxUiVxKoRXcDYwAAIb3joaZ26p69Vj7HpD0HAf7w9f70zIwIzqrW4RcHcP+RbDVzNukK8gWP66OgSKrAQgRmibS6SEJx4kgkaghiQfm1k1bXkTnlKlz956DHkTkpMQe21/eW1Prs+q1` |
-> | swedens | ecdsa-sha2-nistp256 | `8C148yiGdrJCGF6HpDzINhGkB5AAyWDqkauJClRqCZs=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEREKXJT7obM0RXGFrUPmJOoEpJD8T+QT29UEt3/jZrUzVzqXLV/9+VK0xqv1suljhUoUoClBdKqx5E/Sv1kSV4=` |
-> | swedens | ecdsa-sha2-nistp384 | `ra8+vb8aSkTBsO0KAxDrl2lN9p41BxymtRU6Seby83M=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIMby6y3wzWnzE304DjregQcSqKTsoMx2vPGk7OlBtjFKoubZlBRQH4jQrtPbJv/Hpf8f+D0JmvPe5G75yZFG1BcP5eB4aonAr0NNCw+3sCb50JVpoT4yoT787KKYf+5qg==` |
-> | jioinc | rsa-sha2-256 | `DmNCjG1VJxWWmrXw5USD0pAnJAbEAVonkUtzRFKEEFI=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC/x6T0nye3elqPzK8IF+Q70bLn2zg4MVJpK3P6YurtsRH8cv5+NEHyP0LWdeQWqKa9ivQRIQb8mHS+9KDMxOnzZraUeaaJLcXI0YV512kqzdevsEbH6BSmy8HhZHcRyXqH0PjxLcWJ5Wn9+caNhiVC40Oks7yrrZpAVbddzD9y/eJfguMVWiu1c8iZpYORss1QYo7JqVvEB6pLY03NXWM+xti1RSs+C6IEblQkPvnT3ELni9T1eZOACi12KGZHVLU9n27Nyg/fPjRheYSkw/lkkKDG0zvIQ7jr/k8SCHGcvtDYwRlFErFdGYBlIE888le2oDNNoKjJuhzN6S7ddpzp` |
-> | jioinc | rsa-sha2-512 | `m2P7vnysl2adTz/0P6ebSR7Xx8AubkYkex6cmD9C0ys=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDQHFDt8zTk+Hqh912v0U8CVTgAPUb8Kmuec+2orydM/InG+/zSuqQHsCZaD2mhEg8kevU8k2veF5z2sbko5TR/cghGg5dXlzz4YaKiNdNyKIGm2MdynXJofAtiktGhcB6ummctHqATfGSgkLJHtLvstzTVbVK1zgxXcB8hA52c2EPB1cN1TkAKEyiYNX7fKFe1EEPCxdx3fC/UyApKdD+D432HCW/g8Syj/n7asdB8EQqcoCT3ajh2wG2Qq0ZxjVbbrFImlr0VoTqLImJ4kZ9d2G7Rq2jqrlfESLAxKVDaqj+SjyWpzb3MHFSnwJZybCKXyTt+7BXpKeeGAcHpTs3F` |
-> | jioinc | ecdsa-sha2-nistp256 | `zAZ0A1pk0Xz8Vr/DEf8ztPaLCivXxfajlKMtWqAEzgU=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDow29ds+BRDNTZNW70CEoxUjLiUF+IHgaDRaO+dAWwxL13d+MqTIYY4I0D7vgVvh0OegmYLXIWpCdR8LvVT7zA=` |
-> | jioinc | ecdsa-sha2-nistp384 | `OTG7jxUSj+XrdL28JpYAhsfr6tfO7vtnfzWCxkC/jmQ=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ/Bb3/3u/UIcYGRLSl7YvOObb43LO5Ksi0ewWJU+MPsPWZr7OTTPs76TdwXMvD8+QuY8U9JxgQQrNmvbpabmbGENkllEgjGlev5P2mHy/IZZAUQhAeeCinCRvTsiOOoLw==` |
-> | brazilse | rsa-sha2-256 | `D+S7uHDWy0VPrRg9mlaK70PBPghBRCR1ru/KEsKpcjA=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQCz86hzEBpBBVJqClTRb7dNolds/LShOM4jucPRawZrlKGEpeKv70Khk8BdI4697fORKavxjWK0O9tQpAJHtVGSv3Ajwb9MB7ERKncBVx/xfdiedrJNmF0G+p97XlCiwkAqePT/9IFaMy1OFqwl6LF7p7I0iBKZX0UgePwdiHtJnK0foTfsASNY4AEVcXHEuaulLFJKUjrr6ootblEbPBnC6IxTPj9oD+Nm0rtlCeD5JtCRFgKUj3LWybEog/AnnMNQDQ+vMPbsCnfsW/J/HQc+ebx3NtcumL+PIxqJw2Pk6mRpDdL+vs2nw/PtnPkdJ7DjIQYLypBSi3AFIONSlO15` |
-> | brazilse | rsa-sha2-512 | `C+p2eAPf5uec0yG+aeoVAaLOAAf0p8gbBNss3xfawPQ=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDV3WmETlQwzfuYoOsPAqfB9Z2gxsNecbpuwIBYwitLYKmJnT9Q3SNSgqnBiI1TKWyEweerdQaPnEvz9TeynGqSmLyGT0JJXQXFQCjTCgRHP4WD0Q+V7HWHnWYQ5c2e8tKEVA1jWt57dcrFlrGKEsywuMeEX21V13qQxK2acXVRWJPWgQCVwtiNpToc/cILOqL5XXKnSA81Ex7iRqw8QRAGdIozkryisucy+cStdJX6q+YUE5L62ENV8qMwJdwUGywEpKhXRg5VQKN0ciFqvyT/3cZQVF+NkUFGPnOi0bk4JzHxWxmQNTIwE7bmPsuniw5njD3ota/IPUHV2og190Xx` |
-> | brazilse | ecdsa-sha2-nistp256 | `dhADLmzQOE0mPcctS3wV+x2AUlv1GviguDQgSbGn/qs=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPYuseeJN3CvtyPSKOz5FSu7PoNul+o6/LB62/MW9CUW+3AmqtVANVox1XQ8eX/YhL0a5+brjmveZPQS6M09YyQ=` |
-> | brazilse | ecdsa-sha2-nistp384 | `mjFHELtgAJkPTWO4dc7pzVVnJ6WLfAXGvXN1Wq2+NPs=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIwFI6bRmgPe0tN7Qwc03PMrlpBn+wBChcuofyWlKVd/Ec6t2dxHr/0ev0dqyKS2nbK7CAOQxSrV1NVYnYZKql/eC2sPqI1oxz7DzUtRnNKrXcH714ViN3RIY3DZA6rJOw==` |
-> | norwayw | rsa-sha2-256 | `Ea3Vj3EfZYM25AX1IAty30AD+lhXYZsgtPGEFzNtjOk=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDuxOcTdADdJHI8MFrXV00XKbKVjXpirS3ZPzzIxw0mIFxFTArJEpXJeRfb0OZzQ1IABDwoasp1u+IhnY1Uv2VQ8mYAXtC3He08+7+EXJgFU/xQ8qFfM4eioAuXpxR7M7qV/0golNT4dvvLrY4zHxbSWmVB7cYJAeIjDU8dKISWFvMYjnRuiI7RYtxh/JI5ZfImU65Vfxi26vqWm51QDyF5+FmmXLUHpMFFuW8i/g8wSE1C3Qk+NZ3YJDlHjYqasPm4QidX8rHQ1xyMX9+ouzBZArNrVfrA4/ozoKGnPhe4GFzpuwdppkP4Ciy+H6t1/de/8fo9zkNgUJWHQrxzT4Lt` |
-> | norwayw | rsa-sha2-512 | `uHGfIB97I8y8nSAEciD7InBKzAx9ui5xQHAXIUo6gdE=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDPXLVCb1kqh8gERY43bvyPcfxVOUnZyWsHkEK5+QT6D7ttThO2alZbnAPMhMGpAzJieT1IArRbCjmssWQmJrhTGXSJBsi75zmku4vN+UB712EGXm308/TvClN0wlnFwFI9RWXonDBkUN1WjZnUoQuN+JNZ7ybApHEgyaiHkJfhdrtTkfzGLHqyMnESUvnEJkexLDog88xZVNL7qJTSJlq1m32JEAEDgTuO4Wb7IIr92s6GOFXKukwY8dRldXCaJvjwfBz5MEdPknvipwTHYlxYzpcCtb9qnOliDLD2g4gm9d5nq3QBlLj/4cS1M9trkAxQQfUmuVQooXfO2Zw+fOW1` |
-> | norwayw | ecdsa-sha2-nistp256 | `muljUcRHpId06YvSLxboTHWmq0pUXxH6QRZHspsLZvs=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOefohG21zu2JGcUvjk/qlz5sxhJcy5Vpk5Etj3cgmE/BuOTt5GR4HHpbcj/hrLxGRmAWhBV7uVMqO376pwsOBs=` |
-> | norwayw | ecdsa-sha2-nistp384 | `QlzJV54Ggw1AObztQjGt/J2TQ1kTiTtJDcxxIdCtWYE=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNYnNgJKaYCByLPdh21ZYEV/I4FNSZ4RWxK4bMDgNo/53HROhQmezQgoDvJFWsQiFVDXOPLXf26OeVXJ7qXAm6vS+17Z7E1iHkrqo2MqnlMTYzvBOgYNFp9GfW6lkDYfiQ==` |
+> | West Europe | rsa-sha2-256 | `IeHrQ+N6WAdLMKSMsJiML4XqMrkF1kyOiTeTjh1PFyc=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDZL63ZKHrWlwN8gkPvq43uTh88n0V6GwlTH2/sEpIyPxN56/gpgWW6aDyzyv6PIRI/zlLjZNdOBhqmEO+MhnBPkAI8edlvFoVOA6c/ft5RljQOhv+nFzgELyP8qAlZOi1iQHx7UeB1NGkQ5AIwNIkRDImeft9Iga+bDF6yWu60gY43QdGQCTNhjglNuZ6lkGnrTxQtPSC01AyU51V1yXKHzgaTByrA4tK6cGtwjFjMBsnXtX2+yoyyuQz/xNnIN63awqpQxZameGOtjAYhLhtEgl39XEIgvpAs1hXDWcSEBSMWP4z04U/tw2R5mtorL3QU1CmokWmuAQZNQcLSLLlt` |
+> | West Europe | rsa-sha2-512 | `7+VdJ21y+HcaNRZZeaaBtk1AjkCNK4weG5mkkoyabi0=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDYAmiv6Tk/o02McJi79dlIdPLu1I5HfhsdPlUycW+t1zQwZL+WaI182G6SY728hJOGzAz51XqD4e5yueAZYjOJwcGhHVq6MfabbhvT1sxWQplnk3QKrUMRXnyuuSua1j+AwXsm957RlbW9bi1aQKdJgKq3y2yz+hqBS76SX9d8BxOHWJl5KwCIFaaJWb0u32W2HGb9eLDMQNipzHyANEQXI9Uq2qRL7Z20GiRGyy7VPP6AbPYTprrivo3QpYXSXe9VUuuXA9g3Bz3itxmOw6RV9aGQhCSp22BdJKDl70FMxTm1d87LEwOQmAViqelEeY+DEowPHwVLQs3rIJrZHxYV` |
+> | West Europe | ecdsa-sha2-nistp256 | `0WNMHmCNJE1YFBpHNeADuT5h+PfJ/jJPtUDHCxCSrO0=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBANx85rJLXM8QZi33y8fzvUbH+O5Cujn0oJFDGQrwhGJQTHsjIhd5bhFFgDvJ64/4SGrtP1LHDKLwr9+ltzgxIE=` |
+> | West Europe | ecdsa-sha2-nistp384 | `90g+JfQChjbb3OOV0YIGSVTkNotnefCV2NcSuMdPrzY=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNJgtrLFy2zsyhNvXlwHUmDBw1De++05pr1ZTxOIVnB17XZix0Euwq/wZTs0cE01c5/kYdAp+gQHEz594e7AQXBTCTqUiIS1a4+IXzfiCcShVfMsLFBvzjm9Yn8qgW9Ofg==` |
+> | East US | rsa-sha2-256 | `F6pNN5Py68/1hVRGEoCwpY5H7vWhXZM/4L442lY4ydE=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDAiUB94zwLf0e/++OeiAjE0X7Od2nuqyLyAqpOb7nfQUAOWyqgRL04yaan6R2Ir2YtI0FRwA6yRETUBf2+NuVhIONgLNsgPw3RakL1BUqAEzZAyF4sOjWnYE5/s/1KmYOE052SefzMciqjgkBV2+YrPW1CLivNhL4d1vuQh05kADLgHJiAVD6BqSM7Z6VoLhW+hfP4JklyQAojCF6ejXW7ZGWdqQGKLCUhdaOPSRAxjOmr9gZxJ69OvdJT2Cy6KO1YQt2gY2GbPs+4uAeNrz40swffjut4zn1NILImpHi8PTM+wcGYzbW4Nn7t5lhvT9kmX9BkSYXLVTlI9p1neT9t` |
+> | East US | rsa-sha2-512 | `MIpoRIiCtEKI23MN+S2bLqm5GKClzgmRpMnh90DaHx8=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC8Ut7Rq7Vak26F29czig5auq334N9mNaJmdtWoT32uDzWjZNw/N8uxXQS51oSeD7c0oXWIMBklH0AS8JR1xvMUGVnv5aRXwubicQ6z4poG5RSudYDA3BjMs61LZUKZH/DRj7qR/KUBMNieT1X+0DbopZkO9etxXdKx+VqJaK3fRC5Zflxj5Z9Stfx/XlaBXptDdqnInHZAUbZxnNziPYrBOuXYl5/Cd6W4lR7dBsMCbjINSIShvrhPpVfd3qOv/xPpU172nqkOx2VsV4mrfqqg62ZdcenLJDYsiXd/AVNUAL+dvzmj1/3/yVtFwadA2l83Em6CgGpqUmvK6brY3bPh` |
+> | East US | ecdsa-sha2-nistp256 | `ixDeCdmQOB9ROqdJiVdXyFVsRqLmJJUb2M4shrWj8gI=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNrdcVT12fftnDcjYL8K3zLX3bdMwYLjNu2ZJ1kUZwpVHSjNc+1KWB2CGHca+cMLuPSao4TxjIX0drn9/o+GHMQ=` |
+> | East US | ecdsa-sha2-nistp384 | `DPTC6EIORrsxzpGt6IZzAN67nlZUXzg5ANQ3QGz987Y=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEP3CUvPVWNVnFuojR43KRxTQt1xiClbgDzqN/s9F5luivP+Gh0QrK5UHf6diEju4ZQ9k2O10MEDs6c46g4fT56rY8CQkeBsaaBq8WYLRhSQsFZ6SZuw14oFNodniAO33g==` |
+> | West India | rsa-sha2-256 | `Fkh7r/tOJy1cZC6nI75VsO1sS3ugMvJ56U02uGGJHFo=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDHCzLI51bbBLWK7TcXvXvEHaLQMzuYKEwyoS1/oC5EN3NsLZl4BV5d2zbLETFDjsky/btWiAkCvHuzxealxGgzw69ll90aWSOEY/epaYJvueOTvGy4+rJY8Xyc64VdHml8n3EEZTQmBEi3Tn6bViLwvC0iT2/noLeYGXh0/NL0T3BeblwSm3cNXyemkBQO/zyYcchqRtKJu8w8brYVZYFINlTeBu4LyDP1k9DMtuewGoeH8SmvDxUmiIGh2VDlPmXe3IkMR0nSgz10jMl3F0fei7ZJ+8zdCVbBuIqsJf+koJa/q9npstWGMFddMX3nR0A3HnG4v5aCAGVmfl11iC0J` |
+> | West India | rsa-sha2-512 | `xDtcgfElRGUUgWlU9tRmSQ58WEOKoUSKrHFDruhgDIM=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQCXehufp18nKehU4/GOWMkXJ87t22TyG5bNdVCPO2AgLJ88FBwZJvDurLgdPRDRuJImysbD7ucwk2WoDNC39q0TWtCRyIKTXfwvPmyG+JZKkT+/QfslMqiAXAPIQtVr2iXTeuHmn3tk+PksGXnTwb3oFV4wv40Wi1CbwvtCkUsBSujq4AR7BqksPnAqPrAyw+fFR3w4iD3EdtHBdIVULez3lkpMH/d04rf2bjh6lpI9YUdcdAmTGYeMtsf/ef8z0G2xpN2aniLCoCPQP85cooKq7YEhBDR8Lzem3vWnqS3gPc4rUrCJoDkGm0iL/4GCWRyG+RPi70WSdVysJ+HIm0Ct` |
+> | West India | ecdsa-sha2-nistp256 | `t+PVPMSVEgQ3FPNploXz7mO25PFiEwzxutMjypoA2DM=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCzR5dhW3wfN5bRqLfeZ2hlj7iRerE4lF5jk+iQl6HJHKXIsH6lQ63Wyg7wOzF65jNnvubAJoEmzyyYig+D3A+w=` |
+> | West India | ecdsa-sha2-nistp384 | `pLODd+3JNeLVcPYYnI0rSWoemhMWws0jLc3J8cV6+GU=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL2PEknfZpPAT4ejqBJW8InHPELP1G7hGvroW5J3evJr8Qrr//voa6aH8ZF7Ak0HcVVOOCSzfjcEpZYjjrXrzuCOekU48DkSF8i1kKqV4iXejNNQ1ohDCbsiAyoxQMY9cA==` |
+> | East US 2 | rsa-sha2-256 | `K+QQglmdpev3bvEKUgBTiOGMxwTlbD7gaYnLZhPfe1c=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDOA2Aj1tIG/TUXoVFHOltyW8qnCOtm2gq2NFTdfyDFw3/C4jk2HHQf2smDX54g8ixcLuSX3bRDtKRJItUhrKFY6A0AfN1+r46kkJJdFjzdcgi7C3M0BehH7HlHZ7Fv2u01VdROiXocHpNOVeLFKyt516ooe6b6bxrdc480RrgopPYpf6etJUm8d4WrDtGXB3ctip8p6Z2Z/ORfK77jTeKO4uzaHLM0W7G5X+nZJWn3axaf4H092rDAIH1tjEuWIhEivhkG9stUSeI3h6zw7q9FsJTGo0mIMZ9BwgE+Q2WLZtE2uMpwQ0mOqEPDnm0uJ5GiSmQLVyaV6E5SqhTfvVZ1` |
+> | East US 2 | rsa-sha2-512 | `UKT1qPRfpm+yzpRMukKpBCRFnOd257uSxGizI7fPLTw=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC/HCjYc4tMVNKmbEDT0HXVhyYkyzufrad8pvGb3bW1qGnpM1ZT3qauJrKizJFIuT3cPu43slhwR/Ryy79x6fLTKXNNucHHEpwT/yzf5H6L14N+i0rB/KWvila2enB2lTDVkUW50Fo+k5U/JPTn8vdLPkYJbtx9s0s3RMwaRrRBkW6+36Xrh0h7rxV5LfY/EI1331f+1bgNM7xD59D3U76OafZMh5VfSbCisvDWyIPebXkOMF/eL8ATlaOfab0TAC8lriCkLQolR+El9ARZ69CJtKg4gBB3IY766Ag3+rry1/J97kr4X3aVrDxMps1Pq+Q8TCOf4zFDPf2JwZhUpDPp` |
+> | East US 2 | ecdsa-sha2-nistp256 | `bouiC5HdtURUU19RJbym8R94fbMOTw/bUxFUkoAByoI=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJshJI18IECu6neLrash/Q622MAXO07C+hbIOiVPC6M/ZIJM8HyYvQEh4DKI1CMEaeAIs/HA905QKeU/syvt7QI=` |
+> | East US 2 | ecdsa-sha2-nistp384 | `vWnPlGaQOY4LFj9XSQ2qN/NMF92+UOfKPjGNSPA2bOg=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBByJNAblwxCNVqedg5FcdbdwiuzTMVEWj/uF3uzI8wp890Xv2M4H+aMTpeItxgQsuiQCptgITsO+XCf2dBTHOGWpd90QtvcznzHyy/FEWVAKWs9brvyaNVe82c4TOFqYRg==` |
+> | West US | rsa-sha2-256 | `kqxoK1j6vHU8o8XyaiV87tZFEX9nE6o/yU0lOR5S6lE=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDAd7gh0mFw3iRAKusX3ai6OE0KO5O2CMlezvZOAJEH88fzWQ/zp0RZ1j7zJ8sbwslA6v3oRQ7Cx9ptAMTrL8SW4CZYcwETlfL3ZP39Llh+t7rZovIgvCDU0tijYvsa1W0T9XZgcwWEm6cWQzdm+i9U0KUdh7KgsubPAhGQ7xrOVEqgB9MYMofSSdIfKMt8K7xOSam6mhWiTSSIEGgeMTIZ9TgXkgAEJ8TNl3QHRoM8HxMnRFjtkXbT3EeSg6VOqi69Cei3hrmS64qvdzt2WwoTQwTFjxHocWGgA+Ow53wqWt8iYgOudpoB1neXiIcF4p0CN8zjvXNiRbZPg9lXFM9R` |
+> | West US | rsa-sha2-512 | `/PP9B/9KEa+QUyump1Yt05Lfk0LY/eyQhHyojh5zMEg=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC8R8bFe8QSTYKK+4evMpnlB8y0rQCqikTyviqD4rva7i4f1f/JxmptJQ/wkipHPXk6E7Du6oK/iJaZ+wjZ03tNIWwAGn0SdlTvWuwQwigK9k3JRlLYO+Uj/SSnBQWf8Dmp+cA6RDalteHpM2KwaUK65BHYC75bWKHaNntadTIU4kQ0BvFzmNRcJWL6otd5RkdYXjJWHu21zcv4EpRHGmVCD0na+UWce6UGDbLDtsZVJd2Q7IyeTrXpWxEO0fFN2Gu9gINfWC1FpuffGaqWSa4nK69n39lUKz4PUdu6Owmd9aNbLXknvtnW4+xGbX6oQa8wYulINHjdNz8Ez6nOiNZ9` |
+> | West US | ecdsa-sha2-nistp256 | `peqBbfcWZRW4QzLi69HicUUTwdtfW7/E9WGkgRMheAo=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBcTos/zmSn15kzn1Lk8N8QQh9hzOwqOSOf/bCpu6AQbWJtvjf1pHMuZlS2PpIV7G+/ImxXGpqpHqQlcD+Lg8Ro=` |
+> | West US | ecdsa-sha2-nistp384 | `sg63Cc3Mvnn9hoapGaEuZByscUEMa+xgw/3ruz49szk=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGzX2t9ALjFwpcBLm4B0+/D47PMrDya0KTva5w4E5GZNb5OwQujQvtUS2owd8BcKdMBeXx2S7qbcw6cQFffRxE+ZTr4J+3GoCmDM0PqraXxJHBRxyeK6vlrSR8ojRzIApA==` |
+> | East US 2 EUAP | rsa-sha2-256 | `dkP64W5LSbRoRlv2MV02TwH5wFPbV6D3R3nyTGivVfk=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC3PqLDKkkqUXrZSAbiEZsI6T1jYRh5cp+F5ktPCw7aXq6E9Vn2e6Ngu+vr+nNrwwtHqPzcZhxuu9ej2vAKTfp2FcExvy3fKKEhJKq0fJX8dc/aBNAGihKqxTKUI7AX5XsjhtIf0uuhig506g9ZssyaDWXuQ/3gvTDn923R9Hz5BdqQEH9RSHKW+intO8H4CgbhgwfuVZ0mD4ioJKCwfdhakJ2cKMDfgi/FS6QQqeh1wI+uPoS7DjW8Zurd7fhXEfJQFyuy5yZ7CZc7qV381kyo/hV1az6u3W4mrFlGPlNHhp9TmGFBij5QISC6yfmyFS4ZKMbt6n8xFZTJODiU2mT1` |
+> | East US 2 EUAP | rsa-sha2-512 | `M39Ofv6366yGPdeFZ0/2B7Ui6JZeBUoTpxmFPkwIo4c=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC+1NvYoRon15Tr2wwSNGmL+uRi7GoVKwBsKFVhbRHI/w8oa3kndnXWI4rRRyfOS7KVlwFgtb/WolWzBdKOGVe6IaUHBU8TjOx2nKUhUvL605O0aNuaGylACJpponYxy7Kazftm2rV/WfxCcV7TmOGV1159mbbILCXdEWbHXZkA3qWe4JPGCT+XoEzrsXdPUDsXuUkSGVp0wWFI2Sr13KvygrwFdv4jxH1IkzJ5uk6Sxn0iVE+efqUOmBftQdVetleVdgR9qszQxxye0P2/FuXr0S+LUrwX4+lsWo3TSxXAUHxDd8jZoyYZFjAsVYGdp0NDQ+Y6yOx5L9bR6whSvKE1` |
+> | East US 2 EUAP | ecdsa-sha2-nistp256 | `X+c1NIpAJGvWU31UJ3Vd2Os4J7bCfgvyZGh35b2oSBQ=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBK+U6CE6con74cCntkFAm6gxbzGxm9RgjboKuLcwBiFanNs/uYywMCpj+1PMYXVx/nMM4vFbAjEOA20fJeoQtN8=` |
+> | East US 2 EUAP | ecdsa-sha2-nistp384 | `Q3zIFfOI1UfCrMq6Eh7nP1/VIvgPn3QluTBkyZ2lfCw=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDWRjO+e8kZpalcdg7HblZ4I3q9yzURY5VXGjvs5+XFuvxyq4CoAIPskCsgtDLjB5u6NqYeFMPzlvo406XeugO4qAui+zUMoQDY8prNjTGk5t7JVc4wYeAWbBJ2WUFyMrQ==` |
+> | Australia Central | rsa-sha2-256 | `q2pDjwwgUuAMU3irDl2D+sbH8wQpPB5LHBOFFzwM9Sk=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDnqOrNklxmyreRYe7N72ylBCensxxPTBWX/CfbdbGfEbcGRtMGHReeojkvf4iJ7mDMZRzecgYxZ9o2bwTH9UImNkuZTsFNH6APuJ075WyxoDgdBX1UAQ3eE6BrCNI0BcwLakU9lq0rNhmxMpt/quBXxxWbRieKR9liTOg5CGSqoUPo7TpwaZQBltJCEf7rN5wGUlHV49iuiJIasSldYT6F1c3vS4bJb2sdIvVnKVLq+yTMzaPzWn34BD+KHx/pkB+s7/vQtdMfBBEdgEdPVvMPsyXtIKhx4Q79LnfZT19RDY8KW1mJrbPo67oEcjJYTXSZTKysjCUNmNNrnXvp6sHd` |
+> | Australia Central | rsa-sha2-512 | `+tdLVAC4I+7DhQn9JguFBPu0/Hdt5Ru2zjuOOat+Opw=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQCnd0ETMwpU8w7uwWu1AWDv6COIwLKMmxXu+/1rR429cNXuPrBkMldUfI7NKVaiwnu1kLPuJsgUDkvs/fc7lxx2l5i6mYBWJOXcWzAfXSBfB1a+1SK+2tDPYT3j4/W/KRW74DFPokWTINre22UVc+8sbzkmdtX/FzZdVcqI4+xJSjwdsp2hbzcsVWkxWhrFzKmBU40m5E/YwKQwAcbkzmX6AN5O8s66TQs2uPkRuTItDWI3ShW7QzW05jb6W8TeYdkouZ5PY0Yz/h3/oysFzo4VaUc0y3JP98KRWNXPiBrmvphpKnU1TQrjvVkYEsiCBHMOUnNVHdR1oIHd2zPRneK5` |
+> | Australia Central | ecdsa-sha2-nistp256 | `m2HCt3ESvMLlVBMwuo9jsQd9hJzPc/fe0WOJcoqO3RA=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBElXRuNJbnDPWZF84vNtTjt4I/842dWBPvPi2fkgOV//2e/Y9gh0koVVAYp6MotNodg4L9MS7IfV9nnFSKaJW3o=` |
+> | Australia Central | ecdsa-sha2-nistp384 | `uoYLwsgkLp4D5diAulDKlLb7C5nT4gMCyf9MFvjr7qg=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBARO/VI5TyirrsNZZkI2IBS0TelywsJKj71zjBGB8+mmki+mmdtooSTPgH0zmmyWb/z3iJG+BnEEv/58zIvJ+cXsVoRChzN+ewvsqdfzfCqVrzwyro52x5ymB08yBwDYig==` |
+> | North Central US | rsa-sha2-256 | `9AV5CnZNkf9nd6WO6WGNu7x6c4FdlxyC0k6w6wRO0cs=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDJTv+aoDs1ngYi5OPrRl1R6hz+ko4D35hS0pgPTAjx/VbktVC9WGIlZMRjIyerfalN6niJkyUqYMzE4OoR9Z2NZCtHN+mJ7rc88WKg7RlXmQJUYtuAVV3BhNEFniufXC7rB/hPfAJSl+ogfZoPW4MeP/2V2g+jAKvGyjaixqMczjC2IVAA1WHB5zr/JqP2p2B6JiNNqNrsFWwrTScbQg0OzR4zcLcaICJWqLo3fWPo5ErNIPsWlLLY6peO0lgzOPrIZe4lRRdNc1D//63EajPgHzvWeT30fkl8fT/gd7WTyGjnDe4TK3MEEBl3CW8GB71I4NYlH4QBx13Ra20IxMlN` |
+> | North Central US | rsa-sha2-512 | `R3HlMn2cnNblX4qnHxdReba31GMPphUl9+BQYSeR6+E=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDeM6MOS9Av7a5PGhYLyLmT09xETbcvdt9jgNE1rFnZho5ikzjzRH4nz60cJsUbbOxZ38+DDyZdR84EfTOYR2Fyvv08mg98AYXdKVWMyFlx08w1xI4vghjN2QQWa8cfWI02RgkxBHMlxxvkBYEyfXcV1wrKHSggqBtzpxPO94mbrqqO+2nZrPrPFkBg4xbiN8J2j+8c7d6mXJjAbSddVfwEbRs4mH8GwK8yd/PXPd1U0+f62bJRIbheWbB+NTfOnjND5XFGL9vziCTXO8AbFEz0vEZ9NmxfFTuVVxGtJBePVdCAYbifQbxe/gRTEGiaJnwDRnQHn/zzK+RUNesJuuFJ` |
+> | North Central US | ecdsa-sha2-nistp256 | `6xMRs7dmIdi3vUOgNnOf6xOTbF9RlGk6Pj7lLk6z/bM=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJw1dXTy1YqYLJhAo1tB+F5NNaimQwDI+vfEDG4KXIFfS83mUFqr9VO9o+zgL3+0vTrlWQQTsP/hLHrjhHd9If8=` |
+> | North Central US | ecdsa-sha2-nistp384 | `0cJkHHeTNQpl7ewPTZwug5+/hfebiH6Yxl2rOTtYZQo=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG8aqja46A9Q5PmhPzhxcklcJGp+CiC3MCjVR6Qdl9oQGMywOHfe+kCD72YBKnA6KNudZdx7pUUB/ZahvI5vwt4bi593adUMTY1/RlTRjplz6c2fSfwSO/0Ia4+0mxQyjw==` |
+> | Brazil South | rsa-sha2-256 | `qNzxx1kid41tZGcmbbyZrzlCIPJ9TFa20pUqvRbcjro=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC04g5K8emsS4NpL6jCT3wlpi6Msb5ax6QGlefO3IKp3wDKWAEqN+PvqBdrNp1PsitTKeyRSCLofq9k2wzeAMzV2n3UVqmUpNf9Q0Yd8SuXPhKG6VhqG2hL5+ztrlVTMI2Ak18SLaAEA1x7y9Z1lkEYGvCzJQaAw5EG8kd7XHGaI9nSCJ7RFOdJQF/40gq8z6E+bWW9Xs55JpWQ0i44i/ZvQUEiv5nyAa7D86y23wk1pTIFkRT99Kwdua0GtyUlcgCRDDTOzsCTn4qTo/MAF1Uq/ol4G0ZxwKnAEkazSZ1c+zEmh6GJNwT64nWBZ+pt5Rp3ugW+iDc/mIlXtxEV2k7V` |
+> | Brazil South | rsa-sha2-512 | `KAmGT8A7nRdxxQD7gulgmGTJvRhRdWPVDdagGCDmJug=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC6W0FiaS21Dze6Sr6CTB8rLBu1T1Zej+11m7Kt283PSkQNYmjDDPUx0wSgylHoElTnFcXG+eFMznnjxDqkH+GnYBmlpW3nxxdTYD/MrdP4dX9ibPCFjDupIDJ4thv+9xWCw/0RpGc1NlUx2YmenDVMFJtYqjB1IDa2UUEeUHeQa1qmiBs1tbBQdlws1MCFnfldliB5H+cO4xnbAUjOlaa01k7GKqPf0H75+R83VcIcFw8hSuCvgMT+86H6jRRfqiIzE7WGbQBTPQs0rGcvxcGR3oGOmtB2UmOD232XTEk+sG3q2RxtPKWTz8wz1Tt2c1BOxmtuXTtzXnigZjB2t8y5` |
+> | Brazil South | ecdsa-sha2-nistp256 | `rbOdmodk5Aq+nxHt04TN7g6WyuwbW5o+sDbj86l6jp8=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNFqueXBigofrM5Kp2eA4wd4XxHcwcNgNFWGgEd0EoNdKWt9NroU47bN43f79Y5vPiSa4prKW1ccMBl40nNN4S4=` |
+> | Brazil South | ecdsa-sha2-nistp384 | `cenQeg58JZ+Dvu3AC7P7lC/Jq7V3+YPaS37/BBn3OlQ=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHBhfnlfXV9/m6ZgOoqmSgX3VPnRdTOraBhMv8v7lEN1lWwyBpiWepu52KS0jR1RhttfXB+n+p6i2+9djJ1zT7fHq4sNn/d/3k2J6IjJlymZ32GwPvDk+fGefupUtabvRQ==` |
+> | UK West | rsa-sha2-256 | `2NQ5z6fQjt4SZKdViPS+I2kX7GoXOx3fVE81t8/BCVE=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDNq0xtA0tdZmkSDTNgA05YLH5ZuLFKD7RbruzuL4KVU2In0DQUtJkVqRXIaB3f+cEBTs9QrMUqolOdCCunhzosr5FvCO3I6HZ8BLnVNshtUBf2C1aT9yonlkdiIyc2pCHonds8vHKC4SBNu3Jr584bhyan8NuzJqzPCnKTdHwyWjf8m5mB4liK/ka4QGiaLLYTAjCCXmaXXOVZI2u0yDcJQXAjAP5niCOQaPHgdGk6oSjs0YKB29V+lIdB8twUnBaJA9jgECM2brywksmXrAyUPnIFD6AVEiFZsUH3iwgFAH7O6PLZTOSgJuu994CNwigrOXTbABfpH2YMjvUF///5` |
+> | UK West | rsa-sha2-512 | `MrfRlQmVjukl5Q5KvQ6YDYulC3EWnGH9StlLnR2JY7Q=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQClZODHJMnlU29q0Dk1iwWFO0Sa0whbWIvUlJJFLgOKF5hGBz9t9L6JhKFd1mKzDJYnP9FXaK1x9kk7l0Rl+u1A4BJMsIIhESuUBYq62atL5po18YOQX5zv8mt0ou2aFlUDJiZQ4yuWyKd44jJCD5xUaeG8QVV4A8IgxKIUu2erV5hvfVDCmSK07OCuDudZGlYcRDOFfhu8ewu/qNd7M0LCU5KvTwAvAq55HiymifqrMJdXDhnjzojNs4gfudiwjeTFTXCYg02uV/ubR1iaSAKeLV649qxJekwsCmusjsEGQF5qMUkezl2WbOQcRsAVrajjqMoW/w1GEFiN6c70kYil` |
+> | UK West | ecdsa-sha2-nistp256 | `bNYdYVgicvl1yaOR/1xLqocxT8bamjezGFqFdO6Od0I=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKWKoJuxB3EO5bKjxnviF+QTv3PBSViD1SNKbfj0qYfAjObQKZuiqcFYeDoPrkhk9jfan2jU6oCEN4+KDwivz3k=` |
+> | UK West | ecdsa-sha2-nistp384 | `6V8vLtRf6I5BjuLgToJ1cROM72UqPD+SC0N9L9WG6PA=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA+7R/5qSfsXACmseiErhfwhiE7Rref/TNHONqiFlAZq2KCW3w3u8+O4gpJEflibMFP/Mj5YeoygdPUwflFNcST9K+vnkEL3/lqzoGOarGBYIKtEZwixv3qlBR+KyoRUkw==` |
+> | West Central US | rsa-sha2-256 | `aSNxepEhr3CEijxbPB4D5I+vj8Um7OO6UtpzJ/iVeRg=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDDWmd8Zd7dCfamYd/c1i4wYhhRnaIgUmK7z/o8ehr4bzJgWRbjrxMtbkD2y7ImjE2NIBG5xglz6v9z4CFNjCKUmoUl7+Le3Rsc5sJ/JmHAmEXb0uiDMzhq9f6Qztp+Pb9uqLfsPmm6pt1WOcpu+KNpiGtPWTL21sJApv6JPKU+msUrrCIekutsHtW6044YPXNVOnvUXv08BaPFhbpeGZ4zkrji0mCdGfz2RNcgLw0y3ZzgUuv0Lw+xV0/xwanJu4IOFI1X9Ab7NnoGMkqN/upBLJ4lRhjYVTNEv01IX2/r5WZzTn4c38Nfw4Ma3hR0BiLMTFfklFVGg2R64Z7IILoB` |
+> | West Central US | rsa-sha2-512 | `vVHVYoH1kU1IZk+uZnStj3Qv2UCyOR9qVxJfmTc20jQ=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC9Q8Tvvnea8hdaqt+SZr4XN1JIeR43nX6vrdhcS6yyfRgaTcEdKbAKQbwj9Fu3kq80c4F+SNzh1KQWlqLu3MJHSaSdQLN9RaHO1Dd+iVK1WgZtsPM9+6U7wupMZq8Hdmao5sqaMT5lj7g+win2J+Wibz7t8YwS7g2Xi+ode8tFPFKduZ5WvKLjI0EiAS4mvcyWEWca142E8fxV9TobUjAICfgtL4vCpmLYKnSL/kUgplD0ow86k/MHp9zghDLVSVDj8MGMra+IJEpgHOUrFNnuyua2WSJVuXR2ITfaecRKrGg7Z4IJzExPoQzDIWdCHptiGLAqvtKT0NE2rPj9U4Rp` |
+> | West Central US | ecdsa-sha2-nistp256 | `rkHjcTK2BvryQAFvjugTHdbpYBGfOdbBUNOuzctzJqM=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKMjEAUTIttG+f5eocMzRIhRx5GjHH7yYPzh/h9rp9Yb3c9q2Yxw/j35JNWxpGwpkb9W1QG86Hjt4xbB+7q/D8c=` |
+> | West Central US | ecdsa-sha2-nistp384 | `gS9SYvaH6dCqyugArvFb13cwi8q90glNaK+fyfPie+Y=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD0HqM8ubcDBRMwuruX5zqCWSp1DaLcS9cA9ndXbQHzb2gJ5bJkjzxZEeIOM+AHPJB8UUZoD12It4tCRCCOkFnKgruT61hXbn0GSg4zjpTslLRYsbJzJ/q6F2DjlsOnvQQ==` |
+> | Central US | rsa-sha2-256 | `GOPn34T1cCkLHO0xjLwmkEphxKKBQIjIf9QE1OAk3lU=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC9oA4N2MxbHjcdSrOlJOdIPjTB2LpQUMwJJj+aI2KEALYDGWWJnv0E14XjY1/M35jk8z0hX4MHGE/MEocSsTVdFRdWdW9CKTWT6eICpg9frTj6wfkB/Dxz/BAYb/YXq5OMrFlkFJUG8FMp9N80W6UWichcltmSrCpRi5N3ZGpVXEYhJF+I0mCH7Yheoq2KzIG2iWU/EJT5fui4t51wD8CQ1NWG8/THnNr0gjCr3AtB+ZPAl/6N7i2vO3FlZEHUj6BHoQ4dhIGjGCkgFDNU6RpdifqMJRihP9fSMOq4qksch1TE5sOnp0sOaP/RQvChb4oXB8Pru+j45RxPzIvzzOZZ` |
+> | Central US | rsa-sha2-512 | `VLhZbZjHzoNRMyRSa3GYvk2rgacjmldxQ2YNzvsMpZA=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDPnuJixcZd6gAIifABQ377Mn0ZootRmdJs1J3R8/u7mbdfmpX2ItI0VfgMh4BzGEdgCqewx4BjADhfXRurfimuP8P9PLRq89AHX2V+IfeizLZkrnrxKiijjGEz640gORzzwIp2X+bmnBABWzEZjSNOeE3CKVr4ONvH80bYGFFqR4+arOelDqWEgxktM1QBlId7xR7efmtEGAuAhFbZVaqjBNsbqyiR/hlkMQfmWn1bjGSoenUoPojc7UAp9+Xf6ujkhCihRV/O4A69tVvp5E0Qv5MJ1Qj3kzAYbHQcIQ2l47MQq1wdZYxkYBHmH5leAjHgQbbccPalOLSbLRYjF169` |
+> | Central US | ecdsa-sha2-nistp256 | `qN1Fm+zcCQ4xEkNTarKiQduCd9S+Aq3vH8BlfCaqL74=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBN6KpNy9XBIlV6jsqyRDSxPO2niTAEesFjIScsq8q36bZpKTXOLV4MjML0rOTD4VLm0mPGCwhY5riLZ743fowWA=` |
+> | Central US | ecdsa-sha2-nistp384 | `9no3/m09BEugyFxhaChilKiwyOaykwghTlV+dWfPT6c=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCiEYrlw9pskKzDp/6HsA2g0uMXNxJKrO5n16cHwXS1lVlgYMK3mmQdA+CjzMdJflvVw7sZO2caApr+sxI3rMmGWCt5gNvBaU6E9oUN8kdcNDvsfFavCb3vguOgtgbvHTg==` |
+> | North Europe | rsa-sha2-256 | `vTEOsEjvg/jHYH1xIWf2rKrtENlIScpBx450ROw52UI=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQChnfrsd1M0nb7mOYhWqgjpA+ChNf7Ch6Eul6wnGbs7ZLxXtXIPyEkFKlEUw4bnozSRDCfrGFY78pjx4FXrPe5/m1sCCojZX8iaxCOyj00ETj+oIgw/87Mke1pQPjyPCL29TeId16e7Wmv5XlRhop8IN6Z9baeLYxg6phTH9ilA5xwc9a1AQVoQslG0k/eTyL4gVNVOgjhz94dlPYjwcsmMFif6nq2YgQgJlIjFJ+OwMqFIzCEZIIME1Mc04tRtPlClnZN/I+Hgnxl8ysroLBJrNXGYhuRMJjJm0J1AZyFIugp/z3X1SmBIjupu1RFn/M/iB6AxziebQcsaaFEkee0l` |
+> | North Europe | rsa-sha2-512 | `c4FqTQY/IjTcovY/g7RRxOVS5oObxpiu3B0ZFvC0y+A=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQCanDNwi72RmI2j6ZEhRs4/tWoeDE4HTHgKs5DRgRfkH/opK6KHM64WnVADFxAvwNws1DYT1cln3eUs6VvxUDq5mVb6SGNSz4BWGuLQ4onRxOUS/L90qUgBp4JNgQvjxBI1LX2VNmFSed34jUkkdZnLfY+lCIA/svxwzMFDw5YTp+zR0pyPhTsdHB6dST7qou+gJvyRwbrcV4BxdBnZZ7gpJxnAPIYV0oLECb9GiNOlLiDZkdsG+SpL7TPduCsOrKb/J0gtjjWHrAejXoyfxP5R054nDk+NfhIeOVhervauxZPWeLPvqdskRNiEbFBhBzi9PZSTsV4Cvh5S5bkGCfV5` |
+> | North Europe | ecdsa-sha2-nistp256 | `wUF5N8VjGTnA/PYBVzQrhcrMgHuCfAYL1tu+p6s28Ms=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCh4oFTmr3wzccXcayCwvcx+EyvZ7yANMYfc3epZqEzAcDeoPV+6v58gGhYLaEVDh69fGdhiwIvMcB7yWXtqHxE=` |
+> | North Europe | ecdsa-sha2-nistp384 | `w7dzF6HD42eE2dgf/G1O73dh+QaZ7OPPZqzeKIT1H68=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLgyasQj6FYeRa1jiQE4TzOGY/BcQwrWFxXNEmbyoG89ruJcmXD01hS2RzsOPaVLHfr/l71fslVrB8MQzlj3MFwgfeJdiPn7k/4owFoQolaZO7mr/vY/bqOienHN4uxLEA==` |
+> | UAE North | rsa-sha2-256 | `Vazz+KIADh85GQHAylrlI1tTY8/ckoRqTe/kbLXPmd0=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDRGQHLLR9ruI0GcNF2u3EpS2CbHdZlqcgSR1bkaOXA9ZufHyxuhIpzG2IgYQ8wrjGzIilYds6UIH7CAw9FApKLNpLR6qdm8qjM0tJiyHLm3KloU27FfjCQjE9JhmsbTWCRH3N52A9HXIdiVCE3BBSoXhg/mF+3cvm1JvabKr1twoyfbUgDFuF7fDyhSxJ/MTig8SpgzWqcd5J+wbzjXG0ob2yWVhwtrcB6k97g25p77EKXo3VhSs0jN7VR+SAHupVwWsUgx4fZzi2I5xTUTBdOXW+e3EiXytPL2N5N/MtFKVY/JVhFkKkcTRgeuOds51tkByteSkc32kakcUxw6CjJ` |
+> | UAE North | rsa-sha2-512 | `NDeTZPUor2OuTdgSjLLhSaqJiTJUdfwTAzpkjNbNQUY=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDAx9LfiyVmWwGD/rjQeHiHTMWYaE/mMP6rxmfs9/I4wEFkaTBbc4qewxUlrB1jd7Se2a0kljI3lqQJ9h+gjtH/IaVTZOKCOZD8yV9Dh4ZENRqH/TOVz6LCvZifVbjUtxRtbvOuh1lJIVBSBFciNr0HThFMnTEIwcs5V48EFIT6eS9Krggu+cWAX2RbjM0VQnIgkA5BeM33MjSjNz86zhO+e7e1lhflPKL5RTIswtWbwatgkyvfM33pJql/zJz+3/usSpIA/pgWw23c8WziYXiHPTShJXN+N+9iLKf9YUkpzQUZSaRw8XDPyjJNx327Lot0Bh4YLpe37R0SrOvitBsN` |
+> | UAE North | ecdsa-sha2-nistp256 | `vAuGgsr0IQnOLUaWCCOBt+Jg0DV9C6rqHhnoJnwORM8=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEYpnxgANJNJ4IIvSwvrRtjgbejCpTc3D+l5iob7dBK4KQ7MB40rq+CtdBDGZ1J7d6oCevW6gb1SIxU/PxCuvMI=` |
+> | UAE North | ecdsa-sha2-nistp384 | `A5fa4Pzkdl0H2kVJxlNiEQkOhPzBYkrfQrcviQUUWUA=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOz4ENDgFpo0547D5XCRCJLg8brp+iUyId2IdEhZAhuNX9spxlVe6uSkiQbd+8D5hHPVNuLFTFx7v2wXObycM8tr/WGejn/934BvSUhM6lDpU+d5n+ZcxEEhp4gDiy1l+Q==` |
+> | Germany West Central | rsa-sha2-256 | `0SKtGye+E9pp4QLtWNLLiPSx+qKvDLNjrqHwYcDjyZ8=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDsbkjxK9IJP8K98j+4/wGJQVdkO/x0Msf89wpjd/3O4VIbmZuQ/Ilfo6OClSMVxah6biDdt3ErqeyszSaDH9n3qnaLxSd5f+317oVpBlgr2FRoxBEgzLvR/a2ZracH14zWLiEmCePp/5dgseeN7TqPtFGalvGewHEol6y0C6rkiSBzuWwFK+FzXgjOFvme7M6RYbUS9/MF7cbQbq696jyetw2G5lzEdPpXuOxJdf0GqYWpgU7XNVm+XsMXn66lp87cijNBYkX7FnXyn4XhlG4Q6KlsJ/BcM3BMk+WxT+equ7R7sU/oMQ0ti/QNahd5E/5S/hDWxg6ZI1zN8WTzypid` |
+> | Germany West Central | rsa-sha2-512 | `9OYO7Hn5p+JJeGGVsTSanmHK3rm+iC6KKhLEWRPD9ro=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQCwrSTqa0GD36iymT4ZxSMz3mf5iMIHk6APQ2snhR5FUvacnqTOHt3xhMF+UwYmGLbQtmr4HdXIKd7Dgn5EzHcfaYFbaLJs2aDngfv7Pd6TyLW3TtSgJ6K+mC1MDI/vHzGvRxizuxwdN0uMXv5kflQvnEtWlsKAHW/H7Ypk4R8s+Kl2AIVEKdy+PYwzRd2ojqqNs+4T2tPP5Y6pnJpzTlcHkIIIf7V0Bk/bFG2B7r73DG2cSUlnJz8QW9pLXIn7268YDOR/5nozSXj7DinVDBlE5oeZh4qkdMHO1FSWynj/isUCm5qBn76WNa6sAeMBS3dYiJHUgmKMc+ZHgpu6sqgd` |
+> | Germany West Central | ecdsa-sha2-nistp256 | `Ce+h+7thT5tt75ypIkWZ6+JnmQMZEl1N7Tt3Ldalb64=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBmVDE0INhtrKI83oB4r8eU1tXq7bRbzrtIhZdkgiy3lrsvNTEzsEExtWae2uy8zFHdkpyTbBlcUYCZEtNr9w3U=` |
+> | Germany West Central | ecdsa-sha2-nistp384 | `hhQQi2iRjSX5d9c+4714hAFvTA3c63+TGknhuQi7Tss=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDlFF3ceA17ZFERfvijHkPI2Na1wuti9/AOY5E/bDvZfP08kkmYTb9Ma6omhB0dHR6e1CmRJfKmFXfTd81iVWPa7yXCxbS8yG+uNKCuHxuNv8hFhNM84h2727BSBHBBHBA==` |
+> | Switzerland West | rsa-sha2-256 | `yoVjbjB+U4Cp/ZpMgKKuji9T2pIFtdeXnJudyeNvPs0=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDFl9NO3CJyKTdYxDIgCjygwIxlT1ppJQm/ykv2zDz6C7mjweiuuwhVM3LRua3WyP5mbgl3qYm+PHlA7UyIMY5jtsg7GaSfhiBSGZAdfgfDgOp3qRkgyep84P69SLb2b0hwgsPVkx8eWLDDVbOEdQLLx7TVndyxtdw+X4bZs6UdEcLMvLUWl7v3SoD5oiuJN6vOJPQl0VBeEaK/uhujjFgnlEu7/31rYEKQ8vQBbx22a4kIyBtUSAGo/VfKGRWF9oXL7Umh2xHAPwNbGwP+DdCKUY27wWG7Qe18O+QS9AOu0yL4+MRIHZg8ODLQsk0Hp3q8Iw2JjohSkk4lcjHYgb69` |
+> | Switzerland West | rsa-sha2-512 | `UgWxFaVY0YYMiNQ82Wt3D1LDg3xta1DfRUUKWjZYllk=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC6svukqfg7147raZZrA1bZFOO/EDFgi+WRsxoOfH/EEWGmZ89QQ5m855TpsTPZ5ZARQD9kxrYEtqefcSPuWgth4Ze5PNVwRfAwedsSfnYwHZqHRlRM54bOQ6Img7T292ERl4KNJUI7SLyF+kKB7eXqp5nMBrTZ4rSHXoeibv2yZAph0cyf4V/NnfRj6KZSf6YDs0LW1VuovWAC6S7mpBjwtabBmd1gIiJleWhB7Jj48yiyh0m7L9oIoR4NRiuFC535JwqCYhrgFwujuk6iIR9ScRdayEr6gVcv6tBms3MyR16ytA/MHRxYHfPKb1kHUrpFjDQZZZswoDJDnhQGOm8Z` |
+> | Switzerland West | ecdsa-sha2-nistp256 | `5MyZiuHQIMDh/+QEnbr3Zm6/HnsLpYT2GXetsWD6M8Q=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEj5nXHEjkVlLcf9R9fPQw9k2QGyUUP6NrFRj1gbxKzwHsgG2YKWDdOJiyguiro0xV9+JRdW3VC49/psIYUFDPA=` |
+> | Switzerland West | ecdsa-sha2-nistp384 | `nS9RIUnm5ULmNIG+d7qSeIl/kNzuJxAX9/PcwfCxcB0=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB/Ps4Wp15xhNenavSHZijwVXdZcvhzVq8IcfHR3+Gz3tKLed36OdHRTdWpvjrg0mENw4L1mEZnHnDx96WMtA+FfagGWXMVMMfcyM4riIedemHsz45KAR2suqcdkNHfdVA==` |
+> | Sweden Central | rsa-sha2-256 | `feu0rEf3KhvHGfhxEjcuFcPtIl+f0ZVzOMXyxy+f8f4=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDOimUzZHr0DxrjdWEPQqkrBudLW2P2dvPE9DoaXSNbehU13bxzsF6lzO65JBPh9rlNwwyt2yWtrR4XI0Qh/QSXmBntefOeH6BZVrN06aHrsd1dQBr4UFT5chCwy6Keu0ARW3fY8kO9lycTmMIeoiaYahicxyRRC8WLs0cSCH8tO0dA2aoaMxafBWqR6D5dNzu00rIcsCxvyjtN3Y8C4fw3YnNvPB/qWHdZ4aNcu7sQMRhCYVNPqX9UNGeXkbw8gHf9uL9dFu1c+P+VFIEs5bIecgT5HiGvtuXsWRdtEcM1v3mrRnNdmeWWQIqXzLrs5svipMIbnYXekhhLYHIlVo4d` |
+> | Sweden Central | rsa-sha2-512 | `5fx+Ic5p/MMR6TZvjj2yrb4HMHwc1TgM4x1xQw4aD3Y=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC2nRaxWTg4KGLClTZLQ5QgPZPyQ/XYbH4prjhg1uK7m/JKlmJw5LjmIUVKnlXS38qTKpWpJZyGU/eBCa5FPQODvoAXfNncgtIQxd7j00P8aO2tho+uIxSgiTCte8sgrAyx22uIJlORJn2x1cBFBJrlgQDJOKEAs9IakMNdLvlfjJV405gk7pstF4eeIANRWC3eOTrMs0O1gCTt2rnWR5BNQJu8swj9FEWreNQ3PvUliM6Ig6u8b+4d8ryYGuzh5+E8wy/aNxlowkoCI4D/+dBnH43pSYyjhrVx966JMlrJZjDmbgtygkJI+FoEEfBoFlrpIGfisqIX41Np9ZRre4Ux` |
+> | Sweden Central | ecdsa-sha2-nistp256 | `6HikgYBMSL9VguDq9bmwRaVXdOIUKEQUf4hlOjfvv6I=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBErZhZNNmDhMKbSUXLB1VcTmR7pXcXWAqfFpdI81OP1FeCxBtpRNpIeWMyVoP3FeO3yWcODLm/ZkK7BICFpjleo=` |
+> | Sweden Central | ecdsa-sha2-nistp384 | `apRb96GLQ3LZ3E+rt2dyr9imMFDXYbaZERiireEO6ks=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKA5kwsqDKzZWmQCjIFBBjZun3rjg62pv8BOULwvwImaPvMFuR2OipExQZIyKSbR7wS9HA4/QKVA5rLRrSGpYvOBG438/7fwVZy5rOj3GXq6X7Havr1ExRXwsw5rJ56acA==` |
+> | East Asia | rsa-sha2-256 | `XYuEB+zABdpXRklca8RCoWy4hZp9wAxjk50MD9w6UjE=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDNKlaGhRiHdomU5uGvkcEjFQ0mhmNnWXsHAUNoGUhH6BU8LmsgWS61QOKHf1d3qQ0C9bPaRWMpemAa3DKGGqbgIdRrI2Yd9op+tqM+3hrZ8cBvBCgqKgaj4ZitoFnYm+iwwuReOz+x0I2/NmWUxuQlbiHTzcu8TVIln/5sj+n9PbwXC8Zk6vsCt6aon/P7hESHBJ4yf2E+Io30m+vaPNzxQGmwHjmBrZXzX8gAjGi6p823v5zdL4mq3tT5aPPsFQcfjkSMRDGq6yFSMMEA7i2dfczBQmTIJkYihdS8LBE0Ir5islJbaoPQxeXIrF+EgYgla505kJEogrLprcTGCY/t` |
+> | East Asia | rsa-sha2-512 | `FUYhL0FaN8Zkj/M0/VJnm8jPL+2WxMsHrrc/G+xo5QM=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC7x8s74EH+il+k99G2aLl1ip5cfGfO/WUd3foiwwq+qT/95xdtstPYmOP77VBQ4G6EnauP2dY6RHKzSM2qUdmBWiYaK0aaI/2lCAaPxco12Te5Htf7igWyAHYz7W99I2CfJCEUm1Soa0v/57gLtvUg/HOqTgFX44W+PEOstMhqGoU9bSpw2IKlos9ZP87C6IQB5xPQQ1HlnIQRIzluJoFCuT7YHXFWU+F4ZOwq5+uofNH3tLlCy7D4JlxLQ0hkqq3IhF4y5xXJyuWaBYF2H8OGjOL4QN+r9osrP7iithf1Q0EZwuPYqcT1QeIhgqI7OIYEKwqMfMIMNxZwnzKgnDC1` |
+> | East Asia | ecdsa-sha2-nistp256 | `/iq1i88fRFHFBw4DBtZUX7GRbT5dQq4g7KfUi5346co=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCvI7Dc7W3K919GK2VHZZkzJhTM+n2tX3mxq4EAI7l8p0HO0UHSmucHdQhpKApTIBR0j9O/idZ/Ew6Yr4nusBwE=` |
+> | East Asia | ecdsa-sha2-nistp384 | `KssXSE1WC6Oca0dS2CNySgObkbVshqRGE2JcaNsUvpA=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNEYGYGolx8LNs5TVJRF/yoxOCal3a4C0fw1Wlj1BxzUsDtxaQAxSfzQhZG+lFCF7RVQyiUwKjCxmWoZbSb19aE7AnRx9UOVmrbTt2PMD3dx8VmPj1K8rsPOSq+XX4KGdQ==` |
+> | South Africa North | rsa-sha2-256 | `qU1qry+E/fBbRtDoO+CdKiLxxKNfGaI9gAplekDpYvk=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC2UBC1KeTx8/tQIxVEBUypcu/5n3B/g0zqE7tFmPYMFYngrXqEysIzgAdpiu2+ZX/vY8AF/0UkhYec/X/rwKQL8CCVwYqa2hufbSrX/qSuUHZd/95LFB2Nh+hJ23fn3EK8Gpgo/Xkmx9YVZoaQPGPsWVWVKjU6aVpM54cd6iuDT3y9SAnqbUMqgwwz3mK7bQGFPrbUVOUwVIcYKZD9HMNZhpo8HpjllKYIt1AFy4db8lSrLyuX8Nn/U7XAlPUndUCpKsAfWw8SemyuxSHziFDHF5xo8eLU+QYxdtzirgDAgEYWv9aa0TSx5Q2Mq8XJ7POffQxKj44ocHzmMGq/wPS1` |
+> | South Africa North | rsa-sha2-512 | `1/ogzd+xjh3itFg3IpAYA2pwj1o3DprEabjObSpY/DY=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDLAkEygbVyp189UwvaslGRgaqcGWXaYJVq+gUB0906xkkjGoJeqSgTW5C/77vOk0zBCZM3yBgtDFZL1d6lze1QJZ6kGGPynJa5SeyydAds9G745yaFFuE53zJUyMy+y5I1ytfx003PKvk8+fHZK3rPYYr+LKm2u+9BmnuDB/0t561oFg1ZiMCPgNnDdUwkya2EtsJAifkUaBlYmzBZAFbIYyGfb898utZHyI+ix2TrMS/RHEDIchG8qSBMpOPmcpa29ADVsmAQDd5ds5D7WjirfMXwBxgJTMyuy+N9rJRgHoqDnt/GsgI2GtoPM7YSET8uYug941hAvFm5TI/dW3YR` |
+> | South Africa North | ecdsa-sha2-nistp256 | `e6v7pRdZE0i1U2/VePcQLguy7d+bHXdQf3RZ4jhae+g=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIEQemJxERZKre+A+MAs0T0R7++E6XanZ7uiKXZEFCyDgqjVcjk8Xvtrpk5pqo4+tMWM7DbtE0sgm1XmKhDSWFs=` |
+> | South Africa North | ecdsa-sha2-nistp384 | `NmxPlXzK2GpozWY374nvAFnYUBwJ2cCs9v/VEnk0N6Q=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKgEuS9xgExVxicW0HMK4RLO5ZC6S0ZyENe5XVVJY0WKZ5IfIXEhVTkYXMnbtrYIdfrTdDuHstoWY9uu4bS8PtFDheNn3MyNfObqpoBPAh1qJdwfJgzo5e7pEoxVORUMnw==` |
+> | UK South | rsa-sha2-256 | `3nrDdWUOwG0XgfrFgW27xhWSizttjabHXTRX8AOHmGw=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQCdLm+9OROp5zrc6nLKBJWNrTnUeCeo8n1v9Y3qWicwYMqmRs/sS9t5V3ABWnus4TxH3bqgnQW3OqWLgOHse/3S+K1wGERmBbEdKOl7A7kQ9QgDkWEZoftwJ9hp+AMVTfCYhcOOsG+gW021difNx+WW2O5TldL31pk+UvdhnQKRHLX31cqx5vuUmiwq4mlbBx+rY8B/xngP2bzx/oYXdy1I9fZbWWAQ6FwJBav1sSWL0l7snRdOsy5ASeMnYollEw1IATwYeUv8g3PzrryZuru+7gu/Ku9w8d5jbFyI6Up4KLwjs/gZNuqQ5dif7utiQYbVe4L0TPWOmuLA25JJRZaF` |
+> | UK South | rsa-sha2-512 | `Csnl8SFblkdpVVsJC1jNVSyc2eDWdCBVQj9t6J3KHvw=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDIwNEfrP6Httmm5GoxwprQ57AyD6b3EOVe5pTGQWIOzxnrIw2KnDPL07KNa33xZOmtXro5PYyhr5eNXUkFiQMEe+RblilZSNAvc4MHbp2TVD0L9N7Pdy2SetoF4m5BCXdC48kZntqgkpzXoDbFiaAVln5zQCHB5fOuBPS1id8+k3zqG0o+K0MHb6qcbYV8gdQeOn/PlJzKE4M0Ie8na3aWHdGvfJjDdK/hNN0J+eUK8qIb9KCJkSMDj/l3rnue9L8XgeKKA2Pkvh3nch4VBXCcCsDVhgSf+aoiJ0Fy8GVOTk2s7QDMzD9y37D9V2OPl66q4pjFGOfK0mJmrgqxWNy5` |
+> | UK South | ecdsa-sha2-nistp256 | `weMVzOmQnlMdMp5XBoU9SdN5meBbx/8nvA8dB45w8Ck=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEnBllEm4/HsTP+ZMhlc8YnSAYWF23tibZDqGxf0yBRTU/ncuaavuQdIJ5TcJb0NcXG7skEmq3StwHT0FPMWN8Y=` |
+> | UK South | ecdsa-sha2-nistp384 | `HpsZ8zoOCCsUbpD3nAOtxpuKIvn0L8KGyg1KMLuMUqU=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGd/672brwX1kOhH31ZTdBRj+bcEmemcdmTEe0J88cJ3RRQy7nDFs25UrnR+h3P0ov9Uq24EJQS8auxRgNCUJ3i3ZH9QjcwX/MDRFPnUrNosH8NkcPmJ/pezVeMJLqs3Qw==` |
+> | Australia Southeast | rsa-sha2-256 | `YafIMxux7NtoKCrjQ2eDxaoRKHBzpanatwsYbRhrDKQ=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC7omLu37G00gLoGvrPOJXpRcI5GTszUSldKjrARq0WeJIXEdekaSTz5qv2kSN/JaBDJiO9E4AJFI9q5AvchdmMVs4I59EIJ0bsR9lK+9eRP4071EEc8pb3u/EPFaZQ8plKkvINJrdK6p0R2FhlFxa7wrRlKybenF1v7aU3Vw79RQYYXaZifiNrIQFB8XQy3QQj2DcWoEEOjbOgZir9XzPBvmeR8LLEWPTLYangYd3TsQtybDpP6acpOKaGYDEyXiA8Lxv8O276LUBny6katPrNvfGZScxn6vbTEZyog+By8vyXMWKEbC1Qc/ecBBk5obFzbUnq3RP1VXLJspo99cex` |
+> | Australia Southeast | rsa-sha2-512 | `FpFCo9sNUkdnD1kOSkWDIfnasYhExvRr1pJlE631QrA=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDmuW2VZAhR6IoIOr32WnLlsr/rt3y4bPFpFcNhXaLifCenkflj9BufX3lk5aEXadcemfKlUJJdwBTvTt1j4+X3P2ecCZX1/GSsRKSTuiivuOgkPxk3UlfggkgN9flE9EdUxHi/jN/OQ9CjGtHxxk72NJSMNAjvIe0Ixs7TfqqyEytYAcirYcSGcc0r70juiiWozflXlt+bS7mXvkxpqMjjIivX+wTAizzzJRaC6WcRbjQAkL2GP6UCFfBI1o9NBfXbz+qvs1KTmNA0ugRQ7g6MdiNOePHrvoF1JgTlCxEjy+/IqPiC8nNQUVCW6/gcATQoDQn0n9Lwm1ekycS35xEh` |
+> | Australia Southeast | ecdsa-sha2-nistp256 | `4xc49pnNg4t/tr91pdtbZLDkqzQVCguwyUc16ACuYTc=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCdswzJ+/Bw5ia/wRMaa0llZOjlz67MyZXkq7Ye38XMSHbS4k/GwM0AzdX+qFEwR00lxZCmpHH28SS+RyCdIzO0=` |
+> | Australia Southeast | ecdsa-sha2-nistp384 | `DEyjMyyAYkegwLtMBROR/8STr1kNoQzEV+EZbAMhb1s=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJRZx6caZTXnbTW/zRzKfoKC4LGzvD5fnr2p8yGWxDq27CjUEMxkctWcT6XS3AstF2MLMTOFp/UkkSr8eP0vXI8g99YDeFGXtoBiFSIgYF2Aiu/kpfEu3feiIUl3SVzxKw==` |
+> | France South | rsa-sha2-256 | `aywTR4RYJBQrwWsiALXc1lDDHpJ34jIEnq3DQhYny0g=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDELY4UcRAMkJpEBZT40Oh5TIxI6o6Enmlv+KxWkkcyFcNJlFtaF2Hl+afWlysrg+lB5Un4XpveWY64pl7a/dSju7aPfujcXowELIPqFSoWW7xQ+jkfJdyI0daa0l2h2oNCPqWnx8+04Vx5kcb2GktlNG4RMLx7Q6COJgQ3pGHtyfZ5fnmrWNBsuv4mvsXp0u1KGWX6s2LZtO+BpKE6DegSNLMVapAZ0ju8pagqtm6aeWEtqmkAvsI0U31qhL25FQX4DzjIbGzXd6I25AJcSXcpnwQefsaOwO/ztvIKeIf3i/h2rXdigXV1wyhvIdKm1uWwj6ph4XvOiHMZhsRUe02B` |
+> | France South | rsa-sha2-512 | `+y5oZsLMVG6kfdlHltp475WoKuqhFbTZnvY0KvLyOpA=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDmsS9WimMMG95CMXFZiStR/peQU1VA6dklMbGmYwLqpxLNxxsaQuQi6NpyU6/TS8C3CX0832v1uutW38IfQGrQfcTGdAz+GjKverzaSXqZGgTMh/JSj06rxreSKvRjYae596aPdxX5P+9YVuTEeTMSdzeklpxaElPfOoZ7Ba5A2iCnB/5l/piHiN8qlXBPmfGLdZrTUFtgRkE4Ie4zaoWo19611XgUDMDX4N4be/qilb95cUBE73ceXwdVKJ3QVQinZgbwWFUq0fMlyd8ZNb9XN6bwXH7K6cLS6HYGgG6uJhkYSAqpAZK2pOFn3MCh8gw2BkM/Rg+1ahqPNAzGPVz9` |
+> | France South | ecdsa-sha2-nistp256 | `LHWlPtDIQAHBlMkOagvMJUO0Wr41aGgM+B/zjsDXCl0=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHdj2SxQdvNbizz8DPcRSZHLyw5dOtQbzNgjedSmFwOqiRuZ2Vzu88m2v5achBwIj9gp0Ga14T7YMGyAm04OOA0=` |
+> | France South | ecdsa-sha2-nistp384 | `btqtCD/hJWVahHWz/qftHV3B+ezJPY1I3JEI/WpgOuQ=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB2rbgGSTtFMciVSpWMvmGGTu8p1vGYfS2nlm+5pAM85A4Em1mYlgHfVZx+SdG5FSYcsX4vTWt4Yw2OnDmxV3W0ycrKBs4Bx3ASX4rx3oZezVafHsUUV0ErM+LmdmKfH8g==` |
+> | West US 2 | rsa-sha2-256 | `ktnBebdoHk7eDo2tvJuv36XnMtfauhvF/r0uSo6DBfk=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDoskHzExtM+YSXGK6cBgmnLlXsZLWXkEexPKC7wHdt0kSqkIk9F31wD+2LefZzaTGfAmY5/EWrOsyBJvIgOoksH+ZPMnE9+TOWqy6vsS+Ml/ITvUkWajS1bKDPDSoIrCM1rQ9PlbgMQFg4o0FfyxLVCP7hcgvHO+aycOxkiDqtvwANvIn2Qwt7xwpIv1Mnc4OpcBSxbigb7ISlrvR9XWivE/piWjXS3IEYkGv7VitAlmWEoTt9L7K94bYol2nCXSXJ33X6xVVwVNpdxVtnUQBIRinN+vkOccgG0jvWtWPtrMjyDg/lyvr6lBdO/CQy4VO4VrIBuL6pjsS8KfIfTxKd` |
+> | West US 2 | rsa-sha2-512 | `i8v3Xxh/phaa5EyZEr5NM4nTSC/Rz7Nz0KJLdvAL0Ls=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDOOo5f0ACypThvoDEokPfzGJUxbkyMoQKca9AgEb3YkQ/lsYCfLtfGxMr2FTOGQyx5wfhOrN0B2SpI4DBgF3B0YSLK0omZRY7fpVPspWWHrsbTOJm/Fn7bWCM+p63xurZ6RUPCA6J1gXd3xbdW7WQXLGBJZ6fjG7PbqphIOfFtwcs/JvjhjhvleHrXOtfGw9b4Jr8W1ldtgKslGCU1mnUhOWWXUi+AhwGFTI0G/AShlpX8ywulk2R+fxet3SNGNQmjydnNkcsrBI/EMytO1kwoJB3KmLHEeijaQzK7iJxRDZEHlHWos6G7jwaGPI4rV5/S1N+xnG+OhCDYAUbunp5R` |
+> | West US 2 | ecdsa-sha2-nistp256 | `rt5kaA0neIFDIWTP2MjWk9cOSapzEyafirEgPGt+9HM=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKEKP+1QZf3GfEvkNZtzoKr05iAwGq+yPhUsVdyA7uKnwvTwZAi7NBr4hMkGIIdgQlGrMNNXKS0V+rhMNI1sH48=` |
+> | West US 2 | ecdsa-sha2-nistp384 | `g0vDKd4G5MKnxWewbnCYahCv1lZgfnZeEXfPAhv+trs=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB1+/Qu9Y1BqqV3seN0+0ITYCueuv0TFAnfG9z1Io8VYjmxLvdmaDvTi9kJ0ShjFJRKjbCfYKNekqYZDW4gBfmE9EyvMPI6VXPVLNY3TQ/z+Y7qO/oa28cSirW9fhp7vbA==` |
+> | South India | rsa-sha2-256 | `5gFLJvQvQodZxKBi3DnGywpf9dliWguiMTqcgkTmtu8=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDlxVnaYnmg1cK+g/PI1jB1fgQQJiX39ZmfBss3mSW3kUxP3KWhm7lHBTkrbnfhVHnGpP6GcGFy09YBQa6UiyVpD8p8APtx0j9Jp8m3yhhgqOIjup0C7crl49NqMVryOZmCLOvA7KTyTxxV37GpRI+ffqQ8LOO+anWVWVaJlVCYBMct/OVhA7ePXblcbJg5eu5JjUiWW+cPdVqAqWojNHZzzprCFEBTCvYaZtzBx4kFGiipPmJSN6yvBPEfnA7Lzr/T9iXV/XkmI1txuJRBasoQMt+4jCZG25sCCN8y4iuUJCioUELr//TWaDyTsQAR4MbRW+L/GSIM9VUY4Uc+Impp` |
+> | South India | rsa-sha2-512 | `T4mrHCEHbFNAQSng//m0Viu/hXfi11JMnyA0PqAuTtg=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQCz9tQa7D4dyrULCLH75yKwH27AQMRNWFUqgUQQXYHR1MYegLf7JEmFn126bxgEHPRO0bNwBM9S626Gcr1R1uDI/luL6uvG0Q57k+Pmv7HNQtv12J3fAuxuhSPcppE5IE5QR94Qgd1RzGXv954TK1Z+kCXHyLA583XTQ4btOEwqUo/16tSCqaoTSdyNp17q8BrOCPaTWMqT774lSGELIDc6RaGKHRu/Qa+F5FRMswdZt5YJDEKtlKdvbyIiSfIP2GZGhWBiSW2D6xpnzSjstR3LfRfFek/ryGkDPu5c5HNaVJwc1fatP6ggAbhVCcyDgWiCCpEBICV2wnPpGfDUdbRh` |
+> | South India | ecdsa-sha2-nistp256 | `7PQhzR5S6sEFYkn2s3GxK6k8bwHgAy0000zb07YvI44=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLgZw/ouE23XQnzO8bBPSCJp/KR+N/xfuJS5QtWU/PzlNLmSYS20b65GRP6ThwZdaigMhwHOEc8twpJ7aA7LBu0=` |
+> | South India | ecdsa-sha2-nistp384 | `sXR2nhTTNof58ne5K+Xjm9Pu8miEbKJn4Bo9NYoqQs4=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLwbzUI8q9f5YTLIs6ddRTPlHdb35xrbsJeOQII/nEXhlNjzpdL9XnDJjQunQL2vg6XND1pfp3TNBJ9LF3mud442LbpwSt9B7EZD8tQ5u0+2NeNjn8JnCu6/tdvS+xoNiA==` |
+> | Japan West | rsa-sha2-256 | `DRVsSje7BbYlVJCfXqLzIzncbVU4/ETFGzdxFwocl8E=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDl/rlTgQpomq4FmJKSR2fjgAklV818RcjR/e/C1VUJVpbntJoWUlBhKYDFPTVQaHXDTK5HyJU5APsdy6CJo8ia32qc2E/573LDNk4dgFFrh+KFRiD+ULt3IH15i1DieVw61MAVOvzh+DmTJHPLaTufXoQ62YACm3yC1st1kXv4bawfXs0ssmeqrBcCOQvMvW/DexnnGXO6QXYTcjUktNrO2h2dd355n5FP4fcsBEdGmfT79HYPM6ZoqkItRZEO6Nel65KxtenAwQub8SK3iJgFyJwd3zIH4OCHp3z4tcGXw5yNAX15dJMSnls0zvzhx0f4ThwfgB4t1g9jVb47Ig7B` |
+> | Japan West | rsa-sha2-512 | `yLl9t2jlkrTVWAxsZ59Wbpq+ZCnwHfdMW8foUmMvGwI=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC9zrpnjY7c0dHpE1BMv+sUp+hmvkBl3zPW/uCInYM5SgtViSQqn/DowySeq+2qMEnOpHGZ8DnEjq55PmHEumkYGWUUAs38xVGdvRZk6yU7TxGU42GBz0fT/sdungPHLQ2WvqTZYOFqBeulRaWsSBgovrOnQEa2bNTejK9m353/dmAtKHfu68zVT+XYADrT3PY5KZ1tpKJA0ZO9/ScUvXEAYs20WSYRZBcNDoSC9xz4K8hv9/6w3O3k0LyBKMFM5ZW8WVDfpZx1X0GBCypqS+RNZuVvx81h3nxVAZSx80CygYcV4UHml7wtnWDYEIBSyVRsJWVNGBlQrQ4voNdoTrk5` |
+> | Japan West | ecdsa-sha2-nistp256 | `VYWgC6A4ol1B7MolIeKuF2zhhgdzQAoGBj5WgnQj9XE=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFLIuhTo1et/bNvYUj+sanWnQEqiqs9ArKANIWG19F9Db6HVMtX8Y4i7qX6eFxXhZL17YB2cEpReNSEjs+DcEw4=` |
+> | Japan West | ecdsa-sha2-nistp384 | `+gvZrOQRq3lVOUqDqgsSawKvj6v/IWraGInqvqOmC6I=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD3ZiyS1p7F1xdf6sJ3ebarzA5DbQl1HazzLUJCqnrA84U8yliLvPolUQJw4aYORIb5pMgijsN3v9l0spRYwjGHxbJZY/V6tmcaGbNPekJWzgXA1DY35EbFYJTkxh/Yezw==` |
+> | Norway East | rsa-sha2-256 | `vmcel/XXoNut7YsRo79fP5WAKYhTQUOrUcwnbasj/fQ=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC4Y1b2Bomv8tc/JwPgW0jR5YQhF031XOk4G0l3FOdZWY31L8fLTW6rOaJdizOnWCvMwYQK39tyHe6deN9TZESobh0kVVuCWaZNI6NUR0PSHi0OfbUkuV0gm/nwtwJkH5G9QbtiJ5miNb4Ys3+467/7JkqFZmqN6vBLhL9RVInO00LPYkUGtGfTv+/hmsPDGzSAujNDCFybti4c+wMgkrIH6/uqenGfA1zW3AjBYN2bBBDZopzysLHNJYQi3nQHQSiD4Mdl7IGZtJQeC/tH9CKH5R4U4jdPN1TmvNMuaBR/Etw4+v0vrDALG1aTmWJ7kJiBXEZKoWq/vWRfLzhxd4oB` |
+> | Norway East | rsa-sha2-512 | `JZPRhXmx44tEnXp+wPvexDys1tSYq9EDkolj9k6nRCM=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC11j19LeEqRzJOs8sWeNarue+bknE3vvkvSnsewApVMQH35t9kpqRGMSr6RTU2QCYDiQTCKI2vzLSTLGoizoPBiY/7lvdylDRCbeEpuFUkgvKZrapkJ6JqKOySPpFNhqCs27rdY5dJ2C7/nmTL/kvcyhXFXZT2lJaOIdRSKv/1Q3DAWQ9icNGbDokQDubF5etlkquqTV6r/ioFuh7hdKE+fJooyHa2oYTD+j5cNDKBxrJWBEidOe2HwplR4lYPggUcVtGu9aoSVIMmswztFF6+MNIdOT1kdvHewKLjkVB1hbIHl/E+uexsyMGcCg5fPy7dDIipFi1aED+6R7CnAynJ` |
+> | Norway East | ecdsa-sha2-nistp256 | `mE43kdFMTV2ioIOQxwwHD7z+VvI4lvLCYW8ZRDtWCxI=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDWP6vJCbOhnvdmr7gPe8awR/E+Bx+c8fhjeFLRwp6/0xvhcywT9a1AFp7FdAhkVahNKuNMU1dZ0WTbOEhEGvdg=` |
+> | Norway East | ecdsa-sha2-nistp384 | `cKF2asIQufOuV0C/wau4exb9ioVTrGUJjJDWfj+fcxg=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDGb8w8jVrPU1n68/hz9lblILow6YA9SPOYh5r9ClAW0VdaVvCIR/9cvQCHljOMJQbWwfQOcBXUQkO5yI4kgAN3oCTwLpFYcCNEK6RVug9Q5ULQh1MRcGCy3IcUcmvnYdg==` |
+> | France Central | rsa-sha2-256 | `zYLnY1rtM2sgP5vwYCtaU8v2isldoWWcR8eMmQSQ9KQ=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDCmdsufvzqydsoecjXzxxL9AqnnRNCjlIRPRGohdspT9AfApKA9ZmoJUPY8461hD9qzsd7ps8RSIOkbGzgNfDUU9+ekEZLnhvrc7sSS9bikWyKmGtjDdr3PrPSZ/4zePAlYwDzRqtlWa/GKzXQrnP/h9SU4/3pj21gyUssOu2Mpr6zdPk59lO/n/w2JRTVVmkRghCmEVaWV25qmIEslWmbgI3WB5ysKfXZp79YRuByVZHZpuoQSBbU0s7Kjh3VRX8+ZoUnBuq7HKnIPwt+YzSxHx7ePHR+Ny4EEwU7NFzyfVYiUZflBK+Sf8e1cHnwADjv/qu/nhSinf3JcyQDG1lN` |
+> | France Central | rsa-sha2-512 | `ixum/Dragma5DAMBzA/c5/MY02FjUBD/gI8+XQDzJvc=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDjTJ9EvMFWicBCcmYF0zO2GaWZJXLc7F5QrvFv6Nm/6pV72YrRmSdiY9znZowNK0NvwnucKjjQj0RkJOlwVEnsq7OVS+RqGA35vN6u6c0iGl4q2Jp+XLRm8nazC1B5uLVurVzYCH0SOl1vkkeXTqMOAZQlhj9e7RiFibDdv8toxU3Fl87KtexFYeSm3kHBVBJHoo5sD2CdeCv5/+nw9/vRQVhFKy2DyLaxtS+l2b0QXUqh6Op7KzjaMr3hd168yCaqRjtm8Jtth/Nzp+519H7tT0c0M+pdAeB7CQ9PAUqieXZJK+IvycM5gfi0TnmSoGRG8TPMGHMFQlcmr3K1eZ8h` |
+> | France Central | ecdsa-sha2-nistp256 | `N61PH8SVCAXOq7Z7eIV4mRnotafmNoPrpc+TaLxtPX4=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBK3UBFa/Ke9y3aLs1q1b8gh/tXiS7lpOTzUiDFpXdbq00/V9Ag+v2z5MIaicFdum9Ih4fls1Mg07Ert16bi5M8E=` |
+> | France Central | ecdsa-sha2-nistp384 | `/CkQnHA57ehNeC9ZHkTyvVr8yVyl/P1dau2AwCg579k=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG/x6qX+DRtmxOoMZwe7d7ZckHyeLkBWxB7SNH6Wnw2tXvtNekI9d9LGl1DaSmiZLJnawtX+MPj64S31v8AhZcVle9OPVIvH5im3IcoPSKQ6TIfZ26e2WegwJxuc1CjZZg==` |
+> | West US 3 | rsa-sha2-256 | `pOKzaf3mrTJhfdR/9dbodyNza30TpQrYRFwKAndeaMo=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC0KEDBaFSLsI28jdc854Rq6AL9Ku8g8L+OWQfWvb1ooBChMMd/oqVvFF9hkLzJ8nFPQw7+esVKys5uFwRTpBNuobF/RVtY0zLsNd+jkPxoUhs7Yl0hI2XXAPdp3uCsID56O+OrB7XbOsPCrJ2aXfiaRheRQg84/92c357uQ/epsva8XCMjIIGOAyEL6d4mnCNJ2Y0mXPJT1lfswoC8i2GSUKdJZhTLCe9zVDvTCTWuZJSH3A8nM3RVtnNgMXfNjh2blwW9YFv5BrMOXA205fahuDcPjwvXo9OMfEneDsrODmiEGYzbYLby/5/KPzz5OVn7BDJma6HL0z07i3PmEzXN` |
+> | West US 3 | rsa-sha2-512 | `KKcoWCeuJeepexnJCxoFqKJM88XrpsPKavXOoNFEGuY=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDNzhiVgDjCIarGEjKgmSxRh4vWjV6PxFbNK3cD0M4jWGlxPx/otJNEXCMee0hW29b7bwo2+aiyv3AEt7JYTeM/G9SHmenU6MTpqD/lC/LABtqTB7EV9FIFkc8MbbOvEkdTnRJw1d09MTqqwbkR9wq297AWggSzCuPDqMq+268UzsthMzODRVqW3yTr3M6vhlBCPfN5ptcvYwqRaa7Yhe4bdRZ+xYB5I2+ZMkalfn7SQiySSgAGjUJxrxK+LnJKSi32CfqTU8KjWNjCc40eAqexLFjg6AN9BtC0+ZYcD2KQmeqJ8oRCWw9r4CsaduSmcjc7XD75RKGdArjYzjeiVSlt` |
+> | West US 3 | ecdsa-sha2-nistp256 | `j4NlZP/wOXKnM3MkNEcTksqwBdF0Z46+rdi2Ic1Oj54=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBETvvRvehAQ2Ol0FfTt649/4Xsd0DQQ7vyZ666B92wRhvyziGIrOhy8klXHcijmRYRz3EjTHyXHZ4W8kcSKB4Lo=` |
+> | West US 3 | ecdsa-sha2-nistp384 | `DkJet/6Pm6EXfpz2Ut6aahJ94OvjG3R7+dlK0H4O1ts=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEu+HpgDp0a02miiJjD5qVcMcjWiZg5iIExECqD/KQVkfyraJ3WZ8P28JwB+IYlEGa2SHQxScDjG2t3iOSuU9BtpA0KK5PGtu3ZxhN1UmZbQgz6ANov7/+WHChg7/lhK0Q==` |
+> | Central India | rsa-sha2-256 | `OcX6wPaofbb+UG/lLYr30CPhntKUXyTW7PUAhC6iDN0=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDWuKbOJR8ZZqhE3k2HMBWO99rNWoHwUa+PVWPQHyCELyLR19hfrygNL9uugMQKTvPYPWx8VM6PrQBrvioifktc/HMNRsoOxlBifQETfRgBseXcIWorNlslfFhBnSn6ZGn8q4XICGgZ1hWUj9z1PUmcM2LZDjJS33LLzd23uIdLePizAliJAzlPyea8JNpCVjfmwnNwtuxXc48uAUXlmX+e0ZXRwuEGble8c1PbrWWTWU4xhWNJ+MInyvIGv9s6cGN7+fxAFaUAJS0wNEa3poCnhyNxrckvaqiI3WhPQ8Hefy2DpXTY03mdxCz8PZPcLWdQU3H5nmuAc/pypnc0Avax` |
+> | Central India | rsa-sha2-512 | `HSgc5u8s+QILdyBq6wGJkxRcK5nxj81gxvpkR5bcH6k=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDSO/R/yw8q33yLkSHOw0Bi2WKDWQPrll8skh3hdRUB6wtw9dvtQFEV3suvFJsTVvAbnGBe2Fjgi69X0zkIygxg74XuQsx7GZO6gyaKDwljyanFoCzer+OzFSpDcVJ0zOfhY99uHeYT6k4leb2ngABqjiqieDHMZ9JQX12KOK3cAks/oytrNUo9krGb1Nyv5BYu4dWXHmuFgtigDd043khaARfdWkg88lKgb6G9k+vQTGKphLnFMqhada/aP8GsaA2Dq5d/LH5P5CTU7MRPA8TuuyLOtbv8FtQ2TyaAXhYCplCQELtto1yXZ79WVjQE/uKuX8xK5M2rfOH+H5ck/Rxl` |
+> | Central India | ecdsa-sha2-nistp256 | `zBKGtf770MPVvxgqLl/4pJinGPJDlvh/mM963AwH6rs=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBjHx8+PF0VBspl6l9Xa3BGyJwSx2eDX0qTDnhrdayzHMWsHGX3vz0wr7oMeBVdQ26dOckExa6iPrEDSt8foV1M=` |
+> | Central India | ecdsa-sha2-nistp384 | `PzKXWvO/DR/KnUElcVWIwSdabp6ZJqce37DJZzNl3Sk=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJwEy1f+GYN4rxhlCAkXGgqAU1S7ssI4JPEJs8z1mcs8dDAAVy1cqdsir9yZ9RSZzOz/BOIubZsG137G2+po0Pz0FfJ0jZVGzlx1UHXu7OMuKQ7d2+1TkPpBYFy6PiCa3w==` |
+> | Korea South | rsa-sha2-256 | `J1W5chMr9yRceU2fqpywvhEQLG7jC6avayPoqUDQTXHtB2oTlQy2rQB` | |
+> | Korea South | rsa-sha2-512 | `sHzKpDvhndbXaRAfJUskmpCCB3HgPbsDFI/9HFrSi3U=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQCfGUmJIogHgbhxjEunkOALMjG77m+jgZqujO3MwTIQxQNd/mDeNDQaWDBVb2FJrw15TD3uvkctztGn2ear3lLOfPFt0NjYAaZ8u5g9JYCtdZUTo5CETQFU/sfbu2P2RJ/vIucMMg8HuuuIMO059+etsDZ5dZHu9cySfwbz/XtGA0jDaTlWG0ZDT+evOE0KmFABjgMFWyPnupzmSEXAjzlD/muGeeUhtXUB8F6HVUCXLz7ffzgYiYj+1OB0eZlG/cF8+aW7MOpnWvfpBxwm16soSE1gmZnXhPrz/KXlqPmEhgIhq7Cwk54r3rgfg/wCqFw+1JcbNOv5d4levu/aA7pt` |
+> | Korea South | ecdsa-sha2-nistp256 | `XM5xNHAWcYsC5WxEMUMIFCoNJU/g90kjk/rfLdqK7aw=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHTHpO85vgsI6/SEJWgEhP5VDTikLrNrSi6myIqoJvRx6x8+doTPkH87L/bOe/pTU/rCgkuPi1kXTC7iUTSzZYk=` |
+> | Korea South | ecdsa-sha2-nistp384 | `6T8uMI9gcG3HtjYUYqNNxi99ksghHvsDitIYpdQ4BL4=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAgPPIDWZqvB/kuIguFnmCws7F4vzb6QG7pqSG/L9E1VfhlJBeKfngQwyUJxzS2tCSwXlto/1/W302g0HQSIzCtsR4vSbx827Hu2pGMGECPJmNrN3g82P8M0zz7y3dSJPA==` |
+> | South Central US | rsa-sha2-256 | `n7P8NrxY8pWNSaNIh8tSZxi9rXi11g3JuzWZF93Ws4g=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQD4PgB8PxPPpGfvrIUGSiiDFIfkRk2/u1DmhcoGoIfW+/KR8KC2JA0kY4Yj+AceGnDUiBbSPz7lcmy2eGATfCCL6fC5swgJoDoYDJiJoaKACuVA0Nk2y0OeO58kS6qVHGX/XHzx8+IkfGdlhUUttcga7RNeppT5iqSz49q9x6Ly42yrV3DIAkOgh+f9SsMMfR6dQQmvWN3HYDOtiO2DvVN+ZenViQVcsynspF3z4ysk53ZYw5YcLhZu8JFw4u0F6QJAznR6TfNqIlhSjR1ub8DiHvIwrmDNf8TgG5kPVGhIcibYPf+y0B0M8nr9OKCxZzUTlXX4Xcnx+VOQ1e1qGHvV` |
+> | South Central US | rsa-sha2-512 | `B2oOtHpXzwezblrKxGcNBc3QJLQG/TiVgOjnmNorqkA=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC+LJA8W3BcwITzJv6CAkx/0HBPdy3LjKPK2NQgV9mxSMw8mhz4Ere59u2vRsVFcdW6iAeGrH66VF6mJSCgUKiYnyZAfTp1O6p6DnUg4tktMQFo4BEwSz1S5SGDuRhpWvoKjzvljESf/vZBqgms7nMRWe3MGuvlUWBqB+2CnJ7bxhvGQCdBTQeoPO9EZKYKi/fPlcxBmLFGcZnRRpB6nu/Cxhhj1aHLJdjqCd+4ahtjBHeFrPxeQv9gTJ1B+EipJZu7WgPZOTI8iZaIcnCbhuGOy0iOFXeuexC9/ptHDW9UEgKVLyZ4UIPJkSLFVgW5NRujWyZ/thc5+EfHY9Db3UAl` |
+> | South Central US | ecdsa-sha2-nistp256 | `Wg9hTlPmrRH9aC9lTSf8hGFqa85AnW3jqvSXjmHAdg4=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJnEz4iwyq7aaBNKiABce+CsVIUfiw9Jw3pp6pGbL6cUaJs9mEVg1RMLHgPg2I+7XV0doisYhYb/XtufxzGCe94=` |
+> | South Central US | ecdsa-sha2-nistp384 | `rgRhPelmxAix6TBDahmGqXnKjdImdI3MnDPVc6qhF2o=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKXGKbWfVe18G9gbCxFQiBGkGYM9LktSPKkRI18WRQ50qyuxVRXRDoV+iIEJyCQTpuFTPprQ6glQYeF+ztEb4MZaXpVrcs1/Og191dcEtty3UWuJBCrv/t1kezlwBWKyXg==` |
+> | Korea Central | rsa-sha2-256 | `Ek+yOmuAfsZhTF4w7ToRcWdOevgZPYXCxLiM10q44oA=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQCyUTae7QtAd3lmH+4lKJNEBNWnPUB+PELE9f4us5GxP8rGYRar1v3ZGXiP2gzPF1km1cGNrPvBChlwFMjW+O5HavIFYugVIe8NzfI7S3t+kgTylXegSo1cWen18MAZe6Q5vxqqFzfs+ZChWEa/P37lTXVkLVOYCe5NJUPm8Zvip7DHB2vk25Fk3HMHG9M50KNj1Hp4etPI7yiLNLNCh5V410mf3xhZChMUrH6PMl/A+sVv68ulcVeIZ68eMuQktxz1ULohBdSExZGmknVrwfF/fLTKWxHlVBjB3yDlLIJO3nTFKaQ4RzPa/0If+FcbY+hIdzSjIAK6W3fRlbVuWHYR` |
+> | Korea Central | rsa-sha2-512 | `KAji7Q8E2lT3+lSe7h74L6rfPnLEfGVzYZ/xyM96//0=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDxZYb5eIWhBmWSwNU6G9FFDRgqlZjYYorMSXJ4swHm4YYHKGZTf4JOE5d87MNtkVgKe2942TQxA1t2TaENlmNejeVG5QZ4to+nVnwsFov2iqAYChoI6GlhpwzyPsO0RkqLB8mvhoKMel1sNGfmxjxYVFt4OSPHDzNIU4XjGfW24YURx/xRkLU1M9zBNADDx+41EMNRT7aBXrKW9MzsxkfCM3bYwjdBbI2Yi2nUqARm+e/sBPLTqVfjuMFvosacYc43MqepFSQoZE5snwYxkLJzltAbxNUysJs277isnGgezh9p5T2MCxtCERU0lvp7M52hd1p75QEtNrdadfDprzT9` |
+> | Korea Central | ecdsa-sha2-nistp256 | `XjVSEyGlBtkONdvdw11tA0X1nKpw5nlCvN/0vXEy1Gc=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPYiomaLB3ROxZvdfqiilpZ+2XJiPDeIIv4/fnQRZxnCBCFrUm7ATB6bMBSUTd00WfMhnOGj4hKRGFjkE+7SPy4=` |
+> | Korea Central | ecdsa-sha2-nistp384 | `p/jQTk9VsbsKZYk09opQVQi3HzvJ/GOKjALEMYmCRHw=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBN3NA7U4ZC576ibkC/ACWl6/jHvQixx+C6i2gUeYxp7Tq6k4YLn7Gr1GNr+XIitT6Jy5lNgTkTqmuLTOp7Bx9rGIw9Or37EGf7keUM42Urtd+9xF1dyVKbBw0pIuUSSy+w==` |
+> | Southeast Asia | rsa-sha2-256 | `f0cyRMVxUgtpsa9J6pwAMysk2MY/sybo5ioPjhy9LZk=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDWPK6PAGMTdzNkwKZt+A3Dhbnete6jyLLboOXWdv/QdhvjR2pNCMhGuWUxadaiLUxzZM7IvugSLGexQlZi5aCJ06DpaVYqZk/Q8l+QUydp9TfNg/kP+0OJXCJ6XdsVggboDIfrEN8ku4nfasD4QTo2tnmqZhmbIDUr38SP16PsH2bQAi2lZKg4DfWgnSFyj5sbMSDLljBEY6JQkLGiPcbqlYEN4kjB5mudE9c/ts6Jn1fhizBwJY/pE3kOydq8dCMXYFMZ6NafPacCi7Pe5zcTKfi/daioVlSXQhWK3jNzCVENonF2xWSPH+1T5F2IOV0wb0HL2l8d02x5Bw2Su4aF` |
+> | Southeast Asia | rsa-sha2-512 | `vh8Uh40NCD3iHVh5KEcURUZrT3hictlF9pMDEoK5Rxk=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQCdL+E/W2RpmJiWMRg5EtMs0AE7BF2Qb5jnXXaIbwqr5/BGuUPLm43eVJJt5R0BmEJe2lYfYLAzinC9MhsxKSTHIt5u8QleyIAxI759M3DWZwFSKngjsHFRe/SvZOzc7gvtR7osdnVaXCTXY5NccLT34gDybEbjlmp+SEvSZZmXyy2wmUR3O022euBifKN0t9Tk1mkLYhbfRySQi0ZADWazjd7loM9ZHArVe8y9oDrs7QYX4eHIVRbgtsBbkR3g9zP3VWVMERFyi6cU0Dyvue8DCx9YzNsdmKjkB2dvYTMVcUkad81pbO81jpLb1wL25WPHIPHqTOLZhdn9JxLn245Z` |
+> | Southeast Asia | ecdsa-sha2-nistp256 | `q7OsE02p9SZ6E63b+Mxri1wbI5WfkdWcIJgAP2+WTg8=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEbvjkwSA0RQuT2nQf8ABKc21s/kcC/7I5431oNEwQPZQ8S18RAKktv6ti19Ju8op6NOZZ3Up9lOn3iybxHgy+s=` |
+> | Southeast Asia | ecdsa-sha2-nistp384 | `HpneuSwbRG7eiqHGEAkSXF0HtjvccoT3OIgeQbPDzoE=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMGAMUN+0oyuXuf6rkS+eopeoISA2US3UrgAovMwoqAeYSPoHKy9n/WKczsHPy/G+FKsXM4VlMHtNhEAxYwjtueF0Sb2GRZFzngeXMfVZPVL5Twph/pT6ZJnUD8iloW0Mw==` |
+> | Australia East | rsa-sha2-256 | `MrPZLU8llsG+SzgBN8eH702H4zuynyYgqqQLQmWGDEs=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDsRwHZ+DKINZZNP0GO6l7mFIIgTRnJy7ikg07h54iuk+KStoB2Cwppj+ulPs0NiR2RgAsP5nchWRZQysjsfYDui8wha6JEXKvWPlQ90rEYEs96gpUcbVQesgfH8ILXK06Kn1xY/4CWAHEc5U++66e+pHQulkkFyDXTsRYHsjTk574OiUI1` |
+> | Australia East | rsa-sha2-512 | `jkDaVBMh+d9CUJq0QtH5LNwCIpc9DuWxetgJsE5vgNc=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDFHirQKaYqkcecqdutyMQr1eFwIaIM/h302KjROiocgb4iywMAJkBLXmhJn+sSbagM5tzIk4K4k5LRjAizEIhC26sc2aa7spyvDu7+HMqDmNQ+nRgBgvO7kpxVRcK45ZjFsgZ6+mq9jK/eRnA8wgG3LnM+3zWaNLhWlrcCM0Pdy87Cswev/CEFZu6o6E6PgpBGw0MiPVY8CbdhFoTkT8Nt6tx9VhMTpcA2yzkd3LT7JGdC2I6MvRpuyZH1q+VhW9bC4eUVoVuIHJ81hH0vzzhIci2DKsikz2P4pJT0osg5YE/o9hVJs+4CG5n1MZN/l11K8lVb9Ns7oXYsvVdtR2Jp` |
+> | Australia East | ecdsa-sha2-nistp256 | `s8NdoxI0mdWchKMMt/oYtnlFNAD8RUDa1a4lO8aPMpQ=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBKG2nz5SnoR5KVYAnBMdt8be1HNIOkiZ5UrHxm4pZpLG3LCuzLEXyWlhTm8rynuM/8rATVB5FZqrDCIrnn8pkw=` |
+> | Australia East | ecdsa-sha2-nistp384 | `YmeF1kX0R0W/ssqzKCkjoSLh3CciQvtV7iacYpRU2xc=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFJi5nieNPCIxkYS7HKMH2fQgONSy2kGkViQucVhWrTJCEQMVz5peL2JZJFjf2a6zaB2olPaBNEkeuJRHxGyW0luTII9ZXXUoiGQH9l05B41mweVtG6pljHfuKQ4HzoUJA==` |
+> | Japan East | rsa-sha2-256 | `P3w0fZQMpmRcFBtnIQH2R88eWc+fYudlPy7fT5NaQbY=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQCZucqkz4UicI20DdIyMMeuFs+xUxMytNp7QaqufmA2SgUOoM387jesl27rwvadT6PlJmzFIBCSnFzjWe5xYy3GE59hv4Q3Fp3HMr5twlvAdYc5Ns5BEBEKiU0m88VPIXgsXfoWbF0wzhChx8duxHgG4Cm+F8SOsEw/yvl+Z/d42U9YzliQ1AafNj4siFVcAkoytxKZZgIqIL4VUI322uc93K5OBi9lgBqciFnvLMiVjxTWS/wXtVEjORFqbuTAu/gM4FuKHqKzD1o39hvBenyZF2BjIAfkiE6iYqROd75KaVfZlBSOOIIgrkdhvyj9IfaZFYs3HkLc7XgawYe6JVPR` |
+> | Japan East | rsa-sha2-512 | `4adNtgbPGYD+r/yLQZfuSpkirI9zD5ase01a+G7ppDw=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQCjHai98wsFv0iy+RPFPxcSv8fvTs3hN/YnuPxesS21tUtf0j5t8BTZiicFg6MLOQJxT4jv5AfwEwlfTqvSj3db6lZaUf/7qs/X9aN1gSoQNnUvALgnQDYGjNYO8frhR7S0/D/WggQo2YKMAeNLRScT7Pg/MJaOI12UhoUloCXbTAP1c85hYx0TGKlGWpFjfen/2fwYEKR1vuqaQxj+amRatnG+k18KWsqvHKze8I2D19cn5fp2VkqXzh6zQ1s5AMc5B9qIF48NIec9FAemb9pXzOoYBDFna0qNT4dfeWOQK6tM/Ll10jafaw2P32dGBF8MQKXB2sxtcC0nU4EEtS5d` |
+> | Japan East | ecdsa-sha2-nistp256 | `IFt/j4bH2Jc0UvhUUADfcy3TvesQO+vhVdY4KPBeZY8=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKVq+uiJXmIlYS367Ir9AFq/mL3iliLgUNIWqdLSh7XV+R8UJUz1jpcT1F6sJlCdGovM3R5xW/PrTQOr3DmikyI=` |
+> | Japan East | ecdsa-sha2-nistp384 | `9XLsxg1xqDtoZOsvWZ/m74I8HwdOw9dx7rqbYGZokqA=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFh7i1cfUoXeyAgXs+LxFGo7NwrO2vjDwCmONLuPMnwPT+Ujt7xelTlAW72G3aPeG2eoLgr6zkE48VguyhzSSQKy7fSpLkJCKt9s0DZg2w0+Bqs44XuB43ao6ZnxbMelJQ==` |
+> | Canada East | rsa-sha2-256 | `SRhd9gnvJS630A8VtCYMqc4djz5R8EiG7spwAUCYSJk=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQD2nSByIh/NC3ZHsjK3zt7mspcUUXcq9Y/jc9QQsfHXQetOH/fBalf17d5odCwQSyNY5Mm+RWTt+Aae5t8kGm0f+sKVO/4HcBIihNlAnXkf1ah5NoeJ+R0eFxRs6Uz/cJILD4wuJnyDptRk1GFhpAphvBi0fLEnvn6lGJbrfOxuHJSXhjJcxDCbmcTlcWoU1l+1SaYfOzkVBcqelYIimspCmIznMdE2D9vNar77FVaNlx4J9Ew+HQRPSLG1zAh5ae1806B6CHG1+4puuTUFxJR1AO+BuT6fqy1p0V77CrhkBTHs8DNqw9ZYI27fjyTrSW4SixyfcH16DAegeHO+d2YZ` |
+> | Canada East | rsa-sha2-512 | `60yzcSSOHlubdGkuNPWMXB9j21HqIkIzGdJUv0J57iY=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDDmA4meGZwkdDzrgA9jAgcrlglZro0+IVzkLDCo791vsjJ29bTM6UbXVYFoKEkYliXSueL0q92W91IaFH/NhlOdW81Dbjs3jE+CuE4OX5pMisIMKx45QDcYCx3MJxqZrIOkDdS+m8JLs6XwM07LxiTX+6bH5vSwuGwvqg5gpnYfUpN0U5o7Wq7H7UplyUN8vsiDvTux3glXBLAI3ugjn6FC/YVPwMOq7Luwry3kxwEMx4Fnewe6hAlz47lbBHW6l/qmzzu4wfhJC20GqPzMJHD3kjHEGFBHpcmRbyijUUIyd7QBrnfS4J0xPVLftGJsrOOUP7Oq8AAru66/00We501` |
+> | Canada East | ecdsa-sha2-nistp256 | `YPqDobCavdQ/zGV7FuR/gzYqgUIzWePgERDTQjYEE0M=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKlfnJ9/rlO5/YGeGle1K6I6Ctan4Z3cKpGE3W9BPe1ZcSfkXq47u/f6F/nR7WgrC6+NwJHaMkhiBGadEWbuA3Q=` |
+> | Canada East | ecdsa-sha2-nistp384 | `Y6FK9rWscBkyKN7mgPAEj0jKFXrv4mGNzoaZ9ttc4io=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDS8gYaqmJ8eEjmDF2ET7d2d6WAO7SgBQdTvqt6cUEjp7I11AYATKVN4Isz1hx8qBCWGIjA42X1/jNzk3YR7Bv/hgXO7PgAfDZ41AcT4+cJd0WrAWnxv0xgOvgLKL/8GYQ==` |
+> | Canada Central | rsa-sha2-256 | `KOYkeGvx4egH9DTGgxiONDMvSlkEkoU8cXWnynOEQRE=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC7jhZvp5GMrYyA2gYjbQXTC/QoSeDeluBUpji6ndy52KuqBNXelmHIsaEBc69MbixqfoopaFyJshdu7X8maxcRSsdDcnhbCgUO/MnJ+am6yb33v/25qtLToqzJRXb5y86o9/WtyA9DXbJMwwzQFqxIsa1gB` |
+> | Canada Central | rsa-sha2-512 | `tdixmLr++BVpFMpiWyVkr5iAXM4TDmj3jp5EC0x8mrw=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDNMZwL0AuF2Uyn4NIK+XdaHuon2jEBwUFSNAXo4JP7WDfmewISzMWqqi1btip/7VwZbxiz98C4NUEcsPNweaw3VdpYiXXXc7NN45cC32uM8yFeV6TqizoerHf+8Hm8avWQOfBv17kvGihob2vx8wZo4HkZg9KacQGvyuUyfUKa9LJI9BnpI2Wo3RPue4kbaV3JKmzxl8sF9i6OTT8Adj6+H7SkluITm105NX32uKBMjipEeMwDSQvkWGwlh2oZwJpL+Tvi2G0hQ/Q/FCQS5MAW9MCwnp0SSPWZaLiA9EDnzFrugFoundyBa0vRjNGZoj+X4+8MVG2fYgOzDED1JSPB` |
+> | Canada Central | ecdsa-sha2-nistp256 | `HhbpllbdxrinWvNsk/OvkowI9nWd9ZRVXXkQmwn2cq4=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBuyYEUpBjzEnYljSwksmHMxl5uoErbC30R8wstMIDLexpjSpdUxty1u2nDE3WY7m4W/doyXVSBYiHUUYhdNFjg=` |
+> | Canada Central | ecdsa-sha2-nistp384 | `EjEadkKaEgaNfdwXtzlqanUbDigzsdzcZJeTzJfQXP0=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBORAcpaBXKmSUyCLbAOzghHvH8NKzk0khR0QGHdru0kiFiE16uz9j07aV9AiQQ3PRyRZzsf+dnheD7zuEZAewRiWc54Vg8v8QVi9VUkOHCeSNaYxzaDTcMsKP/A7lR2AOQ==` |
+> | Switzerland North | rsa-sha2-256 | `4cXg5pca9HCvAxDMrE7GdwvUZl5RlaivApaqz8gl7vs=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQCqqSS6hVSmykLqNCqZntOao0QSS1xG89BiwNaR7uQvz7Y2H+gJiXhgot6wtc4/A5743t7svXZqsCBGPvkpK05JMNZDUy0UTwQ1eI9WAcgFAHqzmazKT1B5/aK0P5IMcK00dVap4jTwxaoQbtc973E5XAiUW1ZRt6YComeoZB6cFVX28MaE6auWOPdEaSg8SlcmWyw73Q9X5SsJkDTW5543tzjJI5hnH03LAvPIs8pIvqxntsKPEeWnyIMHWtc5Vpg8LB7CnAr4C86++hxt3mws7+AOtcjfUu2LmLzG1A34B1yEa/wLqJCz7jWV/Wm21KlTp1VdBk+4qFoVfy2IFeX9` |
+> | Switzerland North | rsa-sha2-512 | `E63lmwPWd5a6K3wJLj4ksx0wPab1lqle2a4kwjXuR4c=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQCtSlbkDdzwqHy2C/pAteV2mrkZFpJHAlL05iOrJSFk0dhq8iwsmOmQiF9Xwth6T1n3NVVncAodIN2MyHR7pQTUJu1dmHcikG/JU6wGPVN8law0+3f9aClbqWRV5tdOx1vWQP3uPrppYlT90bWbD0IBmmHnxPJXsXm+7tI1n+P1/bKewG7FvU1yF+gqOXyTXrdb3sEZOD6IYW/PusR44mDl/rV5dFilBvmluHY5155hk1O2HBOWlCiDGBdEIOmB73waUQabqBCicAWfyloGZqB1n8Eay6FksLtRSAUcCSyBSnA81phYdLiLBd9UmiVKPC7gvdBWPztWB+2MeLsXtim9` |
+> | Switzerland North | ecdsa-sha2-nistp256 | `DfyPsw04f2rU6PXeLx8iVRu+hrtSLushETT3zs5Dq7U=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJICveabT6GPfbyaCSeU7D553Q4Rr/IgGjTMC8vMCIUJKUzazeCeS3q46mXL2kwnBLIge9wTzzvP7JSWf+I2Fis=` |
+> | Switzerland North | ecdsa-sha2-nistp384 | `Rw0TLDVU4PqsXbOunR2BZcn2/wqFty6rCgWN4cCD/1Y=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLLhGaEyHYvfVU05lmKV4Rnrl9YiuSSOCXjUaJjJJRhe5ZXbDMHeiC67CAWW3mm/+c5i1hoob/8pHg7vmeC+ve+Ztu/ww12JsC4qy/CG8qIIQvlnDDqnfmOgr0Svw3/Izw==` |
+> | UAE Central | rsa-sha2-256 | `GW5lrSx75BsjFe4y4vwJFdg454fndPjm4ez2mYsG3zs=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDAQiEpj9zkZ8F3iDkDDbZV4A3+1RC/0Un6HZVYv5MCVYKqsVzmyn+7rbseUTkZMO/EqgF8+VWlwSU5C2JOesZtKXAgNzXBSOER3NbiucB5v1b1cC+8Qo4C2+iTHXyJSKxV0bTz55crCfhKO1KTQw3uZoYh6jE9xI1RzCI1J4qP+afZQQhn3H+7q+8kTMhmlQrfKuMWennoWZih+uTe9LPHjlvzwYiXkS2sOIlKtx8eLDJJg2ONl7YKSE4XVq7K33807Gz5sCD/ZV+Bn+NyP2yX14QKcyI97pkrFdcJf2DZi7LdTuEVPx3qK/rHzmzotwe6ne6sfV+FJpowUUTbKgT5` |
+> | UAE Central | rsa-sha2-512 | `zflL4olL2bga9JCxPA/qfvT2jSYmIfr2RY6GagpUjkE=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDAtxSG7lHzGFclWVuErRZZo6VG5uaWy1ikhb67rJSXdTLuSGDU+4Boj4wKxK0EyVKXpdQ3VrIwC4rOEy/lKAlnI2PrkrMjluau2aetlwW0hCBKAcgEOpMeMJJxCvv9EVatmEhvCe0ARyVM539058da9LzoZ2geFnFIbh3t8fNCaJZTNSS5PW1SLkspSqYXUYJWzu8Kx9l3LTzlmJT1DukKLIKj5ZDwuzOIN5m1ePYp4MzfIeBN6ys8df8HqXLoEXE+vOZWOzwkPVWoTsYvwB8j9+FHECAVf4Gcm8sPvRZA/RKDn1dGW2THzVw/VI/F87fFC7stLmZJ1v+a9TTFE649` |
+> | UAE Central | ecdsa-sha2-nistp256 | `P3KxgoZgjHHxid66gbkRETjPsHUsNiPt5/TFU0Kby6I=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOvHAXCWC9HGJnr5SRW8I1zZWsyHIczEdPpzmafrU8drYmhpRxlD6HlKnY7iXqfq8bOIK063tpVOsPbrVevAKPs=` |
+> | UAE Central | ecdsa-sha2-nistp384 | `E+jKxd6hnfVIXPQYreABXpZB7tppZnWUxAelvEDh874=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMDLyroqceuIpmDQk/gvHHzFup7NZbyzjXMdGrkDvZDE2H+6XTthCGSVNVmwqdyHE4yGw88jgW1TfWTAZxCxTfXD+xF72iYyBAsejgiyYY/0x9NKM/lrtw8mnRtkZzLyrA==` |
+> | Germany North | rsa-sha2-256 | `ppHnlruDLR73KzW/m7yc3dHQ0JvxzuC1QKJWHPom9KU=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDNNjCcDxjL3ess1QQEkb9n5bPYpxXpekd32ZX4oTcXXFDOu+tz/jpA8JZL8lOBAcBQ5n+mZF0Pot1o+B1JxQOHHiEZdcdKtLtPWrI2OQyxZnvo7sCBeSk+r/j3mjqpvq3+KpwoTZKpYF/oNRXVHU4VFs+MzvqWd6vgLXsDwtJrriojtkrWy0bTa4NjjN/+olsITxDmR0TGAu+epCJptdpKjTcgcn25QuIKy37/zVW8BJ5QsZmIRwvlCYxj11UOAoDcbapJcnzJYpOmQTNpdzkazjViX17DZW17Jmfhc6Dk3H+TEseilkbq1ZjsUyGBBxklWHid7+BgKVXOoIgG6+0x` |
+> | Germany North | rsa-sha2-512 | `m/OFTRHkc3HxfhCKk1+jY1rPJrT9t4FYtQ/Wmo3MOUE=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDkN3CN1VITaHy/CduQaZIkuKSC/+oX19sYntRdgCblJlIzUBmiGlhmnKXyhA29lwWWAYxSbUu0jEJUZfQ6xdQ4uALOb815DLNZtVrxqSm4SjvP5anNa7zRyCFfo4V8M4i6ji6NB+u+PwH5DOhxKLu6/Ml9pF8hWyfLRft8cg4wORLLhwGt2+agizq7N7vF2nmLBojmS0MMmpH5ON/NFshYIDNKPEeK9ehpaARf4fuXm440Zqzy/FfpptSspJIhbY2zsg4qGQgYGZyuRxkLzYgtD/uKW5ieFwXPn+tvVeVzezZTmGMoDlkPX18HSsuNaRkdnwpX8yk1/uoBCsuOFSph` |
+> | Germany North | ecdsa-sha2-nistp256 | `F4o8Z9llB5SRp0faYFwKMQtNw/+JYFKZdNoIuO7XUU0=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMoIo/OXMP7W5a5QRDAVBo+9YQg4YBrl3J7xM91PUGUiALDE1Iw8Uq4e1gLiSNA6A46om5yY/6oGj4iqEk8Ar8Y=` |
+> | Germany North | ecdsa-sha2-nistp384 | `BgW5e9lciYG1oIxolnVUcpdh3JpN/eQxfOyeyuZ6ZjI=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ69kH0urhCdcMMaqpID2m+u8MECowtNlYjYXoSUn6oEhj7VPxvCRZi5R02vHrtrTJslsrbpgYHXz+/jSLplKpccQGJFaZso9WWgEJH1k7tJOuOv0NIjoBTv7fY5IxeAvQ==` |
+> | Australia Central 2 | rsa-sha2-256 | `sqVq1zdzD3OiAbnDjs70/why2c3UZOPMTuk5sXeOu4Y=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDKNZVZ5RVnGa0fYSn+Nx3tnt526fmMf+VufOBOy5/hEnqV6mPKXMiDijx2gFhKY4nyy957jYUwcqp1XasweWX6ISuhfg4QWcygW0HgmVdlSDezobPDueuP0WdhVsG3vXGbEYnrZOUR5kQHagX/wWf6Diy1J5Cn2ojIKGuSuGY/9bu3bnZzKt08fj+gQCEd1GxpPoBUfjF/73MM57IRhdmv919rsGD5nsyZCBmqFoKlLH/gKYZ4B3hylqf/6gER7OeZmG2S/U/fRAN0hVK7RkHNf2CFoCmuxXS6r87BznT5vF3nmd7tsf0akaxLjfWRbKLMWWyZkzU4/jijpbDDuu1x` |
+> | Australia Central 2 | rsa-sha2-512 | `p6vLHCTBcKOkqz7eiVCT6pLuIg7h4Jp41lvL/WOQLWM=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDcqD2zICW1RLKweRXMG9wtOGxA5unQO/nd9yslfOIo54Ef0dlhAXGFPmCd3Yj60Gt/CIpqguzKLGm4D3nf19KjXE8V59cD7/lN6mVrFrm+6CU44JAzKN9ERUelxhSQKi/dsDR773wt4jsAt4SLBRrs19RC2fkYnxZgC/LzNZKXXY3FFb06uwheJjGOHyeQJbGpaV3hlelhOSV1UF2JAB8v6d8+9+S+b666EcpQ70JtxtA8h1s30hqhTKgYdRYMPfz7lqKXvact2NBXlqYRPod5cLW7lYBb2LzqTk1D44d8cwDknX2pYQJpgeFwJhB6SO9mF/Ot+jk+jV/CxUI55DPd` |
+> | Australia Central 2 | ecdsa-sha2-nistp256 | `m7Go9P1bwcPHAcZzRSXdwYroDIdZzt0jhxkJW42YGKY=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHp76felOL7GAHcJoW6vcCS83jkeR6RdFCwUk0Jf6v7SFoqYNZfTaryy2n0vwG1W1dAyHvOjB1+gzTZOkHN/cAI=` |
+> | Australia Central 2 | ecdsa-sha2-nistp384 | `9Jc39OueTg3pQcq8KJgzsmPlVXxILG24Euw27on7SkY=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEduOE61sSP2BozvJ6QLtRDZ7j0TenX7PjcpPVtYIQuKQ+h3qakXFwFnj8N3m8+LpTXYO41mgX7N02Rl12QvD7lDpUgHUChaNpUcMcSpm5qvguLyG6XZg2BDNd6pyx+fpw==` |
+> | South Africa West | rsa-sha2-256 | `aMMzaNmXR+V1NrwLmovyvKwfbKQ6aAKYiA5n8ETYQmU=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDGhe98UTnljsYaeJwtP3ABvT/hZP6Mp1r5beyJ2SWpdqZSZaKC+UQlWLu6WhLxLZ+5snB+YAlC56u4qOdDHLoid6vbAR/FPIcJlvQfcFJD88nihv9sq1kUX3JXrh0ZUrl2/Zj71aNlM/RL1OnXK/Pg2E+wu4EfnQTrzlWMhR8bxlQA0jH1zmfFN/6BTwP2if29TNlQkWuW3uq3rccY1GA6n0QtlucanPNRzsBtAzsH5/oFuB5R4sD/Msw0itvWuQP4e0y+Vdov1My/rjK19xLce6AhWmmhwkn5qxHdIy158C4cWnSkQvkYzPnwsi7KT9WRH7vfr8qD9zlA5mO+IDxJ` |
+> | South Africa West | rsa-sha2-512 | `Uc7QB0fT4NGyBp34GCAt8G4j1ZBXh/3Wa2YRlILu818=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQCijtmaOHIcXjI07fVugz1M33+amlOEqdqtVgOlLmFRKSehPW2+6iGpAjQVwzsYOx32Hp5O07xj/PhiFsbBBqZXGHmuSIOJYa7tQSFvwclO+JW/kuoELXQLwnHxUfPyq4tYoj83GSZ5k/KRlEtbmjEwcozMQVya/7MzulAeV4nN6PDxoLjXlfGEQU2ZCGz2neeisQEM8+hZNuEH+O9O03g7CW8bwiI1Y70/bnNq95xJ5F7lRpwtJNWlx+kmUiNpfXOUPxZAUsny7z1Ka5XKEB1fDP8E/jAtrSWrRPDJew8lFpQeWukwB5tf3F3bh1SuSKaSQqKBArnSpJizWxp0brZZ` |
+> | South Africa West | ecdsa-sha2-nistp256 | `pr1KB8apI+FNQLKkzvUXx0/waiqBGZPEXMNglKimUwA=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPvbvOfXQjT+/3+jQtW3FBAnPnaypYSUhZMkTTSfd7RQMmSxsLNmDooERhVuUTa7XCTlpDNTSPdnnaa6P1a+F6A=` |
+> | South Africa West | ecdsa-sha2-nistp384 | `A3RfMOd6dGgUlcrkXL1YRKNXIdAB8M1lF9qwmy6PjFg=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNaJmo4QGmo6pbLHOXh06Rz9inntdxmuOtVxlJBO1i/ZK5les/AuaILMW7oQCxOKvZs/xI+P0MWRfrNgWSSapy5hNuTkbl8IqO4pH/lO//zdaHmVBC1kPnujDM9znJs6Rg==` |
+> | Jio India West | rsa-sha2-256 | `hcy1XbIniEZloraGrvecJCvlw6zZhTOrzgMJag5b5DA=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDOBU9e1Ae68+ScLUA5O1gaZ3eq0EGqBIEqL3+QuN8LYpF3Bi/+m43kgjhgiOx5imPK6peHHaaT/nEBQFJKFtWyn8q2kspcDy1xvJfG8Jaks1GQG33djOItiHlKjRWMcyWFvisFE2vVkp3uO0xG4nMDLM2rFazkax+6XA5cf2iW2SfL6Trs4v1waakU/jQLA7vsrx14S+wGEdVINTSPeh5DHqkLzTa3m2tpXVcUA4CG8uQZM8E/3/y0BuIW0Ahl/P6dx35W1Al7gnaTqmx7+idcc/YVe0auorZWWdyclf1sjnAw6U8uMhWmQ0dZgDehDtshlHyx84vvJ1JOJs0+6S2l` |
+> | Jio India West | rsa-sha2-512 | `LPctDLIz/vqg4POMOPqI1yD9EE9sNS1gxY6cJoX+gEY=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDOH+IZFFfJN4lpFFpvp5x1lRzuOxLXs0WfpcCIACMOhCor2tkaa/MHlmPIbAqgZgth5NZIWpYkPAv7GpzPBOwTp3Bg5lUM7MXSayO/5+eJjMhB5PUCJ0We8Kfgf/U+vbaMIg9R8gJKutXrANd3sAWXMwWqKUw+ZX/AC7h58w04gb1s+lNOQbfhpqkw8+mrOj2eKH8zHYUJQBUYEyDHqirj565r7HhBtEZImn/ioJS+nYT5Zl/SNtW/ehhUsARG9p6O4wSy20Ysdk7b9Ur2YL0RyFa6QhWQeKktKPVFQuMMLRkYX7dv35uAKq8YN833lLjGESYNdCzYmGTJXk5KYZ8B` |
+> | Jio India West | ecdsa-sha2-nistp256 | `mBx6CZ+6DseVrwsKfkNPh9NdrVLgwhHT4DEq9cYfZrg=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPXqhYQKwmkGb8qRq52ulEkXrNVjzVU4sGEuRFn4xXK8xanasbEea3iMOTihzMDflMwgTDmVGoTKtDXy8tQ+Y8k=` |
+> | Jio India West | ecdsa-sha2-nistp384 | `lwQX9Yfn7uDz/8gXpG4sZcWLCAoXIpkpSYlgh8NpK1E=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLKY2+wwHIzFOfiKFKFHyfiqjUrscm0qwYTAirNPE1GI6OwAjconeX072ecY3/1G0dE7kAUaWbCKWSO3DqM4r6O+AewzxJoey85zMexW23g2lXFH7HkYn9rldURoUdk31A==` |
+> | Sweden South | rsa-sha2-256 | `kS1NUherycqJAYe8KZi8AnqIQ9UdDbpoEpcdHlZ702g=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDJ+Imy6VuOvZsel9SCoMmej4kFvP8MDgDY9EdgfkgpjEfOSk+vmXBMCFtthI7sHRggkuXQE5v6OkOPwuWuVWjAWmclfFIz+TTNE5dUUY6L+UMipDEcwFxtufnY3AW0v2MW5lOFHWbx3w7605yb2AFQuZjvngkjdelhDpVpX9a0XdPa7zUYBwXdxWeteH+i4ZJ62sjlBGzYRjFhK/y1rUKR3BVR5xtP9ofzqE1n/TRLpViU8iy4bpsQntTWa71xVoTFtE29h3ESw4QG2lRCwk7NIf8efyNdR25+YpVGIysAxXG2smGAi2W/YXUjteCE7k3IU+ehHJdWKB3spUBSoF/V` |
+> | Sweden South | rsa-sha2-512 | `G+oX014UJXR0t1xHrCi715XuoHBkBxJMdH8hmVMilJc=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDCa5Ny0EUd8yLOgzczm6Zge+D39VY7hpG+et2ln0i/HdYLd1aijEiF/0RDgnJYxZM4RhPZHxrVZXJXLsLa2T+ud+cqifvsjudsUSCzWNY3pHAwKBTSuu8Po+TrJXx8b+ogg+EhTh1BZQzIVQbtLwqRFJ3beLtvhp+V1pPWOoXRiN6Rq+x6ciT37jOdp033rbEM3AtzWdRBvRxUiVxKoRXcDYwAAIb3joaZ26p69Vj7HpD0HAf7w9f70zIwIzqrW4RcHcP+RbDVzNukK8gWP66OgSKrAQgRmibS6SEJx4kgkaghiQfm1k1bXkTnlKlz956DHkTkpMQe21/eW1Prs+q1` |
+> | Sweden South | ecdsa-sha2-nistp256 | `8C148yiGdrJCGF6HpDzINhGkB5AAyWDqkauJClRqCZs=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEREKXJT7obM0RXGFrUPmJOoEpJD8T+QT29UEt3/jZrUzVzqXLV/9+VK0xqv1suljhUoUoClBdKqx5E/Sv1kSV4=` |
+> | Sweden South | ecdsa-sha2-nistp384 | `ra8+vb8aSkTBsO0KAxDrl2lN9p41BxymtRU6Seby83M=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIMby6y3wzWnzE304DjregQcSqKTsoMx2vPGk7OlBtjFKoubZlBRQH4jQrtPbJv/Hpf8f+D0JmvPe5G75yZFG1BcP5eB4aonAr0NNCw+3sCb50JVpoT4yoT787KKYf+5qg==` |
+> | Jio India Central | rsa-sha2-256 | `DmNCjG1VJxWWmrXw5USD0pAnJAbEAVonkUtzRFKEEFI=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC/x6T0nye3elqPzK8IF+Q70bLn2zg4MVJpK3P6YurtsRH8cv5+NEHyP0LWdeQWqKa9ivQRIQb8mHS+9KDMxOnzZraUeaaJLcXI0YV512kqzdevsEbH6BSmy8HhZHcRyXqH0PjxLcWJ5Wn9+caNhiVC40Oks7yrrZpAVbddzD9y/eJfguMVWiu1c8iZpYORss1QYo7JqVvEB6pLY03NXWM+xti1RSs+C6IEblQkPvnT3ELni9T1eZOACi12KGZHVLU9n27Nyg/fPjRheYSkw/lkkKDG0zvIQ7jr/k8SCHGcvtDYwRlFErFdGYBlIE888le2oDNNoKjJuhzN6S7ddpzp` |
+> | Jio India Central | rsa-sha2-512 | `m2P7vnysl2adTz/0P6ebSR7Xx8AubkYkex6cmD9C0ys=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDQHFDt8zTk+Hqh912v0U8CVTgAPUb8Kmuec+2orydM/InG+/zSuqQHsCZaD2mhEg8kevU8k2veF5z2sbko5TR/cghGg5dXlzz4YaKiNdNyKIGm2MdynXJofAtiktGhcB6ummctHqATfGSgkLJHtLvstzTVbVK1zgxXcB8hA52c2EPB1cN1TkAKEyiYNX7fKFe1EEPCxdx3fC/UyApKdD+D432HCW/g8Syj/n7asdB8EQqcoCT3ajh2wG2Qq0ZxjVbbrFImlr0VoTqLImJ4kZ9d2G7Rq2jqrlfESLAxKVDaqj+SjyWpzb3MHFSnwJZybCKXyTt+7BXpKeeGAcHpTs3F` |
+> | Jio India Central | ecdsa-sha2-nistp256 | `zAZ0A1pk0Xz8Vr/DEf8ztPaLCivXxfajlKMtWqAEzgU=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDow29ds+BRDNTZNW70CEoxUjLiUF+IHgaDRaO+dAWwxL13d+MqTIYY4I0D7vgVvh0OegmYLXIWpCdR8LvVT7zA=` |
+> | Jio India Central | ecdsa-sha2-nistp384 | `OTG7jxUSj+XrdL28JpYAhsfr6tfO7vtnfzWCxkC/jmQ=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ/Bb3/3u/UIcYGRLSl7YvOObb43LO5Ksi0ewWJU+MPsPWZr7OTTPs76TdwXMvD8+QuY8U9JxgQQrNmvbpabmbGENkllEgjGlev5P2mHy/IZZAUQhAeeCinCRvTsiOOoLw==` |
+> | Brazil Southeast | rsa-sha2-256 | `D+S7uHDWy0VPrRg9mlaK70PBPghBRCR1ru/KEsKpcjA=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQCz86hzEBpBBVJqClTRb7dNolds/LShOM4jucPRawZrlKGEpeKv70Khk8BdI4697fORKavxjWK0O9tQpAJHtVGSv3Ajwb9MB7ERKncBVx/xfdiedrJNmF0G+p97XlCiwkAqePT/9IFaMy1OFqwl6LF7p7I0iBKZX0UgePwdiHtJnK0foTfsASNY4AEVcXHEuaulLFJKUjrr6ootblEbPBnC6IxTPj9oD+Nm0rtlCeD5JtCRFgKUj3LWybEog/AnnMNQDQ+vMPbsCnfsW/J/HQc+ebx3NtcumL+PIxqJw2Pk6mRpDdL+vs2nw/PtnPkdJ7DjIQYLypBSi3AFIONSlO15` |
+> | Brazil Southeast | rsa-sha2-512 | `C+p2eAPf5uec0yG+aeoVAaLOAAf0p8gbBNss3xfawPQ=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDV3WmETlQwzfuYoOsPAqfB9Z2gxsNecbpuwIBYwitLYKmJnT9Q3SNSgqnBiI1TKWyEweerdQaPnEvz9TeynGqSmLyGT0JJXQXFQCjTCgRHP4WD0Q+V7HWHnWYQ5c2e8tKEVA1jWt57dcrFlrGKEsywuMeEX21V13qQxK2acXVRWJPWgQCVwtiNpToc/cILOqL5XXKnSA81Ex7iRqw8QRAGdIozkryisucy+cStdJX6q+YUE5L62ENV8qMwJdwUGywEpKhXRg5VQKN0ciFqvyT/3cZQVF+NkUFGPnOi0bk4JzHxWxmQNTIwE7bmPsuniw5njD3ota/IPUHV2og190Xx` |
+> | Brazil Southeast | ecdsa-sha2-nistp256 | `dhADLmzQOE0mPcctS3wV+x2AUlv1GviguDQgSbGn/qs=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPYuseeJN3CvtyPSKOz5FSu7PoNul+o6/LB62/MW9CUW+3AmqtVANVox1XQ8eX/YhL0a5+brjmveZPQS6M09YyQ=` |
+> | Brazil Southeast | ecdsa-sha2-nistp384 | `mjFHELtgAJkPTWO4dc7pzVVnJ6WLfAXGvXN1Wq2+NPs=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIwFI6bRmgPe0tN7Qwc03PMrlpBn+wBChcuofyWlKVd/Ec6t2dxHr/0ev0dqyKS2nbK7CAOQxSrV1NVYnYZKql/eC2sPqI1oxz7DzUtRnNKrXcH714ViN3RIY3DZA6rJOw==` |
+> | Norway West | rsa-sha2-256 | `Ea3Vj3EfZYM25AX1IAty30AD+lhXYZsgtPGEFzNtjOk=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDuxOcTdADdJHI8MFrXV00XKbKVjXpirS3ZPzzIxw0mIFxFTArJEpXJeRfb0OZzQ1IABDwoasp1u+IhnY1Uv2VQ8mYAXtC3He08+7+EXJgFU/xQ8qFfM4eioAuXpxR7M7qV/0golNT4dvvLrY4zHxbSWmVB7cYJAeIjDU8dKISWFvMYjnRuiI7RYtxh/JI5ZfImU65Vfxi26vqWm51QDyF5+FmmXLUHpMFFuW8i/g8wSE1C3Qk+NZ3YJDlHjYqasPm4QidX8rHQ1xyMX9+ouzBZArNrVfrA4/ozoKGnPhe4GFzpuwdppkP4Ciy+H6t1/de/8fo9zkNgUJWHQrxzT4Lt` |
+> | Norway West | rsa-sha2-512 | `uHGfIB97I8y8nSAEciD7InBKzAx9ui5xQHAXIUo6gdE=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDPXLVCb1kqh8gERY43bvyPcfxVOUnZyWsHkEK5+QT6D7ttThO2alZbnAPMhMGpAzJieT1IArRbCjmssWQmJrhTGXSJBsi75zmku4vN+UB712EGXm308/TvClN0wlnFwFI9RWXonDBkUN1WjZnUoQuN+JNZ7ybApHEgyaiHkJfhdrtTkfzGLHqyMnESUvnEJkexLDog88xZVNL7qJTSJlq1m32JEAEDgTuO4Wb7IIr92s6GOFXKukwY8dRldXCaJvjwfBz5MEdPknvipwTHYlxYzpcCtb9qnOliDLD2g4gm9d5nq3QBlLj/4cS1M9trkAxQQfUmuVQooXfO2Zw+fOW1` |
+> | Norway West | ecdsa-sha2-nistp256 | `muljUcRHpId06YvSLxboTHWmq0pUXxH6QRZHspsLZvs=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOefohG21zu2JGcUvjk/qlz5sxhJcy5Vpk5Etj3cgmE/BuOTt5GR4HHpbcj/hrLxGRmAWhBV7uVMqO376pwsOBs=` |
+> | Norway West | ecdsa-sha2-nistp384 | `QlzJV54Ggw1AObztQjGt/J2TQ1kTiTtJDcxxIdCtWYE=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNYnNgJKaYCByLPdh21ZYEV/I4FNSZ4RWxK4bMDgNo/53HROhQmezQgoDvJFWsQiFVDXOPLXf26OeVXJ7qXAm6vS+17Z7E1iHkrqo2MqnlMTYzvBOgYNFp9GfW6lkDYfiQ==` |
+ <sup>1</sup> The SHA-256 fingerprint is used by OpenSSH and WinSCP.
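To double-check a key from this table locally, you can recompute its SHA-256 fingerprint with `ssh-keygen`; a minimal sketch, assuming the base64 key has been saved to a file:

```bash
# key.pub holds one public key line assembled from the table,
# for example: "ssh-rsa AAAAB3NzaC1yc2E..." (algorithm + public key column)
ssh-keygen -lf key.pub
```

The printed `SHA256:` value should match the fingerprint column for that region and algorithm.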
storage Secure File Transfer Protocol Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/secure-file-transfer-protocol-known-issues.md
This article describes limitations and known issues of SFTP support for Azure Blob Storage.
- There's a 4-minute timeout for idle or inactive connections. OpenSSH will appear to stop responding and then disconnect. Some clients reconnect automatically.

-- Maximum file upload size is 90 GB.

## Other

- Special containers such as $logs, $blobchangefeed, $root, $web are not accessible via the SFTP endpoint.
+- When using custom domains, the connection string is `<accountName>.<userName>@customdomain.com`. If a home directory hasn't been specified for the user, it's `<accountName>.<containerName>.<userName>@customdomain.com`.
+
- Symbolic links are not supported.
- `ssh-keyscan` is not supported.
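As a quick illustration of the custom-domain connection string format described above, a hedged sketch in which the account, container, user, and domain names are all placeholders:

```bash
# Connect over SFTP through a custom domain; no home directory is
# configured for the user here, so the container name is included
sftp myaccount.mycontainer.myuser@customdomain.com
```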
storage Storage Feature Support In Storage Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-feature-support-in-storage-accounts.md
The items that appear in these tables will change over time as support continues
| [Point-in-time restore for block blobs](point-in-time-restore-overview.md) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
| [Soft delete for blobs](./soft-delete-blob-overview.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) <sup>3</sup> | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
| [Soft delete for containers](soft-delete-container-overview.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
-| [Static websites](storage-blob-static-website.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
+| [Static websites](storage-blob-static-website.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> | ![Yes](../media/icons/yes-icon.png) |
| [Storage Analytics logs (classic)](../common/storage-analytics-logging.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) <sup>3</sup> | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
| [Storage Analytics metrics (classic)](../common/storage-analytics-metrics.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) |
The items that appear in these tables will change over time as support continues
| [Point-in-time restore for block blobs](point-in-time-restore-overview.md) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
| [Soft delete for blobs](./soft-delete-blob-overview.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) <sup>3</sup> | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
| [Soft delete for containers](soft-delete-container-overview.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
-| [Static websites](storage-blob-static-website.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
+| [Static websites](storage-blob-static-website.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> | ![Yes](../media/icons/yes-icon.png) |
| [Storage Analytics logs (classic)](../common/storage-analytics-logging.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> <sup>3</sup> | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
| [Storage Analytics metrics (classic)](../common/storage-analytics-metrics.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) |
storage Storage Samples Blobs Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-samples-blobs-cli.md
The following table includes links to Bash scripts built using the Azure CLI tha
| Script | Description |
|---|---|
|**Storage accounts**||
-| [Create a storage account and retrieve/rotate the access keys](../scripts/storage-common-rotate-account-keys-cli.md?toc=%2fcli%2fazure%2ftoc.json) | Creates an Azure Storage account and retrieves and rotates its access keys. |
+| [Create a storage account and retrieve/rotate the access keys](../scripts/storage-common-rotate-account-keys-cli.md) | Creates an Azure Storage account and retrieves and rotates its access keys. |
|**Blob storage**||
-| [Calculate the total size of a Blob storage container](../scripts/storage-blobs-container-calculate-size-cli.md?toc=%2fcli%2fazure%2ftoc.json) | Calculates the total size of all the blobs in a container. |
-| [Delete containers with a specific prefix](../scripts/storage-blobs-container-delete-by-prefix-cli.md?toc=%2fcli%2fazure%2ftoc.json) | Deletes containers starting with a specified string. |
+| [Calculate the total size of a Blob storage container](../scripts/storage-blobs-container-calculate-size-cli.md) | Calculates the total size of all the blobs in a container. |
+| [Delete containers with a specific prefix](../scripts/storage-blobs-container-delete-by-prefix-cli.md) | Deletes containers starting with a specified string. |
storage Storage Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-introduction.md
You can access resources in a storage account by any language that can make HTTP/HTTPS requests.
- [Azure Storage REST API](/rest/api/storageservices/)
- [Azure Storage client library for .NET](/dotnet/api/overview/azure/storage)
- [Azure Storage client library for Java/Android](/java/api/overview/azure/storage)
-- [Azure Storage client library for Node.js](/javascript/api/overview/azure/storage-overview)
+- [Azure Storage client library for Node.js](/azure/storage/blobs/reference#javascript-client-libraries)
- [Azure Storage client library for Python](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/storage/azure-storage-blob)
- [Azure Storage client library for PHP](https://github.com/Azure/azure-storage-php)
- [Azure Storage client library for Ruby](https://github.com/Azure/azure-storage-ruby)
storage Storage Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-private-endpoints.md
Private endpoints can be created in subnets that use [Service Endpoints](../../v
When you create a private endpoint for a storage service in your VNet, a consent request is sent for approval to the storage account owner. If the user requesting the creation of the private endpoint is also an owner of the storage account, this consent request is automatically approved.
-Storage account owners can manage consent requests and the private endpoints, through the '*Private endpoints*' tab for the storage account in the [Azure portal](https://portal.azure.com).
+Storage account owners can manage consent requests and the private endpoints through the '*Private endpoints*' tab for the storage account in the [Azure portal](https://portal.azure.com).
> [!TIP] > If you want to restrict access to your storage account through the private endpoint only, configure the storage firewall to deny or control access through the public endpoint.
-You can secure your storage account to only accept connections from your VNet, by [configuring the storage firewall](storage-network-security.md#change-the-default-network-access-rule) to deny access through its public endpoint by default. You don't need a firewall rule to allow traffic from a VNet that has a private endpoint, since the storage firewall only controls access through the public endpoint. Private endpoints instead rely on the consent flow for granting subnets access to the storage service.
+You can secure your storage account to only accept connections from your VNet by [configuring the storage firewall](storage-network-security.md#change-the-default-network-access-rule) to deny access through its public endpoint by default. You don't need a firewall rule to allow traffic from a VNet that has a private endpoint, since the storage firewall only controls access through the public endpoint. Private endpoints instead rely on the consent flow for granting subnets access to the storage service.
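A minimal Azure CLI sketch of that firewall configuration, assuming placeholder account and resource group names:

```azurecli
# Deny access through the public endpoint by default; traffic arriving
# through a private endpoint is not affected by this rule
az storage account update \
    --name mystorageaccount \
    --resource-group myResourceGroup \
    --default-action Deny
```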
> [!NOTE] > When copying blobs between storage accounts, your client must have network access to both accounts. So if you choose to use a private link for only one account (either the source or the destination), make sure that your client has network access to the other account. To learn about other ways to configure network access, see [Configure Azure Storage firewalls and virtual networks](storage-network-security.md?toc=/azure/storage/blobs/toc.json).
For read access to the secondary region with a storage account configured for ge
## Connecting to a private endpoint
-Clients on a VNet using the private endpoint should use the same connection string for the storage account, as clients connecting to the public endpoint. We rely upon DNS resolution to automatically route the connections from the VNet to the storage account over a private link.
+Clients on a VNet using the private endpoint should use the same connection string for the storage account as clients connecting to the public endpoint. We rely upon DNS resolution to automatically route the connections from the VNet to the storage account over a private link.
> [!IMPORTANT]
-> Use the same connection string to connect to the storage account using private endpoints, as you'd use otherwise. Please don't connect to the storage account using its `privatelink` subdomain URL.
+> Use the same connection string to connect to the storage account using private endpoints as you'd use otherwise. Please don't connect to the storage account using its `privatelink` subdomain URL.
-We create a [private DNS zone](../../dns/private-dns-overview.md) attached to the VNet with the necessary updates for the private endpoints, by default. However, if you're using your own DNS server, you may need to make additional changes to your DNS configuration. The section on [DNS changes](#dns-changes-for-private-endpoints) below describes the updates required for private endpoints.
+By default, we create a [private DNS zone](../../dns/private-dns-overview.md) attached to the VNet with the necessary updates for the private endpoints. However, if you're using your own DNS server, you may need to make additional changes to your DNS configuration. The section on [DNS changes](#dns-changes-for-private-endpoints) below describes the updates required for private endpoints.
## DNS changes for private endpoints
This approach enables access to the storage account **using the same connection
If you are using a custom DNS server on your network, clients must be able to resolve the FQDN for the storage account endpoint to the private endpoint IP address. You should configure your DNS server to delegate your private link subdomain to the private DNS zone for the VNet, or configure the A records for `StorageAccountA.privatelink.blob.core.windows.net` with the private endpoint IP address. > [!TIP]
-> When using a custom or on-premises DNS server, you should configure your DNS server to resolve the storage account name in the `privatelink` subdomain to the private endpoint IP address. You can do this by delegating the `privatelink` subdomain to the private DNS zone of the VNet, or configuring the DNS zone on your DNS server and adding the DNS A records.
+> When using a custom or on-premises DNS server, you should configure your DNS server to resolve the storage account name in the `privatelink` subdomain to the private endpoint IP address. You can do this by delegating the `privatelink` subdomain to the private DNS zone of the VNet or by configuring the DNS zone on your DNS server and adding the DNS A records.
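To verify the resolution path from inside the VNet, a quick check (the account name is a placeholder):

```bash
# From a VM in the VNet: the public FQDN should CNAME to the
# privatelink subdomain and resolve to the private endpoint's IP
nslookup mystorageaccount.blob.core.windows.net
```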
The recommended DNS zone names for private endpoints for storage services, and the associated endpoint target sub-resources, are:
storage Storage Use Azcopy Optimize https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-use-azcopy-optimize.md
If you're uploading or downloading files, consider setting the `--check-length`
#### Turn on concurrent local scanning (Linux)
-File scans on some Linux systems don't execute fast enough to saturate all of the parallel network connections. In these cases, you can set the `AZCOPY_CONCURRENT_SCAN` to `true`.
+File scans on some Linux systems don't execute fast enough to saturate all of the parallel network connections. In these cases, you can set the `AZCOPY_CONCURRENT_SCAN` environment variable to a higher number.
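For example, a minimal sketch (the variable name is real; the value and paths are illustrative):

```bash
# Allow more scanning threads, then run the transfer as usual
export AZCOPY_CONCURRENT_SCAN=128
azcopy copy "/data/src" "https://<account>.blob.core.windows.net/<container>" --recursive
```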
## Increase the number of concurrent requests
storage Storage Blobs Container Calculate Size Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/scripts/storage-blobs-container-calculate-size-cli.md
Title: Azure CLI Script Sample - Calculate blob container size | Microsoft Docs
description: Calculate the size of a container in Azure Blob storage by totaling the size of the blobs in the container. - ms.devlang: azurecli Previously updated : 06/28/2017 Last updated : 03/01/2022
This script calculates the size of a container in Azure Blob storage by totaling the size of the blobs in the container. - [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] > [!IMPORTANT]
This script calculates the size of a container in Azure Blob storage by totaling
> > The maximum number of blobs returned with a single listing call is 5000. If you need to return more than 5000 blobs, use a continuation token to request additional sets of results. + ## Sample script
-[!code-azurecli[main](../../../cli_scripts/storage/calculate-container-size/calculate-container-size.sh?highlight=2-3 "Calculate container size")]
+
+### Run the script
+
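The core of the calculation can be sketched as follows (names are placeholders; the published sample script may differ):

```azurecli
# Sum the contentLength of every blob in the container, in bytes
az storage blob list \
    --container-name mycontainer \
    --account-name mystorageaccount \
    --query "[*].[properties.contentLength]" \
    --output tsv | paste --serial --delimiters=+ | bc
```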
-## Clean up deployment
+## Clean up resources
-Run the following command to remove the resource group, container, and all related resources.
-```azurecli-interactive
-az group delete --name myResourceGroup
+```azurecli
+az group delete --name $resourceGroup
```
-## Script explanation
+## Sample reference
This script uses the following commands to calculate the size of the Blob storage container. Each item in the table links to command-specific documentation.
storage Storage Blobs Container Delete By Prefix Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/scripts/storage-blobs-container-delete-by-prefix-cli.md
Title: Azure CLI Script Sample - Delete containers by prefix | Microsoft Docs
description: Delete Azure Storage blob containers based on a container name prefix, then clean up the deployment. See help links for commands used in the script sample. - ms.devlang: azurecli Previously updated : 06/22/2017 Last updated : 03/01/2022
This script first creates a few sample containers in Azure Blob storage, then deletes some of the containers based on a prefix in the container name. - [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] + ## Sample script
-[!code-azurecli-interactive[main](../../../cli_scripts/storage/delete-containers-by-prefix/delete-containers-by-prefix.sh?highlight=2-3 "Delete containers by prefix")]
+
+### Run the script
+
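The deletion step can be sketched as follows (the account name and prefix are placeholders; the published sample script may differ):

```azurecli
# Delete every container whose name starts with the given prefix
for container in $(az storage container list \
    --account-name mystorageaccount \
    --prefix test- \
    --query "[].name" --output tsv)
do
    az storage container delete \
        --account-name mystorageaccount \
        --name $container
done
```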
-## Clean up deployment
+## Clean up resources
-Run the following command to remove the resource group, remaining containers, and all related resources.
-```azurecli-interactive
-az group delete --name myResourceGroup
+```azurecli
+az group delete --name $resourceGroup
```
-## Script explanation
+## Sample reference
This script uses the following commands to delete containers based on container name prefix. Each item in the table links to command-specific documentation.
storage Storage Common Rotate Account Keys Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/scripts/storage-common-rotate-account-keys-cli.md
Title: Azure CLI Script Sample - Rotate storage account access keys | Microsoft
description: Create an Azure Storage account, then retrieve and rotate its account access keys. - ms.devlang: azurecli Previously updated : 10/20/2020 Last updated : 03/02/2022
This script creates an Azure Storage account, displays the new storage account's access keys, then renews (rotates) the keys. - [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] + ## Sample script
-[!code-azurecli-interactive[main](../../../cli_scripts/storage/rotate-storage-account-keys/rotate-storage-account-keys.sh "Rotate storage account keys")]
+
+### Run the script
+
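The retrieve-and-rotate step can be sketched as follows (resource group and account names are placeholders; the published sample script may differ):

```azurecli
# Show the current access keys
az storage account keys list \
    --resource-group myResourceGroup \
    --account-name mystorageaccount

# Renew (rotate) the primary key; use --key secondary for the other one
az storage account keys renew \
    --resource-group myResourceGroup \
    --account-name mystorageaccount \
    --key primary
```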
-## Clean up deployment
+## Clean up resources
-Run the following command to remove the resource group, storage account, and all related resources.
-```azurecli-interactive
-az group delete --name myResourceGroup
+```azurecli
+az group delete --name $resourceGroup
```
-## Script explanation
+## Sample reference
This script uses the following commands to create the storage account and retrieve and rotate its access keys. Each item in the table links to command-specific documentation.
synapse-analytics How To Move Workspace From One Region To Another https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/how-to-move-workspace-from-one-region-to-another.md
Title: Move an Azure Synapse Analytics workspace from region to another description: This article teaches you how to move an Azure Synapse Analytics workspace from one region to another. -+ Last updated 08/16/2021-+
synapse-analytics Quickstart Scale Compute Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/quickstart-scale-compute-portal.md
Title: 'Quickstart: Scale compute for Synapse SQL pool (Azure portal)' description: You can scale compute for Synapse SQL pool (data warehouse) using the Azure portal.--++ Previously updated : 04/28/2020 Last updated : 03/09/2022
synapse-analytics Quickstart Scale Compute Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/quickstart-scale-compute-powershell.md
Title: 'Quickstart: Scale compute for dedicated SQL pool (formerly SQL DW) (Azure PowerShell)' description: You can scale compute for dedicated SQL pool (formerly SQL DW) using Azure PowerShell.--++ Previously updated : 04/17/2018 Last updated : 03/09/2022
synapse-analytics Quickstart Scale Compute Tsql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/quickstart-scale-compute-tsql.md
Title: 'Quickstart: Scale compute in dedicated SQL pool (formerly SQL DW) - T-SQL' description: Scale compute in dedicated SQL pool (formerly SQL DW) using T-SQL and SQL Server Management Studio (SSMS). Scale out compute for better performance, or scale back compute to save costs.-++ Previously updated : 04/17/2018- Last updated : 03/09/2022
synapse-analytics Sql Data Warehouse How To Troubleshoot Missed Classification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-how-to-troubleshoot-missed-classification.md
Title: Troubleshoot misclassified workload in a dedicated SQL pool description: Identify and troubleshoot scenarios where workloads are misclassified to unintended workload groups in a dedicated SQL pool in Azure Synapse Analytics. --++ Previously updated : 01/24/2022 Last updated : 03/09/2022
synapse-analytics Sql Data Warehouse Manage Compute Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-manage-compute-rest-api.md
Title: Pause, resume, scale with REST APIs for dedicated SQL pool (formerly SQL DW) description: Manage compute power for dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics through REST APIs.-++ Previously updated : 03/29/2019- Last updated : 03/09/2022
virtual-machines Oms Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/oms-linux.md
Extension execution output is logged to the following file:
| 9 | Enable called prematurely | [Update the Azure Linux Agent](./update-linux-agent.md) to the latest available version. |
| 10 | VM is already connected to a Log Analytics workspace | To connect the VM to the workspace specified in the extension schema, set stopOnMultipleConnections to false in public settings or remove this property. This VM gets billed once for each workspace it is connected to. |
| 11 | Invalid config provided to the extension | Follow the preceding examples to set all property values necessary for deployment. |
-| 17 | Log Analytics package installation failure |
-| 19 | OMI package installation failure |
-| 20 | SCX package installation failure |
+| 17 | Log Analytics package installation failure | |
+| 18 | Installation of OMSConfig package failed. | Look through the command output for the root failure. |
+| 19 | OMI package installation failure | |
+| 20 | SCX package installation failure | |
+| 33 | Error generating metaconfiguration for omsconfig. | File a [GitHub Issue](https://github.com/Microsoft/OMS-Agent-for-Linux/issues) with details from the output. |
| 51 | This extension is not supported on the VM's operating system | |
-| 52 | This extension failed due to a missing dependency | Check the output and logs for more information about which dependency is missing. |
+| 52 | This extension failed due to a missing dependency or permission | Check the output and logs for more information about which dependency or permission is missing. |
| 53 | This extension failed due to missing or wrong configuration parameters | Check the output and logs for more information about what went wrong. Additionally, check the correctness of the workspace ID, and verify that the machine is connected to the internet. |
| 55 | Cannot connect to the Azure Monitor service or required packages missing or dpkg package manager is locked | Check that the system either has internet access, or that a valid HTTP proxy has been provided. Additionally, check the correctness of the workspace ID, and verify that curl and tar utilities are installed. |
virtual-machines Np Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/np-series.md
OnPrem FPGA, both the management endpoint (Device ID 5004) and role endpoint (De
On Azure NP VMs, the XDMA 2.1 platform only supports Host_Mem(SB) and DDR data retention features. <br> To enable Host_Mem(SB) (up to 1 Gb RAM): `sudo xbutil host_mem --enable --size 1g`
+<br>
To disable Host_Mem(SB): `sudo xbutil host_mem --disable`
+<br>
+Starting with XRT 2021.1, the on-premises FPGA in Linux exposes [M2M data transfer](https://xilinx.github.io/XRT/master/html/m2m.html).
+This feature is not supported in Azure NP VMs.
+<p>
+
**Q:** Can I run xbmgmt commands? **A:** No, on Azure VMs there's no management support directly from the Azure VM.
virtual-network Troubleshoot Outbound Smtp Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/troubleshoot-outbound-smtp-connectivity.md
Using these email delivery services isn't restricted in Azure, regardless of the
For VMs that are deployed in standard Enterprise Agreement subscriptions, the outbound SMTP connections on TCP port 25 will not be blocked. However, there is no guarantee that external domains will accept the incoming emails from the VMs. If your emails are rejected or filtered by the external domains, you should contact the email service providers of the external domains to resolve the problems. These problems are not covered by Azure support. For Enterprise Dev/Test subscriptions, port 25 is blocked by default.
-It is possible to have this block removed. To request to have the block removed, go to the **Cannot send email (SMTP-Port 25)** section of the **Diagnose and Solve** blade in the Azure Virtual Network resource in the Azure portal and open a support request.
It is possible to have this block removed. To request removal of the block, go to the **Cannot send email (SMTP-Port 25)** section of the **Diagnose and Solve** blade in the Azure Virtual Network resource in the Azure portal and run the diagnostic. Qualified Enterprise Dev/Test subscriptions are exempted automatically.
After the subscription is exempted from this block and the VMs are stopped and restarted, all VMs in that subscription are exempted going forward. The exemption applies only to the subscription requested and only to VM traffic that is routed directly to the internet.
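One quick way to confirm whether the exemption has taken effect is a plain TCP probe from the VM (the mail host below is a placeholder):

```bash
# A timeout here usually means the platform block on port 25 is still active
nc -vz -w 5 mail.example.com 25
```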
After the subscription is exempted from this block and the VMs are stopped and r
The Azure platform will block outbound SMTP connections on TCP port 25 for deployed VMs. This is to ensure better security for Microsoft partners and customers, protect Microsoft's Azure platform, and conform to industry standards.
-If you're using these subscription types, we encourage you to use an authenticated SMTP relay service, as outlined earlier in this article.
+If you're using a non-enterprise subscription type, we encourage you to use an authenticated SMTP relay service, as outlined earlier in this article.
## Changing subscription type
If you change your subscription type from Enterprise Agreement to another type o
## Need help? Contact support
-If you still need help, [contact support](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade) to get your problem resolved quickly. Use this issue type: **Technical** > **Virtual Network** > **Cannot send email (SMTP/Port 25)**.
+If you are using an Enterprise Agreement subscription and still need help, [contact support](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade) to get your problem resolved quickly. Use this issue type: **Technical** > **Virtual Network** > **Cannot send email (SMTP/Port 25)**.
virtual-wan Virtual Wan Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/virtual-wan-faq.md
An Azure Virtual WAN connection is composed of 2 tunnels. A Virtual WAN VPN gate
### What happens during a Gateway Reset in a Virtual WAN VPN Gateway?
-The Gateway Reset button should be used if your on-premises devices are all working as expected but Site to Site VPN connections in Azure are in a Disconnected state. Virtual WAN VPN Gateways are always deployed in an Active-Active state for high availability. This means there is always more than one instance deployed in a VPN Gateway at any point of time. When the Gateway Reset button is used, it reboots the instances in the VPN Gateway in a sequential manner, so your connections are not disrupted. There will be a brief gap as connections move from one instance to the other, but this gap should be less than a minute. Additionally, please note that resetting the gateways will not change your Public IPs.
+The Gateway Reset button should be used if your on-premises devices are all working as expected but Site to Site VPN connections in Azure are in a Disconnected state. Virtual WAN VPN Gateways are always deployed in an Active-Active state for high availability. This means there is always more than one instance deployed in a VPN Gateway at any point of time. When the Gateway Reset button is used, it reboots the instances in the VPN Gateway in a sequential manner, so your connections are not disrupted. There will be a brief gap as connections move from one instance to the other, but this gap should be less than a minute. Additionally, please note that resetting the gateways will not change your Public IPs.
### Can the on-premises VPN device connect to multiple hubs?
When you choose to deploy a security partner provider to protect Internet access
For more information regarding the available options third-party security providers and how to set this up, see [Deploy a security partner provider](../firewall-manager/deploy-trusted-security-partner.md).
+### Will BGP communities generated by on-premises be preserved in Virtual WAN?
+
+Yes, BGP communities generated on-premises will be preserved in Virtual WAN. You can use your own public ASNs or private ASNs for your on-premises networks. You can't use the ranges reserved by Azure or IANA:
+ * ASNs reserved by Azure:
+ * Public ASNs: 8074, 8075, 12076
+ * Private ASNs: 65515, 65517, 65518, 65519, 65520
+ * ASNs reserved by IANA: 23456, 64496-64511, 65535-65551
+
+### In Virtual WAN, what is the estimated performance per ExpressRoute gateway SKU?
++
+### Why am I seeing a message and button called "Update router to latest software version" in portal?
+
+The Virtual WAN team has been working on upgrading virtual routers from their current Cloud Services infrastructure to Virtual Machine Scale Sets (VMSS) based deployments. This will enable the virtual hub router to now be availability zone aware and have enhanced scaling out capabilities during high CPU usage. If you navigate to your Virtual WAN hub resource and see this message and button, you can upgrade your router to the latest version by selecting the button. The Cloud Services infrastructure will be deprecated soon, so **please migrate by May 31, 2022.**
+
+Note that you'll only be able to update your virtual hub router if all the resources (gateways/route tables/VNET connections) in your hub are in a succeeded state. Additionally, as this operation requires deployment of new VMSS-based virtual hub routers, you'll face an expected downtime of 30 minutes per hub. Within a single Virtual WAN resource, hubs should be updated one at a time instead of updating multiple at the same time. When the Router Version says "Latest", the hub is done updating.
+ ## Next steps * For more information about Virtual WAN, see [About Virtual WAN](virtual-wan-about.md).