Updates from: 03/11/2022 02:11:31
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Identity Provider Azure Ad Multi Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-azure-ad-multi-tenant.md
Previously updated : 02/25/2022 Last updated : 03/10/2022
If the sign-in process is successful, your browser is redirected to `https://jwt
## Next steps
-Learn how to [pass the Azure AD token to your application](idp-pass-through-user-flow.md).
+- Learn how to [pass the Azure AD token to your application](idp-pass-through-user-flow.md).
+- Check out the Azure AD multi-tenant federation [Live demo](https://github.com/azure-ad-b2c/unit-tests/tree/main/Identity-providers#azure-active-directory), and see how to pass the Azure AD access token in this [Live demo](https://github.com/azure-ad-b2c/unit-tests/tree/main/Identity-providers#azure-active-directory-with-access-token).
::: zone-end
active-directory-b2c Identity Provider Facebook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-facebook.md
Previously updated : 09/16/2021 Last updated : 03/10/2022
Update the relying party (RP) file that initiates the user journey that you crea
If the sign-in process is successful, your browser is redirected to `https://jwt.ms`, which displays the contents of the token returned by Azure AD B2C.

## Next steps
-Learn how to [pass Facebook token to your application](idp-pass-through-user-flow.md).
+- Learn how to [pass the Facebook token to your application](idp-pass-through-user-flow.md).
+- Check out the Facebook federation [Live demo](https://github.com/azure-ad-b2c/unit-tests/tree/main/Identity-providers#facebook), and see how to pass the Facebook access token in this [Live demo](https://github.com/azure-ad-b2c/unit-tests/tree/main/Identity-providers#facebook-with-access-token).
+
active-directory-b2c Identity Provider Github https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-github.md
Previously updated : 09/16/2021 Last updated : 03/10/2022
The GitHub technical profile requires the **CreateIssuerUserId** claim transform
If the sign-in process is successful, your browser is redirected to `https://jwt.ms`, which displays the contents of the token returned by Azure AD B2C.
+## Next steps
+
+- Learn how to [pass the GitHub token to your application](idp-pass-through-user-flow.md).
+- Check out the GitHub federation [Live demo](https://github.com/azure-ad-b2c/unit-tests/tree/main/Identity-providers#github), and see how to pass the GitHub access token in this [Live demo](https://github.com/azure-ad-b2c/unit-tests/tree/main/Identity-providers#github-with-access-token).
+ ::: zone-end
active-directory-b2c Identity Provider Google https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-google.md
Previously updated : 09/16/2021 Last updated : 03/10/2022
You can define a Google account as a claims provider by adding it to the **Claim
If the sign-in process is successful, your browser is redirected to `https://jwt.ms`, which displays the contents of the token returned by Azure AD B2C.

## Next steps
-Learn how to [pass a Google token to your application](idp-pass-through-user-flow.md).
+- Learn how to [pass the Google token to your application](idp-pass-through-user-flow.md).
+- Check out the Google federation [Live demo](https://github.com/azure-ad-b2c/unit-tests/tree/main/Identity-providers#google), and see how to pass the Google access token in this [Live demo](https://github.com/azure-ad-b2c/unit-tests/tree/main/Identity-providers#google-with-access-token).
++
active-directory-b2c Idp Pass Through User Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/idp-pass-through-user-flow.md
Previously updated : 09/16/2021 Last updated : 03/10/2022
Azure AD B2C supports passing the access token of [OAuth 2.0](add-identity-provi
::: zone pivot="b2c-custom-policy"
-Azure AD B2C supports passing the access token of [OAuth 2.0](authorization-code-flow.md) and [OpenID Connect](openid-connect.md) identity providers. For all other identity providers, the claim is returned blank.
+Azure AD B2C supports passing the access token of [OAuth 2.0](authorization-code-flow.md) and [OpenID Connect](openid-connect.md) identity providers. For all other identity providers, the claim is returned blank. For more information, see the identity provider federation [Live demo](https://github.com/azure-ad-b2c/unit-tests/tree/main/Identity-providers).
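In a custom policy, the provider token is usually surfaced by returning the built-in `identityProviderAccessToken` claim from the relying party. A minimal sketch (claim and profile names here follow the common B2C starter-pack conventions; verify them against your own policy):

```xml
<RelyingParty>
  <TechnicalProfile Id="PolicyProfile">
    <OutputClaims>
      <!-- Returns the federated identity provider's access token to the application -->
      <OutputClaim ClaimTypeReferenceId="identityProviderAccessToken" PartnerClaimType="idp_access_token" />
    </OutputClaims>
  </TechnicalProfile>
</RelyingParty>
```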
::: zone-end
active-directory-b2c Partner Bindid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-bindid.md
For [Applications](https://admin.bindid-sandbox.io/console/#/applications) to co
| Name | Azure AD B2C/your desired application name|
| Domain | name.onmicrosoft.com|
| Redirect URIs| https://jwt.ms |
-| Redirect URLs |Specify the page to which users are redirected after BindID authentication: https://your-B2C-tenant-name.b2clogin.com/your-B2C-tenant-name.onmicrosoft.com/oauth2/authresp<br>For Example: `https://fabrikam.b2clogin.com/fabrikam.onmicrosoft.com/oauth2/authresp`<br>If you use a custom domain, enter https://your-domain-name/your-tenant-name.onmicrosoft.com/oauth2/authresp.<br>Replace your-domain-name with your custom domain, and your-tenant-name with the name of your tenant.|
+| Redirect URLs |Specify the page to which users are redirected after BindID authentication: `https://your-B2C-tenant-name.b2clogin.com/your-B2C-tenant-name.onmicrosoft.com/oauth2/authresp`<br>For example: `https://fabrikam.b2clogin.com/fabrikam.onmicrosoft.com/oauth2/authresp`<br>If you use a custom domain, enter `https://your-domain-name/your-tenant-name.onmicrosoft.com/oauth2/authresp`.<br>Replace your-domain-name with your custom domain, and your-tenant-name with the name of your tenant.|
>[!NOTE]
>BindID will provide you with a Client ID and Client Secret, which you'll need later to configure the identity provider in Azure AD B2C.
active-directory Skip Out Of Scope Deletions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/skip-out-of-scope-deletions.md
Title: Skip deletion of out of scope users in Azure Active Directory Application
description: Learn how to override the default behavior of de-provisioning out of scope users in Azure Active Directory. -+
This article describes how to use the Microsoft Graph API and the Microsoft Grap
* If ***SkipOutOfScopeDeletions*** is set to 0 (false), accounts that go out of scope will be disabled in the target.
* If ***SkipOutOfScopeDeletions*** is set to 1 (true), accounts that go out of scope will not be disabled in the target.

This flag is set at the *Provisioning App* level and can be configured using the Graph API.
-Because this configuration is widely used with the *Workday to Active Directory user provisioning* app, the following steps include screenshots of the Workday application. However, the configuration can also be used with *all other apps*, such as ServiceNow, Salesforce, and Dropbox.
+Because this configuration is widely used with the *Workday to Active Directory user provisioning* app, the following steps include screenshots of the Workday application. However, the configuration can also be used with *all other apps*, such as ServiceNow, Salesforce, and Dropbox. To complete this procedure, you must first have set up app provisioning for the app. Each app has its own configuration article. For example, to configure the Workday application, see [Tutorial: Configure Workday to Azure AD user provisioning](../saas-apps/workday-inbound-cloud-only-tutorial.md).
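As a sketch of what the steps below amount to, the flag is written to the provisioning service principal's synchronization secrets through Microsoft Graph. The endpoint and payload shape shown here are assumptions based on the Graph synchronization API; verify them against the current Graph reference before use:

```http
PUT https://graph.microsoft.com/beta/servicePrincipals/{servicePrincipalId}/synchronization/secrets

{
  "value": [
    {
      "key": "SkipOutOfScopeDeletions",
      "value": "True"
    }
  ]
}
```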
## Step 1: Retrieve your Provisioning App Service Principal ID (Object ID)
active-directory Howto Sspr Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-sspr-windows.md
The following limitations apply to using SSPR from the Windows sign-in screen:
- *BlockNonAdminUserInstall* is set to enabled or 1
- *EnableLostMode* is set on the device
- Explorer.exe is replaced with a custom shell
+ - Interactive logon: Require smart card is set to enabled or 1
- The combination of the following three specific settings can cause this feature to not work.
  - Interactive logon: Do not require CTRL+ALT+DEL = Disabled
  - *DisableLockScreenAppNotifications* = 1 or Enabled
active-directory Concept Conditional Access Conditions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-conditions.md
Previously updated : 03/03/2022 Last updated : 03/10/2022
These browsers support device authentication, allowing the device to be identifi
> [!NOTE]
> Edge 85+ requires the user to be signed in to the browser to properly pass device identity. Otherwise, it behaves like Chrome without the accounts extension. This sign-in might not occur automatically in a Hybrid Azure AD Join scenario.
> Safari is supported for device-based Conditional Access, but it cannot satisfy the **Require approved client app** or **Require app protection policy** conditions. A managed browser like Microsoft Edge will satisfy approved client app and app protection policy requirements.
+> [Firefox 91+](https://support.mozilla.org/kb/windows-sso) is supported for device-based Conditional Access, but "Allow Windows single sign-on for Microsoft, work, and school accounts" needs to be enabled.
#### Why do I see a certificate prompt in the browser
active-directory Concept Continuous Access Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-continuous-access-evaluation.md
There are two scenarios that make up continuous access evaluation, critical even
### Critical event evaluation
-Continuous access evaluation is implemented by enabling services, like Exchange Online, SharePoint Online, and Teams, to subscribe to critical Azure AD events. Those events can then be evaluated and enforced near real time. Critical event evaluation doesn't rely on Conditional Access policies so is available in any tenant. The following events are currently evaluated:
+Continuous access evaluation is implemented by enabling services, like Exchange Online, SharePoint Online, and Teams, to subscribe to critical Azure AD events. Those events can then be evaluated and enforced near real time. Critical event evaluation doesn't rely on Conditional Access policies so it is available in any tenant. The following events are currently evaluated:
- User Account is deleted or disabled
- Password for a user is changed or reset
active-directory Developer Support Help Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/developer-support-help-options.md
Previously updated : 10/28/2021 Last updated : 03/09/2022 -
If you need help with one of the Microsoft Authentication Libraries (MSAL), open
- [Azure Active Directory Identity Blog](https://techcommunity.microsoft.com/t5/azure-active-directory-identity/bg-p/Identity): Get news and information about Azure AD.
- [Tech Community](https://techcommunity.microsoft.com/t5/azure-active-directory-identity/bg-p/Identity/): Share your experiences, engage, and learn from experts.
+
+## Share your product ideas
+
+Have an idea for improving the Microsoft identity platform? Browse and vote for ideas submitted by others or submit your own:
+
+https://feedback.azure.com/d365community/forum/22920db1-ad25-ec11-b6e6-000d3a4f0789
++
active-directory Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/whats-new-docs.md
Previously updated : 03/01/2022 Last updated : 03/10/2022
Welcome to what's new in the Microsoft identity platform documentation. This art
## February 2022
-### New articles
-
-- [Quickstart: Sign in users and call the Microsoft Graph API from an Android app](mobile-app-quickstart-portal-android.md)
-- [Quickstart: Sign in users and call the Microsoft Graph API from an iOS or macOS app](mobile-app-quickstart-portal-ios.md)
-
### Updated articles

- [Desktop app that calls web APIs: Acquire a token using WAM](scenario-desktop-acquire-token-wam.md)
Welcome to what's new in the Microsoft identity platform documentation. This art
### New articles

- [Access Azure AD protected resources from an app in Google Cloud (preview)](workload-identity-federation-create-trust-gcp.md)
-- [Quickstart: Acquire a token and call the Microsoft Graph API by using a console app's identity](console-app-quickstart.md)
-- [Quickstart: Acquire a token and call Microsoft Graph API from a desktop application](desktop-app-quickstart.md)
-- [Quickstart: Add sign-in with Microsoft to a web app](web-app-quickstart.md)
-- [Quickstart: Protect a web API with the Microsoft identity platform](web-api-quickstart.md)
-- [Quickstart: Sign in users and call the Microsoft Graph API from a mobile application](mobile-app-quickstart.md)

### Updated articles
Welcome to what's new in the Microsoft identity platform documentation. This art
- [Exchange a SAML token issued by AD FS for a Microsoft Graph access token](v2-saml-bearer-assertion.md)
- [Logging in MSAL.js](msal-logging-js.md)
- [Permissions and consent in the Microsoft identity platform](v2-permissions-and-consent.md)
-- [Quickstart: Acquire a token and call Microsoft Graph API from a Java console app using app's identity](quickstart-v2-java-daemon.md)
-- [Quickstart: Acquire a token and call Microsoft Graph API from a Python console app using app's identity](quickstart-v2-python-daemon.md)
-- [Quickstart: Add sign-in with Microsoft to a Java web app](quickstart-v2-java-webapp.md)
-- [Quickstart: Add sign-in with Microsoft to a Python web app](quickstart-v2-python-webapp.md)
-- [Quickstart: Add sign-in with Microsoft to an ASP.NET Core web app](quickstart-v2-aspnet-core-webapp.md)
-- [Quickstart: ASP.NET web app that signs in Azure AD users](quickstart-v2-aspnet-webapp.md)
-- [Quickstart: Get a token and call the Microsoft Graph API by using a console app's identity](quickstart-v2-netcore-daemon.md)
-- [Quickstart: Protect an ASP.NET Core web API with the Microsoft identity platform](quickstart-v2-aspnet-core-web-api.md)
-- [Quickstart: Sign in users and call the Microsoft Graph API from an Android app](quickstart-v2-android.md)
-- [Quickstart: Sign in users and call the Microsoft Graph API from an iOS or macOS app](quickstart-v2-ios.md)
+- [Quickstart: Acquire a token and call the Microsoft Graph API by using a console app's identity](console-app-quickstart.md)
+- [Quickstart: Acquire a token and call Microsoft Graph API from a desktop application](desktop-app-quickstart.md)
+- [Quickstart: Add sign-in with Microsoft to a web app](web-app-quickstart.md)
+- [Quickstart: Protect a web API with the Microsoft identity platform](web-api-quickstart.md)
+- [Quickstart: Sign in users and call the Microsoft Graph API from a mobile application](mobile-app-quickstart.md)
## December 2021
Welcome to what's new in the Microsoft identity platform documentation. This art
- [Claims mapping policy type](reference-claims-mapping-policy-type.md)
- [Microsoft identity platform developer glossary](developer-glossary.md)
-- [Quickstart: Sign in and get an access token in an Angular SPA using the auth code flow](quickstart-v2-javascript-auth-code-angular.md)
- [Token cache serialization in MSAL.NET](msal-net-token-cache-serialization.md)
active-directory Licensing Service Plan Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/licensing-service-plan-reference.md
Previously updated : 02/16/2022 Last updated : 03/10/2022
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
- **Service plans included (friendly names)**: A list of service plans (friendly names) in the product that correspond to the string ID and GUID

>[!NOTE]
->This information last updated on February 16th, 2022.<br/>You can also download a CSV version of this table [here](https://download.microsoft.com/download/e/3/e/e3e9faf2-f28b-490a-9ada-c6089a1fc5b0/Product%20names%20and%20service%20plan%20identifiers%20for%20licensing.csv).
+>This information last updated on March 10th, 2022.<br/>You can also download a CSV version of this table [here](https://download.microsoft.com/download/e/3/e/e3e9faf2-f28b-490a-9ada-c6089a1fc5b0/Product%20names%20and%20service%20plan%20identifiers%20for%20licensing.csv).
><br/>

| Product name | String ID | GUID | Service plans included | Service plans included (friendly names) |
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| SKYPE FOR BUSINESS PSTN DOMESTIC CALLING | MCOPSTN1 | 0dab259f-bf13-4952-b7f8-7db8f131b28d | MCOPSTN1 (4ed3ff63-69d7-4fb7-b984-5aec7f605ca8) | DOMESTIC CALLING PLAN (4ed3ff63-69d7-4fb7-b984-5aec7f605ca8) |
| SKYPE FOR BUSINESS PSTN DOMESTIC CALLING (120 Minutes)| MCOPSTN5 | 54a152dc-90de-4996-93d2-bc47e670fc06 | MCOPSTN5 (54a152dc-90de-4996-93d2-bc47e670fc06) | DOMESTIC CALLING PLAN (54a152dc-90de-4996-93d2-bc47e670fc06) |
| Skype for Business PSTN Usage Calling Plan | MCOPSTNPP | 06b48c5f-01d9-4b18-9015-03b52040f51a | MCOPSTN3 (6b340437-d6f9-4dc5-8cc2-99163f7f83d6) | MCOPSTN3 (6b340437-d6f9-4dc5-8cc2-99163f7f83d6) |
+| Teams Phone with Calling Plan | MCOTEAMS_ESSENTIALS | ae2343d1-0999-43f6-ae18-d816516f6e78 | MCOPSTN1 (4ed3ff63-69d7-4fb7-b984-5aec7f605ca8)<br/>MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) | Microsoft 365 Domestic Calling Plan (4ed3ff63-69d7-4fb7-b984-5aec7f605ca8)<br/>Microsoft 365 Phone System (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) |
| Teams Rooms Premium | MTR_PREM | 4fb214cb-a430-4a91-9c91-4976763aa78f | MMR_P1 (bdaa59a3-74fd-4137-981a-31d4f84eb8a0)<br/>MCOMEETADV (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>WHITEBOARD_PLAN3 (4a51bca5-1eff-43f5-878c-177680f191af) | Meeting Room Managed Services (bdaa59a3-74fd-4137-981a-31d4f84eb8a0)<br/>Microsoft 365 Audio Conferencing (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>Microsoft 365 Phone System (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Whiteboard (Plan 3) (4a51bca5-1eff-43f5-878c-177680f191af) |
| TELSTRA CALLING FOR O365 | MCOPSTNEAU2 | de3312e1-c7b0-46e6-a7c3-a515ff90bc86 | MCOPSTNEAU (7861360b-dc3b-4eba-a3fc-0d323a035746) | AUSTRALIA CALLING PLAN (7861360b-dc3b-4eba-a3fc-0d323a035746) |
| Universal Print | UNIVERSAL_PRINT | 9f3d9c1d-25a5-4aaa-8e59-23a1e6450a67 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>UNIVERSAL_PRINT_01 (795f6fe0-cc4d-4773-b050-5dde4dc704c9) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Universal Print (795f6fe0-cc4d-4773-b050-5dde4dc704c9) |
active-directory F5 Big Ip Sap Erp Easy Button https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-big-ip-sap-erp-easy-button.md
Selected policies should either have an **Include** or **Exclude** option checke
![ Screenshot for CA policies](./media/f5-big-ip-easy-button-ldap/conditional-access-policy.png)

>[!NOTE]
->The policy list is enumerated only once when first switching to this tab. A refresh button is available to manually force the wizard to query your tenant, but this button is displayed only when the application has been deployed.
+>The policy list is enumerated only once when first switching to this tab. A refresh button is available to manually force the wizard to query your tenant, but this button is displayed only when the application has been deployed.
### Virtual Server Properties
active-directory Admin Units Assign Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/admin-units-assign-roles.md
Previously updated : 01/28/2022 Last updated : 03/07/2022
# Assign Azure AD roles with administrative unit scope
-In Azure Active Directory (Azure AD), for more granular administrative control, you can assign an Azure AD role with a scope that's limited to one or more administrative units.
+In Azure Active Directory (Azure AD), for more granular administrative control, you can assign an Azure AD role with a scope that's limited to one or more administrative units. When an Azure AD role is assigned at the scope of an administrative unit, role permissions apply only when managing members of the administrative unit itself, and do not apply to tenant-wide settings or configurations.
+
+For example, an administrator who is assigned the Groups Administrator role at the scope of an administrative unit can manage groups that are members of the administrative unit, but they cannot manage other groups in the tenant. They also cannot manage tenant-level settings related to groups, such as expiration or group naming policies.
+
+This article describes how to assign Azure AD roles with administrative unit scope.
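For illustration, an administrative-unit-scoped assignment can be created through the Microsoft Graph `roleAssignments` API. This is a minimal sketch, not the article's exact procedure; the role definition ID shown is assumed to be the Groups Administrator template ID, and the placeholders are hypothetical:

```http
POST https://graph.microsoft.com/v1.0/roleManagement/directory/roleAssignments

{
  "@odata.type": "#microsoft.graph.unifiedRoleAssignment",
  "roleDefinitionId": "fdd7a751-b60b-444a-984c-02652fe8fa1c",
  "principalId": "{user-object-id}",
  "directoryScopeId": "/administrativeUnits/{admin-unit-id}"
}
```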
## Prerequisites
The following Azure AD roles can be assigned with administrative unit scope:
| Role | Description |
| --| -- |
| [Authentication Administrator](permissions-reference.md#authentication-administrator) | Has access to view, set, and reset authentication method information for any non-admin user in the assigned administrative unit only. |
-| [Groups Administrator](permissions-reference.md#groups-administrator) | Can manage all aspects of groups and groups settings, such as naming and expiration policies, in the assigned administrative unit only. |
+| [Groups Administrator](permissions-reference.md#groups-administrator) | Can manage all aspects of groups in the assigned administrative unit only. |
| [Helpdesk Administrator](permissions-reference.md#helpdesk-administrator) | Can reset passwords for non-administrators in the assigned administrative unit only. |
| [License Administrator](permissions-reference.md#license-administrator) | Can assign, remove, and update license assignments within the administrative unit only. |
| [Password Administrator](permissions-reference.md#password-administrator) | Can reset passwords for non-administrators within the assigned administrative unit only. |
-| [SharePoint Administrator](permissions-reference.md#sharepoint-administrator) * | Can manage all aspects of the SharePoint service. |
-| [Teams Administrator](permissions-reference.md#teams-administrator) * | Can manage the Microsoft Teams service. |
+| [SharePoint Administrator](permissions-reference.md#sharepoint-administrator) | Can manage Microsoft 365 groups in the assigned administrative unit only. For SharePoint sites associated with Microsoft 365 groups in an administrative unit, can also update site properties (site name, URL, and external sharing policy) using the Microsoft 365 admin center. Cannot use the SharePoint admin center or SharePoint APIs to manage sites. |
+| [Teams Administrator](permissions-reference.md#teams-administrator) | Can manage Microsoft 365 groups in the assigned administrative unit only. Can manage team members in the Microsoft 365 admin center for teams associated with groups in the assigned administrative unit only. Cannot use the Teams admin center. |
| [Teams Devices Administrator](permissions-reference.md#teams-devices-administrator) | Can perform management related tasks on Teams certified devices. |
| [User Administrator](permissions-reference.md#user-administrator) | Can manage all aspects of users and groups, including resetting passwords for limited admins within the assigned administrative unit only. |
-(*) The SharePoint Administrator and Teams Administrator roles can only be used for managing properties in the Microsoft 365 admin center. Teams admin center and SharePoint admin center currently do not support administrative unit-scoped administration.
Certain role permissions apply only to non-administrator users when assigned with the scope of an administrative unit. In other words, administrative unit scoped [Helpdesk Administrators](permissions-reference.md#helpdesk-administrator) can reset passwords for users in the administrative unit only if those users do not have administrator roles. The following list of permissions is restricted when the target of an action is another administrator:

- Read and modify user authentication methods, or reset user passwords
active-directory Sonarqube Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sonarqube-tutorial.md
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
### Create Sonarqube test user
-In this section, you create a user called B.Simon in Sonarqube. Work with [Sonarqube Client support team](https://www.sonarsource.com/support/) to add the users in the Sonarqube platform. Users must be created and activated before you use single sign-on.
+In this section, you create a user called B.Simon in Sonarqube. Work with [Sonarqube Client support team](https://sonarsource.com/company/contact/) to add the users in the Sonarqube platform. Users must be created and activated before you use single sign-on.
## Test SSO
aks Azure Disk Csi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-disk-csi.md
Title: Use Container Storage Interface (CSI) drivers for Azure Disks on Azure Ku
description: Learn how to use the Container Storage Interface (CSI) drivers for Azure disks in an Azure Kubernetes Service (AKS) cluster. Previously updated : 10/15/2021 Last updated : 03/09/2022
Besides original in-tree driver features, Azure Disk CSI driver already provides
- `Premium_ZRS`, `StandardSSD_ZRS` disk types are supported. For more details, see [Zone-redundant storage for managed disks](../virtual-machines/disks-redundancy.md)
- [Snapshot](#volume-snapshots)
- [Volume clone](#clone-volumes)
+- [Resize disk PV without downtime](#resize-a-persistent-volume-without-downtime)
## Use CSI persistent volumes with Azure disks
outfile
test.txt ```
-## Resize a persistent volume
+## Resize a persistent volume without downtime
You can request a larger volume for a PVC. Edit the PVC object, and specify a larger size. This change triggers the expansion of the underlying volume that backs the PV.
Filesystem Size Used Avail Use% Mounted on
```

> [!IMPORTANT]
-> Currently, the Azure disk CSI driver only supports resizing PVCs with no pods associated (and the volume not mounted to a specific node).
-
-As such, let's delete the pod we created earlier:
-
-```console
-$ kubectl delete -f https://raw.githubusercontent.com/kubernetes-sigs/azuredisk-csi-driver/master/deploy/example/nginx-pod-azuredisk.yaml
-
-pod "nginx-azuredisk" deleted
-```
+> Currently, the Azure disk CSI driver supports resizing PVCs without downtime in specific regions.
+> Follow this [link][expand-an-azure-managed-disk] to register the disk online resize feature.
Let's expand the PVC by increasing the `spec.resources.requests.storage` field:
pvc-391ea1a6-0191-4022-b915-c8dc4216174a 15Gi RWO Delete
(...)
```
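After the edit, the relevant part of the PVC manifest would look like the following sketch (using the `pvc-azuredisk` name and the `15Gi` size from this example):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-azuredisk
spec:
  resources:
    requests:
      storage: 15Gi   # the new, larger size that triggers volume expansion
```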
-> [!NOTE]
-> The PVC won't reflect the new size until it has a pod associated to it again.
-
-Let's create a new pod:
-
-```console
-$ kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/azuredisk-csi-driver/master/deploy/example/nginx-pod-azuredisk.yaml
-
-pod/nginx-azuredisk created
-```
-
-And, finally, confirm the size of the PVC and inside the pod:
+And after a few minutes, confirm the size of the PVC and inside the pod:
```console
$ kubectl get pvc pvc-azuredisk
$ kubectl exec -it busybox-azuredisk-0 -- cat c:\mnt\azuredisk\data.txt # on Win
(...)
```
-## Using Azure tags
-
-For more details on using Azure tags, see [Use Azure tags in Azure Kubernetes Service (AKS)][use-tags].
## Next steps

- To learn how to use CSI drivers for Azure Files, see [Use Azure Files with CSI drivers](azure-files-csi.md).
For more details on using Azure tags, see [Use Azure tags in Azure Kubernetes Se
[azure-disk-volume]: azure-disk-volume.md
[azure-files-pvc]: azure-files-dynamic-pv.md
[premium-storage]: ../virtual-machines/disks-types.md
+[expand-an-azure-managed-disk]: ../virtual-machines/linux/expand-disks.md#expand-an-azure-managed-disk
[az-disk-list]: /cli/azure/disk#az_disk_list
[az-snapshot-create]: /cli/azure/snapshot#az_snapshot_create
[az-disk-create]: /cli/azure/disk#az_disk_create
For more details on using Azure tags, see [Use Azure tags in Azure Kubernetes Se
[az-feature-register]: /cli/azure/feature#az_feature_register
[az-feature-list]: /cli/azure/feature#az_feature_list
[az-provider-register]: /cli/azure/provider#az_provider_register
-[use-tags]: use-tags.md
aks Azure Disk Volume https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-disk-volume.md
Title: Create a static volume for pods in Azure Kubernetes Service (AKS)
description: Learn how to manually create a volume with Azure disks for use with a pod in Azure Kubernetes Service (AKS) Previously updated : 03/01/2019 Last updated : 03/09/2022 #Customer intent: As a developer, I want to learn how to manually create and attach storage to a specific pod in AKS.
The disk resource ID is displayed once the command has successfully completed, a
``` ## Mount disk as volume
+Create a *pv-azuredisk.yaml* file with a *PersistentVolume*. Update `volumeHandle` with the disk resource ID. For example:
-To mount the Azure disk into your pod, configure the volume in the container spec. Create a new file named `azure-disk-pod.yaml` with the following contents. Update `diskName` with the name of the disk created in the previous step, and `diskURI` with the disk ID shown in output of the disk create command. If desired, update the `mountPath`, which is the path where the Azure disk is mounted in the pod. For Windows Server containers, specify a *mountPath* using the Windows path convention, such as *'D:'*.
+```yaml
+apiVersion: v1
+kind: PersistentVolume
+metadata:
+ name: pv-azuredisk
+spec:
+ capacity:
+ storage: 100Gi
+ accessModes:
+ - ReadWriteOnce
+ persistentVolumeReclaimPolicy: Retain
+ csi:
+ driver: disk.csi.azure.com
+ readOnly: false
+ volumeHandle: /subscriptions/<subscriptionID>/resourceGroups/MC_myAKSCluster_myAKSCluster_eastus/providers/Microsoft.Compute/disks/myAKSDisk
+ volumeAttributes:
+ fsType: ext4
+```
+
+Create a *pvc-azuredisk.yaml* file with a *PersistentVolumeClaim* that uses the *PersistentVolume*. For example:
+
+```yaml
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+ name: pvc-azuredisk
+spec:
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 100Gi
+ volumeName: pv-azuredisk
+ storageClassName: ""
+```
+
+Use the `kubectl` commands to create the *PersistentVolume* and *PersistentVolumeClaim*.
+
+```console
+kubectl apply -f pv-azuredisk.yaml
+kubectl apply -f pvc-azuredisk.yaml
+```
+
+Verify your *PersistentVolumeClaim* is created and bound to the *PersistentVolume*.
+
+```console
+$ kubectl get pvc pvc-azuredisk
+
+NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
+pvc-azuredisk Bound pv-azuredisk 100Gi RWO 5s
+```
+
+Create an *azure-disk-pod.yaml* file to reference your *PersistentVolumeClaim*. For example:
```yaml
apiVersion: v1
spec:
- name: azure mountPath: /mnt/azure volumes:
- - name: azure
- azureDisk:
- kind: Managed
- diskName: myAKSDisk
- diskURI: /subscriptions/<subscriptionID>/resourceGroups/MC_myAKSCluster_myAKSCluster_eastus/providers/Microsoft.Compute/disks/myAKSDisk
+ - name: azure
+ persistentVolumeClaim:
+ claimName: pvc-azuredisk
```
-Use the `kubectl` command to create the pod.
-
```console
kubectl apply -f azure-disk-pod.yaml
```
-You now have a running pod with an Azure disk mounted at `/mnt/azure`. You can use `kubectl describe pod mypod` to verify the disk is mounted successfully. The following condensed example output shows the volume mounted in the container:
-
-```
-[...]
-Volumes:
- azure:
- Type: AzureDisk (an Azure Data Disk mount on the host and bind mount to the pod)
- DiskName: myAKSDisk
- DiskURI: /subscriptions/<subscriptionID/resourceGroups/MC_myResourceGroupAKS_myAKSCluster_eastus/providers/Microsoft.Compute/disks/myAKSDisk
- Kind: Managed
- FSType: ext4
- CachingMode: ReadWrite
- ReadOnly: false
- default-token-z5sd7:
- Type: Secret (a volume populated by a Secret)
- SecretName: default-token-z5sd7
- Optional: false
-[...]
-Events:
- Type Reason Age From Message
- - - - -
- Normal Scheduled 1m default-scheduler Successfully assigned mypod to aks-nodepool1-79590246-0
- Normal SuccessfulMountVolume 1m kubelet, aks-nodepool1-79590246-0 MountVolume.SetUp succeeded for volume "default-token-z5sd7"
- Normal SuccessfulMountVolume 41s kubelet, aks-nodepool1-79590246-0 MountVolume.SetUp succeeded for volume "azure"
-[...]
-```
-
-## Using Azure tags
-
-For more details on using Azure tags, see [Use Azure tags in Azure Kubernetes Service (AKS)][use-tags].
- ## Next steps For associated best practices, see [Best practices for storage and backups in AKS][operator-best-practices-storage].
For more information about AKS clusters interact with Azure disks, see the [Kube
[azure-files-volume]: azure-files-volume.md [operator-best-practices-storage]: operator-best-practices-storage.md [concepts-storage]: concepts-storage.md
-[use-tags]: use-tags.md
aks Azure Files Csi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-files-csi.md
$ kubectl exec -it busybox-azurefile-0 -- cat c:\mnt\azurefile\data.txt # on Win
(...) ```
-## Using Azure tags
-
-For more details on using Azure tags, see [Use Azure tags in Azure Kubernetes Service (AKS)][use-tags].
- ## Next steps - To learn how to use CSI drivers for Azure disks, see [Use Azure disks with CSI drivers](azure-disk-csi.md).
For more details on using Azure tags, see [Use Azure tags in Azure Kubernetes Se
[az-provider-register]: /cli/azure/provider#az_provider_register [node-resource-group]: faq.md#why-are-two-resource-groups-created-with-aks [storage-skus]: ../storage/common/storage-redundancy.md
-[use-tags]: use-tags.md
+[use-tags]: use-tags.md
aks Azure Files Volume https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-files-volume.md
description: Learn how to manually create a volume with Azure Files for use with multiple concurrent pods in Azure Kubernetes Service (AKS) Previously updated : 01/29/2022 Last updated : 03/9/2022 #Customer intent: As a developer, I want to learn how to manually create and attach storage using Azure Files to a pod in AKS.
spec:
- mfsymlinks - cache=strict - nosharesock
+ - nobrl
``` Create a *azurefile-mount-options-pvc.yaml* file with a *PersistentVolumeClaim* that uses the *PersistentVolume*. For example:
aks Csi Storage Drivers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/csi-storage-drivers.md
Title: Enable Container Storage Interface (CSI) drivers on Azure Kubernetes Serv
description: Learn how to enable the Container Storage Interface (CSI) drivers for Azure disks and Azure Files in an Azure Kubernetes Service (AKS) cluster. Previously updated : 10/15/2021 Last updated : 03/10/2022
The Container Storage Interface (CSI) is a standard for exposing arbitrary block
The CSI storage driver support on AKS allows you to natively use: - [*Azure disks*](azure-disk-csi.md), which can be used to create a Kubernetes *DataDisk* resource. Disks can use Azure Premium Storage, backed by high-performance SSDs, or Azure Standard Storage, backed by regular HDDs or Standard SSDs. For most production and development workloads, use Premium Storage. Azure disks are mounted as *ReadWriteOnce*, so are only available to a single pod. For storage volumes that can be accessed by multiple pods simultaneously, use Azure Files.-- [*Azure Files*](azure-files-csi.md), which can be used to mount an SMB 3.0 share backed by an Azure Storage account to pods. With Azure Files, you can share data across multiple nodes and pods. Azure Files can use Azure Standard Storage backed by regular HDDs or Azure Premium Storage backed by high-performance SSDs.
+- [*Azure Files*](azure-files-csi.md), which can be used to mount an SMB 3.0/3.1 share backed by an Azure Storage account to pods. With Azure Files, you can share data across multiple nodes and pods. Azure Files can use Azure Standard Storage backed by regular HDDs or Azure Premium Storage backed by high-performance SSDs.
> [!IMPORTANT] > Starting in Kubernetes version 1.21, Kubernetes will use CSI drivers only and by default. These drivers are the future of storage support in Kubernetes.
Whilst explicit migration to the CSI provider is not needed for your storage cla
Migration of these storage classes will involve deleting the existing storage classes, and re-provisioning them with the provisioner set to **disk.csi.azure.com** if using Azure Disks, and **files.csi.azure.com** if using Azure Files.
-Whilst this will update the mapping of the storage classes, the binding of the Persistent Volume to the CSI provisioner will only take place at provisioning time. This could be during a cordon & drain operation (cluster update) or by detaching and reattaching the Volume.
-
-> [!IMPORTANT]
-> If your Storage class reclaimPolicy is set to Delete you will need to change the Persistent Volume to Retain to persist your data. This can be achieved via a [patch operation on the PV](https://kubernetes.io/docs/tasks/administer-cluster/change-pv-reclaim-policy/).
+Whilst this will update the mapping of the storage classes, the binding of the Persistent Volume to the CSI provisioner will only take place at provisioning time. This could be during a cordon & drain operation (cluster update) or by detaching and reattaching the Volume.
### Migrating Storage Class provisioner
parameters:
The CSI storage system supports the same features as the In-tree drivers, so the only change needed would be the provisioner.
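As an illustration of that provisioner swap, a CSI-based storage class for Azure disks might look like the following sketch; the class name and SKU are assumptions, not values from this article:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-csi               # illustrative name
provisioner: disk.csi.azure.com   # was kubernetes.io/azure-disk for the in-tree driver
parameters:
  skuName: StandardSSD_LRS        # illustrative SKU
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
```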
+### Migrating in-tree disk persistent volumes
+
+> [!IMPORTANT]
+> If your in-tree Persistent Volume reclaimPolicy is set to Delete you will need to change the Persistent Volume to Retain to persist your data. This can be achieved via a [patch operation on the PV](https://kubernetes.io/docs/tasks/administer-cluster/change-pv-reclaim-policy/). For example:
+> ```console
+> $ kubectl patch pv pv-azuredisk --type merge --patch '{"spec": {"persistentVolumeReclaimPolicy": "Retain"}}'
+> ```
+
+If you have in-tree persistent volumes, get the disk ID from `azureDisk.diskURI`, and then follow this [guide][azure-disk-static-mount] to set up CSI driver persistent volumes.
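As a rough sketch of pulling the useful pieces out of a `diskURI` value with ordinary shell tools (the subscription ID and names below are made-up examples in the same shape as the one earlier in this article):

```shell
# Sketch: split an in-tree azureDisk.diskURI into its useful parts.
# The subscription ID and resource names are fabricated examples.
diskURI="/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/MC_myAKSCluster_myAKSCluster_eastus/providers/Microsoft.Compute/disks/myAKSDisk"

diskName="${diskURI##*/}"                         # last path segment
resourceGroup="$(echo "$diskURI" | cut -d/ -f5)"  # segment after "resourceGroups"

echo "$diskName"        # prints: myAKSDisk
echo "$resourceGroup"   # prints: MC_myAKSCluster_myAKSCluster_eastus
```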
+ ## Next steps - To use the CSI drive for Azure disks, see [Use Azure disks with CSI drivers](azure-disk-csi.md).
The CSI storage system supports the same features as the In-tree drivers, so the
<!-- LINKS - internal --> [azure-disk-volume]: azure-disk-volume.md
+[azure-disk-static-mount]: azure-disk-volume.md#mount-disk-as-volume
[azure-files-pvc]: azure-files-dynamic-pv.md [premium-storage]: ../virtual-machines/disks-types.md [az-disk-list]: /cli/azure/disk#az_disk_list
api-management Api Management Advanced Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-advanced-policies.md
This example shows one way to verify a reference token with an authorization ser
| mode="string" | Determines whether this is a new request or a copy of the current request. In outbound mode, mode=copy does not initialize the request body. | No | New | | response-variable-name="string" | The name of context variable that will receive a response object. If the variable doesn't exist, it will be created upon successful execution of the policy and will become accessible via [`context.Variable`](api-management-policy-expressions.md#ContextVariables) collection. | Yes | N/A | | timeout="integer" | The timeout interval in seconds before the call to the URL fails. | No | 60 |
-| ignore-error | If true and the request results in an error:<br /><br /> - If response-variable-name was specified it will contain a null value.<br />- If response-variable-name was not specified, context.Request will not be updated. | No | false |
+| ignore-error | If true and the request results in an error, the error will be ignored, and the response variable will contain a null value. | No | false |
| name | Specifies the name of the header to be set. | Yes | N/A | | exists-action | Specifies what action to take when the header is already specified. This attribute must have one of the following values.<br /><br /> - override - replaces the value of the existing header.<br />- skip - does not replace the existing header value.<br />- append - appends the value to the existing header value.<br />- delete - removes the header from the request.<br /><br /> When set to `override` enlisting multiple entries with the same name results in the header being set according to all entries (which will be listed multiple times); only listed values will be set in the result. | No | override |
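For example, a `set-header` policy using these attributes might look like the following sketch; the header name and the policy-expression value are illustrative, not from this article:

```xml
<set-header name="X-Correlation-Id" exists-action="skip">
    <value>@(context.RequestId.ToString())</value>
</set-header>
```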
app-service Overview Vnet Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-vnet-integration.md
If the virtual network is in a different subscription than the app, you must ens
### Routes
-There are three types of routing to consider when you configure regional virtual network integration. [Application routing](#application-routing) defines what traffic is routed from your app and into the virtual network. [Configuration routing](#configuration-routing) affects operations that happen before or during startup of you app. Examples are container image pull and app settings with Key Vault reference. [Network routing](#network-routing) is the ability to handle how both app and configuration traffic is routed from your virtual network and out.
+You can control what traffic goes through the virtual network integration. There are three types of routing to consider when you configure regional virtual network integration. [Application routing](#application-routing) defines what traffic is routed from your app and into the virtual network. [Configuration routing](#configuration-routing) affects operations that happen before or during startup of your app. Examples are container image pulls and app settings with Key Vault references. [Network routing](#network-routing) controls how both app and configuration traffic are routed from your virtual network and out.
+
+By default, only private traffic (also known as [RFC1918](https://datatracker.ietf.org/doc/html/rfc1918#section-3) traffic) sent from your app is routed through the virtual network integration. Unless you configure application routing or configuration routing options, all other traffic will not be sent through the virtual network integration. Traffic is only subject to [network routing](#network-routing) if it is sent through the virtual network integration.
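To make the RFC1918 boundary concrete, here's a minimal shell sketch that classifies an IPv4 address as private (routed through the integration by default) or public (only routed when you opt in); the function name is illustrative:

```shell
# Sketch: classify an IPv4 address as RFC1918 (private) or public.
# By default, only addresses in these three ranges are routed through
# the virtual network integration.
is_rfc1918() {
  case "$1" in
    10.*)                                   return 0 ;;  # 10.0.0.0/8
    172.1[6-9].*|172.2[0-9].*|172.3[0-1].*) return 0 ;;  # 172.16.0.0/12
    192.168.*)                              return 0 ;;  # 192.168.0.0/16
    *)                                      return 1 ;;
  esac
}

is_rfc1918 10.0.0.5 && echo private || echo public   # prints: private
is_rfc1918 8.8.8.8  && echo private || echo public   # prints: public
```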
#### Application routing
-Application routing affects all the traffic that is sent from your app after it has been started. See [configuration routing](#configuration-routing) for traffic during start up. When you configure application routing, you can either route all traffic or only private traffic (also known as [RFC1918](https://datatracker.ietf.org/doc/html/rfc1918#section-3) traffic) into your virtual network. You configure this behavior through the **Route All** setting. If **Route All** is disabled, your app only routes private traffic into your virtual network. If you want to route all your outbound app traffic into your virtual network, make sure that **Route All** is enabled.
+Application routing applies to traffic that is sent from your app after it has been started. See [configuration routing](#configuration-routing) for traffic during start up. When you configure application routing, you can either route all traffic or only private traffic into your virtual network. You configure this behavior through the **Route All** setting. If **Route All** is disabled, your app only routes private traffic into your virtual network. If you want to route all your outbound app traffic into your virtual network, make sure that **Route All** is enabled.
> [!NOTE]
-> * When **Route All** is enabled, all app traffic is subject to the NSGs and UDRs that are applied to your integration subnet. When **Route All** is enabled, outbound traffic is still sent from the addresses that are listed in your app properties, unless you provide routes that direct the traffic elsewhere.
+> * Only traffic configured in application or configuration routing is subject to the NSGs and UDRs that are applied to your integration subnet.
+> * When **Route All** is enabled, outbound traffic from your app is still sent from the addresses that are listed in your app properties, unless you provide routes that direct the traffic elsewhere.
> * Regional virtual network integration can't use port 25. Learn [how to configure application routing](./configure-vnet-integration-routing.md).
We recommend that you use the **Route All** configuration setting to enable rout
#### Configuration routing
-When you are using virtual network integration, you can configure how parts of the configuration traffic is managed. By default, configuration traffic will go directly over the public route, but individual components you actively configure it to be routed through the virtual network integration.
+When you're using virtual network integration, you can configure how parts of the configuration traffic are managed. By default, configuration traffic goes directly over the public route, but the individual components mentioned here can be configured to route through the virtual network integration.
> [!NOTE] > * Windows containers don't support routing App Service Key Vault references or pulling custom container images over virtual network integration.
App settings using Key Vault references will attempt to get secrets over the pub
#### Network routing
-You can use route tables to route outbound traffic from your app to wherever you want. Route tables affect your destination traffic. When **Route All** is disabled in [application routing](#application-routing), only private traffic (RFC1918) is affected by your route tables. Common destinations can include firewall devices or gateways. Routes that are set on your integration subnet won't affect replies to inbound app requests.
+You can use route tables to route outbound traffic from your app to wherever you want. Route tables affect your destination traffic. Route tables only apply to traffic routed through the virtual network integration. See [application routing](#application-routing) and [configuration routing](#configuration-routing) for details. Common destinations can include firewall devices or gateways. Routes that are set on your integration subnet won't affect replies to inbound app requests.
-When you want to route all outbound traffic on-premises, you can use a route table to send all outbound traffic to your Azure ExpressRoute gateway. If you do route traffic to a gateway, set routes in the external network to send any replies back.
+When you want to route outbound traffic on-premises, you can use a route table to send outbound traffic to your Azure ExpressRoute gateway. If you do route traffic to a gateway, set routes in the external network to send any replies back.
Border Gateway Protocol (BGP) routes also affect your app traffic. If you have BGP routes from something like an ExpressRoute gateway, your app outbound traffic is affected. Similar to user-defined routes, BGP routes affect traffic according to your routing scope setting. ### Network security groups
-An app that uses regional virtual network integration can use a [network security group](../virtual-network/network-security-groups-overview.md) to block outbound traffic to resources in your virtual network or the internet. To block traffic to public addresses, enable [Route All](#application-routing) to the virtual network. When **Route All** isn't enabled, NSGs are only applied to RFC1918 traffic.
+An app that uses virtual network integration can use a [network security group](../virtual-network/network-security-groups-overview.md) to block outbound traffic to resources in your virtual network or the internet. To block traffic to public addresses, enable [Route All](#application-routing). When **Route All** isn't enabled, NSGs are only applied to RFC1918 traffic from your app.
An NSG that's applied to your integration subnet is in effect regardless of any route tables applied to your integration subnet.
app-service Tutorial Python Postgresql App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-python-postgresql-app.md
In your Python code, you use these settings as environment variables with statem
Having issues? Refer first to the [Troubleshooting guide](configure-language-python.md#troubleshooting), otherwise, [let us know](https://aka.ms/DjangoCLITutorialHelp).
+> [!NOTE]
+> If you want to try an alternative approach to connect your app to the Postgres database in Azure, see the [Service Connector version](../service-connector/tutorial-django-webapp-postgres-cli.md) of this tutorial. Service Connector is a new Azure service that is currently in public preview. [Section 4.2](../service-connector/tutorial-django-webapp-postgres-cli.md#42-configure-environment-variables-to-connect-the-database) of that tutorial introduces a simplified process for creating the connection.
+ ### 4.3 Run Django database migrations Django database migrations ensure that the schema in the PostgreSQL on Azure database matches with those described in your code.
Learn how to map a custom DNS name to your app:
Learn how App Service runs a Python app: > [!div class="nextstepaction"]
-> [Configure Python app](configure-language-python.md)
+> [Configure Python app](configure-language-python.md)
applied-ai-services How To Create Immersive Reader https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/immersive-reader/how-to-create-immersive-reader.md
The script is designed to be flexible. It will first look for existing Immersive
-ResourceGroupName 'MyResourceGroupName' -ResourceGroupLocation 'westus2' -AADAppDisplayName 'MyOrganizationImmersiveReaderAADApp'
- -AADAppIdentifierUri 'https://MyOrganizationImmersiveReaderAADApp'
+ -AADAppIdentifierUri 'api://MyOrganizationImmersiveReaderAADApp'
-AADAppClientSecret 'SomeStrongPassword' -AADAppClientSecretExpiration '2021-12-31' ```
The script is designed to be flexible. It will first look for existing Immersive
| ResourceGroupName |Resources are created in resource groups within subscriptions. Supply the name of an existing resource group. If the resource group does not already exist, a new one with this name will be created. | | ResourceGroupLocation |If your resource group doesn't exist, you need to supply a location in which to create the group. To find a list of locations, run `az account list-locations`. Use the *name* property (without spaces) of the returned result. This parameter is optional if your resource group already exists. | | AADAppDisplayName |The Azure Active Directory application display name. If an existing Azure AD application is not found, a new one with this name will be created. This parameter is optional if the Azure AD application already exists. |
- | AADAppIdentifierUri |The URI for the Azure AD app. If an existing Azure AD app is not found, a new one with this URI will be created. For example, `https://immersivereaderaad-mycompany`. |
+ | AADAppIdentifierUri |The URI for the Azure AD app. If an existing Azure AD app is not found, a new one with this URI will be created. For example, `api://MyOrganizationImmersiveReaderAADApp`. Here we are using the default Azure AD URI scheme prefix of `api://` for compatibility with the [Azure AD policy of using verified domains](../../active-directory/develop/reference-breaking-changes.md#appid-uri-in-single-tenant-applications-will-require-use-of-default-scheme-or-verified-domains). |
| AADAppClientSecret |A password you create that will be used later to authenticate when acquiring a token to launch the Immersive Reader. The password must be at least 16 characters long, contain at least 1 special character, and contain at least 1 numeric character. To manage Azure AD application client secrets after you've created this resource please visit https://portal.azure.com and go to Home -> Azure Active Directory -> App Registrations -> `[AADAppDisplayName]` -> Certificates and Secrets blade -> Client Secrets section (as shown in the "Manage your Azure AD application secrets" screenshot below). | | AADAppClientSecretExpiration |The date or datetime after which your `[AADAppClientSecret]` will expire (e.g. '2020-12-31T11:59:59+00:00' or '2020-12-31'). |
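The stated secret rules (at least 16 characters, at least 1 special character, and at least 1 numeric character) can be sanity-checked locally before running the script; this is a hedged sketch, and `check_secret` is a hypothetical helper, not part of the official deployment script:

```shell
# Sketch: validate a candidate AADAppClientSecret against the documented rules:
# >= 16 characters, >= 1 special character, >= 1 numeric character.
check_secret() {
  s="$1"
  [ "${#s}" -ge 16 ] || return 1                       # length check
  case "$s" in *[0-9]*) ;; *) return 1 ;; esac         # needs a digit
  case "$s" in *[!a-zA-Z0-9]*) ;; *) return 1 ;; esac  # needs a special char
  return 0
}

check_secret 'SomeStrongPassword!2022' && echo ok || echo weak   # prints: ok
check_secret 'short!1'                 && echo ok || echo weak   # prints: weak
```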
azure-app-configuration Concept Soft Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/concept-soft-delete.md
+
+ Title: Soft Delete in Azure App Configuration
+description: Soft Delete in Azure App Configuration
+++++ Last updated : 03/01/2022++
+# Soft delete
+
+Azure App Configuration's soft delete feature allows recovery of your data, such as key-values, feature flags, and the revision history of a deleted store. It's automatically enabled for all stores in the standard tier. In this article, learn more about the soft delete feature and its functionality.
+
+Learn how to [recover Azure App Configuration stores](./howto-recover-deleted-stores-in-azure-app-configuration.md) using the soft delete feature.
+
+> [!NOTE]
+> When an App Configuration store is soft-deleted, services that are integrated with the store are deleted. For example, Azure RBAC role assignments, managed identities, Event Grid subscriptions, and private endpoints. Recovering a soft-deleted App Configuration store won't restore these services; they need to be recreated.
+
+## Scenarios
+
+The soft delete feature enables recovery of deleted stores, whether the deletion was accidental or intentional. It acts as a safeguard in the following scenarios:
+
+* **Recovery of a deleted App Configuration store**: A deleted App Configuration store can be recovered within the retention period.
+
+* **Permanent deletion of an App Configuration store**: This feature lets you permanently delete an App Configuration store.
+
+## Recover
+Recover is the operation that returns a soft-deleted store to an active state, where it can again serve configuration and feature management requests.
+
+## Retention period
+The retention period specifies, in days, how long a soft-deleted store is retained. It can only be set at store creation, and once set, it can't be changed. When the retention period elapses, the store is permanently deleted automatically.
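For example, assuming GNU `date`, the automatic purge date implied by a retention period can be computed like this (the deletion date and retention value are made up):

```shell
# Sketch: compute the date a soft-deleted store becomes eligible for
# automatic permanent deletion. Assumes GNU date; values are fabricated.
deletedOn="2022-03-01"
retentionDays=7
purgeDate="$(date -u -d "$deletedOn + $retentionDays days" +%Y-%m-%d)"
echo "$purgeDate"   # prints: 2022-03-08
```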
+
+## Purge
+Purge is the operation that permanently deletes a store in a soft-deleted state, provided the store doesn't have purge protection enabled. To recreate an App Configuration store with the same name as a deleted store, purge the deleted store first unless it's already past the retention period.
+
+## Purge protection
+With purge protection enabled, a soft-deleted store can't be purged during the retention period. If it's disabled, the store can be purged before the retention period expires. Once purge protection is enabled on a store, it can't be disabled.
+
+## Permissions to recover or purge store
+
+A user must have the following permissions to recover or purge a soft-deleted App Configuration store. The built-in Contributor and Owner roles already include the required permissions.
+
+- Permission to recover - `Microsoft.AppConfiguration/configurationStores/write`
+
+- Permission to purge - `Microsoft.AppConfiguration/configurationStores/action`
+
+## Billing implications
+
+Soft-deleted stores aren't charged. Once you recover a soft-deleted store, the usual charges start applying. Soft delete isn't available in the free tier.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Recover Azure App Configuration stores](./howto-recover-deleted-stores-in-azure-app-configuration.md)
azure-app-configuration Howto Recover Deleted Stores In Azure App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-recover-deleted-stores-in-azure-app-configuration.md
+
+ Title: Recover Azure App Configuration stores (Preview)
+description: Recover/Purge Azure App Configuration soft deleted Stores
+++++ Last updated : 03/01/2022++
+# Recover Azure App Configuration stores (Preview)
+
+This article covers the soft delete feature of Azure App Configuration stores. You'll learn how to set the retention policy, enable purge protection, and recover or purge a soft-deleted store.
+
+To learn more about the soft delete feature, see [Soft-Delete in Azure App Configuration](./concept-soft-delete.md).
+
+## Prerequisites
+
+* An Azure subscription - [create one for free](https://azure.microsoft.com/free/dotnet)
+
+* Refer to [Soft-Delete in Azure App Configuration](./concept-soft-delete.md#permissions-to-recover-or-purge-store) for permission requirements.
+
+## Set retention policy and enable purge protection at store creation
+
+To create a new App Configuration store in the Azure portal, follow these steps:
+
+1. Sign in to the [Azure portal](https://portal.azure.com). In the upper-left corner of the home page, select **Create a resource**. In the **Search the Marketplace** box, type *App Configuration* and press Enter.
+
+ :::image type="content" source="./media/how-to-soft-delete-app-config-3.png" alt-text="In MarketPlace Search results, App Configuration is highlighted":::
+
+1. Select **App Configuration** from the search results, and then select **Create**.
+
+ :::image type="content" source="./media/how-to-soft-delete-app-config-7.png" alt-text="In Snapshot, Create option is highlighted":::
+
+1. On the **Create App Configuration** pane, enter the following settings:
+
+ | Setting | Suggested value | Description |
+ ||||
+ | **Subscription** | Your subscription | Select the Azure subscription for your store |
+ | **Resource group** | Your resource group | Select the Azure resource group for your store |
+ | **Resource name** | Globally unique name | Enter a unique resource name to use for the App Configuration store. This name can't be the same name as the previous configuration store. |
+ | **Location** | Your desired Location | Select the region you want to create your configuration store in. |
+ | **Pricing tier** | *Standard* | Select the standard pricing tier. For more information, see the [App Configuration pricing page](https://azure.microsoft.com/pricing/details/app-configuration). |
+ | **Days to retain deleted stores** | Retention period for soft deleted stores | Select the number of days for which you would want the soft deleted stores and their content to be retained. |
+ | **Enable Purge protection** | Purge protection status | Check to enable Purge protection on the store so no one can purge it before the retention period expires. |
+
+ :::image type="content" source="./media/how-to-soft-delete-app-config-6.png" alt-text="In Create, Recovery options are highlighted":::
+
+1. Select **Review + create** to validate your settings.
+1. Select **Create**. The deployment might take a few minutes.
+
+## Enable Purge Protection in an existing store
+
+1. Log in to the Azure portal.
+1. Select your standard tier App Configuration store.
+1. The screenshot below shows where to check the soft delete status of an existing store.
+
+ :::image type="content" source="./media/how-to-soft-delete-app-config-1.png" alt-text="In Overview, Soft-delete is highlighted.":::
+
+1. Click on the **Enabled** value of Soft Delete. You'll be redirected to the **properties** of your store. At the bottom of the page, you can review the information related to soft delete. The Retention period is shown as "Days to retain deleted stores". You can't change this value once it's set. The Purge protection check box shows whether purge protection is enabled for this particular store or not. Once enabled, purge protection can't be disabled.
+
+ :::image type="content" source="./media/how-to-soft-delete-app-config-2.png" alt-text="In Properties, Soft delete, Days to retain are highlighted.":::
+
+## List, recover, or purge a soft deleted App Configuration store
+
+1. Log in to the Azure portal.
+1. Click on the search bar at the top of the page.
+1. Search for "App Configuration" and click on **App Configuration** under **Services**. Don't click on an individual App Configuration store.
+1. At the top of the screen, click the option to **Manage deleted stores**. A context pane will open on the right side of your screen.
+
+ :::image type="content" source="./media/how-to-soft-delete-app-config-4.png" alt-text="On App Configuration stores, the Manage deleted stores option is highlighted.":::
+
+1. Select your subscription from the drop-down box. If you've deleted one or more App Configuration stores, these stores appear in the context pane on the right. Click **Load More** at the bottom of the context pane if not all deleted stores are loaded.
+1. Once you find the store that you wish to recover or purge, select the checkbox next to it. You can select multiple stores.
+1. Click **Recover** at the bottom of the context pane to recover the store, or click **Purge** to permanently delete it. Note that you won't be able to purge a store when purge protection is enabled.
+
+ :::image type="content" source="./media/how-to-soft-delete-app-config-5.png" alt-text="On Manage deleted stores panel, one store is selected, and the Recover button is highlighted.":::
+
+## Recover an App Configuration store with customer-managed key enabled
+
+When recovering stores that use customer-managed keys, extra steps are needed to access the recovered data. This is because the recovered store no longer has a managed identity assigned with access to the customer-managed key. Assign a new managed identity to the store and reconfigure the customer-managed key settings to use the newly assigned identity, making sure to continue using the same key from the key vault. For more details on how to use customer-managed keys in App Configuration stores, refer to [Use customer-managed keys to encrypt your App Configuration data](./concept-customer-managed-keys.md).
+
+## Next steps
+> [!div class="nextstepaction"]
+> [Soft-Delete in Azure App Configuration](./concept-soft-delete.md)
azure-arc Quickstart Connect Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/quickstart-connect-cluster.md
For a conceptual look at connecting clusters to Azure Arc, see [Azure Arc-enable
| -- | - | | `https://management.azure.com` (for Azure Cloud), `https://management.usgovcloudapi.net` (for Azure US Government) | Required for the agent to connect to Azure and register the cluster. | | `https://<region>.dp.kubernetesconfiguration.azure.com` (for Azure Cloud), `https://<region>.dp.kubernetesconfiguration.azure.us` (for Azure US Government) | Data plane endpoint for the agent to push status and fetch configuration information. |
-| `https://login.microsoftonline.com`, `https://<region>.login.microsoft.com`, `login.windows.net` (for Azure Cloud), `https://login.microsoftonline.us` (for Azure US Government) | Required to fetch and update Azure Resource Manager tokens. |
+| `https://login.microsoftonline.com`, `https://<region>.login.microsoft.com`, `login.windows.net` (for Azure Cloud), `https://login.microsoftonline.us`, `<region>.login.microsoftonline.us` (for Azure US Government) | Required to fetch and update Azure Resource Manager tokens. |
| `https://mcr.microsoft.com`, `https://*.data.mcr.microsoft.com` | Required to pull container images for Azure Arc agents. | | `https://gbl.his.arc.azure.com` (for Azure Cloud), `https://gbl.his.arc.azure.us` (for Azure US Government) | Required to get the regional endpoint for pulling system-assigned Managed Identity certificates. | | `https://*.his.arc.azure.com` (for Azure Cloud), `https://usgv.his.arc.azure.us` (for Azure US Government) | Required to pull system-assigned Managed Identity certificates. |
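The data plane endpoint in the table above is region-scoped. As a rough sketch (hypothetical helper; the URL pattern comes from the table, and the region names are only examples), the endpoint can be derived from the region name:

```python
# Regional data-plane endpoint pattern (a sketch; the URL pattern is taken
# from the endpoint table above, and the region names are examples).
def dataplane_endpoint(region: str, us_gov: bool = False) -> str:
    suffix = "azure.us" if us_gov else "azure.com"
    return f"https://{region}.dp.kubernetesconfiguration.{suffix}"

print(dataplane_endpoint("eastus"))
print(dataplane_endpoint("usgovvirginia", us_gov=True))
```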
azure-functions Functions App Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-app-settings.md
Connection string for storage account where the function app code and configurat
||| |WEBSITE_CONTENTAZUREFILECONNECTIONSTRING|`DefaultEndpointsProtocol=https;AccountName=...`|
-Only used when deploying to a Premium plan or to a Consumption plan running on Windows. Not supported for Consumptions plans running Linux. Changing or removing this setting may cause your function app to not start. To learn more, see [this troubleshooting article](functions-recover-storage-account.md#storage-account-application-settings-were-deleted).
+Only used when deploying to a Windows or Linux Premium plan or to a Windows Consumption plan. Not supported for Linux Consumption plans or Windows or Linux Dedicated plans. Changing or removing this setting may cause your function app to not start. To learn more, see [this troubleshooting article](functions-recover-storage-account.md#storage-account-application-settings-were-deleted).
## WEBSITE\_CONTENTOVERVNET
The file path to the function app code and configuration in an event-driven scal
||| |WEBSITE_CONTENTSHARE|`functionapp091999e2`|
-Only used when deploying to a Premium plan or to a Consumption plan running on Windows. Not supported for Consumptions plans running Linux. Changing or removing this setting may cause your function app to not start. To learn more, see [this troubleshooting article](functions-recover-storage-account.md#storage-account-application-settings-were-deleted).
+Only used when deploying to a Windows or Linux Premium plan or to a Windows Consumption plan. Not supported for Linux Consumption plans or Windows or Linux Dedicated plans. Changing or removing this setting may cause your function app to not start. To learn more, see [this troubleshooting article](functions-recover-storage-account.md#storage-account-application-settings-were-deleted).
When using an Azure Resource Manager template to create a function app during deployment, don't include WEBSITE_CONTENTSHARE in the template. This slot setting is generated during deployment. To learn more, see [Automate resource deployment for your function app](functions-infrastructure-as-code.md?tabs=windows#create-a-function-app).
azure-functions Functions Bindings Service Bus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-service-bus.md
This section describes the configuration settings available for this binding, wh
"maxAutoLockRenewalDuration": "00:05:00", "maxConcurrentCalls": 16, "maxConcurrentSessions": 8,
- "maxMessages": 1000,
+ "maxMessageBatchSize": 1000,
"sessionIdleTimeout": "00:01:00", "enableCrossEntityTransactions": false }
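A quick sanity check of the renamed setting is to parse the fragment. This sketch assumes only the host.json keys shown above, with the sample values:

```python
import json

# Hypothetical host.json fragment mirroring the section above;
# "maxMessageBatchSize" replaces the old "maxMessages" key name.
fragment = """
{
    "maxAutoLockRenewalDuration": "00:05:00",
    "maxConcurrentCalls": 16,
    "maxConcurrentSessions": 8,
    "maxMessageBatchSize": 1000,
    "sessionIdleTimeout": "00:01:00",
    "enableCrossEntityTransactions": false
}
"""

settings = json.loads(fragment)
assert "maxMessages" not in settings           # old key name no longer used
print(settings["maxMessageBatchSize"])
```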
azure-monitor Azure Monitor Agent Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-manage.md
description: Options for managing the Azure Monitor agent (AMA) on Azure virtual
Previously updated : 01/27/2022 Last updated : 03/09/2022
We strongly recommended to update to generally available versions listed as foll
| September 2021 | <ul><li>Fixed issue causing data loss on restarting the agent</li><li>Fixed issue for Arc Windows servers</li></ul> | 1.1.3.2<sup>Hotfix</sup> | 1.12.2.0 <sup>1</sup> | | December 2021 | <ul><li>Fixed issues impacting Linux Arc-enabled servers</li><li>'Heartbeat' table > 'Category' column reports "Azure Monitor Agent" in Log Analytics for Windows</li></ul> | 1.1.4.0 | 1.14.7.0<sup>2</sup> | | January 2022 | <ul><li>Syslog RFC compliance for Linux</li><li>Fixed issue for Linux perf counters not flowing on restart</li><li>Fixed installation failure on Windows Server 2008 R2 SP1</li></ul> | 1.1.5.1<sup>Hotfix</sup> | 1.15.2.0<sup>Hotfix</sup> |
+| February 2022 | <ul><li>Bug fixes for the AMA Client installer (private preview)</li><li>Versioning fix to reflect appropriate Windows major/minor/hotfix versions</li></ul> | 1.2.0.0 | Not yet available |
<sup>Hotfix</sup> Do not use AMA Linux versions v1.10.7, v1.15.1 and AMA Windows v1.1.3.1, v1.1.5.0. Please use hotfixed versions listed above. <sup>1</sup> Known issue: No data collected from Linux Arc-enabled servers
azure-monitor Azure Monitor Agent Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-overview.md
description: Overview of the Azure Monitor agent, which collects monitoring data
Previously updated : 3/3/2022 Last updated : 3/9/2022 # Azure Monitor agent overview
-The Azure Monitor agent (AMA) collects monitoring data from the guest operating system of Azure virtual machines and delivers it to Azure Monitor. This article provides an overview of the Azure Monitor agent and includes information on how to install it and how to configure data collection.
+The Azure Monitor agent (AMA) collects monitoring data from the guest operating system of Azure virtual machines and delivers it to Azure Monitor. This article provides an overview of the Azure Monitor agent and includes information on how to install it and how to configure data collection.
+Here's an **introductory video** that explains this new agent, including a quick demo of how to set things up using the Azure portal: [ITOps Talk: Azure Monitor Agent](https://www.youtube.com/watch?v=f8bIrFU8tCs)
## Relationship to other agents The Azure Monitor agent replaces the following legacy agents that are currently used by Azure Monitor to collect guest data from virtual machines ([view known gaps](../faq.yml)):
The following table shows the current support for the Azure Monitor agent with A
| Azure Monitor feature | Current support | More information | |:|:|:| | File based logs and Windows IIS logs | Private preview | [Sign-up link](https://aka.ms/amadcr-privatepreviews) |
+| Windows Client OS installer | Private preview | [Sign-up link](https://aka.ms/amadcr-privatepreviews) |
| [VM insights](../vm/vminsights-overview.md) | Private preview | [Sign-up link](https://aka.ms/amadcr-privatepreviews) | | [Connect using private links](azure-monitor-agent-data-collection-endpoint.md) | Public preview | No sign-up needed |
azure-monitor Java 2X Micrometer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-2x-micrometer.md
Add the following dependencies to your pom.xml or build.gradle file:
* [Application Insights spring-boot-starter](https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/spring/azure-spring-boot-starter) 2.5.0 or later * Micrometer Azure Registry 1.1.0 or above
-* [Micrometer Spring Legacy](https://micrometer.io/docs/ref/spring/1.5) 1.1.0 or above (this backports the autoconfig code in the Spring framework).
+* [Micrometer Spring Legacy](https://docs.spring.io/spring-boot/docs/current/reference/htmlsingle/#production-ready-metrics) 1.1.0 or above (this backports the autoconfig code in the Spring framework).
* [ApplicationInsights Resource](./create-new-resource.md) Steps
azure-monitor Java Standalone Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-troubleshoot.md
In this article, we cover some of the common issues that you might face while in
## Check the self-diagnostic log file
-By default, Application Insights Java 3.x produces a log file named `applicationinsights.log` in the same directory that holds the `applicationinsights-agent-3.2.2.jar` file.
+By default, Application Insights Java 3.x produces a log file named `applicationinsights.log` in the same directory
+that holds the `applicationinsights-agent-3.2.7.jar` file.
This log file is the first place to check for hints to any issues you might be experiencing.
+If no log file is generated, check that your Java application has write permission to the directory that holds the
+`applicationinsights-agent-3.2.7.jar` file.
+
+If there's still no log file, check the stdout log from your Java application. Application Insights Java 3.x
+should log any errors to stdout that would prevent it from logging to its normal location.
+ ## JVM fails to start If the JVM fails to start with "Error opening zip file or JAR manifest missing",
In this case, the server side is the Application Insights ingestion endpoint or
If using Java 9 or later, please check if the JVM has `jdk.crypto.cryptoki` module included in the jmods folder. Also if you are building a custom java runtime using `jlink` please make sure to include the same module.
+Otherwise, these cipher suites should already be part of modern Java 8+ distributions,
+so check where you installed your Java distribution from, and investigate why the security
+providers in that Java distribution's `java.security` configuration file differ from those of standard Java distributions.
azure-monitor Basic Logs Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/basic-logs-configure.md
You can currently configure the following tables for Basic Logs:
> [!NOTE] > Tables created with the [Data Collector API](data-collector-api.md) do not support Basic Logs. + ## Set table configuration
+# [API](#tab/api-1)
+ To configure a table for Basic Logs or Analytics Logs, call the **Tables - Update** API: ```http
PATCH https://management.azure.com/subscriptions/<subscriptionId>/resourcegroups
> [!IMPORTANT] > Use the Bearer token for authentication. Read more about [using Bearer tokens](https://social.technet.microsoft.com/wiki/contents/articles/51140.azure-rest-management-api-the-quickest-way-to-get-your-bearer-token.aspx).
-### Request body
+**Request body**
+ |Name | Type | Description | | | | | |properties.plan | string | The table plan. Possible values are *Analytics* and *Basic*.|
-### Example
+**Example**
+ This example configures the `ContainerLog` table for Basic Logs.
-#### Sample request
+
+**Sample request**
```http PATCH https://management.azure.com/subscriptions/ContosoSID/resourcegroups/ContosoRG/providers/Microsoft.OperationalInsights/workspaces/ContosoWorkspace/tables/ContainerLog?api-version=2021-12-01-preview
Use this request body to change to Analytics Logs:
} ```
-#### Sample response
+**Sample response**
+ This is the response for a table changed to Basic Logs. Status code: 200
Status code: 200
} ```
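The PATCH request above can be assembled programmatically. A minimal sketch in Python (the subscription, resource group, workspace, and table names are the hypothetical placeholders from the sample; nothing is sent to Azure here):

```python
import json

# Placeholders from the sample request above; no request is actually sent.
subscription, rg, workspace, table = "ContosoSID", "ContosoRG", "ContosoWorkspace", "ContainerLog"

url = (
    f"https://management.azure.com/subscriptions/{subscription}"
    f"/resourcegroups/{rg}/providers/Microsoft.OperationalInsights"
    f"/workspaces/{workspace}/tables/{table}?api-version=2021-12-01-preview"
)

# The request body sets the table plan; valid values are "Analytics" and "Basic".
body = json.dumps({"properties": {"plan": "Basic"}})

print(url)
```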
+# [CLI](#tab/cli-1)
+
+To configure a table for Basic Logs or Analytics Logs, run the [az monitor log-analytics workspace table update](/cli/azure/monitor/log-analytics/workspace/table#az-monitor-log-analytics-workspace-table-update) command and set the `--plan` parameter to `Basic` or `Analytics`.
+
+For example:
+
+- To set Basic Logs:
+
+ ```azurecli
+ az monitor log-analytics workspace table update --subscription ContosoSID --resource-group ContosoRG --workspace-name ContosoWorkspace \
+ --name ContainerLog --plan Basic
+ ```
+
+- To set Analytics Logs:
+
+ ```azurecli
+ az monitor log-analytics workspace table update --subscription ContosoSID --resource-group ContosoRG --workspace-name ContosoWorkspace \
+ --name ContainerLog --plan Analytics
+ ```
+
+ ## Check table configuration # [Portal](#tab/portal-1)
Status code: 200
} ```
+# [CLI](#tab/cli-2)
+
+To check the configuration of a table, run the [az monitor log-analytics workspace table show](/cli/azure/monitor/log-analytics/workspace/table#az-monitor-log-analytics-workspace-table-show) command.
+
+For example:
+
+```azurecli
+az monitor log-analytics workspace table show --subscription ContosoSID --resource-group ContosoRG --workspace-name ContosoWorkspace \
+ --name Syslog --output table
+```
+ ## Retention and archiving of Basic Logs
azure-monitor Data Retention Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/data-retention-archive.md
To set the default workspace retention policy:
## Set retention and archive policy by table
-You can set retention policies for individual tables, except for workspaces in the legacy Free Trial pricing tier, using Azure Resource Manager APIs. You cannot currently configure data retention for individual tables in the Azure portal.
+You can set retention policies for individual tables, except for workspaces in the legacy Free Trial pricing tier, using Azure Resource Manager APIs. You can't currently configure data retention for individual tables in the Azure portal.
You can keep data in interactive retention between 4 and 730 days. You can set the archive period for a total retention time of up to 2,555 days (seven years).
-Each table is a sub-resource of the workspace it's in. For example, you can address the `SecurityEvent` table in [Azure Resource Manager](../../azure-resource-manager/management/overview.md) as:
+Each table is a subresource of the workspace it's in. For example, you can address the `SecurityEvent` table in [Azure Resource Manager](../../azure-resource-manager/management/overview.md) as:
``` /subscriptions/00000000-0000-0000-0000-00000000000/resourceGroups/MyResourceGroupName/providers/Microsoft.OperationalInsights/workspaces/MyWorkspaceName/Tables/SecurityEvent ```
-Note that the table name is case-sensitive.
+The table name is case-sensitive.
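The subresource addressing above can be sketched as a small helper (a hypothetical function; the placeholder IDs match the example path):

```python
# Building a table's Azure Resource Manager ID from its parts (a sketch; the
# subscription, resource group, and workspace names are the placeholders above).
def table_resource_id(sub: str, rg: str, ws: str, table: str) -> str:
    return (
        f"/subscriptions/{sub}/resourceGroups/{rg}"
        f"/providers/Microsoft.OperationalInsights/workspaces/{ws}/Tables/{table}"
    )

rid = table_resource_id("00000000-0000-0000-0000-00000000000",
                        "MyResourceGroupName", "MyWorkspaceName", "SecurityEvent")
# Table names are case-sensitive: "Tables/securityevent" would be a different path.
print(rid)
```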
-### Get retention and archive policy by table
-
-To get the retention policy of a particular table (in this example, `SecurityEvent`), Call the **Tables - Get** API:
-
-```JSON
-GET /subscriptions/00000000-0000-0000-0000-00000000000/resourceGroups/MyResourceGroupName/providers/Microsoft.OperationalInsights/workspaces/MyWorkspaceName/Tables/SecurityEvent?api-version=2021-12-01-preview
-```
-
-To get all table-level retention policies in your workspace, don't set a table name; for example:
-
-```JSON
-GET /subscriptions/00000000-0000-0000-0000-00000000000/resourceGroups/MyResourceGroupName/providers/Microsoft.OperationalInsights/workspaces/MyWorkspaceName/Tables?api-version=2021-12-01-preview
-```
-### Set the retention and archive policy for a table
+# [API](#tab/api-1)
To set the retention and archive duration for a table, call the **Tables - Update** API:
You can use either PUT or PATCH, with the following difference:
- The **PUT** API sets *retentionInDays* and *totalRetentionInDays* to the default value if you don't set non-null values. - The **PATCH** API doesn't change the *retentionInDays* or *totalRetentionInDays* values if you don't specify values.
+**Request body**
-#### Request body
The request body includes the values in the following table. |Name | Type | Description |
The request body includes the values in the following table.
|properties.retentionInDays | integer | The table's data retention in days. This value can be between 4 and 730; or 1095, 1460, 1826, 2191, or 2556. <br/>Setting this property to null will default to the workspace retention. For a Basic Logs table, the value is always 8. | |properties.totalRetentionInDays | integer | The table's total data retention including archive period. Set this property to null if you don't want to archive data. |
-#### Example
-The following table sets table retention to workspace default of 30 days, and total of 2 years. This means that the archive duration would be 23 months.
-###### Request
+**Example**
+
+This example sets the table's interactive retention to the workspace default of 30 days, and the total retention to two years. This means the archive duration is 23 months.
+
+**Request**
```http PATCH https://management.azure.com/subscriptions/00000000-0000-0000-0000-00000000000/resourcegroups/testRG/providers/Microsoft.OperationalInsights/workspaces/testWS/tables/CustomLog_CL?api-version=2021-12-01-preview ```
-#### Request body
+**Request body**
```http { "properties": {
PATCH https://management.azure.com/subscriptions/00000000-0000-0000-0000-0000000
} ```
-###### Response
+**Response**
Status code: 200
Status code: 200
... } ```+
+# [CLI](#tab/cli-1)
+
+To set the retention and archive duration for a table, run the [az monitor log-analytics workspace table update](/cli/azure/monitor/log-analytics/workspace/table#az-monitor-log-analytics-workspace-table-update) command and pass the `--retention-time` and `--total-retention-time` parameters.
+
+This example sets the table's interactive retention to 30 days, and the total retention to two years. This means the archive duration is 23 months:
+
+```azurecli
+az monitor log-analytics workspace table update --subscription ContosoSID --resource-group ContosoRG --workspace-name ContosoWorkspace \
+--name AzureMetrics --retention-time 30 --total-retention-time 730
+```
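The archive duration follows from the two parameters: total retention minus interactive retention. Using the values from the example above (a sketch of the arithmetic, not a CLI call):

```python
# Archive duration = total retention minus interactive retention
# (values taken from the CLI example above).
retention_days = 30          # --retention-time
total_retention_days = 730   # --total-retention-time (two years)

archive_days = total_retention_days - retention_days
archive_months = round(archive_days / 30.4)  # approximate calendar months

print(archive_days, archive_months)
```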
+
+To reapply the workspace's default interactive retention value to the table and reset its total retention to 0, run the [az monitor log-analytics workspace table update](/cli/azure/monitor/log-analytics/workspace/table#az-monitor-log-analytics-workspace-table-update) command with the `--retention-time` and `--total-retention-time` parameters set to `-1`.
+
+For example:
+
+```azurecli
+az monitor log-analytics workspace table update --subscription ContosoSID --resource-group ContosoRG --workspace-name ContosoWorkspace \
+ --name Syslog --retention-time -1 --total-retention-time -1
+```
++
+## Get retention and archive policy by table
+
+# [API](#tab/api-2)
+
+To get the retention policy of a particular table (in this example, `SecurityEvent`), call the **Tables - Get** API:
+
+```JSON
+GET /subscriptions/00000000-0000-0000-0000-00000000000/resourceGroups/MyResourceGroupName/providers/Microsoft.OperationalInsights/workspaces/MyWorkspaceName/Tables/SecurityEvent?api-version=2021-12-01-preview
+```
+
+To get all table-level retention policies in your workspace, don't set a table name; for example:
+
+```JSON
+GET /subscriptions/00000000-0000-0000-0000-00000000000/resourceGroups/MyResourceGroupName/providers/Microsoft.OperationalInsights/workspaces/MyWorkspaceName/Tables?api-version=2021-12-01-preview
+```
+
+# [CLI](#tab/cli-2)
+
+To get the retention policy of a particular table, run the [az monitor log-analytics workspace table show](/cli/azure/monitor/log-analytics/workspace/table#az-monitor-log-analytics-workspace-table-show) command.
+
+For example:
+
+```azurecli
+az monitor log-analytics workspace table show --subscription ContosoSID --resource-group ContosoRG --workspace-name ContosoWorkspace \
+ --name SecurityEvent
+```
+++ ## Purge retained data When you shorten an existing retention policy, it takes several days for Azure Monitor to remove data that you no longer want to keep.
-If you set the data retention policy to 30 days, you can purge older data immediately using the `immediatePurgeDataOn30Days` parameter in Azure Resource Manager. This can be useful when you need to remove personal data immediately. The immediate purge functionality is not available through the Azure portal.
+If you set the data retention policy to 30 days, you can purge older data immediately using the `immediatePurgeDataOn30Days` parameter in Azure Resource Manager. The purge functionality is useful when you need to remove personal data immediately. The immediate purge functionality isn't available through the Azure portal.
Note that workspaces with a 30-day retention policy might actually keep data for 31 days if you don't set the `immediatePurgeDataOn30Days` parameter.
-You can also purge data from a workspace using the [purge feature](personal-data-mgmt.md#how-to-export-and-delete-private-data), which removes personal data. You cannot purge data from archived logs.
+You can also purge data from a workspace using the [purge feature](personal-data-mgmt.md#how-to-export-and-delete-private-data), which removes personal data. You can't purge data from archived logs.
The Log Analytics [Purge API](/rest/api/loganalytics/workspacepurge/purge) doesn't affect retention billing. **To lower retention costs, decrease the retention period for the workspace or for specific tables.**
azure-monitor Logs Dedicated Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/logs-dedicated-clusters.md
Azure Monitor Logs Dedicated Clusters are a deployment option that enables advanced capabilities for Azure Monitor Logs customers. Customers can select which of their Log Analytics workspaces should be hosted on dedicated clusters.
-Dedicated clusters require customers to commit for at least 500 GB of data ingestion per day. You can migrate an existing workspace to a dedicated cluster with no data loss or service interruption.
+Dedicated clusters require customers to commit to at least 500 GB of data ingestion per day. You can link an existing workspace to a dedicated cluster and unlink it with no data loss or service interruption.
Capabilities that require dedicated clusters:
Capabilities that require dedicated clusters:
Dedicated clusters are managed with an Azure resource that represents Azure Monitor Log clusters. Operations are performed programmatically using the [CLI](/cli/azure/monitor/log-analytics/cluster), [PowerShell](/powershell/module/az.operationalinsights), or the [REST API](/rest/api/loganalytics/clusters).
-Once a cluster is created, workspaces can be linked to it and new ingested data to them is stored on the cluster. Workspaces can be unlinked from a cluster at any time and new data is stored in shared Log Analytics clusters. The link and unlink operation doesn't affect your queries and the access to data before and after the operation with subjection to retention in workspaces. The Cluster and workspaces must be in the same region to allow linking.
+Once a cluster is created, workspaces can be linked to it, and newly ingested data is stored on the cluster. Workspaces can be unlinked from a cluster at any time, and new data is then stored on shared Log Analytics clusters. The link and unlink operations don't affect your queries or your access to data from before and after the operation. The cluster and workspaces must be in the same region.
All operations on the cluster level require the `Microsoft.OperationalInsights/clusters/write` action permission on the cluster. This permission could be granted via the Owner or Contributor that contains the `*/write` action or via the Log Analytics Contributor role that contains the `Microsoft.OperationalInsights/*` action. For more information on Log Analytics permissions, see [Manage access to log data and workspaces in Azure Monitor](./manage-access.md).
The cluster Commitment Tier level is configured programmatically with Azure Reso
There are two modes of billing for usage on a cluster. These can be specified by the `billingType` parameter when configuring your cluster.
-1. **Cluster (default)**: Billing for ingested data is done at the cluster level. The ingested data quantities from each workspace associated to a cluster are aggregated to calculate the daily bill for the cluster.
+1. **Cluster (default)**--Billing for ingested data is done at the cluster level. The ingested data quantities from each workspace associated to a cluster are aggregated to calculate the daily bill for the cluster.
-2. **Workspaces**: The Commitment Tier costs for your Cluster are attributed proportionately to the workspaces in the cluster, by each workspace's data ingestion volume (after accounting for per-node allocations from [Microsoft Defender for Cloud](../../security-center/index.yml) for each workspace.) This full details of this pricing model are explained [here](./manage-cost-storage.md#log-analytics-dedicated-clusters).
+2. **Workspaces**--The Commitment Tier costs for your Cluster are attributed proportionately to the workspaces in the cluster, by each workspace's data ingestion volume (after accounting for per-node allocations from [Microsoft Defender for Cloud](../../security-center/index.yml) for each workspace). Details of the pricing model are explained [here](./manage-cost-storage.md#log-analytics-dedicated-clusters).
-If your workspace is using legacy Per Node pricing tier, when it is linked to a cluster it will be billed based on data ingested against the cluster's Commitment Tier, and no longer Per Node. Per-node data allocations from Microsoft Defender for Cloud will continue to be applied.
+If your linked workspace is using the legacy Per Node pricing tier, it will be billed based on data ingested against the cluster's Commitment Tier, and no longer Per Node. Per-node data allocations from Microsoft Defender for Cloud will continue to apply.
+
+When you link workspaces to a cluster, the pricing tier is changed to cluster, and ingestion is billed based on the cluster's Commitment Tier. Workspaces can be unlinked from a cluster at any time, and the pricing tier then changes to per-GB.
Complete details on billing for Log Analytics dedicated clusters are available [here](./manage-cost-storage.md#log-analytics-dedicated-clusters).
The user account that creates the clusters must have the standard Azure resource
After you create your cluster resource, you can edit additional properties such as *sku*, *keyVaultProperties*, or *billingType*. See more details below.
-You can have up to 2 active clusters per subscription per region. If the cluster is deleted, it is still reserved for 14 days. You can have up to 4 reserved clusters per subscription per region (active or recently deleted).
+You can have up to two active clusters per subscription per region. If the cluster is deleted, it is still reserved for 14 days. You can have up to four reserved clusters per subscription per region (active or recently deleted).
> [!NOTE] > Cluster creation triggers resource allocation and provisioning. This operation can take a few hours to complete.
Authorization: Bearer <token>
## Change cluster properties
-After you create your cluster resource and it is fully provisioned, you can edit additional properties using CLI, PowerShell or REST API. The additional properties that can be set after the cluster has been provisioned include the following:
+After you create your cluster resource and it's fully provisioned, you can edit additional properties using CLI, PowerShell or REST API. The additional properties that can be set after the cluster has been provisioned include the following:
- **keyVaultProperties** - Contains the key in Azure Key Vault with the following parameters: *KeyVaultUri*, *KeyName*, *KeyVersion*. See [Update cluster with Key identifier details](../logs/customer-managed-keys.md#update-cluster-with-key-identifier-details). - **Identity** - The identity used to authenticate to your Key Vault. This can be System-assigned or User-assigned. - **billingType** - Billing attribution for the cluster resource and its data. Includes one of the following values:
- - **Cluster (default)** - The costs for your cluster are attributed to the cluster resource.
- - **Workspaces** - The costs for your cluster are attributed proportionately to the workspaces in the Cluster, with the cluster resource being billed some of the usage if the total ingested data for the day is under the commitment tier. See [Log Analytics Dedicated Clusters](./manage-cost-storage.md#log-analytics-dedicated-clusters) to learn more about the cluster pricing model.
+ - **Cluster (default)**--The costs for your cluster are attributed to the cluster resource.
+ - **Workspaces**--The costs for your cluster are attributed proportionately to the workspaces in the Cluster, with the cluster resource being billed some of the usage if the total ingested data for the day is under the commitment tier. See [Log Analytics Dedicated Clusters](./manage-cost-storage.md#log-analytics-dedicated-clusters) to learn more about the cluster pricing model.
>[!IMPORTANT] >Cluster update should not include both identity and key identifier details in the same operation. If you need to update both, the update should be in two consecutive operations. > [!NOTE]
-> The *billingType* property is not supported in CLI.
+> The *billingType* property isn't supported in CLI.
## Get all clusters in resource group
Content-type: application/json
### Update billingType in cluster
+### PowerShell
+
+```powershell
+Select-AzSubscription "cluster-subscription-id"
+
+Update-AzOperationalInsightsCluster -ResourceGroupName "resource-group-name" -ClusterName "cluster-name" -BillingType "Workspaces"
+```
+ The *billingType* property determines the billing attribution for the cluster and its data: - *Cluster* (default) -- The billing is attributed to the Cluster resource - *Workspaces* -- The billing is attributed to linked workspaces proportionally. When data volume from all workspaces is below the Commitment Tier level, the remaining volume is attributed to the cluster
Content-type: application/json
### Unlink a workspace from cluster
-You can unlink a workspace from a cluster. After unlinking a workspace from the cluster, new data associated with this workspace is not sent to the dedicated cluster. Also, the workspace billing is no longer done via the cluster.
+You can unlink a workspace from a cluster; after that, new data for the workspace isn't ingested to the cluster. Also, the workspace pricing tier is set to per-GB.
Old data of the unlinked workspace might be left on the cluster. If this data is encrypted using customer-managed keys (CMK), the Key Vault secrets are kept. The system abstracts this change from Log Analytics users. Users can just query the workspace as usual. The system performs cross-cluster queries on the backend as needed with no indication to users. > [!WARNING]
Remove-AzOperationalInsightsLinkedService -ResourceGroupName "resource-group-nam
## Delete cluster
-It's recommended that you unlink all workspaces from a dedicated cluster before deleting it. You need to have *write* permissions on the cluster resource. When deleting a cluster, you are losing access to all data ingested to the cluster from linked workspaces and from workspaces that were linked previously. This operation is not reversible. If you delete your cluster when workspaces are linked, these get unlinked automatically and new data get ingested to Log Analytics storage instead.
+It's recommended that you unlink all workspaces from a dedicated cluster before deleting it. You need *write* permissions on the cluster resource. When you delete a cluster, you lose access to all data ingested to the cluster from linked workspaces and from workspaces that were linked previously. This operation isn't reversible. If you delete your cluster while workspaces are linked, they're unlinked automatically and new data is ingested to Log Analytics storage instead.
-A cluster resource that was deleted in the last 14 days is kept in soft-delete state and its name remained reserved. After the soft-delete period, the cluster is permanently deleted and it's name can be used.
+A cluster resource that was deleted in the last 14 days is kept in a soft-delete state, and its name remains reserved. After the soft-delete period, the cluster is permanently deleted and its name can be reused to create a cluster.
> [!WARNING] > - The recovery of soft-deleted clusters isn't supported and it can't be recovered once deleted.
-> - There is a limit of 4 clusters per subscription. Both active and soft-deleted clusters are counted as part of this. Customers should not create recurrent procedures that create and delete clusters. It has a significant impact on Log Analytics backend systems.
+> - There is a limit of 4 clusters per subscription. Both active and soft-deleted clusters are counted as part of this. Customers shouldn't create recurrent procedures that create and delete clusters. It has a significant impact on Log Analytics backend systems.
Use the following commands to delete a cluster:
Authorization: Bearer <token>
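The delete request itself is truncated in this excerpt. As a hedged sketch only, a cluster delete call against the resource manager REST API generally takes this shape (the `api-version` value is an assumption and the path placeholders are illustrative):

```http
DELETE https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.OperationalInsights/clusters/{clusterName}?api-version=2021-06-01
Authorization: Bearer <token>
```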
- [Double encryption](../../storage/common/storage-service-encryption.md#doubly-encrypt-data-with-infrastructure-encryption) is configured automatically for clusters created from October 2020 in supported regions. You can verify if your cluster is configured for double encryption by sending a GET request on the cluster and observing that the `isDoubleEncryptionEnabled` value is `true` for clusters with Double encryption enabled. - If you create a cluster and get an error "region-name doesn't support Double Encryption for clusters.", you can still create the cluster without Double encryption by adding `"properties": {"isDoubleEncryptionEnabled": false}` in the REST request body.
- - Double encryption setting can not be changed after the cluster has been created.
+ - The double encryption setting can't be changed after the cluster has been created.
+
+- Deleting a linked workspace is permitted while it's linked to a cluster. If you decide to [recover](./delete-workspace.md#recover-workspace) the workspace during the [soft-delete](./delete-workspace.md#soft-delete-behavior) period, it returns to its previous state and remains linked to the cluster.
## Troubleshooting -- If you get conflict error when creating a cluster, it may be that you have deleted your cluster in the last 14 days and it's in a soft-delete state. The cluster name remains reserved during the soft-delete period and you can't create a new cluster with that name. The name is released after the soft-delete period when the cluster is permanently deleted.
+- If you get a conflict error when creating a cluster, it may be that you've deleted your cluster in the last 14 days and it's in a soft-delete state. The cluster name remains reserved during the soft-delete period and you can't create a new cluster with that name. The name is released after the soft-delete period when the cluster is permanently deleted.
- If you update your cluster while the cluster is at provisioning or updating state, the update will fail.
Authorization: Bearer <token>
### Cluster Create -- 400 -- Cluster name is not valid. Cluster name can contain characters a-z, A-Z, 0-9 and length of 3-63.-- 400 -- The body of the request is null or in bad format.-- 400 -- SKU name is invalid. Set SKU name to capacityReservation.-- 400 -- Capacity was provided but SKU is not capacityReservation. Set SKU name to capacityReservation.-- 400 -- Missing Capacity in SKU. Set Capacity value to 500, 1000, 2000 or 5000 GB/day.-- 400 -- Capacity is locked for 30 days. Decreasing capacity is permitted 30 days after update.-- 400 -- No SKU was set. Set the SKU name to capacityReservation and Capacity value to 500, 1000, 2000 or 5000 GB/day.-- 400 -- Identity is null or empty. Set Identity with systemAssigned type.-- 400 -- KeyVaultProperties are set on creation. Update KeyVaultProperties after cluster creation.-- 400 -- Operation cannot be executed now. Async operation is in a state other than succeeded. Cluster must complete its operation before any update operation is performed.
+- 400--Cluster name is not valid. Cluster name can contain characters a-z, A-Z, 0-9, and must be 3-63 characters long.
+- 400--The body of the request is null or in bad format.
+- 400--SKU name is invalid. Set SKU name to capacityReservation.
+- 400--Capacity was provided but SKU is not capacityReservation. Set SKU name to capacityReservation.
+- 400--Missing Capacity in SKU. Set Capacity value to 500, 1000, 2000 or 5000 GB/day.
+- 400--Capacity is locked for 30 days. Decreasing capacity is permitted 30 days after update.
+- 400--No SKU was set. Set the SKU name to capacityReservation and Capacity value to 500, 1000, 2000 or 5000 GB/day.
+- 400--Identity is null or empty. Set Identity with systemAssigned type.
+- 400--KeyVaultProperties are set on creation. Update KeyVaultProperties after cluster creation.
+- 400--Operation cannot be executed now. Async operation is in a state other than succeeded. Cluster must complete its operation before any update operation is performed.
### Cluster Update -- 400 -- Cluster is in deleting state. Async operation is in progress . Cluster must complete its operation before any update operation is performed.-- 400 -- KeyVaultProperties is not empty but has a bad format. See [key identifier update](../logs/customer-managed-keys.md#update-cluster-with-key-identifier-details).-- 400 -- Failed to validate key in Key Vault. Could be due to lack of permissions or when key doesn't exist. Verify that you [set key and access policy](../logs/customer-managed-keys.md#grant-key-vault-permissions) in Key Vault.-- 400 -- Key is not recoverable. Key Vault must be set to Soft-delete and Purge-protection. See [Key Vault documentation](../../key-vault/general/soft-delete-overview.md)-- 400 -- Operation cannot be executed now. Wait for the Async operation to complete and try again.-- 400 -- Cluster is in deleting state. Wait for the Async operation to complete and try again.
+- 400--Cluster is in deleting state. Async operation is in progress. Cluster must complete its operation before any update operation is performed.
+- 400--KeyVaultProperties is not empty but has a bad format. See [key identifier update](../logs/customer-managed-keys.md#update-cluster-with-key-identifier-details).
+- 400--Failed to validate key in Key Vault. Could be due to lack of permissions or when key doesn't exist. Verify that you [set key and access policy](../logs/customer-managed-keys.md#grant-key-vault-permissions) in Key Vault.
+- 400--Key is not recoverable. Key Vault must be set to Soft-delete and Purge-protection. See [Key Vault documentation](../../key-vault/general/soft-delete-overview.md)
+- 400--Operation cannot be executed now. Wait for the Async operation to complete and try again.
+- 400--Cluster is in deleting state. Wait for the Async operation to complete and try again.
### Cluster Get
+ - 404--Cluster not found, the cluster may have been deleted. If you try to create a cluster with that name and get a conflict, the cluster is in soft-delete state for 14 days. You can contact support to recover it, or use another name to create a new cluster.
### Cluster Delete
+ - 409--Can't delete a cluster while in provisioning state. Wait for the Async operation to complete and try again.
### Workspace link -- 404 -- Workspace not found. The workspace you specified doesn't exist or was deleted.-- 409 -- Workspace link or unlink operation in process.-- 400 -- Cluster not found, the cluster you specified doesn't exist or was deleted. If you try to create a cluster with that name and get conflict, the cluster is in soft-delete for 14 days. You can contact support to recover it.
+- 404--Workspace not found. The workspace you specified doesn't exist or was deleted.
+- 409--Workspace link or unlink operation in process.
+- 400--Cluster not found, the cluster you specified doesn't exist or was deleted. If you try to create a cluster with that name and get a conflict, the cluster is in soft-delete state for 14 days. You can contact support to recover it.
### Workspace unlink-- 404 -- Workspace not found. The workspace you specified doesn't exist or was deleted.-- 409 -- Workspace link or unlink operation in process.
+- 404--Workspace not found. The workspace you specified doesn't exist or was deleted.
+- 409--Workspace link or unlink operation in process.
## Next steps
azure-monitor Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/restore.md
The restore operation creates the restore table and allocates additional compute
The destination table provides a view of the underlying source data, but does not affect it in any way. The table has no retention setting, and you must explicitly [dismiss the restored data](#dismiss-restored-data) when you no longer need it.
-## Restore data using API
+## Restore data
+
+# [API](#tab/api-1)
To restore data from a table, call the **Tables - Create or Update** API. The name of the destination table must end with *_RST*. ```http PUT https://management.azure.com/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{workspaceName}/tables/{user defined name}_RST?api-version=2021-12-01-preview ```
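Because the destination table name must end with *_RST*, it can help to validate the name before issuing the PUT. A minimal illustrative sketch in Python (the helper name is hypothetical, not part of any Azure SDK):

```python
def restore_table_url(subscription_id: str, resource_group: str,
                      workspace: str, table_name: str) -> str:
    """Build the Tables - Create or Update URL, enforcing the _RST suffix."""
    if not table_name.endswith("_RST"):
        raise ValueError("Restore destination table name must end with '_RST'")
    return (
        "https://management.azure.com"
        f"/subscriptions/{subscription_id}"
        f"/resourcegroups/{resource_group}"
        "/providers/Microsoft.OperationalInsights"
        f"/workspaces/{workspace}"
        f"/tables/{table_name}?api-version=2021-12-01-preview"
    )
```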
-### Request body
+
+**Request body**
+ The body of the request must include the following values: |Name | Type | Description |
The body of the request must include the following values:
|properties.restoredLogs.startRestoreTime | string | Start of the time range to restore. | |properties.restoredLogs.endRestoreTime | string | End of the time range to restore. |
-### Restore table status
+**Restore table status**
+ The **provisioningState** property indicates the current state of the restore table operation. The API returns this property when you start the restore, and you can retrieve this property later using a GET operation on the table. The **provisioningState** property has one of the following values: | Value | Description
The **provisioningState** property indicates the current state of the restore ta
| Succeeded | Restore operation completed. | | Deleting | Deleting the restored table. |
-#### Sample request
+**Sample request**
+ This sample restores data from the month of January 2020 from the *Usage* table to a table called *Usage_RST*. **Request**
PUT https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000
} } ```
+# [CLI](#tab/cli-1)
+To restore data from a table, run the [az monitor log-analytics workspace table restore create](/cli/azure/monitor/log-analytics/workspace/table/restore#az-monitor-log-analytics-workspace-table-restore-create) command.
+
+For example:
+
+```azurecli
+az monitor log-analytics workspace table restore create --subscription ContosoSID --resource-group ContosoRG --workspace-name ContosoWorkspace \
+ --name Heartbeat_RST --restore-source-table Heartbeat --start-restore-time "2022-01-01T00:00:00.000Z" --end-restore-time "2022-01-08T00:00:00.000Z" --no-wait
+```
++ ## Dismiss restored data To save costs, dismiss restored data when you no longer need it by deleting the restored table.
+Deleting the restored table does not delete the data in the source table.
+
+> [!NOTE]
+> Restored data is available as long as the underlying source data is available. When you delete the source table from the workspace or when the source table's retention period ends, the data is dismissed from the restored table. However, the empty table will remain if you do not delete it explicitly.
+
+# [API](#tab/api-2)
To delete a restore table, call the **Tables - Delete** API: ```http DELETE https://management.azure.com/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{workspaceName}/tables/{user defined name}_RST?api-version=2021-12-01-preview ```
-Deleting the restored table does not delete the data in the source table.
+# [CLI](#tab/cli-2)
-> [!NOTE]
-> Restored data is available as long as the underlying source data is available. When you delete the source table from the workspace or when the source table's retention period ends, the data is dismissed from the restored table. However, the empty table will remain if you do not delete it explicitly.
+To delete a restore table, run the [az monitor log-analytics workspace table delete](/cli/azure/monitor/log-analytics/workspace/table#az-monitor-log-analytics-workspace-table-delete) command.
+
+For example:
+```azurecli
+az monitor log-analytics workspace table delete --subscription ContosoSID --resource-group ContosoRG --workspace-name ContosoWorkspace \
+ --name Heartbeat_RST
+```
++ ## Limitations Restore is subject to the following limitations.
azure-monitor Search Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/search-jobs.md
Search jobs are asynchronous queries that fetch records into a new search table
## When to use search jobs
-Use a search job when the log query timeout of 10 minutes is not enough time to search through large volumes of data or when you are running a slow query.
+Use a search job when the log query timeout of 10 minutes isn't enough time to search through large volumes of data or when you're running a slow query.
Search jobs also let you retrieve records from [Archived Logs](data-retention-archive.md) and [Basic Logs](basic-logs-configure.md) tables into a new log table you can use for queries. In this way, running a search job can be an alternative to:
Search jobs also let you retrieve records from [Archived Logs](data-retention-ar
A search job sends its results to a new table in the same workspace as the source data. The results table is available as soon as the search job begins, but it may take time for results to begin to appear.
-The search job results table is a [Log Analytics](log-analytics-workspace-overview.md#log-data-plans-preview) table that is available for log queries or any other features of Azure Monitor that use tables in a workspace. The table uses the [retention value](data-retention-archive.md) set for the workspace, but you can modify this retention once the table is created.
+The search job results table is a [Log Analytics](log-analytics-workspace-overview.md#log-data-plans-preview) table that is available for log queries and other Azure Monitor features that use tables in a workspace. The table uses the [retention value](data-retention-archive.md) set for the workspace, but you can modify this value after the table is created.
The search results table schema is based on the source table schema and the specified query. The following additional columns help you track the source records:
The search results table schema is based on the source table schema and the spec
Queries on the results table appear in [log query auditing](query-audit.md) but not the initial search job. ## Create a search job+
+# [API](#tab/api-1)
To run a search job, call the **Tables - Create or Update** API. The call includes the name of the results table to be created. The name of the results table must end with *_SRCH*. ```http PUT https://management.azure.com/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{workspaceName}/tables/<TableName>_SRCH?api-version=2021-12-01-preview ```
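A hedged sketch of assembling the JSON request body for the search-job PUT, following the property names in the table below (the helper itself is hypothetical and for illustration only):

```python
import json

def search_job_body(query: str, start: str, end: str, limit: int = 1000) -> str:
    """Assemble the JSON body for the search-job Tables - Create or Update request."""
    return json.dumps({
        "properties": {
            "searchResults": {
                "query": query,
                "limit": limit,
                "startSearchTime": start,
                "endSearchTime": end,
            }
        }
    })
```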
-### Request body
+**Request body**
+ Include the following values in the body of the request: |Name | Type | Description |
Include the following values in the body of the request:
|properties.searchResults.endSearchTime | string | End of the time range to search. |
-### Sample request
+**Sample request**
+ This example creates a table called *Syslog_suspected_SRCH* with the results of a query that searches for particular records in the *Syslog* table. **Request**+ ```http PUT https://management.azure.com/subscriptions/00000000-0000-0000-0000-00000000000/resourcegroups/testRG/providers/Microsoft.OperationalInsights/workspaces/testWS/tables/Syslog_suspected_SRCH?api-version=2021-12-01-preview ``` **Request body**+ ```json { "properties": {
PUT https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000
} ```
-**Response**<br>
+**Response**
+ Status code: 202 accepted.
+# [CLI](#tab/cli-1)
+
+To run a search job, run the [az monitor log-analytics workspace table search-job create](/cli/azure/monitor/log-analytics/workspace/table/search-job#az-monitor-log-analytics-workspace-table-search-job-create) command. The name of the results table, which you set using the `--name` parameter, must end with *_SRCH*.
+
+For example:
+
+```azurecli
+az monitor log-analytics workspace table search-job create --subscription ContosoSID --resource-group ContosoRG --workspace-name ContosoWorkspace \
+ --name HeartbeatByIp_SRCH --search-query 'Heartbeat | where ComputerIP has "00.000.00.000"' --limit 1500 \
+ --start-search-time "2022-01-01T00:00:00.000Z" --end-search-time "2022-01-08T00:00:00.000Z" --no-wait
+```
++ ## Get search job status and details
+# [API](#tab/api-2)
+ Call the **Tables - Get** API to get the status and details of a search job: ```http GET https://management.azure.com/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{workspaceName}/tables/<TableName>_SRCH?api-version=2021-12-01-preview ```
-### Table status
+**Table status**
+ Each search job table has a property called *provisioningState*, which can have one of the following values: | Status | Description |
Each search job table has a property called *provisioningState*, which can have
| Deleting | Deleting the search job table. |
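The *provisioningState* values above lend themselves to a simple polling loop. This sketch stubs out the actual GET call with a callable; everything here is illustrative, not an SDK API:

```python
import time

def wait_for_table(get_state, poll_seconds=0, max_polls=100):
    """Poll a callable that returns provisioningState until it reaches 'Succeeded'."""
    for _ in range(max_polls):
        state = get_state()  # in practice: GET the table and read provisioningState
        if state == "Succeeded":
            return state
        if state == "Deleting":
            raise RuntimeError("Search job table is being deleted")
        time.sleep(poll_seconds)
    raise TimeoutError("Search job did not complete in time")
```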
-#### Sample request
+**Sample request**
+ This example retrieves the table status for the search job in the previous example. **Request**+ ```http GET https://management.azure.com/subscriptions/00000000-0000-0000-0000-00000000000/resourcegroups/testRG/providers/Microsoft.OperationalInsights/workspaces/testWS/tables/Syslog_SRCH?api-version=2021-12-01-preview ```
-**Response**<br>
+**Response**
+ ```json { "properties": {
GET https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000
} ```
+# [CLI](#tab/cli-2)
+
+To check the status and details of a search job table, run the [az monitor log-analytics workspace table show](/cli/azure/monitor/log-analytics/workspace/table#az-monitor-log-analytics-workspace-table-show) command.
+
+For example:
+
+```azurecli
+az monitor log-analytics workspace table show --subscription ContosoSID --resource-group ContosoRG --workspace-name ContosoWorkspace \
+ --name HeartbeatByIp_SRCH --output table \
+```
++ ## Delete search job table
-We recommend deleting the search job table when you're done querying the table. This reduces workspace clutter and additional charges for data retention.
+We recommend deleting the search job table when you're done querying the table. This reduces workspace clutter and extra charges for data retention.
+
+# [API](#tab/api-3)
To delete a table, call the **Tables - Delete** API:
To delete a table, call the **Tables - Delete** API:
DELETE https://management.azure.com/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{workspaceName}/tables/<TableName>_SRCH?api-version=2021-12-01-preview ```
+# [CLI](#tab/cli-3)
+
+To delete a search table, run the [az monitor log-analytics workspace table delete](/cli/azure/monitor/log-analytics/workspace/table#az-monitor-log-analytics-workspace-table-delete) command.
+
+For example:
+
+```azurecli
+az monitor log-analytics workspace table delete --subscription ContosoSID --resource-group ContosoRG --workspace-name ContosoWorkspace \
+ --name HeartbeatByIp_SRCH
+```
+++ ## Limitations Search jobs are subject to the following limitations:
azure-monitor Vmext Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/vmext-troubleshoot.md
If the *Log Analytics agent for Linux* VM extension is not installing or reporti
2. For other unhealthy statuses, review the Log Analytics agent for Linux VM extension logs files in `/var/log/azure/Microsoft.EnterpriseCloud.Monitoring.OmsAgentForLinux/*/extension.log` and `/var/log/azure/Microsoft.EnterpriseCloud.Monitoring.OmsAgentForLinux/*/CommandExecution.log` 3. If the extension status is healthy, but data is not being uploaded review the Log Analytics agent for Linux log files in `/var/opt/microsoft/omsagent/log/omsagent.log`
-For more information, see [troubleshooting Linux extensions](../../virtual-machines/extensions/oms-linux.md).
- ## Next steps
-For additional troubleshooting guidance related to the Log Analytics agent for Linux hosted on computers outside of Azure, see [Troubleshoot Azure Log Analytics Linux Agent](../agents/agent-linux-troubleshoot.md).
+For additional troubleshooting guidance related to the Log Analytics agent for Linux, see [Troubleshoot Azure Log Analytics Linux Agent](../agents/agent-linux-troubleshoot.md).
azure-portal Azure Portal Safelist Urls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/azure-portal-safelist-urls.md
Title: Allow the Azure portal URLs on your firewall or proxy server description: To optimize connectivity between your network and the Azure portal and its services, we recommend you add these URLs to your allowlist. Previously updated : 12/13/2021 Last updated : 03/09/2022
The URL endpoints to allow for the Azure portal are specific to the Azure cloud
#### [China Government Cloud](#tab/china-government-cloud) ```
+aadcdn.msauth.cn
+aadcdn.msftauth.cn
+login.live.com
*.azure.cn *.microsoft.cn *.microsoftonline.cn *.chinacloudapi.cn *.trafficmanager.cn
-*.chinacloudsites.cn
*.windowsazure.cn ```
azure-resource-manager Cli Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/cli-samples.md
# Azure CLI Samples for Azure Managed Applications
-The following table includes links to bash scripts for Azure Managed Applications that use the Azure CLI.
+The following table includes links to a sample CLI script for Azure Managed Applications.
| Create managed application | Description | | -- | -- |
-| [Create managed application definition](scripts/managed-application-cli-sample-create-definition.md) | Creates a managed application definition in the service catalog. |
-| [Deploy managed application](scripts/managed-application-cli-sample-create-application.md) | Deploys a managed application from the service catalog. |
-|**Update managed resource group**| **Description** |
-| [Get resources in managed resource group and resize VMs](scripts/managed-application-cli-sample-get-managed-group-resize-vm.md) | Gets resources from the managed resource group, and resizes the VMs. |
+| [Define and create a managed application](scripts/managed-application-define-create-cli-sample.md) | Creates a managed application definition in the service catalog and then deploys the managed application from the service catalog. |
azure-resource-manager Managed Application Cli Sample Create Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/scripts/managed-application-cli-sample-create-application.md
- Title: Azure CLI script sample - Deploy a managed application
-description: Provides Azure CLI sample script that deploys an Azure Managed Application definition to the subscription.
--- Previously updated : 10/25/2017----
-# Deploy a managed application for service catalog with Azure CLI
-
-This script deploys a managed application definition from the service catalog.
----
-## Sample script
-
-[!code-azurecli[main](../../../../cli_scripts/managed-applications/create-application/create-application.sh "Create application")]
--
-## Script explanation
-
-This script uses the following command to deploy the managed application. Each command in the table links to command-specific documentation.
-
-| Command | Notes |
-|||
-| [az managedapp create](/cli/azure/managedapp#az_managedapp_create) | Create a managed application. Provide the definition ID and parameters for the template. |
--
-## Next steps
-
-* For an introduction to managed applications, see [Azure Managed Application overview](../overview.md).
-* For more information on the Azure CLI, see [Azure CLI documentation](/cli/azure).
azure-resource-manager Managed Application Cli Sample Create Definition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/scripts/managed-application-cli-sample-create-definition.md
- Title: Create Managed Application definition - Azure CLI
-description: Provides an Azure CLI script sample that creates a managed application definition in the subscription.
--- Previously updated : 10/25/2017----
-# Create a managed application definition with Azure CLI
-
-This script publishes a managed application definition to a service catalog.
----
-## Sample script
-
-[!code-azurecli[main](../../../../cli_scripts/managed-applications/create-definition/create-definition.sh "Create definition")]
--
-## Script explanation
-
-This script uses the following command to create the managed application definition. Each command in the table links to command-specific documentation.
-
-| Command | Notes |
-|||
-| [az managedapp definition create](/cli/azure/managedapp/definition#az_managedapp_definition_create) | Create a managed application definition. Provide the package that contains the required files. |
--
-## Next steps
-
-* For an introduction to managed applications, see [Azure Managed Application overview](../overview.md).
-* For more information on the Azure CLI, see [Azure CLI documentation](/cli/azure).
azure-resource-manager Managed Application Cli Sample Get Managed Group Resize Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/scripts/managed-application-cli-sample-get-managed-group-resize-vm.md
- Title: Get managed resource group & resize VMs - Azure CLI
-description: Provides Azure CLI sample script that gets a managed resource group in an Azure Managed Application. The script resizes VMs.
--- Previously updated : 10/25/2017----
-# Get resources in a managed resource group and resize VMs with Azure CLI
-
-This script retrieves resources from a managed resource group, and resizes the VMs in that resource group.
----
-## Sample script
-
-[!code-azurecli[main](../../../../cli_scripts/managed-applications/get-application/get-application.sh "Get application")]
--
-## Script explanation
-
-This script uses the following commands to deploy the managed application. Each command in the table links to command-specific documentation.
-
-| Command | Notes |
-|||
-| [az managedapp list](/cli/azure/managedapp#az_managedapp_list) | List managed applications. Provide query values to focus the results. |
-| [az resource list](/cli/azure/resource#az_resource_list) | List resources. Provide a resource group and query values to focus the result. |
-| [az vm resize](/cli/azure/vm#az_vm_resize) | Update a virtual machine's size. |
--
-## Next steps
-
-* For an introduction to managed applications, see [Azure Managed Application overview](../overview.md).
-* For more information on the Azure CLI, see [Azure CLI documentation](/cli/azure).
azure-resource-manager Managed Application Define Create Cli Sample https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/scripts/managed-application-define-create-cli-sample.md
+
+ Title: Create managed application definition - Azure CLI
+description: Provides an Azure CLI script sample that publishes a managed application definition to a service catalog and then deploys a managed application from the service catalog.
+
+ms.devlang: azurecli
+ Last updated : 03/07/2022++++
+# Create a managed application definition in the service catalog and deploy a managed application from the service catalog with Azure CLI
+
+This script publishes a managed application definition to a service catalog and then deploys a managed application from the service catalog.
+++
+## Sample script
++
+### Run the script
++
+## Clean up resources
++
+```azurecli
+az group delete --name $appResourceGroup -y
+az group delete --name $appDefinitionResourceGroup -y
+```
+
+## Sample reference
+
+This script uses the following command to create the managed application definition. Each command in the table links to command-specific documentation.
+
+| Command | Notes |
+|||
+| [az managedapp definition create](/cli/azure/managedapp/definition#az_managedapp_definition_create) | Create a managed application definition. Provide the package that contains the required files. |
+
+## Next steps
+
+* For an introduction to managed applications, see [Azure Managed Application overview](../overview.md).
+* For more information on the Azure CLI, see [Azure CLI documentation](/cli/azure).
azure-sql Automated Backups Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/automated-backups-overview.md
PUT https://management.azure.com/subscriptions/00000000-1111-2222-3333-444444444
```json { "properties":{
- "retentionDays":28
+ "retentionDays":28,
"diffBackupIntervalInHours":24 } }
PUT https://management.azure.com/subscriptions/00000000-1111-2222-3333-444444444
"name": "default", "type": "Microsoft.Sql/resourceGroups/servers/databases/backupShortTermRetentionPolicies", "properties": {
- "retentionDays": 28
+ "retentionDays": 28,
"diffBackupIntervalInHours":24 } }
azure-sql Database Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/database-export.md
When you need to export a database for archiving or for moving to another platfo
> [!NOTE] > BACPACs are not intended to be used for backup and restore operations. Azure automatically creates backups for every user database. For details, see [business continuity overview](business-continuity-high-availability-disaster-recover-hadr-overview.md) and [SQL Database backups](automated-backups-overview.md).
+> [!NOTE]
+> [Import and Export using Private Link](database-import-export-private-link.md) is in preview.
+ ## The Azure portal Exporting a BACPAC of a database from [Azure SQL Managed Instance](../managed-instance/sql-managed-instance-paas-overview.md) or from a database in the [Hyperscale service tier](service-tier-hyperscale.md) using the Azure portal is not currently supported. See [Considerations](#considerations).
azure-sql Database Import Export Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/database-import-export-private-link.md
$importRequest = New-AzSqlDatabaseExport -ResourceGroupName "<resourceGroupName>
### Create Import-Export Private link using REST API Existing APIs to perform Import and Export jobs have been enhanced to support Private Link. Refer to [Import Database API](/rest/api/sql/2021-08-01-preview/servers/import-database)
+## Limitations
+
+- Import using Private Link doesn't support specifying a backup storage redundancy when creating a new database; the database is created with the default geo-redundant backup storage redundancy. As a workaround, first create an empty database with the desired backup storage redundancy using the Azure portal or PowerShell, and then import the BACPAC into this empty database.
+- Import and Export operations aren't supported in the Azure SQL Database Hyperscale tier yet.
+- Import using the REST API with private link can only be done to an existing database because the API uses database extensions. To work around this, create an empty database with the desired name, and then call the Import REST API with Private Link.
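The backup storage redundancy workaround above can be sketched with the Azure CLI. This is an illustrative sequence, not a verified procedure: the server, database, and storage values are placeholders, and the exact `az sql db import` parameters should be checked against the CLI reference:

```azurecli
# Create an empty database with the desired (here: locally redundant) backup storage redundancy.
az sql db create --resource-group ContosoRG --server contososerver --name targetdb \
    --backup-storage-redundancy Local

# Then import the BACPAC into the existing empty database.
az sql db import --resource-group ContosoRG --server contososerver --name targetdb \
    --storage-key-type StorageAccessKey --storage-key "<storage-key>" \
    --storage-uri "https://contosostorage.blob.core.windows.net/bacpacs/target.bacpac" \
    --admin-user "<admin-user>" --admin-password "<admin-password>"
```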
++ ## Next steps - [Import or Export Azure SQL Database without allowing Azure services to access the server](database-import-export-azure-services-off.md) - [Import a database from a BACPAC file](database-import.md)
azure-sql Database Import https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/database-import.md
You can import a SQL Server database into Azure SQL Database or SQL Managed Inst
> [!IMPORTANT] > After importing your database, you can choose to operate the database at its current compatibility level (level 100 for the AdventureWorks2008R2 database) or at a higher level. For more information on the implications and options for operating a database at a specific compatibility level, see [ALTER DATABASE Compatibility Level](/sql/t-sql/statements/alter-database-transact-sql-compatibility-level). See also [ALTER DATABASE SCOPED CONFIGURATION](/sql/t-sql/statements/alter-database-scoped-configuration-transact-sql) for information about additional database-level settings related to compatibility levels.
+> [!NOTE]
+> [Import and Export using Private Link](database-import-export-private-link.md) is in preview.
+ ## Using Azure portal Watch this video to see how to import from a BACPAC file in the Azure portal or continue reading below:
Stop-AzSqlDatabaseActivity -ResourceGroupName $ResourceGroupName -ServerName $Se
- Import does not support specifying a backup storage redundancy while creating a new database; the database is created with the default geo-redundant backup storage redundancy. As a workaround, first create an empty database with the desired backup storage redundancy using the Azure portal or PowerShell, and then import the BACPAC into this empty database. - Storage behind a firewall is currently not supported.
-> [!NOTE]
-> Azure SQL Database Configurable Backup Storage Redundancy is currently available in public preview in Southeast Asia Azure region only.
## Import using wizards
azure-sql Failover Group Add Elastic Pool Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/failover-group-add-elastic-pool-tutorial.md
Last updated 01/26/2022
[!INCLUDE[appliesto-sqldb](../includes/appliesto-sqldb.md)] > [!div class="op_single_selector"]
-> * [Azure SQL Database (single database)](failover-group-add-single-database-tutorial.md)
-> * [Azure SQL Database (elastic pool)](failover-group-add-elastic-pool-tutorial.md)
-> * [Azure SQL Managed Instance](../managed-instance/failover-group-add-instance-tutorial.md)
-
+>
+> - [Azure SQL Database (single database)](failover-group-add-single-database-tutorial.md)
+> - [Azure SQL Database (elastic pool)](failover-group-add-elastic-pool-tutorial.md)
+> - [Azure SQL Managed Instance](../managed-instance/failover-group-add-instance-tutorial.md)
-Configure an [auto-failover group](auto-failover-group-sql-db.md) for an Azure SQL Database elastic pool and test failover using the Azure portal.
+Configure an [auto-failover group](auto-failover-group-sql-db.md) for an Azure SQL Database elastic pool and test failover using the Azure portal.
In this tutorial, you'll learn how to:
To complete the tutorial, make sure you have the following items:
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] -- ## 1 - Create a single database
In this step, you create your elastic pool and add your database to the elastic
Set these additional parameter values for use in creating the elastic pool. ### Create elastic pool on primary server Use the [az sql elastic-pool create](/cli/azure/sql/elastic-pool#az-sql-elastic-pool-create) command to create an elastic pool. ### Add database to elastic pool Use the [az sql db update](/cli/azure/sql/db#az_sql_db_update) command to add a database to an elastic pool. This portion of the tutorial uses the following Azure CLI cmdlets:
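The pool-creation and database-move steps described above can be sketched with the Azure CLI. The `$resourceGroup`, `$server`, `$pool`, and `$database` variables and the Standard/50-DTU sizing are illustrative assumptions, not values from the tutorial:

```azurecli
# Create an elastic pool on the primary server.
az sql elastic-pool create \
    --resource-group $resourceGroup \
    --server $server \
    --name $pool \
    --edition Standard \
    --capacity 50

# Move the database into the elastic pool.
az sql db update \
    --resource-group $resourceGroup \
    --server $server \
    --name $database \
    --elastic-pool $pool
```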
Set these additional parameter values for use in creating the failover group.
Change the failover location as appropriate for your environment. ### Create secondary server
Use the [az sql server create](/cli/azure/sql/server#az_sql_server_create) comma
> [!NOTE] > The server login and firewall settings must match that of your primary server. ### Create elastic pool on secondary server Use the [az sql elastic-pool create](/cli/azure/sql/elastic-pool#az-sql-elastic-pool-create) command to create an elastic pool on the secondary server. ### Create failover group Use the [az sql failover-group create](/cli/azure/sql/failover-group#az_sql_failover_group_create) command to create a failover group. ### Add database to the failover group Use the [az sql failover-group update](/cli/azure/sql/failover-group#az_sql_failover_group_update) command to add a database to the failover group. ### Azure CLI failover group creation reference
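A minimal Azure CLI sketch of the secondary-server and failover-group steps above; the variable names and sizing are assumptions, and the login and firewall settings must match the primary server:

```azurecli
# Create the secondary server in the failover location.
az sql server create \
    --resource-group $resourceGroup \
    --name $secondaryServer \
    --location $failoverLocation \
    --admin-user $login \
    --admin-password $password

# Create a matching elastic pool on the secondary server.
az sql elastic-pool create \
    --resource-group $resourceGroup \
    --server $secondaryServer \
    --name $pool \
    --edition Standard \
    --capacity 50

# Create the failover group between the two servers.
az sql failover-group create \
    --resource-group $resourceGroup \
    --server $server \
    --partner-server $secondaryServer \
    --name $failoverGroup \
    --failover-policy Automatic \
    --grace-period 2

# Add the pooled database to the failover group.
az sql failover-group update \
    --resource-group $resourceGroup \
    --server $server \
    --name $failoverGroup \
    --add-db $database
```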
Test failover using the Azure CLI.
Use the [az sql failover-group show](/cli/azure/sql/failover-group#az_sql_failover_group_show) command to confirm the roles of each server in the failover group. ### Fail over to the secondary server Use the [az sql failover-group set-primary](/cli/azure/sql/failover-group#az_sql_failover_group_set_primary) command to fail over to the secondary server. Use the [az sql failover-group show](/cli/azure/sql/failover-group#az_sql_failover_group_show) command to verify a successful failover. ### Revert failover group back to the primary server Use the [az sql failover-group set-primary](/cli/azure/sql/failover-group#az_sql_failover_group_set_primary) command to fail back to the primary server. ### Azure CLI failover group management reference
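The failover test above can be sketched as follows; the variable names are assumptions carried over from the earlier steps, and `set-primary` is run against the server that should become the new primary:

```azurecli
# Confirm the current role of the server in the failover group.
az sql failover-group show \
    --resource-group $resourceGroup \
    --server $server \
    --name $failoverGroup \
    --query replicationRole

# Fail over to the secondary server.
az sql failover-group set-primary \
    --resource-group $resourceGroup \
    --server $secondaryServer \
    --name $failoverGroup

# Revert the failover group back to the original primary.
az sql failover-group set-primary \
    --resource-group $resourceGroup \
    --server $server \
    --name $failoverGroup
```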
This script uses the following commands. Each command in the table links to comm
# [Azure CLI](#tab/azure-cli) # [Azure portal](#tab/azure-portal)
azure-sql Failover Group Add Single Database Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/failover-group-add-single-database-tutorial.md
To complete the tutorial, make sure you have the following items:
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] -- ## 1 - Create a database
Set these additional parameter values for use in creating the failover group, in
Change the failover location as appropriate for your environment. ### Create the secondary server
Use the [az sql server create](/cli/azure/sql/server#az_sql_server_create) comma
> [!NOTE] > The server login and firewall settings must match that of your primary server. ### Create the failover group Use the [az sql failover-group create](/cli/azure/sql/failover-group#az_sql_failover_group_create) command to create a failover group. ### Azure CLI failover group creation reference
Test failover using the Azure CLI.
Use the [az sql failover-group show](/cli/azure/sql/failover-group#az_sql_failover_group_show) command to confirm the roles of each server. ### Fail over to the secondary server Use the [az sql failover-group set-primary](/cli/azure/sql/failover-group#az_sql_failover_group_set_primary) to fail over to the secondary server. Use the [az sql failover-group show](/cli/azure/sql/failover-group#az_sql_failover_group_show) command to verify a successful failover. ### Revert failover group back to the primary server Use the [az sql failover-group set-primary](/cli/azure/sql/failover-group#az_sql_failover_group_set_primary) command to fail back to the primary server. ### Azure CLI failover group management reference
This script uses the following commands. Each command in the table links to comm
# [Azure CLI](#tab/azure-cli) This script uses the following commands. Each command in the table links to command specific documentation.
azure-sql Metrics Diagnostic Telemetry Logging Streaming Export Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/metrics-diagnostic-telemetry-logging-streaming-export-configure.md
Previously updated : 11/17/2021 Last updated : 3/10/2022 # Configure streaming export of Azure SQL Database and SQL Managed Instance diagnostic telemetry
Learn more about [database wait statistics](/sql/relational-databases/system-dyn
|ElasticPoolName_s|Name of the elastic pool for the database, if any | |DatabaseName_s|Name of the database | |ResourceId|Resource URI |
-|error_state_d|Error state code |
+|error_state_d|A numeric state value associated with the query timeout (an [attention](/sql/relational-databases/errors-events/mssqlserver-3617-database-engine-error) event) |
|query_hash_s|Query hash, if available | |query_plan_hash_s|Query plan hash, if available |
azure-sql Add Database To Failover Group Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/scripts/add-database-to-failover-group-cli.md
Last updated 01/26/2022
-# Use Azure CLI to add a database to a failover group
+# Add a database to a failover group using the Azure CLI
[!INCLUDE[appliesto-sqldb](../../includes/appliesto-sqldb.md)]
This Azure CLI script example creates a database in Azure SQL Database, creates
### Run the script ## Clean up resources
azure-sql Add Elastic Pool To Failover Group Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/scripts/add-elastic-pool-to-failover-group-cli.md
Last updated 01/26/2022
-# Use CLI to add an Azure SQL Database elastic pool to a failover group
+# Add an Azure SQL Database elastic pool to a failover group using the Azure CLI
[!INCLUDE[appliesto-sqldb](../../includes/appliesto-sqldb.md)]
This Azure CLI script example creates a single database, adds it to an elastic p
### Run the script ## Clean up resources
azure-sql Auditing Threat Detection Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/scripts/auditing-threat-detection-cli.md
Last updated 01/26/2022
-# Use CLI to configure SQL Database auditing and Advanced Threat Protection
+# Configure SQL Database auditing and Advanced Threat Protection using the Azure CLI
[!INCLUDE[appliesto-sqldb](../../includes/appliesto-sqldb.md)]
This Azure CLI script example configures SQL Database auditing and Advanced Thre
### Run the script ## Clean up resources
azure-sql Backup Database Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/scripts/backup-database-cli.md
Last updated 01/26/2022
-# Use CLI to backup an Azure SQL single database to an Azure storage container
+# Back up an Azure SQL single database to an Azure storage container using the Azure CLI
[!INCLUDE[appliesto-sqldb](../../includes/appliesto-sqldb.md)]
This Azure CLI example backs up a database in SQL Database to an Azure storage c
### Run the script ## Clean up resources
azure-sql Copy Database To New Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/scripts/copy-database-to-new-server-cli.md
Last updated 01/26/2022
-# Use CLI to copy a database in Azure SQL Database to a new server
+# Copy a database in Azure SQL Database to a new server using the Azure CLI
[!INCLUDE[appliesto-sqldb](../../includes/appliesto-sqldb.md)]
This Azure CLI script example creates a copy of an existing database in a new se
### Run the script ## Clean up resources
azure-sql Create And Configure Database Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/scripts/create-and-configure-database-cli.md
Last updated 01/26/2022
-# Use Azure CLI to create a single database and configure a firewall rule
+# Create a single database and configure a firewall rule using the Azure CLI
[!INCLUDE[appliesto-sqldb](../../includes/appliesto-sqldb.md)]
This Azure CLI script example creates a single database in Azure SQL Database an
### Run the script ## Clean up resources
azure-sql Import From Bacpac Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/scripts/import-from-bacpac-cli.md
Last updated 01/26/2022
-# Use CLI to import a BACPAC file into a database in SQL Database
+# Import a BACPAC file into a database in SQL Database using the Azure CLI
[!INCLUDE[appliesto-sqldb](../../includes/appliesto-sqldb.md)]
This Azure CLI script example imports a database from a *.bacpac* file into a da
### Run the script ## Clean up resources
azure-sql Monitor And Scale Database Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/scripts/monitor-and-scale-database-cli.md
Last updated 01/26/2022
-# Use the Azure CLI to monitor and scale a single database in Azure SQL Database
+# Monitor and scale a single database in Azure SQL Database using the Azure CLI
[!INCLUDE[appliesto-sqldb](../../includes/appliesto-sqldb.md)]
This Azure CLI script example scales a single database in Azure SQL Database to
### Run the script > [!TIP] > Use [az sql db op list](/cli/azure/sql/db/op?#az_sql_db_op_list) to get a list of operations performed on the database, and use [az sql db op cancel](/cli/azure/sql/db/op#az_sql_db_op_cancel) to cancel an update operation on the database.
azure-sql Move Database Between Elastic Pools Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/scripts/move-database-between-elastic-pools-cli.md
Last updated 01/26/2022
-# Use Azure CLI to move a database in SQL Database in a SQL elastic pool
+# Move a database in SQL Database in a SQL elastic pool using the Azure CLI
[!INCLUDE[appliesto-sqldb](../../includes/appliesto-sqldb.md)]
This Azure CLI script example creates two elastic pools, moves a pooled database
### Run the script ## Clean up resources
azure-sql Restore Database Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/scripts/restore-database-cli.md
Last updated 02/11/2022
-# Use CLI to restore a single database in Azure SQL Database to an earlier point in time
+# Restore a single database in Azure SQL Database to an earlier point in time using the Azure CLI
[!INCLUDE[appliesto-sqldb](../../includes/appliesto-sqldb.md)]
This Azure CLI example restores a single database in Azure SQL Database to a spe
### Run the script ## Clean up resources
azure-sql Scale Pool Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/scripts/scale-pool-cli.md
Last updated 01/26/2022
-# Use the Azure CLI to scale an elastic pool in Azure SQL Database
+# Scale an elastic pool in Azure SQL Database using the Azure CLI
[!INCLUDE[appliesto-sqldb](../../includes/appliesto-sqldb.md)]
This Azure CLI script example creates elastic pools in Azure SQL Database, moves
### Run the script ## Clean up resources
azure-sql Setup Geodr Failover Database Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/scripts/setup-geodr-failover-database-cli.md
Last updated 01/26/2022
-# Use CLI to configure active geo-replication for a single database in Azure SQL Database
+# Configure active geo-replication for a single database in Azure SQL Database using the Azure CLI
[!INCLUDE[appliesto-sqldb](../../includes/appliesto-sqldb.md)]
This Azure CLI script example configures active geo-replication for a single dat
### Run the script ## Clean up resources
azure-sql Setup Geodr Failover Group Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/scripts/setup-geodr-failover-group-cli.md
Last updated 01/26/2022
-# Use CLI to configure a failover group for a group of databases in Azure SQL Database
+# Configure a failover group for a group of databases in Azure SQL Database using the Azure CLI
[!INCLUDE[appliesto-sqldb](../../includes/appliesto-sqldb.md)]
Last updated 01/26/2022
### Run the script ## Clean up resources
azure-sql Setup Geodr Failover Pool Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/scripts/setup-geodr-failover-pool-cli.md
Last updated 01/26/2022
-# Use CLI to configure active geo-replication for a pooled database in Azure SQL Database
+# Configure active geo-replication for a pooled database in Azure SQL Database using the Azure CLI
[!INCLUDE[appliesto-sqldb](../../includes/appliesto-sqldb.md)]
This Azure CLI script example configures active geo-replication for a pooled dat
### Run the script ## Clean up resources
azure-sql Service Tier Hyperscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/service-tier-hyperscale.md
The vCore-based service tiers are differentiated based on database availability
| **Storage type** | Premium remote storage (per instance) | De-coupled storage with local SSD cache (per instance) | Super-fast local SSD storage (per instance)| | **Storage size**<sup>1</sup> | 5 GB – 4 TB | Up to 100 TB | 5 GB – 4 TB | | **IOPS** | 500 IOPS per vCore with 7000 maximum IOPS | Hyperscale is a multi-tiered architecture with caching at multiple levels. Effective IOPS will depend on the workload. | 5000 IOPS with 200,000 maximum IOPS |
-|**Availability** | 1 replica, no Read Scale-out, zone-redundant HA (preview), no local cache | Multiple replicas, up to 4 Read Scale-out, zone-redundant HA (preview), partial local cache | 3 replicas, 1 Read Scale-out, zone-redundant HA, full local storage |
+| **Availability** | 1 replica, no Read Scale-out, zone-redundant HA (preview), no local cache | Multiple replicas, up to 4 Read Scale-out, zone-redundant HA (preview), partial local cache | 3 replicas, 1 Read Scale-out, zone-redundant HA, full local storage |
| **Backups** | A choice of geo-redundant, zone-redundant, or locally-redundant backup storage, 1-35 day retention (default 7 days) | A choice of geo-redundant, zone-redundant, or locally-redundant backup storage, 7 day retention | A choice of geo-redundant, zone-redundant, or locally-redundant backup storage, 1-35 day retention (default 7 days) |
-|||||
<sup>1</sup> Elastic pools are not supported in the Hyperscale service tier.
These are the current limitations to the Hyperscale service tier as of GA. We'r
| Database integrity check | DBCC CHECKDB isn't currently supported for Hyperscale databases. DBCC CHECKTABLE ('TableName') WITH TABLOCK and DBCC CHECKFILEGROUP WITH TABLOCK may be used as a workaround. See [Data Integrity in Azure SQL Database](https://azure.microsoft.com/blog/data-integrity-in-azure-sql-database/) for details on data integrity management in Azure SQL Database. | | Elastic Jobs | Using a Hyperscale database as the Job database is not supported. However, elastic jobs can target Hyperscale databases in the same way as any other Azure SQL database. | |Data Sync| Using a Hyperscale database as a Hub or Sync Metadata database is not supported. However, a Hyperscale database can be a member database in a Data Sync topology. |
+|Import-Export | The Import-Export service is currently not supported for Hyperscale databases. |
## Next steps
azure-sql Single Database Create Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/single-database-create-quickstart.md
The following values are used in subsequent commands to create the database and
Change the location as appropriate for your environment. Replace `0.0.0.0` with the IP address range to match your specific environment. Use the public IP address of the computer you're using to restrict access to the server to only your IP address. ### Create a resource group Create a resource group with the [az group create](/cli/azure/group) command. An Azure resource group is a logical container into which Azure resources are deployed and managed. The following example creates a resource group named *myResourceGroup* in the *eastus* location: ### Create a server Create a server with the [az sql server create](/cli/azure/sql/server) command. ### Configure a server-based firewall rule Create a firewall rule with the [az sql server firewall-rule create](/cli/azure/sql/server/firewall-rule) command. ### Create a single database Create a database with the [az sql db create](/cli/azure/sql/db) command in the [serverless compute tier](serverless-tier-overview.md). ```azurecli
+echo "Creating $database in serverless tier"
az sql db create \ --resource-group $resourceGroup \ --server $server \
The following values are used in subsequent commands to create the database and
Change the location as appropriate for your environment. Replace `0.0.0.0` with the IP address range to match your specific environment. > [!NOTE] > [az sql up](/cli/azure/sql#az_sql_up) is currently in preview and does not currently support the serverless compute tier. Also, the use of non-alphabetic and non-numeric characters in the database name are not currently supported.
Use the [az sql up](/cli/azure/sql#az_sql_up) command to create and configure a
--database-name $database \ --admin-user $login \ --admin-password $password ``` 2. A server firewall rule is automatically created. If the server declines your IP address, create a new firewall rule using the `az sql server firewall-rule create` command and specifying appropriate start and end IP addresses.
azure-sql Link Feature Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/link-feature-best-practices.md
+
+ Title: The link feature best practices
+
+description: Learn about best practices when using the link feature for Azure SQL Managed Instance.
++++
+ms.devlang:
++++ Last updated : 03/10/2022+
+# Best practices with link feature for Azure SQL Managed Instance (preview)
+
+This article outlines best practices when using the link feature for Azure SQL Managed Instance. The link feature for Azure SQL Managed Instance connects your SQL Servers hosted anywhere to SQL Managed Instance, providing near real-time data replication to the cloud.
+
+> [!NOTE]
+> The link feature for Azure SQL Managed Instance is currently in preview.
+
+## Take log backups regularly
+
+The link feature replicates data using the [Distributed availability groups](/sql/database-engine/availability-groups/windows/distributed-availability-groups) concept, based on the Always On availability groups technology stack. Data replication with distributed availability groups is based on replicating transaction log records. No transaction log records can be truncated from the database on the primary instance until they're replicated to the database on the secondary instance. If transaction log record replication is slow or blocked due to network connection issues, the log file keeps growing on the primary instance. The growth rate depends on the intensity of the workload and the network speed. If there's a prolonged network connection outage and a heavy workload on the primary instance, the log file may consume all available storage space.
++
+To minimize the risk of running out of space on your primary instance due to log file growth, make sure to take database log backups regularly. Taking log backups regularly makes your database more resilient to unplanned log growth events. Consider scheduling daily log backup tasks using a SQL Server Agent job.
+
+You can use a Transact-SQL (T-SQL) script to back up the log file, such as the sample provided in this section. Replace the placeholders in the sample script with the name of your database, the name and path of the backup file, and a description.
+
+To back up your transaction log, use the following sample Transact-SQL (T-SQL) script:
+
+```sql
+
+USE [<DatabaseName>];
+-- Set the current database inside the job step or script.
+-- Confirm that you're executing the script on the primary instance.
+IF (SELECT role
+        FROM sys.dm_hadr_availability_replica_states AS a
+        JOIN sys.availability_replicas AS b
+            ON b.replica_id = a.replica_id
+        WHERE b.replica_server_name = @@SERVERNAME) = 1
+BEGIN
+    -- Take the log backup.
+    BACKUP LOG [<DatabaseName>]
+    TO DISK = N'<DiskPathandFileName>'
+    WITH NOFORMAT, NOINIT,
+        NAME = N'<Description>', SKIP, NOREWIND, NOUNLOAD, COMPRESSION, STATS = 1;
+END
+```
++
+Use the following Transact-SQL (T-SQL) command to check the log spaced used by your database:
+
+```sql
+DBCC SQLPERF(LOGSPACE);
+```
+
+The query output looks like the following example for the sample database **tpcc**:
++
+In this example, the database has used 76% of the available log space, with an absolute log file size of approximately 27 GB (27,971 MB). The threshold for action may vary based on your workload, but such a value typically indicates that you should take a log backup to truncate the log file and free up space.
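+As a lightweight alternative to `DBCC SQLPERF`, the `sys.dm_db_log_space_usage` DMV returns the same information for the current database. A minimal sketch, assuming you run it in the context of the replicated database:
+
+```sql
+USE [<DatabaseName>];
+
+-- Log size and percentage of log space used for the current database.
+SELECT total_log_size_in_bytes / 1048576.0 AS log_size_mb,
+       used_log_space_in_percent
+FROM sys.dm_db_log_space_usage;
+```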
+
+## Add startup trace flags
+
+There are two trace flags (`-T1800` and `-T9567`) that, when added as startup parameters, can optimize the performance of data replication through the link. See [Enable startup trace flags](managed-instance-link-preparation.md#enable-startup-trace-flags) to learn more.
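+After restarting SQL Server with the trace flags, a quick check can confirm that they're active globally:
+
+```sql
+-- Lists the status of the specified trace flags; -1 shows global scope.
+DBCC TRACESTATUS (1800, 9567, -1);
+```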
+
+## Next steps
+
+To get started with the link feature, [prepare your environment for replication](managed-instance-link-preparation.md).
+
+For more information on the link feature, see the following articles:
+
+- [Managed Instance link – overview](link-feature.md)
+- [Managed Instance link – connecting SQL Server to Azure reimagined](https://aka.ms/mi-link-techblog)
azure-sql Link Feature https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/link-feature.md
Last updated 02/04/2022
-# Link feature for Azure SQL Managed Instance (limited preview)
+# Link feature for Azure SQL Managed Instance (preview)
[!INCLUDE[appliesto-sqlmi](../includes/appliesto-sqlmi.md)] The new link feature in Azure SQL Managed Instance connects your SQL Servers hosted anywhere to SQL Managed Instance, providing hybrid flexibility and database mobility. With an approach that uses near real-time data replication to the cloud, you can offload workloads to a read-only secondary in Azure to take advantage of Azure-only features, performance, and scale. After a disastrous event, you can continue running your read-only workloads on SQL Managed Instance in Azure. You can also choose to migrate one or more applications from SQL Server to SQL Managed Instance at the same time, at your own pace, and with the best possible minimum downtime compared to other solutions in Azure today.
-## Sign-up for link
- To use the link feature, you'll need: -- SQL Server 2019 Enterprise Edition with [CU15 (or above)](https://support.microsoft.com/en-us/topic/kb5008996-cumulative-update-15-for-sql-server-2019-4b6a8ee9-1c61-482d-914f-36e429901fb6) installed on-premises, or on an Azure VM.
+- SQL Server 2019 Enterprise Edition or Developer Edition with [CU15 (or above)](https://support.microsoft.com/en-us/topic/kb5008996-cumulative-update-15-for-sql-server-2019-4b6a8ee9-1c61-482d-914f-36e429901fb6) installed on-premises, or on an Azure VM.
- Network connectivity between your SQL Server and managed instance is required. If your SQL Server is running on-premises, use a VPN link or Express route. If your SQL Server is running on an Azure VM, either deploy your VM to the same subnet as your managed instance, or use global VNet peering to connect two separate subnets. - Azure SQL Managed Instance provisioned on any service tier.
-Use the following link to sign-up for the limited preview of the link feature.
-
-> [!div class="nextstepaction"]
-> [Sign up for link feature preview](https://aka.ms/mi-link-signup)
+> [!NOTE]
+> The SQL Managed Instance link feature is available in the following regions: Australia Central, Australia Central 2, Australia Southeast, Brazil South, Brazil Southeast, France Central, France South, South India, Central India, West India, Japan West, Japan East, Jio India West, Jio India Central, Korea Central, Korea South, North Central US, North Europe, Norway West, Norway East, South Africa North, South Africa West, South Central US, Southeast Asia, Sweden Central, Switzerland North, Switzerland West, UK South, UK West, West Central US, West Europe, West US, West US 2, West US 3. We're working on enabling the link feature in all regions.
## Overview
The underlying technology of near real-time data replication between SQL Server
There's no need to have an existing availability group or multiple nodes. The link supports single node SQL Server instances without existing availability groups, and also multiple-node SQL Server instances with existing availability groups. Through the link, you can leverage the modern benefits of Azure without migrating your entire SQL Server data estate to the cloud.
-You can keep running the link for as long as you need it, for months and even years at a time. And for your modernization journey, if/when you're ready to migrate to Azure, the link enables a considerably-improved migration experience with the minimum possible downtime compared to all other options available today, providing a true online migration to SQL Managed Instance.
+You can keep running the link for as long as you need it, for months and even years at a time. And for your modernization journey, if or when you're ready to migrate to Azure, the link enables a considerably improved migration experience with the minimum possible downtime compared to all other options available today, providing a true online migration to SQL Managed Instance.
## Supported scenarios
Secure connectivity, such as VPN or Express Route is used between an on-premises
Up to 100 links can be established from the same or different SQL Server sources to a single SQL Managed Instance. This limit is governed by the number of databases that can be hosted on a managed instance at this time. Likewise, a single SQL Server instance can establish multiple parallel database replication links with several managed instances in different Azure regions, in a 1:1 relationship between a database and a managed instance. The feature requires CU15 or higher to be installed on SQL Server 2019.
-> [!NOTE]
-> The link feature is released in limited public preview with support for currently only SQL Server 2019 Enterprise Edition CU13 (or above). [Sign-up now](https://aka.ms/mi-link-signup) to participate in the limited public preview.
- ## Limitations This section describes the product's functional limitations.
Some Managed Instance link features and capabilities are limited **at this time*
## Next steps
+If you're interested in using the link feature for Azure SQL Managed Instance with versions and editions that aren't currently supported, sign up [here](https://aka.ms/mi-link-signup).
 For more information on the link feature, see the following: - [Managed Instance link – connecting SQL Server to Azure reimagined](https://aka.ms/mi-link-techblog).
+- [Prepare for SQL Managed Instance link](./managed-instance-link-preparation.md).
+- [Use SQL Managed Instance link via SSMS to replicate database](./managed-instance-link-use-ssms-to-replicate-database.md).
+- [Use SQL Managed Instance link via SSMS to migrate database](./managed-instance-link-use-ssms-to-failover-database.md).
For other replication scenarios, consider:
azure-sql Managed Instance Link Preparation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/managed-instance-link-preparation.md
+
+ Title: Prepare environment for link feature
+
+description: This guide teaches you how to prepare your environment to use the SQL Managed Instance link to replicate your database to Azure SQL Managed Instance, and potentially fail over.
++++
+ms.devlang:
++++ Last updated : 03/07/2022++
+# Prepare environment for link feature - Azure SQL Managed Instance
+
+This article teaches you to prepare your environment for the [Managed Instance link feature](link-feature.md) so that you can replicate your databases from your instance of SQL Server to your instance of Azure SQL Managed Instance.
+
+> [!NOTE]
+> The link feature for Azure SQL Managed Instance is currently in preview.
+
+## Prerequisites
+
+To use the Managed Instance link feature, you need the following prerequisites:
+
+- An active Azure subscription. If you don't have one, [create a free account](https://azure.microsoft.com/free/).
+- [SQL Server 2019 Enterprise or Developer edition](https://www.microsoft.com/en-us/evalcenter/evaluate-sql-server-2019?filetype=EXE), starting with [CU15 (15.0.4198.2)](https://support.microsoft.com/topic/kb5008996-cumulative-update-15-for-sql-server-2019-4b6a8ee9-1c61-482d-914f-36e429901fb6).
+- An instance of Azure SQL Managed Instance. [Get started](instance-create-quickstart.md) if you don't have one.
+++
+## Prepare your SQL Server instance
+
+To prepare your SQL Server instance, validate that you're on the minimum supported version, enable the availability groups feature, and add the proper trace flags at startup. You'll need to restart SQL Server for these changes to take effect.
+
+### Install CU15 (or higher)
+
+The link feature for SQL Managed Instance was introduced in CU15 of SQL Server 2019.
+
+To check your SQL Server version, run the following Transact-SQL (T-SQL) script:
+
+```sql
+-- Shows the version and CU of the SQL Server
+SELECT @@VERSION
+```
+
+If your SQL Server version is lower than CU15 (15.0.4198.2), either install the minimum supported [CU15](https://support.microsoft.com/topic/kb5008996-cumulative-update-15-for-sql-server-2019-4b6a8ee9-1c61-482d-914f-36e429901fb6), or the latest cumulative update. Your SQL Server instance will be restarted during the update.
++
+### Enable availability groups feature
+
+The link feature for SQL Managed Instance relies on the Always On availability groups feature, which is not enabled by default. To learn more, review [enabling the Always On availability groups feature](/sql/database-engine/availability-groups/windows/enable-and-disable-always-on-availability-groups-sql-server).
+
+To confirm the Always On availability groups feature is enabled, run the following Transact-SQL (T-SQL) script:
+
+```sql
+-- Is HADR enabled on this SQL Server?
+declare @IsHadrEnabled sql_variant = (select SERVERPROPERTY('IsHadrEnabled'))
+select
+ @IsHadrEnabled as IsHadrEnabled,
+ case @IsHadrEnabled
+ when 0 then 'The Always On availability groups is disabled.'
+ when 1 then 'The Always On availability groups is enabled.'
+ else 'Unknown status.'
+ end as 'HadrStatus'
+```
+
+If the availability groups feature is not enabled, follow these steps to enable it:
+
+1. Open the **SQL Server Configuration Manager**.
+1. Choose the SQL Server service from the navigation pane.
+1. Right-click on the SQL Server service, and select **Properties**:
+
+ :::image type="content" source="./media/managed-instance-link-preparation/sql-server-configuration-manager-sql-server-properties.png" alt-text="Screenshot showing S Q L Server configuration manager.":::
+
+1. Go to the **Always On Availability Groups** tab.
+1. Select the checkbox to enable **Always On Availability Groups**. Select **OK**:
+
+ :::image type="content" source="./media/managed-instance-link-preparation/always-on-availability-groups-properties.png" alt-text="Screenshot showing always on availability groups properties.":::
+
+1. Select **OK** on the dialog box to restart the SQL Server service.
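+
+As an alternative to the Configuration Manager UI, the `Enable-SqlAlwaysOn` cmdlet from the SqlServer PowerShell module can enable the feature and restart the service in one step. This is a sketch; the instance name is a placeholder, and it assumes the SqlServer module is already installed:
+
+```powershell
+# Assumes the SqlServer module is installed (Install-Module SqlServer);
+# "MyServer\MyInstance" is a placeholder for your SQL Server instance name.
+# -Force suppresses the confirmation prompt and restarts the SQL Server service.
+Enable-SqlAlwaysOn -ServerInstance "MyServer\MyInstance" -Force
+```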
+
+### Enable startup trace flags
+
+To optimize the performance of the Managed Instance link, we highly recommend enabling trace flags `-T1800` and `-T9567` at startup:
+
+- **-T1800**: This trace flag optimizes SQL Server performance when the disks hosting the log files for the primary and secondary replica in an availability group have different sector sizes, such as 512 bytes and 4k. If both primary and secondary replicas have a disk sector size of 4k, this trace flag isn't required. To learn more, review [KB3009974](https://support.microsoft.com/topic/kb3009974-fix-slow-synchronization-when-disks-have-different-sector-sizes-for-primary-and-secondary-replica-log-files-in-sql-server-ag-and-logshipping-environments-ed181bf3-ce80-b6d0-f268-34135711043c).
+- **-T9567**: This trace flag enables compression of the data stream for availability groups during automatic seeding, which increases the load on the processor but can significantly reduce transfer time during seeding.
+
+To enable these trace flags at startup, follow these steps:
+
+1. Open **SQL Server Configuration Manager**.
+1. Choose the SQL Server service from the navigation pane.
+1. Right-click on the SQL Server service, and select **Properties**:
+
+ :::image type="content" source="./media/managed-instance-link-preparation/sql-server-configuration-manager-sql-server-properties.png" alt-text="Screenshot showing S Q L Server configuration manager.":::
+
+1. Go to the **Startup Parameters** tab. In **Specify a startup parameter**, enter `-T1800` and select **Add** to add the startup parameter. After the trace flag has been added, enter `-T9567` and select **Add** to add the other trace flag as well. Select **Apply** to save your changes:
+
+ :::image type="content" source="./media/managed-instance-link-preparation/startup-parameters-properties.png" alt-text="Screenshot showing Startup parameter properties.":::
+
+1. Select **OK** to close the **Properties** window.
+To learn more, review [enabling trace flags](/sql/t-sql/database-console-commands/dbcc-traceon-transact-sql).
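+
+If you want the flags active before the next restart, you can also turn them on globally at runtime with `DBCC TRACEON`. Note that runtime trace flags don't persist across a restart, so keep the startup parameters configured as well:
+
+```sql
+-- Enable both trace flags globally for the running instance (-1 = global scope).
+-- This does not survive a restart; the startup parameters above remain necessary.
+DBCC TRACEON (1800, 9567, -1);
+GO
+
+-- Confirm both flags report Global = 1
+DBCC TRACESTATUS (1800, 9567);
+```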
+### Restart SQL Server and validate configuration
+After you've validated you're on a supported version of SQL Server, enabled the Always On availability groups feature, and added your startup trace flags, restart your SQL Server instance to apply all of these changes.
+
+To restart your SQL Server instance, follow these steps:
+
+1. Open **SQL Server Configuration Manager**.
+1. Choose the SQL Server service from the navigation pane.
+1. Right-click on the SQL Server service, and select **Restart**:
+
+ :::image type="content" source="./media/managed-instance-link-preparation/sql-server-configuration-manager-sql-server-restart.png" alt-text="Screenshot showing S Q L Server restart command call.":::
+
+After the restart, use Transact-SQL to validate the configuration of your SQL Server instance: your version should be 15.0.4198.2 or greater, the Always On availability groups feature should be enabled, and trace flags 1800 and 9567 should be enabled.
+
+To validate your configuration, run the following Transact-SQL (T-SQL) script:
+
+```sql
+-- Shows the version and CU of SQL Server
+SELECT @@VERSION
+
+-- Shows if Always On availability groups feature is enabled
+SELECT SERVERPROPERTY ('IsHadrEnabled')
+
+-- Lists all trace flags enabled on the SQL Server
+DBCC TRACESTATUS
+```
+
+The following screenshot is an example of the expected outcome for a SQL Server that's been properly configured.
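+
+If you prefer a single pass/fail verdict over reading three result sets, a check along these lines can be used. This is a sketch, with the expected values taken from this article's minimums:
+
+```sql
+-- Capture DBCC TRACESTATUS output so it can be queried (columns match its result set)
+DECLARE @flags TABLE (TraceFlag INT, Status INT, Global INT, Session INT)
+INSERT INTO @flags EXEC ('DBCC TRACESTATUS (1800, 9567) WITH NO_INFOMSGS')
+
+SELECT CASE
+    WHEN CAST(PARSENAME(CAST(SERVERPROPERTY('ProductVersion') AS NVARCHAR(128)), 2) AS INT) >= 4198
+         AND CAST(SERVERPROPERTY('IsHadrEnabled') AS INT) = 1
+         AND (SELECT COUNT(*) FROM @flags WHERE Global = 1) = 2
+    THEN N'PASS: instance is configured for the Managed Instance link'
+    ELSE N'FAIL: re-check version, Always On availability groups, and trace flags'
+END AS ConfigurationCheck
+```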
+## Configure network connectivity
+
+For the Managed Instance link to work, there must be network connectivity between SQL Server and SQL Managed Instance. The network option you choose depends on where your SQL Server resides: on-premises or on a virtual machine (VM).
+
+### SQL Server on Azure VM
+
+Deploying your SQL Server to an Azure VM in the same Azure virtual network (VNet) that hosts your SQL Managed Instance is the simplest method, as there will automatically be network connectivity between the two instances. To learn more, see the detailed tutorial [Deploy and configure an Azure VM to connect to Azure SQL Managed Instance](./connect-vm-instance-configure.md).
+
+If your SQL Server on Azure VM is in a different VNet to your managed instance, either connect the two Azure VNets using [Global VNet peering](https://techcommunity.microsoft.com/t5/azure-sql/new-feature-global-vnet-peering-support-for-azure-sql-managed/ba-p/1746913), or configure [VPN gateways](../../vpn-gateway/tutorial-create-gateway-portal.md).
+
+>[!NOTE]
+> Global VNet peering is enabled by default on managed instances provisioned after November 2020. [Raise a support ticket](../database/quota-increase-request.md) to enable Global VNet peering on older instances.
+### SQL Server outside of Azure
+
+If your SQL Server is hosted outside of Azure, establish a VPN connection between your SQL Server and your SQL Managed Instance with either option:
+
+- [Site-to-site virtual private network (VPN) connection](/office365/enterprise/connect-an-on-premises-network-to-a-microsoft-azure-virtual-network)
+- [Azure Express Route connection](../../expressroute/expressroute-introduction.md)
+
+> [!TIP]
+> Azure Express Route is recommended for the best network performance when replicating data. Make sure to provision a gateway with enough bandwidth for your use case.
+
+### Open network ports between the environments
+
+Port 5022 must allow inbound and outbound traffic between SQL Server and SQL Managed Instance. It's the standard port used for availability groups, and can't be changed or customized.
+
+The following table describes port actions for each environment:
+
+|Environment|What to do|
+|:|:--|
+|SQL Server (in Azure) | Open inbound and outbound traffic on port 5022 in the network firewall to the entire subnet of the SQL Managed Instance. If necessary, do the same in the Windows Firewall. In the virtual network hosting the VM, create an NSG rule that allows communication on port 5022. |
+|SQL Server (outside of Azure) | Open inbound and outbound traffic on port 5022 in the network firewall to the entire subnet of the SQL Managed Instance. If necessary, do the same in the Windows Firewall. |
+|SQL Managed Instance |[Create an NSG rule](../../virtual-network/manage-network-security-group.md#create-a-security-rule) in the Azure portal to allow inbound and outbound traffic from the IP address of the SQL Server on port 5022 to the virtual network hosting the SQL Managed Instance. |
+
+Use the following PowerShell script on the host SQL Server to open ports in the Windows Firewall:
+
+```powershell
+# Allow the availability group endpoint to receive traffic on local port 5022
+New-NetFirewallRule -DisplayName "Allow TCP port 5022 inbound" -Direction inbound -Profile Any -Action Allow -LocalPort 5022 -Protocol TCP
+# Allow outbound connections to the managed instance endpoint on remote port 5022
+New-NetFirewallRule -DisplayName "Allow TCP port 5022 outbound" -Direction outbound -Profile Any -Action Allow -RemotePort 5022 -Protocol TCP
+```
+## Test bidirectional network connectivity
+
+Bidirectional network connectivity between SQL Server and SQL Managed Instance is necessary for the Managed Instance link feature to work. After opening your ports on the SQL Server side, and configuring an NSG rule on the SQL Managed Instance side, test connectivity.
+### Test connection from SQL Server to SQL Managed Instance
+
+To check if SQL Server can reach your SQL Managed Instance, use the `tnc` command in PowerShell from the SQL Server host machine. Replace `<ManagedInstanceFQDN>` with the fully qualified domain name of the Azure SQL Managed Instance.
+
+```powershell
+tnc <ManagedInstanceFQDN> -port 5022
+```
+
+A successful test shows `TcpTestSucceeded : True`.
+If the response is unsuccessful, verify the following:
+- There are rules in both the network firewall *and* the Windows firewall that allow traffic to the *subnet* of the SQL Managed Instance.
+- There is an NSG rule allowing communication on port 5022 for the virtual network hosting the SQL Managed Instance.
+#### Test connection from SQL Managed Instance to SQL Server
+
+To check that the SQL Managed Instance can reach your SQL Server, first create a test endpoint on the SQL Server, and then use SQL Agent on the managed instance to execute a PowerShell script with the `tnc` command that tests the connection to SQL Server on port 5022.
+Connect to your SQL Server instance, and run the following Transact-SQL (T-SQL) script to create a test endpoint:
+
+```sql
+-- Create certificate needed for the test endpoint
+USE MASTER
+CREATE CERTIFICATE TEST_CERT
+WITH SUBJECT = N'Certificate for SQL Server',
+EXPIRY_DATE = N'3/30/2051'
+GO
+
+-- Create test endpoint
+USE MASTER
+CREATE ENDPOINT TEST_ENDPOINT
+ STATE=STARTED
+ AS TCP (LISTENER_PORT=5022, LISTENER_IP = ALL)
+ FOR DATABASE_MIRRORING (
+ ROLE=ALL,
+ AUTHENTICATION = CERTIFICATE TEST_CERT,
+ ENCRYPTION = REQUIRED ALGORITHM AES
+ )
+```
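+
+Before creating the agent job, you can confirm the endpoint exists and is listening on the expected port by querying the `sys.tcp_endpoints` catalog view:
+
+```sql
+-- Expect one row with state_desc = 'STARTED' and port = 5022
+SELECT name, state_desc, port
+FROM sys.tcp_endpoints
+WHERE name = 'TEST_ENDPOINT'
+```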
+
+Next, create a new SQL Agent job called `NetHelper`. Replace `<SQL_SERVER_ADDRESS>` with the public IP address or DNS name of your SQL Server that can be resolved from the SQL Managed Instance.
+
+To create the SQL Agent Job, run the following Transact-SQL (T-SQL) script:
+```sql
+-- SQL_SERVER_ADDRESS should be public IP address, or DNS name that can be resolved from the Managed Instance host machine.
+DECLARE @SQLServerIpAddress NVARCHAR(MAX) = '<SQL_SERVER_ADDRESS>'
+DECLARE @tncCommand NVARCHAR(MAX) = 'tnc ' + @SQLServerIpAddress + ' -port 5022 -InformationLevel Quiet'
+DECLARE @jobId BINARY(16)
+
+EXEC msdb.dbo.sp_add_job @job_name=N'NetHelper',
+ @enabled=1,
+ @description=N'Test Managed Instance to SQL Server network connectivity on port 5022.',
+ @category_name=N'[Uncategorized (Local)]',
+ @owner_login_name=N'cloudSA', @job_id = @jobId OUTPUT
+
+EXEC msdb.dbo.sp_add_jobstep @job_id=@jobId, @step_name=N'tnc step',
+ @step_id=1,
+ @os_run_priority=0, @subsystem=N'PowerShell',
+ @command = @tncCommand,
+ @database_name=N'master',
+ @flags=40
+
+EXEC msdb.dbo.sp_update_job @job_id = @jobId, @start_step_id = 1
+
+EXEC msdb.dbo.sp_add_jobserver @job_id = @jobId, @server_name = N'(local)'
+
+EXEC msdb.dbo.sp_start_job @job_name = N'NetHelper'
+```
+Execute the SQL Agent job by running the following T-SQL command:
+
+```sql
+EXEC msdb.dbo.sp_start_job @job_name = N'NetHelper'
+```
+
+Execute the following query to show the log of the SQL Agent job:
+
+```sql
+SELECT
+ sj.name JobName, sjs.step_id, sjs.step_name, sjsl.log, sjsl.date_modified
+FROM
+ msdb.dbo.sysjobs sj
+ LEFT OUTER JOIN msdb.dbo.sysjobsteps sjs
+ ON sj.job_id = sjs.job_id
+ LEFT OUTER JOIN msdb.dbo.sysjobstepslogs sjsl
+ ON sjs.step_uid = sjsl.step_uid
+WHERE
+ sj.name = 'NetHelper'
+```
+
+If the connection is successful, the log shows `True`. If the connection is unsuccessful, the log shows `False`.
+Finally, drop the test endpoint and certificate with the following Transact-SQL (T-SQL) commands:
+
+```sql
+DROP ENDPOINT TEST_ENDPOINT
+GO
+DROP CERTIFICATE TEST_CERT
+GO
+```
+
+If the connection is unsuccessful, verify the following:
+- The firewall on the host SQL Server allows inbound and outbound communication on port 5022.
+- There is an NSG rule for the virtual network hosting the SQL Managed instance that allows communication on port 5022.
+- If your SQL Server is on an Azure VM, there is an NSG rule allowing communication on port 5022 on the virtual network hosting the VM.
+- SQL Server is running.
+
+> [!CAUTION]
+> Proceed with the next steps only if there is validated network connectivity between your source and target environments. Otherwise, please troubleshoot network connectivity issues before proceeding any further.
+## Install SSMS
+
+SQL Server Management Studio (SSMS) v18.11.1 is the easiest way to use the Managed Instance link. [Download SSMS version 18.11.1 or later](/sql/ssms/download-sql-server-management-studio-ssms) and install it on your client machine.
+
+After installation completes, open SSMS and connect to your supported SQL Server instance. Right-click a user database, and validate that the **Azure SQL Managed Instance link** option appears in the menu.
+## Next steps
+
+After your environment has been prepared, you're ready to start [replicating your database](managed-instance-link-use-ssms-to-replicate-database.md). To learn more, review [Link feature in Azure SQL Managed Instance](link-feature.md).
azure-sql Managed Instance Link Use Ssms To Failover Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/managed-instance-link-use-ssms-to-failover-database.md
Title: Managed Instance link - Use SSMS to failover database
+ Title: Failover database with link feature in SSMS
-description: This tutorial teaches you how to use Managed Instance link and SSMS to failover database from SQL Server to Azure SQL Managed Instance.
+description: This guide teaches you how to use the SQL Managed Instance link in SQL Server Management Studio (SSMS) to failover database from SQL Server to Azure SQL Managed Instance.
-+ ms.devlang: -+ Last updated 03/07/2022
-# Tutorial: Perform Managed Instance link database failover with SSMS
+# Failover database with link feature in SSMS - Azure SQL Managed Instance
[!INCLUDE[appliesto-sqlmi](../includes/appliesto-sqlmi.md)]
-Managed Instance link is in preview.
+This article teaches you to use the [Managed Instance link feature](link-feature.md) to failover your database from SQL Server to Azure SQL Managed Instance in SQL Server Management Studio (SSMS).
-Managed Instance link feature enables you to replicate and optionally migrate your database hosted on SQL Server to Azure SQL Managed Instance.
+Failing over your database from your SQL Server instance to your SQL Managed Instance breaks the link between the two databases, stopping replication, and leaving both databases in an independent state, ready for individual read-write workloads.
-Once Managed Instance link database failover is performed from SSMS, the Managed Instance link is cut. Database hosted on SQL Server will become independent from database on Managed Instance and both databases will be able to perform read-write workload. This tutorial will cover performing Managed Instance link database failover by using latest version of SSMS (v18.11 and newer).
+Before failing over your database, make sure you've [prepared your environment](managed-instance-link-preparation.md) and [configured replication through the link feature](managed-instance-link-use-ssms-to-replicate-database.md).
-## Managed Instance link database failover (migration)
+> [!NOTE]
+> The link feature for Azure SQL Managed Instance is currently in preview.
-Follow the steps described in this section to perform Managed Instance link database failover.
+## Prerequisites
-1. Managed Instance link database failover starts with connecting to SQL Server from SSMS.
- To perform Managed Instance link database failover and migrate database from SQL Server to Managed Instance, open the context menu of the SQL Server database. Then select Azure SQL Managed Instance link and then choose Failover database option.
+To failover your databases to Azure SQL Managed Instance, you need the following prerequisites:
- :::image type="content" source="./media/managed-instance-link-ssms/link-failover-ssms-database-context-failover-database.png" alt-text="Screenshot showing database's context menu option for database failover.":::
+- An active Azure subscription. If you don't have one, [create a free account](https://azure.microsoft.com/free/).
+- [SQL Server 2019 Enterprise or Developer edition](https://www.microsoft.com/en-us/evalcenter/evaluate-sql-server-2019), starting with [CU15 (15.0.4198.2)](https://support.microsoft.com/topic/kb5008996-cumulative-update-15-for-sql-server-2019-4b6a8ee9-1c61-482d-914f-36e429901fb6).
+- An instance of Azure SQL Managed Instance. [Get started](instance-create-quickstart.md) if you don't have one.
+- [SQL Server Management Studio (SSMS) v18.11.1 or later](/sql/ssms/download-sql-server-management-studio-ssms).
+- A [prepared environment for replication](managed-instance-link-preparation.md).
+- The [link feature set up, with your database replicated to your managed instance in Azure](managed-instance-link-use-ssms-to-replicate-database.md).
-2. When the wizard starts, click Next.
+## Failover database
- :::image type="content" source="./media/managed-instance-link-ssms/link-failover-introduction.png" alt-text="Screenshot showing Introduction window.":::
+Use the **Failover database to Managed Instance** wizard in SQL Server Management Studio (SSMS) to failover your database from your instance of SQL Server to your instance of SQL Managed Instance. The wizard takes you through failing over your database, breaking the link between the two instances in the process.
-3. On the Log in to Azure window, sign-in to your Azure account, select Subscription that is hosting the Managed Instance and click Next.
+> [!CAUTION]
+> If you are performing a planned manual failover, stop the workload on the database hosted on the source SQL Server to allow the replicated database on the SQL Managed Instance to completely catch up and failover without data loss. If you are performing a forced failover, there may be data loss.
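+
+For a planned failover, one way to confirm the replicated database has caught up is to inspect the availability group queue sizes on the SQL Server primary. This is a sketch using standard Always On DMVs; values near zero indicate that little or no data remains to be sent or redone:
+
+```sql
+-- Run on the SQL Server primary; near-zero queue sizes indicate the
+-- replicated database has caught up and a planned failover should lose no data.
+SELECT
+    ag.name AS AGName,
+    drs.database_id,
+    drs.log_send_queue_size,   -- KB of log not yet sent to the secondary
+    drs.redo_queue_size        -- KB of log not yet redone on the secondary
+FROM sys.dm_hadr_database_replica_states AS drs
+JOIN sys.availability_groups AS ag
+    ON ag.group_id = drs.group_id
+```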
- :::image type="content" source="./media/managed-instance-link-ssms/link-failover-login-to-azure.png" alt-text="Screenshot showing Log in to Azure window.":::
+To failover your database, follow these steps:
-4. On the Failover type window, select the failover type, fill in the required details and click Next.
+1. Open SQL Server Management Studio (SSMS) and connect to your instance of SQL Server.
+1. In **Object Explorer**, right-click your database, hover over **Azure SQL Managed Instance link** and select **Failover database** to open the **Failover database to Managed Instance** wizard:
- In regular situations you should choose planned manual failover option and confirm that the workload on SQL Server database is stopped.
+ :::image type="content" source="./media/managed-instance-link-use-ssms-to-failover-database/link-failover-ssms-database-context-failover-database.png" alt-text="Screenshot showing database's context menu option for database failover.":::
- :::image type="content" source="./media/managed-instance-link-ssms/link-failover-failover-type.png" alt-text="Screenshot showing Failover Type window.":::
+1. Select **Next** on the **Introduction** page of the **Failover database to Managed Instance** wizard:
+
+ :::image type="content" source="./media/managed-instance-link-use-ssms-to-failover-database/link-failover-introduction.png" alt-text="Screenshot showing Introduction page.":::
+3. On the **Log in to Azure** page, select **Sign-in** to provide your credentials and sign into your Azure account. Select the subscription that is hosting your SQL Managed Instance from the drop-down, and then select **Next**:
+
+ :::image type="content" source="./media/managed-instance-link-use-ssms-to-failover-database/link-failover-login-to-azure.png" alt-text="Screenshot showing Log in to Azure page.":::
+
+4. On the **Failover type** page, choose the type of failover you're performing and check the box to confirm that you've either stopped the workload for a planned failover, or you understand that there may be data loss for a forced failover. Select **Next**:
+
+ :::image type="content" source="./media/managed-instance-link-use-ssms-to-failover-database/link-failover-failover-type.png" alt-text="Screenshot showing Failover Type page.":::
+
+1. On the **Clean up (optional)** page, choose whether to drop the availability group if it was created solely to migrate your database to Azure and you no longer need it. If you want to keep the availability group, leave the boxes unchecked. Select **Next**:
-> [!NOTE]
-> If you are performing planned manual failover, you should stop the workload on the database hosted on the SQL Server to allow Managed Instance link to completely catch up with the replication, so that failover without data loss is possible.
-5. In case Availability Group and Distributed Availability Group were created only for the purpose of Managed Instance link, you can choose to drop these objects on the Clean-up window. Dropping these objects is optional. Click Next.
+ :::image type="content" source="./media/managed-instance-link-use-ssms-to-failover-database/link-failover-cleanup-optional.png" alt-text="Screenshot showing Cleanup (optional) page.":::
- :::image type="content" source="./media/managed-instance-link-ssms/link-failover-cleanup-optional.png" alt-text="Screenshot showing Cleanup (optional) window.":::
+1. On the **Summary** page, review the actions that will be performed for your failover. Optionally, you can generate a script to save and run later. When you're ready to proceed with the failover, select **Finish**:
-6. In the Summary window, you will be able to review the upcoming process. Optionally you can create the script to save it, or to execute it manually. If everything is as expected and you want to proceed with the Managed Instance link database failover, click Finish.
+ :::image type="content" source="./media/managed-instance-link-use-ssms-to-failover-database/link-failover-summary.png" alt-text="Screenshot showing Summary page.":::
- :::image type="content" source="./media/managed-instance-link-ssms/link-failover-summary.png" alt-text="Screenshot showing Summary window.":::
+7. The **Executing actions** page displays the progress of each action:
-7. You will be able to track the progress of the process.
+ :::image type="content" source="./media/managed-instance-link-use-ssms-to-failover-database/link-failover-executing-actions.png" alt-text="Screenshot showing Executing actions page.":::
- :::image type="content" source="./media/managed-instance-link-ssms/link-failover-executing-actions.png" alt-text="Screenshot showing Executing actions window.":::
+8. After all steps complete, the **Results** page shows a completed status, with checkmarks next to each successfully completed action. You can now close the window:
-8. Once all steps are completed, click Close.
+ :::image type="content" source="./media/managed-instance-link-use-ssms-to-failover-database/link-failover-results.png" alt-text="Screenshot showing Results window.":::
- :::image type="content" source="./media/managed-instance-link-ssms/link-failover-results.png" alt-text="Screenshot showing Results window.":::
+## View failed over database
-9. After this, Managed Instance link no longer exists. Both databases on SQL Server and Managed Instance can execute read-write workload and are independent.
- With this step, the migration of the database from SQL Server to Managed Instance is completed.
+During the failover process, the Managed Instance link is dropped and no longer exists. Both databases on the source SQL Server instance and target SQL Managed Instance can execute a read-write workload, and are completely independent.
- Database on SQL Server.
+You can validate this by reviewing the database on the SQL Server:
- :::image type="content" source="./media/managed-instance-link-ssms/link-failover-ssms-sql-server-database.png" alt-text="Screenshot showing database on SQL Server in SSMS.":::
- Database on Managed Instance.
+And then reviewing the database on the SQL Managed Instance:
- :::image type="content" source="./media/managed-instance-link-ssms/link-failover-ssms-managed-instance-database.png" alt-text="Screenshot showing database on Managed Instance in SSMS.":::
## Next steps
-For more information about Managed Instance link feature, see the following resources:
-- [Managed Instance link feature](./link-feature.md)
+To learn more, review [Link feature in Azure SQL Managed Instance](link-feature.md).
azure-sql Managed Instance Link Use Ssms To Replicate Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/managed-instance-link-use-ssms-to-replicate-database.md
Title: Managed Instance link - Use SSMS to replicate database
+ Title: Replicate database with link feature in SSMS
-description: This tutorial teaches you how to use Managed Instance link and SSMS to replicate database from SQL Server to Azure SQL Managed Instance.
+description: This guide teaches you how to use the SQL Managed Instance link in SQL Server Management Studio (SSMS) to replicate database from SQL Server to Azure SQL Managed Instance.
-+ ms.devlang: -+ Last updated 03/07/2022
-# Tutorial: Create Managed Instance link and replicate database with SSMS
+# Replicate database with link feature in SSMS - Azure SQL Managed Instance
[!INCLUDE[appliesto-sqlmi](../includes/appliesto-sqlmi.md)]
-Managed Instance link is in public preview.
+This article teaches you to use the [Managed Instance link feature](link-feature.md) to replicate your database from SQL Server to Azure SQL Managed Instance in SQL Server Management Studio (SSMS).
-Managed Instance link feature enables you to replicate your database hosted on SQL Server to Azure SQL Managed Instance. This tutorial will cover setting up Managed Instance link. More specifically, setting up database replication from SQL Server to Managed Instance with latest version of SSMS. This functionality is available in SSMS version 18.11 and newer.
+Before configuring replication for your database through the link feature, make sure you've [prepared your environment](managed-instance-link-preparation.md).
-## Managed Instance link database replication setup
+> [!NOTE]
+> The link feature for Azure SQL Managed Instance is currently in preview.
-Follow the steps described in this section to create Managed Instance link.
+## Prerequisites
-1. Managed Instance link database replication setup starts with connecting to SQL Server from SSMS.
- In the object explorer, select the database you want to replicate to Azure SQL Managed Instance. From the database's context menu, choose "Azure SQL Managed Instance link" and then "Replicate database", as shown in the screenshot below.
+To replicate your databases to Azure SQL Managed Instance, you need the following prerequisites:
- :::image type="content" source="./media/managed-instance-link-ssms/link-replicate-ssms-database-context-replicate-database.png" alt-text="Screenshot showing database's context menu option for replicate database.":::
+- An active Azure subscription. If you don't have one, [create a free account](https://azure.microsoft.com/free/).
+- [SQL Server 2019 Enterprise or Developer edition](https://www.microsoft.com/en-us/evalcenter/evaluate-sql-server-2019), starting with [CU15 (15.0.4198.2)](https://support.microsoft.com/topic/kb5008996-cumulative-update-15-for-sql-server-2019-4b6a8ee9-1c61-482d-914f-36e429901fb6).
+- An instance of Azure SQL Managed Instance. [Get started](instance-create-quickstart.md) if you don't have one.
+- [SQL Server Management Studio (SSMS) v18.11.1 or later](/sql/ssms/download-sql-server-management-studio-ssms).
+- A properly [prepared environment](managed-instance-link-preparation.md).
-2. Wizard that takes you thought the process of creating Managed Instance link will be started. Once the link is created, your source database will get its read-only replica on your target Azure SQL Managed Instance.
- Once the wizard starts, you'll see the Introduction window. Click Next to proceed.
+## Replicate database
- :::image type="content" source="./media/managed-instance-link-ssms/link-replicate-introduction.png" alt-text="Screenshot showing the introduction window for Managed Instance link replicate database wizard.":::
+Use the **New Managed Instance link** wizard in SQL Server Management Studio (SSMS) to set up the link between your instance of SQL Server and your instance of SQL Managed Instance. The wizard takes you through the process of creating the Managed Instance link. Once the link is created, your source database gets a read-only replica on your target Azure SQL Managed Instance.
-3. Wizard will check Managed Instance link requirements. If all requirements are met and you'll be able to click the Next button to continue.
+To set up the Managed Instance link, follow these steps:
- :::image type="content" source="./media/managed-instance-link-ssms/link-replicate-sql-server-requirements.png" alt-text="Screenshot showing SQL Server requirements window.":::
+1. Open SQL Server Management Studio (SSMS) and connect to your instance of SQL Server.
+1. In **Object Explorer**, right-click your database, hover over **Azure SQL Managed Instance link** and select **Replicate database** to open the **New Managed Instance link** wizard:
-4. On the Select Databases window, choose one or more databases to be replicated via Managed Instance link. Make database selection and click Next.
+ :::image type="content" source="./media/managed-instance-link-use-ssms-to-replicate-database/link-replicate-ssms-database-context-replicate-database.png" alt-text="Screenshot showing database's context menu option to replicate database after hovering over Azure SQL Managed Instance link.":::
- :::image type="content" source="./media/managed-instance-link-ssms/link-replicate-select-databases.png" alt-text="Screenshot showing Select Databases window.":::
+1. Select **Next** on the **Introduction** page of the **New Managed Instance link** wizard:
-5. On the Login to Azure and select Managed Instance window you'll need to sign-in to Microsoft Azure, select Subscription, Resource Group and Managed Instance. Finally, you'll need to provide login details for the chosen Managed Instance.
+ :::image type="content" source="./media/managed-instance-link-use-ssms-to-replicate-database/link-replicate-introduction.png" alt-text="Screenshot showing the introduction page for Managed Instance link replicate database wizard.":::
- :::image type="content" source="./media/managed-instance-link-ssms/link-replicate-login-to-azure.png" alt-text="Screenshot showing Login to Azure and select Managed Instance window.":::
+1. On the **Requirements** page, the wizard validates requirements to establish a link to your SQL Managed Instance. Select **Next** once all the requirements are validated:
-6. Once all of that is populated, you'll be able to click Next.
+ :::image type="content" source="./media/managed-instance-link-use-ssms-to-replicate-database/link-replicate-sql-server-requirements.png" alt-text="Screenshot showing S Q L Server requirements page.":::
- :::image type="content" source="./media/managed-instance-link-ssms/link-replicate-login-to-azure-populated.png" alt-text="Screenshot showing Login to Azure and select Managed Instance populated window.":::
+1. On the **Select Databases** page, choose one or more databases you want to replicate to your SQL Managed Instance via the Managed Instance link. Select **Next**:
-7. On the Specify Distributed AG Options window, you'll see prepopulated values for the various parameters. Unless you need to customize something, you can proceed with the default options and click Next.
+ :::image type="content" source="./media/managed-instance-link-use-ssms-to-replicate-database/link-replicate-select-databases.png" alt-text="Screenshot showing Select Databases page.":::
- :::image type="content" source="./media/managed-instance-link-ssms/link-replicate-distributed-ag-options.png" alt-text="Screenshot showing Specify Distributed AG options window.":::
+1. On the **Login to Azure and select Managed Instance** page, select **Sign In...** to sign into Microsoft Azure. Choose the subscription, resource group, and target managed instance from the drop-downs. Select **Login** and provide login details for the SQL Managed Instance:
-8. On the Summary window you'll be able to see the steps for creating Managed Instance link. Optionally, you can generate the setup Script to save it or to run it yourself.
- Complete the wizard process by clicking on the Finish.
+ :::image type="content" source="./media/managed-instance-link-use-ssms-to-replicate-database/link-replicate-login-to-azure.png" alt-text="Screenshot showing Login to Azure and select Managed Instance page.":::
- :::image type="content" source="./media/managed-instance-link-ssms/link-replicate-summary.png" alt-text="Screenshot showing Summary window.":::
+1. After providing all necessary information, select **Next**:
-9. The Executing actions window will display the progress of the process.
+ :::image type="content" source="./media/managed-instance-link-use-ssms-to-replicate-database/link-replicate-login-to-azure-populated.png" alt-text="Screenshot showing Login to Azure and select Managed Instance populated page.":::
- :::image type="content" source="./media/managed-instance-link-ssms/link-replicate-executing-actions.png" alt-text="Screenshot showing Executing actions window.":::
+1. Review the prepopulated values on the **Specify Distributed AG Options** page, and change any that need customization. When ready, select **Next**.
-10. Results window will show up once the process is completed and all steps are marked with a green check sign. At this point, you can close the wizard.
+ :::image type="content" source="./media/managed-instance-link-use-ssms-to-replicate-database/link-replicate-distributed-ag-options.png" alt-text="Screenshot showing Specify Distributed A G options page.":::
- :::image type="content" source="./media/managed-instance-link-ssms/link-replicate-results.png" alt-text="Screenshot showing Results window.":::
+1. Review the actions on the **Summary** page, and select **Finish** when ready. Optionally, you can generate a setup script to save and run yourself at a later time.
-11. With this, Managed Instance link has been created and chosen databases are being replicated to the Managed Instance.
+ :::image type="content" source="./media/managed-instance-link-use-ssms-to-replicate-database/link-replicate-summary.png" alt-text="Screenshot showing Summary window.":::
- In Object explorer, you'll see that the source database hosted on SQL Server is now in "Synchronized" state. Also, under Always On High Availability > Availability Groups, an Availability Group and a Distributed Availability Group are created for the Managed Instance link.
+1. The **Executing actions** page displays the progress of each action:
- :::image type="content" source="./media/managed-instance-link-ssms/link-replicate-ssms-sql-server-database.png" alt-text="Screenshot showing the state of SQL Server database and Availability Group and Distributed Availability Group in SSMS.":::
+ :::image type="content" source="./media/managed-instance-link-use-ssms-to-replicate-database/link-replicate-executing-actions.png" alt-text="Screenshot showing Executing actions page.":::
- We can also see a new database under target Managed Instance. Depending on the database size and network speed, initially you may see the database on the Managed Instance side in the "Restoring" state. Once the seeding from the SQL Server to Managed Instance is done, the database will be ready for read-only workload and visible as in the screenshot below.
+1. After all steps complete, the **Results** page shows a completed status, with checkmarks next to each successfully completed action. You can now close the window:
- :::image type="content" source="./media/managed-instance-link-ssms/link-replicate-ssms-managed-instance-database.png" alt-text="Screenshot showing the state of Managed Instance database.":::
+ :::image type="content" source="./media/managed-instance-link-use-ssms-to-replicate-database/link-replicate-results.png" alt-text="Screenshot showing Results page.":::
-## Next steps
+## View replicated database
+
+After the Managed Instance link is created, the selected databases are replicated to the SQL Managed Instance.
+
+Use **Object Explorer** on your SQL Server instance to view the `Synchronized` status of the replicated database, and expand **Always On High Availability** and **Availability Groups** to view the distributed availability group that is created for the Managed Instance link.
+
-For more information about Managed Instance link feature, see the following resources:
+Connect to your SQL Managed Instance and use **Object Explorer** to view your replicated database. Depending on the database size and network speed, the database may initially be in a `Restoring` state. After initial seeding completes, the database is restored to the SQL Managed Instance and ready for read-only workloads:
++
+## Next steps
-- [Managed Instance link feature](./link-feature.md)
+To break the link and fail over your database to the SQL Managed Instance, see [failover database](managed-instance-link-use-ssms-to-failover-database.md). To learn more, see [Link feature in Azure SQL Managed Instance](link-feature.md).
azure-sql Resource Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/resource-limits.md
Support for the premium-series hardware generations (public preview) is currentl
| Australia Central | Yes | |
| Australia East | Yes | Yes |
| Canada Central | Yes | |
-| Canada East | Yes | |
-| Central US | Yes | |
-| East US | Yes | |
-| East US 2 | Yes | |
-| Germany West Central | | Yes |
| Japan East | Yes | |
| Korea Central | Yes | |
| North Central US | Yes | |
-| North Europe | Yes | |
| South Central US | Yes | Yes |
| Southeast Asia | Yes | |
| West Europe | | Yes |
azure-sql Create Configure Managed Instance Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/scripts/create-configure-managed-instance-cli.md
Last updated 01/26/2022
-# Use CLI to create an Azure SQL Managed Instance
+# Create an Azure SQL Managed Instance using the Azure CLI
[!INCLUDE[appliesto-sqldb](../../includes/appliesto-sqlmi.md)]
This Azure CLI script example creates an Azure SQL Managed Instance in a dedicat
### Run the script

## Clean up resources
azure-sql Restore Geo Backup Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/scripts/restore-geo-backup-cli.md
Last updated 02/11/2022
-# Use CLI to restore a Managed Instance database to another geo-region
+# Restore a Managed Instance database to another geo-region using the Azure CLI
[!INCLUDE[appliesto-sqldb](../../includes/appliesto-sqlmi.md)]
This sample requires an existing pair of managed instances, see [Use Azure CLI t
### Run the script

## Clean up resources
azure-sql Transparent Data Encryption Byok Sql Managed Instance Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/scripts/transparent-data-encryption-byok-sql-managed-instance-cli.md
Last updated 01/26/2022
-# Manage Transparent Data Encryption in a Managed Instance using your own key from Azure Key Vault
+# Manage Transparent Data Encryption in a Managed Instance with your own key from Azure Key Vault by using the Azure CLI
[!INCLUDE[appliesto-sqldb](../../includes/appliesto-sqlmi.md)]
This sample requires an existing Managed Instance, see [Use Azure CLI to create
### Run the script

## Clean up resources
backup Backup Managed Disks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-managed-disks.md
Title: Back up Azure Managed Disks description: Learn how to back up Azure Managed Disks from the Azure portal. Previously updated : 11/25/2021 Last updated : 03/10/2022
To configure disk backup, follow these steps:
- You can't create an incremental snapshot for a particular disk outside of that disk's subscription. So, choose the resource group within the same subscription where the disk needs to be backed up. [Learn more](../virtual-machines/disks-incremental-snapshots.md#restrictions) about incremental snapshot for managed disks.
- - Once you configure the backup of a disk, you can't change the Snapshot Resource Group that's assigned to a backup instance.
-
- - During a backup operation, Azure Backup creates a Storage Account in the Snapshot resource group. Only one Storage Account is created per snapshot resource group. The account is reused across multiple disk backup instances that use the same resource group as the snapshot resource group.
-
- - The Storage account doesn't store the Snapshots. The Managed-disk's incremental snapshots are ARM resources created on resource group and not in a Storage Account.
- - Storage Account stores the metadata for each recovery point. Azure Backup service creates a blob container per disk backup instance. For each recovery point, a block blob will be created to store metadata describing the recovery point (such as subscription, disk ID, disk attributes, and so on) that occupies a small space (in a few KiBs).
- - Storage Account is created as RA GZRS if the region supports zonal redundancy. If the region doesn't support Zonal redundancy, the Storage Account is created as RA GRS.
- If any existing policy stops the creation of a Storage Account on the subscription or resource group with GRS redundancy, the Storage Account is created as LRS. The Storage Account that's created is **General Purpose v2**, with block blobs stored on the hot tier in the Blob container.
- - The number of recovery points is determined by the Backup policy used to configure backup of the disk backup instance. According to the Garbage collection process, the older block blobs are deleted, as the corresponding older recovery points are pruned.
-
- - Don't apply resource lock or policies or firewall on the snapshot resource group or Storage Account created by Azure Backup service. The service creates and manages resources in this Snapshot resource group that's assigned to a backup instance when you configure a disk backup. The service creates the Storage Account and its resources, and this shouldn't be deleted or moved.
-
- >[!Note]
- >If a Storage Account is deleted, backups will fail, and restore will fail for all existing recovery points.
+ - Once you configure the backup of a disk, you can't change the Snapshot Resource Group that's assigned to a backup instance.
:::image type="content" source="./media/backup-managed-disks/validate-snapshot-resource-group-inline.png" alt-text="Screenshot showing the process to initiate prerequisites checks." lightbox="./media/backup-managed-disks/validate-snapshot-resource-group-expanded.png":::
backup Disk Backup Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/disk-backup-overview.md
Title: Overview of Azure Disk Backup description: Learn about the Azure Disk backup solution. Previously updated : 05/27/2021 Last updated : 03/10/2022+++ # Overview of Azure Disk Backup
Incremental snapshots are always stored on standard storage, irrespective of the
The snapshots created by Azure Backup are stored in the resource group within your Azure subscription and incur Snapshot Storage charges. For more details about snapshot pricing, see [Managed Disk Pricing](https://azure.microsoft.com/pricing/details/managed-disks/). Because the snapshots aren't copied to the Backup Vault, Azure Backup doesn't charge a Protected Instance fee and Backup Storage cost doesn't apply.
-During a backup operation, the Azure Backup service creates a Storage Account in the Snapshot Resource Group, where the snapshots are stored. Managed disk's incremental snapshots are ARM resources created on Resource group and not in Storage Account.
-
-Storage Account is used to store metadata for each recovery point. Azure Backup service creates a Blob container per disk backup instance. For each recovery point, a block blob is created to store metadata information describing the recovery point, such as subscription, disk ID, disk attributes, and so on, that occupies a small space (in a few KiBs).
-
-The storage account is created as RA GZRS if the region supports zonal redundancy. If the region doesn't support Zonal redundancy, the storage account is created as RAGRS. If your existing policy stops creation of storage accounts on the subscription or resource group with GRS redundancy, the Storage account is created as LRS. The storage account created is General Purpose v2 with block blobs stored on Hot tier in the blob container. You're charged for the Storage Account according to the storage account's redundancy. These charges are for the size of the block blobs. However, this will be a minimal amount as it stores metadata only, which are few KiBs per recovery point.
The number of recovery points is determined by the Backup policy used to configure backups of the disk backup instances. Older block blobs are deleted according to the garbage collection process as the corresponding older recovery points are pruned.

## Next steps
cognitive-services Speech Container Batch Processing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-container-batch-processing.md
The batch processing kit offers three modes, using the `--run-mode` parameter.
#### [REST](#tab/rest)
-`REST` mode is an API server mode that provides a basic set of HTTP endpoints for audio file batch submission, status checking, and long polling. Also enables programmatic consumption using a python module extension, or importing as a submodule.
+`REST` mode is an API server mode that provides a basic set of HTTP endpoints for audio file batch submission, status checking, and long polling. It also enables programmatic consumption through a Python module extension, or by importing it as a submodule.
:::image type="content" source="media/containers/batch-rest-api-mode.png" alt-text="A diagram showing the batch-kit container processing files in REST mode.":::
cognitive-services Samples Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/big-data/samples-python.md
from requests import Request
from mmlspark.io.http import HTTPTransformer, http_udf from pyspark.sql.functions import udf, col
-# Use any requests from the python requests library
+# Use any requests from the Python requests library
def world_bank_request(country):
    return Request("GET", "http://api.worldbank.org/v2/country/{}?format=json".format(country))
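As a quick sketch of how the helper above behaves (assuming only the `requests` library; the `Request` object only describes the call, and nothing is sent over the network until the request is prepared and dispatched):

```python
from requests import Request

def world_bank_request(country):
    # Build (but do not send) a GET request for a country's World Bank metadata.
    return Request("GET", "http://api.worldbank.org/v2/country/{}?format=json".format(country))

# Preparing the request resolves the final method and URL without any network I/O.
prepared = world_bank_request("usa").prepare()
print(prepared.method)  # GET
print(prepared.url)     # http://api.worldbank.org/v2/country/usa?format=json
```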
cognitive-services Tutorial Use Azure Notebook Generate Loop Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/tutorial-use-azure-notebook-generate-loop-data.md
Last updated 04/27/2020
-#Customer intent: As a python developer, I want use Personalizer in an Azure Notebook so that I can understand the end to end lifecycle of a Personalizer loop.
+#Customer intent: As a Python developer, I want to use Personalizer in an Azure Notebook so that I can understand the end-to-end lifecycle of a Personalizer loop.
# Tutorial: Use Personalizer in Azure Notebook
These values have a very short duration in order to show changes in this tutoria
Run each executable cell and wait for it to return. You know it is done when the brackets next to the cell display a number instead of a `*`. The following sections explain what each cell does programmatically and what to expect for the output.
-### Include the python modules
+### Include the Python modules
-Include the required python modules. The cell has no output.
+Include the required Python modules. The cell has no output.
```python
import json
communication-services Sub Eligibility Number Capability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/sub-eligibility-number-capability.md
The tables below summarize current availability:
| : | :-- | : | : | :- | : |
| Denmark | Toll-Free | Not Available | Not Available | Public Preview | Public Preview\* |
| Denmark | Local | Not Available | Not Available | Public Preview | Public Preview\* |
+| USA & Puerto Rico | Toll-Free | General Availability | General Availability | Public Preview | Public Preview\* |
+| USA & Puerto Rico | Local | Not Available | Not Available | Public Preview | Public Preview\* |
\* Available through Azure Bot Framework and Dynamics only
communication-services Dominant Speaker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/calling-sdk/dominant-speaker.md
Title: Get active speakers description: Use Azure Communication Services SDKs to render the active speakers in a call.--++ Last updated 08/10/2021
+zone_pivot_groups: acs-plat-web-ios-android-windows
#Customer intent: As a developer, I want to get a list of active speakers within a call.
During an active call, you may want to get a list of active speakers in order to
- A user access token to enable the calling client. For more information, see [Create and manage access tokens](../../quickstarts/access-tokens.md). - Optional: Complete the quickstart to [add voice calling to your application](../../quickstarts/voice-video-calling/getting-started-with-calling.md) [!INCLUDE [Dominant Speaker JavaScript](./includes/dominant-speaker/dominant-speaker-web.md)]+++ ## Next steps - [Learn how to manage video](./manage-video.md)
confidential-ledger Quickstart Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-ledger/quickstart-python.md
pip install azure.confidentialledger
[!INCLUDE [Register the microsoft.ConfidentialLedger resource provider](../../includes/confidential-ledger-register-rp.md)]
-## Create your python app
+## Create your Python app
### Initialization
-We can now start writing our python application. First, we'll import the required packages.
+We can now start writing our Python application. First, we'll import the required packages.
```python
# Import the Azure authentication library
cosmos-db Diagnostic Queries Cassandra https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/diagnostic-queries-cassandra.md
For [resource-specific tables](../cosmosdb-monitor-resource-logs.md#create-setti
:::image type="content" source="./media/cassandra-log-analytics/log-analytics-questions-bubble.png" alt-text="Image of a bubble word map with possible questions on how to leverage Log Analytics within Cosmos DB":::

### RU consumption
-- What application queries are causing high RU consumption
+- Cassandra operations that are consuming high RU/s.
```kusto
CDBCassandraRequests
-| where DatabaseName startswith "azure"
+| where DatabaseName=="azure_cosmos" and CollectionName=="user"
| project TimeGenerated, RequestCharge, OperationName, requestType=split(split(PIICommandText,'"')[3], ' ')[0]
-| summarize max(RequestCharge) by bin(TimeGenerated, 10m), tostring(requestType);
+| summarize max(RequestCharge) by bin(TimeGenerated, 10m), tostring(requestType), OperationName;
```
-- Monitoring RU Consumption per operation on logical partition keys.
+- Monitoring RU consumption per operation on logical partition keys.
```kusto
CDBPartitionKeyRUConsumption
-| where DatabaseName startswith "azure"
+| where DatabaseName=="azure_cosmos" and CollectionName=="user"
| summarize TotalRequestCharge=sum(todouble(RequestCharge)) by PartitionKey, PartitionKeyRangeId
| order by TotalRequestCharge;

CDBPartitionKeyRUConsumption
-| where DatabaseName startswith "azure"
+| where DatabaseName=="azure_cosmos" and CollectionName=="user"
| summarize TotalRequestCharge=sum(todouble(RequestCharge)) by OperationName, PartitionKey
| order by TotalRequestCharge;

CDBPartitionKeyRUConsumption
-| where DatabaseName startswith "azure"
-| summarize TotalRequestCharge=sum(todouble(RequestCharge)) by bin(TimeGenerated, 1m), PartitionKey, PartitionKeyRangeId
+| where DatabaseName=="azure_cosmos" and CollectionName=="user"
+| summarize TotalRequestCharge=sum(todouble(RequestCharge)) by bin(TimeGenerated, 1m), PartitionKey
| render timechart;
```

- What are the top queries impacting RU consumption?

```kusto
-let topRequestsByRUcharge = CDBDataPlaneRequests
-| where TimeGenerated > ago(24h)
-| project RequestCharge , TimeGenerated, ActivityId;
CDBCassandraRequests
-| project ActivityId, DatabaseName, CollectionName, queryText=split(split(PIICommandText,'"')[3], ' ')[0]
-| join kind=inner topRequestsByRUcharge on ActivityId
-| project DatabaseName, CollectionName, tostring(queryText), RequestCharge, TimeGenerated
-| order by RequestCharge desc
-| take 10;
+| where DatabaseName=="azure_cosmos" and CollectionName=="user"
+| where TimeGenerated > ago(24h)
+| project ActivityId, DatabaseName, CollectionName, queryText=split(split(PIICommandText,'"')[3], ' ')[0], RequestCharge, TimeGenerated
+| order by RequestCharge desc;
```
-- RU Consumption based on variations in payload sizes for read and write operations.
+- RU consumption based on variations in payload sizes for read and write operations.
```kusto
// This query is looking at read operations
-CDBDataPlaneRequests
-| where OperationName in ("Read", "Query")
-| summarize maxResponseLength=max(ResponseLength), maxRU=max(RequestCharge) by bin(TimeGenerated, 10m), OperationName
+CDBCassandraRequests
+| where DatabaseName=="azure_cosmos" and CollectionName=="user"
+| project ResponseLength, TimeGenerated, RequestCharge, cassandraOperationName=split(split(PIICommandText,'"')[3], ' ')[0]
+| where cassandraOperationName =="SELECT"
+| summarize maxResponseLength=max(ResponseLength), maxRU=max(RequestCharge) by bin(TimeGenerated, 10m), tostring(cassandraOperationName)
// This query is looking at write operations
-CDBDataPlaneRequests
-| where OperationName in ("Create", "Upsert", "Delete", "Execute")
-| summarize maxResponseLength=max(ResponseLength), maxRU=max(RequestCharge) by bin(TimeGenerated, 10m), OperationName
+CDBCassandraRequests
+| where DatabaseName=="azure_cosmos" and CollectionName=="user"
+| project ResponseLength, TimeGenerated, RequestCharge, cassandraOperationName=split(split(PIICommandText,'"')[3], ' ')[0]
+| where cassandraOperationName in ("CREATE", "UPDATE", "INSERT", "DELETE", "DROP")
+| summarize maxResponseLength=max(ResponseLength), maxRU=max(RequestCharge) by bin(TimeGenerated, 10m), tostring(cassandraOperationName)
// Write operations over a time period.
-CDBDataPlaneRequests
-| where OperationName in ("Create", "Update", "Delete", "Execute")
-| summarize maxResponseLength=max(ResponseLength) by bin(TimeGenerated, 1m), OperationName
+CDBCassandraRequests
+| where DatabaseName=="azure_cosmos" and CollectionName=="user"
+| project ResponseLength, TimeGenerated, RequestCharge, cassandraOperationName=split(split(PIICommandText,'"')[3], ' ')[0]
+| where cassandraOperationName in ("CREATE", "UPDATE", "INSERT", "DELETE", "DROP")
+| summarize maxResponseLength=max(ResponseLength), maxRU=max(RequestCharge) by bin(TimeGenerated, 10m), tostring(cassandraOperationName)
+| render timechart;
+
+// Read operations over a time period.
+CDBCassandraRequests
+| where DatabaseName=="azure_cosmos" and CollectionName=="user"
+| project ResponseLength, TimeGenerated, RequestCharge, cassandraOperationName=split(split(PIICommandText,'"')[3], ' ')[0]
+| where cassandraOperationName =="SELECT"
+| summarize maxResponseLength=max(ResponseLength), maxRU=max(RequestCharge) by bin(TimeGenerated, 10m), tostring(cassandraOperationName)
| render timechart;
```
+- RU consumption based on read and write operations by logical partition.
+```kusto
+CDBPartitionKeyRUConsumption
+| where DatabaseName=="azure_cosmos" and CollectionName=="user"
+| where OperationName in ("Delete", "Read", "Upsert")
+| summarize totalRU=max(RequestCharge) by OperationName, PartitionKeyRangeId
+```
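To make the `summarize ... by` aggregation in the query above concrete, here is a minimal Python sketch of the same grouping logic; the sample rows are made up for illustration and only mimic the shape of `CDBPartitionKeyRUConsumption` records:

```python
# Hypothetical rows mimicking CDBPartitionKeyRUConsumption records.
rows = [
    {"OperationName": "Read",   "PartitionKeyRangeId": "0", "RequestCharge": 2.8},
    {"OperationName": "Read",   "PartitionKeyRangeId": "0", "RequestCharge": 5.2},
    {"OperationName": "Upsert", "PartitionKeyRangeId": "1", "RequestCharge": 10.4},
]

# Equivalent of: summarize totalRU=max(RequestCharge) by OperationName, PartitionKeyRangeId
total_ru = {}
for row in rows:
    key = (row["OperationName"], row["PartitionKeyRangeId"])
    total_ru[key] = max(total_ru.get(key, 0.0), row["RequestCharge"])

print(total_ru[("Read", "0")])    # 5.2
print(total_ru[("Upsert", "1")])  # 10.4
```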
+
- RU consumption by physical and logical partition.
```kusto
CDBPartitionKeyRUConsumption
-| where DatabaseName=="uprofile" and AccountName startswith "azure"
+| where DatabaseName=="azure_cosmos" and CollectionName=="user"
| summarize totalRequestCharge=sum(RequestCharge) by PartitionKey, PartitionKeyRangeId;
```
-- Is there a high RU consumption because of having hot partition?
+- Is a hot partition leading to high RU consumption?
```kusto
CDBPartitionKeyStatistics
-| where AccountName startswith "azure"
+| where DatabaseName=="azure_cosmos" and CollectionName=="user"
| where TimeGenerated > now(-8h)
| summarize StorageUsed = sum(SizeKb) by PartitionKey
| order by StorageUsed desc
CDBPartitionKeyStatistics
| project AccountName=tolower(AccountName), PartitionKey, SizeKb;
CDBCassandraRequests
| project AccountName=tolower(AccountName), RequestCharge, ErrorCode, OperationName, ActivityId, DatabaseName, CollectionName, PIICommandText, RegionName
-| where DatabaseName != "<empty>"
+| where DatabaseName=="azure_cosmos" and CollectionName=="user"
| join kind=inner storageUtilizationPerPartitionKey on $left.AccountName==$right.AccountName
| where ErrorCode != -1 //successful
| project AccountName, PartitionKey, ErrorCode, RequestCharge, SizeKb, OperationName, ActivityId, DatabaseName, CollectionName, PIICommandText, RegionName;
CDBCassandraRequests
### Latency
- Number of server-side timeouts (Status Code - 408) seen in the time window.
```kusto
-CDBDataPlaneRequests
-| where TimeGenerated >= now(-6h)
-| where AccountName startswith "azure"
-| where StatusCode == 408
-| summarize count() by bin(TimeGenerated, 10m)
-| render timechart
+CDBCassandraRequests
+| where DatabaseName=="azure_cosmos" and CollectionName=="user"
+| where ErrorCode in (4608, 4352) //Corresponding code in Cassandra
+| summarize max(DurationMs) by bin(TimeGenerated, 10m), ErrorCode
+| render timechart;
```
- Do we observe spikes in server-side latencies in the specified time window?
```kusto
-CDBDataPlaneRequests
+CDBCassandraRequests
| where TimeGenerated > now(-6h)
-| where AccountName startswith "azure"
+| where DatabaseName=="azure_cosmos" and CollectionName=="user"
| summarize max(DurationMs) by bin(TimeGenerated, 10m)
-| render timechart
+| render timechart;
```
-- Query operations that are getting throttled.
+- Operations that are getting throttled.
```kusto CDBCassandraRequests
+| where DatabaseName=="azure_cosmos" and CollectionName=="user"
| project RequestLength, ResponseLength, RequestCharge, DurationMs, TimeGenerated, OperationName, query=split(split(PIICommandText,'"')[3], ' ')[0]
CDBCassandraRequests
```
- What queries are causing your application to throttle within a specified time period, looking specifically at 429 errors?
```kusto
-let throttledRequests = CDBDataPlaneRequests
-| where StatusCode==429
-| project OperationName , TimeGenerated, ActivityId;
CDBCassandraRequests
-| project PIICommandText, ActivityId, DatabaseName , CollectionName
-| join kind=inner throttledRequests on ActivityId
+| where DatabaseName=="azure_cosmos" and CollectionName=="user"
+| where ErrorCode==4097 // Corresponding error code in Cassandra
| project DatabaseName , CollectionName , CassandraCommands=split(split(PIICommandText,'"')[3], ' ')[0] , OperationName, TimeGenerated;
```
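The nested `split(split(PIICommandText,'"')[3], ' ')[0]` expression used throughout these queries extracts the leading CQL keyword from the logged command text. A rough Python equivalent; note the sample `PIICommandText` value below is a hypothetical shape chosen only to illustrate the indexing:

```python
# Hypothetical logged command text for a Cassandra request (illustration only).
pii_command_text = '{"request": "SELECT * FROM user WHERE id = 1"}'

# Equivalent of: split(split(PIICommandText, '"')[3], ' ')[0]
# First split on double quotes and take element 3 (the command string),
# then split on spaces and take element 0 (the CQL keyword).
request_type = pii_command_text.split('"')[3].split(' ')[0]
print(request_type)  # SELECT
```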
cosmos-db Migrate Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/migrate-data.md
You can move data from existing Cassandra workloads to Azure Cosmos DB by using
Use the [CQL COPY command](https://cassandra.apache.org/doc/latest/cassandra/tools/cqlsh.html#cqlshrc) to copy local data to the Cassandra API account in Azure Cosmos DB.
+> [!WARNING]
+> Only use the CQL COPY command to migrate small datasets. To move large datasets, [migrate data by using Spark](#migrate-data-by-using-spark).
+
1. To be certain that your csv file contains the correct file structure, use the `COPY TO` command to export data directly from your source Cassandra table to a csv file (ensure that cqlsh is connected to the source table using the appropriate credentials):

```bash
cosmos-db Local Emulator Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/local-emulator-release-notes.md
This article shows the Azure Cosmos DB Emulator released versions and it details
## Release notes
+### 2.14.6 (March 7, 2022)
+
+ - This release updates the Azure Cosmos DB Emulator background services to match the latest online functionality of the Azure Cosmos DB. In addition to this update, a couple of issues were addressed in this release:
+ * Fix for an issue related to high CPU usage when the emulator is running.
+ * Add PowerShell option to set the Mongo API version: "-MongoApiVersion". Valid settings are: "3.2", "3.6", and "4.0"
+ ### 2.14.5 (January 18, 2022) - This release updates the Azure Cosmos DB Emulator background services to match the latest online functionality of the Azure Cosmos DB. One other important update with this release is to reduce the number of services executed in the background and start them as needed.
cosmos-db Partial Document Update Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/partial-document-update-getting-started.md
if (response.isSuccessStatusCode()) {
Support for Partial document update (Patch API) in the [Azure Cosmos DB JavaScript SDK](sql/sql-api-sdk-node.md) is available from version *3.15.0* onwards. You can download it from the [NPM Registry](https://www.npmjs.com/package/@azure/cosmos/v/3.15.0)

> [!NOTE]
-> A complete partial document update sample can be found in the [.js v3 samples repository](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/cosmosdb/cosmos/samples/v3/typescript/src/ItemManagement.ts#L167) on GitHub. In the sample, as the container is created without a partition key specified, the Javascript SDK
+> A complete partial document update sample can be found in the [.js v3 samples repository](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/cosmosdb/cosmos/samples/v3/typescript/src/ItemManagement.ts#L167) on GitHub. In the sample, as the container is created without a partition key specified, the JavaScript SDK
resolves the partition key values from the items through the container's partition key definition.
cosmos-db Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/cassandra/autoscale.md
The script in this article demonstrates creating an Azure Cosmos DB account, key
### Run the script

## Clean up resources
cosmos-db Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/cassandra/create.md
The script in this article demonstrates creating an Azure Cosmos DB account, key
### Run the script

## Clean up resources
cosmos-db Lock https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/cassandra/lock.md
The script in this article demonstrates preventing resources from being deleted
### Run the script ## Clean up resources
cosmos-db Serverless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/cassandra/serverless.md
The script in this article demonstrates creating a serverless Azure Cosmos DB ac
### Run the script ## Clean up resources
cosmos-db Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/cassandra/throughput.md
The script in this article creates a Cassandra keyspace with shared throughput a
### Run the script ## Clean up resources
cosmos-db Ipfirewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/common/ipfirewall.md
The script in this article demonstrates creating a Cosmos DB account with defaul
### Run the script ## Clean up resources
cosmos-db Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/common/keys.md
The script in this article demonstrates four operations.
### Run the script ## Clean up resources
cosmos-db Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/common/regions.md
This script uses a SQL (Core) API account, but these operations are identical ac
### Run the script ## Clean up resources
cosmos-db Service Endpoints Ignore Missing Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/common/service-endpoints-ignore-missing-vnet.md
This script uses a SQL (Core) API account. To use this sample for other APIs, ap
### Run the script ## Clean up resources
cosmos-db Service Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/common/service-endpoints.md
This script uses a Core (SQL) API account. To use this sample for other APIs, ap
### Run the script ## Clean up resources
cosmos-db Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/gremlin/autoscale.md
The script in this article demonstrates creating a Gremlin API database and grap
### Run the script ## Clean up resources
cosmos-db Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/gremlin/create.md
The script in this article demonstrates creating a Gremlin database and graph.
### Run the script ## Clean up resources
cosmos-db Lock https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/gremlin/lock.md
The script in this article demonstrates performing resource lock operations for
### Run the script ## Clean up resources
cosmos-db Serverless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/gremlin/serverless.md
The script in this article demonstrates creating a Gremlin serverless account, d
### Run the script ## Clean up resources
cosmos-db Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/gremlin/throughput.md
The script in this article creates a Gremlin database with shared throughput and
### Run the script ## Clean up resources
cosmos-db Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/mongodb/autoscale.md
The script in this article demonstrates creating a MongoDB API database with aut
### Run the script ## Clean up resources
cosmos-db Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/mongodb/create.md
The script in this article demonstrates creating a MongoDB API database and coll
### Run the script ## Clean up resources
cosmos-db Lock https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/mongodb/lock.md
The script in this article demonstrates performing resource lock operations for
### Run the script ## Clean up resources
cosmos-db Serverless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/mongodb/serverless.md
The script in this article demonstrates creating a MongoDB API serverless accoun
### Run the script ## Clean up resources
cosmos-db Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/mongodb/throughput.md
The script in this article creates a MongoDB database with shared throughput and
### Run the script ## Clean up resources
cosmos-db Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/sql/autoscale.md
The script in this article demonstrates creating a SQL API database and containe
### Run the script ## Clean up resources
cosmos-db Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/sql/create.md
The script in this article demonstrates creating a SQL API database and containe
### Run the script ## Clean up resources
cosmos-db Lock https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/sql/lock.md
The script in this article demonstrates performing resource lock operations for
### Run the script ## Clean up resources
cosmos-db Serverless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/sql/serverless.md
The script in this article demonstrates creating a SQL API serverless account wi
### Run the script ## Clean up resources
cosmos-db Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/sql/throughput.md
The script in this article creates a Core (SQL) API database with shared through
### Run the script ## Clean up resources
cosmos-db Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/table/autoscale.md
The script in this article demonstrates creating a Table API table with autoscal
### Run the script ## Clean up resources
cosmos-db Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/table/create.md
The script in this article demonstrates creating a Table API table.
### Run the script ## Clean up resources
cosmos-db Lock https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/table/lock.md
The script in this article demonstrates performing resource lock operations for
### Run the script ## Clean up resources
cosmos-db Serverless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/table/serverless.md
The script in this article demonstrates creating a Table API serverless account
### Run the script ## Clean up resources
cosmos-db Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/table/throughput.md
The script in this article creates a Table API table then updates the throughput
### Run the script ## Clean up resources
data-factory Concepts Data Flow Performance Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-data-flow-performance-pipelines.md
If your data flows execute in parallel, we recommend that you don't enable the A
## Execute data flows sequentially
-If you execute your data flow activities in sequence, it is recommended that you set a TTL in the Azure IR configuration. The service will reuse the compute resources, resulting in a faster cluster start-up time. Each activity will still be isolated and receive a new Spark context for each execution. To reduce the time between sequential activities even more, set the **quick re-use** checkbox on the Azure IR to tell the service to re-use the existing cluster.
+If you execute your data flow activities in sequence, we recommend that you set a time to live (TTL) in the Azure IR configuration. The service will reuse the compute resources, resulting in a faster cluster start-up time. Each activity will still be isolated and receive a new Spark context for each execution.
## Overloading a single data flow
data-factory Connector File System https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-file-system.md
The following properties are supported for file system under `location` settings
| Property | Description | Required | | - | | -- | | type | The type property under `location` in dataset must be set to **FileServerLocation**. | Yes |
-| folderPath | The path to folder. If you want to use wildcard to filter folder, skip this setting and specify in activity source settings. | No |
+| folderPath | The path to the folder. If you want to use a wildcard to filter folders, skip this setting and specify it in the activity source settings. Note that you will need to set up the file share location in your Windows or Linux environment to expose the folder for sharing. | No |
| fileName | The file name under the given folderPath. If you want to use a wildcard to filter files, skip this setting and specify it in the activity source settings. | No |

**Example:**
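A minimal sketch of the `location` block these properties describe, written as a Python dict for illustration (the folder path and file name are placeholders):

```python
# Hypothetical dataset 'location' settings for the file system connector.
# Only 'type' is required; folderPath/fileName are optional per the table above.
location = {
    "type": "FileServerLocation",            # must be FileServerLocation
    "folderPath": "root/folder/subfolder",   # placeholder path on the file share
    "fileName": "data.csv",                  # placeholder file name
}

assert "type" in location  # the one required property
print(location["type"])    # FileServerLocation
```

Omit `folderPath`/`fileName` here if you filter with wildcards in the activity source settings instead.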
data-factory Control Flow Execute Data Flow Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-execute-data-flow-activity.md
The Core Count and Compute Type properties can be set dynamically to adjust to t
Choose which Integration Runtime to use for your Data Flow activity execution. By default, the service will use the auto-resolve Azure Integration runtime with four worker cores. This IR has a general purpose compute type and runs in the same region as your service instance. For operationalized pipelines, it is highly recommended that you create your own Azure Integration Runtimes that define specific regions, compute type, core counts, and TTL for your data flow activity execution.
-A minimum compute type of General Purpose (compute optimized is not recommended for large workloads) with an 8+8 (16 total v-cores) configuration and a 10-minute is the minimum recommendation for most production workloads. By setting a small TTL, the Azure IR can maintain a warm cluster that will not incur the several minutes of start time for a cold cluster. You can speed up the execution of your data flows even more by select "Quick re-use" on the Azure IR data flow configurations. For more information, see [Azure integration runtime](concepts-integration-runtime.md).
+For most production workloads, the minimum recommendation is a General Purpose compute type with an 8+8 configuration (16 total v-cores) and a 10-minute time to live (TTL). By setting a small TTL, the Azure IR can maintain a warm cluster that avoids the several minutes of start-up time of a cold cluster. For more information, see [Azure integration runtime](concepts-integration-runtime.md).
:::image type="content" source="media/data-flow/ir-new.png" alt-text="Azure Integration Runtime":::
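The recommendation above maps onto the Azure IR's data flow settings roughly as follows, shown as a Python dict for illustration (the property names are an assumption based on the managed integration runtime schema):

```python
# Sketch of the recommended Azure IR data flow configuration described above.
# Property names follow the managed IR schema (assumed here, not authoritative).
data_flow_properties = {
    "computeType": "General",  # General Purpose compute
    "coreCount": 16,           # 8 driver + 8 worker v-cores
    "timeToLive": 10,          # minutes the warm cluster is kept alive
}

assert data_flow_properties["coreCount"] == 8 + 8
print(data_flow_properties)
```

A larger `timeToLive` keeps the cluster warm longer between runs at the cost of paying for idle compute.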
data-factory Data Factory Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-factory-private-link.md
Finally, you must create the private endpoint in your data factory.
| **Private DNS integration** | | | Integrate with private DNS zone | Leave the default of **Yes**. | | Subscription | Select your subscription. |
- | Private DNS zones | Leave the default of **(New) privatelink.azurewebsites.net**.
 | Private DNS zones | Leave the default value for both target sub-resources: 1. datafactory: **(New) privatelink.datafactory.azure.net**. 2. portal: **(New) privatelink.adf.azure.com**.|
7. Select **Review + create**.
data-factory Transform Data Spark Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/scripts/transform-data-spark-powershell.md
This sample PowerShell script creates a pipeline that transforms data in the clo
[!INCLUDE [sample-powershell-install](../../../includes/sample-powershell-install-no-ssh-az.md)] ## Prerequisites
-* **Azure Storage account**. Create a python script and an input file, and upload them to the Azure storage. The output from the spark program is stored in this storage account. The on-demand Spark cluster uses the same storage account as its primary storage.
+* **Azure Storage account**. Create a Python script and an input file, and upload them to Azure Storage. The output from the Spark program is stored in this storage account. The on-demand Spark cluster uses the same storage account as its primary storage.
-### Upload python script to your Blob Storage account
-1. Create a python file named **WordCount_Spark.py** with the following content:
+### Upload Python script to your Blob Storage account
+1. Create a Python file named **WordCount_Spark.py** with the following content:
```python import sys
data-factory Transform Data Using Spark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/transform-data-using-spark.md
The following table describes the JSON properties used in the JSON definition:
| getDebugInfo | Specifies when the Spark log files are copied to the Azure storage used by HDInsight cluster (or) specified by sparkJobLinkedService. Allowed values: None, Always, or Failure. Default value: None. | No | ## Folder structure
-Spark jobs are more extensible than Pig/Hive jobs. For Spark jobs, you can provide multiple dependencies such as jar packages (placed in the java CLASSPATH), python files (placed on the PYTHONPATH), and any other files.
+Spark jobs are more extensible than Pig/Hive jobs. For Spark jobs, you can provide multiple dependencies such as jar packages (placed in the Java CLASSPATH), Python files (placed on the PYTHONPATH), and any other files.
-Create the following folder structure in the Azure Blob storage referenced by the HDInsight linked service. Then, upload dependent files to the appropriate sub folders in the root folder represented by **entryFilePath**. For example, upload python files to the pyFiles subfolder and jar files to the jars subfolder of the root folder. At runtime, the service expects the following folder structure in the Azure Blob storage:
+Create the following folder structure in the Azure Blob storage referenced by the HDInsight linked service. Then, upload dependent files to the appropriate subfolders in the root folder represented by **entryFilePath**. For example, upload Python files to the pyFiles subfolder and jar files to the jars subfolder of the root folder. At runtime, the service expects the following folder structure in the Azure Blob storage:
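The expected layout can be sketched locally like this (the root folder name and entry file name are placeholders; `jars` and `pyFiles` follow the naming above):

```python
from pathlib import Path
import tempfile

# Build the folder structure the service expects under the root folder.
root = Path(tempfile.mkdtemp()) / "sparkJob"   # placeholder root folder name
(root / "jars").mkdir(parents=True)            # jar packages -> Java CLASSPATH
(root / "pyFiles").mkdir()                     # Python files -> PYTHONPATH
(root / "main.py").write_text("# entryFilePath points at this file\n")

layout = sorted(p.relative_to(root).as_posix() for p in root.rglob("*"))
print(layout)  # ['jars', 'main.py', 'pyFiles']
```

In practice you would mirror this layout in the Blob container rather than on a local disk; the sketch only shows the relative paths the service looks for.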
| Path | Description | Required | Type | | | - | -- | |
data-factory Tutorial Transform Data Spark Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-transform-data-spark-powershell.md
If you don't have an Azure subscription, create a [free](https://azure.microsoft
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)]
-* **Azure Storage account**. You create a python script and an input file, and upload them to the Azure storage. The output from the spark program is stored in this storage account. The on-demand Spark cluster uses the same storage account as its primary storage.
+* **Azure Storage account**. You create a Python script and an input file, and upload them to Azure Storage. The output from the Spark program is stored in this storage account. The on-demand Spark cluster uses the same storage account as its primary storage.
* **Azure PowerShell**. Follow the instructions in [How to install and configure Azure PowerShell](/powershell/azure/install-Az-ps).
-### Upload python script to your Blob Storage account
-1. Create a python file named **WordCount_Spark.py** with the following content:
+### Upload Python script to your Blob Storage account
+1. Create a Python file named **WordCount_Spark.py** with the following content:
```python import sys
data-factory Data Factory Json Scripting Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-json-scripting-reference.md
Note the following points:
- The **type** property is set to **HDInsightSpark**. - The **rootPath** is set to **adfspark\\pyFiles** where adfspark is the Azure Blob container and pyFiles is the file folder in that container. In this example, the Azure Blob Storage is the one that is associated with the Spark cluster. You can upload the file to a different Azure Storage. If you do so, create an Azure Storage linked service to link that storage account to the data factory. Then, specify the name of the linked service as a value for the **sparkJobLinkedService** property. See Spark Activity properties for details about this property and other properties supported by the Spark Activity.-- The **entryFilePath** is set to the **test.py**, which is the python file.
+- The **entryFilePath** is set to **test.py**, which is the Python file.
- The **getDebugInfo** property is set to **Always**, which means the log files are always generated (success or failure). > [!IMPORTANT]
databox-online Azure Stack Edge Gpu Deploy Configure Compute https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-configure-compute.md
Before you set up a compute role on your Azure Stack Edge Pro device, make sure
To configure a client to access Kubernetes cluster, you will need the Kubernetes endpoint. Follow these steps to get Kubernetes API endpoint from the local UI of your Azure Stack Edge Pro device.
-1. In the local web UI of your device, go to **Devices** page.
-2. Under the **Device endpoints**, copy the **Kubernetes API service** endpoint. This endpoint is a string in the following format: `https://compute.<device-name>.<DNS-domain>[Kubernetes-cluster-IP-address]`.
+1. In the local web UI of your device, go to **Device** page.
+2. Under **Device endpoints**, copy the **Kubernetes API** endpoint. This endpoint is a string in the following format: `https://compute.<device-name>.<DNS-domain>[Kubernetes-cluster-IP-address]`.
![Device page in local UI](./media/azure-stack-edge-gpu-create-kubernetes-cluster/device-kubernetes-endpoint-1.png)
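Given a device name and DNS domain, the endpoint string has this shape (the device values below are hypothetical):

```python
def kubernetes_api_endpoint(device_name: str, dns_domain: str) -> str:
    """Assemble the Kubernetes API endpoint string shown in the local UI."""
    return f"https://compute.{device_name}.{dns_domain}"

# Hypothetical device values for illustration.
endpoint = kubernetes_api_endpoint("myasedevice", "contoso.com")
print(endpoint)  # https://compute.myasedevice.contoso.com
```

The local UI shows the Kubernetes cluster IP address alongside this string; the DNS name is what you register on the client so `kubectl` can reach the cluster.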
To configure a client to access Kubernetes cluster, you will need the Kubernetes
4. While you are in the local web UI, you can:
- - Go to Kubernetes API, select **advanced settings**, and download an advanced configuration file for Kubernetes.
+ - If you have been provided a key from Microsoft (select users may have a key), go to Kubernetes API, select **Advanced config**, and download an advanced configuration file for Kubernetes.
![Device page in local UI 1](./media/azure-stack-edge-gpu-deploy-configure-compute/download-advanced-config-1.png)-
- If you have been provided a key from Microsoft (select users may have a key), then you can use this config file.
-
+
![Device page in local UI 2](./media/azure-stack-edge-gpu-deploy-configure-compute/download-advanced-config-2.png) - You can also go to **Kubernetes dashboard** endpoint and download an `aseuser` config file.
databox-online Azure Stack Edge Mini R Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-mini-r-overview.md
Previously updated : 10/04/2021 Last updated : 03/09/2022 #Customer intent: As an IT admin, I need to understand what Azure Stack Edge Mini R is and how it works so I can use it to process and transform data before sending to Azure.
Azure Stack Edge Mini R has the following capabilities:
|Accelerated AI inferencing| Enabled by the Intel Movidius Myriad X VPU. | |Wired and wireless | Allows wired and wireless data transfers.| |Data access | Direct data access from Azure Storage Blobs and Azure Files using cloud APIs for additional data processing in the cloud. Local cache on the device is used for fast access of most recently used files.|
-|Disconnected mode| Device and service can be optionally managed via Azure Stack Hub. Deploy, run, manage applications in offline mode. <br> Disconnected mode supports offline upload scenarios.|
+|Disconnected mode| Deploy, run, and manage applications in offline mode. <br> Disconnected mode supports offline upload scenarios.|
|Supported file transfer protocols |Supports standard SMB, NFS, and REST protocols for data ingestion. <br> For more information on supported versions, go to [Azure Stack Edge Mini R system requirements](azure-stack-edge-gpu-system-requirements.md).| |Data refresh | Ability to refresh local files with the latest from cloud. <br> For more information, see [Refresh a share on your Azure Stack Edge](azure-stack-edge-gpu-manage-shares.md#refresh-shares).| |Double encryption | Use of self-encrypting drive provides the first layer of encryption. VPN provides the second layer of encryption. BitLocker support to locally encrypt data and secure data transfer to cloud over *https* . <br> For more information, see [Configure VPN on your Azure Stack Edge Pro R device](azure-stack-edge-mini-r-configure-vpn-powershell.md).| |Bandwidth throttling| Throttle to limit bandwidth usage during peak hours. <br> For more information, see [Manage bandwidth schedules on your Azure Stack Edge](azure-stack-edge-gpu-manage-bandwidth-schedules.md).|
-|Easy ordering| Bulk ordering and tracking of the device via Azure Edge Hardware Center (Preview). <br> For more information, see [Order a device via Azure Edge Hardware Center](azure-stack-edge-gpu-deploy-prep.md#create-a-new-resource).|
+|Easy ordering| Bulk ordering and tracking of the device via Azure Edge Hardware Center. <br> For more information, see [Order a device via Azure Edge Hardware Center](azure-stack-edge-gpu-deploy-prep.md#create-a-new-resource).|
## Use cases
For a discussion of considerations for choosing a region for the Azure Stack Edg
## Next steps -- Review the [Azure Stack Edge Mini R system requirements](azure-stack-edge-gpu-system-requirements.md).
+- Review the [Azure Stack Edge Mini R system requirements](azure-stack-edge-gpu-system-requirements.md).
defender-for-cloud Alerts Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/alerts-reference.md
Title: Reference table for all security alerts in Microsoft Defender for Cloud description: This article lists the security alerts visible in Microsoft Defender for Cloud Previously updated : 03/03/2022 Last updated : 03/10/2022 # Security alerts - a reference guide
Microsoft Defender for Containers provides security alerts on the cluster level
| **Container running in privileged mode (Preview)**<br>(K8S.NODE_PrivilegedContainerArtifacts) | Machine logs indicate that a privileged Docker container is running. A privileged container has full access to the host's resources. If compromised, an attacker can use the privileged container to gain access to the host machine. | PrivilegeEscalation, Execution | Low | | **Container with a sensitive volume mount detected**<br>(K8S_SensitiveMount) | Kubernetes audit log analysis detected a new container with a sensitive volume mount. The volume that was detected is a hostPath type which mounts a sensitive file or folder from the node to the container. If the container gets compromised, the attacker can use this mount for gaining access to the node. | Privilege Escalation | Medium | | **CoreDNS modification in Kubernetes detected**<br>(K8S_CoreDnsModification) <sup>[1](#footnote1)</sup> | Kubernetes audit log analysis detected a modification of the CoreDNS configuration. The configuration of CoreDNS can be modified by overriding its configmap. While this activity can be legitimate, if attackers have permissions to modify the configmap, they can change the behavior of the cluster's DNS server and poison it. | Lateral Movement | Low |
-| **Creation of admission webhook configuration detected**<br>(K8S_AdmissionController) | Kubernetes audit log analysis detected a new admission webhook configuration. Kubernetes has two built-in generic admission controllers: MutatingAdmissionWebhook and ValidatingAdmissionWebhook. The behavior of these admission controllers is determined by an admission webhook that the user deploys to the cluster. The usage of such admission controllers can be legitimate, however attackers can use such webhooks for modifying the requests (in case of MutatingAdmissionWebhook) or inspecting the requests and gain sensitive information (in case of ValidatingAdmissionWebhook). | Credential Access, Persistence | Low |
+| **Creation of admission webhook configuration detected**<br>(K8S_AdmissionController) <sup>[2](#footnote2)</sup>| Kubernetes audit log analysis detected a new admission webhook configuration. Kubernetes has two built-in generic admission controllers: MutatingAdmissionWebhook and ValidatingAdmissionWebhook. The behavior of these admission controllers is determined by an admission webhook that the user deploys to the cluster. The usage of such admission controllers can be legitimate, however attackers can use such webhooks for modifying the requests (in case of MutatingAdmissionWebhook) or inspecting the requests and gain sensitive information (in case of ValidatingAdmissionWebhook). | Credential Access, Persistence | Low |
| **Detected file download from a known malicious source (Preview)**<br>(K8S.NODE_SuspectDownload) | Analysis of processes running within a container detected download of a file from a source frequently used to distribute malware. | PrivilegeEscalation, Execution, Exfiltration, Command And Control | Medium | | **Detected Persistence Attempt (Preview)**<br>(K8S.NODE_NewSingleUserModeStartupScript) | Analysis of processes running within a container detected installation of a startup script for single-user mode. It is extremely rare that any legitimate process needs to execute in that mode so it may indicate an attacker has added a malicious process to every run-level to guarantee persistence. | Persistence | Medium | | **Detected suspicious file download (Preview)**<br>(K8S.NODE_SuspectDownloadArtifacts) | Analysis of processes running within a container detected suspicious download of a remote file. | Persistence | Low | | **Detected suspicious use of the nohup command (Preview)**<br>(K8S.NODE_SuspectNohup) | Analysis of processes running within a container detected suspicious use of the nohup command. Attackers have been seen using the command nohup to run hidden files from a temporary directory to allow their executables to run in the background. It is rare to see this command run on hidden files located in a temporary directory. | Persistence, DefenseEvasion | Medium | | **Detected suspicious use of the useradd command (Preview)**<br>(K8S.NODE_SuspectUserAddition) | Analysis of processes running within a container detected suspicious use of the useradd command. | Persistence | Medium |
-| **Digital currency mining container detected**<br>(K8S_MaliciousContainerImage) | Kubernetes audit log analysis detected a container that has an image associated with a digital currency mining tool. | Execution | High |
+| **Digital currency mining container detected**<br>(K8S_MaliciousContainerImage) <sup>[2](#footnote2)</sup> | Kubernetes audit log analysis detected a container that has an image associated with a digital currency mining tool. | Execution | High |
| **Digital currency mining related behavior detected (Preview)**<br>(K8S.NODE_DigitalCurrencyMining) | Analysis of host data detected the execution of a process or command normally associated with digital currency mining. | Execution | High | | **Docker build operation detected on a Kubernetes node (Preview)**<br>(K8S.NODE_ImageBuildOnNode) | Analysis of processes running within a container indicates a build operation of a container image on a Kubernetes node. While this behavior might be legitimate, attackers might build their malicious images locally to avoid detection. | DefenseEvasion | Low | | **Excessive role permissions assigned in Kubernetes cluster (Preview)**<br>(K8S_ServiceAcountPermissionAnomaly) | Analysis of the Kubernetes audit logs detected an excessive permissions role assignment to your cluster. The listed permissions for the assigned roles are uncommon to the specific service account. This detection considers previous role assignments to the same service account across clusters monitored by Azure, volume per permission, and the impact of the specific permission. The anomaly detection model used for this alert takes into account how this permission is used across all clusters monitored by Microsoft Defender for Cloud. | Privilege Escalation | Low | | **Executable found running from a suspicious location (Preview)**<br>(K8S.NODE_SuspectExecutablePath) | Analysis of host data detected an executable file that is running from a location associated with known suspicious files. This executable could either be legitimate activity, or an indication of a compromised host. | Execution | Medium | | **Execution of hidden file (Preview)**<br>(K8S.NODE_ExecuteHiddenFile) | Analysis of host data indicates that a hidden file was executed by the specified user account. 
| Persistence, DefenseEvasion | Informational | | **Exposed Docker daemon on TCP socket (Preview)**<br>(K8S.NODE_ExposedDocker) | Machine logs indicate that your Docker daemon (dockerd) exposes a TCP socket. By default, Docker configuration, does not use encryption or authentication when a TCP socket is enabled. This enables full access to the Docker daemon, by anyone with access to the relevant port. | Execution, Exploitation | Medium |
-| **Exposed Kubeflow dashboard detected**<br>(K8S_ExposedKubeflow) | The Kubernetes audit log analysis detected exposure of the Istio Ingress by a load balancer in a cluster that runs Kubeflow. This action might expose the Kubeflow dashboard to the internet. If the dashboard is exposed to the internet, attackers can access it and run malicious containers or code on the cluster. Find more details in the following article: https://aka.ms/exposedkubeflow-blog | Initial Access | Medium |
-| **Exposed Kubernetes dashboard detected**<br>(K8S_ExposedDashboard) | Kubernetes audit log analysis detected exposure of the Kubernetes Dashboard by a LoadBalancer service. Exposed dashboard allows an unauthenticated access to the cluster management and poses a security threat. | Initial Access | High |
-| **Exposed Kubernetes service detected**<br>(K8S_ExposedService) | The Kubernetes audit log analysis detected exposure of a service by a load balancer. This service is related to a sensitive application that allows high impact operations in the cluster such as running processes on the node or creating new containers. In some cases, this service doesn't require authentication. If the service doesn't require authentication, exposing it to the internet poses a security risk. | Initial Access | Medium |
+| **Exposed Kubeflow dashboard detected**<br>(K8S_ExposedKubeflow) <sup>[2](#footnote2)</sup> | The Kubernetes audit log analysis detected exposure of the Istio Ingress by a load balancer in a cluster that runs Kubeflow. This action might expose the Kubeflow dashboard to the internet. If the dashboard is exposed to the internet, attackers can access it and run malicious containers or code on the cluster. Find more details in the following article: https://aka.ms/exposedkubeflow-blog | Initial Access | Medium |
+| **Exposed Kubernetes dashboard detected**<br>(K8S_ExposedDashboard) <sup>[2](#footnote2)</sup>| Kubernetes audit log analysis detected exposure of the Kubernetes Dashboard by a LoadBalancer service. Exposed dashboard allows an unauthenticated access to the cluster management and poses a security threat. | Initial Access | High |
+| **Exposed Kubernetes service detected**<br>(K8S_ExposedService) <sup>[2](#footnote2)</sup> | The Kubernetes audit log analysis detected exposure of a service by a load balancer. This service is related to a sensitive application that allows high impact operations in the cluster such as running processes on the node or creating new containers. In some cases, this service doesn't require authentication. If the service doesn't require authentication, exposing it to the internet poses a security risk. | Initial Access | Medium |
| **Exposed Redis service in AKS detected**<br>(K8S_ExposedRedis) | The Kubernetes audit log analysis detected exposure of a Redis service by a load balancer. If the service doesn't require authentication, exposing it to the internet poses a security risk. | Initial Access | Low | | **Indicators associated with DDOS toolkit detected (Preview)**<br>(K8S.NODE_KnownLinuxDDoSToolkit) | Analysis of processes running within a container detected file names that are part of a toolkit associated with malware capable of launching DDoS attacks, opening ports and services, and taking full control over the infected system. This could also possibly be legitimate activity. | Persistence, LateralMovement, Execution, Exploitation | Medium |
-| **K8S API requests from proxy IP address detected**<br>(K8S_TI_Proxy) | Kubernetes audit log analysis detected API requests to your cluster from an IP address that is associated with proxy services, such as TOR. While this behavior can be legitimate, it's often seen in malicious activities, when attackers try to hide their source IP. | Execution | Low |
-| **Kubernetes events deleted**<br>(K8S_DeleteEvents) <sup>[1](#footnote1)</sup> | Defender for Cloud detected that some Kubernetes events have been deleted. Kubernetes events are objects in Kubernetes which contain information about changes in the cluster. Attackers might delete those events for hiding their operations in the cluster. | Defense Evasion | Low |
-| **Kubernetes penetration testing tool detected**<br>(K8S_PenTestToolsKubeHunter) | Kubernetes audit log analysis detected usage of Kubernetes penetration testing tool in the AKS cluster. While this behavior can be legitimate, attackers might use such public tools for malicious purposes. | Execution | Low |
+| **K8S API requests from proxy IP address detected**<br>(K8S_TI_Proxy) <sup>[2](#footnote2)</sup>| Kubernetes audit log analysis detected API requests to your cluster from an IP address that is associated with proxy services, such as TOR. While this behavior can be legitimate, it's often seen in malicious activities, when attackers try to hide their source IP. | Execution | Low |
+| **Kubernetes events deleted**<br>(K8S_DeleteEvents) <sup>[1](#footnote1)</sup> <sup>[2](#footnote2)</sup> | Defender for Cloud detected that some Kubernetes events have been deleted. Kubernetes events are objects in Kubernetes which contain information about changes in the cluster. Attackers might delete those events for hiding their operations in the cluster. | Defense Evasion | Low |
+| **Kubernetes penetration testing tool detected**<br>(K8S_PenTestToolsKubeHunter) <sup>[2](#footnote2)</sup> | Kubernetes audit log analysis detected usage of Kubernetes penetration testing tool in the AKS cluster. While this behavior can be legitimate, attackers might use such public tools for malicious purposes. | Execution | Low |
| **Local host reconnaissance detected (Preview)**<br>(K8S.NODE_LinuxReconnaissance) | Analysis of processes running within a container detected the execution of a command normally associated with common Linux bot reconnaissance. | Discovery | Medium |
| **Manipulation of host firewall detected (Preview)**<br>(K8S.NODE_FirewallDisabled) | Analysis of processes running within a container detected possible manipulation of the on-host firewall. Attackers will often disable this to exfiltrate data. | DefenseEvasion, Exfiltration | Medium |
| **Microsoft Defender for Cloud test alert (not a threat). (Preview)**<br>(K8S.NODE_EICAR) | This is a test alert generated by Microsoft Defender for Cloud. No further action is needed. | Execution | High |
| **MITRE Caldera agent detected (Preview)**<br>(K8S.NODE_MitreCalderaTools) | Analysis of processes running within a container indicates that a suspicious process was running. This is often associated with the MITRE 54ndc47 agent which could be used maliciously to attack other machines. | Persistence, PrivilegeEscalation, DefenseEvasion, CredentialAccess, Discovery, LateralMovement, Execution, Collection, Exfiltration, Command And Control, Probing, Exploitation | Medium |
-| **New container in the kube-system namespace detected**<br>(K8S_KubeSystemContainer) | Kubernetes audit log analysis detected a new container in the kube-system namespace that isn't among the containers that normally run in this namespace. The kube-system namespaces should not contain user resources. Attackers can use this namespace for hiding malicious components. | Persistence | Low |
-| **New high privileges role detected**<br>(K8S_HighPrivilegesRole) | Kubernetes audit log analysis detected a new role with high privileges. A binding to a role with high privileges gives the user\group high privileges in the cluster. Unnecessary privileges might cause privilege escalation in the cluster. | Persistence | Low |
+| **New container in the kube-system namespace detected**<br>(K8S_KubeSystemContainer) <sup>[2](#footnote2)</sup>| Kubernetes audit log analysis detected a new container in the kube-system namespace that isn't among the containers that normally run in this namespace. The kube-system namespaces should not contain user resources. Attackers can use this namespace for hiding malicious components. | Persistence | Low |
+| **New high privileges role detected**<br>(K8S_HighPrivilegesRole) <sup>[2](#footnote2)</sup>| Kubernetes audit log analysis detected a new role with high privileges. A binding to a role with high privileges gives the user\group high privileges in the cluster. Unnecessary privileges might cause privilege escalation in the cluster. | Persistence | Low |
| **Possible attack tool detected (Preview)**<br>(K8S.NODE_KnownLinuxAttackTool) | Analysis of processes running within a container indicates a suspicious tool ran. This tool is often associated with malicious users attacking others. | Execution, Collection, Command And Control, Probing | Medium |
| **Possible backdoor detected (Preview)**<br>(K8S.NODE_LinuxBackdoorArtifact) | Analysis of processes running within a container detected a suspicious file being downloaded and run. This activity has previously been associated with installation of a backdoor. | Persistence, DefenseEvasion, Execution, Exploitation | Medium |
| **Possible command line exploitation attempt (Preview)**<br>(K8S.NODE_ExploitAttempt) | Analysis of processes running within a container detected a possible exploitation attempt against a known vulnerability. | Exploitation | Medium |
Microsoft Defender for Containers provides security alerts on the cluster level
| **Privileged container detected**<br>(K8S_PrivilegedContainer) | Kubernetes audit log analysis detected a new privileged container. A privileged container has access to the node's resources and breaks the isolation between containers. If compromised, an attacker can use the privileged container to gain access to the node. | Privilege Escalation | Low |
| **Process associated with digital currency mining detected (Preview)**<br>(K8S.NODE_CryptoCoinMinerArtifacts) | Analysis of processes running within a container detected the execution of a process normally associated with digital currency mining. | Execution, Exploitation | Medium |
| **Process seen accessing the SSH authorized keys file in an unusual way (Preview)**<br>(K8S.NODE_SshKeyAccess) | An SSH authorized_keys file was accessed in a method similar to known malware campaigns. This access could signify that an actor is attempting to gain persistent access to a machine. | Unknown | Low |
-| **Role binding to the cluster-admin role detected**<br>(K8S_ClusterAdminBinding) | Kubernetes audit log analysis detected a new binding to the cluster-admin role which gives administrator privileges. Unnecessary administrator privileges might cause privilege escalation in the cluster. | Persistence | Low |
+| **Role binding to the cluster-admin role detected**<br>(K8S_ClusterAdminBinding) <sup>[2](#footnote2)</sup> | Kubernetes audit log analysis detected a new binding to the cluster-admin role which gives administrator privileges. Unnecessary administrator privileges might cause privilege escalation in the cluster. | Persistence | Low |
| **Screenshot taken on host (Preview)**<br>(K8S.NODE_KnownLinuxScreenshotTool) | Analysis of host/device data detected the use of a screen capture tool. Attackers may use these tools to access private data. | Collection | Low |
| **Script extension mismatch detected (Preview)**<br>(K8S.NODE_MismatchedScriptFeatures) | Analysis of processes running within a container detected a mismatch between the script interpreter and the extension of the script file provided as input. This has frequently been associated with attacker script executions. | DefenseEvasion | Medium |
| **Security-related process termination detected (Preview)**<br>(K8S.NODE_SuspectProcessTermination) | Analysis of processes running within a container detected attempt to terminate processes related to security monitoring on the container. Attackers will often try to terminate such processes using predefined scripts post-compromise. | Persistence | Low |
Microsoft Defender for Containers provides security alerts on the cluster level
| | | | |

<sup><a name="footnote1"></a>1</sup>: **Limitations on GKE clusters**: GKE uses a Kubernetes audit policy that doesn't support all alert types. As a result, security alerts that are based on Kubernetes audit events aren't supported for GKE clusters.
-
+
+<sup><a name="footnote2"></a>2</sup>: This alert is supported on Windows.
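The exposure alerts in the table above (for example, K8S_ExposedDashboard and K8S_ExposedService) are raised from audit events such as the creation of a `LoadBalancer` service in front of a sensitive workload. As an illustration only (this manifest is hypothetical and not taken from the alert logic), the following is the kind of configuration that publicly exposes the Kubernetes dashboard:

```yaml
# Hypothetical example: a Service of type LoadBalancer that exposes the
# Kubernetes dashboard to the internet -- the pattern these alerts flag.
apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: LoadBalancer          # provisions a public IP on most cloud providers
  selector:
    k8s-app: kubernetes-dashboard
  ports:
    - port: 443
      targetPort: 8443
```

Prefer `kubectl proxy` or an authenticated ingress over a raw `LoadBalancer` service for dashboards and other management endpoints.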
+
## <a name="alerts-sql-db-and-warehouse"></a>Alerts for SQL Database and Azure Synapse Analytics

[Further details and notes](defender-for-sql-introduction.md)
defender-for-cloud Defender For Container Registries Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-container-registries-usage.md
Title: How to use Microsoft Defender for container registries
-description: Learn about using Microsoft Defender for container registries to scan Linux images in your Linux-hosted registries
Previously updated : 12/09/2021
+ Title: How to use Defender for Containers
+description: Learn how to use Defender for Containers to scan Linux images in your Linux-hosted registries
Last updated : 03/07/2022
-# Use Microsoft Defender for container registries to scan your images for vulnerabilities
+# Use Defender for Containers to scan your ACR images for vulnerabilities
[!INCLUDE [Banner for top of topics](./includes/banner.md)]

This page explains how to use the built-in vulnerability scanner to scan the container images stored in your Azure Resource Manager-based Azure Container Registry.
-When **Microsoft Defender for container registries** is enabled, any image you push to your registry will be scanned immediately. In addition, any image pulled within the last 30 days is also scanned.
+When **Defender for Containers** is enabled, any image you push to your registry will be scanned immediately. In addition, any image pulled within the last 30 days is also scanned.
When the scanner reports vulnerabilities to Defender for Cloud, Defender for Cloud presents the findings and related information as recommendations. In addition, the findings include related information such as remediation steps, relevant CVEs, CVSS scores, and more. You can view the identified vulnerabilities for one or more subscriptions, or for a specific registry.
When the scanner reports vulnerabilities to Defender for Cloud, Defender for Clo
To enable vulnerability scans of images stored in your Azure Resource Manager-based Azure Container Registry:
-1. Enable **Microsoft Defender for container registries** for your subscription. Defender for Cloud is now ready to scan images in your registries.
+1. Enable **Defender for Containers** for your subscription. Defender for Cloud is now ready to scan images in your registries.
>[!NOTE]
> This feature is charged per image.
To enable vulnerability scans of images stored in your Azure Resource Manager-ba
1. Follow the steps in the remediation section of this pane.
-1. When you have taken the steps required to remediate the security issue, replace the image in your registry:
+1. When you've taken the steps required to remediate the security issue, replace the image in your registry:
- 1. Push the updated image. This will trigger a scan.
+ 1. Push the updated image to trigger a scan.
1. Check the recommendations page for the recommendation [Container registry images should have vulnerability findings resolved (powered by Qualys)](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/dbd0cb49-b563-45e7-9724-889e799fa648). If the recommendation still appears and the image you've handled still appears in the list of vulnerable images, check the remediation steps again.
- 1. When you are sure the updated image has been pushed, scanned, and is no longer appearing in the recommendation, delete the "old" vulnerable image from your registry.
+ 1. When you're sure the updated image has been pushed, scanned, and is no longer appearing in the recommendation, delete the "old" vulnerable image from your registry.
## Disable specific findings

> [!NOTE]
> [!INCLUDE [Legalese](../../includes/defender-for-cloud-preview-legal-text.md)]
-If you have an organizational need to ignore a finding, rather than remediate it, you can optionally disable it. Disabled findings don't impact your secure score or generate unwanted noise.
+If you have an organizational need to ignore a finding, rather than remediate it, you can optionally disable it. Disabled findings don't affect your secure score or generate unwanted noise.
When a finding matches the criteria you've defined in your disable rules, it won't appear in the list of findings. Typical scenarios include:
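The disable rules described above behave like a filter over the scanner's findings: a finding that matches any rule is hidden rather than deleted. A rough sketch of that matching logic, for illustration only (the field names are hypothetical, not the product's schema):

```python
# Illustrative only: models how disable rules hide matching findings
# without deleting them. Finding/rule field names are hypothetical.
def apply_disable_rules(findings, rules):
    """Return only the findings not matched by any disable rule."""
    def matched(finding, rule):
        # A rule matches when every criterion it sets agrees with the finding.
        return all(finding.get(key) == value for key, value in rule.items())
    return [f for f in findings if not any(matched(f, r) for r in rules)]

findings = [
    {"cve": "CVE-2021-44228", "severity": "High"},
    {"cve": "CVE-2020-1234", "severity": "Low"},
]
rules = [{"severity": "Low"}]  # e.g. ignore all low-severity findings
print(apply_disable_rules(findings, rules))
# → [{'cve': 'CVE-2021-44228', 'severity': 'High'}]
```

Because disabled findings are only filtered out of view, deleting the rule later makes them reappear, which matches the portal behavior described above.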
defender-for-cloud Defender For Containers Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-introduction.md
Title: Container security with Microsoft Defender for Cloud description: Learn about Microsoft Defender for Containers Previously updated : 02/28/2022 Last updated : 03/09/2022 # Overview of Microsoft Defender for Containers
Microsoft Defender for Containers is the cloud-native solution for securing your
On this page, you'll learn how you can use Defender for Containers to improve, monitor, and maintain the security of your clusters, containers, and their applications.
-## Availability
+## Microsoft Defender for Containers plan availability
| Aspect | Details |
|--|--|
-| Release state: | General availability (GA)<br>Where indicated, specific features are in preview. [!INCLUDE [Legalese](../../includes/defender-for-cloud-preview-legal-text.md)] |
+| Release state: | General availability (GA)<br> Certain features are in preview. For a full list, see the [availability](supported-machines-endpoint-solutions-clouds-containers.md) section. |
+| Feature availability | Refer to the [availability](supported-machines-endpoint-solutions-clouds-containers.md) section for additional information on feature release state and availability.|
| Pricing: | **Microsoft Defender for Containers** is billed as shown on the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/) |
-| Registries and images: | **Supported**<br> • Linux images in Azure Container Registry (ACR) registries accessible from the public internet with shell access<br> • Private registries with access granted to [Trusted Services](../container-registry/allow-access-trusted-services.md#trusted-services)<br> • [ACR registries protected with Azure Private Link](../container-registry/container-registry-private-link.md)<br><br>**Unsupported**<br> • Windows images<br> • Super-minimalist images such as [Docker scratch](https://hub.docker.com/_/scratch/) images<br> • "Distroless" images that only contain an application and its runtime dependencies without a package manager, shell, or OS<br> • Images with [Open Container Initiative (OCI) Image Format Specification](https://github.com/opencontainers/image-spec/blob/master/spec.md) |
-| Kubernetes distributions and configurations: | **Supported**<br> • Any Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters<br><br>**Unsupported**<br> • Any [taints](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/) applied to your nodes *might* disrupt the configuration of Defender for Containers<br> • The AKS Defender profile doesn't support AKS clusters that don't have RBAC role enabled.<br><br>**Tested on**<br> • [Azure Kubernetes Service](../aks/intro-kubernetes.md)<br> • [Amazon Elastic Kubernetes Service (EKS)](https://aws.amazon.com/eks/)<br> • [Google GKE Standard clusters](https://cloud.google.com/kubernetes-engine/) <br> • [Azure Kubernetes Service on Azure Stack HCI](/azure-stack/aks-hci/overview)<br> • [Kubernetes](https://kubernetes.io/docs/home/)<br> • [AKS Engine](https://github.com/Azure/aks-engine)<br> • [Azure Red Hat OpenShift](https://azure.microsoft.com/services/openshift/)<br> • [Red Hat OpenShift](https://www.openshift.com/learn/topics/kubernetes/) (version 4.6 or newer)<br> • [VMware Tanzu Kubernetes Grid](https://tanzu.vmware.com/kubernetes-grid)<br> • [Rancher Kubernetes Engine](https://rancher.com/docs/rke/latest/en/) |
| Required roles and permissions: | • To auto provision the required components, [Contributor](../role-based-access-control/built-in-roles.md#contributor), [Log Analytics Contributor](../role-based-access-control/built-in-roles.md#log-analytics-contributor), or [Azure Kubernetes Service Contributor Role](../role-based-access-control/built-in-roles.md#azure-kubernetes-service-contributor-role)<br> • **Security admin** can dismiss alerts<br> • **Security reader** can view vulnerability assessment findings<br> See also [Azure Container Registry roles and permissions](../container-registry/container-registry-roles.md) |
-| Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: National (Azure Government, Azure China 21Vianet) (Except for preview features)<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Connected AWS accounts (Preview) <br> :::image type="icon" source="./media/icons/yes-icon.png"::: Connected GCP projects|
+| Clouds: | **Azure**:<br>:::image type="icon" source="./medi#defender-for-containers-feature-availability). |
| | |

## What are the benefits of Microsoft Defender for Containers?
The **Azure Policy add-on for Kubernetes** collects cluster and workload configu
| azuredefender-collector-misc-* | kube-system | [Deployment](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/) | A set of containers that focus on collecting inventory and security events from the Kubernetes environment that aren't bound to a specific node. | N/A | memory: 64Mi <br> <br>cpu: 60m | No |
| azuredefender-publisher-ds-* | kube-system | [DaemonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/) | Publishes the collected data to Microsoft Defender for Containers' backend service, where it's processed and analyzed. | N/A | memory: 64Mi  <br> <br> cpu: 60m | Https 443 <br> <br> Learn more about the [outbound access prerequisites](../aks/limit-egress-traffic.md#microsoft-defender-for-containers) |
-\* resource limits are not configurable
+\* resource limits aren't configurable
### [**On-premises / IaaS (Arc)**](#tab/defender-for-container-arch-arc)
Defender for Containers includes an integrated vulnerability scanner for scannin
There are four triggers for an image scan:

-- **On push** - Whenever an image is pushed to your registry, Defender for container registries automatically scans that image. To trigger the scan of an image, push it to your repository.
+- **On push** - Whenever an image is pushed to your registry, Defender for Containers automatically scans that image. To trigger the scan of an image, push it to your repository.
- **Recently pulled** - Since new vulnerabilities are discovered every day, **Microsoft Defender for Containers** also scans, on a weekly basis, any image that has been pulled within the last 30 days. There's no extra charge for these rescans; as mentioned above, you're billed once per image.
There are four triggers for an image scan:
- **Continuous scan**- This trigger has two modes:
- - A Continuous scan based on an image pull. This scan is performed every 7 days after an image was pulled, and only for 30 days after the image was pulled. This mode doesn't require the security profile, or extension.
+ - A Continuous scan based on an image pull. This scan is performed every seven days after an image was pulled, and only for 30 days after the image was pulled. This mode doesn't require the security profile, or extension.
- - (Preview) Continuous scan for running images. This scan is performed every 7 days for as long as the image runs. This mode runs instead of the above mode when the Defender profile, or extension is running on the cluster.
+ - (Preview) Continuous scan for running images. This scan is performed every seven days for as long as the image runs. This mode runs instead of the above mode when the Defender profile, or extension is running on the cluster.
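The pull-based cadence above (a rescan every seven days after a pull, only within 30 days of that pull) can be sketched as a small date calculation. The helper below is purely illustrative and not part of any Defender API:

```python
from datetime import date, timedelta

def rescan_dates(pulled, interval_days=7, window_days=30):
    """Dates an image pulled on `pulled` would be rescanned: every
    `interval_days` after the pull, only within `window_days` of it."""
    return [pulled + timedelta(days=d)
            for d in range(interval_days, window_days + 1, interval_days)]

# An image pulled on 1 March 2022 would be rescanned on 8, 15, 22, and 29 March.
print(rescan_dates(date(2022, 3, 1)))
```

Once the 30-day window closes, the pull-based mode stops rescanning; the running-image mode (when the Defender profile or extension is present) keeps rescanning for as long as the image runs.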
This scan typically completes within 2 minutes, but it might take up to 40 minutes. For every vulnerability identified, Defender for Cloud provides actionable recommendations, along with a severity classification, and guidance for how to remediate the issue.
Defender for Cloud filters, and classifies findings from the scanner. When an im
### View vulnerabilities for running images
-Defender for Containers expands on the registry scanning features of the Defender for container registries plan by introducing the **preview feature** of run-time visibility of vulnerabilities powered by the Defender profile, or extension.
+Defender for Containers expands on the registry scanning features by introducing the **preview feature** of run-time visibility of vulnerabilities powered by the Defender profile, or extension.
+
+> [!NOTE]
+> There's no Defender profile for Windows; it's available only on Linux.
The new recommendation, **Running container images should have vulnerability findings resolved**, only shows vulnerabilities for running images, and relies on the Defender security profile, or extension to discover which images are currently running. This recommendation groups running images that have vulnerabilities, and provides details about the issues discovered, and how to remediate them. The Defender profile, or extension is used to gain visibility into vulnerable containers that are active.
-This recommendation shows running images, and their vulnerabilities based on ACR image image. Images that are deployed from a non ACR registry, will not be scanned, and will appear under the Not applicable tab.
+This recommendation shows running images and their vulnerabilities based on the ACR image. Images that are deployed from a non-ACR registry won't be scanned and will appear under the Not applicable tab.
:::image type="content" source="media/defender-for-containers/running-image-vulnerabilities-recommendation.png" alt-text="Screenshot showing where the recommendation is viewable" lightbox="media/defender-for-containers/running-image-vulnerabilities-recommendation-expanded.png":::
The full list of available alerts can be found in the [Reference table of alerts
## FAQ - Defender for Containers

-- [What happens to subscriptions with Microsoft Defender for Kubernetes or Microsoft Defender for container registries enabled?](#what-happens-to-subscriptions-with-microsoft-defender-for-kubernetes-or-microsoft-defender-for-container-registries-enabled)
+- [What happens to subscriptions with Microsoft Defender for Kubernetes or Microsoft Defender for Containers enabled?](#what-happens-to-subscriptions-with-microsoft-defender-for-kubernetes-or-microsoft-defender-for-containers-enabled)
- [Is Defender for Containers a mandatory upgrade?](#is-defender-for-containers-a-mandatory-upgrade)
- [Does the new plan reflect a price increase?](#does-the-new-plan-reflect-a-price-increase)
- [What are the options to enable the new plan at scale?](#what-are-the-options-to-enable-the-new-plan-at-scale)
-### What happens to subscriptions with Microsoft Defender for Kubernetes or Microsoft Defender for container registries enabled?
+### What happens to subscriptions with Microsoft Defender for Kubernetes or Microsoft Defender for Containers enabled?
Subscriptions that already have one of these plans enabled can continue to benefit from it.
If you haven't enabled them yet, or create a new subscription, these plans can n
### Is Defender for Containers a mandatory upgrade?
-No. Subscriptions that have either Microsoft Defender for Kubernetes or Microsoft Defender for container registries enabled don't need to be upgraded to the new Microsoft Defender for Containers plan. However, they won't benefit from the new and improved capabilities and theyΓÇÖll have an upgrade icon shown alongside them in the Azure portal.
+No. Subscriptions that have either Microsoft Defender for Kubernetes or Microsoft Defender for container registries enabled don't need to be upgraded to the new Microsoft Defender for Containers plan. However, they won't benefit from the new and improved capabilities, and they'll have an upgrade icon shown alongside them in the Azure portal.
### Does the new plan reflect a price increase?

No. There's no direct price increase. The new comprehensive Container security plan combines Kubernetes protection and container registry image scanning, and removes the previous dependency on the (paid) Defender for Servers plan.
defender-for-cloud Defender For Servers Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-servers-introduction.md
Title: Microsoft Defender for servers - the benefits and features description: Learn about the benefits and features of Microsoft Defender for servers. Previously updated : 11/09/2021 Last updated : 03/08/2022 # Introduction to Microsoft Defender for servers
To protect machines in hybrid and multi-cloud environments, Defender for Cloud u
- [Connect your AWS accounts to Microsoft Defender for Cloud](quickstart-onboard-aws.md)

> [!TIP]
-> For details of which Defender for servers features are relevant for machines running on other cloud environments, see [Supported features for virtual machines and servers](supported-machines-endpoint-solutions-clouds.md?tabs=features-windows#supported-features-for-virtual-machines-and-servers-).
+> For details of which Defender for servers features are relevant for machines running on other cloud environments, see [Supported features for virtual machines and servers](supported-machines-endpoint-solutions-clouds-servers.md?tabs=features-windows#supported-features-for-virtual-machines-and-servers-).
## What are the benefits of Microsoft Defender for servers?
defender-for-cloud Endpoint Protection Recommendations Technical https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/endpoint-protection-recommendations-technical.md
Title: Endpoint protection recommendations in Microsoft Defender for Clouds description: How the endpoint protection solutions are discovered and identified as healthy. Previously updated : 12/14/2021 Last updated : 03/08/2022 # Endpoint protection assessment and recommendations in Microsoft Defender for Cloud [!INCLUDE [Banner for top of topics](./includes/banner.md)]
-Microsoft Defender for Cloud provides health assessments of [supported](supported-machines-endpoint-solutions-clouds.md#endpoint-supported) versions of Endpoint protection solutions. This article explains the scenarios that lead Defender for Cloud to generate the following two recommendations:
+Microsoft Defender for Cloud provides health assessments of [supported](supported-machines-endpoint-solutions-clouds-servers.md#endpoint-supported) versions of Endpoint protection solutions. This article explains the scenarios that lead Defender for Cloud to generate the following two recommendations:
- [Endpoint protection should be installed on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/4fb67663-9ab9-475d-b026-8c544cced439) - [Endpoint protection health issues should be resolved on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/37a3689a-818e-4a0e-82ac-b1392b9bb000)
defender-for-cloud Kubernetes Workload Protections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/kubernetes-workload-protections.md
Title: Workload protections for your Kubernetes workloads description: Learn how to use Microsoft Defender for Cloud's set of Kubernetes workload protection security recommendations Previously updated : 02/28/2022 Last updated : 03/08/2022 # Protect your Kubernetes workloads
This page describes how to use Microsoft Defender for Cloud's set of security re
> [!TIP]
> For a list of the security recommendations that might appear for Kubernetes clusters and nodes, see the [Container recommendations](recommendations-reference.md#container-recommendations) of the recommendations reference table.
-## Availability
-
-| Aspect | Details |
-|--|--|
-| Release state: | AKS - General availability (GA) <br> Arc enabled Kubernetes - Preview |
-| Pricing: | Free for AKS workloads<br>For Azure Arc-enabled Kubernetes, it's billed according to the Microsoft Defender for Containers plan |
-| Required roles and permissions: | **Owner** or **Security admin** to edit an assignment<br>**Reader** to view the recommendations |
-| Environment requirements: | Kubernetes v1.14 (or newer) is required<br>No PodSecurityPolicy resource (old PSP model) on the clusters<br>Windows nodes are not supported |
-| Azure Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: National (Azure Government, Azure China 21Vianet) |
-| Non-Azure Clouds, and On-prem: | supported via Arc enabled Kubernetes. |
-| | |
-
## Set up your workload protection

Microsoft Defender for Cloud includes a bundle of recommendations that are available once you've installed the **Azure Policy add-on for Kubernetes or extensions**.
defender-for-cloud Os Coverage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/os-coverage.md
Defender for Cloud depends on the [Log Analytics agent](../azure-monitor/agents/
Also ensure your Log Analytics agent is [properly configured to send data to Defender for Cloud](enable-data-collection.md#manual-agent)
-To learn more about the specific Defender for Cloud features available on Windows and Linux, see [Feature coverage for machines](supported-machines-endpoint-solutions-clouds.md).
+To learn more about the specific Defender for Cloud features available on Windows and Linux, see [Feature coverage for machines](supported-machines-endpoint-solutions-clouds-containers.md).
> [!NOTE] > Even though **Microsoft Defender for servers** is designed to protect servers, most of its features are supported for Windows 10 machines. One feature that isn't currently supported is [Defender for Cloud's integrated EDR solution: Microsoft Defender for Endpoint](integration-defender-for-endpoint.md).
defender-for-cloud Quickstart Onboard Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-aws.md
Title: Connect your AWS account to Microsoft Defender for Cloud description: Defend your AWS resources with Microsoft Defender for Cloud Previously updated : 02/27/2022 Last updated : 03/10/2022 zone_pivot_groups: connect-aws-accounts
To protect your AWS-based resources, you can connect an account with one of two
- **Defender for Cloud's CSPM features** extend to your AWS resources. This agentless plan assesses your AWS resources according to AWS-specific security recommendations and these are included in your secure score. The resources will also be assessed for compliance with built-in standards specific to AWS (AWS CIS, AWS PCI DSS, and AWS Foundational Security Best Practices). Defender for Cloud's [asset inventory page](asset-inventory.md) is a multi-cloud enabled feature helping you manage your AWS resources alongside your Azure resources. - **Microsoft Defender for Containers** extends Defender for Cloud's container threat detection and advanced defenses to your **Amazon EKS clusters**.
- - **Microsoft Defender for servers** brings threat detection and advanced defenses to your Windows and Linux EC2 instances. This plan includes the integrated license for Microsoft Defender for Endpoint, security baselines and OS level assessments, vulnerability assessment scanning, adaptive application controls (AAC), file integrity monitoring (FIM), and more. You can view the full list of available features in the [Supported features for virtual machines and servers](supported-machines-endpoint-solutions-clouds.md?tabs=tab/features-multi-cloud) table.
+ - **Microsoft Defender for servers** brings threat detection and advanced defenses to your Windows and Linux EC2 instances. This plan includes the integrated license for Microsoft Defender for Endpoint, security baselines and OS level assessments, vulnerability assessment scanning, adaptive application controls (AAC), file integrity monitoring (FIM), and more. You can view the full list of available features in the [Supported features for virtual machines and servers](supported-machines-endpoint-solutions-clouds-servers.md?tabs=tab/features-multi-cloud) table.
For a reference list of all the recommendations Defender for Cloud can provide for AWS resources, see [Security recommendations for AWS resources - a reference guide](recommendations-reference-aws.md).
This screenshot shows AWS accounts displayed in Defender for Cloud's [overview d
|-|:-| |Release state:|Preview.<br>[!INCLUDE [Legalese](../../includes/defender-for-cloud-preview-legal-text.md)]| |Pricing:|The **CSPM plan** is free.<br>The **[Defender for Containers](defender-for-containers-introduction.md)** plan is free during the preview, after which it will be billed for AWS at the same price as for Azure resources.<br>For every AWS machine connected to Azure with [Azure Arc-enabled servers](../azure-arc/servers/overview.md), the **Defender for servers** plan is billed at the same price as the [Microsoft Defender for servers](defender-for-servers-introduction.md) plan for Azure machines. If an AWS EC2 instance doesn't have the Azure Arc agent deployed, you won't be charged for that machine.|
-|Required roles and permissions:|**Owner** on the relevant Azure subscription<br>**Contributor** can also connect an AWS account if an owner provides the service principal details (required for the Defender for servers plan)|
+|Required roles and permissions:|**Contributor** permission for the relevant Azure subscription.|
|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: National (Azure Government, Azure China 21Vianet)| |||
This screenshot shows AWS accounts displayed in Defender for Cloud's [overview d
- The resource capacity to create a new SQS queue, Kinesis Data Firehose delivery stream, and S3 bucket in the cluster's region. - **To enable the Defender for servers plan**, you'll need:
+
- Microsoft Defender for servers enabled on your subscription. Learn how to enable plans in the [Enable enhanced security features](enable-enhanced-security.md) article.
+
- An active AWS account, with EC2 instances.
+
- Azure Arc for servers installed on your EC2 instances. - (Recommended) Use the auto provisioning process to install Azure Arc on all of your existing and future EC2 instances that are managed by AWS Systems Manager (SSM) and use the SSM agent. Some Amazon Machine Images (AMIs) already have the SSM agent pre-installed; those AMIs are listed in [AMIs with SSM Agent preinstalled](https://docs.aws.amazon.com/systems-manager/latest/userguide/ssm-agent-technical-details.html#ami-preinstalled-agent). If your EC2 instances don't have the SSM Agent, install it using the relevant instructions from Amazon: - [Install SSM Agent for a hybrid environment (Windows)](https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-install-managed-win.html) - [Install SSM Agent for a hybrid environment (Linux)](https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-install-managed-linux.html)
- - To manually install Azure Arc on your existing and future EC2 instances, follow the instructions in the [EC2 instances should be connected to Azure Arc](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/231dee23-84db-44d2-bd9d-c32fbcfb42a3) recommendation.
+ > [!NOTE]
+ > To enable Azure Arc auto-provisioning, you'll need **Owner** permission on the relevant Azure subscription.
+
+ - To manually install Azure Arc on your existing and future EC2 instances, follow the instructions in the [EC2 instances should be connected to Azure Arc](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/231dee23-84db-44d2-bd9d-c32fbcfb42a3) recommendation.
+
- Additional extensions should be enabled on the Arc-connected machines. These extensions are currently configured at the subscription level, which means that all the multicloud accounts and projects (from both AWS and GCP) under the same subscription will inherit the subscription settings for these components. - Microsoft Defender for Endpoint - VA solution (TVM/ Qualys)
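For the manual onboarding path, each EC2 instance runs the Azure Connected Machine agent's `connect` command. The sketch below only assembles and prints that command; the flag names come from the `azcmagent connect` CLI, but treat them as assumptions and confirm with `azcmagent connect --help`. All values are placeholders.

```shell
# Sketch only: build the Azure Arc onboarding command that the manual
# EC2 flow runs on each instance. Every value below is a placeholder.
SUBSCRIPTION_ID="00000000-0000-0000-0000-000000000000"
RESOURCE_GROUP="arc-ec2-rg"   # hypothetical resource group
TENANT_ID="11111111-1111-1111-1111-111111111111"
LOCATION="eastus"

# Backslash-newlines inside double quotes are removed, so this is one line.
CONNECT_CMD="azcmagent connect \
  --subscription-id ${SUBSCRIPTION_ID} \
  --resource-group ${RESOURCE_GROUP} \
  --tenant-id ${TENANT_ID} \
  --location ${LOCATION}"

echo "${CONNECT_CMD}"
```

Running the printed command on the instance (with appropriate credentials) registers it as an Azure Arc-enabled server, after which the Defender for servers plan can bill and protect it.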
defender-for-cloud Quickstart Onboard Gcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-gcp.md
To protect your GCP-based resources, you can connect an account in two different
- **Environment settings page** (Recommended) - This page provides the onboarding experience (including auto provisioning). This mechanism also extends Defender for Cloud's enhanced security features to your GCP resources: - **Defender for Cloud's CSPM features** extend to your GCP resources. This agentless plan assesses your GCP resources according to GCP-specific security recommendations, which are included in your secure score. The resources will also be assessed for compliance with built-in standards specific to GCP. Defender for Cloud's [asset inventory page](asset-inventory.md) is a multicloud-enabled feature that helps you manage your GCP resources alongside your Azure resources.
- - **Microsoft Defender for servers** brings threat detection and advanced defenses to your GCP VM instances. This plan includes the integrated license for Microsoft Defender for Endpoint, security baselines and OS level assessments, vulnerability assessment scanning, adaptive application controls (AAC), file integrity monitoring (FIM), and more. You can view the full list of available features in the [Supported features for virtual machines and servers table](supported-machines-endpoint-solutions-clouds.md)
+ - **Microsoft Defender for servers** brings threat detection and advanced defenses to your GCP VM instances. This plan includes the integrated license for Microsoft Defender for Endpoint, security baselines and OS level assessments, vulnerability assessment scanning, adaptive application controls (AAC), file integrity monitoring (FIM), and more. You can view the full list of available features in the [Supported features for virtual machines and servers table](supported-machines-endpoint-solutions-clouds-servers.md)
- **Microsoft Defender for Containers** brings threat detection and advanced defenses to your Google Kubernetes Engine (GKE) Standard clusters. This plan includes Kubernetes threat protection, behavioral analytics, Kubernetes best practices, admission control recommendations, and more. :::image type="content" source="./media/quickstart-onboard-gcp/gcp-account-in-overview.png" alt-text="Screenshot of GCP projects shown in Microsoft Defender for Cloud's overview dashboard." lightbox="./media/quickstart-onboard-gcp/gcp-account-in-overview.png":::
defender-for-cloud Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes-archive.md
Title: Archive of what's new in Microsoft Defender for Cloud description: A description of what's new and changed in Microsoft Defender for Cloud from six months ago and earlier. Previously updated : 03/02/2022 Last updated : 03/08/2022 # Archive for what's new in Defender for Cloud?
We've added two **preview** recommendations to deploy and maintain the endpoint
|Recommendation |Description |Severity | |||| |[Endpoint protection should be installed on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/4fb67663-9ab9-475d-b026-8c544cced439) |To protect your machines from threats and vulnerabilities, install a supported endpoint protection solution. <br> <a href="/azure/defender-for-cloud/endpoint-protection-recommendations-technical">Learn more about how Endpoint Protection for machines is evaluated.</a><br />(Related policy: [Monitor missing Endpoint Protection in Azure Security Center](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2faf6cd1bd-1635-48cb-bde7-5b15693900b9)) |High |
-|[Endpoint protection health issues should be resolved on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/37a3689a-818e-4a0e-82ac-b1392b9bb000) |Resolve endpoint protection health issues on your virtual machines to protect them from latest threats and vulnerabilities. Azure Security Center supported endpoint protection solutions are documented [here](./supported-machines-endpoint-solutions-clouds.md?tabs=features-windows). Endpoint protection assessment is documented <a href='/azure/defender-for-cloud/endpoint-protection-recommendations-technical'>here</a>.<br />(Related policy: [Monitor missing Endpoint Protection in Azure Security Center](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2faf6cd1bd-1635-48cb-bde7-5b15693900b9)) |Medium |
+|[Endpoint protection health issues should be resolved on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/37a3689a-818e-4a0e-82ac-b1392b9bb000) |Resolve endpoint protection health issues on your virtual machines to protect them from latest threats and vulnerabilities. Azure Security Center supported endpoint protection solutions are documented [here](./supported-machines-endpoint-solutions-clouds-servers.md?tabs=features-windows). Endpoint protection assessment is documented <a href='/azure/defender-for-cloud/endpoint-protection-recommendations-technical'>here</a>.<br />(Related policy: [Monitor missing Endpoint Protection in Azure Security Center](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2faf6cd1bd-1635-48cb-bde7-5b15693900b9)) |Medium |
||| > [!NOTE]
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
Title: Release notes for Microsoft Defender for Cloud description: A description of what's new and changed in Microsoft Defender for Cloud Previously updated : 03/03/2022 Last updated : 03/10/2022 # What's new in Microsoft Defender for Cloud?
To learn about *planned* changes that are coming soon to Defender for Cloud, see
Updates in March include: - [Deprecated the recommendations to install the network traffic data collection agent](#deprecated-the-recommendations-to-install-the-network-traffic-data-collection-agent)-
+- [Defender for Containers can now scan for vulnerabilities in Windows images (preview)](#defender-for-containers-can-now-scan-for-vulnerabilities-in-windows-images-preview)
+- [New alert for Microsoft Defender for Storage (preview)](#new-alert-for-microsoft-defender-for-storage-preview)
+- [Configure email notifications settings from an alert](#configure-email-notifications-settings-from-an-alert)
+
### Deprecated the recommendations to install the network traffic data collection agent Changes in our roadmap and priorities have removed the need for the network traffic data collection agent. Consequently, the following two recommendations and their related policies were deprecated.
Changes in our roadmap and priorities have removed the need for the network traf
|[Network traffic data collection agent should be installed on Windows virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/24d8af06-d441-40b4-a49c-311421aa9f58) |Defender for Cloud uses the Microsoft Dependency agent to collect network traffic data from your Azure virtual machines to enable advanced network protection features such as traffic visualization on the network map, network hardening recommendations, and specific network threats. |Medium | |||
+### Defender for Containers can now scan for vulnerabilities in Windows images (preview)
+
+Defender for Containers' image scan now supports Windows images that are hosted in Azure Container Registry. This feature is free while in preview, and will incur a cost when it becomes generally available.
+
+Learn more in [Use Microsoft Defender for Containers to scan your images for vulnerabilities](defender-for-container-registries-usage.md).
+
+### New alert for Microsoft Defender for Storage (preview)
+
+To expand the threat protections provided by Microsoft Defender for Storage, we've added a new preview alert.
+
+Threat actors use applications and tools to discover and access storage accounts. Microsoft Defender for Storage detects these applications and tools so that you can block them and remediate your posture.
+
+This preview alert is called `Access from a suspicious application`. The alert is relevant to Azure Blob Storage and Azure Data Lake Storage (ADLS) Gen2 only.
+
+| Alert (alert type) | Description | MITRE tactic | Severity |
+|--|--|--|--|
+| **PREVIEW - Access from a suspicious application**<br>(Storage.Blob_SuspiciousApp) | Indicates that a suspicious application has successfully accessed a container of a storage account with authentication.<br>This might indicate that an attacker has obtained the credentials necessary to access the account, and is exploiting it. This could also be an indication of a penetration test carried out in your organization.<br>Applies to: Azure Blob Storage, Azure Data Lake Storage Gen2 | Initial Access | Medium |
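For triage or automation outside the portal, the new alert can be filtered by its alert type. A minimal sketch, assuming alerts have been exported as JSON-like objects with an `alertType` field (the field name matches the Security Alerts API's property, but verify it against your export format):

```python
# Filter exported Defender for Cloud alerts for the preview alert type.
# Assumption: each alert is a dict with an "alertType" key.
ALERT_TYPE = "Storage.Blob_SuspiciousApp"

def suspicious_app_alerts(alerts):
    """Return only the alerts raised by the suspicious-application detector."""
    return [a for a in alerts if a.get("alertType") == ALERT_TYPE]

exported = [
    {"alertType": "Storage.Blob_SuspiciousApp", "severity": "Medium"},
    {"alertType": "VM.Windows_SuspiciousProcess", "severity": "High"},
]
print(suspicious_app_alerts(exported))
```

The same predicate works as a filter in a workflow automation or SIEM ingestion step, so the preview alert can be routed separately while you evaluate it.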
+
+### Configure email notifications settings from an alert
+
+A new section has been added to the alert user interface (UI) that lets you view and edit who will receive email notifications for alerts triggered on the current subscription.
++
+Learn how to [Configure email notifications for security alerts](configure-email-notifications.md).
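The same settings can be scripted. The sketch below only assembles and prints the Azure CLI command; the `az security contact` command group exists, but treat the exact flag names as assumptions and confirm with `az security contact create --help`. The contact name and email are placeholders.

```shell
# Sketch: CLI counterpart of the email-notification settings in the alert UI.
# Values are hypothetical; verify flags with `az security contact create --help`.
CONTACT_CMD="az security contact create --name default1 \
  --email secops@contoso.com \
  --alert-notifications on --alerts-admins on"

echo "${CONTACT_CMD}"
```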
+ ## February 2022 Updates in February include:
The new automated onboarding of GCP environments allows you to protect GCP workl
- **Microsoft Defender for servers** brings threat detection and advanced defenses to your GCP compute instances. This plan includes the integrated license for Microsoft Defender for Endpoint, vulnerability assessment scanning, and more.
- For a full list of available features, see [Supported features for virtual machines and servers](supported-machines-endpoint-solutions-clouds.md). Automatic onboarding capabilities will allow you to easily connect any existing, and new compute instances discovered in your environment.
+ For a full list of available features, see [Supported features for virtual machines and servers](supported-machines-endpoint-solutions-clouds-servers.md). Automatic onboarding capabilities will allow you to easily connect any existing, and new compute instances discovered in your environment.
Learn how to protect, and [connect your GCP projects](quickstart-onboard-gcp.md) with Microsoft Defender for Cloud.
defender-for-cloud Supported Machines Endpoint Solutions Clouds Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/supported-machines-endpoint-solutions-clouds-containers.md
+
+ Title: Microsoft Defender for Containers feature availability
+description: Learn about the availability of Microsoft Defender for Cloud containers features according to OS, machine type, and cloud deployment.
+ Last updated : 03/08/2022+++
+# Defender for Containers feature availability
++
+The **tabs** below show the Microsoft Defender for Containers features that are available for each environment.
+
+## Supported features by environment
+
+### [**Azure (AKS)**](#tab/azure-aks)
+
+| Domain | Feature | Supported Resources | Release state <sup>[1](#footnote1)</sup> | Windows support | Agentless/Agent-based | Pricing Tier | Azure clouds availability |
+|--|--|--|--|--|--|--|--|
+| Compliance | Docker CIS | VMs | GA | X | Log Analytics agent | Defender for Servers | |
+| VA | Registry scan | ACR, Private ACR | GA | ✓ (Preview) | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
+| VA | View vulnerabilities for running images | AKS | Preview | X | Defender profile | Defender for Containers | Commercial clouds |
+| Hardening | Control plane recommendations | ACR, AKS | GA | ✓ | Agentless | Free | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
+| Hardening | Kubernetes data plane recommendations | AKS | GA | X | Azure Policy | Free | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
+| Runtime Threat Detection | Agentless threat detection | AKS | GA | ✓ | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
+| Runtime Threat Detection | Agent-based threat detection | AKS | Preview | X | Defender profile | Defender for Containers | Commercial clouds |
+| Discovery and Auto provisioning | Discovery of uncovered/unprotected clusters | AKS | GA | ✓ | Agentless | Free | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
+| Discovery and Auto provisioning | Auditlog collection for agentless threat detection | AKS | GA | ✓ | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
+| Discovery and Auto provisioning | Auto provisioning of Defender profile | AKS | GA | X | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
+| Discovery and Auto provisioning | Auto provisioning of Azure policy add-on | AKS | GA | X | Agentless | Free | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
+
+<sup><a name="footnote1"></a>1</sup> Specific features are in preview. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+### [**AWS (EKS)**](#tab/aws-eks)
+
+| Domain | Feature | Supported Resources | Release state <sup>[1](#footnote1)</sup> | Windows support | Agentless/Agent-based | Pricing tier |
+|--|--| -- | -- | -- | -- | --|
+| Compliance | Docker CIS | EC2 | Preview | X | Log Analytics agent | Defender for Servers |
+| VA | Registry scan | N/A | - | - | - | - |
+| VA | View vulnerabilities for running images | N/A | - | - | - | - |
+| Hardening | Control plane recommendations | N/A | - | - | - | - |
+| Hardening | Kubernetes data plane recommendations | EKS | Preview | X | Azure Policy extension | Defender for Containers |
+| Runtime Threat Detection | Agentless threat detection | EKS | Preview | X | Agentless | Defender for Containers |
+| Runtime Threat Detection | Agent-based threat detection | EKS | Preview | X | Defender extension | Defender for Containers |
+| Discovery and Auto provisioning | Discovery of uncovered/unprotected clusters | EKS | Preview | X | Agentless | Free |
+| Discovery and Auto provisioning | Auditlog collection for agentless threat detection | EKS | Preview | X | Agentless | Defender for Containers |
+| Discovery and Auto provisioning | Auto provisioning of Defender extension | N/A | N/A | X | - | - |
+| Discovery and Auto provisioning | Auto provisioning of Azure policy extension | N/A | N/A | X | - | - |
+
+<sup><a name="footnote1"></a>1</sup> Specific features are in preview. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+### [**GCP (GKE)**](#tab/gcp-gke)
+
+| Domain | Feature | Supported Resources | Release state <sup>[1](#footnote1)</sup> | Windows support | Agentless/Agent-based | Pricing tier |
+|--|--| -- | -- | -- | -- | --|
+| Compliance | Docker CIS | GCP VMs | Preview | X | Log Analytics agent | Defender for Servers |
+| VA | Registry scan | N/A | - | - | - | - |
+| VA | View vulnerabilities for running images | N/A | - | - | - | - |
+| Hardening | Control plane recommendations | N/A | - | - | - | - |
+| Hardening | Kubernetes data plane recommendations | GKE | Preview | X | Azure Policy extension | Defender for Containers |
+| Runtime Threat Detection | Agentless threat detection | GKE | Preview | X | Agentless | Defender for Containers |
+| Runtime Threat Detection | Agent-based threat detection | GKE | Preview | X | Defender extension | Defender for Containers |
+| Discovery and Auto provisioning | Discovery of uncovered/unprotected clusters | GKE | Preview | X | Agentless | Free |
+| Discovery and Auto provisioning | Auditlog collection for agentless threat detection | GKE | Preview | X | Agentless | Defender for Containers |
+| Discovery and Auto provisioning | Auto provisioning of Defender DaemonSet | GKE | Preview | X | Agentless | Defender for Containers |
+| Discovery and Auto provisioning | Auto provisioning of Azure policy extension | GKE | Preview | X | Agentless | Defender for Containers |
+
+<sup><a name="footnote1"></a>1</sup> Specific features are in preview. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+### [**On-prem/IaaS (Arc)**](#tab/iass-arc)
+
+| Domain | Feature | Supported Resources | Release state <sup>[1](#footnote1)</sup> | Windows support | Agentless/Agent-based | Pricing tier |
+|--|--| -- | -- | -- | -- | --|
+| Compliance | Docker CIS | Arc enabled VMs | Preview | X | Log Analytics agent | Defender for Servers |
+| VA | Registry scan | ACR, Private ACR | Preview | ✓ | Agentless | Defender for Containers |
+| VA | View vulnerabilities for running images | Arc enabled K8s clusters | Preview | X | Defender extension | Defender for Containers |
+| Hardening | Control plane recommendations | N/A | - | - | - | - |
+| Hardening | Kubernetes data plane recommendations | Arc enabled K8s clusters | Preview | X | Azure Policy extension | Defender for Containers |
+| Runtime Threat Detection | Threat detection via auditlog | Arc enabled K8s clusters | - | ✓ | Defender extension | Defender for Containers |
+| Runtime Threat Detection | Agent-based threat detection | Arc enabled K8s clusters | Preview | X | Defender extension | Defender for Containers |
+| Discovery and Auto provisioning | Discovery of uncovered/unprotected clusters | Arc enabled K8s clusters | Preview | - | Agentless | Free |
+| Discovery and Auto provisioning | Auditlog collection for threat detection | Arc enabled K8s clusters | Preview | ✓ | Defender extension | Defender for Containers |
+| Discovery and Auto provisioning | Auto provisioning of Defender extension | Arc enabled K8s clusters | Preview | ✓ | Agentless | Defender for Containers |
+| Discovery and Auto provisioning | Auto provisioning of Azure policy extension | Arc enabled K8s clusters | Preview | X | Agentless | Defender for Containers |
+
+<sup><a name="footnote1"></a>1</sup> Specific features are in preview. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+++
+## Additional information
+
+### Registries and images
+
+| Aspect | Details |
+|--|--|
+| Registries and images | **Supported**<br> • [ACR registries protected with Azure Private Link](../container-registry/container-registry-private-link.md) (private registries require access to Trusted Services) <br> • Windows images (Preview). This is free while it's in preview, and will incur charges (based on the Defender for Containers plan) when it becomes generally available.<br><br>**Unsupported**<br> • Super-minimalist images such as [Docker scratch](https://hub.docker.com/_/scratch/) images<br> • "Distroless" images that only contain an application and its runtime dependencies without a package manager, shell, or OS<br> • Images with [Open Container Initiative (OCI) Image Format Specification](https://github.com/opencontainers/image-spec/blob/master/spec.md) |
++
+### Kubernetes distributions and configurations
+
+| Aspect | Details |
+|--|--|
+| Kubernetes distributions and configurations | **Supported**<br> • Any Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters<br>• [Azure Kubernetes Service (AKS)](../aks/intro-kubernetes.md)<sup>[1](#footnote1)</sup><br> • [Amazon Elastic Kubernetes Service (EKS)](https://aws.amazon.com/eks/)<br> • [Google Kubernetes Engine (GKE) Standard](https://cloud.google.com/kubernetes-engine/) <br><br> **Supported via Arc enabled Kubernetes** <sup>[2](#footnote2)</sup> <sup>[3](#footnote3)</sup><br>• [Azure Kubernetes Service on Azure Stack HCI](/azure-stack/aks-hci/overview)<br> • [Kubernetes](https://kubernetes.io/docs/home/)<br> • [AKS Engine](https://github.com/Azure/aks-engine)<br> • [Azure Red Hat OpenShift](https://azure.microsoft.com/services/openshift/)<br> • [Red Hat OpenShift](https://www.openshift.com/learn/topics/kubernetes/) (version 4.6 or newer)<br> • [VMware Tanzu Kubernetes Grid](https://tanzu.vmware.com/kubernetes-grid)<br> • [Rancher Kubernetes Engine](https://rancher.com/docs/rke/latest/en/)<br><br>**Unsupported**<br> • Any [taints](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/) applied to your nodes *might* disrupt the configuration of Defender for Containers<br> |
+
+<sup><a name="footnote1"></a>1</sup>The AKS Defender profile doesn't support AKS clusters that don't have RBAC enabled.<br>
+<sup><a name="footnote2"></a>2</sup>Any Cloud Native Computing Foundation (CNCF) certified Kubernetes cluster should be supported, but only the specified clusters have been tested.<br>
+<sup><a name="footnote3"></a>3</sup>To get [Microsoft Defender for Containers](../azure-arc/kubernetes/overview.md) protection for your clusters, onboard them to [Azure Arc-enabled Kubernetes](https://mseng.visualstudio.com/TechnicalContent/_workitems/recentlyupdated/) and enable Defender for Containers as an Arc extension.
+
+> [!NOTE]
+> For additional requirements for Kubernetes workload protection, see [existing limitations](../governance/policy/concepts/policy-for-kubernetes.md#limitations).
+
+## Next steps
+
+- Learn how [Defender for Cloud collects data using the Log Analytics Agent](enable-data-collection.md).
+- Learn how [Defender for Cloud manages and safeguards data](data-security.md).
+- Review the [platforms that support Defender for Cloud](security-center-os-coverage.md).
defender-for-cloud Supported Machines Endpoint Solutions Clouds Servers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/supported-machines-endpoint-solutions-clouds-servers.md
+
+ Title: Microsoft Defender for Cloud's servers features according to OS, machine type, and cloud
+description: Learn about the availability of Microsoft Defender for Cloud's servers features according to OS, machine type, and cloud deployment.
+ Last updated : 03/08/2022+++
+# Feature coverage for machines
++
+The **tabs** below show the features of Microsoft Defender for Cloud that are available for Windows and Linux machines.
+
+## Supported features for virtual machines and servers <a name="vm-server-features"></a>
+
+### [**Windows machines**](#tab/features-windows)
+
+| **Feature** | **Azure Virtual Machines** | **Azure Virtual Machine Scale Sets** | **Azure Arc-enabled machines** | **Defender for servers required** |
+|--|::|::|::|::|
+| [Microsoft Defender for Endpoint integration](integration-defender-for-endpoint.md) | ✔</br>(on supported versions) | ✔</br>(on supported versions) | ✔ | Yes |
+| [Virtual machine behavioral analytics (and security alerts)](alerts-reference.md) | ✔ | ✔ | ✔ | Yes |
+| [Fileless security alerts](alerts-reference.md#alerts-windows) | ✔ | ✔ | ✔ | Yes |
+| [Network-based security alerts](other-threat-protections.md#network-layer) | ✔ | ✔ | - | Yes |
+| [Just-in-time VM access](just-in-time-access-usage.md) | ✔ | - | - | Yes |
+| [Integrated Qualys vulnerability scanner](deploy-vulnerability-assessment-vm.md#overview-of-the-integrated-vulnerability-scanner) | ✔ | - | ✔ | Yes |
+| [File integrity monitoring](file-integrity-monitoring-overview.md) | ✔ | ✔ | ✔ | Yes |
+| [Adaptive application controls](adaptive-application-controls.md) | ✔ | - | ✔ | Yes |
+| [Network map](protect-network-resources.md#network-map) | ✔ | ✔ | - | Yes |
+| [Adaptive network hardening](adaptive-network-hardening.md) | ✔ | - | - | Yes |
+| [Regulatory compliance dashboard & reports](regulatory-compliance-dashboard.md) | ✔ | ✔ | ✔ | Yes |
+| [Docker host hardening](./harden-docker-hosts.md) | - | - | - | Yes |
+| Missing OS patches assessment | ✔ | ✔ | ✔ | Azure: No<br><br>Azure Arc-enabled: Yes |
+| Security misconfigurations assessment | ✔ | ✔ | ✔ | Azure: No<br><br>Azure Arc-enabled: Yes |
+| [Endpoint protection assessment](supported-machines-endpoint-solutions-clouds-servers.md#supported-endpoint-protection-solutions-) | ✔ | ✔ | ✔ | Azure: No<br><br>Azure Arc-enabled: Yes |
+| Disk encryption assessment | ✔</br>(for [supported scenarios](../virtual-machines/windows/disk-encryption-windows.md#unsupported-scenarios)) | ✔ | - | No |
+| Third-party vulnerability assessment | ✔ | - | ✔ | No |
+| [Network security assessment](protect-network-resources.md) | ✔ | ✔ | - | No |
+| | | | | |
+
+### [**Linux machines**](#tab/features-linux)
+
+| **Feature** | **Azure Virtual Machines** | **Azure Virtual Machine Scale Sets** | **Azure Arc-enabled machines** | **Defender for servers required** |
+|--|::|::|::|::|
+| [Microsoft Defender for Endpoint integration](integration-defender-for-endpoint.md) | ✔ | - | ✔ | Yes |
+| [Virtual machine behavioral analytics (and security alerts)](./azure-defender.md) | ✔</br>(on supported versions) | ✔</br>(on supported versions) | ✔ | Yes |
+| [Fileless security alerts](alerts-reference.md#alerts-windows) | - | - | - | Yes |
+| [Network-based security alerts](other-threat-protections.md#network-layer) | ✔ | ✔ | - | Yes |
+| [Just-in-time VM access](just-in-time-access-usage.md) | ✔ | - | - | Yes |
+| [Integrated Qualys vulnerability scanner](deploy-vulnerability-assessment-vm.md#overview-of-the-integrated-vulnerability-scanner) | ✔ | - | ✔ | Yes |
+| [File integrity monitoring](file-integrity-monitoring-overview.md) | ✔ | ✔ | ✔ | Yes |
+| [Adaptive application controls](adaptive-application-controls.md) | ✔ | - | ✔ | Yes |
+| [Network map](protect-network-resources.md#network-map) | ✔ | ✔ | - | Yes |
+| [Adaptive network hardening](adaptive-network-hardening.md) | ✔ | - | - | Yes |
+| [Regulatory compliance dashboard & reports](regulatory-compliance-dashboard.md) | ✔ | ✔ | ✔ | Yes |
+| [Docker host hardening](./harden-docker-hosts.md) | ✔ | ✔ | ✔ | Yes |
+| Missing OS patches assessment | ✔ | ✔ | ✔ | Azure: No<br><br>Azure Arc-enabled: Yes |
+| Security misconfigurations assessment | ✔ | ✔ | ✔ | Azure: No<br><br>Azure Arc-enabled: Yes |
+| [Endpoint protection assessment](supported-machines-endpoint-solutions-clouds-servers.md#supported-endpoint-protection-solutions-) | - | - | - | No |
+| Disk encryption assessment | ✔</br>(for [supported scenarios](../virtual-machines/windows/disk-encryption-windows.md#unsupported-scenarios)) | ✔ | - | No |
+| Third-party vulnerability assessment | ✔ | - | ✔ | No |
+| [Network security assessment](protect-network-resources.md) | ✔ | ✔ | - | No |
+| | | | | |
+
+### [**Multi-cloud machines**](#tab/features-multi-cloud)
+
+| **Feature** | **Availability in AWS** | **Availability in GCP** |
+|--|:-:|:-:|
+| [Microsoft Defender for Endpoint integration](integration-defender-for-endpoint.md) | ✔ | ✔ |
+| [Virtual machine behavioral analytics (and security alerts)](alerts-reference.md) | ✔ | ✔ |
+| [Fileless security alerts](alerts-reference.md#alerts-windows) | ✔ | ✔ |
+| [Network-based security alerts](other-threat-protections.md#network-layer) | - | - |
+| [Just-in-time VM access](just-in-time-access-usage.md) | - | - |
+| [Integrated Qualys vulnerability scanner](deploy-vulnerability-assessment-vm.md#overview-of-the-integrated-vulnerability-scanner) | ✔ | ✔ |
+| [File integrity monitoring](file-integrity-monitoring-overview.md) | ✔ | ✔ |
+| [Adaptive application controls](adaptive-application-controls.md) | ✔ | ✔ |
+| [Network map](protect-network-resources.md#network-map) | - | - |
+| [Adaptive network hardening](adaptive-network-hardening.md) | - | - |
+| [Regulatory compliance dashboard & reports](regulatory-compliance-dashboard.md) | ✔ | ✔ |
+| [Docker host hardening](harden-docker-hosts.md) | ✔ | ✔ |
+| Missing OS patches assessment | ✔ | ✔ |
+| Security misconfigurations assessment | ✔ | ✔ |
+| [Endpoint protection assessment](supported-machines-endpoint-solutions-clouds-servers.md#supported-endpoint-protection-solutions-) | ✔ | ✔ |
+| Disk encryption assessment | ✔<br>(for [supported scenarios](../virtual-machines/windows/disk-encryption-windows.md#unsupported-scenarios)) | ✔<br>(for [supported scenarios](../virtual-machines/windows/disk-encryption-windows.md#unsupported-scenarios)) |
+| Third-party vulnerability assessment | - | - |
+| [Network security assessment](protect-network-resources.md) | - | - |
+| | | |
+
+
+
+> [!TIP]
+> To experiment with features that are only available with enhanced security features enabled, you can enroll in a 30-day trial. For more information, see the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
+
+## Supported endpoint protection solutions <a name="endpoint-supported"></a>
+
+The following table provides a matrix of supported endpoint protection solutions and whether you can use Microsoft Defender for Cloud to install each solution for you.
+
+For information about when recommendations are generated for each of these solutions, see [Endpoint Protection Assessment and Recommendations](endpoint-protection-recommendations-technical.md).
+
+| Solution | Supported platforms | Defender for Cloud installation |
+|--|--|--|
+| Microsoft Defender Antivirus | Windows Server 2016 or later | No (built into OS) |
+| System Center Endpoint Protection (Microsoft Antimalware) | Windows Server 2012 R2 | Via extension |
+| Trend Micro – Deep Security | Windows Server (all) | No |
+| Symantec v12.1.1100+ | Windows Server (all) | No |
+| McAfee v10+ | Windows Server (all) | No |
+| McAfee v10+ | Linux (GA) | No |
+| Microsoft Defender for Endpoint for Linux<sup>[1](#footnote1)</sup> | Linux (GA) | Via extension |
+| Sophos V9+ | Linux (GA) | No |
+| | | |
+
+<sup><a name="footnote1"></a>1</sup> It's not enough to have Microsoft Defender for Endpoint on the Linux machine: the machine will only appear as healthy if the always-on scanning feature (also known as real-time protection (RTP)) is active. By default, the RTP feature is **disabled** to avoid clashes with other AV software.
+
+## Feature support in government and national clouds
+
+| Feature/Service | Azure | Azure Government | Azure China 21Vianet |
+|--|--|--|--|
+| **Defender for Cloud free features** | | | |
+| - [Continuous export](./continuous-export.md) | GA | GA | GA |
+| - [Workflow automation](./workflow-automation.md) | GA | GA | GA |
+| - [Recommendation exemption rules](./exempt-resource.md) | Public Preview | Not Available | Not Available |
+| - [Alert suppression rules](./alerts-suppression-rules.md) | GA | GA | GA |
+| - [Email notifications for security alerts](./configure-email-notifications.md) | GA | GA | GA |
+| - [Auto provisioning for agents and extensions](./enable-data-collection.md) | GA | GA | GA |
+| - [Asset inventory](./asset-inventory.md) | GA | GA | GA |
+| - [Azure Monitor Workbooks reports in Microsoft Defender for Cloud's workbooks gallery](./custom-dashboards-azure-workbooks.md) | GA | GA | GA |
+| - [Integration with Microsoft Defender for Cloud Apps](./other-threat-protections.md#display-recommendations-in-microsoft-defender-for-cloud-apps-) | GA | Not Available | Not Available |
+| **Microsoft Defender plans and extensions** | | | |
+| - [Microsoft Defender for servers](./defender-for-servers-introduction.md) | GA | GA | GA |
+| - [Microsoft Defender for App Service](./defender-for-app-service-introduction.md) | GA | Not Available | Not Available |
+| - [Microsoft Defender for DNS](./defender-for-dns-introduction.md) | GA | GA | GA |
+| - [Microsoft Defender for container registries](./defender-for-container-registries-introduction.md) <sup>[1](#footnote1)</sup> | GA | GA <sup>[2](#footnote2)</sup> | GA <sup>[2](#footnote2)</sup> |
+| - [Microsoft Defender for container registries scanning of images in CI/CD workflows](./defender-for-container-registries-cicd.md) <sup>[3](#footnote3)</sup> | Public Preview | Not Available | Not Available |
+| - [Microsoft Defender for Kubernetes](./defender-for-kubernetes-introduction.md) <sup>[4](#footnote4)</sup> | GA | GA | GA |
+| - [Microsoft Defender for Containers](./defender-for-containers-introduction.md) <sup>[10](#footnote10)</sup> | GA | GA | GA |
+| - [Defender extension for Azure Arc-enabled Kubernetes clusters, servers or data services](./defender-for-kubernetes-azure-arc.md) <sup>[5](#footnote5)</sup> | Public Preview | Not Available | Not Available |
+| - [Microsoft Defender for Azure SQL database servers](./defender-for-sql-introduction.md) | GA | GA | GA <sup>[9](#footnote9)</sup> |
+| - [Microsoft Defender for SQL servers on machines](./defender-for-sql-introduction.md) | GA | GA | Not Available |
+| - [Microsoft Defender for open-source relational databases](./defender-for-databases-introduction.md) | GA | Not Available | Not Available |
+| - [Microsoft Defender for Key Vault](./defender-for-key-vault-introduction.md) | GA | Not Available | Not Available |
+| - [Microsoft Defender for Resource Manager](./defender-for-resource-manager-introduction.md) | GA | GA | GA |
+| - [Microsoft Defender for Storage](./defender-for-storage-introduction.md) <sup>[6](#footnote6)</sup> | GA | GA | Not Available |
+| - [Microsoft Defender for Azure Cosmos DB](concept-defender-for-cosmos.md) | Public Preview | Not Available | Not Available |
+| - [Kubernetes workload protection](./kubernetes-workload-protections.md) | GA | GA | GA |
+| - [Bi-directional alert synchronization with Sentinel](../sentinel/connect-azure-security-center.md) | Public Preview | Not Available | Not Available |
+| **Microsoft Defender for servers features** <sup>[7](#footnote7)</sup> | | | |
+| - [Just-in-time VM access](./just-in-time-access-usage.md) | GA | GA | GA |
+| - [File integrity monitoring](./file-integrity-monitoring-overview.md) | GA | GA | GA |
+| - [Adaptive application controls](./adaptive-application-controls.md) | GA | GA | GA |
+| - [Adaptive network hardening](./adaptive-network-hardening.md) | GA | Not Available | Not Available |
+| - [Docker host hardening](./harden-docker-hosts.md) | GA | GA | GA |
+| - [Integrated Qualys vulnerability scanner](./deploy-vulnerability-assessment-vm.md) | GA | Not Available | Not Available |
+| - [Regulatory compliance dashboard & reports](./regulatory-compliance-dashboard.md) <sup>[8](#footnote8)</sup> | GA | GA | GA |
+| - [Microsoft Defender for Endpoint deployment and integrated license](./integration-defender-for-endpoint.md) | GA | GA | Not Available |
+| - [Connect AWS account](./quickstart-onboard-aws.md) | GA | Not Available | Not Available |
+| - [Connect GCP project](./quickstart-onboard-gcp.md) | GA | Not Available | Not Available |
+| | | | |
+
+<sup><a name="footnote1"></a>1</sup> Partially GA: The ability to disable specific findings from vulnerability scans is in public preview.
+
+<sup><a name="footnote2"></a>2</sup> Vulnerability scans of container registries on the Azure Government cloud can only be performed with the scan on push feature.
+
+<sup><a name="footnote3"></a>3</sup> Requires Microsoft Defender for container registries.
+
+<sup><a name="footnote4"></a>4</sup> Partially GA: Support for Azure Arc-enabled clusters is in public preview and not available on Azure Government.
+
+<sup><a name="footnote5"></a>5</sup> Requires Microsoft Defender for Kubernetes or Microsoft Defender for Containers.
+
+<sup><a name="footnote6"></a>6</sup> Partially GA: Some of the threat protection alerts from Microsoft Defender for Storage are in public preview.
+
+<sup><a name="footnote7"></a>7</sup> These features all require [Microsoft Defender for servers](./defender-for-servers-introduction.md).
+
+<sup><a name="footnote8"></a>8</sup> There may be differences in the standards offered per cloud type.
+
+<sup><a name="footnote9"></a>9</sup> Partially GA: Subset of alerts and vulnerability assessment for SQL servers. Behavioral threat protections aren't available.
+
+<sup><a name="footnote10"></a>10</sup> Partially GA: Support for Arc-enabled Kubernetes clusters (and therefore AWS EKS too) is in public preview and not available on Azure Government. Run-time visibility of vulnerabilities in container images is also a preview feature.
+
+## Next steps
+
+- Learn how [Defender for Cloud collects data using the Log Analytics Agent](enable-data-collection.md).
+- Learn how [Defender for Cloud manages and safeguards data](data-security.md).
+- Review the [platforms that support Defender for Cloud](security-center-os-coverage.md).
defender-for-cloud Upcoming Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/upcoming-changes.md
Title: Important changes coming to Microsoft Defender for Cloud description: Upcoming changes to Microsoft Defender for Cloud that you might need to be aware of and for which you might need to plan Previously updated : 03/06/2022 Last updated : 03/08/2022 # Important upcoming changes to Microsoft Defender for Cloud
When the recommendations are released to general availability, they will replace
- Assessment key for the **GA** recommendation: 3bcd234d-c9c7-c2a2-89e0-c01f419c1a8a Learn more:-- [Defender for Cloud's supported endpoint protection solutions](supported-machines-endpoint-solutions-clouds.md#endpoint-supported)
+- [Defender for Cloud's supported endpoint protection solutions](supported-machines-endpoint-solutions-clouds-servers.md#endpoint-supported)
- [How these recommendations assess the status of your deployed solutions](endpoint-protection-recommendations-technical.md) ### AWS recommendations to GA
devtest-labs Configure Lab Remote Desktop Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/configure-lab-remote-desktop-gateway.md
Title: Configure a lab to use Remote Desktop Gateway
-description: Learn how to configure a lab in Azure DevTest Labs with a remote desktop gateway to ensure secure access to the lab VMs without having to expose the RDP port.
+ Title: Configure a lab to use a remote desktop gateway
+description: Learn how to configure a remote desktop gateway in Azure DevTest Labs for secure access to lab VMs without exposing RDP ports.
Previously updated : 06/26/2020 Last updated : 03/07/2022
-# Configure your lab in Azure DevTest Labs to use a remote desktop gateway
-In Azure DevTest Labs, you can configure a remote desktop gateway for your lab to ensure secure access to the lab virtual machines (VMs) without having to expose the RDP port. The lab provides a central place for your lab users to view and connect to all virtual machines they have access to. The **Connect** button on the **Virtual Machine** page creates a machine-specific RDP file that you can open to connect to the machine. You can further customize and secure the RDP connection by connecting your lab to a remote desktop gateway.
+# Configure and use a remote desktop gateway in Azure DevTest Labs
-This approach is more secure because the lab user authenticates directly to the gateway machine or can use company credentials on a domain-joined gateway machine to connect to their machines. The lab also supports using token authentication to the gateway machine that allows users to connect to their lab virtual machines without having the RDP port exposed to the internet. This article walks through an example on how to set up a lab that uses token authentication to connect to lab machines.
+This article describes how to set up and use a gateway for secure remote desktop access to lab virtual machines (VMs) in Azure DevTest Labs. Using a gateway improves security because you don't expose the VMs' remote desktop protocol (RDP) ports to the internet. This remote desktop gateway solution also supports token authentication.
-Looking to connect through Bastion, read "[Enable browser connection to DevTest Labs VMs with Azure Bastion](enable-browser-connection-lab-virtual-machines.md)".
+DevTest Labs provides a central place for lab users to view and connect to their VMs. Selecting **Connect** > **RDP** on a lab VM's **Overview** page creates a machine-specific RDP file, and users can open the file to connect to the VM.
-## Architecture of the solution
+With a remote desktop gateway, lab users connect to their VMs through a gateway machine. Users authenticate directly to the gateway machine, and can use company-supplied credentials on domain-joined machines. Token authentication provides an extra layer of security.
-![Architecture of the solution](./media/configure-lab-remote-desktop-gateway/architecture.png)
+Another way to securely access lab VMs without exposing ports or IP addresses is through a browser with Azure Bastion. For more information, see [Enable browser connection to DevTest Labs VMs with Azure Bastion](enable-browser-connection-lab-virtual-machines.md).
-1. The [Get RDP file contents](/rest/api/dtl/virtualmachines/getrdpfilecontents) action is called when you select the **Connect** button.1.
-1. The Get RDP file contents action invokes `https://{gateway-hostname}/api/host/{lab-machine-name}/port/{port-number}` to request an authentication token.
- 1. `{gateway-hostname}` is the gateway hostname specified on the **Lab Settings** page for your lab in the Azure portal.
- 1. `{lab-machine-name}` is the name of the machine that you're trying to connect.
- 1. `{port-number}` is the port on which the connection needs to be made. Usually this port is 3389. If the lab VM is using the [shared IP](devtest-lab-shared-ip.md) feature in DevTest Labs, the port will be different.
-1. The remote desktop gateway defers the call from `https://{gateway-hostname}/api/host/{lab-machine-name}/port/{port-number}` to an Azure function to generate the authentication token. The DevTest Labs service automatically includes the function key in the request header. The function key is to be saved in the lab's key vault. The name for that secret to be shown as **Gateway token secret** on the **Lab Settings** page for the lab.
-1. The Azure function is expected to return a token for certificate-based token authentication against the gateway machine.
-1. The Get RDP file contents action then returns the complete RDP file, including the authentication information.
-1. You open the RDP file using your preferred RDP connection program. Remember that not all RDP connection programs support token authentication. The authentication token does have an expiration date, set by the function app. Make the connection to the lab VM before the token expires.
-1. Once the remote desktop gateway machine authenticates the token in the RDP file, the connection is forwarded to your lab machine.
+## Architecture
-### Solution requirements
-To work with the DevTest Labs token authentication feature, there are a few configuration requirements for the gateway machines, domain name services (DNS), and functions.
+The following diagram shows how a remote desktop gateway applies token authentication and connects to DevTest Labs VMs.
-### Requirements for remote desktop gateway machines
-- TLS/SSL certificate must be installed on the gateway machine to handle HTTPS traffic. The certificate must match the fully qualified domain name (FQDN) of the load balancer for the gateway farm or the FQDN of the machine itself if there's only one machine. Wild-card TLS/SSL certificates don't work. -- A signing certificate installed on gateway machine(s). Create a signing certificate by using [Create-SigningCertificate.ps1](https://github.com/Azure/azure-devtestlab/blob/master/samples/DevTestLabs/GatewaySample/tools/Create-SigningCertificate.ps1) script.-- Install the [Pluggable Authentication](https://code.msdn.microsoft.com/windowsdesktop/Remote-Desktop-Gateway-517d6273) module that supports token authentication for the remote desktop gateway. One example of such a module is `RDGatewayFedAuth.msi` that comes with [System Center Virtual Machine Manager (VMM) images](/system-center/vmm/install-console?view=sc-vmm-1807&preserve-view=true). For more information about System Center, see [System Center documentation](/system-center/) and [pricing details](https://www.microsoft.com/cloud-platform/system-center-pricing). -- The gateway server can handle requests made to `https://{gateway-hostname}/api/host/{lab-machine-name}/port/{port-number}`.
+![Diagram that shows the remote desktop gateway architecture.](./media/configure-lab-remote-desktop-gateway/architecture.png)
- The gateway-hostname is the FQDN of the load balancer of the gateway farm or the FQDN of machine itself if there's only one machine. The `{lab-machine-name}` is the name of the lab machine that you're trying to connect, and the `{port-number}` is port on which the connection will be made. By default, this port is 3389. However, if the virtual machine is using the [shared IP](devtest-lab-shared-ip.md) feature in DevTest Labs, the port will be different.
-- The [Application Routing Request](/iis/extensions/planning-for-arr/using-the-application-request-routing-module) module for Internet Information Server (IIS) can be used to redirect `https://{gateway-hostname}/api/host/{lab-machine-name}/port/{port-number}` requests to the azure function, which handles the request to get a token for authentication.
+1. Selecting **Connect** > **RDP** from a lab VM invokes the [getRdpFileContents](/rest/api/dtl/virtualmachines/getrdpfilecontents) REST command:
+ ```http
+ POST https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DevTestLab/labs/{labName}/virtualmachines/{name}/getRdpFileContents
+ ```
-## Requirements for Azure function
-Azure function handles request with format of `https://{function-app-uri}/app/host/{lab-machine-name}/port/{port-number}` and returns the authentication token based on the same signing certificate installed on the gateway machines. The `{function-app-uri}` is the uri used to access the function. The function key is automatically be passed in the header of the request. For a sample function, see [https://github.com/Azure/azure-devtestlab/blob/master/samples/DevTestLabs/GatewaySample/src/RDGatewayAPI/Functions/CreateToken.cs](https://github.com/Azure/azure-devtestlab/blob/master/samples/DevTestLabs/GatewaySample/src/RDGatewayAPI/Functions/CreateToken.cs).
+1. When the lab has a gateway configured, the `getRdpFileContents` action invokes `https://{gateway-hostname}/api/host/{lab-machine-name}/port/{port-number}` to request an authentication token.
+ - `{gateway-hostname}`, or `{lb-uri}` for a load balancer, is the gateway hostname specified on the **Lab settings** page for the lab.
+ - `{lab-machine-name}` is the name of the VM to connect to.
+ - `{port-number}` is the port to use for the connection. Usually this port is 3389, but if the lab VM uses a [shared IP](devtest-lab-shared-ip.md), the port number is different.
+1. The remote desktop gateway uses `https://{function-app-uri}/api/host/{lab-machine-name}/port/{port-number}` to defer the call to an Azure Functions function app.
-## Requirements for network
+ > [!NOTE]
+ > The request header automatically includes the function key, which it gets from the lab's key vault. The function key secret's name is the **Gateway token secret** on the lab's **Lab settings** page.
-- DNS for the FQDN associated with the TLS/SSL certificate installed on the gateway machines must direct traffic to the gateway machine or the load balancer of the gateway machine farm.-- If the lab machine uses private IPs, there must be a network path from the gateway machine to the lab machine, either through sharing the same virtual network or using peered virtual networks.
+1. The Azure function generates and returns a token for certificate-based authentication on the gateway machine.
-## Configure the lab to use token authentication
-This section shows how to configure a lab to use a remote desktop gateway machine that supports token authentication. This section doesn't cover how to set up a remote desktop gateway farm itself. For that information, See the [Sample to create a remote desktop gateway](#sample-to-create-a-remote-desktop-gateway) section at the end of this article.
+1. The `getRdpFileContents` action returns the complete RDP file, including the authentication token.
-Before you update the lab settings, store the key needed to successfully execute the function to return an authentication token in the lab's key vault. You can get the function key value in the **Manage** page for the function in the Azure portal. For more information on how to save a secret in a key vault, see [Add a secret to Key Vault](../key-vault/secrets/quick-create-portal.md#add-a-secret-to-key-vault). Save the name of the secret for later use.
+When an RDP connection program opens the RDP file, the remote desktop gateway authenticates the token, and the connection forwards to the lab VM.
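+For illustration only, a token-enabled RDP file includes gateway-related properties along the following lines. The property values here are hypothetical placeholders; the actual file contents are generated by the service:
+
+```
+full address:s:10.0.0.5:3389
+gatewayhostname:s:gateway.contoso.com
+gatewayusagemethod:i:1
+gatewaycredentialssource:i:5
+gatewayaccesstoken:s:{authentication-token}
+```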
-To find the ID of the lab's key vault, run the following Azure CLI command:
+> [!NOTE]
+> Not all RDP connection programs support token authentication.
-```azurecli
-az resource show --name {lab-name} --resource-type 'Microsoft.DevTestLab/labs' --resource-group {lab-resource-group-name} --query properties.vaultName
-```
+> [!IMPORTANT]
+> The Azure function sets an expiration date for the authentication token. A user must connect to the VM before the token expires.
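+The token request and validation flow above can be sketched as follows. This is a simplified, hypothetical illustration: the real sample function (CreateToken.cs) signs the token with the gateway's signing certificate, whereas this sketch uses an HMAC key as a stand-in, and all host and VM names are placeholders.
+
+```python
+import base64
+import hashlib
+import hmac
+import time
+
+def token_request_url(gateway_hostname: str, machine: str, port: int = 3389) -> str:
+    # URL shape that the getRdpFileContents action calls (step 2 above).
+    return f"https://{gateway_hostname}/api/host/{machine}/port/{port}"
+
+def create_token(machine: str, port: int, lifetime_seconds: int, key: bytes) -> str:
+    # Simplified stand-in for the Azure function: sign the machine name, port,
+    # and expiry time so the gateway can verify the token later (steps 3-4).
+    expiry = int(time.time()) + lifetime_seconds
+    payload = f"{machine}:{port}:{expiry}"
+    signature = hmac.new(key, payload.encode(), hashlib.sha256).hexdigest()
+    return base64.b64encode(f"{payload}:{signature}".encode()).decode()
+
+def verify_token(token: str, key: bytes) -> bool:
+    # Gateway-side check: recompute the signature and reject expired tokens.
+    machine, port, expiry, signature = base64.b64decode(token).decode().rsplit(":", 3)
+    payload = f"{machine}:{port}:{expiry}"
+    expected = hmac.new(key, payload.encode(), hashlib.sha256).hexdigest()
+    return hmac.compare_digest(signature, expected) and int(expiry) > time.time()
+
+key = b"example-signing-key"
+token = create_token("myLabVm", 3389, lifetime_seconds=60, key=key)
+print(token_request_url("gateway.contoso.com", "myLabVm"))  # → https://gateway.contoso.com/api/host/myLabVm/port/3389
+print(verify_token(token, key))  # → True
+```
+
+The expiry check in `verify_token` mirrors the behavior described above: a token presented after its expiration date is rejected, so the connection must be made while the token is still valid.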
+
+## Configuration requirements
+
+There are some configuration requirements for gateway machines, Azure Functions, and networks to work with DevTest Labs RDP access and token authentication:
+
+### Gateway machine requirements
-Configure the lab to use the token authentication by using these steps:
+The gateway machine must have the following configuration:
-1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Select **All Services**, and then select **DevTest Labs** from the list.
-1. From the list of labs, select your **lab**.
-1. On the lab's page, select **Configuration and policies**.
-1. On the left menu, in the **Settings** section, select **Lab settings**.
-1. In the **Remote desktop** section, enter the fully qualified domain name (FQDN) or IP address of the remote desktop services gateway machine or farm for the **Gateway hostname** field. This value must match the FQDN of the TLS/SSL certificate used on gateway machines.
+- A TLS/SSL certificate to handle HTTPS traffic. The certificate must match the fully qualified domain name (FQDN) of the gateway machine if there's only one machine, or the load balancer of a gateway farm. Wild-card TLS/SSL certificates don't work.
- ![Remote desktop options in lab settings](./media/configure-lab-remote-desktop-gateway/remote-desktop-options-in-lab-settings.png)
-1. In the **Remote desktop** section, for **Gateway token** secret, enter the name of the secret created earlier. This value isn't the function key itself, but the name of the secret in the lab's key vault that holds the function key.
+- A signing certificate. You can create a signing certificate by using the [Create-SigningCertificate.ps1](https://github.com/Azure/azure-devtestlab/blob/master/samples/DevTestLabs/GatewaySample/tools/Create-SigningCertificate.ps1) PowerShell script.
- ![Gateway token secret in lab settings](./media/configure-lab-remote-desktop-gateway/gateway-token-secret.png)
-1. **Save** Changes.
+- A [pluggable authentication module](https://en.wikipedia.org/wiki/Pluggable_authentication_module) that supports token authentication. One example is *RDGatewayFedAuth.msi*, which comes with [System Center Virtual Machine Manager (VMM)](/system-center/vmm/install-console?view=sc-vmm-1807&preserve-view=true) images.
- > [!NOTE]
- > By clicking **Save**, you agree to [Remote Desktop Gateway's license terms](https://www.microsoft.com/licensing/product-licensing/products). For more information about remote gateway, see [Welcome to Remote Desktop Services](/windows-server/remote/remote-desktop-services/Welcome-to-rds) and [Deploy your remote desktop environment](/windows-server/remote/remote-desktop-services/rds-deploy-infrastructure).
+- The ability to handle requests to `https://{gateway-hostname}/api/host/{lab-machine-name}/port/{port-number}`.
+  You can use the [Application Request Routing module for Internet Information Services (IIS)](/iis/extensions/planning-for-arr/using-the-application-request-routing-module) to redirect `https://{gateway-hostname}/api/host/{lab-machine-name}/port/{port-number}` requests to the function app.
-If configuring the lab via automation is preferred, see [Set-DevTestLabGateway.ps1](https://github.com/Azure/azure-devtestlab/blob/master/samples/DevTestLabs/GatewaySample/tools/Set-DevTestLabGateway.ps1) for a sample PowerShell script to set **gateway hostname** and **gateway token secret** settings. The [Azure DevTest Labs GitHub repository](https://github.com/Azure/azure-devtestlab) also provides an Azure Resource Manager template that creates or updates a lab with the **gateway hostname** and **gateway token secret** settings.
+### Azure Functions requirements
-## Configure network security group
-To further secure the lab, a network security group (NSG) can be added to the virtual network used by the lab virtual machines. For instructions how to set up an NSG, see [Create, change, or delete a network security group](../virtual-network/manage-network-security-group.md).
+An Azure Functions function app handles requests in the format `https://{function-app-uri}/api/host/{lab-machine-name}/port/{port-number}`, and creates and returns the authentication token based on the gateway machine's signing certificate. The `{function-app-uri}` is the URI used to access the function.
-Here is an example NSG that only allows traffic that first goes through the gateway to reach lab machines. The source in this rule is the IP address of the single gateway machine, or the IP address of the load balancer in front of the gateway machines.
+The request header must pass the function key, which it gets from the lab's key vault.
-![Network security group - rules](./media/configure-lab-remote-desktop-gateway/network-security-group-rules.png)
+For a sample function, see [CreateToken.cs](https://github.com/Azure/azure-devtestlab/blob/master/samples/DevTestLabs/GatewaySample/src/RDGatewayAPI/Functions/CreateToken.cs).
-## Sample to create a remote desktop gateway
+### Network requirements
+
+- The DNS for the FQDN associated with the gateway machine's TLS/SSL certificate must direct traffic to the gateway machine or to the load balancer of a gateway machine farm.
+
+- If the lab VM uses a private IP address, there must be a network path from the gateway machine to the lab machine. The two machines must either share the same virtual network or use peered virtual networks.
+
+## Create a remote desktop gateway
+
+The [Azure DevTest Labs GitHub repository](https://github.com/Azure/azure-devtestlab) has Azure Resource Manager (ARM) templates that help set up DevTest Labs token authentication and remote desktop gateway resources. There are templates for gateway machine creation, lab settings, and a function app.
> [!NOTE]
-> By using the sample templates, you agree to [Remote Desktop Gateway's license terms](https://www.microsoft.com/licensing/product-licensing/products). For more information about remote gateway, see [Welcome to Remote Desktop Services](/windows-server/remote/remote-desktop-services/Welcome-to-rds) and [Deploy your remote desktop environment](/windows-server/remote/remote-desktop-services/rds-deploy-infrastructure).
+> By using the sample templates, you agree to the [Remote Desktop Gateway license terms](https://www.microsoft.com/licensing/product-licensing/products).
-The [Azure DevTest Labs GitHub repository](https://github.com/Azure/azure-devtestlab) provides a few samples to help setup the resources needed to use token authentication and remote desktop gateway with DevTest Labs. These samples include Azure Resource Manager templates for gateway machines, lab settings, and function app.
+Follow these steps to set up a sample remote desktop gateway farm.
-Follow these steps to set up a sample solution for the remote desktop gateway farm.
+1. Create a signing certificate.
-1. Create a signing certificate. Run [Create-SigningCertificate.ps1](https://github.com/Azure/azure-devtestlab/blob/master/samples/DevTestLabs/GatewaySample/tools/Create-SigningCertificate.ps1). Save the thumbprint, password, and Base64 encoding of the created certificate.
-2. Get a TLS/SSL certificate. FQDN associated with the TLS/SSL certificate must be for the domain you control. Save the thumbprint, password, and Base64 encoding for this certificate. To get thumbprint using PowerShell, use the following commands.
+ Run [Create-SigningCertificate.ps1](https://github.com/Azure/azure-devtestlab/blob/master/samples/DevTestLabs/GatewaySample/tools/Create-SigningCertificate.ps1). Record the thumbprint, password, and Base64 encoding of the created certificate to use later.
+
+1. Get a TLS/SSL certificate. The FQDN associated with the TLS/SSL certificate must be for a domain you control.
- ```powershell
- $cer = New-Object System.Security.Cryptography.X509Certificates.X509Certificate;
- $cer.Import('path-to-certificate');
- $hash = $cer.GetCertHashString()
- ```
+1. Record the password, thumbprint, and Base64 encoding for the TLS/SSL certificate to use later.
- To get the Base64 encoding using PowerShell, use the following command.
+ - To get the thumbprint, use the following PowerShell commands:
- ```powershell
- [System.Convert]::ToBase64String([System.IO.File]::ReadAllBytes('path-to-certificate'))
- ```
-3. Download files from [https://github.com/Azure/azure-devtestlab/tree/master/samples/DevTestLabs/GatewaySample/arm/gateway](https://github.com/Azure/azure-devtestlab/tree/master/samples/DevTestLabs/GatewaySample/arm/gateway).
-
- The template requires access to a few other Resource Manager templates and related resources at the same base URI. Copy all the files from [https://github.com/Azure/azure-devtestlab/blob/master/samples/DevTestLabs/GatewaySample/arm/gateway](https://github.com/Azure/azure-devtestlab/blob/master/samples/DevTestLabs/GatewaySample/arm/gateway) and RDGatewayFedAuth.msi to a blob container in a storage account.
-4. Deploy **azuredeploy.json** from [https://github.com/Azure/azure-devtestlab/tree/master/samples/DevTestLabs/GatewaySample/arm/gateway](https://github.com/Azure/azure-devtestlab/tree/master/samples/DevTestLabs/GatewaySample/arm/gateway). The template takes the following parameters:
- adminUsername – Required. Administrator user name for the gateway machines.
- adminPassword – Required. Password for the administrator account for the gateway machines.
- instanceCount – Number of gateway machines to create.
- alwaysOn – Indicates whether to keep the created Azure Functions app in a warm state or not. Keeping the Azure Functions app will avoid delays when users first try to connect to their lab VM, but it does have cost implications.
- tokenLifetime – The length of time the created token will be valid. Format is HH:MM:SS.
- sslCertificate – The Base64 encoding of the TLS/SSL certificate for the gateway machine.
- sslCertificatePassword – The password of the TLS/SSL certificate for the gateway machine.
- sslCertificateThumbprint - The certificate thumbprint for identification in the local certificate store of the TLS/SSL certificate.
- signCertificate – The Base64 encoding for signing certificate for the gateway machine.
- signCertificatePassword – The password for signing certificate for the gateway machine.
- signCertificateThumbprint - The certificate thumbprint for identification in the local certificate store of the signing certificate.
- _artifactsLocation – URI location where all supporting resources can be found. This value must be a fully qualified UIR, not a relative path.
- _artifactsLocationSasToken – The Shared Access Signature (SAS) token used to access supporting resources, if the location is an Azure storage account.
-
- The template can be deployed using the Azure CLI by using the following command:
-
- ```azurecli
- az deployment group create --resource-group {resource-group} --template-file azuredeploy.json --parameters @azuredeploy.parameters.json --parameters _artifactsLocation="{storage-account-endpoint}/{container-name}" --parameters _artifactsLocationSasToken="?{sas-token}"
+ ```powershell
+ $cer = New-Object System.Security.Cryptography.X509Certificates.X509Certificate;
+ $cer.Import('path-to-certificate');
+ $hash = $cer.GetCertHashString()
+ ```
+
+ - To get the Base64 encoding, use the following PowerShell command:
+
+ ```powershell
+ [System.Convert]::ToBase64String([System.IO.File]::ReadAllBytes('path-to-certificate'))
+ ```
+
+1. Download all the files from [https://github.com/Azure/azure-devtestlab/tree/master/samples/DevTestLabs/GatewaySample/arm/gateway](https://github.com/Azure/azure-devtestlab/tree/master/samples/DevTestLabs/GatewaySample/arm/gateway). Copy all the files and *RDGatewayFedAuth.msi* to a blob container in a storage account.
+
+1. Open *azuredeploy.json* from [https://github.com/Azure/azure-devtestlab/tree/master/samples/DevTestLabs/GatewaySample/arm/gateway](https://github.com/Azure/azure-devtestlab/tree/master/samples/DevTestLabs/GatewaySample/arm/gateway), and fill out the following parameters:
+
+ - `adminUsername` – **Required**. Administrator user name for the gateway machines.
+ - `adminPassword` – **Required**. Password for the administrator account for the gateway machines.
+ - `instanceCount` – Number of gateway machines to create.
+ - `alwaysOn` – Whether to keep the created Azure Functions app in a warm state or not. Keeping the Azure Functions app on avoids delays when users first try to connect to their lab VMs, but has cost implications.
+ - `tokenLifetime` – The length of time in HH:MM:SS format that the created token will be valid.
+ - `sslCertificate` – **Required**. The Base64 encoding of the TLS/SSL certificate for the gateway machine.
+ - `sslCertificatePassword` – **Required**. The password of the TLS/SSL certificate for the gateway machine.
+ - `sslCertificateThumbprint` – **Required**. The certificate thumbprint for identification in the local certificate store of the TLS/SSL certificate.
+ - `signCertificate` – **Required**. The Base64 encoding for the signing certificate for the gateway machine.
+ - `signCertificatePassword` – **Required**. The password for the signing certificate for the gateway machine.
+ - `signCertificateThumbprint` – **Required**. The certificate thumbprint for identification in the local certificate store of the signing certificate.
+ - `_artifactsLocation` – **Required**. The URI location to find artifacts this template requires. This value must be a fully qualified URI, not a relative path. The artifacts include other templates, PowerShell scripts, and the Remote Desktop Gateway Pluggable Authentication module, expected to be named *RDGatewayFedAuth.msi*, that supports token authentication.
+ - `_artifactsLocationSasToken` – **Required**. The shared access signature (SAS) token to access artifacts, if the `_artifactsLocation` is an Azure storage account.
+
+1. Deploy *azuredeploy.json* by using the following Azure CLI command:
+
+ ```azurecli
+ az deployment group create --resource-group {resource-group} --template-file azuredeploy.json --parameters @azuredeploy.parameters.json --parameters _artifactsLocation="{storage-account-endpoint}/{container-name}" --parameters _artifactsLocationSasToken="?{sas-token}"
```
- Here are the descriptions of the parameters:
+ - Get the `{storage-account-endpoint}` by running
+ `az storage account show --name {storage-account-name} --query primaryEndpoints.blob`.
+
+ - Get the `{sas-token}` by running
+ `az storage container generate-sas --name {container-name} --account-name {storage-account-name} --https-only --permissions drlw --expiry {utc-expiration-date}`.
+
+ - `{storage-account-name}` is the name of the storage account that holds the files you uploaded.
+ - `{container-name}` is the container in the `{storage-account-name}` that holds the files you uploaded.
+ - `{utc-expiration-date}` is the date, in UTC, when the SAS token will expire and can no longer be used to access the storage account.
+
+1. Record the values for `gatewayFQDN` and `gatewayIP` from the template deployment output. Also save the value of the key for the newly created function, which you can find in the function app's [Application settings tab](../azure-functions/functions-how-to-use-azure-function-app-settings.md#settings).
+
+1. Configure DNS so that the FQDN of the TLS/SSL certificate directs to the `gatewayIP` IP address.
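If you're working from a bash shell rather than PowerShell, the two certificate values recorded in the earlier steps can be produced with the `openssl` and `base64` utilities. This is an illustrative sketch only; the throwaway self-signed certificate it generates stands in for your real certificate file:

```shell
# Illustrative only: generate a throwaway self-signed certificate to stand in
# for your real TLS/SSL or signing certificate file (substitute your own path).
openssl req -x509 -newkey rsa:2048 -keyout key.pem -out cert.pem \
    -days 1 -nodes -subj "/CN=example.contoso.com" 2>/dev/null

# Thumbprint: the SHA-1 fingerprint with colons removed, matching the format
# that PowerShell's GetCertHashString() returns.
THUMBPRINT=$(openssl x509 -in cert.pem -noout -fingerprint -sha1 | tr -d ':' | cut -d= -f2)
echo "$THUMBPRINT"

# Base64 encoding of the raw certificate file on a single line.
# (On macOS, use `base64 -b 0` instead of `base64 -w0`.)
BASE64CERT=$(base64 -w0 cert.pem)
echo "$BASE64CERT"
```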
- - The {storage-account-endpoint} can be obtained by running `az storage account show --name {storage-acct-name} --query primaryEndpoints.blob`. The {storage-acct-name} is the name of the storage account that holds files that you uploaded.
- - The {container-name} is the name of the container in the {storage-acct-name} that holds files that you uploaded.
- - The {sas-token} can be obtained by running `az storage container generate-sas --name {container-name} --account-name {storage-acct-name} --https-only --permissions drlw --expiry {utc-expiration-date}`.
- - The {storage-acct-name} is the name of the storage account that holds files that you uploaded.
- - The {container-name} is the name of the container in the {storage-acct-name} that holds files that you uploaded.
- - The {utc-expiration-date} is the date, in UTC, at which the SAS token will expire and the SAS token can no longer be used to access the storage account.
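For illustration, an *azuredeploy.parameters.json* file for the deployment described above might look like the following. The values are placeholders, and the authoritative parameter schema is the sample's own *azuredeploy.json*:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "adminUsername": { "value": "gatewayadmin" },
    "adminPassword": { "value": "<admin-password>" },
    "instanceCount": { "value": 2 },
    "alwaysOn": { "value": true },
    "tokenLifetime": { "value": "00:01:00" },
    "sslCertificate": { "value": "<base64-encoded-tls-ssl-certificate>" },
    "sslCertificatePassword": { "value": "<tls-ssl-certificate-password>" },
    "sslCertificateThumbprint": { "value": "<tls-ssl-certificate-thumbprint>" },
    "signCertificate": { "value": "<base64-encoded-signing-certificate>" },
    "signCertificatePassword": { "value": "<signing-certificate-password>" },
    "signCertificateThumbprint": { "value": "<signing-certificate-thumbprint>" }
  }
}
```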
+After you create the remote desktop gateway farm and update DNS, you can configure Azure DevTest Labs to use the gateway.
- Record the values for gatewayFQDN and gatewayIP from the template deployment output. You'll also need to save the value of the function key for the newly created function, which can be found in the [Function app settings](../azure-functions/functions-how-to-use-azure-function-app-settings.md) tab.
-5. Configure DNS so that FQDN of TLS/SSL cert directs to IP address of gatewayIP from previous step.
+## Configure the lab to use token authentication
- After the Remote Desktop Gateway farm is created and appropriate DNS updates are made, it's ready to be used by a lab in DevTest Labs. The **gateway hostname** and **gateway token secret** settings must be configured to use the gateway machine(s) you deployed.
+Before you update the lab settings, store the key for the authentication token function in the lab's key vault. You can get the function key value on the function's **Function Keys** page in the Azure portal.
- > [!NOTE]
- > If the lab machine uses private IPs, there must be a network path from the gateway machine to the lab machine, either through sharing the same virtual network or using a peered virtual network.
+To find the name of the lab's key vault, run the following Azure CLI command:
+
+```azurecli
+az resource show --name {lab-name} --resource-type 'Microsoft.DevTestLab/labs' --resource-group {lab-resource-group-name} --query properties.vaultName
+```
+
+For more information on how to save a secret in a key vault, see [Add a secret to Key Vault](../key-vault/secrets/quick-create-portal.md#add-a-secret-to-key-vault). Record the secret name to use later. This value isn't the function key itself, but the name of the key vault secret that holds the function key.
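As a sketch, you can also store the function key in the lab's key vault with the Azure CLI instead of the portal; the vault name, secret name, and key value below are placeholders:

```azurecli
az keyvault secret set --vault-name {lab-key-vault-name} --name {secret-name} --value {function-key}
```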
+
+To configure a lab's **Gateway hostname** and **Gateway token secret** to use token authentication with the gateway machine(s), follow these steps:
+
+1. On the lab's **Overview** page, select **Configuration and policies** from the left navigation.
+
+1. On the **Configuration and policies** page, select **Lab settings** from the **Settings** section of the left navigation.
+
+1. In the **Remote desktop** section:
+
+ - For the **Gateway hostname** field, enter the FQDN or IP address of the remote desktop services gateway machine or farm. This value must match the FQDN of the TLS/SSL certificate used on gateway machines.
+
+ - For **Gateway token**, enter the secret name you recorded earlier. This value isn't the function key itself, but the name of the key vault secret that holds the function key.
- Once both gateway and lab are configured, the connection file created when the lab user clicks on the **Connect** will automatically include information necessary to connect using token authentication.
+ ![Screenshot of Remote desktop options in Lab settings.](./media/configure-lab-remote-desktop-gateway/remote-desktop-options-in-lab-settings.png)
+
+1. Select **Save**.
+
+ > [!NOTE]
+ > By selecting **Save**, you agree to [Remote Desktop Gateway license terms](https://www.microsoft.com/licensing/product-licensing/products).
+
+Once you configure both the gateway and the lab, the RDP connection file created when the lab user selects **Connect** includes the necessary information to connect to the gateway and use token authentication.
+
+### Configure a lab via automation
+
+- [Set-DevTestLabGateway.ps1](https://github.com/Azure/azure-devtestlab/blob/master/samples/DevTestLabs/GatewaySample/tools/Set-DevTestLabGateway.ps1) is a sample PowerShell script to automatically set **Gateway hostname** and **Gateway token secret** settings.
+
+- The [Azure DevTest Labs GitHub repository](https://github.com/Azure/azure-devtestlab/tree/master/samples/DevTestLabs/GatewaySample/arm/lab) has [Gateway sample ARM templates](https://github.com/Azure/azure-devtestlab/tree/master/samples/DevTestLabs/GatewaySample/arm/lab) that create or update a lab with **Gateway hostname** and **Gateway token secret** settings.
+
+### Configure a network security group
+
+To further secure the lab, you can add a network security group (NSG) to the virtual network the lab VMs use. For instructions, see [Create, change, or delete a network security group](../virtual-network/manage-network-security-group.md).
+
+For example, an NSG could allow only traffic that first goes through the gateway to reach lab VMs. The rule source is the IP address of the gateway machine or load balancer for the gateway farm.
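A rule along those lines could be created with the Azure CLI as follows. This is a sketch with placeholder names, not the sample's own configuration:

```azurecli
az network nsg rule create --resource-group {resource-group} --nsg-name {nsg-name} \
    --name AllowRdpFromGateway --priority 100 --direction Inbound --access Allow \
    --protocol Tcp --source-address-prefixes {gateway-ip-address} \
    --destination-port-ranges 3389
```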
+
+![Screenshot of a Network security group rule.](./media/configure-lab-remote-desktop-gateway/network-security-group-rules.png)
## Next steps
-See the following article to learn more about Remote Desktop
+
+- [Remote Desktop Services documentation](/windows-server/remote/remote-desktop-services/Welcome-to-rds)
+- [Deploy your remote desktop environment](/windows-server/remote/remote-desktop-services/rds-deploy-infrastructure)
+- [System Center documentation](/system-center/)
digital-twins Concepts Apis Sdks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-apis-sdks.md
API metrics such as requests, latency, and failure rate can be viewed in the [Az
From the portal homepage, search for your Azure Digital Twins instance to pull up its details. Select the **Metrics** option from the Azure Digital Twins instance's menu to bring up the **Metrics** page. From here, you can view the metrics for your instance and create custom views.
digital-twins Concepts High Availability Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-high-availability-disaster-recovery.md
To view Service Health events...
:::image type="content" source="media/concepts-high-availability-disaster-recovery/issue-updates.png" alt-text="Screenshot of the Azure portal showing the 'Health History' page with the 'Issue updates' tab highlighted. The tab displays the status of entries." lightbox="media/concepts-high-availability-disaster-recovery/issue-updates.png":::
-The information displayed in this tool isn't specific to one Azure Digital instance. After using Service Health to understand what's going with the Azure Digital Twins service in a certain region or subscription, you can take monitoring a step further by using the [Resource health tool](troubleshoot-resource-health.md) to drill down into specific instances and see whether they're affected.
+The information displayed in this tool isn't specific to one Azure Digital Twins instance. After using Service Health to understand what's going on with the Azure Digital Twins service in a certain region or subscription, you can take monitoring a step further by using [Azure Resource Health](how-to-monitor-resource-health.md) to drill down into specific instances and see whether they're affected.
## Best practices
digital-twins How To Manage Routes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-manage-routes.md
Routing metrics such as count, latency, and failure rate can be viewed in the [A
From the portal homepage, search for your Azure Digital Twins instance to pull up its details. Select the **Metrics** option from the Azure Digital Twins instance's navigation menu on the left to bring up the **Metrics** page. From here, you can view the metrics for your instance and create custom views.
-For more on viewing Azure Digital Twins metrics, see [Troubleshooting: Metrics](troubleshoot-metrics.md).
+For more on viewing Azure Digital Twins metrics, see [Monitor with metrics](how-to-monitor-metrics.md).
## Next steps
digital-twins How To Monitor Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-monitor-alerts.md
+
+# Mandatory fields.
+ Title: Monitor with alerts
+
+description: Learn how to troubleshoot Azure Digital Twins by setting up alerts based on service metrics.
++ Last updated : 03/10/2022++++
+# Monitor Azure Digital Twins with alerts
+
+In this article, you'll learn how to set up *alerts* in the [Azure portal](https://portal.azure.com). These alerts will notify you when configurable conditions you've defined based on the metrics of your Azure Digital Twins instance are met, allowing you to take important actions.
+
+Azure Digital Twins collects [metrics](how-to-monitor-metrics.md) for your service instance that give information about the state of your resources. You can use these metrics to assess the overall health of the Azure Digital Twins service and the resources connected to it.
+
+Alerts proactively notify you when important conditions are found in your metrics data. They allow you to identify and address issues before the users of your system notice them. You can read more about alerts in [Overview of alerts in Microsoft Azure](../azure-monitor/alerts/alerts-overview.md).
+
+## Turn on alerts
+
+Here's how to enable alerts for your Azure Digital Twins instance:
+
+1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to your Azure Digital Twins instance. You can find it by typing its name into the portal search bar.
+
+2. Select **Alerts** from the menu, then **+ New alert rule**.
+
+ :::image type="content" source="media/how-to-monitor-alerts/alerts-pre.png" alt-text="Screenshot of the Azure portal showing the button to create a new alert rule in the Alerts section of an Azure Digital Twin instance." lightbox="media/how-to-monitor-alerts/alerts-pre.png":::
+
+3. On the **Create alert rule** page that follows, you can follow the prompts to define conditions, actions to be triggered, and alert details.
+ * **Scope** details should fill automatically with the details for your instance
+ * You'll define **Condition** and **Action group** details to customize alert triggers and responses. For more information about this process, see the [Select conditions](#select-conditions) section later in this article.
+ * In the **Alert rule details** section, enter a name and optional description for your rule.
+ - You can select the **Enable alert rule upon creation** checkbox if you want the alert to become active as soon as it's created.
+ - You can select the **Automatically resolve alerts** checkbox if you want to resolve the alert when the condition isn't met anymore.
+ - This section is also where you select a **subscription**, **resource group**, and **Severity** level.
+
+4. Select the **Create alert rule** button to create your alert rule.
+
+ :::image type="content" source="media/how-to-monitor-alerts/create-alert-rule.png" alt-text="Screenshot of the Azure portal showing the Create Alert Rule page with sections for scope, condition, action group, and alert rule details." lightbox="media/how-to-monitor-alerts/create-alert-rule.png":::
+
+For a guided walkthrough of filling out these fields, see [Overview of alerts in Microsoft Azure](../azure-monitor/alerts/alerts-overview.md). Below are some examples of what the steps will look like for Azure Digital Twins.
+
+## Select conditions
+
+Here's an excerpt from the **Select condition** process illustrating what types of alert signals are available for Azure Digital Twins. On this page you can filter the type of signal, and select the signal that you want from a list.
++
+After selecting a signal, you'll be asked to configure the logic of the alert. You can filter on a dimension, set a threshold value for your alert, and set the frequency of checks for the condition. Here's an example of setting up an alert for when the average Routing Failure Rate metric goes above 5%.
++
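The same kind of alert can also be scripted. The following Azure CLI sketch assumes the metric's internal name is `RoutingFailureRate`; verify the exact signal name in the portal before relying on it:

```azurecli
az monitor metrics alert create --name {alert-rule-name} --resource-group {resource-group} \
    --scopes {digital-twins-instance-resource-id} \
    --condition "avg RoutingFailureRate > 5" \
    --description "Alert when the average routing failure rate exceeds 5%"
```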
+## Verify success
+
+After setting up alerts, they'll show up back on the **Alerts** page for your instance.
+
+
+## Next steps
+
+* For more information about alerts with Azure Monitor, see [Overview of alerts in Microsoft Azure](../azure-monitor/alerts/alerts-overview.md).
+* For information about the Azure Digital Twins metrics, see [Monitor with metrics](how-to-monitor-metrics.md).
+* To see how to enable diagnostics logging for your Azure Digital Twins metrics, see [Monitor with diagnostics logs](how-to-monitor-diagnostics.md).
digital-twins How To Monitor Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-monitor-diagnostics.md
+
+# Mandatory fields.
+ Title: Monitor with diagnostic logs
+
+description: In this article, learn how to enable logging with diagnostics settings and query the logs for immediate viewing. Also, learn about the log categories and their schemas.
++ Last updated : 03/10/2022+++++
+# Monitor Azure Digital Twins with diagnostics logs
+
+This article shows you how to configure diagnostic settings in the [Azure portal](https://portal.azure.com), including what types of logs to collect and where to store them (such as Log Analytics or a storage account of your choice). Then, you can query the logs to quickly gather custom insights.
+
+Azure Digital Twins can collect *logs* for your service instance to monitor its performance, access, and other data. You can use these logs to get an idea of what is happening in your Azure Digital Twins instance, and analyze root causes on issues without needing to contact Azure support.
+
+This article also contains information about all the log categories that Azure Digital Twins can collect, and their schemas.
+
+## Turn on diagnostic settings
+
+Turn on diagnostic settings to start collecting logs on your Azure Digital Twins instance. You can also choose the destination where the exported logs should be stored. Here's how to enable diagnostic settings for your Azure Digital Twins instance.
+
+1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to your Azure Digital Twins instance. You can find it by typing its name into the portal search bar.
+
+2. Select **Diagnostic settings** from the menu, then **Add diagnostic setting**.
+
+ :::image type="content" source="media/how-to-monitor-diagnostics/diagnostic-settings.png" alt-text="Screenshot showing the diagnostic settings page in the Azure portal and button to add." lightbox="media/how-to-monitor-diagnostics/diagnostic-settings.png":::
+
+3. On the page that follows, fill in the following values:
+ * **Diagnostic setting name**: Give the diagnostic settings a name.
+ * **Category details**: Choose which operations you want to monitor, and check the boxes to enable diagnostics for those operations. The operations that diagnostic settings can report on are:
+ - DigitalTwinsOperation
+ - EventRoutesOperation
+ - ModelsOperation
+ - QueryOperation
+ - AllMetrics
+
+ For more details about these categories and the information they contain, see the [Log categories](#log-categories) section below.
+ * **Destination details**: Choose where you want to send the logs. You can select any combination of the three options:
+ - Send to Log Analytics
+ - Archive to a storage account
+ - Stream to an event hub
+
+ You may be asked to fill in more details if they're necessary for your destination selection.
+
+4. Save the new settings.
+
+ :::image type="content" source="media/how-to-monitor-diagnostics/diagnostic-settings-details.png" alt-text="Screenshot showing the diagnostic setting page in the Azure portal where the user has filled in a diagnostic setting information." lightbox="media/how-to-monitor-diagnostics/diagnostic-settings-details.png":::
+
+New settings take effect in about 10 minutes. After that, logs appear in the configured target back on the **Diagnostic settings** page for your instance.
+
+For more detailed information on diagnostic settings and their setup options, you can visit [Create diagnostic settings to send platform logs and metrics to different destinations](../azure-monitor/essentials/diagnostic-settings.md).
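Diagnostic settings can also be created with the Azure CLI. This sketch sends query operation logs to a Log Analytics workspace; the setting name and resource IDs are placeholders:

```azurecli
az monitor diagnostic-settings create --name {setting-name} \
    --resource {digital-twins-instance-resource-id} \
    --workspace {log-analytics-workspace-id} \
    --logs '[{"category": "QueryOperation", "enabled": true}]'
```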
+
+## View and query logs
+
+After configuring storage details of your Azure Digital Twins logs, you can write *custom queries* for them to generate insights and troubleshoot issues. The service also provides a few example queries that can help you get started, by addressing common questions that customers may have about their instances.
+
+Here's how to query the logs for your instance.
+
+1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to your Azure Digital Twins instance. You can find it by typing its name into the portal search bar.
+
+2. Select **Logs** from the menu to open the log query page. The page opens to a window called **Queries**.
+
+ :::image type="content" source="media/how-to-monitor-diagnostics/logs.png" alt-text="Screenshot showing the Logs page for an Azure Digital Twins instance in the Azure portal with the Queries window overlaid, showing prebuilt queries." lightbox="media/how-to-monitor-diagnostics/logs.png":::
+
+ These queries are prebuilt examples written for various logs. You can select one of the queries to load it into the query editor and run it to see these logs for your instance.
+
+ You can also close the **Queries** window without running anything to go straight to the query editor page, where you can write or edit custom query code.
+
+3. After exiting the **Queries** window, you'll see the main query editor page. Here you can view and edit the text of the example queries, or write your own queries from scratch.
+ :::image type="content" source="media/how-to-monitor-diagnostics/logs-query.png" alt-text="Screenshot showing the Logs page for an Azure Digital Twins instance in the Azure portal. It includes a list of logs, query code, and Queries History." lightbox="media/how-to-monitor-diagnostics/logs-query.png":::
+
+ In the left pane,
+ - The **Tables** tab shows the different Azure Digital Twins [log categories](#log-categories) that are available to use in your queries.
+ - The **Queries** tab contains the example queries that you can load into the editor.
+ - The **Filter** tab lets you customize a filtered view of the data that the query returns.
+
+For more detailed information on log queries and how to write them, you can visit [Overview of log queries in Azure Monitor](../azure-monitor/logs/log-query-overview.md).
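For example, a query along these lines surfaces failed query operations from the last day. It's a sketch: the columns it filters and groups on come from the log schema described later in this article, plus the standard `TimeGenerated` column that Log Analytics adds:

```kusto
ADTQueryOperation
| where TimeGenerated > ago(1d)
| where ResultType != "Success"
| summarize FailureCount = count() by OperationName, ResultSignature
```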
+
+## Log categories
+
+Here are more details about the categories of logs that Azure Digital Twins collects.
+
+| Log category | Description |
+| | |
+| ADTModelsOperation | Log all API calls related to Models |
+| ADTQueryOperation | Log all API calls related to Queries |
+| ADTEventRoutesOperation | Log all API calls related to Event Routes and egress of events from Azure Digital Twins to an endpoint service like Event Grid, Event Hubs, and Service Bus |
+| ADTDigitalTwinsOperation | Log all API calls related to individual twins |
+
+Each log category consists of write, read, delete, and action operations. These categories map to REST API calls as follows:
+
+| Event type | REST API operations |
+| | |
+| Write | PUT and PATCH |
+| Read | GET |
+| Delete | DELETE |
+| Action | POST |
+
+Here's a comprehensive list of the operations and corresponding [Azure Digital Twins REST API calls](/rest/api/azure-digitaltwins/) that are logged in each category.
+
+>[!NOTE]
+> Each log category contains several operations/REST API calls. In the table below, each log category maps to all operations/REST API calls underneath it until the next log category is listed.
+
+| Log category | Operation | REST API calls and other events |
+| | | |
+| ADTModelsOperation | Microsoft.DigitalTwins/models/write | Digital Twin Models Update API |
+| | Microsoft.DigitalTwins/models/read | Digital Twin Models Get By ID and List APIs |
+| | Microsoft.DigitalTwins/models/delete | Digital Twin Models Delete API |
+| | Microsoft.DigitalTwins/models/action | Digital Twin Models Add API |
+| ADTQueryOperation | Microsoft.DigitalTwins/query/action | Query Twins API |
+| ADTEventRoutesOperation | Microsoft.DigitalTwins/eventroutes/write | Event Routes Add API |
+| | Microsoft.DigitalTwins/eventroutes/read | Event Routes Get By ID and List APIs |
+| | Microsoft.DigitalTwins/eventroutes/delete | Event Routes Delete API |
+| | Microsoft.DigitalTwins/eventroutes/action | Failure while attempting to publish events to an endpoint service (not an API call) |
+| ADTDigitalTwinsOperation | Microsoft.DigitalTwins/digitaltwins/write | Digital Twins Add, Add Relationship, Update, Update Component |
+| | Microsoft.DigitalTwins/digitaltwins/read | Digital Twins Get By ID, Get Component, Get Relationship by ID, List Incoming Relationships, List Relationships |
+| | Microsoft.DigitalTwins/digitaltwins/delete | Digital Twins Delete, Delete Relationship |
+| | Microsoft.DigitalTwins/digitaltwins/action | Digital Twins Send Component Telemetry, Send Telemetry |
+
+## Log schemas
+
+Each log category has a schema that defines how events in that category are reported. Each individual log entry is stored as text and formatted as a JSON blob. The fields in the log and example JSON bodies are provided for each log type below.
+
+`ADTDigitalTwinsOperation`, `ADTModelsOperation`, and `ADTQueryOperation` use a consistent API log schema. `ADTEventRoutesOperation` extends the schema to contain an `endpointName` field in properties.
+
+### API log schemas
+
+This log schema is consistent for `ADTDigitalTwinsOperation`, `ADTModelsOperation`, and `ADTQueryOperation`. The same schema is also used for `ADTEventRoutesOperation`, except for the `Microsoft.DigitalTwins/eventroutes/action` operation name (for more information about that schema, see the next section, [Egress log schemas](#egress-log-schemas)).
+
+The schema contains information pertinent to API calls to an Azure Digital Twins instance.
+
+Here are the field and property descriptions for API logs.
+
+| Field name | Data type | Description |
+|--||-|
+| `Time` | DateTime | The date and time that this event occurred, in UTC |
+| `ResourceId` | String | The Azure Resource Manager Resource ID for the resource where the event took place |
+| `OperationName` | String | The type of action being performed during the event |
+| `OperationVersion` | String | The API Version used during the event |
+| `Category` | String | The type of resource being emitted |
+| `ResultType` | String | Outcome of the event |
+| `ResultSignature` | String | Http status code for the event |
+| `ResultDescription` | String | Additional details about the event |
+| `DurationMs` | String | How long it took to perform the event in milliseconds |
+| `CallerIpAddress` | String | A masked source IP address for the event |
+| `CorrelationId` | Guid | Customer provided unique identifier for the event |
+| `ApplicationId` | Guid | Application ID used in bearer authorization |
+| `Level` | Int | The logging severity of the event |
+| `Location` | String | The region where the event took place |
+| `RequestUri` | Uri | The endpoint used during the event |
+| `TraceId` | String | `TraceId`, as part of [W3C's Trace Context](https://www.w3.org/TR/trace-context/). The ID of the whole trace used to uniquely identify a distributed trace across systems. |
+| `SpanId` | String | `SpanId` as part of [W3C's Trace Context](https://www.w3.org/TR/trace-context/). The ID of this request in the trace. |
+| `ParentId` | String | `ParentId` as part of [W3C's Trace Context](https://www.w3.org/TR/trace-context/). A request without a parent ID is the root of the trace. |
+| `TraceFlags` | String | `TraceFlags` as part of [W3C's Trace Context](https://www.w3.org/TR/trace-context/). Controls tracing flags such as sampling, trace level, and so on. |
+| `TraceState` | String | `TraceState` as part of [W3C's Trace Context](https://www.w3.org/TR/trace-context/). Additional vendor-specific trace identification information to span across different distributed tracing systems. |
+
+Below are example JSON bodies for these types of logs.
+
+#### ADTDigitalTwinsOperation
+
+```json
+{
+ "time": "2020-03-14T21:11:14.9918922Z",
+ "resourceId": "/SUBSCRIPTIONS/BBED119E-28B8-454D-B25E-C990C9430C8F/RESOURCEGROUPS/MYRESOURCEGROUP/PROVIDERS/MICROSOFT.DIGITALTWINS/DIGITALTWINSINSTANCES/MYINSTANCENAME",
+ "operationName": "Microsoft.DigitalTwins/digitaltwins/write",
+ "operationVersion": "2020-10-31",
+ "category": "DigitalTwinOperation",
+ "resultType": "Success",
+ "resultSignature": "200",
+ "resultDescription": "",
+ "durationMs": 8,
+ "callerIpAddress": "13.68.244.*",
+ "correlationId": "2f6a8e64-94aa-492a-bc31-16b9f0b16ab3",
+ "identity": {
+ "claims": {
+ "appId": "872cd9fa-d31f-45e0-9eab-6e460a02d1f1"
+ }
+ },
+ "level": "4",
+ "location": "southcentralus",
+ "uri": "https://myinstancename.api.scus.digitaltwins.azure.net/digitaltwins/factory-58d81613-2e54-4faa-a930-d980e6e2a884?api-version=2020-10-31",
+ "properties": {},
+ "traceContext": {
+ "traceId": "95ff77cfb300b04f80d83e64d13831e7",
+ "spanId": "b630da57026dd046",
+ "parentId": "9f0de6dadae85945",
+ "traceFlags": "01",
+ "tracestate": "k1=v1,k2=v2"
+ }
+}
+```
+
+#### ADTModelsOperation
+
+```json
+{
+ "time": "2020-10-29T21:12:24.2337302Z",
+ "resourceId": "/SUBSCRIPTIONS/BBED119E-28B8-454D-B25E-C990C9430C8F/RESOURCEGROUPS/MYRESOURCEGROUP/PROVIDERS/MICROSOFT.DIGITALTWINS/DIGITALTWINSINSTANCES/MYINSTANCENAME",
+ "operationName": "Microsoft.DigitalTwins/models/write",
+ "operationVersion": "2020-10-31",
+ "category": "ModelsOperation",
+ "resultType": "Success",
+ "resultSignature": "201",
+ "resultDescription": "",
+ "durationMs": "80",
+ "callerIpAddress": "13.68.244.*",
+ "correlationId": "9dcb71ea-bb6f-46f2-ab70-78b80db76882",
+ "identity": {
+ "claims": {
+ "appId": "872cd9fa-d31f-45e0-9eab-6e460a02d1f1"
+ }
+ },
+ "level": "4",
+ "location": "southcentralus",
+ "uri": "https://myinstancename.api.scus.digitaltwins.azure.net/Models?api-version=2020-10-31",
+ "properties": {},
+ "traceContext": {
+ "traceId": "95ff77cfb300b04f80d83e64d13831e7",
+ "spanId": "b630da57026dd046",
+ "parentId": "9f0de6dadae85945",
+ "traceFlags": "01",
+ "tracestate": "k1=v1,k2=v2"
+ }
+}
+```
+
+#### ADTQueryOperation
+
+```json
+{
+ "time": "2020-12-04T21:11:44.1690031Z",
+ "resourceId": "/SUBSCRIPTIONS/BBED119E-28B8-454D-B25E-C990C9430C8F/RESOURCEGROUPS/MYRESOURCEGROUP/PROVIDERS/MICROSOFT.DIGITALTWINS/DIGITALTWINSINSTANCES/MYINSTANCENAME",
+ "operationName": "Microsoft.DigitalTwins/query/action",
+ "operationVersion": "2020-10-31",
+ "category": "QueryOperation",
+ "resultType": "Success",
+ "resultSignature": "200",
+ "resultDescription": "",
+ "durationMs": "314",
+ "callerIpAddress": "13.68.244.*",
+ "correlationId": "1ee2b6e9-3af4-4873-8c7c-1a698b9ac334",
+ "identity": {
+ "claims": {
+ "appId": "872cd9fa-d31f-45e0-9eab-6e460a02d1f1"
+ }
+ },
+ "level": "4",
+ "location": "southcentralus",
+ "uri": "https://myinstancename.api.scus.digitaltwins.azure.net/query?api-version=2020-10-31",
+ "properties": {},
+ "traceContext": {
+ "traceId": "95ff77cfb300b04f80d83e64d13831e7",
+ "spanId": "b630da57026dd046",
+ "parentId": "9f0de6dadae85945",
+ "traceFlags": "01",
+ "tracestate": "k1=v1,k2=v2"
+ }
+}
+```
+
+#### ADTEventRoutesOperation
+
+Here's an example JSON body for an `ADTEventRoutesOperation` that isn't of `Microsoft.DigitalTwins/eventroutes/action` type (for more information about that schema, see the next section, [Egress log schemas](#egress-log-schemas)).
+
+```json
+ {
+ "time": "2020-10-30T22:18:38.0708705Z",
+ "resourceId": "/SUBSCRIPTIONS/BBED119E-28B8-454D-B25E-C990C9430C8F/RESOURCEGROUPS/MYRESOURCEGROUP/PROVIDERS/MICROSOFT.DIGITALTWINS/DIGITALTWINSINSTANCES/MYINSTANCENAME",
+ "operationName": "Microsoft.DigitalTwins/eventroutes/write",
+ "operationVersion": "2020-10-31",
+ "category": "EventRoutesOperation",
+ "resultType": "Success",
+ "resultSignature": "204",
+ "resultDescription": "",
+ "durationMs": 42,
+ "callerIpAddress": "212.100.32.*",
+ "correlationId": "7f73ab45-14c0-491f-a834-0827dbbf7f8e",
+ "identity": {
+ "claims": {
+ "appId": "872cd9fa-d31f-45e0-9eab-6e460a02d1f1"
+ }
+ },
+ "level": "4",
+ "location": "southcentralus",
+ "uri": "https://myinstancename.api.scus.digitaltwins.azure.net/EventRoutes/egressRouteForEventHub?api-version=2020-10-31",
+ "properties": {},
+ "traceContext": {
+ "traceId": "95ff77cfb300b04f80d83e64d13831e7",
+ "spanId": "b630da57026dd046",
+ "parentId": "9f0de6dadae85945",
+ "traceFlags": "01",
+ "tracestate": "k1=v1,k2=v2"
+ }
+ }
+```
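
Records shaped like the example bodies above can be summarized programmatically, for instance tallying outcomes per category once the logs are exported. A minimal sketch, using only field names that appear in the JSON examples above (the record values here are made up):

```python
# Sketch: tally outcomes per log category from records shaped like the
# example JSON bodies above. The records below are illustrative stand-ins,
# not real exported data.
records = [
    {"category": "DigitalTwinOperation", "resultType": "Success"},
    {"category": "QueryOperation", "resultType": "Success"},
    {"category": "QueryOperation", "resultType": "Failure"},
]

counts = {}
for rec in records:
    key = (rec["category"], rec["resultType"])
    counts[key] = counts.get(key, 0) + 1

print(counts[("QueryOperation", "Failure")])  # 1
```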
+
+### Egress log schemas
+
+The following example is the schema for `ADTEventRoutesOperation` logs specific to the `Microsoft.DigitalTwins/eventroutes/action` operation name. These contain details related to exceptions and the API operations around egress endpoints connected to an Azure Digital Twins instance.
+
+|Field name | Data type | Description |
+|--||-|
+| `Time` | DateTime | The date and time that this event occurred, in UTC |
+| `ResourceId` | String | The Azure Resource Manager Resource ID for the resource where the event took place |
+| `OperationName` | String | The type of action being performed during the event |
+| `Category` | String | The type of resource being emitted |
+| `ResultDescription` | String | Additional details about the event |
+| `CorrelationId` | Guid | Customer provided unique identifier for the event |
+| `ApplicationId` | Guid | Application ID used in bearer authorization |
+| `Level` | Int | The logging severity of the event |
+| `Location` | String | The region where the event took place |
+| `TraceId` | String | `TraceId`, as part of [W3C's Trace Context](https://www.w3.org/TR/trace-context/). The ID of the whole trace used to uniquely identify a distributed trace across systems. |
+| `SpanId` | String | `SpanId` as part of [W3C's Trace Context](https://www.w3.org/TR/trace-context/). The ID of this request in the trace. |
+| `ParentId` | String | `ParentId` as part of [W3C's Trace Context](https://www.w3.org/TR/trace-context/). A request without a parent ID is the root of the trace. |
+| `TraceFlags` | String | `TraceFlags` as part of [W3C's Trace Context](https://www.w3.org/TR/trace-context/). Controls tracing flags such as sampling, trace level, and so on. |
+| `TraceState` | String | `TraceState` as part of [W3C's Trace Context](https://www.w3.org/TR/trace-context/). Additional vendor-specific trace identification information to span across different distributed tracing systems. |
+| `EndpointName` | String | The name of the egress endpoint created in Azure Digital Twins |
+
+Below are example JSON bodies for these types of logs.
+
+#### ADTEventRoutesOperation for Microsoft.DigitalTwins/eventroutes/action
+
+Here's an example JSON body for an `ADTEventRoutesOperation` that is of the `Microsoft.DigitalTwins/eventroutes/action` type.
+
+```json
+{
+ "time": "2020-11-05T22:18:38.0708705Z",
+ "resourceId": "/SUBSCRIPTIONS/BBED119E-28B8-454D-B25E-C990C9430C8F/RESOURCEGROUPS/MYRESOURCEGROUP/PROVIDERS/MICROSOFT.DIGITALTWINS/DIGITALTWINSINSTANCES/MYINSTANCENAME",
+ "operationName": "Microsoft.DigitalTwins/eventroutes/action",
+ "operationVersion": "",
+ "category": "EventRoutesOperation",
+ "resultType": "",
+ "resultSignature": "",
+ "resultDescription": "Unable to send EventHub message to [myPath] for event Id [f6f45831-55d0-408b-8366-058e81ca6089].",
+ "durationMs": -1,
+ "callerIpAddress": "",
+ "correlationId": "7f73ab45-14c0-491f-a834-0827dbbf7f8e",
+ "identity": {
+ "claims": {
+ "appId": "872cd9fa-d31f-45e0-9eab-6e460a02d1f1"
+ }
+ },
+ "level": "4",
+ "location": "southcentralus",
+ "uri": "",
+ "properties": {
+ "endpointName": "myEventHub"
+ },
+ "traceContext": {
+ "traceId": "95ff77cfb300b04f80d83e64d13831e7",
+ "spanId": "b630da57026dd046",
+ "parentId": "9f0de6dadae85945",
+ "traceFlags": "01",
+ "tracestate": "k1=v1,k2=v2"
+ }
+}
+```
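
Egress failure records like the one above can be grouped by endpoint to spot a consistently failing destination. A minimal sketch that uses the `properties.endpointName` field shown in the example (the records themselves are illustrative):

```python
# Sketch: count egress failures per endpoint, reading the
# properties.endpointName field shown in the example above.
# The record list is an illustrative stand-in for exported logs.
records = [
    {"properties": {"endpointName": "myEventHub"},
     "resultDescription": "Unable to send EventHub message."},
    {"properties": {"endpointName": "myEventHub"},
     "resultDescription": "Unable to send EventHub message."},
]

failures_by_endpoint = {}
for rec in records:
    name = rec["properties"]["endpointName"]
    failures_by_endpoint[name] = failures_by_endpoint.get(name, 0) + 1

print(failures_by_endpoint)  # {'myEventHub': 2}
```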
+
+## Next steps
+
+* For more information about configuring diagnostics, see [Collect and consume log data from your Azure resources](../azure-monitor/essentials/platform-logs-overview.md).
+* For information about the Azure Digital Twins metrics, see [Monitor with metrics](how-to-monitor-metrics.md).
+* To see how to enable alerts for your Azure Digital Twins metrics, see [Monitor with alerts](how-to-monitor-alerts.md).
digital-twins How To Monitor Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-monitor-metrics.md
+
+# Mandatory fields.
+ Title: Monitor with metrics
+
+description: Learn how to view Azure Digital Twins metrics in Azure Monitor to troubleshoot and oversee your instance.
++ Last updated : 03/10/2022+++
+# Optional fields. Don't forget to remove # if you need a field.
+#
+#
+#
++
+# Monitor Azure Digital Twins with metrics
+
+The metrics described in this article give you information about the state of Azure Digital Twins resources in your Azure subscription. Azure Digital Twins metrics help you assess the overall health of the Azure Digital Twins service and the resources connected to it. These user-facing statistics help you see what's going on with your Azure Digital Twins instance and help you analyze the root causes of issues without needing to contact Azure support.
+
+Metrics are enabled by default. You can view Azure Digital Twins metrics from the [Azure portal](https://portal.azure.com).
+
+## View the metrics
+
+1. Create an Azure Digital Twins instance. You can find instructions on how to set up an Azure Digital Twins instance in [Set up an instance and authentication](how-to-set-up-instance-portal.md).
+
+2. Find your Azure Digital Twins instance in the [Azure portal](https://portal.azure.com) (you can open the page for it by typing its name into the portal search bar).
+
+ From the instance's menu, select **Metrics**.
+
+ :::image type="content" source="media/how-to-monitor-metrics/azure-digital-twins-metrics.png" alt-text="Screenshot showing the metrics page for Azure Digital Twins in the Azure portal.":::
+
+ This page displays the metrics for your Azure Digital Twins instance. You can also create custom views of your metrics by selecting the ones you want to see from the list.
+
+3. You can choose to send your metrics data to an Event Hubs endpoint or an Azure Storage account by selecting **Diagnostics settings** from the menu, then **Add diagnostic setting**.
+
+ :::image type="content" source="media/how-to-monitor-diagnostics/diagnostic-settings.png" alt-text="Screenshot showing the diagnostic settings page and button to add in the Azure portal.":::
+
+ For more information about this process, see [Monitor with diagnostics logs](how-to-monitor-diagnostics.md).
+
+4. You can choose to set up alerts for your metrics data by selecting **Alerts** from the menu, then **+ New alert rule**.
+ :::image type="content" source="media/how-to-monitor-alerts/alerts-pre.png" alt-text="Screenshot showing the Alerts page and button to add in the Azure portal.":::
+
+ For more information about this process, see [Monitor with alerts](how-to-monitor-alerts.md).
+
+## List of metrics
+
+Azure Digital Twins provides several metrics to give you an overview of the health of your instance and its associated resources. You can also combine information from multiple metrics to paint a bigger picture of the state of your instance.
+
+The following tables describe the metrics tracked by each Azure Digital Twins instance, and how each metric relates to the overall status of your instance.
+
+#### Metrics for tracking service limits
+
+You can configure these metrics to track when you're approaching a [published service limit](reference-service-limits.md#functional-limits) for some aspect of your solution.
+
+To set up tracking, use the [alerts](how-to-monitor-alerts.md) feature in Azure Monitor. You can define thresholds for these metrics so that you receive an alert when a metric reaches a certain percentage of its published limit.
+
+| Metric | Metric display name | Unit | Aggregation type| Description | Dimensions |
+| | | | | | |
+| TwinCount | Twin Count (Preview) | Count | Total | Total number of twins in the Azure Digital Twins instance. Use this metric to determine if you're approaching the [service limit](reference-service-limits.md#functional-limits) for max number of twins allowed per instance. | None |
+| ModelCount | Model Count (Preview) | Count | Total | Total number of models in the Azure Digital Twins instance. Use this metric to determine if you're approaching the [service limit](reference-service-limits.md#functional-limits) for max number of models allowed per instance. | None |
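
The threshold guidance above amounts to simple percentage arithmetic: alert when a count metric crosses some fraction of the published limit. A minimal sketch of that calculation — the limit value here is a placeholder, not the actual Azure Digital Twins service limit:

```python
# Hypothetical sketch: derive an alert threshold as a percentage of a
# published service limit. The limit value below is a placeholder --
# check the service limits reference for the real figures.
def alert_threshold(published_limit: int, fraction: float = 0.8) -> int:
    """Return the metric value at which an alert should fire."""
    return int(published_limit * fraction)

print(alert_threshold(1000))  # alert at 800 twins for a placeholder limit of 1000
```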
+
+#### API request metrics
+
+Metrics having to do with API requests:
+
+| Metric | Metric display name | Unit | Aggregation type| Description | Dimensions |
+| | | | | | |
+| ApiRequests | API Requests | Count | Total | The number of API Requests made for Digital Twins read, write, delete, and query operations. | Authentication, <br>Operation, <br>Protocol, <br>Status Code, <br>Status Code Class, <br>Status Text |
+| ApiRequestsFailureRate | API Requests Failure Rate | Percent | Average | The percentage of API requests that the service receives for your instance that give an internal error (500) response code for Digital Twins read, write, delete, and query operations. | Authentication, <br>Operation, <br>Protocol, <br>Status Code, <br>Status Code Class, <br>Status Text
+| ApiRequestsLatency | API Requests Latency | Milliseconds | Average | The response time for API requests. This value refers to the time from when the request is received by Azure Digital Twins until the service sends a success/fail result for Digital Twins read, write, delete, and query operations. | Authentication, <br>Operation, <br>Protocol |
+
+#### Billing metrics
+
+Metrics having to do with billing:
+
+| Metric | Metric display name | Unit | Aggregation type| Description | Dimensions |
+| | | | | | |
+| BillingApiOperations | Billing API Operations | Count | Total | Billing metric for the count of all API requests made against the Azure Digital Twins service. | Meter ID |
+| BillingMessagesProcessed | Billing Messages Processed | Count | Total | Billing metric for the number of messages sent out from Azure Digital Twins to external endpoints.<br><br>To be considered a single message for billing purposes, a payload must be no larger than 1 KB. Payloads larger than this limit will be counted as additional messages in 1 KB increments (so a message between 1 KB and 2 KB will be counted as 2 messages, between 2 KB and 3 KB will be 3 messages, and so on).<br>This restriction also applies to responses, so a call that returns 1.5 KB in the response body, for example, will be billed as 2 operations. | Meter ID |
+| BillingQueryUnits | Billing Query Units | Count | Total | The number of Query Units, an internally computed measure of service resource usage, consumed to execute queries. There's also a helper API available for measuring Query Units: [QueryChargeHelper Class](/dotnet/api/azure.digitaltwins.core.querychargehelper?view=azure-dotnet&preserve-view=true) | Meter ID |
+
+For more information on the way Azure Digital Twins is billed, see [Azure Digital Twins pricing](https://azure.microsoft.com/pricing/details/digital-twins/).
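
The 1 KB increment rule described for `BillingMessagesProcessed` is a ceiling division. A minimal sketch of that arithmetic:

```python
import math

# Sketch of the billing rule described above: a payload is billed as
# ceil(size / 1 KB) messages, with a minimum of one message.
def billed_messages(payload_bytes: int) -> int:
    return max(1, math.ceil(payload_bytes / 1024))

print(billed_messages(1536))  # 1.5 KB payload -> billed as 2 messages
print(billed_messages(800))   # under 1 KB -> 1 message
```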
+
+#### Ingress metrics
+
+Metrics having to do with data ingress:
+
+| Metric | Metric display name | Unit | Aggregation type| Description | Dimensions |
+| | | | | | |
+| IngressEvents | Ingress Events | Count | Total | The number of incoming telemetry events into Azure Digital Twins. | Result |
+| IngressEventsFailureRate | Ingress Events Failure Rate | Percent | Average | The percentage of incoming telemetry events for which the service returns an internal error (500) response code. | Result |
+| IngressEventsLatency | Ingress Events Latency | Milliseconds | Average | The time from when an event arrives to when it's ready to be egressed by Azure Digital Twins, at which point the service sends a success/fail result. | Result |
+
+#### Routing metrics
+
+Metrics having to do with routing:
+
+| Metric | Metric display name | Unit | Aggregation type| Description | Dimensions |
+| | | | | | |
+| MessagesRouted | Messages Routed | Count | Total | The number of messages routed to an endpoint Azure service such as Event Hubs, Service Bus, or Event Grid. | Endpoint Type, <br>Result |
+| RoutingFailureRate | Routing Failure Rate | Percent | Average | The percentage of events that result in an error as they're routed from Azure Digital Twins to an endpoint Azure service such as Event Hubs, Service Bus, or Event Grid. | Endpoint Type, <br>Result |
+| RoutingLatency | Routing Latency | Milliseconds | Average | Time elapsed between an event getting routed from Azure Digital Twins to when it's posted to the endpoint Azure service such as Event Hubs, Service Bus, or Event Grid. | Endpoint Type, <br>Result |
+
+## Dimensions
+
+Dimensions help identify more details about the metrics. Some of the routing metrics provide information per endpoint. The table below lists possible values for these dimensions.
+
+| Dimension | Values |
+| | |
+| Authentication | OAuth |
+| Operation (for API Requests) | Microsoft.DigitalTwins/digitaltwins/delete, <br>Microsoft.DigitalTwins/digitaltwins/write, <br>Microsoft.DigitalTwins/digitaltwins/read, <br>Microsoft.DigitalTwins/eventroutes/read, <br>Microsoft.DigitalTwins/eventroutes/write, <br>Microsoft.DigitalTwins/eventroutes/delete, <br>Microsoft.DigitalTwins/models/read, <br>Microsoft.DigitalTwins/models/write, <br>Microsoft.DigitalTwins/models/delete, <br>Microsoft.DigitalTwins/query/action |
+| Endpoint Type | Event Grid, <br>Event Hubs, <br>Service Bus |
+| Protocol | HTTPS |
+| Result | Success, <br>Failure |
+| Status Code | 200, 404, 500, and so on. |
+| Status Code Class | 2xx, 4xx, 5xx, and so on. |
+| Status Text | Internal Server Error, Not Found, and so on. |
+
+## Next steps
+
+To learn more about managing recorded metrics for Azure Digital Twins, see [Monitor with diagnostics logs](how-to-monitor-diagnostics.md).
digital-twins How To Monitor Resource Health https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-monitor-resource-health.md
+
+# Mandatory fields.
+ Title: Monitor resource health
+
+description: Learn how to use Azure Resource Health to check the health of your Azure Digital Twins instance.
++ Last updated : 03/10/2022++++
+# Optional fields. Don't forget to remove # if you need a field.
+#
+#
+#
++
+# Monitor Azure Digital Twins resource health
+
+[Azure Service Health](../service-health/index.yml) is a suite of experiences that can help you diagnose and get support for service problems that affect your Azure resources. It contains resource health, service health, and status information, and reports on both current and past health information.
+
+## Use Azure Resource Health
+
+[Azure Resource Health](../service-health/resource-health-overview.md) can help you monitor whether your Azure Digital Twins instance is up and running. You can also use it to learn whether a regional outage is impacting the health of your instance.
+
+To check the health of your instance, follow these steps:
+
+1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to your Azure Digital Twins instance. You can find it by typing its name into the portal search bar.
+
+2. From your instance's menu, select **Resource health** under Support + troubleshooting. This will take you to the page for viewing resource health history.
+
+ :::image type="content" source="media/how-to-monitor-resource-health/resource-health.png" alt-text="Screenshot showing the 'Resource health' page. There is a 'Health history' section showing a daily report from the last nine days.":::
+
+In the image above, this instance is showing as **Available**, and has been for the past nine days. To learn more about the Available status and the other status types that may appear, see [Resource Health overview](../service-health/resource-health-overview.md).
+
+You can also learn more about the different checks that go into resource health for different types of Azure resources in [Resource types and health checks in Azure resource health](../service-health/resource-health-checks-resource-types.md).
+
+## Use Azure Service Health
+
+[Azure Service Health](../service-health/service-health-overview.md) can help you check the health of the entire Azure Digital Twins service in a certain region, and be aware of events like ongoing service issues and upcoming planned maintenance.
+
+To check service health, sign in to the [Azure portal](https://portal.azure.com) and navigate to the **Service Health** service. You can find it by typing "service health" into the portal search bar.
+
+You can then filter service issues by subscription, region, and service.
+
+For more information on using Azure Service Health, see [Service Health overview](../service-health/service-health-overview.md).
+
+## Use Azure status
+
+The [Azure status](../service-health/azure-status-overview.md) page provides a global view of the health of Azure services and regions. While Azure Service Health and Azure Resource Health are personalized to your specific resource, Azure status has a larger scope and can be useful to understand incidents with wide-ranging impact.
+
+To check Azure status, navigate to the [Azure status](https://status.azure.com/status/) page. The page displays a table of Azure services along with health indicators per region. You can view Azure Digital Twins by searching for its table entry on the page.
+
+For more information on using the Azure status page, see [Azure status overview](../service-health/azure-status-overview.md).
+
+## Next steps
+
+Read about other ways to monitor your Azure Digital Twins instance in the following articles:
+* [Monitor with metrics](how-to-monitor-metrics.md)
+* [Monitor with diagnostics logs](how-to-monitor-diagnostics.md)
+* [Monitor with alerts](how-to-monitor-alerts.md)
digital-twins Reference Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/reference-service-limits.md
When a limit is reached, any requests beyond it are throttled by the service, wh
To manage the throttling, here are some recommendations for working with limits. * Use retry logic. The [Azure Digital Twins SDKs](concepts-apis-sdks.md) implement retry logic for failed requests, so if you're working with a provided SDK, this functionality is already built-in. Otherwise, consider implementing retry logic in your own application. The service sends back a `Retry-After` header in the failure response, which you can use to determine how long to wait before retrying.
-* Use thresholds and notifications to warn about approaching limits. Some of the service limits for Azure Digital Twins have corresponding [metrics](troubleshoot-metrics.md) that can be used to track usage in these areas. To configure thresholds and set up an alert on any metric when a threshold is approached, see the instructions in [Troubleshooting: Alerts](troubleshoot-alerts.md). To set up notifications for other limits where metrics aren't provided, consider implementing this logic in your own application code.
+* Use thresholds and notifications to warn about approaching limits. Some of the service limits for Azure Digital Twins have corresponding [metrics](how-to-monitor-metrics.md) that can be used to track usage in these areas. To configure thresholds and set up an alert on any metric when a threshold is approached, see the instructions in [Monitor with alerts](how-to-monitor-alerts.md). To set up notifications for other limits where metrics aren't provided, consider implementing this logic in your own application code.
* Deploy at scale across multiple instances. Avoid having a single point of failure. Instead of one large graph for your entire deployment, consider sectioning out subsets of twins logically (like by region or tenant) across multiple instances. >[!NOTE]
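
The `Retry-After` handling described in the retry-logic recommendation above can be sketched as a simple loop. This is an illustrative pattern only, with a stand-in request function; it is not the Azure Digital Twins SDKs' actual retry policy:

```python
import time

# Illustrative retry loop honoring a Retry-After header on throttled
# (429) responses. send_request is a stand-in for your own HTTP call;
# this is not the SDKs' built-in retry implementation.
def call_with_retry(send_request, max_attempts: int = 3):
    for _ in range(max_attempts):
        status, headers, body = send_request()
        if status != 429:
            return status, body
        # Service tells us how long to wait before retrying.
        time.sleep(int(headers.get("Retry-After", 1)))
    return status, body

# Stub that is throttled once, then succeeds.
calls = {"n": 0}
def fake_request():
    calls["n"] += 1
    if calls["n"] == 1:
        return 429, {"Retry-After": "0"}, None
    return 200, {}, "ok"

print(call_with_retry(fake_request))  # (200, 'ok')
```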
digital-twins Troubleshoot Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/troubleshoot-performance.md
# Mandatory fields. Title: "Troubleshooting: Performance"
+ Title: "Troubleshooting performance"
description: Tips for troubleshooting performance of an Azure Digital Twins instance. Previously updated : 10/8/2021- Last updated : 03/10/2022+ # Optional fields. Don't forget to remove # if you need a field.
#
-# Troubleshooting Azure Digital Twins: Performance
+# Troubleshooting Azure Digital Twins performance
If you're experiencing delays or other performance issues when working with Azure Digital Twins, use the tips in this article to help you troubleshoot. ## Isolate the source of the delay
-Determine whether the delay is coming from Azure Digital Twins or another service in your solution. To investigate this delay, you can use the **API Latency** metric in [Azure Monitor](../azure-monitor/essentials/quick-monitor-azure-resource.md) through the Azure portal. For instructions on how to view Azure Monitor metrics for an Azure Digital Twins instance, see [Troubleshooting: Metrics](troubleshoot-metrics.md).
+Determine whether the delay is coming from Azure Digital Twins or another service in your solution. To investigate this delay, you can use the **API Latency** metric in [Azure Monitor](../azure-monitor/essentials/quick-monitor-azure-resource.md) through the Azure portal. For instructions on how to view Azure Monitor metrics for an Azure Digital Twins instance, see [Monitor with metrics](how-to-monitor-metrics.md).
## Check regions
If your solution uses Azure Digital Twins in combination with other Azure servic
## Check logs
-Azure Digital Twins can collect logs for your service instance to help monitor its performance, among other data. Logs can be sent to [Log Analytics](../azure-monitor/logs/log-analytics-overview.md) or your custom storage mechanism. To enable logging in your instance, use the instructions in [Troubleshooting: Diagnostics logs](troubleshoot-diagnostics.md). You can analyze the timestamps on the logs to measure latencies, evaluate if they're consistent, and understand their source.
+Azure Digital Twins can collect logs for your service instance to help monitor its performance, among other data. Logs can be sent to [Log Analytics](../azure-monitor/logs/log-analytics-overview.md) or your custom storage mechanism. To enable logging in your instance, use the instructions in [Monitor with diagnostic logs](how-to-monitor-diagnostics.md). You can analyze the timestamps on the logs to measure latencies, evaluate if they're consistent, and understand their source.
## Check API frequency
If you're still experiencing performance issues after troubleshooting with the s
Follow these steps:
-1. Gather [metrics](troubleshoot-metrics.md) and [logs](troubleshoot-diagnostics.md) for your instance.
+1. Gather [metrics](how-to-monitor-metrics.md) and [logs](how-to-monitor-diagnostics.md) for your instance.
2. Navigate to [Azure Help + support](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest) in the Azure portal. Use the prompts to provide details of your issue, see recommended solutions, share your metrics/log files, and submit any other information that the support team can use to help investigate your issue. For more information on creating support requests, see [Create an Azure support request](../azure-portal/supportability/how-to-create-azure-support-request.md). ## Next steps
-Read about other ways to troubleshoot your Azure Digital Twins instance in the following articles:
-* [Troubleshooting: Metrics](troubleshoot-metrics.md)
-* [Troubleshooting: Diagnostics logs](troubleshoot-diagnostics.md).
-* [Troubleshooting: Alerts](troubleshoot-alerts.md)
-* [Troubleshooting: Resource health](troubleshoot-resource-health.md)
+Read about other ways to monitor your Azure Digital Twins instance to help with troubleshooting:
+* [Monitor with metrics](how-to-monitor-metrics.md)
+* [Monitor with diagnostics logs](how-to-monitor-diagnostics.md)
+* [Monitor with alerts](how-to-monitor-alerts.md)
+* [Monitor resource health](how-to-monitor-resource-health.md)
dms Tutorial Sql Server Managed Instance Online Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-managed-instance-online-ads.md
To complete this tutorial, you need to:
* [Install the Azure SQL Migration extension](/sql/azure-data-studio/extensions/azure-sql-migration-extension) from the Azure Data Studio marketplace * Have an Azure account that is assigned to one of the built-in roles listed below: - Contributor for the target Azure SQL Managed Instance (and Storage Account to upload your database backup files from SMB network share).
- - Owner or Contributor role for the Azure Resource Groups containing the target Azure SQL Managed Instance or the Azure storage account.
+ - Reader role for the Azure Resource Groups containing the target Azure SQL Managed Instance or the Azure storage account.
- Owner or Contributor role for the Azure subscription (required if creating a new DMS service). > [!IMPORTANT] > Azure account is only required when configuring the migration steps and is not required for assessment or Azure recommendation steps in the migration wizard.
To complete this tutorial, you need to:
> [!NOTE] > If your database backups are provided in an on-premises network share, DMS will require you to setup self-hosted integration runtime in the next step of the wizard. Self-hosted integration runtime is required to access your source database backups, check the validity of the backup set and upload them to Azure storage account.<br/> If your database backups are already on an Azure storage blob container, you do not need to setup self-hosted integration runtime.
-1. If you picked the first option for network share, provide details of your source SQL Server, source backup location, target database name and Azure storage account for the backup files to be uploaded to.
+* For backups located on a network share, provide the following details: your source SQL Server, source backup location, target database name, and the Azure storage account that the backup files will be uploaded to.
|Field |Description | ||-|
To complete this tutorial, you need to:
|**Target database name** |The target database name can be modified if you wish to change the database name on the target during the migration process. | |**Storage account details** |The resource group and storage account where backup files will be uploaded to. You do not need to create a container as DMS will automatically create a blob container in the specified storage account during the upload process.
-1. If you picked the second option for backups stored in an Azure Blob Container specify the **Target database name**, **Resource group**, **Azure storage account**, **Blob container** and **Last backup file** from the corresponding drop-down lists. This Azure storage account will be used by DMS to upload the database backups from network share. You do not need to create a container as DMS will automatically create a blob container in the specified storage account during the upload process.
+* For backups stored in an Azure storage blob container, specify the target database name, resource group, Azure storage account, and blob container from the corresponding drop-down lists.
+
+ |Field |Description |
+ ||-|
+ |**Target database name** |The target database name can be modified if you wish to change the database name on the target during the migration process. |
+ |**Storage account details** |The resource group, storage account and container where backup files are located.
+
> [!IMPORTANT] > If loopback check functionality is enabled and the source SQL Server and file share are on the same computer, then source won't be able to access the files hare using FQDN. To fix this issue, disable loopback check functionality using the instructions [here](https://support.microsoft.com/help/926642/error-message-when-you-try-to-access-a-server-locally-by-using-its-fqd)
frontdoor Front Door Quickstart Template Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-quickstart-template-samples.md
na Previously updated : 02/16/2022 Last updated : 03/10/2022
+zone_pivot_groups: front-door-tiers
# Azure Resource Manager deployment model templates for Front Door The following table includes links to Azure Resource Manager deployment model templates for Azure Front Door.
-## Azure Front Door
-
-| Template | Description |
-| | |
-| [Create a basic Front Door](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.network/front-door-create-basic)| Creates a basic Front Door configuration with a single backend. |
-| [Create a Front Door with multiple backends and backend pools and URL based routing](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.network/front-door-create-multiple-backends)| Creates a Front Door with load balancing configured for multiple backends in a backend pool and also across backend pools based on URL path. |
-| [Onboard a custom domain and managed TLS certificate with Front Door](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.network/front-door-custom-domain)| Add a custom domain to your Front Door and use a Front Door-managed TLS certificate. |
-| [Onboard a custom domain and customer-managed TLS certificate with Front Door](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.network/front-door-custom-domain-customer-certificate)| Add a custom domain to your Front Door and use your own TLS certificate by using Key Vault. |
-| [Create Front Door with geo filtering](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.network/front-door-geo-filtering)| Create a Front Door that allows/blocks traffic from certain countries/regions. |
-| [Control Health Probes for your backends on Front Door](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.network/front-door-health-probes)| Update your Front Door to change the health probe settings by updating the probe path and also the intervals in which the probes will be sent. |
-| [Create Front Door with Active/Standby backend configuration](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.network/front-door-priority-lb)| Creates a Front Door that demonstrates priority-based routing for Active/Standby application topology, that is, by default send all traffic to the primary (highest-priority) backend until it becomes unavailable. |
-| [Create Front Door with caching enabled for certain routes](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.network/front-door-create-caching)| Creates a Front Door with caching enabled for the defined routing configuration thus caching any static assets for your workload. |
-| [Configure Session Affinity for your Front Door host names](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.network/front-door-session-affinity) | Updates a Front Door to enable session affinity for your frontend host, thereby, sending subsequent traffic from the same user session to the same backend. |
-| [Configure Front Door for client IP allowlisting or blocklisting](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.network/front-door-waf-clientip)| Configures a Front Door to restrict traffic from certain client IPs by using custom access control rules. |
-| [Configure Front Door to take action with specific HTTP parameters](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.network/front-door-waf-http-params)| Configures a Front Door to allow or block certain traffic based on the HTTP parameters in the incoming request by using custom rules. |
-| [Configure Front Door rate limiting](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.network/front-door-rate-limiting)| Configures a Front Door to rate limit incoming traffic for a given frontend host. |
-| | |
-
-## Azure Front Door Standard/Premium (Preview)
| Sample | Description | |-|-| | [Front Door (quick create)](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.cdn/front-door-standard-premium/) | Creates a basic Front Door profile including an endpoint, origin group, origin, and route. | | [Rule set](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.cdn/front-door-standard-premium-rule-set/) | Creates a Front Door profile and rule set. |
+|**Custom domains**| **Description** |
+| [Custom domain and managed TLS certificate](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.cdn/front-door-standard-premium-custom-domain/) | Creates a Front Door profile with a custom domain and a Microsoft-managed TLS certificate. |
+| [Custom domain and customer-managed TLS certificate](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.cdn/front-door-standard-premium-custom-domain-customer-certificate/) | Creates a Front Door profile with a custom domain and your own TLS certificate from Key Vault. |
+| [Custom domain and Azure DNS](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.cdn/front-door-standard-premium-custom-domain-azure-dns/) | Creates a Front Door profile with a custom domain and an Azure DNS zone. |
+|**Web Application Firewall**| **Description** |
| [WAF policy with managed rule set](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.cdn/front-door-premium-waf-managed/) | Creates a Front Door profile and WAF with managed rule set. | | [WAF policy with custom rule](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.cdn/front-door-standard-premium-waf-custom/) | Creates a Front Door profile and WAF with custom rule. | | [WAF policy with rate limit](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.cdn/front-door-standard-premium-rate-limit/) | Creates a Front Door profile and WAF with a custom rule to perform rate limiting. |
The following table includes links to Azure Resource Manager deployment model te
| [Virtual machine with Private Link service](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.cdn/front-door-premium-vm-private-link) | Creates a virtual machine and Private Link service, and a Front Door profile. | | | | ++
+| Template | Description |
+| | |
+| [Create a basic Front Door](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.network/front-door-create-basic)| Creates a basic Front Door configuration with a single backend. |
+| [Create a Front Door with multiple backends and backend pools and URL based routing](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.network/front-door-create-multiple-backends)| Creates a Front Door with load balancing configured for multiple backends in a backend pool and also across backend pools based on URL path. |
+| [Onboard a custom domain and managed TLS certificate with Front Door](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.network/front-door-custom-domain)| Add a custom domain to your Front Door and use a Front Door-managed TLS certificate. |
+| [Onboard a custom domain and customer-managed TLS certificate with Front Door](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.network/front-door-custom-domain-customer-certificate)| Add a custom domain to your Front Door and use your own TLS certificate by using Key Vault. |
+| [Create Front Door with geo filtering](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.network/front-door-geo-filtering)| Create a Front Door that allows/blocks traffic from certain countries/regions. |
+| [Control Health Probes for your backends on Front Door](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.network/front-door-health-probes)| Update your Front Door to change the health probe settings by updating the probe path and also the intervals in which the probes will be sent. |
+| [Create Front Door with Active/Standby backend configuration](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.network/front-door-priority-lb)| Creates a Front Door that demonstrates priority-based routing for Active/Standby application topology, that is, by default send all traffic to the primary (highest-priority) backend until it becomes unavailable. |
+| [Create Front Door with caching enabled for certain routes](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.network/front-door-create-caching)| Creates a Front Door with caching enabled for the defined routing configuration thus caching any static assets for your workload. |
+| [Configure Session Affinity for your Front Door host names](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.network/front-door-session-affinity) | Updates a Front Door to enable session affinity for your frontend host, thereby, sending subsequent traffic from the same user session to the same backend. |
+| [Configure Front Door for client IP allowlisting or blocklisting](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.network/front-door-waf-clientip)| Configures a Front Door to restrict traffic from certain client IPs by using custom access control rules. |
+| [Configure Front Door to take action with specific HTTP parameters](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.network/front-door-waf-http-params)| Configures a Front Door to allow or block certain traffic based on the HTTP parameters in the incoming request by using custom rules. |
+| [Configure Front Door rate limiting](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.network/front-door-rate-limiting)| Configures a Front Door to rate limit incoming traffic for a given frontend host. |
+| | |
++ ## Next steps +
+- Learn how to [create a Front Door profile](standard-premium/create-front-door-portal.md).
+++ - Learn how to [create a Front Door](quickstart-create-front-door.md).-- Learn [how Front Door works](front-door-routing-architecture.md).+
frontdoor Front Door Url Redirect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-url-redirect.md
Title: Azure Front Door - URL Redirect | Microsoft Docs description: This article helps you understand how Azure Front Door supports URL redirection for their routing rules. Previously updated : 09/28/2020 Last updated : 03/09/2022
+zone_pivot_groups: front-door-tiers
# URL redirect+ Azure Front Door can redirect traffic at each of the following levels: protocol, hostname, path, query string. These functionalities can be configured for individual microservices since the redirection is path-based. This can simplify application configuration by optimizing resource usage, and supports new redirection scenarios including global and path-based redirection.
-</br>
++
+In Azure Front Door Standard/Premium tier, you can configure URL redirect using a Rule Set.
++
+> [!IMPORTANT]
+> Azure Front Door Standard/Premium (Preview) is currently in public preview.
+> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
++ :::image type="content" source="./media/front-door-url-redirect/front-door-url-redirect.png" alt-text="Azure Front Door URL Redirect"::: + ## Redirection types A redirect type sets the response status code for the clients to understand the purpose of the redirect. The following types of redirection are supported:
The destination fragment is the portion of URL after '#', which is used by the b
## Next steps -- Learn how to [create a Front Door](quickstart-create-front-door.md).-- Learn [how Front Door works](front-door-routing-architecture.md).
+* Learn how to [create a Front Door](quickstart-create-front-door.md).
+* Learn more about [Azure Front Door Rule Set](front-door-rules-engine.md).
+* Learn [how Front Door works](front-door-routing-architecture.md).
frontdoor Front Door Url Rewrite https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-url-rewrite.md
Title: Azure Front Door - URL Rewrite | Microsoft Docs
-description: This article helps you understand how Azure Front Door does URL Rewrite for your routes, if configured.
+description: This article helps you understand how URL rewrites works in Azure Front Door.
-+ Previously updated : 09/28/2020 Last updated : 03/09/2022
+zone_pivot_groups: front-door-tiers
-# URL rewrite (custom forwarding path)
+# URL rewrite
++
+Azure Front Door Standard/Premium supports URL rewrite to change the path of a request that is being routed to your origin. URL rewrite also allows you to add conditions to make sure that the URL or the specified headers get rewritten only when certain conditions are met. These conditions are based on the request and response information.
+
+With this feature, you can redirect users to different origins based on scenarios, device types, or the requested file type.
+
+URL rewrite settings can be found in the Rule set configuration.
++
+## Source pattern
+
+Source pattern is the URL path in the source request to replace. Currently, source pattern uses a prefix-based match. To match all URL paths, use a forward slash (/) as the source pattern value.
+
+For the URL rewrite source pattern, only the path after the route configuration "patterns to match" is considered. For example, if you have the incoming URL format `<Frontend-domain>/<route-patterns-to-match-path>/<Rule-URL-Rewrite-Source-pattern>`, only `/<Rule-URL-Rewrite-Source-pattern>` will be considered by the rule engine as the source pattern to be rewritten. Therefore, when you have a URL rewrite rule using source pattern match, the format of the outgoing URL will be `<Frontend-domain>/<route-patterns-to-match-path>/<Rule-URL-Rewrite-destination>`.
+
+For scenarios where the `/<route-patterns-to-match-path>` segment of the URL path must be removed, set the origin path of the origin group in the route configuration to `/`.
+
+## Destination
+
+You can define the destination path to use in the rewrite. The destination path overwrites the source pattern.
+
+## Preserve unmatched path
+
+Preserve unmatched path allows you to append the remaining path after the source pattern to the new path.
+
+For example, if you set **Preserve unmatched path** to **Yes**:
+* If the incoming request is `www.contoso.com/sub/1.jpg`, the source pattern is set to `/`, the destination is set to `/foo/`, and the content is served from `/foo/sub/1.jpg` on the origin.
+
+* If the incoming request is `www.contoso.com/sub/image/1.jpg`, the source pattern is set to `/sub/`, the destination is set to `/foo/`, and the content is served from `/foo/image/1.jpg` on the origin.
+
+For example, if you set **Preserve unmatched path** to **No**:
+* If the incoming request is `www.contoso.com/sub/image/1.jpg`, the source pattern is set to `/sub/`, and the destination is set to `/foo/2.jpg`, the content is always served from `/foo/2.jpg` on the origin, no matter what path follows `www.contoso.com/sub/`.
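The preserve unmatched path behavior described above amounts to a simple prefix rewrite. The following sketch is illustrative only; the `rewrite` helper is hypothetical and not part of any Azure SDK, and it assumes the prefix-based matching this article describes:

```python
# Hypothetical helper illustrating prefix-based URL rewrite with the
# "preserve unmatched path" option. Not Microsoft's implementation.
def rewrite(path: str, source_pattern: str, destination: str,
            preserve_unmatched: bool = True) -> str:
    if not path.startswith(source_pattern):
        return path  # the rule doesn't apply; the path is unchanged
    if preserve_unmatched:
        # Append the remainder after the matched prefix to the destination.
        return destination + path[len(source_pattern):]
    # Otherwise the destination replaces the matched path entirely.
    return destination

# Mirrors the examples above:
rewrite("/sub/1.jpg", "/", "/foo/")                 # -> "/foo/sub/1.jpg"
rewrite("/sub/image/1.jpg", "/sub/", "/foo/")       # -> "/foo/image/1.jpg"
rewrite("/sub/image/1.jpg", "/sub/", "/foo/2.jpg",
        preserve_unmatched=False)                   # -> "/foo/2.jpg"
```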
+++ Azure Front Door supports URL rewrite by configuring an optional **Custom Forwarding Path** to use when constructing the request to forward to the backend. By default, if a custom forwarding path isn't provided, the Front Door will copy the incoming URL path to the URL used in the forwarded request. The Host header used in the forwarded request is as configured for the selected backend. Read [Backend Host Header](front-door-backend-pool.md#hostheader) to learn what it does and how you can configure it.
-The powerful part of URL rewrite is that the custom forwarding path will copy any part of the incoming path that matches to a wildcard path to the forwarded path (these path segments are the **green** segments in the example below):
-</br>
+The robust part of URL rewrite is that the custom forwarding path will copy any part of the incoming path that matches the wildcard path to the forwarded path (these path segments are the **green** segments in the example below):
:::image type="content" source="./media/front-door-url-rewrite/front-door-url-rewrite-example.jpg" alt-text="Azure Front Door URL Rewrite"::: ## URL rewrite example+ Consider a routing rule with the following combination of frontend hosts and paths configured:
-| Hosts | Paths |
-||-|
-| www\.contoso.com | /\* |
-| | /foo |
-| | /foo/\* |
-| | /foo/bar/\* |
+| Hosts | Paths |
+|--|--|
+| www\.contoso.com | /\* |
+| | /foo |
+| | /foo/\* |
+| | /foo/bar/\* |
The first column of the table below shows examples of incoming requests and the second column shows what would be the "most-specific" matching route 'Path'. The third and ensuing columns of the table are examples of configured **Custom Forwarding Paths**. For example, if we read across the second row, it's saying that for incoming request `www.contoso.com/sub`, if the custom forwarding path was `/`, then the forwarded path would be `/sub`. If the custom forwarding path was `/fwd/`, then the forwarded path would be `/fwd/sub`. And so forth, for the remaining columns. The **emphasized** parts of the paths below represent the portions that are part of the wildcard match.
-| Incoming request | Most-specific match path | / | /fwd/ | /foo/ | /foo/bar/ |
-||--||-|-|--|
-| www\.contoso.com/ | /\* | / | /fwd/ | /foo/ | /foo/bar/ |
-| www\.contoso.com/**sub** | /\* | /**sub** | /fwd/**sub** | /foo/**sub** | /foo/bar/**sub** |
-| www\.contoso.com/**a/b/c** | /\* | /**a/b/c** | /fwd/**a/b/c** | /foo/**a/b/c** | /foo/bar/**a/b/c** |
-| www\.contoso.com/foo | /foo | / | /fwd/ | /foo/ | /foo/bar/ |
-| www\.contoso.com/foo/ | /foo/\* | / | /fwd/ | /foo/ | /foo/bar/ |
-| www\.contoso.com/foo/**bar** | /foo/\* | /**bar** | /fwd/**bar** | /foo/**bar** | /foo/bar/**bar** |
+| Incoming request | Most-specific match path | / | /fwd/ | /foo/ | /foo/bar/ |
+|--|--|--|--|--|--|
+| www\.contoso.com/ | /\* | / | /fwd/ | /foo/ | /foo/bar/ |
+| www\.contoso.com/**sub** | /\* | /**sub** | /fwd/**sub** | /foo/**sub** | /foo/bar/**sub** |
+| www\.contoso.com/**a/b/c** | /\* | /**a/b/c** | /fwd/**a/b/c** | /foo/**a/b/c** | /foo/bar/**a/b/c** |
+| www\.contoso.com/foo | /foo | / | /fwd/ | /foo/ | /foo/bar/ |
+| www\.contoso.com/foo/ | /foo/\* | / | /fwd/ | /foo/ | /foo/bar/ |
+| www\.contoso.com/foo/**bar** | /foo/\* | /**bar** | /fwd/**bar** | /foo/**bar** | /foo/bar/**bar** |
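The wildcard-copy behavior shown in the table above can be sketched in a few lines. The `forward_path` helper below is hypothetical, written for illustration only, and is not Front Door's actual implementation:

```python
# Hypothetical sketch of how a custom forwarding path combines with the
# wildcard portion of the matched route path. For illustration only.
def forward_path(incoming_path: str, match_path: str,
                 custom_forwarding_path: str) -> str:
    if match_path.endswith("/*"):
        prefix = match_path[:-1]            # "/foo/*" -> "/foo/"
        wildcard = incoming_path[len(prefix):]
        # The wildcard-matched segment is copied onto the forwarding path.
        return custom_forwarding_path + wildcard
    # An exact-match path ("/foo") has no wildcard segment to copy.
    return custom_forwarding_path

# Mirrors rows of the table above:
forward_path("/sub", "/*", "/fwd/")         # -> "/fwd/sub"
forward_path("/foo/bar", "/foo/*", "/")     # -> "/bar"
forward_path("/foo", "/foo", "/fwd/")       # -> "/fwd/"
```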
> [!NOTE]
-> Azure Front Door only supports URL rewrite from a static path to another static path. Preserve unmatched path is supported with Azure Front Door Standard/Premium SKU. See [preserve unmatched path](standard-premium/concept-rule-set-url-redirect-and-rewrite.md#preserve-unmatched-path) for more details.
+> Azure Front Door only supports URL rewrite from a static path to another static path. Preserve unmatched path is supported with Azure Front Door Standard/Premium SKU. For more information, see [Preserve unmatched path](front-door-url-rewrite.md#preserve-unmatched-path).
> ## Optional settings
-There are additional optional settings you can also specify for any given routing rule settings:
+
+There are extra optional settings you can also specify for any given routing rule settings:
* **Cache Configuration** - If disabled or not specified, requests that match to this routing rule won't attempt to use cached content and instead will always fetch from the backend. Read more about [Caching with Front Door](front-door-caching.md). + ## Next steps - Learn how to [create a Front Door](quickstart-create-front-door.md).-- Learn [how Front Door works](front-door-routing-architecture.md).
+- Learn more about [Azure Front Door Rules engine](front-door-rules-engine.md)
+- Learn about [Azure Front Door routing architecture](front-door-routing-architecture.md).
frontdoor Concept Rule Set Url Redirect And Rewrite https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/concept-rule-set-url-redirect-and-rewrite.md
- Title: 'URL redirect and URL rewrite with Azure Front Door Standard/Premium (Preview)'
-description: This article helps you understand how Azure Front Door supports URL redirection and URL rewrite using Azure Front Door Rule Set.
---- Previously updated : 02/18/2021---
-# URL redirect and URL rewrite with Azure Front Door Standard/Premium (Preview)
-
-> [!Note]
-> This documentation is for Azure Front Door Standard/Premium (Preview). Looking for information on Azure Front Door? View [here](../front-door-overview.md).
-
-This article helps you understand how Azure Front Door Standard/Premium supports URL redirect and URL rewrite used in a Rule Set.
-
-> [!IMPORTANT]
-> Azure Front Door Standard/Premium (Preview) is currently in public preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-
-## URL redirect
-
-Azure Front Door can redirect traffic at each of the following levels: protocol, hostname, path, query string, and fragment. These functionalities can be configured for individual micro-service since the redirection is path-based. With URL redirect you can simplify application configuration by optimizing resource usage, and supports new redirection scenarios including global and path-based redirection.
-
-You can configure URL redirect via Rule Set.
--
-### Redirection types
-A redirect type sets the response status code for the clients to understand the purpose of the redirect. The following types of redirection are supported:
-
-* **301 (Moved permanently)**: Indicates that the target resource has been assigned a new permanent URI. Any future references to this resource will use one of the enclosed URIs. Use 301 status code for HTTP to HTTPS redirection.
-* **302 (Found)**: Indicates that the target resource is temporarily under a different URI. Since the redirection can change on occasion, the client should continue to use the effective request URI for future requests.
-* **307 (Temporary redirect)**: Indicates that the target resource is temporarily under a different URI. The user agent MUST NOT change the request method if it does an automatic redirection to that URI. Since the redirection can change over time, the client ought to continue using the original effective request URI for future requests.
-* **308 (Permanent redirect)**: Indicates that the target resource has been assigned a new permanent URI. Any future references to this resource should use one of the enclosed URIs.
-
-### Redirection protocol
-You can set the protocol that will be used for redirection. The most common use cases of the redirect feature, is to set HTTP to HTTPS redirection.
-
-* **HTTPS only**: Set the protocol to HTTPS only, if you're looking to redirect the traffic from HTTP to HTTPS. Azure Front Door recommends that you should always set the redirection to HTTPS only.
-* **HTTP only**: Redirects the incoming request to HTTP. Use this value only if you want to keep your traffic HTTP that is, non-encrypted.
-* **Match request**: This option keeps the protocol used by the incoming request. So, an HTTP request remains HTTP and an HTTPS request remains HTTPS post redirection.
-
-### Destination host
-As part of configuring a redirect routing, you can also change the hostname or domain for the redirect request. You can set this field to change the hostname in the URL for the redirection or otherwise preserve the hostname from the incoming request. So, using this field you can redirect all requests sent on `https://www.contoso.com/*` to `https://www.fabrikam.com/*`.
-
-### Destination path
-For cases where you want to replace the path segment of a URL as part of redirection, you can set this field with the new path value. Otherwise, you can choose to preserve the path value as part of redirect. So, using this field, you can redirect all requests sent to `https://www.contoso.com/\*` to `https://www.contoso.com/redirected-site`.
-
-### Query string parameters
-You can also replace the query string parameters in the redirected URL. To replace any existing query string from the incoming request URL, set this field to 'Replace' and then set the appropriate value. Otherwise, you can keep the original set of query strings by setting the field to 'Preserve'. As an example, using this field, you can redirect all traffic sent to `https://www.contoso.com/foo/bar` to `https://www.contoso.com/foo/bar?&utm_referrer=https%3A%2F%2Fwww.bing.com%2F`.
-
-### Destination fragment
-The destination fragment is the portion of URL after '#', which is used by the browser to land on a specific section of a web page. You can set this field to add a fragment to the redirect URL.
-
-## URL rewrite
-
-Azure Front Door supports URL rewrite to rewrite the path of a request that's en route to your origin. URL rewrite allows you to add conditions to ensure that the URL or the specified headers get rewritten only when certain conditions get met. These conditions are based on the request and response information.
-
-With this feature, you can redirect users to different origins based on scenario, device type, and requested file type.
-
-You can configure URL redirect via Rule Set.
--
-### Source pattern
-
-Source pattern is the URL path in the source request to replace. Currently, source pattern uses a prefix-based match. To match all URL paths, use a forward slash (/) as the source pattern value.
-
-For URL rewrite source pattern, only the path after the route configuration "patterns to match" is considered. For example, you have the following incoming URL format `<Frontend-domain>/<route-patterns-to-match-path>/<Rule-URL-Rewrite-Source-pattern>`, only `/<Rule-URL-Rewrite-Source-pattern>` will be considered by the rule engine as the source pattern to be rewritten. Therefore, when you have a URL rewrite rule using source pattern match, the format for the outgoing URL will be `<Frontend-domain>/<route-patterns-to-match-path>/<Rule-URL-Rewrite-destination>`.
-
-For scenarios where the `/<route-patterns-to-match-path>` segment of the URL path must be removed, set the Origin path of the Origin group in route configuration to `/`.
-
-### Destination
-
-You can define the destination path to use in the rewrite. The destination path overwrites the source pattern.
-
-### Preserve unmatched path
-
-Preserve unmatched path allows you to append the remaining path after the source pattern to the new path.
-
-For example, if you set **Preserve unmatched path** to **Yes**:
-* If the incoming request is `www.contoso.com/sub/1.jpg`, the source pattern is set to `/`, the destination is set to `/foo/`, and the content is served from `/foo/sub/1.jpg` on the origin.
-
-* If the incoming request is `www.contoso.com/sub/image/1.jpg`, the source pattern is set to `/sub/`, the destination is set to `/foo/`, and the content is served from `/foo/image/1.jpg` on the origin.
-
-For example, if you set **Preserve unmatched path** to **No**:
-* If the incoming request is `www.contoso.com/sub/image/1.jpg`, the source pattern is set to `/sub/`, and the destination is set to `/foo/2.jpg`, the content is always served from `/foo/2.jpg` on the origin, no matter what path follows `www.contoso.com/sub/`.
-
-## Next steps
-
-* Learn more about [Azure Front Door Standard/Premium Rule Set](../front-door-rules-engine.md).
frontdoor Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/faq.md
Azure Front Door is a globally distributed multi-tenant service. The infrastruct
### Is HTTP->HTTPS redirection supported?
-Yes. In fact, Azure Front Door supports host, path, query string redirection, and part of URL redirection. Learn more about [URL redirection](concept-rule-set-url-redirect-and-rewrite.md).
+Yes. In fact, Azure Front Door supports host, path, query string redirection, and part of URL redirection. Learn more about [URL redirection](../front-door-url-redirect.md).
### How do I lock down the access to my backend to only Azure Front Door?
iot-central Concepts Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-architecture.md
Key capabilities in an IoT Central application include:
IoT Central lets you manage the fleet of [IoT devices](#devices) that are sending data to your solution. For example, you can: -- Control which devices can [connect](concepts-get-connected.md) to your application and how they authenticate.
+- Control which devices can [connect](overview-iot-central-developer.md#how-devices-connect) to your application and how they authenticate.
- Use [device templates](concepts-device-templates.md) to define the types of device that can connect to your application. - Manage devices by setting properties or calling commands on connected devices. For example, set a target temperature property for a thermostat device or call a command to trigger a device to update its firmware. You can set properties and call commands on: - Individual devices through a [customizable](concepts-device-templates.md#views) web UI.
In an IoT Central application, you can view and analyze data for individual devi
In an IoT Central application you can manage the following security aspects of your solution: -- [Device connectivity](concepts-get-connected.md): Create, revoke, and update the security keys that your devices use to establish a connection to your application.
+- [Device authentication](concepts-device-authentication.md): Create, revoke, and update the security keys that your devices use to establish a connection to your application.
- [App integrations](howto-authorize-rest-api.md#get-an-api-token): Create, revoke, and update the security keys that other applications use to establish secure connections with your application. - [Data export](howto-export-data.md#connection-options): Use managed identities to secure the connection to your data export destinations. - [User management](howto-manage-users-roles.md): Manage the users that can sign in to the application and the roles that determine what permissions those users have.
A device can use properties to report its state, such as whether a valve is open
IoT Central can also control devices by calling commands on the device. For example, instructing a device to download and install a firmware update.
-The [telemetry, properties, and commands](concepts-telemetry-properties-commands.md) that a device implements are collectively known as the device capabilities. You define these capabilities in a model that's shared between the device and the IoT Central application. In IoT Central, this model is part of the device template that defines a specific type of device. To learn more, see [Associate a device with a device template](concepts-get-connected.md#associate-a-device-with-a-device-template).
+The [telemetry, properties, and commands](concepts-telemetry-properties-commands.md) that a device implements are collectively known as the device capabilities. You define these capabilities in a model that's shared between the device and the IoT Central application. In IoT Central, this model is part of the device template that defines a specific type of device. To learn more, see [Assign a device to a device template](concepts-device-templates.md#assign-a-device-to-a-device-template).
The [device implementation](tutorial-connect-device.md) should follow the [IoT Plug and Play conventions](../../iot-develop/concepts-convention.md) to ensure that it can communicate with IoT Central. For more information, see the various language [SDKs and samples](../../iot-develop/libraries-sdks.md).
iot-central Concepts Device Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-device-authentication.md
+
+ Title: Device authentication in Azure IoT Central | Microsoft Docs
+description: This article introduces key concepts relating to device authentication in Azure IoT Central
++ Last updated : 03/02/2022++++++
+# This article applies to operators and device developers.
++
+# Device authentication concepts in IoT Central
+
+This article describes how devices authenticate to an IoT Central application. To learn more about the overall connection process, see [Connect a device](overview-iot-central-developer.md#how-devices-connect).
+
+Devices authenticate with the IoT Central application by using either a _shared access signature (SAS) token_ or an _X.509 certificate_. X.509 certificates are recommended in production environments.
+
+You use _enrollment groups_ to manage the device authentication options in your IoT Central application.
+
+This article describes the following device authentication options:
+
+- [X.509 enrollment group](#x509-enrollment-group)
+- [SAS enrollment group](#sas-enrollment-group)
+- [Individual enrollment](#individual-enrollment)
+
+## X.509 enrollment group
+
+In a production environment, using X.509 certificates is the recommended device authentication mechanism for IoT Central. To learn more, see [Device Authentication using X.509 CA Certificates](../../iot-hub/iot-hub-x509ca-overview.md).
+
+An X.509 enrollment group contains a root or intermediate X.509 certificate. Devices can authenticate if they have a valid leaf certificate that's derived from the root or intermediate certificate.
+
+To connect a device with an X.509 certificate to your application:
+
+1. Create an _enrollment group_ that uses the **Certificates (X.509)** attestation type.
+1. Add and verify an intermediate or root X.509 certificate in the enrollment group.
+1. Generate a leaf certificate from the root or intermediate certificate in the enrollment group. Install the leaf certificate on the device for it to use when it connects to your application.
+
+To learn more, see [How to connect devices with X.509 certificates](how-to-connect-devices-x509.md).
+
+### For testing purposes only
+
+In a production environment, use certificates from your certificate provider. For testing only, you can use the following utilities to generate root, intermediate, and device certificates:
+
+- [Tools for the Azure IoT Device Provisioning Device SDK](https://github.com/Azure/azure-iot-sdk-node/blob/main/provisioning/tools/readme.md): a collection of Node.js tools that you can use to generate and verify X.509 certificates and keys.
+- [Manage test CA certificates for samples and tutorials](https://github.com/Azure/azure-iot-sdk-c/blob/master/tools/CACertificates/CACertificateOverview.md): a collection of PowerShell and Bash scripts to:
+ - Create a certificate chain.
+ - Save the certificates as .cer files to upload to your IoT Central application.
+ - Use the verification code from the IoT Central application to generate the verification certificate.
+ - Create leaf certificates for your devices using your device IDs as a parameter to the tool.
+
+## SAS enrollment group
+
+A SAS enrollment group contains group-level SAS keys. Devices can authenticate if they have a valid SAS token that's derived from a group-level SAS key.
+
+To connect a device with a device SAS token to your application:
+
+1. Create an _enrollment group_ that uses the **Shared Access Signature (SAS)** attestation type.
+1. Copy the group primary or secondary key from the enrollment group.
+1. Use the Azure CLI to generate a device token from the group key:
+
+ ```azurecli
+ az iot central device compute-device-key --primary-key <enrollment group primary key> --device-id <device ID>
+ ```
+
+1. Use the generated device token when the device connects to your IoT Central application.
+
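For illustration, the derivation that the `compute-device-key` command performs can be sketched in Python. It follows the DPS symmetric key attestation scheme: the base64-decoded group key is used as an HMAC-SHA256 key over the device ID, and the digest is base64 encoded again. Treat this as a sketch for understanding the flow, not a replacement for the CLI:

```python
import base64
import hashlib
import hmac

def compute_device_key(group_key: str, device_id: str) -> str:
    """Derive a per-device SAS key from a base64-encoded group key.

    Mirrors the DPS symmetric key attestation derivation:
    base64(HMAC-SHA256(base64decode(group_key), device_id)).
    """
    key_bytes = base64.b64decode(group_key)
    digest = hmac.new(key_bytes, device_id.encode("utf-8"), hashlib.sha256).digest()
    return base64.b64encode(digest).decode("utf-8")

# Example with a dummy group key -- use your enrollment group's key in practice.
group_key = base64.b64encode(b"0" * 32).decode("utf-8")
device_key = compute_device_key(group_key, "device-01")
```

The same device ID and group key always yield the same device key, so you can generate keys offline in the factory and recompute them later when needed.
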
+> [!NOTE]
+> To use existing SAS keys in your enrollment groups, disable the **Auto generate keys** toggle and manually enter your SAS keys.
+
+## Individual enrollment
+
+Typically, devices connect by using credentials derived from an enrollment group X.509 certificate or SAS key. However, if your devices each have their own credentials, you can use individual enrollments. An individual enrollment is an entry for a single device that's allowed to connect. Individual enrollments can use either X.509 leaf certificates or SAS tokens (from a physical or virtual trusted platform module) as attestation mechanisms. For more information, see [DPS individual enrollment](../../iot-dps/concepts-service.md#individual-enrollment).
+
+> [!NOTE]
+> When you create an individual enrollment for a device, it takes precedence over the default enrollment group options in your IoT Central application.
+
+### Create individual enrollments
+
+IoT Central supports the following attestation mechanisms for individual enrollments:
+
+- **Symmetric key attestation:** Symmetric key attestation is a simple approach to authenticating a device with the DPS instance. To create an individual enrollment that uses symmetric keys, open the **Device connection** page for the device, select **Individual enrollment** as the authentication type, and **Shared access signature (SAS)** as the authentication method. Enter the base64 encoded primary and secondary keys, and save your changes. Use the **ID scope**, **Device ID**, and either the primary or secondary key to connect your device.
+
+ > [!TIP]
+ > For testing, you can use **OpenSSL** to generate base64 encoded keys: `openssl rand -base64 64`
+
+- **X.509 certificates:** To create an individual enrollment with X.509 certificates, open the **Device connection** page, select **Individual enrollment** as the authentication type, and **Certificates (X.509)** as the authentication method. Device certificates used with an individual enrollment entry must have both the issuer and subject CN set to the device ID.
+
+ > [!TIP]
+ > For testing, you can use [Tools for the Azure IoT Device Provisioning Device SDK for Node.js](https://github.com/Azure/azure-iot-sdk-node/tree/main/provisioning/tools) to generate a self-signed certificate: `node create_test_cert.js device "mytestdevice"`
+
+- **Trusted Platform Module (TPM) attestation:** A [TPM](../../iot-dps/concepts-tpm-attestation.md) is a type of hardware security module. Using a TPM is one of the most secure ways to connect a device. This article assumes you're using a discrete, firmware, or integrated TPM. Software emulated TPMs are well suited for prototyping or testing, but they don't provide the same level of security as discrete, firmware, or integrated TPMs. Don't use software TPMs in production. To create an individual enrollment that uses a TPM, open the **Device Connection** page, select **Individual enrollment** as the authentication type, and **TPM** as the authentication method. Enter the TPM endorsement key and save the device connection information.
+
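If OpenSSL isn't available, a base64-encoded key of the same shape can be produced with a few lines of Python. This is a sketch equivalent to the `openssl rand -base64 64` tip above, for testing only:

```python
import base64
import secrets

def generate_key(num_bytes: int = 64) -> str:
    """Return a base64-encoded random key, like `openssl rand -base64 64`."""
    return base64.b64encode(secrets.token_bytes(num_bytes)).decode("utf-8")

# Generate a primary and secondary key pair for a test enrollment.
primary_key = generate_key()
secondary_key = generate_key()
```
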
+## Automatically register devices
+
+This scenario enables OEMs to mass manufacture devices that can connect without first being registered in an application. An OEM generates suitable device credentials, and configures the devices in the factory.
+
+To automatically register devices that use X.509 certificates:
+
+1. Generate the leaf certificates for your devices using the root or intermediate certificate you added to your [X.509 enrollment group](#x509-enrollment-group). Use the device IDs as the common name (CN) in the leaf certificates. A device ID can contain letters, numbers, and the `-` character.
+
+1. As an OEM, flash each device with a device ID, a generated X.509 leaf-certificate, and the application **ID scope** value. The device code should also send the model ID of the device model it implements.
+
+1. When you switch on a device, it first connects to DPS to retrieve its IoT Central connection information.
+
+1. The device uses the information from DPS to connect to, and register with, your IoT Central application.
+
+1. The IoT Central application uses the model ID sent by the device to [assign the registered device to a device template](concepts-device-templates.md#assign-a-device-to-a-device-template).
+
+To automatically register devices that use SAS tokens:
+
+1. Copy the group primary key from the **SAS-IoT-Devices** enrollment group:
+
+ :::image type="content" source="media/concepts-device-authentication/group-primary-key.png" alt-text="Group primary key from S A S - I o T - Devices enrollment group.":::
+
+1. Use the `az iot central device compute-device-key` command to generate the device SAS keys. Use the group primary key from the previous step. The device ID can contain letters, numbers, and the `-` character:
+
+ ```azurecli
+ az iot central device compute-device-key --primary-key <enrollment group primary key> --device-id <device ID>
+ ```
+
+1. As an OEM, flash each device with the device ID, the generated device SAS key, and the application **ID scope** value. The device code should also send the model ID of the device model it implements.
+
+1. When you switch on a device, it first connects to DPS to retrieve its IoT Central registration information.
+
+1. The device uses the information from DPS to connect to, and register with, your IoT Central application.
+
+1. The IoT Central application uses the model ID sent by the device to [assign the registered device to a device template](concepts-device-templates.md#assign-a-device-to-a-device-template).
+
+## Next steps
+
+Some suggested next steps are to:
+
+- Review [best practices](concepts-device-implementation.md#best-practices) for developing devices.
+- Review some sample code that shows how to use SAS tokens in [Tutorial: Create and connect a client application to your Azure IoT Central application](tutorial-connect-device.md)
+- Learn how to [connect devices with X.509 certificates by using the Node.js device SDK](how-to-connect-devices-x509.md)
+- Learn how to [Monitor device connectivity using Azure CLI](./howto-monitor-devices-azure-cli.md)
+- Read about [Azure IoT Edge devices and Azure IoT Central](./concepts-iot-edge.md)
iot-central Concepts Device Implementation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-device-implementation.md
+
+ Title: Device implementation in Azure IoT Central | Microsoft Docs
+description: This article introduces the key concepts and best practices for implementing a device that connects to your IoT Central application.
++ Last updated : 03/04/2022++++++
+# This article applies to device developers.
++
+# Device implementation and best practices for IoT Central
+
+This article describes how to implement devices that connect to your IoT Central application, and includes some best practices. To learn more about the overall connection process, see [Connect a device](overview-iot-central-developer.md#how-devices-connect).
+
+For sample device implementation code, see [Tutorial: Create and connect a client application to your Azure IoT Central application](tutorial-connect-device.md).
+
+## Implement the device
+
+Devices that connect to IoT Central should follow the _IoT Plug and Play conventions_. One of these conventions is that a device should send the _model ID_ of the device model it implements when it connects. The model ID enables the IoT Central application to assign the device to the correct device template.
+
+An IoT Central device template includes a _model_ that specifies the behaviors a device of that type should implement. Behaviors include telemetry, properties, and commands.
+
+Each model has a unique _device twin model identifier_ (DTMI), such as `dtmi:com:example:Thermostat;1`. When a device connects to IoT Central, it sends the DTMI of the model it implements. IoT Central can then assign the correct device template to the device.
+
+[IoT Plug and Play](../../iot-develop/overview-iot-plug-and-play.md) defines a set of [conventions](../../iot-develop/concepts-convention.md) that a device should follow when it implements a DTDL model.
+
+The [Azure IoT device SDKs](#device-sdks) include support for the IoT Plug and Play conventions.
+
+### Device model
+
+A device model is defined by using the [DTDL](https://github.com/Azure/opendigitaltwins-dtdl) modeling language. This language lets you define:
+
+- The telemetry the device sends. The definition includes the name and data type of the telemetry. For example, a device sends temperature telemetry as a double.
+- The properties the device reports to IoT Central. A property definition includes its name and data type. For example, a device reports the state of a valve as a Boolean.
+- The properties the device can receive from IoT Central. Optionally, you can mark a property as writable. For example, IoT Central sends a target temperature as a double to a device.
+- The commands a device responds to. The definition includes the name of the command, and the names and data types of any parameters. For example, a device responds to a reboot command that specifies how many seconds to wait before rebooting.
+
+A DTDL model can be a _no-component_ or a _multi-component_ model:
+
+- No-component model: A simple model that doesn't use embedded or cascaded components. All the telemetry, properties, and commands are defined in a single _root component_. For an example, see the [Thermostat](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/samples/Thermostat.json) model.
+- Multi-component model: A more complex model that includes two or more components: a single root component, and one or more nested components. For an example, see the [Temperature Controller](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/samples/TemperatureController.json) model.
+
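For example, the telemetry, property, and command definitions described above might be expressed in a no-component DTDL v2 interface like the following illustrative fragment (the DTMI and capability names here are hypothetical):

```json
{
  "@context": "dtmi:dtdl:context;2",
  "@id": "dtmi:com:example:SampleDevice;1",
  "@type": "Interface",
  "displayName": "Sample device",
  "contents": [
    { "@type": "Telemetry", "name": "temperature", "schema": "double" },
    { "@type": "Property", "name": "valveOpen", "schema": "boolean" },
    { "@type": "Property", "name": "targetTemperature", "schema": "double", "writable": true },
    {
      "@type": "Command",
      "name": "reboot",
      "request": { "name": "delaySeconds", "schema": "integer" }
    }
  ]
}
```
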
+> [!TIP]
+> You can export the model from an IoT Central device template as a [Digital Twins Definition Language (DTDL) v2](https://github.com/Azure/opendigitaltwins-dtdl) JSON file.
+
+To learn more, see [IoT Plug and Play modeling guide](../../iot-develop/concepts-modeling-guide.md)
+
+### Conventions
+
+A device should follow the IoT Plug and Play conventions when it exchanges data with IoT Central. The conventions include:
+
+- Send the DTMI when it connects to IoT Central.
+- Send correctly formatted JSON payloads and metadata to IoT Central.
+- Correctly respond to writable properties and commands from IoT Central.
+- Follow the naming conventions for component commands.
+
+> [!NOTE]
+> Currently, IoT Central does not fully support the DTDL **Array** and **Geospatial** data types.
+
+To learn more about the format of the JSON messages that a device exchanges with IoT Central, see [Telemetry, property, and command payloads](concepts-telemetry-properties-commands.md).
+
+To learn more about the IoT Plug and Play conventions, see [IoT Plug and Play conventions](../../iot-develop/concepts-convention.md).
+
+### Device SDKs
+
+Use one of the [Azure IoT device SDKs](../../iot-hub/iot-hub-devguide-sdks.md#azure-iot-hub-device-sdks) to implement the behavior of your device. The code should:
+
+- Register the device with DPS and use the information from DPS to connect to the internal IoT hub in your IoT Central application.
+- Announce the DTMI of the model the device implements.
+- Send telemetry in the format that the device model specifies. IoT Central uses the model in the device template to determine how to use the telemetry for visualizations and analysis.
+- Synchronize property values between the device and IoT Central. The model specifies the property names and data types so that IoT Central can display the information.
+- Implement command handlers for the commands specified in the model. The model specifies the command names and parameters that the device should use.
+
+For more information about the role of device templates, see [What are device templates?](./concepts-device-templates.md).
+
+The following table summarizes how Azure IoT Central device features map on to IoT Hub features:
+
+| Azure IoT Central | Azure IoT Hub |
+| -- | - |
+| Telemetry | [Device-to-cloud messaging](../../iot-hub/iot-hub-devguide-messages-d2c.md) |
+| Offline commands | [Cloud-to-device messaging](../../iot-hub/iot-hub-devguide-messages-c2d.md) |
+| Property | [Device twin reported properties](../../iot-hub/iot-hub-devguide-device-twins.md) |
+| Property (writable) | [Device twin desired and reported properties](../../iot-hub/iot-hub-devguide-device-twins.md) |
+| Command | [Direct methods](../../iot-hub/iot-hub-devguide-direct-methods.md) |
+
+### Communication protocols
+
+Communication protocols that a device can use to connect to IoT Central include MQTT, AMQP, and HTTPS. Internally, IoT Central uses an IoT hub to enable device connectivity. For more information about the communication protocols that IoT Hub supports for device connectivity, see [Choose a communication protocol](../../iot-hub/iot-hub-devguide-protocols.md).
+
+If your device can't use any of the supported protocols, use Azure IoT Edge to do protocol conversion. IoT Edge supports other intelligence-on-the-edge scenarios to offload processing from the Azure IoT Central application.
+
+## Best practices
+
+These recommendations show how to implement devices to take advantage of the [built-in high availability, disaster recovery, and automatic scaling](concepts-faq-scalability-availability.md) in IoT Central.
+
+### Handle connection failures
+
+For scaling or disaster recovery purposes, IoT Central may update its underlying IoT hubs. To maintain connectivity, your device code should handle specific connection errors by establishing a connection to a new IoT Hub endpoint.
+
+If the device gets any of the following errors when it connects, it should reprovision itself through DPS to get a new connection string. These errors mean the connection string is no longer valid:
+
+- Unreachable IoT Hub endpoint.
+- Expired security token.
+- Device disabled in IoT Hub.
+
+If the device gets any of the following errors when it connects, it should use a back-off strategy to retry the connection. These errors mean the connection string is still valid, but transient conditions are stopping the device from connecting:
+
+- Operator blocked device.
+- Internal error 500 from the service.
+
+To learn more about device error codes, see [Troubleshooting device connections](troubleshoot-connection.md).
+
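The two error classes above suggest a simple recovery loop. The following Python sketch is illustrative only: `connect` and `reprovision` are hypothetical callables standing in for your SDK's connect call and DPS reprovisioning logic, and the error codes are placeholders, not real SDK values:

```python
import random
import time

# Placeholder codes for the two error classes described above.
REPROVISION_ERRORS = {"unreachable_endpoint", "expired_token", "device_disabled"}
RETRY_ERRORS = {"operator_blocked_device", "internal_error_500"}

def connect_with_recovery(connect, reprovision, max_attempts=5):
    """Try to connect, reprovisioning through DPS or backing off as needed."""
    delay = 1.0
    for _ in range(max_attempts):
        try:
            return connect()
        except ConnectionError as err:
            code = str(err)
            if code in REPROVISION_ERRORS:
                reprovision()  # connection string is no longer valid
            elif code not in RETRY_ERRORS:
                raise  # unknown error: surface it instead of masking it
            # Exponential back-off with jitter before the next attempt.
            time.sleep(delay + random.uniform(0, 0.5))
            delay = min(delay * 2, 60.0)
    raise ConnectionError("gave up after max_attempts")
```

The key point is the split: stale-credential errors trigger a fresh DPS registration, while transient errors only back off and retry against the same hub.
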
+### Test failover capabilities
+
+The Azure CLI lets you test the failover capabilities of your device code. The CLI command works by temporarily switching a device registration to a different internal IoT hub. To verify the device failover worked, check that the device still sends telemetry and responds to commands.
+
+To run the failover test for your device, run the following command:
+
+```azurecli
+az iot central device manual-failover \
+ --app-id {Application ID of your IoT Central application} \
+ --device-id {Device ID of the device you're testing} \
+  --ttl-minutes {How long to wait, in minutes, before moving the device back to its original IoT hub}
+```
+
+> [!TIP]
+> To find the **Application ID**, navigate to **Application > Management** in your IoT Central application.
+
+If the command succeeds, you see output that looks like the following:
+
+```output
+Command group 'iot central device' is in preview and under development. Reference and support levels: https://aka.ms/CLI_refstatus
+{
+ "hubIdentifier": "6bd4...bafa",
+ "message": "Success! This device is now being failed over. You can check your device's status using 'iot central device registration-info' command. The device will revert to its original hub at Tue, 18 May 2021 11:03:45 GMT. You can choose to failback earlier using device-manual-failback command. Learn more: https://aka.ms/iotc-device-test"
+}
+```
+
+To learn more about the CLI command, see [az iot central device manual-failover](/cli/azure/iot/central/device#az_iot_central_device_manual_failover).
+
+You can now check that telemetry from the device still reaches your IoT Central application.
+
+> [!TIP]
+> To see sample device code that handles failovers in various programming languages, see [IoT Central high availability clients](/samples/azure-samples/iot-central-high-availability-clients/iotc-high-availability-clients/).
+
+## Next steps
+
+Some suggested next steps are to:
+
+- Complete the tutorial [Create and connect a client application to your Azure IoT Central application](tutorial-connect-device.md)
+- Review [Device authentication concepts in IoT Central](concepts-device-authentication.md)
+- Learn how to [Monitor device connectivity using Azure CLI](./howto-monitor-devices-azure-cli.md)
+- Read about [Azure IoT Edge devices and Azure IoT Central](./concepts-iot-edge.md)
iot-central Concepts Device Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-device-templates.md
A solution builder adds device templates to an IoT Central application. A device
A device template includes the following sections: -- _A device model_. This part of the device template defines how the device interacts with your application. A device developer implements the behaviors defined in the model.
- - _Root component_. Every device model has a root component. The root component's interface describes capabilities that are specific to the device model.
- - _Components_. A device model may include components in addition to the root component to describe device capabilities. Each component has an interface that describes the component's capabilities. Component interfaces may be reused in other device models. For example several phone device models could use the same camera interface.
- - _Inherited interfaces_. A device model contains one or more interfaces that extend the capabilities of the root component.
+- _A device model_. This part of the device template defines how the device interacts with your application. Every device model has a unique ID. A device developer implements the behaviors defined in the model.
+ - _Root component_. Every device model has a root component. The root component's interface describes capabilities that are specific to the device model.
+ - _Components_. A device model may include components in addition to the root component to describe device capabilities. Each component has an interface that describes the component's capabilities. Component interfaces may be reused in other device models. For example, several phone device models could use the same camera interface.
+ - _Inherited interfaces_. A device model contains one or more interfaces that extend the capabilities of the root component.
- _Cloud properties_. This part of the device template lets the solution developer specify any device metadata to store. Cloud properties are never synchronized with devices and only exist in the application. Cloud properties don't affect the code that a device developer writes to implement the device model.
- _Customizations_. This part of the device template lets the solution developer override some of the definitions in the device model. Customizations are useful if the solution developer wants to refine how the application handles a value, such as changing the display name for a property or the color used to display a telemetry value. Customizations don't affect the code that a device developer writes to implement the device model.
- _Views_. This part of the device template lets the solution developer define visualizations to view data from the device, and forms to manage and control a device. The views use the device model, cloud properties, and customizations. Views don't affect the code that a device developer writes to implement the device model.
+## Assign a device to a device template
+
+For a device to interact with IoT Central, it must be assigned to a device template. This assignment is done in one of four ways:
+
+- When you register a device on the **Devices** page, you can identify the template the device should use.
+- When you bulk import a list of devices, you can choose the device template all the devices on the list should use.
+- You can manually assign an unassigned device to a device template after it connects.
+- You can automatically assign a device to a device template by sending a model ID when the device first connects to your application.
+
+### Automatic assignment
+
+IoT Central can automatically assign a device to a device template when the device connects. A device should send a [model ID](../../iot-fundamentals/iot-glossary.md?toc=/azure/iot-central/toc.json&bc=/azure/iot-central/breadcrumb/toc.json#model-id) when it connects. IoT Central uses the model ID to identify the device template for that specific device model. The discovery process works as follows:
+
+1. If the device template is already published in the IoT Central application, the device is assigned to the device template.
+1. If the device template isn't already published in the IoT Central application, IoT Central looks for the device model in the [public model repository](https://github.com/Azure/iot-plugandplay-models). If IoT Central finds the model, it uses it to generate a basic device template.
+1. If IoT Central doesn't find the model in the public model repository, the device is marked as **Unassigned**. An operator can either create a device template for the device and then migrate the unassigned device to the new device template, or [autogenerate a device template](howto-set-up-template.md#autogenerate-a-device-template) based on the data the device sends.
+
+The following screenshot shows you how to view the model ID of a device template in IoT Central. In a device template, select a component, and then select **Edit identity**:
++
+You can view the [thermostat model](https://github.com/Azure/iot-plugandplay-models/blob/main/dtmi/com/example/thermostat-1.json) in the public model repository. The model ID definition looks like:
+
+```json
+"@id": "dtmi:com:example:Thermostat;1"
+```
+
+Use the following DPS payload to assign the device to a device template:
+
+```json
+{
+ "modelId":"dtmi:com:example:TemperatureController;2"
+}
+```
+
+To learn more about the DPS payload, see the sample code used in the [Tutorial: Create and connect a client application to your Azure IoT Central application](tutorial-connect-device.md).
+
+
## Device models

A device model defines how a device interacts with your IoT Central application. The device developer must make sure that the device implements the behaviors defined in the device model so that IoT Central can monitor and manage the device. A device model is made up of one or more _interfaces_, and each interface can define a collection of _telemetry_ types, _device properties_, and _commands_. A solution developer can import a JSON file that defines the device model into a device template, or use the web UI in IoT Central to create or edit a device model.
iot-central Concepts Faq Scalability Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-faq-scalability-availability.md
Before a device connects to IoT Central, it must be registered and provisioned i
When a device first connects to your IoT Central application, DPS provisions the device in one of the enrollments group's linked IoT hubs. The device is then associated with that IoT hub. DPS uses an allocation policy to load balance the provisioning across the IoT hubs in the application. This process makes sure each IoT hub has a similar number of provisioned devices.
-To learn more about registration and provisioning in IoT Central, see [Get connected to Azure IoT Central](concepts-get-connected.md).
+To learn more about registration and provisioning in IoT Central, see [IoT Central device connectivity guide](overview-iot-central-developer.md#how-devices-connect).
### Device connections

After DPS provisions a device to an IoT hub, the device always tries to connect to that hub. If a device can't reach the IoT hub it's provisioned to, it can't connect to your IoT Central application. To handle this scenario, your device firmware should include a retry strategy that reprovisions the device to another hub.
-To learn more about how device firmware should handle connection errors and connect to a different hub, see [Best practices](overview-iot-central-developer.md#best-practices).
+To learn more about how device firmware should handle connection errors and connect to a different hub, see [Best practices](concepts-device-implementation.md#best-practices).
-To learn more about how to verify your device firmware can handle connection failures, see [Test failover capabilities](overview-iot-central-developer.md#test-failover-capabilities).
+To learn more about how to verify your device firmware can handle connection failures, see [Test failover capabilities](concepts-device-implementation.md#test-failover-capabilities).
## Data export
iot-central Concepts Get Connected https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-get-connected.md
- Title: Device connectivity in Azure IoT Central | Microsoft Docs
-description: This article introduces key concepts relating to device connectivity in Azure IoT Central
-- Previously updated : 12/21/2021------
-# This article applies to operators and device developers.
--
-# Get connected to Azure IoT Central
-
-This article describes how devices connect to an Azure IoT Central application. Before a device can exchange data with IoT Central, it must:
--- *Authenticate*. Authentication with the IoT Central application uses either a _shared access signature (SAS) token_ or an _X.509 certificate_. X.509 certificates are recommended in production environments.-- *Register*. Devices must be registered with the IoT Central application. You can view registered devices on the **Devices** page in the application.-- *Associate with a device template*. In an IoT Central application, device templates define the UI that operators use to view and manage connected devices.-
-IoT Central supports the following two device registration scenarios:
--- *Automatic registration*. The device is registered automatically when it first connects. This scenario enables OEMs to mass manufacture devices that can connect without first being registered. An OEM generates suitable device credentials, and configures the devices in the factory. Optionally, you can require an operator to approve the device before it starts sending data. This scenario requires you to configure an X.509 or SAS _group enrollment_ in your application.-- *Manual registration*. Operators either register individual devices on the **Devices** page, or [import a CSV file](howto-manage-devices-in-bulk.md#import-devices) to bulk register devices. In this scenario you can use X.509 or SAS _group enrollment_, or X.509 or SAS _individual enrollment_.-
-Devices that connect to IoT Central should follow the *IoT Plug and Play conventions*. One of these conventions is that a device should send the _model ID_ of the device model it implements when it connects. The model ID enables the IoT Central application to associate the device with the correct device template.
-
-IoT Central uses the [Azure IoT Hub Device Provisioning service (DPS)](../../iot-dps/about-iot-dps.md) to manage the connection process. A device first connects to a DPS endpoint to retrieve the information it needs to connect to your application. Internally, your IoT Central application uses an IoT hub to handle device connectivity. Using DPS enables:
-
-- IoT Central to support onboarding and connecting devices at scale.
-- You to generate device credentials and configure the devices offline without registering the devices through the IoT Central UI.
-- You to use your own device IDs to register devices in IoT Central. Using your own device IDs simplifies integration with existing back-office systems.
-- A single, consistent way to connect devices to IoT Central.
-
-This article describes the following device connection steps:
-
-- [X.509 group enrollment](#x509-group-enrollment)
-- [SAS group enrollment](#sas-group-enrollment)
-- [Individual enrollment](#individual-enrollment)
-- [Device registration](#device-registration)
-- [Associate a device with a device template](#associate-a-device-with-a-device-template)
-
-## X.509 group enrollment
-
-In a production environment, using X.509 certificates is the recommended device authentication mechanism for IoT Central. To learn more, see [Device Authentication using X.509 CA Certificates](../../iot-hub/iot-hub-x509ca-overview.md).
-
-To connect a device with an X.509 certificate to your application:
-
-1. Create an *enrollment group* that uses the **Certificates (X.509)** attestation type.
-1. Add and verify an intermediate or root X.509 certificate in the enrollment group.
-1. Generate a leaf certificate from the root or intermediate certificate in the enrollment group. Send the leaf certificate from the device when it connects to your application.
-
-To learn more, see [How to connect devices with X.509 certificates](how-to-connect-devices-x509.md).
-
-### For testing purposes only
-
-For testing only, you can use the following utilities to generate root, intermediate, and device certificates:
-
-- [Tools for the Azure IoT Device Provisioning Device SDK](https://github.com/Azure/azure-iot-sdk-node/blob/main/provisioning/tools/readme.md): a collection of Node.js tools that you can use to generate and verify X.509 certificates and keys.
-- [Manage test CA certificates for samples and tutorials](https://github.com/Azure/azure-iot-sdk-c/blob/master/tools/CACertificates/CACertificateOverview.md): a collection of PowerShell and Bash scripts to:
- - Create a certificate chain.
- - Save the certificates as .cer files to upload to your IoT Central application.
- - Use the verification code from the IoT Central application to generate the verification certificate.
- - Create leaf certificates for your devices using your device IDs as a parameter to the tool.
-
-## SAS group enrollment
-
-To connect a device with a device SAS key to your application:
-
-1. Create an *enrollment group* that uses the **Shared Access Signature (SAS)** attestation type.
-1. Copy the group primary or secondary key from the enrollment group.
-1. Use the Azure CLI to generate a device key from the group key:
-
- ```azurecli
- az iot central device compute-device-key --primary-key <enrollment group primary key> --device-id <device ID>
- ```
-
-1. Use the generated device key when the device connects to your IoT Central application.
-
-> [!NOTE]
-> To use existing SAS keys in your enrollment groups, disable the **Auto generate keys** toggle and type in the SAS keys.
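Under the hood, the `az iot central device compute-device-key` command derives each device key as an HMAC-SHA256 of the device ID, keyed with the base64 decoded group key. If you need to derive keys outside the CLI, a minimal Python sketch of that derivation (illustrative only, not an official tool; the group key below is made up) looks like this:

```python
import base64
import hashlib
import hmac

def derive_device_key(group_key_b64: str, device_id: str) -> str:
    """Derive a per-device SAS key from an enrollment group key.

    Mirrors the derivation used by `az iot central device compute-device-key`:
    HMAC-SHA256 over the device ID, keyed with the decoded group key.
    """
    key = base64.b64decode(group_key_b64)
    digest = hmac.new(key, device_id.encode("utf-8"), hashlib.sha256).digest()
    return base64.b64encode(digest).decode("utf-8")

# Example with a made-up group key; real keys come from your enrollment group.
group_key = base64.b64encode(b"example-group-key-for-testing").decode("utf-8")
print(derive_device_key(group_key, "my-device-001"))
```

The derivation is deterministic, so the same group key and device ID always produce the same device key.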
-
-## Individual enrollment
-
-Customers who connect devices that each have their own authentication credentials use individual enrollments. An individual enrollment is an entry for a single device that's allowed to connect. Individual enrollments can use either X.509 leaf certificates or SAS tokens (from a physical or virtual trusted platform module) as attestation mechanisms. A device ID can contain letters, numbers, and the `-` character. For more information, see [DPS individual enrollment](../../iot-dps/concepts-service.md#individual-enrollment).
-
-> [!NOTE]
-> When you create an individual enrollment for a device, it takes precedence over the default group enrollment options in your IoT Central application.
-
-### Create individual enrollments
-
-IoT Central supports the following attestation mechanisms for individual enrollments:
-
-- **Symmetric key attestation:** Symmetric key attestation is a simple approach to authenticating a device with the DPS instance. To create an individual enrollment that uses symmetric keys, open the **Device connection** page for the device, select **Individual enrollment** as the connection method, and **Shared access signature (SAS)** as the mechanism. Enter base64 encoded primary and secondary keys, and save your changes. Use the **ID scope**, **Device ID**, and either the primary or secondary key to connect your device.
-
- > [!TIP]
- > For testing, you can use **OpenSSL** to generate base64 encoded keys: `openssl rand -base64 64`
-
-- **X.509 certificates:** To create an individual enrollment with X.509 certificates, open the **Device Connection** page, select **Individual enrollment** as the connection method, and **Certificates (X.509)** as the mechanism. Device certificates used with an individual enrollment entry must have the issuer and subject CN set to the device ID.
-
- > [!TIP]
- > For testing, you can use [Tools for the Azure IoT Device Provisioning Device SDK for Node.js](https://github.com/Azure/azure-iot-sdk-node/tree/main/provisioning/tools) to generate a self-signed certificate: `node create_test_cert.js device "mytestdevice"`
-
-- **Trusted Platform Module (TPM) attestation:** A [TPM](../../iot-dps/concepts-tpm-attestation.md) is a type of hardware security module. Using a TPM is one of the most secure ways to connect a device. This article assumes you're using a discrete, firmware, or integrated TPM. Software emulated TPMs are well suited for prototyping or testing, but they don't provide the same level of security as discrete, firmware, or integrated TPMs. Don't use software TPMs in production. To create an individual enrollment that uses a TPM, open the **Device Connection** page, select **Individual enrollment** as the connection method, and **TPM** as the mechanism. Enter the TPM endorsement key and save the device connection information.
-
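If OpenSSL isn't available, the same kind of base64 encoded test keys can be generated with Python's standard library. This is a testing convenience only, equivalent to the `openssl rand -base64 64` tip above:

```python
import base64
import secrets

# Generate random 64-byte keys and base64 encode them, equivalent to
# `openssl rand -base64 64`. Suitable for test enrollments only.
primary_key = base64.b64encode(secrets.token_bytes(64)).decode("utf-8")
secondary_key = base64.b64encode(secrets.token_bytes(64)).decode("utf-8")
print(primary_key)
print(secondary_key)
```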
-## Device registration
-
-Before a device can connect to an IoT Central application, it must be registered in the application:
-
-- Devices can automatically register themselves when they first connect. To use this option, you must use either [X.509 group enrollment](#x509-group-enrollment) or [SAS group enrollment](#sas-group-enrollment).
-- An operator can import a CSV file to bulk register a list of devices in the application.
-- An operator can manually register an individual device on the **Devices** page in the application.
-
-IoT Central enables OEMs to mass manufacture devices that can register themselves automatically. An OEM generates suitable device credentials, and configures the devices in the factory. When a customer turns on a device for the first time, it connects to DPS, which then automatically connects the device to the correct IoT Central application. Optionally, you can require an operator to approve the device before it starts sending data to the application.
-
-> [!TIP]
-> On the **Administration > Device connection** page, the **Auto approve** option controls whether an operator must manually approve the device before it can start sending data.
-
-### Automatically register devices that use X.509 certificates
-
-1. Generate the leaf-certificates for your devices using the root or intermediate certificate you added to your [X.509 enrollment group](#x509-group-enrollment). Use the device IDs as the common name (`CN`) in the leaf certificates. A device ID can contain letters, numbers, and the `-` character.
-
-1. As an OEM, flash each device with a device ID, a generated X.509 leaf-certificate, and the application **ID scope** value. The device code should also send the model ID of the device model it implements.
-
-1. When you switch on a device, it first connects to DPS to retrieve its IoT Central connection information.
-
-1. The device uses the information from DPS to connect to, and register with, your IoT Central application.
-
-The IoT Central application uses the model ID sent by the device to [associate the registered device with a device template](#associate-a-device-with-a-device-template).
-
-### Automatically register devices that use SAS tokens
-
-1. Copy the group primary key from the **SAS-IoT-Devices** enrollment group:
-
- :::image type="content" source="media/concepts-get-connected/group-primary-key.png" alt-text="Group primary key from SAS-IoT-Devices enrollment group":::
-
-1. Use the `az iot central device compute-device-key` command to generate the device SAS keys. Use the group primary key from the previous step. The device ID can contain letters, numbers, and the `-` character:
-
- ```azurecli
- az iot central device compute-device-key --primary-key <enrollment group primary key> --device-id <device ID>
- ```
-
-1. As an OEM, flash each device with the device ID, the generated device SAS key, and the application **ID scope** value. The device code should also send the model ID of the device model it implements.
-
-1. When you switch on a device, it first connects to DPS to retrieve its IoT Central registration information.
-
-1. The device uses the information from DPS to connect to, and register with, your IoT Central application.
-
-The IoT Central application uses the model ID sent by the device to [associate the registered device with a device template](#associate-a-device-with-a-device-template).
-
-### Bulk register devices in advance
-
-To register a large number of devices with your IoT Central application, use a CSV file to [import device IDs and device names](howto-manage-devices-in-bulk.md#import-devices).
-
-If your devices use SAS tokens to authenticate, [export a CSV file from your IoT Central application](howto-manage-devices-in-bulk.md#export-devices). The exported CSV file includes the device IDs and the SAS keys.
-
-If your devices use X.509 certificates to authenticate, generate X.509 leaf certificates for your devices using the root or intermediate certificate you uploaded to your X.509 enrollment group. Use the device IDs you imported as the common name (`CN`) value in the leaf certificates.
-
-Devices must use the **ID Scope** value for your application and send a model ID when they connect.
-
-> [!TIP]
-> You can find the **ID Scope** value in **Administration > Device connection**.
-
-### Register a single device in advance
-
-This approach is useful when you're experimenting with IoT Central or testing devices. Select **+ New** on the **Devices** page to register an individual device. You can use the device connection SAS keys to connect the device to your IoT Central application. Copy the _device SAS key_ from the connection information for a registered device:
-
-![SAS keys for an individual device](./media/concepts-get-connected/single-device-sas.png)
-
-## Associate a device with a device template
-
-IoT Central automatically associates a device with a device template when the device connects. A device sends a [model ID](../../iot-fundamentals/iot-glossary.md?toc=/azure/iot-central/toc.json&bc=/azure/iot-central/breadcrumb/toc.json#model-id) when it connects. IoT Central uses the model ID to identify the device template for that specific device model. The discovery process works as follows:
-
-1. If the device template is already published in the IoT Central application, the device is associated with the device template.
-1. If the device template isn't already published in the IoT Central application, IoT Central looks for the device model in the [public model repository](https://github.com/Azure/iot-plugandplay-models). If IoT Central finds the model, it uses it to generate a basic device template.
-1. If IoT Central doesn't find the model in the public model repository, the device is marked as **Unassociated**. An operator can either create a device template for the device and then migrate the unassociated device to the new device template, or [autogenerate a device template](howto-set-up-template.md#autogenerate-a-device-template) based on the data the device sends.
-
-The following screenshot shows you how to view the model ID of a device template in IoT Central. In a device template, select a component, and then select **Edit identity**:
--
-You can view the [thermostat model](https://github.com/Azure/iot-plugandplay-models/blob/main/dtmi/com/example/thermostat-1.json) in the public model repository. The model ID definition looks like:
-
-```json
-"@id": "dtmi:com:example:Thermostat;1"
-```
-
-Use the following DPS payload to associate the device to a device template:
-
-```json
-{
- "modelId":"dtmi:com:example:TemperatureController;2"
-}
-```
-
-To learn more about the DPS payload, see the sample code used in the [Tutorial: Create and connect a client application to your Azure IoT Central application](tutorial-connect-device.md).
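A device can construct this registration payload with a few lines of code. The sketch below uses only the standard library to show the shape of the JSON; in practice the device SDKs expose a provisioning payload setting that carries this object for you, and the model ID shown is the public TemperatureController example, so substitute the model ID your device implements:

```python
import json

# The optional DPS registration payload that associates a device with a
# device template. Replace the model ID with the one your device implements.
MODEL_ID = "dtmi:com:example:TemperatureController;2"

payload = json.dumps({"modelId": MODEL_ID})
print(payload)
```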
-
-## Device status values
-
-When a real device connects to your IoT Central application, its device status changes as follows:
-
-1. The device status is first **Registered**. This status means the device is created in IoT Central, and has a device ID. A device is registered when:
- - A new real device is added on the **Devices** page.
- - A set of devices is added using **Import** on the **Devices** page.
-
-1. The device status changes to **Provisioned** when a device that connects to your IoT Central application with valid credentials completes the provisioning step. In this step, the device uses DPS to automatically retrieve a connection string from the IoT Hub used by your IoT Central application. The device can now connect to IoT Central and start sending data.
-
-1. An operator can block a device. When a device is blocked, it can't send data to your IoT Central application. Blocked devices have a status of **Blocked**. An operator must unblock the device before it can resume sending data. When an operator unblocks a device, the status returns to its previous value, **Registered** or **Provisioned**.
-
-1. If the device status is **Waiting for Approval**, it means the **Auto approve** option is disabled. An operator must explicitly approve a device before it starts sending data. Devices that aren't registered manually on the **Devices** page but connect with valid credentials have the device status **Waiting for Approval**. Operators can approve these devices from the **Devices** page using the **Approve** button.
-
-1. If the device status is **Unassociated**, it means the device connecting to IoT Central doesn't have an associated device template. This situation typically happens in the following scenarios:
-
- - A set of devices is added using **Import** on the **Devices** page without specifying the device template.
- - A device was registered manually on the **Devices** page without specifying the device template. The device then connected with valid credentials.
-
- An operator can associate a device to a device template from the **Devices** page using the **Migrate** button.
-
-## Device connection status
-
-When a device or edge device connects using the MQTT protocol, _connected_ and _disconnected_ events for the device are generated. These events aren't sent by the device; they're generated internally by IoT Central.
-
-The following diagram shows how, when a device connects, the connection is registered at the end of a time window. If multiple connection and disconnection events occur, IoT Central registers the one that's closest to the end of the time window. For example, if a device disconnects and reconnects within the time window, IoT Central registers the connection event. Currently, the time window is approximately one minute.
--
-Watch the following video to learn more about how to monitor device connection status:
-
-> [!VIDEO https://www.youtube.com/embed/EUZH_6Ihtto]
-
-You can include connection and disconnection events in [exports from IoT Central](howto-export-data.md#set-up-a-data-export). To learn more, see [React to IoT Hub events > Limitations for device connected and device disconnected events](../../iot-hub/iot-hub-event-grid.md#limitations-for-device-connected-and-device-disconnected-events).
-
-## SDK support
-
-The Azure Device SDKs offer the easiest way for you to implement your device code. The following device SDKs are available:
-
-- [Azure IoT SDK for C](https://github.com/azure/azure-iot-sdk-c)
-- [Azure IoT SDK for Python](https://github.com/azure/azure-iot-sdk-python)
-- [Azure IoT SDK for Node.js](https://github.com/azure/azure-iot-sdk-node)
-- [Azure IoT SDK for Java](https://github.com/azure/azure-iot-sdk-java)
-- [Azure IoT SDK for .NET](https://github.com/azure/azure-iot-sdk-csharp)
-
-### SDK features and IoT Hub connectivity
-
-All device communication with IoT Hub uses the following IoT Hub connectivity options:
-
-- [Device-to-cloud messaging](../../iot-hub/iot-hub-devguide-messages-d2c.md)
-- [Cloud-to-device messaging](../../iot-hub/iot-hub-devguide-messages-c2d.md)
-- [Device twins](../../iot-hub/iot-hub-devguide-device-twins.md)
-
-The following table summarizes how Azure IoT Central device features map on to IoT Hub features:
-
-| Azure IoT Central | Azure IoT Hub |
-| -- | - |
-| Telemetry | Device-to-cloud messaging |
-| Offline commands | Cloud-to-device messaging |
-| Property | Device twin reported properties |
-| Property (writable) | Device twin desired and reported properties |
-| Command | Direct methods |
-
-### Protocols
-
-The Device SDKs support the following network protocols for connecting to an IoT hub:
-
-- MQTT
-- AMQP
-- HTTPS
-
-For information about these different protocols and guidance on choosing one, see [Choose a communication protocol](../../iot-hub/iot-hub-devguide-protocols.md).
-
-If your device can't use any of the supported protocols, use Azure IoT Edge to do protocol conversion. IoT Edge supports other intelligence-on-the-edge scenarios to offload processing from the Azure IoT Central application.
-
-## Security
-
-All data exchanged between devices and your Azure IoT Central application is encrypted. IoT Hub authenticates every request from a device that connects to any of the device-facing IoT Hub endpoints. To avoid exchanging credentials over the wire, a device uses signed tokens to authenticate. For more information, see [Control access to IoT Hub](../../iot-hub/iot-hub-devguide-security.md).
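To illustrate what "signed tokens" means here: a device builds a SAS token by signing the resource URI and an expiry time with its key, so the key itself never crosses the wire. The following is a minimal Python sketch of the token format (the hub name and device key are made up; in practice the device SDKs generate these tokens for you):

```python
import base64
import hashlib
import hmac
import time
import urllib.parse

def generate_sas_token(resource_uri: str, device_key_b64: str, ttl_seconds: int = 3600) -> str:
    """Build an IoT Hub-style SAS token: an HMAC-SHA256 signature over the
    URL-encoded resource URI and an expiry time, keyed with the device key."""
    expiry = int(time.time()) + ttl_seconds
    uri = urllib.parse.quote(resource_uri, safe="")
    to_sign = f"{uri}\n{expiry}".encode("utf-8")
    key = base64.b64decode(device_key_b64)
    sig = base64.b64encode(hmac.new(key, to_sign, hashlib.sha256).digest()).decode("utf-8")
    return (
        "SharedAccessSignature "
        f"sr={uri}&sig={urllib.parse.quote(sig, safe='')}&se={expiry}"
    )

# Hypothetical hub and key, for illustration only.
token = generate_sas_token(
    "myhub.azure-devices.net/devices/my-device-001",
    base64.b64encode(b"example-device-key").decode("utf-8"),
)
print(token)
```

Because the token embeds an expiry (`se`), a stolen token is only useful until it expires, unlike a leaked key.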
-
-## Next steps
-
-Some suggested next steps are to:
-
-- Review [best practices](overview-iot-central-developer.md#best-practices) for developing devices.
-- Review some sample code that shows how to use SAS tokens in [Tutorial: Create and connect a client application to your Azure IoT Central application](tutorial-connect-device.md).
-- Learn [How to connect devices with X.509 certificates using Node.js device SDK for IoT Central Application](how-to-connect-devices-x509.md).
-- Learn how to [Monitor device connectivity using Azure CLI](./howto-monitor-devices-azure-cli.md).
-- Read about [Azure IoT Edge devices and Azure IoT Central](./concepts-iot-edge.md).
iot-central Concepts Iot Edge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-iot-edge.md
An IoT Edge device can be:
## IoT Edge devices and IoT Central
-IoT Edge devices can use *shared access signature* tokens or X.509 certificates to authenticate with IoT Central. You can manually register your IoT Edge devices in IoT Central before they connect for the first time, or use the Device Provisioning Service to handle the registration. To learn more, see [Get connected to Azure IoT Central](concepts-get-connected.md).
+IoT Edge devices can use *shared access signature* tokens or X.509 certificates to authenticate with IoT Central. You can manually register your IoT Edge devices in IoT Central before they connect for the first time, or use the Device Provisioning Service to handle the registration. To learn more, see [How devices connect](overview-iot-central-developer.md#how-devices-connect).
IoT Central uses [device templates](concepts-device-templates.md) to define how IoT Central interacts with a device. For example, a device template specifies:
iot-central Concepts Telemetry Properties Commands https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-telemetry-properties-commands.md
If you enable the **Queue if offline** option in the device template UI for the
## Next steps
-Now that you've learned about device templates, a suggested next steps is to read [Get connected to Azure IoT Central](./concepts-get-connected.md) to learn more about how to register devices with IoT Central and how IoT Central secures device connections.
+Now that you've learned about device templates, a suggested next step is to read the [IoT Central device connectivity guide](overview-iot-central-developer.md) to learn more about how to register devices with IoT Central and how IoT Central secures device connections.
iot-central How To Connect Devices X509 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/how-to-connect-devices-x509.md
zone_pivot_groups: programming-languages-set-ten
# How to connect devices with X.509 certificates to IoT Central Application
-IoT Central supports both shared access signatures (SAS) and X.509 certificates to secure the communication between a device and your application. The [Create and connect a client application to your Azure IoT Central application](./tutorial-connect-device.md) tutorial uses SAS. In this article, you learn how to modify the code sample to use X.509 certificates. X.509 certificates are recommended in production environments. For more information, see [Get connected to Azure IoT Central](./concepts-get-connected.md).
+IoT Central supports both shared access signatures (SAS) and X.509 certificates to secure the communication between a device and your application. The [Create and connect a client application to your Azure IoT Central application](./tutorial-connect-device.md) tutorial uses SAS. In this article, you learn how to modify the code sample to use X.509 certificates. X.509 certificates are recommended in production environments. For more information, see [Device authentication concepts](concepts-device-authentication.md).
This guide shows two ways to use X.509 certificates - [group enrollments](how-to-connect-devices-x509.md#use-group-enrollment) typically used in a production environment, and [individual enrollments](how-to-connect-devices-x509.md#use-individual-enrollment) useful for testing. The article also describes how to [roll device certificates](#roll-x509-device-certificates) to maintain connectivity when certificates expire.
iot-central Howto Build Iotc Device Bridge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-build-iotc-device-bridge.md
# Mandatory fields. See more on aka.ms/skyeye/meta. Title: Deploy the Azure IoT Central device bridge | Microsoft Docs
-description: Deploy the IoT Central device bridge to connect other IoT clouds to your IoT Central app. Other IoT clouds include Sigfox, Particle Device Cloud, and The Things Network.
+description: Deploy the IoT Central device bridge to connect other IoT clouds to your IoT Central app. Examples of other IoT clouds include Sigfox, Particle Device Cloud, and The Things Network.
The function app transforms the data into a format accepted by IoT Central and f
:::image type="content" source="media/howto-build-iotc-device-bridge/azure-function.png" alt-text="Screenshot of Azure Functions.":::
-If your IoT Central application recognizes the device ID in the forwarded message, the telemetry from the device appears in IoT Central. If the device ID isn't recognized by your IoT Central application, the function app attempts to register a new device with the device ID. The new device appears as an **Unassociated device** on the **Devices** page in your IoT Central application. From the **Devices** page, you can associate the new device with a device template and then view the telemetry.
+If your IoT Central application recognizes the device ID in the forwarded message, the telemetry from the device appears in IoT Central. If the device ID isn't recognized by your IoT Central application, the function app attempts to register a new device with the device ID. The new device appears as an **Unassigned** device on the **Devices** page in your IoT Central application. From the **Devices** page, you can assign the new device to a device template and then view the telemetry.
## Deploy the device bridge
Each key in the `measurements` object must match the name of a telemetry type in
You can include a `timestamp` field in the body to specify the UTC date and time of the message. This field must be in ISO 8601 format. For example, `2020-06-08T20:16:54.602Z`. If you don't include a timestamp, the current date and time is used.
-You can include a `modelId` field in the body. Use this field to associate the device with a device template during provisioning. This functionality is only supported by [V3 applications](howto-faq.yml#how-do-i-get-information-about-my-application-).
+You can include a `modelId` field in the body. Use this field to assign the device to a device template during provisioning. This functionality is only supported by [V3 applications](howto-faq.yml#how-do-i-get-information-about-my-application-).
The `deviceId` must be alphanumeric, lowercase, and may contain hyphens.
-If you don't include the `modelId` field, or if IoT Central doesn't recognize the model ID, then a message with an unrecognized `deviceId` creates a new _unassociated device_ in IoT Central. An operator can manually migrate the device to the correct device template. To learn more, see [Manage devices in your Azure IoT Central application > Migrating devices to a template](howto-manage-devices-individually.md).
+If you don't include the `modelId` field, or if IoT Central doesn't recognize the model ID, then a message with an unrecognized `deviceId` creates a new _unassigned device_ in IoT Central. An operator can manually migrate the device to the correct device template. To learn more, see [Manage devices in your Azure IoT Central application > Migrating devices to a template](howto-manage-devices-individually.md).
-In [V2 applications](howto-faq.yml#how-do-i-get-information-about-my-application-), the new device appears on the **Device Explorer > Unassociated devices** page. Select **Associate** and choose a device template to start receiving incoming telemetry from the device.
+In [V2 applications](howto-faq.yml#how-do-i-get-information-about-my-application-), the new device appears as an unassigned device on the **Devices** page. Select **Assign template** and choose a device template to start receiving incoming telemetry from the device.
> [!NOTE]
-> Until the device is associated to a template, all HTTP calls to the function return a 403 error status.
+> Until the device is assigned to a template, all HTTP calls to the function return a 403 error status.
To switch on logging for the function app with Application Insights, navigate to **Monitoring > Logs** in your function app in the Azure portal. Select **Turn on Application Insights**.
The Resource Manager template provisions the following resources in your Azure s
The key vault stores the SAS group key for your IoT Central application.
-The function app runs on a [consumption plan](https://azure.microsoft.com/pricing/details/functions/). While this option doesn't offer dedicated compute resources, it enables the device bridge to handle hundreds of device messages per minute, suitable for smaller fleets of devices or devices that send messages less frequently. If your application depends on streaming a large number of device messages, replace the consumption plan with a dedicated a [App service plan](https://azure.microsoft.com/pricing/details/app-service/windows/). This plan offers dedicated compute resources, which give faster server response times. Using a standard App Service Plan, the maximum observed performance of the function from Azure in this repository was around 1,500 device messages per minute. To learn more, see [Azure Functions hosting options](../../azure-functions/functions-scale.md).
+The function app runs on a [consumption plan](https://azure.microsoft.com/pricing/details/functions/). While this option doesn't offer dedicated compute resources, it enables the device bridge to handle hundreds of device messages per minute, suitable for smaller fleets of devices or devices that send messages less frequently. If your application depends on streaming a large number of device messages, replace the consumption plan with a dedicated [App service plan](https://azure.microsoft.com/pricing/details/app-service/windows/). This plan offers dedicated compute resources, which give faster server response times. Using a standard App Service Plan, the maximum observed performance of the function from Azure in this repository was around 1,500 device messages per minute. To learn more, see [Azure Functions hosting options](../../azure-functions/functions-scale.md).
To use a dedicated App Service plan instead of a consumption plan, edit the custom template before deploying. Select **Edit template**.
To connect a Particle device through the device bridge to IoT Central, go to the
} ```
-Paste in the **function URL** from your function app, and you see Particle devices appear as unassociated devices in IoT Central. To learn more, see the [Here's how to integrate your Particle-powered projects with Azure IoT Central](https://blog.particle.io/2019/09/26/integrate-particle-with-azure-iot-central/) blog post.
+Paste in the **function URL** from your function app, and you see Particle devices appear as unassigned devices in IoT Central. To learn more, see the [Here's how to integrate your Particle-powered projects with Azure IoT Central](https://blog.particle.io/2019/09/26/integrate-particle-with-azure-iot-central/) blog post.
### Example 2: Connecting Sigfox devices through the device bridge
iot-central Howto Connect Rigado Cascade 500 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-connect-rigado-cascade-500.md
You're now ready to use your C500 device in your IoT Central application.
Some suggested next steps are to:
-- Read about [Device connectivity in Azure IoT Central](./concepts-get-connected.md)
+- Read about [How devices connect](overview-iot-central-developer.md#how-devices-connect)
- Learn how to [Monitor device connectivity using Azure CLI](./howto-monitor-devices-azure-cli.md)
iot-central Howto Connect Ruuvi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-connect-ruuvi.md
To create a simulated RuuviTag:
Some suggested next steps are to:
-- Read about [Device connectivity in Azure IoT Central](./concepts-get-connected.md)
+- [How devices connect](overview-iot-central-developer.md#how-devices-connect)
- Learn how to [Monitor device connectivity using Azure CLI](./howto-monitor-devices-azure-cli.md)
iot-central Howto Export Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-export-data.md
Each exported message contains a normalized form of the full message the device
- `messageSource`: The source for the message - `telemetry`.
- `deviceId`: The ID of the device that sent the telemetry message.
- `schema`: The name and version of the payload schema.
-- `templateId`: The ID of the device template associated with the device.
+- `templateId`: The ID of the device template assigned to the device.
- `enqueuedTime`: The time at which this message was received by IoT Central.
- `enrichments`: Any enrichments set up on the export.
- `module`: The IoT Edge module that sent this message. This field only appears if the message came from an IoT Edge module.
Each message or record represents changes to device and cloud properties. Inform
- `deviceId`: The ID of the device that sent the telemetry message.
- `schema`: The name and version of the payload schema.
- `enqueuedTime`: The time at which this change was detected by IoT Central.
-- `templateId`: The ID of the device template associated with the device.
+- `templateId`: The ID of the device template assigned to the device.
- `properties`: An array of properties that changed, including the names of the properties and values that changed. The component and module information is included if the property is modeled within a component or an IoT Edge module. - `enrichments`: Any enrichments set up on the export.
Each message or record represents a connectivity event from a single device. Inf
- `messageType`: Either `connected` or `disconnected`. - `deviceId`: The ID of the device that was changed. - `schema`: The name and version of the payload schema.-- `templateId`: The ID of the device template associated with the device.
+- `templateId`: The ID of the device template assigned to the device.
- `enqueuedTime`: The time at which this change occurred in IoT Central. - `enrichments`: Any enrichments set up on the export.
Each message or record represents one change to a single device. Information in
- `messageType`: The type of change that occurred. One of: `registered`, `deleted`, `provisioned`, `enabled`, `disabled`, `displayNameChanged`, and `deviceTemplateChanged`. - `deviceId`: The ID of the device that was changed. - `schema`: The name and version of the payload schema.-- `templateId`: The ID of the device template associated with the device.
+- `templateId`: The ID of the device template assigned to the device.
- `enqueuedTime`: The time at which this change occurred in IoT Central. - `enrichments`: Any enrichments set up on the export.
Each message or record represents one change to a single published device templa
- `messageSource`: The source for the message - `deviceTemplateLifecycle`. - `messageType`: Either `created`, `updated`, or `deleted`. - `schema`: The name and version of the payload schema.-- `templateId`: The ID of the device template associated with the device.
+- `templateId`: The ID of the device template assigned to the device.
- `enqueuedTime`: The time at which this change occurred in IoT Central. - `enrichments`: Any enrichments set up on the export.
iot-central Howto Manage Devices In Bulk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-devices-in-bulk.md
Enter a job name and description, and then select **Rerun job**. A new job is su
## Import devices
-To connect large number of devices to your application, you can bulk import devices from a CSV file. You can find an example CSV file in the [Azure Samples repository](https://github.com/Azure-Samples/iot-central-docs-samples/tree/master/bulk-upload-devices). The CSV file should include the following column headers:
+To register a large number of devices to your application, you can bulk import devices from a CSV file. You can find an example CSV file in the [Azure Samples repository](https://github.com/Azure-Samples/iot-central-docs-samples/tree/master/bulk-upload-devices). The CSV file should include the following column headers:
| Column | Description | | - | - |
To bulk-register devices in your application:
If the device import operation fails, you see an error message on the **Device Operations** panel. A log file capturing all the errors is generated that you can download.
+If your devices use SAS tokens to authenticate, [export a CSV file from your IoT Central application](#export-devices). The exported CSV file includes the device IDs and the SAS keys.
+
+If your devices use X.509 certificates to authenticate, generate X.509 leaf certificates for your devices using the root or intermediate certificate in your X.509 enrollment group. Use the device IDs you imported as the `CNAME` value in the leaf certificates.
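A bulk-import CSV can be generated programmatically. The sketch below writes a minimal file; the `IOTC_*` column headers follow the naming convention of the export columns shown later, but verify the exact headers against the sample CSV in the Azure Samples repository before importing.

```python
import csv
import io

# Sketch: build a CSV for bulk device import. The header names below are
# assumptions based on the IOTC_* convention; confirm them against the
# sample file in the Azure-Samples repository.
devices = [
    ("thermostat-001", "Thermostat 1"),
    ("thermostat-002", "Thermostat 2"),
]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["IOTC_DEVICEID", "IOTC_DEVICENAME"])  # assumed header names
writer.writerows(devices)

csv_text = buf.getvalue()
print(csv_text)
```

Writing to an in-memory buffer keeps the sketch self-contained; in practice you'd write to a file and upload it on the **Devices** page.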
++ ## Export devices To connect a real device to IoT Central, you need its connection string. You can export device details in bulk to get the information you need to create device connection strings. The export process creates a CSV file with the device identity, device name, and keys for all the selected devices.
To bulk export devices from your application:
* IOTC_X509THUMBPRINT_PRIMARY * IOTC_X509THUMBPRINT_SECONDARY
-For more information about connecting real devices to your IoT Central application, see [Device connectivity in Azure IoT Central](concepts-get-connected.md).
+For more information about connecting real devices to your IoT Central application, see [How devices connect](overview-iot-central-developer.md#how-devices-connect).
## Next steps
iot-central Howto Manage Devices Individually https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-devices-individually.md
Title: Manage devices individually in your Azure IoT Central application | Microsoft Docs
-description: Learn how to manage devices individually in your Azure IoT Central application. Create, delete, and update devices.
+description: Learn how to manage devices individually in your Azure IoT Central application. Monitor, manage, create, delete, and update devices.
Previously updated : 12/27/2021 Last updated : 03/02/2022
To view an individual device:
> [!TIP] > You can use the filter tool on this page to view devices in a specific organization.
+## Monitor your devices
+
+Use the **Devices** page to monitor and manage your devices.
+
+### Device status values
+
+When a device connects to your IoT Central application, its device status changes as follows:
+
+1. The device status is first **Registered**. This status means the device is created in IoT Central, and has a device ID. A device is registered when:
+ - A new real device is added on the **Devices** page.
+ - A set of devices is added using **Import** on the **Devices** page.
+
+1. The device status changes to **Provisioned** when a device with valid credentials connects to your IoT Central application and completes the provisioning step. In this step, the device uses DPS to automatically retrieve a connection string from the IoT hub used by your IoT Central application. The device can now connect to IoT Central and start sending data.
+
+1. An operator can block a device. When a device is blocked, it can't send data to your IoT Central application. Blocked devices have a status of **Blocked**. An operator must reset the device before it can resume sending data. When an operator unblocks a device, the status returns to its previous value, **Registered** or **Provisioned**.
+
+1. If the device status is **Waiting for Approval**, the **Auto approve** option is disabled. An operator must explicitly approve a device before it can start sending data. Devices that aren't registered manually on the **Devices** page but connect with valid credentials have the status **Waiting for Approval**. Operators can approve these devices from the **Devices** page using the **Approve** button.
+
+1. If the device status is **Unassigned**, it means the device connecting to IoT Central isn't assigned to a device template. This situation typically happens in the following scenarios:
+
+ - A set of devices is added using **Import** on the **Devices** page without specifying the device template.
+ - A device was registered manually on the **Devices** page without specifying the device template. The device then connected with valid credentials.
+
+ An operator can assign a device to a device template from the **Devices** page using the **Migrate** button.
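The status transitions described above can be modeled as a small state machine. This is a conceptual sketch of the documented behavior, not an IoT Central API; the status names match the docs, and the event names are hypothetical.

```python
# Conceptual sketch of the device status transitions described above.
# Status names match the docs; event names are made up for illustration.
VALID = {
    ("Registered", "provision"): "Provisioned",
    ("Registered", "block"): "Blocked",
    ("Provisioned", "block"): "Blocked",
}

def next_status(status, event, previous=None):
    if status == "Blocked" and event == "unblock":
        # Unblocking returns the device to its previous status.
        return previous or "Registered"
    return VALID.get((status, event), status)

status = next_status("Registered", "provision")   # device completes DPS step
blocked = next_status(status, "block")            # operator blocks the device
restored = next_status(blocked, "unblock", previous=status)
print(status, blocked, restored)  # Provisioned Blocked Provisioned
```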
+
+### Device connection status
+
+When a device or edge device connects using the MQTT protocol, _connected_ and _disconnected_ events for the device are generated. These events aren't sent by the device; they're generated internally by IoT Central.
+
+The following diagram shows how, when a device connects, the connection is registered at the end of a time window. If multiple connection and disconnection events occur, IoT Central registers the one that's closest to the end of the time window. For example, if a device disconnects and reconnects within the time window, IoT Central registers the connection event. Currently, the time window is approximately one minute.
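The time-window behavior can be sketched as a simple selection rule: of the events that fall inside one window, the one closest to the end of the window is registered. This is a conceptual model of the documented behavior, assuming the approximately one-minute window.

```python
# Sketch of the time-window behavior described above: within one window,
# IoT Central registers only the event closest to the end of the window.
WINDOW_SECONDS = 60  # "approximately one minute" per the docs

def registered_event(events):
    """events: list of (seconds_into_window, 'connected' | 'disconnected')."""
    if not events:
        return None
    # The event latest in the window wins.
    return max(events, key=lambda e: e[0])[1]

# A device disconnects and then reconnects within the window, so the
# connection event is the one that gets registered.
print(registered_event([(10, "disconnected"), (40, "connected")]))  # connected
```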
++
+Watch the following video to learn more about how to monitor device connection status:
+
+> [!VIDEO https://www.youtube.com/embed/EUZH_6Ihtto]
+
+You can include connection and disconnection events in [exports from IoT Central](howto-export-data.md#set-up-a-data-export). To learn more, see [React to IoT Hub events > Limitations for device connected and device disconnected events](../../iot-hub/iot-hub-event-grid.md#limitations-for-device-connected-and-device-disconnected-events).
+ ## Add a device To add a device to your Azure IoT Central application:
To move a device to a different organization, you must have access to both the s
## Migrate devices to a template
-If you register devices by starting the import under **All devices**, then the devices are created without any device template association. Devices must be associated with a template to explore the data and other details about the device. Follow these steps to associate devices with a template:
+If you register devices by starting the import under **All devices**, then the devices are created without any device template association. Devices must be assigned to a template to explore the data and other details about the device. Follow these steps to assign devices to a template:
1. Choose **Devices** on the left pane. 1. On the left panel, choose **All devices**:
- :::image type="content" source="media/howto-manage-devices-individually/unassociated-devices-1.png" alt-text="Screenshot showing unassociated devices.":::
+ :::image type="content" source="media/howto-manage-devices-individually/unassociated-devices-1.png" alt-text="Screenshot showing unassigned devices.":::
-1. Use the filter on the grid to determine if the value in the **Device Template** column is **Unassociated** for any of your devices.
+1. Use the filter on the grid to determine if the value in the **Device Template** column is **Unassigned** for any of your devices.
-1. Select the devices you want to associate with a template:
+1. Select the devices you want to assign to a template:
1. Select **Migrate**:
- :::image type="content" source="media/howto-manage-devices-individually/unassociated-devices-2.png" alt-text="Screenshot showing how to associate a device.":::
+ :::image type="content" source="media/howto-manage-devices-individually/unassociated-devices-2.png" alt-text="Screenshot showing how to assign a device.":::
1. Choose the template from the list of available templates and select **Migrate**.
-1. The selected devices are associated with the device template you chose.
+1. The selected devices are assigned to the device template you chose.
## Delete a device
iot-central Overview Iot Central Admin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/overview-iot-central-admin.md
To learn more, see [Create an IoT Central organization](howto-create-organizatio
Devices that connect to your IoT Central application typically use X.509 certificates or shared access signatures (SAS) as credentials. An administrator manages the group certificates or keys that these device credentials are derived from. To learn more, see: -- [X.509 group enrollment](concepts-get-connected.md#x509-group-enrollment)-- [SAS group enrollment](concepts-get-connected.md#sas-group-enrollment)
+- [X.509 group enrollment](concepts-device-authentication.md#x509-enrollment-group)
+- [SAS group enrollment](concepts-device-authentication.md#sas-enrollment-group)
- [How to roll X.509 device certificates](how-to-connect-devices-x509.md). An administrator can also create and manage the API tokens that a client application uses to authenticate with your IoT Central application. Client applications use the REST API to interact with IoT Central. To learn more, see:
iot-central Overview Iot Central Developer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/overview-iot-central-developer.md
Title: Azure IoT Central device connectivity guide | Microsoft Docs
-description: Azure IoT Central is an IoT application platform that simplifies the creation of IoT solutions. This guide describes how to connect IoT devices to your IoT Central application. After a device connects, it uses telemetry to send streaming data and properties to report device state. Iot Central can set device state using writable properties and call commands on a device. This article outlines best practices for device connectivity.
+description: Azure IoT Central is an IoT application platform that simplifies the creation of IoT solutions. This guide describes how IoT devices connect to your IoT Central application. After a device connects, it uses telemetry to send streaming data and properties to report device state. IoT Central can set device state using writable properties and call commands on a device.
Previously updated : 01/28/2022 Last updated : 03/02/2022
To learn more, see [Add an Azure IoT Edge device to your Azure IoT Central appli
A gateway device manages one or more downstream devices that connect to your IoT Central application. A gateway device can process the telemetry from the downstream devices before it's forwarded to your IoT Central application. Both IoT devices and IoT Edge devices can act as gateways. To learn more, see [Define a new IoT gateway device type in your Azure IoT Central application](./tutorial-define-gateway-device-type.md) and [How to connect devices through an IoT Edge transparent gateway](how-to-connect-iot-edge-transparent-gateway.md).
-## Connect a device
+## How devices connect
-Azure IoT Central uses the [Azure IoT Hub Device Provisioning service (DPS)](../../iot-dps/about-iot-dps.md) to manage all device registration and connection.
+As you connect a device to IoT Central, it goes through the following stages: _registered_, _provisioned_, and _connected_.
+
+To learn how to monitor the status of a device, see [Monitor your devices](howto-manage-devices-individually.md#monitor-your-devices).
+
+### Register a device
+
+When you register a device with IoT Central, you're telling IoT Central the ID of a device that you want to connect to the application. Optionally at this stage, you can assign the device to a [device template](concepts-device-templates.md) that declares the capabilities of the device to your application.
+
+> [!TIP]
+> A device ID can contain letters, numbers, and the `-` character.
+
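The device ID rule in the tip above can be sketched as a validation check. The regular expression below encodes only the stated rule (letters, numbers, and `-`); check the DPS registration ID rules for the full constraints, such as length limits, before relying on it.

```python
import re

# Sketch: validate a device ID against the rule stated in the tip above.
# This covers only letters, numbers, and '-'; DPS may impose further limits.
DEVICE_ID = re.compile(r"^[A-Za-z0-9-]+$")

def is_valid_device_id(device_id: str) -> bool:
    return bool(DEVICE_ID.match(device_id))

print(is_valid_device_id("thermostat-001"))  # True
print(is_valid_device_id("thermostat_001"))  # False: '_' is not allowed
```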
+There are three ways to register a device in an IoT Central application:
+
+- Use the **Devices** page in your IoT Central application to register devices individually. To learn more, see [Add a device](howto-manage-devices-individually.md#add-a-device).
+- Add devices in bulk from a CSV file. To learn more, see [Import devices](howto-manage-devices-in-bulk.md#import-devices).
+- Automatically register devices when they first try to connect. This scenario enables OEMs to mass manufacture devices that can connect without first being registered. To learn more, see [Automatically register devices](concepts-device-authentication.md#automatically-register-devices).
+
+ Optionally, you can require an operator to approve the device before it starts sending data.
+
+ > [!TIP]
+ > On the **Administration > Device connection** page, the **Auto approve** option controls whether an operator must manually approve the device before it can start sending data.
+
+You only need to register a device once in your IoT Central application.
+
+### Provision a device
+
+When a device first tries to connect to your IoT Central application, it starts the process by connecting to the Device Provisioning Service (DPS). DPS checks the device's credentials and, if they're valid, provisions the device with a connection string for one of IoT Central's internal IoT hubs. DPS uses the _group enrollment_ configurations in your IoT Central application to manage this provisioning process for you.
+
+> [!TIP]
+> The device also sends the **ID scope** value that tells DPS which IoT Central application the device is connecting to. You can look up the **ID scope** in your IoT Central application on the **Permissions > Device connection groups** page.
+
+Typically, a device should cache the connection string it receives from DPS but should be prepared to retrieve new connection details if the current connection fails. To learn more, see [Handle connection failures](concepts-device-implementation.md#handle-connection-failures).
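The cache-then-fallback pattern above can be sketched as follows. The error names and the `connect`/`provision` callables are hypothetical placeholders standing in for the device SDK and DPS calls; the point is the control flow: reuse cached details, and reprovision only when the failure indicates they're no longer valid.

```python
import time

# Sketch of the cache-then-fallback pattern described above. Error names
# and the connect/provision callables are hypothetical placeholders.
FATAL = {"unreachable-endpoint", "expired-token", "device-disabled"}

def connect_with_fallback(connect, provision, cached, retries=3):
    details = cached
    for _attempt in range(retries):
        try:
            return connect(details)
        except RuntimeError as err:
            if str(err) in FATAL:
                details = provision()  # cached details invalid: reprovision
            else:
                time.sleep(0)  # transient: back off (0s keeps the sketch fast)
    raise RuntimeError("could not connect")

# Simulated run: the cached details are stale, so the first attempt fails
# fatally and the device reprovisions before connecting.
calls = []
def connect(details):
    calls.append(details)
    if details == "stale":
        raise RuntimeError("expired-token")
    return f"connected:{details}"

result = connect_with_fallback(connect, lambda: "fresh", cached="stale")
print(result)  # connected:fresh
```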
Using DPS enables: -- IoT Central to support onboarding and connecting devices at scale.
+- IoT Central to onboard and connect devices at scale.
- You to generate device credentials and configure the devices offline without registering the devices through IoT Central UI. - You to use your own device IDs to register devices in IoT Central. Using your own device IDs simplifies integration with existing back-office systems. - A single, consistent way to connect devices to IoT Central.
-To learn more, see [Get connected to Azure IoT Central](./concepts-get-connected.md) and [best practices](#best-practices).
+### Authenticate and connect device
-### Security
+A device uses its credentials and the connection string it received from DPS to connect to and authenticate with your IoT Central application. A device should also send a [model ID that identifies the device template it's assigned to](concepts-device-templates.md#assign-a-device-to-a-device-template).
-The connection between a device and your IoT Central application is secured by using either [shared access signatures](./concepts-get-connected.md#sas-group-enrollment) or industry-standard [X.509 certificates](./concepts-get-connected.md#x509-group-enrollment).
+IoT Central supports two types of device credentials:
-### Communication protocols
+- Shared access signatures
+- X.509 certificates
-Communication protocols that a device can use to connect to IoT Central include MQTT, AMQP, and HTTPS. Internally, IoT Central uses an IoT hub to enable device connectivity. For more information about the communication protocols that IoT Hub supports for device connectivity, see [Choose a communication protocol](../../iot-hub/iot-hub-devguide-protocols.md).
+To learn more, see [Device authentication concepts](concepts-device-authentication.md).
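For the shared access signature case, a per-device key is derived from the SAS group enrollment key, following the approach used with DPS symmetric key group enrollments: an HMAC-SHA256 of the device ID, keyed with the decoded group key. The group key below is a made-up example value, not a real enrollment key.

```python
import base64
import hashlib
import hmac

# Sketch: derive a per-device symmetric key from a SAS group enrollment key
# (HMAC-SHA256 of the device ID, keyed with the base64-decoded group key).
# The group key here is a made-up example value.
def derive_device_key(group_key_b64: str, device_id: str) -> str:
    key = base64.b64decode(group_key_b64)
    digest = hmac.new(key, device_id.encode("utf-8"), hashlib.sha256).digest()
    return base64.b64encode(digest).decode("utf-8")

group_key = base64.b64encode(b"example-group-enrollment-key").decode()
device_key = derive_device_key(group_key, "thermostat-001")
print(device_key)
```

Because the derivation is deterministic, a back office can pre-compute device keys offline without registering each device through the IoT Central UI.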
-## Connectivity patterns
+All data exchanged between devices and your Azure IoT Central application is encrypted. IoT Hub authenticates every request from a device that connects to any of the device-facing IoT Hub endpoints. To avoid exchanging credentials over the wire, a device uses signed tokens to authenticate. For more information, see [Control access to IoT Hub](../../iot-hub/iot-hub-devguide-security.md).
-Device developers typically use one of the device SDKs to implement devices that connect to an IoT Central application. Some scenarios, such as for devices that can't connect to the internet, also require a gateway. To learn more about the device connectivity options available to device developers, see:
+## Connectivity patterns
-- [Get connected to Azure IoT Central](concepts-get-connected.md)-- [Connect Azure IoT Edge devices to an Azure IoT Central application](concepts-iot-edge.md)
+Device developers typically use one of the device SDKs to implement devices that connect to an IoT Central application. Some scenarios, such as for devices that can't connect to the internet, also require a gateway.
A solution design must take into account the required device connectivity pattern. These patterns fall in to two broad categories. Both categories include devices sending telemetry to your IoT Central application: ### Persistent connections
-Persistent connections are required your solution needs *command and control* capabilities. In command and control scenarios, the IoT Central application sends commands to devices to control their behavior in near real time. Persistent connections maintain a network connection to the cloud and reconnect whenever there's a disruption. Use either the MQTT or the AMQP protocol for persistent device connections to IoT Central.
+Persistent connections are required when your solution needs _command and control_ capabilities. In command and control scenarios, the IoT Central application sends commands to devices to control their behavior in near real time. Persistent connections maintain a network connection to the cloud and reconnect whenever there's a disruption. Use either the MQTT or the AMQP protocol for persistent device connections to IoT Central.
The following options support persistent device connections: - Use the IoT device SDKs to connect devices and send telemetry:
- The device SDKs enable both the MQTT and AMQP protocols for creating persistent connections to IoT Central. To learn more, see [Get connected to Azure IoT Central](concepts-get-connected.md).
+ The device SDKs enable both the MQTT and AMQP protocols for creating persistent connections to IoT Central.
- Connect devices over a local network to an IoT Edge device that forwards telemetry to IoT Central:
The following options are available for custom transformations or computations b
To learn more, see [Transform data for IoT Central](howto-transform-data.md).
-## Implement the device
-
-An IoT Central device template includes a _model_ that specifies the behaviors a device of that type should implement. Behaviors include telemetry, properties, and commands.
-
-To learn more, see [Edit an existing device template](howto-edit-device-template.md).
-
-> [!TIP]
-> You can export the model from IoT Central as a [Digital Twins Definition Language (DTDL) v2](https://github.com/Azure/opendigitaltwins-dtdl) JSON file.
-
-Each model has a unique _device twin model identifier_ (DTMI), such as `dtmi:com:example:Thermostat;1`. When a device connects to IoT Central, it sends the DTMI of the model it implements. IoT Central can then associate the correct device template with the device.
-
-[IoT Plug and Play](../../iot-develop/overview-iot-plug-and-play.md) defines a set of [conventions](../../iot-develop/concepts-convention.md) that a device should follow when it implements a DTDL model.
-
-The [Azure IoT device SDKs](#languages-and-sdks) include support for the IoT Plug and Play conventions.
-
-### Device model
-
-A device model is defined by using the [DTDL](https://github.com/Azure/opendigitaltwins-dtdl) modeling language. This language lets you define:
--- The telemetry the device sends. The definition includes the name and data type of the telemetry. For example, a device sends temperature telemetry as a double.-- The properties the device reports to IoT Central. A property definition includes its name and data type. For example, a device reports the state of a valve as a Boolean.-- The properties the device can receive from IoT Central. Optionally, you can mark a property as writable. For example, IoT Central sends a target temperature as a double to a device.-- The commands a device responds to. The definition includes the name of the command, and the names and data types of any parameters. For example, a device responds to a reboot command that specifies how many seconds to wait before rebooting.-
-A DTDL model can be a _no-component_ or a _multi-component_ model:
--- No-component model: A simple model doesn't use embedded or cascaded components. All the telemetry, properties, and commands are defined a single _root component_. For an example, see the [Thermostat](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/samples/Thermostat.json) model.-- Multi-component model. A more complex model that includes two or more components. These components include a single root component, and one or more nested components. For an example, see the [Temperature Controller](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/samples/TemperatureController.json) model.-
-To learn more, see [IoT Plug and Play modeling guide](../../iot-develop/concepts-modeling-guide.md)
-
-### Conventions
-
-A device should follow the IoT Plug and Play conventions when it exchanges data with IoT Central. The conventions include:
--- Send the DTMI when it connects to IoT Central.-- Send correctly formatted JSON payloads and metadata to IoT Central.-- Correctly respond to writable properties and commands from IoT Central.-- Follow the naming conventions for component commands.-
-> [!NOTE]
-> Currently, IoT Central does not fully support the DTDL **Array** and **Geospatial** data types.
-
-To learn more about the format of the JSON messages that a device exchanges with IoT Central, see [Telemetry, property, and command payloads](concepts-telemetry-properties-commands.md).
-
-To learn more about the IoT Plug and Play conventions, see [IoT Plug and Play conventions](../../iot-develop/concepts-convention.md).
-
-### Device SDKs
-
-Use one of the [Azure IoT device SDKs](../../iot-hub/iot-hub-devguide-sdks.md#azure-iot-hub-device-sdks) to implement the behavior of your device. The code should:
--- Register the device with DPS and use the information from DPS to connect to the internal IoT hub in your IoT Central application.-- Announce the DTMI of the model the device implements.-- Send telemetry in the format that the device model specifies. IoT Central uses the model in the device template to determine how to use the telemetry for visualizations and analysis.-- Synchronize property values between the device and IoT Central. The model specifies the property names and data types so that IoT Central can display the information.-- Implement command handlers for the commands specified in the model. The model specifies the command names and parameters that the device should use.-
-For more information about the role of device templates, see [What are device templates?](./concepts-device-templates.md).
-
-For some sample code, see [Create and connect a client application](./tutorial-connect-device.md).
-
-### Languages and SDKs
-
-For more information about the supported languages and SDKs, see [Understand and use Azure IoT Hub device SDKs](../../iot-hub/iot-hub-devguide-sdks.md#azure-iot-hub-device-sdks).
-
-## Best practices
-
-These recommendations show how to implement devices to take advantage of the built-in disaster recovery and automatic scaling in IoT Central.
-
-The following steps show the high-level flow when a device connects to IoT Central:
-
-1. Use DPS to provision the device and get a device connection string.
-
-1. Use the connection string to connect IoT Central's internal IoT Hub endpoint. Send data to and receive data from your IoT Central application.
-
-1. If the device gets connection failures, then depending on the error type, either retry the connection or reprovision the device.
-
-### Use DPS to provision the device
-
-To provision a device with DPS, use the scope ID, credentials, and device ID from your IoT Central application. To learn more about the credential types, see [X.509 group enrollment](concepts-get-connected.md#x509-group-enrollment) and [SAS group enrollment](concepts-get-connected.md#sas-group-enrollment). To learn more about device IDs, see [Device registration](concepts-get-connected.md#device-registration).
-
-On success, DPS returns a connection string the device can use to connect to your IoT Central application. To troubleshoot provisioning errors, see [Check the provisioning status of your device](troubleshoot-connection.md#check-the-provisioning-status-of-your-device).
-
-The device can cache the connection string to use for later connections. However, the device must be prepared to [handle connection failures](#handle-connection-failures).
-
-### Handle connection failures
-
-For scaling or disaster recovery purposes, IoT Central may update its underlying IoT hub. To maintain connectivity, your device code should handle specific connection errors by establishing a connection to the new IoT Hub endpoint.
-
-If the device gets any of the following errors when it connects, it should reprovision the device with DPS to get a new connection string. These errors mean the connection string is no longer valid:
--- Unreachable IoT Hub endpoint.-- Expired security token.-- Device disabled in IoT Hub.-
-If the device gets any of the following errors when it connects, it should use a back-off strategy to retry the connection. These errors mean the connection string is still valid, but transient conditions are stopping the device from connecting:
--- Operator blocked device.-- Internal error 500 from the service.-
-To learn more about device error codes, see [Troubleshooting device connections](troubleshoot-connection.md).
-
-### Test failover capabilities
-
-The Azure CLI lets you test the failover capabilities of your device code. The CLI command works by temporarily switching a device registration to a different internal IoT hub. To verify the device failover worked, check that the device still sends telemetry and responds to commands.
-
-To run the failover test for your device, run the following command:
-
-```azurecli
-az iot central device manual-failover \
- --app-id {Application ID of your IoT Central application} \
- --device-id {Device ID of the device you're testing} \
- --ttl-minutes {How to wait before moving the device back to it's original IoT hub}
-```
-
-> [!TIP]
-> To find the **Application ID**, navigate to **Administration > Your application** in your IoT Central application.
-
-If the command succeeds, you see output that looks like the following:
-
-```output
-Command group 'iot central device' is in preview and under development. Reference and support levels: https://aka.ms/CLI_refstatus
-{
- "hubIdentifier": "6bd4...bafa",
- "message": "Success! This device is now being failed over. You can check your device'ΓÇÖ's status using 'iot central device registration-info' command. The device will revert to its original hub at Tue, 18 May 2021 11:03:45 GMT. You can choose to failback earlier using device-manual-failback command. Learn more: https://aka.ms/iotc-device-test"
-}
-```
-
-To learn more about the CLI command, see [az iot central device manual-failover](/cli/azure/iot/central/device#az_iot_central_device_manual_failover).
-
-You can now check that telemetry from the device still reaches your IoT Central application.
-
-> [!TIP]
-> To see sample device code that handles failovers in various programing languages, see [IoT Central high availability clients](/samples/azure-samples/iot-central-high-availability-clients/iotc-high-availability-clients/).
- ## Next steps If you're a device developer and want to dive into some code, the suggested next step is to [Create and connect a client application to your Azure IoT Central application](./tutorial-connect-device.md).
+If you want to learn more about device implementation, see [Device implementation and best practices for IoT Central](concepts-device-implementation.md).
+ To learn more about using IoT Central, the suggested next steps are to try the quickstarts, beginning with [Create an Azure IoT Central application](./quick-deploy-iot-central.md).
iot-central Overview Iot Central Solution Builder https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/overview-iot-central-solution-builder.md
Built-in features of IoT Central you can use to extract business value include:
- To learn more about dashboards, see [Create and manage multiple dashboards](howto-manage-dashboards.md) and [Configure the application dashboard](howto-manage-dashboards.md).
- - When a device connects to an IoT Central, the device is associated with a device template for the device type. A device template has customizable views that an operator uses to manage individual devices. You can create and customize the available views for each device type. To learn more, see [Add views](howto-set-up-template.md#views).
+ - When a device connects to an IoT Central, the device is assigned to a device template for the device type. A device template has customizable views that an operator uses to manage individual devices. You can create and customize the available views for each device type. To learn more, see [Add views](howto-set-up-template.md#views).
- Use built-in rules and analytics:
iot-central Troubleshoot Connection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/troubleshoot-connection.md
https://aka.ms/iotcentral-docs-dps-SAS",
| - | - | - |
| Provisioned | No immediately recognizable issue. | N/A |
| Registered | The device has not yet connected to IoT Central. | Check your device logs for connectivity issues. |
-| Blocked | The device is blocked from connecting to IoT Central. | Device is blocked from connecting to the IoT Central application. Unblock the device in IoT Central and retry. To learn more, see [Block devices](concepts-get-connected.md#device-status-values). |
-| Unapproved | The device is not approved. | Device isn't approved to connect to the IoT Central application. Approve the device in IoT Central and retry. To learn more, see [Approve devices](concepts-get-connected.md#device-registration) |
-| Unassociated | The device is not associated with a device template. | Associate the device with a device template so that IoT Central knows how to parse the data. |
+| Blocked | The device is blocked from connecting to IoT Central. | Device is blocked from connecting to the IoT Central application. Unblock the device in IoT Central and retry. To learn more, see [Device status values](howto-manage-devices-individually.md#device-status-values). |
+| Unapproved | The device is not approved. | Device isn't approved to connect to the IoT Central application. Approve the device in IoT Central and retry. To learn more, see [Device status values](howto-manage-devices-individually.md#device-status-values). |
+| Unassigned | The device is not assigned to a device template. | Assign the device to a device template so that IoT Central knows how to parse the data. |
-Learn more about [device status codes](concepts-get-connected.md#device-status-values).
+Learn more about [Device status values](howto-manage-devices-individually.md#device-status-values).
### Error codes
iot-central Tutorial Create Telemetry Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/tutorial-create-telemetry-rules.md
To create a telemetry rule, the device template must include at least one teleme
1. Enter the name _Temperature monitor_ to identify the rule and press Enter.
-1. Select the **Sensor Controller** device template. By default, the rule automatically applies to all the devices associated with the device template. To filter for a subset of the devices, select **+ Filter** and use device properties to identify the devices. To disable the rule, toggle the **Enabled/Disabled** button:
+1. Select the **Sensor Controller** device template. By default, the rule automatically applies to all the devices assigned to the device template. To filter for a subset of the devices, select **+ Filter** and use device properties to identify the devices. To disable the rule, toggle the **Enabled/Disabled** button:
:::image type="content" source="media/tutorial-create-telemetry-rules/device-filters.png" alt-text="Screenshot that shows the selection of the device template in the rule definition":::
iot-central Tutorial Define Gateway Device Type https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/tutorial-define-gateway-device-type.md
Both your simulated downstream devices are now connected to your simulated gatew
In the [Create and connect a client application to your Azure IoT Central application](tutorial-connect-device.md) tutorial, the sample code shows how to include the model ID from the device template in the provisioning payload the device sends.
-When you connect a downstream device, you can modify the provisioning payload to include the the ID of the gateway device. The model ID lets IoT Central associate the device with the correct downstream device template. The gateway ID lets IoT Central establish the relationship between the downstream device and its gateway. In this case the provisioning payload the device sends looks like the following JSON:
+When you connect a downstream device, you can modify the provisioning payload to include the ID of the gateway device. The model ID lets IoT Central assign the device to the correct downstream device template. The gateway ID lets IoT Central establish the relationship between the downstream device and its gateway. In this case, the provisioning payload the device sends looks like the following JSON:
```json
{
iot-dps About Iot Dps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/about-iot-dps.md
DPS automates device provisioning with Azure IoT Hub. Learn more about [IoT Hub]
IoT Central applications use an internal DPS instance to manage device connections. To learn more, see:
-* [Get connected to Azure IoT Central](../iot-central/core/concepts-get-connected.md)
+* [How devices connect to IoT Central](../iot-central/core/overview-iot-central-developer.md)
* [Tutorial: Create and connect a client application to your Azure IoT Central application](../iot-central/core/tutorial-connect-device.md) ## Next steps
iot-edge How To Vs Code Develop Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-vs-code-develop-module.md
Install [Visual Studio Code](https://code.visualstudio.com/) first and then add
You'll also need to install some additional, language-specific tools to develop your module: -- C#, including Azure Functions: [.NET Core 2.1 SDK](https://dotnet.microsoft.com/download/dotnet/2.1)
+- C#, including Azure Functions: [.NET Core 3.1 SDK](https://dotnet.microsoft.com/download/dotnet/3.1)
- Python: [Python](https://www.python.org/downloads/) and [Pip](https://pip.pypa.io/en/stable/installing/#installation) for installing Python packages (typically included with your Python installation).
iot-fundamentals Iot Phone App How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-fundamentals/iot-phone-app-how-to.md
After you register the device in IoT Central, you can connect the smartphone app
1. On the **Settings > Registration** page, you can see the device ID and ID scope that the app used to connect to IoT Central.
-To learn more about how devices connect to IoT Central, see [Get connected to Azure IoT Central](../iot-central/core/concepts-get-connected.md).
+To learn more about how devices connect to IoT Central, see [How devices connect](../iot-central/core/overview-iot-central-developer.md).
### Verify the connection
iot-hub Monitor Iot Hub Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/monitor-iot-hub-reference.md
To learn about metrics supported by other Azure services, see [Supported metrics
**Topics in this section** -- [Monitoring Azure IoT Hub data reference](#monitoring-azure-iot-hub-data-reference)
- - [Metrics](#metrics)
- - [Supported aggregations](#supported-aggregations)
- - [Cloud to device command metrics](#cloud-to-device-command-metrics)
- - [Cloud to device direct methods metrics](#cloud-to-device-direct-methods-metrics)
- - [Cloud to device twin operations metrics](#cloud-to-device-twin-operations-metrics)
- - [Configurations metrics](#configurations-metrics)
- - [Daily quota metrics](#daily-quota-metrics)
- - [Device metrics](#device-metrics)
- - [Device telemetry metrics](#device-telemetry-metrics)
- - [Device to cloud twin operations metrics](#device-to-cloud-twin-operations-metrics)
- - [Event grid metrics](#event-grid-metrics)
- - [Jobs metrics](#jobs-metrics)
- - [Routing metrics](#routing-metrics)
- - [Twin query metrics](#twin-query-metrics)
- - [Metric dimensions](#metric-dimensions)
- - [Resource logs](#resource-logs)
- - [Connections](#connections)
- - [Device telemetry](#device-telemetry)
- - [Cloud-to-device commands](#cloud-to-device-commands)
- - [Device identity operations](#device-identity-operations)
- - [File upload operations](#file-upload-operations)
- - [Routes](#routes)
- - [Device-to-cloud twin operations](#device-to-cloud-twin-operations)
- - [Cloud-to-device twin operations](#cloud-to-device-twin-operations)
- - [Twin queries](#twin-queries)
- - [Jobs operations](#jobs-operations)
- - [Direct Methods](#direct-methods)
- - [Distributed Tracing (Preview)](#distributed-tracing-preview)
- - [IoT Hub D2C (device-to-cloud) logs](#iot-hub-d2c-device-to-cloud-logs)
- - [IoT Hub ingress logs](#iot-hub-ingress-logs)
- - [IoT Hub egress logs](#iot-hub-egress-logs)
- - [Configurations](#configurations)
- - [Device Streams (Preview)](#device-streams-preview)
- - [Azure Monitor Logs tables](#azure-monitor-logs-tables)
- - [See Also](#see-also)
+- [Supported aggregations](#supported-aggregations)
+- [Cloud to device command metrics](#cloud-to-device-command-metrics)
+- [Cloud to device direct methods metrics](#cloud-to-device-direct-methods-metrics)
+- [Cloud to device twin operations metrics](#cloud-to-device-twin-operations-metrics)
+- [Configurations metrics](#configurations-metrics)
+- [Daily quota metrics](#daily-quota-metrics)
+- [Device metrics](#device-metrics)
+- [Device telemetry metrics](#device-telemetry-metrics)
+- [Device to cloud twin operations metrics](#device-to-cloud-twin-operations-metrics)
+- [Event grid metrics](#event-grid-metrics)
+- [Jobs metrics](#jobs-metrics)
+- [Routing metrics](#routing-metrics)
+- [Twin query metrics](#twin-query-metrics)
### Supported aggregations
To learn more about metric dimensions, see [Multi-dimensional metrics](../azure-
## Resource logs
-This section lists all the resource log category types and schemas collected for Azure IoT Hub. The resource provider and type for all IoT Hub logs is [Microsoft.Devices/IotHubs](../azure-monitor/essentials/resource-logs-categories.md#microsoftdevicesiothubs).
+This section lists all the resource log category types and schemas collected for Azure IoT Hub. The resource provider and type for all IoT Hub logs is [Microsoft.Devices/IotHubs](../azure-monitor/essentials/resource-logs-categories.md#microsoftdevicesiothubs). Be aware that events are emitted only for errors in some categories.
**Topics in this section** -- [Monitoring Azure IoT Hub data reference](#monitoring-azure-iot-hub-data-reference)
- - [Metrics](#metrics)
- - [Supported aggregations](#supported-aggregations)
- - [Cloud to device command metrics](#cloud-to-device-command-metrics)
- - [Cloud to device direct methods metrics](#cloud-to-device-direct-methods-metrics)
- - [Cloud to device twin operations metrics](#cloud-to-device-twin-operations-metrics)
- - [Configurations metrics](#configurations-metrics)
- - [Daily quota metrics](#daily-quota-metrics)
- - [Device metrics](#device-metrics)
- - [Device telemetry metrics](#device-telemetry-metrics)
- - [Device to cloud twin operations metrics](#device-to-cloud-twin-operations-metrics)
- - [Event grid metrics](#event-grid-metrics)
- - [Jobs metrics](#jobs-metrics)
- - [Routing metrics](#routing-metrics)
- - [Twin query metrics](#twin-query-metrics)
- - [Metric dimensions](#metric-dimensions)
- - [Resource logs](#resource-logs)
- - [Connections](#connections)
- - [Device telemetry](#device-telemetry)
- - [Cloud-to-device commands](#cloud-to-device-commands)
- - [Device identity operations](#device-identity-operations)
- - [File upload operations](#file-upload-operations)
- - [Routes](#routes)
- - [Device-to-cloud twin operations](#device-to-cloud-twin-operations)
- - [Cloud-to-device twin operations](#cloud-to-device-twin-operations)
- - [Twin queries](#twin-queries)
- - [Jobs operations](#jobs-operations)
- - [Direct Methods](#direct-methods)
- - [Distributed Tracing (Preview)](#distributed-tracing-preview)
- - [IoT Hub D2C (device-to-cloud) logs](#iot-hub-d2c-device-to-cloud-logs)
- - [IoT Hub ingress logs](#iot-hub-ingress-logs)
- - [IoT Hub egress logs](#iot-hub-egress-logs)
- - [Configurations](#configurations)
- - [Device Streams (Preview)](#device-streams-preview)
- - [Azure Monitor Logs tables](#azure-monitor-logs-tables)
- - [See Also](#see-also)
+- [Connections](#connections)
+- [Device telemetry](#device-telemetry)
+- [Cloud-to-device commands](#cloud-to-device-commands)
+- [Device identity operations](#device-identity-operations)
+- [File upload operations](#file-upload-operations)
+- [Routes](#routes)
+- [Device-to-cloud twin operations](#device-to-cloud-twin-operations)
+- [Cloud-to-device twin operations](#cloud-to-device-twin-operations)
+- [Twin queries](#twin-queries)
+- [Jobs operations](#jobs-operations)
+- [Direct Methods](#direct-methods)
+- [Distributed Tracing (Preview)](#distributed-tracing-preview)
+ - [IoT Hub D2C (device-to-cloud) logs](#iot-hub-d2c-device-to-cloud-logs)
+ - [IoT Hub ingress logs](#iot-hub-ingress-logs)
+ - [IoT Hub egress logs](#iot-hub-egress-logs)
+- [Configurations](#configurations)
+- [Device Streams (Preview)](#device-streams-preview)
### Connections
iot-hub Monitor Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/monitor-iot-hub.md
The following screenshot shows a diagnostic setting for routing the resource log
:::image type="content" source="media/monitor-iot-hub/diagnostic-setting-portal.png" alt-text="Diagnostic Settings pane for an IoT hub.":::
-See [Create diagnostic setting to collect platform logs and metrics in Azure](../azure-monitor/essentials/diagnostic-settings.md) for the detailed process for creating a diagnostic setting using the Azure portal, CLI, or PowerShell. When you create a diagnostic setting, you specify which categories of logs to collect. The categories for Azure IoT Hub are listed under [Resource logs in the Monitoring Azure IoT Hub data reference](monitor-iot-hub-reference.md#resource-logs).
+See [Create diagnostic setting to collect platform logs and metrics in Azure](../azure-monitor/essentials/diagnostic-settings.md) for the detailed process for creating a diagnostic setting using the Azure portal, CLI, or PowerShell. When you create a diagnostic setting, you specify which categories of logs to collect. The categories for Azure IoT Hub are listed under [Resource logs in the Monitoring Azure IoT Hub data reference](monitor-iot-hub-reference.md#resource-logs). Be aware that events are emitted only for errors in some categories.
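For example, a diagnostic setting that sends the Connections and DeviceTelemetry log categories to a Log Analytics workspace can be sketched with the Azure CLI as follows (substitute your own resource IDs; the setting name is illustrative):

```azurecli
az monitor diagnostic-settings create \
  --name iothub-diagnostics \
  --resource <iot-hub-resource-id> \
  --workspace <log-analytics-workspace-resource-id> \
  --logs '[{"category": "Connections", "enabled": true}, {"category": "DeviceTelemetry", "enabled": true}]'
```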
When routing IoT Hub platform metrics to other locations, be aware that:
In Azure portal, you can select **Logs** under **Monitoring** on the left-pane o
For a list of the tables used by Azure Monitor Logs and queryable by Log Analytics, see [Azure Monitor Logs tables in the Monitoring Azure IoT Hub data reference](monitor-iot-hub-reference.md#azure-monitor-logs-tables).
-All resource logs in Azure Monitor have the same fields followed by service-specific fields. The common schema is outlined in [Azure Monitor resource log schema](../azure-monitor/essentials/resource-logs-schema.md#top-level-common-schema). You can find the schema and categories of resource logs collected for Azure IoT Hub in [Resource logs in the Monitoring Azure IoT Hub data reference](monitor-iot-hub-reference.md#resource-logs).
+All resource logs in Azure Monitor have the same fields followed by service-specific fields. The common schema is outlined in [Azure Monitor resource log schema](../azure-monitor/essentials/resource-logs-schema.md#top-level-common-schema). You can find the schema and categories of resource logs collected for Azure IoT Hub in [Resource logs in the Monitoring Azure IoT Hub data reference](monitor-iot-hub-reference.md#resource-logs). Be aware that events are emitted only for errors in some categories.
The [Activity log](../azure-monitor/essentials/activity-log.md) is a platform log in Azure that provides insight into subscription-level events. You can view it independently or route it to Azure Monitor Logs, where you can do much more complex queries using Log Analytics.
load-balancer Manage Probes How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/manage-probes-how-to.md
# Manage health probes for Azure Load Balancer using the Azure portal
-Azure Load Balancer supports health probes to monitor the health of backend instances. In this article, you'll learn how to manage health probes for Azure Load Balancer.
+Azure Load Balancer uses health probes to monitor the health of backend instances. In this article, you'll learn how to manage health probes for Azure Load Balancer.
There are three types of health probes:
In this article, you learned how to manage health probes for an Azure Load Balan
For more information about Azure Load Balancer, see: - [What is Azure Load Balancer?](load-balancer-overview.md) - [Frequently asked questions - Azure Load Balancer](load-balancer-faqs.yml)-- [Azure Load Balancer health probes](load-balancer-custom-probe-overview.md)
+- [Azure Load Balancer health probes](load-balancer-custom-probe-overview.md)
logic-apps Concepts Schedule Automated Recurring Tasks Workflows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/concepts-schedule-automated-recurring-tasks-workflows.md
This article describes the capabilities for the Schedule built-in triggers and a
## Schedule triggers
-You can start your logic app workflow by using the Recurrence trigger or Sliding Window trigger, which isn't associated with any specific service or system. These triggers start and run your workflow based on your specified recurrence where you select the interval and frequency, such as the number of seconds, minutes, hours, days, weeks, or months. You can also set the start date and time along with the time zone. Each time that a trigger fires, Azure Logic Apps creates and runs a new workflow instance for your logic app.
+You can start your logic app workflow by using the [Recurrence trigger](../connectors/connectors-native-recurrence.md) or [Sliding Window trigger](../connectors/connectors-native-sliding-window.md), which isn't associated with any specific service or system. These triggers start and run your workflow based on your specified recurrence where you select the interval and frequency, such as the number of seconds, minutes, hours, days, weeks, or months. You can also set the start date and time along with the time zone. Each time that a trigger fires, Azure Logic Apps creates and runs a new workflow instance for your logic app.
Here are the differences between these triggers: * **Recurrence**: Runs your workflow at regular time intervals based on your specified schedule. If the trigger misses recurrences, for example, due to disruptions or disabled workflows, the Recurrence trigger doesn't process the missed recurrences but restarts recurrences with the next scheduled interval.
- If you select **Day** as the frequency, you can specify the hours of the day and minutes of the hour, for example, every day at 2:30. If you select **Week** as the frequency, you can also select days of the week, such as Wednesday and Saturday. You can also specify a start date and time along with a time zone for your recurrence schedule.
+ If you select **Day** as the frequency, you can specify the hours of the day and minutes of the hour, for example, every day at 2:30. If you select **Week** as the frequency, you can also select days of the week, such as Wednesday and Saturday. You can also specify a start date and time along with a time zone for your recurrence schedule. For more information about time zone formatting, see [Add a Recurrence trigger](../connectors/connectors-native-recurrence.md#add-the-recurrence-trigger).
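   A **Week** recurrence with these options can be sketched in the underlying workflow definition language as follows (the start time, time zone, and schedule values are illustrative):

   ```json
   "triggers": {
       "Recurrence": {
           "type": "Recurrence",
           "recurrence": {
               "frequency": "Week",
               "interval": 1,
               "startTime": "2022-03-15T02:30:00",
               "timeZone": "Pacific Standard Time",
               "schedule": {
                   "weekDays": [ "Wednesday", "Saturday" ],
                   "hours": [ 2 ],
                   "minutes": [ 30 ]
               }
           }
       }
   }
   ```

   With this schedule, the trigger fires at 2:30 AM in the specified time zone on Wednesdays and Saturdays.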
> [!IMPORTANT] > If you use the **Day** or **Week** frequency and specify a future date and time, make sure that you set up the recurrence in advance:
logic-apps Create Managed Service Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/create-managed-service-identity.md
Following this `Microsoft.Web/connections` resource definition, make sure that y
{ "type": "Microsoft.Web/connections/accessPolicies", "apiVersion": "2016-06-01",
- "name": "[concat('<connection-name>'),'/','<object-ID>')]",
+ "name": "[concat('<connection-name>','/','<object-ID>')]",
"location": "<location>", "dependsOn": [ "[resourceId('Microsoft.Web/connections', parameters('connection_name'))]"
machine-learning Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/release-notes.md
Azure portal users will always find the latest image available for provisioning
See the [list of known issues](reference-known-issues.md) to learn about known bugs and workarounds.
+## March 9, 2022
+
+[Data Science Virtual Machine - Windows 2019](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.dsvm-win-2019?tab=Overview)
+
+Version: 21.12.03
+
+The Windows 2019 DSVM is now supported under publisher: microsoft-dsvm, offer ID: dsvm-win-2019, plan ID/SKU ID: winserver-2019.
+
+If you use an ARM template or virtual machine scale set to deploy Windows DSVM machines, configure the SKU as winserver-2019 instead of server-2019. Starting in March 2022, updates to Windows DSVM images ship on the new SKU.
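In an ARM template, the new publisher, offer, and SKU map to the VM's image reference as in the following sketch (only the image-related properties are shown; marketplace images may also require the matching `plan` block):

```json
"plan": {
    "name": "winserver-2019",
    "publisher": "microsoft-dsvm",
    "product": "dsvm-win-2019"
},
"storageProfile": {
    "imageReference": {
        "publisher": "microsoft-dsvm",
        "offer": "dsvm-win-2019",
        "sku": "winserver-2019",
        "version": "latest"
    }
}
```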
+
## December 3, 2021

New image for [Windows Server 2019](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-dsvm.dsvm-win-2019?tab=Overview).
media-services Concept Media Reserved Units https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/concept-media-reserved-units.md
[!INCLUDE [media services api v3 logo](./includes/v3-hr.md)]
-Media Reserved Units (MRUs) were previously used in Azure Media Services v2 to control encoding concurrency and performance. You no longer need to manage MRUs or request quota increases for any media services account as the system will automatically scale up and down based on load. You will also see performance that is equal to or improved in comparison to using MRUs.
+Media Reserved Units (MRUs) were previously used in Azure Media Services v2 to control encoding concurrency and performance. You no longer need to manage MRUs or request quota increases for any media services account as the system will automatically scale up and down based on load. You'll also see performance that is equal to or improved in comparison to using MRUs.
-If you have an account that was created using a version prior to the 2020-05-01 API, you will still have access to APIΓÇÖs for managing MRUs, however none of the MRU configuration that you set will be used to control encoding concurrency or performance. If you donΓÇÖt see the option to manage MRUs in the Azure portal, you have an account that was created with the 2020-05-01 API or later.
+If you have an account that was created using a version prior to the 2020-05-01 API, you'll still have access to APIs for managing MRUs; however, none of the MRU configuration that you set will be used to control encoding concurrency or performance. If you don't see the option to manage MRUs in the Azure portal, you have an account that was created with the 2020-05-01 API or later.
## Billing
-While there were previously charges for Media Reserved Units, as of April 17, 2021 there are no longer any charges for accounts that have configuration for Media Reserved Units. For more information on billing for encoding jobs, please see [Encoding video and audio with Media Services](encoding-concept.md)
+While there were previously charges for Media Reserved Units, as of April 17, 2021 there are no longer any charges for accounts that have configuration for Media Reserved Units. For more information on billing for encoding jobs, see [Encoding video and audio with Media Services](encoding-concept.md).
-For accounts created in with the **2020-05-01** version of the API (i.e. the v3 version) or through the Azure portal, scaling and media reserved units are no longer required. Scaling is now automatically handled by the service internally. Media reserved units are no longer needed or supported for any Azure Media Services account. See [Media reserved units (legacy)](concept-media-reserved-units.md) for additional information.
+For accounts created with the **2020-05-01** version of the API (that is, the v3 version) or through the Azure portal, scaling and media reserved units are no longer required. Scaling is now automatically handled by the service internally. Media reserved units are no longer needed or supported for any Azure Media Services account. See [Media reserved units (legacy)](concept-media-reserved-units.md) for additional information.
## See also * [Migrate from Media Services v2 to v3](migrate-v-2-v-3-migration-introduction.md)
-* [Scale Media Reserved Units with CLI](media-reserved-units-cli-how-to.md)
+* [Scale Media Reserved Units with CLI](media-reserved-units-how-to.md)
media-services Configure Connect Dotnet Howto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/configure-connect-dotnet-howto.md
namespace ConsoleApp1
- [Tutorial: Analyze videos with Media Services v3 - .NET](analyze-videos-tutorial.md) - [Create a job input from a local file - .NET](job-input-from-local-file-how-to.md) - [Create a job input from an HTTPS URL - .NET](job-input-from-http-how-to.md)-- [Encode with a custom Transform - .NET](transform-custom-presets-how-to.md)
+- [Encode with a custom Transform - .NET](transform-custom-transform-how-to.md)
- [Use AES-128 dynamic encryption and the key delivery service - .NET](drm-playready-license-template-concept.md) - [Use DRM dynamic encryption and license delivery service - .NET](drm-protect-with-drm-tutorial.md)-- [Get a signing key from the existing policy - .NET](drm-get-content-key-policy-dotnet-how-to.md)
+- [Get a signing key from the existing policy - .NET](drm-get-content-key-policy-how-to.md)
- [Create filters with Media Services - .NET](filters-dynamic-manifest-dotnet-how-to.md) - [Advanced video on-demand examples of Azure Functions v2 with Media Services v3](https://aka.ms/ams3functions)
media-services Drm Add Option Content Key Policy How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/drm-add-option-content-key-policy-how-to.md
+
+ Title: Add an option to a content key policy
+description: This article shows how to add an option to a content key policy.
+++++ Last updated : 03/10/2022+++
+# Add an option to a content key policy
++
+## Methods
+
+Use the following methods to add an option to a content key policy.
+
+## [CLI](#tab/cli/)
+++
media-services Drm Content Key Policy Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/drm-content-key-policy-concept.md
Usually, you associate your content key policy with your [Streaming Locator](str
## Example
-To get to the key, use `GetPolicyPropertiesWithSecretsAsync`, as shown in the [Get a signing key from the existing policy](drm-get-content-key-policy-dotnet-how-to.md#get-contentkeypolicy-with-secrets) example.
+To get to the key, use `GetPolicyPropertiesWithSecretsAsync`, as shown in the [Get a signing key from the existing policy](drm-get-content-key-policy-how-to.md#get-contentkeypolicy-with-secrets) example.
## Filtering, ordering, paging
media-services Drm Create Content Key Policy How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/drm-create-content-key-policy-how-to.md
+
+ Title: Create a content key policy
+description: This article shows how to create a content key policy.
+++++ Last updated : 03/10/2022+++
+# Create a content key policy
++
+## Methods
+
+Use the following methods to create a content key policy.
+
+## [CLI](#tab/cli/)
++
+## [REST](#tab/rest/)
+++
media-services Drm Delete Content Key Policy How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/drm-delete-content-key-policy-how-to.md
+
+ Title: Delete a content key policy
+description: This article shows how to delete a content key policy.
+++++ Last updated : 03/10/2022+++
+# Delete a content key policy
++
+## Methods
+
+Use the following methods to delete a content key policy.
+
+## [CLI](#tab/cli/)
++
+## [REST](#tab/rest/)
+++
media-services Drm Get Content Key Policy How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/drm-get-content-key-policy-how-to.md
+
+ Title: Get a signing key from a policy
+description: This topic shows how to get a signing key from the existing policy using Media Services v3.
++++ Last updated : 03/09/2022+++
+# Get a signing key from the existing policy
++
+One of the key design principles of the v3 API is to make the API more secure. v3 APIs do not return secrets or credentials on **Get** or **List** operations. For a detailed explanation, see [Azure RBAC and Media Services accounts](security-rbac-concept.md).
+
+The example in this article shows how to get a signing key from the existing policy.
+
+## Download
+
+Clone a GitHub repository that contains the full .NET sample to your machine using the following command:
+
+ ```bash
+ git clone https://github.com/Azure-Samples/media-services-v3-dotnet-tutorials.git
+ ```
+
+The ContentKeyPolicy with secrets example is located in the [EncryptWithDRM](https://github.com/Azure-Samples/media-services-v3-dotnet-tutorials/tree/main/AMSV3Tutorials/EncryptWithDRM) folder.
+
+## [.NET](#tab/net/)
+
+## Get ContentKeyPolicy with secrets
+
+To get to the key, use **GetPolicyPropertiesWithSecretsAsync**, as shown in the example below.
+
+[!code-csharp[Main](../../../media-services-v3-dotnet-tutorials/AMSV3Tutorials/EncryptWithDRM/Program.cs#GetOrCreateContentKeyPolicy)]
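For orientation, the relevant part of the sample reduces to the following sketch (the variable names are illustrative; see the linked sample for full context):

```csharp
// Get and List operations omit secrets, so request the policy properties
// with secrets explicitly.
ContentKeyPolicyProperties policyProperties = await client.ContentKeyPolicies
    .GetPolicyPropertiesWithSecretsAsync(resourceGroupName, accountName, contentKeyPolicyName);

// Look for a token restriction whose primary verification key is a
// symmetric key; that key is the token signing key.
foreach (ContentKeyPolicyOption option in policyProperties.Options)
{
    if (option.Restriction is ContentKeyPolicyTokenRestriction restriction &&
        restriction.PrimaryVerificationKey is ContentKeyPolicySymmetricTokenKey signingKey)
    {
        byte[] tokenSigningKey = signingKey.KeyValue;
    }
}
```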
++
media-services Drm List Content Key Policy How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/drm-list-content-key-policy-how-to.md
+
+ Title: List the content key policies
+description: This article shows how to list the content key policies.
+++++ Last updated : 03/10/2022+++
+# List the content key policies
++
+## Methods
+
+Use the following methods to list the content key policies.
+
+## [CLI](#tab/cli/)
++
+## [REST](#tab/rest/)
+++
media-services Drm Offline Fairplay For Ios Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/drm-offline-fairplay-for-ios-concept.md
Title: Media Services v3 offline FairPlay Streaming for iOS description: This topic gives an overview and shows how to use Azure Media Services v3 to dynamically encrypt your HTTP Live Streaming (HLS) content with Apple FairPlay in offline mode.- Previously updated : 05/25/2021 Last updated : 03/09/2022 + # Offline FairPlay Streaming for iOS with Media Services v3 [!INCLUDE [media services api v3 logo](./includes/v3-hr.md)]
Before you implement offline DRM for FairPlay on an iOS 10+ device:
You will need to modify the code in [Encrypt with DRM using .NET](https://github.com/Azure-Samples/media-services-v3-dotnet-tutorials/tree/main/AMSV3Tutorials/EncryptWithDRM) to add FairPlay configurations.
+## [.NET](#tab/net/)
+ ## Configure content protection in Azure Media Services In the [GetOrCreateContentKeyPolicyAsync](https://github.com/Azure-Samples/media-services-v3-dotnet-tutorials/blob/main/AMSV3Tutorials/EncryptWithDRM/Program.cs#L192) method, do the following:
Three test samples in Media Services cover the following three scenarios:
You can find these samples at [this demo site](https://aka.ms/poc#22), with the corresponding application certificate hosted in an Azure web app. With either the version 3 or version 4 sample of the FPS Server SDK, if a master playlist contains alternate audio, during offline mode it plays audio only. Therefore, you need to strip the alternate audio. In other words, the second and third samples listed previously work in online and offline mode. The sample listed first plays audio only during offline mode, while online streaming works properly. ++ ## Offline Fairplay questions See [offline fairplay questions in the FAQ](frequently-asked-questions.yml).
media-services Drm Offline Playready Streaming For Windows 10 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/drm-offline-playready-streaming-for-windows-10.md
Title: Configure offline PlayReady streaming
description: This article shows how to configure your Azure Media Services v3 account for streaming PlayReady for Windows 10 offline. keywords: DASH, DRM, Widevine Offline Mode, ExoPlayer, Android -+ -- Previously updated : 08/31/2020--+ Last updated : 03/09/2022+ # Offline PlayReady Streaming for Windows 10 with Media Services v3
Below are two sets of test assets, the first one using PlayReady license deliver
For playback testing, we used a Universal Windows Application on Windows 10. In [Windows 10 Universal samples](https://github.com/Microsoft/Windows-universal-samples), there is a basic player sample called [Adaptive Streaming Sample](https://github.com/Microsoft/Windows-universal-samples/tree/master/Samples/AdaptiveStreaming). All we have to do is add code to pick the downloaded video and use it as the source, instead of the adaptive streaming source. The changes are in the button click event handler:
+## [.NET](#tab/net/)
+ ```csharp private async void LoadUri_Click(object sender, RoutedEventArgs e) {
In summary, we have achieved offline mode on Azure Media
* Content can be hosted in Azure Media Services or Azure Storage for progressive download; * PlayReady license delivery can be from Azure Media Services or elsewhere; * The prepared smooth streaming content can still be used for online streaming via DASH or smooth with PlayReady as the DRM.++
media-services Drm Offline Widevine For Android https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/drm-offline-widevine-for-android.md
-- Previously updated : 05/25/2021+ Last updated : 03/09/2022
Before implementing offline DRM for Widevine on Android devices, you should firs
- [ExoPlayer Developer Guide](https://google.github.io/ExoPlayer/guide.html) - [ExoPlayer Developer Blog](https://medium.com/google-exoplayer)
+## [.NET](#tab/net/)
+ ## Configure content protection in Azure Media Services In the [GetOrCreateContentKeyPolicyAsync](https://github.com/Azure-Samples/media-services-v3-dotnet-tutorials/blob/main/AMSV3Tutorials/EncryptWithDRM/Program.cs#L192) method, the following necessary steps are present:
The above open-source PWA app is authored in Node.js. If you want to host your o
- The certificate must be issued by a trusted CA; a self-signed development certificate does not work - The certificate must have a CN matching the DNS name of the web server or gateway ++ ## More information For more information, see [Content Protection in the FAQ](frequently-asked-questions.yml).
media-services Drm Remove Option Content Key Policy How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/drm-remove-option-content-key-policy-how-to.md
+
+ Title: Remove an option from a content key policy
+description: This article shows how to remove an option from a content key policy.
+++++ Last updated : 03/10/2022+++
+# Remove an option from a content key policy
++
+## Methods
+
+Use the following methods to remove an option from a content key policy.
+
+## [CLI](#tab/cli/)
+++
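As a sketch of the empty CLI tab above: removing an option requires the option's identifier, which you can read from `az ams content-key-policy show`. All names and the option ID below are hypothetical, and a signed-in Azure CLI session is assumed (the `echo` prefix only prints the command):

```shell
# Hypothetical names and option ID for illustration only.
resourceGroup="amsResourceGroup"
accountName="amsaccount"
policyName="myContentKeyPolicy"
optionId="00000000-0000-0000-0000-000000000000"

# Remove one option from the policy (drop 'echo' to execute):
echo az ams content-key-policy option remove \
  --resource-group "$resourceGroup" \
  --account-name "$accountName" \
  --name "$policyName" \
  --policy-option-id "$optionId"
```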
media-services Drm Show Content Key Policy How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/drm-show-content-key-policy-how-to.md
+
+ Title: Show an existing content key policy
+description: This article shows how to show an existing content key policy.
+++++ Last updated : 03/10/2022+++
+# Show an existing content key policy
++
+## Methods
+
+Use the following methods to show an existing content key policy.
+
+## [CLI](#tab/cli/)
++
+## [REST](#tab/rest/)
+++
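A minimal sketch for the empty CLI tab above (hypothetical names; a signed-in Azure CLI session is assumed for real use, and the `echo` prefix prints the command instead of running it). Note that key values are redacted by default; the CLI's `--with-secrets` flag includes them:

```shell
# Hypothetical names for illustration only.
resourceGroup="amsResourceGroup"
accountName="amsaccount"
policyName="myContentKeyPolicy"

# Show the policy; add --with-secrets to include key values (drop 'echo' to execute):
echo az ams content-key-policy show \
  --resource-group "$resourceGroup" \
  --account-name "$accountName" \
  --name "$policyName"
```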
media-services Drm Update Content Key Policy How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/drm-update-content-key-policy-how-to.md
+
+ Title: Update an existing content key policy
+description: This article shows how to update an existing content key policy.
+++++ Last updated : 03/10/2022+++
+# Update an existing content key policy
++
+## Methods
+
+Use the following methods to update an existing content key policy.
+
+## [CLI](#tab/cli/)
++
+## [REST](#tab/rest/)
+++
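As a sketch of the empty CLI tab above, updating a policy's top-level properties such as its description (hypothetical names; a signed-in Azure CLI session is assumed, and the `echo` prefix prints the command instead of running it):

```shell
# Hypothetical names for illustration only.
resourceGroup="amsResourceGroup"
accountName="amsaccount"
policyName="myContentKeyPolicy"

# Update the policy description (drop 'echo' to execute):
echo az ams content-key-policy update \
  --resource-group "$resourceGroup" \
  --account-name "$accountName" \
  --name "$policyName" \
  --description "Updated policy description"
```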
media-services Drm Update Option Content Key Policy How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/drm-update-option-content-key-policy-how-to.md
+
+ Title: Update an option in a content key policy
+description: This article shows how to update an option in a content key policy.
+++++ Last updated : 03/10/2022+++
+# Update an option in a content key policy
++
+## Methods
+
+Use the following methods to update an option in a content key policy.
+
+## [CLI](#tab/cli/)
+++
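A sketch for the empty CLI tab above. The option ID can be read from `az ams content-key-policy show`; the names, the option ID, and the `--policy-option-name` parameter shown here are assumptions for illustration — check `az ams content-key-policy option update --help` for the exact parameters. A signed-in Azure CLI session is assumed, and the `echo` prefix prints the command instead of running it:

```shell
# Hypothetical names, option ID, and parameters for illustration only.
resourceGroup="amsResourceGroup"
accountName="amsaccount"
policyName="myContentKeyPolicy"
optionId="00000000-0000-0000-0000-000000000000"

# Rename an option within the policy (drop 'echo' to execute):
echo az ams content-key-policy option update \
  --resource-group "$resourceGroup" \
  --account-name "$accountName" \
  --name "$policyName" \
  --policy-option-id "$optionId" \
  --policy-option-name "RenamedOption"
```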
media-services Encode Concept Preset Overrides https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/encode-concept-preset-overrides.md
When encoding or using analytics with Media Services you can define custom prese
Preset overrides allow you to pass in a customized preset that overrides the settings supplied to a transform object after it was first created. This property is available on the [job output asset](/dotnet/api/microsoft.azure.management.media.models.joboutputasset) when submitting a new job to a transform.
-This can be useful for situations where you need to override some properties of your custom defined transforms, or a property on a built-in preset. For example, consider the scenario where you have created a custom transform that uses the [audio analyzer built-in preset](/rest/api/media/transforms/create-or-update#audioanalyzerpreset), but you initially set up that preset to use the audio language setting of "en-us" for English. This would result in a transform where each jobs submitted would be sent to the speech-to-text transcription engine as US English only. Every job submitted to that transform would be locked to the "en-us" language setting. You could work around this scenario by having a transform defined for every language, but that would be much more difficult to manage and you could hit transform quota limitations in your account.
+This can be useful for situations where you need to override some properties of your custom defined transforms, or a property on a built-in preset. For example, consider the scenario where you have created a custom transform that uses the [audio analyzer built-in preset](/rest/api/media/transforms/create-or-update#audioanalyzerpreset), but you initially set up that preset to use the audio language setting of "en-us" for English. This would result in a transform where each job submitted would be sent to the speech-to-text transcription engine as US English only. Every job submitted to that transform would be locked to the "en-us" language setting. You could work around this scenario by having a transform defined for every language, but that would be much more difficult to manage and you could hit transform quota limitations in your account.
To best solve this scenario, you use a preset override on the job output asset before submitting the job to the transform. You can then define a single "Audio transcription" transform and pass in the required language settings per job. The preset override provides a way to pass in a new custom preset definition with each job submitted to the transform. This property is available on the [job output](/dotnet/api/microsoft.azure.management.media.models.joboutput) entity in all SDK versions based off the 2021-06-01 version of the API.
For reference, see the [presetOverride](https://github.com/Azure/azure-rest-api-
## Example of preset override in .NET
-A complete example using the .NET SDK for Media Services showing how to use preset override with a basic audio analyzer transform is available in github.
+A complete example using the .NET SDK for Media Services showing how to use preset override with a basic audio analyzer transform is available in GitHub.
See the [Analyze a media file with a audio analyzer preset](https://github.com/Azure-Samples/media-services-v3-dotnet/tree/main/AudioAnalytics/AudioAnalyzer) sample for details on how to use the preset override property of the job output. ## Sample code of preset override in .NET
Check out the [Azure Media Services community](media-services-community.md) arti
* [Upload, encode, and stream using Media Services](stream-files-tutorial-with-api.md). * [Encode from an HTTPS URL using built-in presets](job-input-from-http-how-to.md). * [Encode a local file using built-in presets](job-input-from-local-file-how-to.md).
-* [Build a custom preset to target your specific scenario or device requirements](transform-custom-presets-how-to.md).
+* [Build a custom preset to target your specific scenario or device requirements](transform-custom-transform-how-to.md).
media-services Encode Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/encode-concept.md
To encode with Media Services v3, you need to create a [Transform](/rest/api/med
When encoding with Media Services, you use presets to tell the encoder how the input media files should be processed. In Media Services v3, you use Standard Encoder to encode your files. For example, you can specify the video resolution and/or the number of audio channels you want in the encoded content.
-You can get started quickly with one of the recommended built-in presets based on industry best practices or you can choose to build a custom preset to target your specific scenario or device requirements. For more information, see [Encode with a custom Transform](transform-custom-presets-how-to.md).
+You can get started quickly with one of the recommended built-in presets based on industry best practices or you can choose to build a custom preset to target your specific scenario or device requirements. For more information, see [Encode with a custom Transform](transform-custom-transform-how-to.md).
Starting in January 2019, when encoding with the Standard Encoder to produce MP4 file(s), a new .mpi file is generated and added to the output Asset. This MPI file is intended to improve performance for [dynamic packaging](encode-dynamic-packaging-concept.md) and streaming scenarios.
You can specify to create a [Job](/rest/api/media/jobs/create) with a single cli
See examples:
-* [Subclip a video with .NET](transform-subclip-video-dotnet-how-to.md)
-* [Subclip a video with REST](transform-subclip-video-rest-how-to.md)
+* [Subclip a video with .NET](transform-subclip-video-how-to.md)
## Built-in presets
Media Services fully supports customizing all values in presets to meet your spe
#### Examples -- [Customize presets with .NET](transform-custom-presets-how-to.md)-- [Customize presets with CLI](transform-custom-preset-cli-how-to.md)-- [Customize presets with REST](transform-custom-preset-rest-how-to.md)-
+- [Customize presets with .NET](transform-custom-transform-how-to.md)
## Preset schema
In Media Services v3, presets are strongly typed entities in the API itself. You
## Scaling encoding in v3
-To scale media processing, see [Scale with CLI](media-reserved-units-cli-how-to.md).
+To scale media processing, see [Scale with CLI](media-reserved-units-how-to.md).
For accounts created with the **2020-05-01** or later version of the API or through the Azure portal, scaling and media reserved units are no longer required. Scaling will be automatic and handled by the service internally. ## Billing
For accounts created with the **2020-05-01** or later version of the API or thro
Media Services does not bill for canceled or errored jobs. For example, a job that has reached 50% progress and is canceled is not billed at 50% of the job minutes. You are only charged for finished jobs. For more information, see [pricing](https://azure.microsoft.com/pricing/details/media-services/).-
-## Ask questions, give feedback, get updates
-
-Check out the [Azure Media Services community](media-services-community.md) article to see different ways you can ask questions, give feedback, and get updates about Media Services.
-
-## Next steps
-
-* [Upload, encode, and stream using Media Services](stream-files-tutorial-with-api.md).
-* [Encode from an HTTPS URL using built-in presets](job-input-from-http-how-to.md).
-* [Encode a local file using built-in presets](job-input-from-local-file-how-to.md).
-* [Build a custom preset to target your specific scenario or device requirements](transform-custom-presets-how-to.md).
media-services Encode Dynamic Packaging Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/encode-dynamic-packaging-concept.md
The advantages of just-in-time packaging are the following:
* You can store all your files in standard MP4 file format * You do not need to store multiple copies of static packaged HLS and DASH formats in blob storage, reducing the amount of video content stored and lowering your overall costs of storage * You can instantly take advantage of new protocol updates and changes to the specifications as they evolve over time without need of re-packaging the static content in your catalog
-* You can deliver content with our without encryption and DRM using the same MP4 files in storage
+* You can deliver content with or without encryption and DRM using the same MP4 files in storage
* You can dynamically filter or alter the manifests with simple asset-level or global filters to remove specific tracks, resolutions, languages, or provide shorter highlight clips from the same MP4 files without re-encoding or re-rendering the content. ## To prepare your source files for delivery
The following articles show examples of [how to encode a video with Media Servic
* [Use content aware encoding](encode-content-aware-concept.md). * [Encode from an HTTPS URL by using built-in presets](job-input-from-http-how-to.md). * [Encode a local file by using built-in presets](job-input-from-local-file-how-to.md).
-* [Build a custom preset to target your specific scenario or device requirements](transform-custom-presets-how-to.md).
+* [Build a custom preset to target your specific scenario or device requirements](transform-custom-transform-how-to.md).
* [Code samples for encoding with Standard Encoder using .NET](https://github.com/Azure-Samples/media-services-v3-dotnet/tree/main/VideoEncoding) See the list of supported Standard Encoder input [formats and codecs](encode-media-encoder-standard-formats-reference.md).
media-services Encode Media Encoder Standard Formats Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/encode-media-encoder-standard-formats-reference.md
[!INCLUDE [media services api v3 logo](./includes/v3-hr.md)]
-This article contains a list of the most common import and export file formats that you can use with [StandardEncoderPreset](/rest/api/medi).
+This article contains a list of the most common import and export file formats that you can use with [StandardEncoderPreset](/rest/api/medi).
## Input container/file formats
The following table lists the codecs and file formats that are supported for exp
| | | | | MP4 <br/><br/>(including multi-bitrate MP4 containers) |H.264 (High, Main, and Baseline Profiles), HEVC (H.265) 8-bit |AAC-LC, HE-AAC v1, HE-AAC v2 | | MPEG2-TS |H.264 (High, Main, and Baseline Profiles) |AAC-LC, HE-AAC v1, HE-AAC v2 |-
-## Next steps
-
-[Create a transform with a custom preset](transform-custom-presets-how-to.md)
media-services Job Cancel How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/job-cancel-how-to.md
+
+ Title: Cancel a job
+description: This article shows how to cancel a job.
+++++ Last updated : 03/10/2022+++
+# Cancel a job
++
+## Methods
+
+Use the following methods to cancel a job.
+
+## [CLI](#tab/cli/)
++
+## [REST](#tab/rest/)
+++
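A minimal sketch of the empty CLI tab above. Jobs live under a transform, so both names are required; all names are hypothetical and a signed-in Azure CLI session is assumed (the `echo` prefix prints the command instead of running it):

```shell
# Hypothetical names for illustration only.
resourceGroup="amsResourceGroup"
accountName="amsaccount"
transformName="myTransform"
jobName="job001"

# Cancel a queued or running job (drop 'echo' to execute):
echo az ams job cancel \
  --resource-group "$resourceGroup" \
  --account-name "$accountName" \
  --transform-name "$transformName" \
  --name "$jobName"
```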
media-services Job Create How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/job-create-how-to.md
Last updated 03/01/2022
-# CLI example: Create and submit a job
+# Create a job
[!INCLUDE [media services api v3 logo](./includes/v3-hr.md)]
In Media Services v3, when you submit Jobs to process your videos, you have to t
[Create a Media Services account](./account-create-how-to.md).
-## [Portal](#tab/rest/)
+## [Portal](#tab/portal/)
## [CLI](#tab/cli/)
-## Example script
-When you run `az ams job start`, you can set a label on the job's output. The label can later be used to identify what this output asset is for.
+## [REST](#tab/rest/)
-- If you assign a value to the label, set '--output-assets' to "assetname=label"-- If you do not assign a value to the label, set '--output-assets' to "assetname=".
- Notice that you add "=" to the `output-assets`.
-
-```azurecli
-az ams job start \
- --name testJob001 \
- --transform-name testEncodingTransform \
- --base-uri 'https://nimbuscdn-nimbuspm.streaming.mediaservices.windows.net/2b533311-b215-4409-80af-529c3e853622/' \
- --files 'Ignite-short.mp4' \
- --output-assets testOutputAssetName= \
- -a amsaccount \
- -g amsResourceGroup
-```
-
-You get a response similar to this:
-
-```
-{
- "correlationData": {},
- "created": "2019-02-15T05:08:26.266104+00:00",
- "description": null,
- "id": "/subscriptions/<id>/resourceGroups/amsResourceGroup/providers/Microsoft.Media/mediaservices/amsaccount/transforms/testEncodingTransform/jobs/testJob001",
- "input": {
- "baseUri": "https://nimbuscdn-nimbuspm.streaming.mediaservices.windows.net/2b533311-b215-4409-80af-529c3e853622/",
- "files": [
- "Ignite-short.mp4"
- ],
- "label": null,
- "odatatype": "#Microsoft.Media.JobInputHttp"
- },
- "lastModified": "2019-02-15T05:08:26.266104+00:00",
- "name": "testJob001",
- "outputs": [
- {
- "assetName": "testOutputAssetName",
- "error": null,
- "label": "",
- "odatatype": "#Microsoft.Media.JobOutputAsset",
- "progress": 0,
- "state": "Queued"
- }
- ],
- "priority": "Normal",
- "resourceGroup": "amsResourceGroup",
- "state": "Queued",
- "type": "Microsoft.Media/mediaservices/transforms/jobs"
-}
-```
media-services Job Delete How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/job-delete-how-to.md
+
+ Title: Delete a job
+description: This article shows how to delete a job.
+++++ Last updated : 03/10/2022+++
+# Delete a job
++
+## Methods
+
+Use the following methods to delete a job.
+
+## [CLI](#tab/cli/)
++
+## [REST](#tab/rest/)
+++
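A minimal sketch for the empty CLI tab above (hypothetical names; a signed-in Azure CLI session is assumed for real use, and the `echo` prefix prints the command instead of running it). Note that deleting a job does not delete its output assets:

```shell
# Hypothetical names for illustration only.
resourceGroup="amsResourceGroup"
accountName="amsaccount"
transformName="myTransform"
jobName="job001"

# Delete the job record; output assets are not removed (drop 'echo' to execute):
echo az ams job delete \
  --resource-group "$resourceGroup" \
  --account-name "$accountName" \
  --transform-name "$transformName" \
  --name "$jobName"
```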
media-services Job Download Results How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/job-download-results-how-to.md
Title: Download the results of a job - Azure Media Services description: This article demonstrates how to download the results of a job.- ---+ Previously updated : 08/31/2020 Last updated : 03/09/2022 - # Download the results of a job
In Azure Media Services, when processing your videos (for example, encoding or a
This article demonstrates how to download the results using Java and .NET SDKs.
-## Java
-
-```java
-/**
- * Use Media Service and Storage APIs to download the output files to a local folder
- * @param manager The entry point of Azure Media resource management
- * @param resourceGroup The name of the resource group within the Azure subscription
- * @param accountName The Media Services account name
- * @param assetName The asset name
- * @param outputFolder The output folder for downloaded files.
- * @throws StorageException
- * @throws URISyntaxException
- * @throws IOException
- */
-private static void downloadResults(MediaManager manager, String resourceGroup, String accountName,
- String assetName, File outputFolder) throws StorageException, URISyntaxException, IOException {
- ListContainerSasInput parameters = new ListContainerSasInput()
- .withPermissions(AssetContainerPermission.READ)
- .withExpiryTime(DateTime.now().plusHours(1));
- AssetContainerSas assetContainerSas = manager.assets()
- .listContainerSasAsync(resourceGroup, accountName, assetName, parameters).toBlocking().first();
-
- String strSas = assetContainerSas.assetContainerSasUrls().get(0);
- CloudBlobContainer container = new CloudBlobContainer(new URI(strSas));
-
- File directory = new File(outputFolder, assetName);
- directory.mkdir();
-
- ArrayList<ListBlobItem> blobs = container.listBlobsSegmented(null, true, EnumSet.noneOf(BlobListingDetails.class), 200, null, null, null).getResults();
-
- for (ListBlobItem blobItem: blobs) {
- if (blobItem instanceof CloudBlockBlob) {
- CloudBlockBlob blob = (CloudBlockBlob)blobItem;
- File downloadTo = new File(directory, blob.getName());
-
- blob.downloadToFile(downloadTo.getPath());
- }
- }
-
- System.out.println("Download complete.");
-}
-```
-
-See the full code sample: [EncodingWithMESPredefinedPreset](https://github.com/Azure-Samples/media-services-v3-java/blob/master/VideoEncoding/EncodingWithMESPredefinedPreset/src/main/java/sample/EncodingWithMESPredefinedPreset.java)
+## Methods
-## .NET
+## [.NET](#tab/net/)
```csharp /// <summary>
private async static Task DownloadResults(IAzureMediaServicesClient client, stri
} ```
-See the full code sample: [EncodingWithMESPredefinedPreset](https://github.com/Azure-Samples/media-services-v3-dotnet/blob/main/VideoEncoding/Encoding_PredefinedPreset/Program.cs)
+## Code sample
-## Next steps
+See the full code sample: [EncodingWithMESPredefinedPreset](https://github.com/Azure-Samples/media-services-v3-dotnet/blob/main/VideoEncoding/Encoding_PredefinedPreset/Program.cs)
-[Create a job input from an HTTPS URL](job-input-from-http-how-to.md).
+
media-services Job Input From Http How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/job-input-from-http-how-to.md
In Media Services v3, when you submit Jobs to process your videos, you have to t
> [!TIP] > Before you start developing, review [Developing with Media Services v3 APIs](media-services-apis-overview.md) (includes information on accessing APIs, naming conventions, etc.)
+## Methods
+
+## [.NET](#tab/net/)
+ ## .NET sample The following code shows how to create a job with an HTTPS URL input. [!code-csharp[Main](../../../media-services-v3-dotnet-quickstarts/AMSV3Quickstarts/EncodeAndStreamFiles/Program.cs#SubmitJob)]
-## Job error codes
-
-See [Error codes](/rest/api/media/jobs/get#joberrorcode).
-
-## Next steps
-
-[Create a job input from a local file](job-input-from-local-file-how-to.md).
+
media-services Job Input From Local File How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/job-input-from-local-file-how-to.md
Title: Create a job input from a local file description: This article demonstrates how to create an Azure Media Services job input from a local file.- --+ Previously updated : 05/25/2021 Last updated : 03/09/2022
In Media Services v3, when you submit Jobs to process your videos, you have to t
* [Create a Media Services account](./account-create-how-to.md).
+## [.NET](#tab/net/)
+ ## .NET sample The following code shows how to create an input asset and use it as the input for the job. The CreateInputAsset function performs the following actions:
The following code snippet submits an encoding job:
See [Error codes](/rest/api/media/jobs/get#joberrorcode).
-## Next steps
-
-[Create a job input from an HTTPS URL](job-input-from-http-how-to.md).
+
media-services Job List How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/job-list-how-to.md
+
+ Title: List jobs
+description: This article shows how to list jobs.
+++++ Last updated : 03/10/2022+++
+# List jobs
++
+## Methods
+
+Use the following methods to list jobs.
+
+## [CLI](#tab/cli/)
++
+## [REST](#tab/rest/)
+++
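A minimal sketch of the empty CLI tab above. Jobs are listed per transform; the names are hypothetical and a signed-in Azure CLI session is assumed (the `echo` prefix prints the command instead of running it):

```shell
# Hypothetical names for illustration only.
resourceGroup="amsResourceGroup"
accountName="amsaccount"
transformName="myTransform"

# List all jobs submitted to the transform (drop 'echo' to execute):
echo az ams job list \
  --resource-group "$resourceGroup" \
  --account-name "$accountName" \
  --transform-name "$transformName" \
  --output table
```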
media-services Job Multiple Transform Outputs How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/job-multiple-transform-outputs-how-to.md
This topic shows how to create a Transform with two Transform Outputs. The first
> [!TIP] > Before you start developing, review [Developing with Media Services v3 APIs](media-services-apis-overview.md) (includes information on accessing APIs, naming conventions, etc.)
+## [.NET](#tab/net/)
+ ## Create a transform The following code shows how to create a transform that produces two outputs.
private static async Task<Job> SubmitJobAsync(IAzureMediaServicesClient client,
return job; } ```
-## Job error codes
-
-See [Error codes](/rest/api/media/jobs/get#joberrorcode).
-## Next steps
-
-[Azure Media Services v3 samples using .NET](https://github.com/Azure-Samples/media-services-v3-dotnet/tree/master/)
+
media-services Job Show How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/job-show-how-to.md
+
+ Title: Show or get a job
+description: This article shows how to show or get a job.
+++++ Last updated : 03/10/2022+++
+# Show the details of a job
++
+## Methods
+
+Use the following methods to show or get a job.
+
+## [CLI](#tab/cli/)
++
+## [REST](#tab/rest/)
+++
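A minimal sketch for the empty CLI tab above (hypothetical names; a signed-in Azure CLI session is assumed for real use — the `echo` prefix prints the command instead of running it). The returned JSON includes the job's state and per-output progress:

```shell
# Hypothetical names for illustration only.
resourceGroup="amsResourceGroup"
accountName="amsaccount"
transformName="myTransform"
jobName="job001"

# Show the job's details, including state and output progress (drop 'echo' to execute):
echo az ams job show \
  --resource-group "$resourceGroup" \
  --account-name "$accountName" \
  --transform-name "$transformName" \
  --name "$jobName"
```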
media-services Job Update How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/job-update-how-to.md
+
+ Title: Update a job
+description: This article shows how to update a job.
+++++ Last updated : 03/10/2022+++
+# Update a job
++
+## Methods
+
+Use the following methods to update a job.
+
+## [CLI](#tab/cli/)
++
+## [REST](#tab/rest/)
+++
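A sketch for the empty CLI tab above. Only a few job properties can change after submission (for example, priority takes effect only while the job is still queued); the names and the `--priority` parameter shown here are assumptions for illustration — check `az ams job update --help` for the exact parameters. A signed-in Azure CLI session is assumed, and the `echo` prefix prints the command instead of running it:

```shell
# Hypothetical names and parameters for illustration only.
resourceGroup="amsResourceGroup"
accountName="amsaccount"
transformName="myTransform"
jobName="job001"

# Raise the priority of a still-queued job (drop 'echo' to execute):
echo az ams job update \
  --resource-group "$resourceGroup" \
  --account-name "$accountName" \
  --transform-name "$transformName" \
  --name "$jobName" \
  --priority High
```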
media-services Live Event Cloud Dvr Time How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/live-event-cloud-dvr-time-how-to.md
For more information, see:
## Next steps
-* [Subclip your videos](transform-subclip-video-rest-how-to.md).
+* [Subclip your videos](transform-subclip-video-how-to.md).
* [Define filters for your assets](filters-dynamic-manifest-rest-howto.md).
media-services Media Reserved Units How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/media-reserved-units-how-to.md
+
+ Title: Scale Media Reserved Units (MRUs)
+description: This topic shows how to scale media processing with Azure Media Services.+++++ Last updated : 08/25/2021++
+++++ Last updated : 08/25/2021++
+# How to scale media reserved units (legacy)
++
+This article shows you how to scale Media Reserved Units (MRUs) for faster encoding.
+
+> [!WARNING]
+> This command will no longer work for Media Services accounts that are created with the 2020-05-01 (or later) version of the API. For these accounts media reserved units are no longer needed as the system will automatically scale up and down based on load. If you don't see the option to manage MRUs in the Azure portal, you're using an account that was created with the 2020-05-01 API or later.
+> The purpose of this article is to document the legacy process of using MRUs.
+
+## Prerequisites
+
+[Create a Media Services account](./account-create-how-to.md).
+
+Understand [Media Reserved Units](concept-media-reserved-units.md).
+
+## [CLI](#tab/cli/)
+
+## Scale Media Reserved Units with CLI
+
+Run the `mru` command.
+
+The following [az ams account mru](/cli/azure/ams/account/mru) command sets Media Reserved Units on the "amsaccount" account using the **count** and **type** parameters.
+
+```azurecli
+az ams account mru set -n amsaccount -g amsResourceGroup --count 10 --type S3
+```
+
+## Billing
+
+ While Media Reserved Units previously incurred charges, as of April 17, 2021 there are no charges for accounts that have Media Reserved Units configured.
+
+## See also
+
+* [Migrate from Media Services v2 to v3](migrate-v-2-v-3-migration-introduction.md)
++
media-services Media Services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/media-services-overview.md
Title: Azure Media Services v3 overview : Azure Media Services description: A high-level overview of Azure Media Services v3 with links to quickstarts, tutorials, and code samples.-
-tags: ''
-keywords: azure media services, stream, broadcast, live, offline
- - Previously updated : 3/10/2021 Last updated : 03/09/2022 -+ #Customer intent: As a developer or a content provider, I want to encode, stream (on demand or live), analyze my media content so that my customers can: view the content on a wide variety of browsers and devices, gain valuable insights from recorded content.
How-to guides contain code samples that demonstrate how to complete a task. In t
* [Encode with HTTPS as job input - .NET](job-input-from-http-how-to.md) * [Monitor events - Portal](monitoring/monitor-events-portal-how-to.md) * [Encrypt dynamically with multi-DRM - .NET](drm-protect-with-drm-tutorial.md)
-* [How to encode with a custom transform - CLI](transform-custom-preset-cli-how-to.md)
+* [How to encode with a custom transform - CLI](transform-custom-transform-how-to.md)
## Ask questions, give feedback, get updates
media-services Migrate V 2 V 3 Migration Scenario Based Content Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/migrate-v-2-v-3-migration-scenario-based-content-protection.md
You should first unpublish (remove all Streaming Locators) on the Asset via the
### How to guides -- [Get a signing key from the existing policy](drm-get-content-key-policy-dotnet-how-to.md)
+- [Get a signing key from the existing policy](drm-get-content-key-policy-how-to.md)
- [Offline FairPlay Streaming for iOS with Media Services v3](drm-offline-fairplay-for-ios-concept.md) - [Offline Widevine streaming for Android with Media Services v3](drm-offline-widevine-for-android.md) - [Offline PlayReady Streaming for Windows 10 with Media Services v3](drm-offline-playready-streaming-for-windows-10.md)
media-services Migrate V 2 V 3 Migration Scenario Based Encoding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/migrate-v-2-v-3-migration-scenario-based-encoding.md
For customers using the Indexer v1 processor in the v2 API, you need to create a
- [Create a job input from an HTTPS URL](job-input-from-http-how-to.md) - [Create a job input from a local file](job-input-from-local-file-how-to.md) - [Create a basic audio transform](transform-create-basic-audio-how-to.md)-- With .NET
- - [How to encode with a custom transform - .NET](transform-custom-presets-how-to.md)
- - [How to create an overlay with Media Encoder Standard](transform-create-overlay-how-to.md)
- - [How to generate thumbnails using Encoder Standard with .NET](transform-generate-thumbnails-dotnet-how-to.md)
-- With Azure CLI
- - [How to encode with a custom transform - Azure CLI](transform-custom-preset-cli-how-to.md)
-- With REST
- - [How to encode with a custom transform - REST](transform-custom-preset-rest-how-to.md)
- - [How to generate thumbnails using Encoder Standard with REST](transform-generate-thumbnails-rest-how-to.md)
-- [Subclip a video when encoding with Media Services - .NET](transform-subclip-video-dotnet-how-to.md)-- [Subclip a video when encoding with Media Services - REST](transform-subclip-video-rest-how-to.md)
+- [How to encode with a custom transform](transform-custom-transform-how-to.md)
+- [How to create an overlay with Media Encoder Standard](transform-create-overlay-how-to.md)
+- [How to generate thumbnails using Encoder Standard](transform-generate-thumbnails-dotnet-how-to.md)
+- [Subclip a video when encoding with Media Services - REST](transform-subclip-video-how-to.md)
## Samples
media-services Migrate V 2 V 3 Migration Scenario Based Media Reserved Units https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/migrate-v-2-v-3-migration-scenario-based-media-reserved-units.md
Please migrate your MRUs based on the following scenarios:
* If you are an existing V2 customer, you need to create a new V3 account to support your existing application prior to the completion of migration. * Indexer V1 or other media processors that are not fully deprecated yet may need to be enabled again.
-For more information about MRUs, see [Media Reserved Units](concept-media-reserved-units.md) and [How to scale media reserved units](media-reserved-units-cli-how-to.md).
+For more information about MRUs, see [Media Reserved Units](concept-media-reserved-units.md) and [How to scale media reserved units](media-reserved-units-how-to.md).
## MRU concepts, tutorials and how to guides
For more information about MRUs, see [Media Reserved Units](concept-media-reserv
### How to guides
-[How to scale media reserved units](media-reserved-units-cli-how-to.md)
+[How to scale media reserved units](media-reserved-units-how-to.md)
## Samples
media-services Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/release-notes.md
Azure Media Services is now available in the Norway East region in the Azure por
### Basic Audio Analysis
-The Audio Analysis preset now includes a Basic mode pricing tier. The new Basic Audio Analyzer mode provides a low-cost option to extract speech transcription, and format output captions and subtitles. This mode performs speech-to-text transcription and generation of a VTT subtitle/caption file. The output of this mode includes an Insights JSON file including only the keywords, transcription,and timing information. Automatic language detection and speaker diarization are not included in this mode. See the list of [supported languages.](analyze-video-audio-files-concept.md#built-in-presets)
+The Audio Analysis preset now includes a Basic mode pricing tier. The new Basic Audio Analyzer mode provides a low-cost option to extract speech transcription, and format output captions and subtitles. This mode performs speech-to-text transcription and generation of a VTT subtitle/caption file. The output of this mode includes an Insights JSON file including only the keywords, transcription, and timing information. Automatic language detection and speaker diarization are not included in this mode. See the list of [supported languages.](analyze-video-audio-files-concept.md#built-in-presets)
Customers using Indexer v1 and Indexer v2 should migrate to the Basic Audio Analysis preset.
This functionality works with any [Transform](/rest/api/media/transforms) that i
See examples:
-* [Subclip a video with .NET](transform-subclip-video-dotnet-how-to.md)
-* [Subclip a video with REST](transform-subclip-video-rest-how-to.md)
+* [Subclip a video with REST](transform-subclip-video-how-to.md)
## May 2019
The CLI 2.0 module is now available for [Azure Media Services v3 GA](/cli/azure/
- [az ams live-output](/cli/azure/ams/live-output) - [az ams streaming-endpoint](/cli/azure/ams/streaming-endpoint) - [az ams streaming-locator](/cli/azure/ams/streaming-locator)-- [az ams account mru](/cli/azure/ams/account/mru) - enables you to manage Media Reserved Units. For more information, see [Scale Media Reserved Units](media-reserved-units-cli-how-to.md).
+- [az ams account mru](/cli/azure/ams/account/mru) - enables you to manage Media Reserved Units. For more information, see [Scale Media Reserved Units](media-reserved-units-how-to.md).
### New features and breaking changes
Starting with this release, you can use Resource Manager templates to create Liv
The following improvements were introduced: - Ingest from HTTP(s) URLs or Azure Blob Storage SAS URLs.-- Specify you own container names for Assets.
+- Specify your own container names for Assets.
- Easier output support to create custom workflows with Azure Functions. #### New Transform object
media-services Security Rbac Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/security-rbac-concept.md
See the following articles for more information:
## Next steps - [Developing with Media Services v3 APIs](media-services-apis-overview.md)-- [Get content key policy using Media Services .NET](drm-get-content-key-policy-dotnet-how-to.md)
+- [Get content key policy using Media Services .NET](drm-get-content-key-policy-how-to.md)
media-services Signal Descriptive Audio Howto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/signal-descriptive-audio-howto.md
Title: Signal descriptive audio tracks with Media Services v3 description: Follow the steps of this tutorial to upload a file, encode the video, add descriptive audio tracks, and stream your content with Media Services v3. - - Previously updated : 08/31/2020 Last updated : 03/09/2022
This article shows how to encode a video, upload an audio-only MP4 file (AAC cod
- Review [Dynamic packaging](encode-dynamic-packaging-concept.md). - Review the [Upload, encode, and stream videos](stream-files-tutorial-with-api.md) tutorial.
-## Create an input asset and upload a local file into it
+## [.NET](#tab/net/)
+
+## Create an input asset and upload a local file into it
The **CreateInputAsset** function creates a new input [Asset](/rest/api/media/assets) and uploads the specified local video file into it. This **Asset** is used as the input to your encoding Job. In Media Services v3, the input to a **Job** can either be an **Asset**, or it can be content that you make available to your Media Services account via HTTPS URLs.
To test the stream, this article uses Azure Media Player.
Azure Media Player can be used for testing but should not be used in a production environment.
-## Next steps
-
-[Analyze videos](analyze-videos-tutorial.md)
+
media-services Stream Files Tutorial With Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/stream-files-tutorial-with-api.md
When encoding or processing content in Media Services, it's a common pattern to
When creating a new [Transform](/rest/api/medi).
-You can use a built-in EncoderNamedPreset or use custom presets. For more information, see [How to customize encoder presets](transform-custom-presets-how-to.md).
+You can use a built-in EncoderNamedPreset or use custom presets. For more information, see [How to customize encoder presets](transform-custom-transform-how-to.md).
When creating a [Transform](/rest/api/media/transforms), you should first check if one already exists using the **Get** method, as shown in the code that follows. In Media Services v3, **Get** methods on entities return **null** if the entity doesn't exist (a case-insensitive check on the name).
media-services Transform Create Basic Audio How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/transform-create-basic-audio-how-to.md
Title: Create a basic audio transform description: Create a basic audio transform using Media Services API. - - Previously updated : 11/18/2020 Last updated : 03/09/2022
Follow the steps in [Create a Media Services account](./account-create-how-to.md
## Methods
+## [REST](#tab/rest/)
+ ### Using the REST API [!INCLUDE [media-services-cli-instructions.md](./includes/task-create-basic-audio-rest.md)]
Follow the steps in [Create a Media Services account](./account-create-how-to.md
## Next steps [!INCLUDE [transforms next steps](./includes/transforms-next-steps.md)]++
media-services Transform Create Copy Video Audio How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/transform-create-copy-video-audio-how-to.md
Title: Create a CopyVideo CopyAudio transform description: Create a CopyVideo CopyAudio transform using Media Services API. - - Previously updated : 11/19/2020 Last updated : 03/09/2022
This article shows how to create a `CopyVideo/CopyAudio` transform.
-This transform allows you have input video / input audio streams copied from the input asset to the output asset without any changes. This can be of value with multi bitrate encoding output where the input video and/or audio would be part of the output. It simply writes the manifest and other files needed to stream content.
+This transform allows you to have input video/input audio streams copied from the input asset to the output asset without any changes. This can be of value with multi-bitrate encoding output where the input video and/or audio would be part of the output. It simply writes the manifest and other files needed to stream content.
## Prerequisites
Follow the steps in [Create a Media Services account](./account-create-how-to.md
## Methods
+## [REST](#tab/rest/)
+ ### Using the REST API [!INCLUDE [task-create-copy-video-audio-rest.md](./includes/task-create-copy-video-audio-rest.md)]
-## Next steps
-+
media-services Transform Create Copyallbitratenoninterleaved How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/transform-create-copyallbitratenoninterleaved-how-to.md
Title: Create a CopyAllBitrateNonInterleaved transform
-description: Create a CopyAllBitrateNonInterleaved transform using Media Services API.
-
+description: Create a CopyAllBitrateNonInterleaved transform.
- - Previously updated : 10/23/2020 Last updated : 03/09/2022
Follow the steps in [Create a Media Services account](./account-create-how-to.md
## Methods
+## [REST](#tab/rest/)
+ ### Using the REST API [!INCLUDE [task-create-copyallbitratenoninterleaved.md](./includes/task-create-copyallbitratenoninterleaved.md)]
-## Next steps
-+
media-services Transform Create Overlay How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/transform-create-overlay-how-to.md
Previously updated : 08/31/2020 Last updated : 03/09/2022 # How to create an image overlay
Last updated 08/31/2020
Media Services allows you to overlay an image, audio file, or another video on top of a video. The input must specify exactly one image file. You can specify an image file in JPG, PNG, GIF or BMP format, or an audio file (such as a WAV, MP3, WMA or M4A file), or a video file in a supported file format.
+## [.NET](#tab/net/)
## Prerequisites
Media Services allows you to overlay an image, audio file, or another video on t
If you aren't already familiar with the creation of Transforms, it is recommended that you complete the following activities: * Read [Encoding video and audio with Media Services](encode-concept.md)
-* Read [How to encode with a custom transform - .NET](transform-custom-presets-how-to.md). Follow the steps in that article to set up the .NET needed to work with transforms, then return here to try out an overlays preset sample.
+* Read [How to encode with a custom transform - .NET](transform-custom-transform-how-to.md). Follow the steps in that article to set up the .NET needed to work with transforms, then return here to try out an overlays preset sample.
* See the [Transforms reference document](/rest/api/media/transforms). Once you are familiar with Transforms, download the overlays sample.
The sample also publishes the content for streaming and will output the full HLS
* [Filters](/rest/api/media/transforms/create-or-update#filters) * [StandardEncoderPreset](/rest/api/media/transforms/create-or-update#standardencoderpreset) - [!INCLUDE [reference dotnet sdk references](./includes/reference-dotnet-sdk-references.md)]
-## Next steps
-+
media-services Transform Crop How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/transform-crop-how-to.md
+
+ Title: How to crop video files with Media Services
+description: Cropping is the process of selecting a rectangular window within the video frame, and encoding just the pixels within that window. This topic shows how to crop video files with Media Services.
+++++ Last updated : 03/09/2022+++
+# How to crop video files with Media Services
++
+You can use Media Services to crop an input video. Cropping is the process of selecting a rectangular window within the video frame, and encoding just the pixels within that window. The following diagram helps illustrate the process.
+
+## Pre-processing stage
+
+Cropping is a pre-processing stage, so the *cropping parameters* in the encoding preset apply to the *input* video. Encoding is a subsequent stage, and the width/height settings apply to the *pre-processed* video, not to the original video. When designing your preset, do the following:
+
+1. Select the crop parameters based on the original input video.
+1. Select your encode settings based on the cropped video.
+
+> [!WARNING]
+> If you do not match your encode settings to the cropped video, the output will not be as you expect.
+
+For example, your input video has a resolution of 1920x1080 pixels (16:9 aspect ratio), but has black bars (pillar boxes) at the left and right, so that only a 4:3 window of 1440x1080 pixels contains active video. You can crop the black bars, and encode the 1440x1080 area.
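The pillar-box arithmetic above can be sketched with a small helper. This is an illustrative calculation only (the helper name and signature are not part of Media Services); it assumes the active picture is full-height and horizontally centered:

```python
def crop_rectangle(in_w, in_h, aspect_w, aspect_h):
    """Return (left, top, width, height) of a centered, full-height
    active window with the given aspect ratio inside a pillarboxed frame."""
    crop_h = in_h
    crop_w = in_h * aspect_w // aspect_h  # width implied by the aspect ratio
    left = (in_w - crop_w) // 2           # black bars split evenly left/right
    return (left, 0, crop_w, crop_h)

# The 1920x1080 input with a 4:3 active window from the example:
print(crop_rectangle(1920, 1080, 4, 3))  # (240, 0, 1440, 1080)
```

The resulting width/height (1440x1080) are then what your encode settings should target, per the warning above.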
+
+## [.NET](#tab/net/)
+
+## Transform code
+
+The following code snippet illustrates how to write a transform in .NET to crop videos. The code assumes that you have a local file to work with.
+
+- Left is the left-most location of the crop.
+- Top is the top-most location of the crop.
+- Width is the final width of the crop.
+- Height is the final height of the crop.
+
+```csharp
+var preset = new StandardEncoderPreset
+{
+    // Crop a 1280x720 window whose top-left corner is at (200, 200) in the input.
+    Filters = new Filters
+    {
+        Crop = new Rectangle
+        {
+            Left = "200",
+            Top = "200",
+            Width = "1280",
+            Height = "720"
+        }
+    },
+    // Encode settings apply to the cropped (1280x720) video, not the original.
+    Codecs =
+    {
+        new AacAudio(),
+        new H264Video()
+        {
+            Layers =
+            {
+                new H264Layer
+                {
+                    Bitrate = 1000000,
+                    Width = "1280",
+                    Height = "720"
+                }
+            }
+        }
+    },
+    Formats =
+    {
+        new Mp4Format
+        {
+            FilenamePattern = "{Basename}_{Bitrate}{Extension}"
+        }
+    }
+};
+```
++
media-services Transform Custom Preset Cli How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/transform-custom-preset-cli-how-to.md
- Title: Encode a custom transform CLI
-description: This topic shows how to use Azure Media Services v3 to encode a custom transform using Azure CLI.
------- Previously updated : 08/31/2020---
-# How to encode with a custom transform - Azure CLI
--
-When encoding with Azure Media Services, you can get started quickly with one of the recommended built-in presets, based on industry best practices, as demonstrated in the [Streaming files](stream-files-cli-quickstart.md#create-a-transform-for-adaptive-bitrate-encoding) quickstart. You can also build a custom preset to target your specific scenario or device requirements.
-
-## Considerations
-
-When creating custom presets, the following considerations apply:
-
-* All values for height and width on AVC content must be a multiple of 4.
-* In Azure Media Services v3, all of the encoding bitrates are in bits per second. This is different from the presets with our v2 APIs, which used kilobits/second as the unit. For example, if the bitrate in v2 was specified as 128 (kilobits/second), in v3 it would be set to 128000 (bits/second).
-
-## Prerequisites
-
-[Create a Media Services account](./account-create-how-to.md).
-
-Make sure to remember the resource group name and the Media Services account name.
-
-## Define a custom preset
-
-The following example defines the request body of a new Transform. We define a set of outputs that we want to be generated when this Transform is used.
-
-In this example, we first add an AacAudio layer for the audio encoding and two H264Video layers for the video encoding. In the video layers, we assign labels so that they can be used in the output file names. Next, we want the output to also include thumbnails. In the example below we specify images in PNG format, generated at 50% of the resolution of the input video, and at three timestamps - {25%, 50%, 75} of the length of the input video. Lastly, we specify the format for the output files - one for video + audio, and another for the thumbnails. Since we have multiple H264Layers, we have to use macros that produce unique names per layer. We can either use a `{Label}` or `{Bitrate}` macro, the example shows the former.
-
-We are going to save this transform in a file. In this example, we name the file `customPreset.json`.
-
-```json
-{
- "@odata.type": "#Microsoft.Media.StandardEncoderPreset",
- "codecs": [
- {
- "@odata.type": "#Microsoft.Media.AacAudio",
- "channels": 2,
- "samplingRate": 48000,
- "bitrate": 128000,
- "profile": "AacLc"
- },
- {
- "@odata.type": "#Microsoft.Media.H264Video",
- "keyFrameInterval": "PT2S",
- "stretchMode": "AutoSize",
- "sceneChangeDetection": false,
- "complexity": "Balanced",
- "layers": [
- {
- "width": "1280",
- "height": "720",
- "label": "HD",
- "bitrate": 3400000,
- "maxBitrate": 3400000,
- "bFrames": 3,
- "slices": 0,
- "adaptiveBFrame": true,
- "profile": "Auto",
- "level": "auto",
- "bufferWindow": "PT5S",
- "referenceFrames": 3,
- "entropyMode": "Cabac"
- },
- {
- "width": "640",
- "height": "360",
- "label": "SD",
- "bitrate": 1000000,
- "maxBitrate": 1000000,
- "bFrames": 3,
- "slices": 0,
- "adaptiveBFrame": true,
- "profile": "Auto",
- "level": "auto",
- "bufferWindow": "PT5S",
- "referenceFrames": 3,
- "entropyMode": "Cabac"
- }
- ]
- },
- {
- "@odata.type": "#Microsoft.Media.PngImage",
- "stretchMode": "AutoSize",
- "start": "25%",
- "step": "25%",
- "range": "80%",
- "layers": [
- {
- "width": "50%",
- "height": "50%"
- }
- ]
- }
- ],
- "formats": [
- {
- "@odata.type": "#Microsoft.Media.Mp4Format",
- "filenamePattern": "Video-{Basename}-{Label}-{Bitrate}{Extension}",
- "outputFiles": []
- },
- {
- "@odata.type": "#Microsoft.Media.PngFormat",
- "filenamePattern": "Thumbnail-{Basename}-{Index}{Extension}"
- }
- ]
-}
-```
-
-## Create a new transform
-
-In this example, we create a **Transform** that is based on the custom preset we defined earlier. When creating a Transform, you should first check if one already exist. If the Transform exists, reuse it. The following `show` command returns the `customTransformName` transform if it exists:
-
-```azurecli-interactive
-az ams transform show -a amsaccount -g amsResourceGroup -n customTransformName
-```
-
-The following Azure CLI command creates the Transform based on the custom preset (defined earlier).
-
-```azurecli-interactive
-az ams transform create -a amsaccount -g amsResourceGroup -n customTransformName --description "Basic Transform using a custom encoding preset" --preset customPreset.json
-```
-
-For Media Services to apply the Transform to the specified video or audio, you need to submit a Job under that Transform. For a complete example that shows how to submit a job under a transform, see [Quickstart: Stream video files - Azure CLI](stream-files-cli-quickstart.md).
-
-## See also
-
-[Azure CLI](/cli/azure/ams)
media-services Transform Custom Preset Rest How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/transform-custom-preset-rest-how-to.md
- Title: Encode a custom transform REST
-description: This topic shows how to use Azure Media Services v3 to encode a custom transform using REST.
------- Previously updated : 08/31/2020---
-# How to encode with a custom transform - REST
--
-When encoding with Azure Media Services, you can get started quickly with one of the recommended built-in presets, based on industry best practices, as demonstrated in the [Streaming files](stream-files-tutorial-with-rest.md#create-a-transform) tutorial. You can also build a custom preset to target your specific scenario or device requirements.
---
-## Considerations
-
-When creating custom presets, the following considerations apply:
-
-* All values for height and width on AVC content must be a multiple of 4.
-* In Azure Media Services v3, all of the encoding bitrates are in bits per second. This is different from the presets with our v2 APIs, which used kilobits/second as the unit. For example, if the bitrate in v2 was specified as 128 (kilobits/second), in v3 it would be set to 128000 (bits/second).
-
-## Prerequisites
--- [Create a Media Services account](./account-create-how-to.md). <br/>Make sure to remember the resource group name and the Media Services account name. -- [Configure Postman for Azure Media Services REST API calls](setup-postman-rest-how-to.md).<br/>Make sure to follow the last step in the topic [Get Azure AD Token](setup-postman-rest-how-to.md#get-azure-ad-token). -
-## Define a custom preset
-
-The following example defines the request body of a new Transform. We define a set of outputs that we want to be generated when this Transform is used.
-
-In this example, we first add an AacAudio layer for the audio encoding and two H264Video layers for the video encoding. In the video layers, we assign labels so that they can be used in the output file names. Next, we want the output to also include thumbnails. In the example below we specify images in PNG format, generated at 50% of the resolution of the input video, and at three timestamps - {25%, 50%, 75} of the length of the input video. Lastly, we specify the format for the output files - one for video + audio, and another for the thumbnails. Since we have multiple H264Layers, we have to use macros that produce unique names per layer. We can either use a `{Label}` or `{Bitrate}` macro, the example shows the former.
-
-```json
-{
- "properties": {
- "description": "Basic Transform using a custom encoding preset",
- "outputs": [
- {
- "onError": "StopProcessingJob",
- "relativePriority": "Normal",
- "preset": {
- "@odata.type": "#Microsoft.Media.StandardEncoderPreset",
- "codecs": [
- {
- "@odata.type": "#Microsoft.Media.AacAudio",
- "channels": 2,
- "samplingRate": 48000,
- "bitrate": 128000,
- "profile": "AacLc"
- },
- {
- "@odata.type": "#Microsoft.Media.H264Video",
- "keyFrameInterval": "PT2S",
- "stretchMode": "AutoSize",
- "sceneChangeDetection": false,
- "complexity": "Balanced",
- "layers": [
- {
- "width": "1280",
- "height": "720",
- "label": "HD",
- "bitrate": 3400000,
- "maxBitrate": 3400000,
- "bFrames": 3,
- "slices": 0,
- "adaptiveBFrame": true,
- "profile": "Auto",
- "level": "auto",
- "bufferWindow": "PT5S",
- "referenceFrames": 3,
- "entropyMode": "Cabac"
- },
- {
- "width": "640",
- "height": "360",
- "label": "SD",
- "bitrate": 1000000,
- "maxBitrate": 1000000,
- "bFrames": 3,
- "slices": 0,
- "adaptiveBFrame": true,
- "profile": "Auto",
- "level": "auto",
- "bufferWindow": "PT5S",
- "referenceFrames": 3,
- "entropyMode": "Cabac"
- }
- ]
- },
- {
- "@odata.type": "#Microsoft.Media.PngImage",
- "stretchMode": "AutoSize",
- "start": "25%",
- "step": "25%",
- "range": "80%",
- "layers": [
- {
- "width": "50%",
- "height": "50%"
- }
- ]
- }
- ],
- "formats": [
- {
- "@odata.type": "#Microsoft.Media.Mp4Format",
- "filenamePattern": "Video-{Basename}-{Label}-{Bitrate}{Extension}",
- "outputFiles": []
- },
- {
- "@odata.type": "#Microsoft.Media.PngFormat",
- "filenamePattern": "Thumbnail-{Basename}-{Index}{Extension}"
- }
- ]
- }
- }
- ]
- }
-}
-
-```
-
-## Create a new transform
-
-In this example, we create a **Transform** that is based on the custom preset we defined earlier. When creating a Transform, you should first use [Get](/rest/api/media/transforms/get) to check if one already exists. If the Transform exists, reuse it.
-
-In the Postman's collection that you downloaded, select **Transforms and Jobs**->**Create or Update Transform**.
-
-The **PUT** HTTP request method is similar to:
-
-```
-PUT https://management.azure.com/subscriptions/:subscriptionId/resourceGroups/:resourceGroupName/providers/Microsoft.Media/mediaServices/:accountName/transforms/:transformName?api-version={{api-version}}
-```
-
-Select the **Body** tab and replace the body with the json code you [defined earlier](#define-a-custom-preset). For Media Services to apply the Transform to the specified video or audio, you need to submit a Job under that Transform.
-
-Select **Send**.
-
-For Media Services to apply the Transform to the specified video or audio, you need to submit a Job under that Transform. For a complete example that shows how to submit a job under a transform, see [Tutorial: Stream video files - REST](stream-files-tutorial-with-rest.md).
-
-## Next steps
-
-See [other REST operations](/rest/api/media/)
media-services Transform Custom Presets How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/transform-custom-presets-how-to.md
- Title: Encode custom transform .NET
-description: This topic shows how to use Azure Media Services v3 to encode a custom transform using .NET.
------ Previously updated : 05/11/2021----
-# How to encode with a custom transform - .NET
--
-When encoding with Azure Media Services, you can get started quickly with one of the recommended built-in presets based on industry best practices as demonstrated in the [Streaming files](stream-files-tutorial-with-api.md) tutorial. You can also build a custom preset to target your specific scenario or device requirements.
-
-## Considerations
-
-When creating custom presets, the following considerations apply:
-
-* All values for height and width on AVC content must be a multiple of 4.
-* In Azure Media Services v3, all of the encoding bitrates are in bits per second. This is different from the presets with our v2 APIs, which used kilobits/second as the unit. For example, if the bitrate in v2 was specified as 128 (kilobits/second), in v3 it would be set to 128000 (bits/second).
-
-## Prerequisites
-
-[Create a Media Services account](./account-create-how-to.md)
-
-## Download the sample
-
-Clone a GitHub repository that contains the full .NET Core sample to your machine using the following command:
-
- ```bash
- git clone https://github.com/Azure-Samples/media-services-v3-dotnet.git
- ```
-
-The custom preset sample is located in the [Encoding with a custom preset using .NET](https://github.com/Azure-Samples/media-services-v3-dotnet/tree/main/VideoEncoding/Encoding_H264) folder.
-
-## Create a transform with a custom preset
-
-When creating a new [Transform](/rest/api/media/transforms), you need to specify what you want it to produce as an output. The required parameter is a [TransformOutput](/rest/api/media/transforms/createorupdate#transformoutput) object, as shown in the code below. Each **TransformOutput** contains a **Preset**. The **Preset** describes the step-by-step instructions of video and/or audio processing operations that are to be used to generate the desired **TransformOutput**. The following **TransformOutput** creates custom codec and layer output settings.
-
-When creating a [Transform](/rest/api/media/transforms), you should first check if one already exists using the **Get** method, as shown in the code that follows. In Media Services v3, **Get** methods on entities return **null** if the entity doesn't exist (a case-insensitive check on the name).
-
-### Example custom transform
-
-The following example defines a set of outputs that we want to be generated when this Transform is used. We first add an AacAudio layer for the audio encoding and two H264Video layers for the video encoding. In the video layers, we assign labels so that they can be used in the output file names. Next, we want the output to also include thumbnails. In the example below we specify images in PNG format, generated at 50% of the resolution of the input video, and at three timestamps - {25%, 50%, 75%} of the length of the input video. Lastly, we specify the format for the output files - one for video + audio, and another for the thumbnails. Since we have multiple H264Layers, we have to use macros that produce unique names per layer. We can either use a `{Label}` or `{Bitrate}` macro, the example shows the former.
-
-[!code-csharp[Main](../../../media-services-v3-dotnet/VideoEncoding/Encoding_H264/Program.cs#EnsureTransformExists)]
-
-## Next steps
-
-[Streaming files](stream-files-tutorial-with-api.md)
media-services Transform Custom Transform How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/transform-custom-transform-how-to.md
+
+ Title: Encode custom transform
+description: This topic shows how to use Azure Media Services v3 to encode a custom transform.
+++++ Last updated : 03/09/2022+++
+# How to encode with a custom transform
++
+When encoding with Azure Media Services, you can get started quickly with one of the recommended built-in presets based on industry best practices as demonstrated in the [Streaming files](stream-files-tutorial-with-api.md) tutorial. You can also build a custom preset to target your specific scenario or device requirements.
+
+## Considerations
+
+When creating custom presets, the following considerations apply:
+
+* All values for height and width on AVC content must be a multiple of 4.
+* In Azure Media Services v3, all of the encoding bitrates are in bits per second. This is different from the presets with our v2 APIs, which used kilobits/second as the unit. For example, if the bitrate in v2 was specified as 128 (kilobits/second), in v3 it would be set to 128000 (bits/second).
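The two considerations above are easy to get wrong when porting v2 presets. A minimal sketch of the checks (the helper names are illustrative, not part of any SDK):

```python
def v2_to_v3_bitrate(kbps):
    # v2 presets used kilobits/second; v3 expects bits/second.
    return kbps * 1000

def is_valid_avc_dimension(pixels):
    # All height and width values on AVC content must be a multiple of 4.
    return pixels % 4 == 0

print(v2_to_v3_bitrate(128))          # 128000, matching the example in the text
print(is_valid_avc_dimension(1280))   # True
print(is_valid_avc_dimension(1282))   # False
```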
+
+## Prerequisites
+
+[Create a Media Services account](./account-create-how-to.md)
+
+## [CLI](#tab/cli/)
+
+## Define a custom preset
+
+The following example defines the request body of a new Transform. We define a set of outputs that we want to be generated when this Transform is used.
+
+In this example, we first add an AacAudio layer for the audio encoding and two H264Video layers for the video encoding. In the video layers, we assign labels so that they can be used in the output file names. Next, we want the output to also include thumbnails. In the example below we specify images in PNG format, generated at 50% of the resolution of the input video, and at three timestamps - {25%, 50%, 75%} of the length of the input video. Lastly, we specify the format for the output files - one for video + audio, and another for the thumbnails. Since we have multiple H264Layers, we have to use macros that produce unique names per layer. We can either use a `{Label}` or `{Bitrate}` macro; the example shows the former.
+
+We are going to save this transform in a file. In this example, we name the file `customPreset.json`.
+
+```json
+{
+ "@odata.type": "#Microsoft.Media.StandardEncoderPreset",
+ "codecs": [
+ {
+ "@odata.type": "#Microsoft.Media.AacAudio",
+ "channels": 2,
+ "samplingRate": 48000,
+ "bitrate": 128000,
+ "profile": "AacLc"
+ },
+ {
+ "@odata.type": "#Microsoft.Media.H264Video",
+ "keyFrameInterval": "PT2S",
+ "stretchMode": "AutoSize",
+ "sceneChangeDetection": false,
+ "complexity": "Balanced",
+ "layers": [
+ {
+ "width": "1280",
+ "height": "720",
+ "label": "HD",
+ "bitrate": 3400000,
+ "maxBitrate": 3400000,
+ "bFrames": 3,
+ "slices": 0,
+ "adaptiveBFrame": true,
+ "profile": "Auto",
+ "level": "auto",
+ "bufferWindow": "PT5S",
+ "referenceFrames": 3,
+ "entropyMode": "Cabac"
+ },
+ {
+ "width": "640",
+ "height": "360",
+ "label": "SD",
+ "bitrate": 1000000,
+ "maxBitrate": 1000000,
+ "bFrames": 3,
+ "slices": 0,
+ "adaptiveBFrame": true,
+ "profile": "Auto",
+ "level": "auto",
+ "bufferWindow": "PT5S",
+ "referenceFrames": 3,
+ "entropyMode": "Cabac"
+ }
+ ]
+ },
+ {
+ "@odata.type": "#Microsoft.Media.PngImage",
+ "stretchMode": "AutoSize",
+ "start": "25%",
+ "step": "25%",
+ "range": "80%",
+ "layers": [
+ {
+ "width": "50%",
+ "height": "50%"
+ }
+ ]
+ }
+ ],
+ "formats": [
+ {
+ "@odata.type": "#Microsoft.Media.Mp4Format",
+ "filenamePattern": "Video-{Basename}-{Label}-{Bitrate}{Extension}",
+ "outputFiles": []
+ },
+ {
+ "@odata.type": "#Microsoft.Media.PngFormat",
+ "filenamePattern": "Thumbnail-{Basename}-{Index}{Extension}"
+ }
+ ]
+}
+```
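To see why the `{Label}` macro keeps the two H264Layer outputs from colliding, here is a naive stand-in for the macro expansion. The actual substitution is performed service-side by Media Services; this sketch and its sample values are purely illustrative:

```python
def expand_filename_pattern(pattern, values):
    # Replace each {Macro} token with its value; unmatched tokens are left as-is.
    out = pattern
    for key, val in values.items():
        out = out.replace("{" + key + "}", str(val))
    return out

# The HD layer from the preset above, with a hypothetical input file "input.mp4":
print(expand_filename_pattern(
    "Video-{Basename}-{Label}-{Bitrate}{Extension}",
    {"Basename": "input", "Label": "HD", "Bitrate": 3400000, "Extension": ".mp4"},
))  # Video-input-HD-3400000.mp4
```

With distinct labels ("HD", "SD"), each layer produces a unique file name even if the bitrates were equal.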
+
+## Create a new transform
+
+In this example, we create a **Transform** that is based on the custom preset we defined earlier. When creating a Transform, you should first check if one already exists. If the Transform exists, reuse it. The following `show` command returns the `customTransformName` transform if it exists:
+
+```azurecli-interactive
+az ams transform show -a amsaccount -g amsResourceGroup -n customTransformName
+```
+
+The following Azure CLI command creates the Transform based on the custom preset (defined earlier).
+
+```azurecli-interactive
+az ams transform create -a amsaccount -g amsResourceGroup -n customTransformName --description "Basic Transform using a custom encoding preset" --preset customPreset.json
+```
+
+For Media Services to apply the Transform to the specified video or audio, you need to submit a Job under that Transform. For a complete example that shows how to submit a job under a transform, see [Quickstart: Stream video files - Azure CLI](stream-files-cli-quickstart.md).
+
+## [REST](#tab/rest/)
+
+## Define a custom preset
+
+The following example defines the request body of a new Transform. We define a set of outputs that we want to be generated when this Transform is used.
+
+In this example, we first add an AacAudio layer for the audio encoding and two H264Video layers for the video encoding. In the video layers, we assign labels so that they can be used in the output file names. Next, we want the output to also include thumbnails. In the example below, we specify images in PNG format, generated at 50% of the resolution of the input video, and at three timestamps - {25%, 50%, 75%} of the length of the input video. Lastly, we specify the format for the output files - one for video + audio, and another for the thumbnails. Since we have multiple H264Layers, we have to use macros that produce unique names per layer; you can use either the `{Label}` or `{Bitrate}` macro. The example shows the former.
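The per-layer macro substitution can be sketched with a few lines of Python (for illustration only; the input basename `ignite` and the substitution values are hypothetical, and the actual expansion is performed by the encoder, not by you):

```python
# Sketch: expand Media Services filename macros the way the
# "Video-{Basename}-{Label}-{Bitrate}{Extension}" pattern resolves per layer.
# The substitution values below are hypothetical examples.
def expand(pattern, values):
    out = pattern
    for key, val in values.items():
        out = out.replace("{" + key + "}", str(val))
    return out

pattern = "Video-{Basename}-{Label}-{Bitrate}{Extension}"
hd = expand(pattern, {"Basename": "ignite", "Label": "HD", "Bitrate": 3400000, "Extension": ".mp4"})
sd = expand(pattern, {"Basename": "ignite", "Label": "SD", "Bitrate": 1000000, "Extension": ".mp4"})
print(hd)  # Video-ignite-HD-3400000.mp4
print(sd)  # Video-ignite-SD-1000000.mp4
```

Because each layer carries a distinct label (and bitrate), the two MP4 outputs get unique names and don't overwrite each other.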
+
+```json
+{
+ "properties": {
+ "description": "Basic Transform using a custom encoding preset",
+ "outputs": [
+ {
+ "onError": "StopProcessingJob",
+ "relativePriority": "Normal",
+ "preset": {
+ "@odata.type": "#Microsoft.Media.StandardEncoderPreset",
+ "codecs": [
+ {
+ "@odata.type": "#Microsoft.Media.AacAudio",
+ "channels": 2,
+ "samplingRate": 48000,
+ "bitrate": 128000,
+ "profile": "AacLc"
+ },
+ {
+ "@odata.type": "#Microsoft.Media.H264Video",
+ "keyFrameInterval": "PT2S",
+ "stretchMode": "AutoSize",
+ "sceneChangeDetection": false,
+ "complexity": "Balanced",
+ "layers": [
+ {
+ "width": "1280",
+ "height": "720",
+ "label": "HD",
+ "bitrate": 3400000,
+ "maxBitrate": 3400000,
+ "bFrames": 3,
+ "slices": 0,
+ "adaptiveBFrame": true,
+ "profile": "Auto",
+ "level": "auto",
+ "bufferWindow": "PT5S",
+ "referenceFrames": 3,
+ "entropyMode": "Cabac"
+ },
+ {
+ "width": "640",
+ "height": "360",
+ "label": "SD",
+ "bitrate": 1000000,
+ "maxBitrate": 1000000,
+ "bFrames": 3,
+ "slices": 0,
+ "adaptiveBFrame": true,
+ "profile": "Auto",
+ "level": "auto",
+ "bufferWindow": "PT5S",
+ "referenceFrames": 3,
+ "entropyMode": "Cabac"
+ }
+ ]
+ },
+ {
+ "@odata.type": "#Microsoft.Media.PngImage",
+ "stretchMode": "AutoSize",
+ "start": "25%",
+ "step": "25%",
+ "range": "80%",
+ "layers": [
+ {
+ "width": "50%",
+ "height": "50%"
+ }
+ ]
+ }
+ ],
+ "formats": [
+ {
+ "@odata.type": "#Microsoft.Media.Mp4Format",
+ "filenamePattern": "Video-{Basename}-{Label}-{Bitrate}{Extension}",
+ "outputFiles": []
+ },
+ {
+ "@odata.type": "#Microsoft.Media.PngFormat",
+ "filenamePattern": "Thumbnail-{Basename}-{Index}{Extension}"
+ }
+ ]
+ }
+ }
+ ]
+ }
+}
+
+```
+
+## Create a new transform
+
+In this example, we create a **Transform** that is based on the custom preset we defined earlier. When creating a Transform, you should first use [Get](/rest/api/media/transforms/get) to check if one already exists. If the Transform exists, reuse it.
+
+In the Postman collection that you downloaded, select **Transforms and Jobs**->**Create or Update Transform**.
+
+The **PUT** HTTP request method is similar to:
+
+```
+PUT https://management.azure.com/subscriptions/:subscriptionId/resourceGroups/:resourceGroupName/providers/Microsoft.Media/mediaServices/:accountName/transforms/:transformName?api-version={{api-version}}
+```
+
+Select the **Body** tab and replace the body with the JSON code you [defined earlier](#define-a-custom-preset).
+
+Select **Send**.
+
+For Media Services to apply the Transform to the specified video or audio, you need to submit a Job under that Transform. For a complete example that shows how to submit a job under a transform, see [Tutorial: Stream video files - REST](stream-files-tutorial-with-rest.md).
+
+## [.NET](#tab/net/)
+
+## Download the sample
+
+Clone a GitHub repository that contains the full .NET Core sample to your machine using the following command:
+
+ ```bash
+ git clone https://github.com/Azure-Samples/media-services-v3-dotnet.git
+ ```
+
+The custom preset sample is located in the [Encoding with a custom preset using .NET](https://github.com/Azure-Samples/media-services-v3-dotnet/tree/main/VideoEncoding/Encoding_H264) folder.
+
+## Create a transform with a custom preset
+
+When creating a new [Transform](/rest/api/media/transforms), you need to specify what you want it to produce as an output. The required parameter is a [TransformOutput](/rest/api/media/transforms/createorupdate#transformoutput) object, as shown in the code below. Each **TransformOutput** contains a **Preset**. The **Preset** describes the step-by-step instructions of video and/or audio processing operations that are to be used to generate the desired **TransformOutput**. The following **TransformOutput** creates custom codec and layer output settings.
+
+When creating a [Transform](/rest/api/media/transforms), you should first check if one already exists using the **Get** method, as shown in the code that follows. In Media Services v3, **Get** methods on entities return **null** if the entity doesn't exist (a case-insensitive check on the name).
+
+### Example custom transform
+
+The following example defines a set of outputs that we want to be generated when this Transform is used. We first add an AacAudio layer for the audio encoding and two H264Video layers for the video encoding. In the video layers, we assign labels so that they can be used in the output file names. Next, we want the output to also include thumbnails. In the example below, we specify images in PNG format, generated at 50% of the resolution of the input video, and at three timestamps - {25%, 50%, 75%} of the length of the input video. Lastly, we specify the format for the output files - one for video + audio, and another for the thumbnails. Since we have multiple H264Layers, we have to use macros that produce unique names per layer; you can use either the `{Label}` or `{Bitrate}` macro. The example shows the former.
+
+[!code-csharp[Main](../../../media-services-v3-dotnet/VideoEncoding/Encoding_H264/Program.cs#EnsureTransformExists)]
++
media-services Transform Generate Thumbnails Dotnet How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/transform-generate-thumbnails-dotnet-how-to.md
- Title: Generate thumbnails using Media Encoder Standard .NET
-description: This article shows how to use .NET to encode an asset and generate thumbnails at the same time using Media Encoder Standard.
------ Previously updated : 12/01/2020---
-# How to generate thumbnails using Encoder Standard with .NET
--
-You can use Media Encoder Standard to generate one or more thumbnails from your input video in [JPEG](https://en.wikipedia.org/wiki/JPEG) or [PNG](https://en.wikipedia.org/wiki/Portable_Network_Graphics) image file formats.
-
-## Recommended reading and practice
-
-It is recommended that you become familiar with custom transforms by reading [How to encode with a custom transform - .NET](transform-custom-presets-how-to.md).
-
-## Transform code example
-
-The below code example creates just a thumbnail. You should set the following parameters:
-- **start** - The position in the input video from where to start generating thumbnails. The value can be in ISO 8601 format (For example, PT05S to start at 5 seconds), or a frame count (For example, 10 to start at the 10th frame), or a relative value to stream duration (For example, 10% to start at 10% of stream duration). Also supports a macro {Best}, which tells the encoder to select the best thumbnail from the first few seconds of the video and will only produce one thumbnail, no matter what other settings are for Step and Range. The default value is macro {Best}.
-- **step** - The intervals at which thumbnails are generated. The value can be in ISO 8601 format (for example, PT05S for one image every 5 seconds), or a frame count (for example, 30 for one image every 30 frames), or a relative value to stream duration (for example, 10% for one image every 10% of stream duration). Step value will affect the first generated thumbnail, which may not be exactly the one specified at transform preset start time. This is due to the encoder, which tries to select the best thumbnail between start time and step position from start time as the first output. As the default value is 10%, it means that if the stream has long duration, the first generated thumbnail might be far away from the one specified at start time. Try to select a reasonable value for step if the first thumbnail is expected to be close to start time, or set the range value to 1 if only one thumbnail is needed at start time.
-- **range** - The position relative to transform preset start time in the input video at which to stop generating thumbnails. The value can be in ISO 8601 format (For example, PT5M30S to stop at 5 minutes and 30 seconds from start time), or a frame count (For example, 300 to stop at the 300th frame from the frame at start time. If this value is 1, it means only producing one thumbnail at start time), or a relative value to the stream duration (For example, 50% to stop at half of stream duration from start time). The default value is 100%, which means to stop at the end of the stream.
-- **layers** - A collection of output image layers to be produced by the encoder.
-
-```csharp
-
-private static Transform EnsureTransformExists(IAzureMediaServicesClient client, string resourceGroupName, string accountName, string transformName)
-{
- // Does a Transform already exist with the desired name? Assume that an existing Transform with the desired name
- // also uses the same recipe or Preset for processing content.
- Transform transform = client.Transforms.Get(resourceGroupName, accountName, transformName);
-
- if (transform == null)
- {
- // Create a new Transform Outputs array - this defines the set of outputs for the Transform
- TransformOutput[] outputs = new TransformOutput[]
- {
- // Create a new TransformOutput with a custom Standard Encoder Preset
- // This demonstrates how to create custom codec and layer output settings
-
- new TransformOutput(
- new StandardEncoderPreset(
- codecs: new Codec[]
- {
- // Generate a set of PNG thumbnails
- new PngImage(
- start: "25%",
- step: "25%",
- range: "80%",
- layers: new PngLayer[]{
- new PngLayer(
- width: "50%",
- height: "50%"
- )
- }
- )
- },
- // Specify the format for the output files for the thumbnails
- formats: new Format[]
- {
- new PngFormat(
- filenamePattern:"Thumbnail-{Basename}-{Index}{Extension}"
- )
- }
- ),
- onError: OnErrorType.StopProcessingJob,
- relativePriority: Priority.Normal
- )
- };
-
- string description = "A transform that includes thumbnails.";
- // Create the custom Transform with the outputs defined above
- transform = client.Transforms.CreateOrUpdate(resourceGroupName, accountName, transformName, outputs, description);
- }
-
- return transform;
-}
-```
media-services Transform Generate Thumbnails How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/transform-generate-thumbnails-how-to.md
+
+ Title: Generate thumbnails using Media Encoder Standard
+description: This article shows how to encode an asset and generate thumbnails at the same time using Media Encoder Standard.
+++++ Last updated : 03/09/2022++
+# How to generate thumbnails using Encoder Standard
++
+You can use Media Encoder Standard to generate one or more thumbnails from your input video in [JPEG](https://en.wikipedia.org/wiki/JPEG), [PNG](https://en.wikipedia.org/wiki/Portable_Network_Graphics), or [BMP](https://en.wikipedia.org/wiki/BMP_file_format) image file formats.
++
+## [REST](#tab/rest/)
+
+## Recommended reading and practice
+
+It is recommended that you become familiar with custom transforms by reading [How to encode with a custom transform](transform-custom-transform-how-to.md).
+
+## Thumbnail parameters
+
+You should set the following parameters:
+
+- **start** - The position in the input video from where to start generating thumbnails. The value can be in ISO 8601 format (For example, PT05S to start at 5 seconds), or a frame count (For example, 10 to start at the 10th frame), or a relative value to stream duration (For example, 10% to start at 10% of stream duration). Also supports a macro {Best}, which tells the encoder to select the best thumbnail from the first few seconds of the video and will only produce one thumbnail, no matter what other settings are for Step and Range. The default value is macro {Best}.
+- **step** - The intervals at which thumbnails are generated. The value can be in ISO 8601 format (for example, PT05S for one image every 5 seconds), or a frame count (for example, 30 for one image every 30 frames), or a relative value to stream duration (for example, 10% for one image every 10% of stream duration). The step value affects the first generated thumbnail, which may not be exactly the one specified at the transform preset start time. This is because the encoder tries to select the best thumbnail between the start time and the first step position as the first output. Because the default value is 10%, if the stream has a long duration, the first generated thumbnail might be far away from the one specified at the start time. Select a reasonable value for step if the first thumbnail is expected to be close to the start time, or set the range value to 1 if only one thumbnail is needed at the start time.
+- **range** - The position relative to transform preset start time in the input video at which to stop generating thumbnails. The value can be in ISO 8601 format (For example, PT5M30S to stop at 5 minutes and 30 seconds from start time), or a frame count (For example, 300 to stop at the 300th frame from the frame at start time. If this value is 1, it means only producing one thumbnail at start time), or a relative value to the stream duration (For example, 50% to stop at half of stream duration from start time). The default value is 100%, which means to stop at the end of the stream.
+- **layers** - A collection of output image layers to be produced by the encoder.
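The interaction of the percentage forms of **start**, **step**, and **range** can be sketched with a little arithmetic (Python used only for illustration; the stop rule below is our reading of the semantics above, not SDK code):

```python
def thumbnail_times(duration, start_pct, step_pct, range_pct):
    """Approximate the timestamps (seconds) produced by percentage-based
    start/step/range thumbnail settings. Illustrative sketch only:
    assumes all three values are percentages of the stream duration and
    that generation stops at start + range, capped at the stream end."""
    start = duration * start_pct / 100.0
    step = duration * step_pct / 100.0
    stop = min(duration, start + duration * range_pct / 100.0)
    times, t = [], start
    while t <= stop:
        times.append(t)
        t += step
    return times

# start=5%, step=10%, range=96% over a 100-second input: images at 5 s, 15 s, ..., 95 s
print(len(thumbnail_times(100, 5, 10, 96)))  # 10
```

This matches the "series of JPEG images" preset later in this article, which produces 10 images at 5%, 15%, ..., 95% of the input timeline.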
+
+## Example of a "single PNG file" preset
+
+The following JSON preset can be used to produce a single output PNG file from the first few seconds of the input video, where the encoder makes a best-effort attempt at finding an "interesting" frame. The output image dimensions are set to 50% of the input video's, as specified in the layer below. Note also how the "PngFormat" entry in "formats" is required to match the use of "PngImage" in the "codecs" section.
+
+```json
+{
+ "properties": {
+ "description": "Basic Transform using a custom encoding preset for thumbnails",
+ "outputs": [
+ {
+ "onError": "StopProcessingJob",
+ "relativePriority": "Normal",
+ "preset": {
+ "@odata.type": "#Microsoft.Media.StandardEncoderPreset",
+ "codecs": [
+ {
+ "@odata.type": "#Microsoft.Media.PngImage",
+ "stretchMode": "AutoSize",
+ "start": "{Best}",
+ "step": "25%",
+ "range": "80%",
+ "layers": [
+ {
+ "width": "50%",
+ "height": "50%"
+ }
+ ]
+ }
+ ],
+          "formats": [
+            {
+              "@odata.type": "#Microsoft.Media.PngFormat",
+              "filenamePattern": "Thumbnail-{Basename}-{Index}{Extension}"
+            }
+          ]
+ }
+ }
+ ]
+ }
+}
+
+```
+
+## Example of a "series of JPEG images" preset
+
+The following JSON preset can be used to produce a set of 10 images at timestamps of 5%, 15%, …, 95% of the input timeline, where the image size is specified to be one quarter that of the input video.
+
+### JSON preset
+
+```json
+{
+ "Version": 1.0,
+ "Codecs": [
+ {
+ "JpgLayers": [
+ {
+ "Quality": 90,
+ "Type": "JpgLayer",
+ "Width": "25%",
+ "Height": "25%"
+ }
+ ],
+ "Start": "5%",
+ "Step": "10%",
+ "Range": "96%",
+ "Type": "JpgImage"
+ }
+ ],
+ "Outputs": [
+ {
+ "FileName": "{Basename}_{Index}{Extension}",
+ "Format": {
+ "Type": "JpgFormat"
+ }
+ }
+ ]
+}
+```
+
+## Example of a "one image at a specific timestamp" preset
+
+The following JSON preset can be used to produce a single JPEG image at the 30-second mark of the input video. This preset expects the input video to be more than 30 seconds long; otherwise, the job fails.
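You can sanity-check that duration requirement before submitting the job. The parsing helper below is illustrative only (it is not part of any Media Services SDK), and the input duration is a hypothetical value you would obtain from your own media probe:

```python
def hhmmss_to_seconds(value):
    """Parse an 'HH:MM:SS' start value like '00:00:30' into whole seconds."""
    h, m, s = (int(part) for part in value.split(":"))
    return h * 3600 + m * 60 + s

start = hhmmss_to_seconds("00:00:30")
input_duration = 45  # hypothetical input length in seconds
if input_duration <= start:
    raise ValueError("input video is too short for the requested thumbnail timestamp")
print(start)  # 30
```

Checking up front avoids submitting a job that is guaranteed to fail.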
+
+### JSON preset
+
+```json
+{
+ "Version": 1.0,
+ "Codecs": [
+ {
+ "JpgLayers": [
+ {
+ "Quality": 90,
+ "Type": "JpgLayer",
+ "Width": "25%",
+ "Height": "25%"
+ }
+ ],
+ "Start": "00:00:30",
+ "Step": "1",
+ "Range": "1",
+ "Type": "JpgImage"
+ }
+ ],
+ "Outputs": [
+ {
+ "FileName": "{Basename}_{Index}{Extension}",
+ "Format": {
+ "Type": "JpgFormat"
+ }
+ }
+ ]
+}
+```
+
+## Example of a "thumbnails at different resolutions" preset
+
+The following preset can be used to generate thumbnails at different resolutions in one task. In the example, at positions 5%, 15%, …, 95% of the input timeline, the encoder generates two images – one at 100% of the input video resolution and the other at 50%.
+
+Note the use of the {Resolution} macro in the FileName; it tells the encoder to use the width and height that you specified in the Encoding section of the preset when generating the file names of the output images. This also helps you easily distinguish between the different images.
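A sketch of the file names this preset would yield (Python for illustration only; the input name, the input resolution, and the exact `WxH` form of the {Resolution} macro are assumptions, and the index format is ultimately the encoder's choice):

```python
def layer_filenames(basename, width, height, layer_pcts, count, ext=".jpg"):
    """Sketch: build '{Basename}_{Resolution}_{Index}{Extension}' names
    for each percentage-scaled layer, assuming {Resolution} expands to WxH."""
    names = []
    for pct in layer_pcts:
        w, h = width * pct // 100, height * pct // 100
        for i in range(count):
            names.append(f"{basename}_{w}x{h}_{i}{ext}")
    return names

# Two layers (100% and 50%) over a hypothetical 1280x720 input; first 2 indexes shown
print(layer_filenames("ignite", 1280, 720, [100, 50], 2))
```

Because the resolution appears in the name, the full-size and half-size images never collide even though they share the same index sequence.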
+
+### JSON preset
+
+```json
+{
+ "Version": 1.0,
+ "Codecs": [
+ {
+      "JpgLayers": [
+        {
+          "Quality": 90,
+          "Type": "JpgLayer",
+          "Width": "100%",
+          "Height": "100%"
+        },
+        {
+          "Quality": 90,
+          "Type": "JpgLayer",
+          "Width": "50%",
+          "Height": "50%"
+        }
+      ],
+ "Start": "5%",
+ "Step": "10%",
+ "Range": "96%",
+ "Type": "JpgImage"
+ }
+ ],
+ "Outputs": [
+ {
+ "FileName": "{Basename}_{Resolution}_{Index}{Extension}",
+ "Format": {
+        "Type": "JpgFormat"
+ }
+ }
+ ]
+}
+```
+
+## Example of generating a thumbnail while encoding
+
+While all of the above examples have discussed how you can submit an encoding task that only produces images, you can also combine video/audio encoding with thumbnail generation. The following JSON preset tells Encoder Standard to generate a thumbnail during encoding.
+
+### JSON preset
+
+For information about the schema, see the [Media Encoder Standard schema](../previous/media-services-mes-schema.md) article.
+
+```json
+{
+ "Version": 1.0,
+ "Codecs": [
+ {
+ "KeyFrameInterval": "00:00:02",
+ "SceneChangeDetection": "true",
+ "H264Layers": [
+ {
+ "Profile": "Auto",
+ "Level": "auto",
+ "Bitrate": 4500,
+ "MaxBitrate": 4500,
+ "BufferWindow": "00:00:05",
+ "Width": 1280,
+ "Height": 720,
+ "ReferenceFrames": 3,
+ "EntropyMode": "Cabac",
+ "AdaptiveBFrame": true,
+ "Type": "H264Layer",
+ "FrameRate": "0/1"
+
+ }
+ ],
+ "Type": "H264Video"
+ },
+ {
+ "JpgLayers": [
+ {
+ "Quality": 90,
+ "Type": "JpgLayer",
+ "Width": "100%",
+ "Height": "100%"
+ }
+ ],
+ "Start": "{Best}",
+ "Type": "JpgImage"
+ },
+ {
+ "Channels": 2,
+ "SamplingRate": 48000,
+ "Bitrate": 128,
+ "Type": "AACAudio"
+ }
+ ],
+ "Outputs": [
+ {
+ "FileName": "{Basename}_{Index}{Extension}",
+ "Format": {
+ "Type": "JpgFormat"
+ }
+ },
+ {
+ "FileName": "{Basename}_{Resolution}_{VideoBitrate}.mp4",
+ "Format": {
+ "Type": "MP4Format"
+ }
+ }
+ ]
+}
+```
+
+## [.NET](#tab/net/)
+
+## Recommended reading and practice
+
+It is recommended that you become familiar with custom transforms by reading [How to encode with a custom transform](transform-custom-transform-how-to.md).
+
+## Transform code example
+
+The following code example creates only a thumbnail. You should set the following parameters:
+
+- **start** - The position in the input video from where to start generating thumbnails. The value can be in ISO 8601 format (For example, PT05S to start at 5 seconds), or a frame count (For example, 10 to start at the 10th frame), or a relative value to stream duration (For example, 10% to start at 10% of stream duration). Also supports a macro {Best}, which tells the encoder to select the best thumbnail from the first few seconds of the video and will only produce one thumbnail, no matter what other settings are for Step and Range. The default value is macro {Best}.
+- **step** - The intervals at which thumbnails are generated. The value can be in ISO 8601 format (for example, PT05S for one image every 5 seconds), or a frame count (for example, 30 for one image every 30 frames), or a relative value to stream duration (for example, 10% for one image every 10% of stream duration). The step value affects the first generated thumbnail, which may not be exactly the one specified at the transform preset start time. This is because the encoder tries to select the best thumbnail between the start time and the first step position as the first output. Because the default value is 10%, if the stream has a long duration, the first generated thumbnail might be far away from the one specified at the start time. Select a reasonable value for step if the first thumbnail is expected to be close to the start time, or set the range value to 1 if only one thumbnail is needed at the start time.
+- **range** - The position relative to transform preset start time in the input video at which to stop generating thumbnails. The value can be in ISO 8601 format (For example, PT5M30S to stop at 5 minutes and 30 seconds from start time), or a frame count (For example, 300 to stop at the 300th frame from the frame at start time. If this value is 1, it means only producing one thumbnail at start time), or a relative value to the stream duration (For example, 50% to stop at half of stream duration from start time). The default value is 100%, which means to stop at the end of the stream.
+- **layers** - A collection of output image layers to be produced by the encoder.
+
+```csharp
+
+private static Transform EnsureTransformExists(IAzureMediaServicesClient client, string resourceGroupName, string accountName, string transformName)
+{
+ // Does a Transform already exist with the desired name? Assume that an existing Transform with the desired name
+ // also uses the same recipe or Preset for processing content.
+ Transform transform = client.Transforms.Get(resourceGroupName, accountName, transformName);
+
+ if (transform == null)
+ {
+ // Create a new Transform Outputs array - this defines the set of outputs for the Transform
+ TransformOutput[] outputs = new TransformOutput[]
+ {
+ // Create a new TransformOutput with a custom Standard Encoder Preset
+ // This demonstrates how to create custom codec and layer output settings
+
+ new TransformOutput(
+ new StandardEncoderPreset(
+ codecs: new Codec[]
+ {
+ // Generate a set of PNG thumbnails
+ new PngImage(
+ start: "25%",
+ step: "25%",
+ range: "80%",
+ layers: new PngLayer[]{
+ new PngLayer(
+ width: "50%",
+ height: "50%"
+ )
+ }
+ )
+ },
+ // Specify the format for the output files for the thumbnails
+ formats: new Format[]
+ {
+ new PngFormat(
+ filenamePattern:"Thumbnail-{Basename}-{Index}{Extension}"
+ )
+ }
+ ),
+ onError: OnErrorType.StopProcessingJob,
+ relativePriority: Priority.Normal
+ )
+ };
+
+ string description = "A transform that includes thumbnails.";
+ // Create the custom Transform with the outputs defined above
+ transform = client.Transforms.CreateOrUpdate(resourceGroupName, accountName, transformName, outputs, description);
+ }
+
+ return transform;
+}
+```
+
media-services Transform Stitch How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/transform-stitch-how-to.md
Title: How to stitch two or more video files with .NET | Microsoft Docs
+ Title: How to stitch two or more video files | Microsoft Docs
description: This article shows how to stitch two or more video files. Previously updated : 03/24/2021 Last updated : 03/09/2022 -
-# How to stitch two or more video files with .NET
+# How to stitch two or more video files
[!INCLUDE [media services api v3 logo](./includes/v3-hr.md)]
The following example illustrates how you can generate a preset to stitch two or
> [!NOTE]
> Video files edited together should share properties (video resolution, frame rate, audio track count, and so on). Take care not to mix videos with different frame rates or different numbers of audio tracks.
+## [.NET](#tab/net/)
+ ## Prerequisites Clone or download the [Media Services .NET samples](https://github.com/Azure-Samples/media-services-v3-dotnet/).
media-services Transform Subclip Video Dotnet How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/transform-subclip-video-dotnet-how-to.md
- Title: Subclip a video when encoding with Media Services
-description: This topic describes how to subclip a video when encoding with Azure Media Services using .NET SDK
------ Previously updated : 06/09/2019---
-# Subclip a video when encoding with Media Services - .NET
-
-You can trim or subclip a video when encoding it using a [Job](/rest/api/media/jobs). This functionality works with any [Transform](/rest/api/media/transforms) that is built using either the [BuiltInStandardEncoderPreset](/rest/api/media/transforms/createorupdate#builtinstandardencoderpreset) presets, or the [StandardEncoderPreset](/rest/api/media/transforms/createorupdate#standardencoderpreset) presets.
-
-The following C# example creates a job that trims a video in an Asset as it submits an encoding job.
-
-## Prerequisites
-
-To complete the steps described in this topic, you have to:
-- [Create an Azure Media Services account](./account-create-how-to.md)
-- Create a Transform and an input and output Assets. You can see how to create a Transform and input and output Assets in the [Upload, encode, and stream videos using .NET](stream-files-tutorial-with-api.md) tutorial.
-- Review the [Encoding concept](encode-concept.md) topic.
-
-## Example
-
-```csharp
-/// <summary>
-/// Submits a request to Media Services to apply the specified Transform to a given input video.
-/// </summary>
-/// <param name="client">The Media Services client.</param>
-/// <param name="resourceGroupName">The name of the resource group within the Azure subscription.</param>
-/// <param name="accountName"> The Media Services account name.</param>
-/// <param name="transformName">The name of the transform.</param>
-/// <param name="jobName">The (unique) name of the job.</param>
-/// <param name="inputAssetName">The name of the input asset.</param>
-/// <param name="outputAssetName">The (unique) name of the output asset that will store the result of the encoding job. </param>
-// <SubmitJob>
-private static async Task<Job> JobWithBuiltInStandardEncoderWithSingleClipAsync(
- IAzureMediaServicesClient client,
- string resourceGroupName,
- string accountName,
- string transformName,
- string jobName,
- string inputAssetName,
- string outputAssetName)
-{
- var jobOutputs = new List<JobOutputAsset>
- {
- new JobOutputAsset(state: JobState.Queued, progress: 0, assetName: outputAssetName)
- };
-
- var clipStart = new AbsoluteClipTime()
- {
- Time = new TimeSpan(0, 0, 20)
- };
-
- var clipEnd = new AbsoluteClipTime()
- {
- Time = new TimeSpan(0, 0, 30)
- };
-
- var jobInput = new JobInputAsset(assetName: inputAssetName, start: clipStart, end: clipEnd);
-
- Job job = await client.Jobs.CreateAsync(
- resourceGroupName,
- accountName,
- transformName,
- jobName,
- new Job(input: jobInput, outputs: jobOutputs.ToArray(), name: jobName)
- {
- Description = $"A Job with transform {transformName} and single clip.",
- Priority = Priority.Normal,
- });
-
- return job;
-}
-```
-
-## Next steps
-
-[How to encode with a custom transform](transform-custom-presets-how-to.md)
media-services Transform Subclip Video How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/transform-subclip-video-how-to.md
+
+ Title: Subclip a video when encoding with Media Services
+description: This topic describes how to subclip a video when encoding with Azure Media Services.
++++ Last updated : 03/09/2022+++
+# Subclip a video
+
+You can trim or subclip a video when encoding it using a Media Services Job.
+
+This functionality works with any Transform that is built using either the BuiltInStandardEncoderPreset or the StandardEncoderPreset presets.
+
+## [REST](#tab/rest/)
+
+## Subclip a video when encoding with Media Services - REST
+
+You can trim or subclip a video when encoding it using a [Job](/rest/api/media/jobs). This functionality works with any [Transform](/rest/api/media/transforms) that is built using either the [BuiltInStandardEncoderPreset](/rest/api/media/transforms/createorupdate#builtinstandardencoderpreset) or the [StandardEncoderPreset](/rest/api/media/transforms/createorupdate#standardencoderpreset) presets.
+
+The REST example in this topic creates a job that trims a video as it submits an encoding job.
++
+## Prerequisites
+
+To complete the steps described in this topic, you have to:
+
+- [Create an Azure Media Services account](./account-create-how-to.md).
+- [Configure Postman for Azure Media Services REST API calls](setup-postman-rest-how-to.md).
+
+ Make sure to follow the last step in the topic [Get Azure AD Token](setup-postman-rest-how-to.md#get-azure-ad-token).
+- Create a Transform and an output Asset. You can see how to create a Transform and an output Asset in the [Encode a remote file based on URL and stream the video - REST](stream-files-tutorial-with-rest.md) tutorial.
+- Review the [Encoding concept](encode-concept.md) topic.
+
+## Create a subclipping job
+
+1. In the Postman collection that you downloaded, select **Transforms and jobs** -> **Create Job with Sub Clipping**.
+
+ The **PUT** request looks like this:
+
+ ```
+ https://management.azure.com/subscriptions/:subscriptionId/resourceGroups/:resourceGroupName/providers/Microsoft.Media/mediaServices/:accountName/transforms/:transformName/jobs/:jobName?api-version={{api-version}}
+ ```
+1. Update the value of the "transformName" environment variable with your transform name.
+1. Select the **Body** tab and replace "myOutputAsset" with your output Asset name.
+
+ ```json
+ {
+ "properties": {
+ "description": "A Job with transform cb9599fb-03b3-40eb-a2ff-7ea909f53735 and single clip.",
+
+ "input": {
+ "@odata.type": "#Microsoft.Media.JobInputHttp",
+ "baseUri": "https://nimbuscdn-nimbuspm.streaming.mediaservices.windows.net/2b533311-b215-4409-80af-529c3e853622/",
+ "files": [
+ "Ignite-short.mp4"
+ ],
+ "start": {
+ "@odata.type": "#Microsoft.Media.AbsoluteClipTime",
+ "time": "PT10S"
+ },
+ "end": {
+ "@odata.type": "#Microsoft.Media.AbsoluteClipTime",
+ "time": "PT40S"
+ }
+ },
+
+ "outputs": [
+ {
+ "@odata.type": "#Microsoft.Media.JobOutputAsset",
+ "assetName": "myOutputAsset"
+ }
+ ],
+ "priority": "Normal"
+ }
+ }
+ ```
+1. Press **Send**.
+
+ You see the **Response** with the info about the job that was created and submitted and the job's status.
+
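The `PT10S` and `PT40S` values in the request body above are ISO 8601 durations, the same times the .NET sample expresses as `TimeSpan` values. A small sketch of producing such strings from plain seconds (the helper is illustrative, not an SDK function, and handles whole-second granularity only):

```python
def to_iso8601_duration(total_seconds):
    """Format whole seconds as a simple ISO 8601 duration (PTnHnMnS),
    as used by AbsoluteClipTime 'time' values. Illustrative sketch only."""
    h, rem = divmod(total_seconds, 3600)
    m, s = divmod(rem, 60)
    out = "PT"
    if h:
        out += f"{h}H"
    if m:
        out += f"{m}M"
    if s or out == "PT":
        out += f"{s}S"
    return out

print(to_iso8601_duration(10))   # PT10S
print(to_iso8601_duration(40))   # PT40S
print(to_iso8601_duration(330))  # PT5M30S
```

For example, the .NET sample's `TimeSpan(0, 0, 20)` clip start corresponds to `PT20S` in a REST body.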
+## [.NET](#tab/net/)
+
+The following C# example creates a job that trims a video in an Asset as it submits an encoding job.
+
+## Prerequisites
+
+To complete the steps described in this topic, you have to:
+
+- [Create an Azure Media Services account](./account-create-how-to.md)
+- Create a Transform and an input and output Assets. You can see how to create a Transform and input and output Assets in the [Upload, encode, and stream videos using .NET](stream-files-tutorial-with-api.md) tutorial.
+- Review the [Encoding concept](encode-concept.md) topic.
+
+## Example
+
+```csharp
+/// <summary>
+/// Submits a request to Media Services to apply the specified Transform to a given input video.
+/// </summary>
+/// <param name="client">The Media Services client.</param>
+/// <param name="resourceGroupName">The name of the resource group within the Azure subscription.</param>
+/// <param name="accountName"> The Media Services account name.</param>
+/// <param name="transformName">The name of the transform.</param>
+/// <param name="jobName">The (unique) name of the job.</param>
+/// <param name="inputAssetName">The name of the input asset.</param>
+/// <param name="outputAssetName">The (unique) name of the output asset that will store the result of the encoding job. </param>
+// <SubmitJob>
+private static async Task<Job> JobWithBuiltInStandardEncoderWithSingleClipAsync(
+ IAzureMediaServicesClient client,
+ string resourceGroupName,
+ string accountName,
+ string transformName,
+ string jobName,
+ string inputAssetName,
+ string outputAssetName)
+{
+ var jobOutputs = new List<JobOutputAsset>
+ {
+ new JobOutputAsset(state: JobState.Queued, progress: 0, assetName: outputAssetName)
+ };
+
+ var clipStart = new AbsoluteClipTime()
+ {
+ Time = new TimeSpan(0, 0, 20)
+ };
+
+ var clipEnd = new AbsoluteClipTime()
+ {
+ Time = new TimeSpan(0, 0, 30)
+ };
+
+ var jobInput = new JobInputAsset(assetName: inputAssetName, start: clipStart, end: clipEnd);
+
+ Job job = await client.Jobs.CreateAsync(
+ resourceGroupName,
+ accountName,
+ transformName,
+ jobName,
+ new Job(input: jobInput, outputs: jobOutputs.ToArray(), name: jobName)
+ {
+ Description = $"A Job with transform {transformName} and single clip.",
+ Priority = Priority.Normal,
+ });
+
+ return job;
+}
+
+```
mysql How To Troubleshoot Cli Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-troubleshoot-cli-errors.md
description: This topic gives guidance on troubleshooting common issues with Azu
-+ Last updated 08/24/2021 # Troubleshoot Azure Database for MySQL Flexible Server CLI errors+ [!INCLUDE[applies-to-mysql-flexible-server](../includes/applies-to-mysql-flexible-server.md)] This doc will help you troubleshoot common issues with Azure CLI when using MySQL Flexible Server.
This doc will help you troubleshoot common issues with Azure CLI when using MySQ
If you receive an error that a command **is misspelled or not recognized by the system**, the CLI version on your client machine may not be up to date. Run ```az upgrade``` to upgrade to the latest version. Upgrading your CLI version can help resolve incompatibilities with a command caused by API changes.
-
## Debug deployment failures + Currently, Azure CLI doesn't support turning on debug logging, but you can retrieve debug logs by following the steps below. >[!NOTE]
+>
> - Replace ```examplegroup``` and ```exampledeployment``` with the correct resource group and deployment name for your database server. > - You can see the Deployment name in the deployments page in your resource group. See [how to find the deployment name](../../azure-resource-manager/templates/deployment-history.md?tabs=azure-portal).
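As a sketch, you can list the operations for the failed deployment with the Azure CLI, using the `examplegroup` and `exampledeployment` names from the note above. The command is only assembled and printed here, not executed; run it yourself against your subscription.

```shell
# Names from the note above -- replace with your own resource group and deployment.
resourceGroup="examplegroup"
deploymentName="exampledeployment"

# Assemble the CLI call that lists each deployment operation and its status.
cmd="az deployment operation group list --resource-group ${resourceGroup} --name ${deploymentName}"
echo "$cmd"
```

The output of that command includes each operation's provisioning state and status message, which usually pinpoints the failing step.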
Currently, Azure CLI doesn't support turning on debug logging, but you can retri
## Next steps -- If you are still experiencing issues, please [report the issue](https://github.com/Azure/azure-cli/issues).
+- If you are still experiencing issues, please [report the issue](https://github.com/Azure/azure-cli/issues).
- If you have questions, visit our Stack Overflow page: https://aka.ms/azcli/questions. - Let us know how we are doing with this short survey https://aka.ms/azureclihats.
mysql Sample Cli Audit Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/scripts/sample-cli-audit-logs.md
This sample CLI script enables [audit logs](../concepts-audit-logs.md) on an Azu
### Run the script ## Clean up resources
mysql Sample Cli Change Server Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/scripts/sample-cli-change-server-parameters.md
This sample CLI script lists all available [server parameters](../concepts-serve
### Run the script ## Clean up resources
mysql Sample Cli Create Connect Private Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/scripts/sample-cli-create-connect-private-access.md
This sample CLI script creates an Azure Database for MySQL - Flexible Server in
### Run the script ## Test connectivity to the MySQL server from the VM
mysql Sample Cli Create Connect Public Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/scripts/sample-cli-create-connect-public-access.md
Once the script runs successfully, the MySQL Flexible Server will be accessible
### Run the script ## Clean up resources
mysql Sample Cli Monitor And Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/scripts/sample-cli-monitor-and-scale.md
This sample CLI script scales compute, storage and IOPS for a single Azure Datab
### Run the script ## Clean up resources
mysql Sample Cli Read Replicas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/scripts/sample-cli-read-replicas.md
This sample CLI script creates and manages [read replicas](../concepts-read-repl
### Run the script ## Clean up resources
mysql Sample Cli Restart Stop Start https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/scripts/sample-cli-restart-stop-start.md
Also, see [stop/start limitations](../concepts-limitations.md#stopstart-operatio
### Run the script ## Clean up resources
mysql Sample Cli Restore Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/scripts/sample-cli-restore-server.md
The new Flexible Server is created with the original server's configuration and
### Run the script ## Clean up resources
mysql Sample Cli Same Zone Ha https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/scripts/sample-cli-same-zone-ha.md
Currently, Same-Zone high availability is supported only for the General purpose
### Run the script ## Clean up resources
mysql Sample Cli Slow Query Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/scripts/sample-cli-slow-query-logs.md
This sample CLI script configures [slow query logs](../concepts-slow-query-logs.
### Run the script ## Clean up resources
mysql Sample Cli Zone Redundant Ha https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/scripts/sample-cli-zone-redundant-ha.md
Currently, Zone-Redundant high availability is supported only for the General pu
### Run the script ## Clean up resources
mysql How To Fix Corrupt Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/how-to-fix-corrupt-database.md
description: In this article, you'll learn about how to fix database corruption
-+ Last updated 09/21/2020
You typically notice a database or table is corrupt when your application access
## Use the dump and restore method We recommend that you resolve corruption problems by using a *dump and restore* method. This method involves:+ 1. Accessing the corrupt table.
-1. Using the mysqldump utility to create a logical backup of the table. The backup will retain the table structure and the data within it.
-1. Reloading the table into the database.
+2. Using the mysqldump utility to create a logical backup of the table. The backup will retain the table structure and the data within it.
+3. Reloading the table into the database.
### Back up your database or tables > [!Important]
+>
> - Make sure you have configured a firewall rule to access the server from your client machine. For more information, see [configure a firewall rule on Single Server](howto-manage-firewall-using-portal.md) and [configure a firewall rule on Flexible Server](flexible-server/how-to-connect-tls-ssl.md). > - Use SSL option `--ssl-cert` for mysqldump if you have SSL enabled.
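A minimal mysqldump invocation for the backup step might look like the following sketch. The server, admin user, database, and table names are hypothetical; the command is only assembled and printed here so you can adapt it before running it against your own server.

```shell
# Hypothetical names -- replace with your server, user, database, and table.
host="mydemoserver.mysql.database.azure.com"
user="myadmin"
database="mydb"
table="mycorrupttable"

# Dump a single table's structure and data to a logical backup file.
cmd="mysqldump -h ${host} -u ${user} -p ${database} ${table}"
echo "$cmd > table-backup.sql"

# If SSL is enforced on the server, append: --ssl-cert=<path-to-client-cert>
```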
mysql Howto Data Encryption Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-data-encryption-troubleshoot.md
description: Learn how to troubleshoot data encryption in Azure Database for MyS
-+ Last updated 02/13/2020
mysql Howto Manage Vnet Using Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-manage-vnet-using-cli.md
VNets and Azure service resources can be in the same or different subscriptions.
### Run the script ## Clean up resources
mysql Howto Troubleshoot Common Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-troubleshoot-common-errors.md