Updates from: 05/06/2021 03:05:50
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Plan Cloud Hr Provision https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/plan-cloud-hr-provision.md
You also need a valid Azure AD Premium P1 or higher subscription license for eve
- Azure AD [application administrator](../roles/permissions-reference.md#application-administrator) role to configure the provisioning app in the Azure portal
- A test and production instance of the cloud HR app.
- Administrator permissions in the cloud HR app to create a system integration user and make changes to test employee data for testing purposes.
-- For user provisioning to Active Directory, a server running Windows Server 2012 or greater with .NET 4.7.1+ runtime is required to host the Azure AD Connect provisioning agent
+- For user provisioning to Active Directory, a server running Windows Server 2016 or greater is required to host the Azure AD Connect provisioning agent. This server should be a tier 0 server based on the Active Directory administrative tier model.
- [Azure AD Connect](../hybrid/whatis-azure-ad-connect.md) for synchronizing users between Active Directory and Azure AD.

### Training resources
To troubleshoot any issues that might turn up during provisioning, see the follo
- [Writing expressions for attribute mappings](functions-for-customizing-application-data.md)
- [Azure AD synchronization API overview](/graph/api/resources/synchronization-overview)
- [Skip deletion of user accounts that go out of scope](skip-out-of-scope-deletions.md)
-- [Azure AD Connect Provisioning Agent: Version release history](provisioning-agent-release-version-history.md)
+- [Azure AD Connect Provisioning Agent: Version release history](provisioning-agent-release-version-history.md)
active-directory Howto Authentication Passwordless Faqs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/howto-authentication-passwordless-faqs.md
For more information how to register and use FIDO2 security keys, see [Enable pa
No, not at this time.
+### Why am I getting "NotAllowedError" in the browser when registering FIDO2 keys?
+
+You may receive "NotAllowedError" from the FIDO2 key registration page. This typically happens when the user is in a private (incognito) browser window or in a remote desktop session, where FIDO2 private key access isn't possible.
+
## Prerequisites

* [Does this feature work if there's no internet connectivity?](#does-this-feature-work-if-theres-no-internet-connectivity)
active-directory Howto Authentication Use Email Signin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/howto-authentication-use-email-signin.md
Title: Sign in with email as an alternate login ID for Azure Active Directory
-description: Learn how to configure and enable users to sign in to Azure Active Directory using their email address as an alternate login ID (preview)
+ Title: Sign-in to Azure AD with email as an alternate login ID
+description: Learn how to enable users to sign in to Azure Active Directory with their email as an alternate login ID
Previously updated : 10/01/2020 Last updated : 5/3/2021
-# Sign-in to Azure Active Directory using email as an alternate login ID (preview)
+# Sign-in to Azure AD with email as an alternate login ID (Preview)
> [!NOTE]
-> Sign in to Azure AD with email as an alternate login ID is a public preview feature of Azure Active Directory. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+> Sign-in to Azure AD with email as an alternate login ID is a public preview feature of Azure Active Directory. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
Many organizations want to let users sign in to Azure Active Directory (Azure AD) using the same credentials as their on-premises directory environment. With this approach, known as hybrid authentication, users only need to remember one set of credentials. Some organizations haven't moved to hybrid authentication for the following reasons:
-* By default, the Azure AD user principal name (UPN) is set to the same UPN as the on-premises directory.
-* Changing the Azure AD UPN creates a mis-match between on-prem and Azure AD environments that could cause problems with certain applications and services.
+* By default, the Azure AD User Principal Name (UPN) is set to the same value as the on-premises UPN.
+* Changing the Azure AD UPN creates a mismatch between on-premises and Azure AD environments that could cause problems with certain applications and services.
* Due to business or compliance reasons, the organization doesn't want to use the on-premises UPN to sign in to Azure AD.
-To help with the move to hybrid authentication, you can now configure Azure AD to let users sign in with an email in your verified domain as an alternate login ID. For example, if *Contoso* rebranded to *Fabrikam*, rather than continuing to sign in with the legacy `balas@contoso.com` UPN, email as an alternate login ID can now be used. To access an application or services, users would sign in to Azure AD using their assigned email, such as `balas@fabrikam.com`.
+To help with the move to hybrid authentication, you can configure Azure AD to let users sign in with their email as an alternate login ID. For example, if *Contoso* rebranded to *Fabrikam*, rather than continuing to sign in with the legacy `balas@contoso.com` UPN, email as an alternate login ID can be used. To access an application or service, users would sign in to Azure AD using their non-UPN email, such as `balas@fabrikam.com`.
-This article shows you how to enable and use email as an alternate login ID. This feature is available in the Azure AD Free edition and higher.
+This article shows you how to enable and use email as an alternate login ID.
-> [!NOTE]
-> This feature is for cloud-authenticated Azure AD users only.
+## Before you begin
-> [!NOTE]
-> Currently, this feature is not supported on Azure AD joined Windows 10 devices for tenants with cloud authentication. This feature is not applicable to Hybrid Azure AD joined devices.
+Here's what you need to know about email as an alternate login ID:
+
+* The feature is available in Azure AD Free edition and higher.
+* The feature enables sign-in with verified domain *ProxyAddresses* for cloud-authenticated Azure AD users.
+* When a user signs in with a non-UPN email, the `unique_name` and `preferred_username` claims (if present) in the [ID token](https://docs.microsoft.com/azure/active-directory/develop/id-tokens) will have the value of the non-UPN email (see the decoding sketch after this list).
+* There are two options for configuring the feature:
+ * [Home Realm Discovery (HRD) policy](#enable-user-sign-in-with-an-email-address) - Use this option to enable the feature for the entire tenant. Global administrator privileges required.
+ * [Staged rollout policy](#enable-staged-rollout-to-test-user-sign-in-with-an-email-address) - Use this option to test the feature with specific Azure AD groups. Global administrator privileges required.
+
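As referenced in the list above, a quick way to inspect these claims is to base64url-decode the token's payload segment. A minimal sketch, assuming `$idToken` holds a raw ID token string from a test sign-in:

```powershell
# Sketch: decode an ID token (JWT) payload and inspect the relevant claims.
# $idToken is assumed to hold a raw token string; use test tokens only.
$payload = $idToken.Split('.')[1].Replace('-', '+').Replace('_', '/')
switch ($payload.Length % 4) { 2 { $payload += '==' } 3 { $payload += '=' } }
[Text.Encoding]::UTF8.GetString([Convert]::FromBase64String($payload)) |
    ConvertFrom-Json | Select-Object unique_name, preferred_username
```
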
+## Preview limitations
+
+In the current preview state, the following limitations apply to email as an alternate login ID:
+
+* Users may see their UPN, even when they signed in with their non-UPN email. The following example behavior may be seen:
+ * User is prompted to sign in with UPN when directed to Azure AD sign-in with `login_hint=<non-UPN email>`.
+ * When a user signs in with a non-UPN email and enters an incorrect password, the *"Enter your password"* page changes to display the UPN.
+ * On some Microsoft sites and apps, such as Microsoft Office, the **Account Manager** control typically displayed in the upper right may display the user's UPN instead of the non-UPN email used to sign in.
-## Overview of Azure AD sign-in approaches
+* Some flows are currently not compatible with non-UPN emails, such as the following:
+ * Identity Protection doesn't match non-UPN emails with *Leaked Credentials* risk detection. This risk detection uses the UPN to match credentials that have been leaked. For more information, see [Azure AD Identity Protection risk detection and remediation][identity-protection].
+ * B2B invites sent to a non-UPN email are not fully supported. After accepting an invite sent to a non-UPN email, signing in with the non-UPN email may not work for the guest user on the resource tenant endpoint.
+ * When a user is signed in with a non-UPN email, they cannot change their password. Azure AD self-service password reset (SSPR) should work as expected. During SSPR, the user may see their UPN if they verify their identity via alternate email.
-To sign in to Azure AD, users enter a name that uniquely identifies their account. Historically, you could only use the Azure AD UPN as the sign-in name.
+* The following scenarios are not supported. Sign-in with non-UPN email to:
+ * Hybrid Azure AD joined devices
+ * Azure AD joined devices
+ * Skype for Business
+ * Microsoft Office on macOS
+ * OneDrive (when the sign-in flow does not involve Multi-Factor Authentication)
+ * Microsoft Teams on web
+ * Resource Owner Password Credentials (ROPC) flows
+
+* Changes made to the feature's configuration in HRD policy are not explicitly shown in the audit logs.
+* Staged rollout policy does not work as expected for users that are included in multiple staged rollout policies.
+* Within a tenant, a cloud-only user's UPN can be the same value as another user's proxy address synced from the on-premises directory. In this scenario, with the feature enabled, the cloud-only user will not be able to sign in with their UPN. More on this issue in the [Troubleshoot](#troubleshoot) section.
+
+## Overview of alternate login ID options
+
+To sign in to Azure AD, users enter a value that uniquely identifies their account. Historically, you could only use the Azure AD UPN as the sign-in identifier.
For organizations where the on-premises UPN is the user's preferred sign-in email, this approach was great. Those organizations would set the Azure AD UPN to the exact same value as the on-premises UPN, and users would have a consistent sign-in experience.
-However, in some organizations the on-premises UPN isn't used as a sign-in name. In the on-premises environments, you would configure the local AD DS to allow sign in with an alternate login ID. Setting the Azure AD UPN to the same value as the on-premises UPN isn't an option as Azure AD would then require users sign in with that value.
+### Alternate Login ID for AD FS
-The typical workaround to this issue was to set the Azure AD UPN to the email address the user expects to sign in with. This approach works, though results in different UPNs between the on-premises AD and in Azure AD, and this configuration isn't compatible with all Microsoft 365 workloads.
+However, in some organizations the on-premises UPN isn't used as a sign-in identifier. In the on-premises environments, you would configure the local AD DS to allow sign-in with an alternate login ID. Setting the Azure AD UPN to the same value as the on-premises UPN isn't an option as Azure AD would then require users to sign in with that value.
-A different approach is to synchronize the Azure AD and on-premises UPNs to the same value and then configure Azure AD to allow users to sign in to Azure AD with a verified email. To provide this ability, you define one or more email addresses in the user's *ProxyAddresses* attribute in the on-premises directory. *ProxyAddresses* are then synchronized to Azure AD automatically using Azure AD Connect.
+### Alternate Login ID in Azure AD Connect
-## Preview limitations
+The typical workaround to this issue was to set the Azure AD UPN to the email address the user expects to sign in with. This approach works, though results in different UPNs between the on-premises AD and Azure AD, and this configuration isn't compatible with all Microsoft 365 workloads.
-Sign in to Azure AD with email as an alternate login ID is available in the Azure AD Free edition and higher.
+### Email as an Alternate Login ID
-In the current preview state, the following limitations apply when a user signs in with a non-UPN email as an alternate login ID:
+A different approach is to synchronize the Azure AD and on-premises UPNs to the same value and then configure Azure AD to allow users to sign in to Azure AD with a verified email. To provide this ability, you define one or more email addresses in the user's *ProxyAddresses* attribute in the on-premises directory. *ProxyAddresses* are then synchronized to Azure AD automatically using Azure AD Connect.
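
As an illustration, here's a minimal sketch of stamping a sign-in email into the on-premises *ProxyAddresses* attribute so that Azure AD Connect picks it up. It assumes the RSAT *ActiveDirectory* PowerShell module; the user `balas` and the address are hypothetical:

```powershell
# Sketch: add a secondary sign-in email on-premises (user and address hypothetical)
Import-Module ActiveDirectory

# Lowercase "smtp:" marks a secondary address; uppercase "SMTP:" marks the primary
Set-ADUser -Identity "balas" -Add @{proxyAddresses = "smtp:balas@fabrikam.com"}

# Verify the attribute before the next Azure AD Connect sync cycle runs
Get-ADUser -Identity "balas" -Properties proxyAddresses |
    Select-Object -ExpandProperty proxyAddresses
```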
-* Users may see their UPN, even when the signed in with their non-UPN email. The following example behavior may be seen:
- * User is prompted to sign in with UPN when directed to Azure AD sign-in with `login_hint=<non-UPN email>`.
- * When a user signs in with a non-UPN email and enters an incorrect password, the *"Enter your password"* page changes to display the UPN.
- * On some Microsoft sites and apps, such as [https://portal.azure.com](https://portal.azure.com) and Microsoft Office, the **Account Manager** control typically displayed in the upper right may display the user's UPN instead of the non-UPN email used to sign in.
-* Some flows are currently not compatible with the non-UPN email, such as the following:
- * Identity protection currently doesn't match email alternate login IDs with *Leaked Credentials* risk detection. This risk detection uses the UPN to match credentials that have been leaked. For more information, see [Azure AD Identity Protection risk detection and remediation][identity-protection].
- * B2B invites sent to an alternate login ID email aren't fully supported. After accepting an invite sent to an email as an alternate login ID, sign in with the alternate email may not work for the user on the tenanted endpoint.
+| Option | Description |
+|---|---|
+| [Alternate Login ID for AD FS](https://docs.microsoft.com/windows-server/identity/ad-fs/operations/configuring-alternate-login-id) | Enable sign-in with an alternate attribute (such as Mail) for AD FS users. |
+| [Alternate Login ID in Azure AD Connect](https://docs.microsoft.com/azure/active-directory/hybrid/plan-connect-userprincipalname#alternate-login-id) | Synchronize an alternate attribute (such as Mail) as the Azure AD UPN. |
+| Email as an Alternate Login ID | Enable sign-in with verified domain *ProxyAddresses* for Azure AD users. |
## Synchronize sign-in email addresses to Azure AD

Traditional Active Directory Domain Services (AD DS) or Active Directory Federation Services (AD FS) authentication happens directly on your network and is handled by your AD DS infrastructure. With hybrid authentication, users can instead sign in directly to Azure AD.
-To support this hybrid authentication approach, you synchronize your on-premises AD DS environment to Azure AD using [Azure AD Connect][azure-ad-connect] and configure it to use Password Hash Sync (PHS) or Pass-Through Authentication (PTA).
+To support this hybrid authentication approach, you synchronize your on-premises AD DS environment to Azure AD using [Azure AD Connect][azure-ad-connect] and configure it to use Password Hash Sync (PHS) or Pass-Through Authentication (PTA). For more information, see [Choose the right authentication method for your Azure AD hybrid identity solution][hybrid-auth-methods].
In both configuration options, the user submits their username and password to Azure AD, which validates the credentials and issues a ticket. When users sign in to Azure AD, it removes the need for your organization to host and manage an AD FS infrastructure.
-![Diagram of Azure AD hybrid identity with password hash synchronization](media/howto-authentication-use-email-signin/hybrid-password-hash-sync.png)
-
-![Diagram of Azure AD hybrid identity with pass-through authentication](media/howto-authentication-use-email-signin/hybrid-pass-through-authentication.png)
One of the user attributes that's automatically synchronized by Azure AD Connect is *ProxyAddresses*. If users have an email address defined in the on-prem AD DS environment as part of the *ProxyAddresses* attribute, it's automatically synchronized to Azure AD. This email address can then be used directly in the Azure AD sign-in process as an alternate login ID.

> [!IMPORTANT]
One of the user attributes that's automatically synchronized by Azure AD Connect
>
> For more information, see [Add and verify a custom domain name in Azure AD][verify-domain].
-For more information, see [Choose the right authentication method for your Azure AD hybrid identity solution][hybrid-auth-methods].
- ## Enable user sign-in with an email address
-Once users with the *ProxyAddresses* attribute applied are synchronized to Azure AD using Azure AD Connect, you need to enable the feature for users to sign in with email as an alternate login ID for your tenant. This feature tells the Azure AD login servers to not only check the sign-in name against UPN values, but also against *ProxyAddresses* values for the email address.
+> [!NOTE]
+> This configuration option uses HRD policy. For more information, see [homeRealmDiscoveryPolicy resource type](https://docs.microsoft.com/graph/api/resources/homeRealmDiscoveryPolicy?view=graph-rest-1.0).
+
+Once users with the *ProxyAddresses* attribute applied are synchronized to Azure AD using Azure AD Connect, you need to enable the feature for users to sign in with email as an alternate login ID for your tenant. This feature tells the Azure AD login servers to not only check the sign-in identifier against UPN values, but also against *ProxyAddresses* values for the email address.
-During preview, you can currently only enable the sign-in with email as an alternate login ID feature using PowerShell. You need *tenant administrator* permissions to complete the following steps:
+During preview, you can currently only enable the sign-in with email as an alternate login ID feature using PowerShell. You need *global administrator* permissions to complete the following steps:
-1. Open an PowerShell session as an administrator, then install the *AzureADPreview* module using the [Install-Module][Install-Module] cmdlet:
+1. Open a PowerShell session as an administrator, then install the *AzureADPreview* module using the [Install-Module][Install-Module] cmdlet:
```powershell
Install-Module AzureADPreview
```
During preview, you can currently only enable the sign-in with email as an alter
If prompted, select **Y** to install NuGet or to install from an untrusted repository.
-1. Sign in to your Azure AD tenant as a *tenant administrator* using the [Connect-AzureAD][Connect-AzureAD] cmdlet:
+1. Sign in to your Azure AD tenant as a *global administrator* using the [Connect-AzureAD][Connect-AzureAD] cmdlet:
```powershell
Connect-AzureAD
```
During preview, you can currently only enable the sign-in with email as an alter
The command returns information about your account, environment, and tenant ID.
-1. Check if the *HomeRealmDiscoveryPolicy* policy already exists in your tenant using the [Get-AzureADPolicy][Get-AzureADPolicy] cmdlet as follows:
+1. Check if the *HomeRealmDiscoveryPolicy* already exists in your tenant using the [Get-AzureADPolicy][Get-AzureADPolicy] cmdlet as follows:
```powershell
Get-AzureADPolicy | Where-Object Type -eq "HomeRealmDiscoveryPolicy" | Format-List *
```
During preview, you can currently only enable the sign-in with email as an alter
```powershell
Get-AzureADPolicy | Where-Object Type -eq "HomeRealmDiscoveryPolicy" | Format-List *
```
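
If no policy is returned, here's a minimal sketch of creating a tenant-default HRD policy with *AlternateIdLogin* enabled in the same session (the display name is illustrative; if a policy already exists, update its definition with [Set-AzureADPolicy][Set-AzureADPolicy] instead):

```powershell
# Sketch: create a tenant-default HRD policy enabling email as an alternate login ID.
# The DisplayName is illustrative; adjust to your naming convention.
New-AzureADPolicy -Definition @('{"HomeRealmDiscoveryPolicy":{"AlternateIdLogin":{"Enabled":true}}}') `
    -DisplayName "BasicAutoAccelerationPolicy" `
    -IsOrganizationDefault $true `
    -Type "HomeRealmDiscoveryPolicy"
```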
-With the policy applied, it can take up to an hour to propagate and for users to be able to sign in using their alternate login ID.
-
-## Test user sign-in with email
-
-To test that users can sign in with email, browse to [https://myprofile.microsoft.com][my-profile] and sign in with a user account based on their email address, such as `balas@fabrikam.com`, not their UPN, such as `balas@contoso.com`. The sign-in experience should look and feel the same as with a UPN-based sign-in event.
+With the policy applied, it can take up to 1 hour to propagate and for users to be able to sign in using their alternate login ID.
## Enable staged rollout to test user sign-in with an email address
-[Staged rollout][staged-rollout] allows tenant administrators to enable features for specific groups. It is recommended that tenant administrators use staged rollout to test user sign-in with an email address. When administrators are ready to deploy this feature to their entire tenant, they should use a Home Realm Discovery policy.
+> [!NOTE]
+>This configuration option uses staged rollout policy. For more information, see [featureRolloutPolicy resource type](https://docs.microsoft.com/graph/api/resources/featurerolloutpolicy?view=graph-rest-1.0).
+
+Staged rollout policy allows tenant administrators to enable features for specific Azure AD groups. It is recommended that tenant administrators use staged rollout to test user sign-in with an email address. When administrators are ready to deploy this feature to their entire tenant, they should use [HRD policy](#enable-user-sign-in-with-an-email-address).
-You need *tenant administrator* permissions to complete the following steps:
+You need *global administrator* permissions to complete the following steps:
1. Open a PowerShell session as an administrator, then install the *AzureADPreview* module using the [Install-Module][Install-Module] cmdlet:
You need *tenant administrator* permissions to complete the following steps:
If prompted, select **Y** to install NuGet or to install from an untrusted repository.
-2. Sign in to your Azure AD tenant as a *tenant administrator* using the [Connect-AzureAD][Connect-AzureAD] cmdlet:
+1. Sign in to your Azure AD tenant as a *global administrator* using the [Connect-AzureAD][Connect-AzureAD] cmdlet:
```powershell
Connect-AzureAD
```
You need *tenant administrator* permissions to complete the following steps:
The command returns information about your account, environment, and tenant ID.
-3. List all existing staged rollout policies using the following cmdlet:
+1. List all existing staged rollout policies using the following cmdlet:
```powershell
Get-AzureADMSFeatureRolloutPolicy
```
-4. If there are no existing staged rollout policies for this feature, create a new staged rollout policy and take note of the policy ID:
+1. If there are no existing staged rollout policies for this feature, create a new staged rollout policy and take note of the policy ID:
```powershell
$AzureADMSFeatureRolloutPolicy = @{
You need *tenant administrator* permissions to complete the following steps:
New-AzureADMSFeatureRolloutPolicy @AzureADMSFeatureRolloutPolicy
```
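
The digest above truncates the body of the policy hashtable; a plausible completion, assuming the feature name `EmailAsAlternateId` (the display name is illustrative):

```powershell
# Sketch of the full splatted policy; feature name assumed, display name illustrative
$AzureADMSFeatureRolloutPolicy = @{
    Feature     = "EmailAsAlternateId"
    DisplayName = "EmailAsAlternateId rollout policy"
    IsEnabled   = $true
}
New-AzureADMSFeatureRolloutPolicy @AzureADMSFeatureRolloutPolicy
```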
-5. Find the directoryObject ID for the group to be added to the staged rollout policy. Note the value returned for the *Id* parameter, because it will be used in the next step.
+1. Find the directoryObject ID for the group to be added to the staged rollout policy. Note the value returned for the *Id* parameter, because it will be used in the next step.
```powershell
Get-AzureADMSGroup -SearchString "Name of group to be added to the staged rollout policy"
```
-6. Add the group to the staged rollout policy as shown in the following example. Replace the value in the *-Id* parameter with the value returned for the policy ID in step 4 and replace the value in the *-RefObjectId* parameter with the *Id* noted in step 5. It may take up to 1 hour before users in the group can use their proxy addresses to sign-in.
+1. Add the group to the staged rollout policy as shown in the following example. Replace the value in the *-Id* parameter with the value returned for the policy ID in step 4 and replace the value in the *-RefObjectId* parameter with the *Id* noted in step 5. It may take up to 1 hour before users in the group can sign in to Azure AD with email as an alternate login ID.
```powershell
Add-AzureADMSFeatureRolloutPolicyDirectoryObject -Id "ROLLOUT_POLICY_ID" -RefObjectId "GROUP_OBJECT_ID"
```
-For new members added to the group, it may take up to 24 hours before they can use their proxy addresses to sign-in.
+For new members added to the group, it may take up to 24 hours before they can sign in to Azure AD with email as an alternate login ID.
### Removing groups
Set-AzureADMSFeatureRolloutPolicy -Id "ROLLOUT_POLICY_ID" -IsEnabled $false
Remove-AzureADMSFeatureRolloutPolicy -Id "ROLLOUT_POLICY_ID"
```
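
To remove a single group from the policy without deleting the policy itself, a sketch assuming the companion *AzureADPreview* cmdlet `Remove-AzureADMSFeatureRolloutPolicyDirectoryObject`:

```powershell
# Remove one group from the staged rollout policy; the policy itself stays in place
Remove-AzureADMSFeatureRolloutPolicyDirectoryObject -Id "ROLLOUT_POLICY_ID" -ObjectId "GROUP_OBJECT_ID"
```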
+## Test user sign-in with an email address
+
+To test that users can sign in with email, go to [https://myprofile.microsoft.com][my-profile] and sign in with a non-UPN email, such as `balas@fabrikam.com`. The sign-in experience should look and feel the same as signing in with the UPN.
+ ## Troubleshoot
-If users have trouble with sign-in events using their email address, review the following troubleshooting steps:
+If users have trouble signing in with their email address, review the following troubleshooting steps:
-1. Make sure the user account has their email address set for the *ProxyAddresses* attribute in the on-prem AD DS environment.
-1. Verify that Azure AD Connect is configured and successfully synchronizes user accounts from the on-prem AD DS environment into Azure AD.
-1. Confirm that the Azure AD *HomeRealmDiscoveryPolicy* policy has the *AlternateIdLogin* attribute set to *"Enabled": true*:
+1. Make sure it's been at least 1 hour since email as an alternate login ID was enabled. If the user was recently added to a group for staged rollout policy, make sure it's been at least 24 hours since they were added to the group.
+1. If using HRD policy, confirm that the Azure AD *HomeRealmDiscoveryPolicy* has the *AlternateIdLogin* definition property set to *"Enabled": true* and the *IsOrganizationDefault* property set to *True*:
```powershell
Get-AzureADPolicy | Where-Object Type -eq "HomeRealmDiscoveryPolicy" | Format-List *
```
+ If using staged rollout policy, confirm that the Azure AD *FeatureRolloutPolicy* has the *IsEnabled* property set to *True*:
+
+ ```powershell
+ Get-AzureADMSFeatureRolloutPolicy
+ ```
+1. Make sure the user account has their email address set in the *ProxyAddresses* attribute in Azure AD.
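
    For example, a quick check with the same *AzureAD* cmdlets used above, assuming a hypothetical UPN:

    ```powershell
    # Confirm the sign-in email appears in the user's ProxyAddresses in Azure AD
    Get-AzureADUser -ObjectId "balas@contoso.com" |
        Select-Object -ExpandProperty ProxyAddresses
    ```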
+
+### Conflicting values between cloud-only and synced users
+
+Within a tenant, a cloud-only user's UPN may take on the same value as another user's proxy address synced from the on-premises directory. In this scenario, with the feature enabled, the cloud-only user will not be able to sign in with their UPN. Here are the steps for detecting instances of this issue.
+
+1. Open a PowerShell session as an administrator, then install the *AzureADPreview* module using the [Install-Module][Install-Module] cmdlet:
+
+ ```powershell
+ Install-Module AzureADPreview
+ ```
+
+ If prompted, select **Y** to install NuGet or to install from an untrusted repository.
+
+1. Sign in to your Azure AD tenant as a *global administrator* using the [Connect-AzureAD][Connect-AzureAD] cmdlet:
+
+ ```powershell
+ Connect-AzureAD
+ ```
+
+1. Get affected users.
+
+ ```powershell
+ # Get all users
+ $allUsers = Get-AzureADUser -All $true
+
+ # Get list of proxy addresses from all synced users
+ $syncedProxyAddresses = $allUsers |
+ Where-Object {$_.ImmutableId} |
+ Select-Object -ExpandProperty ProxyAddresses |
+ ForEach-Object {$_ -Replace "smtp:", ""}
+
+ # Get list of user principal names from all cloud-only users
+ $cloudOnlyUserPrincipalNames = $allUsers |
+ Where-Object {!$_.ImmutableId} |
+ Select-Object -ExpandProperty UserPrincipalName
+
+ # Get intersection of two lists
+ $duplicateValues = $syncedProxyAddresses |
+ Where-Object {$cloudOnlyUserPrincipalNames -Contains $_}
+ ```
+
+1. To output affected users:
+
+ ```powershell
+ # Output affected synced users
+ $allUsers |
+ Where-Object {$_.ImmutableId -And ($_.ProxyAddresses | Where-Object {($duplicateValues | ForEach-Object {"smtp:$_"}) -Contains $_}).Length -GT 0} |
+ Select-Object ObjectId, DisplayName, UserPrincipalName, ProxyAddresses, ImmutableId, UserType
+
+ # Output affected cloud-only users
+ $allUsers |
+ Where-Object {!$_.ImmutableId -And $duplicateValues -Contains $_.UserPrincipalName} |
+ Select-Object ObjectId, DisplayName, UserPrincipalName, ProxyAddresses, ImmutableId, UserType
+ ```
+
+1. To output affected users to CSV:
+
+ ```powershell
+ # Output affected users to CSV
+ $allUsers |
+ Where-Object {
+ ($_.ImmutableId -And ($_.ProxyAddresses | Where-Object {($duplicateValues | ForEach-Object {"smtp:$_"}) -Contains $_}).Length -GT 0) -Or
+ (!$_.ImmutableId -And $duplicateValues -Contains $_.UserPrincipalName)
+ } |
+ Select-Object ObjectId, DisplayName, UserPrincipalName, @{n="ProxyAddresses"; e={$_.ProxyAddresses -Join ','}}, @{n="IsSyncedUser"; e={$_.ImmutableId.Length -GT 0}}, UserType |
+ Export-Csv -Path .\AffectedUsers.csv -NoTypeInformation
+ ```
## Next steps
For more information on hybrid identity operations, see [how password hash sync]
[Get-AzureADPolicy]: /powershell/module/azuread/get-azureadpolicy
[New-AzureADPolicy]: /powershell/module/azuread/new-azureadpolicy
[Set-AzureADPolicy]: /powershell/module/azuread/set-azureadpolicy
-[staged-rollout]: /powershell/module/azuread/?view=azureadps-2.0-preview&preserve-view=true#staged-rollout
[my-profile]: https://myprofile.microsoft.com
active-directory What Is Cloud Sync https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/cloud-sync/what-is-cloud-sync.md
The following table provides a comparison between Azure AD Connect and Azure AD
| Support for writeback (passwords, devices, groups) |● | |
| Azure AD Domain Services support|● | |
| [Exchange hybrid writeback](../hybrid/reference-connect-sync-attributes-synchronized.md#exchange-hybrid-writeback) |● | |
+| Unlimited number of objects per AD domain |● | |
| Support for up to 150,000 objects per AD domain |● |● |
| Groups with up to 50,000 members |● |● |
| Large groups with up to 250,000 members |● | |
| Cross domain references|● | |
-| On-demand provisioning| |● |
+| On-demand provisioning|● |● |
## Next steps
active-directory Quickstart Register App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-register-app.md
Previously updated : 09/03/2020 Last updated : 05/04/2021
In this quickstart, you register an app in the Azure portal so the Microsoft ide
The Microsoft identity platform performs identity and access management (IAM) only for registered applications. Whether it's a client application like a web or mobile app, or it's a web API that backs a client app, registering it establishes a trust relationship between your application and the identity provider, the Microsoft identity platform.
+> [!TIP]
+> To register an application for Azure AD B2C, follow the steps in [Tutorial: Register a web application in Azure AD B2C](../../active-directory-b2c/tutorial-register-applications.md).
+ ## Prerequisites
-* An Azure account that has an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* Completion of the [Set up a tenant](quickstart-create-new-tenant.md) quickstart.
+- An Azure account that has an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- Completion of the [Set up a tenant](quickstart-create-new-tenant.md) quickstart.
## Register an application
Follow these steps to create the app registration:
1. Search for and select **Azure Active Directory**.
1. Under **Manage**, select **App registrations** > **New registration**.
1. Enter a display **Name** for your application. Users of your application might see the display name when they use the app, for example during sign-in.
- You can change the display name at any time and multiple app registrations can share the same name. The app registration's automatically generated Application (client) ID, not its display name, uniquely identifies your app within the identity platform.
-1. Specify who can use the application, sometimes called its *sign-in audience*.
+ You can change the display name at any time and multiple app registrations can share the same name. The app registration's automatically generated Application (client) ID, not its display name, uniquely identifies your app within the identity platform.
+1. Specify who can use the application, sometimes called its _sign-in audience_.
- | Supported account types | Description |
- |-|-|
- | **Accounts in this organizational directory only** | Select this option if you're building an application for use only by users (or guests) in *your* tenant.<br><br>Often called a *line-of-business* (LOB) application, this app is a *single-tenant* application in the Microsoft identity platform. |
- | **Accounts in any organizational directory** | Select this option if you want users in *any* Azure Active Directory (Azure AD) tenant to be able to use your application. This option is appropriate if, for example, you're building a software-as-a-service (SaaS) application that you intend to provide to multiple organizations.<br><br>This type of app is known as a *multitenant* application in the Microsoft identity platform. |
- | **Accounts in any organizational directory and personal Microsoft accounts** | Select this option to target the widest set of customers.<br><br>By selecting this option, you're registering a *multitenant* application that can also support users who have personal *Microsoft accounts*. |
- | **Personal Microsoft accounts** | Select this option if you're building an application only for users who have personal Microsoft accounts. Personal Microsoft accounts include Skype, Xbox, Live, and Hotmail accounts. |
+ | Supported account types | Description |
+ | - | - |
+ | **Accounts in this organizational directory only** | Select this option if you're building an application for use only by users (or guests) in _your_ tenant.<br><br>Often called a _line-of-business_ (LOB) application, this app is a _single-tenant_ application in the Microsoft identity platform. |
+ | **Accounts in any organizational directory** | Select this option if you want users in _any_ Azure Active Directory (Azure AD) tenant to be able to use your application. This option is appropriate if, for example, you're building a software-as-a-service (SaaS) application that you intend to provide to multiple organizations.<br><br>This type of app is known as a _multitenant_ application in the Microsoft identity platform. |
+ | **Accounts in any organizational directory and personal Microsoft accounts** | Select this option to target the widest set of customers.<br><br>By selecting this option, you're registering a _multitenant_ application that can also support users who have personal _Microsoft accounts_. |
+ | **Personal Microsoft accounts** | Select this option if you're building an application only for users who have personal Microsoft accounts. Personal Microsoft accounts include Skype, Xbox, Live, and Hotmail accounts. |
1. Don't enter anything for **Redirect URI (optional)**. You'll configure a redirect URI in the next section.
1. Select **Register** to complete the initial app registration.
- :::image type="content" source="media/quickstart-register-app/portal-02-app-reg-01.png" alt-text="Screenshot of the Azure portal in a web browser, showing the Register an application pane.":::
+ :::image type="content" source="media/quickstart-register-app/portal-02-app-reg-01.png" alt-text="Screenshot of the Azure portal in a web browser, showing the Register an application pane.":::
-When registration finishes, the Azure portal displays the app registration's **Overview** pane. You see the **Application (client) ID**. Also called the *client ID*, this value uniquely identifies your application in the Microsoft identity platform.
+When registration finishes, the Azure portal displays the app registration's **Overview** pane. You see the **Application (client) ID**. Also called the _client ID_, this value uniquely identifies your application in the Microsoft identity platform.
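
If you prefer scripting this step, a minimal sketch with the *AzureAD* PowerShell module (the display name is illustrative; this isn't part of the quickstart's portal flow):

```powershell
# Sketch: create an equivalent app registration with the AzureAD module.
# Defaults to a single-tenant sign-in audience; display name is illustrative.
Connect-AzureAD
$app = New-AzureADApplication -DisplayName "my-quickstart-app"
$app.AppId   # the Application (client) ID shown on the Overview pane
```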
> [!IMPORTANT]
> New app registrations are hidden to users by default. When you are ready for users to see the app on their [My Apps page](../user-help/my-apps-portal-end-user-access.md) you can enable it. To enable the app, in the Azure portal navigate to **Azure Active Directory** > **Enterprise applications** and select the app. Then on the **Properties** page toggle **Visible to users?** to Yes.
Your application's code, or more typically an authentication library used in you
## Add a redirect URI
-A *redirect URI* is the location where the Microsoft identity platform redirects a user's client and sends security tokens after authentication.
+A _redirect URI_ is the location where the Microsoft identity platform redirects a user's client and sends security tokens after authentication.
In a production web application, for example, the redirect URI is often a public endpoint where your app is running, like `https://contoso.com/auth-response`. During development, it's common to also add the endpoint where you run your app locally, like `https://127.0.0.1/auth-response` or `http://localhost/auth-response`.
To configure application settings based on the platform or device you're targeti
1. Under **Platform configurations**, select **Add a platform**.
1. Under **Configure platforms**, select the tile for your application type (platform) to configure its settings.
- :::image type="content" source="media/quickstart-register-app/portal-04-app-reg-03-platform-config.png" alt-text="Screenshot of the platform configuration pane in the Azure portal." border="false":::
+ :::image type="content" source="media/quickstart-register-app/portal-04-app-reg-03-platform-config.png" alt-text="Screenshot of the platform configuration pane in the Azure portal." border="false":::
+
+ | Platform | Configuration settings |
+ | -- | -- |
+ | **Web** | Enter a **Redirect URI** for your app. This URI is the location where the Microsoft identity platform redirects a user's client and sends security tokens after authentication.<br/><br/>Select this platform for standard web applications that run on a server. |
+ | **Single-page application** | Enter a **Redirect URI** for your app. This URI is the location where the Microsoft identity platform redirects a user's client and sends security tokens after authentication.<br/><br/>Select this platform if you're building a client-side web app by using JavaScript or a framework like Angular, Vue.js, React.js, or Blazor WebAssembly. |
+ | **iOS / macOS** | Enter the app **Bundle ID**. Find it in **Build Settings** or in Xcode in _Info.plist_.<br/><br/>A redirect URI is generated for you when you specify a **Bundle ID**. |
+ | **Android** | Enter the app **Package name**. Find it in the _AndroidManifest.xml_ file. Also generate and enter the **Signature hash**.<br/><br/>A redirect URI is generated for you when you specify these settings. |
+ | **Mobile and desktop applications** | Select one of the **Suggested redirect URIs**. Or specify a **Custom redirect URI**.<br/><br/>For desktop applications using embedded browser, we recommend<br/>`https://login.microsoftonline.com/common/oauth2/nativeclient`<br/><br/>For desktop applications using system browser, we recommend<br/>`http://localhost`<br/><br/>Select this platform for mobile applications that aren't using the latest Microsoft Authentication Library (MSAL) or aren't using a broker. Also select this platform for desktop applications. |
- | Platform | Configuration settings |
- | -- | - |
- | **Web** | Enter a **Redirect URI** for your app. This URI is the location where the Microsoft identity platform redirects a user's client and sends security tokens after authentication.<br/><br/>Select this platform for standard web applications that run on a server. |
- | **Single-page application** | Enter a **Redirect URI** for your app. This URI is the location where the Microsoft identity platform redirects a user's client and sends security tokens after authentication.<br/><br/>Select this platform if you're building a client-side web app by using JavaScript or a framework like Angular, Vue.js, React.js, or Blazor WebAssembly. |
- | **iOS / macOS** | Enter the app **Bundle ID**. Find it in **Build Settings** or in Xcode in *Info.plist*.<br/><br/>A redirect URI is generated for you when you specify a **Bundle ID**. |
- | **Android** | Enter the app **Package name**. Find it in the *AndroidManifest.xml* file. Also generate and enter the **Signature hash**.<br/><br/>A redirect URI is generated for you when you specify these settings. |
- | **Mobile and desktop applications** | Select one of the **Suggested redirect URIs**. Or specify a **Custom redirect URI**.<br/><br/>For desktop applications using embedded browser, we recommend<br/>`https://login.microsoftonline.com/common/oauth2/nativeclient`<br/><br/>For desktop applications using system browser, we recommend<br/>`http://localhost`<br/><br/>Select this platform for mobile applications that aren't using the latest Microsoft Authentication Library (MSAL) or aren't using a broker. Also select this platform for desktop applications. |
1. Select **Configure** to complete the platform configuration.

### Redirect URI restrictions
There are some restrictions on the format of the redirect URIs you add to an app
## Add credentials
-Credentials are used by [confidential client applications](msal-client-applications.md) that access a web API. Examples of confidential clients are [web apps](scenario-web-app-call-api-overview.md), other [web APIs](scenario-protected-web-api-overview.md), or [service-type and daemon-type applications](scenario-daemon-overview.md). Credentials allow your application to authenticate as itself, requiring no interaction from a user at runtime.
+Credentials are used by [confidential client applications](msal-client-applications.md) that access a web API. Examples of confidential clients are [web apps](scenario-web-app-call-api-overview.md), other [web APIs](scenario-protected-web-api-overview.md), or [service-type and daemon-type applications](scenario-daemon-overview.md). Credentials allow your application to authenticate as itself, requiring no interaction from a user at runtime.
You can add both certificates and client secrets (a string) as credentials to your confidential client app registration.
You can add both certificates and client secrets (a string) as credentials to yo
### Add a certificate
-Sometimes called a *public key*, a certificate is the recommended credential type. It provides more assurance than a client secret. For more information about using a certificate as an authentication method in your application, see [Microsoft identity platform application authentication certificate credentials](active-directory-certificate-credentials.md).
+Sometimes called a _public key_, a certificate is the recommended credential type. It provides more assurance than a client secret. For more information about using a certificate as an authentication method in your application, see [Microsoft identity platform application authentication certificate credentials](active-directory-certificate-credentials.md).
1. In the Azure portal, in **App registrations**, select your application.
1. Select **Certificates & secrets** > **Upload certificate**.
-1. Select the file you want to upload. It must be one of the following file types: *.cer*, *.pem*, *.crt*.
+1. Select the file you want to upload. It must be one of the following file types: _.cer_, _.pem_, _.crt_.
1. Select **Add**.

### Add a client secret
-The client secret is also known as an *application password*. It's a string value your app can use in place of a certificate to identity itself. The client secret is the easier of the two credential types to use. It's often used during development, but it's considered less secure than a certificate. Use certificates in your applications that are running in production.
+The client secret is also known as an _application password_. It's a string value your app can use in place of a certificate to identify itself. The client secret is the easier of the two credential types to use. It's often used during development, but it's considered less secure than a certificate. Use certificates in your applications that are running in production.
For more information about application security recommendations, see [Microsoft identity platform best practices and recommendations](identity-platform-integration-checklist.md#security).

1. In the Azure portal, in **App registrations**, select your application.
-1. Select **Certificates & secrets** > **New client secret**.
+1. Select **Certificates & secrets** > **New client secret**.
1. Add a description for your client secret.
1. Select a duration.
1. Select **Add**.
-1. *Record the secret's value* for use in your client application code. This secret value is *never displayed again* after you leave this page.
+1. _Record the secret's value_ for use in your client application code. This secret value is _never displayed again_ after you leave this page.
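
For reference, a hedged sketch of the same step with the *AzureAD* PowerShell module, assuming `$app` holds the application object from an earlier `Get-AzureADApplication` or `New-AzureADApplication` call:

```powershell
# Sketch: add a client secret via PowerShell. -ObjectId is the app's directory
# object ID (not the AppId); the six-month duration is illustrative.
$secret = New-AzureADApplicationPasswordCredential -ObjectId $app.ObjectId `
    -CustomKeyIdentifier "quickstart-secret" `
    -EndDate (Get-Date).AddMonths(6)
$secret.Value   # record this now; it isn't shown again
```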
For security reasons, Microsoft limits creation of client secrets longer than 24 months and strongly recommends that you set this to a value less than 12 months.
active-directory Users Close Account https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/enterprise-users/users-close-account.md
Previously updated : 12/03/2020 Last updated : 05/04/2021
Before you can close your account, you should confirm the following items:
To close an unmanaged work or school account, follow these steps:
-1. Sign in to [close your account](https://go.microsoft.com/fwlink/?linkid=873123), using the account that you want to close.
+1. Sign in to [close your account](https://portal.azure.com/#blade/Microsoft_AAD_IAM/PrivacyDataRequests), using the account that you want to close.
1. On **My data requests**, select **Close account**.
active-directory Howto Download Logs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/reports-monitoring/howto-download-logs.md
Previously updated : 05/02/2021 Last updated : 05/05/2021
The option to download the data of an activity log is available in all editions
## Who can do it?
-To access the audit logs, you need to be in one of the following roles:
+While the Global Administrator role also works, you should use an account with lower privileges to perform this task. To access the audit logs, the following roles work:
-- Global Reader
- Report Reader
-- Global Administrator
+- Global Reader
- Security Administrator
- Security Reader
-## Steps
-
-In Azure AD, you can access the download option in the toolbar of an activity log page.
-
-![Download log](./media/\howto-download-logs/download-log.png)
+## How to do it
**To download an activity log:**
In Azure AD, you can access the download option in the toolbar of an activity lo
3. **Download** the data.
+ ![Download log](./media/howto-download-logs/download-log.png)
## Next steps
active-directory Howto Manage Inactive User Accounts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/reports-monitoring/howto-manage-inactive-user-accounts.md
description: Learn about how to detect and handle user accounts in Azure AD that
documentationcenter: '' -+ editor: '' ms.assetid: ada19f69-665c-452a-8452-701029bf4252
na Previously updated : 01/21/2021 Last updated : 05/05/2021
No.
### What edition of Azure AD do I need to access the property?
-You can access this property in all editions of Azure AD.
+To access this property, you need an Azure Active Directory Premium edition.
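
The property in question is the `lastSignInDateTime` field of `signInActivity`, exposed through the Microsoft Graph beta endpoint; a minimal sketch, assuming `$token` holds a valid access token with the required permission:

```powershell
# Sketch: read signInActivity (lastSignInDateTime) for users via Microsoft Graph.
# $token is assumed to be a valid bearer token; endpoint was beta at the time of writing.
Invoke-RestMethod -Headers @{ Authorization = "Bearer $token" } `
    -Uri 'https://graph.microsoft.com/beta/users?$select=displayName,signInActivity'
```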
### What permission do I need to read the property?
active-directory Quickstart Configure Named Locations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/reports-monitoring/quickstart-configure-named-locations.md
- Title: Configure named locations in Azure Active Directory | Microsoft Docs
-description: Learn how to configure named locations.
-------- Previously updated : 11/13/2018--
-#Customer intent: As an IT administrator, I want to label trusted IP address ranges in my organization so that I can allow them and configure location-based Conditional Access.
---
-# Quickstart: Configure named locations in Azure Active Directory
-
-With named locations, you can label trusted IP address ranges in your organization. Azure AD uses named locations to:
-- Detect false positives in [risk detections](../identity-protection/overview-identity-protection.md). Signing in from a trusted location lowers a user's sign-in risk.
-- Configure [location-based Conditional Access](../conditional-access/location-condition.md).
-
-In this quickstart, you learn how to configure named locations in your environment.
-
-## Prerequisites
-
-To complete this quickstart, you need:
-
-* An Azure AD tenant. Sign up for a [free trial](https://azure.microsoft.com/trial/get-started-active-directory/).
-* A user, who is a global administrator for the tenant.
-* An IP range that is established and credible in your organization. The IP range needs to be in **Classless Interdomain Routing (CIDR)** format.
-
-## Configure named locations
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-
-2. In the left pane, select **Azure Active Directory**, then select **Conditional Access** from the **Security** section.
-
- ![Conditional Access tab](./media/quickstart-configure-named-locations/entrypoint.png)
-
-3. On the **Conditional Access** page, select **Named locations** and select **New location**.
-
- ![Named location](./media/quickstart-configure-named-locations/namedlocation.png)
-
-6. Fill out the form on the new page.
-
- * In the **Name** box, type a name for your named location.
- * In the **IP ranges** box, type the IP range in CIDR format.
- * Click **Create**.
-
- ![The New blade](./media/quickstart-configure-named-locations/61.png)
-
-## Next steps
-
-For more information, see:
-
-- [Location as a condition in Conditional Access](../conditional-access/concept-conditional-access-conditions.md#locations).
-- [Risky sign-ins report](../identity-protection/overview-identity-protection.md).
active-directory Ally Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/ally-tutorial.md
Previously updated : 06/11/2020 Last updated : 05/05/2021
In this tutorial, you configure and test Azure AD SSO in a test environment.
* Ally.io supports **SP and IDP** initiated SSO
* Ally.io supports **Just In Time** user provisioning
-* Once you configure Ally.io you can enforce session control, which protect exfiltration and infiltration of your organization's sensitive data in real-time. Session control extend from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
+* Once you configure Ally.io, you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
## Adding Ally.io from the gallery
To configure the integration of Ally.io into Azure AD, you need to add Ally.io f
1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service.
-1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. Go to **Enterprise Applications** and then select **All Applications**.
1. To add a new application, select **New application**.
1. In the **Add from the gallery** section, type **Ally.io** in the search box.
1. Select **Ally.io** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
To configure and test Azure AD SSO with Ally.io, complete the following building
1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
1. **[Configure Ally.io SSO](#configure-allyio-sso)** - to configure the single sign-on settings on application side.
- 1. **[Create Ally.io test user](#create-allyio-test-user)** - to have a counterpart of B.Simon in Ally.io that is linked to the Azure AD representation of user.
1. **[Test SSO](#test-sso)** - to verify whether the configuration works.

## Configure Azure AD SSO
Follow these steps to enable Azure AD SSO in the Azure portal.
![Edit Basic SAML Configuration](common/edit-urls.png)
-1. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, enter the values for the following fields:
+1. In the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, enter the values for the following fields:
a. In the **Identifier** text box, type a URL using the following pattern: `https://app.ally.io/saml/consume/<CUSTOM_GUID>`
Follow these steps to enable Azure AD SSO in the Azure portal.
1. Ally.io application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
- ![image](common/default-attributes.png)
+ ![Screenshot that shows the list of default attributes.](common/default-attributes.png)
1. In addition to the above, the Ally.io application expects a few more attributes to be passed back in the SAML response; these are shown below. These attributes are also prepopulated, but you can review them per your requirements.
Follow these steps to enable Azure AD SSO in the Azure portal.
![The Certificate download link](common/certificatebase64.png)
-1. On the **Set up Ally.io** section, copy the appropriate URL(s) based on your requirement.
+1. In the **Set up Ally.io** section, copy the appropriate URL(s) based on your requirement.
![Copy configuration URLs](common/copy-configuration-urls.png)
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
## Configure Ally.io SSO
-To configure single sign-on on **Ally.io** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Azure portal to [Ally.io support team](mailto:contact@ally.io). They set this setting to have the SAML SSO connection set properly on both sides.
+To configure single sign-on on Ally.io side, you need to copy the Certificate (Base64) and appropriate URLs from Azure portal and add them in Ally.io.
-### Create Ally.io test user
+1. Sign in to Ally.io using an Admin account.
+1. Using the navigation bar on the left of the screen, select **Admin** > **Integrations**.
+1. Scroll to the **Authentication** section and select **Single Sign-On**. Then, select **Enable**.
-In this section, a user called B.Simon is created in Ally.io. Ally.io supports just-in-time provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Ally.io, a new one is created when you attempt to access Ally.io.
+ ![Screenshot that shows the Enable button in Ally I O.](./media/ally-tutorial/ally-enable.png)
+
+ The **SSO Configuration** page opens, and you can configure the certificate and the copied URLs from the Azure portal.
+
+ ![Screenshot that shows the S S O configuration pane in Ally I O.](./media/ally-tutorial/ally-single-sign-on-configuration.png)
+
+1. In **SSO Configuration**, enter or select the following settings:
+
+ * **Ally**: Azure AD
+ * **SAML 2.0 Endpoint URL**: Login URL
+ * **Identity Provider Issuer URL**: Azure AD Identifier
+ * **Public(X.509) Certificate**: Certificate (base 64)
## Test SSO
In this section, you test your Azure AD single sign-on configuration using the A
When you click the Ally.io tile in the Access Panel, you should be automatically signed in to the Ally.io for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+A user called B.Simon is created in Ally.io. Ally.io supports just-in-time provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Ally.io, a new one is created when you attempt to access Ally.io.
+ ## Additional resources -- [ List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory ](./tutorial-list.md)
+- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
-- [What is application access and single sign-on with Azure Active Directory? ](../manage-apps/what-is-single-sign-on.md)
+- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
When you click the Ally.io tile in the Access Panel, you should be automatically
- [What is session control in Microsoft Cloud App Security?](/cloud-app-security/proxy-intro-aad) -- [How to protect Ally.io with advanced visibility and controls](/cloud-app-security/proxy-intro-aad)
+- [How to protect Ally.io with advanced visibility and controls](/cloud-app-security/proxy-intro-aad)
active-directory Bentley Automatic User Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/bentley-automatic-user-provisioning-tutorial.md
Once you've configured provisioning, use the following resources to monitor your
2. Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion 3. If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
-## Connector limitations
-* The enterprise extension attribute "urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:manager" is not supported and will be removed.
- ## Additional resources * [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
active-directory Slack Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/slack-provisioning-tutorial.md
The objective of this tutorial is to show you the steps you need to perform in S
The scenario outlined in this tutorial assumes that you already have the following items: * [An Azure AD tenant](../develop/quickstart-create-new-tenant.md).
-* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (e.g. Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
* A Slack tenant with the [Plus plan](https://aadsyncfabric.slack.com/pricing) or better enabled. * A user account in Slack with Team Admin permissions.
+> [!NOTE]
+> This integration is also available to use from the Azure AD US Government Cloud environment. You can find this application in the Azure AD US Government Cloud Application Gallery and configure it in the same way as you do from the public cloud.
+ ## Step 1. Plan your provisioning deployment 1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md). 2. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
The scenario outlined in this tutorial assumes that you already have the followi
## Step 2. Add Slack from the Azure AD application gallery
-Add Slack from the Azure AD application gallery to start managing provisioning to Slack. If you have previously setup Slack for SSO you can use the same application. However it is recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
+Add Slack from the Azure AD application gallery to start managing provisioning to Slack. If you have previously set up Slack for SSO, you can use the same application. However, it is recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
## Step 3. Define who will be in scope for provisioning
This section guides you through connecting your Azure AD to Slack's user account
|profileUrl|String| |timezone|String| |userType|String|
+ |preferredLanguage|String|
|urn:scim:schemas:extension:enterprise:1.0.department|String| |urn:scim:schemas:extension:enterprise:1.0.manager|Reference| |urn:scim:schemas:extension:enterprise:1.0.employeeNumber|String|
Once you've configured provisioning, use the following resources to monitor your
* Slack only allows matching with the attributes **userName** and **email**.
-* Common erorr codes are documented in the official Slack documentation - https://api.slack.com/scim#errors
+* Common error codes are documented in the official Slack documentation: https://api.slack.com/scim#errors (see the troubleshooting sketch below).
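+
+When a provisioning failure is unclear, it can help to call Slack's SCIM API directly and compare the response with what the Azure AD provisioning logs report. A minimal sketch, assuming an admin-scoped Slack OAuth access token stored in the `SLACK_TOKEN` environment variable (a placeholder name):
+
+```bash
+# List a few users from Slack's SCIM endpoint to confirm the token works
+# and to inspect the attribute values Slack currently holds.
+curl -s -H "Authorization: Bearer $SLACK_TOKEN" \
+  "https://api.slack.com/scim/v1/Users?count=5"
+
+# Look up a single user by userName to check matching behavior.
+curl -s -H "Authorization: Bearer $SLACK_TOKEN" \
+  "https://api.slack.com/scim/v1/Users?filter=userName%20eq%20%22b.simon%22"
+```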
## Change log
Once you've configured provisioning, use the following resources to monitor your
## Next steps
-* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
aks Csi Secrets Store Driver https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/csi-secrets-store-driver.md
az extension update --name aks-preview
> [!NOTE] > If you plan to provide access to the cluster via a user-assigned or system-assigned managed identity, enable Azure Active Directory on your cluster with the flag `enable-managed-identity`. See [Use managed identities in Azure Kubernetes Service][aks-managed-identity] for more information.
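+For example, a minimal sketch that combines the add-on with a managed identity at creation time (the resource names are placeholders; the full walkthrough follows):
+
+```azurecli-interactive
+az aks create -n myAKSCluster -g myResourceGroup --enable-managed-identity --enable-addons azure-keyvault-secrets-provider
+```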
+First, create an Azure resource group:
+
+```azurecli-interactive
+az group create -n myResourceGroup -l westus
+```
+ To create an AKS cluster with Secrets Store CSI Driver capability, use the [az aks create][az-aks-create] command with the addon `azure-keyvault-secrets-provider`: ```azurecli-interactive
az aks create -n myAKSCluster -g myResourceGroup --enable-addons azure-keyvault-
To upgrade an existing AKS cluster with Secrets Store CSI Driver capability, use the [az aks enable-addons][az-aks-enable-addons] command with the addon `azure-keyvault-secrets-provider`: ```azurecli-interactive
-az aks enable-addons --addons azure-keyvault-secrets-provider --name myAKSCluster --group myResourceGroup
+az aks enable-addons --addons azure-keyvault-secrets-provider --name myAKSCluster --resource-group myResourceGroup
``` ## Verify Secrets Store CSI Driver installation
-These commands will install the Secrets Store CSI Driver and the Azure Key Vault provider on your nodes. Verify by listing all pods from all namespaces and ensuring your output looks similar to the following:
+These commands will install the Secrets Store CSI Driver and the Azure Key Vault provider on your nodes. Verify by listing all pods with the secrets-store-csi-driver and secrets-store-provider-azure labels in the kube-system namespace and ensuring your output looks similar to the following:
```bash
-kubectl get pods -n kube-system
+kubectl get pods -n kube-system -l 'app in (secrets-store-csi-driver, secrets-store-provider-azure)'
NAMESPACE NAME READY STATUS RESTARTS AGE kube-system aks-secrets-store-csi-driver-4vpkj 3/3 Running 2 4m25s
Take note of the following properties for use in the next section:
- Name of Key Vault resource - Azure Tenant ID the Subscription belongs to
+## Provide identity to access Azure Key Vault
+
+The example in this article uses a Service Principal, but the Azure Key Vault provider offers four methods of access. Review them and choose the one that best fits your use case. Be aware that additional steps may be required depending on the chosen method, such as granting the Service Principal permissions to get secrets from the key vault (see the sketch after this list).
+
+- [Service Principal][service-principal-access]
+- [Pod Identity][pod-identity-access]
+- [User-assigned Managed Identity][ua-mi-access]
+- [System-assigned Managed Identity][sa-mi-access]
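+
+For the Service Principal option, a minimal sketch of the extra permission step, assuming a vault named `myKeyVault` and placeholder service principal values:
+
+```azurecli-interactive
+# Create a service principal; note the appId and password in the output.
+az ad sp create-for-rbac --name mySecretsStoreSP --skip-assignment
+
+# Allow the service principal to read secrets from the key vault.
+az keyvault set-policy -n myKeyVault --spn <appId-from-output> --secret-permissions get
+```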
+ ## Create and apply your own SecretProviderClass object To use and configure the Secrets Store CSI driver for your AKS cluster, create a SecretProviderClass custom resource.
spec:
For more information, see [Create your own SecretProviderClass Object][sample-secret-provider-class]. Be sure to use the values you took note of above.
-## Provide identity to access Azure Key Vault
-
-The example in this article uses a Service Principal, but the Azure Key Vault provider offers four methods of access. Review them and choose the one that best fits your use case. Be aware additional steps may be required depending on the chosen method, such as granting the Service Principal permissions to get secrets from key vault.
--- [Service Principal][service-principal-access]-- [Pod Identity][pod-identity-access]-- [User-assigned Managed Identity][ua-mi-access]-- [System-assigned Managed Identity][sa-mi-access]- ### Apply the SecretProviderClass to your cluster Next, deploy the SecretProviderClass you created. For example:
After learning how to use the CSI Secrets Store Driver with an AKS Cluster, see
[az-extension-update]: /cli/azure/extension#az_extension_update [az-aks-create]: /cli/azure/aks#az_aks_create [az-aks-enable-addons]: /cli/azure/aks#az_aks_enable_addons
+[az-aks-disable-addons]: /cli/azure/aks#az_aks_disable_addons
[key-vault-provider]: ../key-vault/general/key-vault-integrate-kubernetes.md [csi-storage-drivers]: ./csi-storage-drivers.md [create-key-vault]: ../key-vault/general/quick-create-cli.md
aks Private Clusters https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/private-clusters.md
Where `--enable-private-cluster` is a mandatory flag for a private cluster.
The following parameters can be used to configure the Private DNS Zone; a CLI sketch follows this list. - "System" is the default value. If the --private-dns-zone argument is omitted, AKS will create a Private DNS Zone in the Node Resource Group.
+- If the Private DNS Zone is in a different subscription than the AKS cluster, you need to register the Microsoft.ContainerService resource provider in both subscriptions.
- "None" means AKS will not create a Private DNS Zone. This requires you to Bring Your Own DNS Server and configure the DNS resolution for the Private FQDN. If you don't configure DNS resolution, DNS is only resolvable within the agent nodes and will cause cluster issues after deployment. - "CUSTOM_PRIVATE_DNS_ZONE_RESOURCE_ID" requires you to create a Private DNS Zone in this format for azure global cloud: `privatelink.<region>.azmk8s.io`. You will need the Resource Id of that Private DNS Zone going forward. Additionally, you will need a user assigned identity or service principal with at least the `private dns zone contributor` and `vnet contributor` roles. - "fqdn-subdomain" can be utilized with "CUSTOM_PRIVATE_DNS_ZONE_RESOURCE_ID" only to provide subdomain capabilities to `privatelink.<region>.azmk8s.io`
api-management Api Management Access Restriction Policies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/api-management-access-restriction-policies.md
In the following example, the per subscription rate limit is 20 calls per 90 sec
| -- | -- | -- | - | | name | The name of the API for which to apply the rate limit. | Yes | N/A | | calls | The maximum total number of calls allowed during the time interval specified in `renewal-period`. | Yes | N/A |
-| renewal-period | The length in seconds of the sliding window during which the number of allowed requests should nor exceed the value specified in `calls`. | Yes | N/A |
+| renewal-period | The length in seconds of the sliding window during which the number of allowed requests should not exceed the value specified in `calls`. | Yes | N/A |
| retry-after-header-name | The name of a response header whose value is the recommended retry interval in seconds after the specified call rate is exceeded. | No | N/A | | retry-after-variable-name | The name of a policy expression variable that stores the recommended retry interval in seconds after the specified call rate is exceeded. | No | N/A | | remaining-calls-header-name | The name of a response header whose value after each policy execution is the number of remaining calls allowed for the time interval specified in the `renewal-period`. | No | N/A |
In the following example, the rate limit of 10 calls per 60 seconds is keyed by
| calls | The maximum total number of calls allowed during the time interval specified in the `renewal-period`. | Yes | N/A | | counter-key | The key to use for the rate limit policy. | Yes | N/A | | increment-condition | The boolean expression specifying if the request should be counted towards the rate (`true`). | No | N/A |
-| renewal-period | The length in seconds of the sliding window during which the number of allowed requests should nor exceed the value specified in `calls`. | Yes | N/A |
+| renewal-period | The length in seconds of the sliding window during which the number of allowed requests should not exceed the value specified in `calls`. | Yes | N/A |
| retry-after-header-name | The name of a response header whose value is the recommended retry interval in seconds after the specified call rate is exceeded. | No | N/A | | retry-after-variable-name | The name of a policy expression variable that stores the recommended retry interval in seconds after the specified call rate is exceeded. | No | N/A | | remaining-calls-header-name | The name of a response header whose value after each policy execution is the number of remaining calls allowed for the time interval specified in the `renewal-period`. | No | N/A |
azure-app-configuration Concept Enable Rbac https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/concept-enable-rbac.md
Azure provides the following Azure built-in roles for authorizing access to App
- **Contributor**: Use this role to manage the App Configuration resource. While the App Configuration data can be accessed using access keys, this role does not grant direct access to the data using Azure AD. - **Reader**: Use this role to give read access to the App Configuration resource. This does not grant access to the resource's access keys, nor to the data stored in App Configuration.
+> [!NOTE]
+> After a role assignment is made for an identity, allow up to 15 minutes for the permission to propagate before accessing data stored in App Configuration using this identity.
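+
+For example, a minimal sketch of assigning a data-plane role to an identity at the store's scope (the principal, subscription ID, and store name are placeholders):
+
+```azurecli-interactive
+az role assignment create \
+  --assignee "user@contoso.com" \
+  --role "App Configuration Data Reader" \
+  --scope "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.AppConfiguration/configurationStores/myAppConfigStore"
+```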
+ ## Next steps Learn more about using [managed identities](howto-integrate-azure-managed-service-identity.md) to administer your App Configuration service.
azure-app-configuration Howto Backup Config Store https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/howto-backup-config-store.md
In this article, you'll work with C# functions that have the following propertie
- Azure Functions runtime version 3.x - Function triggered by timer every 10 minutes
-To make it easier for you to start backing up your data, we've [tested and published a function](https://github.com/Azure/AppConfiguration/tree/master/examples/ConfigurationStoreBackup) that you can use without making any changes to the code. Download the project files and [publish them to your own Azure function app from Visual Studio](../azure-functions/functions-develop-vs.md#publish-to-azure).
+To make it easier for you to start backing up your data, we've [tested and published a function](https://github.com/Azure/AppConfiguration/tree/master/examples/ConfigurationStoreBackup) that you can use without making any changes to the code. Download the project files and [publish them to your own function app from Visual Studio](../azure-functions/functions-develop-vs.md#publish-to-azure).
> [!IMPORTANT] > Don't make any changes to the environment variables in the code you've downloaded. You'll create the required app settings in the next section.
azure-arc Create Data Controller https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/create-data-controller.md
Previously updated : 03/02/2021 Last updated : 05/05/2021
Regardless of the option you choose, during the creation process you will need t
- **Connectivity mode** - Connectivity mode determines the degree of connectivity from your Azure Arc enabled data services environment to Azure. Preview currently only supports indirectly connected and directly connected modes. For information, see [connectivity mode](./connectivity.md). - **Azure subscription ID** - The Azure subscription GUID for where you want the data controller resource in Azure to be created. - **Azure resource group name** - The name of the resource group where you want the data controller resource in Azure to be created.-- **Azure location** - The Azure location where the data controller resource metadata will be stored in Azure. For a list of available regions, see [Azure global infrastructure / Products by region](https://azure.microsoft.com/global-infrastructure/services/?products=azure-arc).
+- **Azure location** - The Azure location where the data controller resource metadata will be stored in Azure. For a list of available regions, see [Azure global infrastructure / Products by region](https://azure.microsoft.com/global-infrastructure/services/?products=azure-arc). The metadata and billing information about the Azure resources managed by the data controller will be stored only in the Azure location that you specify as the location parameter. If you deploy in the directly connected mode, the data controller's location parameter is the same as the location of the custom location resource that you target. A CLI sketch follows this list.
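+As a rough sketch of how these values come together in the indirectly connected mode, assuming the `arcdata` CLI extension is installed (the flag names can vary by extension version, so treat them as assumptions and check `az arcdata dc create --help`; all values are placeholders):
+
+```azurecli-interactive
+az arcdata dc create --name arc-dc --connectivity-mode indirect \
+  --subscription <subscription-id> --resource-group myResourceGroup \
+  --location eastus --profile-name azure-arc-aks-premium-storage
+```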
## Next steps
azure-functions Functions Twitter Email https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-twitter-email.md
Create a connection to Twitter so your app can poll for new tweets.
| Setting | Value | | - | -- | | Search text | **#my-twitter-tutorial** |
- | How oven do you want to check for items? | **15** in the textbox, and <br> **Minute** in the dropdown |
+ | How often do you want to check for items? | **1** in the textbox, and <br> **Hour** in the dropdown. You may enter different values, but be sure to review the current [limitations](https://docs.microsoft.com/connectors/twitterconnector/#limits) of the Twitter connector. |
1. Select the **Save** button on the toolbar to save your progress.
azure-monitor Azure Monitor Agent Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/agents/azure-monitor-agent-overview.md
# Azure Monitor agent overview (preview)
-The Azure Monitor agent (AMA) collects monitoring data from the guest operating system of virtual machines and delivers it to Azure Monitor. This articles provides an overview of the Azure Monitor agent including how to install it and how to configure data collection.
+The Azure Monitor agent (AMA) collects monitoring data from the guest operating system of Azure virtual machines and delivers it to Azure Monitor. This article provides an overview of the Azure Monitor agent, including how to install it and how to configure data collection.
## Relationship to other agents The Azure Monitor Agent replaces the following agents that are currently used by Azure Monitor to collect guest data from virtual machines:
The Azure Monitor agent supports Azure service tags (both AzureMonitor and Azure
## Next steps - [Install Azure Monitor agent](azure-monitor-agent-install.md) on Windows and Linux virtual machines.-- [Create a data collection rule](data-collection-rule-azure-monitor-agent.md) to collect data from the agent and send it to Azure Monitor.
+- [Create a data collection rule](data-collection-rule-azure-monitor-agent.md) to collect data from the agent and send it to Azure Monitor.
azure-monitor Alerts Unified Log https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/alerts-unified-log.md
Log alerts are one of the alert types that are supported in [Azure Alerts](./ale
## Prerequisites
-Log alerts run queries on Log Analytics data. First you should start [collecting log data](../essentials/resource-logs.md) and query the log data for issues. You can use the [alert query examples topic](../logs/example-queries.md) in Log Analytics to understand what you can discover or [get started on writing your own query](../logs/log-analytics-tutorial.md).
+Log alerts run queries on Log Analytics data. First you should start [collecting log data](../essentials/resource-logs.md) and query the log data for issues. You can use the [alert query examples article](../logs/example-queries.md) in Log Analytics to understand what you can discover or [get started on writing your own query](../logs/log-analytics-tutorial.md).
[Azure Monitoring Contributor](../roles-permissions-security.md) is a common role that is needed for creating, modifying, and updating log alerts. Access & query execution rights for the resource logs are also needed. Partial access to resource logs can fail queries or return partial results. [Learn more about configuring log alerts in Azure](./alerts-log.md).
In workspaces and Application Insights, it's supported only in **Metric measurem
Split alerts by number or string columns into separate alerts by grouping them into unique combinations. When creating resource-centric alerts at scale (subscription or resource group scope), you can split by Azure resource ID column. Splitting on Azure resource ID column will change the target of the alert to the specified resource.
-Splitting by Azure resource ID column is recommended when you want to monitor the same condition on multiple Azure resources. For example, monitoring all virtual machines for CPU usage over 80%. You may also decide not to split when you want a condition on multiple resources in the scope, such as monitoring that at least five machines in the resource group scope have CPU usage over 80%.
+Splitting by Azure resource ID column is recommended when you want to monitor the same condition on multiple Azure resources. For example, monitoring all virtual machines for CPU usage over 80%. You may also decide not to split when you want a condition on multiple resources in the scope, such as monitoring that at least five machines in the resource group scope have CPU usage over 80%.
In workspaces and Application Insights, it's supported only in **Metric measurement** measure type. The field is called **Aggregate On**. It's limited to three columns. Having more than three groups by columns in the query could lead to unexpected results. In all other resource types, it's configured in **Split by dimensions** section of the condition (limited to six splits).
For example, if your rule [**Aggregation granularity**](#aggregation-granularity
## State and resolving alerts
-Log alerts can either be stateless or stateful (currently in preview when using the API).
+Log alerts can either be stateless or stateful (currently in preview).
Stateless alerts fire each time the condition is met, even if fired previously. You can [mark the alert as closed](../alerts/alerts-managing-alert-states.md) once the alert instance is resolved. You can also mute actions to prevent them from triggering for a period after an alert rule fired. In Log Analytics Workspaces and Application Insights, it's called **Suppress Alerts**. In all other resource types, it's called **Mute Actions**.
See this alert evaluation example:
| 00:15 | TRUE | Alert fires and action groups called. New alert state ACTIVE. | 00:20 | FALSE | Alert doesn't fire. No actions called. Previous alert state remains ACTIVE.
-Stateful alerts fire once per incident and resolve. When creating new or updating existing log alert rules, add the `autoMitigate` flag with value `true` of type `Boolean`, under the `properties` section. You can use this feature in these API versions: `2018-04-16` and `2020-05-01-preview`.
+Stateful alerts fire once per incident and resolve. You can set this using **Automatically resolve alerts** in the alert details section.
+
+## Location selection in log alerts
+
+Log alerts allow you to set a location for alert rules. In Log Analytics Workspaces, the rule location must match the workspace location. In all other resources, you can select any of the supported locations, which align to [Log Analytics supported region list](https://azure.microsoft.com/global-infrastructure/services/?products=monitor).
+
+Location affects the region in which the alert rule is evaluated. Queries are executed on the log data in the selected region; that said, the alert service is global end to end, meaning alert rule definitions, fired alerts, notifications, and actions aren't bound to the location in the alert rule. Data is transferred from the set region because the Azure Monitor alerts service is a [non-regional service](https://azure.microsoft.com/global-infrastructure/services/?products=monitor&regions=non-regional).
## Pricing and billing of log alerts
azure-monitor Java Standalone Config https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/java-standalone-config.md
You can also suppress these instrumentations by setting these environment variab
(which will then take precedence over the `enabled` settings specified in the JSON configuration).
-> NOTE
+> [!NOTE]
> If you are looking for more fine-grained control, e.g. to suppress some redis calls but not all redis calls, > see [sampling overrides](./java-standalone-sampling-overrides.md).
azure-monitor Sla Report https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/sla-report.md
Title: Downtime, SLA, and outage workbook - Application Insights description: Calculate and report SLA for Web Test through a single pane of glass across your Application Insights resources and Azure subscriptions. Previously updated : 02/8/2021 Last updated : 05/4/2021
The SLA workbook template is accessible through the workbook gallery in your App
The parameters set in the workbook influence the rest of your report.
-`Subscriptions`, `App Insights Resources`, and `Web Test` parameters determine your high-level resource options. These parameters are based on log analytics queries and used in every report query.
+`Subscriptions`, `App Insights Resources`, and `Web Test` parameters determine your high-level resource options. These parameters are based on Log Analytics queries and used in every report query.
-`Failure Threshold` and `Outage Window` allow you to determine your own criteria for a service outage, for example, the criteria for App Insights Availability alert based upon failed location counter over a chosen period. The typical threshold is three locations over a five-minute window.
+`Failure Threshold` and `Outage Window` allow you to determine your own criteria for a service outage, for example, the criteria for App Insights Availability alert based upon failed location counter over a chosen period. The typical threshold is three locations over a five-minute window.
-`Maintenance Period` enables you to select your typical maintenance frequency and `Maintenance Window` is a datetime selector for an example maintenance period. All data that occurs during the identified period will be ignored in your results.
+`Maintenance Period` enables you to select your typical maintenance frequency and `Maintenance Window` is a datetime selector for an example maintenance period. All data that occurs during the identified period will be ignored in your results.
-`Availability Target 9s` specifies your Target 9s objective from two 9s to five 9s.
+`Availability Target %` specifies your target objective and takes custom values.
## Overview page
azure-resource-manager Reference Custom Providers Csharp Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/custom-providers/reference-custom-providers-csharp-endpoint.md
Last updated 01/14/2021
This article is a basic reference for a custom provider C# RESTful endpoint. If you're unfamiliar with Azure Custom Providers, see [the overview on custom resource providers](overview.md).
-## Azure function app RESTful endpoint
+## Azure Functions RESTful endpoint
-The following code works with an Azure function app. To learn how to set up an Azure function app to work with Azure Custom Providers, see [the tutorial on setting up Azure Functions for Azure Custom Providers](./tutorial-custom-providers-function-setup.md).
+The following code works with a function app in Azure. To learn how to set up a function app to work with Azure Custom Providers, see [the tutorial on setting up Azure Functions for Azure Custom Providers](./tutorial-custom-providers-function-setup.md).
```csharp #r "Newtonsoft.Json"
azure-resource-manager Tutorial Custom Providers Function Authoring https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/custom-providers/tutorial-custom-providers-function-authoring.md
A custom provider is a contract between Azure and an endpoint. With custom providers, you can customize workflows on Azure. This tutorial shows how to author a custom provider RESTful endpoint. If you're unfamiliar with Azure Custom Providers, see [the overview on custom resource providers](overview.md). > [!NOTE]
-> This tutorial builds on the tutorial [Set up Azure Functions for Azure Custom Providers](./tutorial-custom-providers-function-setup.md). Some of the steps in this tutorial work only if an Azure function app has been set up to work with custom providers.
+> This tutorial builds on the tutorial [Set up Azure Functions for Azure Custom Providers](./tutorial-custom-providers-function-setup.md). Some of the steps in this tutorial work only if a function app has been set up in Azure Functions to work with custom providers.
## Work with custom actions and custom resources
After all the RESTful methods are added to the function app, update the main **R
```csharp /// <summary>
-/// Entry point for the Azure function app webhook that acts as the service behind a custom provider.
+/// Entry point for the function app webhook that acts as the service behind a custom provider.
/// </summary> /// <param name="requestMessage">The HTTP request message.</param> /// <param name="log">The logger.</param>
azure-resource-manager Tutorial Custom Providers Function Setup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/custom-providers/tutorial-custom-providers-function-setup.md
Title: Set up Azure Functions
-description: This tutorial goes over how to create an Azure function app and set it up to work with Azure Custom Providers
+description: This tutorial goes over how to create a function app in Azure Functions and set it up to work with Azure Custom Providers.
Last updated 06/19/2019
# Set up Azure Functions for Azure Custom Providers
-A custom provider is a contract between Azure and an endpoint. With custom providers, you can change workflows in Azure. This tutorial shows how to set up an Azure function app to work as a custom provider endpoint.
+A custom provider is a contract between Azure and an endpoint. With custom providers, you can change workflows in Azure. This tutorial shows how to set up a function app in Azure Functions to work as a custom provider endpoint.
-## Create the Azure function app
+## Create the function app
> [!NOTE]
-> In this tutorial, you create a simple service endpoint that uses an Azure function app. However, a custom provider can use any publicly accessible endpoint. Alternatives include Azure Logic Apps, Azure API Management, and the Web Apps feature of Azure App Service.
+> In this tutorial, you create a simple service endpoint that uses a function app in Azure Functions. However, a custom provider can use any publicly accessible endpoint. Alternatives include Azure Logic Apps, Azure API Management, and the Web Apps feature of Azure App Service.
-To start this tutorial, you should first follow the tutorial [Create your first Azure function app in the Azure portal](../../azure-functions/functions-get-started.md). That tutorial creates a .NET core webhook function that can be modified in the Azure portal. It is also the foundation for the current tutorial.
+To start this tutorial, you should first follow the tutorial [Create your first function app in the Azure portal](../../azure-functions/functions-get-started.md). That tutorial creates a .NET core webhook function that can be modified in the Azure portal. It is also the foundation for the current tutorial.
## Install Azure Table storage bindings
The following XML element is an example C# project file:
## Next steps
-In this tutorial, you set up an Azure function app to work as an Azure custom provider endpoint.
+In this tutorial, you set up a function app in Azure Functions to work as an Azure custom provider endpoint.
-To learn how to author a RESTful custom provider endpoint, see [Tutorial: Authoring a RESTful custom provider endpoint](./tutorial-custom-providers-function-authoring.md).
+To learn how to author a RESTful custom provider endpoint, see [Tutorial: Authoring a RESTful custom provider endpoint](./tutorial-custom-providers-function-authoring.md).
azure-resource-manager Move Support Resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/move-support-resources.md
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
-> | batchaccounts | Yes | Yes | Batch accounts can't be moved directly from one region to another, but you can use a template to export a template, modify it, and deploy the template to the new region. <br/><br/> Learn about [moving a Batch account across regions](../../batch/best-practices.md#moving-batch-accounts-across-regions) |
+> | batchaccounts | Yes | Yes | Batch accounts can't be moved directly from one region to another, but you can export a template, modify it, and deploy it to the new region. <br/><br/> Learn about [moving a Batch account across regions](../../batch/account-move.md) |
## Microsoft.Billing
azure-resource-manager Tag Resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/tag-resources.md
Title: Tag resources, resource groups, and subscriptions for logical organization description: Shows how to apply tags to organize Azure resources for billing and managing. Previously updated : 01/04/2021 Last updated : 05/05/2021 + # Use tags to organize your Azure resources and management hierarchy
-You apply tags to your Azure resources, resource groups, and subscriptions to logically organize them into a taxonomy. Each tag consists of a name and a value pair. For example, you can apply the name "Environment" and the value "Production" to all the resources in production.
+You apply tags to your Azure resources, resource groups, and subscriptions to logically organize them into a taxonomy. Each tag consists of a name and a value pair. For example, you can apply the name _Environment_ and the value _Production_ to all the resources in production.
For recommendations on how to implement a tagging strategy, see [Resource naming and tagging decision guide](/azure/cloud-adoption-framework/decision-guides/resource-tagging/?toc=/azure/azure-resource-manager/management/toc.json). > [!IMPORTANT] > Tag names are case-insensitive for operations. A tag with a tag name, regardless of casing, is updated or retrieved. However, the resource provider might keep the casing you provide for the tag name. You'll see that casing in cost reports.
->
+>
> Tag values are case-sensitive. [!INCLUDE [Handle personal data](../../../includes/gdpr-intro-sentence.md)]
For recommendations on how to implement a tagging strategy, see [Resource naming
There are two ways to get the required access to tag resources. -- You can have write access to the **Microsoft.Resources/tags** resource type. This access lets you tag any resource, even if you don't have access to the resource itself. The [Tag Contributor](../../role-based-access-control/built-in-roles.md#tag-contributor) role grants this access. Currently, the tag contributor role can't apply tags to resources or resource groups through the portal. It can apply tags to subscriptions through the portal. It supports all tag operations through PowerShell and REST API.
+- You can have write access to the `Microsoft.Resources/tags` resource type. This access lets you tag any resource, even if you don't have access to the resource itself. The [Tag Contributor](../../role-based-access-control/built-in-roles.md#tag-contributor) role grants this access (a role-assignment sketch follows this list). Currently, the tag contributor role can't apply tags to resources or resource groups through the portal. It can apply tags to subscriptions through the portal. It supports all tag operations through PowerShell and REST API.
- You can have write access to the resource itself. The [Contributor](../../role-based-access-control/built-in-roles.md#contributor) role grants the required access to apply tags to any entity. To apply tags to only one resource type, use the contributor role for that resource. For example, to apply tags to virtual machines, use the [Virtual Machine Contributor](../../role-based-access-control/built-in-roles.md#virtual-machine-contributor).
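+For example, a minimal sketch of granting the first kind of access with the built-in role (the user and scope are placeholders):
+
+```azurecli-interactive
+az role assignment create \
+  --assignee "user@contoso.com" \
+  --role "Tag Contributor" \
+  --scope "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup"
+```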
There are two ways to get the required access to tag resources.
### Apply tags
-Azure PowerShell offers two commands for applying tags - [New-AzTag](/powershell/module/az.resources/new-aztag) and [Update-AzTag](/powershell/module/az.resources/update-aztag). You must have the Az.Resources module 1.12.0 or later. You can check your version with `Get-Module Az.Resources`. You can install that module or [install Azure PowerShell](/powershell/azure/install-az-ps) 3.6.1 or later.
+Azure PowerShell offers two commands for applying tags: [New-AzTag](/powershell/module/az.resources/new-aztag) and [Update-AzTag](/powershell/module/az.resources/update-aztag). You must have the `Az.Resources` module 1.12.0 or later. You can check your version with `Get-InstalledModule -Name Az.Resources`. You can install that module or [install Azure PowerShell](/powershell/azure/install-az-ps) 3.6.1 or later.
-The **New-AzTag** replaces all tags on the resource, resource group, or subscription. When calling the command, pass in the resource ID of the entity you wish to tag.
+The `New-AzTag` command replaces all tags on the resource, resource group, or subscription. When calling the command, pass in the resource ID of the entity you wish to tag.
The following example applies a set of tags to a storage account:
Properties :
Team Compliance ```
-To add tags to a resource that already has tags, use **Update-AzTag**. Set the **-Operation** parameter to **Merge**.
+To add tags to a resource that already has tags, use `Update-AzTag`. Set the `-Operation` parameter to `Merge`.
```azurepowershell-interactive $tags = @{"Dept"="Finance"; "Status"="Normal"}
Properties :
Environment Production ```
-Each tag name can have only one value. If you provide a new value for a tag, the old value is replaced even if you use the merge operation. The following example changes the Status tag from Normal to Green.
+Each tag name can have only one value. If you provide a new value for a tag, the old value is replaced even if you use the merge operation. The following example changes the `Status` tag from _Normal_ to _Green_.
```azurepowershell-interactive $tags = @{"Status"="Green"}
Properties :
Environment Production ```
-When you set the **-Operation** parameter to **Replace**, the existing tags are replaced by the new set of tags.
+When you set the `-Operation` parameter to `Replace`, the existing tags are replaced by the new set of tags.
```azurepowershell-interactive $tags = @{"Project"="ECommerce"; "CostCenter"="00123"; "Team"="Web"}
To get resource groups that have a specific tag name and value, use:
### Remove tags
-To remove specific tags, use **Update-AzTag** and set **-Operation** to **Delete**. Pass in the tags you want to delete.
+To remove specific tags, use `Update-AzTag` and set `-Operation` to `Delete`. Pass in the tags you want to delete.
```azurepowershell-interactive $removeTags = @{"Project"="ECommerce"; "Team"="Web"}
Remove-AzTag -ResourceId "/subscriptions/$subscription"
### Apply tags
-Azure CLI offers two commands for applying tags - [az tag create](/cli/azure/tag#az_tag_create) and [az tag update](/cli/azure/tag#az_tag_update). You must have Azure CLI 2.10.0 or later. You can check your version with `az version`. To update or install, see [Install the Azure CLI](/cli/azure/install-azure-cli).
+Azure CLI offers two commands for applying tags: [az tag create](/cli/azure/tag#az_tag_create) and [az tag update](/cli/azure/tag#az_tag_update). You must have Azure CLI 2.10.0 or later. You can check your version with `az version`. To update or install, see [Install the Azure CLI](/cli/azure/install-azure-cli).
-The **az tag create** replaces all tags on the resource, resource group, or subscription. When calling the command, pass in the resource ID of the entity you wish to tag.
+The `az tag create` command replaces all tags on the resource, resource group, or subscription. When calling the command, pass in the resource ID of the entity you wish to tag.
The following example applies a set of tags to a storage account:
Notice that the two new tags were added to the two existing tags.
}, ```
-Each tag name can have only one value. If you provide a new value for a tag, the old value is replaced even if you use the merge operation. The following example changes the Status tag from Normal to Green.
+Each tag name can have only one value. If you provide a new value for a tag, the old value is replaced even if you use the merge operation. The following example changes the `Status` tag from _Normal_ to _Green_.
```azurecli-interactive az tag update --resource-id $resource --operation Merge --tags Status=Green
You can tag resources, resource groups, and subscriptions during deployment with
The following example deploys a storage account with three tags. Two of the tags (`Dept` and `Environment`) are set to literal values. One tag (`LastDeployed`) is set to a parameter that defaults to the current date.
+# [JSON](#tab/json)
+ ```json {
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "utcShort": {
- "type": "string",
- "defaultValue": "[utcNow('d')]"
- },
- "location": {
- "type": "string",
- "defaultValue": "[resourceGroup().location]"
- }
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "utcShort": {
+ "type": "string",
+ "defaultValue": "[utcNow('d')]"
},
- "resources": [
- {
- "apiVersion": "2019-04-01",
- "type": "Microsoft.Storage/storageAccounts",
- "name": "[concat('storage', uniqueString(resourceGroup().id))]",
- "location": "[parameters('location')]",
- "tags": {
- "Dept": "Finance",
- "Environment": "Production",
- "LastDeployed": "[parameters('utcShort')]"
- },
- "sku": {
- "name": "Standard_LRS"
- },
- "kind": "Storage",
- "properties": {}
- }
- ]
+ "location": {
+ "type": "string",
+ "defaultValue": "[resourceGroup().location]"
+ }
+ },
+ "resources": [
+ {
+ "type": "Microsoft.Storage/storageAccounts",
+ "apiVersion": "2021-02-01",
+ "name": "[concat('storage', uniqueString(resourceGroup().id))]",
+ "location": "[parameters('location')]",
+ "sku": {
+ "name": "Standard_LRS"
+ },
+ "kind": "Storage",
+ "tags": {
+ "Dept": "Finance",
+ "Environment": "Production",
+ "LastDeployed": "[parameters('utcShort')]"
+ },
+ "properties": {}
+ }
+ ]
+}
+```
+
+# [Bicep](#tab/bicep)
+
+```Bicep
+param location string = resourceGroup().location
+param utcShort string = utcNow('d')
+
+resource stgAccount 'Microsoft.Storage/storageAccounts@2021-02-01' = {
+ name: 'storage${uniqueString(resourceGroup().id)}'
+ location: location
+ sku: {
+ name: 'Standard_LRS'
+ }
+ kind: 'Storage'
+ tags: {
+ Dept: 'Finance'
+ Environment: 'Production'
+ LastDeployed: utcShort
+ }
} ``` ++ ### Apply an object You can define an object parameter that stores several tags, and apply that object to the tag element. This approach provides more flexibility than the previous example because the object can have different properties. Each property in the object becomes a separate tag for the resource. The following example has a parameter named `tagValues` that is applied to the tag element.
+# [JSON](#tab/json)
+ ```json {
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "location": {
- "type": "string",
- "defaultValue": "[resourceGroup().location]"
- },
- "tagValues": {
- "type": "object",
- "defaultValue": {
- "Dept": "Finance",
- "Environment": "Production"
- }
- }
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "location": {
+ "type": "string",
+ "defaultValue": "[resourceGroup().location]"
},
- "resources": [
- {
- "apiVersion": "2019-04-01",
- "type": "Microsoft.Storage/storageAccounts",
- "name": "[concat('storage', uniqueString(resourceGroup().id))]",
- "location": "[parameters('location')]",
- "tags": "[parameters('tagValues')]",
- "sku": {
- "name": "Standard_LRS"
- },
- "kind": "Storage",
- "properties": {}
- }
- ]
+ "tagValues": {
+ "type": "object",
+ "defaultValue": {
+ "Dept": "Finance",
+ "Environment": "Production"
+ }
+ }
+ },
+ "resources": [
+ {
+ "type": "Microsoft.Storage/storageAccounts",
+ "apiVersion": "2021-02-01",
+ "name": "[concat('storage', uniqueString(resourceGroup().id))]",
+ "location": "[parameters('location')]",
+ "sku": {
+ "name": "Standard_LRS"
+ },
+ "kind": "Storage",
+ "tags": "[parameters('tagValues')]",
+ "properties": {}
+ }
+ ]
} ```
+# [Bicep](#tab/bicep)
+
+```Bicep
+param location string = resourceGroup().location
+param tagValues object = {
+ Dept: 'Finance'
+ Environment: 'Production'
+}
+
+resource stgAccount 'Microsoft.Storage/storageAccounts@2021-02-01' = {
+ name: 'storage${uniqueString(resourceGroup().id)}'
+ location: location
+ sku: {
+ name: 'Standard_LRS'
+ }
+ kind: 'Storage'
+ tags: tagValues
+}
+```
+++ ### Apply a JSON string
-To store many values in a single tag, apply a JSON string that represents the values. The entire JSON string is stored as one tag that can't exceed 256 characters. The following example has a single tag named `CostCenter` that contains several values from a JSON string:
+To store many values in a single tag, apply a JSON string that represents the values. The entire JSON string is stored as one tag that can't exceed 256 characters. The following example has a single tag named `CostCenter` that contains several values from a JSON string:
+
+# [JSON](#tab/json)
```json {
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "location": {
- "type": "string",
- "defaultValue": "[resourceGroup().location]"
- }
- },
- "resources": [
- {
- "apiVersion": "2019-04-01",
- "type": "Microsoft.Storage/storageAccounts",
- "name": "[concat('storage', uniqueString(resourceGroup().id))]",
- "location": "[parameters('location')]",
- "tags": {
- "CostCenter": "{\"Dept\":\"Finance\",\"Environment\":\"Production\"}"
- },
- "sku": {
- "name": "Standard_LRS"
- },
- "kind": "Storage",
- "properties": {}
- }
- ]
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "location": {
+ "type": "string",
+ "defaultValue": "[resourceGroup().location]"
+ }
+ },
+ "resources": [
+ {
+ "type": "Microsoft.Storage/storageAccounts",
+ "apiVersion": "2021-02-01",
+ "name": "[concat('storage', uniqueString(resourceGroup().id))]",
+ "location": "[parameters('location')]",
+ "sku": {
+ "name": "Standard_LRS"
+ },
+ "kind": "Storage",
+ "tags": {
+ "CostCenter": "{\"Dept\":\"Finance\",\"Environment\":\"Production\"}"
+ },
+ "properties": {}
+ }
+ ]
} ```
+# [Bicep](#tab/bicep)
+
+```Bicep
+param location string = resourceGroup().location
+
+resource stgAccount 'Microsoft.Storage/storageAccounts@2021-02-01' = {
+ name: 'storage${uniqueString(resourceGroup().id)}'
+ location: location
+ sku: {
+ name: 'Standard_LRS'
+ }
+ kind: 'Storage'
+ tags: {
+ CostCenter: '{"Dept":"Finance","Environment":"Production"}'
+ }
+}
+```
+++ ### Apply tags from resource group To apply tags from a resource group to a resource, use the [resourceGroup()](../templates/template-functions-resource.md#resourcegroup) function. When getting the tag value, use the `tags[tag-name]` syntax instead of the `tags.tag-name` syntax, because some characters aren't parsed correctly in the dot notation.
+# [JSON](#tab/json)
+ ```json {
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "location": {
- "type": "string",
- "defaultValue": "[resourceGroup().location]"
- }
- },
- "resources": [
- {
- "apiVersion": "2019-04-01",
- "type": "Microsoft.Storage/storageAccounts",
- "name": "[concat('storage', uniqueString(resourceGroup().id))]",
- "location": "[parameters('location')]",
- "tags": {
- "Dept": "[resourceGroup().tags['Dept']]",
- "Environment": "[resourceGroup().tags['Environment']]"
- },
- "sku": {
- "name": "Standard_LRS"
- },
- "kind": "Storage",
- "properties": {}
- }
- ]
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "location": {
+ "type": "string",
+ "defaultValue": "[resourceGroup().location]"
+ }
+ },
+ "resources": [
+ {
+ "type": "Microsoft.Storage/storageAccounts",
+ "apiVersion": "2021-02-01",
+ "name": "[concat('storage', uniqueString(resourceGroup().id))]",
+ "location": "[parameters('location')]",
+ "sku": {
+ "name": "Standard_LRS"
+ },
+ "kind": "Storage",
+ "tags": {
+ "Dept": "[resourceGroup().tags['Dept']]",
+ "Environment": "[resourceGroup().tags['Environment']]"
+ },
+ "properties": {}
+ }
+ ]
+}
+```
+
+# [Bicep](#tab/bicep)
+
+```Bicep
+param location string = resourceGroup().location
+
+resource stgAccount 'Microsoft.Storage/storageAccounts@2021-02-01' = {
+ name: 'storage${uniqueString(resourceGroup().id)}'
+ location: location
+ sku: {
+ name: 'Standard_LRS'
+ }
+ kind: 'Storage'
+ tags: {
+ Dept: resourceGroup().tags['Dept']
+ Environment: resourceGroup().tags['Environment']
+ }
} ``` ++ ### Apply tags to resource groups or subscriptions
-You can add tags to a resource group or subscription by deploying the **Microsoft.Resources/tags** resource type. The tags are applied to the target resource group or subscription for the deployment. Each time you deploy the template you replace any tags there were previously applied.
+You can add tags to a resource group or subscription by deploying the `Microsoft.Resources/tags` resource type. The tags are applied to the target resource group or subscription for the deployment. Each time you deploy the template, you replace any tags that were previously applied.
+
+# [JSON](#tab/json)
```json {
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "tagName": {
- "type": "string",
- "defaultValue": "TeamName"
- },
- "tagValue": {
- "type": "string",
- "defaultValue": "AppTeam1"
- }
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "tagName": {
+ "type": "string",
+ "defaultValue": "TeamName"
},
- "variables": {},
- "resources": [
- {
- "type": "Microsoft.Resources/tags",
- "name": "default",
- "apiVersion": "2019-10-01",
- "dependsOn": [],
- "properties": {
- "tags": {
- "[parameters('tagName')]": "[parameters('tagValue')]"
- }
- }
+ "tagValue": {
+ "type": "string",
+ "defaultValue": "AppTeam1"
+ }
+ },
+ "resources": [
+ {
+ "type": "Microsoft.Resources/tags",
+ "name": "default",
+ "apiVersion": "2021-04-01",
+ "properties": {
+ "tags": {
+ "[parameters('tagName')]": "[parameters('tagValue')]"
}
- ]
+ }
+ }
+ ]
} ```
+# [Bicep](#tab/bicep)
+
+```Bicep
+param tagName string = 'TeamName'
+param tagValue string = 'AppTeam1'
+
+resource applyTags 'Microsoft.Resources/tags@2021-04-01' = {
+ name: 'default'
+ properties: {
+ tags: {
+ '${tagName}': tagValue
+ }
+ }
+}
+```
+++ To apply the tags to a resource group, use either PowerShell or Azure CLI. Deploy to the resource group that you want to tag. ```azurepowershell-interactive
For more information about subscription deployments, see [Create resource groups
The following template adds the tags from an object to either a resource group or subscription.
+# [JSON](#tab/json)
+ ```json
-"$schema": "https://schema.management.azure.com/schemas/2018-05-01/subscriptionDeploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "tags": {
- "type": "object",
- "defaultValue": {
- "TeamName": "AppTeam1",
- "Dept": "Finance",
- "Environment": "Production"
- }
- }
- },
- "variables": {},
- "resources": [
- {
- "type": "Microsoft.Resources/tags",
- "name": "default",
- "apiVersion": "2019-10-01",
- "dependsOn": [],
- "properties": {
- "tags": "[parameters('tags')]"
- }
- }
- ]
+{
+ "$schema": "https://schema.management.azure.com/schemas/2018-05-01/subscriptionDeploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "tags": {
+ "type": "object",
+ "defaultValue": {
+ "TeamName": "AppTeam1",
+ "Dept": "Finance",
+ "Environment": "Production"
+ }
+ }
+ },
+ "resources": [
+ {
+ "type": "Microsoft.Resources/tags",
+ "apiVersion": "2021-04-01",
+ "name": "default",
+ "properties": {
+ "tags": "[parameters('tags')]"
+ }
+ }
+ ]
} ```
+# [Bicep](#tab/bicep)
+
+```Bicep
+targetScope = 'subscription'
+
+param tagObject object = {
+ TeamName: 'AppTeam1'
+ Dept: 'Finance'
+ Environment: 'Production'
+}
+
+resource applyTags 'Microsoft.Resources/tags@2021-04-01' = {
+ name: 'default'
+ properties: {
+ tags: tagObject
+ }
+}
+```
+++ ## Portal [!INCLUDE [resource-manager-tag-resource](../../../includes/resource-manager-tag-resources.md)]
azure-resource-manager Deploy Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/deploy-portal.md
Title: Deploy resources with Azure portal description: Use Azure portal and Azure Resource Manage to deploy your resources to a resource group in your subscription. Previously updated : 10/22/2020 Last updated : 05/05/2021 # Deploy resources with ARM templates and Azure portal
If you want to execute a deployment but not use any of the templates in the Mark
- **Subscription**: Select an Azure subscription. - **Resource group**: Select **Create new** and give a name. - **Location**: Select an Azure location.
- - **Storage Account Type**: Use the default value.
+ - **Storage Account Type**: Use the default value. The camel-cased parameter name, *storageAccountType*, defined in the template is turned into a space-separated string when displayed on the portal.
- **Location**: Use the default value. - **I agree to the terms and conditions stated above**: (select)
azure-resource-manager Deploy To Azure Button https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/deploy-to-azure-button.md
Title: Deploy to Azure button description: Use button to deploy Azure Resource Manager templates from a GitHub repository. Previously updated : 03/25/2021 Last updated : 05/05/2021 # Use a deployment button to deploy templates from GitHub repository
To test the full solution, select the following button:
[![Deploy to Azure](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.storage%2Fstorage-account-create%2Fazuredeploy.json)
-The portal displays a pane that allows you to easily provide parameter values. The parameters are pre-filled with the default values from the template.
+The portal displays a pane that allows you to easily provide parameter values. The parameters are pre-filled with the default values from the template. The camel-cased parameter name, *storageAccountType*, defined in the template is turned into a space-separated string when displayed on the portal.
![Use portal to deploy](./media/deploy-to-azure-button/portal.png)
azure-resource-manager Template Functions Logical https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/template-functions-logical.md
Title: Template functions - logical description: Describes the functions to use in an Azure Resource Manager template (ARM template) to determine logical values. Previously updated : 11/18/2020 Last updated : 05/05/2021 # Logical functions for ARM templates
The following [example template](https://github.com/krnese/AzureDeploy/blob/mast
# [Bicep](#tab/bicep)
-> [!NOTE]
-> `Conditions` are not yet implemented in Bicep. See [Conditions](https://github.com/Azure/bicep/issues/186).
+```bicep
+param vmName string
+param location string
+param logAnalytics string = ''
+
+resource vmName_omsOnboarding 'Microsoft.Compute/virtualMachines/extensions@2017-03-30' = if (!empty(logAnalytics)) {
+ name: '${vmName}/omsOnboarding'
+ location: location
+ properties: {
+ publisher: 'Microsoft.EnterpriseCloud.Monitoring'
+ type: 'MicrosoftMonitoringAgent'
+ typeHandlerVersion: '1.0'
+ autoUpgradeMinorVersion: true
+ settings: {
+ workspaceId: ((!empty(logAnalytics)) ? reference(logAnalytics, '2015-11-01-preview').customerId : json('null'))
+ }
+ protectedSettings: {
+ workspaceKey: ((!empty(logAnalytics)) ? listKeys(logAnalytics, '2015-11-01-preview').primarySharedKey : json('null'))
+ }
+ }
+}
+
+output mgmtStatus string = ((!empty(logAnalytics)) ? 'Enabled monitoring for VM!' : 'Nothing to enable')
+```
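In this converted example, the `if (!empty(logAnalytics))` clause deploys the monitoring extension only when a Log Analytics resource ID is supplied, and the ternary expressions guard the `reference` and `listKeys` calls against an empty value in the same way.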
azure-resource-manager Template Parameters https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/template-parameters.md
Title: Parameters in templates description: Describes how to define parameters in an Azure Resource Manager template (ARM template) and Bicep file. Previously updated : 03/03/2021 Last updated : 05/05/2021 # Parameters in ARM templates
Each parameter must be set to one of the [data types](data-types.md).
At a minimum, every parameter needs a name and type. In Bicep, a parameter can't have the same name as a variable, resource, output, or other parameter in the same scope.
+When you deploy a template via the Azure portal, camel-cased parameter names are turned into space-separated names. For example, *demoString* in the following example is shown as *Demo String*. For more information, see [Use a deployment button to deploy templates from GitHub repository](./deploy-to-azure-button.md) and [Deploy resources with ARM templates and Azure portal](./deploy-portal.md).
+ # [JSON](#tab/json) ```json
azure-sql Connect Github Actions Sql Db https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/connect-github-actions-sql-db.md
Previously updated : 10/12/2020 Last updated : 05/05/2021
You'll use the connection string as a GitHub secret.
with: server-name: SQL_SERVER_NAME connection-string: ${{ secrets.AZURE_SQL_CONNECTION_STRING }}
- sql-file: './Database.dacpac'
+ dacpac-package: './Database.dacpac'
``` 1. Complete your workflow by adding an action to logout of Azure. Here is the completed workflow. The file will appear in the `.github/workflows` folder of your repository.
You'll use the connection string as a GitHub secret.
with: server-name: SQL_SERVER_NAME connection-string: ${{ secrets.AZURE_SQL_CONNECTION_STRING }}
- sql-file: './Database.dacpac'
+ dacpac-package: './Database.dacpac'
# Azure logout - name: logout
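
Pieced together, the deployment job might look like the following sketch; the workflow name, trigger, runner, and the checkout/login steps are assumptions layered on the snippets above:

```yaml
# Hypothetical workflow sketch; SQL_SERVER_NAME and the secret names are placeholders.
name: Deploy DACPAC to Azure SQL
on: [push]

jobs:
  deploy:
    runs-on: windows-latest
    steps:
      # Check out the repository that contains Database.dacpac.
      - uses: actions/checkout@v2

      # Sign in to Azure with a service principal stored as a secret.
      - uses: azure/login@v1
        with:
          creds: ${{ secrets.AZURE_CREDENTIALS }}

      # Deploy the DACPAC with the renamed input shown in the diff above.
      - uses: azure/sql-action@v1
        with:
          server-name: SQL_SERVER_NAME
          connection-string: ${{ secrets.AZURE_SQL_CONNECTION_STRING }}
          dacpac-package: './Database.dacpac'

      # Azure logout
      - name: logout
        run: az logout
```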
azure-sql Service Tiers Dtu https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/service-tiers-dtu.md
ms.devlang:
Previously updated : 10/15/2020 Last updated : 5/4/2021 # Service tiers in the DTU-based purchase model
Choosing a service tier depends primarily on business continuity, storage, and p
|**Uptime SLA**|99.99%|99.99%|99.99%| |**Maximum backup retention**|7 days|35 days|35 days| |**CPU**|Low|Low, Medium, High|Medium, High|
-|**IOPS (approximate)**\* |1-4 IOPS per DTU| 1-4 IOPS per DTU | 25 IOPS per DTU|
+|**IOPS (approximate)**\* |1-4 IOPS per DTU| 1-4 IOPS per DTU | >25 IOPS per DTU|
|**IO latency (approximate)**|5 ms (read), 10 ms (write)|5 ms (read), 10 ms (write)|2 ms (read/write)| |**Columnstore indexing** |N/A|S3 and above|Supported| |**In-memory OLTP**|N/A|N/A|Supported|
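
As a worked example of the approximations above, a Standard S3 database (100 DTUs) would get roughly 100-400 IOPS, while a Premium P1 database (125 DTUs) would get more than about 3,125 IOPS.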
The key metrics in the benchmark are throughput and response time.
## Next steps - For details on specific compute sizes and storage size choices available for single databases, see [SQL Database DTU-based resource limits for single databases](resource-limits-dtu-single-databases.md#single-database-storage-sizes-and-compute-sizes).-- For details on specific compute sizes and storage size choices available for elastic pools, see [SQL Database DTU-based resource limits](resource-limits-dtu-elastic-pools.md#elastic-pool-storage-sizes-and-compute-sizes).
+- For details on specific compute sizes and storage size choices available for elastic pools, see [SQL Database DTU-based resource limits](resource-limits-dtu-elastic-pools.md#elastic-pool-storage-sizes-and-compute-sizes).
azure-sql Server Trust Group Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/server-trust-group-overview.md
Server Trust Group is a concept used for managing trust between Azure SQL Manage
## Server Trust Group setup
-The following section describes setup of Server Trust Group.
+A Server Trust Group can be set up via [Azure PowerShell](https://docs.microsoft.com/powershell/module/az.sql/new-azsqlservertrustgroup) or the [Azure CLI](https://docs.microsoft.com/cli/azure/sql/stg?view=azure-cli-latest).
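
As a hedged sketch of the PowerShell route (the resource names below are placeholders, and the parameter set is assumed from the linked cmdlet reference):

```azurepowershell
# Hypothetical names; replace with your own resource group, region, and managed instances.
$instance1 = Get-AzSqlInstance -Name "miName1" -ResourceGroupName "myResourceGroup"
$instance2 = Get-AzSqlInstance -Name "miName2" -ResourceGroupName "myResourceGroup"

New-AzSqlServerTrustGroup -ResourceGroupName "myResourceGroup" `
    -Location "westeurope" `
    -Name "myServerTrustGroup" `
    -GroupMember @($instance1, $instance2) `
    -TrustScope "GlobalTransactions"
```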
+The following section describes how to set up a Server Trust Group by using the Azure portal.
1. Go to the [Azure portal](https://portal.azure.com/).
azure-sql Transact Sql Tsql Differences Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/transact-sql-tsql-differences-sql-server.md
ms.devlang: --++ Last updated 3/16/2021
azure-sql Sql Server To Sql Database Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/database/sql-server-to-sql-database-guide.md
For more migration information, see the [migration overview](sql-server-to-sql-d
## Prerequisites
-For your [SQL Server migration](https://azure.microsoft.com/en-us/migration/sql-server/) to Azure SQL Database, make sure you have the following prerequisites:
+For your [SQL Server migration](https://azure.microsoft.com/en-us/migration/sql-server/) to Azure SQL Database, make sure you have:
-- A chosen [migration method](sql-server-to-sql-database-overview.md#compare-migration-options) and corresponding tools .-- [Data Migration Assistant (DMA)](https://www.microsoft.com/download/details.aspx?id=53595) installed on a machine that can connect to your source SQL Server.-- A target [Azure SQL Database](../../database/single-database-create-quickstart.md). -- Connectivity and proper permissions to access both source and target.
+- Chosen a [migration method](sql-server-to-sql-database-overview.md#compare-migration-options) and the corresponding tools.
+- Installed [Data Migration Assistant (DMA)](https://www.microsoft.com/download/details.aspx?id=53595) on a machine that can connect to your source SQL Server.
+- Created a target [Azure SQL Database](../../database/single-database-create-quickstart.md).
+- Configured connectivity and proper permissions to access both source and target.
+- Reviewed the database engine features [available in Azure SQL Database](../../database/features-comparison.md).
azure-sql Sql Server To Sql Database Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/database/sql-server-to-sql-database-overview.md
One of the key benefits of migrating to SQL Database is that you can modernize y
You can also save costs by using the [Azure Hybrid Benefit](https://azure.microsoft.com/pricing/hybrid-benefit/) for SQL Server to migrate your SQL Server on-premises licenses to Azure SQL Database. This option is available if you choose the [vCore-based purchasing model](../../database/service-tiers-vcore.md).
+Be sure to review the SQL Server database engine features [available in Azure SQL Database](../../database/features-comparison.md) to validate the supportability of your migration target.
+ ## Considerations The key factors to consider when you're evaluating migration options are:
Manual setup of SQL Server high-availability features like Always On failover cl
Beyond the high-availability architecture that's included in Azure SQL Database, the [auto-failover groups](../../database/auto-failover-group-overview.md) feature allows you to manage the replication and failover of databases in a managed instance to another region.
-### SQL Agent jobs
-SQL Agent jobs are not directly supported in Azure SQL Database and need to be deployed to [elastic database jobs (preview)](../../database/job-automation-overview.md).
- ### Logins and groups
-Move SQL logins from the SQL Server source to Azure SQL Database by using Database Migration Service in offline mode. Use the **Selected logins** pane in the Migration Wizard to migrate logins to your target SQL database.
-You can also migrate Windows users and groups via Database Migration Service by enabling the corresponding toggle on the Database Migration Service **Configuration** page.
+Windows logins are not supported in Azure SQL Database. Create Azure Active Directory logins instead, and manually re-create any SQL logins.
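
As a minimal T-SQL sketch of both steps (the principal names here are hypothetical):

```sql
-- In the target database: create a contained user for an Azure AD principal
-- (hypothetical account name) and grant it a role.
CREATE USER [dbadmin@contoso.com] FROM EXTERNAL PROVIDER;
ALTER ROLE db_owner ADD MEMBER [dbadmin@contoso.com];

-- Re-create a SQL authentication login in master, then map it in the target database.
CREATE LOGIN app_login WITH PASSWORD = '<strong, complex password>';
CREATE USER app_user FOR LOGIN app_login;
```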
-Alternatively, you can use the [PowerShell utility](https://github.com/microsoft/DataMigrationTeam/tree/master/IP%20and%20Scripts/MoveLogins) specially designed by Microsoft data migration architects. The utility uses PowerShell to create a Transact-SQL (T-SQL) script to re-create logins and select database users from the source to the target.
-
-The PowerShell utility automatically maps Windows Server Active Directory accounts to Azure Active Directory (Azure AD) accounts, and it can do a UPN lookup for each login against the source Active Directory instance. The utility scripts custom server and database roles, along with role membership and user permissions. Contained databases are not yet supported, and only a subset of possible SQL Server permissions are scripted.
+### SQL Agent jobs
+SQL Agent jobs are not directly supported in Azure SQL Database and need to be deployed to [elastic database jobs (preview)](../../database/job-automation-overview.md).
### System databases For Azure SQL Database, the only applicable system databases are [master](/sql/relational-databases/databases/master-database) and tempdb. To learn more, see [Tempdb in Azure SQL Database](/sql/relational-databases/databases/tempdb-database#tempdb-database-in-sql-database).
The Data SQL Engineering team developed these resources. This team's core charte
- To assess the application access layer, see [Data Access Migration Toolkit (Preview)](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit). -- For details on how to perform A/B testing for the data access layer, see [Database Experimentation Assistant](/sql/dea/database-experimentation-assistant-overview).
+- For details on how to perform A/B testing for the data access layer, see [Database Experimentation Assistant](/sql/dea/database-experimentation-assistant-overview).
azure-sql Sql Server To Managed Instance Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/managed-instance/sql-server-to-managed-instance-guide.md
For more migration information, see the [migration overview](sql-server-to-manag
## Prerequisites
-To migrate your SQL Server to Azure SQL Managed Instance, make sure to go through the following pre-requisites:
--- Choose a [migration method](sql-server-to-managed-instance-overview.md#compare-migration-options) and the corresponding tools that are required for the chosen method-- Install [Data Migration Assistant (DMA)](https://www.microsoft.com/download/details.aspx?id=53595) on a machine that can connect to your source SQL Server-- Connectivity and proper permissions to access both source and target.
+To migrate your SQL Server to Azure SQL Managed Instance, make sure you have:
+- Chosen a [migration method](sql-server-to-managed-instance-overview.md#compare-migration-options) and the corresponding tools for your method.
+- Installed the [Data Migration Assistant (DMA)](https://www.microsoft.com/download/details.aspx?id=53595) on a machine that can connect to your source SQL Server.
+- Created a target [Azure SQL Managed Instance](../../managed-instance/instance-create-quickstart.md).
+- Configured connectivity and proper permissions to access both source and target.
+- Reviewed the SQL Server database engine features [available in Azure SQL Managed Instance](../../database/features-comparison.md).
## Pre-migration
azure-sql Sql Server To Managed Instance Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/managed-instance/sql-server-to-managed-instance-overview.md
For other migration guides, see [Database Migration](/data-migration).
[Azure SQL Managed Instance](../../managed-instance/sql-managed-instance-paas-overview.md) is a recommended target option for SQL Server workloads that require a fully managed service without having to manage virtual machines or their operating systems. SQL Managed Instance enables you to move your on-premises applications to Azure with minimal application or database changes. It offers complete isolation of your instances with native virtual network support.
+Be sure to review the SQL Server database engine features [available in Azure SQL Managed Instance](../../database/features-comparison.md) to validate the supportability of your migration target.
+ ## Considerations The key factors to consider when you're evaluating migration options are:
azure-vmware Azure Security Integration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/azure-security-integration.md
Now that you've covered how to protect your Azure VMware Solution VMs, you may w
- Using the [Azure Defender dashboard](../security-center/azure-defender-dashboard.md) - [Advanced multistage attack detection in Azure Sentinel](../azure-monitor/logs/quick-create-workspace.md)-- [Lifecycle management of Azure VMware Solution VMs](lifecycle-management-of-azure-vmware-solution-vms.md)
+- [Monitor and manage Azure VMware Solution VMs](lifecycle-management-of-azure-vmware-solution-vms.md)
azure-vmware Backup Azure Vmware Solution Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/backup-azure-vmware-solution-virtual-machines.md
- Title: Back up Azure VMware Solution VMs with Azure Backup Server
-description: Configure your Azure VMware Solution environment to back up virtual machines by using Azure Backup Server.
- Previously updated : 02/04/2021--
-# Back up Azure VMware Solution VMs with Azure Backup Server
-
-In this article, we'll back up VMware virtual machines (VMs) running on Azure VMware Solution with Azure Backup Server. First, thoroughly go through [Set up Microsoft Azure Backup Server for Azure VMware Solution](set-up-backup-server-for-azure-vmware-solution.md).
-
-Then, we'll walk through all of the necessary procedures to:
-
-> [!div class="checklist"]
-> * Set up a secure channel so that Azure Backup Server can communicate with VMware servers over HTTPS.
-> * Add the account credentials to Azure Backup Server.
-> * Add the vCenter to Azure Backup Server.
-> * Set up a protection group that contains the VMware VMs you want to back up, specify backup settings, and schedule the backup.
-
-## Create a secure connection to the vCenter server
-
-By default, Azure Backup Server communicates with VMware servers over HTTPS. To set up the HTTPS connection, download the VMware certificate authority (CA) certificate and import it on the Azure Backup Server.
-
-### Set up the certificate
-
-1. In the browser, on the Azure Backup Server machine, enter the vSphere Web Client URL.
-
- > [!NOTE]
- > If the VMware **Getting Started** page doesn't appear, verify the connection and browser proxy settings and try again.
-
-1. On the VMware **Getting Started** page, select **Download trusted root CA certificates**.
-
- :::image type="content" source="../backup/media/backup-azure-backup-server-vmware/vsphere-web-client.png" alt-text="vSphere Web Client":::
-
-1. Save the **download.zip** file to the Azure Backup Server machine, and then extract its contents to the **certs** folder, which contains the:
-
- - Root certificate file with an extension that begins with a numbered sequence like .0 and .1.
- - CRL file with an extension that begins with a sequence like .r0 or .r1.
-
-1. In the **certs** folder, right-click the root certificate file and select **Rename** to change the extension to **.crt**.
-
- The file icon changes to one that represents a root certificate.
-
-1. Right-click the root certificate, and select **Install Certificate**.
-
-1. In the **Certificate Import Wizard**, select **Local Machine** as the destination for the certificate, and select **Next**.
-
- ![Wizard welcome page](../backup/media/backup-azure-backup-server-vmware/certificate-import-wizard1.png)
-
- > [!NOTE]
- > If asked, confirm that you want to allow changes to the computer.
-
-1. Select **Place all certificates in the following store**, and select **Browse** to choose the certificate store.
-
- ![Certificate storage](../backup/media/backup-azure-backup-server-vmware/cert-import-wizard-local-store.png)
-
-1. Select **Trusted Root Certification Authorities** as the destination folder, and select **OK**.
-
-1. Review the settings, and select **Finish** to start importing the certificate.
-
- ![Verify certificate is in the proper folder](../backup/media/backup-azure-backup-server-vmware/cert-wizard-final-screen.png)
-
-1. After the certificate import is confirmed, sign in to the vCenter server to confirm that your connection is secure.
-
-### Enable TLS 1.2 on Azure Backup Server
-
-VMware 6.7 and later use TLS as the communication protocol.
-
-1. Copy the following registry settings, and paste them into Notepad. Then save the file as TLS.REG without the .txt extension.
-
- ```
-
- Windows Registry Editor Version 5.00
-
- [HKEY_LOCAL_MACHINE\SOFTWARE\WOW6432Node\Microsoft\.NETFramework\v2.0.50727]
-
- "SystemDefaultTlsVersions"=dword:00000001
-
- "SchUseStrongCrypto"=dword:00000001
-
- [HKEY_LOCAL_MACHINE\SOFTWARE\WOW6432Node\Microsoft\.NETFramework\v4.0.30319]
-
- "SystemDefaultTlsVersions"=dword:00000001
-
- "SchUseStrongCrypto"=dword:00000001
-
- [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NETFramework\v2.0.50727]
-
- "SystemDefaultTlsVersions"=dword:00000001
-
- "SchUseStrongCrypto"=dword:00000001
-
- [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NETFramework\v4.0.30319]
-
- "SystemDefaultTlsVersions"=dword:00000001
-
- "SchUseStrongCrypto"=dword:00000001
-
- ```
-
-1. Right-click the TLS.REG file, and select **Merge** or **Open** to add the settings to the registry.
--
-## Add the account on Azure Backup Server
-
-1. Open Azure Backup Server, and in the Azure Backup Server console, select **Management** > **Production Servers** > **Manage VMware**.
-
- ![Azure Backup Server console](../backup/media/backup-azure-backup-server-vmware/add-vmware-credentials.png)
-
-1. In the **Manage Credentials** dialog box, select **Add**.
-
- ![In the Manage Credentials dialog box, select Add.](../backup/media/backup-azure-backup-server-vmware/mabs-manage-credentials-dialog.png)
-
-1. In the **Add Credential** dialog box, enter a name and a description for the new credential. Specify the user name and password you defined on the VMware server.
-
- > [!NOTE]
- > If the VMware server and Azure Backup Server aren't in the same domain, specify the domain in the **User name** box.
-
- ![Azure Backup Server Add Credential dialog box](../backup/media/backup-azure-backup-server-vmware/mabs-add-credential-dialog2.png)
-
-1. Select **Add** to add the new credential.
-
- ![Screenshot shows the Azure Backup Server Manage Credentials dialog box with new credentials displayed.](../backup/media/backup-azure-backup-server-vmware/new-list-of-mabs-creds.png)
-
-## Add the vCenter server to Azure Backup Server
-
-1. In the Azure Backup Server console, select **Management** > **Production Servers** > **Add**.
-
- ![Open Production Server Addition Wizard](../backup/media/backup-azure-backup-server-vmware/add-vcenter-to-mabs.png)
-
-1. Select **VMware Servers**, and select **Next**.
-
- ![Production Server Addition Wizard](../backup/media/backup-azure-backup-server-vmware/production-server-add-wizard.png)
-
-1. Specify the IP address of the vCenter.
-
- ![Specify VMware server](../backup/media/backup-azure-backup-server-vmware/add-vmware-server-provide-server-name.png)
-
-1. In the **SSL Port** box, enter the port used to communicate with the vCenter.
-
- > [!TIP]
- > Port 443 is the default port, but you can change it if your vCenter listens on a different port.
-
-1. In the **Specify Credential** box, select the credential that you created in the previous section.
-
-1. Select **Add** to add the vCenter to the servers list, and select **Next**.
-
- ![Add VMware server and credential](../backup/media/backup-azure-backup-server-vmware/add-vmware-server-credentials.png)
-
-1. On the **Summary** page, select **Add** to add the vCenter to Azure Backup Server.
-
- The new server gets added immediately. vCenter doesn't need an agent.
-
- ![Add VMware server to Azure Backup Server](../backup/media/backup-azure-backup-server-vmware/tasks-screen.png)
-
-1. On the **Finish** page, review the settings, and then select **Close**.
-
- ![Finish page](../backup/media/backup-azure-backup-server-vmware/summary-screen.png)
-
- You see the vCenter server listed under **Production Server** with:
- - Type as **VMware Server**
- - Agent Status as **OK**
-
- If you see **Agent Status** as **Unknown**, select **Refresh**.
-
-## Configure a protection group
-
-Protection groups gather multiple VMs and apply the same data retention and backup settings to all VMs in the group.
-
-1. In the Azure Backup Server console, select **Protection** > **New**.
-
- ![Open the Create New Protection Group wizard](../backup/media/backup-azure-backup-server-vmware/open-protection-wizard.png)
-
-1. On the **Create New Protection Group** wizard welcome page, select **Next**.
-
- ![Create New Protection Group wizard dialog box](../backup/media/backup-azure-backup-server-vmware/protection-wizard.png)
-
-1. On the **Select Protection Group Type** page, select **Servers**, and then select **Next**. The **Select Group Members** page appears.
-
-1. On the **Select Group Members** page, select the VMs (or VM folders) that you want to back up, and then select **Next**.
-
- > [!NOTE]
- > When you select a folder or VMs, folders inside that folder are also selected for backup. You can uncheck folders or VMs you don't want to back up. If a VM or folder is already being backed up, you can't select it, which ensures duplicate recovery points aren't created for a VM.
-
- ![Select group members](../backup/media/backup-azure-backup-server-vmware/server-add-selected-members.png)
-
-1. On the **Select Data Protection Method** page, enter a name for the protection group and protection settings.
-
-1. Set the short-term protection to **Disk**, enable online protection, and then select **Next**.
-
- ![Select data protection method](../backup/media/backup-azure-backup-server-vmware/name-protection-group.png)
-
-1. Specify how long you want to keep data backed up to disk.
-
- - **Retention range**: The number of days that disk recovery points are kept.
- - **Express Full Backup**: How often disk recovery points are taken. To change the times or dates when short-term backups occur, select **Modify**.
-
- :::image type="content" source="media/azure-vmware-solution-backup/new-protection-group-specify-short-term-goals.png" alt-text="Specify your short-term goals for disk-based protection":::
-
-1. On the **Review Disk Storage Allocation** page, review the disk space provided for the VM backups.
-
- - The recommended disk allocations are based on the retention range you specified, the type of workload, and the size of the protected data. Make any changes required, and then select **Next**.
- - **Data size:** Size of the data in the protection group.
- - **Disk space:** Recommended amount of disk space for the protection group. If you want to modify this setting, select space slightly larger than the amount you estimate each data source will grow.
- - **Storage pool details:** Shows the status of the storage pool, which includes total and remaining disk size.
-
- :::image type="content" source="media/azure-vmware-solution-backup/review-disk-allocation.png" alt-text="Review disk space given in the storage pool":::
-
- > [!NOTE]
- > In some scenarios, the data size reported is higher than the actual VM size. We're aware of the issue and currently investigating it.
-
-1. On the **Choose Replica Creation Method** page, indicate how you want to take the initial backup, and select **Next**.
-
- - The default is **Automatically over the network** and **Now**. If you use the default, specify an off-peak time. If you choose **Later**, specify a day and time.
- - For large amounts of data or less-than-optimal network conditions, consider replicating the data offline by using removable media.
-
- ![Choose replica creation method](../backup/media/backup-azure-backup-server-vmware/replica-creation.png)
-
-1. For **Consistency check options**, select how and when to automate the consistency checks and select **Next**.
-
- - You can run consistency checks when replica data becomes inconsistent, or on a set schedule.
- - If you don't want to configure automatic consistency checks, you can run a manual check by right-clicking the protection group **Perform Consistency Check**.
-
-1. On the **Specify Online Protection Data** page, select the VMs or VM folders that you want to back up, and then select **Next**.
-
- > [!TIP]
- > You can select the members individually or choose **Select All** to choose all members.
-
- ![Specify online protection data](../backup/media/backup-azure-backup-server-vmware/select-data-to-protect.png)
-
-1. On the **Specify Online Backup Schedule** page, indicate how often you want to back up data from local storage to Azure.
-
- - Cloud recovery points for the data are generated according to the schedule.
- - After the recovery point gets generated, it's then transferred to the Recovery Services vault in Azure.
-
- ![Specify online backup schedule](../backup/media/backup-azure-backup-server-vmware/online-backup-schedule.png)
-
-1. On the **Specify Online Retention Policy** page, indicate how long you want to keep the recovery points created from the backups to Azure.
-
- - There's no time limit for how long you can keep data in Azure.
- - The only limit is that you can't have more than 9,999 recovery points per protected instance. In this example, the protected instance is the VMware server.
-
- ![Specify online retention policy](../backup/media/backup-azure-backup-server-vmware/retention-policy.png)
-
-1. On the **Summary** page, review the settings and then select **Create Group**.
-
- ![Protection group member and setting summary](../backup/media/backup-azure-backup-server-vmware/protection-group-summary.png)
-
-## Monitor with the Azure Backup Server console
-
-After you configure the protection group to back up Azure VMware Solution VMs, you can monitor the status of the backup job and alert by using the Azure Backup Server console. Here's what you can monitor.
--- In the **Monitoring** task area:
- - Under **Alerts**, you can monitor errors, warnings, and general information. You can view active and inactive alerts and set up email notifications.
- - Under **Jobs**, you can view jobs started by Azure Backup Server for a specific protected data source or protection group. You can follow job progress or check resources consumed by jobs.
-- In the **Protection** task area, you can check the status of volumes and shares in the protection group. You can also check configuration settings such as recovery settings, disk allocation, and the backup schedule.-- In the **Management** task area, you can view the **Disks, Online**, and **Agents** tabs to check the status of disks in the storage pool, registration to Azure, and deployed DPM agent status.--
-## Restore VMware virtual machines
-
-In the Azure Backup Server Administrator Console, there are two ways to find recoverable data. You can search or browse. When you recover data, you might or might not want to restore data or a VM to the same location. For this reason, Azure Backup Server supports three recovery options for VMware VM backups:
--- **Original location recovery (OLR)**: Use OLR to restore a protected VM to its original location. You can restore a VM to its original location only if no disks were added or deleted since the backup occurred. If disks were added or deleted, you must use alternate location recovery.-- **Alternate location recovery (ALR)**: Use when the original VM is missing, or you don't want to disturb the original VM. Provide the location of an ESXi host, resource pool, folder, and the storage datastore and path. To help differentiate the restored VM from the original VM, Azure Backup Server appends *"-Recovered"* to the name of the VM.-- **Individual file location recovery (ILR)**: If the protected VM is a Windows Server VM, individual files or folders inside the VM can be recovered by using the ILR capability of Azure Backup Server. To recover individual files, see the procedure later in this article. Restoring an individual file from a VM is available only for Windows VM and disk recovery points.-
-### Restore a recovery point
-
-1. In the Azure Backup Server Administrator Console, select the **Recovery** view.
-
-1. Using the **Browse** pane, browse or filter to find the VM you want to recover. After you select a VM or folder, the **Recovery points for** pane displays the available recovery points.
-
- ![Available recovery points](../backup/media/restore-azure-backup-server-vmware/recovery-points.png)
-
-1. In the **Recovery points for** pane, select a date when a recovery point was taken. Calendar dates in bold have available recovery points. Alternately, you can right-click the VM and select **Show all recovery points** and then select the recovery point from the list.
-
- > [!NOTE]
- > For short-term protection, select a disk-based recovery point for faster recovery. After short-term recovery points expire, you see only **Online** recovery points to recover.
-
-1. Before recovering from an online recovery point, ensure the staging location contains enough free space to house the full uncompressed size of the VM you want to recover. The staging location can be viewed or changed by running the **Configure Subscription Settings Wizard**.
-
- :::image type="content" source="media/azure-vmware-solution-backup/mabs-recovery-folder-settings.png" alt-text="Azure Backup Server Recovery Folder Settings":::
-
-1. Select **Recover** to open the **Recovery Wizard**.
-
- ![Recovery Wizard, Review Recovery Selection page](../backup/media/restore-azure-backup-server-vmware/recovery-wizard.png)
-
-1. Select **Next** to go to the **Specify Recovery Options** screen. Select **Next** again to go to the **Select Recovery Type** screen.
-
- > [!NOTE]
- > VMware workloads don't support enabling network bandwidth throttling.
-
-1. On the **Select Recovery Type** page, either recover to the original instance or a new location.
-
- - If you choose **Recover to original instance**, you don't need to make any more choices in the wizard. The data for the original instance is used.
- - If you choose **Recover as virtual machine on any host**, then on the **Specify Destination** screen, provide the information for **ESXi Host**, **Resource Pool**, **Folder**, and **Path**.
-
- ![Select Recovery Type page](../backup/media/restore-azure-backup-server-vmware/recovery-type.png)
-
-1. On the **Summary** page, review your settings and select **Recover** to start the recovery process.
-
- The **Recovery status** screen shows the progression of the recovery operation.
-
-### Restore an individual file from a VM
-
-You can restore individual files from a protected VM recovery point. This feature is only available for Windows Server VMs. Restoring individual files is similar to restoring the entire VM, except you browse into the VMDK and find the files you want before you start the recovery process.
-
-> [!NOTE]
-> Restoring an individual file from a VM is available only for Windows VM and disk recovery points.
-
-1. In the Azure Backup Server Administrator Console, select the **Recovery** view.
-
-1. Using the **Browse** pane, browse or filter to find the VM you want to recover. After you select a VM or folder, the **Recovery points for** pane displays the available recovery points.
-
- ![Recovery points available](../backup/media/restore-azure-backup-server-vmware/vmware-rp-disk.png)
-
-1. In the **Recovery points for** pane, use the calendar to select the date that contains the wanted recovery points. Depending on how the backup policy was configured, dates can have more than one recovery point.
-
-1. After you select the day when the recovery point was taken, make sure you choose the correct **Recovery time**.
-
- > [!NOTE]
- > If the selected date has multiple recovery points, choose your recovery point by selecting it in the **Recovery time** drop-down menu.
-
- After you choose the recovery point, the list of recoverable items appears in the **Path** pane.
-
-1. To find the files you want to recover, in the **Path** pane, double-click the item in the **Recoverable Item** column to open it. Then select the files or folders you want to recover. To select multiple items, hold down the **Ctrl** key while you select each item. Use the **Path** pane to search the list of files or folders that appear in the **Recoverable Item** column.
-
- > [!NOTE]
- > **Search list below** doesn't search into subfolders. To search through subfolders, double-click the folder. Use the **Up** button to move from a child folder into the parent folder. You can select multiple items (files and folders), but they must be in the same parent folder. You can't recover items from multiple folders in the same recovery job.
-
- ![Review recovery selection](../backup/media/restore-azure-backup-server-vmware/vmware-rp-disk-ilr-2.png)
-
-1. When you've selected the items for recovery, in the Administrator Console tool ribbon, select **Recover** to open the **Recovery Wizard**. In the **Recovery Wizard**, the **Review Recovery Selection** screen shows the selected items to be recovered.
-
-1. On the **Specify Recovery Options** screen, do one of the following steps:
-
- - Select **Modify** to enable network bandwidth throttling. In the **Throttle** dialog box, select **Enable network bandwidth usage throttling** to turn it on. Once enabled, configure the **Settings** and **Work Schedule**.
- - Select **Next** to leave network throttling disabled.
-
-1. On the **Select Recovery Type** screen, select **Next**. You can only recover your files or folders to a network folder.
-
-1. On the **Specify Destination** screen, select **Browse** to find a network location for your files or folders. Azure Backup Server creates a folder where all recovered items are copied. The folder name has the prefix MABS_day-month-year. When you select a location for the recovered files or folder, the details for that location are provided.
-
- ![Specify location to recover files](../backup/media/restore-azure-backup-server-vmware/specify-destination.png)
-
-1. On the **Specify Recovery Options** screen, choose which security setting to apply. You can opt to modify the network bandwidth usage throttling, but throttling is disabled by default. Also, **SAN Recovery** and **Notification** aren't enabled.
-
-1. On the **Summary** screen, review your settings and select **Recover** to start the recovery process. The **Recovery status** screen shows the progression of the recovery operation.
-
-## Next steps
-
-Now that you've covered backing up your Azure VMware Solution VMs with Azure Backup Server, you may want to learn about:
--- [Troubleshooting when setting up backups in Azure Backup Server](../backup/backup-azure-mabs-troubleshoot.md)-- [Lifecycle management of Azure VMware Solution VMs](lifecycle-management-of-azure-vmware-solution-vms.md)
azure-vmware Deploy Vm Content Library https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/deploy-vm-content-library.md
Now that the content library has been created, you can add an ISO image to deplo
Now that you've covered creating a content library to deploy VMs in Azure VMware Solution, you may want to learn about: - [How to migrate VM workloads to your private cloud](tutorial-deploy-vmware-hcx.md)-- [Lifecycle management of Azure VMware Solution VMs](lifecycle-management-of-azure-vmware-solution-vms.md)
+- [Monitor and manage Azure VMware Solution VMs](lifecycle-management-of-azure-vmware-solution-vms.md)
<!-- LINKS - external-->
azure-vmware Lifecycle Management Of Azure Vmware Solution Vms https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/lifecycle-management-of-azure-vmware-solution-vms.md
Title: Lifecycle management of Azure VMware Solution VMs
+ Title: Monitor and manage Azure VMware Solution VMs
description: Learn to manage all aspects of the lifecycle of your Azure VMware Solution VMs with Microsoft Azure native tools.- Previously updated : 02/08/2021+ Last updated : 05/04/2021
-# Lifecycle management of Azure VMware Solution VMs
+# Monitor and manage Azure VMware Solution VMs
++ Microsoft Azure native tools allow you to monitor and manage your virtual machines (VMs) in the Azure environment. Yet they also allow you to monitor and manage your VMs on Azure VMware Solution and your on-premises VMs. In this article, we'll look at the integrated monitoring architecture Azure offers, and how you can use its native tools to manage your Azure VMware Solution VMs throughout their lifecycle.
azure-vmware Netapp Files With Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/netapp-files-with-azure-vmware-solution.md
Title: Azure NetApp Files with Azure VMware Solution
+ Title: Integrate Azure NetApp Files with Azure VMware Solution
description: Use Azure NetApp Files with Azure VMware Solution VMs to migrate and sync data across on-premises servers, Azure VMware Solution VMs, and cloud infrastructures. Last updated 02/10/2021
-# Azure NetApp Files with Azure VMware Solution
+# Integrate Azure NetApp Files with Azure VMware Solution
In this article, we'll walk through the steps of integrating Azure NetApp Files with Azure VMware Solution-based workloads. The guest operating system will run inside virtual machines (VMs) accessing Azure NetApp Files volumes.
azure-vmware Protect Azure Vmware Solution With Application Gateway https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/protect-azure-vmware-solution-with-application-gateway.md
Title: Use Azure Application Gateway to protect your web apps on Azure VMware Solution
+ Title: Protect web apps on Azure VMware Solution with Azure Application Gateway
description: Configure Azure Application Gateway to securely expose your web apps running on Azure VMware Solution. Last updated 02/10/2021
-# Use Azure Application Gateway to protect your web apps on Azure VMware Solution
+# Protect web apps on Azure VMware Solution with Azure Application Gateway
[Azure Application Gateway](https://azure.microsoft.com/services/application-gateway/) is a layer 7 web traffic load balancer that lets you manage traffic to your web applications. It's offered in both Azure VMware Solution v1.0 and v2.0. Both versions have been tested with web apps running on Azure VMware Solution.
azure-vmware Reserved Instance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/reserved-instance.md
Now that you've covered reserved instance of Azure VMware Solution, you may want
- [Creating an Azure VMware Solution assessment](../migrate/how-to-create-azure-vmware-solution-assessment.md). - [Managing DHCP for Azure VMware Solution](manage-dhcp.md).-- [Lifecycle management of Azure VMware Solution VMs](lifecycle-management-of-azure-vmware-solution-vms.md).
+- [Monitor and manage Azure VMware Solution VMs](lifecycle-management-of-azure-vmware-solution-vms.md).
azure-vmware Reset Vsphere Credentials https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/reset-vsphere-credentials.md
In addition to this how-to, you can also view the video for [resetting the vCent
2. Run the following command to update your vCenter CloudAdmin password. You will need to replace {SubscriptionID}, {ResourceGroup}, and {PrivateCloudName} with the actual values of the private cloud that the CloudAdmin account belongs to.
-```
-az resource invoke-action --action rotateVcenterPassword --ids "/subscriptions/{SubscriptionID}/resourceGroups/{ResourceGroup}/providers/Microsoft.AVS/privateClouds/{PrivateCloudName}" --api-version "2020-07-17-preview"
-```
+ ```azurecli-interactive
+ az resource invoke-action --action rotateVcenterPassword --ids "/subscriptions/{SubscriptionID}/resourceGroups/{ResourceGroup}/providers/Microsoft.AVS/privateClouds/{PrivateCloudName}" --api-version "2020-07-17-preview"
+ ```
-3. Run the following command to update your NSX-T admin password. You will need to replace {SubscriptionID}, {ResourceGroup}, and {PrivateCloudName} with the actual values of the private cloud that the NSX-T admin account belongs to.
+3. Run the following command to update your NSX-T admin password. You will need to replace **{SubscriptionID}**, **{ResourceGroup}**, and **{PrivateCloudName}** with the actual values of the private cloud that the NSX-T admin account belongs to.
-```
-az resource invoke-action --action rotateNSXTPassword --ids "/subscriptions/{SubscriptionID}/resourceGroups/{ResourceGroup}/providers/Microsoft.AVS/privateClouds/{PrivateCloudName}" --api-version "2020-07-17-preview"
-```
+ ```azurecli-interactive
+ az resource invoke-action --action rotateNSXTPassword --ids "/subscriptions/{SubscriptionID}/resourceGroups/{ResourceGroup}/providers/Microsoft.AVS/privateClouds/{PrivateCloudName}" --api-version "2020-07-17-preview"
+ ```
## Ensure the HCX connector has your latest vCenter Server credentials
Now that you've reset your credentials, follow these steps to ensure the HCX con
2. On the VMware HCX Dashboard, select **Site Pairing**.
- :::image type="content" source="media/reset-vsphere-credentials/hcx-site-pairing.png" alt-text="Screenshot of VMware HCX Dashboard with Site Pairing highlighted.":::
+ :::image type="content" source="media/reset-vsphere-credentials/hcx-site-pairing.png" alt-text="Screenshot of VMware HCX Dashboard with Site Pairing highlighted.":::
3. Select the correct connection to Azure VMware Solution (if there is more than one) and select **Edit Connection**.
Now that you've reset your credentials, follow these steps to ensure the HCX con
Now that you've covered resetting vCenter Server and NSX-T Manager credentials for Azure VMware Solution, you may want to learn about: - [Configuring NSX network components in Azure VMware Solution](configure-nsx-network-components-azure-portal.md).-- [Lifecycle management of Azure VMware Solution VMs](lifecycle-management-of-azure-vmware-solution-vms.md).-- [Deploying disaster recovery of virtual machines using Azure VMware Solution](disaster-recovery-for-virtual-machines.md).
+- [Monitor and manage Azure VMware Solution VMs](lifecycle-management-of-azure-vmware-solution-vms.md).
+- [Deploying disaster recovery of virtual machines using Azure VMware Solution](disaster-recovery-for-virtual-machines.md).
azure-vmware Set Up Backup Server For Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/set-up-backup-server-for-azure-vmware-solution.md
- Title: Set up Azure Backup Server for Azure VMware Solution
-description: Set up your Azure VMware Solution environment to back up virtual machines using Azure Backup Server.
- Previously updated : 02/04/2021--
-# Set up Azure Backup Server for Azure VMware Solution
-
-Azure Backup Server contributes to your business continuity and disaster recovery (BCDR) strategy. With Azure VMware Solution, you can only configure a virtual machine (VM)-level backup using Azure Backup Server.
-
-Azure Backup Server can store backup data to:
--- **Disk**: For short-term storage, Azure Backup Server backs up data to disk pools.-- **Azure**: For both short-term and long-term storage off-premises, Azure Backup Server data stored in disk pools can be backed up to the Microsoft Azure cloud by using Azure Backup.-
-Use Azure Backup Server to restore data to the source or an alternate location. That way, if the original data is unavailable because of planned or unexpected issues, you can restore data to an alternate location.
-
-This article helps you prepare your Azure VMware Solution environment to back up VMs by using Azure Backup Server. We walk you through the steps to:
-
-> [!div class="checklist"]
-> * Determine the recommended VM disk type and size to use.
-> * Create a Recovery Services vault that stores the recovery points.
-> * Set the storage replication for a Recovery Services vault.
-> * Add storage to Azure Backup Server.
-
-## Supported VMware features
--- **Agentless backup:** Azure Backup Server doesn't require an agent to be installed on the vCenter or ESXi server to back up the VM. Instead, just provide the IP address or fully qualified domain name (FQDN) and the sign-in credentials used to authenticate the VMware server with Azure Backup Server.-- **Cloud-integrated backup:** Azure Backup Server protects workloads to disk and the cloud. The backup and recovery workflow of Azure Backup Server helps you manage long-term retention and offsite backup.-- **Detect and protect VMs managed by vCenter:** Azure Backup Server detects and protects VMs deployed on a vCenter or ESXi server. Azure Backup Server also detects VMs managed by vCenter so that you can protect large deployments.-- **Folder-level auto protection:** vCenter lets you organize your VMs in VM folders. Azure Backup Server detects these folders. You can use it to protect VMs at the folder level, including all subfolders. When protecting folders, Azure Backup Server protects the VMs in that folder and protects VMs added later. Azure Backup Server detects new VMs daily, protecting them automatically. As you organize your VMs in recursive folders, Azure Backup Server automatically detects and protects the new VMs deployed in the recursive folders.-- **Azure Backup Server continues to protect vMotioned VMs within the cluster:** As VMs are vMotioned for load balancing within the cluster, Azure Backup Server automatically detects and continues VM protection.-- **Recover necessary files faster:** Azure Backup Server can recover files or folders from a Windows VM without recovering the entire VM.-
-## Limitations
--- Update Rollup 1 for Azure Backup Server v3 must be installed.-- You can't back up user snapshots before the first Azure Backup Server backup. After Azure Backup Server finishes the first backup, then you can back up user snapshots.-- Azure Backup Server can't protect VMware VMs with pass-through disks and physical raw device mappings (pRDMs).-- Azure Backup Server can't detect or protect VMware vApps.-
-To set up Azure Backup Server for Azure VMware Solution, you must finish the following steps:
--- Set up the prerequisites and environment.-- Create a Recovery Services vault.-- Download and install Azure Backup Server.-- Add storage to Azure Backup Server.-
-### Deployment architecture
-
-Azure Backup Server is deployed as an Azure infrastructure as a service (IaaS) VM to protect Azure VMware Solution VMs.
--
-## Prerequisites for the Azure Backup Server environment
-
-Consider the recommendations in this section when you install Azure Backup Server in your Azure environment.
-
-### Azure Virtual Network
-
-Ensure that you [configure networking for your VMware private cloud in Azure](tutorial-configure-networking.md).
-
-### Determine the size of the VM
-
-Follow the instructions in the [Create your first Windows VM in the Azure portal](../virtual-machines/windows/quick-create-portal.md) tutorial. You'll create the VM in the virtual network, which you created in the previous step. Start with a gallery image of Windows Server 2019 Datacenter to run the Azure Backup Server.
-
-The table summarizes the maximum number of protected workloads for each Azure Backup Server VM size. The information is based on internal performance and scale tests with canonical values for the workload size and churn. The actual workload size can be larger but should be accommodated by the disks attached to the Azure Backup Server VM.
-
-| Maximum protected workloads | Average workload size | Average workload churn (daily) | Minimum storage IOPS | Recommended disk type/size | Recommended VM size |
-|-|--|--||--||
-| 20 | 100 GB | Net 5% churn | 2,000 | Standard HDD (8 TB or above size per disk) | A4V2 |
-| 40 | 150 GB | Net 10% churn | 4,500 | Premium SSD* (1 TB or above size per disk) | DS3_V2 |
-| 60 | 200 GB | Net 10% churn | 10,500 | Premium SSD* (8 TB or above size per disk) | DS3_V2 |
-
-*To get the required IOPs, use minimum recommended- or higher-size disks. Smaller-size disks offer lower IOPs.
-
-> [!NOTE]
-> Azure Backup Server is designed to run on a dedicated, single-purpose server. You can't install Azure Backup Server on a computer that:
-> * Runs as a domain controller.
-> * Has the Application Server role installed.
-> * Is a System Center Operations Manager management server.
-> * Runs Exchange Server.
-> * Is a node of a cluster.
-
-### Disks and storage
-
-Azure Backup Server requires disks for installation.
-
-| Requirement | Recommended size |
-|-|-|
-| Azure Backup Server installation | Installation location: 3 GB<br />Database files drive: 900 MB<br />System drive: 1 GB for SQL Server installation<br /><br />You'll also need space for Azure Backup Server to copy the file catalog to a temporary installation location when you archive. |
-| Disk for storage pool<br />(Uses basic volumes, can't be on a dynamic disk) | Two to three times the protected data size.<br />For detailed storage calculation, see [DPM Capacity Planner](https://www.microsoft.com/download/details.aspx?id=54301). |
-
-To learn how to attach a new managed data disk to an existing Azure VM, see [Attach a managed data disk to a Windows VM by using the Azure portal](../virtual-machines/windows/attach-managed-disk-portal.md).
-
-> [!NOTE]
-> A single Azure Backup Server has a soft limit of 120 TB for the storage pool.
-
-### Store backup data on local disk and in Azure
-
-Storing backup data in Azure reduces backup infrastructure on the Azure Backup Server VM. For operational recovery (backup), Azure Backup Server stores backup data on Azure disks attached to the VM. After the disks and storage space are attached to the VM, Azure Backup Server manages the storage for you. The amount of storage depends on the number and size of disks attached to each Azure VM. Each size of the Azure VM has a maximum number of disks that can be attached. For example, A2 is four disks, A3 is eight disks, and A4 is 16 disks. Again, the size and number of disks determine the total backup storage pool capacity.
-
-> [!IMPORTANT]
-> You should *not* retain operational recovery data on Azure Backup Server-attached disks for more than five days. If data is more than five days old, store it in a Recovery Services vault.
-
-To store backup data in Azure, create or use a Recovery Services vault. When you prepare to back up the Azure Backup Server workload, you [configure the Recovery Services vault](#create-a-recovery-services-vault). Once configured, each time an online backup job runs, a recovery point gets created in the vault. Each Recovery Services vault holds up to 9,999 recovery points. Depending on the number of recovery points created and how long kept, you can keep backup data for many years. For example, you could create monthly recovery points and keep them for five years.
-
-> [!IMPORTANT]
-> Whether you send backup data to Azure or keep it locally, you must register Azure Backup Server with a Recovery Services vault.
-
-### Scale deployment
-
-If you want to scale your deployment, you have the following options:
--- **Scale up**: Increase the size of the Azure Backup Server VM from A series to DS3 series, and increase the local storage.-- **Offload data**: Send older data to Azure and keep only the newest data on the storage attached to the Azure Backup Server machine.-- **Scale out**: Add more Azure Backup Server machines to protect the workloads.-
-### .NET Framework
-
-The VM must have .NET Framework 3.5 SP1 or higher installed.
-
-### Join a domain
-
-The Azure Backup Server VM must be joined to a domain. A domain user with administrator privileges on the VM must install Azure Backup Server.
-
-Azure Backup Server deployed in an Azure VM can back up workloads on the VMs in Azure VMware Solution. The workloads should be in the same domain to enable the backup operation.
-
-## Create a Recovery Services vault
-
-A Recovery Services vault is a storage entity that stores the recovery points created over time. It also contains backup policies that are associated with protected items.
-
-1. Sign in to your subscription in the [Azure portal](https://portal.azure.com/).
-
-1. On the left menu, select **All services**.
-
- ![On the left menu, select All services.](../backup/media/backup-create-rs-vault/click-all-services.png)
-
-1. In the **All services** dialog box, enter **Recovery Services** and select **Recovery Services vaults** from the list.
-
- ![Enter and choose Recovery Services vaults.](../backup/media/backup-create-rs-vault/all-services.png)
-
- The list of Recovery Services vaults in the subscription appears.
-
-1. On the **Recovery Services vaults** dashboard, select **Add**.
-
- ![Add a Recovery Services vault.](../backup/media/backup-create-rs-vault/add-button-create-vault.png)
-
- The **Recovery Services vault** dialog box opens.
-
-1. Enter values for the **Name**, **Subscription**, **Resource group**, and **Location**.
-
- ![Configure the Recovery Services vault.](../backup/media/backup-create-rs-vault/create-new-vault-dialog.png)
-
- - **Name**: Enter a friendly name to identify the vault. The name must be unique to the Azure subscription. Specify a name that has at least two but not more than 50 characters. The name must start with a letter and consist only of letters, numbers, and hyphens.
- - **Subscription**: Choose the subscription to use. If you're a member of only one subscription, you'll see that name. If you're not sure which subscription to use, use the default (suggested) subscription. There are multiple choices only if your work or school account is associated with more than one Azure subscription.
- - **Resource group**: Use an existing resource group or create a new one. To see the list of available resource groups in your subscription, select **Use existing**, and then select a resource from the drop-down list. To create a new resource group, select **Create new** and enter the name.
- - **Location**: Select the geographic region for the vault. To create a vault to protect Azure VMware Solution virtual machines, the vault *must* be in the same region as the Azure VMware Solution private cloud.
-
-1. When you're ready to create the Recovery Services vault, select **Create**.
-
- ![Create the Recovery Services vault.](../backup/media/backup-create-rs-vault/click-create-button.png)
-
- It can take a while to create the Recovery Services vault. Monitor the status notifications in the **Notifications** area in the upper-right corner of the portal. After creating your vault, it's visible in the list of Recovery Services vaults. If you don't see your vault, select **Refresh**.
-
- ![Refresh the list of backup vaults.](../backup/media/backup-create-rs-vault/refresh-button.png)
-
-## Set storage replication
-
-The storage replication option lets you choose between geo-redundant storage (the default) and locally redundant storage. Geo-redundant storage copies the data in your storage account to a secondary region, making your data durable. Locally redundant storage is a cheaper option that isn't as durable. To learn more about geo-redundant and locally redundant storage options, see [Azure Storage redundancy](../storage/common/storage-redundancy.md).
-
-> [!IMPORTANT]
-> Changing the setting of **Storage replication type Locally-redundant/Geo-redundant** for a Recovery Services vault must be done before you configure backups in the vault. After you configure backups, the option to modify it is disabled, and you can't change the storage replication type.
-
-1. From **Recovery Services vaults**, select the new vault.
-
-1. Under **Settings**, select **Properties**. Under **Backup Configuration**, select **Update**.
-
-1. Select the storage replication type, and select **Save**.
-
-## Download and install the software package
-
-Follow the steps in this section to download, extract, and install the software package.
-
-### Download the software package
-
-1. Sign in to the [Azure portal](https://portal.azure.com/).
-
-1. If you already have a Recovery Services vault open, continue to the next step. If you don't have a Recovery Services vault open, and you're in the Azure portal, on the main menu, select **Browse**.
-
- 1. In the list of resources enter **Recovery Services**.
-
- 1. As you begin typing, the list filters based on your input. When you see **Recovery Services vaults**, select it.
-
- ![Create Recovery Services vault step 1](../backup/media/backup-azure-microsoft-azure-backup/open-recovery-services-vault.png)
-
-1. From the list of Recovery Services vaults, select a vault.
-
- The selected vault dashboard opens.
-
- ![The selected vault dashboard opens.](../backup/media/backup-azure-microsoft-azure-backup/vault-dashboard.png)
-
- The **Settings** option opens by default. If closed, select **Settings** to open it.
-
- ![The Settings option opens by default. If closed, select Settings to open it.](../backup/media/backup-azure-microsoft-azure-backup/vault-setting.png)
-
-1. Select **Backup** to open the **Getting Started** wizard.
-
- ![Select Backup to open the Getting Started wizard.](../backup/media/backup-azure-microsoft-azure-backup/getting-started-backup.png)
-
-1. In the window that opens:
-
- 1. From the **Where is your workload running?** menu, select **On-Premises**.
-
- :::image type="content" source="media/azure-vmware-solution-backup/deploy-mabs-on-premises-workload.png" alt-text="Where is your workload running?":::
-
- 1. From the **What do you want to back up?** menu, select the workloads you want to protect by using Azure Backup Server.
-
- 1. Select **Prepare Infrastructure** to download and install Azure Backup Server and the vault credentials.
-
- :::image type="content" source="media/azure-vmware-solution-backup/deploy-mabs-prepare-infrastructure.png" alt-text="Prepare Infrastructure":::
-
-1. In the **Prepare infrastructure** window that opens:
-
- 1. Select the **Download** link to install Azure Backup Server.
-
- 1. Select **Already downloaded or using the latest Azure Backup Server installation** and then **Download** to download the vault credentials. You'll use these credentials when you register the Azure Backup Server to the Recovery Services vault. The links take you to the Download Center, where you download the software package.
-
- :::image type="content" source="media/azure-vmware-solution-backup/deploy-mabs-prepare-infrastructure2.png" alt-text="Prepare Infrastructure - Azure Backup Server":::
-
-1. On the download page, select all the files and select **Next**.
-
- > [!NOTE]
- > You must download all the files to the same folder. Because the download size of the files together is greater than 3 GB, it might take up to 60 minutes for the download to complete.
-
- ![On the download page, select all the files and select Next.](../backup/media/backup-azure-microsoft-azure-backup/downloadcenter.png)
-
-### Extract the software package
-
-If you downloaded the software package to a different server, copy the files to the VM you created to deploy Azure Backup Server.
-
-> [!WARNING]
-> At least 4 GB of free space is required to extract the setup files.
-
-1. After you've downloaded all the files, double-click **MicrosoftAzureBackupInstaller.exe** to open the **Microsoft Azure Backup** setup wizard, and then select **Next**.
-
-1. Select the location to extract the files to and select **Next**.
-
-1. Select **Extract** to begin the extraction process.
-
- ![Select Extract to begin the extraction process.](../backup/media/backup-azure-microsoft-azure-backup/extract/03.png)
-
-1. Once extracted, select the option to **Execute setup.exe** and then select **Finish**.
-
-> [!TIP]
-> You can also locate the setup.exe file from the folder where you extracted the software package.
-
-### Install the software package
-
-1. On the setup window under **Install**, select **Microsoft Azure Backup** to open the setup wizard.
-
- ![On the setup window under Install, select Microsoft Azure Backup to open the setup wizard.](../backup/media/backup-azure-microsoft-azure-backup/launch-screen2.png)
-
-1. On the **Welcome** screen, select **Next** to continue to the **Prerequisite Checks** page.
-
-1. Select **Check Again** to determine whether the hardware and software meet the prerequisites for Azure Backup Server. If the prerequisites are met, select **Next**.
-
- ![ Select Check Again to determine if the hardware and software meet the prerequisites for Azure Backup Server. If met successfully, select Next.](../backup/media/backup-azure-microsoft-azure-backup/prereq/prereq-screen2.png)
-
-1. The Azure Backup Server installation package comes bundled with the appropriate SQL Server binaries that are needed. When you start a new Azure Backup Server installation, select the **Install new Instance of SQL Server with this Setup** option. Then select **Check and Install**.
-
- ![The Azure Backup Server installation package comes bundled with the appropriate SQL Server binaries that are needed.](../backup/media/backup-azure-microsoft-azure-backup/sql/01.png)
-
- > [!NOTE]
- > If you want to use your own SQL Server instance, the supported SQL Server versions are SQL Server 2014 SP1 or higher, 2016, and 2017. All SQL Server versions should be Standard or Enterprise 64-bit. The instance used by Azure Backup Server must be local only; it can't be remote. If you use an existing SQL Server instance for Azure Backup Server, the setup only supports the use of *named instances* of SQL Server.
-
-   If a failure occurs with a recommendation to restart the machine, do so, and select **Check Again**. For any SQL Server configuration issues, reconfigure SQL Server according to the SQL Server guidelines. Then retry installing or upgrading Azure Backup Server using the existing instance of SQL Server.
-
- **Manual configuration**
-
-   When you use your own SQL Server instance, make sure you add builtin\Administrators to the master database's sysadmin role.
-
- **Configure reporting services with SQL Server 2017**
-
-   If you use your own instance of SQL Server 2017, you must configure SQL Server 2017 Reporting Services (SSRS) manually. After configuring SSRS, make sure to set the **IsInitialized** property of SSRS to **True**. When set to **True**, Azure Backup Server assumes that SSRS is already configured and skips the SSRS configuration.
-
- To check the SSRS configuration status, run:
-
- ```powershell
-    $configset = Get-WmiObject -Namespace "root\Microsoft\SqlServer\ReportServer\RS_SSRS\v14\Admin" -Class MSReportServer_ConfigurationSetting -ComputerName localhost
-
- $configset.IsInitialized
- ```
-
- Use the following values for SSRS configuration:
-
- * **Service Account**: **Use built-in account** should be **Network Service**.
- * **Web Service URL**: **Virtual Directory** should be **ReportServer_\<SQLInstanceName>**.
- * **Database**: **DatabaseName** should be **ReportServer$\<SQLInstanceName>**.
- * **Web Portal URL**: **Virtual Directory** should be **Reports_\<SQLInstanceName>**.
-
- [Learn more](/sql/reporting-services/report-server/configure-and-administer-a-report-server-ssrs-native-mode) about SSRS configuration.
-
- > [!NOTE]
- > [Microsoft Online Services Terms](https://www.microsoft.com/licensing/product-licensing/products) (OST) governs the licensing for SQL Server used as the database for Azure Backup Server. According to OST, only use SQL Server bundled with Azure Backup Server as the database for Azure Backup Server.
-
-1. After the installation is successful, select **Next**.
-
-1. Provide a location for installing Microsoft Azure Backup Server files, and select **Next**.
-
- > [!NOTE]
-   > The scratch location is required for backup to Azure. Ensure the scratch location is at least 5% of the data planned for backing up to the cloud. For disk protection, you need to configure separate disks after the installation finishes. For more information about storage pools, see [Configure storage pools and disk storage](/previous-versions/system-center/system-center-2012-r2/hh758075(v=sc.12)).
-
- ![Provide a location for the installation of Microsoft Azure Backup Server files, and select Next.](../backup/media/backup-azure-microsoft-azure-backup/space-screen.png)
-
-1. Provide a strong password for restricted local user accounts, and select **Next**.
-
- ![Provide a strong password for restricted local user accounts, and select Next.](../backup/media/backup-azure-microsoft-azure-backup/security-screen.png)
-
-1. Select whether you want to use Microsoft Update to check for updates, and select **Next**.
-
- > [!NOTE]
- > We recommend having Windows Update redirect to Microsoft Update, which offers security and important updates for Windows and other products like Azure Backup Server.
-
- ![Select whether you want to use Microsoft Update to check for updates, and select Next.](../backup/media/backup-azure-microsoft-azure-backup/update-opt-screen2.png)
-
-1. Review the **Summary of Settings**, and select **Install**.
-
- The installation happens in phases.
- - The first phase installs the Microsoft Azure Recovery Services Agent.
- - The second phase checks for internet connectivity. If available, you can continue with the installation. If not available, you must provide proxy details to connect to the internet.
- - The final phase checks the prerequisite software. If not installed, any missing software gets installed along with the Microsoft Azure Recovery Services Agent.
-
-1. Select **Browse** to locate your vault credentials to register the machine to the Recovery Services vault, and then select **Next**.
-
-1. Select a passphrase to encrypt or decrypt the data sent between Azure and your premises.
-
- > [!TIP]
-   > You can automatically generate a passphrase or provide your own passphrase of at least 16 characters.
-
-1. Enter the location to save the passphrase, and then select **Next** to register the server.
-
- > [!IMPORTANT]
- > Save the passphrase to a safe location other than the local server. We strongly recommend using the Azure Key Vault to store the passphrase.
-
- After the Microsoft Azure Recovery Services Agent setup finishes, the installation step moves on to the installation and configuration of SQL Server and the Azure Backup Server components.
-
- ![After the Microsoft Azure Recovery Services Agent setup finishes, the installation step moves on to the installation and configuration of SQL Server and the Azure Backup Server components.](../backup/media/backup-azure-microsoft-azure-backup/final-install/venus-installation-screen.png)
-
-1. After the installation step finishes, select **Close**.
-
-### Install Update Rollup 1
-
-Installing the Update Rollup 1 for Azure Backup Server v3 is mandatory before you can protect the workloads. You can find the bug fixes and installation instructions in the [knowledge base article](https://support.microsoft.com/en-us/help/4534062/).
-
-## Add storage to Azure Backup Server
-
-Azure Backup Server v3 supports Modern Backup Storage that offers:
-- Storage savings of 50%.
-- Backups that are three times faster.
-- More efficient storage.
-- Workload-aware storage.
-
-### Volumes in Azure Backup Server
-
-If not already added, add data disks to the Azure Backup Server VM to provide the required storage capacity.
-
-Azure Backup Server v3 only accepts storage volumes. When you add a volume, Azure Backup Server formats the volume to Resilient File System (ReFS), which Modern Backup Storage requires.
-
-### Add volumes to Azure Backup Server disk storage
-
-1. In the **Management** pane, rescan the storage and then select **Add**.
-
-1. Select from the available volumes to add to the storage pool.
-
-1. After you add the available volumes, give them a friendly name to help you manage them.
-
-1. Select **OK** to format these volumes to ReFS so that Azure Backup Server can use Modern Backup Storage benefits.
--
-## Next steps
-
-Now that you've covered how to set up Azure Backup Server for Azure VMware Solution, you may want to learn about:
-- [Configuring backups for your Azure VMware Solution VMs](backup-azure-vmware-solution-virtual-machines.md).
-- [Protecting your Azure VMware Solution VMs with Azure Security Center integration](azure-security-integration.md).
backup Backup Blobs Storage Account Ps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-blobs-storage-account-ps.md
+
+ Title: Back up Azure blobs within a storage account using Azure PowerShell
+description: Learn how to back up all Azure blobs within a storage account using Azure PowerShell.
+ Last updated : 05/05/2021++
+# Back up all Azure blobs in a storage account using Azure PowerShell
+
+This article describes how to back up all [Azure blobs](/azure/backup/blob-backup-overview) within a storage account using Azure PowerShell.
+
+In this article, you'll learn how to:
+
+- Create a Backup vault
+
+- Create a backup policy
+
+- Configure a backup of all Azure blobs within storage accounts
+
+For information on Azure blob region availability, supported scenarios, and limitations, see the [support matrix](blob-backup-support-matrix.md).
+
+## Create a Backup vault
+
+A Backup vault is a storage entity in Azure that holds backup data for various newer workloads that Azure Backup supports, such as Azure Database for PostgreSQL servers and Azure blobs. Backup vaults make it easy to organize your backup data, while minimizing management overhead. Backup vaults are based on the Azure Resource Manager model of Azure, which provides enhanced capabilities to help secure backup data.
+
+Before creating a backup vault, choose the storage redundancy of the data within the vault. Then proceed to create the backup vault with that storage redundancy and the location. In this article, we'll create a backup vault _TestBkpVault_ in region _westus_, under the resource group _testBkpVaultRG_. Use the [New-AzDataProtectionBackupVault](/powershell/module/az.dataprotection/new-azdataprotectionbackupvault?view=azps-5.7.0&preserve-view=true) command to create a backup vault. Learn more about [creating a Backup vault](./backup-vault-overview.md#create-a-backup-vault).
+
+```azurepowershell-interactive
+$storageSetting = New-AzDataProtectionBackupVaultStorageSettingObject -Type LocallyRedundant/GeoRedundant -DataStoreType VaultStore
+
+New-AzDataProtectionBackupVault -ResourceGroupName testBkpVaultRG -VaultName TestBkpVault -Location westus -StorageSetting $storageSetting
+$TestBkpVault = Get-AzDataProtectionBackupVault -VaultName TestBkpVault
+$TestBkpVault | fl
+ETag :
+Id : /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx/resourceGroups/testBkpVaultRG/providers/Microsoft.DataProtection/backupVaults/TestBkpVault
+Identity : Microsoft.Azure.PowerShell.Cmdlets.DataProtection.Models.Api20210201Preview.DppIdentityDetails
+IdentityPrincipalId :
+IdentityTenantId :
+IdentityType :
+Location : westus
+Name : TestBkpVault
+ProvisioningState : Succeeded
+StorageSetting : {Microsoft.Azure.PowerShell.Cmdlets.DataProtection.Models.Api20210201Preview.StorageSetting}
+SystemData : Microsoft.Azure.PowerShell.Cmdlets.DataProtection.Models.Api20210201Preview.SystemData
+Tag : Microsoft.Azure.PowerShell.Cmdlets.DataProtection.Models.Api20210201Preview.DppTrackedResourceTags
+Type : Microsoft.DataProtection/backupVaults
+```
+
+After creating the vault, let's create a backup policy to protect Azure blobs.
+
+> [!IMPORTANT]
+> Though you'll see the Backup storage redundancy of the vault, the redundancy doesn't apply to the operational backup of blobs, as the backup is local in nature and no data is stored in the Backup vault. Here, the Backup vault is the management entity to help you manage the protection of block blobs in your storage accounts.
+
+## Create a Backup policy
+
+> [!IMPORTANT]
+> Read [this section](blob-backup-configure-manage.md#before-you-start) before proceeding to create the policy and configuring backups for Azure blobs.
+
+To understand the inner components of a backup policy for Azure blob backup, retrieve the policy template using the [Get-AzDataProtectionPolicyTemplate](/powershell/module/az.dataprotection/get-azdataprotectionpolicytemplate?view=azps-5.7.0&preserve-view=true) command. This command returns a default policy template for a given datasource type. Use this policy template to create a new policy.
+
+```azurepowershell-interactive
+$policyDefn = Get-AzDataProtectionPolicyTemplate -DatasourceType AzureBlob
+$policyDefn | fl
++
+DatasourceType : {Microsoft.Storage/storageAccounts/blobServices}
+ObjectType : BackupPolicy
+PolicyRule : {Default}
+
+$policyDefn.PolicyRule | fl
+
+IsDefault : True
+Lifecycle : {Microsoft.Azure.PowerShell.Cmdlets.DataProtection.Models.Api202101.SourceLifeCycle}
+Name : Default
+ObjectType : AzureRetentionRule
+```
+
+The policy template consists of a lifecycle only (which decides when to delete/copy/move the backup). As operational backup for blobs is continuous in nature, you don't need a schedule to perform backups.
+
+```azurepowershell-interactive
+$policyDefn.PolicyRule.Lifecycle | fl
++
+DeleteAfterDuration : P30D
+DeleteAfterObjectType : AbsoluteDeleteOption
+SourceDataStoreObjectType : DataStoreInfoBase
+SourceDataStoreType : OperationalStore
+TargetDataStoreCopySetting :
+```
+
+> [!NOTE]
+> Restoring over long durations may lead to restore operations taking longer to complete. Also, the time that it takes to restore a set of data is based on the number of write and delete operations made during the restore period. For example, an account with one million objects with 3,000 objects added per day and 1,000 objects deleted per day will require approximately two hours to restore to a point 30 days in the past.<br><br>We do not recommend a retention period and restoration more than 90 days in the past for an account with this rate of change.
+
+Once the policy object has all the desired values, proceed to create a new policy from the policy object using the [New-AzDataProtectionBackupPolicy](/powershell/module/az.dataprotection/new-azdataprotectionbackuppolicy?view=azps-5.7.0&preserve-view=true) command.
+
+```azurepowershell-interactive
+New-AzDataProtectionBackupPolicy -ResourceGroupName "testBkpVaultRG" -VaultName $TestBkpVault.Name -Name blobBkpPolicy -Policy $policyDefn
+
+Name          Type
+----          ----
+blobBkpPolicy Microsoft.DataProtection/backupVaults/backupPolicies
+
+$blobBkpPol = Get-AzDataProtectionBackupPolicy -ResourceGroupName "testBkpVaultRG" -VaultName $TestBkpVault.Name -Name "blobBkpPolicy"
+```
+
+## Configure backup
+
+Once the vault and policy are created, there are two critical points for you to consider in order to protect all Azure blobs within a storage account.
+
+### Key entities involved
+
+#### Storage account which contains the blobs to be protected
+
+Fetch the Azure Resource Manager ID of the storage account which contains the blobs to be protected. This will serve as the identifier of the storage account. We will use an example of a storage account named _PSTestSA_, under the resource group _blobrg_, in a different subscription.
+
+```azurepowershell-interactive
+$SAId = "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx/resourcegroups/blobrg/providers/Microsoft.Storage/storageAccounts/PSTestSA"
+```
+
+#### Backup vault
+
+The Backup vault requires permissions on the storage account to enable backups on blobs present within the storage account. The system-assigned managed identity of the vault is used for assigning such permissions.
+
+### Assign permissions
+
+You need to assign a few permissions via Azure role-based access control (Azure RBAC) to the vault (represented by the vault's managed identity) and the relevant storage account. These can be assigned via the Azure portal or PowerShell. Learn more about all [related permissions](blob-backup-configure-manage.md#grant-permissions-to-the-backup-vault-on-storage-accounts).
+
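+For reference, here's a minimal PowerShell sketch of that role assignment, assuming the vault was created with a system-assigned managed identity (surfaced as the vault object's `IdentityPrincipalId` property):
+
+```azurepowershell-interactive
+# Grant the vault's managed identity the Storage Account Backup Contributor role on the storage account
+New-AzRoleAssignment -ObjectId $TestBkpVault.IdentityPrincipalId -RoleDefinitionName "Storage Account Backup Contributor" -Scope $SAId
+```
+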
+### Prepare the request
+
+Once all the relevant permissions are set, the configuration of backup is performed in two steps. First, prepare the request using the chosen vault, policy, and storage account with the [Initialize-AzDataProtectionBackupInstance](/powershell/module/az.dataprotection/initialize-azdataprotectionbackupinstance?view=azps-5.7.0&preserve-view=true) command. Then, submit the request to protect the blobs within the storage account using the [New-AzDataProtectionBackupInstance](/powershell/module/az.dataprotection/new-azdataprotectionbackupinstance?view=azps-5.7.0&preserve-view=true) command.
+
+```azurepowershell-interactive
+$instance = Initialize-AzDataProtectionBackupInstance -DatasourceType AzureBlob -DatasourceLocation $TestBkpvault.Location -PolicyId $blobBkpPol[0].Id -DatasourceId $SAId
+New-AzDataProtectionBackupInstance -ResourceGroupName "testBkpVaultRG" -VaultName $TestBkpVault.Name -BackupInstance $instance
+
+Name                                                 Type                                                   BackupInstanceName
+----                                                 ----                                                   ------------------
+blobrg-PSTestSA-3df6ac08-9496-4839-8fb5-8b78e594f166 Microsoft.DataProtection/backupVaults/backupInstances  blobrg-PSTestSA-3df6ac08-9496-4839-8fb5-8b78e594f166
+```
+
+> [!IMPORTANT]
+> Once a storage account is configured for blobs backup, a few capabilities are affected, such as change feed and delete lock. [Learn more](blob-backup-configure-manage.md#effects-on-backed-up-storage-accounts).
+
+## Next steps
+
+[Restore Azure blobs using Azure PowerShell](restore-blobs-storage-account-ps.md)
backup Blob Backup Configure Manage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/blob-backup-configure-manage.md
Title: Configure operational backup for Azure Blobs
-description: Learn how to configure and manage operational backup for Azure Blobs (in preview)
+description: Learn how to configure and manage operational backup for Azure Blobs.
Previously updated : 02/16/2021 Last updated : 05/05/2021
-# Configure operational backup for Azure Blobs (in preview)
+# Configure operational backup for Azure Blobs
Azure Backup lets you easily configure operational backup for protecting block blobs in your storage accounts. This article explains how to configure operational backup on one or more storage accounts using the Azure portal. The article discusses the following:
For instructions on how to create a Backup vault, see the [Backup vault document
## Grant permissions to the Backup vault on storage accounts
-Operational backup also protects the storage account (that contains the blobs to be protected) from any accidental deletions by applying a Backup-owned Delete Lock. This requires the Backup vault to have certain permissions on the storage accounts that need to be protected. For convenience of use, these permissions have been consolidated under the Storage Account Backup Contributor role. Follow the instructions below for storage accounts that need to be protected:
+Operational backup also protects the storage account (that contains the blobs to be protected) from any accidental deletions by applying a Backup-owned Delete Lock. This requires the Backup vault to have certain permissions on the storage accounts that need to be protected. For convenience of use, these minimum permissions have been consolidated under the **Storage Account Backup Contributor** role.
-1. In the storage account to be protected, navigate to the **Access Control (IAM) tab** on the left navigation pane.
+We recommend that you assign this role to the Backup vault before you configure backup. However, you can also perform the role assignment while configuring backup. [Learn more](#using-backup-center) about configuring backup using Backup Center.
+
+To assign the required role for storage accounts that you need to protect, follow these steps:
+
+>[!NOTE]
+>You can also assign the roles to the vault at the subscription or resource group level, according to your convenience.
+
+1. In the storage account that needs to be protected, navigate to the **Access Control (IAM)** tab on the left navigation pane.
1. Select **Add role assignments** to assign the required role. ![Add role assignments](./media/blob-backup-configure-manage/add-role-assignments.png)
Operational backup also protects the storage account (that contains the blobs to
1. Under **Role**, choose **Storage Account Backup Contributor**. 1. Under **Assign access to**, choose **User, group or service principal**.
- 1. Type the **name of the Backup vault** that you want to protect the blobs in this storage account and select the same from the search results.
- 1. Once done, select **Save**.
+ 1. Search for the Backup vault you want to use for backing up blobs in this storage account, and then select it from the search results.
+ 1. Select **Save**.
![Role assignment options](./media/blob-backup-configure-manage/role-assignment-options.png) >[!NOTE]
- >Allow up to 10 minutes for the role assignment to take effect.
+ >The role assignment might take up to 10 minutes to take effect.
## Create a backup policy
Here are the steps to create a backup policy for operational backup of your blob
## Configure backup
-Backup of blobs is configured at the storage account level. So all the blobs in the storage account are protected with operational backup.
+Backup of blobs is configured at the storage account level. So, all blobs in the storage account are protected with operational backup.
+
+You can configure backup for multiple storage accounts using Backup Center. You can also configure backup for a storage account using the storage account's **Data Protection** properties. This section discusses both ways to configure backup.
+
+### Using Backup Center
To start configuring backup:

1. Search for **Backup Center** in the search bar.
+
1. Navigate to **Overview** -> **+Backup**.

 ![Backup Center overview](./media/blob-backup-configure-manage/backup-center-overview.png)
-1. In the **Initiate: Configure Backup** tab, choose **Azure Blobs (Azure Storage)** as the Datasource type.
+1. On the **Initiate: Configure Backup** tab, choose **Azure Blobs (Azure Storage)** as the DataSource type.
![Initiate: Configure Backup tab](./media/blob-backup-configure-manage/initiate-configure-backup.png)
-1. In the **Basics** tab, specify **Azure Blobs (Azure Storage)** as the **Datasource** type and select the Backup vault to which you want to associate your storage accounts. You can view details of the selected vault in the pane.
+1. On the **Basics** tab, specify **Azure Blobs (Azure Storage)** as the DataSource type, and select the Backup vault to which you want to associate your storage accounts.<br></br>You can view details of the selected vault in the same pane.
![Basics tab](./media/blob-backup-configure-manage/basics-tab.png)
-1. Next, select the backup policy that you want to use for specifying the retention. You can view the details of the selected policy in the lower part of the screen. The operational data store column shows the retention defined in the policy. "Operational" means that the data is maintained locally in the source storage account itself.
+ >[!NOTE]
+   >Only operational backup will be enabled for blobs, which stores backups in the source storage account (and not in the Backup vault). So, the backup storage redundancy type selected for the vault doesn't apply to the backup of blobs.
+1. Select the backup policy that you want to use for specifying the retention.<br></br>You can view the details of the selected policy in the bottom part of the screen. The operational data store column displays the retention defined in the policy. **Operational** implies that the data is maintained locally in the source storage account.
+
![Choose backup policy](./media/blob-backup-configure-manage/choose-backup-policy.png)
- You can also create a new backup policy. To do this, select **Create new** and follow the steps below:
+ You can also create a new backup policy. To do this, select **Create new** and follow these steps:
+
+ 1. Provide a name for the policy you want to create.<br></br>Ensure that the other boxes display the correct DataSource type and Vault name.
+
+ 1. On the **Backup policy** tab, select the **Edit retention rule** icon for the retention rule to modify the duration for the data retention.<br></br>You can set the retention up to **360** days.
+
+ >[!NOTE]
+      >While backups are unaffected by the retention period, restoring older backups might take longer to complete.
+
+ ![Create new backup policy](./media/blob-backup-configure-manage/new-backup-policy.png)
+
+ 1. Select **Review + create** to create the backup policy.
+
+1. Choose the required storage accounts for configuring protection of blobs. You can choose multiple storage accounts at once, and then choose **Select**.<br></br>However, ensure that the vault you've chosen has the required RBAC role assigned to configure backup on the storage accounts. Learn more about [granting permissions to the Backup vault on storage accounts](#grant-permissions-to-the-backup-vault-on-storage-accounts).<br></br>If the role isn't assigned, you can still assign it while configuring backup. See step 7.
+
+ ![Verify permissions of the vault](./media/blob-backup-configure-manage/verify-vault-permissions.png)
+
+   Backup validates whether the vault has sufficient permissions to allow configuring backup on the selected storage accounts. This validation takes a while to finish.
+
+ ![Permissions to allow configuring backup](./media/blob-backup-configure-manage/permissions-for-configuring-backup.png)
+
+1. After validations are complete, the **Backup readiness** column indicates whether the Backup vault has enough permissions to configure backups for each storage account.
+
+ ![Information of Backup vault permissions](./media/blob-backup-configure-manage/information-of-backup-vault-permissions.png)
+
+   If validation displays errors (as for two of the storage accounts listed in the figure above), the **Storage account backup contributor** role hasn't been assigned for those [storage accounts](#grant-permissions-to-the-backup-vault-on-storage-accounts). You can assign the required role here, based on your current permissions. The error message helps you understand whether you have the required permissions, so you can take the appropriate action:
+
+   - **Role assignment not done:** This error (as shown for the item _blobbackupdemo3_ in the figure above) indicates that you (the user) have permissions to assign the **Storage account backup contributor** role and the other required roles for the storage account to the vault. Select the roles, and click **Assign missing roles** on the toolbar. This automatically assigns the required role to the Backup vault and also triggers an auto-revalidation.<br><br>At times, role propagation may take a while (up to 10 minutes), causing the revalidation to fail. In such a scenario, wait for a few minutes and click **Revalidate** to retry validation.
+
+   - **Insufficient permissions for role assignment:** This error (as shown for the item _blobbackupdemo4_ in the figure above) indicates that the vault doesn't have the required role to configure backup, and you (the user) don't have enough permissions to assign the required role. To make the role assignment easier, Backup lets you download the role assignment template, which you can share with users who have permissions to assign roles for storage accounts. To do this, select such storage accounts, click **Download role assignment template** to download the template, and then share it with the appropriate users. On successful assignment of the roles, click **Revalidate** to validate permissions again, and then configure backup.
+ >[!NOTE]
+   >The template only contains details for the selected storage accounts. So, if multiple users need to assign roles for different storage accounts, you can select and download different templates accordingly.
+1. Once the validation is successful for all selected storage accounts, continue to **Review and configure** backup.<br><br>You'll receive notifications about the status of configuring protection and its completion.
+
+### Using Data protection settings of the storage account
- 1. Provide a name for the policy you want to create. Ensure that the other boxes display the correct Datasource type and Vault name.
- 1. In the **Backup policy** tab, select the edit retention rule icon to edit and specify the duration for which you want the data to be retained. You can specify retention up to 360 days. Restoring over long durations may lead to restore operations taking longer to complete.
+You can configure backup for blobs in a storage account directly from the **Data Protection** settings of the storage account.
- ![Create new backup policy](./media/blob-backup-configure-manage/new-backup-policy.png)
+1. Go to the storage account for which you want to configure backup for blobs, and then navigate to **Data Protection** in the left pane (under **Data management**).
- 1. Once done, select **Review + create** to create the backup policy.
+1. In the available data protection options, the first one allows you to enable operational backup using Azure Backup.
-1. Next, you're required to choose the storage accounts for which you want to configure protection of blobs. You can choose multiple storage accounts at once and choose **Select**.
+ ![Operational backup using Azure Backup](./media/blob-backup-configure-manage/operational-backup-using-azure-backup.png)
- However, make sure the vault you have chosen has the required permissions to configure backup on the storage accounts as detailed above in [Grant permissions to the Backup vault on storage accounts](#grant-permissions-to-the-backup-vault-on-storage-accounts).
+1. Select the check box corresponding to **Enable operational backup with Azure Backup**. Then select the Backup vault and the Backup policy you want to associate.<br><br>You can select the existing vault and policy, or create new ones, as required.
- ![Select resources to back up](./media/blob-backup-configure-manage/select-resources.png)
+ >[!IMPORTANT]
+ >You should have assigned the **Storage account backup contributor** role to the selected vault. Learn more about [Grant permissions to the Backup vault on storage accounts](#grant-permissions-to-the-backup-vault-on-storage-accounts).
+
+   - If you have already assigned the required role, click **Save** to finish configuring backup. Follow the portal notifications to track the progress of configuring backup.
+   - If you haven't assigned it yet, click **Manage identity** and follow the steps below to assign the roles.
- Backup checks if the vault has sufficient permissions to allow configuring of backup on the selected storage accounts.
+ ![Enable operational backup with Azure Backup](./media/blob-backup-configure-manage/enable-operational-backup-with-azure-backup.png)
- ![Backup validates permissions](./media/blob-backup-configure-manage/validate-permissions.png)
- If validation results in errors (as with one of the storage accounts in the screenshot), go to the selected storage accounts and assign appropriate roles, as detailed [here](#grant-permissions-to-the-backup-vault-on-storage-accounts), and select **Revalidate**. New role assignment may take up to 10 minutes to take effect.
+   1. Clicking **Manage identity** brings you to the **Identity** blade of the storage account.
+
+ 1. Click **Add role assignment** to initiate the role assignment.
-1. Once validation succeeds for all selected storage accounts, continue to **Review and configure** to configure backup. You'll see notifications informing you about the status of configuring protection and its completion.
+ ![Add role assignment to initiate the role assignment](./media/blob-backup-configure-manage/add-role-assignment-to-initiate-role-assignment.png)
++
+   1. Choose the scope (the subscription, resource group, or storage account) to which you want to assign the role.<br><br>We recommend assigning the role at the resource group level if you want to configure operational backup for blobs in multiple storage accounts.
+
+ 1. From the **Role** drop-down, select the **Storage account backup contributor** role.
+
+ ![Select Storage account backup contributor role](./media/blob-backup-configure-manage/select-storage-account-backup-contributor-role.png)
++
+ 1. Click **Save** to finish role assignment.<br><br>You will be notified through the portal once this completes successfully. You can also see the new role added to the list of existing ones for the selected vault.
+
+ ![Finish role assignment](./media/blob-backup-configure-manage/finish-role-assignment.png)
+
+   1. Click the cancel icon (**x**) in the top-right corner to return to the **Data protection** blade of the storage account.<br><br>Once back, continue configuring backup.
## Effects on backed up storage accounts
You can use Backup Center as your single pane of glass for managing all your bac
![Backup Center](./media/blob-backup-configure-manage/backup-center.png)
-For more information, see [Overview of Backup Center (Preview)](backup-center-overview.md).
+For more information, see [Overview of Backup Center](backup-center-overview.md).
+
+## Stopping protection
+
+You can stop operational backup for your storage account whenever needed.
+
+>[!NOTE]
+>Stopping protection only dissociates the storage account from the Backup vault (and the Backup tools, such as Backup Center), and doesn't disable blob point-in-time restore, versioning, and change feed that were configured.
+
+To stop backup for a storage account, follow these steps:
+
+1. Navigate to the backup instance for the storage account being backed up.<br><br>You can navigate to this from the storage account via **Storage account** -> **Data protection** -> **Manage backup settings**, or directly from the Backup Center via **Backup Center** -> **Backup instances** -> search for the storage account name.
+
+ ![Storage account location](./media/blob-backup-configure-manage/storage-account-location.png)
+
+ ![Storage account location through Backup Center](./media/blob-backup-configure-manage/storage-account-location-through-backup-center.png)
++
+1. In the backup instance, click **Delete** to stop operational backup for the particular storage account.
+
+ ![Stop operational backup](./media/blob-backup-configure-manage/stop-operational-backup.png)
+
+After stopping backup, you may disable other storage data protection capabilities (that are enabled for configuring backup) from the data protection blade of the storage account.
+ ## Next steps -- [Restore Azure Blobs](blob-restore.md)
+[Restore Azure Blobs](blob-restore.md)
backup Blob Backup Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/blob-backup-overview.md
Title: Overview of operational backup for Azure Blobs
-description: Learn about operational backup for Azure Blobs (in preview).
+description: Learn about operational backup for Azure Blobs.
Previously updated : 02/16/2021 Last updated : 05/05/2021
-# Overview of operational backup for Azure Blobs (in preview)
+# Overview of operational backup for Azure Blobs
Operational backup for Blobs is a managed, local data protection solution that lets you protect your block blobs from various data loss scenarios like corruptions, blob deletions, and accidental storage account deletion. The data is stored locally within the source storage account itself and can be recovered to a selected point in time whenever needed. So it provides a simple, secure, and cost-effective means to protect your blobs.
backup Blob Backup Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/blob-backup-support-matrix.md
Title: Support matrix for Azure Blobs backup
-description: Provides a summary of support settings and limitations when backing up Azure Blobs (in preview)
+description: Provides a summary of support settings and limitations when backing up Azure Blobs.
Last updated 02/16/2021
-# Support matrix for Azure Blobs backup (in preview)
+# Support matrix for Azure Blobs backup
This article summarizes the regional availability, supported scenarios, and limitations of operational backup of blobs.
Operational backup of blobs uses blob point-in-time restore, blob versioning, so
## Next steps -- [Overview of operational backup for Azure Blobs (in preview)](blob-backup-overview.md)
+[Overview of operational backup for Azure Blobs](blob-backup-overview.md)
backup Blob Restore https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/blob-restore.md
Title: Restore Azure Blobs
-description: Learn how to restore Azure Blobs (in preview).
+description: Learn how to restore Azure Blobs.
Previously updated : 02/16/2021 Last updated : 05/05/2021
-# Restore Azure Blobs (in preview)
+# Restore Azure Blobs
Block blobs in storage accounts with operational backup configured can be restored to any point in time within the retention range. Also, you can scope your restores to all block blobs in the storage account or to a subset of blobs.
The restore operation shown in the image performs the following actions:
## Next steps -- [Overview of operational backup for Azure Blobs (in preview)](blob-backup-overview.md)
+- [Overview of operational backup for Azure Blobs](blob-backup-overview.md)
backup Restore Blobs Storage Account Ps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/restore-blobs-storage-account-ps.md
+
+ Title: Restore Azure blobs via Azure PowerShell
+description: Learn how to restore Azure blobs to any point-in-time using Azure PowerShell.
+ Last updated : 05/05/2021++
+# Restore Azure blobs to point-in-time using Azure PowerShell
+
+This article describes how to restore [blobs](blob-backup-overview.md) to any point-in-time using Azure Backup.
+
+> [!IMPORTANT]
+> Before proceeding to restore Azure blobs using Azure Backup, see [important points](blob-restore.md#before-you-start).
+
+In this article, you'll learn how to:
+
+- Restore Azure blobs to point-in-time
+
+- Track the restore operation status
+
+In the examples, we'll refer to an existing Backup vault, _TestBkpVault_, under the resource group _testBkpVaultRG_.
+
+```azurepowershell-interactive
+$TestBkpVault = Get-AzDataProtectionBackupVault -VaultName TestBkpVault -ResourceGroupName "testBkpVaultRG"
+```
+
+## Restoring Azure blobs within a storage account
+
+### Fetching the valid time range for restore
+
+As the operational backup for blobs is continuous, there are no distinct points to restore from. Instead, we need to fetch the valid time range within which blobs can be restored to any point-in-time. In this example, let's check for valid time ranges to restore within the last 30 days.
+
+```azurepowershell-interactive
+$startDate = (Get-Date).AddDays(-30)
+$endDate = Get-Date
+```
+
+First, fetch all backup instances using the [Get-AzDataProtectionBackupInstance](/powershell/module/az.dataprotection/get-azdataprotectionbackupinstance?view=azps-5.7.0&preserve-view=true) command and identify the relevant instance.
+
+```azurepowershell-interactive
+$AllInstances = Get-AzDataProtectionBackupInstance -ResourceGroupName "testBkpVaultRG" -VaultName $TestBkpVault.Name
+```
+
+You can also use the Az.ResourceGraph module and the [Search-AzDataProtectionBackupInstanceInAzGraph](/powershell/module/az.dataprotection/search-azdataprotectionbackupinstanceinazgraph?view=azps-5.7.0&preserve-view=true) command to search across instances in many vaults and subscriptions.
+
+```azurepowershell-interactive
+$AllInstances = Search-AzDataProtectionBackupInstanceInAzGraph -ResourceGroupName "testBkpVaultRG" -VaultName $TestBkpVault.Name -DatasourceType AzureBlob -ProtectionStatus ProtectionConfigured
+```
+
+Once the instance is identified, fetch the relevant recovery range using the Find-AzDataProtectionRestorableTimeRange command.
+
+```azurepowershell-interactive
+Find-AzDataProtectionRestorableTimeRange -ResourceGroupName "testBkpVaultRG" -VaultName $TestBkpVault.Name -BackupInstanceName $AllInstances[2].BackupInstanceName -StartTime $startDate -endTime $endDate
+
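+# Pick a point-in-time from within the restorable time range returned above (the value below is illustrative)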
+$DesiredPIT = (Get-Date -Date "2021-04-23T02:47:02.9500000Z")
+```
+
+### Preparing the restore request
+
+Once the point-in-time to restore is fixed, there are multiple options to restore. Use the [Initialize-AzDataProtectionRestoreRequest](/powershell/module/az.dataprotection/initialize-azdataprotectionrestorerequest?view=azps-5.7.0&preserve-view=true) command to prepare the restore request with all relevant details.
+
+#### Restoring all the blobs to a point-in-time
+
+Using this option restores all block blobs in the storage account by rolling them back to the selected point in time. Storage accounts containing large amounts of data or experiencing high churn may take longer to restore.
+
+```azurepowershell-interactive
+$restorerequest = Initialize-AzDataProtectionRestoreRequest -DatasourceType AzureBlob -SourceDataStore OperationalStore -RestoreLocation $TestBkpVault.Location -RestoreType OriginalLocation -PointInTime (Get-Date -Date "2021-04-23T02:47:02.9500000Z") -BackupInstance $AllInstances[2]
+```
+
+#### Restoring selected containers
+
+Using this option allows you to browse and select up to 10 containers to restore.
+
+```azurepowershell-interactive
+$restorerequest = Initialize-AzDataProtectionRestoreRequest -DatasourceType AzureBlob -SourceDataStore OperationalStore -RestoreLocation $TestBkpVault.Location -RestoreType OriginalLocation -PointInTime (Get-Date -Date "2021-04-23T02:47:02.9500000Z") -BackupInstance $AllInstances[2] -ItemLevelRecovery -ContainersList "abc","xyz"
+```
+
+#### Restoring containers using a prefix match
+
+This option lets you restore a subset of blobs using a prefix match. You can specify up to 10 lexicographical ranges of blobs within a single container or across multiple containers to return those blobs to their previous state at a given point in time. Here are a few things to keep in mind:
+
+- You can use a forward slash (/) to delineate the container name from the blob prefix.
+- The start of the range specified is inclusive, but the end of the range is exclusive.
+
+[Learn more](blob-restore.md#use-prefix-match-for-restoring-blobs) about using prefixes to restore blob ranges.
+
+```azurepowershell-interactive
+$restorerequest = Initialize-AzDataProtectionRestoreRequest -DatasourceType AzureBlob -SourceDataStore OperationalStore -RestoreLocation $TestBkpVault.Location -RestoreType OriginalLocation -PointInTime (Get-Date -Date "2021-04-23T02:47:02.9500000Z") -BackupInstance $AllInstances[2] -ItemLevelRecovery -FromPrefixPattern "containerabc/aaa","containerabc/ccc" -ToPrefixPattern "containerabc/bbb","containerabc/ddd"
+```
+
+### Trigger the restore
+
+Use the [Start-AzDataProtectionBackupInstanceRestore](/powershell/module/az.dataprotection/start-azdataprotectionbackupinstancerestore?view=azps-5.7.0&preserve-view=true) command to trigger the restore with the request prepared above.
+
+```azurepowershell-interactive
+Start-AzDataProtectionBackupInstanceRestore -BackupInstanceName $AllInstances[2].BackupInstanceName -ResourceGroupName "testBkpVaultRG" -VaultName $TestBkpVault.Name -Parameter $restorerequest
+```
+
+## Tracking job
+
+Track all jobs using the [Get-AzDataProtectionJob](/powershell/module/az.dataprotection/get-azdataprotectionjob?view=azps-5.7.0&preserve-view=true) command. You can list all jobs and fetch the details of a particular job, as shown below.
+
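+A minimal sketch (the job ID is a placeholder GUID):
+
+```azurepowershell-interactive
+# List all jobs in the vault
+Get-AzDataProtectionJob -ResourceGroupName "testBkpVaultRG" -VaultName $TestBkpVault.Name
+
+# Fetch the details of a particular job by its ID (placeholder value)
+Get-AzDataProtectionJob -ResourceGroupName "testBkpVaultRG" -VaultName $TestBkpVault.Name -Id "00000000-0000-0000-0000-000000000000"
+```
+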
+You can also use Az.ResourceGraph to track jobs across all Backup vaults. Use the [Search-AzDataProtectionJobInAzGraph](/powershell/module/az.dataprotection/search-azdataprotectionjobinazgraph?view=azps-5.7.0&preserve-view=true) command to get the relevant job, which can be in any Backup vault.
+
+```azurepowershell-interactive
+$job = Search-AzDataProtectionJobInAzGraph -Subscription $sub -ResourceGroupName "testBkpVaultRG" -Vault $TestBkpVault.Name -DatasourceType AzureBlob -Operation Restore
+```
+
+## Next steps
+
+[Overview of Azure blob backup](blob-backup-overview.md)
backup Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/whats-new.md
Title: What's new in Azure Backup description: Learn about new features in Azure Backup. Previously updated : 04/22/2021 Last updated : 05/05/2021 # What's new in Azure Backup
You can learn more about the new releases by bookmarking this page or by [subscr
## Updates summary
+- May 2021
+ - [Backup for Azure Blobs is now generally available](#backup-for-azure-blobs-is-now-generally-available)
- April 2021 - [Enhancements to encryption using customer-managed keys for Azure Backup (in preview)](#enhancements-to-encryption-using-customer-managed-keys-for-azure-backup-in-preview) - March 2021
You can learn more about the new releases by bookmarking this page or by [subscr
- [Zone redundant storage (ZRS) for backup data (in preview)](#zone-redundant-storage-zrs-for-backup-data-in-preview) - [Soft delete for SQL Server and SAP HANA workloads in Azure VMs](#soft-delete-for-sql-server-and-sap-hana-workloads)
+## Backup for Azure Blobs is now generally available
+
+Operational backup for Azure Blobs is a managed data protection solution that lets you protect your block blob data from various data loss scenarios, such as blob corruptions, blob deletions, and accidental deletion of storage accounts.
+
+Being an operational backup solution, the backup data is stored locally in the source storage account and can be recovered to a selected point-in-time, giving you a simple and cost-effective means to protect your blob data. To do this, the solution uses the blob point-in-time restore capability available from blob storage.
+
+Operational backup for blobs integrates with the Azure Backup management tools, including Backup Center, to help you manage the protection of your blob data effectively and at scale. In addition to the previously available capabilities, you can now configure and manage operational backup for blobs using the **Data protection** view of the storage accounts, as well as [through PowerShell](backup-blobs-storage-account-ps.md). Also, Backup now gives you an enhanced experience for managing the role assignments required for configuring operational backup.
+
+For more information, see [Overview of operational backup for Azure Blobs](blob-backup-overview.md).
+ ## Azure Disk Backup is now generally available Azure Backup offers snapshot lifecycle management to Azure Managed Disks by automating periodic creation of snapshots and retaining these for configured durations using Backup policy.
batch Account Move https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/account-move.md
+
+ Title: Move an Azure Batch account to another region
+description: Learn how to move an Azure Batch account to a different region.
+ Last updated : 05/05/2021+++
+# Move an Azure Batch account to another region
+
+There are scenarios in which it might be helpful to move an existing [Batch account](accounts.md) from one region to another. For example, you may want to move to another region as part of disaster recovery planning.
+
+Azure Batch accounts can't be directly moved from one region to another. You can, however, use an Azure Resource Manager template to export the existing configuration of your Batch account, modify the parameters to match the destination region, and then deploy the template to the new region. Once deployed, you can recreate jobs and other features in the new account.
+
+ For more information on Resource Manager and templates, see [Quickstart: Create and deploy Azure Resource Manager templates by using the Azure portal](../azure-resource-manager/templates/quickstart-create-templates-use-the-portal.md).
+
+This topic explains how to move a Batch account between regions by using the Azure portal.
+
+## Prerequisites
+
+- Move the storage account associated with your Batch account to the new target region by following the steps in [Move an Azure Storage account to another region](../storage/common/storage-account-move.md). If you prefer, you can leave the storage account in the original region; however, we recommend moving it, as you'll generally see better performance if it's in the same region as your Batch account. The instructions below assume you have already migrated your storage account.
+- Ensure that the services and features that your Batch account uses are supported in the target region.
+
+## Prepare
+
+To get started, you'll need to export and then modify a Resource Manager template.
+
+### Export a template
+
+First, export a template that contains settings and information for your Batch account.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+2. Select **All resources** and then select your Batch account.
+
+3. Select **Automation** > **Export template**.
+
+4. Choose **Download** in the **Export template** blade.
+
+5. Locate the .zip file that you downloaded from the portal, and unzip that file to a folder of your choice.
+
+ This zip file contains the .json files that comprise the template and scripts to deploy the template.
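+
+If you prefer scripting the export, Azure PowerShell offers a comparable route (a minimal sketch; the source resource group and account names below are placeholders for your own):
+
+```azurepowershell-interactive
+# Export only the Batch account's definition to a local template file
+$batchAccountId = "/subscriptions/{subscriptionID}/resourceGroups/mysourceresourcegroup/providers/Microsoft.Batch/batchAccounts/mysourceaccount"
+Export-AzResourceGroup -ResourceGroupName "mysourceresourcegroup" -Resource $batchAccountId -Path ".\template.json"
+```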
+
+### Modify the template
+
+Next, load and modify the template so you can create a new Batch account in the target region.
+
+1. In the Azure portal, select **Create a resource**.
+
+1. In **Search the Marketplace**, type **template deployment**, and then press **ENTER**.
+
+1. Select **Template deployment (deploy using custom templates)**.
+
+1. Select **Create**.
+
+1. Select **Build your own template in the editor**.
+
+1. Select **Load file**, and then select the **template.json** file that you downloaded in the last section.
+
+1. In the uploaded **template.json** file, name the target Batch account by entering a new **defaultValue** for the Batch account name. This example sets the **defaultValue** of the Batch account name to `mytargetaccount`.
+
+ ```json
+ {
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "batchAccounts_mysourceaccount_name": {
+ "defaultValue": "mytargetaccount",
+ "type": "String"
+ }
+ },
+ ```
+
+1. Next, update the **defaultValue** of the storage account with your migrated storage account's resource ID. To get this value, navigate to the storage account in the Azure portal, select **JSON View** near the top of the screen, and then copy the value shown under **Resource ID**. This example uses the resource ID for a storage account named `mytargetstorageaccount` in the resource group `mytargetresourcegroup`.
+
+ ```json
+ "storageAccounts_mysourcestorageaccount_externalid": {
+ "defaultValue": "/subscriptions/{subscriptionID}/resourceGroups/mytargetresourcegroup/providers/Microsoft.Storage/storageAccounts/mytargetstorageaccount",
+ "type": "String"
+ }
+ },
+ ```
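+
+    If you'd rather fetch the resource ID with PowerShell than through the portal, `Get-AzStorageAccount` can be used (a minimal sketch; the names match this article's example):
+
+    ```azurepowershell-interactive
+    # Resource ID of the migrated storage account
+    (Get-AzStorageAccount -ResourceGroupName "mytargetresourcegroup" -Name "mytargetstorageaccount").Id
+    ```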
+
+1. Finally, edit the **location** property to use your target region. This example sets the target region to `centralus`.
+
+```json
+ {
+ "resources": [
+ {
+ "type": "Microsoft.Batch/batchAccounts",
+ "apiVersion": "2021-01-01",
+ "name": "[parameters('batchAccounts_mysourceaccount_name')]",
+ "location": "centralus",
+```
+
+To obtain region location codes, see [Azure Locations](https://azure.microsoft.com/global-infrastructure/locations/). The code for a region is the region name with no spaces. For example, **Central US** = **centralus**.
+
+## Move
+
+Deploy the template to create a new Batch account in the target region.
+
+1. Now that you've made your modifications, select **Save** below the **template.json** file.
+
+1. Enter or select the property values:
+ - **Subscription**: Select an Azure subscription.
+ - **Resource group**: Select the resource group that you created when moving the associated storage account.
+ - **Region**: Select the Azure region to which you are moving the account.
+
+1. Select **Review and create**, then select **Create**.
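+
+Alternatively, the modified template can be deployed from the command line with Azure PowerShell (a minimal sketch; the template file path is hypothetical):
+
+```azurepowershell-interactive
+# Deploy the edited template into the target resource group
+New-AzResourceGroupDeployment -ResourceGroupName "mytargetresourcegroup" -TemplateFile ".\template.json"
+```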
+
+### Configure the new Batch account
+
+Some features won't export to a template, so you'll have to recreate them in the new Batch account. These include the following:
+
+- Jobs
+- Job schedules
+- Certificates
+- Application packages
+
+Be sure to configure these features in the new account as needed. You can refer to how these features were configured in your source Batch account.
+
+## Discard or clean up
+
+Once you've confirmed that your new Batch account is successfully working in the new region, and you've restored the necessary features, you can delete the source Batch account.
+
+To remove a Batch account by using the Azure portal:
+
+1. In the Azure portal, expand the menu on the left side to open the menu of services, and choose **Batch accounts**.
+
+2. Locate the Batch account to delete, and select the **More** button (**...**) on the right side of the listing. Be sure that this is the original source Batch account, not the new one you created.
+
+3. Select **Delete**, then confirm.
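+
+Alternatively, you can delete the account from the Azure CLI. A hedged equivalent, assuming the source account and resource group names used earlier in this article:
+
+```azurecli
+# Delete the original source Batch account (you'll be prompted to confirm).
+az batch account delete --name mysourceaccount --resource-group mysourceresourcegroup
+```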
+
+## Next steps
+
+- Learn more about [moving resources to a new resource group or subscription](../azure-resource-manager/management/move-resource-group-and-subscription.md).
+- Learn how to [move Azure VMs to another region](../site-recovery/azure-to-azure-tutorial-migrate.md).
batch Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/best-practices.md
Title: Best practices description: Learn best practices and useful tips for developing your Azure Batch solutions. Previously updated : 03/11/2020 Last updated : 04/29/2021 # Azure Batch best practices
-This article discusses a collection of best practices and useful tips for using the Azure Batch service effectively, based on real-life experiences with Batch. These tips can help you enhance performance and avoid design pitfalls in your Azure Batch solutions.
+This article discusses best practices and useful tips for using the Azure Batch service effectively. These tips can help you enhance performance and avoid design pitfalls in your Batch solutions.
> [!TIP] > For guidance about security in Azure Batch, see [Batch security and compliance best practices](security-best-practices.md).
This article discusses a collection of best practices and useful tips for using
### Pool configuration and naming -- **Pool allocation mode:** When creating a Batch account, you can choose between two pool allocation modes: **Batch service** or **user subscription**. For most cases, you should use the default Batch service mode, in which pools are allocated behind the scenes in Batch-managed subscriptions. In the alternative user subscription mode, Batch VMs and other resources are created directly in your subscription when a pool is created. User subscription accounts are primarily used to enable an important, but small subset of scenarios. You can read more about user subscription mode at [Additional configuration for user subscription mode](batch-account-create-portal.md#additional-configuration-for-user-subscription-mode).
+- **Pool allocation mode:** When creating a Batch account, you can choose between two pool allocation modes: **Batch service** or **user subscription**. For most cases, you should use the default Batch service mode, in which pools are allocated behind the scenes in Batch-managed subscriptions. In the alternative user subscription mode, Batch VMs and other resources are created directly in your subscription when a pool is created. User subscription accounts are primarily used to enable a small but important subset of scenarios. For more information, see [Additional configuration for user subscription mode](batch-account-create-portal.md#additional-configuration-for-user-subscription-mode).
-- **'virtualMachineConfiguration' or 'cloudServiceConfiguration':**
- While you can currently create pools using either configuration, new pools should be configured using 'virtualMachineConfiguration' and not 'cloudServiceConfiguration'. All current and new Batch features will be supported by Virtual Machine Configuration pools. Cloud Services Configuration pools do not support all features and no new capabilities are planned. You won't be able to create new 'cloudServiceConfiguration' pools or add new nodes to existing pools [after February 29, 2024](https://azure.microsoft.com/updates/azure-batch-cloudserviceconfiguration-pools-will-be-retired-on-29-february-2024/). For more information, see [Migrate Batch pool configuration from Cloud Services to Virtual Machine](batch-pool-cloud-service-to-virtual-machine-configuration.md).
+- **'virtualMachineConfiguration' or 'cloudServiceConfiguration':** While you can currently create pools using either configuration, new pools should be configured using 'virtualMachineConfiguration' and not 'cloudServiceConfiguration'. All current and new Batch features will be supported by Virtual Machine Configuration pools. Cloud Services Configuration pools do not support all features and no new capabilities are planned. You won't be able to create new 'cloudServiceConfiguration' pools or add new nodes to existing pools [after February 29, 2024](https://azure.microsoft.com/updates/azure-batch-cloudserviceconfiguration-pools-will-be-retired-on-29-february-2024/). For more information, see [Migrate Batch pool configuration from Cloud Services to Virtual Machine](batch-pool-cloud-service-to-virtual-machine-configuration.md).
-- **Consider job and task run time when determining job to pool mapping:** If you have jobs comprised primarily of short-running tasks, and the expected total task counts are small, so that the overall expected run time of the job is not long, do not allocate a new pool for each job. The allocation time of the nodes will diminish the run time of the job.
+- **Job and task run time considerations:** If your jobs are composed primarily of short-running tasks, and the expected total task count is small (so the overall expected run time of the job is not long), don't allocate a new pool for each job; the node allocation time would outweigh the job's run time.
-- **Pools should have more than one compute node:** Individual nodes are not guaranteed to always be available. While uncommon, hardware failures, operating system updates, and a host of other issues can cause individual nodes to be offline. If your Batch workload requires deterministic, guaranteed progress, you should allocate pools with multiple nodes.
+- **Multiple compute nodes:** Individual nodes are not guaranteed to always be available. While uncommon, hardware failures, operating system updates, and a host of other issues can cause individual nodes to be offline. If your Batch workload requires deterministic, guaranteed progress, you should allocate pools with multiple nodes.
-- **Do not use images with impending end-of-life (EOL) dates.**
- It is strongly recommended to avoid images with impending Batch support end of life (EOL) dates. These dates can be discovered via the [`ListSupportedImages` API](/rest/api/batchservice/account/listsupportedimages), [PowerShell](/powershell/module/az.batch/get-azbatchsupportedimage), or [Azure CLI](/cli/azure/batch/pool/supported-images). It is your responsibility to periodically refresh your view of the EOL dates pertinent to your pools and migrate your workloads before the EOL date occurs. If you are using a custom image with a specified node agent, then you will need to ensure that you follow Batch support end-of-life dates for the image for which your custom image is derived or aligned with.
+- **Images with impending end-of-life (EOL) dates:** We strongly recommend avoiding images with impending Batch support end-of-life (EOL) dates. These dates can be discovered via the [`ListSupportedImages` API](/rest/api/batchservice/account/listsupportedimages), [PowerShell](/powershell/module/az.batch/get-azbatchsupportedimage), or [Azure CLI](/cli/azure/batch/pool/supported-images). It is your responsibility to periodically refresh your view of the EOL dates pertinent to your pools and migrate your workloads before the EOL date occurs. If you're using a custom image with a specified node agent, ensure that you follow the Batch support end-of-life dates for the image that your custom image is derived from or aligned with.
-- **Do not reuse resource names.**
- Batch resources (jobs, pools, etc.) often come and go over time. For example, you may create a pool on Monday, delete it on Tuesday, and then create another pool on Thursday. Each new resource you create should be given a unique name that you haven't used before. This can be done by using a GUID (either as the entire resource name, or as a part of it) or embedding the time the resource was created in the resource name. Batch supports [DisplayName](/dotnet/api/microsoft.azure.batch.jobspecification.displayname), which can be used to give a resource a human readable name even if the actual resource ID is something that isn't that human friendly. Using unique names makes it easier for you to differentiate which particular resource did something in logs and metrics. It also removes ambiguity if you ever have to file a support case for a resource.
+- **Unique resource names:** Batch resources (jobs, pools, etc.) often come and go over time. For example, you may create a pool on Monday, delete it on Tuesday, and then create another similar pool on Thursday. Each new resource you create should be given a unique name that you haven't used before. You can do this by using a GUID (either as the entire resource name, or as a part of it) or by embedding the date and time that the resource was created in the resource name. Batch supports [DisplayName](/dotnet/api/microsoft.azure.batch.jobspecification.displayname), which can give a resource a more readable name even if the actual resource ID is something that isn't human-friendly. Using unique names makes it easier for you to differentiate which particular resource did something in logs and metrics. It also removes ambiguity if you ever have to file a support case for a resource. A CLI naming sketch follows this list.
+- **Continuity during pool maintenance and failure:** It's best to have your jobs use pools dynamically. If your jobs use the same pool for everything, there's a chance that jobs won't run if something goes wrong with the pool. This is especially important for time-sensitive workloads. To fix this, select or create a pool dynamically when you schedule each job, or have a way to override the pool name so that you can bypass an unhealthy pool.
-- **Continuity during pool maintenance and failure:** It's best to have your jobs use pools dynamically. If your jobs use the same pool for everything, there's a chance that your jobs won't run if something goes wrong with the pool. This is especially important for time-sensitive workloads. To fix this, select or create a pool dynamically when you schedule each job, or have a way to override the pool name so that you can bypass an unhealthy pool.--- **Business continuity during pool maintenance and failure:** There are many reasons why a pool may not grow to the size you desire, such as internal errors, capacity constraints, etc. For this reason, you should be ready to retarget jobs at a different pool (possibly with a different VM size - Batch supports this via [UpdateJob](/dotnet/api/microsoft.azure.batch.protocol.joboperationsextensions.update)) if necessary. Avoid using a static pool ID with the expectation that it will never be deleted and never change.
+- **Business continuity during pool maintenance and failure:** There are many reasons why a pool may not grow to the size you desire, such as internal errors or capacity constraints. Make sure you can retarget jobs at a different pool (possibly with a different VM size; Batch supports this via [UpdateJob](/dotnet/api/microsoft.azure.batch.protocol.joboperationsextensions.update)) if necessary. Avoid relying on a static pool ID with the expectation that it will never be deleted and never change.
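+
+As a hedged illustration of the retargeting guidance above (the job and pool IDs are placeholders, and the sketch assumes you've authenticated with `az batch account login`):
+
+```azurecli
+# Repoint an existing job at a healthy backup pool.
+az batch job set --job-id myjob --pool-id mybackuppool
+```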
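+
+And a minimal sketch of the unique-naming pattern mentioned earlier in this list (the VM size and image values are illustrative assumptions, not recommendations):
+
+```azurecli
+# Embed a UTC timestamp plus a short random suffix so pool IDs never repeat.
+POOL_ID="mypool-$(date -u +%Y%m%d%H%M%S)-$(uuidgen | tr '[:upper:]' '[:lower:]' | cut -c1-8)"
+az batch pool create \
+    --id "$POOL_ID" \
+    --vm-size Standard_D2s_v3 \
+    --target-dedicated-nodes 2 \
+    --image canonical:0001-com-ubuntu-server-focal:20_04-lts \
+    --node-agent-sku-id "batch.node.ubuntu 20.04"
+```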
### Pool lifetime and billing
-Pool lifetime can vary depending upon the method of allocation and options applied to the pool configuration. Pools can have an arbitrary lifetime and a varying number of compute nodes in the pool at any point in time. It's your responsibility to manage the compute nodes in the pool either explicitly, or through features provided by the service ([autoscale](nodes-and-pools.md#automatic-scaling-policy) or [autopool](nodes-and-pools.md#autopools)).
+Pool lifetime can vary depending upon the method of allocation and options applied to the pool configuration. Pools can have an arbitrary lifetime and a varying number of compute nodes at any point in time. It's your responsibility to manage the compute nodes in the pool either explicitly, or through features provided by the service ([autoscale](nodes-and-pools.md#automatic-scaling-policy) or [autopool](nodes-and-pools.md#autopools)).
-- **Keep pools fresh:** Resize your pools to zero every few months to ensure you get the [latest node agent updates and bug fixes](https://github.com/Azure/Batch/blob/master/changelogs/nodeagent/CHANGELOG.md). Your pool won't receive node agent updates unless it's recreated, or resized to 0 compute nodes. Before you recreate or resize your pool, it's recommended to download any node agent logs for debugging purposes, as discussed in the [Nodes](#nodes) section.
+- **Pool freshness:** Resize your pools to zero every few months to ensure you get the [latest node agent updates and bug fixes](https://github.com/Azure/Batch/blob/master/changelogs/nodeagent/CHANGELOG.md). Your pool won't receive node agent updates unless it's recreated (or if it's resized to 0 compute nodes). Before you recreate or resize your pool, you should download any node agent logs for debugging purposes, as discussed in the [Nodes](#nodes) section. A CLI sketch of the resize follows this list.
-- **Pool re-creation:** On a similar note, it's not recommended to delete and re-create your pools on a daily basis. Instead, create a new pool, update your existing jobs to point to the new pool. Once all of the tasks have been moved to the new pool, then delete the old pool.
+- **Pool recreation:** On a similar note, avoid deleting and recreating pools on a daily basis. Instead, create a new pool and then update your existing jobs to point to the new pool. Once all of the tasks have been moved to the new pool, then delete the old pool.
-- **Pool efficiency and billing:** Batch itself incurs no extra charges, but you do incur charges for the compute resources used. You're billed for every compute node in the pool, regardless of the state it's in. This includes any charges required for the node to run such as storage and networking costs. To learn more best practices, see [Cost analysis and budgets for Azure Batch](budget.md).
+- **Pool efficiency and billing:** Batch itself incurs no extra charges, but you do incur charges for the compute resources used. You're billed for every compute node in the pool, regardless of the state it's in. This includes any charges required for the node to run, such as storage and networking costs. For more information, see [Cost analysis and budgets for Azure Batch](budget.md).
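+
+A hedged sketch of the resize-to-zero step mentioned in this list (the pool ID is a placeholder; assumes you've run `az batch account login`):
+
+```azurecli
+# Scale the pool to zero so the next scale-up picks up the latest node agent.
+az batch pool resize --pool-id mypool \
+    --target-dedicated-nodes 0 \
+    --target-low-priority-nodes 0
+```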
### Pool allocation failures
Pool allocation failures can happen at any point during first allocation or subs
### Unplanned downtime
-It's possible for Batch pools to experience downtime events in Azure. Keep this in mind when planning and developing your scenario or workflow for Batch.
-
-In the case that a node fails, Batch automatically attempts to recover these compute nodes on your behalf. This may trigger rescheduling any running task on the node that is recovered. See [Designing for retries](#design-for-retries-and-re-execution) to learn more about interrupted tasks.
+It's possible for Batch pools to experience downtime events in Azure. Keep this in mind when planning and developing your scenario or workflow for Batch. If nodes fail, Batch automatically attempts to recover these compute nodes on your behalf. This may trigger rescheduling of any running tasks on the recovered nodes. To learn more about interrupted tasks, see [Designing for retries](#design-for-retries-and-re-execution).
### Custom image pools
Pools can be created using third-party images published to Azure Marketplace. Wi
### Azure region dependency
-You shouldn't rely on a single Azure region if you have a time-sensitive or production workload. While rare, there are issues that can affect an entire region. For example, if your processing needs to start at a specific time, consider scaling up the pool in your primary region *well before your start time*. If that pool scale fails, you can fall back to scaling up a pool in a backup region (or regions). Pools across multiple accounts in different regions provide a ready, easily accessible backup if something goes wrong with another pool. For more information, see [Design your application for high availability](high-availability-disaster-recovery.md).
+You shouldn't rely on a single Azure region if you have a time-sensitive or production workload. While rare, there are issues that can affect an entire region. For example, if your processing needs to start at a specific time, consider scaling up the pool in your primary region *well before your start time*. If that pool scale fails, you can fall back to scaling up a pool in a backup region (or regions).
+
+Pools across multiple accounts in different regions provide a ready, easily accessible backup if something goes wrong with another pool. For more information, see [Design your application for high availability](high-availability-disaster-recovery.md).
## Jobs
A [job](jobs-and-tasks.md#jobs) is a container designed to contain hundreds, tho
Using a job to run a single task is inefficient. For example, it's more efficient to use a single job containing 1000 tasks rather than creating 100 jobs that contain 10 tasks each. Running 1000 jobs, each with a single task, would be the least efficient, slowest, and most expensive approach to take.
-Because of this, make sure not to design a Batch solution that requires thousands of simultaneously active jobs. There is no quota for tasks, so executing many tasks under as few jobs as possible efficiently uses your [job and job schedule quotas](batch-quota-limit.md#resource-quotas).
+Because of this, avoid designing a Batch solution that requires thousands of simultaneously active jobs. There is no quota for tasks, so executing many tasks under as few jobs as possible efficiently uses your [job and job schedule quotas](batch-quota-limit.md#resource-quotas).
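+
+For illustration only (job, pool, and command values are placeholders; assumes `az batch account login`), the preferred shape is one job holding many tasks:
+
+```azurecli
+# Create a single job, then add many tasks to it, instead of many one-task jobs.
+az batch job create --id myjob --pool-id mypool
+for i in $(seq 1 1000); do
+    az batch task create --job-id myjob --task-id "task-$i" \
+        --command-line "/bin/bash -c 'echo processing item $i'"
+done
+```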
### Job lifetime
There is a default [active job and job schedule quota](batch-quota-limit.md#reso
## Tasks
-[Tasks](jobs-and-tasks.md#tasks) are individual units of work that comprise a job. Tasks are submitted by the user and scheduled by Batch on to compute nodes. There are several design considerations to make when creating and executing tasks. The following sections explain common scenarios and how to design your tasks to handle issues and perform efficiently.
+[Tasks](jobs-and-tasks.md#tasks) are individual units of work that comprise a job. Tasks are submitted by the user and scheduled by Batch onto compute nodes. The following sections provide suggestions for designing your tasks to handle issues and perform efficiently.
### Save task data
-Compute nodes are by their nature ephemeral. There are many features in Batch such as [autopool](nodes-and-pools.md#autopools) and [autoscale](nodes-and-pools.md#automatic-scaling-policy) that can make it easy for nodes to disappear. When nodes leave a pool (due to a resize or a pool delete) all the files on those nodes are also deleted. Because of this, a task should move its output off of the node it is running on and to a durable store before it completes. Similarly, if a task fails, it should move logs required to diagnose the failure to a durable store.
+Compute nodes are by their nature ephemeral. Batch features such as [autopool](nodes-and-pools.md#autopools) and [autoscale](nodes-and-pools.md#automatic-scaling-policy) can make it easy for nodes to disappear. When nodes leave a pool (due to a resize or a pool delete), all the files on those nodes are also deleted. Because of this, a task should move its output off of the node it is running on and to a durable store before it completes. Similarly, if a task fails, it should move logs required to diagnose the failure to a durable store.
Batch has integrated support for uploading data to Azure Storage via [OutputFiles](batch-task-output-files.md), as well as a variety of shared file systems, or you can perform the upload yourself in your tasks.
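
As a reference point, here's a hedged sketch of what an `outputFiles` entry can look like in a task definition (REST/JSON form). The storage container URL and SAS token are placeholders you'd supply yourself:

```json
{
  "id": "task1",
  "commandLine": "/bin/bash -c 'myapp > result.txt'",
  "outputFiles": [
    {
      "filePattern": "result.txt",
      "destination": {
        "container": {
          "containerUrl": "https://mystorage.blob.core.windows.net/output?<sas-token>"
        }
      },
      "uploadOptions": {
        "uploadCondition": "taskCompletion"
      }
    }
  ]
}
```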
When running these services, they must not take file locks on any files in Batch
Directory junctions, sometimes called directory hard-links, are difficult to deal with during task and job cleanup. Use symlinks (soft-links) rather than hard-links.
-### Collect the Batch agent logs
+### Collect Batch agent logs
If you notice a problem involving the behavior of a node or tasks running on a node, collect the Batch agent logs prior to deallocating the nodes in question. The Batch agent logs can be collected using the Upload Batch service logs API. These logs can be supplied as part of a support ticket to Microsoft and will help with issue troubleshooting and resolution. ### Manage OS upgrades
-For user subscription mode Batch accounts, automated OS upgrades can interrupt task progress, especially if the tasks are long-running. [Building idempotent tasks](#build-durable-tasks) can help to reduce errors caused by these interruptions. We also recommend [scheduling OS image upgrades for times where tasks aren't expected to run](../virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-upgrade.md#manually-trigger-os-image-upgrades).
+For user subscription mode Batch accounts, automated OS upgrades can interrupt task progress, especially if the tasks are long-running. [Building idempotent tasks](#build-durable-tasks) can help to reduce errors caused by these interruptions. We also recommend [scheduling OS image upgrades for times when tasks aren't expected to run](../virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-upgrade.md#manually-trigger-os-image-upgrades).
For Windows pools, `enableAutomaticUpdates` is set to `true` by default. Allowing automatic updates is recommended, but you can set this value to `false` if you need to ensure that an OS update doesn't happen unexpectedly.
For Windows pools, `enableAutomaticUpdates` is set to `true` by default. Allowin
For the purposes of isolation, if your scenario requires isolating jobs from each other, do so by having them in separate pools. A pool is the security isolation boundary in Batch, and by default, two pools are not visible or able to communicate with each other. Avoid using separate Batch accounts as a means of isolation.
-## Moving Batch accounts across regions
-
-There are scenarios in which it might be helpful to move an existing Batch account from one region to another. For example, you may want to move to another region as part of disaster recovery planning.
-
-Azure Batch accounts cannot be directly moved from one region to another. You can however, use an Azure Resource Manager template to export the existing configuration of your Batch account. You can then stage the resource in another region by exporting the Batch account to a template, modifying the parameters to match the destination region, and then deploying the template to the new region.
-
-After you upload the template to the new region, you will have to recreate certificates, job schedules, and application packages. To commit the changes and complete the move of the Batch account, remember to delete the original Batch account or resource group.
-
-For more information on Resource Manager and templates, see [Quickstart: Create and deploy Azure Resource Manager templates by using the Azure portal](../azure-resource-manager/templates/quickstart-create-templates-use-the-portal.md).
- ## Connectivity Review the following guidance related to connectivity in your Batch solutions.
For User Defined Routes (UDRs), ensure that you have a process in place to updat
### Honoring DNS
-Ensure that your systems are honoring DNS Time-to-Live (TTL) for your Batch account service URL. Additionally, ensure that your Batch service clients and other connectivity mechanisms to the Batch service do not rely on IP addresses (or [create a pool with static public IP addresses](create-pool-public-ip.md) as described below).
+Ensure that your systems honor DNS Time-to-Live (TTL) for your Batch account service URL. Additionally, ensure that your Batch service clients and other connectivity mechanisms to the Batch service do not rely on IP addresses (or [create a pool with static public IP addresses](create-pool-public-ip.md) as described below).
If your requests receive 5xx-level HTTP responses and there is a "Connection: close" header in the response, your Batch service client should observe the recommendation by closing the existing connection, re-resolving DNS for the Batch account service URL, and attempting subsequent requests on a new connection.
The automated cleanup for the working directory will be blocked if you run a ser
- Learn about the [Batch service workflow and primary resources](batch-service-workflow-features.md) such as pools, nodes, jobs, and tasks. - Learn about [default Azure Batch quotas, limits, and constraints, and how to request quota increases](batch-quota-limit.md).-- Learn how to to [detect and avoid failures in pool and node background operations ](batch-pool-node-error-checking.md).
+- Learn how to [detect and avoid failures in pool and node background operations](batch-pool-node-error-checking.md).
certification How To Using The Components Feature https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/certification/how-to-using-the-components-feature.md
Previously updated : 03/03/2021 Last updated : 05/04/2021
While completing the [tutorial to add device details](tutorial-02-adding-device-
Every project submitted for certification will include one **Customer Ready Product** component (which in many cases will represent the holistic product itself). To better understand the distinction of a Customer Ready Product component type, view our [certification glossary](./resources-glossary.md). You may include any additional components at your discretion to accurately capture your device.
-1. Select `Add a component` on the Product details tab.
+1. Select `Add a component` on the Hardware tab.
- ![Add a component link](./media/images/add-a-component-link.png)
+ ![Add a component link](./media/images/add-component-new.png)
1. Complete relevant form fields for the component.
Now that you're ready to use our components feature, you're now ready to complet
- [Tutorial: Adding device details](tutorial-02-adding-device-details.md) - [Editing your published device](how-to-edit-published-device.md)-
certification Tutorial 02 Adding Device Details https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/certification/tutorial-02-adding-device-details.md
Previously updated : 03/02/2021 Last updated : 05/04/2021
In this tutorial, you learn how to:
## Prerequisites * You should be signed in and have a project for your device created on the [Azure Certified Device portal](https://certify.azure.com). For more information, view the [tutorial](tutorial-01-creating-your-project.md).
-* You should have a Get Started guide for your device in PDF format. We provide a number of Get Started templates for you to use, depending on both the certification program and your preferred language. The templates are available at our [Get started templates](https://aka.ms/GSTemplate "Get started templates") GitHub location.
+* You should have a Get Started guide for your device in PDF format. We provide many Get Started templates for you to use, depending on both the certification program and your preferred language. The templates are available at our [Get started templates](https://aka.ms/GSTemplate "Get started templates") GitHub location.
## Adding technical device details The first section of your project page, called 'Input device details', allows you to provide information on the core hardware capabilities of your device, such as device name, description, processor, operating system, connectivity options, hardware interfaces, industry protocols, physical dimensions, and more. While many of the fields are optional, most of this information will be made available to potential customers on the Azure Certified Device catalog if you choose to publish your device after it has been certified.
-1. Click `Add` in the 'Input device details' section on your project summary page to open the device details section. You will see five sections for you to complete.
+1. Click `Add` in the 'Input device details' section on your project summary page to open the device details section. You will see six sections for you to complete.
![Image of the project details page](./media/images/device-details-menu.png) 2. Review the information you previously provided when you created the project under the `Basics` tab. 1. Review the certifications you are applying for with your device under the `Certifications` tab.
-1. Open the `Product details` tab and select at least one operating system.
-1. Add **at least** one discrete component that describes your device. You can view additional guidance on component usage [here](how-to-using-the-components-feature.md).
+1. Open the `Hardware` tab and add **at least** one discrete component that describes your device. You can also view our guidance on [component usage](how-to-using-the-components-feature.md).
1. Click `Save`. You will then be able to edit your component device and add more advanced details.
-1. List additional device details not captured by the component details under `Additional product details`.
+1. Add any relevant information regarding operating conditions (such as IP rating, operating temperature, or safety certification).
+
+![Image of the hardware section](./media/images/hardware-section.png)
+
+7. List additional device details not captured by the component details under `Additional product details`.
1. If you marked `Other` in any of the component fields or have a special circumstance you would like to flag with the Azure Certification team, leave a clarifying comment in the `Comments for reviewer` section.
-1. Use the `Dependencies` tab to list any dependencies if your device requires additional hardware or services to send data to Azure. You can view additional guidance on listing dependencies [here](how-to-indirectly-connected-devices.md).
+1. Open the `Software` tab and select **at least** one operating system.
+1. (**Required for Dev Kit devices** and highly recommended for all others) Select a level to indicate the expected set-up process to connect your device to Azure. If you select Level 2, you will be required to provide a link to the available software image.
+
+![Image of the software section](./media/images/software-section.png)
+
+11. Use the `Dependencies` tab to list any dependencies if your device requires additional hardware or services to send data to Azure. You can also view our additional guidance for [listing dependencies](how-to-indirectly-connected-devices.md).
1. Once you are satisfied with the information you've provided, you can use the `Review` tab for a read-only overview of the full set of device details that been entered. 1. Click `Project summary` at the top of the page to return to your summary page.
In this area, you will provide customer-ready marketing information for your de
> [!Note] > Please ensure all supplied URLs are valid or will be active at the time of publication following approval.
-1. Indicate up to 3 target industries that your device is optimized for.
-1. Provide information for up to 5 distributors of your device. This may include the manufacturer's own site.
+1. Indicate up to three target industries that your device is optimized for.
+1. Provide information for up to five distributors of your device. This may include the manufacturer's own site.
> [!Note] > If no distributor product page URL is supplied, then the `Shop` button on the catalog will default to the link supplied for `Distributor page`, which may not be specific to the device. Ideally, the distributor URL should lead to a specific page where a customer can purchase a device, but this is not mandatory. If the distributor is the same as the manufacturer, this URL may be the same as the manufacturer's marketing page.
cognitive-services How To Custom Speech Test And Train https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-custom-speech-test-and-train.md
This table lists accepted data types, when each data type should be used, and th
| Data type | Used for testing | Recommended quantity | Used for training | Recommended quantity | |--|--|-|-|-| | [Audio](#audio-data-for-testing) | Yes<br>Used for visual inspection | 5+ audio files | No | N/A |
-| [Audio + Human-labeled transcripts](#audio--human-labeled-transcript-data-for-testingtraining) | Yes<br>Used to evaluate accuracy | 0.5-5 hours of audio | Yes | 1-20 hours of audio |
-| [Related text](#related-text-data-for-training) | No | N/a | Yes | 1-200 MB of related text |
+| [Audio + Human-labeled transcripts](#audio-and-human-labeled-transcript-data) | Yes<br>Used to evaluate accuracy | 0.5-5 hours of audio | Yes | 1-20 hours of audio |
+| [Plain text](#plain-text-data-for-training) | No | N/A | Yes | 1-200 MB of related text |
+| [Pronunciation](#pronunciation-data-for-training) | No | N/A | Yes | 1 KB - 1 MB of pronunciation text |
Files should be grouped by type into a dataset and uploaded as a .zip file. Each dataset can only contain a single data type. > [!TIP]
-> When you train a new model, start with [related text](#related-text-data-for-training). This data will already improve the recognition of special terms and phrases. Training with text is much faster than training with audio (minutes vs. days).
+> When you train a new model, start with [plain text](#plain-text-data-for-training). This data alone will improve the recognition of special terms and phrases. Training with text is much faster than training with audio (minutes vs. days).
> [!NOTE] > Not all base models support training with audio. If a base model does not support it, the Speech service will only use the text from the transcripts and ignore the audio. See [Language support](language-support.md#speech-to-text) for a list of base models that support training with audio data. Even if a base model supports training with audio data, the service might use only part of the audio. Still it will use all the transcripts.
Files should be grouped by type into a dataset and uploaded as a .zip file. Each
## Upload data
-To upload your data, navigate to the <a href="https://speech.microsoft.com/customspeech" target="_blank">Speech Studio </a>. From the portal, click **Upload data** to launch the wizard and create your first dataset. You'll be asked to select a speech data type for your dataset, before allowing you to upload your data.
+To upload your data, navigate to the <a href="https://speech.microsoft.com/customspeech" target="_blank">Custom Speech portal</a>. After creating a project, navigate to the **Speech datasets** tab, and click **Upload data** to launch the wizard and create your first dataset. You'll be asked to select a speech data type for your dataset before you upload your data.
-![Screenshot that highlights the Audio upload option from the Speech Portal.](./media/custom-speech/custom-speech-select-audio.png)
-
-Each dataset you upload must meet the requirements for the data type that you choose. Your data must be correctly formatted before it's uploaded. Correctly formatted data ensures it will be accurately processed by the Custom Speech service. Requirements are listed in the following sections.
+First, specify whether the dataset will be used for **Training** or **Testing**. There are multiple types of data that can be uploaded for either purpose. Each dataset you upload must meet the requirements for the data type that you choose. Your data must be correctly formatted before it's uploaded. Correctly formatted data ensures it will be accurately processed by the Custom Speech service. Requirements are listed in the following sections.
After your dataset is uploaded, you have a few options:
-* You can navigate to the **Testing** tab and visually inspect audio only or audio + human-labeled transcription data.
-* You can navigate to the **Training** tab and use audio + human transcription data or related text data to train a custom model.
-
-## Audio data for testing
+* You can navigate to the **Train custom models** tab to train a custom model.
+* You can navigate to the **Test models** tab to visually inspect quality with audio only data or evaluate accuracy with audio + human-labeled transcription data.
-Audio data is optimal for testing the accuracy of Microsoft's baseline speech-to-text model or a custom model. Keep in mind, audio data is used to inspect the accuracy of speech with regards to a specific model's performance. If you're looking to quantify the accuracy of a model, use [audio + human-labeled transcription data](#audio--human-labeled-transcript-data-for-testingtraining).
-Use this table to ensure that your audio files are formatted correctly for use with Custom Speech:
+## Audio and human-labeled transcript data
-| Property | Value |
-|--|--|
-| File format | RIFF (WAV) |
-| Sample rate | 8,000 Hz or 16,000 Hz |
-| Channels | 1 (mono) |
-| Maximum length per audio | 2 hours |
-| Sample format | PCM, 16-bit |
-| Archive format | .zip |
-| Maximum archive size | 2 GB |
--
-> [!TIP]
-> When uploading training and testing data, the .zip file size cannot exceed 2 GB. If you require more data for training, divide it into several .zip files and upload them separately. Later, you can choose to train from *multiple* datasets. However, you can only test from a *single* dataset.
-
-Use <a href="http://sox.sourceforge.net" target="_blank" rel="noopener">SoX </a> to verify audio properties or convert existing audio to the appropriate formats. Below are some examples of how each of these activities can be done through the SoX command line:
-
-| Activity | Description | SoX command |
-|-|-|-|
-| Check audio format | Use this command to check<br>the audio file format. | `sox --i <filename>` |
-| Convert audio format | Use this command to convert<br>the audio file to single channel, 16-bit, 16 KHz. | `sox <input> -b 16 -e signed-integer -c 1 -r 16k -t wav <output>.wav` |
-
-## Audio + human-labeled transcript data for testing/training
-
-To measure the accuracy of Microsoft's speech-to-text accuracy when processing your audio files, you must provide human-labeled transcriptions (word-by-word) for comparison. While human-labeled transcription is often time consuming, it's necessary to evaluate accuracy and to train the model for your use cases. Keep in mind, the improvements in recognition will only be as good as the data provided. For that reason, it's important that only high-quality transcripts are uploaded.
+Audio + human-labeled transcript data can be used for both training and testing. To improve acoustic aspects like slight accents, speaking styles, and background noise, or to measure the accuracy of Microsoft's speech-to-text processing of your audio files, you must provide human-labeled transcriptions (word-by-word) for comparison. While human-labeled transcription is often time consuming, it's necessary to evaluate accuracy and to train the model for your use cases. Keep in mind that the improvements in recognition will only be as good as the data provided. For that reason, it's important that only high-quality transcripts are uploaded.
Audio files can have silence at the beginning and end of the recording. If possible, include at least a half-second of silence before and after speech in each sample file. While audio with low recording volume or disruptive background noise is not helpful, it should not hurt your custom model. Always consider upgrading your microphones and signal processing hardware before gathering audio samples.
See [Set up your Azure account](custom-speech-overview.md#set-up-your-azure-acco
Not all base models support training with audio data. If the base model does not support it, the service will ignore the audio and just train with the text of the transcriptions. In this case, training will be the same as training with related text. See [Language support](language-support.md#speech-to-text) for a list of base models that support training with audio data.
-## Related text data for training
-
-Product names or features that are unique, should include related text data for training. Related text helps ensure correct recognition. Two types of related text data can be provided to improve recognition:
-
-| Data type | How this data improves recognition |
-|--||
-| Sentences (utterances) | Improve accuracy when recognizing product names, or industry-specific vocabulary within the context of a sentence. |
-| Pronunciations | Improve pronunciation of uncommon terms, acronyms, or other words with undefined pronunciations. |
+## Plain text data for training
-Sentences can be provided as a single text file or multiple text files. To improve accuracy, use text data that is closer to the expected spoken utterances. Pronunciations should be provided as a single text file. Everything can be packaged as a single zip file and uploaded to the <a href="https://speech.microsoft.com/customspeech" target="_blank">Speech Studio </a>.
+Domain-related sentences can be used to improve accuracy when recognizing product names or industry-specific jargon. Sentences can be provided as a single text file. To improve accuracy, use text data that is close to the expected spoken utterances.
-Training with related text usually completes within a few minutes.
-
-### Guidelines to create a sentences file
+Training with plain text usually completes within a few minutes.
To create a custom model using sentences, you'll need to provide a list of sample utterances. Utterances _do not_ need to be complete or grammatically correct, but they must accurately reflect the spoken input you expect in production. If you want certain terms to have increased weight, add several sentences that include these specific terms.
Additionally, you'll want to account for the following restrictions:
* Don't use special characters or UTF-8 characters above `U+00A1`. * URIs will be rejected.
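
Within those restrictions, a sentences file is plain text with one utterance per line. A short, purely illustrative example (the product names are hypothetical):

```text
He plugged the contoso gateway into the diagnostic port.
Run a full calibration on the fabrikam flow sensor.
The fabrikam flow sensor reports telemetry every five seconds.
```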
-### Guidelines to create a pronunciation file
-
-If there are uncommon terms without standard pronunciations that your users will encounter or use, you can provide a custom pronunciation file to improve recognition.
+## Pronunciation data for training
+If there are uncommon terms without standard pronunciations that your users will encounter or use, you can provide a custom pronunciation file to improve recognition.
> [!IMPORTANT] > It is not recommended to use custom pronunciation files to alter the pronunciation of common words.
-This includes examples of a spoken utterance, and a custom pronunciation for each:
+Pronunciations should be provided as a single text file. This file includes examples of a spoken utterance and a custom pronunciation for each:
| Recognized/displayed form | Spoken form | |--|--|
Use the following table to ensure that your related data file for pronunciations
| # of pronunciations per line | 1 | | Maximum file size | 1 MB (1 KB for free tier) |
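
For example, a small pronunciation file might look like the following, with the recognized/displayed form and the spoken form on each line separated by a tab character:

```text
3CPO	three see pea o
CNTK	c n t k
IEEE	i triple e
```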
+## Audio data for testing
+
+Audio data is optimal for testing the accuracy of Microsoft's baseline speech-to-text model or a custom model. Keep in mind that audio data is used to inspect speech accuracy with regard to a specific model's performance. If you're looking to quantify the accuracy of a model, use [audio + human-labeled transcription data](#audio-and-human-labeled-transcript-data).
+
+Use this table to ensure that your audio files are formatted correctly for use with Custom Speech:
+
+| Property | Value |
+|--|--|
+| File format | RIFF (WAV) |
+| Sample rate | 8,000 Hz or 16,000 Hz |
+| Channels | 1 (mono) |
+| Maximum length per audio | 2 hours |
+| Sample format | PCM, 16-bit |
+| Archive format | .zip |
+| Maximum archive size | 2 GB |
++
+> [!TIP]
+> When uploading training and testing data, the .zip file size cannot exceed 2 GB. If you require more data for training, divide it into several .zip files and upload them separately. Later, you can choose to train from *multiple* datasets. However, you can only test from a *single* dataset.
+
+Use <a href="http://sox.sourceforge.net" target="_blank" rel="noopener">SoX </a> to verify audio properties or convert existing audio to the appropriate formats. Below are some examples of how each of these activities can be done through the SoX command line:
+
+| Activity | Description | SoX command |
+|-|-|-|
+| Check audio format | Use this command to check<br>the audio file format. | `sox --i <filename>` |
+| Convert audio format | Use this command to convert<br>the audio file to single channel, 16-bit, 16 KHz. | `sox <input> -b 16 -e signed-integer -c 1 -r 16k -t wav <output>.wav` |
+ ## Next steps * [Inspect your data](how-to-custom-speech-inspect-data.md) * [Evaluate your data](how-to-custom-speech-evaluate-data.md)
-* [Train your model](how-to-custom-speech-train-model.md)
-* [Deploy your model](./how-to-custom-speech-train-model.md)
+* [Train custom model](how-to-custom-speech-train-model.md)
+* [Deploy model](./how-to-custom-speech-train-model.md)
cognitive-services Text Analytics How To Language Detection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/text-analytics/how-tos/text-analytics-how-to-language-detection.md
All POST requests return a JSON-formatted response with the IDs and detected pro
Output is returned immediately. You can stream the results to an application that accepts JSON or save the output to a file on the local system. Then, import the output into an application that you can use to sort, search, and manipulate the data.
-Results for the example request should look like the following JSON. Notice that it's one document with multiple items. Output is in English. Language identifiers include a friendly name and a language code in [ISO 639-1](https://www.iso.org/standard/22109.html) format.
+Results for the example request should look like the following JSON document. Notice that it's one JSON document with multiple items, each item representing the detection result for a document you submit. Output is in English.
-A positive score of 1.0 expresses the highest possible confidence level of the analysis.
+Language detection will return one predominant language for each document, along with its [ISO 639-1](https://www.iso.org/standard/22109.html) code, friendly name, and confidence score. A positive score of 1.0 expresses the highest possible confidence level of the analysis.
```json {
cost-management-billing Understand Work Scopes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/costs/understand-work-scopes.md
Title: Understand and work with Azure Cost Management scopes
description: This article helps you understand billing and resource management scopes available in Azure and how to use the scopes in Cost Management and APIs. Previously updated : 04/19/2021 Last updated : 05/05/2021
Microsoft Customer Agreement billing accounts have the following scopes:
- **Customer** - Represents a group of subscriptions that are associated to a specific customer that is onboarded to a Microsoft Customer Agreement by partner. This scope is specific to Cloud Solution Providers (CSP).
-Unlike EA billing scopes, Customer Agreement billing accounts _are_ bound to a single directory and can't have subscriptions across multiple Azure AD directories.
+Unlike EA billing scopes, Customer Agreement billing accounts _are_ managed by a single directory. Microsoft Customer Agreement billing accounts can have *linked* subscriptions that could be in different Azure AD directories.
Customer Agreement billing scopes don't apply to partners. Partner roles and permissions are documented at [Assign users roles and permissions](/partner-center/permissions-overview).
cost-management-billing Download Azure Invoice Daily Usage Date https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/download-azure-invoice-daily-usage-date.md
tags: billing
Previously updated : 10/28/2020 Last updated : 05/05/2021
There could be several reasons that you don't see an invoice:
## Get your invoice in email (.pdf)
-You can opt in and configure additional recipients to receive your Azure invoice in an email. This feature may not be available for certain subscriptions such as support offers, Enterprise Agreements, or Azure in Open. If you have a Microsoft Customer agreement, see Get your billing profile invoices in email.
+You can opt in and configure additional recipients to receive your Azure invoice in an email. This feature may not be available for certain subscriptions such as support offers, Enterprise Agreements, or Azure in Open. If you have a Microsoft Customer Agreement, see [Get your billing profile invoices in email](../understand/download-azure-invoice.md#get-your-billing-profiles-invoice-in-email).
### Get your subscription's invoices in email
cost-management-billing Manage Tenants https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/microsoft-customer-agreement/manage-tenants.md
tags: billing
Previously updated : 04/06/2021 Last updated : 05/05/2021
Billing owners can create subscriptions when they have the [appropriate permissi
- You can link subscriptions from other tenants to your Microsoft Customer Agreement billing account. Taking billing ownership of a subscription only changes the invoicing arrangement. It doesn't affect the service tenant or Azure RBAC roles. - To change the subscription owner in the service tenant, you must transfer the [subscription to a different Azure Active Directory directory](../../role-based-access-control/transfer-subscription.md).
+An MCA billing account is managed by a single tenant/directory. The billing account only controls billing for the subscriptions in its tenant. However, you can use a billing ownership transfer to link a subscription to a billing account in a different tenant.
+
+### Billing ownership transfer
+
+A billing ownership transfer only changes the invoice arrangement for a single subscription. User and resource management for the subscription do not change.
+
+A billing ownership transfer does two things:
+
+- The subscription's original billing ownership is removed.
+- The subscription billing ownership is *linked* to the target billing account, which could be in a different tenant/directory.
+
+Billing ownership transfer doesn't affect:
+
+- Users
+- Resources
+- Azure RBAC permissions
++ ## Add guest users to your Microsoft Customer Agreement tenant Users that are added to your Microsoft Customer Agreement billing tenant, to manage billing responsibilities from a different tenant, must be invited as a guest.
data-factory Pipeline Trigger Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/pipeline-trigger-troubleshoot-guide.md
Pipeline runs are typically instantiated by passing arguments to parameters that
### An Azure Functions app pipeline throws an error with private endpoint connectivity
-You have Data Factory and an Azure function app running on a private endpoint. You're trying to run a pipeline that interacts with the function app. You've tried three different methods, but one returns error "Bad Request," and the other two methods return "103 Error Forbidden."
+You have Data Factory and a function app running on a private endpoint in Azure. You're trying to run a pipeline that interacts with the function app. You've tried three different methods, but one returns error "Bad Request," and the other two methods return "103 Error Forbidden."
**Cause**
databox-online Azure Stack Edge Mini R Technical Specifications Compliance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-mini-r-technical-specifications-compliance.md
Previously updated : 04/12/2021 Last updated : 05/04/2021 # Azure Stack Edge Mini R technical specifications
The following table lists the weight of the device including the battery.
|--|| | Total weight of the device | 7 lbs |
-## Enclosure environment specifications
- ## Enclosure environment This section lists the specifications related to the enclosure environment, such as temperature, humidity, and altitude.
databox-online Azure Stack Edge Reset Reactivate Device https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-reset-reactivate-device.md
Previously updated : 03/03/2020 Last updated : 05/05/2021
This article describes how to reset, reconfigure, and reactivate an Azure Stack
After you reset the device to remove the data, you'll need to reactivate the device as a new resource. Resetting a device removes the device configuration, so you'll need to reconfigure the device via the local web UI.
-In this article, you learn how to:
+For example, you might need to move an existing Azure Stack Edge resource to a new subscription. To do so, you would:
-> [!div class="checklist"]
->
-> * Wipe the data off the data disks on the device
-> * Reactivate the device by creating a new order, reconfiguring the device, and activating it
+1. Reset data on the device by following the steps in [Reset device](#reset-device).
+2. Create a new resource that uses the new subscription with your existing device, and then activate the device. Follow the steps in [Reactivate device](#reactivate-device).
-## Reset data from the device
+## Reset device
To wipe the data off the data disks of your device, you need to reset your device.
defender-for-iot How To Activate And Set Up Your On Premises Management Console https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-activate-and-set-up-your-on-premises-management-console.md
Title: Activate and set up your on-premises management console description: Activating the management console ensures that sensors are registered with Azure and send information to the on-premises management console, and that the on-premises management console carries out management tasks on connected sensors. Previously updated : 04/29/2021 Last updated : 05/05/2021
After you sign in for the first time, you will need to activate the on-premises
1. Select a subscription to associate the on-premises management console to, and then select the **Download on-premises management console activation file** button. The activation file is downloaded.
- :::image type="content" source="media/how-to-manage-sensors-from-the-on-premises-management-console/cloud_download_opm_activation_file.png" alt-text="Download the activation file.":::
+ The on-premises management console can be associated with one or more subscriptions. The activation file will be associated with all of the selected subscriptions and the number of committed devices at the time of download.
+
+ :::image type="content" source="media/how-to-manage-sensors-from-the-on-premises-management-console/multiple-subscriptions.png" alt-text="You can select multiple subscriptions to onboard your on-premises management console to.":::
If you have not already onboarded a subscription, then [Onboard a subscription](how-to-manage-subscriptions.md#onboard-a-subscription).
+ > [!Note]
+ > If you delete a subscription, you will need to upload a new activation file to every on-premises management console that was affiliated with the deleted subscription.
+ 1. Navigate back to the **Activation** popup screen and select **Choose File**. 1. Select the downloaded file.
-After initial activation, the number of monitored devices can exceed the number of committed devices defined during onboarding. This occurs if you connect more sensors to the management console. If there's a discrepancy between the number of monitored devices, and the number of committed devices, a warning will appear on the management console. If this happens, upload a new activation file.
+After initial activation, the number of monitored devices can exceed the number of committed devices defined during onboarding. This issue occurs if you connect more sensors to the management console. If there's a discrepancy between the number of monitored devices and the number of committed devices, a warning will appear on the management console.
++
+If this warning appears, you need to upload a [new activation file](#activate-the-on-premises-management-console).
### Activate an expired license (versions under 10.0)
defender-for-iot Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/release-notes.md
Title: What's new in Azure Defender for IoT description: This article lets you know what's new in the latest release of Defender for IoT. Previously updated : 04/25/2021 Last updated : 05/05/2021 # What's new in Azure Defender for IoT?
This feature is available on the on-premises management console with the release
### Add second network interface to On-premises management console (Public Preview)
-You can now enhance the security of your deployment by adding a second network interface to your on-premises management console. This feature allows your on-premises management to have it's connected sensors on one secure network, while allowing your users to access the on-premises management console through a second separate network interface.
+You can now enhance the security of your deployment by adding a second network interface to your on-premises management console. This feature allows your on-premises management console to have its connected sensors on one secure network, while allowing your users to access the on-premises management console through a second separate network interface.
This feature is available on the on-premises management console with the release of version 10.2.
-### Add second network interface to On-premises management console (Public preview)
-
-You can now enhance the security of your deployment by adding a second network interface to your on-premises management console. This feature allows your on-premises management to have it's connected sensors on one secure network, while allowing your users to access the on-premises management console through a second separate network interface.
-
-This feature is available on the on-premises management console with the release of version 10.2.
### Device builder - new micro agent (Public preview) A new device builder module is available. The module, referred to as a micro-agent, allows:
dns Dns Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dns/dns-custom-domain.md
Previously updated : 7/13/2019 Last updated : 05/05/2021
dns Dns For Azure Services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dns/dns-for-azure-services.md
na Previously updated : 09/21/2016 Last updated : 05/03/2021 # How Azure DNS works with other Azure services
The following table outlines the supported record types you can use for various
| Azure App Service | [External IP](dns-custom-domain.md#app-service-web-apps) |For external IP addresses, you can create a DNS A record. Otherwise, you must create a CNAME record that maps to the azurewebsites.net name. For more information, see [Map a custom domain name to an Azure app](../app-service/app-service-web-tutorial-custom-domain.md). |
| Azure Resource Manager VMs |[Public IP](dns-custom-domain.md#public-ip-address) |Resource Manager VMs can have public IP addresses. A VM with a public IP address also can be behind a load balancer. You can create a DNS A, CNAME, or alias record for the public address. You can use this custom name to bypass the VIP on the load balancer. |
| Classic VMs |[Public IP](dns-custom-domain.md#public-ip-address) |Classic VMs created by using PowerShell or CLI can be configured with a dynamic or static (reserved) virtual address. You can create a DNS CNAME or an A record, respectively. |
++
+## Next steps
+
+* Learn how to [manage record sets and records](./dns-getstarted-portal.md) in your DNS zone.
dns Dns Protect Zones Recordsets https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dns/dns-protect-zones-recordsets.md
Title: Protecting DNS Zones and Records - Azure DNS description: In this learning path, get started protecting DNS zones and record sets in Microsoft Azure DNS. -+ Previously updated : 2/20/2020- Last updated : 05/05/2021+ # How to protect DNS zones and records
The resource group *myResourceGroup* contains five zones for Contoso Corporation
The simplest way to assign Azure RBAC permissions is [via the Azure portal](../role-based-access-control/role-assignments-portal.md).
-Open **Access control (IAM)** for the resource group, then select **Add**, then select the **DNS Zone Contributor** role. Select the required users or groups to grant permissions.
+Open **Access control (IAM)** for the resource group, then select **+ Add**, then select the **DNS Zone Contributor** role. Select the required users or groups to grant permissions.
-![Resource group level Azure RBAC via the Azure portal](./media/dns-protect-zones-recordsets/rbac1.png)
Permissions can also be [granted using Azure PowerShell](../role-based-access-control/role-assignments-powershell.md):
-```azurepowershell
+```azurepowershell-interactive
# Grant 'DNS Zone Contributor' permissions to all zones in a resource group $usr = "<user email address>"
New-AzRoleAssignment -SignInName $usr -RoleDefinitionName $rol -ResourceGroupNam
The equivalent command is also [available via the Azure CLI](../role-based-access-control/role-assignments-cli.md):
-```azurecli
+```azurecli-interactive
# Grant 'DNS Zone Contributor' permissions to all zones in a resource group az role assignment create \
Azure RBAC rules can be applied to a subscription, a resource group or to an ind
For example, the resource group *myResourceGroup* contains the zone *contoso.com* and a subzone *customers.contoso.com*. CNAME records are created for each customer account. The administrator account used to manage CNAME records is assigned permissions to create records in the *customers.contoso.com* zone. The account can manage *customers.contoso.com* only.
-Zone-level Azure RBAC permissions can be granted via the Azure portal. Open **Access control (IAM)** for the zone, select **Add**, then select the **DNS Zone Contributor** role and select the required users or groups to grant permissions.
+Zone-level Azure RBAC permissions can be granted via the Azure portal. Open **Access control (IAM)** for the zone, select **+ Add**, then select the **DNS Zone Contributor** role and select the required users or groups to grant permissions.
-![DNS Zone level Azure RBAC via the Azure portal](./media/dns-protect-zones-recordsets/rbac2.png)
Permissions can also be [granted using Azure PowerShell](../role-based-access-control/role-assignments-powershell.md):
-```azurepowershell
+```azurepowershell-interactive
# Grant 'DNS Zone Contributor' permissions to a specific zone $usr = "<user email address>"
New-AzRoleAssignment -SignInName $usr -RoleDefinitionName $rol -ResourceGroupNam
The equivalent command is also [available via the Azure CLI](../role-based-access-control/role-assignments-cli.md):
-```azurecli
+```azurecli-interactive
# Grant 'DNS Zone Contributor' permissions to a specific zone az role assignment create \
az role assignment create \
Permissions are applied at the record set level. The user is granted control of the entries they need and can't make any other changes.
-Record-set level Azure RBAC permissions can be configured via the Azure portal, using the **Access Control (IAM)** button in the record set page:
+Record-set level Azure RBAC permissions can be configured via the Azure portal, using the **Users** button in the record set page:
-![Record set level Azure RBAC via the Azure portal](./media/dns-protect-zones-recordsets/rbac3.png)
Record-set level Azure RBAC permissions can also be [granted using Azure PowerShell](../role-based-access-control/role-assignments-powershell.md):
-```azurepowershell
+```azurepowershell-interactive
# Grant permissions to a specific record set $usr = "<user email address>"
New-AzRoleAssignment -SignInName $usr -RoleDefinitionName $rol -Scope $sco
The equivalent command is also [available via the Azure CLI](../role-based-access-control/role-assignments-cli.md):
-```azurecli
+```azurecli-interactive
# Grant permissions to a specific record set az role assignment create \
The remaining Actions are copied from the [DNS Zone Contributor built-in role](.
Custom role definitions can't currently be defined via the Azure portal. A custom role based on this role definition can be created using Azure PowerShell:
-```azurepowershell
+```azurepowershell-interactive
# Create new role definition based on input file New-AzRoleDefinition -InputFile <file path> ``` It can also be created via the Azure CLI:
-```azurecli
+```azurecli-interactive
# Create new role definition based on input file az role create -inputfile <file path> ```
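+
+As an illustrative sketch only, the input file for such a custom role is a JSON role definition like the one below. The role name, description, Actions list, and subscription scope shown here are assumptions modeled on the DNS Zone Contributor built-in role, not a verbatim definition:
+
+```json
+{
+  "Name": "DNS TXT Record Contributor",
+  "IsCustom": true,
+  "Description": "Can manage DNS TXT records only.",
+  "Actions": [
+    "Microsoft.Network/dnsZones/TXT/*",
+    "Microsoft.Network/dnsZones/read",
+    "Microsoft.Authorization/*/read",
+    "Microsoft.Resources/subscriptions/resourceGroups/read"
+  ],
+  "NotActions": [],
+  "AssignableScopes": [
+    "/subscriptions/<subscription id>"
+  ]
+}
+```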
There are two types of resource lock: **CanNotDelete** and **ReadOnly**. These l
To prevent changes being made, apply a ReadOnly lock to the zone. This lock prevents new record sets from being created, and existing record sets from being modified or deleted.
-Zone level resource locks can be created via the Azure portal. From the DNS zone page, select **Locks**, then select **+Add**:
+Zone level resource locks can be created via the Azure portal. From the DNS zone page, select **Locks**, then select **+ Add**:
-![Zone level resource locks via the Azure portal](./media/dns-protect-zones-recordsets/locks1.png)
Zone-level resource locks can also be created via [Azure PowerShell](/powershell/module/az.resources/new-azresourcelock):
-```azurepowershell
+```azurepowershell-interactive
# Lock a DNS zone $lvl = "<lock level>"
New-AzResourceLock -LockLevel $lvl -LockName $lnm -ResourceName $rsc -ResourceTy
The equivalent command is also [available via the Azure CLI](/cli/azure/lock#az_lock_create):
-```azurecli
+```azurecli-interactive
# Lock a DNS zone az lock create \
To protect an existing DNS record set against modification, apply a ReadOnly loc
Record set level resource locks can currently only be configured using Azure PowerShell. They aren't supported in the Azure portal or Azure CLI.
-```azurepowershell
+```azurepowershell-interactive
# Lock a DNS record set $lvl = "<lock level>"
As an alternative, apply a CanNotDelete lock to a record set in the zone, such a
The following PowerShell command creates a CanNotDelete lock against the SOA record of the given zone:
-```azurepowershell
+```azurepowershell-interactive
# Protect against zone delete with CanNotDelete lock on the record set $lvl = "CanNotDelete"
healthcare-apis Customer Managed Key https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/fhir/customer-managed-key.md
Previously updated : 09/28/2020 Last updated : 05/04/2021
When you create a new Azure API for FHIR account, your data is encrypted using Microsoft-managed keys by default. Now, you can add a second layer of encryption for the data using your own key that you choose and manage yourself.
-In Azure, this is typically accomplished using an encryption key in the customer's Azure Key Vault. Azure SQL, Azure Storage, and Cosmos DB are some examples that provide this capability today. Azure API for FHIR leverages this support from Cosmos DB. When you create an account, you will have the option to specify an Azure Key Vault key URI. This key will be passed on to Cosmos DB when the DB account is provisioned. When a FHIR request is made, Cosmos DB fetches your key and uses it to encrypt/decrypt the data. To get started, you can refer to the following links:
+In Azure, this is typically accomplished using an encryption key in the customer's Azure Key Vault. Azure SQL, Azure Storage, and Cosmos DB are some examples that provide this capability today. Azure API for FHIR leverages this support from Cosmos DB. When you create an account, you will have the option to specify an Azure Key Vault key URI. This key will be passed on to Cosmos DB when the DB account is provisioned. When a FHIR request is made, Cosmos DB fetches your key and uses it to encrypt/decrypt the data.
+
+To get started, refer to the following links:
- [Register the Azure Cosmos DB resource provider for your Azure subscription](../../cosmos-db/how-to-setup-cmk.md#register-resource-provider) - [Configure your Azure Key Vault instance](../../cosmos-db/how-to-setup-cmk.md#configure-your-azure-key-vault-instance)
In Azure, this is typically accomplished using an encryption key in the customer
## Using Azure portal
-When creating your Azure API for FHIR account on Azure portal, you can see a "Data Encryption" configuration option under the "Database Settings" on the "Additional Settings" tab. By default, the service-managed key option will be chosen.
+When creating your Azure API for FHIR account in the Azure portal, you'll see the **Data Encryption** configuration option under **Database Settings** on the **Additional Settings** tab. By default, the service-managed key option is selected.
+
+> [!Important]
+> The data encryption option is only available when the Azure API for FHIR account is created and can't be changed afterwards. However, you can view and update the encryption key if the **Customer-managed key** option is selected.
+ You can choose your key from the KeyPicker: :::image type="content" source="media/bring-your-own-key/bring-your-own-key-keypicker.png" alt-text="KeyPicker":::
-Or you can specify your Azure Key Vault key here by selecting "Customer-managed key" option. You can enter the key URI here:
+You can also specify your Azure Key Vault key by selecting the **Customer-managed key** option.
+
+Then enter the key URI here:
:::image type="content" source="media/bring-your-own-key/bring-your-own-key-create.png" alt-text="Create Azure API for FHIR":::
-For existing FHIR accounts, you can view the key encryption choice (service- or customer-managed key) in "Database" blade as below. The configuration option can't be modified once chosen. However, you can modify and update your key.
+> [!Important]
+> Ensure all permissions for Azure Key Vault are set appropriately. For more information, see [Add an access policy to your Azure Key Vault instance](https://docs.microsoft.com/azure/cosmos-db/how-to-setup-cmk#add-access-policy).
+Additionally, ensure that soft delete is enabled in the properties of the key vault. Not completing these steps will result in a deployment error. For more information, see [Verify if soft delete is enabled on a key vault and enable soft delete](https://docs.microsoft.com/azure/key-vault/general/key-vault-recovery?tabs=azure-portal#verify-if-soft-delete-is-enabled-on-a-key-vault-and-enable-soft-delete).
+
+For existing FHIR accounts, you can view the key encryption choice (**Service-managed key** or **Customer-managed key**) in the **Database** blade as shown below. The configuration option can't be modified once it's selected. However, you can modify and update your key.
:::image type="content" source="media/bring-your-own-key/bring-your-own-key-database.png" alt-text="Database"::: In addition, you can create a new version of the specified key, after which your data will get encrypted with the new version without any service interruption. You can also remove access to the key to remove access to the data. When the key is disabled, queries will result in an error. If the key is re-enabled, queries will succeed again. --- ## Using Azure PowerShell With your Azure Key Vault key URI, you can configure CMK using PowerShell by running the PowerShell command below:
New-AzResourceGroupDeployment `
## Next steps
-In this article, you learned how to configure customer-managed keys at rest using Azure portal, PowerShell, CLI, and Resource Manager Template. You can check out the Azure Cosmos DB FAQ section for additional questions you might have:
+In this article, you learned how to configure customer-managed keys at rest using the Azure portal, PowerShell, CLI, and Resource Manager Template. You can refer to the Azure Cosmos DB FAQ section for more information.
>[!div class="nextstepaction"] >[Cosmos DB: how to setup CMK](../../cosmos-db/how-to-setup-cmk.md#frequently-asked-questions)
iot-central Concepts App Templates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/concepts-app-templates.md
# What are application templates?
-Application templates in Azure IoT Central are a tool to help solution builders kickstart their IoT solution development. You can use app templates for everything from getting a feel for what is possible, to fully customizing your application to resell to your customers.
+Application templates in Azure IoT Central are a tool to help you kickstart your IoT solution development. You can use app templates for everything from getting a feel for what is possible, to fully customizing your application to resell to your customers.
Application templates consist of:
iot-central Concepts Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/concepts-best-practices.md
+# This article applies to device developers.
# Best practices for device development
-*This article applies to device developers.*
- These recommendations show how to implement devices to take advantage of the built-in disaster recovery and automatic scaling in IoT Central. The following list shows the high-level flow when a device connects to IoT Central:
To learn more about device error codes, see [Troubleshooting device connections]
## Next steps
-If you're a device developer, some suggested next steps are to:
+Some suggested next steps are to:
- Review some sample code that shows how to use SAS tokens in [Tutorial: Create and connect a client application to your Azure IoT Central application](tutorial-connect-device.md) - Learn how to [How to connect devices with X.509 certificates using Node.js device SDK for IoT Central Application](how-to-connect-devices-x509.md)
iot-central Concepts Device Templates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/concepts-device-templates.md
+# This article applies to device developers and solution builders.
# What are device templates?
-_This article applies to device developers and solution builders._
- A device template in Azure IoT Central is a blueprint that defines the characteristics and behaviors of a type of device that connects to your application. For example, the device template defines the telemetry that a device sends so that IoT Central can create visualizations that use the correct units and data types. A solution builder adds device templates to an IoT Central application. A device developer writes the device code that implements the behaviors defined in the device template.
The telemetry, properties, and commands that you can add to a view are determine
## Next steps
-As a device developer, now that you've learned about device templates, a suggested next steps is to read [Telemetry, property, and command payloads](./concepts-telemetry-properties-commands.md) to learn more about the data a device exchanges with IoT Central.
-
-As a solution developer, a suggested next step is to read [Define a new IoT device type in your Azure IoT Central application](./howto-set-up-template.md) to learn more about how to create a device template.
+Now that you've learned about device templates, a suggested next step is to read [Telemetry, property, and command payloads](./concepts-telemetry-properties-commands.md) to learn more about the data a device exchanges with IoT Central.
iot-central Concepts Get Connected https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/concepts-get-connected.md
+# This article applies to operators and device developers.
# Get connected to Azure IoT Central
-*This article applies to operators and device developers.*
- This article describes how devices connect to an Azure IoT Central application. Before a device can exchange data with IoT Central, it must: - *Authenticate*. Authentication with the IoT Central application uses either a _shared access signature (SAS) token_ or an _X.509 certificate_. X.509 certificates are recommended in production environments.
When a real device connects to your IoT Central application, its device status c
- A set of devices is added using **Import** on the **Devices** page without specifying the device template. - A device was registered manually on the **Devices** page without specifying the device template. The device then connected with valid credentials.
- The Operator can associate a device to a device template from the **Devices** page using the **Migrate** button.
+ An operator can associate a device with a device template from the **Devices** page using the **Migrate** button.
+
+## Device connection status
+When a device or edge device connects using the MQTT protocol, _connected_ and _disconnected_ events are shown for the device. These events are not sent by the device; they are generated internally by IoT Central.
+
+The following diagram shows how, when a device connects, the connection is registered at the end of a time window. If multiple connection and disconnection events occur, IoT Central registers the one that's closest to the end of the time window. For example, if a device disconnects and reconnects within the time window, IoT Central registers the connection event. Currently, the time window is approximately one minute.
++
+You can include connection and disconnection events in [exports from IoT Central](howto-export-data.md#set-up-data-export). To learn more, see [React to IoT Hub events > Limitations for device connected and device disconnected events](../../iot-hub/iot-hub-event-grid.md#limitations-for-device-connected-and-device-disconnected-events).
## SDK support
All data exchanged between devices and your Azure IoT Central is encrypted. IoT
## Next steps
-If you're a device developer, some suggested next steps are to:
+Some suggested next steps are to:
- Review [best practices](concepts-best-practices.md) for developing devices. - Review some sample code that shows how to use SAS tokens in [Tutorial: Create and connect a client application to your Azure IoT Central application](tutorial-connect-device.md)
iot-central Concepts Iot Edge https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/concepts-iot-edge.md
+# This article applies to solution builders and device developers.
# Connect Azure IoT Edge devices to an Azure IoT Central application
-*This article applies to solution builders and device developers.*
- Azure IoT Edge moves cloud analytics and custom business logic to devices so your organization can focus on business insights instead of data management. Scale out your IoT solution by packaging your business logic into standard containers, deploy those containers to your devices, and monitor them from the cloud. This article describes:
To learn more, see [How to connect devices through an IoT Edge transparent gatew
## Next steps
-If you're a device developer, a suggested next step is to learn how to [Develop your own IoT Edge modules](../../iot-edge/module-development.md).
+A suggested next step is to learn how to [Develop your own IoT Edge modules](../../iot-edge/module-development.md).
iot-central Concepts Telemetry Properties Commands https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/concepts-telemetry-properties-commands.md
+
+# This article applies to device developers.
# Telemetry, property, and command payloads
-_This article applies to device developers._
- A device template in Azure IoT Central is a blueprint that defines the: * Telemetry a device sends to IoT Central. * Properties a device synchronizes with IoT Central. * Commands that IoT Central calls on a device.
-This article describes, for device developers, the JSON payloads that devices send and receive for telemetry, properties, and commands defined in a device template.
+This article describes the JSON payloads that devices send and receive for telemetry, properties, and commands defined in a device template.
The article doesn't describe every possible type of telemetry, property, and command payload, but the examples illustrate all the key types.
If you enable the **Queue if offline** option in the device template UI for the
## Next steps
-As a device developer, now that you've learned about device templates, a suggested next steps is to read [Get connected to Azure IoT Central](./concepts-get-connected.md) to learn more about how to register devices with IoT Central and how IoT Central secures device connections.
+Now that you've learned about device templates, a suggested next step is to read [Get connected to Azure IoT Central](./concepts-get-connected.md) to learn more about how to register devices with IoT Central and how IoT Central secures device connections.
iot-central How To Move Device To Iot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/how-to-move-device-to-iot.md
Last updated 02/20/2021
+
+# This article applies to operators and device developers.
# How to transfer a device to Azure IoT Central from IoT Hub
-*This article applies to operators and device developers.*
- This article describes how to transfer a device to an Azure IoT Central application from an IoT Hub. A device first connects to a DPS endpoint to retrieve the information it needs to connect to your application. Internally, your IoT Central application uses an IoT hub to handle device connectivity.
To interact with IoT Central, there must be a device template that models the pr
## Next steps
-If you're a device developer, some suggested next steps are to:
+Some suggested next steps are to:
- Review some sample code that shows how to use SAS tokens in [Tutorial: Create and connect a client application to your Azure IoT Central application](tutorial-connect-device.md) - Learn how to [How to connect devices with X.509 certificates using Node.js device SDK for IoT Central Application](how-to-connect-devices-x509.md)
iot-central Howto Administer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-administer.md
Title: Change Azure IoT Central application settings | Microsoft Docs
-description: As an administrator, how to manage your Azure IoT Central application by changing application name, URL, upload image, and delete an application
+description: Learn how to manage your Azure IoT Central application by changing the application name, URL, or image, and how to delete an application
Last updated 12/19/2020
+
+# Administrator
# Change IoT Central application settings --
-This article describes how, as an administrator, you can manage application by changing application name and URL, uploading image, and delete an application in your Azure IoT Central application.
+This article describes how you can manage your application by changing the application name and URL, uploading an image, and deleting an application in Azure IoT Central.
To access and use the **Administration** section, you must be in the **Administrator** role for an Azure IoT Central application. If you create an Azure IoT Central application, you're automatically assigned to the **Administrator** role for that application.
iot-central Howto Authorize Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-authorize-rest-api.md
+
+ Title: Authorize REST API in Azure IoT Central
+description: How to authenticate and authorize IoT Central REST API calls
++ Last updated : 03/24/2020++++++
+# How to authenticate and authorize IoT Central REST API calls
+
+The IoT Central REST API lets you develop client applications that integrate with IoT Central applications. Use the REST API to work with resources in your IoT Central application such as device templates, devices, jobs, users, and roles.
+
+Every IoT Central REST API call requires an authorization header that IoT Central uses to determine the identity of the caller and the permissions that caller is granted within the application.
+
+This article describes the types of token you can use in the authorization header, and how to get them.
+
+## Token types
+
+To access an IoT Central application using the REST API, you can use an:
+
+- _Azure Active Directory bearer token_. A bearer token is associated with an Azure Active Directory user account. The token grants the caller the same permissions the user has in the IoT Central application.
+- _IoT Central API token_. An API token is associated with a role in your IoT Central application.
+
+To learn more about users and roles in IoT Central, see [Manage users and roles in your IoT Central application](howto-manage-users-roles.md).
+
+## Get a bearer token
+
+To get a bearer token for your Azure Active Directory user account, use the following Azure CLI commands:
+
+```azurecli
+az login
+az account get-access-token --resource https://apps.azureiotcentral.com
+```
+
+> [!IMPORTANT]
+> The `az login` command is necessary even if you're using the Cloud Shell.
+
+The JSON output from the previous command looks like the following example:
+
+```json
+{
+ "accessToken": "eyJ0eX...fNQ",
+ "expiresOn": "2021-03-22 11:11:16.072222",
+ "subscription": "{your subscription id}",
+ "tenant": "{your tenant id}",
+ "tokenType": "Bearer"
+}
+```
+
+The bearer token is valid for approximately one hour, after which you need to create a new one.
+
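+Because the token expires, scripts that call the REST API typically fetch a fresh one on each run. The following Node.js sketch shows one way to do this by shelling out to the Azure CLI; it assumes the CLI is on the PATH and that `az login` has already completed:
+
+```javascript
+// Sketch: get a fresh bearer token from the Azure CLI (assumes prior az login).
+// On Windows, invoke "az.cmd" instead of "az".
+const { execFileSync } = require("child_process");
+
+function getBearerToken() {
+  const out = execFileSync("az", [
+    "account", "get-access-token",
+    "--resource", "https://apps.azureiotcentral.com"
+  ], { encoding: "utf8" });
+  return JSON.parse(out).accessToken;
+}
+
+console.log(getBearerToken().slice(0, 10) + "...");
+```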
+## Get an API token
+
+To get an API token, you can use the IoT Central UI or a REST API call.
+
+In the IoT Central UI:
+
+1. Navigate to **Administration > API tokens**.
+1. Select **+ Generate token**.
+1. Enter a name for the token and select a role.
+1. Select **Generate**.
+1. IoT Central displays the token that looks like the following example:
+
+ `SharedAccessSignature sr=5782ed70...&sig=dvZZE...&skn=operator-token&se=1647948035850`
+
+ This screen is the only time you can see the API token. If you lose it, you need to generate a new one.
+
+An API token is valid for approximately one year. You can generate tokens for both built-in and custom roles in your IoT Central application.
+
+You can delete API tokens in the IoT Central UI if you need to revoke access.
+
+Using the REST API:
+
+1. Use the REST API to retrieve a list of role IDs from your application:
+
+ ```http
+ GET https://{your app subdomain}.azureiotcentral.com/api/roles?api-version=1.0
+ ```
+
+ The response to this request looks like the following example:
+
+ ```json
+ {
+ "value": [
+ {
+ "displayName": "Administrator",
+ "id": "ca310b8d-2f4a-44e0-a36e-957c202cd8d4"
+ },
+ {
+ "displayName": "Operator",
+ "id": "ae2c9854-393b-4f97-8c42-479d70ce626e"
+ },
+ {
+ "displayName": "Builder",
+ "id": "344138e9-8de4-4497-8c54-5237e96d6aaf"
+ }
+ ]
+ }
+ ```
+
+1. Use the REST API to create an API token for a role. For example, to create an API token called `operator-token` for the operator role:
+
+ ```http
+ PUT https://{your app subdomain}.azureiotcentral.com/api/apiTokens/operator-token?api-version=1.0
+ ```
+
+ Request body:
+
+ ```json
+ {
+ "roles": [
+ {
+ "role": "ae2c9854-393b-4f97-8c42-479d70ce626e"
+ }
+ ]
+ }
+ ```
+
+ The response to the previous command looks like the following JSON:
+
+ ```json
+ {
+ "expiry": "2022-03-22T12:01:27.889Z",
+ "id": "operator-token",
+ "roles": [
+ {
+ "role": "ae2c9854-393b-4f97-8c42-479d70ce626e"
+ }
+ ],
+ "token": "SharedAccessSignature sr=e8a...&sig=jKY8W...&skn=operator-token&se=1647950487889"
+ }
+ ```
+
+ This response is the only time you have access to the API token. If you lose it, you need to generate a new one.
+
+You can use the REST API to list and delete API tokens in an application.
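+
+As a sketch, the list and delete operations look like this from Node.js 18+. The `apiTokens` paths are based on the REST API reference, and the subdomain and authorization header are placeholders:
+
+```javascript
+// Sketch: list the application's API tokens, then revoke one by ID.
+const BASE = "https://{your app subdomain}.azureiotcentral.com/api"; // placeholder
+const AUTH = process.env.IOTC_AUTH; // e.g. "Bearer eyJ0eX..."
+
+async function main() {
+  const tokens = await fetch(`${BASE}/apiTokens?api-version=1.0`,
+    { headers: { Authorization: AUTH } }).then(r => r.json());
+  console.log(tokens.value.map(t => `${t.id} expires ${t.expiry}`));
+
+  // Revoke the token created above.
+  await fetch(`${BASE}/apiTokens/operator-token?api-version=1.0`,
+    { method: "DELETE", headers: { Authorization: AUTH } });
+}
+
+main().catch(console.error);
+```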
+
+## Use a bearer token
+
+To use a bearer token when you make a REST API call, your authorization header looks like the following example:
+
+`Authorization: Bearer eyJ0eX...fNQ`
+
+## Use an API token
+
+To use an API token when you make a REST API call, your authorization header looks like the following example:
+
+`Authorization: SharedAccessSignature sr=e8a...&sig=jKY8W...&skn=operator-token&se=1647950487889`
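+
+In code, the header is set the same way for either token type. A minimal Node.js 18+ sketch (the subdomain and token are placeholders):
+
+```javascript
+// Sketch: call the REST API with either a bearer token or an API token.
+const AUTH = process.env.IOTC_AUTH; // "Bearer eyJ0eX..." or "SharedAccessSignature sr=..."
+
+fetch("https://{your app subdomain}.azureiotcentral.com/api/roles?api-version=1.0", {
+  headers: { Authorization: AUTH }
+})
+  .then(r => r.json())
+  .then(roles => console.log(roles))
+  .catch(console.error);
+```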
+
+## Next steps
+
+Now that you've learned how to authorize REST API calls, a suggested next step is to learn [How to use the IoT Central REST API to manage users and roles](howto-manage-users-roles-with-rest-api.md).
iot-central Howto Build Iotc Device Bridge https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-build-iotc-device-bridge.md
Last updated 04/19/2021 +
+# Administrator
# Use the IoT Central device bridge to connect other IoT clouds to IoT Central
-*This article applies to administrators.*
-
-## Azure IoT Central device bridge
- The IoT Central device bridge is an open-source solution that connects other IoT clouds to your IoT Central application. Examples of other IoT clouds include [Sigfox](https://www.sigfox.com/), [Particle Device Cloud](https://www.particle.io/), and [The Things Network](https://www.thethingsnetwork.org/). The device bridge works by forwarding data from devices connected to other IoT clouds through to your IoT Central application. The device bridge only forwards data to IoT Central, it doesn't send commands or property updates from IoT Central back to the devices. The device bridge lets you combine the power of IoT Central with devices such as asset tracking devices connected to Sigfox's low-power wide area network, air quality monitoring devices on the Particle Device Cloud, or soil moisture monitoring devices on The Things Network. You can use IoT Central application features such as rules and analytics on the data, create workflows in Power Automate and Azure Logic apps, or export the data.
Complete the [Create an Azure IoT Central application](./quick-deploy-iot-centra
## Overview
-The IoT Central device bridge is an open-source solution in GitHub. It uses a custom Azure Resource Manager template deploy several resources to your Azure subscription, including an Azure function app.
+The IoT Central device bridge is an open-source solution in GitHub. It uses a custom Azure Resource Manager template to deploy several resources to your Azure subscription, including a function app in Azure Functions.
The function app is the core piece of the device bridge. It receives HTTP POST requests from other IoT platforms through a simple webhook. The [Azure IoT Central Device Bridge](https://github.com/Azure/iotc-device-bridge) repository includes examples that show how to connect Sigfox, Particle, and The Things Network clouds. You can extend this solution to connect to your custom IoT cloud if your platform can send HTTP POST requests to your function app. The function app transforms the data into a format accepted by IoT Central and forwards it using the device provisioning service and device client APIs: If your IoT Central application recognizes the device ID in the forwarded message, the telemetry from the device appears in IoT Central. If the device ID isn't recognized by your IoT Central application, the function app attempts to register a new device with the device ID. The new device appears as an **Unassociated device** on the **Devices** page in your IoT Central application. From the **Devices** page, you can associate the new device with a device template and then view the telemetry.
To deploy the device bridge to your subscription:
After the deployment is completed, you need to install the NPM packages the function requires:
-1. In the Azure portal, open the function app that was deployed to your subscription. Then navigate to **Development Tools > Console**. In the console, run the following commands to install the packages:
+1. In the Azure portal, open the function app that was deployed to your subscription. Then, go to **Development Tools** > **Console**. In the console, run the following commands to install the packages:
```bash cd IoTCIntegration
To switch on logging for the function app with Application Insights, navigate to
The Resource Manager template provisions the following resources in your Azure subscription:
-* Function App
+* Function app
* App Service plan * Storage account * Key vault The key vault stores the SAS group key for your IoT Central application.
-The Function App runs on a [consumption plan](https://azure.microsoft.com/pricing/details/functions/). While this option doesn't offer dedicated compute resources, it enables the device bridge to handle hundreds of device messages per minute, suitable for smaller fleets of devices or devices that send messages less frequently. If your application depends on streaming a large number of device messages, replace the consumption plan with a dedicated a [App service plan](https://azure.microsoft.com/pricing/details/app-service/windows/). This plan offers dedicated compute resources, which give faster server response times. Using a standard App Service Plan, the maximum observed performance of the Azure function in this repository was around 1,500 device messages per minute. To learn more, see [Azure Function hosting options](../../azure-functions/functions-scale.md).
+The function app runs on a [consumption plan](https://azure.microsoft.com/pricing/details/functions/). While this option doesn't offer dedicated compute resources, it enables the device bridge to handle hundreds of device messages per minute, suitable for smaller fleets of devices or devices that send messages less frequently. If your application depends on streaming a large number of device messages, replace the consumption plan with a dedicated [App Service plan](https://azure.microsoft.com/pricing/details/app-service/windows/). This plan offers dedicated compute resources, which give faster server response times. Using a standard App Service plan, the maximum observed performance of the function in this repository was around 1,500 device messages per minute. To learn more, see [Azure Functions hosting options](../../azure-functions/functions-scale.md).
-To use a dedicated App Service Plan instead of a consumption plan, edit the custom template before deploying. Select **Edit template**.
+To use a dedicated App Service plan instead of a consumption plan, edit the custom template before deploying. Select **Edit template**.
:::image type="content" source="media/howto-build-iotc-device-bridge/edit-template.png" alt-text="Screenshot of Edit Template.":::
To connect a Particle device through the device bridge to IoT Central, go to the
"deviceId": "{{{PARTICLE_DEVICE_ID}}}" }, "measurements": {
- "{{{PARTICLE_EVENT_NAME}}}": {{{PARTICLE_EVENT_VALUE}}}
+ "{{{PARTICLE_EVENT_NAME}}}": "{{{PARTICLE_EVENT_VALUE}}}"
} } ```
-Paste in the **function URL** from your Azure function app, and you see Particle devices appear as unassociated devices in IoT Central. To learn more, see the [Here's how to integrate your Particle-powered projects with Azure IoT Central](https://blog.particle.io/2019/09/26/integrate-particle-with-azure-iot-central/) blog post.
+Paste in the **function URL** from your function app, and you see Particle devices appear as unassociated devices in IoT Central. To learn more, see the [Here's how to integrate your Particle-powered projects with Azure IoT Central](https://blog.particle.io/2019/09/26/integrate-particle-with-azure-iot-central/) blog post.
### Example 2: Connecting Sigfox devices through the device bridge
-Some platforms may not allow you to specify the format of device messages sent through a webhook. For such systems, you must convert the message payload to the expected body format before the device bridge processes it. You can do the conversion in same Azure function that runs the device bridge.
+Some platforms may not allow you to specify the format of device messages sent through a webhook. For such systems, you must convert the message payload to the expected body format before the device bridge processes it. You can do the conversion in the same function that runs the device bridge.
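+
+For illustration, a conversion of this kind might look like the following sketch. The hexadecimal layout and scaling here (a big-endian `uint16` temperature scaled by 10, then a `uint8` humidity) are assumptions for illustration, not the repository's actual conversion function:
+
+```javascript
+// Illustrative only: decode a hex string such as "010F2D" into measurements.
+function decodePayload(hex) {
+  const buf = Buffer.from(hex, "hex");
+  return {
+    temperature: buf.readUInt16BE(0) / 10, // 0x010F -> 27.1
+    humidity: buf.readUInt8(2)             // 0x2D -> 45
+  };
+}
+
+console.log(decodePayload("010F2D")); // { temperature: 27.1, humidity: 45 }
+```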
This section shows how to convert the payload of a Sigfox webhook integration to the body format expected by the device bridge. The Sigfox cloud transmits device data in a hexadecimal string format. For convenience, the device bridge includes a conversion function for this format, which accepts a subset of the possible field types in a Sigfox device payload: `int` and `uint` of 8, 16, 32, or 64 bits; `float` of 32 bits or 64 bits; little-endian and big-endian. To process messages from a Sigfox webhook integration, make the following changes to the _IoTCIntegration/index.js_ file in the function app.
context.res = {
To connect The Things Network devices to IoT Central: * Add a new HTTP integration to your application in The Things Network: **Application > Integrations > add integration > HTTP Integration**.
-* Make sure your application includes a decoder function that automatically converts the payload of your device messages to JSON before it's sent to the Azure Function: **Application > Payload Functions > decoder**.
+* Make sure your application includes a decoder function that automatically converts the payload of your device messages to JSON before it's sent to the function: **Application > Payload Functions > decoder**.
The following sample shows a JavaScript decoder function you can use to decode common numeric types from binary data:
function Decoder(bytes, port) {
} ```
-After you define the integration, add the following code before the call to `handleMessage` in line 21 of the *IoTCIntegration/index.js* file of your Azure function app. This code translates the body of your HTTP integration to the expected format.
+After you define the integration, add the following code before the call to `handleMessage` in line 21 of the *IoTCIntegration/index.js* file of your function app. This code translates the body of your HTTP integration to the expected format.
```javascript device: {
iot-central Howto Configure File Uploads https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-configure-file-uploads.md
Last updated 12/23/2020 +
+# This topic applies to administrators and device developers.
# Upload files from your devices to the cloud
-*This topic applies to administrators and device developers.*
- IoT Central lets you upload media and other files from connected devices to cloud storage. You configure the file upload capability in your IoT Central application, and then implement file uploads in your device code. ## Prerequisites
iot-central Howto Configure Rules Advanced https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-configure-rules-advanced.md
Last updated 05/12/2020
+
+# This article applies to solution builders.
# Use workflows to integrate your Azure IoT Central application with other cloud services
-*This article applies to solution builders.*
- You can create rules in IoT Central that trigger actions, such as sending an email, in response to telemetry-based conditions, such as device temperature exceeding a threshold. The Azure IoT Central V3 connector for Power Automate and Azure Logic Apps lets you create more advanced rules to automate operations in IoT Central:
iot-central Howto Configure Rules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-configure-rules.md
+
+# This article applies to operators, builders, and administrators.
# Configure rules
-*This article applies to operators, builders, and administrators.*
- Rules in IoT Central serve as a customizable response tool that trigger on actively monitored events from connected devices. The following sections describe how rules are evaluated. ## Select target devices
iot-central Howto Connect Powerbi https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-connect-powerbi.md
Last updated 10/4/2019 +
+# This topic applies to administrators and solution developers.
# Visualize and analyze your Azure IoT Central data in a Power BI dashboard
-*This topic applies to administrators and solution developers.*
- > [!Note] > This solution uses [legacy data export features](./howto-export-data-legacy.md). Stay tuned for updated guidance on how to connect to Power BI using the latest data export.
iot-central Howto Connect Rigado Cascade 500 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-connect-rigado-cascade-500.md
Last updated 11/27/2019+
+# This article applies to solution builders.
# Connect a Rigado Cascade 500 gateway device to your Azure IoT Central application
-*This article applies to solution builders.*
-
-This article describes how, as a solution builder, you can connect a Rigado Cascade 500 gateway device to your Microsoft Azure IoT Central application.
+This article describes how you can connect a Rigado Cascade 500 gateway device to your Microsoft Azure IoT Central application.
## What is Cascade 500? Cascade 500 IoT gateway is a hardware offering from Rigado that is included as part of their Cascade Edge-as-a-Service solution. It provides commercial IoT project and product teams with flexible edge computing power, a robust containerized application environment, and a wide variety of wireless device connectivity options, including Bluetooth 5, LTE, & Wi-Fi.
-Cascade 500 is pre-certified for Azure IoT Plug and Play (preview) allowing our solution builders to easily onboard the device into their end to end solutions. The Cascade gateway allows you to wirelessly connect to a variety of condition monitoring sensors that are in proximity to the gateway device. These sensors can be onboarded into IoT Central via the gateway device.
+Cascade 500 is certified for Azure IoT Plug and Play and allows you to easily onboard the device into your end-to-end solutions. The Cascade gateway allows you to wirelessly connect to a variety of condition monitoring sensors that are in proximity to the gateway device. These sensors can be onboarded into IoT Central via the gateway device.
## Prerequisites To step through this how-to guide, you need the following resources:
You are now ready to use your C500 device in your IoT Central application!
## Next steps
-If you're a device developer, some suggested next steps are to:
+Some suggested next steps are to:
- Read about [Device connectivity in Azure IoT Central](./concepts-get-connected.md) - Learn how to [Monitor device connectivity using Azure CLI](./howto-monitor-devices-azure-cli.md)
iot-central Howto Connect Ruuvi https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-connect-ruuvi.md
Last updated 11/27/2019+
+# This article applies to solution builders.
# Connect a RuuviTag sensor to your Azure IoT Central application
-*This article applies to solution builders.*
-
-This article describes how, as a solution builder, you can connect a RuuviTag sensor to your Microsoft Azure IoT Central application.
+This article describes how you can connect a RuuviTag sensor to your Microsoft Azure IoT Central application.
What is a Ruuvi tag?
To create a simulated RuuviTag:
## Next steps
-If you're a device developer, some suggested next steps are to:
+Some suggested next steps are to:
- Read about [Device connectivity in Azure IoT Central](./concepts-get-connected.md) - Learn how to [Monitor device connectivity using Azure CLI](./howto-monitor-devices-azure-cli.md)
iot-central Howto Connect Sphere https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-connect-sphere.md
Last updated 04/30/2020 +
+# This article applies to device developers.
# Connect an Azure Sphere device to your Azure IoT Central application
-*This article applies to device developers.*
- This article shows you how to connect an Azure Sphere (DevKit) device to an Azure IoT Central application. Azure Sphere is a secured, high-level application platform with built-in communication and security features for internet-connected devices. It includes a secured, connected, crossover microcontroller unit (MCU), a custom high-level Linux-based operating system (OS), and a cloud-based security service that provides continuous, renewable security. For more information, see [What is Azure Sphere?](/azure-sphere/product-overview/what-is-azure-sphere).
Before you can connect the Azure Sphere DevKit device to IoT Central, you need t
## Connect the device
-To enable the sample to connect to IoT Central, you must [configure an Azure IoT Central application and then modify the sample's application manifest](https://aka.ms/iotcentral-sphere-git-readme).
+To enable the sample to connect to IoT Central, you must [configure an Azure IoT Central application and then modify the sample's application manifest](https://github.com/Azure/azure-sphere-samples/blob/master/Samples/AzureIoT/READMEStartWithIoTCentral.md).
## View the telemetry from the device
To create a simulated device:
## Next steps
-If you're a device developer, some suggested next steps are to:
+Some suggested next steps are to:
- Read about [Device connectivity in Azure IoT Central](./concepts-get-connected.md) - Learn how to [Monitor device connectivity using Azure CLI](./howto-monitor-devices-azure-cli.md)
iot-central Howto Control Devices With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-control-devices-with-rest-api.md
+
+ Title: Use the REST API to manage devices in Azure IoT Central
+description: How to use the IoT Central REST API to control devices in an application
++ Last updated : 03/24/2020++++++
+# How to use the IoT Central REST API to control devices
+
+The IoT Central REST API lets you develop client applications that integrate with IoT Central applications. You can use the REST API to control devices in your IoT Central application. The REST API lets you:
+
+- Read the last known telemetry value from a device.
+- Read property values from a device.
+- Set writable properties on a device.
+- Call commands on a device.
+
+This article describes how to use the `/devices/{device_id}` API to control individual devices. You can also use jobs to control devices in bulk.
+
+A device can group the properties, telemetry, and commands it supports into _components_ and _modules_.
+
+Every IoT Central REST API call requires an authorization header. To learn more, see [How to authenticate and authorize IoT Central REST API calls](howto-authorize-rest-api.md).
+
+For the reference documentation for the IoT Central REST API, see [Azure IoT Central REST API reference](https://docs.microsoft.com/rest/api/iotcentral/).
+
+## Components and modules
+
+Components let you group and reuse device capabilities. To learn more about components and device models, see the [IoT Plug and Play modeling guide](../../iot-pnp/concepts-modeling-guide.md).
+
+Not all device templates use components. The following screenshot shows the device template for a simple [thermostat](https://github.com/Azure/iot-plugandplay-models/blob/main/dtmi/com/example/thermostat-2.json) where all the capabilities are defined in a single interface called the **Default component**:
++
+The following screenshot shows a [temperature controller](https://github.com/Azure/iot-plugandplay-models/blob/main/dtmi/com/example/temperaturecontroller-2.json) device template that uses components. The temperature controller has two thermostat components and a device information component:
++
+In IoT Central, a module refers to an IoT Edge module running on a connected IoT Edge device. A module can have a simple model such as the thermostat that doesn't use components. A module can also use components to organize a more complex set of capabilities. The following screenshot shows an example of a device template that uses modules. The environmental sensor device has a module called `SimulatedTemperatureSensor` and an inherited interface called `management`:
++
+## Get a device component
+
+Use the following request to retrieve the components from a device called `temperature-controller-01`:
+
+```http
+GET https://{your app subdomain}.azureiotcentral.com/api/devices/temperature-controller-01/components?api-version=1.0
+```
+
+The response to this request looks like the following example. The `value` array contains details of each device component:
+
+```json
+{
+ "value": [
+ {
+ "@type": "Component",
+ "name": "thermostat1",
+ "displayName": "Thermostat One",
+ "description": "Thermostat One of Two."
+ },
+ {
+ "@type": "Component",
+ "name": "thermostat2",
+ "displayName": "Thermostat Two",
+ "description": "Thermostat Two of Two."
+ },
+ {
+ "@type": "Component",
+ "name": "deviceInformation",
+ "displayName": "Device Information interface",
+ "description": "Optional interface with basic device hardware information."
+ }
+ ]
+}
+```
+
+## Get a device module
+
+Use the following request to retrieve a list of modules running on a connected IoT Edge device called `environmental-sensor-01`:
+
+```http
+GET https://{your app subdomain}.azureiotcentral.com/api/devices/environmental-sensor-01/modules?api-version=1.0
+```
+
+The response to this request looks like the following example. The array of modules only includes custom modules running on the IoT Edge device, not the built-in `$edgeAgent` and `$edgeHub` modules:
+
+```json
+{
+ "value": [
+ {
+ "@type": [
+ "Relationship",
+ "EdgeModule"
+ ],
+ "name": "SimulatedTemperatureSensor",
+ "displayName": "SimulatedTemperatureSensor"
+ }
+ ]
+}
+```
+
+Use the following request to retrieve a list of the components in a module called `SimulatedTemperatureSensor`:
+
+```http
+GET https://{your app subdomain}.azureiotcentral.com/api/devices/environmental-sensor-01/modules/SimulatedTemperatureSensor/components?api-version=1.0
+```
+
+## Read telemetry
+
+Use the following request to retrieve the last known telemetry value from a device that doesn't use components. In this example, the device is called `thermostat-01` and the telemetry is called `temperature`:
+
+```http
+GET https://{your app subdomain}.azureiotcentral.com/api/devices/thermostat-01/telemetry/temperature?api-version=1.0
+```
+
+The response to this request looks like the following example:
+
+```json
+{
+ "timestamp": "2021-03-24T12:33:15.223Z",
+ "value": 40.10993804456927
+}
+```
+
+Use the following request to retrieve the last known telemetry value from a device that does use components. In this example, the device is called `temperature-controller-01`, the component is called `thermostat2`, and the telemetry is called `temperature`:
+
+```http
+GET https://{your app subdomain}.azureiotcentral.com/api/devices/temperature-controller-01/components/thermostat2/telemetry/temperature?api-version=1.0
+```
+
+The response to this request looks like the following example:
+
+```json
+{
+ "timestamp": "2021-03-24T12:43:44.968Z",
+ "value": 70.29168040339141
+}
+```
+
+If the device is an IoT Edge device, use the following request to retrieve the last known telemetry value from a module. This example uses a device called `environmental-sensor-01` with a module called `SimulatedTemperatureSensor` and telemetry called `ambient`. The `ambient` telemetry type has temperature and humidity values:
+
+```http
+GET https://{your app subdomain}.azureiotcentral.com/api/devices/environmental-sensor-01/modules/SimulatedTemperatureSensor/telemetry/ambient?api-version=1.0
+```
+
+The response to this request looks like the following example:
+
+```json
+{
+ "timestamp": "2021-03-25T15:44:34.955Z",
+ "value": {
+ "temperature": 21.18032378129676,
+ "humidity": 25
+ }
+}
+```
+
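+The same read can be made from code. A minimal Node.js 18+ sketch (the subdomain and authorization header are placeholders):
+
+```javascript
+// Sketch: read the last known telemetry value for a component.
+const AUTH = process.env.IOTC_AUTH; // bearer or SAS authorization header value
+
+const url = "https://{your app subdomain}.azureiotcentral.com/api/devices/" +
+  "temperature-controller-01/components/thermostat2/telemetry/temperature?api-version=1.0";
+
+fetch(url, { headers: { Authorization: AUTH } })
+  .then(r => r.json())
+  .then(t => console.log(`${t.timestamp}: ${t.value}`))
+  .catch(console.error);
+```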
+## Read properties
+
+Use the following request to retrieve the property values from a device that doesn't use components. In this example, the device is called `thermostat-01`:
+
+```http
+GET https://{your app subdomain}.azureiotcentral.com/api/devices/thermostat-01/properties?api-version=1.0
+```
+
+The response to this request looks like the following example. It shows the device is reporting a single property value:
+
+```json
+{
+ "maxTempSinceLastReboot": 93.95907131817654,
+ "$metadata": {
+ "maxTempSinceLastReboot": {
+ "lastUpdateTime": "2021-03-24T12:47:46.7571438Z"
+ }
+ }
+}
+```
+
+Use the following request to retrieve property values from all components. In this example, the device is called `temperature-controller-01`:
+
+```http
+GET https://{your app subdomain}.azureiotcentral.com/api/devices/temperature-controller-01/properties?api-version=1.0
+```
+
+The response to this request looks like the following example:
+
+```json
+{
+ "serialNumber": "Explicabo animi nihil qui facere sit explicabo nisi.",
+ "$metadata": {
+ "serialNumber": {
+ "lastUpdateTime": "2021-03-24T13:58:52.5999859Z"
+ }
+ },
+ "thermostat1": {
+ "maxTempSinceLastReboot": 79.7290121339184,
+ "$metadata": {
+ "maxTempSinceLastReboot": {
+ "lastUpdateTime": "2021-03-24T13:58:52.5999859Z"
+ }
+ }
+ },
+ "thermostat2": {
+ "maxTempSinceLastReboot": 54.214860556320424,
+ "$metadata": {
+ "maxTempSinceLastReboot": {
+ "lastUpdateTime": "2021-03-24T13:58:52.5999859Z"
+ }
+ }
+ },
+ "deviceInformation": {
+ "manufacturer": "Eveniet culpa sed sit omnis.",
+ "$metadata": {
+ "manufacturer": {
+ "lastUpdateTime": "2021-03-24T13:58:52.5999859Z"
+ },
+ "model": {
+ "lastUpdateTime": "2021-03-24T13:58:52.5999859Z"
+ },
+ "swVersion": {
+ "lastUpdateTime": "2021-03-24T13:58:52.5999859Z"
+ },
+ "osName": {
+ "lastUpdateTime": "2021-03-24T13:58:52.5999859Z"
+ },
+ "processorArchitecture": {
+ "lastUpdateTime": "2021-03-24T13:58:52.5999859Z"
+ },
+ "processorManufacturer": {
+ "lastUpdateTime": "2021-03-24T13:58:52.5999859Z"
+ },
+ "totalStorage": {
+ "lastUpdateTime": "2021-03-24T13:58:52.5999859Z"
+ },
+ "totalMemory": {
+ "lastUpdateTime": "2021-03-24T13:58:52.5999859Z"
+ }
+ },
+ "model": "Necessitatibus id ab dolores vel eligendi fuga.",
+ "swVersion": "Ut minus ipsum ut omnis est asperiores harum.",
+ "osName": "Atque sit omnis eum sapiente eum tenetur est dolor.",
+ "processorArchitecture": "Ratione enim dolor iste iure.",
+ "processorManufacturer": "Aliquam eligendi sit ipsa.",
+ "totalStorage": 36.02825898541592,
+ "totalMemory": 55.442695395750505
+ }
+}
+```
+
+Use the following request to retrieve a property value from an individual component. In this example, the device is called `temperature-controller-01` and the component is called `thermostat2`:
+
+```http
+GET https://{your app subdomain}.azureiotcentral.com/api/devices/temperature-controller-01/components/thermostat2/properties?api-version=1.0
+```
+
+The response to this request looks like the following example:
+
+```json
+{
+ "maxTempSinceLastReboot": 24.445128131004935,
+ "$metadata": {
+ "maxTempSinceLastReboot": {
+ "lastUpdateTime": "2021-03-24T14:03:53.787491Z"
+ }
+ }
+}
+```
+
+If the device is an IoT Edge device, use the following request to retrieve property values from a module. This example uses a device called `environmental-sensor-01` with a module called `SimulatedTemperatureSensor`:
+
+```http
+GET https://{your app subdomain}.azureiotcentral.com/api/devices/environmental-sensor-01/modules/SimulatedTemperatureSensor/properties?api-version=1.0
+```
+
+The response to this request looks like the following example:
+
+```json
+{
+ "$metadata": {
+ "SendData": {
+ "desiredValue": true,
+ "desiredVersion": 1
+ },
+ "SendInterval": {
+ "desiredValue": 10,
+ "desiredVersion": 1
+ }
+ }
+}
+```
+
+## Write properties
+
+Some properties are writable. For example, in the thermostat model, `targetTemperature` is a writable property.
+
+Use the following request to write an individual property value to a device that doesn't use components. In this example, the device is called `thermostat-01`:
+
+```http
+PATCH https://{your app subdomain}.azureiotcentral.com/api/devices/thermostat-01/properties?api-version=1.0
+```
+
+The request body looks like the following example:
+
+```json
+{
+ "targetTemperature": 65.5
+}
+```
+
+The response to this request looks like the following example:
+
+```json
+{
+ "$metadata": {
+ "targetTemperature": {
+ "desiredValue": 65.5
+ }
+ }
+}
+```
+
+> [!TIP]
+> To update all the properties on a device, use `PUT` instead of `PATCH`.
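+
+In code, the same update might look like the following Python sketch; the subdomain and token values are placeholders:
+
+```python
+import requests
+
+APP_SUBDOMAIN = "your-app-subdomain"  # placeholder
+API_TOKEN = "your-api-token"          # placeholder
+
+url = (f"https://{APP_SUBDOMAIN}.azureiotcentral.com"
+       "/api/devices/thermostat-01/properties")
+
+# PATCH updates only the properties named in the body;
+# PUT would replace all the writable properties at once.
+response = requests.patch(
+    url,
+    headers={"Authorization": API_TOKEN},
+    params={"api-version": "1.0"},
+    json={"targetTemperature": 65.5},
+)
+response.raise_for_status()
+print(response.json())  # echoes the desired value under $metadata
+```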
+
+Use the following request to write an individual property value to a device that does use components. In this example, the device is called `temperature-controller-01` and the component is called `thermostat2`:
+
+```http
+PATCH https://{your app subdomain}.azureiotcentral.com/api/devices/temperature-controller-01/components/thermostat2/properties?api-version=1.0
+```
+
+The request body looks like the following example:
+
+```json
+{
+ "targetTemperature": 65.5
+}
+```
+
+The response to this request looks like the following example:
+
+```json
+{
+ "$metadata": {
+ "targetTemperature": {
+ "desiredValue": 65.5
+ }
+ }
+}
+```
+
+> [!TIP]
+> To update all the properties on a component, use `PUT` instead of `PATCH`.
+
+If the device is an IoT Edge device, use the following request to write an individual property value to a module. This example uses a device called `environmental-sensor-01`, a module called `SimulatedTemperatureSensor`, and a property called `SendInterval`:
+
+```http
+PUT https://{your app subdomain}.azureiotcentral.com/api/devices/environmental-sensor-01/modules/SimulatedTemperatureSensor/properties?api-version=1.0
+```
+
+The request body looks like the following example:
+
+```json
+{
+ "SendInterval": 20
+}
+```
+
+The response to this request looks like the following example:
+
+```json
+{
+ "$metadata": {
+ "SendInterval": {
+ "desiredValue": 20
+ }
+ }
+}
+```
+
+> [!TIP]
+> To update all the properties on a module, use `PUT` instead of `PATCH`.
+
+## Call commands
+
+You can use the REST API to call device commands and retrieve the command history.
+
+Use the following request to call a command on a device that doesn't use components. In this example, the device is called `thermostat-01` and the command is called `getMaxMinReport`:
+
+```http
+POST https://{your app subdomain}.azureiotcentral.com/api/devices/thermostat-01/commands/getMaxMinReport?api-version=1.0
+```
+
+The request body looks like the following example:
+
+```json
+{
+ "since": "2021-03-24T12:55:20.789Z"
+}
+```
+
+The response to this request looks like the following example:
+
+```json
+{
+ "response": {
+ "maxTemp": 21.002000799562367,
+ "minTemp": 73.09674605264892,
+ "avgTemp": 59.54553991653756,
+ "startTime": "2022-02-28T15:02:56.789Z",
+ "endTime": "2021-05-05T03:50:56.412Z"
+ },
+ "responseCode": 200
+}
+```
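+
+The following Python sketch shows one way to make this command call from code; the subdomain and token values are placeholders, and the device must be connected for the command to succeed:
+
+```python
+import requests
+
+APP_SUBDOMAIN = "your-app-subdomain"  # placeholder
+API_TOKEN = "your-api-token"          # placeholder
+
+url = (f"https://{APP_SUBDOMAIN}.azureiotcentral.com"
+       "/api/devices/thermostat-01/commands/getMaxMinReport")
+
+# POST invokes the command; the JSON body is the command's request payload.
+response = requests.post(
+    url,
+    headers={"Authorization": API_TOKEN},
+    params={"api-version": "1.0"},
+    json={"since": "2021-03-24T12:55:20.789Z"},
+)
+response.raise_for_status()
+result = response.json()
+print(result["responseCode"], result["response"])
+```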
+
+To view the history for this command, use the following request:
+
+```http
+GET https://{your app subdomain}.azureiotcentral.com/api/devices/thermostat-01/commands/getMaxMinReport?api-version=1.0
+```
+
+The response to this request looks like the following example:
+
+```json
+{
+ "value": [
+ {
+ "response": {
+ "maxTemp": 71.43744908819954,
+ "minTemp": 51.29986610160005,
+ "avgTemp": 39.577384387771744,
+ "startTime": "2021-06-20T00:38:17.620Z",
+ "endTime": "2022-01-07T22:30:41.104Z"
+ },
+ "responseCode": 200
+ }
+ ]
+}
+```
+
+Use the following request to call a command on a device that does use components. In this example, the device is called `temperature-controller-01`, the component is called `thermostat2`, and the command is called `getMaxMinReport`:
+
+```http
+POST https://{your app subdomain}.azureiotcentral.com/api/devices/temperature-controller-01/components/thermostat2/commands/getMaxMinReport?api-version=1.0
+```
+
+The formats of the request payload and response are the same as for a device that doesn't use components.
+
+To view the history for this command, use the following request:
+
+```http
+GET https://{your app subdomain}.azureiotcentral.com/api/devices/temperature-controller-01/components/thermostat2/commands/getMaxMinReport?api-version=1.0
+```
+
+## Next steps
+
+Now that you've learned how to control devices with the REST API, a suggested next step is to [Manage IoT Central applications with the REST API](/learn/modules/manage-iot-central-apps-with-rest-api/).
iot-central Howto Create Analytics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-create-analytics.md
+
+# This article applies to operators, builders, and administrators.
# How to use analytics to analyze device data
-*This article applies to operators, builders, and administrators.*
- Azure IoT Central provides rich analytics capabilities to analyze historical trends and correlate various telemetries from your devices. To get started, visit **Analytics** on the left pane. ## Understanding the Analytics UI
iot-central Howto Create Custom Analytics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-create-custom-analytics.md
+
+# Solution developer
# Extend Azure IoT Central with custom analytics using Azure Databricks
-This how-to guide shows you, as a solution developer, how to extend your IoT Central application with custom analytics and visualizations. The example uses an [Azure Databricks](/azure/azure-databricks/) workspace to analyze the IoT Central telemetry stream and to generate visualizations such as [box plots](https://wikipedia.org/wiki/Box_plot).
+This how-to guide shows you how to extend your IoT Central application with custom analytics and visualizations. The example uses an [Azure Databricks](/azure/azure-databricks/) workspace to analyze the IoT Central telemetry stream and to generate visualizations such as [box plots](https://wikipedia.org/wiki/Box_plot).
This how-to guide shows you how to extend IoT Central beyond what it can already do with the [built-in analytics tools](./howto-create-analytics.md).
iot-central Howto Create Custom Rules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-create-custom-rules.md
+
+# Solution developer
# Extend Azure IoT Central with custom rules using Stream Analytics, Azure Functions, and SendGrid
-This how-to guide shows you, as a solution developer, how to extend your IoT Central application with custom rules and notifications. The example shows sending a notification to an operator when a device stops sending telemetry. The solution uses an [Azure Stream Analytics](../../stream-analytics/index.yml) query to detect when a device has stopped sending telemetry. The Stream Analytics job uses [Azure Functions](../../azure-functions/index.yml) to send notification emails using [SendGrid](https://sendgrid.com/docs/for-developers/partners/microsoft-azure/).
+This how-to guide shows you how to extend your IoT Central application with custom rules and notifications. The example shows sending a notification to an operator when a device stops sending telemetry. The solution uses an [Azure Stream Analytics](../../stream-analytics/index.yml) query to detect when a device has stopped sending telemetry. The Stream Analytics job uses [Azure Functions](../../azure-functions/index.yml) to send notification emails using [SendGrid](https://sendgrid.com/docs/for-developers/partners/microsoft-azure/).
This how-to guide shows you how to extend IoT Central beyond what it can already do with the built-in rules and actions.
iot-central Howto Create Webhooks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-create-webhooks.md
+
+# This topic applies to builders and administrators.
# Create webhook actions on rules in Azure IoT Central
-*This topic applies to builders and administrators.*
- Webhooks enable you to connect your IoT Central app to other applications and services for remote monitoring and notifications. Webhooks automatically notify other applications and services you connect whenever a rule is triggered in your IoT Central app. Your IoT Central app sends a POST request to the other application's HTTP endpoint whenever a rule is triggered. The payload contains device details and rule trigger details. ## Set up the webhook
iot-central Howto Customize Ui https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-customize-ui.md
# Customize the Azure IoT Central UI
-This article describes how, as an administrator, you can customize the UI of your application by applying custom themes and modifying the help links to point to your own custom help resources.
+This article describes how you can customize the UI of your application by applying custom themes and modifying the help links to point to your own custom help resources.
The following screenshot shows a page using the standard theme:
iot-central Howto Manage Devices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-manage-devices.md
Title: Manage the devices in your Azure IoT Central application | Microsoft Docs
-description: As an operator, learn how to manage devices in your Azure IoT Central application. Learn how to manage individual devices and do bulk import and exports of the devices in your application.
+description: Learn how to manage devices in your Azure IoT Central application. Learn how to manage individual devices and do bulk import and exports of the devices in your application.
Last updated 10/08/2020
+
+# Operator
# Manage devices in your Azure IoT Central application
-This article describes how, as an operator, you manage devices in your Azure IoT Central application. As an operator, you can:
+This article describes how you manage devices in your Azure IoT Central application. You can:
- Use the **Devices** page to view, add, and delete devices connected to your Azure IoT Central application. - Import and export devices in bulk.
iot-central Howto Manage Preferences https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-manage-preferences.md
+
+# This article applies to operators, builders, and administrators.
# Manage your personal application preferences
-*This article applies to operators, builders, and administrators.*
-
-IoT Central provides the flexibility to customize your applications to fit your need. We also provide some flexibility on a per-user basis to customize your own view. This article describes the various customization options that a user can apply to their profile.
+IoT Central provides the flexibility to customize your applications to fit your needs. It also provides some flexibility on a per-user basis to customize your own view. This article describes the various customization options that a user can apply to their profile.
## Changing language
iot-central Howto Manage Users Roles With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-manage-users-roles-with-rest-api.md
+
+ Title: Use the REST API to manage users and roles in Azure IoT Central
+description: How to use the IoT Central REST API to manage users and roles in an application
+Last updated : 03/24/2020
+# How to use the IoT Central REST API to manage users and roles
+
+The IoT Central REST API lets you develop client applications that integrate with IoT Central applications. You can use the REST API to manage users and roles in your IoT Central application.
+
+Every IoT Central REST API call requires an authorization header. To learn more, see [How to authenticate and authorize IoT Central REST API calls](howto-authorize-rest-api.md).
+
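+For example, a Python client might attach the token to a `requests` session once, so that every call in this article carries the required header. This is a minimal sketch, assuming `API_TOKEN` holds an API token from your application:
+
+```python
+import requests
+
+APP_SUBDOMAIN = "your-app-subdomain"  # placeholder: your application's subdomain
+API_TOKEN = "your-api-token"          # placeholder: an IoT Central API token
+
+# Attach the authorization header once; every request on the session reuses it.
+session = requests.Session()
+session.headers.update({"Authorization": API_TOKEN})
+
+base_url = f"https://{APP_SUBDOMAIN}.azureiotcentral.com/api"
+roles = session.get(f"{base_url}/roles", params={"api-version": "1.0"})
+roles.raise_for_status()
+print(roles.json())
+```
+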
+For the reference documentation for the IoT Central REST API, see [Azure IoT Central REST API reference](https://docs.microsoft.com/rest/api/iotcentral/).
+
+## Manage roles
+
+The REST API lets you list the roles defined in your IoT Central application. Use the following request to retrieve a list of role IDs from your application:
+
+```http
+GET https://{your app subdomain}.azureiotcentral.com/api/roles?api-version=1.0
+```
+
+The response to this request looks like the following example. It includes the three built-in roles and a custom role:
+
+```json
+{
+ "value": [
+ {
+ "id": "ca310b8d-2f4a-44e0-a36e-957c202cd8d4",
+ "displayName": "Administrator"
+ },
+ {
+ "id": "ae2c9854-393b-4f97-8c42-479d70ce626e",
+ "displayName": "Operator"
+ },
+ {
+ "id": "344138e9-8de4-4497-8c54-5237e96d6aaf",
+ "displayName": "Builder"
+ },
+ {
+ "id": "16f8533f-6b82-478f-8ba8-7e676b541b1b",
+ "displayName": "Example custom role"
+ }
+ ]
+}
+```
+
+## Manage users
+
+The REST API lets you:
+
+- List the users in an application
+- Retrieve the details of an individual user
+- Create a user
+- Modify a user
+- Delete a user
+
+### List users
+
+Use the following request to retrieve a list of users from your application:
+
+```http
+GET https://{your app subdomain}.azureiotcentral.com/api/users?api-version=1.0
+```
+
+The response to this request looks like the following example. The role values identify the role ID the user is associated with:
+
+```json
+{
+ "value": [
+ {
+ "id": "91907508-04fe-4349-91b5-b872f3055a95",
+ "type": "email",
+ "roles": [
+ {
+ "role": "ca310b8d-2f4a-44e0-a36e-957c202cd8d4"
+ }
+ ],
+ "email": "user1@contoso.com"
+ },
+ {
+ "id": "dc1c916b-a652-49ea-b128-7c465a54c759",
+ "type": "email",
+ "roles": [
+ {
+ "role": "ae2c9854-393b-4f97-8c42-479d70ce626e"
+ }
+ ],
+ "email": "user2@contoso.com"
+ },
+ {
+ "id": "3ab9375e-d2d9-42da-b419-6ae86a938321",
+ "type": "email",
+ "roles": [
+ {
+ "role": "344138e9-8de4-4497-8c54-5237e96d6aaf"
+ }
+ ],
+ "email": "user3@contoso.com"
+ },
+ {
+ "id": "fc5a250b-83fb-433d-892c-e0a144f68c2b",
+ "type": "email",
+ "roles": [
+ {
+ "role": "16f8533f-6b82-478f-8ba8-7e676b541b1b"
+ }
+ ],
+ "email": "user4@contoso.com"
+ }
+ ]
+}
+```
+
+### Get a user
+
+Use the following request to retrieve details of an individual user from your application:
+
+```http
+GET https://{your app subdomain}.azureiotcentral.com/api/users/dc1c916b-a652-49ea-b128-7c465a54c759?api-version=1.0
+```
+
+The response to this request looks like the following example. The role value identifies the role ID the user is associated with:
+
+```json
+{
+ "id": "dc1c916b-a652-49ea-b128-7c465a54c759",
+ "type": "email",
+ "roles": [
+ {
+ "role": "ae2c9854-393b-4f97-8c42-479d70ce626e"
+ }
+ ],
+ "email": "user2@contoso.com"
+}
+```
+
+### Create a user
+
+Use the following request to create a user in your application. The ID and email must be unique in the application:
+
+```http
+PUT https://{your app subdomain}.azureiotcentral.com/api/users/user-001?api-version=1.0
+```
+
+In the following request body, the `role` value is for the operator role you retrieved previously:
+
+```json
+{
+ "id": "user-001",
+ "type": "email",
+ "roles": [
+ {
+ "role": "ae2c9854-393b-4f97-8c42-479d70ce626e"
+ }
+ ],
+ "email": "user5@contoso.com"
+}
+```
+
+The response to this request looks like the following example. The role value identifies which role the user is associated with:
+
+```json
+{
+ "id": "user-001",
+ "type": "email",
+ "roles": [
+ {
+ "role": "ae2c9854-393b-4f97-8c42-479d70ce626e"
+ }
+ ],
+ "email": "user5@contoso.com"
+}
+```
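+
+In code, the same create operation might look like the following Python sketch; the subdomain, token, and email values are placeholders, and the role ID is the operator role from the earlier example:
+
+```python
+import requests
+
+APP_SUBDOMAIN = "your-app-subdomain"  # placeholder
+API_TOKEN = "your-api-token"          # placeholder
+OPERATOR_ROLE_ID = "ae2c9854-393b-4f97-8c42-479d70ce626e"  # from the roles list
+
+url = f"https://{APP_SUBDOMAIN}.azureiotcentral.com/api/users/user-001"
+
+# PUT creates the user with the ID given in the URL.
+response = requests.put(
+    url,
+    headers={"Authorization": API_TOKEN},
+    params={"api-version": "1.0"},
+    json={
+        "id": "user-001",
+        "type": "email",
+        "roles": [{"role": OPERATOR_ROLE_ID}],
+        "email": "user5@contoso.com",  # placeholder email
+    },
+)
+response.raise_for_status()
+print(response.json())
+```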
+
+### Change the role of a user
+
+Use the following request to change the role assigned to a user. This example uses the ID of the builder role you retrieved previously:
+
+```http
+PATCH https://{your app subdomain}.azureiotcentral.com/api/users/user-001?api-version=1.0
+```
+
+The request body looks like the following example:
+
+```json
+{
+ "roles": [
+ {
+ "role": "344138e9-8de4-4497-8c54-5237e96d6aaf"
+ }
+ ]
+}
+```
+
+The response to this request looks like the following example:
+
+```json
+{
+ "id": "user-001",
+ "type": "email",
+ "roles": [
+ {
+ "role": "344138e9-8de4-4497-8c54-5237e96d6aaf"
+ }
+ ],
+ "email": "user5@contoso.com"
+}
+```
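+
+If you don't have the role ID to hand, a client can look it up by display name first. The following Python sketch combines the role lookup with the `PATCH` call; the subdomain and token values are placeholders:
+
+```python
+import requests
+
+APP_SUBDOMAIN = "your-app-subdomain"  # placeholder
+API_TOKEN = "your-api-token"          # placeholder
+
+base_url = f"https://{APP_SUBDOMAIN}.azureiotcentral.com/api"
+headers = {"Authorization": API_TOKEN}
+params = {"api-version": "1.0"}
+
+# Look up the ID of the role whose display name is "Builder".
+roles = requests.get(f"{base_url}/roles", headers=headers, params=params).json()
+builder_id = next(
+    role["id"] for role in roles["value"] if role["displayName"] == "Builder"
+)
+
+# Assign that role to the user.
+response = requests.patch(
+    f"{base_url}/users/user-001",
+    headers=headers,
+    params=params,
+    json={"roles": [{"role": builder_id}]},
+)
+response.raise_for_status()
+print(response.json())
+```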
+
+### Delete a user
+
+Use the following request to delete a user:
+
+```http
+DELETE https://{your app subdomain}.azureiotcentral.com/api/users/user-001?api-version=1.0
+```
+
+## Next steps
+
+Now that you've learned how to manage users and roles with the REST API, a suggested next step is to [Manage IoT Central applications with the REST API](/learn/modules/manage-iot-central-apps-with-rest-api/).
iot-central Howto Manage Users Roles https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-manage-users-roles.md
+
+# Administrator
# Manage users and roles in your IoT Central application
-This article describes how, as an administrator, you can add, edit, and delete users in your Azure IoT Central application. The article also describes how to manage roles in your application.
+This article describes how you can add, edit, and delete users in your Azure IoT Central application. The article also describes how to manage roles in your application.
To access and use the **Administration** section, you must be in the **Administrator** role for an Azure IoT Central application. If you create an Azure IoT Central application, you're automatically added to the **Administrator** role for that application.
iot-central Howto Migrate https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-migrate.md
Title: Migrate a V2 Azure IoT Central application to V3 | Microsoft Docs
-description: As an administrator, learn how to migrate your V2 Azure IoT Central application to V3
+description: Learn how to migrate your V2 Azure IoT Central application to V3
Last updated 01/18/2021 +
+# Administrator
# Migrate your V2 IoT Central application to V3
-*This article applies to administrators.*
- Currently, when you create a new IoT Central application, it's a V3 application. If you previously created an application, then depending on when you created it, it may be V2. This article describes how to migrate a V2 to a V3 application to be sure you're using the latest IoT Central features. To learn how to identify the version of an IoT Central application, see [About your application](howto-get-app-info.md).
iot-central Howto Monitor Application Health https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-monitor-application-health.md
Title: Monitor the health of an Azure IoT Central application | Microsoft Docs
-description: As an operator or administrator, monitor the overall health of the devices connected to your IoT Central application.
+description: Monitor the overall health of the devices connected to your IoT Central application.
Last updated 01/27/2021
> [!NOTE] > Metrics are only available for version 3 IoT Central applications. To learn how to check your application version, see [About your application](./howto-get-app-info.md).
-*This article applies to operators and administrators.*
- In this article, you learn how to use the set of metrics provided by IoT Central to assess the health of devices connected to your IoT Central application and the health of your running data exports. Metrics are enabled by default for your IoT Central application and you access them from the [Azure portal](https://portal.azure.com/). The [Azure Monitor data platform exposes these metrics](../../azure-monitor/essentials/data-platform-metrics.md) and provides several ways for you to interact with them. For example, you can use charts in the Azure portal, a REST API, or queries in PowerShell or the Azure CLI.
Metrics may differ from the numbers shown on your Azure IoT Central invoice. Thi
- IoT Central [standard pricing plans](https://azure.microsoft.com/pricing/details/iot-central/) include two devices and varying message quotas for free. While the free items are excluded from billing, they're still counted in the metrics. -- IoT Central autogenerates one test device ID for each device template in the application. This device ID is visible on the **Manage test device** page for a device template. Solution builders may choose to validate their device templates before publishing them by generating code that uses these test device IDs. While these devices are excluded from billing, they're still counted in the metrics.
+- IoT Central autogenerates one test device ID for each device template in the application. This device ID is visible on the **Manage test device** page for a device template. You may choose to validate your device templates before publishing them by generating code that uses these test device IDs. While these devices are excluded from billing, they're still counted in the metrics.
- While metrics may show a subset of device-to-cloud communication, all communication between the device and the cloud [counts as a message for billing](https://azure.microsoft.com/pricing/details/iot-central/).
iot-central Howto Monitor Devices Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-monitor-devices-azure-cli.md
+# This topic applies to device developers and solution builders.
# Monitor device connectivity using Azure CLI
-*This topic applies to device developers and solution builders.*
- Use the Azure CLI IoT extension to see messages your devices are sending to IoT Central and observe changes in the device twin. You can use this tool to debug and observe device connectivity and diagnose issues of device messages not reaching the cloud or devices not responding to twin changes. [Visit the Azure CLI extensions reference for more details](/cli/azure/iot/central)
az iot central device twin show --app-id <app-id> --device-id <device-id>
## Next steps
-If you're a device developer, a suggested next step is to read about [Device connectivity in Azure IoT Central](./concepts-get-connected.md).
+A suggested next step is to read about [Device connectivity in Azure IoT Central](./concepts-get-connected.md).
iot-central Howto Set Up Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-set-up-template.md
Title: Define a new IoT device type in Azure IoT Central | Microsoft Docs
-description: This article shows you, as a solution builder, how to create a new Azure IoT device template in your Azure IoT Central application. You define the telemetry, state, properties, and commands for your type.
+description: This article shows you how to create a new Azure IoT device template in your Azure IoT Central application. You define the telemetry, state, properties, and commands for your type.
Last updated 12/06/2019
+
+# This article applies to solution builders and device developers.
# Define a new IoT device type in your Azure IoT Central application
-*This article applies to solution builders and device developers.*
- A device template is a blueprint that defines the characteristics and behaviors of a type of device that connects to an [Azure IoT Central application](concepts-app-templates.md). For example, a builder can create a device template for a connected fan that has the following characteristics:
After you publish a device template, an operator can go to the **Devices** page,
## Next steps
-If you're a device developer, a suggested next step is to read about [device template versioning](./howto-version-device-template.md).
+A suggested next step is to read about [device template versioning](./howto-version-device-template.md).
iot-central Howto Transform Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-transform-data.md
Last updated 04/09/2021
+
+# This topic applies to solution builders.
# Transform data for IoT Central
-*This topic applies to solution builders.*
- IoT devices send data in various formats. To use the device data with your IoT Central application, you may need to use a transformation to: - Make the format of the data compatible with your IoT Central application.
iot-central Howto Use Action Groups https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-use-action-groups.md
Last updated 12/06/2019
+
+# This article applies to builders and administrators.
# Group multiple actions to run from one or more rules
-*This article applies to builders and administrators.*
- In Azure IoT Central, you create rules to run actions when a condition is met. Rules are based on device telemetry or events. For example, you can notify an operator when the temperature of a device exceeds a threshold. This article describes how to use [Azure Monitor](../../azure-monitor/overview.md) *action groups* to attach multiple actions to an IoT Central rule. You can attach an action group to multiple rules. An [action group](../../azure-monitor/alerts/action-groups.md) is a collection of notification preferences defined by the owner of an Azure subscription. ## Prerequisites
iot-central Howto Use App Templates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-use-app-templates.md
# Export your application
-This article describes how, as a solution manager, to export an IoT Central application to be able to reuse it.
+This article describes how to export an IoT Central application so that you can reuse it.
You have two options:
iot-central Howto Use Commands https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-use-commands.md
Title: How to use device commands in an Azure IoT Central solution
-description: How to use device commands in Azure IoT Central solution. This tutorial shows you how, as a device developer, to use device commands in client app to your Azure IoT Central application.
+description: How to use device commands in an Azure IoT Central solution. This tutorial shows you how to use device commands in a client app connected to your Azure IoT Central application.
Last updated 01/07/2021 +
+# Device developer
# How to use commands in an Azure IoT Central solution
-This how-to guide shows you how, as a device developer, to use commands that are defined in a device template.
+This how-to guide shows you how to use commands that are defined in a device template.
An operator can use the IoT Central UI to call a command on a device. Commands control the behavior of a device. For example, an operator might call a command to reboot a device or collect diagnostics data.
iot-central Howto Use Location Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-use-location-data.md
This article shows you how to use location data in an IoT Central application. A device connected to IoT Central can send location data as a telemetry stream or use a device property to report location data.
-A solution builder can use the location data to:
+You can use the location data to:
* Plot the reported location on a map. * Plot the telemetry location history on a map.
iot-central Howto Use Properties https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-use-properties.md
+# Solution developer
# Use properties in an Azure IoT Central solution
-This how-to guide shows you how, as a device developer, to use device properties that are defined in a device template in your Azure IoT Central application.
+This how-to guide shows you how to use device properties that are defined in a device template in your Azure IoT Central application.
Properties represent point-in-time values. For example, a device can use a property to report the target temperature it's trying to reach. By default, device properties are read-only in IoT Central. Writable properties let you synchronize state between your device and your Azure IoT Central application.
iot-central Howto Version Device Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-version-device-template.md
+
+# This article applies to solution builders and device developers.
# Create a new device template version
-*This article applies to solution builders and device developers.*
- A device template includes a schema that describes how a device interacts with IoT Central. These interactions include telemetry, properties, and commands. Both the device and the IoT Central application rely on a shared understanding of this schema to exchange information. You can only make limited changes to the schema without breaking the contract, that's why most schema changes require a new version of the device template. Versioning the device template lets older devices continue with the schema version they understand, while newer or updated devices use a later schema version. The schema in a device template is defined in the device model and its interfaces. Device templates include other information, such as cloud properties, display customizations, and views. If you make changes to those parts of the device template that don't define how the device exchanges data with IoT Central, you don't need to version the template.
You can create multiple versions of the device template. Over time, you'll have
## Next steps
-If you're an operator or solution builder, a suggested next step is to learn [how to manage your devices](./howto-manage-devices.md).
-
-If you're a device developer, a suggested next step is to read about [Azure IoT Edge devices and Azure IoT Central](./concepts-iot-edge.md).
+A suggested next step is to learn [how to manage your devices](./howto-manage-devices.md).
iot-central Howto View Bill https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-view-bill.md
Title: Manage your bill and convert from the free pricing plan in Azure IoT Central application | Microsoft Docs
-description: As an administrator, learn how to manage your bill and move from the free pricing plan to a standard pricing plan in your Azure IoT Central application
+description: Learn how to manage your bill and move from the free pricing plan to a standard pricing plan in your Azure IoT Central application
Last updated 11/23/2019
+
+# Administrator
# Manage your bill in an IoT Central application
-This article describes how, as an administrator, you can manage your Azure IoT Central billing. You can move your application from the free pricing plan to a standard pricing plan, and also upgrade or downgrade your pricing plan.
+This article describes how you can manage your Azure IoT Central billing. You can move your application from the free pricing plan to a standard pricing plan, and also upgrade or downgrade your pricing plan.
To access the **Administration** section, you must be in the *Administrator* role or have a *custom user role* that allows you to view billing. If you create an Azure IoT Central application, you're automatically assigned to the **Administrator** role.
iot-central Iot Central Supported Browsers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/iot-central-supported-browsers.md
+
+# This article applies to operators, builders, and administrators.
# Supported browsers for Azure IoT Central
-*This article applies to operators, builders, and administrators.*
- Azure IoT Central can be accessed across most modern desktops, tablets, and browsers. The following article outlines the list of supported browsers and required connectivity. ## Supported browsers
iot-central Overview Iot Central Admin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/overview-iot-central-admin.md
+
+# This article applies to administrators.
# IoT Central administrator guide
-*This article applies to administrators.*
- An IoT Central application lets you monitor and manage millions of devices throughout their life cycle. This guide is for administrators who manage IoT Central applications. In IoT Central, an administrator:
iot-central Overview Iot Central Developer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/overview-iot-central-developer.md
+
+# This article applies to device developers.
# IoT Central device development guide
-*This article applies to device developers.*
- An IoT Central application lets you monitor and manage millions of devices throughout their life cycle. This guide is intended for device developers who implement code to run on devices that connect to IoT Central. Devices interact with an IoT Central application using the following primitives:
iot-central Overview Iot Central Operator https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/overview-iot-central-operator.md
# Device groups, jobs, use dashboards and create personal dashboards
+# This article applies to operators.
# IoT Central operator guide
-*This article applies to operators.*
- An IoT Central application lets you monitor and manage millions of devices throughout their life cycle. This guide is for operators who use an IoT Central application to manage IoT devices. An operator:
iot-central Overview Iot Central Solution Builder https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/overview-iot-central-solution-builder.md
+
+# This article applies to solution builders.
# IoT Central solution builder guide
-*This article applies to solution builders.*
- An IoT Central application lets you monitor and manage millions of devices throughout their life cycle. This guide is for solution builders who use IoT Central to build integrated solutions. An IoT Central application lets you manage devices, analyze device telemetry, and integrate with other back-end services. A solution builder:
As a solution builder, you can use the data export and rules capabilities in IoT
- [Extend Azure IoT Central with custom analytics using Azure Databricks](howto-create-custom-analytics.md) - [Visualize and analyze your Azure IoT Central data in a Power BI dashboard](howto-connect-powerbi.md)
+## APIs
+
+IoT Central APIs let you build deep integrations with other services in your IoT solution. The available APIs are categorized as *data plane* or *control plane* APIs.
+
+You use data plane APIs to access the entities in and the capabilities of your IoT Central application. For example, managing devices, device templates, users, and roles. The IoT Central REST API operations are *data plane* operations. To learn more, see [How to use the IoT Central REST API to manage users and roles](howto-manage-users-roles-with-rest-api.md).
+
+You use the *control plane* to manage IoT Central-related resources in your Azure subscription. You can use the Azure CLI and Resource Manager templates for control plane operations. For example, you can use the Azure CLI to create an IoT Central application. To learn more, see [Manage IoT Central from Azure CLI](howto-manage-iot-central-from-cli.md).
+ ## Next steps If you want to learn more about using IoT Central, the suggested next steps are to try the quickstarts, beginning with [Create an Azure IoT Central application](./quick-deploy-iot-central.md).
iot-central Quick Deploy Iot Central https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/quick-deploy-iot-central.md
In this quickstart, you created an IoT Central application. Here's the suggested
> [!div class="nextstepaction"] > [Add a simulated device to your IoT Central application](./quick-create-simulated-device.md)
-If you're a device developer and want to dive into some code, the suggested next step is to:
-> [!div class="nextstepaction"]
-> [Create and connect a client application to your Azure IoT Central application](./tutorial-connect-device.md)
iot-central Quick Monitor Devices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/quick-monitor-devices.md
Title: Quickstart - Monitor your devices in Azure IoT Central
-description: Quickstart - As an operator, learn how to use your Azure IoT Central application to monitor your devices.
+description: Quickstart - Learn how to use your Azure IoT Central application to monitor your devices.
Last updated 11/16/2020
# Quickstart: Use Azure IoT Central to monitor your devices
-*This article applies to operators, builders, and administrators.*
-
-This quickstart shows you, as an operator, how to use your Azure IoT Central application to monitor your devices and change settings.
+This quickstart shows you how to use your Azure IoT Central application to monitor your devices and change settings.
## Prerequisites
iot-central Troubleshoot Connection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/troubleshoot-connection.md
# Troubleshoot why data from your devices isn't showing up in Azure IoT Central
-This document helps device developers find out why the data their devices are sending to IoT Central may not be showing up in the application.
+This document helps you find out why the data your devices are sending to IoT Central may not be showing up in the application.
There are two main areas to investigate:
iot-central Tutorial Add Edge As Leaf Device https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/tutorial-add-edge-as-leaf-device.md
Title: Tutorial - Add an Azure IoT Edge device to Azure IoT Central | Microsoft Docs
-description: Tutorial - As an operator, add an Azure IoT Edge device to your Azure IoT Central application
+description: Tutorial - Add an Azure IoT Edge device to your Azure IoT Central application
Last updated 05/29/2020
# Tutorial: Add an Azure IoT Edge device to your Azure IoT Central application
-*This article applies to operators, solution builders, and device developers.*
- This tutorial shows you how to configure and add an Azure IoT Edge device to your Azure IoT Central application. The tutorial uses an IoT Edge-enabled Linux virtual machine (VM) to simulate an IoT Edge device. The IoT Edge device uses a module that generates simulated environmental telemetry. You view the telemetry on a dashboard in your IoT Central application. In this tutorial, you learn how to:
If you plan to continue working with the IoT Edge VM, you can keep and reuse the
* To delete the IoT Edge VM and its associated resources, delete the **contoso-edge-rg** resource group in the Azure portal. * To delete the IoT Central application, navigate to the **Your application** page in the **Administration** section of the application and select **Delete**.
-As a solution developer or operator, now that you've learned how to work with and manage IoT Edge devices in IoT Central, a suggested next step is to:
-
-> [!div class="nextstepaction"]
-> [Use device groups to analyze device telemetry](./tutorial-use-device-groups.md)
- ## Next steps
-As a device developer, now that you've learned how to work with and manage IoT Edge devices in IoT Central, a suggested next step is to read:
+Now that you've learned how to work with and manage IoT Edge devices in IoT Central, a suggested next step is to read:
> [!div class="nextstepaction"] > [Develop IoT Edge modules](../../iot-edge/tutorial-develop-for-linux.md)
iot-central Tutorial Connect Device https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/tutorial-connect-device.md
Title: Tutorial - Connect a generic client app to Azure IoT Central | Microsoft Docs
-description: This tutorial shows you how, as a device developer, to connect a device running either a C, C#, Java, JavaScript, or Python client app to your Azure IoT Central application. You modify the automatically generated device template by adding views that let an operator interact with a connected device.
+description: This tutorial shows you how to connect a device running either a C, C#, Java, JavaScript, or Python client app to your Azure IoT Central application. You modify the automatically generated device template by adding views that let an operator interact with a connected device.
Last updated 11/24/2020
zone_pivot_groups: programming-languages-set-twenty-six
# Tutorial: Create and connect a client application to your Azure IoT Central application
-*This article applies to solution builders and device developers.*
-
-This tutorial shows you how, as a device developer, to connect a client application to your Azure IoT Central application. The application simulates the behavior of a temperature controller device. When the application connects to IoT Central, it sends the model ID of the temperature controller device model. IoT Central uses the model ID to retrieve the device model and create a device template for you. You add customizations and views to the device template to enable an operator to interact with a device.
+This tutorial shows you how to connect a client application to your Azure IoT Central application. The application simulates the behavior of a temperature controller device. When the application connects to IoT Central, it sends the model ID of the temperature controller device model. IoT Central uses the model ID to retrieve the device model and create a device template for you. You add customizations and views to the device template to enable an operator to interact with a device.
In this tutorial, you learn how to:
In this tutorial, you learn how to:
## View raw data
-As a device developer, you can use the **Raw data** view to examine the raw data your device is sending to IoT Central:
+You can use the **Raw data** view to examine the raw data your device is sending to IoT Central:
:::image type="content" source="media/tutorial-connect-device/raw-data.png" alt-text="The raw data view":::
If you'd prefer to continue through the set of IoT Central tutorials and learn m
> [!div class="nextstepaction"] > [Create a gateway device template](./tutorial-define-gateway-device-type.md)-
-As a device developer, now that you've learned the basics of how to create a device, some suggested next steps are to:
-
-* Read [What are device templates?](./concepts-device-templates.md) to learn more about the role of device templates when you're implementing your device code.
-* Read [Get connected to Azure IoT Central](./concepts-get-connected.md) to learn more about how to register devices with IoT Central and how IoT Central secures device connections.
-* Read [Telemetry, property, and command payloads](concepts-telemetry-properties-commands.md) to learn more about the data the device exchanges with IoT Central.
-* Read [IoT Plug and Play device developer guide](../../iot-pnp/concepts-developer-guide-device.md).
iot-central Tutorial Create Telemetry Rules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/tutorial-create-telemetry-rules.md
# Tutorial: Create a rule and set up notifications in your Azure IoT Central application
-*This article applies to operators, builders, and administrators.*
- You can use Azure IoT Central to remotely monitor your connected devices. Azure IoT Central rules let you monitor your devices in near real time and automatically invoke actions, such as sending an email. This article explains how to create rules to monitor the telemetry your devices send. Devices use telemetry to send numerical data from the device. A rule triggers when the selected telemetry crosses a specified threshold.
iot-central Tutorial Define Gateway Device Type https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/tutorial-define-gateway-device-type.md
# Tutorial - Define a new IoT gateway device type in your Azure IoT Central application
-*This article applies to solution builders and device developers.*
-
-This tutorial shows you, as a solution builder, how to use a gateway device template to define a gateway device in your IoT Central application. You then configure several downstream devices that connect to your IoT Central application through the gateway device.
+This tutorial shows you how to use a gateway device template to define a gateway device in your IoT Central application. You then configure several downstream devices that connect to your IoT Central application through the gateway device.
In this tutorial, you create a **Smart Building** gateway device template. A **Smart Building** gateway device has relationships with other downstream devices.
In this tutorial, you learned how to:
* Add relationships. * Publish your device template.
-Next, as a device developer, you can learn how to:
+Next, you can learn how to:
> [!div class="nextstepaction"] > [Add an Azure IoT Edge device to your Azure IoT Central application](tutorial-add-edge-as-leaf-device.md)
iot-central Tutorial Use Device Groups https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/tutorial-use-device-groups.md
Title: Tutorial - Use device groups in your Azure IoT Central application | Microsoft Docs
-description: Tutorial - As an operator, learn how to use device groups to analyze telemetry from devices in your Azure IoT Central application.
+description: Tutorial - Learn how to use device groups to analyze telemetry from devices in your Azure IoT Central application.
Last updated 11/16/2020
# Tutorial: Use device groups to analyze device telemetry
-This article describes how, as an operator, to use device groups to analyze device telemetry in your Azure IoT Central application.
+This article describes how to use device groups to analyze device telemetry in your Azure IoT Central application.
A device group is a list of devices that are grouped together because they match some specified criteria. Device groups help you manage, visualize, and analyze devices at scale by grouping devices into smaller, logical groups. For example, you can create a device group to list all the air conditioner devices in Seattle to enable a technician to find the devices for which they're responsible.
iot-central How To Configure Connected Field Services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/government/how-to-configure-connected-field-services.md
# Build end-to-end solution with Azure IoT Central and Dynamics 365 Field Service
-As a builder, you can enable integration of your Azure IoT Central application to other business systems.
+
+This article describes how you can enable integration of your Azure IoT Central application to other business systems.
For example, in a connected waste management solution you can optimize the dispatch of trash collection trucks. The optimization can be done based on IoT sensor data from connected waste bins. In your [IoT Central connected waste management application](./tutorial-connected-waste-management.md) you can configure rules and actions, and set them to trigger the creation of alerts in Dynamics Field Service. This scenario is accomplished by using Power Automate, which will be configured directly in IoT Central for automating workflows across applications and services. Additionally, based on service activities in Field Service, information can be sent back to Azure IoT Central.
iot-central Tutorial Continuous Patient Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/healthcare/tutorial-continuous-patient-monitoring.md
# Tutorial: Deploy and walkthrough a continuous patient monitoring app template
-This tutorial shows you, as a solution builder, how to get started by deploying an IoT Central continuous patient monitoring application template. You learn how to deploy and use the template.
+This tutorial shows you how to get started by deploying an IoT Central continuous patient monitoring application template. You learn how to deploy and use the template.
In this tutorial, you learn how to:
iot-central Overview Iot Central Retail https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/retail/overview-iot-central-retail.md
Azure IoT Central is an IoT app platform that reduces the burden and cost associated with developing, managing, and maintaining enterprise-grade IoT solutions. Choosing to build with Azure IoT Central gives you the opportunity to focus your time, money, and energy on transforming your business with IoT data, rather than just maintaining and updating a complex and continually evolving IoT infrastructure.
-This article, describes several retail-specific IoT Central application templates. As a solution builder, you can use these templates to build IoT solutions that optimize supply chains, improve in-store experiences for customers, and track inventory more efficiently.
+This article describes several retail-specific IoT Central application templates. You can use these templates to build IoT solutions that optimize supply chains, improve in-store experiences for customers, and track inventory more efficiently.
:::image type="content" source="media/overview-iot-central-retail/retail-app-templates.png" alt-text="Azure IoT Retail Overview":::
To learn more, see the [Deploy and walk through a digital distribution center ap
For many retailers, environmental conditions within their stores are a key differentiator from their competitors. Retailers want to maintain pleasant conditions within their stores for the benefit of their customers.
-As a solution builder, you can use the IoT Central in-store analytics condition monitoring application template to build an end-to-end solution. The application template lets you digitally connect to and monitor a retail store environment using of different kinds of sensor devices. These sensor devices generate telemetry that you can convert into business insights helping the retailer to reduce operating costs and create a great experience for their customers.
+You can use the IoT Central in-store analytics condition monitoring application template to build an end-to-end solution. The application template lets you digitally connect to and monitor a retail store environment using different kinds of sensor devices. These sensor devices generate telemetry that you can convert into business insights that help the retailer reduce operating costs and create a great experience for their customers.
Use the application template to:
To learn more, see the [Create an in-store analytics application in Azure IoT Ce
For some retailers, the checkout experience within their stores is a key differentiator from their competitors. Retailers want to deliver a smooth checkout experience within their stores to encourage customers to return.
-As a solution builder, you can use the IoT Central in-store analytics checkout application template to build a solution that delivers insights from around the checkout zone of a store to retail staff. For example, sensors can provide information about queue lengths and average wait times for each checkout lane.
+You can use the IoT Central in-store analytics checkout application template to build a solution that delivers insights from around the checkout zone of a store to retail staff. For example, sensors can provide information about queue lengths and average wait times for each checkout lane.
Use the application template to:
To learn more, see the [Deploy and walk through a smart inventory management app
In the increasingly competitive retail landscape, retailers constantly face pressure to close the gap between demand and fulfillment. A new trend that has emerged to address the growing consumer demand is to house inventory near the end customers and the stores they visit.
-The IoT Central micro-fulfillment center application template enables solution builders to monitor and manage all aspects of their fully automated fulfillment centers. The template includes a set of simulated condition monitoring sensors and robotic carriers to accelerate the solution development process. These sensor devices capture meaningful signals that can be converted into business insights allowing retailers to reduce their operating costs and create experiences for their customers.
+The IoT Central micro-fulfillment center application template enables you to monitor and manage all aspects of your fully automated fulfillment centers. The template includes a set of simulated condition monitoring sensors and robotic carriers to accelerate the solution development process. These sensor devices capture meaningful signals that can be converted into business insights allowing retailers to reduce their operating costs and create experiences for their customers.
The application template enables you to:
iot-central Tutorial In Store Analytics Create App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/retail/tutorial-in-store-analytics-create-app.md
Last updated 11/12/2019
# Tutorial: Create an in-store analytics application in Azure IoT Central
-The tutorial shows solution builders how to create an Azure IoT Central in-store analytics application. The sample application is for a retail store. It's a solution to the common business need to monitor and adapt to occupancy and environmental conditions.
+The tutorial shows you how to create an Azure IoT Central in-store analytics application. The sample application is for a retail store. It's a solution to the common business need to monitor and adapt to occupancy and environmental conditions.
The sample application that you build includes three real devices: a Rigado Cascade 500 gateway, and two RuuviTag sensors. The tutorial also shows how to use the simulated occupancy sensor included in the application template for testing purposes. The Rigado C500 gateway serves as the communication hub in your application. It communicates with sensors in your store and manages their connections to the cloud. The RuuviTag is an environmental sensor that provides telemetry including temperature, humidity, and pressure. The simulated occupancy sensor provides a way to track motion and presence in the checkout areas of a store.
iot-central Tutorial In Store Analytics Customize Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/retail/tutorial-in-store-analytics-customize-dashboard.md
Last updated 11/12/2019
# Tutorial: Customize the operator dashboard and manage devices in Azure IoT Central
-In this tutorial, as a builder, you learn how to customize the operator dashboard in your Azure IoT Central in-store analytics application. Application operators can use the customized dashboard to run the application and manage the attached devices.
+In this tutorial, you learn how to customize the operator dashboard in your Azure IoT Central in-store analytics application. Application operators can use the customized dashboard to run the application and manage the attached devices.
In this tutorial, you learn how to: > [!div class="checklist"]
iot-central Tutorial Video Analytics Create App Openvino https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/retail/tutorial-video-analytics-create-app-openvino.md
Last updated 10/06/2020
# Tutorial: Create a video analytics - object and motion detection application in Azure IoT Central (OpenVINO&trade;)
-As a solution builder, learn how to create a video analytics application with the IoT Central *video analytics - object and motion detection* application template, Azure IoT Edge devices, Azure Media Services, and Intel's hardware-optimized OpenVINO&trade; for object and motion detection. The solution uses a retail store to show how to meet the common business need to monitor security cameras. The solution uses automatic object detection in a video feed to quickly identify and locate interesting events.
+Learn how to create a video analytics application with the IoT Central *video analytics - object and motion detection* application template, Azure IoT Edge devices, Azure Media Services, and Intel's hardware-optimized OpenVINO&trade; for object and motion detection. The solution uses a retail store to show how to meet the common business need to monitor security cameras. The solution uses automatic object detection in a video feed to quickly identify and locate interesting events.
> [!TIP] > To use YOLO v3 instead of OpenVINO&trade; for object and motion detection, see [Tutorial: Create a video analytics - object and motion detection application in Azure IoT Central (YOLO v3)](tutorial-video-analytics-create-app-yolo-v3.md).
iot-central Tutorial Video Analytics Create App Yolo V3 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/retail/tutorial-video-analytics-create-app-yolo-v3.md
Last updated 10/06/2020
# Tutorial: Create a video analytics - object and motion detection application in Azure IoT Central (YOLO v3)
-As a solution builder, learn how to create a video analytics application with the IoT Central *video analytics - object and motion detection* application template, Azure IoT Edge devices, Azure Media Services, and the YOLO v3 real-time object and motion detection system. The solution uses a retail store to show how to meet the common business need to monitor security cameras. The solution uses automatic object detection in a video feed to quickly identify and locate interesting events.
+Learn how to create a video analytics application with the IoT Central *video analytics - object and motion detection* application template, Azure IoT Edge devices, Azure Media Services, and the YOLO v3 real-time object and motion detection system. The solution uses a retail store to show how to meet the common business need to monitor security cameras. The solution uses automatic object detection in a video feed to quickly identify and locate interesting events.
> [!TIP] > To use OpenVINO&trade; instead of YOLO v3 for object and motion detection, see [Tutorial: Create a video analytics - object and motion detection application in Azure IoT Central (OpenVINO&trade;)](tutorial-video-analytics-create-app-openvino.md).
iot-develop About Iot Develop https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-develop/about-iot-develop.md
To learn more about selecting an application platform and tools, see [Overview:
## Next steps Select one of the following quickstart series that is most relevant to your development role. These articles demonstrate the basics of creating an Azure IoT application to host devices, using an SDK, connecting a device, and sending telemetry. -- For device application development: [Quickstart: Send telemetry from a device to Azure IoT Central](quickstart-send-telemetry-python.md)
+- For device application development: [Quickstart: Send telemetry from a device to Azure IoT Central](quickstart-send-telemetry-central.md)
- For embedded device development: [Getting started with Azure IoT embedded device development](quickstart-device-development.md)
iot-develop About Iot Sdks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-develop/about-iot-sdks.md
Whenever possible, select an SDK that feels familiar to your development team. A
## How can I get started?
-The place to start is to explore the GitHub repositories of the Azure Device SDKs. You can also try a [quickstart](quickstart-send-telemetry-python.md) that shows how to quickly use an SDK to send telemetry to Azure IoT.
+The place to start is to explore the GitHub repositories of the Azure Device SDKs. You can also try a [quickstart](quickstart-send-telemetry-central.md) that shows how to quickly use an SDK to send telemetry to Azure IoT.
Your options to get started depend on what kind of device you have: - For constrained devices, use the [Embedded C SDK](#embedded-c-sdk).
The IoT Hub Device Provisioning Service (DPS) is a helper service for IoT Hub th
## Next Steps
-* [Quickstart: Connect a device to IoT Central (Python)](quickstart-send-telemetry-python.md)
+* [Quickstart: Connect a device to IoT Central](quickstart-send-telemetry-central.md)
* [Quickstart: Connect a device to IoT Hub (Python)](quickstart-send-telemetry-cli-python.md) * [Get started with embedded development](quickstart-device-development.md) * Learn more about the [benefits of developing using Azure IoT SDKs](https://azure.microsoft.com/blog/benefits-of-using-the-azure-iot-sdks-in-your-azure-iot-solution/)
iot-develop Quickstart Send Telemetry Central https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-develop/quickstart-send-telemetry-central.md
+
+ Title: Quickstart - connect a device and send telemetry to Azure IoT Central
+description: This quickstart shows device developers how to connect a device securely to Azure IoT Central. You use an Azure IoT device SDK for C, C#, Python, Node.js, or Java, to run a client app on a simulated device, then you connect to IoT Central and send telemetry.
++++ Last updated : 04/27/2021+
+zone_pivot_groups: iot-device-application-development-languages
+
+#Customer intent: As a device application developer, I want to learn the basic workflow of using an Azure IoT device SDK to build a client app on a device, connect the device securely to Azure IoT Central, and send telemetry.
++
+# Quickstart: Send telemetry from a device to Azure IoT Central
+
+**Applies to**: [Device application developers](about-iot-develop.md#device-application-development)
+
+In this quickstart, you learn a basic Azure IoT application development workflow. First you create an Azure IoT Central application for hosting devices. Then you use an Azure IoT device SDK sample to run a simulated temperature controller, connect it securely to IoT Central, and send telemetry.
+++++++++++++
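For reference, the following is a minimal Python sketch of what such an SDK sample does under the hood: it provisions the device through the Device Provisioning Service (DPS), then connects to the assigned hub and sends a single telemetry message. It assumes the `azure-iot-device` package and the same DPS environment variables (endpoint, ID scope, device ID, device key) that the SDK samples use; it is not the full temperature controller sample.

```python
import os
from azure.iot.device import IoTHubDeviceClient, Message, ProvisioningDeviceClient

# Provision the device through DPS to discover its assigned IoT hub.
provisioning_client = ProvisioningDeviceClient.create_from_symmetric_key(
    provisioning_host=os.environ["IOTHUB_DEVICE_DPS_ENDPOINT"],
    registration_id=os.environ["IOTHUB_DEVICE_DPS_DEVICE_ID"],
    id_scope=os.environ["IOTHUB_DEVICE_DPS_ID_SCOPE"],
    symmetric_key=os.environ["IOTHUB_DEVICE_DPS_DEVICE_KEY"],
)
registration_result = provisioning_client.register()

# Connect to the assigned hub and send one telemetry message.
device_client = IoTHubDeviceClient.create_from_symmetric_key(
    symmetric_key=os.environ["IOTHUB_DEVICE_DPS_DEVICE_KEY"],
    hostname=registration_result.registration_state.assigned_hub,
    device_id=os.environ["IOTHUB_DEVICE_DPS_DEVICE_ID"],
)
device_client.connect()
device_client.send_message(Message('{"temperature": 33}'))
device_client.disconnect()
```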
+## View telemetry
+After the simulated device connects to IoT Central, it begins sending telemetry. You can view the telemetry and other details about connected devices in IoT Central.
+
+In IoT Central, select **Devices**, click your device name, then select the **Raw data** tab. This view displays the raw telemetry from the simulated device.
++
+Your device is now securely connected and sending telemetry to Azure IoT.
+
+## Clean up resources
+If you no longer need the IoT Central resources created in this quickstart, you can delete them. Optionally, if you plan to continue following the documentation in this guide, you can keep the application you created and reuse it for other samples.
+
+To remove the Azure IoT Central sample application and all its devices and resources:
+1. Select **Administration** > **Your application**.
+1. Select **Delete**.
+
+## Next steps
+
+In this quickstart, you learned a basic Azure IoT application workflow for securely connecting a device to the cloud and sending device-to-cloud telemetry. You used Azure IoT Central to create an application and a device instance. Then you used an Azure IoT device SDK to create a simulated device, connect to IoT Central, and send telemetry. You also used IoT Central to monitor the telemetry.
+
+As a next step, explore the following articles to learn more about building device solutions with Azure IoT.
+
+> [!div class="nextstepaction"]
+> [Send telemetry to Azure IoT hub](quickstart-send-telemetry-cli-python.md)
+> [!div class="nextstepaction"]
+> [Create an IoT Central application](../iot-central/core/quick-deploy-iot-central.md)
iot-develop Quickstart Send Telemetry Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-develop/quickstart-send-telemetry-python.md
- Title: Send device telemetry to Azure IoT Central quickstart (Python)
-description: In this quickstart, you use the Azure IoT Hub Device SDK for Python to send telemetry from a device to IoT Central.
---- Previously updated : 04/27/2021--
-# Quickstart: Send telemetry from a device to Azure IoT Central (Python)
-
-**Applies to**: [Device application developers](about-iot-develop.md#device-application-development)<br>
-**Completion time**: 12 minutes
-
-In this quickstart, you learn a basic IoT application development workflow. First you create a cloud application to manage devices in Azure IoT Central. Then, you use the Azure IoT Python SDK to build a simulated thermostat device, connect it to IoT Central, and send telemetry.
-
-## Prerequisites
-- [Python 3.7](https://www.python.org/downloads/) or later. To check your Python version, run `python --version`. --
-## Configure a simulated device
-In this section, you use the Python SDK samples to configure a simulated thermostat device.
-
-1. Open a terminal using Windows CMD, or PowerShell, or Bash (for Windows or Linux). You'll use the terminal to install the Python SDK, update environment variables, and run the Python code sample.
-
-1. Copy the [Azure IoT Python SDK device samples](https://github.com/Azure/azure-iot-sdk-python/tree/master/azure-iot-device/samples) to your local machine.
-
- ```console
- git clone https://github.com/Azure/azure-iot-sdk-python
- ```
-
-1. Navigate to the samples directory.
-
- ```console
- cd azure-iot-sdk-python/azure-iot-device/samples/pnp
- ```
-1. Install the Azure IoT Python SDK.
- ```console
- pip3 install azure-iot-device
- ```
-
-1. Set each of the following environment variables, to enable your simulated device to connect to IoT Central. For `IOTHUB_DEVICE_DPS_ID_SCOPE`, `IOTHUB_DEVICE_DPS_DEVICE_KEY`, and `IOTHUB_DEVICE_DPS_DEVICE_ID`, use the device connection values that you saved.
-
- **Windows CMD**
-
- ```console
- set IOTHUB_DEVICE_SECURITY_TYPE=DPS
- set IOTHUB_DEVICE_DPS_ID_SCOPE=<application ID scope>
- set IOTHUB_DEVICE_DPS_DEVICE_KEY=<device primary key>
- set IOTHUB_DEVICE_DPS_DEVICE_ID=<your device ID>
- set IOTHUB_DEVICE_DPS_ENDPOINT=global.azure-devices-provisioning.net
- ```
-
- > [!NOTE]
- > For Windows CMD there are no quotation marks surrounding the variable values.
-
- **PowerShell**
-
- ```azurepowershell
- $env:IOTHUB_DEVICE_SECURITY_TYPE='DPS'
- $env:IOTHUB_DEVICE_DPS_ID_SCOPE='<application ID scope>'
- $env:IOTHUB_DEVICE_DPS_DEVICE_KEY='<device primary key>'
- $env:IOTHUB_DEVICE_DPS_DEVICE_ID='<your device ID>'
- $env:IOTHUB_DEVICE_DPS_ENDPOINT='global.azure-devices-provisioning.net'
- ```
-
- **Bash (Linux or Windows)**
-
- ```bash
- export IOTHUB_DEVICE_SECURITY_TYPE='DPS'
- export IOTHUB_DEVICE_DPS_ID_SCOPE='<application ID scope>'
- export IOTHUB_DEVICE_DPS_DEVICE_KEY='<device primary key>'
- export IOTHUB_DEVICE_DPS_DEVICE_ID='<your device ID>'
- export IOTHUB_DEVICE_DPS_ENDPOINT='global.azure-devices-provisioning.net'
- ```
-
-## Send telemetry
-After configuring your system, you're ready to run the code. The code creates a simulated thermostat,
-connects to your IoT Central application and device instance, and sends telemetry.
-
-1. In your terminal, run the following code sample. Optionally, you can run the Python sample code in your Python IDE.
- ```console
- python temp_controller_with_thermostats.py
- ```
-
- After your simulated device connects to your IoT Central application, it connects to the device instance you created in the application and begins to send telemetry. The connection details and telemetry output are shown in your console:
-
- ```output
- c:\azure-iot-sdk-python\azure-iot-device\samples\pnp>python temp_controller_with_thermostats.py
- Device was assigned
- iotc-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx.azure-devices.net
- my-sdk-device
- Updating pnp properties for root interface
- {'serialNumber': 'alohomora'}
- Updating pnp properties for thermostat1
- {'thermostat1': {'maxTempSinceLastReboot': 98.34, '__t': 'c'}}
- Updating pnp properties for thermostat2
- {'thermostat2': {'maxTempSinceLastReboot': 48.92, '__t': 'c'}}
- Updating pnp properties for deviceInformation
- {'deviceInformation': {'swVersion': '5.5', 'manufacturer': 'Contoso Device Corporation', 'model': 'Contoso 4762B-turbo', 'osName': 'Mac Os', 'processorArchitecture': 'x86-64', 'processorManufacturer': 'Intel', 'totalStorage': 1024, 'totalMemory': 32, '__t': 'c'}}
- Listening for command requests and property updates
- Press Q to quit
- Sending telemetry from various components
- Sent message
- {"temperature": 33}
- ```
-
-1. In IoT Central, select **Devices**, click your device name, then select the **Raw data** tab. This view displays the raw telemetry from the simulated device.
-
- :::image type="content" source="media/quickstart-send-telemetry-python/iot-central-telemetry-output.png" alt-text="IoT Central device telemetry raw output":::
-
- Your device is now securely connected and sending telemetry to Azure IoT.
-
-## Clean up resources
-If you no longer need the IoT Central resources created in this quickstart, you can delete them. Optionally, if you plan to continue following the documentation in this guide, you can keep the application you created and reuse it for other samples.
-
-To remove the Azure IoT Central sample application and all its devices and resources:
-1. Select **Administration** > **Your application**.
-1. Select **Delete**.
-
-## Next steps
-
-In this quickstart, you learned a basic Azure IoT application workflow for securely connecting a device to the cloud and sending device-to-cloud telemetry. You used the Azure IoT Central to create an application and a device, then you used the Azure IoT Python SDK to create a simulated device and send telemetry. You also used IoT Central to monitor the telemetry.
-
-As a next step, explore IoT Central as a solution for hosting your devices, and explore the Azure SDK Python code samples.
-
-> [!div class="nextstepaction"]
-> [Create an IoT Central application](../iot-central/core/quick-deploy-iot-central.md)
-> [!div class="nextstepaction"]
-> [Asynchronous device samples](https://github.com/Azure/azure-iot-sdk-python/tree/master/azure-iot-device/samples/async-hub-scenarios)
-> [!div class="nextstepaction"]
-> [Synchronous device samples](https://github.com/Azure/azure-iot-sdk-python/tree/master/azure-iot-device/samples/sync-samples)
iot-edge How To Update Iot Edge https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-update-iot-edge.md
If you want to update to the most recent version of IoT Edge, use the following
<!-- end 1.2 --> :::moniker-end
-# [Linux for Windows](#tab/linuxforwindows)
+# [Linux on Windows](#tab/linuxonwindows)
<!-- 1.2 --> :::moniker range=">=iotedge-2020-11"
iot-edge Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/troubleshoot.md
description: Use this article to learn standard diagnostic skills for Azure IoT
Previously updated : 04/01/2021 Last updated : 05/04/2021
sudo iotedge support-bundle --since 6h
:::moniker-end <!-- end 1.2 -->
-You can also use a [direct method](how-to-retrieve-iot-edge-logs.md#upload-support-bundle-diagnostics) call to your device to upload the output of the support-bundle command to Azure Blob Storage.
+By default, the `support-bundle` command creates a zip file called **support_bundle.zip** in the directory where the command is called. Use the flag `--output` to specify a different path or file name for the output.
+
+For more information about the command, view its help information.
+
+```bash/cmd
+iotedge support-bundle --help
+```
+
+You can also use the built-in direct method call [UploadSupportBundle](how-to-retrieve-iot-edge-logs.md#upload-support-bundle-diagnostics) to upload the output of the support-bundle command to Azure Blob Storage.
> [!WARNING] > Output from the `support-bundle` command can contain host, device and module names, information logged by your modules etc. Please be aware of this if sharing the output in a public forum.
iot-hub Iot Hub Device Management Iot Extension Azure Cli 2 0 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-device-management-iot-extension-azure-cli-2-0.md
Set a desired property interval = 3000 by running the following command:
```azurecli az iot hub device-twin update -n <your hub name> \
- -d <your device id> --set properties.desired.interval = 3000
+ -d <your device id> --set properties.desired.interval=3000
``` This property can be read from your device.
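As a minimal device-side sketch (assuming the `azure-iot-device` Python SDK and a placeholder device connection string), the updated desired property can be read like this:

```python
from azure.iot.device import IoTHubDeviceClient

# Connect as the device and fetch its twin.
client = IoTHubDeviceClient.create_from_connection_string("<device connection string>")
twin = client.get_twin()

# The value set by the az command above shows up under desired properties.
print(twin["desired"].get("interval"))  # prints 3000
client.disconnect()
```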
Add a field role = temperature&humidity to the device by running the following c
az iot hub device-twin update \ --hub-name <your hub name> \ --device-id <your device id> \
- --set tags = '{"role":"temperature&humidity"}}'
+ --set tags='{"role":"temperature&humidity"}'
``` ## Device twin queries
key-vault Assign Access Policy Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/general/assign-access-policy-portal.md
For more information on creating groups in Azure Active Directory through the Az
![Selecting the security principal for the access policy](../media/authentication/assign-policy-portal-03.png)
- If you're using a managed identity for the app, search for and select the name of the app itself. (For more information on managed identity and service principals, see [Key Vault authentication - app identity and service principals](authentication.md#app-identity-and-security-principals).)
+ If you're using a managed identity for the app, search for and select the name of the app itself. (For more information on security principals, see [Key Vault authentication](authentication.md).)
1. Back in the **Add access policy** pane, select **Add** to save the access policy.
key-vault Authentication Fundamentals https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/general/authentication-fundamentals.md
- Title: Azure Key Vault authentication fundamentals
-description: Learn about how key vault's authentication model works
-- Previously updated : 04/15/2021-----
-# Key Vault Authentication Fundamentals
-
-Azure Key Vault allows you to securely store and manage application credentials such as secrets, keys, and certificates in a central and secure cloud repository. Key Vault eliminates the need to store credentials in your applications. Your applications can authenticate to Key Vault at run time to retrieve credentials.
-
-As an administrator, you can tightly control which users and applications can access your key vault and you can limit and audit the operations they perform. This document explains the fundamental concepts of the key vault access model. It will provide you with an introductory level of knowledge and show you how you can authenticate a user or application to key vault from start to finish.
-
-## Required Knowledge
-
-This document assumes you are familiar with the following concepts. If you are not familiar with any of these concepts, follow the help links before proceeding.
-
-* Azure Active Directory [link](../../active-directory/fundamentals/active-directory-whatis.md)
-* Security Principals [link](./authentication.md#app-identity-and-security-principals)
-
-## Key Vault Configuration Steps Summary
-
-1. Register your user or application in Azure Active Directory as a security principal.
-1. Configure a role assignment for your security principal in Azure Active Directory.
-1. Configure key vault access policies for your security principal.
-1. Configure Key Vault firewall access to your key vault (optional).
-1. Test your security principal's ability to access key vault.
-
-## Register a user or application in Azure Active Directory as a security principal
-
-When a user or application makes a request to key vault, the request must first be authenticated by Azure Active Directory. For this to work, the user or application needs to be registered in Azure Active Directory as a security principal.
-
-Follow the documentation links below to understand how to register a user or application in Azure Active Directory.
-**Make sure you create a password for user registration and a client secret or client certificate credential for applications.**
-
-* Registering a user in Azure Active Directory [link](../../active-directory/fundamentals/add-users-azure-active-directory.md)
-* Registering an application in Azure Active Directory [link](../../active-directory/develop/quickstart-register-app.md)
-
-## Assign your security principal a role
-
-You can use Azure role-based access control (Azure RBAC) to assign permissions to security principals. These permissions are called role assignments.
-
-In the context of key vault, these role assignments determine a security principal's level of access to the management plane (also known as control plane) of key vault. These role assignments do not provide access to the data plane secrets directly, but they provide access to manage properties of key vault. For example a user or application assigned a **Reader role** will not be permitted to make changes to key vault firewall settings, whereas a user or application assigned a **Contributor role** can make changes. Neither role will have direct access to perform operations on secrets, keys, and certificates such as creating or retrieving their value until they are assigned access to the key vault data plane. This is covered in the next step.
-
->[!IMPORTANT]
-> Although users with the Contributor or Owner role do not have access to perform operations on secrets stored in key vault by default, the Contributor and Owner roles, provide permissions to add or remove access policies to secrets stored in key vault. Therefore a user with these role assignments can grant themselves access to access secrets in the key vault. For this reason, it is recommended that only administrators have access to the Contributor or Owner roles. Users and applications that only need to retrieve secrets from key vault should be granted the Reader role. **More details in the next section.**
-
->[!NOTE]
-> When you assign a role assignment to a user at the Azure Active Directory tenant level, this set of permissions will trickle down to all subscriptions, resource-groups, and resources within the scope of the assignment. To follow the principal of least-privilege you can make this role assignment at a more granular scope. For example you can assign a user a Reader role at the subscription level, and an Owner role for a single key vault. Go to the Identity Access Management (IAM) settings of a subscription, resource-group, or key vault to make a role assignment at a more granular scope.
-
-* To learn more about Azure roles [link](../../role-based-access-control/built-in-roles.md)
-* To learn more about assigning or removing role assignments [link](../../role-based-access-control/role-assignments-portal.md)
-
-## Configure key vault access policies for your security principal
-
-Before you grant access for your users and applications to access key vault, it is important to understand the different types of operations that can be performed on a key vault. There are two main types of key vault operations, management plane (also referred to as control plane) operations, and data plane Operations.
-
-This table shows several examples of the different operations that are controlled by the management plane vs the data plane. Operations that change the properties of the key vault are management plane operations. Operations that change or retrieve the value of secrets stored in key vault are data plane operations.
-
-|Management Plane Operations (Examples)|Data Plane Operations (Examples)|
-| | |
-| Create Key Vault | Create a Key, Secret, Certificate
-| Delete Key Vault | Delete a Key, Secret, Certificate
-| Add or Remove Key Vault Role Assignments | List and Get values of Keys, Secrets, Certificates
-| Add or Remove Key Vault Access Policies | Backup and Restore Keys, Secrets, Certificates
-| Modify Key Vault Firewall Settings | Renew Keys, Secrets, Certificates
-| Modify Key Vault Recovery Settings | Purge or Recover soft-deleted Keys, Secrets, Certificates
-| Modify Key Vault Diagnostic Logs Settings
-
-### Management Plane Access & Azure Active Directory Role Assignments
-
-Azure Active Directory role assignments grant access to perform management plane operations on a key vault. This access is typically granted to users, not to applications. You can restrict what management plane operations a user can perform by changing a user's role assignment.
-
-For example, assigning a user a Key Vault Reader Role to a user will allow them to see the properties of your key vault such as access policies, but will not allow them to make any changes. Assigning a user, an Owner role will allow them full access to change key vault management plane settings.
-
-Role assignments are controlled in the key vault Access Control (IAM) blade. If you want a particular user to have access to be a reader or be the administrator of multiple key vault resources, you can create a role assignment at the vault, resource group, or subscription level, and the role assignment will be added to all resources within the scope of the assignment.
-
-Data plane access, or access to perform operations on keys, secrets, and certificates stored in key vault can be added in one of two ways.
-
-### Data Plane Access Option 1: Classic Key Vault Access Policies
-
-Key vault access policies grant users and applications access to perform data plane operations on a key vault.
-
-> [!NOTE]
-> This access model is not compatible with Azure RBAC for key vault (Option 2) documented below. You must choose one. You will have the opportunity to make this selection when you click on the Access Policy tab of your key vault.
-
-Classic access policies are granular, which means you can allow or deny the ability of each individual user or application to perform individual operations within a key vault. Here are a few examples:
-
-* Security Principal 1 can perform any key operation but is not allowed to perform any secret or certificate operation.
-* Security Principal 2 can list and read all keys, secrets, and certificates but cannot perform any create, delete, or renew operations.
-* Security Principal 3 can backup and restore all secrets but cannot read the value of the secrets themselves.
-
-However, classic access policies do not allow per-object level permissions, and assigned permissions are applied to the scope of an individual key vault. For example, if you grant the "Secret Get" access policy permission to a security principal in a particular key vault, the security principal has the ability to get all secrets within that particular key vault. However, this "Get Secret" permission will not automatically extend to other key vaults and must be assigned explicitly.
-
-> [!IMPORTANT]
-> Classic key vault access policies and Azure Active Directory role assignments are independent of each other. Assigning a security principal a 'Contributor' role at a subscription level will not automatically allow the security principal the ability to perform data-plane operations on every key vault within the scope of the subscription. The security principal must still be granted, or grant themselves, access policy permissions to perform data plane operations.
-
-### Data Plane Access Option 2: Azure RBAC for Key Vault
-
-A new way to grant access to the key vault data plane is through Azure role-based access control (Azure RBAC) for key vault.
-
-> [!NOTE]
-> This access model is not compatible with key vault classic access policies shown above. You must choose one. You will have the opportunity to make this selection when you click on the Access Policy tab of your key vault.
-
-Key Vault role assignments are a set of Azure built-in role assignments that encompass common sets of permissions used to access keys, secrets, and certificates. This permission model also enables additional capabilities that are not available in the classic key vault access policy model.
-
-* Azure RBAC permissions can be managed at scale by allowing users to have these roles assigned at a subscription, resource group, or individual key vault level. A user will have the data plane permissions to all key vaults within the scope of the Azure RBAC assignment. This eliminates the need to assign individual access policy permissions per user/application per key vault.
-
-* Azure RBAC permissions are compatible with Privileged Identity Management or PIM. This allows you to configure just-in-time access controls for privileged roles like Key Vault Administrator. This is a best-security practice and follows the principal of least-privilege by eliminating standing access to your key vaults.
-
-To learn more about Azure RBAC for Key Vault, see the following documents:
-
-* Azure RBAC for Key Vault [link](rbac-guide.md)
-* Azure RBAC for Key Vault roles [link](../../role-based-access-control/built-in-roles.md#key-vault-administrator)
-
-## Configure Key Vault Firewall
-
-By default, key vault allows traffic from the public internet to send reach your key vault through a public endpoint. For an additional layer of security, you can configure the Azure Key Vault Firewall to restrict access to the key vault public endpoint.
-
-To enable key vault firewall, click on the Networking tab in the key vault portal and select the radio button to Allow Access From: "Private Endpoint and Selected Networks". If you choose to enable the key vault firewall, these are the ways you can allow traffic through the key vault firewall.
-
-* Add IPv4 addresses to the key vault firewall allow list. This option works best for applications that have static IP addresses.
-
-* Add a virtual network to the key vault firewall. This option works best for Azure resources that have dynamic IP addresses such as Virtual Machines. You can add Azure resources to a virtual network and add the virtual network to the key vault firewall allow list. This option uses a service endpoint which is a private IP address within the virtual network. This provides an additional layer of protection so no traffic between key vault and your virtual network are routed over the public internet. To learn more about service endpoint see the following documentation. [link](./network-security.md)
-
-* Add a private link connection to the key vault. This option connects your virtual network directly to a particular instance of key vault, effectively bringing your key vault inside your virtual network. To learn more about configuring a private endpoint connection to key vault, see the following [link](./private-link-service.md)
-
-## Test your service principal's ability to access key vault
-
-Once you have followed all of the steps above, you will be able to set and retrieve secrets from your key vault.
-
-### Authentication process for users (examples)
-
-* Users can log in to the Azure portal to use key vault. [Key Vault portal Quickstart](./quick-create-portal.md)
-
-* User can use Azure CLI to use key vault. [Key Vault Azure CLI Quickstart](./quick-create-cli.md)
-
-* User can use Azure PowerShell to use key vault. [Key Vault Azure PowerShell Quickstart](./quick-create-powershell.md)
-
-### Azure Active Directory authentication process for applications or services (examples)
-
-* An application provides a client secret and client ID in a function to get an Azure Active Directory token.
-
-* An application provides a certificate to get an Azure Active Directory token.
-
-* An Azure resource uses MSI authentication to get an Azure Active Directory token.
-
-* Learn more about MSI authentication [link](../../active-directory/managed-identities-azure-resources/overview.md)
-
-### Authentication process for application (Python Example)
-
-Use the following code sample to test whether your application can retrieve a secret from your key vault using the service principal you configured.
-
->[!NOTE]
->This sample is for demonstration and test purposes only. **DO NOT USE CLIENT SECRET AUTHENTICATION IN PRODUCTION** This is not a secure design practice. You should use client certificate or MSI Authentication as a best practice.
-
-```python
-from azure.identity import ClientSecretCredential
-from azure.keyvault.secrets import SecretClient
-
-tenant_id = "{ENTER YOUR TENANT ID HERE}" ##ENTER AZURE TENANT ID
-vault_url = "https://{ENTER YOUR VAULT NAME}.vault.azure.net/" ##ENTER THE URL OF YOUR KEY VAULT
-client_id = "{ENTER YOUR CLIENT ID HERE}" ##ENTER THE CLIENT ID OF YOUR SERVICE PRINCIPAL
-cert_path = "{ENTER YOUR CLIENT SECRET HERE}" ##ENTER THE CLIENT SECRET OF YOUR SERVICE PRINCIPAL
-
-def main():
-
- #AUTHENTICATION TO Azure Active Directory USING CLIENT ID AND CLIENT SECRET (GET Azure Active Directory TOKEN)
- token = ClientSecretCredential(tenant_id=tenant_id, client_id=client_id, client_secret=client_secret)
-
- #AUTHENTICATION TO KEY VAULT PRESENTING Azure Active Directory TOKEN
- client = SecretClient(vault_url=vault_url, credential=token)
-
- #CALL TO KEY VAULT TO GET SECRET
- #ENTER NAME OF A SECRET STORED IN KEY VAULT
- secret = client.get_secret('{SECRET_NAME}')
-
- #GET PLAINTEXT OF SECRET
- print(secret.value)
-
-#CALL MAIN()
-if __name__ == "__main__":
- main()
-```
-
-## Next Steps
-
-To learn about key vault authentication in more detail, see the following document. [Key Vault Authentication](./authentication.md)
key-vault Authentication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/general/authentication.md
Last updated 03/31/2021 -+
-# Authenticate to Azure Key Vault
-
-Azure Key Vault allows you to store secrets and control their distribution in a centralized, secure cloud repository, which eliminates the need to store credentials in applications. Applications need only authenticate with Key Vault at run time to access those secrets.
-
-## App identity and security principals
+# Authentication in Azure Key Vault
Authentication with Key Vault works in conjunction with [Azure Active Directory (Azure AD)](../../active-directory/fundamentals/active-directory-whatis.md), which is responsible for authenticating the identity of any given **security principal**.
For applications, there are two ways to obtain a service principal:
* If you cannot use managed identity, you instead **register** the application with your Azure AD tenant, as described on [Quickstart: Register an application with the Azure identity platform](../../active-directory/develop/quickstart-register-app.md). Registration also creates a second application object that identifies the app across all tenants.
-## Authorize a security principal to access Key Vault
-
-Key Vault works with two separate levels of authorization:
--- **Access policies** control whether a user, group, or service principal is authorized to access secrets, keys, and certificates *within* an existing Key Vault resource (sometimes referred to "data plane" operations). Access policies are typically granted to users, groups, and applications.-
- To assign access policies, see the following articles:
-
- - [Azure portal](assign-access-policy-portal.md)
- - [Azure CLI](assign-access-policy-cli.md)
- - [Azure PowerShell](assign-access-policy-portal.md)
--- **Role permissions** control whether a user, group, or service principal is authorized to create, delete, and otherwise manage a Key Vault resource (sometimes referred to as "management plane" operations). Such roles are most often granted only to administrators.
-
- To assign and manage roles, see the following articles:
-
- - [Azure portal](../../role-based-access-control/role-assignments-portal.md)
- - [Azure CLI](../../role-based-access-control/role-assignments-cli.md)
- - [Azure PowerShell](../../role-based-access-control/role-assignments-powershell.md)
-
- For general information on roles, see [What is Azure role-based access control (Azure RBAC)?](../../role-based-access-control/overview.md).
--
-> [!IMPORTANT]
-> For greatest security, always follow the principal of least privilege and grant only the most specific access policies and roles that are necessary.
-
## Configure the Key Vault firewall By default, Key Vault allows access to resources through public IP addresses. For greater security, you can also restrict access to specific IP ranges, service endpoints, virtual networks, or private endpoints. For more information, see [Access Azure Key Vault behind a firewall](./access-behind-firewall.md).
+## The Key Vault request operation flow with authentication
-## The Key Vault authentication flow
+Key Vault authentication occurs as part of every request operation on Key Vault. Once a token is retrieved, it can be reused for subsequent calls. Here's an example authentication flow:
-1. A service principal requests to authenticate with Azure AD, for example:
+1. A security principal requests a token from Azure AD, for example:
+ * An Azure resource such as a virtual machine or App Service application with a managed identity contacts the Azure Instance Metadata Service (IMDS) REST endpoint to get an access token.
* A user logs into the Azure portal using a username and password.
- * An application invokes an Azure REST API, presenting a client ID and secret or a client certificate.
- * An Azure resource such as a virtual machine with a managed identity contacts the [Azure Instance Metadata Service (IMDS)](../../virtual-machines/windows/instance-metadata-service.md) REST endpoint to get an access token.
-1. If authentication with Azure AD is successful, the service principal is granted an OAuth token.
+1. If authentication with Azure AD is successful, the security principal is granted an OAuth token.
-1. The service principal makes a call to the Key Vault REST API through the Key Vault's endpoint (URI).
+1. The security principal calls the Key Vault REST API through the Key Vault's endpoint (URI).
1. Key Vault Firewall checks the following criteria. If any criterion is met, the call is allowed. Otherwise the call is blocked and a forbidden response is returned.
For more information, see [Access Azure Key Vault behind a firewall](./access-be
* The caller is listed in the firewall by IP address, virtual network, or service endpoint. * The caller can reach Key Vault over a configured private link connection.
-1. If the firewall allows the call, Key Vault calls Azure AD to validate the service principal's access token.
+1. If the firewall allows the call, Key Vault calls Azure AD to validate the security principal's access token.
-1. Key Vault checks if the service principal has the necessary access policy for the requested operation. If not, Key Vault returns a forbidden response.
+1. Key Vault checks if the security principal has the necessary permission for the requested operation. If not, Key Vault returns a forbidden response.
1. Key Vault carries out the requested operation and returns the result.
The following diagram illustrates the process for an application calling a Key V
> [!NOTE] > Key Vault SDK clients for secrets, certificates, and keys make an additional call to Key Vault without access token, which results in 401 response to retrieve tenant information. For more information see [Authentication, requests and responses](authentication-requests-and-responses.md)
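To make the flow concrete, here is a minimal Python sketch (not an official sample) that performs the steps explicitly: it obtains an Azure AD access token and then presents it to the Key Vault REST API. The vault name, secret name, and API version are illustrative placeholders.

```python
import requests
from azure.identity import DefaultAzureCredential

# Steps 1-2: the security principal requests and receives an OAuth token from Azure AD.
credential = DefaultAzureCredential()
token = credential.get_token("https://vault.azure.net/.default")

# Step 3 onward: call the Key Vault endpoint, presenting the token for validation.
response = requests.get(
    "https://<your-vault-name>.vault.azure.net/secrets/<secret-name>?api-version=7.2",
    headers={"Authorization": f"Bearer {token.token}"},
)
print(response.status_code, response.json())
```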
-## Code examples
+## Authentication to Key Vault in application code
+
+The Key Vault SDK uses the Azure Identity client library, which allows seamless authentication to Key Vault across environments with the same code.
-The following table links to different articles that demonstrate how to work with Key Vault in application code using the Azure SDK libraries for the language in question. Other interfaces such as the Azure CLI and the Azure portal are included for convenience.
+**Azure Identity client libraries**
-| Key Vault Secrets | Key Vault Keys | Key Vault Certificates |
-| | | |
-| [Python](../secrets/quick-create-python.md) | [Python](../keys/quick-create-python.md) | [Python](../certificates/quick-create-python.md) |
-| [.NET](../secrets/quick-create-net.md) | [.NET](../keys/quick-create-net.md) | [.NET](../certificates/quick-create-net.md) |
-| [Java](../secrets/quick-create-java.md) | [Java](../keys/quick-create-java.md) | [Java](../certificates/quick-create-java.md) |
-| [JavaScript](../secrets/quick-create-node.md) | [JavaScript](../keys/quick-create-node.md) | [JavaScript](../certificates/quick-create-node.md) |
-| [Azure portal](../secrets/quick-create-portal.md) | [Azure portal](../keys/quick-create-portal.md) | [Azure portal](../certificates/quick-create-portal.md) |
-| [Azure CLI](../secrets/quick-create-cli.md) | [Azure CLI](../keys/quick-create-cli.md) | [Azure CLI](../certificates/quick-create-cli.md) |
-| [Azure PowerShell](../secrets/quick-create-powershell.md) | [Azure PowerShell](../keys/quick-create-powershell.md) | [Azure PowerShell](../certificates/quick-create-powershell.md) |
-| [ARM template](../secrets/quick-create-net.md) | -- | -- |
+| .NET | Python | Java | JavaScript |
+|--|--|--|--|
+|[Azure Identity SDK .NET](/dotnet/api/overview/azure/identity-readme)|[Azure Identity SDK Python](/python/api/overview/azure/identity-readme)|[Azure Identity SDK Java](/java/api/overview/azure/identity-readme)|[Azure Identity SDK JavaScript](/javascript/api/overview/azure/identity-readme)|
+
+For more information about best practices and developer examples, see [Authenticate to Key Vault in code](developers-guide.md#authenticate-to-key-vault-in-code).
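As a brief illustration of this pattern, here is a minimal sketch using the Python libraries from the table above; the vault URL and secret name are placeholders:

```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

# DefaultAzureCredential picks an appropriate mechanism for the environment:
# managed identity when running in Azure, developer credentials locally.
credential = DefaultAzureCredential()
client = SecretClient(
    vault_url="https://<your-vault-name>.vault.azure.net/",
    credential=credential,
)
print(client.get_secret("<secret-name>").value)
```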
## Next Steps
+- [Key Vault developer's guide](developers-guide.md)
+- [Assign a Key Vault access policy using the Azure portal](assign-access-policy-portal.md)
+- [Assign Azure RBAC role to Key Vault](rbac-guide.md)
- [Key Vault access policy troubleshooting](troubleshooting-access-issues.md) - [Key Vault REST API error codes](rest-error-codes.md)-- [Key Vault developer's guide](developers-guide.md)+ - [What is Azure role-based access control (Azure RBAC)?](../../role-based-access-control/overview.md)
lab-services Administrator Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/lab-services/administrator-guide.md
When you're assigning roles, it helps to follow these tips:
- To give educators the ability to create new labs and manage the labs that they create, you need only assign them the Lab Creator role. - To give educators the ability to manage specific labs, but *not* the ability to create new labs, assign them either the Owner or Contributor role for each lab that they'll manage. For example, you might want to allow a professor and a teaching assistant to co-own a lab. For more information, see [Add Owners to a lab](./how-to-add-user-lab-owner.md).
+## Content filtering
+
+Your school may need to do content filtering to prevent students from accessing inappropriate websites, for example, to comply with the [Children's Internet Protection Act (CIPA)](https://www.fcc.gov/consumers/guides/childrens-internet-protection-act). Lab Services doesn't offer built-in support for content filtering.
+
+There are two approaches that schools typically consider for content filtering:
+- Configure a firewall to filter content at the network level.
+- Install 3rd party content filtering software directly on each computer.
+
+The first approach isn't currently supported by Lab Services. Lab Services hosts each lab's virtual network within a Microsoft-managed Azure subscription. As a result, you don't have access to the underlying virtual network to do content filtering at the network level. For more information on Lab Services' architecture, read the article [Architecture Fundamentals](./classroom-labs-fundamentals.md).
+
+Instead, we recommend the second approach, which is to install 3rd party software on each lab's template VM. There are a few key points to highlight as part of this solution:
+- If you plan to use the [auto-shutdown settings](./cost-management-guide.md#automatic-shutdown-settings-for-cost-control), you will need to unblock several Azure host names with the 3rd party software. The auto-shutdown settings use a diagnostic extension that must be able to communicate back to Lab Services. Otherwise, auto-shutdown can't be enabled for the lab.
+- You may also want to have each student use a non-admin account on their VM so that they can't uninstall the content filtering software. By default, Lab Services creates an admin account that each student uses to sign into their VM. It is possible to add a non-admin account using a specialized image, but there are some known limitations.
+
+If your school needs to do content filtering, contact us via the [Azure Lab Services' forums](https://techcommunity.microsoft.com/t5/azure-lab-services/bd-p/AzureLabServices) for more information.
+ ## Pricing ### Azure Lab Services To learn about pricing, see [Azure Lab Services pricing](https://azure.microsoft.com/pricing/details/lab-services/). - ### Shared Image Gallery You also need to consider the pricing for the Shared Image Gallery service if you plan to use shared image galleries for storing and managing image versions.
lab-services Class Type Pltw https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/lab-services/class-type-pltw.md
Last updated 10/28/2020
- **Computer Science A**
- Students expand their programming competence in this class by learning mobile app development. In this class, they learn [Java](https://www.java.com/) by using the [Microsoft Visual Studio Code development environment](https://code.visualstudio.com/). Students also use an emulator that allows them to run and test their mobile app code. For information about how to set up an emulator in Azure Lab Services, [contact Azure Lab Services](mailto:AzLabsCOVIDSupport@microsoft.com).
+ Students expand their programming competence in this class by learning mobile app development. In this class, they learn [Java](https://www.java.com/) by using the [Microsoft Visual Studio Code development environment](https://code.visualstudio.com/). Students also use an emulator that allows them to run and test their mobile app code. For information about how to set up an emulator in Azure Lab Services, contact us via the [Azure Lab Services' forums](https://techcommunity.microsoft.com/t5/azure-lab-services/bd-p/AzureLabServices).
For a full list of class software, go to the [PLTW site](https://www.pltw.org/pltw-software) for each class.
As you follow this recommendation, note the major tasks for setting up a lab:
1. Finally, publish the template VM to create the studentsΓÇÖ VMs.
+> [!NOTE]
+> If your school needs to perform content filtering, such as for compliance with the [Children's Internet Protection Act (CIPA)](https://www.fcc.gov/consumers/guides/childrens-internet-protection-act), you will need to use 3rd party software. For more information, read guidance on [content filtering with Lab Services](./administrator-guide.md#content-filtering).
+ ## Student devices Students can connect to their lab VMs from Windows computers, Mac, and Chromebook. For instructions, see:
lab-services How To Configure Firewall Settings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/lab-services/how-to-configure-firewall-settings.md
Each lab uses single public IP address and multiple ports. All VMs, both the te
>[!IMPORTANT] >Each lab will have a different public IP address.
+> [!NOTE]
+> If your school needs to perform content filtering, such as for compliance with the [Children's Internet Protection Act (CIPA)](https://www.fcc.gov/consumers/guides/childrens-internet-protection-act), you will need to use 3rd party software. For more information, read guidance on [content filtering with Lab Services](./administrator-guide.md#content-filtering).
+ ## Find public IP for a lab The public IP addresses for each lab are listed in the **All labs** page of the Lab Services lab account. For directions how to find the **All labs** page, see [View labs in a lab account](manage-labs.md#view-labs-in-a-lab-account).
lab-services How To Connect Peer Virtual Network https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/lab-services/how-to-connect-peer-virtual-network.md
Certain on-premises networks are connected to Azure Virtual Network either throu
> [!NOTE] > When creating a Azure Virtual Network that will be peered with a lab account, it's important to understand how the virtual network's region impacts where labs are created. For more information, see the administrator guide's section on [regions\locations](./administrator-guide.md#regionslocations).
+> [!NOTE]
+> If your school needs to perform content filtering, such as for compliance with the [Children's Internet Protection Act (CIPA)](https://www.fcc.gov/consumers/guides/childrens-internet-protection-act), you will need to use 3rd party software. For more information, read guidance on [content filtering with Lab Services](./administrator-guide.md#content-filtering).
+ ## Configure at the time of lab account creation During the new [lab account creation](tutorial-setup-lab-account.md), you can pick an existing virtual network that shows in the **Peer virtual network** dropdown list on the **Advanced** tab. The list will only show virtual networks in the same region as the lab account. The selected virtual network is connected (peered) to labs created under the lab account. All the virtual machines in labs that are created after the making this change will have access to the resources on the peered virtual network.
load-balancer Quickstart Load Balancer Standard Public Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-balancer/quickstart-load-balancer-standard-public-cli.md
Use [az network public-ip create](/cli/azure/network/public-ip#az_network_public
--sku Standard ```
-To create a zonal redundant public IP address in Zone 1:
+To create a zonal public IP address in Zone 1:
```azurecli-interactive az network public-ip create \
In this quickstart
To learn more about Azure Load Balancer, continue to: > [!div class="nextstepaction"]
-> [What is Azure Load Balancer?](load-balancer-overview.md)
+> [What is Azure Load Balancer?](load-balancer-overview.md)
machine-learning Concept Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/concept-compute-instance.md
A compute instance is a fully managed cloud-based workstation optimized for your
|Preconfigured&nbsp;for&nbsp;ML|Save time on setup tasks with pre-configured and up-to-date ML packages, deep learning frameworks, GPU drivers.| |Fully customizable|Broad support for Azure VM types including GPUs and persisted low-level customization such as installing packages and drivers makes advanced scenarios a breeze. |
-You can [create a compute instance](how-to-create-manage-compute-instance.md?tabs=python#create) yourself, or an administrator can [create a compute instance for you](how-to-create-manage-compute-instance.md?tabs=python#create-on-behalf-of-preview).
+You can [create a compute instance](how-to-create-manage-compute-instance.md?tabs=python#create) yourself, or an administrator can [create a compute instance for you](how-to-create-manage-compute-instance.md?tabs=python#on-behalf).
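For the Python path, a minimal sketch with the `azureml-core` SDK looks like the following; the instance name and VM size are illustrative assumptions:

```python
from azureml.core import Workspace
from azureml.core.compute import ComputeInstance, ComputeTarget

ws = Workspace.from_config()

# Provision a compute instance in the workspace.
config = ComputeInstance.provisioning_configuration(
    vm_size="STANDARD_DS3_V2",
    ssh_public_access=False,  # SSH is disabled by default
)
instance = ComputeTarget.create(ws, "my-instance", config)
instance.wait_for_completion(show_output=True)
```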
## <a name="contents"></a>Tools and environments
In your workspace in Azure Machine Learning studio, select **Compute**, then sel
![Manage a compute instance](./media/concept-compute-instance/manage-compute-instance.png)
-You can perform the following actions:
-
-* [Create a compute instance](#create).
-* Refresh the compute instances tab.
-* Start, stop, and restart a compute instance. You do pay for the instance whenever it is running. Stop the compute instance when you are not using it to reduce cost. Stopping a compute instance deallocates it. Then start it again when you need it. Please note stopping the compute instance stops the billing for compute hours but you will still be billed for disk, public IP, and standard load balancer.
-* Delete a compute instance.
-* Filter the list of compute instanced to show only those you have created.
-
-For each compute instance in your workspace that you can use, you can:
-
-* Access Jupyter, JupyterLab, RStudio on the compute instance
-* SSH into compute instance. SSH access is disabled by default but can be enabled at compute instance creation time. SSH access is through public/private key mechanism. The tab will give you details for SSH connection such as IP address, username, and port number.
-* Get details about a specific compute instance such as IP address, and region.
-
-[Azure RBAC](../role-based-access-control/overview.md) allows you to control which users in the workspace can create, delete, start, stop, restart a compute instance. All users in the workspace contributor and owner role can create, delete, start, stop, and restart compute instances across the workspace. However, only the creator of a specific compute instance, or the user assigned if it was created on their behalf, is allowed to access Jupyter, JupyterLab, and RStudio on that compute instance. A compute instance is dedicated to a single user who has root access, and can terminal in through Jupyter/JupyterLab/RStudio. Compute instance will have single-user log in and all actions will use that user's identity for Azure RBAC and attribution of experiment runs. SSH access is controlled through public/private key mechanism.
-
-These actions can be controlled by Azure RBAC:
-* *Microsoft.MachineLearningServices/workspaces/computes/read*
-* *Microsoft.MachineLearningServices/workspaces/computes/write*
-* *Microsoft.MachineLearningServices/workspaces/computes/delete*
-* *Microsoft.MachineLearningServices/workspaces/computes/start/action*
-* *Microsoft.MachineLearningServices/workspaces/computes/stop/action*
-* *Microsoft.MachineLearningServices/workspaces/computes/restart/action*
-
-To create a compute instance you need to have permissions for the following actions:
-* *Microsoft.MachineLearningServices/workspaces/computes/write*
-* *Microsoft.MachineLearningServices/workspaces/checkComputeNameAvailability/action*
-
+For more information about managing the compute instance, see [Create and manage an Azure Machine Learning compute instance](how-to-create-manage-compute-instance.md).
### <a name="create"></a>Create a compute instance
-In your workspace in Azure Machine Learning studio, [create a new compute instance](how-to-create-attach-compute-studio.md#compute-instance) from either the **Compute** section or in the **Notebooks** section when you are ready to run one of your notebooks.
+As an administrator, you can [create a compute instance for others in the workspace (preview)](how-to-create-manage-compute-instance.md#on-behalf). You can also [use a setup script (preview)](how-to-create-manage-compute-instance.md#setup-script) for an automated way to customize and configure the compute instance.
+
+To create a compute instance for yourself, go to your workspace in Azure Machine Learning studio and [create a new compute instance](how-to-create-attach-compute-studio.md#compute-instance) from either the **Compute** section or the **Notebooks** section when you are ready to run one of your notebooks.
You can also create an instance * Directly from the [integrated notebooks experience](tutorial-train-models-with-aml.md#azure)
The dedicated cores per region per VM family quota and total regional quota, whi
Compute instance comes with P10 OS disk. Temp disk type depends on the VM size chosen. Currently, it is not possible to change the OS disk type.
-### Create on behalf of (preview)
-
-As an administrator, you can create a compute instance on behalf of a data scientist and assign the instance to them with:
-* [Azure Resource Manager template](https://github.com/Azure/azure-quickstart-templates/tree/master/101-machine-learning-compute-create-computeinstance). For details on how to find the TenantID and ObjectID needed in this template, see [Find identity object IDs for authentication configuration](../healthcare-apis/fhir/find-identity-object-ids.md). You can also find these values in the Azure Active Directory portal.
-* REST API
-
-The data scientist you create the compute instance for needs the following Azure RBAC permissions:
-* *Microsoft.MachineLearningServices/workspaces/computes/start/action*
-* *Microsoft.MachineLearningServices/workspaces/computes/stop/action*
-* *Microsoft.MachineLearningServices/workspaces/computes/restart/action*
-* *Microsoft.MachineLearningServices/workspaces/computes/applicationaccess/action*
-
-The data scientist can start, stop, and restart the compute instance. They can use the compute instance for:
-* Jupyter
-* JupyterLab
-* RStudio
-* Integrated notebooks
- ## Compute target Compute instances can be used as a [training compute target](concept-compute-target.md#train) similar to Azure Machine Learning compute training clusters.
machine-learning How To Create Manage Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-create-manage-compute-instance.md
You can also create a compute instance with an [Azure Resource Manager template]
-## Create on behalf of (preview)
+## <a name="on-behalf"></a> Create on behalf of (preview)
As an administrator, you can create a compute instance on behalf of a data scientist and assign the instance to them with:
Logs from the setup script execution appear in the logs folder in the compute in
## Manage
-Start, stop, restart, and delete a compute instance. A compute instance does not automatically scale down, so make sure to stop the resource to prevent ongoing charges.
+Start, stop, restart, and delete a compute instance. A compute instance doesn't automatically scale down, so make sure to stop the resource to prevent ongoing charges. Stopping a compute instance deallocates it; start it again when you need it. While stopping the compute instance stops the billing for compute hours, you'll still be billed for the disk, public IP, and standard load balancer.
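+As a scripted alternative to the studio, the instance can be stopped and started from the CLI. This is a minimal sketch that assumes the legacy `azure-cli-ml` extension is installed; resource names are placeholders, and command groups may differ across CLI versions:
+
+```azurecli
+# Stop (deallocate) the compute instance to pause compute-hour billing
+az ml computetarget computeinstance stop --name myinstance --workspace-name myworkspace --resource-group myresourcegroup
+
+# Start it again when needed
+az ml computetarget computeinstance start --name myinstance --workspace-name myworkspace --resource-group myresourcegroup
+```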
> [!TIP] > The compute instance has a 120-GB OS disk. If you run out of disk space, [use the terminal](how-to-access-terminal.md) to clear at least 1-2 GB before you stop or restart the compute instance.
For each compute instance in your workspace that you created (or that was create
-
-[Azure RBAC](../role-based-access-control/overview.md) allows you to control which users in the workspace can create, delete, start, stop, restart a compute instance. All users in the workspace contributor and owner role can create, delete, start, stop, and restart compute instances across the workspace. However, only the creator of a specific compute instance, or the user assigned if it was created on their behalf, is allowed to access Jupyter, JupyterLab, and RStudio on that compute instance. A compute instance is dedicated to a single user who has root access, and can terminal in through Jupyter/JupyterLab/RStudio. Compute instance will have single-user sign in and all actions will use that user's identity for Azure RBAC and attribution of experiment runs. SSH access is controlled through public/private key mechanism.
+[Azure RBAC](../role-based-access-control/overview.md) allows you to control which users in the workspace can create, delete, start, stop, and restart a compute instance. All users in the workspace contributor and owner roles can create, delete, start, stop, and restart compute instances across the workspace. However, only the creator of a specific compute instance, or the user assigned if it was created on their behalf, is allowed to access Jupyter, JupyterLab, and RStudio on that compute instance. A compute instance is dedicated to a single user who has root access and can use the terminal through Jupyter/JupyterLab/RStudio. The compute instance has single-user sign-in, and all actions use that user's identity for Azure RBAC and attribution of experiment runs. SSH access is controlled through a public/private key mechanism.
These actions can be controlled by Azure RBAC: * *Microsoft.MachineLearningServices/workspaces/computes/read*
These actions can be controlled by Azure RBAC:
* *Microsoft.MachineLearningServices/workspaces/computes/stop/action* * *Microsoft.MachineLearningServices/workspaces/computes/restart/action*
+To create a compute instance you need to have permissions for the following actions:
+* *Microsoft.MachineLearningServices/workspaces/computes/write*
+* *Microsoft.MachineLearningServices/workspaces/checkComputeNameAvailability/action*
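+For illustration, these actions could be granted through a custom role. The role name, file name, and subscription ID below are placeholders, not values from this article:
+
+```azurecli
+# compute-instance-creator.json (hypothetical file):
+# {
+#   "Name": "Compute Instance Creator",
+#   "Description": "Can create compute instances in a workspace.",
+#   "Actions": [
+#     "Microsoft.MachineLearningServices/workspaces/computes/write",
+#     "Microsoft.MachineLearningServices/workspaces/checkComputeNameAvailability/action"
+#   ],
+#   "AssignableScopes": ["/subscriptions/<subscription-id>"]
+# }
+az role definition create --role-definition @compute-instance-creator.json
+```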
++ ## Next steps * [Access the compute instance terminal](how-to-access-terminal.md)
machine-learning How To Custom Dns https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-custom-dns.md
When using an Azure Machine Learning workspace with a private endpoint, there are [several ways to handle DNS name resolution](../private-link/private-endpoint-dns.md). By default, Azure automatically handles name resolution for your workspace and private endpoint. If you instead __use your own custom DNS server__, you must manually create DNS entries or use conditional forwarders for the workspace. > [!IMPORTANT]
-> This article only covers how to find the fully qualified domain name (FQDN) and IP addresses for these entries it does NOT provide information on configuring the DNS records for these items. Consult the documentation for your DNS software for information on how to add records.
+> This article covers how to find the fully qualified domain names (FQDNs) and IP addresses for these entries if you want to manually register DNS records in your DNS solution. It also provides architecture recommendations for configuring your custom DNS solution to automatically resolve FQDNs to the correct IP addresses. This article does NOT provide information on configuring the DNS records for these items in specific DNS software; consult the documentation for your DNS software for information on how to add records.
## Prerequisites
When using an Azure Machine Learning workspace with a private endpoint, there ar
- Familiarity with [Azure Private Endpoint DNS zone configuration](../private-link/private-endpoint-dns.md)
+- Familiarity with [Azure Private DNS](/azure/dns/private-dns-privatednszone)
+ - Optionally, [Azure CLI](/cli/azure/install-azure-cli) or [Azure PowerShell](/powershell/azure/install-az-ps).
-## Public regions
+## Automated DNS server integration
+
+### Introduction
+
+There are two common architectures to use automated DNS server integration with Azure Machine Learning:
+
+* A custom [DNS server hosted in an Azure Virtual Network](#dns-vnet).
+* A custom [DNS server hosted on-premises](#dns-on-premises), connected to Azure Machine Learning through ExpressRoute.
+
+While your architecture may differ from these examples, you can use them as a reference point. Both example architectures provide troubleshooting steps that can help you identify components that may be misconfigured.
+
+### Workspace DNS resolution path
+
+Access to a given Azure Machine Learning workspace via Private Link is done by communicating with the following fully qualified domain names (called the workspace FQDNs):
+
+**Azure Public regions**:
+- ```<per-workspace globally-unique identifier>.workspace.<region the workspace was created in>.api.azureml.ms```
+- ```<per-workspace globally-unique identifier>.workspace.<region the workspace was created in>.cert.api.azureml.ms```
+- ```<compute instance name>.<region the workspace was created in>.instances.azureml.ms```
+- ```ml-<workspace-name, truncated>-<region>-<per-workspace globally-unique identifier>.notebooks.azure.net```
+
+**Azure China 21Vianet regions**:
+- ```<per-workspace globally-unique identifier>.workspace.<region the workspace was created in>.api.ml.azure.cn```
+- ```<per-workspace globally-unique identifier>.workspace.<region the workspace was created in>.cert.api.ml.azure.cn```
+- ```<compute instance name>.<region the workspace was created in>.instances.ml.azure.cn```
+- ```ml-<workspace-name, truncated>-<region>-<per-workspace globally-unique identifier>.notebooks.chinacloudapi.cn```
+
+**Azure US Government regions**:
+- ```<per-workspace globally-unique identifier>.workspace.<region the workspace was created in>.api.ml.azure.us```
+- ```<per-workspace globally-unique identifier>.workspace.<region the workspace was created in>.cert.api.ml.azure.us```
+- ```<compute instance name>.<region the workspace was created in>.instances.ml.azure.us```
+- ```ml-<workspace-name, truncated>-<region>-<per-workspace globally-unique identifier>.notebooks.usgovcloudapi.net```
+
+The workspace FQDNs resolve to the following Canonical Names (CNAMEs), called the workspace Private Link FQDNs:
+
+**Azure Public regions**:
+- ```<per-workspace globally-unique identifier>.workspace.<region the workspace was created in>.privatelink.api.azureml.ms```
+- ```ml-<workspace-name, truncated>-<region>-<per-workspace globally-unique identifier>.privatelink.notebooks.azure.net```
+
+**Azure China regions**:
+- ```<per-workspace globally-unique identifier>.workspace.<region the workspace was created in>.privatelink.api.ml.azure.cn```
+- ```ml-<workspace-name, truncated>-<region>-<per-workspace globally-unique identifier>.privatelink.notebooks.chinacloudapi.cn```
+
+**Azure US Government regions**:
+- ```<per-workspace globally-unique identifier>.workspace.<region the workspace was created in>.privatelink.api.ml.azure.us```
+- ```ml-<workspace-name, truncated>-<region>-<per-workspace globally-unique identifier>.privatelink.notebooks.usgovcloudapi.net```
+
+The FQDNs resolve to the IP addresses of the Azure Machine Learning workspace in that region. However, resolution of the workspace Private Link FQDNs is overridden when resolving with the Azure DNS Virtual Server IP address in a Virtual Network linked to the Private DNS Zones described in this article.
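+For example, resolving one of the workspace FQDNs from a machine that is not linked to the Private DNS Zones surfaces the Private Link CNAME. The GUID below is the example value used later in this article, and exact `nslookup` output varies by operating system:
+
+```
+nslookup fb7e20a0-8891-458b-b969-55ddb3382f51.workspace.eastus.api.azureml.ms
+```
+
+The answer includes an alias (CNAME) such as `fb7e20a0-8891-458b-b969-55ddb3382f51.workspace.eastus.privatelink.api.azureml.ms`, which is the name the Private DNS Zones override.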
+
+## Manual DNS server integration
-The following list contains the fully qualified domain names (FQDN) used by your workspace if it is in a public region::
+This section discusses which fully qualified domain names to create A records for in a DNS server, and which IP address to set as the value of each A record.
+
+### Retrieve Private Endpoint FQDNs
+
+#### Azure Public region
+
+The following list contains the fully qualified domain names (FQDNs) used by your workspace if it is in the Azure Public Cloud:
* `<workspace-GUID>.workspace.<region>.cert.api.azureml.ms` * `<workspace-GUID>.workspace.<region>.api.azureml.ms` * `ml-<workspace-name, truncated>-<region>-<workspace-guid>.notebooks.azure.net` > [!NOTE]
- > The workspace name for this FQDN may be truncated. Truncation is done to keep `ml-<workspace-name, truncated>-<region>-<workspace-guid>` 63 characters.
+ > The workspace name for this FQDN may be truncated. Truncation is done to keep `ml-<workspace-name, truncated>-<region>-<workspace-guid>` at 63 characters or less.
* `<instance-name>.<region>.instances.azureml.ms` > [!NOTE] > * Compute instances can be accessed only from within the virtual network. > * The IP address for this FQDN is **not** the IP of the compute instance. Instead, use the private IP address of the workspace private endpoint (the IP of the `*.api.azureml.ms` entries.)
-## Azure China 21Vianet regions
+#### Azure China region
-The following FQDNs are for Azure China 21Vianet regions:
+The following FQDNs are for Azure China regions:
* `<workspace-GUID>.workspace.<region>.cert.api.ml.azure.cn` * `<workspace-GUID>.workspace.<region>.api.ml.azure.cn` * `ml-<workspace-name, truncated>-<region>-<workspace-guid>.notebooks.chinacloudapi.cn` > [!NOTE]
- > The workspace name for this FQDN may be truncated. Truncation is done to keep `ml-<workspace-name, truncated>-<region>-<workspace-guid>` 63 characters.
+ > The workspace name for this FQDN may be truncated. Truncation is done to keep `ml-<workspace-name, truncated>-<region>-<workspace-guid>` at 63 characters or less.
+
* `<instance-name>.<region>.instances.ml.azure.cn`
-## Find the IP addresses
+
+ * The IP address for this FQDN is **not** the IP of the compute instance. Instead, use the private IP address of the workspace private endpoint (the IP of the `*.api.ml.azure.cn` entries.)
+
+#### Azure US Government
+
+The following FQDNs are for Azure US Government regions:
+
+* `<workspace-GUID>.workspace.<region>.cert.api.ml.azure.us`
+* `<workspace-GUID>.workspace.<region>.api.ml.azure.us`
+* `ml-<workspace-name, truncated>-<region>-<workspace-guid>.notebooks.usgovcloudapi.net`
+
+ > [!NOTE]
+ > The workspace name for this FQDN may be truncated. Truncation is done to keep `ml-<workspace-name, truncated>-<region>-<workspace-guid>` at 63 characters or less.
+* `<instance-name>.<region>.instances.ml.azure.us`
+ > * The IP address for this FQDN is **not** the IP of the compute instance. Instead, use the private IP address of the workspace private endpoint (the IP of the `*.api.ml.azure.us` entries.)
+
+### Find the IP addresses
To find the internal IP addresses for the FQDNs in the VNet, use one of the following methods:
$workspaceDns.CustomDnsConfigs | format-table
-The information returned from all methods is the same; a list of the FQDN and private IP address for the resources. The following example is from a global Azure region:
+The information returned from all methods is the same: a list of the FQDNs and private IP addresses for the resources. The following example is from the Azure Public Cloud:
| FQDN | IP Address | | -- | -- | | `fb7e20a0-8891-458b-b969-55ddb3382f51.workspace.eastus.api.azureml.ms` | `10.1.0.5` |
+| `fb7e20a0-8891-458b-b969-55ddb3382f51.workspace.eastus.cert.api.azureml.ms` | `10.1.0.5` |
| `ml-myworkspace-eastus-fb7e20a0-8891-458b-b969-55ddb3382f51.notebooks.azure.net` | `10.1.0.6` |
-> [!IMPORTANT]
-> Some FQDNs are not shown in listed by the private endpoint, but are required by the workspace in eastus, southcentralus and westus2. These FQDNs are listed in the following table, and must also be added to your DNS server and/or an Azure Private DNS Zone:
->
-> * `<workspace-GUID>.workspace.<region>.cert.api.azureml.ms`
-> * `<workspace-GUID>.workspace.<region>.experiments.azureml.net`
-> * `<workspace-GUID>.workspace.<region>.modelmanagement.azureml.net`
-> * `<workspace-GUID>.workspace.<region>.aether.ms`
-> * If you have a compute instance, use `<instance-name>.<region>.instances.azureml.ms`, where `<instance-name>` is the name of your compute instance. Use the private IP address of workspace private endpoint. The compute instance can be accessed only from within the virtual network.
->
-> For all of these IP address, use the same address as the `*.api.azureml.ms` entries returned from the previous steps.
-
-The following table shows example IPs from Azure China 21Vianet regions:
+The following table shows example IPs from Azure China regions:
| FQDN | IP Address | | -- | -- | | `52882c08-ead2-44aa-af65-08a75cf094bd.workspace.chinaeast2.api.ml.azure.cn` | `10.1.0.5` |
+| `52882c08-ead2-44aa-af65-08a75cf094bd.workspace.chinaeast2.cert.api.ml.azure.cn` | `10.1.0.5` |
| `ml-mype-pltest-chinaeast2-52882c08-ead2-44aa-af65-08a75cf094bd.notebooks.chinacloudapi.cn` | `10.1.0.6` |
+The following table shows example IPs from Azure US Government regions:
+
+| FQDN | IP Address |
+| -- | -- |
+| `52882c08-ead2-44aa-af65-08a75cf094bd.workspace.usgovvirginia.api.ml.azure.us` | `10.1.0.5` |
+| `52882c08-ead2-44aa-af65-08a75cf094bd.workspace.usgovvirginia.cert.api.ml.azure.us` | `10.1.0.5` |
+| `ml-mype-plt-usgovvirginia-52882c08-ead2-44aa-af65-08a75cf094bd.notebooks.usgovcloudapi.net` | `10.1.0.6` |
+
+<a id='dns-vnet'></a>
+
+### Create A records in custom DNS server
+
+Once the list of FQDNs and corresponding IP addresses is gathered, create A records in the configured DNS server. Refer to the documentation for your DNS server to determine how to create A records. We recommend creating a unique zone for each entire FQDN and creating the A record in the root of that zone.
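+If your custom DNS solution happens to be an Azure Private DNS zone, a sketch of this pattern with Azure CLI looks like the following. The resource group is a placeholder, and the FQDN and IP address are the example values from the tables above:
+
+```azurecli
+# Create a zone named after the full workspace FQDN
+az network private-dns zone create --resource-group myresourcegroup \
+    --name fb7e20a0-8891-458b-b969-55ddb3382f51.workspace.eastus.api.azureml.ms
+
+# Create the A record at the root of the zone, pointing at the private endpoint IP
+az network private-dns record-set a add-record --resource-group myresourcegroup \
+    --zone-name fb7e20a0-8891-458b-b969-55ddb3382f51.workspace.eastus.api.azureml.ms \
+    --record-set-name "@" --ipv4-address 10.1.0.5
+```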
+
+## Example: Custom DNS Server hosted in VNet
+
+This architecture uses the common Hub and Spoke virtual network topology. One virtual network contains the DNS server and one contains the private endpoint to the Azure Machine Learning workspace and associated resources. There must be a valid route between the two virtual networks, for example through a series of peered virtual networks.
++
+The following steps describe how this topology works:
+
+1. **Create Private DNS Zone and link to DNS Server Virtual Network**:
+
+ The first step in ensuring a Custom DNS solution works with your Azure Machine Learning workspace is to create two Private DNS Zones rooted at the following domains:
+
+ **Azure Public regions**:
+ - ```privatelink.api.azureml.ms```
+ - ```privatelink.notebooks.azure.net```
+
+ **Azure China regions**:
+ - ```privatelink.api.ml.azure.cn```
+ - ```privatelink.notebooks.chinacloudapi.cn```
+
+ **Azure US Government regions**:
+ - ```privatelink.api.ml.azure.us```
+ - ```privatelink.notebooks.usgovcloudapi.net```
+
+    Following creation of the Private DNS Zones, link them to the DNS Server Virtual Network: the Virtual Network that contains the DNS Server.
+
+ A Private DNS Zone overrides name resolution for all names within the scope of the root of the zone. This override applies to all Virtual Networks the Private DNS Zone is linked to. For example, if a Private DNS Zone rooted at `privatelink.api.azureml.ms` is linked to Virtual Network foo, all resources in Virtual Network foo that attempt to resolve `bar.workspace.westus2.privatelink.api.azureml.ms` will receive any record that is listed in the `privatelink.api.azureml.ms` zone.
+
+    However, records listed in Private DNS Zones are only returned to devices resolving domains using the default Azure DNS Virtual Server IP address. The custom DNS Server resolves domains for devices spread throughout your network topology, but it needs to resolve Azure Machine Learning-related domains against the Azure DNS Virtual Server IP address.
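+    As a sketch using Azure CLI with placeholder names, the two zones for Azure Public regions could be created and linked to the DNS Server Virtual Network like this:
+
+    ```azurecli
+    az network private-dns zone create -g myresourcegroup -n privatelink.api.azureml.ms
+    az network private-dns zone create -g myresourcegroup -n privatelink.notebooks.azure.net
+
+    # Link each zone to the virtual network that contains the DNS server
+    az network private-dns link vnet create -g myresourcegroup -n api-link \
+        --zone-name privatelink.api.azureml.ms --virtual-network DnsServerVNet --registration-enabled false
+    az network private-dns link vnet create -g myresourcegroup -n notebooks-link \
+        --zone-name privatelink.notebooks.azure.net --virtual-network DnsServerVNet --registration-enabled false
+    ```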
+
+2. **Create private endpoint with private DNS integration targeting Private DNS Zone linked to DNS Server Virtual Network**:
+
+    The next step is to create a Private Endpoint to the Azure Machine Learning workspace with Private DNS integration enabled, targeting both Private DNS Zones created in step 1. This ensures that all communication with the workspace is done via the Private Endpoint in the Azure Machine Learning Virtual Network.
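+    As a sketch with placeholder names, attaching the zones from step 1 to an existing private endpoint can be done with a DNS zone group in Azure CLI:
+
+    ```azurecli
+    az network private-endpoint dns-zone-group create -g myresourcegroup \
+        --endpoint-name myworkspace-pe --name myzonegroup \
+        --private-dns-zone privatelink.api.azureml.ms --zone-name privatelink-api-azureml-ms
+    az network private-endpoint dns-zone-group add -g myresourcegroup \
+        --endpoint-name myworkspace-pe --name myzonegroup \
+        --private-dns-zone privatelink.notebooks.azure.net --zone-name privatelink-notebooks-azure-net
+    ```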
+
+3. **Create conditional forwarder in DNS Server to forward to Azure DNS**:
+
+ Next, create a conditional forwarder to the Azure DNS Virtual Server. The conditional forwarder ensures that the DNS server always queries the Azure DNS Virtual Server IP address for FQDNs related to your workspace. This means that the DNS Server will return the corresponding record from the Private DNS Zone.
+
+ The zones to conditionally forward are listed below. The Azure DNS Virtual Server IP address is 168.63.129.16:
+
+ **Azure Public regions**:
+    - ```privatelink.api.azureml.ms```
+    - ```privatelink.notebooks.azure.net```
+
+ **Azure China regions**:
+ - ```privatelink.api.ml.azure.cn```
+ - ```privatelink.notebooks.chinacloudapi.cn```
+
+ **Azure US Government regions**:
+ - ```privatelink.api.ml.azure.us```
+ - ```privatelink.notebooks.usgovcloudapi.net```
+
+ > [!IMPORTANT]
+ > Configuration steps for the DNS Server are not included here, as there are many DNS solutions available that can be used as a custom DNS Server. Refer to the documentation for your DNS solution for how to appropriately configure conditional forwarding.
+
+4. **Resolve workspace domain**:
+
+ At this point, all setup is done. Now any client that uses DNS Server for name resolution and has a route to the Azure Machine Learning Private Endpoint can proceed to access the workspace.
+    The client starts by querying DNS Server for the address of the following FQDNs:
+
+ **Azure Public regions**:
+ - ```<per-workspace globally-unique identifier>.workspace.<region the workspace was created in>.api.azureml.ms```
+    - ```ml-<workspace-name, truncated>-<region>-<per-workspace globally-unique identifier>.notebooks.azure.net```
+
+ **Azure China regions**:
+ - ```<per-workspace globally-unique identifier>.workspace.<region the workspace was created in>.api.ml.azure.cn```
+    - ```ml-<workspace-name, truncated>-<region>-<per-workspace globally-unique identifier>.notebooks.chinacloudapi.cn```
+
+ **Azure US Government regions**:
+ - ```<per-workspace globally-unique identifier>.workspace.<region the workspace was created in>.api.ml.azure.us```
+    - ```ml-<workspace-name, truncated>-<region>-<per-workspace globally-unique identifier>.notebooks.usgovcloudapi.net```
+
+5. **Public DNS responds with CNAME**:
+
+    DNS Server will proceed to resolve the FQDNs from step 4 via the Public DNS. The Public DNS will respond with one of the domains listed in the informational section in step 1.
+
+6. **DNS Server recursively resolves workspace domain CNAME record from Azure DNS**:
+
+    DNS Server will proceed to recursively resolve the CNAME received in step 5. Because a conditional forwarder was set up in step 3, DNS Server will send the request to the Azure DNS Virtual Server IP address for resolution.
+
+7. **Azure DNS returns records from Private DNS zone**:
+
+    The corresponding records stored in the Private DNS Zones will be returned to DNS Server, meaning the Azure DNS Virtual Server returns the IP addresses of the Private Endpoint.
+
+8. **Custom DNS Server resolves workspace domain name to private endpoint address**:
+
+ Ultimately the Custom DNS Server now returns the IP addresses of the Private Endpoint to the client from step 4. This ensures that all traffic to the Azure Machine Learning workspace is via the Private Endpoint.
+
+#### Troubleshooting
+
+If you cannot access the workspace from a virtual machine or jobs fail on compute resources in the virtual network, use the following steps to identify the cause:
+
+1. **Locate the workspace FQDNs on the Private Endpoint**:
+
+ Navigate to the Azure portal using one of the following links:
+ - [Azure Public regions](https://ms.portal.azure.com/?feature.privateendpointmanagedns=false)
+ - [Azure China regions](https://portal.azure.cn/?feature.privateendpointmanagedns=false)
+ - [Azure US Government regions](https://portal.azure.us/?feature.privateendpointmanagedns=false)
+
+    Navigate to the Private Endpoint to the Azure Machine Learning workspace. The workspace FQDNs will be listed on the **Overview** tab.
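+    The same FQDN list can also be retrieved with Azure CLI; the private endpoint name and resource group are placeholders:
+
+    ```azurecli
+    az network private-endpoint show -g myresourcegroup -n myworkspace-pe --query customDnsConfigs
+    ```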
+
+1. **Access compute resource in Virtual Network topology**:
+
+ Proceed to access a compute resource in the Azure Virtual Network topology. This will likely require accessing a Virtual Machine in a Virtual Network that is peered with the Hub Virtual Network.
+
+1. **Resolve workspace FQDNs**:
+
+ Open a command prompt, shell, or PowerShell. Then for each of the workspace FQDNs, run the following command:
+
+ ```nslookup <workspace FQDN>```
+
+    Each nslookup should return one of the two private IP addresses on the Private Endpoint to the Azure Machine Learning workspace. If it does not, then something is misconfigured in the custom DNS solution.
+
+ Possible causes:
+ - The compute resource running the troubleshooting commands is not using DNS Server for DNS resolution
+ - The Private DNS Zones chosen when creating the Private Endpoint are not linked to the DNS Server VNet
+ - Conditional forwarders to Azure DNS Virtual Server IP were not configured correctly
+
+<a id='dns-on-premises'></a>
+
+## Example: Custom DNS Server hosted on-premises
+
+This architecture uses the common Hub and Spoke virtual network topology. ExpressRoute is used to connect from your on-premises network to the Hub virtual network. The Custom DNS server is hosted on-premises. A separate virtual network contains the private endpoint to the Azure Machine Learning workspace and associated resources. With this topology, there needs to be another virtual network hosting a DNS server that can send requests to the Azure DNS Virtual Server IP address.
++
+The following steps describe how this topology works:
+
+1. **Create Private DNS Zone and link to DNS Server Virtual Network**:
+
+ The first step in ensuring a Custom DNS solution works with your Azure Machine Learning workspace is to create two Private DNS Zones rooted at the following domains:
+
+ **Azure Public regions**:
+    - ```privatelink.api.azureml.ms```
+    - ```privatelink.notebooks.azure.net```
+
+ **Azure China regions**:
+ - ```privatelink.api.ml.azure.cn```
+ - ```privatelink.notebooks.chinacloudapi.cn```
+
+ **Azure US Government regions**:
+ - ```privatelink.api.ml.azure.us```
+ - ```privatelink.notebooks.usgovcloudapi.net```
+
+    Following creation of the Private DNS Zones, link them to the DNS Server VNet: the Virtual Network that contains the DNS Server.
+
+ > [!NOTE]
+ > The DNS Server in the virtual network is separate from the On-premises DNS Server.
+
+ A Private DNS Zone overrides name resolution for all names within the scope of the root of the zone. This override applies to all Virtual Networks the Private DNS Zone is linked to. For example, if a Private DNS Zone rooted at `privatelink.api.azureml.ms` is linked to Virtual Network foo, all resources in Virtual Network foo that attempt to resolve `bar.workspace.westus2.privatelink.api.azureml.ms` will receive any record that is listed in the privatelink.api.azureml.ms zone.
+
+    However, records listed in Private DNS Zones are only returned to devices resolving domains using the default Azure DNS Virtual Server IP address. The Azure DNS Virtual Server IP address is only valid within the context of a Virtual Network, so an on-premises DNS server can't query the Azure DNS Virtual Server IP address to retrieve records.
+
+ To get around this behavior, create an intermediary DNS Server in a virtual network. This DNS server can query the Azure DNS Virtual Server IP address to retrieve records for any Private DNS Zone linked to the virtual network.
+
+ While the On-premises DNS Server will resolve domains for devices spread throughout your network topology, it will resolve Azure Machine Learning-related domains against the DNS Server. The DNS Server will resolve those domains from the Azure DNS Virtual Server IP address.
+
+2. **Create private endpoint with private DNS integration targeting Private DNS Zone linked to DNS Server Virtual Network**:
+
+    The next step is to create a Private Endpoint to the Azure Machine Learning workspace with Private DNS integration enabled, targeting both Private DNS Zones created in step 1. This ensures that all communication with the workspace is done via the Private Endpoint in the Azure Machine Learning Virtual Network.
+
+3. **Create conditional forwarder in DNS Server to forward to Azure DNS**:
+
+ Next, create a conditional forwarder to the Azure DNS Virtual Server. The conditional forwarder ensures that the DNS server always queries the Azure DNS Virtual Server IP address for FQDNs related to your workspace. This means that the DNS Server will return the corresponding record from the Private DNS Zone.
+
+ The zones to conditionally forward are listed below. The Azure DNS Virtual Server IP address is 168.63.129.16.
+
+ **Azure Public regions**:
+    - ```privatelink.api.azureml.ms```
+    - ```privatelink.notebooks.azure.net```
+
+ **Azure China regions**:
+ - ```privatelink.api.ml.azure.cn```
+ - ```privatelink.notebooks.chinacloudapi.cn```
+
+ **Azure US Government regions**:
+ - ```privatelink.api.ml.azure.us```
+ - ```privatelink.notebooks.usgovcloudapi.net```
+
+ > [!IMPORTANT]
+ > Configuration steps for the DNS Server are not included here, as there are many DNS solutions available that can be used as a custom DNS Server. Refer to the documentation for your DNS solution for how to appropriately configure conditional forwarding.
+
+4. **Create conditional forwarder in On-premises DNS Server to forward to DNS Server**:
+
+    Next, create a conditional forwarder to the DNS Server in the DNS Server Virtual Network. This forwarder is for the zones listed in step 1. This is similar to step 3, but instead of forwarding to the Azure DNS Virtual Server IP address, the On-premises DNS Server targets the IP address of the DNS Server. As the On-premises DNS Server is not in Azure, it can't directly resolve records in Private DNS Zones. In this case, the DNS Server proxies requests from the On-premises DNS Server to the Azure DNS Virtual Server IP. This allows the On-premises DNS Server to retrieve records in the Private DNS Zones linked to the DNS Server Virtual Network.
+
+ The zones to conditionally forward are listed below. The IP addresses to forward to are the IP addresses of your DNS Servers:
+
+ **Azure Public regions**:
+    - ```privatelink.api.azureml.ms```
+    - ```privatelink.notebooks.azure.net```
+
+ **Azure China regions**:
+ - ```privatelink.api.ml.azure.cn```
+ - ```privatelink.notebooks.chinacloudapi.cn```
+
+ **Azure US Government regions**:
+ - ```privatelink.api.ml.azure.us```
+ - ```privatelink.notebooks.usgovcloudapi.net```
+
+ > [!IMPORTANT]
+ > Configuration steps for the DNS Server are not included here, as there are many DNS solutions available that can be used as a custom DNS Server. Refer to the documentation for your DNS solution for how to appropriately configure conditional forwarding.
+
+5. **Resolve workspace domain**:
+
+ At this point, all setup is done. Any client that uses on-premises DNS Server for name resolution, and has a route to the Azure Machine Learning Private Endpoint, can proceed to access the workspace.
+
+    The client starts by querying the On-premises DNS Server for the address of the following FQDNs:
+
+ **Azure Public regions**:
+ - ```<per-workspace globally-unique identifier>.workspace.<region the workspace was created in>.api.azureml.ms```
+    - ```ml-<workspace-name, truncated>-<region>-<per-workspace globally-unique identifier>.notebooks.azure.net```
+
+ **Azure China regions**:
+ - ```<per-workspace globally-unique identifier>.workspace.<region the workspace was created in>.api.ml.azure.cn```
+    - ```ml-<workspace-name, truncated>-<region>-<per-workspace globally-unique identifier>.notebooks.chinacloudapi.cn```
+
+ **Azure US Government regions**:
+ - ```<per-workspace globally-unique identifier>.workspace.<region the workspace was created in>.api.ml.azure.us```
+    - ```ml-<workspace-name, truncated>-<region>-<per-workspace globally-unique identifier>.notebooks.usgovcloudapi.net```
+
+6. **Public DNS responds with CNAME**:
+
+    On-premises DNS Server will proceed to resolve the FQDNs from step 5 via the Public DNS. The Public DNS will respond with one of the domains listed in the informational section in step 1.
+
+7. **On-premises DNS Server recursively resolves workspace domain CNAME record from DNS Server**:
+
+    On-premises DNS Server will proceed to recursively resolve the CNAME received in step 6. Because a conditional forwarder was set up in step 4, On-premises DNS Server will send the request to DNS Server for resolution.
+
+8. **DNS Server recursively resolves workspace domain CNAME record from Azure DNS**:
+
+    DNS Server will proceed to recursively resolve the CNAME forwarded in step 7. Because a conditional forwarder was set up in step 3, DNS Server will send the request to the Azure DNS Virtual Server IP address for resolution.
+
+9. **Azure DNS returns records from Private DNS zone**:
+
+    The corresponding records stored in the Private DNS Zones will be returned to DNS Server, meaning the Azure DNS Virtual Server returns the IP addresses of the Private Endpoint.
+
+10. **On-premises DNS Server resolves workspace domain name to private endpoint address**:
+
+    The query from On-premises DNS Server to DNS Server in step 7 ultimately returns the IP addresses associated with the Private Endpoint to the Azure Machine Learning workspace. These IP addresses are returned to the original client, which will now communicate with the Azure Machine Learning workspace over the Private Endpoint configured in step 2.
++
+#### Troubleshooting
+
+If, after running through the preceding steps, you're unable to access the workspace from a virtual machine, or jobs fail on compute resources in the Virtual Network containing the Private Endpoint to the Azure Machine Learning workspace, follow these steps to identify the cause.
+
+1. **Locate the workspace FQDNs on the Private Endpoint**:
+
+ Navigate to the Azure portal using one of the following links:
+ - [Azure Public regions](https://ms.portal.azure.com/?feature.privateendpointmanagedns=false)
+ - [Azure China regions](https://portal.azure.cn/?feature.privateendpointmanagedns=false)
+ - [Azure US Government regions](https://portal.azure.us/?feature.privateendpointmanagedns=false)
+
+    Navigate to the Private Endpoint to the Azure Machine Learning workspace. The workspace FQDNs will be listed on the **Overview** tab.
+
+1. **Access compute resource in Virtual Network topology**:
+
+ Proceed to access a compute resource in the Azure Virtual Network topology. This will likely require accessing a Virtual Machine in a Virtual Network that is peered with the Hub Virtual Network.
+
+1. **Resolve workspace FQDNs**:
+
+ Open a command prompt, shell, or PowerShell. Then for each of the workspace FQDNs, run the following command:
+
+ ```nslookup <workspace FQDN>```
+
+    Each nslookup should yield one of the two private IP addresses on the Private Endpoint to the Azure Machine Learning workspace. If it does not, then something is misconfigured in the custom DNS solution.
+
+ Possible causes:
+ - The compute resource running the troubleshooting commands is not using DNS Server for DNS resolution
+ - The Private DNS Zones chosen when creating the Private Endpoint are not linked to the DNS Server VNet
+ - Conditional forwarders from DNS Server to Azure DNS Virtual Server IP were not configured correctly
+ - Conditional forwarders from On-premises DNS Server to DNS Server were not configured correctly
+ ## Next steps For more information on using Azure Machine Learning with a virtual network, see the [virtual network overview](how-to-network-security-overview.md).
marketplace Marketplace Dynamics 365 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/marketplace-dynamics-365.md
After you've considered the planning items described above, select one of the fo
| [Dynamics 365 for Operations](partner-center-portal/create-new-operations-offer.md) | When you're building for Enterprise Edition, first review these additional [publishing processes and guidelines](/dynamics365/fin-ops-core/dev-itpro/lcs-solutions/lcs-solutions-app-source). | | [Dynamics 365 for Business Central](partner-center-portal/create-new-business-central-offer.md) | | | [Dynamics 365 for Customer Engagement & Power Apps](dynamics-365-customer-engage-offer-setup.md) | First review these additional [publishing processes and guidelines](/dynamics365/customer-engagement/developer/publish-app-appsource). |
-| [Power BI](/partner-center-portal/create-power-bi-app-offer.md) | First review these additional [publishing processes and guidelines](/power-bi/developer/office-store). |
+| [Power BI](/azure/marketplace/partner-center-portal/create-power-bi-app-offer) | First review these additional [publishing processes and guidelines](/power-bi/developer/office-store). |
|||
mysql How To Configure Audit Log Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/flexible-server/how-to-configure-audit-log-cli.md
The article shows you how to configure [audit logs](concepts-audit-logs.md) for
## Prerequisites - If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account before you begin. - Install or upgrade Azure CLI to the latest version. See [Install Azure CLI](/cli/azure/install-azure-cli).-- Login to Azure account using [az login](/cli/azure/reference-index#az-login) command. Note the **id** property, which refers to **Subscription ID** for your Azure account.
+- Sign in to your Azure account using the [az login](/cli/azure/reference-index#az_login) command. Note the **id** property, which refers to the **Subscription ID** for your Azure account.
```azurecli-interactive az login
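+Once signed in, audit logging can be turned on by setting the `audit_log_enabled` server parameter. A minimal sketch with placeholder server and resource group names:
+
+```azurecli
+az mysql flexible-server parameter set --resource-group myresourcegroup \
+    --server-name mydemoserver --name audit_log_enabled --value ON
+```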
mysql How To Configure High Availability Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/flexible-server/how-to-configure-high-availability-cli.md
High availability feature provisions physically separate primary and standby rep
## Prerequisites - If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account before you begin. - Install or upgrade Azure CLI to the latest version. See [Install Azure CLI](/cli/azure/install-azure-cli).-- Login to Azure account using [az login](/cli/azure/reference-index#az-login) command. Note the **id** property, which refers to **Subscription ID** for your Azure account.
+- Sign in to your Azure account using the [az login](/cli/azure/reference-index#az_login) command. Note the **id** property, which refers to the **Subscription ID** for your Azure account.
```azurecli-interactive az login
mysql How To Configure Slow Query Log Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/flexible-server/how-to-configure-slow-query-log-cli.md
The article shows you how to configure [slow query logs](concepts-slow-query-log
## Prerequisites - If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account before you begin. - Install or upgrade Azure CLI to the latest version. See [Install Azure CLI](/cli/azure/install-azure-cli).-- Login to Azure account using [az login](/cli/azure/reference-index#az-login) command. Note the **id** property, which refers to **Subscription ID** for your Azure account.
+- Sign in to your Azure account using the [az login](/cli/azure/reference-index#az_login) command. Note the **id** property, which refers to the **Subscription ID** for your Azure account.
```azurecli-interactive az login
mysql How To Restart Stop Start Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/flexible-server/how-to-restart-stop-start-server-cli.md
This article shows you how to perform restart, start and stop flexible server us
## Prerequisites - If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account before you begin. - Install or upgrade Azure CLI to the latest version. See [Install Azure CLI](/cli/azure/install-azure-cli).-- Login to Azure account using [az login](/cli/azure/reference-index#az-login) command. Note the **id** property, which refers to **Subscription ID** for your Azure account.
+- Sign in to your Azure account using the [az login](/cli/azure/reference-index#az_login) command. Note the **id** property, which refers to the **Subscription ID** for your Azure account.
```azurecli-interactive az login
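+Once signed in, the operations themselves are single commands; server and resource group names are placeholders:
+
+```azurecli
+az mysql flexible-server restart --resource-group myresourcegroup --name mydemoserver
+az mysql flexible-server stop --resource-group myresourcegroup --name mydemoserver
+az mysql flexible-server start --resource-group myresourcegroup --name mydemoserver
+```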
mysql How To Restore Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/flexible-server/how-to-restore-server-cli.md
This article provides step-by-step procedure to perform point-in-time recoveries
## Prerequisites - If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account before you begin. - Install or upgrade Azure CLI to the latest version. See [Install Azure CLI](/cli/azure/install-azure-cli).-- Login to Azure account using [az login](/cli/azure/reference-index#az-login) command. Note the **id** property, which refers to **Subscription ID** for your Azure account.
+- Sign in to your Azure account using the [az login](/cli/azure/reference-index#az_login) command. Note the **id** property, which refers to the **Subscription ID** for your Azure account.
```azurecli-interactive az login
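+Once signed in, a point-in-time restore to a new server is a single command. Names and the timestamp below are placeholders:
+
+```azurecli
+az mysql flexible-server restore --resource-group myresourcegroup \
+    --name mydemoserver-restored --source-server mydemoserver \
+    --restore-time "2021-05-05T13:10:00Z"
+```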
mysql Single Server Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/single-server-whats-new.md
+
+ Title: What's new in Azure Database for MySQL Single Server
+description: Learn about recent updates to Azure Database for MySQL - Single server, a relational database service in the Microsoft cloud based on the MySQL Community Edition.
+++++ Last updated : 05/05/2021+
+# What's new in Azure Database for MySQL - Single Server?
+
+Azure Database for MySQL is a relational database service in the Microsoft cloud. The service is based on the [MySQL Community Edition](https://www.mysql.com/products/community/) (available under the GPLv2 license) database engine and supports versions 5.6, 5.7, and 8.0. [Azure Database for MySQL - Single Server](https://docs.microsoft.com/azure/mysql/overview#azure-database-for-mysqlsingle-server) is a deployment mode that provides a fully managed database service with minimal requirements for customization of the database. The Single Server platform is designed to handle most database management functions such as patching, backups, high availability, and security, all with minimal user configuration and control.
+
+This article summarizes new releases and features in Azure Database for MySQL - Single Server beginning in January 2021.
+
+## January 2021
+
+This release of Azure Database for MySQL - Single Server includes the following updates.
+
+- Enabled "reset password" to automatically fix the first admin's permissions.
+- Exposed the `auto_increment_increment`, `auto_increment_offset`, and `session_track_gtids` server parameters.
+- Added new stored procedures to control InnoDB buffer pool dump/restore.
+- Exposed the InnoDB warm-up related server parameters for large storage servers.
+
+## February 2021
+
+This release of Azure Database for MySQL - Single Server includes the following updates.
+
+- Added new stored procedures to support the global transaction identifier (GTID) for data-in for the version 5.7 and 8.0 Large Storage server.
+- Updated the supported MySQL versions to 5.6.50 and 5.7.32.
+
+## Contacts
+
+If you have any questions or suggestions about working with Azure Database for MySQL, contact the Azure Database for MySQL Team ([@Ask Azure DB for MySQL](mailto:AskAzureDBforMySQL@service.microsoft.com)). This email address isn't a technical support alias.
+
+In addition, consider the following points of contact as appropriate:
+
+- To contact Azure Support, [file a ticket from the Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade).
+- To fix an issue with your account, file a [support request](https://ms.portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest) in the Azure portal.
+- To provide feedback or to request new features, create an entry via [UserVoice](https://feedback.azure.com/forums/597982-azure-database-for-mysql).
+
+## Next steps
+
+- Learn more about [Azure Database for MySQL pricing](https://azure.microsoft.com/pricing/details/mysql/server/).
+- Browse the [public documentation](https://docs.microsoft.com/azure/mysql/single-server/) for Azure Database for MySQL - Single Server.
+- Review details on [troubleshooting common errors](https://docs.microsoft.com/azure/mysql/howto-troubleshoot-common-errors).
postgresql Concepts Hyperscale Columnar https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/concepts-hyperscale-columnar.md
Previously updated : 04/07/2021 Last updated : 05/04/2021 # Columnar table storage (preview)
columnar table storage for analytic and data warehousing workloads. When
columns (rather than rows) are stored contiguously on disk, data becomes more compressible, and queries can request a subset of columns more quickly.
+## Usage
+ To use columnar storage, specify `USING columnar` when creating a table: ```postgresql
compressed in its eventual stripe.
Because of how it's measured, the compression rate may or may not match the size difference between row and columnar storage for a table. The only way to truly find that difference is to construct a row and columnar table that
-contain the same data, and compare:
+contain the same data, and compare.
-```postgresql
-CREATE TABLE contestant_row AS
- SELECT * FROM contestant;
+## Measuring compression
-SELECT pg_total_relation_size('contestant_row') as row_size,
- pg_total_relation_size('contestant') as columnar_size;
-```
+Let's create a new example with more data to benchmark the compression savings.
+
+```postgresql
+-- first a wide table using row storage
+CREATE TABLE perf_row(
+ c00 int8, c01 int8, c02 int8, c03 int8, c04 int8, c05 int8, c06 int8, c07 int8, c08 int8, c09 int8,
+ c10 int8, c11 int8, c12 int8, c13 int8, c14 int8, c15 int8, c16 int8, c17 int8, c18 int8, c19 int8,
+ c20 int8, c21 int8, c22 int8, c23 int8, c24 int8, c25 int8, c26 int8, c27 int8, c28 int8, c29 int8,
+ c30 int8, c31 int8, c32 int8, c33 int8, c34 int8, c35 int8, c36 int8, c37 int8, c38 int8, c39 int8,
+ c40 int8, c41 int8, c42 int8, c43 int8, c44 int8, c45 int8, c46 int8, c47 int8, c48 int8, c49 int8,
+ c50 int8, c51 int8, c52 int8, c53 int8, c54 int8, c55 int8, c56 int8, c57 int8, c58 int8, c59 int8,
+ c60 int8, c61 int8, c62 int8, c63 int8, c64 int8, c65 int8, c66 int8, c67 int8, c68 int8, c69 int8,
+ c70 int8, c71 int8, c72 int8, c73 int8, c74 int8, c75 int8, c76 int8, c77 int8, c78 int8, c79 int8,
+ c80 int8, c81 int8, c82 int8, c83 int8, c84 int8, c85 int8, c86 int8, c87 int8, c88 int8, c89 int8,
+ c90 int8, c91 int8, c92 int8, c93 int8, c94 int8, c95 int8, c96 int8, c97 int8, c98 int8, c99 int8
+);
+
+-- next a table with identical columns using columnar storage
+CREATE TABLE perf_columnar(LIKE perf_row) USING COLUMNAR;
```
- row_size | columnar_size
--+
- 16384 | 24576
+
+Fill both tables with the same large dataset:
+
+```postgresql
+INSERT INTO perf_row
+ SELECT
+ g % 00500, g % 01000, g % 01500, g % 02000, g % 02500, g % 03000, g % 03500, g % 04000, g % 04500, g % 05000,
+ g % 05500, g % 06000, g % 06500, g % 07000, g % 07500, g % 08000, g % 08500, g % 09000, g % 09500, g % 10000,
+ g % 10500, g % 11000, g % 11500, g % 12000, g % 12500, g % 13000, g % 13500, g % 14000, g % 14500, g % 15000,
+ g % 15500, g % 16000, g % 16500, g % 17000, g % 17500, g % 18000, g % 18500, g % 19000, g % 19500, g % 20000,
+ g % 20500, g % 21000, g % 21500, g % 22000, g % 22500, g % 23000, g % 23500, g % 24000, g % 24500, g % 25000,
+ g % 25500, g % 26000, g % 26500, g % 27000, g % 27500, g % 28000, g % 28500, g % 29000, g % 29500, g % 30000,
+ g % 30500, g % 31000, g % 31500, g % 32000, g % 32500, g % 33000, g % 33500, g % 34000, g % 34500, g % 35000,
+ g % 35500, g % 36000, g % 36500, g % 37000, g % 37500, g % 38000, g % 38500, g % 39000, g % 39500, g % 40000,
+ g % 40500, g % 41000, g % 41500, g % 42000, g % 42500, g % 43000, g % 43500, g % 44000, g % 44500, g % 45000,
+ g % 45500, g % 46000, g % 46500, g % 47000, g % 47500, g % 48000, g % 48500, g % 49000, g % 49500, g % 50000
+ FROM generate_series(1,50000000) g;
+
+INSERT INTO perf_columnar
+ SELECT
+ g % 00500, g % 01000, g % 01500, g % 02000, g % 02500, g % 03000, g % 03500, g % 04000, g % 04500, g % 05000,
+ g % 05500, g % 06000, g % 06500, g % 07000, g % 07500, g % 08000, g % 08500, g % 09000, g % 09500, g % 10000,
+ g % 10500, g % 11000, g % 11500, g % 12000, g % 12500, g % 13000, g % 13500, g % 14000, g % 14500, g % 15000,
+ g % 15500, g % 16000, g % 16500, g % 17000, g % 17500, g % 18000, g % 18500, g % 19000, g % 19500, g % 20000,
+ g % 20500, g % 21000, g % 21500, g % 22000, g % 22500, g % 23000, g % 23500, g % 24000, g % 24500, g % 25000,
+ g % 25500, g % 26000, g % 26500, g % 27000, g % 27500, g % 28000, g % 28500, g % 29000, g % 29500, g % 30000,
+ g % 30500, g % 31000, g % 31500, g % 32000, g % 32500, g % 33000, g % 33500, g % 34000, g % 34500, g % 35000,
+ g % 35500, g % 36000, g % 36500, g % 37000, g % 37500, g % 38000, g % 38500, g % 39000, g % 39500, g % 40000,
+ g % 40500, g % 41000, g % 41500, g % 42000, g % 42500, g % 43000, g % 43500, g % 44000, g % 44500, g % 45000,
+ g % 45500, g % 46000, g % 46500, g % 47000, g % 47500, g % 48000, g % 48500, g % 49000, g % 49500, g % 50000
+ FROM generate_series(1,50000000) g;
+
+VACUUM (FREEZE, ANALYZE) perf_row;
+VACUUM (FREEZE, ANALYZE) perf_columnar;
```
-For our tiny table, the columnar storage actually uses more space, but as the
-data grows, compression will win.
+For this data, you can see a compression ratio of better than 8X in the
+columnar table.
+
+```postgresql
+SELECT pg_total_relation_size('perf_row')::numeric/
+ pg_total_relation_size('perf_columnar') AS compression_ratio;
+ compression_ratio
+--
+ 8.0196135873627944
+(1 row)
+```
## Example
storage](https://docs.citusdata.com/en/stable/use_cases/timeseries.html#archivin
## Limitations
-This feature still has a number of significant limitations. See [Hyperscale
+This feature still has significant limitations. See [Hyperscale
(Citus) limits and limitations](concepts-hyperscale-limits.md#columnar-storage). ## Next steps
-* See an example of columnar storage in a Citus [timeseries
+* See an example of columnar storage in a Citus [time series
tutorial](https://docs.citusdata.com/en/stable/use_cases/timeseries.html#archiving-with-columnar-storage) (external link).
route-server About Dual Homed Network https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/route-server/about-dual-homed-network.md
Previously updated : 04/30/2021 Last updated : 05/04/2021 # About dual-homed network with Azure Route Server (Preview)
-Azure Route Server supports your typical hub-and-spoke network topology. This configuration is when both the Route Server and network virtual appliance (NVA) is in the hub virtual network. Router Server also enabled you to configure a different topology called a dual-homed network. This configuration is when you have a spoke virtual network peered with two or more hub virtual networks. Virtual machines in the spoke virtual network can communicate through either hub virtual network to your on-premises or the internet.
+Azure Route Server supports your typical hub-and-spoke network topology. This configuration is when both the Route Server and the network virtual appliance (NVA) are in the hub virtual network. Route Server also enables you to configure a different topology called a dual-homed network. This configuration is when you have a spoke virtual network peered with two or more hub virtual networks. Virtual machines in the spoke virtual network can communicate through either hub virtual network to your on-premises network or the Internet.
## How to set it up
As can be seen in the following diagram, you need to:
:::image type="content" source="./media/about-dual-homed-network/dual-homed-topology.png" alt-text="Diagram of Route Server in a dual-homed topology.":::
-## How does it work?
+### How does it work?
In the control plane, the NVA and the Route Server will exchange routes as if they're deployed in the same virtual network. The NVA will learn about spoke virtual network addresses from the Route Server. The Route Server will learn routes from each of the NVAs. The Route Server will then program all the virtual machines in the spoke virtual network with the routes it learned.
security-center Continuous Export https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/continuous-export.md
Title: Continuous export can send Azure Security Center's alerts and recommendations to Log Analytics workspaces or Azure Event Hubs description: Learn how to configure continuous export of security alerts and recommendations to Log Analytics workspaces or Azure Event Hubs- Previously updated : 12/24/2020 Last updated : 05/05/2021
Continuous export can export the following data types whenever they change:
- Regulatory compliance data > [!NOTE]
-> The exporting of secure score and regulatory compliance data is a preview feature and isn't available on government clouds.
+> The exporting of secure score and regulatory compliance data is a preview feature.
## Set up a continuous export
security-center Custom Security Policies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/custom-security-policies.md
As discussed in [the Azure Policy documentation](../governance/policy/concepts/d
[![Selecting a subscription for which you'll create your custom policy](media/custom-security-policies/custom-policy-selecting-a-subscription.png)](media/custom-security-policies/custom-policy-selecting-a-subscription.png#lightbox) > [!NOTE]
- > You must add custom standards at the subscription level (or higher) for them to be evaluated and displayed in Security Center.
- >
- > When you add a custom standard, it assigns an *initiative* to that scope. We therefore recommend that you select the widest scope required for that assignment.
+ > You must add custom initiatives at the subscription level (or higher) for them to be evaluated and displayed in Security Center. We recommend that you select the widest scope available.
1. In the Security policy page, under Your custom initiatives, click **Add a custom initiative**.
security-center Recommendations Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/recommendations-reference.md
description: This article lists Azure Security Center's security recommendations
Previously updated : 04/26/2021 Last updated : 05/05/2021
security-center Secure Score Security Controls https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/secure-score-security-controls.md
Title: Secure score in Azure Security Center description: Description of Azure Security Center's secure score and its security controls -
-ms.assetd: c42d02e4-201d-4a95-8527-253af903a5c6
- Previously updated : 02/03/2021 Last updated : 05/05/2021
For more information, see [How your secure score is calculated](secure-score-sec
The contribution of each security control towards the overall secure score is shown clearly on the recommendations page.
-[![The enhanced secure score introduces security controls](media/secure-score-security-controls/security-controls.png)](media/secure-score-security-controls/security-controls.png#lightbox)
To get all the possible points for a security control, all your resources must comply with all of the security recommendations within the security control. For example, Security Center has multiple recommendations regarding how to secure your management ports. You'll need to remediate them all to make a difference to your secure score.
-For example, the security control called "Apply system updates" has a maximum score of six points, which you can see in the tooltip on the potential increase value of the control:
+### Example scores for a control
-[![The security control "Apply system updates"](media/secure-score-security-controls/apply-system-updates-control.png)](media/secure-score-security-controls/apply-system-updates-control.png#lightbox)
-The maximum score for this control, Apply system updates, is always 6. In this example, there are 50 resources. So we divide the max score by 50, and the result is that every resource contributes 0.12 points.
-* **Potential increase** (0.12 x 8 unhealthy resources = 0.96) - The remaining points available to you within the control. If you remediate all the recommendations in this control, your score will increase by 2% (in this case, 0.96 points rounded up to 1 point).
-* **Current score** (0.12 x 42 healthy resources = 5.04) - The current score for this control. Each control contributes towards the total score. In this example, the control is contributing 5.04 points to current secure total.
-* **Max score** - The maximum number of points you can gain by completing all recommendations within a control. The maximum score for a control indicates the relative significance of that control. Use the max score values to triage the issues to work on first.
+In this example:
+
+| # | Name | Description |
+|:-:|--|--|
+| 1 | **Remediate vulnerabilities security control** | This control groups multiple recommendations related to discovering and resolving known vulnerabilities. |
+| 2 | **Max score** | The maximum number of points you can gain by completing all recommendations within a control. The maximum score for a control indicates the relative significance of that control and is fixed for every environment. Use the max score values to triage the issues to work on first.<br>For a list of all controls and their max scores, see [Security controls and their recommendations](#security-controls-and-their-recommendations). |
+| 3 | **Number of resources** | There are 35 resources affected by this control.<br>To understand the possible contribution of every resource, divide the max score by the number of resources.<br>For this example, 6/35=0.1714<br>**Every resource contributes 0.1714 points.** |
+| 4 | **Current score** | The current score for this control.<br>Current score=[Score per resource]*[Number of healthy resources]<br> 0.1714 x 5 healthy resources = 0.86<br>Each control contributes towards the total score. In this example, the control is contributing 0.86 points to the current total secure score. |
+| 5 | **Potential score increase** | The remaining points available to you within the control. If you remediate all the recommendations in this control, your score will increase by 9%.<br>Potential score increase=[Score per resource]*[Number of unhealthy resources]<br> 0.1714 x 30 unhealthy resources = 5.14<br> |
+| | | |
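To make the arithmetic in the table concrete, here's a minimal C# sketch (illustrative only, not Security Center code) that reproduces the example's numbers:

```csharp
using System;

class SecureScoreExample
{
    static void Main()
    {
        // Values taken from the example in the table above.
        const double maxScore = 6;       // fixed max score for the control
        const int totalResources = 35;   // resources affected by the control
        const int healthyResources = 5;  // resources that already comply

        double perResource = maxScore / totalResources;        // 6 / 35 = 0.1714
        double currentScore = perResource * healthyResources;  // 0.1714 x 5 = 0.86
        double potentialIncrease =
            perResource * (totalResources - healthyResources); // 0.1714 x 30 = 5.14

        Console.WriteLine($"Score per resource:       {perResource:F4}");
        Console.WriteLine($"Current score:            {currentScore:F2}");
        Console.WriteLine($"Potential score increase: {potentialIncrease:F2}");
    }
}
```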
+ ### Calculations - understanding your score
security Feature Availability https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security/fundamentals/feature-availability.md
The following tables display the current Azure Sentinel feature availability in
| - [Cross-tenant/Cross-workspace incidents view](/azure/sentinel/multiple-workspace-view) | Public Preview | Public Preview |
| - [Entity insights](/azure/sentinel/enable-entity-behavior-analytics) | Public Preview | Not Available |
| - [Fusion](/azure/sentinel/fusion)<br>Advanced multistage attack detections <sup>[1](#footnote1)</sup> | GA | Not Available |
+| - [Hunting](/azure/sentinel/hunting) | GA | GA |
|- [Notebooks](/azure/sentinel/notebooks) | GA | GA |
|- [SOC incident audit metrics](/azure/sentinel/manage-soc-with-incident-metrics) | GA | GA |
|- [Watchlists](https://techcommunity.microsoft.com/t5/azure-sentinel/what-s-new-watchlist-is-now-in-public-preview/ba-p/1765887) | Public Preview | Not Available |
sentinel Create Custom Connector https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/create-custom-connector.md
For examples of useful Logstash plugins, see:
## Connect with Logic Apps
-Use an [Azure Logic App](../logic-apps/index.yml) to create a serverless, custom connector for Azure Sentinel.
+Use [Azure Logic Apps](../logic-apps/index.yml) to create a serverless, custom connector for Azure Sentinel.
> [!NOTE]
> While creating serverless connectors using Logic Apps may be convenient, using Logic Apps for your connectors may be costly for large volumes of data.
sentinel Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/whats-new.md
Previously updated : 04/08/2021 Last updated : 05/05/2021
# What's new in Azure Sentinel
Noted features are currently in PREVIEW. The [Azure Preview Supplemental Terms](
> You can also contribute! Join us in the [Azure Sentinel Threat Hunters GitHub community](https://github.com/Azure/Azure-Sentinel/wiki). >
+## May 2021
+
+- [Zero Trust (TIC3.0) workbook](#zero-trust-tic30-workbook)
+
+### Zero Trust (TIC3.0) workbook
+
+The new Azure Sentinel Zero Trust (TIC3.0) workbook provides an automated visualization of [Zero Trust](/security/zero-trust/) principles, cross-walked to the [Trusted Internet Connections](https://www.cisa.gov/trusted-internet-connections) (TIC) framework.
+
+We know that compliance isn't just an annual requirement, and that organizations must monitor configurations continuously over time. Azure Sentinel's Zero Trust workbook uses the full breadth of Microsoft security offerings across Azure, Office 365, Teams, Intune, Windows Virtual Desktop, and many more.
+
+[ ![Zero Trust workbook.](media/zero-trust-workbook.gif) ](media/zero-trust-workbook.gif#lightbox)
+
+**The Zero Trust workbook**:
+
+- Enables Implementers, SecOps Analysts, Assessors, Security and Compliance Decision Makers, MSSPs, and others to gain situational awareness for cloud workloads' security posture.
+- Features over 75 control cards, aligned to the TIC 3.0 security capabilities, with selectable GUI buttons for navigation.
+- Is designed to augment staffing through automation, artificial intelligence, machine learning, query/alerting generation, visualizations, tailored recommendations, and respective documentation references.
+
+For more information, see [Tutorial: Visualize and monitor your data](tutorial-monitor-your-data.md).
+ ## April 2021 - [Azure Policy-based data connectors](#azure-policy-based-data-connectors)
service-bus-messaging Service Bus Dotnet Multi Tier App Using Service Bus Queues https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/service-bus-dotnet-multi-tier-app-using-service-bus-queues.md
Title: .NET multi-tier application using Azure Service Bus | Microsoft Docs
description: A .NET tutorial that helps you develop a multi-tier app in Azure that uses Service Bus queues to communicate between tiers. ms.devlang: dotnet Previously updated : 06/23/2020 Last updated : 04/30/2021
You will learn the following:
[!INCLUDE [create-account-note](../../includes/create-account-note.md)]
-In this tutorial you'll build and run the multi-tier application in an Azure cloud service. The front end is an ASP.NET MVC web role and the back end is a worker-role that uses a Service Bus queue. You can create the same multi-tier application with the front end as a web project, that is deployed to an Azure website instead of a cloud service. You can also try out the [.NET on-premises/cloud hybrid application](../azure-relay/service-bus-dotnet-hybrid-app-using-service-bus-relay.md) tutorial.
+In this tutorial, you'll build and run the multi-tier application in an Azure cloud service. The front end is an ASP.NET MVC web role, and the back end is a worker role that uses a Service Bus queue. You can create the same multi-tier application with the front end as a web project that is deployed to an Azure website instead of a cloud service. You can also try out the [.NET on-premises/cloud hybrid application](../azure-relay/service-bus-dotnet-hybrid-app-using-service-bus-relay.md) tutorial.
The following screenshot shows the completed application.
-![Screenshot of the application's Submit page.][0]
## Scenario overview: inter-role communication To submit an order for processing, the front-end UI component, running
configured with filter rules that restrict the set of messages passed to
the subscription queue to those that match the filter. The following example uses Service Bus queues.
-![Diagram showing the communication between the Web Role, the Service Bus, and the Worker Role.][1]
This communication mechanism has several advantages over direct messaging:
messaging:
pull messages at their own maximum rate. This pattern is often termed the *competing consumer* pattern.
- ![Diagram showing the communication between the Web Role, the Service Bus, and two Worker Roles.][2]
-
+ :::image type="content" source="./media/service-bus-dotnet-multi-tier-app-using-service-bus-queues/getting-started-multi-tier-101.png" alt-text="Diagram showing the communication between the Web Role, the Service Bus, and two Worker Roles.":::
+
The following sections discuss the code that implements this architecture.
+## Prerequisites
+In this tutorial, you'll use Azure Active Directory (Azure AD) authentication to create `ServiceBusClient` and `ServiceBusAdministrationClient` objects. You'll also use `DefaultAzureCredential`. To use it when testing the application locally in a development environment, complete the following steps (see the sketch after this list):
+
+1. [Register an application in Azure AD](../active-directory/develop/quickstart-register-app.md).
+1. [Add the application to the `Azure Service Bus Data Owner` role](service-bus-managed-service-identity.md#to-assign-azure-roles-using-the-azure-portal).
+1. Set the `AZURE_CLIENT_ID`, `AZURE_TENANT_ID`, and `AZURE_CLIENT_SECRET` environment variables. For instructions, see [this article](/dotnet/api/overview/azure/identity-readme#environment-variables).
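The following minimal sketch shows how those pieces fit together (illustrative only; the namespace, tenant, client, and secret values are placeholders): `DefaultAzureCredential` picks the environment variables up automatically, and `ClientSecretCredential` is an explicit alternative you could use for local testing:

```csharp
using Azure.Identity;
using Azure.Messaging.ServiceBus;
using Azure.Messaging.ServiceBus.Administration;

class CredentialSketch
{
    static void Main()
    {
        // Placeholder: replace with your namespace's host name.
        string fullyQualifiedNamespace = "<SERVICE BUS NAMESPACE NAME>.servicebus.windows.net";

        // DefaultAzureCredential reads AZURE_CLIENT_ID, AZURE_TENANT_ID, and
        // AZURE_CLIENT_SECRET from the environment (among other sources).
        var client = new ServiceBusClient(fullyQualifiedNamespace, new DefaultAzureCredential());
        var adminClient = new ServiceBusAdministrationClient(fullyQualifiedNamespace, new DefaultAzureCredential());

        // Alternative for local testing: pass the same three values explicitly.
        var credential = new ClientSecretCredential(
            "<TENANT ID>", "<CLIENT ID>", "<CLIENT SECRET>");   // placeholders
        var explicitClient = new ServiceBusClient(fullyQualifiedNamespace, credential);
    }
}
```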
++
## Create a namespace
The first step is to create a *namespace*, and obtain a [Shared Access Signature (SAS)](service-bus-sas.md) key for that namespace. A namespace provides an application boundary for each application exposed through Service Bus. A SAS key is generated by the system when a namespace is created. The combination of namespace name and SAS key provides the credentials for Service Bus to authenticate access to an application.
queue and displays status information about the queue.
In Visual Studio, on the **File** menu, click **New**, and then click **Project**.
-2. From **Installed Templates**, under **Visual C#**, click **Cloud** and
- then click **Azure Cloud Service**. Name the project
- **MultiTierApp**. Then click **OK**.
+2. On the **Templates** page, follow these steps:
+ 1. Select **C#** for programming language.
+ 1. Select **Cloud** for the project type.
+ 1. Select **Azure Cloud Service**.
+ 1. Select **Next**.
- ![Screenshot of the New Project dialog box with Cloud selected and Azure Cloud Service Visual C# highlighted and outlined in red.][9]
-3. From the **Roles** pane, double-click **ASP.NET Web
- Role**.
+ :::image type="content" source="./media/service-bus-dotnet-multi-tier-app-using-service-bus-queues/getting-started-multi-tier-10.png" alt-text="Screenshot of the New Project dialog box with Cloud selected and Azure Cloud Service Visual C# highlighted and outlined in red.":::
+3. Name the project **MultiTierApp**, select a location for the project, and then select **Create**.
+
+ :::image type="content" source="./media/service-bus-dotnet-multi-tier-app-using-service-bus-queues/project-name.png" alt-text="Specify project name.":::
+1. On the **Roles** page, double-click **ASP.NET Web Role**, and select **OK**.
- ![Screenshot of the New Microsoft Azure Cloud Service dialog box with ASP.NET Web Role selected and WebRole1 also selected.][10]
+ :::image type="content" source="./media/service-bus-dotnet-multi-tier-app-using-service-bus-queues/getting-started-multi-tier-11.png" alt-text="Select Web Role":::
4. Hover over **WebRole1** under **Azure Cloud Service solution**, click the pencil icon, and rename the web role to **FrontendWebRole**. Then click **OK**. (Make sure you enter "Frontend" with a lower-case 'e,' not "FrontEnd".)
- ![Screenshot of the New Microsoft Azure Cloud Service dialog box with the solution renamed to FrontendWebRole.][11]
-5. From the **New ASP.NET Project** dialog box, in the **Select a template** list, click **MVC**.
-
- ![Screenshotof the New ASP.NET Project dialog box with MVC highlighted and outlined in red and the Change Authentication option outlined in red.][12]
-6. Still in the **New ASP.NET Project** dialog box, click the **Change Authentication** button. In the **Change Authentication** dialog box, ensure that **No Authentication** is selected, and then click **OK**. For this tutorial, you're deploying an app that doesn't need a user login.
+ :::image type="content" source="./media/service-bus-dotnet-multi-tier-app-using-service-bus-queues/getting-started-multi-tier-02.png" alt-text="Screenshot of the New Microsoft Azure Cloud Service dialog box with the solution renamed to FrontendWebRole.":::
+5. In the **Create a new ASP.NET Web Application** dialog box, select **MVC**, and then select **Create**.
- ![Screenshot of the Change Authentication dialog box with the No Authentication option selected and outlined in red.][16]
-7. Back in the **New ASP.NET Project** dialog box, click **OK** to create the project.
+ :::image type="content" source="./media/service-bus-dotnet-multi-tier-app-using-service-bus-queues/getting-started-multi-tier-12.png" alt-text="Screenshot of the New ASP.NET Project dialog box with MVC highlighted and outlined in red and the Change Authentication option outlined in red.":::
8. In **Solution Explorer**, in the **FrontendWebRole** project, right-click **References**, then click **Manage NuGet Packages**.
-9. Click the **Browse** tab, then search for **WindowsAzure.ServiceBus**. Select the **WindowsAzure.ServiceBus** package, click **Install**, and accept the terms of use.
-
- ![Screenshot of the Manage NuGet Packages dialog box with the WindowsAzure.ServiceBus highlighted and the Install option outlined in red.][13]
+9. Click the **Browse** tab, then search for **Azure.Messaging.ServiceBus**. Select the **Azure.Messaging.ServiceBus** package, select **Install**, and accept the terms of use.
+ :::image type="content" source="./media/service-bus-dotnet-multi-tier-app-using-service-bus-queues/getting-started-multi-tier-13.png" alt-text="Screenshot of the Manage NuGet Packages dialog box with the Azure.Messaging.ServiceBus highlighted and the Install option outlined in red.":::
+ Note that the required client assemblies are now referenced and some new code files have been added.
-10. In **Solution Explorer**, right-click **Models** and click **Add**,
+10. Follow the same steps to add the `Azure.Identity` NuGet package to the project.
+10. In **Solution Explorer**, expand **FrontendWebRole**, right-click **Models** and click **Add**,
then click **Class**. In the **Name** box, type the name **OnlineOrder.cs**. Then click **Add**.
In this section, you create the various pages that your application displays.
model you just created, as well as Service Bus. ```csharp
- using FrontendWebRole.Models;
- using Microsoft.ServiceBus.Messaging;
- using Microsoft.ServiceBus;
+    using FrontendWebRole.Models;
+    using Azure.Messaging.ServiceBus;
+    using Azure.Messaging.ServiceBus.Administration;
``` 3. Also in the HomeController.cs file in Visual Studio, replace the existing namespace definition with the following code. This code
In this section, you create the various pages that your application displays.
``` 4. From the **Build** menu, click **Build Solution** to test the accuracy of your work so far. 5. Now, create the view for the `Submit()` method you
- created earlier. Right-click within the `Submit()` method (the overload of `Submit()` that takes no parameters), and then choose
- **Add View**.
-
- ![Screenshot of the code with focus on the Submit method and a drop-down list with the Add View option highlighted.][14]
-6. A dialog box appears for creating the view. In the **Template** list, choose **Create**. In the **Model class** list, select the **OnlineOrder** class.
-
- ![A screenshot of the Add View dialog box with the Template and Model class drop-down lists outlined in red.][15]
-7. Click **Add**.
+ created earlier. Right-click within the `Submit()` method (the overload of `Submit()` that takes no parameters) in the **HomeController.cs** file, and then choose **Add View**.
+6. In the **Add New Scaffolded Item** dialog box, select **Add**.
+1. In the **Add View** dialog box, do these steps:
+ 1. In the **Template** list, choose **Create**.
+ 1. In the **Model class** list, select the **OnlineOrder** class.
+ 1. Select **Add**.
+
+ :::image type="content" source="./media/service-bus-dotnet-multi-tier-app-using-service-bus-queues/getting-started-multi-tier-34.png" alt-text="A screenshot of the Add View dialog box with the Template and Model class drop-down lists outlined in red.":::
8. Now, change the displayed name of your application. In **Solution Explorer**, double-click the **Views\Shared\\_Layout.cshtml** file to open it in the Visual Studio editor.
In this section, you create the various pages that your application displays.
**Northwind Traders Products**. 10. Remove the **Home**, **About**, and **Contact** links. Delete the highlighted code:
- ![Screenshot of the code with three lines of H T M L Action Link code highlighted.][28]
+ :::image type="content" source="./media/service-bus-dotnet-multi-tier-app-using-service-bus-queues/getting-started-multi-tier-40.png" alt-text="Screenshot of the code with three lines of H T M L Action Link code highlighted.":::
11. Finally, modify the submission page to include some information about the queue. In **Solution Explorer**, double-click the **Views\Home\Submit.cshtml** file to open it in the Visual Studio
In this section, you create the various pages that your application displays.
12. You now have implemented your UI. You can press **F5** to run your application and confirm that it looks as expected.
- ![Screenshot of the application's Submit page.][17]
+ :::image type="content" source="./media/service-bus-dotnet-multi-tier-app-using-service-bus-queues/getting-started-multi-tier-app.png" alt-text="Screenshot of the application's Submit page.":::
### Write the code for submitting items to a Service Bus queue
Now, add code for submitting items to a queue. First, you
Service Bus queue.
3. Now, add code that encapsulates the connection information and initializes the connection to a Service Bus queue. Replace the entire contents of QueueConnector.cs with the following code, and replace `<SERVICE BUS NAMESPACE NAME>` with the name of the Service Bus namespace you created earlier. ```csharp
- using System;
- using System.Collections.Generic;
- using System.Linq;
- using System.Web;
- using Microsoft.ServiceBus.Messaging;
- using Microsoft.ServiceBus;
-
+ using System;
+ using System.Collections.Generic;
+ using System.Linq;
+ using System.Web;
+ using System.Threading.Tasks;
+ using Azure.Messaging.ServiceBus;
+    using Azure.Messaging.ServiceBus.Administration;
+    using Azure.Identity;
+
namespace FrontendWebRole {
- public static class QueueConnector
- {
- // Thread-safe. Recommended that you cache rather than recreating it
- // on every request.
- public static QueueClient OrdersQueueClient;
-
- // Obtain these values from the portal.
- public const string Namespace = "your Service Bus namespace";
-
- // The name of your queue.
- public const string QueueName = "OrdersQueue";
-
- public static NamespaceManager CreateNamespaceManager()
- {
- // Create the namespace manager which gives you access to
- // management operations.
- var uri = ServiceBusEnvironment.CreateServiceUri(
- "sb", Namespace, String.Empty);
- var tP = TokenProvider.CreateSharedAccessSignatureTokenProvider(
- "RootManageSharedAccessKey", "yourKey");
- return new NamespaceManager(uri, tP);
- }
-
- public static void Initialize()
- {
- // Using Http to be friendly with outbound firewalls.
- ServiceBusEnvironment.SystemConnectivity.Mode =
- ConnectivityMode.Http;
-
- // Create the namespace manager which gives you access to
- // management operations.
- var namespaceManager = CreateNamespaceManager();
-
- // Create the queue if it does not exist already.
- if (!namespaceManager.QueueExists(QueueName))
- {
- namespaceManager.CreateQueue(QueueName);
- }
-
- // Get a client to the queue.
- var messagingFactory = MessagingFactory.Create(
- namespaceManager.Address,
- namespaceManager.Settings.TokenProvider);
- OrdersQueueClient = messagingFactory.CreateQueueClient(
- "OrdersQueue");
- }
- }
+ public static class QueueConnector
+ {
+ // object to send messages to a Service Bus queue
+ internal static ServiceBusSender SBSender;
+
+ // object to create a queue and get runtime properties (like message count) of queue
+ internal static ServiceBusAdministrationClient SBAdminClient;
+
+ // Fully qualified Service Bus namespace
+ private const string FullyQualifiedNamespace = "<SERVICE BUS NAMESPACE NAME>.servicebus.windows.net";
+
+ // The name of your queue.
+ internal const string QueueName = "OrdersQueue";
+
+ public static async Task Initialize()
+ {
+ // Create a Service Bus client that you can use to send or receive messages
+ ServiceBusClient SBClient = new ServiceBusClient(FullyQualifiedNamespace, new DefaultAzureCredential());
+
+ // Create a Service Bus admin client to create queue if it doesn't exist or to get message count
+ SBAdminClient = new ServiceBusAdministrationClient(FullyQualifiedNamespace, new DefaultAzureCredential());
+
+ // create the OrdersQueue if it doesn't exist already
+ if (!(await SBAdminClient.QueueExistsAsync(QueueName)))
+ {
+ await SBAdminClient.CreateQueueAsync(QueueName);
+ }
+
+ // create a sender for the queue
+ SBSender = SBClient.CreateSender(QueueName);
+ }
+ }
} ``` 4. Now, ensure that your **Initialize** method gets called. In **Solution Explorer**, double-click **Global.asax\Global.asax.cs**. 5. Add the following line of code at the end of the **Application_Start** method. ```csharp
- FrontendWebRole.QueueConnector.Initialize();
+ FrontendWebRole.QueueConnector.Initialize().Wait();
``` 6. Finally, update the web code you created earlier, to submit items to the queue. In **Solution Explorer**,
Service Bus queue.
for the queue. ```csharp
- public ActionResult Submit()
- {
- // Get a NamespaceManager which allows you to perform management and
- // diagnostic operations on your Service Bus queues.
- var namespaceManager = QueueConnector.CreateNamespaceManager();
-
- // Get the queue, and obtain the message count.
- var queue = namespaceManager.GetQueue(QueueConnector.QueueName);
- ViewBag.MessageCount = queue.MessageCount;
-
- return View();
- }
+ public ActionResult Submit()
+ {
+        QueueRuntimeProperties properties = QueueConnector.SBAdminClient.GetQueueRuntimePropertiesAsync(QueueConnector.QueueName).Result;
+ ViewBag.MessageCount = properties.ActiveMessageCount;
+
+ return View();
+ }
``` 8. Update the `Submit(OnlineOrder order)` method (the overload that takes one parameter) as follows to submit order information to the queue. ```csharp
- public ActionResult Submit(OnlineOrder order)
- {
- if (ModelState.IsValid)
- {
- // Create a message from the order.
- var message = new BrokeredMessage(order);
-
- // Submit the order.
- QueueConnector.OrdersQueueClient.Send(message);
- return RedirectToAction("Submit");
- }
- else
- {
- return View(order);
- }
- }
+ public ActionResult Submit(OnlineOrder order)
+ {
+ if (ModelState.IsValid)
+ {
+ // create a message
+ var message = new ServiceBusMessage(new BinaryData(order));
+
+ // send the message to the queue
+            QueueConnector.SBSender.SendMessageAsync(message).Wait();
+
+ return RedirectToAction("Submit");
+ }
+ else
+ {
+ return View(order);
+ }
+ }
``` 9. You can now run the application again. Each time you submit an order, the message count increases.
- ![Screenshot of the application's Submit page with the message count incremented to 1.][18]
+ :::image type="content" source="./media/service-bus-dotnet-multi-tier-app-using-service-bus-queues/getting-started-multi-tier-app2.png" alt-text="Screenshot of the application's Submit page with the message count incremented to 1.":::
## Create the worker role You will now create the worker role that processes the order
submissions. This example uses the **Worker Role with Service Bus Queue** Visual
2. In Visual Studio, in **Solution Explorer** right-click the **Roles** folder under the **MultiTierApp** project. 3. Click **Add**, and then click **New Worker Role Project**. The **Add New Role Project** dialog box appears.
+
+ :::image type="content" source="./media/service-bus-dotnet-multi-tier-app-using-service-bus-queues/SBNewWorkerRole.png" alt-text="Screenshot of the Solution Explorer pane with the New Worker Role Project option and Add option highlighted.":::
+1. In the **Add New Role Project** dialog box, select **Worker Role**. Don't select **Worker Role with Service Bus Queue** as it generates code that uses the legacy Service Bus SDK.
- ![Screenshot of the Soultion Explorer pane with the New Worker Role Project option and Add option highlighted.][26]
-4. In the **Add New Role Project** dialog box, click **Worker Role with Service Bus Queue**.
-
- ![Screenshot of the Ad New Role Project dialog box with the Worker Role with Service Bus Queue option highlighted and outlined in red.][23]
+    :::image type="content" source="./media/service-bus-dotnet-multi-tier-app-using-service-bus-queues/SBWorkerRole1.png" alt-text="Screenshot of the Add New Role Project dialog box with the Worker Role with Service Bus Queue option highlighted and outlined in red.":::
5. In the **Name** box, name the project **OrderProcessingRole**. Then click **Add**.
-6. Copy the connection string that you obtained in step 9 of the "Create a Service Bus namespace" section to the clipboard.
-7. In **Solution Explorer**, right-click the **OrderProcessingRole** you created in step 5 (make sure that you right-click **OrderProcessingRole** under **Roles**, and not the class). Then click **Properties**.
-8. On the **Settings** tab of the **Properties** dialog box, click inside the **Value** box for **Microsoft.ServiceBus.ConnectionString**, and then paste the endpoint value you copied in step 6.
+1. In **Solution Explorer**, right-click **OrderProcessingRole** project, and select **Manage NuGet Packages**.
+9. Select the **Browse** tab, then search for **Azure.Messaging.ServiceBus**. Select the **Azure.Messaging.ServiceBus** package, select **Install**, and accept the terms of use.
- ![Screenshot of the Properties dialog box with the Settings tab selected and the Microsoft.ServiceBus.ConnectionString table row outlined in red.][25]
-9. Create an **OnlineOrder** class to represent the orders as you process them from the queue. You can reuse a class you have already created. In **Solution Explorer**, right-click the **OrderProcessingRole** class (right-click the class icon, not the role). Click **Add**, then click **Existing Item**.
-10. Browse to the subfolder for **FrontendWebRole\Models**, and then double-click **OnlineOrder.cs** to add it to this project.
-11. In **WorkerRole.cs**, change the value of the **QueueName** variable from `"ProcessingQueue"` to `"OrdersQueue"` as shown in the following code.
-
+ :::image type="content" source="./media/service-bus-dotnet-multi-tier-app-using-service-bus-queues/getting-started-multi-tier-13.png" alt-text="Screenshot of the Manage NuGet Packages dialog box with the Azure.Messaging.ServiceBus highlighted and the Install option outlined in red.":::
+1. Follow the same steps to add the `Azure.Identity` NuGet package to the project.
+1. Create an **OnlineOrder** class to represent the orders as you process them from the queue. You can reuse a class you have already created. In **Solution Explorer**, right-click the **OrderProcessingRole** class (right-click the class icon, not the role). Click **Add**, then click **Existing Item**.
+1. Browse to the subfolder for **FrontendWebRole\Models**, and then double-click **OnlineOrder.cs** to add it to this project.
+1. Add the following `using` statements to the **WorkerRole.cs** file in the **OrderProcessingRole** project.
+
+ ```csharp
+ using FrontendWebRole.Models;
+ using Azure.Messaging.ServiceBus;
+    using Azure.Messaging.ServiceBus.Administration;
+    using Azure.Identity;
+ ```
+1. In **WorkerRole.cs**, add the following properties.
+
+ > [!IMPORTANT]
+    > In the following code, replace `<SERVICE BUS NAMESPACE NAME>` with the name of the Service Bus namespace you created as part of the prerequisites.
+ ```csharp
- // The name of your queue.
- const string QueueName = "OrdersQueue";
+ // Fully qualified Service Bus namespace
+ private const string FullyQualifiedNamespace = "<SERVICE BUS NAMESPACE NAME>.servicebus.windows.net";
+
+ // The name of your queue.
+ private const string QueueName = "OrdersQueue";
+
+    // Service Bus receiver object to receive messages from the specific queue
+ private ServiceBusReceiver SBReceiver;
+ ```
-12. Add the following using statement at the top of the WorkerRole.cs file.
+1. Update the `OnStart` method to create a `ServiceBusClient` object and then a `ServiceBusReceiver` object to receive messages from the `OrdersQueue`.
```csharp
- using FrontendWebRole.Models;
+ public override bool OnStart()
+ {
+ // Create a Service Bus client that you can use to send or receive messages
+ ServiceBusClient SBClient = new ServiceBusClient(FullyQualifiedNamespace, new DefaultAzureCredential());
+
+ CreateQueue(QueueName).Wait();
+
+        // create a receiver that we can use to receive messages
+ SBReceiver = SBClient.CreateReceiver(QueueName);
+
+ return base.OnStart();
+ }
+ private async Task CreateQueue(string queueName)
+ {
+ // Create a Service Bus admin client to create queue if it doesn't exist or to get message count
+ ServiceBusAdministrationClient SBAdminClient = new ServiceBusAdministrationClient(FullyQualifiedNamespace, new DefaultAzureCredential());
+
+ // create the OrdersQueue if it doesn't exist already
+ if (!(await SBAdminClient.QueueExistsAsync(queueName)))
+ {
+ await SBAdminClient.CreateQueueAsync(queueName);
+ }
+ }
```
-13. In the `Run()` function, inside the `OnMessage()` call, replace the contents of the `try` clause with the following code.
-
+12. Update the `RunAsync` method to include the code to receive messages.
+ ```csharp
- Trace.WriteLine("Processing", receivedMessage.SequenceNumber.ToString());
- // View the message as an OnlineOrder.
- OnlineOrder order = receivedMessage.GetBody<OnlineOrder>();
- Trace.WriteLine(order.Customer + ": " + order.Product, "ProcessingMessage");
- receivedMessage.Complete();
+ private async Task RunAsync(CancellationToken cancellationToken)
+ {
+ // TODO: Replace the following with your own logic.
+ while (!cancellationToken.IsCancellationRequested)
+ {
+ // receive message from the queue
+ ServiceBusReceivedMessage receivedMessage = await SBReceiver.ReceiveMessageAsync();
+
+ if (receivedMessage != null)
+ {
+ Trace.WriteLine("Processing", receivedMessage.SequenceNumber.ToString());
+
+ // view the message as an OnlineOrder
+ OnlineOrder order = receivedMessage.Body.ToObjectFromJson<OnlineOrder>();
+ Trace.WriteLine(order.Customer + ": " + order.Product, "ProcessingMessage");
+
+ // complete message so that it's removed from the queue
+ await SBReceiver.CompleteMessageAsync(receivedMessage);
+ }
+ }
+ }
``` 14. You have completed the application. You can test the full application by right-clicking the MultiTierApp project in Solution Explorer,
submissions. This example uses the **Worker Role with Service Bus Queue** Visual
can do this by right-clicking the emulator icon in the notification area of your taskbar and selecting **Show Compute Emulator UI**.
- ![Screenshot of what appears when you click the emulator icon. Show Compute Emulator UI is in the list of options.][19]
+ :::image type="content" source="./media/service-bus-dotnet-multi-tier-app-using-service-bus-queues/getting-started-multi-tier-38.png" alt-text="Screenshot of what appears when you click the emulator icon. Show Compute Emulator UI is in the list of options.":::
- ![Screenshot of the Microsoft Azure Compute Emulator (Express) dialog box.][20]
+ :::image type="content" source="./media/service-bus-dotnet-multi-tier-app-using-service-bus-queues/getting-started-multi-tier-39.png" alt-text="Screenshot of the Microsoft Azure Compute Emulator (Express) dialog box.":::
## Next steps To learn more about Service Bus, see the following resources:
To learn more about multi-tier scenarios, see:
* [.NET Multi-Tier Application Using Storage Tables, Queues, and Blobs][mutitierstorage]
-[0]: ./media/service-bus-dotnet-multi-tier-app-using-service-bus-queues/getting-started-multi-tier-app.png
-[1]: ./media/service-bus-dotnet-multi-tier-app-using-service-bus-queues/getting-started-multi-tier-100.png
-[2]: ./media/service-bus-dotnet-multi-tier-app-using-service-bus-queues/getting-started-multi-tier-101.png
-[9]: ./media/service-bus-dotnet-multi-tier-app-using-service-bus-queues/getting-started-multi-tier-10.png
-[10]: ./media/service-bus-dotnet-multi-tier-app-using-service-bus-queues/getting-started-multi-tier-11.png
-[11]: ./media/service-bus-dotnet-multi-tier-app-using-service-bus-queues/getting-started-multi-tier-02.png
-[12]: ./media/service-bus-dotnet-multi-tier-app-using-service-bus-queues/getting-started-multi-tier-12.png
-[13]: ./media/service-bus-dotnet-multi-tier-app-using-service-bus-queues/getting-started-multi-tier-13.png
-[14]: ./media/service-bus-dotnet-multi-tier-app-using-service-bus-queues/getting-started-multi-tier-33.png
-[15]: ./media/service-bus-dotnet-multi-tier-app-using-service-bus-queues/getting-started-multi-tier-34.png
-[16]: ./media/service-bus-dotnet-multi-tier-app-using-service-bus-queues/getting-started-multi-tier-14.png
-[17]: ./media/service-bus-dotnet-multi-tier-app-using-service-bus-queues/getting-started-multi-tier-app.png
-[18]: ./media/service-bus-dotnet-multi-tier-app-using-service-bus-queues/getting-started-multi-tier-app2.png
-
-[19]: ./media/service-bus-dotnet-multi-tier-app-using-service-bus-queues/getting-started-multi-tier-38.png
-[20]: ./media/service-bus-dotnet-multi-tier-app-using-service-bus-queues/getting-started-multi-tier-39.png
-[23]: ./media/service-bus-dotnet-multi-tier-app-using-service-bus-queues/SBWorkerRole1.png
-[25]: ./media/service-bus-dotnet-multi-tier-app-using-service-bus-queues/SBWorkerRoleProperties.png
-[26]: ./media/service-bus-dotnet-multi-tier-app-using-service-bus-queues/SBNewWorkerRole.png
-[28]: ./media/service-bus-dotnet-multi-tier-app-using-service-bus-queues/getting-started-multi-tier-40.png
+
[sbacom]: https://azure.microsoft.com/services/service-bus/
[sbacomqhowto]: service-bus-dotnet-get-started-with-queues.md
service-fabric Service Fabric Cross Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-fabric/service-fabric-cross-availability-zones.md
Last updated 04/16/2021
+
# Deploy an Azure Service Fabric cluster across Availability Zones
-Availability Zones in Azure is a high-availability offering that protects your applications and data from datacenter failures. An Availability Zone is a unique physical location equipped with independent power, cooling, and networking within an Azure region.
-Service Fabric supports clusters that span across Availability Zones by deploying node types that are pinned to specific zones. This will ensure high-availability of your applications. Azure Availability Zones are only available in select regions. For more information, see [Azure Availability Zones Overview](../availability-zones/az-overview.md).
+Availability Zones in Azure are a high-availability offering that protects your applications and data from datacenter failures. An Availability Zone is a unique physical location equipped with independent power, cooling, and networking within an Azure region.
+
+To support clusters that span Availability Zones, Azure Service Fabric deploys node types that are pinned to specific zones. Availability Zones are available only in select regions. For more information, see the [Availability Zones overview](../availability-zones/az-overview.md).
-Sample templates are available: [Service Fabric cross availability zone template](https://github.com/Azure-Samples/service-fabric-cluster-templates)
+Sample templates are available at [Service Fabric cross-Availability Zone templates](https://github.com/Azure-Samples/service-fabric-cluster-templates).
## Recommended topology for primary node type of Azure Service Fabric clusters spanning across Availability Zones
-A Service Fabric cluster distributed across Availability Zones ensures high availability of the cluster state. To span a Service Fabric cluster across zones, you must create a primary node type in each Availability Zone supported by the region. This will distribute seed nodes evenly across each of the primary node types.
-The recommended topology for the primary node type requires the resources outlined below:
+To span a Service Fabric cluster across Availability Zones, you must create a primary node type in each Availability Zone supported by the region. This distributes seed nodes evenly across each of the primary node types.
-* The cluster reliability level set to Platinum.
-* Three Node Types marked as primary.
- * Each Node Type should be mapped to its own virtual machine scale set located in different zones.
- * Each virtual machine scale set should have at least five nodes (Silver Durability).
-* A Single Public IP Resource using Standard SKU.
-* A Single Load Balancer Resource using Standard SKU.
-* A NSG referenced by the subnet in which you deploy your virtual machine scale sets.
+The recommended topology for the primary node type requires these resources:
+
+* The cluster reliability level set to `Platinum`
+* Three node types marked as primary
+ * Each node type should be mapped to its own virtual machine scale set located in a different zone.
+ * Each virtual machine scale set should have at least five nodes (Silver Durability).
+* A single public IP resource using Standard SKU
+* A single load balancer resource using Standard SKU
+* A network security group (NSG) referenced by the subnet in which you deploy your virtual machine scale sets
>[!NOTE]
-> The virtual machine scale set single placement group property must be set to true.
+>The virtual machine scale set single placement group property must be set to `true`.
+
+The following diagram shows the Azure Service Fabric Availability Zone architecture:
+
+![Diagram that shows the Azure Service Fabric Availability Zone architecture.][sf-architecture]
+
+The following sample node list depicts FD/UD formats in a virtual machine scale set spanning zones:
+
+![Screenshot that shows a sample node list of FD/UD formats in a virtual machine scale set spanning zones.][sf-multi-az-nodes]
-Diagram that shows the Azure Service Fabric Availability Zone architecture
- ![Diagram that shows the Azure Service Fabric Availability Zone architecture.][sf-architecture]
+## Distribution of service replicas across zones
-Sample node list depicting FD/UD formats in a virtual machine scale set spanning zones
+When a service is deployed on the node types that span Availability Zones, the replicas are placed to ensure that they land in separate zones. The fault domains on the nodes in each of these node types are configured with the zone information (that is, FD = fd:/zone1/1, etc.). For example, for five replicas or service instances, the distribution is 2-2-1, and the runtime will try to ensure equal distribution across zones.
- ![Sample node list depicting FD/UD formats in a virtual machine scale set spanning zones.][sf-multi-az-nodes]
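As a back-of-the-envelope illustration of that distribution (a sketch only, not how the Service Fabric runtime is implemented), spreading five replicas as evenly as possible across three zones produces the 2-2-1 split described above:

```csharp
using System;

class ReplicaSpreadExample
{
    static void Main()
    {
        // Spreading five replicas as evenly as possible across three zones
        // yields the 2-2-1 distribution described above.
        const int replicas = 5;
        string[] zones = { "zone1", "zone2", "zone3" };
        var counts = new int[zones.Length];

        for (int i = 0; i < replicas; i++)
        {
            counts[i % zones.Length]++;
        }

        for (int z = 0; z < zones.Length; z++)
        {
            Console.WriteLine($"{zones[z]}: {counts[z]} replica(s)");   // 2, 2, 1
        }
    }
}
```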
+### User service replica configuration
-**Distribution of Service replicas across zones**:
-When a service is deployed on the nodeTypes which are spanning zones, the replicas are placed to ensure they land up in separate zones. This is ensured as the fault domain's on the nodes present in each of these nodeTypes are configured with the zone information (i.e FD = fd:/zone1/1 etc..). For example: for 5 replicas or instances of a service the distribution will be 2-2-1 and runtime will try to ensure equal distribution across AZs.
+Stateful user services deployed on the node types across Availability Zones should be configured like this: replica count with target = 9, min = 5. This configuration helps the service work even when one zone goes down because six replicas will be still up in the other two zones. An application upgrade in this scenario will also be successful.
-**User Service Replica Configuration**:
-Stateful user services deployed on the cross availability zone nodeTypes should be configured with this configuration: replica count with target = 9, min = 5. This configuration will help the service to be working even when one zone goes down since 6 replicas will be still up in the other two zones. An application upgrade in such a scenario will also go through.
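For illustration, here's one way the target and minimum replica counts described above could be applied when creating a stateful service with the `System.Fabric` client API. This is a sketch under assumptions, not code from this article: the application, service, and type names are placeholders, it requires the Service Fabric client libraries, and you can equally configure the same values in your application manifest.

```csharp
using System;
using System.Fabric;
using System.Fabric.Description;
using System.Threading.Tasks;

class ReplicaConfigExample
{
    static async Task Main()
    {
        // Connects to the local cluster endpoint by default.
        var fabricClient = new FabricClient();

        var description = new StatefulServiceDescription
        {
            ApplicationName = new Uri("fabric:/MyApp"),               // placeholder
            ServiceName = new Uri("fabric:/MyApp/MyStatefulService"), // placeholder
            ServiceTypeName = "MyStatefulServiceType",                // placeholder
            HasPersistedState = true,
            PartitionSchemeDescription = new SingletonPartitionSchemeDescription(),

            // Nine target replicas (three per zone across three zones) and a
            // minimum of five, per the guidance above.
            TargetReplicaSetSize = 9,
            MinReplicaSetSize = 5
        };

        await fabricClient.ServiceManager.CreateServiceAsync(description);
    }
}
```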
+## Cluster ReliabilityLevel
-**Cluster ReliabilityLevel**:
-This defines the number of seed nodes in the cluster and also replica size of the system services. As a cross availability zone setup has a higher number of nodes, which are spread across zones to enable zone resiliency, a higher reliability value will ensure node more seed nodes and system service replicas are present and are evenly distributed across zones, so that in the event of a zone failure the cluster and the system services remain unimpacted. "ReliabilityLevel = Platinum" will ensure there are 9 seed nodes spread across zones in the cluster with 3 seeds in each zone hence that is the recommend for the cross availability zone setup.
+This value defines the number of seed nodes in the cluster and the replica size of the system services. A cross-Availability Zone setup has a higher number of nodes, which are spread across zones to enable zone resiliency.
+
+A higher `ReliabilityLevel` value ensures that more seed nodes and system service replicas are present and evenly distributed across zones, so that if a zone fails, the cluster and the system services aren't affected. `ReliabilityLevel = Platinum` (recommended) ensures that there are nine seed nodes spread across zones in the cluster, with three seeds in each zone.
-**Zone down scenario**:
-When a zone goes down, all the nodes in that zone will appear as down. Service replicas on these nodes will also be down. Since there are replicas in the other zones, the service continues to be responsive with primary replicas failing over to the zones which are functioning. The services will appear in warning state as the target replica count is not yet achieved and since the VM count is still more than min target replica size. Subsequently, Service Fabric load balancer will bring up replicas in the working zones to match the configured target replica count. At this point the services will appear healthy. When the zone which was down comes back up the load balance will again spread all the service replicas evenly across all the zones.
+### Zone-down scenario
+
+When a zone goes down, all of the nodes and service replicas for that zone appear as down. Because there are replicas in the other zones, the service continues to respond. Primary replicas fail over to the functioning zones. The services appear to be in warning states because the target replica count isn't yet achieved and the virtual machine (VM) count is still higher than the minimum target replica size.
+
+The Service Fabric load balancer brings up replicas in the working zones to match the target replica count. At this point, the services appear healthy. When the zone that was down comes back up, the load balancer will spread all of the service replicas evenly across the zones.
## Networking requirements
-### Public IP and Load Balancer Resource
-To enable the zones property on a virtual machine scale set resource, the load balancer and IP resource referenced by that virtual machine scale set must both be using a *Standard* SKU. Creating a load balancer or IP resource without the SKU property will create a Basic SKU, which does not support Availability Zones. A Standard SKU load balancer will block all traffic from the outside by default; to allow outside traffic, an NSG must be deployed to the subnet.
+
+### Public IP and load balancer resource
+
+To enable the `zones` property on a virtual machine scale set resource, the load balancer and the IP resource referenced by that virtual machine scale set must both use a Standard SKU. Creating a load balancer or IP resource without the SKU property creates a Basic SKU, which does not support Availability Zones. A Standard SKU load balancer blocks all traffic from the outside by default. To allow outside traffic, deploy an NSG to the subnet.
```json {
To enable the zones property on a virtual machine scale set resource, the load b
``` >[!NOTE]
-> It is not possible to do an in-place change of SKU on the public IP and load balancer resources. If you are migrating from existing resources which have a Basic SKU, see the migration section of this article.
+> It isn't possible to do an in-place change of SKU on the public IP and load balancer resources. If you're migrating from existing resources that have a Basic SKU, see the migration section of this article.
-### Virtual machine scale set NAT rules
-The load balancer inbound NAT rules should match the NAT pools from the virtual machine scale set. Each virtual machine scale set must have a unique inbound NAT pool.
+### NAT rules for virtual machine scale sets
+
+The inbound network address translation (NAT) rules for the load balancer should match the NAT pools from the virtual machine scale set. Each virtual machine scale set must have a unique inbound NAT pool.
```json {
The load balancer inbound NAT rules should match the NAT pools from the virtual
} ```
-### Standard SKU Load Balancer outbound rules
-Standard Load Balancer and Standard Public IP introduce new abilities and different behaviors to outbound connectivity when compared to using Basic SKUs. If you want outbound connectivity when working with Standard SKUs, you must explicitly define it either with Standard Public IP addresses or Standard public Load Balancer. For more information, see [Outbound connections](../load-balancer/load-balancer-outbound-connections.md) and [Azure Standard Load Balancer](../load-balancer/load-balancer-overview.md).
+### Outbound rules for a Standard SKU load balancer
->[!NOTE]
-> The standard template references an NSG which allows all outbound traffic by default. Inbound traffic is limited to the ports that are required for Service Fabric management operations. The NSG rules can be modified to meet your requirements.
+The Standard SKU load balancer and public IP introduce new abilities and different behaviors to outbound connectivity when compared to using Basic SKUs. If you want outbound connectivity when you're working with Standard SKUs, you must explicitly define it with either a Standard SKU public IP address or a Standard SKU load balancer. For more information, see [Outbound connections](../load-balancer/load-balancer-outbound-connections.md) and [What is Azure Load Balancer?](../load-balancer/load-balancer-overview.md).
>[!NOTE]
-> Any Service Fabric cluster making use of a Standard SKU SLB needs to ensure that each node type has a rule allowing outbound traffic on port 443. This is necessary to complete cluster setup, and any deployment without such a rule will fail.
+> The standard template references an NSG that allows all outbound traffic by default. Inbound traffic is limited to the ports that are required for Service Fabric management operations. The NSG rules can be modified to meet your requirements.
+
+>[!IMPORTANT]
+> Each node type in a Service Fabric cluster that uses a Standard SKU load balancer requires a rule allowing outbound traffic on port 443. This is necessary to complete cluster setup. Any deployment without this rule will fail.
+### Enable zones on a virtual machine scale set
-### Enabling zones on a virtual machine scale set
-To enable a zone, on a virtual machine scale set you must include the following three values in the virtual machine scale set resource.
+To enable a zone on a virtual machine scale set, include the following three values in the virtual machine scale set resource:
-* The first value is the **zones** property, which specifies which Availability Zone the virtual machine scale set will be deployed to.
-* The second value is the "singlePlacementGroup" property, which must be set to true.
-* The third value is the "faultDomainOverride" property in the Service Fabric virtual machine scale set extension. The value for this property should include only the zone in which this virtual machine scale set will be placed. Example: "faultDomainOverride": "az1" All virtual machine scale set resources must be placed in the same region because Azure Service Fabric clusters do not have cross region support.
+* The first value is the `zones` property, which specifies which Availability Zone the virtual machine scale set is deployed to.
+* The second value is the `singlePlacementGroup` property, which must be set to `true`.
+* The third value is the `faultDomainOverride` property in the Service Fabric virtual machine scale set extension. This property should include only the zone in which this virtual machine scale set will be placed. Example: `"faultDomainOverride": "az1"`. All virtual machine scale set resources must be placed in the same region because Azure Service Fabric clusters don't have cross-region support.
```json {
To enable a zone, on a virtual machine scale set you must include the following
} ```
-### Enabling multiple primary Node Types in the Service Fabric Cluster resource
-To set one or more node types as primary in a cluster resource, set the "isPrimary" property to "true". When deploying a Service Fabric cluster across Availability Zones, you should have three node types in distinct zones.
+### Enable multiple primary node types in the Service Fabric cluster resource
+
+To set one or more node types as primary in a cluster resource, set the `isPrimary` property to `true`. When you deploy a Service Fabric cluster across Availability Zones, you should have three node types in distinct zones.
```json {
To set one or more node types as primary in a cluster resource, set the "isPrima
} ```
-## Migrate to using Availability Zones from a cluster using a Basic SKU Load Balancer and a Basic SKU IP
-To migrate a cluster, which was using a Load Balancer and IP with a basic SKU, you must first create an entirely new Load Balancer and IP resource using the standard SKU. It is not possible to update these resources in-place.
+## Migrate to Availability Zones from a cluster by using a Basic SKU load balancer and a Basic SKU IP
-The new LB and IP should be referenced in the new cross Availability Zone node types that you would like to use. In the example above, three new virtual machine scale set resources were added in zones 1,2, and 3. These virtual machine scale sets reference the newly created LB and IP and are marked as primary node types in the Service Fabric Cluster Resource.
+To migrate a cluster that's using a load balancer and IP with a basic SKU, you must first create an entirely new load balancer and IP resource using the standard SKU. It isn't possible to update these resources in place.
-To begin, you will need to add the new resources to your existing Resource Manager template. These resources include:
-* A Public IP Resource using Standard SKU.
-* A Load Balancer Resource using Standard SKU.
-* A NSG referenced by the subnet in which you deploy your virtual machine scale sets.
-* Three node types marked as primary.
- * Each node type should be mapped to its own virtual machine scale set located in different zones.
- * Each virtual machine scale set should have at least five nodes (Silver Durability).
+Reference the new load balancer and IP in the new cross-Availability Zone node types that you want to use. In the previous example, three new virtual machine scale set resources were added in zones 1, 2, and 3. These virtual machine scale sets reference the newly created load balancer and IP and are marked as primary node types in the Service Fabric cluster resource.
-An example of these resources can be found in the [sample template](https://github.com/Azure-Samples/service-fabric-cluster-templates/tree/master/10-VM-Ubuntu-2-NodeType-Secure).
+1. To begin, add the new resources to your existing Azure Resource Manager template. These resources include:
-```powershell
-New-AzureRmResourceGroupDeployment `
- -ResourceGroupName $ResourceGroupName `
- -TemplateFile $Template `
- -TemplateParameterFile $Parameters
-```
+ * A public IP resource using Standard SKU
+ * A load balancer resource using Standard SKU
+ * An NSG referenced by the subnet in which you deploy your virtual machine scale sets
+ * Three node types marked as primary
+ * Each node type should be mapped to its own virtual machine scale set located in a different zone.
+ * Each virtual machine scale set should have at least five nodes (Silver Durability).
-Once the resources have finished deploying, you can begin to disable the nodes in the primary node type from the original cluster. As the nodes are disabled, the system services will migrate to the new primary node type that had been deployed in the step above.
+ An example of these resources can be found in the [sample template](https://github.com/Azure-Samples/service-fabric-cluster-templates/tree/master/10-VM-Ubuntu-2-NodeType-Secure).
-```powershell
-Connect-ServiceFabricCluster -ConnectionEndpoint $ClusterName `
- -KeepAliveIntervalInSec 10 `
- -X509Credential `
- -ServerCertThumbprint $thumb `
- -FindType FindByThumbprint `
- -FindValue $thumb `
- -StoreLocation CurrentUser `
- -StoreName My
+ ```powershell
+ New-AzureRmResourceGroupDeployment `
+ -ResourceGroupName $ResourceGroupName `
+ -TemplateFile $Template `
+ -TemplateParameterFile $Parameters
+ ```
-Write-Host "Connected to cluster"
+1. When the resources finish deploying, you can disable the nodes in the primary node type from the original cluster. When the nodes are disabled, the system services migrate to the new primary node type that you deployed previously.
-$nodeNames = @("_nt0_0", "_nt0_1", "_nt0_2", "_nt0_3", "_nt0_4")
+ ```powershell
+ Connect-ServiceFabricCluster -ConnectionEndpoint $ClusterName `
+ -KeepAliveIntervalInSec 10 `
+ -X509Credential `
+ -ServerCertThumbprint $thumb `
+ -FindType FindByThumbprint `
+ -FindValue $thumb `
+ -StoreLocation CurrentUser `
+ -StoreName My
-Write-Host "Disabling nodes..."
-foreach($name in $nodeNames) {
- Disable-ServiceFabricNode -NodeName $name -Intent RemoveNode -Force
-}
-```
+ Write-Host "Connected to cluster"
-Once the nodes are all disabled, the system services will be running on the primary node type, which is spread across zones. You can then remove the disabled nodes from the cluster. Once the nodes are removed, you can remove the original IP, Load Balancer, and virtual machine scale set resources.
+ $nodeNames = @("_nt0_0", "_nt0_1", "_nt0_2", "_nt0_3", "_nt0_4")
-```powershell
-foreach($name in $nodeNames){
- # Remove the node from the cluster
- Remove-ServiceFabricNodeState -NodeName $name -TimeoutSec 300 -Force
- Write-Host "Removed node state for node $name"
-}
+ Write-Host "Disabling nodes..."
+ foreach($name in $nodeNames) {
+ Disable-ServiceFabricNode -NodeName $name -Intent RemoveNode -Force
+ }
+ ```
-$scaleSetName="nt0"
-Remove-AzureRmVmss -ResourceGroupName $groupname -VMScaleSetName $scaleSetName -Force
+1. After the nodes are all disabled, the system services will run on the primary node type, which is spread across zones. You can then remove the disabled nodes from the cluster. After the nodes are removed, you can remove the original IP, load balancer, and virtual machine scale set resources.
-$lbname="LB-cluster-nt0"
-$oldPublicIpName="LBIP-cluster-0"
-$newPublicIpName="LBIP-cluster-1"
+ ```powershell
+ foreach($name in $nodeNames){
+ # Remove the node from the cluster
+ Remove-ServiceFabricNodeState -NodeName $name -TimeoutSec 300 -Force
+ Write-Host "Removed node state for node $name"
+ }
-Remove-AzureRmLoadBalancer -Name $lbname -ResourceGroupName $groupname -Force
-Remove-AzureRmPublicIpAddress -Name $oldPublicIpName -ResourceGroupName $groupname -Force
-```
+ $scaleSetName="nt0"
+ Remove-AzureRmVmss -ResourceGroupName $groupname -VMScaleSetName $scaleSetName -Force
-You should then remove the references to these resources from the Resource Manager template that you had deployed.
+ $lbname="LB-cluster-nt0"
+ $oldPublicIpName="LBIP-cluster-0"
+ $newPublicIpName="LBIP-cluster-1"
-The final step will involve updating the DNS name and Public IP.
+ Remove-AzureRmLoadBalancer -Name $lbname -ResourceGroupName $groupname -Force
+ Remove-AzureRmPublicIpAddress -Name $oldPublicIpName -ResourceGroupName $groupname -Force
+ ```
-```powershell
-$oldprimaryPublicIP = Get-AzureRmPublicIpAddress -Name $oldPublicIpName -ResourceGroupName $groupname
-$primaryDNSName = $oldprimaryPublicIP.DnsSettings.DomainNameLabel
-$primaryDNSFqdn = $oldprimaryPublicIP.DnsSettings.Fqdn
+1. Next, remove the references to these resources from the Resource Manager template that you deployed.
-Remove-AzureRmLoadBalancer -Name $lbname -ResourceGroupName $groupname -Force
-Remove-AzureRmPublicIpAddress -Name $oldPublicIpName -ResourceGroupName $groupname -Force
+1. Finally, update the DNS name and public IP.
-$PublicIP = Get-AzureRmPublicIpAddress -Name $newPublicIpName -ResourceGroupName $groupname
-$PublicIP.DnsSettings.DomainNameLabel = $primaryDNSName
-$PublicIP.DnsSettings.Fqdn = $primaryDNSFqdn
-Set-AzureRmPublicIpAddress -PublicIpAddress $PublicIP
+ ```powershell
+ $oldprimaryPublicIP = Get-AzureRmPublicIpAddress -Name $oldPublicIpName -ResourceGroupName $groupname
+ $primaryDNSName = $oldprimaryPublicIP.DnsSettings.DomainNameLabel
+ $primaryDNSFqdn = $oldprimaryPublicIP.DnsSettings.Fqdn
-```
+ Remove-AzureRmLoadBalancer -Name $lbname -ResourceGroupName $groupname -Force
+ Remove-AzureRmPublicIpAddress -Name $oldPublicIpName -ResourceGroupName $groupname -Force
-## (Preview) Enable multiple Availability zones in single virtual machine scale set
+ $PublicIP = Get-AzureRmPublicIpAddress -Name $newPublicIpName -ResourceGroupName $groupname
+ $PublicIP.DnsSettings.DomainNameLabel = $primaryDNSName
+ $PublicIP.DnsSettings.Fqdn = $primaryDNSFqdn
+ Set-AzureRmPublicIpAddress -PublicIpAddress $PublicIP
+
+ ```
-The previously mentioned solution uses one nodeType per AZ. The following solution will allow users to deploy 3 AZ's in the same nodeType.
+## (Preview) Enable multiple Availability Zones in a single virtual machine scale set
-**As this feature is currently in preview, it is not currently supported for production scenarios.**
+The previous solution uses one node type per Availability Zone. The following solution allows users to deploy a single node type across three Availability Zones.
-Full sample template is present [here](https://github.com/Azure-Samples/service-fabric-cluster-templates/tree/master/15-VM-Windows-Multiple-AZ-Secure).
+> [!NOTE]
+> Because this feature is currently in preview, it's not currently supported for production scenarios.
-![Azure Service Fabric Availability Zone Architecture][sf-multi-az-arch]
+A full sample template is available on [GitHub](https://github.com/Azure-Samples/service-fabric-cluster-templates/tree/master/15-VM-Windows-Multiple-AZ-Secure).
+
+![Diagram of the Azure Service Fabric Availability Zone architecture.][sf-multi-az-arch]
### Configuring zones on a virtual machine scale set
-To enable zones on a virtual machine scale set you must include the following three values in the virtual machine scale set resource.
-* The first value is the **zones** property, which specifies the Availability Zones present in the virtual machine scale set.
-* The second value is the "singlePlacementGroup" property, which must be set to true. **The scale set spanned across 3 AZ's can scale upto 300 VMs even with "singlePlacementGroup = true".**
-* The third value is "zoneBalance", which ensures strict zone balancing. This should be "true". This ensures that the VM distributions across zones are not unbalanced, ensuring that when one of the zones goes down, the other two zones have sufficient VMs to ensure the cluster keeps running un-interrupted. A cluster with an unbalanced VM distribution may not survive a zone down scenario as that zone might have the majority of the VMs. Unbalanced VM distribution across zones will also lead to service placement related issues & infrastructure updates getting stuck. Read about [zoneBalancing](../virtual-machine-scale-sets/virtual-machine-scale-sets-use-availability-zones.md#zone-balancing).
-* The FaultDomain and UpgradeDomain overrides are not required to be configured.
+To enable zones on a virtual machine scale set, include the following three values in the virtual machine scale set resource:
+
+* The first value is the `zones` property, which specifies the Availability Zones that are in the virtual machine scale set.
+* The second value is the `singlePlacementGroup` property, which must be set to `true`. A scale set that spans three Availability Zones can scale up to 300 VMs even with `singlePlacementGroup = true`.
+* The third value is `zoneBalance`, which ensures strict zone balancing. This value should be `true`. Strict balancing keeps the VM distribution across zones from becoming skewed, so when one zone goes down, the other two zones have enough VMs to keep the cluster running.
+
+ A cluster with an unbalanced VM distribution might not survive a zone-down scenario because that zone might have the majority of the VMs. Unbalanced VM distribution across zones also leads to service placement issues and infrastructure updates getting stuck. Read more about [zoneBalancing](../virtual-machine-scale-sets/virtual-machine-scale-sets-use-availability-zones.md#zone-balancing).
+
+You don't need to configure the `FaultDomain` and `UpgradeDomain` overrides.
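As a hedged, non-authoritative companion to the template snippet that follows, the same three settings can also be expressed with PowerShell. The cmdlet and parameter names here assume Az.Compute's `New-AzVmssConfig`; the location, SKU, and capacity values are placeholders:

```powershell
# Minimal sketch (assumes the Az.Compute module). Splatting keeps the three
# zone-related settings visible; all other values are placeholders.
$vmssParams = @{
    Location             = 'eastus2'
    SkuName              = 'Standard_D2s_v3'
    SkuCapacity          = 15               # at least 15 VMs for Silver or higher durability
    UpgradePolicyMode    = 'Automatic'
    Zone                 = '1','2','3'      # zones: span all three Availability Zones
    SinglePlacementGroup = $true            # singlePlacementGroup must be true
    ZoneBalance          = $true            # zoneBalance: strict zone balancing
}
$vmssConfig = New-AzVmssConfig @vmssParams
```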
```json {
To enable zones on a virtual machine scale set you must include the following th
``` >[!NOTE]
-> * **Service Fabric clusters should have at least one Primary nodeType. DurabilityLevel of Primary nodeTypes should be Silver or above.**
-> * The AZ spanning virtual machine scale set should be configured with at least 3 Availability zones irrespective of the durabilityLevel.
-> * AZ spanning virtual machine scale set with Silver durability (or above), should have at least 15 VMs.
-> * AZ spanning virtual machine scale set with Bronze durability, should have at least 6 VMs.
-
-### Enabling the support for multiple zones in the Service Fabric nodeType
-The Service Fabric nodeType must be enabled to support multiple availability zones.
-
-* The first value is **multipleAvailabilityZones** which should be set to true for the nodeType.
-* The second value is **sfZonalUpgradeMode** and is optional. This property canΓÇÖt be modified if a node type with multiple AZΓÇÖs is already present in the cluster.
- The property controls the logical grouping of VMs in upgrade domains.
- **If value is set to "Parallel":** VMs under the node type will be grouped in UDs ignoring the zone info in 5 UDs. This will result in UD0 across all zones to get upgraded at the same time. This deployment mode is faster for upgrades but is not recommended as it goes against the safe deployment practices, which state that the updates should be applied only one zone at a time.
- **If value is omitted or set to "Hierarchical":** VMs will be grouped to reflect the zonal distribution in up to 15 UDs. Each of the 3 zones will have 5 UDs. This ensures that the updates go zone wise, moving to next zone only after completing 5 UDs within the first zone, slowly across 15 UDs (3 zones, 5 UDs), which is safer from the perspective of the cluster and the user application.
- This property only defines the upgrade behavior for ServiceFabric application and code upgrades. The underlying virtual machine scale set upgrades will still be parallel in all AZΓÇÖs.
- This property will not have any impact on the UD distribution for node types which do not have multiple zones enabled.
-* The third value is **vmssZonalUpgradeMode = Parallel**. This is a *mandatory* property to be configured in the cluster, if a nodeType with multiple AZs is added. This property defines the upgrade mode for the virtual machine scale set updates which will happen in parallel in all AZΓÇÖs at once.
- Right now this property can only be set to parallel.
-* The Service Fabric cluster resource apiVersion should be "2020-12-01-preview" or higher.
-* The cluster code version should be "7.2.445" or higher.
+>
+> * Service Fabric clusters should have at least one primary node type. The durability level of primary node types should be Silver or higher.
+> * A virtual machine scale set that spans Availability Zones should be configured with at least three Availability Zones, no matter the durability level.
+> * A virtual machine scale set that spans Availability Zones and has Silver or higher durability should have at least 15 VMs.
+> * A virtual machine scale set that spans Availability Zones and has Bronze durability should have at least six VMs.
+
+### Enable support for multiple zones in the Service Fabric node type
+
+The Service Fabric node type must be enabled to support multiple Availability Zones.
+
+* The first value is `multipleAvailabilityZones`, which should be set to `true` for the node type.
+* The second value is `sfZonalUpgradeMode` and is optional. This property can't be modified if a node type with multiple Availability Zones is already present in the cluster.
+ This property controls the logical grouping of VMs in upgrade domains (UDs).
+
+  * If this value is set to `Parallel`: VMs under the node type are grouped into five UDs, ignoring the zone info, so the same UD is upgraded across all zones at the same time. This deployment mode is faster for upgrades, but we don't recommend it because it goes against safe deployment practice (SDP) guidelines, which state that updates should be applied to one zone at a time.
+  * If this value is omitted or set to `Hierarchical`: VMs are grouped to reflect the zonal distribution in up to 15 UDs. Each of the three zones has five UDs. This ensures that the zones are updated one at a time, moving to the next zone only after completing all five UDs within the first zone. This update process is safer for the cluster and the user application.
+
+ This property only defines the upgrade behavior for Service Fabric application and code upgrades. The underlying virtual machine scale set upgrades are still parallel in all Availability Zones. This property doesn't affect the UD distribution for node types that don't have multiple zones enabled.
+* The third value is `vmssZonalUpgradeMode = Parallel`. This property is mandatory if a node type with multiple Availability Zones is added. This property defines the upgrade mode for the virtual machine scale set updates that happen in all Availability Zones at once.
+
+  Currently, this property can only be set to `Parallel`.
+
+>[!IMPORTANT]
+>The Service Fabric cluster resource API version should be 2020-12-01-preview or later.
+>
+>The cluster code version should be 7.2.445 or later.
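One hedged way to verify the code-version requirement before adding the node type is sketched below; the cluster name is a placeholder, and the property path assumes the `Microsoft.ServiceFabric/clusters` resource shape:

```powershell
# Minimal sketch: read the cluster resource and check its code version.
$cluster = Get-AzResource -ResourceGroupName $groupname `
    -ResourceType 'Microsoft.ServiceFabric/clusters' `
    -Name 'mysfcluster' -ExpandProperties
$cluster.Properties.clusterCodeVersion   # expect 7.2.445 or later
```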
```json {
The Service Fabric nodeType must be enabled to support multiple availability zon
``` >[!NOTE]
-> * Public IP and Load Balancer Resources should be using the Standard SKU as described earlier in the article.
-> * "multipleAvailabilityZones" property on the nodeType can only be defined at the time of nodeType creation and can't be modified later. Hence, existing nodeTypes can't be configured with this property.
-> * When "sfZonalUpgradeMode" is omitted or set to "Hierarchical", the cluster and application deployments will be slower as there are more upgrade domains in the cluster. It is important to correctly adjust the upgrade policy timeouts to incorporate for the upgrade time duration for 15 upgrade domains. The upgrade policy for both app and cluster should be updated to ensure the deployment does not exceed the Azure Resource Service deployment timeouts of 12 hours. This means deployment should not take more than 12 hours for 15 UDs i.e should not take more than 40 min/UD.
-> * Set the cluster **reliabilityLevel = Platinum** to ensure the cluster survives the one zone down scenario.
+>
+> * Public IP and load balancer resources should use the Standard SKU described earlier in the article.
+> * The `multipleAvailabilityZones` property on the node type can only be defined when the node type is created and can't be modified later. Existing node types can't be configured with this property.
+> * When `sfZonalUpgradeMode` is omitted or set to `Hierarchical`, the cluster and application deployments will be slower because there are more upgrade domains in the cluster. It's important to correctly adjust the upgrade policy timeouts to account for the upgrade time required for 15 upgrade domains. The upgrade policy for both the app and the cluster should be updated to ensure that the deployment doesn't exceed the Azure Resource Service deployment time limit of 12 hours. This means that deployment shouldn't take more than 12 hours for 15 UDs (that is, shouldn't take more than 40 minutes for each UD).
+> * Set the cluster reliability level to `Platinum` to ensure that the cluster survives a one-zone-down scenario.
->[!NOTE]
-> For best practice we recommend sfZonalUpgradeMode set to Hierarchical or be omitted. Deployment will follow the zonal distribution of VMs impacting a smaller amount of replicas and/or instances making them safer.
-> Use sfZonalUpgradeMode set to Parallel if deployment speed is a priority or only stateless workload runs on the node type with multiple AZ's. This will result in the UD walk to happen in parallel in all AZ's.
+>[!TIP]
+> We recommend setting `sfZonalUpgradeMode` to `Hierarchical` or omitting it. Deployment will follow the zonal distribution of VMs, affecting a smaller number of replicas or instances and making deployments safer.
+> Use `sfZonalUpgradeMode` set to `Parallel` if deployment speed is a priority or only stateless workloads run on the node type with multiple Availability Zones. This causes the UD walk to happen in parallel in all Availability Zones.
+
+### Migrate to the node type with multiple Availability Zones
-### Migration to the node type with multiple Availability Zones
-For all migration scenarios, a new node type needs to added which will have multiple availability zones supported. An existing nodeType can't be migrated to support multiple zones.
-The article [here](./service-fabric-scale-up-primary-node-type.md) captures the detailed steps of adding a new nodeType and also adding the other resources required for the new node type like the IP and LB resources.
-The same article also describes how to retire the existing node type after the nodeType with multiple Availability zones is added to the cluster.
+For all migration scenarios, you need to add a new node type that supports multiple Availability Zones. An existing node type can't be migrated to support multiple zones.
+The [Scale up a Service Fabric cluster primary node type](./service-fabric-scale-up-primary-node-type.md) article includes detailed steps to add a new node type and the other resources required for the new node type, such as IP and load balancer resources. That article also describes how to retire the existing node type after a new node type with multiple Availability Zones is added to the cluster.
-* Migration from a nodeType which is using Basic SKU for LB and IP resources:
- This is already described [here](#migrate-to-using-availability-zones-from-a-cluster-using-a-basic-sku-load-balancer-and-a-basic-sku-ip) for the solution with one node type per AZ.
- For the new node type, the only difference is that there is only 1 virtual machine scale set and 1 node type for all AZ's instead of 1 each per AZ.
-* Migration from a nodeType which is using the Standard SKU for LB and IP resources with NSG:
- Follow the same procedure as described above with the exception that there is no need to add new LB, IP and NSG resources, and the same resources can be reused in the new nodeType.
+* Migration from a node type that uses a Basic SKU load balancer and IP resources: This process is already described in [a previous section](#migrate-to-availability-zones-from-a-cluster-by-using-a-basic-sku-load-balancer-and-a-basic-sku-ip) for the solution with one node type per Availability Zone.
+ For the new node type, the only difference is that there's only one virtual machine scale set and one node type for all Availability Zones instead of one each per Availability Zone.
+* Migration from a node type that uses the Standard SKU load balancer and IP resources with an NSG: Follow the same procedure described previously. However, there's no need to add new load balancer, IP, and NSG resources. The same resources can be reused in the new node type.
[sf-architecture]: ./media/service-fabric-cross-availability-zones/sf-cross-az-topology.png [sf-multi-az-arch]: ./media/service-fabric-cross-availability-zones/sf-multi-az-topology.png
site-recovery Azure To Azure How To Enable Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/site-recovery/azure-to-azure-how-to-enable-policy.md
Multiple Disks | Supported
Availability Sets | Supported Availability Zones | Supported Azure Disk Encryption (ADE) enabled VMs | Not supported
-Proximity Placement Groups (PPG) | Not supported
+Proximity Placement Groups (PPG) | Supported
Customer-managed keys (CMK) enabled disks | Not supported Storage spaces direct (S2D) clusters | Not supported Azure Resource Manager Deployment Model | Supported Classic Deployment Model | Not supported Zone to Zone DR | Supported
-Azure Disk Encryption v1 | Not supported
-Azure Disk Encryption v2 | Not supported
-Interoperability with Azure Backup | Not supported
-Hot add/remove of disks | Not supported
Interoperability with other policies applied as default by Azure (if any) | Supported >[!NOTE]
->If a not-supported VM is created within the scope of policy, Site Recovery will not be enabled for them. However, they wil reflect as _Non-complaint_ in Resource Compliance.
+>In the following cases, Site Recovery will not be enabled for the VMs. However, they will be reflected as _Non-compliant_ in Resource Compliance:
+>1. If a not-supported VM is created within the scope of the policy.
+>1. If a VM is part of both an Availability Set and a PPG.
## Create a Policy Assignment In this section, you create a policy assignment that enables Azure Site Recovery for all newly created resources.
site-recovery Site Recovery Ipconfig Cmdlet Parameter Deprecation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/site-recovery/site-recovery-ipconfig-cmdlet-parameter-deprecation.md
This article describes the deprecation, the corresponding implications, and the alternative options available for the customers for the following scenario:
-Configuring Primary IP Config settings for Failover or Test Failover. This cmdlet impacts all the customers of Azure to Azure DR scenario using the cmdlet New-AzRecoveryServicesAsrVMNicConfig.
+Configuring Primary IP Config settings for Failover or Test Failover.
+
+This change impacts all customers of the Azure to Azure DR scenario who use the New-AzRecoveryServicesAsrVMNicConfig cmdlet in _Az PowerShell 5.9.0 and above_.
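As a hedged sketch of the replacement pattern: the `New-AzRecoveryServicesAsrVMNicIPConfig` cmdlet and every parameter name shown here are assumptions based on the Az 5.9.0 change, not verified syntax:

```powershell
# Hypothetical sketch: build the primary IP config as a separate object, then
# pass it to New-AzRecoveryServicesAsrVMNicConfig. Parameter names are assumptions.
$ipConfig = New-AzRecoveryServicesAsrVMNicIPConfig -IpConfigName 'ipconfig1' `
    -RecoverySubnetName 'recovery-subnet' -RecoveryStaticIPAddress '10.1.0.5'
$nicConfig = New-AzRecoveryServicesAsrVMNicConfig -NicId $nicId `
    -ReplicationProtectedItem $protectedItem -IPConfig @($ipConfig)
```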
> [!IMPORTANT] > Customers are advised to take the remediation steps as early as possible to avoid any disruption to their environment.
storage Storage Blobs Static Site Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/storage-blobs-static-site-github-actions.md
Previously updated : 01/11/2021 Last updated : 05/05/2021
In the example above, replace the placeholders with your subscription ID and res
creds: ${{ secrets.AZURE_CREDENTIALS }} ```
-1. Use the Azure CLI action to upload your code to blob storage and to purge your CDN endpoint. For `az storage blob upload-batch`, replace the placeholder with your storage account name. The script will upload to the `$web` container. For `az cdn endpoint purge`, replace the placeholders with your CDN profile name, CDN endpoint name, and resource group.
+1. Use the Azure CLI action to upload your code to blob storage and to purge your CDN endpoint. For `az storage blob upload-batch`, replace the placeholder with your storage account name. The script will upload to the `$web` container. For `az cdn endpoint purge`, replace the placeholders with your CDN profile name, CDN endpoint name, and resource group. To speed up your CDN purge, you can add the `--no-wait` option to `az cdn endpoint purge`.
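   The same two steps can be run ad hoc from a shell. A minimal sketch with placeholder resource names:

   ```powershell
   # Minimal sketch: upload the site to the $web container, then purge the CDN
   # endpoint; --no-wait returns immediately instead of waiting for the purge.
   az storage blob upload-batch --account-name mystorageaccount -d '$web' -s .
   az cdn endpoint purge --resource-group myresourcegroup --profile-name mycdnprofile `
       --name mycdnendpoint --content-paths '/*' --no-wait
   ```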
```yaml - name: Upload to blob storage
storage Storage Explorer Support Policy Lifecycle https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/storage-explorer-support-policy-lifecycle.md
This table describes the release date and the end of support date for each relea
| Storage Explorer version | Release date | End of support date | |:-:|::|:-:|
+| v1.19.1 | April 29, 2021 | April 29, 2022 |
| v1.19.0 | April 15, 2021 | April 15, 2022 | | v1.18.1 | March 4, 2021 | March 4, 2022 | | v1.18.0 | March 1, 2021 | March 1, 2022 |
synapse-analytics Tutorial Logical Data Warehouse https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/tutorial-logical-data-warehouse.md
CREATE EXTERNAL DATA SOURCE ecdc_cases WITH (
A caller may access a data source without a credential if the owner of the data source allowed anonymous access or gave explicit access to the Azure AD identity of the caller. You can explicitly define a custom credential that will be used while accessing data on an external data source.
-- Managed Identity of the Synapse workspace
-- Shared Access Signature of the Azure storage
-- Read-only Cosmos Db account key
+- [Managed Identity](develop-storage-files-storage-access-control.md?tabs=managed-identity) of the Synapse workspace
+- [Shared Access Signature](develop-storage-files-storage-access-control.md?tabs=shared-access-signature) of the Azure storage
+- Read-only Cosmos DB account key that enables you to read Cosmos DB analytical storage.
As a prerequisite, you will need to create a master key in the database: ```sql
In order to access Cosmos DB analytical storage, you need to define a credential
```sql CREATE DATABASE SCOPED CREDENTIAL MyCosmosDbAccountCredential
-WITH IDENTITY = 'SHARED ACCESS SIGNATURE', SECRET = 's5zarR2pT0JWH9k8roipnWxUYBegOuFGjJpSjGlR36y86cW0GQ6RaaG8kGjsRAQoWMw1QKTkkX8HQtFpJjC8Hg==';
+WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
+ SECRET = 's5zarR2pT0JWH9k8roipnWxUYBegOuFGjJpSjGlR36y86cW0GQ6RaaG8kGjsRAQoWMw1QKTkkX8HQtFpJjC8Hg==';
``` ### Define external file formats
The following external table is referencing the ECDC COVID parquet file placed i
```sql create external table ecdc_adls.cases (
- date_rep date,
- day smallint,
- month smallint,
- year smallint,
- cases smallint,
- deaths smallint,
- countries_and_territories varchar(256),
- geo_id varchar(60),
- country_territory_code varchar(16),
- pop_data_2018 int,
- continent_exp varchar(32),
- load_date datetime2(7),
- iso_country varchar(16)
+ date_rep date,
+ day smallint,
+ month smallint,
+ year smallint,
+ cases smallint,
+ deaths smallint,
+ countries_and_territories varchar(256),
+ geo_id varchar(60),
+ country_territory_code varchar(16),
+ pop_data_2018 int,
+ continent_exp varchar(32),
+ load_date datetime2(7),
+ iso_country varchar(16)
) with ( data_source= ecdc_cases, location = 'latest/ecdc_cases.parquet',
The security rules depend on your security policies. Some generic guidelines are
- You should provide `SELECT` permission only to the tables that some user should be able to use. - If you are providing access to data using the views, you should grant `REFERENCES` permission to the credential that will be used to access external data source.
+This user has the minimal permissions needed to query external data. If you want to create a power user who can set up permissions, external tables, and views, you can grant the `CONTROL` permission to the user:
+
+```sql
+GRANT CONTROL TO [jovan@contoso.com]
+```
+ ## Next steps - To learn how to connect serverless SQL pool to Power BI Desktop and create reports, see [Connect serverless SQL pool to Power BI Desktop and create reports](tutorial-connect-power-bi-desktop.md).
virtual-machines Custom Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/custom-data.md
We advise **not** to store sensitive data in custom data. For more information,
### Is custom data made available in IMDS?
-No, this feature is not currently available.
+Custom data is not available in IMDS. We suggest using user data through IMDS instead. For more information, see [User data through Azure Instance Metadata Service](./linux/instance-metadata-service.md?tabs=linux#get-user-data).
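A hedged sketch of reading user data from inside a VM follows; the endpoint and api-version are taken from the linked IMDS article, and the response is base64-encoded:

```powershell
# Minimal sketch: query IMDS for user data and decode it.
$encoded = Invoke-RestMethod -Headers @{ Metadata = 'true' } -Method GET `
    -Uri 'http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text'
[System.Text.Encoding]::UTF8.GetString([Convert]::FromBase64String($encoded))
```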
virtual-machines Edv4 Edsv4 Series https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/edv4-edsv4-series.md
Edsv4-series sizes run on the Intel&reg; Xeon&reg; Platinum 8272CL (Cascade Lake
| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | <sup>**</sup> Max cached and temp storage throughput: IOPS/MBps (cache size in GiB) | Max uncached disk throughput: IOPS/MBps | Max NICs|Expected Network bandwidth (Mbps) | |||||||||| | Standard_E2ds_v4 | 2 | 16 | 75 | 4 | 19000/120(50) | 3200/48 | 2|1000 |
-| Standard_E4ds_v4 | 4 | 32 | 150 | 8 | 38500/242(100) | 6400/96 | 2|2000 |
-| Standard_E8ds_v4 | 8 | 64 | 300 | 16 | 77000/485(200) | 12800/192 | 4|4000 |
-| Standard_E16ds_v4 | 16 | 128 | 600 | 32 | 154000/968(400) | 25600/384 | 8|8000 |
+| Standard_E4ds_v4 <sup>1</sup> | 4 | 32 | 150 | 8 | 38500/242(100) | 6400/96 | 2|2000 |
+| Standard_E8ds_v4 <sup>1</sup> | 8 | 64 | 300 | 16 | 77000/485(200) | 12800/192 | 4|4000 |
+| Standard_E16ds_v4 <sup>1</sup> | 16 | 128 | 600 | 32 | 154000/968(400) | 25600/384 | 8|8000 |
| Standard_E20ds_v4 | 20 | 160 | 750 | 32 | 193000/1211(500) | 32000/480 | 8|10000 |
-| Standard_E32ds_v4 | 32 | 256 | 1200 | 32 | 308000/1936(800) | 51200/768 | 8|16000 |
+| Standard_E32ds_v4 <sup>1</sup> | 32 | 256 | 1200 | 32 | 308000/1936(800) | 51200/768 | 8|16000 |
| Standard_E48ds_v4 | 48 | 384 | 1800 | 32 | 462000/2904(1200) | 76800/1152 | 8|24000 | | Standard_E64ds_v4 <sup>1</sup> | 64 | 504 | 2400 | 32 | 615000/3872(1600) | 80000/1200 | 8|30000 | | Standard_E80ids_v4 <sup>2</sup> | 80 | 504 | 2400 | 32 | 615000/3872(1600) | 80000/1500 | 8|30000 |
-<sup>1</sup> [Constrained core sizes available)](./constrained-vcpu.md).
+<sup>1</sup> [Constrained core sizes available](./constrained-vcpu.md).
<sup>2</sup> Instance is isolated to hardware dedicated to a single customer.
virtual-machines Generation 2 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/generation-2.md
Generation 2 VMs support the following Marketplace images:
* Windows 10 Pro, Windows 10 Enterprise * SUSE Linux Enterprise Server 15 SP1 * SUSE Linux Enterprise Server 12 SP4
-* Ubuntu Server 16.04, 18.04, 19.04, 19.10
-* RHEL 8.2, 8.1, 8.0, 7.9, 7.7, 7.6, 7.5, 7.4, 7.0
-* Cent OS 8.1, 8.0, 7.7, 7.6, 7.5, 7.4
-* Oracle Linux 7.7, 7.7-CI
+* Ubuntu Server 16.04, 18.04, 19.04, 19.10, 20.04
+* RHEL 8.3, 8.2, 8.1, 8.0, 7.9, 7.7, 7.6, 7.5, 7.4, 7.0
+* CentOS 8.3, 8.2, 8.1, 8.0, 7.7, 7.6, 7.5, 7.4
+* Oracle Linux 7.8, 7.7, 7.7-CI
> [!NOTE] > Specific Virtual machine sizes like Mv2-Series may only support a subset of these images - please look at the relevant virtual machine size documentation for complete details.
virtual-machines Hb Series https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/hb-series.md
HB-series VMs feature 100 Gb/sec Mellanox EDR InfiniBand. These VMs are connecte
[Memory Preserving Updates](maintenance-and-updates.md): Not Supported<br> [VM Generation Support](generation-2.md): Generation 1 and 2<br> [Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Supported ([Learn more](https://techcommunity.microsoft.com/t5/azure-compute/accelerated-networking-on-hb-hc-hbv2-and-ndv2/ba-p/2067965) about performance and potential issues) <br>
-[Ephemeral OS Disks](ephemeral-os-disks.md): Not Supported <br>
+[Ephemeral OS Disks](ephemeral-os-disks.md): Supported ([In preview](ephemeral-os-disks.md#previewephemeral-os-disks-can-now-be-stored-on-temp-disks))<br>
<br> | Size | vCPU | Processor | Memory (GiB) | Memory bandwidth GB/s | Base CPU frequency (GHz) | All-cores frequency (GHz, peak) | Single-core frequency (GHz, peak) | RDMA performance (Gb/s) | MPI support | Temp storage (GiB) | Max data disks | Max Ethernet vNICs |
virtual-machines Hbv2 Series https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/hbv2-series.md
HBv2-series VMs feature 200 Gb/sec Mellanox HDR InfiniBand. These VMs are connec
[Memory Preserving Updates](maintenance-and-updates.md): Not Supported<br> [VM Generation Support](generation-2.md): Generation 1 and 2<br> [Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Supported ([Learn more](https://techcommunity.microsoft.com/t5/azure-compute/accelerated-networking-on-hb-hc-hbv2-and-ndv2/ba-p/2067965) about performance and potential issues) <br>
-[Ephemeral OS Disks](ephemeral-os-disks.md): Not Supported <br>
+[Ephemeral OS Disks](ephemeral-os-disks.md): Supported ([In preview](ephemeral-os-disks.md#previewephemeral-os-disks-can-now-be-stored-on-temp-disks))<br>
<br> | Size | vCPU | Processor | Memory (GiB) | Memory bandwidth GB/s | Base CPU frequency (GHz) | All-cores frequency (GHz, peak) | Single-core frequency (GHz, peak) | RDMA performance (Gb/s) | MPI support | Temp storage (GiB) | Max data disks | Max Ethernet vNICs |
virtual-machines Hbv3 Series https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/hbv3-series.md
All HBv3-series VMs feature 200 Gb/sec HDR InfiniBand from NVIDIA Networking to
[Memory Preserving Updates](maintenance-and-updates.md): Not Supported<br> [VM Generation Support](generation-2.md): Generation 1 and 2<br> [Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Coming soon<br>
-[Ephemeral OS Disks](ephemeral-os-disks.md): Not Supported <br>
+[Ephemeral OS Disks](ephemeral-os-disks.md): Supported ([In preview](ephemeral-os-disks.md#previewephemeral-os-disks-can-now-be-stored-on-temp-disks))<br>
<br> |Size |vCPU |Processor |Memory (GiB) |Memory bandwidth GB/s |Base CPU frequency (GHz) |All-cores frequency (GHz, peak) |Single-core frequency (GHz, peak) |RDMA performance (Gb/s) |MPI support |Temp storage (GiB) |Max data disks |Max Ethernet vNICs |
virtual-machines Hc Series https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/hc-series.md
HC-series VMs feature 100 Gb/sec Mellanox EDR InfiniBand. These VMs are connecte
[Memory Preserving Updates](maintenance-and-updates.md): Not Supported<br> [VM Generation Support](generation-2.md): Generation 1 and 2<br> [Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Supported ([Learn more](https://techcommunity.microsoft.com/t5/azure-compute/accelerated-networking-on-hb-hc-hbv2-and-ndv2/ba-p/2067965) about performance and potential issues)<br>
-[Ephemeral OS Disks](ephemeral-os-disks.md): Supported <br>
+[Ephemeral OS Disks](ephemeral-os-disks.md): Supported ([In preview](ephemeral-os-disks.md#previewephemeral-os-disks-can-now-be-stored-on-temp-disks))<br>
<br> | Size | vCPU | Processor | Memory (GiB) | Memory bandwidth GB/s | Base CPU frequency (GHz) | All-cores frequency (GHz, peak) | Single-core frequency (GHz, peak) | RDMA performance (Gb/s) | MPI support | Temp storage (GiB) | Max data disks | Max Ethernet vNICs |
virtual-machines Capture Image https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/linux/capture-image.md
Use the Azure CLI to mark the VM as generalized and capture the image. In the fo
```azurecli az image create \ --resource-group myResourceGroup \
- --name myImage --source myVM
+ --name myImage --source myVM
``` > [!NOTE] > The image is created in the same resource group as your source VM. You can create VMs in any resource group within your subscription from this image. From a management perspective, you may wish to create a specific resource group for your VM resources and images. >
+ > If you are capturing an image of a generation 2 VM, also use the `--hyper-v-generation V2` parameter. For more information, see [Generation 2 VMs](../generation-2.md); a sketch follows after this note.
+ >
> If you would like to store your image in zone-resilient storage, you need to create it in a region that supports [availability zones](../../availability-zones/az-overview.md) and include the `--zone-resilient true` parameter. This command returns JSON that describes the VM image. Save this output for later reference.
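For a generation 2 source VM, the capture command with the extra parameter looks roughly like this (the resource names are the same placeholders used above):

```powershell
# Hedged sketch: capture an image of a generation 2 VM.
az image create `
    --resource-group myResourceGroup `
    --name myImage --source myVM `
    --hyper-v-generation V2
```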
virtual-machines Change Vm Size https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/linux/change-vm-size.md
Title: How to resize a Linux VM with the Azure CLI
-description: How to scale up or scale down a Linux virtual machine, by changing the VM size.
+ Title: How to resize a VM with the Azure CLI
+description: How to scale up or scale down a virtual machine, by changing the VM size.
Last updated 02/10/2017 -
-# Resize a Linux virtual machine using Azure CLI
+# Resize a virtual machine using Azure CLI
-After you provision a virtual machine (VM), you can scale the VM up or down by changing the [VM size][vm-sizes]. In some cases, you must deallocate the VM first. You need to deallocate the VM if the desired size is not available on the hardware cluster that is hosting the VM. This article details how to resize a Linux VM with the Azure CLI.
+After you provision a virtual machine (VM), you can scale the VM up or down by changing the [VM size][vm-sizes]. In some cases, you must deallocate the VM first. You need to deallocate the VM if the desired size is not available on the hardware cluster that is hosting the VM. This article details how to resize a VM with the Azure CLI.
## Resize a VM To resize a VM, you need the latest [Azure CLI](/cli/azure/install-az-cli2) installed and logged in to an Azure account using [az login](/cli/azure/reference-index).
To resize a VM, you need the latest [Azure CLI](/cli/azure/install-az-cli2) inst
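A hedged sketch of the core flow (resource names and the target size are placeholders):

```azurecli
# Minimal sketch: check which sizes the current hardware cluster offers,
# then resize in place if the target size is listed.
az vm list-vm-resize-options --resource-group myResourceGroup --name myVM --output table
az vm resize --resource-group myResourceGroup --name myVM --size Standard_DS3_v2
```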
> Deallocating the VM also releases any dynamic IP addresses assigned to the VM. The OS and data disks are not affected. ## Next steps
-For additional scalability, run multiple VM instances and scale out. For more information, see [Automatically scale Linux machines in a Virtual Machine Scale Set][scale-set].
+For additional scalability, run multiple VM instances and scale out. For more information, see [Automatically scale machines in a Virtual Machine Scale Set][scale-set].
<!-- links --> [boot-diagnostics]: https://azure.microsoft.com/blog/boot-diagnostics-for-virtual-machines-v2/
virtual-machines Nct4 V3 Series https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/nct4-v3-series.md
The NCasT4_v3-series virtual machines are powered by [Nvidia Tesla T4](https://w
[Memory Preserving Updates](maintenance-and-updates.md): Not Supported<br> [VM Generation Support](generation-2.md): Generation 1 and 2<br> [Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Supported<br>
-[Ephemeral OS Disks](ephemeral-os-disks.md): Not Supported <br>
+[Ephemeral OS Disks](ephemeral-os-disks.md): Supported ([In preview](ephemeral-os-disks.md#previewephemeral-os-disks-can-now-be-stored-on-temp-disks))<br>
Nvidia NVLink Interconnect: Not Supported<br> <br>
virtual-machines Ncv2 Series https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/ncv2-series.md
The NC24rs v2 configuration provides a low latency, high-throughput network inte
[Memory Preserving Updates](maintenance-and-updates.md): Not Supported<br> [VM Generation Support](generation-2.md): Generation 1 and 2<br> [Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Not Supported<br>
-[Ephemeral OS Disks](ephemeral-os-disks.md): Supported <br>
+[Ephemeral OS Disks](ephemeral-os-disks.md): Supported ([In preview](ephemeral-os-disks.md#previewephemeral-os-disks-can-now-be-stored-on-temp-disks))<br>
Nvidia NVLink Interconnect: Not Supported > [!IMPORTANT]
virtual-machines Ncv3 Series https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/ncv3-series.md
NCv3-series VMs are powered by NVIDIA Tesla V100 GPUs. These GPUs can provide 1.
[Memory Preserving Updates](maintenance-and-updates.md): Not Supported<br> [VM Generation Support](generation-2.md): Generation 1 and 2<br> [Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Not Supported<br>
-[Ephemeral OS Disks](ephemeral-os-disks.md): Supported <br>
+[Ephemeral OS Disks](ephemeral-os-disks.md): Supported ([In preview](ephemeral-os-disks.md#previewephemeral-os-disks-can-now-be-stored-on-temp-disks))<br>
Nvidia NVLink Interconnect: Not Supported<br> > [!IMPORTANT]
virtual-machines Nd Series https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/nd-series.md
The ND-series virtual machines are a new addition to the GPU family designed for
[Memory Preserving Updates](maintenance-and-updates.md): Not Supported<br> [VM Generation Support](generation-2.md): Generation 1 and 2<br> [Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Not Supported<br>
-[Ephemeral OS Disks](ephemeral-os-disks.md): Supported <br>
+[Ephemeral OS Disks](ephemeral-os-disks.md): Supported ([In preview](ephemeral-os-disks.md#previewephemeral-os-disks-can-now-be-stored-on-temp-disks))<br>
Nvidia NVLink Interconnect: Not Supported<br> > [!IMPORTANT]
virtual-machines Ndv2 Series https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/ndv2-series.md
Critically, the NDv2 is built for both computationally intense scale-up (harness
[Memory Preserving Updates](maintenance-and-updates.md): Not Supported<br> [VM Generation Support](generation-2.md): Generation 2<br> [Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Supported<br>
-[Ephemeral OS Disks](ephemeral-os-disks.md): Supported <br>
+[Ephemeral OS Disks](ephemeral-os-disks.md): Supported ([In preview](ephemeral-os-disks.md#previewephemeral-os-disks-can-now-be-stored-on-temp-disks))<br>
InfiniBand: Supported<br> Nvidia NVLink Interconnect: Supported<br> <br>
virtual-machines Np Series https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/np-series.md
The NP-series virtual machines are powered by [Xilinx U250 ](https://www.xilinx.
[Memory Preserving Updates](maintenance-and-updates.md): Not Supported<br> VM Generation Support: Generation 1<br> [Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Supported<br>
-[Ephemeral OS Disks](ephemeral-os-disks.md): Not Supported <br>
+[Ephemeral OS Disks](ephemeral-os-disks.md): Supported ([In preview](ephemeral-os-disks.md#previewephemeral-os-disks-can-now-be-stored-on-temp-disks))<br>
<br> | Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | FPGA | FPGA memory: GiB | Max data disks | Max NICs/Expected network bandwidth (MBps) |
VM Generation Support: Generation 1<br>
## Frequently asked questions
+**Q:** How do I request quota for NP VMs?
+
+**A:** Follow the steps in [Increase limits by VM series](https://docs.microsoft.com/azure/azure-portal/supportability/per-vm-quota-requests). NP VMs are available in East US, West US 2, West Europe, and Southeast Asia.
+ **Q:** What version of Vitis should I use? **A:** Xilinx recommends [Vitis 2020.2](https://www.xilinx.com/products/design-tools/vitis/vitis-platform.html)
virtual-machines Nvv3 Series https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/nvv3-series.md
Each GPU in NVv3 instances comes with a GRID license. This license gives you the
[Memory Preserving Updates](maintenance-and-updates.md): Not Supported<br> [VM Generation Support](generation-2.md): Generation 1 and 2<br> [Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Supported<br>
-[Ephemeral OS Disks](ephemeral-os-disks.md): Supported <br>
+[Ephemeral OS Disks](ephemeral-os-disks.md): Supported ([In preview](ephemeral-os-disks.md#previewephemeral-os-disks-can-now-be-stored-on-temp-disks))<br>
<br> | Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | GPU | GPU memory: GiB | Max data disks | Max uncached disk throughput: IOPS/MBps | Max NICs / Expected network bandwidth (Mbps) | Virtual Workstations | Virtual Applications |
virtual-machines Nvv4 Series https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/nvv4-series.md
The NVv4-series virtual machines are powered by [AMD Radeon Instinct MI25](https
[Memory Preserving Updates](maintenance-and-updates.md): Not Supported<br> [VM Generation Support](generation-2.md): Generation 1 and 2<br> [Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Supported<br>
-[Ephemeral OS Disks](ephemeral-os-disks.md): Supported <br>
+[Ephemeral OS Disks](ephemeral-os-disks.md): Supported ([In preview](ephemeral-os-disks.md#previewephemeral-os-disks-can-now-be-stored-on-temp-disks))<br>
<br> | Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | GPU | GPU memory: GiB | Max data disks | Max NICs / Expected network bandwidth (MBps) |
virtual-machines Resize Vm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/windows/resize-vm.md
Title: Resize a Windows VM in Azure
+ Title: Resize a virtual machine using the Azure portal or PowerShell
description: Change the VM size used for an Azure virtual machine. - Last updated 01/13/2020
-# Resize a Windows VM
+# Resize a virtual machine using the Azure portal or PowerShell
This article shows you how to move a VM to a different [VM size](../sizes.md).
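A hedged PowerShell sketch of the in-place resize flow (resource names and the target size are placeholders):

```powershell
# Minimal sketch: fetch the VM, set the new size, and apply the change.
$vm = Get-AzVM -ResourceGroupName 'myResourceGroup' -Name 'myVM'
$vm.HardwareProfile.VmSize = 'Standard_DS3_v2'
Update-AzVM -ResourceGroupName 'myResourceGroup' -VM $vm
```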
$virtualMachines | Start-AzVM
## Next steps
-For additional scalability, run multiple VM instances and scale out. For more information, see [Automatically scale Windows machines in a Virtual Machine Scale Set](../../virtual-machine-scale-sets/tutorial-autoscale-powershell.md).
+For additional scalability, run multiple VM instances and scale out. For more information, see [Automatically scale machines in a Virtual Machine Scale Set](../../virtual-machine-scale-sets/tutorial-autoscale-powershell.md).
virtual-network Tutorial Connect Virtual Networks Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-network/tutorial-connect-virtual-networks-powershell.md
$virtualNetwork1 = New-AzVirtualNetwork `
-AddressPrefix 10.0.0.0/16 ```
-Create a subnet configuration with [New-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/new-azvirtualnetworksubnetconfig). The following example creates a subnet configuration with a 10.0.0.0/24 address prefix:
+Create a subnet configuration with [Add-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/add-azvirtualnetworksubnetconfig). The following example creates a subnet configuration with a 10.0.0.0/24 address prefix:
```azurepowershell-interactive $subnetConfig = Add-AzVirtualNetworkSubnetConfig `
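A hedged, self-contained version of that step, assuming the `$virtualNetwork1` object from the preceding command:

```powershell
# Minimal sketch: add the subnet configuration, then persist the change.
Add-AzVirtualNetworkSubnetConfig -Name 'Subnet1' -AddressPrefix '10.0.0.0/24' `
    -VirtualNetwork $virtualNetwork1
$virtualNetwork1 | Set-AzVirtualNetwork
```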
virtual-network Tutorial Tap Virtual Network Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-network/tutorial-tap-virtual-network-cli.md
# Work with a virtual network TAP using the Azure CLI
+> [!IMPORTANT]
+> Virtual network TAP Preview is currently on hold in all Azure regions. You can email us at <azurevnettap@microsoft.com> with your subscription ID and we will notify you with future updates about the preview. In the interim, you can use agent-based or NVA solutions that provide TAP/network visibility functionality through our [Packet Broker partner solutions](virtual-network-tap-overview.md#virtual-network-tap-partner-solutions) available in [Azure Marketplace Offerings](https://azuremarketplace.microsoft.com/marketplace/apps/category/networking?page=1&subcategories=appliances%3Ball&search=Network%20Traffic&filters=partners).
+ Azure virtual network TAP (Terminal Access Point) allows you to continuously stream your virtual machine network traffic to a network packet collector or analytics tool. The collector or analytics tool is provided by a [network virtual appliance](https://azure.microsoft.com/solutions/network-appliances/) partner. For a list of partner solutions that are validated to work with virtual network TAP, see [partner solutions](virtual-network-tap-overview.md#virtual-network-tap-partner-solutions). ## Create a virtual network TAP resource
vpn-gateway Openvpn Azure Ad Client https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/vpn-gateway/openvpn-azure-ad-client.md
# Azure Active Directory authentication: Configure a VPN client for P2S OpenVPN protocol connections
-This article helps you configure a VPN client to connect to a virtual network using Point-to-Site VPN and Azure Active Directory authentication. Before you can connect and authenticate using Azure AD, you must first configure your Azure AD tenant. For more information, see [Configure an Azure AD tenant](openvpn-azure-ad-tenant.md).
+This article helps you configure a VPN client to connect to a virtual network using Point-to-Site VPN and Azure Active Directory authentication. Before you can connect and authenticate using Azure AD, you must first configure your Azure AD tenant. For more information, see [Configure an Azure AD tenant](openvpn-azure-ad-tenant.md). For more information about Point-to-Site, see [About Point-to-Site VPN](point-to-site-about.md).
## <a name="profile"></a>Working with client profiles
vpn-gateway Openvpn Azure Ad Mfa https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/vpn-gateway/openvpn-azure-ad-mfa.md
Previously updated : 09/03/2020 Last updated : 05/05/2021
vpn-gateway Openvpn Azure Ad Tenant Multi App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/vpn-gateway/openvpn-azure-ad-tenant-multi-app.md
Previously updated : 10/07/2020 Last updated : 05/05/2021 # Create an Active Directory (AD) tenant for P2S OpenVPN protocol connections
-When connecting to your VNet, you can use certificate-based authentication or RADIUS authentication. However, when you use the Open VPN protocol, you can also use Azure Active Directory authentication. If you want different set of users to be able to connect to different VPN gateways, you can register multiple apps in AD and link them to different VPN gateways. This article helps you set up an Azure AD tenant for P2S OpenVPN authentication and create and register multiple apps in Azure AD for allowing different access for different users and groups.
+When you connect to your VNet using Point-to-Site, you have a choice of which protocol to use. The protocol you use determines the authentication options that are available to you. If you want to use Azure Active Directory authentication, you can do so when using the OpenVPN protocol. If you want a different set of users to be able to connect to different VPN gateways, you can register multiple apps in AD and link them to different VPN gateways. This article helps you set up an Azure AD tenant for P2S OpenVPN and create and register multiple apps in Azure AD to allow different access for different users and groups. For more information about Point-to-Site protocols and authentication, see [About Point-to-Site VPN](point-to-site-about.md).
[!INCLUDE [create](../../includes/openvpn-azure-ad-tenant-multi-app.md)]
vpn-gateway Openvpn Azure Ad Tenant https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/vpn-gateway/openvpn-azure-ad-tenant.md
Title: 'Create an Azure AD tenant for P2S VPN connections: Azure AD authentication'
-description: Learn how to set up an Azure AD tenant for P2S Open VPN authentication.
+description: Learn how to set up an Azure AD tenant for P2S Azure AD authentication - OpenVPN protocol.
Previously updated : 04/28/2021 Last updated : 05/05/2021 # Create an Azure Active Directory tenant for P2S OpenVPN protocol connections
-When connecting to your VNet, you can use certificate-based authentication or RADIUS authentication. However, when you use the Open VPN protocol, you can also use Azure Active Directory authentication. This article helps you set up an Azure AD tenant for P2S Open VPN authentication.
+When you connect to your VNet using Point-to-Site, you have a choice of which protocol to use. The protocol you use determines the authentication options that are available to you. If you want to use Azure Active Directory authentication, you can do so when using the OpenVPN protocol. This article helps you set up an Azure AD tenant. For more information about Point-to-Site protocols and authentication, see [About Point-to-Site VPN](point-to-site-about.md).
## <a name="tenant"></a>1. Verify Azure AD tenant