Updates from: 06/23/2021 03:10:44
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Microsoft Graph Operations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/microsoft-graph-operations.md
Previously updated : 04/22/2021 Last updated : 06/22/2021
For more information about accessing Azure AD B2C audit logs, see [Accessing Azu
- [Update a Conditional Access policy](/graph/api/conditionalaccesspolicy-update) - [Delete a Conditional Access policy](/graph/api/conditionalaccesspolicy-delete)
+## How to programmatically manage Microsoft Graph
+
+When you want to manage Microsoft Graph, you can either do it as the application by using application permissions, or you can use delegated permissions. With delegated permissions, either the user or an administrator consents to the permissions that the app requests, and the app is delegated to act as the signed-in user when it calls the target resource. Application permissions are used by apps that run without a signed-in user present; because no user is present to consent, only administrators can consent to application permissions.
+
+> [!NOTE]
+> Delegated permissions for users signing in through user flows or custom policies can't be used with delegated permissions for the Microsoft Graph API.
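+
+As an illustration of the two permission models, the following is a sketch that assumes the Microsoft Graph PowerShell module; the tenant ID, application (client) ID, and certificate thumbprint are placeholders you supply from your own app registration.
+
+```powershell
+# Delegated permissions: sign in interactively; the app acts as the signed-in user
+Connect-MgGraph -TenantId "<your B2C tenant ID>" -Scopes "User.ReadWrite.All"
+
+# Application permissions: app-only access with no signed-in user, consented to by an administrator
+Connect-MgGraph -TenantId "<your B2C tenant ID>" -ClientId "<your app (client) ID>" -CertificateThumbprint "<your certificate thumbprint>"
+```
+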
## Code sample: How to programmatically manage user accounts This code sample is a .NET Core console application that uses the [Microsoft Graph SDK](/graph/sdks/sdks-overview) to interact with Microsoft Graph API. Its code demonstrates how to call the API to programmatically manage users in an Azure AD B2C tenant.
active-directory-b2c Publish App To Azure Ad App Gallery https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/publish-app-to-azure-ad-app-gallery.md
The sign-in flow involves following steps:
Depending on the user's SSO session and Azure AD identity settings, the user might be prompted to: - Provide their email address or phone number.-- Enter their password or sign in with the [Microsoft authenticator app](https://www.microsoft.com/account/authenticator).
+- Enter their password or sign in with the [Microsoft authenticator app](https://www.microsoft.com/p/microsoft-authenticator/9nblgggzmcj6).
- Complete multi-factor authentication. - Accept the consent page. Your customer's tenant administrator can [grant tenant-wide admin consent to an app](../active-directory/manage-apps/grant-admin-consent.md). When granted, the consent page won't be presented to the user.
active-directory-domain-services Join Rhel Linux Vm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-domain-services/join-rhel-linux-vm.md
To complete this tutorial, you need the following resources and privileges:
* An Azure Active Directory Domain Services managed domain enabled and configured in your Azure AD tenant. * If needed, the first tutorial [creates and configures an Azure Active Directory Domain Services managed domain][create-azure-ad-ds-instance]. * A user account that's a part of the managed domain.
+* Unique Linux VM names that are a maximum of 15 characters to avoid truncated names that might cause conflicts in Active Directory.
## Create and connect to a RHEL Linux VM
active-directory-domain-services Join Ubuntu Linux Vm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-domain-services/join-ubuntu-linux-vm.md
To complete this tutorial, you need the following resources and privileges:
* An Azure Active Directory Domain Services managed domain enabled and configured in your Azure AD tenant. * If needed, the first tutorial [creates and configures an Azure Active Directory Domain Services managed domain][create-azure-ad-ds-instance]. * A user account that's a part of the managed domain.
+* Unique Linux VM names that are a maximum of 15 characters to avoid truncated names that might cause conflicts in Active Directory.
## Create and connect to an Ubuntu Linux VM
active-directory-domain-services Secure Your Domain https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-domain-services/secure-your-domain.md
Previously updated : 05/27/2021 Last updated : 06/22/2021
To complete this article, you need the following resources:
1. On the left-hand side, select **Security settings**. 1. Click **Enable** or **Disable** for the following settings: - **TLS 1.2 only mode**
- - **NTLM authentication**
+ - **NTLM authentication**
+ - **Password synchronization from on-premises**
- **NTLM password synchronization from on-premises** - **RC4 encryption** - **Kerberos armoring**
active-directory How To Migrate Mfa Server To Azure Mfa User Authentication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/how-to-migrate-mfa-server-to-azure-mfa-user-authentication.md
+
+ Title: Migrate to Azure AD MFA and Azure AD user authentication - Azure Active Directory
+description: Step-by-step guidance to move from Azure MFA Server on-premises to Azure AD MFA and Azure AD user authentication
+ Last updated : 06/22/2021
+# Migrate to Azure AD MFA and Azure AD user authentication
+
+Multi-factor authentication (MFA) helps secure your infrastructure and assets from bad actors.
+Microsoft's Multi-Factor Authentication Server (MFA Server) is no longer offered for new deployments.
+Customers who are using MFA Server should move to Azure AD Multi-Factor Authentication (Azure AD MFA).
+
+There are several options for migrating your multi-factor authentication (MFA) from MFA Server to Azure Active Directory (Azure AD).
+These include:
+
+* Good: Moving only your [MFA service to Azure AD](how-to-migrate-mfa-server-to-azure-mfa.md).
+* Better: Moving your MFA service and user authentication to Azure AD, covered in this article.
+* Best: Moving all of your applications, your MFA service, and user authentication to Azure AD. If you plan to move applications, see the section on moving application authentication to Azure AD at the end of this article.
+
+To select the appropriate MFA migration option for your organization, see the considerations in [Migrate from MFA Server to Azure Active Directory MFA](how-to-migrate-mfa-server-to-azure-mfa.md).
+
+The following diagram shows the process for migrating to Azure AD MFA and cloud authentication while keeping some of your applications on AD FS.
+This process enables the iterative migration of users from MFA Server to Azure MFA based on group membership.
+
+Each step is explained in the subsequent sections of this article.
+
+>[!NOTE]
+>If you are planning on moving any applications to Azure Active Directory as a part of this migration, you should do so prior to your MFA migration. If you move all of your apps, you can skip sections of the MFA migration process. See the section on moving applications at the end of this article.
+
+## Process to migrate to Azure AD and user authentication
+
+![Process to migrate to Azure AD and user authentication.](media/how-to-migrate-mfa-server-to-azure-mfa-user-authentication/mfa-cloud-authentication-flow.png)
+
+## Prepare groups and Conditional Access
+
+Groups are used in three capacities for MFA migration.
+* **To iteratively move users to Azure AD MFA with staged rollout.**
+Use a group created in Azure AD, also known as a cloud-only group. You can use Azure AD security groups or Microsoft 365 Groups both for moving users to MFA and for Conditional Access policies; a sketch of creating such a group follows the note below. For more information, see how to create an Azure AD security group and the overview of Microsoft 365 Groups for administrators.
+ >[!IMPORTANT]
+ >Nested and dynamic groups are not supported in the staged rollout process. Do not use these types of groups for your staged rollout effort.
+* **Conditional Access policies**.
+You can use either Azure AD or on-premises groups for conditional access.
+* **To invoke Azure AD MFA for AD FS applications with claims rules.**
+This applies only if you have applications on AD FS.
+This must be an on-premises Active Directory security group. Once Azure AD MFA is an additional authentication method, you can designate groups of users to use that method on each relying party trust. For example, you can call Azure AD MFA for users you have already migrated, and MFA Server for those not yet migrated. This is helpful both in testing, and during migration.
+
+>[!NOTE]
+>We do not recommend that you reuse groups that are used for security. When using a security group to secure a group of high-value apps via a Conditional Access policy, that should be the only use of that group.
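+
+If you need to create a dedicated cloud-only security group for the staged rollout, one option is the Microsoft Graph PowerShell module. The following is a minimal sketch with a hypothetical group name; adjust the names to your own conventions.
+
+```powershell
+# Create a cloud-only security group used to iteratively move users to Azure AD MFA (hypothetical name)
+New-MgGroup -DisplayName "MFA Staged Rollout Pilot" -MailEnabled:$false -MailNickname "mfastagedrolloutpilot" -SecurityEnabled:$true
+```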
+
+### Configure Conditional Access policies
+
+If you are already using Conditional Access to determine when users are prompted for MFA, you won't need any changes to your policies.
+As users are migrated to cloud authentication, they will start using Azure AD MFA as defined by your existing Conditional Access policies.
+They won't be redirected to AD FS and MFA Server anymore.
+
+If your federated domain(s) have SupportsMFA set to false, you are likely enforcing MFA on AD FS using claims rules.
+In this case, you will need to analyze your claims rules on the Azure AD relying party trust and create Conditional Access policies that support the same security goals.
+
+If you need to configure Conditional Access policies, you need to do so before enabling staged rollout.
+For more information, see the following resources:
+* [Plan a Conditional Access deployment](../conditional-access/plan-conditional-access.md)
+* [Common Conditional Access policies](../conditional-access/concept-conditional-access-policy-common.md)
+
+## Prepare AD FS
+
+If you do not have any applications in AD FS that require MFA, you can skip this section and go to the section Prepare staged rollout.
+
+### Upgrade AD FS server farm to 2019, FBL 4
+
+In AD FS 2019, Microsoft released new functionality that provides the ability to specify additional authentication methods for a relying party, such as an application.
+This is done by using group membership to determine the authentication provider.
+By specifying an additional authentication method, you can transition to Azure AD MFA while keeping other authentication intact during the transition.
+
+For more information, see [Upgrading to AD FS in Windows Server 2016 using a WID database](/windows-server/identity/ad-fs/deployment/upgrading-to-ad-fs-in-windows-server).
+The article covers both upgrading your farm to AD FS 2019 and upgrading your FBL to 4.
+
+### Configure claims rules to invoke Azure AD MFA
+
+Now that you have Azure AD MFA as an additional authentication method, you can assign groups of users to use Azure AD MFA. You do this by configuring claims rules on your *relying party trusts*. By using groups, you can control which authentication provider is called either globally or by application. For example, you can call Azure AD MFA for users who have registered for combined security information or had their phone numbers migrated, while calling MFA Server for those who have not.
+
+>[!NOTE]
+>Claims rules require an on-premises security group.
+
+#### Back up existing rules
+
+Before configuring new claims rules, back up your existing rules.
+You will need to restore these as a part of your cleanup steps.
+
+Depending on your configuration, you may also need to copy the existing rule and append the new rules being created for the migration.
+
+To view existing global rules, run:
+```powershell
+Get-AdfsAdditionalAuthenticationRule
+```
+
+To view existing relying party trusts, run the following command and replace RPTrustName with the name of the relying party trust claims rule:
+
+```powershell
+(Get-AdfsRelyingPartyTrust -Name "RPTrustName").AdditionalAuthenticationRules
+```
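+
+To capture a backup you can keep, one approach is to write both outputs to files. The following is a sketch; the folder and file names are placeholders.
+
+```powershell
+# Back up the global additional authentication rules to a file (placeholder path)
+Get-AdfsAdditionalAuthenticationRule | Out-File -FilePath "C:\Backup\GlobalAuthenticationRules.txt"
+
+# Back up the additional authentication rules for a specific relying party trust (placeholder path)
+(Get-AdfsRelyingPartyTrust -Name "RPTrustName").AdditionalAuthenticationRules |
+    Out-File -FilePath "C:\Backup\RPTrustName-AuthenticationRules.txt"
+```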
+
+#### Access control policies
+
+>[!NOTE]
+>Access control policies canΓÇÖt be configured so that a specific authentication provider is invoked based on group membership.
+
+To transition from your access control policies to additional authentication rules, run this command for each of your Relying Party Trusts using the MFA Server authentication provider:
+
+```powershell
+Set-AdfsRelyingPartyTrust -TargetName AppA -AccessControlPolicyName $Null
+```
+
+This command will move the logic from your current Access Control Policy into Additional Authentication Rules.
+
+#### Set up the group, and find the SID
+
+You will need to have a specific group in which you place users for whom you want to invoke Azure AD MFA. You will need to find the security identifier (SID) for that group.
+To find the group SID, run the following command, replacing `GroupName` with the name of your group:
+`Get-ADGroup "GroupName"`
+
+![PowerShell command to get the group SID.](media/how-to-migrate-mfa-server-to-azure-mfa-user-authentication/find-the-sid.png)
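+
+If you prefer to capture the SID in a variable for use in the claims rules that follow, here is a small sketch (assuming the ActiveDirectory PowerShell module is available):
+
+```powershell
+# Store the group's SID as a string so it can be pasted into the claims rules below
+$groupSid = (Get-ADGroup -Identity "GroupName").SID.Value
+$groupSid
+```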
+
+#### Setting the claims rules to call Azure MFA
+
+The following PowerShell cmdlets invoke Azure AD MFA for those in the group when they aren't on the corporate network.
+You must replace "YourGroupSid" with the SID found by running the preceding cmdlet.
+
+Make sure you review the [How to Choose Additional Auth Providers in 2019](/windows-server/identity/ad-fs/overview/whats-new-active-directory-federation-services-windows-server#how-to-choose-additional-auth-providers-in-2019).
+
+>[!IMPORTANT]
+>Back up your existing claims rules before proceeding.
+
+##### Set global claims rule
+
+Run the following command and replace RPTrustName with the name of the relying party trust claims rule:
+
+```powershell
+(Get-AdfsRelyingPartyTrust -Name "RPTrustName").AdditionalAuthenticationRules
+```
+
+The command returns your current additional authentication rules for your relying party trust.
+You need to append the following rules to your current claim rules:
+
+```console
+c:[Type == "https://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid", Value ==
+"YourGroupSID"] => issue(Type = "https://schemas.microsoft.com/claims/authnmethodsproviders",
+Value = "AzureMfaAuthentication");
+not exists([Type == "https://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid",
+Value=="YourGroupSid"]) => issue(Type =
+"https://schemas.microsoft.com/claims/authnmethodsproviders", Value =
+"AzureMfaServerAuthentication");ΓÇÖ
+```
+
+The following example assumes your current claim rules are configured to prompt for MFA when users connect from outside your network.
+This example includes the additional rules that you need to append.
+
+```PowerShell
+Set-AdfsAdditionalAuthenticationRule -AdditionalAuthenticationRules 'c:[type ==
+"https://schemas.microsoft.com/ws/2012/01/insidecorporatenetwork", value == "false"] => issue(type =
+"https://schemas.microsoft.com/ws/2008/06/identity/claims/authenticationmethod", value =
+"https://schemas.microsoft.com/claims/multipleauthn" );
+ c:[Type == "https://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid", Value ==
+"YourGroupSID"] => issue(Type = "https://schemas.microsoft.com/claims/authnmethodsproviders",
+Value = "AzureMfaAuthentication");
+not exists([Type == "https://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid",
+Value=="YourGroupSid"]) => issue(Type =
+"https://schemas.microsoft.com/claims/authnmethodsproviders", Value =
+"AzureMfaServerAuthentication");ΓÇÖ
+```
+
+##### Set per-application claims rule
+
+This example modifies claim rules on a specific relying party trust (application). It includes the additional rules you need to append.
+
+```PowerShell
+Set-AdfsRelyingPartyTrust -TargetName AppA -AdditionalAuthenticationRules 'c:[type ==
+"https://schemas.microsoft.com/ws/2012/01/insidecorporatenetwork", value == "false"] => issue(type =
+"https://schemas.microsoft.com/ws/2008/06/identity/claims/authenticationmethod", value =
+"https://schemas.microsoft.com/claims/multipleauthn" );
+c:[Type == "https://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid", Value ==
+"YourGroupSID"] => issue(Type = "https://schemas.microsoft.com/claims/authnmethodsproviders",
+Value = "AzureMfaAuthentication");
+not exists([Type == "https://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid",
+Value=="YourGroupSid"]) => issue(Type =
+"https://schemas.microsoft.com/claims/authnmethodsproviders", Value =
+"AzureMfaServerAuthentication");ΓÇÖ
+```
+
+### Configure Azure AD MFA as an authentication provider in AD FS
+
+In order to configure Azure AD MFA for AD FS, you must configure each AD FS server.
+If you have multiple AD FS servers in your farm, you can configure them remotely using Azure AD PowerShell.
+
+For step-by-step directions on this process, see [Configure the AD FS servers](/windows-server/identity/ad-fs/operations/configure-ad-fs-and-azure-mfa#configure-the-ad-fs-servers).
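+
+The linked article walks through the full procedure; condensed, the sequence looks like the following sketch. The tenant ID is a placeholder, and the client ID shown is the well-known Azure Multi-Factor Auth Client application ID used in that article.
+
+```powershell
+# On each AD FS server: generate a certificate for Azure AD MFA for your tenant
+$certBase64 = New-AdfsAzureMfaTenantCertificate -TenantId "<your tenant ID>"
+
+# Register the certificate as a credential on the Azure Multi-Factor Auth Client service principal
+Connect-MsolService
+New-MsolServicePrincipalCredential -AppPrincipalId 981f26a1-7f43-403b-a875-f8b09b8cd720 -Type asymmetric -Usage verify -Value $certBase64
+
+# Enable Azure AD MFA for the farm, then restart the AD FS service
+Set-AdfsAzureMfaTenant -TenantId "<your tenant ID>" -ClientId 981f26a1-7f43-403b-a875-f8b09b8cd720
+Restart-Service adfssrv
+```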
+
+Once you have configured the servers, you can add Azure AD MFA as an additional authentication method.
+
+![Screenshot of how to add Azure AD MFA as an additional authentication method.](media/how-to-migrate-mfa-server-to-azure-mfa-user-authentication/edit-authentication-methods.png)
++
+## Prepare staged rollout
+
+Now you are ready to enable the staged rollout feature. Staged rollout helps you to iteratively move your users to either password hash synchronization (PHS) or pass-through authentication (PTA).
+
+* Be sure to review the [supported scenarios](../hybrid/how-to-connect-staged-rollout.md#supported-scenarios).
+* First you will need to do either the [prework for PHS](../hybrid/how-to-connect-staged-rollout.md#pre-work-for-password-hash-sync) or the [prework for PTA](../hybrid/how-to-connect-staged-rollout.md#pre-work-for-pass-through-authentication). We recommend PHS.
+* Next you will do the [prework for seamless SSO](../hybrid/how-to-connect-staged-rollout.md#pre-work-for-seamless-sso).
+* [Enable the staged rollout of cloud authentication](../hybrid/how-to-connect-staged-rollout.md#enable-a-staged-rollout-of-a-specific-feature-on-your-tenant) for your selected authentication method.
+* Add the group(s) you created for staged rollout. Remember that you will add users to groups iteratively, and that they cannot be dynamic groups or nested groups.
+
+## Register users for Azure MFA
+
+There are two ways to register users for Azure MFA:
+
+* Register for combined security information (MFA and self-service password reset)
+* Migrate phone numbers from MFA Server
+
+The Microsoft Authenticator app can be used as a passwordless sign-in method as well as a second factor for MFA with either method.
+
+### Register for combined security registration (recommended)
+
+We recommend having your users register for combined security information, which is a single place to register their authentication methods and devices for both MFA and SSPR.
+While it is possible to migrate data from MFA Server to Azure AD MFA, doing so presents the following challenges:
+
+* Only phone numbers can be migrated.
+* Authenticator apps will need to be reregistered.
+* Stale data can be migrated.
+
+Microsoft provides communication templates that you can provide to your users to guide them through the combined registration process.
+These include templates for email, posters, table tents, and a variety of other assets. Users register their information at `https://aka.ms/mysecurityinfo`, which takes them to the combined security registration screen.
+
+We recommend that you [secure the security registration process with Conditional Access](../conditional-access/howto-conditional-access-policy-registration.md) that requires the registration to occur from a trusted device or location. For information on tracking registration statuses, see [Authentication method activity for Azure Active Directory](howto-authentication-methods-activity.md).
+> [!NOTE]
+> Users who must register their combined security information from a non-trusted location or device can be issued a Temporary Access Pass or be temporarily excluded from the policy.
+
+### Migrate phone numbers from MFA Server
+
+While you can migrate users' registered MFA phone numbers and hardware tokens, you cannot migrate device registrations such as their Microsoft Authenticator app settings.
+Migrating phone numbers can lead to stale numbers being migrated, and make users more likely to stay on phone-based MFA instead of setting up more secure methods like [passwordless sign-in with the Microsoft Authenticator app](howto-authentication-passwordless-phone.md).
+We therefore recommend that, regardless of the migration path you choose, you have all users register for [combined security information](howto-registration-mfa-sspr-combined.md).
+Combined security information enables users to also register for self-service password reset.
+
+If having users register their combined security information is not an option, it is possible to export the users along with their phone numbers from MFA Server and import the phone numbers into Azure AD.
+
+#### Export user phone numbers from MFA Server
+
+1. Open the Multi-Factor Authentication Server admin console on the MFA Server.
+1. Select **File** > **Export Users**.
+1. Save the CSV file. The default name is Multi-Factor Authentication Users.csv.
+
+#### Interpret and format the .csv file
+
+The .csv file contains a number of fields that aren't necessary for migration, so you'll need to edit and format it before importing the phone numbers into Azure AD.
+
+When opening the .csv file, columns of interest include Username, Primary Phone, Primary Country Code, Backup Country Code, Backup Phone, and Backup Extension. You must interpret and format this data as necessary.
+
+#### Tips to avoid errors during import
+
+* Modify the CSV file before you use the Authentication Methods API to import the phone numbers into Azure AD.
+* We recommend simplifying the .csv to three columns: UPN, PhoneType, and PhoneNumber, as shown in the example after this list.
+
+ ![Screenshot of a csv example.](media/how-to-migrate-mfa-server-to-azure-mfa-user-authentication/csv-example.png)
+
+* Make sure the exported MFA Server Username matches the Azure AD UserPrincipalName. If it does not, update the username in the CSV file to match what is in Azure AD; otherwise, the user will not be found.
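+
+For illustration, a simplified .csv in that three-column format might look like the following; the UPNs and numbers are placeholder values.
+
+```console
+UPN,PhoneType,PhoneNumber
+user1@contoso.com,mobile,+1 5555551234
+user2@contoso.com,alternateMobile,+1 5555555678
+```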
+
+Users may have already registered phone numbers in Azure AD.
+When importing the phone numbers using the Authentication Methods API, you must decide whether to overwrite the existing phone number or to add the imported number as an alternate phone number.
+
+The following PowerShell commands take the CSV file you supply and add the exported phone numbers as a phone number for each UPN by using the Authentication Methods API. Replace "myPhones" with the name of your CSV file.
++
+```powershell
+$csv = import-csv myPhones.csv
+$csv|% { New-MgUserAuthenticationPhoneMethod -UserId $_.UPN -phoneType $_.PhoneType -phoneNumber $_.PhoneNumber}
+```
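+
+These commands assume you have already connected to Microsoft Graph with a permission that can manage authentication methods; for example, with the Microsoft Graph PowerShell module:
+
+```powershell
+# Connect with a delegated permission that allows managing users' authentication methods
+Connect-MgGraph -Scopes "UserAuthenticationMethod.ReadWrite.All"
+```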
+
+For more information about managing users' authentication methods, see [Manage authentication methods for Azure AD Multi-Factor Authentication](howto-mfa-userdevicesettings.md).
+
+### Add users to the appropriate groups
+
+* If you created new conditional access policies, add the appropriate users to those groups.
+* If you created on-premises security groups for claims rules, add the appropriate users to those groups.
+* Only after you have added users to the appropriate Conditional Access policies, add them to the group that you created for staged rollout. Once done, they will begin to use the Azure authentication method that you selected (PHS or PTA) and Azure AD MFA when they are required to perform multi-factor authentication.
+
+> [!IMPORTANT]
+> Nested and dynamic groups are not supported in the staged rollout process. Do not use these types of groups.
+
+We do not recommend that you reuse groups that are used for security. Therefore, if you are using a security group to secure a group of high-value apps via a Conditional Access policy, that should be the only use of that group.
+
+## Monitoring
+
+A number of [Azure Monitor workbooks](../reports-monitoring/howto-use-azure-monitor-workbooks.md) and usage & insights reports are available to monitor your deployment.
+These can be found in Azure AD in the navigation pane under **Monitoring**.
+
+### Monitoring staged rollout
+
+In the [workbooks](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/Workbooks) section, select **Public Templates**. Under the **Hybrid Auth** section, select the **Groups, Users and Sign-ins in Staged Rollout** workbook.
+
+This workbook can be used to monitor the following:
+* Users and groups added to Staged Rollout.
+* Users and groups removed from Staged Rollout.
+* Sign-in failures for users in staged rollout, and the reasons for failures.
+
+### Monitoring Azure MFA registration
+Azure MFA registration can be monitored using the [Authentication methods usage & insights report](https://portal.azure.com/#blade/Microsoft_AAD_IAM/AuthenticationMethodsMenuBlade/AuthMethodsActivity/menuId/AuthMethodsActivity). This report can be found in Azure AD. Select **Monitoring**, then select **Usage & insights**.
+
+![Screenshot of how to find the Usage and Insights report.](media/how-to-migrate-mfa-server-to-azure-mfa-user-authentication/usage-report.png)
+
+In Usage & insights, select **Authentication methods**.
+
+Detailed Azure MFA registration information can be found on the Registration tab. You can drill down to view a list of registered users by selecting the **Users registered for Azure multi-factor authentication** hyperlink.
+
+![Screenshot of the Registration tab.](media/how-to-migrate-mfa-server-to-azure-mfa-user-authentication/registration-tab.png)
+
+### Monitoring app sign-in health
+
+Monitor applications you have moved to Azure AD with the App sign-in health workbook or the application activity usage report.
+
+* **App sign-in health workbook**. See [Monitoring application sign-in health for resilience](../fundamentals/monitor-sign-in-health-for-resilience.md) for detailed guidance on using this workbook.
+* **Azure AD application activity usage report**. Use this [report](https://portal.azure.com/#blade/Microsoft_AAD_IAM/UsageAndInsightsMenuBlade/Azure%20AD%20application%20activity) to view successful and failed sign-ins for individual applications, and to drill down into sign-in activity for a specific application.
+
+## Clean up tasks
+
+Once you have moved all your users to Azure AD cloud authentication and Azure MFA, you should be ready to decommission your MFA Server.
+We recommend reviewing MFA Server logs to ensure no users or applications are using it before you remove the server.
+
+### Convert your domains to managed authentication
+
+You should now [convert your federated domains in Azure AD to managed](../hybrid/plan-migrate-adfs-password-hash-sync.md#convert-domains-from-federated-to-managed) and remove the staged rollout configuration.
+This ensures new users use cloud authentication without being added to the migration groups.
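+
+One way to perform the conversion is with the MSOnline PowerShell module, as described in the linked article; here is a sketch with a placeholder domain name:
+
+```powershell
+# Convert the federated domain to managed (cloud) authentication
+Set-MsolDomainAuthentication -DomainName "contoso.com" -Authentication Managed
+```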
+
+### Revert claims rules on AD FS and remove MFA Server authentication provider
+
+Follow the steps under [Configure claims rules to invoke Azure AD MFA](#configure-claims-rules-to-invoke-azure-ad-mfa) to revert to the backed-up claims rules and remove any AzureMFAServerAuthentication claims rules.
+
+For example, remove the following from the rule(s):
+
+```console
+c:[Type == "https://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid", Value ==
+"YourGroupSID"] => issue(Type = "https://schemas.microsoft.com/claims/authnmethodsproviders",
+Value = "AzureMfaAuthentication");
+not exists([Type == "https://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid",
+Value=="YourGroupSid"]) => issue(Type =
+"https://schemas.microsoft.com/claims/authnmethodsproviders", Value =
+"AzureMfaServerAuthentication");ΓÇÖ
+```
++
+### Disable MFA Server as an authentication provider in AD FS
+
+This change ensures only Azure MFA is used as an authentication provider.
+
+1. Open the **AD FS management console**.
+1. Under **Services**, right-click on **Authentication Methods**, and select **Edit Multi-factor Authentication Methods**.
+1. Clear the **Azure Multi-Factor Authentication Server** checkbox.
+
+### Decommission the MFA Server
+
+Follow your enterprise server decommissioning process to remove the MFA Servers in your environment.
+
+Possible considerations when decommissioning the MFA Server include:
+
+* We recommend reviewing MFA Server logs to ensure no users or applications are using it before you remove the server.
+* Uninstall Multi-Factor Authentication Server from the Control Panel on the server.
+* Optionally, back up and then clean up any logs and data directories that are left behind.
+* Uninstall the Multi-Factor Authentication Web Server SDK, if applicable, including any files left over in the inetpub\wwwroot\MultiFactorAuthWebServiceSdk and MultiFactorAuth directories.
+* For pre-8.0.x versions of MFA Server, it may also be necessary to remove the Multi-Factor Auth Phone App Web Service.
+
+## Move application authentication to Azure Active Directory
+
+If you migrate all your application authentication along with your MFA and user authentication, you will be able to remove significant portions of your on-premises infrastructure, reducing costs and risks.
+If you move all application authentication, you can skip the [Prepare AD FS](#prepare-ad-fs) stage and simplify your MFA migration.
+
+The process for moving all application authentication is shown in the following diagram.
+
+![Process to migrate applications to Azure AD MFA.](media/how-to-migrate-mfa-server-to-azure-mfa-user-authentication/mfa-app-migration-flow.png)
+
+If it is not possible to move all your applications prior to the migration, move applications that you can before starting.
+For more information on migrating applications to Azure, see [Resources for migrating applications to Azure Active Directory](../manage-apps/migration-resources.md).
+
+## Next steps
+
+- [Migrate from Microsoft MFA Server to Azure multi-factor authentication (Overview)](how-to-migrate-mfa-server-to-azure-mfa.md)
+- [Migrate applications from Windows Active Directory to Azure Active Directory](../manage-apps/migrate-application-authentication-to-azure-active-directory.md)
+- [Plan your cloud authentication strategy](../fundamentals/active-directory-deployment-plans.md)
active-directory How To Migrate Mfa Server To Azure Mfa With Federation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/how-to-migrate-mfa-server-to-azure-mfa-with-federation.md
+
+ Title: Migrate to Azure AD MFA with federation - Azure Active Directory
+description: Step-by-step guidance to move from Azure MFA Server on-premises to Azure AD MFA with federation
+ Last updated : 06/22/2021
+# Migrate to Azure AD MFA with federation
+
+Moving your multi-factor authentication (MFA) solution to Azure Active Directory (Azure AD) is a great first step in your journey to the cloud. Consider also moving to Azure AD for user authentication in the future. For more information, see the process for migrating to Azure AD MFA with cloud authentication.
+
+To migrate to Azure AD MFA with federation, the Azure MFA authentication provider is installed on AD FS. The Azure AD relying party trust and other relying party trusts are configured to use Azure MFA for migrated users.
+
+The following diagram shows the process of this migration.
+
+![Flow chart showing the steps of the process. These align to the headings in this document in the same order](./media/how-to-migrate-mfa-server-to-azure-mfa-with-federation/mfa-federation-flow.png)
+
+## Create migration groups
+
+To create new conditional access policies, you'll need to assign those policies to groups. You can use existing Azure AD security groups or Microsoft 365 Groups for this purpose. You can also create or sync new ones.
+
+You'll also need an Azure AD security group for iteratively migrating users to Azure AD MFA. These groups are used in your claims rules.
+
+Don't reuse groups that are used for security. If you are using a security group to secure a group of high-value apps via a Conditional Access policy, that should be the only use of that group.
+
+## Prepare AD FS
+
+### Upgrade AD FS server farm to 2019, FBL 4
+
+In AD FS 2019, you can specify additional authentication methods for a relying party, such as an application. You use group membership to determine authentication provider. By specifying an additional authentication method, you can transition to Azure AD MFA while keeping other authentication intact during the transition. For more information, see [Upgrading to AD FS in Windows Server 2016 using a WID database](/windows-server/identity/ad-fs/deployment/upgrading-to-ad-fs-in-windows-server). The article covers both upgrading your farm to AD FS 2019 and upgrading your FBL to 4.
+
+### Configure claims rules to invoke Azure AD MFA
+
+Now that Azure AD MFA is an additional authentication method, you can assign groups of users to use it. You do so by configuring claims rules on your relying party trusts. By using groups, you can control which authentication provider is called globally or by application. For example, you can call Azure AD MFA for users who have registered for combined security information or had their phone numbers migrated, while calling MFA Server for those who haven't.
+
+> [!NOTE]
+> Claims rules require an on-premises security group. Before making changes to claims rules, back them up.
++
+#### Back up existing rules
+
+Before configuring new claims rules, back up your existing rules. You'll need to restore these rules as a part of your cleanup steps.
+
+Depending on your configuration, you may also need to copy the existing rule and append the new rules being created for the migration.
+
+To view existing global rules, run:
+
+```powershell
+Get-AdfsAdditionalAuthenticationRule
+```
+
+To view existing relying party trusts, run the following command and replace RPTrustName with the name of the relying party trust claims rule:
+
+```powershell
+(Get-AdfsRelyingPartyTrust -Name "RPTrustName").AdditionalAuthenticationRules
+```
+
+#### Access control policies
+
+> [!NOTE]
+> Access control policies can't be configured so that a specific authentication provider is invoked based on group membership.
+
+
+To transition from access control policies to additional authentication rules, run the following command for each of your Relying Party Trusts using the MFA Server authentication provider:
++
+```powershell
+Set-AdfsRelyingPartyTrust -TargetName AppA -AccessControlPolicyName $Null
+```
+
+
+
+This command will move the logic from your current Access Control Policy into Additional Authentication Rules.
++
+#### Set up the group, and find the SID
+
+You'll need to have a specific group in which you place users for whom you want to invoke Azure AD MFA. You will need the security identifier (SID) for that group.
+
+To find the group SID, run the following command, replacing `GroupName` with the name of your group:
+
+`Get-ADGroup "GroupName"`
+
+![Image of screen shot showing the results of the Get-ADGroup script.](./media/how-to-migrate-mfa-server-to-azure-mfa-user-authentication/find-the-sid.png)
+
+#### Setting the claims rules to call Azure MFA
+
+The following PowerShell cmdlets invoke Azure AD MFA for users in the group when not on the corporate network. Replace "YourGroupSid" with the SID found by running the preceding cmdlet.
+
+Make sure you review the [How to Choose Additional Auth Providers in 2019](/windows-server/identity/ad-fs/overview/whats-new-active-directory-federation-services-windows-server).
+
+ > [!IMPORTANT]
> Back up your existing claims rules.
+
+
+
+#### Set global claims rule
+
+Run the following PowerShell cmdlet:
+
+```powershell
+(Get-AdfsRelyingPartyTrust -Name "RPTrustName").AdditionalAuthenticationRules
+```
+
+
+
+The command returns your current additional authentication rules for your relying party trust. Append the following rules to your current claim rules:
+
+```console
+c:[Type == "[https://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid](https://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid)", Value ==
+
+"YourGroupSID"] => issue(Type = "[https://schemas.microsoft.com/claims/authnmethodsproviders](https://schemas.microsoft.com/claims/authnmethodsproviders)",
+
+Value = "AzureMfaAuthentication");
+
+not exists([Type == "[https://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid](https://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid)",
+
+Value=="YourGroupSid"]) => issue(Type =
+
+"[https://schemas.microsoft.com/claims/authnmethodsproviders](https://schemas.microsoft.com/claims/authnmethodsproviders)", Value =
+
+"AzureMfaServerAuthentication");ΓÇÖ
+```
+
+The following example assumes your current claim rules are configured to prompt for MFA when users connect from outside your network. This example includes the additional rules that you need to append.
+
+```PowerShell
+
+Set-AdfsAdditionalAuthenticationRule -AdditionalAuthenticationRules 'c:[type ==
+"https://schemas.microsoft.com/ws/2012/01/insidecorporatenetwork", value == "false"] => issue(type =
+"https://schemas.microsoft.com/ws/2008/06/identity/claims/authenticationmethod", value =
+"https://schemas.microsoft.com/claims/multipleauthn" );
+ c:[Type == "https://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid", Value ==
+"YourGroupSID"] => issue(Type = "https://schemas.microsoft.com/claims/authnmethodsproviders",
+Value = "AzureMfaAuthentication");
+not exists([Type == "https://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid",
+Value=="YourGroupSid"]) => issue(Type =
+"https://schemas.microsoft.com/claims/authnmethodsproviders", Value =
+"AzureMfaServerAuthentication");'
+
+```
+
+#### Set per-application claims rule
+
+This example modifies claim rules on a specific relying party trust (application), and includes the information you must append.
+
+```PowerShell
+
+Set-AdfsRelyingPartyTrust -TargetName AppA -AdditionalAuthenticationRules 'c:[type ==
+"https://schemas.microsoft.com/ws/2012/01/insidecorporatenetwork", value == "false"] => issue(type =
+"https://schemas.microsoft.com/ws/2008/06/identity/claims/authenticationmethod", value =
+"https://schemas.microsoft.com/claims/multipleauthn" );
+c:[Type == "https://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid", Value ==
+"YourGroupSID"] => issue(Type = "https://schemas.microsoft.com/claims/authnmethodsproviders",
+Value = "AzureMfaAuthentication");
+not exists([Type == "https://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid",
+Value=="YourGroupSid"]) => issue(Type =
+"https://schemas.microsoft.com/claims/authnmethodsproviders", Value =
+"AzureMfaServerAuthentication");'
+
+```
+
+### Configure Azure AD MFA as an authentication provider in AD FS
+
+To configure Azure AD MFA for AD FS, you must configure each AD FS server. If you have multiple AD FS servers in your farm, you can configure them remotely using Azure AD PowerShell.
+
+For step-by-step directions on this process, see [Configure the AD FS servers](/windows-server/identity/ad-fs/operations/configure-ad-fs-and-azure-mfa) in the article [Configure Azure AD MFA as authentication provider with AD FS](/windows-server/identity/ad-fs/operations/configure-ad-fs-and-azure-mfa).
+
+Once you've configured the servers, you can add Azure AD MFA as an additional authentication method.
+
+![Screenshot showing the Edit authentication methods screen with Azure MFA and Azure Multi-Factor Authentication Server selected.](./media/how-to-migrate-mfa-server-to-azure-mfa-user-authentication/edit-authentication-methods.png)
+
+## Prepare Azure AD and implement
+
+### Ensure SupportsMFA is set to True
+
+For federated domains, MFA may be enforced by Azure AD Conditional Access or by the on-premises federation provider. Each federated domain in Azure AD has a SupportsMFA flag. When the SupportsMFA flag is set to True, Azure AD redirects users to MFA on AD FS or another federation provider. For example, if a user is accessing an application for which a Conditional Access policy that requires MFA has been configured, the user will be redirected to AD FS. Adding Azure AD MFA as an authentication method in AD FS enables Azure AD MFA to be invoked once your configurations are complete.
+
+If the SupportsMFA flag is set to False, you're likely not using Azure MFA; you're probably using claims rules on AD FS relying parties to invoke MFA.
+
+You can check the status of your SupportsMFA flag with the following [Windows PowerShell cmdlet](/powershell/module/msonline/get-msoldomainfederationsettings?view=azureadps-1.0):
+
+```powershell
+Get-MsolDomainFederationSettings -DomainName yourdomain.com
+```
+
+If the SupportsMFA flag is set to false or is blank for your federated domain, set it to true using the following Windows PowerShell cmdlet:
+
+```powershell
+Set-MsolDomainFederationSettings -DomainName contoso.com -SupportsMFA $true
+```
+
+This configuration allows the decision to use MFA Server or Azure MFA to be made on AD FS.
+
+### Configure Conditional Access policies if needed
+
+If you use Conditional Access to determine when users are prompted for MFA, you shouldn't need to change your policies.
+
+If your federated domain(s) have SupportsMFA set to false, analyze your claims rules on the Azure AD relying party trust and create Conditional Access policies that support the same security goals.
+
+After creating conditional access policies to enforce the same controls as AD FS, you can back up and remove your claim rules customizations on the Azure AD Relying Party.
+
+For more information, see the following resources:
+
+* [Plan a Conditional Access deployment](../conditional-access/plan-conditional-access.md)
+
+* [Common Conditional Access policies](../conditional-access/concept-conditional-access-policy-common.md)
++
+## Register users for Azure MFA
+
+There are two ways to register users for Azure MFA:
+
+* Register for combined security (MFA and self-service-password reset)
+
+* Migrate phone numbers from MFA Server
+
+The Microsoft Authenticator app can be used in passwordless mode. It can also be used as a second factor for MFA with either registration method.
+
+### Register for combined security registration (recommended)
+
+Have users register for combined security information, which is a single place to register their authentication methods and devices for both MFA and SSPR. While it is possible to migrate data from the MFA Server to Azure AD MFA, it presents the following challenges:
+
+* Only phone numbers can be migrated.
+
+* Authenticator apps will need to be reregistered.
+
+* Stale data can be migrated.
+
+Microsoft provides communication templates that you can provide to your users to guide them through the combined registration process. These include templates for email, posters, table tents, and other assets. Users register their information at [https://aka.ms/mysecurityinfo](https://aka.ms/mysecurityinfo), which takes them to the combined security registration screen.
+
+We recommend that you [secure the security registration process with Conditional Access](../conditional-access/howto-conditional-access-policy-registration.md) that requires the registration to occur from a trusted device or location.
+
+> [!NOTE]
+> Users who must register their combined security information from a non-trusted location or device can be issued a Temporary Access Pass or be temporarily excluded from the policy.
+
+### Migrate phone numbers from MFA Server
+
+Migrating phone numbers can lead to stale numbers being migrated, and make users more likely to stay on phone-based MFA instead of setting up more secure methods like [passwordless sign-in with the Microsoft Authenticator](howto-authentication-passwordless-phone.md) app. You can't migrate device registrations such as their Microsoft Authenticator app settings. We recommend that you have all users register for [combined security information](howto-registration-mfa-sspr-combined.md). Combined security information also registers users for self-service password reset.
+
+If having users register their combined security information is absolutely not an option, it's possible to export the users and their phone numbers from MFA Server and import the phone numbers into Azure AD.
+
+
+#### Export user phone numbers from MFA Server
+
+1. Open the Multi-Factor Authentication Server admin console on the MFA Server.
+
+1. Select **File**, then **Export Users**.
+
+1. **Save** the CSV file. The default name is Multi-Factor Authentication Users.csv
+
+#### Interpret and format the .csv file
+
+The .csv file contains many fields not necessary for migration. Edit and format it before importing the phone numbers into Azure AD.
+
+When opening the .csv file, columns of interest include:
+
+* Username
+* Primary Phone
+* Primary Country Code
+* Backup Country Code
+* Backup Phone
+* Backup Extension
+
+You'll need to interpret, clean, and format the data.
+
+#### Tips to avoid errors during import
+
+* Modify the CSV file before using the Authentication Methods API to import the phone numbers into Azure AD.
+
+* Simplify the .csv to three columns: UPN, PhoneType, and PhoneNumber.
+
+* Make sure the exported MFA Server Username matches the Azure AD UserPrincipalName. If it doesn't, update the username in the CSV file to match what is in Azure AD, otherwise the user won't be found.
+
+Users may have already registered phone numbers in Azure AD. When you import the phone numbers using the Authentication Methods API, you must decide whether to overwrite the existing phone number or to add the imported number as an alternate phone number.
+
+The following PowerShell commands take the CSV file you supply and add the exported phone numbers as a phone number for each UPN by using the Authentication Methods API. Replace "myPhones" with the name of your CSV file.
+
+```powershell
+
+$csv = import-csv myPhones.csv
+
+$csv|% { New-MgUserAuthenticationPhoneMethod -UserId $_.UPN -phoneType $_.PhoneType -phoneNumber $_.PhoneNumber}
+
+```
+
+### Add users to the appropriate groups
+
+* If you created new conditional access policies, add the appropriate users to those groups.
+
+* If you created on-premises security groups for claims rules, add the appropriate users to those groups.
+
+We don't recommend that you reuse groups that are used for security. Therefore, if you are using a security group to secure a group of high-value apps via a Conditional Access policy, that should be the only use of that group.
+
+## Monitoring
+
+Azure MFA registration can be monitored using the [Authentication methods usage & insights report](https://portal.azure.com/). This report can be found in Azure AD. Select **Monitoring**, then select **Usage & insights**.
+
+In Usage & insights, select **Authentication methods**.
+
+Detailed Azure MFA registration information can be found on the Registration tab. You can drill down to view a list of registered users by selecting the **Users capable of Azure multi-factor authentication** hyperlink.
+
+![Image of Authentication methods activity screen showing user registrations to MFA](./media/how-to-migrate-mfa-server-to-azure-mfa-with-federation/authentication-methods.png)
+
+
+
+## Clean up steps
+
+Once you have completed migration to Azure MFA and are ready to decommission the MFA Server, do the following three things:
+
+1. Revert your claim rules on AD FS to their pre-migration configuration and remove the MFA Server authentication provider.
+
+1. Remove MFA server as an authentication provider in AD FS. This will ensure all users use Azure MFA as it will be the only additional authentication method enabled.
+
+1. Decommission the MFA Server.
+
+### Revert claims rules on AD FS and remove MFA Server authentication provider
+
+Follow the steps under Configure claims rules to invoke Azure AD MFA to revert to the backed-up claims rules and remove any AzureMFAServerAuthentication claims rules.
+
+For example, remove the following from the rule(s):
+
+
+```console
+c:[Type == "[https://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid"](https://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid"), Value ==
+
+"YourGroupSID"] => issue(Type = "[https://schemas.microsoft.com/claims/authnmethodsproviders"](https://schemas.microsoft.com/claims/authnmethodsproviders"),
+
+Value = "AzureMfaAuthentication");
+
+not exists([Type == "[https://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid"](https://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid"),
+
+Value=="YourGroupSid"]) => issue(Type =
+
+"[https://schemas.microsoft.com/claims/authnmethodsproviders"](https://schemas.microsoft.com/claims/authnmethodsproviders"), Value =
+
+"AzureMfaServerAuthentication");ΓÇÖ
+```
+
+### Disable MFA Server as an authentication provider in AD FS
+
+This change ensures only Azure MFA is used as an authentication provider.
+
+1. Open the **AD FS management console**.
+
+1. Under **Services**, right-click on **Authentication Methods**, and select **Edit Multi-factor Authentication Methods**.
+
+1. Uncheck the box next to **Azure Multi-Factor Authentication Server**.
+
+### Decommission the MFA Server
+
+Follow your enterprise server decommissioning process to remove the MFA Servers in your environment.
+
+Possible considerations when decommissioning the MFA Servers include:
+
+* Review the MFA Server logs to ensure no users or applications are using it before you remove the server.
+
+* Uninstall Multi-Factor Authentication Server from the Control Panel on the server
+
+* Optionally clean up logs and data directories that are left behind after backing them up first.
+
+* Uninstall the Multi-Factor Authentication Web Server SDK, if applicable, including any files left over in the inetpub\wwwroot\MultiFactorAuthWebServiceSdk and MultiFactorAuth directories.
+
+* For MFA Server versions prior to 8.0, it may also be necessary to remove the Multi-Factor Auth Phone App Web Service
+
+## Next steps
+
+- [Deploy password hash synchronization](../hybrid/whatis-phs.md)
+- [Learn more about Conditional Access](../conditional-access/overview.md)
+- [Migrate applications to Azure AD](../manage-apps/migrate-application-authentication-to-azure-active-directory.md)
+
+
+
+
+
+
active-directory How To Migrate Mfa Server To Azure Mfa https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/how-to-migrate-mfa-server-to-azure-mfa.md
+
+ Title: Migrate from Azure MFA Server to Azure multi-factor authentication - Azure Active Directory
+description: Step-by-step guidance to migrate from Azure MFA Server on-premises to Azure multi-factor authentication
+ Last updated : 06/22/2021
+# Migrate from Azure MFA Server to Azure multi-factor authentication
+
+Multi-factor authentication (MFA) is important for securing your infrastructure and assets from bad actors. Azure MFA Server isn't available for new deployments and will be deprecated. Customers who are using MFA Server should move to using cloud-based Azure Active Directory (Azure AD) multi-factor authentication.
+In this documentation, we assume that you have a hybrid environment where:
+
+- You are using MFA Server for MFA.
+- You are using federation on Azure AD with Active Directory Federation Services (AD FS) or another identity provider federation product.
+ - While this article is scoped to AD FS, similar steps apply to other identity providers.
+- Your MFA Server is integrated with AD FS.
+- You might have applications using AD FS for authentication.
+
+There are multiple possible end states to your migration, depending on your goal.
+
+| <br> | Goal: Decommission MFA Server ONLY | Goal: Decommission MFA Server and move to Azure AD Authentication | Goal: Decommission MFA Server and AD FS |
+|||-|--|
+|MFA provider | Change MFA provider from MFA Server to Azure AD MFA. | Change MFA provider from MFA Server to Azure AD MFA. | Change MFA provider from MFA Server to Azure AD MFA. |
+|User authentication |Continue to use federation for Azure AD authentication. | Move to Azure AD with Password Hash Synchronization (preferred) or Pass-through Authentication **and** seamless single sign-on.| Move to Azure AD with Password Hash Synchronization (preferred) or Pass-through Authentication **and** seamless single sign-on. |
+|Application authentication | Continue to use AD FS authentication for your applications. | Continue to use AD FS authentication for your applications. | Move apps to Azure AD before migrating to Azure MFA. |
+
+If you can, move both your MFA and your user authentication to Azure. For step-by-step guidance, see [Moving to Azure AD MFA and Azure AD user authentication](how-to-migrate-mfa-server-to-azure-mfa-user-authentication.md).
+
+If you can't move your user authentication, see the step-by-step guidance for [Moving to Azure AD MFA with federation](how-to-migrate-mfa-server-to-azure-mfa-with-federation.md).
+
+## Prerequisites
+
+- AD FS environment (required if you are not migrating all your apps to Azure AD prior to migrating MFA)
+ - Upgrade to AD FS for Windows Server 2019, farm behavior level (FBL) 4. This enables you to select an authentication provider based on group membership for a more seamless user transition. While it is possible to migrate while on AD FS for Windows Server 2016 FBL 3, it is not as seamless for users. During the migration, users will be prompted to select an authentication provider (MFA Server or Azure MFA) until the migration is complete.
+- Permissions
+ - Enterprise administrator role in Active Directory to configure AD FS farm for Azure AD MFA
+ - Global administrator role in Azure AD to perform configuration of Azure AD using Azure AD PowerShell
++
+## Considerations for all migration paths
+
+Migrating from MFA Server to Azure AD MFA involves more than just moving the registered MFA phone numbers.
+Microsoft's MFA Server can be integrated with many systems, and you must evaluate how these systems are using MFA Server to understand the best ways to integrate with Azure AD MFA.
+
+### Migrating MFA user information
+
+Common ways to think about moving users in batches include moving them by regions, departments, or roles such as administrators.
+Whichever strategy you choose, ensure that you move users iteratively, starting with test and pilot groups, and that you have a rollback plan in place.
+
+While you can migrate users' registered MFA phone numbers and hardware tokens, you cannot migrate device registrations such as their Microsoft Authenticator app settings.
+Users will need to register and add a new account on the Authenticator app and remove the old account.
+
+To help users differentiate the newly added account from the old account linked to the MFA Server, make sure the Account name for the Mobile App on the MFA Server is named in a way that distinguishes the two accounts.
+For example, you might rename the Account name that appears under Mobile App on the MFA Server to OnPrem MFA Server.
+The account name on the Authenticator App will change with the next push notification to the user.
+
+Migrating phone numbers can also lead to stale numbers being migrated and make users more likely to stay on phone-based MFA instead of setting up more secure methods like Microsoft Authenticator in passwordless mode.
+We therefore recommend that, regardless of the migration path you choose, you have all users register for [combined security information](howto-registration-mfa-sspr-combined.md).
++
+#### Migrating hardware security keys
+
+Azure AD provides support for OATH hardware tokens.
+To migrate the tokens from MFA Server to Azure AD MFA, the [tokens must be uploaded into Azure AD using a CSV file](concept-authentication-oath-tokens.md#oath-hardware-tokens-preview), commonly referred to as a "seed file".
+The seed file contains the secret keys and token serial numbers, along with other information needed to upload the tokens into Azure AD.
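+
+A hedged illustration of what a seed file can look like (the header row follows the CSV format described in the linked OATH tokens article; the UPN, serial number, and secret key below are placeholders, not vendor-supplied values):
+
+``` powershell
+# Illustrative only: write a sample OATH token seed file for upload to Azure AD.
+# Replace the placeholder values with the serial numbers and secret keys supplied by your hardware vendor.
+@"
+upn,serial number,secret key,time interval,manufacturer,model
+b.simon@contoso.com,1234567,1234567abcdef1234567abcdef,30,Contoso,HardwareKey
+"@ | Set-Content -Path .\oath-tokens.csv
+```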
+
+If you no longer have the seed file, it is not possible to export the secret keys from MFA Server.
+In that case, contact your hardware vendor for support.
+
+The MFA Server Web Service SDK can be used to export the serial number for any OATH tokens assigned to a given user.
+Using this information along with the seed file, IT admins can import the tokens into Azure AD and assign the OATH token to the specified user based on the serial number.
+You will also need to contact the user at the time of import so they can supply OTP information from the device to complete the registration.
+Please refer to the GetUserInfo > userSettings > OathTokenSerialNumber topic in the Multi-Factor Authentication Server help file on your MFA Server.
++
+### Additional migrations
+
+The decision to migrate from MFA Server to Azure MFA opens the door for other migrations. Whether to complete those additional migrations depends on many factors, specifically:
+
+- Your willingness to use Azure AD authentication for users
+- Your willingness to move your applications to Azure AD
+
+As MFA Server is deeply integrated with both applications and user authentication, you may want to consider moving both of those functions to Azure as a part of your MFA migration, and eventually decommission AD FS.
+
+Our recommendations:
+
+- Use Azure AD for authentication as it enables more robust security and governance
+- Move applications to Azure AD if possible
+
+To select the user authentication method best for your organization, see [Choose the right authentication method for your Azure AD hybrid identity solution](../hybrid/choose-ad-authn.md).
+We recommend that you use Password Hash Synchronization (PHS).
+
+### Passwordless authentication
+
+As part of enrolling users to use Microsoft Authenticator as a second factor, we recommend you enable passwordless phone sign-in as part of their registration. For more information, including other passwordless methods such as FIDO and Windows Hello for Business, visit [Plan a passwordless authentication deployment with Azure AD](howto-authentication-passwordless-deployment.md#plan-for-and-deploy-the-microsoft-authenticator-app).
+
+### Microsoft Identity Manager self-service password reset
+
+Microsoft Identity Manager (MIM) SSPR can use MFA Server to invoke SMS one-time passcodes as part of the password reset flow.
+MIM cannot be configured to use Azure MFA.
+We recommend you evaluate moving your SSPR service to Azure AD SSPR.
+
+When users register for Azure MFA, you can use the combined registration experience to have them register for Azure AD SSPR at the same time.
++
+### RADIUS clients and Azure AD MFA
+
+MFA Server supports RADIUS to invoke MFA for applications and network devices that support the protocol.
+If you are using RADIUS with MFA Server, we recommend moving client applications to modern protocols such as SAML, OpenID Connect, or OAuth on Azure AD.
+If the application cannot be updated, you can deploy Network Policy Server (NPS) with the Azure MFA extension.
+The NPS extension acts as an adapter between RADIUS-based applications and Azure AD MFA to provide a second factor of authentication. This allows you to move your RADIUS clients to Azure MFA and decommission your MFA Server.
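+
+If you do deploy the NPS extension, most of the configuration on the NPS server is scripted. A minimal sketch, assuming the default install location and script name documented for the extension (verify both against the current NPS extension documentation before running):
+
+``` powershell
+# Run on the NPS server after installing the Azure MFA NPS extension.
+# The setup script creates a self-signed certificate, associates it with your Azure AD tenant,
+# and prompts you to sign in as an administrator and supply your tenant ID.
+Set-Location "C:\Program Files\Microsoft\AzureMfa\Config"
+.\AzureMfaNpsExtnConfigSetup.ps1
+```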
+
+#### Important considerations
+
+There are limitations when using NPS for RADIUS clients, and we recommend evaluating any RADIUS clients to determine if you can upgrade them to modern authentication protocols.
+Check with the service provider for supported product versions and their capabilities.
+
+- The NPS extension does not use Azure AD Conditional Access policies. If you stay with RADIUS and use the NPS extension, all authentication requests going to NPS will require the user to perform MFA.
+- Users must register for Azure AD MFA prior to using the NPS extension. Otherwise, the extension fails to authenticate the user, which can generate help desk calls.
+- When the NPS extension invokes MFA, the MFA request is sent to the user's default MFA method.
+ - Because the sign-in happens on third-party applications, it is unlikely that the user will see a visual notification that MFA is required and that a request has been sent to their device.
+ - The user must have access to their default authentication method to complete the MFA requirement.
+ - Users can change their default MFA method in the Security Info page (aka.ms/mysecurityinfo).
+- Available MFA methods for RADIUS clients are controlled by the client systems sending the RADIUS access requests.
+ - MFA methods that require user input after they enter a password can only be used with systems that support access-challenge responses with RADIUS. Input methods might include OTP, hardware OATH tokens or the Microsoft Authenticator application.
+ - Some systems might limit available MFA methods to Microsoft Authenticator push notifications and phone calls.
++
+>[!NOTE]
+>The password encryption algorithm used between the RADIUS client and the NPS system, and the input methods the client can use affect which authentication methods are available. For more information, see [Determine which authentication methods your users can use](howto-mfa-nps-extension.md).
+
+Common RADIUS client integrations include applications such as [Remote Desktop Gateways](howto-mfa-nps-extension-rdg.md) and [VPN Servers](howto-mfa-nps-extension-vpn.md).
+Others might include:
+
+- Citrix Gateway
+ - [Citrix Gateway](https://docs.citrix.com/advanced-concepts/implementation-guides/citrix-gateway-microsoft-azure.html) supports both RADIUS and NPS extension integration, and a SAML integration.
+- Cisco VPN
+ - The Cisco VPN supports both RADIUS and [SAML authentication for SSO](../saas-apps/cisco-anyconnect.md).
+ - By moving from RADIUS authentication to SAML, you can integrate the Cisco VPN without deploying the NPS extension.
+- All VPNs
+ - We recommend federating your VPN as a SAML app if possible. This will allow you to use Conditional Access. For more information, see a [list of VPN vendors that are integrated into the Azure AD](../manage-apps/secure-hybrid-access.md#sha-through-vpn-and-sdp-applications) App gallery.
++
+### Resources for deploying NPS
+
+- [Adding new NPS infrastructure](/windows-server/networking/technologies/nps/nps-top)
+- [NPS deployment best practices](https://www.youtube.com/watch?v=qV9wddunpCY)
+- [Azure MFA NPS extension health check script](/samples/azure-samples/azure-mfa-nps-extension-health-check/azure-mfa-nps-extension-health-check/)
+- [Integrating existing NPS infrastructure with Azure AD MFA](howto-mfa-nps-extension-vpn.md)
+
+## Next steps
+
+- [Moving to Azure AD MFA with federation](how-to-migrate-mfa-server-to-azure-mfa-with-federation.md)
+- [Moving to Azure AD MFA and Azure AD user authentication](how-to-migrate-mfa-server-to-azure-mfa-user-authentication.md)
++
active-directory Howto Mfa Userstates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/howto-mfa-userstates.md
For Azure AD free tenants without Conditional Access, you can [use security defa
If needed, you can instead enable each account for per-user Azure AD Multi-Factor Authentication. When users are enabled individually, they perform multi-factor authentication each time they sign in (with some exceptions, such as when they sign in from trusted IP addresses or when the _remember MFA on trusted devices_ feature is turned on).
-Changing user states isn't recommended unless your Azure AD licenses don't include Conditional Access and you don't want to use security defaults. For more information on the different ways to enable MFA, see [Features and licenses for Azure AD Multi-Factor Authentication](concept-mfa-licensing.md).
+Changing [user states](https://docs.microsoft.com/azure/active-directory/authentication/howto-mfa-userstates#azure-ad-multi-factor-authentication-user-states) isn't recommended unless your Azure AD licenses don't include Conditional Access and you don't want to use security defaults. For more information on the different ways to enable MFA, see [Features and licenses for Azure AD Multi-Factor Authentication](concept-mfa-licensing.md).
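+
+If you do need to change a user's state, a minimal sketch using the MSOnline PowerShell module looks like the following (the user principal name is a placeholder; adjust it for your tenant):
+
+``` powershell
+# Illustrative sketch: enable per-user Azure AD Multi-Factor Authentication for one account.
+Connect-MsolService
+$mfa = New-Object -TypeName Microsoft.Online.Administration.StrongAuthenticationRequirement
+$mfa.RelyingParty = "*"
+$mfa.State = "Enabled"
+Set-MsolUser -UserPrincipalName "b.simon@contoso.com" -StrongAuthenticationRequirements @($mfa)
+```
+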
> [!IMPORTANT] >
active-directory Active Directory Claims Mapping https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/active-directory-claims-mapping.md
In this example, you create a policy that adds the EmployeeID and TenantCountry
New-AzureADPolicy -Definition @('{"ClaimsMappingPolicy":{"Version":1,"IncludeBasicClaimSet":"true", "ClaimsSchema": [{"Source":"user","ID":"employeeid","SamlClaimType":"http://schemas.xmlsoap.org/ws/2005/05/identity/claims/employeeid","JwtClaimType":"name"},{"Source":"company","ID":"tenantcountry","SamlClaimType":"http://schemas.xmlsoap.org/ws/2005/05/identity/claims/country","JwtClaimType":"country"}]}}') -DisplayName "ExtraClaimsExample" -Type "ClaimsMappingPolicy" ```
+ > [!WARNING]
+ > When you define a claims mapping policy for a directory extension attribute, use the `ExtensionID` property instead of the `ID` property within the body of the `ClaimsSchema` array.
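+
+ A hedged variant of the earlier command for a directory extension attribute (the extension attribute name, claim types, and the application ID embedded in `ExtensionID` are placeholders):
+
+ ``` powershell
+ # Illustrative only: map a directory extension attribute; ExtensionID replaces ID in the ClaimsSchema entry.
+ New-AzureADPolicy -Definition @('{"ClaimsMappingPolicy":{"Version":1,"IncludeBasicClaimSet":"true","ClaimsSchema":[{"Source":"user","ExtensionID":"extension_00aa00aa00aa00aa00aa00aa00aa00aa_BadgeNumber","SamlClaimType":"http://schemas.contoso.com/identity/claims/badgenumber","JwtClaimType":"badgenumber"}]}}') -DisplayName "ExtensionClaimsExample" -Type "ClaimsMappingPolicy"
+ ```
+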
+ 2. To see your new policy, and to get the policy ObjectId, run the following command: ``` powershell
active-directory Howto Add App Roles In Azure Ad Apps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/howto-add-app-roles-in-azure-ad-apps.md
Learn more about app roles with the following resources.
- Code samples on GitHub - [Add authorization using groups and group claims to an ASP.NET Core web app](https://aka.ms/groupssample)
- - [Angular single-page application (SPA) calling a .NET Core web API and using app roles and security groups](https://github.com/Azure-Samples/ms-identity-javascript-angular-spa-dotnetcore-webapi-roles-groups/blob/master/README.md)
+ - [Angular single-page application (SPA) calling a .NET Core web API and using app roles and security groups](https://github.com/Azure-Samples/ms-identity-javascript-angular-tutorial/tree/main/5-AccessControl)
+ - [React single-page application (SPA) calling a Node.js web API and using app roles and security groups](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/tree/main/5-AccessControl)
- Reference documentation - [Azure AD app manifest](./reference-app-manifest.md) - [Azure AD access tokens](access-tokens.md)
active-directory Quickstart V2 Javascript Auth Code Angular https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-javascript-auth-code-angular.md
This quickstart uses MSAL Angular v2 with the authorization code flow. For a sim
> > ### Option 1 (Express): Register and auto configure your app and then download your code sample >
-> 1. Go to the <a href="https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/ApplicationsListBlade/quickStartType/JavascriptSpaQuickstartPage/sourceType/docs" target="_blank">Azure portal - App registrations</a> quickstart experience.
+> 1. Go to the <a href="https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/ApplicationsListBlade/quickStartType/AngularSpaQuickstartPage/sourceType/docs" target="_blank">Azure portal - App registrations</a> quickstart experience.
> 1. Enter a name for your application. > 1. Under **Supported account types**, select **Accounts in any organizational directory and personal Microsoft accounts**. > 1. Select **Register**.
active-directory Quickstart V2 Javascript Auth Code React https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-javascript-auth-code-react.md
See [How the sample works](#how-the-sample-works) for an illustration.
> > ### Option 1 (Express): Register and auto configure your app and then download your code sample >
-> 1. Go to the <a href="https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/ApplicationsListBlade/quickStartType/JavascriptSpaQuickstartPage/sourceType/docs" target="_blank">Azure portal - App registrations</a> quickstart experience.
+> 1. Go to the <a href="https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/ApplicationsListBlade/quickStartType/ReactSpaQuickstartPage/sourceType/docs" target="_blank">Azure portal - App registrations</a> quickstart experience.
> 1. Enter a name for your application. > 1. Under **Supported account types**, select **Accounts in any organizational directory and personal Microsoft accounts**. > 1. Select **Register**.
active-directory Create Access Review https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/governance/create-access-review.md
If you have assigned guests as reviewers and they have not accepted the invite,
## Create reviews via APIs You can also create access reviews using APIs. What you do to manage access reviews of groups and application users in the Azure portal can also be done using Microsoft Graph APIs.
-+ For more information, see the [Azure AD access reviews API reference](/graph/api/resources/accessreviewsv2-root?view=graph-rest-beta&preserve-view=true).
++ For more information, see the [Azure AD access reviews API reference](/graph/api/resources/accessreviewsv2-root). + For a tutorial, see [Use the access reviews API to review guest access to your Microsoft 365 groups](/graph/tutorial-accessreviews-m365group). + For a code sample, see [Example of retrieving Azure AD access reviews via Microsoft Graph](https://techcommunity.microsoft.com/t5/Azure-Active-Directory/Example-of-retrieving-Azure-AD-access-reviews-via-Microsoft/m-p/236096).
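+
+As a quick, hedged sketch (it assumes you already hold a Microsoft Graph access token with an access reviews read permission in `$token`; see the linked API reference for the authoritative resource names and permissions), you can list existing access review definitions directly:
+
+``` powershell
+# Illustrative only: list access review schedule definitions via Microsoft Graph.
+$headers = @{ Authorization = "Bearer $token" }
+Invoke-RestMethod -Method Get -Headers $headers -Uri "https://graph.microsoft.com/v1.0/identityGovernance/accessReviews/definitions"
+```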
active-directory Permissions Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/permissions-reference.md
Users in this role can read settings and administrative information across Micro
>- [Teams admin center](https://admin.teams.microsoft.com) - Global Reader cannot read **Teams lifecycle**, **Analytics & reports**, **IP phone device management** and **App catalog**. >- [Privileged Access Management (PAM)](/office365/securitycompliance/privileged-access-management-overview) doesn't support the Global Reader role. >- [Azure Information Protection](/azure/information-protection/what-is-information-protection) - Global Reader is supported [for central reporting](/azure/information-protection/reports-aip) only, and when your Azure AD organization isn't on the [unified labeling platform](/azure/information-protection/faqs#how-can-i-determine-if-my-tenant-is-on-the-unified-labeling-platform).
+> - [SharePoint](https://admin.microsoft.com/sharepoint) - Global Reader currently can't access SharePoint using PowerShell.
> > These features are currently in development. >
active-directory Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/prerequisites.md
To use AzureADPreview, follow these steps to make sure it is imported into the c
Get-Module -Name AzureADPreview ```
-1. If you don't see any output in the previous step, use [Import-Module](/powershell/module/powershellget/import-module) to import AzureADPreview. The `-Force` parameter removes the loaded module and then imports it again.
+1. If you don't see any output in the previous step, use [Import-Module](/powershell/module/microsoft.powershell.core/import-module) to import AzureADPreview. The `-Force` parameter removes the loaded module and then imports it again.
```powershell Import-Module -Name AzureADPreview -Force
active-directory Appian Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/appian-tutorial.md
Previously updated : 11/17/2020 Last updated : 06/21/2021
To get started, you need the following items:
* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). * Appian single sign-on (SSO) enabled subscription.
+> [!NOTE]
+> This integration is also available to use from Azure AD US Government Cloud environment. You can find this application in the Azure AD US Government Cloud Application Gallery and configure it in the same way as you do from public cloud.
+ ## Scenario description In this tutorial, you configure and test Azure AD SSO in a test environment.
-* Appian supports **SP and IDP** initiated SSO
-* Appian supports **Just In Time** user provisioning
+* Appian supports **SP and IDP** initiated SSO.
+* Appian supports **Just In Time** user provisioning.
## Adding Appian from the gallery
To configure the integration of Appian into Azure AD, you need to add Appian fro
1. In the **Add from the gallery** section, type **Appian** in the search box. 1. Select **Appian** from results panel and then add the app. Wait a few seconds while the app is added to your tenant. - ## Configure and test Azure AD SSO for Appian Configure and test Azure AD SSO with Appian using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Appian.
Follow these steps to enable Azure AD SSO in the Azure portal.
1. In the Azure portal, on the **Appian** application integration page, find the **Manage** section and select **single sign-on**. 1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
-1. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, enter the values for the following fields:
+1. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, perform the following steps:
a. In the **Identifier** text box, type a URL using the following pattern: `https://<SUBDOMAIN>.appiancloud.com`
Follow these steps to enable Azure AD SSO in the Azure portal.
1. On the **Set up Appian** section, copy the appropriate URL(s) based on your requirement. ![Copy configuration URLs](common/copy-configuration-urls.png)+ ### Create an Azure AD test user In this section, you'll create a test user in the Azure portal called B.Simon.
Appian. This time, you have to sign in with the current user.
1. Once you have successfully verified that you can still sign in, click **Save Changes**. - ### Create Appian test user In this section, a user called Britta Simon is created in Appian. Appian supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Appian, a new one is created after authentication.
In this section, you test your Azure AD single sign-on configuration with follow
#### IDP initiated:
-* Click on **Test this application** in Azure portal and you should be automatically signed in to the Appian for which you set up the SSO
-
-You can also use Microsoft Access Panel to test the application in any mode. When you click the Appian tile in the Access Panel, if configured in SP mode you would be redirected to the application sign-on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Appian for which you set up the SSO. For more information about the Access Panel, see [Introduction to the Access Panel](../user-help/my-apps-portal-end-user-access.md).
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the Appian for which you set up the SSO.
+You can also use Microsoft My Apps to test the application in any mode. When you click the Appian tile in the My Apps, if configured in SP mode you would be redirected to the application sign-on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Appian for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
## Next steps
active-directory Arcgis Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/arcgis-tutorial.md
Previously updated : 02/08/2021 Last updated : 06/18/2021 # Tutorial: Azure Active Directory integration with ArcGIS Online
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. If you want to setup ArcGIS Online manually, open a new web browser window and log into your ArcGIS company site as an administrator and perform the following steps:
-2. Click **EDIT SETTINGS**.
+2. Go to the **Organization** -> **Settings**.
![Edit Settings](./media/arcgis-tutorial/settings.png "Edit Settings")
-3. Click **Security**.
+3. In the left menu, click **Security** and select **New SAML login** in the Logins tab.
- ![Security](./media/arcgis-tutorial/secure.png "Security")
+ ![screenshot for Security](./media/arcgis-tutorial/security.png)
-4. Under **Enterprise Logins**, click **SET IDENTITY PROVIDER**.
+4. In the **Set SAML login** window, choose the configuration as **One identity provider** and click **Next**.
- ![Enterprise Logins](./media/arcgis-tutorial/enterprise.png "Enterprise Logins")
+ ![Enterprise Logins](./media/arcgis-tutorial/identity-provider.png "Enterprise Logins")
-5. On the **Set Identity Provider** configuration page, perform the following steps:
+5. On the **Specify properties** tab, perform the following steps:
- ![Set Identity Provider](./media/arcgis-tutorial/identity-provider.png "Set Identity Provider")
+ ![Set Identity Provider](./media/arcgis-tutorial/set-saml-login.png "Set Identity Provider")
   a. In the **Name** textbox, type your organization's name.
- b. For **Metadata for the Enterprise Identity Provider will be supplied using**, select **A File**.
+ b. For **Metadata source for Enterprise Identity Provider**, select **File**.
- c. To upload your downloaded metadata file, click **Choose file**.
+ c. Click on **Choose File** to upload the **Federation Metadata XML** file, which you have downloaded from Azure portal.
- d. Click **SET IDENTITY PROVIDER**.
+ d. Click **Save**.
### Create ArcGIS Online test user
In the case of ArcGIS Online, provisioning is a manual task.
1. Log in to your **ArcGIS** tenant.
-2. Click **INVITE MEMBERS**.
+2. Go to the **Organization** -> **Members** and click **Invite members**.
![Invite Members](./media/arcgis-tutorial/invite.png "Invite Members")
-3. Select **Add members automatically without sending an email**, and then click **NEXT**.
+3. Select **Add members without sending invitations** method, and then click **Next**.
- ![Add Members Automatically](./media/arcgis-tutorial/members.png "Add Members Automatically")
+ ![Add Members Automatically](./media/arcgis-tutorial/add-members.png "Add Members Automatically")
-4. On the **Members** dialog page, perform the following steps:
+1. In the **Compile member list**, select **New member** and click **Next**.
+
+4. Fill the required fields in the following page and click **Next**.
![Add and review](./media/arcgis-tutorial/review.png "Add and review")
- a. Enter the **Email**, **First Name**, and **Last Name** of a valid Azure AD account you want to provision.
+5. In the next page, select the member you want to add and click **Next**.
+
+1. Set the required member properties in the next page and click **Next**.
- b. Click **ADD AND REVIEW**.
-5. Review the data you have entered, and then click **ADD MEMBERS**.
+1. In the **Confirm and complete** tab, click **Add members**.
![Add member](./media/arcgis-tutorial/add.png "Add member")
active-directory Ardoq Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/ardoq-tutorial.md
Follow these steps to enable Azure AD SSO in the Azure portal.
|| | `https://<CustomerName>.us.ardoq.com/saml/v2` | | `https://<CustomerName>.ardoq.com/saml/v2` |
- |
--
+
b. In the **Reply URL** text box, type a URL using the following pattern: `https://<CustomerName>.ardoq.com/saml/v2`
Follow these steps to enable Azure AD SSO in the Azure portal.
|-| | `https://<CustomerName>.ardoq.com/saml/v2` | | `https://<CustomerName>.us.ardoq.com/saml/v2` |
- |
> [!NOTE] > These values are not real. Update these values with the actual Identifier, Reply URL and Sign-on URL. Contact [Ardoq Client support team](mailto:support@ardoq.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
Follow these steps to enable Azure AD SSO in the Azure portal.
| assignedRoles | user.assignedroles | | mail | user.mail |
- > [!NOTE]
- > Ardoq expects roles for users assigned to the application. Please set up these roles in Azure AD so that users can be assigned the appropriate roles. To understand how to configure roles in Azure AD, see [here](../develop/howto-add-app-roles-in-azure-ad-apps.md#app-roles-ui).
--
+ > [!NOTE]
+ > Ardoq expects roles for users that are assigned to the application. Be sure to set up these roles in Azure AD, so users can be assigned the appropriate roles. Your roles should be set up with the values "admin", "writer", "reader", and/or "contributor".
+ >
+ > Learn how to [configure roles in Azure AD](../develop/howto-add-app-roles-in-azure-ad-apps.md#app-roles-ui).
+
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer. ![The Certificate download link](common/metadataxml.png)
You can also use Microsoft My Apps to test the application in any mode. When you
## Next steps
-Once you configure Ardoq you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
+Once you configure Ardoq you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).
active-directory Aws Clientvpn Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/aws-clientvpn-tutorial.md
Previously updated : 12/29/2020 Last updated : 06/17/2021
To get started, you need the following items:
In this tutorial, you configure and test Azure AD SSO in a test environment.
-* AWS ClientVPN supports **SP** initiated SSO
+* AWS ClientVPN supports **SP** initiated SSO.
-* AWS ClientVPN supports **Just In Time** user provisioning
+* AWS ClientVPN supports **Just In Time** user provisioning.
-## Adding AWS ClientVPN from the gallery
+> [!NOTE]
+> Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
+
+## Add AWS ClientVPN from the gallery
To configure the integration of AWS ClientVPN into Azure AD, you need to add AWS ClientVPN from the gallery to your list of managed SaaS apps.
To configure the integration of AWS ClientVPN into Azure AD, you need to add AWS
1. In the **Add from the gallery** section, type **AWS ClientVPN** in the search box. 1. Select **AWS ClientVPN** from results panel and then add the app. Wait a few seconds while the app is added to your tenant. - ## Configure and test Azure AD SSO for AWS ClientVPN Configure and test Azure AD SSO with AWS ClientVPN using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in AWS ClientVPN.
Follow these steps to enable Azure AD SSO in the Azure portal.
1. In the Azure portal, on the **AWS ClientVPN** application integration page, find the **Manage** section and select **single sign-on**. 1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
-1. On the **Basic SAML Configuration** section, enter the values for the following fields:
+1. On the **Basic SAML Configuration** section, perform the following steps:
a. In the **Sign on URL** text box, type a URL using the following pattern: `https://<LOCALHOST>`
Follow these steps to enable Azure AD SSO in the Azure portal.
1. On the **Set up AWS ClientVPN** section, copy the appropriate URL(s) based on your requirement. ![Copy configuration URLs](common/copy-configuration-urls.png)+ ### Create an Azure AD test user In this section, you'll create a test user in the Azure portal called B.Simon.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
## Configure AWS ClientVPN SSO
-To configure single sign-on on **AWS ClientVPN** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [AWS ClientVPN support team](https://aws.amazon.com/contact-us/). They set this setting to have the SAML SSO connection set properly on both sides.
+Follow the instructions given in the [link](https://docs.aws.amazon.com/vpn/latest/clientvpn-admin/client-authentication.html#federated-authentication) to configure single sign-on on the AWS ClientVPN side.
### Create AWS ClientVPN test user
active-directory Bpanda Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/bpanda-provisioning-tutorial.md
The scenario outlined in this tutorial assumes that you already have the followi
## Step 2. Configure Bpanda to support provisioning with Azure AD 1. Reach out to support@mid.de for more information on your authentication Tenant URL.
-2. A client secret for further generating access tokens. This must have been transmitted to you in a secure way. Reach out to support@mid.de for more information.
+2. A client secret for generating access tokens. This secret must have been transmitted to you in a secure way. Reach out to support@mid.de for more information.
3. For establishing a successful connection between Azure AD and Bpanda, an access token must be retrieved in either of the following ways.
Add Bpanda from the Azure AD application gallery to start managing provisioning
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to Bpanda, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+* When assigning users and groups to Bpanda, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add other roles.
* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
This section guides you through the steps to configure the Azure AD provisioning
|phoneNumbers[type eq "mobile"].value|String| |externalId|String| |title|String|
+ |preferredLanguage|String|
|urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:department|String| |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:division|String| |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:organization|String|
Once you've configured provisioning, use the following resources to monitor your
## Next steps
-* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Checkpoint Infinity Portal Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/checkpoint-infinity-portal-tutorial.md
Previously updated : 06/04/2021 Last updated : 06/14/2021
In this tutorial, you configure and test Azure AD SSO in a test environment.
* Check Point Infinity Portal supports **Just In Time** user provisioning.
+> [!NOTE]
+> Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
+ ## Add Check Point Infinity Portal from the gallery To configure the integration of Check Point Infinity Portal into Azure AD, you need to add Check Point Infinity Portal from the gallery to your list of managed SaaS apps.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
## Configure Check Point Infinity Portal SSO
-1. On the Infinity Portal, click **Global Events > Account Settings**.
-
-1. Perform the following steps in the **Account Settings** page.
+1. Log in to your Check Point Infinity Portal company site as an administrator.
- ![screenshot to upload metadata.](./media/checkpoint-infinity-portal-tutorial/upload-metadata.png)
+2. Navigate to **Global Settings** > **Account Settings** and click **Define** under SSO Authentication.
+
+ ![Account](./media/checkpoint-infinity-portal-tutorial/define.png "Account")
- a. Move the **Use SSO Authorization** slider from **OFF** to **ON**.
+3. In the **SSO Authentication** page, select **SAML 2.0**
+as an **IDENTITY PROVIDER** and click **NEXT**.
+
+ ![Authentication](./media/checkpoint-infinity-portal-tutorial/identity-provider.png "Authentication")
- b. Click **Upload Metadata file** to upload the **Federation Metadata XML** file which you have downloaded from the Azure portal.
+4. In the **VERIFY DOMAIN** section, perform the following steps:
- c. Go to **Domain Name** field and enter the name of your company.
+ ![Verify Domain](./media/checkpoint-infinity-portal-tutorial/domain.png "Verify Domain")
+
+ a. Copy the DNS record values and add them to your company's DNS server.
- d. In the **SSO Login URL** textbox, paste the **Login URL** value which you have copied from the Azure portal.
+ b. Enter your company's domain name in the **Domain** field and click **Validate**.
- e. Click **Save**.
+ c. Wait for Check Point to approve the DNS record update; it might take up to 30 minutes.
-#### Validate your Domain name
+ d. Click **NEXT** once the domain name is validated.
-1. You must add your company domain to the list of validated domains.
+5. In the **ALLOW CONNECTIVITY** section, perform the following steps:
+
+ ![Allow Connectivity](./media/checkpoint-infinity-portal-tutorial/connectivity.png "Allow Connectivity")
-1. If your company domain is not in the list, the **Domain is not validated** message appears.
+ a. Copy **Entity ID** value, paste this value into the **Azure AD Identifier** text box in the Basic SAML Configuration section in the Azure portal.
-1. Check Point requests **Your Company Domain Name** identity and generates a DNS TXT record.
+ b. Copy **Reply URL** value, paste this value into the **Reply URL** text box in the Basic SAML Configuration section in the Azure portal.
- ![screenshot to domain validating.](./media/checkpoint-infinity-portal-tutorial/domain-value.png)
+ c. Copy **Sign-on URL** value, paste this value into the **Sign on URL** text box in the Basic SAML Configuration section in the Azure portal.
+
+ d. Click **NEXT**.
- a. Copy the DNS record values and add them to the DNS values in your company domain name registrar.
+6. In the **CONFIGURE** section, click **Select File** and upload the **Federation Metadata XML** file which you have downloaded from the Azure portal and click **NEXT**.
- b. In the **SSO Authorization** section, click **Validate**.
+ ![Configure](./media/checkpoint-infinity-portal-tutorial/service.png "Configure")
- c. Wait for Check Point to approve your DNS record. The registrar update of the DNS records can last for up to 30 minutes.
+7. In the **CONFIRM IDENTITY PROVIDER** section, review the configurations and click **SUBMIT**.
+
+ ![Submit Configuration](./media/checkpoint-infinity-portal-tutorial/confirm.png "Submit Configuration")
### Create Check Point Infinity Portal test user
In this section, you test your Azure AD single sign-on configuration with follow
* You can use Microsoft My Apps. When you click the Check Point Infinity Portal tile in the My Apps, this will redirect to Check Point Infinity Portal Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md). - ## Next steps Once you configure Check Point Infinity Portal you can enforce session control, which protects exfiltration and infiltration of your organizationΓÇÖs sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-any-app).--
active-directory Clever Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/clever-tutorial.md
Previously updated : 12/17/2020 Last updated : 06/17/2021
To get started, you need the following items:
In this tutorial, you configure and test Azure AD SSO in a test environment.
-* Clever supports **SP** initiated SSO
+* Clever supports **SP** initiated SSO.
> [!NOTE] > Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
-## Adding Clever from the gallery
+## Add Clever from the gallery
To configure the integration of Clever into Azure AD, you need to add Clever from the gallery to your list of managed SaaS apps.
To configure the integration of Clever into Azure AD, you need to add Clever fro
1. In the **Add from the gallery** section, type **Clever** in the search box. 1. Select **Clever** from results panel and then add the app. Wait a few seconds while the app is added to your tenant. - ## Configure and test Azure AD SSO for Clever Configure and test Azure AD SSO with Clever using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Clever.
-To configure and test Azure AD SSO with Clever, complete the following building blocks:
+To configure and test Azure AD SSO with Clever, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature. 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
Follow these steps to enable Azure AD SSO in the Azure portal.
1. In the Azure portal, on the **Clever** application integration page, find the **Manage** section and select **single sign-on**. 1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
-1. On the **Basic SAML Configuration** section, enter the values for the following fields:
-
- a. In the **Sign on URL** text box, type a URL using the following pattern:
- `https://clever.com/in/<companyname>`
+1. On the **Basic SAML Configuration** section, perform the following steps:
- b. In the **Identifier (Entity ID)** text box, type the URL:
+ a. In the **Identifier (Entity ID)** text box, type the URL:
`https://clever.com/oauth/saml/metadata.xml`
- c. In the **Reply URL** text box, type a URL using the following pattern:
- `https://clever.com/<companyname>`
-
+ b. In the **Reply URL** text box, type a URL using the following pattern:
+ `https://clever.com/<COMPANY_NAME>`
+
+ c. In the **Sign on URL** text box, type a URL using the following pattern:
+ `https://clever.com/in/<COMPANY_NAME>`
+ > [!NOTE]
- > These values are not real. Update these values with the actual Sign-on URL and Reply URL. Contact [Clever Client support team](https://clever.com/about/contact/) to get the value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > These values are not real. Update these values with the actual Reply URL and Sign on URL. Contact [Clever Client support team](https://clever.com/about/contact/) to get the value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
1. On the **Set up single sign-on with SAML** page, In the **SAML Signing Certificate** section, click copy button to copy **App Federation Metadata Url** and save it on your computer.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
## Configure Clever SSO
-Follow the instructions given in the [link](https://support.clever.com/hc/s/articles/205889768) to configure single sign-on on Clever side.
+1. In a different web browser window, log in to your Clever district dashboard as an administrator.
+
+2. From the left navigation, click **Menu** > **Portal** > **SSO Settings**.
+
+3. On the **SSO Settings** page, perform the following steps:
+
+ a. Select **Add Login Method**.
+
+ b. Select **Active Directory Authentication**.
+
+ ![Settings](./media/clever-tutorial/account.png "Settings")
+
+ c. Paste the **App Federation Metadata Url** value, which you copied from the Azure portal, into the **Metadata URL** textbox.
+
+ ![Upload Certificate](./media/clever-tutorial/metadata.png "Upload Certificate")
+
+ d. Click **Save**.
### Create Clever test user
active-directory Expensify Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/expensify-tutorial.md
Previously updated : 03/02/2021 Last updated : 06/18/2021
Follow these steps to enable Azure AD SSO in the Azure portal.
1. On the **Basic SAML Configuration** section, enter the values for the following fields:
- a. In the **Sign on URL** text box, type the URL:
- `https://www.expensify.com/authentication/saml/login`
-
- b. In the **Identifier (Entity ID)** text box, type the URL:
+ a. In the **Identifier (Entity ID)** text box, type the URL:
`https://www.expensify.com`
- c. b. In the **Reply URL** text box, type a URL using the following pattern:
+ b. In the **Reply URL** text box, type a URL using the following pattern:
`https://www.expensify.com/authentication/saml/loginCallback?domain=<yourdomain>`
+
+ c. In the **Sign on URL** text box, type the URL:
+ `https://www.expensify.com/authentication/saml/login`
> [!NOTE] > The Reply URL value is not real. Update this value with the actual Reply URL. Contact [Expensify Client support team](mailto:help@expensify.com) to get this value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
To enable SSO in Expensify, you first need to enable **Domain Control** in the a
1. Sign on to your Expensify application.
-2. In the left panel, click **Settings** and navigate to **SAML**.
+2. In the left panel, hover over **Settings**, then click **Domains** and navigate to **SAML**.
3. Toggle the **SAML Login** option as **Enabled**.
To enable SSO in Expensify, you first need to enable **Domain Control** in the a
### Create Expensify test user
-In this section, you create a user called B.Simon in Expensify. Work with [Expensify Client support team](mailto:help@expensify.com) to add the users in the Expensify platform.
+In this section, you create the same user called B.Simon (for example, B.Simon@contoso.com) in Expensify. See the Expensify guide [here](https://community.expensify.com/discussion/4869/how-to-manage-domain-members) for inviting members, or work with the [Expensify Client support team](mailto:help@expensify.com) to add the users in the Expensify platform.
## Test SSO
active-directory Golinks Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/golinks-provisioning-tutorial.md
+
+ Title: 'Tutorial: Configure GoLinks for automatic user provisioning with Azure Active Directory | Microsoft Docs'
+description: Learn how to automatically provision and de-provision user accounts from Azure AD to GoLinks.
+
+documentationcenter: ''
+
+writer: Zhchia
++
+ms.assetid: b8a62f41-861f-417a-8925-70b892d9a4de
+++
+ na
+ms.devlang: na
+ Last updated : 06/21/2021+++
+# Tutorial: Configure GoLinks for automatic user provisioning
+
+This tutorial describes the steps you need to perform in both GoLinks and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [GoLinks](https://www.golinks.io) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../manage-apps/user-provisioning.md).
++
+## Capabilities Supported
+> [!div class="checklist"]
+> * Create users in GoLinks
+> * Remove users in GoLinks when they do not require access anymore
+> * Keep user attributes synchronized between Azure AD and GoLinks
+> * [Single sign-on](https://docs.microsoft.com/azure/active-directory/saas-apps/golinks-tutorial) to GoLinks (recommended)
+
+## Prerequisites
+
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
+
+* [An Azure AD tenant](https://docs.microsoft.com/azure/active-directory/develop/quickstart-create-new-tenant)
+* A user account in Azure AD with [permission](https://docs.microsoft.com/azure/active-directory/users-groups-roles/directory-assign-admin-roles) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+* A GoLinks tenant on the [Enterprise plan](https://www.golinks.io/pricing.php).
+* A user account in [GoLinks](https://www.golinks.io) with admin access.
++
+## Step 1. Plan your provisioning deployment
+1. Learn about [how the provisioning service works](https://docs.microsoft.com/azure/active-directory/manage-apps/user-provisioning).
+2. Determine who will be in [scope for provisioning](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
+3. Determine what data to [map between Azure AD and GoLinks](https://docs.microsoft.com/azure/active-directory/manage-apps/customize-application-attributes).
+
+## Step 2. Configure GoLinks to support provisioning with Azure AD
+
+1. The Tenant URL is `https://api.golinks.io/scim/v2`. This value will be entered in the **Tenant URL** field in the Provisioning tab of your GoLinks application in the Azure portal.
+
+2. For the **Secret Token**, reach out to the GoLinks Support team at support@golinks.io or your Customer Success Manager. This value will be entered in the **Secret Token** field in the Provisioning tab of your GoLinks application in the Azure portal.
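+
+Optionally, before entering these values in the Azure portal, you can sanity-check them with a direct request. This is a hedged sketch that assumes the endpoint accepts standard SCIM 2.0 bearer-token requests; the token value below is a placeholder.
+
+``` powershell
+# Illustrative only: confirm the SCIM Tenant URL and Secret Token respond before configuring provisioning.
+$secretToken = "<secret-token-from-GoLinks-support>"
+$headers = @{ Authorization = "Bearer $secretToken" }
+Invoke-RestMethod -Method Get -Headers $headers -Uri "https://api.golinks.io/scim/v2/Users?count=1"
+```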
++
+## Step 3. Add GoLinks from the Azure AD application gallery
+
+Add GoLinks from the Azure AD application gallery to start managing provisioning to GoLinks. If you have previously setup GoLinks for SSO, you can use the same application. However it is recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](https://docs.microsoft.com/azure/active-directory/manage-apps/add-gallery-app).
+
+## Step 4. Define who will be in scope for provisioning
+
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
+
+* When assigning users and groups to GoLinks, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](https://docs.microsoft.com/azure/active-directory/develop/howto-add-app-roles-in-azure-ad-apps) to add other roles.
+
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
++
+## Step 5. Configure automatic user provisioning to GoLinks
+
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in GoLinks based on user and/or group assignments in Azure AD.
+
+### To configure automatic user provisioning for GoLinks in Azure AD:
+
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
+
+ ![Enterprise applications blade](common/enterprise-applications.png)
+
+2. In the applications list, select **GoLinks**.
+
+ ![The GoLinks link in the Applications list](common/all-applications.png)
+
+3. Select the **Provisioning** tab.
+
+ ![Provisioning tab](common/provisioning.png)
+
+4. Set the **Provisioning Mode** to **Automatic**.
+
+ ![Provisioning tab automatic](common/provisioning-automatic.png)
+
+5. Under the **Admin Credentials** section, input your GoLinks Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to GoLinks. If the connection fails, ensure your GoLinks account has Admin permissions and try again.
+
+ ![Token](common/provisioning-testconnection-tenanturltoken.png)
+
+6. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+
+ ![Notification Email](common/provisioning-notification-email.png)
+
+7. Select **Save**.
+
+8. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to GoLinks**.
+
+9. Review the user attributes that are synchronized from Azure AD to GoLinks in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in GoLinks for update operations. If you choose to change the [matching target attribute](https://docs.microsoft.com/azure/active-directory/manage-apps/customize-application-attributes), you will need to ensure that the GoLinks API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported For Filtering|
+ ||||
+ |userName|String|&check;|
+ |externalId|String|
+ |active|Boolean|
+ |emails[type eq "work"].value|String|
+ |name.givenName|String|
+ |name.familyName|String|
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:department|String|
+
+10. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../manage-apps/define-conditional-rules-for-provisioning-user-accounts.md).
+
+11. To enable the Azure AD provisioning service for GoLinks, change the **Provisioning Status** to **On** in the **Settings** section.
+
+ ![Provisioning Status Toggled On](common/provisioning-toggle-on.png)
+
+12. Define the users and/or groups that you would like to provision to GoLinks by choosing the desired values in **Scope** in the **Settings** section.
+
+ ![Provisioning Scope](common/provisioning-scope.png)
+
+13. When you are ready to provision, click **Save**.
+
+ ![Saving Provisioning Configuration](common/provisioning-configuration-save.png)
+
+This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
+
+## Step 6. Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment:
+
+1. Use the [provisioning logs](https://docs.microsoft.com/azure/active-directory/reports-monitoring/concept-provisioning-logs) to determine which users have been provisioned successfully or unsuccessfully
+2. Check the [progress bar](https://docs.microsoft.com/azure/active-directory/app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user) to see the status of the provisioning cycle and how close it is to completion
+3. If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](https://docs.microsoft.com/azure/active-directory/manage-apps/application-provisioning-quarantine-status).
+
+## Additional resources
+
+* [Managing user account provisioning for Enterprise Apps](../manage-apps/configure-automatic-user-provisioning-portal.md)
+* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+
+## Next steps
+
+* [Learn how to review logs and get reports on provisioning activity](../manage-apps/check-status-user-account-provisioning.md)
active-directory Logmein Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/logmein-provisioning-tutorial.md
+
+ Title: 'Tutorial: Configure LogMeIn for automatic user provisioning with Azure Active Directory | Microsoft Docs'
+description: Learn how to automatically provision and de-provision user accounts from Azure AD to LogMeIn.
+
+documentationcenter: ''
+
+writer: Zhchia
++
+ms.assetid: cf38e6ad-6391-4e5d-98f7-fbdaf3de54f5
+++
+ na
+ms.devlang: na
+ Last updated : 06/02/2021+++
+# Tutorial: Configure LogMeIn for automatic user provisioning
+
+This tutorial describes the steps you need to perform in both LogMeIn and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [LogMeIn](https://www.logmein.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../manage-apps/user-provisioning.md).
++
+## Capabilities Supported
+> [!div class="checklist"]
+> * Create users in LogMeIn
+> * Remove users in LogMeIn when they do not require access anymore
+> * Keep user attributes synchronized between Azure AD and LogMeIn
+> * Provision groups and group memberships in LogMeIn
+> * [Single sign-on](https://docs.microsoft.com/azure/active-directory/saas-apps/logmein-tutorial) to LogMeIn (recommended)
+
+## Prerequisites
+
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
+
+* [An Azure AD tenant](https://docs.microsoft.com/azure/active-directory/develop/quickstart-create-new-tenant)
+* A user account in Azure AD with [permission](https://docs.microsoft.com/azure/active-directory/users-groups-roles/directory-assign-admin-roles) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+* An organization created in the LogMeIn Organization Center with at least one verified domain
+* A user account in the LogMeIn Organization Center with [permission](https://support.goto.com/meeting/help/manage-organization-users-g2m710102) to configure provisioning (for example, organization administrator role with Read & Write permissions) as shown in Step 2.
+
+## Step 1. Plan your provisioning deployment
+1. Learn about [how the provisioning service works](https://docs.microsoft.com/azure/active-directory/manage-apps/user-provisioning).
+2. Determine who will be in [scope for provisioning](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
+3. Determine what data to [map between Azure AD and LogMeIn](https://docs.microsoft.com/azure/active-directory/manage-apps/customize-application-attributes).
+
+## Step 2. Configure LogMeIn to support provisioning with Azure AD
+
+1. Log in to the [Organization Center](https://organization.logmeininc.com).
+
+2. You are prompted to verify the domain used in your account's email address within 10 days.
+
+3. You can verify ownership of your domain using either of the following methods:
+
+ **Method 1: Add a DNS record to your domain zone file.**
+ To use the DNS method, you place a DNS record at the level of the email domain within your DNS zone. Examples using "main.com" as the domain would resemble: `@ IN TXT "logmein-verification-code=668e156b-f5d3-430e-9944-f1d4385d043e"` OR `main.com. IN TXT "logmein-verification-code=668e156b-f5d3-430e-9944-f1d4385d043e"`
+
+ Detailed instructions are as follows:
+ 1. Sign in to your domain's account at your domain host.
+ 2. Navigate to the page for updating your domain's DNS records.
+ 3. Locate the TXT records for your domain, then add a TXT record for the domain and for each subdomain.
+ 4. Save all changes.
+ 5. You can verify that the change has taken place by opening a command line and entering one of the following commands (based on your operating system, with "main.com" as the example domain):
+ * For Unix and Linux systems: `$ dig TXT main.com`
+ * For Windows systems: `c:\ > nslookup -type=TXT main.com`
+ 6. The TXT record value is returned on its own line in the response, as in the sketch below.
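+
+ As a hypothetical illustration (using the example domain and verification code from above), a successful lookup might resemble:
+
+ ```bash
+ $ dig TXT main.com +short
+ "logmein-verification-code=668e156b-f5d3-430e-9944-f1d4385d043e"
+ ```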
+
+ **Method 2: Upload a web server file to the specific website.**
+ Upload a plain-text file to your web server root containing a verification string without any blank spaces or special characters outside of the string. A quick way to confirm the file is reachable is sketched below.
+
+ * Location: `http://<yourdomain>/logmein-verification-code.txt`
+ * Contents: `logmein-verification-code=668e156b-f5d3-430e-9944-f1d4385d043e`
+
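+ As a rough sketch (the URL and code are the examples from above; your own domain and verification string will differ), you can confirm the file is reachable with:
+
+ ```bash
+ curl http://<yourdomain>/logmein-verification-code.txt
+ # Expected output:
+ # logmein-verification-code=668e156b-f5d3-430e-9944-f1d4385d043e
+ ```
+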
+4. Once you have added the DNS record or TXT file, return to [Organization Center](https://organization.logmeininc.com) and click **Verify**.
+
+5. You have now created an organization in the Organization Center by verifying your domain, and the account used during this verification process is now the organization admin.
+
+## Step 3. Add LogMeIn from the Azure AD application gallery
+
+Add LogMeIn from the Azure AD application gallery to start managing provisioning to LogMeIn. If you have previously set up LogMeIn for SSO, you can use the same application. However, it's recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](https://docs.microsoft.com/azure/active-directory/manage-apps/add-gallery-app).
+
+## Step 4. Define who will be in scope for provisioning
+
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts). A sketch of assigning a user to the app programmatically follows the notes below.
+
+* When assigning users and groups to LogMeIn, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](https://docs.microsoft.com/azure/active-directory/develop/howto-add-app-roles-in-azure-ad-apps) to add additional roles.
+
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
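+
+For reference, one hedged sketch of assigning a single user to the app programmatically is to call Microsoft Graph with the Azure CLI. The object IDs and app role ID below are placeholders that you would look up in your own tenant, and this is only one of several ways to create the assignment.
+
+```azurecli
+# Hypothetical example: assign a user to the LogMeIn enterprise application (service principal)
+az rest --method post \
+  --url "https://graph.microsoft.com/v1.0/users/<user-object-id>/appRoleAssignments" \
+  --headers "Content-Type=application/json" \
+  --body '{"principalId": "<user-object-id>", "resourceId": "<service-principal-object-id>", "appRoleId": "<app-role-id>"}'
+```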
++
+## Step 5. Configure automatic user provisioning to LogMeIn
+
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in LogMeIn based on user and/or group assignments in Azure AD.
+
+### To configure automatic user provisioning for LogMeIn in Azure AD:
+
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
+
+ ![Enterprise applications blade](common/enterprise-applications.png)
+
+2. In the applications list, select **LogMeIn**.
+
+ ![The LogMeIn link in the Applications list](common/all-applications.png)
+
+3. Select the **Provisioning** tab.
+
+ ![Provisioning tab](common/provisioning.png)
+
+4. Set the **Provisioning Mode** to **Automatic**.
+
+ ![Provisioning tab automatic](common/provisioning-automatic.png)
+
+5. Under the **Admin Credentials** section, click on **Authorize**. You will be redirected to **LogMeIn**'s authorization page. Input your LogMeIn username and click on the **Next** button. Input your LogMeIn password and click on the **Sign In** button. Click **Test Connection** to ensure Azure AD can connect to LogMeIn. If the connection fails, ensure your LogMeIn account has Admin permissions and try again.
+
+ ![authorization](./media/logmein-provisioning-tutorial/admin.png)
+
+ ![login](./media/logmein-provisioning-tutorial/username.png)
+
+ ![connection](./media/logmein-provisioning-tutorial/password.png)
+
+6. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+
+ ![Notification Email](common/provisioning-notification-email.png)
+
+7. Select **Save**.
+
+8. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to LogMeIn**.
+
+9. Review the user attributes that are synchronized from Azure AD to LogMeIn in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in LogMeIn for update operations. If you choose to change the [matching target attribute](https://docs.microsoft.com/azure/active-directory/manage-apps/customize-application-attributes), you will need to ensure that the LogMeIn API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|
+ |||
+ |userName|String|
+ |externalId|String|
+ |active|Boolean|
+ |name.givenName|String|
+ |name.familyName|String|
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:department|String|
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:employeeNumber|String|
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:costCenter|String|
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:division|String|
+
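+For reference, a hypothetical SCIM user object that uses the attributes above might resemble the following. The values are illustrative only and do not come from LogMeIn documentation.
+
+```json
+{
+  "schemas": [
+    "urn:ietf:params:scim:schemas:core:2.0:User",
+    "urn:ietf:params:scim:schemas:extension:enterprise:2.0:User"
+  ],
+  "userName": "b.simon@contoso.com",
+  "externalId": "b.simon",
+  "active": true,
+  "name": { "givenName": "B", "familyName": "Simon" },
+  "urn:ietf:params:scim:schemas:extension:enterprise:2.0:User": {
+    "department": "Sales",
+    "employeeNumber": "12345",
+    "costCenter": "CC100",
+    "division": "Field Operations"
+  }
+}
+```
+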
+10. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to LogMeIn**.
+
+11. Review the group attributes that are synchronized from Azure AD to LogMeIn in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in LogMeIn for update operations. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|
+ |||
+ |displayName|String|
+ |externalId|String|
+ |members|Reference|
+
+12. To configure scoping filters, see the instructions in the [Scoping filter tutorial](../manage-apps/define-conditional-rules-for-provisioning-user-accounts.md).
+
+13. To enable the Azure AD provisioning service for LogMeIn, change the **Provisioning Status** to **On** in the **Settings** section.
+
+ ![Provisioning Status Toggled On](common/provisioning-toggle-on.png)
+
+14. Define the users and/or groups that you would like to provision to LogMeIn by choosing the desired values in **Scope** in the **Settings** section.
+
+ ![Provisioning Scope](common/provisioning-scope.png)
+
+15. When you are ready to provision, click **Save**.
+
+ ![Saving Provisioning Configuration](common/provisioning-configuration-save.png)
+
+This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
+
+## Step 6. Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment:
+
+1. Use the [provisioning logs](https://docs.microsoft.com/azure/active-directory/reports-monitoring/concept-provisioning-logs) to determine which users have been provisioned successfully or unsuccessfully
+2. Check the [progress bar](https://docs.microsoft.com/azure/active-directory/app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user) to see the status of the provisioning cycle and how close it is to completion
+3. If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](https://docs.microsoft.com/azure/active-directory/manage-apps/application-provisioning-quarantine-status).
+
+## Additional resources
+
+* [Managing user account provisioning for Enterprise Apps](../manage-apps/configure-automatic-user-provisioning-portal.md)
+* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+
+## Next steps
+
+* [Learn how to review logs and get reports on provisioning activity](../manage-apps/check-status-user-account-provisioning.md)
active-directory Logmein Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/logmein-tutorial.md
Previously updated : 04/14/2021 Last updated : 06/18/2021 # Tutorial: Azure Active Directory single sign-on (SSO) integration with LogMeIn
To configure the integration of LogMeIn into Azure AD, you need to add LogMeIn f
1. In the **Add from the gallery** section, type **LogMeIn** in the search box. 1. Select **LogMeIn** from results panel and then add the app. Wait a few seconds while the app is added to your tenant. - ## Configure and test Azure AD SSO for LogMeIn Configure and test Azure AD SSO with LogMeIn using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in LogMeIn.
Follow these steps to enable Azure AD SSO in the Azure portal.
1. In the Azure portal, on the **LogMeIn** application integration page, find the **Manage** section and select **single sign-on**. 1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
Follow these steps to enable Azure AD SSO in the Azure portal.
![Copy configuration URLs](common/copy-configuration-urls.png) - ### Create an Azure AD test user In this section, you'll create a test user in the Azure portal called B.Simon.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
## Configure LogMeIn SSO
-1. In a different browser window, log in to your LogMeIn website as an administrator.
+1. To automate the configuration within LogMeIn, you need to install **My Apps Secure Sign-in browser extension** by clicking **Install the extension**.
+
+ ![My apps extension](common/install-myappssecure-extension.png)
+
+1. After adding the extension to the browser, click **Set up LogMeIn** to go directly to the LogMeIn application. From there, provide the admin credentials to sign in to LogMeIn. The browser extension automatically configures the application for you and automates steps 3-5.
+
+ ![Setup configuration](common/setup-sso.png)
+
+1. If you want to set up LogMeIn manually, in a different web browser window, sign in to your LogMeIn company site as an administrator.
1. Go to the **Identity Provider** tab and in the **Metadata url** textbox, paste the **Federation Metadata URL**, which you have copied from the Azure portal.
active-directory Mypolicies Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/mypolicies-provisioning-tutorial.md
The objective of this tutorial is to demonstrate the steps to be performed in my
The scenario outlined in this tutorial assumes that you already have the following prerequisites: * An Azure AD tenant.
-* [A myPolicies tenant](https://mypolicies.com/https://docsupdatetracker.net/index.html#section10).
+* [A myPolicies tenant](https://mypolicies.com/).
* A user account in myPolicies with Admin permissions. ## Assigning users to myPolicies
active-directory Saba Cloud Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/saba-cloud-tutorial.md
Previously updated : 03/22/2021 Last updated : 06/18/2021
To configure the integration of Saba Cloud into Azure AD, you need to add Saba C
1. In the **Add from the gallery** section, type **Saba Cloud** in the search box. 1. Select **Saba Cloud** from results panel and then add the app. Wait a few seconds while the app is added to your tenant. - ## Configure and test Azure AD SSO for Saba Cloud Configure and test Azure AD SSO with Saba Cloud using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Saba Cloud.
Follow these steps to enable Azure AD SSO in the Azure portal.
1. In the **User Attributes & Claims** section, adjust the Unique User Identifier to whatever you organization intends to use as the primary username for Saba users.
- This step is required only if you're attempting to convert from username/password to SSO. If this is a new Saba Cloud deployment that doesn't have existing usrs, you can skip this step.
+ This step is required only if you're attempting to convert from username/password to SSO. If this is a new Saba Cloud deployment that doesn't have existing users, you can skip this step.
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
## Configure Saba Cloud SSO
-1. Sign in to your Saba Cloud company site as an administrator.
+1. To automate the configuration within Saba Cloud, you need to install **My Apps Secure Sign-in browser extension** by clicking **Install the extension**.
+
+ ![My apps extension](common/install-myappssecure-extension.png)
+
+1. After adding the extension to the browser, click **Set up Saba Cloud** to go directly to the Saba Cloud application. From there, provide the admin credentials to sign in to Saba Cloud. The browser extension automatically configures the application for you and automates steps 3-9.
+
+ ![Setup configuration](common/setup-sso.png)
+
+1. If you want to set up Saba Cloud manually, in a different web browser window, sign in to your Saba Cloud company site as an administrator.
+ 1. Click on **Menu** icon and click **Admin**, then select **System Admin** tab. ![screenshot for system admin](./media/saba-cloud-tutorial/system.png)
active-directory Smallstep Ssh Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/smallstep-ssh-provisioning-tutorial.md
+
+ Title: 'Tutorial: Configure Smallstep SSH for automatic user provisioning with Azure Active Directory | Microsoft Docs'
+description: Learn how to automatically provision and de-provision user accounts from Azure AD to Smallstep SSH.
+
+documentationcenter: ''
+
+writer: Zhchia
++
+ms.assetid: 1f37bd8a-4706-4385-b42e-5507912066f1
+++
+ na
+ms.devlang: na
+ Last updated : 06/21/2021+++
+# Tutorial: Configure Smallstep SSH for automatic user provisioning
+
+This tutorial describes the steps you need to perform in both Smallstep SSH and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Smallstep SSH](https://smallstep.com) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../manage-apps/user-provisioning.md).
++
+## Capabilities Supported
+> [!div class="checklist"]
+> * Create users in Smallstep SSH
+> * Remove users in Smallstep SSH when they do not require access anymore
+> * Keep user attributes synchronized between Azure AD and Smallstep SSH
+> * Provision groups and group memberships in Smallstep SSH
+> * Single sign-on to Smallstep SSH (recommended)
+
+## Prerequisites
+
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
+
+* [An Azure AD tenant](https://docs.microsoft.com/azure/active-directory/develop/quickstart-create-new-tenant)
+* A user account in Azure AD with [permission](https://docs.microsoft.com/azure/active-directory/users-groups-roles/directory-assign-admin-roles) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+* A [Smallstep SSH](https://smallstep.com/sso-ssh/) account.
+
+## Step 1. Plan your provisioning deployment
+1. Learn about [how the provisioning service works](https://docs.microsoft.com/azure/active-directory/manage-apps/user-provisioning).
+2. Determine who will be in [scope for provisioning](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
+3. Determine what data to [map between Azure AD and Smallstep SSH](https://docs.microsoft.com/azure/active-directory/manage-apps/customize-application-attributes).
+
+## Step 2. Configure Smallstep SSH to support provisioning with Azure AD
+
+1. Log in to your [Smallstep SSH](https://smallstep.com/sso-ssh/) account.
+
+2. Navigate to the **Users** tab and select **Azure AD** as your Identity Provider.
+
+3. On the next page, provide your **Azure AD tenant ID** and **domain whitelist** to configure OIDC.
+
+4. Under SCIM Details, copy and save your SCIM **Tenant URL** and **Secret Token**. These values will be entered in the **Tenant URL** and **Secret Token** fields in the Provisioning tab of your Smallstep SSH application in the Azure portal. A quick way to sanity-check these values is sketched after the note below.
+
+> [!NOTE]
+> You need to grant access to your Smallstep managed hosts via Active Directory groups. For example, you might have a group for your SSH users and one for your sudo users. Learn more about access control in the [Azure AD Quickstart](https://smallstep.com/docs/ssh/azure-ad) and the [Host Quickstart Guide](https://smallstep.com/docs/ssh/hosts).
+
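+For a quick sanity check of the values you copied, you can call the SCIM endpoint directly. This is only a hedged sketch that assumes the Smallstep SCIM endpoint accepts a standard SCIM 2.0 `/Users` request with a bearer token; replace the placeholders with your own Tenant URL and Secret Token.
+
+```bash
+# Hypothetical example: list one user from the SCIM endpoint to confirm the URL and token work
+curl -H "Authorization: Bearer <secret-token>" "<tenant-url>/Users?startIndex=1&count=1"
+```
+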
+## Step 3. Add Smallstep SSH from the Azure AD application gallery
+
+Add Smallstep SSH from the Azure AD application gallery to start managing provisioning to Smallstep SSH. If you have previously set up Smallstep SSH for SSO, you can use the same application. However, it's recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](https://docs.microsoft.com/azure/active-directory/manage-apps/add-gallery-app).
+
+## Step 4. Define who will be in scope for provisioning
+
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
+
+* When assigning users and groups to Smallstep SSH, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](https://docs.microsoft.com/azure/active-directory/develop/howto-add-app-roles-in-azure-ad-apps) to add other roles.
+
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](https://docs.microsoft.com/azure/active-directory/manage-apps/define-conditional-rules-for-provisioning-user-accounts).
++
+## Step 5. Configure automatic user provisioning to Smallstep SSH
+
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in Smallstep SSH based on user and/or group assignments in Azure AD.
+
+### To configure automatic user provisioning for Smallstep SSH in Azure AD:
+
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
+
+ ![Enterprise applications blade](common/enterprise-applications.png)
+
+2. In the applications list, select **Smallstep SSH**.
+
+ ![The Smallstep SSH link in the Applications list](common/all-applications.png)
+
+3. Select the **Provisioning** tab.
+
+ ![Provisioning tab](common/provisioning.png)
+
+4. Set the **Provisioning Mode** to **Automatic**.
+
+ ![Provisioning tab automatic](common/provisioning-automatic.png)
+
+5. Under the **Admin Credentials** section, input your Smallstep SSH Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to Smallstep SSH. If the connection fails, ensure your Smallstep SSH account has Admin permissions and try again.
+
+ ![Token](common/provisioning-testconnection-tenanturltoken.png)
+
+6. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+
+ ![Notification Email](common/provisioning-notification-email.png)
+
+7. Select **Save**.
+
+8. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Smallstep SSH**.
+
+9. Review the user attributes that are synchronized from Azure AD to Smallstep SSH in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Smallstep SSH for update operations. If you choose to change the [matching target attribute](https://docs.microsoft.com/azure/active-directory/manage-apps/customize-application-attributes), you will need to ensure that the Smallstep SSH API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported For Filtering|
+ |||--|
+ |userName|String|&check;|
+ |active|Boolean|
+ |displayName|String|
+ |emails[type eq "work"].value|String|
+ |name.givenName|String|
+ |name.familyName|String|
+
+10. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to Smallstep SSH**.
+
+11. Review the group attributes that are synchronized from Azure AD to Smallstep SSH in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in Smallstep SSH for update operations. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported For Filtering|
+ ||||
+ |displayName|String|&check;|
+ |members|Reference|
+
+12. To configure scoping filters, see the instructions in the [Scoping filter tutorial](../manage-apps/define-conditional-rules-for-provisioning-user-accounts.md).
+
+13. To enable the Azure AD provisioning service for Smallstep SSH, change the **Provisioning Status** to **On** in the **Settings** section.
+
+ ![Provisioning Status Toggled On](common/provisioning-toggle-on.png)
+
+14. Define the users and/or groups that you would like to provision to Smallstep SSH by choosing the desired values in **Scope** in the **Settings** section.
+
+ ![Provisioning Scope](common/provisioning-scope.png)
+
+15. When you are ready to provision, click **Save**.
+
+ ![Saving Provisioning Configuration](common/provisioning-configuration-save.png)
+
+This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
+
+## Step 6. Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment:
+
+1. Use the [provisioning logs](https://docs.microsoft.com/azure/active-directory/reports-monitoring/concept-provisioning-logs) to determine which users have been provisioned successfully or unsuccessfully
+2. Check the [progress bar](https://docs.microsoft.com/azure/active-directory/app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user) to see the status of the provisioning cycle and how close it is to completion
+3. If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](https://docs.microsoft.com/azure/active-directory/manage-apps/application-provisioning-quarantine-status).
+
+## Additional resources
+
+* [Managing user account provisioning for Enterprise Apps](../manage-apps/configure-automatic-user-provisioning-portal.md)
+* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+
+## Next steps
+
+* [Learn how to review logs and get reports on provisioning activity](../manage-apps/check-status-user-account-provisioning.md)
active-directory Splan Visitor Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/splan-visitor-tutorial.md
Previously updated : 12/14/2020 Last updated : 06/21/2021
To get started, you need:
* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). * A Splan Visitor single sign-on (SSO) enabled subscription.
+> [!NOTE]
+> This integration is also available to use from Azure AD US Government Cloud environment. You can find this application in the Azure AD US Government Cloud Application Gallery and configure it in the same way as you do from public cloud.
+ ## Scenario description In this tutorial, you'll configure and test Azure AD SSO in a test environment.
-Splan Visitor supports IdP-initiated SSO.
+* Splan Visitor supports IdP-initiated SSO.
## Add Splan Visitor from the gallery
Follow these steps to enable Azure AD SSO in the Azure portal:
1. In the Azure portal, on the **Splan Visitor** application integration page, find the **Manage** section and select **single sign-on**. 1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up Single Sign-On with SAML** page, select the **edit/pen** icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up Single Sign-On with SAML** page, select the **pencil** icon for **Basic SAML Configuration** to edit the settings.
![Screenshot highlighting the edit/pen icon for Basic SAML Configuration.](common/edit-urls.png)
active-directory Configure Azure Active Directory For Fedramp High Impact https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/standards/configure-azure-active-directory-for-fedramp-high-impact.md
The following is a list of FedRAMP resources:
* [FedRAMP High blueprint sample overview](../../governance/blueprints/samples/fedramp-h/index.md)
-* [Microsoft 365 compliance center](///microsoft-365/compliance/microsoft-365-compliance-center)
+* [Microsoft 365 compliance center](/microsoft-365/compliance/microsoft-365-compliance-center)
-* [Microsoft Compliance Manager](///microsoft-365/compliance/compliance-manager)
+* [Microsoft Compliance Manager](/microsoft-365/compliance/compliance-manager)
## Next steps
active-directory User Help Auth App Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/user-help/user-help-auth-app-faq.md
Previously updated : 04/30/2021 Last updated : 06/21/2021
On Android, Microsoft recommends allowing the app to access location all the tim
**A**: The Authenticator app collects three types of information: -- Account info you provide when you add your account. This data can be removed by removing your account.
+- Account info you provide when you add your account. After adding your account, depending on the features you enable for the account, your account data might sync down to the app. This data can be removed by removing your account.
- Diagnostic log data that stays only in the app until you **Send feedback** in the app's top menu to send logs to Microsoft. These logs can contain personal data such as email addresses, server addresses, or IP addresses. They also can contain device data such as device name and operating system version. Any personal data collected is limited to info needed to help troubleshoot app issues. You can browse these log files in the app at any time to see the info being gathered. If you send your log files, Authenticator app engineers will use them only to troubleshoot customer-reported issues.
- Non-personally identifiable usage data, such as "started add account flow/successfully added account," or "notification approved." This data is an integral part of our engineering decisions. Your usage helps us determine where we can improve the apps in ways that are important to you. You see a notification of this data collection when you use the app for the first time. It informs you that it can be turned off on the app's **Settings** page. You can turn this setting on or off at any time.
aks Operator Best Practices Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/operator-best-practices-identity.md
There are two levels of access needed to fully operate an AKS cluster:
To access other Azure services, like Cosmos DB, Key Vault, or Blob Storage, the pod needs access credentials. You could define access credentials with the container image or inject them as a Kubernetes secret. Either way, you would need to manually create and assign them. Usually, these credentials are reused across pods and aren't regularly rotated.
-With pod-managed identities for Azure resources, you automatically request access to services through Azure AD. Pod-managed identities is now currently in preview for AKS. Please refer to the [Use Azure Active Directory pod-managed identities in Azure Kubernetes Service (Preview)[( https://docs.microsoft.com/azure/aks/use-azure-ad-pod-identity) documentation to get started.
+With pod-managed identities for Azure resources, you automatically request access to services through Azure AD. Pod-managed identities are currently in preview for AKS. Please refer to the [Use Azure Active Directory pod-managed identities in Azure Kubernetes Service (Preview)](https://docs.microsoft.com/azure/aks/use-azure-ad-pod-identity) documentation to get started.
Instead of manually defining credentials for pods, pod-managed identities request an access token in real time, using it to access only their assigned services. In AKS, there are two components that handle the operations to allow pods to use managed identities:
app-service Configure Connect To Azure Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/configure-connect-to-azure-storage.md
Title: Add Azure Storage (container)
+ Title: Mount Azure Storage as a local share (container)
description: Learn how to attach custom network share in a containerized app in Azure App Service. Share files between apps, manage static content remotely and access locally, etc. Previously updated : 7/01/2019 Last updated : 6/21/2021 zone_pivot_groups: app-service-containers-windows-linux
-# Access Azure Storage (preview) as a network share from a container in App Service
+# Mount Azure Storage as a local share in a container app in App Service
::: zone pivot="container-windows"
-This guide shows how to attach Azure Storage Files as a network share to a windows container in App Service. Only [Azure Files Shares](../storage/files/storage-how-to-use-files-cli.md) and [Premium Files Shares](../storage/files/storage-how-to-create-file-share.md) are supported. Benefits include secured content, content portability, access to multiple apps, and multiple transferring methods.
- > [!NOTE]
->Azure Storage in App Service is **in preview** and **not supported** for **production scenarios**.
+>Azure Storage in App Service Windows containers is **in preview** and **not supported** for **production scenarios**.
+
+This guide shows how to mount Azure Storage Files as a network share in a Windows container in App Service. Only [Azure Files Shares](../storage/files/storage-how-to-use-files-cli.md) and [Premium Files Shares](../storage/files/storage-how-to-create-file-share.md) are supported. The benefits of custom-mounted storage include:
::: zone-end ::: zone pivot="container-linux"
-This guide shows how to attach Azure Storage to a Linux container App Service. Benefits include secured content, content portability, persistent storage, access to multiple apps, and multiple transferring methods.
+This guide shows how to mount Azure Storage as a network share in a built-in Linux container or a custom Linux container in App Service. The benefits of custom-mounted storage include:
-> [!NOTE]
->Azure Storage in App Service is **in preview** for App Service on Linux and Web App for Containers. It's **not supported** for **production scenarios**.
+
+- Configure persistent storage for your App Service app and manage the storage separately.
+- Make static content like video and images readily available for your App Service app.
+- Write application log files or archive older application logs to Azure File shares.
+- Share content across multiple apps or with other Azure services.
++
+The following features are supported for Windows containers:
+
+- Azure Files (read/write).
+- Up to five mount points per app.
+++
+The following features are supported for Linux containers:
+
+- Secured access to storage accounts with [service endpoints](../storage/common/storage-network-security.md#grant-access-from-a-virtual-network) and [private links](../storage/common/storage-private-endpoints.md) (when [VNET integration](web-sites-integrate-with-vnet.md) is used).
+- Azure Files (read/write).
+- Azure Blobs (read-only).
+- Up to five mount points per app.
::: zone-end
This guide shows how to attach Azure Storage to a Linux container App Service. B
::: zone-end > [!NOTE]
-> Azure Files is non-default storage and billed separately, not included with the web app. It doesn't support using Firewall configuration due to infrastructure limitations.
+> Azure Storage is non-default storage for App Service and billed separately, not included with App Service.
> ## Limitations ::: zone pivot="container-windows" -- Azure Storage in App Service is currently **not supported** for bring your own code scenarios (non-containerized Windows apps).-- Azure Storage in App Service **doesn't support** using the **Storage Firewall** configuration because of infrastructure limitations.-- Azure Storage with App Service lets you specify **up to five** mount points per app.-- Azure Storage mounted to an app is not accessible through App Service FTP/FTPs endpoints. Use [Azure Storage Explorer](https://azure.microsoft.com/features/storage-explorer/).
+- Storage mounts are not supported for native Windows (non-containerized) apps.
+- [Storage firewall](../storage/common/storage-network-security.md), [service endpoints](../storage/common/storage-network-security.md#grant-access-from-a-virtual-network), and [private endpoints](../storage/common/storage-private-endpoints.md) are not supported.
+- FTP/FTPS access to mounted storage not supported (use [Azure Storage Explorer](https://azure.microsoft.com/features/storage-explorer/)).
+- Azure CLI, Azure PowerShell, and Azure SDK support is in preview.
+- Mapping `D:\` or `D:\home` to custom-mounted storage is not supported.
+- Drive letter assignments (`C:` to `Z:`) are not supported.
+- Storage mounts cannot be used together with clone settings option during [deployment slot](deploy-staging-slots.md) creation.
+- Storage mounts are not backed up when you [back up your app](manage-backup.md). Be sure to follow best practices to back up the Azure Storage accounts.
::: zone-end ::: zone pivot="container-linux" -- Azure Storage in App Service supports mounting **Azure Files containers** (Read / Write) and **Azure Blob containers** (Read Only)-- Azure Storage in App Service lets you specify **up to five** mount points per app.-- Azure Storage mounted to an app is not accessible through App Service FTP/FTPs endpoints. Use [Azure Storage Explorer](https://azure.microsoft.com/features/storage-explorer/).
+- [Storage firewall](../storage/common/storage-network-security.md) is supported only through [service endpoints](../storage/common/storage-network-security.md#grant-access-from-a-virtual-network) and [private endpoints](../storage/common/storage-private-endpoints.md) (when [VNET integration](web-sites-integrate-with-vnet.md) is used). Custom DNS support is currently unavailable when the mounted Azure Storage account uses a private endpoint.
+- FTP/FTPS access to custom-mounted storage is not supported (use [Azure Storage Explorer](https://azure.microsoft.com/features/storage-explorer/)).
+- Azure CLI, Azure PowerShell, and Azure SDK support is in preview.
+- Mapping `/` or `/home` to custom-mounted storage is not supported.
+- Storage mounts cannot be used together with clone settings option during [deployment slot](deploy-staging-slots.md) creation.
+- Storage mounts are not backed up when you [back up your app](manage-backup.md). Be sure to follow best practices to back up the Azure Storage accounts.
::: zone-end
-## Link storage to your app
- ::: zone pivot="container-windows"
-Once you've created your [Azure Storage account, file share and directory](#prerequisites), you can now configure your app with Azure Storage.
+## Mount storage to Windows container
-To mount an Azure Files Share to a directory in your App Service app, you use the [`az webapp config storage-account add`](/cli/azure/webapp/config/storage-account#az_webapp_config_storage_account_add) command. Storage Type must be AzureFiles.
+Use the [`az webapp config storage-account add`](/cli/azure/webapp/config/storage-account#az_webapp_config_storage_account_add) command. For example:
```azurecli az webapp config storage-account add --resource-group <group-name> --name <app-name> --custom-id <custom-id> --storage-type AzureFiles --share-name <share-name> --account-name <storage-account-name> --access-key "<access-key>" --mount-path <mount-path-directory> ```
-Note that the `mount-path-directory` should be in the form `/path/to/dir` or `\path\to\dir` with no drive letter, as it will always be mounted on the `C:\` drive.
+- `--storage-type` must be `AzureFiles` for Windows containers.
+- `mount-path-directory` must be in the form `/path/to/dir` or `\path\to\dir` with no drive letter. It's always mounted on the `C:\` drive. Do not use `/` or `\` (the root directory). A filled-in example follows this list.
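+
+For illustration, a hypothetical invocation with placeholder names filled in might look like the following (the resource group, app, share, and storage account names are examples only):
+
+```azurecli
+# Example only: mount the Azure Files share "myshare" at C:\mounts\myshare in the Windows container app
+az webapp config storage-account add --resource-group myResourceGroup --name my-windows-container-app --custom-id MyFilesMount --storage-type AzureFiles --share-name myshare --account-name mystorageaccount --access-key "<access-key>" --mount-path /mounts/myshare
+```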
+
+Verify your storage is mounted by running the following command:
-You should do this for any other directories you want to be linked to an Azure Files share.
+```azurecli
+az webapp config storage-account list --resource-group <resource-group> --name <app-name>
+```
::: zone-end ::: zone pivot="container-linux"
-Once you've created your [Azure Storage account, file share and directory](#prerequisites), you can now configure your app with Azure Storage.
+## Mount storage to Linux container
-To mount a storage account to a directory in your App Service app, you use the [`az webapp config storage-account add`](/cli/azure/webapp/config/storage-account#az_webapp_config_storage_account_add) command. Storage Type can be AzureBlob or AzureFiles. AzureFiles is used in this example. The mount path setting corresponds to the folder inside the container that you want to mount to Azure Storage. Setting it to '/' mounts the entire container to Azure Storage.
+# [Azure portal](#tab/portal)
+1. In the [Azure portal](https://portal.azure.com), navigate to the app.
+1. From the left navigation, click **Configuration** > **Path Mappings** > **New Azure Storage Mount**.
+1. Configure the storage mount according to the following table. When finished, click **OK**.
-> [!CAUTION]
-> The directory specified as the mount path in your web app should be empty. Any content stored in this directory will be deleted when an external mount is added. If you are migrating files for an existing app, make a backup of your app and its content before you begin.
->
+ | Setting | Description |
+ |-|-|
+ | **Name** | Name of the mount configuration. |
+ | **Configuration options** | Select **Basic** if the storage account is not using [service endpoints](../storage/common/storage-network-security.md#grant-access-from-a-virtual-network) or [private endpoints](../storage/common/storage-private-endpoints.md). Otherwise, select **Advanced**. |
+ | **Storage accounts** | Azure Storage account. |
+ | **Storage type** | Select the type based on the storage you want to mount. Azure Blobs only supports read-only access. |
+ | **Storage container** or **Share name** | Files share or Blobs container to mount. |
+ | **Access key** (Advanced only) | [Access key](../storage/common/storage-account-keys-manage.md) for your storage account. |
+ | **Mount path** | Directory inside the Linux container to mount to Azure Storage. Do not use `/` (the root directory). |
+
+ > [!CAUTION]
+ > The directory specified in **Mount path** in the Linux container should be empty. Any content stored in this directory is deleted when the Azure Storage is mounted (if you specify a directory under `/home`, for example). If you are migrating files for an existing app, make a backup of the app and its content before you begin.
+ >
+
+# [Azure CLI](#tab/cli)
+
+Use the [`az webapp config storage-account add`](/cli/azure/webapp/config/storage-account#az_webapp_config_storage_account_add) command.
```azurecli az webapp config storage-account add --resource-group <group-name> --name <app-name> --custom-id <custom-id> --storage-type AzureFiles --share-name <share-name> --account-name <storage-account-name> --access-key "<access-key>" --mount-path <mount-path-directory> ```
-You should do this for any other directories you want to be linked to a storage account.
-
+- `--storage-type` can be `AzureBlob` or `AzureFiles`. `AzureBlob` is read-only.
+- `--mount-path` is the directory inside the Linux container to mount to Azure Storage. Do not use `/` (the root directory). A filled-in example follows the caution below.
-## Verify linked storage
+> [!CAUTION]
+> The directory specified in `--mount-path` in the Linux container should be empty. Any content stored in this directory is deleted when the Azure Storage is mounted (if you specify a directory under `/home`, for example). If you are migrating files for an existing app, make a backup of the app and its content before you begin.
+>
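+
+For illustration, a hypothetical read-only blob mount with placeholder names filled in (the resource group, app, container, and storage account names are examples only):
+
+```azurecli
+# Example only: mount the blob container "assets" read-only at /mounts/assets in the Linux container app
+az webapp config storage-account add --resource-group myResourceGroup --name my-linux-app --custom-id BlobAssets --storage-type AzureBlob --share-name assets --account-name mystorageaccount --access-key "<access-key>" --mount-path /mounts/assets
+```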
-Once the share is linked to the app, you can verify this by running the following command:
+Verify your configuration by running the following command:
```azurecli az webapp config storage-account list --resource-group <resource-group> --name <app-name> ``` +++
+> [!NOTE]
+> Adding, editing, or deleting a storage mount causes the app to be restarted.
++
+## Test the mounted storage
+
+To validate that the Azure Storage is mounted successfully for the app:
+
+1. [Open an SSH session](configure-linux-open-ssh-session.md) into the container.
+1. In the SSH terminal, execute the following command:
+
+ ```bash
+    df -h
+ ```
+1. Check if the storage share is mounted. If it's not present, there's an issue with mounting the storage share.
+1. Check latency or general reachability of the storage mount with the following command:
+
+ ```bash
+    tcpping <storage-account-name>.file.core.windows.net
+ ```
+
+## Best practices
+
+- To avoid potential issues related to latency, place the app and the Azure Storage account in the same Azure region. Note, however, if the app and Azure Storage account are in the same Azure region, and if you grant access from App Service IP addresses in the [Azure Storage firewall configuration](../storage/common/storage-network-security.md), then these IP restrictions are not honored.
+- The mount path in the container app should be empty. Any content stored at this path is deleted when the Azure Storage is mounted (if you specify a directory under `/home`, for example). If you are migrating files for an existing app, make a backup of the app and its content before you begin.
+- Mounting the storage to `/home` is not recommended because it may result in performance bottlenecks for the app.
+- In the Azure Storage account, avoid [regenerating the access key](../storage/common/storage-account-keys-manage.md) that's used to mount the storage in the app. The storage account contains two different keys. Use a stepwise approach to ensure that the storage mount remains available to the app during key regeneration; a scripted sketch of this rotation follows this list. For example, assuming that you used **key1** to configure storage mount in your app:
+ 1. Regenerate **key2**.
+ 1. In the storage mount configuration, update the access key to use the regenerated **key2**.
+ 1. Regenerate **key1**.
+- If you delete an Azure Storage account, container, or share, remove the corresponding storage mount configuration in the app to avoid possible error scenarios.
+- The mounted Azure Storage account can be either Standard or Premium performance tier. Based on the app capacity and throughput requirements, choose the appropriate performance tier for the storage account. See the scalability and performance targets that correspond to the storage type:
+ - [For Files](../storage/files/storage-files-scale-targets.md)
+ - [For Blobs](../storage/blobs/scalability-targets.md)
+- If your app [scales to multiple instances](../azure-monitor/autoscale/autoscale-get-started.md), all the instances connect to the same mounted Azure Storage account. To avoid performance bottlenecks and throughput issues, choose the appropriate performance tier for the storage account.
+- It's not recommended to use storage mounts for local databases (such as SQLite) or for any other applications and components that rely on file handles and locks.
+- When using Azure Storage [private endpoints](../storage/common/storage-private-endpoints.md) with the app, you need to set the following two app settings:
+ - `WEBSITE_DNS_SERVER` = `168.63.129.16`
+ - `WEBSITE_VNET_ROUTE_ALL` = `1`
+- If you [initiate a storage failover](../storage/common/storage-initiate-account-failover.md) and the storage account is mounted to the app, the mount will fail to connect until you either restart the app or remove and add the Azure Storage mount.
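+
+For example, a minimal Azure CLI sketch of the stepwise key rotation described above (all names are placeholders, and `<custom-id>` is the mount ID you used when the storage was first mounted):
+
+```azurecli
+# 1. Regenerate key2 on the storage account
+az storage account keys renew --resource-group <group-name> --account-name <storage-account-name> --key key2
+
+# 2. Update the existing storage mount to use the new key2 value
+az webapp config storage-account update --resource-group <group-name> --name <app-name> --custom-id <custom-id> --access-key "<new-key2-value>"
+
+# 3. Regenerate key1
+az storage account keys renew --resource-group <group-name> --account-name <storage-account-name> --key key1
+```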
++ ## Next steps ::: zone pivot="container-windows"
app-service Creation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/environment/creation.md
description: Learn how to create an App Service Environment.
ms.assetid: 7690d846-8da3-4692-8647-0bf5adfd862a Previously updated : 11/16/2020 Last updated : 06/21/2021 # Create an App Service Environment- > [!NOTE] > This article is about the App Service Environment v3 (preview) >
-The [App Service Environment (ASE)][Intro] is a single tenant deployment of the App Service that injects into your Azure Virtual Network (VNet). ASEv3 only supports exposing apps on a private address in your Vnet. When an ASEv3 is created during preview, these resources are added to your subscription.
--- App Service Environment-- Private endpoint-
-A deployment of an ASE will require use of two subnets. One subnet will hold the private endpoint. This subnet can be used for other things such as VMs. The other subnet is used for outbound calls made from the ASE. This subnet can't be used for anything else other than the ASE.
+The [App Service Environment (ASE)][Intro] is a single tenant deployment of the App Service that injects into your Azure Virtual Network (VNet). A deployment of an ASE requires the use of one subnet. This subnet can't be used for anything other than the ASE.
## Before you create your ASE
After your ASE is created, you can't change:
- Subnet size - Name of your ASE
-The outbound subnet needs to be large enough to hold the maximum size that you'll scale your ASE. Pick a large enough subnet to support your maximum scale needs since it can't be changed after creation. The recommended size is a /24 with 256 addresses.
-
-After the ASE is created, you can add apps to it. When your ASEv3 has no App Service plans in it, there is a charge as though you had one instance of an I1v2 App Service plan in your ASE.
-
-The ASEv3 is only offered in select regions. More regions will be added as the preview moves along towards GA.
+The subnet needs to be large enough to hold the maximum size that you'll scale your ASE. Pick a large enough subnet to support your maximum scale needs since it can't be changed after creation. The recommended size is a /24 with 256 addresses.
## Creating an ASE in the portal
-1. To create an ASEv3, search the marketplace for **App Service Environment (preview)**.
-2. Basics: Select the Subscription, select or create the Resource Group, and enter the name of your ASE. The ASE name will be also used for the domain suffix of your ASE. If your ASE name is *contoso* then the domain suffix will be *contoso.appserviceenvironment.net*. This name will be automatically set in your Azure DNS private zone used by the Vnet the ASE is deployed into.
-
- ![App Service Environment create basics tab](./media/creation/creation-basics.png)
-
-3. Hosting: Select OS Preference, Host Group deployment. The OS preference indicates the operating system you'll initially use for your apps in this ASE. You can add apps of the other OS after ASE creation. Host Group deployment is used to select dedicated hardware. With ASEv3 you can select Enabled and then land on dedicated hardware. You are charged for the entire dedicated host with ASE creation and then a reduced price for your App Service plan instances.
+1. To create an ASE, search the marketplace for **App Service Environment (preview)**.
+2. Basics: Select the Subscription, select or create the Resource Group, and enter the name of your ASE. Select the Virtual IP type. If you select Internal, your inbound ASE address will be an address in your ASE subnet. If you select External, your inbound ASE address will be a public internet-facing address. The ASE name will also be used for the domain suffix of your ASE. If your ASE name is *contoso* and you have an Internal VIP ASE, then the domain suffix will be *contoso.appserviceenvironment.net*. If your ASE name is *contoso* and you have an external VIP, the domain suffix will be *contoso.p.azurewebsites.net*.
+![App Service Environment create basics tab](./media/creation/creation-basics.png)
+3. Hosting: Select *Enabled* or *Disabled* for Host Group deployment. Host Group deployment is used to select dedicated hardware. If you select Enabled, your ASE will be deployed onto dedicated hardware. When you deploy onto dedicated hardware, you are charged for the entire dedicated host during ASE creation and then pay a reduced price for your App Service plan instances.
+![App Service Environment hosting selections](./media/creation/creation-hosting.png)
+4. Networking: Select or create your Virtual Network, select or create your subnet. If you are creating an internal VIP ASE, you will have the option to configure Azure DNS private zones to point your domain suffix to your ASE.
+![App Service Environment networking selections](./media/creation/creation-networking.png)
+5. Review and Create: Check that your configuration is correct and select **Create**. Your ASE can take up to two hours to create.
- ![App Service Environment hosting selections](./media/creation/creation-hosting.png)
+ ![App Service Environment review and create](./media/creation/creation-review.png)
-4. Networking: Select or create your Virtual Network, select or create your inbound subnet, select or create your outbound subnet. Any subnet used for outbound must be empty and delegated to Microsoft.Web/hostingEnvironments. If you create your subnet through the portal, the subnet will automatically be delegated for you.
-
- ![App Service Environment networking selections](./media/creation/creation-networking.png)
-
-5. Review and Create: Check that your configuration is correct and select create. Your ASE will take approximately an hour to create.
-
- ![App Service Environment review and create](./media/creation/creation-review.png)
-
-After your ASE creation completes, you can select it as a location when creating your apps. To learn more about creating apps in your new ASE, read [Using an App Service Environment][UsingASE]
-
-## OS Preference
-In an ASE you can have Windows apps, Linux apps or both. In ASEv2, the initial preference selected during creation adds the high availability infrastructure for that OS during ASE creation. To add apps of the other OS, just make the apps as usual and select the OS you want. In ASEv3, this will not affect backend behavior appreciatively.
+After your ASE creation completes, you can select it as a location when creating your apps. To learn more about creating apps in your new ASE or managing your ASE, read [Using an App Service Environment][UsingASE]
## Dedicated hosts
-The ASE is normally deployed on VMs that are provisioned on a multi-tenant hypervisor. If you need to deploy on dedicated systems, including the hardware, you can provision your ASE onto dedicated hosts. In the initial ASEv3 preview, dedicated hosts come in a pair. Each dedicated host is in a separate availability zone, if the region permits it. Dedicated host-based ASE deployments are priced differently than normal. There is a charge for the dedicated host and then another charge for each App Service plan instance.
+
+The ASE is normally deployed on VMs that are provisioned on a multi-tenant hypervisor. If you need to deploy on dedicated systems, including the hardware, you can provision your ASE onto dedicated hosts. Dedicated hosts come in a pair to ensure redundancy. Dedicated host-based ASE deployments are priced differently than normal deployments. There is a charge for the dedicated host and then another charge for each App Service plan instance. Deployments on host groups are not zone redundant. To deploy onto dedicated hosts, select *Enabled* for Host Group deployment on the Hosting tab.
<!--Links--> [Intro]: ./overview.md
app-service Networking https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/environment/networking.md
description: App Service Environment networking details
ms.assetid: 6f262f63-aef5-4598-88d2-2f2c2f2bfc24 Previously updated : 11/16/2020 Last updated : 06/21/2021
> This article is about the App Service Environment v3 (preview) >
-The App Service Environment (ASE) is a single tenant deployment of the Azure App Service that hosts web apps, api apps, and function apps. When you install an ASE, you pick the Azure Virtual Network (VNet) that you want it to be deployed in. All of the inbound and outbound traffic application will be inside the VNet you specify.
-
-The ASEv3 uses two subnets. One subnet is used for the private endpoint that handles inbound traffic. This could be a pre-existing subnet or a new one. The inbound subnet can be used for other purposes than the private endpoint. The outbound subnet can only be used for outbound traffic from the ASE. Nothing else can go in the ASE outbound subnet.
+The App Service Environment (ASE) is a single tenant deployment of the Azure App Service that hosts web apps, api apps, and function apps. When you install an ASE, you pick the Azure Virtual Network (VNet) that you want it to be deployed in. All of the inbound and outbound application traffic will be inside the VNet you specify. The ASE is deployed into a single subnet in your VNet. Nothing else can be deployed into that same subnet. The subnet needs to be delegated to Microsoft.Web/HostingEnvironments.
## Addresses
-The ASE has the following addresses at creation:
+
+The ASE has the following network information at creation:
| Address type | Description |
|--|-|
-| Inbound address | The inbound address is the private endpoint address used by your ASE. |
-| Outbound subnet | The outbound subnet is also the ASE subnet. During preview this subnet is only used for outbound traffic. |
-| Windows outbound address | The Windows apps in this ASE will use this address, by default, when making outbound calls to the internet. |
-| Linux outbound address | The Linux apps in this ASE will use this address, by default, when making outbound calls to the internet. |
+| ASE virtual network | The VNet the ASE is deployed into |
+| ASE subnet | The subnet that the ASE is deployed into |
+| Domain suffix | The domain suffix that is used by the apps made in this ASE |
+| Virtual IP | This is the VIP type used by the ASE. The two possible values are internal and external |
+| Inbound address | The inbound address is the address your apps on this ASE are reached at. If you have an internal VIP, it is an address in your ASE subnet. If the address is external, it will be a public facing address |
+| Default outbound addresses | The apps in this ASE will use this address, by default, when making outbound calls to the internet. |
The ASEv3 has details on the addresses used by the ASE in the **IP Addresses** portion of the ASE portal. ![ASE addresses UI](./media/networking/networking-ip-addresses.png)
-If you delete the private endpoint used by the ASE, you can't reach the apps in your ASE.
-
-The ASE uses addresses in the outbound subnet to support the infrastructure used by the ASE. As you scale your App Service plans in your ASE, you'll use more addresses. Apps in the ASE don't have dedicated addresses in the outbound subnet. The addresses used by an app in the outbound subnet by an app will change over time.
+As you scale your App Service plans in your ASE, you'll use more addresses out of your ASE subnet. The number of addresses used will vary based on the number of App Service plan instances you have, and how much traffic your ASE is receiving. Apps in the ASE don't have dedicated addresses in the ASE subnet. The specific addresses used by an app in the ASE subnet will change over time.
## Ports
-The ASE receives application traffic on ports 80 and 443. There are no inbound or outbound port requirements for the ASE.
The ASE receives application traffic on ports 80 and 443. If those ports are blocked, you can't reach your apps. Port 80 needs to be open from the load balancer to the ASE subnet because it's used for keep-alive traffic.
## Extra configurations
-Unlike the ASEv2, with ASEv3 you can set Network Security Groups (NSGs) and Route Tables (UDRs) as you see fit without restriction. If you want to force tunnel all of the outbound traffic from your ASE to an NVA device. You can put WAF devices in front of all inbound traffic to your ASE.
+You can set Network Security Groups (NSGs) and Route Tables (UDRs) without restriction. You can force tunnel all of the outbound traffic from your ASE to an egress firewall device, such as the Azure Firewall, and not have to worry about anything other than your application dependencies. You can put WAF devices, such as the Application Gateway, in front of inbound traffic to your ASE to expose specific apps on that ASE. If you want a different, dedicated outbound address to the internet, you can use a NAT Gateway with your ASE. To do so, configure the NAT Gateway against the ASE subnet.
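+As a rough sketch of the NAT Gateway approach, the following Azure CLI commands create a NAT gateway with a public IP and attach it to an existing ASE subnet. The resource group, network, and IP names are placeholders, not values from this article.
+
+```azurecli
+# Create a Standard SKU public IP for the NAT gateway (placeholder names)
+az network public-ip create --resource-group my-rg --name my-nat-ip --sku Standard
+
+# Create the NAT gateway and associate the public IP with it
+az network nat gateway create --resource-group my-rg --name my-nat-gateway --public-ip-addresses my-nat-ip
+
+# Attach the NAT gateway to the ASE subnet so outbound traffic uses its address
+az network vnet subnet update --resource-group my-rg --vnet-name my-vnet --name my-ase-subnet --nat-gateway my-nat-gateway
+```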
## DNS
-The apps in your ASE will use the DNS that your VNet is configured with. Follow the instructions in [Using an App Service Environment](./using.md#dns-configuration) to configure your DNS server to point to your ASE. If you want some apps to use a different DNS server than what your VNet is configured with, you can manually set it on a per app basis with the app settings WEBSITE_DNS_SERVER and WEBSITE_DNS_ALT_SERVER. The app setting WEBSITE_DNS_ALT_SERVER configures the secondary DNS server. The secondary DNS server is only used when there is no response from the primary DNS server.
+### DNS configuration to your ASE
+
+If your ASE is made with an external VIP, your apps are automatically put into public DNS. If your ASE is made with an internal VIP, you may need to configure DNS for it. If you selected having Azure DNS private zones configured automatically during ASE creation, then DNS is configured in your ASE VNet. If you selected manually configuring DNS, you need to either use your own DNS server or configure Azure DNS private zones. To find the inbound address of your ASE, go to the **ASE portal > IP Addresses** UI.
+
+If you want to use your own DNS server, you need to add the following records:
+
+1. create a zone for &lt;ASE name&gt;.appserviceenvironment.net
+1. create an A record in that zone that points * to the inbound IP address used by your ASE
+1. create an A record in that zone that points @ to the inbound IP address used by your ASE
+1. create a zone in &lt;ASE name&gt;.appserviceenvironment.net named scm
+1. create an A record in the scm zone that points * to the inbound IP address used by your ASE
+
+To configure DNS in Azure DNS Private zones (a scripted sketch follows these steps):
+
+1. create an Azure DNS private zone named <ASE name>.appserviceenvironment.net
+1. create an A record in that zone that points * to the inbound IP address
+1. create an A record in that zone that points @ to the inbound IP address
+1. create an A record in that zone that points *.scm to the inbound IP address
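+The Azure DNS private zone steps above can be scripted. The following Azure CLI sketch assumes an ASE named *my-ase* with an inbound address of 10.0.1.11; the resource group, VNet name, and address are placeholders.
+
+```azurecli
+# Create the private zone for the ASE default domain suffix
+az network private-dns zone create --resource-group my-rg --name my-ase.appserviceenvironment.net
+
+# Link the zone to the VNet that needs to resolve the ASE apps
+az network private-dns link vnet create --resource-group my-rg --zone-name my-ase.appserviceenvironment.net --name my-ase-link --virtual-network my-vnet --registration-enabled false
+
+# Point *, @, and *.scm at the ASE inbound address
+az network private-dns record-set a add-record --resource-group my-rg --zone-name my-ase.appserviceenvironment.net --record-set-name '*' --ipv4-address 10.0.1.11
+az network private-dns record-set a add-record --resource-group my-rg --zone-name my-ase.appserviceenvironment.net --record-set-name '@' --ipv4-address 10.0.1.11
+az network private-dns record-set a add-record --resource-group my-rg --zone-name my-ase.appserviceenvironment.net --record-set-name '*.scm' --ipv4-address 10.0.1.11
+```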
+
+The DNS settings for your ASE default domain suffix don't restrict your apps to only being accessible by those names. You can set a custom domain name without any validation on your apps in an ASE. If you then want to create a zone named *contoso.net*, you could do so and point it to the inbound IP address. The custom domain name works for app requests but doesn't for the scm site. The scm site is only available at *&lt;appname&gt;.scm.&lt;asename&gt;.appserviceenvironment.net*.
+
+### DNS configuration from your ASE
+
+The apps in your ASE will use the DNS that your VNet is configured with. If you want some apps to use a different DNS server than what your VNet is configured with, you can manually set it on a per app basis with the app settings WEBSITE_DNS_SERVER and WEBSITE_DNS_ALT_SERVER. The app setting WEBSITE_DNS_ALT_SERVER configures the secondary DNS server. The secondary DNS server is only used when there is no response from the primary DNS server.
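+For example, a hypothetical app could be pointed at its own DNS servers with the Azure CLI; the app name, resource group, and server addresses below are placeholders.
+
+```azurecli
+# Set a custom primary and secondary DNS server for a single app
+az webapp config appsettings set --resource-group my-rg --name my-app --settings WEBSITE_DNS_SERVER=10.0.0.4 WEBSITE_DNS_ALT_SERVER=10.0.0.5
+```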
+
+## Limitations
-## Preview limitation
+While the ASE does deploy into a customer VNet, there are a few networking features that aren't available with an ASE:
-There are a few networking features that aren't available with ASEv3. The things that aren't available in ASEv3 include:
+* Send SMTP traffic. You can still have email-triggered alerts, but your app can't send outbound traffic on port 25.
+* Use Network Watcher or NSG Flow to monitor outbound traffic.
-• FTP
-• Remote debug
-• External load balancer deployment
-• Ability to access a private container registry for container deployments
-• Ability to make calls to globally peered Vnets
-• Ability to backup/restore with a service endpoint or private endpoint secured storage account
-• Ability to have app settings keyvault references on service endpoint or private endpoint secured keyvault accounts
-• Ability to use BYOS to a service endpoint or private endpoint secured storage account
-• Use of Network Watcher or NSG Flow on outbound traffic
-
-
app-service Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/environment/overview.md
description: Overview on the App Service Environment
ms.assetid: 3d37f007-d6f2-4e47-8e26-b844e47ee919 Previously updated : 03/02/2021 Last updated : 06/21/2021 - # App Service Environment overview - > [!NOTE] > This article is about the App Service Environment v3 (preview) >
App Service environments (ASEs) are appropriate for application workloads that r
- High scale. - Isolation and secure network access. - High memory utilization.-- High requests per second (RPS). You can make multiple ASEs in a single Azure region or across multiple Azure regions. This flexibility makes ASEs ideal for horizontally scaling stateless applications with a high RPS requirement.
+- High requests per second (RPS). You can make multiple ASEs in a single Azure region or across multiple Azure regions. This flexibility makes an ASE ideal for horizontally scaling stateless applications with a high RPS requirement.
ASE's host applications from only one customer and do so in one of their VNets. Customers have fine-grained control over inbound and outbound application network traffic. Applications can establish high-speed secure connections over VPNs to on-premises corporate resources.
-ASEv3 comes with its own pricing tier, Isolated V2.
-App Service Environments v3 provide a surrounding to safeguard your apps in a subnet of your network and provides your own private deployment of Azure App Service.
-Multiple ASEs can be used to scale horizontally.
-Apps running on ASEs can have their access gated by upstream devices, such as web application firewalls (WAFs). For more information, see Web application firewall (WAF).
- ## Usage scenarios The App Service Environment has many use cases including:
The App Service Environment has many use cases including:
- Network isolated application hosting - Multi-tier applications
-There are a number of networking features that enable apps in the multi-tenant App Service to reach network isolated resources or become network isolated themselves. These features are enabled at the application level. With an ASE, there's no additional configuration on the apps for them to be in the VNet. The apps are deployed into a network isolated environment that is already in a VNet. On top of the ASE hosting network isolated apps, it's also a single-tenant system. There are no other customers using the ASE. If you really need a complete isolation story, you can also get your ASE deployed onto dedicated hardware. Between network isolated application hosting, single tenancy, and the ability
+There are many networking features that enable apps in the multi-tenant App Service to reach network isolated resources or become network isolated themselves. These features are enabled at the application level. With an ASE, there's no added configuration required for the apps to be in the VNet. The apps are deployed into a network isolated environment that is already in a VNet. On top of the ASE hosting network isolated apps, it's also a single-tenant system. There are no other customers using the ASE. If you really need a complete isolation story, you can also get your ASE deployed onto dedicated hardware.
## Dedicated environment
-An ASE is dedicated exclusively to a single subscription and can host 200 total App Service Plan instances across multiple App Service plans. The word "instance" refers to App Service plan horizontal scaling. Each instances is the equivalent to a worker role. While an ASE can have 200 total instances, a single Isolated v2 App Service plan can hold 100 instances. The ASE can hold two App Service plans with 100 instances in each, 200 single-instance App Service plans, or everything in between.
-An ASE is composed of front ends and workers. Front ends are responsible for HTTP/HTTPS termination and automatic load balancing of app requests within an ASE. Front ends are automatically added as the App Service plans in the ASE are scaled out.
+The ASE is a single tenant deployment of the Azure App Service that runs in your virtual network.
-Workers are roles that host customer apps. Workers are available in three fixed sizes:
+Applications are hosted in App Service plans, which are created in an App Service Environment. The App Service plan is essentially a provisioning profile for an application host. As you scale your App Service plan out, you create more application hosts with all of the apps in that App Service plan on each host. A single ASEv3 can have up to 200 total App Service plan instances across all of the App Service plans combined. A single Isolated v2 App Service plan can have up to 100 instances by itself.
-- Two vCPU/8 GB RAM-- Four vCPU/16 GB RAM-- Eight vCPU/32 GB RAM
+## Virtual network support
-Customers don't need to manage front ends and workers. All infrastructure is automatically. As App Service plans are created or scaled in an ASE, the required infrastructure is added or removed as appropriate.
+The ASE feature is a deployment of the Azure App Service into a single subnet in a customer's Azure Resource Manager virtual network (VNet). When you deploy an app into an ASE, the app will be exposed on the inbound address assigned to the ASE. If your ASE is deployed with an internal VIP, then the inbound address for all of the apps will be an address in the ASE subnet. If your ASE is deployed with an external VIP, then the inbound address will be an internet-routable public address and your apps will be in public DNS.
-There's a charge for Isolated V2 App Service plan instances. If you have no App Service plans at all in your ASE, you are charged as though you had one App Service plan with one instance of the two core workers.
+The number of addresses used by an ASEv3 in its subnet will vary based on how many App Service plan instances you have and how much traffic the ASE is receiving. There are infrastructure roles that are automatically scaled depending on the number of App Service plans and the load. The recommended size for your ASEv3 subnet is a /24 CIDR block with 256 addresses, because that can host an ASEv3 scaled out to its limit.
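+As an illustration, the following Azure CLI command creates a /24 subnet for the ASE and delegates it to the App Service Environment resource provider; the VNet, subnet, and address range are placeholders.
+
+```azurecli
+# Create a /24 subnet and delegate it to Microsoft.Web/hostingEnvironments
+az network vnet subnet create --resource-group my-rg --vnet-name my-vnet --name my-ase-subnet --address-prefixes 10.0.1.0/24 --delegations Microsoft.Web/hostingEnvironments
+```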
-## Virtual network support
-The ASE feature is a deployment of the Azure App Service directly into a customer's Azure Resource Manager virtual network. An ASE always exists in a subnet of a virtual network. You can use the security features of virtual networks to control inbound and outbound network communications for your apps.
+The apps in an ASE do not need any features enabled to access resources in the same VNet that the ASE is in. If the ASE VNet is connected to another network, then the apps in the ASE can access resources in those extended networks. Traffic can be blocked by user configuration on the network.
+
+The multi-tenant version of Azure App Service contains numerous features to enable your apps to connect to your various networks. Those networking features enable your apps to act as if they were deployed in a VNet. The apps in an ASEv3 do not need any configuration to be in the VNet. A benefit of using an ASE over the multi-tenant service is that any network access controls for the ASE-hosted apps are completely external to the application configuration. With the apps in the multi-tenant service, you must enable the features on an app-by-app basis and use RBAC or policy to prevent any configuration changes.
+
+## Feature differences
+
+Compared to earlier versions of the ASE, there are some differences with ASEv3. With ASEv3:
+
+- There are no networking dependencies in the customer VNet. You can secure all inbound and outbound traffic as desired. Outbound traffic can also be routed as desired.
+- You can deploy it enabled for zone redundancy. Zone redundancy can only be set during ASEv3 creation and only in regions where all ASEv3 dependencies are zone redundant.
+- You can deploy it on a dedicated host group. Host group deployments are not zone redundant.
+- Scaling is much faster than with ASEv2. While scaling still isn't as immediate as in the multi-tenant service, it is a lot faster.
+- Front end scaling adjustments are no longer required. The ASEv3 front ends automatically scale to meet needs and are deployed on better hosts.
+- Scaling no longer blocks other scale operations within the ASEv3 instance. Only one scale operation can be in effect for a given combination of OS and size. For example, while your Windows small App Service plan is scaling, you can kick off a scale operation on a Windows medium plan, or on any other combination of OS and size, and it runs at the same time.
+- Apps in an internal VIP ASEv3 can be reached across global peering. Access across global peering was not possible with ASEv2.
+
+There are a few features that are not available in ASEv3 that were available in earlier versions of the ASE. In ASEv3, you can't:
+
+- send SMTP traffic. You can still have email-triggered alerts, but your app can't send outbound traffic on port 25
+- deploy your apps with FTP
+- use remote debug with your apps
+- upgrade yet from ASEv2
+- monitor your traffic with Network Watcher or NSG Flow
+- configure IP-based SSL with your apps
-Network Security Groups restrict inbound network communications to the subnet where an ASE resides. You can use NSGs to run apps behind upstream devices and services such as WAFs and network SaaS providers.
+## Pricing
-Apps also frequently need to access corporate resources such as internal databases and web services. If you deploy the ASE in a virtual network that has a VPN connection to the on-premises network, the apps in the ASE can access the on-premises resources. This capability is true regardless of whether the VPN is a site-to-site or Azure ExpressRoute VPN.
+With ASEv3, there is a different pricing model depending on the type of ASE deployment you have. The three pricing models are:
-## Preview
-The App Service Environment v3 is in public preview. Some features are being added during the preview progression. The current limitations of ASEv3 include:
+- ASEv3: If the ASE is empty, there is a charge as if you had one App Service plan with one instance of Windows I1v2. The one-instance charge is not an additive charge; it's only applied if the ASE is totally empty.
+- Availability Zone ASEv3: There is a minimum 9 Windows I1v2 instance charge. There is no added charge for availability zone support if you have 9 or more App Service plan instances.
+- Dedicated host ASEv3: With a dedicated host deployment, you are charged for two dedicated hosts per our pricing at ASEv3 creation, and then a small percentage of the Isolated v2 per-core rate as you scale.
-- Inability to scale an App Service plan beyond 50 instances-- Inability to get a container from a private registry-- Inability for currently unsupported App Service features to go through customer VNet-- No external deployment model with an internet accessible endpoint-- No command line support (AZ CLI and PowerShell)-- No upgrade capability from ASEv2 to ASEv3-- No FTP support-- No support for some App Service features going through the customer VNet. Backup/restore, Key Vault references in app settings, using a private container registry, and Diagnostic logging to storage won't function with service endpoints or private endpoints
-
-### ASEv3 preview architecture
-In ASEv3 preview, the ASE will use private endpoints to support inbound traffic. The private endpoint will be replaced with load balancers by GA. While in preview, the ASE won't have built in support for an internet accessible endpoint. You could add an Application Gateway for such a purpose. The ASE needs resources in two subnets. Inbound traffic will flow through a private endpoint. The private endpoint can be placed in any subnet so long as it has an available address that can be used by private endpoints. The outbound subnet must be empty and delegated to Microsoft.Web/hostingEnvironments. While used by the ASE, the outbound subnet can't be used for anything else.
+Reserved Instance pricing for Isolated v2 will be available after GA.
-With ASEv3, there are no inbound or outbound networking requirements on the ASE subnet. You can control the traffic with Network Security Groups and Route Tables and it only will affect your application traffic. Don't delete the private endpoint associated with your ASE as that action can't be undone. The private endpoint used for the ASE is used for all of the apps in the ASE.
app-service Using https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/environment/using.md
description: Learn how to use your App Service Environment to host isolated appl
ms.assetid: 377fce0b-7dea-474a-b64b-7fbe78380554 Previously updated : 11/16/2020 Last updated : 06/21/2021
To create an app in an ASE, you use the same process as when you normally create
- All App Service plans created in an ASE can only be in an Isolated v2 pricing tier. If you don't have an ASE, you can create one by following the instructions in [Create an App Service Environment][MakeASE].- To create an app in an ASE: 1. Select **Create a resource** > **Web + Mobile** > **Web App**.- 1. Select a subscription.- 1. Enter a name for a new resource group, or select **Use existing** and select one from the drop-down list.- 1. Enter a name for the app. If you already selected an App Service plan in an ASE, the domain name for the app reflects the domain name of the ASE:-
- ![create an app in an ASE][1]
-
+![create an app in an ASE][1]
1. Select your Publish type, Stack, and Operating System.-
-1. Select region. Here you need to select a pre-existing App Service Environment v3. You can't make an ASEv3 during app creation
-
-1. Select an existing App Service plan in your ASE, or create a new one. If creating a new app, select the size that you want for your App Service plan. The only SKU you can select for your app is an Isolated v2 pricing SKU.
-
- ![Isolated v2 pricing tiers][2]
-
- > [!NOTE]
- > Linux apps and Windows apps can't be in the same App Service plan, but they can be in the same App Service Environment.
- >
-
+1. Select region. Here you need to select a pre-existing App Service Environment v3. You can't make an ASEv3 during app creation.
+1. Select an existing App Service plan in your ASE, or create a new one. If creating a new App Service plan, select the size that you want. The only SKU you can select for your app is an Isolated v2 pricing SKU. Making a new App Service plan will normally take less than 20 minutes.
+![Isolated v2 pricing tiers][2]
1. Select **Next: Monitoring** If you want to enable App Insights with your app, you can do it here during the creation flow. - 1. Select **Next: Tags** Add any tags you want to the app - 1. Select **Review + create**, make sure the information is correct, and then select **Create**.
+Windows and Linux apps can be in the same ASE but cannot be in the same App Service plan.
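+If you prefer scripting app creation, a minimal Azure CLI sketch might look like the following. It assumes an existing ASEv3 named *my-ase*; the resource names are placeholders, and the Isolated v2 SKU name accepted by the CLI can vary by version.
+
+```azurecli
+# Create an Isolated v2 App Service plan inside the existing ASE (SKU name is an assumption)
+az appservice plan create --resource-group my-rg --name my-asp --app-service-environment my-ase --sku I1v2
+
+# Create a web app on that plan; its default hostname uses the ASE domain suffix
+az webapp create --resource-group my-rg --plan my-asp --name contoso
+```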
+ ## How scale works Every App Service app runs in an App Service plan. App Service Environments hold App Service plans, and App Service plans hold apps. When you scale an app, you also scale the App Service plan and all the apps in that same plan.
-When you scale an App Service plan, the needed infrastructure is added automatically. There's a time delay to scale operations while the infrastructure is being added. When you scale an App Service plan, any other scale operations requested of the same OS and size will wait until the first one completes. After the blocking scale operation completes, all of the queued requests are processed at the same time. A scale operation on one size and OS won't block scaling of the other combinations of size and OS. For example, if you scaled a Windows I2v2 App Service plan then, any other requests to scale Windows I2v2 in that ASE will be queued until that completes.
+When you scale an App Service plan, the needed infrastructure is added automatically. There's a time delay to scale operations while the infrastructure is being added. When you scale an App Service plan, any other scale operations requested of the same OS and size will wait until the first one completes. After the blocking scale operation completes, all of the queued requests are processed at the same time. A scale operation on one size and OS won't block scaling of the other combinations of size and OS. For example, if you scale a Windows I2v2 App Service plan, then any other requests to scale Windows I2v2 in that ASE will be queued until that operation completes. Scaling will normally take less than 20 minutes.
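+For instance, scaling out an existing plan can be requested from the Azure CLI as sketched below; the plan and resource group names are placeholders, and the operation queues as described above.
+
+```azurecli
+# Scale the App Service plan out to 5 instances; the ASE adds the needed infrastructure
+az appservice plan update --resource-group my-rg --name my-asp --number-of-workers 5
+```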
In the multitenant App Service, scaling is immediate because a pool of resources is readily available to support it. In an ASE, there's no such buffer, and resources are allocated based on need. ## App access
-In an ASE, the domain suffix used for app creation is *.&lt;asename&gt;.appserviceenvironment.net*. If your ASE is named _my-ase_ and you host an app called _contoso_ in that ASE, you reach it at these URLs:
+In an ASE with an internal VIP, the domain suffix used for app creation is *.&lt;asename&gt;.appserviceenvironment.net*. If your ASE is named _my-ase_ and you host an app called _contoso_ in that ASE, you reach it at these URLs:
- contoso.my-ase.appserviceenvironment.net - contoso.scm.my-ase.appserviceenvironment.net
+The apps hosted on an ASE that uses an internal VIP are only accessible if you are in the same virtual network as the ASE or are connected in some way to that virtual network. Publishing is likewise only possible if you are in, or connected to, that virtual network.
+
+In an ASE with an external VIP, the domain suffix used for app creation is *.&lt;asename&gt;.p.azurewebsites.net*. If your ASE is named _my-ase_ and you host an app called _contoso_ in that ASE, you reach it at these URLs:
+
+- contoso.my-ase.p.azurewebsites.net
+- contoso.scm.my-ase.p.azurewebsites.net
+ For information about how to create an ASE, see [Create an App Service Environment][MakeASE]. The SCM URL is used to access the Kudu console or for publishing your app by using Web Deploy. For information on the Kudu console, see [Kudu console for Azure App Service][Kudu]. The Kudu console gives you a web UI for debugging, uploading files, editing files, and much more. ### DNS configuration
-The ASE uses private endpoints for inbound traffic. It is not automatically configured with Azure DNS private zones. If you want to use your own DNS server, you need to add the following records:
+If your ASE is made with an external VIP, your apps are automatically put into public DNS. If your ASE is made with an internal VIP, you may need to configure DNS for it. If you selected having Azure DNS private zones configured automatically during ASE creation, then DNS is configured in your ASE VNet. If you selected manually configuring DNS, you need to either use your own DNS server or configure Azure DNS private zones. To find the inbound address of your ASE, go to the **ASE portal > IP Addresses** UI.
+
+![IP addresses UI][6]
+
+If you want to use your own DNS server, you need to add the following records:
1. create a zone for &lt;ASE name&gt;.appserviceenvironment.net
-1. create an A record in that zone that points * to the inbound IP address used by your ASE private endpoint
-1. create an A record in that zone that points @ to the inbound IP address used by your ASE private endpoint
+1. create an A record in that zone that points * to the inbound IP address used by your ASE
+1. create an A record in that zone that points @ to the inbound IP address used by your ASE
1. create a zone in &lt;ASE name&gt;.appserviceenvironment.net named scm
-1. create an A record in the scm zone that points * to the IP address used by your ASE private endpoint
+1. create an A record in the scm zone that points * to the inbound address used by your ASE
To configure DNS in Azure DNS Private zones: 1. create an Azure DNS private zone named <ASE name>.appserviceenvironment.net
-1. create an A record in that zone that points * to the ILB IP address
-1. create an A record in that zone that points @ to the ILB IP address
-1. create an A record in that zone that points *.scm to the ILB IP address
+1. create an A record in that zone that points * to the inbound IP address
+1. create an A record in that zone that points @ to the inbound IP address
+1. create an A record in that zone that points *.scm to the inbound IP address
The DNS settings for your ASE default domain suffix don't restrict your apps to only being accessible by those names. You can set a custom domain name without any validation on your apps in an ASE. If you then want to create a zone named *contoso.net*, you could do so and point it to the inbound IP address. The custom domain name works for app requests but doesn't for the scm site. The scm site is only available at *&lt;appname&gt;.scm.&lt;asename&gt;.appserviceenvironment.net*.
In an ASE, as with the multitenant App Service, you can publish by these methods
- Drag and drop in the Kudu console - An IDE, such as Visual Studio, Eclipse, or IntelliJ IDEA
-With an ASE, the publishing endpoints are only available through the inbound address used by the private endpoint. If you don't have network access to the private endpoint address, you can't publish any apps on that ASE. Your IDEs must also have network access to the ILB to publish directly to it.
+With an internal VIP ASE, the publishing endpoints are only available through the inbound address. If you don't have network access to the inbound address, you can't publish any apps on that ASE. Your IDEs must also have network access to the inbound address on the ASE to publish directly to it.
-Without additional changes, internet-based CI systems like GitHub and Azure DevOps don't work with an ILB ASE because the publishing endpoint isn't internet accessible. You can enable publishing to an ILB ASE from Azure DevOps by installing a self-hosted release agent in the virtual network that contains the ILB ASE.
+Without additional changes, internet-based CI systems like GitHub and Azure DevOps don't work with an internal VIP ASE because the publishing endpoint isn't internet accessible. You can enable publishing to an internal VIP ASE from Azure DevOps by installing a self-hosted release agent in the virtual network that contains the ASE.
## Storage
An ASE has 1 TB of storage for all the apps in the ASE. An App Service plan in t
You can integrate your ASE with Azure Monitor to send logs about the ASE to Azure Storage, Azure Event Hubs, or Log Analytics. These items are logged today:
-| Situation | Message |
-||-|
-| ASE is unhealthy | The specified ASE is unhealthy due to an invalid virtual network configuration. The ASE will be suspended if the unhealthy state continues. Ensure the guidelines defined here are followed: https://docs.microsoft.com/azure/app-service/environment/network-info. |
-| ASE subnet is almost out of space | The specified ASE is in a subnet that is almost out of space. There are {0} remaining addresses. Once these addresses are exhausted, the ASE will not be able to scale. |
-| ASE is approaching total instance limit | The specified ASE is approaching the total instance limit of the ASE. It currently contains {0} App Service Plan instances of a maximum 201 instances. |
-| ASE is unable to reach a dependency | The specified ASE is not able to reach {0}. Ensure the guidelines defined here are followed: https://docs.microsoft.com/azure/app-service/environment/network-info. |
-| ASE is suspended | The specified ASE is suspended. The ASE suspension may be due to an account shortfall or an invalid virtual network configuration. Resolve the root cause and resume the ASE to continue serving traffic. |
-| ASE upgrade has started | A platform upgrade to the specified ASE has begun. Expect delays in scaling operations. |
-| ASE upgrade has completed | A platform upgrade to the specified ASE has finished. |
-| Scale operations have started | An App Service plan ({0}) has begun scaling. Desired state: {1} I{2} workers.
-| Scale operations have completed | An App Service plan ({0}) has finished scaling. Current state: {1} I{2} workers. |
-| Scale operations have failed | An App Service plan ({0}) has failed to scale. Current state: {1} I{2} workers. |
+|Situation |Message |
+|-|--|
+|ASE subnet is almost out of space | The specified ASE is in a subnet that is almost out of space. There are {0} remaining addresses. Once these addresses are exhausted, the ASE will not be able to scale. |
+|ASE is approaching total instance limit | The specified ASE is approaching the total instance limit of the ASE. It currently contains {0} App Service Plan instances of a maximum 200 instances. |
+|ASE is suspended | The specified ASE is suspended. The ASE suspension may be due to an account shortfall or an invalid virtual network configuration. Resolve the root cause and resume the ASE to continue serving traffic. |
+|ASE upgrade has started | A platform upgrade to the specified ASE has begun. Expect delays in scaling operations. |
+|ASE upgrade has completed | A platform upgrade to the specified ASE has finished. |
+|App Service plan creation has started | An App Service plan ({0}) creation has started. Desired state: {1} I{2}v2 workers. |
+|App Service plan creation has completed | An App Service plan ({0}) creation has finished. Current state: {1} I{2}v2 workers. |
+|App Service plan creation has failed | An App Service plan ({0}) creation has failed. This may be due to the ASE operating at its peak number of instances, or running out of subnet addresses. |
+|Scale operations have started | An App Service plan ({0}) has begun scaling. Current state: {1} I{2}v2 workers. Desired state: {3} I{4}v2 workers.|
+|Scale operations have completed | An App Service plan ({0}) has finished scaling. Current state: {1} I{2}v2 workers. |
+|Scale operations were interrupted | An App Service plan ({0}) was interrupted while scaling. Previous desired state: {1} I{2}v2 workers. New desired state: {3} I{4}v2 workers. |
+|Scale operations have failed | An App Service plan ({0}) has failed to scale. Current state: {1} I{2}v2 workers. |
To enable logging on your ASE:
To enable logging on your ASE:
1. Provide a name for the log integration. 1. Select and configure the log destinations that you want. 1. Select **AppServiceEnvironmentPlatformLogs**.- ![ASE diagnostic log settings][4]
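+The same diagnostic setting can also be created from the command line. The following Azure CLI sketch assumes an existing Log Analytics workspace and uses placeholder subscription, resource group, ASE, and workspace names.
+
+```azurecli
+# Send AppServiceEnvironmentPlatformLogs from the ASE to a Log Analytics workspace
+az monitor diagnostic-settings create --name ase-logs \
+  --resource /subscriptions/<subscription-id>/resourceGroups/my-rg/providers/Microsoft.Web/hostingEnvironments/my-ase \
+  --workspace /subscriptions/<subscription-id>/resourceGroups/my-rg/providers/Microsoft.OperationalInsights/workspaces/my-workspace \
+  --logs '[{"category": "AppServiceEnvironmentPlatformLogs", "enabled": true}]'
+```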
-If you integrate with Log Analytics, you can see the logs by selecting **Logs** from the ASE portal and creating a query against **AppServiceEnvironmentPlatformLogs**. Logs are only emitted when your ASE has an event that will trigger it. If your ASE doesn't have such an event, there won't be any logs. To quickly see an example of logs in your Log Analytics workspace, perform a scale operation with one of the App Service plans in your ASE. You can then run a query against **AppServiceEnvironmentPlatformLogs** to see those logs.
+If you integrate with Log Analytics, you can see the logs by selecting **Logs** from the ASE portal and creating a query against **AppServiceEnvironmentPlatformLogs**. Logs are only emitted when your ASE has an event that will trigger it. If your ASE doesn't have such an event, there won't be any logs. To quickly see an example of logs in your Log Analytics workspace, perform a scale operation with an App Service plan in your ASE. You can then run a query against **AppServiceEnvironmentPlatformLogs** to see those logs.
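+If you'd rather pull those logs from a script than from the portal, the same query can be run through the Azure CLI. This assumes the *log-analytics* CLI extension is installed and uses a placeholder workspace ID.
+
+```azurecli
+# Query the ASE platform logs collected in the Log Analytics workspace
+az monitor log-analytics query --workspace <workspace-guid> --analytics-query "AppServiceEnvironmentPlatformLogs | sort by TimeGenerated desc | take 20"
+```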
-**Creating an alert**
+### Creating an alert
To create an alert against your logs, follow the instructions in [Create, view, and manage log alerts using Azure Monitor](../../azure-monitor/alerts/alerts-log.md). In brief:
To create an alert against your logs, follow the instructions in [Create, view,
## Internal Encryption
-The App Service Environment operates as a black box system where you cannot see the internal components or the communication within the system. To enable higher throughput, encryption is not enabled by default between internal components. The system is secure as the traffic is completely inaccessible to being monitored or accessed. If you have a compliance requirement though that requires complete encryption of the data path from end to end encryption, you can enable this in the ASE **Configuration** UI.
+The App Service Environment operates as a black box system where you cannot see the internal components or the communication within the system. To enable higher throughput, encryption is not enabled by default between internal components. The system is secure because the traffic can't be monitored or accessed. If you have a compliance requirement that requires complete end-to-end encryption of the data path, you can enable this in the ASE **Configuration** UI.
![Enable internal encryption][5]
If you have multiple ASEs, you might want some ASEs to be upgraded before others
- **Late**: Your ASE will be upgraded in the second half of the App Service upgrades. To configure your upgrade preference, go to the ASE **Configuration** UI. - The **upgradePreferences** feature makes the most sense when you have multiple ASEs because your "Early" ASEs will be upgraded before your "Late" ASEs. When you have multiple ASEs, you should set your development and test ASEs to be "Early" and your production ASEs to be "Late". ## Delete an ASE
The **upgradePreferences** feature makes the most sense when you have multiple A
To delete an ASE: 1. Select **Delete** at the top of the **App Service Environment** pane.- 1. Enter the name of your ASE to confirm that you want to delete it. When you delete an ASE, you also delete all the content within it.-
- ![ASE deletion][3]
-
+![ASE deletion][3]
1. Select **OK**. <!--Image references-->+ [1]: ./media/using/using-appcreate.png [2]: ./media/using/using-appcreate-skus.png [3]: ./media/using/using-delete.png [4]: ./media/using/using-logsetup.png [4]: ./media/using/using-logs.png [5]: ./media/using/using-configuration.png
+[6]: ./media/using/using-ip-addresses.png
<!--Links-->+ [Intro]: ./overview.md [MakeASE]: ./creation.md [ASENetwork]: ./networking.md
app-service Scenario Secure App Access Microsoft Graph As User https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/scenario-secure-app-access-microsoft-graph-as-user.md
Previously updated : 01/28/2021 Last updated : 06/21/2021
The web app now has the required permissions to access Microsoft Graph as the si
> [!IMPORTANT] > If you don't configure App Service to return a usable access token, you receive a ```CompactToken parsing failed with error code: 80049217``` error when you call Microsoft Graph APIs in your code.
-Go to [Azure Resource Explorer](https://resources.azure.com/) and using the resource tree, locate your web app. The resource URL should be similar to `https://resources.azure.com/subscriptions/subscription-id/resourceGroups/SecureWebApp/providers/Microsoft.Web/sites/SecureWebApp20200915115914`.
+# [Azure Resource Explorer](#tab/azure-resource-explorer)
+Go to [Azure Resource Explorer](https://resources.azure.com/) and using the resource tree, locate your web app. The resource URL should be similar to `https://resources.azure.com/subscriptions/subscriptionId/resourceGroups/SecureWebApp/providers/Microsoft.Web/sites/SecureWebApp20200915115914`.
The Azure Resource Explorer is now opened with your web app selected in the resource tree. At the top of the page, select **Read/Write** to enable editing of your Azure resources.
-In the left browser, drill down to **config** > **authsettings**.
+In the left browser, drill down to **config** > **authsettingsV2**.
-In the **authsettings** view, select **Edit**. Set ```additionalLoginParams``` to the following JSON string by using the client ID you copied.
+In the **authsettingsV2** view, select **Edit**. Find the **login** section of **identityProviders** -> **azureActiveDirectory** and add the following **loginParameters** settings: `"loginParameters":[ "response_type=code id_token","resource=00000003-0000-0000-c000-000000000000" ]` .
```json
-"additionalLoginParams": ["response_type=code id_token","resource=00000003-0000-0000-c000-000000000000"],
+"identityProviders": {
+ "azureActiveDirectory": {
+ "enabled": true,
+ "login": {
+ "loginParameters":[
+ "response_type=code id_token",
+ "resource=00000003-0000-0000-c000-000000000000"
+ ]
+ }
+ }
+ }
+},
``` Save your settings by selecting **PUT**. This setting can take several minutes to take effect. Your web app is now configured to access Microsoft Graph with a proper access token. If you don't, Microsoft Graph returns an error saying that the format of the compact token is incorrect.
+# [Azure CLI](#tab/azure-cli)
+
+Use the Azure CLI to call the App Service Web App REST APIs to [get](/rest/api/appservice/web-apps/get-auth-settings) and [update](/rest/api/appservice/web-apps/update-auth-settings) the auth configuration settings so your web app can call Microsoft Graph. Open a command window and log in to the Azure CLI:
+
+```azurecli
+az login
+```
+
+Get your existing `config/authsettingsv2` settings and save them to a local *authsettings.json* file.
+
+```azurecli
+az rest --method GET --url '/subscriptions/{SUBSCRIPTION_ID}/resourceGroups/{RESOURCE_GROUP}/providers/Microsoft.Web/sites/{WEBAPP_NAME}/config/authsettingsv2/list?api-version=2020-06-01' > authsettings.json
+```
+
+Open the authsettings.json file using your preferred text editor. Find the **login** section of **identityProviders** -> **azureActiveDirectory** and add the following **loginParameters** settings: `"loginParameters":[ "response_type=code id_token","resource=00000003-0000-0000-c000-000000000000" ]` .
+
+```json
+"identityProviders": {
+ "azureActiveDirectory": {
+ "enabled": true,
+ "login": {
+ "loginParameters":[
+ "response_type=code id_token",
+ "resource=00000003-0000-0000-c000-000000000000"
+ ]
+ }
+ }
+ }
+},
+```
+
+Save your changes to the *authsettings.json* file and upload the local settings to your web app:
+
+```azurecli
+az rest --method PUT --url '/subscriptions/{SUBSCRIPTION_ID}/resourceGroups/{RESOURCE_GROUP}/providers/Microsoft.Web/sites/{WEBAPP_NAME}/config/authsettingsv2?api-version=2020-06-01' --body @./authsettings.json
+```
++
+## Update the issuer URL
+In the [Azure portal](https://portal.azure.com), navigate to your App Service and then the **Authentication** blade.
+
+Click the **Edit** link next to the Microsoft identity provider.
+
+Check the **Issuer URL** in the **Basics** tab. If the **Issuer URL** ends with "/v2.0", remove it and select **Save**. If you don't remove "/v2.0", you get an *AADSTS901002: The 'resource' request parameter is not supported* error when you sign in to the web app.
+ ## Call Microsoft Graph (.NET) Your web app now has the required permissions and also adds Microsoft Graph's client ID to the login parameters. Using the [Microsoft.Identity.Web library](https://github.com/AzureAD/microsoft-identity-web/), the web app gets an access token for authentication with Microsoft Graph. In version 1.2.0 and later, the Microsoft.Identity.Web library integrates with and can run alongside the App Service authentication/authorization module. Microsoft.Identity.Web detects that the web app is hosted in App Service and gets the access token from the App Service authentication/authorization module. The access token is then passed along to authenticated requests with the Microsoft Graph API.
In this tutorial, you learned how to:
> * Call Microsoft Graph from a web app for a signed-in user. > [!div class="nextstepaction"]
-> [App service accesses Microsoft Graph as the app](scenario-secure-app-access-microsoft-graph-as-app.md)
+> [App service accesses Microsoft Graph as the app](scenario-secure-app-access-microsoft-graph-as-app.md)
automation Automation Send Email https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-send-email.md
authenticate with Azure to retrieve the secret from Azure Key Vault. We'll call
$Conn = Get-AutomationConnection -Name AzureRunAsConnection Connect-AzAccount -ServicePrincipal -Tenant $Conn.TenantID -ApplicationId $Conn.ApplicationID -CertificateThumbprint $Conn.CertificateThumbprint | Out-Null $VaultName = "<Enter your vault name>"
- $SENDGRID_API_KEY = (Get-AzKeyVaultSecret -VaultName $VaultName -Name "SendGridAPIKey").SecretValue
+ $SENDGRID_API_KEY = Get-AzKeyVaultSecret -VaultName $VaultName -Name "SendGridAPIKey" -AsPlainText
$headers = New-Object "System.Collections.Generic.Dictionary[[String],[String]]" $headers.Add("Authorization", "Bearer " + $SENDGRID_API_KEY) $headers.Add("Content-Type", "application/json")
automation Manage Change Tracking https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/change-tracking/manage-change-tracking.md
You can do various searches against the Azure Monitor logs for change records. W
|Query |Description | |||
-|`ConfigurationData`<br>&#124; `where ConfigDataType == "WindowsServices" and SvcStartupType == "Auto"`<br>&#124; `where SvcState == "Stopped"`<br>&#124; `summarize arg_max(TimeGenerated, *) by SoftwareName, Computer` | Shows the most recent inventory records for Microsoft services that were set to Auto but were reported as being Stopped. Results are limited to the most recent record for the specified software name and computer. |
+|`ConfigurationData`<br>&#124; `where ConfigDataType == "WindowsServices" and SvcStartupType == "Auto"`<br>&#124; `where SvcState == "Stopped"`<br>&#124; `summarize arg_max(TimeGenerated, *) by SoftwareName, Computer` | Shows the most recent inventory records for Windows services that were set to Auto but were reported as being Stopped. Results are limited to the most recent record for the specified software name and computer. |
|`ConfigurationChange`<br>&#124; `where ConfigChangeType == "Software" and ChangeCategory == "Removed"`<br>&#124; `order by TimeGenerated desc`|Shows change records for removed software.| ## Next steps
automation Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/change-tracking/overview.md
Title: Azure Automation Change Tracking and Inventory overview
description: This article describes the Change Tracking and Inventory feature, which helps you identify software and Microsoft service changes in your environment. Previously updated : 05/06/2021 Last updated : 06/18/2021
This article introduces you to Change Tracking and Inventory in Azure Automation
- Linux software (packages) - Windows and Linux files - Windows registry keys-- Microsoft services
+- Windows services
- Linux daemons > [!NOTE]
This article introduces you to Change Tracking and Inventory in Azure Automation
Change Tracking and Inventory makes use of [Azure Security Center File Integrity Monitoring (FIM)](../../security-center/security-center-file-integrity-monitoring.md) to examines operating system and application files, and Windows Registry. While FIM monitors those entities, Change Tracking and Inventory natively tracks: - Software changes-- Microsoft services
+- Windows services
- Linux daemons Enabling all features included in Change Tracking and Inventory might cause additional charges. Before proceeding, review [Automation Pricing](https://azure.microsoft.com/pricing/details/automation/) and [Azure Monitor Pricing](https://azure.microsoft.com/pricing/details/monitor/). Change Tracking and Inventory forwards data to Azure Monitor Logs, and this collected data is stored in a Log Analytics workspace. The File Integrity Monitoring (FIM) feature is available only when **Azure Defender for servers** is enabled. See Azure Security Center [Pricing](../../security-center/security-center-pricing.md) to learn more. FIM uploads data to the same Log Analytics workspace as the one created to store data from Change Tracking and Inventory. We recommend that you monitor your linked Log Analytics workspace to keep track of your exact usage. For more information about analyzing Azure Monitor Logs data usage, see [Manage usage and cost](../../azure-monitor/logs/manage-cost-storage.md).
-Machines connected to the Log Analytics workspace use the [Log Analytics agent](../../azure-monitor/agents/log-analytics-agent.md) to collect data about changes to installed software, Microsoft services, Windows registry and files, and Linux daemons on monitored servers. When data is available, the agent sends it to Azure Monitor Logs for processing. Azure Monitor Logs applies logic to the received data, records it, and makes it available for analysis.
+Machines connected to the Log Analytics workspace use the [Log Analytics agent](../../azure-monitor/agents/log-analytics-agent.md) to collect data about changes to installed software, Windows services, Windows registry and files, and Linux daemons on monitored servers. When data is available, the agent sends it to Azure Monitor Logs for processing. Azure Monitor Logs applies logic to the received data, records it, and makes it available for analysis.
> [!NOTE] > Change Tracking and Inventory requires linking a Log Analytics workspace to your Automation account. For a definitive list of supported regions, see [Azure Workspace mappings](../how-to/region-mappings.md). The region mappings don't affect the ability to manage VMs in a separate region from your Automation account.
The next table shows the data collection frequency for the types of changes supp
| Windows registry | 50 minutes | | Windows file | 30 minutes | | Linux file | 15 minutes |
-| Microsoft services | 10 seconds to 30 minutes</br> Default: 30 minutes |
+| Windows services | 10 seconds to 30 minutes</br> Default: 30 minutes |
| Linux daemons | 5 minutes | | Windows software | 30 minutes | | Linux software | 5 minutes |
The following table shows the tracked item limits per machine for Change Trackin
The average Log Analytics data usage for a machine using Change Tracking and Inventory is approximately 40 MB per month, depending on your environment. With the Usage and Estimated Costs feature of the Log Analytics workspace, you can view the data ingested by Change Tracking and Inventory in a usage chart. Use this data view to evaluate your data usage and determine how it affects your bill. See [Understand your usage and estimate costs](../../azure-monitor/logs/manage-cost-storage.md#understand-your-usage-and-estimate-costs).
-### Microsoft service data
+### Windows services data
-The default collection frequency for Microsoft services is 30 minutes. You can configure the frequency using a slider on the **Microsoft services** tab under **Edit Settings**.
+The default collection frequency for Windows services is 30 minutes. You can configure the frequency using a slider on the **Windows services** tab under **Edit Settings**.
-![Microsoft services slider](./media/overview/windowservices.png)
+![Windows services slider](./media/overview/windowservices.png)
To optimize performance, the Log Analytics agent only tracks changes. Setting a high threshold might miss changes if the service returns to its original state. Setting the frequency to a smaller value allows you to catch changes that might be missed otherwise.
automation Python 3 Packages https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/python-3-packages.md
Title: Manage Python 3 packages in Azure Automation
description: This article tells how to manage Python 3 packages (preview) in Azure Automation. Previously updated : 02/19/2021 Last updated : 06/22/2021 # Manage Python 3 packages (preview) in Azure Automation
-Azure Automation allows you to run Python 3 runbooks (preview) on Azure and on Linux Hybrid Runbook Workers. To help in simplification of runbooks, you can use Python packages to import the modules that you need. To import a single package, see [Import a package](#import-a-package). To import a package with multiple packages, see [Import a package with dependencies](#import-a-package-with-dependencies). This article describes how to manage and use Python 3 packages (preview) in Azure Automation.
+Azure Automation allows you to run Python 3 runbooks (preview) in the Azure sandbox environment and on Linux Hybrid Runbook Workers. To help simplify runbooks, you can use Python packages to import the modules that you need. To import a single package, see [Import a package](#import-a-package). To import a package with dependencies, see [Import a package with dependencies](#import-a-package-with-dependencies). This article describes how to manage and use Python 3 packages (preview) in Azure Automation.
+
+## Packages as source files
+
+Azure Automation supports only Python packages that contain pure Python code and don't include extensions or code written in other languages. However, the Azure sandbox environment might not have the required compilers for C/C++ binaries, so it's recommended to use [wheel files](https://pythonwheels.com/) instead. The [Python Package Index](https://pypi.org/) (PyPI) is a repository of software for the Python programming language. When selecting a Python 3 package to import into your Automation account from PyPI, note the following filename parts:
+
+| Filename part | Description |
+|||
+|cp38|Automation supports **Python 3.8.x** for Cloud Jobs.|
+|amd64|Azure sandbox processes are **Windows 64-bit** architecture.|
+
+For example, if you wanted to import pandas, you could select a wheel file with a name similar as `pandas-1.2.3-cp38-win_amd64.whl`.
+
+Some Python packages available on PyPI don't provide a wheel file. In this case, download the source (.zip or .tar.gz file) and generate the wheel file using `pip`. For example, perform the following steps using a 64-bit machine with Python 3.8.x and the wheel package installed:
+
+1. Download the source file `pandas-1.2.4.tar.gz`.
+1. Run pip to get the wheel file with the following command: `pip wheel --no-deps pandas-1.2.4.tar.gz`.
## Import a package
-In your Automation account, select **Python packages** under **Shared Resources**. select **+ Add a Python package**.
+In your Automation account, select **Python packages** under **Shared Resources**. Then select **+ Add a Python package**.
:::image type="content" source="media/python-3-packages/add-python-3-package.png" alt-text="Screenshot of the Python 3 packages page shows Python 3 packages in the left menu and Add a Python 2 package highlighted.":::
On the **Add Python Package** page, select **Python 3** for the **Version**, and
:::image type="content" source="media/python-3-packages/upload-package.png" alt-text="Screenshot shows the Add Python 3 Package page with an uploaded tar.gz file selected.":::
-Once a package has been imported, it's listed on the Python packages page in your Automation account, under the **Python 3 packages (preview)** tab. If you need to remove a package, select the package and click **Delete**.
+Once a package has been imported, it's listed on the Python packages page in your Automation account, under the **Python 3 packages (preview)** tab. If you need to remove a package, select the package and select **Delete**.
:::image type="content" source="media/python-3-packages/python-3-packages-list.png" alt-text="Screenshot shows the Python 3 packages page after a package has been imported.":::
automation Configure Groups https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/update-management/configure-groups.md
Title: Use dynamic groups with Azure Automation Update Management
description: This article tells how to use dynamic groups with Azure Automation Update Management. Previously updated : 07/28/2020 Last updated : 06/22/2021
A dynamic group for non-Azure machines uses saved searches, also called computer
![Screenshot shows the Select groups page for Non-Azure (Preview) and the Preview pane on the right side.](./media/configure-groups/select-groups-2.png)
+> [!NOTE]
+> A saved search that [queries data stored across multiple Log Analytics workspaces](../../azure-monitor/logs/cross-workspace-query.md) is not supported.
+ ## Next steps You can [query Azure Monitor logs](query-logs.md) to analyze update assessments, deployments, and other related management tasks. It includes pre-defined queries to help you get started.
automation Deploy Updates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/update-management/deploy-updates.md
Title: How to create update deployments for Azure Automation Update Management
description: This article describes how to schedule update deployments and review their status. Previously updated : 04/19/2021 Last updated : 06/22/2021
To schedule a new update deployment, perform the following steps. Depending on t
>[!NOTE] > Deploying updates by update classification doesn't work on RTM versions of CentOS. To properly deploy updates for CentOS, select all classifications to make sure updates are applied. There's currently no supported method to enable native classification-data availability on CentOS. See the following for more information about [Update classifications](overview.md#update-classifications).
+ >[!NOTE]
+ > Deploying updates by update classification may not work correctly for Linux distros supported by Update Management. This is the result of an issue identified with the naming schema of the OVAL file, which prevents Update Management from properly matching classifications based on filtering rules. Because of the different logic used in security update assessments, results may differ from the security updates applied during deployment if your update schedules for Linux have the classification set as **Critical and security updates**.
+ >
+ > Update Management for Windows Server machines is unaffected; update classification and deployments are unchanged.
+ 8. Use the **Include/exclude updates** region to add or exclude selected updates from the deployment. On the **Include/Exclude** page, you enter KB article ID numbers to include or exclude for Windows updates. For supported Linux distros, you specify the package name. :::image type="content" source="./media/deploy-updates/include-specific-updates-example.png" alt-text="Example showing how to include specific updates.":::
automation Operating System Requirements https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/update-management/operating-system-requirements.md
Title: Azure Automation Update Management Supported Clients
description: This article describes the supported Windows and Linux operating systems with Azure Automation Update Management. Previously updated : 06/07/2021 Last updated : 06/22/2021
The following table lists the supported operating systems for update assessments
|CentOS 6, 7, and 8 (x64) | Linux agents require access to an update repository. Classification-based patching requires `yum` to return security data that CentOS doesn't have in its RTM releases. For more information on classification-based patching on CentOS, see [Update classifications on Linux](view-update-assessments.md#linux). | |Red Hat Enterprise 6, 7, and 8 (x64) | Linux agents require access to an update repository. | |SUSE Linux Enterprise Server 12, 15, and 15.1 (x64) | Linux agents require access to an update repository. |
-|Ubuntu 14.04 LTS, 16.04 LTS, and 18.04 LTS (x64) |Linux agents require access to an update repository. |
+|Ubuntu 14.04 LTS, 16.04 LTS, 18.04 LTS, and 20.04 LTS (x64) |Linux agents require access to an update repository. |
> [!NOTE] > Update Management does not support safely automating update management across all instances in an Azure virtual machine scale set. [Automatic OS image upgrades](../../virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-upgrade.md) is the recommended method for managing OS image upgrades on your scale set.
automation Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/update-management/overview.md
Title: Azure Automation Update Management overview
description: This article provides an overview of the Update Management feature that implements updates for your Windows and Linux machines. Previously updated : 06/07/2021 Last updated : 06/22/2021
When you schedule an update to run on a Linux machine, that for example is confi
Categorization is done for Linux updates as **Security** or **Others** based on the OVAL files, which includes updates addressing security issues or vulnerabilities. But when the update schedule is run, it executes on the Linux machine using the appropriate package manager like YUM, APT, or ZYPPER to install them. The package manager for the Linux distro may have a different mechanism to classify updates, where the results may differ from the ones obtained from OVAL files by Update Management. To manually check the machine and understand which updates are security relevant by your package manager, see [Troubleshoot Linux update deployment](../troubleshoot/update-management.md#updates-linux-installed-different).
+>[!NOTE]
+> Deploying updates by update classification may not work correctly for Linux distros supported by Update Management. This is the result of an issue identified with the naming schema of the OVAL file, which prevents Update Management from properly matching classifications based on filtering rules. Because of the different logic used in security update assessments, results may differ from the security updates applied during deployment if your update schedules for Linux have the classification set as **Critical and security updates**.
+>
+> Update Management for Windows Server machines is unaffected; update classification and deployments are unchanged.
+ ## Integrate Update Management with Configuration Manager Customers who have invested in Microsoft Endpoint Configuration Manager for managing PCs, servers, and mobile devices also rely on the strength and maturity of Configuration Manager to help manage software updates. To learn how to integrate Update Management with Configuration Manager, see [Integrate Update Management with Windows Endpoint Configuration Manager](mecmintegration.md).
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/overview.md
The following table describes the scenarios that are currently supported for Arc
|Azure Regions |Direct connected mode |Indirect connected mode |
||||
|East US|Available|Available
+|East US 2|Available|Available
+|West US|Available|Available
+|Central US|Not available|Available
+|South Central US|Available|Available
+|UK South|Available|Available
+|France Central|Available|Available
|West Europe |Available |Available
|North Europe|Available|Available
+|Japan East|Not available|Available
+|Korea Central|Not available|Available
+|East Asia|Not available|Available
+|Southeast Asia|Available|Available
+|Australia East|Available|Available
## Next steps
azure-arc Private Link Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/private-link-security.md
+
+ Title: Use Azure Private Link to securely connect networks to Azure Arc
+description: Learn how to use Azure Private Link to securely connect networks to Azure Arc.
+ Last updated : 06/22/2021++
+# Use Azure Private Link to securely connect networks to Azure Arc
+
+[Azure Private Link](../../private-link/private-link-overview.md) allows you to securely link Azure PaaS services to your virtual network using private endpoints. For many services, you just set up an endpoint per resource. This means you can connect your on-premises or multi-cloud servers with Azure Arc and send all traffic over an Azure [ExpressRoute](../../expressroute/expressroute-introduction.md) or site-to-site [VPN connection](../../vpn-gateway/vpn-gateway-about-vpngateways.md) instead of using public networks.
+
+Starting with Azure Arc enabled servers, you can use a Private Link Scope model to allow multiple servers or machines to communicate with their Azure Arc resources using a single private endpoint.
+
+This article covers when to use and how to set up an Azure Arc Private Link Scope (preview).
+
+> [!NOTE]
+> Azure Arc Private Link Scope (preview) is available in all commercial cloud regions, but it is not available in the US Government cloud today.
+
+## Advantages
+
+With Private Link you can:
+
+- Connect privately to Azure Arc without opening up any public network access.
+- Ensure data from the Arc enabled machine or server is only accessed through authorized private networks. This also includes data from [VM extensions](manage-vm-extensions.md) installed on the machine or server that provide post-deployment management and monitoring support.
+- Prevent data exfiltration from your private networks by defining specific Azure Arc enabled servers and other Azure service resources, such as Azure Monitor, that connect through your private endpoint.
+- Securely connect your private on-premises network to Azure Arc using ExpressRoute and Private Link.
+- Keep all traffic inside the Microsoft Azure backbone network.
+
+For more information, see [Key Benefits of Private Link](../../private-link/private-link-overview.md#key-benefits).
+
+## How it works
+
+Azure Arc Private Link Scope (preview) connects private endpoints (and the virtual networks they're contained in) to an Azure resource, in this case Azure Arc enabled servers. When you enable any one of the VM extensions supported by Arc enabled servers, such as Azure Automation Update Management or Azure Monitor, those resources connect to other Azure resources, such as:
+
+- Log Analytics workspace, required for Azure Automation Update Management, Azure Automation Change Tracking and Inventory, Azure Monitor VM insights, and Azure Monitor log collection with Log Analytics agent.
+- Azure Automation account, required for Update Management and Change Tracking and Inventory.
+- Azure Key Vault
+- Azure Blob storage, required for Custom Script Extension.
++
+Connectivity from an Arc enabled server to the other Azure resources listed earlier requires configuring Private Link for each service. For more information, see the following to configure Private Link for [Azure Automation](../../automation/how-to/private-link-security.md), [Azure Monitor](../../azure-monitor/logs/private-link-security.md), [Azure Key Vault](../../key-vault/general/private-link-service.md), or [Azure Blob storage](../../private-link/tutorial-private-endpoint-storage-portal.md).
+
+> [!IMPORTANT]
+> Azure Private Link is now generally available. Both Private Endpoint and Private Link service (service behind standard load balancer) are generally available. Different Azure PaaS will onboard to Azure Private Link at different schedules. See [Private Link availability](../../private-link/availability.md) for an accurate status of Azure PaaS on Private Link. For known limitations, see [Private Endpoint](../../private-link/private-endpoint-overview.md#limitations) and [Private Link Service](../../private-link/private-link-service-overview.md#limitations).
+
+* The Private Endpoint on your VNet allows it to reach Azure Arc enabled servers endpoints through private IPs from your network's pool, instead of using the public IPs of these endpoints. That allows you to keep using your Azure Arc enabled servers resource without opening your VNet to unrequested outbound traffic.
+
+* Traffic from the Private Endpoint to your resources will go over the Microsoft Azure backbone, and isn't routed to public networks.
+
+* You can configure each of your components to allow or deny ingestion and queries from public networks. That provides resource-level protection, so that you can control traffic to specific resources.
+
+## Restrictions and limitations
+
+The Arc enabled servers Private Link Scope object has a number of limits you should consider when planning your Private Link setup.
+
+- You can associate at most one Azure Arc Private Link Scope with a virtual network.
+
+- An Azure Arc enabled machine or server resource can only connect to one Azure Arc enabled servers Private Link Scope.
+
+- All on-premises machines need to use the same private endpoint by resolving the correct private endpoint information (FQDN record name and private IP address) using the same DNS forwarder. For more information, see [Azure Private Endpoint DNS configuration](../../private-link/private-endpoint-dns.md)
+
+- The Azure Arc enabled machine or server, Azure Arc Private Link Scope, and virtual network must be in the same Azure region.
+
+- Traffic to Azure Active Directory and Azure Resource Manager service tags must be allowed through your on-premises network firewall during the preview.
+
+- Other Azure services that you will use, for example Azure Monitor, require their own private endpoints in your virtual network.
+
+- Azure Arc enabled servers Private Link Scope is not currently available in Azure US Government regions.
+
+## Planning your Private Link setup
+
+To connect your server to Azure Arc over a private link, you need to configure your network to accomplish the following:
+
+1. Establish a connection between your on-premises network and an Azure virtual network using a [site-to-site VPN](../../vpn-gateway/tutorial-site-to-site-portal.md) or [ExpressRoute circuit](../../expressroute/expressroute-howto-linkvnet-arm.md).
+
+1. Deploy an Azure Arc Private Link Scope (preview), which controls which machines or servers can communicate with Azure Arc over private endpoints and associate it with your Azure virtual network using a private endpoint.
+
+1. Update the DNS configuration on your local network to resolve the private endpoint addresses.
+
+1. Configure your local firewall to allow access to Azure Active Directory and Azure Resource Manager. This is a temporary step and will not be required when private endpoints for these services enter preview.
+
+1. Associate the machines or servers registered with Azure Arc enabled servers with the private link scope.
+
+1. Optionally, deploy private endpoints for other Azure services your machine or server is managed by, such as:
+
+ - Azure Monitor
+ - Azure Automation
+ - Azure Blob storage
+ - Azure Key Vault
+
+This article assumes you have already set up your ExpressRoute circuit or site-to-site VPN connection.
+
+## Network configuration
+
+Azure Arc enabled servers integrates with several Azure services to bring cloud management and governance to your hybrid machines or servers. Most of these services already offer private endpoints, but you need to configure your firewall and routing rules to allow access to Azure Active Directory and Azure Resource Manager over the internet until these services offer private endpoints.
+
+There are two ways you can achieve this:
+
+- If your network is configured to route all internet-bound traffic through the Azure VPN or ExpressRoute circuit, you can configure the network security group (NSG) associated with your subnet in Azure to allow outbound TCP 443 (HTTPS) access to Azure AD and Azure using [service tags](../../virtual-network/service-tags-overview.md). The NSG rules should look like the following:
+
+ |Setting |Azure AD rule | Azure rule |
+ |--|--|--|
+ |Source |Virtual network |Virtual network |
+ |Source port ranges |* |* |
+ |Destination |Service Tag |Service Tag |
+ |Destination service tag |AzureActiveDirectory |AzureResourceManager |
+ |Destination port ranges |443 |443 |
+ |Protocol |Tcp |Tcp |
+ |Action |Allow |Allow |
+ |Priority |150 (must be lower than any rules that block internet access) |151 (must be lower than any rules that block internet access) |
+ |Name |AllowAADOutboundAccess |AllowAzOutboundAccess |
+
+- Configure the firewall on your local network to allow outbound TCP 443 (HTTPS) access to Azure AD and Azure using the downloadable service tag files. The JSON file contains all the public IP address ranges used by Azure AD and Azure and is updated monthly to reflect any changes. Azure AD's service tag is `AzureActiveDirectory` and Azure's service tag is `AzureResourceManager`. Consult with your network administrator and network firewall vendor to learn how to configure your firewall rules.
+
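+The following Azure CLI sketch shows one way the two NSG rules in the preceding table might be created. The resource group and NSG names are placeholders; adjust the priorities so they sit below any rules that block internet access.
+
+```azurecli
+# Allow outbound HTTPS to Azure AD (service tag AzureActiveDirectory)
+az network nsg rule create --resource-group myResourceGroup --nsg-name myNSG \
+  --name AllowAADOutboundAccess --priority 150 --direction Outbound --access Allow \
+  --protocol Tcp --source-address-prefixes VirtualNetwork --source-port-ranges '*' \
+  --destination-address-prefixes AzureActiveDirectory --destination-port-ranges 443
+
+# Allow outbound HTTPS to Azure Resource Manager (service tag AzureResourceManager)
+az network nsg rule create --resource-group myResourceGroup --nsg-name myNSG \
+  --name AllowAzOutboundAccess --priority 151 --direction Outbound --access Allow \
+  --protocol Tcp --source-address-prefixes VirtualNetwork --source-port-ranges '*' \
+  --destination-address-prefixes AzureResourceManager --destination-port-ranges 443
+```
+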
+See the visual diagram under the section [How it works](#how-it-works) for the network traffic flows.
+
+## Create a Private Link Scope
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Go to **Create a resource** in the Azure portal and search for **Azure Arc Private Link Scope**. Or you can use the following link to open the [Azure Arc Private Link Scope](https://portal.azure.com/#blade/HubsExtension/BrowseResource/resourceType/Microsoft.HybridCompute%2FprivateLinkScopes) page in the portal.
+
+ :::image type="content" source="./media/private-link-security/find-scope.png" alt-text="Find Private Link Scope" border="true":::
+
+1. Select **Create**.
+
+1. Pick a Subscription and Resource Group. During the preview, your virtual network and Azure Arc enabled servers must be in the same subscription as the Azure Arc Private Link Scope.
+
+1. Give the Azure Arc Private Link Scope a name. It's best to use a meaningful and clear name.
+
+ You can optionally require every Arc enabled machine or server associated with this Azure Arc Private Link Scope (preview) to send data to the service through the private endpoint. If you select **Enable public network access**, machines or servers associated with this Azure Arc Private Link Scope (preview) can communicate with the service over both private and public networks. You can change this setting after creating the scope if you change your mind.
+
+1. Select **Review + Create**.
+
+ :::image type="content" source="./media/private-link-security/create-private-link-scope.png" alt-text="Create Private Link Scope" border="true":::
+
+1. Let the validation pass, and then select **Create**.
+
+## Create a private endpoint
+
+Once your Azure Arc Private Link Scope (preview) is created, you need to connect it with one or more virtual networks using a private endpoint. The private endpoint exposes access to the Azure Arc services on a private IP in your virtual network address space.
+
+1. In your scope resource, select **Private Endpoint connections** in the left-hand resource menu. Select **Add** to start the endpoint create process. You can also approve connections that were started in the Private Link center here by selecting them and selecting **Approve**.
+
+ :::image type="content" source="./media/private-link-security/create-private-endpoint.png" alt-text="Create Private Endpoint" border="true":::
+
+1. Pick the subscription, resource group, and name of the endpoint, and the region it should live in. The region needs to be the same region as the VNet you connect it to.
+
+1. Select **Next: Resource**.
+
+1. On the **Resource** page,
+
+ a. Pick the **Subscription** that contains your Azure Arc Private Link Scope resource.
+
+ b. For **Resource type**, choose **Microsoft.HybridCompute/privateLinkScopes**.
+
+ c. From the **Resource** drop-down, choose your Private Link scope you created earlier.
+
+ d. Select **Next: Configuration >**.
+
+ :::image type="content" source="./media/private-link-security/create-private-endpoint-configuration.png" alt-text="Complete creation of Private Endpoint" border="true":::
+
+1. On the **Configuration** page,
+
+ a. Choose the **virtual network** and **subnet** that you want to connect to your Azure Arc resources.
+
+ b. Choose **Yes** for **Integrate with private DNS zone**, and let it automatically create a new Private DNS Zone. The actual DNS zones may be different from what is shown in the screenshot below.
+
+ > [!NOTE]
+ > If you choose **No** and prefer to manage DNS records manually, first complete setting up your Private Link - including this Private Endpoint and the Private Scope configuration. Then, configure your DNS according to the instructions in [Azure Private Endpoint DNS configuration](../../private-link/private-endpoint-dns.md). Make sure not to create empty records as preparation for your Private Link setup. The DNS records you create can override existing settings and impact your connectivity with Arc enabled servers.
+
+ c. Select **Review + create**.
+
+ d. Let validation pass.
+
+ e. Select **Create**.
+
+## Configure on-premises DNS forwarding
+
+Your on-premises machines or servers need to be able to resolve the private link DNS records to the private endpoint IP addresses. How you configure this depends on whether you're using Azure private DNS zones to maintain DNS records, or if you're using your own DNS server on-premises and how many servers you're configuring.
+
+### DNS configuration using Azure-integrated private DNS zones
+
+If you set up private DNS zones for Azure Arc enabled servers and Guest Configuration when creating the private endpoint, your on-premises machines or servers need to be able to forward DNS queries to the built-in Azure DNS servers to resolve the private endpoint addresses correctly. You need a DNS forwarder in Azure (either a purpose-built VM or an Azure Firewall instance with DNS proxy enabled), after which you can configure your on-premises DNS server to forward queries to Azure to resolve private endpoint IP addresses.
+
+The private endpoint documentation provides guidance for configuring [on-premises workloads using a DNS forwarder](../../private-link/private-endpoint-dns.md#on-premises-workloads-using-a-dns-forwarder).
+
+### Manual DNS server configuration
+
+If you opted out of using Azure private DNS zones during private endpoint creation, you will need to create the required DNS records in your on-premises DNS server.
+
+1. Go to the Azure portal with the Azure Arc enabled servers private link preview features enabled.
+
+1. Navigate to the private endpoint resource associated with your virtual network and private link scope.
+
+1. From the left-hand pane, select **DNS configuration** to see a list of the DNS records and corresponding IP addresses you'll need to set up on your DNS server. The FQDNs and IP addresses will change based on the region you selected for your private endpoint and the available IP addresses in your subnet.
+
+ :::image type="content" source="./media/private-link-security/dns-configuration.png" alt-text="DNS configuration details" border="true":::
+
+1. Follow the guidance from your DNS server vendor to add the necessary DNS zones and A records to match the table in the portal. Ensure that you select a DNS server that is appropriately scoped for your network. Every machine or server that uses this DNS server now resolves the private endpoint IP addresses and must be associated with the Azure Arc Private Link Scope (preview), or the connection will be refused.
+
+### Single server scenarios
+
+If you're only planning to use Private Links to support a few machines or servers, you may not want to update your entire network's DNS configuration. In this case, you can add the private endpoint hostnames and IP addresses to your operating system's **Hosts** file. Depending on the OS configuration, the Hosts file can be the primary or alternative method for resolving hostname to IP address.
+
+#### Windows
+
+1. Using an account with administrator privileges, open **C:\Windows\System32\drivers\etc\hosts**.
+
+1. Add the private endpoint IPs and hostnames as shown in the table from step 3 under [Manual DNS server configuration](#manual-dns-server-configuration). The hosts file requires the IP address first followed by a space and then the hostname.
+
+1. Save the file with your changes. You may need to save to another directory first, then copy the file to the original path.
+
+#### Linux
+
+1. Using an account with the **sudoers** privilege, run `sudo nano /etc/hosts` to open the hosts file.
+
+1. Add the private endpoint IPs and hostnames as shown in the table from step 3 under [Manual DNS server configuration](#manual-dns-server-configuration). The hosts file requires the IP address first, followed by a space and then the hostname.
+
+1. Save the file with your changes.
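+
+For illustration, entries in the hosts file (Windows or Linux) might look like the following. The hostnames come from your private endpoint's DNS configuration page in the portal, and the private IP addresses shown here are placeholders only:
+
+```
+# Hypothetical private endpoint IPs - use the values shown in the portal for your endpoint
+10.0.0.4 gbl.his.arc.azure.com
+10.0.0.5 agentserviceapi.guestconfiguration.azure.com
+```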
+
+## Connect to an Azure Arc enabled server
+
+> [!NOTE]
+> The minimum supported version of the Azure Arc Connected Machine agent with private endpoint is version 1.4. The Arc enabled servers deployment script generated in the portal downloads the latest version.
+
+### Configure a new Arc enabled server to use Private link
+
+When connecting a machine or server with Azure Arc enabled servers for the first time, you can optionally connect it to a Private Link Scope.
+
+1. From your browser, go to the [Azure portal](https://portal.azure.com).
+
+1. Navigate to **Servers - Azure Arc**.
+
+1. On the **Servers - Azure Arc** page, select **Add** at the upper left.
+
+1. On the **Select a method** page, select the **Add servers using interactive script** tile, and then select **Generate script**.
+
+1. On the **Generate script** page, select the subscription and resource group where you want the machine to be managed within Azure. Select an Azure location where the machine metadata will be stored. This location can be the same as, or different from, the resource group's location.
+
+1. On the **Prerequisites** page, review the information and then select **Next: Resource details**.
+
+1. On the **Resource details** page, provide the following:
+
+ 1. In the **Resource group** drop-down list, select the resource group the machine will be managed from.
+ 1. In the **Region** drop-down list, select the Azure region to store the machine or server metadata.
+ 1. In the **Operating system** drop-down list, select the operating system that the script is configured to run on.
+ 1. Under **Network Connectivity**, select **Private endpoint** and select the Azure Arc Private Link Scope created in Part 1 from the list.
+
+ :::image type="content" source="./media/private-link-security/arc-enabled-servers-create-script.png" alt-text="Selecting Private Endpoint connectivity option" border="true":::
+
+ 1. Select **Next: Tags**.
+
+1. On the **Tags** page, review the default **Physical location tags** suggested and enter a value, or specify one or more **Custom tags** to support your standards.
+
+1. Select **Next: Download and run script**.
+
+1. On the **Download and run script** page, review the summary information, and then select **Download**. If you still need to make changes, select **Previous**.
+
+After downloading the script, run it on your machine or server using a privileged (administrator or root) account. Depending on your network configuration, you may need to download the agent on a computer with internet access and transfer it to your machine or server. The Windows agent can be downloaded from [https://aka.ms/AzureConnectedMachineAgent](https://aka.ms/AzureConnectedMachineAgent) and the Linux agent can be downloaded from [https://packages.microsoft.com](https://packages.microsoft.com). Look for the latest version of the **azcmagent** under your OS distribution directory and install it with your local package manager.
+
+The script will return status messages letting you know if onboarding was successful after it completes.
+
+> [!NOTE]
+> If you're deploying the Connected Machine agent on a Linux server, there may be a five-minute delay during the network connectivity check, followed by an error saying that `you do not have access to login.windows.net`, even if your firewall is configured correctly. This is a known issue and will be fixed in a future agent release. Onboarding should still succeed if your firewall is configured correctly.
+
+### Configure an existing Arc enabled server
+
+For Arc enabled servers that were set up prior to your private link scope, you can allow them to start using the Arc enabled servers Private Link Scope by completing the following steps.
+
+1. In the Azure portal, navigate to your Azure Arc Private Link Scope resource.
+
+1. From the left-hand pane, select **Azure Arc resources** and then **+ Add**.
+
+1. Select the servers in the list that you want to associate with the Private Link Scope, and then select **Select** to save your changes.
+
+ > [!NOTE]
+ > Only Azure Arc enabled servers in the same subscription and region as your Private Link Scope are shown.
+
+ :::image type="content" source="./media/private-link-security/select-servers-private-link-scope.png" alt-text="Selecting Azure Arc resources" border="true":::
+
+It may take up to 15 minutes for the Private Link Scope to accept connections from the recently associated server(s).
+
+## Troubleshooting
+
+1. Ensure the required resource providers and feature flags are registered for your subscription.
+
+ To check with the Azure CLI, run the following commands.
+
+ ```azurecli
+ az feature show --namespace Microsoft.Network --name AllowPrivateEndpoints
+
+ {
+ "id": "/subscriptions/ID/providers/Microsoft.Features/providers/Microsoft.Network/features/AllowPrivateEndpoints",
+ "name": "Microsoft.Network/AllowPrivateEndpoints",
+ "properties": {
+ "state": "Registered"
+ },
+ "type": "Microsoft.Features/providers/features"
+ }
+ ```
+
+ ```azurecli
+ az feature show --namespace Microsoft.HybridCompute --name ArcServerPrivateLinkPreview
+
+ {
+ "id": "/subscriptions/ID/providers/Microsoft.Features/providers/microsoft.hybridcompute/features/ArcServerPrivateLinkPreview",
+ "name": "microsoft.hybridcompute/ArcServerPrivateLinkPreview",
+ "properties": {
+ "state": "Registered"
+ },
+ "type": "Microsoft.Features/providers/features"
+ }
+ ```
+
+ To check with Azure PowerShell, run the following commands:
+
+ ```azurepowershell
+ Get-AzProviderFeature -ProviderNamespace Microsoft.Network -FeatureName AllowPrivateEndpoints
+
+ FeatureName ProviderName RegistrationState
+ -- --
+ AllowPrivateEndpoints Microsoft.Network Registered
+ ```
+
+ ```azurepowershell
+ Get-AzProviderFeature -ProviderNamespace Microsoft.HybridCompute -FeatureName ArcServerPrivateLinkPreview
+
+ FeatureName ProviderName RegistrationState
+ -- --
+ ArcServerPrivateLinkPreview Microsoft.HybridCompute Registered
+ ```
+
+ If the features show as registered but you are still unable to see the `Microsoft.HybridCompute/privateLinkScopes` resource when creating a private endpoint, try re-registering the resource provider as shown [here](agent-overview.md#register-azure-resource-providers).
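+
+    For example, a re-registration with the Azure CLI might look like the following sketch:
+
+    ```azurecli
+    # Re-register the resource provider that hosts the privateLinkScopes resource type
+    az provider register --namespace Microsoft.HybridCompute
+    ```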
+
+1. Check your on-premises DNS server(s) to verify it is either forwarding to Azure DNS or is configured with appropriate A records in your private link zone. These lookup commands should return private IP addresses in your Azure virtual network. If they resolve public IP addresses, double check your machine or server and network's DNS configuration.
+
+    ```
+    nslookup gbl.his.arc.azure.com
+    nslookup agentserviceapi.guestconfiguration.azure.com
+    ```
+
+1. If you are having trouble onboarding a machine or server, confirm that you've added the Azure Active Directory and Azure Resource Manager service tags to your local network firewall. The agent needs to communicate with these services over the internet until private endpoints are available for these services.
+
+## Next steps
+
+* To learn more about Private Endpoint, see [What is Azure Private Endpoint?](../../private-link/private-endpoint-overview.md).
+
+* If you are experiencing issues with your Azure Private Endpoint connectivity setup, see [Troubleshoot Azure Private Endpoint connectivity problems](../../private-link/troubleshoot-private-endpoint-connectivity.md).
+
+* See the following to configure Private Link for [Azure Automation](../../automation/how-to/private-link-security.md), [Azure Monitor](../../azure-monitor/logs/private-link-security.md), [Azure Key Vault](../../key-vault/general/private-link-service.md), or [Azure Blob storage](../../private-link/tutorial-private-endpoint-storage-portal.md).
azure-maps Creator Facility Ontology https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/creator-facility-ontology.md
The `facility` feature class defines the area of the site, building footprint, a
|`categoryId` | [category.Id](#category) |true | The ID of a [`category`](#category) feature.| |`occupants` | array of [directoryInfo.Id](#directoryinfo) | false | The IDs of [directoryInfo](#directoryinfo) features. Used to represent one or many occupants in the feature. | |`addressId` | [directoryInfo.Id](#directoryinfo) | true | The ID of a [directoryInfo](#directoryinfo) feature. Used to represent the address of the feature.|
-|`addressRoomNumber` | [directoryInfo.Id](#directoryinfo)| true | Room/Unit/Apartment/Suite number of the unit.|
|`name` | string | false | Name of the feature in local language. Maximum length allowed is 1000. | |`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1000. | |`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1000.|
The `directoryInfo` object class feature defines the name, address, phone number
|`externalId` | string |true | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1000.| |`streetAddress` |string |false |Street address part of the address. Maximum length allowed is 1000. | |`unit` |string |false |Unit number part of the address. Maximum length allowed is 1000. |
-|`locality`| string| false |The locality of the address. For example: city, municipality, village). Maximum length allowed is 1000.|
+|`locality`| string| false |The locality of the address. For example: city, municipality, village. Maximum length allowed is 1000.|
|`adminDivisions`| string| false |Administrative division part of the address, from smallest to largest (County, State, Country). For example: ["King", "Washington", "USA" ] or ["West Godavari", "Andhra Pradesh", "IND" ]. Maximum length allowed is 1000.| |`postalCode`| string | false |Postal code part of the address. Maximum length allowed is 1000.| |`name` | string | false | Name of the feature in local language. Maximum length allowed is 1000.| |`nameSubtitle` | string | false | Subtitle that shows up under the `name` of the feature. Can be used to display the name in a different language, and so on. Maximum length allowed is 1000. | |`nameAlt` | string | false | Alternate name used for the feature. Maximum length allowed is 1000. |
-|`phoneNumber` | string | false | Phone number. |
+|`phoneNumber` | string | false | Phone number. Maximum length allowed is 1000. |
|`website` | string | false | Website URL. Maximum length allowed is 1000. | |`hoursOfOperation` | string | false | Hours of operation as text, following the [Open Street Map specification](https://wiki.openstreetmap.org/wiki/Key:opening_hours/specification). Maximum length allowed is 1000. |
azure-maps How To Render Custom Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/how-to-render-custom-data.md
Title: Render custom data on a raster map | Microsoft Azure Maps
+ Title: Render custom data on a raster map in Microsoft Azure Maps
description: Learn how to add pushpins, labels, and geometric shapes to a raster map. See how to use the static image service in Azure Maps for this purpose. Previously updated : 05/26/2021 Last updated : 06/22/2021
# Render custom data on a raster map
-This article explains how to use the [static image service](/rest/api/maps/render/getmapimage), with image composition functionality, to allow overlays on top of a raster map. Image composition includes the ability to get a raster tile back, with additional data like custom pushpins, labels, and geometry overlays.
+This article describes how to use the [static image service](/rest/api/maps/render/getmapimage) with image composition functionality. Image composition functionality supports the retrieval of static raster tiles that contain custom data.
-To render custom pushpins, labels, and geometry overlays, you can use the Postman application. You can use Azure Maps [Data Service APIs](/rest/api/maps/data) to store and render overlays.
+The following are examples of custom data:
+
+- Custom pushpins
+- Labels
+- Geometry overlays
> [!Tip]
-> To show a simple map on a web page, it's often more cost effective to use the Azure Maps Web SDK, rather than to use the static image service. The web SDK uses map tiles; and unless the user pans and zooms the map, they will often generate only a fraction of a transaction per map load. The Azure Maps web SDK has options for disabling panning and zooming. Additionally, the Azure Maps web SDK provides a richer set of data visualization options than a static map web service does.
+> To show a simple map on a web page, it's often more cost effective to use the Azure Maps Web SDK, rather than to use the static image service. The web SDK uses map tiles; and unless the user pans and zooms the map, they will often generate only a fraction of a transaction per map load. The Azure Maps web SDK has options for disabling panning and zooming. Also, the Azure Maps web SDK provides a richer set of data visualization options than a static map web service does.
## Prerequisites 1. [Make an Azure Maps account](quick-demo-map-app.md#create-an-azure-maps-account) 2. [Obtain a primary subscription key](quick-demo-map-app.md#get-the-primary-key-for-your-account), also known as the primary key or the subscription key.
-This tutorial uses the [Postman](https://www.postman.com/) application, but you may use a different API development environment.
+This article uses the [Postman](https://www.postman.com/) application, but you may use a different API development environment.
+
+We'll use the Azure Maps [Data Service APIs](/rest/api/maps/data) to store and render overlays.
## Render pushpins with labels and a custom image > [!Note]
-> The procedure in this section requires an Azure Maps account in Gen 1 or Gen 2 pricing tier.
-
+> The procedure in this section requires an Azure Maps account in the Gen 1 or Gen 2 pricing tier.
The Azure Maps account Gen 1 Standard S0 tier supports only a single instance of the `pins` parameter. It allows you to render up to five pushpins, specified in the URL request, with a custom image.
-To render pushpins with labels and a custom image, complete these steps:
+### Get static image with custom pins and labels
+
+To get a static image with custom pins and labels:
+
+1. In the Postman app, select **New**.
+
+2. In the **Create New** window, select **HTTP Request**.
-1. Create a collection in which to store the requests. In the Postman app, select **New**. In the **Create New** window, select **Collection**. Name the collection and select the **Create** button.
+3. Enter a **Request name** for the request, such as *GET Static Image*.
-2. To create the request, select **New** again. In the **Create New** window, select **Request**. Enter a **Request name** for the pushpins. Select the collection you created in the previous step, as the location to save the request. Then, select **Save**.
-
- ![Create a request in Postman](./media/how-to-render-custom-data/postman-new.png)
+4. Select the **GET** HTTP method.
-3. Select the GET HTTP method on the builder tab and enter the following URL to create a GET request.
+5. Enter the following URL (replace `{subscription-key}` with your primary subscription key):
```HTTP https://atlas.microsoft.com/map/static/png?subscription-key={subscription-key}&api-version=1.0&layer=basic&style=main&zoom=12&center=-73.98,%2040.77&pins=custom%7Cla15+50%7Cls12%7Clc003b61%7C%7C%27CentralPark%27-73.9657974+40.781971%7C%7Chttps%3A%2F%2Fraw.githubusercontent.com%2FAzure-Samples%2FAzureMapsCodeSamples%2Fmaster%2FAzureMapsCodeSamples%2FCommon%2Fimages%2Ficons%2Fylw-pushpin.png ```
- Here's the resulting image:
+6. Select **Send**.
- ![A custom pushpin with a label](./media/how-to-render-custom-data/render-pins.png)
+7. The service returns the following image:
+ :::image type="content" source="./media/how-to-render-custom-data/render-pins.png" alt-text="A custom pushpin with a label.":::
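+
+If you prefer working from a terminal instead of Postman, the same request can be made with a command-line HTTP client. The following is a sketch only; it saves the rendered image to a local file and assumes you replace `{subscription-key}` with your primary subscription key:
+
+```bash
+# Download the static map image with the custom pushpin and label
+curl -o static-map.png "https://atlas.microsoft.com/map/static/png?subscription-key={subscription-key}&api-version=1.0&layer=basic&style=main&zoom=12&center=-73.98,%2040.77&pins=custom%7Cla15+50%7Cls12%7Clc003b61%7C%7C%27CentralPark%27-73.9657974+40.781971%7C%7Chttps%3A%2F%2Fraw.githubusercontent.com%2FAzure-Samples%2FAzureMapsCodeSamples%2Fmaster%2FAzureMapsCodeSamples%2FCommon%2Fimages%2Ficons%2Fylw-pushpin.png"
+```
+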
-## Get data from Azure Maps data storage
+## Upload pins and path data
> [!Note] > The procedure in this section requires an Azure Maps account Gen 1 (S1) or Gen 2 pricing tier.
-You can also obtain the path and pin location information by using the [Data Upload API](/rest/api/maps/data-v2/upload-preview). Follow the steps below to upload the path and pins data.
+In this section, we'll upload path and pin data to Azure Maps data storage.
+
+To upload pins and path data:
+
+1. In the Postman app, select **New**.
+
+2. In the **Create New** window, select **HTTP Request**.
+
+3. Enter a **Request name** for the request, such as *POST Path and Pin Data*.
+
+4. Select the **POST** HTTP method.
-1. In the Postman app, open a new tab in the collection you created in the previous section. Select the POST HTTP method on the builder tab and enter the following URL to make a POST request:
+5. Enter the following URL (replace `{subscription-key}` with your primary subscription key):
```HTTP https://us.atlas.microsoft.com/mapData?subscription-key={subscription-key}&api-version=2.0&dataFormat=geojson ```
-2. On the **Params** tab, enter the following key/value pairs, which are used for the POST request URL. Replace the `subscription-key` value with your Azure Maps subscription key.
+6. Select the **Body** tab.
- ![Key/value params in Postman](./media/how-to-render-custom-data/postman-key-vals.png)
+7. In the dropdown lists, select **raw** and **JSON**.
-3. On the **Body** tab, select the raw input format and choose JSON as the input format from the dropdown list. Provide this JSON as data to be uploaded:
-
+8. Copy the following JSON data, and then paste it in the **Body** window as the data to be uploaded:
+
```JSON { "type": "FeatureCollection",
You can also obtain the path and pin location information by using the [Data Upl
} ```
-4. Select **Send** and review the response header. Upon a successful request, the *Operation-Location* header will contain the `status URL` to check the current status of the upload request. The `status URL` has the following format:
+9. Select **Send**.
+
+10. In the response window, select the **Headers** tab.
+
+11. Copy the value of the **Operation-Location** key, which is the `status URL`. We'll use the `status URL` to check the status of the upload request in the next section. The `status URL` has the following format:
```HTTP
- https://us.atlas.microsoft.com/mapData/operations/{statusUrl}?api-version=2.0
+ https://us.atlas.microsoft.com/mapData/operations/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx?api-version=2.0
```
-5. Copy your status URI and append the subscription-key parameter to it with the value of your Azure Maps account subscription key. Use the same account subscription key that you used to upload the data. The status URI format should look like the one below:
+>[!TIP]
+>To obtain your own path and pin location information, use the [Data Upload API](/rest/api/maps/data-v2/upload-preview).
+
+### Check pins and path data upload status
+
+To check the status of the data upload and retrieve its unique ID (`udid`):
+
+1. In the Postman app, select **New**.
+
+2. In the **Create New** window, select **HTTP Request**.
+
+3. Enter a **Request name** for the request, such as *GET Data Upload Status*.
+
+4. Select the **GET** HTTP method.
+
+5. Enter the `status URL` you copied in [Upload pins and path data](#upload-pins-and-path-data). The request should look like the following URL (replace `{subscription-key}` with your primary subscription key):
```HTTP
- https://us.atlas.microsoft.com/mapData/operations/{statusUrl}?api-version=2.0&subscription-key={Subscription-key}
+ https://us.atlas.microsoft.com/mapData/operations/{statusUrl}?api-version=2.0&subscription-key={subscription-key}
```
-6. To get the `udid`, open a new tab in the Postman app. Select GET HTTP method on the builder tab. Make a GET request at the `status URL`. If your data upload was successful, you'll receive a `udid` in the response body. Copy the `udid`.
+6. Select **Send**.
- ```JSON
- {
- "udid" : "{udId}"
- }
- ```
+7. In the response window, select the **Headers** tab.
-7. Use the `udid` value received from the Data Upload API to render features on the map. To do so, open a new tab in the collection you created in the preceding section. Select the GET HTTP method on the builder tab, replace the {subscription-key} and {udId} with your values, and enter this URL to make a GET request:
+8. Copy the value of the **Resource-Location** key, which is the `resource location URL`. The `resource location URL` contains the unique identifier (`udid`) of the uploaded data resource.
+
+ :::image type="content" source="./media/how-to-render-custom-data/resource-location-url.png" alt-text="Copy the resource location URL.":::
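+
+If you're checking the status from a terminal instead of Postman, a sketch of the equivalent request with curl looks like the following; the `-i` flag prints the response headers, where the **Resource-Location** value appears (replace the placeholders with your own values):
+
+```bash
+# Check the upload status and print the response headers
+curl -i "https://us.atlas.microsoft.com/mapData/operations/{statusUrl}?api-version=2.0&subscription-key={subscription-key}"
+```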
+
+### Render uploaded features on the map
+
+To render the uploaded pins and path data on the map:
+
+1. In the Postman app, select **New**.
+
+2. In the **Create New** window, select **HTTP Request**.
+
+3. Enter a **Request name** for the request, such as *GET Render Uploaded Data*.
+
+4. Select the **GET** HTTP method.
+
+5. Enter the following URL to the [Render Service](/rest/api/maps/render/get-map-image) (replace `{subscription-key}` with your primary subscription key and `udid` with the `udid` of the uploaded data):
```HTTP https://atlas.microsoft.com/map/static/png?subscription-key={subscription-key}&api-version=1.0&layer=basic&style=main&zoom=12&center=-73.96682739257812%2C40.78119135317995&pins=default|la-35+50|ls12|lc003C62|co9B2F15||'Times Square'-73.98516297340393 40.758781646381024|'Central Park'-73.96682739257812 40.78119135317995&path=lc0000FF|fc0000FF|lw3|la0.80|fa0.30||udid-{udId} ```
- Here's the response image:
+6. Select **Send**. The service returns the following image:
- ![Get data from Azure Maps data storage](./media/how-to-render-custom-data/uploaded-path.png)
+ :::image type="content" source="./media/how-to-render-custom-data/uploaded-path.png" alt-text="Render uploaded data in static map image.":::
## Render a polygon with color and opacity > [!Note] > The procedure in this section requires an Azure Maps account Gen 1 (S1) or Gen 2 pricing tier. - You can modify the appearance of a polygon by using style modifiers with the [path parameter](/rest/api/maps/render/getmapimage#uri-parameters).
-1. In the Postman app, open a new tab in the collection you created earlier. Select the GET HTTP method on the builder tab and enter the following URL to configure a GET request to render a polygon with color and opacity:
-
+To render a polygon with color and opacity:
+
+1. In the Postman app, select **New**.
+
+2. In the **Create New** window, select **HTTP Request**.
+
+3. Enter a **Request name** for the request, such as *GET Polygon*.
+
+4. Select the **GET** HTTP method.
+
+5. Enter the following URL to the [Render Service](/rest/api/maps/render/get-map-image) (replace `{subscription-key}` with your primary subscription key):
+
```HTTP https://atlas.microsoft.com/map/static/png?api-version=1.0&style=main&layer=basic&sku=S1&zoom=14&height=500&Width=500&center=-74.040701, 40.698666&path=lc0000FF|fc0000FF|lw3|la0.80|fa0.50||-74.03995513916016 40.70090237454063|-74.04082417488098 40.70028420372218|-74.04113531112671 40.70049568385827|-74.04298067092896 40.69899904076542|-74.04271245002747 40.69879568992435|-74.04367804527283 40.6980961582905|-74.04364585876465 40.698055487620714|-74.04368877410889 40.698022951066996|-74.04168248176573 40.696444909137|-74.03901100158691 40.69837271818651|-74.03824925422668 40.69837271818651|-74.03809905052185 40.69903971085914|-74.03771281242369 40.699340668780984|-74.03940796852112 40.70058515602143|-74.03948307037354 40.70052821920425|-74.03995513916016 40.70090237454063 &subscription-key={subscription-key} ```
- Here's the response image:
-
- ![Render an opaque polygon](./media/how-to-render-custom-data/opaque-polygon.png)
+6. Select **Send**. The service returns the following image:
+ :::image type="content" source="./media/how-to-render-custom-data/opaque-polygon.png" alt-text="Render an opaque polygon.":::
## Render a circle and pushpins with custom labels > [!Note] > The procedure in this section requires an Azure Maps account Gen 1 (S1) or Gen 2 pricing tier. - You can modify the appearance of the pins by adding style modifiers. For example, to make pushpins and their labels larger or smaller, use the `sc` "scale style" modifier. This modifier takes a value that's greater than zero. A value of 1 is the standard scale. Values larger than 1 will make the pins larger, and values smaller than 1 will make them smaller. For more information about style modifiers, see [static image service path parameters](/rest/api/maps/render/getmapimage#uri-parameters).
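+
+For example, a hypothetical `pins` parameter that doubles the size of the default pushpin and its label might look like the following snippet (the label text is a placeholder, and the coordinates are reused from the request later in this section):
+
+```
+pins=default|sc2||'Example label'-122.13230609893799 47.64599069048016
+```
+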
+To render a circle and pushpins with custom labels:
-Follow these steps to render a circle and pushpins with custom labels:
+1. In the Postman app, select **New**.
-1. In the Postman app, open a new tab in the collection you created earlier. Select the GET HTTP method on the builder tab and enter this URL to make a GET request:
+2. In the **Create New** window, select **HTTP Request**.
+
+3. Enter a **Request name** for the request, such as *GET Circle and Pins*.
+
+4. Select the **GET** HTTP method.
+
+5. Enter the following URL to the [Render Service](/rest/api/maps/render/get-map-image) (replace `{subscription-key}` with your primary subscription key):
```HTTP https://atlas.microsoft.com/map/static/png?api-version=1.0&style=main&layer=basic&zoom=14&height=700&Width=700&center=-122.13230609893799,47.64599069048016&path=lcFF0000|lw2|la0.60|ra1000||-122.13230609893799 47.64599069048016&pins=default|la15+50|al0.66|lc003C62|co002D62||'Microsoft Corporate Headquarters'-122.14131832122801 47.64690503939462|'Microsoft Visitor Center'-122.136828 47.642224|'Microsoft Conference Center'-122.12552547454833 47.642940335653996|'Microsoft The Commons'-122.13687658309935 47.64452336193245&subscription-key={subscription-key} ```
- Here's the response image:
+6. Select **Send**.
- ![Render a circle with custom pushpins](./media/how-to-render-custom-data/circle-custom-pins.png)
+7. The service returns the following image:
-2. To change the color of the pushpins from the last step, change the "co" style modifier. Look at `pins=default|la15+50|al0.66|lc003C62|co002D62|`, the current color would be specified as #002D62 in CSS. Let's say you want to change it to #41d42a. Write the new color value after the "co" specifier, like this: `pins=default|la15+50|al0.66|lc003C62|co41D42A|`. Make a new GET request:
+ :::image type="content" source="./media/how-to-render-custom-data/circle-custom-pins.png" alt-text="Render a circle with custom pushpins.":::
+
+8. Now we'll change the color of the pushpins by modifying the `co` style modifier. If you look at the value of the `pins` parameter (`pins=default|la15+50|al0.66|lc003C62|co002D62|`), you'll see that the current color is `#002D62`. To change the color to `#41d42a`, we'll replace `#002D62` with `#41d42a`. Now the `pins` parameter is `pins=default|la15+50|al0.66|lc003C62|co41D42A|`. The request looks like the following URL:
```HTTP https://atlas.microsoft.com/map/static/png?api-version=1.0&style=main&layer=basic&zoom=14&height=700&Width=700&center=-122.13230609893799,47.64599069048016&path=lcFF0000|lw2|la0.60|ra1000||-122.13230609893799 47.64599069048016&pins=default|la15+50|al0.66|lc003C62|co41D42A||'Microsoft Corporate Headquarters'-122.14131832122801 47.64690503939462|'Microsoft Visitor Center'-122.136828 47.642224|'Microsoft Conference Center'-122.12552547454833 47.642940335653996|'Microsoft The Commons'-122.13687658309935 47.64452336193245&subscription-key={subscription-key} ```
- Here's the response image after changing the colors of the pins:
+9. Select **Send**.
+
+10. The service returns the following image:
- ![Render a circle with updated pushpins](./media/how-to-render-custom-data/circle-updated-pins.png)
+ :::image type="content" source="./media/how-to-render-custom-data/circle-updated-pins.png" alt-text="Render a circle with updated pushpins.":::
Similarly, you can change, add, and remove other style modifiers.
azure-maps Migrate From Bing Maps Web Services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/migrate-from-bing-maps-web-services.md
In Azure Maps, lines and polygons can also be added to a static map image by spe
> `&path=pathStyles||pathLocation1|pathLocation2|...`
-When it comes to path locations, Azure Maps requires the coordinates to be in `longitude latitude` format whereas Bing Maps uses `latitude,longitude` format. Also note that **there is a space, not a comma separating** longitude and latitude in Azure Maps. Azure Maps does not support encoded paths currently. Larger data sets can be uploaded as a GeoJSON fills into the Azure Maps Data Storage API as documented [here](./how-to-render-custom-data.md#get-data-from-azure-maps-data-storage).
+When it comes to path locations, Azure Maps requires the coordinates to be in `longitude latitude` format whereas Bing Maps uses `latitude,longitude` format. Also note that **there is a space, not a comma separating** longitude and latitude in Azure Maps. Azure Maps does not support encoded paths currently. Larger data sets can be uploaded as GeoJSON files into the Azure Maps Data Storage API as documented [here](./how-to-render-custom-data.md#upload-pins-and-path-data).
Path styles in Azure Maps are added with the format `optionNameValue`, with multiple styles separated by pipe (`|`) characters like this `optionName1Value1|optionName2Value2`. Note the option names and values are not separated. The following style option names can be used to style paths in Azure Maps:
azure-maps Migrate From Google Maps Web Services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/migrate-from-google-maps-web-services.md
Add lines and polygons to a static map image by specifying the `path` parameter
&path=pathStyles||pathLocation1|pathLocation2|... ```
-When it comes to path locations, Azure Maps requires the coordinates to be in "longitude latitude" format. Google Maps uses "latitude,longitude" format. A space, not a comma, separates longitude and latitude in the Azure Maps format. Azure Maps doesn't support encoded paths or addresses for points. Upload larger data sets as a GeoJSON file into the Azure Maps Data Storage API as documented [here](how-to-render-custom-data.md#get-data-from-azure-maps-data-storage).
+When it comes to path locations, Azure Maps requires the coordinates to be in "longitude latitude" format. Google Maps uses "latitude,longitude" format. A space, not a comma, separates longitude and latitude in the Azure Maps format. Azure Maps doesn't support encoded paths or addresses for points. Upload larger data sets as a GeoJSON file into the Azure Maps Data Storage API as documented [here](how-to-render-custom-data.md#upload-pins-and-path-data).
Add path styles with the `optionNameValue` format. Separate multiple styles by pipe (\|) characters, like this `optionName1Value1|optionName2Value2`. The option names and values aren't separated. Use the following style option names to style paths in Azure Maps:
azure-maps Power Bi Visual Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/power-bi-visual-getting-started.md
At this time, Azure Maps is currently available in all countries and regions exc
- China - South Korea
+- Azure Government (GCC + GCC High)
For coverage details for the different Azure Maps services that power this visual, see the [Geographic coverage information](geographic-coverage.md) document.
azure-maps Tutorial Iot Hub Maps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/tutorial-iot-hub-maps.md
Title: 'Tutorial: Implement IoT spatial analytics | Microsoft Azure Maps'
description: Tutorial on how to Integrate IoT Hub with Microsoft Azure Maps service APIs Previously updated : 09/01/2020 Last updated : 06/21/2021
Now, set up your Azure function.
1. In the Azure portal dashboard, select **Create a resource**. Type **Function App** in the search text box. Select **Function App** > **Create**.
-1. On the **Function App** creation page, name your function app. Under **Resource Group**, select **ContosoRental** from the drop-down list. Select **.NET Core** as the **Runtime Stack**. At the bottom of the page, select **Next: Hosting >**.
+1. On the **Function App** creation page, name your function app. Under **Resource Group**, select **ContosoRental** from the drop-down list. Select **.NET** as the **Runtime Stack**. Select **3.1** as the **Version**. At the bottom of the page, select **Next: Hosting >**.
:::image type="content" source="./media/tutorial-iot-hub-maps/rental-app.png" alt-text="Screenshot of create a function app.":::
azure-monitor Agents Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/agents/agents-overview.md
Last updated 01/12/2021
Virtual machines and other compute resources require an agent to collect monitoring data required to measure the performance and availability of their guest operating system and workloads. This article describes the agents used by Azure Monitor and helps you determine which you need to meet the requirements for your particular environment. > [!NOTE]
-> Azure Monitor currently has multiple agents because of recent consolidation of Azure Monitor and Log Analytics. While there may be overlap in their features, each has unique capabilities. Depending on your requirements, you may need one or more of the agents on your machines.
-
-You may have a specific set of requirements that can't be completely met with a single agent for a particular machine. For example, you may want to use metric alerts which requires Azure diagnostics extension but also want to leverage the functionality of VM insights which requires the Log Analytics agent and the Dependency agent. In cases such as this, you can use multiple agents, and this is a common scenario for customers who require functionality from each.
+> Azure Monitor recently launched a new agent, the Azure Monitor agent, that provides all capabilities necessary to collect guest operating system monitoring data. While multiple legacy agents still exist because of the consolidation of Azure Monitor and Log Analytics, each with unique capabilities and some overlap, we recommend that you use the new agent, which aims to consolidate features from all existing agents and provide additional benefits. [Learn more](./azure-monitor-agent-overview.md)
## Summary of agents The following tables provide a quick comparison of the Azure Monitor agents for Windows and Linux. Further detail on each is provided in the section below.
-> [!NOTE]
-> The Azure Monitor agent is currently in preview with limited capabilities. This table will be updated
- ### Windows agents
-| | Azure Monitor agent (preview) | Diagnostics<br>extension (WAD) | Log Analytics<br>agent | Dependency<br>agent |
+| | Azure Monitor agent | Diagnostics<br>extension (WAD) | Log Analytics<br>agent | Dependency<br>agent |
|:|:|:|:|:| | **Environments supported** | Azure<br>Other cloud (Azure Arc)<br>On-premises (Azure Arc) | Azure | Azure<br>Other cloud<br>On-premises | Azure<br>Other cloud<br>On-premises | | **Agent requirements** | None | None | None | Requires Log Analytics agent |
The following tables provide a quick comparison of the Azure Monitor agents for
| | Azure Monitor agent (preview) | Diagnostics<br>extension (LAD) | Telegraf<br>agent | Log Analytics<br>agent | Dependency<br>agent | |:|:|:|:|:|:|
-| **Environments supported** (see table below for supported operating systems) | Azure<br>Other cloud (Azure Arc)<br>On-premises (Azure Arc) | Azure | Azure<br>Other cloud<br>On-premises | Azure<br>Other cloud<br>On-premises | Azure<br>Other cloud<br>On-premises |
+| **Environments supported** | Azure<br>Other cloud (Azure Arc)<br>On-premises (Azure Arc) | Azure | Azure<br>Other cloud<br>On-premises | Azure<br>Other cloud<br>On-premises | Azure<br>Other cloud<br>On-premises |
| **Agent requirements** | None | None | None | None | Requires Log Analytics agent | | **Data collected** | Syslog<br>Performance | Syslog<br>Performance | Performance | Syslog<br>Performance| Process dependencies<br>Network connection metrics | | **Data sent to** | Azure Monitor Logs<br>Azure Monitor Metrics | Azure Storage<br>Event Hub | Azure Monitor Metrics | Azure Monitor Logs | Azure Monitor Logs<br>(through Log Analytics agent) |
The [Azure Monitor agent](azure-monitor-agent-overview.md) is meant to replace t
Use the Azure Monitor agent if you need to: - Collect guest logs and metrics from any machine in Azure, in other clouds, or on-premises. ([Azure Arc enabled servers](../../azure-arc/servers/overview.md) required for machines outside of Azure.)
+- Manage data collection configuration centrally by using [data collection rules](./data-collection-rule-overview.md), and use Azure Resource Manager (ARM) templates or policies for overall management.
- Send data to Azure Monitor Logs and Azure Monitor Metrics for analysis with Azure Monitor. -- Send data to Azure Storage for archiving.
+- Leverage Windows event filtering or multi-homing for logs on Windows and Linux
+<!-- Send data to Azure Storage for archiving.
- Send data to third-party tools using [Azure Event Hubs](./diagnostics-extension-stream-event-hubs.md).-- Manage the security of your machines using [Azure Security Center](../../security-center/security-center-introduction.md) or [Azure Sentinel](../../sentinel/overview.md). (Not available in preview.)
+- Manage the security of your machines using [Azure Security Center](../../security-center/security-center-introduction.md) or [Azure Sentinel](../../sentinel/overview.md). (Available in private preview.)
+- Use [VM insights](../vm/vminsights-overview.md), which allows you to monitor your machines at scale and monitor their processes and dependencies on other resources and external processes.
+- Manage the security of your machines using [Azure Security Center](../../security-center/security-center-introduction.md) or [Azure Sentinel](../../sentinel/overview.md).
+- Use different [solutions](../monitor-reference.md#insights-and-core-solutions) to monitor a particular service or application.
+-->
+Limitations of the Azure Monitor Agent include:
+- Cannot use the Log Analytics solutions in production (only available in preview, [see what's supported](../faq.yml#which-log-analytics-solutions-are-supported-on-the-new-azure-monitor-agent-)).
+- No support yet for networking scenarios involving private links or direct proxies (Log Analytics/OMS gateway).
+- No support yet collecting custom logs (files) or IIS log files.
+- No support yet for Event Hubs and Storage accounts as destinations.
+- No support for Hybrid Runbook workers.
-See [current feature gaps](/azure/azure-monitor/faq#is-the-new-azure-monitor-agent-at-parity-with-existing-agents) when compared to existing agents.
## Log Analytics agent
Consider the following when using the Dependency agent:
## Virtual machine extensions
-The Log Analytics extension for [Windows](../../virtual-machines/extensions/oms-windows.md) and [Linux](../../virtual-machines/extensions/oms-linux.md) install the Log Analytics agent on Azure virtual machines. The Azure Monitor Dependency extension for [Windows](../../virtual-machines/extensions/agent-dependency-windows.md) and [Linux](../../virtual-machines/extensions/agent-dependency-linux.md) install the Dependency agent on Azure virtual machines. These are the same agents described above but allow you to manage them through [virtual machine extensions](../../virtual-machines/extensions/overview.md). You should use extensions to install and manage the agents whenever possible.
+The [Azure Monitor agent](./azure-monitor-agent-install.md#virtual-machine-extension-details) is only available as a virtual machine extension. The Log Analytics extensions for [Windows](../../virtual-machines/extensions/oms-windows.md) and [Linux](../../virtual-machines/extensions/oms-linux.md) install the Log Analytics agent on Azure virtual machines. The Azure Monitor Dependency extensions for [Windows](../../virtual-machines/extensions/agent-dependency-windows.md) and [Linux](../../virtual-machines/extensions/agent-dependency-linux.md) install the Dependency agent on Azure virtual machines. These are the same agents described above but allow you to manage them through [virtual machine extensions](../../virtual-machines/extensions/overview.md). You should use extensions to install and manage the agents whenever possible.
-On hybrid machines, use [Azure Arc enabled servers](../../azure-arc/servers/manage-vm-extensions.md) to deploy the Log Analytics and Azure Monitor Dependency VM extensions.
+On hybrid machines, use [Azure Arc enabled servers](../../azure-arc/servers/manage-vm-extensions.md) to deploy the Azure Monitor agent, Log Analytics and Azure Monitor Dependency VM extensions.
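As a rough sketch of extension-based installation on an Azure VM with the Azure CLI (the extension name `AzureMonitorWindowsAgent` and publisher `Microsoft.Azure.Monitor` are assumptions; verify both against the Azure Monitor agent installation article linked above):

```bash
# Install the Azure Monitor agent VM extension on an existing Azure Windows VM.
# Extension name and publisher below are assumptions; confirm them in the install article.
az vm extension set \
  --resource-group myResourceGroup \
  --vm-name myWindowsVM \
  --name AzureMonitorWindowsAgent \
  --publisher Microsoft.Azure.Monitor
```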
## Supported operating systems
Since the Dependency agent works at the kernel level, support is also dependent
Get more details on each of the agents at the following:
+- [Overview of the Azure Monitor agent](./azure-monitor-agent-overview.md)
- [Overview of the Log Analytics agent](./log-analytics-agent.md) - [Azure Diagnostics extension overview](./diagnostics-extension-overview.md) - [Collect custom metrics for a Linux VM with the InfluxData Telegraf agent](../essentials/collect-custom-metrics-linux-telegraf.md)
azure-monitor Itsmc Connections Servicenow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/itsmc-connections-servicenow.md
ServiceNow admins must generate a client ID and client secret for their ServiceN
- [Set up OAuth for Paris](https://docs.servicenow.com/bundle/paris-platform-administration/page/administer/security/task/t_SettingUpOAuth.html) - [Set up OAuth for Orlando](https://docs.servicenow.com/bundle/orlando-platform-administration/page/administer/security/task/t_SettingUpOAuth.html) - [Set up OAuth for New York](https://docs.servicenow.com/bundle/newyork-platform-administration/page/administer/security/task/t_SettingUpOAuth.html)-- [Set up OAuth for Madrid](https://docs.servicenow.com/bundle/madrid-platform-administration/page/administer/security/task/t_SettingUpOAuth.html) - [Set up OAuth for London](https://docs.servicenow.com/bundle/london-platform-administration/page/administer/security/task/t_SettingUpOAuth.html)-- [Set up OAuth for Kingston](https://docs.servicenow.com/bundle/kingston-platform-administration/page/administer/security/task/t_SettingUpOAuth.html)-- [Set up OAuth for Jakarta](https://docs.servicenow.com/bundle/jakarta-platform-administration/page/administer/security/task/t_SettingUpOAuth.html)-- [Set up OAuth for Istanbul](https://docs.servicenow.com/bundle/istanbul-platform-administration/page/administer/security/task/t_SettingUpOAuth.html)-- [Set up OAuth for Helsinki](https://docs.servicenow.com/bundle/helsinki-platform-administration/page/administer/security/task/t_SettingUpOAuth.html)-- [Set up OAuth for Geneva](https://docs.servicenow.com/bundle/geneva-servicenow-platform/page/administer/security/task/t_SettingUpOAuth.html) As a part of setting up OAuth, we recommend:
azure-monitor Api Custom Events Metrics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/api-custom-events-metrics.md
To send a single metric value:
*JavaScript* ```javascript
-appInsights.trackMetric("queueLength", 42.0);
+appInsights.trackMetric({name: "queueLength", average: 42});
``` *C#*
azure-monitor Azure Ad Authentication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/azure-ad-authentication.md
Below is an example on how to configure Java agent to use user-assigned managed
} } ``` #### Client secret
Below is an example on how to configure Java agent to use service principal for
} } ```+ ### [Python](#tab/python)
This indicates that the SDK has been configured with credentials that haven't be
Next steps should be to review the Application Insights resource's access control. The SDK must be configured with a credential that has been granted the "Monitoring Metrics Publisher" role.
+### Language-specific troubleshooting
+ ### [ASP.NET and .NET](#tab/net) #### Event Source
appInsights.setup("InstrumentationKey=00000000-0000-0000-0000-000000000000;Inges
### [Java](#tab/java)
-#### HTTP Traffic
+#### HTTP traffic
You can inspect network traffic using a tool like Fiddler. To enable the traffic to tunnel through fiddler either add the following proxy settings in configuration file:
If using fiddler, you might see the following response header: `HTTP/1.1 401 Una
#### CredentialUnavailableException
-If the following exception is seen in the log file `com.azure.identity.CredentialUnavailableException: ManagedIdentityCredential authentication unavailable. Connection to IMDS endpoint cannot be established`, it indicates the agent wasn't successful in acquiring the access token and the probable reason might be you've provided invalid/wrong ΓÇ£clientIdΓÇ¥ in your User Assigned Managed Identity configuration
+If the following exception is seen in the log file `com.azure.identity.CredentialUnavailableException: ManagedIdentityCredential authentication unavailable. Connection to IMDS endpoint cannot be established`, it indicates the agent wasn't successful in acquiring the access token and the probable reason might be you've provided an invalid `clientId` in your User Assigned Managed Identity configuration.
#### Failed to send telemetry
If the following WARN message is seen in the log file, `WARN c.m.a.TelemetryChan
If using fiddler, you might see the following response header: `HTTP/1.1 403 Forbidden - provided credentials do not grant the access to ingest the telemetry into the component`. Root cause might be one of the following reasons:-- You've created the resource with ΓÇ£system-assigned managed identityΓÇ¥ enabled or you might have associated the ΓÇ£user-assigned identityΓÇ¥ with the resource but forgot to add the ΓÇ£Monitoring Metrics PublisherΓÇ¥ role to the resource (if using SAMI) or ΓÇ£user-assigned identityΓÇ¥ (if using UAMI).-- You've provided the right credentials to get the access tokens, but the credentials don't belong to the right Application Insights resource. Make sure you see your resource (vm, app service etc.) or user-assigned identity with ΓÇ£Monitoring Metrics PublisherΓÇ¥ roles in your Application Insights resource.
+- You've created the resource with System-assigned managed identity enabled or you might have associated the User-assigned identity with the resource but forgot to add the `Monitoring Metrics Publisher` role to the resource (if using SAMI) or User-assigned identity (if using UAMI).
+- You've provided the right credentials to get the access tokens, but the credentials don't belong to the right Application Insights resource. Make sure you see your resource (vm, app service etc.) or user-assigned identity with `Monitoring Metrics Publisher` roles in your Application Insights resource.
#### Invalid TenantId
-If the following exception is seen in the log file `com.microsoft.aad.msal4j.MsalServiceException: Specified tenant identifier ΓÇÿ' is neither a valid DNS name, nor a valid external domain.`, it indicates the agent wasn't successful in acquiring the access token and the probable reason might be you've provided invalid/wrong ΓÇ£tenantIdΓÇ¥ in your client secret configuration.
+If the following exception is seen in the log file `com.microsoft.aad.msal4j.MsalServiceException: Specified tenant identifier <TENANT-ID> is neither a valid DNS name, nor a valid external domain.`, it indicates the agent wasn't successful in acquiring the access token and the probable reason might be you've provided invalid/wrong `tenantId` in your client secret configuration.
#### Invalid client secret
-If the following exception is seen in the log file `com.microsoft.aad.msal4j.MsalServiceException: Invalid client secret is provided`, it indicates the agent wasn't successful in acquiring the access token and the probable reason might be you've provided invalid/wrong ΓÇ£clientSecretΓÇ¥ in your client secret configuration.
+If the following exception is seen in the log file `com.microsoft.aad.msal4j.MsalServiceException: Invalid client secret is provided`, it indicates the agent wasn't successful in acquiring the access token and the probable reason might be you've provided invalid `clientSecret` in your client secret configuration.
#### Invalid ClientId
-If the following exception is seen in the log file `com.microsoft.aad.msal4j.MsalServiceException: Application with identifier 'ΓÇÖ was not found in the directory 'ΓÇÖ`, it indicates the agent wasn't successful in acquiring the access token and the probable reason might be you've provided invalid/wrong ΓÇ£clientIdΓÇ¥ in your client secret configuration
+If the following exception is seen in the log file `com.microsoft.aad.msal4j.MsalServiceException: Application with identifier <CLIENT_ID> was not found in the directory`, it indicates the agent wasn't successful in acquiring the access token and the probable reason might be you've provided invalid/wrong `clientId` in your client secret configuration.
This can happen if the application has not been installed by the administrator of the tenant or consented to by any user in the tenant. You may have sent your authentication request to the wrong tenant.
You're probably missing a credential or your credential is set to `None`, but yo
Usually occurs when the provided credentials don't grant access to ingest telemetry for the Application Insights resource. Make sure your AI resource has the correct role assignments.
-## Next Steps
+## Next steps
* [Monitor your telemetry in the portal](overview-dashboard.md). * [Diagnose with Live Metrics Stream](live-stream.md).
azure-monitor Java Standalone Config https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/java-standalone-config.md
If you want to add custom dimensions to all of your telemetry:
## Telemetry processors (preview)
-This feature is in preview.
- It allows you to configure rules that will be applied to request, dependency and trace telemetry, for example: * Mask sensitive data * Conditionally add custom dimensions
The setting applies to all of these metrics:
[//]: # "}" [//]: # "```"
+## Authentication (preview)
+> [!NOTE]
+> The authentication feature is available starting from version 3.2.0-BETA.
+
+It allows you to configure the agent to generate [token credentials](https://go.microsoft.com/fwlink/?linkid=2163810) that are required for Azure Active Directory authentication.
+For more information, check out the [Authentication](./azure-ad-authentication.md) documentation.
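As a rough, non-authoritative sketch only, an `applicationinsights.json` with the preview authentication block for a user-assigned managed identity might look like the following. The exact property names (`preview`, `authentication`, `type`, `clientId`) and allowed values are assumptions and should be verified against the Authentication documentation linked above.

```json
{
  "connectionString": "InstrumentationKey=00000000-0000-0000-0000-000000000000",
  "preview": {
    "authentication": {
      "enabled": true,
      "type": "UAMI",
      "clientId": "<client-id-of-your-user-assigned-managed-identity>"
    }
  }
}
```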
+ ## Self-diagnostics "Self-diagnostics" refers to internal logging from Application Insights Java 3.x.
azure-monitor Manage Cost Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/manage-cost-storage.md
Valid values for `retentionInDays` are from 30 through 730.
The `Usage` and `AzureActivity` data types cannot be set with custom retention. They will take on the maximum of the default workspace retention or 90 days.
-A great tool to connect directly to Azure Resource Manager to set retention by data type is the OSS tool [ARMclient](https://github.com/projectkudu/ARMClient). Learn more about ARMclient from articles by [David Ebbo](http://blog.davidebbo.com/2015/01/azure-resource-manager-client.html) and [Daniel Bowbyes](https://blog.bowbyes.co.nz/2016/11/02/using-armclient-to-directly-access-azure-arm-rest-apis-and-list-arm-policy-details/). Here's an example using ARMClient, setting SecurityEvent data to a 730-day retention:
+A great tool to connect directly to Azure Resource Manager to set retention by data type is the OSS tool [ARMclient](https://github.com/projectkudu/ARMClient). Learn more about ARMclient from articles by [David Ebbo](http://blog.davidebbo.com/2015/01/azure-resource-manager-client.html) and Daniel Bowbyes. Here's an example using ARMClient, setting SecurityEvent data to a 730-day retention:
``` armclient PUT /subscriptions/00000000-0000-0000-0000-00000000000/resourceGroups/MyResourceGroupName/providers/Microsoft.OperationalInsights/workspaces/MyWorkspaceName/Tables/SecurityEvent?api-version=2017-04-26-preview "{properties: {retentionInDays: 730}}"
azure-netapp-files Azure Netapp Files Solution Architectures https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azure-netapp-files-solution-architectures.md
na ms.devlang: na Previously updated : 05/27/2021 Last updated : 06/22/2021 # Solution architectures using Azure NetApp Files
This section provides references for High Performance Computing (HPC) solutions.
* [Azure NetApp Files: A shared file system to use with SAS Grid on Microsoft Azure](https://communities.sas.com/t5/Administration-and-Deployment/Azure-NetApp-Files-A-shared-file-system-to-use-with-SAS-Grid-on/m-p/705192) * [Azure NetApp Files: A shared file system to use with SAS Grid on MS Azure – RHEL8.3/nconnect UPDATE](https://communities.sas.com/t5/Administration-and-Deployment/Azure-NetApp-Files-A-shared-file-system-to-use-with-SAS-Grid-on/m-p/722261#M21648) * [Best Practices for Using Microsoft Azure with SAS®](https://communities.sas.com/t5/Administration-and-Deployment/Best-Practices-for-Using-Microsoft-Azure-with-SAS/m-p/676833#M19680)
+* [SAS on Azure architecture guide - Azure Architecture Center | Azure NetApp Files](/azure/architecture/guide/sas/sas-overview#azure-netapp-files-nfs)
## Azure platform services solutions
azure-portal Azure Portal Safelist Urls https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-portal/azure-portal-safelist-urls.md
Title: Safelist the Azure portal URLs on your firewall or proxy server
-description: Add these URLs to proxy server bypass to communicate with the Azure portal and its services
Previously updated : 04/10/2020
+ Title: Allow the Azure portal URLs on your firewall or proxy server
+description: To optimize connectivity between your network and the Azure portal and its services, we recommend you add these URLs to your allowlist.
Last updated : 06/21/2021
-# Safelist the Azure portal URLs on your firewall or proxy server
+# Allow the Azure portal URLs on your firewall or proxy server
-You can configure on-premises security devices to bypass security restrictions for the Azure portal URLs. This configuration can improve performance and connectivity between your local- or wide-area network and the Azure cloud.
+To optimize connectivity between your network and the Azure portal and its services, we recommend you add specific Azure portal URLs to your allowlist. Doing so can improve performance and connectivity between your local- or wide-area network and the Azure cloud.
-Network administrators often deploy proxy servers, firewalls, or other devices. These devices help secure and give control over how users access the internet. Rules designed to protect users can sometimes block or slow down legitimate business-related internet traffic. This traffic includes communications between you and Azure. To optimize connectivity between your network and the Azure portal and its services, we recommend you add Azure portal URLs to your safelist.
+Network administrators often deploy proxy servers, firewalls, or other devices, which can help secure and give control over how users access the internet. Rules designed to protect users can sometimes block or slow down legitimate business-related internet traffic. This traffic includes communications between you and Azure over the URLs listed here.
+
+> [!TIP]
+> For help diagnosing issues with network connections to these domains, check https://portal.azure.com/selfhelp.
## Azure portal URLs for proxy bypass
-The URL endpoints to safelist for the Azure portal are specific to the Azure cloud where your organization is deployed. To allow network traffic to these endpoints to bypass restrictions, select your cloud. Then add the list of URLs to your proxy server or firewall.
+The URL endpoints to allow for the Azure portal are specific to the Azure cloud where your organization is deployed. To allow network traffic to these endpoints to bypass restrictions, select your cloud, then add the list of URLs to your proxy server or firewall. We do not recommend adding any additional portal-related URLs aside from those listed here, although you may want to add URLs related to other Microsoft products and services.
#### [Public Cloud](#tab/public-cloud)
The URL endpoints to safelist for the Azure portal are specific to the Azure clo
*.applicationinsights.io *.azure.com *.azure.net
-*.azurefd.net
*.azure-api.net *.azuredatalakestore.net *.azureedge.net
The URL endpoints to safelist for the Azure portal are specific to the Azure clo
> [!NOTE] > Traffic to these endpoints uses standard TCP ports for HTTP (80) and HTTPS (443).
->
->
azure-portal How To Create Azure Support Request https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-portal/supportability/how-to-create-azure-support-request.md
Title: How to create an Azure support request
description: Customers who need assistance can use the Azure portal to find self-service solutions and to create and manage support requests. ms.assetid: fd6841ea-c1d5-4bb7-86bd-0c708d193b89 + Last updated 05/25/2021
azure-resource-manager Bicep Functions Resource https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/bicep-functions-resource.md
The possible uses of list* are shown in the following table.
| Microsoft.BotService/botServices/channels | [listChannelWithKeys](https://github.com/Azure/azure-rest-api-specs/blob/master/specification/botservice/resource-manager/Microsoft.BotService/stable/2020-06-02/botservice.json#L553) | | Microsoft.Cache/redis | [listKeys](/rest/api/redis/redis/listkeys) | | Microsoft.CognitiveServices/accounts | [listKeys](/rest/api/cognitiveservices/accountmanagement/accounts/listkeys) |
-| Microsoft.ContainerRegistry/registries | [listBuildSourceUploadUrl](/rest/api/containerregistry/registries%20(tasks)/getbuildsourceuploadurl) |
+| Microsoft.ContainerRegistry/registries | [listBuildSourceUploadUrl](/rest/api/containerregistry/registries%20(tasks)/get-build-source-upload-url) |
| Microsoft.ContainerRegistry/registries | [listCredentials](/rest/api/containerregistry/registries/listcredentials) | | Microsoft.ContainerRegistry/registries | [listUsages](/rest/api/containerregistry/registries/listusages) | | Microsoft.ContainerRegistry/registries/agentpools | listQueueStatus |
The possible uses of list* are shown in the following table.
| Microsoft.DevTestLab/labs/schedules | [ListApplicable](/rest/api/dtl/schedules/listapplicable) | | Microsoft.DevTestLab/labs/users/serviceFabrics | [ListApplicableSchedules](/rest/api/dtl/servicefabrics/listapplicableschedules) | | Microsoft.DevTestLab/labs/virtualMachines | [ListApplicableSchedules](/rest/api/dtl/virtualmachines/listapplicableschedules) |
-| Microsoft.DocumentDB/databaseAccounts | [listConnectionStrings](/rest/api/cosmos-db-resource-provider/2021-03-01-preview/databaseaccounts/listconnectionstrings) |
-| Microsoft.DocumentDB/databaseAccounts | [listKeys](/rest/api/cosmos-db-resource-provider/2021-03-01-preview/databaseaccounts/listkeys) |
-| Microsoft.DocumentDB/databaseAccounts/notebookWorkspaces | [listConnectionInfo](/rest/api/cosmos-db-resource-provider/2021-03-15/notebookworkspaces/listconnectioninfo) |
+| Microsoft.DocumentDB/databaseAccounts | [listConnectionStrings](/rest/api/cosmos-db-resource-provider/2021-04-15/database-accounts/list-connection-strings) |
+| Microsoft.DocumentDB/databaseAccounts | [listKeys](/rest/api/cosmos-db-resource-provider/2021-04-15/database-accounts/list-keys) |
+| Microsoft.DocumentDB/databaseAccounts/notebookWorkspaces | [listConnectionInfo](/rest/api/cosmos-db-resource-provider/2021-04-15/notebook-workspaces/list-connection-info) |
| Microsoft.DomainRegistration | [listDomainRecommendations](/rest/api/appservice/domains/listrecommendations) | | Microsoft.DomainRegistration/topLevelDomains | [listAgreements](/rest/api/appservice/topleveldomains/listagreements) | | Microsoft.EventGrid/domains | [listKeys](/rest/api/eventgrid/version2020-06-01/domains/listsharedaccesskeys) |
The possible uses of list* are shown in the following table.
| Microsoft.Logic/workflows/versions/triggers | [listCallbackUrl](/rest/api/logic/workflowversions/listcallbackurl) | | Microsoft.MachineLearning/webServices | [listkeys](/rest/api/machinelearning/webservices/listkeys) | | Microsoft.MachineLearning/Workspaces | listworkspacekeys |
-| Microsoft.MachineLearningServices/workspaces/computes | [listKeys](/rest/api/azureml/workspacesandcomputes/machinelearningcompute/listkeys) |
-| Microsoft.MachineLearningServices/workspaces/computes | [listNodes](/rest/api/azureml/workspacesandcomputes/machinelearningcompute/listnodes) |
-| Microsoft.MachineLearningServices/workspaces | [listKeys](/rest/api/azureml/workspacesandcomputes/workspaces/listkeys) |
+| Microsoft.MachineLearningServices/workspaces/computes | [listKeys](/rest/api/azureml/compute/list-keys) |
+| Microsoft.MachineLearningServices/workspaces/computes | [listNodes](/rest/api/azureml/compute/list-nodes) |
+| Microsoft.MachineLearningServices/workspaces | [listKeys](/rest/api/azureml/workspaces/list-keys) |
| Microsoft.Maps/accounts | [listKeys](/rest/api/maps-management/accounts/listkeys) | | Microsoft.Media/mediaservices/assets | [listContainerSas](/rest/api/media/assets/listcontainersas) | | Microsoft.Media/mediaservices/assets | [listStreamingLocators](/rest/api/media/assets/liststreaminglocators) |
The possible uses of list* are shown in the following table.
| Microsoft.Relay/namespaces/WcfRelays/authorizationRules | [listkeys](/rest/api/relay/wcfrelays/listkeys) | | Microsoft.Search/searchServices | [listAdminKeys](/rest/api/searchmanagement/adminkeys/get) | | Microsoft.Search/searchServices | [listQueryKeys](/rest/api/searchmanagement/querykeys/listbysearchservice) |
-| Microsoft.ServiceBus/namespaces/authorizationRules | [listkeys](/rest/api/servicebus/stable/namespaces%20-%20authorization%20rules/listkeys) |
+| Microsoft.ServiceBus/namespaces/authorizationRules | [listkeys](/rest/api/servicebus/stable/namespaces-authorization-rules/list-keys) |
| Microsoft.ServiceBus/namespaces/disasterRecoveryConfigs/authorizationRules | [listkeys](/rest/api/servicebus/stable/disasterrecoveryconfigs/listkeys) |
-| Microsoft.ServiceBus/namespaces/queues/authorizationRules | [listkeys](/rest/api/servicebus/stable/queues%20-%20authorization%20rules/listkeys) |
-| Microsoft.ServiceBus/namespaces/topics/authorizationRules | [listkeys](/rest/api/servicebus/stable/topics%20ΓÇô%20authorization%20rules/listkeys) |
+| Microsoft.ServiceBus/namespaces/queues/authorizationRules | [listkeys](/rest/api/servicebus/stable/queues-authorization-rules/list-keys) |
+| Microsoft.ServiceBus/namespaces/topics/authorizationRules | [listkeys](/rest/api/servicebus/stable/topics%20%E2%80%93%20authorization%20rules/list-keys) |
| Microsoft.SignalRService/SignalR | [listkeys](/rest/api/signalr/signalr/listkeys) | | Microsoft.Storage/storageAccounts | [listAccountSas](/rest/api/storagerp/storageaccounts/listaccountsas) | | Microsoft.Storage/storageAccounts | [listkeys](/rest/api/storagerp/storageaccounts/listkeys) |
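To show how one of these operations is typically consumed, here is a minimal Bicep sketch using the Storage `listKeys` operation from the table; the storage account name, API version, and the choice to surface the key in an output are illustrative only.

```bicep
param storageAccountName string

// Reference an existing storage account so its resource ID can be passed to listKeys.
resource stg 'Microsoft.Storage/storageAccounts@2021-04-01' existing = {
  name: storageAccountName
}

// Call the listKeys operation listed in the table above.
// Returning a key through an output is for illustration only; avoid exposing secrets in real templates.
output storageKey string = listKeys(stg.id, '2021-04-01').keys[0].value
```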
azure-resource-manager Tag Resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/tag-resources.md
The following limitations apply to tags:
> [!NOTE] > * Azure DNS zones and Traffic Manager doesn't support the use of spaces in the tag or a tag that starts with a number. >
- > * Azure Front Door doesn't support the use of `#` in the tag name.
+ > * Azure Front Door doesn't support the use of `#` or `:` in the tag name.
> > * The following Azure resources only support 15 tags: > * Azure Automation
azure-resource-manager Syntax https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/syntax.md
Title: Template structure and syntax description: Describes the structure and properties of Azure Resource Manager templates (ARM templates) using declarative JSON syntax. Previously updated : 05/17/2021 Last updated : 06/22/2021 # Understand the structure and syntax of ARM templates
For inline comments, you can use either `//` or `/* ... */`.
> [!NOTE] >
-> To deploy templates with comments, use Azure PowerShell or Azure CLI. For CLI, use version 2.3.0 or later, and specify the `--handle-extended-json-format` switch.
->
-> Comments aren't supported when you deploy the template through the Azure portal, a DevOps pipeline, or the REST API.
+> When using Azure CLI to deploy templates with comments, use version 2.3.0 or later, and specify the `--handle-extended-json-format` switch.
```json {
azure-resource-manager Template Functions Resource https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/template-functions-resource.md
The possible uses of list* are shown in the following table.
| Microsoft.DevTestLab/labs/schedules | [ListApplicable](/rest/api/dtl/schedules/listapplicable) | | Microsoft.DevTestLab/labs/users/serviceFabrics | [ListApplicableSchedules](/rest/api/dtl/servicefabrics/listapplicableschedules) | | Microsoft.DevTestLab/labs/virtualMachines | [ListApplicableSchedules](/rest/api/dtl/virtualmachines/listapplicableschedules) |
-| Microsoft.DocumentDB/databaseAccounts | [listConnectionStrings](/rest/api/cosmos-db-resource-provider/2021-03-01-preview/databaseaccounts/listconnectionstrings) |
-| Microsoft.DocumentDB/databaseAccounts | [listKeys](/rest/api/cosmos-db-resource-provider/2021-03-01-preview/databaseaccounts/listkeys) |
-| Microsoft.DocumentDB/databaseAccounts/notebookWorkspaces | [listConnectionInfo](/rest/api/cosmos-db-resource-provider/2021-03-15/notebookworkspaces/listconnectioninfo) |
+| Microsoft.DocumentDB/databaseAccounts | [listConnectionStrings](/rest/api/cosmos-db-resource-provider/2021-04-15/database-accounts/list-connection-strings) |
+| Microsoft.DocumentDB/databaseAccounts | [listKeys](/rest/api/cosmos-db-resource-provider/2021-04-15/database-accounts/list-keys) |
+| Microsoft.DocumentDB/databaseAccounts/notebookWorkspaces | [listConnectionInfo](/rest/api/cosmos-db-resource-provider/2021-04-15/notebook-workspaces/list-connection-info) |
| Microsoft.DomainRegistration | [listDomainRecommendations](/rest/api/appservice/domains/listrecommendations) | | Microsoft.DomainRegistration/topLevelDomains | [listAgreements](/rest/api/appservice/topleveldomains/listagreements) | | Microsoft.EventGrid/domains | [listKeys](/rest/api/eventgrid/version2020-06-01/domains/listsharedaccesskeys) |
The possible uses of list* are shown in the following table.
| Microsoft.Relay/namespaces/WcfRelays/authorizationRules | [listkeys](/rest/api/relay/wcfrelays/listkeys) | | Microsoft.Search/searchServices | [listAdminKeys](/rest/api/searchmanagement/adminkeys/get) | | Microsoft.Search/searchServices | [listQueryKeys](/rest/api/searchmanagement/querykeys/listbysearchservice) |
-| Microsoft.ServiceBus/namespaces/authorizationRules | [listkeys](/rest/api/servicebus/stable/namespaces%20-%20authorization%20rules/listkeys) |
+| Microsoft.ServiceBus/namespaces/authorizationRules | [listkeys](/rest/api/servicebus/stable/namespaces-authorization-rules/list-keys) |
| Microsoft.ServiceBus/namespaces/disasterRecoveryConfigs/authorizationRules | [listkeys](/rest/api/servicebus/stable/disasterrecoveryconfigs/listkeys) |
-| Microsoft.ServiceBus/namespaces/queues/authorizationRules | [listkeys](/rest/api/servicebus/stable/queues%20-%20authorization%20rules/listkeys) |
-| Microsoft.ServiceBus/namespaces/topics/authorizationRules | [listkeys](/rest/api/servicebus/stable/topics%20ΓÇô%20authorization%20rules/listkeys) |
+| Microsoft.ServiceBus/namespaces/queues/authorizationRules | [listkeys](/rest/api/servicebus/stable/queues-authorization-rules/list-keys) |
+| Microsoft.ServiceBus/namespaces/topics/authorizationRules | [listkeys](/rest/api/servicebus/stable/topics%20%E2%80%93%20authorization%20rules/list-keys) |
| Microsoft.SignalRService/SignalR | [listkeys](/rest/api/signalr/signalr/listkeys) | | Microsoft.Storage/storageAccounts | [listAccountSas](/rest/api/storagerp/storageaccounts/listaccountsas) | | Microsoft.Storage/storageAccounts | [listkeys](/rest/api/storagerp/storageaccounts/listkeys) |
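For the JSON template syntax, a minimal sketch that consumes the same Storage `listKeys` operation from the table might look like this; the account name, API version, and the output are illustrative only.

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "storageAccountName": { "type": "string" }
  },
  "resources": [],
  "outputs": {
    "storageKey": {
      "type": "string",
      "value": "[listKeys(resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName')), '2021-04-01').keys[0].value]"
    }
  }
}
```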
azure-signalr Signalr Quickstart Azure Functions Csharp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-signalr/signalr-quickstart-azure-functions-csharp.md
Title: Azure SignalR Service serverless quickstart - C#
-description: A quickstart for using Azure SignalR Service and Azure Functions to create a chat room using C#.
+description: A quickstart for using Azure SignalR Service and Azure Functions to create an App showing GitHub star count using C#.
ms.devlang: dotnet Previously updated : 09/25/2020 Last updated : 06/09/2021
-# Quickstart: Create a chat room with Azure Functions and SignalR Service using C\#
+# Quickstart: Create an App showing GitHub star count with Azure Functions and SignalR Service using C\#
-Azure SignalR Service lets you easily add real-time functionality to your application. Azure Functions is a serverless platform that lets you run your code without managing any infrastructure. In this quickstart, learn how to use SignalR Service and Functions to build a serverless, real-time chat application.
+Azure SignalR Service lets you easily add real-time functionality to your application. Azure Functions is a serverless platform that lets you run your code without managing any infrastructure. In this quickstart, learn how to use SignalR Service and Azure Functions to build a serverless application with C# to broadcast messages to clients.
+
+> [!NOTE]
+> You can get all the code mentioned in this article from [GitHub](https://github.com/aspnet/AzureSignalR-samples/tree/main/samples/QuickStartServerless/csharp)
## Prerequisites
-If you don't already have Visual Studio 2019 installed, you can download and use the **free** [Visual Studio 2019 Community Edition](https://www.visualstudio.com/downloads). Make sure that you enable **Azure development** during the Visual Studio setup.
+If you don't already have Visual Studio Code installed, you can [download and use it for free](https://code.visualstudio.com/Download).
-You may also run this tutorial on the command line (macOS, Windows, or Linux) using the [Azure Functions Core Tools (v2)](https://github.com/Azure/azure-functions-core-tools#installing), the [.NET Core SDK](https://dotnet.microsoft.com/download), and your favorite code editor.
+You may also run this tutorial on the command line (macOS, Windows, or Linux) using the [Azure Functions Core Tools](../azure-functions/functions-run-local.md?tabs=windows%2Ccsharp%2Cbash#v2), the [.NET Core SDK](https://dotnet.microsoft.com/download), and your favorite code editor.
If you don't have an Azure subscription, [create one for free](https://azure.microsoft.com/free/dotnet) before you begin. Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide.md) or [let us know](https://aka.ms/asrs/qscsharp).
-## Log in to Azure
+## Log in to Azure and create SignalR Service instance
Sign in to the Azure portal at <https://portal.azure.com/> with your Azure account.
Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide.
Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide.md) or [let us know](https://aka.ms/asrs/qscsharp). -
-Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide.md) or [let us know](https://aka.ms/asrs/qscsharp).
-
-## Configure and run the Azure Function app
-
-1. Start Visual Studio (or another code editor) and open the solution in the *src/chat/csharp* folder of the cloned repository.
-
-1. In the browser where the Azure portal is opened, confirm the SignalR Service instance you deployed earlier was successfully created by searching for its name in the search box at the top of the portal. Select the instance to open it.
-
- ![Search for the SignalR Service instance](media/signalr-quickstart-azure-functions-csharp/signalr-quickstart-search-instance.png)
-
-1. Select **Keys** to view the connection strings for the SignalR Service instance.
-
-1. Select and copy the primary connection string.
-
-1. Back in the Visual Studio - **Solution Explorer**, rename *local.settings.sample.json* to *local.settings.json*.
-
-1. In *local.settings.json*, paste the connection string into the value of the **AzureSignalRConnectionString** setting. Save the file.
-
-1. Open *Functions.cs*. There are two HTTP triggered functions in this function app:
-
- - **GetSignalRInfo** - Uses the `SignalRConnectionInfo` input binding to generate and return valid connection information.
- - **SendMessage** - Receives a chat message in the request body and uses the *SignalR* output binding to broadcast the message to all connected client applications.
-
-1. Use one of the following options to start the Azure Function app locally.
-
- - **Visual Studio**: In the *Debug* menu, select *Start debugging* to run the application.
-
- ![Debug the application](media/signalr-quickstart-azure-functions-csharp/signalr-quickstart-debug-vs.png)
-
- - **Command line**: Execute the following command to start the function host.
-
+## Set up and run the Azure Function locally
+
+1. Make sure you have Azure Functions Core Tools installed. Then create an empty directory and navigate to it from the command line.
+
+ ```bash
+ # Initialize a function project
+ func init --worker-runtime dotnet
+
+ # Add SignalR Service package reference to the project
+ dotnet add package Microsoft.Azure.WebJobs.Extensions.SignalRService
+ ```
+
+2. After you initialize the project, create a new file named *Function.cs* and add the following code to it:
+
+ ```csharp
+ using System;
+ using System.IO;
+ using System.Net.Http;
+ using System.Threading.Tasks;
+ using Microsoft.AspNetCore.Http;
+ using Microsoft.AspNetCore.Mvc;
+ using Microsoft.Azure.WebJobs;
+ using Microsoft.Azure.WebJobs.Extensions.Http;
+ using Microsoft.Azure.WebJobs.Extensions.SignalRService;
+ using Newtonsoft.Json;
+
+ namespace CSharp
+ {
+ public static class Function
+ {
+ private static HttpClient httpClient = new HttpClient();
+
+ [FunctionName("index")]
+ public static IActionResult Index([HttpTrigger(AuthorizationLevel.Anonymous)]HttpRequest req, ExecutionContext context)
+ {
+ var path = Path.Combine(context.FunctionAppDirectory, "content", "index.html");
+ return new ContentResult
+ {
+ Content = File.ReadAllText(path),
+ ContentType = "text/html",
+ };
+ }
+
+ [FunctionName("negotiate")]
+ public static SignalRConnectionInfo Negotiate(
+ [HttpTrigger(AuthorizationLevel.Anonymous)] HttpRequest req,
+ [SignalRConnectionInfo(HubName = "serverlessSample")] SignalRConnectionInfo connectionInfo)
+ {
+ return connectionInfo;
+ }
+
+ [FunctionName("broadcast")]
+ public static async Task Broadcast([TimerTrigger("*/5 * * * * *")] TimerInfo myTimer,
+ [SignalR(HubName = "serverlessSample")] IAsyncCollector<SignalRMessage> signalRMessages)
+ {
+ var request = new HttpRequestMessage(HttpMethod.Get, "https://api.github.com/repos/azure/azure-signalr");
+ request.Headers.UserAgent.ParseAdd("Serverless");
+ var response = await httpClient.SendAsync(request);
+ var result = JsonConvert.DeserializeObject<GitResult>(await response.Content.ReadAsStringAsync());
+ await signalRMessages.AddAsync(
+ new SignalRMessage
+ {
+ Target = "newMessage",
+ Arguments = new[] { $"Current star count of https://github.com/Azure/azure-signalr is: {result.StartCount}" }
+ });
+ }
+
+ private class GitResult
+ {
+ [JsonRequired]
+ [JsonProperty("stargazers_count")]
+ public string StartCount { get; set; }
+ }
+ }
+ }
+ ```
+ This code has three functions. The `Index` function returns the web page used as the client. The `Negotiate` function is used by clients to get an access token. The `Broadcast` function periodically gets the star count from GitHub and broadcasts messages to all clients.
+
+3. The client interface of this sample is a web page. Because the `Index` function reads HTML content from `content/index.html`, create a new file `index.html` in a `content` directory, and copy in the following content:
+ ```html
+ <html>
+
+ <body>
+ <h1>Azure SignalR Serverless Sample</h1>
+ <div id="messages"></div>
+ <script src="https://cdnjs.cloudflare.com/ajax/libs/microsoft-signalr/3.1.7/signalr.min.js"></script>
+ <script>
+ let messages = document.querySelector('#messages');
+ const apiBaseUrl = window.location.origin;
+ const connection = new signalR.HubConnectionBuilder()
+ .withUrl(apiBaseUrl + '/api')
+ .configureLogging(signalR.LogLevel.Information)
+ .build();
+ connection.on('newMessage', (message) => {
+ document.getElementById("messages").innerHTML = message;
+ });
+
+ connection.start()
+ .catch(console.error);
+ </script>
+ </body>
+
+ </html>
+ ```
+
+4. It's almost done now. The last step is to set the SignalR Service connection string in the Azure Function app settings.
+
+ 1. In the browser where the Azure portal is opened, confirm the SignalR Service instance you deployed earlier was successfully created by searching for its name in the search box at the top of the portal. Select the instance to open it.
+
+ ![Search for the SignalR Service instance](media/signalr-quickstart-azure-functions-csharp/signalr-quickstart-search-instance.png)
+
+ 1. Select **Keys** to view the connection strings for the SignalR Service instance.
+
+ ![Screenshot that highlights the primary connection string.](media/signalr-quickstart-azure-functions-javascript/signalr-quickstart-keys.png)
+
+ 1. Copy the primary connection string, and execute the command below.
+
```bash
- func start
+ func settings add AzureSignalRConnectionString '<signalr-connection-string>'
```
-Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide.md) or [let us know](https://aka.ms/asrs/qscsharp).
+
+5. Run the Azure Function locally:
+ ```bash
+ func start
+ ```
-Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide.md) or [let us know](https://aka.ms/asrs/qscsharp).
+ After the Azure Function is running locally, use your browser to visit `http://localhost:7071/api/index` and you can see the current star count. If you star or unstar the repository on GitHub, you will see the star count refresh every few seconds.
+ > [!NOTE]
+ > The SignalR binding needs Azure Storage, but you can use the local storage emulator when the Function is running locally.
+ > If you get an error like `There was an error performing a read operation on the Blob Storage Secret Repository. Please ensure the 'AzureWebJobsStorage' connection string is valid.`, you need to download and enable the [Storage Emulator](../storage/common/storage-use-emulator.md).
Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide.md) or [let us know](https://aka.ms/asrs/qscsharp) +
+Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide.md) or [let us know](https://aka.ms/asrs/qscsharp).
+ ## Next steps
-In this quickstart, you built and ran a real-time serverless application in Visual Studio. Next, learn more about how to develop and deploy Azure Functions with Visual Studio.
+In this quickstart, you built and ran a real-time serverless application locally. Learn more about how to use SignalR Service bindings for Azure Functions.
+Next, learn more about bi-directional communication between clients and Azure Functions with SignalR Service.
+
+> [!div class="nextstepaction"]
+> [SignalR Service bindings for Azure Functions](../azure-functions/functions-bindings-signalr-service.md)
+
+> [!div class="nextstepaction"]
+> [Bi-directional communication in serverless](https://github.com/aspnet/AzureSignalR-samples/tree/main/samples/BidirectionChat)
> [!div class="nextstepaction"] > [Develop Azure Functions using Visual Studio](../azure-functions/functions-develop-vs.md)
azure-signalr Signalr Quickstart Azure Functions Java https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-signalr/signalr-quickstart-azure-functions-java.md
Title: Use Java to create a chat room with Azure Functions and SignalR Service
-description: A quickstart for using Azure SignalR Service and Azure Functions to create a chat room using Java.
+description: A quickstart for using Azure SignalR Service and Azure Functions to create an App showing GitHub star count using Java.
Previously updated : 03/04/2019 Last updated : 06/09/2021 ms.devlang: java
- mode-api
-# Quickstart: Use Java to create a chat room with Azure Functions and SignalR Service
+# Quickstart: Use Java to create an App showing GitHub star count with Azure Functions and SignalR Service
-Azure SignalR Service lets you easily add real-time functionality to your application and Azure Functions is a serverless platform that lets you run your code without managing any infrastructure. In this quickstart, you use Java to build a serverless, real-time chat application using SignalR Service and Functions.
+Azure SignalR Service lets you easily add real-time functionality to your application and Azure Functions is a serverless platform that lets you run your code without managing any infrastructure. In this quickstart, learn how to use SignalR Service and Azure Functions to build a serverless application with Java to broadcast messages to clients.
+
+> [!NOTE]
+> You can get all the code mentioned in this article from [GitHub](https://github.com/aspnet/AzureSignalR-samples/tree/main/samples/QuickStartServerless/java)
## Prerequisites
Azure SignalR Service lets you easily add real-time functionality to your applic
> [!NOTE] > To install extensions, Azure Functions Core Tools requires the [.NET Core SDK](https://dotnet.microsoft.com/download) installed. However, no knowledge of .NET is required to build JavaScript Azure Function apps. -- [Java Developer Kit](https://www.azul.com/downloads/zulu/), version 8
+- [Java Developer Kit](https://www.azul.com/downloads/zulu/), version 11
- [Apache Maven](https://maven.apache.org), version 3.0 or above > [!NOTE]
Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide.
Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide.md) or [let us know](https://aka.ms/asrs/qsjava). -
-Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide.md) or [let us know](https://aka.ms/asrs/qsjava).
## Configure and run the Azure Function app
-1. In the browser where the Azure portal is opened, confirm the SignalR Service instance you deployed earlier was successfully created by searching for its name in the search box at the top of the portal. Select the instance to open it.
+1. Make sure you have Azure Functions Core Tools, Java (version 11 in the sample), and Maven installed.
+
+ ```bash
+ mvn archetype:generate -DarchetypeGroupId=com.microsoft.azure -DarchetypeArtifactId=azure-functions-archetype -DjavaVersion=11
+ ```
- ![Search for the SignalR Service instance](media/signalr-quickstart-azure-functions-csharp/signalr-quickstart-search-instance.png)
+ Maven asks you for values needed to finish generating the project. You can provide the following values.
+
+ | Prompt | Value | Description |
+ | | -- | -- |
+ | **groupId** | `com.signalr` | A value that uniquely identifies your project across all projects, following the [package naming rules](https://docs.oracle.com/javase/specs/jls/se6/html/packages.html#7.7) for Java. |
+ | **artifactId** | `java` | A value that is the name of the jar, without a version number. |
+ | **version** | `1.0-SNAPSHOT` | Choose the default value. |
+ | **package** | `com.signalr` | A value that is the Java package for the generated function code. Use the default. |
+
+2. After you initialize the project, go to the folder `src/main/java/com/signalr` and copy the following code into `Function.java`:
+
+ ```java
+ package com.signalr;
+
+ import com.google.gson.Gson;
+ import com.microsoft.azure.functions.ExecutionContext;
+ import com.microsoft.azure.functions.HttpMethod;
+ import com.microsoft.azure.functions.HttpRequestMessage;
+ import com.microsoft.azure.functions.HttpResponseMessage;
+ import com.microsoft.azure.functions.HttpStatus;
+ import com.microsoft.azure.functions.annotation.AuthorizationLevel;
+ import com.microsoft.azure.functions.annotation.FunctionName;
+ import com.microsoft.azure.functions.annotation.HttpTrigger;
+ import com.microsoft.azure.functions.annotation.TimerTrigger;
+ import com.microsoft.azure.functions.signalr.*;
+ import com.microsoft.azure.functions.signalr.annotation.*;
+
+ import org.apache.commons.io.IOUtils;
+
+
+ import java.io.IOException;
+ import java.io.InputStream;
+ import java.net.URI;
+ import java.net.http.HttpClient;
+ import java.net.http.HttpRequest;
+ import java.net.http.HttpResponse;
+ import java.net.http.HttpResponse.BodyHandlers;
+ import java.nio.charset.StandardCharsets;
+ import java.util.Optional;
+
+ public class Function {
+ @FunctionName("index")
+ public HttpResponseMessage run(
+ @HttpTrigger(
+ name = "req",
+ methods = {HttpMethod.GET},
+ authLevel = AuthorizationLevel.ANONYMOUS)HttpRequestMessage<Optional<String>> request,
+ final ExecutionContext context) throws IOException {
+
+ InputStream inputStream = getClass().getClassLoader().getResourceAsStream("content/index.html");
+ String text = IOUtils.toString(inputStream, StandardCharsets.UTF_8.name());
+ return request.createResponseBuilder(HttpStatus.OK).header("Content-Type", "text/html").body(text).build();
+ }
+
+ @FunctionName("negotiate")
+ public SignalRConnectionInfo negotiate(
+ @HttpTrigger(
+ name = "req",
+ methods = { HttpMethod.POST },
+ authLevel = AuthorizationLevel.ANONYMOUS) HttpRequestMessage<Optional<String>> req,
+ @SignalRConnectionInfoInput(
+ name = "connectionInfo",
+ hubName = "serverless") SignalRConnectionInfo connectionInfo) {
+
+ return connectionInfo;
+ }
+
+ @FunctionName("broadcast")
+ @SignalROutput(name = "$return", hubName = "serverless")
+ public SignalRMessage broadcast(
+ @TimerTrigger(name = "timeTrigger", schedule = "*/5 * * * * *") String timerInfo) throws IOException, InterruptedException {
+
+ HttpClient client = HttpClient.newHttpClient();
+ HttpRequest req = HttpRequest.newBuilder().uri(URI.create("https://api.github.com/repos/azure/azure-signalr")).header("User-Agent", "serverless").build();
+ HttpResponse<String> res = client.send(req, BodyHandlers.ofString());
+ Gson gson = new Gson();
+ GitResult result = gson.fromJson(res.body(), GitResult.class);
+ return new SignalRMessage("newMessage", "Current star count of https://github.com/Azure/azure-signalr is: ".concat(result.stargazers_count));
+ }
+
+ class GitResult {
+ public String stargazers_count;
+ }
+ }
+ ```
-1. Select **Keys** to view the connection strings for the SignalR Service instance.
+3. Some dependencies need to be added, so open `pom.xml` and add the following dependencies used in the code:
+
+ ```xml
+ <dependency>
+ <groupId>com.microsoft.azure.functions</groupId>
+ <artifactId>azure-functions-java-library-signalr</artifactId>
+ <version>1.0.0</version>
+ </dependency>
+ <dependency>
+ <groupId>commons-io</groupId>
+ <artifactId>commons-io</artifactId>
+ <version>2.4</version>
+ </dependency>
+ <dependency>
+ <groupId>com.google.code.gson</groupId>
+ <artifactId>gson</artifactId>
+ <version>2.8.7</version>
+ </dependency>
+ ```
-1. Select and copy the primary connection string.
+4. The client interface of this sample is a web page. Because the `index` function reads HTML content from `content/index.html`, create a new file `content/index.html` in the `resources` directory. Your directory tree should look like this:
+
+ ```
+ FunctionsProject
+ | - src
+ | | - main
+ | | | - java
+ | | | | - com
+ | | | | | - signalr
+ | | | | | | - Function.java
+ | | | - resources
+ | | | | - content
+ | | | | | - index.html
+ | - pom.xml
+ | - host.json
+ | - local.settings.json
+ ```
- ![Create SignalR Service](media/signalr-quickstart-azure-functions-javascript/signalr-quickstart-keys.png)
+    Open `index.html` and copy the following content.
-1. In your code editor, open the *src/chat/java* folder in the cloned repository.
+ ```html
+ <html>
+
+ <body>
+ <h1>Azure SignalR Serverless Sample</h1>
+ <div id="messages"></div>
+ <script src="https://cdnjs.cloudflare.com/ajax/libs/microsoft-signalr/3.1.7/signalr.min.js"></script>
+ <script>
+ let messages = document.querySelector('#messages');
+ const apiBaseUrl = window.location.origin;
+ const connection = new signalR.HubConnectionBuilder()
+ .withUrl(apiBaseUrl + '/api')
+ .configureLogging(signalR.LogLevel.Information)
+ .build();
+ connection.on('newMessage', (message) => {
+ document.getElementById("messages").innerHTML = message;
+ });
+
+ connection.start()
+ .catch(console.error);
+ </script>
+ </body>
+
+ </html>
+ ```
-1. Rename *local.settings.sample.json* to *local.settings.json*.
+5. You're almost done. The last step is to set the SignalR Service connection string in the Azure Function settings.
-1. In **local.settings.json**, paste the connection string into the value of the **AzureSignalRConnectionString** setting. Save the file.
+ 1. In the browser where the Azure portal is opened, confirm the SignalR Service instance you deployed earlier was successfully created by searching for its name in the search box at the top of the portal. Select the instance to open it.
-1. The main file that contains the functions are in *src/chat/java/src/main/java/com/function/Functions.java*:
+ ![Search for the SignalR Service instance](media/signalr-quickstart-azure-functions-csharp/signalr-quickstart-search-instance.png)
- - **negotiate** - Uses the *SignalRConnectionInfo* input binding to generate and return valid connection information.
- - **sendMessage** - Receives a chat message in the request body and uses the *SignalR* output binding to broadcast the message to all connected client applications.
+ 1. Select **Keys** to view the connection strings for the SignalR Service instance.
+
+ ![Screenshot that highlights the primary connection string.](media/signalr-quickstart-azure-functions-javascript/signalr-quickstart-keys.png)
-1. In the terminal, ensure that you are in the *src/chat/java* folder. Build the function app.
+    1. Copy the primary connection string, and then execute the commands below.
+
+ ```bash
+ func settings add AzureSignalRConnectionString '<signalr-connection-string>'
+ # Also we need to set AzureWebJobsStorage as Azure Function's requirement
+ func settings add AzureWebJobsStorage 'UseDevelopmentStorage=true'
+ ```
+
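+    These commands store the values in the project's `local.settings.json` file. As a rough sketch (assuming the defaults that the Functions tooling generates for a Java project; your file may contain additional entries), the result should look something like this:
+
+    ```json
+    {
+      "IsEncrypted": false,
+      "Values": {
+        "FUNCTIONS_WORKER_RUNTIME": "java",
+        "AzureWebJobsStorage": "UseDevelopmentStorage=true",
+        "AzureSignalRConnectionString": "<signalr-connection-string>"
+      }
+    }
+    ```
+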
+6. Run the Azure Function locally:
```bash mvn clean package
+ mvn azure-functions:run
```
-1. Run the function app locally.
+    After the Azure Function is running locally, use your browser to visit `http://localhost:7071/api/index` to see the current star count. If you star or unstar the repository on GitHub, you will see the star count refresh every few seconds.
- ```bash
- mvn azure-functions:run
- ```
+ > [!NOTE]
+    > The SignalR binding needs Azure Storage, but you can use a local storage emulator when the Function is running locally.
+    > If you get an error like `There was an error performing a read operation on the Blob Storage Secret Repository. Please ensure the 'AzureWebJobsStorage' connection string is valid.`, you need to download and enable the [Storage Emulator](../storage/common/storage-use-emulator.md).
Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide.md) or [let us know](https://aka.ms/asrs/qsjava). -
-Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide.md) or [let us know](https://aka.ms/asrs/qsjava).
[!INCLUDE [Cleanup](includes/signalr-quickstart-cleanup.md)]
Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide.
## Next steps
-In this quickstart, you built and ran a real-time serverless application using Maven. Next, learn about how to create Java Azure Functions from scratch.
+In this quickstart, you built and ran a real-time serverless application locally. Learn more about how to use SignalR Service bindings for Azure Functions.
+Next, learn how clients and Azure Functions can communicate bi-directionally through SignalR Service.
+
+> [!div class="nextstepaction"]
+> [SignalR Service bindings for Azure Functions](../azure-functions/functions-bindings-signalr-service.md)
+
+> [!div class="nextstepaction"]
+> [Bi-directional communication in serverless](https://github.com/aspnet/AzureSignalR-samples/tree/main/samples/BidirectionChat)
> [!div class="nextstepaction"] > [Create your first function with Java and Maven](../azure-functions/create-first-function-cli-csharp.md?pivots=programming-language-java%2cprogramming-language-java)
azure-signalr Signalr Quickstart Azure Functions Javascript https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-signalr/signalr-quickstart-azure-functions-javascript.md
Title: Use JavaScript to create a chat room with Azure Functions and SignalR Service
-description: A quickstart for using Azure SignalR Service and Azure Functions to create a chat room using JavaScript.
+description: A quickstart for using Azure SignalR Service and Azure Functions to create an app that shows a GitHub star count using JavaScript.
Previously updated : 12/14/2019 Last updated : 06/09/2021 ms.devlang: javascript
- devx-track-js - mode-api
-# Quickstart: Use JavaScript to create a chat room with Azure Functions and SignalR Service
+# Quickstart: Use JavaScript to create an app that shows GitHub star count with Azure Functions and SignalR Service
-Azure SignalR Service lets you easily add real-time functionality to your application and Azure Functions is a serverless platform that lets you run your code without managing any infrastructure. In this quickstart, you use JavaScript to build a serverless, real-time chat application using SignalR Service and Functions.
+Azure SignalR Service lets you easily add real-time functionality to your application, and Azure Functions is a serverless platform that lets you run your code without managing any infrastructure. In this quickstart, learn how to use SignalR Service and Azure Functions to build a serverless application with JavaScript that broadcasts messages to clients.
+
+> [!NOTE]
+> You can get all the code mentioned in this article from [GitHub](https://github.com/aspnet/AzureSignalR-samples/tree/main/samples/QuickStartServerless/javascript).
## Prerequisites
Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide.
Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide.md) or [let us know](https://aka.ms/asrs/qsjs). -
-Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide.md) or [let us know](https://aka.ms/asrs/qsjs).
-## Configure and run the Azure Function app
+## Set up and run the Azure Function locally
-1. In the browser where the Azure portal is opened, confirm the SignalR Service instance you deployed earlier was successfully created by searching for its name in the search box at the top of the portal. Select the instance to open it.
+1. Make sure you have Azure Functions Core Tools installed. Then create an empty directory and navigate to it from the command line.
- ![Search for the SignalR Service instance](media/signalr-quickstart-azure-functions-csharp/signalr-quickstart-search-instance.png)
+ ```bash
+ # Initialize a function project
+ func init --worker-runtime javascript
+ ```
-1. Select **Keys** to view the connection strings for the SignalR Service instance.
+2. After you initialize a project, you need to create functions. In this sample, we create three functions.
-1. Select and copy the primary connection string.
+    1. Run the following command to create an `index` function, which hosts a web page for the client.
- ![Screenshot that highlights the primary connection string.](media/signalr-quickstart-azure-functions-javascript/signalr-quickstart-keys.png)
+ ```bash
+ func new -n index -t HttpTrigger
+ ```
+
+    Open `index/index.js` and copy the following code.
-1. In your code editor, open the *src/chat/javascript* folder in the cloned repository.
+ ```javascript
+ var fs = require('fs').promises
+
+ module.exports = async function (context, req) {
+ const path = context.executionContext.functionDirectory + '/../content/https://docsupdatetracker.net/index.html'
+ try {
+ var data = await fs.readFile(path);
+ context.res = {
+ headers: {
+ 'Content-Type': 'text/html'
+ },
+ body: data
+ }
+ context.done()
+ } catch (error) {
+            context.log.error(error);
+            context.done(error);
+ }
+ }
+ ```
+
+    2. Create a `negotiate` function for clients to get an access token.
+
+ ```bash
+ func new -n negotiate -t SignalRNegotiateHTTPTrigger
+ ```
+
+    Open `negotiate/function.json` and copy the following JSON:
+
+ ```json
+ {
+ "disabled": false,
+ "bindings": [
+ {
+ "authLevel": "anonymous",
+ "type": "httpTrigger",
+ "direction": "in",
+ "methods": [
+ "post"
+ ],
+ "name": "req",
+ "route": "negotiate"
+ },
+ {
+ "type": "http",
+ "direction": "out",
+ "name": "res"
+ },
+ {
+ "type": "signalRConnectionInfo",
+ "name": "connectionInfo",
+ "hubName": "serverless",
+ "connectionStringSetting": "AzureSignalRConnectionString",
+ "direction": "in"
+ }
+ ]
+ }
+ ```
+
+    3. Create a `broadcast` function to broadcast messages to all clients. In this sample, we use a timer trigger to broadcast messages periodically.
+
+ ```bash
+ func new -n broadcast -t TimerTrigger
+ ```
+
+    Open `broadcast/function.json` and copy the following code.
+
+ ```json
+ {
+ "bindings": [
+ {
+ "name": "myTimer",
+ "type": "timerTrigger",
+ "direction": "in",
+ "schedule": "*/5 * * * * *"
+ },
+ {
+ "type": "signalR",
+ "name": "signalRMessages",
+ "hubName": "serverless",
+ "connectionStringSetting": "AzureSignalRConnectionString",
+ "direction": "out"
+ }
+ ]
+ }
+ ```
+
+    Open `broadcast/index.js` and copy the following code.
+
+ ```javascript
+ var https = require('https');
+
+ module.exports = function (context) {
+ var req = https.request("https://api.github.com/repos/azure/azure-signalr", {
+ method: 'GET',
+ headers: {'User-Agent': 'serverless'}
+ }, res => {
+ var body = "";
+
+ res.on('data', data => {
+ body += data;
+ });
+ res.on("end", () => {
+ var jbody = JSON.parse(body);
+ context.bindings.signalRMessages = [{
+ "target": "newMessage",
+ "arguments": [ `Current star count of https://github.com/Azure/azure-signalr is: ${jbody['stargazers_count']}` ]
+ }]
+ context.done();
+ });
+ }).on("error", (error) => {
+ context.log(error);
+ context.res = {
+ status: 500,
+ body: error
+ };
+ context.done();
+ });
+ req.end();
+ }
+ ```
+
+3. The client interface for this sample is a web page. Because the `index` function reads HTML content from `content/index.html`, create a new file `index.html` in the `content` directory, and copy the following content.
+
+ ```html
+ <html>
+
+ <body>
+ <h1>Azure SignalR Serverless Sample</h1>
+ <div id="messages"></div>
+ <script src="https://cdnjs.cloudflare.com/ajax/libs/microsoft-signalr/3.1.7/signalr.min.js"></script>
+ <script>
+ let messages = document.querySelector('#messages');
+ const apiBaseUrl = window.location.origin;
+ const connection = new signalR.HubConnectionBuilder()
+ .withUrl(apiBaseUrl + '/api')
+ .configureLogging(signalR.LogLevel.Information)
+ .build();
+ connection.on('newMessage', (message) => {
+ document.getElementById("messages").innerHTML = message;
+ });
+
+ connection.start()
+ .catch(console.error);
+ </script>
+ </body>
+
+ </html>
+ ```
-1. Rename *local.settings.sample.json* to *local.settings.json*.
+4. You're almost done. The last step is to set the SignalR Service connection string in the Azure Function settings.
-1. In **local.settings.json**, paste the connection string into the value of the **AzureSignalRConnectionString** setting. Save the file.
+ 1. In the browser where the Azure portal is opened, confirm the SignalR Service instance you deployed earlier was successfully created by searching for its name in the search box at the top of the portal. Select the instance to open it.
-1. JavaScript functions are organized into folders. In each folder are two files: *function.json* defines the bindings that are used in the function, and *index.js* is the body of the function. There are two HTTP triggered functions in this function app:
+ ![Search for the SignalR Service instance](media/signalr-quickstart-azure-functions-csharp/signalr-quickstart-search-instance.png)
- - **negotiate** - Uses the *SignalRConnectionInfo* input binding to generate and return valid connection information.
- - **messages** - Receives a chat message in the request body and uses the *SignalR* output binding to broadcast the message to all connected client applications.
+ 1. Select **Keys** to view the connection strings for the SignalR Service instance.
+
+ ![Screenshot that highlights the primary connection string.](media/signalr-quickstart-azure-functions-javascript/signalr-quickstart-keys.png)
-1. In the terminal, ensure that you are in the *src/chat/javascript* folder. Run the function app.
+    1. Copy the primary connection string, and then execute the command below.
+
+ ```bash
+ func settings add AzureSignalRConnectionString '<signalr-connection-string>'
+ ```
+
+5. Run the Azure Function locally:
```bash func start ```
- ![Create SignalR Service](media/signalr-quickstart-azure-functions-javascript/signalr-quickstart-run-application.png)
-
-Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide.md) or [let us know](https://aka.ms/asrs/qsjs).
+    After the Azure Function is running locally, use your browser to visit `http://localhost:7071/api/index` to see the current star count. If you star or unstar the repository on GitHub, you will see the star count refresh every few seconds.
+ > [!NOTE]
+    > The SignalR binding needs Azure Storage, but you can use a local storage emulator when the Function is running locally.
+    > If you get an error like `There was an error performing a read operation on the Blob Storage Secret Repository. Please ensure the 'AzureWebJobsStorage' connection string is valid.`, you need to download and enable the [Storage Emulator](../storage/common/storage-use-emulator.md).
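+
+    When running locally, the Functions host reads these settings from `local.settings.json`. As a minimal sketch (assuming a project initialized with `func init --worker-runtime javascript`; the exact contents of your file may differ), a working local configuration looks roughly like this:
+
+    ```json
+    {
+      "IsEncrypted": false,
+      "Values": {
+        "FUNCTIONS_WORKER_RUNTIME": "node",
+        "AzureWebJobsStorage": "UseDevelopmentStorage=true",
+        "AzureSignalRConnectionString": "<signalr-connection-string>"
+      }
+    }
+    ```
+
+    Setting `AzureWebJobsStorage` to `UseDevelopmentStorage=true` points the host at the local storage emulator, which avoids the error mentioned in the note above.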
-Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide.md) or [let us know](https://aka.ms/asrs/qsjs).
+Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide.md) or [let us know](https://aka.ms/asrs/qsjs).
[!INCLUDE [Cleanup](includes/signalr-quickstart-cleanup.md)]
Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide.
## Next steps
-In this quickstart, you built and ran a real-time serverless application in VS Code. Next, learn more about how to deploy Azure Functions from VS Code.
+In this quickstart, you built and ran a real-time serverless application locally. Learn more about how to use SignalR Service bindings for Azure Functions.
+Next, learn how clients and Azure Functions can communicate bi-directionally through SignalR Service.
+
+> [!div class="nextstepaction"]
+> [SignalR Service bindings for Azure Functions](../azure-functions/functions-bindings-signalr-service.md)
+
+> [!div class="nextstepaction"]
+> [Bi-directional communication in serverless](https://github.com/aspnet/AzureSignalR-samples/tree/main/samples/BidirectionChat)
> [!div class="nextstepaction"] > [Deploy Azure Functions with VS Code](/azure/developer/javascript/tutorial-vscode-serverless-node-01)
azure-signalr Signalr Quickstart Azure Functions Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-signalr/signalr-quickstart-azure-functions-python.md
Title: Azure SignalR Service serverless quickstart - Python
-description: A quickstart for using Azure SignalR Service and Azure Functions to create a chat room using Python.
+description: A quickstart for using Azure SignalR Service and Azure Functions to create an app that shows a GitHub star count using Python.
Previously updated : 12/14/2019 Last updated : 06/09/2021 ms.devlang: python
- devx-track-python - mode-api
-# Quickstart: Create a chat room with Azure Functions and SignalR Service using Python
+# Quickstart: Create an app that shows GitHub star count with Azure Functions and SignalR Service using Python
-Azure SignalR Service lets you easily add real-time functionality to your application. Azure Functions is a serverless platform that lets you run your code without managing any infrastructure. In this quickstart, learn how to use SignalR Service and Functions to build a serverless, real-time chat application.
+Azure SignalR Service lets you easily add real-time functionality to your application. Azure Functions is a serverless platform that lets you run your code without managing any infrastructure. In this quickstart, learn how to use SignalR Service and Azure Functions to build a serverless application with Python to broadcast messages to clients.
+
+> [!NOTE]
+> You can get all the code mentioned in this article from [GitHub](https://github.com/aspnet/AzureSignalR-samples/tree/main/samples/QuickStartServerless/python).
## Prerequisites
Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide.
Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide.md) or [let us know](https://aka.ms/asrs/qspython). -
-Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide.md) or [let us know](https://aka.ms/asrs/qspython).
-
-## Configure and run the Azure Function app
-
-1. In the browser where the Azure portal is opened, confirm the SignalR Service instance you deployed earlier was successfully created by searching for its name in the search box at the top of the portal. Select the instance to open it.
-
- ![Search for the SignalR Service instance](media/signalr-quickstart-azure-functions-csharp/signalr-quickstart-search-instance.png)
-1. Select **Keys** to view the connection strings for the SignalR Service instance.
+## Set up and run the Azure Function locally
-1. Select and copy the primary connection string.
-
- ![Select and copy the primary connection string.](media/signalr-quickstart-azure-functions-javascript/signalr-quickstart-keys.png)
-
-1. In your code editor, open the *src/chat/python* folder in the cloned repository.
-
-1. To locally develop and test Python functions, you must work in a Python 3.6 or 3.7 environment. Run the following commands to create and activate a virtual environment named `.venv`.
-
- **Linux or macOS:**
+1. Make sure you have Azure Functions Core Tools installed. Then create an empty directory and navigate to it from the command line.
```bash
- python3.7 -m venv .venv
- source .venv/bin/activate
+ # Initialize a function project
+ func init --worker-runtime python
```
- **Windows:**
+2. After you initialize a project, you need to create functions. In this sample, we create three functions.
- ```powershell
- py -3.7 -m venv .venv
- .venv\scripts\activate
- ```
+    1. Run the following command to create an `index` function, which hosts a web page for the client.
-1. Rename *local.settings.sample.json* to *local.settings.json*.
+ ```bash
+ func new -n index -t HttpTrigger
+ ```
+
+    Open `index/__init__.py` and copy the following code.
-1. In **local.settings.json**, paste the connection string into the value of the **AzureSignalRConnectionString** setting. Save the file.
-
-1. Python functions are organized into folders. In each folder are two files: *function.json* defines the bindings that are used in the function, and *\_\_init\_\_.py* is the body of the function. There are two HTTP triggered functions in this function app:
+    ```python
+ import os
+
+ import azure.functions as func
+
+
+ def main(req: func.HttpRequest) -> func.HttpResponse:
+        f = open(os.path.dirname(os.path.realpath(__file__)) + '/../content/index.html')
+ return func.HttpResponse(f.read(), mimetype='text/html')
+ ```
+
+    2. Create a `negotiate` function for clients to get an access token.
+
+ ```bash
+ func new -n negotiate -t SignalRNegotiateHTTPTrigger
+ ```
+
+    Open `negotiate/function.json` and copy the following JSON:
+
+ ```json
+ {
+ "scriptFile": "__init__.py",
+ "bindings": [
+ {
+ "authLevel": "function",
+ "type": "httpTrigger",
+ "direction": "in",
+ "name": "req",
+ "methods": [
+ "post"
+ ]
+ },
+ {
+ "type": "http",
+ "direction": "out",
+ "name": "$return"
+ },
+ {
+ "type": "signalRConnectionInfo",
+ "name": "connectionInfo",
+ "hubName": "serverless",
+ "connectionStringSetting": "AzureSignalRConnectionString",
+ "direction": "in"
+ }
+ ]
+ }
+ ```
+
+    Then open `negotiate/__init__.py` and copy the following code:
+
+ ```python
+ import azure.functions as func
+
+
+ def main(req: func.HttpRequest, connectionInfo) -> func.HttpResponse:
+ return func.HttpResponse(connectionInfo)
+ ```
+
+    3. Create a `broadcast` function to broadcast messages to all clients. In this sample, we use a timer trigger to broadcast messages periodically.
+
+ ```bash
+ func new -n broadcast -t TimerTrigger
+ # install requests
+ pip install requests
+ ```
+
+    Open `broadcast/function.json` and copy the following code.
+
+ ```json
+ {
+ "scriptFile": "__init__.py",
+ "bindings": [
+ {
+ "name": "myTimer",
+ "type": "timerTrigger",
+ "direction": "in",
+ "schedule": "*/5 * * * * *"
+ },
+ {
+ "type": "signalR",
+ "name": "signalRMessages",
+ "hubName": "serverless",
+ "connectionStringSetting": "AzureSignalRConnectionString",
+ "direction": "out"
+ }
+ ]
+ }
+ ```
+
+    Open `broadcast/__init__.py` and copy the following code.
+
+ ```python
+ import requests
+ import json
+
+ import azure.functions as func
+
+
+ def main(myTimer: func.TimerRequest, signalRMessages: func.Out[str]) -> None:
+ headers = {'User-Agent': 'serverless'}
+ res = requests.get('https://api.github.com/repos/azure/azure-signalr', headers=headers)
+ jres = res.json()
+
+ signalRMessages.set(json.dumps({
+ 'target': 'newMessage',
+ 'arguments': [ 'Current star count of https://github.com/Azure/azure-signalr is: ' + str(jres['stargazers_count']) ]
+ }))
+ ```
+
+3. The client interface for this sample is a web page. Because the `index` function reads HTML content from `content/index.html`, create a new file `index.html` in the `content` directory, and copy the following content.
+
+ ```html
+ <html>
+
+ <body>
+ <h1>Azure SignalR Serverless Sample</h1>
+ <div id="messages"></div>
+ <script src="https://cdnjs.cloudflare.com/ajax/libs/microsoft-signalr/3.1.7/signalr.min.js"></script>
+ <script>
+ let messages = document.querySelector('#messages');
+ const apiBaseUrl = window.location.origin;
+ const connection = new signalR.HubConnectionBuilder()
+ .withUrl(apiBaseUrl + '/api')
+ .configureLogging(signalR.LogLevel.Information)
+ .build();
+ connection.on('newMessage', (message) => {
+ document.getElementById("messages").innerHTML = message;
+ });
+
+ connection.start()
+ .catch(console.error);
+ </script>
+ </body>
+
+ </html>
+ ```
+
+4. You're almost done. The last step is to set the SignalR Service connection string in the Azure Function settings.
- - **negotiate** - Uses the *SignalRConnectionInfo* input binding to generate and return valid connection information.
- - **messages** - Receives a chat message in the request body and uses the *SignalR* output binding to broadcast the message to all connected client applications.
+ 1. In the browser where the Azure portal is opened, confirm the SignalR Service instance you deployed earlier was successfully created by searching for its name in the search box at the top of the portal. Select the instance to open it.
-1. In the terminal with the virtual environment activated, ensure that you are in the *src/chat/python* folder. Install the necessary Python packages using PIP.
+ ![Search for the SignalR Service instance](media/signalr-quickstart-azure-functions-csharp/signalr-quickstart-search-instance.png)
- ```bash
- python -m pip install -r requirements.txt
- ```
+ 1. Select **Keys** to view the connection strings for the SignalR Service instance.
+
+ ![Screenshot that highlights the primary connection string.](media/signalr-quickstart-azure-functions-javascript/signalr-quickstart-keys.png)
-1. Run the function app.
+    1. Copy the primary connection string, and then execute the command below.
+
+ ```bash
+ func settings add AzureSignalRConnectionString '<signalr-connection-string>'
+ ```
+
+5. Run the Azure Function locally:
```bash func start ```
- ![Run function app](media/signalr-quickstart-azure-functions-python/signalr-quickstart-run-application.png)
-
-Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide.md) or [let us know](https://aka.ms/asrs/qspython).
-
+    After the Azure Function is running locally, use your browser to visit `http://localhost:7071/api/index` to see the current star count. If you star or unstar the repository on GitHub, you will see the star count refresh every few seconds.
-Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide.md) or [let us know](https://aka.ms/asrs/qspython).
+ > [!NOTE]
+    > The SignalR binding needs Azure Storage, but you can use a local storage emulator when the Function is running locally.
+    > If you get an error like `There was an error performing a read operation on the Blob Storage Secret Repository. Please ensure the 'AzureWebJobsStorage' connection string is valid.`, you need to download and enable the [Storage Emulator](../storage/common/storage-use-emulator.md).
[!INCLUDE [Cleanup](includes/signalr-quickstart-cleanup.md)]
Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide.
## Next steps
-In this quickstart, you built and ran a real-time serverless application in VS Code. Next, learn more about how to deploy Azure Functions from VS Code.
+In this quickstart, you built and ran a real-time serverless application locally. Learn more about how to use SignalR Service bindings for Azure Functions.
+Next, learn how clients and Azure Functions can communicate bi-directionally through SignalR Service.
+
+> [!div class="nextstepaction"]
+> [SignalR Service bindings for Azure Functions](../azure-functions/functions-bindings-signalr-service.md)
+
+> [!div class="nextstepaction"]
+> [Bi-directional communication in serverless](https://github.com/aspnet/AzureSignalR-samples/tree/main/samples/BidirectionChat)
> [!div class="nextstepaction"] > [Deploy Azure Functions with VS Code](/azure/developer/javascript/tutorial-vscode-serverless-node-01)
azure-sql Authentication Azure Ad Only Authentication Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/authentication-azure-ad-only-authentication-tutorial.md
For more information on how to assign a role to an Azure AD account, see [Assign
For more information on the required permission to enable or disable Azure AD-only authentication, see the [Permissions section of Azure AD-only authentication](authentication-azure-ad-only-authentication.md#permissions) article.
-1. In our example, we'll assign the **SQL Security Manager** role to the user `UserSqlSecurityManager@contoso.onmicrosoft.com`. Using privileged user that can assign Azure AD roles, sign into the [Azure portal](https://portal.zure.com).
+1. In our example, we'll assign the **SQL Security Manager** role to the user `UserSqlSecurityManager@contoso.onmicrosoft.com`. Using a privileged user that can assign Azure AD roles, sign in to the [Azure portal](https://portal.azure.com/).
1. Go to your SQL server resource, and select **Access control (IAM)** in the menu. Select the **Add** button and then **Add role assignment** in the drop-down menu. :::image type="content" source="media/authentication-azure-ad-only-authentication/azure-ad-only-authentication-access-control.png" alt-text="Access control pane in the Azure portal":::
For more information on the required permission to enable or disable Azure AD-on
To enable Azure AD-only authentication auth in the Azure portal, see the steps below.
-1. Using the user with the [SQL Security Manager](../../role-based-access-control/built-in-roles.md#sql-security-manager) role, go to the [Azure portal](https://portal.zure.com).
+1. Using the user with the [SQL Security Manager](../../role-based-access-control/built-in-roles.md#sql-security-manager) role, go to the [Azure portal](https://portal.azure.com/).
1. Go to your SQL server resource, and select **Azure Active Directory** under the **Settings** menu. :::image type="content" source="media/authentication-azure-ad-only-authentication/azure-ad-only-authentication-portal.png" alt-text="Enable Azure AD only auth menu":::
Check whether Azure AD-only authentication is enabled for your server or instanc
# [Portal](#tab/azure-portal)
-Go to your **SQL server** resource in the [Azure portal](https://portal.zure.com). Select **Azure Active Directory** under the **Settings** menu. Portal support for Azure AD-only authentication is only available for Azure SQL Database.
+Go to your **SQL server** resource in the [Azure portal](https://portal.azure.com/). Select **Azure Active Directory** under the **Settings** menu. Portal support for Azure AD-only authentication is only available for Azure SQL Database.
# [Azure CLI](#tab/azure-cli)
By disabling the Azure AD-only authentication feature, you allow both SQL authen
# [Portal](#tab/azure-portal)
-1. Using the user with the [SQL Security Manager](../../role-based-access-control/built-in-roles.md#sql-security-manager) role, go to the [Azure portal](https://portal.zure.com).
+1. Using the user with the [SQL Security Manager](../../role-based-access-control/built-in-roles.md#sql-security-manager) role, go to the [Azure portal](https://portal.azure.com/).
1. Go to your SQL server resource, and select **Azure Active Directory** under the **Settings** menu. 1. To disable the Azure AD-only authentication feature, uncheck the **Support only Azure Active Directory authentication for this server** checkbox and **Save** the setting.
After disabling Azure AD-only authentication, test connecting using a SQL authen
## Next steps [Azure AD-only authentication with Azure SQL](authentication-azure-ad-only-authentication.md)---
azure-sql Connect Query Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/connect-query-nodejs.md
Open a command prompt and create a folder named *sqltest*. Open the folder you c
/* //Use Azure VM Managed Identity to connect to the SQL database
- const connection = new Connection({
- server: process.env["db_server"],
- authentication: {
- type: 'azure-active-directory-msi-vm',
- },
- options: {
- database: process.env["db_database"],
- encrypt: true,
- port: 1433
- }
- });
+ const config = {
+ server: process.env["db_server"],
+ authentication: {
+ type: 'azure-active-directory-msi-vm',
+ },
+ options: {
+ database: process.env["db_database"],
+ encrypt: true,
+ port: 1433
+ }
+ };
+
//Use Azure App Service Managed Identity to connect to the SQL database
- const connection = new Connection({
- server: process.env["db_server"],
- authentication: {
- type: 'azure-active-directory-msi-app-service',
- },
- options: {
- database: process.env["db_database"],
- encrypt: true,
- port: 1433
- }
- });
+ const config = {
+ server: process.env["db_server"],
+ authentication: {
+ type: 'azure-active-directory-msi-app-service',
+ },
+ options: {
+ database: process.env["db_database"],
+ encrypt: true,
+ port: 1433
+ }
+    };
*/
azure-sql Doc Changes Updates Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/doc-changes-updates-release-notes.md
ms.devlang: Previously updated : 06/03/2021 Last updated : 06/22/2021 # What's new in Azure SQL Database & SQL Managed Instance? [!INCLUDE[appliesto-sqldb-sqlmi](../includes/appliesto-sqldb-sqlmi.md)]
This table provides a quick comparison for the change in terminology:
| Feature | Details | | | |
-| Elastic database jobs (preview) | For information, see [Create, configure, and manage elastic jobs](elastic-jobs-overview.md). |
+| Elastic database jobs | For information, see [Create, configure, and manage elastic jobs](elastic-jobs-overview.md). |
| Elastic queries | For information, see [Elastic query overview](elastic-query-overview.md). |
-| Elastic transactions | [Distributed transactions across cloud databases](elastic-transactions-overview.md). |
+| Elastic transactions | For information, see [Distributed transactions across cloud databases](elastic-transactions-overview.md). |
| Query editor in the Azure portal |For information, see [Use the Azure portal's SQL query editor to connect and query data](connect-query-portal.md).|
-|SQL Analytics|For information, see [Azure SQL Analytics](../../azure-monitor/insights/azure-sql.md).|
-| &nbsp; |
+| SQL Analytics|For information, see [Azure SQL Analytics](../../azure-monitor/insights/azure-sql.md).|
+| Query Store hints | For information, see [Query Store hints](/sql/relational-databases/performance/query-store-hints?view=azuresqldb-current&preserve-view=true).|
+| | |
### [Azure SQL Managed Instance](#tab/managed-instance)
This table provides a quick comparison for the change in terminology:
| [Transactional Replication](../managed-instance/replication-transactional-overview.md) | Replicate the changes from your tables into other databases in SQL Managed Instance, SQL Database, or SQL Server. Or update your tables when some rows are changed in other instances of SQL Managed Instance or SQL Server. For information, see [Configure replication in Azure SQL Managed Instance](../managed-instance/replication-between-two-instances-configure-tutorial.md). | | Threat detection |For information, see [Configure threat detection in Azure SQL Managed Instance](../managed-instance/threat-detection-configure.md).| | Long-term backup retention | For information, see [Configure long-term back up retention in Azure SQL Managed Instance](../managed-instance/long-term-backup-retention-configure.md), which is currently in limited public preview. |
+| Query Store hints | For information, see [Query Store hints](/sql/relational-databases/performance/query-store-hints?view=azuresqldb-mi-current&preserve-view=true).|
+| | |
azure-sql Scale Resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/scale-resources.md
Last updated 06/25/2019
# Dynamically scale database resources with minimal downtime [!INCLUDE[appliesto-sqldb-sqlmi](../includes/appliesto-sqldb-sqlmi.md)]
-Azure SQL Database and SQL Managed Instance enable you to dynamically add more resources to your database with minimal [downtime](https://azure.microsoft.com/support/legal/sla/sql-database); however, there is a switch over period where connectivity is lost to the database for a short amount of time, which can be mitigated using retry logic.
+Azure SQL Database and SQL Managed Instance enable you to dynamically add more resources to your database with minimal [downtime](https://azure.microsoft.com/support/legal/sla/azure-sql-database); however, there is a switch over period where connectivity is lost to the database for a short amount of time, which can be mitigated using retry logic.
## Overview
backup Backup Azure Arm Userestapi Backupazurevms https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-arm-userestapi-backupazurevms.md
Let's assume you want to protect a VM "testVM" under a resource group "testRG" t
### Discover unprotected Azure VMs
-First, the vault should be able to identify the Azure VM. This is triggered using the [refresh operation](/rest/api/backup/protectioncontainers/refresh). It's an asynchronous *POST* operation that makes sure the vault gets the latest list of all unprotected VM in the current subscription and 'caches' them. Once the VM is 'cached', Recovery services will be able to access the VM and protect it.
+First, the vault should be able to identify the Azure VM. This is triggered using the [refresh operation](/rest/api/backup/2021-02-10/protection-containers/refresh). It's an asynchronous *POST* operation that makes sure the vault gets the latest list of all unprotected VM in the current subscription and 'caches' them. Once the VM is 'cached', Recovery services will be able to access the VM and protect it.
```http POST https://management.azure.com/Subscriptions/{subscriptionId}/resourceGroups/{vaultresourceGroupname}/providers/Microsoft.RecoveryServices/vaults/{vaultName}/backupFabrics/{fabricName}/refreshContainers?api-version=2016-12-01
X-Powered-By: ASP.NET
### Selecting the relevant Azure VM
- You can confirm that "caching" is done by [listing all protectable items](/rest/api/backup/backupprotectableitems/list) under the subscription and locate the desired VM in the response. [The response of this operation](#example-responses-to-get-operation) also gives you information on how Recovery Services identifies a VM. Once you are familiar with the pattern, you can skip this step and directly proceed to [enabling protection](#enabling-protection-for-the-azure-vm).
+ You can confirm that "caching" is done by [listing all protectable items](/rest/api/backup/2021-02-10/backup-protected-items/list) under the subscription and locate the desired VM in the response. [The response of this operation](#example-responses-to-get-operation) also gives you information on how Recovery Services identifies a VM. Once you are familiar with the pattern, you can skip this step and directly proceed to [enabling protection](#enabling-protection-for-the-azure-vm).
This operation is a *GET* operation.
The *GET* URI has all the required parameters. No additional request body is nee
|Name |Type |Description | ||||
-|200 OK | [WorkloadProtectableItemResourceList](/rest/api/backup/backupprotectableitems/list#workloadprotectableitemresourcelist) | OK |
+|200 OK | [WorkloadProtectableItemResourceList](/rest/api/backup/2021-02-10/backup-protected-items/list#workloadprotectableitemresourcelist) | OK |
#### Example responses to get operation
In the example, the above values translate to:
### Enabling protection for the Azure VM
-After the relevant VM is "cached" and "identified", select the policy to protect. To know more about existing policies in the vault, refer to [list Policy API](/rest/api/backup/backuppolicies/list). Then select the [relevant policy](/rest/api/backup/protectionpolicies/get) by referring to the policy name. To create policies, refer to [create policy tutorial](backup-azure-arm-userestapi-createorupdatepolicy.md). "DefaultPolicy" is selected in the example below.
+After the relevant VM is "cached" and "identified", select the policy to protect. To know more about existing policies in the vault, refer to [list Policy API](/rest/api/backup/2021-02-10/backup-policies/list). Then select the [relevant policy](/rest/api/backup/2021-02-10/protection-policies/get) by referring to the policy name. To create policies, refer to [create policy tutorial](backup-azure-arm-userestapi-createorupdatepolicy.md). "DefaultPolicy" is selected in the example below.
Enabling protection is an asynchronous *PUT* operation that creates a 'protected item'.
To create a protected item, following are the components of the request body.
|||| |properties | AzureIaaSVMProtectedItem |ProtectedItem Resource properties |
-For the complete list of definitions of the request body and other details, refer to [create protected item REST API document](/rest/api/backup/protecteditems/createorupdate#request-body).
+For the complete list of definitions of the request body and other details, refer to [create protected item REST API document](/rest/api/backup/2021-02-10/protected-items/create-or-update#request-body).
##### Example request body
It returns two responses: 202 (Accepted) when another operation is created and t
|Name |Type |Description | ||||
-|200 OK | [ProtectedItemResource](/rest/api/backup/protecteditemoperationresults/get#protecteditemresource) | OK |
+|200 OK | [ProtectedItemResource](/rest/api/backup/2021-02-10/protected-item-operation-results/get#protecteditemresource) | OK |
|202 Accepted | | Accepted | ##### Example responses to create protected item operation
To trigger an on-demand backup, following are the components of the request body
|Name |Type |Description | ||||
-|properties | [IaaSVMBackupRequest](/rest/api/backup/backups/trigger#iaasvmbackuprequest) |BackupRequestResource properties |
+|properties | [IaaSVMBackupRequest](/rest/api/backup/2021-02-10/backups/trigger#iaasvmbackuprequest) |BackupRequestResource properties |
-For the complete list of definitions of the request body and other details, refer to [trigger backups for protected items REST API document](/rest/api/backup/backups/trigger#request-body).
+For the complete list of definitions of the request body and other details, refer to [trigger backups for protected items REST API document](/rest/api/backup/2021-02-10/backups/trigger#request-body).
#### Example request body for on-demand backup
If the Azure VM is already backed up, you can specify the list of disks to be ba
> [!IMPORTANT] > The request body above is always the final copy of data disks to be excluded or included. This doesn't *add* to the previous configuration. For example: If you first update the protection as "exclude data disk 1" and then repeat with "exclude data disk 2", *only data disk 2 is excluded* in the subsequent backups and data disk 1 will be included. This is always the final list which will be included/excluded in the subsequent backups.
-To get the current list of disks which are excluded or included, get the protected item information as mentioned [here](/rest/api/backup/protecteditems/get). The response will provide the list of data disk LUNs and indicates whether they are included or excluded.
+To get the current list of disks which are excluded or included, get the protected item information as mentioned [here](/rest/api/backup/2021-02-10/protected-items/get). The response will provide the list of data disk LUNs and indicates whether they are included or excluded.
### Stop protection but retain existing data
The response will follow the same format as mentioned [for triggering an on-dema
### Stop protection and delete data
-To remove the protection on a protected VM and delete the backup data as well, perform a delete operation as detailed [here](/rest/api/backup/protecteditems/delete).
+To remove the protection on a protected VM and delete the backup data as well, perform a delete operation as detailed [here](/rest/api/backup/2021-02-10/protected-items/delete).
Stopping protection and deleting data is a *DELETE* operation.
The response will follow the same format as mentioned [for triggering an on-dema
For more information on the Azure Backup REST APIs, see the following documents: - [Azure Recovery Services provider REST API](/rest/api/recoveryservices/)-- [Get started with Azure REST API](/rest/api/azure/)
+- [Get started with Azure REST API](/rest/api/azure/)
backup Backup Azure Arm Userestapi Createorupdatepolicy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-arm-userestapi-createorupdatepolicy.md
ms.assetid: 5ffc4115-0ae5-4b85-a18c-8a942f6d4870
# Create Azure Recovery Services backup policies using REST API
-The steps to create a backup policy for an Azure Recovery Services vault are outlined in the [policy REST API document](/rest/api/backup/protectionpolicies/createorupdate). Let's use this document as a reference to create a policy for Azure VM backup.
+The steps to create a backup policy for an Azure Recovery Services vault are outlined in the [policy REST API document](/rest/api/backup/2021-02-10/protection-policies/create-or-update). Let's use this document as a reference to create a policy for Azure VM backup.
## Create or update a policy
For example, to create a policy for Azure VM backup, following are the component
|Name |Required |Type |Description | |||||
-|properties | True | ProtectionPolicy:[AzureIaaSVMProtectionPolicy](/rest/api/backup/protectionpolicies/createorupdate#azureiaasvmprotectionpolicy) | ProtectionPolicyResource properties |
+|properties | True | ProtectionPolicy:[AzureIaaSVMProtectionPolicy](/rest/api/backup/2021-02-10/protection-policies/create-or-update#azureiaasvmprotectionpolicy) | ProtectionPolicyResource properties |
|tags | | Object | Resource tags |
-For the complete list of definitions in the request body, refer to the [backup policy REST API document](/rest/api/backup/protectionpolicies/createorupdate).
+For the complete list of definitions in the request body, refer to the [backup policy REST API document](/rest/api/backup/2021-02-10/protection-policies/create-or-update).
### Example request body
It returns two responses: 202 (Accepted) when another operation is created, and
|Name |Type |Description | ||||
-|200 OK | [Protection PolicyResource](/rest/api/backup/protectionpolicies/createorupdate#protectionpolicyresource) | OK |
+|200 OK | [Protection PolicyResource](/rest/api/backup/2021-02-10/protection-policies/create-or-update#protectionpolicyresource) | OK |
|202 Accepted | | Accepted | ### Example responses
backup Backup Azure Arm Userestapi Managejobs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-arm-userestapi-managejobs.md
An operation such as triggering backup will always return a jobID. For example:
} ```
-The Azure VM backup job is identified by "jobId" field and can be tracked as mentioned [here](/rest/api/backup/jobdetails/) using a simple *GET* request.
+The Azure VM backup job is identified by "jobId" field and can be tracked as mentioned [here](/rest/api/backup/2021-02-10/job-details) using a simple *GET* request.
## Tracking the job
The `{jobName}` is "jobId" mentioned above. The response is always 200 OK with t
|Name |Type |Description | ||||
-|200 OK | [JobResource](/rest/api/backup/jobdetails/get#jobresource) | OK |
+|200 OK | [JobResource](/rest/api/backup/2021-02-10/job-details/get#jobresource) | OK |
#### Example response
backup Backup Azure Arm Userestapi Restoreazurevms https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-arm-userestapi-restoreazurevms.md
For any restore operation, one has to identify the relevant recovery point first
## Select Recovery point
-The available recovery points of a backup item can be listed using the [list recovery point REST API](/rest/api/backup/recoverypoints/list). It's a simple *GET* operation with all the relevant values.
+The available recovery points of a backup item can be listed using the [list recovery point REST API](/rest/api/backup/2021-02-10/recovery-points/list). It's a simple *GET* operation with all the relevant values.
```http GET https://management.azure.com/Subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.RecoveryServices/vaults/{vaultName}/backupFabrics/{fabricName}/protectionContainers/{containerName}/protectedItems/{protectedItemName}/recoveryPoints?api-version=2019-05-13
The *GET* URI has all the required parameters. There's no need for an additional
|Name |Type |Description | ||||
-|200 OK | [RecoveryPointResourceList](/rest/api/backup/recoverypoints/list#recoverypointresourcelist) | OK |
+|200 OK | [RecoveryPointResourceList](/rest/api/backup/2021-02-10/recovery-points/list#recoverypointresourcelist) | OK |
#### Example response
After selecting the [relevant restore point](#select-recovery-point), proceed to
> [!IMPORTANT] > All details about various restore options and their dependencies are mentioned [here](./backup-azure-arm-restore-vms.md#restore-options). Please review before proceeding to triggering these operations.
-Triggering restore operations is a *POST* request. To know more about the API, refer to the ["trigger restore" REST API](/rest/api/backup/restores/trigger).
+Triggering restore operations is a *POST* request. To know more about the API, refer to the ["trigger restore" REST API](/rest/api/backup/2021-02-10/restores/trigger).
```http POST https://management.azure.com/Subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.RecoveryServices/vaults/{vaultName}/backupFabrics/{fabricName}/protectionContainers/{containerName}/protectedItems/{protectedItemName}/recoveryPoints/{recoveryPointId}/restore?api-version=2019-05-13
To trigger a disk restore from an Azure VM backup, following are the components
|Name |Type |Description | ||||
-|properties | [IaaSVMRestoreRequest](/rest/api/backup/restores/trigger#iaasvmrestorerequest) | RestoreRequestResourceProperties |
+|properties | [IaaSVMRestoreRequest](/rest/api/backup/2021-02-10/restores/trigger#iaasvmrestorerequest) | RestoreRequestResourceProperties |
-For the complete list of definitions of the request body and other details, refer to [trigger Restore REST API document](/rest/api/backup/restores/trigger#request-body).
+For the complete list of definitions of the request body and other details, refer to [trigger Restore REST API document](/rest/api/backup/2021-02-10/restores/trigger#request-body).
##### Example request
The following request body defines properties required to trigger a disk restore
### Restore disks selectively
-If you are [selectively backing up disks](backup-azure-arm-userestapi-backupazurevms.md#excluding-disks-in-azure-vm-backup), then the current backed-up disk list is provided in the [recovery point summary](#select-recovery-point) and [detailed response](/rest/api/backup/recoverypoints/get). You can also selectively restore disks and more details are provided [here](selective-disk-backup-restore.md#selective-disk-restore). To selectively restore a disk among the list of backed up disks, find the LUN of the disk from the recovery point response and add the **restoreDiskLunList** property to the [request body above](#example-request) as shown below.
+If you are [selectively backing up disks](backup-azure-arm-userestapi-backupazurevms.md#excluding-disks-in-azure-vm-backup), then the current backed-up disk list is provided in the [recovery point summary](#select-recovery-point) and [detailed response](/rest/api/backup/2021-02-10/recovery-points/get). You can also selectively restore disks and more details are provided [here](selective-disk-backup-restore.md#selective-disk-restore). To selectively restore a disk among the list of backed up disks, find the LUN of the disk from the recovery point response and add the **restoreDiskLunList** property to the [request body above](#example-request) as shown below.
```json {
To trigger a disk replacement from an Azure VM backup, following are the compone
|Name |Type |Description | ||||
-|properties | [IaaSVMRestoreRequest](/rest/api/backup/restores/trigger#iaasvmrestorerequest) | RestoreRequestResourceProperties |
+|properties | [IaaSVMRestoreRequest](/rest/api/backup/2021-02-10/restores/trigger#iaasvmrestorerequest) | RestoreRequestResourceProperties |
-For the complete list of definitions of the request body and other details, refer to [trigger Restore REST API document](/rest/api/backup/restores/trigger#request-body).
+For the complete list of definitions of the request body and other details, refer to [trigger Restore REST API document](/rest/api/backup/2021-02-10/restores/trigger#request-body).
#### Example request
backup Backup Azure File Share Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-file-share-rest-api.md
For this article, we'll use the following resources:
### Discover storage accounts with unprotected Azure file shares
-The vault needs to discover all Azure storage accounts in the subscription with file shares that can be backed up to the Recovery Services vault. This is triggered using the [refresh operation](/rest/api/backup/protectioncontainers/refresh). It's an asynchronous *POST* operation that ensures the vault gets the latest list of all unprotected Azure File shares in the current subscription and 'caches' them. Once the file share is 'cached', Recovery services can access the file share and protect it.
+The vault needs to discover all Azure storage accounts in the subscription with file shares that can be backed up to the Recovery Services vault. This is triggered using the [refresh operation](/rest/api/backup/2021-02-10/protection-containers/refresh). It's an asynchronous *POST* operation that ensures the vault gets the latest list of all unprotected Azure File shares in the current subscription and 'caches' them. Once the file share is 'cached', Recovery services can access the file share and protect it.
```http POST https://management.azure.com/Subscriptions/{subscriptionId}/resourceGroups/{vaultresourceGroupname}/providers/Microsoft.RecoveryServices/vaults/{vaultName}/backupFabrics/{fabricName}/refreshContainers?api-version=2016-12-01&$filter={$filter}
Date : Mon, 27 Jan 2020 10:53:04 GMT
### Get List of storage accounts with file shares that can be backed up with Recovery Services vault
-To confirm that ΓÇ£cachingΓÇ¥ is done, list all the storage accounts in the subscription with file shares that can be backed up with the Recovery Services vault. Then locate the desired storage account in the response. This is done using the [GET ProtectableContainers](/rest/api/backup/protectablecontainers/list) operation.
+To confirm that ΓÇ£cachingΓÇ¥ is done, list all the storage accounts in the subscription with file shares that can be backed up with the Recovery Services vault. Then locate the desired storage account in the response. This is done using the [GET ProtectableContainers](/rest/api/backup/2021-02-10/protectable-containers/list) operation.
```http GET https://management.azure.com/Subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/azurefiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupFabrics/Azure/protectableContainers?api-version=2016-12-01&$filter=backupManagementType eq 'AzureStorage'
Since we can locate the *testvault2* storage account in the response body with t
### Register storage account with Recovery Services vault
-This step is only needed if you didn't register the storage account with the vault earlier. You can register the vault via the [ProtectionContainers-Register operation](/rest/api/backup/protectioncontainers/register).
+This step is only needed if you didn't register the storage account with the vault earlier. You can register the vault via the [ProtectionContainers-Register operation](/rest/api/backup/2021-02-10/protection-containers/register).
```http PUT https://management.azure.com/Subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.RecoveryServices/vaults/{vaultName}/backupFabrics/{fabricName}/protectionContainers/{containerName}?api-version=2016-12-01
The create request body is as follows:
} ```
-For the complete list of definitions of the request body and other details, refer to [ProtectionContainers-Register](/rest/api/backup/protectioncontainers/register#azurestoragecontainer).
+For the complete list of definitions of the request body and other details, refer to [ProtectionContainers-Register](/rest/api/backup/2021-02-10/protection-containers/register#azurestoragecontainer).
This is an asynchronous operation and returns two responses: "202 Accepted" when the operation is accepted and "200 OK" when the operation is complete. To track the operation status, use the location header to get the latest status of the operation.
You can verify if the registration was successful from the value of the *registr
### Inquire all unprotected files shares under a storage account
-You can inquire about protectable items in a storage account using the [Protection Containers-Inquire](/rest/api/backup/protectioncontainers/inquire) operation. It's an asynchronous operation and the results should be tracked using the location header.
+You can inquire about protectable items in a storage account using the [Protection Containers-Inquire](/rest/api/backup/2021-02-10/protection-containers/inquire) operation. It's an asynchronous operation and the results should be tracked using the location header.
```http POST https://management.azure.com/Subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.RecoveryServices/vaults/{vaultName}/backupFabrics/{fabricName}/protectionContainers/{containerName}/inquire?api-version=2016-12-01
Date : Mon, 27 Jan 2020 10:53:05 GMT
### Select the file share you want to back up
-You can list all protectable items under the subscription and locate the desired file share to be backed up using the [GET backupprotectableItems](/rest/api/backup/backupprotectableitems/list) operation.
+You can list all protectable items under the subscription and locate the desired file share to be backed up using the [GET backupprotectableItems](/rest/api/backup/2021-02-10/backup-protectable-items/list) operation.
```http GET https://management.azure.com/Subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.RecoveryServices/vaults/{vaultName}/backupProtectableItems?api-version=2016-12-01&$filter={$filter}
The response contains the list of all unprotected file shares and contains all t
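To scope the list to Azure file shares only, the `{$filter}` value can be an expression such as the one below (shown unencoded for readability; the `workloadType` value is an assumption, so confirm the supported filter syntax in the API reference):

```http
GET https://management.azure.com/Subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.RecoveryServices/vaults/{vaultName}/backupProtectableItems?api-version=2016-12-01&$filter=backupManagementType eq 'AzureStorage' and workloadType eq 'AzureFileShare'
```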
### Enable backup for the file share
-After the relevant file share is "identified" with the friendly name, select the policy to protect. To learn more about existing policies in the vault, refer to [list Policy API](/rest/api/backup/backuppolicies/list). Then select the [relevant policy](/rest/api/backup/protectionpolicies/get) by referring to the policy name. To create policies, refer to [create policy tutorial](./backup-azure-arm-userestapi-createorupdatepolicy.md).
+After the relevant file share is "identified" with the friendly name, select the policy to protect. To learn more about existing policies in the vault, refer to [list Policy API](/rest/api/backup/2021-02-10/backup-policies/list). Then select the [relevant policy](/rest/api/backup/2021-02-10/protection-policies/get) by referring to the policy name. To create policies, refer to [create policy tutorial](./backup-azure-arm-userestapi-createorupdatepolicy.md).
Enabling protection is an asynchronous *PUT* operation that creates a "protected item".
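A minimal sketch of that *PUT* call follows. The placeholder names and the `AzureFileShareProtectedItem` body shape are assumptions; the authoritative schema is in the protected-item and policy references linked above:

```http
PUT https://management.azure.com/Subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.RecoveryServices/vaults/{vaultName}/backupFabrics/Azure/protectionContainers/{containerName}/protectedItems/{protectedItemName}?api-version=2016-12-01

{
  "properties": {
    "protectedItemType": "AzureFileShareProtectedItem",
    "sourceResourceId": "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Storage/storageAccounts/{storageAccountName}",
    "policyId": "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.RecoveryServices/vaults/{vaultName}/backupPolicies/{policyName}"
  }
}
```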
To trigger an on-demand backup, the following are the components of the request body
| Name | Type | Description |
| - | -- | |
| Properties | AzurefilesharebackupReques | BackupRequestResource properties |
-For the complete list of definitions of the request body and other details, refer to [trigger backups for protected items REST API document](/rest/api/backup/backups/trigger#request-body).
+For the complete list of definitions of the request body and other details, refer to [trigger backups for protected items REST API document](/rest/api/backup/2021-02-10/backups/trigger#request-body).
Request Body example
backup Manage Azure File Share Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/manage-azure-file-share-rest-api.md
For example, the final response of a [trigger backup REST API](backup-azure-file
} ```
-The Azure file share backup job is identified by the **jobId** field and can be tracked as mentioned [here](/rest/api/backup/jobdetails/) using a GET request.
+The Azure file share backup job is identified by the **jobId** field and can be tracked as mentioned [here](/rest/api/backup/2021-02-10/job-details) using a GET request.
### Tracking the job
GET https://management.azure.com/Subscriptions/ef4ab5a7-c2c0-4304-af80-af49f48af
## Stop protection and delete data
-To remove the protection on a protected file share and delete the backup data as well, perform a delete operation as detailed [here](/rest/api/backup/protecteditems/delete).
+To remove the protection on a protected file share and delete the backup data as well, perform a delete operation as detailed [here](/rest/api/backup/2021-02-10/protected-items/delete).
```http DELETE https://management.azure.com/Subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.RecoveryServices/vaults/{vaultName}/backupFabrics/{fabricName}/protectionContainers/{containerName}/protectedItems/{protectedItemName}?api-version=2019-05-13
backup Restore Azure File Share Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/restore-azure-file-share-rest-api.md
For this article, we'll use the following resources:
## Fetch ContainerName and ProtectedItemName
-For most of the restore related API calls, you need to pass values for the {containerName} and {protectedItemName} URI parameters. Use the ID attribute in the response body of the [GET backupprotectableitems](/rest/api/backup/protecteditems/get) operation to retrieve values for these parameters. In our example, the ID of the file share we want to protect is:
+For most of the restore related API calls, you need to pass values for the {containerName} and {protectedItemName} URI parameters. Use the ID attribute in the response body of the [GET backupprotectableitems](/rest/api/backup/2021-02-10/protected-items/get) operation to retrieve values for these parameters. In our example, the ID of the file share we want to protect is:
`"/Subscriptions/ef4ab5a7-c2c0-4304-af80-af49f48af3d1/resourceGroups/azurefiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupFabrics/Azure/protectionContainers/storagecontainer;storage;azurefiles;afsaccount/protectableItems/azurefileshare;azurefiles`
The recovery point is identified with the {name} field in the response above.
## Full share recovery using REST API
Use this restore option to restore the complete file share in the original or an alternate location.
-Triggering restore is a POST request and you can perform this operation using the [trigger restore](/rest/api/backup/restores/trigger) REST API.
+Triggering restore is a POST request and you can perform this operation using the [trigger restore](/rest/api/backup/2021-02-10/restores/trigger) REST API.
```http POST https://management.azure.com/Subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.RecoveryServices/vaults/{vaultName}/backupFabrics/{fabricName}/protectionContainers/{containerName}/protectedItems/{protectedItemName}/recoveryPoints/{recoveryPointId}/restore?api-version=2019-05-13
Name | Type | Description
| - | - | - |
Properties | AzureFileShareRestoreRequest | RestoreRequestResource properties
-For the complete list of definitions of the request body and other details, refer to the [trigger Restore REST API document](/rest/api/backup/restores/trigger#request-body).
+For the complete list of definitions of the request body and other details, refer to the [trigger Restore REST API document](/rest/api/backup/2021-02-10/restores/trigger#request-body).
### Restore to original location
Name | Type | Description
| - | - | - |
Properties | AzureFileShareRestoreRequest | RestoreRequestResource properties
-For the complete list of definitions of the request body and other details, refer to the [trigger Restore REST API document](/rest/api/backup/restores/trigger#request-body).
+For the complete list of definitions of the request body and other details, refer to the [trigger Restore REST API document](/rest/api/backup/2021-02-10/restores/trigger#request-body).
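As a sketch, a full-share restore to the original location uses a request body along these lines; the property values are illustrative assumptions, so check them against the `AzureFileShareRestoreRequest` definition in the linked reference:

```http
{
  "properties": {
    "objectType": "AzureFileShareRestoreRequest",
    "recoveryType": "OriginalLocation",
    "copyOptions": "Overwrite",
    "restoreRequestType": "FullShareRestore",
    "sourceResourceId": "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Storage/storageAccounts/{storageAccountName}"
  }
}
```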
### Restore to original location for item-level recovery using REST API
backup Use Restapi Update Vault Properties https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/use-restapi-update-vault-properties.md
So you need to carefully choose whether or not to disable soft-delete for a part
### Fetch soft delete state using REST API
-By default, the soft-delete state will be enabled for any newly created Recovery Services vault. To fetch/update the state of soft-delete for a vault, use the backup vault's config related [REST API document](/rest/api/backup/backupresourcevaultconfigs)
+By default, the soft-delete state will be enabled for any newly created Recovery Services vault. To fetch or update the state of soft delete for a vault, use the backup vault's config-related [REST API document](/rest/api/backup/2021-02-10/backup-resource-vault-configs).
To fetch the current state of soft-delete for a vault, use the following *GET* operation
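For example (a sketch; substitute your own subscription, resource group, and vault names, and note that the api-version shown simply mirrors other examples in this article):

```http
GET https://management.azure.com/Subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.RecoveryServices/vaults/{vaultName}/backupconfig/vaultconfig?api-version=2019-05-13
```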
The successful response for the 'GET' operation is shown below:
|Name |Type |Description |
||||
-|200 OK | [BackupResourceVaultConfig](/rest/api/backup/backupresourcevaultconfigs/get#backupresourcevaultconfigresource) | OK |
+|200 OK | [BackupResourceVaultConfig](/rest/api/backup/2021-02-10/backup-resource-vault-configs/get#backupresourcevaultconfigresource) | OK |
##### Example response
PUT https://management.azure.com/Subscriptions/00000000-0000-0000-0000-000000000
The following common definitions are used to create a request body
-For more details, refer to [the REST API documentation](/rest/api/backup/backupresourcevaultconfigs/update#request-body)
+For more details, refer to [the REST API documentation](/rest/api/backup/2021-02-10/backup-resource-vault-configs/update#request-body)
|Name |Required |Type |Description |
|||||
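A sketch of the *PATCH* call that disables soft delete is shown here; the `softDeleteFeatureState` property name is taken from the `BackupResourceVaultConfig` schema, so verify it against the linked reference before use:

```http
PATCH https://management.azure.com/Subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.RecoveryServices/vaults/{vaultName}/backupconfig/vaultconfig?api-version=2019-05-13

{
  "properties": {
    "softDeleteFeatureState": "Disabled"
  }
}
```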
The successful response for the 'PATCH' operation is shown below:
|Name |Type |Description |
||||
-|200 OK | [BackupResourceVaultConfig](/rest/api/backup/backupresourcevaultconfigs/get#backupresourcevaultconfigresource) | OK |
+|200 OK | [BackupResourceVaultConfig](/rest/api/backup/2021-02-10/backup-resource-vault-configs/get#backupresourcevaultconfigresource) | OK |
##### Example response for the PATCH operation
bastion Bastion Connect Vm Rdp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/bastion/bastion-connect-vm-rdp.md
Previously updated : 10/21/2020 Last updated : 06/21/2021 # Customer intent: As someone with a networking background, I want to connect to an Azure virtual machine running Windows that doesn't have a public IP address by using Azure Bastion.
Before you begin, verify that you have met the following criteria:
## Next steps
-Read the [Bastion FAQ](bastion-faq.md).
+Read the [Bastion FAQ](bastion-faq.md).
bastion Bastion Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/bastion/bastion-faq.md
Previously updated : 02/05/2021 Last updated : 06/22/2021 # Azure Bastion FAQ
+## FAQs
+### <a name="publicip"></a>Do I need a public IP on my virtual machine to connect via Azure Bastion?
+
+No. When you connect to a VM using Azure Bastion, you don't need a public IP on the Azure virtual machine that you're connecting to. The Bastion service opens the RDP/SSH session to your virtual machine over its private IP, within your virtual network.
+
+### Is IPv6 supported?
+
+At this time, IPv6 is not supported. Azure Bastion supports IPv4 only.
+
+### Can I use Azure Bastion with Azure Private DNS Zones?
+
+The use of Azure Bastion with Azure Private DNS zones that are integrated with private endpoints is not supported at this time. Before you deploy your Azure Bastion resource, make sure that the host virtual network is not linked to a private DNS zone that's integrated with a private endpoint.
+
+### <a name="rdpssh"></a>Do I need an RDP or SSH client?
+
+No. You don't need an RDP or SSH client to reach your Azure virtual machine. Use the [Azure portal](https://portal.azure.com) to get RDP/SSH access to your virtual machine directly in the browser.
+
+### <a name="agent"></a>Do I need an agent running in the Azure virtual machine?
+
+No. You don't need to install an agent or any software on your browser or your Azure virtual machine. The Bastion service is agentless and doesn't require any additional software for RDP/SSH.
+
+### <a name="limits"></a>How many concurrent RDP and SSH sessions does each Azure Bastion support?
+
+Both RDP and SSH are usage-based protocols. High session usage will cause the bastion host to support a lower total number of sessions. The numbers below assume normal day-to-day workflows.
++
+### <a name="rdpfeaturesupport"></a>What features are supported in an RDP session?
+
+At this time, only text copy/paste is supported. Features, such as file copy, are not supported. Feel free to share your feedback about new features on the [Azure Bastion Feedback page](https://feedback.azure.com/forums/217313-networking?category_id=367303).
+
+### <a name="aadj"></a>Does Bastion hardening work with AADJ VM extension-joined VMs?
+
+This feature doesn't work with AADJ VM extension-joined machines using Azure AD users. For more information, see [Windows Azure VMs and Azure AD](../active-directory/devices/howto-vm-sign-in-azure-ad-windows.md#requirements).
+
+### <a name="browsers"></a>Which browsers are supported?
+
+The browser must support HTML 5. Use the Microsoft Edge browser or Google Chrome on Windows. For Apple Mac, use the Google Chrome browser. Microsoft Edge Chromium is also supported on both Windows and Mac.
+
+### <a name="data"></a>Where does Azure Bastion store customer data?
+
+Azure Bastion doesn't move or store customer data out of the region it is deployed in.
+
+### <a name="roles"></a>Are any roles required to access a virtual machine?
+
+In order to make a connection, the following roles are required:
+
+* Reader role on the virtual machine.
+* Reader role on the NIC with private IP of the virtual machine.
+* Reader role on the Azure Bastion resource.
+* Reader Role on the Virtual Network (Not needed if there is no peered virtual network).
+
+### <a name="pricingpage"></a>What is the pricing?
+
+For more information, see the [pricing page](https://aka.ms/BastionHostPricing).
+
+### <a name="rdscal"></a>Does Azure Bastion require an RDS CAL for administrative purposes on Azure-hosted VMs?
+
+No, access to Windows Server VMs by Azure Bastion does not require an [RDS CAL](https://www.microsoft.com/p/windows-server-remote-desktop-services-cal/dg7gmgf0dvsv?activetab=pivot:overviewtab) when used solely for administrative purposes.
+
+### <a name="keyboard"></a>Which keyboard layouts are supported during the Bastion remote session?
+
+Azure Bastion currently supports the en-us-qwerty keyboard layout inside the VM. Support for other keyboard layouts and locales is in progress.
+
+### <a name="udr"></a>Is user-defined routing (UDR) supported on an Azure Bastion subnet?
+
+No. UDR is not supported on an Azure Bastion subnet.
+
+For scenarios that include both Azure Bastion and Azure Firewall/Network Virtual Appliance (NVA) in the same virtual network, you don't need to force traffic from an Azure Bastion subnet to Azure Firewall because the communication between Azure Bastion and your VMs is private. For more information, see [Accessing VMs behind Azure Firewall with Bastion](https://azure.microsoft.com/blog/accessing-virtual-machines-behind-azure-firewall-with-azure-bastion/).
+
+### <a name="subnet"></a> Can I deploy multiple Azure resources in my Azure Bastion subnet?
+
+No. The Azure Bastion subnet (*AzureBastionSubnet*) is reserved only for the deployment of your Azure Bastion resource.
+
+### <a name="session"></a>Why do I get "Your session has expired" error message before the Bastion session starts?
+
+A session should be initiated only from the Azure portal. Sign in to the Azure portal and begin your session again. If you go to the URL directly from another browser session or tab, this error is expected. It helps ensure that your session is more secure and that the session can be accessed only through the Azure portal.
+
+### <a name="udr"></a>How do I handle deployment failures?
+
+Review any error messages and [raise a support request in the Azure portal](../azure-portal/supportability/how-to-create-azure-support-request.md) as needed. Deployment failures may result from [Azure subscription limits, quotas, and constraints](../azure-resource-manager/management/azure-subscription-service-limits.md). Specifically, customers may encounter a limit on the number of public IP addresses allowed per subscription that causes the Azure Bastion deployment to fail.
+
+### <a name="dr"></a>How do I incorporate Azure Bastion in my Disaster Recovery plan?
+
+Azure Bastion is deployed within VNets or peered VNets, and is associated to an Azure region. You are responsible for deploying Azure Bastion to a Disaster Recovery (DR) site VNet. In the event of an Azure region failure, perform a failover operation for your VMs to the DR region. Then, use the Azure Bastion host that's deployed in the DR region to connect to the VMs that are now deployed there.
+
+## <a name="peering"></a>VNet peering
+
+### Can I still deploy multiple Bastion hosts across peered virtual networks?
+
+Yes. By default, a user sees the Bastion host that is deployed in the same virtual network in which the VM resides. However, in the **Connect** menu, a user can see multiple Bastion hosts detected across peered networks. They can select the Bastion host that they prefer to use to connect to the VM deployed in the virtual network.
+
+### If my peered VNets are deployed in different subscriptions, will connectivity via Bastion work?
+
+Yes, connectivity via Bastion will continue to work for peered VNets across different subscriptions for a single tenant. Subscriptions across two different tenants are not supported. To see Bastion in the **Connect** drop-down menu, the user must select the subscriptions they have access to in **Subscription > global subscription**.
++
+### I have access to the peered VNet, but I can't see the VM deployed there.
+
+Make sure the user has **read** access to both the VM and the peered VNet. Additionally, check under IAM that the user has **read** access to the following resources:
+
+* Reader role on the virtual machine.
+* Reader role on the NIC with private IP of the virtual machine.
+* Reader role on the Azure Bastion resource.
+* Reader Role on the Virtual Network (Not needed if there is no peered virtual network).
+
+|Permissions|Description|Permission type|
+||| |
+|Microsoft.Network/bastionHosts/read |Gets a Bastion Host|Action|
+|Microsoft.Network/virtualNetworks/BastionHosts/action |Gets Bastion Host references in a Virtual Network.|Action|
+|Microsoft.Network/virtualNetworks/bastionHosts/default/action|Gets Bastion Host references in a Virtual Network.|Action|
+|Microsoft.Network/networkInterfaces/read|Gets a network interface definition.|Action|
+|Microsoft.Network/networkInterfaces/ipconfigurations/read|Gets a network interface IP configuration definition.|Action|
+|Microsoft.Network/virtualNetworks/read|Get the virtual network definition|Action|
+|Microsoft.Network/virtualNetworks/subnets/virtualMachines/read|Gets references to all the virtual machines in a virtual network subnet|Action|
+|Microsoft.Network/virtualNetworks/virtualMachines/read|Gets references to all the virtual machines in a virtual network|Action|
bastion Bastion Nsg https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/bastion/bastion-nsg.md
Previously updated : 12/09/2020 Last updated : 06/21/2021 # Working with NSG access and Azure Bastion
Azure Bastion is deployed specifically to ***AzureBastionSubnet***.
* **Ingress Traffic from Azure Load Balancer:** For health probes, enable port 443 inbound from the **AzureLoadBalancer** service tag. This enables Azure Load Balancer to detect connectivity
- :::image type="content" source="./media/bastion-nsg/inbound.png" alt-text="Screenshot shows inbound security rules for Azure Bastion connectivity.":::
+ :::image type="content" source="./media/bastion-nsg/inbound.png" alt-text="Screenshot shows inbound security rules for Azure Bastion connectivity." lightbox="./media/bastion-nsg/inbound.png":::
* **Egress Traffic:**
Azure Bastion is deployed specifically to ***AzureBastionSubnet***.
* **Egress Traffic to Internet:** Azure Bastion needs to be able to communicate with the Internet for session and certificate validation. For this reason, we recommend enabling port 80 outbound to the **Internet.**
- :::image type="content" source="./media/bastion-nsg/outbound.png" alt-text="Screenshot shows outbound security rules for Azure Bastion connectivity.":::
+ :::image type="content" source="./media/bastion-nsg/outbound.png" alt-text="Screenshot shows outbound security rules for Azure Bastion connectivity." lightbox="./media/bastion-nsg/outbound.png":::
### Target VM Subnet
This is the subnet that contains the target virtual machine that you want to RDP/SSH to.
bastion Bastion Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/bastion/bastion-overview.md
Previously updated : 10/13/2020 Last updated : 06/22/2021
Subscribe to the RSS feed and view the latest Azure Bastion feature updates on t
## FAQ
+For frequently asked questions, see the Bastion [FAQ](bastion-faq.md).
## Next steps
bastion Tutorial Create Host Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/bastion/tutorial-create-host-portal.md
This section helps you create the bastion object in your VNet. This is required
[!INCLUDE [Connect to a Windows VM](../../includes/bastion-vm-rdp.md)] + ## Clean up resources If you're not going to continue to use this application, delete
bastion Vnet Peering https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/bastion/vnet-peering.md
Previously updated : 12/09/2020 Last updated : 06/22/2021
This figure shows the architecture of an Azure Bastion deployment in a hub-and-s
## FAQ
+For frequently asked questions, see the Bastion VNet Peering [FAQ](bastion-faq.md#peering).
## Next steps
batch Batch Pool Vm Sizes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/batch-pool-vm-sizes.md
Batch pools in the Virtual Machine configuration support almost all [VM sizes](.
| NCv3 | All sizes |
| NCasT4_v3 | All sizes |
| ND | All sizes |
+| NDv4 | All sizes |
| NDv2 | None - not yet available |
| NP | All sizes |
| NV | All sizes |
It is strongly recommended to avoid images with impending Batch support end of l
## Next steps - Learn about the [Batch service workflow and primary resources](batch-service-workflow-features.md) such as pools, nodes, jobs, and tasks.-- For information about using compute-intensive VM sizes, see [Use RDMA-capable or GPU-enabled instances in Batch pools](batch-pool-compute-intensive-sizes.md).
+- For information about using compute-intensive VM sizes, see [Use RDMA-capable or GPU-enabled instances in Batch pools](batch-pool-compute-intensive-sizes.md).
cloud-services-extended-support Deploy Prerequisite https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/deploy-prerequisite.md
Deployments that utilized the old diagnostics plugins need the settings removed
```xml <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" /> ```
-## Subscription Access Level
+## Access Control
The subscription containing networking resources needs to have [network contributor](https://docs.microsoft.com/azure/role-based-access-control/built-in-roles#network-contributor) access or above for Cloud Services (extended support). For more details, refer to [RBAC built-in roles](https://docs.microsoft.com/azure/role-based-access-control/built-in-roles).
cloud-services Cloud Services Dotnet Install Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-dotnet-install-dotnet.md
To download the web installer for the .NET Framework, choose the version that yo
* [.NET Framework 4.8 Web installer](https://go.microsoft.com/fwlink/?LinkId=2150985) * [.NET Framework 4.7.2 web installer](https://go.microsoft.com/fwlink/?LinkId=863262)
-* [.NET Framework 4.6.2 web installer](https://www.microsoft.com/download/details.aspx?id=53345)
+* [.NET Framework 4.6.2 web installer](https://dotnet.microsoft.com/download/dotnet-framework/net462)
To add the installer for a *web* role: 1. In **Solution Explorer**, under **Roles** in your cloud service project, right-click your *web* role and select **Add** > **New Folder**. Create a folder named **bin**.
You can use startup tasks to perform operations before a role starts. Installing
REM ***** To install .NET 4.5.2 set the variable netfx to "NDP452" ***** REM ***** To install .NET 4.6 set the variable netfx to "NDP46" ***** REM ***** To install .NET 4.6.1 set the variable netfx to "NDP461" ***** https://go.microsoft.com/fwlink/?LinkId=671729
- REM ***** To install .NET 4.6.2 set the variable netfx to "NDP462" ***** https://www.microsoft.com/download/details.aspx?id=53345
+ REM ***** To install .NET 4.6.2 set the variable netfx to "NDP462" ***** https://dotnet.microsoft.com/download/dotnet-framework/net462
REM ***** To install .NET 4.7 set the variable netfx to "NDP47" ***** REM ***** To install .NET 4.7.1 set the variable netfx to "NDP471" ***** https://go.microsoft.com/fwlink/?LinkId=852095 REM ***** To install .NET 4.7.2 set the variable netfx to "NDP472" ***** https://go.microsoft.com/fwlink/?LinkId=863262
cloud-services Cloud Services Guestos Msrc Releases https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-guestos-msrc-releases.md
The following tables show the Microsoft Security Response Center (MSRC) updates
[3051768]:https://support.microsoft.com/kb/3051768 [3061518]:https://support.microsoft.com/kb/3061518
-[3038314]:https://web.archive.org/web/20180920122209/https:/support.microsoft.com/en-us/help/3038314/ms15-032-cumulative-security-update-for-internet-explorer-april-14-201
+[3038314]:https://support.microsoft.com/en-us/topic/ms15-018-cumulative-security-update-for-internet-explorer-march-10-2015-ebbad1d0-8db0-4639-a143-10213c78afb5
[3042553]:https://support.microsoft.com/kb/3042553 [3046306]:https://support.microsoft.com/kb/3046306 [3046269]:https://support.microsoft.com/kb/3046269
cognitive-services Intro To Spatial Analysis Public Preview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/intro-to-spatial-analysis-public-preview.md
Previously updated : 03/29/2021 Last updated : 06/21/2021 # What is Spatial Analysis?
The core operations of Spatial Analysis are all built on a pipeline that ingests
## Get started
-### Follow a quickstart
- Follow the [quickstart](spatial-analysis-container.md) to set up the container and begin analyzing video. ## Responsible use of Spatial Analysis technology
cognitive-services Overview Image Analysis https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/overview-image-analysis.md
Previously updated : 03/30/2021 Last updated : 06/21/2021 keywords: computer vision, computer vision applications, computer vision service
cognitive-services Overview Ocr https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/overview-ocr.md
Previously updated : 03/29/2021 Last updated : 06/21/2021
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/overview.md
Previously updated : 03/29/2021 Last updated : 06/21/2021 keywords: computer vision, computer vision applications, computer vision service
cognitive-services How To Custom Commands Developer Flow Test https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-custom-commands-developer-flow-test.md
In this article, you learn different approaches to testing a custom commands app
Test in the portal is the simplest and quickest way to check whether your custom commands application works as expected. After the app is successfully trained, select the `Test` button to start testing. > [!div class="mx-imgBorder"]
-> ![Test in the portal](media/custom-commands/create-basic-test-chat.png)
+> ![Test in the portal](media/custom-commands/create-basic-test-chat-no-mic.png)
## Test with Windows Voice Assistant Client
cognitive-services How To Custom Commands Update Command From Client https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-custom-commands-update-command-from-client.md
You can test this scenario in the Custom Commands portal:
1. Open the side panel and select **Activity editor**. 1. Type and send the `RemoteCommand` event specified in the previous section. > [!div class="mx-imgBorder"]
- > ![Screenshot that shows the event for a remote command.](media/custom-commands/send-remote-command-activity.png)
+ > ![Screenshot that shows the event for a remote command.](media/custom-commands/send-remote-command-activity-no-mic.png)
Note how the value for the parameter `"OnOff"` was set to `"on"` through an activity from the client instead of speech or text.
To test this scenario, let's create a new command in the current application:
} ``` 1. Send the text `get device info`.
- ![Screenshot that shows an activity for sending client context.](media/custom-commands/send-client-context-activity.png)
+ ![Screenshot that shows an activity for sending client context.](media/custom-commands/send-client-context-activity-no-mic.png)
Note a few things: - You need to send this activity only once (ideally, right after you start a connection).
cognitive-services How To Custom Commands Update Command From Web Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-custom-commands-update-command-from-web-endpoint.md
Let's hook up the Azure function with the existing Custom Commands app:
1. Select **Test**. 1. Send `increment` a few times (which is the example sentence for the `IncrementCounter` command). > [!div class="mx-imgBorder"]
- > :::image type="content" source="./media/custom-commands/increment-counter-example.png" alt-text="Screenshot that shows an increment counter example.":::
+ > :::image type="content" source="./media/custom-commands/increment-counter-example-no-mic.png" alt-text="Screenshot that shows an increment counter example.":::
Notice how the Azure function increments the value of the `Counter` parameter on each turn.
cognitive-services How To Develop Custom Commands Application https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/how-to-develop-custom-commands-application.md
Try out the following utterance examples by using voice or text:
- Expected response: Ok, setting an alarm for 9 am tomorrow > [!div class="mx-imgBorder"]
-> ![Screenshot showing the test in a web-chat interface.](media/custom-commands/create-basic-test-chat.png)
+> ![Screenshot showing the test in a web-chat interface.](media/custom-commands/create-basic-test-chat-no-mic.png)
> [!TIP] > In the test pane, you can select **Turn details** for information about how this voice input or text input was processed.
cognitive-services Label Tool https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/form-recognizer/label-tool.md
To try out the Form Recognizer Sample Labeling Tool online, go to the [FOTT webs
### [v2.1](#tab/v2-1) > [!div class="nextstepaction"]
-> [Try Prebuilt Models](https://fott-2.1.azurewebsites.net/)
+> [Try Prebuilt Models](https://fott.azurewebsites.net/)
### [v2.0](#tab/v2-0)
cognitive-services Concept Rewards https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/personalizer/concept-rewards.md
# Reward scores indicate success of personalization
-The reward score indicates how well the personalization choice, [RewardActionID](/rest/api/cognitiveservices/personalizer/rank/rank#response), resulted for the user. The value of the reward score is determined by your business logic, based on observations of user behavior.
+The reward score indicates how well the personalization choice, [RewardActionID](/rest/api/personalizer/1.0/rank/rank#response), resulted for the user. The value of the reward score is determined by your business logic, based on observations of user behavior.
Personalizer trains its machine learning models by evaluating the rewards.
Learn [how to](how-to-settings.md#configure-rewards-for-the-feedback-loop) confi
## Use Reward API to send reward score to Personalizer
-Rewards are sent to Personalizer by the [Reward API](/rest/api/cognitiveservices/personalizer/events/reward). Typically, a reward is a number from 0 to 1. A negative reward, with the value of -1, is possible in certain scenarios and should only be used if you are experienced with reinforcement learning (RL). Personalizer trains the model to achieve the highest possible sum of rewards over time.
+Rewards are sent to Personalizer by the [Reward API](/rest/api/personalizer/1.0/events/reward). Typically, a reward is a number from 0 to 1. A negative reward, with the value of -1, is possible in certain scenarios and should only be used if you are experienced with reinforcement learning (RL). Personalizer trains the model to achieve the highest possible sum of rewards over time.
Rewards are sent after the user behavior has happened, which could be days later. The maximum amount of time Personalizer will wait until an event is considered to have no reward or a default reward is configured with the [Reward Wait Time](#reward-wait-time) in the Azure portal.
Follow these recommendations for better results.
* [Reinforcement learning](concepts-reinforcement-learning.md) * [Try the Rank API](https://westus2.dev.cognitive.microsoft.com/docs/services/personalizer-api/operations/Rank/console)
-* [Try the Reward API](https://westus2.dev.cognitive.microsoft.com/docs/services/personalizer-api/operations/Reward)
+* [Try the Reward API](https://westus2.dev.cognitive.microsoft.com/docs/services/personalizer-api/operations/Reward)
cognitive-services Model Versioning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/text-analytics/concepts/model-versioning.md
Previously updated : 06/03/2021 Last updated : 06/17/2021
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/text-analytics/language-support.md
Previously updated : 06/14/2021 Last updated : 06/17/2021 # Text Analytics API v3 language support
-> [!NOTE]
-> Languages are added as new model versions are released for specific Text Analytics features. See [Model versioning](concepts/model-versioning.md) for the latest model version for the features you're using, and for more information.
- #### [Sentiment Analysis](#tab/sentiment-analysis)
+> [!NOTE]
+> Languages are added as new [model versions](concepts/model-versioning.md) are released for specific Text Analytics features. The current model version for Sentiment Analysis is `2020-04-01`.
+
| Language | Language code | v3 support | Starting v3 model version: | Notes |
|:-|:-:|:-:|:--:|-:|
| Chinese-Simplified | `zh-hans` | ✓ | 2019-10-01 | `zh` also accepted |
> [!NOTE]
> * Only "Person", "Location" and "Organization" entities are returned for languages marked with *.
+> * Languages are added as new [model versions](concepts/model-versioning.md) are released for specific Text Analytics features. The current model version for NER is `2021-06-01`.
| Language | Language code | v3 support | Starting with v3 model version: | Notes |
|:--|:-:|:-:|:-:|::|
| Swedish | `sv` | ✓* | 2019-10-01 | |
| Turkish | `tr` | ✓* | 2019-10-01 | |
-#### [Key phrase extraction](#tab/key-phrase-extraction)
+#### [Key Phrase Extraction](#tab/key-phrase-extraction)
+
+> [!NOTE]
+> Languages are added as new [model versions](concepts/model-versioning.md) are released for specific Text Analytics features. The current model version for Key Phrase Extraction is `2021-06-01`.
| Language | Language code | v3 support | Starting with v3 model version: | Notes |
|:-|:-:|:-:|:--:|::|
| Swedish | `sv` | ✓ | 2019-10-01 | |
| Turkish | `tr` | ✓ | 2020-07-01 | |
-#### [Entity linking](#tab/entity-linking)
+#### [Entity Linking](#tab/entity-linking)
+
+> [!NOTE]
+> Languages are added as new [model versions](concepts/model-versioning.md) are released for specific Text Analytics features. The current model version for Entity Linking is `2020-02-01`.
| Language | Language code | v3 support | Starting with v3 model version: | Notes |
|:|:-:|:-:|:--:|:--:|
| English | `en` | ✓ | 2019-10-01 | |
| Spanish | `es` | ✓ | 2019-10-01 | |
+#### [Text Analytics for health](#tab/health)
+
+> [!NOTE]
+> * The container uses different model versions than the API endpoints and SDK.
+> * Languages are added as new model versions are released for specific Text Analytics features. The current [model versions](concepts/model-versioning.md) for Text Analytics for health are:
+> * API and SDK: `2021-05-15`
+> * Container: `2021-03-01`
++
+| Language | Language code | v3 support | Starting with v3 model version: | Notes |
+|:|:-:|:-:|:--:|:--:|
+| English | `en` | ✓ | API endpoint: 2019-10-01 <br> Container: 2020-04-16 | |
+ #### [Personally Identifiable Information (PII)](#tab/pii)
+> [!NOTE]
+> Languages are added as new [model versions](concepts/model-versioning.md) are released for specific Text Analytics features. The current model version for PII is `2021-01-15`.
| Language | Language code | v3 support | Starting with v3 model version: | Notes |
|:--|:-:|:-:|:-:|::|
| Chinese-Simplified | `zh-hans` | ✓ | 2021-01-15 | `zh` also accepted |
#### [Language Detection](#tab/language-detection)
+> [!NOTE]
+> Languages are added as new [model versions](concepts/model-versioning.md) are released for specific Text Analytics features. The current model version for Language Detection is `2021-01-05`.
+ The Text Analytics API can detect a wide range of languages, variants, dialects, and some regional/cultural languages, and return detected languages with their name and code. Text Analytics Language Detection language code parameters conform to [BCP-47](https://tools.ietf.org/html/bcp47) standard with most of them conforming to [ISO-639-1](https://www.iso.org/iso-639-language-codes.html) identifiers. If you have content expressed in a less frequently used language, you can try Language Detection to see if it returns a code. The response for languages that cannot be detected is `unknown`.
cognitive-services Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/whats-new-docs.md
Welcome to what's new in the Cognitive Services docs from May 1, 2021 through Ma
### Updated articles -- [Azure Cognitive Services container image tags and release notes](/azure/cognitive-services/containers/container-image-tags.md)
+- [Azure Cognitive Services container image tags and release notes](/azure/cognitive-services/containers/container-image-tags)
## Form Recognizer ### New articles -- [Reference: Azure Form Recognizer client library v3.0.0 and REST API v2.0](/azure/cognitive-services/form-recognizer/api-v2-0/reference-sdk-api-v2-0.md)
+- [Reference: Azure Form Recognizer client library v3.0.0 and REST API v2.0](/azure/cognitive-services/form-recognizer/api-v2-0/reference-sdk-api-v2-0)
### Updated articles -- [Form Recognizer prebuilt business cards model](/azure/cognitive-services/form-recognizer/concept-business-cards.md)-- [Quickstart: Get started with the client library SDKs or REST API](/azure/cognitive-services/form-recognizer/quickstarts/client-library.md)-- [What's new in Form Recognizer](/azure/cognitive-services/form-recognizer/whats-new.md)-- [Form Recognizer landing page](/azure/cognitive-services/form-recognizer/form-recognizer.md)
+- [Form Recognizer prebuilt business cards model](/azure/cognitive-services/form-recognizer/concept-business-cards)
+- [Quickstart: Get started with the client library SDKs or REST API](/azure/cognitive-services/form-recognizer/quickstarts/client-library)
+- [What's new in Form Recognizer](/azure/cognitive-services/form-recognizer/whats-new)
+- [Form Recognizer landing page](/azure/cognitive-services/form-recognizer)
## Translator
Welcome to what's new in the Cognitive Services docs from May 1, 2021 through Ma
### Updated articles -- [What's new in Personalizer](/azure/cognitive-services/personalizer/whats-new.md)
+- [What's new in Personalizer](/azure/cognitive-services/personalizer/whats-new)
## Text Analytics ### Updated articles -- [Tutorial: Integrate Power BI with the Text Analytics Cognitive Service](/azure/cognitive-services/text-analytics/tutorials/tutorial-power-bi-key-phrases.md)-- [Extract information in Excel using Text Analytics and Power Automate](/azure/cognitive-services/text-analytics/tutorials/extract-excel-information.md)-- [How to call the Text Analytics REST API](/azure/cognitive-services/text-analytics/how-tos/text-analytics-how-to-call-api.md)-- [How to use Named Entity Recognition in Text Analytics](/azure/cognitive-services/text-analytics/how-tos/text-analytics-how-to-entity-linking.md)-- [What's new in the Text Analytics API?](/azure/cognitive-services/text-analytics/whats-new.md)
+- [Tutorial: Integrate Power BI with the Text Analytics Cognitive Service](/azure/cognitive-services/text-analytics/tutorials/tutorial-power-bi-key-phrases)
+- [Extract information in Excel using Text Analytics and Power Automate](/azure/cognitive-services/text-analytics/tutorials/extract-excel-information)
+- [How to call the Text Analytics REST API](/azure/cognitive-services/text-analytics/how-tos/text-analytics-how-to-call-api)
+- [How to use Named Entity Recognition in Text Analytics](/azure/cognitive-services/text-analytics/how-tos/text-analytics-how-to-entity-linking)
+- [What's new in the Text Analytics API?](/azure/cognitive-services/text-analytics/whats-new)
## Community contributors
container-registry Manual Regional Move https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/manual-regional-move.md
For more information, see [Use exported template from the Azure portal](../azure
> [!IMPORTANT] > If you want to encrypt the target registry using a customer-managed key, make sure to update the template with settings for the required managed identity, key vault, and key. You can only enable the customer-managed key when you deploy the registry. >
-> For more information, see [Encrypt registry using customer-managed key](/container-registry-customer-managed-keys.md#enable-customer-managed-keytemplate).
+> For more information, see [Encrypt registry using customer-managed key](/azure/container-registry/container-registry-customer-managed-keys#enable-customer-managed-keytemplate).
### Create resource group
cosmos-db Create Sql Api Dotnet V4 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/create-sql-api-dotnet-v4.md
Title: Manage Azure Cosmos DB SQL API resources using .NET V4 SDK
-description: Quickstart to build a console app using .NET V4 SDK to manage Azure Cosmos DB SQL API account resources.
+description: Use this quickstart to build a console app by using the .NET V4 SDK to manage Azure Cosmos DB SQL API account resources.
Last updated 04/07/2021
-# Quickstart: Build a console app using the .NET V4 SDK (Preview) to manage Azure Cosmos DB SQL API account resources.
+# Quickstart: Build a console app by using the .NET V4 SDK (preview) to manage Azure Cosmos DB SQL API account resources
[!INCLUDE[appliesto-sql-api](includes/appliesto-sql-api.md)] > [!div class="op_single_selector"]
> * [Python](create-sql-api-python.md) > * [Xamarin](create-sql-api-xamarin-dotnet.md)
-Get started with the Azure Cosmos DB SQL API client library for .NET. Follow the steps in this doc to install the .NET V4 (Azure.Cosmos) package, build an app, and try out the example code for basic CRUD operations on the data stored in Azure Cosmos DB.
+Get started with the Azure Cosmos DB SQL API client library for .NET. Follow the steps in this article to install the .NET V4 (Azure.Cosmos) package and build an app. Then, try out the example code for basic create, read, update, and delete (CRUD) operations on the data stored in Azure Cosmos DB.
> [!IMPORTANT]
-> The .NET V4 SDK for Azure Cosmos DB is currently in public preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> The .NET V4 SDK for Azure Cosmos DB is currently in public preview. This preview version is provided without a service-level agreement, and we don't recommend it for production workloads. Certain features might not be supported or might have constrained capabilities.
+>
> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). Azure Cosmos DB is Microsoft's fast NoSQL database with open APIs for any scale. You can use Azure Cosmos DB to quickly create and query key/value, document, and graph databases. Use the Azure Cosmos DB SQL API client library for .NET to:
-* Create an Azure Cosmos database and a container
-* Add sample data to the container
-* Query the data
-* Delete the database
+* Create an Azure Cosmos database and a container.
+* Add sample data to the container.
+* Query the data.
+* Delete the database.
[Library source code](https://github.com/Azure/azure-cosmos-dotnet-v3/tree/v4) | [Package (NuGet)](https://www.nuget.org/packages/Azure.Cosmos) ## Prerequisites
-* Azure subscription - [create one for free](https://azure.microsoft.com/free/) or you can [Try Azure Cosmos DB for free](https://azure.microsoft.com/try/cosmosdb/) without an Azure subscription, free of charge and commitments.
+* Azure subscription. [Create one for free](https://azure.microsoft.com/free/). You can also [try Azure Cosmos DB](https://azure.microsoft.com/try/cosmosdb/) without an Azure subscription, free of charge and commitments.
* [NET Core 3 SDK](https://dotnet.microsoft.com/download/dotnet-core). You can verify which version is available in your environment by running `dotnet --version`.
-## Setting up
+## Set up
-This section walks you through creating an Azure Cosmos account and setting up a project that uses Azure Cosmos DB SQL API client library for .NET to manage resources. The example code described in this article creates a `FamilyDatabase` database and family members (each family member is an item) within that database. Each family member has properties such as `Id, FamilyName, FirstName, LastName, Parents, Children, Address,`. The `LastName` property is used as the partition key for the container.
+This section walks you through creating an Azure Cosmos account and setting up a project that uses the Azure Cosmos DB SQL API client library for .NET to manage resources.
+
+The example code described in this article creates a `FamilyDatabase` database and family members within that database. Each family member is an item and has properties such as `Id`, `FamilyName`, `FirstName`, `LastName`, `Parents`, `Children`, and `Address`. The `LastName` property is used as the partition key for the container.
### <a id="create-account"></a>Create an Azure Cosmos account
-If you use the [Try Azure Cosmos DB for free](https://azure.microsoft.com/try/cosmosdb/) option to create an Azure Cosmos account, you must create an Azure Cosmos DB account of type **SQL API**. An Azure Cosmos DB test account is already created for you. You don't have to create the account explicitly, so you can skip this section and move to the next section.
+If you use the [Try Azure Cosmos DB for free](https://azure.microsoft.com/try/cosmosdb/) option to create an Azure Cosmos account, you must create an Azure Cosmos account of type **SQL API**. An Azure Cosmos test account is already created for you. You don't have to create the account explicitly, so you can skip this section and move to the next section.
If you have your own Azure subscription or created a subscription for free, you should create an Azure Cosmos account explicitly. The following code will create an Azure Cosmos account with session consistency. The account is replicated in `South Central US` and `North Central US`.
-You can use Azure Cloud Shell to create the Azure Cosmos account. Azure Cloud Shell is an interactive, authenticated, browser-accessible shell for managing Azure resources. It provides the flexibility of choosing the shell experience that best suits the way you work, either Bash or PowerShell. For this quickstart, choose **Bash** mode. Azure Cloud Shell also requires a storage account, you can create one when prompted.
+You can use Azure Cloud Shell to create the Azure Cosmos account. Azure Cloud Shell is an interactive, authenticated, browser-accessible shell for managing Azure resources. It provides the flexibility of choosing the shell experience that best suits the way you work: either Bash or PowerShell.
-Select the **Try It** button next to the following code, choose **Bash** mode select **create a storage account** and login to Cloud Shell. Next copy and paste the following code to Azure Cloud Shell and run it. The Azure Cosmos account name must be globally unique, make sure to update the `mysqlapicosmosdb` value before you run the command.
+For this quickstart, use Bash. Azure Cloud Shell also requires a storage account. You can create one when prompted.
-```azurecli-interactive
+1. Select the **Try It** button next to the following code, choose **Bash** mode, select **create a storage account**, and sign in to Cloud Shell.
-# Set variables for the new SQL API account, database, and container
-resourceGroupName='myResourceGroup'
-location='southcentralus'
+1. Copy and paste the following code to Azure Cloud Shell and run it. The Azure Cosmos account name must be globally unique, so be sure to update the `mysqlapicosmosdb` value before you run the command.
-# The Azure Cosmos account name must be globally unique, make sure to update the `mysqlapicosmosdb` value before you run the command
-accountName='mysqlapicosmosdb'
+ ```azurecli-interactive
-# Create a resource group
-az group create \
- --name $resourceGroupName \
- --location $location
+ # Set variables for the new SQL API account, database, and container
+ resourceGroupName='myResourceGroup'
+ location='southcentralus'
-# Create a SQL API Cosmos DB account with session consistency and multi-region writes enabled
-az cosmosdb create \
- --resource-group $resourceGroupName \
- --name $accountName \
- --kind GlobalDocumentDB \
- --locations regionName="South Central US" failoverPriority=0 --locations regionName="North Central US" failoverPriority=1 \
- --default-consistency-level "Session" \
- --enable-multiple-write-locations true
+ # The Azure Cosmos account name must be globally unique, so be sure to update the `mysqlapicosmosdb` value before you run the command
+ accountName='mysqlapicosmosdb'
-```
+ # Create a resource group
+ az group create \
+ --name $resourceGroupName \
+ --location $location
-The creation of the Azure Cosmos account takes a while, once the operation is successful, you can see the confirmation output. After the command completes successfully, sign into the [Azure portal](https://portal.azure.com/) and verify that the Azure Cosmos account with the specified name exists. You can close the Azure Cloud Shell window after the resource is created.
+ # Create a SQL API Cosmos DB account with session consistency and multi-region writes enabled
+ az cosmosdb create \
+ --resource-group $resourceGroupName \
+ --name $accountName \
+ --kind GlobalDocumentDB \
+ --locations regionName="South Central US" failoverPriority=0 --locations regionName="North Central US" failoverPriority=1 \
+ --default-consistency-level "Session" \
+ --enable-multiple-write-locations true
+
+ ```
-### <a id="create-dotnet-core-app"></a>Create a new .NET app
+The creation of the Azure Cosmos account takes a while. After the operation is successful, you can see the confirmation output. Sign in to the [Azure portal](https://portal.azure.com/) and verify that the Azure Cosmos account with the specified name exists. You can close the Azure Cloud Shell window after the resource is created.
-Create a new .NET application in your preferred editor or IDE. Open the Windows command prompt or a Terminal window from your local computer. You will run all the commands in the next sections from the command prompt or terminal. Run the following dotnet new command to create a new app with the name `todo`. The `--langVersion` parameter sets the LangVersion property in the created project file.
+### <a id="create-dotnet-core-app"></a>Create a .NET app
+
+Create a .NET application in your preferred editor or IDE. Open the Windows command prompt or a terminal window from your local computer. You'll run all the commands in the next sections from the command prompt or terminal.
+
+Run the following `dotnet new` command to create an app with the name `todo`. The `--langVersion` parameter sets the `LangVersion` property in the created project file.
```bash dotnet new console --langVersion:8 -n todo ```
-Change your directory to the newly created app folder. You can build the application with:
+Use the following commands to change your directory to the newly created app folder and build the application:
```bash cd todo
Time Elapsed 00:00:34.17
### <a id="install-package"></a>Install the Azure Cosmos DB package
-While still in the application directory, install the Azure Cosmos DB client library for .NET Core by using the dotnet add package command.
+While you're still in the application directory, install the Azure Cosmos DB client library for .NET Core by using the `dotnet add package` command:
```bash dotnet add package Azure.Cosmos --version 4.0.0-preview3
While still in the application directory, install the Azure Cosmos DB client lib
### Copy your Azure Cosmos account credentials from the Azure portal
-The sample application needs to authenticate to your Azure Cosmos account. To authenticate, you should pass the Azure Cosmos account credentials to the application. Get your Azure Cosmos account credentials by following these steps:
+The sample application needs to authenticate to your Azure Cosmos account. To authenticate, pass the Azure Cosmos account credentials to the application. Get your Azure Cosmos account credentials by following these steps:
1. Sign in to the [Azure portal](https://portal.azure.com/).
-1. Navigate to your Azure Cosmos account.
+1. Go to your Azure Cosmos account.
-1. Open the **Keys** pane and copy the **URI** and **PRIMARY KEY** of your account. You will add the URI and keys values to an environment variable in the next step.
+1. Open the **Keys** pane and copy the **URI** and **PRIMARY KEY** values for your account. You'll add the URI and key values to an environment variable in the next procedure.
-## <a id="object-model"></a>Object model
+## <a id="object-model"></a>Learn the object model
-Before you start building the application, let's look into the hierarchy of resources in Azure Cosmos DB and the object model used to create and access these resources. The Azure Cosmos DB creates resources in the following order:
+Before you continue building the application, let's look into the hierarchy of resources in Azure Cosmos DB and the object model that's used to create and access these resources. Azure Cosmos DB creates resources in the following order:
* Azure Cosmos account
* Databases
* Containers
* Items
-To learn in more about the hierarchy of different entities, see the [working with databases, containers, and items in Azure Cosmos DB](account-databases-containers-items.md) article. You will use the following .NET classes to interact with these resources:
+To learn more about the hierarchy of entities, see the [Azure Cosmos DB resource model](account-databases-containers-items.md) article. You'll use the following .NET classes to interact with these resources:
+
+* `CosmosClient`. This class provides a client-side logical representation for the Azure Cosmos DB service. The client object is used to configure and execute requests against the service.
+* `CreateDatabaseIfNotExistsAsync`. This method creates (if it doesn't exist) or gets (if it already exists) a database resource as an asynchronous operation.
+* `CreateContainerIfNotExistsAsync`. This method creates (if it doesn't exist) or gets (if it already exists) a container as an asynchronous operation. You can check the status code from the response to determine whether the container was newly created (201) or an existing container was returned (200).
+* `CreateItemAsync`. This method creates an item within the container.
+* `UpsertItemAsync`. This method creates an item within the container if it doesn't already exist or replaces the item if it already exists.
+* `GetItemQueryIterator`. This method creates a query for items under a container in an Azure Cosmos database by using a SQL statement with parameterized values.
+* `DeleteAsync`. This method deletes the specified database from your Azure Cosmos account.
-* CosmosClient - This class provides a client-side logical representation for the Azure Cosmos DB service. The client object is used to configure and execute requests against the service.
-* CreateDatabaseIfNotExistsAsync - This method creates (if doesn't exist) or gets (if already exists) a database resource as an asynchronous operation.
-* CreateContainerIfNotExistsAsync - This method creates (if it doesn't exist) or gets (if it already exists) a container as an asynchronous operation. You can check the status code from the response to determine whether the container was newly created (201) or an existing container was returned (200).
-* CreateItemAsync - This method creates an item within the container.
-* UpsertItemAsync - This method creates an item within the container if it doesn't already exist or replaces the item if it already exists.
-* GetItemQueryIterator - This method creates a query for items under a container in an Azure Cosmos database using a SQL statement with parameterized values.
-* DeleteAsync - Deletes the specified database from your Azure Cosmos account. `DeleteAsync` method only deletes the database.
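+
+The following is a minimal sketch, separate from the quickstart's sample project, that shows how the classes and methods listed above fit together. It's written against the stable `Microsoft.Azure.Cosmos` (v3) API for illustration; the v4 preview package used by this quickstart may differ in small details, and the endpoint, key, and names are placeholders:
+
+```csharp
+using System;
+using System.Threading.Tasks;
+using Microsoft.Azure.Cosmos;
+
+public static class ObjectModelSketch
+{
+    public static async Task RunAsync()
+    {
+        // CosmosClient is the entry point; create it once and reuse it.
+        using CosmosClient client = new CosmosClient("<account-endpoint>", "<account-key>");
+
+        // Create (or get) the database and the container.
+        Database database = await client.CreateDatabaseIfNotExistsAsync("FamilyDatabase");
+        Container container = await database.CreateContainerIfNotExistsAsync(
+            id: "FamilyContainer",
+            partitionKeyPath: "/LastName");
+
+        // Create or replace an item.
+        await container.UpsertItemAsync(
+            new { id = "1", LastName = "Andersen" },
+            new PartitionKey("Andersen"));
+
+        // Query items with a parameterized SQL statement.
+        QueryDefinition query = new QueryDefinition(
+            "SELECT * FROM c WHERE c.LastName = @lastName")
+            .WithParameter("@lastName", "Andersen");
+
+        FeedIterator<dynamic> iterator = container.GetItemQueryIterator<dynamic>(query);
+        while (iterator.HasMoreResults)
+        {
+            foreach (var item in await iterator.ReadNextAsync())
+            {
+                Console.WriteLine(item);
+            }
+        }
+    }
+}
+```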
+ ## <a id="code-examples"></a>Configure code examples
- ## <a id="code-examples"></a>Code examples
+The sample code described in this article creates a family database in Azure Cosmos DB. The family database contains family details such as name, address, location, parents, children, and pets.
-The sample code described in this article creates a family database in Azure Cosmos DB. The family database contains family details such as name, address, location, the associated parents, children, and pets. Before populating the data to your Azure Cosmos account, define the properties of a family item. Create a new class named `Family.cs` at the root level of your sample application and add the following code to it:
+Before you populate the data for your Azure Cosmos account, define the properties of a family item. Create a new class named `Family.cs` at the root level of your sample application and add the following code to it:
[!code-csharp[Main](~/cosmos-dotnet-v4-getting-started/src/Family.cs)]
-### Add the using directives & define the client object
+### Add the using directives and define the client object
-From the project directory, open the `Program.cs` file in your editor and add the following using directives at the top of your application:
+From the project directory, open the *Program.cs* file in your editor and add the following `using` directives at the top of your application:
[!code-csharp[Main](~/cosmos-dotnet-v4-getting-started/src/Program.cs?name=Usings)]
-Add the following global variables in your `Program` class. These will include the endpoint and authorization keys, the name of the database, and container that you will create. Make sure to replace the endpoint and authorization keys values according to your environment.
+Add the following global variables in your `Program` class. These variables will include the endpoint and authorization keys, the name of the database, and the container that you'll create. Be sure to replace the endpoint and authorization key values according to your environment.
[!code-csharp[Main](~/cosmos-dotnet-v4-getting-started/src/Program.cs?name=Constants)]
Finally, replace the `Main` method:
### Create a database
-Define the `CreateDatabaseAsync` method within the `program.cs` class. This method creates the `FamilyDatabase` if it doesn't already exist.
+Define the `CreateDatabaseAsync` method within the `program.cs` class. This method creates the `FamilyDatabase` database if it doesn't already exist.
[!code-csharp[Main](~/cosmos-dotnet-v4-getting-started/src/Program.cs?name=CreateDatabaseAsync)] ### Create a container
-Define the `CreateContainerAsync` method within the `Program` class. This method creates the `FamilyContainer` if it doesn't already exist.
+Define the `CreateContainerAsync` method within the `Program` class. This method creates the `FamilyContainer` container if it doesn't already exist.
[!code-csharp[Main](~/cosmos-dotnet-v4-getting-started/src/Program.cs?name=CreateContainerAsync)] ### Create an item
-Create a family item by adding the `AddItemsToContainerAsync` method with the following code. You can use the `CreateItemAsync` or `UpsertItemAsync` methods to create an item:
+Create a family item by adding the `AddItemsToContainerAsync` method with the following code. You can use the `CreateItemAsync` or `UpsertItemAsync` method to create an item.
[!code-csharp[Main](~/cosmos-dotnet-v4-getting-started/src/Program.cs?name=AddItemsToContainerAsync)] ### Query the items
-After inserting an item, you can run a query to get the details of "Andersen" family. The following code shows how to execute the query using the SQL query directly. The SQL query to get the "Anderson" family details is: `SELECT * FROM c WHERE c.LastName = 'Andersen'`. Define the `QueryItemsAsync` method within the `Program` class and add the following code to it:
+After you insert an item, you can run a query to get the details of the Andersen family. The following code shows how to execute the query by using the SQL query directly. The SQL query to get the Andersen family details is `SELECT * FROM c WHERE c.LastName = 'Andersen'`. Define the `QueryItemsAsync` method within the `Program` class and add the following code to it:
[!code-csharp[Main](~/cosmos-dotnet-v4-getting-started/src/Program.cs?name=QueryItemsAsync)] ### Replace an item
-Read a family item and then update it by adding the `ReplaceFamilyItemAsync` method with the following code.
+Read a family item and then update it by adding the `ReplaceFamilyItemAsync` method with the following code:
[!code-csharp[Main](~/cosmos-dotnet-v4-getting-started/src/Program.cs?name=ReplaceFamilyItemAsync)] ### Delete an item
-Delete a family item by adding the `DeleteFamilyItemAsync` method with the following code.
+Delete a family item by adding the `DeleteFamilyItemAsync` method with the following code:
[!code-csharp[Main](~/cosmos-dotnet-v4-getting-started/src/Program.cs?name=DeleteFamilyItemAsync)] ### Delete the database
-Finally you can delete the database adding the `DeleteDatabaseAndCleanupAsync` method with the following code:
+You can delete the database by adding the `DeleteDatabaseAndCleanupAsync` method with the following code:
[!code-csharp[Main](~/cosmos-dotnet-v4-getting-started/src/Program.cs?name=DeleteDatabaseAndCleanupAsync)]
-After you add all the required methods, save the `Program` file.
+After you add all the required methods, save the *Program.cs* file.
## Run the code
-Next build and run the application to create the Azure Cosmos DB resources.
+Run the application to create the Azure Cosmos DB resources:
```bash dotnet run ```
-The following output is generated when you run the application. You can also sign into the Azure portal and validate that the resources are created:
+The following output is generated when you run the application:
```bash Created Database: FamilyDatabase
The following output is generated when you run the application. You can also sig
End of demo, press any key to exit. ```
-You can validate that the data is created by signing into the Azure portal and see the required items in your Azure Cosmos account.
+You can validate that the data is created by signing in to the Azure portal and seeing the required items in your Azure Cosmos account.
## Clean up resources
-When no longer needed, you can use the Azure CLI or Azure PowerShell to remove the Azure Cosmos account and the corresponding resource group. The following command shows how to delete the resource group by using the Azure CLI:
+When you no longer need the Azure Cosmos account and the corresponding resource group, you can use the Azure CLI or Azure PowerShell to remove them. The following command shows how to delete the resource group by using the Azure CLI:
```azurecli az group delete -g "myResourceGroup"
az group delete -g "myResourceGroup"
## Next steps
-In this quickstart, you learned how to create an Azure Cosmos account, create a database and a container using a .NET Core app. You can now import additional data to your Azure Cosmos account with the instructions int the following article.
+In this quickstart, you learned how to create an Azure Cosmos account, create a database, and create a container by using a .NET Core app. You can now import more data to your Azure Cosmos account by using the instructions in the following article:
> [!div class="nextstepaction"] > [Import data into Azure Cosmos DB](import-data.md)
cosmos-db Free Tier https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/free-tier.md
Azure Cosmos DB free tier makes it easy to get started, develop, test your appli
Free tier lasts indefinitely for the lifetime of the account and it comes with all the [benefits and features](introduction.md#key-benefits) of a regular Azure Cosmos DB account. These benefits include unlimited storage and throughput (RU/s), SLAs, high availability, turnkey global distribution in all Azure regions, and more.
-You can have up to one free tier Azure Cosmos DB account per an Azure subscription and you must opt-in when creating the account. If you do not see the option to apply the free tier discount, this means another account in the subscription has already been enabled with free tier. When creating a new account, itΓÇÖs recommended to enable the free tier discount if itΓÇÖs available.
+You can have up to one free tier Azure Cosmos DB account per Azure subscription, and you must opt in when creating the account. If you don't see the option to apply the free tier discount, another account in the subscription has already been enabled with free tier. If you create an account with free tier and then delete it, you can apply free tier for a new account. When creating a new account, it's recommended to enable the free tier discount if it's available.
> [!NOTE] > Free tier is currently not available for serverless accounts.
cosmos-db Local Emulator Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/local-emulator-release-notes.md
This article shows the Azure Cosmos DB Emulator release notes with a list of fea
## Release notes
+### 2.14.1 (18 June 2021)
+
+ - This release improves the start-up time for the emulator while reducing the footprint of its data on the disk. This new optimization is activated by the "/EnablePreview" argument.
+ ### 2.14.0 (15 June 2021) - This release updates the local Data Explorer content to latest Azure Portal version; in this release we addresses a known issue when importing multiple document items by using the JSON file uploading feature.
cosmos-db Optimize Cost Throughput https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/optimize-cost-throughput.md
The following are some guidelines to decide on a provisioned throughput strategy
1. You have a few dozen Azure Cosmos containers and want to share throughput across some or all of them.
-2. You are migrating from a single-tenant database designed to run on IaaS-hosted VMs or on-premises, for example, NoSQL or relational databases to Azure Cosmos DB. And if you have many collections/tables/graphs and you do not want to make any changes to your data model. Note, you might have to compromise some of the benefits offered by Azure Cosmos DB if you are not updating your data model when migrating from an on-premises database. It's recommended that you always reaccess your data model to get the most in terms of performance and also to optimize for costs.
+2. You are migrating from a single-tenant database designed to run on IaaS-hosted VMs or on-premises (for example, NoSQL or relational databases) to Azure Cosmos DB, and you have many collections/tables/graphs that you don't want to change in your data model. Note that you might have to compromise some of the benefits offered by Azure Cosmos DB if you don't update your data model when migrating from an on-premises database. It's recommended that you always reassess your data model to get the most in terms of performance and also to optimize for costs.
3. You want to absorb unplanned spikes in workloads by virtue of pooled throughput at the database level.
cosmos-db Secure Access To Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/secure-access-to-data.md
Previously updated : 05/27/2021 Last updated : 06/22/2021
User user = await database.CreateUserAsync("User 1");
### Permissions<a id="permissions"></a>
-A permission resource is associated with a user and assigned at the container as well as partition key level. Each user may contain zero or more permissions. A permission resource provides access to a security token that the user needs when trying to access a specific container or data in a specific partition key. There are two available access levels that may be provided by a permission resource:
+A permission resource is associated with a user and assigned to a specific resource. Each user may contain zero or more permissions. A permission resource provides access to a security token that the user needs when trying to access a specific container or data in a specific partition key. There are two available access levels that may be provided by a permission resource:
- All: The user has full permission on the resource. - Read: The user can only read the contents of the resource but cannot perform write, update, or delete operations on the resource.
user.CreatePermissionAsync(
new PermissionProperties( id: "permissionUser1Orders", permissionMode: PermissionMode.All,
- container: benchmark.container,
+ container: container,
resourcePartitionKey: new PartitionKey("012345"))); ```
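
The snippet above creates the permission; the following minimal sketch, assuming the .NET SDK v3 (`Microsoft.Azure.Cosmos`) API and a placeholder `accountEndpoint` value, shows how the resource token returned for that permission might be read and handed to a client that can only touch the data the permission allows:

```csharp
PermissionProperties permissionProperties = new PermissionProperties(
    id: "permissionUser1Orders",
    permissionMode: PermissionMode.All,
    container: container,
    resourcePartitionKey: new PartitionKey("012345"));

// The response carries the generated resource token.
PermissionResponse permissionResponse = await user.CreatePermissionAsync(permissionProperties);
string resourceToken = permissionResponse.Resource.Token;

// A client constructed with the resource token is limited to the permitted resource and
// partition key; distribute this token to the client application instead of the account key.
CosmosClient restrictedClient = new CosmosClient(accountEndpoint, resourceToken);
```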
cosmos-db Sql Api Sdk Java Spark V3 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql-api-sdk-java-spark-v3.md
Title: 'Azure Cosmos DB Apache Spark 3 OLTP Connector for SQL API (Preview) release notes and resources'
-description: Learn about the Azure Cosmos DB Apache Spark 3 OLTP Connector for SQL API (Preview), including release dates, retirement dates, and changes made between each version of the Azure Cosmos DB SQL Java SDK.
+description: Learn about the Azure Cosmos DB Apache Spark 3 OLTP Connector for SQL API, including release dates, retirement dates, and changes made between each version of the Azure Cosmos DB SQL Java SDK.
ms.devlang: java Previously updated : 04/06/2021 Last updated : 06/21/2021
-# Azure Cosmos DB Apache Spark 3 OLTP Connector for Core (SQL) API (Preview): Release notes and resources
+# Azure Cosmos DB Apache Spark 3 OLTP Connector for Core (SQL) API: Release notes and resources
[!INCLUDE[appliesto-sql-api](includes/appliesto-sql-api.md)] > [!div class="op_single_selector"]
> * [Bulk executor - .NET v2](sql-api-sdk-bulk-executor-dot-net.md) > * [Bulk executor - Java](sql-api-sdk-bulk-executor-java.md)
-**Azure Cosmos DB Spark 3 OLTP connector (Preview)** provides Apache Spark v3 support for Azure Cosmos DB using
+**Azure Cosmos DB Spark 3 OLTP connector** provides Apache Spark v3 support for Azure Cosmos DB using
the SQL API. [Azure Cosmos DB](introduction.md) is a globally-distributed database service which allows developers to work with data using a variety of standard APIs, such as SQL, MongoDB, Cassandra, Graph, and Table.
-> [!Note]
-> This version of Azure Cosmos DB Spark 3 OLTP connector is a Preview build.
-> This build hasn't been load or performance tested.
-> This build isn't recommended for use in production scenarios.
->
- ## Documentation - [Getting started](https://github.com/Azure/azure-sdk-for-jav)
developers to work with data using a variety of standard APIs, such as SQL, Mong
| Connector | Spark | Minimum Java version | Supported Scala versions | | - | - | -- | -- |
-| 4.0.0-beta.1 | 3.1.1 | 8 | 2.12 |
+| 4.0.0 | 3.1.1 | 8 | 2.12 |
-## Download
+## Download
You can use the Maven coordinate of the jar to automatically install the Spark Connector in your Databricks Runtime 8 from Maven:
-`com.azure.cosmos.spark:azure-cosmos-spark_3-1_2-12:4.0.0-beta.1`
+`com.azure.cosmos.spark:azure-cosmos-spark_3-1_2-12:4.1.0`
You can also integrate against Cosmos DB Spark Connector in your SBT project: ```scala
-libraryDependencies += "com.azure.cosmos.spark" % "azure-cosmos-spark_3-1_2-12" % "4.0.0-beta.1"
+libraryDependencies += "com.azure.cosmos.spark" % "azure-cosmos-spark_3-1_2-12" % "4.1.0"
```
-Cosmos DB Spark Connector is available on [Maven Central Repo](https://search.maven.org/artifact/com.azure.cosmos.spark/azure-cosmos-spark_3-1_2-12/4.0.0-beta.1/jar).
+Cosmos DB Spark Connector is available on [Maven Central Repo](https://search.maven.org/artifact/com.azure.cosmos.spark/azure-cosmos-spark_3-1_2-12/).
### General
cosmos-db Sql Query Mathematical Functions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql-query-mathematical-functions.md
Previously updated : 05/26/2021 Last updated : 06/22/2021
The following supported built-in mathematical functions perform a calculation, u
| [SQRT](sql-query-sqrt.md) | Full scan | Full scan | | | [SQUARE](sql-query-square.md) | Full scan | Full scan | | | [TAN](sql-query-tan.md) | Full scan | Full scan | |-
+| [TRUNC](sql-query-trunc.md) | Index seek | Index seek | |
## Next steps - [System functions Azure Cosmos DB](sql-query-system-functions.md)
cosmos-db Sql Query Trunc https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql-query-trunc.md
Previously updated : 09/13/2019 Last updated : 06/22/2021
cosmos-db Troubleshoot Dot Net Sdk Slow Request https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/troubleshoot-dot-net-sdk-slow-request.md
+
+ Title: Troubleshoot Azure Cosmos DB slow requests with the .NET SDK
+description: Learn how to diagnose and fix slow requests when using Azure Cosmos DB .NET SDK.
+++ Last updated : 06/15/2021+++++
+# Diagnose and troubleshoot Azure Cosmos DB .NET SDK slow requests
+
+Azure Cosmos DB slow requests can happen for multiple reasons such as request throttling or the way your application is designed. This article explains the different root causes for this issue.
+
+## Request rate too large (429 throttles)
+
+Request throttling is the most common reason for slow requests. Azure Cosmos DB will throttle requests if they exceed the allocated RUs for the database or container. The SDK has built-in logic to retry these requests. The [request rate too large](troubleshoot-request-rate-too-large.md#how-to-investigate) troubleshooting article explains how to check if the requests are being throttled and how to scale your account to avoid these issues in the future.
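+
+If you want to give the SDK more room to retry throttled requests before surfacing an error, a minimal sketch like the following can help; the property names come from the .NET SDK v3 `CosmosClientOptions`, and the endpoint, key, and values shown are placeholders rather than recommendations:
+
+```c#
+using System;
+using Microsoft.Azure.Cosmos;
+
+CosmosClientOptions options = new CosmosClientOptions
+{
+    // How many times the SDK retries a request that was throttled (429).
+    MaxRetryAttemptsOnRateLimitedRequests = 9,
+    // The total time the SDK can spend waiting across those retries.
+    MaxRetryWaitTimeOnRateLimitedRequests = TimeSpan.FromSeconds(30)
+};
+
+CosmosClient client = new CosmosClient("<account-endpoint>", "<account-key>", options);
+```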
+
+## Application design
+
+If your application doesn't follow the SDK best practices, it can result in different issues that will cause slow or failed requests. Follow the [.NET SDK best practices](performance-tips-dotnet-sdk-v3-sql.md) for the best performance.
+
+Consider the following points when you develop your application (a minimal client setup sketch follows this list):
+* Run the application in the same region as your Azure Cosmos DB account.
+* Use a singleton instance of the SDK client. The SDK has several caches that have to be initialized, which can slow down the first few requests.
+* Use Direct + TCP connectivity mode.
+* Avoid high CPU usage. Make sure to look at the maximum CPU and not the average, which is the default for most logging systems. Anything above roughly 40% can increase the latency.
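+
+The following is a minimal sketch (not from this article's sample) of a singleton client configured along those lines; the endpoint, key, and region values are placeholders, and the exact options you choose depend on your workload:
+
+```c#
+using Microsoft.Azure.Cosmos;
+
+public static class CosmosClientHolder
+{
+    // Reuse one CosmosClient instance for the lifetime of the application so its
+    // internal caches and connections are initialized only once.
+    public static readonly CosmosClient Client = new CosmosClient(
+        "<account-endpoint>",
+        "<account-key>",
+        new CosmosClientOptions
+        {
+            // Direct + TCP connectivity mode.
+            ConnectionMode = ConnectionMode.Direct,
+            // Deploy the application to the same region as the account and set it here.
+            ApplicationRegion = Regions.WestUS2
+        });
+}
+```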
++
+## Capture the diagnostics
+
+All the responses in the SDK, including `CosmosException`, have a `Diagnostics` property. This property records all the information related to the single request, including whether there were retries or any transient failures.
+
+The diagnostics are returned as a string. The string changes with each SDK version as it's improved to better troubleshoot different scenarios, so its format can have breaking changes between versions. Don't parse the string. The following code sample shows how to read diagnostic logs by using the .NET SDK:
+
+```c#
+try
+{
+ ItemResponse<Book> response = await this.Container.CreateItemAsync<Book>(item: testItem);
+ if (response.Diagnostics.GetClientElapsedTime() > ConfigurableSlowRequestTimeSpan)
+ {
+ // Log the diagnostics and add any additional info necessary to correlate to other logs
+ Console.Write(response.Diagnostics.ToString());
+ }
+}
+catch (CosmosException cosmosException)
+{
+ // Log the full exception including the stack trace
+ Console.Write(cosmosException.ToString());
+ // The Diagnostics can be logged separately if required.
+ Console.Write(cosmosException.Diagnostics.ToString());
+}
+
+ResponseMessage response = await this.Container.CreateItemStreamAsync(partitionKey, stream);
+if (response.Diagnostics.GetClientElapsedTime() > ConfigurableSlowRequestTimeSpan || IsFailureStatusCode(response.StatusCode))
+{
+ // Log the diagnostics and add any additional info necessary to correlate to other logs
+ Console.Write(response.Diagnostics.ToString());
+}
+```
++
+## Diagnostics in version 3.19 and higher
+The JSON structure has breaking changes with each version of the SDK, which makes it unsafe to parse. The JSON represents a tree structure of the request as it goes through the SDK. The following sections cover a few key things to look at:
+
+### CPU history
+High CPU utilization is the most common cause of slow requests. For optimal latency, CPU usage should be roughly 40 percent or less. Use 10 seconds as the interval to monitor maximum (not average) CPU utilization. CPU spikes are more common with cross-partition queries, where the requests might open multiple connections for a single query.
+
+If the error contains `TransportException` information, it might also contain `CPU history`:
+
+```
+CPU history:
+(2020-08-28T00:40:09.1769900Z 0.114),
+(2020-08-28T00:40:19.1763818Z 1.732),
+(2020-08-28T00:40:29.1759235Z 0.000),
+(2020-08-28T00:40:39.1763208Z 0.063),
+(2020-08-28T00:40:49.1767057Z 0.648),
+(2020-08-28T00:40:59.1689401Z 0.137),
+CPU count: 8)
+```
+
+* If the CPU utilization is over 70%, the timeout is likely to be caused by CPU exhaustion. In this case, the solution is to investigate the source of the high CPU utilization and reduce it or scale the machine to a larger resource size.
+* If the CPU measurements are not happening every 10 seconds (for example, there are gaps or the measurements are further apart), the cause is thread starvation. The solution is to investigate the sources of the thread starvation (potentially locked threads), or scale the machines to a larger resource size.
+
+#### Solution
+The client application that uses the SDK should be scaled up or out.
++
+### HttpResponseStats
+HttpResponseStats records requests that go to the [gateway](sql-sdk-connection-modes.md). Even in Direct mode, the SDK gets all the metadata information from the gateway.
+
+If the request is slow, first verify that none of the preceding suggestions resolves the issue.
+
+If it's still slow, different patterns point to different issues:
+
+Single store result for a single request
+
+| Number of requests | Scenario | Description |
+|-|-|-|
+| Single to all | Request Timeout or HttpRequestExceptions | Points to [SNAT Port exhaustion](troubleshoot-dot-net-sdk.md#snat) or lack of resources on the machine to process the request in time. |
+| Single or small percentage (SLA is not violated) | All | A single or small percentage of slow requests can be caused by several different transient issues and should be expected. |
+| All | All | Points to an issue with the infrastructure or networking. |
+| SLA Violated | No changes to application and SLA dropped | Points to an issue with the Azure Cosmos DB service. |
+
+```json
+"HttpResponseStats": [
+ {
+ "StartTimeUTC": "2021-06-15T13:53:09.7961124Z",
+ "EndTimeUTC": "2021-06-15T13:53:09.7961127Z",
+ "RequestUri": "https://127.0.0.1:8081/dbs/347a8e44-a550-493e-88ee-29a19c070ecc/colls/4f72e752-fa91-455a-82c1-bf253a5a3c4e",
+ "ResourceType": "Collection",
+ "HttpMethod": "GET",
+ "ActivityId": "e16e98ec-f2e3-430c-b9e9-7d99e58a4f72",
+ "StatusCode": "OK"
+ }
+]
+```
+
+### StoreResult
+StoreResult represents a single request to Azure Cosmos DB using Direct mode with TCP protocol.
+
+If it's still slow, different patterns point to different issues:
+
+Single store result for a single request
+
+| Number of requests | Scenario | Description |
+|-|-|-|
+| Single to all | StoreResult contains TransportException | Points to [SNAT Port exhaustion](troubleshoot-dot-net-sdk.md#snat) or lack of resources on the machine to process the request in time. |
+| Single or small percentage (SLA is not violated) | All | A single or small percentage of slow requests can be caused by several different transient issues and should be expected. |
+| All | All | An issue with the infrastructure or networking. |
+| SLA Violated | Requests contain multiple failure error codes like 410 and IsValid is true | Points to an issue with the Cosmos DB service |
+| SLA Violated | Requests contain multiple failure error codes like 410 and IsValid is false | Points to an issue with the machine |
+| SLA Violated | StorePhysicalAddress is the same with no failure status code | Likely an issue with the Cosmos DB service |
+| SLA Violated | StorePhysicalAddress values have the same partition ID but different replica IDs with no failure status code | Likely an issue with the Cosmos DB service |
+| SLA Violated | StorePhysicalAddress values are random with no failure status code | Points to an issue with the machine |
+
+RntbdRequestStats shows the time for the different stages of sending and receiving a request.
+
+* ChannelAcquisitionStarted: The time to get or create a new connection. New connections can be created for numerous reasons. For example, a connection was unexpectedly closed, or too many requests were being sent through the existing connections, so a new connection is created.
+* A large pipelined time points to a possibly large request.
+* A large transit time points to a networking issue. Compare this number to `BELatencyInMs`. If `BELatencyInMs` is small, then the time was spent on the network and not in the Azure Cosmos DB service.
+
+Multiple StoreResults for a single request:
+
+* Strong and bounded staleness consistency will always have at least two store results.
+* Check the status code of each StoreResult. The SDK retries automatically on multiple different [transient failures](troubleshoot-dot-net-sdk-request-timeout.md). The SDK is constantly being improved to cover more scenarios.
+
+```json
+"StoreResult": {
+ "ActivityId": "a3d325c1-f4e9-405b-820c-bab4d329ee4c",
+ "StatusCode": "Created",
+ "SubStatusCode": "Unknown",
+ "LSN": 1766,
+ "PartitionKeyRangeId": "0",
+ "GlobalCommittedLSN": -1,
+ "ItemLSN": -1,
+ "UsingLocalLSN": false,
+ "QuorumAckedLSN": 1765,
+ "SessionToken": "-1#1766",
+ "CurrentWriteQuorum": 1,
+ "CurrentReplicaSetSize": 1,
+ "NumberOfReadRegions": 0,
+ "IsClientCpuOverloaded": false,
+ "IsValid": true,
+ "StorePhysicalAddress": "rntbd://127.0.0.1:10253/apps/DocDbApp/services/DocDbServer92/partitions/a4cb49a8-38c8-11e6-8106-8cdcd42c33be/replicas/1p/",
+ "RequestCharge": 11.05,
+ "BELatencyInMs": "7.954",
+ "RntbdRequestStats": [
+ {
+ "EventName": "Created",
+ "StartTime": "2021-06-15T13:53:10.1302477Z",
+ "DurationInMicroSec": "6383"
+ },
+ {
+ "EventName": "ChannelAcquisitionStarted",
+ "StartTime": "2021-06-15T13:53:10.1366314Z",
+ "DurationInMicroSec": "96511"
+ },
+ {
+ "EventName": "Pipelined",
+ "StartTime": "2021-06-15T13:53:10.2331431Z",
+ "DurationInMicroSec": "50834"
+ },
+ {
+ "EventName": "Transit Time",
+ "StartTime": "2021-06-15T13:53:10.2839774Z",
+ "DurationInMicroSec": "17677"
+ },
+ {
+ "EventName": "Received",
+ "StartTime": "2021-06-15T13:53:10.3016546Z",
+ "DurationInMicroSec": "7079"
+ },
+ {
+ "EventName": "Completed",
+ "StartTime": "2021-06-15T13:53:10.3087338Z",
+ "DurationInMicroSec": "0"
+ }
+ ],
+ "TransportException": null
+}
+```
++
+### Failure rate violates the Azure Cosmos DB SLA
+Contact [Azure Support](https://aka.ms/azure-support).
+
+## Next steps
+* [Diagnose and troubleshoot](troubleshoot-dot-net-sdk.md) issues when you use the Azure Cosmos DB .NET SDK.
+* Learn about performance guidelines for [.NET v3](performance-tips-dotnet-sdk-v3-sql.md) and [.NET v2](performance-tips.md).
cost-management-billing Ea Portal Administration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/ea-portal-administration.md
Title: Azure EA portal administration
description: This article explains the common tasks that an administrator accomplishes in the Azure EA portal. Previously updated : 03/19/2021 Last updated : 06/22/2021
To confirm account ownership:
## Change Azure subscription or account ownership
-Enterprise administrators can use the Azure Enterprise portal to transfer account ownership of selected or all subscriptions in an enrollment.
+This section only applies when a subscription owner is being changed. Changing subscription ownership doesn't require an Azure support ticket. Enterprise administrators can use the Azure Enterprise portal to transfer account ownership of selected subscriptions, or all subscriptions, in an enrollment. They also have the option to change the subscription directory (tenant).
+
+However, an EA admin can't transfer an account from one enrollment to another enrollment. To transfer an account from one enrollment to another, a support request is required. For information about transferring an account from one enrollment to another enrollment, see [Transfer an enterprise account to a new enrollment](ea-transfers.md#transfer-an-enterprise-account-to-a-new-enrollment).
When you complete a subscription or account ownership transfer, Microsoft updates the account owner.
cost-management-billing Programmatically Create Subscription Enterprise Agreement https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/programmatically-create-subscription-enterprise-agreement.md
An in-progress status is returned as an `Accepted` state under `provisioningStat
To install the latest version of the module that contains the `New-AzSubscriptionAlias` cmdlet, run `Install-Module Az.Subscription`. To install a recent version of PowerShellGet, see [Get PowerShellGet Module](/powershell/scripting/gallery/installing-psget).
-Run the following [New-AzSubscriptionAlias](/powershell/module/az.subscription/new-azsubscription) command, using the billing scope `"/providers/Microsoft.Billing/BillingAccounts/1234567/enrollmentAccounts/7654321"`.
+Run the following [New-AzSubscriptionAlias](/powershell/module/az.subscription/get-azsubscriptionalias) command, using the billing scope `"/providers/Microsoft.Billing/BillingAccounts/1234567/enrollmentAccounts/7654321"`.
```azurepowershell-interactive New-AzSubscriptionAlias -AliasName "sampleAlias" -SubscriptionName "Dev Team Subscription" -BillingScope "/providers/Microsoft.Billing/BillingAccounts/1234567/enrollmentAccounts/7654321" -Workload "Production"
cost-management-billing Microsoft Customer Agreement Get Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/microsoft-customer-agreement/microsoft-customer-agreement-get-started.md
When you or your organization signed the Microsoft Customer Agreement, a billing
## Update your PO and tax ID number
-[Update your PO number](../manage/change-azure-account-profile.md#update-a-po-number) in your billing profile and, after moving your subscriptions, ensure you [update your tax ID](/manage/change-azure-account-profile.md#update-your-tax-id). The tax ID is used for tax exemption calculations and appears on your invoice. [Learn more about how to update your billing account settings](/microsoft-store/update-microsoft-store-for-business-account-settings).
+[Update your PO number](../manage/change-azure-account-profile.md#update-a-po-number) in your billing profile and, after moving your subscriptions, ensure you [update your tax ID](../manage/change-azure-account-profile.md#update-your-tax-id). The tax ID is used for tax exemption calculations and appears on your invoice. [Learn more about how to update your billing account settings](/microsoft-store/update-microsoft-store-for-business-account-settings).
## Confirm payment details
If you have questions or need help, [create a support request](https://go.micros
## Next steps - [Learn how about the charges on your invoice](https://www.youtube.com/watch?v=e2LGZZ7GubA)-- [Take a step-by-step invoice tutorial](../understand/review-customer-agreement-bill.md)
+- [Take a step-by-step invoice tutorial](../understand/review-customer-agreement-bill.md)
data-factory Data Flow Troubleshoot Connector Format https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-flow-troubleshoot-connector-format.md
Previously updated : 06/10/2021 Last updated : 06/21/2021
Last updated 06/10/2021
This article explores troubleshooting methods related to connector and format for mapping data flows in Azure Data Factory (ADF).
+## Azure Blob Storage
+
+### Account kind of Storage (general purpose v1) doesn't support service principal and MI authentication
+
+#### Symptoms
+
+In data flows, if you use Azure Blob Storage (general purpose v1) with the service principal or MI authentication, you may encounter the following error message:
+
+`com.microsoft.dataflow.broker.InvalidOperationException: ServicePrincipal and MI auth are not supported if blob storage kind is Storage (general purpose v1)`
+
+#### Cause
+
+When you use the Azure Blob linked service in data flows, the managed identity or service principal authentication is not supported when the account kind is empty or "Storage". This situation is shown in Image 1 and Image 2 below.
+
+Image 1: The account kind in the Azure Blob Storage linked service
++
+Image 2: Storage account page
+++
+#### Recommendation
+
+To solve this issue, refer to the following recommendations:
+
+- If the storage account kind is **None** in the Azure Blob linked service, specify the proper account kind, as shown in Image 3 below. Also, refer to Image 2 to get the storage account kind, and confirm that it isn't Storage (general purpose v1).
+
+ Image 3: Specify the storage account kind in the Azure Blob Storage linked service
+
+ :::image type="content" source="./media/data-flow-troubleshoot-connector-format/specify-storage-account-kind.png" alt-text="Screenshot that shows how to specify storage account kind in Azure Blob Storage linked service.":::
+
+
+- If the account kind is Storage (general purpose v1), upgrade your storage account to the **general purpose v2** or choose a different authentication.
+
+ Image 4: Upgrade the storage account to general purpose v2
+
+ :::image type="content" source="./media/data-flow-troubleshoot-connector-format/upgrade-storage-account.png" alt-text="Screenshot that shows how to upgrade the storage account to general purpose v2." lightbox="./media/data-flow-troubleshoot-connector-format/upgrade-storage-account.png":::
-## Cosmos DB & JSON
+## Azure Cosmos DB and JSON format
### Support customized schemas in the source
For example:
:::image type="content" source="./media/data-flow-troubleshoot-connector-format/set-parameter-in-query.png" alt-text="Screenshot that shows the set parameter in the query.":::
-## CDM
-
-### Model.Json files with special characters
-
-#### Symptoms
-You may encounter an issue that the final name of the model.json file contains special characters.  
-
-#### Error message  
-`at Source 'source1': java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative path in absolute URI: PPDFTable1.csv@snapshot=2020-10-21T18:00:36.9469086Z. ` 
-
-#### Recommendation  
-Replace the special chars in the file name, which will work in the synapse but not in ADF.  
+## Azure Data Lake Storage Gen1
-### No data output in the data preview or after running pipelines
+### Failure to create files with service principal authentication
#### Symptoms
-When you use the manifest.json for CDM, no data is shown in the data preview or shown after running a pipeline. Only headers are shown. You can see this issue in the picture below.<br/>
+When you try to move or transfer data from different sources into the ADLS Gen1 sink, if the linked service's authentication method is service principal authentication, your job may fail with the following error message:
-![Screenshot that shows the no data output symptom.](./media/data-flow-troubleshoot-connector-format/no-data-output.png)
+`org.apache.hadoop.security.AccessControlException: CREATE failed with error 0x83090aa2 (Forbidden. ACL verification failed. Either the resource does not exist or the user is not authorized to perform the requested operation.). [2b5e5d92-xxxx-xxxx-xxxx-db4ce6fa0487] failed with error 0x83090aa2 (Forbidden. ACL verification failed. Either the resource does not exist or the user is not authorized to perform the requested operation.)`
#### Cause
-The manifest document describes the CDM folder, for example, what entities that you have in the folder, references of those entities and the data that corresponds to this instance. Your manifest document misses the `dataPartitions` information that indicates ADF where to read the data, and  since it is empty, it returns zero data. 
-
-#### Recommendation
-Update your manifest document to have the `dataPartitions` information, and you can refer to this example manifest document to update your document: [Common Data Model metadata: Introducing manifest-Example manifest document](/common-data-model/cdm-manifest#example-manifest-document).
-
-### JSON array attributes are inferred as separate columns
-
-#### Symptoms 
-You may encounter an issue where one attribute (string type) of the CDM entity has a JSON array as data. When this data is encountered, ADF infers the data as separate columns incorrectly. As you can see from the following pictures, a single attribute presented in the source (msfp_otherproperties) is inferred as a separate column in the CDM connector’s preview.<br/> 
--- In the CSV source data (refer to the second column): <br/>-
- ![Screenshot that shows the attribute in the CSV source data.](./media/data-flow-troubleshoot-connector-format/json-array-csv.png)
--- In the CDM source data preview: <br/>-
- ![Screenshot that shows the separate column in the CDM source data.](./media/data-flow-troubleshoot-connector-format/json-array-cdm.png)
-
- 
-You may also try to map drifted columns and use the data flow expression to transform this attribute as an array. But since this attribute is read as a separate column when reading, transforming to an array does not work.  
-#### Cause
-This issue is likely caused by the commas within your JSON object value for that column. Since your data file is expected to be a CSV file, the comma indicates that it is the end of a column’s value.
+The RWX permission or the dataset property is not set correctly.
#### Recommendation
-To solve this problem, you need to double quote your JSON column and avoid any of the inner quotes with a backslash (`\`). In this way, the contents of that column’s value can be read in as a single column entirely.  
-  
->[!Note]
->The CDM doesn’t inform that the data type of the column value is JSON, yet it informs that it is a string and parsed as such.
-### Unable to fetch data in the data flow preview
+- If the target folder doesn't have correct permissions, refer to this document to assign the correct permission in Gen1: [Use service principal authentication](./connector-azure-data-lake-store.md#use-service-principal-authentication).
-#### Symptoms
-You use CDM with model.json generated by Power BI. When you preview the CDM data using the data flow preview, you encounter an error: `No output data.`
+- If the target folder has the correct permission and you use the file name property in the data flow to target the right folder and file name, but the file path property of the dataset is not set to the target file path (it's usually left not set), you will encounter this failure, as shown in the following pictures. The backend system tries to create files based on the dataset's file path, and the dataset's file path doesn't have the correct permission.
+
+ :::image type="content" source="./media/data-flow-troubleshoot-connector-format/file-path-property.png" alt-text="Screenshot that shows the file path property":::
+
+ :::image type="content" source="./media/data-flow-troubleshoot-connector-format/file-name-property.png" alt-text="Screenshot that shows the file name property":::
-#### Cause
- The following code exists in the partitions in the model.json file generated by the Power BI data flow.
-```json
-"partitions": [  
-{  
-"name": "Part001",  
-"refreshTime": "2020-10-02T13:26:10.7624605+00:00",  
-"location": "https://datalakegen2.dfs.core.windows.net/powerbi/salesEntities/salesPerfByYear.csv @snapshot=2020-10-02T13:26:10.6681248Z"  
-}  
-```
-For this model.json file, the issue is the naming schema of the data partition file has special characters, and supporting file paths with '@' do not exist currently.  
+
+ There are two methods to solve this issue:
+ 1. Assign the WX permission to the file path of the dataset.
+ 1. Set the file path of the dataset as the folder with WX permission, and set the remaining folder path and file name in data flows.
-#### Recommendation
-Please remove the `@snapshot=2020-10-02T13:26:10.6681248Z` part from the data partition file name and the model.json file, and then try again.
+## Azure Data Lake Storage Gen2
-### The corpus path is null or empty
+### Failed with an error: "Error while reading file XXX. It is possible the underlying files have been updated"
#### Symptoms
-When you use CDM in the data flow with the model format, you cannot preview the data, and you encounter the error: `DF-CDM_005 The corpus path is null or empty`. The error is shown in the following picture:  
-![Screenshot that shows the corpus path error.](./media/data-flow-troubleshoot-connector-format/corpus-path-error.png)
-
-#### Cause
-Your data partition path in the model.json is pointing to a blob storage location and not your data lake. The location should have the base URL of **.dfs.core.windows.net** for the ADLS Gen2. 
-
-#### Recommendation
-To solve this issue, you can refer to this article: [ADF Adds Support for Inline Datasets and Common Data Model to Data Flows](https://techcommunity.microsoft.com/t5/azure-data-factory/adf-adds-support-for-inline-datasets-and-common-data-model-to/ba-p/1441798), and the following picture shows the way to fix the corpus path error in this article.
-
-![Screenshot that shows how to fix the corpus path error.](./media/data-flow-troubleshoot-connector-format/fix-format-issue.png)
-
-### Unable to read CSV data files
-
-#### Symptoms 
-You use the inline dataset as the common data model with manifest as a source, and you have provided the entry manifest file, root path, entity name and path. In the manifest, you have the data partitions with the CSV file location. Meanwhile, the entity schema and csv schema are identical, and all validations were successful. However, in the data preview, only the schema rather than the data gets loaded and the data is invisible, which is shown in the following picture:
+When you use ADLS Gen2 as a sink in the data flow (to preview data, debug/trigger run, and so on) and the partition setting in the **Optimize** tab of the **Sink** stage is not the default, you may find that the job fails with the following error message:
-![Screenshot that shows the issue of unable to read data files.](./media/data-flow-troubleshoot-connector-format/unable-read-data.png)
+`Job failed due to reason: Error while reading file abfss:REDACTED_LOCAL_PART@prod.dfs.core.windows.net/import/data/e3342084-930c-4f08-9975-558a3116a1a9/part-00000-tid-7848242374008877624-5df7454e-7b14-4253-a20b-d20b63fe9983-1-1-c000.csv. It is possible the underlying files have been updated. You can explicitly invalidate the cache in Spark by running 'REFRESH TABLE tableName' command in SQL or by recreating the Dataset/DataFrame involved.`
#### Cause
-Your CDM folder is not separated into logical and physical models, and only physical models exist in the CDM folder. The following two articles describe the difference: [Logical definitions](/common-data-model/sdk/logical-definitions) and [Resolving a logical entity definition](/common-data-model/sdk/convert-logical-entities-resolved-entities).<br/> 
-
-#### Recommendation
-For the data flow using CDM as a source, try to use a logical model as your entity reference, and use the manifest that describes the location of the physical resolved entities and the data partition locations. You can see some samples of logical entity definitions within the public CDM github repository: [CDM-schemaDocuments](https://github.com/microsoft/CDM/tree/master/schemaDocuments)<br/>
-
-A good starting point to forming your corpus is to copy the files within the schema documents folder (just that level inside the github repository), and put those files into a folder. Afterwards, you can use one of the predefined logical entities within the repository (as a starting or reference point) to create your logical model.<br/>
-
-Once the corpus is set up, you are recommended to use CDM as a sink within data flows, so that a well-formed CDM folder can be properly created. You can use your CSV dataset as a source and then sink it to your CDM model that you created.
-
-## Delta
-
-### The sink does not support the schema drift with upsert or update
-
-#### Symptoms
-You may face the issue that the delta sink in mapping data flows does not support schema drift with upsert/update. The problem is that the schema drift does not work when the delta is the target in a mapping data flow and user configure an update/upsert. 
-
-If a column is added to the source after an "initial" load to the delta, the subsequent jobs just fail with an error that it cannot find the new column, and this happens when you upsert/update with the alter row. It seems to work for inserts only.
-
-#### Error message
-`DF-SYS-01 at Sink 'SnkDeltaLake': org.apache.spark.sql.AnalysisException: cannot resolve target.BICC_RV in UPDATE clause given columns target. `
-#### Cause
-This is an issue for delta format because of the limitation of io delta library used in the data flow runtime. This issue is still in fixing.
+1. You haven't assigned a proper permission to your MI/SP authentication.
+1. You may have a customized job that handles files in a way you don't want, which affects the data flow's intermediate output.
#### Recommendation
-To solve this problem, you need to update the schema firstly and then write the data. You can follow the steps below: <br/>
-1. Create one data flow that includes an insert-only delta sink with the merge schema option to update the schema. 
-1. After Step 1, use delete/upsert/update to modify the target sink without changing the schema. <br/>
+1. Check whether your linked service has the R/W/E permission for Gen2. If you use MI or SP authentication, at least grant the Storage Blob Data Contributor role in Access control (IAM).
+1. Confirm whether you have specific jobs that move or delete files to another place where the names don't match your rule. Because data flows first write partition files into the target folder and then do the merge and rename operations, the intermediate file names might not match your rule.
-## Azure PostgreSQL
+## Azure Database for PostgreSQL
### Encounter an error: Failed with exception: handshake_failure
If you use the flexible server or Hyperscale (Citus) for your Azure PostgreSQL s
#### Recommendation You can try to use copy activities to unblock this issue.
-## CSV and Excel
-
-### Set the quote character to 'no quote char' is not supported in the CSV
+## Azure SQL Database
-#### Symptoms
+### Unable to connect to the SQL Database
-There are several issues that are not supported in the CSV when the quote character is set to 'no quote char':
+#### Symptoms
-1. When the quote character is set to 'no quote char', multi-char column delimiter can't start and end with the same letters.
-2. When the quote character is set to 'no quote char', multi-char column delimiter can't contain the escape character: `\`.
-3. When the quote character is set to 'no quote char', column value can't contain row delimiter.
-4. The quote character and the escape character cannot both be empty (no quote and no escape) if the column value contains a column delimiter.
+Your Azure SQL Database can work well in data copy, dataset data preview, and the linked service test connection, but it fails when the same Azure SQL Database is used as a source or sink in the data flow with an error like `Cannot connect to SQL database: 'jdbc:sqlserver://powerbasenz.database.windows.net;..., Please check the linked service configuration is correct, and make sure the SQL database firewall allows the integration runtime to access`
#### Cause
-Causes of the symptoms are stated below with examples respectively:
-1. Start and end with the same letters.<br/>
-`column delimiter: $*^$*`<br/>
-`column value: abc$*^ def`<br/>
-`csv sink: abc$*^$*^$*def ` <br/>
-`will be read as "abc" and "^&*def"`<br/>
+The firewall settings on your Azure SQL Database server prevent the data flow runtime from connecting. Currently, when you use the data flow to read or write Azure SQL Database, Azure Databricks is used to build a Spark cluster to run the job, but it doesn't support fixed IP ranges. For more details, refer to [Azure Integration Runtime IP addresses](./azure-integration-runtime-ip-addresses.md).
-2. The multi-char delimiter contains escape characters.<br/>
-`column delimiter: \x`<br/>
-`escape char:\`<br/>
-`column value: "abc\\xdef"`<br/>
-The escape character will either escape the column delimiter or the escape the character.
+#### Recommendation
-3. The column value contains the row delimiter. <br/>
-`We need quote character to tell if row delimiter is inside column value or not.`
+Check the firewall settings of your Azure SQL Database and set it to "Allow access to Azure services" rather than setting a fixed IP range.
-4. The quote character and the escape character both be empty and the column value contains column delimiters.<br/>
-`Column delimiter: \t`<br/>
-`column value: 111\t222\t33\t3`<br/>
-`It will be ambigious if it contains 3 columns 111,222,33\t3 or 4 columns 111,222,33,3.`<br/>
+
+### Syntax error when using queries as input
+
+#### Symptoms
+
+When you use queries as input in the data flow source with Azure SQL Database, the run fails with the following error message:
+
+`at Source 'source1': shaded.msdataflow.com.microsoft.sqlserver.jdbc.SQLServerException: Incorrect syntax XXXXXXXX.`
++
+#### Cause
+
+The query used in the data flow source should be able to run as a subquery. The reason for the failure is that either the query syntax is incorrect or the query can't be run as a subquery. You can run the following query in SSMS to verify it:
+
+`SELECT top(0) * from ($yourQuery) as T_TEMP`
#### Recommendation
-The first symptom and the second symptom cannot be solved currently. For the third and fourth symptoms, you can apply the following methods:
-- For Symptom 3, do not use the 'no quote char' for a multiline csv file.-- For Symptom 4, set either the quote character or the escape character as non-empty, or you can remove all column delimiters inside your data.
-### Read files with different schemas error
+Provide a correct query and test it in SSMS first.
+
+### Failed with an error: "SQLServerException: 111212; Operation cannot be performed within a transaction."
#### Symptoms
-When you use data flows to read files such as CSV and Excel files with different schemas, the data flow debug, sandbox or activity run will fail.
-- For CSV, the data misalignment exists when the schema of files is different.
+When you use the Azure SQL Database as a sink in the data flow to preview data, debug/trigger run, and do other activities, you may find that your job fails with the following error message:
- ![Screenshot that shows the first schema error.](./media/data-flow-troubleshoot-connector-format/schema-error-1.png)
+`{"StatusCode":"DFExecutorUserError","Message":"Job failed due to reason: at Sink 'sink': shaded.msdataflow.com.microsoft.sqlserver.jdbc.SQLServerException: 111212;Operation cannot be performed within a transaction.","Details":"at Sink 'sink': shaded.msdataflow.com.microsoft.sqlserver.jdbc.SQLServerException: 111212;Operation cannot be performed within a transaction."}`
-- For Excel, an error occurs when the schema of the file is different.
+#### Cause
+The error `111212;Operation cannot be performed within a transaction.` only occurs in the Synapse dedicated SQL pool, but you mistakenly used the Azure SQL Database connector instead.
- ![Screenshot that shows the second schema error.](./media/data-flow-troubleshoot-connector-format/schema-error-2.png)
+#### Recommendation
+Confirm whether your SQL database is a Synapse dedicated SQL pool. If so, use Azure Synapse Analytics as the connector, as shown in the picture below.
-#### Cause
-Reading files with different schemas in the data flow is not supported.
+### Data with the decimal type become null
-#### Recommendation
+#### Symptoms
-If you still want to transfer files such as CSV and Excel files with different schemas in the data flow, you can use the ways below to work around:
+You want to insert data into a table in the SQL database. If the data contains the decimal type and needs to be inserted into a column of the decimal type in the SQL database, the data value may be changed to null.
-- For CSV, you need to manually merge the schema of different files to get the full schema. For example, file_1 has columns `c_1, c_2, c_3` while file_2 has columns `c_3, c_4,... c_10`, so the merged and the full schema is `c_1, c_2... c_10`. Then make other files also have the same full schema even though it does not have data, for example, file_x only has columns `c_1, c_2, c_3, c_4`, please add additional columns `c_5, c_6, ... c_10` in the file, then it can work.
+If you preview the data in earlier stages, it shows the value as in the following picture:
-- For Excel, you can solve this issue by applying one of the following options:
- - **Option-1**: You need to manually merge the schema of different files to get the full schema. For example, file_1 has columns `c_1, c_2, c_3` while file_2 has columns `c_3, c_4,... c_10`, so the merged and full schema is `c_1, c_2... c_10`. Then make other files also have the same schema even though it does not have data, for example, file_x with sheet "SHEET_1" only has columns `c_1, c_2, c_3, c_4`, please add additional columns `c_5, c_6, ... c_10` in the sheet too, and then it can work.
- - **Option-2**: Use **range (for example, A1:G100) + firstRowAsHeader=false**, and then it can load data from all Excel files even though the column name and count is different.
+In the sink stage, it becomes null, as shown in the picture below.
++
+#### Cause
+The decimal type has scale and precision properties. If your data type doesn't match the type in the sink table, the system validates whether the target decimal is wider than the original decimal and whether the original value fits without overflowing in the target decimal. If this validation fails, the value is cast to null.
+
+#### Recommendation
+Check and compare the decimal type between the data and the table in the SQL database, and alter the scale and precision so that they match.
+
+You can use `toDecimal(IDecimal, scale, precision)` to figure out whether the original data can be cast to the target scale and precision. If it returns null, the data can't be cast and will become null when inserted.
## Azure Synapse Analytics
You use the Azure Blob Storage as the staging linked service to link to a storag
#### Recommendation Create an Azure Data Lake Gen2 linked service for the storage, and select the Gen2 storage as the staging linked service in data flow activities.
+## Common Data Model format
-## Azure Blob Storage
-
-### Account kind of Storage (general purpose v1) doesn't support service principal and MI authentication
+### Model.Json files with special characters
-#### Symptoms
+#### Symptoms
+You may encounter an issue that the final name of the model.json file contains special characters.  
-In data flows, if you use Azure Blob Storage (general purpose v1) with the service principal or MI authentication, you may encounter the following error message:
+#### Error message  
+`at Source 'source1': java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative path in absolute URI: PPDFTable1.csv@snapshot=2020-10-21T18:00:36.9469086Z. ` 
-`com.microsoft.dataflow.broker.InvalidOperationException: ServicePrincipal and MI auth are not supported if blob storage kind is Storage (general purpose v1)`
+#### Recommendation  
+Replace the special characters in the file name. File names with special characters work in Synapse but not in ADF.
-#### Cause
+### No data output in the data preview or after running pipelines
-When you use the Azure Blob linked service in data flows, the managed identity or service principal authentication is not supported when the account kind is empty or "Storage". This situation is shown in Image 1 and Image 2 below.
+#### Symptoms
+When you use the manifest.json for CDM, no data is shown in the data preview or after running a pipeline. Only headers are shown. You can see this issue in the picture below.<br/>
-Image 1: The account kind in the Azure Blob Storage linked service
+![Screenshot that shows the no data output symptom.](./media/data-flow-troubleshoot-connector-format/no-data-output.png)
+#### Cause
+The manifest document describes the CDM folder, for example, the entities that you have in the folder, the references to those entities, and the data that corresponds to this instance. Your manifest document is missing the `dataPartitions` information that tells ADF where to read the data, and because it's empty, it returns zero data.
-Image 2: Storage account page
+#### Recommendation
+Update your manifest document to include the `dataPartitions` information. You can refer to this example manifest document to update yours: [Common Data Model metadata: Introducing manifest-Example manifest document](/common-data-model/cdm-manifest#example-manifest-document).
+### JSON array attributes are inferred as separate columns
+#### Symptoms 
+You may encounter an issue where one attribute (string type) of the CDM entity has a JSON array as its data. When this data is encountered, ADF incorrectly infers the data as separate columns. As you can see from the following pictures, a single attribute presented in the source (msfp_otherproperties) is inferred as a separate column in the CDM connector's preview.<br/>
-#### Recommendation
+- In the CSV source data (refer to the second column): <br/>
-To solve this issue, refer to the following recommendations:
+ ![Screenshot that shows the attribute in the CSV source data.](./media/data-flow-troubleshoot-connector-format/json-array-csv.png)
-- If the storage account kind is **None** in the Azure Blob linked service, specify the proper account kind, and refer to Image 3 shown below to accomplish it. Furthermore, refer to Image 2 to get the storage account kind, and check and confirm the account kind is not Storage (general purpose v1).
+- In the CDM source data preview: <br/>
- Image 3: Specify the storage account kind in the Azure Blob Storage linked service
+ ![Screenshot that shows the separate column in the CDM source data.](./media/data-flow-troubleshoot-connector-format/json-array-cdm.png)
- :::image type="content" source="./media/data-flow-troubleshoot-connector-format/specify-storage-account-kind.png" alt-text="Screenshot that shows how to specify storage account kind in Azure Blob Storage linked service.":::
-
+ 
+You may also try to map drifted columns and use the data flow expression to transform this attribute into an array. But because this attribute is read as separate columns at read time, transforming it into an array does not work.
-- If the account kind is Storage (general purpose v1), upgrade your storage account to the **general purpose v2** or choose a different authentication.
+#### Cause
+This issue is likely caused by the commas within your JSON object value for that column. Because your data file is expected to be a CSV file, a comma indicates the end of a column's value.
- Image 4: Upgrade the storage account to general purpose v2
+#### Recommendation
+To solve this problem, double-quote your JSON column and escape any inner quotes with a backslash (`\`). In this way, the contents of that column's value can be read in entirely as a single column.
+  
+>[!Note]
+>The CDM doesn't indicate that the data type of the column value is JSON; it indicates that the value is a string, and the value is parsed as such.
- :::image type="content" source="./media/data-flow-troubleshoot-connector-format/upgrade-storage-account.png" alt-text="Screenshot that shows how to upgrade the storage account to general purpose v2." lightbox="./media/data-flow-troubleshoot-connector-format/upgrade-storage-account.png":::
-
+### Unable to fetch data in the data flow preview
-## Snowflake
+#### Symptoms
+You use CDM with model.json generated by Power BI. When you preview the CDM data using the data flow preview, you encounter an error: `No output data.`
-### Unable to connect to the Snowflake linked service
+#### Cause
+ The following code exists in the partitions in the model.json file generated by the Power BI data flow.
+```json
+"partitions": [  
+{  
+"name": "Part001",  
+"refreshTime": "2020-10-02T13:26:10.7624605+00:00",  
+"location": "https://datalakegen2.dfs.core.windows.net/powerbi/salesEntities/salesPerfByYear.csv @snapshot=2020-10-02T13:26:10.6681248Z"  
+}  
+```
+For this model.json file, the issue is that the naming schema of the data partition file contains special characters, and file paths that contain '@' are not currently supported.
-#### Symptoms
+#### Recommendation
+Please remove the `@snapshot=2020-10-02T13:26:10.6681248Z` part from the data partition file name and the model.json file, and then try again.
-You encounter the following error when you create the Snowflake linked service in the public network, and you use the auto-resolve integration runtime.
+### The corpus path is null or empty
-`ERROR [HY000] [Microsoft][Snowflake] (4) REST request for URL https://XXXXXXXX.east-us- 2.azure.snowflakecomputing.com.snowflakecomputing.com:443/session/v1/login-request?requestId=XXXXXXXXXXXXXXXXXXXXXXXXX&request_guid=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX`
+#### Symptoms
+When you use CDM in the data flow with the model format, you cannot preview the data, and you encounter the error: `DF-CDM_005 The corpus path is null or empty`. The error is shown in the following picture:  
+![Screenshot that shows the corpus path error.](./media/data-flow-troubleshoot-connector-format/corpus-path-error.png)
#### Cause
-You have not applied the account name in the format that is given in the Snowflake account document (including additional segments that identify the region and cloud platform), for example, `XXXXXXXX.east-us-2.azure`. You can refer to this document: [Linked service properties](./connector-snowflake.md#linked-service-properties) for more information.
+Your data partition path in the model.json is pointing to a blob storage location and not your data lake. The location should have the base URL of **.dfs.core.windows.net** for the ADLS Gen2. 
#### Recommendation
+To solve this issue, you can refer to this article: [ADF Adds Support for Inline Datasets and Common Data Model to Data Flows](https://techcommunity.microsoft.com/t5/azure-data-factory/adf-adds-support-for-inline-datasets-and-common-data-model-to/ba-p/1441798), and the following picture shows the way to fix the corpus path error in this article.
-To solve the issue, change the account name format. The role should be one of the roles shown in the following picture, but the default one is **Public**.
-
+![Screenshot that shows how to fix the corpus path error.](./media/data-flow-troubleshoot-connector-format/fix-format-issue.png)
-### SQL access control error: "Insufficient privileges to operate on schema"
+### Unable to read CSV data files
-#### Symptoms
+#### Symptoms 
+You use the inline dataset as the common data model with a manifest as the source, and you have provided the entry manifest file, root path, entity name, and path. In the manifest, you have the data partitions with the CSV file location. Meanwhile, the entity schema and the CSV schema are identical, and all validations were successful. However, in the data preview, only the schema rather than the data is loaded, and the data is invisible, as shown in the following picture:
-When you try to use "import projection", "data preview", etc. in the Snowflake source of data flows, you meet errors like `net.snowflake.client.jdbc.SnowflakeSQLException: SQL access control error: Insufficient privileges to operate on schema`.
+![Screenshot that shows the issue of unable to read data files.](./media/data-flow-troubleshoot-connector-format/unable-read-data.png)
#### Cause
+Your CDM folder is not separated into logical and physical models, and only physical models exist in the CDM folder. The following two articles describe the difference: [Logical definitions](/common-data-model/sdk/logical-definitions) and [Resolving a logical entity definition](/common-data-model/sdk/convert-logical-entities-resolved-entities).<br/> 
-You meet this error because of the wrong configuration. When you use the data flow to read Snowflake data, the runtime Azure Databricks (ADB) is not directly select the query to Snowflake. Instead, a temporary stage are created, and data are pulled from tables to the stage and then compressed and pulled by ADB. This process is shown in the picture below.
-
+#### Recommendation
+For the data flow using CDM as a source, try to use a logical model as your entity reference, and use the manifest that describes the location of the physical resolved entities and the data partition locations. You can see some samples of logical entity definitions within the public CDM github repository: [CDM-schemaDocuments](https://github.com/microsoft/CDM/tree/master/schemaDocuments)<br/>
-So the user/role used in ADB should have necessary permission to do this in the Snowflake. But usually the user/role do not have the permission since the database is created on the share.
+A good starting point for forming your corpus is to copy the files within the schema documents folder (just that level inside the GitHub repository) into a folder of their own. Afterwards, you can use one of the predefined logical entities within the repository (as a starting or reference point) to create your logical model.<br/>
-#### Recommendation
-To solve this issue, you can create different database and create views on the top of the shared DB to access it from ADB. For more details, please refer to [Snowflake](https://community.snowflake.com/s/question/0D50Z000095ktE4SAI/insufficient-privileges-to-operate-on-schema).
+Once the corpus is set up, we recommend that you use CDM as a sink within data flows, so that a well-formed CDM folder can be created properly. You can use your CSV dataset as a source and then sink it to the CDM model that you created.
-### Failed with an error: "SnowflakeSQLException: IP x.x.x.x is not allowed to access Snowflake. Contact your local security administrator"
+## CSV and Excel format
+### Setting the quote character to 'no quote char' is not supported in the CSV
+
#### Symptoms
-When you use snowflake in Azure Data Factory, you can successfully use test-connection in the Snowflake linked service, preview-data/import-schema on Snowflake dataset and run copy/lookup/get-metadata or other activities with it. But when you use Snowflake in the data flow activity, you may meet error like `SnowflakeSQLException: IP 13.66.58.164 is not allowed to access Snowflake. Contact your local security administrator.`
+Several configurations are not supported in the CSV format when the quote character is set to 'no quote char':
+
+1. When the quote character is set to 'no quote char', the multi-character column delimiter can't start and end with the same letters.
+2. When the quote character is set to 'no quote char', the multi-character column delimiter can't contain the escape character `\`.
+3. When the quote character is set to 'no quote char', a column value can't contain the row delimiter.
+4. The quote character and the escape character can't both be empty (no quote and no escape) if a column value contains a column delimiter.
#### Cause
-The Azure Data Factory data flow does not support the use of fixed IP ranges, and you can refer to [Azure Integration Runtime IP addresses](./azure-integration-runtime-ip-addresses.md) for more detailed information.
+The causes of these symptoms, with an example for each, are as follows:
+1. The multi-character column delimiter starts and ends with the same letters.<br/>
+`column delimiter: $*^$*`<br/>
+`column values: abc$*^ and def`<br/>
+`csv sink: abc$*^$*^$*def`<br/>
+`will be read back as "abc" and "^$*def"`<br/>
-#### Recommendation
+2. The multi-character column delimiter contains the escape character.<br/>
+`column delimiter: \x`<br/>
+`escape character: \`<br/>
+`column value: "abc\\xdef"`<br/>
+The escape character will either escape the column delimiter or escape the escape character, so the value can't be parsed unambiguously.
-To solve this issue, you can change the Snowflake account firewall settings with the following steps:
+3. The column value contains the row delimiter.<br/>
+`A quote character is needed to tell whether the row delimiter is inside the column value or not.`
-1. You can get the IP range list of service tags from the "service tags IP range download link": [Discover service tags by using downloadable JSON files](../virtual-network/service-tags-overview.md#discover-service-tags-by-using-downloadable-json-files).
+4. The quote character and the escape character are both empty and the column value contains column delimiters.<br/>
+`Column delimiter: \t`<br/>
+`column value: 111\t222\t33\t3`<br/>
+`It is ambiguous whether the value contains 3 columns (111, 222, 33\t3) or 4 columns (111, 222, 33, 3).`<br/>
- :::image type="content" source="./media/data-flow-troubleshoot-connector-format/ip-range-list.png" alt-text="Screenshot that shows the IP range list.":::
+#### Recommendation
+The first and second symptoms can't be solved currently. For the third and fourth symptoms, you can apply the following methods:
+- For symptom 3, don't use 'no quote char' for a multiline CSV file.
+- For symptom 4, set either the quote character or the escape character to a non-empty value, or remove all column delimiters from your data.
-1. If you run a data flow in the "southcentralus" region, you need to allow the access from all addresses with name "AzureCloud.southcentralus", for example:
+### Read files with different schemas error
- :::image type="content" source="./media/data-flow-troubleshoot-connector-format/allow-access-with-name.png" alt-text="Screenshot that shows how to allow access from all addresses with the certain name.":::
+#### Symptoms
-### Queries in the source does not work
+When you use data flows to read files such as CSV and Excel files with different schemas, the data flow debug, sandbox, or activity run fails.
+- For CSV, data misalignment occurs when the schemas of the files are different.
-#### Symptoms
+ ![Screenshot that shows the first schema error.](./media/data-flow-troubleshoot-connector-format/schema-error-1.png)
-When you try to read data from Snowflake with query, you may meet error like:
+- For Excel, an error occurs when the schemas of the files are different.
-1. `SQL compilation error: error line 1 at position 7 invalid identifier 'xxx'`
-2. `SQL compilation error: Object 'xxx' does not exist or not authorized.`
+ ![Screenshot that shows the second schema error.](./media/data-flow-troubleshoot-connector-format/schema-error-2.png)
#### Cause
-You encounter this error because of your wrong configuration.
+Reading files with different schemas in the data flow is not supported.
#### Recommendation
-For Snowflake, it applies the following rules for storing identifiers at creation/definition time and resolving them in queries and other SQL statements:
-
-When an identifier (table name, schema name, column name, etc.) is unquoted, it is stored and resolved in uppercase by default, and it is case-in-sensitive. For example:
--
-Because it is case-in-sensitive, so you can feel free to use following query to read snowflake data while the result is the same:<br/>
-- `Select MovieID, title from Public.TestQuotedTable2`<br/>-- `Select movieId, title from Public.TESTQUOTEDTABLE2`<br/>-- `Select movieID, TITLE from PUBLIC.TESTQUOTEDTABLE2`<br/>-
-When an identifier (table name, schema name, column name, etc.) is double-quoted, it is stored and resolved exactly as entered, including case as it is case-sensitive, and you can see an example in the following picture. For more details, please refer to this document: [Identifier Requirements](https://docs.snowflake.com/en/sql-reference/identifiers-syntax.html#identifier-requirements).
--
-Because the case-sensitive identifier (table name, schema name, column name, etc.) has lowercase character, you must quote the identifier during data reading with the query, for example: <br/>
+If you still want to transfer files such as CSV and Excel files with different schemas in the data flow, you can use the following workarounds:
-- Select **"movieId"**, **"title"** from Public.**"testQuotedTable2"**
+- For CSV, manually merge the schemas of the different files to get the full schema. For example, file_1 has columns `c_1, c_2, c_3` while file_2 has columns `c_3, c_4,... c_10`, so the merged, full schema is `c_1, c_2... c_10`. Then make the other files follow the same full schema even if they have no data for some of the columns. For example, if file_x only has columns `c_1, c_2, c_3, c_4`, add the additional columns `c_5, c_6, ... c_10` to the file, and then it can work.
-If you meet up error with the Snowflake query, check whether some identifiers (table name, schema name, column name, etc.) are case-sensitive with the following steps:
+- For Excel, you can solve this issue by applying one of the following options:
-1. Login the Snowflake server (`https://{accountName}.azure.snowflakecomputing.com/`, replace {accountName} with your account name) to check the identifier (table name, schema name, column name, etc.).
+ - **Option-1**: Manually merge the schemas of the different files to get the full schema. For example, file_1 has columns `c_1, c_2, c_3` while file_2 has columns `c_3, c_4,... c_10`, so the merged, full schema is `c_1, c_2... c_10`. Then make the other files follow the same schema even if they have no data for some of the columns. For example, if file_x with sheet "SHEET_1" only has columns `c_1, c_2, c_3, c_4`, add the additional columns `c_5, c_6, ... c_10` to the sheet too, and then it can work.
+ - **Option-2**: Use **range (for example, A1:G100) + firstRowAsHeader=false**, which can load data from all Excel files even though the column names and counts are different.
-1. Create worksheets to test and validate the query:
- - Run `Use database {databaseName}`, replace {databaseName} with your database name.
- - Run a query with table, for example: `select "movieId", "title" from Public."testQuotedTable2"`
-
-1. After the SQL query of Snowflake is tested and validated, you can use it in the data flow Snowflake source directly.
+## Delta format
-## Azure SQL Database
-
-### Unable to connect to the SQL Database
+### The sink does not support schema drift with upsert or update
#### Symptoms
+You may face the issue that the delta sink in mapping data flows doesn't support schema drift with upsert/update. The problem is that schema drift doesn't work when delta is the target in a mapping data flow and the user configures an update/upsert.
-Your Azure SQL Database can work well in the data copy, dataset preview-data and test-connection in the linked service, but it fails when the same Azure SQL Database is used as a source or sink in the data flow with error like `Cannot connect to SQL database: 'jdbc:sqlserver://powerbasenz.database.windows.net;..., Please check the linked service configuration is correct, and make sure the SQL database firewall allows the integration runtime to access`
+If a column is added to the source after an initial load to the delta, subsequent jobs fail with an error that the new column can't be found. This happens when you upsert/update with the alter row transformation; it seems to work for inserts only.
-#### Cause
+#### Error message
+`DF-SYS-01 at Sink 'SnkDeltaLake': org.apache.spark.sql.AnalysisException: cannot resolve target.BICC_RV in UPDATE clause given columns target. `
-There are wrong firewall settings on your Azure SQL Database server, so that it cannot be connected by the data flow runtime. Currently, when you try to use the data flow to read/write Azure SQL Database, Azure Databricks is used to build spark cluster to run the job, but it does not support fixed IP ranges. For more details, please refer to [Azure Integration Runtime IP addresses](./azure-integration-runtime-ip-addresses.md).
+#### Cause
+This is an issue for the delta format because of a limitation of the io delta library used in the data flow runtime. This issue is still being fixed.
#### Recommendation
+To solve this problem, update the schema first and then write the data. You can follow the steps below: <br/>
+1. Create one data flow that includes an insert-only delta sink with the merge schema option to update the schema.
+1. After step 1, use delete/upsert/update to modify the target sink without changing the schema. <br/>
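+
+If you also have a Spark environment (for example, Azure Databricks or a Synapse Spark pool) that can reach the same Delta folder, an alternative is to align the target schema up front with Delta Lake SQL before the update/upsert data flow runs. This is only a hedged sketch, not part of the workaround above; the storage path and the column name are placeholders.
+
+```sql
+-- Hedged sketch: add the new column to the existing Delta table before the upsert/update job runs.
+-- The abfss path and the column name below are placeholders for illustration.
+ALTER TABLE delta.`abfss://container@yourstorageaccount.dfs.core.windows.net/delta/target-table`
+ADD COLUMNS (new_column STRING);
+```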
-Check the firewall settings of your Azure SQL Database and set it as "Allow access to Azure services" rather than set the fixed IP range.
-### Syntax error when using queries as input
+## Snowflake
+
+### Unable to connect to the Snowflake linked service
#### Symptoms
-When you use queries as input in the data flow source with the Azure SQL, you fail with the following error message:
+You encounter the following error when you create the Snowflake linked service in the public network, and you use the auto-resolve integration runtime.
-`at Source 'source1': shaded.msdataflow.com.microsoft.sqlserver.jdbc.SQLServerException: Incorrect syntax XXXXXXXX.`
+`ERROR [HY000] [Microsoft][Snowflake] (4) REST request for URL https://XXXXXXXX.east-us- 2.azure.snowflakecomputing.com.snowflakecomputing.com:443/session/v1/login-request?requestId=XXXXXXXXXXXXXXXXXXXXXXXXX&request_guid=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX`
#### Cause
-The query used in the data flow source should be able to run as a sub query. The reason of the failure is that either the query syntax is incorrect or it can't be run as a sub query. You can run the following query in the SSMS to verify it:
-
-`SELECT top(0) * from ($yourQuery) as T_TEMP`
+You have not applied the account name in the format that is given in the Snowflake account document (including additional segments that identify the region and cloud platform), for example, `XXXXXXXX.east-us-2.azure`. You can refer to this document: [Linked service properties](./connector-snowflake.md#linked-service-properties) for more information.
#### Recommendation
-Provide a correct query and test it in the SSMS firstly.
+To solve the issue, change the account name format. The role should be one of the roles shown in the following picture, but the default one is **Public**.
-### Failed with an error: "SQLServerException: 111212; Operation cannot be performed within a transaction."
-#### Symptoms
+### SQL access control error: "Insufficient privileges to operate on schema"
-When you use the Azure SQL Database as a sink in the data flow to preview data, debug/trigger run and do other activities, you may find your job fails with following error message:
+#### Symptoms
-`{"StatusCode":"DFExecutorUserError","Message":"Job failed due to reason: at Sink 'sink': shaded.msdataflow.com.microsoft.sqlserver.jdbc.SQLServerException: 111212;Operation cannot be performed within a transaction.","Details":"at Sink 'sink': shaded.msdataflow.com.microsoft.sqlserver.jdbc.SQLServerException: 111212;Operation cannot be performed within a transaction."}`
+When you try to use "import projection", "data preview", etc. in the Snowflake source of data flows, you meet errors like `net.snowflake.client.jdbc.SnowflakeSQLException: SQL access control error: Insufficient privileges to operate on schema`.
#### Cause
-The error "`111212;Operation cannot be performed within a transaction.`" only occurs in the Synapse dedicated SQL pool. But you mistakenly use the Azure SQL Database as the connector instead.
-#### Recommendation
-Confirm if your SQL Database is a Synapse dedicated SQL pool. If so, please use Azure Synapse Analytics as a connector shown in the picture below.
+You meet this error because of an incorrect configuration. When you use the data flow to read Snowflake data, the Azure Databricks (ADB) runtime doesn't issue the query directly to Snowflake. Instead, a temporary stage is created, data is pulled from the tables into the stage, and the data is then compressed and pulled by ADB. This process is shown in the picture below.
-### Data with the decimal type become null
+So the user/role used in ADB must have the necessary permission to do this in Snowflake. But usually the user/role doesn't have that permission, because the database is created on the share.
+
+#### Recommendation
+To solve this issue, you can create a different database and create views on top of the shared DB so that it can be accessed from ADB. For more details, refer to [Snowflake](https://community.snowflake.com/s/question/0D50Z000095ktE4SAI/insufficient-privileges-to-operate-on-schema).
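+
+A minimal Snowflake SQL sketch of this workaround is shown below; the database, schema, view, and role names are placeholders for illustration, so adjust them to your own share and to the role that the data flow runtime uses.
+
+```sql
+-- Expose the shared data through a database that you own, so the runtime can create its temporary stage there.
+-- All object and role names are placeholders.
+CREATE DATABASE adf_staging_db;
+
+CREATE VIEW adf_staging_db.public.customer_v AS
+  SELECT * FROM shared_db.public.customer;
+
+-- Grant the role used by the data flow the access it needs, including the right to create a temporary stage.
+GRANT USAGE ON DATABASE adf_staging_db TO ROLE adf_role;
+GRANT USAGE, CREATE STAGE ON SCHEMA adf_staging_db.public TO ROLE adf_role;
+GRANT SELECT ON VIEW adf_staging_db.public.customer_v TO ROLE adf_role;
+```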
+
+### Failed with an error: "SnowflakeSQLException: IP x.x.x.x is not allowed to access Snowflake. Contact your local security administrator"
#### Symptoms
-You want to insert data into a table in the SQL database. If the data contains the decimal type and need to be inserted into a column with the decimal type in the SQL database, the data value may be changed to null.
+When you use Snowflake in Azure Data Factory, you can successfully use test connection in the Snowflake linked service, preview data and import schema on the Snowflake dataset, and run copy/lookup/get-metadata or other activities with it. But when you use Snowflake in the data flow activity, you may encounter an error like `SnowflakeSQLException: IP 13.66.58.164 is not allowed to access Snowflake. Contact your local security administrator.`
-If you do the preview, in previous stages, it will show the value like the following picture:
+#### Cause
+The Azure Data Factory data flow does not support the use of fixed IP ranges. For more detailed information, see [Azure Integration Runtime IP addresses](./azure-integration-runtime-ip-addresses.md).
-In the sink stage, it will become null, which is shown in the picture below.
+#### Recommendation
+To solve this issue, you can change the Snowflake account firewall settings with the following steps:
-#### Cause
-The decimal type has scale and precision properties. If your data type doesn't match that in the sink table, the system will validate that the target decimal is wider than the original decimal, and the original value does not overflow in the target decimal. Therefore, the value will be cast to null.
+1. You can get the IP range list of service tags from the "service tags IP range download link": [Discover service tags by using downloadable JSON files](../virtual-network/service-tags-overview.md#discover-service-tags-by-using-downloadable-json-files).
-#### Recommendation
-Check and compare the decimal type between data and table in the SQL database, and alter the scale and precision to the same.
+ :::image type="content" source="./media/data-flow-troubleshoot-connector-format/ip-range-list.png" alt-text="Screenshot that shows the IP range list.":::
-You can use toDecimal (IDecimal, scale, precision) to figure out if the original data can be cast to the target scale and precision. If it returns null, it means that the data cannot be cast and furthered when inserting.
+1. If you run a data flow in the "southcentralus" region, you need to allow the access from all addresses with name "AzureCloud.southcentralus", for example:
-## ADLS Gen2
+ :::image type="content" source="./media/data-flow-troubleshoot-connector-format/allow-access-with-name.png" alt-text="Screenshot that shows how to allow access from all addresses with the certain name.":::
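+
+If your Snowflake account firewall is implemented as a network policy, the ranges can be added with Snowflake SQL. The following is only a hedged sketch; the policy name and the CIDR ranges are placeholders, and the real ranges should come from the service tag entries for your region (for example, AzureCloud.southcentralus) in the downloaded file.
+
+```sql
+-- Hedged sketch: allow the Azure IP ranges for your data flow region in a Snowflake network policy.
+-- The policy name and the example CIDR ranges are placeholders.
+CREATE NETWORK POLICY azure_dataflow_policy
+  ALLOWED_IP_LIST = ('13.65.0.0/16', '13.66.0.0/17');
+
+ALTER ACCOUNT SET NETWORK_POLICY = azure_dataflow_policy;
+```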
-### Failed with an error: "Error while reading file XXX. It is possible the underlying files have been updated"
+### Queries in the source do not work
#### Symptoms
-When you use the ADLS Gen2 as a sink in the data flow (to preview data, debug/trigger run, etc.) and the partition setting in **Optimize** tab in the **Sink** stage is not default, you may find job fail with the following error message:
+When you try to read data from Snowflake with a query, you may encounter errors like:
-`Job failed due to reason: Error while reading file abfss:REDACTED_LOCAL_PART@prod.dfs.core.windows.net/import/data/e3342084-930c-4f08-9975-558a3116a1a9/part-00000-tid-7848242374008877624-5df7454e-7b14-4253-a20b-d20b63fe9983-1-1-c000.csv. It is possible the underlying files have been updated. You can explicitly invalidate the cache in Spark by running 'REFRESH TABLE tableName' command in SQL or by recreating the Dataset/DataFrame involved.`
+1. `SQL compilation error: error line 1 at position 7 invalid identifier 'xxx'`
+2. `SQL compilation error: Object 'xxx' does not exist or not authorized.`
#### Cause
-1. You don't assign a proper permission to your MI/SP authentication.
-1. You may have a customized job to handle files that you don't want, which will affect the data flow's middle output.
+You encounter this error because of an incorrect configuration.
#### Recommendation
-1. Check if your linked service has the R/W/E permission for Gen2. If you use the MI auth/SP authentication, at least grant the Storage Blob Data Contributor role in the Access control (IAM).
-1. Confirm if you have specific jobs that move/delete files to other place whose name does not match your rule. Because data flows will write down partition files into the target folder firstly and then do the merge and rename operations, the middle file's name might not match your rule.
-## ADLS Gen1
+Snowflake applies the following rules for storing identifiers at creation/definition time and resolving them in queries and other SQL statements:
-### Fail to create files with service principle authentication
+When an identifier (table name, schema name, column name, etc.) is unquoted, it is stored and resolved in uppercase by default, and it is case-insensitive. For example:
-#### Symptoms
-When you try to move or transfer data from different sources into the ADLS gen1 sink, if the linked service's authentication method is service principle authentication, your job may fail with the following error message:
-`org.apache.hadoop.security.AccessControlException: CREATE failed with error 0x83090aa2 (Forbidden. ACL verification failed. Either the resource does not exist or the user is not authorized to perform the requested operation.). [2b5e5d92-xxxx-xxxx-xxxx-db4ce6fa0487] failed with error 0x83090aa2 (Forbidden. ACL verification failed. Either the resource does not exist or the user is not authorized to perform the requested operation.)`
+Because it is case-insensitive, you can use any of the following queries to read Snowflake data, and the results are the same:<br/>
+- `Select MovieID, title from Public.TestQuotedTable2`<br/>
+- `Select movieId, title from Public.TESTQUOTEDTABLE2`<br/>
+- `Select movieID, TITLE from PUBLIC.TESTQUOTEDTABLE2`<br/>
-#### Cause
+When an identifier (table name, schema name, column name, etc.) is double-quoted, it is stored and resolved exactly as entered, including case, because it is case-sensitive. You can see an example in the following picture. For more details, refer to this document: [Identifier Requirements](https://docs.snowflake.com/en/sql-reference/identifiers-syntax.html#identifier-requirements).
-The RWX permission or the dataset property is not set correctly.
-#### Recommendation
+Because a case-sensitive identifier (table name, schema name, column name, etc.) contains lowercase characters, you must quote the identifier when reading data with a query, for example: <br/>
-- If the target folder doesn't have correct permissions, refer to this document to assign the correct permission in Gen1: [Use service principal authentication](./connector-azure-data-lake-store.md#use-service-principal-authentication).
+- Select **"movieId"**, **"title"** from Public.**"testQuotedTable2"**
-- If the target folder has the correct permission and you use the file name property in the data flow to target to the right folder and file name, but the file path property of the dataset is not set to the target file path (usually leave not set), as the example shown in the following pictures, you will encounter this failure because the backend system tries to create files based on the file path of the dataset, and the file path of the dataset doesn't have the correct permission.
-
- :::image type="content" source="./media/data-flow-troubleshoot-connector-format/file-path-property.png" alt-text="Screenshot that shows the file path property":::
-
- :::image type="content" source="./media/data-flow-troubleshoot-connector-format/file-name-property.png" alt-text="Screenshot that shows the file name property":::
+If you encounter errors with the Snowflake query, check whether the identifiers (table name, schema name, column name, etc.) are case-sensitive with the following steps:
+1. Sign in to the Snowflake server (`https://{accountName}.azure.snowflakecomputing.com/`, replacing {accountName} with your account name) to check the identifiers (table name, schema name, column name, etc.).
+
+1. Create worksheets to test and validate the query:
+ - Run `Use database {databaseName}`, replace {databaseName} with your database name.
+ - Run a query with table, for example: `select "movieId", "title" from Public."testQuotedTable2"`
- There are two methods to solve this issue:
- 1. Assign the WX permission to the file path of the dataset.
- 1. Set the file path of the dataset as the folder with WX permission, and set the rest folder path and file name in data flows.
+1. After the Snowflake SQL query is tested and validated, you can use it directly in the data flow Snowflake source.
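+
+As a small worksheet sketch of the behavior described above (the table and column names are the illustrative ones used in this section, and a PUBLIC schema is assumed in the current database):
+
+```sql
+-- Quoted identifiers are stored case-sensitively; unquoted identifiers resolve to uppercase.
+CREATE TABLE Public."testQuotedTable2" ("movieId" INT, "title" VARCHAR);
+
+-- Works: the identifiers are quoted exactly as they were created.
+SELECT "movieId", "title" FROM Public."testQuotedTable2";
+
+-- Fails: the unquoted names resolve to MOVIEID, TITLE, and TESTQUOTEDTABLE2, which don't exist.
+SELECT movieId, title FROM Public.testQuotedTable2;
+```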
## Next steps

For more help with troubleshooting, see these resources:
data-factory Pipeline Trigger Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/pipeline-trigger-troubleshoot-guide.md
It is an user error because JSON payload that hits management.azure.com is corru
Perform network tracing of your API call from the ADF portal by using the Edge/Chrome browser **Developer tools**. You will see the offending JSON payload, which could be due to special characters (for example, $), spaces, and other types of user input. Once you fix the string expression, you can proceed with the rest of the ADF usage calls in the browser.
+### Unable to publish event trigger with Access Denied failure
+
+**Cause**
+
+When the Azure account lacks the required access via a role membership, it is unable to access the storage account used for the trigger.
+
+**Resolution**
+
+The Azure account needs to be assigned to a role with sufficient permissions in the storage account's access control (IAM) for the event trigger publish to succeed. The role can be the Owner role, Contributor role, or any custom role with the **Microsoft.EventGrid/EventSubscriptions/Write** permission to the storage account.
+
+[Role-based access control for an event trigger](./how-to-create-event-trigger.md#role-based-access-control)
+[Storage Event Trigger - Permission and RBAC setting](https://techcommunity.microsoft.com/t5/azure-data-factory/storage-event-trigger-permission-and-rbac-setting/ba-p/2101782)
### ForEach activities do not run in parallel mode

**Cause**
data-factory Quickstart Create Data Factory Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/quickstart-create-data-factory-portal.md
description: Create a data factory with a pipeline that copies data from one loc
Previously updated : 12/14/2020 Last updated : 06/16/2021 + # Quickstart: Create a data factory by using the Azure Data Factory UI
databox-online Azure Stack Edge Gpu Create Virtual Machine Image https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-create-virtual-machine-image.md
Previously updated : 06/09/2021 Last updated : 06/14/2021 #Customer intent: As an IT admin, I need to understand how to create Azure VM images that I can use to deploy virtual machines on my Azure Stack Edge Pro GPU device.
The steps for preparing a custom VM image vary for a Windows or Linux VM.
Do the following steps to create a Windows VM image:
-1. Create a Windows virtual machine in Azure. For portal instructions, see [Create a Windows virtual machine in the Azure portal](/azure/virtual-machines/windows/quick-create-portal). For PowerShell instructions, see [Tutorial: Create and manage Windows VMs with Azure PowerShell](../virtual-machines/windows/tutorial-manage-vm.md).
+1. Create a Windows virtual machine in Azure. For portal instructions, see [Create a Windows virtual machine in the Azure portal](/azure/virtual-machines/windows/quick-create-portal). For PowerShell instructions, see [Tutorial: Create and manage Windows VMs with Azure PowerShell](../virtual-machines/windows/tutorial-manage-vm.md).
- The virtual machine must be a Generation 1 VM. The OS disk that you use to create your VM image must be a fixed-size VHD of any size that Azure supports. For VM size options, see [Supported VM sizes](azure-stack-edge-gpu-virtual-machine-sizes.md#supported-vm-sizes).
+ The virtual machine must be a Generation 1 VM. The OS disk that you use to create your VM image must be a fixed-size VHD of any size that Azure supports. For VM size options, see [Supported VM sizes](azure-stack-edge-gpu-virtual-machine-sizes.md#supported-vm-sizes).
+
+ You can use any Windows Gen1 VM with a fixed-size VHD in Azure Marketplace. For a list of commonly used Azure Marketplace images that could work, see [Azure Marketplace items available for Azure Stack Hub](azure-stack-edge-gpu-create-virtual-machine-marketplace-image.md#commonly-used-marketplace-images).
2. Generalize the virtual machine. To generalize the VM, [connect to the virtual machine](azure-stack-edge-gpu-deploy-virtual-machine-powershell.md#connect-to-a-windows-vm), open a command prompt, and run the following `sysprep` command:
Do the following steps to create a Linux VM image:
1. Create a Linux virtual machine in Azure. For portal instructions, see [Quickstart: Create a Linux VM in the Azure portal](../virtual-machines/linux/quick-create-portal.md). For PowerShell instructions, see [Quickstart: Create a Linux VM in Azure with PowerShell](../virtual-machines/linux/quick-create-powershell.md).
- You can use any Gen1 VM with a fixed-size VHD in Azure Marketplace to create Linux custom images, with the exception of Red Hat Enterprise Linux (RHEL) images, which require extra steps. For a list of Azure Marketplace images that could work, see [Azure Marketplace items available for Azure Stack Hub](/azure-stack/operator/azure-stack-marketplace-azure-items?view=azs-1910&preserve-view=true). For guidance on RHEL images, see [Using RHEL BYOS images](#using-rhel-byos-images), below.
+ You can use any Gen1 VM with a fixed-size VHD in Azure Marketplace to create Linux custom images, with the exception of Red Hat Enterprise Linux (RHEL) images, which require extra steps. For a list of Azure Marketplace images that could work, see [Azure Marketplace items available for Azure Stack Hub](azure-stack-edge-gpu-create-virtual-machine-marketplace-image.md#commonly-used-marketplace-images). For guidance on RHEL images, see [Using RHEL BYOS images](#using-rhel-byos-images), below.
1. Deprovision the VM. Use the Azure VM agent to delete machine-specific files and data. Use the `waagent` command with the `-deprovision+user` parameter on your source Linux VM. For more information, see [Understanding and using Azure Linux Agent](../virtual-machines/extensions/agent-linux.md).
databox-online Azure Stack Edge Gpu Create Virtual Machine Marketplace Image https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-create-virtual-machine-marketplace-image.md
Previously updated : 06/07/2021 Last updated : 06/14/2021 #Customer intent: As an IT admin, I need to understand how to create and upload Azure VM images that I can use with my Azure Stack Edge Pro device so that I can deploy VMs on the device.
For more information, go to [Deploy a VM on your Azure Stack Edge Pro device usi
## Prerequisites
-Before you can use Azure Marketplace images for Azure Stack Edge, make sure that you are connected to Azure in either of the following ways.
+Before you can use Azure Marketplace images for Azure Stack Edge, make sure you're connected to Azure in either of the following ways.
[!INCLUDE [azure-cli-prepare-your-environment](../../includes/azure-cli-prepare-your-environment-no-header.md)] ## Search for Azure Marketplace images
-You will now identify a specific Azure Marketplace image that you wish to use. Azure Marketplace hosts thousands of VM images.
+You'll now identify a specific Azure Marketplace image that you wish to use. Azure Marketplace hosts thousands of VM images.
-To find some of the most common Marketplace images that match your search criteria, run the following command.
+To find some of the most commonly used Marketplace images that match your search criteria, run the following command.
```azurecli
az vm image list --all [--publisher <Publisher>] [--offer <Offer>] [--sku <SKU>]
```
In this example, we will select Windows Server 2019 Datacenter Core, version 201
:::image type="content" source="media/azure-stack-edge-create-virtual-machine-marketplace-image/marketplace-image-1.png" alt-text="List of marketplace images":::
-Below is a list of URNs for some of the most common images. If you just want the latest version of a particular OS, the version number can be replaced with "latest" in the URN. For example, "MicrosoftWindowsServer:WindowsServer:2019-Datacenter:Latest".
+### Commonly used Marketplace images
+
+Below is a list of URNs for some of the most commonly used images. If you just want the latest version of a particular OS, the version number can be replaced with "latest" in the URN. For example, "MicrosoftWindowsServer:WindowsServer:2019-Datacenter:Latest".
| OS | SKU | Version | URN |
databox Data Box Disk Portal Customer Managed Shipping https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox/data-box-disk-portal-customer-managed-shipping.md
Previously updated : 05/08/2021 Last updated : 06/22/2021
When you place a Data Box Disk order, you can choose self-managed shipping optio
- Order name - Company name - Company legal name (if different)
- - Tax ID
+ - CNPJ (Business Tax ID, format: 00.000.000/0000-00) or CPF (Individual Tax ID, format: 000.000.000-00)
- Address - Country - Phone number
databox Data Box Portal Customer Managed Shipping https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox/data-box-portal-customer-managed-shipping.md
Previously updated : 05/08/2021 Last updated : 06/22/2021
When you place a Data Box order, you can choose the self-managed shipping option
- Order name - Company name - Company legal name (if different)
- - Tax ID
- - Address
+ - CNPJ (Business Tax ID, format: 00.000.000/0000-00) or CPF (Individual Tax ID, format: 000.000.000-00)
+ - Address
 - Country - Phone number - Contact name of the person who will pick up the Data Box Disk (A government-issued photo ID will be required to validate the contact's identity upon arrival.)
defender-for-iot How To Activate And Set Up Your Sensor https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-activate-and-set-up-your-sensor.md
The console supports the following certificate types:
> [!IMPORTANT] > We recommend that you don't use the default self-signed certificate. The certificate is not secure and should be used for test environments only. The owner of the certificate can't be validated, and the security of your system can't be maintained. Never use this option for production networks.
+See [Manage certificates](how-to-manage-individual-sensors.md#manage-certificates) for more information about working with certificates.
+ ### Sign in and activate the sensor **To sign in and activate:**
The console supports the following certificate types:
1. Enter the credentials defined during the sensor installation, or select the **Password recovery** option. If you purchased a preconfigured sensor from Arrow, generate a password first. For more information on password recovery, see [Investigate password failure at initial sign-in](how-to-troubleshoot-the-sensor-and-on-premises-management-console.md#investigate-password-failure-at-initial-sign-in).
-1. After you sign in, the **Activation** dialog box opens. Select **Upload** and go to the activation file that you downloaded during sensor onboarding.
+1. After you sign in, the **Activation** dialog box opens. Select **Upload** and go to the activation file that you downloaded during the sensor onboarding.
:::image type="content" source="media/how-to-activate-and-set-up-your-sensor/activation-upload-screen-with-upload-button.png" alt-text="Select Upload and go to the activation file.":::
defender-for-iot How To Manage Individual Sensors https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-manage-individual-sensors.md
Azure Defender for IoT uses SSL/TLS certificates to:
- Allow validation between the management console and connected sensors, and between a management console and a High Availability management console. Validation is evaluated against a Certificate Revocation List and the certificate expiration date. *If validation fails, communication between the management console and the sensor is halted and a validation error is presented in the console*. This option is enabled by default after installation.
- When validation is `ON`, the appliance should be able to establish connection to the CRL server defined by the certificate.
+- Third party Forwarding rules, for example alert information sent to SYSLOG, Splunk or ServiceNow; or communications with Active Directory are not validated.
- Third party Forwarding rules, for example alert information sent to SYSLOG, Splunk or ServiceNow; or communications with Active Directory are not validated.
+### About CRL servers
-#### SSL certificates
+When validation is on, the appliance should be able to establish a connection to the CRL server defined by the certificate. By default, the certificate references the CRL URL on HTTP port 80. Some organizational security policies may block access to this port. If your organization does not have access to port 80, you can:
+1. Define another URL and a specific port in the certificate.
+- The URL should be defined as http://<URL>:<Port> instead of http://<URL>.
+- Verify that the destination CRL server can listen on the port you defined.
+1. Use a proxy server that will access the CRL on port 80.
+1. Skip CRL validation. In this case, remove the CRL URL reference from the certificate.
++
+### About SSL certificates
The Defender for IoT sensor, and on-premises management console use SSL, and TLS certificates for the following functions:
digital-twins How To Ingest Opcua Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-ingest-opcua-data.md
Using the [Azure portal](https://portal.azure.com), create an Ubuntu Server virt
#### Install IoT Edge container
-Follow the instructions to [Install IoT Edge on Linux](../virtual-machines/linux/use-remote-desktop.md).
+Follow the instructions to [Install IoT Edge on Linux](../iot-edge/how-to-install-iot-edge.md).
Once the installation completes, run the following command to verify the status of your installation:
dms Known Issues Azure Mysql Online https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/known-issues-azure-mysql-online.md
- Title: "Known issues: Online migrations to Azure Database for MySQL"-
-description: Learn about known issues and migration limitations with online migrations to Azure Database for MySQL when using the Azure Database Migration Service.
--------- Previously updated : 02/20/2020--
-# Online migration issues & limitations to Azure DB for MySQL with Azure Database Migration Service
-
-Known issues and limitations associated with online migrations from MySQL to Azure Database for MySQL are described in the following sections.
-
-## Online migration configuration
---- The source MySQL Server version must be version 5.6.35, 5.7.18 or later-- Azure Database for MySQL supports:
- - MySQL community edition
- - InnoDB engine
-- Same version migration. Migrating MySQL 5.6 to Azure Database for MySQL 5.7 isn't supported. Migrations to or from MySQL 8.0 are not supported.-- Enable binary logging in my.ini (Windows) or my.cnf (Unix)
- - Set Server_id to any number larger or equals to 1, for example, Server_id=1 (only for MySQL 5.6)
- - Set log-bin = \<path> (only for MySQL 5.6)
- - Set binlog_format = row
- - Expire_logs_days = 5 (recommended - only for MySQL 5.6)
-- User must have the ReplicationAdmin role.-- Collations defined for the source MySQL database are the same as the ones defined in target Azure Database for MySQL.-- Schema must match between source MySQL database and target database in Azure Database for MySQL.-- Schema in target Azure Database for MySQL must not have foreign keys. Use the following query to drop foreign keys:
- ```sql
- SET group_concat_max_len = 8192;
- SELECT SchemaName, GROUP_CONCAT(DropQuery SEPARATOR ';\n') as DropQuery, GROUP_CONCAT(AddQuery SEPARATOR ';\n') as AddQuery
- FROM
- (SELECT
- KCU.REFERENCED_TABLE_SCHEMA as SchemaName, KCU.TABLE_NAME, KCU.COLUMN_NAME,
- CONCAT('ALTER TABLE ', KCU.TABLE_NAME, ' DROP FOREIGN KEY ', KCU.CONSTRAINT_NAME) AS DropQuery,
- CONCAT('ALTER TABLE ', KCU.TABLE_NAME, ' ADD CONSTRAINT ', KCU.CONSTRAINT_NAME, ' FOREIGN KEY (`', KCU.COLUMN_NAME, '`) REFERENCES `', KCU.REFERENCED_TABLE_NAME, '` (`', KCU.REFERENCED_COLUMN_NAME, '`) ON UPDATE ',RC.UPDATE_RULE, ' ON DELETE ',RC.DELETE_RULE) AS AddQuery
- FROM INFORMATION_SCHEMA.KEY_COLUMN_USAGE KCU, information_schema.REFERENTIAL_CONSTRAINTS RC
- WHERE
- KCU.CONSTRAINT_NAME = RC.CONSTRAINT_NAME
- AND KCU.REFERENCED_TABLE_SCHEMA = RC.UNIQUE_CONSTRAINT_SCHEMA
- AND KCU.REFERENCED_TABLE_SCHEMA = ['schema_name']) Queries
- GROUP BY SchemaName;
- ```
-
- Run the drop foreign key (which is the second column) in the query result.
-- Schema in target Azure Database for MySQL must not have any triggers. To drop triggers in target database:
- ```
- SELECT Concat('DROP TRIGGER ', Trigger_Name, ';') FROM information_schema.TRIGGERS WHERE TRIGGER_SCHEMA = 'your_schema';
- ```
-
-## Datatype limitations
--- **Limitation**: If there's a JSON datatype in the source MySQL database, migration will fail during continuous sync.-
- **Workaround**: Modify JSON datatype to medium text or longtext in source MySQL database.
--- **Limitation**: If there's no primary key on tables, continuous sync will fail.-
- **Workaround**: Temporarily set a primary key for the table for migration to continue. You can remove the primary key after data migration is complete.
-
-## LOB limitations
-
-Large Object (LOB) columns are columns that could grow large in size. For MySQL, Medium text, Longtext, Blob, Mediumblob, Longblob, etc., are some of the datatypes of LOB.
--- **Limitation**: If LOB data types are used as primary keys, migration will fail.-
- **Workaround**: Replace primary key with other datatypes or columns that aren't LOB.
--- **Limitation**: If the length of Large Object (LOB) column is bigger than the "Limit LOB size" parameter (should not be greater than 64 KB), data might be truncated at the target. You can check the length of LOB column using this query:
- ```
- SELECT max(length(description)) as LEN from catalog;
- ```
-
- **Workaround**: If you have LOB object that is bigger than 64 KB, use the "Allow unlimited LOB size" parameter. Note that migrations using "Allow unlimited LOB size" parameter will be slower than migrations using "Limit LOB size" parameter.
-
-## Limitations when migrating online from AWS RDS MySQL
-
-When you try to perform an online migration from AWS RDS MySQL to Azure Database for MySQL, you may come across the following errors.
--- **Error:** Database '{0}' has foreign key(s) on target. Fix the target and start a new data migration activity. Execute below script on target to list the foreign key(s)-
- **Limitation**: If you have foreign keys in your schema, the initial load and continuous sync of the migration will fail.
- **Workaround**: Execute the following script in MySQL workbench to extract the drop foreign key script and add foreign key script:
-
- ```
- SET group_concat_max_len = 8192; SELECT SchemaName, GROUP_CONCAT(DropQuery SEPARATOR ';\n') as DropQuery, GROUP_CONCAT(AddQuery SEPARATOR ';\n') as AddQuery FROM (SELECT KCU.REFERENCED_TABLE_SCHEMA as SchemaName, KCU.TABLE_NAME, KCU.COLUMN_NAME, CONCAT('ALTER TABLE ', KCU.TABLE_NAME, ' DROP FOREIGN KEY ', KCU.CONSTRAINT_NAME) AS DropQuery, CONCAT('ALTER TABLE ', KCU.TABLE_NAME, ' ADD CONSTRAINT ', KCU.CONSTRAINT_NAME, ' FOREIGN KEY (`', KCU.COLUMN_NAME, '`) REFERENCES `', KCU.REFERENCED_TABLE_NAME, '` (`', KCU.REFERENCED_COLUMN_NAME, '`) ON UPDATE ',RC.UPDATE_RULE, ' ON DELETE ',RC.DELETE_RULE) AS AddQuery FROM INFORMATION_SCHEMA.KEY_COLUMN_USAGE KCU, information_schema.REFERENTIAL_CONSTRAINTS RC WHERE KCU.CONSTRAINT_NAME = RC.CONSTRAINT_NAME AND KCU.REFERENCED_TABLE_SCHEMA = RC.UNIQUE_CONSTRAINT_SCHEMA AND KCU.REFERENCED_TABLE_SCHEMA = 'SchemaName') Queries GROUP BY SchemaName;
- ```
--- **Error:** Database '{0}' does not exist on server. Provided MySQL source server is case sensitive. Please check the database name.-
- **Limitation**: When migrating a MySQL database to Azure using Command Line Interface (CLI), users may hit this error. The service couldn't locate the database on the source server, which could be because you may have provided incorrect database name or the database doesn't exist on the listed server. Note database names are case-sensitive.
-
- **Workaround**: Provide the exact database name, and then try again.
--- **Error:** There are tables with the same name in the database '{database}'. Azure Database for MySQL does not support case sensitive tables.-
- **Limitation**: This error happens when you have two tables with the same name in the source database. Azure Database for MySQL doesn't support case-sensitive tables.
-
- **Workaround**: Update the table names to be unique, and then try again.
--- **Error:** The target database {database} is empty. Please migrate the schema.-
- **Limitation**: This error occurs when the target Azure Database for MySQL database doesn't have the required schema. Schema migration is required to enable migrating data to your target.
-
- **Workaround**: [Migrate the schema](./tutorial-mysql-azure-mysql-online.md#migrate-the-sample-schema) from your source database to the target database.
-
-## Other limitations
--- A password string that has opening and closing curly brackets { } at the beginning and end of the password string isn't supported. This limitation applies to both connecting to source MySQL and target Azure Database for MySQL.-- The following DDLs aren't supported:
- - All partition DDLs
- - Drop table
- - Rename table
-- Using the *alter table <table_name> add column <column_name>* statement to add columns to the beginning or to the middle of a table isn't supported. THe *alter table <table_name> add column <column_name>* adds the column at the end of the table.-- Indexes created on only part of the column data aren't supported. The following statement is an example that creates an index using only part of the column data:-
- ```
- CREATE INDEX partial_name ON customer (name(10));
- ```
--- In Azure Database Migration Service, the limit of databases to migrate in one single migration activity is four.--- Azure DMS does not support the CASCADE referential action, which helps to automatically delete or update a matching row in the child table when a row is deleted or updated in the parent table. For more information, in the MySQL documentation, see the Referential Actions section of the article [FOREIGN KEY Constraints](https://dev.mysql.com/doc/refman/8.0/en/create-table-foreign-keys.html). Azure DMS requires that you drop foreign key constraints in the target database server during the initial data load, and you cannot use referential actions. If your workload depends on updating a related child table via this referential action, we recommend that you perform a [dump and restore](../mysql/concepts-migrate-dump-restore.md) instead. --- **Error:** Row size too large (> 8126). Changing some columns to TEXT or BLOB may help. In current row format, BLOB prefix of 0 bytes is stored inline.-
- **Limitation**: This error happens when you're migrating to Azure Database for MySQL using the InnoDB storage engine and any table row size is too large (>8126 bytes).
-
- **Workaround**: Update the schema of the table that has a row size greater than 8126 bytes. We don't recommend changing the strict mode because the data will be truncated. Changing the page_size isn't supported.
dms Known Issues Troubleshooting Dms Source Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/known-issues-troubleshooting-dms-source-connectivity.md
Potential issues associated with connecting to a source AWS RDS SQL Server datab
## Known issues * [Known issues/migration limitations with online migrations to Azure SQL Database](./index.yml)
-* [Known issues/migration limitations with online migrations to Azure Database for MySQL](./known-issues-azure-mysql-online.md)
* [Known issues/migration limitations with online migrations to Azure Database for PostgreSQL](./known-issues-azure-postgresql-online.md) ## Next steps
dms Known Issues Troubleshooting Dms https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/known-issues-troubleshooting-dms.md
The following error occurs when creating an activity for a database migration pr
| - | - | | This error displays when you've selected more than four databases for a single migration activity. At present, each migration activity is limited to four databases. | Select four or fewer databases per migration activity. If you need to migrate more than four databases in parallel, provision another instance of Azure Database Migration Service. Currently, each subscription supports up to two Azure Database Migration Service instances.<br><br> |
-## Errors for MySQL migration to Azure MySQL with recovery failures
-
-When you migrate from MySQL to Azure Database for MySQL using Azure Database Migration Service, the migration activity fails with the following error:
-
-* **Error**: Database migration error - Task 'TaskID' was suspended due to [n] successive recovery failures.
-
-| Cause | Resolution |
-| - | - |
-| This error may occur when the user doing the migration is missing ReplicationAdmin role and/or privileges of REPLICATION CLIENT, REPLICATION REPLICA, and SUPER (versions earlier than MySQL 5.6.6).<br><br><br><br><br><br><br><br><br><br><br><br><br> | Make sure the [pre-requisite privileges](./tutorial-mysql-azure-mysql-online.md#prerequisites) for the user account are configured accurately on the Azure Database for MySQL instance. For example, the following steps can be followed to create a user named 'migrateuser' with required privileges:<br>1. CREATE USER migrateuser@'%' IDENTIFIED BY 'secret'; <br>2. Grant all privileges on db_name.* to 'migrateuser'@'%' identified by 'secret'; // repeat this step to grant access on more databases <br>3. Grant replication slave on *.* to 'migrateuser'@'%' identified by 'secret';<br>4. Grant replication client on *.* to 'migrateuser'@'%' identified by 'secret';<br>5. Flush privileges; |
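The same steps expressed as a runnable script (the user name and password are the placeholder values from the resolution above; the `GRANT ... IDENTIFIED BY` form applies to MySQL 5.6/5.7):

```sql
CREATE USER 'migrateuser'@'%' IDENTIFIED BY 'secret';
GRANT ALL PRIVILEGES ON db_name.* TO 'migrateuser'@'%' IDENTIFIED BY 'secret'; -- repeat per database
GRANT REPLICATION SLAVE ON *.* TO 'migrateuser'@'%' IDENTIFIED BY 'secret';
GRANT REPLICATION CLIENT ON *.* TO 'migrateuser'@'%' IDENTIFIED BY 'secret';
FLUSH PRIVILEGES;
```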
- ## Error when attempting to stop Azure Database Migration Service You receive following error when stopping the Azure Database Migration Service instance:
When you try to connect Azure Database Migration Service to SQL Server source th
## Additional known issues * [Known issues/migration limitations with online migrations to Azure SQL Database](./index.yml)
-* [Known issues/migration limitations with online migrations to Azure Database for MySQL](./known-issues-azure-mysql-online.md)
* [Known issues/migration limitations with online migrations to Azure Database for PostgreSQL](./known-issues-azure-postgresql-online.md) ## Next steps
dms Resource Scenario Status https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/resource-scenario-status.md
The following table shows Azure Database Migration Service support for offline m
| **Azure SQL VM** | SQL Server | ✔ | GA | | | Oracle | X | | | **Azure Cosmos DB** | MongoDB | ✔ | GA |
-| **Azure DB for MySQL - Single Server** | MySQL | ✔ | Public Preview |
-| | RDS MySQL | ✔ | Public Preview |
-| | Azure DB for MySQL* | ✔ | Public Preview |
-| **Azure DB for MySQL - Flexible Server** | MySQL | ✔ | Public Preview |
-| | RDS MySQL | ✔ | Public Preview |
-| | Azure DB for MySQL* | ✔ | Public Preview |
+| **Azure DB for MySQL - Single Server** | MySQL | ✔ | GA |
+| | RDS MySQL | ✔ | GA |
+| | Azure DB for MySQL* | ✔ | GA |
+| **Azure DB for MySQL - Flexible Server** | MySQL | ✔ | GA |
+| | RDS MySQL | ✔ | GA |
+| | Azure DB for MySQL* | ✔ | GA |
| **Azure DB for PostgreSQL - Single server** | PostgreSQL | X | | | RDS PostgreSQL | X | | | **Azure DB for PostgreSQL - Flexible server** | PostgreSQL | X |
dms Tutorial Mysql Azure Mysql Online https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/tutorial-mysql-azure-mysql-online.md
- Title: "Tutorial: Migrate MySQL online to Azure Database for MySQL"-
-description: Learn to perform an online migration from MySQL on-premises to Azure Database for MySQL by using Azure Database Migration Service.
--------- Previously updated : 01/08/2020--
-# Tutorial: Migrate MySQL to Azure Database for MySQL online using DMS
-
-You can use Azure Database Migration Service to migrate the databases from an on-premises MySQL instance to [Azure Database for MySQL](../mysql/index.yml) with minimal downtime. In other words, migration can be achieved with minimum downtime to the application. In this tutorial, you migrate the **Employees** sample database from an on-premises instance of MySQL 5.7 to Azure Database for MySQL by using an online migration activity in Azure Database Migration Service.
-
-> [!IMPORTANT]
-> The “MySQL to Azure Database for MySQL” online migration scenario will **no longer be available after June 1, 2021**. A parallelized, highly performant [offline migration capability](./tutorial-mysql-azure-mysql-offline-portal.md) is **now available in preview** to support “MySQL to Azure Database for MySQL” migrations. For online migrations, you can use open-source tools such as [MyDumper/MyLoader](https://centminmod.com/mydumper.html) with [data-in replication](../mysql/concepts-data-in-replication.md).
-
-In this tutorial, you learn how to:
-> [!div class="checklist"]
->
-> * Migrate the sample schema using mysqldump utility.
-> * Create an instance of Azure Database Migration Service.
-> * Create a migration project by using Azure Database Migration Service.
-> * Run the migration.
-> * Monitor the migration.
-
-> [!NOTE]
-> Using Azure Database Migration Service to perform an online migration requires creating an instance based on the Premium pricing tier.
-
-> [!IMPORTANT]
-> For an optimal migration experience, Microsoft recommends creating an instance of Azure Database Migration Service in the same Azure region as the target database. Moving data across regions or geographies can slow down the migration process and introduce errors.
-
-> [!NOTE]
-> Bias-free communication
->
-> Microsoft supports a diverse and inclusionary environment. This article contains references to the word _slave_. The Microsoft [style guide for bias-free communication](https://github.com/MicrosoftDocs/microsoft-style-guide/blob/master/styleguide/bias-free-communication.md) recognizes this as an exclusionary word. The word is used in this article for consistency because it's currently the word that appears in the software. When the software is updated to remove the word, this article will be updated to be in alignment.
->
--
-## Prerequisites
-
-To complete this tutorial, you need to:
-
-* Download and install [MySQL community edition](https://dev.mysql.com/downloads/mysql/) 5.6 or 5.7. The on-premises MySQL version must match the Azure Database for MySQL version. For example, MySQL 5.6 can only migrate to Azure Database for MySQL 5.6 and can't be upgraded to 5.7 as part of the migration. Migrations to or from MySQL 8.0 are not supported.
-* [Create an instance in Azure Database for MySQL](../mysql/quickstart-create-mysql-server-database-using-azure-portal.md). Refer to the article [Use MySQL Workbench to connect and query data](../mysql/connect-workbench.md) for details about how to connect and create a database using the Workbench application.
-* Create a Microsoft Azure Virtual Network for Azure Database Migration Service by using Azure Resource Manager deployment model, which provides site-to-site connectivity to your on-premises source servers by using either [ExpressRoute](../expressroute/expressroute-introduction.md) or [VPN](../vpn-gateway/vpn-gateway-about-vpngateways.md). For more information about creating a virtual network, see the [Virtual Network Documentation](../virtual-network/index.yml), and especially the quickstart articles with step-by-step details.
-
- > [!NOTE]
 - > During virtual network setup, if you use ExpressRoute with network peering to Microsoft, add the following service [endpoints](../virtual-network/virtual-network-service-endpoints-overview.md) to the subnet in which the service will be provisioned:
- >
- > * Target database endpoint (for example, SQL endpoint, Cosmos DB endpoint, and so on)
- > * Storage endpoint
- > * Service bus endpoint
- >
- > This configuration is necessary because Azure Database Migration Service lacks internet connectivity.
-
-* Ensure that your virtual network Network Security Group rules don't block the outbound port 443 of ServiceTag for ServiceBus, Storage and AzureMonitor. For more detail on virtual network NSG traffic filtering, see the article [Filter network traffic with network security groups](../virtual-network/virtual-network-vnet-plan-design-arm.md).
-* Configure your [Windows Firewall for database engine access](/sql/database-engine/configure-windows/configure-a-windows-firewall-for-database-engine-access).
-* Open your Windows firewall to allow Azure Database Migration Service to access the source MySQL Server, which by default is TCP port 3306.
-* When using a firewall appliance in front of your source database(s), you may need to add firewall rules to allow Azure Database Migration Service to access the source database(s) for migration.
-* Create a server-level [firewall rule](../azure-sql/database/firewall-configure.md) for Azure Database for MySQL to allow Azure Database Migration Service access to the target databases. Provide the subnet range of the virtual network used for Azure Database Migration Service.
-* The source MySQL must be on supported MySQL community edition. To determine the version of MySQL instance, in the MySQL utility or MySQL Workbench, run the following command:
-
- ```
- SELECT @@version;
- ```
-
-* Azure Database for MySQL supports only InnoDB tables. To convert MyISAM tables to InnoDB, see the article [Converting Tables from MyISAM to InnoDB](https://dev.mysql.com/doc/refman/5.7/en/converting-tables-to-innodb.html)
-
-* Enable binary logging in the my.ini (Windows) or my.cnf (Unix) file on the source database server by using the following configuration (a sample configuration file section appears after this prerequisites list):
-
- * **server_id** = 1 or greater (relevant only for MySQL 5.6)
- * **log-bin** =\<path> (relevant only for MySQL 5.6)
- For example: log-bin = E:\MySQL_logs\BinLog
- * **binlog_format** = row
- * **Expire_logs_days** = 5 (it's recommended to not use zero; relevant only for MySQL 5.6)
- * **Binlog_row_image** = full (relevant only for MySQL 5.6)
- * **log_slave_updates** = 1
-
-* The user must have the ReplicationAdmin role with the following privileges:
-
- * **REPLICATION CLIENT** - Required for Change Processing tasks only. In other words, Full Load only tasks don't require this privilege.
- * **REPLICATION REPLICA** - Required for Change Processing tasks only. In other words, Full Load only tasks don't require this privilege.
- * **SUPER** - Only required in versions earlier than MySQL 5.6.6.
-
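Returning to the binary-logging prerequisite above, a minimal `[mysqld]` section reflecting those settings might look like the following (the log path is just the example value from the list; adjust it for your server):

```
[mysqld]
server_id         = 1
log-bin           = E:\MySQL_logs\BinLog
binlog_format     = row
expire_logs_days  = 5
binlog_row_image  = full
log_slave_updates = 1
```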
-## Migrate the sample schema
-
-To migrate all the database objects such as table schemas, indexes, and stored procedures, you need to extract the schema from the source database and apply it to the target database. To extract the schema, you can use mysqldump with the `--no-data` parameter.
-
-Assuming you have MySQL **Employees** sample database in the on-premises system, the command to do schema migration using mysqldump is:
-
-```
-mysqldump -h [servername] -u [username] -p[password] --databases [db name] --no-data > [schema file path]
-```
-
-For example:
-
-```
-mysqldump -h 10.10.123.123 -u root -p --databases employees --no-data > d:\employees.sql
-```
-
-To import schema to Azure Database for MySQL target, run the following command:
-
-```
-mysql.exe -h [servername] -u [username] -p[password] [database]< [schema file path]
- ```
-
-For example:
-
-```
-mysql.exe -h shausample.mysql.database.azure.com -u dms@shausample -p employees < d:\employees.sql
- ```
-
-If you have foreign keys in your schema, the initial load and continuous sync of the migration will fail. Execute the following script in MySQL Workbench to extract the drop foreign key script and add foreign key script.
-
-```sql
-SET group_concat_max_len = 8192;
- SELECT SchemaName, GROUP_CONCAT(DropQuery SEPARATOR ';\n') as DropQuery, GROUP_CONCAT(AddQuery SEPARATOR ';\n') as AddQuery
- FROM
- (SELECT
- KCU.REFERENCED_TABLE_SCHEMA as SchemaName,
- KCU.TABLE_NAME,
- KCU.COLUMN_NAME,
- CONCAT('ALTER TABLE ', KCU.TABLE_NAME, ' DROP FOREIGN KEY ', KCU.CONSTRAINT_NAME) AS DropQuery,
- CONCAT('ALTER TABLE ', KCU.TABLE_NAME, ' ADD CONSTRAINT ', KCU.CONSTRAINT_NAME, ' FOREIGN KEY (`', KCU.COLUMN_NAME, '`) REFERENCES `', KCU.REFERENCED_TABLE_NAME, '` (`', KCU.REFERENCED_COLUMN_NAME, '`) ON UPDATE ',RC.UPDATE_RULE, ' ON DELETE ',RC.DELETE_RULE) AS AddQuery
- FROM INFORMATION_SCHEMA.KEY_COLUMN_USAGE KCU, information_schema.REFERENTIAL_CONSTRAINTS RC
- WHERE
- KCU.CONSTRAINT_NAME = RC.CONSTRAINT_NAME
- AND KCU.REFERENCED_TABLE_SCHEMA = RC.UNIQUE_CONSTRAINT_SCHEMA
- AND KCU.REFERENCED_TABLE_SCHEMA = 'SchemaName') Queries
- GROUP BY SchemaName;
- ```
-
-Run the drop foreign key script (the second column in the query result) to drop the foreign keys.
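For reference, the generated statements look similar to the following pair from the **Employees** sample (constraint names and referential actions depend on your schema):

```sql
-- DropQuery column: run before the migration starts.
ALTER TABLE dept_emp DROP FOREIGN KEY dept_emp_ibfk_1;

-- AddQuery column: save and run after the migration completes.
ALTER TABLE dept_emp ADD CONSTRAINT dept_emp_ibfk_1
  FOREIGN KEY (`emp_no`) REFERENCES `employees` (`emp_no`)
  ON UPDATE NO ACTION ON DELETE CASCADE;
```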
-
-> [!NOTE]
-> Azure DMS does not support the CASCADE referential action, which helps to automatically delete or update a matching row in the child table when a row is deleted or updated in the parent table. For more information, in the MySQL documentation, see the Referential Actions section of the article [FOREIGN KEY Constraints](https://dev.mysql.com/doc/refman/8.0/en/create-table-foreign-keys.html).
-> Azure DMS requires that you drop foreign key constraints in the target database server during the initial data load, and you cannot use referential actions. If your workload depends on updating a related child table via this referential action, we recommend that you perform a [dump and restore](../mysql/concepts-migrate-dump-restore.md) instead.
--
-> [!IMPORTANT]
-> If importing data using a backup, remove the CREATE DEFINER commands manually or by using the --skip-definer command when performing a mysqldump. DEFINER requires super privileges to create and is restricted in Azure Database for MySQL.
-
-If you have triggers in the database, they enforce data integrity in the target ahead of the full data migration from the source. The recommendation is to disable triggers on all the tables in the target during migration, and then enable the triggers after migration is done.
-
-Execute the following script in MySQL Workbench on the target database to extract the drop trigger script and add trigger script.
-
-```sql
-SELECT
- SchemaName,
- GROUP_CONCAT(DropQuery SEPARATOR ';\n') as DropQuery,
- Concat('DELIMITER $$ \n\n', GROUP_CONCAT(AddQuery SEPARATOR '$$\n'), '$$\n\nDELIMITER ;') as AddQuery
-FROM
-(
-SELECT
- TRIGGER_SCHEMA as SchemaName,
- Concat('DROP TRIGGER `', TRIGGER_NAME, "`") as DropQuery,
- Concat('CREATE TRIGGER `', TRIGGER_NAME, '` ', ACTION_TIMING, ' ', EVENT_MANIPULATION,
- '\nON `', EVENT_OBJECT_TABLE, '`\n' , 'FOR EACH ', ACTION_ORIENTATION, ' ',
- ACTION_STATEMENT) as AddQuery
-FROM
- INFORMATION_SCHEMA.TRIGGERS
-ORDER BY EVENT_OBJECT_SCHEMA, EVENT_OBJECT_TABLE, ACTION_TIMING, EVENT_MANIPULATION, ACTION_ORDER ASC
-) AS Queries
-GROUP BY SchemaName
-```
-
-Run the generated drop trigger query (DropQuery column) in the result to drop triggers in the target database. The add trigger query can be saved and used after data migration is complete.
-
-## Register the Microsoft.DataMigration resource provider
-
-1. Sign in to the Azure portal, select **All services**, and then select **Subscriptions**.
-
- ![Show portal subscriptions](media/tutorial-mysql-to-azure-mysql-online/01-portal-select-subscriptions.png)
-
-2. Select the subscription in which you want to create the instance of Azure Database Migration Service, and then select **Resource providers**.
-
- ![Show resource providers](media/tutorial-mysql-to-azure-mysql-online/02-01-portal-select-resource-provider.png)
-
-3. Search for migration, and then to the right of **Microsoft.DataMigration**, select **Register**.
-
- ![Register resource provider](media/tutorial-mysql-to-azure-mysql-online/02-02-portal-register-resource-provider.png)
-
-## Create a Database Migration Service instance
-
-1. In the Azure portal, select + **Create a resource**, search for Azure Database Migration Service, and then select **Azure Database Migration Service** from the drop-down list.
-
- ![Azure Marketplace](media/tutorial-mysql-to-azure-mysql-online/03-dms-portal-marketplace.png)
-
-2. On the **Azure Database Migration Service** screen, select **Create**.
-
- ![Create Azure Database Migration Service instance](media/tutorial-mysql-to-azure-mysql-online/04-dms-portal-marketplace-create.png)
-
-3. On the **Create Migration Service** screen, specify a name for the service, the subscription, and a new or existing resource group.
-
-4. Select a pricing tier and move to the networking screen. Offline migration capability is available in both Standard and Premium pricing tier.
-
- For more information on costs and pricing tiers, see the [pricing page](https://aka.ms/dms-pricing).
-
- ![Configure Azure Database Migration Service basic settings](media/tutorial-mysql-to-azure-mysql-online/05-dms-portal-create-basic.png)
-
-5. Select an existing virtual network from the list or provide the name of new virtual network to be created. Move to the review + create screen. Optionally you can add tags to the service using the tags screen.
-
 - The virtual network provides Azure Database Migration Service with access to the source MySQL instance and the target Azure Database for MySQL instance.
-
- ![Configure Azure Database Migration Service network settings](media/tutorial-mysql-to-azure-mysql-online/06-dms-portal-create-networking.png)
-
- For more information about how to create a virtual network in the Azure portal, see the article [Create a virtual network using the Azure portal](../virtual-network/quick-create-portal.md).
-
-6. Review the configurations and select **Create** to create the service.
-
- ![Azure Database Migration Service create](media/tutorial-mysql-to-azure-mysql-online/07-dms-portal-create-submit.png)
-
-## Create a migration project
-
-After the service is created, locate it within the Azure portal, open it, and then create a new migration project.
-
-1. In the Azure portal, select **All services**, search for Azure Database Migration Service, and then select **Azure Database Migration Services**.
-
- ![Locate all instances of Azure Database Migration Service](media/tutorial-mysql-to-azure-mysql-online/08-01-dms-portal-search-service.png)
-
-2. Select your migration service instance from the search results and select + **New Migration Project**.
-
- ![Create a new migration project](media/tutorial-mysql-to-azure-mysql-online/08-02-dms-portal-new-project.png)
-
-3. On the **New migration project** screen, specify a name for the project, in the **Source server type** selection box, select **MySQL**, in the **Target server type** selection box, select **Azure Database For MySQL** and in the **Migration activity type** selection box, select **Online data migration**. Select **Create and run activity**.
-
- ![Create Database Migration Service Project](media/tutorial-mysql-to-azure-mysql-online/09-dms-portal-project-mysql-create.png)
-
- > [!NOTE]
- > Alternately, you can choose **Create project only** to create the migration project now and execute the migration later.
-
-## Configure migration project
-
-1. On the **Select source** screen, specify the connection details for the source MySQL instance, and select **Next : Select target>>**
-
- ![Add source details screen](media/tutorial-mysql-to-azure-mysql-online/10-dms-portal-project-mysql-source.png)
-
-2. On the **Select target** screen, specify the connection details for the target Azure Database for MySQL instance, and select **Next : Select databases>>**
-
- ![Add target details screen](media/tutorial-mysql-to-azure-mysql-online/11-dms-portal-project-mysql-target.png)
-
-3. On the **Select databases** screen, map the source and the target database for migration, and select **Next : Configure migration settings>>**. You can select the **Make Source Server Readonly** option to make the source as read-only, but be cautious that this is a server level setting. If selected, it sets the entire server to read-only, not just the selected databases.
-
- If the target database contains the same database name as the source database, Azure Database Migration Service selects the target database by default.
- ![Select database details screen](media/tutorial-mysql-to-azure-mysql-online/12-dms-portal-project-mysql-select-db.png)
-
- > [!NOTE]
- > Though you can select multiple databases in this step, each instance of Azure Database Migration Service supports up to 4 databases for concurrent migration. Also, there is a limit of 10 instances of Azure Database Migration Service per subscription per region. For example, if you have 80 databases to migrate, you can migrate 40 of them to the same region concurrently, but only if you have created 10 instances of the Azure Database Migration Service.
-
-4. On the **Configure migration settings** screen, select the tables to be part of migration, and select **Next : Summary>>**. If the target tables have any data, they are not selected by default but you can explicitly select them and they will be truncated before starting the migration.
-
- ![Select tables screen](media/tutorial-mysql-to-azure-mysql-online/13-dms-portal-project-mysql-select-tbl.png)
-
-5. On the **Summary** screen, in the **Activity name** text box, specify a name for the migration activity and review the summary to ensure that the source and target details match what you previously specified.
-
- ![Migration project summary](media/tutorial-mysql-to-azure-mysql-online/14-dms-portal-project-mysql-activity-summary.png)
-
-6. Select **Start migration**. The migration activity window appears, and the **Status** of the activity is **Initializing**. The **Status** changes to **Running** when the table migrations start.
-
-## Monitor the migration
-
-1. On the migration activity screen, select **Refresh** to update the display until the **Status** of the migration shows as **Complete**.
-
- ![Activity Status - complete](media/tutorial-mysql-to-azure-mysql-online/15-dms-activity-completed.png)
-
-2. Under **Database Name**, select specific database to get to the migration status for **Full data load** and **Incremental data sync** operations.
-
- Full data load will show the initial load migration status while Incremental data sync will show change data capture (CDC) status.
-
- ![Activity Status - Full load completed](media/tutorial-mysql-to-azure-mysql-online/16-dms-activity-full-load-completed.png)
-
- ![Activity Status - Incremental data sync](media/tutorial-mysql-to-azure-mysql-online/17-dms-activity-incremental-data-sync.png)
-
-## Perform migration cutover
-
-After the initial Full load is completed, the databases are marked **Ready to cutover**.
-
-1. When you're ready to complete the database migration, select **Start Cutover**.
-
- ![Start cutover](media/tutorial-mysql-to-azure-mysql-online/18-dms-start-cutover.png)
-
-2. Make sure to stop all the incoming transactions to the source database; wait until the **Pending changes** counter shows **0**.
-3. Select **Confirm**, and then select **Apply**.
-4. When the database migration status shows **Completed**, connect your applications to the new target Azure Database for MySQL database.
-
-## Next steps
-
-* For information about known issues and limitations when performing online migrations to Azure Database for MySQL, see the article [Known issues and workarounds with Azure Database for MySQL online migrations](known-issues-azure-mysql-online.md).
-* For information about Azure Database Migration Service, see the article [What is Azure Database Migration Service?](./dms-overview.md).
-* For information about Azure Database for MySQL, see the article [What is Azure Database for MySQL?](../mysql/overview.md).
dms Tutorial Rds Mysql Server Azure Db For Mysql Online https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/tutorial-rds-mysql-server-azure-db-for-mysql-online.md
- Title: "Tutorial: Migrate RDS MySQL online to Azure Database for MySQL"-
-description: Learn to perform an online migration from RDS MySQL to Azure Database for MySQL by using the Azure Database Migration Service.
--------- Previously updated : 06/09/2020--
-# Tutorial: Migrate RDS MySQL to Azure Database for MySQL online using DMS
-
-You can use Azure Database Migration Service to migrate databases from an RDS MySQL instance to [Azure Database for MySQL](../mysql/index.yml) while the source database remains online during migration. In other words, migration can be achieved with minimal downtime to the application. In this tutorial, you migrate the **Employees** sample database from an instance of RDS MySQL to Azure Database for MySQL by using the online migration activity in Azure Database Migration Service.
-
-> [!IMPORTANT]
-> The “RDS MySQL to Azure Database for MySQL” online migration scenario is being replaced with a parallelized, highly performant offline migration scenario on June 1, 2021. For online migrations, you can use this new offering together with [data-in replication](../mysql/concepts-data-in-replication.md). Alternatively, use open-source tools such as [MyDumper/MyLoader](https://centminmod.com/mydumper.html) with data-in replication for online migrations.
-
-In this tutorial, you learn how to:
-> [!div class="checklist"]
->
-> * Migrate the sample schema by using the mysqldump and mysql utilities.
-> * Create an instance of Azure Database Migration Service.
-> * Create a migration project by using Azure Database Migration Service.
-> * Run the migration.
-> * Monitor the migration.
-
-> [!NOTE]
-> Using Azure Database Migration Service to perform an online migration requires creating an instance based on the Premium pricing tier. For more information, see the Azure Database Migration Service [pricing](https://azure.microsoft.com/pricing/details/database-migration/) page.
-
-> [!IMPORTANT]
-> For an optimal migration experience, Microsoft recommends creating an instance of the Azure Database Migration Service in the same Azure region as the target database. Moving data across regions or geographies can slow down the migration process and introduce errors.
--
-This article describes how to perform an online migration from an instance of RDS MySQL to Azure Database for MySQL.
-
-## Prerequisites
-
-To complete this tutorial, you need to:
-
-* Ensure that the source MySQL server is running a supported MySQL community edition. To determine the version of your MySQL instance, in the mysql utility or MySQL Workbench, run the command:
-
- ```
- SELECT @@version;
- ```
-
- For more information, see the article [Supported Azure Database for MySQL versions](../mysql/concepts-supported-versions.md).
-
-* Download and install the [MySQL **Employees** sample database](https://dev.mysql.com/doc/employee/en/employees-installation.html).
-* Create an instance of [Azure Database for MySQL](../mysql/quickstart-create-mysql-server-database-using-azure-portal.md).
-* Create a Microsoft Azure Virtual Network for Azure Database Migration Service by using the Azure Resource Manager deployment model, which provides site-to-site connectivity to your on-premises source servers by using either [ExpressRoute](../expressroute/expressroute-introduction.md) or [VPN](../vpn-gateway/vpn-gateway-about-vpngateways.md). For more information about creating a virtual network, see the [Virtual Network Documentation](../virtual-network/index.yml), and especially the quickstart articles with step-by-step details.
-* Ensure that your virtual network Network Security Group rules don't block the outbound port 443 of ServiceTag for ServiceBus, Storage and AzureMonitor. For more detail on virtual network NSG traffic filtering, see the article [Filter network traffic with network security groups](../virtual-network/virtual-network-vnet-plan-design-arm.md).
-* Configure your [Windows Firewall](/sql/database-engine/configure-windows/configure-a-windows-firewall-for-database-engine-access) (or your Linux firewall) to allow for database engine access. For MySQL server, allow port 3306 for connectivity.
-
-> [!NOTE]
-> Azure Database for MySQL only supports InnoDB tables. To convert MyISAM tables to InnoDB, please see the article [Converting Tables from MyISAM to InnoDB](https://dev.mysql.com/doc/refman/5.7/en/converting-tables-to-innodb.html) .
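A single table can be converted with a statement of this form (the table name is a placeholder):

```sql
-- Convert a MyISAM table to the InnoDB storage engine.
ALTER TABLE my_table ENGINE=InnoDB;
```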
-
-### Set up AWS RDS MySQL for replication
-
-1. To create a new parameter group, follow the instructions provided by AWS in the article [MySQL Database Log Files](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_LogAccess.Concepts.MySQL.html), in the **Binary Logging Format** section.
-2. Create a new parameter group with the following configuration:
- * log_bin = ON
- * binlog_format = row
- * binlog_checksum = NONE
-3. Save the new parameter group.
-4. Associate the new parameter group with the RDS MySQL instance. A reboot might be required.
-5. Once the parameter group is in place, connect to the MySQL instance and [set binlog retention](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/mysql_rds_set_configuration.html#mysql_rds_set_configuration-usage-notes.binlog-retention-hours) to at least 5 days.
-```
-call mysql.rds_set_configuration('binlog retention hours', 120);
-```
-
-## Migrate the schema
-
-1. Extract the schema from the source database and apply it to the target database to complete migration of all database objects such as table schemas, indexes, and stored procedures.
-
- The easiest way to migrate only the schema is to use mysqldump with the --no-data parameter. The command to migrate the schema is:
-
- ```
- mysqldump -h [servername] -u [username] -p[password] --databases [db name] --no-data > [schema file path]
- ```
-
- For example, to dump a schema file for the **Employees** database, use the following command:
-
- ```
- mysqldump -h 10.10.123.123 -u root -p --databases employees --no-data > d:\employees.sql
- ```
-
-2. Import the schema to target service, which is Azure Database for MySQL. To restore the schema dump file, run the following command:
-
- ```
- mysql.exe -h [servername] -u [username] -p[password] [database]< [schema file path]
- ```
-
- For example, to import the schema for the **Employees** database:
-
- ```
- mysql.exe -h shausample.mysql.database.azure.com -u dms@shausample -p employees < d:\employees.sql
- ```
-
-3. If you have foreign keys in your schema, the initial load and continuous sync of the migration will fail. To extract the drop foreign key script and add foreign key script at the destination (Azure Database for MySQL), run the following script in MySQL Workbench:
-
- ```
- SET group_concat_max_len = 8192;
- SELECT SchemaName, GROUP_CONCAT(DropQuery SEPARATOR ';\n') as DropQuery, GROUP_CONCAT(AddQuery SEPARATOR ';\n') as AddQuery
- FROM
- (SELECT
- KCU.REFERENCED_TABLE_SCHEMA as SchemaName,
- KCU.TABLE_NAME,
- KCU.COLUMN_NAME,
- CONCAT('ALTER TABLE ', KCU.TABLE_NAME, ' DROP FOREIGN KEY ', KCU.CONSTRAINT_NAME) AS DropQuery,
- CONCAT('ALTER TABLE ', KCU.TABLE_NAME, ' ADD CONSTRAINT ', KCU.CONSTRAINT_NAME, ' FOREIGN KEY (`', KCU.COLUMN_NAME, '`) REFERENCES `', KCU.REFERENCED_TABLE_NAME, '` (`', KCU.REFERENCED_COLUMN_NAME, '`) ON UPDATE ',RC.UPDATE_RULE, ' ON DELETE ',RC.DELETE_RULE) AS AddQuery
- FROM INFORMATION_SCHEMA.KEY_COLUMN_USAGE KCU, information_schema.REFERENTIAL_CONSTRAINTS RC
- WHERE
- KCU.CONSTRAINT_NAME = RC.CONSTRAINT_NAME
- AND KCU.REFERENCED_TABLE_SCHEMA = RC.UNIQUE_CONSTRAINT_SCHEMA
- AND KCU.REFERENCED_TABLE_SCHEMA = 'SchemaName') Queries
- GROUP BY SchemaName;
- ```
-
-4. Run the drop foreign key script (the second column in the query result) to drop the foreign keys.
-
-> [!NOTE]
-> Azure DMS does not support the CASCADE referential action, which helps to automatically delete or update a matching row in the child table when a row is deleted or updated in the parent table. For more information, in the MySQL documentation, see the Referential Actions section of the article [FOREIGN KEY Constraints](https://dev.mysql.com/doc/refman/8.0/en/create-table-foreign-keys.html).
-> Azure DMS requires that you drop foreign key constraints in the target database server during the initial data load, and you cannot use referential actions. If your workload depends on updating a related child table via this referential action, we recommend that you perform a [dump and restore](../mysql/concepts-migrate-dump-restore.md) instead.
-
-5. If you have triggers (insert or update triggers) in the database, they enforce data integrity in the target before data is replicated from the source. The recommendation is to disable triggers in all the tables *at the target* during migration, and then enable the triggers after migration is complete.
-
- To disable triggers in target database:
-
- ```
- select concat ('alter table ', event_object_table, ' disable trigger ', trigger_name)
- from information_schema.triggers;
- ```
-
-6. If there are instances of the ENUM data type in any tables, we recommend temporarily updating them to the ‘character varying’ data type in the target table. When data replication is complete, revert the data type to ENUM.
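   As an illustration (the table and column are hypothetical), the temporary change and the later revert look like this:

   ```sql
   -- Before replication: relax the ENUM column in the target to a variable-length string type.
   ALTER TABLE orders MODIFY order_status VARCHAR(20);

   -- After replication completes: restore the original ENUM definition.
   ALTER TABLE orders MODIFY order_status ENUM('pending','shipped','delivered');
   ```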
-
-## Register the Microsoft.DataMigration resource provider
-
-1. Sign in to the Azure portal, select **All services**, and then select **Subscriptions**.
-
- ![Show portal subscriptions](media/tutorial-rds-mysql-server-azure-db-for-mysql-online/portal-select-subscription1.png)
-
-2. Select the subscription in which you want to create the instance of Azure Database Migration Service, and then select **Resource providers**.
-
- ![Show resource providers](media/tutorial-rds-mysql-server-azure-db-for-mysql-online/portal-select-resource-provider.png)
-
-3. Search for migration, and then to the right of **Microsoft.DataMigration**, select **Register**.
-
- ![Register resource provider](media/tutorial-rds-mysql-server-azure-db-for-mysql-online/portal-register-resource-provider.png)
-
-## Create an instance of Azure Database Migration Service
-
-1. In the Azure portal, select + **Create a resource**, search for Azure Database Migration Service, and then select **Azure Database Migration Service** from the drop-down list.
-
- ![Azure Marketplace](media/tutorial-rds-mysql-server-azure-db-for-mysql-online/portal-marketplace.png)
-
-2. On the **Azure Database Migration Service** screen, select **Create**.
-
- ![Create Azure Database Migration Service instance](media/tutorial-rds-mysql-server-azure-db-for-mysql-online/dms-create1.png)
-
-3. On the **Create Migration Service** screen, specify a name for the service, the subscription, and a new or existing resource group.
-
-4. Select the location in which you want to create the instance of Azure Database Migration Service.
-
-5. Select an existing virtual network or create a new one.
-
- The virtual network provides Azure Database Migration Service with access to the source MySQL instance and the target Azure Database for MySQL instance.
-
- For more information about how to create a virtual network in the Azure portal, see the article [Create a virtual network using the Azure portal](../virtual-network/quick-create-portal.md).
-
-6. Select a pricing tier; for this online migration, be sure to select the Premium: 4vCores pricing tier.
-
- ![Configure Azure Database Migration Service instance settings](media/tutorial-rds-mysql-server-azure-db-for-mysql-online/dms-settings3.png)
-
-7. Select **Create** to create the service.
-
-## Create a migration project
-
-After the service is created, locate it within the Azure portal, open it, and then create a new migration project.
-
-1. In the Azure portal, select **All services**, search for Azure Database Migration Service, and then select **Azure Database Migration Services**.
-
- ![Locate all instances of the Azure Database Migration Service](media/tutorial-rds-mysql-server-azure-db-for-mysql-online/dms-search.png)
-
-2. On the **Azure Database Migration Services** screen, search for the name of the Azure Database Migration Service instance that you created, and then select the instance.
-
- ![Locate your instance of the Azure Database Migration Service](media/tutorial-rds-mysql-server-azure-db-for-mysql-online/dms-instance-search.png)
-
-3. Select + **New Migration Project**.
-4. On the **New migration project** screen, specify a name for the project, in the **Source server type** text box, select **MySQL**, and then in the **Target server type** text box, select **AzureDbForMySQL**.
-5. In the **Choose type of activity** section, select **Online data migration**.
-
- > [!IMPORTANT]
- > Be sure to select **Online data migration**; offline migrations are not supported for this scenario.
-
- ![Create Database Migration Service Project](media/tutorial-rds-mysql-server-azure-db-for-mysql-online/dms-create-project6.png)
-
- > [!NOTE]
- > Alternately, you can choose **Create project only** to create the migration project now and execute the migration later.
-
-6. Select **Save**.
-
-7. Select **Create and run activity** to create the project and run the migration activity.
-
- > [!NOTE]
- > Please make a note of the pre-requisites needed to set up online migration in the project creation blade.
-
-## Specify source details
-
-* On the **Migration source detail** screen, specify the connection details for the source MySQL instance.
-
- ![Source Details](media/tutorial-rds-mysql-server-azure-db-for-mysql-online/dms-source-details5.png)
-
-## Specify target details
-
-1. Select **Save**, and then on the **Target details** screen, specify the connection details for the target Azure Database for MySQL server, which is pre-provisioned and has the **Employees** schema deployed using MySQLDump.
-
- ![Select Target](media/tutorial-rds-mysql-server-azure-db-for-mysql-online/dms-select-target5.png)
-
-2. Select **Save**, and then on the **Map to target databases** screen, map the source and the target database for migration.
-
- If the target database contains the same database name as the source database, Azure Database Migration Service selects the target database by default.
-
- ![Map to target databases](media/tutorial-rds-mysql-server-azure-db-for-mysql-online/dms-map-targets-activity5.png)
-
-3. Select **Save**, on the **Migration summary** screen, in the **Activity name** text box, specify a name for the migration activity, and then review the summary to ensure that the source and target details match what you previously specified.
-
- ![Migration Summary](media/tutorial-rds-mysql-server-azure-db-for-mysql-online/dms-migration-summary2.png)
-
-## Run the migration
-
-* Select **Run migration**.
-
- The migration activity window appears, and the **Status** of the activity is **Initializing**.
-
-## Monitor the migration
-
-1. On the migration activity screen, select **Refresh** to update the display until the **Status** of the migration shows as **Running**.
-
- ![Activity Status - running](media/tutorial-rds-mysql-server-azure-db-for-mysql-online/dms-activity-status4.png)
-
-2. Under **DATABASE NAME**, select a specific database to get to the migration status for **Full data load** and **Incremental data sync** operations.
-
- **Full data load** shows the initial load migration status, while **Incremental data sync** shows change data capture (CDC) status.
-
- ![Inventory screen - full data load](media/tutorial-rds-mysql-server-azure-db-for-mysql-online/dms-inventory-full-load.png)
-
- ![Inventory screen - incremental data sync](media/tutorial-rds-mysql-server-azure-db-for-mysql-online/dms-inventory-incremental.png)
-
-## Perform migration cutover
-
-After the initial Full load is completed, the databases are marked **Ready to Cutover**.
-
-1. When you're ready to complete the database migration, select **Start Cutover**.
-
- ![Start cut over](media/tutorial-rds-mysql-server-azure-db-for-mysql-online/dms-inventory-start-cutover.png)
-
-2. Make sure to stop all the incoming transactions to the source database; wait until the **Pending changes** counter shows **0**.
-3. Select **Confirm**, and then select **Apply**.
-4. When the database migration status shows **Completed**, connect your applications to the new target Azure Database for MySQL database.
-
-Your online migration of an RDS MySQL instance to Azure Database for MySQL is now complete.
-
-## Next steps
-
-* For information about the Azure Database Migration Service, see the article [What is the Azure Database Migration Service?](./dms-overview.md).
-* For information about Azure Database for MySQL, see the article [What is Azure Database for MySQL?](../mysql/overview.md).
-* For other questions, email the [Ask Azure Database Migrations](mailto:AskAzureDatabaseMigrations@service.microsoft.com) alias.
event-grid Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/whats-new.md
Title: What's new? Release notes - Azure Event Grid
+ Title: What's new? Azure Event Grid
description: Learn what is new with Azure Event Grid, such as the latest release notes, known issues, bug fixes, deprecated functionality, and upcoming changes. Last updated 04/27/2021
Last updated 04/27/2021
>Get notified about when to revisit this page for updates by copying and pasting this URL: `https://docs.microsoft.com/api/search/rss?search=%22Release+notes+-+Azure+Event+Grid%22&locale=en-us` into your ![RSS feed reader icon](./media/whats-new/feed-icon-16x16.png) feed reader.
-Azure Event Grid receives improvements on an ongoing basis. To stay up to date with the most recent developments, this article provides you with information about:
--- The latest releases-- Known issues-- Bug fixes-- Deprecated functionality-- Plans for changes
+Azure Event Grid receives improvements on an ongoing basis. To stay up to date with the most recent developments, this article provides you with information about the features that are added or updated in a release.
## 6.1.0-preview (2020-10) - [Managed identities for system topics](enable-identity-system-topics.md)
expressroute Expressroute Howto Add Gateway Portal Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-howto-add-gateway-portal-resource-manager.md
You can view a [Video](https://azure.microsoft.com/documentation/videos/azure-ex
| Public IP address name | Provide a name for the public IP address. | > [!IMPORTANT]
- > If you plan to use IPv6-based private peering over ExpressRoute, make sure to select an AZ SKU (ErGw1AZ, ErGw2AZ, ErGw3AZ) for **SKU**.
+ > If you plan to use IPv6-based private peering over ExpressRoute, please refer to the [PowerShell documentation](https://docs.microsoft.com/azure/expressroute/expressroute-howto-add-gateway-resource-manager) for creating your gateway with a Public IP address of type Standard, Static.
> >
expressroute Expressroute Howto Add Gateway Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-howto-add-gateway-resource-manager.md
The steps for this task use a VNet based on the values in the following configur
```azurepowershell-interactive $pip = New-AzPublicIpAddress -Name $GWIPName -ResourceGroupName $RG -Location $Location -AllocationMethod Dynamic ```
+
+ If you plan to use IPv6-based private peering over ExpressRoute, please set the IP SKU to Standard and the AllocationMethod to Static:
+ ```azurepowershell-interactive
+ $pip = New-AzPublicIpAddress -Name $GWIPName -ResourceGroupName $RG -Location $Location -AllocationMethod Static -SKU Standard
+ ```
+
1. Create the configuration for your gateway. The gateway configuration defines the subnet and the public IP address to use. In this step, you're specifying the configuration that will be used when you create the gateway. Use the following sample to create your gateway configuration. ```azurepowershell-interactive
The steps for this task use a VNet based on the values in the following configur
New-AzVirtualNetworkGateway -Name $GWName -ResourceGroupName $RG -Location $Location -IpConfigurations $ipconf -GatewayType Expressroute -GatewaySku Standard ``` > [!IMPORTANT]
-> If you plan to use IPv6-based private peering over ExpressRoute, make sure to select an AZ SKU (ErGw1AZ, ErGw2AZ, ErGw3AZ) for **-GatewaySku**.
+> If you plan to use IPv6-based private peering over ExpressRoute, make sure to select an AZ SKU (ErGw1AZ, ErGw2AZ, ErGw3AZ) for **-GatewaySku**, or use a non-AZ SKU (Standard, HighPerformance, UltraPerformance) for **-GatewaySku** together with a Standard, Static public IP address.
> >
expressroute Expressroute Howto Add Ipv6 Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-howto-add-ipv6-portal.md
This article describes how to add IPv6 support to connect via ExpressRoute to your resources in Azure using the Azure portal.
-> [!Note]
-> This feature is currently available for preview in [Azure regions with Availability Zones](../availability-zones/az-region.md#azure-regions-with-availability-zones). Your ExpressRoute circuit can therefore be created using any peering location, but the IPv6-based deployments it connects to must be in a region with Availability Zones.
- ## Register for Public Preview Before adding IPv6 support, you must first enroll your subscription. To enroll, run the following commands via Azure PowerShell:
From a browser, go to the [Azure portal](https://portal.azure.com), and then sig
## Update your connection to an existing virtual network
-Follow the steps below if you have an existing environment of Azure resources in a region with Availability Zones that you would like to use your IPv6 Private Peering with.
+Follow the steps below if you have an existing environment of Azure resources that you would like to use your IPv6 Private Peering with.
1. Navigate to the virtual network that your ExpressRoute circuit is connected to.
Follow the steps below if you have an existing environment of Azure resources in
1. Navigate to the **Subnets** tab and select the **GatewaySubnet**. Check **Add IPv6 address space** and provide an IPv6 address space for your subnet. The gateway IPv6 subnet should be /64 or larger. **Save** your configuration once you've specified all parameters. :::image type="content" source="./media/expressroute-howto-add-ipv6-portal/add-ipv6-gateway-space.png" alt-text="Screenshot of add Ipv6 address space to the subnet.":::
+
+1. If you have an existing zone-redundant gateway, run the following in PowerShell to enable IPv6 connectivity (note that it may take up to 1 hour for the changes to take effect). Otherwise, [create the virtual network gateway](./expressroute-howto-add-gateway-portal-resource-manager.md) using any SKU and a Standard, Static public IP address. If you plan to use FastPath, use UltraPerformance or ErGw3AZ (note that this option is only available for circuits using ExpressRoute Direct).
-1. If you have an existing zone-redundant gateway, navigate to your ExpressRoute gateway and refresh the resource to enable IPv6 connectivity. Otherwise, [create the virtual network gateway](expressroute-howto-add-gateway-portal-resource-manager.md) using a zone-redundant SKU (ErGw1AZ, ErGw2AZ, ErGw3AZ). If you plan to use FastPath, use ErGw3AZ.
-
- :::image type="content" source="./media/expressroute-howto-add-ipv6-portal/refresh-gateway.png" alt-text="Screenshot of refreshing the gateway.":::
+ ```azurepowershell-interactive
+ $gw = Get-AzVirtualNetworkGateway -Name "GatewayName" -ResourceGroupName "ExpressRouteResourceGroup"
+    Set-AzVirtualNetworkGateway -VirtualNetworkGateway $gw
+    ```
## Create a connection to a new virtual network
-Follow the steps below if you plan to connect to a new set of Azure resources in a region with Availability Zones using your IPv6 Private Peering.
+Follow the steps below if you plan to connect to a new set of Azure resources using your IPv6 Private Peering.
1. Create a dual-stack virtual network with both IPv4 and IPv6 address space. For more information, see [Create a virtual network](../virtual-network/quick-create-portal.md#create-a-virtual-network). 1. [Create the dual-stack gateway subnet](expressroute-howto-add-gateway-portal-resource-manager.md#create-the-gateway-subnet).
-1. [Create the virtual network gateway](expressroute-howto-add-gateway-portal-resource-manager.md#create-the-virtual-network-gateway) using a zone-redundant SKU (ErGw1AZ, ErGw2AZ, ErGw3AZ). If you plan to use FastPath, use ErGw3AZ (note that this option is only available for circuits using ExpressRoute Direct).
+1. [Create the virtual network gateway](expressroute-howto-add-gateway-portal-resource-manager.md#create-the-virtual-network-gateway) using any SKU and a Standard, Static public IP address. If you plan to use FastPath, use UltraPerformance or ErGw3AZ (note that this option is only available for circuits using ExpressRoute Direct).
1. [Link your virtual network to your ExpressRoute circuit](expressroute-howto-linkvnet-portal-resource-manager.md). ## Limitations
-While IPv6 support is available for connections to deployments in regions with Availability Zones, it doesn't support the following use cases:
+While IPv6 support is available for connections to deployments in Public Azure regions, it doesn't support the following use cases:
-* Connections to deployments in Azure via a non-AZ ExpressRoute gateway SKU
-* Connections to deployments in non-AZ regions
+* Connections to existing ExpressRoute gateways that are *not* zone-redundant
* Global Reach connections between ExpressRoute circuits * Use of ExpressRoute with virtual WAN * FastPath with non-ExpressRoute Direct circuits
While IPv6 support is available for connections to deployments in regions with A
To troubleshoot ExpressRoute problems, see the following articles: * [Verifying ExpressRoute connectivity](expressroute-troubleshooting-expressroute-overview.md)
-* [Troubleshooting network performance](expressroute-troubleshooting-network-performance.md)
+* [Troubleshooting network performance](expressroute-troubleshooting-network-performance.md)
expressroute Expressroute Howto Add Ipv6 Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-howto-add-ipv6-powershell.md
This article describes how to add IPv6 support to connect via ExpressRoute to your resources in Azure using Azure PowerShell.
-> [!Note]
-> This feature is currently available for preview in [Azure regions with Availability Zones](../availability-zones/az-region.md#azure-regions-with-availability-zones). Your ExpressRoute circuit can therefore be created using any peering location, but the IPv6-based deployments it connects to must be in a region with Availability Zones.
- ## Working with Azure PowerShell [!INCLUDE [updated-for-az](../../includes/hybrid-az-ps.md)]
Your request will then be approved by the ExpressRoute team within 2-3 business
## Update your connection to an existing virtual network
-Follow the steps below if you have an existing environment of Azure resources in an a region with Availability Zones that you would like to use your IPv6 Private Peering with.
+Follow the steps below if you have an existing environment of Azure resources that you would like to use your IPv6 Private Peering with.
1. Retrieve the virtual network that your ExpressRoute circuit is connected to.
Follow the steps below if you have an existing environment of Azure resources in
Set-AzVirtualNetwork -VirtualNetwork $vnet ```
-4. If you have an existing zone-redundant gateway, run the following to enable IPv6 connectivity. Otherwise, [create the virtual network gateway](./expressroute-howto-add-gateway-resource-manager.md) using a zone-redundant SKU (ErGw1AZ, ErGw2AZ, ErGw3AZ).
+4. If you have an existing zone-redundant gateway, run the following to enable IPv6 connectivity (note that it may take up to 1 hour for the changes to take effect). Otherwise, [create the virtual network gateway](./expressroute-howto-add-gateway-resource-manager.md) using any SKU. If you plan to use FastPath, use UltraPerformance or ErGw3AZ (note that this is only available for circuits using ExpressRoute Direct).
```azurepowershell-interactive $gw = Get-AzVirtualNetworkGateway -Name "GatewayName" -ResourceGroupName "ExpressRouteResourceGroup"
Follow the steps below if you have an existing environment of Azure resources in
## Create a connection to a new virtual network
-Follow the steps below if you plan to connect to a new set of Azure resources in a region with Availability Zones using your IPv6 Private Peering.
+Follow the steps below if you plan to connect to a new set of Azure resources using your IPv6 Private Peering.
1. Create a dual-stack virtual network with both IPv4 and IPv6 address space. For more information, see [Create a virtual network](../virtual-network/quick-create-portal.md#create-a-virtual-network). 2. [Create the dual-stack gateway subnet](./expressroute-howto-add-gateway-resource-manager.md#add-a-gateway).
-3. [Create the virtual network gateway](./expressroute-howto-add-gateway-resource-manager.md#add-a-gateway) using a zone-redundant SKU (ErGw1AZ, ErGw2AZ, ErGw3AZ). If you plan to use FastPath, use ErGw3AZ (note that this is only available for circuits using ExpressRoute Direct).
+3. [Create the virtual network gateway](./expressroute-howto-add-gateway-resource-manager.md#add-a-gateway) using any SKU. If you plan to use FastPath, use UltraPerformance or ErGw3AZ (note that this is only available for circuits using ExpressRoute Direct).
4. [Link your virtual network to your ExpressRoute circuit](./expressroute-howto-linkvnet-arm.md). ## Limitations
-While IPv6 support is available for connections to deployments in regions with Availability Zones, it does not support the following use cases:
+While IPv6 support is available for connections to deployments in Public Azure regions, it does not support the following use cases:
-* Connections to deployments in Azure via a non-AZ ExpressRoute gateway SKU
-* Connections to deployments in non-AZ regions
+* Connections to existing ExpressRoute gateways that are *not* zone-redundant
* Global Reach connections between ExpressRoute circuits * Use of ExpressRoute with virtual WAN * FastPath with non-ExpressRoute Direct circuits
While IPv6 support is available for connections to deployments in regions with A
To troubleshoot ExpressRoute problems, see the following articles: * [Verifying ExpressRoute connectivity](expressroute-troubleshooting-expressroute-overview.md)
-* [Troubleshooting network performance](expressroute-troubleshooting-network-performance.md)
+* [Troubleshooting network performance](expressroute-troubleshooting-network-performance.md)
expressroute Monitor Expressroute Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/monitor-expressroute-reference.md
+
+ Title: Monitoring ExpressRoute data reference
+description: Important reference material needed when you monitor ExpressRoute
+++++ Last updated : 06/22/2021++
+# Monitoring ExpressRoute data reference
+
+This article provides a reference of log and metric data collected to analyze the performance and availability of ExpressRoute.
+See [Monitoring ExpressRoute](monitor-expressroute.md) for details on collecting and analyzing monitoring data for ExpressRoute.
+
+## Metrics
+
+This section lists all the automatically collected platform metrics for ExpressRoute. For more information, see a list of [all platform metrics supported in Azure Monitor](../azure-monitor/essentials/metrics-supported.md).
+
+| Metric Type | Resource Provider / Type Namespace<br/> and link to individual metrics |
+|-|--|
+| ExpressRoute circuit | [Microsoft.Network/expressRouteCircuits](../azure-monitor/essentials/metrics-supported.md#microsoftnetworkexpressroutecircuits) |
+| ExpressRoute circuit peering | [Microsoft.Network/expressRouteCircuits/peerings](../azure-monitor/essentials/metrics-supported.md#microsoftnetworkexpressroutecircuitspeerings) |
+| ExpressRoute Gateways | [Microsoft.Network/expressRouteGateways](../azure-monitor/essentials/metrics-supported.md#microsoftnetworkexpressroutegateways) |
+| ExpressRoute Direct | [Microsoft.Network/expressRoutePorts](../azure-monitor/essentials/metrics-supported.md#microsoftnetworkexpressrouteports) |
+
+>[!NOTE]
+> The *GlobalGlobalReachBitsInPerSecond* and *GlobalGlobalReachBitsOutPerSecond* metrics are visible only if at least one Global Reach connection is established.
+>
+
+## Metric Dimensions
+
+For more information on what metric dimensions are, see [Multi-dimensional metrics](/azure/azure-monitor/platform/data-platform-metrics#multi-dimensional-metrics).
+
+ExpressRoute has the following dimensions associated with its metrics.
+
+### Dimension for ExpressRoute circuit
+
+| Dimension Name | Description |
+| - | -- |
+| **PeeringType** | The type of peering configured. The supported values are Microsoft and Private peering. |
+| **Peering** | The supported values are Primary and Secondary. |
+| **PeeredCircuitSkey** | The remote ExpressRoute circuit service key connected using Global Reach. |
+
+### Dimension for ExpressRoute gateway
+
+| Dimension Name | Description |
+| - | -- |
+| **roleInstance** | The gateway instance. Each ExpressRoute gateway comprises multiple instances, and the supported values are GatewayTenantWork_IN_X (where X is a minimum of 0 and a maximum of the number of gateway instances -1). |
+
+### Dimension for ExpressRoute Direct
+
+| Dimension Name | Description |
+| - | -- |
+| **Link** | The physical link. Each ExpressRoute Direct port pair comprises two physical links for redundancy, and the supported values are link1 and link2. |
+
+## Resource logs
+
+This section lists the types of resource logs you can collect for ExpressRoute.
+
+|Resource Log Type | Resource Provider / Type Namespace<br/> and link to individual metrics |
+|-|--|
+| ExpressRoute Circuit | [Microsoft.Network/expressRouteCircuits](../azure-monitor/essentials/resource-logs-categories.md#microsoftnetworkexpressroutecircuits) |
+
+For reference, see a list of [all resource logs category types supported in Azure Monitor](../azure-monitor/essentials/resource-logs-schema.md).
+
+## Azure Monitor Logs tables
+
+Azure ExpressRoute uses Kusto tables from Azure Monitor Logs. You can query these tables with Log Analytics. For a reference of all Azure Monitor Logs / Log Analytics tables, see the [Azure Monitor Log Table Reference](/azure/azure-monitor/reference/tables/tables-resourcetype).
+
+## Activity log
+
+The following table lists the operations related to ExpressRoute that may be created in the Activity log.
+
+| Operation | Description |
+|:|:|
+| All Administrative operations | All administrative operations including create, update and delete of an ExpressRoute circuit. |
+| Create or update ExpressRoute circuit | An ExpressRoute circuit was created or updated. |
+| Deletes ExpressRoute circuit | An ExpressRoute circuit was deleted.|
+
+For more information on the schema of Activity Log entries, see [Activity Log schema](../azure-monitor/essentials/activity-log-schema.md).
+
+## Schemas
+
+For detailed description of the top-level diagnostic logs schema, see [Supported services, schemas, and categories for Azure Diagnostic Logs](../azure-monitor/essentials/resource-logs-schema.md).
+
+When reviewing any metrics through Log Analytics, the output will contain the following columns:
+
+|**Column**|**Type**|**Description**|
+| | | |
+|TimeGrain|string|PT1M (metric values are pushed every minute)|
+|Count|real|Usually equal to 2 (each MSEE pushes a single metric value every minute)|
+|Minimum|real|The minimum of the two metric values pushed by the two MSEEs|
+|Maximum|real|The maximum of the two metric values pushed by the two MSEEs|
+|Average|real|Equal to (Minimum + Maximum)/2|
+|Total|real|Sum of the two metric values from both MSEEs (the main value to focus on for the metric queried)|
+
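+As a minimal sketch, you could pull these columns from PowerShell with the Az.OperationalInsights module; the workspace ID and the choice of *BitsInPerSecond* are assumptions for illustration only.
+
+```azurepowershell
+# Hypothetical Log Analytics workspace ID (the workspace GUID)
+$workspaceId = "00000000-0000-0000-0000-000000000000"
+
+# Project the columns described in the table above for one metric
+$query = @"
+AzureMetrics
+| where MetricName == "BitsInPerSecond"
+| project TimeGenerated, TimeGrain, Count, Minimum, Maximum, Average, Total
+"@
+
+$result = Invoke-AzOperationalInsightsQuery -WorkspaceId $workspaceId -Query $query
+$result.Results | Format-Table
+```
+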
+## See Also
+
+- See [Monitoring Azure ExpressRoute](monitor-expressroute.md) for a description of monitoring Azure ExpressRoute.
+- See [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md) for details on monitoring Azure resources.
expressroute Monitor Expressroute https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/monitor-expressroute.md
+
+ Title: Monitoring Azure ExpressRoute
+description: Start here to learn how to monitor Azure ExpressRoute.
++++++ Last updated : 06/22/2021++
+# Monitoring Azure ExpressRoute
+
+When you have critical applications and business processes relying on Azure resources, you want to monitor those resources for their availability, performance, and operation.
+
+This article describes the monitoring data generated by Azure ExpressRoute. Azure ExpressRoute uses [Azure Monitor](../azure-monitor/overview.md). If you're unfamiliar with the features of Azure Monitor common to all Azure services that use it, read [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md).
+
+## ExpressRoute insights
+
+Some services in Azure have a special focused pre-built monitoring dashboard in the Azure portal that provides a starting point for monitoring your service. These special dashboards are called "insights".
+
+ExpressRoute uses Network insights to provide a detailed topology mapping of all ExpressRoute components (peerings, connections, gateways) in relation to one another. Network insights for ExpressRoute also has a preloaded metrics dashboard for availability, throughput, packet drops, and gateway metrics. For more information, see [Azure ExpressRoute Insights using Networking Insights](expressroute-network-insights.md).
+
+## Monitoring data
+
+Azure ExpressRoute collects the same kinds of monitoring data as other Azure resources that are described in [Monitoring data from Azure resources](../azure-monitor/essentials/monitor-azure-resource.md#monitoring-data).
+
+See [Monitoring Azure ExpressRoute data reference](monitor-expressroute-reference.md) for detailed information on the metrics and logs created by Azure ExpressRoute.
+
+## Collection and routing
+
+Platform metrics and the Activity log are collected and stored automatically, but can be routed to other locations by using a diagnostic setting.
+
+Resource Logs aren't collected and stored until you create a diagnostic setting and route them to one or more locations.
+
+See [Create diagnostic setting to collect platform logs and metrics in Azure](../azure-monitor/essentials/diagnostic-settings.md) for the detailed process for creating a diagnostic setting using the Azure portal, CLI, or PowerShell. When you create a diagnostic setting, you specify which categories of logs to collect. The categories for *Azure ExpressRoute* are listed in [Azure ExpressRoute monitoring data reference](monitor-expressroute-reference.md#resource-logs).
+
+> [!IMPORTANT]
+> Enabling these settings requires additional Azure services (storage account, event hub, or Log Analytics), which may increase your cost. To calculate an estimated cost, visit the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator).
+
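+For example, a hedged sketch of routing circuit logs and metrics to a Log Analytics workspace with the Az PowerShell modules (the resource names, and the `PeeringRouteLog` category, are assumptions to replace with your own values from the data reference above):
+
+```azurepowershell
+# Hypothetical circuit and workspace - replace with your own resource IDs
+$circuitId   = (Get-AzExpressRouteCircuit -Name "MyCircuit" -ResourceGroupName "MyRG").Id
+$workspaceId = "/subscriptions/<subscription-id>/resourceGroups/MyRG/providers/Microsoft.OperationalInsights/workspaces/MyWorkspace"
+
+# Send the assumed PeeringRouteLog category and all metrics to the workspace
+Set-AzDiagnosticSetting -Name "ExpressRouteDiagnostics" `
+    -ResourceId $circuitId `
+    -WorkspaceId $workspaceId `
+    -Category "PeeringRouteLog" `
+    -MetricCategory "AllMetrics" `
+    -Enabled $true
+```
+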
+The metrics and logs you can collect are discussed in the following sections.
+
+## Analyzing metrics
+
+You can analyze metrics for *Azure ExpressRoute* with metrics from other Azure services using metrics explorer by opening **Metrics** from the **Azure Monitor** menu. See [Getting started with Azure Metrics Explorer](/azure/azure-monitor/platform/metrics-getting-started) for details on using this tool.
++
+For reference, you can see a list of [all resource metrics supported in Azure Monitor](/azure/azure-monitor/platform/metrics-supported).
+
+* To view **ExpressRoute** metrics, filter by Resource Type *ExpressRoute circuits*.
+* To view **Global Reach** metrics, filter by Resource Type *ExpressRoute circuits* and select an ExpressRoute circuit resource that has Global Reach enabled.
+* To view **ExpressRoute Direct** metrics, filter Resource Type by *ExpressRoute Ports*.
+
+Once a metric is selected, the default aggregation will be applied. Optionally, you can apply splitting, which will show the metric with different dimensions.
+
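+If you prefer scripting over the portal, a minimal sketch with `Get-AzMetric` retrieves the same platform metrics; the circuit name and resource group here are hypothetical.
+
+```azurepowershell
+# Hypothetical circuit - replace with your own
+$circuit = Get-AzExpressRouteCircuit -Name "MyCircuit" -ResourceGroupName "MyRG"
+
+# Average BitsInPerSecond over the last hour at a 5-minute grain
+Get-AzMetric -ResourceId $circuit.Id `
+    -MetricName "BitsInPerSecond" `
+    -TimeGrain 00:05:00 `
+    -StartTime (Get-Date).AddHours(-1) `
+    -EndTime (Get-Date) `
+    -AggregationType Average
+```
+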
+### Aggregation Types
+
+Metrics explorer supports SUM, MAX, MIN, AVG, and COUNT as [aggregation types](../azure-monitor/essentials/metrics-charts.md#aggregation). Use the recommended Aggregation type when reviewing the insights for each ExpressRoute metric.
+
+* Sum: The sum of all values captured during the aggregation interval.
+* Count: The number of measurements captured during the aggregation interval.
+* Average: The average of the metric values captured during the aggregation interval.
+* Min: The smallest value captured during the aggregation interval.
+* Max: The largest value captured during the aggregation interval.
+
+>[!NOTE]
+>Using **Classic Metrics** is not recommended.
+>
+
+### Circuits metrics
+
+#### Bits In and Out - Metrics across all peerings
+
+Aggregation type: *Avg*
+
+You can view metrics across all peerings on a given ExpressRoute circuit.
++
+#### Bits In and Out - Metrics per peering
+
+Aggregation type: *Avg*
+
+You can view metrics for private, public, and Microsoft peering in bits/second.
++
+#### BGP Availability - Split by Peer
+
+Aggregation type: *Avg*
+
+You can view near real-time availability of BGP (Layer-3 connectivity) across peerings and peers (Primary and Secondary ExpressRoute routers). This dashboard shows the Primary BGP session status is up for private peering and the Secondary BGP session status is down for private peering.
++
+#### ARP Availability - Split by Peering
+
+Aggregation type: *Avg*
+
+You can view near real-time availability of [ARP](./expressroute-troubleshooting-arp-resource-manager.md) (Layer-3 connectivity) across peerings and peers (Primary and Secondary ExpressRoute routers). This dashboard shows the Private Peering ARP session status is up across both peers, but down for Microsoft peering for both peers. The default aggregation (Average) was used across both peers.
++
+### ExpressRoute Direct Metrics
+
+#### Admin State - Split by link
+
+Aggregation type: *Avg*
+
+You can view the Admin state for each link of the ExpressRoute Direct port pair. The Admin state indicates whether the physical port is on or off. This state is required to pass traffic across the ExpressRoute Direct connection.
++
+#### Bits In Per Second - Split by link
+
+Aggregation type: *Avg*
+
+You can view the bits in per second across both links of the ExpressRoute Direct port pair. Monitor this dashboard to compare inbound bandwidth for both links.
++
+#### Bits Out Per Second - Split by link
+
+Aggregation type: *Avg*
+
+You can also view the bits out per second across both links of the ExpressRoute Direct port pair. Monitor this dashboard to compare outbound bandwidth for both links.
++
+#### Line Protocol - Split by link
+
+Aggregation type: *Avg*
+
+You can view the line protocol across each link of the ExpressRoute Direct port pair. The Line Protocol indicates if the physical link is up and running over ExpressRoute Direct. Monitor this dashboard and set alerts to know when the physical connection has gone down.
++
+#### Rx Light Level - Split by link
+
+Aggregation type: *Avg*
+
+You can view the Rx light level (the light level that the ExpressRoute Direct port is **receiving**) for each port. Healthy Rx light levels generally fall within a range of -10 dBm to 0 dBm. Set alerts to be notified if the Rx light level falls outside of the healthy range.
++
+#### Tx Light Level - Split by link
+
+Aggregation type: *Avg*
+
+You can view the Tx light level (the light level that the ExpressRoute Direct port is **transmitting**) for each port. Healthy Tx light levels generally fall within a range of -10 dBm to 0 dBm. Set alerts to be notified if the Tx light level falls outside of the healthy range.
++
+### ExpressRoute Virtual Network Gateway Metrics
+
+Aggregation type: *Avg*
+
+When you deploy an ExpressRoute gateway, Azure manages the compute and functions of your gateway. There are six gateway metrics available to you to better understand the performance of your gateway:
+
+* CPU Utilization
+* Packets per second
+* Count of routes advertised to peers
+* Count of routes learned from peers
+* Frequency of routes changed
+* Number of VMs in the virtual network
+
+It's highly recommended you set alerts for each of these metrics so that you're aware of when your gateway could be seeing performance issues.
+
+#### CPU Utilization - Split by Instance
+
+Aggregation type: *Avg*
+
+You can view the CPU utilization of each gateway instance. The CPU utilization may spike briefly during routine host maintenance, but prolonged high CPU utilization could indicate your gateway is reaching a performance bottleneck. Increasing the size of the ExpressRoute gateway may resolve this issue. Set an alert for how frequently the CPU utilization exceeds a certain threshold.
++
+#### Packets Per Second - Split by Instance
+
+Aggregation type: *Avg*
+
+This metric captures the number of inbound packets traversing the ExpressRoute gateway. You should expect to see a consistent stream of data here if your gateway is receiving traffic from your on-premises network. Set an alert for when the number of packets per second drops below a threshold indicating that your gateway is no longer receiving traffic.
++
+#### Count of Routes Advertised to Peer - Split by Instance
+
+Aggregation type: *Count*
+
+This metric shows the number of routes the ExpressRoute gateway is advertising to the circuit. The address spaces may include virtual networks that are connected using VNet peering and use the remote ExpressRoute gateway. You should expect the number of routes to remain consistent unless there are frequent changes to the virtual network address spaces. Set an alert for when the number of advertised routes drops below the threshold for the number of virtual network address spaces you're aware of.
++
+#### Count of Routes Learned from Peer - Split by Instance
+
+Aggregation type: *Max*
+
+This metric shows the number of routes the ExpressRoute gateway is learning from peers connected to the ExpressRoute circuit. These routes can be either from another virtual network connected to the same circuit or learned from on-premises. Set an alert for when the number of learned routes drops below a certain threshold. A high number of routes being dropped could indicate either the gateway is seeing a performance problem or remote peers are no longer advertising routes to the ExpressRoute circuit.
++
+#### Frequency of Routes change - Split by Instance
+
+Aggregation type: *Sum*
+
+This metric shows the frequency of routes being learned from or advertised to remote peers. You should first investigate your on-premises devices to understand why the network is changing so frequently. A high frequency of route changes could indicate a performance problem on the ExpressRoute gateway, where scaling up the gateway SKU may resolve the problem. Set an alert for a frequency threshold to be aware of when your ExpressRoute gateway is seeing abnormal route changes.
++
+#### Number of VMs in the Virtual Network
+
+Aggregation type: *Max*
+
+This metric shows the number of virtual machines that are using the ExpressRoute gateway. The number of virtual machines may include VMs from peered virtual networks that use the same ExpressRoute gateway. Set an alert for this metric if the number of VMs goes above a certain threshold that could affect the gateway performance.
++
+#### ExpressRoute gateway connections in bits/seconds
+
+Aggregation type: *Avg*
+
+This metric shows the bandwidth usage for a specific connection to an ExpressRoute circuit.
++
+## Analyzing logs
+
+Data in Azure Monitor Logs is stored in tables where each table has its own set of unique properties.
+
+All resource logs in Azure Monitor have the same fields followed by service-specific fields. The common schema is outlined in [Azure Monitor resource log schema](../azure-monitor/essentials/resource-logs-schema.md#top-level-common-schema). The schema for ExpressRoute resource logs is found in the [Azure ExpressRoute Data Reference](monitor-expressroute-reference.md#schemas).
+
+The [Activity log](../azure-monitor/essentials/activity-log.md) is a type of platform log that provides insight into subscription-level events. You can view it independently or route it to Azure Monitor Logs, where you can do much more complex queries using Log Analytics.
+
+ExpressRoute stores data in the following tables.
+
+| Table | Description |
+| -- | -- |
+| AzureDiagnostics | Common table used by multiple services to store Resource logs. Resource logs from ExpressRoute can be identified with `MICROSOFT.NETWORK`. |
+| AzureMetrics | Metric data emitted by ExpressRoute that measures its health and performance. |
+
+To view these tables, navigate to your ExpressRoute circuit resource and select **Logs** under *Monitoring*.
+
+### Sample Kusto queries
+
+Here are some queries that you can enter into the Log search bar to help you monitor your Azure ExpressRoute resources. These queries work with the [new language](../azure-monitor/logs/log-query-overview.md).
+
+* To query for BGP route table learned over the last 12 hours.
+
+ ```Kusto
+ AzureDiagnostics
+ | where TimeGenerated > ago(12h)
+ | where ResourceType == "EXPRESSROUTECIRCUITS"
+    | project TimeGenerated, ResourceType, network_s, path_s, OperationName
+ ```
+
+* To query for BGP informational messages by level, resource type, and network.
+
+ ```Kusto
+ AzureDiagnostics
+ | where Level == "Informational"
+ | where ResourceType == "EXPRESSROUTECIRCUITS"
+    | project TimeGenerated, ResourceId, Level, ResourceType, network_s, path_s
+ ```
+
+* To query for the traffic graph of BitsInPerSecond in the last hour.
+
+ ```Kusto
+ AzureMetrics
+ | where MetricName == "BitsInPerSecond"
+ | summarize by Average, bin(TimeGenerated, 1h), Resource
+ | render timechart
+ ```
+
+* To query for the traffic graph of BitsOutPerSecond in the last hour.
+
+ ```Kusto
+ AzureMetrics
+ | where MetricName == "BitsOutPerSecond"
+ | summarize by Average, bin(TimeGenerated, 1h), Resource
+ | render timechart
+ ```
+
+* To query for graph of ArpAvailability in 5-minute intervals.
+
+ ```Kusto
+ AzureMetrics
+ | where MetricName == "ArpAvailability"
+ | summarize by Average, bin(TimeGenerated, 5m), Resource
+ | render timechart
+ ```
+
+* To query for graph of BGP availability in 5-minute intervals.
+
+ ```Kusto
+ AzureMetrics
+ | where MetricName == "BGPAvailability"
+ | summarize by Average, bin(TimeGenerated, 5m), Resource
+ | render timechart
+ ```
+
+## Alerts
+
+Azure Monitor alerts proactively notify you when important conditions are found in your monitoring data. They allow you to identify and address issues in your system before your customers notice them. You can set alerts on [metrics](/azure/azure-monitor/platform/alerts-metric-overview), [logs](/azure/azure-monitor/platform/alerts-unified-log), and the [activity log](/azure/azure-monitor/platform/activity-log-alerts). Different types of alerts have benefits and drawbacks.
+
+The following table lists common and recommended alert rules for ExpressRoute.
+
+| Alert type | Condition | Description |
+|:|:|:|
+| ARP availability down | Dimension name: Peering Type, Aggregation type: Avg, Operator: Less than, Threshold value: 100% | When ARP availability is down for a peering type. |
+| BGP availability down | Dimension name: Peer, Aggregation type: Avg, Operator: Less than, Threshold value: 100% | When BGP availability is down for a peer. |
+
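+As a hedged sketch, the BGP availability rule in the table could be created with the Az.Monitor cmdlets; the circuit, resource group, action group, and the `BgpAvailability` metric name are assumptions to adjust for your environment.
+
+```azurepowershell
+# Hypothetical resources - replace with your circuit, resource group, and action group
+$circuit = Get-AzExpressRouteCircuit -Name "MyCircuit" -ResourceGroupName "MyRG"
+$actionGroupId = "/subscriptions/<subscription-id>/resourceGroups/MyRG/providers/microsoft.insights/actionGroups/MyActionGroup"
+
+# Alert when average BGP availability drops below 100% (metric name assumed)
+$criteria = New-AzMetricAlertRuleV2Criteria -MetricName "BgpAvailability" `
+    -TimeAggregation Average -Operator LessThan -Threshold 100
+
+Add-AzMetricAlertRuleV2 -Name "BGP availability down" `
+    -ResourceGroupName "MyRG" `
+    -TargetResourceId $circuit.Id `
+    -Condition $criteria `
+    -WindowSize 00:05:00 `
+    -Frequency 00:05:00 `
+    -ActionGroupId $actionGroupId `
+    -Severity 2
+```
+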
+### Alerts for ExpressRoute gateway connections
+
+1. To configure alerts, navigate to **Azure Monitor**, then select **Alerts**.
+
+ :::image type="content" source="./media/expressroute-monitoring-metrics-alerts/eralertshowto.jpg" alt-text="alerts":::
+2. Select **+ Select Target** and select the ExpressRoute gateway connection resource.
+
+ :::image type="content" source="./media/expressroute-monitoring-metrics-alerts/alerthowto2.jpg" alt-text="target":::
+3. Define the alert details.
+
+ :::image type="content" source="./media/expressroute-monitoring-metrics-alerts/alerthowto3.jpg" alt-text="action group":::
+4. Define and add the action group.
+
+ :::image type="content" source="./media/expressroute-monitoring-metrics-alerts/actiongroup.png" alt-text="add action group":::
++
+### Alerts based on each peering
++
+### Configure alerts for activity logs on circuits
+
+In the **Alert Criteria**, you can select **Activity Log** for the Signal Type and select the Signal.
++
+## Next steps
+
+* See [Monitoring ExpressRoute data reference](monitor-expressroute-reference.md) for a reference of the metrics, logs, and other important values created by ExpressRoute.
+* See [Monitoring Azure resources with Azure Monitor](../azure-monitor/overview.md) for details on monitoring Azure resources.
firewall Premium Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall/premium-features.md
Previously updated : 06/01/2021 Last updated : 06/22/2021
To learn more about Azure Firewall Premium Preview Intermediate CA certificate r
A network intrusion detection and prevention system (IDPS) allows you to monitor your network for malicious activity, log information about this activity, report it, and optionally attempt to block it.
-Azure Firewall Premium Preview provides signature-based IDPS to allow rapid detection of attacks by looking for specific patterns, such as byte sequences in network traffic, or known malicious instruction sequences used by malware. The IDPS signatures are fully managed and continuously updated. IDPS can be applied to inbound traffic, spoke-to-spoke traffic (East-West), and outbound traffic.
+Azure Firewall Premium Preview provides signature-based IDPS to allow rapid detection of attacks by looking for specific patterns, such as byte sequences in network traffic, or known malicious instruction sequences used by malware. The IDPS signatures are applicable to both application-level and network-level traffic (Layers 4-7); they are fully managed and continuously updated. IDPS can be applied to inbound, spoke-to-spoke (East-West), and outbound traffic.
The Azure Firewall signatures/rulesets include: - An emphasis on fingerprinting actual malware, Command and Control, exploit kits, and in the wild malicious activity missed by traditional prevention methods.
germany Germany Migration Management Tools https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/germany/germany-migration-management-tools.md
Title: Migrate Azure management tools from Azure Germany to global Azure description: This article provides information about migrating your Azure management tools from Azure Germany to global Azure. Previously updated : 10/16/2020 Last updated : 06/22/2021
# Migrate management tool resources to global Azure This article has information that can help you migrate Azure management tools from Azure Germany to global Azure.
For more information:
- Learn how to [create a Traffic Manager profile](../traffic-manager/quickstart-create-traffic-manager-profile.md). - Read about the [Blue-Green scenario](https://azure.microsoft.com/blog/blue-green-deployments-using-azure-traffic-manager/).
-## Backup
+## Azure Backup
-You can't migrate Azure Backup jobs and snapshots from Azure Germany to global Azure.
+Azure Backup service provides simple, secure, and cost-effective solutions to back up your data and recover it from the Microsoft Azure cloud. The backup data move is now enabled from Germany Central (GEC) and Germany Northeast (GNE) to Germany West Central (GWC) via PowerShell cmdlets.
-For more information:
+### Prerequisite for moving hybrid workloads
-- Refresh your knowledge by completing the [Backup tutorials](../backup/index.yml).-- Review the [Azure Backup overview](../backup/backup-overview.md).
+Once the move operation starts, backups are stopped in the existing vaults. So, it's important to protect your data in a new vault in GWC for hybrid workloads (Data Protection Manager (DPM) server, Azure Backup Server (MABS), or Microsoft Azure Recovery Services (MARS) agent) before you start moving backup data from the regions.
+To start protecting in a new vault:
+
+1. Create a new vault (VaultN) in GWC.
+1. Re-register your DPM server/MABS/MARS agent to VaultN.
+1. Assign Policy and start taking backups.
+
+The initial backup will be a full copy followed by incremental backups.
+
+>[!Important]
+>- Before initiating the backup data move operation, ensure that the first full backup in VaultN is completed.
+>- For DPM/MABS, maintain the passphrase from the original vault in a secure location as you need the same to restore data from the target vault. Without the original passphrase, restoration from the source vault is not possible.
+
+### Step 1: Download the resources
+
+Download and install the required resources.
+
+1. [Download](https://github.com/PowerShell/PowerShell/releases/tag/v7.0.3) the latest version of PowerShell (PowerShell 7).
+1. Use Az.RecoveryServices module version 4.2.0 available in Azure Cloud Shell.
+1. [Update](https://aka.ms/azurebackup_agent) all MARS agents to the latest version.
+1. Validate your passphrase. If you need to regenerate, follow the [validation steps](https://support.microsoft.com/topic/mandatory-update-for-azure-backup-for-microsoft-azure-recovery-services-agent-411e371c-aace-e134-de6b-bf9fda48026e#section-3).
+
+### Step 2: Create a target vault in GWC
+
+Create a Target Vault (Vault 2) in GWC. To learn how to create a vault, see [Create and configure a Recovery Services vault](../backup/backup-create-rs-vault.md).
+
+>[!Note]
+>- Ensure that the vault has no protected items.
+>- Ensure that the target vault has the necessary redundancy - locally redundant storage (LRS) or geo-redundant storage (GRS).
+
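+A minimal PowerShell sketch of creating the target vault and setting its redundancy follows; the resource group, vault name, and chosen redundancy are assumptions to adjust for your environment.
+
+```azurepowershell
+# Hypothetical names - replace with your own resource group and vault
+$rg = "TestTargetRG"
+New-AzResourceGroup -Name $rg -Location "germanywestcentral" -Force
+
+$trgVault = New-AzRecoveryServicesVault -Name "targetVault" -ResourceGroupName $rg -Location "germanywestcentral"
+
+# Choose LRS or GRS before any items are protected in the vault
+Set-AzRecoveryServicesBackupProperty -Vault $trgVault -BackupStorageRedundancy GeoRedundant
+```
+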
+### Step 3: Use PowerShell to trigger the backup data move
+
+#### Get the source vault from GNE or GEC
+
+Run these cmdlets:
+
+1. `Connect-AzAccount -Environment AzureGermanCloud`
+1. `Set-AzContext -Subscription "subscriptionName"`
+1. `$srcVault = Get-AzRecoveryServicesVault -Name "srcVault" -ResourceGroupName "TestSourceRG"`
+
+>[!Note]
+>- `srcVault` = Source Vault
+>- `TestSourceRG` = Source Resource Group
+
+#### Get the target vault in GWC
+
+Run these cmdlets:
+
+1. `Connect-AzAccount`
+1. `Set-AzContext -Subscription "subscriptionName"`
+1. `$trgVault = Get-AzRecoveryServicesVault -Name "targetVault" -ResourceGroupName "TestTargetRG"`
+
+>[!Note]
+>- `targetVault` = Target Vault
+>- `TestTargetRG` = Target Resource Group
+
+#### Perform validation
+
+Run these cmdlets:
+
+1. `$validated = $false`
+1. `$validated = Test-AzRecoveryServicesDSMove -SourceVault $srcVault -TargetVault $trgVault`
+
+#### Initialize/prepare DS move
+
+Run these cmdlets:
+
+1. `Connect-AzAccount -Environment AzureGermanCloud`
+1. `Set-AzContext -SubscriptionName $srcSub`
+1. ```azurepowershell
+ if($validated) {
+ $corr = Initialize-AzRecoveryServicesDSMove -SourceVault $srcVault -TargetVault $trgVault
+ }
+ ```
+1. `$corr`
+
+#### Trigger DS move
+
+Run these cmdlets:
+
+1. `Connect-AzAccount`
+1. `Set-AzContext -SubscriptionName $trgSub`
+1. `Copy-AzRecoveryServicesVault - CorrelationIdForDataMove $corr -SourceVault $srcVault -TargetVault $trgVault -Force`
+
+You can monitor the operation using the `Get-AzRecoveryServicesBackupJob` cmdlet.
+
+>[!Note]
+>- During the backup data move operation, all backup items are moved to a transient state. In this state, the new Recovery Points (RPs) are not created, and old RPs are not cleaned up.
+>- As this feature is enabled in GEC and GNE, we recommend you to perform these steps on a small vault and validate the movement. On success, perform these steps on all vaults.
+>- Although the backup data move is triggered for the entire vault, the move happens per container (VMs, DPM and MABS servers, and MARS agents). Track the progress of the moves per container in the **Jobs** section.
+
+ ![monitor progress of move jobs.](./media/germany-migration-management-tools/track-move-jobs.png)
+
+During the move operation, the following actions are blocked on the source vault:
+
+- New Scheduled Backups
+- Stop backup with Delete data.
+- Delete Data
+- Resume Backup
+- Modify Policy
+
+### Step 4: Check the status of the move job
+
+The backup data move operation happens per container. For Azure VM backups, the VM backups are considered as the containers. To indicate progress of the backup data move operation, a job is created for every container.
+
+To monitor the jobs, run these cmdlets:
+
+1. `Get-AzRecoveryServicesBackupJob -Operation BackupDataMove -VaultId $trgVault.ID`
+1. `$Jobs = Get-AzRecoveryServicesBackupJob -Operation BackupDataMove -VaultId $trgVault.ID`
+1. `$JobDetails = Get-AzRecoveryServicesBackupJobDetail -Job $Jobs[0] -VaultId $trgVault.ID`
+1. `$JobDetails.ErrorDetails`
+
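+The monitoring cmdlets above can be combined into a simple polling loop. This is only a sketch, and it assumes the standard backup job status values such as `InProgress` and `Failed`:
+
+```azurepowershell
+# Poll the data move jobs in the target vault until none are in progress
+do {
+    $Jobs = Get-AzRecoveryServicesBackupJob -Operation BackupDataMove -VaultId $trgVault.ID
+    $inProgress = @($Jobs | Where-Object { $_.Status -eq "InProgress" })
+    Write-Output ("Jobs still in progress: {0}" -f $inProgress.Count)
+    if ($inProgress.Count -gt 0) { Start-Sleep -Seconds 300 }
+} while ($inProgress.Count -gt 0)
+
+# Inspect the error details of any failed jobs
+foreach ($job in ($Jobs | Where-Object { $_.Status -eq "Failed" })) {
+    (Get-AzRecoveryServicesBackupJobDetail -Job $job -VaultId $trgVault.ID).ErrorDetails
+}
+```
+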
+### Step 5: Post move operations
+
+Once the backup data move operation for all containers to the target vault is complete, no further action is required for VM backups.
+++
+#### Verify the movement of containers is complete
+
+To check if all containers from the source vault have moved to the target vault, go to the target vault and check for all containers in that vault.
+
+Run the following cmdlet to list all VMs moved from the source vault to target vault:
+
+```azurepowershell
+Get-AzRecoveryServicesBackupContainer -BackupManagementType "AzureVM" -VaultId $trgVault.ID
+```
+
+#### Verify the movement of policies is complete
+
+After the backup data is moved successfully to the new region, all policies that were applied to Azure VM backup items in the source vault are applied to the target vault.
+
+To verify if all policies have moved from the source vault to the target vault, go to the target vault and run the following cmdlet to get the list of all moved policies:
+
+```azurepowershell
+Get-AzRecoveryServicesBackupProtectionPolicy -VaultId $trgVault.ID
+```
+
+These policies continue to apply on your backup data after the move operation so that the lifecycle management of the moved recovery points is continued.
+
+To avoid sudden clean-up of several recovery points (that may have expired during the move process or may expire immediately after the move process), the clean-up of older recovery points (RPs) is paused for a period of 10 days after the move. During this period, you are not billed for the additional data incurred by the old RPs.
+
+>[!Important]
+>If you need to recover from these older RPs, recover them within this 10-day period, immediately after the backup data move. Once this safety period is complete, the policies applied to each of the backup items take effect and enforce clean-up of the old RPs.
+
+#### Restore operations
+
+**Restore Azure Virtual Machines**
+
+For Azure Virtual machines, you can restore from the recovery points in the target vault.
+
+#### Configure MARS agent
+
+1. Re-register to the target vault.
+1. Restore from the recovery points.
+1. After recovery, re-register to the new vault (VaultN) and resume backups.
+
+>[!Note]
+>While the MARS agent is registered to the target vault, no new backups take place.
+
+#### Configure DPM/MABS
+
+**Recommended**
+
+Use the External DPM method to perform restore. For more information, see [Recover data from Azure Backup Server](../backup/backup-azure-alternate-dpm-server.md).
+
+>[!Note]
+>- Original-Location Recovery (OLR) is not supported.
+>- Backups will continue in VaultN for all the machines registered.
+
+**Other option**
+
+For Original-Location Recovery (OLR):
+
+1. Re-register the DPM server/MABS to the target vault.
+1. Perform restore operation.
+1. Re-register the DPM server/MABS back to the new vault.
+
+>[!Note]
+>Limitations of using DPM: <br><br> <ul><li>Backup operations for all the machines registered to the DPM server are stopped when you connect the DPM server to the target vault.</li><li>After the DPM server is re-registered to the new vault after restore, a consistency check takes place (the time it takes depends on the amount of data) before backups resume.</li></ul>
+
+### Error codes
+
+#### UserErrorConflictingDataMoveOnVault
+
+**Message:** There is another data move operation currently running on vault.
+
+**Scenario:** You are trying the data move operation on a source vault, while another data move operation is already running on the same source vault.
+
+**Recommended action:** Wait until the current data move operation completes, and then try again.
+
+#### UserErrorOperationNotAllowedDuringDataMove
+
+**Message:** This operation is not allowed since data move operation is in progress.
+
+**Scenarios:** While the data move operation is in progress, the following operations are not allowed in the source vault:
+
+- Stop Backup with Retain Data
+- Stop Backup with delete data.
+- Delete backup data.
+- Resume backup
+- Modify policy.
+
+**Recommended action:** Wait until the data move operation completes, and then try again. [Learn more](#azure-backup) about the supported operations.
+
+#### UserErrorNoContainersFoundForDataMove
+
+**Message:** There are no containers in this vault which are supported for data move operation.
+
+**Scenarios:** This message displays if:
+
+- Source vault has no containers at all.
+- Source vault has only unsupported containers.
+- All containers in the source vault were previously moved to some target vault, and you have passed IgnoreMoved = true in the API.
+
+**Recommended action:** [Learn](#azure-backup) about the supported containers for data move.
+
+#### UserErrorDataMoveNotSupportedAtContainerLevel
+
+**Message:** Data move operation is not supported at container level.
+
+**Scenario:** You have chosen a container level data move operation.
+
+**Recommended action:** Try the vault level data move operation.
+
+#### UserErrorDataMoveNotAllowedContainerRegistrationInProgress
+
+**Message:** Data move operation is not allowed because a container registration operation is running in source vault.
+
+**Scenario:** A container registration operation is in progress in the source vault when you tried data move.
+
+**Recommended action:** Try the data move operation after some time.
+
+#### UserErrorDataMoveNotAllowedTargetVaultNotEmpty
+
+**Message:** Data move operation is not allowed because target vault has some containers already registered.
+
+**Scenario:** The chosen target vault has some containers already registered.
+
+**Recommended action:** Try the data move operation on an empty target vault.
+
+#### UserErrorUnsupportedSourceRegionForDataMove
+
+**Message:** Data move operation is not supported for this region.
+
+**Scenario:** Source region not valid.
+
+**Recommended action:** Check the [list of supported regions](#azure-backup) for data move.
+
+#### UserErrorUnsupportedTargetRegionForDataMove
+
+**Message:** Data move operation is not supported to this region.
+
+**Scenario:** Target region ID not valid.
+
+**Recommended action:** Check the [list of supported regions](#azure-backup) for data move.
++
+#### UserErrorDataMoveTargetVaultWithPrivateEndpointNotSupported
+
+**Message:** Data cannot be moved as selected target vault has private endpoints.
+
+**Scenario:** Private end points are enabled in the target vault.
+
+**Recommended action:** Delete the private endpoints and retry the move operation. [Learn more](#azure-backup) about the supported operations.
+
+#### UserErrorDataMoveSourceVaultWithPrivateEndpointNotSupported
+
+**Message:** Data cannot be moved as selected source vault has private endpoints.
+
+**Scenario:** Private end points are enabled in the source vault.
+
+**Recommended action:** Delete the private endpoints and retry the move operation. [Learn more](../backup/private-endpoints.md#deleting-private-endpoints) about the supported operations.
+
+#### UserErrorDataMoveSourceVaultWithCMKNotSupported
+
+**Message:** Data cannot be moved as selected source vault is encryption enabled.
+
+**Scenario:** Customer-Managed Keys (CMK) are enabled in the source vault.
+
+**Recommended action:** [Learn](#azure-backup) about the supported operations.
+
+#### UserErrorDataMoveTargetVaultWithCMKNotSupported
+
+**Message:** Data cannot be moved as selected target vault is encryption enabled.
+
+**Scenario:** Customer-Managed Keys (CMK) are enabled in the target vault
+
+**Recommended action:** [Learn](#azure-backup) about the supported operations.
## Scheduler
For more information:
- Learn how to [view policies by using the Azure CLI](../governance/policy/tutorials/create-and-manage.md#view-policy-definitions-with-azure-cli) or [view policies by using PowerShell](../governance/policy/tutorials/create-and-manage.md#view-policy-definitions-with-powershell). - Learn how to [create a policy definition by using the Azure CLI](../governance/policy/tutorials/create-and-manage.md#create-a-policy-definition-with-azure-cli) or [create a policy definition by using PowerShell](../governance/policy/tutorials/create-and-manage.md#create-a-policy-definition-with-powershell).
+## Frequently asked questions
+
+### Where can I move the backup data?
+
+You can move your backup data from Recovery Services Vaults (RSVs) in Germany Central (GEC) and Germany Northeast (GNE) to Germany West Central (GWC).
+
+### What backup data can I move?
+
+From June 21, 2021, you can move the backup data for the following workloads from one region to another:
+
+- Azure Virtual Machines
+- Hybrid Workloads
+- Files/folder backup using Microsoft Azure Recovery Services (MARS) Agent
+- Data Protection Manager (DPM) server
+- Azure Backup Server (MABS)
+
+### How can I move backup data to another region?
+
+To ensure that data in the existing regions is not lost, Azure Backup has enabled backup data move from GEC and GNE to GWC.
+
+While the migration happens, backups will stop in GEC and GNE. So, it is essential to protect the workloads in the new region before you start the migration operation.
+
+### What to do if the backup data move operation fails?
+
+The backup data move can fail due to the following error scenarios:
+
+| Error messages | Causes |
+| | |
+| Please provide an empty target vault. The target vault should not have any backup items or backup containers. | You have chosen a target vault that already has some protected items. |
+| Azure Backup data is only allowed to be moved to supported target regions. | You have chosen a target vault from a region that is not one of the supported regions for move. |
+
+You can either retry the backup data move from scratch by running the same command with a new, empty target vault, or retry moving only the failed items from the source vault by using the `-RetryOnlyFailed` flag, as shown below.
+
+```azurepowershell
+ if($validated) {
+ $corr = Initialize-AzRecoveryServicesDSMove -SourceVault $srcVault -TargetVault $trgVault -RetryOnlyFailed
+ }
+```
+
+### Is there a cost involved in moving this backup data?
+
+No. There is no additional cost for moving your backup data from one region to another. Azure Backup bears the cost of moving data across regions. Once the move operation is complete, you have a 10-day period with no billing. After this period, billing starts in the target vault.
+
+### If I face issues in moving backup data, whom should I contact?
+
+For any issues with backup data move from GEC or GNE to GWC, write to us at [GESupportAzBackup@microsoft.com](mailto:GESupportAzBackup@microsoft.com).
+ ## Next steps Learn about tools, techniques, and recommendations for migrating resources in the following service categories:
Learn about tools, techniques, and recommendations for migrating resources in th
- [Integration](./germany-migration-integration.md) - [Identity](./germany-migration-identity.md) - [Security](./germany-migration-security.md)-- [Media](./germany-migration-media.md)
+- [Media](./germany-migration-media.md)
iot-central Howto Control Devices With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-control-devices-with-rest-api.md
GET https://{your app subdomain}.azureiotcentral.com/api/devices/temperature-con
## Next steps
-Now that you've learned how to control devices with the REST API, a suggested next step is to [Manage IoT Central applications with the REST API](/learn/modules/manage-iot-central-apps-with-rest-api/).
+Now that you've learned how to control devices with the REST API, a suggested next step is to learn [How to use the IoT Central REST API to create and manage jobs](howto-manage-jobs-with-rest-api.md).
iot-central Howto Manage Jobs With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-manage-jobs-with-rest-api.md
+
+ Title: Use the REST API to manage jobs in Azure IoT Central
+description: How to use the IoT Central REST API to create and manage jobs in an application
++ Last updated : 06/21/2020++++++
+# How to use the IoT Central REST API to create and manage jobs
+
+The IoT Central REST API lets you develop client applications that integrate with IoT Central applications. You can use the REST API to create and manage jobs in your IoT Central application. The REST API lets you:
+
+- List jobs and view job details in your application.
+- Create jobs in your application.
+- Stop, resume, and rerun jobs in your application.
+
+> [!IMPORTANT]
+> The jobs API is currently in preview. All the REST API calls described in this article should include `?api-version=preview`.
+
+This article describes how to use the `/jobs/{job_id}` API to control devices in bulk. You can also control devices individually.
+
+Every IoT Central REST API call requires an authorization header. To learn more, see [How to authenticate and authorize IoT Central REST API calls](howto-authorize-rest-api.md).
+
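+For example, a minimal PowerShell sketch of calling the jobs API with an authorization header; the app subdomain is hypothetical, and `$apiToken` stands in for an IoT Central API token or a bearer token obtained as described in the article linked above.
+
+```azurepowershell
+# Hypothetical values - replace with your app subdomain and a valid token
+$baseUri = "https://myapp.azureiotcentral.com/api"
+$headers = @{ Authorization = $apiToken }
+
+# List all jobs in the application using the preview API version
+$jobs = Invoke-RestMethod -Method Get -Uri "$baseUri/jobs?api-version=preview" -Headers $headers
+$jobs.value | Select-Object id, displayName, status
+```
+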
+For the reference documentation for the IoT Central REST API, see [Azure IoT Central REST API reference](/rest/api/iotcentral/).
+
+## Job payloads
+
+Many of the APIs described in this article include a definition that looks like the following JSON snippet:
+
+```json
+{
+ "id": "job-014",
+ "displayName": "Set target temperature",
+ "description": "Set target temperature for all thermostat devices",
+ "group": "833d7a7d-8f99-4e04-9e56-745806bdba6e",
+ "batch": {
+ "type": "percentage",
+ "value": 25
+ },
+ "cancellationThreshold": {
+ "type": "percentage",
+ "value": 10,
+ "batch": false
+ },
+ "data": [
+ {
+ "type": "PropertyJobData",
+ "target": "dtmi:modelDefinition:zomtmdxh:eqod32zbyl",
+ "path": "targetTemperature",
+ "value": "56"
+ }
+ ],
+ "status": "complete"
+}
+```
+
+The following table describes the fields in the previous JSON snippet:
+
+| Field | Description |
+| -- | -- |
+| `id` | A unique ID for the job in your application. |
+| `displayName` | The display name for the job in your application. |
+| `description` | A description of the job. |
+| `group` | The ID of the device group that the job applies to. Use the `deviceGroups` preview REST API to get a list of the device groups in your application. |
+| `status` | The [status](howto-run-a-job.md#view-job-status) of the job. One of `complete`, `cancelled`, `failed`, `pending`, `running`, `stopped`. |
+| `batch` | If present, this section defines how to [batch](howto-run-a-job.md#create-and-run-a-job) the devices in the job. |
+| `batch/type` | The size of each batch is either a `percentage` of the total devices in the group or a `number` of devices. |
+| `batch/value` | Either the percentage of devices or the number of devices in each batch. |
+| `cancellationThreshold` | If present, this section defines the [cancellation threshold](howto-run-a-job.md#create-and-run-a-job) for the job. |
+| `cancellationThreshold/batch` | `true` or `false`. If true, the cancellation threshold is set for each batch. If `false`, the cancellation threshold applies to the whole job. |
+| `cancellationThreshold/type` | The cancellation threshold for the job is either a `percentage` or a `number` of devices. |
+| `cancellationThreshold/value` | Either the percentage of devices or the number of devices that define the cancellation threshold. |
+| `data` | An array of operations the job performs. |
+| `data/type` | One of `PropertyJobData`, `CommandJobData`, or `CloudPropertyJobData` |
+| `data/target` | The model ID of the target devices. |
+| `data/path` | The name of the property, command, or cloud property. |
+| `data/value` | The property value to set or the command parameter to send. |
+
+## Get job information
+
+Use the following request to retrieve the list of the jobs in your application:
+
+```http
+GET https://{your app subdomain}.azureiotcentral.com/api/jobs?api-version=preview
+```
+
+The response to this request looks like the following example:
+
+```json
+{
+ "value": [
+ {
+ "id": "job-006",
+ "displayName": "Set customer name",
+ "description": "Set the customer name cloud property",
+ "group": "4fcbec3b-5ee8-4324-855d-0f03b56bcf7f",
+ "data": [
+ {
+ "type": "CloudPropertyJobData",
+ "target": "dtmi:modelDefinition:bojo9tfju:yfvu5gv2vl",
+ "path": "CustomerName",
+ "value": "Contoso"
+ }
+ ],
+ "status": "complete"
+ },
+ {
+ "id": "job-005",
+ "displayName": "Set target temperature",
+ "description": "Set target temperature device property",
+ "group": "833d7a7d-8f99-4e04-9e56-745806bdba6e",
+ "data": [
+ {
+ "type": "PropertyJobData",
+ "target": "dtmi:modelDefinition:zomtmdxh:eqod32zbyl",
+ "path": "targetTemperature",
+ "value": 56
+ }
+ ],
+ "status": "complete"
+ },
+ {
+ "id": "job-004",
+ "displayName": "Run device report",
+ "description": "Call command to run the device reports",
+ "group": "833d7a7d-8f99-4e04-9e56-745806bdba6e",
+ "batch": {
+ "type": "percentage",
+ "value": 25
+ },
+ "cancellationThreshold": {
+ "type": "percentage",
+ "value": 10,
+ "batch": false
+ },
+ "data": [
+ {
+ "type": "CommandJobData",
+ "target": "dtmi:modelDefinition:zomtmdxh:eqod32zbyl",
+ "path": "getMaxMinReport",
+ "value": "2021-06-15T05:00:00.000Z"
+ }
+ ],
+ "status": "complete"
+ }
+ ]
+}
+```
+
+Use the following request to retrieve an individual job by ID:
+
+```http
+GET https://{your app subdomain}.azureiotcentral.com/api/jobs/job-004?api-version=preview
+```
+
+The response to this request looks like the following example:
+
+```json
+{
+ "id": "job-004",
+ "displayName": "Run device report",
+ "description": "Call command to run the device reports",
+ "group": "833d7a7d-8f99-4e04-9e56-745806bdba6e",
+ "batch": {
+ "type": "percentage",
+ "value": 25
+ },
+ "cancellationThreshold": {
+ "type": "percentage",
+ "value": 10,
+ "batch": false
+ },
+ "data": [
+ {
+ "type": "CommandJobData",
+ "target": "dtmi:modelDefinition:zomtmdxh:eqod32zbyl",
+ "path": "getMaxMinReport",
+ "value": "2021-06-15T05:00:00.000Z"
+ }
+ ],
+ "status": "complete"
+}
+```
+
+Use the following request to retrieve the details of the devices in a job:
+
+```http
+GET https://{your app subdomain}.azureiotcentral.com/api/jobs/job-004/devices?api-version=preview
+```
+
+The response to this request looks like the following example:
+
+```json
+{
+ "value": [
+ {
+ "id": "therm-01",
+ "status": "completed"
+ },
+ {
+ "id": "therm-02",
+ "status": "completed"
+ },
+ {
+ "id": "therm-03",
+ "status": "completed"
+ },
+ {
+ "id": "therm-04",
+ "status": "completed"
+ }
+ ]
+}
+```
+
+## Create a job
+
+Use the following request to create a job:
+
+```http
+PUT https://{your app subdomain}.azureiotcentral.com/api/jobs/job-006?api-version=preview
+{
+ "displayName": "Set target temperature",
+ "description": "Set target temperature device property",
+ "group": "833d7a7d-8f99-4e04-9e56-745806bdba6e",
+ "batch": {
+ "type": "percentage",
+ "value": 25
+ },
+ "cancellationThreshold": {
+ "type": "percentage",
+ "value": 10,
+ "batch": false
+ },
+ "data": [
+ {
+ "type": "PropertyJobData",
+ "target": "dtmi:modelDefinition:zomtmdxh:eqod32zbyl",
+ "path": "targetTemperature",
+ "value": "55"
+ }
+ ]
+}
+```
+
+The response to this request looks like the following example. The initial job status is `pending`:
+
+```json
+{
+ "id": "job-006",
+ "displayName": "Set target temperature",
+ "description": "Set target temperature device property",
+ "group": "833d7a7d-8f99-4e04-9e56-745806bdba6e",
+ "batch": {
+ "type": "percentage",
+ "value": 25
+ },
+ "cancellationThreshold": {
+ "type": "percentage",
+ "value": 10,
+ "batch": false
+ },
+ "data": [
+ {
+ "type": "PropertyJobData",
+ "target": "dtmi:modelDefinition:zomtmdxh:eqod32zbyl",
+ "path": "targetTemperature",
+ "value": "55"
+ }
+ ],
+ "status": "pending"
+}
+```
+
+## Stop, resume, and rerun jobs
+
+Use the following request to stop a running job:
+
+```http
+POST https://{your app subdomain}.azureiotcentral.com/api/jobs/job-006/stop?api-version=preview
+```
+
+If the request succeeds, it returns a `204 - No Content` response.
+
+Use the following request to resume a stopped job:
+
+```http
+POST https://{your app subdomain}.azureiotcentral.com/api/jobs/job-006/resume?api-version=preview
+```
+
+If the request succeeds, it returns a `204 - No Content` response.
+
+Use the following command to rerun an existing job on any failed devices:
+
+```http
+PUT https://{your app subdomain}.azureiotcentral.com/api/jobs/job-006/rerun/rerun-001?api-version=preview
+```
+
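+Putting these calls together, a hedged PowerShell sketch (the app subdomain, job ID, rerun ID, and `$apiToken` are placeholders) might look like this:
+
+```azurepowershell
+# Hypothetical values - replace with your app subdomain and a valid token
+$baseUri = "https://myapp.azureiotcentral.com/api"
+$headers = @{ Authorization = $apiToken }
+
+# Stop a running job; a 204 status code indicates success
+$resp = Invoke-WebRequest -Method Post -Uri "$baseUri/jobs/job-006/stop?api-version=preview" -Headers $headers
+if ($resp.StatusCode -eq 204) { Write-Output "Job stopped" }
+
+# Resume the stopped job
+Invoke-WebRequest -Method Post -Uri "$baseUri/jobs/job-006/resume?api-version=preview" -Headers $headers | Out-Null
+
+# Rerun the job on any failed devices under a new rerun ID
+Invoke-RestMethod -Method Put -Uri "$baseUri/jobs/job-006/rerun/rerun-001?api-version=preview" -Headers $headers
+```
+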
+## Next steps
+
+Now that you've learned how to manage jobs with the REST API, a suggested next step is to learn how to [Manage IoT Central applications with the REST API](/learn/modules/manage-iot-central-apps-with-rest-api/).
iot-central Howto Manage Users Roles With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-manage-users-roles-with-rest-api.md
DELETE https://{your app subdomain}.azureiotcentral.com/api/users/user-001?api-v
## Next steps
-Now that you've learned how to manage users and roles with the REST API, a suggested next step is to [Manage IoT Central applications with the REST API](/learn/modules/manage-iot-central-apps-with-rest-api/).
+Now that you've learned how to manage users and roles with the REST API, a suggested next step is to [How to use the IoT Central REST API to control devices](howto-control-devices-with-rest-api.md).
iot-central Quick Configure Rules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/quick-configure-rules.md
The smartphone app sends telemetry that includes values from the accelerometer s
| Notes | Your phone is face down! | > [!TIP]
- > To receive an email notification, the email address must be a [user ID in the application](howto-administer.md), and the user must have signed in to the application at least once.
+ > To receive an email notification, the email address must be a [user ID in the application](howto-manage-users-roles.md), and the user must have signed in to the application at least once.
:::image type="content" source="media/quick-configure-rules/rule-action.png" alt-text="Screenshot that shows an email action added to the rule":::
iot-dps Concepts Tpm Attestation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-dps/concepts-tpm-attestation.md
Title: Azure IoT Hub Device Provisioning Service - TPM Attestation description: This article provides a conceptual overview of the TPM attestation flow using IoT Device Provisioning Service (DPS).-- Previously updated : 04/04/2019++ Last updated : 06/21/2021
The device can then sign a SAS token using the decrypted nonce and reestablish a
Now the device connects to IoT Hub, and you rest secure in the knowledge that your devicesΓÇÖ keys are securely stored. Now that you know how the Device Provisioning Service securely verifies a deviceΓÇÖs identity using TPM, check out the following articles to learn more: * [Learn about the concepts of provisioning](about-iot-dps.md#provisioning-process)
-* [Get started using auto-provisioning](./quick-setup-auto-provision.md) using the SDKs to take care of the flow.
+* [Get started using auto-provisioning](./quick-setup-auto-provision.md)
+* [Create TPM enrollments using the SDKs](./quick-enroll-device-tpm-java.md)
iot-dps How To Connect Mxchip Iot Devkit https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-dps/how-to-connect-mxchip-iot-devkit.md
- Title: How to use Azure IoT Hub Device Provisioning Service auto-provisioning to register the MXChip IoT DevKit with IoT Hub | Microsoft Docs
-description: How to use Azure IoT Hub Device Provisioning Service (DPS) auto-provisioning to register the MXChip IoT DevKit with IoT Hub.
-- Previously updated : 06/25/2019------
-# Use Azure IoT Hub Device Provisioning Service auto-provisioning to register the MXChip IoT DevKit with IoT Hub
-
-This article describes how to use the Azure IoT Hub Device Provisioning Service to [provisioning](about-iot-dps.md#provisioning-process) the MXChip IoT DevKit to an Azure IoT Hub. In this tutorial, you learn how to:
-
-* Configure the global endpoint of the Device Provisioning service on a device.
-* Use a unique device secret (UDS) to generate an X.509 certificate.
-* Enroll an individual device.
-* Verify that the device is registered.
-
-The [MXChip IoT DevKit](https://aka.ms/iot-devkit) is an all-in-one Arduino-compatible board with rich peripherals and sensors. You can develop for it using [Azure IoT Device Workbench](https://aka.ms/iot-workbench) or [Azure IoT Tools](https://aka.ms/azure-iot-tools) extension pack in Visual Studio Code. The DevKit comes with a growing [projects catalog](https://microsoft.github.io/azure-iot-developer-kit/docs/projects/) to guide your prototype Internet of Things (IoT) solutions that take advantage of Azure services.
-
-## Before you begin
-
-To complete the steps in this tutorial, first do the following tasks:
-
-* Configure your DevKit's Wi-Fi and prepare your development environment by following the "Prepare the development environment" section steps in [Connect IoT DevKit AZ3166 to Azure IoT Hub in the cloud](../iot-hub/iot-hub-arduino-iot-devkit-az3166-get-started.md#prepare-the-development-environment).
-* Upgrade to the latest firmware (1.3.0 or later) with the [Update DevKit firmware](https://microsoft.github.io/azure-iot-developer-kit/docs/firmware-upgrading/) tutorial.
-* Create and link an IoT Hub with a Device Provisioning service instance by following the steps in [Set up the IoT Hub Device Provisioning Service with the Azure portal](./quick-setup-auto-provision.md).
-
-## Open sample project
-
-1. Make sure your IoT DevKit is **not connected** to your computer. Start VS Code first, and then connect the DevKit to your computer.
-
-1. Click `F1` to open the command palette, type and select **Azure IoT Device Workbench: Open Examples...**. Then select **IoT DevKit** as board.
-
-1. In the IoT Workbench Examples page, find **Device Registration with DPS** and click **Open Sample**. Then selects the default path to download the sample code.
- ![Open sample](media/how-to-connect-mxchip-iot-devkit/open-sample.png)
-
-## Save a Unique Device Secret on device security storage
-
-Auto-provisioning can be configured on a device based on the device's [attestation mechanism](concepts-service.md#attestation-mechanism). The MXChip IoT DevKit uses the [Device Identity Composition Engine](https://trustedcomputinggroup.org/wp-content/uploads/Foundational-Trust-for-IOT-and-Resource-Constrained-Devices.pdf) from the [Trusted Computing Group](https://trustedcomputinggroup.org). A **Unique Device Secret** (UDS) saved in an STSAFE security chip ([STSAFE-A100](https://microsoft.github.io/azure-iot-developer-kit/docs/understand-security-chip/)) on the DevKit is used to generate the device's unique [X.509 certificate](concepts-x509-attestation.md). The certificate is used later for the enrollment process in the Device Provisioning service, and during registration at runtime.
-
-A typical UDS is a 64-character string, as seen in the following sample:
-
-```
-19e25a259d0c2be03a02d416c05c48ccd0cc7d1743458aae1cb488b074993eae
-```
-
-To save a UDS on the DevKit:
-
-1. In VS Code, click on the status bar to select the COM port for the DevKit.
- ![Select COM Port](media/how-to-connect-mxchip-iot-devkit/select-com.png)
-
-1. On DevKit, hold down **button A**, push and release the **reset** button, and then release **button A**. Your DevKit enters configuration mode.
-
-1. Click `F1` to open the command palette, type and select **Azure IoT Device Workbench: Configure Device Settings... > Config Unique Device String (UDS)**.
- ![Configure UDS](media/how-to-connect-mxchip-iot-devkit/config-uds.png)
-
-1. Note down the generated UDS string. You will need it to generate the X.509 certificate. Then press `Enter`.
- ![Copy UDS](media/how-to-connect-mxchip-iot-devkit/copy-uds.png)
-
-1. Confirm from the notification that UDS has been configured on the STSAFE successfully.
- ![Configure UDS Success](media/how-to-connect-mxchip-iot-devkit/config-uds-success.png)
-
-> [!NOTE]
-> Alternatively, you can configure UDS via serial port by using utilities such as Putty. Follow [Use configuration mode](https://microsoft.github.io/azure-iot-developer-kit/docs/use-configuration-mode/) to do so.
-
-## Update the Global Device Endpoint and ID Scope
-
-In device code, you need to specify the [Device provisioning endpoint](./concepts-service.md#device-provisioning-endpoint) and ID scope to ensure the tenant isolation.
-
-1. In the Azure portal, select the **Overview** pane of your Device Provisioning service and note down the **Global device endpoint** and **ID Scope** values.
- ![Device Provisioning Service Global Endpoint and ID Scope](media/how-to-connect-mxchip-iot-devkit/dps-global-endpoint.png)
-
-1. Open **DevKitDPS.ino**. Find and replace `[Global Device Endpoint]` and `[ID Scope]` with the values you just noted down.
- ![Device Provisioning Service Endpoint](media/how-to-connect-mxchip-iot-devkit/endpoint.png)
-
-1. Fill in the `registrationId` variable in the code. Only lowercase alphanumeric characters and hyphens are allowed, with a maximum of 128 characters. Also note down the value.
- ![Registration ID](media/how-to-connect-mxchip-iot-devkit/registration-id.png)
-
-1. Click `F1`, type and select **Azure IoT Device Workbench: Upload Device Code**. It starts compiling and uploading the code to DevKit.
- ![Device Upload](media/how-to-connect-mxchip-iot-devkit/device-upload.png)
-
-## Generate X.509 certificate
-
-The [attestation mechanism](./concepts-service.md#attestation-mechanism) used by this sample is X.509 certificate. You need to use a utility to generate it.
-
-1. In VS Code, click `F1`, type and select **Open New Terminal** to open terminal window.
-
-1. Run `dps_cert_gen.exe` in `tool` folder.
-
-1. Specify the compiled binary file location as `..\.build\DevKitDPS`. Then paste the **UDS** and **registrationId** you just noted down.
- ![Generate X.509](media/how-to-connect-mxchip-iot-devkit/gen-x509.png)
-
-1. A `.pem` X.509 certificate generates in the same folder.
- ![X.509 file](media/how-to-connect-mxchip-iot-devkit/pem-file.png)
-
-## Create a device enrollment entry
-
-1. In the Azure portal, open your Device Provisioning Service, navigate to the Manage enrollments section, and click **Add individual enrollment**.
- ![Add individual enrollment](media/how-to-connect-mxchip-iot-devkit/add-enrollment.png)
-
-1. Click file icon next to **Primary Certificate .pem or .cer file** to upload the `.pem` file generated.
- ![Upload .pem](media/how-to-connect-mxchip-iot-devkit/upload-pem.png)
-
-## Verify the DevKit is registered with Azure IoT Hub
-
-Press the **Reset** button on your DevKit. You should see **DPS Connected!** on DevKit screen. After the device reboots, the following actions take place:
-
-1. The device sends a registration request to your Device Provisioning service.
-1. The Device Provisioning service sends back a registration challenge to which your device responds.
-1. On successful registration, the Device Provisioning service sends the IoT Hub URI, device ID, and the encrypted key back to the device.
-1. The IoT Hub client application on the device connects to your hub.
-1. On successful connection to the hub, you see the device appear in the IoT Hub Device Explorer.
- ![Device registered](./media/how-to-connect-mxchip-iot-devkit/device-registered.png)
-
-## Problems and feedback
-
-If you encounter problems, refer to the IoT DevKit [FAQs](https://microsoft.github.io/azure-iot-developer-kit/docs/faq/), or reach out to the following channels for support:
-
-* [Gitter.im](https://gitter.im/Microsoft/azure-iot-developer-kit)
-* [Stack Overflow](https://stackoverflow.com/questions/tagged/iot-devkit)
-
-## Next steps
-
-In this tutorial, you learned to enroll a device securely to the Device Provisioning Service by using the Device Identity Composition Engine, so that the device can automatically register with Azure IoT Hub.
-
-In summary, you learned how to:
-
-> [!div class="checklist"]
-> * Configure the global endpoint of the Device Provisioning service on a device.
-> * Use a unique device secret to generate an X.509 certificate.
-> * Enroll an individual device.
-> * Verify that the device is registered.
-
-Learn how to [Create and provision a simulated device](./quick-create-simulated-device.md).
iot-dps Quick Enroll Device Tpm Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-dps/quick-enroll-device-tpm-python.md
Title: Quickstart - Enroll TPM device to Azure Device Provisioning Service using Python
-description: Quickstart - Enroll TPM device to Azure IoT Hub Device Provisioning Service (DPS) using Python provisioning service SDK. This quickstart uses individual enrollments.
+ Title: Quickstart - Create Device Provisioning Service (DPS) enrollments using Python
+description: Quickstart - Create Device Provisioning Service (DPS) enrollments using the Python provisioning service SDK. This quickstart uses individual enrollments.
Previously updated : 11/08/2019 Last updated : 06/21/2021
ms.devlang: python
-# Quickstart: Enroll TPM device to IoT Hub Device Provisioning Service using Python provisioning service SDK
+# Quickstart: Create DPS enrollments using Python service SDK
[!INCLUDE [iot-dps-selector-quick-enroll-device-tpm](../../includes/iot-dps-selector-quick-enroll-device-tpm.md)]
-In this quickstart, you programmatically create an individual enrollment for a TPM device in the Azure IoT Hub Device Provisioning Service, using the Python Provisioning Service SDK with the help of a sample Python application.
+In this quickstart, you use the Python Provisioning Service SDK to programmatically create an individual device enrollment in the Azure IoT Hub Device Provisioning Service (DPS).
## Prerequisites
In this quickstart, you programmatically create an individual enrollment for a T
pip install azure-iothub-provisioningserviceclient ```
-1. You need the endorsement key for your device. If you have followed the [Create and provision a simulated device](quick-create-simulated-device.md) quickstart to create a simulated TPM device, use the key created for that device. Otherwise, you can use the following endorsement key supplied with the SDK:
+1. This quickstart demonstrates both symmetric key and TPM enrollments. Select the attestation type that you want to use from the tabs below.
+
+ # [Symmetric Key](#tab/symmetrickey)
+
+ For symmetric key device enrollments, you need a primary and a secondary key for your device. If you don't have valid symmetric keys, you can use the following example keys, or generate your own (a key-generation sketch follows these tabs):
+
+ *Primary Symmetric key*
```
- AToAAQALAAMAsgAgg3GXZ0SEs/gakMyNRqXXJP1S124GUgtk8qHaGzMUaaoABgCAAEMAEAgAAAAAAAEAtW6MOyCu/Nih47atIIoZtlYkhLeCTiSrtRN3q6hqgOllA979No4BOcDWF90OyzJvjQknMfXS/Dx/IJIBnORgCg1YX/j4EEtO7Ase29Xd63HjvG8M94+u2XINu79rkTxeueqW7gPeRZQPnl1xYmqawYcyzJS6GKWKdoIdS+UWu6bJr58V3xwvOQI4NibXKD7htvz07jLItWTFhsWnTdZbJ7PnmfCa2vbRH/9pZIow+CcAL9mNTNNN4FdzYwapNVO+6SY/W4XU0Q+dLMCKYarqVNH5GzAWDfKT8nKzg69yQejJM8oeUWag/8odWOfbszA+iFjw3wVNrA5n8grUieRkPQ==
+ UmorGiEVPNIQuaWGXXbe8v9gWayS7XtOZmNMo6DEaEXP65GvhuK3OeRf8RVZ9BymBCHxNg3oRTey0pUHUwwYKQ==
```
+ *Secondary Symmetric key*
+
+ ```
+ Zx8/eE7PUBmnouB1qlNQxI7fcQ2HbJX+y96F1uCVQvDj88jFL+q6L9YWLLi4jqTmkRPOulHlSbSv2uFgj4vKtw==
+ ```
+
+ # [TPM](#tab/tpm)
+
+ For TPM enrollments, you need the endorsement key for your device. If you have followed the [Create and provision a simulated device](quick-create-simulated-device.md) quickstart to create a simulated TPM device, use the key created for that device. Otherwise, you can use the following endorsement key supplied with the SDK:
+
+ ```
+ AToAAQALAAMAsgAgg3GXZ0SEs/gakMyNRqXXJP1S124GUgtk8qHaGzMUaaoABgCAAEMAEAgAAAAAAAEAtW6MOyCu/Nih47atIIoZtlYkhLeCTiSrtRN3q6hqgOllA979No4BOcDWF90OyzJvjQknMfXS/Dx/IJIBnORgCg1YX/j4EEtO7Ase29Xd63HjvG8M94+u2XINu79rkTxeueqW7gPeRZQPnl1xYmqawYcyzJS6GKWKdoIdS+UWu6bJr58V3xwvOQI4NibXKD7htvz07jLItWTFhsWnTdZbJ7PnmfCa2vbRH/9pZIow+CcAL9mNTNNN4FdzYwapNVO+6SY/W4XU0Q+dLMCKYarqVNH5GzAWDfKT8nKzg69yQejJM8oeUWag/8odWOfbszA+iFjw3wVNrA5n8grUieRkPQ==
+ ```
+
+
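If you'd rather generate your own keys than reuse the sample values above, the following minimal sketch shows one way to do it in Python. This is an illustration only, not part of the quickstart sample: the helper name `generate_symmetric_key` is invented here, and the 64-byte length is assumed simply to match the shape of the sample keys shown above.

```python
import base64
import secrets

def generate_symmetric_key(num_bytes: int = 64) -> str:
    # Return a base64-encoded random key with the same shape as the sample keys above.
    return base64.b64encode(secrets.token_bytes(num_bytes)).decode("ascii")

print("Primary symmetric key:  ", generate_symmetric_key())
print("Secondary symmetric key:", generate_symmetric_key())
```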
## Modify the Python sample code
-This section shows how to add the provisioning details of your TPM device to the sample code.
+This section shows how to add the provisioning details of your individual enrollment to the sample code.
-1. Using a text editor, create a new **TpmEnrollment.py** file.
+1. Using a text editor, create a new **Enrollment.py** file.
+
+1. Add the following `import` statements and variables at the start of the **Enrollment.py** file. Then replace `dpsConnectionString` with your connection string, which you can find under **Shared access policies** in your **Device Provisioning Service** in the **Azure portal**. Replace the key values for your device (symmetric keys or endorsement key, depending on the tab) with the values you noted previously in [Prepare the environment](quick-enroll-device-tpm-python.md#prepareenvironment). Finally, create a unique `registrationid`, and make sure that it consists only of lowercase alphanumeric characters and hyphens (a registration ID validation sketch follows this procedure).
+
+ # [Symmetric Key](#tab/symmetrickey)
-1. Add the following `import` statements and variables at the start of the **TpmEnrollment.py** file. Then replace `dpsConnectionString` with your connection string found under **Shared access policies** in your **Device Provisioning Service** on the **Azure portal**. Replace `endorsementKey` with the value noted previously in [Prepare the environment](quick-enroll-device-tpm-python.md#prepareenvironment). Finally, create a unique `registrationid` and be sure that it only consists of lower-case alphanumerics and hyphens.
-
```python from provisioningserviceclient import ProvisioningServiceClient from provisioningserviceclient.models import IndividualEnrollment, AttestationMechanism
+ from provisioningserviceclient.protocol.models import SymmetricKeyAttestation
- CONNECTION_STRING = "{dpsConnectionString}"
+ CONNECTION_STRING = "Enter your DPS connection string"
+ PRIMARY_KEY = "Add a valid key"
+ SECONDARY_KEY = "Add a valid key"
+ REGISTRATION_ID = "Enter a registration ID"
+ ```
- ENDORSEMENT_KEY = "{endorsementKey}"
+ # [TPM](#tab/tpm)
+
+ ```python
+ from provisioningserviceclient import ProvisioningServiceClient
+ from provisioningserviceclient.models import IndividualEnrollment, AttestationMechanism
- REGISTRATION_ID = "{registrationid}"
+ CONNECTION_STRING = "Enter your DPS connection string"
+ ENDORSEMENT_KEY = "Enter the endorsement key for your device"
+ REGISTRATION_ID = "Enter a registration ID"
```
-1. Add the following function and function call to implement the group enrollment creation:
+
+
+1. Add the following function and function call to implement the creation of the individual enrollment:
+ # [Symmetric Key](#tab/symmetrickey)
+
+ ```python
+ def main():
+ print ( "Starting individual enrollment..." )
+
+ psc = ProvisioningServiceClient.create_from_connection_string(CONNECTION_STRING)
+
+ symAtt = SymmetricKeyAttestation(primary_key=PRIMARY_KEY, secondary_key=SECONDARY_KEY)
+ att = AttestationMechanism(type="symmetricKey", symmetric_key=symAtt)
+ ie = IndividualEnrollment.create(REGISTRATION_ID, att)
+
+ ie = psc.create_or_update(ie)
+
+ print ( "Individual enrollment successful." )
+
+ if __name__ == '__main__':
+ main()
+ ```
+
+ # [TPM](#tab/tpm)
+ ```python def main(): print ( "Starting individual enrollment..." )
This section shows how to add the provisioning details of your TPM device to the
main() ```
-1. Save and close the **TpmEnrollment.py** file.
+
+
+1. Save and close the **Enrollment.py** file.
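As a quick sanity check of the registration ID rule mentioned earlier (lowercase alphanumeric characters and hyphens only), here is a minimal validation sketch. The helper name is invented for this example, and the 128-character upper bound is an assumption; adjust the pattern to your own naming policy if needed.

```python
import re

# Minimal check: lowercase alphanumerics and hyphens only
# (128 characters is assumed here as the upper bound).
REGISTRATION_ID_PATTERN = re.compile(r"^[a-z0-9-]{1,128}$")

def is_valid_registration_id(registration_id: str) -> bool:
    return bool(REGISTRATION_ID_PATTERN.match(registration_id))

print(is_valid_registration_id("py-docs-test-device"))   # True
print(is_valid_registration_id("Py_Docs_Test_Device"))   # False: uppercase and underscores
```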
-## Run the sample TPM enrollment
+## Run the sample to create an enrollment
1. Open a command prompt, and run the script. ```cmd/sh
- python TpmEnrollment.py
+ python Enrollment.py
``` 1. Observe the output for the successful enrollment.
-1. Navigate to your provisioning service in the Azure portal. Select **Manage enrollments**. Notice that your TPM device appears under the **Individual Enrollments** tab, with the name `registrationid` created earlier.
+1. Navigate to your provisioning service in the Azure portal. Select **Manage enrollments**. Notice that your device enrollment appears under the **Individual Enrollments** tab, with the name `registrationid` created earlier.
![Verify successful TPM enrollment in portal](./media/quick-enroll-device-tpm-python/1.png)
If you plan to explore the Java service sample, do not clean up the resources cr
## Next steps
-In this quickstart, youΓÇÖve programmatically created an individual enrollment entry for a TPM device, and, optionally, created a TPM simulated device on your machine and provisioned it to your IoT hub using the Azure IoT Hub Device Provisioning Service. To learn about device provisioning in depth, continue to the tutorial for the Device Provisioning Service setup in the Azure portal.
+In this quickstart, you've programmatically created an individual enrollment entry for a device. To learn about device provisioning in depth, continue to the tutorial for the Device Provisioning Service setup in the Azure portal.
> [!div class="nextstepaction"] > [Azure IoT Hub Device Provisioning Service tutorials](./tutorial-set-up-cloud.md)
iot-dps Tutorial Provision Device To Hub https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-dps/tutorial-provision-device-to-hub.md
Your IoT device can be a real device, or a simulated device. Since the IoT devic
Simulated device examples, using both TPM and X.509 attestation, are included for C, Java, C#, Node.js, and Python. For example, a simulated device using TPM and the [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c) would follow the process covered in the [Simulate first boot sequence for a device](quick-create-simulated-device.md#simulate-first-boot-sequence-for-the-device) section. The same device using X.509 certificate attestation would refer to this [boot sequence](quick-create-simulated-device-x509.md#simulate-first-boot-sequence-for-the-device) section.
-Refer to the [How-to guide for the MXChip IoT DevKit](how-to-connect-mxchip-iot-devkit.md) as an example for a real device.
- Start the device to allow your device's client application to start the registration with your Device Provisioning service. ## Verify the device is registered
iot-hub-device-update Import Update https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub-device-update/import-update.md
If you haven't already done so, be sure to familiarize yourself with the basic [
| updateName | Identifier for a class of updates. The class can be anything you choose. It will often be a device or model name. | updateVersion | Version number distinguishing this update from others that have the same Provider and Name. Does not have match a version of an individual software component on the device (but can if you choose). | updateType | <ul><li>Specify `microsoft/swupdate:1` for image update</li><li>Specify `microsoft/apt:1` for package update</li></ul>
- | installedCriteria | <ul><li>Specify value of SWVersion for `microsoft/swupdate:1` update type</li><li>Specify **name-version**, where _name_ is the name of the APT Manifest and _version_ is the version of the APT Manifest. For example, contoso-iot-edge-1.0.0.0.
+ | installedCriteria | Used during deployment to compare the version already on the device with the version of the update. installedCriteria must match the version that is on the device, or deploying the update to the device will show as "failed" (see the sketch after this table).<ul><li>For the `microsoft/swupdate:1` update type, specify the value of SWVersion.</li><li>For the `microsoft/apt:1` update type, specify **name-version**, where _name_ is the name of the APT Manifest and _version_ is the version of the APT Manifest. For example, contoso-iot-edge-1.0.0.0.
| updateFilePath(s) | Path to the update file(s) on your computer
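To make the two installedCriteria formats above concrete, here is a small illustrative sketch. Only the string formats come from the table above; the helper functions themselves are invented for this example and are not part of Device Update.

```python
# Illustrative only: build installedCriteria strings following the rules in the table above.

def installed_criteria_for_swupdate(sw_version: str) -> str:
    # microsoft/swupdate:1 updates use the image's SWVersion value directly.
    return sw_version

def installed_criteria_for_apt(manifest_name: str, manifest_version: str) -> str:
    # microsoft/apt:1 updates use "<APT manifest name>-<APT manifest version>".
    return f"{manifest_name}-{manifest_version}"

print(installed_criteria_for_apt("contoso-iot-edge", "1.0.0.0"))  # contoso-iot-edge-1.0.0.0
```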
iot-hub Iot Hub Arduino Iot Devkit Az3166 Devkit Remote Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-arduino-iot-devkit-az3166-devkit-remote-monitoring.md
- Title: Connect MXChip IoT DevKit to Azure IoT Hub Remote Monitoring
-description: In this tutorial, learn how to send status of sensors on IoT DevKit AZ3166 to the Azure IoT Remote Monitoring solution accelerator.
----- Previously updated : 02/02/2018---
-# Connect MXChip IoT DevKit to Azure IoT Remote Monitoring solution accelerator
-
-In this tutorial, you learn how to run a sample app on your DevKit to send sensor data to your Azure IoT Remote Monitoring solution accelerator.
-
-The [MXChip IoT DevKit](https://aka.ms/iot-devkit) is an all-in-one Arduino-compatible board with rich peripherals and sensors. You can develop for it using the [Visual Studio Code extension for Arduino](https://aka.ms/arduino). It also comes with a growing [projects catalog](https://microsoft.github.io/azure-iot-developer-kit/docs/projects/) to guide you in prototyping Internet of Things (IoT) solutions that take advantage of Microsoft Azure services.
-
-## What you need
-
-Finish the [Getting Started Guide](./iot-hub-arduino-iot-devkit-az3166-get-started.md) to:
-
-* Have your DevKit connected to Wi-Fi
-* Prepare the development environment
-
-An active Azure subscription. If you do not have one, you can register via one of these two methods:
-
-* Activate a [free 30-day trial Microsoft Azure account](https://azure.microsoft.com/free/)
-
-* Claim your [Azure credit](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details/) if you are MSDN or Visual Studio subscriber
-
-## Create an Azure IoT Remote Monitoring solution accelerator
-
-1. Go to [Azure IoT solution accelerators site](https://www.azureiotsolutions.com/) and click **Create a new solution**.
-
- ![Select Azure IoT solution accelerator type](media/iot-hub-arduino-iot-devkit-az3166-devkit-remote-monitoring/azure-iot-suite-solution-types.png)
-
- > [!WARNING]
- > By default, this sample creates an S2 IoT Hub after it creates one IoT Remote Monitoring solution accelerator. If this IoT hub is not used with massive number of devices, we highly recommend you downgrade it from S2 to S1, and delete the IoT Remote Monitoring solution accelerator so the related IoT Hub can also be deleted, when you no longer need it.
-
-2. Select **Remote monitoring**.
-
-3. Enter a solution name, select a subscription and a region, and then click **Create solution**. The solution may take a while to be provisioned.
-
- ![Create solution](media/iot-hub-arduino-iot-devkit-az3166-devkit-remote-monitoring/azure-iot-suite-new-solution.png)
-
-4. After provisioning finishes, click **Launch**. Some simulated devices are created for the solution during the provisioning process. Click **DEVICES** to check them out.
-
- ![Dashboard](media/iot-hub-arduino-iot-devkit-az3166-devkit-remote-monitoring/azure-iot-suite-new-solution-created.png)
-
- ![Console](media/iot-hub-arduino-iot-devkit-az3166-devkit-remote-monitoring/azure-iot-suite-console.png)
-
-5. Click **ADD A DEVICE**.
-
-6. Click **Add New** for **Custom Device**.
-
- ![Add new device](media/iot-hub-arduino-iot-devkit-az3166-devkit-remote-monitoring/azure-iot-suite-add-new-device.png)
-
-7. Click **Let me define my own Device ID**, enter `AZ3166`, and then click **Create**.
-
- ![Create device with ID](media/iot-hub-arduino-iot-devkit-az3166-devkit-remote-monitoring/azure-iot-suite-new-device-configuration.png)
-
-8. Make a note of **IoT Hub Hostname**, and click **Done**.
-
-## Open the RemoteMonitoring sample
-
-1. Disconnect the DevKit from your computer, if it is connected.
-
-2. Start VS Code.
-
-3. Connect the DevKit to your computer. VS Code automatically detects your DevKit and opens the following pages:
-
- * The DevKit introduction page.
- * Arduino Examples: Hands-on samples to get started with DevKit.
-
-4. Expand left side **ARDUINO EXAMPLES** section, browse to **Examples for MXCHIP AZ3166 > AzureIoT**, and select **RemoteMonitoring**. It opens a new VS Code window with a project folder in it.
-
- > [!NOTE]
- > If you happen to close the pane, you can reopen it. Use `Ctrl+Shift+P` (macOS: `Cmd+Shift+P`) to open the command palette, type **Arduino**, and then find and select **Arduino: Examples**.
-
-## Provision required Azure services
-
-In the solution window, run your task through `Ctrl+P` (macOS: `Cmd+P`) by entering `task cloud-provision` in the provided text box.
-
-In the VS Code terminal, an interactive command line guides you through provisioning the required Azure services.
-
-![Provision Azure resources](media/iot-hub-arduino-iot-devkit-az3166-devkit-remote-monitoring/provision.png)
-
-## Build and upload the device code
-
-1. Use `Ctrl+P` (macOS: `Cmd + P`) and type **task config-device-connection**.
-
-2. The terminal asks whether you want to use a connection string that it retrieves from the `task cloud-provision` step. You could also input your own device connection string by clicking 'Create New...'
-
-3. The terminal prompts you to enter configuration mode. To do so, hold down button A, then push and release the reset button. The screen displays the DevKit ID and 'Configuration'.
-
- ![Input connection string](media/iot-hub-arduino-iot-devkit-az3166-devkit-remote-monitoring/config-device-connection.png)
-
-4. After `task config-device-connection` finishes, click `F1` to load VS Code commands and select `Arduino: Upload`. VS Code starts verifying and uploading the Arduino sketch.
-
- ![Verification and upload of the Arduino sketch](media/iot-hub-arduino-iot-devkit-az3166-devkit-remote-monitoring/arduino-upload.png)
-
-The DevKit reboots and starts running the code.
-
-## Test the project
-
-When the sample app runs, DevKit sends sensor data over WiFi to your Azure IoT Remote Monitoring solution accelerator. To see the result, follow these steps:
-
-1. Go to your Azure IoT Remote Monitoring solution accelerator, and click **DASHBOARD**.
-
-2. On the Remote Monitoring solution console, you will see your DevKit sensor status.
-
- ![Sensor data in Azure IoT Remote Monitoring solution accelerator](media/iot-hub-arduino-iot-devkit-az3166-devkit-remote-monitoring/sensor-status.png)
-
-## Change device ID
-
-If you want to change the hardcoded **AZ3166** to a customized device ID in the code, modify the line of code displayed in the [remote monitoring example](/previous-versions/azure/iot-accelerators/iot-accelerators-arduino-iot-devkit-az3166-devkit-remote-monitoring-v2).
-
-## Problems and feedback
-
-If you encounter problems, refer to [the IoT developer kit FAQs](https://microsoft.github.io/azure-iot-developer-kit/docs/faq/) or reach out to us using the following channels:
-
-* [Gitter.im](https://gitter.im/Microsoft/azure-iot-developer-kit)
-* [Stack Overflow](https://stackoverflow.com/questions/tagged/iot-devkit)
-
-## Next steps
-
-Now that you have learned how to connect a DevKit device to your Azure IoT Remote Monitoring solution accelerator and visualize the sensor data, here are the suggested next steps:
-
-* [Azure IoT solution accelerators overview](/azure/iot-suite/)
-
-* [IoT developer kit](https://microsoft.github.io/azure-iot-developer-kit/)
iot-hub Iot Hub Arduino Iot Devkit Az3166 Devkit State https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-arduino-iot-devkit-az3166-devkit-state.md
- Title: Use Azure device twins to control MXChip IoT DevKit user LED | Microsoft Docs
-description: In this tutorial, learn how to monitor DevKit states and control the user LED with Azure IoT Hub device twins.
----- Previously updated : 04/04/2018---
-# MXChip IoT DevKit
-
-You can use this example to monitor the MXChip IoT DevKit WiFi information and sensor states and to control the color of the user LED using Azure IoT Hub device twins.
-
-## What you learn
--- How to monitor the MXChip IoT DevKit sensor states.--- How to use Azure device twins to control the color of the DevKit's RGB LED.-
-## What you need
--- Set up your development environment by following the [Getting Started Guide](./iot-hub-arduino-iot-devkit-az3166-get-started.md).--- From your GitBash terminal window (or other Git command-line interface), type the following commands:-
- ```bash
- git clone https://github.com/DevKitExamples/DevKitState.git
- cd DevKitState
- code .
- ```
-
-## Provision Azure Services
-
-1. Click the **Tasks** drop-down menu in Visual Studio Code and select **Run Task...** - **cloud-provision**.
-
-2. Your progress is displayed under the **TERMINAL** tab of the **Welcome** panel.
-
-3. When prompted with the message *What subscription would you like to choose*, select a subscription.
-
-4. Select or choose a resource group.
-
- > [!NOTE]
- > If you already have a free IoT Hub, you can skip this step.
-
-5. When prompted with the message *What IoT hub would you like to choose*, select or create an IoT Hub.
-
-6. Something similar to *function app: function app name: xxx*, is displayed. Write down the function app name; it will be used in a later step.
-
-7. Wait for the Azure Resource Manager template deployment to finish, which is indicated when the message *Resource Manager template deployment: Done* is displayed.
-
-## Deploy Function App
-
-1. Click the **Tasks** drop-down menu in Visual Studio Code and select **Run Task...** - **cloud-deploy**.
-
-2. Wait for function app code uploading process to finish; the message *function app deploys: Done* is displayed.
-
-## Configure IoT Hub Device Connection String in DevKit
-
-1. Connect your MXChip IoT DevKit to your computer.
-
-2. Click the **Tasks** drop-down menu in Visual Studio Code and select **Run Task...** - **config-device-connection**
-
-3. On the MXChip IoT DevKit, press and hold button **A**, press the **Reset** button, and then release button **A** to make the DevKit enter configuration mode.
-
-4. Wait for connection string configuration process to be completed.
-
-## Upload Arduino Code to DevKit
-
-With your MXChip IoT DevKit connected to your computer:
-
-1. Click the **Tasks** drop-down menu in Visual Studio Code and select **Run Build Task...** The Arduino sketch is compiled and uploaded to the DevKit.
-
-2. When the sketch has been uploaded successfully, a *Build & Upload Sketch: success* message is displayed.
-
-## Monitor DevKit State in Browser
-
-1. In a Web browser, open the `DevKitState\web\index.html` file, which was created during the What you need step.
-
-2. The following Web page appears:![Specify the function app name.](media/iot-hub-arduino-iot-devkit-az3166-devkit-state/devkit-state-function-app-name.png)
-
-3. Input the function app name you wrote down earlier.
-
-4. Click the **Connect** button
-
-5. Within a few seconds, the page refreshes and displays the DevKit's WiFi connection status and the state of each of the onboard sensors.
-
-## Control the DevKit's User LED
-
-1. Click the user LED graphic on the Web page illustration.
-
-2. Within a few seconds, the screen refreshes and shows the current color status of the user LED.
-
-3. Try changing the color value of the RGB LED by clicking in various locations on the RGB slider controls.
-
-## Example operation
-
-![Example test procedure](media/iot-hub-arduino-iot-devkit-az3166-devkit-state/devkit-state.gif)
-
-> [!NOTE]
-> You can see raw data of device twin in Azure portal:
-> IoT Hub -\> IoT devices -\> *\<your device\>* -\> Device Twin.
-
-## Next steps
-
-You have learned how to:
-- Connect an MXChip IoT DevKit device to your Azure IoT Remote Monitoring solution accelerator.-- Use the Azure IoT device twins function to sense and control the color of the DevKit's RGB LED.-
-Here is the suggested next step: [Azure IoT Remote Monitoring solution accelerator overview](/azure/iot-suite/)
iot-hub Iot Hub Arduino Iot Devkit Az3166 Door Monitor https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-arduino-iot-devkit-az3166-door-monitor.md
- Title: Send email when door is opened using Azure Functions
-description: Monitor the magnetic sensor to detect when a door is opened and use Azure Functions to send an email notification.
---- Previously updated : 03/19/2018---
-# Door Monitor -- Using Azure Functions and SendGrid, send email when a door is opened
-
-The MXChip IoT DevKit contains a built-in magnetic sensor. In this project, you detect the presence or absence of a nearby strong magnetic field -- in this case, coming from a small, permanent magnet.
-
-## What you learn
-
-In this project, you learn:
-- How to use the MXChip IoT DevKit's magnetic sensor to detect the movement of a nearby magnet.-- How to use the SendGrid service to send a notification to your email address.-
-> [!NOTE]
-> For a practical use of this project, perform the following tasks:
-> - Mount a magnet to the edge of a door.
-> - Mount the DevKit on the door jamb close to the magnet. Opening or closing the door will trigger the sensor, resulting in your receiving an email notification of the event.
-
-## What you need
-
-Finish the [Getting Started Guide](iot-hub-arduino-iot-devkit-az3166-get-started.md) to:
-
-* Have your DevKit connected to Wi-Fi
-* Prepare the development environment
-
-An active Azure subscription. If you do not have one, you can register via one of these methods:
-
-* Activate a [free 30-day trial Microsoft Azure account](https://azure.microsoft.com/free/).
-* Claim your [Azure credit](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details/) if you are an MSDN or Visual Studio subscriber.
-
-## Deploy the SendGrid service in Azure
-
-[SendGrid](https://sendgrid.com/) is a cloud-based email delivery platform. This service will be used to send email notifications.
-
-> [!NOTE]
-> If you have already deployed a SendGrid service, you may proceed directly to [Deploy IoT Hub in Azure](#deploy-iot-hub-in-azure).
-
-### SendGrid Deployment
-
-To provision Azure services, use the **Deploy to Azure** button. This button enables quick and easy deployment of your open-source projects to Microsoft Azure.
-
-Click the **Deploy to Azure** button below.
-
-[![Deploy to Azure](https://azuredeploy.net/deploybutton.png)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FVSChina%2Fdevkit-door-monitor%2Fmaster%2FSendGridDeploy%2Fazuredeploy.json)
-
-If you are not already signed into your Azure account, sign in now.
-
-You now see the SendGrid sign-up form.
-
-![SendGrid Deployment](media/iot-hub-arduino-iot-devkit-az3166-door-monitor/sendgrid-deploy.png)
-
-Complete the sign-up form:
-
- * **Resource group**: Create a resource group to host the SendGrid service, or use an existing one. See [Using resource groups to manage your Azure resources](../azure-resource-manager/management/manage-resource-groups-portal.md).
-
- * **Name**: The name for your SendGrid service. Choose a unique name, differing from other services you may have.
-
- * **Password**: The service requires a password, which will not be used for anything in this project.
-
- * **Email**: The SendGrid service will send verification to this email address.
-
-Check the **Pin to dashboard** option to make this application easier to find in the future, then click **Purchase** to submit the sign-in form.
-
-### SendGrid API Key creation
-
-After the deployment completes, click it and then click the **Manage** button. Your SendGrid account page appears, where you need to verify your email address.
-
-![SendGrid Manage](media/iot-hub-arduino-iot-devkit-az3166-door-monitor/sendgrid-manage.png)
-
-On the SendGrid page, click **Settings** > **API Keys** > **Create API Key**.
-
-![SendGrid Create API First](media/iot-hub-arduino-iot-devkit-az3166-door-monitor/sendgrid-create-api-first.png)
-
-On the **Create API Key** page, input the **API Key Name** and click **Create & View**.
-
-![SendGrid Create API Second](media/iot-hub-arduino-iot-devkit-az3166-door-monitor/sendgrid-create-api-second.png)
-
-Your API key is displayed only one time. Be sure to copy and store it safely, as it is used in the next step.
-
-## Deploy IoT Hub in Azure
-
-The following steps will provision other Azure IoT related services and deploy Azure Functions for this project.
-
-Click the **Deploy to Azure** button below.
-
-[![Deploy to Azure](https://azuredeploy.net/deploybutton.png)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FVSChina%2Fdevkit-door-monitor%2Fmaster%2Fazuredeploy.json)
-
-The sign-up form appears.
-
-![IoTHub Deployment](media/iot-hub-arduino-iot-devkit-az3166-door-monitor/iot-hub-deploy.png)
-
-Fill in the fields on the sign-up form.
-
- * **Resource group**: Create a resource group to host the SendGrid service, or use an existing one. See [Using resource groups to manage your Azure resources](../azure-resource-manager/management/manage-resource-groups-portal.md).
-
- * **Iot Hub Name**: The name for your IoT hub. Choose a unique name, differing from other services you may have.
-
- * **Iot Hub Sku**: F1 (limited to one per subscription) is free. You can see more pricing information on the [pricing page](https://azure.microsoft.com/pricing/details/iot-hub/).
-
- * **From Email**: This field should be the same email address you used when setting up the SendGrid service.
-
-Check the **Pin to dashboard** option to make this application easier to find in the future, then click **Purchase** when you're ready to continue to the next step.
-
-## Build and upload the code
-
-Next, load the sample code in VS Code and provision the necessary Azure services.
-
-### Start VS Code
--- Make sure your DevKit is **not** connected to your computer.-- Start VS Code.-- Connect the DevKit to your computer.-
-> [!NOTE]
-> When you launch VS Code, you may receive an error message stating that it cannot find the Arduino IDE or related board package. If you receive this error, close VS Code, launch the Arduino IDE again, and VS Code should locate the Arduino IDE path correctly.
-
-### Open Arduino Examples folder
-
-Expand the left side **ARDUINO EXAMPLES** section, browse to **Examples for MXCHIP AZ3166 > AzureIoT**, and select **DoorMonitor**. This action opens a new VS Code window with a project folder in it.
-
-![mini-solution-examples](media/iot-hub-arduino-iot-devkit-az3166-door-monitor/vscode-examples.png)
-
-You can also open the example app from the command palette. Use `Ctrl+Shift+P` (macOS: `Cmd+Shift+P`) to open the command palette, type **Arduino**, and then find and select **Arduino: Examples**.
-
-### Provision Azure services
-
-In the solution window, run the cloud provisioning task:
-- Type `Ctrl+P` (macOS: `Cmd+P`).-- Enter `task cloud-provision` in the provided text box.-
-In the VS Code terminal, an interactive command line guides you through provisioning the required Azure services. Select all of the same items from the prompted list that you previously provisioned in [Deploy IoT Hub in Azure](#deploy-iot-hub-in-azure).
-
-![Cloud Provision](media/iot-hub-arduino-iot-devkit-az3166-door-monitor/cloud-provision.png)
-
-> [!NOTE]
-> If the page hangs in the loading status when trying to sign in to Azure, refer to the ["page hanges when logging in" section of the IoT DevKit FAQ](https://microsoft.github.io/azure-iot-developer-kit/docs/faq/#page-hangs-when-log-in-azure) to resolve this issue.
-
-### Build and upload the device code
-
-Next, upload the code for the device.
-
-#### Windows
-
-1. Use `Ctrl+P` to run `task device-upload`.
-
-2. The terminal prompts you to enter configuration mode. To do so, hold down button A, then push and release the reset button. The screen displays the DevKit identification number and the word *Configuration*.
-
-#### macOS
-
-1. Put the DevKit into configuration mode: Hold down button A, then push and release the reset button. The screen displays 'Configuration'.
-
-2. Click `Cmd+P` to run `task device-upload`.
-
-#### Verify, upload, and run the sample app
-
-The connection string that is retrieved from the [Provision Azure services](#provision-azure-services) step is now set.
-
-VS Code then starts verifying and uploading the Arduino sketch to the DevKit.
-
-![Screenshot shows Visual Studio Code verifying and uploading the Arduino sketch.](media/iot-hub-arduino-iot-devkit-az3166-door-monitor/device-upload.png)
-
-The DevKit reboots and starts running the code.
-
-> [!NOTE]
-> Occasionally, you may receive an "Error: AZ3166: Unknown package" error message. This error occurs when the board package index is not refreshed correctly. To resolve this error, refer to the [development section of the IoT DevKit FAQ](https://microsoft.github.io/azure-iot-developer-kit/docs/faq/#development).
-
-## Test the project
-
-The program first initializes when the DevKit is in the presence of a stable magnetic field.
-
-After initialization, `Door closed` is displayed on the screen. When there is a change in the magnetic field, the state changes to `Door opened`. Each time the door state changes, you receive an email notification. (These email messages may take up to five minutes to be received.)
-
-![Magnets close to the sensor: Door Closed](media/iot-hub-arduino-iot-devkit-az3166-door-monitor/test-door-closed.jpg "Magnets close to the sensor: Door Closed")
-
-![Magnet moved away from the sensor: Door Opened](media/iot-hub-arduino-iot-devkit-az3166-door-monitor/test-door-opened.jpg "Magnet moved away from the sensor: Door Opened")
-
-## Problems and feedback
-
-If you encounter problems, refer to the [IoT DevKit FAQ](https://microsoft.github.io/azure-iot-developer-kit/docs/faq/) or connect using the following channels:
-
-* [Gitter.im](https://gitter.im/Microsoft/azure-iot-developer-kit)
-* [Stack Overflow](https://stackoverflow.com/questions/tagged/iot-devkit)
-
-## Next steps
-
-You have learned how to connect a DevKit device to your Azure IoT Remote Monitoring solution accelerator and used the SendGrid service to send an email. Here is the suggested next step:[Azure IoT Remote Monitoring solution accelerator overview](/azure/iot-suite/)
iot-hub Iot Hub Arduino Iot Devkit Az3166 Get Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-arduino-iot-devkit-az3166-get-started.md
- Title: Connect IoT DevKit AZ3166 to an Azure IoT Hub
-description: In this tutorial, learn how to set up and connect IoT DevKit AZ3166 to Azure IoT Hub so it can send data to the Azure cloud platform.
---- Previously updated : 04/29/2021----
-# Connect IoT DevKit AZ3166 to Azure IoT Hub
--
-You can use the [MXChip IoT DevKit](https://microsoft.github.io/azure-iot-developer-kit/) to develop and prototype Internet of Things (IoT) solutions that take advantage of Microsoft Azure services. The kit includes an Arduino-compatible board with rich peripherals and sensors, an open-source board package, and a rich [sample gallery](https://microsoft.github.io/azure-iot-developer-kit/docs/projects/).
-
-## What you learn
-
-* How to create an IoT hub and register a device for the MXChip IoT DevKit.
-* How to connect the IoT DevKit to Wi-Fi and configure the IoT Hub connection string.
-* How to send the DevKit sensor telemetry data to your IoT hub.
-* How to prepare the development environment and develop application for the IoT DevKit.
-
-Don't have a DevKit yet? Try the [DevKit simulator](https://azure-samples.github.io/iot-devkit-web-simulator/) or [purchase a DevKit](https://aka.ms/iot-devkit-purchase).
-
-You can find the source code for all DevKit tutorials from [code samples gallery](/samples/browse/?term=mxchip).
-
-## What you need
--- A MXChip IoT DevKit board with a Micro-USB cable. [Get it now](https://aka.ms/iot-devkit-purchase).-- A computer running Windows 10, macOS 10.10+ or Ubuntu 18.04+.-- An active Azure subscription. [Activate a free 30-day trial Microsoft Azure account](https://azureinfo.microsoft.com/us-freetrial.html).
-
-## Prepare the development environment
-
-Follow these steps to prepare the development environment for the DevKit:
-
-#### Install Visual Studio Code with Azure IoT Tools extension package
-
-1. Install [Arduino IDE](https://www.arduino.cc/en/Main/Software). It provides the necessary toolchain for compiling and uploading Arduino code.
- * **Windows**: Use Windows Installer version. Do not install from the App Store.
- * **macOS**: Drag and drop the extracted **Arduino.app** into `/Applications` folder.
- * **Ubuntu**: Unzip it into folder such as `$HOME/Downloads/arduino-1.8.8`
-
-2. Install [Visual Studio Code](https://code.visualstudio.com/), a cross platform source code editor with powerful intellisense, code completion, and debugging support as well as rich extensions can be installed from marketplace.
-
-3. Launch VS Code, look for **Arduino** in the extension marketplace and install it. This extension provides enhanced experiences for developing on Arduino platform.
-
- ![Install Arduino](media/iot-hub-arduino-devkit-az3166-get-started/getting-started/install-arduino.png)
-
-4. Look for [Azure IoT Tools](https://aka.ms/azure-iot-tools) in the extension marketplace and install it.
-
- ![Screenshot that shows Azure IoT Tools in the extension marketplace.](media/iot-hub-arduino-devkit-az3166-get-started/getting-started/install-azure-iot-tools.png)
-
- Or copy and paste this URL into a browser window: `vscode:extension/vsciot-vscode.azure-iot-tools`
-
- > [!NOTE]
- > The Azure IoT Tools extension pack contains the [Azure IoT Device Workbench](https://aka.ms/iot-workbench) which is used to develop and debug on various IoT devkit devices. The [Azure IoT Hub extension](https://aka.ms/iot-toolkit), also included with the Azure IoT Tools extension pack, is used to manage and interact with Azure IoT Hubs.
-
-5. Configure VS Code with Arduino settings.
-
- In Visual Studio Code, click **File > Preferences > Settings** (on macOS, **Code > Preferences > Settings**). Then click the **Open Settings (JSON)** icon in the upper-right corner of the *Settings* page.
-
- ![Install Azure IoT Tools](media/iot-hub-arduino-devkit-az3166-get-started/getting-started/user-settings-arduino.png)
-
- The correct path to your Arduino installation must be configured in VS Code. Add following lines to configure Arduino depending on your platform and the directory path where you installed the Arduino IDE:
-
- * **Windows**:
-
- ```json
- "arduino.path": "C:\\Program Files (x86)\\Arduino",
- "arduino.additionalUrls": "https://raw.githubusercontent.com/VSChina/azureiotdevkit_tools/master/package_azureboard_index.json"
- ```
-
- * **macOS**:
-
- ```json
- "arduino.path": "/Applications",
- "arduino.additionalUrls": "https://raw.githubusercontent.com/VSChina/azureiotdevkit_tools/master/package_azureboard_index.json"
- ```
-
- * **Ubuntu**:
-
- Replace the **{username}** placeholder below with your username.
-
- ```json
- "arduino.path": "/home/{username}/Downloads/arduino-1.8.13",
- "arduino.additionalUrls": "https://raw.githubusercontent.com/VSChina/azureiotdevkit_tools/master/package_azureboard_index.json"
- ```
-
-6. Click `F1` to open the command palette, type and select **Arduino: Board Manager**. Search for **AZ3166** and install the latest version.
-
- ![Install DevKit SDK](media/iot-hub-arduino-devkit-az3166-get-started/getting-started/install-az3166-sdk.png)
-
-#### Install ST-Link drivers
-
-[ST-Link/V2](https://www.st.com/en/development-tools/st-link-v2.html) is the USB interface that IoT Dev