Updates from: 11/12/2022 12:57:11
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory How To Mfa Number Match https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-number-match.md
description: Learn how to use number matching in MFA notifications
Previously updated : 11/04/2022 Last updated : 11/11/2022
When a user goes through combined registration to set up the Authenticator app,
### AD FS adapter
-The AD FS adapter supports number matching after installing an update. Earlier versions of Windows Server don't support number matching. On earlier versions, users will continue to see the **Approve**/**Deny** experience and won't see number matching until you upgrade.
+The AD FS adapter supports number matching after installing an update. Unpatched versions of Windows Server don't support number matching. Users will continue to see the **Approve**/**Deny** experience and won't see number matching unless these updates are applied.
| Version | Update |
| --- | --- |
active-directory How To Mfa Server Migration Utility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-server-migration-utility.md
Previously updated : 10/10/2022 Last updated : 11/11/2022
Depending on user activity, the data file can become outdated quickly. Any chang
### Install MFA Server update

Run the new installer on the Primary MFA Server. Before you upgrade a server, remove it from load balancing or traffic sharing with other MFA Servers. You don't need to uninstall your current MFA Server before running the installer. The installer performs an in-place upgrade using the current installation path (for example, C:\Program Files\Multi-Factor Authentication Server). If you're prompted to install a Microsoft Visual C++ 2015 Redistributable update package, accept the prompt. Both the x86 and x64 versions of the package are installed. It isn't required to install updates for User portal, Web SDK, or AD FS Adapter.
-After the installation is complete, it can take several minutes for the datafile to be upgraded. During this time, the User portal may have issues connecting to the MFA Service. **Don't restart the MFA Service, or the MFA Server during this time.** This behavior is normal. Once the upgrade is complete, the primary server's main service will again be functional.
-
-You can check \Program Files\Multi-Factor Authentication Server\Logs\MultiFactorAuthSvc.log to see progress and make sure the upgrade is complete. **Completed performing tasks to upgrade from 23 to 24**.
-
-If you have thousands of users, you might schedule the upgrade during a maintenance window and take the User portal offline during this time. To estimate how long the upgrade will take, plan on around 4 minutes per 10,000 users. You can minimize the time by cleaning up disabled or inactive users prior to the upgrade.
- >[!NOTE] >After you run the installer on your primary server, secondary servers may begin to log **Unhandled SB** entries. This is due to schema changes made on the primary server that will not be recognized by secondary servers. These errors are expected. In environments with 10,000 users or more, the amount of log entries can increase significantly. To mitigate this issue, you can increase the file size of your MFA Server logs, or upgrade your secondary servers.
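As a quick, hedged way to confirm that the upgrade described above has finished, you could watch the log for the completion message; the path and message text are taken from the guidance above, so adjust them to your installation:

```powershell
# Watch the MFA Server upgrade log until the completion message appears.
# Path and message text are assumed from the guidance above; adjust for your installation.
$log = 'C:\Program Files\Multi-Factor Authentication Server\Logs\MultiFactorAuthSvc.log'
while (-not (Select-String -Path $log -Pattern 'Completed performing tasks to upgrade' -Quiet)) {
    Start-Sleep -Seconds 60
}
Write-Output "Upgrade appears complete as of $(Get-Date)."
```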
active-directory Howto Configure Publisher Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-configure-publisher-domain.md
Previously updated : 06/23/2021 Last updated : 11/11/2022 -+
active-directory Mark App As Publisher Verified https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/mark-app-as-publisher-verified.md
Previously updated : 09/27/2021 Last updated : 11/12/2022 -+ # Mark your app as publisher verified
active-directory Scenario Web App Sign User Sign In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-web-app-sign-user-sign-in.md
In ASP.NET Core, for Microsoft identity platform applications, the **Sign in** b
# [ASP.NET](#tab/aspnet)
-In ASP.NET MVC, the sign-out button is exposed in `Views\Shared\_LoginPartial.cshtml`. It's displayed only when there's an authenticated account. That is, it's displayed when the user has previously signed in.
+In ASP.NET MVC, the **Sign in** button is exposed in `Views\Shared\_LoginPartial.cshtml`. It's displayed only when the user isn't authenticated. That is, it's displayed when the user hasn't yet signed in or has signed out.
```html @if (Request.IsAuthenticated)
This controller also handles the Azure AD B2C applications.
# [ASP.NET](#tab/aspnet)
-In ASP.NET, signing out is triggered from the `SignOut()` method on a controller (for instance, [AccountController.cs#L16-L23](https://github.com/Azure-Samples/ms-identity-aspnet-webapp-openidconnect/blob/a2da310539aa613b77da1f9e1c17585311ab22b7/WebApp/Controllers/AccountController.cs#L16-L23)). This method isn't part of the ASP.NET framework (contrary to what happens in ASP.NET Core). It sends an OpenID sign-in challenge after proposing a redirect URI.
+In ASP.NET, sign-in is triggered from the `SignIn()` method on a controller (for instance, [AccountController.cs#L16-L23](https://github.com/Azure-Samples/ms-identity-aspnet-webapp-openidconnect/blob/a2da310539aa613b77da1f9e1c17585311ab22b7/WebApp/Controllers/AccountController.cs#L16-L23)). This method isn't part of the ASP.NET framework (contrary to what happens in ASP.NET Core). It sends an OpenID sign-in challenge after proposing a redirect URI.
```csharp public void SignIn()
active-directory Troubleshoot Publisher Verification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/troubleshoot-publisher-verification.md
Most commonly caused when the verification is being performed via Graph API, and
This feature isn't supported for Microsoft consumer accounts. Only applications registered in Azure AD by an Azure AD user are supported.
-Occurs when a consumer account (Hotmail, Messenger, OneDrive, MSN, Xbox Live, or Microsoft 365).
+Occurs when a consumer account is used for app registration (Hotmail, Messenger, OneDrive, MSN, Xbox Live, or Microsoft 365).
### InteractionRequired
active-directory Groups Naming Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-naming-policy.md
You can enforce naming policy for groups in two different ways:
### Prefix-suffix naming policy
-The general structure of the naming convention is 'Prefix[GroupName]Suffix'. While you can define multiple prefixes and suffixes, you can only have one instance of the [GroupName] in the setting. The prefixes or suffixes can be either fixed strings or user attributes such as \[Department\] that are substituted based on the user who is creating the group. The total allowable number of characters for your prefix and suffix strings including group name is 53 characters.
+The general structure of the naming convention is 'Prefix[GroupName]Suffix'. While you can define multiple prefixes and suffixes, you can only have one instance of the [GroupName] in the setting. The prefixes or suffixes can be either fixed strings or user attributes such as \[Department\] that are substituted based on the user who is creating the group. The total allowable number of characters for your prefix and suffix strings including group name is 63 characters.
Prefixes and suffixes can contain special characters that are supported in group name and group alias. Any characters in the prefix or suffix that are not supported in the group alias are still applied in the group name, but removed from the group alias. Because of this restriction, the prefixes and suffixes applied to the group name might be different from the ones applied to the group alias.
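Purely as an illustration of the limit described above (not an API call), the prefix, the user-supplied group name, and the suffix together must fit within 63 characters. A rough sketch of that check in PowerShell, using made-up values:

```powershell
# Illustrative only: shows how prefix + group name + suffix count against the 63-character limit.
$prefix    = 'GRP-Engineering-'      # for example, a fixed string plus a [Department] substitution
$groupName = 'ContosoDesignReview'
$suffix    = '-Internal'

$fullName = "$prefix$groupName$suffix"
if ($fullName.Length -gt 63) {
    Write-Warning "Resulting name '$fullName' is $($fullName.Length) characters; the limit is 63."
} else {
    Write-Output "Resulting name '$fullName' fits within the 63-character limit ($($fullName.Length) characters)."
}
```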
active-directory Reset Redemption Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/reset-redemption-status.md
Previously updated : 06/16/2022 Last updated : 11/11/2022
In this article, you'll learn how to update the [guest user's](user-properties.m
To manage these scenarios previously, you had to manually delete the guest user's account from your directory and reinvite the user. Now you can use the Azure portal, PowerShell, or the Microsoft Graph invitation API to reset the user's redemption status and reinvite the user while keeping the user's object ID, group memberships, and app assignments. When the user redeems the new invitation, the [UPN](../hybrid/plan-connect-userprincipalname.md#what-is-userprincipalname) of the user doesn't change, but the user's sign-in name changes to the new email. Then the user can sign in using the new email or an email you've added to the `otherMails` property of the user object.
+## Required Azure AD roles
+
+To reset a user's redemption status, you'll need one of the following roles:
+
+- [Guest Inviter](../roles/permissions-reference.md#guest-inviter) (least privileged)
+- [User Administrator](../roles/permissions-reference.md#user-administrator)
+- [Global Administrator](../roles/permissions-reference.md#global-administrator)
+ ## Use the Azure portal to reset redemption status
-1. Sign in to the [Azure portal](https://portal.azure.com/) using a Global administrator or User administrator account for the directory.
-1. Search for and select **Azure Active Directory**.
-1. Select **Users**.
-1. In the list, select the user's name to open their user profile.
-1. If the user wants to sign in using a different email:
+1. Sign in to the [Azure portal](https://portal.azure.com/) using an account that has one of the [required Azure AD roles](#required-azure-ad-roles).
+2. Search for and select **Azure Active Directory**.
+3. Select **Users**.
+4. In the list, select the user's name to open their user profile.
+5. If the user wants to sign in using a different email:
   - Select **Edit properties**.
   - Select the **Contact Information** tab.
   - Next to **Email**, type the new email.
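For the Microsoft Graph route mentioned earlier, a minimal sketch with the Microsoft Graph PowerShell SDK might look like the following. The object ID, email address, and redirect URL are placeholders, and the endpoint version (v1.0 versus beta) for the resetRedemption option should be verified against the Graph reference:

```powershell
# Sketch: re-invite an existing guest and reset their redemption status via Microsoft Graph.
# Placeholders: guest object ID, email address, and redirect URL. Requires the Microsoft.Graph module.
Connect-MgGraph -Scopes 'User.Invite.All'

$body = @{
    invitedUserEmailAddress = 'guest@fabrikam.com'            # the (possibly new) email for the guest
    inviteRedirectUrl       = 'https://myapps.microsoft.com'
    sendInvitationMessage   = $true
    resetRedemption         = $true
    invitedUser             = @{ id = '<guest user object ID>' }
}

# If resetRedemption isn't available on v1.0 in your tenant, try the beta endpoint instead.
Invoke-MgGraphRequest -Method POST -Uri 'https://graph.microsoft.com/v1.0/invitations' -Body $body
```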
active-directory Lifecycle Workflow Tasks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/lifecycle-workflow-tasks.md
For Microsoft Graph the parameters for the **Generate Temporary Access Pass and
### Add user to groups
-Allows users to be added to cloud-only groups. To control access to on-premises applications and resources, you need to enable group writeback. For more information, see [Azure AD Connect group writeback](../hybrid/how-to-connect-group-writeback-v2.md).
+Allows users to be added to Microsoft 365 and cloud-only security groups. Mail-enabled, distribution, dynamic and privileged access groups are not supported. To control access to on-premises applications and resources, you need to enable group writeback. For more information, see [Azure AD Connect group writeback](../hybrid/how-to-connect-group-writeback-v2.md).
You're able to customize the task name and description for this task. :::image type="content" source="media/lifecycle-workflow-task/add-group-task.png" alt-text="Screenshot of Workflows task: Add user to group task.":::
For Microsoft Graph the parameters for the **Run a Custom Task Extension** task
|category | joiner, leaver | |displayName | Run a Custom Task Extension (Customizable by user) | |description | Run a Custom Task Extension to call-out to an external system. (Customizable by user) |
-|taskDefinitionId | "d79d1fcc-16be-490c-a865-f4533b1639ee |
-|argument | Argument contains a name parameter that is the "LogicAppURL", and a value parameter that is the Logic App HTTP trigger. |
+|taskDefinitionId | d79d1fcc-16be-490c-a865-f4533b1639ee |
+|argument | Argument contains a name parameter that is the "customTaskExtensionID", and a value parameter that is the ID of the previously created extension that contains information about the Logic App. |
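Purely as an illustration of how this argument might be assembled before it's sent to Microsoft Graph, here's a PowerShell hashtable version; the extension ID is a placeholder:

```powershell
# Illustrative only: the "Run a Custom Task Extension" task built as a hashtable before conversion to JSON.
# The taskDefinitionId comes from the table above; the extension ID is a placeholder.
$task = @{
    category         = 'joiner,leaver'
    displayName      = 'Run a Custom Task Extension'
    description      = 'Run a Custom Task Extension to call-out to an external system.'
    taskDefinitionId = 'd79d1fcc-16be-490c-a865-f4533b1639ee'
    arguments        = @(
        @{ name = 'customTaskExtensionID'; value = '<ID of your Custom Task Extension>' }
    )
}

$task | ConvertTo-Json -Depth 5
```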
For Microsoft Graph the parameters for the **Run a Custom Task Extension** task
"taskDefinitionId": "d79d1fcc-16be-490c-a865-f4533b1639ee", "arguments": [ {
- "name": "CustomTaskExtensionID",
+ "name": "customTaskExtensionID",
"value": ""<ID of your Custom Task Extension>"" } ]
For Microsoft Graph the parameters for the **Disable user account** task are as
### Remove user from selected groups
-Allows you to remove a user from cloud-only groups. Dynamic and Privileged Access Groups not supported. To control access to on-premises applications and resources, you need to enable group writeback. For more information, see [Azure AD Connect group writeback](../hybrid/how-to-connect-group-writeback-v2.md).
+Allows users to be removed from Microsoft 365 and cloud-only security groups. Mail-enabled, distribution, dynamic and privileged access groups are not supported. To control access to on-premises applications and resources, you need to enable group writeback. For more information, see [Azure AD Connect group writeback](../hybrid/how-to-connect-group-writeback-v2.md).
You're able to customize the task name and description for this task in the Azure portal. :::image type="content" source="media/lifecycle-workflow-task/remove-group-task.png" alt-text="Screenshot of Workflows task: Remove user from select groups.":::
For Microsoft Graph the parameters for the **Remove user from selected groups**
### Remove users from all groups
-Allows users to be removed from every cloud-only group they're a member of. Dynamic and Privileged Access Groups not supported. To control access to on-premises applications and resources, you need to enable group writeback. For more information, see [Azure AD Connect group writeback](../hybrid/how-to-connect-group-writeback-v2.md).
+Allows users to be removed from every Microsoft 365 and cloud-only security group they're a member of. Mail-enabled, distribution, dynamic and privileged access groups are not supported. To control access to on-premises applications and resources, you need to enable group writeback. For more information, see [Azure AD Connect group writeback](../hybrid/how-to-connect-group-writeback-v2.md).
You're able to customize the task name and description for this task in the Azure portal.
active-directory Lifecycle Workflow Versioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/lifecycle-workflow-versioning.md
Properties that will trigger the creation of a new version are as follows:
-While new versions of these workflows are made as soon as you make the updates in the Azure portal, making a new version of a workflow using the API with Microsoft Graph requires running the workflow creation call again with the changes included. For a step by step guide for updating either tasks, or execution conditions, see: [Manage Workflow Versions](manage-workflow-tasks.md).
+While new versions of these workflows are made as soon as you make the updates in the Azure portal, creating a new version of a workflow using the API with Microsoft Graph requires running the createNewVersion method. For a step by step guide for updating either tasks, or execution conditions, see: [Manage Workflow Versions](manage-workflow-tasks.md).
> [!NOTE] > If the workflow is on-demand, the configure information associated with execution conditions will not be present.
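If you take the Microsoft Graph route for the createNewVersion call mentioned above, a hedged sketch with the Graph PowerShell SDK could look like this. The workflow ID is a placeholder, and the exact endpoint path, API version, and request body shape should be confirmed against the Microsoft Graph reference:

```powershell
# Sketch: call the createNewVersion action on an existing lifecycle workflow.
# The workflow ID is a placeholder; confirm endpoint path, API version, and body shape in the Graph reference.
Connect-MgGraph -Scopes 'LifecycleWorkflows.ReadWrite.All'

$workflowId = '<workflow ID>'
$body = @{
    workflow = @{
        displayName = 'Onboard pre-hire employee (updated)'   # example of a changed property
    }
}

Invoke-MgGraphRequest -Method POST `
    -Uri "https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/$workflowId/createNewVersion" `
    -Body $body
```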
active-directory Four Steps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/four-steps.md
Security logs and reports provide you with an electronic record of suspicious ac
### Assign least privileged admin roles for operations
-As you think about your approach to operations, there are a couple levels of administration to consider. The first level places the burden of administration on your global administrator(s). Always using the global administrator role, might be appropriate for smaller companies. But for larger organizations with help desk personnel and administrators responsible for specific tasks, assigning the role of global administrator can be a security risk since it provides those individuals with the ability to manage tasks that are above and beyond what they should be capable of doing.
+As you think about your approach to operations, there are a couple of levels of administration to consider. The first level places the burden of administration on your Hybrid Identity Administrator(s). Always using the Hybrid Identity Administrator role might be appropriate for smaller companies. But for larger organizations with help desk personnel and administrators responsible for specific tasks, assigning the role of Hybrid Identity Administrator can be a security risk since it provides those individuals with the ability to manage tasks that are above and beyond what they should be capable of doing.
In this case, you should consider the next level of administration. Using Azure AD, you can designate end users as "limited administrators" who can manage tasks in less-privileged roles. For example, you might assign your help desk personnel the [security reader](../roles/permissions-reference.md#security-reader) role to provide them with the ability to manage security-related features with read-only access. Or perhaps it makes sense to assign the [authentication administrator](../roles/permissions-reference.md#authentication-administrator) role to individuals to give them the ability to reset non-password credentials or read and configure Azure Service Health.
active-directory How To Bypassdirsyncoverrides https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-bypassdirsyncoverrides.md
Clear-ADSyncToolsDirSyncOverridesUser 'User1@Contoso.com' -MobilePhoneInAAD -Alt
## Next Steps
-Learn more about [Azure AD Connect: ADSyncTools PowerShell Module](reference-connect-adsynctools.md)
+Learn more about [Azure AD Connect: ADSyncTools PowerShell Module](reference-connect-adsynctools.md)
active-directory How To Connect Create Custom Sync Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-create-custom-sync-rule.md
documentationcenter: '' na
active-directory How To Connect Device Writeback https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-device-writeback.md
documentationcenter: '' ms.assetid: c0ff679c-7ed5-4d6e-ac6c-b2b6392e7892
active-directory How To Connect Emergency Ad Fs Certificate Rotation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-emergency-ad-fs-certificate-rotation.md
Now that you have added the first certificate and made it primary and removed th
## Update Azure AD with the new token-signing certificate Open the Microsoft Azure Active Directory Module for Windows PowerShell. Alternatively, open Windows PowerShell and then run the command `Import-Module msonline`
-Connect to Azure AD by running the following command: `Connect-MsolService`, and then, enter your global administrator credentials.
+Connect to Azure AD by running the following command: `Connect-MsolService`, and then, enter your Hybrid Identity Administrator credentials.
>[!Note] > If you are running these commands on a computer that is not the primary federation server, enter the following command first: `Set-MsolADFSContext -Computer <servername>`. Replace \<servername\> with the name of the AD FS server. Then enter the administrator credentials for the AD FS server when prompted.
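Putting the commands above together, a minimal sequence might look like the following; the server name is a placeholder, and the federation-context command is only needed when you run this from a machine other than the primary federation server:

```powershell
# Sketch: connect to Azure AD with the MSOnline module before updating federation settings.
Import-Module MSOnline
Connect-MsolService                                   # prompts for Hybrid Identity Administrator credentials

# Only needed when this isn't run on the primary federation server:
Set-MsolADFSContext -Computer '<AD FS server name>'

# The federation update itself typically follows, for example:
# Update-MsolFederatedDomain -DomainName '<federated domain name>'
```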
active-directory How To Connect Fed Compatibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-fed-compatibility.md
documentationcenter: '' ms.assetid: 22c8693e-8915-446d-b383-27e9587988ec
active-directory How To Connect Fed Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-fed-management.md
You can use Azure AD Connect to check the current health of the AD FS and Azure
1. Select **Repair AAD and ADFS Trust** from the list of additional tasks. ![Repair AAD and ADFS Trust](./media/how-to-connect-fed-management/RepairADTrust1.PNG)
-2. On the **Connect to Azure AD** page, provide your global administrator credentials for Azure AD, and click **Next**.
+2. On the **Connect to Azure AD** page, provide your Hybrid Identity Administrator credentials for Azure AD, and click **Next**.
![Screenshot that shows the "Connect to Azure AD" page with example credentials entered.](./media/how-to-connect-fed-management/RepairADTrust2.PNG) 3. On the **Remote access credentials** page, enter the credentials for the domain administrator.
Configuring alternate login ID for AD FS consists of two main steps:
![Additional federation server](./media/how-to-connect-fed-management/AddNewADFSServer1.PNG)
-2. On the **Connect to Azure AD** page, enter your global administrator credentials for Azure AD, and click **Next**.
+2. On the **Connect to Azure AD** page, enter your Hybrid Identity Administrator credentials for Azure AD, and click **Next**.
![Screenshot that shows the "Connect to Azure AD" page with sample credentials entered.](./media/how-to-connect-fed-management/AddNewADFSServer2.PNG)
Configuring alternate login ID for AD FS consists of two main steps:
![Deploy Web Application Proxy](./media/how-to-connect-fed-management/WapServer1.PNG)
-2. Provide the Azure global administrator credentials.
+2. Provide the Azure Hybrid Identity Administrator credentials.
![Screenshot that shows the "Connect to Azure AD" page with an example username and password entered.](./media/how-to-connect-fed-management/wapserver2.PNG)
active-directory How To Connect Fed O365 Certs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-fed-o365-certs.md
documentationcenter: '' ms.assetid: 543b7dc1-ccc9-407f-85a1-a9944c0ba1be
active-directory How To Connect Fix Default Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-fix-default-rules.md
description: Learn how to fix modified default rules that come with Azure AD Con
active-directory How To Connect Health Adds https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-health-adds.md
documentationcenter: '' ms.assetid: 19e3cf15-f150-46a3-a10c-2990702cd700
active-directory How To Connect Health Adfs Risky Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-health-adfs-risky-ip.md
# Risky IP report (public preview)
-AD FS customers may expose password authentication endpoints to the internet to provide authentication services for end users to access SaaS applications such as Microsoft 365. In this case, it is possible for a bad actor to attempt logins against your AD FS system to guess an end user's password and get access to application resources. AD FS provides the extranet account lockout functionality to prevent these types of attacks since AD FS in Windows Server 2012 R2. If you are on a lower version, we strongly recommend that you upgrade your AD FS system to Windows Server 2016. <br />
+AD FS customers may expose password authentication endpoints to the internet to provide authentication services for end users to access SaaS applications such as Microsoft 365. It is possible for a bad actor to attempt logins against your AD FS system to guess an end user's password and get access to application resources. Since Windows Server 2012 R2, AD FS has provided extranet account lockout functionality to prevent these types of attacks. If you are on a lower version, we strongly recommend that you upgrade your AD FS system to Windows Server 2016. <br />
-Additionally, it is possible for a single IP address to attempt multiple logins against multiple users. In these cases, the number of attempts per user may be under the threshold for account lockout protection in AD FS. Azure AD Connect Health now provides the "Risky IP report" that detects this condition and notifies administrators when this occurs. The following are the key benefits for this report:
+Additionally, it is possible for a single IP address to attempt multiple logins against multiple users. In these cases, the number of attempts per user may be under the threshold for account lockout protection in AD FS. Azure AD Connect Health now provides the "Risky IP report" that detects this condition and notifies administrators. The following are the key benefits for this report:
- Detection of IP addresses that exceed a threshold of failed password-based logins - Supports failed logins due to bad password or due to extranet lockout state-- Email notification to alert administrators as soon as this occurs with customizable email settings
+- Email notification to alert administrators with customizable email settings
- Customizable threshold settings that match with the security policy of an organization - Downloadable reports for offline analysis and integration with other systems via automation
Additionally, it is possible for a single IP address to attempt multiple logins
> ## What is in the report?
-The failed sign in activity client IP addresses are aggregated through Web Application Proxy servers. Each item in the Risky IP report shows aggregated information about failed AD FS sign-in activities which exceed designated threshold. It provides the following information:
+The client IP addresses of failed sign-in activities are aggregated through Web Application Proxy servers. Each item in the Risky IP report shows aggregated information about failed AD FS sign-in activities that have exceeded the designated threshold. It provides the following information:
![Screenshot that shows a Risky IP report with column headers highlighted.](./media/how-to-connect-health-adfs/report4a.png)

| Report Item | Description |
| - | -- |
| Time Stamp | Shows the time stamp based on Azure portal local time when the detection time window starts.<br /> All daily events are generated at midnight UTC. <br />Hourly events have the timestamp rounded to the beginning of the hour. You can find the first activity start time from "firstAuditTimestamp" in the exported file. |
-| Trigger Type | Shows the type of detection time window. The aggregation trigger types are per hour or per day. This is helpful to detect versus a high frequency brute force attack versus a slow attack where the number of attempts is distributed throughout the day. |
-| IP Address | The single risky IP address that had either bad password or extranet lockout sign-in activities. This could be an IPv4 or an IPv6 address. |
+| Trigger Type | Shows the type of detection time window. The aggregation trigger types are per hour or per day. Helpful for distinguishing a high-frequency brute force attack from a slow attack where the number of attempts is distributed throughout the day. |
+| IP Address | The single risky IP address that had either bad password or extranet lockout sign-in activities. It can be either an IPv4 or an IPv6 address. |
| Bad Password Error Count | The count of Bad Password errors that occurred from the IP address during the detection time window. Bad Password errors can happen multiple times for certain users. Note that this does not include failed attempts due to expired passwords. |
-| Extranet Lock Out Error Count | The count of Extranet Lockout error occurred from the IP address during the detection time window. The Extranet Lockout errors can happen multiple times to certain users. This will only be seen if Extranet Lockout is configured in AD FS (versions 2012R2 or higher). <b>Note</b> We strongly recommend turning this feature on if you allow extranet logins using passwords. |
-| Unique Users Attempted | The count of unique user accounts attempted from the IP address during the detection time window. This provides a mechanism to differentiate a single user attack pattern versus multi-user attack pattern. |
+| Extranet Lock Out Error Count | The count of Extranet Lockout errors that occurred from the IP address during the detection time window. Extranet Lockout errors can happen multiple times for certain users. This will only be seen if Extranet Lockout is configured in AD FS (versions 2012 R2 or higher). <b>Note</b> We strongly recommend enabling this feature if you allow extranet logins using passwords. |
+| Unique Users Attempted | The count of unique user accounts attempted from the IP address during the detection time window. This differentiates a single-user attack pattern from a multi-user attack pattern. |
For example, the report item below indicates that, during the 6 PM to 7 PM window on 02/28/2018, IP address <i>104.2XX.2XX.9</i> had no bad password errors and 284 extranet lockout errors. 14 unique users were impacted within the criteria. The activity event exceeded the designated report hourly threshold.
For example, the below report item indicates from the 6pm to 7pm hour window on
![Screenshot that shows the Risky IP report with the "Download", "Notification Settings", and "Threshold Settings" highlighted.](./media/how-to-connect-health-adfs/report4c.png)

## Load balancer IP addresses in the list
-Load balancer aggregate failed sign-in activities and hit the alert threshold. If you are seeing load balancer IP addresses, it is highly likely that your external load balancer is not sending the client IP address when it passes the request to the Web Application Proxy server. Please configure your load balancer correctly to pass forward client IP address.
+Load balancers aggregate failed sign-in activities and hit the alert threshold. If you are seeing load balancer IP addresses, it is highly likely that your external load balancer is not sending the client IP address when it passes the request to the Web Application Proxy server. Configure your load balancer correctly to forward the client IP address.
## Download risky IP report

Using the **Download** functionality, the whole risky IP address list from the past 30 days can be exported from the Connect Health Portal.
Besides the highlighted aggregations in the portal, the export result also shows
## Configure notification settings

Admin contacts of the report can be updated through the **Notification Settings**. By default, the risky IP alert email notification is turned off. You can enable the notification by toggling the button under "Get email notifications for IP addresses exceeding failed activity threshold report".
-Like generic alert notification settings in Connect Health, it allows you to customize designated notification recipient list about risky IP report from here. You can also notify all global admins while making the change.
+Like generic alert notification settings in Connect Health, you can customize the list of designated notification recipients for the risky IP report from here. You can also notify all Hybrid Identity Administrators while making the change.
## Configure threshold settings

The alerting threshold can be updated through **Threshold Settings**. The system has default threshold values, which are given below. There are four categories in the risky IP report threshold settings:
Alerting threshold can be updated through Threshold Settings. To start with, sys
Private IP addresses (<i>10.x.x.x, 172.x.x.x & 192.168.x.x</i>) and Exchange IP addresses are filtered and marked as True in the IP approved list. If you are seeing private IP address ranges, it is highly likely that your external load balancer is not sending the client IP address when it passes the request to the Web Application Proxy server. **Why am I seeing load balancer IP addresses in the report?** <br />
-If you are seeing load balancer IP addresses, it is highly likely that your external load balancer is not sending the client IP address when it passes the request to the Web Application Proxy server. Please configure your load balancer correctly to pass forward client IP address.
+If you are seeing load balancer IP addresses, it is highly likely that your external load balancer is not sending the client IP address when it passes the request to the Web Application Proxy server. Configure your load balancer correctly to pass forward client IP address.
**What do I do to block the IP address?** <br /> You should add the identified malicious IP address to the firewall or block it in Exchange. <br />
You should add identified malicious IP address to the firewall or block in Excha
- Audits is not enabled in AD FS farms. **Why am I seeing no access to the report?** <br />
-Global Admin or [Security Reader](../../role-based-access-control/built-in-roles.md#security-reader) permission is required. Please contact your global admin to get access.
+Global Admin or [Security Reader](../../role-based-access-control/built-in-roles.md#security-reader) permission is required. Contact your global admin to get access.
## Next steps
active-directory How To Connect Health Adfs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-health-adfs.md
documentationcenter: ''
ms.assetid: dc0e53d8-403e-462a-9543-164eaa7dd8b3
active-directory How To Connect Health Agent Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-health-agent-install.md
documentationcenter: '' ms.assetid: 1cc8ae90-607d-4925-9c30-6770a4bd1b4e
The following table lists requirements for using Azure AD Connect Health.
| Requirement | Description |
| --- | --- |
| There is an Azure AD Premium (P1 or P2) Subscription. |Azure AD Connect Health is a feature of Azure AD Premium (P1 or P2). For more information, see [Sign up for Azure AD Premium](../fundamentals/active-directory-get-started-premium.md). <br /><br />To start a free 30-day trial, see [Start a trial](https://azure.microsoft.com/trial/get-started-active-directory/). |
-| You're a global administrator in Azure AD. |By default, only global administrators can install and configure the health agents, access the portal, and do any operations within Azure AD Connect Health. For more information, see [Administering your Azure AD directory](../fundamentals/active-directory-whatis.md). <br /><br /> By using Azure role-based access control (Azure RBAC), you can allow other users in your organization to access Azure AD Connect Health. For more information, see [Azure RBAC for Azure AD Connect Health](how-to-connect-health-operations.md#manage-access-with-azure-rbac). <br /><br />**Important**: Use a work or school account to install the agents. You can't use a Microsoft account. For more information, see [Sign up for Azure as an organization](../fundamentals/sign-up-organization.md). |
+| You're a Hybrid Identity Administrator in Azure AD. |By default, only Hybrid Identity Administrators or global administrators can install and configure the health agents, access the portal, and do any operations within Azure AD Connect Health. For more information, see [Administering your Azure AD directory](../fundamentals/active-directory-whatis.md). <br /><br /> By using Azure role-based access control (Azure RBAC), you can allow other users in your organization to access Azure AD Connect Health. For more information, see [Azure RBAC for Azure AD Connect Health](how-to-connect-health-operations.md#manage-access-with-azure-rbac). <br /><br />**Important**: Use a work or school account to install the agents. You can't use a Microsoft account. For more information, see [Sign up for Azure as an organization](../fundamentals/sign-up-organization.md). |
| The Azure AD Connect Health agent is installed on each targeted server. | Health agents must be installed and configured on targeted servers so that they can receive data and provide monitoring and analytics capabilities. <br /><br />For example, to get data from your Active Directory Federation Services (AD FS) infrastructure, you must install the agent on the AD FS server and the Web Application Proxy server. Similarly, to get data from your on-premises Azure AD Domain Services (Azure AD DS) infrastructure, you must install the agent on the domain controllers. |
| The Azure service endpoints have outbound connectivity. | During installation and runtime, the agent requires connectivity to Azure AD Connect Health service endpoints. If firewalls block outbound connectivity, add the [outbound connectivity endpoints](how-to-connect-health-agent-install.md#outbound-connectivity-to-the-azure-service-endpoints) to the allow list. |
|Outbound connectivity is based on IP addresses. | For information about firewall filtering based on IP addresses, see [Azure IP ranges](https://www.microsoft.com/download/details.aspx?id=56519).|
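As a quick, hedged spot check of the outbound connectivity requirement from a targeted server, you could test TCP 443 to each endpoint on your allow list; the endpoint below is a placeholder, so substitute entries from the linked outbound connectivity list:

```powershell
# Spot-check outbound HTTPS connectivity from a targeted server before installing the health agent.
# Replace the placeholder with endpoints from the Azure AD Connect Health allow list.
$endpoints = @('<connect-health-endpoint-from-allow-list>')
foreach ($endpoint in $endpoints) {
    Test-NetConnection -ComputerName $endpoint -Port 443 |
        Select-Object ComputerName, TcpTestSucceeded
}
```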
After the installation finishes, select **Configure Now**.
![Screenshot showing the confirmation message for the Azure AD Connect Health AD FS agent installation.](./media/how-to-connect-health-agent-install/install2.png)
-A PowerShell window opens to start the agent registration process. When you're prompted, sign in by using an Azure AD account that has permissions to register the agent. By default, the global admin account has permissions.
+A PowerShell window opens to start the agent registration process. When you're prompted, sign in by using an Azure AD account that has permissions to register the agent. By default, the Hybrid Identity Administrator account has permissions.
![Screenshot showing the sign-in window for Azure AD Connect Health AD FS.](./media/how-to-connect-health-agent-install/install3.png)
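If the registration window is closed before it finishes, registration can typically be re-run from PowerShell. The cmdlet below is the one commonly used for the AD FS agent and is given here as an assumption to verify against the module installed with your agent:

```powershell
# Assumption: manually re-run registration for the AD FS health agent if the window was closed early.
# Verify the cmdlet is present in the PowerShell module installed with your agent before relying on it.
Register-AzureADConnectHealthADFSAgent
```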
active-directory How To Connect Health Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-health-operations.md
You can configure the Azure AD Connect Health service to send email notification
2. Select **Sync errors**.
3. Select **Notification Settings**.
5. At the email notification switch, select **ON**.
-6. Select the check box if you want all global administrators to receive email notifications.
+6. Select the check box if you want all Hybrid Identity Administrators to receive email notifications.
7. If you want to receive email notifications at any other email addresses, specify them in the **Additional Email Recipients** box. To remove an email address from this list, right-click the entry and select **Delete**. 8. To finalize the changes, click **Save**. Changes take effect only after you save.
When you're deleting a service instance, be aware of the following:
[//]: # (Start of RBAC section) ## Manage access with Azure RBAC
-[Azure role-based access control (Azure RBAC)](../../role-based-access-control/role-assignments-portal.md) for Azure AD Connect Health provides access to users and groups other than global administrators. Azure RBAC assigns roles to the intended users and groups, and provides a mechanism to limit the global administrators within your directory.
+[Azure role-based access control (Azure RBAC)](../../role-based-access-control/role-assignments-portal.md) for Azure AD Connect Health provides access to users and groups other than Hybrid Identity Administrators. Azure RBAC assigns roles to the intended users and groups, and provides a mechanism to limit the Hybrid Identity Administrators within your directory.
### Roles Azure AD Connect Health supports the following built-in roles: | Role | Permissions | | | |
-| Owner |Owners can *manage access* (for example, assign a role to a user or group), *view all information* (for example, view alerts) from the portal, and *change settings* (for example, email notifications) within Azure AD Connect Health. <br>By default, Azure AD global administrators are assigned this role, and this cannot be changed. |
+| Owner |Owners can *manage access* (for example, assign a role to a user or group), *view all information* (for example, view alerts) from the portal, and *change settings* (for example, email notifications) within Azure AD Connect Health. <br>By default, Azure AD Hybrid Identity Administrators are assigned this role, and this cannot be changed. |
| Contributor |Contributors can *view all information* (for example, view alerts) from the portal, and *change settings* (for example, email notifications) within Azure AD Connect Health. | | Reader |Readers can *view all information* (for example, view alerts) from the portal within Azure AD Connect Health. |
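One hedged way to grant these roles from PowerShell rather than the portal is a standard Azure RBAC assignment scoped to the Connect Health service instance; the scope below is a placeholder resource ID and requires the Az PowerShell module:

```powershell
# Sketch: assign the Contributor role on an Azure AD Connect Health service instance.
# The scope is a placeholder resource ID; requires the Az.Resources module and a signed-in session.
Connect-AzAccount

New-AzRoleAssignment -SignInName 'helpdesk.user@contoso.com' `
    -RoleDefinitionName 'Contributor' `
    -Scope '<resource ID of the Azure AD Connect Health service instance>'
```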
active-directory How To Connect Install Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-install-custom.md
After installing the required components, select your users' single sign-on meth
|Enable single sign-on|This option is available with both password hash sync and pass-through authentication. It provides a single sign-on experience for desktop users on corporate networks. For more information, see [Single sign-on](how-to-connect-sso.md). </br></br>**Note:** For AD FS customers, this option is unavailable. AD FS already offers the same level of single sign-on.</br> ### Connect to Azure AD
-On the **Connect to Azure AD** page, enter a global admin account and password. If you selected **Federation with AD FS** on the previous page, don't sign in with an account that's in a domain you plan to enable for federation.
+On the **Connect to Azure AD** page, enter a Hybrid Identity Administrator account and password. If you selected **Federation with AD FS** on the previous page, don't sign in with an account that's in a domain you plan to enable for federation.
You might want to use an account in the default *onmicrosoft.com* domain, which comes with your Azure AD tenant. This account is used only to create a service account in Azure AD. It's not used after the installation finishes.
active-directory How To Connect Install Existing Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-install-existing-database.md
Important notes to take note of before you proceed:
1. On the **Install required components** screen, the **Use an existing SQL Server** option is enabled. Specify the name of the SQL server that is hosting the ADSync database. If the SQL engine instance used to host the ADSync database is not the default instance on the SQL server, you must specify the SQL engine instance name. Further, if SQL browsing is not enabled, you must also specify the SQL engine instance port number. For example: ![Screenshot that shows the "Install required components" page.](./media/how-to-connect-install-existing-database/db4.png)
-1. On the **Connect to Azure AD** screen, you must provide the credentials of a global admin of your Azure AD directory. The recommendation is to use an account in the default onmicrosoft.com domain. This account is only used to create a service account in Azure AD and is not used after the wizard has completed.
+1. On the **Connect to Azure AD** screen, you must provide the credentials of a Hybrid Identity Administrator of your Azure AD directory. The recommendation is to use an account in the default onmicrosoft.com domain. This account is only used to create a service account in Azure AD and is not used after the wizard has completed.
![Connect](./media/how-to-connect-install-existing-database/db5.png) 1. On the **Connect your directories** screen, the existing AD forest configured for directory synchronization is listed with a red cross icon beside it. To synchronize changes from an on-premises AD forest, an AD DS account is required. The Azure AD Connect wizard is unable to retrieve the credentials of the AD DS account stored in the ADSync database because the credentials are encrypted and can only be decrypted by the previous Azure AD Connect server. Click **Change Credentials** to specify the AD DS account for the AD forest.
active-directory How To Connect Install Express https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-install-express.md
description: Learn how to download, install and run the setup wizard for Azure A
ms.assetid: b6ce45fd-554d-4f4d-95d1-47996d561c9f
If express settings does not match your topology, see [related documentation](#r
3. On the Welcome screen, select the box agreeing to the licensing terms and click **Continue**. 4. On the Express settings screen, click **Use express settings**. ![Welcome to Azure AD Connect](./media/how-to-connect-install-express/express.png)
-5. On the Connect to Azure AD screen, enter the username and password of a global administrator for your Azure AD. Click **Next**.
+5. On the Connect to Azure AD screen, enter the username and password of a Hybrid Identity Administrator for your Azure AD. Click **Next**.
![Connect to Azure AD](./media/how-to-connect-install-express/connectaad.png) If you receive an error and have problems with connectivity, then see [Troubleshoot connectivity problems](tshoot-connect-connectivity.md). 6. On the Connect to AD DS screen, enter the username and password for an enterprise admin account. You can enter the domain part in either NetBios or FQDN format, that is, FABRIKAM\administrator or fabrikam.com\administrator. Click **Next**.
active-directory How To Connect Install Move Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-install-move-db.md
Use the following steps to move the Azure AD Connect database to a remote SQL Se
12. On the **Install required components** screen, the **Use an existing SQL Server** option is enabled. Specify the name of the SQL server that is hosting the ADSync database. If the SQL engine instance used to host the ADSync database is not the default instance on the SQL server, you must specify the SQL engine instance name. Further, if SQL browsing is not enabled, you must also specify the SQL engine instance port number. For example: ![Screenshot that shows the "Install required components" page.](./media/how-to-connect-install-move-db/db4.png)
-13. On the **Connect to Azure AD** screen, you must provide the credentials of a global admin of your Azure AD directory. The recommendation is to use an account in the default onmicrosoft.com domain. This account is only used to create a service account in Azure AD and is not used after the wizard has completed.
+13. On the **Connect to Azure AD** screen, you must provide the credentials of a Hybrid Identity Administrator of your Azure AD directory. The recommendation is to use an account in the default onmicrosoft.com domain. This account is only used to create a service account in Azure AD and is not used after the wizard has completed.
![Connect](./media/how-to-connect-install-move-db/db5.png) 14. On the **Connect your directories** screen, the existing AD forest configured for directory synchronization is listed with a red cross icon beside it. To synchronize changes from an on-premises AD forest, an AD DS account is required. The Azure AD Connect wizard is unable to retrieve the credentials of the AD DS account stored in the ADSync database because the credentials are encrypted and can only be decrypted by the previous Azure AD Connect server. Click **Change Credentials** to specify the AD DS account for the AD forest.
active-directory How To Connect Install Multiple Domains https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-install-multiple-domains.md
documentationcenter: '' ms.assetid: 5595fb2f-2131-4304-8a31-c52559128ea4
The following documentation provides guidance on how to use multiple top-level domains and subdomains when federating with Microsoft 365 or Azure AD domains. ## Multiple top-level domain support
-Federating multiple, top-level domains with Azure AD requires some additional configuration that is not required when federating with one top-level domain.
+Federating multiple, top-level domains with Azure AD requires some extra configuration that is not required when federating with one top-level domain.
When a domain is federated with Azure AD, several properties are set on the domain in Azure. One important one is IssuerUri. This property is a URI that is used by Azure AD to identify the domain that the token is associated with. The URI doesn't need to resolve to anything but it must be a valid URI. By default, Azure AD sets the URI to the value of the federation service identifier in your on-premises AD FS configuration.
This parameter makes Azure AD configure the IssuerUri so that it is based on the
![Screenshot that shows a successful completion of the PowerShell command.](./media/how-to-connect-install-multiple-domains/convert.png)
-Looking at the settings for the bmfabrikam.com domain you can see the following:
+Looking at the screenshot for the bmfabrikam.com domain you can see the following settings:
![Screenshot that shows the settings for the "bmfabrikam.com" domain.](./media/how-to-connect-install-multiple-domains/settings.png)
Thus during authentication to Azure AD or Microsoft 365, the IssuerUri element i
For example, if a user's UPN is bsimon@bmcontoso.com, the IssuerUri element in the token, AD FS issuer, will be set to `http://bmcontoso.com/adfs/services/trust`. This element will match the Azure AD configuration, and authentication will succeed.
-The following is the customized claim rule that implements this logic:
+The following customized claim rule implements this logic:
``` c:[Type == "http://schemas.xmlsoap.org/claims/UPN"] => issue(Type = "http://schemas.microsoft.com/ws/2008/06/identity/claims/issuerid", Value = regexreplace(c.Value, ".+@(?<domain>.+)", "http://${domain}/adfs/services/trust/"));
Use the following steps to remove the Microsoft Online trust and update your ori
2. On the left, expand **Trust Relationships** and **Relying Party Trusts** 3. On the right, delete the **Microsoft Office 365 Identity Platform** entry. ![Remove Microsoft Online](./media/how-to-connect-install-multiple-domains/trust4.png)
-4. On a machine that has [Azure Active Directory Module for Windows PowerShell](/previous-versions/azure/jj151815(v=azure.100)) installed on it run the following: `$cred=Get-Credential`.
-5. Enter the username and password of a global administrator for the Azure AD domain you are federating with.
+4. On a machine that has [Azure Active Directory Module for Windows PowerShell](/previous-versions/azure/jj151815(v=azure.100)) installed on it run the following PowerShell: `$cred=Get-Credential`.
+5. Enter the username and password of a Hybrid Identity Administrator for the Azure AD domain you are federating with.
6. In PowerShell, enter `Connect-MsolService -Credential $cred`
7. In PowerShell, enter `Update-MsolFederatedDomain -DomainName <Federated Domain Name> -SupportMultipleDomain`. This update is for the original domain. So using the above domains it would be: `Update-MsolFederatedDomain -DomainName bmcontoso.com -SupportMultipleDomain`

Use the following steps to add the new top-level domain using PowerShell
-1. On a machine that has [Azure Active Directory Module for Windows PowerShell](/previous-versions/azure/jj151815(v=azure.100)) installed on it run the following: `$cred=Get-Credential`.
-2. Enter the username and password of a global administrator for the Azure AD domain you are federating with
+1. On a machine that has [Azure Active Directory Module for Windows PowerShell](/previous-versions/azure/jj151815(v=azure.100)) installed on it run the following PowerShell: `$cred=Get-Credential`.
+2. Enter the username and password of a Hybrid Identity Administrator for the Azure AD domain you are federating with
3. In PowerShell, enter `Connect-MsolService -Credential $cred`
4. In PowerShell, enter `New-MsolFederatedDomain -SupportMultipleDomain -DomainName`
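Taken together, the two procedures above boil down to a short sequence; the domain names follow the bmcontoso.com and bmfabrikam.com examples used in this article:

```powershell
# Sketch: update the originally federated domain for multi-domain support, then add the new top-level domain.
$cred = Get-Credential                 # Hybrid Identity Administrator for the Azure AD tenant
Connect-MsolService -Credential $cred

# Update the original federated domain so it supports multiple top-level domains.
Update-MsolFederatedDomain -DomainName 'bmcontoso.com' -SupportMultipleDomain

# Add and federate the new top-level domain.
New-MsolFederatedDomain -DomainName 'bmfabrikam.com' -SupportMultipleDomain
```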
So lets say, for example, that I have bmcontoso.com and then add corp.bmcontoso.
### How To enable support for subdomains

In order to work around this behavior, the AD FS relying party trust for Microsoft Online needs to be updated. To do this, you must configure a custom claim rule so that it strips off any subdomains from the user's UPN suffix when constructing the custom Issuer value.
-The following claim will do this:
+Use the following claim:
```
c:[Type == "http://schemas.xmlsoap.org/claims/UPN"] => issue(Type = "http://schemas.microsoft.com/ws/2008/06/identity/claims/issuerid", Value = regexreplace(c.Value, "^.*@([^.]+\.)*?(?<domain>([^.]+\.?){2})$", "http://${domain}/adfs/services/trust/"));
```
[!NOTE]
-The last number in the regular expression set is how many parent domains there are in your root domain. Here bmcontoso.com is used, so two parent domains are necessary. If three parent domains were to be kept (i.e.: corp.bmcontoso.com), then the number would have been three. Eventually a range can be indicated, the match will always be made to match the maximum of domains. "{2,3}" will match two to three domains (i.e.: bmfabrikam.com and corp.bmcontoso.com).
+The last number in the regular expression set is how many parent domains there are in your root domain. Here bmcontoso.com is used, so two parent domains are necessary. If three parent domains were to be kept (that is, corp.bmcontoso.com), then the number would have been three. A range can also be indicated; the match will always be made against the maximum number of domains. "{2,3}" will match two to three domains (that is, bmfabrikam.com and corp.bmcontoso.com).
Use the following steps to add a custom claim to support subdomains.
active-directory How To Connect Install Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-install-prerequisites.md
To read more about securing your Active Directory environment, see [Best practic
- You must configure TLS/SSL certificates. For more information, see [Managing SSL/TLS protocols and cipher suites for AD FS](/windows-server/identity/ad-fs/operations/manage-ssl-protocols-in-ad-fs) and [Managing SSL certificates in AD FS](/windows-server/identity/ad-fs/operations/manage-ssl-certificates-ad-fs-wap). - You must configure name resolution. - It is not supported to break and analyze traffic between Azure AD Connect and Azure AD. Doing so may disrupt the service.-- If your global administrators have MFA enabled, the URL https://secure.aadcdn.microsoftonline-p.com *must* be in the trusted sites list. You're prompted to add this site to the trusted sites list when you're prompted for an MFA challenge and it hasn't been added before. You can use Internet Explorer to add it to your trusted sites.
+- If your Hybrid Identity Administrators have MFA enabled, the URL https://secure.aadcdn.microsoftonline-p.com *must* be in the trusted sites list. You're prompted to add this site to the trusted sites list when you're prompted for an MFA challenge and it hasn't been added before. You can use Internet Explorer to add it to your trusted sites.
- If you plan to use Azure AD Connect Health for syncing, ensure that the prerequisites for Azure AD Connect Health are also met. For more information, see [Azure AD Connect Health agent installation](how-to-connect-health-agent-install.md). ### Harden your Azure AD Connect server
We recommend that you harden your Azure AD Connect server to decrease the securi
* You can have only one sync engine per SQL instance. Sharing a SQL instance with FIM/MIM Sync, DirSync, or Azure AD Sync *isn't supported*. ### Accounts
-* You must have an Azure AD Global Administrator account for the Azure AD tenant you want to integrate with. This account must be a *school or organization account* and can't be a *Microsoft account*.
+* You must have an Azure AD Global Administrator account or Hybrid Identity Administrator account for the Azure AD tenant you want to integrate with. This account must be a *school or organization account* and can't be a *Microsoft account*.
* If you use [express settings](reference-connect-accounts-permissions.md#express-settings-installation) or upgrade from DirSync, you must have an Enterprise Administrator account for your on-premises Active Directory. * If you use the custom settings installation path, you have more options. For more information, see [Custom installation settings](reference-connect-accounts-permissions.md#custom-installation-settings).
active-directory How To Connect Post Installation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-post-installation.md
documentationcenter: '' ms.assetid: c18bee36-aebf-4281-b8fc-3fe14116f1a5
active-directory How To Connect Pta Disable Do Not Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-pta-disable-do-not-configure.md
In this article, you learn how to disable pass-through authentication by using A
## Prerequisites
-Before you begin, ensure that you have the following:
+Before you begin, ensure that you have the following prerequisite.
- A Windows machine with pass-through authentication agent version 1.5.1742.0 or later installed. Any earlier version might not have the requisite cmdlets for completing this operation.
- If you don't already have an agent, you can install it by doing the following:
+ If you don't already have an agent, you can install it.
1. Go to the [Azure portal](https://portal.azure.com). 1. Download the latest Auth Agent.
- 1. Install the feature by running either of the following:
+ 1. Install the feature by running either of the following commands.
* `.\AADConnectAuthAgentSetup.exe` * `.\AADConnectAuthAgentSetup.exe ENVIRONMENTNAME=<identifier>` > [!IMPORTANT]
Before you begin, ensure that you have the following:
>| - | - | >| AzureUSGovernment | US Gov | -- An Azure global administrator account for running the PowerShell cmdlets.
+- An Azure Hybrid Identity Administrator account for running the PowerShell cmdlets.
## Use Azure AD Connect
active-directory How To Connect Pta Quick Start https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-pta-quick-start.md
Ensure that the following prerequisites are in place.
### In the Azure Active Directory admin center
-1. Create a cloud-only global administrator account or a Hybrid Identity administrator account on your Azure AD tenant. This way, you can manage the configuration of your tenant should your on-premises services fail or become unavailable. Learn about [adding a cloud-only global administrator account](../fundamentals/add-users-azure-active-directory.md). Completing this step is critical to ensure that you don't get locked out of your tenant.
+1. Create a cloud-only Hybrid Identity Administrator account on your Azure AD tenant. This way, you can manage the configuration of your tenant should your on-premises services fail or become unavailable. Learn about [adding a cloud-only Hybrid Identity Administrator account](../fundamentals/add-users-azure-active-directory.md). Completing this step is critical to ensure that you don't get locked out of your tenant.
2. Add one or more [custom domain names](../fundamentals/add-custom-domain.md) to your Azure AD tenant. Your users can sign in with one of these domain names. ### In your on-premises environment
If you have already installed Azure AD Connect by using the [express installatio
Follow these instructions to verify that you have enabled Pass-through Authentication correctly:
-1. Sign in to the [Azure Active Directory admin center](https://aad.portal.azure.com) with the global administrator credentials for your tenant.
+1. Sign in to the [Azure Active Directory admin center](https://aad.portal.azure.com) with the Hybrid Identity Administrator credentials for your tenant.
2. Select **Azure Active Directory** in the left pane. 3. Select **Azure AD Connect**. 4. Verify that the **Pass-through authentication** feature appears as **Enabled**.
For most customers, three Authentication Agents in total are sufficient for high
To begin, follow these instructions to download the Authentication Agent software:
-1. To download the latest version of the Authentication Agent (version 1.5.193.0 or later), sign in to the [Azure Active Directory admin center](https://aad.portal.azure.com) with your tenant's global administrator credentials.
+1. To download the latest version of the Authentication Agent (version 1.5.193.0 or later), sign in to the [Azure Active Directory admin center](https://aad.portal.azure.com) with your tenant's Hybrid Identity Administrator credentials.
2. Select **Azure Active Directory** in the left pane. 3. Select **Azure AD Connect**, select **Pass-through authentication**, and then select **Download Agent**. 4. Select the **Accept terms & download** button.
active-directory How To Connect Pta Security Deep Dive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-pta-security-deep-dive.md
The following sections discuss these phases in detail.
### Authentication Agent installation
-Only global administrators or Hybrid Identity administrators can install an Authentication Agent (by using Azure AD Connect or standalone) on an on-premises server. Installation adds two new entries to the **Control Panel** > **Programs** > **Programs and Features** list:
+Only Hybrid Identity Administrators can install an Authentication Agent (by using Azure AD Connect or standalone) on an on-premises server. Installation adds two new entries to the **Control Panel** > **Programs** > **Programs and Features** list:
- The Authentication Agent application itself. This application runs with [NetworkService](/windows/win32/services/networkservice-account) privileges. - The Updater application that's used to auto-update the Authentication Agent. This application runs with [LocalSystem](/windows/win32/services/localsystem-account) privileges.
The Authentication Agents use the following steps to register themselves with Az
![Agent registration](./media/how-to-connect-pta-security-deep-dive/pta1.png)
-1. Azure AD first requests that a global administrator or hybrid identity administrator sign in to Azure AD with their credentials. During sign-in, the Authentication Agent acquires an access token that it can use on behalf of the global administrator or hybrid identity administrator.
+1. Azure AD first requests that a Hybrid Identity Administrator sign in to Azure AD with their credentials. During sign-in, the Authentication Agent acquires an access token that it can use on behalf of that administrator.
2. The Authentication Agent then generates a key pair: a public key and a private key (a minimal sketch of these artifacts follows this list). - The key pair is generated through standard RSA 2048-bit encryption. - The private key stays on the on-premises server where the Authentication Agent resides.
The Authentication Agents use the following steps to register themselves with Az
- The access token acquired in step 1. - The public key generated in step 2. - A Certificate Signing Request (CSR or Certificate Request). This request applies for a digital identity certificate, with Azure AD as its certificate authority (CA).
-4. Azure AD validates the access token in the registration request and verifies that the request came from a global administrator or hybrid identity administrator.
+4. Azure AD validates the access token in the registration request and verifies that the request came from a Hybrid Identity Administrator.
5. Azure AD then signs and sends a digital identity certificate back to the Authentication Agent. - The root CA in Azure AD is used to sign the certificate.
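The following PowerShell sketch illustrates the two artifacts from steps 2 and 3, an RSA 2048-bit key pair and a certificate signing request. It is an illustration only, not the Authentication Agent's actual code, and the subject name is a made-up placeholder:

```powershell
# Illustration only (requires .NET Framework 4.7.2+ or PowerShell 7): generate an RSA
# 2048-bit key pair and a certificate signing request (CSR) like the ones described above.
$rsa = [System.Security.Cryptography.RSACryptoServiceProvider]::new(2048)

$req = [System.Security.Cryptography.X509Certificates.CertificateRequest]::new(
    "CN=pta-agent.contoso.com",                                      # placeholder subject
    $rsa,
    [System.Security.Cryptography.HashAlgorithmName]::SHA256,
    [System.Security.Cryptography.RSASignaturePadding]::Pkcs1)

# The DER-encoded CSR is what would be submitted to the certificate authority (step 3).
[Convert]::ToBase64String($req.CreateSigningRequest())
```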
active-directory How To Connect Sso Quick Start https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-sso-quick-start.md
After completion of the wizard, Seamless SSO is enabled on your tenant.
Follow these instructions to verify that you have enabled Seamless SSO correctly:
-1. Sign in to the [Azure Active Directory administrative center](https://aad.portal.azure.com) with the global administrator or hybrid identity administrator credentials for your tenant.
+1. Sign in to the [Azure Active Directory administrative center](https://aad.portal.azure.com) with the Hybrid Identity Administrator credentials for your tenant.
2. Select **Azure Active Directory** in the left pane. 3. Select **Azure AD Connect**. 4. Verify that the **Seamless single sign-on** feature appears as **Enabled**.
active-directory How To Connect Staged Rollout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-staged-rollout.md
For an overview of the feature, view this "Azure Active Directory: What is Stage
- If you plan to use Azure AD Multi-Factor Authentication, we recommend that you use [combined registration for self-service password reset (SSPR) and Multi-Factor Authentication](../authentication/concept-registration-mfa-sspr-combined.md) to have your users register their authentication methods once. Note- when using SSPR to reset password or change password using MyProfile page while in Staged Rollout, Azure AD Connect needs to sync the new password hash which can take up to 2 minutes after reset. -- To use the Staged Rollout feature, you need to be a global administrator on your tenant.
+- To use the Staged Rollout feature, you need to be a Hybrid Identity Administrator on your tenant.
- To enable *seamless SSO* on a specific Active Directory forest, you need to be a domain administrator.
Enable *seamless SSO* by doing the following:
`Import-Module .\AzureADSSO.psd1`
-4. Run PowerShell as an administrator. In PowerShell, call `New-AzureADSSOAuthenticationContext`. This command opens a pane where you can enter your tenant's global administrator credentials.
+4. Run PowerShell as an administrator. In PowerShell, call `New-AzureADSSOAuthenticationContext`. This command opens a pane where you can enter your tenant's Hybrid Identity Administrator credentials.
5. Call `Get-AzureADSSOStatus | ConvertFrom-Json`. This command displays a list of Active Directory forests (see the "Domains" list) on which this feature has been enabled. By default, it is set to false at the tenant level.
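Taken together, the commands referenced in these steps look roughly like the following sketch. The module path and the `Enable-AzureADSSOForest` step are assumptions included to show where forest-level enablement fits; adjust them to your environment:

```powershell
# Minimal sketch, typically run from an elevated PowerShell session on the Azure AD
# Connect server (the module path is an assumption; adjust to your installation).
Set-Location "$env:ProgramFiles\Microsoft Azure Active Directory Connect"
Import-Module .\AzureADSSO.psd1

# Opens a sign-in prompt; use an account with the role described above.
New-AzureADSSOAuthenticationContext

# Lists the on-premises forests and whether seamless SSO is enabled for each.
Get-AzureADSSOStatus | ConvertFrom-Json

# Enabling SSO for a specific forest needs domain administrator credentials for that
# forest; Enable-AzureADSSOForest is assumed here as the cmdlet that performs this step.
$creds = Get-Credential   # for example, CONTOSO\Administrator
Enable-AzureADSSOForest -OnPremCredentials $creds
```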
active-directory How To Dirsync Upgrade Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-dirsync-upgrade-get-started.md
The passwords used by DirSync for the service accounts cannot be retrieved and a
### High-level steps for upgrading from DirSync to Azure AD Connect 1. Welcome to Azure AD Connect 2. Analysis of current DirSync configuration
-3. Collect Azure AD global admin password
+3. Collect Azure AD Hybrid Identity Administrator password
4. Collect credentials for an enterprise admin account (only used during the installation of Azure AD Connect) 5. Installation of Azure AD Connect * Uninstall DirSync (or temporarily disable it)
active-directory Migrate From Federation To Cloud Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/migrate-from-federation-to-cloud-authentication.md
Before you begin your migration, ensure that you meet these prerequisites.
### Required roles
-For staged rollout, you need to be a global administrator on your tenant.
+For staged rollout, you need to be a Hybrid Identity Administrator on your tenant.
To enable seamless SSO on a specific Windows Active Directory Forest, you need to be a domain administrator.
active-directory Plan Connect User Signin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/plan-connect-user-signin.md
documentationcenter: '' ms.assetid: 547b118e-7282-4c7f-be87-c035561001df
active-directory Plan Hybrid Identity Design Considerations Data Protection Strategy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/plan-hybrid-identity-design-considerations-data-protection-strategy.md
Since the options for incident response use a multilayer approach, comparison be
[Determine hybrid identity management tasks](plan-hybrid-identity-design-considerations-hybrid-id-management-tasks.md) ## See Also
-[Design considerations overview](plan-hybrid-identity-design-considerations-overview.md)
+[Design considerations overview](plan-hybrid-identity-design-considerations-overview.md)
active-directory Plan Hybrid Identity Design Considerations Identity Adoption Strategy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/plan-hybrid-identity-design-considerations-identity-adoption-strategy.md
In this task, you define the hybrid identity adoption strategy for your hybrid i
* [Determine multi-factor authentication requirements](plan-hybrid-identity-design-considerations-multifactor-auth-requirements.md) ## Define business needs strategy
-The first task addresses determining the organizations business needs. This can be very broad and scope creep can occur if you are not careful. In the beginning, keep it simple but always remember to plan for a design that will accommodate and facilitate change in the future. Regardless of whether it is a simple design or an extremely complex one, Azure Active Directory is the Microsoft Identity platform that supports Microsoft 365, Microsoft Online Services, and cloud aware applications.
+The first task addresses determining the organization's business needs. This task can be broad, and scope creep can occur if you are not careful. In the beginning, keep it simple, but always remember to plan for a design that will accommodate and facilitate change in the future. Regardless of whether it is a simple design or a complex one, Azure Active Directory is the Microsoft Identity platform that supports Microsoft 365, Microsoft Online Services, and cloud-aware applications.
## Define an integration strategy
-Microsoft has three main integration scenarios which are cloud identities, synchronized identities, and federated identities. You should plan on adopting one of these integration strategies. The strategy you choose can vary and the decisions in choosing one may include, what type of user experience you want to provide, do you have an existing infrastructure, and what is the most cost effective.
+Microsoft has three main integration scenarios: cloud identities, synchronized identities, and federated identities. You should plan on adopting one of these integration strategies. The strategy you choose can vary; factors include the type of user experience you want to provide, whether you have an existing infrastructure, and which option is the most cost-effective.
![integration scenarios](./media/plan-hybrid-identity-design-considerations/integration-scenarios.png) The scenarios defined in the above figure are:
-* **Cloud identities**: these are identities that exist solely in the cloud. In the case of Azure AD, they would reside specifically in your Azure AD directory.
-* **Synchronized**: these are identities that exist on-premises and in the cloud. Using Azure AD Connect, these users are either created or joined with existing Azure AD accounts. The userΓÇÖs password hash is synchronized from the on-premises environment to the cloud in what is called a password hash. When using synchronized the one caveat is that if a user is disabled in the on-premises environment, it can take up to three hours for that account status to show up in Azure AD. This is due to the synchronization time interval.
-* **Federated**: these identities exist both on-premises and in the cloud. Using Azure AD Connect, these users are either created or joined with existing Azure AD accounts.
+* **Cloud identities**: identities that exist solely in the cloud. In the case of Azure AD, they would reside specifically in your Azure AD directory.
+* **Synchronized**: identities that exist on-premises and in the cloud. Using Azure AD Connect, users are either created or joined with existing Azure AD accounts. The user's password hash is synchronized from the on-premises environment to the cloud in what is called password hash synchronization. Remember that if a user is disabled in the on-premises environment, it can take up to three hours for that account status to show up in Azure AD. This behavior is due to the synchronization time interval.
+* **Federated**: identities exist both on-premises and in the cloud. Using Azure AD Connect, users are either created or joined with existing Azure AD accounts.
> [!NOTE] > For more information about the Synchronization options, read [Integrating your on-premises identities with Azure Active Directory](whatis-hybrid-identity.md).
The following table helps in determining the advantages and disadvantages of eac
| Strategy | Advantages | Disadvantages | | | | |
-| **Cloud identities** |Easier to manage for small organization. <br> Nothing to install on-premises. No additional hardware needed<br>Easily disabled if the user leaves the company |Users will need to sign in when accessing workloads in the cloud <br> Passwords may or may not be the same for cloud and on-premises identities |
+| **Cloud identities** |Easier to manage for small organizations. <br> Nothing to install on-premises. No extra hardware needed<br>Easily disabled if the user leaves the company |Users will need to sign in when accessing workloads in the cloud <br> Passwords may or may not be the same for cloud and on-premises identities |
| **Synchronized** |On-premises password authenticates both on-premises and cloud directories <br>Easier to manage for small, medium, or large organizations <br>Users can have single sign-on (SSO) for some resources <br> Microsoft preferred method for synchronization <br> Easier to manage |Some customers may be reluctant to synchronize their directories with the cloud due to company-specific policies |
-| **Federated** |Users can have single sign-on (SSO) <br>If a user is terminated or leaves, the account can be immediately disabled and access revoked,<br> Supports advanced scenarios that cannot be accomplished with synchronized |More steps to set up and configure <br> Higher maintenance <br> May require additional hardware for the STS infrastructure <br> May require additional hardware to install the federation server. Additional software is required if AD FS is used <br> Require extensive setup for SSO <br> Critical point of failure if the federation server is down, users wonΓÇÖt be able to authenticate |
+| **Federated** |Users can have single sign-on (SSO) <br>If a user is terminated or leaves, the account can be immediately disabled and access revoked <br> Supports advanced scenarios that cannot be accomplished with synchronized identities |More steps to set up and configure <br> Higher maintenance <br> May require extra hardware for the STS infrastructure <br> May require extra hardware to install the federation server. Other software is required if AD FS is used <br> Requires extensive setup for SSO <br> Critical point of failure: if the federation server is down, users won't be able to authenticate |
### Client experience The strategy that you use will dictate the user sign-in experience. The following tables provide you with information on what the users should expect their sign-in experience to be. Not all federated identity providers support SSO in all scenarios.
The strategy that you use will dictate the user sign-in experience. The followi
| Exchange ActiveSync |Prompt for credentials |single sign-on for Lync, prompted credentials for Exchange | | Mobile apps |Prompt for credentials |Prompt for credentials |
-If you have determined from task 1 that you have a third-party IdP or are going to use one to provide federation with Azure AD, you need to be aware of the following supported capabilities:
+If you have a third-party IdP or are going to use one to provide federation with Azure AD, you need to be aware of the following supported capabilities:
* Any SAML 2.0 provider that is compliant for the SP-Lite profile can support authentication to Azure AD and associated applications * Supports passive authentication, which facilitates authentication to OWA, SPO, etc.
You must also be aware of what capabilities will not be available:
> ## Define synchronization strategy
-In this task you will define the tools that will be used to synchronize the organizationΓÇÖs on-premises data to the cloud and what topology you should use. Because, most organizations use Active Directory, information on using Azure AD Connect to address the questions above is provided in some detail. For environments that do not have Active Directory, there is information about using FIM 2010 R2 or MIM 2016 to help plan this strategy. However, future releases of Azure AD Connect will support LDAP directories, so depending on your timeline, this information may be able to assist.
+This task defines the tools that will be used to synchronize the organization's on-premises data to the cloud and what topology you should use. Because most organizations use Active Directory, information on using Azure AD Connect to address the questions above is provided in some detail. For environments that do not have Active Directory, there is information about using FIM 2010 R2 or MIM 2016 to help plan this strategy. However, future releases of Azure AD Connect will support LDAP directories, so depending on your timeline, this information may be able to assist.
### Synchronization tools Over the years, several synchronization tools have existed and been used for various scenarios. Currently, Azure AD Connect is the go-to tool of choice for all supported scenarios. AAD Sync and DirSync are also still around and may even be present in your environment now.
Over the years, several synchronization tools have existed and used for various
### Supported topologies When defining a synchronization strategy, the topology that is used must be determined. Depending on the information that was determined in step 2 you can determine which topology is the proper one to use.
-The single forest, single Azure AD topology is the most common and consists of a single Active Directory forest and a single instance of Azure AD. This is going to be used in a majority of the scenarios and is the expected topology when using Azure AD Connect Express installation as shown in the figure below.
+The single forest, single Azure AD topology is the most common and consists of a single Active Directory forest and a single instance of Azure AD. This topology is used in most scenarios and is the expected topology when using the Azure AD Connect express installation, as shown in the figure below.
![Supported topologies](./media/plan-hybrid-identity-design-considerations/single-forest.png) Single Forest Scenario
It is common for large and even small organizations to have multiple forests, as
Multi-Forest Scenario
-If this is the case, then the multi-forest single Azure AD topology should be considered if the following items are true:
+The multi-forest single Azure AD topology should be considered if the following items are true:
-* Users have only 1 identity across all forests ΓÇô the uniquely identifying users section below describes this in more detail.
+* Users have only one identity across all forests – the uniquely identifying users section below describes this scenario in more detail.
* The user authenticates to the forest in which their identity is located * UPN and Source Anchor (immutable id) will come from this forest
-* All forests are accessible by Azure AD Connect ΓÇô this means it does not need to be domain joined and can be placed in a DMZ if this facilitates this.
+* All forests are accessible by Azure AD Connect – meaning it does not need to be domain joined and can be placed in a DMZ.
* Users have only one mailbox * The forest that hosts a userΓÇÖs mailbox has the best data quality for attributes visible in the Exchange Global Address List (GAL)
-* If there is no mailbox on the user, then any forest may be used to contribute these values
+* If there is no mailbox on the user, then any forest may be used to contribute values
* If you have a linked mailbox, then there is also another account in a different forest used to sign in. > [!NOTE]
-> Objects that exist in both on-premises and in the cloud are ΓÇ£connectedΓÇ¥ via a unique identifier. In the context of Directory Synchronization, this unique identifier is referred to as the SourceAnchor. In the context of Single Sign-On, this is referred to as the ImmutableId. [Design concepts for Azure AD Connect](plan-connect-design-concepts.md#sourceanchor) for more considerations regarding the use of SourceAnchor.
+> Objects that exist both on-premises and in the cloud are "connected" via a unique identifier. In the context of Directory Synchronization, this unique identifier is referred to as the SourceAnchor. In the context of Single Sign-On, this identifier is referred to as the ImmutableId. See [Design concepts for Azure AD Connect](plan-connect-design-concepts.md#sourceanchor) for more considerations regarding the use of SourceAnchor.
> >
-If the above are not true and you have more than one active account or more than one mailbox, Azure AD Connect will pick one and ignore the other. If you have linked mailboxes but no other account, these accounts will not be exported to Azure AD and that user will not be a member of any groups. This is different from how it was in the past with DirSync and is intentional to better support these multi-forest scenarios. A multi-forest scenario is shown in the figure below.
+If the above are not true and you have more than one active account or more than one mailbox, Azure AD Connect will pick one and ignore the other. If you have linked mailboxes but no other account, accounts will not be exported to Azure AD and that user will not be a member of any groups. This behavior is different from how it was in the past with DirSync and is intentional to better support multi-forest scenarios. A multi-forest scenario is shown in the figure below.
![multiple Azure AD tenants](./media/plan-hybrid-identity-design-considerations/multiforest-multipleAzureAD.png) **Multi-forest multiple Azure AD scenario**
-It is recommended to have just a single directory in Azure AD for an organization but it is supported it a 1:1 relationship is kept between an Azure AD Connect sync server and an Azure AD directory. For each instance of Azure AD, you need an installation of Azure AD Connect. Also, Azure AD, by design is isolated and users in one instance of Azure AD will not be able to see users in another instance.
+It is recommended to have just a single directory in Azure AD for an organization. However, it is supported if a 1:1 relationship is kept between an Azure AD Connect sync server and an Azure AD directory. For each instance of Azure AD, you need an installation of Azure AD Connect. Also, Azure AD is isolated by design, and users in one instance of Azure AD will not be able to see users in another instance.
It is possible and supported to connect one on-premises instance of Active Directory to multiple Azure AD directories as shown in the figure below:
It is possible and supported to connect one on-premises instance of Active Direc
**Single-forest filtering scenario**
-To do this, the following must be true:
+The following statements must be true:
* Azure AD Connect sync servers must be configured for filtering so they each have a mutually exclusive set of objects. This is done, for example, by scoping each server to a particular domain or OU. * A DNS domain can only be registered in a single Azure AD directory so the UPNs of the users in the on-premises AD must use separate namespaces * Users in one instance of Azure AD will only be able to see users from their instance. They will not be able to see users in the other instances * Only one of the Azure AD directories can enable Exchange hybrid with the on-premises AD
-* Mutual exclusivity also applies to write-back. This makes some write-back features not supported with this topology since these assume a single on-premises configuration. This includes:
+* Mutual exclusivity also applies to write-back. Thus, some write-back features are not supported with this topology, because they assume a single on-premises configuration. These features include:
* Group write-back with default configuration * Device write-back
-The following is not supported and should not be chosen as an implementation:
+The following items are not supported and should not be chosen as an implementation:
* It is not supported to have multiple Azure AD Connect sync servers connecting to the same Azure AD directory, even if they are configured to synchronize mutually exclusive sets of objects * It is unsupported to sync the same user to multiple Azure AD directories.
The following is not supported and should not be chosen as an implementation:
> ## Define multi-factor authentication strategy
-In this task you will define the multi-factor authentication strategy to use. Azure AD Multi-Factor Authentication comes in two different versions. One is a cloud-based and the other is on-premises based using the Azure MFA Server. Based on the evaluation you did above you can determine which solution is the correct one for your strategy. Use the table below to determine which design option best fulfills your companyΓÇÖs security requirement:
+In this task, you will define the multi-factor authentication strategy to use. Azure AD Multi-Factor Authentication comes in two different versions: one is cloud-based and the other is on-premises based, using the Azure MFA Server. Based on the evaluation you did above, you can determine which solution is the correct one for your strategy. Use the table below to determine which design option best fulfills your company's security requirements:
Multi-factor design options:
Multi-factor design options:
| IIS applications not published through the Azure AD App Proxy |no |yes | | Remote access as VPN, RDG |no |yes |
-Even though you may have settled on a solution for your strategy, you still need to use the evaluation from above on where your users are located. This may cause the solution to change. Use the table below to assist you determining this:
+Even though you may have settled on a solution for your strategy, you still need to use the evaluation from above on where your users are located. Their location may cause the solution to change. Use the table below to assist you in determining this:
| User location | Preferred design option | | | |
Even though you may have settled on a solution for your strategy, you still need
> ## Multi-Factor Auth Provider
-Multi-factor authentication is available by default for global administrators who have an Azure Active Directory tenant. However, if you wish to extend multi-factor authentication to all of your users and/or want to your global administrators to be able to take advantage features such as the management portal, custom greetings, and reports, then you must purchase and configure Multi-Factor Authentication Provider.
+Multi-factor authentication is available by default for Hybrid Identity Administrators who have an Azure Active Directory tenant. However, if you wish to extend multi-factor authentication to all of your users, or want your Hybrid Identity Administrators to be able to take advantage of features such as the management portal, custom greetings, and reports, then you must purchase and configure a Multi-Factor Authentication Provider.
> [!NOTE] > You should also ensure that the multi-factor authentication design option that you selected supports the features that are required for your design.
active-directory Reference Connect Accounts Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/reference-connect-accounts-permissions.md
In addition to these three accounts used to run Azure AD Connect, you will also
- **AD DS Enterprise Administrator account**: Optionally used to create the "AD DS Connector account" above. -- **Azure AD Global Administrator account**: used to create the Azure AD Connector account and configure Azure AD. You can view global administrator accounts in the Azure portal. See [List Azure AD role assignments](../../active-directory/roles/view-assignments.md).
+- **Azure AD Global Administrator account**: used to create the Azure AD Connector account and configure Azure AD. You can view Hybrid Identity Administrator accounts in the Azure portal. See [List Azure AD role assignments](../../active-directory/roles/view-assignments.md).
- **SQL SA account (optional)**: used to create the ADSync database when using the full version of SQL Server. This SQL Server may be local or remote to the Azure AD Connect installation. This account may be the same account as the Enterprise Administrator. Provisioning the database can now be performed out of band by the SQL administrator and then installed by the Azure AD Connect administrator with database owner rights. For information on this see [Install Azure AD Connect using SQL delegated administrator permissions](how-to-connect-install-sql-delegation.md)
active-directory Reference Connect Health User Privacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/reference-connect-health-user-privacy.md
See [how to remove a service instance from Azure AD Connect Health](how-to-conne
See [how to remove a server from Azure AD Connect Health](how-to-connect-health-operations.md#delete-a-server-from-the-azure-ad-connect-health-service). ### Disable data collection and monitoring for all monitored services in Azure AD Connect Health
-Azure AD Connect Health also provides the option to stop data collection of **all** registered services in the tenant. We recommend careful consideration and full acknowledgement of all global admins before taking the action. Once the process begins, Connect Health service will stop receiving, processing, and reporting any data of all your services. Existing data in Connect Health service will be retained for no more than 30 days.
+Azure AD Connect Health also provides the option to stop data collection for **all** registered services in the tenant. We recommend careful consideration and full acknowledgement from all Hybrid Identity Administrators before taking this action. Once the process begins, the Connect Health service will stop receiving, processing, and reporting any data for all your services. Existing data in the Connect Health service will be retained for no more than 30 days.
If you want to stop data collection for a specific server, follow the steps for deleting specific servers. To stop tenant-wide data collection, use the following steps to stop data collection and delete all services of the tenant. 1. Click on **General Settings** under configuration in the main blade.
active-directory Reference Connect Health Version History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/reference-connect-health-version-history.md
documentationcenter: '' ms.assetid: 8dd4e998-747b-4c52-b8d3-3900fe77d88f
active-directory Reference Connect Ports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/reference-connect-ports.md
documentationcenter: '' ms.assetid: de97b225-ae06-4afc-b2ef-a72a3643255b
active-directory Reference Connect Version History Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/reference-connect-version-history-archive.md
The Azure Active Directory (Azure AD) team regularly updates Azure AD Connect wi
07/10/2020: Released for download ### Functional changes
-This release includes a public preview of the functionality to export the configuration of an existing Azure AD Connect server into a .JSON file which can then be used when installing a new Azure AD Connect server to create a copy of the original server.
+Includes a public preview of the functionality to export the configuration of an existing Azure AD Connect server into a .JSON file. This file can be used when installing a new Azure AD Connect server to create a copy of the original server.
A detailed description of this new feature can be found in [this article](./how-to-connect-import-export-config.md)
This hotfix build fixes an issue where unselected domains were getting incorrect
04/23/2020: Released for download ### Fixed issues
-This hotfix build fixes an issue introduced in build 1.5.20.0 where a tenant administrator with MFA was not able to enable DSSO.
+This hotfix build fixes an issue introduced in build 1.5.20.0 where a tenant administrator with MFA wasn't able to enable DSSO.
## 1.5.22.0
This hotfix build fixes an issue introduced in build 1.5.20.0 where a tenant adm
04/20/2020: Released for download ### Fixed issues
-This hotfix build fixes an issue in build 1.5.20.0 if you have cloned the **In from AD - Group Join** rule and have not cloned the **In from AD - Group Common** rule.
+This hotfix build fixes an issue in build 1.5.20.0 if you've cloned the **In from AD - Group Join** rule and haven't cloned the **In from AD - Group Common** rule.
## 1.5.20.0
This hotfix build fixes an issue in build 1.5.20.0 if you have cloned the **In f
04/09/2020: Released for download ### Fixed issues-- This hotfix build fixes an issue with build 1.5.18.0 if you have the Group Filtering feature enabled and use mS-DS-ConsistencyGuid as the source anchor.
+- This hotfix build fixes an issue with build 1.5.18.0 if you have the Group Filtering feature enabled and use mS-DS-ConsistencyGuid as the source anchor.
- Fixed an issue in the ADSyncConfig PowerShell module, where invoking DSACLS command used in all the Set-ADSync* Permissions cmdlets would cause one of the following errors: - `GrantAclsNoInheritance : The parameter is incorrect. The command failed to complete successfully.` - `GrantAcls : No GUID Found for computer …` > [!IMPORTANT]
-> If you have cloned the **In from AD - Group Join** sync rule and have not cloned the **In from AD - Group Common** sync rule and plan to upgrade, complete the following steps as part of the upgrade:
+> If you've cloned the **In from AD - Group Join** sync rule and haven't cloned the **In from AD - Group Common** sync rule and plan to upgrade, complete the following steps as part of the upgrade:
> 1. During Upgrade, uncheck the option **Start the synchronization process when configuration completes**. > 2. Edit the cloned join sync rule and add the following two transformations: > - Set direct flow `objectGUID` to `sourceAnchorBinary`.
This hotfix build fixes an issue in build 1.5.20.0 if you have cloned the **In f
### Functional changes ADSyncAutoUpgrade -- Added support for the mS-DS-ConsistencyGuid feature for group objects. This allows you to move groups between forests or reconnect groups in AD to Azure AD where the AD group objectID has changed, e.g. when an AD server is rebuilt after a calamity. For more information see [Moving groups between forests](how-to-connect-migrate-groups.md).-- The mS-DS-ConsistencyGuid attribute is automatically set on all synced groups and you do not have to do anything to enable this feature. -- Removed the Get-ADSyncRunProfile because it is no longer in use.
+- Added support for the mS-DS-ConsistencyGuid feature for group objects. Allows you to move groups between forests or reconnect groups in AD to Azure AD where the AD group objectID has changed. For more information, see [Moving groups between forests](how-to-connect-migrate-groups.md).
+- The mS-DS-ConsistencyGuid attribute is automatically set on all synced groups and you don't have to do anything to enable this feature.
+- Removed the Get-ADSyncRunProfile because it's no longer in use.
- Changed the warning you see when attempting to use an Enterprise Admin or Domain Admin account for the AD DS connector account to provide more context. -- Added a new cmdlet to remove objects from the connector space the old CSDelete.exe tool is removed, and it is replaced with the new Remove-ADSyncCSObject cmdlet. The Remove-ADSyncCSObject cmdlet takes a CsObject as input. This object can be retrieved by using the Get-ADSyncCSObject cmdlet.
+- Added a new cmdlet to remove objects from the connector space. The old CSDelete.exe tool is removed and replaced with the new Remove-ADSyncCSObject cmdlet. The Remove-ADSyncCSObject cmdlet takes a CsObject as input; this object can be retrieved by using the Get-ADSyncCSObject cmdlet.
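A minimal sketch of how the two cmdlets fit together follows; the connector name and distinguished name are placeholders, and the parameter names shown are assumptions, so check Get-Help for the exact syntax in your build:

```powershell
# Minimal sketch, assuming the ADSync module is available on the Azure AD Connect server.
Import-Module ADSync

# Retrieve the connector space object first (placeholder connector name and DN shown),
# then pass it to the removal cmdlet.
$csObject = Get-ADSyncCSObject -ConnectorName "contoso.com" `
                               -DistinguishedName "CN=Jane Doe,OU=Users,DC=contoso,DC=com"

Remove-ADSyncCSObject -CsObject $csObject
```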
>[!NOTE] >The old CSDelete.exe tool has been removed and replaced with the new Remove-ADSyncCSObject cmdlet
This hotfix build fixes an issue in build 1.5.20.0 if you have cloned the **In f
- Fixed a bug in the group writeback forest/OU selector on rerunning the Azure AD Connect wizard after disabling the feature. - Introduced a new error page that will be displayed if the required DCOM registry values are missing with a new help link. Information is also written to log files. -- Fixed an issue with the creation of the Azure Active Directory synchronization account where enabling Directory Extensions or PHS may fail because the account has not propagated across all service replicas before attempted use. -- Fixed a bug in the sync errors compression utility that was not handling surrogate characters correctly. -- Fixed a bug in the auto upgrade which left the server in the scheduler suspended state.
+- Fixed an issue with the creation of the Azure Active Directory synchronization account where enabling Directory Extensions or PHS may fail because the account hasn't propagated across all service replicas before attempted use.
+- Fixed a bug in the sync errors compression utility that wasn't handling surrogate characters correctly.
+- Fixed a bug in the auto upgrade that left the server in the scheduler suspended state.
## 1.4.38.0 ### Release status 12/9/2019: Release for download. Not available through auto-upgrade. ### New features and improvements-- We updated Password Hash Sync for Azure AD Domain Services to properly account for padding in Kerberos hashes. This will provide a performance improvement during password synchronization from Azure AD to Azure AD Domain Services.
+- We updated Password Hash Sync for Azure AD Domain Services to properly account for padding in Kerberos hashes. Provides a performance improvement during password synchronization from Azure AD to Azure AD Domain Services.
- We added support for reliable sessions between the authentication agent and service bus. - We added a DNS cache for websocket connections between authentication agent and cloud services. - We added the ability to target specific agent from cloud to test for agent connectivity. ### Fixed issues-- Release 1.4.18.0 had a bug where the PowerShell cmdlet for DSSO was using the login Windows credentials instead of the admin credentials provided while running ps. As a result of which it was not possible to enable DSSO in multiple forest through the Azure AD Connect user interface.
+- Release 1.4.18.0 had a bug where the PowerShell cmdlet for DSSO was using the logged-in Windows credentials instead of the admin credentials provided while running PowerShell. As a result, it wasn't possible to enable DSSO in multiple forests through the Azure AD Connect user interface.
- A fix was made to enable DSSO simultaneously in all forests through the Azure AD Connect user interface ## 1.4.32.0
This hotfix build fixes an issue in build 1.5.20.0 if you have cloned the **In f
### Fixed issues This version fixes an issue with existing Hybrid Azure AD joined devices. This release contains a new device sync rule that corrects this issue.
-Note that this rule change may cause deletion of obsolete devices from Azure AD. This is not a cause for concern, as these device objects are not used by Azure AD during Conditional Access authorization. For some customers, the number of devices that will be deleted through this rule change can exceed the deletion threshold. If you see the deletion of device objects in Azure AD exceeding the Export Deletion Threshold, it is advised to allow the deletions to go through. [How to allow deletes to flow when they exceed the deletion threshold](how-to-connect-sync-feature-prevent-accidental-deletes.md)
+This rule change may cause deletion of obsolete devices from Azure AD. These device objects aren't used by Azure AD during Conditional Access authorization. For some customers, the number of devices that will be deleted through this rule change can exceed the deletion threshold. If you see the deletion of device objects in Azure AD exceeding the Export Deletion Threshold, it's advised to allow the deletions to go through. See [How to allow deletes to flow when they exceed the deletion threshold](how-to-connect-sync-feature-prevent-accidental-deletes.md).
## 1.4.25.0
This version fixes a bug where some servers that were auto-upgraded from a previ
### Fixed issues
-Under certain circumstances, servers that were auto upgraded to version 1.4.18.0 did not re-enable Self-service password reset and Password Writeback after the upgrade was completed. This auto upgrade release fixes that issue and re-enables Self-service password reset and Password Writeback.
+Under certain circumstances, servers that were auto upgraded to version 1.4.18.0 didn't re-enable Self-service password reset and Password Writeback after the upgrade was completed. This auto upgrade release fixes that issue and re-enables Self-service password reset and Password Writeback.
-We fixed a bug in the sync errors compression utility that was not handling surrogate characters correctly.
+We fixed a bug in the sync errors compression utility that wasn't handling surrogate characters correctly.
## 1.4.18.0
We fixed a bug in the sync errors compression utility that was not handling surr
>We are investigating an incident where some customers are experiencing an issue with existing Hybrid Azure AD joined devices after upgrading to this version of Azure AD Connect. We advise customers who have deployed Hybrid Azure AD join to postpone upgrading to this version until the root cause of these issues is fully understood and mitigated. More information will be provided as soon as possible. >[!IMPORTANT]
->With this version of Azure AD Connect some customers may see some or all of their Windows devices disappear from Azure AD. This is not a cause for concern, as these device identities are not used by Azure AD during Conditional Access authorization. For more information see [Understanding Azure AD Connect 1.4.xx.x device disappearnce](/troubleshoot/azure/active-directory/reference-connect-device-disappearance)
+>With this version of Azure AD Connect, some customers may see some or all of their Windows devices disappear from Azure AD. These device identities aren't used by Azure AD during Conditional Access authorization. For more information, see [Understanding Azure AD Connect 1.4.xx.x device disappearance](/troubleshoot/azure/active-directory/reference-connect-device-disappearance)
### Release status
We fixed a bug in the sync errors compression utility that was not handling surr
- Add support for national clouds in Azure AD Connect troubleshooting script. - Customers should be informed that the deprecated WMI endpoints for MIIS_Service have now been removed. Any WMI operations should now be done via PS cmdlets. - Security improvement by resetting constrained delegation on AZUREADSSOACC object.-- When adding/editing a sync rule, if there are any attributes used in the rule that are in the connector schema but not added to the connector, the attributes automatically added to the connector. The same is true for the object type the rule affects. If anything is added to the connector, the connector will be marked for full import on the next sync cycle.
+- When adding/editing a sync rule, if there are any attributes used in the rule that are in the connector schema, but not added to the connector, the attributes will automatically be added to the connector. The same is true for the object type the rule affects. If anything is added to the connector, the connector will be marked for full import on the next sync cycle.
- Using an Enterprise or Domain admin as the connector account is no longer supported in new Azure AD Connect Deployments. Current Azure AD Connect deployments using an Enterprise or Domain admin as the connector account will not be affected by this release.-- In the Synchronization Manager a full sync is run on rule creation/edit/deletion. A pop-up will appear on any rule change notifying the user if full import or full sync is going to be run.
+- In the Synchronization Manager, a full sync is run on rule creation/edit/deletion. A pop-up will appear on any rule change notifying the user if full import or full sync is going to be run.
- Added mitigation steps for password errors to 'connectors > properties > connectivity' page. - Added a deprecation warning for the sync service manager on the connector properties page. This warning notifies the user that changes should be made through the Azure AD Connect wizard. - Added new error for issues with a user's password policy.-- Prevent misconfiguration of group filtering by domain and OU filters. Group filtering will show an error when the domain/OU of the entered group is already filtered out and keep the user from moving forward until the issue is resolved.
+- Prevent misconfiguration of group filtering by domain and OU filters. Group filtering will show an error when the domain/OU of the entered group is already filtered out. Group filtering will keep the user from moving forward until the issue is resolved.
- Users can no longer create a connector for Active Directory Domain Services or Windows Azure Active Directory in the Synchronization Service Manager UI. - Fixed accessibility of custom UI controls in the Synchronization Service Manager. - Enabled six federation management tasks for all sign-in methods in Azure AD Connect. (Previously, only the "Update AD FS TLS/SSL certificate" task was available for all sign-ins.)
We fixed a bug in the sync errors compression utility that was not handling surr
- Resolved a sync error issue for the scenario where a user object taking over its corresponding contact object has a self-reference (e.g. the user is their own manager). - Help pop-ups now show on keyboard focus. - For Auto upgrade, if any conflicting app has been running for 6 hours, kill it and continue with the upgrade.-- Limit the number of attributes a customer can select to 100 per object when selecting directory extensions. This will prevent the error from occurring during export as Azure has a maximum of 100 extension attributes per object.
+- Limit the number of attributes a customer can select to 100 per object when selecting directory extensions. This limit will prevent the error from occurring during export as Azure has a maximum of 100 extension attributes per object.
- Fixed a bug to make the AD Connectivity script more robust. - Fixed a bug to make Azure AD Connect install on a machine using an existing Named Pipes WCF service more robust.-- Improved diagnostics and troubleshooting around group policies that do not allow the ADSync service to start when initially installed.
+- Improved diagnostics and troubleshooting around group policies that don't allow the ADSync service to start when initially installed.
- Fixed a bug where display name for a Windows computer was written incorrectly. - Fixed a bug where OS type for a Windows computer was written incorrectly. - Fixed a bug where non-Windows 10 computers were syncing unexpectedly. Note that the effect of this change is that non-Windows-10 computers that were previously synced will now be deleted. This does not affect any features as the sync of Windows computers is only used for Hybrid Azure AD domain join, which only works for Windows-10 devices.
We fixed a bug in the sync errors compression utility that was not handling surr
>[!IMPORTANT] >There is a known issue with upgrading Azure AD Connect from an earlier version to 1.3.21.0 where the Microsoft 365 portal does not reflect the updated version even though Azure AD Connect upgraded successfully. >
-> To resolve this, you need to import the **AdSync** module and then run the `Set-ADSyncDirSyncConfiguration` PowerShell cmdlet on the Azure AD Connect server. You can use the following steps:
+> To resolve this issue, you need to import the **AdSync** module and then run the `Set-ADSyncDirSyncConfiguration` PowerShell cmdlet on the Azure AD Connect server. You can use the following steps:
> >1. Open PowerShell in administrator mode. >2. Run `Import-Module "ADSync"`.
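A minimal sketch of this workaround is below, run in an elevated PowerShell session on the Azure AD Connect server. The `-AnchorAttribute` argument shown is an assumption; check `Get-Help Set-ADSyncDirSyncConfiguration -Detailed` for the parameters available in your build before running it:

```powershell
# Minimal sketch of the workaround above (the -AnchorAttribute value is an assumption).
Import-Module "ADSync"
Set-ADSyncDirSyncConfiguration -AnchorAttribute ""
```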
We fixed a bug in the sync errors compression utility that was not handling surr
### Fixed issues -- Fixed an elevation of privilege vulnerability that exists in Microsoft Azure Active Directory Connect build 1.3.20.0. This vulnerability, under certain conditions, may allow an attacker to execute two PowerShell cmdlets in the context of a privileged account, and perform privileged actions. This security update addresses the issue by disabling these cmdlets. For more information see [security update](https://portal.msrc.microsoft.com/security-guidance/advisory/CVE-2019-1000).
+- Fixed an elevation of privilege vulnerability that exists in Microsoft Azure Active Directory Connect build 1.3.20.0. This vulnerability, under certain conditions, may allow an attacker to execute two PowerShell cmdlets in the context of a privileged account, and perform privileged actions. This security update addresses the issue by disabling these cmdlets. For more information, see [security update](https://portal.msrc.microsoft.com/security-guidance/advisory/CVE-2019-1000).
## 1.3.20.0
We fixed a bug in the sync errors compression utility that was not handling surr
- Upgrade to ADAL 3.19.8 to pick up a WS-Trust fix for Ping and add support for new Azure instances - Modify Group Sync Rules to flow samAccountName, DomainNetbios and DomainFQDN to cloud - needed for claims - Modified Default Sync Rule Handling ΓÇô read more [here](how-to-connect-fix-default-rules.md).-- Added a new agent running as a Windows service. This agent, named ΓÇ£Admin AgentΓÇ¥, enables deeper remote diagnostics of the Azure AD Connect server to help Microsoft Engineers troubleshoot when you open a support case. This agent is not installed and enabled by default. For more information on how to install and enable the agent see [What is the Azure AD Connect Admin Agent?](whatis-aadc-admin-agent.md).
+- Added a new agent running as a Windows service. This agent, named "Admin Agent", enables deeper remote diagnostics of the Azure AD Connect server to help Microsoft Engineers troubleshoot when you open a support case. This agent is not installed and enabled by default. For more information on how to install and enable the agent, see [What is the Azure AD Connect Admin Agent?](whatis-aadc-admin-agent.md).
- Updated the End User License Agreement (EULA) - Added auto upgrade support for deployments that use AD FS as their login type. This also removed the requirement of updating the AD FS Azure AD Relying Party Trust as part of the upgrade process. - Added an Azure AD trust management task that provides two options: analyze/update trust and reset trust.
This build updates the non-standard connectors (for example, Generic LDAP Connec
12/11/2018: Released for download ### Fixed issues
-This hotfix build allows the user to select a target domain, within the specified forest, for the RegisteredDevices container when enabling device writeback. In the previous versions that contain the new Device Options functionality (1.1.819.0 ΓÇô 1.2.68.0), the RegisteredDevices container location was limited to the forest root and did not allow child domains. This limitation only manifested itself on new deployments ΓÇô in-place upgrades were unaffected.
+This hotfix build allows the user to select a target domain, within the specified forest, for the RegisteredDevices container when enabling device writeback. In the previous versions that contain the new Device Options functionality (1.1.819.0 – 1.2.68.0), the RegisteredDevices container location was limited to the forest root and didn't allow child domains. This limitation only manifested itself on new deployments – in-place upgrades were unaffected.
-If any build containing the updated Device Options functionality was deployed to a new server and device writeback was enabled, you will need to manually specify the location of the container if you do not want it in the forest root. To do this, you need to disable device writeback and re-enable it which will allow you to specify the container location on the ΓÇ£Writeback forestΓÇ¥ page.
+If any build containing the updated Device Options functionality was deployed to a new server and device writeback was enabled, you will need to manually specify the location of the container if you don't want it in the forest root. To do this, disable device writeback and re-enable it, which will allow you to specify the container location on the "Writeback forest" page.
This hotfix build fixes a regression in the previous build where Password Writeb
- Changed the functionality of attribute write-back to ensure hosted voice-mail is working as expected. Under certain scenarios, Azure AD was overwriting the msExchUcVoicemailSettings attribute during write-back with a null value. Azure AD will now no longer clear the on-premises value of this attribute if the cloud value is not set. - Added diagnostics in the Azure AD Connect wizard to investigate and identify Connectivity issues to Azure AD. These same diagnostics can also be run directly through PowerShell using the Test- AdSyncAzureServiceConnectivity Cmdlet. -- Added diagnostics in the Azure AD Connect wizard to investigate and identify Connectivity issues to AD. These same diagnostics can also be run directly through PowerShell using the Start-ConnectivityValidation function in the ADConnectivityTools PowerShell module. For more information see [What is the ADConnectivityTool PowerShell Module?](how-to-connect-adconnectivitytools.md)
+- Added diagnostics in the Azure AD Connect wizard to investigate and identify Connectivity issues to AD. These same diagnostics can also be run directly through PowerShell using the Start-ConnectivityValidation function in the ADConnectivityTools PowerShell module. For more information, see [What is the ADConnectivityTool PowerShell Module?](how-to-connect-adconnectivitytools.md)
- Added an AD schema version pre-check for Hybrid Azure Active Directory Join and device write-back - Changed the Directory Extension page attribute search to be case-insensitive.-- Added full support for TLS 1.2. This release supports all other protocols being disabled and only TLS 1.2 being enabled on the machine where Azure AD Connect is installed. For more information see [TLS 1.2 enforcement for Azure AD Connect](reference-connect-tls-enforcement.md)
+- Added full support for TLS 1.2. This release supports all other protocols being disabled and only TLS 1.2 being enabled on the machine where Azure AD Connect is installed. For more information, see [TLS 1.2 enforcement for Azure AD Connect](reference-connect-tls-enforcement.md)
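The connectivity diagnostics called out in this change can also be run outside the wizard. A minimal, hedged sketch follows: only the cmdlet and function names come from the notes above; the module import, forest name, and the -Forest/-UserName parameters are assumptions for illustration and should be checked against the linked ADConnectivityTools article.

```powershell
# Hedged sketch: run the AD connectivity checks from an elevated PowerShell
# session on the Azure AD Connect server. The import and the parameter names
# and values below are assumptions, not taken from the release notes above.
Import-Module ADConnectivityTools             # module ships with Azure AD Connect (import path may differ)
# Validate connectivity to the on-premises forest; -Forest/-UserName are assumed parameters:
Start-ConnectivityValidation -Forest "contoso.com" -UserName "CONTOSO\aadconnect-svc"
# For the Azure AD side, check the cmdlet's own help for the parameters your environment requires:
Get-Help Test-AdSyncAzureServiceConnectivity -Detailed
```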
Azure AD Connect Upgrade fails if SQL Always On Availability is configured for t
- Fixed a bug to prevent an error happening due to incorrect multi-thread handling in the wizard - When Group Sync Filtering page encounters an LDAP error when resolving security groups, Azure AD Connect now returns the exception with full fidelity. The root cause for the referral exception is still unknown and will be addressed by a different bug. - Fixed a bug where permissions for STK and NGC keys (ms-DS-KeyCredentialLink attribute on User/Device objects for WHfB) were not correctly set. -- Fixed a bug where 'Set-ADSyncRestrictedPermissions' was not called correctly
+- Fixed a bug where 'Set-ADSyncRestrictedPermissions' wasn't called correctly
- Adding support for permission granting on Group Writeback in Azure AD Connect's installation wizard-- When changing sign in method from Password Hash Sync to AD FS, Password Hash Sync was not disabled.
+- When changing sign in method from Password Hash Sync to AD FS, Password Hash Sync wasn't disabled.
- Added verification for IPv6 addresses in AD FS configuration - Updated the notification message to indicate that an existing configuration exists. - Device writeback fails to detect container in untrusted forest. This has been updated to provide a better error message and a link to the appropriate documentation
New features and improvements
- Azure AD Connect Wizard: AD FS Multi Domain Regex is not correct when the user UPN has the ' special character; the regex was updated to support special characters - Azure AD Connect Wizard: Remove spurious "Configure source anchor attribute" message when no change - Azure AD Connect Wizard: AD FS support for the dual federation scenario-- Azure AD Connect Wizard: AD FS Claims are not updated for added domain when converting a managed domain to federated
+- Azure AD Connect Wizard: AD FS Claims aren't updated for added domain when converting a managed domain to federated
- Azure AD Connect Wizard: During detection of installed packages, we find stale Dirsync/Azure AD Sync/Azure AD Connect related products. We will now attempt to uninstall the stale products. - Azure AD Connect Wizard: Correct Error Message Mapping when installation of passthrough authentication agent fails - Azure AD Connect Wizard: Removed "Configuration" container from Domain OU Filtering page
There was a problem in the configuration retry logic that would result in an Arg
## 1.1.750.0 Status 3/22/2018: Released for auto-upgrade and download. >[!NOTE]
->When the upgrade to this new version completes, it will automatically trigger a full sync and full import for the Azure AD connector and a full sync for the AD connector. Since this may take some time, depending on the size of your Azure AD Connect environment, make sure that you have taken the necessary steps to support this or hold off on upgrading until you have found a convenient moment to do so.
+>When the upgrade to this new version completes, it will automatically trigger a full sync and full import for the Azure AD connector and a full sync for the AD connector. Since this may take some time, depending on the size of your Azure AD Connect environment, make sure that you've taken the necessary steps to support this or hold off on upgrading until you've found a convenient moment to do so.
>[!NOTE] >AutoUpgrade functionality was incorrectly disabled for some tenants who deployed builds later than 1.1.524.0. To ensure that your Azure AD Connect instance is still eligible for AutoUpgrade, run the following PowerShell cmdlet: `Set-ADSyncAutoUpgrade -AutoUpgradeState Enabled`
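A quick check-and-enable sketch using the ADSync module's Get-ADSyncAutoUpgrade and Set-ADSyncAutoUpgrade cmdlets, run in an elevated PowerShell session on the Azure AD Connect server:

```powershell
# Inspect the current auto-upgrade state, re-enable it as described in the
# note above, and then confirm the change.
Import-Module ADSync
Get-ADSyncAutoUpgrade                              # Enabled, Disabled, or Suspended
Set-ADSyncAutoUpgrade -AutoUpgradeState Enabled
Get-ADSyncAutoUpgrade                              # verify the new state
```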
Status 3/22/2018: Released for auto-upgrade and download.
#### Fixed issues * Set-ADSyncAutoUpgrade cmdlet would previously block Autoupgrade if auto-upgrade state is set to Suspended. This functionality has now changed so it does not block AutoUpgrade of future builds.
-* Changed the **User Sign-in** page option "Password Synchronization" to "Password Hash Synchronization". Azure AD Connect synchronizes password hashes, not passwords, so this aligns with what is actually occurring. For more information see [Implement password hash synchronization with Azure AD Connect sync](how-to-connect-password-hash-synchronization.md)
+* Changed the **User Sign-in** page option "Password Synchronization" to "Password Hash Synchronization". Azure AD Connect synchronizes password hashes, not passwords, so this aligns with what is actually occurring. For more information, see [Implement password hash synchronization with Azure AD Connect sync](how-to-connect-password-hash-synchronization.md)
## 1.1.749.0 Status: Released to select customers >[!NOTE]
->When the upgrade to this new version completes, it will automatically trigger a full sync and full import for the Azure AD connector and a full sync for the AD connector. Since this may take some time, depending on the size of your Azure AD Connect environment, please make sure that you have taken the necessary steps to support this or hold off on upgrading until you have found a convenient moment to do so.
+>When the upgrade to this new version completes, it will automatically trigger a full sync and full import for the Azure AD connector and a full sync for the AD connector. Since this may take some time, depending on the size of your Azure AD Connect environment, please make sure that you've taken the necessary steps to support this or hold off on upgrading until you've found a convenient moment to do so.
### Azure AD Connect #### Fixed issues * Fix timing window on background tasks for Partition Filtering page when switching to next page.
Status: Released to select customers
#### New features and improvements
-* Adding Privacy Settings for the General Data Protection Regulation (GDPR). For more information see the article [here](reference-connect-user-privacy.md).
+* Adding Privacy Settings for the General Data Protection Regulation (GDPR). For more information, see the article [here](reference-connect-user-privacy.md).
[!INCLUDE [Privacy](../../../includes/gdpr-intro-sentence.md)]
Status: December 12th, 2017
### Azure AD Connect An improvement has been added to Azure AD Connect version 1.1.654.0 (and after) to ensure that the recommended permission changes described under section [Lock down access to the AD DS account](#lock) are automatically applied when Azure AD Connect creates the AD DS account. -- When setting up Azure AD Connect, the installing administrator can either provide an existing AD DS account, or let Azure AD Connect automatically create the account. The permission changes are automatically applied to the AD DS account that is created by Azure AD Connect during setup. They are not applied to existing AD DS account provided by the installing administrator.
+- When setting up Azure AD Connect, the installing administrator can either provide an existing AD DS account, or let Azure AD Connect automatically create the account. The permission changes are automatically applied to the AD DS account that is created by Azure AD Connect during setup. They aren't applied to an existing AD DS account provided by the installing administrator.
- For customers who have upgraded from an older version of Azure AD Connect to 1.1.654.0 (or after), the permission changes will not be retroactively applied to existing AD DS accounts created prior to the upgrade. They will only be applied to new AD DS accounts created after the upgrade. This occurs when you are adding new AD forests to be synchronized to Azure AD. >[!NOTE]
Status: October 27 2017
Status: October 19 2017 > [!IMPORTANT]
-> There is a known compatibility issue between Azure AD Connect version 1.1.647.0 and Azure AD Connect Health Agent (for sync) version 3.0.127.0. This issue prevents the Health Agent from sending health data about the Azure AD Connect Synchronization Service (including object synchronization errors and run history data) to Azure AD Health Service. Before manually upgrading your Azure AD Connect deployment to version 1.1.647.0, please verify the current version of Azure AD Connect Health Agent installed on your Azure AD Connect server. You can do so by going to *Control Panel → Add Remove Programs* and look for application *Microsoft Azure AD Connect Health Agent for Sync*. If its version is 3.0.127.0, it is recommended that you wait for the next Azure AD Connect version to be available before upgrade. If the Health Agent version isn't 3.0.127.0, it is fine to proceed with the manual, in-place upgrade. Note that this issue does not affect swing upgrade or customers who are performing new installation of Azure AD Connect.
+> There is a known compatibility issue between Azure AD Connect version 1.1.647.0 and Azure AD Connect Health Agent (for sync) version 3.0.127.0. This issue prevents the Health Agent from sending health data about the Azure AD Connect Synchronization Service (including object synchronization errors and run history data) to Azure AD Health Service. Before manually upgrading your Azure AD Connect deployment to version 1.1.647.0, please verify the current version of Azure AD Connect Health Agent installed on your Azure AD Connect server. You can do so by going to *Control Panel → Add Remove Programs* and looking for the application *Microsoft Azure AD Connect Health Agent for Sync*. If its version is 3.0.127.0, it's recommended that you wait for the next Azure AD Connect version to be available before upgrading. If the Health Agent version isn't 3.0.127.0, it's fine to proceed with the manual, in-place upgrade. This issue does not affect swing upgrades or customers who are performing a new installation of Azure AD Connect.
> > ### Azure AD Connect #### Fixed issues * Fixed an issue with the *Change user sign-in* task in Azure AD Connect wizard:
- * The issue occurs when you have an existing Azure AD Connect deployment with Password Synchronization **enabled**, and you are trying to set the user sign-in method as *Pass-through Authentication*. Before the change is applied, the wizard incorrectly shows the "*Disable Password Synchronization*" prompt. However, Password Synchronization remains enabled after the change is applied. With this fix, the wizard no longer shows the prompt.
+ * The issue occurs when you've an existing Azure AD Connect deployment with Password Synchronization **enabled**, and you are trying to set the user sign-in method as *Pass-through Authentication*. Before the change is applied, the wizard incorrectly shows the "*Disable Password Synchronization*" prompt. However, Password Synchronization remains enabled after the change is applied. With this fix, the wizard no longer shows the prompt.
* By design, the wizard does not disable Password Synchronization when you update the user sign-in method using the *Change user sign-in* task. This is to avoid disruption to customers who want to keep Password Synchronization, even though they are enabling Pass-through Authentication or federation as their primary user sign-in method. * If you wish to disable Password Synchronization after updating the user sign-in method, you must execute the *Customize Synchronization Configuration* task in the wizard. When you navigate to the *Optional features* page, uncheck the *Password Synchronization* option.
- * Note that the same issue also occurs if you try to enable/disable Seamless Single Sign-On. Specifically, you have an existing Azure AD Connect deployment with Password Synchronization enabled and the user sign-in method is already configured as *Pass-through Authentication*. Using the *Change user sign-in* task, you try to check/uncheck the *Enable Seamless Single Sign-On* option while the user sign-in method remains configured as "Pass-through Authentication". Before the change is applied, the wizard incorrectly shows the "*Disable Password Synchronization*" prompt. However, Password Synchronization remains enabled after the change is applied. With this fix, the wizard no longer shows the prompt.
+ * Note that the same issue also occurs if you try to enable/disable Seamless Single Sign-On. Specifically, you've an existing Azure AD Connect deployment with Password Synchronization enabled and the user sign-in method is already configured as *Pass-through Authentication*. Using the *Change user sign-in* task, you try to check/uncheck the *Enable Seamless Single Sign-On* option while the user sign-in method remains configured as "Pass-through Authentication". Before the change is applied, the wizard incorrectly shows the "*Disable Password Synchronization*" prompt. However, Password Synchronization remains enabled after the change is applied. With this fix, the wizard no longer shows the prompt.
* Fixed an issue with the *Change user sign-in* task in Azure AD Connect wizard:
- * The issue occurs when you have an existing Azure AD Connect deployment with Password Synchronization **disabled**, and you are trying to set the user sign-in method as *Pass-through Authentication*. When the change is applied, the wizard enables both Pass-through Authentication and Password Synchronization. With this fix, the wizard no longer enables Password Synchronization.
+ * The issue occurs when you've an existing Azure AD Connect deployment with Password Synchronization **disabled**, and you are trying to set the user sign-in method as *Pass-through Authentication*. When the change is applied, the wizard enables both Pass-through Authentication and Password Synchronization. With this fix, the wizard no longer enables Password Synchronization.
* Previously, Password Synchronization was a pre-requisite for enabling Pass-through Authentication. When you set the user sign-in method as *Pass-through Authentication*, the wizard would enable both Pass-through Authentication and Password Synchronization. Recently, Password Synchronization was removed as a pre-requisite. As part of Azure AD Connect version 1.1.557.0, a change was made to Azure AD Connect to not enable Password Synchronization when you set the user sign-in method as *Pass-through Authentication*. However, the change was only applied to Azure AD Connect installation. With this fix, the same change is also applied to the *Change user sign-in* task.
- * Note that the same issue also occurs if you try to enable/disable Seamless Single Sign-On. Specifically, you have an existing Azure AD Connect deployment with Password Synchronization disabled and the user sign-in method is already configured as *Pass-through Authentication*. Using the *Change user sign-in* task, you try to check/uncheck the *Enable Seamless Single Sign-On* option while the user sign-in method remains configured as "Pass-through Authentication". When the change is applied, the wizard enables Password Synchronization. With this fix, the wizard no longer enables Password Synchronization.
+ * Note that the same issue also occurs if you try to enable/disable Seamless Single Sign-On. Specifically, you've an existing Azure AD Connect deployment with Password Synchronization disabled and the user sign-in method is already configured as *Pass-through Authentication*. Using the *Change user sign-in* task, you try to check/uncheck the *Enable Seamless Single Sign-On* option while the user sign-in method remains configured as "Pass-through Authentication". When the change is applied, the wizard enables Password Synchronization. With this fix, the wizard no longer enables Password Synchronization.
* Fixed an issue that caused Azure AD Connect upgrade to fail with error "*Unable to upgrade the Synchronization Service*". Further, the Synchronization Service can no longer start with event error "*The service was unable to start because the version of the database is newer than the version of the binaries installed*". The issue occurs when the administrator performing the upgrade does not have sysadmin privilege to the SQL server that is being used by Azure AD Connect. With this fix, Azure AD Connect only requires the administrator to have db_owner privilege to the ADSync database during upgrade.
Status: October 19 2017
* Fixed an issue that caused Azure AD Connect wizard to always show the "*Configure Source Anchor*" prompt on the *Ready to Configure* page, even if no changes related to Source Anchor were made.
-* When performing manual in-place upgrade of Azure AD Connect, the customer is required to provide the Global Administrator credentials of the corresponding Azure AD tenant. Previously, upgrade could proceed even though the Global Administrator's credentials belonged to a different Azure AD tenant. While upgrade appears to complete successfully, certain configurations are not correctly persisted with the upgrade. With this change, the wizard prevents the upgrade from proceeding if the credentials provided do not match the Azure AD tenant.
+* When performing manual in-place upgrade of Azure AD Connect, the customer is required to provide the Global Administrator credentials of the corresponding Azure AD tenant. Previously, upgrade could proceed even though the Global Administrator's credentials belonged to a different Azure AD tenant. While upgrade appears to complete successfully, certain configurations aren't correctly persisted with the upgrade. With this change, the wizard prevents the upgrade from proceeding if the credentials provided don't match the Azure AD tenant.
* Removed redundant logic that unnecessarily restarted Azure AD Connect Health service at the beginning of a manual upgrade. #### New features and improvements
-* Added logic to simplify the steps required to set up Azure AD Connect with Microsoft Germany Cloud. Previously, you are required to update specific registry keys on the Azure AD Connect server for it to work correctly with Microsoft Germany Cloud, as described in this article. Now, Azure AD Connect can automatically detect if your tenant is in Microsoft Germany Cloud based on the global administrator credentials provided during setup.
+* Added logic to simplify the steps required to set up Azure AD Connect with Microsoft Germany Cloud. Previously, you were required to update specific registry keys on the Azure AD Connect server for it to work correctly with Microsoft Germany Cloud, as described in this article. Now, Azure AD Connect can automatically detect if your tenant is in Microsoft Germany Cloud based on the Hybrid Identity Administrator credentials provided during setup.
### Azure AD Connect Sync > [!NOTE]
Status: October 19 2017
### AD FS Management #### Fixed issue
-* Fixed an issue related to the use of [ms-DS-ConsistencyGuid as Source Anchor](./plan-connect-design-concepts.md#using-ms-ds-consistencyguid-as-sourceanchor) feature. This issue affects customers who have configured *Federation with AD FS* as the user sign-in method. When you execute *Configure Source Anchor* task in the wizard, Azure AD Connect switches to using *ms-DS-ConsistencyGuid* as source attribute for immutableId. As part of this change, Azure AD Connect attempts to update the claim rules for ImmutableId in AD FS. However, this step failed because Azure AD Connect did not have the administrator credentials required to configure AD FS. With this fix, Azure AD Connect now prompts you to enter the administrator credentials for AD FS when you execute the *Configure Source Anchor* task.
+* Fixed an issue related to the use of [ms-DS-ConsistencyGuid as Source Anchor](./plan-connect-design-concepts.md#using-ms-ds-consistencyguid-as-sourceanchor) feature. This issue affects customers who have configured *Federation with AD FS* as the user sign-in method. When you execute *Configure Source Anchor* task in the wizard, Azure AD Connect switches to using *ms-DS-ConsistencyGuid* as source attribute for immutableId. As part of this change, Azure AD Connect attempts to update the claim rules for ImmutableId in AD FS. However, this step failed because Azure AD Connect didn't have the administrator credentials required to configure AD FS. With this fix, Azure AD Connect now prompts you to enter the administrator credentials for AD FS when you execute the *Configure Source Anchor* task.
Status: September 05 2017
### Azure AD Connect #### Known issues
-* There is a known issue that is causing Azure AD Connect upgrade to fail with error "*Unable to upgrade the Synchronization Service*". Further, the Synchronization Service can no longer start with event error "*The service was unable to start because the version of the database is newer than the version of the binaries installed*". The issue occurs when the administrator performing the upgrade does not have sysadmin privilege to the SQL server that is being used by Azure AD Connect. Dbo permissions are not sufficient.
+* There is a known issue that is causing Azure AD Connect upgrade to fail with error "*Unable to upgrade the Synchronization Service*". Further, the Synchronization Service can no longer start with event error "*The service was unable to start because the version of the database is newer than the version of the binaries installed*". The issue occurs when the administrator performing the upgrade does not have sysadmin privilege to the SQL server that is being used by Azure AD Connect. Dbo permissions aren't sufficient.
* There is a known issue with Azure AD Connect Upgrade that is affecting customers who have enabled [Seamless Single Sign-On](how-to-connect-sso.md). After Azure AD Connect is upgraded, the feature appears as disabled in the wizard, even though the feature remains enabled. A fix for this issue will be provided in a future release. Customers who are concerned about this display issue can manually fix it by enabling Seamless Single Sign-On in the wizard.
Status: September 05 2017
* Added a Troubleshoot task to Azure AD Connect wizard under Additional Tasks. Customers can leverage this task to troubleshoot issues related to password synchronization and collect general diagnostics. In the future, the Troubleshoot task will be extended to include other directory synchronization-related issues. * Azure AD Connect now supports a new installation mode called **Use Existing Database**. This installation mode allows customers to install Azure AD Connect that specifies an existing ADSync database. For more information about this feature, refer to article [Use an existing database](how-to-connect-install-existing-database.md). * For improved security, Azure AD Connect now defaults to using TLS1.2 to connect to Azure AD for directory synchronization. Previously, the default was TLS1.0.
-* When Azure AD Connect Password Synchronization Agent starts up, it tries to connect to Azure AD well-known endpoint for password synchronization. Upon successful connection, it is redirected to a region-specific endpoint. Previously, the Password Synchronization Agent caches the region-specific endpoint until it is restarted. Now, the agent clears the cache and retries with the well-known endpoint if it encounters connection issue with the region-specific endpoint. This change ensures that password synchronization can failover to a different region-specific endpoint when the cached region-specific endpoint is no longer available.
+* When Azure AD Connect Password Synchronization Agent starts up, it tries to connect to Azure AD well-known endpoint for password synchronization. Upon successful connection, it's redirected to a region-specific endpoint. Previously, the Password Synchronization Agent caches the region-specific endpoint until it's restarted. Now, the agent clears the cache and retries with the well-known endpoint if it encounters connection issue with the region-specific endpoint. This change ensures that password synchronization can failover to a different region-specific endpoint when the cached region-specific endpoint is no longer available.
* To synchronize changes from an on-premises AD forest, an AD DS account is required. You can either (i) create the AD DS account yourself and provide its credential to Azure AD Connect, or (ii) provide an Enterprise Admin's credentials and let Azure AD Connect create the AD DS account for you. Previously, (i) is the default option in the Azure AD Connect wizard. Now, (ii) is the default option. ### Azure AD Connect Health
Status: July 23 2017
#### New features and improvements * [Automatic Upgrade feature](how-to-connect-install-automatic-upgrade.md) has been expanded to support customers with the following configurations:
- * You have enabled the device writeback feature.
- * You have enabled the group writeback feature.
+ * You've enabled the device writeback feature.
+ * You've enabled the group writeback feature.
* The installation is not an Express settings or a DirSync upgrade.
- * You have more than 100,000 objects in the metaverse.
+ * You've more than 100,000 objects in the metaverse.
* You are connecting to more than one forest. Express setup only connects to one forest. * The AD Connector account is not the default MSOL_ account anymore. * The server is set to be in staging mode.
- * You have enabled the user writeback feature.
+ * You've enabled the user writeback feature.
>[!NOTE]
- >The scope expansion of the Automatic Upgrade feature affects customers with Azure AD Connect build 1.1.105.0 and after. If you do not want your Azure AD Connect server to be automatically upgraded, you must run following cmdlet on your Azure AD Connect server: `Set-ADSyncAutoUpgrade -AutoUpgradeState disabled`. For more information about enabling/disabling Automatic Upgrade, refer to article [Azure AD Connect: Automatic upgrade](how-to-connect-install-automatic-upgrade.md).
+ >The scope expansion of the Automatic Upgrade feature affects customers with Azure AD Connect build 1.1.105.0 and after. If you don't want your Azure AD Connect server to be automatically upgraded, you must run following cmdlet on your Azure AD Connect server: `Set-ADSyncAutoUpgrade -AutoUpgradeState disabled`. For more information about enabling/disabling Automatic Upgrade, refer to article [Azure AD Connect: Automatic upgrade](how-to-connect-install-automatic-upgrade.md).
## 1.1.558.0 Status: Will not be released. Changes in this build are included in version 1.1.561.0.
Status: Will not be released. Changes in this build are included in version 1.1.
#### New features and improvements * [Automatic Upgrade feature](how-to-connect-install-automatic-upgrade.md) has been expanded to support customers with the following configurations:
- * You have enabled the device writeback feature.
- * You have enabled the group writeback feature.
+ * You've enabled the device writeback feature.
+ * You've enabled the group writeback feature.
* The installation is not an Express settings or a DirSync upgrade.
- * You have more than 100,000 objects in the metaverse.
+ * You've more than 100,000 objects in the metaverse.
* You are connecting to more than one forest. Express setup only connects to one forest. * The AD Connector account is not the default MSOL_ account anymore. * The server is set to be in staging mode.
- * You have enabled the user writeback feature.
+ * You've enabled the user writeback feature.
>[!NOTE]
- >The scope expansion of the Automatic Upgrade feature affects customers with Azure AD Connect build 1.1.105.0 and after. If you do not want your Azure AD Connect server to be automatically upgraded, you must run following cmdlet on your Azure AD Connect server: `Set-ADSyncAutoUpgrade -AutoUpgradeState disabled`. For more information about enabling/disabling Automatic Upgrade, refer to article [Azure AD Connect: Automatic upgrade](how-to-connect-install-automatic-upgrade.md).
+ >The scope expansion of the Automatic Upgrade feature affects customers with Azure AD Connect build 1.1.105.0 and after. If you don't want your Azure AD Connect server to be automatically upgraded, you must run following cmdlet on your Azure AD Connect server: `Set-ADSyncAutoUpgrade -AutoUpgradeState disabled`. For more information about enabling/disabling Automatic Upgrade, refer to article [Azure AD Connect: Automatic upgrade](how-to-connect-install-automatic-upgrade.md).
## 1.1.557.0 Status: July 2017
Status: July 2017
### Azure AD Connect #### Fixed issue
-* Fixed an issue with the Initialize-ADSyncDomainJoinedComputerSync cmdlet that caused the verified domain configured on the existing service connection point object to be changed even if it is still a valid domain. This issue occurs when your Azure AD tenant has more than one verified domains that can be used for configuring the service connection point.
+* Fixed an issue with the Initialize-ADSyncDomainJoinedComputerSync cmdlet that caused the verified domain configured on the existing service connection point object to be changed even if it's still a valid domain. This issue occurs when your Azure AD tenant has more than one verified domain that can be used for configuring the service connection point.
#### New features and improvements * Password writeback is now available for preview with Microsoft Azure Government cloud and Microsoft Cloud Germany. For more information about Azure AD Connect support for the different service instances, refer to article [Azure AD Connect: Special considerations for instances](reference-connect-instances.md).
The issue that arises is that the **Sync all domains and OUs option** is always
* Fixed an issue with Password writeback that allows an Azure AD Administrator to reset the password of an on-premises AD privileged user account. The issue occurs when Azure AD Connect is granted the Reset Password permission over the privileged account. The issue is addressed in this version of Azure AD Connect by not allowing an Azure AD Administrator to reset the password of an arbitrary on-premises AD privileged user account unless the administrator is the owner of that account. For more information, refer to [Security Advisory 4033453](/security-updates/SecurityAdvisories/2017/4033453).
-* Fixed an issue related to the [ms-DS-ConsistencyGuid as Source Anchor](./plan-connect-design-concepts.md#using-ms-ds-consistencyguid-as-sourceanchor) feature where Azure AD Connect does not writeback to on-premises AD ms-DS-ConsistencyGuid attribute. The issue occurs when there are multiple on-premises AD forests added to Azure AD Connect and the *User identities exist across multiple directories option* is selected. When such configuration is used, the resultant synchronization rules do not populate the sourceAnchorBinary attribute in the Metaverse. The sourceAnchorBinary attribute is used as the source attribute for ms-DS-ConsistencyGuid attribute. As a result, writeback to the ms-DSConsistencyGuid attribute does not occur. To fix the issue, following sync rules have been updated to ensure that the sourceAnchorBinary attribute in the Metaverse is always populated:
+* Fixed an issue related to the [ms-DS-ConsistencyGuid as Source Anchor](./plan-connect-design-concepts.md#using-ms-ds-consistencyguid-as-sourceanchor) feature where Azure AD Connect does not write back to the on-premises AD ms-DS-ConsistencyGuid attribute. The issue occurs when there are multiple on-premises AD forests added to Azure AD Connect and the *User identities exist across multiple directories option* is selected. When such configuration is used, the resultant synchronization rules don't populate the sourceAnchorBinary attribute in the Metaverse. The sourceAnchorBinary attribute is used as the source attribute for ms-DS-ConsistencyGuid attribute. As a result, writeback to the ms-DS-ConsistencyGuid attribute does not occur. To fix the issue, the following sync rules have been updated to ensure that the sourceAnchorBinary attribute in the Metaverse is always populated:
* In from AD - InetOrgPerson AccountEnabled.xml * In from AD - InetOrgPerson Common.xml * In from AD - User AccountEnabled.xml
The issue that arises is that the **Sync all domains and OUs option** is always
#### New features and improvements
-* Previously, the [ms-DS-ConsistencyGuid as Source Anchor](./plan-connect-design-concepts.md#using-ms-ds-consistencyguid-as-sourceanchor) feature was available to new deployments only. Now, it is available to existing deployments. More specifically:
+* Previously, the [ms-DS-ConsistencyGuid as Source Anchor](./plan-connect-design-concepts.md#using-ms-ds-consistencyguid-as-sourceanchor) feature was available to new deployments only. Now, it's available to existing deployments. More specifically:
* To access the feature, start the Azure AD Connect wizard and choose the *Update Source Anchor* option. * This option is only visible to existing deployments that are using objectGuid as sourceAnchor attribute. * When configuring the option, the wizard validates the state of the ms-DS-ConsistencyGuid attribute in your on-premises Active Directory. If the attribute isn't configured on any user object in the directory, the wizard uses the ms-DS-ConsistencyGuid as the sourceAnchor attribute. If the attribute is configured on one or more user objects in the directory, the wizard concludes the attribute is being used by other applications and is not suitable as sourceAnchor attribute and does not permit the Source Anchor change to proceed. If you are certain that the attribute isn't used by existing applications, you need to contact Support for information on how to suppress the error.
The issue that arises is that the **Sync all domains and OUs option** is always
* Fixed an issue that caused AD FS to generate incorrect claim value for IssuerID. The issue occurs if there are multiple verified domains in the Azure AD tenant and the domain suffix of the userPrincipalName attribute used to generate the IssuerID claim is at least 3-levels deep (for example, johndoe@us.contoso.com). The issue is resolved by updating the regex used by the claim rules. #### New features and improvements
-* Previously, the ADFS Certificate Management feature provided by Azure AD Connect can only be used with ADFS farms managed through Azure AD Connect. Now, you can use the feature with ADFS farms that are not managed using Azure AD Connect.
+* Previously, the ADFS Certificate Management feature provided by Azure AD Connect could only be used with ADFS farms managed through Azure AD Connect. Now, you can use the feature with ADFS farms that aren't managed using Azure AD Connect.
## 1.1.524.0 Released: May 2017
Azure AD Connect sync
* Fixed an issue that causes Automatic Upgrade to occur on the Azure AD Connect server even if customer has disabled the feature using the Set-ADSyncAutoUpgrade cmdlet. With this fix, the Automatic Upgrade process on the server still checks for upgrade periodically, but the downloaded installer honors the Automatic Upgrade configuration. * During DirSync in-place upgrade, Azure AD Connect creates an Azure AD service account to be used by the Azure AD connector for synchronizing with Azure AD. After the account is created, Azure AD Connect authenticates with Azure AD using the account. Sometimes, authentication fails because of transient issues, which in turn causes DirSync in-place upgrade to fail with error *"An error has occurred executing Configure AAD Sync task: AADSTS50034: To sign into this application, the account must be added to the xxx.onmicrosoft.com directory."* To improve the resiliency of DirSync upgrade, Azure AD Connect now retries the authentication step.
-* There was an issue with build 443 that causes DirSync in-place upgrade to succeed but run profiles required for directory synchronization are not created. Healing logic is included in this build of Azure AD Connect. When customer upgrades to this build, Azure AD Connect detects missing run profiles and creates them.
+* There was an issue with build 443 that caused DirSync in-place upgrade to succeed but the run profiles required for directory synchronization weren't created. Healing logic is included in this build of Azure AD Connect. When a customer upgrades to this build, Azure AD Connect detects missing run profiles and creates them.
* Fixed an issue that causes Password Synchronization process to fail to start with Event ID 6900 and error *"An item with the same key has already been added"*. This issue occurs if you update OU filtering configuration to include AD configuration partition. To fix this issue, Password Synchronization process now synchronizes password changes from AD domain partitions only. Non-domain partitions such as configuration partition are skipped. * During Express installation, Azure AD Connect creates an on-premises AD DS account to be used by the AD connector to communicate with on-premises AD. Previously, the account is created with the PASSWD_NOTREQD flag set on the user-Account-Control attribute and a random password is set on the account. Now, Azure AD Connect explicitly removes the PASSWD_NOTREQD flag after the password is set on the account. * Fixed an issue that causes DirSync upgrade to fail with error *"a deadlock occurred in sql server which trying to acquire an application lock"* when the mailNickname attribute is found in the on-premises AD schema, but is not bound to the AD User object class. * Fixed an issue that causes Device writeback feature to automatically be disabled when an administrator is updating Azure AD Connect sync configuration using Azure AD Connect wizard. This issue is caused by the wizard performing a pre-requisite check for the existing Device writeback configuration in on-premises AD and the check fails. The fix is to skip the check if Device writeback is already enabled previously.
-* To configure OU filtering, you can either use the Azure AD Connect wizard or the Synchronization Service Manager. Previously, if you use the Azure AD Connect wizard to configure OU filtering, new OUs created afterwards are included for directory synchronization. If you do not want new OUs to be included, you must configure OU filtering using the Synchronization Service Manager. Now, you can achieve the same behavior using Azure AD Connect wizard.
+* To configure OU filtering, you can either use the Azure AD Connect wizard or the Synchronization Service Manager. Previously, if you use the Azure AD Connect wizard to configure OU filtering, new OUs created afterwards are included for directory synchronization. If you don't want new OUs to be included, you must configure OU filtering using the Synchronization Service Manager. Now, you can achieve the same behavior using Azure AD Connect wizard.
* Fixed an issue that causes stored procedures required by Azure AD Connect to be created under the schema of the installing admin, instead of under the dbo schema. * Fixed an issue that causes the TrackingId attribute returned by Azure AD to be omitted in the Azure AD Connect Server Event Logs. The issue occurs if Azure AD Connect receives a redirection message from Azure AD and Azure AD Connect is unable to connect to the endpoint provided. The TrackingId is used by Support Engineers to correlate with service side logs during troubleshooting. * When Azure AD Connect receives LargeObject error from Azure AD, Azure AD Connect generates an event with EventID 6941 and message *"The provisioned object is too large. Trim the number of attribute values on this object."* At the same time, Azure AD Connect also generates a misleading event with EventID 6900 and message *"Microsoft.Online.Coexistence.ProvisionRetryException: Unable to communicate with the Windows Azure Active Directory service."* To minimize confusion, Azure AD Connect no longer generates the latter event when LargeObject error is received.
Azure AD Connect sync
* Added **preferredDataLocation** to the Metaverse schema and Azure AD Connector schema. Customers who want to update either attribute in Azure AD can implement custom sync rules to do so. * Added **userType** to the Metaverse schema and Azure AD Connector schema. Customers who want to update either attribute in Azure AD can implement custom sync rules to do so.
-* Azure AD Connect now automatically enables the use of ConsistencyGuid attribute as the Source Anchor attribute for on-premises AD objects. Further, Azure AD Connect populates the ConsistencyGuid attribute with the objectGuid attribute value if it is empty. This feature is applicable to new deployment only. To find out more about this feature, refer to article section [Azure AD Connect: Design concepts - Using ms-DS-ConsistencyGuid as sourceAnchor](plan-connect-design-concepts.md#using-ms-ds-consistencyguid-as-sourceanchor).
+* Azure AD Connect now automatically enables the use of ConsistencyGuid attribute as the Source Anchor attribute for on-premises AD objects. Further, Azure AD Connect populates the ConsistencyGuid attribute with the objectGuid attribute value if it's empty. This feature is applicable to new deployment only. To find out more about this feature, refer to article section [Azure AD Connect: Design concepts - Using ms-DS-ConsistencyGuid as sourceAnchor](plan-connect-design-concepts.md#using-ms-ds-consistencyguid-as-sourceanchor).
* New troubleshooting cmdlet Invoke-ADSyncDiagnostics has been added to help diagnose Password Hash Synchronization related issues. For information about using the cmdlet, refer to article [Troubleshoot password hash synchronization with Azure AD Connect sync](tshoot-connect-password-hash-synchronization.md). * Azure AD Connect now supports synchronizing Mail-Enabled Public Folder objects from on-premises AD to Azure AD. You can enable the feature using Azure AD Connect wizard under Optional Features. To find out more about this feature, refer to article [Office 365 Directory Based Edge Blocking support for on-premises Mail Enabled Public Folders](https://techcommunity.microsoft.com/t5/exchange/office-365-directory-based-edge-blocking-support-for-on-premises/m-p/74218). * Azure AD Connect requires an AD DS account to synchronize from on-premises AD. Previously, if you installed Azure AD Connect using the Express mode, you could provide the credentials of an Enterprise Admin account and Azure AD Connect would create the AD DS account required. However, for a custom installation and adding forests to an existing deployment, you were required to provide the AD DS account instead. Now, you also have the option to provide the credentials of an Enterprise Admin account during a custom installation and let Azure AD Connect create the AD DS account required.
-* Azure AD Connect now supports SQL AOA. You must enable SQL AOA before installing Azure AD Connect. During installation, Azure AD Connect detects whether the SQL instance provided is enabled for SQL AOA or not. If SQL AOA is enabled, Azure AD Connect further figures out if SQL AOA is configured to use synchronous replication or asynchronous replication. When setting up the Availability Group Listener, it is recommended that you set the RegisterAllProvidersIP property to 0. This recommendation is because Azure AD Connect currently uses SQL Native Client to connect to SQL and SQL Native Client does not support the use of MultiSubNetFailover property.
+* Azure AD Connect now supports SQL AOA. You must enable SQL AOA before installing Azure AD Connect. During installation, Azure AD Connect detects whether the SQL instance provided is enabled for SQL AOA or not. If SQL AOA is enabled, Azure AD Connect further figures out if SQL AOA is configured to use synchronous replication or asynchronous replication. When setting up the Availability Group Listener, it's recommended that you set the RegisterAllProvidersIP property to 0. This recommendation is because Azure AD Connect currently uses SQL Native Client to connect to SQL and SQL Native Client does not support the use of MultiSubNetFailover property.
* If you are using LocalDB as the database for your Azure AD Connect server and it has reached its 10-GB size limit, the Synchronization Service no longer starts. Previously, you needed to perform a ShrinkDatabase operation on the LocalDB to reclaim enough DB space for the Synchronization Service to start. After that, you could use the Synchronization Service Manager to delete run history to reclaim more DB space. Now, you can use the Start-ADSyncPurgeRunHistory cmdlet to purge run history data from LocalDB to reclaim DB space. Further, this cmdlet supports an offline mode (by specifying the -offline parameter) which can be used when the Synchronization Service is not running (see the sketch after this list). Note: The offline mode can only be used if the Synchronization Service is not running and the database used is LocalDB. * To reduce the amount of storage space required, Azure AD Connect now compresses sync error details before storing them in LocalDB/SQL databases. When upgrading from an older version of Azure AD Connect to this version, Azure AD Connect performs a one-time compression on existing sync error details. * Previously, after updating OU filtering configuration, you had to manually run Full import to ensure existing objects were properly included/excluded from directory synchronization. Now, Azure AD Connect automatically triggers Full import during the next sync cycle. Further, Full import is only applied to the AD connectors affected by the update. Note: this improvement is applicable to OU filtering updates made using the Azure AD Connect wizard only. It is not applicable to OU filtering updates made using the Synchronization Service Manager.
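The Start-ADSyncPurgeRunHistory and Invoke-ADSyncDiagnostics cmdlets mentioned in the list above can be combined as follows. A minimal sketch, assuming the -Offline switch can be used on its own, that the Windows service name is ADSync, and that -PasswordSync is the diagnostics mode you need; adjust for your environment.

```powershell
# Reclaim LocalDB space by purging run history while the Synchronization
# Service is stopped (offline mode only works against LocalDB), then restart
# the service and run the password hash synchronization diagnostics.
Import-Module ADSync
Stop-Service ADSync                       # offline purge requires the service to be stopped
Start-ADSyncPurgeRunHistory -Offline      # purge run history directly from LocalDB
Start-Service ADSync
Invoke-ADSyncDiagnostics -PasswordSync    # troubleshoot password hash synchronization
```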
Released: April 2017
Azure AD Connect sync * Fixed an issue where the sync scheduler skips the entire sync step if one or more connectors are missing run profile for that sync step. For example, you manually added a connector using the Synchronization Service Manager without creating a Delta Import run profile for it. This fix ensures that the sync scheduler continues to run Delta Import for other connectors.
-* Fixed an issue where the Synchronization Service immediately stops processing a run profile when it is encounters an issue with one of the run steps. This fix ensures that the Synchronization Service skips that run step and continues to process the rest. For example, you have a Delta Import run profile for your AD connector with multiple run steps (one for each on-premises AD domain). The Synchronization Service will run Delta Import with the other AD domains even if one of them has network connectivity issues.
+* Fixed an issue where the Synchronization Service immediately stops processing a run profile when it encounters an issue with one of the run steps. This fix ensures that the Synchronization Service skips that run step and continues to process the rest. For example, you have a Delta Import run profile for your AD connector with multiple run steps (one for each on-premises AD domain). The Synchronization Service will run Delta Import with the other AD domains even if one of them has network connectivity issues.
* Fixed an issue that causes the Azure AD Connector update to be skipped during Automatic Upgrade. * Fixed an issue that causes Azure AD Connect to incorrectly determine whether the server is a domain controller during setup, which in turn causes DirSync upgrade to fail. * Fixed an issue that causes DirSync in-place upgrade to not create any run profile for the Azure AD Connector.
Azure AD Connect sync
* A local user account * Previously, if you upgrade to a new build of Azure AD Connect containing connectors update or sync rule changes, Azure AD Connect will trigger a full sync cycle. Now, Azure AD Connect selectively triggers Full Import step only for connectors with update, and Full Synchronization step only for connectors with sync rule changes. * Previously, the Export Deletion Threshold only applies to exports which are triggered through the sync scheduler. Now, the feature is extended to include exports manually triggered by the customer using the Synchronization Service Manager.
-* On your Azure AD tenant, there is a service configuration which indicates whether Password Synchronization feature is enabled for your tenant or not. Previously, it is easy for the service configuration to be incorrectly configured by Azure AD Connect when you have an active and a staging server. Now, Azure AD Connect will attempt to keep the service configuration consistent with your active Azure AD Connect server only.
+* On your Azure AD tenant, there is a service configuration which indicates whether the Password Synchronization feature is enabled for your tenant or not. Previously, it was easy for the service configuration to be incorrectly configured by Azure AD Connect when you have an active and a staging server. Now, Azure AD Connect will attempt to keep the service configuration consistent with your active Azure AD Connect server only.
* Azure AD Connect wizard now detects and returns a warning if on-premises AD does not have AD Recycle Bin enabled. * Previously, Export to Azure AD times out and fails if the combined size of the objects in the batch exceeds certain threshold. Now, the Synchronization Service will reattempt to resend the objects in separate, smaller batches if the issue is encountered. * The Synchronization Service Key Management application has been removed from Windows Start Menu. Management of encryption key will continue to be supported through command-line interface using miiskmu.exe. For information about managing encryption key, refer to article [Abandoning the Azure AD Connect Sync encryption key](./how-to-connect-sync-change-serviceacct-pass.md#abandoning-the-adsync-service-account-encryption-key).
-* Previously, if you change the Azure AD Connect sync service account password, the Synchronization Service will not be able start correctly until you have abandoned the encryption key and reinitialized the Azure AD Connect sync service account password. Now, this process is no longer required.
+* Previously, if you change the Azure AD Connect sync service account password, the Synchronization Service will not be able to start correctly until you've abandoned the encryption key and reinitialized the Azure AD Connect sync service account password. Now, this process is no longer required.
Desktop SSO
Released: March 2017
Azure AD Connect sync * Fixed an issue which causes Azure AD Connect wizard to fail if the display name of the Azure AD Connector does not contain the initial onmicrosoft.com domain assigned to the Azure AD tenant. * Fixed an issue which causes Azure AD Connect wizard to fail while making connection to SQL database when the password of the Sync Service Account contains special characters such as apostrophe, colon and space.
-* Fixed an issue which causes the error "The dimage has an anchor that is different than the image" to occur on an Azure AD Connect server in staging mode, after you have temporarily excluded an on-premises AD object from syncing and then included it again for syncing.
-* Fixed an issue which causes the error "The object located by DN is a phantom" to occur on an Azure AD Connect server in staging mode, after you have temporarily excluded an on-premises AD object from syncing and then included it again for syncing.
+* Fixed an issue which causes the error "The dimage has an anchor that is different than the image" to occur on an Azure AD Connect server in staging mode, after you've temporarily excluded an on-premises AD object from syncing and then included it again for syncing.
+* Fixed an issue which causes the error "The object located by DN is a phantom" to occur on an Azure AD Connect server in staging mode, after you've temporarily excluded an on-premises AD object from syncing and then included it again for syncing.
AD FS management * Fixed an issue where Azure AD Connect wizard does not update AD FS configuration and set the right claims on the relying party trust after Alternate Login ID is configured.
Released: November 2016
**Fixed issues:**
-* Sometimes, installing Azure AD Connect fails because it is unable to create a local service account whose password meets the level of complexity specified by the organization's password policy.
-* Fixed an issue where join rules are not reevaluated when an object in the connector space simultaneously becomes out-of-scope for one join rule and become in-scope for another. This can happen if you have two or more join rules whose join conditions are mutually exclusive.
-* Fixed an issue where inbound synchronization rules (from Azure AD), which do not contain join rules, are not processed if they have lower precedence values than those containing join rules.
+* Sometimes, installing Azure AD Connect fails because it's unable to create a local service account whose password meets the level of complexity specified by the organization's password policy.
+* Fixed an issue where join rules aren't reevaluated when an object in the connector space simultaneously becomes out-of-scope for one join rule and becomes in-scope for another. This can happen if you have two or more join rules whose join conditions are mutually exclusive.
+* Fixed an issue where inbound synchronization rules (from Azure AD), which don't contain join rules, aren't processed if they have lower precedence values than those containing join rules.
**Improvements:**
Released: August 2016
**Fixed issues:**
-* Changes to sync interval do not take place until after the next sync cycle is complete.
+* Changes to sync interval don't take place until after the next sync cycle is complete.
* Azure AD Connect wizard does not accept an Azure AD account whose username starts with an underscore (\_). * Azure AD Connect wizard fails to authenticate the Azure AD account if the account password contains too many special characters. Error message "Unable to validate credentials. An unexpected error has occurred." is returned. * Uninstalling staging server disables password synchronization in Azure AD tenant and causes password synchronization to fail with active server. * Password synchronization fails in uncommon cases when there is no password hash stored on the user. * When Azure AD Connect server is enabled for staging mode, password writeback is not temporarily disabled. * Azure AD Connect wizard does not show the actual password synchronization and password writeback configuration when server is in staging mode. It always shows them as disabled.
-* Configuration changes to password synchronization and password writeback are not persisted by Azure AD Connect wizard when server is in staging mode.
+* Configuration changes to password synchronization and password writeback aren't persisted by Azure AD Connect wizard when server is in staging mode.
**Improvements:**
-* Updated the Start-ADSyncSyncCycle cmdlet to indicate whether it is able to successfully start a new sync cycle or not.
+* Updated the Start-ADSyncSyncCycle cmdlet to indicate whether it's able to successfully start a new sync cycle or not.
* Added the Stop-ADSyncSyncCycle cmdlet to terminate sync cycle and operation, which are currently in progress. * Updated the Stop-ADSyncScheduler cmdlet to terminate sync cycle and operation, which are currently in progress. * When configuring [Directory extensions](how-to-connect-sync-feature-directory-extensions.md) in Azure AD Connect wizard, the Azure AD attribute of type "Teletex string" can now be selected.
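The scheduler cmdlets listed above can be exercised from an elevated PowerShell session on the Azure AD Connect server; a minimal sketch:

```powershell
# Trigger a delta sync cycle; the cmdlet reports whether a new cycle could be
# started. Use -PolicyType Initial for a full cycle instead.
Import-Module ADSync
Start-ADSyncSyncCycle -PolicyType Delta

# If a running cycle or operation has to be interrupted:
Stop-ADSyncSyncCycle
```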
Released: March 2016
**Fixed issues:** * Made sure Express installation cannot be used on Windows Server 2008 (pre-R2) because password sync is not supported on this operating system.
-* Upgrade from DirSync with a custom filter configuration did not work as expected.
+* Upgrade from DirSync with a custom filter configuration didn't work as expected.
* When upgrading to a newer release and there are no changes to the configuration, a full import/synchronization should not be scheduled. ## 1.1.110.0
Released: February 2016
**New features:**
* [Automatic upgrade](how-to-connect-install-automatic-upgrade.md) feature for Express settings customers.
-* Support for the global admin by using Azure AD Multi-Factor Authentication and Privileged Identity Management in the installation wizard.
+* Support for the Hybrid Identity Administrator by using Azure AD Multi-Factor Authentication and Privileged Identity Management in the installation wizard.
* You need to allow your proxy to also allow traffic to https://secure.aadcdn.microsoftonline-p.com if you use Multi-Factor Authentication.
* You need to add https://secure.aadcdn.microsoftonline-p.com to your trusted sites list for Multi-Factor Authentication to properly work.
* Allow changing the user's sign-in method after initial installation.
Released: February 2016
* The verify DNS domains page didn't always recognize the domains.
* Prompts for domain admin credentials when configuring AD FS.
-* The on-premises AD accounts are not recognized by the installation wizard if located in a domain with a different DNS tree than the root domain.
+* The on-premises AD accounts aren't recognized by the installation wizard if located in a domain with a different DNS tree than the root domain.
## 1.0.9131.0
Released: December 2015
Released: December 2015
**Fixed issues:**
* Password sync might not work when you change passwords in Active Directory Domain Services (AD DS), but works when you do set a password.
-* When you have a proxy server, authentication to Azure AD might fail during installation, or if an upgrade is canceled on the configuration page.
-* Updating from a previous release of Azure AD Connect with a full SQL Server instance fails if you are not a SQL Server system administrator (SA).
+* When you have a proxy server, authentication to Azure AD might fail during installation, or if an upgrade is canceled on the configuration page.
+* Updating from a previous release of Azure AD Connect with a full SQL Server instance fails if you aren't a SQL Server system administrator (SA).
* Updating from a previous release of Azure AD Connect with a remote SQL Server shows the "Unable to access the ADSync SQL database" error.

## 1.0.9125.0
Released: August 2015
**Fixed issues:**
* Azure AD Connect installation wizard crashes if another user continues installation rather than the person who first started the installation.
-* If a previous uninstallation of Azure AD Connect fails to uninstall Azure AD Connect sync cleanly, it is not possible to reinstall.
+* If a previous uninstallation of Azure AD Connect fails to uninstall Azure AD Connect sync cleanly, it's not possible to reinstall.
* Cannot install Azure AD Connect using Express installation if the user is not in the root domain of the forest or if a non-English version of Active Directory is used.
* If the FQDN of the Active Directory user account cannot be resolved, a misleading error message "Failed to commit the schema" is shown.
* If the account used on the Active Directory Connector is changed outside the wizard, the wizard fails on subsequent runs.
Released: April 2015
* The Active Directory Connector does not process deletes correctly if the recycle bin is enabled and there are multiple domains in the forest.
* The performance of import operations has been improved for the Azure Active Directory Connector.
-* When a group has exceeded the membership limit (by default, the limit is set to 50,000 objects), the group was deleted in Azure Active Directory. With the new behavior, the group is not deleted, an error is thrown, and new membership changes are not exported.
+* When a group has exceeded the membership limit (by default, the limit is set to 50,000 objects), the group was deleted in Azure Active Directory. With the new behavior, the group is not deleted, an error is thrown, and new membership changes aren't exported.
* A new object cannot be provisioned if a staged delete with the same DN is already present in the connector space.
* Some objects are marked for synchronization during a delta sync even though there's no change staged on the object.
* Forcing a password sync also removes the preferred DC list.
Released: October 2014
**Upgrading from AADSync 1.0 GA**
-If you already have Azure AD Sync installed, there is one additional step you have to take in case you have changed any of the out-of-box synchronization rules. After you have upgraded to the 1.0.470.1023 release, the synchronization rules you have modified are duplicated. For each modified sync rule, do the following:
+If you already have Azure AD Sync installed, there is one additional step you have to take if you've changed any of the out-of-box synchronization rules. After you've upgraded to the 1.0.470.1023 release, the synchronization rules you've modified are duplicated. For each modified sync rule, do the following:
-1. Locate the sync rule you have modified and take a note of the changes.
+1. Locate the sync rule you've modified and take a note of the changes.
1. Delete the sync rule.
1. Locate the new sync rule that is created by Azure AD Sync and then reapply the changes.
active-directory Tshoot Connect Attribute Not Syncing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/tshoot-connect-attribute-not-syncing.md
documentationcenter: '' na
active-directory Tshoot Connect Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/tshoot-connect-connectivity.md
If you use a **Microsoft account** rather than a **school or organization** acco
![A Microsoft Account is used](./media/tshoot-connect-connectivity/unknownerror.png) ### The MFA endpoint cannot be reached
-This error appears if the endpoint **https://secure.aadcdn.microsoftonline-p.com** cannot be reached and your global admin has MFA enabled.
+This error appears if the endpoint **https://secure.aadcdn.microsoftonline-p.com** cannot be reached and your Hybrid Identity Administrator has MFA enabled.
![nomachineconfig](./media/tshoot-connect-connectivity/nomicrosoftonlinep.png) * If you see this error, verify that the endpoint **secure.aadcdn.microsoftonline-p.com** has been added to the proxy.
User was authenticated successfully. However user is not assigned global admin r
</div> ### Privileged Identity Management Enabled
-Authentication was successful. Privileged identity management has been enabled and you are currently not a global administrator. For more information, see [Privileged Identity Management](../privileged-identity-management/pim-getting-started.md).
+Authentication was successful. Privileged identity management has been enabled and you are currently not a Hybrid Identity Administrator. For more information, see [Privileged Identity Management](../privileged-identity-management/pim-getting-started.md).
<div id="get-msolcompanyinformation-failed"> <!--
active-directory Tshoot Connect Install Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/tshoot-connect-install-issues.md
documentationcenter: '' na
active-directory Tshoot Connect Objectsync https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/tshoot-connect-objectsync.md
documentationcenter: '' na
To run the troubleshooting task in the wizard, perform the following steps:
The following input parameters are needed by the troubleshooting task:
1. **Object Distinguished Name** – This is the distinguished name of the object that needs troubleshooting
2. **AD Connector Name** – This is the name of the AD forest where the above object resides.
-3. Azure AD tenant global administrator credentials
-![global administrator credentials](media/tshoot-connect-objectsync/objsynch1.png)
+3. Azure AD tenant Hybrid Identity Administrator credentials
+![Hybrid Identity Administrator credentials](media/tshoot-connect-objectsync/objsynch1.png)
### Understand the results of the troubleshooting task The troubleshooting task performs the following checks:
active-directory Tshoot Connect Sso https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/tshoot-connect-sso.md
If troubleshooting didn't help, you can manually reset the feature on your tenan
### Step 2: Get the list of Active Directory forests on which Seamless SSO has been enabled
-1. Run PowerShell as an administrator. In PowerShell, call `New-AzureADSSOAuthenticationContext`. When prompted, enter your tenant's global administrator or hybrid identity administrator credentials.
+1. Run PowerShell as an administrator. In PowerShell, call `New-AzureADSSOAuthenticationContext`. When prompted, enter your tenant's Hybrid Identity Administrator credentials (see the sketch after these steps).
2. Call `Get-AzureADSSOStatus`. This command provides you with the list of Active Directory forests (look at the "Domains" list) on which this feature has been enabled. ### Step 3: Disable Seamless SSO for each Active Directory forest where you've set up the feature
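Before you disable the feature per forest, the commands from steps 1 and 2 above boil down to the following minimal sketch; the module path is an assumption based on the default Azure AD Connect installation directory:

```powershell
# The AzureADSSO module ships with Azure AD Connect; adjust the path if your
# installation directory differs (this path is an assumption, not from the article).
Import-Module "C:\Program Files\Microsoft Azure Active Directory Connect\AzureADSSO.psd1"

# Step 1: authenticate; enter your Hybrid Identity Administrator credentials when prompted.
New-AzureADSSOAuthenticationContext

# Step 2: list the Active Directory forests (the "Domains" list) where Seamless SSO is enabled.
Get-AzureADSSOStatus
```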
active-directory Tshoot Connect Sync Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/tshoot-connect-sync-errors.md
To resolve this issue:
1. Remove the Azure AD account (owner) from all admin roles. 1. Hard delete the quarantined object in the cloud.
-1. The next sync cycle will take care of soft-matching the on-premises user to the cloud account because the cloud user is now no longer a global admin.
+1. The next sync cycle will take care of soft-matching the on-premises user to the cloud account because the cloud user is no longer a Hybrid Identity Administrator.
1. Restore the role memberships for the owner. >[!NOTE]
active-directory Tutorial Federation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/tutorial-federation.md
na Previously updated : 08/16/2018 Last updated : 11/11/2022
The following tutorial will walk you through creating a hybrid identity environm
## Prerequisites The following are prerequisites required for completing this tutorial-- A computer with [Hyper-V](/windows-server/virtualization/hyper-v/hyper-v-technology-overview) installed. It is suggested to do this on either a [Windows 10](/virtualization/hyper-v-on-windows/about/supported-guest-os) or a [Windows Server 2016](/windows-server/virtualization/hyper-v/supported-windows-guest-operating-systems-for-hyper-v-on-windows) computer.
+- A computer with [Hyper-V](/windows-server/virtualization/hyper-v/hyper-v-technology-overview) installed. It's suggested to do this on either a [Windows 10](/virtualization/hyper-v-on-windows/about/supported-guest-os) or a [Windows Server 2016](/windows-server/virtualization/hyper-v/supported-windows-guest-operating-systems-for-hyper-v-on-windows) computer.
- An [Azure subscription](https://azure.microsoft.com/free) - A copy of Windows Server 2016
The following are prerequisites required for completing this tutorial
The first thing that we need to do, in order to get our hybrid identity environment up and running is to create a virtual machine that will be used as our on-premises Active Directory server. >[!NOTE]
->If you have never run a script in PowerShell on your host machine you will need to run `Set-ExecutionPolicy remotesigned` and say yes in PowerShell, prior to running scripts.
+>If you have never run a script in PowerShell on your host machine, you'll need to run `Set-ExecutionPolicy remotesigned` and answer yes in PowerShell before running scripts.
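For reference, the note above amounts to running this once from an elevated PowerShell prompt on the host:

```powershell
# Allow locally created scripts to run; answer Yes when prompted.
Set-ExecutionPolicy -ExecutionPolicy RemoteSigned
```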
Do the following:
In order to finish building the virtual machine, you need to finish the operatin
1. Hyper-V Manager, double-click on the virtual machine 2. Click on the Start button.
-3. You will be prompted to ΓÇÿPress any key to boot from CD or DVDΓÇÖ. Go ahead and do so.
+3. You'll be prompted to 'Press any key to boot from CD or DVD'. Go ahead and do so.
4. On the Windows Server start up screen, select your language and click **Next**.
5. Click **Install Now**.
6. Enter your license key and click **Next**.
Set-ADUser -Identity $Identity -PasswordNeverExpires $true -ChangePasswordAtLogo
``` ## Create a certificate for AD FS
-Now we will create a TLS/SSL certificate that will be used by AD FS. This is will be a self-signed certificate and is only for testing purposes. Microsoft does not recommend using a self-signed certificate in a production environment. Do the following:
+Now we'll create a TLS/SSL certificate that will be used by AD FS. This will be a self-signed certificate and is only for testing purposes. Microsoft doesn't recommend using a self-signed certificate in a production environment. Do the following:
1. Open up the PowerShell ISE as Administrator.
2. Run the following script.
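A minimal sketch of such a script, assuming a test-only certificate for the `adfs.contoso.com` host name used later in this tutorial (an illustration, not the article's exact script):

```powershell
# Create a self-signed TLS certificate for AD FS testing only; don't use
# self-signed certificates in production.
New-SelfSignedCertificate `
    -DnsName "adfs.contoso.com" `
    -CertStoreLocation "Cert:\LocalMachine\My"
```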
Now we need to create an Azure AD tenant so that we can synchronize our users to
5. Provide a **name for the organization** along with the **initial domain name**. Then select **Create**. This will create your directory. 6. Once this has completed, click the **here** link, to manage the directory.
-## Create a global administrator in Azure AD
-Now that we have an Azure AD tenant, we will create a global administrator account. This account is used to create the Azure AD Connector account during Azure AD Connect installation. The Azure AD Connector account is used to write information to Azure AD. To create the global administrator account do the following.
+## Create a Hybrid Identity Administrator in Azure AD
+Now that we have an Azure AD tenant, we'll create a Hybrid Identity Administrator account. This account is used to create the Azure AD Connector account during Azure AD Connect installation. The Azure AD Connector account is used to write information to Azure AD. To create the Hybrid Identity Administrator account, do the following.
1. Under **Manage**, select **Users**.</br>
-![Screenshot that shows the User option selected in the Manage section where you create a global administrator in Azure AD.](media/tutorial-password-hash-sync/gadmin1.png)</br>
+![Screenshot that shows the User option selected in the Manage section where you create a Hybrid Identity Administrator in Azure AD.](media/tutorial-password-hash-sync/gadmin1.png)</br>
2. Select **All users** and then select **+ New user**.
-3. Provide a name and username for this user. This will be your Global Admin for the tenant. You will also want to change the **Directory role** to **Global administrator.** You can also show the temporary password. When you are done, select **Create**.</br>
+3. Provide a name and username for this user. This will be your Hybrid Identity Administrator for the tenant. You'll also want to change the **Directory role** to **Hybrid Identity Administrator.** You can also show the temporary password. When you're done, select **Create**.</br>
![Screenshot that shows the Create button you select when you create a global administrator in Azure AD.](media/tutorial-password-hash-sync/gadmin2.png)</br> 4. Once this has completed, open a new web browser and sign-in to myapps.microsoft.com using the new global administrator account and the temporary password.
-5. Change the password for the global administrator to something that you will remember.
+5. Change the password for the Hybrid Identity Administrator to something that you'll remember.
## Add the custom domain name to your directory
-Now that we have a tenant and a global administrator, we need to add our custom domain so that Azure can verify it. Do the following:
+Now that we have a tenant and a Hybrid Identity Administrator, we need to add our custom domain so that Azure can verify it. Do the following:
1. Back in the [Azure portal](https://aad.portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/Overview) be sure to close the **All Users** blade. 2. On the left, select **Custom domain names**. 3. Select **Add custom domain**.</br> ![Screenshot that shows the Add custom domain button highlighted.](media/tutorial-federation/custom1.png)</br> 4. On **Custom domain names**, enter the name of your custom domain in the box, and click **Add Domain**.
-5. On the custom domain name screen you will be supplied with either TXT or MX information. This information must be added to the DNS information of the domain registrar under your domain. So you need to go to your domain registrar, enter either the TXT or MX information in the DNS settings for your domain. This will allow Azure to verify your domain. This may take up to 24 hours for Azure to verify it. For more information, see the [add a custom domain](../../active-directory/fundamentals/add-custom-domain.md) documentation.</br>
+5. On the custom domain name screen you'll be supplied with either TXT or MX information. This information must be added to the DNS information of the domain registrar under your domain. So you need to go to your domain registrar, enter either the TXT or MX information in the DNS settings for your domain. This will allow Azure to verify your domain. This may take up to 24 hours for Azure to verify it. For more information, see the [add a custom domain](../../active-directory/fundamentals/add-custom-domain.md) documentation.</br>
![Screenshot that shows where you add the TXT or MX information.](media/tutorial-federation/custom2.png)</br>
-6. To ensure that it is verified, click the Verify button.</br>
+6. To ensure that it's verified, click the Verify button.</br>
![Screenshot that shows a successful verification message after you select Verify.](media/tutorial-federation/custom3.png)</br> ## Download and install Azure AD Connect
-Now it is time to download and install Azure AD Connect. Once it has been installed we will run through the express installation. Do the following:
+Now it's time to download and install Azure AD Connect. Once it has been installed, we'll run through the express installation. Do the following:
1. Download [Azure AD Connect](https://www.microsoft.com/download/details.aspx?id=47594) 2. Navigate to and double-click **AzureADConnect.msi**.
Now it is time to download and install Azure AD Connect. Once it has been insta
9. On the Domain Administrator credentials page, enter the contoso\Administrator username and password and click **Next.** 10. On the AD FS farm screen, make sure **Configure a new AD FS farm** is selected. 11. Select **Use a certificate installed on the federation servers** and click **Browse**.
-12. Enter DC1 in the search box and select it when it is found. Click **Ok**.
+12. Enter DC1 in the search box and select it when it's found. Click **Ok**.
13. From the **Certificate File** drop-down, select **adfs.contoso.com** the certificate we created above. Click **Next**. ![Screenshot that shows where to select the certificate file that you created.](media/tutorial-federation/fed2.png)
-1. On the AD FS server screen, click **Browse** and enter DC1 in the search box and select it when it is found. Click **Ok**. Click **Next**.
+1. On the AD FS server screen, click **Browse** and enter DC1 in the search box and select it when it's found. Click **Ok**. Click **Next**.
![Federation](media/tutorial-federation/fed3.png) 1. On the Web application Proxy servers screen, click **Next**.
Now it is time to download and install Azure AD Connect. Once it has been insta
## Verify users are created and synchronization is occurring
-We will now verify that the users that we had in our on-premises directory have been synchronized and now exist in out Azure AD tenant. Be aware that this may take a few hours to complete. To verify users are synchronized do the following.
+We'll now verify that the users that we had in our on-premises directory have been synchronized and now exist in our Azure AD tenant. Be aware that this may take a few hours to complete. To verify users are synchronized, do the following.
1. Browse to the [Azure portal](https://portal.azure.com) and sign in with an account that has an Azure subscription.
We will now verify that the users that we had in our on-premises directory have
## Test signing in with one of our users 1. Browse to [https://myapps.microsoft.com](https://myapps.microsoft.com)
-2. Sign-in with a user account that was created in our new tenant. You will need to sign-in using the following format: (user@domain.onmicrosoft.com). Use the same password that the user uses to sign-in on-premises.
+2. Sign in with a user account that was created in our new tenant. You'll need to sign in using the following format: (user@domain.onmicrosoft.com). Use the same password that the user uses to sign in on-premises.
![Verify](media/tutorial-password-hash-sync/verify1.png) You have now successfully set up a hybrid identity environment that you can use to test and familiarize yourself with what Azure has to offer.
active-directory Tutorial Passthrough Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/tutorial-passthrough-authentication.md
Now we need to create an Azure AD tenant so that we can synchronize our users to
5. Provide a **name for the organization** along with the **initial domain name**. Then select **Create**. This will create your directory. 6. Once this has completed, click the **here** link, to manage the directory.
-## Create a global administrator in Azure AD
-Now that we have an Azure AD tenant, we will create a global administrator account. This account is used to create the Azure AD Connector account during Azure AD Connect installation. The Azure AD Connector account is used to write information to Azure AD. To create the global administrator account do the following.
+## Create a Hybrid Identity Administrator in Azure AD
+Now that we have an Azure AD tenant, we will create a Hybrid Identity Administrator account. This account is used to create the Azure AD Connector account during Azure AD Connect installation. The Azure AD Connector account is used to write information to Azure AD. To create the Hybrid Identity Administrator account, do the following.
1. Under **Manage**, select **Users**.</br>
-![Screenshot that shows the User option selected in the Manage section where you create a global administrator in Azure AD.](media/tutorial-password-hash-sync/gadmin1.png)</br>
+![Screenshot that shows the User option selected in the Manage section where you create a Hybrid Identity Administrator in Azure AD.](media/tutorial-password-hash-sync/gadmin1.png)</br>
2. Select **All users** and then select **+ New user**.
-3. Provide a name and username for this user. This will be your Global Admin for the tenant. You will also want to change the **Directory role** to **Global administrator.** You can also show the temporary password. When you are done, select **Create**.</br>
-![Screenshot that shows the Create button you select when you create a global administrator in Azure AD.](media/tutorial-password-hash-sync/gadmin2.png)</br>
-4. Once this has completed, open a new web browser and sign-in to myapps.microsoft.com using the new global administrator account and the temporary password.
-5. Change the password for the global administrator to something that you will remember.
+3. Provide a name and username for this user. This will be your Hybrid Identity Administrator for the tenant. You will also want to change the **Directory role** to **Hybrid Identity Administrator.** You can also show the temporary password. When you are done, select **Create**.</br>
+![Screenshot that shows the Create button you select when you create a Hybrid Identity Administrator in Azure AD.](media/tutorial-password-hash-sync/gadmin2.png)</br>
+4. Once this has completed, open a new web browser and sign-in to myapps.microsoft.com using the new Hybrid Identity Administrator account and the temporary password.
+5. Change the password for the Hybrid Identity Administrator to something that you will remember.
## Add the custom domain name to your directory
-Now that we have a tenant and a global administrator, we need to add our custom domain so that Azure can verify it. Do the following:
+Now that we have a tenant and a Hybrid Identity Administrator, we need to add our custom domain so that Azure can verify it. Do the following:
1. Back in the [Azure portal](https://aad.portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/Overview) be sure to close the **All Users** blade. 2. On the left, select **Custom domain names**.
active-directory Tutorial Password Hash Sync https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/tutorial-password-hash-sync.md
Now we need to create an Azure AD tenant so that we can synchronize our users to
5. Provide a **name for the organization** along with the **initial domain name**. Then select **Create**. This will create your directory. 6. Once this has completed, click the **here** link, to manage the directory.
-## Create a global administrator in Azure AD
-Now that we have an Azure AD tenant, we will create a global administrator account. This account is used to create the Azure AD Connector account during Azure AD Connect installation. The Azure AD Connector account is used to write information to Azure AD. To create the global administrator account do the following.
+## Create a Hybrid Identity Administrator in Azure AD
+Now that we have an Azure AD tenant, we will create a Hybrid Identity Administrator account. This account is used to create the Azure AD Connector account during Azure AD Connect installation. The Azure AD Connector account is used to write information to Azure AD. To create the Hybrid Identity Administrator account, do the following.
1. Under **Manage**, select **Users**.</br>
-![Screenshot that shows the User option selected in the Manage section where you create a global administrator in Azure AD.](media/tutorial-password-hash-sync/gadmin1.png)</br>
+![Screenshot that shows the User option selected in the Manage section where you create a Hybrid Identity Administrator in Azure AD.](media/tutorial-password-hash-sync/gadmin1.png)</br>
2. Select **All users** and then select **+ New user**.
-3. Provide a name and username for this user. This will be your Global Admin for the tenant. You will also want to change the **Directory role** to **Global administrator.** You can also show the temporary password. When you are done, select **Create**.</br>
-![Screenshot that shows the Create button you select when you create a global administrator in Azure AD.](media/tutorial-password-hash-sync/gadmin2.png)</br>
-4. Once this has completed, open a new web browser and sign-in to myapps.microsoft.com using the new global administrator account and the temporary password.
-5. Change the password for the global administrator to something that you will remember.
+3. Provide a name and username for this user. This will be your Hybrid Identity Administrator for the tenant. You will also want to change the **Directory role** to **Hybrid Identity Administrator.** You can also show the temporary password. When you are done, select **Create**.</br>
+![Screenshot that shows the Create button you select when you create a Hybrid Identity Administrator in Azure AD.](media/tutorial-password-hash-sync/gadmin2.png)</br>
+4. Once this has completed, open a new web browser and sign-in to myapps.microsoft.com using the new Hybrid Identity Administrator account and the temporary password.
+5. Change the password for the Hybrid Identity Administrator to something that you will remember.
## Download and install Azure AD Connect Now it is time to download and install Azure AD Connect. Once it has been installed we will run through the express installation. Do the following:
Now it is time to download and install Azure AD Connect. Once it has been insta
3. On the Welcome screen, select the box agreeing to the licensing terms and click **Continue**. 4. On the Express settings screen, click **Use express settings**.</br> ![Screenshot that shows the Express settings screen and the Use express settings button.](media/tutorial-password-hash-sync/express1.png)</br>
-5. On the Connect to Azure AD screen, enter the username and password the global administrator for Azure AD. Click **Next**.
+5. On the Connect to Azure AD screen, enter the username and password of the Hybrid Identity Administrator for Azure AD. Click **Next**.
6. On the Connect to AD DS screen, enter the username and password for an enterprise admin account. Click **Next**. 7. On the Ready to configure screen, click **Install**. 8. When the installation completes, click **Exit**.
active-directory Tutorial Phs Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/tutorial-phs-backup.md
Do the following:
1. Double-click the Azure AD Connect icon that was created on the desktop 2. Click **Configure**. 3. On the Additional tasks page, select **Customize synchronization options** and click **Next**.
-4. Enter the username and password for your global administrator or your hybrid identity administrator. This account was created [here](tutorial-federation.md#create-a-global-administrator-in-azure-ad) in the previous tutorial.
+4. Enter the username and password for your Hybrid Identity Administrator. This account was created [here](tutorial-federation.md#create-a-hybrid-identity-administrator-in-azure-ad) in the previous tutorial.
5. On the **Connect your directories** screen, click **Next**. 6. On the **Domain and OU filtering** screen, click **Next**. 7. On the **Optional features** screen, check **Password hash synchronization** and click **Next**.
Now, we will show you how to switch over to password hash synchronization. Befor
2. Click **Configure**. 3. Select **Change user sign-in** and click **Next**. ![Change](media/tutorial-phs-backup/backup2.png)</br>
-4. Enter the username and password for your global administrator or your hybrid identity administrator. This account was created [here](tutorial-federation.md#create-a-global-administrator-in-azure-ad) in the previous tutorial.
+4. Enter the username and password for your Hybrid Identity Administrator. This account was created [here](tutorial-federation.md#create-a-hybrid-identity-administrator-in-azure-ad) in the previous tutorial.
5. On the **User sign-in** screen, select **Password Hash Synchronization** and place a check in the **Do not convert user accounts** box. 6. Leave the default **Enable single sign-on** selected and click **Next**. 7. On the **Enable single sign-on** screen click **Next**.
Now, we will show you how to switch back to federation. To do this, do the foll
1. Double-click the Azure AD Connect icon that was created on the desktop 2. Click **Configure**. 3. Select **Change user sign-in** and click **Next**.
-4. Enter the username and password for your global administrator or your hybrid identity administrator. This is the account that was created [here](tutorial-federation.md#create-a-global-administrator-in-azure-ad) in the previous tutorial.
+4. Enter the username and password for your Hybrid Identity Administrator. This is the account that was created [here](tutorial-federation.md#create-a-hybrid-identity-administrator-in-azure-ad) in the previous tutorial.
5. On the **User sign-in** screen, select **Federation with AD FS** and click **Next**. 6. On the Domain Administrator credentials page, enter the contoso\Administrator username and password and click **Next.** 7. On the AD FS farm screen, click **Next**.
active-directory Whatis Aadc Admin Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/whatis-aadc-admin-agent.md
The Azure AD Connect Administration Agent binaries are placed in the Azure AD Co
2. Navigate to the directory where the application is located: cd "C:\Program Files\Microsoft Azure Active Directory Connect\Tools"
3. Run ConfigureAdminAgent.ps1
-When prompted, please enter your Azure AD global admin credentials. These credentials should be the same credentials entered during Azure AD Connect installation.
+When prompted, please enter your Azure AD Hybrid Identity Administrator credentials. These credentials should be the same credentials entered during Azure AD Connect installation.
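Put together, steps 2 and 3 look like the following from an elevated PowerShell session on the Azure AD Connect server:

```powershell
# Change to the tools directory that ships with Azure AD Connect.
Set-Location "C:\Program Files\Microsoft Azure Active Directory Connect\Tools"

# Run the configuration script; when prompted, sign in with the same
# Hybrid Identity Administrator credentials used during Azure AD Connect installation.
.\ConfigureAdminAgent.ps1
```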
After the agent is installed, you'll see the following two new programs in the "Add/Remove Programs" list in the Control Panel of your server:
active-directory Administrative Units https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/administrative-units.md
This article describes administrative units in Azure Active Directory (Azure AD)
Administrative units restrict permissions in a role to any portion of your organization that you define. You could, for example, use administrative units to delegate the [Helpdesk Administrator](permissions-reference.md#helpdesk-administrator) role to regional support specialists, so they can manage users only in the region that they support.
+Users can be members of multiple administrative units. For example, you might add users to administrative units by geography and division; Megan Bowen might be in the "Seattle" and "Marketing" administrative units.
+ ## Deployment scenario It can be useful to restrict administrative scope by using administrative units in organizations that are made up of independent divisions of any kind. Consider the example of a large university that's made up of many autonomous schools (School of Business, School of Engineering, and so on). Each school has a team of IT admins who control access, manage users, and set policies for their school.
active-directory Permissions Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/permissions-reference.md
This article lists the Azure AD built-in roles you can assign to allow managemen
> | [Lifecycle Workflows Administrator](#lifecycle-workflows-administrator) | Create and manage all aspects of workflows and tasks associated with Lifecycle Workflows in Azure AD. | 59d46f88-662b-457b-bceb-5c3809e5908f | > | [Message Center Privacy Reader](#message-center-privacy-reader) | Can read security messages and updates in Office 365 Message Center only. | ac16e43d-7b2d-40e0-ac05-243ff356ab5b | > | [Message Center Reader](#message-center-reader) | Can read messages and updates for their organization in Office 365 Message Center only. | 790c1fb9-7f7d-4f88-86a1-ef1f95c05c1b |
+> | [Microsoft Hardware Warranty Administrator](#microsoft-hardware-warranty-administrator) | Create and manage all aspects of warranty claims and entitlements for Microsoft manufactured hardware, like Surface and HoloLens. | 1501b917-7653-4ff9-a4b5-203eaf33784f |
+> | [Microsoft Hardware Warranty Specialist](#microsoft-hardware-warranty-specialist) | Create and read warranty claims for Microsoft manufactured hardware, like Surface and HoloLens. | 281fe777-fb20-4fbb-b7a3-ccebce5b0d96 |
> | [Modern Commerce User](#modern-commerce-user) | Can manage commercial purchases for a company, department or team. | d24aef57-1500-4070-84db-2666f29cf966 | > | [Network Administrator](#network-administrator) | Can manage network locations and review enterprise network design insights for Microsoft 365 Software as a Service applications. | d37c8bed-0711-4417-ba38-b4abe66ce4c2 | > | [Office Apps Administrator](#office-apps-administrator) | Can manage Office apps cloud services, including policy and settings management, and manage the ability to select, unselect and publish 'what's new' feature content to end-user's devices. | 2b745bdf-0803-4d80-aa65-822c4493daac |
+> | [Organizational Messages Writer](#organizational-messages-writer) | Write, publish, manage, and review the organizational messages for end-users through Microsoft product surfaces. | 507f53e4-4e52-4077-abd3-d2e1558b6ea2 |
> | [Partner Tier1 Support](#partner-tier1-support) | Do not use - not intended for general use. | 4ba39ca4-527c-499a-b93d-d9b492c50246 | > | [Partner Tier2 Support](#partner-tier2-support) | Do not use - not intended for general use. | e00e864a-17c5-4a4b-9c06-f5b95a8d5bd8 | > | [Password Administrator](#password-administrator) | Can reset passwords for non-administrators and Password Administrators. | 966707d0-3269-4727-9be2-8c3a10f19b9d |
Do not use. This role is automatically assigned to the Azure AD Connect service,
## Directory Writers
-Users in this role can read and update basic information of users, groups, and service principals. Assign this role only to applications that donΓÇÖt support the [Consent Framework](../develop/quickstart-register-app.md). It should not be assigned to any users.
+Users in this role can read and update basic information of users, groups, and service principals.
> [!div class="mx-tableFixed"] > | Actions | Description |
Users with this role have access to all administrative features in Azure Active
> | microsoft.office365.messageCenter/messages/read | Read messages in Message Center in the Microsoft 365 admin center, excluding security messages | > | microsoft.office365.messageCenter/securityMessages/read | Read security messages in Message Center in the Microsoft 365 admin center | > | microsoft.office365.network/performance/allProperties/read | Read all network performance properties in the Microsoft 365 admin center |
-> | microsoft.office365.organizationalMessages/allEntities/allProperties/allTasks | Manage all aspects of Microsoft 365 organizational message center |
+> | microsoft.office365.organizationalMessages/allEntities/allProperties/allTasks | Manage all authoring aspects of Microsoft 365 Organizational Messages |
> | microsoft.office365.protectionCenter/allEntities/allProperties/allTasks | Manage all aspects of the Security and Compliance centers | > | microsoft.office365.search/content/manage | Create and delete content, and read and update all properties in Microsoft Search | > | microsoft.office365.securityComplianceCenter/allEntities/allTasks | Create and delete all resources, and read and update standard properties in the Office 365 Security & Compliance Center |
Users with this role **cannot** do the following:
> | microsoft.office365.messageCenter/messages/read | Read messages in Message Center in the Microsoft 365 admin center, excluding security messages | > | microsoft.office365.messageCenter/securityMessages/read | Read security messages in Message Center in the Microsoft 365 admin center | > | microsoft.office365.network/performance/allProperties/read | Read all network performance properties in the Microsoft 365 admin center |
-> | microsoft.office365.organizationalMessages/allEntities/allProperties/read | Read all aspects of Microsoft 365 organizational message center |
+> | microsoft.office365.organizationalMessages/allEntities/allProperties/read | Read all aspects of Microsoft 365 Organizational Messages |
> | microsoft.office365.protectionCenter/allEntities/allProperties/read | Read all properties in the Security and Compliance centers | > | microsoft.office365.securityComplianceCenter/allEntities/read | Read standard properties in Microsoft 365 Security and Compliance Center | > | microsoft.office365.usageReports/allEntities/allProperties/read | Read Office 365 usage reports |
This role can create and manage all security groups. However, Intune Administrat
> | microsoft.azure.supportTickets/allEntities/allTasks | Create and manage Azure support tickets | > | microsoft.cloudPC/allEntities/allProperties/allTasks | Manage all aspects of Windows 365 | > | microsoft.intune/allEntities/allTasks | Manage all aspects of Microsoft Intune |
-> | microsoft.office365.organizationalMessages/allEntities/allProperties/read | Read all aspects of Microsoft 365 organizational message center |
+> | microsoft.office365.organizationalMessages/allEntities/allProperties/read | Read all aspects of Microsoft 365 Organizational Messages |
> | microsoft.office365.supportTickets/allEntities/allTasks | Create and manage Microsoft 365 service requests | > | microsoft.office365.webPortal/allEntities/standard/read | Read basic properties on all resources in the Microsoft 365 admin center |
Users in this role can monitor notifications and advisory health updates in [Mes
> | microsoft.office365.messageCenter/messages/read | Read messages in Message Center in the Microsoft 365 admin center, excluding security messages | > | microsoft.office365.webPortal/allEntities/standard/read | Read basic properties on all resources in the Microsoft 365 admin center |
+## Microsoft Hardware Warranty Administrator
+
+Assign the Microsoft Hardware Warranty Administrator role to users who need to do the following tasks:
+
+- Create new warranty claims for Microsoft manufactured hardware, like Surface and HoloLens
+- Search and read opened or closed warranty claims
+- Search and read warranty claims by serial number
+- Create, read, update, and delete shipping addresses
+- Read shipping status for open warranty claims
+- Create and manage service requests in the Microsoft 365 admin center
+- Read Message center announcements in the Microsoft 365 admin center
+
+A warranty claim is a request to have the hardware repaired or replaced in accordance with the terms of the warranty. For more information, see [Self-serve your Surface warranty & service requests](/surface/self-serve-warranty-service).
+
+> [!div class="mx-tableFixed"]
+> | Actions | Description |
+> | | |
+> | microsoft.office365.messageCenter/messages/read | Read messages in Message Center in the Microsoft 365 admin center, excluding security messages |
+> | microsoft.office365.supportTickets/allEntities/allTasks | Create and manage Microsoft 365 service requests |
+> | microsoft.office365.webPortal/allEntities/standard/read | Read basic properties on all resources in the Microsoft 365 admin center |
+
+## Microsoft Hardware Warranty Specialist
+
+Assign the Microsoft Hardware Warranty Specialist role to users who need to do the following tasks:
+
+- Create new warranty claims for Microsoft manufactured hardware, like Surface and HoloLens
+- Read warranty claims that they created
+- Read and update existing shipping addresses
+- Read shipping status for open warranty claims they created
+- Create and manage service requests in the Microsoft 365 admin center
+
+A warranty claim is a request to have the hardware repaired or replaced in accordance with the terms of the warranty. For more information, see [Self-serve your Surface warranty & service requests](/surface/self-serve-warranty-service).
+
+> [!div class="mx-tableFixed"]
+> | Actions | Description |
+> | | |
+> | microsoft.office365.supportTickets/allEntities/allTasks | Create and manage Microsoft 365 service requests |
+> | microsoft.office365.webPortal/allEntities/standard/read | Read basic properties on all resources in the Microsoft 365 admin center |
+ ## Modern Commerce User Do not use. This role is automatically assigned from Commerce, and is not intended or supported for any other use. See details below.
Users in this role can manage Microsoft 365 apps' cloud settings. This includes
> | microsoft.office365.userCommunication/allEntities/allTasks | Read and update what's new messages visibility | > | microsoft.office365.webPortal/allEntities/standard/read | Read basic properties on all resources in the Microsoft 365 admin center |
+## Organizational Messages Writer
+
+Assign the Organizational Messages Writer role to users who need to do the following tasks:
+
+- Write, publish, and delete organizational messages using Microsoft 365 admin center or Microsoft Endpoint Manager
+- Manage organizational message delivery options using Microsoft 365 admin center or Microsoft Endpoint Manager
+- Read organizational message delivery results using Microsoft 365 admin center or Microsoft Endpoint Manager
+- View usage reports and most settings in the Microsoft 365 admin center, but can't make changes
+
+> [!div class="mx-tableFixed"]
+> | Actions | Description |
+> | | |
+> | microsoft.office365.organizationalMessages/allEntities/allProperties/allTasks | Manage all authoring aspects of Microsoft 365 Organizational Messages |
+> | microsoft.office365.usageReports/allEntities/standard/read | Read tenant-level aggregated Office 365 usage reports |
+> | microsoft.office365.webPortal/allEntities/standard/read | Read basic properties on all resources in the Microsoft 365 admin center |
+ ## Partner Tier1 Support Do not use. This role has been deprecated and will be removed from Azure AD in the future. This role is intended for use by a small number of Microsoft resale partners, and is not intended for general use.
active-directory Confluencemicrosoft Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/confluencemicrosoft-tutorial.md
As of now, following versions of Confluence are supported:
- Confluence: 5.0 to 5.10
- Confluence: 6.0.1 to 6.15.9
-- Confluence: 7.0.1 to 7.19.0
+- Confluence: 7.0.1 to 7.20.0
> [!NOTE] > Please note that our Confluence Plugin also works on Ubuntu Version 16.04
active-directory Jiramicrosoft Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/jiramicrosoft-tutorial.md
Use your Microsoft Azure Active Directory account with Atlassian JIRA server to
To configure Azure AD integration with JIRA SAML SSO by Microsoft, you need the following items: - An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).-- JIRA Core and Software 6.4 to 8.22.1 or JIRA Service Desk 3.0 to 4.22.1 should installed and configured on Windows 64-bit version.
+- JIRA Core and Software 6.4 to 9.1.0 or JIRA Service Desk 3.0 to 4.22.1 should be installed and configured on a Windows 64-bit version.
- JIRA server is HTTPS enabled.
- Note that the supported versions for the JIRA Plugin are mentioned in the section below.
- JIRA server is reachable on the Internet, particularly to the Azure AD login page for authentication, and should be able to receive the token from Azure AD.
To get started, you need the following items:
## Supported versions of JIRA
-* JIRA Core and Software: 6.4 to 8.22.1.
+* JIRA Core and Software: 6.4 to 9.1.0
* JIRA Service Desk 3.0 to 4.22.1. * JIRA also supports 5.2. For more details, click [Microsoft Azure Active Directory single sign-on for JIRA 5.2](jira52microsoft-tutorial.md).
active-directory Ms Confluence Jira Plugin Adminguide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/ms-confluence-jira-plugin-adminguide.md
Note the following information before you install the plug-in:
The plug-in supports the following versions of Jira and Confluence:
-* Jira Core and Software: 6.0 to 8.22.1.
+* Jira Core and Software: 6.0 to 9.1.0
* Jira Service Desk: 3.0.0 to 4.22.1. * JIRA also supports 5.2. For more details, click [Microsoft Azure Active Directory single sign-on for JIRA 5.2](./jira52microsoft-tutorial.md). * Confluence: 5.0 to 5.10. * Confluence: 6.0.1 to 6.15.9.
-* Confluence: 7.0.1 to 7.19.0.
+* Confluence: 7.0.1 to 7.20.0.
## Installation
No. The plug-in supports only on-premises versions of Jira and Confluence.
The plug-in supports these versions:
-* Jira Core and Software: 6.0 to 8.22.1.
+* Jira Core and Software: 6.0 to 9.1.0.
* Jira Service Desk: 3.0.0 to 4.22.1. * JIRA also supports 5.2. For more details, click [Microsoft Azure Active Directory single sign-on for JIRA 5.2](./jira52microsoft-tutorial.md). * Confluence: 5.0 to 5.10. * Confluence: 6.0.1 to 6.15.9.
-* Confluence: 7.0.1 to 7.19.0.
+* Confluence: 7.0.1 to 7.20.0.
### Is the plug-in free or paid?
aks Operator Best Practices Advanced Scheduler https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/operator-best-practices-advanced-scheduler.md
description: Learn the cluster operator best practices for using advanced scheduler features such as taints and tolerations, node selectors and affinity, or inter-pod affinity and anti-affinity in Azure Kubernetes Service (AKS) Previously updated : 03/09/2021 Last updated : 11/11/2022 # Best practices for advanced scheduler features in Azure Kubernetes Service (AKS) As you manage clusters in Azure Kubernetes Service (AKS), you often need to isolate teams and workloads. Advanced features provided by the Kubernetes scheduler let you control:+ * Which pods can be scheduled on certain nodes.
-* How multi-pod applications can be appropriately distributed across the cluster.
+* How multi-pod applications can be appropriately distributed across the cluster.
This best practices article focuses on advanced Kubernetes scheduling features for cluster operators. In this article, you learn how to:
This best practices article focuses on advanced Kubernetes scheduling features f
## Provide dedicated nodes using taints and tolerations
-> **Best practice guidance:**
+> **Best practice guidance:**
> > Limit access for resource-intensive applications, such as ingress controllers, to specific nodes. Keep node resources available for workloads that require them, and don't allow scheduling of other workloads on the nodes.
-When you create your AKS cluster, you can deploy nodes with GPU support or a large number of powerful CPUs. You can use these nodes for large data processing workloads such as machine learning (ML) or artificial intelligence (AI).
+When you create your AKS cluster, you can deploy nodes with GPU support or a large number of powerful CPUs. You can use these nodes for large data processing workloads such as machine learning (ML) or artificial intelligence (AI).
-Since this node resource hardware is typically expensive to deploy, limit the workloads that can be scheduled on these nodes. Instead, you'd dedicate some nodes in the cluster to run ingress services and prevent other workloads.
+Because this node resource hardware is typically expensive to deploy, limit the workloads that can be scheduled on these nodes. Instead, dedicate some nodes in the cluster to run ingress services and prevent other workloads.
-This support for different nodes is provided by using multiple node pools. An AKS cluster provides one or more node pools.
+This support for different nodes is provided by using multiple node pools. An AKS cluster supports one or more node pools.
The Kubernetes scheduler uses taints and tolerations to restrict what workloads can run on nodes. * Apply a **taint** to a node to indicate only specific pods can be scheduled on them. * Then apply a **toleration** to a pod, allowing them to *tolerate* a node's taint.
-When you deploy a pod to an AKS cluster, Kubernetes only schedules pods on nodes whose taint aligns with the toleration. For example, assume you added a node pool in your AKS cluster for nodes with GPU support. You define name, such as *gpu*, then a value for scheduling. Setting this value to *NoSchedule* restricts the Kubernetes scheduler from scheduling pods with undefined toleration on the node.
+When you deploy a pod to an AKS cluster, Kubernetes only schedules pods on nodes whose taint aligns with the toleration. Taints and tolerations work together to ensure that pods are not scheduled onto inappropriate nodes. One or more taints are applied to a node, marking the node so that it does not accept any pods that do not tolerate the taints.
+
+For example, assume you added a node pool in your AKS cluster for nodes with GPU support. You define name, such as *gpu*, then a value for scheduling. Setting this value to *NoSchedule* restricts the Kubernetes scheduler from scheduling pods with undefined toleration on the node.
```azurecli-interactive az aks nodepool add \
For more information about how to use multiple node pools in AKS, see [Create an
When you upgrade a node pool in AKS, taints and tolerations follow a set pattern as they're applied to new nodes: #### Default clusters that use VM scale sets+ You can [taint a node pool][taint-node-pool] from the AKS API to have newly scaled out nodes receive API specified node taints. Let's assume:+ 1. You begin with a two-node cluster: *node1* and *node2*. 1. You upgrade the node pool. 1. Two additional nodes are created: *node3* and *node4*.
Let's assume:
#### Clusters without VM scale set support Again, let's assume:+ 1. You have a two-node cluster: *node1* and *node2*. 1. You upgrade the node pool. 1. An additional node is created: *node3*.
When you scale a node pool in AKS, taints and tolerations do not carry over by d
## Control pod scheduling using node selectors and affinity
-> **Best practice guidance**
->
+> **Best practice guidance**
+>
> Control the scheduling of pods on nodes using node selectors, node affinity, or inter-pod affinity. These settings allow the Kubernetes scheduler to logically isolate workloads, such as by hardware in the node. Taints and tolerations logically isolate resources with a hard cut-off. If the pod doesn't tolerate a node's taint, it isn't scheduled on the node.
spec:
cpu: 4.0 memory: 16Gi nodeSelector:
    hardware: highmem
``` When you use these scheduler options, work with your application developers and owners to allow them to correctly define their pod specifications.
For more information about using node selectors, see [Assigning Pods to Nodes][k
### Node affinity
-A node selector is a basic solution for assigning pods to a given node. *Node affinity* provides more flexibility, allowing you to define what happens if the pod can't be matched with a node. You can:
+A node selector is a basic solution for assigning pods to a given node. *Node affinity* provides more flexibility, allowing you to define what happens if the pod can't be matched with a node. You can:
+ * *Require* that Kubernetes scheduler matches a pod with a labeled host. Or, * *Prefer* a match but allow the pod to be scheduled on a different host if no match is available.
spec:
- matchExpressions: - key: hardware operator: In
- values: highmem
+ values:
+ - highmem
``` The *IgnoredDuringExecution* part of the setting indicates that the pod shouldn't be evicted from the node if the node labels change. The Kubernetes scheduler only uses the updated node labels for new pods being scheduled, not pods already scheduled on the nodes.
For more information, see [Affinity and anti-affinity][k8s-affinity].
One final approach for the Kubernetes scheduler to logically isolate workloads is using inter-pod affinity or anti-affinity. These settings define that pods either *shouldn't* or *should* be scheduled on a node that has an existing matching pod. By default, the Kubernetes scheduler tries to schedule multiple pods in a replica set across nodes. You can define more specific rules around this behavior.
-For example, you have a web application that also uses an Azure Cache for Redis.
-1. You use pod anti-affinity rules to request that the Kubernetes scheduler distributes replicas across nodes.
-1. You use affinity rules to ensure each web app component is scheduled on the same host as a corresponding cache.
+For example, you have a web application that also uses an Azure Cache for Redis.
+
+* You use pod anti-affinity rules to request that the Kubernetes scheduler distributes replicas across nodes.
+* You use affinity rules to ensure each web app component is scheduled on the same host as a corresponding cache.
The distribution of pods across nodes looks like the following example:
The distribution of pods across nodes looks like the following example:
| webapp-1 | webapp-2 | webapp-3 | | cache-1 | cache-2 | cache-3 |
-Inter-pod affinity and anti-affinity provide a more complex deployment than node selectors or node affinity. With the deployment, you logically isolate resources and control how Kubernetes schedules pods on nodes.
+Inter-pod affinity and anti-affinity provide a more complex deployment than node selectors or node affinity. With the deployment, you logically isolate resources and control how Kubernetes schedules pods on nodes.
For a complete example of this web application with Azure Cache for Redis example, see [Co-locate pods on the same node][k8s-pod-affinity].
This article focused on advanced Kubernetes scheduler features. For more informa
* [Authentication and authorization][aks-best-practices-identity] <!-- EXTERNAL LINKS -->
-[k8s-taints-tolerations]: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
[k8s-node-selector]: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/ [k8s-affinity]: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity [k8s-pod-affinity]: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#always-co-located-in-the-same-node <!-- INTERNAL LINKS --> [aks-best-practices-scheduler]: operator-best-practices-scheduler.md
-[aks-best-practices-cluster-isolation]: operator-best-practices-cluster-isolation.md
[aks-best-practices-identity]: operator-best-practices-identity.md
[use-multiple-node-pools]: use-multiple-node-pools.md
[taint-node-pool]: use-multiple-node-pools.md#specify-a-taint-label-or-tag-for-a-node-pool
api-management How To Self Hosted Gateway On Kubernetes In Production https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-self-hosted-gateway-on-kubernetes-in-production.md
The minimum number of replicas suitable for production is three, preferably comb
By default, a self-hosted gateway is deployed with a **RollingUpdate** deployment [strategy](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#strategy). Review the default values and consider explicitly setting the [maxUnavailable](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#max-unavailable) and [maxSurge](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#max-surge) fields, especially when you're using a high replica count.
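For instance, a gateway deployment running three replicas might pin the rollout behavior explicitly rather than rely on the defaults. This is only a sketch of the relevant deployment fields, with values chosen as assumptions to adapt to your own replica count.

```yaml
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # take at most one replica down at a time during a rollout
      maxSurge: 1         # allow one extra replica to be created while rolling
```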
+## Performance
+
+We recommend reducing container logs to warnings (`warn`) to improve performance. Learn more in our [self-hosted gateway configuration reference](self-hosted-gateway-settings-reference.md).
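For example, if you deployed the gateway with the portal-generated YAML, the log level is typically set through the gateway's environment ConfigMap. The ConfigMap name, namespace, and the `telemetry.logs.std.level` key below are assumptions based on the settings reference, so verify them against your own deployment.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-gateway-env                 # hypothetical ConfigMap name
  namespace: apim-gateway              # hypothetical namespace
data:
  config.service.endpoint: "<configuration endpoint>"
  telemetry.logs.std.level: "warn"     # reduce container logs to warnings
```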
+
## Security
The self-hosted gateway is able to run as non-root in Kubernetes, allowing customers to run the gateway securely.
app-service Configure Custom Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-custom-container.md
For your custom Windows image, you must choose the right [parent image (base ima
It takes some time to download a parent image during app start-up. However, you can reduce start-up time by using one of the following parent images that are already cached in Azure App Service:
-- [https://mcr.microsoft.com/windows/servercore:ltsc2022](https://mcr.microsoft.com/windows/servercore:ltsc2022)
-- [https://mcr.microsoft.com/windows/servercore:ltsc2019](https://mcr.microsoft.com/windows/servercore:ltsc2019)
-- [https://mcr.microsoft.com/dotnet/framework/aspnet:4.8-windowsservercore-ltsc2022](https://mcr.microsoft.com/dotnet/framework/aspnet:4.8-windowsservercore-ltsc2022)
-- [https://mcr.microsoft.com/dotnet/framework/aspnet:4.8-windowsservercore-ltsc2019](https://mcr.microsoft.com/dotnet/framework/aspnet:4.8-windowsservercore-ltsc2019)
-- [https://mcr.microsoft.com/dotnet/runtime:3.1-nanoserver-ltsc2022](https://mcr.microsoft.com/dotnet/runtime:3.1-nanoserver-ltsc2022)
-- [https://mcr.microsoft.com/dotnet/runtime:3.1-nanoserver-1809](https://mcr.microsoft.com/dotnet/runtime:3.1-nanoserver-1809)
-- [https://mcr.microsoft.com/dotnet/runtime:6.0-nanoserver-ltsc2022](https://mcr.microsoft.com/dotnet/runtime:6.0-nanoserver-ltsc2022)
-- [https://mcr.microsoft.com/dotnet/runtime:6.0-nanoserver-1809](https://mcr.microsoft.com/dotnet/runtime:6.0-nanoserver-1809)
-- [https://mcr.microsoft.com/dotnet/aspnet:3.1-nanoserver-ltsc2022](https://mcr.microsoft.com/dotnet/aspnet:3.1-nanoserver-ltsc2022)
-- [https://mcr.microsoft.com/dotnet/aspnet:3.1-nanoserver-1809](https://mcr.microsoft.com/dotnet/aspnet:3.1-nanoserver-1809)
-- [https://mcr.microsoft.com/dotnet/aspnet:6.0-nanoserver-ltsc2022](https://mcr.microsoft.com/dotnet/aspnet:6.0-nanoserver-ltsc2022)
-- [https://mcr.microsoft.com/dotnet/aspnet:6.0-nanoserver-1809](https://mcr.microsoft.com/dotnet/aspnet:6.0-nanoserver-1809)
+- [https://mcr.microsoft.com/windows/servercore:ltsc2022](https://mcr.microsoft.com/product/windows/servercore/about)
+- [https://mcr.microsoft.com/windows/servercore:ltsc2019](https://mcr.microsoft.com/product/windows/servercore/about)
+- [https://mcr.microsoft.com/dotnet/framework/aspnet](https://mcr.microsoft.com/product/dotnet/framework/aspnet/tags):4.8-windowsservercore-ltsc2022
+- [https://mcr.microsoft.com/dotnet/framework/aspnet](https://mcr.microsoft.com/product/dotnet/framework/aspnet/tags):4.8-windowsservercore-ltsc2019
+- [https://mcr.microsoft.com/dotnet/runtime](https://mcr.microsoft.com/product/dotnet/runtime/tags):6.0-nanoserver-ltsc2022
+- [https://mcr.microsoft.com/dotnet/runtime](https://mcr.microsoft.com/product/dotnet/runtime/tags):6.0-nanoserver-1809
+- [https://mcr.microsoft.com/dotnet/aspnet](https://mcr.microsoft.com/product/dotnet/aspnet/tags):6.0-nanoserver-ltsc2022
+- [https://mcr.microsoft.com/dotnet/aspnet](https://mcr.microsoft.com/product/dotnet/aspnet/tags):6.0-nanoserver-1809
::: zone-end
application-gateway Private Link Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/private-link-configure.md
A private endpoint is a network interface that uses a private IP address from th
> If the public or private IP configuration resource is missing when trying to select a _Target sub-resource_ on the _Resource_ tab of private endpoint creation, please ensure a listener is actively utilizing the respective frontend IP configuration. Frontend IP configurations without an associated listener won't be shown as a _Target sub-resource_.

> [!Note]
-> If you're provisioning a **Private Endpoint** from within another tenant, you will need to utilize the Azure Application Gateway Resource ID and Frontend Configuration ID as the target sub-resource. For example, if the frontend configuration of the gateway was named _PrivateFrontendIp_, the target sub-resource would be as follows: _/subscriptions/xxxx-xxxx-xxxx-xxxx-xxxx/resourceGroups/resourceGroupname/providers/Microsoft.Network/applicationGateways/appgwname/frontendIPConfigurations/PrivateFrontendIp_.
+> If you're provisioning a **Private Endpoint** from within another tenant, you will need to utilize the Azure Application Gateway Resource ID and the _Name_ of the Frontend IP configuration as the target sub-resource. For example, if the gateway has a private IP and the name listed under its Frontend IP configuration in the portal is _PrivateFrontendIp_, the target sub-resource value would be _PrivateFrontendIp_.
# [Azure PowerShell](#tab/powershell)
azure-arc Backup Restore Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/backup-restore-postgresql.md
# Back up and restore Azure Arc-enabled PostgreSQL servers
-Backup and restore of Azure Arc-enabled PostgreSQL server is not supported in the current preview release.
+Automated backups can be enabled by including the `--storage-class-backups` argument when creating an Azure Arc-enabled PostgreSQL server. Restore is not supported in the current preview release.
- Read about [scaling up or down (increasing/decreasing memory/vcores)](scale-up-down-postgresql-server-using-cli.md) your server.
azure-arc Create Postgresql Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-postgresql-server.md
The main parameters you should consider are:
- **the name of the server** you want to deploy. Indicate either `--name` or `-n` followed by a name whose length must not exceed 11 characters.
- **The storage classes** you want your server to use. It is important you set the storage class right at the time you deploy a server as this setting cannot be changed after you deploy. You may specify the storage classes to use for the data, logs and the backups. By default, if you do not indicate storage classes, the storage classes of the data controller will be used.
- - To set the storage class for the data, indicate the parameter `--storage-class-data` or `-scd` followed by the name of the storage class.
- - To set the storage class for the logs, indicate the parameter `--storage-class-logs` or `-scl` followed by the name of the storage class.
- - The support of setting storage classes for the backups has been temporarily removed as we temporarily removed the backup/restore functionalities as we finalize designs and experiences.
+ - To set the storage class for the backups, indicate the parameter `--storage-class-backups` followed by the name of the storage class. Excluding this parameter disables automated backups.
+ - To set the storage class for the data, indicate the parameter `--storage-class-data` followed by the name of the storage class.
+ - To set the storage class for the logs, indicate the parameter `--storage-class-logs` followed by the name of the storage class.
> [!IMPORTANT]
> If you need to change the storage class after deployment, extract the data, delete your server, create a new server, and import the data.
azure-arc Limitations Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/limitations-postgresql.md
This article describes limitations of Azure Arc-enabled PostgreSQL.
[!INCLUDE [azure-arc-data-preview](../../../includes/azure-arc-data-preview.md)]
-## Back up and restore
+## Backup and restore
-Back up and restore have been temporarily removed as we finalize designs and experiences.
+Enable automated backups. Include the `--storage-class-backups` argument when you create an Azure Arc-enabled PostgreSQL server. Restore has been temporarily removed as we finalize designs and experiences.
## High availability
azure-arc Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/release-notes.md
Title: Azure Arc-enabled data services - Release notes
-description: Latest release notes
+description: This article provides highlights for the latest release, and a history of features introduced in previous releases.
#Customer intent: As a data professional, I want to understand why my solutions would benefit from running with Azure Arc-enabled data services so that I can leverage the capability of the feature.
+
# Release notes - Azure Arc-enabled data services

This article highlights capabilities, features, and enhancements recently released or improved for Azure Arc-enabled data services.
+## November 8, 2022
+
+### Image tag
+
+`v1.13.0_2022-11-08`
+
+For complete release version information, see [Version log](version-log.md#november-8-2022).
+
+New for this release:
+
+- Azure Arc data controller
+ - Support database as resource in Azure Arc data resource provider
+
+- Arc-enabled PostgreSQL server
+ - Add support for automated backups
+
+- `arcdata` Azure CLI extension
+ - CLI support for automated backups: Setting the `--storage-class-backups` parameter for the create command will enable automated backups
+
## October 11, 2022

### Image tag
azure-arc Using Extensions In Postgresql Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/using-extensions-in-postgresql-server.md
# Use PostgreSQL extensions in your Azure Arc-enabled PostgreSQL server
-PostgreSQL is at its best when you use it with extensions. In fact, a key element of our own Hyperscale functionality is the Microsoft-provided `citus` extension that is installed by default, which allows Postgres to transparently shard data across multiple nodes.
+PostgreSQL is at its best when you use it with extensions.
[!INCLUDE [azure-arc-data-preview](../../../includes/azure-arc-data-preview.md)] ## Supported extensions
-The standard [`contrib`](https://www.postgresql.org/docs/12/contrib.html) extensions and the following extensions are already deployed in the containers of your Azure Arc-enabled PostgreSQL server:
-- [`citus`](https://github.com/citusdata/citus), v: 10.2. The Citus extension by [Citus Data](https://www.citusdata.com/) is loaded by default as it brings the Hyperscale capability to the PostgreSQL engine. Dropping the Citus extension from your Azure Arc PostgreSQL server is not supported.
-- [`pg_cron`](https://github.com/citusdata/pg_cron), v: 1.3
-- [`pgaudit`](https://www.pgaudit.org/), v: 1.4
-- plpgsql, v: 1.0
-- [`postgis`](https://postgis.net), v: 3.0.2
-- [`plv8`](https://plv8.github.io/), v: 2.3.14
-- [`pg_partman`](https://github.com/pgpartman/pg_partman), v: 4.4.1
-- [`tdigest`](https://github.com/tvondra/tdigest), v: 1.0.1
+For this preview, the following standard [`contrib`](https://www.postgresql.org/docs/14/contrib.html) extensions are already deployed in the containers of your Azure Arc-enabled PostgreSQL server:
+- adminpack
+- amcheck
+- autoinc
+- bloom
+- btree_gin
+- btree_gist
+- citext
+- cube
+- dblink
+- dict_int
+- dict_xsyn
+- earthdistance
+- file_fdw
+- fuzzystrmatch
+- hstore
+- insert_username
+- intagg
+- intarray
+- isn
+- lo
+- ltree
+- moddatetime
+- old_snapshot
+- pageinspect
+- pg_buffercache
+- pg_freespacemap
+- pg_prewarm
+- pg_stat_statements
+- pg_surgery
+- pg_trgm
+- pg_visibility
+- pgcrypto
+- pgrowlocks
+- pgstattuple
+- postgres_fdw
+- refint
+- seg
+- sslinfo
+- tablefunc
+- tcn
+- tsm_system_rows
+- tsm_system_time
+- unaccent
+- xml2
Updates to this list will be posted as it evolves over time. > [!IMPORTANT]
-> While you may bring to your server group an extension other than those listed above, in this Preview, it will not be persisted to your system. It means that it will not be available after a restart of the system and you would need to bring it again.
-
-This guide will take in a scenario to use two of these extensions:
-- [`PostGIS`](https://postgis.net/)
-- [`pg_cron`](https://github.com/citusdata/pg_cron)
-
-## Which extensions need to be added to the shared_preload_libraries and created?
-
-|Extensions |Requires to be added to shared_preload_libraries |Requires to be created |
-|-|--|- |
-|`pg_cron` |No |Yes |
-|`pg_audit` |Yes |Yes |
-|`plpgsql` |Yes |Yes |
-|`postgis` |No |Yes |
-|`plv8` |No |Yes |
-
-## Add extensions to the `shared_preload_libraries`
-For details about that are `shared_preload_libraries`, read the PostgreSQL documentation [here](https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-SHARED-PRELOAD-LIBRARIES):
-- This step isn't needed for the extensions that are part of `contrib`
-- this step isn't required for extensions that are not required to pre-load by shared_preload_libraries. For these extensions you may jump the next paragraph [Create extensions](#create-extensions).
-
-### Add an extension to an instance that already exists
-```azurecli
-az postgres server-arc server edit -n <postgresql server> --extensions <extension names> --k8s-namespace <namespace> --use-k8s
-```
-
-## Show the list of extensions added to shared_preload_libraries
-Run either of the following command.
-
-### With CLI command
-```azurecli
-az postgres server-arc show -n <server name> --k8s-namespace <namespace> --use-k8s
-```
-Scroll in the output and notice the engine\extensions sections in the specifications of your server group. For example:
-```console
- "spec": {
- "dev": false,
- "engine": {
- "extensions": [
- {
- "name": "citus"
- }
- ],
-```
-### With kubectl
-```console
-kubectl describe postgresqls/<server name> -n <namespace>
-```
-Scroll in the output and notice the engine\extensions sections in the specifications of your server group. For example:
-```console
-Spec:
- Dev: false
- Engine:
- Extensions:
- Name: citus
-```
+> While you may bring to your server an extension other than those listed above, in this Preview, it will not be persisted to your system. It means that it will not be available after a restart of the system and you would need to bring it again.
## Create extensions
-Connect to your server group with the client tool of your choice and run the standard PostgreSQL query:
+Connect to your server with the client tool of your choice and run the standard PostgreSQL query:
```console
CREATE EXTENSION <extension name>;
```

## Show the list of extensions created
-Connect to your server group with the client tool of your choice and run the standard PostgreSQL query:
+Connect to your server with the client tool of your choice and run the standard PostgreSQL query:
```console
select * from pg_extension;
```

## Drop an extension
-Connect to your server group with the client tool of your choice and run the standard PostgreSQL query:
+Connect to your server with the client tool of your choice and run the standard PostgreSQL query:
```console
drop extension <extension name>;
```
-## The `PostGIS` extension
-You do not need to add the `PostGIS` extension to the `shared_preload_libraries`.
-Get [sample data](http://duspviz.mit.edu/tutorials/intro-postgis/) from the MITΓÇÖs Department of Urban Studies & Planning. Run `apt-get install unzip` to install unzip as needed.
-
-```console
-wget http://duspviz.mit.edu/_assets/data/intro-postgis-datasets.zip
-unzip intro-postgis-datasets.zip
-```
-
-Let's connect to our database, and create the `PostGIS` extension:
-
-```console
-CREATE EXTENSION postgis;
-```
-
-> [!NOTE]
-> If you would like to use one of the extensions in the `postgis` package (for example `postgis_raster`, `postgis_topology`, `postgis_sfcgal`, `fuzzystrmatch`...) you need to first create the postgis extension and then create the other extension. For instance: `CREATE EXTENSION postgis`; `CREATE EXTENSION postgis_raster`;
-
-And create the schema:
-
-```sql
-CREATE TABLE coffee_shops (
- id serial NOT NULL,
- name character varying(50),
- address character varying(50),
- city character varying(50),
- state character varying(50),
- zip character varying(10),
- lat numeric,
- lon numeric,
- geom geometry(POINT,4326)
-);
-CREATE INDEX coffee_shops_gist ON coffee_shops USING gist (geom);
-```
-
-Now, we can combine `PostGIS` with the scale-out functionality, by making the coffee_shops table distributed:
-
-```sql
-SELECT create_distributed_table('coffee_shops', 'id');
-```
-
-Let's load some data:
-
-```console
-\copy coffee_shops(id,name,address,city,state,zip,lat,lon) from cambridge_coffee_shops.csv CSV HEADER;
-```
-
-And fill the `geom` field with the correctly encoded latitude and longitude in the `PostGIS` `geometry` data type:
-
-```sql
-UPDATE coffee_shops SET geom = ST_SetSRID(ST_MakePoint(lon,lat),4326);
-```
-
-Now we can list the coffee shops closest to MIT (77 Massachusetts Ave at 42.359055, -71.093500):
-
-```sql
-SELECT name, address FROM coffee_shops ORDER BY geom <-> ST_SetSRID(ST_MakePoint(-71.093500,42.359055),4326);
-```
--
-## The `pg_cron` extension
-
-Now, let's enable `pg_cron` on our PostgreSQL server group by adding it to the shared_preload_libraries:
-
-```azurecli
-az postgres server-arc update -n pg2 -ns arc --extensions pg_cron
-```
-
-Your server group will restart complete the installation of the extensions. It may take 2 to 3 minutes.
-
-We can now connect again, and create the `pg_cron` extension:
-
-```sql
-CREATE EXTENSION pg_cron;
-```
-
-For test purposes, lets make a table `the_best_coffee_shop` that takes a random name from our earlier `coffee_shops` table, and inserts the table contents:
-
-```sql
-CREATE TABLE the_best_coffee_shop(name text);
-```
-
-We can use `cron.schedule` plus a few SQL statements, to get a random table name (notice the use of a temporary table to store a distributed query result), and store it in `the_best_coffee_shop`:
-
-```sql
-SELECT cron.schedule('* * * * *', $$
- TRUNCATE the_best_coffee_shop;
- CREATE TEMPORARY TABLE tmp AS SELECT name FROM coffee_shops ORDER BY random() LIMIT 1;
- INSERT INTO the_best_coffee_shop SELECT * FROM tmp;
- DROP TABLE tmp;
-$$);
-```
-
-And now, once a minute, we'll get a different name:
-
-```sql
-SELECT * FROM the_best_coffee_shop;
-```
-
-```console
- name
- B & B Snack Bar
-(1 row)
-```
-
-See the [pg_cron README](https://github.com/citusdata/pg_cron) for full details on the syntax.
-
## Next steps
-- Read documentation on [`plv8`](https://plv8.github.io/)
-- Read documentation on [`PostGIS`](https://postgis.net/)
-- Read documentation on [`pg_cron`](https://github.com/citusdata/pg_cron)
+- **Try it out.** Get started quickly with [Azure Arc Jumpstart](https://github.com/microsoft/azure_arc#azure-arc-enabled-data-services) on Azure Kubernetes Service (AKS), AWS Elastic Kubernetes Service (EKS), Google Cloud Kubernetes Engine (GKE) or in an Azure VM.
azure-arc Version Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/version-log.md
This article identifies the component versions with each release of Azure Arc-enabled data services.
+## November 8, 2022
++
+|Component|Value|
+|--|--|
+|Container images tag |`v1.13.0_2022-11-08`|
+|CRD names and version|`datacontrollers.arcdata.microsoft.com`: v1beta1, v1 through v6<br/>`exporttasks.tasks.arcdata.microsoft.com`: v1beta1, v1, v2<br/>`kafkas.arcdata.microsoft.com`: v1beta1, v1beta2<br/>`monitors.arcdata.microsoft.com`: v1beta1, v1, v2<br/>`sqlmanagedinstances.sql.arcdata.microsoft.com`: v1beta1, v1 through v7<br/>`postgresqls.arcdata.microsoft.com`: v1beta1, v1beta2, v1beta3<br/>`sqlmanagedinstancerestoretasks.tasks.sql.arcdata.microsoft.com`: v1beta1, v1<br/>`failovergroups.sql.arcdata.microsoft.com`: v1beta1, v1beta2, v1 through v2<br/>`activedirectoryconnectors.arcdata.microsoft.com`: v1beta1, v1beta2, v1<br/>`sqlmanagedinstancereprovisionreplicatask.tasks.sql.arcdata.microsoft.com`: v1beta1<br/>`telemetrycollectors.arcdata.microsoft.com`: v1beta1, v1beta2, v1beta3 *used to be otelcollectors*<br/>`telemetryrouters.arcdata.microsoft.com`: v1beta1, v1beta2<br/>|
+|Azure Resource Manager (ARM) API version|2022-06-15-preview|
+|`arcdata` Azure CLI extension version|1.4.8 ([Download](https://aka.ms/az-cli-arcdata-ext))|
+|Arc-enabled Kubernetes helm chart extension version|1.13.0|
+|Azure Arc Extension for Azure Data Studio<br/>`arc`<br/>`azcli`|<br/>1.7.0 ([Download](https://aka.ms/ads-arcdata-ext))</br>1.7.0 ([Download](https://aka.ms/ads-azcli-ext))|
+
## October 11, 2022

|Component|Value|
azure-cache-for-redis Cache Best Practices Client Libraries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-best-practices-client-libraries.md
For information on client library-specific guidance best practices, see the foll
- [Java - Which client should I use?](https://gist.github.com/warrenzhu25/1beb02a09b6afd41dff2c27c53918ce7#file-azure-redis-java-best-practices-md) - [Lettuce (Java)](https://github.com/Azure/AzureCacheForRedis/blob/main/Lettuce%20Best%20Practices.md) - [Jedis (Java)](https://gist.github.com/JonCole/925630df72be1351b21440625ff2671f#file-redis-bestpractices-java-jedis-md)
+- [Redisson (Java)](cache-best-practices-client-libraries.md#redisson-java)
- [Node.js](https://gist.github.com/JonCole/925630df72be1351b21440625ff2671f#file-redis-bestpractices-node-js-md) - [PHP](https://gist.github.com/JonCole/925630df72be1351b21440625ff2671f#file-redis-bestpractices-php-md) - [HiRedisCluster](https://github.com/Azure/AzureCacheForRedis/blob/main/HiRedisCluster%20Best%20Practices.md) - [ASP.NET Session State Provider](https://gist.github.com/JonCole/925630df72be1351b21440625ff2671f#file-redis-bestpractices-session-state-provider-md)
+## Redisson (Java)
+
+We _recommend_ you use Redisson 3.14.1 or higher. Older versions contain known connection leak issues that cause problems after failovers. Monitor the Redisson changelog for other known issues that can affect features used by your application. For more information, see the [`CHANGELOG`](https://github.com/redisson/redisson/blob/master/CHANGELOG.md) and the [Redisson FAQ](https://github.com/redisson/redisson/wiki/16.-FAQ).
+
+Other notes:
+
+- Redisson defaults to 'read from replica' strategy, unlike some other clients. To change this, modify the 'readMode' config setting.
+- Redisson has a connection pooling strategy with configurable minimum and maximum settings, and the default minimum values are large. The large defaults could contribute to aggressive reconnect behaviors or 'connection storms'. To reduce the risk, consider using fewer connections because you can efficiently pipeline commands, or batches of commands, over a few connections.
+- Redisson has a default idle connection timeout of 10 seconds, which leads to more closing and reopening of connections than ideal.
+
+Here's a recommended baseline configuration for cluster mode that you can modify as needed:
+
+```yml
+clusterServersConfig:
+ idleConnectionTimeout: 30000
+ connectTimeout: 15000
+ timeout: 5000
+ retryAttempts: 3
+ retryInterval: 3000
+ failedSlaveReconnectionInterval: 15000
+ failedSlaveCheckInterval: 60000
+ subscriptionsPerConnection: 5
+ clientName: "redisson"
+ loadBalancer: !<org.redisson.connection.balancer.RoundRobinLoadBalancer> {}
+ subscriptionConnectionMinimumIdleSize: 1
+ subscriptionConnectionPoolSize: 50
+ slaveConnectionMinimumIdleSize: 2
+ slaveConnectionPoolSize: 24
+ masterConnectionMinimumIdleSize: 2
+ masterConnectionPoolSize: 24
+ readMode: "MASTER"
+ subscriptionMode: "MASTER"
+ nodeAddresses:
+ - "redis://mycacheaddress:6380"
+ scanInterval: 1000
+ pingConnectionInterval: 60000
+ keepAlive: false
+ tcpNoDelay: true
+```
+ ## How to use client libraries
-Besides the reference documentation, you can find tutorials showing how to get started with Azure Cache for Redis using different languages and cache clients.
+Besides the reference documentation, you can find tutorials showing how to get started with Azure Cache for Redis using different languages and cache clients.
For more information on using some of these client libraries in tutorials, see the following articles:
azure-functions Event Messaging Bindings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/event-messaging-bindings.md
As a cloud computing service, Azure Functions is frequently used to move data between various Azure services. To make it easier for you to connect your code to other services, Functions implements a set of binding extensions to connect to these services. To learn more, see [Azure Functions triggers and bindings concepts](functions-triggers-bindings.md).
-By definition, Azure Functions executions are stateless. If you need to connect your code to services in a more stateful way, consider instead using [Durable Functions](durable/durable-functions-overview.md) or [Azure Logic Apps?](../logic-apps/logic-apps-overview.md).
+By definition, Azure Functions executions are stateless. If you need to connect your code to services in a more stateful way, consider instead using [Durable Functions](durable/durable-functions-overview.md) or [Azure Logic Apps](../logic-apps/logic-apps-overview.md).
Triggers and bindings are provided to make consuming and emitting data easier. There may be cases where you need more control over the service connection, or you just feel more comfortable using a client library provided by a service SDK. In those cases, you can use a client instance from the SDK in your function execution to access the service as you normally would. When using a client directly, you need to pay attention to the effect of scale and performance on client connections. To learn more, see the [guidance on using static clients](manage-connections.md#static-clients).
For more information about Event Grid trigger and output binding definitions and
To learn more about Event Grid with Functions, see the following articles: + [Azure Event Grid bindings for Azure Functions](functions-bindings-event-grid.md)
-+ [Tutorial: Automate resizing uploaded images using Event Grid](../event-grid/resize-images-on-storage-blob-upload-event.md?toc=%2fazure%2fazure-functions%2ftoc.json)
++ [Tutorial: Automate resizing uploaded images using Event Grid](../event-grid/resize-images-on-storage-blob-upload-event.md?toc=%2fazure%2fazure-functions%2ftoc.json)
azure-functions Functions Bindings Azure Sql Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-sql-trigger.md
zone_pivot_groups: programming-languages-set-functions-lang-workers
# Azure SQL trigger for Functions (preview)
+> [!NOTE]
+> The Azure SQL trigger is only supported on **Premium and Dedicated** plans. Consumption is not supported.
+ The Azure SQL trigger uses [SQL change tracking](/sql/relational-databases/track-changes/about-change-tracking-sql-server) functionality to monitor a SQL table for changes and trigger a function when a row is created, updated, or deleted. For configuration details for change tracking for use with the Azure SQL trigger, see [Set up change tracking](#set-up-change-tracking-required). For information on setup details of the Azure SQL extension for Azure Functions, see the [SQL binding overview](./functions-bindings-azure-sql.md).
azure-functions Functions Bindings Azure Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-sql.md
zone_pivot_groups: programming-languages-set-functions-lang-workers
# Azure SQL bindings for Azure Functions overview (preview)
+> [!NOTE]
+> The Azure SQL trigger is only supported on **Premium and Dedicated** plans. Consumption is not supported. Azure SQL input/output bindings are supported for all plans.
+ This set of articles explains how to work with [Azure SQL](/azure/azure-sql/index) bindings in Azure Functions. Azure Functions supports input bindings, output bindings, and a function trigger for the Azure SQL and SQL Server products. | Action | Type |
azure-functions Functions Kubernetes Keda https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-kubernetes-keda.md
You can remove KEDA from your cluster in one of the following ways:
KEDA has support for the following Azure Function triggers: * [Azure Storage Queues](functions-bindings-storage-queue.md)
-* [Azure Service Bus Queues](functions-bindings-service-bus.md)
+* [Azure Service Bus](functions-bindings-service-bus.md)
* [Azure Event / IoT Hubs](functions-bindings-event-hubs.md)
* [Apache Kafka](https://github.com/azure/azure-functions-kafka-extension)
* [RabbitMQ Queue](https://github.com/azure/azure-functions-rabbitmq-extension)
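As a sketch of how one of these triggers is wired up, a KEDA `ScaledObject` can scale a deployed function app based on an Azure Storage queue. The deployment name, queue name, and threshold below are assumptions; `func kubernetes deploy` can generate an equivalent resource for you.

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: queue-function-scaler        # hypothetical name
spec:
  scaleTargetRef:
    name: queue-function             # deployment running the function app (assumption)
  triggers:
  - type: azure-queue
    metadata:
      queueName: myqueue             # hypothetical queue
      queueLength: "5"               # target messages per replica
      connectionFromEnv: AzureWebJobsStorage
```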
For more information, see the following resources:
* [Code and test Azure Functions locally](functions-develop-local.md)
* [How the Azure Function Consumption plan works](functions-scale.md)
-[func init]: functions-core-tools-reference.md#func-init
+[func init]: functions-core-tools-reference.md#func-init
azure-maps Creator Facility Ontology https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/creator-facility-ontology.md
The `unit` feature class defines a physical and non-overlapping area that can be
| Property | Type | Required | Description | |-|--|-|-|
-|`originalId` | string |false | When the dataset is created through the [conversion service][conversion], the original ID is automatically set to the Azure Maps internal ID.<BR>When the [dataset][datasetv20220901] is created from a GeoJSON package, the original ID can be user defined.<BR>Maximum length allowed is 1,000 characters.|
+|`originalId` | string |false | When the dataset is created through the [conversion service][conversion], the original ID is automatically set to the Azure Maps internal ID. When the [dataset][datasetv20220901] is created from a GeoJSON package, the original ID can be user defined. Maximum length allowed is 1,000 characters.|
|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1,000 characters.| |`categoryId` | [category.Id](#category) |true | The ID of a [`category`](#category) feature.| |`isOpenArea` | boolean (Default value is `null`.) |false | Represents whether the unit is an open area. If set to `true`, [structures](#structure) don't surround the unit boundary, and a navigating agent can enter the `unit` without the need of an [`opening`](#opening). By default, units are surrounded by physical barriers and are open only where an opening feature is placed on the boundary of the unit. If walls are needed in an open area unit, they can be represented as a [`lineElement`](#lineelement) or [`areaElement`](#areaelement) with an `isObstruction` property equal to `true`.|
The `level` class feature defines an area of a building at a set elevation. For
|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1,000 characters.| |`facilityId` | [facility.Id](#facility) |true | The ID of a [`facility`](#facility) feature.| | `ordinal` | integer | true | The level number. Used by the [`verticalPenetration`](#verticalpenetration) feature to determine the relative order of the floors to help with travel direction. The general practice is to start with 0 for the ground floor. Add +1 for every floor upwards, and -1 for every floor going down. It can be modeled with any numbers, as long as the higher physical floors are represented by higher ordinal values. |
-| `abbreviatedName` | string | false | A four-character abbreviated level name, like what would be found on an elevator button. Maximum length allowed is 1,000 characters.|
+| `abbreviatedName` | string | false | A four-character abbreviated level name, like what would be found on an elevator button. |
| `heightAboveFacilityAnchor` | double | false | Vertical distance of the level's floor above [`facility.anchorHeightAboveSeaLevel`](#facility), in meters. | | `verticalExtent` | double | false | Vertical extent of the level, in meters. If not provided, defaults to [`facility.defaultLevelVerticalExtent`](#facility).| |`name` | string | false | Name of the feature in local language. Maximum length allowed is 1,000 characters.|
The `level` class feature defines an area of a building at a set elevation. For
|`externalId` | string |false | An ID used by the client to associate the feature with another feature in a different dataset, such as in an internal database. Maximum length allowed is 1,000 characters.| |`facilityId` | [facility.Id](#facility) |true | The ID of a [`facility`](#facility) feature.| | `ordinal` | integer | true | The level number. Used by the [`verticalPenetration`](#verticalpenetration) feature to determine the relative order of the floors to help with travel direction. The general practice is to start with 0 for the ground floor. Add +1 for every floor upwards, and -1 for every floor going down. It can be modeled with any numbers, as long as the higher physical floors are represented by higher ordinal values. |
-| `abbreviatedName` | string | false | A four-character abbreviated level name, like what would be found on an elevator button. Maximum length allowed is 1,000 characters.|
+| `abbreviatedName` | string | false | A four-character abbreviated level name, like what would be found on an elevator button.|
| `heightAboveFacilityAnchor` | double | false | Vertical distance of the level's floor above [`facility.anchorHeightAboveSeaLevel`](#facility), in meters. | | `verticalExtent` | double | false | Vertical extent of the level, in meters. If not provided, defaults to [`facility.defaultLevelVerticalExtent`](#facility).| |`name` | string | false | Name of the feature in local language. Maximum length allowed is 1,000 characters.|
azure-maps How To Dev Guide Js Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-dev-guide-js-sdk.md
+
+ Title: How to create Azure Maps applications using the JavaScript REST SDK (preview)
+
+description: How to develop applications that incorporate Azure Maps using the JavaScript SDK Developers Guide.
++ Last updated : 11/07/2021+++++
+# JavaScript/TypeScript REST SDK Developers Guide (preview)
+
+The Azure Maps JavaScript/TypeScript REST SDK (JavaScript SDK) supports searching using the [Azure Maps search REST API][search], like searching for an address, fuzzy searching for a point of interest (POI), and searching by coordinates. This article will help you get started building location-aware applications that incorporate the power of Azure Maps.
+
+> [!NOTE]
+> Azure Maps JavaScript SDK supports the LTS version of Node.js. For more information, see [Node.js Release Working Group][Node.js Release].
+
+## Prerequisites
+
+- [Azure Maps account][Azure Maps account].
+- [Subscription key][Subscription key] or other form of [authentication][authentication].
+- [Node.js][Node.js].
+
+> [!TIP]
+> You can create an Azure Maps account programmatically. Here's an example using the Azure CLI:
+>
+> ```azurecli
+> az maps account create --kind "Gen2" --account-name "myMapAccountName" --resource-group "<resource group>" --sku "G2"
+> ```
+
+## Install the search package
+
+To use the Azure Maps JavaScript SDK, you'll need to install the search package. The Azure Maps services, including search, routing, rendering, and geolocation, are each in their own package.
+
+```powershell
+npm install @azure/maps-search
+```
+
+Once the package is installed, create a `search.js` file in the `mapsDemo` directory:
+
+```text
+mapsDemo
++-- package.json
++-- package-lock.json
++-- node_modules/
++-- search.js
+```
+
+### Azure Maps search service
+
+| Service Name | NPM package | Samples |
+||-|--|
+| [Search][search readme] | [@azure/maps-search][search package] | [search samples][search sample] |
+| [Route][js route readme] | [@azure-rest/maps-route][js route package] | [route samples][js route sample] |
+
+## Create a Node.js project
+
+The example below creates a new directory named _mapsDemo_, then initializes a Node.js project in it using NPM:
+
+```powershell
+mkdir mapsDemo
+cd mapsDemo
+npm init
+```
+
+## Create and authenticate a MapsSearchClient
+
+You'll need a `credential` object for authentication when creating the `MapsSearchClient` object used to access the Azure Maps search APIs. You can use either an Azure Active Directory (Azure AD) credential or an Azure subscription key to authenticate. For more information on authentication, see [Authentication with Azure Maps][authentication].
+
+> [!TIP]
+> The `MapsSearchClient` is the primary interface for developers using the Azure Maps search library. See [Azure Maps Search client library][JS-SDK] to learn more about the search methods available.
+
+### Using an Azure AD credential
+
+You can authenticate with Azure AD using the [Azure Identity library][Identity library]. To use the [DefaultAzureCredential][defaultazurecredential] provider, you'll need to install the `@azure/identity` package:
+
+```powershell
+npm install @azure/identity
+```
+
+You'll need to register the new Azure AD application and grant access to Azure Maps by assigning the required role to your service principal. For more information, see [Host a daemon on non-Azure resources][Host daemon]. During this process you'll get an Application (client) ID, a Directory (tenant) ID, and a client secret. Copy these values and store them in a secure place. You'll need them in the following steps.
+
+Set the values of the Application (client) ID, Directory (tenant) ID, and client secret of your Azure AD application, and the map resource's client ID as environment variables:
+
+| Environment Variable | Description |
+|--|--|
+| AZURE_CLIENT_ID | Application (client) ID in your registered application |
+| AZURE_CLIENT_SECRET | The value of the client secret in your registered application |
+| AZURE_TENANT_ID | Directory (tenant) ID in your registered application |
+| MAPS_CLIENT_ID | The client ID in your Azure Map account |
+
+You can use a `.env` file for these variables. You'll need to install the [dotenv][dotenv] package:
+
+```powershell
+npm install dotenv
+```
+
+Next, add a `.env` file in the **mapsDemo** directory and specify these properties:
+
+```text
+AZURE_CLIENT_ID="<client-id>"
+AZURE_CLIENT_SECRET="<client-secret>"
+AZURE_TENANT_ID="<tenant-id>"
+MAPS_CLIENT_ID="<maps-client-id>"
+```
+
+Once your environment variables are created, you can access them in your JavaScript code:
+
+```JavaScript
+const { MapsSearchClient } = require("@azure/maps-search");
+const { DefaultAzureCredential } = require("@azure/identity");
+require("dotenv").config();
+
+const credential = new DefaultAzureCredential();
+const client = new MapsSearchClient(credential, process.env.MAPS_CLIENT_ID);
+```
+
+### Using a subscription key credential
+
+You can authenticate with your Azure Maps subscription key. Your subscription key can be found in the **Authentication** section in the Azure Maps account as shown in the following screenshot:
++
+You need to pass the subscription key to the `AzureKeyCredential` class provided by the [Azure Maps Search client library for JavaScript/TypeScript][JS-SDK]. For security reasons, it's better to specify the key as an environment variable than to include it in your source code.
+
+You can accomplish this by using a `.env` file to store the subscription key variable. You'll need to install the [dotenv][dotenv] package to retrieve the value:
+
+```powershell
+npm install dotenv
+```
+
+Next, add a `.env` file in the **mapsDemo** directory and specify the property:
+
+```text
+MAPS_SUBSCRIPTION_KEY="<subscription-key>"
+```
+
+Once your environment variable is created, you can access it in your JavaScript code:
+
+```JavaScript
+const { MapsSearchClient, AzureKeyCredential } = require("@azure/maps-search");
+require("dotenv").config();
+
+const credential = new AzureKeyCredential(process.env.MAPS_SUBSCRIPTION_KEY);
+const client = new MapsSearchClient(credential);
+```
+
+## Fuzzy search an entity
+
+The following code snippet demonstrates how, in a simple console application, to import the `@azure/maps-search` package and perform a fuzzy search on "Starbucks" near Seattle:
+
+```JavaScript
+
+const { MapsSearchClient, AzureKeyCredential } = require("@azure/maps-search");
+require("dotenv").config();
+
+async function main() {
+ // Authenticate with Azure Map Subscription Key
+ const credential = new AzureKeyCredential(process.env.MAPS_SUBSCRIPTION_KEY);
+ const client = new MapsSearchClient(credential);
+
+ // Setup the fuzzy search query
+ const response = await client.fuzzySearch({
+ query: "Starbucks",
+ coordinates: [47.61010, -122.34255],
+ countryCodeFilter: ["US"],
+ });
+
+ // Log the result
+ console.log(`Starbucks search result nearby Seattle:`);
+ response.results.forEach((result) => {
+ console.log(`\
+ * ${result.address.streetNumber} ${result.address.streetName}
+ ${result.address.municipality} ${result.address.countryCode} ${result.address.postalCode}
+ Coordinate: (${result.position[0].toFixed(4)}, ${result.position[1].toFixed(4)})\
+ `);
+  });
+}
+
+main().catch((err) => {
+ console.error(err);
+});
+
+```
+
+In the above code snippet, you create a `MapsSearchClient` object using your Azure credentials. This is done using your Azure Maps subscription key; however, you could use the [Azure AD credential](#using-an-azure-ad-credential) discussed in the previous section. You then pass the search query and options to the `fuzzySearch` method: search for Starbucks (`query: "Starbucks"`) near Seattle (`coordinates: [47.61010, -122.34255], countryCodeFilter: ["US"]`). For more information, see [FuzzySearchRequest][FuzzySearchRequest] in the [Azure Maps Search client library for JavaScript/TypeScript][JS-SDK].
+
+The method `fuzzySearch` provided by `MapsSearchClient` will forward the request to Azure Maps REST endpoints. When the results are returned, they're written to the console. For more information, see [SearchAddressResult][SearchAddressResult].
+
+Run `search.js` with Node.js:
+
+```powershell
+node search.js
+```
+
+## Search an Address
+
+The [searchAddress][searchAddress] method can be used to get the coordinates of an address. Modify the `search.js` from the sample as follows:
+
+```JavaScript
+const { MapsSearchClient, AzureKeyCredential } = require("@azure/maps-search");
+require("dotenv").config();
+
+async function main() {
+ const credential = new AzureKeyCredential(process.env.MAPS_SUBSCRIPTION_KEY);
+ const client = new MapsSearchClient(credential);
+
+ const response = await client.searchAddress(
+ "1912 Pike Pl, Seattle, WA 98101, US"
+ );
+
+  console.log(`The coordinate is: ${response.results[0].position}`);
+}
+
+main().catch((err) => {
+ console.error(err);
+});
+```
+
+The results returned from `client.searchAddress` are ordered by confidence score, and in this example only the first result returned will be displayed to the screen.
+
+## Batch reverse search
+
+Azure Maps Search also provides some batch query methods. These methods will return Long Running Operations (LRO) objects. The requests might not return all the results immediately, so you can wait until completion or query the result periodically. The example below demonstrates how to call batched reverse search method:
+
+```JavaScript
+ const poller = await client.beginReverseSearchAddressBatch([
+ // This is an invalid query
+ { coordinates: [148.858561, 2.294911] },
+ {
+ coordinates: [47.61010, -122.34255],
+ },
+    { coordinates: [47.6155, -122.33817], options: { radiusInMeters: 5000 } },
+ ]);
+```
+
+In this example, three queries are passed into the _batched reverse search_ request. The first query is invalid; see [Handling failed requests](#handling-failed-requests) for an example showing how to handle the invalid query.
+
+Use the `getResult` method from the poller to check the current result. You check the status using `getOperationState` to see if the poller is still running. If it is, you can keep calling `poll` until the operation is finished:
+
+```JavaScript
+ while (poller.getOperationState().status === "running") {
+ const partialResponse = poller.getResult();
+ logResponse(partialResponse)
+ await poller.poll();
+ }
+```
+
+Alternatively, you can wait until the operation has completed, by using `pollUntilDone()`:
+
+```JavaScript
+const response = await poller.pollUntilDone();
+logResponse(response)
+```
+
+A common scenario for LRO is to resume a previous operation later. Do that by serializing the poller's state with the `toString` method, and rehydrating the state with a new poller using `resumeReverseSearchAddressBatch`:
+
+```JavaScript
+ const serializedState = poller.toString();
+ const rehydratedPoller = await client.resumeReverseSearchAddressBatch(
+ serializedState
+ );
+ const response = await rehydratedPoller.pollUntilDone();
+ logResponse(response);
+```
+
+Once you get the response, you can log it:
+
+```JavaScript
+function logResponse(response) {
+ console.log(
+ `${response.totalSuccessfulRequests}/${response.totalRequests} succeed.`
+ );
+ response.batchItems.forEach((item, idx) => {
+ console.log(`The result for ${idx + 1}th request:`);
+ // Check if the request is failed
+ if (item.response.error) {
+ console.error(item.response.error);
+ } else {
+ item.response.results.forEach((result) => {
+ console.log(result.address.freeformAddress);
+ });
+ }
+ });
+}
+```
+
+### Handling failed requests
+
+Handle failed requests by checking for the `error` property in the response batch item. See the `logResponse` function in the completed batch reverse search example below.
+
+### Completed batch reverse search example
+
+The complete code for the reverse address batch search example:
+
+```JavaScript
+const { MapsSearchClient, AzureKeyCredential } = require("@azure/maps-search");
+require("dotenv").config();
+
+async function main() {
+ const credential = new AzureKeyCredential(process.env.MAPS_SUBSCRIPTION_KEY);
+ const client = new MapsSearchClient(credential);
+
+ const poller = await client.beginReverseSearchAddressBatch([
+ // This is an invalid query
+ { coordinates: [148.858561, 2.294911] },
+ {
+ coordinates: [47.61010, -122.34255],
+ },
+    { coordinates: [47.6155, -122.33817], options: { radiusInMeters: 5000 } },
+ ]);
+
+ // Get the partial result and keep polling
+ while (poller.getOperationState().status === "running") {
+ const partialResponse = poller.getResult();
+ logResponse(partialResponse);
+ await poller.poll();
+ }
+
+  // You can simply wait until the operation is done:
+ // const response = await poller.pollUntilDone();
+ // logResponse(response)
+
+ // Resume the poller
+ const serializedState = poller.toString();
+ const rehydratedPoller = await client.resumeReverseSearchAddressBatch(
+ serializedState
+ );
+ const response = await rehydratedPoller.pollUntilDone();
+ logResponse(response);
+}
+
+function logResponse(response) {
+ console.log(
+ `${response.totalSuccessfulRequests}/${response.totalRequests} succeed.`
+ );
+ response.batchItems.forEach((item, idx) => {
+ console.log(`The result for ${idx + 1}th request:`);
+ if (item.response.error) {
+ console.error(item.response.error);
+ } else {
+ item.response.results.forEach((result) => {
+ console.log(result.address.freeformAddress);
+ });
+ }
+ });
+}
+
+main().catch((err) => {
+ console.error(err);
+});
+```
+
+## Additional information
+
+- The [Azure Maps Search client library for JavaScript/TypeScript][JS-SDK].
+
+[JS-SDK]: /javascript/api/overview/azure/maps-search-readme?view=azure-node-preview
+
+[defaultazurecredential]: https://github.com/Azure/azure-sdk-for-js/tree/@azure/maps-search_1.0.0-beta.1/sdk/identity/identity#defaultazurecredential
+
+[searchAddress]: /javascript/api/@azure/maps-search/mapssearchclient?view=azure-node-preview#@azure-maps-search-mapssearchclient-searchaddress
+
+[FuzzySearchRequest]: /javascript/api/@azure/maps-search/fuzzysearchrequest?view=azure-node-preview
+
+[SearchAddressResult]: /javascript/api/@azure/maps-search/searchaddressresult?view=azure-node-preview
+
+[search]: /rest/api/maps/search
+[Node.js Release]: https://github.com/nodejs/release#release-schedule
+[Node.js]: https://nodejs.org/en/download/
+[Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account
+[Subscription key]: quick-demo-map-app.md#get-the-primary-key-for-your-account
+
+[authentication]: azure-maps-authentication.md
+[Identity library]: /javascript/api/overview/azure/identity-readme
+
+[Host daemon]: /azure/azure-maps/how-to-secure-daemon-app#host-a-daemon-on-non-azure-resources
+[dotenv]: https://github.com/motdotla/dotenv#readme
+
+[search package]: https://www.npmjs.com/package/@azure/maps-search
+[search readme]: https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/maps/maps-search/README.md
+[search sample]: https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/maps/maps-search/samples/v1-beta
+
+[js route readme]: https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/maps/maps-route-rest/README.md
+[js route package]: https://www.npmjs.com/package/@azure-rest/maps-route
+[js route sample]: https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/maps/maps-route-rest/samples/v1-beta
azure-maps Rest Sdk Developer Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/rest-sdk-developer-guide.md
Azure Maps Python SDK supports Python version 3.7 or later. Check the [Azure S
| Service Name | PyPi package | Samples |
||-|--|
| [Search][py search readme] | [azure-maps-search][py search package] | [search samples][py search sample] |
-| [Routing][py routing readme] | [azure-maps-routing][py routing package] | [routing samples][py routing sample] |
-| [Rendering][py rendering readme]| [azure-maps-rendering][py rendering package]|[rendering sample][py rendering sample] |
+| [Route][py route readme] | [azure-maps-route][py route package] | [route samples][py route sample] |
+| [Render][py render readme]| [azure-maps-render][py render package]|[render sample][py render sample] |
| [Geolocation][py geolocation readme]|[azure-maps-geolocation][py geolocation package]|[geolocation sample][py geolocation sample] | <!--For more information, see the [python SDK Developers Guide](how-to-dev-guide-py-sdk.md).-->
Azure Maps JavaScript/TypeScript SDK supports LTS versions of [Node.js][Node.js]
| Service Name | NPM package | Samples |
||-|--|
-| [Search][js search readme] | [azure-maps-search][js search package] | [search samples][js search sample] |
+| [Search][js search readme] | [@azure/maps-search][js search package] | [search samples][js search sample] |
+| [Route][js route readme] | [@azure-rest/maps-route][js route package] | [route samples][js route sample] |
<!--For more information, see the [JavaScript/TypeScript SDK Developers Guide](how-to-dev-guide-js-sdk.md).-->
Azure Maps Java SDK supports [Java 8][Java 8] or above.
[py search package]: https://pypi.org/project/azure-maps-search [py search readme]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/maps/azure-maps-search/README.md [py search sample]: https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/maps/azure-maps-search/samples
-[py routing package]: https://pypi.org/project/azure-maps-route
-[py routing readme]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/maps/azure-maps-routing/README.md
-[py routing sample]: https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/maps/azure-maps-routing/samples
-[py rendering package]: https://pypi.org/project/azure-maps-render
-[py rendering readme]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/maps/azure-maps-rendering/README.md
-[py rendering sample]: https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/maps/azure-maps-rendering/samples
+[py route package]: https://pypi.org/project/azure-maps-route
+[py route readme]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/maps/azure-maps-routing/README.md
+[py route sample]: https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/maps/azure-maps-routing/samples
+[py render package]: https://pypi.org/project/azure-maps-render
+[py render readme]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/maps/azure-maps-render/README.md
+[py render sample]: https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/maps/azure-maps-render/samples
[py geolocation package]: https://pypi.org/project/azure-maps-geolocation [py geolocation readme]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/maps/azure-maps-geolocation/README.md [py geolocation sample]: https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/maps/azure-maps-geolocation/samples <!-- JavaScript/TypeScript SDK Developers Guide > [Node.js]: https://nodejs.org/en/download/
-[js search package]: https://www.npmjs.com/package/@azure/maps-search
[js search readme]: https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/maps/maps-search/README.md
+[js search package]: https://www.npmjs.com/package/@azure/maps-search
[js search sample]: https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/maps/maps-search/samples/v1-beta/javascript
+[js route readme]: https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/maps/maps-route-rest/README.md
+[js route package]: https://www.npmjs.com/package/@azure-rest/maps-route
+[js route sample]: https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/maps/maps-route-rest/samples/v1-beta
+ <!-- Java SDK Developers Guide > [Java 8]: https://www.java.com/en/download/java8_update.jsp [java search package]: https://repo1.maven.org/maven2/com/azure/azure-maps-search
azure-monitor Container Insights Logging V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-logging-v2.md
If your ConfigMap doesn't yet have the `log_collection_settings.schema` field, y
Example: `kubectl apply -f container-azm-ms-agentconfig.yaml` >[!NOTE]
->* The configuration change can take a few minutes to complete before it takes effect. All OMS agent pods in the cluster will restart.
->* The restart is a rolling restart for all OMS agent pods. It won't restart all of them at the same time.
+>* The configuration change can take a few minutes to complete before it takes effect. All ama-logs pods in the cluster will restart.
+>* The restart is a rolling restart for all ama-logs pods. It won't restart all of them at the same time.
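For reference, a minimal sketch of the ConfigMap fragment this section refers to is shown below. It assumes the standard `container-azm-ms-agentconfig` ConfigMap in `kube-system` and the `containerlog_schema_version` key, so check the full template before applying.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: container-azm-ms-agentconfig
  namespace: kube-system
data:
  log-data-collection-settings: |-
    [log_collection_settings]
       [log_collection_settings.schema]
          # "v2" enables the ContainerLogV2 schema
          containerlog_schema_version = "v2"
```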
## Next steps
* Configure [Basic Logs](../logs/basic-logs-configure.md) for ContainerLogv2.
azure-monitor Azure Monitor Workspace Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/azure-monitor-workspace-overview.md
The following table lists the contents of Azure Monitor workspaces. This table w
## Workspace design
-A single Azure Monitor workspace can collect data from multiple sources, but there may be circumstances where you require multiple workspaces to address your particular business requirements. Azure Monitor workspace design is similar to [Log Analytics workspace design](../logs/workspace-design.md). There are several reasons that you may consider creating additional workspaces including the following.
+A single Azure Monitor workspace can collect data from multiple sources, but there may be circumstances where you require multiple workspaces to address your particular business requirements. Azure Monitor workspace design is similar to [Log Analytics workspace design](../logs/workspace-design.md), and you may choose to match that design. Since Azure Monitor workspaces currently only contain Prometheus metrics, and metric data is typically not as sensitive as log data, you may choose to further consolidate your Azure Monitor workspaces for simplicity.
+
+There are several reasons that you may consider creating additional workspaces including the following.
- Azure tenants. If you have multiple Azure tenants, you'll usually create a workspace in each because several data sources can only send monitoring data to a workspace in the same Azure tenant. - Azure regions. Each workspace resides in a particular Azure region, and you may have regulatory or compliance requirements to store data in particular locations.
A single Azure Monitor workspace can collect data from multiple sources, but the
> [!NOTE] > You cannot currently query across multiple Azure Monitor workspaces.
-## Workspace limits
-These are currently only related to Prometheus metrics, since this is the only data currently stored in Azure Monitor workspaces.
-
-Many customers will choose an Azure Monitor workspace design to match their Log Analytics workspace design. Since Azure Monitor workspaces currently only contain Prometheus metrics, and metric data is typically not as sensitive as log data, you may choose to further consolidate your Azure Monitor workspaces for simplicity.
+## Limitations
+See [Azure Monitor service limits](../service-limits.md#prometheus-metrics) for performance-related service limits for Azure Monitor managed service for Prometheus.
+- Private Links aren't supported for Prometheus metrics collection into an Azure Monitor workspace.
+- Azure Monitor workspaces are currently only supported in public clouds.
## Create an Azure Monitor workspace
azure-monitor Prometheus Metrics Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-metrics-overview.md
The primary method for visualizing Prometheus metrics is [Azure Managed Grafana]
Azure Monitor managed service for Prometheus supports recording rules and alert rules using PromQL queries. Metrics recorded by recording rules are stored back in the Azure Monitor workspace and can be queried by dashboard or by other rules. Alerts fired by alert rules can trigger actions or notifications, as defined in the [action groups](/azure/azure-monitor/alerts/action-groups) configured for the alert rule. You can also view fired and resolved Prometheus alerts in the Azure portal along with other alert types. For your AKS cluster, a set of [predefined Prometheus alert rules](/azure/azure-monitor/containers/container-insights-metric-alerts) and [recording rules](/azure/azure-monitor/essentials/prometheus-metrics-scrape-default#recording-rules) is provided to allow an easy quick start. ## Limitations
-See [Azure Monitor service limits](../service-limits.md#prometheus-metrics) for performance related service limits for Azure Monitor managed service for Prometheus.
-- Private Links aren't supported for Prometheus metrics collection into Azure monitor workspace.
+See [Azure Monitor service limits](../service-limits.md#prometheus-metrics) for performance-related service limits for Azure Monitor workspaces.
+- Azure Monitor managed service for Prometheus is only supported in public clouds.
+- Private Links aren't supported for data collection into an Azure Monitor workspace.
- Metrics addon doesn't work on AKS clusters configured with HTTP proxy. - Scraping and storing metrics at frequencies less than 1 second isn't supported.
azure-resource-manager Decompile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/decompile.md
Title: Decompile ARM template JSON to Bicep description: Describes commands for decompiling Azure Resource Manager templates to Bicep files. Previously updated : 09/28/2022 Last updated : 11/11/2022
az bicep decompile --file main.json
The command creates a file named _main.bicep_ in the same directory as _main.json_. If _main.bicep_ exists in the same directory, use the **--force** switch to overwrite the existing Bicep file.
+You can also decompile ARM template JSON to Bicep from Visual Studio Code by using the **Decompile into Bicep** command. For more information, see [Visual Studio Code](./visual-studio-code.md#decompile-into-bicep).
+ > [!CAUTION] > Decompilation attempts to convert the file, but there is no guaranteed mapping from ARM template JSON to Bicep. You may need to fix warnings and errors in the generated Bicep file. Or, decompilation can fail if an accurate conversion isn't possible. To report any issues or inaccurate conversions, [create an issue](https://github.com/Azure/bicep/issues).
azure-resource-manager Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/migrate.md
description: Describes the recommended workflow when migrating Azure resources a
Previously updated : 03/16/2022 Last updated : 11/11/2022 # Migrate to Bicep
-There are a number of benefits to defining your Azure resources in Bicep including: simpler syntax, modularization, automatic dependency management, type validation and IntelliSense, and an improved authoring experience.
+There are many benefits to defining your Azure resources in Bicep including: simpler syntax, modularization, automatic dependency management, type validation and IntelliSense, and an improved authoring experience.
-When you have existing JSON Azure Resource Manager templates (ARM templates) and/or deployed resources, and you want to safely migrate those to Bicep, we suggest following a recommended workflow, consisting of five phases:
+When migrating existing JSON Azure Resource Manager templates (ARM templates) to Bicep, we recommend following the five-phase workflow:
:::image type="content" source="./media/migrate/five-phases.png" alt-text="Diagram of the five phases for migrating Azure resources to Bicep: convert, migrate, refactor, test, and deploy." border="false":::
-The first step in the process is to capture an initial representation of your Azure resources. If required, you then decompile the JSON file to an initial Bicep file, which you improve upon by refactoring. When you have a working file, you test and deploy using a process that minimizes the risk of breaking changes to your Azure environment.
+The first step in the process is to capture an initial representation of your Azure resources. If necessary, you then decompile the JSON file to an initial Bicep file, which you improve upon by refactoring. When you have a working file, you test and deploy using a process that minimizes the risk of breaking changes to your Azure environment.
:::image type="content" source="./media/migrate/migrate-bicep.png" alt-text="Diagram of the recommended workflow for migrating Azure resources to Bicep." border="false":::
-In this article we summarize this recommended workflow. For detailed guidance, see [Migrate Azure resources and JSON ARM templates to use Bicep](/training/modules/migrate-azure-resources-bicep/).
+In this article, we summarize this recommended workflow. For detailed guidance, see [Migrate Azure resources and JSON ARM templates to use Bicep](/training/modules/migrate-azure-resources-bicep/).
## Phase 1: Convert
The convert phase consists of two steps, which you complete in sequence:
1. **Capture a representation of your Azure resources.** If you have an existing JSON template that you're converting to Bicep, the first step is easy - you already have your source template. If you're converting Azure resources that were deployed by using the portal or another tool, you need to capture the resource definitions. You can capture a JSON representation of your resources using the Azure portal, Azure CLI, or Azure PowerShell cmdlets to *export* single resources, multiple resources, and entire resource groups. You can use the **Insert Resource** command within Visual Studio Code to import a Bicep representation of your Azure resource.
-1. **If required, convert the JSON representation to Bicep using the _decompile_ command.** [The Bicep tooling includes the `decompile` command to convert templates.](decompile.md) You can invoke the `decompile` command from either the Azure CLI, or from the Bicep CLI. The decompilation process is a best-effort process and doesn't guarantee a full mapping from JSON to Bicep. You may need to revise the generated Bicep file to meet your template best practices before using the file to deploy resources.
+1. **If required, convert the JSON representation to Bicep using the _decompile_ command.** [The Bicep tooling includes the `decompile` command to convert templates.](decompile.md) You can invoke the `decompile` command from [Visual Studio Code with the Bicep extension](./visual-studio-code.md#decompile-into-bicep), the [Azure CLI](./bicep-cli.md#decompile), or from the [Bicep CLI](./bicep-cli.md#decompile). The decompilation process is a best-effort process and doesn't guarantee a full mapping from JSON to Bicep. You may need to revise the generated Bicep file to meet your template best practices before using the file to deploy resources.
> [!NOTE] > You can import a resource by opening the Visual Studio Code command palette. Use <kbd>Ctrl+Shift+P</kbd> on Windows and Linux and <kbd>⌘+Shift+P</kbd> on macOS.
The migrate phase consists of three steps, which you complete in sequence:
1. **Copy each resource from your decompiled template.** Copy each resource individually from the converted Bicep file to the new Bicep file. This process helps you resolve any issues on a per-resource basis and to avoid any confusion as your template grows in size.
-1. **Identify and recreate any missing resources.** Not all Azure resource types can be exported through the Azure portal, Azure CLI, or Azure PowerShell. For example, virtual machine extensions such as the DependencyAgentWindows and MMAExtension (Microsoft Monitoring Agent) aren't supported resource types for export. For any resource that wasn't exported, such as virtual machine extensions, you'll need to recreate those resources in your new Bicep file. There are several tools and approaches you can use to recreate resources, including [Azure Resource Explorer](../templates/export-template-portal.md?azure-portal=true), the [Bicep and ARM template reference documentation](/azure/templates/?azure-portal=true), and the [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates?azure-portal=true) site.
+1. **Identify and recreate any missing resources.** Not all Azure resource types can be exported through the Azure portal, Azure CLI, or Azure PowerShell. For example, virtual machine extensions such as the DependencyAgentWindows and MMAExtension (Microsoft Monitoring Agent) aren't supported resource types for export. For any resource that wasn't exported, such as virtual machine extensions, you'll need to recreate those resources in your new Bicep file. You can recreate resources using a variety of tools and approaches, including [Azure Resource Explorer](../templates/export-template-portal.md?azure-portal=true), the [Bicep and ARM template reference documentation](/azure/templates/?azure-portal=true), and the [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates?azure-portal=true) site.
## Phase 3: Refactor
-In the _refactor_ phase of migrating your resourced to Bicep, the goal is to improve the quality of your Bicep code. These improvements can include changes, such as adding code comments, that bring the template in line with your template standards.
+In the _refactor_ phase of migrating your resources to Bicep, the goal is to improve the quality of your Bicep code. These enhancements may include changes such as adding code comments that align the template with your template standards.
The refactor phase consists of eight steps, which you complete in any order:
-1. **Review resource API versions.** When exporting Azure resources, the exported template may not have the latest API version for a resource type. If there are specific properties that you need for future deployments, update the API to the appropriate version. It's good practice to review the API versions for each exported resource.
+1. **Review resource API versions.** When you export Azure resources, the exported template may not contain the most recent API version for a resource type. If there are specific properties that you need for future deployments, update the API to the appropriate version. It's good practice to review the API versions for each exported resource.
-1. **Review the linter suggestions in your new Bicep file.** When creating Bicep files using the [Bicep extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-bicep&azure-portal=true), the [Bicep linter](linter.md) runs automatically and highlights suggestions and errors in your code. Many of the suggestions and errors include an option to apply a quick fix of the issue. Review these recommendations and adjust your Bicep file.
+1. **Review the linter suggestions in your new Bicep file.** When you use the [Bicep extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-bicep&azure-portal=true) to create Bicep files, the [Bicep linter](linter.md) runs automatically and highlights suggestions and errors in your code. Many of the suggestions and errors include an option to apply a quick fix of the issue. Review these recommendations and adjust your Bicep file.
1. **Revise parameters, variables, and symbolic names.** It's possible the names of parameters, variables, and symbolic names generated by the decompiler won't match your standard naming convention. Review the generated names and make adjustments as necessary. 1. **Simplify expressions.** The decompile process may not always take advantage of some of Bicep's features. Review any expressions generated in the conversion and simplify them. For example, the decompiled template may include a `concat()` or `format()` function that could be simplified by using [string interpolation](bicep-functions-string.md#concat). Review any suggestions from the linter and make adjustments as necessary.
-1. **Review child and extension resources.** With Bicep, there are multiple ways to declare [child resources](child-resource-name-type.md) and [extension resources](scope-extension-resources.md), including concatenating the names of your resources, using the `parent` keyword, and using nested resources. Consider reviewing these resources after decompilation and make sure the structure meets your standards. For example, ensure that you don't use string concatenation to create child resource names - you should use the `parent` property or a nested resource. Similarly, subnets can either be referenced as properties of a virtual network, or as a separate resource.
+1. **Review child and extension resources.** There are several ways to declare [child resources](child-resource-name-type.md) and [extension resources](scope-extension-resources.md) in Bicep, including concatenating the names of your resources, using the `parent` keyword, and using nested resources. Consider reviewing these resources after decompilation and make sure the structure meets your standards. For example, ensure that you don't use string concatenation to create child resource names - you should use the `parent` property or a nested resource. Similarly, subnets can either be referenced as properties of a virtual network, or as a separate resource.
1. **Modularize.** If you're converting a template that has many resources, consider breaking the individual resource types into [modules](modules.md) for simplicity. Bicep modules help to reduce the complexity of your deployments and increase the reusability of your Bicep code.
In the _deploy_ phase of migrating your resources to Bicep, the goal is to deplo
The deploy phase consists of four steps, which you complete in sequence:
-1. **Prepare a rollback plan.** The ability to recover from a failed deployment is crucial. Develop a rollback plan in the event of any breaking changes introduced into your environments. Take inventory of the types of resources that are deployed, such as virtual machines, web apps, and databases. Each resource's data plane should be considered as well. Do you have a way to recover a virtual machine and its data? Do you have a way to recover a database after deletion? A well-developed rollback plan will help to keep your downtime to a minimum if any issues arise from a deployment.
+1. **Prepare a rollback plan.** The ability to recover from a failed deployment is crucial. Create a rollback strategy in the event that any breaking changes are introduced into your environments. Take inventory of the types of resources that are deployed, such as virtual machines, web apps, and databases. Each resource's data plane should be considered as well. Do you have a way to recover a virtual machine and its data? Do you have a way to recover a database after deletion? A well-developed rollback plan will help to keep your downtime to a minimum if any issues arise from a deployment.
1. **Run the what-if operation against production.** Before deploying your final Bicep file to production, run the what-if operation against your production environment, making sure to use production parameter values, and consider documenting the results.
-1. **Deploy manually.** If you're going to use the converted template in a pipeline, such as [Azure DevOps](add-template-to-azure-pipelines.md) or [GitHub Actions](deploy-github-actions.md), consider running the deployment from your local machine first. It is better to verify the functionality of the template before adding it to your production pipeline. That way, you can respond quickly if there's a problem.
+1. **Deploy manually.** If you're going to use the converted template in a pipeline, such as [Azure DevOps](add-template-to-azure-pipelines.md) or [GitHub Actions](deploy-github-actions.md), consider running the deployment from your local machine first. It's preferable to test the template's functionality before incorporating it into your production pipeline. That way, you can respond quickly if there's a problem.
-1. **Run smoke tests.** After your deployment completes, it is a good idea to run a series of *smoke tests* - simple checks that validate that your application or workload is functioning properly. For example, test to see if your web app is accessible through normal access channels, such as the public Internet or across a corporate VPN. For databases, attempt to make a database connection and execute a series of queries. With virtual machines, log in to the virtual machine and make sure that all services are up and running.
+1. **Run smoke tests.** After your deployment is complete, you should run a series of *smoke tests* to ensure that your application or workload is working properly. For example, test to see if your web app is accessible through normal access channels, such as the public Internet or across a corporate VPN. For databases, attempt to make a database connection and execute a series of queries. With virtual machines, log in to the virtual machine and make sure that all services are up and running.
## Next steps
azure-resource-manager Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/visual-studio-code.md
Title: Create Bicep files by using Visual Studio Code description: Describes how to create Bicep files by using Visual Studio Code Previously updated : 11/02/2022 Last updated : 11/11/2022 # Create Bicep files by using Visual Studio Code
These commands include:
- [Build ARM Template](#build-arm-template) - [Create Bicep Configuration File](#create-bicep-configuration-file)
+- [Decompile into Bicep](#decompile-into-bicep)
- [Deploy Bicep File](#deploy-bicep-file) - [Generate Parameters File](#generate-parameters-file) - [Insert Resource](#insert-resource)
These commands include:
These commands are also shown in the context menu when you right-click a Bicep file: +
+When you right-click a JSON file:
+ ### Build ARM template
To create a Bicep configuration file:
1. Open Visual Studio Code. 1. From the **View** menu, select **Command Palette** (or press **[CTRL/CMD]**+**[SHIFT]**+**P**), and then select **Bicep: Create Bicep Configuration File**. 1. Select the file directory where you want to place the file.
-1. Save the configuration file when you are done.
+1. Save the configuration file when you're done.
+
+### Decompile into Bicep
+
+This command decompiles an ARM JSON template into a Bicep file, and places it in the same directory as the ARM JSON template. The new file has the same file name with the *.bicep* extension. If a Bicep file with the same file name already exists in the same folder, Visual Studio Code prompts you to overwrite the existing file or create a copy.
### Deploy Bicep file
azure-resource-manager Delete Resource Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/delete-resource-group.md
az resource delete \
## Required access and deletion failures
-To delete a resource group, you need access to the delete action for the **Microsoft.Resources/subscriptions/resourceGroups** resource. You also need delete for all resources in the resource group.
+To delete a resource group, you need access to the delete action for the **Microsoft.Resources/subscriptions/resourceGroups** resource.
For a list of operations, see [Azure resource provider operations](../../role-based-access-control/resource-provider-operations.md). For a list of built-in roles, see [Azure built-in roles](../../role-based-access-control/built-in-roles.md).
batch Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/best-practices.md
This article discusses best practices and useful tips for using the Azure Batch
- **Multiple compute nodes:** Individual nodes aren't guaranteed to always be available. While uncommon, hardware failures, operating system updates, and a host of other issues can cause individual nodes to be offline. If your Batch workload requires deterministic, guaranteed progress, you should allocate pools with multiple nodes. -- **Images with impending end-of-life (EOL) dates:** We strongly recommended avoiding images with impending Batch support end of life (EOL) dates. These dates can be discovered via the [`ListSupportedImages` API](/rest/api/batchservice/account/listsupportedimages), [PowerShell](/powershell/module/az.batch/get-azbatchsupportedimage), or [Azure CLI](/cli/azure/batch/pool/supported-images). It's your responsibility to periodically refresh your view of the EOL dates pertinent to your pools and migrate your workloads before the EOL date occurs. If you're using a custom image with a specified node agent, ensure that you follow Batch support end-of-life dates for the image for which your custom image is derived or aligned with.
+- **Images with impending end-of-life (EOL) dates:** We strongly recommend avoiding images with impending Batch support end of life (EOL) dates. These dates can be discovered via the [`ListSupportedImages` API](/rest/api/batchservice/account/listsupportedimages), [PowerShell](/powershell/module/az.batch/get-azbatchsupportedimage), or [Azure CLI](/cli/azure/batch/pool/supported-images); see the sketch after this list. It's your responsibility to periodically refresh your view of the EOL dates pertinent to your pools and migrate your workloads before the EOL date occurs. If you're using a custom image with a specified node agent, ensure that you follow Batch support end-of-life dates for the image from which your custom image is derived or with which it's aligned. An image without a specified `batchSupportEndOfLife` date indicates that such a date has not been determined yet by the Batch service. Absence of a date does not indicate that the respective image will be supported indefinitely. An EOL date may be added or updated at any time in the future.
- **Unique resource names:** Batch resources (jobs, pools, etc.) often come and go over time. For example, you may create a pool on Monday, delete it on Tuesday, and then create another similar pool on Thursday. Each new resource you create should be given a unique name that you haven't used before. You can create uniqueness by using a GUID (either as the entire resource name, or as a part of it) or by embedding the date and time that the resource was created in the resource name. Batch supports [DisplayName](/dotnet/api/microsoft.azure.batch.jobspecification.displayname), which can give a resource a more readable name even if the actual resource ID is something that isn't human-friendly. Using unique names makes it easier for you to differentiate which particular resource did something in logs and metrics. It also removes ambiguity if you ever have to file a support case for a resource.
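A minimal sketch of how you might surface those EOL dates with the Azure Batch SDK for Python (`azure-batch`); the account name, key, and URL are placeholders, and constructor argument names can vary slightly between SDK versions:

```python
# Sketch only: lists supported images and their Batch support EOL dates.
# Replace the account name, key, and URL with values for your Batch account.
from azure.batch import BatchServiceClient
from azure.batch.batch_auth import SharedKeyCredentials

credentials = SharedKeyCredentials("mybatchaccount", "<account-key>")
client = BatchServiceClient(credentials, batch_url="https://mybatchaccount.eastus.batch.azure.com")

for image in client.account.list_supported_images():
    # batch_support_end_of_life is None when no EOL date has been determined yet;
    # that doesn't mean the image is supported indefinitely.
    print(
        image.image_reference.offer,
        image.image_reference.sku,
        image.node_agent_sku_id,
        image.batch_support_end_of_life,
    )
```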
For the purposes of isolation, if your scenario requires isolating jobs or tasks
Batch node agents aren't automatically upgraded for pools that have non-zero compute nodes. To ensure your Batch pools receive the latest security fixes and updates to the Batch node agent, you need to either resize the pool to zero compute nodes or recreate the pool. It's recommended to monitor the [Batch Node Agent release notes](https://github.com/Azure/Batch/blob/master/changelogs/nodeagent/CHANGELOG.md) to understand changes to new Batch node agent versions. Checking regularly for updates when they were released enables you to plan upgrades to the latest agent version.
-Before you recreate or resize your pool, you should download any node agent logs for debugging purposes if you're experiencing issues with your Batch pool or compute nodes. This is further discussed in the [Nodes](#nodes) section.
+Before you recreate or resize your pool, you should download any node agent logs for debugging purposes if you're experiencing issues with your Batch pool or compute nodes. This process is further discussed in the [Nodes](#nodes) section.
> [!NOTE] > For general guidance about security in Azure Batch, see [Batch security and compliance best practices](security-best-practices.md).
For User Defined Routes (UDRs), it's recommended to use `BatchNodeManagement.<re
Ensure that your systems honor DNS Time-to-Live (TTL) for your Batch account service URL. Additionally, ensure that your Batch service clients and other connectivity mechanisms to the Batch service don't rely on IP addresses.
-If your requests receive 5xx level HTTP responses and there's a "Connection: close" header in the response, your Batch service client should observe the recommendation by closing the existing connection, re-resolving DNS for the Batch account service URL, and attempt following requests on a new connection.
+Any HTTP request that receives a 5xx level response along with a "Connection: close" header requires adjusting your Batch service client behavior. Your Batch service client should observe the recommendation by closing the existing connection, re-resolving DNS for the Batch account service URL, and attempting subsequent requests on a new connection, as sketched below.
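A minimal sketch of that client behavior using the generic `requests` library rather than a Batch SDK; the helper name and retry count are illustrative:

```python
# Sketch only: closes the connection and retries on a fresh session when the
# service returns a 5xx response with a "Connection: close" header.
import requests

def call_with_connection_close_handling(url, headers, max_attempts=3):
    session = requests.Session()
    try:
        for _ in range(max_attempts):
            response = session.get(url, headers=headers)
            if response.status_code < 500:
                return response
            if response.headers.get("Connection", "").lower() == "close":
                # Drop pooled connections so the next attempt opens a new connection
                # and re-resolves DNS for the Batch account service URL.
                session.close()
                session = requests.Session()
        return response
    finally:
        session.close()
```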
### Retry requests automatically
batch Tutorial Parallel Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/tutorial-parallel-python.md
In this tutorial, you convert MP4 media files in parallel to MP3 format using th
## Prerequisites
-* [Python version 2.7 or 3.6+](https://www.python.org/downloads/)
+* [Python version 3.7+](https://www.python.org/downloads/)
* [pip](https://pip.pypa.io/en/stable/installing/) package manager
cognitive-services Long Audio Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/long-audio-api.md
Previously updated : 09/25/2022 Last updated : 11/11/2022
Prepare an input text file, in either plain text or SSML text, then add the foll
> [!NOTE] > `concatenateResult` is an optional parameter. If this parameter isn't set, the audio outputs will be generated per paragraph. You can also concatenate the audios into one output by including the parameter.
-> `outputFormat` is also optional. By default, the audio output is set to `riff-16khz-16bit-mono-pcm`. For more information about supported audio output formats, see [Audio output formats](#audio-output-formats).
+> `outputFormat` is also optional. By default, the audio output is set to `riff-24khz-16bit-mono-pcm`. For more information about supported audio output formats, see [Audio output formats](#audio-output-formats).
```python def submit_synthesis():
def submit_synthesis():
'description': 'sample description', 'locale': locale, 'voices': json.dumps(voice_identities),
- 'outputformat': 'riff-16khz-16bit-mono-pcm',
+ 'outputformat': 'riff-24khz-16bit-mono-pcm',
'concatenateresult': True, }
response.status_code: 200
} ], "properties": {
- "outputFormat": "riff-16khz-16bit-mono-pcm",
+ "outputFormat": "riff-24khz-16bit-mono-pcm",
"concatenateResult": false, "totalDuration": "PT5M57.252S", "billableCharacterCount": 3048
response.status_code: 200
} ], "properties": {
- "outputFormat": "riff-16khz-16bit-mono-pcm",
+ "outputFormat": "riff-24khz-16bit-mono-pcm",
"concatenateResult": false, "totalDuration": "PT1S", "billableCharacterCount": 5
response.status_code: 200
} ], "properties": {
- "outputFormat": "riff-16khz-16bit-mono-pcm",
+ "outputFormat": "riff-24khz-16bit-mono-pcm",
"concatenateResult": false, "totalDuration": "PT5M57.252S", "billableCharacterCount": 3048
The Long audio API is available in multiple regions with unique endpoints.
We support flexible audio output formats. You can generate audio outputs per paragraph or concatenate the audio outputs into a single output by setting the `concatenateResult` parameter. The following audio output formats are supported by the Long Audio API: > [!NOTE]
-> The default audio format is riff-16khz-16bit-mono-pcm.
+> The default audio format is riff-24khz-16bit-mono-pcm.
> > The sample rate for long audio voices is 24kHz, not 48kHz. Other sample rates can be obtained through upsampling or downsampling when synthesizing.
cognitive-services Quickstart Personalizer Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/quickstart-personalizer-sdk.md
To clean up your Cognitive Services subscription, you can delete the resource or
* [Portal](../cognitive-services-apis-create-account.md#clean-up-resources) * [Azure CLI](../cognitive-services-apis-create-account-cli.md#clean-up-resources)
+## Download the quickstart trained model
+If you'd like to download a Personalizer model that has been trained on 5,000 events from the QuickStart example, you can visit the [Azure-Samples repository](https://github.com/Azure-Samples/cognitive-services-personalizer-samples/tree/master/quickstarts) and download the model zip file, then upload it to your Personalizer instance under the "Setup" -> "Model Import/Export" section.
+ ## Next steps > [!div class="nextstepaction"]
cognitive-services Use Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/use-key-vault.md
If you're using a multi-service resource or Language resource, you can update [y
[!INCLUDE [key-vault-cli-authentication](includes/key-vault-cli-authentication.md)]
-## Create a python application
+## Create a Python application
Create a new folder named `keyVaultExample`. Then use your preferred code editor to create a file named `program.py` inside the newly created folder.
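As a rough sketch of what `program.py` might contain (assuming the `azure-identity` and `azure-keyvault-secrets` packages; the environment variable and secret name are placeholders):

```python
# program.py - minimal sketch; the vault name and secret name are placeholders.
import os

from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

key_vault_name = os.environ["KEY_VAULT_NAME"]
vault_url = f"https://{key_vault_name}.vault.azure.net"

# DefaultAzureCredential picks up your Azure CLI sign-in, environment variables, or managed identity.
client = SecretClient(vault_url=vault_url, credential=DefaultAzureCredential())
secret = client.get_secret("CognitiveServicesKey")  # hypothetical secret name
print(f"Retrieved secret '{secret.name}' from {vault_url}")
```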
communication-services Is Sdk Active In Multiple Tabs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/calling-sdk/is-sdk-active-in-multiple-tabs.md
+
+ Title: Verify if an application is active in multiple tabs of a browser
+
+description: Learn how to detect if an application is active in multiple tabs of a browser using the Azure Communication Services Calling SDK for JavaScript
++ Last updated : 10/17/2022++++++++
+# How to detect if an application using Azure Communication Services' SDK is active in multiple tabs of a browser
+
+Based on best practices, your application should not connect to calls from multiple browser tabs simultaneously. Handling multiple calls on multiple tabs of a browser on mobile can cause undefined behavior due to resource allocation for microphone and camera on the device.
+To detect if an application is active in multiple tabs of a browser, a developer can use the property `isCallClientActiveInAnotherTab` and the event `isCallClientActiveInAnotherTabChanged` of a `CallClient` instance.
++
+```javascript
+const callClient = new CallClient();
+// Check if an application is active in multiple tabs of a browser
+const isCallClientActiveInAnotherTab = callClient.feature(Features.DebugInfo).isCallClientActiveInAnotherTab;
+...
+// Use a named handler so the same reference can be passed to both on() and off()
+const isCallClientActiveInAnotherTabChangedHandler = () => {
+    // callback();
+};
+// Subscribe to the event to listen for changes
+callClient.feature(Features.DebugInfo).on('isCallClientActiveInAnotherTabChanged', isCallClientActiveInAnotherTabChangedHandler);
+...
+// Unsubscribe from the event to stop listening for changes
+callClient.feature(Features.DebugInfo).off('isCallClientActiveInAnotherTabChanged', isCallClientActiveInAnotherTabChangedHandler);
+```
container-instances Container Instances Virtual Network Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-virtual-network-concepts.md
Last updated 06/17/2022
This article provides background about virtual network scenarios, limitations, and resources. For deployment examples using the Azure CLI, see [Deploy container instances into an Azure virtual network](container-instances-vnet.md). > [!IMPORTANT]
-> Container group deployment to a virtual network is generally available for Linux containers, in most regions where Azure Container Instances is available. For details, see [Regions and resource availability](container-instances-region-availability.md).
+> Container group deployment to a virtual network is generally available for Linux and Windows containers, in most regions where Azure Container Instances is available. For more details on which regions have virtual network capabilities, see [Regions and resource availability](container-instances-region-availability.md).
## Scenarios
Container groups deployed into an Azure virtual network enable scenarios like:
## Other limitations
-* Currently, only Linux containers are supported in a container group deployed to a virtual network.
* To deploy container groups to a subnet, the subnet can't contain other resource types. Remove all existing resources from an existing subnet prior to deploying container groups to it, or create a new subnet. * To deploy container groups to a subnet, the subnet and the container group must be on the same Azure subscription. * You can't enable a [liveness probe](container-instances-liveness-probe.md) or [readiness probe](container-instances-readiness-probe.md) in a container group deployed to a virtual network.
cosmos-db Readpreference Global Distribution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/readpreference-global-distribution.md
Refer to the detailed [MongoDB Read Preference behavior](https://docs.mongodb.co
Based on common scenarios, we recommend using the following settings:
-1. If **low latency reads** are required, use the **NEAREST** read preference mode. This setting directs the read operations to the nearest available region. Note that if the nearest region is the WRITE region, then these operations are directed to that region.
+1. If **high availability and low latency reads** are required, use the **NEAREST** read preference mode. This setting directs the read operations to the nearest available region. Note that if the nearest region is the WRITE region, then these operations are directed to that region.
2. If **high availability and geo distribution of reads** are required (latency is not a constraint), then use the **PRIMARY PREFERRED** or **SECONDARY PREFERRED** read preference mode. This setting directs read operations to an available WRITE or READ region respectively. If the region is not available, then requests are directed to the next available region as per the read preference behavior. The following snippet from the sample application shows how to configure NEAREST Read Preference in NodeJS:
cosmos-db Tutorial Global Distribution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/tutorial-global-distribution.md
var collection = database.GetCollection<BsonDocument>(collectionName);
collection = collection.WithReadPreference(new ReadPreference(ReadPreferenceMode.Nearest)); ```
-For applications with a primary read/write region and a secondary region for disaster recovery (DR) scenarios, we recommend setting your collection's read preference to *secondary preferred*. A read preference of *secondary preferred* is configured to read from the secondary region when the primary region is unavailable.
+For applications with a primary read/write region and a secondary region for disaster recovery (DR) scenarios, we recommend setting your collection's read preference to *primary preferred*. A read preference of *primary preferred* reads from the primary region by default and falls back to the secondary region when the primary region is unavailable.
```csharp var collection = database.GetCollection<BsonDocument>(collectionName);
cost-management-billing Direct Ea Administration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/direct-ea-administration.md
Title: Azure portal administration for direct Enterprise Agreements
description: This article explains the common tasks that a direct enterprise administrator accomplishes in the Azure portal. Previously updated : 08/29/2022 Last updated : 11/11/2022
After the account owner receives an account ownership email, they need to confir
After account ownership is confirmed, you can create subscriptions and purchase resources with the subscriptions.
+### To activate an enrollment account with a .onmicrosoft.com email account
+
+If you're a new EA account owner with a .onmicrosoft.com email account, you might not have a forwarding email address by default. In that situation, you might not receive the activation email. If this situation applies to you, use the following steps to activate your account ownership.
+
+1. Sign in to the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_GTM/ModernBillingMenuBlade/AllBillingScopes).
+1. Navigate to **Cost Management + Billing** and select a billing scope.
+1. Select your account.
+1. In the left menu under **Settings**, select **Activate Account**.
+1. On the Activate Account page, select **Yes, I wish to continue**, and then select **Activate this account**.
+ :::image type="content" source="./media/direct-ea-administration/activate-account.png" alt-text="Screenshot showing the Activate Account page for onmicrosoft.com accounts." lightbox="./media/direct-ea-administration/activate-account.png" :::
+1. After the activation process completes, copy and paste the following link to your browser. The page will open and create a subscription that's associated with your enrollment.
+ `https://signup.azure.com/signup?offer=MS-AZR-0017P&appId=IbizaCatalogBlade`
+ ## Change Azure subscription or account ownership EA admins can use the Azure portal to transfer account ownership of selected or all subscriptions in an enrollment. When you complete a subscription or account ownership transfer, Microsoft updates the account owner.
data-factory Concepts Data Flow Performance Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-data-flow-performance-sources.md
Previously updated : 06/20/2022 Last updated : 10/11/2022 # Optimizing sources
data-factory Connector Troubleshoot Azure Data Lake https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-troubleshoot-azure-data-lake.md
Previously updated : 08/10/2022 Last updated : 11/08/2022
This article provides suggestions to troubleshoot common problems with the Azure
1. The file name contains `_metadata`. 2. The file name starts with `.` (dot).
+### Error code: ADLSGen2ForbiddenError
+
+- **Message**: `ADLS Gen2 failed for forbidden: Storage operation % on % get failed with 'Operation returned an invalid status code 'Forbidden'.`
+
+- **Cause**: There are two possible causes:
+
+ 1. The integration runtime is blocked by network access in Azure storage account firewall settings.
+ 2. The service principal or managed identity doesn't have enough permission to access the data.
+
+- **Recommendation**:
+
+ 1. Check your Azure storage account network settings to see whether the public network access is disabled. If disabled, use a managed virtual network integration runtime and create a private endpoint to access. For more information, see [Managed virtual network](managed-virtual-network-private-endpoint.md) and [Build a copy pipeline using managed VNet and private endpoints](tutorial-copy-data-portal-private.md).
+
+ 1. If you have enabled selected virtual networks and IP addresses in your Azure storage account network setting:
+
+ 1. It's possible because some IP address ranges of your integration runtime are not allowed by your storage account firewall settings. Add the Azure integration runtime IP addresses or the self-hosted integration runtime IP address to your storage account firewall. For Azure integration runtime IP addresses, see [Azure Integration Runtime IP addresses](azure-integration-runtime-ip-addresses.md), and to learn how to add IP ranges in the storage account firewall, see [Managing IP network rules](../storage/common/storage-network-security.md#managing-ip-network-rules).
+
+ 1. If you allow trusted Azure services to access this storage account in the firewall, you must use [managed identity authentication](connector-azure-data-lake-storage.md#managed-identity) in copy activity.
+
+ For more information about the Azure storage account firewalls settings, see [Configure Azure Storage firewalls and virtual networks](../storage/common/storage-network-security.md).
+
+ 1. If you use service principal or managed identity authentication, grant service principal or managed identity appropriate permissions to do copy. For source, at least the **Storage Blob Data Reader** role. For sink, at least the **Storage Blob Data Contributor** role. For more information, see [Copy and transform data in Azure Data Lake Storage Gen2](connector-azure-data-lake-storage.md#service-principal-authentication).
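One way to sanity-check the role assignment outside the copy activity is a short script with the Azure SDK for Python; this is only a sketch, and the account, file system, and folder names are placeholders:

```python
# Sketch only: verifies that the signed-in identity (service principal or managed identity)
# can list paths in an ADLS Gen2 file system, which requires at least Storage Blob Data Reader.
from azure.identity import DefaultAzureCredential
from azure.storage.filedatalake import DataLakeServiceClient

service_client = DataLakeServiceClient(
    account_url="https://<storage-account>.dfs.core.windows.net",
    credential=DefaultAzureCredential(),
)
file_system_client = service_client.get_file_system_client("<file-system>")

for path in file_system_client.get_paths(path="<source-folder>"):
    print(path.name)
```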
+ ## Next steps For more troubleshooting help, try these resources:
data-factory Continuous Integration Delivery Improvements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/continuous-integration-delivery-improvements.md
Two commands are currently available in the package:
### Export ARM template
-Run `npm run build export <rootFolder> <factoryId> [outputFolder]` to export the ARM template by using the resources of a given folder. This command also runs a validation check prior to generating the ARM template. Here's an example:
+Run `npm run build export <rootFolder> <factoryId> [outputFolder]` to export the ARM template by using the resources of a given folder. This command also runs a validation check prior to generating the ARM template. Here's an example using a resource group named **testResourceGroup**:
```dos npm run build export C:\DataFactories\DevDataFactory /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/testResourceGroup/providers/Microsoft.DataFactory/factories/DevDataFactory ArmTemplateOutput
data-factory Control Flow Execute Data Flow Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-execute-data-flow-activity.md
Previously updated : 07/20/2022 Last updated : 10/27/2022 # Data Flow activity in Azure Data Factory and Azure Synapse Analytics
To use a Data Flow activity in a pipeline, complete the following steps:
1. Select the new Data Flow activity on the canvas if it is not already selected, and its **Settings** tab, to edit its details. :::image type="content" source="media/control-flow-execute-data-flow-activity/data-flow-activity.png" alt-text="Shows the UI for a Data Flow activity.":::
-1. Checkpoint key is used to set the checkpoint when data flow is used for changed data capture. You can overwrite it. Data flow activities use a guid value as checkpoint key instead of ΓÇ£pipelinename + activitynameΓÇ¥ so that it can always keep tracking customerΓÇÖs change data capture state even thereΓÇÖs any renaming actions. All existing data flow activity will use the old pattern key for backward compatibility. Checkpoint key option after publishing a new data flow activity with change data capture enabled data flow resource is shown as below.
+1. Checkpoint key is used to set the checkpoint when data flow is used for changed data capture. You can overwrite it. Data flow activities use a GUID value as the checkpoint key instead of "pipeline name + activity name" so that it can always keep tracking the customer's change data capture state even if there are renaming actions. All existing data flow activities will use the old pattern key for backward compatibility. The checkpoint key option shown below appears after you publish a new data flow activity that references a change data capture enabled data flow resource.
:::image type="content" source="media/control-flow-execute-data-flow-activity/data-flow-activity-checkpoint.png" alt-text="Shows the UI for a Data Flow activity with checkpoint key."::: 3. Select an existing data flow or create a new one using the New button. Select other options as required to complete your configuration.
data-factory Control Flow Power Query Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-power-query-activity.md
Previously updated : 12/12/2021 Last updated : 10/27/2022 # Power Query activity in Azure Data Factory The Power Query activity allows you to build and execute Power Query mash-ups to execute data wrangling at scale in a Data Factory pipeline. You can create a new Power Query mash-up from the New resources menu option or by adding a Power Activity to your pipeline.
-![Screenshot that shows Power Query in the factory resources pane.](media/data-flow/power-query-activity-1.png)
You can work directly inside of the Power Query mash-up editor to perform interactive data exploration and then save your work. Once complete, you can take your Power Query activity and add it to a pipeline. Azure Data Factory will automatically scale it out and operationalize your data wrangling using Azure Data Factory's data flow Spark environment.
To use a Power Query activity in a pipeline, complete the following steps:
:::image type="content" source="media/control-flow-power-query-activity/for-each-activity-using-power-query-output.png" alt-text="Shows the ForEach Activity's Settings tab with &nbsp;Add dynamic content&nbsp; link for the Items property.":::
-1. Any activity outputs are displayed and can be used when defining your dynamic content by selecting them in the **Add dynamic content** pane.
+1. Any activity outputs are displayed and can be used when defining your dynamic content by selecting them in the **Pipeline expression builder** pane.
:::image type="content" source="media/control-flow-power-query-activity/using-power-query-output-in-dynamic-content.png" alt-text="Shows the &nbsp;Add dynamic content&nbsp; pane referencing the Power Query defined above.":::
data-factory Credentials https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/credentials.md
Below are the generic steps for using a **user-assigned managed identity** in th
> [!NOTE]
-> You can use SDK/ PowerShell/ REST APIs for the above actions.
+> You can use [SDK](/dotnet/api/microsoft.azure.management.synapse?preserve-view=true&view=azure-dotnet-preview)/ [PowerShell](/powershell/module/az.synapse/?context=%2Fazure%2Fsynapse-analytics%2Fcontext%2Fcontext&view=azps-9.1.0)/ [REST APIs](/rest/api/synapse/) for the above actions.
> Linked services with user-assigned managed identity are currently not supported in Synapse Spark. ## Next steps
data-factory Data Flow Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-troubleshoot-guide.md
Previously updated : 09/29/2022 Last updated : 11/02/2022 # Troubleshoot mapping data flows in Azure Data Factory
This section lists common error codes and messages reported by mapping data flow
- **Message**: Azure Cosmos DB throughput scale operation cannot be performed because another scale operation is in progress, please retry after sometime. - **Cause**: The throughput scale operation of the Azure Cosmos DB can't be performed because another scale operation is in progress.-- **Recommendation**: Login to Azure Cosmos DB account, and manually change container throughput to be auto scale or add a custom activity after mapping data flows to reset the throughput.
+- **Recommendation**: Log in to Azure Cosmos DB account, and manually change container throughput to be auto scale or add a custom activity after mapping data flows to reset the throughput.
### Error code: DF-Cosmos-IdPropertyMissed
This section lists common error codes and messages reported by mapping data flow
- **Cause**: The short data type is not supported in the Azure Cosmos DB instance. - **Recommendation**: Add a derived column transformation to convert related columns from short to integer before using them in the Azure Cosmos DB sink transformation.
+### Error code: DF-CSVWriter-InvalidQuoteSetting
+
+- **Message**: Job failed while writing data with error: Quote character and escape character cannot be empty if column value contains column delimiter
+- **Cause**: Both quote characters and escape characters are empty when the column value contains column delimiter.
+- **Recommendation**: Set your quote character or escape character.
+ ### Error code: DF-Delimited-ColumnDelimiterMissed - **Message**: Column delimiter is required for parse.
This section lists common error codes and messages reported by mapping data flow
- **Cause**: Key column(s) are missed for non-insertable operations. - **Recommendation**: Specify key column(s) on delta sink to have non-insertable operations.
+### Error code: DF-Dynamics-InvalidNullAlternateKeyColumn
+
+- **Message**: Any column value of alternate Key can't be NULL.
+- **Cause**: Your alternate key column value can't be null.
+- **Recommendation**: Confirm that the column values of your alternate key aren't NULL.
+
+### Error code: DF-Dynamics-TooMuchAlternateKey
+
+- **Cause**: One lookup field with more than one alternate key reference is not valid.
+- **Recommendation**: Check your schema mapping and confirm that each lookup field has a single alternate key.
+ ### Error code: DF-Excel-DifferentSchemaNotSupport - **Message**: Read excel files with different schema is not supported now.
This section lists common error codes and messages reported by mapping data flow
- **Cause**: Possible problems with the JSON file: unsupported encoding, corrupt bytes, or using JSON source as a single document on many nested lines. - **Recommendation**: Verify that the JSON file's encoding is supported. On the source transformation that's using a JSON dataset, expand **JSON Settings** and turn on **Single Document**.
+### Error code: DF-Executor-UnauthorizedStorageAccess
+
+- **Cause**: You are not permitted to access the storage account either due to missing roles for managed identity/service principal authentication or network firewall settings.
+- **Recommendation**: When using managed identity/service principal authentication,
+ 1. For source: In Storage Explorer, grant the managed identity/service principal at least **Execute** permission for ALL upstream folders and the file system, along with **Read** permission for the files to copy. Alternatively, in Access control (IAM), grant the managed identity/service principal at least the **Storage Blob Data Reader** role.
+ 2. For sink: In Storage Explorer, grant the managed identity/service principal at least **Execute** permission for ALL upstream folders and the file system, along with **Write** permission for the sink folder. Alternatively, in Access control (IAM), grant the managed identity/service principal at least the **Storage Blob Data Contributor** role. <br>
+
+ Also please ensure that the network firewall settings in the storage account are configured correctly, as turning on firewall rules for your storage account blocks incoming requests for data by default, unless the requests originate from a service operating within an Azure Virtual Network (VNet) or from allowed public IP addresses.
+
+### Error code: DF-Executor-UnreachableStorageAccount
+
+- **Message**: System is not able to resolve the IP address of the host. Please verify that your host name is correct or check if your DNS server is able to resolve the host to an IP address successfully
+- **Cause**: Unable to reach the given storage account.
+- **Recommendation**: Check the name of the storage account and make sure the storage account exists.
+ ### Error code: DF-File-InvalidSparkFolder - **Message**: Failed to read footer for file.
This section lists common error codes and messages reported by mapping data flow
- **Cause**: SQL server configuration error. - **Recommendations**: Install a trusted certificate on your SQL server, or change `encrypt` connection string setting to false and `trustServerCertificate` connection string setting to true. -- ### Error code: DF-PGSQL-InvalidCredential - **Message**: User/password should be specified.
This section lists common error codes and messages reported by mapping data flow
| Your context value can't be empty when reading data. | Specify the context. | | Your context value can't be empty when browsing object names. | Specify the context. |
+### Error code: DF-SAPODP-DataflowSystemError
+
+- **Recommendation**: Reconfigure the activity and run it again. If the issue persists, you can contact Microsoft support for further assistance.
+
+### Error code: DF-SAPODP-DataParsingFailed
+
+- **Cause**: Most likely, you have hidden column settings in your SAP table. When you use SAP mapping data flow to read data from the SAP server, it returns the whole schema (columns, including hidden ones), but the returned data doesn't contain the related values. As a result, the data is misaligned, which leads to value parsing issues or wrong data value issues.
+- **Recommendation**: There are two recommendations for this issue:
+ 1. Remove hidden settings from the related column(s) through SAP GUI.
+ 2. If you want to keep the existing SAP settings unchanged, use the hidden feature (manually add the DSL property `enableProjection:true` in the script) in SAP mapping data flow to filter out the hidden column(s) and continue to read data.
+ ### Error code: DF-SAPODP-ObjectInvalid - **Cause**: The object name is not found or not released.
This section lists common error codes and messages reported by mapping data flow
### Error code: DF-SAPODP-OOM - **Message**: No more memory available to add rows to an internal table-- **Cause**: SAP Table connector has its limitation for big table extraction. SAP Table underlying relies on an RFC which will read all the data from the table into the memory of SAP system, so out of memory (OOM) issue will happen when we extracting big tables.
+- **Cause**: The SAP Table connector has a limitation for big table extraction. It relies on an underlying RFC that reads all the data from the table into the memory of the SAP system, so an out of memory (OOM) issue will happen when extracting big tables.
- **Recommendation**: Use SAP CDC connector to do full load directly from your source system, then move delta to SAP Landscape Transformation Replication Server (SLT) after init without delta is released. ### Error code: DF-SAPODP-SourceNotSupportDelta
data-factory Tutorial Incremental Copy Change Tracking Feature Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-incremental-copy-change-tracking-feature-portal.md
Title: Incrementally copy data using Change Tracking using Azure portal
-description: In this tutorial, you create an Azure Data Factory with a pipeline that loads delta data based on change tracking information in the source database in Azure SQL Database to an Azure blob storage.
+ Title: Incrementally copy data by using change tracking in the Azure portal
+description: Learn how to create a data factory with a pipeline that loads delta data based on change tracking information from Azure SQL Database and moves it to Azure Blob Storage.
Last updated 07/05/2021
-# Incrementally load data from Azure SQL Database to Azure Blob Storage using change tracking information using the Azure portal
+# Incrementally copy data from Azure SQL Database to Blob Storage by using change tracking in the Azure portal
[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
-In this tutorial, you create an Azure Data Factory with a pipeline that loads delta data based on **change tracking** information in the source database in Azure SQL Database to an Azure blob storage.
+In a data integration solution, incrementally loading data after initial data loads is a widely used scenario. The changed data within a period in your source data store can be easily sliced (for example, `LastModifyTime`, `CreationTime`). But in some cases, there's no explicit way to identify the delta data from the last time that you processed the data. You can use the change tracking technology supported by data stores such as Azure SQL Database and SQL Server to identify the delta data.
+
+This tutorial describes how to use Azure Data Factory with change tracking to incrementally load delta data from Azure SQL Database into Azure Blob Storage. For more information about change tracking, see [Change tracking in SQL Server](/sql/relational-databases/track-changes/about-change-tracking-sql-server).
You perform the following steps in this tutorial:
> * Add or update data in the source table.
> * Create, run, and monitor the incremental copy pipeline.
-## Overview
-In a data integration solution, incrementally loading data after initial data loads is a widely used scenario. In some cases, the changed data within a period in your source data store can be easily to sliced up (for example, LastModifyTime, CreationTime). In some cases, there is no explicit way to identify the delta data from last time you processed the data. The Change Tracking technology supported by data stores such as Azure SQL Database and SQL Server can be used to identify the delta data. This tutorial describes how to use Azure Data Factory with SQL Change Tracking technology to incrementally load delta data from Azure SQL Database into Azure Blob Storage. For more concrete information about SQL Change Tracking technology, see [Change tracking in SQL Server](/sql/relational-databases/track-changes/about-change-tracking-sql-server).
-
-## End-to-end workflow
-Here are the typical end-to-end workflow steps to incrementally load data using the Change Tracking technology.
+## High-level solution
+In this tutorial, you create two pipelines that perform the following operations.
> [!NOTE]
-> Both Azure SQL Database and SQL Server support the Change Tracking technology. This tutorial uses Azure SQL Database as the source data store. You can also use a SQL Server instance.
-
-1. **Initial loading of historical data** (run once):
- 1. Enable Change Tracking technology in the source database in Azure SQL Database.
- 2. Get the initial value of SYS_CHANGE_VERSION in the database as the baseline to capture changed data.
- 3. Load full data from the source database into an Azure blob storage.
-2. **Incremental loading of delta data on a schedule** (run periodically after the initial loading of data):
- 1. Get the old and new SYS_CHANGE_VERSION values.
- 3. Load the delta data by joining the primary keys of changed rows (between two SYS_CHANGE_VERSION values) from **sys.change_tracking_tables** with data in the **source table**, and then move the delta data to destination.
- 4. Update the SYS_CHANGE_VERSION for the delta loading next time.
+> This tutorial uses Azure SQL Database as the source data store. You can also use SQL Server.
-## High-level solution
-In this tutorial, you create two pipelines that perform the following two operations:
+1. **Initial loading of historical data**: You create a pipeline with a copy activity that copies the entire data from the source data store (Azure SQL Database) to the destination data store (Azure Blob Storage):
+ 1. Enable change tracking technology in the source database in Azure SQL Database.
+ 2. Get the initial value of `SYS_CHANGE_VERSION` in the database as the baseline to capture changed data.
+ 3. Load full data from the source database into Azure Blob Storage.
-1. **Initial load:** you create a pipeline with a copy activity that copies the entire data from the source data store (Azure SQL Database) to the destination data store (Azure Blob Storage).
+ :::image type="content" source="media/tutorial-incremental-copy-change-tracking-feature-portal/full-load-flow-diagram.png" alt-text="Diagram that shows full loading of data.":::
+1. **Incremental loading of delta data on a schedule**: You create a pipeline with the following activities and run it periodically:
+ 1. Create *two lookup activities* to get the old and new `SYS_CHANGE_VERSION` values from Azure SQL Database.
+ 2. Create *one copy activity* to copy the inserted, updated, or deleted data (the delta data) between the two `SYS_CHANGE_VERSION` values from Azure SQL Database to Azure Blob Storage.
- :::image type="content" source="media/tutorial-incremental-copy-change-tracking-feature-portal/full-load-flow-diagram.png" alt-text="Full loading of data":::
-1. **Incremental load:** you create a pipeline with the following activities, and run it periodically.
- 1. Create **two lookup activities** to get the old and new SYS_CHANGE_VERSION from Azure SQL Database and pass it to copy activity.
- 2. Create **one copy activity** to copy the inserted/updated/deleted data between the two SYS_CHANGE_VERSION values from Azure SQL Database to Azure Blob Storage.
- 3. Create **one stored procedure activity** to update the value of SYS_CHANGE_VERSION for the next pipeline run.
+ You load the delta data by joining the primary keys of changed rows (between two `SYS_CHANGE_VERSION` values) from `sys.change_tracking_tables` with data in the source table, and then move the delta data to the destination.
+ 3. Create *one stored procedure activity* to update the value of `SYS_CHANGE_VERSION` for the next pipeline run.
- :::image type="content" source="media/tutorial-incremental-copy-change-tracking-feature-portal/incremental-load-flow-diagram.png" alt-text="Increment load flow diagram":::
+ :::image type="content" source="media/tutorial-incremental-copy-change-tracking-feature-portal/incremental-load-flow-diagram.png" alt-text="Diagram that shows incremental loading of data.":::
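The join that drives the incremental load is easier to see as plain T-SQL. The following sketch assumes the `data_source_table` used throughout this tutorial and uses local variables where the pipeline later substitutes Data Factory expressions; it isn't part of the pipeline itself:

```sql
-- Sketch of the delta query pattern. The variables stand in for the stored (last) and
-- current SYS_CHANGE_VERSION values that the pipeline's lookup activities provide.
DECLARE @last_sync_version bigint = 0;
DECLARE @current_version bigint = CHANGE_TRACKING_CURRENT_VERSION();

SELECT s.PersonID, s.Name, s.Age, CT.SYS_CHANGE_VERSION, CT.SYS_CHANGE_OPERATION
FROM data_source_table AS s
RIGHT OUTER JOIN CHANGETABLE(CHANGES data_source_table, @last_sync_version) AS CT
    ON s.PersonID = CT.PersonID
WHERE CT.SYS_CHANGE_VERSION <= @current_version;
```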
+## Prerequisites
-If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account before you begin.
+* **Azure subscription**. If you don't have one, create a [free account](https://azure.microsoft.com/free/) before you begin.
+* **Azure SQL Database**. You use a database in Azure SQL Database as the *source* data store. If you don't have one, see [Create a database in Azure SQL Database](/azure/azure-sql/database/single-database-create-quickstart) for steps to create it.
+* **Azure storage account**. You use Blob Storage as the *sink* data store. If you don't have an Azure storage account, see [Create a storage account](../storage/common/storage-account-create.md) for steps to create one. Create a container named *adftutorial*.
-## Prerequisites
-* **Azure SQL Database**. You use the database as the **source** data store. If you don't have a database in Azure SQL Database, see the [Create a database in Azure SQL Database](/azure/azure-sql/database/single-database-create-quickstart) article for steps to create one.
-* **Azure Storage account**. You use the blob storage as the **sink** data store. If you don't have an Azure storage account, see the [Create a storage account](../storage/common/storage-account-create.md) article for steps to create one. Create a container named **adftutorial**.
-### Create a data source table in Azure SQL Database
+## Create a data source table in Azure SQL Database
-1. Launch **SQL Server Management Studio**, and connect to SQL Database.
-2. In **Server Explorer**, right-click your **database** and choose the **New Query**.
-3. Run the following SQL command against your database to create a table named `data_source_table` as data source store.
+1. Open SQL Server Management Studio, and connect to SQL Database.
+2. In Server Explorer, right-click your database, and then select **New Query**.
+3. Run the following SQL command against your database to create a table named `data_source_table` as the source data store:
```sql create table data_source_table
If you don't have an Azure subscription, create a [free](https://azure.microsoft
(5, 'eeee', 22); ```
-4. Enable **Change Tracking** mechanism on your database and the source table (data_source_table) by running the following SQL query:
+4. Enable change tracking on your database and the source table (`data_source_table`) by running the following SQL query.
> [!NOTE]
- > - Replace &lt;your database name&gt; with the name of the database in Azure SQL Database that has the data_source_table.
- > - The changed data is kept for two days in the current example. If you load the changed data for every three days or more, some changed data is not included. You need to either change the value of CHANGE_RETENTION to a bigger number. Alternatively, ensure that your period to load the changed data is within two days. For more information, see [Enable change tracking for a database](/sql/relational-databases/track-changes/enable-and-disable-change-tracking-sql-server#enable-change-tracking-for-a-database)
+ > - Replace `<your database name>` with the name of the database in Azure SQL Database that has `data_source_table`.
+ > - The changed data is kept for two days in the current example. If you load the changed data for every three days or more, some changed data is not included. You need to either change the value of `CHANGE_RETENTION` to a bigger number or ensure that your period to load the changed data is within two days. For more information, see [Enable change tracking for a database](/sql/relational-databases/track-changes/enable-and-disable-change-tracking-sql-server#enable-change-tracking-for-a-database).
+ ```sql ALTER DATABASE <your database name> SET CHANGE_TRACKING = ON
If you don't have an Azure subscription, create a [free](https://azure.microsoft
ENABLE CHANGE_TRACKING WITH (TRACK_COLUMNS_UPDATED = ON) ```
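   If your load interval is longer than the two-day retention mentioned in the note, you can widen the retention window instead. A minimal sketch (the seven-day value is only an example):

```sql
-- Example only: extend the change tracking retention window for the database.
ALTER DATABASE <your database name>
SET CHANGE_TRACKING (CHANGE_RETENTION = 7 DAYS, AUTO_CLEANUP = ON)
```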
-5. Create a new table and store the ChangeTracking_version with a default value by running the following query:
+5. Create a new table to store the change tracking version with a default value by running the following query:
```sql create table table_store_ChangeTracking_version
If you don't have an Azure subscription, create a [free](https://azure.microsoft
``` > [!NOTE]
- > If the data is not changed after you enabled the change tracking for SQL Database, the value of the change tracking version is 0.
-6. Run the following query to create a stored procedure in your database. The pipeline invokes this stored procedure to update the change tracking version in the table you created in the previous step.
+ > If the data is not changed after you enable change tracking for SQL Database, the value of the change tracking version is `0`.
+6. Run the following query to create a stored procedure in your database. The pipeline invokes this stored procedure to update the change tracking version in the table that you created in the previous step.
+
```sql CREATE PROCEDURE Update_ChangeTracking_Version @CurrentTrackingVersion BIGINT, @TableName varchar(50) AS
If you don't have an Azure subscription, create a [free](https://azure.microsoft
END ```
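Optionally, you can exercise the stored procedure by hand before wiring it into the pipeline. The following sketch assumes the `table_store_ChangeTracking_version` table from the previous step; the parameter values are examples only, and running it overwrites the stored version:

```sql
-- Example only: call the stored procedure manually and inspect the stored version.
EXEC Update_ChangeTracking_Version @CurrentTrackingVersion = 0, @TableName = 'data_source_table';

SELECT * FROM table_store_ChangeTracking_version;
```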
-### Azure PowerShell
+## Create a data factory
+
+1. Open the Microsoft Edge or Google Chrome web browser. Currently, only these browsers support the Data Factory user interface (UI).
+1. In the [Azure portal](https://ms.portal.azure.com/), on the left menu, select **Create a resource**.
+1. Select **Integration** > **Data Factory**.
+ ![Screenshot that shows selection of a data factory in creating a resource.](media/tutorial-incremental-copy-change-tracking-feature-portal/new-azure-data-factory-menu.png)
+1. On the **New data factory** page, enter **ADFTutorialDataFactory** for the name.
+
+ The name of data factory must be globally unique. If you get an error that says the name that you chose is not available, change the name (for example, to **yournameADFTutorialDataFactory**) and try creating the data factory again. For more information, see [Azure Data Factory naming rules](naming-rules.md).
+
+1. Select the Azure subscription in which you want to create the data factory.
+1. For **Resource Group**, take one of the following steps:
-Install the latest Azure PowerShell modules by following instructions in [How to install and configure Azure PowerShell](/powershell/azure/install-Az-ps).
+ - Select **Use existing**, and then select an existing resource group from the dropdown list.
+ - Select **Create new**, and then enter the name of a resource group.
+
+ To learn about resource groups, see [Using resource groups to manage your Azure resources](../azure-resource-manager/management/overview.md).
+1. For **Version**, select **V2**.
+1. For **Region**, select the region for the data factory.
-## Create a data factory
+ The dropdown list displays only locations that are supported. The data stores (for example, Azure Storage and Azure SQL Database) and computes (for example, Azure HDInsight) that a data factory uses can be in other regions.
+1. Select **Next: Git configuration**. Set up the repository by following the instructions in [Configuration method 4: During factory creation](/azure/data-factory/source-control#configuration-method-4-during-factory-creation), or select the **Configure Git later** checkbox.
+ ![Screenshot that shows options for Git configuration in creating a data factory.](media/tutorial-incremental-copy-change-tracking-feature-portal/new-azure-data-factory-menu-git-configuration.png)
+1. Select **Review + create**.
+1. Select **Create**.
-1. Launch **Microsoft Edge** or **Google Chrome** web browser. Currently, Data Factory UI is supported only in Microsoft Edge and Google Chrome web browsers.
-1. On the left menu, select **Create a resource** > **Data + Analytics** > **Data Factory**:
+ On the dashboard, the **Deploying Data Factory** tile shows the status.
- ![Screenshot that shows the data factory selection in the New pane.](media/tutorial-incremental-copy-change-tracking-feature-portal/new-azure-data-factory-menu.png)
-1. In the **New data factory** page, enter **ADFTutorialDataFactory** for the **name**.
+ :::image type="content" source="media/tutorial-incremental-copy-change-tracking-feature-portal/deploying-data-factory.png" alt-text="Screenshot of the tile that shows the status of deploying a data factory.":::
+1. After the creation is complete, the **Data Factory** page appears. Select the **Launch studio** tile to open the Azure Data Factory UI on a separate tab.
- ![New data factory page.](media/tutorial-incremental-copy-change-tracking-feature-portal/new-azure-data-factory-menu.png)
-1. The name of the Azure Data Factory must be **globally unique**. If you receive the following error, change the name of the data factory (for example, yournameADFTutorialDataFactory) and try creating again. See [Data Factory - Naming Rules](naming-rules.md) article for naming rules for Data Factory artifacts.
+## Create linked services
- *Data factory name ΓÇ£ADFTutorialDataFactoryΓÇ¥ is not available*
-3. Select your Azure **subscription** in which you want to create the data factory.
-4. For the **Resource Group**, do one of the following steps:
+You create linked services in a data factory to link your data stores and compute services to the data factory. In this section, you create linked services to your Azure storage account and your database in Azure SQL Database.
- - Select **Use existing**, and select an existing resource group from the drop-down list.
- - Select **Create new**, and enter the name of a resource group.
-
- To learn about resource groups, see [Using resource groups to manage your Azure resources](../azure-resource-manager/management/overview.md).
-1. Select **V2** for the **version**.
-1. Select the **Region** for the data factory. Only locations that are supported are displayed in the drop-down list. The data stores (Azure Storage, Azure SQL Database, etc.) and computes (HDInsight, etc.) used by data factory can be in other regions.
-1. Select **Next : Git configuration** and setup the repository following the instructions in [Configuration method 4: During factory creation](/azure/data-factory/source-control) or select **Configure Git later** checkbox.
- ![Create Data Factory - Git Configuration.](media/tutorial-incremental-copy-change-tracking-feature-portal/new-azure-data-factory-menu-git-configuration.png)
-1. Select **Review + create**.
-7. Click **Create**.
-8. On the dashboard, you see the following tile with status: **Deploying data factory**.
+### Create an Azure Storage linked service
- :::image type="content" source="media/tutorial-incremental-copy-change-tracking-feature-portal/deploying-data-factory.png" alt-text="deploying data factory tile":::
-1. After the creation is complete, you see the **Data Factory** page as shown in the image.
-1. Select **Launch studio** tile to launch the Azure Data Factory user interface (UI) in a separate tab.
-11. In the home page, switch to the **Manage** tab in the left panel as shown in the following image:
+To link your storage account to the data factory:
+
+1. In the Data Factory UI, on the **Manage** tab, under **Connections**, select **Linked services**. Then select **+ New** or the **Create linked service** button.
+ ![Screenshot that shows selections for creating a linked service.](media/tutorial-incremental-copy-change-tracking-feature-portal/new-connection-button-storage.png)
+1. In the **New Linked Service** window, select **Azure Blob Storage**, and then select **Continue**.
+1. Enter the following information:
+ 1. For **Name**, enter **AzureStorageLinkedService**.
+ 1. For **Connect via integration runtime**, select the integration runtime.
+ 1. For **Authentication type**, select an authentication method.
+ 1. For **Storage account name**, select your Azure storage account.
+1. Select **Create**.
+
+### Create an Azure SQL Database linked service
+
+To link your database to the data factory:
+
+1. In the Data Factory UI, on the **Manage** tab, under **Connections**, select **Linked services**. Then select **+ New**.
+1. In the **New Linked Service** window, select **Azure SQL Database**, and then select **Continue**.
+1. Enter the following information:
+ 1. For **Name**, enter **AzureSqlDatabaseLinkedService**.
+ 1. For **Server name**, select your server.
+ 1. For **Database name**, select your database.
+ 1. For **Authentication type**, select an authentication method. This tutorial uses SQL authentication for demonstration.
+ 1. For **User name**, enter the name of the user.
+ 1. For **Password**, enter a password for the user. Or, provide the information for **Azure Key Vault - AKV linked service**, **Secret name**, and **Secret version**.
+1. Select **Test connection** to test the connection.
+1. Select **Create** to create the linked service.
+
+ ![Screenshot that shows settings for an Azure SQL Database linked service.](media/tutorial-incremental-copy-change-tracking-feature-portal/azure-sql-database-linked-service-setting.png)
-## Create linked services
-You create linked services in a data factory to link your data stores and compute services to the data factory. In this section, you create linked services to your Azure Storage account and your database in Azure SQL Database.
-
-### Create Azure Storage linked service.
-In this step, you link your Azure Storage Account to the data factory.
-
-1. Navigate to **Linked services** in **Connections** under **Manage** tab and click **+ New** or click on **Create linked service** button.
- ![New connection button.](media/tutorial-incremental-copy-change-tracking-feature-portal/new-connection-button-storage.png)
-1. In the **New Linked Service** window, select **Azure Blob Storage**, and click **Continue**.
-
-1. In the **New Linked Service** window, do the following steps:
-1. Enter **AzureStorageLinkedService** for the **Name** field.
-1. Select the integration runtime in **Connect via integrationruntime**.
-1. Select the integration runtime in **Connect via integrationruntime**.
-1. Select the **Authentication type**.
-1. Select your Azure Storage account for **Storage account name**.
-1. Click **Create**.
-### Create Azure SQL Database linked service.
-In this step, you link your database to the data factory.
-
-1. Select **Linked services** under **Connections**, and click **+ New**.
-1. In the **New Linked Service** window, select **Azure SQL Database**, and click **Continue**.
-1. In the **New Linked Service** window, do the following steps:
-
- 1. Enter **AzureSqlDatabaseLinkedService** for the **Name** field.
- 2. Select your server for the **Server name** field.
-1. Select your database for the **Database name** field.
-1. Select the authentication type for the **Authentication type** field.
-1. We are using SQL authentication for this demo, enter name of the user for the **User name** field.
-1. Enter password for the user for the **Password** field or provide the **Azure Key Vault - AKV linked service** name, **Secret name** and **secret version**.
-1. Click **Test connection** to test connection.
-1. Click **Create** to create the linked service.![Azure SQL Database linked service settings.](media/tutorial-incremental-copy-change-tracking-feature-portal/azure-sql-database-linked-service-setting.png)
## Create datasets
-In this step, you create datasets to represent data source, data destination. and the place to store the SYS_CHANGE_VERSION.
+
+In this section, you create datasets to represent the data source and data destination, along with the place to store the `SYS_CHANGE_VERSION` values.
### Create a dataset to represent source data
-In this step, you create a dataset to represent the source data.
-
-1. Click **+ (plus)** and click **Dataset** in the treeview under the **Author** tab or click on the ellipsis for Dataset actions.
- ![New Dataset menu 1.](media/tutorial-incremental-copy-change-tracking-feature-portal/new-dataset-menu.png)
-1. Select **Azure SQL Database**, and click **Continue**.
-1. In the **Set Properties** window, do the following steps:
- 1. Set the name of the dataset to **SourceDataset**.
- 1. Select **AzureSqlDatabaseLinkedService** for **Linked service**.
- 1. Select **dbo.data_source_table** for **Table name.**
- 1. Select the radio button to **Import schema** for **From connection/store**.
-1. Click **OK**.
-
- ![Source Dataset Properties.](media/tutorial-incremental-copy-change-tracking-feature-portal/source-dataset-properties.png)
-
-### Create a dataset to represent data copied to sink data store.
-In this step, you create a dataset to represent the data that is copied from the source data store. You created the adftutorial container in your Azure Blob Storage as part of the prerequisites. Create the container if it does not exist (or) set it to the name of an existing one. In this tutorial, the output file name is dynamically generated by using the expression: `@CONCAT('Incremental-', pipeline().RunId, '.txt')`.
-
-1. Click **+ (plus)** and click **Dataset** in the treeview under the **Author** tab or click on the ellipsis for Dataset actions.
- ![New Dataset menu 2.](media/tutorial-incremental-copy-change-tracking-feature-portal/new-dataset-menu.png)
-1. Select **Azure Blob Storage**, and click **Continue**.
-1. Select the format of the data type as **DelimitedText** and click **Continue**.
-1. In the **Set** **properties** window, change the name of the dataset to **SinkDataset**.
-1. In the **Set properties** window, do the following steps:
-1. Change the name of the dataset to **SinkDataset**.
-1. Change the name of the dataset to **SinkDataset**.
-1. Select **AzureBlobStorageLinkedService** for **Linked service**.
-1. Enter **adftutorial/incchgtracking** for **folder** part of the **filePath**.
-1. Click **OK**.
-
- ![Sink dataset - Properties.](media/tutorial-incremental-copy-change-tracking-feature-portal/source-dataset-properties.png)
-1. The dataset will be visible in the treeview, do the following steps:
- 1. In **Connection** tab, click in the text box field for **File name**. **Add dynamic content** option will appear, click on it.
- ![Sink Dataset - Setting dynamic file path.](media/tutorial-incremental-copy-change-tracking-feature-portal/sink-dataset-filepath.png)
- 1. Click on **Add dynamic content[Alt+Shift+D].**
- 1. **Pipeline expression builder** window will appear. Paste the following in the text box filed, @concat('Incremental-',pipeline().RunId,'.csv')
- 1. Click **OK**.
+
+1. In the Data Factory UI, on the **Author** tab, select the plus sign (**+**). Then select **Dataset**, or select the ellipsis for dataset actions.
+
+ ![Screenshot that shows selections for starting the creation of a dataset.](media/tutorial-incremental-copy-change-tracking-feature-portal/new-dataset-menu.png)
+1. Select **Azure SQL Database**, and then select **Continue**.
+1. In the **Set Properties** window, take the following steps:
+ 1. For **Name**, enter **SourceDataset**.
+ 1. For **Linked service**, select **AzureSqlDatabaseLinkedService**.
+ 1. For **Table name**, select **dbo.data_source_table**.
+ 1. For **Import schema**, select the **From connection/store** option.
+ 1. Select **OK**.
+
+ ![Screenshot that shows property settings for a source dataset.](media/tutorial-incremental-copy-change-tracking-feature-portal/source-dataset-properties.png)
+
+### Create a dataset to represent data copied to the sink data store
+
+In the following procedure, you create a dataset to represent the data that's copied from the source data store. You created the *adftutorial* container in Azure Blob Storage as part of the prerequisites. Create the container if it doesn't exist, or set it to the name of an existing one. In this tutorial, the output file name is dynamically generated from the expression `@CONCAT('Incremental-', pipeline().RunId, '.csv')`.
+
+1. In the Data Factory UI, on the **Author** tab, select **+**. Then select **Dataset**, or select the ellipsis for dataset actions.
+
+ ![Screenshot that shows selections for starting the creation of a dataset.](media/tutorial-incremental-copy-change-tracking-feature-portal/new-dataset-menu.png)
+1. Select **Azure Blob Storage**, and then select **Continue**.
+1. Select the format of the data type as **DelimitedText**, and then select **Continue**.
+1. In the **Set properties** window, take the following steps:
+ 1. For **Name**, enter **SinkDataset**.
+ 1. For **Linked service**, select **AzureBlobStorageLinkedService**.
+ 1. For **File path**, enter **adftutorial/incchgtracking**.
+ 1. Select **OK**.
+1. After the dataset appears in the tree view, go to the **Connection** tab and select the **File name** text box. When the **Add dynamic content** option appears, select it.
+
+ ![Screenshot that shows the option for setting a dynamic file path for a sink dataset.](media/tutorial-incremental-copy-change-tracking-feature-portal/sink-dataset-filepath.png)
+1. The **Pipeline expression builder** window appears. Paste `@concat('Incremental-',pipeline().RunId,'.csv')` in the text box.
+1. Select **OK**.
+ ### Create a dataset to represent change tracking data
-In this step, you create a dataset for storing the change tracking version. You created the table table_store_ChangeTracking_version as part of the prerequisites.
-
-1. In the treeview, click **+ (plus)**, and click **Dataset**.
-1. Select **Azure SQL Database**, and click **Continue**.
-1. In the **Set Properties** window, do the following steps:
-1. Set the name of the dataset to **ChangeTrackingDataset**.
-1. Select **AzureSqlDatabaseLinkedService** for **Linked service**.
-1. Select **dbo.table_store_ChangeTracking_version** for **Table name.**
-1. Select the radio button to **Import schema** for **From connection/store**.
-1. Click **OK**.
+
+In the following procedure, you create a dataset for storing the change tracking version. You created the `table_store_ChangeTracking_version` table as part of the prerequisites.
+
+1. In the Data Factory UI, on the **Author** tab, select **+**, and then select **Dataset**.
+1. Select **Azure SQL Database**, and then select **Continue**.
+1. In the **Set Properties** window, take the following steps:
+ 1. For **Name**, enter **ChangeTrackingDataset**.
+ 1. For **Linked service**, select **AzureSqlDatabaseLinkedService**.
+ 1. For **Table name**, select **dbo.table_store_ChangeTracking_version**.
+ 1. For **Import schema**, select the **From connection/store** option.
+ 1. Select **OK**.
+ ## Create a pipeline for the full copy
-In this step, you create a pipeline with a copy activity that copies the entire data from the source data store (Azure SQL Database) to the destination data store (Azure Blob Storage).
-1. Click **+ (plus)** in the left pane, and click **Pipeline > Pipeline**.
- ![Screenshot shows the Pipeline option for a data factory.](media/tutorial-incremental-copy-change-tracking-feature-portal/new-pipeline-menu.png)
-1. You see a new tab for configuring the pipeline. You also see the pipeline in the treeview. In the **Properties** window, change the name of the pipeline to **FullCopyPipeline**.
-1. In the **Activities** toolbox, expand **Move & transform**, and drag-drop the **Copy** activity to the pipeline designer surface or search the **copy data** activity in search bar under **Activities**, and set the name **FullCopyActivity**.
-1. Switch to the **Source** tab, and select **SourceDataset** for the **Source Dataset** field.
-1. Switch to the **Sink** tab, and select **SinkDataset** for the **Sink Dataset** field.
+In the following procedure, you create a pipeline with a copy activity that copies the entire data from the source data store (Azure SQL Database) to the destination data store (Azure Blob Storage):
-1. To validate the pipeline definition, click **Validate** on the toolbar. Confirm that there is no validation error. Close the **Pipeline Validation output** by clicking **Close**.
-1. To publish entities (linked services, datasets, and pipelines), click **Publish all**. Wait until the publishing succeeds.
-8. Wait until you see the **Successfully published** message.
+1. In the Data Factory UI, on the **Author** tab, select **+**, and then select **Pipeline** > **Pipeline**.
+
+ ![Screenshot that shows selections for starting to create a pipeline for a data factory.](media/tutorial-incremental-copy-change-tracking-feature-portal/new-pipeline-menu.png)
+1. A new tab appears for configuring the pipeline. The pipeline also appears in the tree view. In the **Properties** window, change the name of the pipeline to **FullCopyPipeline**.
+1. In the **Activities** toolbox, expand **Move & transform**. Take one of the following steps:
+ - Drag the copy activity to the pipeline designer surface.
+ - On the search bar under **Activities**, search for the copy data activity, and then set the name to **FullCopyActivity**.
+1. Switch to the **Source** tab. For **Source Dataset**, select **SourceDataset**.
+1. Switch to the **Sink** tab. For **Sink Dataset**, select **SinkDataset**.
+1. To validate the pipeline definition, select **Validate** on the toolbar. Confirm that there is no validation error. Close the pipeline validation output.
+1. To publish entities (linked services, datasets, and pipelines), select **Publish all**. Wait until you see the **Successfully published** message.
+
+ :::image type="content" source="./media/tutorial-incremental-copy-change-tracking-feature-portal/publishing-succeeded.png" alt-text="Screenshot of the message that says publishing succeeded.":::
+1. To see notifications, select the **Show Notifications** button.
- :::image type="content" source="./media/tutorial-incremental-copy-change-tracking-feature-portal/publishing-succeeded.png" alt-text="Publishing succeeded":::
-1. You can also see notifications by clicking the **Show Notifications** button on the left. To close the notifications window, click **X** or **Close** button on the bottom of the plane.
### Run the full copy pipeline
-1. Click **Add** **trigger** on the toolbar for the pipeline, and click **Trigger Now**.
- ![Screenshot shows the Trigger Now option selected from the Trigger menu.](media/tutorial-incremental-copy-change-tracking-feature-portal/trigger-now-menu.png)
-1. Click **OK** on the Pipeline run window.
+
+1. In the Data Factory UI, on the toolbar for the pipeline, select **Add trigger**, and then select **Trigger now**.
+
+ ![Screenshot that shows the option for triggering a full copy now.](media/tutorial-incremental-copy-change-tracking-feature-portal/trigger-now-menu.png)
+1. In the **Pipeline run** window, select **OK**.
- ![Pipeline run confirmation with parameter check.](media/tutorial-incremental-copy-change-tracking-feature-portal/trigger-pipeline-run-confirmation.png)
+ ![Screenshot that shows a pipeline run confirmation with a parameter check.](media/tutorial-incremental-copy-change-tracking-feature-portal/trigger-pipeline-run-confirmation.png)
+ ### Monitor the full copy pipeline
-1. Click the **Monitor** tab on the left. You see the pipeline run in the list and its status. To refresh the list, click **Refresh**. Hover on the pipeline run to get the option to **Rerun** or check **consumption**.
- ![Screenshot shows pipeline runs for a data factory.](media/tutorial-incremental-copy-change-tracking-feature-portal/monitor-full-copy-pipeline-run.png)
-1. To view activity runs associated with the pipeline run, click the pipeline name from **Pipeline name** column. There is only one activity in the pipeline, so you see only one entry in the list. To switch back to the pipeline runs view, click **All** **pipeline runs** link at the top.
+1. In the Data Factory UI, select the **Monitor** tab. The pipeline run and its status appear in the list. To refresh the list, select **Refresh**. Hover over the pipeline run to get the **Rerun** or **Consumption** option.
+
+ ![Screenshot that shows a pipeline run and status.](media/tutorial-incremental-copy-change-tracking-feature-portal/monitor-full-copy-pipeline-run.png)
+1. To view activity runs associated with the pipeline run, select the pipeline name from the **Pipeline name** column. There's only one activity in the pipeline, so there's only one entry in the list. To switch back to the view of pipeline runs, select the **All pipeline runs** link at the top.
+ ### Review the results
-You see a file named `incremental-<GUID>.csv` in the `incchgtracking` folder of the `adftutorial` container.
-![Output file from full copy.](media/tutorial-incremental-copy-change-tracking-feature-portal/full-copy-output-file.png)
+
+The *incchgtracking* folder of the *adftutorial* container includes a file named `incremental-<GUID>.csv`.
+
+![Screenshot of an output file from a full copy.](media/tutorial-incremental-copy-change-tracking-feature-portal/full-copy-output-file.png)
+ The file should have the data from your database: ```
PersonID,Name,Age
5,"eeee",22 ```+ ## Add more data to the source table
-Run the following query against your database to add a row and update a row.
+Run the following query against your database to add a row and update a row:
```sql INSERT INTO data_source_table
SET [Age] = '10', [name]='update' where [PersonID] = 1
```

## Create a pipeline for the delta copy
-In this step, you create a pipeline with the following activities, and run it periodically. The **lookup activities** get the old and new SYS_CHANGE_VERSION from Azure SQL Database and pass it to copy activity. The **copy activity** copies the inserted/updated/deleted data between the two SYS_CHANGE_VERSION values from Azure SQL Database to Azure Blob Storage. The **stored procedure activity** updates the value of SYS_CHANGE_VERSION for the next pipeline run.
-
-1. In the Data Factory UI, switch to the **Author** tab. Click **+ (plus)** in the left pane treeview, and click **Pipeline > Pipeline**.
- ![Screenshot shows how to create a pipeline in a data factory.](media/tutorial-incremental-copy-change-tracking-feature-portal/new-pipeline-menu-2.png)
-2. You see a new tab for configuring the pipeline. You also see the pipeline in the treeview. In the **Properties** window, change the name of the pipeline to **IncrementalCopyPipeline**.
-3. Expand **General** in the **Activities** toolbox, and drag-drop the **Lookup** activity to the pipeline designer surface or search in the **Search activities** search box. Set the name of the activity to **LookupLastChangeTrackingVersionActivity**. This activity gets the change tracking version used in the last copy operation that is stored in the table **table_store_ChangeTracking_version**.
-4. Switch to the **Settings** in the **Properties** window, and select **ChangeTrackingDataset** for the **Source Dataset** field.
+In the following procedure, you create a pipeline with activities and run it periodically. When you run the pipeline:
-5. Drag-and-drop the **Lookup** activity from the **Activities** toolbox to the pipeline designer surface. Set the name of the activity to **LookupCurrentChangeTrackingVersionActivity**. This activity gets the current change tracking version.
+- The *lookup activities* get the old and new `SYS_CHANGE_VERSION` values from Azure SQL Database and pass them to the copy activity.
+- The *copy activity* copies the inserted, updated, or deleted data between the two `SYS_CHANGE_VERSION` values from Azure SQL Database to Azure Blob Storage.
+- The *stored procedure activity* updates the value of `SYS_CHANGE_VERSION` for the next pipeline run.
-6. Switch to the **Settings** in the **Properties** window, and do the following steps:
-
- 1. Select **SourceDataset** for the **Source Dataset** field.
- 2. Select **Query** for **Use Query**.
- 3. Enter the following SQL query for **Query**.
+1. In the Data Factory UI, switch to the **Author** tab. Select **+**, and then select **Pipeline** > **Pipeline**.
+
+ ![Screenshot that shows how to create a pipeline in a data factory.](media/tutorial-incremental-copy-change-tracking-feature-portal/new-pipeline-menu-2.png)
+2. A new tab appears for configuring the pipeline. The pipeline also appears in the tree view. In the **Properties** window, change the name of the pipeline to **IncrementalCopyPipeline**.
+3. Expand **General** in the **Activities** toolbox. Drag the lookup activity to the pipeline designer surface, or search in the **Search activities** box. Set the name of the activity to **LookupLastChangeTrackingVersionActivity**. This activity gets the change tracking version used in the last copy operation that's stored in the `table_store_ChangeTracking_version` table.
+4. Switch to the **Settings** tab in the **Properties** window. For **Source Dataset**, select **ChangeTrackingDataset**.
+5. Drag the lookup activity from the **Activities** toolbox to the pipeline designer surface. Set the name of the activity to **LookupCurrentChangeTrackingVersionActivity**. This activity gets the current change tracking version.
+6. Switch to the **Settings** tab in the **Properties** window, and then take the following steps:
+
+ 1. For **Source dataset**, select **SourceDataset**.
+ 2. For **Use query**, select **Query**.
+ 3. For **Query**, enter the following SQL query:
```sql
SELECT CHANGE_TRACKING_CURRENT_VERSION() as CurrentChangeTrackingVersion
```
+ ![Screenshot that shows a query added to the Settings tab in the Properties window.](media/tutorial-incremental-copy-change-tracking-feature-portal/second-lookup-activity-settings.png)
+7. In the **Activities** toolbox, expand **Move & transform**. Drag the copy data activity to the pipeline designer surface. Set the name of the activity to **IncrementalCopyActivity**. This activity copies the data between the last change tracking version and the current change tracking version to the destination data store.
+8. Switch to the **Source** tab in the **Properties** window, and then take the following steps:
- ![Screenshot shows a query added to the Settings tab in the Properties window.](media/tutorial-incremental-copy-change-tracking-feature-portal/second-lookup-activity-settings.png)
-7. In the **Activities** toolbox, expand **Move & transform**, drag-drop the **Copy** **data** activity to the pipeline designer surface. Set the name of the activity to **IncrementalCopyActivity**. This activity copies the data between last change tracking version and the current change tracking version to the destination data store.
-8. Switch to the **Source** tab in the **Properties** window, and do the following steps:
-
- 1. Select **SourceDataset** for **Source Dataset**.
- 2. Select **Query** for **Use Query**.
- 3. Enter the following SQL query for **Query**.
+ 1. For **Source dataset**, select **SourceDataset**.
+ 2. For **Use query**, select **Query**.
+ 3. For **Query**, enter the following SQL query:
```sql
SELECT data_source_table.PersonID, data_source_table.Name, data_source_table.Age,
    CT.SYS_CHANGE_VERSION, SYS_CHANGE_OPERATION
FROM data_source_table
RIGHT OUTER JOIN CHANGETABLE(CHANGES data_source_table, @{activity('LookupLastChangeTrackingVersionActivity').output.firstRow.SYS_CHANGE_VERSION}) AS CT
    ON data_source_table.PersonID = CT.PersonID
WHERE CT.SYS_CHANGE_VERSION <= @{activity('LookupCurrentChangeTrackingVersionActivity').output.firstRow.CurrentChangeTrackingVersion}
```
- ![Copy Activity - source settings.](media/tutorial-incremental-copy-change-tracking-feature-portal/inc-copy-source-settings.png)
+ ![Screenshot that shows a query added to the Source tab in the Properties window.](media/tutorial-incremental-copy-change-tracking-feature-portal/inc-copy-source-settings.png)
-9. Switch to the **Sink** tab, and select **SinkDataset** for the **Sink Dataset** field.
-
-10. **Connect both Lookup activities to the Copy activity** one by one. Drag the **green** button attached to the **Lookup** activity to the **Copy** activity.
-
-11. Drag-and-drop the **Stored Procedure** activity from the **Activities** toolbox to the pipeline designer surface. Set the name of the activity to **StoredProceduretoUpdateChangeTrackingActivity**. This activity updates the change tracking version in the **table_store_ChangeTracking_version** table.
-
-12. Switch to the **Settings** tab, and do the following steps:
+9. Switch to the **Sink** tab. For **Sink Dataset**, select **SinkDataset**.
+10. Connect both lookup activities to the copy activity one by one. Drag the green button that's attached to the lookup activity to the copy activity.
+11. Drag the stored procedure activity from the **Activities** toolbox to the pipeline designer surface. Set the name of the activity to **StoredProceduretoUpdateChangeTrackingActivity**. This activity updates the change tracking version in the `table_store_ChangeTracking_version` table.
+12. Switch to the **Settings** tab, and then take the following steps:
- 1. Select **AzureSqlDatabaseLinkedService** for **Linked service**.
+ 1. For **Linked service**, select **AzureSqlDatabaseLinkedService**.
2. For **Stored procedure name**, select **Update_ChangeTracking_Version**. 3. Select **Import**.
- 4. In the **Stored procedure parameters** section, specify following values for the parameters:
+ 4. In the **Stored procedure parameters** section, specify the following values for the parameters:
- | Name | Type | Value |
- | - | - | -- |
- | CurrentTrackingVersion | Int64 | @{activity('LookupCurrentChangeTrackingVersionActivity').output.firstRow.CurrentChangeTrackingVersion} |
- | TableName | String | @{activity('LookupLastChangeTrackingVersionActivity').output.firstRow.TableName} |
+ | Name | Type | Value |
+ | - | - | -- |
+ | `CurrentTrackingVersion` | Int64 | `@{activity('LookupCurrentChangeTrackingVersionActivity').output.firstRow.CurrentChangeTrackingVersion}` |
+ | `TableName` | String | `@{activity('LookupLastChangeTrackingVersionActivity').output.firstRow.TableName}` |
- ![Stored Procedure Activity - Parameters.](media/tutorial-incremental-copy-change-tracking-feature-portal/stored-procedure-parameters.png)
+ ![Screenshot that shows setting parameters for the stored procedure activity.](media/tutorial-incremental-copy-change-tracking-feature-portal/stored-procedure-parameters.png)
-13. **Connect the Copy activity to the Stored Procedure Activity**. Drag-and-drop the **green** button attached to the Copy activity to the Stored Procedure activity.
+13. Connect the copy activity to the stored procedure activity. Drag the green button that's attached to the copy activity to the stored procedure activity.
+14. Select **Validate** on the toolbar. Confirm that there are no validation errors. Close the **Pipeline Validation Report** window.
+15. Publish entities (linked services, datasets, and pipelines) to the Data Factory service by selecting the **Publish all** button. Wait until the **Publishing succeeded** message appears.
-14. Click **Validate** on the toolbar. Confirm that there are no validation errors. Close the **Pipeline Validation Report** window by clicking **Close**.
-15. Publish entities (linked services, datasets, and pipelines) to the Data Factory service by clicking the **Publish All** button. Wait until you see the **Publishing succeeded** message.
-
- ![Screenshot shows the Publish All button for a data factory.](media/tutorial-incremental-copy-change-tracking-feature-portal/publish-button-2.png)
+ ![Screenshot that shows the button for publishing all entities for a data factory.](media/tutorial-incremental-copy-change-tracking-feature-portal/publish-button-2.png)
### Run the incremental copy pipeline
-1. Click **Trigger** on the toolbar for the pipeline, and click **Trigger Now**.
- ![Screenshot shows a pipeline with activities and the Trigger Now option selected from the Trigger menu.](media/tutorial-incremental-copy-change-tracking-feature-portal/trigger-now-menu-2.png)
+1. Select **Add trigger** on the toolbar for the pipeline, and then select **Trigger now**.
+
+ ![Screenshot that shows the option for triggering an incremental copy now.](media/tutorial-incremental-copy-change-tracking-feature-portal/trigger-now-menu-2.png)
2. In the **Pipeline Run** window, select **OK**.+ ### Monitor the incremental copy pipeline
-1. Click the **Monitor** tab on the left. You see the pipeline run in the list and its status. To refresh the list, click **Refresh**. The links in the **Pipeline name** column let you view activity runs associated with the pipeline run and to rerun the pipeline.
- ![Screenshot shows pipeline runs for a data factory including your pipeline.](media/tutorial-incremental-copy-change-tracking-feature-portal/inc-copy-pipeline-runs.png)
-1. To view activity runs associated with the pipeline run, click the **IncrementalCopyPipeline** link in the **Pipeline name** column.
- ![Screenshot shows pipeline runs for a data factory with several marked as succeeded.](media/tutorial-incremental-copy-change-tracking-feature-portal/inc-copy-activity-runs.png)
+
+1. Select the **Monitor** tab. The pipeline run and its status appear in the list. To refresh the list, select **Refresh**.
+
+ ![Screenshot that shows pipeline runs for a data factory.](media/tutorial-incremental-copy-change-tracking-feature-portal/inc-copy-pipeline-runs.png)
+1. To view activity runs associated with the pipeline run, select the **IncrementalCopyPipeline** link in the **Pipeline name** column. The activity runs appear in a list.
+
+ ![Screenshot that shows activity runs for a data factory.](media/tutorial-incremental-copy-change-tracking-feature-portal/inc-copy-activity-runs.png)
+ ### Review the results
-You see the second file in the `incchgtracking` folder of the `adftutorial` container.
-![Output file from incremental copy.](media/tutorial-incremental-copy-change-tracking-feature-portal/incremental-copy-output-file.png)The file should have only the delta data from your database. The record with `U` is the updated row in the database and `I` is the one added row.
+The second file appears in the *incchgtracking* folder of the *adftutorial* container.
+
+![Screenshot that shows the output file from an incremental copy.](media/tutorial-incremental-copy-change-tracking-feature-portal/incremental-copy-output-file.png)
+
+The file should have only the delta data from your database. The record with `U` is the updated row in the database, and `I` is the one added row.
```
PersonID,Name,Age,SYS_CHANGE_VERSION,SYS_CHANGE_OPERATION
1,update,10,2,U
6,new,50,1,I
```
-The first three columns are changed data from data_source_table. The last two columns are the metadata from change tracking system table. The fourth column is the SYS_CHANGE_VERSION for each changed row. The fifth column is the operation: U = update, I = insert. For details about the change tracking information, see [CHANGETABLE](/sql/relational-databases/system-functions/changetable-transact-sql).
+The first three columns are changed data from `data_source_table`. The last two columns are the metadata from the table for the change tracking system. The fourth column is the `SYS_CHANGE_VERSION` value for each changed row. The fifth column is the operation: `U` = update, `I` = insert. For details about the change tracking information, see [CHANGETABLE](/sql/relational-databases/system-functions/changetable-transact-sql).
``` ==================================================================
PersonID Name Age SYS_CHANGE_VERSION SYS_CHANGE_OPERATION
```

## Next steps
-Advance to the following tutorial to learn about copying new and changed files only based on their LastModifiedDate:
-
-> [!div class="nextstepaction"]
-> [Copy new files by lastmodifieddate](tutorial-incremental-copy-lastmodified-copy-data-tool.md)
--
+Advance to the following tutorial to learn about copying only new and changed files, based on `LastModifiedDate`:
+> [!div class="nextstepaction"]
+> [Incrementally copy new and changed files based on LastModifiedDate by using the Copy Data tool](tutorial-incremental-copy-lastmodified-copy-data-tool.md)
data-factory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/whats-new.md
Check out our [What's New video archive](https://www.youtube.com/playlist?list=P
- Export up to 1000 rows from data flow preview [Learn more](concepts-data-flow-debug-mode.md?tabs=data-factory#data-preview)
- SQL CDC in Mapping Data Flows now available (Public Preview) [Learn more](connector-sql-server.md?tabs=data-factory#native-change-data-capture)
- Unlock advanced analytics with Microsoft 365 Mapping Data Flow Connector [Learn more](https://devblogs.microsoft.com/microsoft365dev/scale-access-to-microsoft-365-data-with-microsoft-graph-data-connect/)
-### Data movement
- SAP Change Data Capture (CDC) is now generally available [Learn more](connector-sap-change-data-capture.md#transform-data-with-the-sap-cdc-connector)
- Azure-SSIS Integration Runtime now generally available in Azure Synapse Analytics [Learn more](https://techcommunity.microsoft.com/t5/sql-server-integration-services/azure-ssis-integration-runtime-now-available-in-azure-synapse/ba-p/3171763)

### Developer productivity
databox Data Box Disk Deploy Set Up https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-disk-deploy-set-up.md
Previously updated : 09/04/2019 Last updated : 10/26/2022 # Customer intent: As an IT admin, I need to be able to order Data Box Disk to upload on-premises data from my server onto Azure.
Before you begin, make sure that:
1. Use the included cable to connect the disk to the client computer running a supported OS as stated in the prerequisites. ![Data Box Disk connect](media/data-box-disk-deploy-set-up/data-box-disk-connect-unlock.png)-
+
2. In the Azure portal, navigate to your Data Box Disk Order. Search for it by navigating to **General > All resources**, then select your Data Box Disk Order. Use the copy icon to copy the passkey. This passkey will be used to unlock the disks. ![Data Box Disk unlock passkey](media/data-box-disk-deploy-set-up/data-box-disk-get-passkey.png)
Depending on whether you are connected to a Windows or Linux client, the steps t
## Unlock disks on Windows client
-Perform the following steps to connect and unlock your disks.
+Perform the following steps to connect and unlock your disks.
1. In the Azure portal, navigate to your Data Box Disk Order. Search for it by navigating to **General > All resources**, then select your Data Box Disk Order.
2. Download the Data Box Disk toolset corresponding to the Windows client. This toolset contains 3 tools: Data Box Disk Unlock tool, Data Box Disk Validation tool, and Data Box Disk Split Copy tool.
Perform the following steps to connect and unlock your disks.
![Data Box Disk contents](media/data-box-disk-deploy-set-up/data-box-disk-content.png)
+ > [!NOTE]
+ > Don't format or modify the contents or existing file structure of the disk.
+ If you run into any issues while unlocking the disks, see how to [troubleshoot unlock issues](data-box-disk-troubleshoot-unlock.md). ## Unlock disks on Linux client
+Perform the following steps to connect and unlock your disks.
+ 1. In the Azure portal, go to **General > Device details**. 2. Download the Data Box Disk toolset corresponding to the Linux client.
If you run into any issues while unlocking the disks, see how to [troubleshoot u
![Data Box Disk contents 2](media/data-box-disk-deploy-set-up/data-box-disk-content-linux.png)
+ > [!NOTE]
+ > Don't format or modify the contents or existing file structure of the disk.
If you run into any issues while unlocking the disks, see how to [troubleshoot unlock issues](data-box-disk-troubleshoot-unlock.md).
ddos-protection Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/alerts.md
na Previously updated : 10/12/2022 Last updated : 11/11/2022 # Tutorial: View and configure Azure DDoS Protection alerts
With these templates, you will be able to configure alerts for all public IP add
This [Azure Monitor alert rule](https://aka.ms/DDOSmitigationstatus) will run a simple query to detect when an active DDoS mitigation is occurring. This indicates a potential attack. Action groups can be used to invoke actions as a result of the alert.
-[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2FAzure-Network-Security%2Fmaster%2FAzure%2520DDoS%2520Protection%2FAzure%2520Monitor%2520Alert%2520-%2520DDoS%2520Mitigation%2520Started%2FDDoSMitigationStarted.json)
+[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2FAzure-Network-Security%2Fmaster%2FAzure%2520DDoS%2520Protection%2FAlert%2520-%2520DDOS%2520Mitigation%2520started%2520azure%2520monitor%2520alert%2FDDoSMitigationStarted.json)
### Azure Monitor alert rule with Logic App This [DDoS Mitigation Alert Enrichment template](https://aka.ms/ddosalert) deploys the necessary components of an enriched DDoS mitigation alert: Azure Monitor alert rule, action group, and Logic App. The result of the process is an email alert with details about the IP address under attack, including information about the resource associated with the IP. The owner of the resource is added as a recipient of the email, along with the security team. A basic application availability test is also performed and the results are included in the email alert.
-[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2FAzure-Network-Security%2Fmaster%2FAzure%2520DDoS%2520Protection%2FDDoS%2520Mitigation%2520Alert%2520Enrichment%2FEnrich-DDoSAlert.json)
+[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2FAzure-Network-Security%2Fmaster%2FAzure%2520DDoS%2520Protection%2FAutomation%2520-%2520DDoS%2520Mitigation%2520Alert%2520Enrichment%2FEnrich-DDoSAlert.json)
## Configure alerts through portal
dms Ads Sku Recommend https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/ads-sku-recommend.md
Title: Get right-sized Azure recommendation for your on-premises SQL Server database(s) (Preview)
-description: Learn how to use the Azure SQL migration extension in Azure Data Studio to get SKU recommendation to migrate SQL Server database(s) to the right-sized Azure SQL Managed Instance, SQL Server on Azure Virtual Machines or, Azure SQL Database.
+ Title: Get Azure recommendations for your SQL Server migration
+description: Learn how to use the Azure SQL Migration extension in Azure Data Studio to get SKU recommendation when you migrate SQL Server databases to the Azure SQL Managed Instance, SQL Server on Azure Virtual Machines, or Azure SQL Database.
Last updated 02/22/2022
-# Get right-sized Azure recommendation for your on-premises SQL Server database(s) (Preview)
+# Get Azure recommendations to migrate your SQL Server database (preview)
-The [Azure SQL migration extension for Azure Data Studio](/sql/azure-data-studio/extensions/azure-sql-migration-extension) provides a unified experience to assess, get right-sized SKU recommendations and migrate your SQL Server database(s) to Azure.
+Learn how to use the unified experience in the [Azure SQL Migration extension for Azure Data Studio](/sql/azure-data-studio/extensions/azure-sql-migration-extension) to assess your database requirements, get right-sized SKU recommendations for Azure resources, and migrate your SQL Server databases to Azure.
-Before migrating your SQL Server databases to Azure, it is important to assess them to identify any migration issues (if any) so you can remediate them and confidently migrate them to Azure. Moreover, it is equally important to identify the right-sized configuration in Azure to ensure your database workload performance requirements are met with minimal cost.
+Before you migrate your SQL Server databases to Azure, it's important to assess the databases to identify any potential migration issues. You can remediate anticipated issues, and then confidently migrate your databases to Azure.
-The Azure SQL migration extension for Azure Data Studio provides both the assessment and SKU recommendation (right-sized Azure recommended configuration) capabilities when you are trying to select the best option to migrate your SQL Server database(s) to Azure SQL Managed Instance, SQL Server on Azure Virtual Machines or, Azure SQL Database (Preview). The extension provides a user friendly interface to run the assessment and generate recommendations within a short timeframe.
+It's equally important to identify the right-sized Azure resource to migrate to so that your database workload performance requirements are met with minimal cost.
+
+The Azure SQL Migration extension for Azure Data Studio provides both the assessment and SKU recommendations when you're trying to choose the best option to migrate your SQL Server databases to Azure SQL Managed Instance, SQL Server on Azure Virtual Machines, or Azure SQL Database (preview). The extension has an intuitive interface to help you efficiently run the assessment and generate recommendations.
> [!NOTE]
-> Assessment and Azure recommendation feature in the Azure SQL migration extension for Azure Data Studio also supports source SQL Server running on Linux.
+> The assessment and Azure recommendation features in the Azure SQL Migration extension for Azure Data Studio support source SQL Server instances running on Windows or Linux.
+
+## Prerequisites
+
+To get an Azure recommendation for your SQL Server database migration, you must meet the following prerequisites:
+
+- [Download and install Azure Data Studio](/sql/azure-data-studio/download-azure-data-studio).
+- [Install the Azure SQL Migration extension](/sql/azure-data-studio/extensions/azure-sql-migration-extension) from Azure Data Studio Marketplace.
+- Ensure that the logins that you use to connect to the source SQL Server instance are members of the SYSADMIN server role or have CONTROL SERVER permissions.
## Performance data collection and SKU recommendation
-With the Azure SQL migration extension, you can get a right-sized Azure recommendation to migrate your SQL Server databases to Azure SQL Managed Instance, SQL Server on Azure Virtual Machines or, Azure SQL Database (Preview). The extension collects and analyzes performance data from your SQL Server instance to generate a recommended SKU each for Azure SQL Managed Instance, SQL Server on Azure Virtual Machines or Azure SQL Database (Preview) that meets your database(s)' performance characteristics with the lowest cost.
+The Azure SQL Migration extension first collects performance data from your SQL Server instance. Then, it analyzes the data to generate a recommended SKU for Azure SQL Managed Instance, SQL Server on Azure Virtual Machines, or Azure SQL Database. The SKU recommendation is designed to meet your database performance requirements at the lowest cost in the Azure service.
+
+The following diagram shows the workflow for data collection and SKU recommendations:
+
-The workflow for data collection and SKU recommendation is illustrated below.
+The following list describes each step in the workflow:
+(1) **Performance data collection**: To start the performance data collection process in the migration wizard, select **Get Azure recommendation** and choose the option to collect performance data. Enter the folder path where the collected data will be saved, and then select **Start**.
-1. **Performance data collection**: To start the performance data collection process in the migration wizard, select **Get Azure recommendation** and choose the option to collect performance data as shown below. Provide the folder where the collected data will be saved and select **Start**.
- :::image type="content" source="media/ads-sku-recommend/collect-performance-data.png" alt-text="Collect performance data for SKU recommendation":::
-
- When you start the data collection process in the migration wizard, the Azure SQL migration extension for Azure Data Studio collects data from your SQL Server instance that includes information about the hardware configuration and aggregated SQL Server specific performance data from system Dynamic Management Views (DMVs) such as CPU utilization, memory utilization, storage size, IO, throughput and IO latency.
- > [!IMPORTANT]
- > - The data collection process runs for 10 minutes to generate the first recommendation. It is important to start the data collection process when your database workload reflects usage close to your production scenarios.</br>
- > - After the first recommendation is generated, you can continue to run the data collection process to refine recommendations especially if your usage patterns vary for an extended duration of time.
+
+When you start the data collection process in the migration wizard, the Azure SQL Migration extension for Azure Data Studio collects data from your SQL Server instance. The data collection includes hardware configuration and aggregated SQL Server-specific performance data from system Dynamic Management Views like CPU utilization, memory utilization, storage size, input/output (I/O), throughput, and I/O latency.
-1. **Save generated data files locally**: The performance data is periodically aggregated and written to your local filesystem (in the folder that you selected while starting data collection in the migration wizard). Typically, you will see a set of CSV files with the following suffixes in the folder you selected:
- - **_CommonDbLevel_Counters.csv** : This file contains static configuration data about the database file layout and metadata.
- - **_CommonInstanceLevel_Counters.csv** : This file contains static data about the hardware configuration of the server instance.
- - **_PerformanceAggregated_Counters.csv** : This file contains aggregated performance data that is updated frequently.
-1. **Analyze and recommend SKU**: The SKU recommender analyzes the captured common and performance data to recommend the minimum configuration with the least cost that will meet your database's performance requirements. You can also view details about the reason behind the recommendation and source properties that were analyzed. *For SQL Server on Azure Virtual Machines, the SKU recommender also recommends the desired storage configuration for data files, log files and tempdb.*</br> The SKU recommender provides optional parameters that can be modified to refine recommendations based on your inputs about the production workload.
- - **Scale factor**: Scale ('comfort') factor used to inflate or deflate SKU recommendation based on your understanding of the production workload. For example, if it is determined that there is a 4 vCore CPU requirement with a scale factor of 150%, then the true CPU requirement will be 6 vCores. (Default value: 100)
- - **Percentage utilization**: Percentile of data points to be used during aggregation of the performance data. (Default: 95th Percentile)
- - **Enable preview features**: Enabling this option will include the latest hardware generations that have significantly improved performance and scalability. These SKUs are currently in Preview and may not yet be available in all regions. (Default value: Yes)
+> [!IMPORTANT]
+>
+> - The data collection process runs for 10 minutes to generate the first recommendation. It's important to start the data collection process when your active database workload reflects usage that's similar to your production scenarios.
+> - After the first recommendation is generated, you can continue to run the data collection process to refine recommendations. This option is especially useful if your usage patterns vary over time.
+(2) **Save generated data files locally**: The performance data is periodically aggregated and written to the local folder that you selected in the migration wizard. You typically see a set of CSV files with the following suffixes in the folder:
- > [!IMPORTANT]
- > The data collection process will terminate if you close Azure Data Studio. However, the data that was collected until that point will be saved in your folder.</br>
- >If you close Azure Data Studio while the data collection is in progress, you can either
- > - return to import the data files that are saved in your local folder to generate a recommendation from the collected data; Or
- > - return to start the data collection again from the migration wizard;
+- **_CommonDbLevel_Counters.csv**: Contains static configuration data about the database file layout and metadata.
+- **_CommonInstanceLevel_Counters.csv**: Contains static data about the hardware configuration of the server instance.
+- **_PerformanceAggregated_Counters.csv**: Contains aggregated performance data that's updated frequently.
+
+(3) **Analyze and recommend SKU**: The SKU recommendation process analyzes the captured common and performance data to recommend the minimum configuration with the least cost that will meet your database's performance requirements. You can also view details about the reason behind the recommendation and the source properties that were analyzed. *For SQL Server on Azure Virtual Machines, the process also includes a recommendation for storage configuration for data files, log files, and tempdb.*
+
+You can use optional parameters as inputs about the production workload to refine recommendations:
+
+- **Scale factor**: The scale (*comfort*) factor is used to inflate or deflate a SKU recommendation based on your understanding of the production workload. For example, if a four-vCore CPU requirement is determined with a scale factor of 150%, the true CPU requirement is six vCores. The default scale factor value is 100%.
+- **Percentage utilization**: The percentile of data points to use when the performance data is aggregated. The default value is the 95th percentile.
+- **Enable preview features**: Enabling this option includes the latest hardware generations that have improved performance and scalability. Currently, these SKUs are in preview, and they might not be available yet in all regions. This option is enabled by default.
+
+> [!IMPORTANT]
+> The data collection process terminates if you close Azure Data Studio. The data that was collected up to that point is saved in your folder.
+>
+> If you close Azure Data Studio while data collection is in progress, use one of the following options to restart data collection:
+>
+> - Reopen Azure Data Studio and import the data files that are saved in your local folder. Then, generate a recommendation from the collected data.
+> - Reopen Azure Data Studio and start data collection again by using the migration wizard.
### Import existing performance data
-Any existing Performance data that you collected previously using the Azure SQL migration extension or [using the console application in Data Migration Assistant](/sql/dma/dma-sku-recommend-sql-db) can be imported in the migration wizard to view the recommendation.</br>
-Simply provide the folder location where the performance data files are saved and select **Start** to instantly view the recommendation and its details.</br>
- :::image type="content" source="media/ads-sku-recommend/import-sku-data.png" alt-text="Import performance data for SKU recommendation":::
-## Prerequisites
-The following prerequisites are required to get Azure recommendation:
-* [Download and install Azure Data Studio](/sql/azure-data-studio/download-azure-data-studio)
-* [Install the Azure SQL migration extension](/sql/azure-data-studio/extensions/azure-sql-migration-extension) from the Azure Data Studio marketplace
-* Ensure that the logins used to connect the source SQL Server are members of the *sysadmin* server role or have `CONTROL SERVER` permission.
+You can import any existing performance data that you collected earlier by using the Azure SQL Migration extension or by using the [console application in Data Migration Assistant](/sql/dma/dma-sku-recommend-sql-db).
+
+In the migration wizard, enter the folder path where the performance data files are saved. Then, select **Start** to view the recommendation and related details.
+
## Next steps
-- For an overview of the architecture to migrate databases, see [Migrate databases with Azure SQL migration extension for Azure Data Studio](migration-using-azure-data-studio.md).
+- Learn how to [migrate databases by using the Azure SQL Migration extension in Azure Data Studio](migration-using-azure-data-studio.md).
dms Migration Using Azure Data Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/migration-using-azure-data-studio.md
Title: Migrate using Azure Data Studio
-description: Learn how to use the Azure SQL migration extension in Azure Data Studio to migrate databases with Azure Database Migration Service.
+ Title: Migrate databases by using the Azure SQL Migration extension for Azure Data Studio
+description: Learn how to use the Azure SQL Migration extension in Azure Data Studio to migrate databases with Azure Database Migration Service.
Last updated 09/28/2022
-# Migrate databases with Azure SQL migration extension for Azure Data Studio
+# Migrate databases by using the Azure SQL Migration extension for Azure Data Studio
-The [Azure SQL migration extension for Azure Data Studio](/sql/azure-data-studio/extensions/azure-sql-migration-extension) enables you to assess, get right-sized Azure recommendations and migrate your SQL Server databases to Azure.
+Learn how to use the unified experience in [Azure SQL Migration extension for Azure Data Studio](/sql/azure-data-studio/extensions/azure-sql-migration-extension) to assess your database requirements, get right-sized SKU recommendations for Azure resources, and migrate your SQL Server database to Azure.
-The key benefits of using the Azure SQL migration extension for Azure Data Studio are:
+The Azure SQL Migration extension for Azure Data Studio offers these key benefits:
+
+- A responsive UI for an end-to-end migration experience. The extension starts with a migration readiness assessment and a SKU recommendation (preview) that's based on performance data.
+
+- An enhanced assessment mechanism that can evaluate SQL Server instances. The extension identifies databases that are ready to migrate to Azure SQL targets.
+
+ > [!NOTE]
+ > You can use the Azure SQL Migration extension to assess SQL Server databases running on Windows or Linux.
+
+- An SKU recommendation engine that collects performance data from the on-premises source SQL Server instance and then generates right-sized SKU recommendations based on your Azure SQL target.
-- A responsive user interface that provides you with an end-to-end migration experience that starts with a migration readiness assessment, SKU recommendation (based on performance data).-- An enhanced assessment mechanism that can evaluate SQL Server instances, identifying databases ready for migration to the different Azure SQL targets.
- > [!NOTE]
- > You can assess SQL Server databases running on Windows and Linux Operating systems using the Azure SQL migration extension.
-- An SKU recommendation engine (Preview) that collects performance data from the source SQL Server instance on-premises, generating right-sized SKU recommendations based on your Azure SQL target. - A reliable Azure service powered by Azure Database Migration Service that orchestrates data movement activities to deliver a seamless migration experience.-- The ability to run online (for migrations requiring minimal downtime) or offline (for migrations where downtime persists through the migration) migration modes to suit your business requirements.-- The flexibility to create and configure a self-hosted integration runtime to provide your own compute for accessing the source SQL Server and backups in your on-premises environment.
-Check the following step-by-step tutorials for more information about each specific migration scenario by Azure SQL target:
+- You can run your migration online (for migrations that require minimal downtime) or offline (for migrations where downtime persists through the migration) depending on your business requirements.
+
+- You can create and configure a self-hosted integration runtime to use your own compute resources to access the source SQL Server instance and backups in your on-premises environment.
+
+For information about specific migration scenarios and Azure SQL targets, see the list of tutorials in the following table:
| Migration scenario | Migration mode |
|---|---|
SQL Server to Azure SQL Managed Instance| [Online](./tutorial-sql-server-managed-instance-online-ads.md) / [Offline](./tutorial-sql-server-managed-instance-offline-ads.md)
-SQL Server to SQL Server on Azure Virtual Machine|[Online](./tutorial-sql-server-to-virtual-machine-online-ads.md) / [Offline](./tutorial-sql-server-to-virtual-machine-offline-ads.md)
-SQL Server to Azure SQL Database (Preview)| [Offline](./tutorial-sql-server-azure-sql-database-offline-ads.md)
+SQL Server to SQL Server on an Azure virtual machine|[Online](./tutorial-sql-server-to-virtual-machine-online-ads.md) / [Offline](./tutorial-sql-server-to-virtual-machine-offline-ads.md)
+SQL Server to Azure SQL Database (preview)| [Offline](./tutorial-sql-server-azure-sql-database-offline-ads.md)
> [!IMPORTANT]
-> If your target is Azure SQL Database (Preview), make sure to deploy the database schema before starting the migration. You can use tools as [SQL Server dacpac extension](/sql/azure-data-studio/extensions/sql-server-dacpac-extension) or, [SQL Database Projects extension](/sql/azure-data-studio/extensions/sql-database-project-extension) for Azure Data Studio.
+> If your target is Azure SQL Database, make sure you deploy the database schema before you begin the migration. You can use tools like the [SQL Server dacpac extension](/sql/azure-data-studio/extensions/sql-server-dacpac-extension) or the [SQL Database Projects extension](/sql/azure-data-studio/extensions/sql-database-project-extension) for Azure Data Studio.
+
+The following 16-minute video explains recent updates and features added to the Azure SQL Migration extension for Azure Data Studio, including the new workflow for SQL Server database assessments and SKU recommendations:
+
+<br />
+
+> [!VIDEO https://learn-video.azurefd.net/vod/player?show=data-exposed&ep=assess-get-recommendations-migrate-sql-server-to-azure-using-azure-data-studio]
+
+## Architecture of the Azure SQL Migration extension for Azure Data Studio
+
+Azure Database Migration Service is a core component of the Azure SQL Migration extension architecture. Database Migration Service provides a reliable migration orchestrator to support database migrations to Azure SQL. You can create an instance of Database Migration Service or use an existing instance by using the Azure SQL Migration extension in Azure Data Studio.
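If you'd rather script the service creation than use the migration wizard, the following sketch uses the `datamigration` Azure CLI extension. All names are placeholders, and the available options can vary by extension version, so confirm them with `az datamigration sql-service create --help`.

```azurecli
# Sketch: create a Database Migration Service (SQL migration service) resource from the CLI.
# Assumes the datamigration CLI extension; all resource names are placeholders.
az extension add --name datamigration

az datamigration sql-service create \
  --resource-group <resource-group> \
  --name <migration-service-name> \
  --location <region>
```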
-The following 16-minute video explains recent updates and features added to the Azure SQL migration extension in Azure Data Studio. Including the new workflow for SQL Server database assessments and SKU recommendations.
+Database Migration Service uses the Azure Data Factory self-hosted integration runtime to access and upload valid backup files from your on-premises network share or from your Azure storage account.
-<iframe src="https://aka.ms/docs/player?show=data-exposed&ep=assess-get-recommendations-migrate-sql-server-to-azure-using-azure-data-studio" width="800" height="450"></iframe>
+The workflow of the migration process is illustrated in the following diagram:
-## Architecture of Azure SQL migration extension for Azure Data Studio
-Azure Database Migration Service (DMS) is one of the core components in the overall architecture. DMS provides a reliable migration orchestrator to enable database migrations to Azure SQL.
-Create or reuse an existing DMS using the Azure SQL migration extension in Azure Data Studio (ADS).
-DMS uses Azure Data Factory's self-hosted integration runtime to access and upload valid backup files from your on-premises network share or your Azure Storage account.
+The following list describes each step in the workflow:
-The workflow of the migration process is illustrated below.
-[ ![Architecture](media/migration-using-azure-data-studio/architecture-sql-migration.png)](media/migration-using-azure-data-studio/architecture-sql-migration-expanded.png#lightbox)
+(1) **Source SQL Server**: An on-premises instance of SQL Server that's in a private cloud or an instance of SQL Server on a virtual machine in a public cloud. SQL Server 2008 and later versions on Windows or Linux are supported.
-1. **Source SQL Server**: SQL Server instance on-premises, private cloud, or any public cloud virtual machine. All versions of SQL Server 2008 and above are supported.
-2. **Target Azure SQL**: Supported Azure SQL targets are **Azure SQL Managed Instance**, **SQL Server on Azure Virtual Machines** (*registered with SQL IaaS extension - [full management mode](/azure/azure-sql/virtual-machines/windows/sql-server-iaas-agent-extension-automate-management#management-modes)*), or **Azure SQL Database (Preview)**.
-3. **Network File Share**: Server Message Block (SMB) network file share where backup files are stored for the database(s) to be migrated. Azure Storage blob containers and Azure Storage file share are also supported.
-4. **Azure Data Studio**: Download and install the [Azure SQL migration extension in Azure Data Studio](/sql/azure-data-studio/extensions/azure-sql-migration-extension).
-5. **Azure DMS**: Azure service that orchestrates migration pipelines to do data movement activities from on-premises to Azure. DMS is associated with Azure Data Factory's (ADF) self-hosted integration runtime (IR) and provides the capability to register and monitor the self-hosted IR.
-6. **Self-hosted integration runtime (IR)**: Self-hosted IR should be installed on a machine that can connect to the source SQL Server and the location of the backup file. DMS provides the authentication keys and registers the self-hosted IR.
-7. **Backup files upload to Azure Storage**: DMS uses self-hosted IR to upload valid backup files from the on-premises backup location to your Azure Storage account. Data movement activities and pipelines are automatically created in the migration workflow to upload the backup files.
-8. **Restore backups on target Azure SQL**: DMS restores backup files from your Azure Storage account to the supported target Azure SQL.
+(2) **Target Azure SQL**: Supported Azure SQL targets are Azure SQL Managed Instance, SQL Server on Azure Virtual Machines (registered with the SQL infrastructure as a service extension in [full management mode](/azure/azure-sql/virtual-machines/windows/sql-server-iaas-agent-extension-automate-management#management-modes)), and Azure SQL Database.
- > [!NOTE]
- > If your migration target is Azure SQL Database (Preview), you don't need backups to perform this migration. The migration to Azure SQL Database is considered a logical migration involving the database's pre-creation and data movement (performed by DMS).
+(3) **Network file share**: A Server Message Block (SMB) network file share where backup files are stored for the databases to be migrated. Azure storage blob containers and Azure storage file share also are supported.
- > [!IMPORTANT]
- > With online migration mode, DMS continuously uploads the backup source files to Azure Storage and restores them to the target until you complete the final step of cutting over to the target.
- >
- > In offline migration mode, DMS uploads the backup source files to Azure Storage and restores them to the target without requiring you to perform a cutover.
+(4) **Azure Data Studio**: Download and install the [Azure SQL Migration extension for Azure Data Studio](/sql/azure-data-studio/extensions/azure-sql-migration-extension).
+
+(5) **Azure Database Migration Service**: An Azure service that orchestrates migration pipelines to do data movement activities from an on-premises environment to Azure. Database Migration Service is associated with the Azure Data Factory self-hosted integration runtime and provides the capability to register and monitor the self-hosted integration runtime.
+
+(6) **Self-hosted integration runtime**: Install a self-hosted integration runtime on a computer that can connect to the source SQL Server instance and to the location of the backup file. Database Migration Service provides the authentication keys and registers the self-hosted integration runtime.
+
+(7) **Backup files upload to your Azure storage account**: Database Migration Service uses a self-hosted integration runtime to upload valid backup files from the on-premises backup location to your Azure storage account. Data movement activities and pipelines are automatically created in the migration workflow to upload the backup files.
+
+(8) **Restore backups on target Azure SQL**: Database Migration Service restores backup files from your Azure storage account to the supported target Azure SQL instance.
+
+> [!NOTE]
+> If your migration target is Azure SQL Database, you don't need backups for this migration. Database migration to Azure SQL Database is considered a logical migration that involves the database's pre-creation and data movement (performed by Database Migration Service).
+
+> [!IMPORTANT]
+> In online migration mode, Database Migration Service continuously uploads the backup source files to your Azure storage account and restores them to the target until you complete the final step of cutting over to the target.
+>
+> In offline migration mode, Database Migration Service uploads the backup source files to Azure storage and restores them to the target without requiring a cutover.
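Related to step 6 in the preceding workflow: if you're scripting the setup, the `datamigration` CLI extension can also retrieve the authentication keys that are used to register the self-hosted integration runtime. This is a sketch; the command and parameter names come from that extension and can differ by version, so confirm them with `az datamigration sql-service list-auth-key --help`.

```azurecli
# Sketch: list the authentication keys used to register the self-hosted integration runtime.
# Assumes the datamigration CLI extension; names are placeholders.
az datamigration sql-service list-auth-key \
  --resource-group <resource-group> \
  --sql-migration-service-name <migration-service-name>
```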
## Prerequisites
The following sections walk through the prerequisites for each supported Azure S
[!INCLUDE [dms-ads-sqlvm-prereq](../../includes/dms-ads-sqlvm-prereq.md)]
-### [Azure SQL Database (Preview)](#tab/azure-sql-db)
+### [Azure SQL Database (preview)](#tab/azure-sql-db)
[!INCLUDE [dms-ads-sqldb-prereq](../../includes/dms-ads-sqldb-prereq.md)]
-### Recommendations for using self-hosted integration runtime for database migrations
+### Recommendations for using a self-hosted integration runtime for database migrations
+ - Use a single self-hosted integration runtime for multiple source SQL Server databases.-- Install only one instance of self-hosted integration runtime on any single machine.-- Associate only one self-hosted integration runtime with one DMS.-- The self-hosted integration runtime uses resources (memory / CPU) on the machine where it's installed. Install the self-hosted integration runtime on a machine different from your source SQL Server. However, having the self-hosted integration runtime close to the data source reduces the time for the self-hosted integration runtime to connect to the data source. -- Use the self-hosted integration runtime only when you have your database backups in an on-premises SMB network share. Self-hosted integration runtime isn't required for database migrations if your source database backups are already in the Azure storage blob container.-- We recommend up to 10 concurrent database migrations per self-hosted integration runtime on a single machine. To increase the number of concurrent database migrations, scale-out self-hosted runtime up to four nodes or create separate self-hosted integration runtime on different machines.-- Configure self-hosted integration runtime to auto-update to automatically apply any new features, bug fixes, and enhancements that are released. To learn more, see [Self-hosted Integration Runtime Auto-update](../data-factory/self-hosted-integration-runtime-auto-update.md).-
-## Monitor database migration progress from the Azure portal
-When you migrate the database(s) using the Azure SQL migration extension for Azure Data Studio, the migrations are orchestrated by the Azure Database Migration Service that was selected in the wizard. To monitor database migrations from the Azure portal,
-- Open the [Azure portal](https://portal.azure.com/)-- Search for your Azure Database Migration Service by the resource name
- :::image type="content" source="media/migration-using-azure-data-studio/search-dms-portal.png" alt-text="Search Azure Database Migration Service resource in portal":::
-- Select the **Monitor migrations** tile on the **Overview** page to view the details of your database migrations.
- :::image type="content" source="media/migration-using-azure-data-studio/dms-ads-monitor-portal.png" alt-text="Monitor migrations in Azure portal":::
+- Install only one instance of a self-hosted integration runtime on any single computer.
+
+- Associate only one self-hosted integration runtime with one instance of Database Migration Service.
+
+- The self-hosted integration runtime uses resources (memory and CPU) on the computer it's installed on. Install the self-hosted integration runtime on a computer that's separate from your source SQL Server instance. But the two computers should be in close proximity. Having the self-hosted integration runtime close to the data source reduces the time it takes for the self-hosted integration runtime to connect to the data source.
+
+- Use the self-hosted integration runtime only when you have your database backups in an on-premises SMB network share. A self-hosted integration runtime isn't required for database migrations if your source database backups are already in the storage blob container.
+
+- We recommend up to 10 concurrent database migrations per self-hosted integration runtime on a single computer. To increase the number of concurrent database migrations, scale out the self-hosted runtime to up to four nodes or create separate instances of the self-hosted integration runtime on different computers.
+
+- Configure the self-hosted integration runtime to auto-update and automatically apply any new features, bug fixes, and enhancements that are released. For more information, see [Self-hosted integration runtime auto-update](../data-factory/self-hosted-integration-runtime-auto-update.md).
+
+## Monitor database migration progress in the Azure portal
+
+When you migrate databases by using the Azure SQL Migration extension for Azure Data Studio, the migrations are orchestrated by the Database Migration Service instance that you selected in the migration wizard.
+
+To monitor database migrations in the Azure portal:
+
+1. In the [Azure portal](https://portal.azure.com/), search for your instance of Database Migration Service by using the resource name.
+
+ :::image type="content" source="media/migration-using-azure-data-studio/search-dms-portal.png" alt-text="Screenshot that shows how to search for a resource name in the Azure portal.":::
+
+1. In the Database Migration Service instance overview, select **Monitor migrations** to view the details of your database migrations.
+
+ :::image type="content" source="media/migration-using-azure-data-studio/dms-ads-monitor-portal.png" alt-text="Screenshot that shows how to monitor migrations in the Azure portal.":::
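If you don't remember which resource group holds your Database Migration Service instance, you can list the instances in your subscription from the Azure CLI before you open the portal. This sketch uses only built-in `az resource` options; the query projection is illustrative.

```azurecli
# List Database Migration Service (SQL migration service) resources in the current subscription.
az resource list \
  --resource-type "Microsoft.DataMigration/sqlMigrationServices" \
  --query "[].{name:name, resourceGroup:resourceGroup, location:location}" \
  --output table
```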
## Known issues and limitations
-- Overwriting existing databases using DMS in your target Azure SQL Managed Instance or SQL Server on Azure Virtual Machine isn't supported.
-- Configuring high availability and disaster recovery on your target to match source topology isn't supported by DMS.
+
+- Overwriting existing databases by using Database Migration Service in your target instance of Azure SQL Managed Instance or SQL Server on Azure Virtual Machines isn't supported.
+
+- Configuring high availability and disaster recovery on your target to match source topology isn't supported by Database Migration Service.
+ - The following server objects aren't supported:
- - Logins
- - SQL Server Agent jobs
- - Credentials
- - SSIS packages
- - Server roles
- - Server audit
-- SQL Server 2008 and below as target versions aren't supported when migrating to SQL Server on Azure Virtual Machines.-- If you're using SQL Server 2012 or SQL Server 2014, you need to store your source database backup files on an Azure Storage Blob Container instead of using the network share option. Store the backup files as page blobs since block blobs are only supported in SQL 2016 and after.-- You can't use an existing self-hosted integration runtime created from Azure Data Factory for database migrations with DMS. Initially, the self-hosted integration runtime should be created using the Azure SQL migration extension in Azure Data Studio and can be reused for further database migrations. +
+ - Logins
+ - SQL Server Agent jobs
+ - Credentials
+ - SQL Server Integration Services packages
+ - Server roles
+ - Server audit
+
+- SQL Server 2008 and earlier as target versions aren't supported for migrations to SQL Server on Azure Virtual Machines.
+
+- If you use SQL Server 2012 or SQL Server 2014, you must store your source database backup files in an Azure storage blob container instead of using the network share option. Store the backup files as page blobs. Block blobs are supported only in SQL Server 2016 and later versions.
+
+- You can't use an existing self-hosted integration runtime that was created in Azure Data Factory for database migrations with Database Migration Service. Initially, create the self-hosted integration runtime by using the Azure SQL Migration extension for Azure Data Studio. You can reuse that self-hosted integration runtime in future database migrations.
## Pricing
-- Azure Database Migration Service is free to use with the Azure SQL migration extension in Azure Data Studio. You can migrate multiple SQL Server databases using the Azure Database Migration Service at no charge using the service or the Azure SQL migration extension.
-- There's no data movement or data ingress cost for migrating your databases from on-premises to Azure. If the source database is moved from another region or an Azure VM, you may incur [bandwidth charges](https://azure.microsoft.com/pricing/details/bandwidth/) based on your bandwidth provider and routing scenario.
-- Provide your machine or on-premises server to install Azure Data Studio.
-- A self-hosted integration runtime is needed to access database backups from your on-premises network share.
-## Regional Availability
-For the list of Azure regions that support database migrations using the Azure SQL migration extension for Azure Data Studio (powered by Azure DMS), see [Azure products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=database-migration)
+- Azure Database Migration Service is free to use with the Azure SQL Migration extension for Azure Data Studio. You can migrate multiple SQL Server databases by using Database Migration Service at no charge.
+
+- No data movement or data ingress costs are assessed when you migrate your databases from an on-premises environment to Azure. If the source database is moved from another region or from an Azure virtual machine, you might incur [bandwidth charges](https://azure.microsoft.com/pricing/details/bandwidth/) depending on your bandwidth provider and routing scenario.
+
+- Use a virtual machine or an on-premises server to install Azure Data Studio.
+
+- A self-hosted integration runtime is required to access database backups from your on-premises network share.
+
+## Region availability
+
+For the list of Azure regions that support database migrations by using the Azure SQL Migration extension for Azure Data Studio (powered by Azure Database Migration Service), see [Azure products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=database-migration).
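As a quick cross-check, you can also query the resource provider metadata from the Azure CLI to see the regions where the underlying `sqlMigrationServices` resource type is offered. This is a sketch that reads provider metadata rather than the official support list linked above.

```azurecli
# Sketch: list regions where the Microsoft.DataMigration/sqlMigrationServices resource type is available.
az provider show \
  --namespace Microsoft.DataMigration \
  --query "resourceTypes[?resourceType=='sqlMigrationServices'].locations | [0]" \
  --output json
```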
## Next steps
-- For an overview and installation of the Azure SQL migration extension, see [Azure SQL migration extension for Azure Data Studio](/sql/azure-data-studio/extensions/azure-sql-migration-extension).
+- Learn how to install the [Azure SQL Migration extension for Azure Data Studio](/sql/azure-data-studio/extensions/azure-sql-migration-extension).
dms Pre Reqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/pre-reqs.md
In addition to Azure Database Migration Service prerequisites that are common to
When using the Azure Database Migration Service to perform SQL Server to Azure SQL Database migrations, in addition to the prerequisites that are common to all migration scenarios, be sure to address the following additional prerequisites:
-* Create an instance of Azure SQL Database instance, which you do by following the detail in the article [Create a database in Azure SQL Database in the Azure portal](/azure/azure-sql/database/single-database-create-quickstart).
+* Create an instance of Azure SQL Database, which you do by following the detail in the article [Create a database in Azure SQL Database in the Azure portal](/azure/azure-sql/database/single-database-create-quickstart).
* Download and install the [Data Migration Assistant](https://www.microsoft.com/download/details.aspx?id=53595) v3.3 or later. * Open your Windows Firewall to allow the Azure Database Migration Service to access the source SQL Server, which by default is TCP port 1433. * If you are running multiple named SQL Server instances using dynamic ports, you may wish to enable the SQL Browser Service and allow access to UDP port 1434 through your firewalls so that the Azure Database Migration Service can connect to a named instance on your source server.
When using the Azure Database Migration Service to perform SQL Server to Azure S
## Next steps
-For an overview of the Azure Database Migration Service and regional availability, see the article [What is the Azure Database Migration Service](dms-overview.md).
+For an overview of the Azure Database Migration Service and regional availability, see the article [What is the Azure Database Migration Service](dms-overview.md).
dms Resource Custom Roles Sql Database Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/resource-custom-roles-sql-database-ads.md
Title: "Custom roles for SQL Server to Azure SQL Database (Preview) migrations using ADS"
+ Title: "Custom roles for SQL Server to Azure SQL Database (preview) migrations in Azure Data Studio"
-description: Learn to use the custom roles for SQL Server to Azure SQL Database (Preview) migrations.
+description: Learn how to use custom roles for SQL Server to Azure SQL Database (preview) migrations in Azure Data Studio.
Last updated 09/28/2022
-# Custom roles for SQL Server to Azure SQL Database (Preview) migrations using ADS
+# Custom roles for SQL Server to Azure SQL Database (preview) migrations in Azure Data Studio
-This article explains how to set up a custom role in Azure for Database Migrations. The custom role will only have the permissions necessary to create and run a Database Migration Service with Azure SQL Database (Preview) as a target.
+This article explains how to set up a custom role in Azure for SQL Server database migrations. A custom role will have only the permissions that are required to create and run an instance of Azure Database Migration Service with Azure SQL Database (preview) as a target.
-The AssignableScopes section of the role definition json string allows you to control where the permissions appear in the **Add Role Assignment** UI in the portal. You'll likely want to define the role at the resource group or even resource level to avoid cluttering the UI with extra roles. This doesn't perform the actual role assignment.
+Use the AssignableScopes section of the role definition JSON string to control where the permissions appear in the **Add role assignment** UI in the Azure portal. To avoid cluttering the UI with extra roles, you might want to define the role at the level of the resource group, or even the level of the resource. Note that defining the role doesn't perform the actual role assignment.
```json
{
The AssignableScopes section of the role definition json string allows you to co
"roleName": "DmsCustomRoleDemoForSqlDB", "description": "", "assignableScopes": [
- "/subscriptions/<sqlDbSubscription>/resourceGroups/<sqlDbRG>",
- "/subscriptions/<DMSSubscription>/resourceGroups/<dmsServiceRG>"
+ "/subscriptions/<SQLDatabaseSubscription>/resourceGroups/<SQLDatabaseResourceGroup>",
+ "/subscriptions/<DatabaseMigrationServiceSubscription>/resourceGroups/<DatabaseMigrationServiceResourceGroup>"
    ],
    "permissions": [
        {
The AssignableScopes section of the role definition json string allows you to co
    }
}
```
-You can use either the Azure portal, AZ PowerShell, Azure CLI or Azure REST API to create the roles.
-For more information, see the articles [Create custom roles using the Azure portal](../role-based-access-control/custom-roles-portal.md) and [Azure custom roles](../role-based-access-control/custom-roles.md).
+You can use either the Azure portal, Azure PowerShell, the Azure CLI, or the Azure REST API to create the roles.
-## Description of permissions needed to migrate to Azure SQL Database (Preview)
+For more information, see [Create custom roles by using the Azure portal](../role-based-access-control/custom-roles-portal.md) and [Azure custom roles](../role-based-access-control/custom-roles.md).
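For example, if you save the completed JSON definition to a file, you can create the role with the Azure CLI. The file name here is only an illustration.

```azurecli
# Create the custom role from a saved role-definition file (the file name is a placeholder).
az role definition create --role-definition @dms-custom-role-sqldb.json
```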
-| Permission Action | Description |
+## Permissions required to migrate to Azure SQL Database (preview)
+
+| Permission action | Description |
| - | --|
-| Microsoft.Sql/servers/read | Return the list of SqlDb resources or gets the properties for the specified SqlDb. |
-| Microsoft.Sql/servers/write | Creates a SqlDb with the specified parameters or update the properties or tags for the specified SqlDb. |
-| Microsoft.Sql/servers/databases/read | Gets existing SqlDb database. |
-| Microsoft.Sql/servers/databases/write | Creates a new database or updates an existing database. |
-| Microsoft.Sql/servers/databases/delete | Deletes an existing SqlDb database. |
-| Microsoft.DataMigration/locations/operationResults/read | Get the status of a long-running operation related to a 202 Accepted response. |
+| Microsoft.Sql/servers/read | Return the list of SQL database resources or get the properties for the specified SQL database. |
+| Microsoft.Sql/servers/write | Create a SQL database with the specified parameters or update the properties or tags for the specified SQL database. |
+| Microsoft.Sql/servers/databases/read | Get an existing SQL database. |
+| Microsoft.Sql/servers/databases/write | Create a new database or update an existing database. |
+| Microsoft.Sql/servers/databases/delete | Delete an existing SQL database. |
+| Microsoft.DataMigration/locations/operationResults/read | Get the results of a long-running operation related to a 202 Accepted response. |
| Microsoft.DataMigration/locations/operationStatuses/read | Get the status of a long-running operation related to a 202 Accepted response. |
-| Microsoft.DataMigration/locations/sqlMigrationServiceOperationResults/read | Retrieve Service Operation Results. |
-| Microsoft.DataMigration/databaseMigrations/write | Create or Update Database Migration resource. |
-| Microsoft.DataMigration/databaseMigrations/read | Retrieve the Database Migration resource. |
-| Microsoft.DataMigration/databaseMigrations/delete | Delete Database Migration resource. |
+| Microsoft.DataMigration/locations/sqlMigrationServiceOperationResults/read | Retrieve service operation results. |
+| Microsoft.DataMigration/databaseMigrations/write | Create or update a database migration resource. |
+| Microsoft.DataMigration/databaseMigrations/read | Retrieve a database migration resource. |
+| Microsoft.DataMigration/databaseMigrations/delete | Delete a database migration resource. |
| Microsoft.DataMigration/databaseMigrations/cancel/action | Stop ongoing migration for the database. |
-| Microsoft.DataMigration/sqlMigrationServices/write | Create a new or change properties of existing Service |
-| Microsoft.DataMigration/sqlMigrationServices/delete | Delete existing Service. |
-| Microsoft.DataMigration/sqlMigrationServices/read | Retrieve details of Migration Service. |
-| Microsoft.DataMigration/sqlMigrationServices/listAuthKeys/action | Retrieve the List of Authentication Keys. |
-| Microsoft.DataMigration/sqlMigrationServices/regenerateAuthKeys/action | Regenerate the Authentication Keys. |
-| Microsoft.DataMigration/sqlMigrationServices/deleteNode/action | De-register the IR node. |
-| Microsoft.DataMigration/sqlMigrationServices/listMonitoringData/action | Lists the Monitoring Data for all migrations. |
+| Microsoft.DataMigration/sqlMigrationServices/write | Create a new service or change the properties of an existing service. |
+| Microsoft.DataMigration/sqlMigrationServices/delete | Delete an existing service. |
+| Microsoft.DataMigration/sqlMigrationServices/read | Retrieve the details of the migration service. |
+| Microsoft.DataMigration/sqlMigrationServices/listAuthKeys/action | Retrieve the list of authentication keys. |
+| Microsoft.DataMigration/sqlMigrationServices/regenerateAuthKeys/action | Regenerate authentication keys. |
+| Microsoft.DataMigration/sqlMigrationServices/deleteNode/action | Deregister the integration runtime node. |
+| Microsoft.DataMigration/sqlMigrationServices/listMonitoringData/action | List the monitoring data for all migrations. |
| Microsoft.DataMigration/sqlMigrationServices/listMigrations/read | Lists the migrations for the user. |
-| Microsoft.DataMigration/sqlMigrationServices/MonitoringData/read | Retrieve the Monitoring Data. |
+| Microsoft.DataMigration/sqlMigrationServices/MonitoringData/read | Retrieve the monitoring data. |
+
+## Assign a role
-## Role assignment
+To assign a role to a user or an app ID:
-To assign a role to users/APP ID, open the Azure portal, perform the following steps:
+1. In the Azure portal, go to the resource.
-1. Navigate to the resource, go to **Access Control**, and then scroll to find the custom roles you created.
+1. In the left menu, select **Access control (IAM)**, and then scroll to find the custom roles you created.
-2. Select the appropriate role, select the User or APP ID, and then save the changes.
+1. Select the roles to assign, select the user or app ID, and then save the changes.
- The user or APP ID(s) now appears listed on the **Role assignments** tab.
+ The user or app ID now appears on the **Role assignments** tab.
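The same assignment can also be scripted with the Azure CLI. The object ID, subscription, and resource group below are placeholders, and the role name matches the `roleName` value in the JSON definition shown earlier.

```azurecli
# Assign the custom role to a user or app (all IDs are placeholders).
az role assignment create \
  --assignee <user-or-app-object-id> \
  --role "DmsCustomRoleDemoForSqlDB" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"

# Verify the assignment.
az role assignment list --assignee <user-or-app-object-id> --output table
```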
## Next steps
-* Review the migration guidance for your scenario in the Microsoft [Database Migration Guide](https://datamigration.microsoft.com/).
+- Review the [migration guidance for your scenario](https://datamigration.microsoft.com/).
dms Resource Scenario Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/resource-scenario-status.md
Title: Database migration scenario status
+ Title: Supported database migration scenarios
-description: Learn about the status of the migration scenarios supported by Azure Database Migration Service.
+description: Learn which migration scenarios are currently supported for Azure Database Migration Service and their availability status.
-# Status of migration scenarios supported by Azure Database Migration Service
+# Azure Database Migration Service supported scenarios
-Azure Database Migration Service is designed to support different migration scenarios (source/target pairs) for both offline (one-time) and online (continuous sync) migrations. The scenario coverage provided by Azure Database Migration Service is being extended over time. New scenarios are being added regularly. This article identifies migration scenarios currently supported by Azure Database Migration Service and the status (private preview, public preview, or general availability) for each scenario.
+Azure Database Migration Service supports a mix of database migration scenarios (source and target pairs) for both offline (one-time) and online (continuous sync) database migrations. New scenarios are added regularly to extend Database Migration Service scenario coverage. This article lists the migration scenarios that are currently supported by Database Migration Service and the availability status (preview or generally available) of each scenario.
-## Offline versus online migrations
+## Offline vs. online migration
-With Azure Database Migration Service, you can do an offline or an online migration. With *offline* migrations, application downtime begins at the same time that the migration starts. To limit downtime to the time required to cut over to the new environment when the migration completes, use an *online* migration. It's recommended to test an offline migration to determine whether the downtime is acceptable; if not, do an online migration.
+In Database Migration Service, you can migrate your databases offline or while they're online. In an *offline* migration, application downtime starts when the migration starts. To limit downtime to the time it takes you to cut over to the new environment after the migration, use an *online* migration. We recommend that you test an offline migration to determine whether the downtime is acceptable. If the expected downtime isn't acceptable, do an online migration.
## Migration scenario status
-The status of migration scenarios supported by Azure Database Migration Service varies with time. Generally, scenarios are first released in **private preview**. After private preview, the scenario status changes to **public preview**. Azure Database Migration Service users can try out migration scenarios in public preview directly from the user interface. No sign-up is required. However, migration scenarios in public preview may not be available in all regions and may undergo more changes before final release. After public preview, the scenario status changes to **generally availability**. General availability (GA) is the final release status, and the functionality is complete and accessible to all users.
+The status of migration scenarios that are supported by Database Migration Service varies over time. Generally, scenarios are first released in *preview*. In preview, Database Migration Service users can try out migration scenarios directly in the UI. No sign-up is required. Migration scenarios that have a preview release status might not be available in all regions, and they might be revised before final release.
-## Migration scenario support
+After preview, the scenario status changes to *general availability* (GA). GA is the final release status. Scenarios that have a status of GA have complete functionality and are accessible to all users.
-The following tables show which migration scenarios are supported when using Azure Database Migration Service.
+## Supported migration scenarios
+
+The tables in the following sections show the status of specific migration scenarios that are supported in Database Migration Service.
> [!NOTE]
-> If a scenario listed as supported below does not appear within the user interface, please contact the [Ask Azure Database Migrations](mailto:AskAzureDatabaseMigrations@service.microsoft.com) alias for additional information.
+> If a supported scenario doesn't appear in the UI, contact [Ask Azure Database Migrations](mailto:AskAzureDatabaseMigrations@service.microsoft.com) for information.
### Offline (one-time) migration support
-The following table shows Azure Database Migration Service support for **offline** migrations.
+The following table describes the current status of Database Migration Service support for *offline* migrations:
| Target | Source | Support | Status |
| - | - |:-:|:-:|
-| **Azure SQL DB** | SQL Server <sup>1</sup> | ✔ | PP |
-| | Amazon RDS SQL Server | ✔ | PP |
+| **Azure SQL Database** | SQL Server <sup>1</sup> | ✔ | Preview |
+| | Amazon RDS SQL Server | ✔ | Preview |
| | Oracle | X | |
-| **Azure SQL DB MI** | SQL Server <sup>1</sup> | ✔ | GA |
+| **Azure SQL Database Managed Instance** | SQL Server <sup>1</sup> | ✔ | GA |
| | Amazon RDS SQL Server | X | |
| | Oracle | X | |
| **Azure SQL VM** | SQL Server <sup>1</sup> | ✔ | GA |
| | Amazon RDS SQL Server | X | |
| | Oracle | X | |
| **Azure Cosmos DB** | MongoDB | ✔ | GA |
-| **Azure DB for MySQL - Single Server** | MySQL | ✔ | GA |
+| **Azure Database for MySQL - Single Server** | MySQL | ✔ | GA |
| | Amazon RDS MySQL | ✔ | GA |
-| | Azure DB for MySQL <sup>2</sup> | ✔ | GA |
-| **Azure DB for MySQL - Flexible Server** | MySQL | ✔ | GA |
+| | Azure Database for MySQL <sup>2</sup> | ✔ | GA |
+| **Azure Database for MySQL - Flexible Server** | MySQL | ✔ | GA |
| | Amazon RDS MySQL | ✔ | GA |
-| | Azure DB for MySQL <sup>2</sup> | ✔ | GA |
-| **Azure DB for PostgreSQL - Single server** | PostgreSQL | X |
+| | Azure Database for MySQL <sup>2</sup> | ✔ | GA |
+| **Azure Database for PostgreSQL - Single Server** | PostgreSQL | X |
| | Amazon RDS PostgreSQL | X | |
-| **Azure DB for PostgreSQL - Flexible server** | PostgreSQL | X |
+| **Azure Database for PostgreSQL - Flexible Server** | PostgreSQL | X |
| | Amazon RDS PostgreSQL | X | |
-| **Azure DB for PostgreSQL - Hyperscale (Citus)** | PostgreSQL | X |
+| **Azure Database for PostgreSQL - Hyperscale (Citus)** | PostgreSQL | X |
| | Amazon RDS PostgreSQL | X | |
+<sup>1</sup> Offline migrations through the Azure SQL Migration extension for Azure Data Studio are supported for Azure SQL Managed Instance, SQL Server on Azure Virtual Machines, and Azure SQL Database. For more information, see [Migrate databases by using the Azure SQL Migration extension for Azure Data Studio](migration-using-azure-data-studio.md).
-1. Offline migrations using the Azure SQL Migration extension for Azure Data Studio are supported for the following Azure SQL targets: **Azure SQL Managed Instance**, **SQL Server on Azure Virtual Machines** and, **Azure SQL Database (Preview)**. For more information, see [Migrate databases with Azure SQL migration extension for Azure Data Studio](migration-using-azure-data-studio.md).
-
-2. If your source database is already in Azure PaaS (for example, Azure DB for MySQL or Azure DB for PostgreSQL), choose the corresponding engine when creating your migration activity. For example, if you're migrating from Azure DB for MySQL - Single Server to Azure DB for MySQL - Flexible Server, choose MySQL as the source engine during scenario creation. If you're migrating from Azure DB for PostgreSQL - Single Server to Azure DB for PostgreSQL - Flexible Server, choose PostgreSQL as the source engine during scenario creation.
+<sup>2</sup> If your source database is already in an Azure platform as a service (PaaS) like Azure Database for MySQL or Azure Database for PostgreSQL, choose the corresponding engine when you create your migration activity. For example, if you're migrating from Azure Database for MySQL - Single Server to Azure Database for MySQL - Flexible Server, choose MySQL as the source engine when you create your scenario. If you're migrating from Azure Database for PostgreSQL - Single Server to Azure Database for PostgreSQL - Flexible Server, choose PostgreSQL as the source engine when you create your scenario.
### Online (continuous sync) migration support
-The following table shows Azure Database Migration Service support for **online** migrations.
+The following table describes the current status of Database Migration Service support for *online* migrations:
| Target | Source | Support | Status |
| - | - |:-:|:-:|
-| **Azure SQL DB** | SQL Server <sup>1</sup>| X | |
+| **Azure SQL Database** | SQL Server <sup>1</sup>| X | |
| | Amazon RDS SQL | X | |
| | Oracle | X | |
-| **Azure SQL DB MI** | SQL Server <sup>1</sup>| ✔ | GA |
+| **Azure SQL Database MI** | SQL Server <sup>1</sup>| ✔ | GA |
| | Amazon RDS SQL | X | |
| | Oracle | X | |
| **Azure SQL VM** | SQL Server <sup>1</sup>| ✔ | GA |
| | Amazon RDS SQL | X | |
| | Oracle | X | |
| **Azure Cosmos DB** | MongoDB | ✔ | GA |
-| **Azure DB for MySQL - Flexible Server** | Azure DB for MySQL - Single Server | ✔ | Preview |
+| **Azure Database for MySQL - Flexible Server** | Azure Database for MySQL - Single Server | ✔ | Preview |
| | MySQL | ✔ | Preview |
| | Amazon RDS MySQL | ✔ | Preview |
-| **Azure DB for PostgreSQL - Single server** | PostgreSQL | ✔ | GA |
-| | Azure DB for PostgreSQL - Single server <sup>2</sup> | ✔ | GA |
+| **Azure Database for PostgreSQL - Single Server** | PostgreSQL | ✔ | GA |
+| | Azure Database for PostgreSQL - Single Server <sup>2</sup> | ✔ | GA |
| | Amazon RDS PostgreSQL | ✔ | GA |
-| **Azure DB for PostgreSQL - Flexible server** | PostgreSQL | ✔ | GA |
-| | Azure DB for PostgreSQL - Single server <sup>2</sup> | ✔ | GA |
+| **Azure Database for PostgreSQL - Flexible Server** | PostgreSQL | ✔ | GA |
+| | Azure Database for PostgreSQL - Single Server <sup>2</sup> | ✔ | GA |
| | Amazon RDS PostgreSQL | ✔ | GA |
-| **Azure DB for PostgreSQL - Hyperscale (Citus)** | PostgreSQL | ✔ | GA |
+| **Azure Database for PostgreSQL - Hyperscale (Citus)** | PostgreSQL | ✔ | GA |
| | Amazon RDS PostgreSQL | ✔ | GA |
-1. Online migrations (minimal downtime) using the Azure SQL Migration extension for Azure Data Studio are supported for the following Azure SQL targets: **Azure SQL Managed Instance** and **SQL Server on Azure Virtual Machines**. For more information, see [Migrate databases with Azure SQL migration extension for Azure Data Studio](migration-using-azure-data-studio.md).
+<sup>1</sup> Online migrations (minimal downtime) through the Azure SQL Migration extension for Azure Data Studio are supported for Azure SQL Managed Instance and SQL Server on Azure Virtual Machines targets. For more information, see [Migrate databases by using the Azure SQL Migration extension for Azure Data Studio](migration-using-azure-data-studio.md).
-2. If your source database is already in Azure PaaS (for example, Azure DB for MySQL or Azure DB for PostgreSQL), choose the corresponding engine when creating your migration activity. For example, if you're migrating from Azure DB for MySQL - Single Server to Azure DB for MySQL - Flexible Server, choose MySQL as the source engine during scenario creation. If you're migrating from Azure DB for PostgreSQL - Single Server to Azure DB for PostgreSQL - Flexible Server, choose PostgreSQL as the source engine during scenario creation.
+<sup>2</sup> If your source database is already in an Azure PaaS like Azure Database for MySQL or Azure Database for PostgreSQL, choose the corresponding engine when you create your migration activity. For example, if you're migrating from Azure Database for MySQL - Single Server to Azure Database for MySQL - Flexible Server, choose MySQL as the source engine when you create the scenario. If you're migrating from Azure Database for PostgreSQL - Single Server to Azure Database for PostgreSQL - Flexible Server, choose PostgreSQL as the source engine when you create the scenario.
## Next steps
-For an overview of Azure Database Migration Service and regional availability, see the article [What is the Azure Database Migration Service](dms-overview.md).
+- Learn more about [Azure Database Migration Service](dms-overview.md) and region availability.
dms Tutorial Sql Server Azure Sql Database Offline Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-azure-sql-database-offline-ads.md
Title: "Tutorial: Migrate SQL Server to Azure SQL Database (Preview) offline using Azure Data Studio"
+ Title: "Tutorial: Migrate SQL Server to Azure SQL Database (preview) offline in Azure Data Studio"
-description: Migrate SQL Server to an Azure SQL Database (Preview) offline using Azure Data Studio with Azure Database Migration Service
+description: Learn how to migrate on-premises SQL Server to Azure SQL Database (preview) offline by using Azure Data Studio and Azure Database Migration Service.
Last updated 09/28/2022
-# Tutorial: Migrate SQL Server to an Azure SQL Database offline using Azure Data Studio with DMS (Preview)
-You can use the Azure SQL migration extension in Azure Data Studio to migrate the database(s) from a SQL Server instance to Azure SQL Database (Preview).
+# Tutorial: Migrate SQL Server to Azure SQL Database (preview) offline in Azure Data Studio
-In this tutorial, you'll learn how to migrate the **AdventureWorks2019** database from an on-premises instance of SQL Server to Azure SQL Database (Preview) by using the Azure SQL Migration extension for Azure Data Studio. This tutorial focuses on the offline migration mode that considers an acceptable downtime during the migration process.
+You can use Azure Database Migration Service and the Azure SQL Migration extension for Azure Data Studio to migrate databases from an on-premises instance of SQL Server to Azure SQL Database (preview) offline and with minimal downtime.
+
+> [!NOTE]
+> The option to migrate a SQL Server database to Azure SQL Database by using Azure Data Studio currently is in preview. Azure SQL Database migration targets are available only by using the [Azure Data Studio Insiders](/sql/azure-data-studio/download-azure-data-studio#download-the-insiders-build-of-azure-data-studio) version of the Azure SQL Migration extension.
+
+In this tutorial, learn how to migrate the example AdventureWorks2019 database from an on-premises instance of SQL Server to an instance of Azure SQL Database by using the Azure SQL Migration extension for Azure Data Studio. This tutorial uses offline migration mode, which considers an acceptable downtime during the migration process.
In this tutorial, you learn how to:

> [!div class="checklist"]
>
-> * Launch the *Migrate to Azure SQL* wizard in Azure Data Studio
-> * Run an assessment of your source SQL Server database(s)
-> * Collect performance data from your source SQL Server
-> * Get a recommendation of the Azure SQL Database (Preview) SKU best suited for your workload
-> * Deploy your on-premises database schema to Azure SQL Database
-> * Create a new Azure Database Migration Service
-> * Start and, monitor the progress for your migration through to completion
+> - Open the Migrate to Azure SQL wizard in Azure Data Studio
+> - Run an assessment of your source SQL Server databases
+> - Collect performance data from your source SQL Server instance
+> - Get a recommendation of the Azure SQL Database SKU that will work best for your workload
+> - Deploy your on-premises database schema to Azure SQL Database
+> - Create an instance of Azure Database Migration Service
+> - Start your migration and monitor progress to completion
[!INCLUDE [online-offline](../../includes/database-migration-service-offline-online.md)]

> [!IMPORTANT]
-> **Online** migrations for Azure SQL Database targets, are not currently available.
+> Currently, *online* migrations for Azure SQL Database targets aren't available.
## Prerequisites
-To complete this tutorial, you need to:
-
-* [Download and install Azure Data Studio](/sql/azure-data-studio/download-azure-data-studio)
-* [Install the Azure SQL migration extension](/sql/azure-data-studio/extensions/azure-sql-migration-extension) from the Azure Data Studio marketplace
-* Have an Azure account that is assigned to one of the built-in roles listed below:
- - Contributor for the target Azure SQL Database
- - Reader role for the Azure Resource Groups containing the target Azure SQL Database.
- - Owner or Contributor role for the Azure subscription (required if creating a new DMS service).
- - As an alternative to using the above built-in roles, you can assign a custom role as defined in [this article.](resource-custom-roles-sql-database-ads.md)
- > [!IMPORTANT]
- > Azure account is only required when configuring the migration steps and is not required for assessment or Azure recommendation steps in the migration wizard.
-* Create a target [Azure SQL Database](/azure/azure/azure-sql/database/single-database-create-quickstart).
-* Ensure that the SQL Server login to connect the source SQL Server is a member of the `db_datareader` and the login for the target SQL server is `db_owner`.
-* Migrate database schema from source to target using [SQL Server dacpac extension](/sql/azure-data-studio/extensions/sql-server-dacpac-extension) or, [SQL Database Projects extension](/sql/azure-data-studio/extensions/sql-database-project-extension) for Azure Data Studio.
-* If you're using the Azure Database Migration Service for the first time, ensure that Microsoft.DataMigration resource provider is registered in your subscription. You can follow the steps to [register the resource provider](quickstart-create-data-migration-service-portal.md#register-the-resource-provider)
-
-## Launch the Migrate to Azure SQL wizard in Azure Data Studio
-
-1. Open Azure Data Studio, navigate to the connections section, then select and connect to your on-premises SQL Server (or SQL Server on Azure Virtual Machine).
-2. On the server connection, right-click and select **Manage**.
- :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline-ads/azure-data-studio-manage-panel.png" alt-text="Screenshot of server connection.":::
-3. On the server's home page, navigate to the **General** panel section, then select **Azure SQL Migration** extension.
- :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline-ads/launch-migrate-to-azure-sql-wizard-1.png" alt-text="Screenshot of Azure Data Studio general panel.":::
-4. The Azure SQL Migration dashboard will open, then click on the **Migrate to Azure SQL** button to launch the migration wizard.
- :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline-ads/launch-migrate-to-azure-sql-wizard-2.png" alt-text="Screenshot of Migrate to Azure SQL wizard.":::
-5. The wizard's first page will allow you to start a new session or resume a previously saved one. If no previous session is saved, the **Database assessment** page will be displayed.
-
-## Run database assessment, collect performance data and get right-sized recommendations
-
-1. Select the database(s) to run the assessment, then click **Next**.
- :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline-ads/assessment-database-selection.png" alt-text="Screenshot of database assessment.":::
-2. Select **Azure SQL Database (Preview)** as the target.
- :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline-ads/assessment-target-selection.png" alt-text="Screenshot of selection of the Azure SQL Database target.":::
-3. Click on the **View/Select** button at the bottom of this page to view details of the assessment results.
- :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline-ads/assessment.png" alt-text="Screenshot of view/select assessment results.":::
-4. Select the database(s), and review the assessment report making sure no issues are found.
- :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline-ads/assessment-issues-details.png" alt-text="Screenshot of assessment report.":::
-5. Click the **Get Azure recommendation** button to open the recommendations panel.
- :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline-ads/get-azure-recommendation.png" alt-text="Screenshot of Azure recommendations.":::
-6. Select the **Collect performance data now** option, then choose a folder from your local drive to store the performance logs. When you're ready, then click the **Start** button.
- :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline-ads/get-azure-recommendation-zoom.png" alt-text="Screenshot of performance data collection.":::
-7. Azure Data Studio will now collect performance data until you either stop the collection or close Azure Data Studio.
-8. After 10 minutes, you'll see a **recommendation available** for your Azure SQL Database (Preview). After the first recommendation is generated, you can use the **Restart data collection** option to continue the data collection process to refine the SKU recommendation, especially if your usage patterns vary for an extended time.
- :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline-ads/get-azure-recommendation-collected.png" alt-text="Screenshot of performance data collected.":::
-9. Navigate to your Azure SQL target section, with **Azure SQL Database (Preview)** selected click the **View details** button to open the detailed SKU recommendation report. You can also click on the save recommendation report button at the bottom of this page for later analysis.
- :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline-ads/azure-sku-recommendation-zoom.png" alt-text="Screenshot of SKU recommendation details.":::
-Close the recommendations page, and press the **Next** button to continue with your migration.
+Before you begin the tutorial:
+
+- [Download and install Azure Data Studio](/sql/azure-data-studio/download-azure-data-studio).
+- [Install the Azure SQL Migration extension](/sql/azure-data-studio/extensions/azure-sql-migration-extension) from Azure Data Studio Marketplace.
+- Have an Azure account that's assigned to one of the following built-in roles:
+
+ - Contributor for the target instance of Azure SQL Database
+ - Reader role for the Azure resource group that contains the target instance of Azure SQL Database
+ - Owner or Contributor role for the Azure subscription (required if you create a new instance of Azure Database Migration Service)
+
+ As an alternative to using one of these built-in roles, you can [assign a custom role](resource-custom-roles-sql-database-ads.md).
+
+ > [!IMPORTANT]
+ > An Azure account is required only when you configure the migration steps. An Azure account isn't required for the assessment or to view Azure recommendations in the migration wizard in Azure Data Studio.
+
+- Create a target instance of [Azure SQL Database](/azure/azure-sql/database/single-database-create-quickstart). (For a scripted example of these prerequisites, see the sketch after this list.)
+
+- Make sure that the SQL Server login that connects to the source SQL Server instance is a member of the db_datareader role and that the login for the target SQL Server instance is a member of the db_owner role.
+
+- Migrate the database schema from source to target by using the [SQL Server dacpac extension](/sql/azure-data-studio/extensions/sql-server-dacpac-extension) or the [SQL Database Projects extension](/sql/azure-data-studio/extensions/sql-database-project-extension) in Azure Data Studio.
+
+- If you're using Azure Database Migration Service for the first time, make sure that the Microsoft.DataMigration [resource provider is registered in your subscription](quickstart-create-data-migration-service-portal.md#register-the-resource-provider).
+
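If you prefer to script these prerequisites, the following PowerShell sketch shows one way to do it. It assumes the Az and SqlServer PowerShell modules are installed and that SqlPackage is on the PATH; every server, database, login, password, and path shown is a placeholder, and the SqlPackage calls are only an alternative to the Azure Data Studio dacpac and SQL Database Projects extensions mentioned in the list.

```powershell
# A minimal sketch of the prerequisite setup. Assumes the Az and SqlServer PowerShell
# modules are installed and SqlPackage is on the PATH; every name, credential, and
# path below is a placeholder.
Connect-AzAccount

# Register the Microsoft.DataMigration resource provider (needed on first use of the service).
Register-AzResourceProvider -ProviderNamespace Microsoft.DataMigration

# Create the target logical server and database.
$rg       = 'my-migration-rg'
$server   = 'my-target-sqlserver'   # logical server name (placeholder)
$database = 'AdventureWorks2019'
New-AzResourceGroup -Name $rg -Location 'eastus' -Force
New-AzSqlServer -ResourceGroupName $rg -ServerName $server -Location 'eastus' `
    -SqlAdministratorCredentials (Get-Credential -Message 'SQL admin for the target server')
New-AzSqlDatabase -ResourceGroupName $rg -ServerName $server -DatabaseName $database

# Grant the required role memberships: db_datareader for the migration login's user in the
# source database, db_owner for its user in the target database (the users must already exist).
Invoke-Sqlcmd -ServerInstance 'onprem-sql01' -Database $database `
    -Query 'ALTER ROLE db_datareader ADD MEMBER [migration_user];'
Invoke-Sqlcmd -ServerInstance "$server.database.windows.net" -Database $database `
    -Username 'targetadmin' -Password '<password>' `
    -Query 'ALTER ROLE db_owner ADD MEMBER [migration_user];'

# Deploy the schema to the target before migrating data. SqlPackage is shown here as a
# command-line alternative to the Azure Data Studio dacpac and SQL Database Projects extensions.
SqlPackage /Action:Extract "/SourceServerName:onprem-sql01" "/SourceDatabaseName:$database" "/TargetFile:C:\temp\$database.dacpac"
SqlPackage /Action:Publish "/SourceFile:C:\temp\$database.dacpac" "/TargetServerName:$server.database.windows.net" "/TargetDatabaseName:$database" "/TargetUser:targetadmin" "/TargetPassword:<password>"
```
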
+## Open the Migrate to Azure SQL wizard in Azure Data Studio
+
+To open the Migrate to Azure SQL wizard:
+
+1. In Azure Data Studio, go to **Connections**. Select and connect to your on-premises instance of SQL Server. You also can connect to SQL Server on an Azure virtual machine.
+
+1. Right-click the server connection and select **Manage**.
+
+ :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline-ads/azure-data-studio-manage-panel.png" alt-text="Screenshot that shows a server connection and the Manage option in Azure Data Studio." lightbox="media/tutorial-sql-server-azure-sql-database-offline-ads/azure-data-studio-manage-panel.png":::
+
+1. In the server menu under **General**, select **Azure SQL Migration**.
+
+ :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline-ads/launch-migrate-to-azure-sql-wizard-1.png" alt-text="Screenshot that shows the Azure Data Studio server menu.":::
+
+1. In the Azure SQL Migration dashboard, select **Migrate to Azure SQL** to open the migration wizard.
+
+ :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline-ads/launch-migrate-to-azure-sql-wizard-2.png" alt-text="Screenshot that shows the Migrate to Azure SQL wizard.":::
+
+1. On the first page of the wizard, start a new session or resume a previously saved session.
+
+## Run database assessment, collect performance data, and get Azure recommendations
+
+1. In **Step 1: Databases for assessment** in the Migrate to Azure SQL wizard, select the databases you want to assess. Then, select **Next**.
+
+ :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline-ads/assessment-database-selection.png" alt-text="Screenshot that shows selecting a database for assessment.":::
+
+1. In **Step 2: Assessment results and recommendations**, complete the following steps:
+
+ 1. In **Choose your Azure SQL target**, select **Azure SQL Database (PREVIEW)**.
+
+ :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline-ads/assessment-target-selection.png" alt-text="Screenshot that shows selecting the Azure SQL Database target.":::
+
+ 1. Select **View/Select** to view the assessment results.
+
+ :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline-ads/assessment.png" alt-text="Screenshot that shows view/select assessment results.":::
+
+ 1. In the assessment results, select the database, and then review the assessment report to make sure no issues were found.
+
+ :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline-ads/assessment-issues-details.png" alt-text="Screenshot that shows assessment report.":::
+
+ 1. Select **Get Azure recommendation** to open the recommendations pane.
+
+ :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline-ads/get-azure-recommendation.png" alt-text="Screenshot that shows Azure recommendations.":::
+
+ 1. Select **Collect performance data now**. Select a folder on your local computer to store the performance logs, and then select **Start**.
+
+ :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline-ads/get-azure-recommendation-zoom.png" alt-text="Screenshot that shows performance data collection.":::
+
+ Azure Data Studio collects performance data until you either stop data collection or you close Azure Data Studio.
+
+ After 10 minutes, Azure Data Studio indicates that a recommendation is available for Azure SQL Database. After the first recommendation is generated, you can select **Restart data collection** to continue the data collection process and refine the SKU recommendation. An extended assessment is especially helpful if your usage patterns vary over time.
+
+ :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline-ads/get-azure-recommendation-collected.png" alt-text="Screenshot that shows performance data collected.":::
+
+ 1. In the selected **Azure SQL Database (PREVIEW)** target, select **View details** to open the detailed SKU recommendation report:
+
+ :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline-ads/get-azure-recommendation-view-details.png" alt-text="Screenshot that shows the View details link for the target database recommendations.":::
+
+ 1. In **Review Azure SQL Database Recommendations**, review the recommendation. To save a copy of the recommendation, select **Save recommendation report**.
+
+ :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline-ads/azure-sku-recommendation-zoom.png" alt-text="Screenshot that shows SKU recommendation details.":::
+
+1. Select **Close** to close the recommendations pane.
+
+1. Select **Next** to continue your database migration in the wizard.
## Configure migration settings
-1. Start with the upper section, specifying Azure account details. Select your subscription, location, and resource group from the corresponding drop-down lists.
- :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline-ads/configuration-azure-target-account.png" alt-text="Screenshot of Azure account details.":::
-2. Move to the next section, selecting the target Azure SQL Database server (logical server) from the drop-down list. Specify the target username and password, then click the **Connect** button to verify the connectivity with the specified credentials.
- :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline-ads/configuration-azure-target-database.png" alt-text="Screenshot of Azure SQL Database details.":::
-3. Map the source and target databases for the migration, using the drop-down list for the Azure SQL Database target. Then click **Next** to move to the next step in the migration wizard.
- :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline-ads/configuration-azure-target-map.png" alt-text="Screenshot of source and target mapping.":::
-4. Select **Offline migration** as the migration mode and click Next.
- :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline-ads/migration-mode.png" alt-text="Screenshot of offline migrations selection.":::
-5. Enter source SQL Server credentials, then click **Edit** to select the list of tables to migrate between source and target.
- :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline-ads/migration-source-credentials.png" alt-text="Screenshot of source SQL Server credentials.":::
-6. Select the tables that you want to migrate to target. Notice the **Has rows** column indicates whether the target table has rows in the target database. You can select one or more tables. In the example below, a text filter was applied to select just the tables that contain the word **Employee**. However, you can select the list of tables based on your migration needs. When you're ready, click **Update** to proceed with the next step.
- :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline-ads/migration-source-tables.png" alt-text="Screenshot of table selection.":::
-7. The list of selected tables can be updated anytime before starting the migration. Click **Next** when you're ready to move to the next step in the migration wizard.
- :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline-ads/migration-target-tables.png" alt-text="Screenshot of selected tables to migrate.":::
+
+1. In **Step 3: Azure SQL target** in the Migrate to Azure SQL wizard, complete these steps for your target Azure SQL Database instance:
+
+ 1. Select your Azure account, Azure subscription, the Azure region or location, and the resource group that contains the Azure SQL Database deployment.
+
+ :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline-ads/configuration-azure-target-account.png" alt-text="Screenshot that shows Azure account details.":::
+
+    1. For **Azure SQL Database Server**, select the target Azure SQL Database server (logical server). Enter a username and password for the target database deployment, and then select **Connect** to verify connectivity to the target database with the specified credentials.
+
+ :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline-ads/configuration-azure-target-database.png" alt-text="Screenshot that shows Azure SQL Database details.":::
+
+ 1. Next, map the source database and the target database for the migration. For **Target database**, select the Azure SQL Database target. Then, select **Next** to move to the next step in the migration wizard.
+
+ :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline-ads/configuration-azure-target-map.png" alt-text="Screenshot that shows source and target mapping.":::
+
+1. In **Step 4: Migration mode**, select **Offline migration**, and then select **Next**.
+
+ :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline-ads/migration-mode.png" alt-text="Screenshot that shows offline migrations selection.":::
+
+1. In **Step 5: Data source configuration**, complete the following steps:
+
+ 1. Under **Source credentials**, enter the source SQL Server credentials.
+
+ 1. Under **Select tables**, select the **Edit** pencil icon.
+
+ :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline-ads/migration-source-credentials.png" alt-text="Screenshot that shows source SQL Server credentials.":::
+
+ 1. In **Select tables for \<database-name\>**, select the tables to migrate to the target. The **Has rows** column indicates whether the target table has rows in the target database. You can select one or more tables. Then, select **Update**.
+
+ You can update the list of selected tables anytime before you start the migration.
+
+ In the following example, a text filter is applied to select only tables that contain the word **Employee**. Select a list of tables based on your migration needs.
+
+ :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline-ads/migration-source-tables.png" alt-text="Screenshot that shows table selection.":::
+
+1. Review your table selections, and then select **Next** to move to the next step in the migration wizard.
+
+ :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline-ads/migration-target-tables.png" alt-text="Screenshot that shows selected tables to migrate.":::
+ > [!NOTE]
-> If no tables are selected or credentials fields are empty, the **Next** button will be disabled.
-
-## Create Azure Database Migration Service
-
-1. Create a new Azure Database Migration Service or reuse an existing Service that you previously created.
- > [!NOTE]
- > If you had previously created DMS using the Azure Portal, you cannot reuse it in the migration wizard in Azure Data Studio. Only DMS created previously using Azure Data Studio can be reused.
-2. Select the **Resource group** where you have an existing DMS or need to create a new one. The **Azure Database Migration Service** dropdown will list any existing DMS in the selected resource group.
-3. To reuse an existing DMS, select it from the dropdown list and press **Next** to view the summary screen. When ready to begin the migration, press the **Start migration** button.
- :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline-ads/create-dms.png" alt-text="Screenshot of DMS selection.":::
-4. To create a new DMS, select **Create new**. On the **Create Azure Database Migration Service**, the screen provides the name for your DMS, and select **Create**.
-5. After successfully creating DMS, you'll be provided details to set up **integration runtime**.
- :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline-ads/create-dms-integration-runtime-details.png" alt-text="Screenshot of Integration runtime.":::
-6. Select **Download and install integration runtime** to open the download link in a web browser. Complete the download. Install the integration runtime on a machine that meets the prerequisites of connecting to the source SQL Server.
-7. After the installation is complete, the **Microsoft Integration Runtime Configuration Manager** will automatically launch to begin the registration process.
-8. Copy and paste one of the authentication keys provided in the wizard screen in Azure Data Studio. If the authentication key is valid, a green check icon is displayed in the Integration Runtime Configuration Manager, indicating that you can continue to **Register**.
-9. After completing the registration of the self-hosted integration runtime, close the **Microsoft Integration Runtime Configuration Manager** and switch back to the migration wizard in Azure Data Studio.
- > [!Note]
- > Refer [Create and configure a self-hosted integration runtime](../data-factory/create-self-hosted-integration-runtime.md) for additional information regarding self-hosted integration runtime.
-10. Select **Test connection** in the **Create Azure Database Migration Service** screen in Azure Data Studio to validate that the newly created DMS is connected to the newly registered self-hosted integration runtime.
- :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline-ads/create-dms-integration-runtime-connected.png" alt-text="Screenshot of IR connectivity test.":::
-11. Review the summary and select **Start migration** to start the database migration.
- :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline-ads/summary-start-migration.png" alt-text="Screenshot of start migration.":::
-
-## Monitor your migration
-1. On The Azure SQL Migration dashboard, navigate to the **Database Migration Status** section.
-2. Using the different options in the panel below, you can track migrations in progress, completed, and failed migrations (if any) or list all database migrations together.
- :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline-ads/monitor-migration-dashboard.png" alt-text="Screenshot of monitor migration dashboard.":::
-3. Click on **Database migrations in progress** to view ongoing migrations and get further details.
- :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline-ads/monitor-migration-dashboard-details.png" alt-text="Screenshot of database migration details.":::
-4. DMS will return the latest known migration status every time the migration status section gets refreshed. Use the following table to learn more about the possible statuses:
+> If no tables are selected or if a username and password aren't entered, the **Next** button isn't available to select.
+
+## Create a Database Migration Service instance
+
+In **Step 6: Azure Database Migration Service** in the Migrate to Azure SQL wizard, create a new instance of Azure Database Migration Service or reuse an existing instance that you created earlier.
+
+> [!NOTE]
+> If you previously created a Database Migration Service instance by using the Azure portal, you can't reuse the instance in the migration wizard in Azure Data Studio. You can reuse an instance only if you created the instance by using Azure Data Studio.
+
+### Use an existing instance of Database Migration Service
+
+To use an existing instance of Database Migration Service:
+
+1. In **Resource group**, select the resource group that contains an existing instance of Database Migration Service.
+
+1. In **Azure Database Migration Service**, select an existing instance of Database Migration Service that's in the selected resource group.
+
+1. Select **Next**.
+
+ :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline-ads/create-dms.png" alt-text="Screenshot that shows Database Migration Service selection.":::
+
+### Create a new instance of Database Migration Service
+
+To create a new instance of Database Migration Service:
+
+1. In **Resource group**, create a new resource group to contain a new instance of Database Migration Service.
+
+1. Under **Azure Database Migration Service**, select **Create new**.
+
+1. In **Create Azure Database Migration Service**, enter a name for your Database Migration Service instance, and then select **Create**.
+
+1. Under **Set up integration runtime**, complete the following steps:
+
+ 1. Select the **Download and install integration runtime** link to open the download link in a web browser. Download the integration runtime, and then install it on a computer that meets the prerequisites to connect to the source SQL Server instance.
+
+ :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline-ads/create-dms-integration-runtime-download.png" alt-text="Screenshot that shows the Download and install integration runtime link.":::
+
+ When installation is finished, Microsoft Integration Runtime Configuration Manager automatically opens to begin the registration process.
+
+    1. In the **Authentication key** table in the migration wizard in Azure Data Studio, copy one of the provided authentication keys and paste it in Integration Runtime Configuration Manager. (To register the runtime from the command line instead, see the sketch after these steps.)
+
+ :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline-ads/create-dms-integration-runtime-authentication-key.png" alt-text="Screenshot that highlights the authentication key table in the wizard.":::
+
+ If the authentication key is valid, a green check icon appears in Integration Runtime Configuration Manager. A green check indicates that you can continue to **Register**.
+
+ After you register the self-hosted integration runtime, close Microsoft Integration Runtime Configuration Manager.
+
+ > [!NOTE]
+ > For more information about how to use the self-hosted integration runtime, see [Create and configure a self-hosted integration runtime](../data-factory/create-self-hosted-integration-runtime.md).
+
+1. In **Create Azure Database Migration Service** in Azure Data Studio, select **Test connection** to validate that the newly created Database Migration Service instance is connected to the newly registered self-hosted integration runtime.
+
+ :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline-ads/create-dms-integration-runtime-connected.png" alt-text="Screenshot that shows IR connectivity test.":::
+
+1. Return to the migration wizard in Azure Data Studio.
+
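If you automate the runtime setup, you can register the self-hosted integration runtime from the command line instead of pasting the key into Integration Runtime Configuration Manager. This is only a sketch: the installation path and the `dmgcmd.exe` switch shown here can vary by integration runtime version, and the authentication key is a placeholder for the value you copy from the wizard.

```powershell
# Sketch of unattended self-hosted integration runtime registration.
# The path below is typical for recent IR versions; adjust it for your installation.
$irShared = 'C:\Program Files\Microsoft Integration Runtime\5.0\Shared'
$authKey  = '<authentication key copied from the wizard in Azure Data Studio>'

# Register this machine as a node of the integration runtime.
& "$irShared\dmgcmd.exe" -RegisterNewNode $authKey
```
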
+## Start the database migration
+
+In **Step 7: Summary** in the Migrate to Azure SQL wizard, review the configuration you created, and then select **Start migration** to start the database migration.
++
+## Monitor the database migration
+
+1. In Azure Data Studio, in the server menu under **General**, select **Azure SQL Migration** to go to the dashboard for your Azure SQL Database migrations.
+
+ Under **Database migration status**, you can track migrations that are in progress, completed, and failed (if any), or you can view all database migrations.
+
+ :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline-ads/monitor-migration-dashboard.png" alt-text="Screenshot that shows monitor migration dashboard." lightbox="media/tutorial-sql-server-azure-sql-database-offline-ads/monitor-migration-dashboard.png":::
+
+1. Select **Database migrations in progress** to view active migrations.
+
+ To get more information about a specific migration, select the database name.
+
+ :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline-ads/monitor-migration-dashboard-details.png" alt-text="Screenshot that shows database migration details." lightbox="media/tutorial-sql-server-azure-sql-database-offline-ads/monitor-migration-dashboard-details.png":::
+
+ Database Migration Service returns the latest known migration status each time migration status refreshes. The following table describes possible statuses:
| Status | Description |
|--|-|
- |Preparing for copy| Disabling autostats, triggers, and indexes for target table |
- |Copying| Data is being copied from source to target |
- |Copy finished| Data copy has finished and, waiting on other tables to finish copying to begin final steps to return tables to original schema|
- |Rebuilding indexes| Rebuilding indexes on target tables|
- |Succeeded| This table has all data copied to it and, indexes rebuilt |
-
-5. The migration details page displays the current status per database. As you can see from the screenshot below, the **AdventureWorks2019** database migration is in *Creating* status.
-6. Click on the **Refresh** link to update the status. As you can see from the screenshot below, DMS has updated the migration status to *In progress*.
-7. Click on the database name link to open the table-level view. The upper section of this dashboard displays the current status of the migration, while the lower section provides a detailed status of each table.
-8. After all table data is migrated to the Azure SQL Database (Preview) target, DMS will update the migration status from *In progress* to *Succeeded*.
-9. Once again, you can click on the database name link to open the table-level view. The upper section of this dashboard displays the current status of the migration; the lower section provides information to verify all data is the same on both the source and the target (*rows read vs. rows copied*).
+ |Preparing for copy| The service is disabling autostats, triggers, and indexes in the target table. |
+ |Copying| Data is being copied from the source database to the target database. |
+ |Copy finished| Data copy is finished. The service is waiting on other tables to finish copying to begin the final steps to return tables to their original schema. |
+ |Rebuilding indexes| The service is rebuilding indexes on target tables. |
+ |Succeeded| All data is copied and the indexes are rebuilt. |
+
+1. Check the migration details page to view the current status for each database.
+
+ Here's an example of the AdventureWorks2019 database migration with the status **Creating**:
+
+ :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline-ads/monitor-migration-dashboard-creating.png" alt-text="Screenshot that shows a creating migration status." lightbox="media/tutorial-sql-server-azure-sql-database-offline-ads/monitor-migration-dashboard-creating.png":::
+
+1. In the menu bar, select **Refresh** to update the migration status.
+
+ After migration status is refreshed, the updated status for the example AdventureWorks2019 database migration is **In progress**:
+
+ :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline-ads/monitor-migration-dashboard-in-progress.png" alt-text="Screenshot that shows a migration in progress status." lightbox="media/tutorial-sql-server-azure-sql-database-offline-ads/monitor-migration-dashboard-in-progress.png":::
+
+1. Select a database name to open the table view. In this view, you see the current status of the migration, the number of tables that currently are in that status, and a detailed status of each table.
+
+ :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline-ads/monitor-migration-monitoring-panel-in-progress.png" alt-text="Screenshot that shows monitoring table migration." lightbox="media/tutorial-sql-server-azure-sql-database-offline-ads/monitor-migration-monitoring-panel-in-progress.png":::
+
+ When all table data is migrated to the Azure SQL Database target, Database Migration Service updates the migration status from **In progress** to **Succeeded**.
+
+ :::image type="content" source="media/tutorial-sql-server-azure-sql-database-offline-ads/monitor-migration-monitoring-panel-succeeded.png" alt-text="Screenshot that shows succeeded migration." lightbox="media/tutorial-sql-server-azure-sql-database-offline-ads/monitor-migration-monitoring-panel-succeeded.png":::
> [!NOTE]
-> DMS optimizes migration by skipping tables with no data (0 rows). Such tables will not show up in the list even if selected when creating the migration.
+> Database Migration Service optimizes migration by skipping tables with no data (0 rows). Tables that don't have data don't appear in the list, even if you select the tables when you create the migration.
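If you want to know in advance which tables the service will skip, you can list the empty tables in the source database before you start the migration. The following query is a sketch: the server and database names are placeholders, and it assumes the SqlServer PowerShell module (for `Invoke-Sqlcmd`) is installed.

```powershell
# Sketch: list source tables with zero rows. These tables are skipped by the migration
# and therefore don't appear in the monitoring view. Names below are placeholders.
Invoke-Sqlcmd -ServerInstance 'onprem-sql01' -Database 'AdventureWorks2019' -Query @"
SELECT s.name AS schema_name, t.name AS table_name
FROM sys.tables AS t
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
JOIN sys.dm_db_partition_stats AS p ON p.object_id = t.object_id
WHERE p.index_id IN (0, 1)
GROUP BY s.name, t.name
HAVING SUM(p.row_count) = 0
ORDER BY schema_name, table_name;
"@
```
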
-At this point, you've completed the migration to Azure SQL Database. We encourage you to go through a series of post-migration tasks to ensure everything functions smoothly and efficiently.
+You've completed the migration to Azure SQL Database. We encourage you to go through a series of post-migration tasks to ensure that everything functions smoothly and efficiently.
> [!IMPORTANT]
-> Be sure to take advantage of the advanced cloud-based features offered by Azure SQL Database, such as [built-in high availability](/azure/azure-sql/database/high-availability-sla), [threat detection](/azure/azure-sql/database/azure-defender-for-sql), and [monitoring and tuning your workload](/azure/azure-sql/database/monitor-tune-overview).
+> Be sure to take advantage of the advanced cloud-based features of Azure SQL Database. The features include [built-in high availability](/azure/azure-sql/database/high-availability-sla), [threat detection](/azure/azure-sql/database/azure-defender-for-sql), and [monitoring and tuning your workload](/azure/azure-sql/database/monitor-tune-overview).
## Next steps
-* For a tutorial showing you how to create an Azure SQL Database using the Azure portal, PowerShell, or AZ CLI commands, see [Create a single database - Azure SQL Database](/azure/azure-sql/database/single-database-create-quickstart).
-* For information about Azure SQL Database, see [What is Azure SQL Database](/azure/azure-sql/database/sql-database-paas-overview).
-* For information about connecting to Azure SQL Database, see [Connect applications](/azure/azure-sql/database/connect-query-content-reference-guide).
+- Complete a quickstart to [create an Azure SQL Database instance](/azure/azure-sql/database/single-database-create-quickstart).
+- Learn more about [Azure SQL Database](/azure/azure-sql/database/sql-database-paas-overview).
+- Learn how to [connect apps to Azure SQL Database](/azure/azure-sql/database/connect-query-content-reference-guide).
dms Tutorial Sql Server Managed Instance Offline Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-managed-instance-offline-ads.md
Title: "Tutorial: Migrate SQL Server to Azure SQL Managed Instance offline using Azure Data Studio"
+ Title: "Tutorial: Migrate SQL Server to Azure SQL Managed Instance offline in Azure Data Studio"
-description: Migrate SQL Server to an Azure SQL Managed Instance offline using Azure Data Studio with Azure Database Migration Service
+description: Learn how to migrate on-premises SQL Server to Azure SQL Managed Instance offline by using Azure Data Studio and Azure Database Migration Service.
Last updated 10/05/2021
-# Tutorial: Migrate SQL Server to an Azure SQL Managed Instance offline using Azure Data Studio with DMS
-You can use the Azure SQL migration extension in Azure Data Studio to migrate the database(s) from a SQL Server instance to Azure SQL Managed Instance. For methods that may require some manual effort, see the article [SQL Server instance migration to Azure SQL Managed Instance](/azure/azure-sql/migration-guides/managed-instance/sql-server-to-managed-instance-guide).
+# Tutorial: Migrate SQL Server to Azure SQL Managed Instance offline in Azure Data Studio
-In this tutorial, you migrate the **AdventureWorks** database from an on-premises instance of SQL Server to Azure SQL Managed Instance by using Azure Data Studio with Azure Database Migration Service (DMS). This tutorial focuses on the offline migration mode that considers an acceptable downtime during the migration process.
+You can use Azure Database Migration Service and the Azure SQL Migration extension in Azure Data Studio to migrate databases from an on-premises instance of SQL Server to Azure SQL Managed Instance offline and with minimal downtime.
+
+For database migration methods that might require some manual configuration, see [SQL Server instance migration to Azure SQL Managed Instance](/azure/azure-sql/migration-guides/managed-instance/sql-server-to-managed-instance-guide).
+
+In this tutorial, learn how to migrate the AdventureWorks database from an on-premises instance of SQL Server to an instance of Azure SQL Managed Instance by using Azure Data Studio and Database Migration Service. This tutorial uses offline migration mode, which considers an acceptable downtime during the migration process.
In this tutorial, you learn how to:

> [!div class="checklist"]
>
-> * Launch the *Migrate to Azure SQL* wizard in Azure Data Studio
-> * Run an assessment of your source SQL Server database(s)
-> * Collect performance data from your source SQL Server
-> * Get a recommendation of the Azure SQL Managed Instance SKU best suited for your workload
-> * Specify details of your source SQL Server backup location and your target Azure SQL Managed Instance
-> * Create a new Azure Database Migration Service and install the self-hosted integration runtime to access source server and backups.
-> * Start and monitor the progress for your migration through to completion
+> - Open the Migrate to Azure SQL wizard in Azure Data Studio
+> - Run an assessment of your source SQL Server databases
+> - Collect performance data from your source SQL Server instance
+> - Get a recommendation of the Azure SQL Managed Instance SKU that will work best for your workload
+> - Specify details of your source SQL Server instance, backup location, and target instance of Azure SQL Managed Instance
+> - Create an instance of Azure Database Migration Service
+> - Start your migration and monitor progress to completion
[!INCLUDE [online-offline](../../includes/database-migration-service-offline-online.md)]
-This article describes an offline migration from SQL Server to a SQL Managed Instance. For an online migration, see [Migrate SQL Server to Azure SQL Managed Instance online using Azure Data Studio](tutorial-sql-server-managed-instance-online-ads.md).
+This tutorial describes an offline migration from SQL Server to Azure SQL Managed Instance. For an online migration, see [Migrate SQL Server to Azure SQL Managed Instance online in Azure Data Studio](tutorial-sql-server-managed-instance-online-ads.md).
## Prerequisites
-To complete this tutorial, you need to:
-
-* [Download and install Azure Data Studio](/sql/azure-data-studio/download-azure-data-studio)
-* [Install the Azure SQL migration extension](/sql/azure-data-studio/extensions/azure-sql-migration-extension) from the Azure Data Studio marketplace
-* Have an Azure account that is assigned to one of the built-in roles listed below:
- - Contributor for the target Azure SQL Managed Instance (and Storage Account to upload your database backup files from SMB network share).
- - Reader role for the Azure Resource Groups containing the target Azure SQL Managed Instance or the Azure storage account.
- - Owner or Contributor role for the Azure subscription (required if creating a new DMS service).
- - As an alternative to using the above built-in roles you can assign a custom role as defined in [this article.](resource-custom-roles-sql-db-managed-instance-ads.md)
- > [!IMPORTANT]
- > Azure account is only required when configuring the migration steps and is not required for assessment or Azure recommendation steps in the migration wizard.
-* Create a target [Azure SQL Managed Instance](/azure/azure-sql/managed-instance/instance-create-quickstart).
-* Ensure that the logins used to connect the source SQL Server are members of the *sysadmin* server role or have `CONTROL SERVER` permission.
-* Provide an SMB network share, Azure storage account file share, or Azure storage account blob container that contains your full database backup files and subsequent transaction log backup files, which Azure Database Migration Service can use for database migration.
- > [!IMPORTANT]
- > - If your database backup files are provided in an SMB network share, [Create an Azure storage account](../storage/common/storage-account-create.md) that allows DMS service to upload the database backup files to and use for migrating databases. Make sure to create the Azure Storage Account in the same region as the Azure Database Migration Service instance is created.
- > - Azure Database Migration Service does not initiate any backups, and instead uses existing backups, which you may already have as part of your disaster recovery plan, for the migration.
- > - You need to take [backups using the `WITH CHECKSUM` option](/sql/relational-databases/backup-restore/enable-or-disable-backup-checksums-during-backup-or-restore-sql-server?preserve-view=true&view=sql-server-2017).
- > - Each backup can be written to either a separate backup file or multiple backup files. However, appending multiple backups (i.e. full and t-log) into a single backup media is not supported.
- > - You can provide compressed backups to reduce the likelihood of experiencing potential issues associated with migrating large backups.
-* Ensure that the service account running the source SQL Server instance has read and write permissions on the SMB network share that contains database backup files.
-* If you're migrating a database protected by Transparent Data Encryption (TDE), the certificate from the source SQL Server instance must be migrated to your target Azure SQL Managed Instance before database restoration. To learn more, see [Migrate a certificate of a TDE-protected database to Azure SQL Managed Instance](/azure/azure-sql/managed-instance/tde-certificate-migrate).
+Before you begin the tutorial:
+
+- [Download and install Azure Data Studio](/sql/azure-data-studio/download-azure-data-studio).
+- [Install the Azure SQL Migration extension](/sql/azure-data-studio/extensions/azure-sql-migration-extension) from Azure Data Studio Marketplace.
+- Have an Azure account that's assigned to one of the following built-in roles:
+
+ - Contributor for the target instance of Azure SQL Managed Instance and for the storage account where you upload your database backup files from a Server Message Block (SMB) network share
+ - Reader role for the Azure resource groups that contain the target instance of Azure SQL Managed Instance or your Azure storage account
+ - Owner or Contributor role for the Azure subscription (required if you create a new Database Migration Service instance)
+
+  As an alternative to using one of these built-in roles, you can [assign a custom role](resource-custom-roles-sql-db-managed-instance-ads.md).
+
+ > [!IMPORTANT]
+ > An Azure account is required only when you configure the migration steps. An Azure account isn't required for the assessment or to view Azure recommendations in the migration wizard in Azure Data Studio.
+
+- Create a target instance of [Azure SQL Managed Instance](/azure/azure-sql/managed-instance/instance-create-quickstart).
+
+- Ensure that the logins that you use to connect the source SQL Server instance are members of the SYSADMIN server role or have CONTROL SERVER permission.
+
+- Provide an SMB network share, Azure storage account file share, or Azure storage account blob container that contains your full database backup files and subsequent transaction log backup files. Database Migration Service uses the backup location during database migration.
+
+ > [!IMPORTANT]
+ >
+ > - If your database backup files are in an SMB network share, [create an Azure storage account](../storage/common/storage-account-create.md) that Database Migration Service can use to upload database backup files to and to migrate databases. Make sure you create the Azure storage account in the same region where you create your instance of Database Migration Service.
+ > - Database Migration Service doesn't initiate any backups. Instead, the service uses existing backups for the migration. You might already have these backups as part of your disaster recovery plan.
+  > - Make sure you [create backups by using the WITH CHECKSUM option](/sql/relational-databases/backup-restore/enable-or-disable-backup-checksums-during-backup-or-restore-sql-server?preserve-view=true&view=sql-server-2017). (A scripted backup example follows this prerequisites list.)
+ > - You can write each backup to either a separate backup file or to multiple backup files. Appending multiple backups such as full and transaction logs into a single backup media isn't supported.
+ > - You can provide compressed backups to reduce the likelihood of experiencing potential issues associated with migrating large backups.
+
+- Ensure that the service account that's running the source SQL Server instance has read and write permissions on the SMB network share that contains database backup files.
+
+- If you're migrating a database that's protected by Transparent Data Encryption (TDE), the certificate from the source SQL Server instance must be migrated to your target managed instance before you restore the database. To learn more, see [Migrate a certificate of a TDE-protected database to Azure SQL Managed Instance](/azure/azure-sql/managed-instance/tde-certificate-migrate).
+ > [!TIP]
- > If your database contains sensitive data that is protected by [Always Encrypted](/sql/relational-databases/security/encryption/configure-always-encrypted-using-sql-server-management-studio), migration process using Azure Data Studio with DMS will automatically migrate your Always Encrypted keys to your target Azure SQL Managed Instance.
+ > If your database contains sensitive data that's protected by [Always Encrypted](/sql/relational-databases/security/encryption/configure-always-encrypted-using-sql-server-management-studio), the migration process automatically migrates your Always Encrypted keys to your target managed instance.
-* Provide a machine to install [self-hosted integration runtime](../data-factory/create-self-hosted-integration-runtime.md) to access and migrate database backups **if your database backups are in a network file share**. The migration wizard will provide you with the download link and authentication keys to download and install your self-hosted integration runtime. In preparation for the migration, ensure that the machine where you would install the self-hosted integration runtime has the following outbound firewall rules and domain names enabled:
+- If your database backups are on a network file share, provide a computer on which you can install a [self-hosted integration runtime](../data-factory/create-self-hosted-integration-runtime.md) to access and migrate database backups. The migration wizard gives you the download link and authentication keys to download and install your self-hosted integration runtime.
- | Domain names | Outbound ports | Description |
+ In preparation for the migration, ensure that the computer on which you install the self-hosted integration runtime has the following outbound firewall rules and domain names enabled:
+
+ | Domain names | Outbound port | Description |
| -- | -- | -- |
- | Public Cloud: `{datafactory}.{region}.datafactory.azure.net`<br> or `*.frontend.clouddatahub.net` <br> Azure Government: `{datafactory}.{region}.datafactory.azure.us` <br> China: `{datafactory}.{region}.datafactory.azure.cn` | 443 | Required by the self-hosted integration runtime to connect to the Data Migration service. <br>For new created Data Factory in public cloud, find the FQDN from your Self-hosted Integration Runtime key that is in format {datafactory}.{region}.datafactory.azure.net. For old Data factory, if you don't see the FQDN in your Self-hosted Integration key, use *.frontend.clouddatahub.net instead. |
+ | Public cloud: `{datafactory}.{region}.datafactory.azure.net`<br />or `*.frontend.clouddatahub.net` <br /><br /> Azure Government: `{datafactory}.{region}.datafactory.azure.us` <br /><br /> Azure China: `{datafactory}.{region}.datafactory.azure.cn` | 443 | Required by the self-hosted integration runtime to connect to Database Migration Service. <br/><br/>For a newly created data factory in a public cloud, locate the fully qualified domain name (FQDN) from your self-hosted integration runtime key, in the format `{datafactory}.{region}.datafactory.azure.net`. <br /><br /> For an existing data factory, if you don't see the FQDN in your self-hosted integration key, use `*.frontend.clouddatahub.net` instead. |
| `download.microsoft.com` | 443 | Required by the self-hosted integration runtime for downloading the updates. If you have disabled auto-update, you can skip configuring this domain. |
- | `*.core.windows.net` | 443 | Used by the self-hosted integration runtime to connect to the Azure storage account for uploading database backups from your network share |
+ | `*.core.windows.net` | 443 | Used by the self-hosted integration runtime that connects to the Azure storage account to upload database backups from your network share |
> [!TIP]
- > If your database backup files are already provided in an Azure storage account, self-hosted integration runtime is not required during the migration process.
+ > If your database backup files are already provided in an Azure storage account, a self-hosted integration runtime isn't required during the migration process.
+
+- If you use a self-hosted integration runtime, make sure that the computer on which the runtime is installed can connect to the source SQL Server instance and the network file share where backup files are located.
+
+- Enable outbound port 445 to allow access to the network file share. For more information, see [recommendations for using a self-hosted integration runtime](migration-using-azure-data-studio.md#recommendations-for-using-a-self-hosted-integration-runtime-for-database-migrations). A connectivity-check sketch follows this list.
+
+- If you're using Database Migration Service for the first time, make sure that the Microsoft.DataMigration resource provider is registered in your subscription. You can complete the steps to [register the resource provider](quickstart-create-data-migration-service-portal.md#register-the-resource-provider).
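
If you prefer to check or register the Microsoft.DataMigration resource provider ahead of time rather than in the portal, the following sketch shows one way to do it with the Azure SDK for Python. It assumes the `azure-identity` and `azure-mgmt-resource` packages are installed and that you're already signed in with the Azure CLI; the subscription ID is a placeholder.

```python
# Sketch: check and register the Microsoft.DataMigration resource provider.
# Assumes azure-identity and azure-mgmt-resource are installed and that you're
# signed in with the Azure CLI (az login). Subscription ID is a placeholder.
from azure.identity import AzureCliCredential
from azure.mgmt.resource import ResourceManagementClient

subscription_id = "<your-subscription-id>"
client = ResourceManagementClient(AzureCliCredential(), subscription_id)

provider = client.providers.get("Microsoft.DataMigration")
print(f"Current state: {provider.registration_state}")

if provider.registration_state != "Registered":
    # Registration is asynchronous; re-run this check until it reports Registered.
    client.providers.register("Microsoft.DataMigration")
    print("Registration requested. Re-check until the state is 'Registered'.")
```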
+
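Before you start the migration, you can sanity-check the outbound connectivity listed in the prerequisites from the computer that will host the self-hosted integration runtime. The following minimal sketch uses only the Python standard library to test TCP reachability; the data factory FQDN, storage account, and file share host are placeholders that depend on your environment.

```python
# Sketch: test outbound TCP connectivity from the machine that will host the
# self-hosted integration runtime. Host names are placeholders; replace them
# with the FQDN from your integration runtime key and your own endpoints.
import socket

targets = [
    ("<datafactory>.<region>.datafactory.azure.net", 443),  # Database Migration Service endpoint
    ("download.microsoft.com", 443),                        # integration runtime updates
    ("<yourstorageaccount>.blob.core.windows.net", 443),    # backup upload to Azure Storage
    ("<fileshare-host>", 445),                              # SMB network share with backups
]

for host, port in targets:
    try:
        with socket.create_connection((host, port), timeout=5):
            print(f"OK      {host}:{port}")
    except OSError as err:
        print(f"BLOCKED {host}:{port} ({err})")
```
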
+## Open the Migrate to Azure SQL wizard in Azure Data Studio
+
+To open the Migrate to Azure SQL wizard:
-* When using self-hosted integration runtime, ensure that the machine where the runtime is installed can connect to the source SQL Server instance and the network file share where backup files are located.
-* Outbound port 445 should be enabled to access the network file share. Also, see [recommendations for using self-hosted integration runtime](migration-using-azure-data-studio.md#recommendations-for-using-self-hosted-integration-runtime-for-database-migrations)
-* If you're using the Azure Database Migration Service for the first time, ensure that Microsoft.DataMigration resource provider is registered in your subscription. You can follow the steps to [register the resource provider](quickstart-create-data-migration-service-portal.md#register-the-resource-provider)
+1. In Azure Data Studio, go to **Connections**. Select and connect to your on-premises instance of SQL Server. You also can connect to SQL Server on an Azure virtual machine.
-## Launch the Migrate to Azure SQL wizard in Azure Data Studio
+1. Right-click the server connection and select **Manage**.
+
+1. In the server menu, under **General**, select **Azure SQL Migration**.
+
+1. In the Azure SQL Migration dashboard, select **Migrate to Azure SQL** to open the migration wizard.
-1. Open Azure Data Studio and select the server icon to connect to your on-premises SQL Server (or SQL Server on Azure Virtual Machine).
-1. On the server connection, right-click and select **Manage**.
-1. On the server's home page, Select **Azure SQL Migration** extension.
-1. On the Azure SQL Migration dashboard, select **Migrate to Azure SQL** to launch the migration wizard.
:::image type="content" source="media/tutorial-sql-server-to-managed-instance-offline-ads/launch-migrate-to-azure-sql-wizard.png" alt-text="Launch Migrate to Azure SQL wizard":::
-1. The wizard's first page will allow you to start a new session or resume a previously saved one. Pick the first option to start a new session.
-## Run database assessment, collect performance data, and get Azure recommendation
-1. Select the database(s) to run the assessment and select **Next**.
-1. Select Azure SQL Managed Instance as the target.
+1. On the first page of the wizard, start a new session or resume a previously saved session.
+
+## Run a database assessment, collect performance data, and get Azure recommendations
+
+1. In **Step 1: Databases for assessment** in the Migrate to Azure SQL wizard, select the databases you want to assess. Then, select **Next**.
+
+1. In **Step 2: Assessment results and recommendations**, complete the following steps:
+
+ 1. In **Choose your Azure SQL target**, select **Azure SQL Managed Instance**.
+ :::image type="content" source="media/tutorial-sql-server-to-managed-instance-offline-ads/assessment-complete-target-selection.png" alt-text="Assessment confirmation":::
-1. Select on the **View/Select** button to view details of the assessment results for your database(s), select the database(s) to migrate, and select **OK**.
- :::image type="content" source="media/tutorial-sql-server-to-managed-instance-offline-ads/assessment-issues-details.png" alt-text="Database assessment details":::
-1. Click the **Get Azure recommendation** button.
-1. Pick the **Collect performance data now** option and enter a path for performance logs to be collected and select the **Start** button.
-1. Azure Data Studio will now collect performance data until you either stop the collection, press the **Next** button in the wizard, or close Azure Data Studio.
-After 10 minutes, you'll see a recommended configuration for your Azure SQL Managed Instance. You can also press the **Refresh recommendation** link after the initial
-10 minutes to refresh the recommendation with the additional data collected.
-1. In the above Azure SQL Managed Instance box, click the **View details** button for more information about your recommendation.
-1. Close the view details box and press the **Next** button.
+
+   1. Select **View/Select** to view the assessment results.
+
+   1. In the assessment results, select the database, and then review the assessment report to make sure no issues were found.
+
+ 1. Select **Get Azure recommendation** to open the recommendations pane.
+
+ 1. Select **Collect performance data now**. Select a folder on your local computer to store the performance logs, and then select **Start**.
+
+ Azure Data Studio collects performance data until you either stop data collection or you close Azure Data Studio.
+
+ After 10 minutes, Azure Data Studio indicates that a recommendation is available for Azure SQL Managed Instance. After the first recommendation is generated, you can select **Restart data collection** to continue the data collection process and refine the SKU recommendation. An extended assessment is especially helpful if your usage patterns vary over time.
+
+ 1. In the selected **Azure SQL Managed Instance** target, select **View details** to open the detailed SKU recommendation report:
+
+ 1. In **Review Azure SQL Managed Instance Recommendations**, review the recommendation. To save a copy of the recommendation, select the **Save recommendation report** checkbox.
+
+1. Select **Close** to close the recommendations pane.
+
+1. Select **Next** to continue your database migration in the wizard.
## Configure migration settings
-1. Specify your **Azure SQL Managed Instance** by selecting your subscription, location, and resource group from the corresponding drop-down lists and then select **Next**.
-1. Select **Offline migration** as the migration mode.
- > [!NOTE]
- > In the offline migration mode, the source SQL Server database should not be used for write activity while database backups are restored on target Azure SQL Managed Instance. Application downtime needs to be considered till the migration completes.
-1. Select the location of your database backups. Your database backups can be located on an on-premises network share or in an Azure storage blob container.
+1. In **Step 3: Azure SQL target** in the Migrate to Azure SQL wizard, select your Azure account, Azure subscription, the Azure region or location, and the resource group that contains the target instance of Azure SQL Managed Instance. Then, select **Next**.
+
+1. In **Step 4: Migration mode**, select **Offline migration**, and then select **Next**.
+ > [!NOTE]
- > For backups located on a network share, provide the details of your source SQL Server, source backup location, target database name and Azure storage account for the backup files to be uploaded.
-
- |Field |Description |
- ||-|
- |**Source Credentials - Username** |The credential (Windows / SQL authentication) to connect to the source SQL Server instance and validate the backup files. |
- |**Source Credentials - Password** |The credential (Windows / SQL authentication) to connect to the source SQL Server instance and validate the backup files. |
- |**Network share location that contains backups** |The network share location that contains the full and transaction log backup files. Any invalid files or backups files in the network share that don't belong to the valid backup set will be automatically ignored during the migration process. |
- |**Windows user account with read access to the network share location** |The Windows credential (username) that has read access to the network share to retrieve the backup files. |
- |**Password** |The Windows credential (password) that has read access to the network share to retrieve the backup files. |
- |**Target database name** |The target database name can be modified if you wish to change the database name on the target during the migration process. |
- |**Storage account details** |The resource group and storage account where backup files will be uploaded to. You don't need to create a container as DMS will automatically create a blob container in the specified storage account during the upload process. |
-
-* For backups stored in an Azure storage blob container, specify the below details of the **Target database name**, **Resource group**, **Azure storage account**, **Blob container**, and **Last backup file from** the corresponding drop-down lists.
-
- |Field |Description |
- ||-|
- |**Target database name** |The target database name can be modified if you wish to change the database name on the target during the migration process. |
- |**Storage account details** |The resource group, storage account and container where backup files are located.
- |**Last Backup File** |The file name of the last backup of the database that you're migrating.
+ > In offline migration mode, the source SQL Server database shouldn't be used for write activity while database backups are restored on a target instance of Azure SQL Managed Instance. Application downtime needs to be considered until the migration is finished.
-> [!IMPORTANT]
-> If loopback check functionality is enabled and the source SQL Server and file share are on the same computer, then source won't be able to access the files hare using FQDN. To fix this issue, disable loopback check functionality using the instructions [here](https://support.microsoft.com/help/926642/error-message-when-you-try-to-access-a-server-locally-by-using-its-fqd)
+1. In **Step 5: Data source configuration**, select the location of your database backups. Your database backups can be located either on an on-premises network share or in an Azure storage blob container. (A sketch for inspecting your backup files follows these steps.)
-## Create Azure Database Migration Service
+ - For backups that are located on a network share, enter or select the following information:
-1. Create a new Azure Database Migration Service or reuse an existing Service that you previously created.
- > [!NOTE]
- > If you had previously created DMS using the Azure Portal, you cannot reuse it in the migration wizard in Azure Data Studio. Only DMS created previously using Azure Data Studio can be reused.
-1. Select the **Resource group** where you have an existing DMS or need to create a new one. The **Azure Database Migration Service** dropdown will list any existing DMS in the selected resource group.
-1. To reuse an existing DMS, select it from the dropdown list and press Next to view the summary screen and when ready to begin the migration, press the **Start** migration button.
-1. To create a new DMS, select **Create new**. On the **Create Azure Database Migration Service**, screen provide the name for your DMS and select **Create**.
-1. After successfully creating DMS, you'll be provided with details to set up **integration runtime**.
-1. Select **Download and install integration runtime** to open the download link in a web browser. Complete the download. Install the integration runtime on a machine that meets the pre-requisites of connecting to the source SQL Server and the location containing the source backup.
-1. After the installation is complete, the **Microsoft Integration Runtime Configuration Manager** will automatically launch to begin the registration process.
-1. Copy and paste one of the authentication keys provided in the wizard screen in Azure Data Studio. If the authentication key is valid, a green check icon is displayed in the Integration Runtime Configuration Manager, indicating that you can continue to **Register**.
-After completing the registration of the self-hosted integration runtime, close the **Microsoft Integration Runtime Configuration Manager** and switch back to the migration wizard in Azure Data Studio.
-1. Select **Test connection** in the **Create Azure Database Migration Service** screen in Azure Data Studio to validate that the newly created DMS is connected to the newly registered self-hosted integration runtime.
- :::image type="content" source="media/tutorial-sql-server-to-managed-instance-offline-ads/test-connection-integration-runtime-complete.png" alt-text="Test connection integration runtime":::
-1. Review the summary and select **Done** to start the database migration.
-
-## Monitor your migration
-
-1. On the **Database Migration Status**, you can track the migrations in progress, migrations completed, and migrations failed (if any).
+ |Name |Description |
+ ||-|
+   |**Source Credentials - Username** |The credential (Windows or SQL authentication) to connect to the source SQL Server instance and validate the backup files. |
+   |**Source Credentials - Password** |The credential (Windows or SQL authentication) to connect to the source SQL Server instance and validate the backup files. |
+ |**Network share location that contains backups** |The network share location that contains the full and transaction log backup files. Any invalid files or backup files in the network share that don't belong to the valid backup set are automatically ignored during the migration process. |
+ |**Windows user account with read access to the network share location** |The Windows credential (username) that has read access to the network share to retrieve the backup files. |
+ |**Password** |The Windows credential (password) that has read access to the network share to retrieve the backup files. |
+ |**Target database name** |You can modify the target database name during the migration process. |
+ |**Storage account details** |The resource group and storage account where backup files are uploaded. You don't need to create a container. Database Migration Service automatically creates a blob container in the specified storage account during the upload process. |
+
+ - For backups that are stored in an Azure storage blob container, enter or select the following information:
+
+ |Name |Description |
+ ||-|
+ |**Target database name** |You can modify the target database name during the migration process. |
+ |**Storage account details** |The resource group, storage account, and container where backup files are located.
+ |**Last Backup File** |The file name of the last backup of the database you're migrating.
+
+ > [!IMPORTANT]
+ > If loopback check functionality is enabled and the source SQL Server instance and file share are on the same computer, the source can't access the file share by using an FQDN. To fix this issue, [disable loopback check functionality](https://support.microsoft.com/help/926642/error-message-when-you-try-to-access-a-server-locally-by-using-its-fqd).
+
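If you want to confirm that the files in your backup location form a usable backup set before you start the migration, you can inspect each file from the source instance with `RESTORE HEADERONLY`. The sketch below uses `pyodbc`; the connection string and file path are placeholders, and it isn't part of the wizard itself.

```python
# Sketch: inspect a backup file from the source SQL Server instance with
# RESTORE HEADERONLY. The connection string and file path are placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=<source-sql-server>;DATABASE=master;UID=<user>;PWD=<password>",
    autocommit=True,
)
cursor = conn.cursor()

backup_file = r"\\<fileshare-host>\backups\AdventureWorks_FULL.bak"  # placeholder
cursor.execute(f"RESTORE HEADERONLY FROM DISK = N'{backup_file}'")

columns = [c[0] for c in cursor.description]
for row in cursor.fetchall():
    info = dict(zip(columns, row))
    # BackupType: 1 = full, 2 = transaction log, 5 = differential
    print(info["DatabaseName"], info["BackupType"], "checksums:", info["HasBackupChecksums"])

conn.close()
```
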
+## Create a Database Migration Service instance
+
+In **Step 6: Azure Database Migration Service** in the Migrate to Azure SQL wizard, create a new instance of Azure Database Migration Service or reuse an existing instance that you created earlier.
+
+> [!NOTE]
+> If you previously created a Database Migration Service instance by using the Azure portal, you can't reuse the instance in the migration wizard in Azure Data Studio. You can reuse an instance only if you created the instance by using Azure Data Studio.
+
+### Use an existing instance of Database Migration Service
+
+To use an existing instance of Database Migration Service:
+
+1. In **Resource group**, select the resource group that contains an existing instance of Database Migration Service.
+
+1. In **Azure Database Migration Service**, select an existing instance of Database Migration Service that's in the selected resource group.
+
+1. Select **Next**.
+
+### Create a new instance of Database Migration Service
+
+To create a new instance of Database Migration Service:
+
+1. In **Resource group**, create a new resource group to contain a new instance of Database Migration Service.
+
+1. Under **Azure Database Migration Service**, select **Create new**.
+
+1. In **Create Azure Database Migration Service**, enter a name for your Database Migration Service instance, and then select **Create**.
+
+1. Under **Set up integration runtime**, complete the following steps:
+
+ 1. Select the **Download and install integration runtime** link to open the download link in a web browser. Download the integration runtime, and then install it on a computer that meets the prerequisites to connect to the source SQL Server instance.
+
+ When installation is finished, Microsoft Integration Runtime Configuration Manager automatically opens to begin the registration process.
+
+   1. In the **Authentication key** table in the wizard, copy one of the authentication keys and paste it in Microsoft Integration Runtime Configuration Manager on the computer where you installed the integration runtime. If the authentication key is valid, a green check icon appears in Integration Runtime Configuration Manager. A green check indicates that you can continue to **Register**. (If you prefer to script the registration, see the command-line sketch after these steps.)
+
+ After you register the self-hosted integration runtime, close Microsoft Integration Runtime Configuration Manager.
+
+ > [!NOTE]
+ > For more information about how to use the self-hosted integration runtime, see [Create and configure a self-hosted integration runtime](../data-factory/create-self-hosted-integration-runtime.md).
+
+1. In **Create Azure Database Migration Service** in Azure Data Studio, select **Test connection** to validate that the newly created Database Migration Service instance is connected to the newly registered self-hosted integration runtime.
+
+ :::image type="content" source="media/tutorial-sql-server-to-managed-instance-offline-ads/test-connection-integration-runtime-complete.png" alt-text="Test connection integration runtime":::
+
+1. Return to the migration wizard in Azure Data Studio.
+
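The steps above use the Configuration Manager UI. If you script your environment instead, the self-hosted integration runtime also accepts an authentication key at the command line through `dmgcmd.exe`. The sketch below assumes the default installation path (the version folder may differ on your machine) and uses a placeholder key; run it from an elevated prompt.

```python
# Sketch: register the self-hosted integration runtime from a script instead of
# the Configuration Manager UI. The install path and version folder may differ;
# the authentication key is a placeholder copied from the wizard.
# Run from an elevated (administrator) prompt.
import subprocess

dmgcmd = r"C:\Program Files\Microsoft Integration Runtime\5.0\Shared\dmgcmd.exe"
auth_key = "<authentication-key-from-the-wizard>"

subprocess.run([dmgcmd, "-RegisterNewNode", auth_key], check=True)
```
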
+## Start the database migration
+
+In **Step 7: Summary** in the Migrate to Azure SQL wizard, review the configuration you created, and then select **Start migration** to start the database migration.
+
+## Monitor the database migration
+
+1. In Azure Data Studio, in the server menu, under **General**, select **Azure SQL Migration** to go to the dashboard for your Azure SQL migrations.
+
+ Under **Database migration status**, you can track migrations that are in progress, completed, and failed (if any), or you can view all database migrations.
:::image type="content" source="media/tutorial-sql-server-to-managed-instance-offline-ads/monitor-migration-dashboard.png" alt-text="monitor migration dashboard":::
-1. Select **Database migrations in progress** to view ongoing migrations and get further details by selecting the database name.
-1. The migration details page displays the backup files and their corresponding status:
+
+1. Select **Database migrations in progress** to view active migrations.
+
+ To get more information about a specific migration, select the database name.
+
+ The migration details pane displays the backup files and their corresponding status:
| Status | Description | |--|-|
- | Arrived | Backup file arrived in the source backup location and validated |
- | Uploading | Integration runtime is currently uploading the backup file to Azure storage|
- | Uploaded | Backup file is uploaded to Azure storage |
- | Restoring | Azure Database Migration Service is currently restoring the backup file to Azure SQL Managed Instance|
- | Restored | Backup file is successfully restored on Azure SQL Managed Instance |
- | Canceled | Migration process was canceled |
- | Ignored | Backup file was ignored as it doesn't belong to a valid database backup chain |
+ | Arrived | The backup file arrived in the source backup location and was validated. |
+ | Uploading | The integration runtime is uploading the backup file to the Azure storage account. |
+ | Uploaded | The backup file was uploaded to the Azure storage account. |
+ | Restoring | The service is restoring the backup file to Azure SQL Managed Instance. |
+ | Restored | The backup file is successfully restored in Azure SQL Managed Instance. |
+ | Canceled | The migration process was canceled. |
+ | Ignored | The backup file was ignored because it doesn't belong to a valid database backup chain. |
- :::image type="content" source="media/tutorial-sql-server-to-managed-instance-offline-ads/offline-to-mi-migration-details-inprogress-backups-restored.png" alt-text="offline backup restore details":::
-After all database backups are restored on Azure SQL Managed Instance, the Azure DMS will initiate an automatic migration cutover to ensure the migrated database in Azure SQL Managed Instance is ready for use and the migration status changes from *in progress* to *Succeeded*.
+After all database backups are restored on the instance of Azure SQL Managed Instance, an automatic migration cutover is initiated by Database Migration Service to ensure that the migrated database is ready to use. The migration status changes from **In progress** to **Succeeded**.
> [!IMPORTANT]
-> After the migration, availability of SQL Managed Instance with Business Critical service tier can take significantly longer than General Purpose as three secondary replicas have to be seeded for Always On High Availability group. This operation duration depends on the size of data, for more information, see [Management operations duration](/azure/azure-sql/managed-instance/management-operations-overview#duration).
+> After the migration, the availability of SQL Managed Instance with Business Critical service tier might take significantly longer than the General Purpose tier because three secondary replicas have to be seeded for an Always On High Availability group. The duration of this operation depends on the size of the data. For more information, see [Management operations duration](/azure/azure-sql/managed-instance/management-operations-overview#duration).
## Next steps
-* For a tutorial showing you how to migrate a database to SQL Managed Instance using the T-SQL RESTORE command, see [Restore a backup to SQL Managed Instance using the restore command](/azure/azure-sql/managed-instance/restore-sample-database-quickstart).
-* For information about SQL Managed Instance, see [What is SQL Managed Instance](/azure/azure-sql/managed-instance/sql-managed-instance-paas-overview).
-* For information about connecting apps to SQL Managed Instance, see [Connect applications](/azure/azure-sql/managed-instance/connect-application-instance).
-
+- Complete a quickstart to [migrate a database to SQL Managed Instance by using the T-SQL RESTORE command](/azure/azure-sql/managed-instance/restore-sample-database-quickstart).
+- Learn more about [SQL Managed Instance](/azure/azure-sql/managed-instance/sql-managed-instance-paas-overview).
+- Learn how to [connect apps to SQL Managed Instance](/azure/azure-sql/managed-instance/connect-application-instance).
dms Tutorial Sql Server Managed Instance Online Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-managed-instance-online-ads.md
Title: "Tutorial: Migrate SQL Server to Azure SQL Managed Instance online using Azure Data Studio"
+ Title: "Tutorial: Migrate SQL Server to Azure SQL Managed Instance online by using Azure Data Studio"
-description: Migrate SQL Server to an Azure SQL Managed Instance online using Azure Data Studio with Azure Database Migration Service
+description: Migrate SQL Server to an Azure SQL Managed Instance online by using Azure Data Studio with Azure Database Migration Service
Last updated 10/05/2021
-# Tutorial: Migrate SQL Server to an Azure SQL Managed Instance online using Azure Data Studio with DMS
+# Tutorial: Migrate SQL Server to an Azure SQL Managed Instance online by using Azure Data Studio with DMS
-Use the Azure SQL migration extension in Azure Data Studio to migrate database(s) from a SQL Server instance to an [Azure SQL Managed Instance](/azure/azure-sql/managed-instance/sql-managed-instance-paas-overview) with minimal downtime. For methods that may require some manual effort, see the article [SQL Server instance migration to Azure SQL Managed Instance](/azure/azure-sql/migration-guides/managed-instance/sql-server-to-managed-instance-guide).
+Use the Azure SQL migration extension in Azure Data Studio to migrate database(s) from a SQL Server instance to an [Azure SQL Managed Instance](/azure/azure-sql/managed-instance/sql-managed-instance-paas-overview) with minimal downtime. For methods that might require some manual effort, see the article [SQL Server instance migration to Azure SQL Managed Instance](/azure/azure-sql/migration-guides/managed-instance/sql-server-to-managed-instance-guide).
In this tutorial, you migrate the **AdventureWorks** database from an on-premises instance of SQL Server to Azure SQL Managed Instance with minimal downtime by using Azure Data Studio with Azure Database Migration Service (DMS). This tutorial focuses on the online migration mode where application downtime is limited to a short cutover at the end of the migration. In this tutorial, you learn how to: > [!div class="checklist"] >
-> * Launch the *Migrate to Azure SQL* wizard in Azure Data Studio.
+> * Launch the *Migrate to Azure SQL* wizard in Azure Data Studio
> * Run an assessment of your source SQL Server database(s) > * Collect performance data from your source SQL Server > * Get a recommendation of the Azure SQL Managed Instance SKU best suited for your workload > * Specify details of your source SQL Server, backup location and your target Azure SQL Managed Instance
-> * Create a new Azure Database Migration Service and install the self-hosted integration runtime to access source server and backups.
-> * Start and monitor the progress for your migration.
-> * Perform the migration cutover when you are ready.
+> * Create a new Azure Database Migration Service and install the self-hosted integration runtime to access source server and backups
+> * Start and monitor the progress for your migration
+> * Perform the migration cutover when you are ready
> [!IMPORTANT] > Prepare for migration and reduce the duration of the online migration process as much as possible to minimize the risk of interruption caused by instance reconfiguration or planned maintenance. In case of such an event, migration process will start from the beginning. In case of planned maintenance, there is a grace period of 36 hours where the target Azure SQL Managed Instance configuration or maintenance will be held before migration process is restarted.
To complete this tutorial, you need to:
> [!IMPORTANT] > - If your database backup files are provided in an SMB network share, [Create an Azure storage account](../storage/common/storage-account-create.md) that allows the DMS service to upload the database backup files. Make sure to create the Azure Storage Account in the same region as the Azure Database Migration Service instance is created.
- > - Azure Database Migration Service does not initiate any backups, and instead uses existing backups, which you may already have as part of your disaster recovery plan, for the migration.
+ > - Azure Database Migration Service does not initiate any backups, and instead uses existing backups, which you might already have as part of your disaster recovery plan, for the migration.
> - You need to take [backups using the `WITH CHECKSUM` option](/sql/relational-databases/backup-restore/enable-or-disable-backup-checksums-during-backup-or-restore-sql-server).
- > - Each backup can be written to either a separate backup file or multiple backup files. However, appending multiple backups (i.e. full and t-log) into a single backup media is not supported.
+ > - Each backup can be written to either a separate backup file or multiple backup files. However, appending multiple backups (that is, full and t-log) into a single backup media isn't supported.
> - Use compressed backups to reduce the likelihood of experiencing potential issues associated with migrating large backups. * Ensure that the service account running the source SQL Server instance has read and write permissions on the SMB network share that contains database backup files.
-* The source SQL Server instance certificate from a database protected by Transparent Data Encryption (TDE) needs to be migrated to the target Azure SQL Managed Instance or SQL Server on Azure Virtual Machine before migrating data. To learn more, see [Migrate a certificate of a TDE-protected database to Azure SQL Managed Instance](/azure/azure-sql/managed-instance/tde-certificate-migrate) and [Move a TDE Protected Database to Another SQL Server](/sql/relational-databases/security/encryption/move-a-tde-protected-database-to-another-sql-server).
+* The source SQL Server instance certificate from a database protected by Transparent Data Encryption (TDE) needs to be migrated to the target Azure SQL Managed Instance or SQL Server on Azure virtual machine before you migrate data. To learn more, see [Migrate a certificate of a TDE-protected database to Azure SQL Managed Instance](/azure/azure-sql/managed-instance/tde-certificate-migrate) and [Move a TDE Protected Database to Another SQL Server](/sql/relational-databases/security/encryption/move-a-tde-protected-database-to-another-sql-server).
> [!TIP]
- > If your database contains sensitive data that is protected by [Always Encrypted](/sql/relational-databases/security/encryption/configure-always-encrypted-using-sql-server-management-studio), migration process using Azure Data Studio with DMS will automatically migrate your Always Encrypted keys to your target Azure SQL Managed Instance or SQL Server on Azure Virtual Machine.
+ > If your database contains sensitive data that is protected by [Always Encrypted](/sql/relational-databases/security/encryption/configure-always-encrypted-using-sql-server-management-studio), the migration process that uses Azure Data Studio with DMS will automatically migrate your Always Encrypted keys to your target Azure SQL Managed Instance or SQL Server on Azure virtual machine.
* If your database backups are in a network file share, provide a machine to install [self-hosted integration runtime](../data-factory/create-self-hosted-integration-runtime.md) to access and migrate database backups. The migration wizard provides the download link and authentication keys to download and install your self-hosted integration runtime. In preparation for the migration, ensure that the machine where you plan to install the self-hosted integration runtime has the following outbound firewall rules and domain names enabled: | Domain names | Outbound ports | Description | | -- | -- | |
- | Public Cloud: `{datafactory}.{region}.datafactory.azure.net`<br> or `*.frontend.clouddatahub.net` <br> Azure Government: `{datafactory}.{region}.datafactory.azure.us` <br> China: `{datafactory}.{region}.datafactory.azure.cn` | 443 | Required by the self-hosted integration runtime to connect to the Data Migration service. <br>For new created Data Factory in public cloud, locate the FQDN from your Self-hosted Integration Runtime key, which is in format `{datafactory}.{region}.datafactory.azure.net`. For old Data factory, if you don't see the FQDN in your Self-hosted Integration key, use *.frontend.clouddatahub.net instead. |
+ | Public Cloud: `{datafactory}.{region}.datafactory.azure.net`<br> or `*.frontend.clouddatahub.net` <br> Azure Government: `{datafactory}.{region}.datafactory.azure.us` <br> China: `{datafactory}.{region}.datafactory.azure.cn` | 443 | Required by the self-hosted integration runtime to connect to the Data Migration service. <br>For a newly created data factory in the public cloud, locate the FQDN from your self-hosted integration runtime key, which is in format `{datafactory}.{region}.datafactory.azure.net`. For the old data factory, if you don't see the FQDN in your self-hosted integration key, use *.frontend.clouddatahub.net instead. |
| `download.microsoft.com` | 443 | Required by the self-hosted integration runtime for downloading the updates. If you have disabled auto-update, you can skip configuring this domain. | | `*.core.windows.net` | 443 | Used by the self-hosted integration runtime that connects to the Azure storage account for uploading database backups from your network share | > [!TIP]
- > If your database backup files are already provided in an Azure storage account, self-hosted integration runtime is not required during the migration process.
+ > If your database backup files are already provided in an Azure storage account, a self-hosted integration runtime is not required during the migration process.
-* When using self-hosted integration runtime, make sure that the machine where the runtime is installed can connect to the source SQL Server instance and the network file share where backup files are located. Outbound port 445 should be enabled to allow access to the network file share. Also see [recommendations for using self-hosted integration runtime](migration-using-azure-data-studio.md#recommendations-for-using-self-hosted-integration-runtime-for-database-migrations)
+* When you're using a self-hosted integration runtime, make sure that the machine where the runtime is installed can connect to the source SQL Server instance and the network file share where backup files are located. Outbound port 445 should be enabled to allow access to the network file share. Also see [recommendations for using a self-hosted integration runtime](migration-using-azure-data-studio.md#recommendations-for-using-a-self-hosted-integration-runtime-for-database-migrations)
* If you're using the Azure Database Migration Service for the first time, ensure that Microsoft.DataMigration resource provider is registered in your subscription. You can follow the steps to [register the resource provider](quickstart-create-data-migration-service-portal.md#register-the-resource-provider) ## Launch the Migrate to Azure SQL wizard in Azure Data Studio
-1. Open Azure Data Studio and select the server icon to connect to your on-premises SQL Server (or SQL Server on Azure Virtual Machine).
+1. Open Azure Data Studio and select the server icon to connect to your on-premises SQL Server (or SQL Server on Azure virtual machine).
1. On the server connection, right-click and select **Manage**.
-1. On the server's home page, Select **Azure SQL Migration** extension.
+1. On the server's home page, select **Azure SQL Migration** extension.
1. On the Azure SQL Migration dashboard, select **Migrate to Azure SQL** to launch the migration wizard. :::image type="content" source="media/tutorial-sql-server-to-managed-instance-online-ads/launch-migrate-to-azure-sql-wizard.png" alt-text="Launch Migrate to Azure SQL wizard"::: 1. The first page of the wizard will allow you to start a new session or resume a previously saved one. Pick the first option to start a new session.
To complete this tutorial, you need to:
> In the online migration mode, the source SQL Server database can be used for read and write activity while database backups are continuously restored on target Azure SQL Managed Instance. Application downtime is limited to duration for the cutover at the end of migration. 1. Select the location of your database backups. Your database backups can either be located on an on-premises network share or in an Azure storage blob container. > [!NOTE]
- > If your database backups are provided in an on-premises network share, DMS will require you to setup self-hosted integration runtime in the next step of the wizard. Self-hosted integration runtime is required to access your source database backups, check the validity of the backup set and upload them to Azure storage account.<br/> If your database backups are already on an Azure storage blob container, you do not need to setup self-hosted integration runtime.
+ > If your database backups are provided in an on-premises network share, DMS will require you to set up a self-hosted integration runtime in the next step of the wizard. A self-hosted integration runtime is required to access your source database backups, check the validity of the backup set, and upload the backups to your Azure storage account.<br/> If your database backups are already on an Azure storage blob container, you don't need to set up a self-hosted integration runtime.
-* For backups located on a network share provide the below details of your source SQL Server, source backup location, target database name and Azure storage account for the backup files to be uploaded to.
+* For backups located on a network share, provide the following details of your source SQL Server, source backup location, target database name, and Azure storage account for the backup files to be uploaded to:
|Field |Description | ||-|
dms Tutorial Sql Server To Virtual Machine Offline Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-to-virtual-machine-offline-ads.md
Title: "Tutorial: Migrate SQL Server to SQL Server on Azure Virtual Machine offline using Azure Data Studio"
+ Title: "Tutorial: Migrate SQL Server to SQL Server on Azure Virtual Machines offline in Azure Data Studio"
-description: Migrate SQL Server to an SQL Server on Azure Virtual Machine offline using Azure Data Studio with Azure Database Migration Service
+description: Learn how to migrate on-premises SQL Server to SQL Server on Azure Virtual Machines offline by using Azure Data Studio and Azure Database Migration Service.
Last updated 10/05/2021
-# Tutorial: Migrate SQL Server to SQL Server on Azure Virtual Machine offline using Azure Data Studio with DMS
+# Tutorial: Migrate SQL Server to SQL Server on Azure Virtual Machines offline in Azure Data Studio
-Use the Azure SQL migration extension in Azure Data Studio to migrate the databases from a SQL Server instance to a [SQL Server on Azure Virtual Machine (SQL Server 2016 and above)](/azure/azure-sql/virtual-machines/windows/sql-server-on-azure-vm-iaas-what-is-overview) with minimal downtime. For methods that may require some manual effort, see the article [SQL Server instance migration to SQL Server on Azure Virtual Machine](/azure/azure-sql/migration-guides/virtual-machines/sql-server-to-sql-on-azure-vm-migration-overview).
+You can use Azure Database Migration Service and the Azure SQL Migration extension in Azure Data Studio to migrate databases from an on-premises instance of SQL Server to [SQL Server on Azure Virtual Machines (SQL Server 2016 and later)](/azure/azure-sql/virtual-machines/windows/sql-server-on-azure-vm-iaas-what-is-overview) offline and with minimal downtime.
-In this tutorial, you migrate the **AdventureWorks** database from an on-premises instance of SQL Server to a SQL Server on Azure Virtual Machine with the offline migration method by using Azure Data Studio with Azure Database Migration Service.
+For database migration methods that might require some manual configuration, see [SQL Server instance migration to SQL Server on Azure Virtual Machines](/azure/azure-sql/migration-guides/virtual-machines/sql-server-to-sql-on-azure-vm-migration-overview).
+
+In this tutorial, learn how to migrate the example AdventureWorks database from an on-premises instance of SQL Server to an instance of SQL Server on Azure Virtual Machines by using Azure Data Studio and Azure Database Migration Service. This tutorial uses offline migration mode, which assumes that downtime is acceptable during the migration process.
In this tutorial, you learn how to: > [!div class="checklist"] >
-> * Launch the Migrate to Azure SQL wizard in Azure Data Studio.
-> * Run an assessment of your source SQL Server database(s)
-> * Collect performance data from your source SQL Server
-> * Get a recommendation of the SQL Server on Azure Virtual Machine SKU best suited for your workload
-> * Specify details of your source SQL Server, backup location and your target SQL Server on Azure Virtual Machine
-> * Create a new Azure Database Migration Service and install the self-hosted integration runtime to access source server and backups.
-> * Start and monitor the progress for your migration through to completion
+> - Open the Migrate to Azure SQL wizard in Azure Data Studio
+> - Run an assessment of your source SQL Server databases
+> - Collect performance data from your source SQL Server instance
+> - Get a recommendation of the SQL Server on Azure Virtual Machines SKU that will work best for your workload
+> - Set the details of your source SQL Server instance, backup location, and target instance of SQL Server on Azure Virtual Machines
+> - Create an instance of Azure Database Migration Service
+> - Start your migration and monitor progress to completion
-This article describes an offline migration from SQL Server to a SQL Server on Azure Virtual Machine. For an online migration, see [Migrate SQL Server to a SQL Server on Azure Virtual Machine online using Azure Data Studio with DMS](tutorial-sql-server-to-virtual-machine-online-ads.md).
+This tutorial describes an offline migration from SQL Server to SQL Server on Azure Virtual Machines. For an online migration, see [Migrate SQL Server to SQL Server on Azure Virtual Machines online in Azure Data Studio](tutorial-sql-server-to-virtual-machine-online-ads.md).
## Prerequisites
-To complete this tutorial, you need to:
+Before you begin the tutorial:
-* [Download and install Azure Data Studio](/sql/azure-data-studio/download-azure-data-studio)
-* [Install the Azure SQL migration extension](/sql/azure-data-studio/extensions/azure-sql-migration-extension) from the Azure Data Studio marketplace
-* Have an Azure account that is assigned to one of the built-in roles listed below:
- - Contributor for the target SQL Server on Azure Virtual Machine (and Storage Account to upload your database backup files from SMB network share).
- - Reader role for the Azure Resource Groups containing the target SQL Server on Azure Virtual Machine or the Azure storage account.
- - Owner or Contributor role for the Azure subscription.
- - As an alternative to using the above built-in roles you can assign a custom role as defined in [this article.](resource-custom-roles-sql-db-virtual-machine-ads.md)
- > [!IMPORTANT]
- > Azure account is only required when configuring the migration steps and is not required for assessment or Azure recommendation steps in the migration wizard.
-* Create a target [SQL Server on Azure Virtual Machine](/azure/azure-sql/virtual-machines/windows/create-sql-vm-portal).
+- [Download and install Azure Data Studio](/sql/azure-data-studio/download-azure-data-studio).
+- [Install the Azure SQL Migration extension](/sql/azure-data-studio/extensions/azure-sql-migration-extension) from Azure Data Studio Marketplace.
+- Have an Azure account that's assigned to one of the following built-in roles:
- > [!IMPORTANT]
- > If you have an existing Azure Virtual Machine, it should be registered with [SQL IaaS Agent extension in Full management mode](/azure/azure-sql/virtual-machines/windows/sql-server-iaas-agent-extension-automate-management#management-modes).
-* Ensure that the logins used to connect the source SQL Server are members of the *sysadmin* server role or have `CONTROL SERVER` permission.
-* Use one of the following storage options for the full database and transaction log backup files:
- - SMB network share
- - Azure storage account file share or blob container
+ - Contributor for the target instance of SQL Server on Azure Virtual Machines and for the storage account where you upload your database backup files from a Server Message Block (SMB) network share
+ - Reader role for the Azure resource group that contains the target instance of SQL Server on Azure Virtual Machines or for your Azure Storage account
+ - Owner or Contributor role for the Azure subscription
+
+ As an alternative to using one of these built-in roles, you can [assign a custom role](resource-custom-roles-sql-database-ads.md).
+
+ > [!IMPORTANT]
+ > An Azure account is required only when you configure the migration steps. An Azure account isn't required for the assessment or to view Azure recommendations in the migration wizard in Azure Data Studio.
+
+- Create a target instance of [SQL Server on Azure Virtual Machines](/azure/azure-sql/virtual-machines/windows/create-sql-vm-portal).
> [!IMPORTANT]
- > - If your database backup files are provided in an SMB network share, [Create an Azure storage account](../storage/common/storage-account-create.md) that allows the DMS service to upload the database backup files. Make sure to create the Azure Storage Account in the same region as the Azure Database Migration Service instance is created.
- > - Azure Database Migration Service does not initiate any backups, and instead uses existing backups, which you may already have as part of your disaster recovery plan, for the migration.
- > - Each backup can be written to either a separate backup file or multiple backup files. However, appending multiple backups (i.e. full and t-log) into a single backup media is not supported.
- > - Use compressed backups to reduce the likelihood of experiencing potential issues associated with migrating large backups.
-* Ensure that the service account running the source SQL Server instance has read and write permissions on the SMB network share that contains database backup files.
-* The source SQL Server instance certificate from a database protected by Transparent Data Encryption (TDE) needs to be migrated to SQL Server on Azure Virtual Machine before migrating data. To learn more, see [Move a TDE Protected Database to Another SQL Server](/sql/relational-databases/security/encryption/move-a-tde-protected-database-to-another-sql-server).
+ > If you have an existing Azure virtual machine, it should be registered with the [SQL IaaS Agent extension in Full management mode](/azure/azure-sql/virtual-machines/windows/sql-server-iaas-agent-extension-automate-management#management-modes).
+
+- Ensure that the logins that you use to connect the source SQL Server instance are members of the SYSADMIN server role or have CONTROL SERVER permission.
+
+- Provide an SMB network share, Azure storage account file share, or Azure storage account blob container that contains your full database backup files and subsequent transaction log backup files. Database Migration Service uses the backup location during database migration.
+
+ > [!IMPORTANT]
+ >
+ > - If your database backup files are in an SMB network share, [create an Azure storage account](../storage/common/storage-account-create.md) that Database Migration Service can use to upload database backup files to and to migrate databases. Make sure you create the Azure storage account in the same region where you create your instance of Database Migration Service.
+ > - Database Migration Service doesn't initiate any backups. Instead, the service uses existing backups for the migration. You might already have these backups as part of your disaster recovery plan.
+ > - You can write each backup to either a separate backup file or to multiple backup files. Appending multiple backups such as full and transaction logs into a single backup media isn't supported.
+   > - You can provide compressed backups to reduce the likelihood of experiencing potential issues associated with migrating large backups. (A backup sketch that applies these options follows this list.)
+
+- Ensure that the service account that's running the source SQL Server instance has read and write permissions on the SMB network share that contains database backup files.
+
+- If you're migrating a database that's protected by Transparent Data Encryption (TDE), the certificate from the source SQL Server instance must be migrated to SQL Server on Azure Virtual Machines before you migrate data. To learn more, see [Move a TDE-protected database to another SQL Server instance](/sql/relational-databases/security/encryption/move-a-tde-protected-database-to-another-sql-server). (A certificate export sketch follows this list.)
+ > [!TIP]
- > If your database contains sensitive data that is protected by [Always Encrypted](/sql/relational-databases/security/encryption/configure-always-encrypted-using-sql-server-management-studio), migration process using Azure Data Studio with DMS will automatically migrate your Always Encrypted keys to your target SQL Server on Azure Virtual Machine.
+ > If your database contains sensitive data that's protected by [Always Encrypted](/sql/relational-databases/security/encryption/configure-always-encrypted-using-sql-server-management-studio), the migration process automatically migrates your Always Encrypted keys to your target instance of SQL Server on Azure Virtual Machines.
+
+- If your database backups are on a network file share, provide a computer on which you can install a [self-hosted integration runtime](../data-factory/create-self-hosted-integration-runtime.md) to access and migrate database backups. The migration wizard gives you the download link and authentication keys to download and install your self-hosted integration runtime.
-* If your database backups are in a network file share, provide a machine to install [self-hosted integration runtime](../data-factory/create-self-hosted-integration-runtime.md) to access and migrate database backups. The migration wizard provides the download link and authentication keys to download and install your self-hosted integration runtime. In preparation for the migration, ensure that the machine where you plan to install the self-hosted integration runtime has the following outbound firewall rules and domain names enabled:
+ In preparation for the migration, ensure that the computer on which you install the self-hosted integration runtime has the following outbound firewall rules and domain names enabled:
- | Domain names | Outbound ports | Description |
+ | Domain names | Outbound port | Description |
| -- | -- | |
- | Public Cloud: `{datafactory}.{region}.datafactory.azure.net`<br> or `*.frontend.clouddatahub.net` <br> Azure Government: `{datafactory}.{region}.datafactory.azure.us` <br> China: `{datafactory}.{region}.datafactory.azure.cn` | 443 | Required by the self-hosted integration runtime to connect to the Data Migration service. <br>For new created Data Factory in public cloud, locate the FQDN from your Self-hosted Integration Runtime key, which is in format `{datafactory}.{region}.datafactory.azure.net`. For old Data factory, if you don't see the FQDN in your Self-hosted Integration key, use *.frontend.clouddatahub.net instead. |
+ | Public cloud: `{datafactory}.{region}.datafactory.azure.net`<br />or `*.frontend.clouddatahub.net` <br /><br /> Azure Government: `{datafactory}.{region}.datafactory.azure.us` <br /><br /> Azure China: `{datafactory}.{region}.datafactory.azure.cn` | 443 | Required by the self-hosted integration runtime to connect to Database Migration Service. <br/><br/>For a newly created data factory in a public cloud, locate the fully qualified domain name (FQDN) from your self-hosted integration runtime key, in the format `{datafactory}.{region}.datafactory.azure.net`. <br /><br /> For an existing data factory, if you don't see the FQDN in your self-hosted integration key, use `*.frontend.clouddatahub.net` instead. |
| `download.microsoft.com` | 443 | Required by the self-hosted integration runtime for downloading the updates. If you have disabled auto-update, you can skip configuring this domain. |
- | `*.core.windows.net` | 443 | Used by the self-hosted integration runtime that connects to the Azure storage account for uploading database backups from your network share |
+ | `*.core.windows.net` | 443 | Used by the self-hosted integration runtime that connects to the Azure storage account to upload database backups from your network share |
> [!TIP]
- > If your database backup files are already provided in an Azure storage account, self-hosted integration runtime is not required during the migration process.
+ > If your database backup files are already provided in an Azure storage account, a self-hosted integration runtime isn't required during the migration process.
+
+- If you use a self-hosted integration runtime, make sure that the computer on which the runtime is installed can connect to the source SQL Server instance and the network file share where backup files are located.
+
+- Enable outbound port 445 to allow access to the network file share. For more information, see [recommendations for using a self-hosted integration runtime](migration-using-azure-data-studio.md#recommendations-for-using-a-self-hosted-integration-runtime-for-database-migrations).
+
+- If you're using Azure Database Migration Service for the first time, make sure that the Microsoft.DataMigration [resource provider is registered in your subscription](quickstart-create-data-migration-service-portal.md#register-the-resource-provider).
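
If your existing backup routine doesn't already meet the requirements called out in the backup prerequisites above, the following sketch shows one way to take a full and a transaction log backup with `CHECKSUM` and compression by using `pyodbc`. The server, credentials, and paths are placeholders; adjust them for your environment.

```python
# Sketch: take a full backup and a transaction log backup WITH CHECKSUM and
# compression, as the migration prerequisites require. Server, credentials,
# and paths are placeholders. Each backup goes to its own file; don't append
# multiple backups to a single file.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=<source-sql-server>;DATABASE=master;UID=<user>;PWD=<password>",
    autocommit=True,  # BACKUP can't run inside a transaction
)
cursor = conn.cursor()

statements = [
    r"BACKUP DATABASE [AdventureWorks] "
    r"TO DISK = N'\\<fileshare-host>\backups\AdventureWorks_FULL.bak' "
    r"WITH CHECKSUM, COMPRESSION, INIT",
    r"BACKUP LOG [AdventureWorks] "
    r"TO DISK = N'\\<fileshare-host>\backups\AdventureWorks_LOG1.trn' "
    r"WITH CHECKSUM, COMPRESSION, INIT",
]

for statement in statements:
    cursor.execute(statement)
    while cursor.nextset():  # drain informational messages until the backup finishes
        pass

conn.close()
```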
+
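For the TDE prerequisite above, moving the certificate comes down to backing it up on the source instance and re-creating it on the target from the exported files, per the linked guidance. Below is a compact sketch of the source-side export; the certificate name, file paths, and password are placeholders, and the exported files must then be copied to the target VM and imported in its `master` database.

```python
# Sketch: export the TDE certificate from the source instance. Certificate
# name, file paths, and password are placeholders. Copy the two exported files
# to the target VM and re-create the certificate there in master with
#   CREATE CERTIFICATE ... FROM FILE = ...
#   WITH PRIVATE KEY (FILE = ..., DECRYPTION BY PASSWORD = ...)
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=<source-sql-server>;DATABASE=master;UID=<user>;PWD=<password>",
    autocommit=True,
)
conn.execute(
    "BACKUP CERTIFICATE <TdeCertificateName> "
    "TO FILE = N'C:\\temp\\tde_cert.cer' "
    "WITH PRIVATE KEY (FILE = N'C:\\temp\\tde_cert.pvk', "
    "ENCRYPTION BY PASSWORD = N'<strong-password>')"
)
conn.close()
```
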
+## Open the Migrate to Azure SQL wizard in Azure Data Studio
+
+To open the Migrate to Azure SQL wizard:
+
+1. In Azure Data Studio, go to **Connections**. Select and connect to your on-premises instance of SQL Server. You also can connect to SQL Server on an Azure virtual machine.
+
+1. Right-click the server connection and select **Manage**.
+
+1. In the server menu under **General**, select **Azure SQL Migration**.
+
+1. In the Azure SQL Migration dashboard, select **Migrate to Azure SQL** to open the migration wizard.
+
+ :::image type="content" source="media/tutorial-sql-server-to-virtual-machine-online-ads/launch-migrate-to-azure-sql-wizard.png" alt-text="Screenshot that shows how to open the Migrate to Azure SQL wizard.":::
+
+1. On the first page of the wizard, start a new session or resume a previously saved session.
+
+## Run a database assessment, collect performance data, and get Azure recommendations
+
+1. In **Step 1: Databases for assessment** in the Migrate to Azure SQL wizard, select the databases you want to assess. Then, select **Next**.
+
+1. In **Step 2: Assessment results and recommendations**, complete the following steps:
+
+ 1. In **Choose your Azure SQL target**, select **SQL Server on Azure Virtual Machine**.
+
+ :::image type="content" source="media/tutorial-sql-server-to-virtual-machine-offline-ads/assessment-complete-target-selection.png" alt-text="Screenshot that shows an assessment confirmation.":::
+
+ 1. Select **View/Select** to view the assessment results.
+
+ 1. In the assessment results, select the database, and then review the assessment report to make sure no issues were found.
+
+ 1. Select **Get Azure recommendation** to open the recommendations pane.
+
+ 1. Select **Collect performance data now**. Select a folder on your local computer to store the performance logs, and then select **Start**.
-* When using self-hosted integration runtime, make sure that the machine where the runtime is installed can connect to the source SQL Server instance and the network file share where backup files are located. Outbound port 445 should be enabled to allow access to the network file share. Also see [recommendations for using self-hosted integration runtime](migration-using-azure-data-studio.md#recommendations-for-using-self-hosted-integration-runtime-for-database-migrations)
-* If you're using the Azure Database Migration Service for the first time, ensure that Microsoft.DataMigration resource provider is registered in your subscription. You can follow the steps to [register the resource provider](quickstart-create-data-migration-service-portal.md#register-the-resource-provider)
+ Azure Data Studio collects performance data until you either stop data collection or you close Azure Data Studio.
-## Launch the Migrate to Azure SQL wizard in Azure Data Studio
+ After 10 minutes, Azure Data Studio indicates that a recommendation is available for SQL Server on Azure Virtual Machines. After the first recommendation is generated, you can select **Restart data collection** to continue the data collection process and refine the SKU recommendation. An extended assessment is especially helpful if your usage patterns vary over time.
-1. Open Azure Data Studio and select on the server icon to connect to your on-premises SQL Server (or SQL Server on Azure Virtual Machine).
-1. On the server connection, right-click and select **Manage**.
-1. On the server's home page, Select **Azure SQL Migration** extension.
-1. On the Azure SQL Migration dashboard, select **Migrate to Azure SQL** to launch the migration wizard.
- :::image type="content" source="media/tutorial-sql-server-to-virtual-machine-online-ads/launch-migrate-to-azure-sql-wizard.png" alt-text="Launch Migrate to Azure SQL wizard":::
-1. In the first step of the migration wizard, link your existing or new Azure account to Azure Data Studio.
+ 1. In the selected **SQL Server on Azure Virtual Machines** target, select **View details** to open the detailed SKU recommendation report:
-## Run database assessment, collect performance data and get Azure recommendation
+ 1. In **Review SQL Server on Azure Virtual Machines Recommendations**, review the recommendation. To save a copy of the recommendation, select the **Save recommendation report** checkbox.
-1. Select the database(s) to run assessment and select **Next**.
-1. Select SQL Server on Azure Virtual Machine as the target.
- :::image type="content" source="media/tutorial-sql-server-to-virtual-machine-offline-ads/assessment-complete-target-selection.png" alt-text="Assessment confirmation":::
-1. Select on the **View/Select** button to view details of the assessment results for your database(s), select the database(s) to migrate, and select **OK**.
-1. Click the **Get Azure recommendation** button.
-2. Pick the **Collect performance data now** option and enter a path for performance logs to be collected and click the **Start** button.
-3. Azure Data Studio will now collect performance data until you either stop the collection, press the **Next** button in the wizard or close Azure Data Studio.
-4. After 10 minutes you will see a recommended configuration for your Azure SQL VM. You can also press the Refresh recommendation link after the initial 10 minutes to refresh the recommendation with the additional data collected.
-5. In the above **SQL Server on Azure Virtual Machine** box click the **View details** button for more information about your recommendation.
-6. Close the view details box and press the **Next** button.
+1. Select **Close** to close the recommendations pane.
+
+1. Select **Next** to continue your database migration in the wizard.
## Configure migration settings
-1. Specify your **target SQL Server on Azure Virtual Machine** by selecting your subscription, location, resource group from the corresponding drop-down lists and then select **Next**.
-2. Select **Offline migration** as the migration mode.
+1. In **Step 3: Azure SQL target** in the Migrate to Azure SQL wizard, select your Azure account, Azure subscription, the Azure region or location, and the resource group that contains the target SQL Server on Azure Virtual Machines instance. Then, select **Next**.
+
+1. In **Step 4: Migration mode**, select **Offline migration**, and then select **Next**.
+ > [!NOTE]
- > In the offline migration mode, the source SQL Server database should not be used for write activity while database backup files are restored on the target Azure SQL database. Application downtime persists through the start until the completion of the migration process.
-3. Select the location of your database backups. Your database backups can either be located on an on-premises network share or in an Azure storage blob container.
+ > In offline migration mode, the source SQL Server database shouldn't be used for write activity while database backup files are restored on the target instance of SQL Server on Azure Virtual Machines. Application downtime persists from the start of the migration process until it's finished.
+
+1. In **Step 5: Data source configuration**, select the location of your database backups. Your database backups can be located either on an on-premises network share or in an Azure storage blob container.
+ > [!NOTE]
- > If your database backups are provided in an on-premises network share, DMS will require you to setup self-hosted integration runtime in the next step of the wizard. Self-hosted integration runtime is required to access your source database backups, check the validity of the backup set and upload them to Azure storage account.<br/> If your database backups are already on an Azure storage blob container, you do not need to setup self-hosted integration runtime.
-* For backups located on a network share provide the below details of your source SQL Server, source backup location, target database name and Azure storage account for the backup files to be uploaded to.
+ > If your database backups are provided in an on-premises network share, you must set up a self-hosted integration runtime in the next step of the wizard. A self-hosted integration runtime is required to access your source database backups, check the validity of the backup set, and upload backups to an Azure storage account.
+ >
+ > If your database backups are already in an Azure storage blob container, you don't need to set up a self-hosted integration runtime.
+
+ For backups that are located on a network share, enter or select the following information:
- |Field |Description |
+ |Name |Description |
||-|
- |**Source Credentials - Username** |The credential (Windows / SQL authentication) to connect to the source SQL Server instance and validate the backup files. |
- |**Source Credentials - Password** |The credential (Windows / SQL authentication) to connect to the source SQL Server instance and validate the backup files. |
- |**Network share location that contains backups** |The network share location that contains the full and transaction log backup files. Any invalid files or backups files in the network share that don't belong to the valid backup set will be automatically ignored during the migration process. |
+ |**Source Credentials - Username** |The credential (Windows or SQL authentication) to connect to the source SQL Server instance and validate the backup files. |
+ |**Source Credentials - Password** |The credential (Windows or SQL authentication) to connect to the source SQL Server instance and validate the backup files. |
+ |**Network share location that contains backups** |The network share location that contains the full and transaction log backup files. Any invalid files or backup files in the network share that don't belong to the valid backup set are automatically ignored during the migration process. |
|**Windows user account with read access to the network share location** |The Windows credential (username) that has read access to the network share to retrieve the backup files. | |**Password** |The Windows credential (password) that has read access to the network share to retrieve the backup files. |
- |**Target database name** |The target database name can be modified if you wish to change the database name on the target during the migration process. |
+ |**Target database name** |You can modify the target database name during the migration process. |
-* For backups stored in an Azure storage blob container specify the below details of the **Target database name**, **Resource group**, **Azure storage account**, **Blob container** and **Last backup file from** the corresponding drop-down lists.
+ For backups that are stored in an Azure storage blob container, enter or select the following information:
- |Field |Description |
+ |Name |Description |
||-|
- |**Target database name** |The target database name can be modified if you wish to change the database name on the target during the migration process. |
- |**Storage account details** |The resource group, storage account and container where backup files are located.
- |**Last Backup File** |The file name of the last backup of the database that you are migrating.
-
+ |**Target database name** |You can modify the target database name during the migration process. |
+ |**Storage account details** |The resource group, storage account, and container where backup files are located.
+ |**Last Backup File** |The file name of the last backup of the database you're migrating.
+ > [!IMPORTANT]
- > If loopback check functionality is enabled and the source SQL Server and file share are on the same computer, then source won't be able to access the files hare using FQDN. To fix this issue, disable loopback check functionality using the instructions [here](https://support.microsoft.com/help/926642/error-message-when-you-try-to-access-a-server-locally-by-using-its-fqd)
+ > If loopback check functionality is enabled and the source SQL Server and file share are on the same computer, the source won't be able to access the file share by using the FQDN. To fix this issue, [disable loopback check functionality](https://support.microsoft.com/help/926642/error-message-when-you-try-to-access-a-server-locally-by-using-its-fqd).
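+
+The linked support article describes two registry-based workarounds: a host-name-specific `BackConnectionHostNames` list (preferred) and a broader `DisableLoopbackCheck` switch. As a rough, hedged sketch of the latter, run something like the following in an elevated PowerShell session on the affected computer, after reviewing the security trade-offs in the article:
+
+```powershell
+# Sketch only: turn off the loopback check so the local server can reach the
+# file share by its FQDN. Where possible, prefer the more targeted
+# BackConnectionHostNames approach described in the linked article.
+New-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Lsa' `
+    -Name 'DisableLoopbackCheck' -PropertyType DWord -Value 1 -Force
+```
+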
-## Create Azure Database Migration Service
+## Create a Database Migration Service instance
-1. Create a new Azure Database Migration Service or reuse an existing Service that you previously created.
- > [!NOTE]
- > If you had previously created DMS using the Azure Portal, you cannot reuse it in the migration wizard in Azure Data Studio. Only DMS created previously using Azure Data Studio can be reused.
-1. Select the **Resource group** where you have an existing DMS or need to create a new one. The **Azure Database Migration Service** dropdown will list any existing DMS in the selected resource group.
-1. To reuse an existing DMS, select it from the dropdown list and the status of the self-hosted integration runtime will be displayed at the bottom of the page.
-1. To create a new DMS, select on **Create new**.
-1. On the **Create Azure Database Migration Service**, screen provide the name for your DMS and select **Create**.
-1. After successful creation of DMS, you'll be provided with details to **Setup integration runtime**.
-1. Select on **Download and install integration runtime** to open the download link in a web browser. Complete the download. Install the integration runtime on a machine that meets the pre-requisites of connecting to source SQL Server and the location containing the source backup.
-1. After the installation is complete, the **Microsoft Integration Runtime Configuration Manager** will automatically launch to begin the registration process.
-1. Copy and paste one of the authentication keys provided in the wizard screen in Azure Data Studio. If the authentication key is valid, a green check icon is displayed in the Integration Runtime Configuration Manager indicating that you can continue to **Register**.
-1. After successfully completing the registration of self-hosted integration runtime, close the **Microsoft Integration Runtime Configuration Manager** and switch back to the migration wizard in Azure Data Studio.
-1. Select **Test connection** in the **Create Azure Database Migration Service** screen in Azure Data Studio to validate that the newly created DMS is connected to the newly registered self-hosted integration runtime and select **Done**.
- :::image type="content" source="media/tutorial-sql-server-to-virtual-machine-offline-ads/test-connection-integration-runtime-complete.png" alt-text="Test connection integration runtime":::
-1. Review the summary and select **Done** to start the database migration.
-
-## Monitor your migration
-
-1. On the **Database Migration Status**, you can track the migrations in progress, migrations completed, and migrations failed (if any).
+In **Step 6: Azure Database Migration Service** in the Migrate to Azure SQL wizard, create a new instance of Azure Database Migration Service or reuse an existing instance that you created earlier.
+
+> [!NOTE]
+> If you previously created a Database Migration Service instance by using the Azure portal, you can't reuse the instance in the migration wizard in Azure Data Studio. You can reuse an instance only if you created the instance by using Azure Data Studio.
+
+### Use an existing instance of Database Migration Service
+
+To use an existing instance of Database Migration Service:
+
+1. In **Resource group**, select the resource group that contains an existing instance of Database Migration Service.
+
+1. In **Azure Database Migration Service**, select an existing instance of Database Migration Service that's in the selected resource group.
+
+1. Select **Next**.
+
+### Create a new instance of Database Migration Service
+
+To create a new instance of Database Migration Service:
+
+1. In **Resource group**, create a new resource group to contain a new instance of Database Migration Service.
+
+1. Under **Azure Database Migration Service**, select **Create new**.
+
+1. In **Create Azure Database Migration Service**, enter a name for your Database Migration Service instance, and then select **Create**.
+
+1. Under **Set up integration runtime**, complete the following steps:
+
+ 1. Select the **Download and install integration runtime** link to open the download link in a web browser. Download the integration runtime, and then install it on a computer that meets the prerequisites to connect to the source SQL Server instance.
+
+ When installation is finished, Microsoft Integration Runtime Configuration Manager automatically opens to begin the registration process.
+
+ 1. In the **Authentication key** table, copy one of the authentication keys that are provided in the wizard and paste it in Azure Data Studio. If the authentication key is valid, a green check icon appears in Integration Runtime Configuration Manager. A green check indicates that you can continue to **Register**.
+
+ After you register the self-hosted integration runtime, close Microsoft Integration Runtime Configuration Manager.
+
+ > [!NOTE]
+ > For more information about how to use the self-hosted integration runtime, see [Create and configure a self-hosted integration runtime](../data-factory/create-self-hosted-integration-runtime.md).
+
+1. In **Create Azure Database Migration Service** in Azure Data Studio, select **Test connection** to validate that the newly created Database Migration Service instance is connected to the newly registered self-hosted integration runtime.
+
+1. Return to the migration wizard in Azure Data Studio.
+
+## Start the database migration
+
+In **Step 7: Summary** in the Migrate to Azure SQL wizard, review the configuration you created, and then select **Start migration** to start the database migration.
+
+## Monitor the database migration
+
+1. In Azure Data Studio, in the server menu under **General**, select **Azure SQL Migration** to go to the dashboard for your Azure SQL migrations.
+
+ Under **Database migration status**, you can track migrations that are in progress, completed, and failed (if any), or you can view all database migrations.
:::image type="content" source="media/tutorial-sql-server-to-virtual-machine-offline-ads/monitor-migration-dashboard.png" alt-text="monitor migration dashboard":::
-1. Select **Database migrations in progress** to view ongoing migrations and get further details by selecting the database name.
-1. The migration details page displays the backup files and their corresponding status:
+
+1. Select **Database migrations in progress** to view active migrations.
+
+ To get more information about a specific migration, select the database name.
+
+ The migration details pane displays the backup files and their corresponding status:
| Status | Description | |--|-|
- | Arrived | Backup file arrived in the source backup location and validated |
- | Uploading | Integration runtime is currently uploading the backup file to Azure storage|
- | Uploaded | Backup file is uploaded to Azure storage |
- | Restoring | Azure Database Migration Service is currently restoring the backup file to SQL Server on Azure Virtual Machine|
- | Restored | Backup file is successfully restored on SQL Server on Azure Virtual Machine |
- | Canceled | Migration process was canceled |
- | Ignored | Backup file was ignored as it does not belong to a valid database backup chain |
+ | Arrived | The backup file arrived in the source backup location and was validated. |
+ | Uploading | The integration runtime is uploading the backup file to Azure storage. |
+ | Uploaded | The backup file has been uploaded to Azure storage. |
+ | Restoring | The service is restoring the backup file to SQL Server on Azure Virtual Machines. |
+ | Restored | The backup file was successfully restored on SQL Server on Azure Virtual Machines. |
+ | Canceled | The migration process was canceled. |
+ | Ignored | The backup file was ignored because it doesn't belong to a valid database backup chain. |
-After all database backups are restored on SQL Server on Azure Virtual Machine, an automatic migration cutover will be initiated by the Azure DMS to ensure the migrated database is ready for use and the migration status changes from *in progress* to *Succeeded*.
+After all database backups are restored on the instance of SQL Server on Azure Virtual Machines, an automatic migration cutover is initiated by Database Migration Service to ensure that the migrated database is ready to use. The migration status changes from **In progress** to **Succeeded**.
## Next steps
-* For a tutorial showing you how to migrate a database to SQL Server on Azure Virtual Machines using the T-SQL RESTORE command, see [Migrate a SQL Server database to SQL Server on a virtual machine](/azure/azure-sql/virtual-machines/windows/migrate-to-vm-from-sql-server).
-* For information about SQL Server on Azure Virtual Machines, see [Overview of SQL Server on Azure Windows Virtual Machines](/azure/azure-sql/virtual-machines/windows/sql-server-on-azure-vm-iaas-what-is-overview).
-* For information about connecting apps to SQL Server on Azure Virtual Machines, see [Connect applications](/azure/azure-sql/virtual-machines/windows/ways-to-connect-to-sql).
+- Complete a quickstart to [migrate a database to SQL Server on Azure Virtual Machines by using the T-SQL RESTORE command](/azure/azure-sql/migration-guides/virtual-machines/sql-server-to-sql-on-azure-vm-individual-databases-guide).
+- Learn more about [SQL Server on Azure Windows Virtual Machines](/azure/azure-sql/virtual-machines/windows/sql-server-on-azure-vm-iaas-what-is-overview).
+- Learn how to [connect apps to SQL Server on Azure Virtual Machines](/azure/azure-sql/virtual-machines/windows/ways-to-connect-to-sql).
dms Tutorial Sql Server To Virtual Machine Online Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-to-virtual-machine-online-ads.md
To complete this tutorial, you need to:
> [!TIP] > If your database backup files are already provided in an Azure storage account, self-hosted integration runtime is not required during the migration process.
-* Runtime is installed on the machine using self-hosted integration runtime. The machine will connect to the source SQL Server instance and the network file share where backup files are located. Outbound port 445 should be enabled to allow access to the network file share. Also see [recommendations for using self-hosted integration runtime](migration-using-azure-data-studio.md#recommendations-for-using-self-hosted-integration-runtime-for-database-migrations)
+* Runtime is installed on the machine using self-hosted integration runtime. The machine will connect to the source SQL Server instance and the network file share where backup files are located. Outbound port 445 should be enabled to allow access to the network file share. Also see [recommendations for using self-hosted integration runtime](migration-using-azure-data-studio.md#recommendations-for-using-a-self-hosted-integration-runtime-for-database-migrations)
* If you're using the Azure Database Migration Service for the first time, ensure that Microsoft.DataMigration resource provider is registered in your subscription. You can follow the steps to [register the resource provider](quickstart-create-data-migration-service-portal.md#register-the-resource-provider) ## Launch the Migrate to Azure SQL wizard in Azure Data Studio
dms Tutorial Sql Server To Virtual Machine Online Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dotnet-develop-multitenant-applications.md
A properly implemented multitenant application provides the following benefits t
In short, while there are many considerations that you must take into account to provide a highly scalable service, there are also multiple goals and requirements that are common to many multitenant applications. Some may not be relevant in specific scenarios, and the importance of individual goals and requirements will differ in each scenario. As a provider of the multitenant application, you'll also have goals and requirements, such as meeting the tenant's needs, profitability, billing, multiple service levels, provisioning, maintainability, monitoring, and automation.
-For more information on additional design considerations of a multitenant application, see [Hosting a Multi-Tenant Application on Azure][Hosting a Multi-Tenant Application on Azure]. For information on common data architecture patterns of multi-tenant software-as-a-service (SaaS) database applications, see [Design Patterns for Multi-tenant SaaS Applications with Azure SQL Database](/azure/azure-sql/database/saas-tenancy-app-design-patterns).
+For more information on additional design considerations of a multitenant application, see [Hosting a Multi-Tenant Application on Azure][Hosting a Multi-Tenant Application on Azure] and [Cross-Tenant Communication using Azure Service Bus Sample][Cross-Tenant Communication using Azure Service Bus Sample]. For information on common data architecture patterns of multi-tenant software-as-a-service (SaaS) database applications, see [Design Patterns for Multi-tenant SaaS Applications with Azure SQL Database](/azure/azure-sql/database/saas-tenancy-app-design-patterns).
+## Cross-Tenant Communication using Azure Service Bus Sample
+The [Cross-Tenant Communication using Azure Service Bus Sample][Cross-Tenant Communication using Azure Service Bus Sample] demonstrates a multi-tenanted solution handling cross-tenant communication between a provider and one or more of its customers using Service Bus message queues. The provider communicates securely with each of its customers, and each customer communicates securely with the provider. To download the complete sample with instructions, see [Cross-Tenant Communication using Azure Service Bus Sample][Cross-Tenant Communication using Azure Service Bus Sample].
+
+## Azure features for multitenant applications
Azure provides many features that allow you to address the key problems encountered when designing a multitenant system. **Isolation**
Azure provides a number of ways to provision new tenants for the application. Fo
[Hosting a Multi-Tenant Application on Azure]: /previous-versions/msp-n-p/hh534480(v=pandp.10) [Designing Multitenant Applications on Azure]: https://msdn.microsoft.com/library/windowsazure/hh689716
+[Cross-Tenant Communication using Azure Service Bus Sample]: https://github.com/Azure-Samples/Cross-Tenant-Communication-Using-Azure-Service-Bus
expressroute Expressroute Locations Providers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-locations-providers.md
Previously updated : 05/24/2022 Last updated : 11/10/2022
To learn more, see [ExpressRoute in China](http://www.windowsazure.cn/home/featu
** ExpressRoute Local is not available in this location.
-### Germany
-| **Location** | **Service providers** |
-| | |
-| **Berlin** |e-shelter, Megaport+, T-Systems |
-| **Frankfurt** |Colt, Equinix, Interxion |
- ## <a name="c1partners"></a>Connectivity through Exchange providers If your connectivity provider is not listed in previous sections, you can still create a connection.
expressroute Expressroute Locations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-locations.md
The following table shows locations by service provider. If you want to view ava
| **Etisalat UAE** |Supported |Supported | Dubai | | **[euNetworks](https://eunetworks.com/services/solutions/cloud-connect/microsoft-azure-expressroute/)** |Supported |Supported | Amsterdam, Amsterdam2, Dublin, Frankfurt, London | | **[FarEasTone](https://www.fetnet.net/corporate/en/Enterprise.html)** |Supported |Supported | Taipei |
-| **[Fastweb](https://www.fastweb.it/grandi-aziende/cloud/scheda-prodotto/fastcloud-interconnect/)** | Supported |Supported | Milan |
+| **[Fastweb](https://www.fastweb.it/grandi-aziende/dati-voce/scheda-prodotto/fast-company/)** | Supported |Supported | Milan |
| **[Fibrenoire](https://fibrenoire.ca/en/services/cloudextn-2/)** |Supported |Supported | Montreal, Quebec City, Toronto2 | | **[GBI](https://www.gbiinc.com/microsoft-azure/)** |Supported |Supported | Dubai2, Frankfurt | | **[GÉANT](https://www.geant.org/Networks)** |Supported |Supported | Amsterdam, Amsterdam2, Dublin, Frankfurt, Marseille |
The following table shows locations by service provider. If you want to view ava
| **[Telus](https://www.telus.com)** |Supported |Supported | Montreal, Quebec City, Seattle, Toronto, Vancouver | | **[Teraco](https://www.teraco.co.za/services/africa-cloud-exchange/)** |Supported |Supported | Cape Town, Johannesburg | | **[TIME dotCom](https://www.time.com.my/enterprise/connectivity/direct-cloud)** | Supported | Supported | Kuala Lumpur |
-| **[Tivit](https://www.tivit.com/cloud-solutions/public-cloud/public-cloud-azure/)** |Supported |Supported | Sao Paulo2 |
+| **[Tivit](https://tivit.com/solucoes/public-cloud/)** |Supported |Supported | Sao Paulo2 |
| **[Tokai Communications](https://www.tokai-com.co.jp/en/)** | Supported | Supported | Osaka, Tokyo2 | | **TPG Telecom**| Supported | Supported | Melbourne, Sydney | | **[Transtelco](https://transtelco.net/enterprise-services/)** |Supported |Supported | Dallas, Queretaro(Mexico)|
If you are remote and do not have fiber connectivity or you want to explore othe
| **[Epsilon Telecommunications Limited](https://www.epsilontel.com/solutions/cloud-connect/)** | Equinix | London, Singapore, Washington DC | | **[Eurofiber](https://eurofiber.nl/microsoft-azure/)** | Equinix | Amsterdam | | **[Exponential E](https://www.exponential-e.com/services/connectivity-services/)** | Equinix | London |
-| **[Fastweb S.p.A](https://www.fastweb.it/grandi-aziende/connessione-voce-e-wifi/scheda-prodotto/rete-privata-virtuale/)** | Equinix | Amsterdam |
+| **[Fastweb S.p.A](https://www.fastweb.it/grandi-aziende/dati-voce/scheda-prodotto/fast-company/)** | Equinix | Amsterdam |
| **[Fibrenoire](https://www.fibrenoire.ca/en/cloudextn)** | Megaport | Quebec City | | **[FPT Telecom International](https://cloudconnect.vn/en)** |Equinix |Singapore| | **[Gtt Communications Inc](https://www.gtt.net)** |Equinix | Washington DC |
If you are remote and do not have fiber connectivity or you want to explore othe
| **[QSC AG](https://www2.qbeyond.de/en/)** |Interxion | Frankfurt | | **[RETN](https://retn.net/products/cloud-connect)** | Equinix | Amsterdam | | **Rogers** | Cologix, Equinix | Montreal, Toronto |
-| **[Spectrum Enterprise](https://enterprise.spectrum.com/services/cloud/cloud-connect.html)** | Equinix | Chicago, Dallas, Los Angeles, New York, Silicon Valley |
+| **[Spectrum Enterprise](https://enterprise.spectrum.com/services/internet-networking/wan/cloud-connect.html)** | Equinix | Chicago, Dallas, Los Angeles, New York, Silicon Valley |
| **[Tamares Telecom](http://www.tamarestelecom.com/our-services/#Connectivity)** | Equinix | London | | **[Tata Teleservices](https://www.tatatelebusiness.com/data-services/ez-cloud-connect/)** | Tata Communications | Chennai, Mumbai | | **[TDC Erhverv](https://tdc.dk/Produkter/cloudaccessplus)** | Equinix | Amsterdam |
If you are remote and do not have fiber connectivity or you want to explore othe
| **[Telekom Deutschland GmbH](https://cloud.telekom.de/de/infrastruktur/managed-it-services/managed-hybrid-infrastructure-mit-microsoft-azure)** | Interxion | Amsterdam, Frankfurt | | **[Telia](https://www.telia.se/foretag/losningar/produkter-tjanster/datanet)** | Equinix | Amsterdam | | **[ThinkTel](https://www.thinktel.ca/services/agile-ix-data/expressroute/)** | Equinix | Toronto |
-| **[United Information Highway (UIH)](https://www.uih.co.th/en/internet-solution/cloud-direct/uih-cloud-direct-for-microsoft-azure-expressroute)**| Equinix | Singapore |
+| **[United Information Highway (UIH)](https://www.uih.co.th/en/network-solutions/global-network/cloud-direct-for-microsoft-azure-expressroute)**| Equinix | Singapore |
| **[Venha Pra Nuvem](https://venhapranuvem.com.br/)** | Equinix | Sao Paulo | | **[Webair](https://opti9tech.com/partners/)**| Megaport | New York | | **[Windstream](https://www.windstreamenterprise.com/solutions/)**| Equinix | Chicago, Silicon Valley, Washington DC |
firewall-manager Policy Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall-manager/policy-overview.md
Policies can be associated with one or more virtual hubs or VNets. The firewall
## Classic rules and policies
-Azure Firewall supports both Classic rules and policies, but policies is the recommenced configuration. The following table compares policies and classic rules:
+Azure Firewall supports both Classic rules and policies, but policies are the recommended configuration. The following table compares policies and classic rules:
| Subject | Policy | Classic rules |
frontdoor Migrate Tier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/migrate-tier.md
# Migrate Azure Front Door (classic) to Standard/Premium tier using the Azure portal (Preview)
+> [!NOTE]
+> Migration capability for Azure Front Door is currently in Public Preview without an SLA and isn't recommended for production environments.
+ Azure Front Door Standard and Premium tier bring the latest cloud delivery network features to Azure. With enhanced security features and an all-in-one service, your application content is secured and closer to your end users with the Microsoft global network. This article will guide you through the migration process to migrate your Front Door (classic) profile to either a Standard or Premium tier profile to begin using these latest features. ## Prerequisites
frontdoor Tier Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/tier-migration.md
# About Azure Front Door (classic) to Standard/Premium tier migration (Preview)
+> [!NOTE]
+> Migration capability for Azure Front Door is currently in Public Preview without an SLA and isn't recommended for production environments.
+ Azure Front Door Standard and Premium tiers were released in March 2022 as the next generation content delivery network service. The newer tiers combine the capabilities of Azure Front Door (classic), Microsoft CDN (classic), and Web Application Firewall (WAF). With features such as Private Link integration, an enhanced rules engine, and advanced diagnostics, you can secure and accelerate your web applications to bring a better experience to your customers. Azure recommends migrating to the newer tiers to benefit from the new features and improvements over the Classic tier. To help with the migration process, Azure Front Door provides a zero-downtime migration to migrate your workload from Azure Front Door (classic) to either Standard or Premium tier.
governance Paginate Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/paginate-powershell.md
+
+ Title: 'Paginate Azure Resource Graph query results using Azure PowerShell'
+description: In this quickstart, you control the volume of Azure Resource Graph query output by using pagination in Azure PowerShell.
Last updated : 11/11/2022+++++
+# Quickstart: Paginate Azure Resource Graph query results using Azure PowerShell
+
+By default, Azure Resource Graph returns a maximum of 1000 records for each query. However, you can
+use the *Search-AzGraph* cmdlet's `First` and `SkipToken` parameters to control how many records are
+returned per request and to page through the full result set.
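+
+As a minimal sketch of the idea (assuming the module described below is already installed and you've signed in with `Connect-AzAccount`):
+
+```powershell
+# Return only the first 100 records for this query
+$page1 = Search-AzGraph -Query "Resources | project name, type" -First 100
+
+# Pass the skip token from the first page to fetch the next 100 records
+$page2 = Search-AzGraph -Query "Resources | project name, type" -First 100 -SkipToken $page1.SkipToken
+```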
+
+At the end of this quickstart, you'll be able to customize the output volume returned by your Azure Resource
+Graph queries by using Azure PowerShell.
+
+## Prerequisites
+
+If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account
+before you begin.
+
+## Add the Resource Graph module
+
+To enable Azure PowerShell to query Azure Resource Graph, the **Az.ResourceGraph** module must be
+added. This module can be used with locally installed PowerShell, with
+[Azure Cloud Shell](https://shell.azure.com), or with the
+[PowerShell Docker image](https://hub.docker.com/_/microsoft-powershell).
+
+### Base requirements
+
+The Azure Resource Graph module requires the following software:
+
+- Azure PowerShell 8.x or higher. If it isn't yet installed, follow
+ [these instructions](/powershell/azure/install-az-ps).
+
+- PowerShellGet 2.0.1 or higher. If it isn't installed or updated, follow
+ [these instructions](/powershell/scripting/gallery/installing-psget).
+
+### Install the module
+
+The Resource Graph module for PowerShell is **Az.ResourceGraph**.
+
+1. From a PowerShell prompt, run the following command:
+
+ ```powershell
+ # Install the Resource Graph module from PowerShell Gallery
+ Install-Module -Name Az.ResourceGraph -Scope CurrentUser -Repository PSGallery -Force
+ ```
+
+1. Validate that the module has been imported and is at least version `0.11.0`:
+
+ ```powershell
+ # Get a list of commands for the imported Az.ResourceGraph module
+ Get-Command -Module Az.ResourceGraph
+ ```
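+
+    If you also want to confirm which version is installed (a quick check, assuming the module was installed from the PowerShell Gallery as shown above):
+
+    ```powershell
+    # Show the name and version of the installed Az.ResourceGraph module
+    Get-InstalledModule -Name Az.ResourceGraph | Select-Object Name, Version
+    ```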
+
+## Paginate Azure Resource Graph query results
+
+With the Azure PowerShell module added to your environment of choice, it's time to try out a simple
+tenant-based Resource Graph query and work with paginating the results. We'll start with a Resource Graph
+query that returns a list of all virtual machines (VMs) across all subscriptions associated with a
+given Azure Active Directory (Azure AD) tenant.
+
+We'll then configure the query to return five records (VMs) at a time.
+
+> [!NOTE]
+> This example query is adapted from the work of Microsoft Most Valuable Professional (MVP)
+> [Oliver Mossec](https://github.com/omiossec).
+
+1. Run the initial Azure Resource Graph query using the `Search-AzGraph` cmdlet:
+
+ ```powershell
+ # Login first with Connect-AzAccount if not using Cloud Shell
+
+    # Run Azure Resource Graph query
+    Search-AzGraph -Query "Resources | join kind=leftouter (ResourceContainers | where type=='microsoft.resources/subscriptions' | project subscriptionName = name, subscriptionId) on subscriptionId | where type =~ 'Microsoft.Compute/virtualMachines' | project VMResourceId = id, subscriptionName, resourceGroup, name"
+ ```
+
+1. Update the query to implement the `skipToken` parameter and return 5 VMs in each batch:
+
+ ```powershell
+ $kqlQuery = "Resources | join kind=leftouter (ResourceContainers | where
+ type=='microsoft.resources/subscriptions' | project subscriptionName = name,subscriptionId) on
+ subscriptionId | where type =~ 'Microsoft.Compute/virtualMachines' | project VMResourceId = id,
+ subscriptionName, resourceGroup,name"
+
+ $batchSize = 5
+ $skipResult = 0
+
+    # Collect the results from every batch
+    $kqlResult = @()
+
+ while ($true) {
+
+ if ($skipResult -gt 0) {
+ $graphResult = Search-AzGraph -Query $kqlQuery -First $batchSize -SkipToken $graphResult.SkipToken
+ }
+ else {
+ $graphResult = Search-AzGraph -Query $kqlQuery -First $batchSize
+ }
+
+ $kqlResult += $graphResult.data
+
+ if ($graphResult.data.Count -lt $batchSize) {
+ break;
+ }
+    $skipResult += $batchSize
+ }
+ ```
+
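+    After the loop finishes, a quick sanity check on what was collected (this sketch assumes the variables from the previous example):
+
+    ```powershell
+    # Total number of records collected across all batches
+    $kqlResult.Count
+
+    # Peek at the first few records
+    $kqlResult | Select-Object -First 5
+    ```
+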
+## Clean up resources
+
+If you wish to remove the Resource Graph module from your Azure PowerShell environment, you can do
+so by using the following commands:
+
+```powershell
+# Remove the Resource Graph module from the current session
+Remove-Module -Name Az.ResourceGraph
+
+# Uninstall the Resource Graph module from your computer
+Uninstall-Module -Name Az.ResourceGraph
+```
+
+## Next steps
+
+In this quickstart, you learned how to paginate Azure Resource Graph query results by using
+Azure PowerShell. To learn more about the Resource Graph language, review any of the following
+Microsoft Learn resources.
+
+- [Work with large data sets - Azure Resource Graph](concepts/work-with-data.md)
+- [Az.ResourceGraph PowerShell module reference](/powershell/module/az.resourcegraph)
+- [What is Azure Resource Graph?](overview.md)
healthcare-apis Use Smart On Fhir Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/use-smart-on-fhir-proxy.md
Last updated 06/03/2022
-# Tutorial: Azure Active Directory SMART on FHIR proxy
+# SMART on FHIR
[SMART on FHIR](https://docs.smarthealthit.org/) is a set of open specifications to integrate partner applications with FHIR servers and electronic medical records systems that have Fast Healthcare Interoperability Resources (FHIR&#174;) interfaces. One of the main purposes of the specifications is to describe how an application should discover authentication endpoints for a FHIR server and start an authentication sequence. Authentication is based on OAuth2. But because SMART on FHIR uses parameter naming conventions that aren't immediately compatible with Azure Active Directory (Azure AD), the Azure API for FHIR has a built-in Azure AD SMART on FHIR proxy that enables a subset of the SMART on FHIR launch sequences. Specifically, the proxy enables the [EHR launch sequence](https://hl7.org/fhir/smart-app-launch/#ehr-launch-sequence).
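+
+As a rough, hedged sketch of the discovery step described above (the endpoint URL is a placeholder, and the JSON shape follows the general SMART conformance convention rather than anything specific to this article):
+
+```powershell
+# Sketch only: read the FHIR capability statement and pull out the OAuth2
+# endpoints a SMART on FHIR app would use. Replace the URL with your own service.
+$fhirUrl = "https://<your-fhir-service>.azurehealthcareapis.com"
+$capability = Invoke-RestMethod -Uri "$fhirUrl/metadata"
+
+# SMART-enabled servers advertise authorize/token endpoints in an oauth-uris extension
+$oauthUris = $capability.rest[0].security.extension |
+    Where-Object { $_.url -eq 'http://fhir-registry.smarthealthit.org/StructureDefinition/oauth-uris' }
+
+$oauthUris.extension | Select-Object url, valueUri
+```
+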
-This tutorial describes how to use the proxy to enable SMART on FHIR applications with Azure API for FHIR.
+The following tutorial describes how to use the proxy to enable SMART on FHIR applications with Azure API for FHIR.
-## Prerequisites
+## Tutorial: SMART on FHIR proxy
+**Prerequisites**
- An instance of the Azure API for FHIR - [.NET Core 2.2](https://dotnet.microsoft.com/download/dotnet-core/2.2)
iot-central Tutorial Smart Meter App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/energy/tutorial-smart-meter-app.md
App's key functionalities:
- Built-in visualization and dashboards - Extensibility for custom solution development - This architecture consists of the following components. Some solutions may not require every component listed here.
An active Azure subscription. If you don't have an Azure subscription, create a
1. Navigate to the [Azure IoT Central Build](https://aka.ms/iotcentral) site. Then sign in with a Microsoft personal, work, or school account. Select **Build** from the left-hand navigation bar and then select the **Energy** tab:
- :::image type="content" source="media/tutorial-iot-central-smart-meter/smart-meter-build.png" alt-text="Smart meter template":::
+ :::image type="content" source="media/tutorial-iot-central-smart-meter/smart-meter-build.png" alt-text="Screenshot showing the Azure IoT Central build site with the energy app templates.":::
1. Select **Create app** under **Smart meter monitoring**.
Adatum is a fictitious energy company, who monitors and manages smart meters. On
* Track the total energy consumption for planning and billing purposes. * Command and control operations such as reconnect meter and update firmware version. In the template, the command buttons show the possible functionalities and don't send real commands. ### Devices The app comes with a sample smart meter device. You can see the device details by clicking on the **Devices** tab. Click on the sample device **SM0123456789** link to see the device details. You can update the writable properties of the device on the **Update Properties** page, and visualize the updated values on the dashboard. ### Device Template Click on the **Device templates** tab to see the smart meter device model. The model has pre-define interface for Data, Property, Commands, and Views.
-## Clean up resources
+## Customize your application
-If you decide to not continue using this application, delete your application with the following these steps:
-1. From the left pane, open the **Application** tab.
-1. Select **Management** and then the **Delete** button.
+## Clean up resources
- :::image type="content" source="media/tutorial-iot-central-smart-meter/smart-meter-delete-app.png" alt-text="Delete application.":::
## Next steps
iot-central Tutorial Solar Panel App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/energy/tutorial-solar-panel-app.md
Last updated 06/14/2022
- # Tutorial: Deploy and walk through the solar panel monitoring application template
Key application functionality:
- Built-in visualization and dashboards - Extensibility for custom solution development This architecture consists of the following components. Some applications may not require every component listed here.
An active Azure subscription. If you don't have an Azure subscription, create a
1. Navigate to the [Azure IoT Central Build](https://aka.ms/iotcentral) site. Then sign in with a Microsoft personal, work, or school account. Select **Build** from the left-hand navigation bar and then select the **Energy** tab:
- :::image type="content" source="media/tutorial-iot-central-solar-panel/solar-panel-build.png" alt-text="Smart meter template":::
+ :::image type="content" source="media/tutorial-iot-central-solar-panel/solar-panel-build.png" alt-text="Screenshot showing the Azure IoT Central build site with the energy app templates.":::
1. Select **Create app** under **Solar panel monitoring**.
Adatum is a fictitious energy company that monitors and manages solar panels. On
The app comes with a sample solar panel device. To see device details, select **Devices**.
-Select the sample device, **SP0123456789**. From the **Update Properties** tab, you can update the writable properties of the device and see a visual of the updated values on the dashboard.
+Select the sample device, **SP0123456789**. From the **Update Properties** tab, you can update the writable properties of the device and see a visual of the updated values on the dashboard.
### Device template To see the solar panel device model, select the **Device templates** tab. The model has predefined interfaces for data, properties, commands, and views.
-## Clean up resources
+## Customize your application
-If you decide not to continue using this application, delete your application with the following steps:
-1. From the left pane, select **Application**.
-1. Select **Management** > **Delete**.
+## Clean up resources
- :::image type="content" source="media/tutorial-iot-central-solar-panel/solar-panel-delete-app.png" alt-text="Screenshot of Solar Panel Monitoring Template Administration.":::
iot-central Tutorial Connected Waste Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/government/tutorial-connected-waste-management.md
The _connected waste management_ application template helps you kickstart your IoT solution development to enable smart cities to remotely monitor to maximize efficient waste collection. ### Devices and connectivity (1,2)
An active Azure subscription. If you don't have an Azure subscription, create a
## Create connected waste management application 1. Navigate to the [Azure IoT Central Build](https://aka.ms/iotcentral) site. Then sign in with a Microsoft personal, work, or school account. Select **Build** from the left-hand navigation bar and then select the **Government** tab:
- :::image type="content" source="media/tutorial-connectedwastemanagement/iot-central-government-tab-overview.png" alt-text="Connected waste management template":::
+
+ :::image type="content" source="media/tutorial-connected-waste-management/iot-central-government-tab-overview.png" alt-text="Screenshot showing the Azure IoT Central build site with the government app templates.":::
1. Select **Create app** under **Connected waste management**.
To learn more, see [Create an IoT Central application](../core/howto-create-iot-
The following sections walk you through the key features of the application:
-### Dashboard
+### Dashboard
After you deploy the application template, your default dashboard is **Wide World waste management dashboard**. -
-As a builder, you can create and customize views on the dashboard for operators. First, let's explore the dashboard.
+As a builder, you can create and customize views on the dashboard for operators. First, let's explore the dashboard.
>[!NOTE]
->All data shown in the dashboard is based on simulated device data, which you'll see more of in the next section.
+>All data shown in the dashboard is based on simulated device data, which you'll see more of in the next section.
The dashboard consists of different tiles:
-* **Wide World Waste utility image tile**: The first tile in the dashboard is an image tile of a fictitious waste utility, "Wide World Waste." You can customize the tile and put in your own image, or you can remove it.
+* **Wide World Waste utility image tile**: The first tile in the dashboard is an image tile of a fictitious waste utility, "Wide World Waste." You can customize the tile and put in your own image, or you can remove it.
-* **Waste bin image tile**: You can use image and content tiles to create a visual representation of the device that's being monitored, along with a description.
+* **Waste bin image tile**: You can use image and content tiles to create a visual representation of the device that's being monitored, along with a description.
-* **Fill level KPI tile**: This tile displays a value reported by a *fill level* sensor in a waste bin. Fill level and other sensors, like *odor meter* or *weight* in a waste bin, can be remotely monitored. An operator can take action, like dispatching a trash collection truck.
+* **Fill level KPI tile**: This tile displays a value reported by a *fill level* sensor in a waste bin. Fill level and other sensors, like *odor meter* or *weight* in a waste bin, can be remotely monitored. An operator can take action, like dispatching a trash collection truck.
* **Waste monitoring area map**: This tile uses Azure Maps, which you can configure directly in Azure IoT Central. The map tile displays device [location](../core/howto-use-location-data.md). Try to hover over the map and try the controls over the map, like zoom-in, zoom-out, or expand. * **Fill, odor, weight level bar chart**: You can visualize one or multiple kinds of device telemetry data in a bar chart. You can also expand the bar chart.
- :::image type="content" source="media/tutorial-connectedwastemanagement/connected-waste-management-dashboard-bar-chart.png" alt-text="Screenshot of Connected Waste Management Template Dashboard bar chart..":::
--
-* **Field Services**: The dashboard includes a link to how to integrate with Dynamics 365 Field Services from your Azure IoT Central application. For example, you can use Field Services to create tickets for dispatching trash collection services.
+ :::image type="content" source="media/tutorial-connected-waste-management/connected-waste-management-dashboard-bar-chart.png" alt-text="Screenshot of the expanded bar chart on the connected waste management application dashboard." lightbox="media/tutorial-connected-waste-management/connected-waste-management-dashboard-bar-chart.png":::
-### Customize the dashboard
+* **Field Services**: The dashboard includes a link to how to integrate with Dynamics 365 Field Services from your Azure IoT Central application. For example, you can use Field Services to create tickets for dispatching trash collection services.
-You can customize the dashboard by selecting the **Edit** menu. Then you can add new tiles or configure existing ones. Here's what the dashboard looks like in editing mode:
+### Customize the dashboard
+You can customize the dashboard by selecting the **Edit** menu. Then you can add new tiles or configure existing ones. Here's what the dashboard looks like in editing mode:
-You can also select **+ New** to create a new dashboard and configure from scratch. You can have multiple dashboards, and you can switch between your dashboards from the dashboard menu.
+You can also select **+ New** to create a new dashboard and configure from scratch. You can have multiple dashboards, and you can switch between your dashboards from the dashboard menu.
### Explore the device template
-A device template in Azure IoT Central defines the capabilities of a device, which can include telemetry, properties, or commands. As a builder, you can define device templates that represent the capabilities of the devices you will connect.
+A device template in Azure IoT Central defines the capabilities of a device, which can include telemetry, properties, or commands. As a builder, you can define device templates that represent the capabilities of the devices you will connect.
-The Connected waste management application comes with a sample template for a connected waste bin device.
+The connected waste management application comes with a sample template for a connected waste bin device.
To view the device template:
-1. In Azure IoT Central, from the left pane of your app, select **Device templates**.
+1. In Azure IoT Central, from the left pane of your app, select **Device templates**.
1. In the **Device templates** list, select **Connected Waste Bin**. 1. Examine the device template capabilities. You can see that it defines sensors like **Fill level**, **Odor meter**, **Weight**, and **Location**.
- :::image type="content" source="media/tutorial-connectedwastemanagement/connected-waste-management-device-template-connected-bin.png" alt-text="Screenshot showing the details of the Connected Waste Bin device template..":::
-
+ :::image type="content" source="media/tutorial-connected-waste-management/connected-waste-management-device-template-connected-bin.png" alt-text="Screenshot of the connected waste management device template." lightbox="media/tutorial-connected-waste-management/connected-waste-management-device-template-connected-bin.png":::
### Customize the device template
Try to customize the following features:
1. Find the **Odor meter** telemetry type. 1. Update the **Display name** of **Odor meter** to **Odor level**. 1. Try to update the unit of measurement, or set **Min value** and **Max value**.
-1. Select **Save**.
+1. Select **Save**.
-### Add a cloud property
+### Add a cloud property
Here's how: 1. From the device template menu, select **Cloud property**. 1. Select **+ Add Cloud Property**. In Azure IoT Central, you can add a property that is relevant to the device but isn't expected to be sent by a device. For example, a cloud property might be an alerting threshold specific to installation area, asset information, or maintenance information.
-1. Select **Save**.
-
-### Views
+1. Select **Save**.
-The connected waste bin device template comes with predefined views. Explore the views, and update them if you want to. The views define how operators see the device data and input cloud properties.
+### Views
+The connected waste bin device template comes with predefined views. Explore the views, and update them if you want to. The views define how operators see the device data and input cloud properties.
-### Publish
+### Publish
-If you made any changes, remember to publish the device template.
+If you made any changes, remember to publish the device template.
-### Create a new device template
+### Create a new device template
-To create a new device template, select **+ New**, and follow the steps. You can create a custom device template from scratch, or you can choose a device template from the Azure device catalog.
+To create a new device template, select **+ New**, and follow the steps. You can create a custom device template from scratch, or you can choose a device template from the Azure device catalog.
### Explore simulated devices
-In Azure IoT Central, you can create simulated devices to test your device template and application.
+In Azure IoT Central, you can create simulated devices to test your device template and application.
-The Connected waste management application has two simulated devices associated with the connected waste bin device template.
+The connected waste management application has two simulated devices associated with the connected waste bin device template.
### View the devices
The Connected waste management application has two simulated devices associated
1. Select **Connected Waste Bin** device.
- :::image type="content" source="media/tutorial-connectedwastemanagement/connected-waste-management-devices-bin-1.png" alt-text="Screenshot of Connected Waste Management Template Device Properties.":::
+ :::image type="content" source="media/tutorial-connected-waste-management/connected-waste-management-devices-bin.png" alt-text="Screenshot of the connected waste management application devices page." lightbox="media/tutorial-connected-waste-management/connected-waste-management-devices-bin.png":::
-Explore the **Device Properties** and **Device Dashboard** tabs.
+Explore the **Device Properties** and **Device Dashboard** tabs.
> [!NOTE] > All the tabs have been configured from the device template views. ### Add new devices
-You can add new devices by selecting **+ New** on the **Devices** tab.
+You can add new devices by selecting **+ New** on the **Devices** tab.
## Explore and configure rules In Azure IoT Central, you can create rules to automatically monitor device telemetry, and to trigger actions when one or more conditions are met. The actions might include sending email notifications, triggering an action in Power Automate, or starting a webhook action to send data to other services.
-The Connected waste management application has four sample rules.
+The connected waste management application has four sample rules.
### View rules
The Connected waste management application has four sample rules.
1. Select **Bin full alert**.
- :::image type="content" source="media/tutorial-connectedwastemanagement/connected-waste-management-bin-full-alert.png" alt-text="Screenshot of Bin full alert.":::
-
+ :::image type="content" source="media/tutorial-connected-waste-management/connected-waste-management-bin-full-alert.png" alt-text="Screenshot of the connected waste management application bin full rule." lightbox="media/tutorial-connected-waste-management/connected-waste-management-bin-full-alert.png":::
- 1. The **Bin full alert** checks the following condition: **Fill level is greater than or equal to Bin full alert threshold**.
+1. The **Bin full alert** checks the following condition: **Fill level is greater than or equal to Bin full alert threshold**.
- The **Bin full alert threshold** is a cloud property that's defined in the connected waste bin device template.
+ The **Bin full alert threshold** is a cloud property that's defined in the connected waste bin device template.
-Now let's create an email action.
+Now create an email action.
### Create an email action In the **Actions** list of the rule, you can configure an email action:
-1. Select **+ Email**.
+
+1. Select **+ Email**.
1. For **Display name**, enter **High pH alert**.
-1. For **To**, enter the email address associated with your Azure IoT Central account.
+1. For **To**, enter the email address associated with your Azure IoT Central account.
1. Optionally, enter a note to include in the text of the email.
-1. Select **Done** > **Save**.
+1. Select **Done** > **Save**.
You'll now receive an email when the configured condition is met. >[!NOTE]
->The application sends email each time a condition is met. Disable the rule to stop receiving email from the automated rule.
+>The application sends email each time a condition is met. Disable the rule to stop receiving email from the automated rule.
To create a new rule, from the left pane of **Rules**, select **+New**. ## Configure jobs
-In Azure IoT Central, jobs allow you to trigger device or cloud properties updates on multiple devices. You can also use jobs to trigger device commands on multiple devices. Azure IoT Central automates the workflow for you.
+In Azure IoT Central, jobs allow you to trigger device or cloud properties updates on multiple devices. You can also use jobs to trigger device commands on multiple devices. Azure IoT Central automates the workflow for you.
-1. From the left pane of Azure IoT Central, select **Jobs**.
-1. Select **+New**, and configure one or more jobs.
+1. From the left pane of Azure IoT Central, select **Jobs**.
+1. Select **+New**, and configure one or more jobs.
-## Customize your application
+## Customize your application
-As a builder, you can change several settings to customize the user experience in your application.
-
-### Change the application theme
-
-Here's how:
-1. Go to **Application** > **Management**.
-1. Select **Change** to choose an image to upload for the **Application logo**.
-1. Select **Change** to choose an image to upload for the **Browser icon** (an image that will appear on browser tabs).
-1. You can also replace the default browser colors by adding HTML hexadecimal color codes. Use the **Header** and **Accent** fields for this purpose.
-
- :::image type="content" source="media/tutorial-connectedwastemanagement/connected-waste-management-customize-your-application.png" alt-text="Screenshot of Connected Waste Management Template Customize your application.":::
--
-1. You can also change application images. Select **Application** > **Management** > **Select image** to choose an image to upload as the application image.
-1. Finally, you can also change the theme by selecting **Settings** on the masthead of the application.
## Clean up resources
-If you're not going to continue to use this application, delete your application with the following steps:
-
-1. From the left pane of your Azure IoT Central app, select **Application**.
-1. Select **Management > Delete**.
## Next steps
iot-central Tutorial Water Consumption Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/government/tutorial-water-consumption-monitoring.md
Last updated 06/16/2022
- # Tutorial: Deploy and walk through the water consumption monitoring application
Traditional water consumption tracking relies on water operators manually readin
The _water consumption monitoring_ application template helps you kickstart your IoT solution development to enable water utilities and cities to remotely monitor and control water flow to reduce consumption.
-![Water consumption monitoring architecture](./media/tutorial-waterconsumptionmonitoring/concepts-waterconsumptionmonitoring-architecture1.png)
### Devices and connectivity (1,2)
The following sections walk you through the key features of the application:
After you create the application, the sample **Wide World water consumption dashboard** opens. - You can create and customize views on the dashboard for operators.
To view the device template:
1. Select the **Flow meter** device template, and familiarize yourself with the device capabilities.
- ![Device template Flow meter](./media/tutorial-waterconsumptionmonitoring/water-consumption-monitoring-device-template-flow-meter.png)
### Customize the device template
To customize the device template:
1. Update the unit of measurement, or set the **Min value** and **Max value**.
1. Select **Save** to save any changes.
- ![Customize the device template.](./media/tutorial-waterconsumptionmonitoring/water-consumption-monitoring-device-template-customize.png)
### Add a cloud property
To learn more, see [Cloud properties](../core/concepts-device-templates.md#cloud
The water consumption monitor device template comes with predefined views. The views define how operators see the device data, and set the values of cloud properties.
- ![Device template views](./media/tutorial-waterconsumptionmonitoring/water-consumption-monitoring-device-template-views.png)
- To learn more, see [Views](../core/concepts-device-templates.md#views).

### Publish the device template
In Azure IoT Central, you can create simulated devices to test your device templ
1. Select **Smart Valve 1**.
- :::image type="content" source="media/tutorial-waterconsumptionmonitoring/water-consumption-monitor-device-1.png" alt-text="Smart Valve 1":::
+1. On the **Commands** tab, you can see the three device commands (**Close valve**, **Open valve**, and **Set valve position**) that are defined in the **Smart Valve** device template.
-1. On the **Commands** tab, you can see the three device commands (**Close valve**, **Open valve**, and **Set valve position**) that are capabilities defined in the **Smart Valve** device template.
+ :::image type="content" source="media/tutorial-waterconsumptionmonitoring/water-consumption-monitor-device-1.png" alt-text="Screenshot showing the water consumption monitoring application smart valve device." lightbox="media/tutorial-waterconsumptionmonitoring/water-consumption-monitor-device-1.png":::
1. Explore the **Device Properties** tab and the **Device Dashboard** tab.

> [!NOTE]
-> The views you see on this page are configured using the **Device Template > Views** page.
+> The views you see on this page are configured using the **Device Template > Views** page.
### Add new devices
The water consumption monitoring application you created has three preconfigured
1. Select **High water flow alert**, which is one of the preconfigured rules in the application.
- :::image type="content" source="media/tutorial-waterconsumptionmonitoring/water-consumption-monitoring-high-flow-alert.png" alt-text="High pH alert":::
+ :::image type="content" source="media/tutorial-waterconsumptionmonitoring/water-consumption-monitoring-high-flow-alert.png" alt-text="Screenshot showing the water consumption monitoring application rule." lightbox="media/tutorial-waterconsumptionmonitoring/water-consumption-monitoring-high-flow-alert.png":::
The **High water flow alert** rule is configured to check the condition `Flow` is `greater than` the `Max flow threshold`. The **Max flow threshold** is a cloud property defined in the **Smart Valve** device template, and its value is set per device instance.
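To see this rule fire, you can send a telemetry value above the threshold from a test device. The sketch below uses the `azure-iot-device` Python package and assumes a telemetry field named `Flow` (per the Smart Valve template) and a device connection string; IoT Central devices normally provision through the Device Provisioning Service, so treat the connection string as a placeholder.

```python
# A minimal sketch, assuming a device connection string and a "Flow" telemetry
# field. The flow value is arbitrary and just needs to exceed the configured
# Max flow threshold for the rule condition to evaluate to true.
import json

from azure.iot.device import IoTHubDeviceClient, Message

CONNECTION_STRING = "<device connection string>"  # placeholder

client = IoTHubDeviceClient.create_from_connection_string(CONNECTION_STRING)
client.connect()

message = Message(json.dumps({"Flow": 120}))
message.content_type = "application/json"
message.content_encoding = "utf-8"
client.send_message(message)

client.disconnect()
```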
To learn more, see [How to run a job](../core/howto-manage-devices-in-bulk.md).
## Customize your application
-As an administrator, you can change several settings to customize the user experience in your application.
-
-1. Select **Application** > **Management**.
-1. To choose an image to upload as the **Application logo**, select the **Change** button.
-1. To choose a **Browser icon** image that will appear on browser tabs, select the **Change** button.
-1. You can also replace the default **Browser colors** by adding HTML hexadecimal color codes. For more information about **HEX Value** color notation, see [HTML Colors](https://www.w3schools.com/html/html_colors.asp).
-
- ![Selections for application logo, browser icon, and browser colors](./media/tutorial-waterconsumptionmonitoring/water-consumption-monitoring-customize-your-application.png)
-
-1. You can also change application images by selecting **Application** > **Management**. To choose an image to upload as the application image, select the **Select image** button.
- ## Clean up resources
-If you're not going to continue to use this application, delete it.
-
-1. Select **Application** > **Management** on the left pane of your Azure IoT Central application and then select **Delete** at the bottom of the page.
## Next steps
-
-The suggested next step is to learn about [Water quality monitoring](./tutorial-water-quality-monitoring.md).
+
+The suggested next step is to learn about [Water quality monitoring](./tutorial-water-quality-monitoring.md).
iot-central Tutorial Water Quality Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/government/tutorial-water-quality-monitoring.md
Last updated 06/15/2022
-
Traditional water quality monitoring relies on manual sampling techniques and fi
The _water quality monitoring_ application template helps you kickstart your IoT solution development and enable water utilities to digitally monitor water quality in smart cities.
-![Water quality monitoring architecture](./media/tutorial-waterqualitymonitoring/concepts-water-quality-monitoring-architecture1.png)
### Devices and connectivity (1,2)
The following sections walk you through the key features of the application:
After you create the application, the **Wide World water quality dashboard** pane opens. As a builder, you can create and customize views on the dashboard for use by operators. But before you try to customize, first explore the dashboard.
To view the device template:
1. Select **Device templates** on the leftmost pane of your application in Azure IoT Central.
1. From the list of device templates, select **Water Quality Monitor** to open that device template.

### Customize the device template
Practice customizing the following device template settings:
The water quality monitoring device template comes with predefined views. The views define how operators see the device data and set cloud properties. Explore the views and practice making changes.

### Publish the device template

If you make any changes, be sure to select **Publish** to publish the device template.
The water quality monitoring application you created from the application templa
1. Select **Devices** on the leftmost pane of your application.
-1. Select one simulated device.
+1. Select a simulated device.
- :::image type="content" source="media/tutorial-waterqualitymonitoring/water-quality-monitor-device1.png" alt-text="Select device 1":::
+ :::image type="content" source="media/tutorial-waterqualitymonitoring/water-quality-monitor-device.png" alt-text="Screenshot showing a water quality monitoring device." lightbox="media/tutorial-waterqualitymonitoring/water-quality-monitor-device.png":::
1. On the **Cloud Properties** tab, change the **Acidity (pH) threshold** value to **9** and select **Save**.
1. Explore the **Device Properties** tab and the **Device Dashboard** tab.
The water quality monitoring application you created has two preconfigured rules
1. Select **High pH alert**, which is one of the preconfigured rules in the application.
- :::image type="content" source="media/tutorial-waterqualitymonitoring/water-quality-monitoring-high-ph-alert.png" alt-text="The high pH alert rule.":::
+ :::image type="content" source="media/tutorial-waterqualitymonitoring/water-quality-monitoring-high-ph-alert.png" alt-text="Screenshot showing the water quality monitoring dashboard high pH alert rule." lightbox="media/tutorial-waterqualitymonitoring/water-quality-monitoring-high-ph-alert.png":::
The **High pH alert** rule is configured to check the condition of acidity (pH) being greater than 8.
With Azure IoT Central jobs, you can trigger updates to device or cloud properti
## Customize your application
-As a builder, you can change several settings to customize the user experience in your application.
-
-1. Select **Application** > **Management**.
-1. Under **Masthead logo**, select **Change** to choose the image to upload as the logo.
-1. Under **Browser icon**, select **Change** to choose the image that appears on browser tabs.
-1. Under **Browser colors**, you can replace the default values with HTML hexadecimal color codes.
-
- :::image type="content" source="media/tutorial-waterqualitymonitoring/water-quality-monitoring-customize-your-application1.png" alt-text="Customize your application":::
-
-### Update the application image
-
-1. Select **Application** > **Management**.
-
-1. Select **Change** to choose an image to upload as the application image.
## Clean up resources
-If you're not going to continue to use your application, delete the application with the following steps:
-
-1. Open the **Application** > **Management** tab on the leftmost pane of your application.
-1. Select **Your application** and select the **Delete** button.
- :::image type="content" source="media/tutorial-waterqualitymonitoring/water-quality-monitoring-application-settings-delete-app1.png" alt-text="Delete your application.":::
iot-central Tutorial Continuous Patient Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/healthcare/tutorial-continuous-patient-monitoring.md
Last updated 12/23/2021
- # Tutorial: Deploy and walkthrough the continuous patient monitoring application template
The application template enables you to:
- Export your patient health data to the Azure API for FHIR, a compliant data store.
- Export the aggregated insights into existing or new business applications.

### Bluetooth Low Energy (BLE) medical devices (1)
After deploying the application template, you'll first land on the **Lamna in-pa
You can also select **Go to remote patient dashboard** to see the Burkville Hospital operator dashboard. This dashboard contains a similar set of actions, telemetry, and information. You can also see multiple devices in use and choose to **update the firmware** on each.

### Device templates
If you select **Device templates**, you see the two device types in the template
- **Smart Knee Brace**: This device represents a knee brace that patients use when recovering from a knee replacement surgery. If you select this template, you see capabilities such as device data, range of motion, and acceleration.

### Device groups
If you select **Rules**, you see the three rules in the template:
- **Patch battery low**: This rule triggers when the battery level on the device goes below 10%. Use this rule to trigger a notification to the patient to charge their device.

### Jobs
The **Properties** tab lets you edit cloud properties and read/write device prop
The **Commands** tab lets you run commands on the device.

## Clean up resources
-If you're not going to continue to use this application, delete the application by visiting **Application > Management** and click **Delete**.
- ## Next steps
iot-hub-device-update Create Update Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/create-update-group.md
You can use the `--order-by` argument to order the groups returned based on aspe
3. You can also select an individual device within a group to be redirected to the device details page in IoT Hub.

    :::image type="content" source="media/create-update-group/device-details.png" alt-text="Screenshot of device details view." lightbox="media/create-update-group/device-details.png":::
+
+ :::image type="content" source="media/create-update-group/device-details-2.png" alt-text="Screenshot of device details view in IoT hub." lightbox="media/create-update-group/device-details-2.png":::
# [Azure CLI](#tab/cli)
iot-hub-device-update Device Update Azure Real Time Operating System https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-azure-real-time-operating-system.md
Learn more about [Azure RTOS](/azure/rtos/).
} ```
-## Create an update group
+## View device groups
+
+Device Update uses groups to organize devices. Device Update automatically sorts devices into groups based on their assigned tags and compatibility properties. Each device belongs to only one group, but groups can have multiple subgroups to sort different device classes.
1. Go to the **Groups and Deployments** tab at the top of the page.

    :::image type="content" source="media/create-update-group/ungrouped-devices.png" alt-text="Screenshot that shows ungrouped devices." lightbox="media/create-update-group/ungrouped-devices.png":::
-1. Select **Add group** to create a new group.
-
- :::image type="content" source="media/create-update-group/add-group.png" alt-text="Screenshot that shows a device group addition." lightbox="media/create-update-group/add-group.png":::
-
-1. Select an **IoT Hub** tag and **Device class** from the list. Then select **Create group**.
-
- :::image type="content" source="media/create-update-group/select-tag.png" alt-text="Screenshot that shows tag selection." lightbox="media/create-update-group/select-tag.png":::
-
-1. After the group is created, you see that the update compliance chart and groups list are updated. The update compliance chart shows the count of devices in various states of compliance: **On latest update**, **New updates available**, and **Updates in progress**. [Learn about update compliance](device-update-compliance.md).
+1. View the list of groups and the update compliance chart. The update compliance chart shows the count of devices in various states of compliance: **On latest update**, **New updates available**, and **Updates in progress**. [Learn about update compliance](device-update-compliance.md).
:::image type="content" source="media/create-update-group/updated-view.png" alt-text="Screenshot that shows the update compliance view." lightbox="media/create-update-group/updated-view.png":::
-1. You should see your newly created group and any available updates for the devices in the new group. If there are devices that don't meet the device class requirements of the group, they show up in a corresponding invalid group. To deploy the best available update to the new user-defined group from this view, select **Deploy** next to the group.
+1. You should see a device group that contains the simulated device you set up in this tutorial, along with any available updates for the devices in that group. If there are devices that don't meet the device class requirements of the group, they'll show up in a corresponding invalid group. To deploy the best available update to a group from this view, select **Deploy** next to the group.
+
+For more information about tags and groups, see [Manage device groups](create-update-group.md).
-[Learn more](create-update-group.md) about how to add tags and create update groups.
## Deploy new firmware
iot-hub-device-update Device Update Deployments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-deployments.md
After deploying an update, it is critical to ensure that:
To enable device operators to meet these goals, update deployments can be configured with an automatic rollback policy from the cloud. The policy lets you define a rollback trigger by setting thresholds for the percentage and the minimum number of failed devices. Once the threshold is met, all the devices in the group are rolled back to the selected update version.
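As a rough illustration of how such a trigger could be evaluated, the sketch below checks a deployment's failure count against both thresholds. The field names and the rule that both thresholds must be met are assumptions for illustration, not the documented Device Update behavior.

```python
# A rough sketch of evaluating a rollback trigger policy. Field names and the
# requirement that both thresholds are met are assumptions, not the service's
# documented semantics.
from dataclasses import dataclass


@dataclass
class RollbackPolicy:
    failed_percentage_threshold: float  # e.g. 10.0 means 10%
    min_failed_devices: int


def should_roll_back(total_devices: int, failed_devices: int, policy: RollbackPolicy) -> bool:
    if total_devices == 0:
        return False
    failed_percentage = 100.0 * failed_devices / total_devices
    return (
        failed_devices >= policy.min_failed_devices
        and failed_percentage >= policy.failed_percentage_threshold
    )


policy = RollbackPolicy(failed_percentage_threshold=10.0, min_failed_devices=5)
print(should_roll_back(total_devices=200, failed_devices=25, policy=policy))  # True
```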
+## Deployment monitoring
+
+Deployment details give you information about the devices that are part of the deployment, along with their status. As the deployment progresses, devices move from the **In progress** state to the **Completed** or **Failed** state. If the deployment is canceled, all the devices within the deployment also reflect the **Canceled** state.
+
+Devices may move directly to a terminal state (**Completed** or **Failed**) if the deployed update is very small or if network latency is high. These states are set when the service receives the deployment status from the Device Update agent and can't be changed manually.
++
## Next steps

[Deploy an update](./deploy-update.md)
lab-services Administrator Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/administrator-guide.md
When you're assigning roles, it helps to follow these tips:
- Ordinarily, only administrators should be members of a lab plan Owner or Contributor role. The lab plan might have more than one Owner or Contributor.
- To give educators the ability to create new labs and manage the labs that they create, you need only assign them the Lab Creator role.
- To give educators the ability to manage specific labs, but *not* the ability to create new labs, assign them either the Owner or Contributor role for each lab that they'll manage. For example, you might want to allow a professor and a teaching assistant to co-own a lab.
+- For more detail about the permissions assigned to each role, see [Azure built-in roles](/azure/role-based-access-control/built-in-roles#lab-assistant).
## Content filtering
machine-learning How To Mlflow Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/batch-inference/how-to-mlflow-batch.md
Use the following steps to deploy an MLflow model with a custom scoring script.
    # AZUREML_MODEL_DIR is an environment variable created during deployment
    # It is the path to the model folder
    model_path = os.path.join(os.environ["AZUREML_MODEL_DIR"], "model")
- model = mlflow.pyfunc.load(model_path)
+ model = mlflow.pyfunc.load_model(model_path)
def run(mini_batch):
    results = pd.DataFrame(columns=['file', 'predictions'])
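Putting the corrected `load_model` call in context, here's a minimal sketch of a complete batch scoring script that follows the `init()`/`run()` pattern shown above. It assumes the registered MLflow model lives in a `model` subfolder and that each mini-batch item is a CSV file the pyfunc model can score as a pandas DataFrame; adapt the file handling to your own data format.

```python
# A minimal sketch of a custom batch scoring script for an MLflow model.
# Assumptions: the model folder is named "model", inputs are CSV files, and
# model.predict returns one value per input row (array-like).
import os

import mlflow
import pandas as pd

model = None


def init():
    global model
    # AZUREML_MODEL_DIR points to the folder that contains the registered model.
    model_path = os.path.join(os.environ["AZUREML_MODEL_DIR"], "model")
    model = mlflow.pyfunc.load_model(model_path)


def run(mini_batch):
    # mini_batch is a list of file paths handed to the script by the batch deployment.
    results = pd.DataFrame(columns=["file", "predictions"])
    for file_path in mini_batch:
        data = pd.read_csv(file_path)
        predictions = model.predict(data)
        row = pd.DataFrame(
            {"file": [os.path.basename(file_path)], "predictions": [list(predictions)]}
        )
        results = pd.concat([results, row], ignore_index=True)
    return results
```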
machine-learning How To Access Azureml Behind Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-azureml-behind-firewall.md
The following are well-known ports used by services listed in this article. If a
| 443 | Secured web traffic (HTTPS) | | 445 | SMB traffic used to access file shares in Azure File storage | | 8787 | Used when connecting to RStudio on a compute instance |
+| 18881 | Used to connect to the language server to enable IntelliSense for notebooks on a compute instance. |
## Required public internet access
machine-learning How To Devops Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-devops-machine-learning.md
Previously updated : 09/28/2022 Last updated : 11/11/2022
steps:
- task: UsePythonVersion@0
  inputs:
    versionSpec: '3.8'
-- script: pip install -r sdk/dev-requirements.txt
+- script: pip install -r sdk/python/dev-requirements.txt
  displayName: 'pip install notebook reqs'
- task: Bash@3
  inputs:
- filePath: 'sdk/setup.sh'
+ filePath: 'sdk/python/setup.sh'
  displayName: 'set up sdk'
- task: Bash@3
steps:
sed -i -e "s/<AML_WORKSPACE_NAME>/$(AZUREML_WORKSPACE_NAME)/g" sklearn-diabetes.ipynb sed -i -e "s/DefaultAzureCredential/AzureCliCredential/g" sklearn-diabetes.ipynb papermill -k python sklearn-diabetes.ipynb sklearn-diabetes.output.ipynb
- workingDirectory: 'sdk/jobs/single-step/scikit-learn/diabetes'
+ workingDirectory: 'sdk/python/jobs/single-step/scikit-learn/diabetes'
```
machine-learning Concept Train Machine Learning Model V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/concept-train-machine-learning-model-v1.md
description: Learn how to train models with Azure Machine Learning (v1). Explore the different training methods and choose the right one for your project. --+++ Last updated 08/30/2022
machine-learning How To Authenticate Web Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-authenticate-web-service.md
Title: Configure authentication for models deployed as web services
description: Learn how to configure authentication for machine learning models deployed to web services in Azure Machine Learning. --+++ Last updated 08/15/2022
machine-learning How To Data Prep Synapse Spark Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-data-prep-synapse-spark-pool.md
---+++ Last updated 08/17/2022 #Customer intent: As a data scientist, I want to prepare my data at scale, and to train my machine learning models from a single notebook using Azure Machine Learning.
machine-learning How To Deploy Advanced Entry Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-deploy-advanced-entry-script.md
Last updated 08/15/2022--+++
machine-learning How To Migrate From Estimators To Scriptrunconfig https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-migrate-from-estimators-to-scriptrunconfig.md
Title: Migrate from Estimators to ScriptRunConfig
description: Migration guide for migrating from Estimators to ScriptRunConfig for configuring training jobs. --++
machine-learning How To Move Data In Out Of Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-move-data-in-out-of-pipelines.md
description: Learn how Azure Machine Learning pipelines ingest data, and how to
--+++ Last updated 08/18/2022
machine-learning How To Trigger Published Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-trigger-published-pipeline.md
description: Triggered pipelines allow you to automate routine, time-consuming t
--+++ Last updated 08/12/2022
machine-learning How To Troubleshoot Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-troubleshoot-deployment.md
Last updated 08/15/2022--+++ #Customer intent: As a data scientist, I want to figure out why my model deployment fails so that I can fix it.
machine-learning How To Use Labeled Dataset https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-use-labeled-dataset.md
Title: Create and explore datasets with labels description: Learn how to export data labels from your Azure Machine Learning labeling projects and use them for machine learning tasks. --+++
machine-learning Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/introduction.md
--++ Last updated 05/10/2022
machine-learning Reference Azure Machine Learning Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/reference-azure-machine-learning-cli.md
--- Previously updated : 04/02/2021+++ Last updated : 11/11/2022
mysql Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/overview.md
One advantage of running your workload in Azure is its global reach. The flexibl
| North Central US | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: |
| North Europe | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Norway East | :heavy_check_mark: | :heavy_check_mark: | :x: | :x: |
+| Qatar Central | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :x: |
| South Africa North | :heavy_check_mark: | :heavy_check_mark: | :x: | :x: |
| South Central US | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| South India | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: |
| Southeast Asia | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: |
+| Sweden Central | :heavy_check_mark: | :heavy_check_mark: | :x: | :x: |
| Switzerland North | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: |
| Switzerland West | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: |
| UAE North | :heavy_check_mark: | :heavy_check_mark: | :x: | :x: |
| UK South | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: |
| UK West | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: |
+| USGov Virginia | :heavy_check_mark: | :heavy_check_mark: | :x: | :x: |
+| USGov Arizona | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: |
+| USGov Texas | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: |
| West Central US | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: |
| West Europe | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| West US | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: |
| West US 2 | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: |
| West US 3 | :heavy_check_mark: | :heavy_check_mark: | :x: | :x: |
-| Qatar Central | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :x: |
-| Sweden Central | :heavy_check_mark: | :heavy_check_mark: | :x: | :x: |
+++

## Contacts
mysql Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/whats-new.md
This article summarizes new releases and features in Azure Database for MySQL -
> [!NOTE]
> This article contains references to the term slave, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
+## November 2022
+
+- **General availability in Azure US Government regions**
+ The Azure Database for MySQL - Flexible Server is now available in the following Azure regions:
+ - USGov Virginia
+ - USGov Arizona
+ - USGov Texas
++
## October 2022

- **AMD compute SKUs for General Purpose and Business Critical tiers in Azure Database for MySQL - Flexible Server**
postgresql Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/overview.md
One advantage of running your workload in Azure is global reach. The flexible se
| | | | | |
| Australia East | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Australia Southeast | :heavy_check_mark: | :x: | :heavy_check_mark: | :heavy_check_mark: |
-| Brazil South | :heavy_check_mark: (v3 only) | :x: | :heavy_check_mark: | :x: |
+| Brazil South | :heavy_check_mark: (v3 only) | :heavy_check_mark: | :heavy_check_mark: | :x: |
| Canada Central | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Canada East | :heavy_check_mark: | :x: | :heavy_check_mark: | :heavy_check_mark: |
| Central India | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
purview How To Create Import Export Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-create-import-export-glossary.md
Last updated 03/09/2022
# Create, import, export, and delete glossary terms
-This article describes how to work with the business glossary in Microsoft Purview. It provides steps to create a business glossary term in the Microsoft Purview data catalog. It also shows you how to import and export glossary terms by using .CSV files, and how to delete terms that you no longer need.
+This article describes how to work with the business glossary in Microsoft Purview. It provides steps to create a business glossary term in the Microsoft Purview Data Catalog. It also shows you how to import and export glossary terms by using .CSV files, and how to delete terms that you no longer need.
## Create a term
To create a glossary term, follow these steps:
:::image type="content" source="media/how-to-create-import-export-glossary/find-glossary.png" alt-text="Screenshot of the data catalog with the button for managing a glossary highlighted." border="true":::
-2. On the **Glossary terms** page, select **+ New term**.
+1. On the **Glossary terms** page, select **+ New term**.
- A pane opens with the **System default** template selected. Choose the template that you want to use to create a glossary term, and then select **Continue**.
+ A pane opens with the **System default** template selected. Choose the template, or templates, that you want to use to create a glossary term, and then select **Continue**.
+ Selecting multiple templates will allow you to use the custom attributes from those templates.
:::image type="content" source="media/how-to-create-import-export-glossary/new-term-with-default-template.png" alt-text="Screenshot of the button and pane for creating a new term." border="true":::
-3. Give your new term a name, which must be unique in the catalog.
+1. If you selected multiple templates, you can select and deselect templates from the **Term template** dropdown at the top of the page.
+
+1. Give your new term a name, which must be unique in the catalog.
> [!NOTE]
> Term names are case-sensitive. For example, **Sample** and **sample** could both exist in the same glossary.
-4. For **Definition**, add a definition for the term.
+1. For **Definition**, add a definition for the term.
Microsoft Purview enables you to add rich formatting to term definitions. For example, you can add bold, underline, or italic formatting to text. You can also create tables, bulleted lists, or hyperlinks to external resources.
To create a glossary term, follow these steps:
> } >```
-5. For **Status**, select the status for the term. New terms default to **Draft**.
+1. For **Status**, select the status for the term. New terms default to **Draft**.
:::image type="content" source="media/how-to-create-import-export-glossary/overview-tab.png" alt-text="Screenshot of the status choices.":::
To create a glossary term, follow these steps:
> [!Important]
> If an approval workflow is enabled on the term hierarchy, a new term will go through the approval process when it's created. The term is stored in the catalog only when it's approved. To learn about how to manage approval workflows for a business glossary, see [Approval workflow for business terms](how-to-workflow-business-terms-approval.md).
-6. Add **Resources** and **Acronym** information. If the term is part of a hierarchy, you can add parent terms at **Parent** on the **Overview** tab.
+1. Add **Resources** and **Acronym** information. If the term is part of a hierarchy, you can add parent terms at **Parent** on the **Overview** tab.
-7. Add **Synonyms** and **Related terms** information on the **Related** tab, and then select **Apply**.
+1. Add **Synonyms** and **Related terms** information on the **Related** tab, and then select **Apply**.
:::image type="content" source="media/how-to-create-import-export-glossary/related-tab.png" alt-text="Screenshot of tab for related terms and the box for adding synonyms." border="true":::
-8. Optionally, select the **Contacts** tab to add experts and stewards to your term.
+1. Optionally, select the **Contacts** tab to add experts and stewards to your term.
-9. Select **Create** to create your term.
+1. Select **Create** to create your term.
> [!Important]
> If an approval workflow is enabled on the term's hierarchy path, you'll see **Submit for approval** instead of the **Create** button. Selecting **Submit for approval** will trigger the approval workflow for this term.
To create a glossary term, follow these steps:
## Import terms into the glossary
-The Microsoft Purview data catalog provides a template .CSV file for you to import terms from the catalog into your glossary. Duplicate terms include both spelling and capitalization, because term names are case-sensitive.
+The Microsoft Purview Data Catalog provides a template .CSV file for you to import terms from the catalog into your glossary. Duplicate terms include both spelling and capitalization, because term names are case-sensitive.
1. On the **Glossary terms** page, select **Import terms**. The term template page opens.
-2. Match the term template to the kind of .CSV file that you want to import, and then select **Continue**.
+1. Select the template, or templates, for the terms you want to import, and then select **Continue**.
+
+ You can select multiple templates and import terms for different templates from a single .csv file.
:::image type="content" source="media/how-to-create-import-export-glossary/select-term-template-for-import.png" alt-text="Screenshot of the template list for importing a term, with the system default template highlighted.":::
-3. Download the .csv template and use it to enter the terms that you want to add.
+1. Download the .csv template and use it to enter the terms that you want to add.
Give your template file a name that starts with a letter and includes only letters, numbers, spaces, an underscore (_), or other non-ASCII Unicode characters. Special characters in the file name will create an error.
The Microsoft Purview data catalog provides a template .CSV file for you to impo
:::image type="content" source="media/how-to-create-import-export-glossary/select-file-for-import.png" alt-text="Screenshot of the button for downloading a sample template file.":::
-4. After you finish filling out your .CSV file, select your file to import, and then select **OK**.
+1. After you finish filling out your .CSV file, select your file to import, and then select **OK**.
The system will upload the file and add all the terms to your glossary.
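If you generate the import file programmatically rather than by hand, a short script can keep the headers consistent and guard against duplicate names. In the sketch below, the column names are illustrative placeholders based on the system default template; use the headers from the template .CSV you downloaded from the **Glossary terms** page as the authoritative list.

```python
# A minimal sketch of producing a glossary import file. The column names are
# assumptions for illustration; copy the exact headers from the downloaded
# template .CSV before importing.
import csv

terms = [
    {"Name": "Customer", "Definition": "A person or organization that buys goods.", "Status": "Draft"},
    {"Name": "Revenue", "Definition": "Income generated from normal business operations.", "Status": "Approved"},
]

# Term names are case-sensitive in the catalog, but guarding against exact
# duplicates avoids accidental re-imports.
names = [term["Name"] for term in terms]
assert len(names) == len(set(names)), "Duplicate term names found"

with open("glossary-terms.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["Name", "Definition", "Status"])
    writer.writeheader()
    writer.writerows(terms)
```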
The system will upload the file and add all the terms to your glossary.
You can export terms from the glossary as long as the selected terms belong to same term template.
-When you're in the glossary, the **Export terms** button is disabled by default. After you select the terms that you want to export, the **Export terms** button is enabled if the selected terms belong to same template.
+When you're in the glossary, the **Export terms** button is disabled by default. After you select the terms that you want to export, the **Export terms** button is enabled.
+
+> [!NOTE]
+> Selected terms **don't** need to be from the same term template to be able to export them.
Select **Export terms** to download the selected terms.
role-based-access-control Built In Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/built-in-roles.md
The following table provides a brief description of each built-in role. Click th
> | [Security Reader](#security-reader) | View permissions for Microsoft Defender for Cloud. Can view recommendations, alerts, a security policy, and security states, but cannot make changes. | 39bc4728-0917-49c7-9d2c-d95423bc2eb4 | > | **DevOps** | | | > | [DevTest Labs User](#devtest-labs-user) | Lets you connect, start, restart, and shutdown your virtual machines in your Azure DevTest Labs. | 76283e04-6283-4c54-8f91-bcf1374a3c64 |
+> | [Lab Assistant](#lab-assistant) | Enables you to view an existing lab, perform actions on the lab VMs and send invitations to the lab. | ce40b423-cede-4313-a93f-9b28290b72e1 |
+> | [Lab Contributor](#lab-contributor) | Applied at lab level, enables you to manage the lab. Applied at a resource group, enables you to create and manage labs. | 5daaa2af-1fe8-407c-9122-bba179798270 |
> | [Lab Creator](#lab-creator) | Lets you create new labs under your Azure Lab Accounts. | b97fb8bc-a8b2-4522-a38b-dd33c7e65ead |
+> | [Lab Operator](#lab-operator) | Gives you limited ability to manage existing labs. | a36e6959-b6be-4b12-8e9f-ef4b474d304d |
+> | [Lab Services Contributor](#lab-services-contributor) | Enables you to fully control all Lab Services scenarios in the resource group. | f69b8690-cc87-41d6-b77a-a4bc3c0a966f |
+> | [Lab Services Reader](#lab-services-reader) | Enables you to view, but not change, all lab plans and lab resources. | 2a5c394f-5eb7-4d4f-9c8e-e8eae39faebc |
> | **Monitor** | | |
> | [Application Insights Component Contributor](#application-insights-component-contributor) | Can manage Application Insights components | ae349356-3a1b-4a5e-921d-050484c6347e |
> | [Application Insights Snapshot Debugger](#application-insights-snapshot-debugger) | Gives user permission to view and download debug snapshots collected with the Application Insights Snapshot Debugger. Note that these permissions are not included in the [Owner](#owner) or [Contributor](#contributor) roles. When giving users the Application Insights Snapshot Debugger role, you must grant the role directly to the user. The role is not recognized when it is added to a custom role. | 08954f03-6346-4c2e-81c0-ec3a5cfae23b |
Lets you connect, start, restart, and shutdown your virtual machines in your Azu
} ```
+### Lab Assistant
+
+Enables you to view an existing lab, perform actions on the lab VMs and send invitations to the lab.
+
+> [!div class="mx-tableFixed"]
+> | Actions | Description |
+> | | |
+> | [Microsoft.Authorization](resource-provider-operations.md#microsoftauthorization)/*/read | Read roles and role assignments |
+> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/alertRules/* | Create and manage a classic metric alert |
+> | [Microsoft.LabServices](resource-provider-operations.md#microsoftlabservices)/labPlans/images/read | Get the properties of an image. |
+> | [Microsoft.LabServices](resource-provider-operations.md#microsoftlabservices)/labPlans/read | Get the properties of a lab plan. |
+> | [Microsoft.LabServices](resource-provider-operations.md#microsoftlabservices)/labs/read | Get the properties of a lab. |
+> | [Microsoft.LabServices](resource-provider-operations.md#microsoftlabservices)/labs/schedules/read | Get the properties of a schedule. |
+> | [Microsoft.LabServices](resource-provider-operations.md#microsoftlabservices)/labs/users/read | Get the properties of a user. |
+> | [Microsoft.LabServices](resource-provider-operations.md#microsoftlabservices)/labs/users/invite/action | Send email invitation to a user to join the lab. |
+> | [Microsoft.LabServices](resource-provider-operations.md#microsoftlabservices)/labs/virtualMachines/read | Get the properties of a virtual machine. |
+> | [Microsoft.LabServices](resource-provider-operations.md#microsoftlabservices)/labs/virtualMachines/start/action | Start a virtual machine. |
+> | [Microsoft.LabServices](resource-provider-operations.md#microsoftlabservices)/labs/virtualMachines/stop/action | Stop and deallocate a virtual machine. |
+> | [Microsoft.LabServices](resource-provider-operations.md#microsoftlabservices)/labs/virtualMachines/reimage/action | Reimage a virtual machine to the last published image. |
+> | [Microsoft.LabServices](resource-provider-operations.md#microsoftlabservices)/labs/virtualMachines/redeploy/action | Redeploy a virtual machine to a different compute node. |
+> | [Microsoft.LabServices](resource-provider-operations.md#microsoftlabservices)/locations/usages/read | Get Usage in a location |
+> | [Microsoft.LabServices](resource-provider-operations.md#microsoftlabservices)/skus/read | Get the properties of a Lab Services SKU. |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/deployments/* | Create and manage a deployment |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/resourceGroups/read | Gets or lists resource groups. |
+> | **NotActions** | |
+> | *none* | |
+> | **DataActions** | |
+> | *none* | |
+> | **NotDataActions** | |
+> | *none* | |
+
+```json
+{
+ "id": "/providers/Microsoft.Authorization/roleDefinitions/ce40b423-cede-4313-a93f-9b28290b72e1",
+ "properties": {
+ "roleName": "Lab Assistant",
+ "description": "The lab assistant role",
+ "assignableScopes": [
+ "/"
+ ],
+ "permissions": [
+ {
+ "actions": [
+ "Microsoft.Authorization/*/read",
+ "Microsoft.Insights/alertRules/*",
+ "Microsoft.LabServices/labPlans/images/read",
+ "Microsoft.LabServices/labPlans/read",
+ "Microsoft.LabServices/labs/read",
+ "Microsoft.LabServices/labs/schedules/read",
+ "Microsoft.LabServices/labs/users/read",
+ "Microsoft.LabServices/labs/users/invite/action",
+ "Microsoft.LabServices/labs/virtualMachines/read",
+ "Microsoft.LabServices/labs/virtualMachines/start/action",
+ "Microsoft.LabServices/labs/virtualMachines/stop/action",
+ "Microsoft.LabServices/labs/virtualMachines/reimage/action",
+ "Microsoft.LabServices/labs/virtualMachines/redeploy/action",
+ "Microsoft.LabServices/locations/usages/read",
+ "Microsoft.LabServices/skus/read",
+ "Microsoft.Resources/deployments/*",
+ "Microsoft.Resources/subscriptions/resourceGroups/read"
+ ],
+ "notActions": [],
+ "dataActions": [],
+ "notDataActions": []
+ }
+ ]
+ }
+}
+```
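If you assign these roles programmatically, the role definition ID shown above can be combined with the target scope. The sketch below uses the `azure-mgmt-authorization` package and assumes `RoleAssignmentCreateParameters` accepts `role_definition_id` and `principal_id` directly (the model shape varies by SDK version, so verify against the version you install); the subscription, resource group, lab name, and principal object ID are placeholders.

```python
# A minimal sketch of assigning the Lab Assistant role at lab scope with the
# Python SDK. Treat the RoleAssignmentCreateParameters shape as an assumption
# to confirm for your azure-mgmt-authorization version; IDs are placeholders.
import uuid

from azure.identity import DefaultAzureCredential
from azure.mgmt.authorization import AuthorizationManagementClient
from azure.mgmt.authorization.models import RoleAssignmentCreateParameters

SUBSCRIPTION_ID = "<subscription-id>"                           # placeholder
SCOPE = (
    f"/subscriptions/{SUBSCRIPTION_ID}/resourceGroups/<resource-group>"
    "/providers/Microsoft.LabServices/labs/<lab-name>"
)
LAB_ASSISTANT_ROLE_ID = "ce40b423-cede-4313-a93f-9b28290b72e1"  # from the table above
PRINCIPAL_OBJECT_ID = "<user-or-group-object-id>"               # placeholder

client = AuthorizationManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)
client.role_assignments.create(
    scope=SCOPE,
    role_assignment_name=str(uuid.uuid4()),
    parameters=RoleAssignmentCreateParameters(
        role_definition_id=(
            f"/subscriptions/{SUBSCRIPTION_ID}/providers/"
            f"Microsoft.Authorization/roleDefinitions/{LAB_ASSISTANT_ROLE_ID}"
        ),
        principal_id=PRINCIPAL_OBJECT_ID,
    ),
)
```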
+
+### Lab Contributor
+
+Applied at lab level, enables you to manage the lab. Applied at a resource group, enables you to create and manage labs.
+
+> [!div class="mx-tableFixed"]
+> | Actions | Description |
+> | | |
+> | [Microsoft.Authorization](resource-provider-operations.md#microsoftauthorization)/*/read | Read roles and role assignments |
+> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/alertRules/* | Create and manage a classic metric alert |
+> | [Microsoft.LabServices](resource-provider-operations.md#microsoftlabservices)/labPlans/images/read | Get the properties of an image. |
+> | [Microsoft.LabServices](resource-provider-operations.md#microsoftlabservices)/labPlans/read | Get the properties of a lab plan. |
+> | [Microsoft.LabServices](resource-provider-operations.md#microsoftlabservices)/labPlans/saveImage/action | Create an image from a virtual machine in the gallery attached to the lab plan. |
+> | [Microsoft.LabServices](resource-provider-operations.md#microsoftlabservices)/labs/read | Get the properties of a lab. |
+> | [Microsoft.LabServices](resource-provider-operations.md#microsoftlabservices)/labs/write | Create new or update an existing lab. |
+> | [Microsoft.LabServices](resource-provider-operations.md#microsoftlabservices)/labs/delete | Delete the lab and all its users, schedules and virtual machines. |
+> | [Microsoft.LabServices](resource-provider-operations.md#microsoftlabservices)/labs/publish/action | Publish a lab by propagating image of the template virtual machine to all virtual machines in the lab. |
+> | [Microsoft.LabServices](resource-provider-operations.md#microsoftlabservices)/labs/syncGroup/action | Updates the list of users from the Active Directory group assigned to the lab. |
+> | [Microsoft.LabServices](resource-provider-operations.md#microsoftlabservices)/labs/schedules/read | Get the properties of a schedule. |
+> | [Microsoft.LabServices](resource-provider-operations.md#microsoftlabservices)/labs/schedules/write | Create new or update an existing schedule. |
+> | [Microsoft.LabServices](resource-provider-operations.md#microsoftlabservices)/labs/schedules/delete | Delete the schedule. |
+> | [Microsoft.LabServices](resource-provider-operations.md#microsoftlabservices)/labs/users/read | Get the properties of a user. |
+> | [Microsoft.LabServices](resource-provider-operations.md#microsoftlabservices)/labs/users/write | Create new or update an existing user. |
+> | [Microsoft.LabServices](resource-provider-operations.md#microsoftlabservices)/labs/users/delete | Delete the user. |
+> | [Microsoft.LabServices](resource-provider-operations.md#microsoftlabservices)/labs/users/invite/action | Send email invitation to a user to join the lab. |
+> | [Microsoft.LabServices](resource-provider-operations.md#microsoftlabservices)/labs/virtualMachines/read | Get the properties of a virtual machine. |
+> | [Microsoft.LabServices](resource-provider-operations.md#microsoftlabservices)/labs/virtualMachines/start/action | Start a virtual machine. |
+> | [Microsoft.LabServices](resource-provider-operations.md#microsoftlabservices)/labs/virtualMachines/stop/action | Stop and deallocate a virtual machine. |
+> | [Microsoft.LabServices](resource-provider-operations.md#microsoftlabservices)/labs/virtualMachines/reimage/action | Reimage a virtual machine to the last published image. |
+> | [Microsoft.LabServices](resource-provider-operations.md#microsoftlabservices)/labs/virtualMachines/redeploy/action | Redeploy a virtual machine to a different compute node. |
+> | [Microsoft.LabServices](resource-provider-operations.md#microsoftlabservices)/labs/virtualMachines/resetPassword/action | Reset local user's password on a virtual machine. |
+> | [Microsoft.LabServices](resource-provider-operations.md#microsoftlabservices)/locations/usages/read | Get Usage in a location |
+> | [Microsoft.LabServices](resource-provider-operations.md#microsoftlabservices)/skus/read | Get the properties of a Lab Services SKU. |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/deployments/* | Create and manage a deployment |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/resourceGroups/read | Gets or lists resource groups. |
+> | **NotActions** | |
+> | *none* | |
+> | **DataActions** | |
+> | [Microsoft.LabServices](resource-provider-operations.md#microsoftlabservices)/labPlans/createLab/action | Create a new lab from a lab plan. |
+> | **NotDataActions** | |
+> | *none* | |
+
+```json
+{
+ "id": "/providers/Microsoft.Authorization/roleDefinitions/5daaa2af-1fe8-407c-9122-bba179798270",
+ "properties": {
+ "roleName": "Lab Contributor",
+ "description": "The lab contributor role",
+ "assignableScopes": [
+ "/"
+ ],
+ "permissions": [
+ {
+ "actions": [
+ "Microsoft.Authorization/*/read",
+ "Microsoft.Insights/alertRules/*",
+ "Microsoft.LabServices/labPlans/images/read",
+ "Microsoft.LabServices/labPlans/read",
+ "Microsoft.LabServices/labPlans/saveImage/action",
+ "Microsoft.LabServices/labs/read",
+ "Microsoft.LabServices/labs/write",
+ "Microsoft.LabServices/labs/delete",
+ "Microsoft.LabServices/labs/publish/action",
+ "Microsoft.LabServices/labs/syncGroup/action",
+ "Microsoft.LabServices/labs/schedules/read",
+ "Microsoft.LabServices/labs/schedules/write",
+ "Microsoft.LabServices/labs/schedules/delete",
+ "Microsoft.LabServices/labs/users/read",
+ "Microsoft.LabServices/labs/users/write",
+ "Microsoft.LabServices/labs/users/delete",
+ "Microsoft.LabServices/labs/users/invite/action",
+ "Microsoft.LabServices/labs/virtualMachines/read",
+ "Microsoft.LabServices/labs/virtualMachines/start/action",
+ "Microsoft.LabServices/labs/virtualMachines/stop/action",
+ "Microsoft.LabServices/labs/virtualMachines/reimage/action",
+ "Microsoft.LabServices/labs/virtualMachines/redeploy/action",
+ "Microsoft.LabServices/labs/virtualMachines/resetPassword/action",
+ "Microsoft.LabServices/locations/usages/read",
+ "Microsoft.LabServices/skus/read",
+ "Microsoft.Resources/deployments/*",
+ "Microsoft.Resources/subscriptions/resourceGroups/read"
+ ],
+ "notActions": [],
+ "dataActions": [
+ "Microsoft.LabServices/labPlans/createLab/action"
+ ],
+ "notDataActions": []
+ }
+ ]
+ }
+}
+```
+
### Lab Creator

Lets you create new labs under your Azure Lab Accounts. [Learn more](../lab-services/add-lab-creator.md)
Lets you create new labs under your Azure Lab Accounts. [Learn more](../lab-serv
} ```
+### Lab Operator
+
+Gives you limited ability to manage existing labs.
+
+> [!div class="mx-tableFixed"]
+> | Actions | Description |
+> | | |
+> | [Microsoft.Authorization](resource-provider-operations.md#microsoftauthorization)/*/read | Read roles and role assignments |
+> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/alertRules/* | Create and manage a classic metric alert |
+> | [Microsoft.LabServices](resource-provider-operations.md#microsoftlabservices)/labPlans/images/read | Get the properties of an image. |
+> | [Microsoft.LabServices](resource-provider-operations.md#microsoftlabservices)/labPlans/read | Get the properties of a lab plan. |
+> | [Microsoft.LabServices](resource-provider-operations.md#microsoftlabservices)/labPlans/saveImage/action | Create an image from a virtual machine in the gallery attached to the lab plan. |
+> | [Microsoft.LabServices](resource-provider-operations.md#microsoftlabservices)/labs/publish/action | Publish a lab by propagating image of the template virtual machine to all virtual machines in the lab. |
+> | [Microsoft.LabServices](resource-provider-operations.md#microsoftlabservices)/labs/read | Get the properties of a lab. |
+> | [Microsoft.LabServices](resource-provider-operations.md#microsoftlabservices)/labs/schedules/read | Get the properties of a schedule. |
+> | [Microsoft.LabServices](resource-provider-operations.md#microsoftlabservices)/labs/schedules/write | Create new or update an existing schedule. |
+> | [Microsoft.LabServices](resource-provider-operations.md#microsoftlabservices)/labs/schedules/delete | Delete the schedule. |
+> | [Microsoft.LabServices](resource-provider-operations.md#microsoftlabservices)/labs/users/read | Get the properties of a user. |
+> | [Microsoft.LabServices](resource-provider-operations.md#microsoftlabservices)/labs/users/write | Create new or update an existing user. |
+> | [Microsoft.LabServices](resource-provider-operations.md#microsoftlabservices)/labs/users/delete | Delete the user. |
+> | [Microsoft.LabServices](resource-provider-operations.md#microsoftlabservices)/labs/users/invite/action | Send email invitation to a user to join the lab. |
+> | [Microsoft.LabServices](resource-provider-operations.md#microsoftlabservices)/labs/virtualMachines/read | Get the properties of a virtual machine. |
+> | [Microsoft.LabServices](resource-provider-operations.md#microsoftlabservices)/labs/virtualMachines/start/action | Start a virtual machine. |
+> | [Microsoft.LabServices](resource-provider-operations.md#microsoftlabservices)/labs/virtualMachines/stop/action | Stop and deallocate a virtual machine. |
+> | [Microsoft.LabServices](resource-provider-operations.md#microsoftlabservices)/labs/virtualMachines/reimage/action | Reimage a virtual machine to the last published image. |
+> | [Microsoft.LabServices](resource-provider-operations.md#microsoftlabservices)/labs/virtualMachines/redeploy/action | Redeploy a virtual machine to a different compute node. |
+> | [Microsoft.LabServices](resource-provider-operations.md#microsoftlabservices)/labs/virtualMachines/resetPassword/action | Reset local user's password on a virtual machine. |
+> | [Microsoft.LabServices](resource-provider-operations.md#microsoftlabservices)/locations/usages/read | Get Usage in a location. |
+> | [Microsoft.LabServices](resource-provider-operations.md#microsoftlabservices)/skus/read | Get the properties of a Lab Services SKU. |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/deployments/* | Create and manage a deployment. |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/resourceGroups/read | Gets or lists resource groups. |
+> | **NotActions** | |
+> | *none* | |
+> | **DataActions** | |
+> | *none* | |
+> | **NotDataActions** | |
+> | *none* | |
+
+```json
+{
+ "id": "/providers/Microsoft.Authorization/roleDefinitions/a36e6959-b6be-4b12-8e9f-ef4b474d304d",
+ "properties": {
+ "roleName": "Lab Operator",
+ "description": "The lab operator role",
+ "assignableScopes": [
+ "/"
+ ],
+ "permissions": [
+ {
+ "actions": [
+ "Microsoft.Authorization/*/read",
+ "Microsoft.Insights/alertRules/*",
+ "Microsoft.LabServices/labPlans/images/read",
+ "Microsoft.LabServices/labPlans/read",
+ "Microsoft.LabServices/labPlans/saveImage/action",
+ "Microsoft.LabServices/labs/publish/action",
+ "Microsoft.LabServices/labs/read",
+ "Microsoft.LabServices/labs/schedules/read",
+ "Microsoft.LabServices/labs/schedules/write",
+ "Microsoft.LabServices/labs/schedules/delete",
+ "Microsoft.LabServices/labs/users/read",
+ "Microsoft.LabServices/labs/users/write",
+ "Microsoft.LabServices/labs/users/delete",
+ "Microsoft.LabServices/labs/users/invite/action",
+ "Microsoft.LabServices/labs/virtualMachines/read",
+ "Microsoft.LabServices/labs/virtualMachines/start/action",
+ "Microsoft.LabServices/labs/virtualMachines/stop/action",
+ "Microsoft.LabServices/labs/virtualMachines/reimage/action",
+ "Microsoft.LabServices/labs/virtualMachines/redeploy/action",
+ "Microsoft.LabServices/labs/virtualMachines/resetPassword/action",
+ "Microsoft.LabServices/locations/usages/read",
+ "Microsoft.LabServices/skus/read",
+ "Microsoft.Resources/deployments/*",
+ "Microsoft.Resources/subscriptions/resourceGroups/read"
+ ],
+ "notActions": [],
+ "dataActions": [],
+ "notDataActions": []
+ }
+ ]
+ }
+}
+```
+
+### Lab Services Contributor
+
+Enables you to fully control all Lab Services scenarios in the resource group.
+
+> [!div class="mx-tableFixed"]
+> | Actions | Description |
+> | | |
+> | [Microsoft.LabServices](resource-provider-operations.md#microsoftlabservices)/* | Create and manage lab services components. |
+> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/alertRules/* | Create and manage a classic metric alert |
+> | [Microsoft.Authorization](resource-provider-operations.md#microsoftauthorization)/*/read | Read roles and role assignments |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/deployments/* | Create and manage a deployment |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/resourceGroups/read | Gets or lists resource groups. |
+> | **NotActions** | |
+> | *none* | |
+> | **DataActions** | |
+> | [Microsoft.LabServices](resource-provider-operations.md#microsoftlabservices)/labPlans/createLab/action | Create a new lab from a lab plan. |
+> | **NotDataActions** | |
+> | *none* | |
+
+```json
+{
+ "id": "/providers/Microsoft.Authorization/roleDefinitions/f69b8690-cc87-41d6-b77a-a4bc3c0a966f",
+ "properties": {
+ "roleName": "Lab Services Contributor",
+ "description": "The lab services contributor role",
+ "assignableScopes": [
+ "/"
+ ],
+ "permissions": [
+ {
+ "actions": [
+ "Microsoft.LabServices/*",
+ "Microsoft.Insights/alertRules/*",
+ "Microsoft.Authorization/*/read",
+ "Microsoft.Resources/deployments/*",
+ "Microsoft.Resources/subscriptions/resourceGroups/read"
+ ],
+ "notActions": [],
+ "dataActions": [
+ "Microsoft.LabServices/labPlans/createLab/action"
+ ],
+ "notDataActions": []
+ }
+ ]
+ }
+}
+```
++
+### Lab Services Reader
+
+Enables you to view, but not change, all lab plans and lab resources.
+
+> [!div class="mx-tableFixed"]
+> | Actions | Description |
+> | | |
+> | [Microsoft.LabServices](resource-provider-operations.md#microsoftlabservices)/*/read | Read lab services properties. |
+> | [Microsoft.Authorization](resource-provider-operations.md#microsoftauthorization)/*/read | Read roles and role assignments. |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/deployments/* | Create and manage a deployment. |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/resourceGroups/read | Gets or lists resource groups. |
+> | **NotActions** | |
+> | *none* | |
+> | **DataActions** | |
+> | *none* | |
+> | **NotDataActions** | |
+> | *none* | |
+
+```json
+{
+ "id": "/providers/Microsoft.Authorization/roleDefinitions/2a5c394f-5eb7-4d4f-9c8e-e8eae39faebc",
+ "properties": {
+ "roleName": "Lab Services Reader",
+ "description": "The lab services reader role",
+ "assignableScopes": [
+ "/"
+ ],
+ "permissions": [
+ {
+ "actions": [
+ "Microsoft.LabServices/*/read",
+ "Microsoft.Authorization/*/read",
+ "Microsoft.Resources/deployments/*",
+ "Microsoft.Resources/subscriptions/resourceGroups/read"
+ ],
+ "notActions": [],
+ "dataActions": [],
+ "notDataActions": []
+ }
+ ]
+ }
+}
+```
+ ## Monitor
security Code Integrity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/code-integrity.md
Title: Platform code integrity - Azure Security description: Learn how Microsoft ensures that only authorized software is running. --++ Previously updated : 06/10/2021 Last updated : 11/10/2022 # Platform code integrity
security Encryption Atrest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/encryption-atrest.md
na Previously updated : 08/13/2020 Last updated : 11/10/2022
All Managed Disks, Snapshots, and Images are encrypted using Storage Service Enc
#### Custom encryption at rest
-It is recommended that whenever possible, IaaS applications leverage Azure Disk Encryption and Encryption at Rest options provided by any consumed Azure services. In some cases, such as irregular encryption requirements or non-Azure based storage, a developer of an IaaS application may need to implement encryption at rest themselves. Developers of IaaS solutions can better integrate with Azure management and customer expectations by leveraging certain Azure components. Specifically, developers should use the Azure Key Vault service to provide secure key storage as well as provide their customers with consistent key management options with that of most Azure platform services. Additionally, custom solutions should use Azure-Managed Service Identities to enable service accounts to access encryption keys. For developer information on Azure Key Vault and Managed Service Identities, see their respective SDKs.
+It is recommended that, whenever possible, IaaS applications leverage Azure Disk Encryption and the Encryption at Rest options provided by any consumed Azure services. In some cases, such as irregular encryption requirements or non-Azure based storage, a developer of an IaaS application may need to implement encryption at rest themselves. Developers of IaaS solutions can better integrate with Azure management and customer expectations by leveraging certain Azure components. Specifically, developers should use the Azure Key Vault service to provide secure key storage, and to give their customers key management options that are consistent with most Azure platform services. Additionally, custom solutions should use Azure managed service identities to enable service accounts to access encryption keys. For developer information on Azure Key Vault and managed service identities, see their respective SDKs.
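To make that guidance concrete, here is a minimal sketch (not part of the original article) of the envelope-encryption pattern a custom encryption-at-rest implementation typically uses: a locally generated data-encryption key is wrapped and unwrapped with a Key Vault key, and `DefaultAzureCredential` picks up a managed identity when the code runs in Azure. The vault and key names are placeholders.

```csharp
using System;
using System.Security.Cryptography;
using Azure.Identity;
using Azure.Security.KeyVault.Keys.Cryptography;

// Placeholder key identifier; replace with your own vault and key names.
var keyId = new Uri("https://<YOUR-VAULT-NAME>.vault.azure.net/keys/<YOUR-KEY-NAME>");

// DefaultAzureCredential uses the managed identity of the VM or service when running in Azure.
var crypto = new CryptographyClient(keyId, new DefaultAzureCredential());

// Generate a local data-encryption key, then protect it with the Key Vault key (envelope encryption).
byte[] dataKey = RandomNumberGenerator.GetBytes(32);
WrapResult wrapped = await crypto.WrapKeyAsync(KeyWrapAlgorithm.RsaOaep256, dataKey);

// Store wrapped.EncryptedKey alongside the encrypted data; unwrap it when the data must be decrypted.
UnwrapResult unwrapped = await crypto.UnwrapKeyAsync(KeyWrapAlgorithm.RsaOaep256, wrapped.EncryptedKey);
Console.WriteLine($"Recovered {unwrapped.Key.Length}-byte data-encryption key.");
```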
## Azure resource providers encryption model support
security Firmware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/firmware.md
Title: Firmware security - Azure Security description: Learn how Microsoft secures Azure hardware and firmware. --++ Previously updated : 06/24/2021 Last updated : 11/10/2022 # Firmware security
security Hypervisor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/hypervisor.md
Title: Hypervisor security on the Azure fleet - Azure Security description: Technical overview of hypervisor security on the Azure fleet. --++ Previously updated : 06/24/2021 Last updated : 11/10/2022 # Hypervisor security on the Azure fleet
security Measured Boot Host Attestation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/measured-boot-host-attestation.md
Title: Firmware measured boot and host attestation - Azure Security description: Technical overview of Azure firmware measured boot and host attestation. --++ Previously updated : 06/24/2021 Last updated : 11/10/2022 # Measured boot and host attestation
security Platform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/platform.md
Title: Azure platform integrity and security - Azure Security description: Technical overview of Azure platform integrity and security. --++ Previously updated : 06/24/2021 Last updated : 11/10/2022 # Platform integrity and security overview
security Project Cerberus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/project-cerberus.md
Title: Firmware integrity - Azure Security description: Learn about cryptographic measurements to ensure firmware integrity. --++ Previously updated : 06/24/2021 Last updated : 11/10/2022 # Project Cerberus
security Secure Boot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/secure-boot.md
Title: Firmware secure boot - Azure Security description: Technical overview of Azure firmware secure boot. --++ Previously updated : 06/24/2021 Last updated : 11/10/2022 # Secure Boot
service-bus-messaging Service Bus Dotnet Get Started With Queues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-dotnet-get-started-with-queues.md
Title: Quickstart - Use Azure Service Bus queues from .NET app
description: This quickstart shows you how to send messages to and receive messages from Azure Service Bus queues using the .NET programming language. dotnet Previously updated : 11/08/2022 Last updated : 11/10/2022 ms.devlang: csharp # Quickstart: Send and receive messages from an Azure Service Bus queue (.NET)
-In this quickstart, you will do the following steps:
+In this quickstart, you'll do the following steps:
1. Create a Service Bus namespace, using the Azure portal. 2. Create a Service Bus queue, using the Azure portal.
In this quickstart, you will do the following steps:
> [!NOTE] > - This quick start provides step-by-step instructions to implement a simple scenario of sending a batch of messages to a Service Bus queue and then receiving them. For an overview of the .NET client library, see [Azure Service Bus client library for .NET](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/servicebus/Azure.Messaging.ServiceBus/README.md). For more samples, see [Service Bus .NET samples on GitHub](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/servicebus/Azure.Messaging.ServiceBus/samples).
+> - This quickstart shows you two ways of connecting to Azure Service Bus: **connection string** and **passwordless**. The first option shows you how to use a connection string to connect to a Service Bus namespace. The second option shows you how to use your security principal in Azure Active Directory and role-based access control (RBAC) to connect to a Service Bus namespace, so you don't need a hard-coded connection string in your code, in a configuration file, or in secure storage such as Azure Key Vault. If you're new to Azure, you may find the connection string option easier to follow. We recommend using the passwordless option in real-world applications and production environments. For more information, see [Authentication and authorization](service-bus-authentication-and-authorization.md).
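For orientation, here is a minimal sketch of how the two options differ when constructing the client. The full, runnable code for each option appears later in this quickstart; the namespace and connection string values below are placeholders.

```csharp
using Azure.Identity;
using Azure.Messaging.ServiceBus;

// Connection string option: the shared access key travels inside the connection string itself.
var clientFromConnectionString = new ServiceBusClient("<NAMESPACE-CONNECTION-STRING>");

// Passwordless option: authenticate the signed-in Azure AD identity, which must hold an RBAC role
// such as Azure Service Bus Data Owner on the namespace.
var passwordlessClient = new ServiceBusClient(
    "<NAMESPACE-NAME>.servicebus.windows.net",
    new DefaultAzureCredential());
```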
+ ## Prerequisites
If you're new to the service, see [Service Bus overview](service-bus-messaging-o
- **Azure subscription**. To use Azure services, including Azure Service Bus, you need a subscription. If you don't have an existing Azure account, you can sign up for a [free trial](https://azure.microsoft.com/free/dotnet). - **Visual Studio 2022**. The sample application makes use of new features that were introduced in C# 10. You can still use the Service Bus client library with previous C# language versions, but the syntax may vary. To use the latest syntax, we recommend that you install .NET 6.0 or higher and set the language version to `latest`. If you're using Visual Studio, versions before Visual Studio 2022 aren't compatible with the tools needed to build C# 10 projects.
-## [Connection String](#tab/connection-string)
-
-## [Passwordless](#tab/passwordless)
- [!INCLUDE [service-bus-passwordless-template-tabbed](../../includes/passwordless/service-bus/service-bus-passwordless-template-tabbed.md)]
+## Launch Visual Studio and sign in to Azure
+
+You can authorize access to the Service Bus namespace using the following steps:
+1. Launch Visual Studio. If you see the **Get started** window, select the **Continue without code** link in the right pane.
+1. Select the **Sign in** button in the top right of Visual Studio.
+
+ :::image type="content" source="./media/service-bus-dotnet-get-started-with-queues/azure-sign-button-visual-studio.png" alt-text="Screenshot showing the button to sign in to Azure using Visual Studio.":::
+1. Sign in using the Azure AD account that you assigned a role to previously.
+
+ :::image type="content" source="..//storage/blobs/media/storage-quickstart-blobs-dotnet/sign-in-visual-studio-account-small.png" alt-text="Screenshot showing the account selection.":::
-> [!IMPORTANT]
-> Note down the connection string to the namespace, the queue name. You'll use them later in this tutorial.
## Send messages to the queue
This section shows you how to create a .NET console application to send messages
### Create a console application
-1. Start Visual Studio 2022.
-1. Select **Create a new project**.
+1. In Visual Studio, select the **File** -> **New** -> **Project** menu.
1. On the **Create a new project** dialog box, do the following steps: If you don't see this dialog box, select **File** on the menu, select **New**, and then select **Project**. 1. Select **C#** for the programming language. 1. Select **Console** for the type of the application.
- 1. Select **Console Application** from the results list.
+ 1. Select **Console App** from the results list.
1. Then, select **Next**. :::image type="content" source="./media/service-bus-dotnet-get-started-with-queues/new-send-project.png" alt-text="Image showing the Create a new project dialog box with C# and Console selected":::
This section shows you how to create a .NET console application to send messages
### Add the NuGet packages to the project
+### [Passwordless](#tab/passwordless)
+
+1. Select **Tools** > **NuGet Package Manager** > **Package Manager Console** from the menu.
+1. Run the following command to install the **Azure.Messaging.ServiceBus** NuGet package.
+
+ ```powershell
+ Install-Package Azure.Messaging.ServiceBus
+ ```
+1. Run the following command to install the **Azure.Identity** NuGet package.
+
+ ```powershell
+ Install-Package Azure.Identity
+ ```
+
+### [Connection String](#tab/connection-string)
+ 1. Select **Tools** > **NuGet Package Manager** > **Package Manager Console** from the menu. 1. Run the following command to install the **Azure.Messaging.ServiceBus** NuGet package:
This section shows you how to create a .NET console application to send messages
Install-Package Azure.Messaging.ServiceBus ``` +++ ## Add code to send messages to the queue 1. Replace the contents of `Program.cs` with the following code. The important steps are outlined below, with additional information in the code comments.
- ### [Connection string](#tab/connection-string)
- > [!IMPORTANT]
- > Update placeholder values (`<NAMESPACE-CONNECTION-STRING>` and `<QUEUE-NAME>`) in the code snippet with actual values you noted down earlier.
+ ### [Passwordless](#tab/passwordless)
- * Creates a [ServiceBusClient](/dotnet/api/azure.messaging.servicebus.servicebusclient) object using the connection string.
+ * Creates a [ServiceBusClient](/dotnet/api/azure.messaging.servicebus.servicebusclient) object using the `DefaultAzureCredential` object. `DefaultAzureCredential` will automatically discover and use the credentials of your Visual Studio login to authenticate to Azure Service Bus.
* Invokes the [CreateSender](/dotnet/api/azure.messaging.servicebus.servicebusclient.createsender) method on the [ServiceBusClient](/dotnet/api/azure.messaging.servicebus.servicebusclient) object to create a [ServiceBusSender](/dotnet/api/azure.messaging.servicebus.servicebussender) object for the specific Service Bus queue. * Creates a [ServiceBusMessageBatch](/dotnet/api/azure.messaging.servicebus.servicebusmessagebatch) object by using the [ServiceBusSender.CreateMessageBatchAsync](/dotnet/api/azure.messaging.servicebus.servicebussender.createmessagebatchasync) method. * Add messages to the batch using the [ServiceBusMessageBatch.TryAddMessage](/dotnet/api/azure.messaging.servicebus.servicebusmessagebatch.tryaddmessage). * Sends the batch of messages to the Service Bus queue using the [ServiceBusSender.SendMessagesAsync](/dotnet/api/azure.messaging.servicebus.servicebussender.sendmessagesasync) method.
+ > [!IMPORTANT]
+   > Update the placeholder values (`<NAMESPACE-NAME>` and `<QUEUE-NAME>`) in the code snippet with the names of your Service Bus namespace and queue.
+++ ```csharp using Azure.Messaging.ServiceBus;-
+ using Azure.Identity;
+
+ // name of your Service Bus queue
// the client that owns the connection and can be used to create senders and receivers ServiceBusClient client;
This section shows you how to create a .NET console application to send messages
// of the application, which is best practice when messages are being published or read // regularly. //
- // set the transport type to AmqpWebSockets so that the ServiceBusClient uses the port 443.
- // If you use the default AmqpTcp, you will need to make sure that the ports 5671 and 5672 are open
-
- // TODO: Replace the <NAMESPACE-CONNECTION-STRING> and <QUEUE-NAME> placeholders
- var clientOptions = new ServiceBusClientOptions()
+ // Set the transport type to AmqpWebSockets so that the ServiceBusClient uses the port 443.
+ // If you use the default AmqpTcp, ensure that ports 5671 and 5672 are open.
+ var clientOptions = new ServiceBusClientOptions
{ TransportType = ServiceBusTransportType.AmqpWebSockets };
- client = new ServiceBusClient("<NAMESPACE-CONNECTION-STRING>", clientOptions);
+ //TODO: Replace the "<NAMESPACE-NAME>" and "<QUEUE-NAME>" placeholders.
+ client = new ServiceBusClient(
+ "<NAMESPACE-NAME>.servicebus.windows.net",
+ new DefaultAzureCredential(),
+ clientOptions);
sender = client.CreateSender("<QUEUE-NAME>"); // create a batch
This section shows you how to create a .NET console application to send messages
Console.ReadKey(); ```
- ### [Passwordless](#tab/passwordless)
-
- > [!IMPORTANT]
- > Per the `TODO` comment, update the placeholder values in the code snippets with the values from the Service Bus you created.
+ ### [Connection string](#tab/connection-string)
- * Creates a [ServiceBusClient](/dotnet/api/azure.messaging.servicebus.servicebusclient) object using the passwordless `DefaultAzureCredential` object. `DefaultAzureCredential` will automatically discover and use the credentials of your Visual Studio login to authenticate to Azure Service Bus.
+ * Creates a [ServiceBusClient](/dotnet/api/azure.messaging.servicebus.servicebusclient) object using the connection string.
* Invokes the [CreateSender](/dotnet/api/azure.messaging.servicebus.servicebusclient.createsender) method on the [ServiceBusClient](/dotnet/api/azure.messaging.servicebus.servicebusclient) object to create a [ServiceBusSender](/dotnet/api/azure.messaging.servicebus.servicebussender) object for the specific Service Bus queue. * Creates a [ServiceBusMessageBatch](/dotnet/api/azure.messaging.servicebus.servicebusmessagebatch) object by using the [ServiceBusSender.CreateMessageBatchAsync](/dotnet/api/azure.messaging.servicebus.servicebussender.createmessagebatchasync) method. * Add messages to the batch using the [ServiceBusMessageBatch.TryAddMessage](/dotnet/api/azure.messaging.servicebus.servicebusmessagebatch.tryaddmessage). * Sends the batch of messages to the Service Bus queue using the [ServiceBusSender.SendMessagesAsync](/dotnet/api/azure.messaging.servicebus.servicebussender.sendmessagesasync) method.
+ > [!IMPORTANT]
+   > Update the placeholder values (`<NAMESPACE-CONNECTION-STRING>` and `<QUEUE-NAME>`) in the code snippet with the connection string to your Service Bus namespace and the name of your queue.
++ ```csharp using Azure.Messaging.ServiceBus;
- using Azure.Identity;
-
- // name of your Service Bus queue
+ // the client that owns the connection and can be used to create senders and receivers ServiceBusClient client;
This section shows you how to create a .NET console application to send messages
// of the application, which is best practice when messages are being published or read // regularly. //
- // Set the transport type to AmqpWebSockets so that the ServiceBusClient uses the port 443.
- // If you use the default AmqpTcp, ensure that ports 5671 and 5672 are open.
- var clientOptions = new ServiceBusClientOptions
+ // set the transport type to AmqpWebSockets so that the ServiceBusClient uses the port 443.
+ // If you use the default AmqpTcp, you will need to make sure that the ports 5671 and 5672 are open
+
+ // TODO: Replace the <NAMESPACE-CONNECTION-STRING> and <QUEUE-NAME> placeholders
+ var clientOptions = new ServiceBusClientOptions()
{ TransportType = ServiceBusTransportType.AmqpWebSockets };
- //TODO: Replace the "<NAMESPACE-NAME>" and "<QUEUE-NAME>" placeholders.
- client = new ServiceBusClient(
- "<NAMESPACE-NAME>.servicebus.windows.net",
- new DefaultAzureCredential(),
- clientOptions);
+ client = new ServiceBusClient("<NAMESPACE-CONNECTION-STRING>", clientOptions);
sender = client.CreateSender("<QUEUE-NAME>"); // create a batch
This section shows you how to create a .NET console application to send messages
Console.WriteLine("Press any key to end the application"); Console.ReadKey(); ```+ 6. Build the project, and ensure that there are no errors.
This section shows you how to create a .NET console application to send messages
A batch of 3 messages has been published to the queue ```
+ > [!IMPORTANT]
+ > In most cases, it will take a minute or two for the role assignment to propagate in Azure. In rare cases, it may take up to **eight minutes**. If you receive authentication errors when you first run your code, wait a few moments and try again.
8. In the Azure portal, follow these steps: 1. Navigate to your Service Bus namespace. 1. On the **Overview** page, select the queue in the bottom-middle pane.
In this section, you'll create a .NET console application that receives messages
### Add the NuGet packages to the project
-### [Connection String](#tab/connection-string)
+### [Passwordless](#tab/passwordless)
1. Select **Tools** > **NuGet Package Manager** > **Package Manager Console** from the menu.
-1. Run the following command to install the **Azure.Messaging.ServiceBus** NuGet package:
+1. Select **QueueReceiver** in the **Default project** drop-down list.
+
+ :::image type="content" source="media/service-bus-dotnet-get-started-with-queues/package-manager-console.png" alt-text="Screenshot showing QueueReceiver project selected in the Package Manager Console.":::
+1. Run the following command to install the **Azure.Messaging.ServiceBus** NuGet package.
```powershell Install-Package Azure.Messaging.ServiceBus ```
+1. Run the following command to install the **Azure.Identity** NuGet package.
- :::image type="content" source="media/service-bus-dotnet-get-started-with-queues/package-manager-console.png" alt-text="Screenshot showing QueueReceiver project selected in the Package Manager Console.":::
+ ```powershell
+ Install-Package Azure.Identity
+ ```
-### [Passwordless](#tab/passwordless)
+### [Connection String](#tab/connection-string)
1. Select **Tools** > **NuGet Package Manager** > **Package Manager Console** from the menu.
-1. Run the following command to install the **Azure.Messaging.ServiceBus** and **Azure.Identity** NuGet packages:
+1. Run the following command to install the **Azure.Messaging.ServiceBus** NuGet package:
```powershell Install-Package Azure.Messaging.ServiceBus
- Install-Package Azure.Identity
``` :::image type="content" source="media/service-bus-dotnet-get-started-with-queues/package-manager-console.png" alt-text="Screenshot showing QueueReceiver project selected in the Package Manager Console."::: +
In this section, you'll add code to retrieve messages from the queue.
1. Within the `Program` class, add the following code:
- ### [Connection string](#tab/connection-string)
-
+ ### [Passwordless](#tab/passwordless)
+ ```csharp using System.Threading.Tasks;
+ using Azure.Identity;
using Azure.Messaging.ServiceBus; // the client that owns the connection and can be used to create senders and receivers
In this section, you'll add code to retrieve messages from the queue.
ServiceBusProcessor processor; ```
- ### [Passwordless](#tab/passwordless)
-
+ ### [Connection string](#tab/connection-string)
+
```csharp using System.Threading.Tasks;
- using Azure.Identity;
using Azure.Messaging.ServiceBus; // the client that owns the connection and can be used to create senders and receivers
In this section, you'll add code to retrieve messages from the queue.
// the processor that reads and processes messages from the queue ServiceBusProcessor processor; ```+ 1. Append the following methods to the end of the `Program` class.
In this section, you'll add code to retrieve messages from the queue.
1. Append the following code to the end of the `Program` class. The important steps are outlined below, with additional information in the code comments.
- ### [Connection string](#tab/connection-string)
+ ### [Passwordless](#tab/passwordless)
- * Creates a [ServiceBusClient](/dotnet/api/azure.messaging.servicebus.servicebusclient) object using the connection string.
- * Invokes the [CreateProcessor](/dotnet/api/azure.messaging.servicebus.servicebusclient.createprocessor) method on the [ServiceBusClient](/dotnet/api/azure.messaging.servicebus.servicebusclient) object to create a [ServiceBusProcessor](/dotnet/api/azure.messaging.servicebus.servicebusprocessor) object for the specified Service Bus queue.
+ * Creates a [ServiceBusClient](/dotnet/api/azure.messaging.servicebus.servicebusclient) object using the `DefaultAzureCredential` object. `DefaultAzureCredential` will automatically discover and use the credentials of your Visual Studio login to authenticate to Azure Service Bus.
+ * Invokes the [CreateProcessor](/dotnet/api/azure.messaging.servicebus.servicebusclient.createprocessor) method on the `ServiceBusClient` object to create a [ServiceBusProcessor](/dotnet/api/azure.messaging.servicebus.servicebusprocessor) object for the specified Service Bus queue.
* Specifies handlers for the [ProcessMessageAsync](/dotnet/api/azure.messaging.servicebus.servicebusprocessor.processmessageasync) and [ProcessErrorAsync](/dotnet/api/azure.messaging.servicebus.servicebusprocessor.processerrorasync) events of the [ServiceBusProcessor](/dotnet/api/azure.messaging.servicebus.servicebusprocessor) object.
- * Starts processing messages by invoking the [StartProcessingAsync](/dotnet/api/azure.messaging.servicebus.servicebusprocessor.startprocessingasync) on the [ServiceBusProcessor](/dotnet/api/azure.messaging.servicebus.servicebusprocessor) object.
- * When user presses a key to end the processing, invokes the [StopProcessingAsync](/dotnet/api/azure.messaging.servicebus.servicebusprocessor.stopprocessingasync) on the [ServiceBusProcessor](/dotnet/api/azure.messaging.servicebus.servicebusprocessor) object.
+ * Starts processing messages by invoking the [StartProcessingAsync](/dotnet/api/azure.messaging.servicebus.servicebusprocessor.startprocessingasync) on the `ServiceBusProcessor` object.
+ * When user presses a key to end the processing, invokes the [StopProcessingAsync](/dotnet/api/azure.messaging.servicebus.servicebusprocessor.stopprocessingasync) on the `ServiceBusProcessor` object.
+
+ > [!IMPORTANT]
+ > Update placeholder values (`<NAMESPACE-NAME>` and `<QUEUE-NAME>`) in the code snippet with names of your Service Bus namespace and queue.
```csharp // The Service Bus client types are safe to cache and use as a singleton for the lifetime
In this section, you'll add code to retrieve messages from the queue.
// Set the transport type to AmqpWebSockets so that the ServiceBusClient uses port 443. // If you use the default AmqpTcp, make sure that ports 5671 and 5672 are open.
- // TODO: Replace the <NAMESPACE-CONNECTION-STRING> and <QUEUE-NAME> placeholders
+ // TODO: Replace the <NAMESPACE-NAME> placeholder
var clientOptions = new ServiceBusClientOptions() { TransportType = ServiceBusTransportType.AmqpWebSockets };
- client = new ServiceBusClient("<NAMESPACE-CONNECTION-STRING>", clientOptions);
+ client = new ServiceBusClient(
+ "<NAMESPACE-NAME>.servicebus.windows.net",
+ new DefaultAzureCredential(),
+ clientOptions);
// create a processor that we can use to process the messages // TODO: Replace the <QUEUE-NAME> placeholder
In this section, you'll add code to retrieve messages from the queue.
} ```
- ### [Passwordless](#tab/passwordless)
+ ### [Connection string](#tab/connection-string)
- * Creates a [ServiceBusClient](/dotnet/api/azure.messaging.servicebus.servicebusclient) object using the `DefaultAzureCredential` object. `DefaultAzureCredential` will automatically discover and use the credentials of your Visual Studio login to authenticate to Azure Service Bus.
- * Invokes the [CreateProcessor](/dotnet/api/azure.messaging.servicebus.servicebusclient.createprocessor) method on the `ServiceBusClient` object to create a [ServiceBusProcessor](/dotnet/api/azure.messaging.servicebus.servicebusprocessor) object for the specified Service Bus queue.
+ * Creates a [ServiceBusClient](/dotnet/api/azure.messaging.servicebus.servicebusclient) object using the connection string.
+ * Invokes the [CreateProcessor](/dotnet/api/azure.messaging.servicebus.servicebusclient.createprocessor) method on the [ServiceBusClient](/dotnet/api/azure.messaging.servicebus.servicebusclient) object to create a [ServiceBusProcessor](/dotnet/api/azure.messaging.servicebus.servicebusprocessor) object for the specified Service Bus queue.
* Specifies handlers for the [ProcessMessageAsync](/dotnet/api/azure.messaging.servicebus.servicebusprocessor.processmessageasync) and [ProcessErrorAsync](/dotnet/api/azure.messaging.servicebus.servicebusprocessor.processerrorasync) events of the [ServiceBusProcessor](/dotnet/api/azure.messaging.servicebus.servicebusprocessor) object.
- * Starts processing messages by invoking the [StartProcessingAsync](/dotnet/api/azure.messaging.servicebus.servicebusprocessor.startprocessingasync) on the `ServiceBusProcessor` object.
- * When user presses a key to end the processing, invokes the [StopProcessingAsync](/dotnet/api/azure.messaging.servicebus.servicebusprocessor.stopprocessingasync) on the `ServiceBusProcessor` object.
-
+ * Starts processing messages by invoking the [StartProcessingAsync](/dotnet/api/azure.messaging.servicebus.servicebusprocessor.startprocessingasync) on the [ServiceBusProcessor](/dotnet/api/azure.messaging.servicebus.servicebusprocessor) object.
+ * When user presses a key to end the processing, invokes the [StopProcessingAsync](/dotnet/api/azure.messaging.servicebus.servicebusprocessor.stopprocessingasync) on the [ServiceBusProcessor](/dotnet/api/azure.messaging.servicebus.servicebusprocessor) object.
```csharp // The Service Bus client types are safe to cache and use as a singleton for the lifetime
In this section, you'll add code to retrieve messages from the queue.
// Set the transport type to AmqpWebSockets so that the ServiceBusClient uses port 443. // If you use the default AmqpTcp, make sure that ports 5671 and 5672 are open.
- // TODO: Replace the <NAMESPACE-NAME> placeholder
+ // TODO: Replace the <NAMESPACE-CONNECTION-STRING> and <QUEUE-NAME> placeholders
var clientOptions = new ServiceBusClientOptions() { TransportType = ServiceBusTransportType.AmqpWebSockets };
- client = new ServiceBusClient(
- "<NAMESPACE-NAME>.servicebus.windows.net",
- new DefaultAzureCredential(),
- clientOptions);
+ client = new ServiceBusClient("<NAMESPACE-CONNECTION-STRING>", clientOptions);
// create a processor that we can use to process the messages // TODO: Replace the <QUEUE-NAME> placeholder
In this section, you'll add code to retrieve messages from the queue.
await client.DisposeAsync(); } ```+ 1. The completed `Program` class should match the following code:
- ### [Connection string](#tab/connection-string)
+ ### [Passwordless](#tab/passwordless)
```csharp
- using Azure.Messaging.ServiceBus;
- using System;
using System.Threading.Tasks;
+ using Azure.Messaging.ServiceBus;
+ using Azure.Identity;
// the client that owns the connection and can be used to create senders and receivers ServiceBusClient client;
In this section, you'll add code to retrieve messages from the queue.
// of the application, which is best practice when messages are being published or read // regularly. //
- // Set the transport type to AmqpWebSockets so that the ServiceBusClient uses port 443.
+ // Set the transport type to AmqpWebSockets so that the ServiceBusClient uses port 443.
// If you use the default AmqpTcp, make sure that ports 5671 and 5672 are open.
-
- // TODO: Replace the <NAMESPACE-CONNECTION-STRING> and <QUEUE-NAME> placeholders
- var clientOptions = new ServiceBusClientOptions()
+
+ // TODO: Replace the <NAMESPACE-NAME> and <QUEUE-NAME> placeholders
+ var clientOptions = new ServiceBusClientOptions()
{ TransportType = ServiceBusTransportType.AmqpWebSockets };
- client = new ServiceBusClient("<NAMESPACE-CONNECTION-STRING>", clientOptions);
-
+ client = new ServiceBusClient("<NAMESPACE-NAME>.servicebus.windows.net",
+ new DefaultAzureCredential(), clientOptions);
+
// create a processor that we can use to process the messages // TODO: Replace the <QUEUE-NAME> placeholder processor = client.CreateProcessor("<QUEUE-NAME>", new ServiceBusProcessorOptions());
In this section, you'll add code to retrieve messages from the queue.
} ```
- ### [Passwordless](#tab/passwordless)
+ ### [Connection string](#tab/connection-string)
```csharp
- using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;
- using Azure.Identity;
+ using System;
+ using System.Threading.Tasks;
// the client that owns the connection and can be used to create senders and receivers ServiceBusClient client;
In this section, you'll add code to retrieve messages from the queue.
// of the application, which is best practice when messages are being published or read // regularly. //
- // Set the transport type to AmqpWebSockets so that the ServiceBusClient uses port 443.
+ // Set the transport type to AmqpWebSockets so that the ServiceBusClient uses port 443.
// If you use the default AmqpTcp, make sure that ports 5671 and 5672 are open.-
- // TODO: Replace the <NAMESPACE-NAME> and <QUEUE-NAME> placeholders
- var clientOptions = new ServiceBusClientOptions()
+
+ // TODO: Replace the <NAMESPACE-CONNECTION-STRING> and <QUEUE-NAME> placeholders
+ var clientOptions = new ServiceBusClientOptions()
{ TransportType = ServiceBusTransportType.AmqpWebSockets };
- client = new ServiceBusClient("<NAMESPACE-NAME>.servicebus.windows.net",
- new DefaultAzureCredential(), clientOptions);
-
+ client = new ServiceBusClient("<NAMESPACE-CONNECTION-STRING>", clientOptions);
+
// create a processor that we can use to process the messages // TODO: Replace the <QUEUE-NAME> placeholder processor = client.CreateProcessor("<QUEUE-NAME>", new ServiceBusProcessorOptions());
In this section, you'll add code to retrieve messages from the queue.
return Task.CompletedTask; } ```+ 1. Build the project, and ensure that there are no errors.
service-bus-messaging Service Bus Dotnet How To Use Topics Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-dotnet-how-to-use-topics-subscriptions.md
In this quickstart, you'll do the following steps:
> [!NOTE] > This quick start provides step-by-step instructions to implement a simple scenario of sending a batch of messages to a Service Bus topic and receiving those messages from a subscription of the topic. For more samples on other and advanced scenarios, see [Service Bus .NET samples on GitHub](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/servicebus/Azure.Messaging.ServiceBus/samples).
+> - This quickstart shows you two ways of connecting to Azure Service Bus: **connection string** and **passwordless**. The first option shows you how to use a connection string to connect to a Service Bus namespace. The second option shows you how to use your security principal in Azure Active Directory and role-based access control (RBAC) to connect to a Service Bus namespace, so you don't need a hard-coded connection string in your code, in a configuration file, or in secure storage such as Azure Key Vault. If you're new to Azure, you may find the connection string option easier to follow. We recommend using the passwordless option in real-world applications and production environments. For more information, see [Authentication and authorization](service-bus-authentication-and-authorization.md).
## Prerequisites
If you're new to the service, see [Service Bus overview](service-bus-messaging-o
- **Azure subscription**. To use Azure services, including Azure Service Bus, you need a subscription. If you don't have an existing Azure account, you can sign up for a [free trial](https://azure.microsoft.com/free/dotnet/). - **Visual Studio 2022**. The sample application makes use of new features that were introduced in C# 10. You can still use the Service Bus client library with previous C# language versions, but the syntax may vary. To use the latest syntax, we recommend that you install .NET 6.0 or higher and set the language version to `latest`. If you're using Visual Studio, versions before Visual Studio 2022 aren't compatible with the tools needed to build C# 10 projects.
-## [Connection String](#tab/connection-string)
-## [Passwordless](#tab/passwordless)
-- [!INCLUDE [service-bus-passwordless-template-tabbed](../../includes/passwordless/service-bus/service-bus-passwordless-template-tabbed.md)]
+## Launch Visual Studio and sign in to Azure
+
+You can authorize access to the Service Bus namespace using the following steps:
-> [!IMPORTANT]
-> Note down the connection string to the namespace, the topic name, and the subscription name. You'll use them later in this tutorial.
+1. Launch Visual Studio. If you see the **Get started** window, select the **Continue without code** link in the right pane.
+1. Select the **Sign in** button in the top right of Visual Studio.
+
+ :::image type="content" source="./media/service-bus-dotnet-get-started-with-queues/azure-sign-button-visual-studio.png" alt-text="Screenshot showing the button to sign in to Azure using Visual Studio.":::
+1. Sign in using the Azure AD account that you assigned a role to previously.
+
+ :::image type="content" source="..//storage/blobs/media/storage-quickstart-blobs-dotnet/sign-in-visual-studio-account-small.png" alt-text="Screenshot showing the account selection.":::
## Send messages to the topic This section shows you how to create a .NET console application to send messages to a Service Bus topic.
This section shows you how to create a .NET console application to send messages
### Create a console application
-1. Start Visual Studio 2022.
-1. Select **Create a new project**.
+1. In Visual Studio, select the **File** -> **New** -> **Project** menu.
1. On the **Create a new project** dialog box, do the following steps: If you don't see this dialog box, select **File** on the menu, select **New**, and then select **Project**. 1. Select **C#** for the programming language. 1. Select **Console** for the type of the application.
- 1. Select **Console Application** from the results list.
+ 1. Select **Console App** from the results list.
1. Then, select **Next**. :::image type="content" source="./media/service-bus-dotnet-get-started-with-queues/new-send-project.png" alt-text="Image showing the Create a new project dialog box with C# and Console selected":::
This section shows you how to create a .NET console application to send messages
### Add the NuGet packages to the project
+### [Passwordless](#tab/passwordless)
+ 1. Select **Tools** > **NuGet Package Manager** > **Package Manager Console** from the menu.
-1. Run the following command to install the **Azure.Messaging.ServiceBus** NuGet package:
+1. Run the following command to install the **Azure.Messaging.ServiceBus** NuGet package.
```powershell Install-Package Azure.Messaging.ServiceBus ```
+1. Run the following command to install the **Azure.Identity** NuGet package.
+
+ ```powershell
+ Install-Package Azure.Identity
+ ```
+### [Connection String](#tab/connection-string)
+
+1. Select **Tools** > **NuGet Package Manager** > **Package Manager Console** from the menu.
+1. Run the following command to install the **Azure.Messaging.ServiceBus** NuGet package:
+
+ ```Powershell
+ Install-Package Azure.Messaging.ServiceBus
+ ```
+ ### Add code to send messages to the topic 1. Replace the contents of Program.cs with the following code. The important steps are outlined below, with additional information in the code comments.
- ## [Connection String](#tab/connection-string)
-
- > [!IMPORTANT]
- > Update placeholder values (`<NAMESPACE-CONNECTION-STRING>` and `<TOPIC-NAME>`) in the code snippet with actual values you noted down earlier.
+ ## [Passwordless](#tab/passwordless)
- 1. Creates a [ServiceBusClient](/dotnet/api/azure.messaging.servicebus.servicebusclient) object using the connection string to the namespace.
+ 1. Creates a [ServiceBusClient](/dotnet/api/azure.messaging.servicebus.servicebusclient) object using the `DefaultAzureCredential` object. `DefaultAzureCredential` will automatically discover and use the credentials of your Visual Studio login to authenticate to Azure Service Bus.
1. Invokes the [CreateSender](/dotnet/api/azure.messaging.servicebus.servicebusclient.createsender) method on the `ServiceBusClient` object to create a [ServiceBusSender](/dotnet/api/azure.messaging.servicebus.servicebussender) object for the specific Service Bus topic. 1. Creates a [ServiceBusMessageBatch](/dotnet/api/azure.messaging.servicebus.servicebusmessagebatch) object by using the [ServiceBusSender.CreateMessageBatchAsync](/dotnet/api/azure.messaging.servicebus.servicebussender.createmessagebatchasync). 1. Add messages to the batch using the [ServiceBusMessageBatch.TryAddMessage](/dotnet/api/azure.messaging.servicebus.servicebusmessagebatch.tryaddmessage). 1. Sends the batch of messages to the Service Bus topic using the [ServiceBusSender.SendMessagesAsync](/dotnet/api/azure.messaging.servicebus.servicebussender.sendmessagesasync) method.+
+ > [!IMPORTANT]
+ > Update placeholder values (`<NAMESPACE-NAME>` and `<TOPIC-NAME>`) in the code snippet with names of your Service Bus namespace and topic.
```csharp using System.Threading.Tasks; using Azure.Messaging.ServiceBus;
+ using Azure.Identity;
// the client that owns the connection and can be used to create senders and receivers ServiceBusClient client;
This section shows you how to create a .NET console application to send messages
// The Service Bus client types are safe to cache and use as a singleton for the lifetime // of the application, which is best practice when messages are being published or read // regularly.
- //TODO: Replace the "<NAMESPACE-CONNECTION-STRING>" and "<TOPIC-NAME>" placeholders.
- client = new ServiceBusClient("<NAMESPACE-CONNECTION-STRING>");
+
+ //TODO: Replace the "<NAMESPACE-NAME>" and "<TOPIC-NAME>" placeholders.
+ client = new ServiceBusClient(
+ "<NAMESPACE-NAME>.servicebus.windows.net",
+ new DefaultAzureCredential());
sender = client.CreateSender("<TOPIC-NAME>"); // create a batch
This section shows you how to create a .NET console application to send messages
Console.ReadKey(); ```
- ## [Passwordless](#tab/passwordless)
+ ## [Connection String](#tab/connection-string)
+
+ > [!IMPORTANT]
+   > Update the placeholder values (`<NAMESPACE-CONNECTION-STRING>` and `<TOPIC-NAME>`) in the code snippet with the connection string to your Service Bus namespace and the name of your topic.
1. Creates a [ServiceBusClient](/dotnet/api/azure.messaging.servicebus.servicebusclient) object using the connection string to the namespace. 1. Invokes the [CreateSender](/dotnet/api/azure.messaging.servicebus.servicebusclient.createsender) method on the `ServiceBusClient` object to create a [ServiceBusSender](/dotnet/api/azure.messaging.servicebus.servicebussender) object for the specific Service Bus topic.
This section shows you how to create a .NET console application to send messages
```csharp using System.Threading.Tasks; using Azure.Messaging.ServiceBus;
- using Azure.Identity;
// the client that owns the connection and can be used to create senders and receivers ServiceBusClient client;
This section shows you how to create a .NET console application to send messages
// The Service Bus client types are safe to cache and use as a singleton for the lifetime // of the application, which is best practice when messages are being published or read // regularly.-
- //TODO: Replace the "<NAMESPACE-NAME>" and "<TOPIC-NAME>" placeholders.
- client = new ServiceBusClient(
- "<NAMESPACE-NAME>.servicebus.windows.net",
- new DefaultAzureCredential());
+ //TODO: Replace the "<NAMESPACE-CONNECTION-STRING>" and "<TOPIC-NAME>" placeholders.
+ client = new ServiceBusClient("<NAMESPACE-CONNECTION-STRING>");
sender = client.CreateSender("<TOPIC-NAME>"); // create a batch
This section shows you how to create a .NET console application to send messages
```bash A batch of 3 messages has been published to the topic ```+
+ > [!IMPORTANT]
+ > In most cases, it will take a minute or two for the role assignment to propagate in Azure. In rare cases, it may take up to **eight minutes**. If you receive authentication errors when you first run your code, wait a few moments and try again.
1. In the Azure portal, follow these steps: 1. Navigate to your Service Bus namespace. 1. On the **Overview** page, in the bottom-middle pane, switch to the **Topics** tab, and select the Service Bus topic. In the following example, it's `mytopic`. :::image type="content" source="./media/service-bus-dotnet-how-to-use-topics-subscriptions/select-topic.png" alt-text="Select topic":::
- 1. On the **Service Bus Topic** page, In the **Messages** chart in the bottom **Metrics** section, you can see that there are three incoming messages for the topic. If you don't see the value, wait for a few minutes and refresh the page to see the updated chart.
+ 1. On the **Service Bus Topic** page, in the **Messages** chart in the bottom **Metrics** section, you can see that there are three incoming messages for the topic. If you don't see the value, wait for a few minutes, and refresh the page to see the updated chart.
:::image type="content" source="./media/service-bus-dotnet-how-to-use-topics-subscriptions/sent-messages-essentials.png" alt-text="Messages sent to the topic" lightbox="./media/service-bus-dotnet-how-to-use-topics-subscriptions/sent-messages-essentials.png"::: 4. Select the subscription in the bottom pane. In the following example, it's **S1**. On the **Service Bus Subscription** page, you see the **Active message count** as **3**. The subscription has received the three messages that you sent to the topic, but no receiver has picked them yet.
In this section, you'll create a .NET console application that receives messages
### Add the NuGet packages to the project
-### [Connection String](#tab/connection-string)
+### [Passwordless](#tab/passwordless)
1. Select **Tools** > **NuGet Package Manager** > **Package Manager Console** from the menu.
-1. Run the following command to install the **Azure.Messaging.ServiceBus** NuGet package:
+1. Select **SubscriptionReceiver** in the **Default project** drop-down list.
+1. Run the following command to install the **Azure.Messaging.ServiceBus** NuGet package.
- ```Powershell
+ ```powershell
Install-Package Azure.Messaging.ServiceBus ```
+1. Run the following command to install the **Azure.Identity** NuGet package.
- :::image type="content" source="media/service-bus-dotnet-get-started-with-queues/package-manager-console.png" alt-text="Screenshot showing QueueReceiver project selected in the Package Manager Console.":::
+ ```powershell
+ Install-Package Azure.Identity
+ ```
-### [Passwordless](#tab/passwordless)
+### [Connection String](#tab/connection-string)
1. Select **Tools** > **NuGet Package Manager** > **Package Manager Console** from the menu.
-1. Run the following command to install the **Azure.Messaging.ServiceBus** and **Azure.Identity** NuGet packages:
+1. Run the following command to install the **Azure.Messaging.ServiceBus** NuGet package:
- ```powershell
+ ```Powershell
Install-Package Azure.Messaging.ServiceBus
- Install-Package Azure.Identity
```
- :::image type="content" source="media/service-bus-dotnet-get-started-with-queues/package-manager-console.png" alt-text="Screenshot showing QueueReceiver project selected in the Package Manager Console.":::
- ### Add code to receive messages from the subscription
In this section, you'll add code to retrieve messages from the subscription.
1. Replace the existing contents of `Program.cs` with the following properties and methods:
- ## [Connection String](#tab/connection-string)
+ ## [Passwordless](#tab/passwordless)
```csharp using System.Threading.Tasks; using Azure.Messaging.ServiceBus;
+ using Azure.Identity;
// the client that owns the connection and can be used to create senders and receivers ServiceBusClient client;
In this section, you'll add code to retrieve messages from the subscription.
// handle received messages async Task MessageHandler(ProcessMessageEventArgs args) {
- // TODO: Replace the <TOPIC-SUBSCRIPTION-NAME> placeholder
string body = args.Message.Body.ToString();
- Console.WriteLine($"Received: {body} from subscription: <TOPIC-SUBSCRIPTION-NAME>");
+ Console.WriteLine($"Received: {body} from subscription.");
// complete the message. messages is deleted from the subscription. await args.CompleteMessageAsync(args.Message);
In this section, you'll add code to retrieve messages from the subscription.
} ```
- ## [Passwordless](#tab/passwordless)
+
+ ## [Connection String](#tab/connection-string)
```csharp using System.Threading.Tasks; using Azure.Messaging.ServiceBus;
- using Azure.Identity;
// the client that owns the connection and can be used to create senders and receivers ServiceBusClient client;
In this section, you'll add code to retrieve messages from the subscription.
// handle received messages async Task MessageHandler(ProcessMessageEventArgs args) {
+ // TODO: Replace the <TOPIC-SUBSCRIPTION-NAME> placeholder
string body = args.Message.Body.ToString();
- Console.WriteLine($"Received: {body} from subscription.");
+ Console.WriteLine($"Received: {body} from subscription: <TOPIC-SUBSCRIPTION-NAME>");
// complete the message. messages is deleted from the subscription. await args.CompleteMessageAsync(args.Message);
In this section, you'll add code to retrieve messages from the subscription.
1. Append the following code to the end of `Program.cs`.
- ## [Connection String](#tab/connection-string)
-
- > [!IMPORTANT]
- > Update placeholder values (`<NAMESPACE-CONNECTION-STRING>`, `<TOPIC-NAME>`, `<SUBSCRIPTION-NAME>`) in the code snippet with actual values you noted down earlier.
+ ## [Passwordless](#tab/passwordless)
- * Creates a [ServiceBusClient](/dotnet/api/azure.messaging.servicebus.servicebusclient) object using the connection string to the namespace.
+ * Creates a [ServiceBusClient](/dotnet/api/azure.messaging.servicebus.servicebusclient) object using the `DefaultAzureCredential` object. `DefaultAzureCredential` will automatically discover and use the credentials of your Visual Studio login to authenticate to Azure Service Bus.
* Invokes the [CreateProcessor](/dotnet/api/azure.messaging.servicebus.servicebusclient.createprocessor) method on the `ServiceBusClient` object to create a [ServiceBusProcessor](/dotnet/api/azure.messaging.servicebus.servicebusprocessor) object for the specified Service Bus topic. * Specifies handlers for the [ProcessMessageAsync](/dotnet/api/azure.messaging.servicebus.servicebusprocessor.processmessageasync) and [ProcessErrorAsync](/dotnet/api/azure.messaging.servicebus.servicebusprocessor.processerrorasync) events of the `ServiceBusProcessor` object. * Starts processing messages by invoking the [StartProcessingAsync](/dotnet/api/azure.messaging.servicebus.servicebusprocessor.startprocessingasync) on the `ServiceBusProcessor` object. * When user presses a key to end the processing, invokes the [StopProcessingAsync](/dotnet/api/azure.messaging.servicebus.servicebusprocessor.stopprocessingasync) on the `ServiceBusProcessor` object.
+ > [!IMPORTANT]
+ > Update placeholder values (`<NAMESPACE-NAME>`, `<TOPIC-NAME>`, `<SUBSCRIPTION-NAME>`) in the code snippet with names of your Service Bus namespace, topic, and subscription.
+ For more information, see code comments. ```csharp
In this section, you'll add code to retrieve messages from the subscription.
// regularly. // // Create the clients that we'll use for sending and processing messages.
- // TODO: Replace the <NAMESPACE-CONNECTION-STRING> placeholder
- client = new ServiceBusClient("<NAMESPACE-CONNECTION-STRING>">);
+ // TODO: Replace the <NAMESPACE-NAME> placeholder
+ client = new ServiceBusClient(
+ "<NAMESPACE-NAME>.servicebus.windows.net",
+ new DefaultAzureCredential());
// create a processor that we can use to process the messages // TODO: Replace the <TOPIC-NAME> and <SUBSCRIPTION-NAME> placeholders
In this section, you'll add code to retrieve messages from the subscription.
} ```
- ## [Passwordless](#tab/passwordless)
+ ## [Connection String](#tab/connection-string)
- * Creates a [ServiceBusClient](/dotnet/api/azure.messaging.servicebus.servicebusclient) object using the passwordless `DefaultAzureCredential` object.
+ > [!IMPORTANT]
+   > Update the placeholder values (`<NAMESPACE-CONNECTION-STRING>`, `<TOPIC-NAME>`, `<SUBSCRIPTION-NAME>`) in the code snippet with the connection string to your Service Bus namespace and the names of your topic and subscription.
+
+ * Creates a [ServiceBusClient](/dotnet/api/azure.messaging.servicebus.servicebusclient) object using the connection string to the namespace.
* Invokes the [CreateProcessor](/dotnet/api/azure.messaging.servicebus.servicebusclient.createprocessor) method on the `ServiceBusClient` object to create a [ServiceBusProcessor](/dotnet/api/azure.messaging.servicebus.servicebusprocessor) object for the specified Service Bus topic. * Specifies handlers for the [ProcessMessageAsync](/dotnet/api/azure.messaging.servicebus.servicebusprocessor.processmessageasync) and [ProcessErrorAsync](/dotnet/api/azure.messaging.servicebus.servicebusprocessor.processerrorasync) events of the `ServiceBusProcessor` object. * Starts processing messages by invoking the [StartProcessingAsync](/dotnet/api/azure.messaging.servicebus.servicebusprocessor.startprocessingasync) on the `ServiceBusProcessor` object.
In this section, you'll add code to retrieve messages from the subscription.
// regularly. // // Create the clients that we'll use for sending and processing messages.
- // TODO: Replace the <NAMESPACE-NAME> placeholder
- client = new ServiceBusClient(
- "<NAMESPACE-NAME>.servicebus.windows.net",
- new DefaultAzureCredential());
+ // TODO: Replace the <NAMESPACE-CONNECTION-STRING> placeholder
+ client = new ServiceBusClient("<NAMESPACE-CONNECTION-STRING>");
// create a processor that we can use to process the messages // TODO: Replace the <TOPIC-NAME> and <SUBSCRIPTION-NAME> placeholders
In this section, you'll add code to retrieve messages from the subscription.
1. Here's what your `Program.cs` should look like:
- ## [Connection String](#tab/connection-string)
-
+ ## [Passwordless](#tab/passwordless)
+
```csharp using System; using System.Threading.Tasks; using Azure.Messaging.ServiceBus;
+ using Azure.Identity;
// the client that owns the connection and can be used to create senders and receivers ServiceBusClient client;
In this section, you'll add code to retrieve messages from the subscription.
// regularly. // // Create the clients that we'll use for sending and processing messages.
- // TODO: Replace the <NAMESPACE-CONNECTION-STRING> placeholder
- client = new ServiceBusClient("<NAMESPACE-CONNECTION-STRING>">);
+ // TODO: Replace the <NAMESPACE-NAME> placeholder
+ client = new ServiceBusClient(
+ "<NAMESPACE-NAME>.servicebus.windows.net",
+ new DefaultAzureCredential());
// create a processor that we can use to process the messages // TODO: Replace the <TOPIC-NAME> and <SUBSCRIPTION-NAME> placeholders
In this section, you'll add code to retrieve messages from the subscription.
} ```
- ## [Passwordless](#tab/passwordless)
-
+ ## [Connection String](#tab/connection-string)
+ ```csharp using System; using System.Threading.Tasks; using Azure.Messaging.ServiceBus;
- using Azure.Identity;
// the client that owns the connection and can be used to create senders and receivers ServiceBusClient client;
In this section, you'll add code to retrieve messages from the subscription.
// regularly. // // Create the clients that we'll use for sending and processing messages.
- // TODO: Replace the <NAMESPACE-NAME> placeholder
- client = new ServiceBusClient(
- "<NAMESPACE-NAME>.servicebus.windows.net",
- new DefaultAzureCredential());
+ // TODO: Replace the <NAMESPACE-CONNECTION-STRING> placeholder
+ client = new ServiceBusClient("<NAMESPACE-CONNECTION-STRING>");
// create a processor that we can use to process the messages // TODO: Replace the <TOPIC-NAME> and <SUBSCRIPTION-NAME> placeholders
In this section, you'll add code to retrieve messages from the subscription.
await client.DisposeAsync(); } ```+ 1. Build the project, and ensure that there are no errors.
service-health Impacted Resources Outage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-health/impacted-resources-outage.md
Last updated 11/9/2022
# Impacted Resources for Azure Outages
-[Azure Service Health](https://azure.microsoft.com/get-started/azure-portal/service-health/) helps customers view any health events that impact their Subscriptions and Tenants. The Service Issues blade on Service Health shows any ongoing problems in Azure services that are impacting your resources. You can understand when the issue began, and what services and regions are impacted. Previously, the Potential Impact tab on the Service Issues blade was within the details of an incident. It showed any resources under a customer's Subscriptions or Tenants that may be impacted by an outage, and their resource health signal to help customers evaluate impact.
+[Azure Service Health](https://azure.microsoft.com/get-started/azure-portal/service-health/) helps customers view any health events that impact their Subscriptions and Tenants. The Service Issues blade on Service Health shows any ongoing problems in Azure services that are impacting your resources. You can understand when the issue began, and what services and regions are impacted. Previously, the Potential Impact tab on the Service Issues blade was within the details of an incident. It showed any resources under a customer's Subscriptions that may be impacted by an outage, and their resource health signal to help customers evaluate impact.
**In support of the impacted resource experience, Service Health has enabled a new feature to:**
This article details what is communicated to users and where they can view infor
The impacted resources tab under Azure portal-> Service Health ->Service Issues will display resources that are Confirmed to be impacted by an outage and resources that could Potentially be impacted by an outage. Below is an example of impacted resources tab for an incident on Service Issues with Confirmed and Potential impact resources. ##### Service Health provides the below information to users whose resources are impacted by an outage:
The health status listed under **[Resource Health](../service-health/resource-he
- A health status of available means your resource is healthy but it may have been affected by the service event at a previous point in time. - A health status of degraded or unavailable (caused by a customer-initiated action or platform-initiated action) means your resource is impacted but could be now healthy and pending a status update. >[!Note]
site-recovery Azure To Azure Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-support-matrix.md
Title: Support matrix for Azure VM disaster recovery with Azure Site Recovery description: Summarizes support for Azure VMs disaster recovery to a secondary region with Azure Site Recovery. Previously updated : 05/05/2022 Last updated : 11/11/2022
Oracle Linux | 6.4, 6.5, 6.6, 6.7, 6.8, 6.9, 6.10, 7.0, 7.1, 7.2, 7.3, 7.4, 7.5,
16.04 LTS | [9.48](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | 4.15.0-1112-azure, 4.15.0-1113-azure | 16.04 LTS | [9.47](https://support.microsoft.com/topic/update-rollup-60-for-azure-site-recovery-k5011122-883a93a7-57df-4b26-a1c4-847efb34a9e8) | No new 16.04 LTS kernels supported in this release. | |||
-18.04 LTS | [9.51](https://support.microsoft.com/en-us/topic/update-rollup-64-for-azure-site-recovery-kb5020102-23db9799-102c-4378-9754-2f19f6c7858a) |4.15.0-1151-azure </br> 4.15.0-193-generic </br> 5.4.0-1091-azure </br> 5.4.0-126-generic |
+18.04 LTS | [9.51](https://support.microsoft.com/en-us/topic/update-rollup-64-for-azure-site-recovery-kb5020102-23db9799-102c-4378-9754-2f19f6c7858a) |4.15.0-1151-azure </br> 4.15.0-193-generic </br> 5.4.0-1091-azure </br> 5.4.0-126-generic</br>4.15.0-1153-azure </br>4.15.0-194-generic </br>5.4.0-1094-azure </br>5.4.0-128-generic </br>5.4.0-131-generic |
18.04 LTS |[9.50](https://support.microsoft.com/en-us/topic/update-rollup-63-for-azure-site-recovery-kb5017421-992e63af-aa94-4ea6-8d1b-2dd89a9cc70b) | 4.15.0-1149-azure </br> 4.15.0-1150-azure </br> 4.15.0-191-generic </br> 4.15.0-192-generic </br>5.4.0-1089-azure </br>5.4.0-1090-azure </br>5.4.0-124-generic| 18.04 LTS |[9.49](https://support.microsoft.com/en-us/topic/update-rollup-62-for-azure-site-recovery-e7aff36f-b6ad-4705-901c-f662c00c402b) | 4.15.0-1139-azure </br> 4.15.0-1142-azure </br> 4.15.0-1145-azure </br> 4.15.0-1146-azure </br> 4.15.0-180-generic </br> 4.15.0-184-generic </br> 4.15.0-187-generic </br> 4.15.0-188-generic </br> 4.15.0-189-generic </br> 5.4.0-1080-azure </br> 5.4.0-1083-azure </br> 5.4.0-1085-azure </br> 5.4.0-1086-azure </br> 5.4.0-113-generic </br> 5.4.0-117-generic </br> 5.4.0-120-generic </br> 5.4.0-121-generic </br> 5.4.0-122-generic | 18.04 LTS |[9.48](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | 4.15.0-1134-azure </br> 4.15.0-1136-azure </br> 4.15.0-1137-azure </br> 4.15.0-1138-azure </br> 4.15.0-173-generic </br> 4.15.0-175-generic </br> 4.15.0-176-generic </br> 4.15.0-177-generic </br> 5.4.0-105-generic </br> 5.4.0-1073-azure </br> 5.4.0-1074-azure </br> 5.4.0-1077-azure </br> 5.4.0-1078-azure </br> 5.4.0-107-generic </br> 5.4.0-109-generic </br> 5.4.0-110-generic | 18.04 LTS |[9.47](https://support.microsoft.com/topic/update-rollup-60-for-azure-site-recovery-k5011122-883a93a7-57df-4b26-a1c4-847efb34a9e8) | 5.4.0-92-generic </br> 4.15.0-166-generic </br> 4.15.0-1129-azure </br> 5.4.0-1065-azure </br> 4.15.0-1130-azure </br> 4.15.0-167-generic </br> 5.4.0-1067-azure </br> 5.4.0-1068-azure </br> 5.4.0-94-generic </br> 5.4.0-96-generic </br> 5.4.0-97-generic </br> 5.4.0-99-generic </br> 4.15.0-1131-azure </br> 4.15.0-169-generic </br> 5.4.0-100-generic </br> 5.4.0-1069-azure </br> 5.4.0-1070-azure | |||
-20.04 LTS | [9.51](https://support.microsoft.com/en-us/topic/update-rollup-64-for-azure-site-recovery-kb5020102-23db9799-102c-4378-9754-2f19f6c7858a) |5.13.0-1009-azure </br> 5.13.0-1012-azure </br> 5.13.0-1013-azure </br> 5.13.0-1014-azure </br> 5.13.0-1017-azure </br> 5.13.0-1021-azure </br> 5.13.0-1022-azure </br> 5.13.0-1023-azure </br> 5.13.0-1025-azure </br> 5.13.0-1028-azure </br> 5.13.0-1029-azure </br> 5.13.0-1031-azure </br> 5.13.0-21-generic </br> 5.13.0-22-generic </br> 5.13.0-23-generic </br> 5.13.0-25-generic </br> 5.13.0-27-generic </br> 5.13.0-28-generic </br> 5.13.0-30-generic </br> 5.13.0-35-generic </br> 5.13.0-37-generic </br> 5.13.0-39-generic </br> 5.13.0-40-generic </br> 5.13.0-41-generic </br> 5.13.0-44-generic </br> 5.13.0-48-generic </br> 5.13.0-51-generic </br> 5.13.0-52-generic </br> 5.15.0-1007-azure </br> 5.15.0-1008-azure </br> 5.15.0-1013-azure </br> 5.15.0-1014-azure </br> 5.15.0-1017-azure </br> 5.15.0-1019-azure </br> 5.15.0-1020-azure </br> 5.15.0-33-generic </br> 5.15.0-51-generic </br> 5.15.0-43-generic </br> 5.15.0-46-generic </br> 5.15.0-48-generic </br> 5.4.0-1091-azure </br> 5.4.0-126-generic |
+20.04 LTS | [9.51](https://support.microsoft.com/en-us/topic/update-rollup-64-for-azure-site-recovery-kb5020102-23db9799-102c-4378-9754-2f19f6c7858a) |5.13.0-1009-azure </br> 5.13.0-1012-azure </br> 5.13.0-1013-azure </br> 5.13.0-1014-azure </br> 5.13.0-1017-azure </br> 5.13.0-1021-azure </br> 5.13.0-1022-azure </br> 5.13.0-1023-azure </br> 5.13.0-1025-azure </br> 5.13.0-1028-azure </br> 5.13.0-1029-azure </br> 5.13.0-1031-azure </br> 5.13.0-21-generic </br> 5.13.0-22-generic </br> 5.13.0-23-generic </br> 5.13.0-25-generic </br> 5.13.0-27-generic </br> 5.13.0-28-generic </br> 5.13.0-30-generic </br> 5.13.0-35-generic </br> 5.13.0-37-generic </br> 5.13.0-39-generic </br> 5.13.0-40-generic </br> 5.13.0-41-generic </br> 5.13.0-44-generic </br> 5.13.0-48-generic </br> 5.13.0-51-generic </br> 5.13.0-52-generic </br> 5.15.0-1007-azure </br> 5.15.0-1008-azure </br> 5.15.0-1013-azure </br> 5.15.0-1014-azure </br> 5.15.0-1017-azure </br> 5.15.0-1019-azure </br> 5.15.0-1020-azure </br> 5.15.0-33-generic </br> 5.15.0-51-generic </br> 5.15.0-43-generic </br> 5.15.0-46-generic </br> 5.15.0-48-generic </br> 5.4.0-1091-azure </br> 5.4.0-126-generic </br> 5.15.0-1021-azure </br> 5.15.0-1022-azure </br> 5.15.0-50-generic </br> 5.15.0-52-generic </br> 5.4.0-1094-azure </br> 5.4.0-128-generic </br> 5.4.0-131-generic |
20.04 LTS |[9.50](https://support.microsoft.com/en-us/topic/update-rollup-63-for-azure-site-recovery-kb5017421-992e63af-aa94-4ea6-8d1b-2dd89a9cc70b) | 5.4.0-1080-azure </br> 5.4.0-1083-azure </br> 5.4.0-1085-azure </br> 5.4.0-1086-azure </br> 5.4.0-1089-azure </br> 5.4.0-1090-azure </br> 5.4.0-113-generic </br> 5.4.0-117-generic </br> 5.4.0-120-generic </br> 5.4.0-121-generic </br> 5.4.0-122-generic </br> 5.4.0-124-generic </br> 5.4.0-125-generic | 20.04 LTS |[9.49](https://support.microsoft.com/en-us/topic/update-rollup-62-for-azure-site-recovery-e7aff36f-b6ad-4705-901c-f662c00c402b) | No new 20.04 LTS kernels supported in this release. | 20.04 LTS |[9.48](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | 5.4.0-1074-azure </br> 5.4.0-107-generic </br> 5.4.0-1077-azure </br> 5.4.0-1078-azure </br> 5.4.0-109-generic </br> 5.4.0-110-generic </br> 5.11.0-1007-azure </br> 5.11.0-1012-azure </br> 5.11.0-1013-azure </br> 5.11.0-1015-azure </br> 5.11.0-1017-azure </br> 5.11.0-1019-azure </br> 5.11.0-1020-azure </br> 5.11.0-1021-azure </br> 5.11.0-1022-azure </br> 5.11.0-1023-azure </br> 5.11.0-1025-azure </br> 5.11.0-1027-azure </br> 5.11.0-1028-azure </br> 5.11.0-22-generic </br> 5.11.0-25-generic </br> 5.11.0-27-generic </br> 5.11.0-34-generic </br> 5.11.0-36-generic </br> 5.11.0-37-generic </br> 5.11.0-38-generic </br> 5.11.0-40-generic </br> 5.11.0-41-generic </br> 5.11.0-43-generic </br> 5.11.0-44-generic </br> 5.11.0-46-generic </br> 5.4.0-1077-azure </br> 5.4.0-1078-azure </br> 5.8.0-1033-azure </br> 5.8.0-1036-azure </br> 5.8.0-1039-azure </br> 5.8.0-1040-azure </br> 5.8.0-1041-azure </br> 5.8.0-1042-azure </br> 5.8.0-1043-azure </br> 5.8.0-23-generic </br> 5.8.0-25-generic </br> 5.8.0-28-generic </br> 5.8.0-29-generic </br> 5.8.0-31-generic </br> 5.8.0-33-generic </br> 5.8.0-34-generic </br> 5.8.0-36-generic </br> 5.8.0-38-generic </br> 5.8.0-40-generic </br> 5.8.0-41-generic </br> 5.8.0-43-generic </br> 5.8.0-44-generic </br> 5.8.0-45-generic </br> 5.8.0-48-generic </br> 5.8.0-49-generic </br> 5.8.0-50-generic </br> 5.8.0-53-generic </br> 5.8.0-55-generic </br> 5.8.0-59-generic </br> 5.8.0-63-generic |
Debian 9.1 | [9.49](https://support.microsoft.com/en-us/topic/update-rollup-62-f
Debian 9.1 | [9.48](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | 4.9.0-18-amd64 </br> 4.19.0-0.bpo.19-amd64 </br> 4.19.0-0.bpo.17-cloud-amd64 to 4.19.0-0.bpo.19-cloud-amd64 Debian 9.1 | [9.47](https://support.microsoft.com/topic/update-rollup-60-for-azure-site-recovery-k5011122-883a93a7-57df-4b26-a1c4-847efb34a9e8) | 4.9.0-16-amd64, 4.9.0-17-amd64 |||
-Debian 10 | [9.51](https://support.microsoft.com/en-us/topic/update-rollup-64-for-azure-site-recovery-kb5020102-23db9799-102c-4378-9754-2f19f6c7858a) | No new Debian 10 kernels supported in this release. |
+Debian 10 | [9.51](https://support.microsoft.com/en-us/topic/update-rollup-64-for-azure-site-recovery-kb5020102-23db9799-102c-4378-9754-2f19f6c7858a) | 4.19.0-22-amd64 </br> 4.19.0-22-cloud-amd64 </br> 5.10.0-0.deb10.19-amd64 </br> 5.10.0-0.deb10.19-cloud-amd64 |
Debian 10 | [9.50](https://support.microsoft.com/en-us/topic/update-rollup-63-for-azure-site-recovery-kb5017421-992e63af-aa94-4ea6-8d1b-2dd89a9cc70b) | No new Debian 10 kernels supported in this release. Debian 10 | [9.49](https://support.microsoft.com/en-us/topic/update-rollup-62-for-azure-site-recovery-e7aff36f-b6ad-4705-901c-f662c00c402b) | 4.19.0-21-amd64 </br> 4.19.0-21-cloud-amd64 </br> 5.10.0-0.bpo.15-amd64 </br> 5.10.0-0.bpo.15-cloud-amd64 Debian 10 | [9.48](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | 4.19.0-20-amd64 </br> 4.19.0-20-cloud-amd64 </br> 5.8.0-0.bpo.2-amd64, 5.8.0-0.bpo.2-cloud-amd64, 5.9.0-0.bpo.2-amd64, 5.9.0-0.bpo.2-cloud-amd64, 5.9.0-0.bpo.5-amd64, 5.9.0-0.bpo.5-cloud-amd64, 5.10.0-0.bpo.7-amd64, 5.10.0-0.bpo.7-cloud-amd64, 5.10.0-0.bpo.9-amd64, 5.10.0-0.bpo.9-cloud-amd64, 5.10.0-0.bpo.11-amd64, 5.10.0-0.bpo.11-cloud-amd64, 5.10.0-0.bpo.12-amd64, 5.10.0-0.bpo.12-cloud-amd64 |
Debian 10 | [9.47](https://support.microsoft.com/topic/update-rollup-60-for-azur
**Release** | **Mobility service version** | **Kernel version** | | | |
-SUSE Linux Enterprise Server 12 (SP1, SP2, SP3, SP4, SP5) | [9.51](https://support.microsoft.com/en-us/topic/update-rollup-64-for-azure-site-recovery-kb5020102-23db9799-102c-4378-9754-2f19f6c7858a) | All [stock SUSE 12 SP1,SP2,SP3,SP4,SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported. </br></br> 4.12.14-16.106-azure:5 |
+SUSE Linux Enterprise Server 12 (SP1, SP2, SP3, SP4, SP5) | [9.51](https://support.microsoft.com/en-us/topic/update-rollup-64-for-azure-site-recovery-kb5020102-23db9799-102c-4378-9754-2f19f6c7858a) | All [stock SUSE 12 SP1,SP2,SP3,SP4,SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported. </br></br> 4.12.14-16.106-azure:5 </br> 4.12.14-16.112-azure |
SUSE Linux Enterprise Server 12 (SP1, SP2, SP3, SP4, SP5) | [9.50](https://support.microsoft.com/en-us/topic/update-rollup-63-for-azure-site-recovery-kb5017421-992e63af-aa94-4ea6-8d1b-2dd89a9cc70b) | All [stock SUSE 12 SP1,SP2,SP3,SP4,SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported. </br></br> No new SLES 12 Azure kernels supported in this release. | SUSE Linux Enterprise Server 12 (SP1, SP2, SP3, SP4, SP5) | [9.49](https://support.microsoft.com/en-us/topic/update-rollup-62-for-azure-site-recovery-e7aff36f-b6ad-4705-901c-f662c00c402b) | All [stock SUSE 12 SP1,SP2,SP3,SP4,SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported. </br></br>4.12.14-16.100-azure:5 </br> 4.12.14-16.103-azure:5 | SUSE Linux Enterprise Server 12 (SP1, SP2, SP3, SP4, SP5) | [9.48](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | 4.12.14-16.94-azure:5 </br> 4.12.14-16.97-azure:5 </br> 4.12.14-122.110-default:5 </br> 4.12.14-122.113-default:5 </br> 4.12.14-122.116-default:5 </br> 4.12.14-122.121-default:5 |
SUSE Linux Enterprise Server 12 (SP1, SP2, SP3, SP4, SP5) | [9.47](https://suppo
**Release** | **Mobility service version** | **Kernel version** | | | |
-SUSE Linux Enterprise Server 15 (SP1, SP2, SP3) | [9.51](https://support.microsoft.com/en-us/topic/update-rollup-64-for-azure-site-recovery-kb5020102-23db9799-102c-4378-9754-2f19f6c7858a) | By default, all [stock SUSE 15, SP1, SP2, SP3 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported. </br></br> No new SLES 15 Azure kernels supported in this release. |
+SUSE Linux Enterprise Server 15 (SP1, SP2, SP3) | [9.51](https://support.microsoft.com/en-us/topic/update-rollup-64-for-azure-site-recovery-kb5020102-23db9799-102c-4378-9754-2f19f6c7858a) | By default, all [stock SUSE 15, SP1, SP2, SP3 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported. </br></br> 5.3.18-150300.38.80-azure |
SUSE Linux Enterprise Server 15 (SP1, SP2, SP3) | [9.50](https://support.microsoft.com/en-us/topic/update-rollup-63-for-azure-site-recovery-kb5017421-992e63af-aa94-4ea6-8d1b-2dd89a9cc70b) | By default, all [stock SUSE 15, SP1, SP2, SP3 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported. </br></br> 5.3.18-150300.38.75-azure:3 | SUSE Linux Enterprise Server 15, SP1, SP2, SP3 | [9.49](https://support.microsoft.com/en-us/topic/update-rollup-62-for-azure-site-recovery-e7aff36f-b6ad-4705-901c-f662c00c402b) | By default, all [stock SUSE 15, SP1, SP2, SP3 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported. </br></br>5.3.18-150300.38.69-azure:3 </br> SUSE Linux Enterprise Server 15, SP1, SP2, SP3 | [9.48](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | By default, all [stock SUSE 15, SP1, SP2, SP3 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 5.3.18-150300.38.37-azure:3 </br> 5.3.18-150300.38.40-azure:3 </br> 5.3.18-150300.38.47-azure:3 </br> 5.3.18-150300.38.50-azure:3 </br> 5.3.18-150300.38.53-azure:3 </br> 5.3.18-150300.38.56-azure:3 </br> 5.3.18-150300.38.59-azure:3 </br> 5.3.18-150300.38.62-azure:3 </br> 5.3.18-36-azure:3 </br> 5.3.18-38.11-azure:3 </br> 5.3.18-38.14-azure:3 </br> 5.3.18-38.17-azure:3 </br> 5.3.18-38.22-azure:3 </br> 5.3.18-38.25-azure:3 </br> 5.3.18-38.28-azure:3 </br> 5.3.18-38.31-azure:3 </br> 5.3.18-38.34-azure:3 </br> 5.3.18-38.3-azure:3 </br> 5.3.18-38.8-azure:3 |
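
To confirm whether a protected VM is currently running one of the kernel versions listed above, a minimal check from the guest OS is to print the running kernel and compare it against the table for your Mobility service version:

```bash
# Print the currently running kernel version
uname -r
```
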
spring-apps How To Connect To App Instance For Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-connect-to-app-instance-for-troubleshooting.md
+
+ Title: Connect to an app instance for troubleshooting
+description: Learn how to connect to an app instance in Azure Spring Apps for troubleshooting.
++++ Last updated : 11/09/2021+++
+# Connect to an app instance for troubleshooting
+
+This article describes how to access the shell environment inside your application instances to do advanced troubleshooting.
+
+Although Azure Spring Apps offers various managed troubleshooting approaches, you may want to do advanced troubleshooting using the shell environment. For example, you may want to accomplish the following troubleshooting tasks:
+
+- Directly use Java Development Kit (JDK) tools.
+- Diagnose network connectivity and API call latency to an app's back-end services, for both virtual-network and non-virtual-network instances.
+- Diagnose storage capacity, performance, and CPU/memory issues.
+
+## Prerequisites
+
+- [Azure CLI](/cli/azure/install-azure-cli) with the Azure Spring Apps extension. Use the following command to remove previous versions and install the latest extension. If you previously installed the `spring-cloud` extension, uninstall it to avoid configuration and version mismatches.
+
+ ```azurecli
+ az extension remove --name spring
+ az extension add --name spring
+ az extension remove --name spring-cloud
+ ```
+
+- A deployed application in Azure Spring Apps.
+- If you've deployed a custom container, a shell program (the default is `/bin/sh`).
+
+## Assign an Azure role
+
+Before connecting to an app instance, you must be granted the *Azure Spring Apps Connect Role*. Connecting to an app instance requires the data action permission `Microsoft.AppPlatform/Spring/apps/deployments/connect/action`.
+
+Use the following command to assign the *Azure Spring Apps Connect Role* role:
+
+```azurecli
+az role assignment create \
+ --role 'Azure Spring Apps Connect Role' \
+ --scope '<service-instance-resource-id>' \
+ --assignee '<your-identity>'
+```
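+
+If you don't have the service instance resource ID handy, you can look it up with the Azure CLI; a minimal sketch (the service and resource group names are placeholders):
+
+```azurecli
+# Get the resource ID of your Azure Spring Apps service instance to use as the role assignment scope
+az spring show \
+    --name <your-service-instance> \
+    --resource-group <your-resource-group> \
+    --query id \
+    --output tsv
+```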
+
+## Connect to an app instance
+
+If your app contains only one instance, use the following command to connect to the instance:
+
+```azurecli
+az spring app connect \
+ --service <your-service-instance> \
+ --resource-group <your-resource-group> \
+ --name <app-name>
+```
+
+Otherwise, use the following command to specify the instance:
+
+```azurecli
+az spring app connect \
+ --service <your-service-instance> \
+ --resource-group <your-resource-group> \
+ --name <app-name> \
+ --instance <instance_name>
+```
+
+Use the following command to specify another deployment of the app:
+
+```azurecli
+az spring app connect \
+ --service <your-service-instance> \
+ --resource-group <your-resource-group> \
+ --name <app-name> \
+ --deployment green
+```
+
+By default, Azure Spring Apps connects to the app instance by using `/bin/sh`, which is bundled in the base image of the container. Use the following command to switch to another bundled shell, such as `/bin/bash`:
+
+```azurecli
+az spring app connect \
+ --service <your-service-instance> \
+ --resource-group <your-resource-group> \
+ --name <app-name> \
+ --shell-cmd /bin/bash
+```
+
+If your app is deployed with a custom image and shell, you can also use the `--shell-cmd` parameter to specify your shell.
+
+## Troubleshoot your app instance
+
+After you connect to an app instance, you can check the status of the heap memory.
+
+Use the following command to find the Java process ID, which is usually `1`:
+
+```bash
+jps
+```
+
+The output should look like the following example:
++
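+An illustrative sample (process IDs and names are hypothetical; the Java process ID is typically `1`):
+
+```
+1 /app.jar
+115 Jps
+```
+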
+Then use the following command to run the JDK tool to check the result:
+
+```bash
+jstat -gc 1
+```
+
+The output should look like the following example:
++
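+An illustrative sample (the column set varies by JDK version, and the values here are hypothetical):
+
+```
+ S0C    S1C    S0U    S1U      EC       EU        OC         OU       MC     MU    CCSC   CCSU   YGC     YGCT    FGC    FGCT     GCT
+ 0.0   2048.0  0.0   2048.0  30720.0  20480.0   28672.0    15360.0  40704.0 39500.2 5120.0 4700.1     12    0.087     2    0.140    0.227
+```
+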
+## Disconnect from your app instance
+
+When you're done troubleshooting, use the `exit` command to disconnect from the app instance, or press `Ctrl+d`.
+
+## Troubleshooting tools
+
+The following list describes some of the pre-installed tools that you can use for troubleshooting:
+
+- `lsof` - Lists open files.
+- `top` - Displays system summary information and current utilization.
+- `ps` - Gets a snapshot of the running process.
+- `netstat` - Prints network connections and interface statistics.
+- `nslookup` - Queries internet name servers interactively.
+- `ping` - Tests whether a network host can be reached.
+- `nc` - Reads from and writes to network connections using TCP or UDP.
+- `wget` - Lets you download files and interact with REST APIs.
+- `df` - Displays the amount of available disk space.
+
+You can also use JDK-bundled tools such as `jps`, `jcmd`, and `jstat`.
+
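+For example, a quick connectivity check against one of your app's back-end services from inside the app instance might look like the following sketch (the host name and endpoint are hypothetical):
+
+```bash
+# Resolve the back-end host name from inside the app instance
+nslookup my-backend.contoso.com
+
+# Check that the (hypothetical) health endpoint is reachable over HTTPS
+wget -qO- https://my-backend.contoso.com/health
+```
+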
+The available tools depend on your service tier and type of app deployment. The following table describes the availability of troubleshooting tools:
+
+| Tier | Deployment type | Common tools | JDK tools | Notes |
+||--|--||-|
+| Basic / Standard tier | Source code / Jar | Y | Y (for Java workloads only) | |
+| Basic / Standard tier | Custom image | N | N | Depends on your installed tool set. |
+| Enterprise tier | Source code / Artifacts | Y (for full OS stack), N (for base OS stack) | Y (for Java workloads only) | Depends on the OS stack of your builder. |
+| Enterprise tier | Custom image | N | N | Depends on your installed tool set. |
+
+## Limitations
+
+Using the shell environment inside your application instances has the following limitations:
+
+- Because the app is running as a non-root user, you can't perform actions that require root permission. For example, you can't install new tools by using a system package manager such as `apt` or `yum`.
+
+- Because some Linux capabilities are prohibited, tools that require special privileges, such as `tcpdump`, don't work.
+
+## Next steps
+
+- [Self-diagnose and solve problems in Azure Spring Apps](./how-to-self-diagnose-solve.md)
storage Archive Cost Estimation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/archive-cost-estimation.md
This scenario assumes a monthly ingest of 200,000 files totaling 10,240 GB in si
<tr> <td><strong>Cost to write (transactions * price of a write operation)</></strong></td> <td><strong>$2.00</strong></td>
- <td><strong>$2.</strong></td>
+ <td><strong>$2.00</strong></td>
<td><strong>$2.00</strong></td> <td><strong>$24.00</strong></td> </tr>
This article uses the following fictitious prices.
| Price of a single read operation (cost / 10,000) | $0.0005 | $0.000001 | | Price of high priority read transactions (per 10,000) | $50.00 | N/A | | Price of data retrieval (per GB) | $0.02 | $0.01 |
-| PRice of high priority data retrieval (per GB) | $0.10 | N/A |
+| Price of high priority data retrieval (per GB) | $0.10 | N/A |
For official prices, see [Azure Blob Storage pricing](/pricing/details/storage/blobs/) or [Azure Data Lake Storage pricing](/pricing/details/storage/data-lake/).
storage Storage Blobs Static Site Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blobs-static-site-github-actions.md
An Azure subscription and GitHub account.
build: runs-on: ubuntu-latest steps:
- - uses: actions/checkout@v2
+ - uses: actions/checkout@v3
- uses: azure/login@v1 with: creds: ${{ secrets.AZURE_CREDENTIALS }}
An Azure subscription and GitHub account.
build: runs-on: ubuntu-latest steps:
- - uses: actions/checkout@v2
+ - uses: actions/checkout@v3
- uses: azure/login@v1 with: creds: ${{ secrets.AZURE_CREDENTIALS }}
An Azure subscription and GitHub account.
build: runs-on: ubuntu-latest steps:
- - uses: actions/checkout@v2
+ - uses: actions/checkout@v3
- uses: azure/login@v1 with: client-id: ${{ secrets.AZURE_CLIENT_ID }}
An Azure subscription and GitHub account.
build: runs-on: ubuntu-latest steps:
- - uses: actions/checkout@v2
+ - uses: actions/checkout@v3
- uses: azure/login@v1 with: client-id: ${{ secrets.AZURE_CLIENT_ID }}
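
A typical deploy step in a workflow like this uploads the built site to the storage account's static website container. A rough Azure CLI sketch of that kind of step (the account name is a placeholder, and `$web` is the conventional static website container):

```azurecli
az storage blob upload-batch \
    --account-name <storage-account-name> \
    --auth-mode key \
    --destination '$web' \
    --source .
```
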
storage Storage Plan Manage Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-plan-manage-costs.md
Previously updated : 06/21/2021 Last updated : 11/11/2022
Use the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculato
4. Modify the remaining options to see their effect on your estimate.
-> [!TIP]
-> To estimate the cost of archiving data that is rarely used, see [Estimate the cost of archiving data](../blobs/archive-cost-estimation.md).
+ > [!TIP]
+ > - To view an Excel template that can help you itemize the amount of storage and number of operations required by your workloads, see [Estimating Pricing for Azure Block Blob Deployments](https://azure.github.io/Storage/docs/application-and-user-data/code-samples/estimate-block-blob/).
+ >
+ > You can use that information as input to the Azure pricing calculator.
+ >
+ > - For more information about how to estimate the cost of archiving data that is rarely used, see [Estimate the cost of archiving data](../blobs/archive-cost-estimation.md).
## Understand the full billing model for Azure Blob Storage
storage Storage Files Identity Auth Azure Active Directory Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-auth-azure-active-directory-enable.md
description: Learn how to enable identity-based Kerberos authentication for hybr
Previously updated : 11/10/2022 Last updated : 11/11/2022 + # Enable Azure Active Directory Kerberos authentication for hybrid identities on Azure Files
-This article focuses on enabling and configuring Azure Active Directory (Azure AD) for authenticating [hybrid user identities](../../active-directory/hybrid/whatis-hybrid-identity.md), which are on-premises AD DS identities that are synced to Azure AD. This allows Azure AD users to access Azure file shares using Kerberos authentication. This configuration uses Azure AD to issue the necessary Kerberos tickets to access the file share with the SMB protocol. This means your end users can access Azure file shares over the internet without requiring a line-of-sight to domain controllers from hybrid Azure AD-joined and Azure AD-joined VMs. However, configuring Windows access control lists (ACLs) and permissions for a user or group might require line-of-sight to the domain controller.
+This article focuses on enabling and configuring Azure Active Directory (Azure AD) for authenticating [hybrid user identities](../../active-directory/hybrid/whatis-hybrid-identity.md), which are on-premises AD DS identities that are synced to Azure AD. This allows Azure AD users to access Azure file shares using Kerberos authentication. This configuration uses Azure AD to issue the necessary Kerberos tickets to access the file share with the SMB protocol. This means your end users can access Azure file shares over the internet without requiring a line-of-sight to domain controllers from hybrid Azure AD-joined and Azure AD-joined VMs. However, configuring Windows access control lists (ACLs) and directory/file-level permissions for a user or group requires line-of-sight to the on-premises domain controller.
For more information on supported options and considerations, see [Overview of Azure Files identity-based authentication options for SMB access](storage-files-active-directory-overview.md). For more information about Azure AD Kerberos, see [Deep dive: How Azure AD Kerberos works](https://techcommunity.microsoft.com/t5/itops-talk-blog/deep-dive-how-azure-ad-kerberos-works/ba-p/3070889).
To set share-level permissions, follow the instructions in [Assign share-level p
## Configure directory and file-level permissions
-Once your share-level permissions are in place, there are two options for configuring directory and file-level permissions with Azure AD Kerberos authentication:
+Once your share-level permissions are in place, you must assign directory/file-level permissions to the user or group. **This requires using a device with line-of-sight to an on-premises AD**. To use Windows File Explorer, the device also needs to be domain-joined.
-- **Windows Explorer experience:** If you choose this option, then the client must be domain-joined to the on-premises AD.-- **icacls utility:** If you choose this option, then the client needs line-of-sight to the on-premises AD.
+There are two options for configuring directory and file-level permissions with Azure AD Kerberos authentication:
-To configure directory and file-level permissions through Windows File explorer, you also need to specify domain name and domain GUID for your on-premises AD. You can get this information from your domain admin or from an on-premises AD-joined client. If you prefer to configure using icacls, this step is not required.
+- **Windows File Explorer:** If you choose this option, then the client must be domain-joined to the on-premises AD.
+- **icacls utility:** If you choose this option, then the client doesn't need to be domain-joined, but needs line-of-sight to the on-premises AD.
+
+To configure directory and file-level permissions through Windows File Explorer, you also need to specify the domain name and domain GUID for your on-premises AD. You can get this information from your domain admin or from an on-premises AD-joined client. If you prefer to configure using icacls, this step isn't required.
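
For example, once the file share is mounted from a client with line-of-sight to the on-premises AD, granting a user modify permissions on a directory with icacls might look like the following sketch (the drive letter, directory, and account are hypothetical):

```
icacls Z:\shared-docs /grant "CONTOSO\user1:(OI)(CI)M"
```
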
To configure directory and file-level permissions, follow the instructions in [Configure directory and file-level permissions over SMB](storage-files-identity-ad-ds-configure-permissions.md).
stream-analytics Feature Comparison https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/feature-comparison.md
Title: Azure Stream Analytics feature comparison
+ Title: Choose a developer tool for building Stream Analytics jobs
description: This article compares the features supported for Azure Stream Analytics cloud and IoT Edge jobs in the Azure portal, Visual Studio, and Visual Studio Code.--++ Previously updated : 06/27/2019 Last updated : 11/09/2022
-# Azure Stream Analytics feature comparison
+# Choose a developer tool for building Stream Analytics jobs
-With Azure Stream Analytics, you can create streaming solutions in the cloud and at the IoT Edge using [Azure portal](stream-analytics-quick-create-portal.md), [Visual Studio](stream-analytics-quick-create-vs.md), and [Visual Studio Code](quick-create-visual-studio-code.md). The tables in this article show which features are supported by each platform for both job types.
+Besides building your Stream Analytics jobs in the Azure portal, you can use the [Azure Stream Analytics Tools extension for Visual Studio Code](quick-create-visual-studio-code.md) to write, debug, and run your streaming queries locally for a better development experience.
+
+The following table shows which features are supported in the Azure portal and in Visual Studio Code.
> [!NOTE]
-> Visual Studio and Visual Studio Code tools don't support jobs in the China East, China North, Germany Central, and Germany NorthEast regions.
+> Visual Studio Code tools don't support jobs in the China East, China North, Germany Central, and Germany NorthEast regions.
## Cloud job features
+|Feature |Portal |Visual Studio Code |
+||||
+|Cross platform |Mac</br>Linux</br>Windows |Mac</br>Linux</br>Windows |
+|Script authoring |Yes |Yes |
+|Script Intellisense |Syntax highlighting |Syntax highlighting</br>Code completion</br>Error marker |
+|Define all types of inputs, outputs, and job configurations |Yes |Yes |
+|Source control |No |Yes |
+|CI/CD support |Partial |Yes |
+|Share inputs and outputs across multiple queries |No |Yes |
+|Query testing with a sample file |Yes |Yes |
+|Live data local testing |No |Yes |
+|List jobs and view job entities |Yes |Yes |
+|Export a job to a local project |No |Yes |
+|Submit, start, and stop jobs |Yes |Yes |
+|View job metrics and diagram |Yes |Yes |
+|View job runtime errors |Yes |Yes |
+|Resource logs |Yes |Yes |
+|Custom message properties |Yes |Yes |
+|C# custom code function and Deserializer|Read-only mode|Yes|
+|JavaScript UDF and UDA |Yes |Windows only |
+|Azure Machine Learning |Yes |Yes |
+|Compatibility level |1.0</br>1.1</br>1.2 (default) |1.0</br>1.1</br>1.2 (default) |
+|Built-in ML-based Anomaly Detection functions |Yes |Yes |
+|Built-in GeoSpatial functions |Yes |Yes |
+<!--
|Feature |Portal |Visual Studio |Visual Studio Code | ||||| |Cross platform |Mac</br>Linux</br>Windows |Windows |Mac</br>Linux</br>Windows |
With Azure Stream Analytics, you can create streaming solutions in the cloud and
|Compatibility level |1.0</br>1.1</br>1.2 (default) |1.0</br>1.1</br>1.2 (default) |1.0</br>1.1</br>1.2 (default) | |Built-in ML-based Anomaly Detection functions |Yes |Yes |Yes | |Built-in GeoSpatial functions |Yes |Yes |Yes |
+ -->
--
+<!--
## IoT Edge job features |Feature |Portal |Visual Studio |Visual Studio Code |
With Azure Stream Analytics, you can create streaming solutions in the cloud and
|List jobs and view job entities |Yes |Yes |No | |View job metrics and diagram |Yes |Partial |No | |View job runtime errors |Yes |Partial |No |
-|CI/CD support |No |No |No |
+|CI/CD support |No |No |No | -->
## Next steps
stream-analytics Quick Create Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/quick-create-visual-studio-code.md
Title: Quickstart - Create an Azure Stream Analytics job in Visual Studio Code
-description: This quickstart shows you how to get started by creating a Stream Analytics job, configuring inputs and outputs, and defining a query with Visual Studio Code.
+ Title: Quickstart - Create a Stream Analytics job using Visual Studio Code
+description: This quickstart shows you how to create a Stream Analytics job using the ASA extension for Visual Studio Code.
-- Previously updated : 01/18/2020++ Last updated : 10/27/2022 #Customer intent: As an IT admin/developer, I want to create a Stream Analytics job, configure input and output, and analyze data by using Visual Studio Code.
-# Quickstart: Create an Azure Stream Analytics job in Visual Studio Code
+# Quickstart: Create a Stream Analytics job using Visual Studio Code
-This quickstart shows you how to create and run an Azure Stream Analytics job by using the Azure Stream Analytics Tools extension for Visual Studio Code. The example job reads streaming data from an Azure IoT Hub device. You define a job that calculates the average temperature when over 27┬░ and writes the resulting output events to a new file in blob storage.
+This quickstart shows you how to create, run, and submit an Azure Stream Analytics (ASA) job by using the ASA Tools extension for Visual Studio Code on your local machine. You learn to build an ASA job that reads real-time streaming data from IoT Hub and filters events with a temperature greater than 27°. The output results are sent to a file in blob storage. The input data used in this quickstart is generated by a Raspberry Pi online simulator.
> [!NOTE]
-> Visual Studio and Visual Studio Code tools don't support jobs in the China East, China North, Germany Central, and Germany NorthEast regions.
+> Visual Studio Code tools don't support jobs in the China East, China North, Germany Central, and Germany NorthEast regions.
## Prerequisites
-Here are the prerequisites for the quickstart:
--- Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/).-- [Visual Studio Code](https://code.visualstudio.com/).
+* Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/).
+* [Visual Studio Code](https://code.visualstudio.com/).
## Install the Azure Stream Analytics Tools extension
-1. Open Visual Studio Code.
-2. From **Extensions** on the left pane, search for **Azure Stream Analytics** and select **Install** on the **Azure Stream Analytics Tools** extension.
+1. Open Visual Studio Code (VS Code).
+2. From **Extensions** on the left pane, search for **stream analytics** and select **Install** on the **Azure Stream Analytics Tools** extension.
:::image type="content" source="./media/quick-create-visual-studio-code/install-extension.png" alt-text="Screenshot showing the Extensions page of Visual Studio Code with an option to install Stream Analytics extension.":::
-3. After the extension is installed, verify that **Azure Stream Analytics Tools** is visible in **Enabled Extensions**.
-
- :::image type="content" source="./media/quick-create-visual-studio-code/enabled-extensions.png" alt-text="Screenshot showing the Azure Stream Analytics extension in the list of enabled extensions.":::
-
-## Activate the Azure Stream Analytics Tools extension
-
-1. Select the **Azure** icon on the Visual Studio Code activity bar. Under **Stream Analytics** on the side bar, select **Sign in to Azure**.
+3. After it's installed, select the **Azure** icon on the activity bar and sign in to Azure.
:::image type="content" source="./media/quick-create-visual-studio-code/azure-sign-in.png" alt-text="Screenshot showing how to sign in to Azure.":::
-2. You may need to select a subscription as showing in the following image:
- :::image type="content" source="./media/quick-create-visual-studio-code/select-subscription.png" alt-text="Screenshot showing the selection of an Azure subscription.":::
-3. Keep Visual Studio Code open.
+4. Once you're signed in, you can see the subscriptions under your Azure account.
- > [!NOTE]
- > The Azure Stream Analytics Tools extension will automatically sign you in the next time if you don't sign out.
- > If your account has two-factor authentication, we recommend that you use phone authentication rather than using a PIN.
- > If you have issues with listing resources, signing out and signing in again usually helps. To sign out, enter the command `Azure: Sign Out`.
+> [!NOTE]
+> The ASA Tools extension will automatically sign you in every time you open VS Code. If your account has two-factor authentication, we recommend that you use phone authentication rather than a PIN. To sign out of your Azure account, press `Ctrl + Shift + P` and enter `Azure: Sign Out`.
## Prepare the input data
-Before you define the Stream Analytics job, you should prepare the data that's later configured as the job input. To prepare the input data that the job requires, complete the following steps:
+Before defining the Stream Analytics job, you should prepare the input data. The real-time sensor data is ingested into IoT Hub, which is later configured as the job input. To prepare the input data required by the job, follow these steps:
1. Sign in to the [Azure portal](https://portal.azure.com/).
-2. Select **Create a resource** > **Internet of Things** > **IoT Hub**.
+2. Select **Create a resource > Internet of Things > IoT Hub**.
:::image type="content" source="./media/quick-create-visual-studio-code/create-resource-iot-hub-menu.png" alt-text="Screenshot showing the Create Resource page for Iot Hub.":::
-3. In the **IoT Hub** pane, enter the following information:
- |**Setting** |**Suggested value** |**Description** |
- ||||
- |Subscription | \<Your subscription\> | Select the Azure subscription that you want to use. |
- |Resource Group | asaquickstart-resourcegroup | Select **Create New** and enter a new resource-group name for your account. |
- |Region | \<Select the region that is closest to your users\> | Select a geographic location where you can host your IoT hub. Use the location that's closest to your users. |
- |IoT Hub Name | MyASAIoTHub | Select a name for your IoT hub. |
+3. On the **IoT Hub** page, enter the following information:
+ * **Subscription**, select your Azure subscription.
+ * **Resource group**, select an existing resource group or create a new resource group.
+ * **IoT hub name**, enter a name for your IoT hub.
+ * **Region**, select the region that's closest to you.
:::image type="content" source="./media/quick-create-visual-studio-code/create-iot-hub.png" alt-text="Screenshot showing the IoT Hub page for creation.":::
-4. Select **Next: Networking** at the bottom of the page to move to the **Networking** page of the creation wizard.
-1. On the **Networking** page, select **Next: Management** at the bottom of the page.
-1. On the **Management** page, for **Pricing and scale tier**, select **F1: Free tier**, if it's still available on your subscription. If the free tier is unavailable, choose the lowest pricing tier available. For more information, see [Azure IoT Hub pricing](https://azure.microsoft.com/pricing/details/iot-hub/).
-6. Select **Review + create**. Review your IoT hub information and select **Create**. Your IoT hub might take a few minutes to create. You can monitor the progress on the **Notifications** pane.
-1. After the creation is successful, select **Go to resource** to navigate to the **IoT Hub** page for your IoT hub.
-1. On the **IoT Hub** page, select **Devices** under **Device management** on the left menu, and then select **Add Device** as shown in the image.
+
+4. Go to the **Management** page. For **Pricing and scale tier**, select **F1: Free tier** if it's still available on your subscription. For more information, see [Azure IoT Hub pricing](https://azure.microsoft.com/pricing/details/iot-hub/).
+
+ :::image type="content" source="./media/quick-create-visual-studio-code/iot-management-page.png" alt-text="Screenshot showing the IoT Hub management page.":::
+
+5. Select **Review + create**. Review your IoT hub information and select **Create**. It may take a few minutes to deploy your IoT hub.
+
+6. After your IoT hub is created, select **Go to resource** to navigate to the **IoT Hub** page.
+
+7. On the **IoT Hub** page, select **Devices** on the left menu, and then select **\+ Add Device**.
:::image type="content" source="./media/quick-create-visual-studio-code/add-device-menu.png" alt-text="Screenshot showing the Add Device button on the Devices page.":::
-1. On your IoT hub's navigation menu, select **Add** under **IoT devices**. Add an ID for **Device ID**, and select **Save**.
+9. Enter a **Device ID** and select **Save**.
:::image type="content" source="./media/quick-create-visual-studio-code/add-device-iot-hub.png" alt-text="Screenshot showing the Add Device page.":::
-1. After the device is saved, select the device from the list. If it doesn't show up in the list, move to another page and switch back to the **Devices** page.
+10. Once the device is created, you should see it in the **IoT devices** list. Select the **Refresh** button on the page if you don't see it.
:::image type="content" source="./media/quick-create-visual-studio-code/select-device.png" alt-text="Screenshot showing the selection of the device on the Devices page.":::
-8. Copy the string in **Connection string (primary key)** and save it to a notepad to use later.
+11. Select your device from the list. Copy the **Primary Connection String** and save it to a notepad to use later.
:::image type="content" source="./media/quick-create-visual-studio-code/save-iot-device-connection-string.png" alt-text="Screenshot showing the primary connection string of the device you created."::: ## Run the IoT simulator
-1. Open the [Raspberry Pi Azure IoT Online Simulator](https://azure-samples.github.io/raspberry-pi-web-simulator/) in a new browser tab or window.
+1. Open the [Raspberry Pi Azure IoT Online Simulator](https://azure-samples.github.io/raspberry-pi-web-simulator/) in a new browser tab.
2. Replace the placeholder in line 15 with the IoT hub device connection string that you saved earlier. 3. Select **Run**. The output should show the sensor data and messages that are being sent to your IoT hub.
Before you define the Stream Analytics job, you should prepare the data that's l
1. From the upper-left corner of the Azure portal, select **Create a resource** > **Storage** > **Storage account**. :::image type="content" source="./media/quick-create-visual-studio-code/create-storage-account-menu.png" alt-text="Screenshot showing the Create storage account menu.":::
-2. In the **Create storage account** pane, enter a storage account name, location, and resource group. Choose the same location and resource group as the IoT hub that you created. Then select **Review** to create the account. Then, select **Create** to create the storage account. After the resource is created, select **Go to resource** to navigate to the **Storage account** page.
- :::image type="content" source="./media/quick-create-visual-studio-code/create-storage-account.png" alt-text="Screenshot showing the Create storage account page.":::
+2. In the **Create storage account** pane, enter a storage account name, location, and resource group. Choose the same location and resource group as the IoT hub that you created. Then select **Review** and **Create** to create the storage account.
+
+ :::image type="content" source="./media/quick-create-visual-studio-code/create-storage-account.png" alt-text="Screenshot showing the Create storage account page.":::
+
3. On the **Storage account** page, select **Containers** on the left menu, and then select **+ Container** on the command bar. :::image type="content" source="./media/quick-create-visual-studio-code/add-blob-container-menu.png" alt-text="Screenshot showing the Containers page."::: + 4. From the **New container** page, provide a **name** for your container, leave **Public access level** as **Private (no anonymous access)**, and select **OK**. :::image type="content" source="./media/quick-create-visual-studio-code/create-blob-container.png" alt-text="Screenshot showing the creation of a blob container page."::: ## Create a Stream Analytics project
-1. In Visual Studio Code, select **View** -> **Command palette** on the menu to open the command palette.
-
- :::image type="content" source="./media/quick-create-visual-studio-code/view-command-palette.png" alt-text="Screenshot showing the View -> Command palette menu.":::
-1. Then enter **ASA** and select **ASA: Create New Project**.
+1. In Visual Studio Code, press **Ctrl+Shift+P** and enter **ASA: Create New Project**.
:::image type="content" source="./media/quick-create-visual-studio-code/create-new-project.png" alt-text="Screenshot showing the selection of ASA: Create New Project in the command palette."::: + 2. Enter your project name, like **myASAproj**, and select a folder for your project. :::image type="content" source="./media/quick-create-visual-studio-code/create-project-name.png" alt-text="Screenshot showing entering an ASA project name.":::
-3. The new project is added to your workspace. A Stream Analytics project consists of three folders: **Inputs**, **Outputs**, and **Functions**. It also has the query script **(*.asaql)**, a **JobConfig.json** file, and an **asaproj.json** configuration file. You may need to select **Explorer** button on the left menu of the Visual Studio Code to see the explorer.
- The **asaproj.json** configuration file contains the inputs, outputs, and job configuration file information needed for submitting the Stream Analytics job to Azure.
+3. An ASA project is added to your workspace. It consists of three folders: **Inputs**, **Outputs**, and **Functions**. It also has the query script **(*.asaql)**, a **JobConfig.json** file, and an **asaproj.json** configuration file.
:::image type="content" source="./media/quick-create-visual-studio-code/asa-project-files.png" alt-text="Screenshot showing Stream Analytics project files in Visual Studio Code.":::
+ The **asaproj.json** file contains the inputs, outputs, and job configuration settings for submitting the Stream Analytics job to Azure.
+ > [!Note] > When you're adding inputs and outputs from the command palette, the corresponding paths are added to **asaproj.json** automatically. If you add or remove inputs or outputs on disk directly, you need to manually add or remove them from **asaproj.json**. You can choose to put the inputs and outputs in one place and then reference them in different jobs by specifying the paths in each **asaproj.json** file. ## Define the transformation query
-1. Open **myASAproj.asaql** from your project folder.
-2. Add the following query:
+1. Open the **myASAproj.asaql** file and add the following query:
```sql SELECT *
Before you define the Stream Analytics job, you should prepare the data that's l
:::image type="content" source="./media/quick-create-visual-studio-code/query.png" lightbox="./media/quick-create-visual-studio-code/query.png" alt-text="Screenshot showing the transformation query.":::
-## Define a live input
+## Configure job input
1. Right-click the **Inputs** folder in your Stream Analytics project. Then select **ASA: Add Input** from the context menu. :::image type="content" source="./media/quick-create-visual-studio-code/add-input-from-inputs-folder.png" lightbox="./media/quick-create-visual-studio-code/add-input-from-inputs-folder.png" alt-text="Screenshot showing the ASA: Add input menu in Visual Studio Code.":::
- Or select **Ctrl+Shift+P** (or **View** -> **Command palette** menu) to open the command palette and enter **ASA: Add Input**.
+ Or press **Ctrl+Shift+P** to open the command palette and enter **ASA: Add Input**.
- :::image type="content" source="./media/quick-create-visual-studio-code/add-input.png" lightbox="./media/quick-create-visual-studio-code/add-input.png" alt-text="Screenshot showing the ASA: Add input in the command palette of Visual Studio Code.":::
2. Choose **IoT Hub** for the input type. :::image type="content" source="./media/quick-create-visual-studio-code/iot-hub.png" lightbox="./media/quick-create-visual-studio-code/iot-hub.png" alt-text="Screenshot showing the selection of your IoT hub in VS Code command palette.":::
-3. If you added the input from the command palette, choose the Stream Analytics query script that will use the input. It should be automatically populated with the file path to **myASAproj.asaql**.
- :::image type="content" source="./media/quick-create-visual-studio-code/asa-script.png" lightbox="./media/quick-create-visual-studio-code/asa-script.png" alt-text="Screenshot showing the selection of your Stream Analytics script in VS Code command palette.":::
-4. Choose **Select from your Azure Subscriptions** from the drop-down menu, and then press **ENTER**.
+3. Select the ASA script (**\*.asaql**) and your Azure subscription from the drop-down menus, and then press **ENTER**.
- :::image type="content" source="./media/quick-create-visual-studio-code/add-input-select-subscription.png" lightbox="./media/quick-create-visual-studio-code/add-input-select-subscription.png" alt-text="Screenshot showing the selection of your Azure subscription in VS Code command palette.":::
-5. Edit the newly generated **IoTHub1.json** file with the following values. Keep default values for fields not mentioned here.
+4. Under the **Inputs** folder, you'll see that an **IoTHub1.json** file has been created. Replace the settings with the following suggested values, and keep the default values for fields not mentioned here.
- |Setting|Suggested value|Description|
- |-||--|
- |Name|Input|Enter a name to identify the job's input.|
- |IotHubNamespace|MyASAIoTHub|Choose or enter the name of your IoT hub. IoT hub names are automatically detected if they're created in the same subscription.|
- |SharedAccessPolicyName|iothubowner| |
+ |Setting|Suggested Value|Description|
+ |-||--|
+   |Name|**Input**|This input name is used for the **FROM** statement in the query.|
+   |IotHubNamespace|**spiothub** |The name of your IoT hub. IoT hub names are automatically detected if you choose **Select from your subscription**.|
+ |SharedAccessPolicyName|**iothubowner**||
- You can use the CodeLens feature to help you enter a string, select from a drop-down list, or change the text directly in the file. The following screenshot shows **Select from your Subscriptions** as an example. The credentials are auto-listed and saved in local credential manager.
+ :::image type="content" source="./media/quick-create-visual-studio-code/iothub-configuration.png" lightbox="./media/quick-create-visual-studio-code/iothub-configuration.png" alt-text="Screenshot showing the IoT Hub configuration in VS Code.":::
+
+ <!-- You can use the CodeLens feature to help you enter a string, select from a drop-down list, or change the text directly in the file. The following screenshot shows **Select from your Subscriptions** as an example. The credentials are auto-listed and saved in local credential manager.
:::image type="content" source="./media/quick-create-visual-studio-code/configure-input.png" lightbox="./media/quick-create-visual-studio-code/configure-input.png" alt-text="Screenshot showing the launch of CodeLens feature in VS Code."::: After you select a subscription, **select an IoT hub** if you have multiple hubs in that subscription.
- :::image type="content" source="./media/quick-create-visual-studio-code/select-iot-hub.png" lightbox="./media/quick-create-visual-studio-code/select-iot-hub.png" alt-text="Screenshot showing the selection of your IoT hub in VS Code.":::
+ :::image type="content" source="./media/quick-create-visual-studio-code/select-iot-hub.png" lightbox="./media/quick-create-visual-studio-code/select-iot-hub.png" alt-text="Screenshot showing the selection of your IoT hub in VS Code."::: -->
- > [!IMPORTANT]
- > Make sure that the name of the input is **Input** as the query expect it.
+5. Select **Preview data** to see if the input data is successfully configured for your job. It fetches a sample of the data from your IoT hub and shows it in the preview window.
-## Preview input
+ :::image type="content" source="./media/quick-create-visual-studio-code/preview-live-input.png" lightbox="./media/quick-create-visual-studio-code/preview-live-input.png" alt-text="Screenshot showing the preview of input data in your IoT hub.":::
-Select **Preview data** in **IoTHub1.json** from the top line. Some input data will be fetched from the IoT hub and shown in the preview window. This process might take a while.
-## Define an output
+## Configure job output
-1. Select **Ctrl+Shift+P** to open the command palette. Then, enter **ASA: Add Output**.
+1. Press **Ctrl+Shift+P** to open the command palette and enter **ASA: Add Output**.
2. Choose **Data Lake Storage Gen2/Blob Storage** for the sink type.
-3. Choose the Stream Analytics query script that will use this input.
-4. Enter the output file name as **BlobStorage**.
-5. Edit **BlobStorage** by using the following values. Keep default values for fields not mentioned here. Use the **CodeLens** feature to help you select an Azure subscription and storage account name from a drop-down list or manually enter values.
+3. Select the query script that uses this output.
+4. Enter **BlobStorage1** as the output file name.
+5. Edit the settings using the following values. Keep default values for fields not mentioned here.
|Setting|Suggested value|Description| |-||--|
- |Name|Output| Enter a name to identify the job's output.|
- |Storage Account| &lt;Name of your storage account&gt; |Choose or enter the name of your storage account. Storage account names are automatically detected if they're created in the same subscription.|
- |Container|container1|Select the existing container that you created in your storage account.|
- |Path Pattern|output|Enter the name of a file path to be created within the container.|
+   |Name| **Output** | This output name is used for the **INTO** statement in the query.|
+ |Storage Account| **spstorageaccount0901** |Choose or enter the name of your storage account. Storage account names are automatically detected if they're created in the same subscription.|
+ |Container|**spcontainer**|Select the existing container that you created in your storage account.|
+
+ <!-- |Path Pattern|output|Enter the name of a file path to be created within the container.| -->
:::image type="content" source="./media/quick-create-visual-studio-code/configure-output.png" lightbox="./media/quick-create-visual-studio-code/configure-output.png" alt-text="Screenshot showing the configuration of output for the Stream Analytics job.":::
- > [!IMPORTANT]
- > Make sure that the name of the output is **Output** as the query expect it.
-## Compile the script
-Script compilation checks syntax and generates the Azure Resource Manager templates for automatic deployment. There are two ways to trigger script compilation:
+## Compile the script and submit to Azure
-- Select the script from the workspace and then compile from the command palette.
+Script compilation checks syntax and generates the Azure Resource Manager templates for automatic deployment.
- :::image type="content" source="./media/quick-create-visual-studio-code/compile-script-1.png" lightbox="./media/quick-create-visual-studio-code/compile-script-1.png" alt-text="Screenshot showing the compilation of script option from the command palette.":::
-- Right-click the script and select **ASA: Compile Script**.
+1. Right-click the script and select **ASA: Compile Script**.
:::image type="content" source="./media/quick-create-visual-studio-code/compile-script-2.png" lightbox="./media/quick-create-visual-studio-code/compile-script-2.png" alt-text="Screenshot showing the compilation of script option from the Stream Analytics explorer in VS Code.":::
-After compilation, you can see results in the **Output** window. You can find the two generated Azure Resource Manager templates in the **Deploy** subfolder in your project folder. These two files are used for automatic deployment.
+2. After compilation, you see a **Deploy** folder under your project with two Azure Resource Manager templates. These two files are used for automatic deployment.
+
+ :::image type="content" source="./media/quick-create-visual-studio-code/deployment-templates.png" lightbox="./media/quick-create-visual-studio-code/deployment-templates.png" alt-text="Screenshot showing the generated deployment templates in the project folder.":::
+3. Select **Submit to Azure** in the query editor.
-## Submit a Stream Analytics job to Azure
+ :::image type="content" source="./media/quick-create-visual-studio-code/submit-job.png" lightbox="./media/quick-create-visual-studio-code/submit-job.png" alt-text="Screenshot showing the submit job button to submit the Stream Analytics job to Azure.":::
-1. In the script editor window of your query script, select **Submit to Azure**.
+ Then follow the instructions to complete the process: **Select subscription > Select a job > Create New Job > Enter job name > Choose resource group and region.**
-2. Select your subscription from the pop-up list.
-3. Choose **Select a job**. Then choose **Create New Job**.
-4. Enter your job name, **myASAjob**. Then follow the instructions to choose the resource group and location.
-5. Select **Publish to Azure**. You can find the logs in the output window.
-6. When your job is created, you can see it in **Stream Analytics Explorer**. See the image in the next section.
+4. Select **Publish to Azure** to complete the submission. Wait for the new **Cloud Job View** tab to open, which shows your job's status.
+
+ :::image type="content" source="./media/quick-create-visual-studio-code/publish-to-azure.png" lightbox="./media/quick-create-visual-studio-code/publish-to-azure.png" alt-text="Screenshot showing the publish to Azure button in VS Code.":::
## Start the Stream Analytics job and check output
-1. Open **Stream Analytics Explorer** in Visual Studio Code and find your job, **myASAJob**.
-2. Select **Start** from the **Cloud view** page (OR) right-click the job name in Stream Analytics explorer, and select **Start** from the context menu.
+1. On the **Cloud Job View** tab, select **Start** to run your job in the cloud. This process may take a few minutes to complete.
:::image type="content" source="./media/quick-create-visual-studio-code/start-asa-job-vs-code.png" lightbox="./media/quick-create-visual-studio-code/start-asa-job-vs-code.png" alt-text="Screenshot showing the Start job button in the Cloud view page.":::
-4. Note that the job status has changed to **Running**. Right-click the job name and select **Open Job View in Portal** to see the input and output event metrics. This action might take a few minutes.
-5. To view the results, open the blob storage in the Visual Studio Code extension or in the Azure portal.
+
+2. If your job starts successfully, the job status changes to **Running**. You can see a logical diagram showing how your ASA job is running.
+
+ :::image type="content" source="./media/quick-create-visual-studio-code/job-running-status.png" lightbox="./media/quick-create-visual-studio-code/job-running-status.png" alt-text="Screenshot showing the job running status in VS Code.":::
+
+3. To view the output results, you can open the blob storage in the Visual Studio Code extension or in the Azure portal.
:::image type="content" source="./media/quick-create-visual-studio-code/output-files.png" lightbox="./media/quick-create-visual-studio-code/output-files.png" alt-text="Screenshot showing the output file in the Blob container.":::
After compilation, you can see results in the **Output** window. You can find th
{"messageId":31,"deviceId":"Raspberry Pi Web Client","temperature":28.163585438418679,"humidity":60.0511571297096,"EventProcessedUtcTime":"2022-09-01T22:55:25.1528729Z","PartitionId":3,"EventEnqueuedUtcTime":"2022-09-01T22:55:24.9050000Z","IoTHub":{"MessageId":null,"CorrelationId":null,"ConnectionDeviceId":"MyASAIoTDevice","ConnectionDeviceGenerationId":"637976642928634103","EnqueuedTime":"2022-09-01T22:55:24.9120000Z"}} {"messageId":32,"deviceId":"Raspberry Pi Web Client","temperature":31.00503387156985,"humidity":78.68821066044552,"EventProcessedUtcTime":"2022-09-01T22:55:43.2652127Z","PartitionId":3,"EventEnqueuedUtcTime":"2022-09-01T22:55:43.0480000Z","IoTHub":{"MessageId":null,"CorrelationId":null,"ConnectionDeviceId":"MyASAIoTDevice","ConnectionDeviceGenerationId":"637976642928634103","EnqueuedTime":"2022-09-01T22:55:43.0520000Z"}} ```
-
## Clean up resources
-When they're no longer needed, delete the resource group, the streaming job, and all related resources. Deleting the job avoids billing the streaming units that the job consumes.
-
-If you're planning to use the job in the future, you can stop it and restart it later. If you're not going to use this job again, use the following steps to delete all resources that you created in this quickstart:
+When no longer needed, delete the resource group, the Stream Analytics job, and all related resources. Deleting the job avoids billing for the streaming units consumed by the job. If you're planning to use the job in the future, you can stop it and restart it later when you need it. If you aren't going to continue to use this job, delete all resources created by this quickstart by using the following steps:
1. From the left menu in the Azure portal, select **Resource groups** and then select the name of the resource that you created.
If you're planning to use the job in the future, you can stop it and restart it
## Next steps
-In this quickstart, you deployed a simple Stream Analytics job by using Visual Studio Code. You can also deploy Stream Analytics jobs by using the [Azure portal](stream-analytics-quick-create-portal.md), [PowerShell](stream-analytics-quick-create-powershell.md), and [Visual Studio](stream-analytics-quick-create-vs.md).
-
-To learn about Azure Stream Analytics Tools for Visual Studio Code, continue to the following articles:
+To learn more about ASA Tools extension for Visual Studio Code, continue to the following articles:
* [Test Stream Analytics queries locally with sample data using Visual Studio Code](visual-studio-code-local-run.md)
stream-analytics Quick Start Build Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/quick-start-build-application.md
+
+ Title: Build a clickstream analyzer using one-click deployment
+description: This quickstart shows you how to get started with ASA using a GitHub repository and PowerShell scripts with a data generator.
+++ Last updated : 10/27/2022+++
+# Build a clickstream analyzer using one-click deployment
+
+This quickstart shows how to build a streaming application to analyze a website clickstream using a PowerShell script. It's an easy way to deploy Azure resources with auto-generated data streams and helps you explore different stream analytics scenarios.
+
+You can choose the following scenarios for deployment:
+- Filter clickstream requests
+- Join clickstream with a file
+
+## Prerequisites
+* Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/).
+* Install [Git](https://git-scm.com/downloads).
+* Azure PowerShell module. See [Install the Azure Az PowerShell module](https://learn.microsoft.com/powershell/azure/install-Az-ps) to install or upgrade. A quick version check is sketched after this list.
+
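+Before running the deployment script, you can confirm that the Az module is available. The following is a minimal sketch, not part of the original quickstart; it assumes you install from the public PowerShell Gallery.
+
+```powershell
+# Check for the Az module and install it for the current user if it's missing.
+if (-not (Get-Module -ListAvailable -Name Az.Accounts)) {
+    Install-Module -Name Az -Scope CurrentUser -Repository PSGallery -Force
+}
+# Show the highest installed Az.Accounts version as a quick sanity check.
+(Get-Module -ListAvailable -Name Az.Accounts | Sort-Object Version -Descending | Select-Object -First 1).Version
+```
+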
+## Filter clickstream requests
+
+In this example, you learn to extract `GET` and `POST` requests from a website clickstream and store the output results in Azure Blob Storage. Here's the architecture for this example:
+![Clickstream one input](./media/quick-start-with-mock-data/clickstream-one-input.png)
+
+Sample of a website clickstream:
+
+```json
+{
+ "EventTime": "2022-09-09 08:58:59 UTC",
+ "UserID": 465,
+ "IP": "145.140.61.170",
+ "Request": {
+ "Method": "GET",
+ "URI": "/https://docsupdatetracker.net/index.html",
+ "Protocol": "HTTP/1.1"
+ },
+ "Response": {
+ "Code": 200,
+ "Bytes": 42682
+ },
+ "Browser": "Chrome"
+}
+```
+
+This guide uses the [azure-stream-analytics GitHub repository](https://github.com/Azure/azure-stream-analytics) for the demo. Follow these steps to deploy the resources:
+
+1. Open **PowerShell** from the Start menu, and clone this GitHub repository to your working directory.
+
+ ```powershell
+ git clone https://github.com/Azure/azure-stream-analytics.git
+ ```
+
+2. Go to the **BuildApplications** folder.
+
+ ```powershell
+ cd .\azure-stream-analytics\BuildApplications\
+ ```
+
+3. Sign in to Azure and enter your Azure credentials in the pop-up browser.
+
+ ```powershell
+ Connect-AzAccount
+ ```
+
+4. Replace `$subscriptionId` with your Azure subscription ID and run the following command to deploy Azure resources. This process may take a few minutes to complete.
+
+ ```powershell
+ .\CreateJob.ps1 -job ClickStream-Filter -eventsPerMinute 11 -subscriptionid $subscriptionId
+ ```
+
+ * `eventsPerMinute` is the input rate for generated data. In this case, the input source generates 11 events per minute.
+    * You can find your subscription ID in **Azure portal > Subscriptions**, or look it up with PowerShell as sketched below.
+
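+    If you're already signed in with `Connect-AzAccount`, a minimal sketch (not part of the original script) for looking up the subscription ID from the current context is:
+
+    ```powershell
+    # Read the subscription ID from the current Azure PowerShell context.
+    $subscriptionId = (Get-AzContext).Subscription.Id
+    $subscriptionId
+    ```
+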
+5. Once the deployment is completed, your browser opens automatically, and you can see a resource group named **ClickStream-Filter-rg-\*** in the Azure portal. The resource group contains the following five resources:
+
+ | Resource Type | Name | Description |
+ | | | -- |
+ | Azure Function | clickstream* | Generate clickstream data |
+ | Event Hubs | clickstream* | Ingest clickstream data for consuming |
+ | Stream Analytics Job | ClickStream-Filter | Define a query to extract `GET` requests from the clickstream input |
+ | Blob Storage | clickstream* | Output destination for the ASA job |
+ | App Service Plan | clickstream* | A necessity for Azure Function |
+
+6. **Congratulations!** You've deployed a streaming application to extract requests from a website clickstream.
+
+7. The ASA job **ClickStream-Filter** uses the following query to extract HTTP requests from the clickstream. Select **Test query** in the query editor to preview the output results.
+
+ ```sql
+ SELECT System.Timestamp Systime, UserId, Request.Method, Response.Code, Browser
+ INTO BlobOutput
+ FROM ClickStream TIMESTAMP BY Timestamp
+ WHERE Request.Method = 'GET' or Request.Method = 'POST'
+ ```
+
+ ![Test Query](./media/quick-start-with-mock-data/test-query.png)
+
+8. There's sample code in the query comments that you can use for other stream analytics scenarios with one stream input.
+
+ * Count clicks for every hour
+
+ ```sql
+ select System.Timestamp as Systime, count( * )
+ FROM clickstream
+ TIMESTAMP BY EventTime
+ GROUP BY TumblingWindow(hour, 1)
+ ```
+
+ * Select distinct user
+
+ ```sql
+ SELECT *
+ FROM clickstream
+ TIMESTAMP BY Time
+ WHERE ISFIRST(hour, 1) OVER(PARTITION BY userId) = 1
+ ```
+
+9. All output results are stored as `JSON` files in the Blob Storage. You can find them via: **Blob Storage > Containers > job-output**.
+![Blob Storage](./media/quick-start-with-mock-data/blog-storage-containers.png)
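+
+If you'd rather list the output blobs from PowerShell instead of the portal, here's a rough sketch; the resource group and storage account names are placeholders that you need to replace with the generated names from your deployment.
+
+```powershell
+# List the JSON output blobs written by the ASA job (replace the placeholder names).
+$storage = Get-AzStorageAccount -ResourceGroupName "ClickStream-Filter-rg-xxxx" -Name "clickstreamxxxx"
+Get-AzStorageBlob -Container "job-output" -Context $storage.Context |
+    Select-Object Name, Length, LastModified
+```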
+
+## Clickstream-RefJoin
+
+If you want to find the username for the clickstream by using a user file in storage, you can join the clickstream with a reference input, as shown in the following architecture:
+![Clickstream two input](./media/quick-start-with-mock-data/clickstream-two-inputs.png)
+
+Assuming you've completed the steps for the previous example, run the following commands to create a new resource group:
+
+1. Replace `$subscriptionId` with your Azure subscription ID and run the following command to deploy Azure resources. This process may take a few minutes to complete.
+
+ ```powershell
+ .\CreateJob.ps1 -job ClickStream-RefJoin -eventsPerMinute 11 -subscriptionid $subscriptionId
+ ```
+
+2. Once the deployment is completed, your browser opens automatically, and you can see a resource group named **ClickStream-RefJoin-rg-\*** in the Azure portal. The resource group contains five resources.
+
+3. The ASA job **ClickStream-RefJoin** uses the following query to join the clickstream with the reference SQL input.
+
+ ```sql
+ CREATE TABLE UserInfo(
+ UserId bigint,
+ UserName nvarchar(max),
+ Gender nvarchar(max)
+ );
+ SELECT System.Timestamp Systime, ClickStream.UserId, ClickStream.Response.Code, UserInfo.UserName, UserInfo.Gender
+ INTO BlobOutput
+ FROM ClickStream TIMESTAMP BY EventTime
+ LEFT JOIN UserInfo ON ClickStream.UserId = UserInfo.UserId
+ ```
+
+4. **Congratulations!** You've deployed a streaming application to join your user file with a website clickstream.
+
+## Clean up resources
+
+If you've tried out this project and no longer need the resource group, run this command in PowerShell to delete the resource group. Replace `$resourceGroup` with the name of the resource group that the script created.
+
+```powershell
+Remove-AzResourceGroup -Name $resourceGroup
+```
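+
+If you don't remember the generated resource group name, one way to find it (assuming the `ClickStream-*-rg-*` naming pattern used by the deployment script) is:
+
+```powershell
+# List resource groups that match the naming pattern used by CreateJob.ps1.
+Get-AzResourceGroup |
+    Where-Object { $_.ResourceGroupName -like "ClickStream-*-rg-*" } |
+    Select-Object ResourceGroupName, Location
+```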
+
+If you're planning to use this project in the future, you can skip deleting it, and stop the job for now.
+
+## Next steps
+
+To learn about Azure Stream Analytics, continue to the following articles:
+
+* [Quickstart: Create an Azure Stream Analytics job in VS Code](quick-create-visual-studio-code.md)
+
+* [Test ASA queries locally against live stream input](visual-studio-code-local-run-live-input.md)
+
+* [Use Visual Studio Code to view Azure Stream Analytics jobs](visual-studio-code-explore-jobs.md)
+
+* [Set up CI/CD pipelines by using the npm package](./cicd-overview.md)
synapse-analytics Continuous Integration Delivery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/cicd/continuous-integration-delivery.md
Last updated 10/08/2021
+searchScope:
+ - Continuous deployment
+ - CICD
+ - Synapse
# Continuous integration and delivery for an Azure Synapse Analytics workspace
synapse-analytics Source Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/cicd/source-control.md
Last updated 11/20/2020
+searchScope:
+ - Git integration
+ - CICD
+ - Synapse
# Source control in Synapse Studio
synapse-analytics How To Monitor Synapse Link Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-link/how-to-monitor-synapse-link-sql-database.md
+
+ Title: Monitor Azure Synapse Link for Azure SQL Database through Synapse Studio and Azure Monitor
+description: Learn how to monitor your Azure Synapse Link for Azure SQL Database link connections.
++++ Last updated : 11/10/2022++++
+# Monitor Azure Synapse Link for Azure SQL Database through Synapse Studio and Azure Monitor
+
+This article provides a guide on how to get started with monitoring your Azure Synapse Link for Azure SQL Database connections. Before you go through this article, you should know how to create and start an Azure Synapse Link for Azure SQL Database link connection from [Get started with Azure Synapse Link for Azure SQL Database](connect-synapse-link-sql-database.md). Once you've created and started your Synapse Link connection, you can monitor your link connection through Synapse Studio or Azure Monitor.
+
+> [!IMPORTANT]
+> Azure Synapse Link for SQL is currently in preview.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+## Monitor the status of an Azure Synapse Link for Azure SQL Database connection in Synapse Studio
+
+You can monitor the status of your Azure Synapse Link connection, see which tables are being initially copied over (*snapshotting*), and see which tables are in continuous replication mode (*replicating*) directly in Synapse Studio. In this section, we'll take a deep dive into link-level monitoring and table-level monitoring:
+
+### Link-level monitoring
+
+1. Once your link connection is running in your Azure Synapse workspace, navigate to the **Monitor** hub, and then select **Link connections**.
+
+ :::image type="content" source="../media/connect-synapse-link-sql-database/studio-monitor-link-connections-1.png" alt-text="Screenshot that shows how to monitor the status of the Azure Synapse Link connection from the monitor hub.":::
+
+1. All your link connections automatically show up on this page, along with link-level monitoring metrics that summarize a few details of your link connection.
+
+    :::image type="content" source="../media/connect-synapse-link-sql-database/studio-monitor-show-all-link-connections.png" alt-text="Screenshot that shows all of the Azure Synapse Link connections under the Link Connections tab." lightbox="../media/connect-synapse-link-sql-database/studio-monitor-show-all-link-connections.png":::
+
+1. The link-level connection grid contains the following columns:
+
+ | **Column Name** | **Description** |
+ | | |
+ | Link connection name | Name of the Link Connection |
+ | Source name | The name of the data source where the data is coming from (Azure SQL Database or SQL Server 2022) |
+ | Target name | The name of the destination location where the data is being replicated into (a dedicated SQL Pool) |
+ | Status | **Initial**, **Starting**, **Running**, **Stopping**, **Stopped**, **Pausing**, **Paused**, or **Resuming**. Details of what each status means can be found here: [Azure Synapse Link for Azure SQL Database](sql-database-synapse-link.md)|
+ | Start time | Start date and time for the link connection run (Month, Date, Year, HH:MM:SS AM/PM) |
+ | End time | End date and time for the link connection run (Month, Date, Year, HH:MM:SS AM/PM) |
+ | Landing zone SAS token expire time | Expiration date/time for the SAS token that is used to access the landing zone storage. More details can be found here: [Configure an expiration policy for shared accessed signatures (SAS)](/azure/storage/common/sas-expiration-policy.md?context=/azure/synapse-analytics/context/context) |
+ | Continuous run ID | ID of the link connection run *Helpful when troubleshooting any issues and contacting Microsoft support. |
+
+1. You need to manually select the **Refresh** button to refresh the list of link connections and their corresponding monitoring details. Autorefresh is currently not supported.
+
+ :::image type="content" source="../media/connect-synapse-link-sql-database/studio-monitor-refresh-link-connections.png" alt-text="Screenshot that shows where to press the Refresh button to refresh the statuses and details of the Azure Synapse Link connections.":::
+
+### Table-level monitoring
+
+1. Follow the same steps 1 and 2 above from the link-level monitoring.
+
+1. Click on the Link connection name of the **link connection** that you want to monitor.
+
+ :::image type="content" source="../media/connect-synapse-link-sql-database/studio-monitor-click-on-link-connection.png" alt-text="Screenshot of clicking on an Azure Synapse Link connection.":::
+
+1. After clicking on your link connection, you'll see the tables and their corresponding table-level metrics that summarize a few details about the tables that you're replicating over in your link connection.
+
+ :::image type="content" source="../media/connect-synapse-link-sql-database/studio-monitor-show-all-tables.png" alt-text="Screenshot that shows the details of each of the tables under a particular Azure Synapse Link connection." lightbox="../media/connect-synapse-link-sql-database/studio-monitor-show-all-tables.png":::
+
+1. The table-level connection grid contains the following columns:
+
+ | **Column Name** | **Description** |
+ |||
+ | Source schema/table name | Name of the source table that is being replicated from |
+ | Target schema/table name | Name of the destination table that the source table is being replicated to |
+ | Status | **Waiting**, **Snapshotting**, **Replicating**, **Failed**, **Suspended**. Details of what each status means can be found here: [Azure Synapse Link for Azure SQL Database](sql-database-synapse-link.md) |
+ | Link table ID | ID of the table in the link connection. *Helpful when troubleshooting any issues and contacting Microsoft support. |
+ | Processed rows | Row counts processed by Synapse Link for SQL |
+ | Processed data volume | Data volume in bytes processed by Synapse Link for SQL |
+ | Time of last processed data | Time when last processed change data arrived in the landing zone (Month, Date, Year, HH:MM:SS AM/PM) |
+
+1. You need to manually select the **Refresh** button to refresh the list of tables in the link connections and their corresponding monitoring details. Autorefresh is currently not supported.
+ :::image type="content" source="../media/connect-synapse-link-sql-database/studio-monitor-refresh-tables.png" alt-text="Screenshot that shows where to press the Refresh button to refresh the statuses and details of the tables under a particular Azure Synapse Link connection.":::
+
+
+## Advanced monitoring with Azure Monitor
+
+No matter what cloud applications you're using, it's hard to manage and keep track of all the moving pieces. Azure Monitor provides base-level infrastructure metrics, alerts, and logs for most Azure services. Azure diagnostic logs are emitted by a resource and provide rich, frequent data about the operation of that resource. Azure Synapse Analytics can write diagnostic logs in Azure Monitor, to help you understand deep insights about your applications, improve application performance, and more.
+
+For more information, refer to [How to monitor Synapse Analytics using Azure Monitor](../monitoring/how-to-monitor-using-azure-monitor.md)
+
+In this section, we'll deep dive into setting up metrics, alerts, and logs in Azure Monitor to ensure that you understand more of the advanced capabilities of monitoring your link connection.
+
+### Metrics
+
+The most important type of Monitor data is the metric, which is also called the performance counter. Metrics are emitted by most Azure resources. Azure Monitor provides several ways to configure and consume these metrics for monitoring and troubleshooting.
+
+Azure Synapse Link emits the following metrics to Azure Monitor:
+
+| **Metric** | **Aggregation types** | **Description** |
+||||
+| Link connection events | Sum | Number of Synapse Link connection events, including start, stop, and failure |
+| Link latency in seconds | Max, Min, Avg | Synapse Link data processing latency in seconds |
+| Link processed data volume (bytes) | Sum | Data volume in bytes processed by Synapse Link |
+| Link processed rows | Sum | Row counts processed by Synapse Link |
+| Link table events | Sum | Number of Synapse Link table events, including snapshot, removal, and failure |
+
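+If you'd rather pull one of these metrics programmatically than view them in the portal, here's a minimal PowerShell sketch. The workspace name, resource group, and the metric name string are assumptions; run `Get-AzMetricDefinition` against your workspace resource ID to confirm the exact metric names it emits.
+
+```powershell
+# Retrieve a Synapse Link metric for the last 24 hours (names are placeholders).
+$workspace = Get-AzSynapseWorkspace -ResourceGroupName "myResourceGroup" -Name "myWorkspace"
+
+# List the metric definitions to confirm the metric name to query.
+Get-AzMetricDefinition -ResourceId $workspace.Id | Select-Object -ExpandProperty Name
+
+# Query the metric once you know its name (shown here with an assumed name).
+Get-AzMetric -ResourceId $workspace.Id -MetricName "LinkProcessedRows" -StartTime (Get-Date).AddDays(-1) -EndTime (Get-Date) -AggregationType Total
+```
+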
+Now let's step through how we can see these metrics in the Azure portal.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Search for your **Synapse workspace** that your link connection resides in.
+
+1. Once you've landed on the overview page for your Synapse Workspace, click on the **Metrics** tab underneath "Monitoring".
+
+ :::image type="content" source="../media/connect-synapse-link-sql-database/monitor-click-on-metrics.png" alt-text="Screenshot that shows where to go to get to the Metrics tab to create a new metric in the Azure portal.":::
+
+1. You'll then see a new chart that is automatically generated for you.
+
+1. Under the **Metric dropdown**, you'll see many different categories of metrics. You want to scroll down to the **INTEGRATION** category, and choose any one of the 5 Link metrics:
+
+ * Link connection events
+ * Link latency in seconds
+ * Link processed data volume (bytes)
+ * Link processed rows
+ * Link table events
+
+ :::image type="content" source="../media/connect-synapse-link-sql-database/monitor-select-a-metric.png" alt-text="Screenshot that shows how to select a link metric.":::
+
+1. After selecting a metric of your choosing, you can see a graph representation of the data below.
+
+ :::image type="content" source="../media/connect-synapse-link-sql-database/monitor-display-of-metric.png" alt-text="Screenshot that shows the graph representation of the link metric that was chosen in the previous step." lightbox="../media/connect-synapse-link-sql-database/monitor-display-of-metric.png":::
+
+1. A few things that you can adjust on this screen (refer to the letter on the screenshot above to the bullet point letter below):
+
+ 1. You can add another chart.
+
+ 2. You can add another metric to the same chart. Then you can click between the metrics and their corresponding graphs.
+
+ 3. You can customize the aggregation. Some of the metrics only have one aggregation, but others have many. Refer to the chart above for the aggregations available for each metric.
+
+    4. You can pick how far back you want the metrics to go. By default, the metrics show the past 24 hours, but you have the ability to customize the time period by selecting the time range.
+
+    5. You can pin your metrics chart to your dashboard. This functionality makes it easy to look at your specific chart whenever you sign in to the Azure portal.
+
+### Alerts
+
+Azure Monitor provides built-in functionality for setting up alerts to monitor all your Azure resources efficiently. Alerts allow you to monitor your telemetry and capture signals that indicate that something is happening on the specified resource. Once the signals are captured, an alert rule is evaluated to see whether the signal meets the criteria of the condition. If the conditions are met, an alert is triggered, and notifications are sent through the appropriate channels.
+
+In this section, we're going to walk through how you can set up alerts for your Azure Synapse Link connection through Azure Synapse Analytics. Let's say, for example, that you're running your link connection and realize that you want to monitor the latency of your link connection. The workload requirements for this scenario require that any link connection with a maximum latency over 900 seconds (or 15 minutes) needs to be alerted to your Engineering team. Let's walk through how we would set up an alert for this example:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Search for your **Synapse workspace** that your link connection resides in.
+
+1. Once you've landed on the overview page for your Synapse Workspace, click on the **Alerts** tab underneath "Monitoring".
+
+ :::image type="content" source="../media/connect-synapse-link-sql-database/monitor-click-on-alerts.png" alt-text="Screenshot that shows where to go to get to the Alerts tab to create a new alert in the Azure portal.":::
+
+1. Click on the dropdown **Create**.
+
+1. Click on **Alert rule** to add a new alert rule.
+
+ :::image type="content" source="../media/connect-synapse-link-sql-database/monitor-create-an-alert.png" alt-text="Screenshot that shows how to create a new alert.":::
+
+1. The first step is to define the **scope**. The scope is the target resource that you want to monitor, and in this case, your scope should be Azure Synapse Analytics. The scope should automatically be filled out as the current Azure Synapse Analytics workspace that you're creating the alert for.
+
+1. Second, we need to define the **condition**. The condition defines the logic of when the alert rule should trigger.
+
+ a. Click **+Add condition**
+
+    b. You can see the 5 link connection **signal names**. For this example, let's choose the **Link latency in seconds** signal.
+
+ :::image type="content" source="../media/connect-synapse-link-sql-database/monitor-select-an-alert.png" alt-text="Screenshot that shows how to select one of the Link signals.":::
+
+1. Third, we need to configure the alert logic, or when the alert rule should be triggered.
+
+ 1. Select **Static** for the **Threshold** field
+
+ 1. Enter the following values for **Aggregation type**, **Operator**, and **Unit** fields:
+ * **Aggregation type**: **Maximum**
+ * **Operator**: **Greater than**
+ * **Unit**: **Count**
+
+ 1. Input a **Threshold value** of **900** (***Note: this value is in seconds***)
+
+1. You can also configure the **Split by dimensions** value that monitors specific time series and provides context to the fired alert. These additions do have their own separate charge. For this example, we'll leave it blank.
+
+1. Choose **30 minutes** for **Check every** and **1 hour** for **Lookback period** fields. These fields define how often you want the checks to happen.
+
+1. The graph in the Preview shows the events based on the alert logic that we defined, along with the estimated cost per month.
+
+ :::image type="content" source="../media/connect-synapse-link-sql-database/monitor-create-an-alert-rule.png" alt-text="Screenshot that shows all of the details, configurations, and preview of the price when creating an alert rule." lightbox="../media/connect-synapse-link-sql-database/monitor-create-an-alert-rule.png":::
+
+1. Fourth, we now need to set up **Actions**. We'll configure an action group, which is a set of actions that can be applied to an alert rule.
+
+ a. The **Select action group** option is chosen if you already have an action group that you want to choose. Let's click on **create action group**.
+
+1. On the **Basic** tab, pick a **Subscription**, **resource group**, and **region**. Then provide appropriate values for the **Action group name** and **Display name**. Then press **Next**.
+
+1. On the **Notifications** tab, under the Notification type, select **Email/SMS message/Push/Voice**. And give it an appropriate name.
+
+ a. Check the boxes for **Email** and **SMS**, and then provide the corresponding values. Then press **OK**.
+
+ b. Then press **Next**.
+
+ :::image type="content" source="../media/connect-synapse-link-sql-database/monitor-create-action-group.png" alt-text="Screenshot that shows how to create an action group and specify notifications when an alert rule's conditions are met.":::
+
+1. On the **Actions** tab, let's select an option for **Action type**.
+
+1. In this example, let's use the **Event Hubs** action type, so we'll need to input the **subscription name**, **Event Hub namespace**, and select an **Event Hub name**. Then click on **OK**.
+
+    a. If you don't have an Event Hub created, refer to [Create an event hub](/azure/event-hubs/event-hub-create.md?context=/azure/synapse-analytics/context/context) to create one.
+
+ :::image type="content" source="../media/connect-synapse-link-sql-database/monitor-create-action-group-2.png" alt-text="Screenshot that shows how to create an action group and specify an action type when an alert rule's conditions are met.":::
+
+1. Click on **Review + Create** to review the settings, and then hit **Create**.
+
+1. Immediately we're taken back to the **Alerts homepage**. If we click on the **Alert Rules** on the top, we can see our newly created alert.
+
+ :::image type="content" source="../media/connect-synapse-link-sql-database/monitor-display-alert-rules.png" alt-text="Screenshot that shows all of the alert rules that were created, including the one we just created.":::
+
+This was just one example of how to create an alert rule. You have the ability to create multiple alerts for your Azure Synapse Link connections through Azure Synapse Analytics.
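+
+If you prefer to script the same alert instead of clicking through the portal, a rough PowerShell equivalent might look like the following. The workspace, resource group, action group ID, and the metric name string are placeholders, not values taken from this walkthrough.
+
+```powershell
+# Build the condition: maximum link latency greater than 900 seconds.
+$condition = New-AzMetricAlertRuleV2Criteria -MetricName "LinkLatencyInSeconds" -TimeAggregation Maximum -Operator GreaterThan -Threshold 900
+
+$workspace = Get-AzSynapseWorkspace -ResourceGroupName "myResourceGroup" -Name "myWorkspace"
+
+# Create the alert rule: check every 30 minutes over a 1 hour lookback window.
+Add-AzMetricAlertRuleV2 -Name "LinkLatencyOver900s" -ResourceGroupName "myResourceGroup" `
+    -TargetResourceId $workspace.Id -Condition $condition `
+    -WindowSize 01:00:00 -Frequency 00:30:00 -Severity 3 `
+    -ActionGroupId "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/microsoft.insights/actionGroups/myActionGroup"
+```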
+
+### Logs
+
+Azure Monitor Logs is a feature of Azure Monitor that collects and organizes log and performance data from monitored resources. Several features of Azure Monitor store their data in Logs and present this data in various ways to assist you in monitoring the performance and availability of your cloud and hybrid applications and their supporting components. You can analyze Logs data by using a sophisticated query language that's capable of quickly analyzing millions of records.
+
+Now let's step through how we can see logs for our Azure Synapse Link connections in the Azure portal:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Search for your **Synapse workspace** that your link connection resides in.
+
+1. Once you've landed on the overview page for your Synapse Workspace, click on the **Logs** tab underneath "Monitoring".
+
+ :::image type="content" source="../media/connect-synapse-link-sql-database/monitor-click-on-logs.png" alt-text="Screenshot that shows where to go to get to the Logs tab to create a new log in the Azure portal.":::
+
+1. You're immediately greeted with a workspace that is roughly the equivalent of a database in Azure Data Explorer. Tables are structured the same, and both use [Kusto Query Language (KQL)](/azure/data-explorer/kusto/query/)
+
+    a. There is a table called "**SynapseLinkEvent**" that stores many different values for each of the link connections. The table and the details are shown on the left-hand side.
+
+    b. You can perform a query in the query pane that retrieves a specific set of records. In this case, we'll type in "**SynapseLinkEvent**" in the query pane and then press the blue **Run** button. We can see the link connections that ran in the Results section, where you can see details about each of the link connections.
+
+ :::image type="content" source="../media/connect-synapse-link-sql-database/monitor-display-results-of-log-query.png" alt-text="Screenshot that shows the tables, query, and results of the log query that was run." lightbox="../media/connect-synapse-link-sql-database/monitor-display-results-of-log-query.png":::
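+
+If your workspace's diagnostic settings route these logs to a Log Analytics workspace, you can also run the same KQL query from PowerShell. This is a sketch under that assumption; the Log Analytics workspace ID is a placeholder.
+
+```powershell
+# Query the SynapseLinkEvent table in the Log Analytics workspace that receives the diagnostic logs.
+$query = "SynapseLinkEvent | take 10"
+Invoke-AzOperationalInsightsQuery -WorkspaceId "<log-analytics-workspace-guid>" -Query $query |
+    Select-Object -ExpandProperty Results
+```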
++
+## Next steps
+
+If you're using a database other than an Azure SQL database, see:
+
+* [Configure Azure Synapse Link for Azure Cosmos DB](../../cosmos-db/configure-synapse-link.md?context=/azure/synapse-analytics/context/context)
+* [Configure Azure Synapse Link for Dataverse](/powerapps/maker/data-platform/azure-synapse-link-synapse?context=/azure/synapse-analytics/context/context)
+* [Get started with Azure Synapse Link for SQL Server 2022](connect-synapse-link-sql-server-2022.md)
synapse-analytics How To Monitor Synapse Link Sql Server 2022 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-link/how-to-monitor-synapse-link-sql-server-2022.md
+
+ Title: Monitor Azure Synapse Link for SQL Server 2022 through Synapse Studio and Azure Monitor
+description: Learn how to monitor your Azure Synapse Link for SQL Server 2022 link connections.
++++ Last updated : 11/10/2022++++
+# Monitor Azure Synapse Link for SQL Server 2022 through Synapse Studio and Azure Monitor
+
+This article provides a guide on how to get started with monitoring your Azure Synapse Link for SQL Server 2022 connections. Before you go through this article, you should know how to create and start an Azure Synapse Link for SQL Server 2022 link connection from [Get started with Azure Synapse Link for SQL Server 2022](connect-synapse-link-sql-server-2022.md). Once you've created and started your Synapse Link connection, you can monitor your link connection through Synapse Studio or Azure Monitor.
+
+> [!IMPORTANT]
+> Azure Synapse Link for SQL is currently in preview.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+## Monitor the status of an Azure Synapse Link for SQL Server 2022 connection in Synapse Studio
+
+You can monitor the status of your Azure Synapse Link connection, see which tables are being initially copied over (*snapshotting*), and see which tables are in continuous replication mode (*replicating*) directly in Synapse Studio. In this section, we'll take a deep dive into link-level monitoring and table-level monitoring:
+
+### Link-level monitoring
+
+1. Once your link connection is running in your Azure Synapse workspace, navigate to the **Monitor** hub, and then select **Link connections**.
+
+ :::image type="content" source="../media/connect-synapse-link-sql-database/studio-monitor-link-connections-1.png" alt-text="Screenshot that shows how to monitor the status of the Azure Synapse Link connection from the monitor hub.":::
+
+1. All your link connections automatically show up on this page, along with link-level monitoring metrics that summarize a few details of your link connection.
+
+    :::image type="content" source="../media/connect-synapse-link-sql-database/studio-monitor-show-all-link-connections.png" alt-text="Screenshot that shows all of the Azure Synapse Link connections under the Link Connections tab." lightbox="../media/connect-synapse-link-sql-database/studio-monitor-show-all-link-connections.png":::
+
+1. The link-level connection grid contains the following columns:
+
+ | **Column Name** | **Description** |
+ |||
+ | Link connection name | Name of the Link Connection |
+ | Source name | The name of the data source where the data is coming from (Azure SQL Database or SQL Server 2022) |
+ | Target name | The name of the destination location where the data is being replicated into (a dedicated SQL Pool) |
+ | Status | **Initial**, **Starting**, **Running**, **Stopping**, **Stopped**, **Pausing**, **Paused**, or **Resuming**. Details of what each status means can be found here: [Azure Synapse Link for SQL Server 2022](sql-server-2022-synapse-link.md) |
+ | Start time | Start date and time for the link connection run (Month, Date, Year, HH:MM:SS AM/PM) |
+ | End time | End date and time for the link connection run (Month, Date, Year, HH:MM:SS AM/PM) |
+ | Landing zone SAS token expire time | Expiration date/time for the SAS token that is used to access the landing zone storage. More details can be found here: [Configure an expiration policy for shared accessed signatures (SAS)](/azure/storage/common/sas-expiration-policy.md?context=/azure/synapse-analytics/context/context) |
+ | Continuous run ID | ID of the link connection run *Helpful when troubleshooting any issues and contacting Microsoft support. |
+
+1. You need to manually select the **Refresh** button to refresh the list of link connections and their corresponding monitoring details. Autorefresh is currently not supported.
+
+ :::image type="content" source="../media/connect-synapse-link-sql-database/studio-monitor-refresh-link-connections.png" alt-text="Screenshot that shows where to press the Refresh button to refresh the statuses and details of the Azure Synapse Link connections.":::
+
+### Table-level monitoring
+
+1. Follow the same steps 1 and 2 above from the link-level monitoring.
+
+1. Click on the Link connection name of the **link connection** that you want to monitor.
+
+ :::image type="content" source="../media/connect-synapse-link-sql-database/studio-monitor-click-on-link-connection.png" alt-text="Screenshot of clicking on an Azure Synapse Link connection.":::
+
+1. After clicking on your link connection, you'll see the tables and their corresponding table-level metrics that summarize a few details about the tables that you're replicating over in your link connection.
+
+ :::image type="content" source="../media/connect-synapse-link-sql-database/studio-monitor-show-all-tables.png" alt-text="Screenshot that shows the details of each of the tables under a particular Azure Synapse Link connection." lightbox="../media/connect-synapse-link-sql-database/studio-monitor-show-all-tables.png":::
+
+1. The table-level connection grid contains the following columns:
+
+ | **Column Name** | **Description** |
+ |||
+ | Source schema/table name | Name of the source table that is being replicated from |
+ | Target schema/table name | Name of the destination table that the source table is being replicated to |
+ | Status | **Waiting**, **Snapshotting**, **Replicating**, **Failed**, **Suspended**. Details of what each status means can be found here: [Azure Synapse Link for SQL Server 2022](sql-server-2022-synapse-link.md) |
+ | Link table ID | ID of the table in the link connection. *Helpful when troubleshooting any issues and contacting Microsoft support. |
+ | Processed rows | Row counts processed by Synapse Link for SQL |
+ | Processed data volume | Data volume in bytes processed by Synapse Link for SQL |
+ | Time of last processed data | Time when last processed change data arrived in the landing zone (Month, Date, Year, HH:MM:SS AM/PM) |
+
+1. You need to manually select the **Refresh** button to refresh the list of tables in the link connections and their corresponding monitoring details. Autorefresh is currently not supported.
+ :::image type="content" source="../media/connect-synapse-link-sql-database/studio-monitor-refresh-tables.png" alt-text="Screenshot that shows where to press the Refresh button to refresh the statuses and details of the tables under a particular Azure Synapse Link connection.":::
+
+
+## Advanced monitoring with Azure Monitor
+
+No matter what cloud applications you're using, it's hard to manage and keep track of all the moving pieces. Azure Monitor provides base-level infrastructure metrics, alerts, and logs for most Azure services. Azure diagnostic logs are emitted by a resource and provide rich, frequent data about the operation of that resource. Azure Synapse Analytics can write diagnostic logs in Azure Monitor, to help you understand deep insights about your applications, improve application performance, and more.
+
+For more information, refer to [How to monitor Synapse Analytics using Azure Monitor](../monitoring/how-to-monitor-using-azure-monitor.md)
+
+In this section, we'll deep dive into setting up metrics, alerts, and logs in Azure Monitor to ensure that you understand more of the advanced capabilities of monitoring your link connection.
+
+### Metrics
+
+The most important type of Monitor data is the metric, which is also called the performance counter. Metrics are emitted by most Azure resources. Azure Monitor provides several ways to configure and consume these metrics for monitoring and troubleshooting.
+
+Azure Synapse Link emits the following metrics to Azure Monitor:
+
+| **Metric** | **Aggregation types** | **Description** |
+||||
+| Link connection events | Sum | Number of Synapse Link connection events, including start, stop, and failure |
+| Link latency in seconds | Max, Min, Avg | Synapse Link data processing latency in seconds |
+| Link processed data volume (bytes) | Sum | Data volume in bytes processed by Synapse Link |
+| Link processed rows | Sum | Row counts processed by Synapse Link |
+| Link table events | Sum | Number of Synapse Link table events, including snapshot, removal, and failure |
+
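+If you want to pull these metrics programmatically rather than in the portal, a minimal PowerShell sketch follows. The workspace name, resource group, and the metric name string are assumptions; use `Get-AzMetricDefinition` on your workspace resource ID to confirm the exact names.
+
+```powershell
+# Retrieve a Synapse Link metric for the last 24 hours (names are placeholders).
+$workspace = Get-AzSynapseWorkspace -ResourceGroupName "myResourceGroup" -Name "myWorkspace"
+Get-AzMetric -ResourceId $workspace.Id -MetricName "LinkLatencyInSeconds" -StartTime (Get-Date).AddDays(-1) -EndTime (Get-Date) -AggregationType Maximum
+```
+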
+Now let's step through how we can see these metrics in the Azure portal.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Search for your **Synapse workspace** that your link connection resides in.
+
+1. Once you've landed on the overview page for your Synapse Workspace, click on the **Metrics** tab underneath "Monitoring".
+
+ :::image type="content" source="../media/connect-synapse-link-sql-database/monitor-click-on-metrics.png" alt-text="Screenshot that shows where to go to get to the Metrics tab to create a new metric in the Azure portal.":::
+
+1. You'll then see a new chart that is automatically generated for you.
+
+1. Under the **Metric dropdown**, you'll see many different categories of metrics. You want to scroll down to the **INTEGRATION** category, and choose any one of the 5 Link metrics:
+
+ * Link connection events
+ * Link latency in seconds
+ * Link processed data volume (bytes)
+ * Link processed rows
+ * Link table events
+
+ :::image type="content" source="../media/connect-synapse-link-sql-database/monitor-select-a-metric.png" alt-text="Screenshot that shows how to select a link metric.":::
+
+1. After selecting a metric of your choosing, you can see a graph representation of the data below.
+
+ :::image type="content" source="../media/connect-synapse-link-sql-database/monitor-display-of-metric.png" alt-text="Screenshot that shows the graph representation of the link metric that was chosen in the previous step." lightbox="../media/connect-synapse-link-sql-database/monitor-display-of-metric.png":::
+
+1. A few things that you can adjust on this screen (refer to the letter on the screenshot above to the bullet point letter below):
+
+ 1. You can add another chart.
+
+ 2. You can add another metric to the same chart. Then you can click between the metrics and their corresponding graphs.
+
+ 3. You can customize the aggregation. Some of the metrics only have one aggregation, but others have many. Refer to the chart above for the aggregations available for each metric.
+
+    4. You can pick how far back you want the metrics to go. By default, the metrics show the past 24 hours, but you have the ability to customize the time period by selecting the time range.
+
+    5. You can pin your metrics chart to your dashboard. This functionality makes it easy to look at your specific chart whenever you sign in to the Azure portal.
+
+### Alerts
+
+Azure Monitor provides built-in functionality for setting up alerts to monitor all your Azure resources efficiently. Alerts allow you to monitor your telemetry and capture signals that indicate that something is happening on the specified resource. Once the signals are captured, an alert rule is evaluated to see whether the signal meets the criteria of the condition. If the conditions are met, an alert is triggered, and notifications are sent through the appropriate channels.
+
+In this section, we're going to walk through how you can set up alerts for your Azure Synapse Link connection through Azure Synapse Analytics. Let's say, for example, that you're running your link connection and realize that you want to monitor the latency of your link connection. The workload requirements for this scenario require that any link connection with a maximum latency over 900 seconds (or 15 minutes) needs to be alerted to your Engineering team. Let's walk through how we would set up an alert for this example:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Search for your **Synapse workspace** that your link connection resides in.
+
+1. Once you've landed on the overview page for your Synapse Workspace, click on the **Alerts** tab underneath "Monitoring".
+
+ :::image type="content" source="../media/connect-synapse-link-sql-database/monitor-click-on-alerts.png" alt-text="Screenshot that shows where to go to get to the Alerts tab to create a new alert in the Azure portal.":::
+
+1. Click on the dropdown **Create**.
+
+1. Click on **Alert rule** to add a new alert rule.
+
+ :::image type="content" source="../media/connect-synapse-link-sql-database/monitor-create-an-alert.png" alt-text="Screenshot that shows how to create a new alert.":::
+
+1. The first step is to define the **scope**. The scope is the target resource that you want to monitor, and in this case, your scope should be Azure Synapse Analytics. The scope should automatically be filled out as the current Azure Synapse Analytics workspace that you're creating the alert for.
+
+1. Second, we need to define the **condition**. The condition defines the logic of when the alert rule should trigger.
+
+ a. Click **+Add condition**
+
+    b. You can see the 5 link connection **signal names**. For this example, let's choose the **Link latency in seconds** signal.
+
+ :::image type="content" source="../media/connect-synapse-link-sql-database/monitor-select-an-alert.png" alt-text="Screenshot that shows how to select one of the Link signals.":::
+
+1. Third, we need to configure the alert logic, or when the alert rule should be triggered.
+
+ 1. Select **Static** for the **Threshold** field
+
+ 1. Enter the following values for **Aggregation type**, **Operator**, and **Unit** fields:
+ * **Aggregation type**: **Maximum**
+ * **Operator**: **Greater than**
+ * **Unit**: **Count**
+
+ 1. Input a **Threshold value** of **900** (***Note: this value is in seconds***)
+
+1. You can also configure the **Split by dimensions** value that monitors specific time series and provides context to the fired alert. These additions do have their own separate charge. For this example, we'll leave it blank.
+
+1. Choose **30 minutes** for **Check every** and **1 hour** for **Lookback period** fields. These fields define how often you want the checks to happen.
+
+1. The graph in the Preview shows the events based on the alert logic that we defined, along with the estimated cost per month.
+
+ :::image type="content" source="../media/connect-synapse-link-sql-database/monitor-create-an-alert-rule.png" alt-text="Screenshot that shows all of the details, configurations, and preview of the price when creating an alert rule." lightbox="../media/connect-synapse-link-sql-database/monitor-create-an-alert-rule.png":::
+
+1. Fourth, we now need to set up **Actions**. We'll configure an action group, which is a set of actions that can be applied to an alert rule.
+
+ a. The **Select action group** option is chosen if you already have an action group that you want to choose. Let's click on **create action group**.
+
+1. On the **Basic** tab, pick a **Subscription**, **resource group**, and **region**. Then provide appropriate values for the **Action group name** and **Display name**. Then press **Next**.
+
+1. On the **Notifications** tab, under the Notification type, select **Email/SMS message/Push/Voice**. And give it an appropriate name.
+
+ a. Check the boxes for **Email** and **SMS**, and then provide the corresponding values. Then press **OK**.
+
+ b. Then press **Next**.
+
+ :::image type="content" source="../media/connect-synapse-link-sql-database/monitor-create-action-group.png" alt-text="Screenshot that shows how to create an action group and specify notifications when an alert rule's conditions are met.":::
+
+1. On the **Actions** tab, let's select an option for **Action type**.
+
+1. In this example, let's use the **Event Hubs** action type, so we'll need to input the **subscription name**, **Event Hub namespace**, and select an **Event Hub name**. Then click on **OK**.
+
+    a. If you don't have an Event Hub created, refer to [Create an event hub](/azure/event-hubs/event-hub-create.md?context=/azure/synapse-analytics/context/context) to create one.
+
+ :::image type="content" source="../media/connect-synapse-link-sql-database/monitor-create-action-group-2.png" alt-text="Screenshot that shows how to create an action group and specify an action type when an alert rule's conditions are met.":::
+
+1. Click on **Review + Create** to review the settings, and then hit **Create**.
+
+1. Immediately we're taken back to the **Alerts homepage**. If we click on the **Alert Rules** on the top, we can see our newly created alert.
+
+ :::image type="content" source="../media/connect-synapse-link-sql-database/monitor-display-alert-rules.png" alt-text="Screenshot that shows all of the alert rules that were created, including the one we just created.":::
+
+This was just one example of how to create an alert rule. You have the ability to create multiple alerts for your Azure Synapse Link connections through Azure Synapse Analytics.
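+
+The action group that an alert rule references can also be created with PowerShell. This is a rough sketch, not part of the original walkthrough; the names and email address are placeholders.
+
+```powershell
+# Create an email receiver and an action group that alert rules can reference.
+$receiver = New-AzActionGroupReceiver -Name "engineering-email" -EmailReceiver -EmailAddress "engineering@contoso.com"
+Set-AzActionGroup -Name "SynapseLinkAlerts" -ShortName "SynLink" -ResourceGroupName "myResourceGroup" -Receiver $receiver
+```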
+
+### Logs
+
+Azure Monitor Logs is a feature of Azure Monitor that collects and organizes log and performance data from monitored resources. Several features of Azure Monitor store their data in Logs and present this data in various ways to assist you in monitoring the performance and availability of your cloud and hybrid applications and their supporting components. You can analyze Logs data by using a sophisticated query language that's capable of quickly analyzing millions of records.
+
+Now let's step through how we can see logs for our Azure Synapse Link connections in the Azure portal:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Search for your **Synapse workspace** that your link connection resides in.
+
+1. Once you've landed on the overview page for your Synapse Workspace, click on the **Logs** tab underneath "Monitoring".
+
+ :::image type="content" source="../media/connect-synapse-link-sql-database/monitor-click-on-logs.png" alt-text="Screenshot that shows where to go to get to the Logs tab to create a new log in the Azure portal.":::
+
+1. You're immediately greeted with a workspace that is roughly the equivalent of a database in Azure Data Explorer. Tables are structured the same, and both use [Kusto Query Language (KQL)](/azure/data-explorer/kusto/query/)
+
+    a. There's a table called "**SynapseLinkEvent**" that stores many different values for each of the link connections. The table and the details are shown on the left-hand side.
+
+    b. You can perform a query in the query pane that retrieves a specific set of records. In this case, we'll type in "**SynapseLinkEvent**" in the query pane and then press the blue **Run** button. We can see the link connections that ran in the Results section, where you can see details about each of the link connections.
+
+ :::image type="content" source="../media/connect-synapse-link-sql-database/monitor-display-results-of-log-query.png" alt-text="Screenshot that shows the tables, query, and results of the log query that was run." lightbox="../media/connect-synapse-link-sql-database/monitor-display-results-of-log-query.png":::
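+
+If your workspace's diagnostic settings route these logs to a Log Analytics workspace, you can run the query from PowerShell as well. A sketch under that assumption, with a placeholder workspace ID:
+
+```powershell
+# Summarize Synapse Link events per day from the Log Analytics workspace that receives the diagnostic logs.
+$query = "SynapseLinkEvent | summarize count() by bin(TimeGenerated, 1d)"
+Invoke-AzOperationalInsightsQuery -WorkspaceId "<log-analytics-workspace-guid>" -Query $query |
+    Select-Object -ExpandProperty Results
+```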
++
+## Next steps
+
+If you're using a database other than a SQL Server 2022 instance, see:
+
+* [Configure Azure Synapse Link for Azure Cosmos DB](../../cosmos-db/configure-synapse-link.md?context=/azure/synapse-analytics/context/context)
+* [Configure Azure Synapse Link for Dataverse](/powerapps/maker/data-platform/azure-synapse-link-synapse?context=/azure/synapse-analytics/context/context)
+* [Get started with Azure Synapse Link for Azure SQL Database](connect-synapse-link-sql-database.md)
synapse-analytics Synapse Link For Sql Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-link/synapse-link-for-sql-known-issues.md
This is the list of known limitations for Azure Synapse Link for SQL.
* Azure Synapse Link can't be enabled on the secondary database once a GeoDR failover has happened if the secondary database has a different name from the primary database.
* If you enabled Azure Synapse Link for SQL on your database as a Microsoft Azure Active Directory (Azure AD) user, Point-in-time restore (PITR) will fail. PITR will only work when you enable Azure Synapse Link for SQL on your database as a SQL user.
* If you create a database as an Azure AD user and enable Azure Synapse Link for SQL, a SQL authentication user (for example, even sysadmin role) won't be able to disable/make changes to Azure Synapse Link for SQL artifacts. However, another Azure AD user will be able to enable/disable Azure Synapse Link for SQL on the same database. Similarly, if you create a database as an SQL authentication user, enabling/disabling Azure Synapse Link for SQL as an Azure AD user won't work.
-* When enabling Azure Synapse Link for SQL on your Azure SQL Database, you should ensure that aggressive log truncation is disabled.
+* While enabling Azure Synapse Link for SQL on Azure SQL Database or SQL Server, be aware that the aggressive log truncation feature of Accelerated Database Recovery (ADR) is automatically disabled. This is because Azure Synapse Link for SQL accesses the database transaction log. This behavior is similar to Change Data Capture (CDC). Active transactions will continue to hold the transaction log truncation until the transaction commits and Azure Synapse Link for SQL catches up, or until the transaction aborts. This might result in the transaction log filling up more than usual, so monitor the log so that it doesn't fill. One way to check what is holding up truncation is sketched below.
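+
+This check is a sketch, not from the original article; it assumes the SqlServer PowerShell module and Azure AD token authentication against an Azure SQL database, with placeholder server and database names.
+
+```powershell
+# Check what is currently preventing transaction log truncation for the database.
+$token = (Get-AzAccessToken -ResourceUrl "https://database.windows.net/").Token
+Invoke-Sqlcmd -ServerInstance "myserver.database.windows.net" -Database "mydatabase" -AccessToken $token `
+    -Query "SELECT name, log_reuse_wait_desc FROM sys.databases WHERE name = DB_NAME();"
+```
+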
### SQL Server 2022 only
* Azure Synapse Link for SQL can't be enabled on databases that are transactional replication publishers or distributors.
synapse-analytics Troubleshoot Sql Azure Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-link/troubleshoot/troubleshoot-sql-azure-active-directory.md
+
+ Title: Troubleshooting guide for Azure Synapse Link for Azure SQL Database and Azure Active Directory user impersonation
+description: Learn how to troubleshoot user impersonation issues with Azure Synapse Link for Azure SQL Database and Azure Active Directory
++++++ Last updated : 11/09/2022++
+# Troubleshoot: Azure Synapse Link for Azure SQL Database and Azure Active Directory user impersonation
+
+This article is a guide to troubleshoot Azure Synapse Link for Azure SQL Database and Azure Active Directory (Azure AD) user impersonation. This article applies only to databases in Azure SQL Database.
+
+## Symptom
+
+If you create a database using a login connected to Microsoft Azure Active Directory and then try to perform Azure Synapse Link database operations while signed in with any SQL-authenticated principal, you'll receive error messages due to an impersonation failure. The following sample errors are all symptoms of the same problem.
+
+| Database Operation | Sample Error |
+|:--|:--|
+| sp_change_feed_enable_db, sp_change_feed_disable_db | `The error/state returned was 33171/1: 'Only active directory users can impersonate other active directory users.'. Use the action and error to determine the cause of the failure and resubmit the request.` |
+| Restore an Azure Synapse Link enabled database | `Non retriable error occurred while restoring backup with index 11 - 22729 Could not remove the metadata. The failure occurred when executing the command 'sp_MSchange_feed_ddl_database_triggers 'drop''. The error/state returned was 33171/1: 'Only active directory users can impersonate other active directory users.'. Use the action and error to determine the cause of the failure and resubmit the request. RESTORE DATABASE successfully processed 0 pages in 0.751 seconds (0.000 MB/sec). `|
+| Restore a blank database and then enable Azure Synapse Link | `The error returned was 33171: 'Only active directory users can impersonate other active directory users.'. Use the action and error to determine the cause of the failure and resubmit the request.` |
+
+## Resolution
+
+Sign in to the Azure SQL Database with an Azure AD database principal. It doesn't have to be the same Azure AD account that created the database.
+
+## See also
+
+ - [Change data capture limitations](/sql/relational-databases/track-changes/about-change-data-capture-sql-server#limitations)
+
+## Next steps
+
+ - [Get started with Azure Synapse Link for Azure SQL Database](../connect-synapse-link-sql-database.md)
synapse-analytics Troubleshoot Sql Database Failover https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-link/troubleshoot/troubleshoot-sql-database-failover.md
+
+ Title: Troubleshooting guide for Azure Synapse Link for Azure SQL Database after failover of an Azure SQL Database.
+description: Learn how to troubleshoot and configure Azure Synapse Link for Azure SQL Database after failover of an Azure SQL Database.
++++++ Last updated : 11/09/2022++
+# Troubleshoot: Azure Synapse Link for Azure SQL Database after failover of an Azure SQL Database
+
+This article is a guide to troubleshoot and configure Azure Synapse Link for Azure SQL Database after failover of an Azure SQL Database. This article applies only to databases in Azure SQL Database.
+
+## Symptom
+
+To protect against data loss, you can configure an [auto-failover group](/sql/azure-sql/database/failover-group-add-single-database-tutorial) for Azure SQL Database. A failover group lets you group multiple geo-replicated databases to guard against potential data loss. However, when Azure Synapse Link for Azure SQL Database has been started for a table in the database and the database fails over, Synapse Link is disabled in the backend even though its status still displays as running.
+
+## Resolution
+
+You must stop Synapse Link manually and reconfigure it with the new primary server's information so that it can continue to work normally.
+
+1. Launch [Synapse Studio](https://web.azuresynapse.net).
+1. Open the **Integrate** hub.
+1. Select the Synapse Link whose database has failover occurred.
+1. Select the **Stop** button.
+
+ :::image type="content" source="media/troubleshoot-sql-database-failover/synapse-studio-stop-link-connection.png" alt-text="A screenshot of Synapse Studio. The Integrate hub is open and the Link Connection linkconnection1 is selected. The Stop button is highlighted." lightbox="media/troubleshoot-sql-database-failover/synapse-studio-stop-link-connection.png":::
+
+1. Open the **Manage** hub. Under **External connections**, select **Linked services**.
+1. In the list of **Linked services**, select the linked service whose database failed over.
+
+ :::image type="content" source="media/troubleshoot-sql-database-failover/synapse-studio-linked-services.png" alt-text="A screenshot of Synapse Studio. The Manage hub is open. In the list of Linked services, the AzureSqlDatabase1 linked service is highlighted." lightbox="media/troubleshoot-sql-database-failover/synapse-studio-linked-services.png":::
+
+1. You must reset the linked service connection string based on the new primary server after failover so that Synapse Link can connect to the new primary logical server's database. There are two options:
+    * Use [the auto-failover group read/write listener endpoint](/sql/azure-sql/managed-instance/auto-failover-group-configure-sql-mi#locate-listener-endpoint) and the Synapse workspace's managed identity (SMI) to connect your Synapse workspace to the source database. Because the read/write listener endpoint automatically maps to the new primary server after failover, you only need to set it once. If a failover occurs later, the connection automatically uses the fully qualified domain name (FQDN) of the listener endpoint. Note that you still need to take action on every failover to update the resource ID and managed identity ID for the new primary (see the next step).
+ * After each failover, edit the linked service **Connection string** with the **Server name**, **Database name**, and authentication information for the new primary server. You can use a managed identity or SQL Authentication.
+
+    The authentication account used to connect to the database, whether a managed identity or a SQL authenticated login in Azure SQL Database, must have at least the CONTROL permission inside the database to perform the actions the linked service requires. Membership in the db_owner role grants similar access to CONTROL. A permission sketch is provided after these steps.
+
+ To use the auto-failover group read/write listener endpoint:
+
+ :::image type="content" source="media/troubleshoot-sql-database-failover/synapse-studio-edit-linked-service-system-assigned-managed-identity.png" alt-text="Screenshot of the Azure Synapse Studio Edit linked service dialog. The FQDN of the read/write listener endpoint is entered manually." lightbox="media/troubleshoot-sql-database-failover/synapse-studio-edit-linked-service-system-assigned-managed-identity.png":::
+
+1. You must refresh the Resource ID and Managed Identity ID after every failover. Open the **Integrate** hub. Select your Synapse Link.
+1. The next step depends on the connection string you chose previously.
+    - If you chose to use the read/write listener endpoint to update the linked service connection string, you must manually update the **SQL logical server resource ID** and **Managed identity ID** to match the new primary server.
+ - If you provided the new primary server's connection information, select the **Refresh** button.
+
+ :::image type="content" source="media/troubleshoot-sql-database-failover/synapse-studio-integrate-link-connection-refresh.png" alt-text="A screenshot of the Integrate hub of Synapse Studio. The Refresh button updates the SQL logical server resource ID and the managed identity ID." lightbox="media/troubleshoot-sql-database-failover/synapse-studio-integrate-link-connection-refresh.png":::
+
+1. Azure Synapse Link for Azure SQL Database currently cannot restart the synchronization from before the failover. Before restarting the Link connection, you must empty the target table in Azure Synapse if data is present. Or, check the option to **Drop and recreate table on target**, as seen in the following screenshot.
+
+ :::image type="content" source="media/troubleshoot-sql-database-failover/synapse-studio-start-drop-recreate-table-target.png" alt-text="A screenshot of the Integrate hub of Synapse Studio. The Drop and recreate table on target option is highlighted. The Start button is highlighted." lightbox="media/troubleshoot-sql-database-failover/synapse-studio-start-drop-recreate-table-target.png":::
+
+1. Finally, restart the Azure Synapse Link. On the **Integrate** hub and with the desired Link connection open, select the **Start** button.
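As referenced in the permission note above, the following is a hedged sketch of granting the Synapse workspace's managed identity CONTROL on the source database by running T-SQL from PowerShell. The workspace, server, and database names are placeholders; adjust it to your own identity model (for example, a SQL authenticated login instead).

```powershell
# Minimal sketch: create a database user for the Synapse workspace's managed
# identity and grant it CONTROL on the source database. Placeholder names.
$query = @"
CREATE USER [<synapse-workspace-name>] FROM EXTERNAL PROVIDER;
GRANT CONTROL TO [<synapse-workspace-name>];
"@

Invoke-Sqlcmd -ServerInstance '<newPrimaryServer>.database.windows.net' `
              -Database '<yourDatabase>' `
              -AccessToken (Get-AzAccessToken -ResourceUrl 'https://database.windows.net/').Token `
              -Query $query
```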
++
+
+## Next steps
+
+ - [Tutorial: Add an Azure SQL Database to an auto-failover group](/sql/azure-sql/database/failover-group-add-single-database-tutorial)
+ - [Get started with Azure Synapse Link for Azure SQL Database](../connect-synapse-link-sql-database.md)
synapse-analytics Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/whats-new.md
The following table lists the features of Azure Synapse Analytics that are curre
| **Apache Spark Optimized Write** | [Optimize Write is a Delta Lake on Azure Synapse](spark/optimize-write-for-apache-spark.md) feature that reduces the number of files written by Apache Spark 3 (3.1 and 3.2) and aims to increase the individual file size of the written data.|
| **Apache Spark R language support** | Built-in [R support for Apache Spark](spark/apache-spark-r-language.md) is now in preview. |
| **Azure Synapse Data Explorer** | The [Azure Synapse Data Explorer](./data-explorer/data-explorer-overview.md) provides an interactive query experience to unlock insights from log and telemetry data. Connectors for Azure Data Explorer are available for Synapse Data Explorer. |
-| **Azure Synapse Link to SQL** | Azure Synapse Link is in preview for both SQL Server 2022 and Azure SQL Database. The Azure Synapse Link feature provides low- and no-code, near real-time data replication from your SQL-based operational stores into Azure Synapse Analytics. Provide BI reporting on operational data in near real-time, with minimal impact on your operational store. To learn more, read [Announcing the Public Preview of Azure Synapse Link for SQL](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/announcing-the-public-preview-of-azure-synapse-link-for-sql/ba-p/3372986) and [watch our YouTube video](https://www.youtube.com/embed/pgusZy34-Ek). |
+| **Azure Synapse Link for SQL** | Azure Synapse Link is in preview for both SQL Server 2022 and Azure SQL Database. The Azure Synapse Link feature provides low- and no-code, near real-time data replication from your SQL-based operational stores into Azure Synapse Analytics. Provide BI reporting on operational data in near real-time, with minimal impact on your operational store. To learn more, read [Announcing the Public Preview of Azure Synapse Link for SQL](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/announcing-the-public-preview-of-azure-synapse-link-for-sql/ba-p/3372986) and [watch our YouTube video](https://www.youtube.com/embed/pgusZy34-Ek). |
| **Browse ADLS Gen2 folders in the Azure Synapse Analytics workspace** | You can now browse an Azure Data Lake Storage Gen2 (ADLS Gen2) container or folder in your Azure Synapse Analytics workspace by connecting to a specific container or folder in Synapse Studio. To learn more, see [Browse an ADLS Gen2 folder with ACLs in Azure Synapse Analytics](how-to-access-container-with-access-control-lists.md).|
| **Custom partitions for Synapse link for Azure Cosmos DB** | Improve query execution times for your Spark queries, by creating custom partitions based on fields frequently used in your queries. To learn more, see [Custom partitioning in Azure Synapse Link for Azure Cosmos DB (Preview)](../cosmos-db/custom-partitioning-analytical-store.md). |
| **Data flow improvements to Data Preview** | To learn more, see [Data Preview and debug improvements in Mapping Data Flows](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/data-preview-and-debug-improvements-in-mapping-data-flows/ba-p/3268254?wt.mc_id=azsynapseblog_mar2022_blog_azureeng). |
Azure Synapse Link is an automated system for replicating data from [SQL Server
|**Month** | **Feature** | **Learn more**|
|:-- |:-- | :-- |
-| July 2022 | **Batch mode** | Decide between cost and latency in Azure Synapse Link to SQL by selecting *continuous* or *batch* mode to replicate your data. Batch mode allows you to save even more on costs by only paying for ingestion service during the batch loads instead of it being continuously on. You can select between 20 and 60 minutes for batch processing.|
+| July 2022 | **Batch mode** | Decide between cost and latency in Azure Synapse Link for SQL by selecting *continuous* or *batch* mode to replicate your data. Batch mode allows you to save even more on costs by only paying for ingestion service during the batch loads instead of it being continuously on. You can select between 20 and 60 minutes for batch processing.|
| May 2022 | **Synapse Link for SQL preview** | Azure Synapse Link for SQL is in preview for both SQL Server 2022 and Azure SQL Database. The Azure Synapse Link feature provides low- and no-code, near real-time data replication from your SQL-based operational stores into Azure Synapse Analytics. Provide BI reporting on operational data in near real-time, with minimal impact on your operational store. The [Azure Synapse Link for SQL preview has been announced](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/announcing-the-public-preview-of-azure-synapse-link-for-sql/ba-p/3372986). For more information, see [Blog: Azure Synapse Link for SQL Deep Dive](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/synapse-link-for-sql-deep-dive/ba-p/3567645).|

## Synapse SQL
traffic-manager Traffic Manager Nested Profiles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/traffic-manager-nested-profiles.md
na Previously updated : 04/22/2022 Last updated : 11/10/2022
Traffic Manager includes a range of traffic-routing methods that allow you to co
Each Traffic Manager profile specifies a single traffic-routing method. However, there are scenarios that require more sophisticated traffic routing than the routing provided by a single Traffic Manager profile. You can nest Traffic Manager profiles to combine the benefits of more than one traffic-routing method. Nested profiles allow you to override the default Traffic Manager behavior to support larger and more complex application deployments.
+To create a nested profile, you add a 'child' profile as an endpoint to a 'parent' profile. Some examples are provided in this article.
+
+## MinChildEndpoints
+
+When you add a child profile as an endpoint in the parent profile, the **MinChildEndpoints** parameter is created and assigned a default value of **1**. This parameter determines the minimum number of endpoints that must be available in the child profile for the child profile to be considered healthy. Below this threshold, the parent profile considers the entire child profile unavailable and directs traffic to the other endpoints in the parent profile.
+
+The following parameters are available in the parent profile:
+
+- **MinChildEndpoints**: The minimum number of healthy child endpoints for the nested profile status to be healthy.
+- **MinChildEndpointsIPv4**: The minimum number of healthy IPv4 child endpoints for the nested profile status to be healthy.
+- **MinChildEndpointsIPv6**: The minimum number of healthy IPv6 child endpoints for the nested profile status to be healthy.
+
+> [!IMPORTANT]
+> There must be at least one IPv4 and one IPv6 endpoint for any nested MultiValue profile. Always configure values for MinChildEndpointsIPv4 and MinChildEndpointsIPv6 based on your multivalue routing mechanism and do not simply use the default values.<br>
+> The value of **MinChildEndpoints** must be high enough to allow for all endpoint types to be available. An error message is displayed for values that are too low.
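As a hedged illustration, adding a child profile as a nested endpoint with a minimum child endpoint threshold from Azure PowerShell might look like the sketch below. The profile and resource group names are placeholders, and it assumes the parent profile uses the Performance routing method (an endpoint location is required for nested endpoints in that case).

```powershell
# Minimal sketch: add a child Traffic Manager profile as a nested endpoint of a
# parent profile and require at least 2 healthy child endpoints.
$parent = Get-AzTrafficManagerProfile -Name 'parent-profile' -ResourceGroupName 'myResourceGroup'
$child  = Get-AzTrafficManagerProfile -Name 'child-profile'  -ResourceGroupName 'myResourceGroup'

Add-AzTrafficManagerEndpointConfig -EndpointName 'child-endpoint' `
    -TrafficManagerProfile $parent `
    -Type NestedEndpoints `
    -TargetResourceId $child.Id `
    -EndpointStatus Enabled `
    -EndpointLocation 'West Europe' `
    -MinChildEndpoints 2

Set-AzTrafficManagerProfile -TrafficManagerProfile $parent
```

The IPv4 and IPv6 thresholds can be set the same way with the `-MinChildEndpointsIPv4` and `-MinChildEndpointsIPv6` parameters.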
+ The following examples illustrate how to use nested Traffic Manager profiles in various scenarios.

## Example 1: Combining 'Performance' and 'Weighted' traffic routing
Returning to the previous example, suppose the production deployment in West Eur
![Nested Profile failover (default behavior)][3]
-You might be happy with this arrangement. Or you might be concerned that all traffic for West Europe is now going to the test deployment instead of a limited subset traffic. Regardless of the health of the test deployment, you want to fail over to the other regions when the production deployment in West Europe fails. To enable this failover, you can specify the 'MinChildEndpoints' parameter when configuring the child profile as an endpoint in the parent profile. The parameter determines the minimum number of available endpoints in the child profile. The default value is '1'. For this scenario, you set the MinChildEndpoints value to 2. Below this threshold, the parent profile considers the entire child profile to be unavailable and directs traffic to the other endpoints.
+You might be happy with this arrangement. Or you might be concerned that all traffic for West Europe is now going to the test deployment instead of only a limited subset of traffic. Regardless of the health of the test deployment, you want to fail over to the other regions when the production deployment in West Europe fails.
-The following figure illustrates this configuration:
+In the scenario below, the **MinChildEndpoints** value is set to 2. Below this threshold, the parent profile considers the entire child profile to be unavailable and directs traffic to the other endpoints:
![Nested Profile failover with 'MinChildEndpoints' = 2][4]
virtual-desktop Apply Windows License https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/apply-windows-license.md
Title: Apply Windows license to session host virtual machines - Azure
description: Describes how to apply the Windows license for Azure Virtual Desktop VMs. Previously updated : 08/14/2019 Last updated : 11/11/2022
Customers who are properly licensed to run Azure Virtual Desktop workloads are eligible to apply a Windows license to their session host virtual machines and run them without paying for another license. For more information, see [Azure Virtual Desktop pricing](https://azure.microsoft.com/pricing/details/virtual-desktop/).
-## Ways to use your Azure Virtual Desktop license
-Azure Virtual Desktop licensing allows you to apply a license to any Windows or Windows Server virtual machine that is registered as a session host in a host pool and receives user connections. This license does not apply to virtual machines that are running as file share servers, domain controllers, and so on.
+## Ways to apply an Azure Virtual Desktop license
-There are a few ways to use the Azure Virtual Desktop license:
-- You can create a host pool and its session host virtual machines using the [Azure Marketplace offering](./create-host-pools-azure-marketplace.md). Virtual machines created this way automatically have the license applied.-- You can create a host pool and its session host virtual machines using the [GitHub Azure Resource Manager template](https://github.com/Azure/RDS-Templates/tree/master/ARM-wvd-templates). Virtual machines created this way automatically have the license applied.-- You can apply a license to an existing session host virtual machine. To do this, first follow the instructions in [Create a host pool with PowerShell or the Azure CLI](./create-host-pools-powershell.md) to create a host pool and associated VMs, then return to this article to learn how to apply the license.
+Azure Virtual Desktop licensing allows you to apply a license to any Windows or Windows Server virtual machine (VM) that's registered as a session host in a host pool and receives user connections. This license doesn't apply to virtual machines running as file share servers, domain controllers, and so on.
+
+You can apply an Azure Virtual Desktop license to your VMs with the following methods:
+
+- You can create a host pool and its session host virtual machines [in the Azure portal](./create-host-pools-azure-marketplace.md). Creating VMs in the Azure portal automatically applies the license.
+- You can create a host pool and its session host virtual machines using the [GitHub Azure Resource Manager template](https://github.com/Azure/RDS-Templates/tree/master/ARM-wvd-templates). Creating VMs with this method automatically applies the license.
+- You can manually apply a license to an existing session host virtual machine. To apply the license this way, first follow the instructions in [Create a host pool with PowerShell or the Azure CLI](./create-host-pools-powershell.md) to create a host pool and associated VMs, then return to this article to learn how to apply the license.
## Apply a Windows license to a session host VM
-Make sure you have [installed and configured the latest Azure PowerShell](/powershell/azure/). Run the following PowerShell cmdlet to apply the Windows license:
+
+Before you start, make sure you've [installed and configured the latest version of Azure PowerShell](/powershell/azure/).
+
+Next, run the following PowerShell cmdlet to apply the Windows license:
```powershell
$vm = Get-AzVM -ResourceGroup <resourceGroupName> -Name <vmName>
$vm.LicenseType = "Windows_Client"
Update-AzVM -ResourceGroupName <resourceGroupName> -VM $vm
``` ## Verify your session host VM is utilizing the licensing benefit+ After deploying your VM, run this cmdlet to verify the license type:+ ```powershell Get-AzVM -ResourceGroupName <resourceGroupName> -Name <vmName> ```
$vms | Where-Object {$_.LicenseType -like "Windows_Client"} | Select-Object Reso
## Requirements for deploying Windows Server Remote Desktop Services

If you deploy Windows Server as Azure Virtual Desktop hosts in your deployment, a Remote Desktop Services license server must be accessible from those virtual machines. The Remote Desktop Services license server can be located on-premises or in Azure. For more information, see [Activate the Remote Desktop Services license server](/windows-server/remote/remote-desktop-services/rds-activate-license-server).
+
+## Known limitations
+
+If you create a Windows Server VM using the Azure Virtual Desktop host pool creation process, the process might automatically assign it an incorrect license type. To change the license type using PowerShell, follow the instructions in [Convert an existing VM using Azure Hybrid Benefit for Windows Server](../virtual-machines/windows/hybrid-use-benefit-licensing.md#powershell-1).
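If you need to perform that conversion yourself, a minimal PowerShell sketch follows the same pattern as earlier in this article; the resource group and VM names are placeholders, and the linked article remains the authoritative guidance.

```powershell
# Minimal sketch: switch an existing Windows Server session host to the
# Azure Hybrid Benefit license type. Placeholder resource names.
$vm = Get-AzVM -ResourceGroupName '<resourceGroupName>' -Name '<vmName>'
$vm.LicenseType = 'Windows_Server'
Update-AzVM -ResourceGroupName '<resourceGroupName>' -VM $vm
```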
virtual-desktop Per User Access Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/remote-app-streaming/per-user-access-pricing.md
Before external users can connect to your deployment, you need to enroll your subscription in per-user access pricing. Per-user access pricing entitles users outside of your organization to access apps and desktops in your subscription using identities that you provide and manage. Your enrolled subscription will be charged each month based on the number of distinct users that connect to Azure Virtual Desktop resources.
+> [!IMPORTANT]
+> Per-user access pricing with Azure Virtual Desktop doesn't currently support Citrix DaaS and VMware Horizon Cloud.
+ >[!NOTE] >Take care not to confuse external *users* with external *identities*. Azure Virtual Desktop doesn't currently support external identities, including guest accounts or business-to-business (B2B) identities. Whether you're serving internal users or external users with Azure Virtual Desktop, you'll need to create and manage identities for those users yourself. Per-user access pricing is not a way to enable guest user accounts with Azure Virtual Desktop. For more information, see [Understanding licensing and per-user access pricing](licensing.md).
virtual-machines Dv3 Dsv3 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dv3-dsv3-series.md
Previously updated : 09/22/2020 Last updated : 11/11/2022 # Dv3 and Dsv3-series
virtual-machines Maintenance Configurations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/maintenance-configurations.md
With this scope, you can manage platform updates that do not require a reboot on
Using this scope with maintenance configurations lets you decide when to apply upgrades to OS disks in your *Virtual Machine Scale Sets* through an easier and more predictable experience. An upgrade works by replacing the OS disk of a VM with a new disk created using the latest image version. Any configured extensions and custom data scripts are run on the OS disk, while data disks are retained. Some features and limitations unique to this scope are: -- Scale sets need to have [automatic OS upgrades](/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-upgrade.md) enabled in order to use maintenance configurations.
+- Scale sets need to have [automatic OS upgrades](../virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-upgrade.md) enabled in order to use maintenance configurations.
- You can schedule recurrence up to a week (7 days).
- A minimum of 5 hours is required for the maintenance window.
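As a hedged illustration, creating such a maintenance configuration with Azure PowerShell might look like the sketch below. The names, region, and schedule are placeholders; check the `New-AzMaintenanceConfiguration` reference for the current parameter values before using it.

```powershell
# Minimal sketch: weekly maintenance configuration for scale set OS image
# upgrades, using the required 5-hour minimum window. Placeholder values.
New-AzMaintenanceConfiguration `
    -ResourceGroupName 'myResourceGroup' `
    -Name 'osimage-maintenance' `
    -Location 'westeurope' `
    -MaintenanceScope 'OSImage' `
    -StartDateTime '2022-11-20 00:00' `
    -TimeZone 'UTC' `
    -Duration '05:00' `
    -RecurEvery 'Week Saturday'
```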
virtual-machines Oracle Vm Solutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/oracle/oracle-vm-solutions.md
In addition to having an HA and DR solution architected in Azure, you should hav
## Support for JD Edwards
-According to Oracle Support note [Doc ID 2178595.1](https://support.oracle.com/epmos/faces/DocumentDisplay?_afrLoop=573435677515785&id=2178595.1&_afrWindowMode=0&_adf.ctrl-state=o852dw7d_4), JD Edwards EnterpriseOne versions 9.2 and above are supported on **any public cloud offering** that meets their specific `Minimum Technical Requirements` (MTR). You need to create custom images that meet their MTR specifications for OS and software application compatibility.
+According to Oracle Support note [Doc ID 2178595.1](https://support.oracle.com/knowledge/JD%20Edwards%20EnterpriseOne/2178595_1.html), JD Edwards EnterpriseOne versions 9.2 and above are supported on **any public cloud offering** that meets their specific `Minimum Technical Requirements` (MTR). You need to create custom images that meet their MTR specifications for OS and software application compatibility.
## Oracle WebLogic Server virtual machine offers
For related information, see KB article **860340.1** at [support.oracle.com](htt
You now have an overview of current Oracle solutions based on virtual machine images in Microsoft Azure. Your next step is to deploy your first Oracle database on Azure. > [!div class="nextstepaction"]
-> [Create an Oracle database on Azure](oracle-database-quick-create.md)
+> [Create an Oracle database on Azure](oracle-database-quick-create.md)
virtual-machines Weblogic Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/oracle/weblogic-aks.md
Beyond certifying WLS on AKS, Oracle and Microsoft jointly provide a [marketplac
After the offer performs most boilerplate resource provisioning and configuration, you can focus on deploying your WLS application to AKS, typically through a DevOps tool such as GitHub Actions and tools from WebLogic Kubernetes tooling such as the WebLogic Image Tool and WebLogic Deploy Tooling. You are completely free to customize the deployment further.
-You can find detailed documentation on the solution template [here](https://oracle.github.io/weblogic-kubernetes-operator/userguide/aks/).
+You can find detailed documentation on the solution template [here](https://azuremarketplace.microsoft.com/marketplace/apps/oracle.oraclelinux-wls-cluster).
## Guidance, scripts, and samples for WLS on AKS
Explore running Oracle WebLogic Server on the Azure Kubernetes Service.
> [WLS on AKS marketplace solution](https://portal.azure.com/#create/oracle.20210620-wls-on-aks20210620-wls-on-aks) > [!div class="nextstepaction"]
-> [WLS on AKS marketplace solution documentation](https://oracle.github.io/weblogic-kubernetes-operator/userguide/aks/)
+> [WLS on AKS marketplace solution documentation](https://azuremarketplace.microsoft.com/marketplace/apps/oracle.oraclelinux-wls-cluster)
> [!div class="nextstepaction"] > [Guidance, scripts and samples for running WLS on AKS](https://oracle.github.io/weblogic-kubernetes-operator/samples/azure-kubernetes-service/)
virtual-machines Cal S4h https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/cal-s4h.md
The online library is continuously updated with Appliances for demo, proof of co
|This appliance contains SAP S/4HANA 2021 (FPS01) with pre-activated SAP Best Practices for SAP S/4HANA core functions, and further scenarios for Service, Master Data Governance (MDG), Portfolio Mgmt. (PPM), Human Capital Management (HCM), Analytics, Migration Cockpit, and more. User access happens via SAP Fiori, SAP GUI, SAP HANA Studio, Windows remote desktop, or the backend operating system for full administrative access. | [Details]( https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/3f4931de-b15b-47f1-b93d-a4267296b8bc) ||
| **SAP BW/4HANA 2021 including BW/4HANA Content 2.0 SP08 - Dev Edition** May 11 2022 | [Create Appliance](https://cal.sap.com/registration?sguid=06725b24-b024-4757-860d-ac2db7b49577&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) |
|This solution offers you an insight of SAP BW/4HANA. SAP BW/4HANA is the next generation Data Warehouse optimized for HANA. Beside the basic BW/4HANA options the solution offers a bunch of HANA optimized BW/4HANA Content and the next step of Hybrid Scenarios with SAP Data Warehouse Cloud. As the system is pre-configured you can start directly implementing your scenarios. | [Details]( https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/06725b24-b024-4757-860d-ac2db7b49577) |
-| **SAP BusinessObjects BI Platform 4.3 SP02** October 05 2022 | [Create Appliance](https://cal.sap.com/registration?sguid=f4626c2f-d9d8-49e0-b1ce-59371e0f8749&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) |
-|This appliance contains SAP BusinessObjects BI Platform 4.3 Support Package 2 Patch 4: (i) On the Linux instance, the BI Platorm, Web Intelligence and Live Data Connect servers are running on the default installed Tomcat (ii) On the Windows instance, you can use the SAP BI SP2 Patch 4 version of the clients tools to connect to the server: Web Intelligence Rich Client, Information Design Tool, Universe Design Tool. | [Details](https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/f4626c2f-d9d8-49e0-b1ce-59371e0f8749) |
-| **SAP Solution Manager 7.2 SP15 & Focused Solutions SP10 (Baseline)** October 03 2022 | [Create Appliance](https://cal.sap.com/registration?sguid=65c36516-6779-46e5-9c30-61b5d145209f&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) |
-|This template contains a partly configured SAP Solution Manager 7.2 SP15 (incl. Focused Build and Focused Insights 2.0 SP10). Only the Mandatory Configuration and Focused Build configuration are performed. The system is clean and does not contain pre-defined demo scenarios. | [Details]( https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/65c36516-6779-46e5-9c30-61b5d145209f) |
+| **SAP ABAP Platform 1909, Developer Edition** June 21 2021 | [Create Appliance](https://cal.sap.com/registration?sguid=7bd4548f-a95b-4ee9-910a-08c74b4f6c37&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) |
+|The SAP ABAP Platform on SAP HANA gives you access to SAP ABAP Platform 1909 Developer Edition on SAP HANA. Note that this solution is preconfigured with many additional elements, including: SAP ABAP RESTful Application Programming Model, SAP Fiori launchpad, SAP gCTS, SAP ABAP Test Cockpit, and preconfigured frontend / backend connections, etc. It also includes all the standard ABAP AS infrastructure: Transaction Management, database operations / persistence, Change and Transport System, SAP Gateway, interoperability with ABAP Development Toolkit and SAP WebIDE, and much more. | [Details](https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/7bd4548f-a95b-4ee9-910a-08c74b4f6c37) |
+| **SAP ERP 6.0 EhP 6 for Data Migration to SAP S/4HANA** October 24 2022 | [Create Appliance](https://cal.sap.com/registration?sguid=56825489-df3a-4b6d-999c-329a63ef5e8a&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) |
+|Update the password of DDIC 100, SAP* 000. This system can be used as a source system for the "direct transfer" data migration scenarios of the SAP S/4HANA Fully-Activated Appliance. It might also be useful as an "open playground" for SAP ERP 6.0 EhP6 scenarios; however, the contained business processes and data structures are not documented explicitly. | [Details](https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/56825489-df3a-4b6d-999c-329a63ef5e8a) |
| **SAP NetWeaver 7.5 SP15 on SAP ASE** January 20 2020 | [Create Appliance](https://cal.sap.com/registration?sguid=69efd5d1-04de-42d8-a279-813b7a54c1f6&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) | |SAP NetWeaver 7.5 SP15 on SAP ASE | [Details](https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/solutions/69efd5d1-04de-42d8-a279-813b7a54c1f6) |
virtual-machines Vm Extension For Sap New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/vm-extension-for-sap-new.md
The new VM Extension for SAP uses a managed identity that is assigned to the VM
1. Restart SAP Host Agent

   Log on to the virtual machine on which you enabled the VM Extension for SAP and restart the SAP Host Agent if it was already installed. SAP Host Agent does not use the VM Extension until it is restarted. It currently cannot detect that an extension was installed after it was started.
+
+## <a name="ba74712c-4b1f-44c2-9412-de101dbb1ccc"></a>Manually configure the Azure VM extension for SAP solutions
+
+If you want to use Azure Resource Manager, Terraform, or other tools to deploy the VM Extension for SAP, use the following publisher and extension type:
+
+For Linux:
+* **Publisher**: Microsoft.AzureCAT.AzureEnhancedMonitoring
+* **Extension Type**: MonitorX64Linux
+* **Version**: 1.*
+
+For Windows:
+* **Publisher**: Microsoft.AzureCAT.AzureEnhancedMonitoring
+* **Extension Type**: MonitorX64Windows
+* **Version**: 1.*
+
+If you want to disable automatic updates for the VM extension or want to deploy a specific version of the extension, you can retrieve the available versions with Azure CLI or Azure PowerShell.
+
+**Azure PowerShell**
+```powershell
+# Windows
+Get-AzVMExtensionImage -Location westeurope -PublisherName Microsoft.AzureCAT.AzureEnhancedMonitoring -Type MonitorX64Windows
+# Linux
+Get-AzVMExtensionImage -Location westeurope -PublisherName Microsoft.AzureCAT.AzureEnhancedMonitoring -Type MonitorX64Linux
+```
+
+**Azure CLI**
+```azurecli
+# Windows
+az vm extension image list --location westeurope --publisher Microsoft.AzureCAT.AzureEnhancedMonitoring --name MonitorX64Windows
+# Linux
+az vm extension image list --location westeurope --publisher Microsoft.AzureCAT.AzureEnhancedMonitoring --name MonitorX64Linux
+```
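As a hedged example, pinning and deploying a specific version of the extension with `Set-AzVMExtension` might look like the following sketch. The resource names, the extension instance name, and the version shown are assumptions; retrieve real versions with the commands above.

```powershell
# Minimal sketch: deploy a specific version of the VM Extension for SAP on a
# Windows VM and opt out of automatic minor-version upgrades. Placeholder names.
Set-AzVMExtension `
    -ResourceGroupName '<resourceGroupName>' `
    -VMName '<vmName>' `
    -Location '<region>' `
    -Name 'AzureEnhancedMonitoringForSAP' `
    -Publisher 'Microsoft.AzureCAT.AzureEnhancedMonitoring' `
    -ExtensionType 'MonitorX64Windows' `
    -TypeHandlerVersion '1.0' `
    -DisableAutoUpgradeMinorVersion
```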
## <a name="5774c1db-1d3c-4b34-8448-3afd0b0f18ab"></a>Readiness check
virtual-network Service Tags Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/service-tags-overview.md
By default, service tags reflect the ranges for the entire cloud. Some service t
| **GatewayManager** | Management traffic for deployments dedicated to Azure VPN Gateway and Application Gateway. | Inbound | No | No |
| **GuestAndHybridManagement** | Azure Automation and Guest Configuration. | Outbound | No | Yes |
| **HDInsight** | Azure HDInsight. | Inbound | Yes | No |
-| **Internet** | The IP address space that's outside the virtual network and reachable by the public internet.<br/><br/>The address range includes the [Azure-owned public IP address space](https://www.microsoft.com/download/details.aspx?id=56519). | Both | No | No |
+| **Internet** | The IP address space that's outside the virtual network and reachable by the public internet.<br/><br/>This service tag only applies where the traffic does not hit any other service tag.<br/><br/>The address range includes the [Azure-owned public IP address space](https://www.microsoft.com/download/details.aspx?id=56519). | Both | No | No |
| **LogicApps** | Logic Apps. | Both | No | No |
| **LogicAppsManagement** | Management traffic for Logic Apps. | Inbound | No | No |
| **M365ManagementActivityApi** | The Office 365 Management Activity API provides information about various user, admin, system, and policy actions and events from Office 365 and Azure Active Directory activity logs. Customers and partners can use this information to create new or enhance existing operations, security, and compliance-monitoring solutions for the enterprise.<br/><br/>**Note**: This tag has a dependency on the **AzureActiveDirectory** tag. | Outbound | Yes | No |
vpn-gateway Openvpn Azure Ad Tenant Multi App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/openvpn-azure-ad-tenant-multi-app.md
In this step, you configure P2S Azure AD authentication for the virtual network
For **Azure Active Directory** values, use the following guidelines for **Tenant**, **Audience**, and **Issuer** values.
- * **Tenant**: https://login.microsoftonline.com/{TenantID}
+ * **Tenant**: `https://login.microsoftonline.com/{TenantID}`
    * **Audience ID**: Use the value that you created in the previous section that corresponds to **Application (client) ID**. Don't use the application ID for the "Azure VPN" Azure AD Enterprise App - use the application ID that you created and registered. If you use the application ID for the "Azure VPN" Azure AD Enterprise App instead, this will grant all users access to the VPN gateway (which would be the default way to set up access), instead of granting only the users that you assigned to the application that you created and registered.
- * **Issuer**: https://sts.window.net/{TenantID} For the Issuer value, make sure to include a trailing **/** at the end.
    * **Issuer**: `https://sts.windows.net/{TenantID}/`. Make sure to include a trailing **/** at the end of the **Issuer** value.
1. Once you finish configuring settings, click **Save** at the top of the page.
web-application-firewall Waf Front Door Policy Configure Bot Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/waf-front-door-policy-configure-bot-protection.md
Previously updated : 11/07/2022 Last updated : 11/10/2022 zone_pivot_groups: web-application-firewall-configuration
$frontDoorWafPolicy = Get-AzFrontDoorWafPolicy `
-Name 'WafPolicy' ```
-## Add the bot protecton rule set
+## Add the bot protection rule set
Use the [New-AzFrontDoorWafManagedRuleObject](/powershell/module/az.frontdoor/new-azfrontdoorwafmanagedruleobject) cmdlet to select the bot protection rule set, including the version of the rule set. Then, add the rule set to the WAF's configuration.
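A hedged sketch of those two steps, building on the `$frontDoorWafPolicy` variable retrieved earlier, might look like the following. The rule set name, version, and resource group name are assumptions; confirm the current values in the Front Door WAF documentation before using them.

```powershell
# Minimal sketch: select the bot protection managed rule set and apply it to an
# existing Front Door WAF policy. Rule set name/version are assumed values.
$botRuleSet = New-AzFrontDoorWafManagedRuleObject `
    -Type 'Microsoft_BotManagerRuleSet' `
    -Version '1.0'

Update-AzFrontDoorWafPolicy `
    -Name 'WafPolicy' `
    -ResourceGroupName '<resourceGroupName>' `
    -ManagedRule $botRuleSet
```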